\begin{document} \title{Conditional generation of an arbitrary superposition of coherent states} \date{\today} \author{Masahiro Takeoka} \author{Masahide Sasaki} \address{\affA}
\begin{abstract} We present a scheme to conditionally generate an arbitrary superposition of a pair of coherent states from a squeezed vacuum by means of a modified photon subtraction in which a coherent state ancilla and two on/off type detectors are used. We show that, even including realistic imperfections of the detectors, our scheme can generate a target state with a high fidelity. The amplitude of the generated states can be amplified by conditional homodyne detections. \end{abstract} \pacs{03.67.Hk, 42.50.Dv} \maketitle

\section{Introduction \label{sec:intro}}

Conditional quantum operation based on photon detection plays an important role in recent optical quantum information processing. In particular, in the continuous variable regime, it is the only tool available with current technology to generate non-Gaussian states from Gaussian ones. A typical example is the `photon subtraction' operation, in which a nonclassical input state is split by a highly transmissive beamsplitter (BS) and the reflected state is measured by a photon number resolving detector (PNRD). Selecting the events in which the detector observes photons, one obtains a non-Gaussian transformation from the input to the output quantum state. This type of conditional operation was formulated in \cite{Ban94}. Dakna {\it et al.} \cite{Dakna97} then showed that, applying it to a squeezed vacuum, one can generate a non-Gaussian state which is close to the superposition of coherent states with plus or minus phase \begin{equation} \label{eq:cat_state} |C_\pm(\alpha)\rangle = \frac{1}{\sqrt{\mathcal{N}_\pm}} \left( |\alpha\rangle \pm |-\alpha\rangle \right) , \end{equation} with very high fidelity, where $|\alpha\rangle$ is a coherent state with amplitude $\alpha$ and $\mathcal{N}_\pm$ is the normalization factor. Recently, such a state has been experimentally generated by single photon subtraction from pulsed \cite{Wenger04,Ourjoumtsev06} and CW \cite{N-Nielsen06,Wakui06} squeezed vacua. In these experiments, since the reflected beam contains a sufficiently small average number of photons ($\bar{n} \ll 1$), single photon detection was approximately realized with an avalanche photodiode (APD), which is often called an `on/off' detector since it only discriminates the presence of photons instead of resolving photon numbers. The progress of these experiments promises the realization of more complicated applications of photon subtraction proposed so far, including the improvement of quantum teleportation \cite{Opatrny00,Cochrane02,Olivares03} and entanglement-assisted coding \cite{Kitagawa05}, entanglement distillation \cite{Browne03}, loophole-free tests of Bell's inequalities \cite{Nha04,G-Patron04,G-Patron05}, and optical quantum computation in the quadrature basis \cite{Gottesman01,Menicucci06} or the superposed coherent state basis \cite{Ralph03,Lund05}. In the last application, $|C_\pm(\alpha)\rangle$ with appropriate $\alpha$ is required as an ancillary state. To prepare such ancillae, methods to conditionally amplify $\alpha$ with on/off detectors have been proposed \cite{Lund04,Jeong05}. Fiur\'{a}\v{s}ek {\it et al.} \cite{Fiurasek05} also recently showed that one can arbitrarily synthesize a single-mode quantum state up to the $N$-photon eigenstate by concatenating squeezing operations and $N$ single photon subtractions.
In this paper, we propose a method to conditionally generate a state in which two coherent states are superposed with an {\it arbitrary} ratio and phase, $c_+ |\alpha\rangle + c_- |-\alpha\rangle$. This is accomplished by a simple modification of the scheme proposed by Dakna {\it et al.} (DAOKW) \cite{Dakna97}. We first discuss an ideal scheme using two PNRDs and a qubit ancilla, which produces a superposition of the one- and two-photon subtracted states. We show that such a state approximates the target state $c_+ |\alpha\rangle + c_- |-\alpha\rangle$ fairly well. We next present a more practical scheme where the PNRDs and the qubit ancilla are replaced by on/off detectors and a coherent state ancilla. Even including practical imperfections of the detectors, it can generate the target state with a high fidelity. Our scheme should be compared with the one by Fiur\'{a}\v{s}ek {\it et al.} \cite{Fiurasek05}, which requires $N$ detectors to synthesize a state consisting of the number states up to $|N\rangle$. Ours, on the other hand, uses only two detectors to synthesize a fully continuous variable state while, in return, the class of states that can be generated is restricted. Finally, we show that our scheme is useful to simplify the setup of the conditional amplification of superpositions of coherent states originally proposed in \cite{Lund04,Jeong05}.

The paper is organized as follows. In Sec.~\ref{sec:PNRDscheme}, we discuss an ideal setup with PNRDs and a qubit ancilla and how the scheme's parameters are optimized to generate the desired superposed states. In Sec.~\ref{sec:on/offscheme}, a practical scheme using on/off detectors and a coherent state ancilla is described and its experimental feasibility is numerically examined. An application of our scheme to the conditional amplification of superposed coherent states is shown in Sec.~\ref{sec:amp}, and Sec.~\ref{sec:conclusion} concludes the paper.

\section{Generation of an arbitrary superposition of coherent states \label{sec:PNRDscheme}}

Figure~\ref{fig:SchemePNRD}(a) illustrates the DAOKW photon subtraction scheme \cite{Dakna97}. A squeezed vacuum with squeezing parameter $r$ is mixed with a vacuum by a highly transmissive BS and the reflected part of the state is detected by a PNRD. When the reflected part is projected onto the photon number eigenstate $|m\rangle$ ($m>0$), the state remaining in the transmitted mode is reduced to the $m$-photon subtracted squeezed vacuum, which can be described by a minus- or plus-superposition of two distinct states as \begin{equation} \label{eq:Dakna_decomposition} |\Psi_m\rangle = A (|\Psi_m^{(+)}\rangle + (-1)^m |\Psi_m^{(-)}\rangle) , \end{equation} where $A$ is the normalization factor \cite{Comment1}. It was shown that, with an appropriate input squeezed vacuum, the states $|\Psi_m^{(\pm)}\rangle$ are very close to the coherent states $|\pm\alpha_m\rangle$ and thus the states $|\Psi_m\rangle$ are also very close to a superposition of coherent states.

Let us extend the above scheme as illustrated in Fig.~\ref{fig:SchemePNRD}(b). Let the upper BS have the power transmittance $T \approx 1$ and the lower one be a balanced BS. The reflected part of the state is mixed with the auxiliary state $b_0 |0\rangle + b_1 |1\rangle$ and then each output port is measured by a PNRD.
After some calculations, one finds that if the measurement outcome of the two detectors is (2, 0) or (0, 2), the reflected part is effectively projected onto $\mp b_1^*/\sqrt{2} |1\rangle + b_0^*/\sqrt{2} |2\rangle$ and the transmitted state conditioned on either of these outcomes has the form \begin{equation} \label{eq:proj_ideal} |\Psi_{\rm out}\rangle = a_1 |\Psi_1\rangle + a_2 |\Psi_2\rangle , \end{equation} where $a_1$ and $a_2$ are functions of $b_0$, $b_1$, and $T$. Since $|\Psi_m\rangle$ can be regarded as a superposition of the form specified in Eq.~(\ref{eq:cat_state}), the state $|\Psi_{\rm out}\rangle$ is also expected to be a superposition of $|\pm\alpha\rangle$ whose ratio and phase are controlled by choosing the ancilla parameters $b_0$ and $b_1$ appropriately.
\begin{figure} \caption{\label{fig:SchemePNRD} (a) The DAOKW photon subtraction scheme and (b) its extension with a qubit ancilla and two PNRDs.} \end{figure}
\begin{figure} \caption{\label{fig:Fidelity} Fidelities as functions of $\alpha$: (a) $|\langle\alpha|\phi_+\rangle|^2$, (b) $|\langle\Psi_1|C_-(\alpha)\rangle|^2$, and (c) $|\langle\Psi_2|C_+(\alpha)\rangle|^2$.} \end{figure}
Now let us examine the state in Eq.~(\ref{eq:proj_ideal}) more carefully. To show that $|\Psi_{\rm out}\rangle$ is a superposition of two (classically) macroscopically distinct states, one has to find the decompositions \begin{eqnarray} \label{eq:decompose1} |\Psi_1\rangle & = & \frac{1}{2c_1} \left( |\phi_+\rangle - |\phi_-\rangle \right) , \\ \label{eq:decompose2} |\Psi_2\rangle & = & \frac{1}{2c_2} \left( |\phi_+\rangle + |\phi_-\rangle \right) , \end{eqnarray} in which $|\phi_{\pm}\rangle$ are close enough to the coherent states $|\pm\alpha\rangle$. Here $c_1$ and $c_2$ are the normalization factors satisfying $|\phi_\pm\rangle = c_2 |\Psi_2\rangle \pm c_1 |\Psi_1\rangle$. Note that the decomposition described in Eq.~(\ref{eq:Dakna_decomposition}) \cite{Dakna97} is not optimal for our purpose since $|\Psi_1\rangle$ and $|\Psi_2\rangle$ do not share common decomposed components. The optimal $|\phi_\pm\rangle$ maximizing the fidelity $|\langle\phi_\pm|\pm\alpha\rangle|^2$ can be derived from the exact expression of $|\Psi_m\rangle$ \cite{Comment1} and Eqs.~(\ref{eq:decompose1}) and (\ref{eq:decompose2}) as \begin{eqnarray} \label{eq:c1} c_1 & = & \sqrt{\frac{3\lambda T}{(1+\lambda T)(1+2\lambda T)}} , \\ \label{eq:c2} c_2 & = & \sqrt{\frac{1+2\lambda^2 T^2}{(1+\lambda T)(1+2\lambda T)}} , \end{eqnarray} where $\lambda = \tanh r$ is the squeezing parameter and the amplitude of the corresponding coherent states is given by \begin{equation} \label{eq:coherent_state} |\pm\alpha\rangle = \left| \pm\sqrt{\frac{3\lambda T}{1-\lambda^2 T^2}} \right\rangle . \end{equation} Then we have the quasi-coherent states \begin{eqnarray} \label{eq:phi_pm} |\phi_\pm\rangle & = & \frac{(1-\lambda^2 T^2)^{3/4}}{2\sqrt{(1+\lambda T)(1+ 2\lambda T)}} \sum_{n=0}^\infty \frac{(2n+2)!}{(n+1)!} \left(\frac{\lambda T}{2}\right)^n \nonumber\\ & & \left( \frac{1-\lambda^2 T^2}{\sqrt{(2n)!}} |2n\rangle \pm \sqrt{\frac{3\lambda T}{(2n+1)!}} |2n+1\rangle \right), \nonumber\\ \end{eqnarray} and the fidelity between Eqs.~(\ref{eq:coherent_state}) and (\ref{eq:phi_pm}) is given by \begin{eqnarray} \label{eq:fidelity} F & = & |\langle\alpha|\phi_+\rangle|^2 \nonumber\\ & = & \sqrt{1- \lambda^2 T^2} (1+\lambda T)(1+2\lambda T) \exp\left[ -\frac{3\lambda T}{1+\lambda T} \right] , \nonumber\\ \end{eqnarray} which is plotted in Fig.~\ref{fig:Fidelity} as the black line (line (a)). For $\alpha < 1$, a fidelity of more than $0.99$ is achieved.
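As a quick numerical illustration of Eqs.~(\ref{eq:coherent_state}) and (\ref{eq:fidelity}), the following short Python sketch (our own illustration; the parameter values are arbitrary examples, not taken from the figures) evaluates the amplitude $\alpha$ and the fidelity $F$ for a few squeezing values:
\begin{verbatim}
# Evaluate the amplitude alpha and the fidelity F = |<alpha|phi_+>|^2
# of the quasi-coherent states as functions of lambda*T (lambda = tanh r).
import numpy as np

def amplitude(lam_T):
    return np.sqrt(3.0 * lam_T / (1.0 - lam_T**2))

def fidelity(lam_T):
    return (np.sqrt(1.0 - lam_T**2) * (1.0 + lam_T) * (1.0 + 2.0 * lam_T)
            * np.exp(-3.0 * lam_T / (1.0 + lam_T)))

for r in [0.1, 0.2, 0.3]:
    lam_T = np.tanh(r) * 0.999      # highly transmissive BS
    print(r, amplitude(lam_T), fidelity(lam_T))
# The printed values stay in the regime alpha < 1 and F > 0.99
# quoted in the text.
\end{verbatim}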
The validity of this optimization is also confirmed by looking at the fidelities $|\langle\Psi_1|C_-(\alpha)\rangle|^2$ and $|\langle\Psi_2|C_+(\alpha)\rangle|^2$ for the same $\alpha$. These are plotted in the same figure by the red (line (b)) and green (line (c)) lines, respectively.

\section{Practical setup with on/off detectors \label{sec:on/offscheme}}

Preparing PNRDs and a photon number qubit ancilla is still somewhat challenging with current technology. In this section, we present a modified and more practical scheme in which the PNRDs and the qubit ancillary state are replaced with on/off detectors and a coherent state, respectively. The modified scheme is depicted in Fig.~\ref{fig:SchemeOnOff}. The reflected state from the first BS with the transmittance $T$ is further split by the second, balanced BS. One beam is directly measured by an on/off detector (mode B) and the other is first shifted by the displacement operator $\hat{D}(\beta) = \exp[ \beta \hat{a}^\dagger - \beta^* \hat{a} ]$ and then measured by another on/off detector (mode C). It is well known that a displacement operation is realized by interfering the signal with an auxiliary coherent state $|\beta/\sqrt{1-T_D}\rangle$ at a BS with transmittance $T_D$. In the limit of $T_D \to 1$, this operation is exactly the same as $\hat{D}(\beta)$. The output state is conditionally selected only when both detectors click simultaneously. The photons detected in mode B always come from the squeezed vacuum while, in mode C, the photons from the squeezed vacuum interfere with the displacement field. This quantum interference and the on/off detection realize a projection onto a superposition of different photon number states. The positive operator-valued measure (POVM) for an on/off detector is described by $\{ \hat{\Pi}_{\rm off}, \hat{\Pi}_{\rm on} \}$, where $\hat{\Pi}_{\rm off} = |0\rangle\langle0|$, $\hat{\Pi}_{\rm on} = \hat{I} - \hat{\Pi}_{\rm off}$, and $\hat{I}$ is the identity operator. Similarly, when a displacement operation $\hat{D}(\beta)$ is placed before the detector, as in mode C, the total on/off POVM is expressed as \begin{eqnarray} \label{eq:off_POVM_with_displacement} \hat{\Pi}_{\rm off} (\beta) & = & \hat{D}^\dagger(\beta)|0\rangle\langle0| \hat{D}(\beta) = |-\beta\rangle\langle-\beta|, \\ \label{eq:on_POVM_with_displacement} \hat{\Pi}_{\rm on} (\beta) & = & \hat{D}^\dagger(\beta) (\hat{I} - |0\rangle\langle0|) \hat{D}(\beta) \nonumber\\ & = & \hat{I} - |-\beta\rangle\langle-\beta| . \end{eqnarray} The average photon number reflected into mode B from the initial squeezed vacuum is given by $(1-T)\sinh^2 r$. For moderate squeezing, this is sufficiently small to assume that the reflected beam in mode B contains at most one photon and to ignore the higher photon number subspace in the measurement process (e.g. $(1-T)\sinh^2 r \sim 0.005$ for $r=0.3$ and $T=0.95$). When $|\beta|^2$ in Eqs.~(\ref{eq:off_POVM_with_displacement}) and (\ref{eq:on_POVM_with_displacement}) is also sufficiently small such that one can approximate $|-\beta\rangle \approx |0\rangle - \beta|1\rangle$, $\hat{\Pi}_{\rm off} (\beta)$ in mode C acts as the projection onto $|0\rangle - \beta|1\rangle$, and $\hat{\Pi}_{\rm on} (\beta)$ as the projection onto the orthogonal superposition $\beta^* |0\rangle + |1\rangle$.
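The small-$|\beta|$ behavior of Eq.~(\ref{eq:off_POVM_with_displacement}) is easy to check numerically. A minimal sketch (our own illustration; the truncation dimension and the value of $\beta$ are arbitrary) builds $\hat{D}(\beta)$ in a truncated Fock basis and compares $|-\beta\rangle = \hat{D}^\dagger(\beta)|0\rangle$ with the normalized vector $|0\rangle - \beta|1\rangle$:
\begin{verbatim}
# Check that, for |beta|^2 << 1, the vector |-beta> onto which
# Pi_off(beta) projects is close to |0> - beta |1>.
import numpy as np
from scipy.linalg import expm

N = 30                                        # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), k=1)    # annihilation operator
beta = 0.1

D = expm(beta * a.conj().T - np.conj(beta) * a)   # displacement operator
vac = np.zeros(N); vac[0] = 1.0
minus_beta = D.conj().T @ vac                 # |-beta> = D(beta)^dag |0>

approx = vac.astype(complex); approx[1] = -beta   # |0> - beta |1>
approx /= np.linalg.norm(approx)
print(abs(np.vdot(minus_beta, approx))**2)    # overlap ~ 0.9999
\end{verbatim}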
As a consequence of this approximation, when both detectors click, the reflected part of the state is projected onto \begin{eqnarray} \label{eq:measurement_vector} && _C \langle 0| \hat{B}^\dagger_{1/2} |1\rangle_B \left( \beta^* |0\rangle_C + |1\rangle_C \right) \nonumber\\ && \propto \, _C \langle 0| \left\{ -\beta^* (|01\rangle - |10\rangle) - (|02\rangle - |20\rangle) \right\}_{BC} \nonumber\\ && = \beta^* |1\rangle_B + |2\rangle_B , \end{eqnarray} where the normalization factors and global phases are omitted and $\hat{B}_T = \exp[\theta (\hat{a}^\dagger \hat{b} - \hat{a} \hat{b}^\dagger)]$ is a BS operator with $\cos\theta=\sqrt{T}$.
\begin{figure} \caption{\label{fig:SchemeOnOff} The practical scheme using on/off detectors and a displacement operation (coherent state ancilla).} \end{figure}
On the other hand, the state after the first BS is described as \begin{eqnarray} \label{eq:BS1} \hat{B}_T \left( \hat{S}(r)|0\rangle_A \right) |0\rangle_B & = & \sqrt{\mathcal{P}_0} |\Psi_0\rangle_A |0\rangle_B + \sqrt{\mathcal{P}_1} |\Psi_1\rangle_A |1\rangle_B \nonumber\\ && + \sqrt{\mathcal{P}_2} |\Psi_2\rangle_A |2\rangle_B + \cdots \end{eqnarray} where $\hat{S}(r) = \exp[ (r/2) (\hat{a}^2-\hat{a}^{\dagger\,2})]$ is the squeezing operator and $\mathcal{P}_m$ is the probability that $m$ photons in total are found in modes $B$ and $C$ \cite{Dakna97}, \begin{eqnarray} \label{eq:P_m} \mathcal{P}_m & = & \sqrt{\frac{1-\lambda^2}{1-\lambda^2 T^2}} \left[ \frac{\lambda^2 T^2 (1-T)}{T(1-\lambda^2 T^2)} \right]^m \nonumber\\ && \times \sum_{k=0}^{[m/2]} \frac{m!}{(m-2k)!(k!)^2(2\lambda T)^{2k}} . \end{eqnarray} From Eqs.~(\ref{eq:measurement_vector}) and (\ref{eq:BS1}), we obtain the conditional output for a simultaneous click as \begin{equation} \label{eq:cond_output} |\Psi_{\rm out}\rangle \propto \beta \sqrt{\mathcal{P}_1} |\Psi_1\rangle + \sqrt{\mathcal{P}_2} |\Psi_2\rangle . \end{equation} Consequently, for the generation of the superposition state $c_+ |\phi_+ \rangle + c_- |\phi_-\rangle$, the optimal displacement $\beta$ is derived from Eqs.~(\ref{eq:decompose1}$-$\ref{eq:c2}), (\ref{eq:P_m}), and (\ref{eq:cond_output}) as \begin{equation} \label{eq:displacement_beta} \beta = \frac{c_+ - c_-}{c_+ + c_-} \left( \frac{3\lambda (1-T)}{2 (1-\lambda^2 T^2)} \right)^{1/2} , \end{equation} which is valid under the condition $|\beta|^2 \ll 1$, i.e. \begin{equation} \label{eq:beta_condition} \left| \frac{c_+ - c_-}{c_+ + c_-} \right|^2 \ll \frac{2 (1-\lambda^2 T^2)}{3\lambda (1-T)} . \end{equation} Note that Eq.~(\ref{eq:displacement_beta}) remains almost optimal even for large $|\beta|$, although this condition breaks down when $c_+ + c_- \sim 0$, i.e., when one wants to generate $|\Psi_{\rm out}\rangle \sim |\Psi_1\rangle$. For large $|\beta|^2$, Eq.~(\ref{eq:on_POVM_with_displacement}) restricted to the subspace of up to one photon is given by \begin{equation} \label{eq:beta_large} \hat{\Pi}_{\rm on} (\beta) \to (1-e^{-|\beta|^2}) \hat{I} + e^{-|\beta|^2} (\beta^* |0\rangle + |1\rangle) (\beta \langle0| + \langle1|) . \end{equation} Although this makes the output the mixed state \begin{equation} \label{eq:mixed_out} \hat{\rho}_{\rm out} = (1-e^{-|\beta|^2}) |\Psi_1\rangle\langle\Psi_1| + e^{-|\beta|^2} |\Psi_{\rm out}\rangle\langle\Psi_{\rm out}| , \end{equation} this is clearly a negligible error since $|\langle\Psi_{\rm out}|\Psi_1\rangle|^2$ exponentially approaches unity.
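As a concrete example (our own numbers, not taken from the original), consider the target $c_+ = 1$, $c_- = i$ with $r = 0.3$ and $T = 0.95$. A minimal sketch evaluating Eq.~(\ref{eq:displacement_beta}) gives $\beta \approx -0.15\,i$ with $|\beta|^2 \approx 0.02$, comfortably inside the regime of Eq.~(\ref{eq:beta_condition}):
\begin{verbatim}
# Optimal displacement beta for a target c_+|phi_+> + c_-|phi_->
# (illustrative parameter values; lam = tanh r).
import numpy as np

c_plus, c_minus = 1.0, 1.0j
r, T = 0.3, 0.95
lam = np.tanh(r)

beta = ((c_plus - c_minus) / (c_plus + c_minus)
        * np.sqrt(3.0 * lam * (1.0 - T) / (2.0 * (1.0 - (lam * T)**2))))
print(beta, abs(beta)**2)        # ~ -0.15j and ~ 0.02 << 1
\end{verbatim}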
In the rest of this section, we numerically examine the conditional outputs under realistic conditions. In practice, there is always a finite probability to detect more than one photon at each detector. Moreover, the detectors themselves have finite imperfections. The POVM for an imperfect on/off detector with the displacement operation $\hat{D}(\alpha)$ is given by \begin{eqnarray} \label{eq:on/off_POVM_off} \hat{\Pi}_{\rm off}(\alpha,\eta,\nu) & = & e^{-\nu} \sum_{m=0}^\infty (1-\eta)^m \hat{D}^\dagger(\alpha) |m \rangle\langle m| \hat{D}(\alpha) , \nonumber\\ \\ \label{eq:on/off_POVM_on} \hat{\Pi}_{\rm on}(\alpha,\eta,\nu) & = & \hat{I} - \hat{\Pi}_{\rm off}(\alpha,\eta,\nu) , \end{eqnarray} where $\eta$ and $\nu$ are the quantum efficiency and dark count of the detector, respectively.
\begin{figure} \caption{\label{fig:WignerFunctions} Wigner functions of the conditional output states: (a) nearly ideal parameters and (b)-(d) realistic detectors for different $\{ c_+, c_- \}$.} \end{figure}
Such detector imperfections make the output an unwanted mixed state. To derive the photon subtracted states under these conditions, it is useful to describe the states and POVMs by their characteristic functions \cite{Kim05,Olivares05,Molmer06}. Since the input squeezed vacuum is a Gaussian state, its characteristic function can be described as \begin{equation} \label{eq:CFSV} \chi_{\rm SV} ({\bf \omega}) = \exp\left[ -\frac{1}{4} {\bf \omega}^T \Gamma_{\rm SV} {\bf \omega} \right] , \end{equation} where ${\bf \omega} = (u, v)^T$ is a two-dimensional vector and $\Gamma_{\rm SV}$ is the covariance matrix of the squeezed vacuum, \begin{equation} \label{eq:CovMxSV} \Gamma_{\rm SV} = \left[ \begin{array}{cc} e^{2r} & 0 \\ 0 & e^{-2r} \end{array} \right] . \end{equation} The mixing of a squeezed vacuum and a vacuum by a BS is described by the linear transformation \begin{equation} \label{eq:BS_transformation} \Gamma_{\rm SV} \oplus \Gamma_{\rm vac} \to S_{BS}^T (T) \left( \Gamma_{\rm SV} \oplus \Gamma_{\rm vac} \right) S_{BS} (T) , \end{equation} where $\Gamma_{\rm vac} = {\bf I}$ is the covariance matrix of the vacuum state and $S_{BS} (T)$ is the $4 \times 4$ matrix \begin{equation} \label{eq:BS_matrix} S_{BS} (T) = \left( \begin{array}{cc} \sqrt{T} \, {\bf I} & \sqrt{1-T} \, {\bf I} \\ -\sqrt{1-T} \, {\bf I} & \sqrt{T} \, {\bf I} \end{array} \right) . \end{equation} Then the covariance matrix after the two BSs in Fig.~\ref{fig:SchemeOnOff} is given by \begin{eqnarray} \label{eq:SV_after_2BS} \tilde{\Gamma} & = & (I \oplus S_{BS}^T (1/2)) (S_{BS}^T (T) \oplus I) (\Gamma_{\rm SV} \oplus \Gamma_{\rm vac} \oplus \Gamma_{\rm vac}) \nonumber\\ && \times (S_{BS} (T) \oplus I) (I \oplus S_{BS} (1/2)) . \end{eqnarray} Let the characteristic function corresponding to $\tilde{\Gamma}$ be $\chi_{\rm in} ({\bf \omega_A}, {\bf \omega_B}, {\bf \omega_C})$. The output characteristic function conditioned on the simultaneous click of both detectors is then given by \begin{eqnarray} \label{eq:cond_output_CF} \chi_{\rm out} ({\bf \omega_A}) & = & \int_{-\infty}^\infty \int_{-\infty}^\infty {\rm d}{\bf \omega_B} {\rm d}{\bf \omega_C} \, \chi_{\rm in} ({\bf \omega_A}, {\bf \omega_B}, {\bf \omega_C}) \nonumber\\ & & \times \chi_{\rm on} (-{\bf \omega_B}, -{\bf \omega_C}) , \end{eqnarray} where \begin{equation} \label{eq:CF_for_POVM} \chi_{\rm on} ({\bf \omega_B}, {\bf \omega_C}) = \chi_{\rm on } (0, {\bf \omega_B}) \chi_{\rm on } (-\beta^*, {\bf \omega_C}) , \end{equation} and $\chi_{\rm on } (\alpha, {\bf \omega})$ is the characteristic function of the POVM element defined in Eq.~(\ref{eq:on/off_POVM_on}).
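A minimal sketch of the covariance-matrix bookkeeping of Eqs.~(\ref{eq:CovMxSV})--(\ref{eq:SV_after_2BS}) (our own illustration; it only assembles $\tilde{\Gamma}$ and does not evaluate the conditional integral of Eq.~(\ref{eq:cond_output_CF})) is:
\begin{verbatim}
# Assemble the three-mode covariance matrix Gamma_tilde:
# squeezed vacuum in mode A, vacua in modes B and C,
# mixed by the two beamsplitters of the scheme.
import numpy as np

def bs(T):
    # 4x4 beamsplitter matrix S_BS(T)
    I2 = np.eye(2)
    return np.block([[np.sqrt(T) * I2, np.sqrt(1 - T) * I2],
                     [-np.sqrt(1 - T) * I2, np.sqrt(T) * I2]])

r, T = 0.3, 0.95
gamma_sv = np.diag([np.exp(2 * r), np.exp(-2 * r)])
gamma_in = np.block([[gamma_sv, np.zeros((2, 4))],
                     [np.zeros((4, 2)), np.eye(4)]])   # A + B + C

S1 = np.block([[bs(T), np.zeros((4, 2))],
               [np.zeros((2, 4)), np.eye(2)]])         # BS(T) on modes A, B
S2 = np.block([[np.eye(2), np.zeros((2, 4))],
               [np.zeros((4, 2)), bs(0.5)]])           # balanced BS on B, C

gamma_tilde = S2.T @ S1.T @ gamma_in @ S1 @ S2
print(gamma_tilde.round(3))
\end{verbatim}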
Finally, the Wigner function of the output is given by the Fourier transform of the characteristic function, \begin{equation} \label{eq:cond_output_Wigner} W_{\rm out} ({\bf z}) = \frac{1}{(2\pi)^2} \int_{-\infty}^\infty d{\bf \omega} \, \chi_{\rm out} ({\bf \omega}) \exp\left[-i{\bf \omega}^T {\bf z}\right] , \end{equation} where ${\bf z} = ( x,p )^T$. Figure~\ref{fig:WignerFunctions}(a) shows $W_{\rm out} ({\bf z})$ corresponding to the state $|\phi_+\rangle + i |\phi_-\rangle$ with nearly ideal parameters, $T=0.999$, unit quantum efficiency, and zero dark counts. The squeezing parameter $r=0.3$ corresponds to a coherent amplitude of $\alpha=0.97$. The fidelity between this output and $|\alpha\rangle + i |-\alpha\rangle$ is $F=0.993$. Note that this is the same as the state generated from a coherent state by strong Kerr nonlinear evolution \cite{Yurke86}. Figures~\ref{fig:WignerFunctions}(b)-(d) plot the Wigner functions for realistic detectors ($\eta=0.1$ and $\nu=10^{-7}$) and different $\{ c_+, c_- \}$. Even with such imperfect detectors, high fidelities ($> 0.95$) can be achieved.

\section{Amplification of the superposed states via conditional homodyne detection \label{sec:amp}}

The fidelity between the state generated by our scheme and the ideal superposition of coherent states starts to decrease rapidly for $\alpha >1$. To realize a truly macroscopic superposition, or to apply these states to the coherent-state-superposition based quantum computation scheme \cite{Ralph03}, the superposed states are required to have larger amplitudes. One approach to the production of such a state is to introduce PNRDs into our scheme (Fig.~\ref{fig:SchemeOnOff}) to generate a superposition of $|\Psi_m\rangle$ and $|\Psi_{m+1}\rangle$ for large $m$. Another approach is to apply the conditional amplification process proposed in \cite{Lund04}, which does not require PNRDs. It was shown that if one can prepare two inputs $|\alpha\rangle + e^{i \varphi} |-\alpha\rangle$ and $|\beta\rangle + e^{i \phi} |-\beta\rangle$, they can be conditionally transformed into the state $|\gamma\rangle + e^{i(\varphi+\phi)} |-\gamma\rangle$, where $\gamma = \sqrt{\alpha^2 + \beta^2}$, by using BSs, an auxiliary coherent state, and two on/off detectors. This could be used to amplify the initial state $\hat{S}(r)|1\rangle$, which well approximates $|C_-(\alpha)\rangle$ for $|\alpha|^2 \le 1$, and the scalability of the repetitive amplification process was discussed in detail in \cite{Jeong05}.
\begin{figure} \caption{\label{fig:CatAmpHD} (a) Schematic of the conditional amplification via homodyne detection and (b) concatenation of three amplification steps.} \end{figure}
Applying our scheme to theirs allows us to generate $|\gamma\rangle + e^{i \varphi}|-\gamma\rangle$ with large $\gamma$ and arbitrary $\varphi$. Moreover, we will show in this section that the two on/off detectors used in the scheme of Refs.~\cite{Lund04,Jeong05} can simply be replaced by a homodyne detector, although the latter is a Gaussian operation. Note that it is not prohibited to transform non-Gaussian states into other non-Gaussian states by Gaussian operations alone. The conditional homodyne detection technique has been applied to purify coherent state superpositions \cite{Suzuki06} and squeezed states suffering from non-Gaussian noise \cite{Heersink06}. The schematic of the conditional amplification via homodyne detection is shown in Fig.~\ref{fig:CatAmpHD}(a). To explain how it works, let us consider a simple example where we have $|C_+(\alpha)\rangle$ and $|C_-(\alpha)\rangle$ as the two input states.
These states are combined by a balanced BS as \begin{eqnarray} \label{eq:interference_two_cats} && (|\alpha\rangle + |-\alpha\rangle)(|\alpha\rangle - |-\alpha\rangle) \nonumber\\ && \stackrel{\rm BS}{\to} (|\sqrt{2}\alpha\rangle - |-\sqrt{2}\alpha\rangle) |0\rangle + |0\rangle (|\sqrt{2}\alpha\rangle - |-\sqrt{2}\alpha\rangle) , \nonumber\\ \end{eqnarray} where normalization factors are omitted for simplicity. Then one performs homodyne detection on one of the two modes. An ideal homodyne detection corresponds to a projection onto the quadrature eigenstate $|x\rangle$, and for the measurement outcome $x$, one obtains the conditional output state \begin{eqnarray} \label{eq:x_conditioned_output} && \langle x|(|\sqrt{2}\alpha\rangle - |-\sqrt{2}\alpha\rangle) |0\rangle + \langle x|0\rangle (|\sqrt{2}\alpha\rangle - |-\sqrt{2}\alpha\rangle) \nonumber\\ && \propto \left( e^{-(x-2\alpha)^2/2} - e^{-(x+2\alpha)^2/2} \right) |0\rangle \nonumber\\ && \quad + e^{-x^2/2} \left( |\sqrt{2}\alpha\rangle - |-\sqrt{2}\alpha\rangle \right) . \end{eqnarray} Here, conditioned on the outcome $x=0$, the first term vanishes and one obtains the amplified state $|\sqrt{2}\alpha\rangle - |-\sqrt{2}\alpha\rangle$. More generally, the condition for the two inputs $|\alpha\rangle + e^{i\varphi_{1,2}} |-\alpha\rangle$ to be amplified is $\varphi_1 + \varphi_2 = \pi$. The amplified state has the phase $\varphi_1 - \varphi_2$, which implies that one can choose an arbitrary $\varphi$ at the output. For further amplification, the process should be concatenated by carefully preparing the initial input states. Figure~\ref{fig:CatAmpHD}(b) is the schematic of the process generating $|2\sqrt{2}\alpha\rangle - |-2\sqrt{2}\alpha\rangle$ by concatenating three amplification steps. The numbers represent the phase $\varphi$ of each state. The same rule can be applied in a straightforward way to the iterative generation of a superposition of large coherent states with arbitrary $\varphi$. Homodyne detection is a well-established technique and very high quantum efficiency ($\eta \ge 0.99$) has been achieved with current technology. It reduces the experimental complexity, enhances the practical success probability compared to the use of two imperfect on/off detectors, and thus increases the overall feasibility of an experimental demonstration.

\section{Conclusions\label{sec:conclusion}}

In this paper, a novel scheme for the conditional generation of a coherent state superposition with arbitrary ratio and phase has been proposed. The scheme uses a squeezed vacuum, beamsplitters, a coherent state ancilla, and two on/off detectors. We first showed that $c_+ |\alpha\rangle + c_- |-\alpha\rangle$ for arbitrary $\{ c_+,c_- \}$ can be approximated by an appropriate superposition of single- and two-photon subtracted squeezed vacua with very high fidelity ($F>0.99$). Such a superposition of photon subtracted states is conditionally generated by using ideal photon number resolving detectors and a qubit ancilla $b_0|0\rangle + b_1|1\rangle$. We have shown that this ideal scheme can also be realized by a more practical scheme in which a displacement operation (coherent state ancilla) and on/off detectors are used. Even including realistic dark counts and low quantum efficiency (e.g. $\nu=10^{-7}$ and $\eta=0.1$), fidelities of more than $0.95$ could be achieved with a highly transmissive beamsplitter ($T=0.95$) and a squeezing of $r=0.3$ ($\sim 2.6$ dB), which corresponds to an amplitude of $\alpha = 0.95$ for the generated state.
All these parameters are reasonably comparable with the recent experiments on single-photon subtraction \cite{Ourjoumtsev06,N-Nielsen06,Wakui06}. We have also shown that our scheme is useful to simplify the setup of the conditional amplification of the coherent state superpositions originally proposed in \cite{Lund04,Jeong05}. Our simplified version is quite feasible to demonstrate with current experimental techniques. Our scheme would also be useful to save the amount of required resources in other quantum information applications, including superposed coherent state based quantum computing \cite{Ralph03,Lund05}. \begin{references} \bibitem{Ban94} M.~Ban, Phys.\ Rev.\ A\,\textbf{49}, 5078 (1994). \bibitem{Dakna97} M.~Dakna, T.~Anhut, T.~Opatrn\'{y}, L.~Kn\"{o}ll, and D.-G.~Welsch, Phys.\ Rev.\ A\,\textbf{55}, 3184 (1997). \bibitem{Wenger04} J.~Wenger, R.~Tualle-Brouri, and P.~Grangier, Phys.\ Rev.\ Lett.\ \textbf{92}, 153601 (2004). \bibitem{Ourjoumtsev06} A.~Ourjoumtsev, R.~Tualle-Brouri, J.~Laurat, and P.~Grangier, Science {\bf 312}, 83 (2006). \bibitem{N-Nielsen06} J.~S.~Neergaard-Nielsen, B.~Melholt~Nielsen, C.~Hettich, K.~M\o lmer, E.~S.~Polzik, Phys.\ Rev.\ Lett.\ \textbf{97}, 083604 (2006). \bibitem{Wakui06} K.~Wakui, H.~Takahashi, A.~Furusawa, and M.~Sasaki, quant-ph/0609153. \bibitem{Opatrny00} T.~Opatrn\'{y}, G.~Kurizki, and D.-G.~Welsch, Phys.\ Rev.\ A\,\textbf{61}, 032302 (2000). \bibitem{Cochrane02} P.~T.~Cochrane, T.~C.~Ralph, and G.~J.~Milburn, Phys.\ Rev.\ A\,\textbf{65}, 062306 (2002). \bibitem{Olivares03} S.~Olivares, M.~G.~Paris, and R.~Bonifacio, Phys.\ Rev.\ A\,\textbf{67}, 032314 (2003). \bibitem{Kitagawa05} A.~Kitagawa, M.~Takeoka, K.~Wakui, and M.~Sasaki, Phys.\ Rev.\ A\,\textbf{72}, 022334 (2005). \bibitem{Browne03} D.~E.~Browne, J.~Eisert, S.~Scheel, and M.~B.~Plenio, Phys.\ Rev.\ A\,\textbf{67}, 062320 (2003). \bibitem{Nha04} H.~Nha and H.~J.~Carmichael, Phys.\ Rev.\ Lett.\ \textbf{93}, 020401 (2004). \bibitem{G-Patron04} R.~Garc\'{i}a-Patr\'{o}n, J.~Fiur\'{a}\v{s}ek, N.~J.~Cerf, J.~Wenger, R.~Tualle-Brouri, and Ph.~Grangier, Phys.\ Rev.\ Lett.\ \textbf{93}, 130409 (2004). \bibitem{G-Patron05} R.~Garc\'{i}a-Patr\'{o}n, J.~Fiur\'{a}\v{s}ek, and N.~J.~Cerf, Phys.\ Rev.\ A\,\textbf{71}, 022105 (2005). \bibitem{Gottesman01} D.~Gottesman, A.~Kitaev, and J.~Preskill, Phys.\ Rev.\ A\,\textbf{64}, 012310 (2001). \bibitem{Menicucci06} N.~C.~Menicucci, P.~van Loock, M.~Gu, C.~Weedbrook, T.~C.~Ralph, M.~A.~Nielsen, Phys.\ Rev.\ Lett.\ \textbf{97}, 110501 (2006). \bibitem{Ralph03} T.~C.~Ralph, A.~Gilchrist, G.~J.~Milburn, W.~J.~Munro, and S.~Glancy, Phys.\ Rev.\ A\,\textbf{68}, 042319 (2003). \bibitem{Lund05} A.~P.~Lund and T.~C.~Ralph, Phys.\ Rev.\ A\,\textbf{71}, 032305 (2005). \bibitem{Lund04} A.~P.~Lund, H.~Jeong, T.~C.~Ralph, and M.~S.~Kim, Phys.\ Rev.\ A\,\textbf{70} 020101(R) (2004). \bibitem{Jeong05} H.~Jeong, A.~P.~Lund, and T.~C.~Ralph, Phys.\ Rev.\ A\,\textbf{72} 013801 (2005). \bibitem{Fiurasek05} J.~Fiur\'{a}\v{s}ek, R.~Garc\'{i}a-Patr\'{o}n, and N.~J.~Cerf, Phys.\ Rev.\ A\,\textbf{72}, 033822 (2005). \bibitem{Comment1} Note that the definition of $|\Psi_m^{(-)}\rangle$ in our paper is slightly different from the Dakna's original one \cite{Dakna97} by a factor of $(-1)^m$. 
Here we also refer the photon number expansion of $|\Psi_m\rangle$ as \cite{Dakna97} $$ |\Psi_m\rangle = \frac{1}{\sqrt{\mathcal{N}_m}} \sum_{n=0}^\infty a_{m,n} |n\rangle , $$ where \begin{eqnarray} \label{eq:C} a_{m,n} & = & \frac{(m+n)!}{\Gamma[(m+n)/2 + 1] \sqrt{n!}} , \nonumber\\ && \times \frac{1+(-1)^{m+n}}{2} \left( \frac{\lambda T}{2} \right)^{(m+n)/2} , \nonumber \end{eqnarray} and \begin{eqnarray} \label{eq:N_m} \mathcal{N}_m & = & \frac{1}{\sqrt{1-\lambda^2 T^2}} \left[ \frac{\lambda^2 T^2}{1-\lambda^2 T^2} \right]^m \nonumber\\ && \times \sum_{k=0}^{[m/2]} \frac{(m!)^2}{(m-2k)!(k!)^2(2\lambda T)^{2k}} . \nonumber \end{eqnarray} \bibitem{Kim05} M.~S.~Kim, E.~Park, P.~L.~Knight, and H.~Jeong, Phys.\ Rev.\ A\,\textbf{71} 043805 (2005). \bibitem{Olivares05} S.~Olivares and M.~G.~A.~Paris, J.\ Opt.\ B\,\textbf{7}, S616 (2005). \bibitem{Molmer06} K.~M\o lmer, Phys.\ Rev.\ A\,\textbf{73} 063804 (2006). \bibitem{Yurke86} B.~Yurke and D.~Stoler, Phys.\ Rev.\ Lett.\ \textbf{57}, 13 (1986). \bibitem{Suzuki06} S.~Suzuki, M.~Takeoka, M.~Sasaki, U.~L.~Andersen, and F.~Kannari Phys.\ Rev.\ A\,\textbf{73} 042304 (2006). \bibitem{Heersink06} J.~Heersink, Ch.~Marquardt, R.~Dong, R.~Filip, S.~Lorenz, G.~Leuchs, and U.~L.~Andersen, Phys.\ Rev.\ Lett.\ \textbf{96}, 253601 (2006). \end{references} \end{document}
\begin{document} \baselineskip=18pt
\begin{abstract} Let $G$ be a connected reductive group over an algebraic closure $\bar{\mathbb{F}}_q$ of a finite field ${\mathbb{F}_q}$. In this paper it is proved that the infinite dimensional Steinberg module of $kG$ defined by N. Xi in 2014 is irreducible when $k$ is a field of positive characteristic and char$\,k\ne$ char$\,\mathbb{F}_q$. For certain special linear groups, we show that the Steinberg modules of the groups are not quasi-finite with respect to some natural quasi-finite sequences of the groups. \end{abstract} \maketitle

N. Xi studied some induced representations of infinite reductive groups with Frobenius maps (see [X]). In particular, he defined Steinberg modules for arbitrary reductive groups by extending Steinberg's construction of Steinberg modules for finite reductive groups. These Steinberg modules are infinite dimensional when the reductive groups are infinite. Let $G$ be a connected reductive group over the algebraic closure $\bar{\mathbb{F}}_q$ of a finite field ${\mathbb{F}_q}$ and let $k$ be a field. Xi proved that the Steinberg module of the group algebra $kG$ of $G$ over $k$ is irreducible if $k$ is the field of complex numbers or $k=\bar{\mathbb{F}}_q$ (in fact, his proof works whenever char$\,k=0$ or char$\,k=$ char$\,\mathbb{F}_q$). In this paper we prove that if $k$ has positive characteristic and char$\,k\ne$ char$\,\mathbb{F}_q$, then the Steinberg module of $kG$ remains irreducible (see Theorem 2.2). The reductive group $G$ is quasi-finite in the sense of [X, 1.8]. For quasi-finite groups Xi introduced the concept of a quasi-finite irreducible module and raised the question whether an irreducible $kG$-module is always quasi-finite. For certain special linear groups, we show that the Steinberg modules of the groups are not quasi-finite with respect to some natural quasi-finite sequences of the groups (see Proposition 3.2).

\section{Preliminaries}

In this section we recall some basic facts about reductive groups defined over a finite field; for details we refer to [C].

\subsection{} Let $G$ be a connected reductive group over an algebraic closure $\bar{\mathbb{F}}_q$ of a finite field $\mathbb{F}_q$ with $q$ elements, where $q$ is a power of a prime $p$. Assume that $G$ is defined over $\mathbb{F}_q$. Then $G$ has a Borel subgroup $B$ defined over $\mathbb{F}_q$ and $B$ contains a maximal torus $T$ defined over $\mathbb{F}_q$. The unipotent radical $U$ of $B$ is defined over $\mathbb{F}_q$. For any power $q^a$ of $q$, we denote by $G_{q^a}$ the $\mathbb{F}_{q^a}$-points of $G$ and we identify $G$ with its $\bar{\mathbb{F}}_q$-points. Then we have $G=\bigcup_{a=1}^{\infty}G_{q^a}$. Similarly we define $B_{q^a}$, $T_{q^a}$ and $U_{q^a}$.

\subsection{} Let $N=N_G(T)$ be the normalizer of $T$ in $G$. Then $B$ and $N$ form a $BN$-pair of $G$. Let $R\subset\text{Hom}(T,\bar{\mathbb{F}}_q^*)$ be the root system of $G$ and $R^+$ the set of positive roots determined by $B$. For $\alpha\in R^+$, let $U_\alpha$ be the corresponding root subgroup of $U$.
For any simple root $\alpha$ in $R$, let $s_\alpha$ be the corresponding simple reflection in the Weyl group $W=N/T$. For $w\in W$, $U$ has two subgroups $U_w$ and $U'_w$ such that $U=U'_wU_w$ and $wU'_ww^{-1}\subseteq U$. If $w=s_\alpha$ for some simple root $\alpha$, then $U_w=U_\alpha$ and we simply write $U'_\alpha$ for $U'_w$, which equals $\prod_{\beta\in R^+-\{\alpha\}}U_\beta$. In general, let $w=s_{\alpha_i}\cdots s_{\alpha_2}s_{\alpha_1}$ be a reduced expression of $w$. Set $\beta_{j}=s_{\alpha_1}s_{\alpha_2}\cdots s_{\alpha_{j-1}}(\alpha_j)$ for $j=1,...,i$. Then

\noindent (a) $U_w=U_{\beta_i}\cdots U_{\beta_2}U_{\beta_1}$ and $U'_w=\displaystyle{\prod_{\stackrel{\scriptstyle \beta\in R^+} {w(\beta)\in R^+}}U_\beta}$.

\noindent (b) If $\alpha$ and $\beta$ are positive roots and $w(\alpha)=\beta$, then $n_wU_\alpha n_w^{-1}=U_\beta$, where $n_w$ is a representative of $w$ in $N$.

Now assume that $w_0=s_{\alpha_r}\cdots s_{\alpha_2}s_{\alpha_1}$ is a reduced expression of the longest element of $W$. Set $\beta_{j}=s_{\alpha_1}s_{\alpha_2}\cdots s_{\alpha_{j-1}}(\alpha_j)$ for $j=1,...,r$. Then

\noindent (c) For any $1\le i\le j\le r$, $U_{\beta_j}\cdots U_{\beta_{i+1}}U_{\beta_i}$ is a subgroup of $U$ and $U_{\beta_j}\cdots U_{\beta_{i+1}}U_{\beta_i}=U_{\beta_i}U_{\beta_j}\cdots U_{\beta_{i+1}}.$

The root subgroups $U_\alpha,\ \alpha\in R^+$, are also defined over $\mathbb{F}_q$. For each positive root $\alpha$, we fix an isomorphism $\varepsilon_\alpha:\bar{\mathbb{F}}_q\to U_\alpha$ such that $t\varepsilon_\alpha(c)t^{-1}=\varepsilon_\alpha(\alpha(t)c)$. Set $U_{\alpha,q^a}=\varepsilon_\alpha(\mathbb{F}_{q^a})$.

\section{Infinite dimensional Steinberg modules}

In this section the main result (Theorem 2.2) of this paper is proved, which says that certain infinite dimensional Steinberg modules are irreducible.

\subsection{} Let $k$ be a field. For any one dimensional representation $\theta$ of $T$ over $k$, let $k_\theta$ be the corresponding $kT$-module. We define the $k G$-module $M(\theta)=k G\otimes_{k B}k_\theta$. When $\theta$ is the trivial representation of $T$ over $k$, we write $M(tr)$ for $M(\theta)$ and let $1_{tr}$ be a nonzero element in $k_\theta$. We shall also write $x1_{tr}$ instead of $x\otimes 1_{tr}$ for $x\in kG$. For $w\in W=N/T$, the element $w1_{tr}$ is defined to be $n_w1_{tr}$, where $n_w$ is a representative of $w$ in $N$. This is well defined since $T$ acts on $k_\theta$ trivially. Let $\eta=\sum_{w\in W} (-1)^{l(w)}w1_{tr}\in M(tr),$ where $l:W\to \mathbb{N}$ is the length function of $W$. Then $kU\eta$ is a submodule of $M(tr)$ and is called the Steinberg module of $G$, denoted by St; see [X, Prop. 2.3]. Xi proved that St is irreducible if $k$ is the field of complex numbers or $k=\bar{\mathbb{F}}_q$ (see [X, Theorem 3.2]). His argument in fact works for proving that St is irreducible whenever char$\,k=0$ or char$\,k=$ char$\,\mathbb{F}_q$. The main result of this paper is the following.

\subsection{Theorem.} Assume that $k$ is a field of positive characteristic and char$\,k\ne$ char$\,\mathbb{F}_q$. Then the Steinberg module St is irreducible.

Combining this with Xi's result we obtain the following corollary.

\subsection{Corollary.} The Steinberg module St of $kG$ is irreducible for any field $k$.

\subsection{} We need some preparation to prove the theorem.
Let $s_{\alpha_r}s_{\alpha_{r-1}}\cdots s_{\alpha_1}$ be a reduced expression of the longest element $w_0$ of $W$. Set $\beta_i=s_{\alpha_1}\cdots s_{\alpha_{i-1}}(\alpha_i)$. Then $R^+$ consists of $\beta_r,\beta_{r-1},...,\beta_1$, and $U=U_{\beta_r}U_{\beta_{r-1}}\cdots U_{\beta_1}$. Let $n_i$ be a representative in $N=N_G(T)$ of $s_{\alpha_i}$ and set $n=n_rn_{r-1}\cdots n_1$. Note that the elements $z\eta,\ z\in U$, form a basis of St.

\subsection{Lemma.} Let $u\in U_{q^a}$. If $u$ is not the neutral element $e$ of $U$, then the sum of all coefficients of $nu\eta$ in terms of the basis $z\eta,\ z\in U$, is 0.

Proof. Let $\alpha_i$ and $\beta_i$ be as in subsection 2.4. Then $u=u_{r}u_{r-1}\cdots u_1,$ $ u_m\in U_{\beta_m}$. Assume that $u_1=u_2=\cdots =u_{i-1}=e$ but $u_i\ne e$, where $e$ is the neutral element of $G$. We use induction on $i$ to prove the lemma. Note that $n_i\eta=-\eta$ for $i=1,2,...,r$.

Assume that $i=r$. Then $nu\eta=(-1)^{r-1}n_ru'_r\eta$, where $u'_r=n_{r-1}\cdots n_1u_rn_1^{-1}\cdots n_{r-1}^{-1}\in U_{\alpha_r}$. According to the proof of [S, Lemma 1] (see also the proof of [X, Proposition 2.3]), there exists $x_r\in U_{\alpha_r}$ such that $n_ru'_r\eta=(x_r-1)\eta$. So the lemma is true in this case.

Now assume that the lemma is true for $r,r-1,...,i+1$; we show that it is also true for $i$. In this case we have $nu\eta=(-1)^{i-1}n_r\cdots n_i u'_{r}u'_{r-1}\cdots u'_i\eta$, where $u'_j=n_{i-1}\cdots n_1u_jn_1^{-1}\cdots n_{i-1}^{-1}\in n_{i-1}\cdots n_1U_{\beta_j}n_1^{-1}\cdots n_{i-1}^{-1}= U_{\gamma_j}$, with $\gamma_j=s_{\alpha_i}\cdots s_{\alpha_{j-1}}(\alpha_j)$, $j=i,i+1,...,r$. Then $u'_i\ne e$. Note that $\gamma_i=\alpha_i$. According to the proof of [S, Lemma 1], there exists $x_i\in U_{\gamma_i}$ such that $n_iu'_i\eta=(x_i-1)\eta$. If $u'_r\cdots u'_{i+1}=e$, we are done. Now assume that $u'=u'_r\cdots u'_{i+1}\ne e$. Since both $M_i=U_{\gamma_r}U_{\gamma_{r-1}}\cdots U_{\gamma_{i}}$ and $M_{i+1}=U_{\gamma_r}U_{\gamma_{r-1}}\cdots U_{\gamma_{i+1}}$ are subgroups of $U$ and $M_{i+1}x_i= x_iM_{i+1}$, we see that $u'x_i=x_iu''$ for some $u''\in M_{i+1}$ and $u''\ne e$. Thus $$(-1)^{i-1}nu\eta=n_r\cdots n_{i+1}u'(x_i-1)\eta=n_r\cdots n_{i+1}x_iu''\eta-n_r\cdots n_{i+1}u'\eta.$$ By the induction hypothesis, we know that the sum of the coefficients of $n_r\cdots n_{i+1}u''\eta$ and the sum of the coefficients of $n_r\cdots n_{i+1}u'\eta$ are 0. Since $n_r\cdots n_{i+1}x_in_{i+1}^{-1}\cdots n_r^{-1}\in U$, we see that the sum of the coefficients of $nu\eta=(-1)^{i-1}n_r\cdots n_{i+1}u'(x_i-1)\eta$ is 0. The lemma is proved.

\subsection{Lemma.} Let $V$ be a nonzero submodule of St. Then there exists an integer $a$ such that $\sum_{x\in U_{q^a}}x\eta$ is in $V$.

Proof. Let $v$ be a nonzero element in $V$. Then $v\in kU_{q^a}\eta$ for some integer $a$. Let $v=\sum_{y\in U_{q^a}}a_yy\eta.$ We may assume that $a_e\ne 0$; otherwise choose $y\in U_{q^a}$ such that $a_y$ is nonzero and replace $v$ by $y^{-1}v$. By Lemma 2.5 we see that the sum $A$ of all the coefficients of $nv$ in terms of the basis $z\eta,$ $z\in U$, is $(-1)^{l(w_0)}a_e\ne 0$. Thus $\sum_{x\in U_{q^a}}xnv=A\sum_{x\in U_{q^a}} x\eta$. The lemma is proved.

\subsection{} Now we can prove the theorem. We show that St$\,=kGv$ for any nonzero element $v$ in St. Let $V=kGv$. Let $\alpha_i$ and $\beta_i$ be as in subsection 2.4.
For any positive integer $b$, set $X_{i,q^b}=U_{\beta_r,q^b}U_{\beta_{r-1},q^b}\cdots U_{\beta_i,q^b}$. Then $X_{i,q^b}$ is a subgroup of $U$ and $X_{i,q^b}=X_{i+1,q^b}U_{\beta_i,q^b}$. Clearly $X_{i,q^b}$ is a subgroup of $X_{i,q^{b'}}$ if $\mathbb{F}_{q^b}$ is a subfield of $\mathbb{F}_{q^{b'}}.$

We use induction on $i$ to show that there exists a positive integer $b_i$ such that the element $\sum_{x\in X_{i,q^{b_i}}}x\eta$ is in $V$. For $i=1$, this is true by Lemma 2.6. Now assume that $\sum_{x\in X_{i,q^{b_i}}}x\eta$ is in $V$; we show that $\sum_{x\in X_{i+1,q^{b_{i+1}}}}x\eta$ is in $V$ for some $b_{i+1}$.

Let $c_1,...,c_{q^{b_i}+1}$ be a complete set of representatives of the cosets of $\mathbb{F}^*_{q^{b_i}}$ in $\mathbb{F}^*_{q^{2b_i}}$. Choose $t_1,...,t_{q^{b_i}+1}\in T$ such that $\beta_i(t_j)=c_j$ for $j=1,...,q^{b_i}+1$. Note that $t^{-1}\eta=\eta$ for any $t\in T$. Thus $$\sum_{j=1}^{q^{b_i}+1}t_j\sum_{x\in U_{\beta_i,q^{b_i}}}x\eta=q^{b_i}\eta+\sum_{x\in U_{\beta_i,q^{2b_{i}}}}x\eta.$$ Since $X_{i,q^{b_i}}=X_{i+1,q^{b_i}}U_{\beta_i,q^{b_i}}$ and $\sum_{x\in X_{i,q^{b_i}}}x\eta$ is in $V$, we see that \begin{equation*} \begin{split} \xi=& \sum_{j=1}^{q^{b_i}+1}t_j\sum_{x\in X_{i,q^{b_i}}}x\eta\\ =&\sum_{j=1}^{q^{b_i}+1}t_j\sum_{y\in X_{i+1,q^{b_i}}}y\sum_{x\in U_{\beta_i,q^{b_i}}}x\eta\\ =&\sum_{y\in X_{i+1,q^{b_i}}}\sum_{j=1}^{q^{b_i}+1}t_jyt_j^{-1}(t_j\sum_{x\in U_{\beta_i,q^{b_i}}}x\eta)\in V. \end{split} \end{equation*} Choose $b_{i+1}$ such that all $\alpha_m(t_j)$ ($r\ge m\ge i$) are contained in $\mathbb{F}_{q^{b_{i+1}}}$. Then $\mathbb{F}_{q^{b_{i+1}}}$ contains $\mathbb{F}_{q^{2b_{i}}}$. Thus $t_jyt_j^{-1}$ is in $X_{i+1,q^{b_{i+1}}}$ for any $y\in X_{i+1,q^{b_i}}$. Let $Z\in kG$ be the sum of all elements in $X_{i+1,q^{b_{i+1}}}$. Then we have

\noindent (1) $\displaystyle {Z\xi=q^{(r-i)b_i}Z\sum_{j=1}^{q^{b_i}+1}t_j\sum_{x\in U_{\beta_i,q^{b_i}}}x\eta=q^{(r-i)b_i}Z(q^{b_i}\eta+\sum_{x\in U_{\beta_i,q^{2b_{i}}}}x\eta)\in V.}$

Since $\sum_{x\in X_{i,q^{b_i}}}x\eta$ is in $V$, we have $\displaystyle \sum_{x\in X_{i,q^{2b_{i}}}}x\eta\in V.$ Thus

\noindent(2) $\displaystyle Z\sum_{x\in X_{i,q^{2b_{i}}}}x\eta=q^{2(r-i)b_i}Z\sum_{x\in U_{\beta_i,q^{2b_i}}}x\eta\in V.$

Since $q\ne 0$ in $k$, combining (1) and (2) we see that $Z\eta\in V$, i.e., $\sum_{x\in X_{i+1,q^{b_{i+1}}}}x\eta$ is in $V$.

Note that $X_{r,q^{b_r}}=U_{\beta_r,q^{b_r}}$. Now we have $$\sum_{x\in U_{\beta_r,q^{b_r}}}x\eta\in V\quad\text{and}\quad\sum_{x\in U_{\beta_r,q^{2b_r}}}x\eta\in V.$$ The above arguments show that $$q^{b_r}\eta+\sum_{x\in U_{\beta_r,q^{2b_r}}}x\eta\in V.$$ Therefore $q^{b_r}\eta$ is in $V$. So $V$ contains $kG\eta=$St, hence $V=$St. The theorem is proved.

\subsection{Remark.} Let St${}_a=kG_{q^a}\eta$. Then St${}_a$ is the Steinberg module of $kG_{q^a}$, which is not irreducible in general. For example, let $G=SL_2(\bar{\mathbb{F}}_q)$ with $q$ odd and char$\,k=2$. Since $q^a+1$ is always divisible by $2=$char$\,k$, St${}_a$ is not irreducible for any positive integer $a$ (see [S, Theorem 3]). However, by Theorem 2.2, St is an irreducible $kG$-module.

\section{Non-quasi-finiteness of certain Steinberg modules}

In this section we show that for certain special linear groups the Steinberg modules of the groups are not quasi-finite with respect to some natural quasi-finite sequences of the groups; see Proposition 3.2.

\subsection{} By definition, a group $G$ is quasi-finite if $G$ has a sequence $G_1,\ G_2,\ ...,\ G_n,\ ...$
of finite subgroups such that $G$ is the union of all $G_i$ and for any positive integers $i,j$ there exists an integer $r$ such that $G_i$ and $G_j$ are contained in $G_r$. The sequence $G_1,G_2,G_3,...$ is called a quasi-finite sequence of $G$. An irreducible module (or representation) $M$ of $G$ is {\it quasi-finite} (with respect to the quasi-finite sequence $G_1,G_2,G_3,...$) if it has a sequence of subspaces $M_1$, $M_2$, $M_3$, ... of $M$ such that (1) each $M_i$ is an irreducible $G_i$-submodule of $M$, (2) if $G_i$ is a subgroup of $G_j$, then $M_i$ is a subspace of $M_j$, (3) $M$ is the union of all $M_i$. The sequence $M_1$, $M_2$, $M_3$, ... will be called a quasi-finite sequence of $M$; see [X, 1.8].

The following question was raised in [X, 1.8]: is every irreducible $G$-module quasi-finite (with respect to a certain quasi-finite sequence of $G$)? The main result of this section is the following.

\subsection{Proposition.} Let $G=SL_n(\bar{\mathbb{F}}_q)$ and let $k$ be a field of positive characteristic. Assume that char$\,k$ divides $(1+q^{a})(1+q^{a}+q^{2a})\cdots(1+q^{a}+\cdots+q^{(n-1)a})$ for all positive integers $a$. If a quasi-finite sequence of $G$ is a subsequence of $SL_n(\mathbb{F}_q),$ $SL_n(\mathbb{F}_{q^2}),$ $SL_n(\mathbb{F}_{q^3}),$ $SL_n(\mathbb{F}_{q^4}),...$, then the Steinberg module St of $kG$ is not quasi-finite with respect to that quasi-finite sequence.

Proof. Let $G_1,\ G_2,\ ...,\ G_n,\ ...$ be a quasi-finite sequence of $G$ and assume that it is a subsequence of $SL_n(\mathbb{F}_q),$ $SL_n(\mathbb{F}_{q^2}),$ $SL_n(\mathbb{F}_{q^3}),$ $SL_n(\mathbb{F}_{q^4}),...$. If St were quasi-finite with respect to this quasi-finite sequence, there would exist a sequence of subspaces $M_1$, $M_2$, $M_3$, ... of St such that (1) each $M_i$ is an irreducible $G_i$-submodule of St, (2) if $G_i$ is a subgroup of $G_j$, then $M_i$ is a subspace of $M_j$, (3) St is the union of all $M_i$.

Choose a nonzero element $v\in M_1$. By the proof of Theorem 2.2, there exists $x\in kG$ such that $xv=\eta$. Since $G$ is the union of all $H_a=SL_n(\mathbb{F}_{q^a})$, there exists a positive integer $i$ such that $x\in kH_i$. Since $G$ is the union of all $G_a$, there exists $j$ such that $H_i$ is included in $G_j$. Note that $G_j=H_{j'}$ for some positive integer $j'$. Choose an integer $d$ such that $G_d$ includes both $G_1$ and $G_j$. Then $M_d$ includes $M_1$ and $M_j$. Moreover, $x\in kG_d$, so that $xv=\eta$ is in $M_d$ and $M_d$ includes $kG_d\eta$. Since $G_d=H_{d'}$ for some $d'$ and, by the assumption on char$\,k$, $kH_{d'}\eta$ is not irreducible, $M_d$ is not irreducible. This contradicts the assumption that $M_d$ is an irreducible $G_d$-module. The proposition is proved.

\subsection{Remark} Assume that $n$ is a power of a prime $p'$ and that $k$ has characteristic $p'$. If $n$ and $q$ are coprime, then $n$ divides $(1+q^{a})(1+q^{a}+q^{2a})\cdots(1+q^{a}+\cdots+q^{(n-1)a})$ for all positive integers $a$. To see this, let $A_m= 1+q^{a}+\cdots+q^{(m-1)a}$, $m=2,...,n$. Since $n$ and $q$ are coprime, if $A_m\equiv 1$ (mod $n$), then $m>2$ and $A_{m-1}$ is divisible by $n$. If $A_m\not\equiv 1$ (mod $n$) for $m=2,...,n$, then either some $A_m$ is divisible by $n$ or $A_m\equiv A_l$ (mod $n$) for some $n\ge m>l\ge 2$. Since $n$ and $q$ are coprime, we have $m\ge l+2$. Then $A_{m-l}$ is divisible by $n$.
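The divisibility claim in this remark is also easy to test numerically; the following small script (our own check, not part of the argument, with arbitrarily sampled values of $n$, $q$, and $a$) verifies it for several prime powers $n$ coprime to $q$:
\begin{verbatim}
# Numerical check: if n is a power of a prime p' and gcd(n, q) = 1, then
# n divides (1+q^a)(1+q^a+q^{2a})...(1+q^a+...+q^{(n-1)a}).
from math import gcd

def product_of_sums(n, q, a):
    prod = 1
    for m in range(2, n + 1):
        prod *= sum(q ** (j * a) for j in range(m))   # A_m
    return prod

for n in [4, 8, 9, 16, 25, 27]:            # prime powers
    for q in [2, 3, 5, 7, 11]:
        if gcd(n, q) != 1:
            continue
        assert all(product_of_sums(n, q, a) % n == 0 for a in range(1, 6))
print("divisibility verified for the sampled cases")
\end{verbatim}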
By Proposition 3.2, in this case the Steinberg module St of $kSL_n(\bar{\mathbb{F}}_q)$ is not quasi-finite with respect to any quasi-finite sequence of $G$ that is a subsequence of $SL_n(\mathbb{F}_q),$ $SL_n(\mathbb{F}_{q^2}),$ $SL_n(\mathbb{F}_{q^3}),$ $SL_n(\mathbb{F}_{q^4}),...$. However, it is not clear whether St is quasi-finite with respect to other quasi-finite sequences of $SL_n(\bar{\mathbb{F}}_q)$. For other reductive groups one can argue similarly.

\subsection{} Assume that $G$ is quasi-finite and has a sequence of normal subgroups $\{1\}=G_0\subset G_1\subset \cdots\subset G_n=G$ such that all $G_i/G_{i-1}$ are abelian. Xi asked whether any irreducible $\mathbb{C}G$-module is isomorphic to the induced module of a one dimensional module of a subgroup of $G$ (see [X, 1.12]). The question has a negative answer even for finite groups; for instance, the two-dimensional irreducible complex representation of $SL_2(\mathbb{F}_3)$ is a counterexample. Perhaps the condition that all $G_i/G_{i-1}$ be abelian should be strengthened to the condition that all $G_i/G_{i-1}$ be cyclic.

{\bf Acknowledgement.} I am very grateful to Professor Nanhua Xi for his guidance and great help in writing this paper. The work was done during my visit to the Academy of Mathematics and Systems Science, Chinese Academy of Sciences, as a visiting student for the academic year 2014-2015. I thank the Academy of Mathematics and Systems Science for its hospitality. \end{document}
\begin{document} \title{Wave packet tunneling} \author{H.M. Krenzlin, J. Budczies, and K.W. Kehr,\\ Institut f\"ur Festk\"orperforschung, \\ Forschungszentrum J\"ulich GmbH, 52425 J\"ulich, Germany \\ and \\ Institut f\"ur Theoretische Physik,\\ Universit\"at zu K\"oln, 50937 K\"oln, Germany} \date{\today} \maketitle
\begin{abstract} The tunneling of Gaussian wave packets has been investigated by numerically solving the one-dimensional Schr\"odinger equation. The shape of wave packets interacting with a square barrier has been monitored for various values of the barrier width, the barrier height, and the initial width of the wave packet. Compared to the case of free propagation, the maximum of a tunneled wave packet exhibits a shift, which can be interpreted as an enhanced velocity during tunneling. \\ Keywords: wave packets; quantum-mechanical tunneling \end{abstract}

Since its formulation, the process of quantum mechanical tunneling has been the subject of numerous investigations, theoretical as well as experimental. Of great interest is the question of how long the process of tunneling lasts and at which speed a quantum mechanical object penetrates a potential barrier. Well-presented reviews of the different approaches to tunneling times are given in \cite{hauge,chiao}. At the University of California at Berkeley, experiments were carried out to study single-photon tunneling \cite{stein}, while at the University of Cologne tunneling of wave packets in microwave guides was investigated \cite{nimtz}. In these experiments superluminal velocities have been measured, giving rise to the still open question of whether these results contradict Einstein causality. A related question is whether signals or information can be transmitted at velocities exceeding the speed of light.

In this contribution we investigate tunneling of Gaussian wave packets within the framework of one-dimensional non-relativistic quantum mechanics. We examine the deformation of the initial wave packets by the tunneling process. The observed ``pulse reshaping'' leads to difficulties in the interpretation of tunneling velocities.

Particles in quantum mechanics are represented by wave packets, that is, by spatially extended objects that describe the probability amplitude of finding a particle at a specific point in space. In the simplest case the envelope of such a wave packet is given by a Gaussian. We studied in computer simulations \cite{kbk} the tunneling of spin-1/2 particles through a potential barrier that was covered by a weak homogeneous magnetic field. The field was introduced in order to realize a Larmor clock set-up, by which the spin rotation angles of an initially spin-polarized particle are monitored. These angles, measured after the tunneling process, are the quantities that define the ``Larmor times'' \cite{hauge}. The study of wave packet tunneling of spin-1/2 particles that were initially polarized in the propagation direction allowed us to follow the reading of the Larmor clock during the tunneling process. Interesting transient effects, such as rotation angles that did not increase monotonically in time, were observed. The results are presented in detail in \cite{kbk}. Here we only mention that the asymptotic readings of the Larmor clock, for not too narrow wave packets, agree with the stationary-state calculations of B\"uttiker \cite{buett}. The calculations were based on the time-dependent Schr\"o\-dinger-Pauli equation; here we consider the propagation of spinless quantum particles.
With regard to the interpretation of Einstein causality we point out that the Schr\"o\-dinger equation is not invariant with respect to Lorentz transformations; it is invariant only under Galilei transformations. This means that the propagation of the front or leading edge of a signal is not limited by the speed of light, in contrast to Maxwell's equations, which are Lorentz-invariant. In fact, initially localized pulses that propagate according to the free Schr\"odinger equation will spread over the whole space, in contrast to pulses that propagate according to the free Maxwell equations. In the results presented here we concentrate on the behavior of the wave packets near their maxima. We expect that our observations can contribute to the interpretation of superluminal velocities measured in tunneling experiments.
\begin{figure} \caption{Comparison of a freely propagating wave packet and transmitted parts at the same time step of the simulation for different barrier widths $d$ (given in sites) and fixed barrier height. The initial width $\sigma$ of the wave packet equals 10 sites.} \label{t1} \end{figure}
The time-dependent Schr\"o\-dinger equation was numerically solved in real space by a norm-conserving algorithm. Particles were described by Gaussian wave packets with different initial widths $\sigma$ (standard deviation) in real space, and the width $d$ of the tunneling barrier was varied. From the data of our computational simulations we produced ``snapshots'' by plotting the probability density represented by the wave packet at a particular time step, as shown in Fig.\ref{t1}. It depicts in a quantitative way a freely propagating wave packet in comparison to wave packets that had to tunnel through a potential barrier. Note that the maximum of the transmitted part of the wave packet is shifted in front of the maximum of a freely propagating wave packet that was prepared identically. This is illustrated in Fig.\ref{t1} for different barrier widths, with the barrier starting at site $6000$. For narrow barriers the shift in the maximum of the transmitted part of the wave packet first increases with the barrier width. If a certain limit is exceeded, in our case for barriers wider than 15 sites, deformations occur within the transmitted part and the symmetric form vanishes. Simultaneously, the shift of the maximum of the wave packet is now decreasing and the amplitude increases.
\begin{figure} \caption{Comparison of a freely propagating wave packet and transmitted components for a fixed time step and varying potential height $h$. Here $\sigma=10$ and $d=20$ sites.} \label{t2} \end{figure}
When varying the height of the potential barrier, the symmetry of the transmitted part of the wave packet is preserved within the range investigated, as shown in Fig.\ref{t2}. The quantity $h$ is the ratio of the potential energy of the tunneling barrier to the kinetic energy of the incident wave packet. Again, we consider a wave packet with an initial width of $\sigma=10$ sites and a barrier width of $20$ sites. In terms of energy, the higher the barrier, the more pronounced is the shift of the maximum of the transmitted part of the wave packet in the propagation direction. The effect described above was also enhanced the narrower (in real space) the initial wave packet was prepared.
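The text above does not specify the integration scheme beyond calling it norm-conserving; a minimal sketch of this type of simulation, assuming a Crank--Nicolson discretization (all grid sizes and parameter values below are illustrative and are not the ones used for the figures), is:
\begin{verbatim}
# Minimal Crank-Nicolson propagation of a Gaussian wave packet hitting a
# square barrier (illustrative parameters; hbar = m = 1, lattice spacing 1).
import numpy as np
from scipy.linalg import solve_banded

L, dt = 2000, 0.1
x = np.arange(L, dtype=float)
k0, sigma, x0 = 0.5, 10.0, 400.0              # mean momentum, width, start
psi = np.exp(-(x - x0)**2 / (4 * sigma**2) + 1j * k0 * x)
psi /= np.linalg.norm(psi)

V = np.zeros(L)
V[900:915] = 1.2 * k0**2 / 2                  # barrier: 15 sites, h = 1.2

# Tridiagonal Hamiltonian H = -(1/2) d^2/dx^2 + V
main = 1.0 + V
off = -0.5 * np.ones(L - 1)

# Crank-Nicolson step: (1 + i dt H/2) psi_new = (1 - i dt H/2) psi_old
A = np.zeros((3, L), dtype=complex)
A[0, 1:] = 0.5j * dt * off                    # superdiagonal
A[1, :] = 1.0 + 0.5j * dt * main              # diagonal
A[2, :-1] = 0.5j * dt * off                   # subdiagonal

def rhs(psi):
    out = (1.0 - 0.5j * dt * main) * psi
    out[:-1] -= 0.5j * dt * off * psi[1:]
    out[1:] -= 0.5j * dt * off * psi[:-1]
    return out

for _ in range(12000):                        # propagate
    psi = solve_banded((1, 1), A, rhs(psi))

prob = np.abs(psi)**2
print(prob.sum())                             # stays ~1 (norm conservation)
print(np.argmax(prob[915:]) + 915)            # maximum of the transmitted part
\end{verbatim}
Comparing the transmitted maximum with that of a second run without the barrier reproduces, qualitatively, the shift discussed above.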
With increasing initial width $\sigma$, the shift of the maximum of the tunneled wave packet becomes less pronounced. For narrow wave packets, a maximal shift at a certain value of the barrier width is clearly visible. \begin{figure} \caption{Position of the maximum of wave packets with different initial widths $\sigma$ as a function of the barrier width $d$ (both given in sites).} \label{t3} \end{figure} The observed reshaping of the wave packets by tunneling is due to the fact that wave packets prepared narrow in real space have a correspondingly broad distribution of wave vectors in momentum space, according to the uncertainty relation between position and momentum. When tunneling through a potential barrier, the high wave-vector components are transmitted preferentially, and they also propagate faster than the parts of the wave packet with lower momentum. This effect causes a reshaping of the wave packet behind the barrier, thus shifting the maximum of the transmitted part of the wave packet in the direction of propagation. The amount of the shift depends on the width and height of the barrier and on the initial width of the wave packet. Comparing the maxima only, a wave packet which was forced to tunnel through a barrier would seem to have moved faster than a freely propagating one that was prepared identically. It is important to point out that the transmitted part of the wave packet always remains within the envelope of the freely propagating packet, never leaking through this bounding curve. Thus a potential barrier attenuates the amplitude and gives rise, within certain limits, to a spatial shift of the transmitted part of a wave packet in the direction of propagation. From our simulations we conclude that apparently enhanced tunneling velocities are due to the deformation of pulses. We find no indication that the transmitted part moves ahead of the freely propagating wave packet by a measurable distance. The probability of finding a particle at a location behind the barrier cannot be enhanced by the insertion of a tunneling region in the path of the propagating wave packet. Our observations are consistent with the following view on the question of violation of Einstein causality by tunneling processes involving electromagnetic signals: The front of a signal propagates without violating causality, and the speed of light represents the upper limit of its propagation velocity. However, other components of a wave packet, e.g.\ the maximum, can propagate at superluminal velocities within the tunneling region, but this does not indicate a violation of causality. \end{document}
\begin{document} \newcommand\restr[2]{{ \left.\kern-\nulldelimiterspace #1 \vphantom{\big|} \right|_{#2} }} \makeatletter \renewcommand{\@seccntformat}[1]{ \ifcsname prefix@#1\endcsname \csname prefix@#1\endcsname \else \csname the#1\endcsname\quad \fi} \makeatother \makeatletter \newcommand{\colim@}[2]{ \vtop{\m@th\ialign{##\cr \hfil$#1\operator@font colim$\hfil\cr \noalign{\nointerlineskip\kern1.5\ex@}#2\cr \noalign{\nointerlineskip\kern-\ex@}\cr}} } \newcommand{\colim}{ \mathop{\mathpalette\colim@{\rightarrowfill@\textstyle}}\nmlimits@ } \makeatother \newcommand\rightthreearrow{ \mathrel{\vcenter{\mathsurround0pt \ialign{##\crcr \noalign{\nointerlineskip}$\rightarrow$\crcr \noalign{\nointerlineskip}$\rightarrow$\crcr \noalign{\nointerlineskip}$\rightarrow$\crcr } }} } \title{A short geometric derivation of the dual Steenrod algebra} \author{Kiran Luecke} \date{\today} \maketitle \begin{abstract} \noindent This two-page note gives a non-computational derivation of the dual Steenrod algebra as the automorphisms of the formal additive group. Instead of relying on computational tools like spectral sequences and Steenrod operations, the argument uses a few simple universal properties of certain cohomology theories. \end{abstract} All algebras are graded and commutative. $C_2$ is the cyclic group of order 2. \noindent \textbf{Preamble:} Let $MT$ be the category of pairs $(F,s)$ where $F$ is a contravariant functor from the homotopy category of finite spectra to the category of graded abelian groups that takes sums to products and $s$ is a natural isomorphism from $F$ to $[1]\circ F\circ\Sigma$, where $[1]$ denotes the shift-of-grading functor. A morphism $(F,s)\to (F',s')$ is a natural transformation $F\to F'$ that takes $s$ to $s'$. Let $E$ be a homotopy ring spectrum and $E^*=\pi_{-*}E$. Consider the ``Conner-Floyd" functor $\text{Alg}_{E^*}\rightarrow MT$ $$A\mapsto (X\mapsto EA^*X:=E^*X\otimes_{E^*}A).$$ Suppose that $E_*E$ is flat over $E^*$, so that $E(E_*E)$ is represented by $E\wedge E$. Consider the natural transformation $$\eta_E:\text{Spec}E_*E\rightarrow \text{Hom}_{MT}(E,E-)$$ $$(E_*E\xrightarrow{f}A)\mapsto \eta_E(f):=(E^*X\xrightarrow{1\wedge\text{id}}(E\wedge E)^*X\simeq E(E_*E)^*X\xrightarrow{F}EA^*X)$$ The coproduct on $E_*E$ induced by the map $E\wedge E\xrightarrow{\text{id}\wedge 1\wedge\text{id}}E\wedge E\wedge E$ induces an ``internal" composition of morphisms $E\rightarrow EA$ in the image of $\eta_E$. If $EA$ is a cohomology theory (e.g. if $A$ is flat over $E^*$) then it corresponds to a (homotopy ring) spectrum which is an $E$-module, harmlessly denoted by $EA$, and there is an inverse to $\eta$ given by sending a transformation $T:E\rightarrow EA$ to the map on homotopy induced by $E\wedge E\xrightarrow{\text{id}\wedge T}E\wedge EA\rightarrow EA.$ \begin{lem} Let $E$ be a real oriented homotopy ring spectrum such that $E_*E$ is flat over $E^*$. Write $\mathbb{G}_E$ for the formal group with coordinate $\text{Spf}E^*BC_2$ and write $\text{Aut}\mathbb{G}_E$ for the group-valued presheaf on $\text{Alg}_{E^*}$ sending $R$ to the group of strict automorphisms of $\mathbb{G}_H(R)$. Then there is a morphism $\text{Spec}(E_*E)\xrightarrow{ev_E}\text{Aut}\mathbb{G}_E$ of monoid-valued presheaves on $\text{Alg}_{E^*}$. \end{lem} \begin{proof} Let $e$ be the canonical generator of $E^*B C_2$. Let $R$ be an $E^*$-algebra and let $T$ be an element of $\text{Hom}_{MT}(E,ER)$. The element $T(e)$ may be identified with a power series $f(e)\in R[[x]]$. 
Moreover by multiplicativity of $T$ and naturality applied to the multiplication map $BC_2\times BC_2\rightarrow BC_2$, the power series $f(e)$ must be an endomorphism of the formal group law of $\mathbb{G}_E$. The linear coefficient $f_0$ is forced to be 1 by considering the pullback of $e$ along $S^1\hookrightarrow BC_2$ and invoking stability of $T$. Hence $f(e)$ defines a strict automorphism of $\mathbb{G}_E$. Precomposition of the assignment $T\mapsto f(e)$ with $\eta_E$ defines the desired map $ev_E:\text{Spec} E_*E(R)\rightarrow\text{Aut}\mathbb{G}_E(R)$ which is clearly natural in $R$ and commutes with the monoidal structure on both sides by inspection. \end{proof} \begin{lem} Suppose that there is a homotopy commutative ring spectrum $M$ such that \begin{enumerate} \item There is a map $M\rightarrow H$ inducing an isomorphism on $\pi_0$. \item $M$ is real oriented with real orientation $e_M\in M^1BC_2$ and formal group $\mathbb{G}_M$ with formal group law $F_M$. \item $M_*M$ is flat over $M^*$, and the map $ev_M(R)$ afforded by the previous lemma is injective when $MR$ is a cohomology theory. \item For an $\mathbb{F}_2$-algebra $R$ let $\text{Aut}\mathbb{G}_M(R)$ be the groupoid whose objects are $\mathbb{F}_2$-algebra maps $M^*\xrightarrow{f}R$ and whose morphisms from $f$ to $g$ are strict isomorphisms of formal group laws $\phi$ from $f_*F_M$ to $g_*F_M$. Write $R_f$ for the induced $M^*$-algebra structure whose underlying ring is $R$. Note that $MR_f$ is canonically real oriented by the class $e_{MR_f}=[e_M\otimes 1]\in MR_f^*BC_2=M^*BC_2\otimes_{M^*}R_f$. Then there is a functor $$\gamma:\text{Aut}\mathbb{G}_M(R)\rightarrow MT$$ which on objects sends $f$ to $MR_f$ and on morphisms sends $\phi$ to the transformation $\gamma(\phi):MR_f\rightarrow MR_g$ such that $\gamma(\phi)(e_{MR_f})=\phi(e_{MR_g})\in MR_g^*BC_2\simeq R_g[[e_{MR_g}]]$ as a power series. \end{enumerate} Then the evaluation map $ev_H$ from the previous lemma is an isomorphism. In particular if $\mathcal{A}_2$ denotes a Hopf algebra corepresenting $\text{Aut}\mathbb{G}_H$ then there is an isomorphism of Hopf algebras $H_*H\simeq\mathcal{A}_2.$ \end{lem} \begin{proof} By item 1 $\mathbb{F}_2$ is an algebra over $M^*$. By item 2 the formal group law of $M^*BC_2$ has vanishing 2-series $[2](e_M)=0$ and so it is isomorphic to the additive one. Transporting such an isomorphism $\phi$ with $\gamma$ produces a natural isomorphism $$M^*X\simeq M^*X\otimes_{M^*}M^*\xrightarrow{\gamma(\phi)}M^*X\otimes_{M^*}\mathbb{F}_2\otimes_{\mathbb{F}_2}M^*=M\mathbb{F}_2\otimes_{\mathbb{F}_2}M^*. $$ Thus $M\mathbb{F}_2$ is a summand of a cohomology theory and hence a cohomology theory. The map in item 1 induces a map $M\mathbb{F}_2\rightarrow H$ which by Eilenberg-Steenrod uniqueness must be an isomorphism. Thus, combining item 3 with the pullback along the surjection $M\rightarrow M\mathbb{F}_2\simeq H$, the map $ev_H$ must be injective. So to produce an inverse it suffices to produce a section. Now since $\mathbb{F}_2$ is a field we have $MR\simeq HR$ for all $\mathbb{F}_2$-algebras $R$. Note that there is an inclusion $B\text{Aut}\mathbb{G}_H\hookrightarrow\text{Aut}\mathbb{F}_H$ since $F_M$ is sent to the additive formal group law by the map $M^*\rightarrow\mathbb{F}_2$ in item 1. Then the desired section of $ev_H$ is given as follows. Write $\mathbb{F}_2\xrightarrow{u}R$ for the unit map of an $\mathbb{F}_2$-algebra $R$. 
Start with an automorphism of $\text{Aut}\mathbb{G}_H(R)$ and view this as an automorphism of the object $M^*\rightarrow\mathbb{F}_2\rightarrow R$ in $\text{Aut}\mathbb{G}_M(R)$. Using $\gamma$ that produces an automorphism of $MR\simeq HR$, which one then precomposes with the morphism $H\rightarrow HR$ induced by $u$. \end{proof} \begin{lem} A cohomology theory $M$ satisfying the conditions of the previous lemma exists. It is $MO$. \end{lem} \begin{proof} Let $e_{MO}$ denote the homotopy class of the inclusion $\mathbb{RP}^\infty\hookrightarrow\Sigma MO$. This exhibits $(MO,e_{MO})$ as the universal real oriented multiplicative cohomology theory. Hence there is a map $MO\rightarrow H$ sending $e_{MO}$ to $e\in H^*BC_2$. Thus items 1 and 2 are satisfied. Since $MO$ is real oriented, an easy calculation\footnote{The real orientation $e_{MO}$ implies that all the differentials in the relevant Atiyah-Hirzebruch sequences are trivial.} shows that $MO_*MO\simeq MO_*[a_1,a_2,...]$ which is free and hence flat over $MO^*$. Let $A$ be an $MO^*$-algebra and $T:MO\rightarrow MOA$ a multiplicative natural transformation. Let $\Theta_n\in MO^*MO(n))$ be the universal Thom class. By the splitting principle $T(\Theta_1)$ determines $T(\Theta_n)$, which determines all of $T$ by universality. Therefore if $MOA$ is a cohomology theory then $ev_{MO}(A)$ is the composition of an isomorphism ($\eta_{MO}(A)$) and an injection ($T\mapsto T(\Theta_1)$) so item 3 is satisfied. \begin{comment} Let $A$ be a flat $MO^*$-algebra, $ f:MO_*MO\rightarrow A$ a ring map, and $T:=\eta_{MO}(f):MO\rightarrow MOA$ the multiplicative natural transformation afforded by the preamble. The component $MO^*BO(1)\rightarrow MOA^*BO(1)$ determines the components $MO^*BO(n)\rightarrow MOA^*BO(n)$ by the splitting principle, which in turn determine the components $MO^*MO(n)\rightarrow MOA^*MO(n)$ by the Thom isomorphism\footnote{If $\Theta_n$ and $e_n$ denote the universal Thom and euler classes, then one uses injectivity of the pullback along $BO(n)\rightarrow MO(n)$ to see that the $T(\Theta_n)$ is the image under the Thom isomorphism of $\frac{T(e_n)}{e_n}$. Note that $T(e_n)$ is indeed divisible by $e_n$ by multiplicativity and the fact that by strictness the power series $T(e_{MO})$ is divisible by $e_{MO}$.}, which by universality determines all of $T$. So item 3 is satisfied. \end{comment} Let $A$ be an $\mathbb{F}_2$-algebra receiving two maps $f,g:MO^*\rightarrow A$ and let $\phi\in A[[x]]$ be an isomorphism from $f_*F_{MO}$ to $g_*F_{MO}$. Note that $MOA_f$ and $MOA_g$ are canonically oriented by $e_{MOA_f}$ and $e_{MOA_g}$ as defined in item 4. It remains to construct (functorially) an automorphism $\gamma(\phi):MOA_f\rightarrow MOA_g$ such that $T(e_{MOA_f})=\phi(e_{MOA_g})$. By the calculation $MO_*MO\simeq MO_*\otimes_{\mathbb{F}_2}\mathbb{F}_2[a_1,a_2,...]$, the pair $(g,\phi)$ corresponds\footnote{By strictness $f(e_{MOA_g})=e_{MOA_g}+f_1e_{MOA_g}^2+...$ and the corresponding map $\Phi:MO_*[a_1,...]\rightarrow A$ sends $a_i$ to $f_{i+1}$.} to an $\mathbb{F}_2$-algebra map $\Phi:MO_*MO\rightarrow A$ and so using the natural transformation $\eta_{MO}$ from the preamble one gets multiplicative transformation $\eta_{MO}(\Phi):MO\rightarrow MOA_g$ with the property that $\eta_{MO}(\Phi)(e_{MO})=f(e_{MOA_g})$. 
Indeed, the image of $e_{MO}$ under the map $MO^*BC_2\xrightarrow{1\wedge\text{id}}(MO\wedge MO)^*BC_2\simeq MO^*BC_2\otimes_{MO^*}MO_*MO$ is $[e_{MO}\otimes1] + [e_{MO}^2\otimes a_1]+...$ by a simple calculation using the pairing between homology and cohomology\footnote{See e.g. Adams [1], Lemma 6.3, page 60, for the complex version.}. Note that the map $\eta_{MO}(\Phi)(\text{pt}):MO^*\rightarrow A$ is not equal to $g$. Now $\eta_{MO}(\Phi)$ induces an $A$-linear map $MOA_f^*X\rightarrow MOA_g^*X$ and produces the desired map $\gamma(\phi):MOA_f\rightarrow MOA_g$, which is what we are really after. Functoriality\footnote{This argument for functoriality is due to Quillen in [4].} is proved by noting that $\gamma(\phi)$ is uniquely characterized by the properties of being $A$-linear, multiplicative, and sending $e_{MOA_f}$ to $\phi(e_{MOA_g})$. \end{proof} \begin{rem} There is a similar but slightly messier derivation of the dual Steenrod algebra at odd primes which uses a mod $p$ analog of $MO$ constructed by Shaun Bullett [2] using manifolds with singularities. I decided to leave the odd primes to forthcoming work\footnote{This work will also discuss derivations of algebras of cohomology operations of other spectra as well as a derivation of the Dyer-Lashof algebra.} since the added technical difficulties (which in this business are usual at odd primes) obscure the simplicity and brevity of the exposition, which is a cardinal virtue of this note. \end{rem} \noindent \textbf{Acknowledgements}: Thanks to Tim Campion for extensive comments on previous drafts, and to Robert Burklund for catching a gap in a proof. [1] Adams, J.~F. \textit{Stable homotopy and generalised homology}. Chicago Lectures in Mathematics. The University of Chicago Press, 1995. \ \noindent [2] Bullett, S. \textit{A $\mathbb{Z}/p$ Analogue for Unoriented Bordism}. University of Warwick PhD Thesis (1973). \ \noindent [3] Peterson, E. \textit{Formal Geometry and Bordism Operations}. Cambridge Studies in Advanced Mathematics 177 (2019). \ \noindent [4] Quillen, D. \textit{Elementary Proofs of Some Results of Cobordism Theory Using Steenrod Operations}. Advances in Mathematics 7, 29--56 (1971). \end{document}
\begin{document} \title{Geometrical approach to mutually unbiased bases} \author{Andrei~B~Klimov$^1$, Jos\'e~L~Romero$^1$, Gunnar~Bj\"{o}rk$^2$ and Luis~L~S\'anchez-Soto$^3$} \address{$^1$ Departamento de F\'{\i}sica, Universidad de Guadalajara, 44420~Guadalajara, Jalisco, Mexico} \address{$^2$ School of Information and Communication Technology, Royal Institute of Technology (KTH), Electrum 229, SE-164 40 Kista, Sweden} \address{$^3$ Departamento de \'{O}ptica, Facultad de F\'{\i}sica, Universidad Complutense, 28040~Madrid, Spain} \date{\today} \begin{abstract} We propose a unifying phase-space approach to the construction of mutually unbiased bases for a two-qubit system. It is based on an explicit classification of the geometrical structures compatible with the notion of unbiasedness. These consist of bundles of discrete curves intersecting only at the origin and satisfying certain additional properties. We also consider the feasible transformations between different kinds of curves and show that they correspond to local rotations around the Bloch-sphere principal axes. We suggest how to generalize the method to systems in dimensions that are powers of a prime. \end{abstract} \eqnobysec \section{Introduction} The notion of mutually unbiased bases (MUBs) emerged in the seminal work of Schwinger~\cite{Schwinger60} and it has turned into a cornerstone of the modern quantum information. Indeed, MUBs play a central role in a proper understanding of complementarity~\cite{Wootters87,Kraus87,Lawrence02,Chaturvedi02,Wootters04b}, as well as in approaching some relevant issues such as optimum state reconstruction~\cite{Wootters89,Asplund01}, quantum key distribution~\cite{Bechmann00,Cerf02}, quantum error correction codes~\cite{Gottesman96,Calderbank97}, and the mean king problem~\cite{Vaidman87,Englert01,Aravind03,Schulz03,Kimura06}. For a $d$-dimensional system (also known as a qudit) it has been found that the maximum number of MUBs cannot be greater than $d+1$ and this limit is reached if $d=p$ is prime~\cite{Ivanovic81} or power of prime, $d=p^{n}$~\cite{Calderbank97b}. It was shown in reference~\cite{Ban02} that the construction of MUBs is closely related to the possibility of finding of $d+1$ disjoint classes, each one having $d-1$ commuting operators, so that the corresponding eigenstates form sets of MUBs. Since then, different explicit constructions of MUBs in prime power dimensions have been suggested in a number of papers~\cite{Kla04,Lawrence04,Parthasarathy04,Pitt05,Durt05,Planat05,JPA05}. The phase space of of a qudit can be seen as a $d \times d$ lattice whose coordinates are elements of the finite Galois field $GF(d)$~\cite{Lidl86}. At first sight, the use of elements of $GF(d)$ as coordinates could be seen as an unnecessary complication, but it proves to be an essential step: only by doing this we can endow the phase-space grid with the same geometric properties as the ordinary plane. There are several possibilities for mapping quantum states onto this phase space~\cite{Buot74,Galetti88,Cohendet88}. However, special mention must be made of the elegant approach developed by Wootters and coworkers in references~\cite{Wootters04} and \cite{Gibbons04}, which has been used to define a discrete Wigner function (see references~\cite{Paz05} and \cite{Durt06} for picturing qubits in phase space). Any good assignment of quantum states to lines is called a `quantum net''. In fact, there is not a unique quantum net for a given $d \times d$ phase space. 
However, one can manage to construct lines and striations (sets of parallel lines) in this phase space: after an arbitrary choice that does not lead to anything fundamentally new, it turns out that the orthogonal bases associated with each striation are mutually unbiased. In this paper, we proceed just in the opposite way. We start by considering the geometrical structures in phase space that are compatible with the notion of unbiasedness. By taking the case of two identical two-dimensional systems (i.e., two qubits) as the thread for our approach, we classify these admissible structures into rays and curves (and the former also in regular and exceptional, depending on the degeneracy). To each bundle of curves, we associate a MUB, and we show how these MUBs are related by local transformations that do not change the corresponding entanglement properties. Finally, we sketch how to extend this theory to higher (power of prime) dimensions. We hope that this new method can seed light on the structure of MUBs and can help to resolve some of the open problems in this field. For example, all the MUB structures in 8- and 16-dimensional Hilbert space are known~\cite{PRA05}, but in the 16-dimensional case the transformations to go from one structure to any other are unknown and hitherto a method to find them (in any space dimension) has been lacking. Our approach provides a means to find such transformations in a systematic manner. \section{Constructing a set of mutually unbiased bases} When the space dimension $d = p^{n}$ is a power of a prime it is natural to conceive the system as composed of $n$ constituents, each of dimension $p$~\cite{Vourdas04}. We briefly summarize a simple construction of MUBs for this case, according to the method introduced in reference~\cite{JPA05}, although focusing on the particular case of two-qubits. The main idea consists in labeling both the states of the subsystems and the generators of the generalized Pauli group (acting in the four-dimensional Hilbert space) with elements of the finite field $GF(4)$, instead of natural numbers. In particular, we shall denote as $ | \alpha \rangle$ with $\alpha \in GF (4)$ an orthonormal basis in the Hilbert space of the system. Operationally, the elements of the basis can be labeled by powers of a primitive element (that is, a root of the minimal irreducible polynomial, $\sigma^{2} + \sigma + 1 = 0$), so that the basis reads \begin{equation} \{ |0 \rangle, \, |\sigma \rangle , \, | \sigma^{2} = \sigma + 1 \rangle , \, | \sigma^{3} = 1 \rangle \} \, . \end{equation} These vectors are eigenvectors of the generalized position operators $Z_{\beta}$ \begin{equation} Z_{\beta} = \sum_{\alpha \in GF(4) } \chi ( \alpha \beta) |\alpha \rangle \langle \alpha | \, , \end{equation} where henceforth we assume $\alpha , \beta \in GF(4)$. Here $\chi (\theta )$ is an additive character \begin{equation} \label{ch} \chi ( \theta ) = \exp \left [ \frac{2\pi i}{p} \tr ( \theta ) \right ] \, , \end{equation} and the trace operation, which maps elements of $GF(4) $ onto the prime field $GF(2) \simeq \mathbb{Z}_{2}$, is defined as $\tr (\theta) = \theta + \theta^{2}$. 
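As a concrete aid that is not part of the original paper, the arithmetic of $GF(4)$, the trace map and the additive character $\chi$ can be tabulated in a few lines of Python; the integer encoding of the field elements and all helper names below are our own choices.
\begin{verbatim}
# GF(4) arithmetic (illustrative encoding).  Elements are coded as integers
# 0..3: bit 0 is the coefficient of 1, bit 1 the coefficient of the primitive
# element s, with s^2 = s + 1, so the codes 0, 1, 2, 3 stand for 0, 1, s, s^2.

def add(a, b):
    return a ^ b                    # addition is bitwise XOR (characteristic 2)

def mul(a, b):
    a0, a1 = a & 1, a >> 1
    b0, b1 = b & 1, b >> 1
    c0 = (a0 * b0 + a1 * b1) % 2            # s^2 = s + 1 feeds back into 1 ...
    c1 = (a0 * b1 + a1 * b0 + a1 * b1) % 2  # ... and into s
    return c0 | (c1 << 1)

def tr(a):
    return add(a, mul(a, a))        # tr(x) = x + x^2, takes values in {0, 1}

def chi(a):
    return (-1) ** tr(a)            # additive character for p = 2

s = 2                               # the primitive element sigma
assert mul(mul(s, s), s) == 1       # sigma^3 = 1
print([tr(a) for a in range(4)])    # [0, 0, 1, 1]: tr vanishes exactly on GF(2)
\end{verbatim}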
The diagonal operators $Z_{\beta}$ are conjugated to the generalized momentum operators $X_{\beta}$ \begin{equation} X_{\beta} = \sum_{\alpha \in GF(4)} | \alpha + \beta \rangle \langle \alpha | \, , \end{equation} precisely through the finite Fourier transform \begin{equation} \label{XZgf} F \, X_{\beta} \, F^{\dagger} = Z_{\beta} \, , \end{equation} with \begin{equation} F = \frac{1}{2} \sum_{\alpha, \beta \in GF(4)} \chi ( \alpha \beta) \, | \alpha \rangle \langle \beta | \, . \end{equation} The operators $ \{Z_{\alpha}, X_{\beta}\}$ are the generators of the generalized Pauli group \begin{equation} Z_{\alpha} X_{\beta} = \chi (\alpha \beta) \, X_{\beta} Z_{\alpha} \, . \label{com} \end{equation} In consequence, we can form five sets of commuting operators (which from now on will be called displacement operators) as follows, \begin{equation} \label{set1} \{ X_{\beta} \}, \{ Z_{\alpha} X_{\beta = \mu \alpha }\} \, , \end{equation} with $\mu \in GF(4)$. The displacement operators (\ref{set1}) can be factorized into products of powers of single-particle operators $\sigma_{z}$ and $\sigma_{x}$, whose expression in the standard basis of two-dimensional Hilbert space is \begin{equation} \label{primeZX} \sigma_{z} = | 0 \rangle \langle 0 | - | 1 \rangle \langle 1 | \, , \qquad \sigma_{x} = | 0 \rangle \langle 1 | + | 1 \rangle \langle 0 | \, . \end{equation} This factorization can be carried out by mapping each element of $GF(4)$ onto an ordered set of natural numbers~\cite{Gibbons04}, $\alpha \Leftrightarrow ( a_{1},a_{2}) $, where $a_{j}$ are the coefficients of the expansion of $\alpha $ in a field basis $\theta_j$ \begin{equation} \alpha = a_{1} \theta_{1} + a_{2} \theta_{2} \, . \end{equation} A convenient field basis is that in which the finite Fourier transform is factorized into a product of single-particle Fourier operators. This is the so-called self-dual basis, defined by the property $\tr ( \theta_{i} \theta_{j}) = \delta _{ij}$. In our case the self-dual basis is $(\sigma , \sigma^{2})$ and leads to the following factorizations \begin{equation} Z_{\alpha} = \sigma_{z}^{a_1} \, \sigma_{z}^{a_2}, \qquad X_{\beta} = \sigma_{x}^{b_1} \, \sigma_{x}^{b_2}, \end{equation} where $\alpha = a_{1} \sigma + a_{2} \sigma^{2}$ and $\beta = b_{1} \sigma + b_{2}\sigma^{2}$. Using this factorization, one can immediately check that, among the five MUBs that exist in this case, three are factorable and two are maximally entangled~\cite{Englert00}. Although the factorization of a particular displacement operator depends on the choice of a basis in the field, the global separability properties (i.e., the number of factorable and maximally entangled MUBs) is basis independent. That is, any nonlocal unitary transformation that yields only factorable or maximally entangled bases (i.e., a transformation from the Clifford group) will provide an isomorphic set of MUBs with respect to the separability, except, perhaps, for some trivial permutations. Nevertheless, this property holds only for two qubits because for higher-dimensional cases more complicated structures arise~\cite{PRA05}. \section{Mapping the mutually unbiased bases onto phase space} The problem of MUBs can be further clarified by an appropriate representation in phase space, which is defined as a collection of ordered points $(\alpha, \beta ) \in GF( 4) \times GF( 4) $. In this finite phase space the operators from the five sets (\ref{set1}) are labeled by points of rays (i.e., `straight´´ lines passing through the origin). 
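Before turning to the explicit phase-space arrangement, the algebra just described can be checked numerically. The following Python sketch (ours, not the authors') expands $\alpha$ and $\beta$ in the self-dual basis via $a_i = \tr(\alpha\theta_i)$, builds $Z_\alpha$ and $X_\beta$ as two-qubit matrices and verifies the commutation relation (\ref{com}) for all field elements; the integer encoding of $GF(4)$ is the same as in the previous sketch.
\begin{verbatim}
import numpy as np

# GF(4) encoded as before: 0, 1, 2, 3 stand for 0, 1, s, s^2 (multiplication table).
MUL = [[0, 0, 0, 0], [0, 1, 2, 3], [0, 2, 3, 1], [0, 3, 1, 2]]
tr  = lambda a: a ^ MUL[a][a]            # field trace, valued in {0, 1}
chi = lambda a: (-1) ** tr(a)            # additive character

sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
mpow = np.linalg.matrix_power

def coeffs(alpha):
    """Coefficients of alpha in the self-dual basis (s, s^2): a_i = tr(alpha * theta_i)."""
    return tr(MUL[alpha][2]), tr(MUL[alpha][3])

def Z(alpha):
    a1, a2 = coeffs(alpha)
    return np.kron(mpow(sz, a1), mpow(sz, a2))

def X(beta):
    b1, b2 = coeffs(beta)
    return np.kron(mpow(sx, b1), mpow(sx, b2))

# Verify Z_alpha X_beta = chi(alpha*beta) X_beta Z_alpha for all alpha, beta in GF(4).
for alpha in range(4):
    for beta in range(4):
        assert np.allclose(Z(alpha) @ X(beta),
                           chi(MUL[alpha][beta]) * X(beta) @ Z(alpha))
print("commutation relation verified for all 16 pairs (alpha, beta)")
\end{verbatim}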
The vertical axis has $\alpha =0$ and the horizontal axis has $\beta = 0$. For our case, we explicitly have \begin{eqnarray} \label{set1a} \beta =0 & \rightarrow & Z_{\sigma}, Z_{\sigma^2}, Z_{\sigma^3} \nonumber \\ \beta = \alpha & \rightarrow & Z_{\sigma} X_{\sigma}, Z_{\sigma^{2}}X_{\sigma^2}, Z_{\sigma^3} X_{\sigma^3} \nonumber \\ \beta = \sigma \alpha & \rightarrow & Z_{\sigma} X_{\sigma^2}, Z_{\sigma^2} X_{\sigma^3}, Z_{\sigma^3} X_{\sigma} \\ \beta = \sigma^2 \alpha & \rightarrow & Z_{\sigma} X_{\sigma^3}, Z_{\sigma^2} X_{\sigma}, Z_{\sigma^3} X_{\sigma^2} \nonumber \\ \alpha =0 & \rightarrow & X_{\sigma}, X_{\sigma^2}, X_{\sigma^3} \nonumber \end{eqnarray} where the left column indicates the ray corresponding to the operators appearing in the three rightmost columns. In the factorized form, the set in (\ref{set1a}) can be expressed as in table~1. \begin{table} \caption{Rays and their associated physical operators.} \begin{indented} \item[] \begin{tabular}{@{}lllll} \br Basis & Ray & \multicolumn{3}{c}{Factorized operators} \\ \mr 1 & $\beta = 0$ & $\sigma_{z} \leavevmode\hbox{\small1\normalsize\kern-.33em1}$ & $\leavevmode\hbox{\small1\normalsize\kern-.33em1} \sigma_{z}$ & $\sigma_{z} \sigma_{z}$ \\ 2 & $\beta = \alpha $ & $\sigma_{y} \leavevmode\hbox{\small1\normalsize\kern-.33em1}$ & $ \leavevmode\hbox{\small1\normalsize\kern-.33em1} \sigma_{y}$ & $\sigma_{y}\sigma_{y}$ \\ 3 & $\beta = \sigma \alpha$ & $\sigma_{z} \sigma_{x}$ & $\sigma_{x} \sigma_{y}$ & $\sigma_{y} \sigma _{z} $ \\ 4 & $\beta = \sigma^{2} \alpha$ & $\sigma_{y} \sigma_{x}$ & $\sigma_{x} \sigma_{z}$ & $\sigma_{z} \sigma_{y}$ \\ 5 & $\alpha = 0$ & $\sigma_{x} \leavevmode\hbox{\small1\normalsize\kern-.33em1}$ & $\leavevmode\hbox{\small1\normalsize\kern-.33em1} \sigma_{x}$ & $\sigma_{x} \sigma_{x}$ \\ \br \end{tabular} \end{indented} \end{table} \begin{figure} \caption{Phase-space picture corresponding to the construction in table~1.} \label{fig1} \end{figure} In figure~1 we plot the phase-space representation of the sets of operators in table~1. Each set has been arbitrarily assigned to the number appearing in the left column of the table. The sets of operators 1 and 5 define the horizontal and the vertical axes, respectively, and they lead, together with the operators associated to line 2, to three separable bases (i.e., the three operators in each of the first three rows commute for each of the two subsystems, separately). In physical space, all these operators can be associated with rotations of each qubit around the $z$-, $x$- and $y$-axis, respectively. Eigenstates of the operators associated with the lines 3 and 4 form entangled bases (in fact, their simultaneous eigenstates are all maximally entangled states). The origin is labeled as $o$ and is the common intersecting point of all the rays. It is clear that under local transformations the factorable and entangled MUBs preserve their separability properties. Two natural questions thus arise in this respect: Is the arrangement in table~1 and the corresponding geometrical association with rays in phase-space unique? If this is not the case, why do different arrangements always lead to the same separability structure of MUBs? \section{Curves in phase space} To answer these questions we shall approach the problem from a different perspective, namely, by determining all the possible geometrical structures in phase space that correspond to MUBs. 
First of all, let us observe that any ray can be defined in the parametric form \begin{equation} \label{ray1} \alpha (\kappa ) = \eta \kappa \, , \qquad \beta (\kappa )= \zeta \kappa \, , \end{equation} where $\eta, \zeta \in GF(4)$ are fixed while $\kappa \in GF(4)$ is a parameter that runs through all the field elements. The rays (\ref{ray1}) can be seen as the simplest nonsingular (i.e., no self-intersecting) Abelian substructures in phase space, in the sense that \begin{equation} \label{ac} \alpha ( \kappa + \kappa^\prime ) = \alpha (\kappa) + \alpha ( \kappa^\prime ) \, , \qquad \beta ( \kappa + \kappa^\prime ) = \beta (\kappa) + \beta (\kappa^\prime) \, . \end{equation} However, the rays are not the only Abelian structures: it is easy to see that the parametric curves (that obviously pass through the origin) \begin{equation} \label{curve1} \alpha ( \kappa ) = \mu_{0} \kappa + \mu_{1} \kappa^{2} \, , \qquad \beta ( \kappa ) = \eta_{0} \kappa + \eta_{1} \kappa^{2} \, , \end{equation} also satisfy the condition (\ref{ac}). If, in addition, we impose \begin{equation} \label{Tr_cond} \tr ( \alpha \beta^\prime ) = \tr ( \alpha^\prime \beta ) \, , \end{equation} where $\alpha^\prime = \alpha (\kappa^\prime )$ and $\beta^\prime = \beta (\kappa^\prime)$, then the displacement operators associated to the curves (\ref{curve1}) commute with each other and the coefficients $\mu_{j}$ and $\eta_{j}$ must satisfy the following restrictions (commutativity conditions) \begin{equation} \label{Eq:condition} \mu_{1} \eta_{0} + (\mu_{1} \eta_{0})^{2} = \mu_{0} \eta_{1} + (\mu_{0} \eta_{1})^{2} \, . \end{equation} All the possible Abelian curves satisfying condition (\ref{Eq:condition}) can be divided into two types: a) regular curves \begin{eqnarray} \label{alphabeta} \alpha\mathrm{-curves} & : & \quad \alpha = \sigma \kappa \, , \quad \beta = \eta \kappa + \sigma^{2} \kappa^{2} \, , \nonumber \\ & & \\ \beta\mathrm{-curves} & : & \quad \beta = \sigma \kappa \, , \quad \alpha = \eta \kappa + \sigma^{2} \kappa^{2} \, . \nonumber \end{eqnarray} b) exceptional curves \begin{equation} \label{exep} \alpha = \mu (\kappa +\kappa^{2}) \, , \quad \beta =\mu^{2} (\sigma \kappa + \sigma^{2} \kappa^{2}) \, . \end{equation} The regular curves are nondegenerate, in the sense that $\alpha $ or $\beta$ (or both) are not repeated in any set of four points defining a curve. In other words, $\alpha $ or $\beta $ (or both) take all the values in the field $GF(4)$. This allows us to write down explicit relations between $\alpha $ and $\beta $ as follows \begin{eqnarray} \label{alpha1} \alpha\mathrm{-curves} & : & \quad \beta = \eta \sigma^{2} \alpha + \alpha^{2} \, , \nonumber \\ & & \\ \beta\mathrm{-curves} & : & \quad \alpha = \eta \sigma^{2} \beta + \beta^{2} \, . \nonumber \label{beta1} \end{eqnarray} By varying the parameter $\eta$ in the first of equations (\ref{alpha1}) we can construct the $\alpha$-curves in table~2, which show a different arrangement of operators than (\ref{set1a}). Figure~2 shows the corresponding points of $\alpha$-curves in phase space. Note, that we have completed table~2 and figure~2 with the vertical ($X_{\sigma}, X_{\sigma^2}, X_{\sigma^3}$) axis. The factorization of operators in each table (the self-dual basis is used for the representation of operators in terms of Pauli matrices) is different from the standard one in table~1. The curves marked as 3, 4 and 5 lead now to factorable MUBs, while the ones marked as 1 and 2 lead to maximally entangled bases. 
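A brute-force check (ours, not contained in the paper) supports this classification: enumerating all parameter choices in (\ref{curve1}) that satisfy the commutativity condition (\ref{Eq:condition}), and keeping only the nonsingular ones, yields exactly the Abelian structures counted later in the text, namely five rays and ten genuine curves (the $\beta$-curves and exceptional curves discussed below are included in the count). The Python sketch reuses the integer encoding of $GF(4)$ from the previous sketches.
\begin{verbatim}
from itertools import product

# GF(4) encoded as before: 0, 1, 2, 3 stand for 0, 1, s, s^2.
MUL = [[0, 0, 0, 0], [0, 1, 2, 3], [0, 2, 3, 1], [0, 3, 1, 2]]
add = lambda a, b: a ^ b
tr  = lambda a: a ^ MUL[a][a]            # field trace, valued in {0, 1}

def points(mu0, mu1, eta0, eta1):
    """The phase-space points (alpha(k), beta(k)) of the parametric curve."""
    pts = set()
    for k in range(4):
        k2 = MUL[k][k]
        pts.add((add(MUL[mu0][k], MUL[mu1][k2]),
                 add(MUL[eta0][k], MUL[eta1][k2])))
    return frozenset(pts)

structures = set()
for mu0, mu1, eta0, eta1 in product(range(4), repeat=4):
    if tr(MUL[mu1][eta0]) != tr(MUL[mu0][eta1]):   # commutativity condition
        continue
    pts = points(mu0, mu1, eta0, eta1)
    if len(pts) == 4:                              # nonsingular: four distinct points
        structures.add(pts)

# A structure is a ray precisely when it is closed under multiplication by s.
rays = [c for c in structures if all((MUL[a][2], MUL[b][2]) in c for a, b in c)]
print(len(structures), len(rays))                  # 15 structures, 5 of them rays
\end{verbatim}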
The $\beta$-curves and the corresponding table can be obtained from table~2 by exchanging $\alpha $ and $\beta $ (and correspondingly $Z$ and $X$ operators) and is given in table~3. The phase-space picture corresponding to table~3 is shown in figure~3 and can be easily obtained from figure~2 by mirroring this figure about the main diagonal. Observe that the curves $\beta =\alpha^{2}$ and $\alpha =\beta^{2}$ then become identical, since this curve is symmetric about the diagonal. \begin{table} \caption{$\alpha$-curves and their corresponding operators.} \begin{indented} \item[] \begin{tabular}{@{}llllll} \br Basis & $\alpha$-curves & Displacement operators & \multicolumn{3}{c}{Factorized operators} \\ \mr 1 & $\beta = \alpha^{2}$ & $Z_{\sigma^{2}} X_{\sigma}, Z_{\sigma^3} X_{\sigma^3}, Z_{\sigma} X_{\sigma^2}$ & $\sigma_{x} \sigma_{z}$ & $\sigma_{y}\sigma_{y}$ & $\sigma_{z}\sigma_{x}$ \\ 2 & $\beta = \alpha + \alpha^{2}$ & $Z_{\sigma^2} X_{\sigma^3}, Z_{\sigma^3}, Z_{\sigma} X_{\sigma^3}$ & $\sigma_{x} \sigma_{y}$ & $\sigma_{z} \sigma_{z}$ & $\sigma_{y}\sigma _{x}$ \\ 3 & $\beta =\sigma \alpha + \alpha^{2}$ & $Z_{\sigma^2} X_{\sigma^2}, Z_{\sigma^3} X_{\sigma^2}, Z_{\sigma}$ & $\leavevmode\hbox{\small1\normalsize\kern-.33em1} \sigma_{y}$ & $\sigma_{z} \sigma_{y}$ & $\sigma_{z}\leavevmode\hbox{\small1\normalsize\kern-.33em1}$ \\ 4 & $\beta = \sigma^{2} \alpha + \alpha^{2}$ & $Z_{\sigma^2}, Z_{\sigma^3} X_{\sigma}, Z_{\sigma} X_{\sigma}$ & $\leavevmode\hbox{\small1\normalsize\kern-.33em1} \sigma_{z}$ & $\sigma_{y} \sigma_{z}$ & $\sigma_{y}\leavevmode\hbox{\small1\normalsize\kern-.33em1}$ \\ 5 & $\alpha =0$ & $X_{\sigma}, X_{\sigma^2}, X_{\sigma^3}$ & $\sigma_{x} \leavevmode\hbox{\small1\normalsize\kern-.33em1}$ & $\leavevmode\hbox{\small1\normalsize\kern-.33em1} \sigma_{x}$ & $\sigma_{x} \sigma_{x}$\\ \br \end{tabular} \end{indented} \end{table} \begin{figure} \caption{Phase-space picture corresponding to the construction in table 2.} \label{fig2} \end{figure} \begin{table}[b] \caption{Phase-space $\beta$-curves and their corresponding operators.} \begin{indented} \item[] \begin{tabular}{@{}llllll} \br Basis & $beta$-curves & Displacement operators & \multicolumn{3}{c}{Factorized operators} \\ \mr 1 & $\alpha = \beta^{2}$ & $X_{\sigma^2} Z_{\sigma}, X_{\sigma^3} Z_{\sigma^3}, X_{\sigma} Z_{\sigma^2}$ & $\sigma_{z} \sigma_{x}$ & $\sigma_{y} \sigma_{y}$ & $\sigma_{x} \sigma_{z}$ \\ 2 & $\alpha =\beta + \beta^{2}$ & $ X_{\sigma^2} Z_{\sigma^3}, X_{\sigma^3}, X_{\sigma} Z_{\sigma^3}$ & $\sigma_{z} \sigma_{y}$ & $\sigma_{x} \sigma_{x}$ & $\sigma_{y} \sigma_{z}$ \\ 3 & $\alpha = \sigma \beta + \beta^{2}$ & $ X_{\sigma^2} Z_{\sigma^2}, X_{\sigma^3} Z_{\sigma^2}, X_{\sigma}$ & $\leavevmode\hbox{\small1\normalsize\kern-.33em1} \sigma_{y}$ & $\sigma_{x} \sigma_{y}$ & $\sigma_{x} \leavevmode\hbox{\small1\normalsize\kern-.33em1}$ \\ 4 & $\alpha = \sigma^{2} \beta + \beta^{2}$ & $X_{\sigma^2}, X_{\sigma^3} Z_{\sigma}, X_{\sigma} Z_{\sigma}$ & $\leavevmode\hbox{\small1\normalsize\kern-.33em1} \sigma_{x}$ & $\sigma_{y} \sigma_{x}$ & $\sigma_{y} \leavevmode\hbox{\small1\normalsize\kern-.33em1}$ \\ 5 & $\beta =0$ & $Z_{\sigma}, Z_{\sigma^2}, Z_{\sigma^3}$ & $\sigma_{z} \leavevmode\hbox{\small1\normalsize\kern-.33em1}$ & $\leavevmode\hbox{\small1\normalsize\kern-.33em1} \sigma_{z}$ & $\sigma_{z} \sigma_{z}$ \\ \br \end{tabular} \end{indented} \end{table} \begin{figure} \caption{Phase-space picture corresponding to the construction in table 3.} \label{fig3} \end{figure} It is worth noting that all the 
$\alpha$-curves, except $\beta =\alpha^{2}$, are $\beta$-degenerate: the same value of $\beta $ corresponds to different values of $\alpha $. Obviously, the analogous $\alpha$-degeneration appears in the $\beta$-curves. Exceptional curves (\ref{exep}) have quite a different structure. Now, every point is doubly degenerate and can be obtained from equations that relate powers of $\alpha $ and $\beta$: \begin{equation} \alpha^{2}=\mu \alpha \, , \qquad \beta^{2}=\mu^{2} \beta \, . \end{equation} It is impossible to write an explicit nontrivial equation of the form $f(\alpha ,\beta )=0$ for them. The existence of these curves allows us to obtain interesting arrangements of MUB operators in tables that do not contain any axis ($z$, $x$ or $y$). There are two of such structures, shown in tables~4 and 5. As can be seen from the rightmost column in both tables, the physical difference between the two structures is that the two qubits are permuted between them. The lines marked 2, 3 and 4 in both tables lead to factorable MUBs, while the lines marked as 1 and 5 give maximally entangled ones. \begin{table} \caption{Bundle consisting of two exceptional curves, one $\alpha $-curve, one $\beta $-curve, and a ray.} \begin{indented} \item[] \begin{tabular}{@{}llllll} \br Basis & Curves and rays & Displacement operators & \multicolumn{3}{c}{Factorized operators} \\ \mr 1 & $ \! \! \! \begin{array}{l} \alpha = \kappa +\kappa^{2} \\ \beta =\sigma \kappa + \sigma^{2} \kappa^{2} \end{array} $ & $X_{\sigma^3}, Z_{\sigma^3}, Z_{\sigma^3} X_{\sigma^3}$ & $\sigma_{x} \sigma_{x}$ & $\sigma_{z} \sigma_{z}$ & $\sigma_{y} \sigma _{y}$ \\ 2 & $ \! \! \! \begin{array}{l} \alpha = \sigma^2 (\kappa +\kappa^{2}) \\ \beta = \sigma^2 \kappa + \kappa^{2} \end{array} $ & $X_{\sigma}, Z_{\sigma^2}, Z_{\sigma^2} X_{\sigma}$ & $\sigma_{x} \leavevmode\hbox{\small1\normalsize\kern-.33em1}$ & $\leavevmode\hbox{\small1\normalsize\kern-.33em1} \sigma_{z}$ & $\sigma_{x} \sigma_{z}$ \\ 3 & $\beta = \sigma \alpha + \alpha^{2}$ & $ Z_{\sigma^2} X_{\sigma^2}, Z_{\sigma^3} X_{\sigma^2}, Z_{\sigma}$ & $\leavevmode\hbox{\small1\normalsize\kern-.33em1} \sigma_{y}$ & $\sigma_{z} \sigma_{y}$ & $\sigma_{z} \leavevmode\hbox{\small1\normalsize\kern-.33em1}$ \\ 4 & $\alpha = \sigma^{2} \beta + \beta^{2}$ & $X_{\sigma^{2}}, Z_{\sigma} X_{\sigma^3}, Z_{\sigma} X_{\sigma}$ & $\leavevmode\hbox{\small1\normalsize\kern-.33em1} \sigma_{x}$ & $\sigma_{y} \sigma_{x}$ & $\sigma_{y} \leavevmode\hbox{\small1\normalsize\kern-.33em1}$ \\ 5 & $\beta = \sigma \alpha$ & $Z_{\sigma} X_{\sigma^2}, Z_{\sigma^2} X_{\sigma^3}, Z_{\sigma^3} X_{\sigma}$ & $\sigma_{z} \sigma_{x}$ & $\sigma_{x}\sigma_{y}$ & $\sigma_{y}\sigma _{z}$ \\ \br \end{tabular} \end{indented} \end{table} \begin{figure} \caption{Phase-space picture corresponding to the construction in table 4.} \label{fig4} \end{figure} \begin{table}[b] \caption{Bundle consisting of two exceptional curves, one $\alpha $-curve, one $\beta $-curve, and a ray.} \begin{indented} \item[] \begin{tabular}{@{}llllll} \br Basis & Curves and rays & Displacement operators & \multicolumn{3}{c}{Factorized operators} \\ \mr 1 & $ \! \! \! \begin{array}{l} \alpha =\kappa +\kappa^{2} \\ \beta =\sigma \kappa +\sigma^{2}\kappa^{2} \end{array}$ & $X_{\sigma^3}, Z_{\sigma^3}, Z_{\sigma^3} X_{\sigma^3}$ & $\sigma_{x} \sigma_{x}$ & $\sigma_{z} \sigma_{z}$ & $\sigma_{y} \sigma_{y}$ \\ 2 & $\! \! \! 
\begin{array}{l} \alpha = \sigma ( \kappa + \kappa^{2}) \\ \beta = \kappa + \sigma \kappa^{2} \end{array}$ & $X_{\sigma^2}, Z_{\sigma}, Z_{\sigma} X_{\sigma^2}$ & $\leavevmode\hbox{\small1\normalsize\kern-.33em1} \sigma_{x}$ & $\sigma_{z} \leavevmode\hbox{\small1\normalsize\kern-.33em1}$ & $ \sigma_{z} \sigma_{x}$ \\ 3 & $\beta = \sigma^{2} \alpha + \alpha^{2}$ & $Z_{\sigma^2},Z_{\sigma^3} X_{\sigma}, Z_{\sigma} X_{\sigma}$ & $\leavevmode\hbox{\small1\normalsize\kern-.33em1} \sigma_{z}$ & $\sigma_{y} \sigma_{z}$ & $\sigma_{y} \leavevmode\hbox{\small1\normalsize\kern-.33em1} $ \\ 4 & $\alpha = \sigma \beta + \beta^{2}$ & $Z_{\sigma^2} X_{\sigma^2}, Z_{\sigma^2} X_{\sigma^3}, X_{\sigma}$ & $\leavevmode\hbox{\small1\normalsize\kern-.33em1} \sigma_{y}$ & $\sigma_{x} \sigma_{y}$ & $\sigma_{x} \leavevmode\hbox{\small1\normalsize\kern-.33em1}$ \\ 5 & $\beta = \sigma^{2} \alpha$ & $Z_{\sigma} X_{\sigma^3}, Z_{\sigma^2} X_{\sigma}, Z_{\sigma^3} X_{\sigma^2}$ & $\sigma_{y} \sigma_{x}$ & $\sigma_{x} \sigma_{z}$ & $\sigma_{z} \sigma _{y}$ \\ \br \end{tabular} \end{indented} \end{table} \begin{figure} \caption{Phase-space picture corresponding to the construction in table 5.} \label{fig5} \end{figure} Finally, there is a last table containing two exceptional curves and a ray corresponding to the spin operators in the $y$-direction, as it is shown in table~6. To sum up, there exist fifteen different Abelian structures, five rays and ten curves, which can be organized in six different forms with the respect to MUBs. The existence of only six bundles of mutually nonintersecting Abelian nonsingular curves (i.e., different tables) also follows from the fact that the coset of the full symplectic group, which preserves the commutation relations (\ref{com}), on operations corresponding to nontrivial permutations of columns and rows of (any) table [generated by the symplectic group $Sp(2, GF(4))$], is precisely of order 6. \begin{table} \caption{Bundle consisting of two exceptional curves, one $\alpha $-curve, one $\beta $-curve, and a ray.} \begin{indented} \item[] \begin{tabular}{@{}llllll} \br Basis & Curves and rays & Displacement operators & \multicolumn{3}{c}{Factorized operators} \\ \mr 1 & $ \! \! \! \begin{array}{l} \alpha = \sigma^{2} (\kappa + \kappa^{2}) \\ \beta =\sigma^{2}\kappa +\kappa^{2} \end{array} $ & $X_{\sigma^{2}}, Z_{\sigma}, Z_{\sigma} X_{\sigma^{2}}$ & $\leavevmode\hbox{\small1\normalsize\kern-.33em1} \sigma _{x}$ & $\sigma_{z} \leavevmode\hbox{\small1\normalsize\kern-.33em1}$ & $\sigma_{z} \sigma_{x}$ \\ 2 & $\! \! \! 
\begin{array}{l} \alpha = \sigma ( \kappa +\kappa^{2}) \\ \beta =\sigma \kappa +\sigma \kappa^{2} \end{array} $ & $X_{\sigma}, Z_{\sigma^2}, Z_{\sigma^2} X_{\sigma}$ & $\sigma_{x} \leavevmode\hbox{\small1\normalsize\kern-.33em1}$ & $\leavevmode\hbox{\small1\normalsize\kern-.33em1} \sigma_{z}$ & $\sigma_{x} \sigma_{z}$ \\ 3 & $\beta = \alpha + \alpha^{2}$ & $Z_{\sigma^2} X_{\sigma^3}, Z_{\sigma^3}, Z_{\sigma} X_{\sigma^3}$ & $\sigma_{y} \sigma_{x}$ & $\sigma_{z} \sigma_{z}$ & $\sigma_{x} \sigma_{y}$ \\ 4 & $\alpha =\beta + \beta^{2}$ & $Z_{\sigma^3} X_{\sigma^2}, X_{\sigma^3}, Z_{\sigma^3} X_{\sigma}$ & $\sigma_{z} \sigma_{y}$ & $\sigma_{x} \sigma_{x}$ & $\sigma_{y} \sigma_{z}$ \\ 5 & $\beta = \alpha $ & $Z_{\sigma} X_{\sigma}, Z_{\sigma^2} X_{\sigma^2}, Z_{\sigma^3} X_{\sigma^3}$ & $\sigma_{y} \leavevmode\hbox{\small1\normalsize\kern-.33em1}$ & $\leavevmode\hbox{\small1\normalsize\kern-.33em1} \sigma_{y}$ & $\sigma_{y} \sigma_{y}$ \\ \br \end{tabular} \end{indented} \end{table} \section{The effect of local transformations} As we have noticed, different arrangements of operators in tables (or bundling of phase-space curves) lead to the same separability structure. To understand this point, let us study the effect of local transformations. In other words, we wish to characterize how a given curve changes when a local transformation is applied to a set of operators labeled by points of this curve. To deal with such operations with curves, let us recall that a generic displacement operator is factorized in the self-dual basis as \begin{equation} Z_{\alpha} X_{\beta} = (\sigma_{z}^{a_1} \sigma_{x}^{b_1}) (\sigma_{z}^{a_2} \sigma_{x}^{b_2}) \equiv ( a_{1},b_{1} ) \otimes ( a_{2},b_{2} ) \, . \end{equation} It is clear that under local transformation (rotations by $\pi/2$ radians around the $z$-, $x$- or $y$-axes) applied to the $j$th particle ($j=1,2$), the indices of the displacement operators are transformed as follows: \begin{eqnarray} z\mathrm{-rotation} & : & \quad ( a_{j}, b_{j} ) \rightarrow (a_{j} + b_{j}, b_{j}) \, , \nonumber \\ x\mathrm{-rotation} & :& \quad ( a_{j}, b_{j} ) \rightarrow (a_{j}, b_{j} + a_{j}) \, , \\ y\mathrm{-rotation} & :& \quad ( a_{j},b_{j} ) \rightarrow (a_{j} + a_{j} + b_{j}, b_{j} + a_{j}+ b_{j}) = ( b_{j},a_{j} ) \, . \nonumber \end{eqnarray} To give a concrete example, suppose we consider a $z$-axis rotation. The operator $\sigma_{z}$, corresponding to $(a_j=1, b_j=0)$, is transformed into $(a_j = 1 + 0 = 1, b_j=0)$; i.e., into itself, while, e.g., the operator $\sigma_{x}$, corresponding to $(a_j=0, b_j=1)$, is mapped onto $(a_j = 0 + 1 = 1, b_j=1)$, which coincides with $\sigma_{y}$. In the same way $\sigma_{y}$ is mapped onto $\sigma_{x}$, while the identity ($a_j=0$, $b_j=0$) is mapped onto itself. In terms of field elements these transformations read \begin{eqnarray} z\mathrm{-rotation} & : & \quad \begin{array}{l} \alpha \rightarrow \alpha + \theta_{j} \tr( \beta \theta _{j}) , \\ \beta \rightarrow \beta , \end{array} \nonumber \\ x\mathrm{-rotation} & : & \quad \begin{array}{l} \alpha \rightarrow \alpha , \\ \beta \rightarrow \beta + \theta_{j} \tr( \alpha \theta_{j}) , \end{array} \\ y\mathrm{-rotation} & : & \quad \begin{array}{l} \alpha \rightarrow \alpha + \theta_{j} \tr [ ( \alpha + \beta ) \theta_{j} ] , \\ \beta \rightarrow \beta + \theta_{j} \tr [ (\alpha + \beta ) \theta _{j} ] . 
\end{array} \nonumber \end{eqnarray} In particular, applying the above transformations to a ray (\ref{ray1}) we get \begin{eqnarray} \label{x} z\mathrm{-rotation} & : & \quad \begin{array}{l} \alpha \rightarrow (\eta + \zeta \theta_{j}) \kappa + \kappa^{2} \zeta^{2}, \\ \beta \rightarrow \beta = \zeta \kappa , \end{array} \nonumber \\ x\mathrm{-rotation} & : & \quad \begin{array}{l} \alpha \rightarrow \alpha = \eta \kappa , \\ \beta \rightarrow ( \zeta + \eta \theta_{j}) \kappa + \kappa^{2} \eta^{2} , \end{array} \\ y\mathrm{-rotation} & : & \quad \begin{array}{l} \alpha \rightarrow ( \eta + \zeta \theta_{j} + \eta \theta_{j} ) \kappa + \kappa^{2} (\zeta + \eta )^{2}, \\ \beta \rightarrow ( \zeta + \zeta \theta_{j} + \eta \theta_{j}) \kappa + \kappa^{2}(\zeta + \eta )^{2}, \end{array} \nonumber \end{eqnarray} which are explicitly nonlinear operations. Note that the $z$- and $x$-transformations produce regular curves starting from a ray \begin{eqnarray} z\mathrm{-rotation} & : & \quad \alpha = \eta \zeta^{-1} \beta \rightarrow ( \eta \zeta^{-1} + \theta_{j} ) \beta + \beta^{2} , \nonumber \\ & & \\ x\mathrm{-rotation} & : & \quad \beta = \eta^{-1} \zeta \alpha \rightarrow ( \eta^{-1} \zeta + \theta_{j} ) \alpha + \alpha^{2}. \nonumber \end{eqnarray} Meanwhile, the $y$-rotation may lead to an exceptional curve (as it happens when we start with the horizontal or the vertical axes, $\zeta =0$ or $\eta =0$). An important result to stress is that it is possible to obtain all the curves of the form (\ref{alphabeta}) and (\ref{exep}) from the rays after some (nonlinear) operations (\ref{x}), corresponding to local transformations. The families of such transformations are the following: I. The rays and curves corresponding to factorable basis can be obtained from a single ray $\alpha =0, \beta =\sigma^{2} \kappa $ (vertical axis) as shown in table~7 (left). II. The rays and curves corresponding to nonfactorable basis can be obtained from the ray $\alpha =\sigma \kappa ,\beta =\sigma^{2}\kappa $ ($\beta =\sigma \alpha )$ as shown in table~7 (right). \begin{table} \caption{Curves and their corresponding transformations from $\alpha =0, \beta = \sigma^{2} \kappa $ (left) and $ \beta = \sigma \alpha$ (right). The $x$-, $y$ and $z$-rotations are indicated as $x$, $y$ and $z$, respectively.} \begin{indented} \item[] \begin{tabular}{@{}ll||ll} \br Curve (ray) & Transformation & Curve (ray) & Transformation \\ \mr $ \! \! \! \begin{array}{l} \alpha = \sigma^{2} ( \kappa + \kappa^{2}) \\ \beta =\sigma^{2}\kappa + \kappa^{2} \end{array} $ & $\leavevmode\hbox{\small1\normalsize\kern-.33em1} \otimes y$ & $ \! \! \! \begin{array}{l} \alpha = \kappa + \kappa^{2} \\ \beta =\sigma \kappa + \sigma^{2} \kappa^{2} \end{array}$ & $z\otimes y$ \\ $ \! \! \! 
\begin{array}{l} \alpha =\sigma ( \kappa +\kappa^{2} ) \\ \beta =\sigma \kappa +\sigma \kappa^{2} \end{array} $ & $y \otimes \leavevmode\hbox{\small1\normalsize\kern-.33em1}$ & $\beta =\sigma^{2}\alpha$ & $x\otimes x$ \\ $\beta =0$ & $y \otimes y$ & $\beta =\alpha^{2}$ & $ \leavevmode\hbox{\small1\normalsize\kern-.33em1}\otimes x$ \\ $\beta =\alpha$ & $z \otimes z$ & $\beta =\alpha +\alpha^{2}$ & $\leavevmode\hbox{\small1\normalsize\kern-.33em1} \otimes y$ \\ $\beta =\sigma \alpha +\alpha^{2}$ & $y \otimes z$ & $\alpha =\beta +\beta^{2}$ & $y \otimes \leavevmode\hbox{\small1\normalsize\kern-.33em1}$ \\ $\beta =\sigma^{2}\alpha + \alpha^{2}$ & $z \otimes y$ & & \\ $\alpha =\sigma \beta + \beta^{2}$ & $\leavevmode\hbox{\small1\normalsize\kern-.33em1} \otimes z$ & & \\ $\alpha =\sigma^{2}\beta + \beta^{2}$ & $z \otimes \leavevmode\hbox{\small1\normalsize\kern-.33em1}$ & & \\ \br \end{tabular} \end{indented} \end{table} This means that all the different tables can be generated from the standard one, given in table~1, by applying only local transformations that do not change the factorization properties of the MUBs. So, tables 2 to 6 are obtained from table~1 from the transformations given in table~8. The full set of striations for each bundle of curves (each table) is obtained by constructing ``parallel curves'' in the bundle in an obvious way: \begin{equation} \alpha_{\lambda} ( \kappa ) = \mu_{0} \kappa + \mu_{1} \kappa^{2}, \qquad \beta_{\lambda} ( \kappa ) = \eta_{0} \kappa + \eta_{1} \kappa^{2} + \lambda , \end{equation} with $\lambda \in GF(4)$. It is clear that no ($\alpha_{\lambda} (\kappa ), \beta_{\lambda} ( \kappa)$) curve intersects the curve ($\alpha_{\lambda^\prime} (\kappa ), \beta_{\lambda^\prime} (\kappa )$) for $\lambda \neq \lambda^{\prime }$. \section{Extension to larger spaces} The relation between Abelian curves in discrete phase space and different systems of MUBs can be extended to higher (power of prime) dimensions. For the most interesting $n$-qubit case, a generic Abelian curve (\ref{ac}) has the following parametric from \begin{equation} \alpha ( \kappa ) = \sum_{m=0}^{n-1} \mu_{m} \kappa^{2^{m}} \, , \qquad \beta ( \kappa ) = \sum_{m=0}^{n-1} \eta_{m} \kappa^{2^{m}} \, , \end{equation} with $\mu_{m}, \eta _{m}, \kappa \in GF(2^{n})$, and the commutativity condition takes now the invariant form \begin{equation} \label{Tr_n} \sum_{m \neq k} \tr( \mu_{m} \mathbf{\eta}_{k} )= 0 \, . \end{equation} The simplest example of such curves are obviously the rays, parametrically defined as in Eq.~(\ref{ray1}), where the conditions (\ref{Tr_n}) are trivially satisfied. Imposing the nonintersecting condition we can, in principle, get all the possible bundles of commutative curves. Nevertheless, in higher dimensions it is impossible to obtain all the curves from the rays by local transformations. This leads to the existence of different nontrivial bundles of nonintersecting curves, and consequently to MUBs with different types of factorization~\cite{Lawrence04,PRA05}. The problem of classification of bundles of mutually nonintersecting, nonsingular Abelian curves and its relation to the problem of MUBs in higher dimensions, and in particular the transformation relations between different MUB structures, will be considered elsewhere. \begin{table} \caption{Transformation operators converting table 1 into each one of the tables indicated in the left column. 
Again, $x$-, $y$- and $z$-rotations are indicated as $x$, $y$ and $z$, respectively.} \begin{indented} \item[] \begin{tabular}{@{}ll} \br Table & Transformation \\ \mr 2 & $x\otimes \leavevmode\hbox{\small1\normalsize\kern-.33em1}$ \\ 3 & $\leavevmode\hbox{\small1\normalsize\kern-.33em1}\otimes z$ \\ 4 & $y\otimes z$ \\ 5 & $y\otimes x$ \\ 6 & $\leavevmode\hbox{\small1\normalsize\kern-.33em1}\otimes y$ \\ \br \end{tabular} \end{indented} \end{table} \section{Conclusions} A new MUB construction has been worked out, with special emphasis on the two-qubit case. Its essential ingredient is a mapping between displacement operators, physical spin-1/2 operators and discrete phase-space curves. In phase space, any nonsingular bundle of curves that fills every point and has only one common intersection point (here taken to be the origin) will map onto a MUB. The corresponding displacement operators can be obtained from these phase-space curves. For the two-qubit case, we have derived all the admissible curves and classified them into rays and curves (regular and exceptional, depending on degeneracy). In total, six different bundles can be constructed from the set of five rays and ten curves. We have also shown how the six tables representing sets of MUBs are related by local transformations, i.e., physical rotations around the $x$-, $y$- and $z$-axes. It is obvious that such rotations will not change the entanglement properties of the MUBs. A Wigner function can also be associated with each phase-space structure. Although we have not pursued this topic in the paper, it is straightforward to use any of the phase-space structures and follow the algorithm described in reference~\cite{Gibbons04} (although in that paper the construction applies only to rays) to obtain such a function~\cite{JOPB07}. It is also formally straightforward to extend the method to any Hilbert space whose dimension is a power of a prime. However, only in the bipartite case will one find that all structures are related through local transformations. Already in the tripartite case different classes of entanglement exist~\cite{PRA05}, and consequently some MUB structures are related through nonlocal (entangling) transformations. The extension of the present method provides a systematic way to find these transformations. \end{document}
\begin{document} \title{Maximal quadratic modules on $\ast$-rings}{} \author{J. Cimpri\v c} \keywords{rings with involution, ordered structures, noncommutative real algebraic geometry} \subjclass[2000]{Primary: 16W80, 13J30; Secondary: 14P10, 12D15} \date{April 6th 2005} \address{Cimpri\v c Jakob, University of Ljubljana, Faculty of Mathematics and Physics, Department of Mathematics, Jadranska 19, SI-1000 Ljubljana, Slovenia, [email protected],http://www.fmf.uni-lj.si/\~{}cimpric} \maketitle \begin{abstract} We generalize the notion of and results on maximal proper quadratic modules from commutative unital rings to $\ast$-rings and discuss the relation of this generalization to recent developments in noncommutative real algebraic geometry. The simplest example of a maximal proper quadratic module is the cone of all positive semidefinite complex matrices of a fixed dimension. We show that the support of a maximal proper quadratic module is the symmetric part of a prime $\ast$-ideal, that every maximal proper quadratic module in a Noetherian $\ast$-ring comes from a maximal proper quadratic module in a simple artinian ring with involution and that maximal proper quadratic modules satisfy an intersection theorem. As an application we obtain the following extension of Schm\" udgen's Strict Positivstellensatz for the Weyl algebra: Let $c$ be an element of the Weyl algebra $\mathcal{W}(d)$ which is not negative semidefinite in the Schr\" odinger representation. It is shown that under some conditions there exists an integer $k$ and elements $r_1,\ldots,r_k \in \mathcal{W}(d)$ such that $\sum_{j=1}^k r_j c r_j^\ast$ is a finite sum of hermitian squares. This result is not a proper generalization however because we don't have the bound $k \le d$. \end{abstract} \section{Introduction} \label{firstsec} The aim of this note is to generalize the notion of and results on quadratic modules from commutative unital rings to associative unital rings with involution, which we call $\ast$-rings. The study of quadratic modules in $\ast$-rings is suggested by the recent developments in noncommutative real algebraic geometry, see \cite{av}, \cite{h1}, \cite{pc}, \cite{sch1}, \cite{sch2}. Commutative real algebraic geometry is based on the notion of an ordering and quadratic modules are considered just a technical tool. However, an attempt by M. Marshall to build a noncommutative real algebraic geometry on $\ast$-orderings in \cite{mm} showed that there is not enough of them. The advantage of maximal proper quadratic modules over $\ast$-orderings is not only in their quantity but also in their connection to the representation theory of $\ast$-rings, see \cite{cstar}. The exposition can be divided into three parts. In the first part (Sections \ref{defsec} and \ref{suppsec}) we define a quadratic module in a $\ast$-ring and provide elementary examples. We also try to generalize the following result (cf. \cite[1.1.4 Theorem]{walter}): If $M$ is a maximal proper quadratic module in a commutative ring $R$ with trivial involution, then $M \cup -M = R$ and $M \cap -M$ is a prime ideal. It will be shown that the first property cannot be generalized but the second can. In the second part (Sections \ref{fracsec} and \ref{repsec}) we show that every maximal quadratic module in a Noetherian $\ast$-ring comes from a maximal quadratic module in a simple artinian ring with involution using a variant of Goldie's theory from \cite{domo}. In the commutative case, this is rather obvious. 
Namely, by factoring out the prime ideal $M \cap -M$ we get a maximal proper quadratic module in $R/M \cap -M$ and by passing to the field $F$ of fraction of $R/M \cap -M$ we get a maximal proper quadratic module in $F$, both times in a natural way. In the third part (Sections \ref{intsec} and \ref{appsec}) we generalize the following intersection theorem (see \cite[1.8 Satz]{jacobi}): An element $r$ of a commutative ring $R$ with trivial involution belongs to $R \setminus -M$ for every maximal proper quadratic module $M$ in $R$ if and only if there exist elements $m,t \in R$ which are sums of squares of elements from $R$ and satisfy $tr=1+m$. As an application, we obtain the following extension of Schm\" udgen's Strict Positivstellensatz for the Weyl algebra (see \cite[Theorem 1.1]{sch1}): \begin{citethm} Let $\mathcal{W}(d)$ be the $d$-th Weyl algebra with the natural involution and let $\pi_0$ be its Schr\" odinger representation. If $c$ is a symmetric element of $\mathcal{W}(d)$ of even degree $2m$ then the following are equivalent: \begin{enumerate} \item $\pi_0(c)$ is not negative semidefinite and the highest degree part $c_{2m}(z,\overline{z})$ of $c$ is strictly positive for all $z \in \mathbb{C}^d$, $z \ne 0$. \item There exist elements $r_1,\ldots,r_k, s_0,s_1, \ldots,s_l \in \mathcal{W}(d)$ such that $ \sum_{j=1}^k r_j c r_j^\ast = \sum_{i=0}^l s_i s_i^\ast $ and $\pi_0(s_0)$ is invertible and $\deg(s_0) \ge \deg(s_j)$ for every $j=1,\ldots,l$. \end{enumerate} \end{citethm} An early version of this manuscript was presented in Luminy and Saskatoon in March 2005. Meanwhile, our techniques have also been applied to the free $\ast$-algebra, see \cite{ks}. \section{Definitions and elementary examples} \label{defsec} Let $R$ be a $\ast$-ring and $\sym(R) := \{r \in R \vert \ r=r^\ast\}$ its set of symmetric elements. When no confusion is possible we write $S$ for $\sym(R)$. A subset $M$ of $R$ is a \textit{quadratic module} if $M \subseteq S$, $1 \in M$, $M+M \subseteq M$ and $rMr^\ast \subseteq M$ for every $r \in R$. (This is very similar to the definition of an \textit{m-admissible wedge} in \cite[page 22]{schbook}.) A quadratic module $M$ is \textit{proper} if $-1 \not\in M$. The smallest quadratic module in $R$ is $N(R) := \{\sum_i r_i r_i^\ast \ \colon \ r_i \in R\}$. The quadratic module $N(R)$ need not be proper. If $N(R)$ is proper, then $R$ is \textit{semireal}. A proper quadratic module is \textit{maximal} if it is not contained in any strictly larger proper quadratic module. \begin{rem} The following properties of $R$ are equivalent: \begin{enumerate} \item $R$ is semireal, \item $R$ has at least one proper quadratic module, \item $R$ has at least one maximal proper quadratic module. \end{enumerate} \end{rem} \begin{ex} Let $H$ be a complex Hilbert space and $B(H)$ the algebra of all bounded operators on $H$ with the standard involution. Then the set $P(H)$ of all positive definite operators from $H$ is clearly a proper quadratic module in $B(H)$. If $H$ is finite dimensional, then $P(H)$ is maximal. Namely, if $A$ is a symmetric operator which is not in $P(H)$ then we can find a basis of $H$ such that the matrix of $A$ is diagonal with first entry equal to $-1$. Let $E_{ij}$ be the operator whose matrix in this basis has $(i,j)$-th entry equal to $1$ and all other entries equal to zero. Note that $\sum_k E_{k1} A E_{k1}^\ast = -I$ where $I$ is the identity operator on $H$. Hence, every quadratic module that contains $A$ also contains $-I$. 
Therefore, there is no proper quadratic module stricly larger then $P(H)$. Note that in the finite dimensional case $P(H) = N(B(H))$, hence $P(H)$ is the only proper quadratic module in $B(H)$. If $H$ is infinite dimensional, then $P(H)$ is not maximal. Namely if $K_s$ is the set of all symmetric compact operators on $H$, then $M = P(H)+K_s$ is a proper quadratic module in $B(H)$ that is strictly larger than $P(H)$. If $-1 \in M$ then there exists $A \in P(H)$ such that $-I-A$ is compact. This is not possible because the eigenvalues of a compact operator tend to zero while the eigenvalues of $-I-A$ are bounded away from zero. \end{ex} \begin{rem} If $R$ is a commutative unital ring with trivial involution (i.e. $\sym(R)=R$) and $M$ is a maximal proper quadratic module in $R$, then $M \cup -M = \sym(R)$. This property fails in the noncommutative case. Namely, if $H$ is a finite dimensional Hilbert space of dimension at least two then $P(H)$ is a maximal proper quadratic module on $B(H)$ such that $P(H) \cup -P(H) \ne \sym(B(H))$. \end{rem} \section{The support} \label{suppsec} Let $M$ be a proper quadratic module in a $\ast$-ring $R$. The set \[ M^0 = M \cap -M \] is called \textit{the support} of $M$. We will frequently use the following properties of $M^0$: $M^0+M^0 \subseteq M^0$, $-M^0 \subseteq M^0$, $r M^0 r^\ast \subseteq M^0$ for every $r \in R$, if $x+y \in M^0$ and $x,y \in M$ then $x \in M^0$ and $y \in M^0$. Write \[ J_M = \{ a \in R \vert \ a u u^\ast a^\ast \in M^0 \text{ for all } u \in R\}. \] \begin{prop} If $M$ is a proper quadratic module in $R$, then $J_M$ is a two-sided ideal in $R$ containing the set $M^0$. \end{prop} \begin{proof} If $a,b \in J_M$ then for every $u \in R$, $a u u^\ast a^\ast \in M^0$ and $b u u^\ast b^\ast \in M^0$. Write $x = (a+b) u u^\ast (a+b)^\ast$ and $y = (a-b) u u ^\ast (a-b)^\ast$. Since $x+y =2(a u u^\ast a^\ast +b u u^\ast b^\ast) \in M^0$ and $x,y \in N(R) \subseteq M$, it follows that $x \in M^0$ and $y \in M^0$ for any $u \in R$. Hence $a+b \in J_M$ and $a-b \in J_M$. If $a \in J_M$ and $r \in R$ then for every $u \in R$, $(ra) u u^\ast (r a)^\ast = r (a u u^\ast a^\ast) r^\ast \in M^0$ and $(ar) u u^\ast (a r)^\ast = a (ru)(ru)^\ast a^\ast \in M^0$, so that $ra \in J_M$ and $ar \in J_M$. Hence, $J_M$ is a two-sided ideal. To prove that $M^0 \subseteq J_M$, we must show that $a u u^\ast a \in M^0$ for every $a \in M^0$ and $u \in R$. Pick $a \in M^0$ and $u \in R$ and write $z=uu^\ast$, $x = (1+az)a(1+az)^\ast$ and $y = (1-az)a(1-az)^\ast$. Note that $x,y \in M^0$ and $4aza = x-y \in M^0$. Since $aza \in M$, it follows that $aza \in M^0$. \end{proof} Recall that a two-sided ideal $J$ of a ring $R$ is \textit{prime} if for any $a,b \in R$ such that $aRb \subseteq J$ we have that either $a \in R$ or $b \in R$. A ring $R$ is \textit{prime} if $(0)$ is a prime ideal of $R$. \begin{thm} \label{prime} If $M$ is a maximal proper quadratic module in $R$, then the ideal $J_M$ is prime and $\ast$-invariant. Moreover, $J_M \cap S = M^0$. \end{thm} \begin{proof} We already know that $J_M$ is a two-sided ideal containing $M^0$. Hence, $M^0 \subseteq J_M \cap S$. If $J_M \cap S \not\subseteq M^0$, then there exists $s \in J_M \cap S$ such that $s \not\in M^0$. Replacing $s$ by $-s$ if necessary, we may assume that $-s \not\in M$. Since $M$ is maximal, it follows that the smallest quadratic module containing $M$ and $-s$ is not proper. 
Hence there exist an element $n \in M$, an integer $l$ and elements $t_1,\ldots,t_l \in R$ such that $1+n = \sum_{j=1}^l t_j s t_j^\ast$. Since $J_M$ is a two-sided ideal containing $s$, it follows that $1+n \in J_M$. It follows that $(1+n) v v^\ast (1+n) \in M^0$ for every $v \in R$. For $v=1$, we get $(1+n)^2 \in M^0$. Since $n,n^2 \in M$, it follows that $1 \in -M$, contradicting the assumption that $M$ is proper. Therefore, $J_M \cap S = M^0$. To prove that $J_M$ is prime, pick $a,b \in R$ such that $arb \in J_M$ for every $r \in R$. If $b \not\in J_M$, then there exists $v \in R$ such that $b v v^\ast b^\ast \not\in M^0$. Since $M$ is maximal and $-b v v^\ast b^\ast \not\in M$, it follows that the smallest quadratic module containing $M$ and $-b v v^\ast b^\ast$ is not proper. Hence, there exist an element $m \in M$, an integer $k$ and elements $r_1,\ldots,r_k \in R$ such that $1+m = \sum_{i=1}^k r_i (b v v^\ast b^\ast) r_i^\ast$. Pick $r \in R$ and write $x = a r r^\ast a^\ast$ and $y = a r m r^\ast a^\ast$. Since $a r r_i b \in J_M$ for every $i=1,\ldots,k$ by assumption and $J_M \cap S = M^0$ by the first paragraph of this proof, it follows that $x+y = a r (1+m) r^\ast a^\ast = \sum_{i=1}^k (a r r_i b) v v^\ast (a r r_i b)^\ast \in M^0$. Clearly, $x,y \in M$, so that $x,y \in M^0$. Therefore, $a r r^\ast a^\ast$ for every $r \in R$, implying that $a \in J_M$. To show that $J_M$ is $\ast$-invariant, pick $a \in J_M$. Since $J_M$ is an ideal, it follows that $a^\ast u u ^\ast a \in J_M$ for every $u \in R$. By the first paragraph of this proof, $J_M \cap S = M^0$, so that $a^\ast u u ^\ast a \in M^0$ for every $u \in R$. So, $a^\ast \in J_M$ by the definition of $J_M$. \end{proof} \begin{rem} If $R$ is a complex $\ast$-algebra and $M$ is a maximal proper quadratic module in $R$, then $J_M = M^0+i M^0$. Namely, pick $z \in J_M$ and write $z = x+iy$ where $x,y \in S$. Then $z^\ast =x-iy$ also belongs to $J_M$. Pick any $r \in R$ and write $s = rr^\ast$. Since $z s z^\ast \in M^0$ and $z^\ast s z \in M^0$, it follows that $2(xsx+ysy)=zsz^\ast+z^\ast sz \in M^0$. Since $xsx, ysy \in M$, it follows that $xsx, ysy \in M^0$. Hence $x,y \in J_M$. The relation $J_M \cap S = M^0$ implies that $x,y \in M^0$. The converse is clear. \end{rem} \section{Quadratic modules and rings of fractions} \label{fracsec} We assume that the reader is familiar with the definition of a reversible Ore set and Ore localization, see Section 1.3 of \cite{cohn}. The aim of this section is to discuss consequences of the following observation: \begin{prop} \label{ore} Let $M$ be a proper quadratic module in a $\ast$-ring $A$ and let $N$ be a $\ast$-invariant reversible Ore set on $A$ such that $N \cap M^0 = \emptyset$. Let $Q = AN^{-1}$ and $\tilde{M}=\{q \in \sym(Q) \vert \ n q n^\ast \in M$ for some $n \in N\}.$ Then $\tilde{M}$ is a proper quadratic module in $Q$. \end{prop} \begin{proof} Clearly, the involution extends uniquely from $A$ to $Q$, see also \cite{domo}. To prove that $\tilde{M}+\tilde{M} \subseteq \tilde{M}$ take $q_1,q_2 \in \tilde{M}$. Pick $n_1,n_2 \in N$ such that $n_1 q_1 n_1^\ast \in M$ and $n_2 q_1 n_2^\ast \in M$. By the Ore property of $N$, there exist $u \in A$ and $v \in N$ such that $u n_1 = v n_2$. It follows that $v n_2(q_1+q_2)(v n_2)^\ast = u(n_1 q_1 n_1^\ast)u^\ast +v(n_2 q_1 n_2^\ast)v^\ast \in M$. Since $v n_2 \in N$, we have that $q_1+q_2 \in \tilde{M}$. Suppose that $q \in \tilde{M}$ and $d = n^{-1} a \in Q$. Pick $z \in N$ such that $zqz^\ast \in M$. 
By the Ore property of $N$, there exist $b \in A$ and $w \in N$ such that $bz=wa$. Then, $(wn)(dqd^\ast)(wn)^\ast =waq(wa)^\ast =b(zqz^\ast)b^\ast \in M$ and $wn \in N$. Hence, $dqd^\ast \in \tilde{M}$. If $-1 \in \tilde{M}$, then there exists $n \in N$ such that $-nn^\ast \in M$. It follows that $nn^\ast \in M^0 \cap N$ contrary to the assumption that $M^0 \cap N = \emptyset$. \end{proof} Let $A$, $N$ and $Q$ be as in Proposition \ref{ore}. Then for every proper quadratic module $M'$ in $Q$ we have \[ \big(M' \cap \sym(A) \big)\ \tilde{ } = M'. \] On the other hand, for every quadratic module $M$ in $A$ such that $M^0 \cap N = \emptyset$, the set \[ \overline{M} := \tilde{M} \cap \sym(A) = \{a \in \sym(A) \vert \ n a n^\ast \in M \text{ for some } n \in N\} \] is also a quadratic module in $A$ such that $\overline{M}^0 \cap N = \emptyset$, which we call the $N$-\textit{closure} of $M$. We say that $M$ is $N$-\textit{closed} if $M^0 \cap N = \emptyset$ and $M = \overline{M}$. Note that $N$-closure is an idempotent operation. \begin{thm} \label{bij} Let $A,N,Q$ be as above. The mappings $M \mapsto \tilde{M}$ and $M' \mapsto M' \cap \sym(A)$ give a bijective correspondence between proper $N$-closed quadratic modules of $A$ and proper quadratic modules in $Q$. \end{thm} \section{A representation theorem} \label{repsec} Suppose that $R$ is a prime Noetherian $\ast$-ring and $N$ the set of all elements from $R$ that are not zero divisors. A variant of Goldie's Theorem from \cite{domo} says that $N$ is a $\ast$-invariant reversible Ore set, the involution of $A$ extends uniquely to the Ore's localization $Q = RN^{-1}$ and there exists a skew--field $D$ such that $Q$ is either isomorphic to $M_n(D)$ or $\ast$-isomorphic to $M_n(D) \oplus M_n(D)^{op}$ with exchange involution $(a,b)^\ast=(b,a)$. If $R$ is \textit{real}, i.e. it has a support zero quadratic module, then by Proposition \ref{ore} this quadratic module extends to a proper quadratic module of $Q$. Note that $M_n(D) \oplus M_n(D)^{op}$ with involution $(a,b)^\ast=(b,a)$ cannot have a proper quadratic module because of the identity $(-1,-1) = (1,-1)(1,-1)^\ast$. Hence: \begin{prop} The Goldie ring of fractions of a real prime Noetherian $\ast$-ring is isomorphic to $M_n(D)$ for an integer $n$ and a skew-field $D$. \end{prop} The main result of this section is the following representation-theoretic characterization of maximal proper quadratic modules in Noetherian $\ast$-rings: \begin{thm} \label{rep} Let $A$ be a Noetherian $\ast$-ring and $M$ a proper quadratic module in $A$. The following are equivalent: \begin{enumerate} \item $M$ is maximal, \item there exists a simple artinian ring with involution $Q \cong M_n(D)$, a maximal proper quadratic module $M'$ in $Q$ and a $\ast$-ring homomorphism $\pi \colon A \to Q$ such that \[ M = \pi^{-1}(M') \cap \sym(A). \] \end{enumerate} \end{thm} \begin{proof} $(1) \Rightarrow (2) \colon$ If $M$ is maximal then by Theorem \ref{prime} $J_M$ is a prime $\ast$-ideal such that $J_M \cap \sym(A) = M^0$. Let $j \colon A \to A/J_M$ be the canonical projection. Then $j(M)$ is a maximal support zero quadratic module on $A/J_M$ and $M = j^{-1}(j(M)) \cap \sym(A)$. Let $N$ be the set of all non-zero divisors in $A/J_M$ and $Q = (A/J_M) N^{-1}$. By Proposition \ref{ore} and Theorem \ref{bij}, $M' = j(M)\ \tilde{ }$ a maximal proper quadratic module in $Q$, $j(M) = M' \cap j(A)$ and $Q$ is a simple artinian ring with involution. Let $i \colon A/J_M \to Q$ be the canonical imbedding. 
Then $i^{-1}(M') = M' \cap j(A)= j(M)$. Write $\pi = i \circ j$ and note that $\pi \colon A \to Q$ is a $\ast$-ring homomorphism such that $M = \pi^{-1}(M') \cap \sym(A)$. $(2) \Rightarrow (1) \colon$ Follows from Theorem \ref{bij}. \end{proof} Theorem \ref{rep} reduces the study of maximal proper quadratic modules in Noetherian $\ast$-rings into two parts: the study of their supports and the study of maximal proper quadratic modules in simple artinian rings with involution. In the special case of PI $\ast$-rings, the simple artinian rings in part two are finite dimensional (i.e. central simple algebras). Quadratic modules on central simple algebras with involution will be studied in a separate paper. \section{Intersection theorem} \label{intsec} Intersection theorems for quadratic modules in commutative rings with trivial involution have been considered in \cite{jacobi}. Theorem \ref{nicht} generalizes \cite[1.8 Satz]{jacobi}. \begin{thm} \label{nicht} Let $R$ be a semireal ring and $S$ its set of symmetric elements. For every $x \in S$ and every proper quadratic module $M_0$ the following are equivalent: \begin{enumerate} \item $x \in S \setminus -M$ for every maximal proper quadratic module $M$ containing $M_0$, \item $-1 \in M_0 - N[x]$, where $M_0-N[x]$ is the smallest quadratic module containing $M_0$ and $-x$. \end{enumerate} \end{thm} \begin{proof} If (2) is false, then $-1 \not\in M_0-N[x]$, hence $M_0-N[x]$ is a proper quadratic module. By Zorn's Lemma, there exists a maximal proper quadratic module $M$ containing $M_0-N[x]$. The fact that $x \in -M$ implies that (1) is false. Conversely, if (1) is false, then $x \in -M$ for some maximal proper quadratic module $M$. It follows that $M_0-N[x] \subseteq M$, so that $-1 \not\in M_0-N[x]$. Hence, (2) is false. \end{proof} Intersection theorems are popularly called \text\it{Stellens\" atze}. M. Schweighofer proposed the name \textit{abstract Nirgendsnegativsemidefinitheitsstellensatz} for our Theorem \ref{nicht}. In the archimedean case we can reformulate Theorem \ref{nicht} by using the representation theorem from \cite{cstar}. Recall that a proper quadratic module $M$ is \textit{archimedean} if for every element $a \in R$ there exists an integer $n$ such that $n-aa^\ast \in M$. In Section 2 of \cite{cstar}, it is shown that for every archimedean quadratic module $M$ in a complex $\ast$-algebra $R$, there exists an $M$-positive irreducible $\ast$-representation $\phi_M$ of $R$ on a complex Hilbert space. Recall that a representation $\phi$ is \textit{$M$-positive} if $\phi(a)$ is positive semidefinite for every $a \in M$. \begin{thm} \label{another} For every archimedean proper quadratic module $M_0$ in a complex $\ast$-algebra $R$ and for every $x \in \sym(R)$, the following are equivalent: \begin{enumerate} \item For every $M_0$-positive irreducible representation $\psi$ of $R$, $\psi(x)$ is not negative semidefinite. \item There exists $k \in \NN$ and $r_1,\ldots,r_k \in R$ such that $\sum_{i=1}^k r_i x r_i^\ast \in 1+M_0$. \end{enumerate} \end{thm} \begin{proof} If (1) is false, then there exists an $M_0$-positive irreducible $\ast$-representation $\phi$ such that $\phi(x)$ is negative semidefinite. Hence, $\phi$ is $M_0-N[x]$ positive. If follows that $-1 \not\in M_0-N[x]$, so that (2) is false. Conversely, if (2) is false then $-1 \not\in M_0-N[x]$, so that $M_0-N[x]$ is a proper quadratic module containing $M_0$. By Zorn's Lemma, there exists a maximal proper quadratic module $M$ containing $M_0-N[x]$. 
Clearly, $M$ is archimedean. By the discussion above, there exists an $M$-positive irreducible $\ast$-representation $\phi$ of $R$. Clearly, $\phi$ is $M_0$-positive and $\phi(-x)$ is positive semidefinite. Hence, (1) is false. \end{proof} \section{An application of the intersection theorem} \label{appsec} The aim of this section is to prove a variant of Schm\" udgen's Strict Positivstellensatz for the Weyl algebra, see \cite[Theorem 1.1]{sch1}. We will refer to Schm\" udgen's original proof several times. Recall that the $d$-th Weyl algebra $\mathcal{W}(d)$ is the unital complex \linebreak $\ast$-algebra with generators $a_1,\ldots,a_d,a_{-1},\ldots,a_{-d}$, defining relations \linebreak $a_k a_{-k}-a_{-k} a_k=1$ for $k=1,\ldots,d$ and $a_k a_l=a_l a_k$ for $k,l=1,\ldots,d$, $-1,\ldots,-d$, $k \ne -l$ and involution $a_k^\ast =a_{-k}$ for $k=1,\ldots,d$. Write $\deg$ for the total degree in the generators. The graded algebra that corresponds to the filtration by $\deg$ is $\CC[z,\bar{z}]=\CC[z_1,\ldots,z_d,\bar{z}_1,\ldots,\bar{z}_d]$, a polynomial algebra in $2d$ complex variables. Write $N=a_1^\ast a_1+\ldots+a_d^\ast a_d$ and fix $\alpha \in \RR^+\setminus \NN$. Let $\mathcal{N}$ be the set of all finite products of elements $N+(\alpha+n)1$, where $n \in \ZZ$. Let $\Phi$ be the Fock-Bargmann representation of $\mathcal{W}(d)$. It is unitarily equivalent to the Schr\" odinger representation $\pi_0$. Let $\mathcal{X}$ be the unital complex $\ast$-algebra generated by $y_n=\Phi(N+(\alpha+n)1)^{-1}$ for $n \in \ZZ$ and $x_{kl}=\Phi(a_k a_l) y_0$, $k,l=1,\ldots,d,-1,\ldots,-d$. By \cite[Lemma 3.1]{sch1}, $N(\mathcal{X})$ is an archimedean quadratic module in $\mathcal{X}$. \begin{thm} Let $c$ be a symmetric element of $\mathcal{W}(d)$ with even degree $2m$ and let $c_{2m}$ be the polynomial from $\CC[z,\bar{z}]$ that corresponds to the $2m$-th component of $c$. The following assertions are equivalent: \begin{enumerate} \item $\pi_0(c)$ is not negative semidefinite and $c_{2m}(z,\overline{z}) > 0$ for all $z \in \mathbb{C}^d$, $z \ne 0$. \item There exist $k,l \in \NN$ and elements $r_1,\ldots,r_k,s_0,\ldots,s_l \in \mathcal{W}(d)$ such that $\sum_{i=1}^k r_i^\ast c r_i = \sum_{j=0}^l s_j^\ast s_j$ and $s_0 \in \mathcal{N}$ and $\deg(s_0) \ge \deg(s_j)$ for every $j=1,\ldots,l$. \end{enumerate} \end{thm} \begin{proof} $(1) \Rightarrow (2) \colon$ If $c$ has degree $4n$ then by \cite[Lemma 3.2]{sch1} the element $\tilde{c} := y_0^n \Phi(c) y_0^n$ belongs to $\mathcal{X}$. The main part of the proof is to show that there exist elements $h_i \in \mathcal{X}$ such that $\sum_i h_i \tilde{c} h_i^\ast \in 1+ N( \mathcal{X} )$. (Then we get (2) by clearing out the denominators using the identities $\Phi(a_j)y_k=y_{k+1}\Phi(a_j)$ and $\Phi(a_j)^\ast y_k=y_{k-1} \Phi(a_j)^\ast$.) If this claim is false, then by Theorem \ref{another}, there exists a representation $\pi$ of $\mathcal{X}$ such that $\pi(\tilde{c})$ is negative semidefinite. By \cite[Section 4]{sch1}, we know that $\pi$ can be decomposed as $\pi_1 \oplus \pi_\infty$ where $\pi_1$ is a sum of \lq\lq identity\rq\rq{} representations and $\pi_\infty$ is an integral representation defined by $\pi_\infty(\tilde{c}) = \int_{S^d} c(z,\bar z) dE(z,\bar z)$ where $E$ is a spectral measure on the sphere $S^d$. By \cite[Section 5]{sch1}, we have that $\pi_\infty(\tilde{c}) = \pi_\infty(c_{4n})$ where $c_{4n}$ is the leading term of $c$. 
Since $c_{4n}(z,\bar z) > 0$ for every $z \in S^d$ by the second assumption, there exists by the compactness of $S^d$ an $\epsilon > 0$ such that $c_{4n} \ge \epsilon$. It follows that $\langle \pi_\infty(\tilde{c}) \phi, \phi \rangle \ge \int_{S^d} \epsilon d \langle E(z,\bar z) \phi, \phi \rangle = \epsilon \int_{S^d} d \Vert E(z,\bar z) \phi \Vert^2 = \epsilon \Vert \phi \Vert^2$ for every $\phi \in L^2(\RR^d)$. Since $\pi(\tilde c)$ is negative semidefinite and $\pi_\infty(\tilde c)$ is positive definite, it follows that $\pi_1(\tilde c)$ is nontrivial and negative definite. Since $\pi_1$ is a direct sum of identity representations, it follows that $\tilde c < 0$. Since $y_0^n$ has dense image, it follows that $\Phi(c) \le 0$. Hence $\pi_0(c) \le 0$ by the unitary equivalence of $\Phi$ and $\pi_0$, a contradiction with assumption (1). If $c$ has degree $4n+2$, then we replace $c$ by $\sum_{j=1}^d a_j c a_j^\ast$ and proceed as above. $(2) \Rightarrow (1) \colon$ Since $s_0 \in \mathcal{N}$, $\pi_0(s_0)$ is invertible. It follows that $\pi_0(\sum_{j=0}^l s_j^\ast s_j)>0$. Since $\sum_{i=1}^k r_i^\ast c r_i= \sum_{j=0}^l s_j^\ast s_j$ it follows that also $\pi_0(\sum_{i=1}^k r_i^\ast c r_i)>0$. Hence $c \not\le 0$ as claimed. Write $t=\deg(s_0)$ and note that $(s_0^\ast s_0)_{2t}(z,\bar{z}) = (\sum_{n=1}^d z_n \bar{z}_n)^t > 0$ for $z \ne 0$, Since $t \ge \deg(s_j)$ for every $j=1,\ldots,l$, it follows that $(\sum_{j=0}^l s_j^\ast s_j)_{2t}(z,\bar{z}) >0$ for $z \ne 0$. From $\sum_{i=1}^k r_i^\ast c r_i= \sum_{j=0}^l s_j^\ast s_j$ and $ (\sum_{i=1}^k r_i^\ast c r_i)_{2t}(z,\overline{z})= (\sum_{i=1}^k r_i^\ast r_i)_{2t-2m}(z,\overline{z}) c_{2m}(z,\overline{z}) $ we get that $c_{2m}(z,\overline{z}) > 0$ for $z\ne 0$. \end{proof} \begin{rem} In \cite[Theorem 1.1]{sch1} both (1) and (2) are stronger. In (1) the assumption $\pi_0(c) \not\le 0$ is replaced by $\pi_0(c-\epsilon \cdot 1)>0$ for some $\epsilon$ and in (2) the bound $k \le d$ is provided. The implication from (2) to (1) is not considered. \end{rem} We believe that the main result of \cite{sch2} can be extended in a similar way. It would also be interesting to know whether the second part of assertion (1) ($c_{2m}>0$) implies the first part ($\pi_0(c) \not\le 0$). \end{document}
\begin{document} \begin{titlepage} \title{Quantum Time Crystal By Decoherence:\\ Proposal With Incommensurate Charge Density Wave Ring} \author{K. Nakatsugawa$^1$, T. Fujii$^3$, S. Tanda$^{1,2}$} \vspace*{.2in} \affiliation{ $^1$Department of Applied Physics, Hokkaido University, Sapporo 060-8628, Japan\\ $^2$Center of Education and Research for Topological Science and Technology, Hokkaido University, Sapporo 060-8628, Japan\\ $^3$Department of Physics, Asahikawa Medical University, Asahikawa 078-8510, Japan } \date{\today} \vspace*{.3in} \begin{abstract} We show that time translation symmetry of a ring system with a macroscopic quantum ground state is broken by decoherence. In particular, we consider a ring-shaped incommensurate charge density wave (ICDW ring) threaded by a fluctuating magnetic flux: the Caldeira-Leggett model is used to model the fluctuating flux as a bath of harmonic oscillators. We show that the charge density expectation value of a quantized ICDW ring coupled to its environment oscillates periodically. The Hamiltonians considered in this model are time independent unlike ``Floquet time crystals" considered recently. Our model forms a metastable quantum time crystal with a finite length in space and in time. \end{abstract} \nopagebreak \maketitle \end{titlepage} \section{INTRODUCTION} \noindent The original proposal of a quantum time crystal (QTC) given by Wilczek \cite{QTC} and Li \textit{et al.} \cite{Wigner} is a quantum mechanical ground state which breaks time translation symmetry. In this QTC ground state there exists an operator $\hat Q$ whose expectation value oscillates permanently with a well-defined ``lattice constant" $P$, that is, with a well-defined period. \\ \indent Volovik \cite{Volovic} relaxed the condition of permanent oscillation and proposed the possibility of effective QTC, that is, a periodic oscillation in a metastable state such that the oscillation will persist for a finite duration $\tau_Q\gg P$ in the time domain and will eventually decay. \\ \indent In this paper we promote Volovik’s line and consider the possibility of a metastable QTC state without spontaneous symmetry breaking: We consider symmetry breaking by \textit{decoherence} of a macroscopic quantum ground state (FIG. \ref{Symm.Break by Decoh.}). Decoherence is defined as the loss of quantum coherence of a system coupled to its environment. Coupling to environment will inevitably introduce friction to the system such that the oscillation will eventually decay at $t=\tau_\text{damp}$. However, for $t<\tau_Q\ll\tau_\text{damp}$ the oscillation period $P$ is well defined. If friction is sufficiently weak such that $P\ll\tau_Q$, then we have a model of effective QTC with life time $\tau_Q$. \begin{figure} \caption{The concept of translation symmetry breaking by decoherence is illustrated. (a) A simplified version of the two-state system considered by Leggett \textit{et al.} \label{Symm.Break by Decoh.} \end{figure} \\ \indent Our model consists of a ring-shaped incommensurate charge density wave (ICDW ring) threaded by a fluctuating magnetic flux (FIG. \ref{FluctuatingB}(a)). A charge density wave (CDW) is a periodic (spatial) modulation of electric charge density which occurs in quasi-one-dimensional {crystals} \cite{Gruner2,*Sambongi,*Monceau}: the periodic modulation of the electric charge density occurs due to electron-phonon interaction. 
\textcolor{red}{If the ratio between the CDW wavelength $\lambda$ and the lattice constant $a$ of the crystal is a simple fraction like $2$, $5/2$ etc, then the CDW is commensurate with the underlying lattice. A commensurate CDW cannot move freely because of commensurability pinning, i.e. the CDW phase is pinned by ions’ position in the crystal. On the other hand, if $\lambda/a$}\st{If the ratio between the CDW wavelength $\lambda$ and the lattice constant $a$ of the crystal} is effectively an irrational number \cite{Sambongi}, then the CDW is incommensurate with the underlying lattice. An ICDW ring with a radius of $10\mu$m, for instance, contains approximately $10^5$ wavelengths \cite{Zettl}, so it is possible that $a/\lambda$ is very close to an irrational number. \textcolor{red}{See the discussion section for an elaboration of this assumption.} \begin{figure} \caption{(a) We consider an incommensurate CDW ring threaded by a fluctuating magnetic flux. This figure shows a commensurate CDW with $\lambda/a=2$ because it is easier to visualise. The wave (typically $\sim 10^5$ wavelengths) represents the charge density and the dots represent the atoms of a quasi-one dimensional crystal.(b) ICDW ring crystals {such as monoclinic TaS$_3$ ring crystals and NbSe$_3$ ring crystals } \label{FluctuatingB} \end{figure} The sliding of an ICDW {without pinning} is described by a gapless Nambu-Goldstone (phason) mode \cite{Gruner2} and the energy of an ICDW is independent of its phase (i.e. position), which implies that the expected ground state of an ICDW ring is a superposition of ICDWs with different phases. Ring-shaped crystals and ring-shaped (I)CDWs have been produced \cite{ring} (FIG. \ref{FluctuatingB} (b)). The presence of circulating CDW current \cite{Matsuura} and Aharonov-Bohm oscillation (evidence of macroscopic wave function) \cite{ABCDW} are verified experimentally. \\ \indent We show in section \ref{Without_Dissipation} that the charge density expectation value of an isolated ICDW ring {with moment of inertia $I$ is periodic in time with period $P=4\pi I/\hbar$}. This periodicity is a consequence of the uncertainty relation on $S^1$ (ring). However, this oscillation becomes unobservable at ground state because the ground state of an isolated ICDW ring is a plane wave state, \textit{i.e.} a coherent superposition of ICDWs with different phases. Therefore, in section \ref{With_Dissipation} we use the Caldeira-Leggett model \cite{Caldeira,Weiss} to show that time translation symmetry is broken by decoherence. More precisely, the {superposition is broken by decoherence} and the amplitude of the ICDW oscillates periodically (FIG. \ref{FluctuatingB} (c)). If the ICDW ring weakly couples to its environment then this state is a metastable ground state. Therefore, our model forms an effective QTC with a finite length in space and in time. \\ \indent Before developing our main arguments, we compare our work to recent developments of QTC. In analogy with spatial crystal, the original proposal of QTC is based on the spontaneous breaking of time translation symmetry. However, Bruno \cite{Bruno3} and Watanabe and Oshikawa \cite{Watanabe} theoretically proved that spontaneous breaking of time translation symmetry cannot occur at ground state. Recently, it was shown that there is a notion of spontaneous breaking of time translation symmetry in periodically driven (Floquet) states \cite{FTCTheory,*DTCYao,*Prethermal} and this idea was proved experimentally\cite{FTCChoi,*FTCZhang}. 
On the other hand, the periodic oscillation we consider in this paper is inherent to ring systems with a macroscopic wave function. \section{Ground State of an Isolated ICDW Ring}\label{Without_Dissipation} \subsection{Classical Theory of ICDW Ring} It is well known that the electric charge density of a quasi-one-dimensional crystal becomes periodic by opening a gap at the {Fermi wave number $k_\mathrm{F}$} and form a charge density wave (CDW) {ground} state with a wavelength $\lambda=\pi/k_\mathrm{F}$ \cite{Gruner2}. Consider a CDW formed on a ring-shaped quasi-one-dimensional crystal with radius $R$. The order parameter of this CDW ring is a complex scalar $\Delta(x,t)=|\Delta(x,t)|\exp[i\theta(x,t)]$, where $|\Delta(x,t)|$ is the size of the energy gap at $\pm k_\mathrm{F}$, $\theta(x,t)$ is the phase of the CDW, $x\in[0,2\pi R)$ is the coordinate on the crystal, and $t$ is the time coordinate. The charge density is given by \begin{equation} n(x,t)=n_0+n_1\cos[2k_\text Fx+\theta(x,t)] \label{chargedensity} \end{equation} where $n_0$ is the average charge density and $n_1$ is the amplitude of the wave. Bogachek \emph{et al.} \cite{Bogachek1} derived the following Lagrangian density of the phase of a ring-shaped incommensurate CDW (ICDW ring) threaded by a magnetic flux \begin{equation*} {\mathscr{L}_0\left(\frac{\partial\theta}{\partial t},\frac{\partial\theta}{\partial x}\right)}=\frac{N_0}{2}\left[\left(\frac{\partial \theta}{\partial t}\right)^2-c_0^2\left(\frac{\partial \theta}{\partial x}\right)^2\right]+\frac{eA}{\pi}\frac{\partial \theta}{\partial t} \end{equation*} where $A$ is the magnetic vector potential, $N_0=v_\text F^2\hbar^2N(\varepsilon_\mathrm{F})/(2c_0^2)$, $N(\varepsilon_\mathrm{F})$ is the density of states of electrons at the Fermi level per unit length and per spin direction, $v_\text F$ is the Fermi velocity of the crystal, $c_0 =\sqrt{m/m^\ast}v_\text F$ is the phason velocity, $m^\ast$ is the effective mass of electrons and $\hbar$ is the reduced Planck constant. We first consider an isolated ICDW ring with $A=0$. Assuming that $N(\varepsilon_\mathrm{F})$ is equivalent to the density of states of electrons on a one dimensional line, that is, $N(\varepsilon_\mathrm{F})=1/(\pi\hbar v_\mathrm{F})$, we have $ N_0=\hbar v_\mathrm F/(2\pi c_0^2)$. An incommensurate CDW (ICDW) can slide freely because of spatial translation symmetry, so the dynamics of an ICDW is understood by its phase $\theta$. We further assume the rigid-body model of ICDW, i.e. the ICDW ring does not deform locally and the phase $\theta(x,t)=\theta(t)$ is independent of position. Then, the Lagrangian, the canonical angular momentum and the Hamiltonian of the ICDW ring are, respectively \begin{align} L_0(\dot\theta)&=\int_0^{2\pi R}dx\mathscr L_0(\dot\theta)=\frac{I}{2}\dot\theta^2, \label{Classical_Phase_Lagrangian} \\ \pi_\theta(\dot\theta)&=\frac{\partial L_0(\dot\theta)}{\partial \dot\theta}=I\dot\theta, \label{momentum} \\ H_0(\pi_\theta)&=\pi_\theta\dot\theta-L_0(\dot\theta)=\frac{\pi_\theta^2}{2I} \label{Classical_Phase_Hamiltonian} \end{align} where $\dot\theta=d\theta/dt$ and $I=\hbar R v_\mathrm{F}/ c_0^2$ is the moment of inertia. We note that \eqref{Classical_Phase_Lagrangian}, \eqref{momentum}, and \eqref{Classical_Phase_Hamiltonian} are time independent. \subsection{Quantization of {an Isolated} ICDW Ring} Next, we quantize the ICDW ring system. We show that a quantized ICDW ring possesses an inherent oscillation which originates from the uncertainty principle. 
Let $\hat H_0=\hat\pi_\theta^2/(2I)$ and $\hat\pi_\theta$ be the Hamiltonian operator and angular momentum operator of the ICDW ring, respectively. The macroscopic quantum state $\psi\in\mathscr H$ is defined in the Hilbert space $\mathscr H$ of positive square-integrable functions with the periodic boundary condition $\psi(\theta+2\pi)=\psi(\theta)$. The canonical commutation relation $[\hat\theta,\hat\pi_\theta]=i\hbar$ is not satisfactory because $\hat\theta$ is a multi-valued operator and is not well-defined. Ohnuki and Kitakado \cite{Ohnuki} resolved this difficulty by using the unitary operator $\hat W$ and the self-adjoint angular momentum operator $\hat \pi_\theta$ defined by \begin{equation*} \braket{\theta|\hat W|\psi}=e^{i\theta}\psi(\theta),\qquad \braket{\theta|\hat \pi_\theta|\psi}=-i\hbar\frac{\partial\psi(\theta)}{\partial\theta} \end{equation*} which satisfy the commutation relation on $\mathscr H$ \begin{equation} [\hat \pi_\theta,\hat W]=\hbar\hat W.\label{Algebra_Ohnuki} \end{equation} $\hat H_0$ is a function of $\hat\pi_\theta$ only, hence the complete orthonormal set $\{\psi_l\}_{l=-\infty}^\infty$ of momentum eigenstates spans $\mathscr H$ and satisfy $\psi_l(\theta)=e^{il\theta}/\sqrt{2\pi}$. The eigenvalues of $\hat\pi_\theta$ are quantized with $\braket{\psi_l|\hat\pi_\theta|\psi_l}=l\hbar,l\in\mathbb Z$. $\hat W$ and $\hat W^\dagger$ are ladder operators which satisfy $\hat W\psi_l=\psi_{l+1}$ and $\hat W\psi_l=\psi_{l-1}$. Therefore, \eqref{Algebra_Ohnuki} is the one dimensional version of the well known angular momentum algebra \cite{Sakurai}. Time evolution is introduced via the Heisenberg picture: $\hat \pi_\theta(t)=e^{i\hat H_0t/\hbar}\hat \pi_\theta e^{-i\hat H_0t/\hbar}$ and $\hat W(t)=e^{i\hat H_0t/\hbar}\hat W e^{-i\hat H_0t/\hbar}$. $\hat \pi_\theta$ commutes with $\hat H_0$, so $\hat \pi_\theta(t)=\hat \pi_\theta$. From the commutation relation \eqref{Algebra_Ohnuki} we obtain the following solutions of $\hat W(t)$: \begin{equation} \hat W(t)=e^{it\hat\pi_\theta/I}\hat We^{-\frac{it}{2\mu}}=\hat We^{it\hat\pi_\theta/I}e^{\frac{it}{2\mu}}\label{TimeDependentW} \end{equation} where $\mu=I/\hbar$. {The two different expressions in \eqref{TimeDependentW} arise from the noncommutativity between $\hat W$ and $e^{it\hat\pi_\theta/I}$. }For a QTC we need a periodic expectation value at the ground state. So, we define the time dependent charge density operator \begin{equation} {\hat n(x,t)=n_0+\frac{n_1}{2}\left(e^{2ik_\mathrm F x}\hat W(t)+\mathrm{h.c.}\right)}\label{chargedensityoperator} \end{equation} and replace the classical charge density \eqref{chargedensity} by the expectation value \begin{equation} n(x,t)=\braket{\hat n(x,t)}. \end{equation} Any states in $\mathscr H$ must be a linear superposition of $\{\psi_l\}$, that is $\psi=\sum_{l\in\mathbb Z}c_l\psi_l$ provided $\sum_{l}|c_l|^2=1$. Therefore, the expectation values $\braket{\hat W(t)}$ and $\braket{\hat n(x,t)}$ {are periodic with} period $P=4\pi\mu$ for any state $\psi$: \begin{align} \begin{split} &\braket{\hat W(t)}{=\frac{1}{2}\text{tr}[\hat W(t)\hat\rho+\hat\rho\hat W(t)]} \\ &=\frac{1}{2}\!\int_{-\pi}^\pi\! 
d\theta e^{i\theta}[e^{\frac{it}{2\mu}}\rho(\theta+ t/\mu,\theta)\!+\!\rho(\theta,\theta- t/\mu)e^{-\frac{it}{2\mu}}].\end{split} \label{WExpectationValue} \end{align} From the two different expressions of $W(t)$ in \eqref{TimeDependentW} we can define the Weyl form of the commutation relation \cite{WeylCCR} \begin{align*} \hat We^{it\hat\pi_\theta/I}=e^{it\hat\pi_\theta/I}\hat We^{-\frac{it}{\mu}}\label{WeylCCR} \end{align*} hence the phase $\frac{t}{2\mu}$ and the periodic oscillation with period $4\pi\mu$ is a manifestation of the uncertainty principle. For an alternative explanation, let us consider an electron with effective mass $m^\ast$ confined in a finite space with volume $L\sim 2R$. From the uncertainty principle, the momentum {uncertainty} of this particle is $\Delta p\sim\hbar/(4R)$. This means that the particle's wave packet expands with velocity $v=\Delta p/m^\ast\sim\hbar/(4m^\ast R)$. Then, because of the periodic boundary condition, the physical quantity $W=e^{i x/\lambda}$ is periodic with period $P=\lambda/v\sim4\pi m^\ast R/(\hbar k_\text F)=4\pi m^\ast R/mv_\text F=4\pi Rv_\text F/c_0^2=4\pi\mu$. So, the origin of the periodicity is (i) the macroscopic wave function of the ICDW ring diffuses due to the uncertainty principle then (ii) $W=e^{i\theta}$ oscillates periodically. However, the oscillation in \eqref{WExpectationValue} is not observable at the ground state $\hat\rho_0\equiv\ket{\psi_0}\bra{\psi_0}$ because $\braket{\theta|\hat\rho_0|\phi}=\frac{1}{2\pi}$ and the $\theta$ integral vanishes. {Therefore, the ground state of an isolated ICDW ring is not yet a time crystal because of superposition.} \section{COUPLING TO ENVIRONMENT} \label{With_Dissipation} Now, suppose that the ICDW ring starts to interact with its surrounding enviromnent at $t=0$. Then, we expect decoherence of the phase $\theta$. This interaction is modeled using the Caldeira-Leggett model \cite{Caldeira} which is a model quantum Brownian motion. It describes a particle coupled to its environment. This environment is described as a set of non-interacting harmonic oscillators. {First, the classical solution $\theta(t)$ is calculated to study the dynamics of the ICDW ring. Next, this system is quantized to calculate the amplitude of the charge density expectation value $\braket{\hat n(x,t)}$}. \subsection{Classical Theory of ICDW Ring With Environment} Let us consider the following Lagrangian of an ICDW ring threaded by a fluctuating magnetic flux \begin{equation} \tilde L(\dot\theta,\mathbf q,\dot{\mathbf q})=\frac12I\dot\theta^2+A(\mathbf q)\dot\theta+\sum_{j=1}^{\mathcal N}\left(\frac12m\dot q_j^2-\frac12 m\omega_j^2q_j^2\right) \label{ClassicalLagrangian1} \end{equation} where $q_j$ are the normal coordinates of the fluctuation and $\tilde\pi_\theta=\partial \tilde L/\partial\dot\theta$ and $p_j=\partial \tilde L/\partial\dot q_j$ are the canonical momenta of the ICDW ring and the environment, respectively. The fluctuating magnetic flux is given by \begin{equation*} A(\mathbf q)=\sum_{j=1}^\mathcal N cq_j. \end{equation*} Classically, this magnetic flux will randomly changes the phase and the \textit{mechanical} angular momentum $I\dot\theta$ of the ICDW ring due to electromotive force. An equivalent Lagrangian obtained by a Canonical transformation is \begin{align} \begin{split} L(\dot\theta,\mathbf R,\dot{\mathbf R})&=\frac12I\dot\theta^2\!+\!\frac{m}{2}\sum_j^\mathcal N\!\left[\!\dot R_j^2(\theta)\!-\!\omega_j^2\left(\!\!R_j(\theta)-\frac{C_j\theta}{m\omega_j^2}\! 
\right)^2\right] \label{ClassicalLagrangian2} \end{split} \end{align} which is the Lagrangian of a bath of field particles $R_j$ coupled to the phase $\theta$ by springs. $R_j(p_j,\theta)=-\frac{p_j-c\theta}{m\omega_j}$, $P_j(q_j)=m\omega_jq_j$ and $C_j=c\omega_j$. \eqref{ClassicalLagrangian1} and \eqref{ClassicalLagrangian2} are precisely the kinds of Lagrangian considered by Caldeira and Leggett, so we can use the results in \cite{Caldeira} but with slight modifications due to the periodicity of the ring. \subsection{{Classical Solution}} The equation of motion of $\theta$ obtained from the Lagrangian \eqref{ClassicalLagrangian2} is the generalized Langevin equation\cite{Hanggi1997} \begin{equation} I\ddot\theta(t)+2\int_0^td\tau\alpha_\text I(t-\tau)\theta(\tau)=\xi(t)\label{GLEModified} \end{equation} with the dissipation kernel $\alpha_\text I(t-\tau)$, the memory function $\gamma(t-\tau)$ and the classical fluctuating force $\xi(t)$ defined by \begin{align*} \alpha_{\text I}(t-\tau)&=I\gamma(0)\delta(t-\tau)+\frac{I}{2}\frac{d}{dt}\gamma(t-\tau),\label{alphaIDef} \\ \gamma(t-\tau)&= \sum_{j=1}^{\mathcal N}\frac{C_j^2}{Im\omega_j^2}\cos\omega_j(t-\tau), \\ \xi(t)&=\sum_{j=1}^{\mathcal N}C_j\left[R_j(0)\cos\omega_jt+\frac{P_j(0)}{m\omega_j}\sin\omega_jt\right]. \end{align*} The correlation function {of the classical force is} given by the noise kernel \begin{align*} \braket{\xi(t)\xi(\tau)}_\text{env}&=\hbar\alpha_{\text R}(t-\tau) \\ \alpha_\text R(t-\tau)&=\sum_j^\mathcal N\frac{C_j^2}{2m\omega_j}\coth\left(\frac{\hbar\omega_j}{2k_\mathrm{B}T}\right)\cos\omega_j(t-\tau) \end{align*} where the average $\braket{\cdot}_\text{env}$ is taken with respect to the environment coordinate at equilibrium. It is convenient to define the spectral density function \begin{equation} \mathcal J(\omega)=\frac{\pi}{2}\sum_j^\mathcal N\frac{C_j^2}{m_j\omega_j}\delta(\omega-\omega_j) \label{J_def} \end{equation} and assume the power law spectrum $\mathcal J(\omega)=I g_s\omega^s$ \cite{Grabert1988115,*Schramm1987} with a cutoff frequency $\Omega$ and $0<s<2$. Then, $\alpha_\text R(t-\tau)$ can be written \begin{align*} \alpha_\text R(t-\tau)&=\frac{Ig_s}{\pi}\int_0^\Omega \omega^s\coth\left(\frac{\hbar \omega}{2k_\mathrm BT}\right)\cos\omega(t-\tau)d\omega.\label{alphaRIntegral} \end{align*} The classical solution of $\theta$ is \begin{equation} \theta(t)=G(t)\dot\theta(0)+\dot G(t)\theta(0)+\frac{1}{I}\int_0^td\tau G(t-\tau)\xi(t) \end{equation} with the fundamental solution \begin{equation*} G(t)=\mathcal L^{-1}\left[\frac{1}{z^2+z\hat\gamma(z)}\right](t). \end{equation*} where $\mathcal L^{-1}$ is the inverse Laplace transform. The Laplace transform of the memory function $\gamma(t)$ can be written \cite{Weiss} \begin{equation*} \hat\gamma(z)=\omega_s^{2-s}z^{s-1},\qquad \omega_s=\left(\frac{g_s}{\sin\frac{\pi s}{2}}\right)^{1/(2-s)} \end{equation*} and $G(t)$ takes the form of a generalized Mittag-Leffler function $E_{\alpha,\beta}(x)=\sum_{k=0}^\infty\frac{x^k}{\Gamma(\alpha k+\beta)}$: \begin{align*} G(t)=tE_{2-s,2}[-(\omega_s t)^{2-s}]. \end{align*} For ohmic damping with $s=1$ and $\hat \gamma(z)=g_1\equiv2\gamma$, we obtain \begin{equation} G(t)=\frac{1-e^{-2\gamma t}}{2\gamma}.\label{ClassicalMotionOhmic} \end{equation} {$G(t)$ and $\dot G(t)$ are shown in FIG. \ref{ClassicalMotion}. We note that $\dot G(t)\approx 1$ for $t$ less than some damping time scale $\tau_{\text{damp},s}$. 
In other words, the fluctuating magnetic flux does not affect the dynamics of an ICDW ring for $t<\tau_{\text{damp},s}$ and $e^{i\theta(t)}$ oscillates periodically with period $P\approx 4\pi\mu$. Next, we quantize the ICDW ring + environment system to show that this oscillation is observable for a finite time $\tau_Q$ and form an effective QTC as a metastable state.} \begin{figure} \caption{These plots are shown with $g_s=1$ Hz$^{2-s} \label{ClassicalMotion} \end{figure} \subsection{Quantization {of ICDW Ring Coupled to Environment}} {The ICDW ring $+$ environment system is} quantized using the commutation relations $[\hat{\tilde\pi}_\theta,\hat W]=\hbar\hat W$ and $[\hat q_j,\hat p_k]=i\hbar\delta_{jk}$. Define the orthonormal position state $\ket{\mathbf q}=\prod_{i=1}^\mathcal{N}\ket{q_i}$ and the orthonormal momentum state $\ket{\mathbf p}=\prod_{i=1}^\mathcal{N}\ket{p_i}$ such that \begin{equation} \braket{\mathbf q|\hat q_j|\psi}=q_j\braket{\mathbf q|\psi},\qquad \braket{\mathbf q|\hat p_j|\psi}=-i\hbar\frac{\partial}{\partial q_j}\braket{\mathbf q|\psi}, \end{equation} and the inner product of $\ket{\mathbf q}$ and $\ket{\mathbf p}$ is defined as $\braket{\mathbf q|\mathbf p}=\frac{1}{\sqrt{2\pi\hbar}^\mathcal{N}}\exp\left(\frac{i}{\hbar}\mathbf q\cdot\mathbf p\right)$. The periodic boundary condition of the ring implies that $\braket{\theta+2\pi n|\tilde\pi_\theta}=\braket{\theta|\tilde\pi_\theta}$ for some integer $n$, hence the angular momentum eigenstates are quantized: $\braket{\psi_l|\hat{\tilde\pi}_\theta|\psi_l}=l\hbar, l=0,\pm,\pm2,\dots$, $\ket{\tilde\pi_\theta}=\hbar^{-1/2}\ket{\psi_l}$, $\braket{\theta|\psi_l}=\frac{1}{\sqrt{2\pi}}e^{il\theta}$. Moreover, one can easily show that \begin{equation} \braket{\theta,\mathbf p|\tilde\pi_\theta,\mathbf q}=\braket{\theta,\mathbf R(\theta)|\pi_\theta,\mathbf P}. \end{equation} $\hat W$ is independent of the environmental coordinate. So, the expectation value of $\hat W$ is \begin{align*} \braket{\hat W(t)} &=\frac{1}{2}\int_{-\pi}^\pi d\theta_\text f\int_{-\pi}^\pi d\phi_\text f \rho(\theta_\text f,\phi_\text f,t)\braket{\phi_\text f|\hat W|\theta_\text f} \\ &+\frac{1}{2}\int_{-\pi}^\pi d\theta_\text f\int_{-\pi}^\pi d\phi_\text f\braket{\theta_\text f|\hat W|\phi_\text f}\rho(\phi_\text f,\theta_\text f,t). \end{align*} where the reduced density matrix of the ICDW ring is (see \textit{Appendix} \ref{CLS1}) \begin{align} \begin{split} \rho(\theta_\text f,\phi_\text f,t)&=\int_{-\pi}^{\pi}d\theta_\text i\int_{-\pi}^{\pi}d\phi_\text i\sum_{l_1,l_2\in\mathbb Z}\rho(\theta_\text i,\phi_\text i,0) \\ &\times J(\theta_\text f+2\pi l_1,\phi_\text f+2\pi l_2,t;\theta_\text i,\phi_\text i,0). \label{rho} \end{split} \end{align} The exact form of $J(\theta_\text f,\phi_\text f,t;\theta_\text i,\phi_\text i,0)$ for ohmic dissipation $s=1$ was calculated in \cite{Caldeira}. 
For general damping with arbitrary $s$ the computation of the reduced density matrix is essentially equivalent to \cite{Caldeira} and we obtain \begin{align} J(\theta_\text f,\phi_\text f,t;\theta_\text i,\phi_\text i,0)&=F^2(t)\exp\left(\frac{i}{\hbar}S[\varphi^+_\text{cl},\varphi^-_\text{cl}]-\Gamma[\varphi^-_\text{cl}]\right).\label{JClassical} \end{align} $\varphi^+_\text{cl}=\frac12(\theta_\text{cl}+\phi_\text{cl})$ and $\varphi^-_\text{cl}=\phi_\text{cl}-\theta_\text{cl}$ are the classical coordinates obtained from the Euler-Lagrange equation \begin{align} I\ddot\varphi_\text{cl}^-(u)+2\int_u^td\tau\varphi_\text{cl}^-(\tau)\alpha_\text I(\tau-u)=0,\label{eqn1} \\ I\ddot\varphi^+_\text{cl}(u)+2\int_0^ud\tau\varphi^+_\text{cl}(\tau)\alpha_\text I(u-\tau)=0\label{eqn2} \end{align} whose solution are given in terms of boundary conditions $\varphi^\pm_\text i=\varphi_\text{cl}^\pm(0),\varphi^\pm_\text f=\varphi_\text{cl}^\pm(t)$: \begin{align} \varphi^+_\text{cl}(u)&=\kappa_i(u;t)\varphi^+_\text i+\kappa_f(u;t)\varphi^+_\text f, \label{varphisolfinal} \\ \varphi^-_\text{cl}(u)&=\kappa_i(t-u;t)\varphi^-_\text f+\kappa_f(t-u;t)\varphi^-_\text i, \label{varphiprimesolfinal} \\ \kappa_i(u;t)&=\dot G(u)-\frac{\dot G(t)}{G(t)}G(u), \qquad \kappa_f(u;t)=\frac{G(u)}{G(t)}. \end{align} The classical action and the noise action are given by \begin{align*} S[\varphi^+_\text{cl},\varphi^-_\text{cl}] &=S_\text{cl}(\varphi^+_\text f,\varphi^-_\text f,t;\varphi^+_\text i,\varphi^-_\text i,0) \\ &=-I[\dot\varphi^+_\text{cl}(t)\varphi^-_\text f-\dot\varphi^+_\text{cl}(0)\varphi^-_\text i], \\ \Gamma[\varphi_\text{cl}^-]&=\Gamma_\text{cl}(\varphi_f^-,t;\varphi_i^-,0) \\ &=\frac{1}{2\hbar}\int_0^td\tau\int_0^td\tau'\varphi^-(\tau)\alpha_\text R(\tau-\tau')\varphi^-(s). \end{align*} $F^2(t)$ is a normalization function such that $\text{tr}[\rho(t)]=\braket{1}=1$. The winding numbers $l_1$ and $l_2$ can be absorbed into the $\theta_\text f$ and $\phi_\text f$ integrals, respectively, by changing the domain of $\theta_\text f$ and $\phi_\text f$ from $S^1$ to $\mathbb R^1$. Then, taking care of the non-Hermiticity of $\hat W$, we obtain \begin{align} \braket{\hat W(t)}&=\frac{r_1^+(t)+r_1^-(t)}{r_2^+(t)+r_2^-(t)},\label{WExpGeneral} \\ r_1^+&=\int_{-\pi}^{\pi}d\theta_\text i\sum_{n\in\mathbb S_1(\theta,t)}\rho(\theta_\text i-f_1(t),\theta_\text i,0)\nonumber \\ &\times e^{-in\pi-i\mu \theta \dot f_1(t)+\frac{i \mu}{2} f_1(t) \dot f_1(t)-\Gamma_\text{cl}(2\pi n,t;f_1(t),0)},\nonumber \\ r_1^-&=\int_{-\pi}^{\pi}d\theta_\text i\sum_{n\in\mathbb S_1(\theta,t)}\rho(\theta_\text i,\theta_\text i+f_1(t),0)\nonumber \\ &\times e^{-in\pi-i\mu \theta \dot f_1(t)-\frac{i \mu}{2} f_1(t) \dot f_1(t)-\Gamma_\text{cl}(2\pi n,t;f_1(t),0)},\nonumber \\ r_2^+&=\int_{-\pi}^{\pi}d\theta_\text i\sum_{n\in\mathbb S_2(\theta,t)}\rho(\theta_\text i-f_2(t),\theta_\text i,0)\nonumber \\ &\times e^{-i\mu \theta \dot f_2(t)+\frac{i\mu}{2} f_2(t)\dot f_2(t)-\Gamma_\text{cl}(2\pi n,t;f_2(t),0)},\nonumber \\ r_2^-&=\int_{-\pi}^{\pi}d\theta_\text i\sum_{n\in\mathbb S_2(\theta,t)}\rho(\theta_\text i,\theta_\text i+f_2(t),0)\nonumber \\ &\times e^{-i\mu \theta \dot f_2(t)-\frac{i\mu}{2} f_2(t)\dot f_2(t)-\Gamma_\text{cl}(2\pi n,t;f_2(t),0)}.\nonumber \end{align} with $f_1(t)=2\pi n \dot G(t)-G(t)/\mu$, $f_2(t)=2\pi n \dot G(t)$, $\mathbb S_1(\theta,t)=\{n\in\mathbb Z|-\pi<\theta+f_1(t)<\pi\}$, and $\mathbb S_2(\theta,t)=\{n\in\mathbb Z|-\pi<\theta+f_2(t)<\pi\}$. 
This is the most general form of the expectation value of $\hat W$ for an ICDW ring coupled to its environment. Although the derivation of \eqref{WExpGeneral} is exact, it is not very insightful, so we make some approximations. \subsection{Early Time {Approximation}} Let us consider the classical solution (22) with $\varphi^+_\text{cl}=\theta_\text f$ and ohmic damping \eqref{ClassicalMotionOhmic}, then we see immediately that $\dot\theta(t)\sim \dot\theta(0)e^{-2\gamma t}$. Therefore, we are interested in the range $t\ll1/(2\gamma)\equiv\tau_{\text{damp},1}$. For general dissipation, we can see from FIG. \ref{ClassicalMotion} that there exist a time scale $\tau_{\text{damp},s}$ such that $G(t)\approx t$, $\dot G(t)\approx 1$ for $t\ll\tau_{\text{damp},s}$. Then, writing $t=2\pi I(m+a)/\hbar$ for an integer $m$ and $0<a<1$, we have $\mathbb S_1(\theta<-2a\pi,t)=\{m+1\}$, $\mathbb S_1(\theta>-2a\pi,t)=\{m\}$, and $\mathbb S_2(\theta,t)=\{0\}$. Therefore, we conclude that $\mathbb S_1(\theta,t)$ is approximately the $(m/2)^{\text{th}}$ lattice point. Then, using $\dot f_1(t)\approx -\dot G(t)\hbar/I$ we obtain the approximate form \begin{align*} \braket{\hat W(t)}&\approx\int_{-\pi}^{\pi}d\theta_\text i\sum_{n\in\mathbb S_1(\theta,t)}\rho(\theta_\text i,\theta_\text i-G(t)/\mu,0) \\ &\times\exp\left\{i\theta_\text i\dot G(t)-i\frac{G(t)\dot G(t)}{2\mu}-\Gamma_\text{cl}(2\pi n,t;f_1(t),0)\right\} \\ &+\int_{-\pi}^{\pi}d\theta_\text i\sum_{n\in\mathbb S_1(\theta,t)}\rho(\theta_\text i+G(t)/\mu,\theta_\text i,0) \\ &\times\exp\left\{i\theta_\text i\dot G(t)+i\frac{G(t)\dot G(t)}{2\mu}-\Gamma_\text{cl}(2\pi n,t;f_1(t),0)\right\}. \end{align*} For $t\ll\tau_{\text{damp},s}$ we have $\varphi^-_\text{cl}(\tau)\approx \tau/\mu$ and the noise action can be written \begin{align*} &\Gamma_\text{cl}(2\pi n,t;f_1(t),0)\approx \Gamma_{T,s}(t) \\ &\qquad=\frac{g_s}{2\pi \mu}\int_0^\Omega d\omega\coth\left(\frac{\hbar \omega}{2k_\text BT}\right)\Upsilon(\omega), \\ &\Upsilon(\omega)=\omega^{s-4}(2+\omega^2t^2-2\cos\omega t-2\omega t\sin\omega t) \end{align*} \begin{figure} \caption{(a) The amplitude of the charge density oscillation is shown for $g_s=1$ Hz$^{2-s} \label{GSOsc} \end{figure} Taking the low temperature limit $\frac{\hbar \Omega}{2k_\text BT}\to\infty$ such that $\coth\frac{\hbar \Omega}{2k_\text BT}\to1$ we obtain \begin{align*} \Gamma_{T,s}(t) =&-\frac{g_s}{\pi \mu}\frac{\Omega ^{s-3} \left(\, _1F_2\left(\frac{s-3}{2};\frac{1}{2},\frac{s-1}{2};-\frac{1} {4} t^2 \Omega ^2\right)-1\right)}{s-3} \\ &-\frac{g_st^2}{2\pi \mu}\frac{\Omega ^{s-1} \left(\, _1F_2\left(\frac{s-1}{2};\frac{1}{2},\frac{s+1}{2};-\frac{1} {4} t^2 \Omega ^2\right)-1\right)}{s-1} \end{align*} where $\,_1F_2\left(a_1;b_1,b_2;z\right)$ is the generalized hypergeometric function. Define the decoherence time $\tau_{\text{decoh},s}$ such that $\Gamma_{T,s}(\tau_{\text{decoh},s})=1$. Numerical analysis shows that the order of $\tau_{\text{decoh},s}$ does not change with $\Omega<1/(2\mu)$ and decreases rapidly for $\Omega>1/(2\mu)$. If we set $\Omega\sim1/\mu$ \st{we obtain $\tau_{\text{decoh},s}\sim g_s^{1/4}\mu ^{(s+2)/4}$ and the number of observable ``lattice points" is $N\sim g_s^{1/4}\mu ^{(s-2)/4}$. 
If we}\textcolor{red}{and} use the ground state $\rho_0$, then for $t\ll\tau_Q\equiv\min\{\tau_{\text{damp,s}},\tau_{\text{decoh},s}\}$ we have \begin{align} n(x,t)&\approx n_0+n^\text{osc.}_1(t)\cos(2k_\text F x),\label{CDWResult}\\ n^\text{osc.}_1(t)&=n_1\text{sinc}[\pi\dot G(t)]\cos\left[\frac{\dot G(t)G(t)}{2\mu}\right]e^{-\Gamma_{T,s}(t)}. \end{align} Equation \eqref{CDWResult} is the main result of this paper. It shows that the amplitude of an ICDW ring threaded by a (time independent) fluctuating magnetic flux oscillates for a finite time $\tau_Q$ and form an effective QTC. In the no-damping limit $\hat\gamma(z)\to0$ we have $G(t)\to t,G(t)\to 1$ and recover $\braket{\hat W(t)}=0$. This charge density oscillation is shown in FIG. \ref{GSOsc}. \section{DISCUSSION} \label{Conclusion} \textcolor{red}{First, we elaborate our assumtion of ICDW ring. A mathematical definition of ICDW is that $\lambda/a$ is an irrational number. We note that a CDW formed on a macroscopic crystal is basically incommensurate because the wavelength of a CDW is given by $\lambda=\pi/k_\text{F}$, where the Fermi wave number $k_\text{F}$ is usually an irrational number for an arbitrary band filling. However, strictly speaking, $\lambda/a$ of a finite-size system can never be irrational. A physical condition is that the commensurability pinning energy is negligible, which is possible if $\lambda/a$ cannot be expressed as a simple fraction like 2, 5/2, etc. In order to explain this, suppose that for some integer $M\geq2$, $Ma/\lambda$ is an integer. In other words, the same atom-electron configuration is obtained if we move the CDW by $M$ wavelengths and $\epsilon_{k+2Mk_\text{F} }=\epsilon_k$ ($\epsilon_k$ is the energy of an electron with momentum $\hbar k$). Then, the energy required to move a CDW by a small phase $\phi$ from its equilibrium is \cite{LRA} \begin{equation*} \epsilon(\phi;M)=\frac{|\Delta|^2}{\epsilon_\text{F}} \left(\frac{e|\Delta|}{W}\right)^{M-2}\frac{M\phi^2}{2} \end{equation*} where $\epsilon_\text{F}$ is the Fermi energy, $W$ is the band width, $|\Delta|$ is the CDW gap width and $e$ is the elementary charge in natural units. We see that $\epsilon(\phi;M)$ approaches zero rapidly for large $M$ as the distinction between rational and irrational numbers becomes academic. For example, for a ring crystal with $N_a$ atoms and $N_\lambda$ CDW wavelengths, we obtain $\lambda/a=N_a/N_\lambda$. Therefore, for a large $N_a$ and a large $N_\lambda$, $M$ can always be arbitrary large (order of $N_a$), hence the commensurability energy is completely negligible. In fact, we can experimentally make sub-micrometer scale ICDW rings such that, we expect, $N_a$ and $N_\lambda$ are not so large but $M$ is very large such that commensurability energy is negligible. } \textcolor{red}{ Usually, superposition of ICDWs with different phases is not observed because of impurity pinning. However, the probability of impurity decreases with decreasing radius. Commensurability effect may become significant for small radius (more precisely, for some small $M$). But, the origin of the “time crystal periodicity” in our model is the uncertainty principle, which appears as a collective fluctuation of the ICDW phase. If commensurability effect becomes significant, then the ICDW phase is expected to fluctuate periodically around some phase $\theta=\theta_0$ determined by the ions’ position. 
This fluctuation is expected to become apparent and oscillate periodically by coupling to environment (fluctuating magnetic flux).} \textcolor{red}{Next, we discuss the presence of an upper bound and a lower bound for the radius of the ICDW ring in our model. Decoherence induced breaking of time translation symmetry occurs, in principle, only in mesoscopic systems: There is an upper bound for the CDW radius determined by the coupling strength $\gamma=\omega_s/2$ and a lower bound given by the CDW wavelength $\lambda$. We will first calculate the upper bound by replacing the environment with an equivalent LC circuit. Here, we focus on Ohmic damping because an approximate form of $\gamma$ can be calculated explicitly. In general, the coupling strength $\gamma$ depends on the CDW radius $R$. In order to explicitly see the $R$ dependence, suppose that the fluctuating magnetic flux in our model comes from an external coil connected to a series of parallel capacitors. Then, the Lagrangian of the CDW + environment system is \begin{equation*} L_\text{circuit}=L_0+MI_{CDW} \sum_{j=1}^\mathcal{N}I_j +\sum_{j=1}^\mathcal{N}\frac m2(I_j^2-\omega_j^2 Q_j^2 ). \end{equation*} $L_0$ is the Lagrangian of an isolated ICDW ring, $I_{CDW}=e\dot\theta/\pi$ is the CDW current induced by the fluctuating flux, $Q_j$ is the net charge on the capacitor $j$ with capacitance $\mathscr C_j$, $m$ is the inductance of the coil, $\omega_j=1/\sqrt{m\mathscr C_j}$, and $M$ is the mutual inductance between the coil and the CDW. We immediately see that $L_\text{circuit}$ is equivalent to the Lagrangian in equation \eqref{ClassicalLagrangian1} but with the interaction Lagrangian replaced by $\dot \theta\sum_{j=1}^\mathcal N \frac{Me}{\pi} \dot Q$. Therefore, define $p_j=mI_j$, $C_j=Me\omega_j^2/\pi$ and we obtain the Lagrangian in equation \eqref{ClassicalLagrangian2} after the canonical transformation \begin{equation*} L=L_\text{circuit}-\frac{d}{dt}\sum_j^\mathcal{N}(MI_{CDW}+p_j) Q_j \end{equation*} with $P_j=m\dot R_j=m\omega_j Q_j$ and $R_j=-(p_j-MI_{CDW})/(m\omega_j)$. Assume that the radius of the coil $r_\text{coil}$ is much larger than the CDW radius $R$ and that the coil and the CDW are concentric. Then, we obtain $M=(\mu_0 \pi R^2)/(2r_\text{coil} )$. Next, integrate \eqref{J_def} with respect to $\omega$ from $\omega=0$ to $\omega=\Omega=1/\mu$ and obtain $\gamma=\beta R$, $\beta=(\pi \mu_0^2 e^2 \rho c_0^6 )/(32\hbar r_\text{coil}^2 m v_F^3 )$. Here, $\mu_0$ is the permeability of free space and $\rho$ is the density of states defined by $\sum_j\to\int d\omega\rho$. If we assume that $\rho\sim\Omega$ like in the Caldeira-Leggett model, then the order of $\beta$ may change depending on the parameters of the CDW and the parameters of the coil, but $\gamma=\beta R$ is usually smaller than 1Hz. Now, if the radius of the CDW ring is too large, then periodicity does not appear because the oscillation period exceeds the lifetime $\tau_Q$ of the time crystal. The upper bound of the CDW radius R is given by the condition $N>1$, where $N=\min\{\tau_{\text{damp},s},\tau_{\text{deph},s}\}/(4\pi\mu)$ is the number of oscillations. For Ohmic damping ($s=1$) we have the approximate form $\tau_{\text{damp},1}=\gamma^{-1}$ and $\tau_{\text{deph},1}=\sqrt{\mu\gamma^{-1}}=\sqrt{\mu\gamma}\tau_{\text{damp},1}$. Let $\mu\gamma<1$, i.e. 
$\tau_{\text{damp},1}>\tau_{\text{deph},1}$ (which is a valid assumption because a typical value of $\mu$ for a radius of $10^{-6}$~m is $10^{-6}$~s and $\gamma$ is usually smaller than 1~Hz). Then the upper bound for observing more than one oscillation is $R<\frac{c_0}{4\pi\sqrt{v_F\beta}}$, which is typically 1~mm. There is also a lower bound determined by the CDW wavelength $\lambda$: the radius should be large enough to define a Fermi surface. This condition is given by $p_\text F=\hbar\pi/\lambda\gg\hbar/R$. In other words, $R$ should be much larger than $\lambda$.

Finally, we discuss how our model can be tested experimentally and how our results may be applicable to other physical systems. Ring-shaped crystals and ICDW ring crystals (such as monoclinic TaS$_3$ ring crystals and NbSe$_3$ ring crystals) have been produced and studied by the Hokkaido group \cite{ring,Matsuura,ABCDW}. Therefore, our model can be tested provided ring crystals with almost no defects and impurities can be produced. The oscillation in \eqref{CDWResult} implies that the local charge density of the ICDW ring oscillates with frequency $\omega=\hbar/(2I)$. For a ring with diameter $2R=1~\mu$m, $v_\text F/c_0= 10^3$ and $v_\mathrm{F}=10^5$~m/s, we have $\omega=10^8~\mathrm{Hz}$. The time dependence of the charge density modulation can be measured using scanning tunneling microscopy (STM) \cite{Ichimura} and/or using narrow-band noise with vanishing threshold voltage \cite{IDO}. We recall that the origin of the quantum oscillation in our model is the uncertainty principle. If we were to consider a single particle with mass $m^\ast$ confined on a ring with radius $R$, the particle's wave packet would expand with a velocity $v\sim \hbar/(4m^\ast R)$ and $e^{i\theta}$ would oscillate with period $P=2\pi R/v$. However, a charge density wave has an internal periodicity given by the wavelength $\lambda$, and the period of oscillation is then $P\sim\lambda/v$. Therefore, an ICDW ring is described by a macroscopic wave function with an internal periodicity, and the number of lattice points $N$ in our model can be very large. Nevertheless, the periodicity of $\hat W(t)$ appears to be universal for any wave function on $S^1$ (a ring system). Therefore, our results may be applicable to earlier models of QTC such as \cite{QTC,Wigner,Josephson} and to annular Josephson junctions \cite{AnnularJJ}. Moreover, it was shown very recently in \cite{Haffner} that the ground state of a $^{40}$Ca$^+$ ring trap possesses rotational symmetry as the number of ions is decreased. Our results predict that quantum oscillations may appear in such ring traps with the appropriate setup. We also recall that Volovik's proposal of a metastable effective QTC \cite{Volovic} is not restricted to ring systems. Therefore, time translation symmetry breaking by \textit{decoherence} may occur in other systems coupled to a time-independent environment, without a periodic driving field. We also expect that many other incommensurate systems, such as incommensurate spin density waves \cite{Gruner2}, incommensurate mass density waves \cite{PhysRevLett.40.1507,PhysRevB.20.751,PhysRevB.71.104508}, or possibly some dielectrics that exhibit incommensurate phases \cite{blinc1986incommensurate}, may be used to test our results and to model QTC without spontaneous symmetry breaking.
\acknowledgments
We thank Kohichi Ichimura, Toru Matsuura, Noriyuki Hatakenaka, Avadh Saxena, Yuji Hasegawa, Kousuke Yakubo and Tatsuya Honma for stimulating and valuable discussions.
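As a simple numerical illustration of Eq.~\eqref{CDWResult} and of the estimates above, the short script below (a sketch only: the relaxation function $G(t)$, its derivative, and the exponent $\Gamma_{T,s}(t)$ are replaced by assumed illustrative forms, not by the expressions derived in the text) evaluates the normalized amplitude $n^\text{osc.}_1(t)/n_1$ and confirms that it vanishes in the no-damping limit $\dot G\to 1$, where $\mathrm{sinc}(\pi)=0$.
\begin{verbatim}
# Sketch: normalized oscillation amplitude n1_osc(t)/n1 of Eq. (CDWResult).
# Assumptions (illustrative, not from the text): sinc(x) = sin(x)/x,
# G(t) = (1 - exp(-gamma*t))/gamma, and Gamma_{T,s}(t) = gamma*t.
import numpy as np

mu    = 1e-6      # characteristic time scale mu [s], typical value quoted above
gamma = 0.5       # assumed damping rate [1/s]; the text estimates gamma < 1 Hz

def amplitude(t, gamma):
    G    = (1.0 - np.exp(-gamma * t)) / gamma   # assumed relaxation function
    Gdot = np.exp(-gamma * t)                   # its time derivative
    # np.sinc(x) = sin(pi x)/(pi x), so np.sinc(Gdot) equals sinc[pi Gdot] above
    return np.sinc(Gdot) * np.cos(Gdot * G / (2.0 * mu)) * np.exp(-gamma * t)

t = np.linspace(0.0, 5.0, 6)        # times [s]
print(amplitude(t, gamma))          # finite-time oscillation of the amplitude
print(amplitude(t, 1e-9))           # no-damping limit: Gdot -> 1, so sinc(pi) ~ 0
\end{verbatim}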
\onecolumngrid \appendix \section{QUANTUM BROWNIAN MOTION ON $S^1$} \label{CLS1} \subsection{Reduced Density Matrix On $S^1$} Consider an ICDW ring $A$ coupled to its environment $B$. Let $\rho_A$ and $\rho_B$ denote the density operators of $A$ and $B$, respectively. Let $\theta$ and $\phi$ be angular coordinates on the ring system $A$ and let $\mathbf p=\{p_k:k=1,\mathcal N\}$ and $\mathbf s=\{s_k:k=1,\mathcal N\}$ be momentum coordinates of the bath $B$. Then, the density matrix of the coupled system can be written \begin{align*} \rho(\theta,\mathbf p,\phi,\mathbf s)&= \braket{\theta , \mathbf p|\rho_{AB}(t)|\phi ,\mathbf s} \\ &=\int_0^{2\pi}d\theta'd\phi'\int_{-\infty}^{\infty}d\mathbf p'd\mathbf s'K(\theta ,\mathbf p,t;\theta',\mathbf p',0)K^\ast(\phi,\mathbf s,t;\phi',\mathbf s',0)\\ &\quad\times\braket{\theta',\mathbf p'|\rho_{AB}(0)|\phi',\mathbf s'} \end{align*} where \begin{equation} K(\theta,\mathbf p,t;\theta',\mathbf p',0)=\braket{\theta,\mathbf p|e^{-\frac{i}{\hbar}\hat H t}|\theta',\mathbf p'} \label{PropagatorTheta} \end{equation} and \begin{equation} K^\ast(\phi,\mathbf s,t;\phi',\mathbf s',0)=\braket{\phi',\mathbf s'|e^{\frac{i}{\hbar}\hat H t}|\phi,\mathbf s} \label{PropagatorPhi} \end{equation} are recognized as Feynman propagators if we notice that $\ket{\mathbf p}$ and $\ket{\mathbf s}$ are actually position states after canonical transformation. The propagators \eqref{PropagatorTheta} and \eqref{PropagatorPhi} can be written using path integrals by dividing the time $t$ into $N$ time steps of length $\epsilon=t/(N+1)$, $\mathbf p=\mathbf p_N$, $\mathbf p'=\mathbf p_0$, $\theta=\theta_N$ and $\theta'=\theta_0$. For $N\to\infty$ the propagator becomes \begin{align*} K(\theta,\mathbf p,t;\theta',\mathbf p',0)&=\Braket{\theta,\mathbf p|\lim_{\epsilon\to 0}\left(\exp-\frac{i\epsilon}{\hbar}\hat H\right)^N|\theta',\mathbf p'}\\ &=\lim_{\epsilon\to 0}\int_0^{2\pi}\left(\prod_{n=1}^{N-1}d\theta_n\right)\int_{-\infty}^\infty\left(\prod_{n=1}^{N-1}d\mathbf p_n\right)\prod_{n=1}^NK_n(\theta_n,\mathbf p_n,\epsilon;\theta_{n-1},\mathbf p_{n-1},0). \end{align*} The Hamiltonian operator $\hat H$ can be decomposed into the kinetic part $\hat{\mathcal K}$ and the potential part $\hat{\mathcal V}$, i.e. $\hat H=\hat{\mathcal K}+\hat{\mathcal V}$, which satisfy the eigenvalue equations \begin{equation} \hat{\mathcal K}\ket{\psi_l,\mathbf q}=\mathcal K(l,\mathbf P)\ket{\psi_l,\mathbf q},\quad \hat{\mathcal V}\ket{\theta,\mathbf p}=\mathcal V(\theta,\mathbf R(\theta))\ket{\theta,\mathbf p}. \end{equation} Then we obtain \begin{align*} K&(\theta_n,\mathbf p_n,\epsilon;\theta_{n-1},\mathbf p_{n-1},0)\\ &=\sum_{l_n}\int_{-\infty}^\infty d\mathbf q_n\Braket{\theta_n,\mathbf p_n|\exp\left(-\frac{i\epsilon}{\hbar}\hat{\mathcal K}\right)|\psi_{l_n},\mathbf q_n}\Braket{\psi_{l_n},\mathbf q_n|\exp\left(-\frac{i\epsilon}{\hbar}\hat{\mathcal V}\right)|\theta_{n-1},\mathbf p_{n-1}}\\ &=\sum_{l_n=-\infty}^\infty\frac{1}{2\pi}\int_{-\infty}^\infty \frac{d\mathbf q_n}{(2\pi \hbar)^{\mathcal N}}\exp\left\{-\frac{i\epsilon}{\hbar}\mathcal K(l_n,\mathbf P_n)-\frac{i\epsilon}{\hbar}\mathcal V(\theta_{n-1},\mathbf R_{n-1}(\theta_{n-1}))\right\}\\ &\quad\times\exp\left\{il_n(\theta_n-\theta_{n-1})-\frac{i}{\hbar}\mathbf q_n\cdot(\mathbf p_n-\mathbf p_{n-1}))\right\}. 
\end{align*} Next, define $A_n=\sum_jcq_{j,n}=\sum_j\frac{C_j}{m\omega_j^2}P_{j,n}$ and obtain \begin{align*} K&(\theta_n,\mathbf p_n,\epsilon;\theta_{n-1},\mathbf p_{n-1},0)\\ &=\sum_{l_n=-\infty}^\infty\frac{1}{2\pi}\int_{-\infty}^\infty \frac{d\mathbf q_n}{(2\pi \hbar)^{\mathcal N}}\exp\left\{-\frac{i\epsilon}{\hbar}\mathcal K(l_n,\mathbf P_n)-\frac{i\epsilon}{\hbar}\mathcal V(\theta_{n-1},\mathbf R_{n-1}(\theta_{n-1}))\right\}\\ &\quad\times\exp\left\{i(l_n-A_n/\hbar)(\theta_n-\theta_{n-1})+\frac{i}{\hbar}\mathbf P_n\cdot(\mathbf R_n(\theta_n)-\mathbf R_{n-1}(\theta_{n-1}))\right\}. \end{align*} The sum over $l_n$ can be replaced by a sum of integrals using the Poisson resummation formula \begin{equation} \sum_{l\in\mathbb Z}f(l)=\sum_{l\in\mathbb Z}\int_{-\infty}^\infty f(\zeta)\exp(2\pi l \zeta)d\zeta. \end{equation} Then, \begin{align*} K&(\theta_n,\mathbf p_n,\epsilon;\theta_{n-1},\mathbf p_{n-1},0)\\ &=\sum_{l_n=-\infty}^\infty\int_{-\infty}^\infty\frac{d\zeta_n}{2\pi}\int_{-\infty}^\infty \frac{d\mathbf q_n}{(2\pi \hbar)^{\mathcal N}}\exp\left\{-\frac{i\epsilon}{\hbar}\mathcal K(\zeta_n,\mathbf P_n)-\frac{i\epsilon}{\hbar}\mathcal V(\theta_{n-1},\mathbf R_{n-1}(\theta_{n-1}))\right\}\\ &\quad\times\exp\left\{i(\zeta_n-A_n/\hbar)(\theta_n-\theta_{n-1}+2\pi l_n)+\frac{i}{\hbar}\mathbf P_n\cdot(\mathbf R_n(\theta_n+2\pi l_n)-\mathbf R_{n-1}(\theta_{n-1}))\right\}. \end{align*} For our Hamiltonian of an ICDW ring threaded by a fluctuating magnetic flux we have \begin{align*} \mathcal K(\zeta_n,\mathbf P_n)&=\frac{\left(\zeta_n\hbar-A_n\right)^2}{2I}+\sum_j\frac{1}{2m}P_{j,n}^2\\ \mathcal V(\theta_{n-1},\mathbf R_{n-1}(\theta_{n-1}))&=\sum_j\frac{1}{2}m\omega_j^2 \left(R_{j,n-1}(\theta_{n-1})-\frac{C_j\theta_{n-1}}{m\omega_j^2}\right)^2. \end{align*} We note that that the potential $\mathcal V(\theta,\mathbf R(\theta))$ is rotationally invariant. That is, for an arbitrary rotation $\theta\to\theta+\delta$, the potential is $\mathcal V(\theta+\delta,\mathbf R(\theta+\delta))=\mathcal V(\theta,\mathbf R(\theta))$. Let $\tilde\zeta_n=\zeta_n-A_n/\hbar$. We use the fact that the phase space volume is conserved under canonical transformation. 
Then, \begin{align*} K&(\theta,\mathbf p,t;\theta',\mathbf p',0) \\ &=\left(\prod_{j=1}^\mathcal{N}\frac{1}{m\omega_j}\right)\lim_{\epsilon\to 0}\left(\prod_{n=1}^{N-1}\int_0^{2\pi}d\theta_n\int_{-\infty}^\infty d\mathbf R_{n}(\theta_n)\right)\left(\prod_{n=1}^N\sum_{l_n=-\infty}^\infty\int_{-\infty}^\infty\frac{d\mathbf P_n}{(2\pi \hbar)^{\mathcal N}}\frac{d\tilde \zeta_n}{2\pi}\right) \\ &\times\exp\sum_{n=1}^N\left\{-\frac{i\epsilon}{\hbar}\left(\frac{\hbar^2}{2I}\tilde \zeta_n^2+\sum_j\frac{1}{2m}P_{j,n}^2\right)-\frac{i\epsilon}{\hbar}\mathcal V(\theta_{n-1},\mathbf R_{n-1}(\theta_{n-1}))\right\} \\ &\times\exp\sum_{n=1}^N\left\{i\tilde\zeta_n(\theta_n-\theta_{n-1}+2\pi l_n)+\frac{i}{\hbar}\mathbf P_n\cdot(\mathbf R_n(\theta_n+2\pi l_n)-\mathbf R_{n-1}(\theta_{n-1}))\right\}\\ \end{align*} Solve the $\tilde \zeta_n$ and $P_{j,n}$ integrals to obtain \begin{align*} K&(\theta,\mathbf p,t;\theta',\mathbf p',0)\\ &=\left(\prod_{j=1}^\mathcal{N}\frac{1}{m\omega_j}\right)\lim_{\epsilon\to 0}\left(\prod_{n=1}^{N-1}\int_0^{2\pi}d\theta_n\int_{-\infty}^\infty d\mathbf R_{n}(\theta_n)\right)\left(\prod_{n=1}^N\sum_{l_n=-\infty}^\infty\sqrt{\frac{I}{2\pi i\epsilon\hbar}}\sqrt{\frac{m}{2\pi i\epsilon \hbar}}^{\mathcal N}\right)\\ &\times\exp\sum_{n=1}^N\left\{\frac{i}{\hbar}\frac{I}{2\epsilon}(\theta_n-\theta_{n-1}+2\pi l_n)^2+\frac{i}{\hbar}\frac{m}{2\epsilon}(\mathbf R_n(\theta_n+2\pi l_n)-\mathbf R_{n-1}(\theta_{n-1}))^2\right\}\\ &\times\exp\sum_{n=1}^N\left\{-\frac{i\epsilon}{\hbar}\mathcal V(\theta_{n-1},\mathbf R_{n-1}(\theta_{n-1}))\right\}. \end{align*} The integral including $\theta_{N-1}$ is \begin{align*} I_{N-1}&=\sum_{l_N=-\infty}^\infty\sum_{l_{N-1}=-\infty}^\infty\sqrt{\frac{I}{2\pi i\epsilon\hbar}}\sqrt{\frac{m}{2\pi i\epsilon \hbar}}^{\mathcal N}\int_0^{2\pi}d\theta_{N-1}\int_{-\infty}^\infty d\mathbf R_{N-1}(\theta_{N-1})\\ &\times\exp\left\{\frac{i}{\hbar}\frac{I}{2\epsilon}(\theta_N-\theta_{N-1}+2\pi l_N)^2+\frac{i}{\hbar}\frac{I}{2\epsilon}(\theta_{N-1}-\theta_{N-2}+2\pi l_{N-1})^2\right\}\\ &\times\exp\left\{\frac{i}{\hbar}\frac{m}{2\epsilon}[\mathbf R_N(\theta_N+2\pi l_N)-\mathbf R_{N-1}(\theta_{N-1})]^2\right\}\\ &\times\exp\left\{\frac{i}{\hbar}\frac{m}{2\epsilon}[\mathbf R_{N-1}(\theta_{N-1}+2\pi l_{N-1})-\mathbf R_{N-2}(\theta_{N-2})]^2\right\}\\ &\times\exp\left\{-\frac{i\epsilon}{\hbar}\mathcal V(\theta_{N-1},\mathbf R_{N-1}(\theta_{N-1}))\right\}. \end{align*} The sum over $l_{N-1}$ can be absorbed into the $\theta_{N-1}$ integral by changing the domain of $\theta_{N-1}$ integration from $[0,2\pi)$ to $(-\infty,\infty)$. 
This is done by the following procedure: \begin{enumerate} \item transform $l_{N}$ to $\tilde l_N=l_N+l_{N-1}$ and write $\tilde \theta_{N-1}=\theta_{N-1}+2\pi l_{N-1}$ to obtain \begin{align*} I_{N-1}&=\sum_{l_N=-\infty}^\infty\sum_{l_{N-1}=-\infty}^\infty\sqrt{\frac{I}{2\pi i\epsilon\hbar}}\sqrt{\frac{m}{2\pi i\epsilon \hbar}}^{\mathcal N}\int_0^{2\pi}d\theta_{N-1}\int_{-\infty}^\infty d\mathbf R_{N-1}(\theta_{N-1}) \\ &\times\exp\left\{\frac{i}{\hbar}\frac{I}{2\epsilon}(\tilde\theta_N-\theta_{N-1}+2\pi \tilde l_N)^2+\frac{i}{\hbar}\frac{I}{2\epsilon}(\tilde \theta_{N-1}-\theta_{N-2})^2\right\} \\ &\times\exp\left\{\frac{i}{\hbar}\frac{m}{2\epsilon}[\mathbf R_N(\theta_N+2\pi \tilde l_N)-\mathbf R_{N-1}(\tilde\theta_{N-1})]^2\right\} \\ &\times\exp\left\{\frac{i}{\hbar}\frac{m}{2\epsilon}[\mathbf R_{N-1}(\tilde\theta_{N-1})-\mathbf R_{N-2}(\theta_{N-2})]^2\right\} \\ &\times\exp\left\{-\frac{i\epsilon}{\hbar}\mathcal V(\tilde\theta_{N-1},\mathbf R_{N-1}(\tilde\theta_{N-1}))\right\}. \end{align*} \item Change the variable of integration $\int_0^{2\pi}d\theta_n\to\int_{2\pi n}^{2\pi(n+1)}d\tilde \theta_n$ and change the domain of $\theta_{N-1}$ integration from $[0,2\pi)$ to $(-\infty,\infty)$ and use the periodicity of $\mathcal V(\theta_{N-1},\mathbf R_{N-1}(\theta_{N-1}))$ to obtain \begin{align*} I_{N-1}&=\sum_{l_N=-\infty}^\infty\sqrt{\frac{I}{2\pi i\epsilon\hbar}}\sqrt{\frac{m}{2\pi i\epsilon \hbar}}^{\mathcal N}\int_{-\infty}^\infty d\theta_{N-1}\int_{-\infty}^\infty d\mathbf R_{N-1}(\theta_{N-1})\\ &\times\exp\left\{\frac{i}{\hbar}\frac{I}{2\epsilon}(\theta_N-\theta_{N-1}+2\pi l_N)^2+\frac{i}{\hbar}\frac{I}{2\epsilon}(\theta_{N-1}-\theta_{N-2})^2\right\}\\ &\times\exp\left\{\frac{i}{\hbar}\frac{m}{2\epsilon}[\mathbf R_N(\theta_N+2\pi l_N)-\mathbf R_{N-1}(\theta_{N-1})]^2\right\}\\ &\times\exp\left\{\frac{i}{\hbar}\frac{m}{2\epsilon}[\mathbf R_{N-1}(\theta_{N-1})-\mathbf R_{N-2}(\theta_{N-2})]^2\right\}\\ &\times\exp\left\{-\frac{i\epsilon}{\hbar}\mathcal V(\theta_{N-1},\mathbf R_{N-1}(\theta_{N-1}))\right\}. \end{align*} \end{enumerate} Repeat this procedures for the integrals involving $\theta_{n}$ from $n=N-2$ to $n=1$, and obtain \begin{align*} K&(\theta,\mathbf q,t;\theta',\mathbf q',0)\\ &=\sum_{l=-\infty}^\infty\lim_{\epsilon\to 0}\int_{-\infty}^{\infty}\left(\prod_{n=1}^{N-1}d\theta_n\right)\int_{-\infty}^\infty\left(\prod_{n=1}^{N-1}d\mathbf R_{n-1}(\theta_{n-1})\right)\prod_{n=1}^N\sqrt{\frac{I}{2\pi i\epsilon\hbar}}\sqrt{\frac{m}{2\pi i\epsilon \hbar}}^{\mathcal N}\\ &\times\exp\left\{\frac{i}{\hbar}\frac{I}{2\epsilon}(\theta_n-\theta_{n-1}+2\pi l\delta_{n,N})^2\right\}\\ &\times\exp\left\{\frac{i}{\hbar}\frac{m}{2\epsilon}[\mathbf R_n(\theta_n+2\pi l\delta_{n,N})-\mathbf R_{n-1}(\theta_{n-1})]^2-\frac{i\epsilon}{\hbar}\mathcal V(\theta_{n-1},\mathbf R_{n-1}(\theta_{n-1}))\right\}. 
\end{align*} Define \begin{align} \mathcal A_A&=\sqrt{\frac{2\pi i\epsilon\hbar}{I}},\qquad \mathcal A_B=\sqrt{\frac{2\pi i\epsilon\hbar}{m}}^\mathcal{N}, \\ \int \mathcal{D}\theta&=\frac{1}{\mathcal A_A}\int_{-\infty}^\infty\left(\prod_{n=1}^{N-1}\frac{d\theta_n}{\mathcal A_A}\right), \qquad \int \mathcal{D}\mathbf R(\theta)=\frac{1}{\mathcal A_B}\int_{-\infty}^\infty\left(\prod_{n=1}^{N-1}\frac{d\mathbf R_n(\theta_n)}{\mathcal A_B}\right), \end{align} \begin{align} L_n(\theta_n,\mathbf R_n(\theta_n))&=\frac{I}{2}\left(\frac{\theta_n-\theta_{n-1}+2\pi l\delta_{n,N}}{\epsilon}\right)^2\nonumber \\ &+\frac{m}{2}\left(\frac{\mathbf R_n(\theta_n+2\pi l\delta_{n,N})-\mathbf R_{n-1}(\theta_{n-1})}{\epsilon}\right)^2-\mathcal V(\theta_{n-1},\mathbf R_{n-1}(\theta_{n-1})) \end{align} where \begin{align} L[\theta,\mathbf R(\theta)]&=\lim_{\epsilon\to 0}L_n[\theta_n,\mathbf R_n(\theta_n)]=L_0[\theta]+L_B[\theta,\mathbf R(\theta)] \\ L_0[\theta]&=\frac{I}{2}\dot\theta^2 \\ L_B[\theta,\mathbf R(\theta)]&=\sum_{j=1}^\mathcal N\frac{m}{2}\dot R_j(\theta)^2-\sum_{j=1}^\mathcal N\frac{1}{2}m\omega_j^2\left(R_j(\theta)-\frac{C_j\theta}{m\omega_j^2}\right)^2 \end{align} is the Lagrangian of the coupled system. The action $S[\theta,\mathbf R(\theta)]$ is given by \begin{align} S[\theta,\mathbf R(\theta)]=\int_0^tL[\theta,\mathbf R(\theta)]ds&=S_0[\theta]+S_B[\theta,\mathbf R(\theta)] \\ S_0[\theta]&=\int_0^td\tau L[\theta] \\ S_B[\theta,\mathbf R(\theta)]&=\int_0^td\tau L_B[\theta,\mathbf R(\theta)]. \label{S_0} \end{align} Now, suppose that we initially have the total density operator given by \begin{equation} \hat\rho_{AB}(0)=\hat\rho_A(0)\hat\rho_B(0). \end{equation} Then, the reduced density matrix is \begin{align} \rho(\theta,\phi,t)&=\int_{-\infty}^\infty d\mathbf R(\theta)d\mathbf Q(\theta)\delta(\mathbf R(\theta)-\mathbf Q(\theta))\rho(\theta,\mathbf p,\phi,\mathbf s,t) \nonumber\\ &=\int_0^{2\pi}d\theta' d\phi' J(\theta,\phi,t;\theta',\phi',0)\rho_A(\theta',\phi',0) \label{Reduced_Density_Matrix} \end{align} where \begin{equation} J(\theta,\phi,t;\theta',\phi',0)=\sum_{l=-\infty}^\infty\sum_{l'=-\infty}^\infty\int_{\theta'}^{\theta+2\pi l}\mathcal D\theta\int_{\phi'}^{\phi+2\pi l'}\mathcal D\phi^\ast\exp\frac{i}{\hbar}\left(S_0[\theta]-S_0[\phi]\right)\mathcal F[\theta,\phi] \end{equation} is the propagator of the density matrix, \begin{align*} \mathcal F[\theta,\phi]&=\int_{-\infty}^\infty d\mathbf R(\theta)d\mathbf Q(\phi) d\mathbf R'(\theta') d\mathbf Q'(\phi')\delta(\mathbf R(\theta)-\mathbf Q(\phi))\rho_B(\mathbf p',\mathbf s',0) \\ &\times \int_{\mathbf R'(\theta')}^{\mathbf R(\theta)}\mathcal D\mathbf R(\theta)\int_{\mathbf Q'(\phi')}^{\mathbf Q(\phi)}\mathcal D\mathbf Q(\phi)\exp\frac{i}{\hbar}\left(S_B[\theta,\mathbf R(\theta)]-S_B[\phi,\mathbf Q(\phi)]\right) \end{align*} is the influence functional and $Q(\phi)=-(s_j-c\phi)/m\omega_j$. Supposed that the density operator $\hat \rho_B$ can be written as a canonical ensemble at $t=0$. That is, \begin{equation} \rho_B=\frac{\exp(-\beta\hat H_B)}{\mathrm{tr}_B[\exp(-\beta\hat H_B)]} \end{equation} Introduce the imaginary time $\tau=-i\hbar\beta$, then the density matrix of the environment at $t=0$ is \begin{align*} \rho_B(\mathbf p',\mathbf s',0)=\frac{\Braket{\mathbf p'|\exp\left(-\frac{i\tau}{\hbar}\hat H_B\right)|\mathbf s'}}{\mathrm{tr}_B\left[\exp\left(-\frac{i\tau}{\hbar}\hat H_B\right)\right]}=\frac{K_B(\mathbf p',\tau;\mathbf s',0)}{\int d\mathbf qK_B(\mathbf q,\tau;\mathbf q,0)}. 
\end{align*} \label{F_Derivation} \subsection{Momentum Representation of the Density matrix of a Harmonic Oscillator} The propagator of a harmonic oscillator with the Lagrangian \begin{equation} L=\frac{m}{2}\dot x^2-\frac{1}{2}m\omega x^2 \end{equation} has a well known solution \begin{align} K(x,t;x',0)&=\int \mathcal Dx\exp\frac{i}{\hbar}\int_0^tLds \\ &=\sqrt{\frac{m\omega}{2\pi i\hbar\sin\omega t}}\exp\left\{\frac{i}{\hbar}\frac{m\omega}{2\sin\omega t}[(x^2+x'^2)\cos\omega t-2xx']\right\} \end{align} $K(p,t;p',0)$ is obtained by a Fourier transformation \begin{equation} K(p,t:p',0)=\int_{-\infty}^\infty\frac{dxdx'}{2\pi\hbar}K(x,t;x',0)\exp\frac{i}{\hbar}\left(px-p'x'\right). \end{equation} Let $A=\frac{m\omega\cos\omega t}{2\sin\omega t}$ and $B=\frac{m\omega}{2\sin\omega t}$, then \begin{align*} K(p,t;p',0)=\frac{F(t)}{\sqrt{4B^2-4A^2}}\exp\frac{i}{\hbar}\frac{Ap^2+Ap'^2-2Bpp'}{4B^2-4A^2} \end{align*} where \begin{equation} 4B^2-4A^2=\frac{m^2\omega^2(1-\cos^2\omega t)}{\sin^2\omega t}=m^2\omega^2, \end{equation} so, \begin{equation} K(p,t;p',0)=\frac{F(t)}{m\omega}\exp\frac{i}{\hbar}S[p(t)/m\omega]. \end{equation} Therefore, using $\tau=-i\hbar\beta$, $\beta=1/k_BT$, $\nu_j=\hbar\omega_j\beta$ and $i\sinh x=\sin ix$ we have \begin{align} \braket{\mathbf p'|\exp(-i\tau\hat H_B/\hbar)|\mathbf s'}=\prod_{j=1}^\mathcal{N}\frac{1}{m\omega_j}\sqrt{\frac{m\omega_j}{2\pi \hbar\sinh\nu_j}}\exp-\frac{m\omega_j}{2\hbar\sinh\nu_j}\left[\frac{{p'}_j^2+{s'}_j^2}{m^2\omega_j^2}\cosh\nu-2\frac{{p'}_j{s'}_j}{m^2\omega_j^2}\right], \end{align} and \begin{align*} \mathrm{tr}_B[\exp(-i\tau\hat H_B/\hbar)]=\int_{-\infty}^\infty d\mathbf q\braket{\mathbf q|\exp(-i\tau\hat H_B/\hbar)|\mathbf q}=\prod_{j=1}^\mathcal{N}\frac{1}{2\sinh\frac{\nu_j}{2}}. \end{align*} We are assuming that the environment is not coupled to the system at $t=0$, so $R'_j=p'_j/m\omega_j$ and $Q'_j=s'_j/m\omega_j$. Therefore, \begin{equation} \rho_B(\mathbf p',\mathbf s',0)=\left(\prod_{j=1}^\mathcal{N}\frac{1}{m\omega_j}\right)\rho_B(\mathbf R',\mathbf Q',0) \end{equation} where \begin{equation} \rho_B(\mathbf R',\mathbf Q',0)=\prod_{j=1}^\mathcal{N}2\sinh\frac{\nu_j}{2}\sqrt{\frac{m\omega_j}{2\pi \hbar\sinh\nu_j}}\exp-\frac{m\omega_j}{2\hbar\sinh\nu_j}\left[({R'}_j^2+{Q'}_j^2)\cosh\nu_j-2{R'}_j{Q'}_j\right]. \end{equation} Let us rewrite this density matrix into a simpler form. Define $x_{0j}=R'_j+Q'_j$ and $x'_{0j}=R'_j-Q'_j$. Then, using \begin{equation} ({R'_j}^2+{Q'_j}^2)\cosh\nu_j-2{R'}_j{Q'}_j=\frac12(\cosh\nu_j-1)x_{0j}^2+\frac12(\cosh\nu_j+1){x'_{0j}}^2 \end{equation} and the identity \begin{equation} \frac{\cosh\nu_j-1}{\sinh\nu_j}=\frac{\sinh\nu_j}{\cosh\nu_j+1}=\tanh\frac{\nu_j}{2}, \end{equation} we obtain \begin{equation} \rho_B(\mathbf R',\mathbf Q',0)=\prod_{j=1}^\mathcal{N}\sqrt{\frac{m\omega_j}{\pi \hbar\mu_j}}\exp-\frac{m\omega_j}{4\hbar}\left[\frac{x_{0j}^2}{\mu_j}+{x'_{0j}}^2\mu_j\right]. \label{densitmatrix RQ} \end{equation} \begin{equation} \mu_j=\coth\frac{\nu_j}{2}. \end{equation} \subsection{The Influence Functional $\mathcal F[\theta,\phi]$} Now, we calculate the density functional $\mathcal F[\theta,\phi]$ explicitly. We will write $\mathbf R$ and $\mathbf Q$ instead of $\mathbf R[\theta]$ and $\mathbf Q[\phi]$ for simplicity, but keep in mind that they depend on $\theta$ and $\phi$, respectively. 
Then, the influence functional can be written as \begin{align} \mathcal F[\theta,\phi]&=\int_{-\infty}^\infty d\mathbf x_t d\mathbf x'_td\mathbf x_0d\mathbf x'_0(1/2)^{\mathcal{N}}\delta(\mathbf x'_t)\rho_B(\mathbf R',\mathbf Q',0)\int_{\mathbf R'}^{\mathbf R}\mathcal D\mathbf R\int_{\mathbf Q'}^{\mathbf Q}\mathcal D\mathbf Q^\ast\nonumber\\ &\times\exp\left\{\frac{i}{\hbar}\int_0^tds\sum_j\left[\frac{m}{2}(\dot R_j^2-\dot Q_j^2)-\frac{1}{2}m\omega_j^2(R_j^2-Q_j^2)+C_j(\theta R_j-\phi Q_j)-\frac{C_j^2}{2m\omega_j^2}(\theta^2-\phi^2)\right]\right\}. \end{align} To evaluate this we introduce the variables \begin{align} \begin{split} \varphi&=\theta+\phi,\qquad \varphi'=\theta-\phi, \\ x_j&=R_j+Q_j,\qquad x'_j=R_j-Q_j. \end{split} \end{align} Then, the influence functional becomes \begin{align*} \mathcal F[\theta,\phi]&=\int_{-\infty}^\infty d\mathbf x_td\mathbf x'_td\mathbf x_0d\mathbf x'_0(1/2)^{\mathcal{N}}\delta(\mathbf x'_t)\rho_B(\mathbf R',\mathbf Q',0)\int\mathcal D\mathbf x\mathcal D \mathbf x' \\ &\times\exp\left\{\frac{i}{\hbar}\int_0^tds\sum_j\left[\frac{m}{2}\dot x_j\dot x'_j-\frac{1}{2}m\omega_j^2x_jx'_j+\frac{C_j}{2}(\varphi x'_j+\varphi'x_j)-\frac{C_j^2}{2m\omega_j^2}\varphi'\varphi\right]\right\} \\ &=\int_{-\infty}^\infty d\mathbf x_td\mathbf x'_td\mathbf x_0d\mathbf x'_0(1/2)^{\mathcal{N}}\delta(\mathbf x'_t)\rho_B(\mathbf R',\mathbf Q',0) \\ &\times\exp\left\{\frac{im}{2\hbar}\sum_j(x_{tj}\dot x'_{tj}-x_{0j}\dot x'_{0j})-\frac{i}{\hbar}\int_0^tds\sum_j\frac{C_j^2}{2m\omega_j^2}\varphi'\varphi\right\} \\ &\times\int\mathcal D\mathbf x\mathcal D \mathbf x'\exp\left\{\frac{i}{\hbar}\int_0^tds\sum_j\left[-g(x'_j)x_j+\frac{C_j}{2}\varphi x_j'\right]\right\} \end{align*} where \begin{equation} g(x_j')=\frac{m}{2}\ddot x'_j+\frac{1}{2}m\omega_j^2x'_j-\frac{C_j}{2}\varphi'. \end{equation} We note that the classical solution $x_{j,cl}'$ satisfies the Euler-Lagrange equation \begin{equation} g(x_{j,cl}')=0,\quad g(x_{j}'(0))=0,\quad g(x_{j}'(t))=0. \end{equation} The path integral over $\mathbf x$ can be done first. Calling this integral $I_{\mathbf x,\mathbf x'}$ we obtain \begin{align} I_{\mathbf x,\mathbf x'}&=\int\mathcal D\mathbf x\mathcal D \mathbf x'\exp\left\{\frac{i}{\hbar}\int_0^tds\sum_j\left[-g(x'_{j})x_{j}+\frac{C_j}{2}\varphi x_{j}'\right]\right\} \\ &=\frac{1}{|\mathcal A_B|^2}\int_{-\infty}^\infty\left(\prod_{n=1}^{N-1}\frac{d\mathbf xd\mathbf x'}{|\mathcal A_B|^2}\right)\prod_{n=1}^N\prod_j\exp\frac{i\epsilon}{\hbar}\left[-g(x_{n,j}')x_{n,j}+\frac{C_j}{2}\varphi x_j'\right] \\ &=\frac{1}{|\mathcal A_B|^2}\int_{-\infty}^\infty\left(\prod_{n=1}^{N-1}\frac{d\mathbf x'}{|\mathcal A_B|^2}\right) \prod_j\left(\prod_{n=1}^{N-1}\delta\left[\frac{\epsilon}{2\pi\hbar}g(x'_{n,j})\right]\right)\prod_{n=1}^N\exp\frac{i\epsilon}{\hbar}\frac{C_j}{2}\varphi x_{n,j}'. \end{align} $|\mathcal A|^{-2}$ is absorbed into the delta functions. The interpretation of the delta function is to insert the classical solution $x'_{j,cl}$ into $x'_j$ which satisfy $g(x'_{j,cl})=0$. The outcome is, \begin{equation} I_{\mathbf x,\mathbf x'}=\frac{1}{|\mathcal A_B|^2}\exp\left(\frac{i}{\hbar}\int_0^tds\sum_j\frac{C_j}{2}\varphi x'_{cl} \right). 
\end{equation} Therefore, the influence functional is \begin{align*} \mathcal F[\theta,\phi]&=\int_{-\infty}^\infty d\mathbf x_td\mathbf x'_td\mathbf x_0d\mathbf x'_0(1/2)^{\mathcal{N}}\delta(\mathbf x'_t)\rho_B(\mathbf R',\mathbf Q',0)\exp\frac{i}{\hbar}\frac{m}{2}\sum_j(x_{tj}\dot x'_{tj}-x_{0j}\dot x'_{0j}) \\ &\times\frac{1}{|\mathcal A_B|^2}\exp\left(\frac{i}{\hbar}\int_0^tds\sum_j\left[-\frac{C_j^2}{2m\omega_j^2}\varphi'\varphi+\frac{C_j}{2}\varphi x_{cl}'\right]\right). \end{align*} The $\mathbf x_t$ integral gives a delta function of $\dot{\mathbf x}'_t$ \begin{align*} \mathcal F[\theta,\phi]&=\int_{-\infty}^\infty d\mathbf x'_td\mathbf x_0d\mathbf x'_0(1/2)^{\mathcal{N}}\delta(\mathbf x'_t)\delta(\dot{\mathbf x}'_t)\rho_B(\mathbf R',\mathbf Q',0) \\ &\times\exp\left(\frac{i}{\hbar}\int_0^tds\sum_j\left[-\frac{C_j^2}{2m\omega_j^2}\varphi'\varphi+\frac{C_j}{2}\varphi x_{cl}'\right]+\frac{i}{\hbar}\frac{m}{2}\sum_jx_{0j}\dot x'_{0j}\right). \end{align*} Insert equation \eqref{densitmatrix RQ} for the density matrix and do the $\mathbf x_0$ integral, then we obtain \begin{align*} \mathcal F[\theta,\phi]&=\int_{-\infty}^\infty d\mathbf x'_td\mathbf x'_0\delta(\mathbf x'_t)\delta(\dot{\mathbf x}'_t) \\ &\times\prod_j\exp\left(\frac{i}{\hbar}\int_0^tds\left[-\frac{C_j^2}{2m\omega_j^2}\varphi'\varphi+\frac{C_j}{2}\varphi x_{cl}'\right]-\frac{m\mu_j\omega_j}{4\hbar}\left(\frac{\dot{x}{'_{0j}}^2}{\omega_j^2}+{x'_{0j}}^2\right)\right). \end{align*} The two delta functions $\delta(\mathbf x'_t)$ and $\delta(\dot{\mathbf x}'_t)$ can be interpreted as boundary conditions on the classical solution of $g(\mathbf x'(s))=0$. The result of doing the remaining integrals would be to substitute the classical solution of $x'_{cl}$, $\dot{x}{'_{0j}}$ and $x'_{0j}$ which satisfy these boundary conditions. The solution to the classical solution of $g(x)=0$, that is \begin{equation} \ddot x'_j+\omega_j^2x'_j-\frac{C_j}{m}\varphi'=0. \end{equation} is \begin{align} x_j(\tau)&=-\int_\tau^t\frac{C_j^2\varphi'(s)}{m\omega_j}\sin\omega_j(\tau-s)ds+x'_{tj}\cos\omega_j(t-\tau)-\frac{\dot x'_{tj}}{\omega_j}\sin\omega_j(t-\tau). \end{align} For our boundary condition this solution reduces to \begin{align} x'_j(\tau)&=-\int_\tau^t\frac{C_j\varphi'(s)}{m\omega_j}\sin\omega_j(\tau-s)ds \\ \dot{x}'_j(\tau)&=-\int_\tau^t\frac{C_j\varphi'(s)}{m}\cos\omega_j(\tau-s)ds \end{align} Therefore, the result of integration is \begin{align*} \mathcal F[\theta,\phi]=\exp\left(-\sum_j\frac{i}{\hbar}\frac{C_j^2}{2m\omega_j}\int_0^t\int_\tau^t\varphi(\tau)\left(\frac{2}{\omega_j}\delta(\tau-s)-\sin\omega_j(\tau-s)\right)\varphi'(s)d\tau ds\right) \\ \times\exp\left(-\sum_j\frac{\mu_j}{4\hbar}\int_0^t\int_0^t\frac{C_j^2}{m\omega_j}\varphi'(\tau)\cos\omega_j(\tau-s)\varphi'(s)dsd\tau\right). \end{align*} Since $s$ and $\tau$ enter into the second integral symmetrically it can be rewritten \begin{equation} \int_0^t\int_0^tE(\tau,s)dsd\tau=2\int_0^t\int_0^\tau E(\tau,s)dsd\tau. \end{equation} The first integral can be written \begin{equation} \int_0^t\int_\tau^tE(\tau,s)dsd\tau=\int_0^t\int_0^\tau E(s,\tau)dsd\tau. 
\end{equation} Then, the influence functional becomes \begin{equation} \mathcal F[\theta,\phi]=\exp i\Phi[\theta,\phi] \end{equation} where the influence phase $\Phi[\theta,\phi]$ can be written as \begin{align} i\Phi[\theta,\phi]&=-\frac{i}{\hbar}\int_0^t\int_0^\tau d\tau ds\varphi(s)\alpha_I(\tau-s)\varphi'(\tau)-\frac{1}{\hbar}\int_0^t\int_0^\tau dsd\tau\varphi'(\tau)\alpha_R(\tau-s)\varphi'(s) \\ \alpha_R(\tau-s)&=\sum_j\frac{C_j^2}{2m\omega_j}\coth\frac{\hbar\omega_j}{2k_BT}\cos\omega_j(\tau-s) \\ \alpha_I(\tau-s)&=\sum_j\frac{C_j^2}{2m\omega_j}\left(\frac{2}{\omega_j}\delta(\tau-s)+\sin\omega_j(\tau-s)\right). \end{align} Therefore, we obtain the density matrix propagator $J(\theta,\phi,t;\theta',\phi',0)$ \begin{align} &J(\theta_\text f,\phi_\text f,t;\theta\text i,\phi_\text i,0) \\ &=\left. \int_{\theta_\text i}^{\theta_\text f}D\theta\int_{\phi_\text i}^{\phi_\text f}D\phi^\ast\exp\left\{\frac{i}{\hbar}S[\varphi^+,\varphi^-]-\Gamma[\varphi^-]\right\} \right|_{\substack{\varphi^+=(\theta+\phi)/2\\\varphi^-=\phi-\theta\quad}}, \label{JDef} \end{align} the classical action $S[\varphi^+,\varphi^-]$ is \begin{align*} S[\varphi,\varphi']&=-\int_0^td\tau I\dot\varphi^+(\tau)\dot\varphi^-(\tau) \\ &+2\int_0^td\tau\int_0^s ds\varphi^-(\tau)\alpha_\text I(\tau-s)\varphi^+(s) \end{align*} and the $\Gamma[\varphi^-]$ is given by \begin{align*} \Gamma[\varphi^-]&=\frac{1}{2\hbar}\int_0^td\tau\int_0^tds\varphi^-(\tau)\alpha_\text R(\tau-s)\varphi^-(s). \end{align*} The path integral in \eqref{JDef} can be done exactly and we obtain \begin{align} J(\theta_\text f,\phi_\text f,t;\theta_\text i,\phi_\text i,0)&=F^2(t)\exp\left(\frac{i}{\hbar}S[\varphi^+_\text{cl},\varphi^-_\text{cl}]-\Gamma[\varphi^-_\text{cl}]\right)\label{JClassical} \end{align} $\varphi^\pm_\text{cl}$ are the classical coordinates obtained from the Euler-Lagrange equation \begin{align} I\ddot\varphi_\text{cl}^-(u)+2\int_u^td\tau\varphi_\text{cl}^-(\tau)\alpha_\text I(\tau-u)=0\label{eqn1} \\ I\ddot\varphi^+_\text{cl}(u)+2\int_0^ud\tau\varphi^+_\text{cl}(\tau)\alpha_\text I(u-\tau)=0\label{eqn2} \end{align} whose solution are given in terms of boundary conditions $\varphi^\pm_\text i=\varphi^\pm(0),\varphi^\pm_\text f=\varphi^\pm(t)$: \begin{align} \varphi^+_\text{cl}(u)&=\kappa_i(u;t)\varphi^+_\text i+\kappa_f(u;t)\varphi^+_\text f \label{varphisolfinal} \\ \varphi^-_\text{cl}(u)&=\kappa_i(t-u;t)\varphi^-_\text f+\kappa_f(t-u;t)\varphi^-_\text i \label{varphiprimesolfinal} \\ \kappa_i(u;t)&=\dot G(u)-\frac{\dot G(t)}{G(t)}G(u), \qquad \kappa_f(u;t)=\frac{G(u)}{G(t)}. \end{align} Then, the classical action reduces to \begin{align} S[\varphi^+_\text{cl},\varphi^-_\text{cl}]&=-I[\dot\varphi^+_\text{cl}(t)\varphi^-_\text f-\dot\varphi^-_\text{cl}(0)\varphi^-_\text i]. \label{ClassicalAction} \end{align} \end{document}
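The boundary-value structure of Eqs.~\eqref{varphisolfinal} and \eqref{varphiprimesolfinal} can be checked directly: provided $G(0)=0$ and $\dot G(0)=1$, the kernels satisfy $\kappa_i(0;t)=1$, $\kappa_f(0;t)=0$, $\kappa_i(t;t)=0$ and $\kappa_f(t;t)=1$, so that $\varphi^\pm_\text{cl}$ interpolate between $\varphi^\pm_\text i$ and $\varphi^\pm_\text f$. A minimal numerical sketch of this check, using an assumed illustrative $G(u)$ (not the one derived in the text), is:
\begin{verbatim}
# Sketch: boundary conditions of the kernels kappa_i, kappa_f in
# Eqs. (varphisolfinal)-(varphiprimesolfinal).
# Assumption (illustrative only): G(u) = (1 - exp(-2*g*u))/(2*g),
# which satisfies G(0) = 0 and G'(0) = 1.
import numpy as np

g, t = 0.3, 2.0                              # assumed damping rate and final time
G    = lambda u: (1.0 - np.exp(-2.0 * g * u)) / (2.0 * g)
Gdot = lambda u: np.exp(-2.0 * g * u)

kappa_i = lambda u: Gdot(u) - Gdot(t) / G(t) * G(u)
kappa_f = lambda u: G(u) / G(t)

phi_i, phi_f = 0.7, -1.3                     # arbitrary boundary data
phi_plus  = lambda u: kappa_i(u) * phi_i + kappa_f(u) * phi_f
phi_minus = lambda u: kappa_i(t - u) * phi_f + kappa_f(t - u) * phi_i

print(np.isclose(phi_plus(0.0),  phi_i), np.isclose(phi_plus(t),  phi_f))
print(np.isclose(phi_minus(0.0), phi_i), np.isclose(phi_minus(t), phi_f))
\end{verbatim}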
\begin{document}
\title[A SYSTEM OF p-LAPLACIAN EQUATIONS ON THE SIERPI\'NSKI GASKET]{A SYSTEM OF p-LAPLACIAN EQUATIONS ON THE SIERPI\'NSKI GASKET}
\author[A. Sahu]{Abhilash Sahu}
\address{Department of Mathematics, Indian Institute of Technology Delhi, Hauz Khas, New Delhi 110016, India}
\email{[email protected]}
\author[A. Priyadarshi]{Amit Priyadarshi}
\address{Department of Mathematics, Indian Institute of Technology Delhi, Hauz Khas, New Delhi 110016, India}
\email{[email protected]}
\subjclass[2010]{Primary 28A80, 35J61}
\keywords{Sierpi\'nski gasket, p-Laplacian, weak solutions, p-energy, Euler functional, system of equations}
\date{}
\begin{abstract}
In this paper we study a system of boundary value problems involving the weak p-Laplacian on the Sierpi\'nski gasket in $\mathbb{R}^2$. The parameters $\lambda, \gamma, \alpha, \beta$ are real and $1<q<p<\alpha+\beta$. The functions $a,b,h : \mathcal{S} \rightarrow \mathbb{R}$ are suitably chosen. For $p>1$ we show the existence of at least two nontrivial weak solutions to the system for some $(\lambda,\gamma) \in \mathbb{R}^2$.
\end{abstract}
\maketitle
\section{Introduction}
In this article we discuss the existence of weak solutions to the following system of boundary value problems on the Sierpi\'nski gasket:
\begin{equation}\label{prob}
\begin{split}
-\Delta_p u &= \lambda a(x)|u|^{q-2} u + \frac{\alpha}{\alpha+\beta}h(x)|u|^{\alpha-2}u |v|^\beta \; \text{in}\; \mathcal{S}\setminus\mathcal{S}_0;\\
-\Delta_p v &= \gamma b(x)|v|^{q-2} v + \frac{\beta}{\alpha+\beta}h(x)|u|^{\alpha}|v|^{\beta-2}v \; \text{in}\; \mathcal{S}\setminus\mathcal{S}_0;\\
u &= v=0\;\mbox{on}\; \mathcal{S}_0,
\end{split}
\end{equation}
where $\mathcal{S}$ is the Sierpi\'nski gasket in $\mathbb{R}^2$, $\mathcal{S}_0$ is the boundary of the Sierpi\'nski gasket and $\Delta_p$ denotes the $p$-Laplacian with $p>1$; these notions are discussed in the next section. We will assume the following hypotheses.
\begin{hyp}\label{hyp_1}
$q, p, \alpha$ and $\beta$ are positive real numbers satisfying $1< q < p < \alpha +\beta$.
\end{hyp}
\begin{hyp}\label{hyp_3}
$a, b, h : \mathcal{S} \to \mathbb{R}$ are functions belonging to $L^1(\mathcal{S},\mu)$, with $a, b, h \geq 0$ and $a, b, h \not\equiv 0$. Also, $\|h\|_1 >0$.
\end{hyp}
\begin{hyp}\label{hyp_2}
$\lambda$ and $\gamma$ are real parameters satisfying $|\lambda|\|a\|_1 + |\gamma|\|b\|_1 < \kappa_0$.
\end{hyp}
\noindent If $(u,v) \in \text{dom}_0(\mathcal{E}_p) \times \text{dom}_0(\mathcal{E}_p)$ satisfies
$$\int\limits_{\mathcal{S}} \lambda a(x)|u|^{q-2} u \phi_1\,\mathrm{d}\mu + \frac{\alpha}{\alpha+\beta}\int\limits_{\mathcal{S}}h(x)|u|^{\alpha-2}u |v|^\beta\phi_1\,\mathrm{d}\mu + \int\limits_{\mathcal{S}} \gamma b(x)|v|^{q-2} v\phi_2\,\mathrm{d}\mu + \frac{\beta}{\alpha+\beta}\int\limits_{\mathcal{S}}h(x)|u|^{\alpha}|v|^{\beta-2}v \phi_2\,\mathrm{d}\mu \in \mathcal{E}_p(u,\phi_1) + \mathcal{E}_p(v,\phi_2) $$
for all $(\phi_1, \phi_2) \in \text{dom}_0(\mathcal{E}_p) \times \text{dom}_0(\mathcal{E}_p)$, then we call $(u,v)$ a weak solution of \eqref{prob}.\\
Differential equations on fractal domains have been of great interest to researchers for the past few decades. We now give a brief review of the literature. In \cite{kiga1,kiga2,str,falc}, the Laplacian is defined on the Sierpi\'nski gasket, and in \cite{kiga3,tang} on some p.c.f. fractals.
Freiberg and Lancia \cite{fl} defined a Laplacian on some non-self-similar fractals. In \cite{sw,hps,koz}, the p-Laplacian is defined on the Sierpi\'nski gasket. Once the Laplacian and the p-Laplacian are defined on fractals, one can address boundary value problems involving them. A vast amount of literature is available for Laplacian operators on fractal domains, in contrast to p-Laplacian operators, which motivated us to study p-Laplacian equations. Falconer and Hu \cite{falc1} studied the problem
\[\Delta u + a(x)u= f(x,u)\]
with zero Dirichlet boundary condition on the Sierpi\'nski gasket $\mathcal{S}$, where $a :\mathcal{S} \to \mathbb{R}$ is integrable and $f:\mathcal{S} \times \mathbb{R} \to \mathbb{R}$ is continuous with some growth conditions near zero and infinity. They used the mountain pass theorem and the saddle point theorem to prove the existence of a weak solution to this problem. Hua and Zhenya studied a semilinear PDE on self-similar fractal sets in \cite{zc1} and a nonlinear PDE on self-similar fractals in \cite{zc2}. In \cite{denisa}, Denisa proved the existence of at least two nontrivial weak solutions for a Dirichlet problem involving the Laplacian on the Sierpi\'nski gasket, using the Ekeland variational principle and critical point theory, and in \cite{denisa2} studied the problem
\begin{align*}
-\Delta u(x) + g(u(x)) &= \lambda f(u(x)),~~~\text{for}~~ x \in \mathcal{S}\setminus\mathcal{S}_0,\\
u(x) &= 0,~~~ \text{for}~~ x \in \mathcal{S}_0.
\end{align*}
Using variational methods, she showed the multiplicity of weak solutions for this problem. For the p-Laplacian on the Sierpi\'nski gasket, Strichartz and Wong \cite{sw} studied the existence of solutions and their numerical computation. Priyadarshi and Sahu \cite{ps} studied the problem
\begin{align*}
-\Delta_p u &= \lambda a(x)|u|^{q-1}u + b(x)|u|^{l-1}u ~~ \text{in}~ \mathcal{S}\setminus\mathcal{S}_0, \\
u &= 0 ~~ \text{on}~ \mathcal{S}_0,
\end{align*}
and showed the existence of two solutions under the assumptions that $p>1$, $0< q < p-1 < l$, and $a,b : \mathcal{S} \to \mathbb{R}$ are bounded nonnegative functions, for a small range of $\lambda$. Many authors have addressed problems on fractal domains (see, for example, \cite{falc,br,koz}). Now we give a brief review of the work done so far on systems of Laplacian and (p,q)-Laplacian equations on regular domains; the problem \eqref{prob} is motivated by these works. Cl\'ement et al. \cite{cfm} studied the system $-\Delta u = f(v)$, $-\Delta v = g(u)$ in $\Omega$ with $u=v=0$ on $\partial\Omega$, where $f,g \in \mathcal{C}(\mathbb{R})$ are nondecreasing functions with $f(0) = g(0) = 0$. They showed the existence of a positive solution under some additional suitable conditions.
\begin{equation}\label{sys-2}
\begin{cases}
-\Delta_p u = \lambda a(x)|u|^{p_1-2}u + (\alpha+1)c(x) |u|^{\alpha-1}u|v|^{\beta+1}, & \mbox{if } x \in \Omega \\
-\Delta_q v = \mu b(x)|v|^{q-2}v + (\beta+1)c(x) |u|^{\alpha+1}|v|^{\beta-1}v, & \mbox{if } x \in \Omega \\
\end{cases}
\end{equation}
Bozhkov and Mitidieri \cite{bm} studied existence and non-existence results for the quasilinear system \eqref{sys-2} with the boundary condition $u = v = 0 \mbox{ if } x \in \partial\Omega$ and with $p_1=p$. Here $\alpha, \beta, \lambda, \mu, p>1, q>1$ are real numbers, $\Delta_p$ and $\Delta_q$ are the p- and q-Laplace operators, respectively, and $a(x), b(x), c(x)$ are suitably chosen functions.
Using the fibering method introduced by Pohozaev, they showed the existence of multiple solutions to the problem. Adriouch and Hamidi \cite{ah} studied the problem \eqref{sys-2} with Dirichlet or mixed boundary conditions, under some hypotheses on the parameters $p,p_1,\alpha,\beta$ and $q$. They obtained existence and multiplicity results, with respect to the real parameters $\lambda,\mu$, with the help of Palais-Smale sequences in the Nehari manifold. In \cite{dt}, Djellit and Tas studied the problem
\begin{equation}\label{sys-3}
\begin{cases}
-\Delta_p u = \lambda f(x,u,v), & \mbox{if } x \in \mathbb{R}^N \\
-\Delta_q v = \mu g(x,u,v), & \mbox{if } x \in \mathbb{R}^N,
\end{cases}
\end{equation}
where $\lambda, \mu, p, q$ are positive real numbers satisfying $2 \leq p,q<N$, $\Delta_p$ and $\Delta_q$ are the $p$- and $q$-Laplacians, respectively, and $f,g : \mathbb{R}^N \times \mathbb{R} \times \mathbb{R} \to \mathbb{R}$. Under some hypotheses they showed, using fixed point theorems, that the system has solutions. J. Zhang and Z. Zhang \cite{zz} studied the problem \eqref{sys-3} for $p=q=2$ on a bounded smooth open subset $\Omega$ of $\mathbb{R}^N$, with $u(x)=v(x)=0 \mbox{ if } x \in \partial\Omega$, where $f$ and $g$ are Carath\'eodory functions. They used variational methods to show the existence of weak solutions. T-F Wu \cite{wu7} studied the following problem:
\begin{equation}\label{sys}
\begin{cases}
-\Delta u = \lambda f(x) |u|^{q-2}u + \frac{\alpha}{\alpha+\beta} h(x) |u|^{\alpha-2}u |v|^{\beta}, & x \in \Omega \\
-\Delta v = \mu g(x) |v|^{q-2}v + \frac{\beta}{\alpha+\beta} h(x) |u|^{\alpha}|v|^{\beta-2}v, & x \in \Omega \\
u = v = 0, & x \in \partial\Omega,
\end{cases}
\end{equation}
where $1<q<2$, $\alpha >1$, $\beta >1$ with $2< \alpha + \beta < 2^*$ ($2^* = \frac{2N}{N-2}$ if $N \geq 3$, $2^* = \infty$ if $N = 2$), $f,g \in L^{p^*}$ and $h \in \mathcal{C}(\overline{\Omega})$ with $\|h\|_{\infty} = 1$. Under these assumptions they showed that the system \eqref{sys} has at least two nontrivial nonnegative solutions for some $(\lambda,\mu) \in \mathbb{R}^2$. A more general version of a system of p-Laplacian equations was studied by Afrouzi and Rasouli \cite{ar}. A similar kind of problem, involving derivative boundary conditions, was addressed by Brown and Wu \cite{bw2} and by F-Y Lu \cite{lu}; in both articles the systems are shown to have at least two solutions. Many authors have addressed problems on systems of equations (see, for example, \cite{ams,velin,hamidi,as,yw,wu6,cw}). Following these lines of work, we study the problem \eqref{prob} on the Sierpi\'nski gasket. However, there is no widely familiar notion of the Laplacian and the p-Laplacian on the Sierpi\'nski gasket, so we first clarify these notions and then define weak solutions to our problem \eqref{prob}; this is done in the next section. Once the Laplacian is defined, one generally constructs a Hilbert space and then establishes compactness theorems and minimax theorems to study PDEs, which is not the case here. The function space considered here, that is, $\text{dom}(\mathcal{E}_p)\times \text{dom}(\mathcal{E}_p)$, is not even known to be reflexive. So, extraction of a weakly convergent subsequence from a bounded sequence is not possible here. Also, the difficulty increases because the Euler functional associated to \eqref{prob} is not differentiable.
Overcoming all these issues, we prove the existence of at least two nontrivial weak solutions to \eqref{prob} for some $(\lambda,\gamma) \in \mathbb{R}^2$. The outline of our paper is as follows. In Section 2 we discuss the weak $p$-Laplacian on the Sierpi\'nski gasket and describe how one passes from the energy functional $\mathcal{E}_p(u)$ to the energy form $\mathcal{E}_p(u,v)$. We recall some important results and state our main theorem. In Section 3 we define the Euler functional $I_{\lambda,\gamma}$ associated to our problem \eqref{prob}, define the fibering map $\Phi_{u,v}$, and find a suitable subset of $\mathbb{R}^2$ for which problem \eqref{prob} has at least two nontrivial solutions. Finally, in Section 4 we give the detailed proof of the theorem stated in Section 2.
\section{Preliminaries and Main results}
We will work on the Sierpi\'nski gasket in $\mathbb{R}^2$. Let $\mathcal{S}_0 = \{q_1, q_2, q_3\}$ be three points in $\mathbb{R}^2$ equidistant from each other. Let $F_i(x) = \frac{1}{2}(x-q_i) + q_i$ for $i= 1,2,3$ and $F(A) = \cup_{i=1}^3F_i(A)$. It is well known that $F$ has a unique fixed point $\mathcal{S}$, that is, $\mathcal{S} = F(\mathcal{S})$ (see, for instance, \cite[Theorem 9.1]{KF}), which is called the Sierpi\'nski gasket. Another way to view the Sierpi\'nski gasket is $\mathcal{S} = \overline{\cup_{j \geq 0}F^{j}(\mathcal{S}_0)}$, where $F^j$ denotes $F$ composed with itself $j$ times. We know that $\mathcal{S}$ is a compact set in $\mathbb{R}^2$, and we will use certain properties of functions on $\mathcal{S}$ that follow from the compactness of the domain. It is well known that the Hausdorff dimension of $\mathcal{S}$ is $\frac{\ln 3}{\ln 2}$ and that the $\frac{\ln 3}{\ln 2}$-dimensional Hausdorff measure is finite ($0<\mathcal{H}^{\frac{\ln 3}{\ln 2}}(\mathcal{S})<\infty$) (see \cite[Theorem 9.3]{KF}). Throughout this paper, we will use this measure. If $f$ is a measurable function on $\mathcal{S}$ then $\|f\|_1 \coloneqq \int_{\mathcal{S}}|f(x)| \,\mathrm{d}\mu$, and $L^1(\mathcal{S},\mu) \coloneqq \{[f] : f~ \text{is measurable}, \|f\|_1 < \infty\}$ is a Banach space. We define the $p$-energy with the help of a three-variable real-valued function $A_p$ which is convex, homogeneous of degree $p$, invariant under addition of constants and under permutation of indices, and satisfies the Markov property. This $A_p$ is determined by an even function $g(x)$ defined on $[0, 1]$; in fact, $g(x) = A_p(-1, x, 1)$ for $-1 \leq x \leq 1$. From the properties of $A_p$ we can write
\begin{equation*}
A_p(a_1,a_2,a_3) = \left|\frac{a_3-a_1}{2}\right|^p g\left(\frac{2a_2-a_1-a_3}{a_3-a_1}\right)
\end{equation*}
for $a_1\leq a_2 \leq a_3$ with $a_1 \neq a_3$. The $m^{\text{th}}$ level Sierpi\'nski gasket is $\mathcal{S}^{(m)} = \cup_{j=0}^m F^j(\mathcal{S}_0)$. We construct the $m^{\text{th}}$ level crude energy as
$$E_p^{(m)}(u) = \sum_{|\omega| = m} A_p\left(u(F_\omega q_1), u(F_\omega q_2), u(F_\omega q_3)\right)$$
and the $m^{\text{th}}$ level renormalized $p$-energy is given by
$$\mathcal{E}_p^{(m)}(u) = (r_p)^{-m} E_p^{(m)}(u),$$
where $r_p$ is the unique renormalizing factor (depending on $p$ but independent of $A_p$) with $0 < r_p <1$. For more details, see \cite{hps}. We can now observe that $\mathcal{E}_p^{(m)}(u)$ is a monotonically increasing function of $m$ because of the renormalization; a small numerical illustration for the standard case $p=2$ is given below.
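For the reader's convenience, the following short script illustrates this construction numerically in the standard case $p=2$, taking $A_2(a_1,a_2,a_3)=(a_1-a_2)^2+(a_2-a_3)^2+(a_3-a_1)^2$ and $r_2=3/5$ (this concrete choice is made only for illustration; it is not the general $A_p$ and $r_p$ discussed above). The renormalized energies it prints are non-decreasing in $m$.
\begin{verbatim}
# Sketch (p = 2 only): m-th level renormalized energy on the Sierpinski gasket,
# with A_2(a,b,c) = (a-b)^2 + (b-c)^2 + (c-a)^2 on each cell and r_2 = 3/5.
import numpy as np

q = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.5, np.sqrt(3) / 2])]

def subdivide(cell):
    # split one cell (a triple of vertices) into its three sub-cells
    v0, v1, v2 = cell
    m01, m12, m02 = (v0 + v1) / 2, (v1 + v2) / 2, (v0 + v2) / 2
    return [(v0, m01, m02), (m01, v1, m12), (m02, m12, v2)]

def renormalized_energy(u, m):
    cells = [tuple(q)]
    for _ in range(m):
        cells = [sub for cell in cells for sub in subdivide(cell)]
    crude = sum((u(a) - u(b))**2 + (u(b) - u(c))**2 + (u(c) - u(a))**2
                for a, b, c in cells)
    return (5.0 / 3.0)**m * crude          # r_2^{-m} * E_2^{(m)}(u)

u = lambda x: x[0] * x[1]   # restriction of a smooth (non-harmonic) function
print([round(renormalized_energy(u, m), 4) for m in range(5)])
\end{verbatim}
Replacing $u$ by a harmonic function (extended level by level via the well-known $\tfrac15$--$\tfrac25$ rule for the standard form) would make the printed sequence constant, which is the energy-minimizing case.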
So we define the $p$-energy function as
$$\mathcal{E}_p(u) = \lim\limits_{m \to \infty} \mathcal{E}_p^{(m)}(u), $$
which exists for all $u$ as an extended real number. We define $\text{dom}(\mathcal{E}_p)$ as the space of continuous functions $u$ satisfying $\mathcal{E}_p(u) < \infty$. In \cite{hps}, it is shown that $\text{dom}(\mathcal{E}_p)$ modulo constant functions forms a Banach space endowed with the norm $\|\cdot\|_{\mathcal{E}_p}$ defined as
$$\|u\|_{\mathcal{E}_p} = \mathcal{E}_p(u)^{1/p}.$$
We now define the energy form from the energy function as follows:
\begin{equation}\label{eq-6}
\mathcal{E}_p(u,v) \coloneqq \frac{1}{p}~\left.\frac{\mathrm d}{\mathrm d t} \mathcal{E}_p(u+tv)\right|_{t=0}.
\end{equation}
Note that we do not know whether $\mathcal{E}_p(u+tv)$ is differentiable in $t$, but we do know, by the convexity of $A_p$, that $\mathcal{E}_p(u)$ is a convex function. So we interpret equation \eqref{eq-6} as an interval-valued equation. That is,
$$\mathcal{E}_p(u,v) = [\mathcal{E}^-_p(u,v), \mathcal{E}^+_p(u,v)]$$
is a nonempty compact interval whose end points are the one-sided derivatives. It satisfies the following properties:
\begin{enumerate}[(i)]
\item $\mathcal{E}_p(u,av) = a~\mathcal{E}_p(u,v)$
\item $\mathcal{E}_p(u,v_1 + v_2) \subseteq \mathcal{E}_p(u,v_1) + \mathcal{E}_p(u,v_2)$
\item $\mathcal{E}_p(u,u) = \mathcal{E}_p(u)$
\end{enumerate}
We recall some results which will be required later.
\begin{lemma}\cite[Lemma 3.2]{sw}\label{lem-2}
There exists a constant $K_p$ such that for all $u \in \text{dom}(\mathcal{E}_p)$ we have
\begin{equation*}
|u(x) - u(y)| \leq K_p \mathcal{E}_p(u)^{1/p}(r_p^{1/p})^m
\end{equation*}
whenever $x$ and $y$ belong to the same or adjacent cells of order $m$.
\end{lemma}
Let $\text{dom}_0(\mathcal{E}_p)$ be the subspace of $\text{dom}(\mathcal{E}_p)$ consisting of all functions which vanish on the boundary. Also, define the Banach space $\text{dom}_0(\mathcal{E}_p)\times \text{dom}_0(\mathcal{E}_p)$ with norm $\|(u,v)\|_{\mathcal{E}_p} \coloneqq (\|u\|_{\mathcal{E}_p}^p + \|v\|_{\mathcal{E}_p}^p)^{1/p}$. We will often use the following inequality:
\begin{equation}\label{norm}
\|u\|_{\mathcal{E}_p} + \|v\|_{\mathcal{E}_p} \leq \|(u,v)\|_{\mathcal{E}_p} = (\|u\|_{\mathcal{E}_p}^p + \|v\|_{\mathcal{E}_p}^p)^{1/p}.
\end{equation}
\begin{lemma}\label{lem-5}
If $u \in \text{dom}_0(\mathcal{E}_p)$ then there exists a real positive constant $K$ such that $\|u\|_\infty \leq K \|u\|_{\mathcal{E}_p}$.
\end{lemma}
\begin{proof}
We can connect any point of $\cup_{j \geq 0}F^{j}(\mathcal{S}_0)$ to a boundary point by a string of points. Since the boundary values are zero, using the triangle inequality, Lemma \ref{lem-2} and the fact that $0<r_p<1$, we get the result.
\end{proof}
Now we define a weak solution for the problem \eqref{prob}.
\begin{definition}
We say that $(u,v) \in \text{dom}_0(\mathcal{E}_p) \times \text{dom}_0(\mathcal{E}_p)$ is a weak solution to the problem \eqref{prob} if it satisfies
\begin{equation*}
\begin{split}
&\int\limits_{\mathcal{S}} \lambda a(x)|u|^{q-2} u \phi_1\mathrm{d}\mu + \frac{\alpha}{\alpha+\beta}\int\limits_{\mathcal{S}}h(x)|u|^{\alpha-2}u |v|^\beta\phi_1\mathrm{d}\mu + \int\limits_{\mathcal{S}} \gamma b(x)|v|^{q-2} v\phi_2\mathrm{d}\mu + \frac{\beta}{\alpha+\beta}\int\limits_{\mathcal{S}}h(x)|u|^{\alpha}|v|^{\beta-2}v \phi_2\mathrm{d}\mu \\
&\quad \in \mathcal{E}_p(u,\phi_1) + \mathcal{E}_p(v,\phi_2)
\end{split}
\end{equation*}
for all $(\phi_1, \phi_2) \in \text{dom}_0(\mathcal{E}_p) \times \text{dom}_0(\mathcal{E}_p)$.
\end{definition}
Our main result is the following.
\begin{theorem}\label{main}
There exists a $\kappa_0 >0$ such that, under the hypotheses H\ref{hyp_1}, H\ref{hyp_3} and H\ref{hyp_2}, the problem \eqref{prob} has at least two nontrivial weak solutions.
\end{theorem}
\section{Euler functional $I_{\lambda,\gamma}$, Fibering map $\Phi_{u,v}$ and their analysis}
Let $\lambda, \gamma$ be real parameters and let $q,p, \alpha, \beta$ be positive real numbers satisfying $1 <q<p<\alpha +\beta$. The Euler functional associated with the problem \eqref{prob} is defined, for all $(u,v) \in \text{dom}_0(\mathcal{E}_p) \times \text{dom}_0(\mathcal{E}_p)$, as
$$ I_{\lambda,\gamma}(u,v) = \frac{1}{p}\|(u,v)\|_{\mathcal{E}_p}^p - \frac{1}{q} \left(\int\limits_{\mathcal{S}} \lambda a(x)|u|^{q} \mathrm{d}\mu + \int\limits_{\mathcal{S}} \gamma b(x)|v|^{q} \mathrm{d}\mu\right)- \frac{1}{\alpha +\beta} \int\limits_{\mathcal{S}} h(x)|u|^{\alpha}|v|^{\beta} \mathrm{d}\mu.$$
We do not know whether $I_{\lambda,\gamma}$ is bounded below on $\text{dom}_0(\mathcal{E}_p) \times \text{dom}_0(\mathcal{E}_p)$, so we will consider a set on which it is bounded below and carry out our analysis there. Consider the set
\begin{align*}
&\mathcal{M}_{\lambda,\gamma} \\
&\quad = \left\{ (u,v) \in \text{dom}_0(\mathcal{E}_p) \times \text{dom}_0(\mathcal{E}_p)\setminus\{(0,0)\} : \|(u,v)\|_{\mathcal{E}_p}^p - \int\limits_{\mathcal{S}} \lambda a(x)|u|^{q} \mathrm{d}\mu - \int\limits_{\mathcal{S}} \gamma b(x)|v|^{q}\mathrm{d}\mu - \int\limits_{\mathcal{S}} h(x)|u|^{\alpha}|v|^{\beta} \mathrm{d}\mu = 0 \right\}.
\end{align*}
This means that $(u, v) \in \mathcal{M}_{\lambda,\gamma}$ if and only if
\begin{equation}\label{eq-1}
\|(u,v)\|_{\mathcal{E}_p}^p - \int\limits_{\mathcal{S}} \lambda a(x)|u|^{q} \mathrm{d}\mu - \int\limits_{\mathcal{S}} \gamma b(x)|v|^{q}\mathrm{d}\mu - \int\limits_{\mathcal{S}} h(x)|u|^{\alpha}|v|^{\beta} \mathrm{d}\mu = 0.
\end{equation}
Using equation \eqref{eq-1}, we get the following identity on $\mathcal{M}_{\lambda,\gamma}$:
\begin{equation}\label{eq-2}
\begin{split}
I_{\lambda,\gamma}(u,v) &= \left(\frac{1}{p}-\frac{1}{\alpha+\beta}\right)\|(u,v)\|_{\mathcal{E}_p}^p - \left(\frac{1}{q} - \frac{1}{\alpha+\beta}\right) \left(\int\limits_{\mathcal{S}} \lambda a(x)|u|^{q} \mathrm{d}\mu +\int\limits_{\mathcal{S}} \gamma b(x)|v|^{q} \mathrm{d}\mu \right).
\end{split}
\end{equation}
At the same time we also obtain
\begin{equation}\label{eq-3}
I_{\lambda,\gamma}(u,v) = \left(\frac{1}{p}- \frac{1}{q}\right)\|(u,v)\|_{\mathcal{E}_p}^p + \left(\frac{1}{q} - \frac{1}{\alpha +\beta}\right)\int\limits_{\mathcal{S}} h(x)|u|^{\alpha}|v|^{\beta} \mathrm{d}\mu.
\end{equation}
For any $(u,v) \in \text{dom}_0(\mathcal{E}_p) \times \text{dom}_0(\mathcal{E}_p)$, we now define the map $\Phi_{u,v} : (0,\infty) \to \mathbb{R}$ by $\Phi_{u,v}(t) = I_{\lambda,\gamma}(tu,tv)$, that is,
$$\Phi_{u,v}(t) = \frac{t^p}{p}\|(u,v)\|_{\mathcal{E}_p}^p - \frac{t^q}{q} \left(\int\limits_{\mathcal{S}} \lambda a(x)|u|^{q} \mathrm{d}\mu + \int\limits_{\mathcal{S}} \gamma b(x)|v|^{q} \mathrm{d}\mu\right)- \frac{t^{\alpha +\beta}}{\alpha +\beta} \int\limits_{\mathcal{S}} h(x)|u|^{\alpha}|v|^{\beta} \mathrm{d}\mu.$$
As $\Phi_{u,v}$ is a smooth function of $t$, we deduce the following lemma.
\begin{lemma}\label{lem-1}
$(tu,tv) \in \mathcal{M}_{\lambda,\gamma}$ if and only if $\Phi'_{tu,tv}(1) = 0$, or equivalently $\Phi'_{u,v}(t) = 0$.
\end{lemma}
To make our study easier, we subdivide $\mathcal{M}_{\lambda,\gamma}$ into the sets corresponding to local minima, local maxima and points of inflection of $\Phi_{u,v}$ at $t=1$. Hence we define the sets
\begin{align*}
\mathcal{M}_{\lambda,\gamma}^+ &= \{(u,v) \in \mathcal{M}_{\lambda,\gamma} : \Phi_{u,v}''(1) > 0\}, \\
\mathcal{M}_{\lambda,\gamma}^0 &= \{(u,v) \in \mathcal{M}_{\lambda,\gamma} : \Phi_{u,v}''(1) = 0\}~ \text{and} \\
\mathcal{M}_{\lambda,\gamma}^- &= \{(u,v) \in \mathcal{M}_{\lambda,\gamma} : \Phi_{u,v}''(1) < 0\}.
\end{align*}
\begin{definition}
Let $(\mathcal{B},\|\cdot\|_\mathcal{B})$ be a Banach space.
A functional $f : \mathcal{B} \to \mathbb{R}$ is said to be coercive if $f(x) \to \infty$ as $\|x\|_{\mathcal{B}} \to \infty$.
\end{definition}
The next result will be crucial in the proof of the main result.
\begin{theorem}
$I_{\lambda,\gamma}$ is coercive and bounded below on $\mathcal{M}_{\lambda,\gamma}$.
\end{theorem}
\begin{proof}
From \eqref{eq-2}, using $1<q<p<\alpha +\beta$, the continuity of $u$ and $v$, $a,b \in L^1(\mathcal{S},\mu)$ and Lemma \ref{lem-5}, we get
\begin{align*}
I_{\lambda,\gamma}(u,v) &= \left(\frac{1}{p}-\frac{1}{\alpha+\beta}\right)\|(u,v)\|_{\mathcal{E}_p}^p - \left(\frac{1}{q} - \frac{1}{\alpha+\beta}\right) \left(\int\limits_{\mathcal{S}} \lambda a(x)|u|^{q} \mathrm{d}\mu +\int\limits_{\mathcal{S}} \gamma b(x)|v|^{q} \mathrm{d}\mu \right)\\
&\geq \left(\frac{1}{p}-\frac{1}{\alpha+\beta}\right)\|(u,v)\|_{\mathcal{E}_p}^p - \left(\frac{1}{q} - \frac{1}{\alpha+\beta}\right) \left( |\lambda| \|a\|_1 \|u\|_\infty^{q} + |\gamma| \|b\|_1 \|v\|_\infty^{q} \right)\\
&\geq \left(\frac{1}{p}-\frac{1}{\alpha+\beta}\right)\|(u,v)\|_{\mathcal{E}_p}^p - \left(\frac{1}{q} - \frac{1}{\alpha+\beta}\right) \left( |\lambda| K^q \|a\|_1 \|u\|_{\mathcal{E}_p}^{q} + |\gamma| K^q\|b\|_1 \|v\|_{\mathcal{E}_p}^{q} \right)\\
&\geq \left(\frac{1}{p}-\frac{1}{\alpha+\beta}\right)\|(u,v)\|_{\mathcal{E}_p}^p - \left(\frac{1}{q} - \frac{1}{\alpha+\beta}\right)K^q \left( |\lambda| \|a\|_1 + |\gamma| \|b\|_1 \right)(\|u\|_{\mathcal{E}_p}^{q} + \|v\|_{\mathcal{E}_p}^{q})\\
&\geq \left(\frac{1}{p}-\frac{1}{\alpha+\beta}\right)\|(u,v)\|_{\mathcal{E}_p}^p - \left(\frac{1}{q} - \frac{1}{\alpha+\beta}\right)K^q \left( |\lambda| \|a\|_1 + |\gamma| \|b\|_1 \right)(\|u\|_{\mathcal{E}_p} + \|v\|_{\mathcal{E}_p})^q\\
&\geq \left(\frac{1}{p}-\frac{1}{\alpha+\beta}\right)\|(u,v)\|_{\mathcal{E}_p}^p - \left(\frac{1}{q} - \frac{1}{\alpha+\beta}\right)K^q \left( |\lambda| \|a\|_1 + |\gamma| \|b\|_1 \right)\|(u,v)\|_{\mathcal{E}_p}^q.
\end{align*}
Hence, we conclude that $I_{\lambda,\gamma}(u,v)$ is coercive and bounded below.
\end{proof}
Now we study the map $\Phi_{u,v}$ in relation to our problem. Consider
\begin{align*}
\Phi_{u,v}'(t) &= t^{p-1}\|(u,v)\|_{\mathcal{E}_p}^p - t^{q-1} \left(\int\limits_{\mathcal{S}} \lambda a(x)|u|^{q} \mathrm{d}\mu + \int\limits_{\mathcal{S}} \gamma b(x)|v|^{q} \mathrm{d}\mu\right)- t^{\alpha +\beta-1} \int\limits_{\mathcal{S}} h(x)|u|^{\alpha}|v|^{\beta} \mathrm{d}\mu \\
&= t^{q-1}\left(t^{p-q}\|(u,v)\|_{\mathcal{E}_p}^p - \int\limits_{\mathcal{S}} \lambda a(x)|u|^{q} \mathrm{d}\mu - \int\limits_{\mathcal{S}} \gamma b(x)|v|^{q} \mathrm{d}\mu - t^{\alpha +\beta-q} \int\limits_{\mathcal{S}} h(x)|u|^{\alpha}|v|^{\beta} \mathrm{d}\mu\right) \\
&= t^{q-1} \left(M_{u,v}(t) - \int\limits_{\mathcal{S}} \lambda a(x)|u|^{q} \mathrm{d}\mu - \int\limits_{\mathcal{S}} \gamma b(x)|v|^{q} \mathrm{d}\mu \right),
\end{align*}
where we define
$$M_{u,v}(t) \coloneqq t^{p-q}\|(u,v)\|_{\mathcal{E}_p}^p - t^{\alpha +\beta-q} \int\limits_{\mathcal{S}} h(x)|u|^{\alpha}|v|^{\beta} \mathrm{d}\mu.
$$
We can see that, for $t>0$, $(tu,tv) \in \mathcal{M}_{\lambda,\gamma}$ if and only if $t$ is a solution of the equation
\begin{equation}\label{eq-8}
M_{u,v}(t) = \int\limits_{\mathcal{S}} \lambda a(x)|u|^{q} \mathrm{d}\mu + \int\limits_{\mathcal{S}} \gamma b(x)|v|^{q} \mathrm{d}\mu.
\end{equation}
Further,
\begin{equation}\label{eq-7}
\begin{split}
M_{u,v}'(t) &= (p-q) t^{p-q-1}\|(u,v)\|_{\mathcal{E}_p}^p - (\alpha+\beta-q)t^{\alpha+\beta-q-1}\int\limits_{\mathcal{S}}h(x)|u|^{\alpha}|v|^{\beta} \mathrm{d}\mu\\
&= t^{-q-1}\left((p-q) t^{p}\|(u,v)\|_{\mathcal{E}_p}^p - (\alpha+\beta-q)t^{\alpha+\beta}\int\limits_{\mathcal{S}}h(x)|u|^{\alpha}|v|^{\beta} \mathrm{d}\mu\right).
\end{split}
\end{equation}
We describe the behaviour of $M_{u,v}(t)$, depending on the sign of $\int\limits_{\mathcal{S}}h(x)|u|^{\alpha}|v|^{\beta} \mathrm{d}\mu$, as follows.\\
\underline{Case I} $\int\limits_{\mathcal{S}}h(x)|u|^{\alpha}|v|^{\beta}\mathrm{d}\mu \leq 0.$ Then from \eqref{eq-7} we see that $M_{u,v}'(t) > 0$ for all $t > 0$. Hence $M_{u,v}(t)$ is an increasing function.\\
\underline{Case II} $\int\limits_{\mathcal{S}}h(x)|u|^{\alpha}|v|^{\beta}\mathrm{d}\mu > 0.$ Then $M_{u,v}(t) \to -\infty$ as $t \to \infty$, and $M_{u,v}'(t)=0$ has a unique positive solution $\tilde{t}= \left(\frac{(p-q)\|(u,v)\|_{\mathcal{E}_p}^p}{(\alpha+\beta-q)\int\limits_{\mathcal{S}}h(x)|u|^{\alpha}|v|^{\beta}\mathrm{d}\mu}\right)^\frac{1}{\alpha+\beta-p}$. A short calculation shows that $M_{u,v}''(\tilde{t}) <0$, so this is a local maximum.
\begin{figure}
\begin{center}
\begin{subfigure}[b]{0.35\textwidth}
\resizebox{\linewidth}{!}{
\begin{tikzpicture}[domain=0:4]
\draw[->] (-0.2,0) -- (5,0) node[right] {$t$};
\draw[dashed] (0,1) -- (5,1) ;
\draw[dashed] (0,1) -- (0,1) node[left] {$X$};
\draw[->] (0,-0.2) -- (0,4) node[left] {$M_{u,v}(t)$};
\draw[color=black] plot (\x,.05*\x*\x*\x + .3*\x) ;
\end{tikzpicture}
}
\caption{$\int_{\mathcal S} h(x)|u|^\alpha|v|^\beta \mathrm{d}\mu \leq 0$ }
\label{fig-A}
\end{subfigure}
\hspace{.5in}
\begin{subfigure}[b]{0.35\textwidth}
\resizebox{\linewidth}{!}{
\begin{tikzpicture}[domain=0:4]
\draw[->] (-0.2,0) -- (5,0) node[right] {$t$};
\draw[dashed] (0,1) -- (5,1) ;
\draw[dashed] (0,1) -- (0,1) node[left] {$X$} ;
\draw[->] (0,-0.2) -- (0,4) node[left] {$M_{u,v}(t)$};
\draw[color=black] plot (\x,1.2*\x*\x-0.305*\x*\x*\x) ;
\end{tikzpicture}
}
\caption{$\int_{\mathcal S} h(x)|u|^\alpha|v|^\beta \mathrm{d}\mu > 0$}
\label{fig-B}
\end{subfigure}
\caption{Possible forms of $M_{u,v}(t)$}
\end{center}
\end{figure}
\noindent Hence, writing $\tilde{t} = t_{max}$, we get
\begin{align*}
M_{u,v}(t_{max}) &= \left(\frac{(p-q)\|(u,v)\|_{\mathcal{E}_p}^p}{(\alpha+\beta-q)\int\limits_{\mathcal{S}}h(x)|u|^{\alpha}|v|^{\beta}\mathrm{d}\mu}\right)^\frac{p-q}{\alpha+\beta-p}\|(u,v)\|_{\mathcal{E}_p}^p \\
&\quad - \left(\frac{(p-q)\|(u,v)\|_{\mathcal{E}_p}^p}{(\alpha+\beta-q)\int\limits_{\mathcal{S}}h(x)|u|^{\alpha}|v|^{\beta}\mathrm{d}\mu}\right)^\frac{\alpha +\beta-q}{\alpha+\beta-p} \int\limits_{\mathcal{S}} h(x)|u|^{\alpha}|v|^{\beta} \mathrm{d}\mu \\
&= \left(\frac{p-q}{\alpha+\beta-q}\right)^\frac{p-q}{\alpha+\beta-p}
\|(u,v)\|_{\mathcal{E}_p}^\frac{p(\alpha+\beta-q)}{\alpha+\beta-p}\left(\frac{1}{\int\limits_{\mathcal{S}}h(x)|u|^{\alpha}|v|^{\beta}\mathrm{d}\mu}\right)^\frac{p-q}{\alpha+\beta-p}\\
&\quad - \left(\frac{p-q}{\alpha+\beta-q}\right)^\frac{\alpha+\beta-q}{\alpha+\beta-p} \|(u,v)\|_{\mathcal{E}_p}^\frac{p(\alpha+\beta-q)}{\alpha+\beta-p}\left(\frac{1}{\int\limits_{\mathcal{S}}h(x)|u|^{\alpha}|v|^{\beta}\mathrm{d}\mu}\right)^\frac{p-q}{\alpha+\beta-p}\\
& = \|(u,v)\|_{\mathcal{E}_p}^q \left(\left(\frac{p-q}{\alpha+\beta-q}\right)^\frac{p-q}{\alpha+\beta-p} - \left(\frac{p-q}{\alpha+\beta-q}\right)^\frac{\alpha+\beta-q}{\alpha+\beta-p}\right) \left(\frac{\|(u,v)\|_{\mathcal{E}_p}^{\alpha+\beta}}{\int\limits_{\mathcal{S}}h(x)|u|^{\alpha}|v|^{\beta}\mathrm{d}\mu}\right)^\frac{p-q}{\alpha+\beta-p}\\
&\geq \|(u,v)\|_{\mathcal{E}_p}^q \left(\left(\frac{p-q}{\alpha+\beta-q}\right)^\frac{p-q}{\alpha+\beta-p} - \left(\frac{p-q}{\alpha+\beta-q}\right)^\frac{\alpha+\beta-q}{\alpha+\beta-p}\right) \left(\frac{\|(u,v)\|_{\mathcal{E}_p}^{\alpha+\beta}}{\|h\|_1K^{\alpha+\beta}\|u\|_{\mathcal{E}_p}^{\alpha}\|v\|_{\mathcal{E}_p}^{\beta}}\right)^\frac{p-q}{\alpha+\beta-p}\\
&\geq \|(u,v)\|_{\mathcal{E}_p}^q \left(\left(\frac{p-q}{\alpha+\beta-q}\right)^\frac{p-q}{\alpha+\beta-p} - \left(\frac{p-q}{\alpha+\beta-q}\right)^\frac{\alpha+\beta-q}{\alpha+\beta-p}\right) \left(\frac{\|(u,v)\|_{\mathcal{E}_p}^{\alpha+\beta}}{\|h\|_1K^{\alpha+\beta}(\|u\|_{\mathcal{E}_p}+\|v\|_{\mathcal{E}_p})^{\alpha+\beta}}\right)^\frac{p-q}{\alpha+\beta-p}\\
&\geq \|(u,v)\|_{\mathcal{E}_p}^q \left(\left(\frac{p-q}{\alpha+\beta-q}\right)^\frac{p-q}{\alpha+\beta-p} - \left(\frac{p-q}{\alpha+\beta-q}\right)^\frac{\alpha+\beta-q}{\alpha+\beta-p}\right) \left(\frac{\|(u,v)\|_{\mathcal{E}_p}^{\alpha+\beta}}{\|h\|_1K^{\alpha+\beta}\|(u,v)\|_{\mathcal{E}_p}^{\alpha+\beta}}\right)^\frac{p-q}{\alpha+\beta-p}\\
&= \|(u,v)\|_{\mathcal{E}_p}^q \left(\frac{p-q}{\alpha+\beta-q}\right)^\frac{p-q}{\alpha+\beta-p} \left(\frac{\alpha+\beta-p}{\alpha+\beta-q}\right) \left(\frac{1}{\|h\|_1K^{\alpha+\beta}}\right)^\frac{p-q}{\alpha+\beta-p}.
\end{align*}
Now we establish the relation between $\Phi_{u,v}''$ and $M'_{u,v}.$ If $(tu,tv) \in \mathcal{M}_{\lambda,\gamma},$ then $\Phi'_{u,v}(t) = 0.$ This implies
\begin{align*}
& t^{p-1}\|(u,v)\|_{\mathcal{E}_p}^p - t^{q-1} \left(\int\limits_{\mathcal{S}} \lambda a(x)|u|^{q} \mathrm{d}\mu + \int\limits_{\mathcal{S}} \gamma b(x)|v|^{q} \mathrm{d}\mu\right)- t^{\alpha +\beta-1} \int\limits_{\mathcal{S}} h(x)|u|^{\alpha}|v|^{\beta} \mathrm{d}\mu = 0, \\
\text{i.e.}~~ &t^{q-1} \left(\int\limits_{\mathcal{S}} \lambda a(x)|u|^{q} \mathrm{d}\mu + \int\limits_{\mathcal{S}} \gamma b(x)|v|^{q} \mathrm{d}\mu\right) = t^{p-1}\|(u,v)\|_{\mathcal{E}_p}^p - t^{\alpha +\beta-1} \int\limits_{\mathcal{S}} h(x)|u|^{\alpha}|v|^{\beta} \mathrm{d}\mu.
\end{align*}
Therefore,
\begin{equation}\label{der}
\begin{split}
\Phi_{u,v}''(t) &= (p-1)t^{p-2}\|(u,v)\|_{\mathcal{E}_p}^p - (q-1) t^{q-2} \left(\int\limits_{\mathcal{S}} \lambda a(x)|u|^{q} \mathrm{d}\mu + \int\limits_{\mathcal{S}} \gamma b(x)|v|^{q}\mathrm{d}\mu \right)\\
&\quad - (\alpha +\beta-1) t^{\alpha +\beta -2} \int\limits_{\mathcal{S}} h(x)|u|^{\alpha}|v|^{\beta} \mathrm{d}\mu\\
&= (p-1)t^{p-2}\|(u,v)\|_{\mathcal{E}_p}^p - (q-1) \left(t^{p-2}\|(u,v)\|_{\mathcal{E}_p}^p - t^{\alpha +\beta-2} \int\limits_{\mathcal{S}} h(x)|u|^{\alpha}|v|^{\beta} \mathrm{d}\mu\right) \\
& \quad - (\alpha +\beta-1) t^{\alpha +\beta -2} \int\limits_{\mathcal{S}} h(x)|u|^{\alpha}|v|^{\beta} \mathrm{d}\mu\\
&= (p-q)t^{p-2}\|(u,v)\|_{\mathcal{E}_p}^p - (\alpha+\beta-q) t^{\alpha+\beta-2} \int_{\mathcal{S}}h(x)|u|^{\alpha}|v|^\beta\mathrm{d}\mu \\
&= t^{q-1} M_{u,v}'(t).
\end{split}
\end{equation}
So, $(tu,tv) \in \mathcal{M}_{\lambda,\gamma}^{+}$ if $ \Phi''_{u,v}(t) > 0$, i.e., $M_{u,v}'(t) > 0$, and $(tu,tv) \in \mathcal{M}_{\lambda,\gamma}^{-}$ if $ \Phi''_{u,v}(t) < 0$, i.e., $M_{u,v}'(t) <0.$
\begin{figure}
\begin{center}
\begin{subfigure}[b]{0.31\textwidth}
\resizebox{\linewidth}{!}{
\begin{tikzpicture}[domain=0:2.5]
\draw[->] (-0.5,0) -- (3,0) node[right] {$t$};
\draw[->] (0,-.5) -- (0,2.5) node[left] {$\Phi_{u,v}(t)$};
\draw[color=black] plot (\x,0.13*\x*\x+0.11*\x*\x*\x) ;
\end{tikzpicture}
}
\caption{$\mathbf{X} \leq 0 \text{ and } \mathbf{H} \leq 0$}
\label{fig-a}
\end{subfigure}
\hspace{.5in}
\begin{subfigure}[b]{0.31\textwidth}
\resizebox{\linewidth}{!}{
\begin{tikzpicture}[domain=0:2.5]
\draw[->] (-0.5,0) -- (3,0) node[right] {$t$};
\draw[->] (0,-.5) -- (0,2.5) node[left] {$\Phi_{u,v}(t)$};
\draw[color=black] plot (\x,1.15*\x*\x-0.49*\x*\x*\x) ;
\end{tikzpicture}
}
\caption{$\mathbf{X} \leq 0 \text{ and } \mathbf{H} > 0$}
\label{fig-b}
\end{subfigure}
\begin{subfigure}[b]{0.31\textwidth}
\resizebox{\linewidth}{!}{
\begin{tikzpicture}[domain=0:2.5]
\draw[->] (-0.5,0) -- (3,0) node[right] {$t$};
\draw[->] (0,-.5) -- (0,2.5) node[left] {$\Phi_{u,v}(t)$};
\draw[color=black] plot (\x,-1.5*\x*\x*0.55+0.8*\x*\x*\x*0.55) ;
\end{tikzpicture}
}
\caption{$\mathbf{X} > 0 \text{ and } \mathbf{H} \leq 0 $}
\label{fig-c}
\end{subfigure}
\hspace{.5in}
\begin{subfigure}[b]{0.31\textwidth}
\resizebox{\linewidth}{!}{
\begin{tikzpicture}[domain=0:2.5]
\draw[->] (-0.5,0) -- (3,0) node[right] {$t$};
\draw[->] (0,-.5) -- (0,2.5) node[left] {$\Phi_{u,v}(t)$};
\draw[color=black] plot (\x,-1.0*\x +1.63*\x*\x*\x-0.6*\x*\x*\x*\x) ;
\end{tikzpicture}
}
\caption{$\mathbf{X} > 0 \text{ and } \mathbf{H} > 0 $}
\label{fig-d}
\end{subfigure}
\caption{Possible forms of $\Phi_{u,v}$}
\label{figure-2}
\end{center}
\end{figure}
Figure \ref{figure-2} describes all possible forms of the fibering map $\Phi_{u,v}$ depending on the signs of $X = \int_{\mathcal{S}} \lambda a(x)|u|^q \mathrm{d}\mu+ \int_{\mathcal{S}} \gamma b(x)|v|^q\mathrm{d}\mu$ and $H = \int_{\mathcal S} h(x)|u|^\alpha|v|^\beta \mathrm{d}\mu.$
\begin{lemma}\label{lem-6}
\begin{enumerate}[(i)]
\item\label{z1} If $(u,v) \in \mathcal{M}_{\lambda,\gamma}^+,$ then $\int_{\mathcal{S}}\lambda a(x)|u|^q \mathrm{d}\mu + \int_{\mathcal{S}}\gamma b(x)|v|^q \mathrm{d}\mu >0.$
\item\label{z2} If $(u,v) \in \mathcal{M}_{\lambda,\gamma}^-,$ then $\int_{\mathcal{S}} h(x)|u|^{\alpha}|v|^{\beta} \mathrm{d}\mu>0.$
\item\label{z3} If $(u,v) \in
\mathcal{M}_{\lambda,\gamma}^0,$ then $\int_{\mathcal{S}}\lambda a(x)|u|^q \mathrm{d}\mu + \int_{\mathcal{S}}\gamma b(x)|v|^q \mathrm{d}\mu >0$ and $\int_{\mathcal{S}} h(x)|u|^{\alpha}|v|^{\beta} \mathrm{d}\mu>0.$
\end{enumerate}
\end{lemma}
\begin{proof}
\eqref{z1} Let $(u,v) \in \mathcal{M}_{\lambda,\gamma}^{+}.$ Then $\Phi_{u,v}''(1)>0,$ i.e.,
\begin{align*}
& (p-1)\|(u,v)\|_{\mathcal{E}_p}^p - (q-1)\left(\int\limits_{\mathcal{S}} \lambda a(x)|u|^{q} \mathrm{d}\mu + \int\limits_{\mathcal{S}} \gamma b(x)|v|^{q} \mathrm{d}\mu\right)- (\alpha+\beta-1) \int\limits_{\mathcal{S}} h(x)|u|^{\alpha}|v|^{\beta} \mathrm{d}\mu > 0 \\
&\implies (p- \alpha - \beta)\|(u,v)\|_{\mathcal{E}_p}^p + (\alpha+\beta-q)\left(\int\limits_{\mathcal{S}} \lambda a(x)|u|^{q} \mathrm{d}\mu + \int\limits_{\mathcal{S}} \gamma b(x)|v|^{q} \mathrm{d}\mu\right) > 0 \quad (\text{using } \Phi_{u,v}'(1)=0)\\
&\implies (\alpha+\beta-q)\left(\int\limits_{\mathcal{S}} \lambda a(x)|u|^{q} \mathrm{d}\mu + \int\limits_{\mathcal{S}} \gamma b(x)|v|^{q} \mathrm{d}\mu\right) > ( \alpha + \beta-p)\|(u,v)\|_{\mathcal{E}_p}^p\\
&\implies \int\limits_{\mathcal{S}} \lambda a(x)|u|^{q} \mathrm{d}\mu + \int\limits_{\mathcal{S}} \gamma b(x)|v|^{q} \mathrm{d}\mu > \frac{ \alpha + \beta-p}{\alpha+\beta-q}\|(u,v)\|_{\mathcal{E}_p}^p.
\end{align*}
Since $p<\alpha+\beta$ and $(u,v)\neq (0,0)$ (note that $\Phi_{0,0}''(1)=0$), the right-hand side is positive, which proves \eqref{z1}.

\eqref{z2} Let $(u,v) \in \mathcal{M}_{\lambda,\gamma}^{-}.$ Then $\Phi_{u,v}''(1)<0,$ i.e.,
\begin{align*}
& (p-1)\|(u,v)\|_{\mathcal{E}_p}^p - (q-1)\left(\int\limits_{\mathcal{S}} \lambda a(x)|u|^{q} \mathrm{d}\mu + \int\limits_{\mathcal{S}} \gamma b(x)|v|^{q} \mathrm{d}\mu\right)- (\alpha+\beta-1) \int\limits_{\mathcal{S}} h(x)|u|^{\alpha}|v|^{\beta} \mathrm{d}\mu < 0 \\
&\implies (p-1)\|(u,v)\|_{\mathcal{E}_p}^p - (q-1)\left(\|(u,v)\|_{\mathcal{E}_p}^p - \int\limits_{\mathcal{S}} h(x)|u|^{\alpha}|v|^{\beta} \mathrm{d}\mu\right)- (\alpha+\beta-1) \int\limits_{\mathcal{S}} h(x)|u|^{\alpha}|v|^{\beta} \mathrm{d}\mu < 0 \\
&\implies (p-q)\|(u,v)\|_{\mathcal{E}_p}^p - (\alpha+\beta-q) \int\limits_{\mathcal{S}} h(x)|u|^{\alpha}|v|^{\beta} \mathrm{d}\mu < 0 \\
&\implies (p-q)\|(u,v)\|_{\mathcal{E}_p}^p < (\alpha+\beta-q) \int\limits_{\mathcal{S}} h(x)|u|^{\alpha}|v|^{\beta} \mathrm{d}\mu \\
&\implies \frac{p-q}{\alpha+\beta-q}\|(u,v)\|_{\mathcal{E}_p}^p < \int\limits_{\mathcal{S}} h(x)|u|^{\alpha}|v|^{\beta} \mathrm{d}\mu.
\end{align*}
Since the left-hand side is nonnegative, \eqref{z2} follows.

\eqref{z3} Let $(u,v) \in \mathcal{M}_{\lambda,\gamma}^{0}.$ Then $\Phi_{u,v}''(1)=0,$ i.e.,
\begin{align*}
& (p-1)\|(u,v)\|_{\mathcal{E}_p}^p - (q-1)\left(\int\limits_{\mathcal{S}} \lambda a(x)|u|^{q} \mathrm{d}\mu + \int\limits_{\mathcal{S}} \gamma b(x)|v|^{q} \mathrm{d}\mu\right)- (\alpha+\beta-1) \int\limits_{\mathcal{S}} h(x)|u|^{\alpha}|v|^{\beta} \mathrm{d}\mu = 0. \\
\end{align*}
From this, together with $\Phi_{u,v}'(1)=0,$ we get
\begin{equation}\label{eq-4}
\int\limits_{\mathcal{S}} \lambda a(x)|u|^{q} \mathrm{d}\mu + \int\limits_{\mathcal{S}} \gamma b(x)|v|^{q} \mathrm{d}\mu = \frac{ \alpha + \beta-p}{\alpha+\beta-q}\|(u,v)\|_{\mathcal{E}_p}^p >0
\end{equation}
and
\begin{equation}\label{eq-5}
\int\limits_{\mathcal{S}} h(x)|u|^{\alpha}|v|^{\beta} \mathrm{d}\mu = \frac{p-q}{\alpha+\beta-q}\|(u,v)\|_{\mathcal{E}_p}^p >0.
\end{equation}
\end{proof}
\begin{lemma}\label{lem-4}
There exists a real number $\kappa > 0$ such that if $ \lambda\|a\|_1 + \gamma\|b\|_1 < \kappa,$ then $\mathcal{M}_{\lambda,\gamma}^0 = \emptyset.$
\end{lemma}
\begin{proof}
We prove this by considering two cases.\\
\underline{Case I} : $(u,v) \in \mathcal{M}_{\lambda,\gamma}$ and $\int_{\mathcal{S}} h(x)|u|^\alpha|v|^\beta \mathrm{d}\mu \leq 0.$ Then, by \eqref{der},
$$\Phi_{u,v}''(1) = (p-q)\|(u,v)\|_{\mathcal{E}_p}^p - (\alpha+\beta-q) \int\limits_{\mathcal{S}} h(x)|u|^{\alpha}|v|^{\beta} \mathrm{d}\mu >0.$$
Therefore, $(u,v) \notin \mathcal{M}_{\lambda,\gamma}^0.$\\
\underline{Case II} : $(u,v) \in \mathcal{M}_{\lambda,\gamma}$ and $\int_{\mathcal{S}} h(x)|u|^\alpha|v|^\beta \mathrm{d}\mu > 0.$\\
Set
$$\kappa \coloneqq\left(\frac{\alpha+\beta-p}{\alpha+\beta-q}\right) K^{-q}\left(\frac{p-q}{\alpha+\beta-q} \frac{1}{K^{\alpha+\beta} \|h\|_1}\right)^\frac{p-q}{\alpha+\beta-p}$$
and suppose, to the contrary, that $(u,v) \in \mathcal{M}_{\lambda,\gamma}^0$ for some $(\lambda,\gamma) \in \mathbb{R}^2$ satisfying $ \lambda\|a\|_1 + \gamma\|b\|_1 < \kappa.$ Then from equation \eqref{eq-5} we get
$$\|(u,v)\|_{\mathcal{E}_p} \geq \left(\frac{p-q}{\alpha+\beta-q} \frac{1}{K^{\alpha+\beta} \|h\|_1}\right)^\frac{1}{\alpha+\beta-p},$$
and from equation \eqref{eq-4} we get
$$\|(u,v)\|_{\mathcal{E}_p} \leq \left(\left(\frac{\alpha+\beta-q}{\alpha+\beta-p}\right)(\lambda\|a\|_1+\gamma\|b\|_1)K^q\right)^{\frac{1}{p-q}}.$$
The above two inequalities imply
$$\left(\frac{\alpha+\beta-q}{\alpha+\beta-p}(\lambda\|a\|_1+\gamma\|b\|_1)K^q\right)^{\frac{1}{p-q}} \geq \left(\frac{p-q}{\alpha+\beta-q} \frac{1}{K^{\alpha+\beta} \|h\|_1}\right)^\frac{1}{\alpha+\beta-p},$$
that is,
$$\lambda\|a\|_1+\gamma\|b\|_1 \geq \left(\frac{\alpha+\beta-p}{\alpha+\beta-q}\right) K^{-q}\left(\frac{p-q}{\alpha+\beta-q} \frac{1}{K^{\alpha+\beta} \|h\|_1}\right)^\frac{p-q}{\alpha+\beta-p} = \kappa,$$
which contradicts the assumption $\lambda\|a\|_1 + \gamma\|b\|_1 < \kappa.$
Hence, $\mathcal{M}_{\lambda,\gamma}^0 =\emptyset.$
\end{proof}
\noindent In view of Lemma \ref{lem-4}, we introduce the set
$$\Lambda \coloneqq \left\{(\lambda,\gamma) \in \mathbb{R}^2 : \lambda \|a\|_1 + \gamma\|b\|_1 < \kappa \right\}.$$
Hence for each $(\lambda,\gamma) \in \Lambda,$ we get $\mathcal{M}_{\lambda,\gamma} = \mathcal{M}_{\lambda,\gamma}^+ \cup \mathcal{M}_{\lambda,\gamma}^-.$
\begin{lemma}\label{lem-7}
If $\int_{\mathcal{S}}h(x)|u|^{\alpha}|v|^{\beta}\mathrm{d}\mu>0,~\int\limits_{\mathcal{S}} \lambda a(x)|u|^{q} \mathrm{d}\mu + \int\limits_{\mathcal{S}} \gamma b(x)|v|^{q} \mathrm{d}\mu >0$ and $\lambda\|a\|_1 + \gamma\|b\|_1 < \kappa,$ then there exist $t_0$ and $t_1$ with $0<t_0<t_{max}<t_1$ such that $(t_0u,t_0v) \in \mathcal{M}_{\lambda,\gamma}^+$ and $(t_1u,t_1v) \in \mathcal{M}_{\lambda,\gamma}^-.$
\end{lemma}
\begin{proof}
We have
\begin{align*}
M_{u,v}(0) = 0 &< \int\limits_{\mathcal{S}} \lambda a(x)|u|^{q} \mathrm{d}\mu + \int\limits_{\mathcal{S}} \gamma b(x)|v|^{q} \mathrm{d}\mu \\
&\leq K^q (\lambda\|a\|_1 + \gamma\|b\|_1)\|(u,v)\|_{\mathcal{E}_p}^q\\
&< K^q \|(u,v)\|_{\mathcal{E}_p}^q \left(\frac{\alpha+\beta-p}{\alpha+\beta-q}\right) K^{-q}\left(\frac{p-q}{\alpha+\beta-q} \frac{1}{K^{\alpha+\beta} \|h\|_1}\right)^\frac{p-q}{\alpha+\beta-p}\\
&= \|(u,v)\|_{\mathcal{E}_p}^q \left(\frac{\alpha+\beta-p}{\alpha+\beta-q}\right) \left(\frac{p-q}{\alpha+\beta-q} \frac{1}{K^{\alpha+\beta} \|h\|_1}\right)^\frac{p-q}{\alpha+\beta-p}\\
&\leq M_{u,v}(t_{max}).
\end{align*}
Hence, equation \eqref{eq-8} has exactly two solutions $t_0$ and $t_1$ (say), with $0<t_0<t_{max}<t_1$, and $M_{u,v}'(t_0) > 0> M_{u,v}'(t_1).$ Hence, by using \eqref{der}, we get $(t_0u,t_0v) \in \mathcal{M}_{\lambda,\gamma}^+$ and $(t_1u,t_1v) \in \mathcal{M}_{\lambda,\gamma}^-.$
\end{proof}
\begin{lemma}\label{lem-9}
Let $(\lambda,\gamma) \in \Lambda.$ Then:
\begin{enumerate}[(i)]
\item\label{z4} If $\int_{\mathcal{S}} \lambda a(x)|u|^q\mathrm{d}\mu + \int_{\mathcal{S}}\gamma b(x)|v|^q \mathrm{d}\mu >0,$ then there exists a $t_2 >0$ such that $(t_2u,t_2v) \in \mathcal{M}_{\lambda,\gamma}^+.$
\item\label{z5} If $\int_{\mathcal{S}} h(x)|u|^\alpha |v|^\beta \mathrm{d}\mu >0,$ then there exists a $t_3 >0$ such that $(t_3u,t_3v) \in \mathcal{M}_{\lambda,\gamma}^-.$
\end{enumerate}
\end{lemma}
\begin{proof}
\eqref{z4} \underline{Case(a)} : $\int_{\mathcal{S}} h(x)|u|^\alpha |v|^\beta \mathrm{d}\mu \leq 0.$ Then $M_{u,v}$ is increasing on $(0,\infty),$ so equation \eqref{eq-8} has a unique solution $t_2$ (say), and by equation \eqref{der} we conclude that $(t_2u,t_2v) \in \mathcal{M}_{\lambda,\gamma}^+.$\\
\underline{Case(b)} : $\int_{\mathcal{S}} h(x)|u|^\alpha |v|^\beta \mathrm{d}\mu > 0.$ Then by Lemma \ref{lem-7} there exists $t_2$ (say) such that $(t_2u,t_2v) \in \mathcal{M}_{\lambda,\gamma}^+.$\\
\eqref{z5} \underline{Case(a)} : $\int_{\mathcal{S}} \lambda a(x)|u|^q\mathrm{d}\mu + \int_{\mathcal{S}}\gamma b(x)|v|^q \mathrm{d}\mu \leq 0.$ Then equation \eqref{eq-8} has exactly one positive solution $t_3$ (say), and using equation \eqref{der} we get $(t_3u,t_3v) \in \mathcal{M}_{\lambda,\gamma}^-.$\\
\underline{Case(b)} : $\int_{\mathcal{S}} \lambda a(x)|u|^q\mathrm{d}\mu + \int_{\mathcal{S}}\gamma b(x)|v|^q \mathrm{d}\mu >0.$ Then by Lemma \ref{lem-7} there exists $t_3$ (say) such that $(t_3u,t_3v) \in \mathcal{M}_{\lambda,\gamma}^-.$\\
This completes the proof.
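The following elementary consequence of \eqref{eq-8} and the formula for $\Phi_{u,v}'$ is implicit in Figures \ref{fig-c} and \ref{fig-d} and will be used in the proof of Theorem \ref{thm-1}; we record it here for convenience.

\noindent\textit{Observation.} Suppose $\int_{\mathcal{S}} \lambda a(x)|u|^q \mathrm{d}\mu + \int_{\mathcal{S}}\gamma b(x)|v|^q \mathrm{d}\mu >0$ and let $t_2>0$ be the smallest positive solution of \eqref{eq-8}, as in Lemma \ref{lem-9}\eqref{z4}. Then $M_{u,v}(t) < \int_{\mathcal{S}} \lambda a(x)|u|^q \mathrm{d}\mu + \int_{\mathcal{S}}\gamma b(x)|v|^q \mathrm{d}\mu$ for $0<t<t_2,$ so that
$$\Phi_{u,v}'(t) = t^{q-1}\left(M_{u,v}(t) - \int\limits_{\mathcal{S}} \lambda a(x)|u|^{q} \mathrm{d}\mu - \int\limits_{\mathcal{S}} \gamma b(x)|v|^{q} \mathrm{d}\mu \right) < 0 \qquad \text{for } 0<t<t_2.$$
Since $\Phi_{u,v}(t) = I_{\lambda,\gamma}(tu,tv) \to 0$ as $t \to 0^{+},$ it follows that $I_{\lambda,\gamma}(t_2u,t_2v) = \Phi_{u,v}(t_2) < 0.$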
\end{proof}
\begin{lemma}\label{lem-8}
There exists a positive number $\kappa_0$ such that, if $0< |\lambda| \|a\|_1 + |\gamma| \|b\|_1<\kappa_0,$ then there exists $d_0 > 0$ such that
$$\inf_{(u,v) \in \mathcal{M}_{\lambda,\gamma}^-} I_{\lambda,\gamma}(u,v) \geq d_0 > 0.$$
\end{lemma}
\begin{proof}
Let $(u,v) \in \mathcal{M}_{\lambda,\gamma}^-.$ By Lemma \ref{lem-6}\eqref{z2} we have $\int_{\mathcal{S}}h(x)|u|^\alpha|v|^\beta \mathrm{d}\mu>0$ and
$$\|(u,v)\|_{\mathcal{E}_p} \geq \left(\left(\frac{p-q}{\alpha+\beta-q}\right)\left(\frac{1}{K}\right)^{\alpha+\beta}\frac{1}{\|h\|_1}\right)^\frac{1}{\alpha+\beta-p}.$$
From equation \eqref{eq-2} we get
\begin{align*}
I_{\lambda,\gamma}(u,v) &= \left(\frac{1}{p}-\frac{1}{\alpha+\beta}\right)\|(u,v)\|_{\mathcal{E}_p}^p - \left(\frac{1}{q} - \frac{1}{\alpha+\beta}\right) \left(\int\limits_{\mathcal{S}} \lambda a(x)|u|^{q} \mathrm{d}\mu +\int\limits_{\mathcal{S}} \gamma b(x)|v|^{q} \mathrm{d}\mu \right)\\
&\geq \left(\frac{1}{p}-\frac{1}{\alpha+\beta}\right)\|(u,v)\|_{\mathcal{E}_p}^p - \left(\frac{1}{q} - \frac{1}{\alpha+\beta}\right) K^q (|\lambda|\|a\|_1+|\gamma|\|b\|_1)\|(u,v)\|_{\mathcal{E}_p}^q \\
&= \|(u,v)\|_{\mathcal{E}_p}^q\left(\left(\frac{1}{p}-\frac{1}{\alpha+\beta}\right)\|(u,v)\|_{\mathcal{E}_p}^{p-q} - \left(\frac{1}{q} - \frac{1}{\alpha+\beta}\right) K^q (|\lambda|\|a\|_1+|\gamma|\|b\|_1) \right)\\
&\geq\left(\left(\frac{p-q}{\alpha+\beta-q}\right)\left(\frac{1}{K}\right)^{\alpha+\beta}\frac{1}{\|h\|_1}\right)^\frac{q}{\alpha+\beta-p}\\
&\quad \times \left(\left(\frac{1}{p} -\frac{1}{\alpha+\beta}\right)\left(\left(\frac{p-q}{\alpha+\beta-q}\right)\left(\frac{1}{K}\right)^{\alpha+\beta}\frac{1}{\|h\|_1}\right)^\frac{p-q}{\alpha+\beta-p} - \left(\frac{1}{q} - \frac{1}{\alpha+\beta}\right) K^q (|\lambda|\|a\|_1+|\gamma|\|b\|_1) \right)
\end{align*}
The right-hand side is positive provided that
$$ 0< \left(\frac{1}{p} -\frac{1}{\alpha+\beta}\right)\left(\left(\frac{p-q}{\alpha+\beta-q}\right)\left(\frac{1}{K}\right)^{\alpha+\beta}\frac{1}{\|h\|_1}\right)^\frac{p-q}{\alpha+\beta-p} - \left(\frac{1}{q} - \frac{1}{\alpha+\beta}\right) K^q (|\lambda|\|a\|_1+|\gamma|\|b\|_1).$$
This condition is equivalent to
\begin{align*}
&\left(\frac{1}{q} - \frac{1}{\alpha+\beta}\right) K^q (|\lambda|\|a\|_1+|\gamma|\|b\|_1)< \left(\frac{1}{p} -\frac{1}{\alpha+\beta}\right)\left(\left(\frac{p-q}{\alpha+\beta-q}\right)\left(\frac{1}{K}\right)^{\alpha+\beta}\frac{1}{\|h\|_1}\right)^\frac{p-q}{\alpha+\beta-p},\\
\text{i.e.,}\quad & |\lambda|\|a\|_1+|\gamma|\|b\|_1< K^{-q}\left(\frac{q(\alpha+\beta)}{\alpha+\beta-q}\right)\left(\frac{1}{p} -\frac{1}{\alpha+\beta}\right)\left(\left(\frac{p-q}{\alpha+\beta-q}\right)\left(\frac{1}{K}\right)^{\alpha+\beta}\frac{1}{\|h\|_1}\right)^\frac{p-q}{\alpha+\beta-p},\\
\text{i.e.,}\quad &|\lambda|\|a\|_1+|\gamma|\|b\|_1< \left(\frac{q}{p}\right)\left(\frac{\alpha+\beta-p}{\alpha+\beta-q}\right) K^{-q}\left(\left(\frac{p-q}{\alpha+\beta-q}\right)\left(\frac{1}{K}\right)^{\alpha+\beta}\frac{1}{\|h\|_1}\right)^\frac{p-q}{\alpha+\beta-p} = \left(\frac{q}{p}\right)\kappa.
\end{align*}
Since $\frac{q}{p} < 1,$ we have $\left(\frac{q}{p}\right)\kappa < \kappa.$ Define $\kappa_0 \coloneqq \left(\frac{q}{p}\right)\kappa$ and
\begin{align*}
d_0 \coloneqq &\left(\left(\frac{p-q}{\alpha+\beta-q}\right)\left(\frac{1}{K}\right)^{\alpha+\beta}\frac{1}{\|h\|_1}\right)^\frac{q}{\alpha+\beta-p}\\
&\quad \times \left(\left(\frac{1}{p} -\frac{1}{\alpha+\beta}\right)\left(\left(\frac{p-q}{\alpha+\beta-q}\right)\left(\frac{1}{K}\right)^{\alpha+\beta}\frac{1}{\|h\|_1}\right)^\frac{p-q}{\alpha+\beta-p} - \left(\frac{1}{q} - \frac{1}{\alpha+\beta}\right) K^q (|\lambda|\|a\|_1+|\gamma|\|b\|_1) \right).
\end{align*}
Then, whenever $0<|\lambda|\|a\|_1+|\gamma|\|b\|_1<\kappa_0,$ we have $I_{\lambda,\gamma}(u,v) \geq d_0 > 0$ for every $(u,v) \in \mathcal{M}_{\lambda,\gamma}^-,$ which proves the lemma.
\end{proof}
Now we define the set $\Lambda_0 \coloneqq \{(\lambda,\gamma) \in \mathbb{R}^2 : \lambda\|a\|_1 + \gamma\|b\|_1 < \kappa_0\}.$ Clearly, $\Lambda_0 \subset \Lambda,$ so for all $(\lambda,\gamma) \in \Lambda_0,~ \mathcal{M}_{\lambda,\gamma} = \mathcal{M}_{\lambda,\gamma}^+ \cup \mathcal{M}_{\lambda,\gamma}^-$ and each subset is nonempty.
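For completeness, we note how the lower bound on $\|(u,v)\|_{\mathcal{E}_p}$ invoked in the proofs of Lemmas \ref{lem-4} and \ref{lem-8} is obtained from \eqref{eq-5} (respectively from Lemma \ref{lem-6}\eqref{z2}), together with Lemma \ref{lem-5}:
$$\frac{p-q}{\alpha+\beta-q}\|(u,v)\|_{\mathcal{E}_p}^p \leq \int\limits_{\mathcal{S}} h(x)|u|^{\alpha}|v|^{\beta} \mathrm{d}\mu \leq \|h\|_1\|u\|_\infty^{\alpha}\|v\|_\infty^{\beta} \leq \|h\|_1 K^{\alpha+\beta}\|u\|_{\mathcal{E}_p}^{\alpha}\|v\|_{\mathcal{E}_p}^{\beta} \leq \|h\|_1 K^{\alpha+\beta}\|(u,v)\|_{\mathcal{E}_p}^{\alpha+\beta},$$
so that
$$\|(u,v)\|_{\mathcal{E}_p} \geq \left(\frac{p-q}{\alpha+\beta-q}\,\frac{1}{K^{\alpha+\beta}\|h\|_1}\right)^{\frac{1}{\alpha+\beta-p}}.$$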
\section{Proof of the Main Results}
\begin{theorem}\label{thm-1}
If $(\lambda,\gamma) \in \Lambda,$ then there exists a minimizer of $I_{\lambda,\gamma}$ on $\mathcal{M}_{\lambda,\gamma}^{+}.$
\end{theorem}
\begin{proof}
As shown above, $I_{\lambda,\gamma}$ is bounded below on $\mathcal{M}_{\lambda,\gamma},$ and hence on $\mathcal{M}_{\lambda,\gamma}^{+}.$ Therefore there exists a sequence $\{(u_n,v_n)\} \subset \mathcal{M}_{\lambda,\gamma}^{+}$ such that
$$\lim\limits_{n \to \infty}I_{\lambda,\gamma}(u_n,v_n) = \inf\limits_{(u,v) \in \mathcal{M}_{\lambda,\gamma}^{+}} I_{\lambda,\gamma}(u,v).$$
Since $I_{\lambda,\gamma}$ is coercive, $\{(u_n,v_n)\}$ is bounded in $\text{dom}_0(\mathcal{E}_p) \times \text{dom}_0(\mathcal{E}_p)$: if $\{(u_n,v_n)\}$ were unbounded, there would exist a subsequence $\{(u_k,v_k)\}$ with $\|(u_k, v_k)\|_{\mathcal{E}_p} \to \infty$ as $k \to \infty,$ so that $I_{\lambda,\gamma}(u_k,v_k) \to \infty$ as $k \to \infty$ and hence $\inf\limits_{(u,v) \in \mathcal{M}_{\lambda,\gamma}^{+}} I_{\lambda,\gamma}(u,v) = \infty,$ a contradiction since $\mathcal{M}_{\lambda,\gamma}^+$ is nonempty.\\
\underline{Claim} : The sequences of functions $\{u_n\}$ and $\{v_n\}$ are equicontinuous.\\
By Lemma \ref{lem-2}, $|u(x) - u(y)| \leq K_p (\mathcal{E}_p(u))^{1/p} (r_p^{1/p})^m$ whenever $x$ and $y$ belong to the same or adjacent cells of order $m.$ Let $ \widehat{S} = \sup\{\mathcal{E}_p(u_n)^{1/p} : n \in \mathbb{N}\}$ and let $\epsilon>0$ be given. As $0<r_p<1,$ there exists $m \in \mathbb{N}$ such that $K_p \widehat{S}(r_p^{1/p})^{m} < \epsilon.$ Choose $\delta =2^{-m}.$ Then $\|x-y\|_2 < \delta$ implies that $|u_n(x) - u_n(y)| < \epsilon $ for all $n \in \mathbb{N}.$ Hence $\{u_n\}$ is an equicontinuous family of functions. As the boundary values are zero, by Lemma \ref{lem-5}, $\|u_n\|_\infty < K \widehat{S}$ for all $n\in \mathbb{N};$ hence $\{u_n\}$ is uniformly bounded. By the Arzel\`a--Ascoli theorem, there exists a subsequence of $\{u_n\},$ call it $\{u_{n_k}\},$ converging to a continuous function $u_0,$ that is,
$$\|u_{n_k} - u_0\|_\infty \to 0~~ \text{as}~~ k \to \infty.$$
Next we claim that $u_0 \in \text{dom}_0(\mathcal{E}_p).$ Indeed,
$$\mathcal{E}_p(u_0) = \sup\limits_{m} \mathcal{E}_p^{(m)}(u_0) =\sup\limits_{m} \lim\limits_{k \to \infty} \mathcal{E}_p^{(m)}(u_{n_k}) \leq \sup\limits_{m} \limsup\limits_{k \to \infty} \mathcal{E}_p(u_{n_k}) = \limsup\limits_{k \to \infty} \mathcal{E}_p(u_{n_k}). $$
As $\limsup\limits_{k \to \infty} \mathcal{E}_p(u_{n_k}) < +\infty,$ the claim follows.\\
Clearly,
$$\lim\limits_{k \to \infty}I_{\lambda,\gamma}(u_{n_k},v_{n_k}) = \inf\limits_{(u,v) \in \mathcal{M}_{\lambda,\gamma}^{+}} I_{\lambda,\gamma}(u,v).$$
By similar arguments there exists a subsequence of $\{v_{n_k}\},$ call it $\{v_{n_{k_l}}\},$ which converges to a continuous function $v_0;$ moreover $v_0 \in \text{dom}_0(\mathcal{E}_p).$ Hence
$$\lim\limits_{l \to \infty}I_{\lambda,\gamma}(u_{n_{k_l}},v_{n_{k_l}}) = \inf\limits_{(u,v) \in \mathcal{M}_{\lambda,\gamma}^{+}} I_{\lambda,\gamma}(u,v).$$
For convenience we rename the sequence $\{(u_{n_{k_l}},v_{n_{k_l}})\}$ as $\{(u_n,v_n)\};$ it converges to $(u_0,v_0),$ that is, $\|(u_n,v_n)-(u_0,v_0)\|_{\infty} \to 0$ as $n \to \infty.$ If we choose $(u,v) \in \text{dom}_0(\mathcal{E}_p) \times \text{dom}_0(\mathcal{E}_p)$ such that $\int_{\mathcal{S}}\lambda a(x)|u|^{q} \mathrm{d}\mu + \int_{\mathcal{S}}\gamma b(x) |v|^q \mathrm{d}\mu >0$
(see Fig. \ref{fig-c} and \ref{fig-d}), then by Lemma \ref{lem-9}\eqref{z4} there exists $t_1>0$ such that $(t_1u, t_1v) \in \mathcal{M}_{\lambda,\gamma}^+$ and, as observed after Lemma \ref{lem-9}, $I_{\lambda,\gamma}(t_1u,t_1v) <0.$ Hence, $\inf\limits_{(u,v) \in \mathcal{M}_{\lambda,\gamma}^+} I_{\lambda,\gamma}(u,v) < 0.$\\
By equation \eqref{eq-2},
$$I_{\lambda,\gamma}(u,v) = \left(\frac{1}{p}-\frac{1}{\alpha+\beta}\right)\|(u,v)\|_{\mathcal{E}_p}^p - \left(\frac{1}{q} - \frac{1}{\alpha+\beta}\right) \left(\int\limits_{\mathcal{S}} \lambda a(x)|u|^{q} \mathrm{d}\mu +\int\limits_{\mathcal{S}} \gamma b(x)|v|^{q} \mathrm{d}\mu \right)$$
and so
$$\left(\frac{1}{q} - \frac{1}{\alpha+\beta}\right) \left(\int\limits_{\mathcal{S}} \lambda a(x)|u_n|^{q} \mathrm{d}\mu +\int\limits_{\mathcal{S}} \gamma b(x)|v_n|^{q} \mathrm{d}\mu \right) = \left(\frac{1}{p}-\frac{1}{\alpha+\beta}\right)\|(u_n,v_n)\|_{\mathcal{E}_p}^p - I_{\lambda,\gamma}(u_n,v_n) \geq - I_{\lambda,\gamma}(u_n,v_n). $$
Taking the limit as $n \to \infty$, and recalling that the infimum above is negative, we see that $\int\limits_{\mathcal{S}} \lambda a(x)|u_0|^{q} \mathrm{d}\mu +\int\limits_{\mathcal{S}} \gamma b(x)|v_0|^{q} \mathrm{d}\mu >0.$ So, by Lemma \ref{lem-9}\eqref{z4} there exists $t_0 > 0$ such that $(t_0u_0,t_0v_0) \in \mathcal{M}_{\lambda,\gamma}^+$ and $\Phi'_{u_0,v_0}(t_0) = 0.$ By the Lebesgue dominated convergence theorem we have
$$\lim\limits_{n \to \infty} \int\limits_{\mathcal{S}} \lambda a(x)|u_n|^{q} \mathrm{d}\mu +\int\limits_{\mathcal{S}} \gamma b(x)|v_n|^{q} \mathrm{d}\mu = \int\limits_{\mathcal{S}} \lambda a(x)|u_0|^{q} \mathrm{d}\mu +\int\limits_{\mathcal{S}} \gamma b(x)|v_0|^{q} \mathrm{d}\mu $$
and
$$\lim\limits_{n \to \infty} \int\limits_{\mathcal{S}} h(x)|u_n|^{\alpha}|v_n|^{\beta} \mathrm{d}\mu = \int\limits_{\mathcal{S}} h(x)|u_0|^{\alpha}|v_0|^{\beta} \mathrm{d}\mu.$$
We know $\mathcal{E}_p(u_0) \leq \limsup\limits_{n \to \infty} \mathcal{E}_p(u_n)$ and $\mathcal{E}_p(v_0) \leq \limsup\limits_{n \to \infty} \mathcal{E}_p(v_n).$ If we assume $\|(u_0,v_0)\|_{\mathcal{E}_p} < \limsup\limits_{n\to\infty}\|(u_n,v_n)\|_{\mathcal{E}_p},$ then we get $\Phi'_{u_0,v_0}(t) < \limsup\limits_{n \to \infty} \Phi'_{u_n,v_n}(t)$ for every $t>0.$ Since $\{(u_n,v_n)\} \subset \mathcal{M}_{\lambda,\gamma}^{+},$ we have $\Phi'_{u_n,v_n}(1)= 0$ for all $n \in \mathbb{N}.$ It follows from the above assumption that $\limsup\limits_{n \to \infty} \Phi '_{u_n,v_n}(t_0) > \Phi '_{u_0,v_0}(t_0) = 0,$ which implies that $\Phi '_{u_n,v_n}(t_0) > 0$ for some $n.$ Hence $t_0 > 1,$ because $\Phi_{u_n,v_n}'(t) <0$ for all $t<1$ and for all $n \in \mathbb{N}.$ As $(t_0 u_0,t_0v_0) \in \mathcal{M}_{\lambda,\gamma}^+,$ we get
\begin{align*}
\inf\limits_{(u,v) \in \mathcal{M}_{\lambda,\gamma}^+} I_{\lambda,\gamma}(u,v) &\leq I_{\lambda,\gamma}(t_0 u_0, t_0v_0) = \Phi_{u_0,v_0}(t_0) < \Phi_{u_0,v_0}(1) < \limsup\limits_{n \to \infty} \Phi_{u_n,v_n}(1)\\
&= \limsup\limits_{n \to \infty} I_{\lambda,\gamma}(u_n,v_n) = \lim\limits_{n \to \infty} I_{\lambda,\gamma}(u_n,v_n) = \inf\limits_{(u,v) \in \mathcal{M}_{\lambda,\gamma}^+} I_{\lambda,\gamma}(u,v),
\end{align*}
which is a contradiction.
Thus $t_0 = 1$ and $\|(u_0,v_0)\|_{\mathcal{E}_p} = \limsup\limits_{n \to \infty} \|(u_n,v_n)\|_{\mathcal{E}_p}.$\\
So,
$$I_{\lambda,\gamma}(u_0,v_0) = \limsup\limits_{n \to \infty} I_{\lambda,\gamma}(u_n,v_n) = \lim\limits_{n \to \infty} I_{\lambda,\gamma}(u_n,v_n) = \inf\limits_{(u,v) \in \mathcal{M}_{\lambda,\gamma}^+} I_{\lambda,\gamma}(u,v).$$
Hence, $(u_0,v_0)$ is a minimizer of $I_{\lambda,\gamma}$ on $\mathcal{M}_{\lambda,\gamma}^+.$
\end{proof}
\begin{theorem}\label{thm-2}
If $(\lambda,\gamma) \in \Lambda_0,$ then there exists a minimizer of $I_{\lambda,\gamma}$ on $\mathcal{M}_{\lambda,\gamma}^{-}.$
\end{theorem}
\begin{proof}
By Lemma \ref{lem-8}, for all $(\lambda,\gamma) \in \Lambda_0,$
$$\inf_{(u,v) \in \mathcal{M}_{\lambda,\gamma}^-} I_{\lambda,\gamma}(u,v) \geq d_0 >0.$$
In particular, $I_{\lambda,\gamma}$ is bounded below on $\mathcal{M}_{\lambda,\gamma}^-.$ Hence, there exists a sequence $\{(u_n,v_n)\} \subset \mathcal{M}_{\lambda,\gamma}^-$ such that
$$\lim_{n\to \infty} I_{\lambda,\gamma}(u_n,v_n)= \inf_{(u,v) \in \mathcal{M}_{\lambda,\gamma}^-} I_{\lambda,\gamma}(u,v).$$
By arguments similar to those in Theorem \ref{thm-1}, there exists a subsequence of $\{(u_n,v_n)\},$ still denoted $\{(u_n,v_n)\},$ which converges to $(u_1,v_1),$ that is, $\lim\limits_{n\to\infty}\|u_n-u_1\|_\infty = 0,$ $\lim\limits_{n\to\infty}\|v_n-v_1\|_\infty = 0$ and $\lim\limits_{n\to\infty}\|(u_n,v_n) - (u_1,v_1)\|_\infty =0.$ Moreover, $(u_1,v_1) \in \text{dom}_0(\mathcal{E}_p) \times \text{dom}_0(\mathcal{E}_p)$ and
$$\lim_{n\to \infty} I_{\lambda,\gamma}(u_n,v_n) = \inf_{(u,v) \in \mathcal{M}_{\lambda,\gamma}^-} I_{\lambda,\gamma}(u,v).$$
From equation \eqref{eq-3} we get
$$\left(\frac{1}{q} - \frac{1}{\alpha +\beta}\right)\int\limits_{\mathcal{S}} h(x)|u_n|^{\alpha}|v_n|^{\beta} \mathrm{d}\mu = I_{\lambda,\gamma}(u_n,v_n)- \left(\frac{1}{p}- \frac{1}{q}\right)\|(u_n,v_n)\|_{\mathcal{E}_p}^p \geq I_{\lambda,\gamma}(u_n,v_n).$$
So, by taking the limit as $n \to \infty$ we get
$$\int\limits_{\mathcal{S}} h(x)|u_1|^{\alpha}|v_1|^{\beta} \mathrm{d}\mu \geq \left(\frac{q(\alpha+\beta)}{\alpha+\beta-q}\right)\inf_{(u,v) \in \mathcal{M}_{\lambda,\gamma}^-} I_{\lambda,\gamma}(u,v) \geq\left(\frac{q(\alpha+\beta)}{\alpha+\beta-q}\right) d_0>0.$$
By Lemma \ref{lem-9}\eqref{z5} there exists a $t_1$ such that $(t_1u_1,t_1v_1) \in \mathcal{M}_{\lambda,\gamma}^-$ and $\Phi_{u_1,v_1}'(t_1) = 0.$ By the Lebesgue dominated convergence theorem we have
$$\lim\limits_{n \to \infty} \int\limits_{\mathcal{S}} \lambda a(x)|u_n|^{q} \mathrm{d}\mu +\int\limits_{\mathcal{S}} \gamma b(x)|v_n|^{q} \mathrm{d}\mu = \int\limits_{\mathcal{S}} \lambda a(x)|u_1|^{q} \mathrm{d}\mu +\int\limits_{\mathcal{S}} \gamma b(x)|v_1|^{q} \mathrm{d}\mu $$
and
$$\lim\limits_{n \to \infty} \int\limits_{\mathcal{S}} h(x)|u_n|^{\alpha}|v_n|^{\beta} \mathrm{d}\mu = \int\limits_{\mathcal{S}} h(x)|u_1|^{\alpha}|v_1|^{\beta} \mathrm{d}\mu.$$
We know $\mathcal{E}_p(u_1)\leq \limsup\limits_{n\to \infty}\mathcal{E}_p(u_n)$ and $\mathcal{E}_p(v_1)\leq \limsup\limits_{n\to \infty}\mathcal{E}_p(v_n).$ If we assume $\|(u_1,v_1)\|_{\mathcal{E}_p} < \limsup\limits_{n\to\infty} \|(u_n,v_n)\|_{\mathcal{E}_p},$ then we get $\Phi_{u_1,v_1}'(t) < \limsup\limits_{n\to\infty}\Phi_{u_n,v_n}'(t)$ for every $t>0.$ Then $\limsup\limits_{n\to\infty} \Phi_{u_n,v_n}'(t_1) > \Phi_{u_1,v_1}'(t_1) =0,$ because $(t_1u_1,t_1v_1) \in \mathcal{M}_{\lambda,\gamma}^-.$ This implies that
$\Phi_{u_n,v_n}'(t_1)>0$ for some $n \in \mathbb{N}.$ Hence $t_1 < 1,$ because $\Phi_{u_n,v_n}'(t) < 0$ for all $t>1$ and for all $n \in \mathbb{N}.$ As $(t_1u_1,t_1v_1) \in \mathcal{M}_{\lambda,\gamma}^-,$ we get
\begin{align*}
I_{\lambda,\gamma}(t_1u_1,t_1v_1) &= \Phi_{u_1,v_1}(t_1) < \limsup_{n\to\infty} \Phi_{u_n,v_n}(t_1) \leq \limsup_{n\to\infty} \Phi_{u_n,v_n}(1) \\
&=\limsup_{n\to\infty} I_{\lambda,\gamma}(u_n,v_n) = \lim_{n\to\infty} I_{\lambda,\gamma}(u_n,v_n) = \inf_{(u,v) \in \mathcal{M}_{\lambda,\gamma}^-} I_{\lambda,\gamma}(u,v),
\end{align*}
which is a contradiction. Hence $\|(u_1,v_1)\|_{\mathcal{E}_p} = \limsup\limits_{n\to\infty} \|(u_n,v_n)\|_{\mathcal{E}_p}.$ So, $\Phi_{u_1,v_1}'(1) = 0$ and $\Phi_{u_1,v_1}''(1) \leq 0.$ But, by Lemma \ref{lem-4}, we have $\mathcal{M}_{\lambda,\gamma}^0 = \emptyset$ for $(\lambda,\gamma) \in \Lambda_0.$ Hence, $\Phi_{u_1,v_1}''(1) < 0.$ So, $t_1 = 1,$ as $\Phi_{u_1,v_1}$ has a unique local maximum. Hence, $(u_1,v_1) \in \mathcal{M}_{\lambda,\gamma}^-$ and
$$I_{\lambda,\gamma}(u_1,v_1) = \Phi_{u_1,v_1}(1) = \limsup_{n\to\infty} \Phi_{u_n,v_n}(1) = \limsup_{n\to\infty} I_{\lambda,\gamma}(u_n,v_n) = \lim_{n\to\infty} I_{\lambda,\gamma}(u_n,v_n) = \inf_{(u,v) \in \mathcal{M}_{\lambda,\gamma}^-} I_{\lambda,\gamma}(u,v).$$
Hence, $(u_1,v_1)$ is a minimizer of $I_{\lambda,\gamma}$ on $\mathcal{M}_{\lambda,\gamma}^-.$
\end{proof}
\begin{lemma}\label{lem-3}
For each $(w_1, w_2) \in \text{dom}_0(\mathcal{E}_p) \times \text{dom}_0(\mathcal{E}_p):$
\begin{enumerate}
\item\label{1} Let $(u_0,v_0) \in \mathcal{M}_{\lambda,\gamma}^{+}$ with $I_{\lambda,\gamma}(u_0,v_0) = \inf\limits_{(u,v) \in \mathcal{M}_{\lambda,\gamma}^{+}}I_{\lambda,\gamma}(u,v).$ Then there exists $\epsilon_0 > 0$ such that for each $\epsilon \in (-\epsilon_0, \epsilon_0)$ there exists a unique $t_\epsilon>0$ such that $t_\epsilon(u_0 + \epsilon w_1, v_0+\epsilon w_2) \in \mathcal{M}_{\lambda,\gamma}^{+}.$ Moreover, $t_\epsilon \to 1$ as $\epsilon \to 0.$
\item\label{2} Let $(u_1,v_1) \in \mathcal{M}_{\lambda,\gamma}^{-}$ with $I_{\lambda,\gamma}(u_1,v_1) = \inf\limits_{(u,v) \in \mathcal{M}_{\lambda,\gamma}^{-}}I_{\lambda,\gamma}(u,v).$ Then there exists $\epsilon_1 > 0$ such that for each $\epsilon \in (-\epsilon_1, \epsilon_1)$ there exists a unique ${t_\epsilon}>0$ such that ${t_\epsilon}(u_1 + \epsilon w_1, v_1+\epsilon w_2) \in \mathcal{M}_{\lambda,\gamma}^{-}.$ Moreover, ${t_\epsilon} \to 1$ as $\epsilon \to 0.$
\end{enumerate}
\end{lemma}
\begin{proof}
(1) Define a function $\mathbf{f} : \mathbb{R}^4 \times (0,\infty) \to \mathbb{R} $ by
$$\mathbf{f}(a,b,c,d,t) = at^{p-1} - t^{q-1}(\lambda b + \gamma c) - d t^{\alpha +\beta -1}.$$
Then
$$\frac{\partial \mathbf{f}}{\partial t}(a,b,c,d,t) = (p-1)t^{p-2} a - (q-1) t^{q-2}(\lambda b + \gamma c) - (\alpha +\beta -1) t^{\alpha+\beta-2} d.$$
Since $(u_0, v_0) \in \mathcal{M}_{\lambda,\gamma}^+,$ we have $\Phi'_{u_0, v_0}(1) =0$ and $\Phi''_{u_0, v_0}(1) >0$, that is,
$$\mathbf{f}\left(\|(u_0,v_0)\|_{\mathcal{E}_p}^p ,\int\limits_{\mathcal{S}}a(x)|u_0|^{q} \mathrm{d}\mu, \int\limits_{\mathcal{S}}b(x)|v_0|^{q} \mathrm{d}\mu, \int\limits_{\mathcal{S}} h(x)|u_0|^{\alpha} |v_0|^{\beta} \mathrm{d}\mu,1 \right) = \Phi'_{u_0, v_0}(1) = 0$$
and
$$\frac{\partial \mathbf{f}}{\partial t}\left(\|(u_0,v_0)\|_{\mathcal{E}_p}^p ,\int\limits_{\mathcal{S}}a(x)|u_0|^{q} \mathrm{d}\mu, \int\limits_{\mathcal{S}}b(x)|v_0|^{q} \mathrm{d}\mu, \int\limits_{\mathcal{S}} h(x)|u_0|^{\alpha} |v_0|^{\beta}
\mathrm{d}\mu, 1 \right) = \Phi''_{u_0, v_0}(1) > 0,$$
respectively. The function $\mathbf{f}_1(\epsilon) = \int\limits_{\mathcal{S}}\lambda a(x)|u_0 + \epsilon w_1|^{q} \mathrm{d}\mu + \int\limits_{\mathcal{S}}\gamma b(x)|v_0 + \epsilon w_2|^{q} \mathrm{d}\mu$ is continuous and $\mathbf{f}_1(0) > 0$ by Lemma \ref{lem-6}\eqref{z1}, because $(u_0, v_0) \in \mathcal{M}_{\lambda,\gamma}^{+}.$ By continuity there exists $\epsilon_0 >0$ such that $\mathbf{f}_1(\epsilon) >0$ for all $\epsilon \in (-\epsilon_0, \epsilon_0).$ So, for each $\epsilon \in (-\epsilon_0, \epsilon_0),$ by Lemma \ref{lem-9}\eqref{z4} there exists $t_\epsilon>0$ such that $t_\epsilon(u_0 + \epsilon w_1, v_0 + \epsilon w_2) \in \mathcal{M}_{\lambda,\gamma}^+.$ This implies
\begin{align*}
&\mathbf{f}\left(\|(u_0 + \epsilon w_1, v_0 + \epsilon w_2)\|_{\mathcal{E}_p}^p ,\int\limits_{\mathcal{S}}a(x)|u_0 + \epsilon w_1|^{q} \mathrm{d}\mu, \int\limits_{\mathcal{S}} b(x)|v_0 + \epsilon w_2|^{q} \mathrm{d}\mu, \int\limits_{\mathcal{S}} h(x)|u_0 + \epsilon w_1|^{\alpha} |v_0 + \epsilon w_2|^{\beta} \mathrm{d}\mu, t_\epsilon \right) \\
&\quad = \Phi'_{u_0 + \epsilon w_1, v_0+\epsilon w_2}(t_\epsilon) = 0.
\end{align*}
By the implicit function theorem there exist an open set $A \subset (0,\infty)$ containing $1,$ an open set $B \subset \mathbb{R}^4$ containing $\left(\|(u_0,v_0)\|_{\mathcal{E}_p}^p, \int\limits_{\mathcal{S}}a(x)|u_0|^{q} \mathrm{d}\mu, \int\limits_{\mathcal{S}}b(x)|v_0|^{q} \mathrm{d}\mu,\int\limits_{\mathcal{S}} h(x)|u_0|^{\alpha}|v_0|^{\beta} \mathrm{d}\mu \right)$ and a continuous function $g : B \to A$ such that, for every $y \in B,$ $t=g(y)$ is the unique solution in $A$ of $\mathbf{f}(y, t) = 0.$ Hence,
$$t_\epsilon = g\left(\|(u_0 + \epsilon w_1, v_0 + \epsilon w_2)\|_{\mathcal{E}_p}^p ,\int\limits_{\mathcal{S}}a(x)|u_0 + \epsilon w_1|^{q} \mathrm{d}\mu, \int\limits_{\mathcal{S}} b(x)|v_0 + \epsilon w_2|^{q} \mathrm{d}\mu, \int\limits_{\mathcal{S}} h(x)|u_0 + \epsilon w_1|^{\alpha} |v_0 + \epsilon w_2|^{\beta} \mathrm{d}\mu\right).$$
As $\epsilon \to 0,$ using the continuity of $g,$ we get
$$1 = g\left(\|(u_0, v_0)\|_{\mathcal{E}_p}^p, \int\limits_{\mathcal{S}}a(x)|u_0|^{q} \mathrm{d}\mu, \int\limits_{\mathcal{S}} b(x)|v_0|^{q} \mathrm{d}\mu, \int\limits_{\mathcal{S}} h(x)|u_0|^{\alpha} |v_0|^{\beta} \mathrm{d}\mu\right).$$
Therefore, $t_\epsilon \to 1$ as $\epsilon \to 0.$\\
(2) This can be proved in the same fashion as (1); instead of $\mathbf{f}_1(\epsilon)$ one takes
$$\mathbf{f}_2(\epsilon) = \int\limits_{\mathcal{S}} h(x)|u_1+\epsilon w_1|^{\alpha} |v_1+\epsilon w_2|^{\beta} \mathrm{d}\mu$$
and proceeds as above.
\end{proof}
\begin{theorem}\label{thm-8}
If $(u_0, v_0)$ is a minimizer of $I_{\lambda,\gamma}$ on $\mathcal{M}_{\lambda,\gamma}^{+},$ then $(u_0, v_0)$ is a weak solution to the problem \eqref{prob}.
\end{theorem}
\begin{proof}
Let $(\psi_1, \psi_2) \in \text{dom}_0(\mathcal{E}_p) \times \text{dom}_0(\mathcal{E}_p).$ By Lemma \ref{lem-3}\eqref{1}, there exists $\epsilon_0 > 0$ such that for each $\epsilon \in (-\epsilon_0,\epsilon_0)$ there exists $t_\epsilon$ such that $t_\epsilon(u_0 + \epsilon \psi_1, v_0+\epsilon \psi_2) \in \mathcal{M}_{\lambda,\gamma}^{+}$; since $(u_0,v_0)$ minimizes $I_{\lambda,\gamma}$ on $\mathcal{M}_{\lambda,\gamma}^{+},$ this gives $I_{\lambda,\gamma}(t_\epsilon (u_0 + \epsilon \psi_1),t_\epsilon(v_0+\epsilon \psi_2)) \geq I_{\lambda,\gamma}(u_0,v_0),$ and $t_\epsilon \to 1$ as $\epsilon \to 0.$ Then we have
\begin{align*}
0 &\leq \lim\limits_{\epsilon \to 0^+} \frac{1}{\epsilon}\left(I_{\lambda,\gamma}(t_\epsilon (u_0 + \epsilon \psi_1),t_\epsilon(v_0+\epsilon \psi_2)) - I_{\lambda,\gamma}(u_0,v_0)\right)\\
&= \lim\limits_{\epsilon \to 0^+}\frac{1}{\epsilon}\left(I_{\lambda,\gamma}(t_\epsilon (u_0 + \epsilon \psi_1),t_\epsilon(v_0+\epsilon \psi_2))-I_{\lambda,\gamma}(t_\epsilon u_0,t_\epsilon v_0)+I_{\lambda,\gamma}(t_\epsilon u_0,t_\epsilon v_0)- I_{\lambda,\gamma}(u_0,v_0)\right)\\
&= \lim\limits_{\epsilon \to 0^+}\frac{1}{\epsilon} \left(I_{\lambda,\gamma}(t_\epsilon (u_0 + \epsilon \psi_1),t_\epsilon(v_0+\epsilon \psi_2))-I_{\lambda,\gamma}(t_\epsilon u_0,t_\epsilon v_0)\right)\\
&= \lim\limits_{\epsilon \to 0^+}\frac{t_\epsilon^p}{\epsilon}\frac{1}{p}\left(\mathcal{E}_p(u_0 + \epsilon \psi_1)+\mathcal{E}_p(v_0+\epsilon \psi_2) - \mathcal{E}_p( u_0)-\mathcal{E}_p( v_0) \right)\\
&\quad - \lim\limits_{\epsilon \to 0^+} \frac{t_\epsilon^q}{\epsilon} \frac{1}{q} \left(\int\limits_{\mathcal{S}} \lambda a(x)|u_0 + \epsilon \psi_1|^{q} \mathrm{d}\mu + \int\limits_{\mathcal{S}} \gamma b(x)|v_0+\epsilon \psi_2|^{q} \mathrm{d}\mu -\int\limits_{\mathcal{S}} \lambda a(x)|u_0|^{q} \mathrm{d}\mu - \int\limits_{\mathcal{S}} \gamma b(x)|v_0|^{q} \mathrm{d}\mu\right)\\
&\quad - \lim\limits_{\epsilon \to 0^+}\frac{t_\epsilon^{\alpha+\beta}}{\epsilon}\frac{1}{\alpha+\beta} \left(\int\limits_{\mathcal{S}} h(x)|u_0 + \epsilon \psi_1|^{\alpha}|v_0+\epsilon \psi_2|^{\beta} \mathrm{d}\mu - \int\limits_{\mathcal{S}} h(x)|u_0|^{\alpha}|v_0|^{\beta} \mathrm{d}\mu\right)\\
& = \mathcal{E}_p^+(u_0,\psi_1) + \mathcal{E}_p^+(v_0,\psi_2) - \int_{\mathcal{S}}\lambda a(x)|u_0|^{q-2}u_0 \psi_1 \mathrm{d}\mu - \int_{\mathcal{S}}\gamma b(x)|v_0|^{q-2}v_0 \psi_2 \mathrm{d}\mu\\
& \quad - \lim\limits_{\epsilon \to 0^+}\frac{t_\epsilon^{\alpha+\beta}}{\epsilon}\frac{1}{\alpha+\beta} \left(\int\limits_{\mathcal{S}} h(x)|u_0 + \epsilon \psi_1|^{\alpha}|v_0+\epsilon \psi_2|^{\beta} \mathrm{d}\mu - \int\limits_{\mathcal{S}} h(x)|u_0|^{\alpha}|v_0 + \epsilon\psi_2|^{\beta} \mathrm{d}\mu \right)\\
&\quad - \lim\limits_{\epsilon \to 0^+}\frac{t_\epsilon^{\alpha+\beta}}{\epsilon}\frac{1}{\alpha+\beta} \left( \int\limits_{\mathcal{S}} h(x)|u_0|^{\alpha}|v_0 + \epsilon\psi_2|^{\beta} \mathrm{d}\mu- \int\limits_{\mathcal{S}} h(x)|u_0|^{\alpha}|v_0|^{\beta} \mathrm{d}\mu\right).
\end{align*}
From the above we get
\begin{align*}
0 \leq & \mathcal{E}_p^+(u_0,\psi_1) + \mathcal{E}_p^+(v_0,\psi_2) - \int_{\mathcal{S}}\lambda a(x)|u_0|^{q-2}u_0 \psi_1 \mathrm{d}\mu - \int_{\mathcal{S}}\gamma b(x)|v_0|^{q-2}v_0 \psi_2 \mathrm{d}\mu \\
&\quad -\frac{\alpha}{\alpha+\beta}\int\limits_{\mathcal{S}} h(x)|u_0|^{\alpha-2}u_0|v_0|^{\beta}\psi_1 \mathrm{d}\mu -\frac{\beta}{\alpha+\beta}\int\limits_{\mathcal{S}} h(x)|u_0|^{\alpha}|v_0|^{\beta-2}v_0\psi_2 \mathrm{d}\mu.
\end{align*}
Note that the second equality above follows from $\lim\limits_{\epsilon \to 0^+}\frac{1}{\epsilon} \left(I_{\lambda,\gamma}(t_\epsilon u_0,t_\epsilon v_0)- I_{\lambda,\gamma}(u_0, v_0)\right) = 0,$ because this limit equals $\Phi'_{u_0, v_0}(1),$ which is zero. This implies
\begin{align*}
&\int_{\mathcal{S}}\lambda a(x)|u_0|^{q-2}u_0 \psi_1 \mathrm{d}\mu + \int_{\mathcal{S}}\gamma b(x)|v_0|^{q-2}v_0 \psi_2 \mathrm{d}\mu +\frac{\alpha}{\alpha+\beta}\int\limits_{\mathcal{S}} h(x)|u_0|^{\alpha-2}u_0|v_0|^{\beta}\psi_1 \mathrm{d}\mu +\frac{\beta}{\alpha+\beta}\int\limits_{\mathcal{S}} h(x)|u_0|^{\alpha}|v_0|^{\beta-2}v_0\psi_2 \mathrm{d}\mu \\
& \quad \leq \mathcal{E}_p^+(u_0,\psi_1) + \mathcal{E}_p^+(v_0,\psi_2).
\end{align*}
Similarly,
\begin{align*}
0 &\geq \lim\limits_{\epsilon \to 0^-} \frac{1}{\epsilon}\left(I_{\lambda,\gamma}(t_\epsilon (u_0 + \epsilon \psi_1),t_\epsilon(v_0+\epsilon \psi_2)) - I_{\lambda,\gamma}(u_0,v_0)\right)\\
&= \lim\limits_{\epsilon \to 0^-} \frac{1}{\epsilon}\left(I_{\lambda,\gamma}(t_\epsilon (u_0 + \epsilon \psi_1),t_\epsilon(v_0+\epsilon \psi_2))-I_{\lambda,\gamma}(t_\epsilon u_0,t_\epsilon v_0)+I_{\lambda,\gamma}(t_\epsilon u_0,t_\epsilon v_0)- I_{\lambda,\gamma}(u_0,v_0)\right)\\
&= \lim\limits_{\epsilon \to 0^-} \frac{1}{\epsilon}\left(I_{\lambda,\gamma}(t_\epsilon (u_0 + \epsilon \psi_1),t_\epsilon(v_0+\epsilon \psi_2))-I_{\lambda,\gamma}(t_\epsilon u_0,t_\epsilon v_0)\right)\\
&= \mathcal{E}_p^-(u_0,\psi_1) + \mathcal{E}_p^-(v_0,\psi_2) - \int_{\mathcal{S}}\lambda a(x)|u_0|^{q-2}u_0 \psi_1 \mathrm{d}\mu - \int_{\mathcal{S}}\gamma b(x)|v_0|^{q-2}v_0 \psi_2 \mathrm{d}\mu \\
&\quad -\frac{\alpha}{\alpha+\beta}\int\limits_{\mathcal{S}} h(x)|u_0|^{\alpha-2}u_0|v_0|^{\beta}\psi_1 \mathrm{d}\mu -\frac{\beta}{\alpha+\beta}\int\limits_{\mathcal{S}} h(x)|u_0|^{\alpha}|v_0|^{\beta-2}v_0\psi_2 \mathrm{d}\mu,
\end{align*}
which implies
\begin{align*}
&\int_{\mathcal{S}}\lambda a(x)|u_0|^{q-2}u_0 \psi_1 \mathrm{d}\mu + \int_{\mathcal{S}}\gamma b(x)|v_0|^{q-2}v_0 \psi_2 \mathrm{d}\mu + \frac{\alpha}{\alpha+\beta}\int\limits_{\mathcal{S}} h(x)|u_0|^{\alpha-2}u_0|v_0|^{\beta}\psi_1 \mathrm{d}\mu + \frac{\beta}{\alpha+\beta}\int\limits_{\mathcal{S}} h(x)|u_0|^{\alpha}|v_0|^{\beta-2}v_0\psi_2 \mathrm{d}\mu \\
& \quad \geq \mathcal{E}_p^-(u_0,\psi_1) + \mathcal{E}_p^-(v_0,\psi_2).
\end{align*}
Combining the above inequalities,
\begin{align*}
&\mathcal{E}_p^-(u_0,\psi_1) + \mathcal{E}_p^-(v_0,\psi_2) \\
& \quad \leq \int_{\mathcal{S}}\lambda a(x)|u_0|^{q-2}u_0 \psi_1 \mathrm{d}\mu + \int_{\mathcal{S}}\gamma b(x)|v_0|^{q-2}v_0 \psi_2 \mathrm{d}\mu +\frac{\alpha}{\alpha+\beta}\int\limits_{\mathcal{S}} h(x)|u_0|^{\alpha-2}u_0|v_0|^{\beta}\psi_1 \mathrm{d}\mu +\frac{\beta}{\alpha+\beta}\int\limits_{\mathcal{S}} h(x)|u_0|^{\alpha}|v_0|^{\beta-2}v_0\psi_2 \mathrm{d}\mu \\
& \quad \quad \leq \mathcal{E}_p^+(u_0,\psi_1) + \mathcal{E}_p^+(v_0,\psi_2).
\end{align*}
Hence
\begin{align*}
&\int_{\mathcal{S}}\lambda a(x)|u_0|^{q-2}u_0 \psi_1 \mathrm{d}\mu + \int_{\mathcal{S}}\gamma b(x)|v_0|^{q-2}v_0 \psi_2 \mathrm{d}\mu +\frac{\alpha}{\alpha+\beta}\int\limits_{\mathcal{S}} h(x)|u_0|^{\alpha-2}u_0|v_0|^{\beta}\psi_1 \mathrm{d}\mu +\frac{\beta}{\alpha+\beta}\int\limits_{\mathcal{S}} h(x)|u_0|^{\alpha}|v_0|^{\beta-2}v_0\psi_2 \mathrm{d}\mu \\
& \quad \in \mathcal{E}_p(u_0,\psi_1) + \mathcal{E}_p(v_0,\psi_2)
\end{align*}
for all $(\psi_1,\psi_2) \in \text{dom}_0(\mathcal{E}_p) \times \text{dom}_0(\mathcal{E}_p).$ Therefore, $(u_0,v_0)$ is a weak solution to the problem \eqref{prob}.
\end{proof}
\begin{theorem}\label{thm-9}
If $(u_1, v_1)$ is a minimizer of $I_{\lambda,\gamma}$ on $\mathcal{M}_{\lambda,\gamma}^{-},$ then $(u_1, v_1)$ is a weak solution to the problem \eqref{prob}.
\end{theorem}
\begin{proof}
Let $(\psi_1, \psi_2) \in \text{dom}_0(\mathcal{E}_p) \times \text{dom}_0(\mathcal{E}_p).$ Using Lemma \ref{lem-3}\eqref{2}, there exists $\epsilon_1 > 0$ such that for each $\epsilon \in (-\epsilon_1,\epsilon_1)$ there exists $t_\epsilon$ such that $I_{\lambda,\gamma}({t_\epsilon}(u_1 + \epsilon \psi_1),{t_\epsilon}(v_1+\epsilon \psi_2)) \geq I_{\lambda,\gamma}(u_1,v_1)$ and ${t_\epsilon} \to 1$ as $\epsilon \to 0.$ Then we have
\begin{align*}
0 &\leq \lim\limits_{\epsilon \to 0^+} \frac{1}{\epsilon}\left(I_{\lambda,\gamma}({t_\epsilon} (u_1 + \epsilon \psi_1),{t_\epsilon}(v_1+\epsilon \psi_2)) - I_{\lambda,\gamma}(u_1,v_1)\right)\\
&= \lim\limits_{\epsilon \to 0^+}\frac{1}{\epsilon}\left(I_{\lambda,\gamma}({t_\epsilon} (u_1 + \epsilon \psi_1),{t_\epsilon}(v_1+\epsilon \psi_2))-I_{\lambda,\gamma}({t_\epsilon} u_1,{t_\epsilon} v_1)+I_{\lambda,\gamma}({t_\epsilon} u_1,{t_\epsilon} v_1)- I_{\lambda,\gamma}(u_1,v_1)\right)\\
&= \lim\limits_{\epsilon \to 0^+}\frac{1}{\epsilon} \left(I_{\lambda,\gamma}({t_\epsilon} (u_1 + \epsilon \psi_1),{t_\epsilon}(v_1+\epsilon \psi_2))- I_{\lambda,\gamma}({t_\epsilon} u_1,{t_\epsilon} v_1)\right)\\
&= \lim\limits_{\epsilon \to 0^+}\frac{{t_\epsilon^p}}{\epsilon}\frac{1}{p}\left(\mathcal{E}_p(u_1 + \epsilon \psi_1)+\mathcal{E}_p(v_1+\epsilon \psi_2) - \mathcal{E}_p( u_1)-\mathcal{E}_p(v_1) \right)\\
&\quad -\lim\limits_{\epsilon \to 0^+} \frac{t_\epsilon^q}{\epsilon} \frac{1}{q} \left(\int\limits_{\mathcal{S}} \lambda a(x)|u_1 + \epsilon \psi_1|^{q} \mathrm{d}\mu + \int\limits_{\mathcal{S}} \gamma b(x)|v_1+\epsilon \psi_2|^{q} \mathrm{d}\mu -\int\limits_{\mathcal{S}} \lambda a(x)|u_1|^{q} \mathrm{d}\mu - \int\limits_{\mathcal{S}} \gamma b(x)|v_1|^{q} \mathrm{d}\mu\right)\\
&\quad - \lim\limits_{\epsilon \to 0^+}\frac{t_\epsilon^{\alpha+\beta}}{\epsilon}\frac{1}{\alpha+\beta} \left(\int\limits_{\mathcal{S}} h(x)|u_1 + \epsilon \psi_1|^{\alpha}|v_1+\epsilon \psi_2|^{\beta} \mathrm{d}\mu - \int\limits_{\mathcal{S}} h(x)|u_1|^{\alpha}|v_1|^{\beta} \mathrm{d}\mu\right)\\
& = \mathcal{E}_p^+(u_1,\psi_1) + \mathcal{E}_p^+(v_1,\psi_2) - \int_{\mathcal{S}}\lambda a(x)|u_1|^{q-2}u_1 \psi_1 \mathrm{d}\mu - \int_{\mathcal{S}}\gamma b(x)|v_1|^{q-2}v_1 \psi_2 \mathrm{d}\mu\\
& \quad - \lim\limits_{\epsilon \to 0^+}\frac{t_\epsilon^{\alpha+\beta}}{\epsilon}\frac{1}{\alpha+\beta} \left(\int\limits_{\mathcal{S}} h(x)|u_1 + \epsilon \psi_1|^{\alpha}|v_1+\epsilon \psi_2|^{\beta} \mathrm{d}\mu - \int\limits_{\mathcal{S}} h(x)|u_1|^{\alpha}|v_1+ \epsilon\psi_2|^{\beta} \mathrm{d}\mu \right)\\
& \quad -
\lim\limits_{\epsilon \to 0^+}\frac{t_\epsilon^{\alpha+\beta}}{\epsilon}\frac{1}{\alpha+\beta}\left( \int\limits_{\mathcal{S}} h(x)|u_1|^{\alpha}|v_1 + \epsilon\psi_2|^{\beta} \mathrm{d}\mu- \int\limits_{\mathcal{S}} h(x)|u_1|^{\alpha}|v_1|^{\beta} \mathrm{d}\mu\right)\\
&= \mathcal{E}_p^+(u_1,\psi_1) + \mathcal{E}_p^+(v_1,\psi_2) - \int_{\mathcal{S}}\lambda a(x)|u_1|^{q-2}u_1 \psi_1 \mathrm{d}\mu - \int_{\mathcal{S}}\gamma b(x)|v_1|^{q-2}v_1 \psi_2 \mathrm{d}\mu \\
& \quad -\frac{\alpha}{\alpha+\beta}\int\limits_{\mathcal{S}} h(x)|u_1|^{\alpha-2}u_1|v_1|^{\beta}\psi_1 \mathrm{d}\mu -\frac{\beta}{\alpha+\beta}\int\limits_{\mathcal{S}} h(x)|u_1|^{\alpha}|v_1|^{\beta-2}v_1\psi_2 \mathrm{d}\mu.
\end{align*}
Note that the second equality follows from $\lim\limits_{\epsilon \to 0^+}\frac{1}{\epsilon} \left(I_{\lambda,\gamma}(t_\epsilon u_1,t_\epsilon v_1) - I_{\lambda,\gamma}(u_1, v_1)\right) = 0,$ because this limit equals $\Phi'_{u_1, v_1}(1),$ which is zero. This implies
\begin{align*}
& \int_{\mathcal{S}}\lambda a(x)|u_1|^{q-2}u_1 \psi_1 \mathrm{d}\mu + \int_{\mathcal{S}}\gamma b(x)|v_1|^{q-2}v_1 \psi_2 \mathrm{d}\mu +\frac{\alpha}{\alpha+\beta}\int\limits_{\mathcal{S}} h(x)|u_1|^{\alpha-2}u_1|v_1|^{\beta}\psi_1 \mathrm{d}\mu +\frac{\beta}{\alpha+\beta}\int\limits_{\mathcal{S}} h(x)|u_1|^{\alpha}|v_1|^{\beta-2}v_1\psi_2 \mathrm{d}\mu \\
& \quad \leq \mathcal{E}_p^+(u_1,\psi_1) + \mathcal{E}_p^+(v_1,\psi_2).
\end{align*}
Similarly,
\begin{align*}
0 &\geq \lim\limits_{\epsilon \to 0^-} \frac{1}{\epsilon}\left(I_{\lambda,\gamma}({t_\epsilon} (u_1 + \epsilon \psi_1),{t_\epsilon}(v_1+\epsilon \psi_2)) - I_{\lambda,\gamma}(u_1,v_1)\right)\\
&= \lim\limits_{\epsilon \to 0^-} \frac{1}{\epsilon}\left(I_{\lambda,\gamma}({t_\epsilon} (u_1 + \epsilon \psi_1),{t_\epsilon}(v_1+\epsilon \psi_2))-I_{\lambda,\gamma}({t_\epsilon} u_1,{t_\epsilon} v_1)+I_{\lambda,\gamma}({t_\epsilon} u_1,{t_\epsilon} v_1)- I_{\lambda,\gamma}(u_1,v_1)\right)\\
&= \lim\limits_{\epsilon \to 0^-} \frac{1}{\epsilon}\left(I_{\lambda,\gamma}({t_\epsilon} (u_1 + \epsilon \psi_1),{t_\epsilon}(v_1+\epsilon \psi_2))-I_{\lambda,\gamma}({t_\epsilon} u_1,{t_\epsilon} v_1)\right)\\
&= \mathcal{E}_p^-(u_1,\psi_1) + \mathcal{E}_p^-(v_1,\psi_2) - \int_{\mathcal{S}}\lambda a(x)|u_1|^{q-2}u_1 \psi_1 \mathrm{d}\mu - \int_{\mathcal{S}}\gamma b(x)|v_1|^{q-2}v_1 \psi_2 \mathrm{d}\mu \\
&\quad -\frac{\alpha}{\alpha+\beta}\int\limits_{\mathcal{S}} h(x)|u_1|^{\alpha-2}u_1|v_1|^{\beta}\psi_1 \mathrm{d}\mu -\frac{\beta}{\alpha+\beta}\int\limits_{\mathcal{S}} h(x)|u_1|^{\alpha}|v_1|^{\beta-2}v_1\psi_2 \mathrm{d}\mu,
\end{align*}
which implies
\begin{align*}
&\int_{\mathcal{S}}\lambda a(x)|u_1|^{q-2}u_1 \psi_1 \mathrm{d}\mu + \int_{\mathcal{S}}\gamma b(x)|v_1|^{q-2}v_1 \psi_2 \mathrm{d}\mu + \frac{\alpha}{\alpha+\beta}\int\limits_{\mathcal{S}} h(x)|u_1|^{\alpha-2}u_1|v_1|^{\beta}\psi_1 \mathrm{d}\mu + \frac{\beta}{\alpha+\beta}\int\limits_{\mathcal{S}} h(x)|u_1|^{\alpha}|v_1|^{\beta-2}v_1\psi_2 \mathrm{d}\mu \\
& \quad \geq \mathcal{E}_p^-(u_1,\psi_1) + \mathcal{E}_p^-(v_1,\psi_2).
\end{align*}
Therefore,
\begin{align*}
&\mathcal{E}_p^-(u_1,\psi_1) + \mathcal{E}_p^-(v_1,\psi_2) \\
&\quad \leq \int_{\mathcal{S}}\lambda a(x)|u_1|^{q-2}u_1 \psi_1 \mathrm{d}\mu + \int_{\mathcal{S}}\gamma b(x)|v_1|^{q-2}v_1 \psi_2 \mathrm{d}\mu +\frac{\alpha}{\alpha+\beta}\int\limits_{\mathcal{S}} h(x)|u_1|^{\alpha-2}u_1|v_1|^{\beta}\psi_1 \mathrm{d}\mu +\frac{\beta}{\alpha+\beta}\int\limits_{\mathcal{S}} h(x)|u_1|^{\alpha}|v_1|^{\beta-2}v_1\psi_2 \mathrm{d}\mu \\
& \quad \quad \leq \mathcal{E}_p^+(u_1,\psi_1) + \mathcal{E}_p^+(v_1,\psi_2).
\end{align*}
Hence,
\begin{align*}
&\int_{\mathcal{S}}\lambda a(x)|u_1|^{q-2}u_1 \psi_1 \mathrm{d}\mu + \int_{\mathcal{S}}\gamma b(x)|v_1|^{q-2}v_1 \psi_2 \mathrm{d}\mu +\frac{\alpha}{\alpha+\beta}\int\limits_{\mathcal{S}} h(x)|u_1|^{\alpha-2}u_1|v_1|^{\beta}\psi_1 \mathrm{d}\mu +\frac{\beta}{\alpha+\beta}\int\limits_{\mathcal{S}} h(x)|u_1|^{\alpha}|v_1|^{\beta-2}v_1\psi_2 \mathrm{d}\mu \\
& \quad \in \mathcal{E}_p(u_1,\psi_1) + \mathcal{E}_p(v_1,\psi_2)
\end{align*}
for all $(\psi_1,\psi_2) \in \text{dom}_0(\mathcal{E}_p) \times \text{dom}_0(\mathcal{E}_p).$ Therefore, $(u_1,v_1)$ is a weak solution to the problem \eqref{prob}.
\end{proof}
We are now ready to prove the main theorem, stated in Section 2.
\begin{proof}[Proof of Theorem \ref{main}]
The existence of $\kappa_0$ was established in Lemma \ref{lem-8}. By Theorems \ref{thm-1} and \ref{thm-2}, if $(\lambda,\gamma) \in \Lambda_0,$ then there exist minimizers $(u_0,v_0)$ of $I_{\lambda,\gamma}$ on $\mathcal{M}_{\lambda,\gamma}^+$ and $(u_1,v_1)$ on $\mathcal{M}_{\lambda,\gamma}^-.$ By Theorems \ref{thm-8} and \ref{thm-9}, both minimizers are weak solutions to \eqref{prob}. Moreover, $I_{\lambda,\gamma}(u_0,v_0) < 0 < d_0 \leq I_{\lambda,\gamma}(u_1,v_1),$ so the two solutions are nontrivial and distinct. Hence \eqref{prob} has at least two nontrivial solutions. This completes the proof.
\end{proof}
\begin{remark}
For $p=2,$ the sign `$\in$' in the above proofs can be replaced by `$=$', since the function $A_2$ is differentiable in this case, so that $\mathcal{E}_2^+(u_1,\psi_1) = \mathcal{E}_2^-(u_1,\psi_1).$
\end{remark}
\end{document}
\begin{document} \begin{abstract} The degree of a CSP instance is the maximum number of times that a variable may appear in the scope of constraints. We consider the approximate counting problem for Boolean CSPs with bounded-degree instances, for constraint languages containing the two unary constant relations $\{0\}$ and $\{1\}$. When the maximum degree is at least $25$ we obtain a complete classification of the complexity of this problem. It is exactly solvable in polynomial-time if every relation in the constraint language is affine. It is equivalent to the problem of approximately counting independent sets in bipartite graphs if every relation can be expressed as conjunctions of $\{0\}$, $\{1\}$ and binary implication. Otherwise, there is no FPRAS unless $\ensuremath{\mathbf{NP}} = \ensuremath{\mathbf{RP}}$. For lower degree bounds, additional cases arise in which the complexity is related to the complexity of approximately counting independent sets in hypergraphs. \end{abstract} \maketitle \section{Introduction} \label{sec:Intro} In the constraint satisfaction problem (CSP), we seek to assign values from some domain to a set of variables, while satisfying given constraints on the combinations of values that certain subsets of the variables may take. Constraint satisfaction problems are ubiquitous in computer science, with close connections to graph theory, database query evaluation, type inference, satisfiability, scheduling and artificial intelligence \cite{KV2000:Conjunctive, Kum1992:CSP-algorithms, Mon1974:Constraints}. CSP can also be reformulated in terms of homomorphisms between relational structures \cite{FV1998:MMSNP} and conjunctive query containment in database theory \cite{KV2000:Conjunctive}. Weighted versions of CSP appear in statistical physics, where they correspond to partition functions of spin systems \cite{Wel1993:Complexity}. We give formal definitions in Section~\ref{sec:Prelim} but, for now, consider an undirected graph $G$ and the CSP where the domain is $\{\mathrm{red}, \mathrm{green}, \mathrm{blue}\}$, the variables are the vertices of $G$ and the constraints specify that, for every edge $xy\in G$, $x$ and $y$ must be assigned different values. Thus, in a satisfying assignment, no two adjacent vertices are given the same colour: the CSP is satisfiable if, and only if, the graph is 3-colourable. As a second example, given a formula in 3-CNF, we can write a system of constraints over the variables, with domain $\{\mathrm{true}, \mathrm{false}\}$, that requires the assignment to each clause to satisfy at least one literal. Clearly, the resulting CSP is directly equivalent to the original satisfiability problem. \subsection{Decision CSP} \label{sec:Decision} In the \emph{uniform constraint satisfaction problem}, we are given the set of constraints explicitly, as lists of allowable combinations for given subsets of the variables; these lists can be considered as relations over the domain. Since it includes problems such as 3-\textsc{sat} and 3-\textsc{colourability}, uniform CSP is \ensuremath{\mathbf{NP}}{}-complete. However, uniform CSP also includes problems in \ensuremath{\mathbf{P}}{}, such as 2-\textsc{sat} and 2-\textsc{colourability}, raising the natural question of what restrictions lead to tractable problems. There are two natural ways to restrict CSP: we can restrict the form of the instances and we can restrict the form of the constraints. The most common restriction to CSP is to allow only certain fixed relations in the constraints. 
The list of allowed relations is known as the \emph{constraint language} and we write $\ensuremath{\mathrm{CSP}}(\Gamma)$ for the so-called \emph{non-uniform} CSP in which each constraint states that the values assigned to some tuple of variables must be a tuple in a specified relation in $\Gamma$. The classic example of this is Schaefer's dichotomy for Boolean constraint languages $\Gamma$ (i.e., those with domain $\{0,1\}$; often called ``generalized satisfiability'') \cite{Sch1978:Boolean}. He showed that $\ensuremath{\mathrm{CSP}}(\Gamma)$ is in $\ensuremath{\mathbf{P}}$ if $\Gamma$ is included in one of six classes and is \ensuremath{\mathbf{NP}}{}-complete, otherwise. More recently, Bulatov has produced a corresponding dichotomy for the three-element domain \cite{Bul2006:Ternary}. These two results restrict the size of the domain but allow relations of arbitrary arity in the constraint language. The converse restriction --- relations of restricted arity, especially binary relations, over arbitrary finite domains --- has also been studied in depth \cite{HN1990:H-color, HN2004:Homomorphisms}. For all $\Gamma$ studied so far, $\ensuremath{\mathrm{CSP}}(\Gamma)$ has been either in \ensuremath{\mathbf{P}}{} or \ensuremath{\mathbf{NP}}{}-complete and Feder and Vardi have conjectured that this holds for every constraint language \cite{FV1998:MMSNP}. Ladner has shown that it is not the case that every problem in \ensuremath{\mathbf{NP}}{} is either in \ensuremath{\mathbf{P}}{} or \ensuremath{\mathbf{NP}}{}-complete since, if $\ensuremath{\mathbf{P}}{}\neq \ensuremath{\mathbf{NP}}{}$, there is an infinite, strict hierarchy between the two \cite{Lad1975:Reducibility}. However, there are problems in \ensuremath{\mathbf{NP}}{}, such as graph Hamiltonicity and even connectedness, that cannot be expressed as $\ensuremath{\mathrm{CSP}}(\Gamma)$ for any finite $\Gamma\,$\footnote{This follows from results on the expressive power of existential monadic second-order logic \cite{FSV1995:Monadic}.} and Ladner's diagonalization does not seem to be expressible in CSP \cite{FV1998:MMSNP}, so a dichotomy for CSP appears possible. Restricting the tree-width of instances has also been a fruitful direction of research \cite{Fre1990:CSP-tw, KV2000:Games-CSP}. In contrast, little is known about restrictions on the degree of instances, i.e., the maximum number of times that any variable may appear. Dalmau and Ford have shown that, for any fixed Boolean constraint language $\Gamma$ containing the constant unary relations $R_{\mathrm{zero}}=\{0\}$ and $R_{\mathrm{one}}=\{1\}$, the complexity of $\ensuremath{\mathrm{CSP}}(\Gamma)$ for instances of degree at most three is exactly the same as the complexity of $\ensuremath{\mathrm{CSP}}(\Gamma)$ with no degree restriction \cite{DF2003:bdeg-gensat}. The case where variables may appear at most twice has not yet been completely classified; it is known that degree-2 $\ensuremath{\mathrm{CSP}}(\Gamma)$ is as hard as general $\ensuremath{\mathrm{CSP}}(\Gamma)$ whenever $\Gamma$ contains $R_{\mathrm{zero}}$ and $R_{\mathrm{one}}$ and some relation that is not a $\Delta$-matroid \cite{Fed2001:Fanout}; the known polynomial-time cases come from restrictions on the kinds of $\Delta$-matroids that appear in $\Gamma$ \cite{DF2003:bdeg-gensat}. \subsection{Counting CSP} A generalization of classical CSP is to ask how many satisfying solutions there are. This is referred to as counting CSP, \ensuremath{\#\mathrm{CSP}}{}. 
Clearly, the decision problem is reducible to counting: if we can efficiently count the solutions, we can efficiently determine whether there is at least one. The converse does not hold: for example, we can determine in polynomial time whether a graph admits a perfect matching, but it is \ensuremath{\#\mathbf{P}}{}-complete to count the perfect matchings, even in a bipartite graph \cite{Val1979:Enumeration}. \ensuremath{\#\mathbf{P}}{} is the class of functions $f$ for which there is a nondeterministic, polynomial-time Turing machine that has exactly $f(x)$ accepting paths for input $x$ \cite{Val1979:Permanent}. It is easily seen that the counting version of any \ensuremath{\mathbf{NP}}{} decision problem is in \ensuremath{\#\mathbf{P}}{} and \ensuremath{\#\mathbf{P}}{} can be considered the counting ``analogue'' of \ensuremath{\mathbf{NP}}{}. Note, though, that problems that are \ensuremath{\#\mathbf{P}}{}-complete under appropriate reductions are, under standard complexity-theoretic assumptions, considerably harder than \ensuremath{\mathbf{NP}}{}-complete problems: $\ensuremath{\mathbf{P}}^{\ensuremath{\#\mathbf{P}}}$ includes the whole of the polynomial hierarchy \cite{Tod1989:PH}, whereas $\ensuremath{\mathbf{P}}^{\ensuremath{\mathbf{NP}}}$ is generally thought not to. Although no dichotomy is known for CSP, Bulatov has recently shown that, for all $\Gamma\!$, $\ensuremath{\#\mathrm{CSP}}(\Gamma)$ is either computable in polynomial time or \ensuremath{\#\mathbf{P}}{}-complete \cite{Bul2008:Dichotomy}. However, Bulatov's dichotomy sheds little light on which constraint languages yield polynomial-time counting CSPs and which do not. The criterion of the dichotomy is based on ``defects'' in a certain infinite algebra built up from the polymorphisms of $\Gamma$ and it is open whether the characterization is even decidable. It also seems not to apply to bounded-degree \ensuremath{\#\mathrm{CSP}}{}. So, although there is a full dichotomy for $\ensuremath{\#\mathrm{CSP}}(\Gamma)$, results for restricted forms of constraint language are still of interest. Creignou and Hermann have shown that only one of Schaefer's polynomial-time cases for Boolean languages survives the transition to counting: $\ensuremath{\#\mathrm{CSP}}(\Gamma)\in\ensuremath{\mathbf{FP}}$ (i.e., has a polynomial-time algorithm) if $\Gamma$ is affine (i.e., each relation is the solution set of a system of linear equations over \ensuremath{\mathrm{GF}_2}{}) and is \ensuremath{\#\mathbf{P}}{}-complete, otherwise \cite{CH1996:Bool-numCSP}. This result has been extended to rational and even complex-weighted instances \cite{WBool, CLXxxxx:Complex-numCSP} and, in the latter case, the dichotomy is shown to hold for the restriction of the problem in which instances have degree~$3$. This implies that the degree-3 problem $\ensuremath{\#\mathrm{CSP}}d[3](\Gamma)$ ($\ensuremath{\#\mathrm{CSP}}(\Gamma)$ restricted to instances of degree~3) is in \ensuremath{\mathbf{FP}}{} if $\Gamma$ is affine and is \ensuremath{\#\mathbf{P}}{}-complete, otherwise.
This is a randomized algorithm for computing some function $f(x)$, taking as its input $x$ and a constant $\epsilon > 0$, and computing a value $Y$ such that $e^{-\epsilon} \leqslant Y/f(x) \leqslant e^\epsilon$ with probability at least $\tfrac{3}{4}$, in time polynomial in both $|x|$ and ${\epsilon}^{-1}$. (See Section~\ref{sec:Prelim:Approx}.) Dyer, Goldberg and Jerrum have classified the complexity of approximately computing $\ensuremath{\#\mathrm{CSP}}(\Gamma)$ for Boolean constraint languages \cite{DGJ2007:Bool-approx}. When all relations in $\Gamma$ are affine, $\ensuremath{\#\mathrm{CSP}}(\Gamma)$ can be computed exactly in polynomial time by the result of Creignou and Hermann discussed above \cite{CH1996:Bool-numCSP}. Otherwise, if every relation in $\Gamma$ can be defined by a conjunction of pins (i.e., assertions $v=0$ or $v=1$) and Boolean implications, then $\ensuremath{\#\mathrm{CSP}}(\Gamma)$ is as hard to approximate as the problem \ensuremath{\#\mathrm{BIS}}{} of counting independent sets in a bipartite graph; otherwise, $\ensuremath{\#\mathrm{CSP}}(\Gamma)$ is as hard to approximate as the problem \ensuremath{\#\mathrm{SAT}}{} of counting the satisfying truth assignments of a Boolean formula. Dyer, Goldberg, Greenhill and Jerrum have shown that the latter problem is complete for \ensuremath{\#\mathbf{P}}{} under appropriate approximation-preserving reductions (see Section~\ref{sec:Prelim:Approx}) and has no FPRAS unless $\ensuremath{\mathbf{NP}} = \ensuremath{\mathbf{RP}}$ \cite{DGGJ2004:Approx}, which is thought to be unlikely. The complexity of \ensuremath{\#\mathrm{BIS}}{} is currently open: there is no known FPRAS but it is not known to be \ensuremath{\#\mathbf{P}}{}-complete, either. \ensuremath{\#\mathrm{BIS}}{} is known to be complete for a logically-defined subclass of \ensuremath{\#\mathbf{P}}{} with respect to approximation-preserving reductions \cite{DGGJ2004:Approx}. \subsection{Our result} We consider the complexity of approximately solving Boolean $\ensuremath{\#\mathrm{CSP}}$ problems when instances have bounded degree. Following Dalmau and Ford~\cite{DF2003:bdeg-gensat} and Feder~\cite{Fed2001:Fanout} we consider the case in which $R_{\mathrm{zero}}=\{0\}$ and $R_{\mathrm{one}}=\{1\}$ are available. We proceed by showing that any Boolean relation that is not definable as a conjunction of ORs or NANDs can be used in low-degree instances to assert equalities between variables. Thus, we can side-step degree restrictions by replacing high-degree variables with distinct variables asserted to be equal. Our main result, Corollary~\ref{cor:main}, is a trichotomy for the case in which instances have maximum degree~$d$ for some $d\geqslant 25$. If every relation in~$\Gamma$ is affine, then $\ensuremath{\#\mathrm{CSP}}d(\Gamma \cup \{R_{\mathrm{zero}},R_{\mathrm{one}}\})$ is solvable in polynomial time. Otherwise, if every relation in $\Gamma$ can be defined as a conjunction of $R_{\mathrm{zero}}$, $R_{\mathrm{one}}$ and binary implications, then $\ensuremath{\#\mathrm{CSP}}d(\Gamma \cup \{R_{\mathrm{zero}},R_{\mathrm{one}}\})$ is equivalent in approximation complexity to $\ensuremath{\#\mathrm{BIS}}{}$. Otherwise, it has no FPRAS unless $\ensuremath{\mathbf{NP}}=\ensuremath{\mathbf{RP}}$. Theorem~\ref{theorem:complexity} gives a partial classification of the complexity when $d<25$. In the new cases that arise here, the complexity is given in terms of the complexity of counting independent sets in hypergraphs with bounded degree and bounded hyper-edge size. 
The complexity of this problem is not fully understood and we explain what is known about it in Section~\ref{sec:Complexity}. \section{Preliminaries} \label{sec:Prelim} \subsection{Basic notation} We write $\overline{a}$ for the tuple $\tuple{a_1, \dots, a_r}$, which we often shorten to $\overline{a} = a_1\dots a_r$. We write $a^r$ for the $r$-tuple $a\dots a$ and $\overline{a}\overline{b}$ for the tuple formed from the elements of $\overline{a}$ followed by those of $\overline{b}$. The \emph{bit-wise complement} of a relation $R\subseteq \ensuremath{\{0,1\}}^r$ is the relation $\bwcomp{R} = \{\tuple{a_1\oplus 1, \dots, a_r\oplus 1} \mid \overline{a}\in R\}$, where $\oplus$ denotes addition modulo~2. We say that a relation $R$ is \emph{ppp-definable}\footnote{This should not be confused with the concept of primitive positive definability (pp-definability) which appears in algebraic treatments of CSP and \ensuremath{\#\mathrm{CSP}}{}, for example in the work of Bulatov \cite{Bul2008:Dichotomy}.} in a relation $R'$ and write $R\leqslant_{\mathrm{ppp}} R'$ if $R$ can be obtained from $R'$ by some sequence of the following operations: \begin{itemize} \itemspacing \item permutation of columns (for notational convenience only); \item pinning (taking sub-relations of the form $R_{i\mapsto c} = \{\overline{a}\in R \mid a_i = c\}$ for some $i$ and some $c\in\ensuremath{\{0,1\}}$); and \item projection (``deleting the $i$th column'' to give the relation $\{a_1\dots a_{i-1}a_{i+1}\dots a_r \mid a_1\dots a_r\in R\}$). \end{itemize} It is easy to see that $\leqslant_{\mathrm{ppp}}$ is reflexive and transitive and that, if $R\leqslant_{\mathrm{ppp}} R'\!$, then $R$ can be obtained from $R'$ by first permuting the columns, then making some pins and then projecting. We write $R_{=} = \{00, 11\}$, $R_{\neq} = \{01, 10\}$, $R_{\mathrm{OR}} = \{01, 10, 11\}$, $R_{\mathrm{NAND}} = \{00, 01, 10\}$, $R_{\imp} = \{00, 01, 11\}$ and $R_{\leftarrow} = \{00, 10, 11\}$. For $k\geqslant 2$, we write $R_{=}k = \{0^k\!, 1^k\}$, $R_{\mathrm{OR}}k = \ensuremath{\{0,1\}}^k \setminus \{0^k\}$ and $R_{\mathrm{NAND}}k = \ensuremath{\{0,1\}}^k \setminus \{1^k\}$ (i.e., $k$-ary equality, \ensuremath{\mathrm{OR}}{} and \ensuremath{\mathrm{NAND}}{}). \subsection{Boolean constraint satisfaction problems} A \emph{constraint language} is a set $\Gamma = \{R_1, \dots, R_m\}$ of named Boolean relations. Given a set $V$ of variables, the set of \emph{constraints} over $\Gamma$ is the set $\mathrm{Cons}(V,\Gamma)$ which contains $R(\overline{v})$ for every relation $R\in\Gamma$ with arity $r$ and every $\overline{v}\in V^r\!$. Note that $v=v'$ and $v\neq v'$ are not constraints unless the appropriate relations are included in $\Gamma\!$. The \emph{scope} of a constraint $R(\overline{v})$ is the tuple $\overline{v}$, which need not consist of distinct variables. An \emph{instance} of the constraint satisfaction problem (CSP) over $\Gamma$ is a set $V$ of variables and a set $C\subseteq \mathrm{Cons}(V,\Gamma)$ of constraints. An \emph{assignment} to a set $V$ of variables is a function $\sigma\colon V\to \ensuremath{\{0,1\}}$. An assignment to $V$ \emph{satisfies} an instance $(V, C)$ if $\tuple{\sigma(v_1), \dots, \sigma(v_r)}\in R$ for every constraint $R(v_1, \dots, v_r)$. We write $Z(I)$ for the number of satisfying assignments to a CSP instance $I$. We study the counting CSP problem $\ensuremath{\#\mathrm{CSP}}(\Gamma)$, parameterized by $\Gamma\!$, in which we must compute $Z(I)$ for an instance $I=(V, C)$ of CSP over $\Gamma$. 
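To make the counting problem concrete, the following short Python sketch brute-forces $Z(I)$ for a toy instance. It is purely illustrative and not part of the formal development: the function names and the toy instance are our own, and each relation is simply represented as a set of tuples, with a constraint given by a relation paired with its scope.
\begin{verbatim}
from itertools import product

# Illustrative brute-force computation of Z(I) for a Boolean CSP instance.
# A relation is a set of tuples; a constraint pairs a relation with its scope.
def count_solutions(variables, constraints):
    total = 0
    for values in product((0, 1), repeat=len(variables)):
        sigma = dict(zip(variables, values))
        if all(tuple(sigma[v] for v in scope) in rel
               for rel, scope in constraints):
            total += 1
    return total

# Example: R_OR on (x, y) together with the pin R_one on y.
R_OR = {(0, 1), (1, 0), (1, 1)}
R_one = {(1,)}
print(count_solutions(['x', 'y'],
                      [(R_OR, ('x', 'y')), (R_one, ('y',))]))  # prints 2
\end{verbatim}
Exhaustive enumeration of course takes exponential time; it is shown here only to fix the meaning of $Z(I)$ before degree restrictions are introduced.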
The \emph{degree} of an instance is the greatest number of times any variable appears among its constraints. Note that the variable $v$ appears twice in the constraint $R(v,v)$. Our specific interest in this paper is in classifying the complexity of bounded-degree counting CSPs. For a constraint language $\Gamma$ and a positive integer $d$, define $\ensuremath{\#\mathrm{CSP}}d(\Gamma)$ to be the restriction of $\ensuremath{\#\mathrm{CSP}}(\Gamma)$ to instances of degree at most $d$. Instances of degree~1 are trivial. \begin{theorem} \label{thrm:degree-1} For any $\Gamma\!$, $\ensuremath{\#\mathrm{CSP}}d[1](\Gamma)\in \ensuremath{\mathbf{FP}}$. \qed \end{theorem} When considering $\ensuremath{\#\mathrm{CSP}}d$ for $d\geqslant 2$, we follow established practice by allowing \emph{pinning} in the constraint language \cite{DF2003:bdeg-gensat, Fed2001:Fanout}. We write $R_{\mathrm{zero}}=\{0\}$ and $R_{\mathrm{one}}=\{1\}$ for the two singleton unary relations. We refer to constraints in $R_{\mathrm{zero}}$ and $R_{\mathrm{one}}$ as \emph{pins}. To make notation easier, we will sometimes write constraints using constants instead of explicit pins. That is, we will allow the constants 0 and~1 to appear in the place of variables in the scopes of constraints. Such constraints can obviously be rewritten as a set of ``proper'' constraints, without increasing degree. We let $\ensuremath{\Gamma_\mathrm{\!pin}}$ denote the constraint language $\{R_{\mathrm{zero}}, R_{\mathrm{one}}\}$. \subsection{Hypergraphs} A \emph{hypergraph} $H=(V,E)$ is a set $V=V(H)$ of vertices and a set $E = E(H)\subseteq \powerset{V}$ of non-empty \emph{hyper-edges}. The \emph{degree} of a vertex $v\in V(H)$ is the number $d(v) = |\{e\in E(H)\mid v\in e\}|$ and the degree of a hypergraph is the maximum degree of its vertices. If $w = \max \{|e| \mid e\in E(H)\}$, we say that $H$ has \emph{width} $w$. An \emph{independent set} in a hypergraph $H$ is a set $S\subseteq V(H)$ such that $e\nsubseteq S$ for every $e\in E(H)$. Note that an independent set may contain more than one vertex from any hyper-edge of size at least three. We write \ensuremath{\#w\mathrm{\text{-}HIS}}{} for the problem of counting the independent sets in a width-$w$ hypergraph $H$, and \ensuremath{\#w\mathrm{\text{-}HIS}}d{} for the restriction of \ensuremath{\#w\mathrm{\text{-}HIS}}{} to inputs of degree at most $d$. \subsection{Approximation complexity} \label{sec:Prelim:Approx} A \emph{randomized approximation scheme} (RAS) for a function $f\colon\Sigma^*\rightarrow\mathbb{N}$ is a probabilistic Turing machine that takes as input a pair $(x,\epsilon)\in \Sigma^*\times (0,1)$, and produces, on an output tape, an integer random variable~$Y$ with $\Pr(e^{-\epsilon} \leqslant Y/f(x) \leqslant e^\epsilon)\geqslant \frac{3}{4}$. \footnote{The choice of the value $\frac{3}{4}$ is inconsequential: the same class of problems has an FPRAS if we choose any probability $p$ with $\frac{1}{2}<p<1$ \cite{JVV1986:Randgen}.} A \emph{fully polynomial randomized approximation scheme (FPRAS)} is a RAS that runs in time $\mathrm{poly}(|x|,\epsilon^{-1})$. To compare the complexity of approximate counting problems, we use the AP-reductions of \cite{DGGJ2004:Approx}. Suppose $f$ and $g$ are two functions from some input domain $\Sigma^*$ to the natural numbers and we wish to compare the complexity of approximately computing~$f$ to that of approximately computing~$g$. 
An \emph{approximation-preserving} reduction from~$f$ to~$g$ is a probabilistic oracle Turing machine $M$ that takes as input a pair $(x,\epsilon)\in \Sigma^*\times (0,1)$, and satisfies the following three conditions: (i) every oracle call made by $M$ is of the form $(w,\delta)$ where $w\in \Sigma^*$ is an instance of~$g$, and $0<\delta<1$ is an error bound satisfying $\delta^{-1} \leqslant \mathrm{poly}(|x|,\epsilon^{-1})$; (ii) $M$ is a randomized approximation scheme for $f$ whenever the oracle is a randomized approximation scheme for $g$; and (iii) the run-time of $M$ is polynomial in $|x|$ and $\epsilon^{-1}$. If there is an approximation-preserving reduction from $f$ to $g$, we write $f\leqslant_{\mathrm{AP}} g$ and say that $f$ is \emph{AP-reducible} to $g$. If $g$ has an FPRAS, then so does $f$. If $f\leqslant_{\mathrm{AP}} g$ and $g\leqslant_{\mathrm{AP}} f$, then we say that $f$ and $g$ are \emph{AP-interreducible} and write $f\equiv_{\mathrm{AP}} g$. \section{Classes of relations} \label{sec:Relations} A relation $R\subseteq \ensuremath{\{0,1\}}^r$ is \emph{affine} if it is the set of solutions to some system of linear equations over \ensuremath{\mathrm{GF}_2}{}. That is, there is a set $\Sigma$ of equations in variables $x_1, \dots, x_r$, each of the form $x_{i_1} \oplus \dots \oplus x_{i_n} = c$, where $\oplus$ denotes addition modulo~2 and $c\in \ensuremath{\{0,1\}}$, such that $\overline{a}\in R$ if, and only if, the assignment $x_1\mapsto a_1, \dots, x_r\mapsto a_r$ satisfies every equation in $\Sigma$. Note that the empty and complete relations are affine. We define \ensuremath{\text{IM-conj}}{} to be the class of relations defined by a conjunction of pins and (binary) implications. This class is called $\text{IM}_2$ in \cite{DGJ2007:Bool-approx}. \begin{lemma} \label{lemma:IMconj-implies} If $R\in\ensuremath{\text{IM-conj}}$ is not affine, then $R_{\imp}\leqslant_{\mathrm{ppp}} R$.\qed \end{lemma} Let \ensuremath{\mathrm{OR}}conj{} be the set of Boolean relations that are defined by a conjunction of pins and \ensuremath{\mathrm{OR}}{}s of any arity and \ensuremath{\mathrm{NAND}}conj{} the set of Boolean relations definable by conjunctions of pins and \ensuremath{\mathrm{NAND}}{}s (i.e., negated conjunctions) of any arity. We say that one of the defining formulae of these relations is \emph{normalized} if no pinned variable appears in any \ensuremath{\mathrm{OR}}{} or \ensuremath{\mathrm{NAND}}{}, the arguments of each individual \ensuremath{\mathrm{OR}}{} and \ensuremath{\mathrm{NAND}}{} are distinct, every \ensuremath{\mathrm{OR}}{} or \ensuremath{\mathrm{NAND}}{} has at least two arguments and no \ensuremath{\mathrm{OR}}{} or \ensuremath{\mathrm{NAND}}{}'s arguments are a subset of any other's. \begin{lemma} \label{lemma:conj-norm} Every \ensuremath{\mathrm{OR}}conj{} (respectively, \ensuremath{\mathrm{NAND}}conj{}) relation is defined by a unique normalized formula.\qed \end{lemma} Given the uniqueness of defining normalized formulae, we define the \emph{width} of an \ensuremath{\mathrm{OR}}conj{} or \ensuremath{\mathrm{NAND}}conj{} relation $R$ to be $\width{R}$, the greatest number of arguments to any of the \ensuremath{\mathrm{OR}}{}s or \ensuremath{\mathrm{NAND}}{}s in the normalized formula that defines it. Note that, from the definition of normalized formulae, there are no relations of width~1. 
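The pinning and projection operations used in ppp-definitions are easy to carry out mechanically on relations represented as sets of tuples. The following small Python sketch, again purely illustrative and with names of our own choosing, recovers the binary relation $R_{\mathrm{OR}}$ from the ternary OR relation by one pinning and one projection, in the spirit of the next lemma.
\begin{verbatim}
from itertools import product

# Illustrative sketch of two of the ppp operations on relations
# given as sets of tuples (column permutation is omitted).
def pin(rel, i, c):
    # Pinning: keep the tuples whose i-th coordinate (0-indexed) equals c.
    return {t for t in rel if t[i] == c}

def project(rel, i):
    # Projection: delete the i-th column.
    return {t[:i] + t[i + 1:] for t in rel}

# The ternary OR relation is {0,1}^3 without 000; pinning its last column
# to 0 and projecting that column away yields the binary relation R_OR.
OR3 = {t for t in product((0, 1), repeat=3) if t != (0, 0, 0)}
print(project(pin(OR3, 2, 0), 2) == {(0, 1), (1, 0), (1, 1)})  # prints True
\end{verbatim}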
\begin{lemma} \label{lemma:ORconj-OR} If $R\in\ensuremath{\mathrm{OR}}conj$ has width $w$, then $R_{\mathrm{OR}}k[2], \dots, R_{\mathrm{OR}}k[w] \leqslant_{\mathrm{ppp}} R$. Similarly, if $R\in\ensuremath{\mathrm{NAND}}conj$ has width $w$, then $R_{\mathrm{NAND}}k[2], \dots, R_{\mathrm{NAND}}k[w] \leqslant_{\mathrm{ppp}} R$.\qed \end{lemma} Given tuples $\overline{a}, \overline{b}\in \ensuremath{\{0,1\}}^r\!$, we write $\overline{a}\leqslant \overline{b}$ if $a_i\leqslant b_i$ for all $i\in [1,r]$. If $\overline{a}\leqslant \overline{b}$ and $\overline{a} \neq \overline{b}$, we write $\overline{a} < \overline{b}$. We say that a relation $R\subseteq \ensuremath{\{0,1\}}^r$ is \emph{monotone} if, whenever $\overline{a}\in R$ and $\overline{a}\leqslant \overline{b}$, then $\overline{b}\in R$. We say that $R$ is \emph{antitone} if, whenever $\overline{a}\in R$ and $\overline{b}\leqslant \overline{a}$, then $\overline{b}\in R$. Clearly, $R$ is monotone if, and only if, $\bwcomp{R}$ is antitone. Call a relation \emph{pseudo-monotone} (respectively, \emph{pseudo-antitone}) if its restriction to non-constant columns is monotone (respectively, antitone). The following is a consequence of results in \cite[Chapter~7.1.1]{KnuXXXX:TAOCPv4A}. \begin{proposition} \label{prop:OR-monotone} A relation $R\subseteq \ensuremath{\{0,1\}}^r$ is in \ensuremath{\mathrm{OR}}conj{} (respectively, \ensuremath{\mathrm{NAND}}conj) if, and only if, it is pseudo-monotone (respectively, pseudo-antitone).\qed \end{proposition} \section{Simulating equality} \label{sec:SimEq} An important ingredient in bounded-degree dichotomy theorems \cite{CLXxxxx:Complex-numCSP} is expressing equality using constraints from a language that does not necessarily include the equality relation. A constraint language $\Gamma$ is said to \emph{simulate} the $k$-ary equality relation $R_{=}k$ if, for some $\ell \geqslant k$, there is a $(\Gamma \cup \ensuremath{\Gamma_\mathrm{\!pin}})$-CSP instance $I$ with variables $x_1, \dots, x_\ell$ that has exactly $m\geqslant 1$ satisfying assignments $\sigma$ with $\sigma(x_1) = \dots = \sigma(x_k) = 0$, exactly $m$ with $\sigma(x_1) = \dots = \sigma(x_k) = 1$ and no other satisfying assignments. If, further, the degree of $I$ is $d$ and the degree of each variable $x_1, \dots, x_k$ is at most $d-1$, we say that $\Gamma$ \emph{$d$-simulates} $R_{=}k$. We say that $\Gamma$ \emph{$d$-simulates equality} if it $d$-simulates $R_{=}k$ for all $k\geqslant 2$. The point is that, if $\Gamma$ $d$-simulates equality, we can express the constraint $y_1 = \dots = y_r$ in $\Gamma \cup \ensuremath{\Gamma_\mathrm{\!pin}}$ and then use each $y_i$ in one further constraint, while still having an instance of degree $d$. The variables $x_{k+1}, \dots, x_\ell$ in the definition function as auxiliary variables and are not used in any other constraint. Simulating equality makes degree bounds moot. \begin{proposition} \label{prop:bound-unbound} If $\Gamma$ $d$-simulates equality, then $\ensuremath{\#\mathrm{CSP}}(\Gamma) \leqslant_{\mathrm{AP}} \ensuremath{\#\mathrm{CSP}}d(\Gamma \cup \ensuremath{\Gamma_\mathrm{\!pin}})$.\qed \end{proposition} We now investigate which relations simulate equality. \begin{lemma} \label{lemma:proj-eq} $R\subseteq\ensuremath{\{0,1\}}^r$ 3-simulates equality if $R_{=}\leqslant_{\mathrm{ppp}} R$, $R_{\neq}\leqslant_{\mathrm{ppp}} R$ or $R_{\imp}\leqslant_{\mathrm{ppp}} R$. \end{lemma} \begin{proof} For each $k\geqslant 2$, we show how to 3-simulate $R_{=}k$.
We may assume without loss of generality that the ppp-definition of $R_{=}$, $R_{\neq}$ or $R_{\imp}$ from $R$ involves applying the identity permutation to the columns, pinning columns 3 to $3+p-1$ inclusive to zero, pinning columns $3+p$ to $3+p+q-1$ inclusive to one (that is, pinning $p\geqslant 0$ columns to zero and $q\geqslant 0$ to one) and then projecting away all but the first two columns. Suppose first that $R_{=}\leqslant_{\mathrm{ppp}} R$ or $R_{\imp}\leqslant_{\mathrm{ppp}} R$. $R$ must contain $\alpha\geqslant 1$ tuples that begin $000^p1^q$, $\beta\geqslant 0$ that begin $010^p1^q$ and $\gamma\geqslant 1$ that begin $110^p1^q$, with $\beta=0$ unless we are ppp-defining $R_{\imp}$. We consider, first, the case where $\alpha=\gamma$, and show that we can 3-simulate $R_{=}k$, expressing the constraint $R_{=}k(x_1, \dots, x_k)$ with the constraints \begin{equation*} R(x_1 x_2 0^p 1^q *), \ R(x_2 x_3 0^p 1^q *), \dots, \ R(x_{k-1} x_k 0^p 1^q *), \ R(x_k x_1 0^p 1^q *) \,, \end{equation*} where $*$ denotes a fresh $(r-2-p-q)$-tuple of variables in each constraint. These constraints are equivalent to $x_1 = \dots = x_k = x_1$ or to $x_1\rightarrow \dots \rightarrow x_k \rightarrow x_1$ so constrain the variables $x_1, \dots, x_k$ to have the same value, as required. Every variable appears at most twice and there are $\alpha^k$ solutions to these constraints that put $x_1=\dots=x_k=0$, $\gamma^k=\alpha^k$ solutions with $x_1=\dots=x_k=1$ and no other solutions. Hence, $R$ 3-simulates $R_{=}k$, as required. We now show, by induction on $r$, that we can 3-simulate $R_{=}k$ even in the case that $\alpha\neq\gamma$. For the base case, $r=2$, we have $\alpha=\gamma=1$ and we are done. For the inductive step, let $r>2$ and assume, w.l.o.g.\@ that $\alpha>\gamma$ ($\alpha<\gamma$ is symmetric). In particular, we have $\alpha\geqslant 2$, so there are distinct tuples $000^p1^q\overline{a}$, and $000^p1^q\overline{b}$ and $110^p1^q\overline{c}$ in $R$. Choose $j$ such that $a_j\neq b_j$. Pinning the $(2+p+q+j)$th column of $R$ to $c_j$ and projecting out the resulting constant column gives a relation $R'$ of arity $r-1$ containing at least one tuple beginning $000^p1^q$ and at least one beginning $110^p1^q$: by the inductive hypothesis, $R'$ 3-simulates $R_{=}k$. Finally, we consider the case that $R_{\neq}\leqslant_{\mathrm{ppp}} R$. $R$ contains $\alpha\geqslant 1$ tuples beginning $010^p1^q$ and $\beta\geqslant 1$ beginning $100^p1^q$. We express the constraint $R_{=}k(x_1, \dots, x_k)$ by introducing fresh variables $y_1, \dots, y_k$ and using the constraints \begin{equation*} \begin{array}{ccccc} R(x_1 y_1 0^p1^q*), & R(x_2 y_2 0^p1^q*), & \ldots, & R(x_{k-1} y_{k-1} 0^p1^q*), & R(x_k y_k 0^p1^q*), \\ R(y_1 x_2 0^p1^q*), & R(y_2 x_3 0^p1^q*), & \ldots, & R(y_{k-1} x_k 0^p1^q*), & R(y_k x_1 0^p1^q*)\,. \end{array} \end{equation*} There are $\alpha^k \beta^k$ solutions when $x_1 = \dots = x_k = 0$ (and $y_1 = \dots = y_k = 1$) and $\beta^k \alpha^k$ solutions when the $x$s are~1 and the $y$s are~0. There are no other solutions and no variable is used more than twice. \end{proof} For $c\in \ensuremath{\{0,1\}}$, an $r$-ary relation is \emph{$c$-valid} if it contains the tuple $c^r\!$. \begin{lemma} \label{lemma:valid-eq} Let $r\geqslant 2$ and let $R\subseteq \ensuremath{\{0,1\}}^r$ be 0- and 1-valid but not complete. Then $R$ 3-simulates equality.\qed \end{lemma} In the following lemma, we do not require $R$ and $R'$ to be distinct. 
The technique is to assert $x_1=\dots=x_k$ by simulating the formula $\ensuremath{\mathrm{OR}}(x_1,y_1) \wedge \ensuremath{\mathrm{NAND}}(y_1,x_2) \wedge \ensuremath{\mathrm{OR}}(x_2,y_2) \wedge \ensuremath{\mathrm{NAND}}(y_2,x_3) \wedge \cdots \wedge \ensuremath{\mathrm{OR}}(x_k,y_k) \wedge \ensuremath{\mathrm{NAND}}(y_k, x_1)$. \begin{lemma} \label{lemma:OR-NAND-eq} If $R_{\mathrm{OR}}\leqslant_{\mathrm{ppp}} R$ and $R_{\mathrm{NAND}}\leqslant_{\mathrm{ppp}} R'\!$, then $\{R,R'\}$ 3-simulates equality.\qed \end{lemma} \section{Classifying relations} \label{sec:Trichotomy} We are now ready to prove that every Boolean relation $R$ is in \ensuremath{\mathrm{OR}}conj{}, in \ensuremath{\mathrm{NAND}}conj{} or 3-simulates equality. If $R_0$ and $R_1$ are $r$-ary, let $R_0+R_1 = \{0\overline{a}\mid \overline{a}\in R_0\} \cup \{1\overline{a}\mid \overline{a}\in R_1\}$. \begin{lemma} \label{lemma:OR-OR} Let $R_0, R_1\in\ensuremath{\mathrm{OR}}conj$ and let $R=R_0+R_1$. Then $R\in\ensuremath{\mathrm{OR}}conj$, $R\in\ensuremath{\mathrm{NAND}}conj$ or $R$ 3-simulates equality. \end{lemma} \begin{proof} Let $R_0$ and $R_1$ have arity $r$. We may assume that $R$ has no constant columns. If it does, let $R'$ be the relation that results from projecting them away. $R' = R'_0 + R'_1$, where both $R'_0$ and $R'_1$ are $\ensuremath{\mathrm{OR}}conj$ relations. By the remainder of the proof, $R'\in\ensuremath{\mathrm{OR}}conj$, $R'\in \ensuremath{\mathrm{NAND}}conj$ or $R'$ 3-simulates equality. Re-instating the constant columns does not alter this. For $R$ without constant columns, there are two cases. \noindent \emph{{\bf Case 1.} $R_0\subseteq R_1$.} Suppose $R_i$ is defined by the normalized \ensuremath{\mathrm{OR}}conj{} formula $\phi_i$ in variables $x_2, \dots, x_{r+1}$. Then $R$ is defined by the formula \begin{equation} \phi_0 \vee (x_1=1 \wedge \phi_1) \equiv (\phi_0 \vee x_1=1) \wedge (\phi_0 \vee \phi_1) \equiv (\phi_0 \vee x_1=1) \wedge \phi_1\,, \label{eq:OR-conj-formula} \end{equation} where the second equivalence is because $\phi_0$ implies $\phi_1$, because $R_0\subseteq R_1$. $R_1$ has no constant column, since such a column would have to be constant with the same value in $R_0$, contradicting our assumption that $R$ has no constant columns. There are two cases. \noindent \emph{{\bf Case 1.1.} $R_0$ has no constant columns.} $x_1=1$ is equivalent to $\ensuremath{\mathrm{OR}}(x_1)$ and $\phi_0$ contains no pins, so we can rewrite $\phi_0 \vee x_1=1$ in CNF. Therefore, (\ref{eq:OR-conj-formula}) is \ensuremath{\mathrm{OR}}conj{}. \noindent \emph{{\bf Case 1.2.} $R_0$ has a constant column.} Suppose first that the $k$th column of $R_0$ is constant-zero. $R_1$ has no constant columns, so the projection of $R$ onto its first and $(k+1)$st columns gives the relation $R_{\leftarrow}$, and $R$ 3-simulates equality by Lemma~\ref{lemma:proj-eq}. Otherwise, all constant columns of $R_0$ contain ones. Then $\phi_0$ is in CNF, since every pin $x_i=1$ in $\phi_0$ can be written $\ensuremath{\mathrm{OR}}(x_i)$. Thus, we can write $\phi_0 \vee x_1=1$ in CNF, so (\ref{eq:OR-conj-formula}) defines an \ensuremath{\mathrm{OR}}conj{} relation. \noindent \emph{{\bf Case 2.} $R_0\nsubseteq R_1$.} We will show that $R$ 3-simulates equality or is in \ensuremath{\mathrm{NAND}}conj{}. We consider two cases (recall that no relation has width~1). \noindent \emph{{\bf Case 2.1.} At least one of $R_0$ and $R_1$ has positive width.} There are two sub-cases. 
\noindent \emph{{\bf Case 2.1.1.} $R_1$ has a constant column.} Suppose the $k$th column of $R_1$ is constant. If the $k$th column of $R_0$ is also constant, then the projection of $R$ to its first and $(k+1)$st columns is either equality or disequality (since the corresponding column of $R$ is not constant), so $R$ 3-simulates equality by Lemma~\ref{lemma:proj-eq}. Otherwise, if the projection of $R$ to the first and $(k+1)$st columns is $R_{\imp}$, then $R$ 3-simulates equality by Lemma~\ref{lemma:proj-eq}. Otherwise, that projection must be $R_{\mathrm{NAND}}$. By Lemma~\ref{lemma:ORconj-OR} and the assumption of Case~2.1, $R_{\mathrm{OR}}$ is ppp-definable in at least one of $R_0$ and $R_1$, so $R$ 3-simulates equality by Lemma~\ref{lemma:OR-NAND-eq}. \noindent \emph{{\bf Case 2.1.2.} $R_1$ has no constant columns.} By Proposition~\ref{prop:OR-monotone}, $R_1$ is monotone. Let $\overline{a}\in R_0\setminus R_1$: by applying the same permutation to the columns of $R_0$ and $R_1$, we may assume that $\overline{a} = 0^\ell 1^{r-\ell}$. We must have $\ell\geqslant 1$ as every non-empty $r$-ary monotone relation contains the tuple $1^r\!$. Let $\overline{b}\in R_1$ be a tuple such that $a_i=b_i$ for a maximal initial segment of $[1,r]$. By monotonicity of $R_1$, we may assume that $\overline{b} = 0^k 1^{r-k}$. Further, we must have $k<\ell$, since, otherwise, we would have $\overline{b}<\overline{a}$, contradicting our choice of $\overline{a}\notin R_1$. Now, consider the relation $R' = \{a_0 a_1\dots a_{\ell-k}\mid a_00^ka_1\dots a_{\ell-k}1^{r-\ell} \in R\}$, which is the result of pinning columns 2 to $(k+1)$ of $R$ to zero and columns $(\ell+2)$ to $(r+1)$ to one and discarding the resulting constant columns. $R'$ contains $0^{\ell-k+1}$ and $1^{\ell-k+1}$ but is not complete, since $10^{\ell-k}\notin R'\!$. By Lemma~\ref{lemma:valid-eq}, $R'$ and, hence, $R$ 3-simulates equality. \noindent \emph{{\bf Case 2.2.} Both $R_0$ and $R_1$ have width zero,} i.e., are complete relations, possibly padded with constant columns. For $i \in [1,r]$, let $R'_i$ be the relation obtained from $R$ by projecting onto its first and $(i+1)$st columns. Since $R$ has no constant columns, $R'_i$ is either complete, $R_{=}$, $R_{\neq}$, $R_{\mathrm{OR}}$, $R_{\mathrm{NAND}}$, $R_{\imp}$ or $R_{\leftarrow}$. If there is a $k$ such that $R'_k$ is $R_{=}$, $R_{\neq}$, $R_{\imp}$ or $R_{\leftarrow}$, then $R_{=}$, $R_{\neq}$ or $R_{\imp}$ is ppp-definable in $R$ and hence $R$ 3-simulates equality by Lemma~\ref{lemma:proj-eq}. If there are $k_1$ and $k_2$ such that $R'_{k_1} = R_{\mathrm{OR}}$ and $R'_{k_2} = R_{\mathrm{NAND}}$, then $R$ 3-simulates equality by Lemma~\ref{lemma:OR-NAND-eq}. It remains to consider the following two cases. \noindent \emph{{\bf Case 2.2.1.} Each $R'_i$ is either $R_{\mathrm{OR}}$ or complete.} $R_1$ must be complete, which contradicts the assumption that $R_0\not\subseteq R_1$. \noindent \emph{{\bf Case 2.2.2.} Each $R'_i$ is either $R_{\mathrm{NAND}}$ or complete.} $R_0$ must be complete. Let $I = \{i\mid R'_i=R_{\mathrm{NAND}}\}$. Then $R = \bigwedge_{i\in I} \ensuremath{\mathrm{NAND}}(x_1,x_{i+1})$, so $R\in \ensuremath{\mathrm{NAND}}conj$. \end{proof} Using the duality between \ensuremath{\mathrm{OR}}conj{} and \ensuremath{\mathrm{NAND}}conj{} relations, we can prove the corresponding result for $R_0, R_1\in \ensuremath{\mathrm{NAND}}conj$. The proof of the classification is completed by a simple induction on the arity of $R$.
Decomposing $R$ as $R_0+R_1$ and assuming inductively that $R_0$ and $R_1$ are of one of the stated types, we use the previous results in this section and Lemma~\ref{lemma:OR-NAND-eq} to show that $R$ is. \begin{theorem} \label{thrm:trichotomy} Every Boolean relation is \ensuremath{\mathrm{OR}}conj{} or \ensuremath{\mathrm{NAND}}conj{} or 3-simulates equality.\qed \end{theorem} \section{Complexity} \label{sec:Complexity} The complexity of approximating $\ensuremath{\#\mathrm{CSP}}(\Gamma)$ where the degree of instances is unbounded is given by Dyer, Goldberg and Jerrum \cite[Theorem~3]{DGJ2007:Bool-approx}. \begin{theorem} \label{thrm:unbounded} Let $\Gamma$ be a Boolean constraint language. \begin{itemize} \itemspacing \item If every $R\in\Gamma$ is affine, then $\ensuremath{\#\mathrm{CSP}}(\Gamma)\in \ensuremath{\mathbf{FP}}$. \item Otherwise, if $\Gamma\subseteq\ensuremath{\text{IM-conj}}$, then $\ensuremath{\#\mathrm{CSP}}(\Gamma) \equiv_{\mathrm{AP}} \ensuremath{\#\mathrm{BIS}}$. \item Otherwise, $\ensuremath{\#\mathrm{CSP}}(\Gamma) \equiv_{\mathrm{AP}} \ensuremath{\#\mathrm{SAT}}$. \end{itemize} \end{theorem} Working towards our classification of the approximation complexity of $\ensuremath{\#\mathrm{CSP}}(\Gamma)$, we first deal with subcases. The \ensuremath{\text{IM-conj}}{} case and \ensuremath{\mathrm{OR}}conj{}/\ensuremath{\mathrm{NAND}}conj{} cases are based on links between those classes of relations and the problems of counting independent sets in bipartite and general graphs, respectively\cite{DGJ2007:Bool-approx, DGGJ2004:Approx}, the latter extended to hypergraphs. \begin{proposition} \label{prop:im-bis} If $\Gamma \subseteq \ensuremath{\text{IM-conj}}$ contains at least one non-affine relation, then $\ensuremath{\#\mathrm{CSP}}d(\Gamma\cup \ensuremath{\Gamma_\mathrm{\!pin}}) \equiv_{\mathrm{AP}} \ensuremath{\#\mathrm{BIS}}$ for all $d\geqslant 3$. \qed \end{proposition} \begin{proposition} \label{prop:HIS-to-ORconj} Let $R$ be an \ensuremath{\mathrm{OR}}conj{} or \ensuremath{\mathrm{NAND}}conj{} relation of width~$w$. Then, for $d\geqslant 2$, $\ensuremath{\#w\mathrm{\text{-}HIS}}d{} \leqslant_{\mathrm{AP}} \ensuremath{\#\mathrm{CSP}}d(\{R\}\cup \ensuremath{\Gamma_\mathrm{\!pin}})$.\qed \end{proposition} \begin{proposition} \label{prop:ORconj-to-HIS} Let $R$ be an \ensuremath{\mathrm{OR}}conj{} or \ensuremath{\mathrm{NAND}}conj{} relation of width~$w$. Then, for $d\geqslant 2$, $\ensuremath{\#\mathrm{CSP}}d(\{R\}\cup \ensuremath{\Gamma_\mathrm{\!pin}}) \leqslant_{\mathrm{AP}} \ensuremath{\#w\mathrm{\text{-}HIS}}d[kd]$, where $k$ is the greatest number of times that any variable appears in the normalized formula defining $R$. \qed \end{proposition} We now give the complexity of approximating $\ensuremath{\#\mathrm{CSP}}d(\Gamma\cup \ensuremath{\Gamma_\mathrm{\!pin}})$ for $d\geqslant 3$. \begin{theorem} \label{theorem:complexity} Let $\Gamma$ be a Boolean constraint language and let $d\geqslant 3$. \begin{itemize} \itemspacing \item If every $R\in\Gamma$ is affine, then $\ensuremath{\#\mathrm{CSP}}d(\Gamma\cup \ensuremath{\Gamma_\mathrm{\!pin}}) \in \ensuremath{\mathbf{FP}}$. \item Otherwise, if $\Gamma\subseteq \ensuremath{\text{IM-conj}}$, then $\ensuremath{\#\mathrm{CSP}}d(\Gamma\cup \ensuremath{\Gamma_\mathrm{\!pin}}) \equiv_{\mathrm{AP}} \ensuremath{\#\mathrm{BIS}}$. 
\item Otherwise, if $\Gamma\subseteq \ensuremath{\mathrm{OR}}conj$ or $\Gamma\subseteq \ensuremath{\mathrm{NAND}}conj$, then let $w$ be the greatest width of any relation in $\Gamma$ and let $k$ be the greatest number of times that any variable appears in the normalized formulae defining the relations of $\Gamma$. Then $\ensuremath{\#w\mathrm{\text{-}HIS}}d \leqslant_{\mathrm{AP}} \ensuremath{\#\mathrm{CSP}}d(\Gamma\cup \ensuremath{\Gamma_\mathrm{\!pin}}) \leqslant_{\mathrm{AP}} \ensuremath{\#w\mathrm{\text{-}HIS}}d[kd]$. \item Otherwise, $\ensuremath{\#\mathrm{CSP}}d(\Gamma\cup \ensuremath{\Gamma_\mathrm{\!pin}}) \equiv_{\mathrm{AP}} \ensuremath{\#\mathrm{SAT}}$. \end{itemize} \end{theorem} \begin{proof} The affine case is immediate from Theorem~\ref{thrm:unbounded}. ($\Gamma\cup \ensuremath{\Gamma_\mathrm{\!pin}}$ is affine if, and only if, $\Gamma$ is.) Otherwise, if $\Gamma\subseteq \ensuremath{\text{IM-conj}}$ and some $R\in \Gamma$ is not affine, then $\ensuremath{\#\mathrm{CSP}}d(\Gamma\cup \ensuremath{\Gamma_\mathrm{\!pin}}) \equiv_{\mathrm{AP}} \ensuremath{\#\mathrm{BIS}}$ by Proposition~\ref{prop:im-bis}. Otherwise, if $\Gamma\subseteq \ensuremath{\mathrm{OR}}conj$ or $\Gamma\subseteq \ensuremath{\mathrm{NAND}}conj$, then $\ensuremath{\#w\mathrm{\text{-}HIS}}d \leqslant_{\mathrm{AP}} \ensuremath{\#\mathrm{CSP}}d(\Gamma\cup \ensuremath{\Gamma_\mathrm{\!pin}}) \leqslant_{\mathrm{AP}} \ensuremath{\#w\mathrm{\text{-}HIS}}d[kd]$ by Propositions \ref{prop:HIS-to-ORconj} and~\ref{prop:ORconj-to-HIS}. Finally, suppose that $\Gamma$ is not affine, $\Gamma\nsubseteq \ensuremath{\text{IM-conj}}$, $\Gamma\nsubseteq \ensuremath{\mathrm{OR}}conj$ and $\Gamma\nsubseteq \ensuremath{\mathrm{NAND}}conj$. Since $(\Gamma\cup \ensuremath{\Gamma_\mathrm{\!pin}})$ is neither affine nor a subset of \ensuremath{\text{IM-conj}}{}, we have $\ensuremath{\#\mathrm{CSP}}(\Gamma\cup \ensuremath{\Gamma_\mathrm{\!pin}})\equiv_{\mathrm{AP}} \ensuremath{\#\mathrm{SAT}}$ by Theorem~\ref{thrm:unbounded} so, if we can show that $\Gamma$ $d$-simulates equality, then $\ensuremath{\#\mathrm{CSP}}d(\Gamma\cup \ensuremath{\Gamma_\mathrm{\!pin}}) \equiv_{\mathrm{AP}} \ensuremath{\#\mathrm{CSP}}(\Gamma\cup \ensuremath{\Gamma_\mathrm{\!pin}})$ by Proposition~\ref{prop:bound-unbound} and we are done. If $\Gamma$ contains a relation $R$ that is neither \ensuremath{\mathrm{OR}}conj{} nor \ensuremath{\mathrm{NAND}}conj{}, then $R$ 3-simulates equality by Theorem~\ref{thrm:trichotomy}. Otherwise, $\Gamma$ must contain distinct relations $R_1\in\ensuremath{\mathrm{OR}}conj$ and $R_2\in\ensuremath{\mathrm{NAND}}conj$ that are non-affine and so have width at least two. So $\Gamma$ 3-simulates equality by Lemma~\ref{lemma:OR-NAND-eq}. \end{proof} Unless $\ensuremath{\mathbf{NP}}=\ensuremath{\mathbf{RP}}$, there is no FPRAS for counting independent sets in graphs of maximum degree at least~25 \cite{DFJ2002:IS-sparse}, and, therefore, no FPRAS for $\ensuremath{\#w\mathrm{\text{-}HIS}}d[d]$ with $w\geqslant 2$ and $d\geqslant 25$. Further, since $\ensuremath{\#\mathrm{SAT}}$ is complete for $\ensuremath{\#\mathbf{P}}$ under AP-reductions \cite{DGGJ2004:Approx}, $\ensuremath{\#\mathrm{SAT}}$ cannot have an FPRAS unless $\ensuremath{\mathbf{NP}}=\ensuremath{\mathbf{RP}}$. From Theorem~\ref{theorem:complexity} above we have the following corollary. \begin{corollary} \label{cor:main} Let $\Gamma$ be a Boolean constraint language and let $d\geqslant 25$.
\begin{itemize} \itemspacing \item If every $R\in\Gamma$ is affine, then $\ensuremath{\#\mathrm{CSP}}d(\Gamma\cup \ensuremath{\Gamma_\mathrm{\!pin}}) \in \ensuremath{\mathbf{FP}}$. \item Otherwise, if $\Gamma\subseteq \ensuremath{\text{IM-conj}}$, then $\ensuremath{\#\mathrm{CSP}}d(\Gamma\cup \ensuremath{\Gamma_\mathrm{\!pin}}) \equiv_{\mathrm{AP}} \ensuremath{\#\mathrm{BIS}}$. \item Otherwise, there is no FPRAS for $\ensuremath{\#\mathrm{CSP}}d(\Gamma\cup \ensuremath{\Gamma_\mathrm{\!pin}})$, unless $\ensuremath{\mathbf{NP}}=\ensuremath{\mathbf{RP}}$. \qEd \end{itemize} \end{corollary} $\Gamma \cup \ensuremath{\Gamma_\mathrm{\!pin}}$ is affine (respectively, in \ensuremath{\mathrm{OR}}conj{} or in \ensuremath{\mathrm{NAND}}conj{}) if, and only if, $\Gamma$ is, so the case for large-degree instances ($d\geqslant 25$) corresponds exactly in complexity to the unbounded case \cite{DGJ2007:Bool-approx}. The case for lower degree bounds is more complex. To put Theorem~\ref{theorem:complexity} in context, we summarize the known approximability of $\ensuremath{\#w\mathrm{\text{-}HIS}}d$, parameterized by $d$ and $w$. The case $d=1$ is clearly in \ensuremath{\mathbf{FP}}{} (Theorem~\ref{thrm:degree-1}) and so is the case $d=w=2$, which corresponds to counting independent sets in graphs of maximum degree two. For $d=2$ and width $w\geqslant 3$, Dyer and Greenhill have shown that there is an FPRAS for $\ensuremath{\#w\mathrm{\text{-}HIS}}d$ \cite{DG2000:IS-Markov}. For $d=3$, they have shown that there is an FPRAS if the width $w$ is at most~3. For larger width, the approximability of $\ensuremath{\#w\mathrm{\text{-}HIS}}d[3]$ is still not known. With the width restricted to $w=2$ (normal graphs), Weitz has shown that, for degree $d\in \{3,4,5\}$, there is a deterministic approximation scheme that runs in polynomial time (a PTAS) \cite{Wei2006:IS-threshold}. This extends a result of Luby and Vigoda, who gave an FPRAS for $d\leqslant 4$ \cite{LV1999:Convergence}. For $d>5$, approximating $\ensuremath{\#w\mathrm{\text{-}HIS}}d$ becomes considerably harder. More precisely, Dyer, Frieze and Jerrum have shown that for $d=6$ the Monte Carlo Markov chain technique is likely to fail, in the sense that ``cautious'' Markov chains are provably slowly mixing \cite{DFJ2002:IS-sparse}. They also showed that, for $d=25$, there can be no polynomial-time algorithm for approximate counting, unless $\ensuremath{\mathbf{NP}}=\ensuremath{\mathbf{RP}}$. These results imply that for $d\in \{6,\dots,24\}$ and $w\geqslant 2$ the Monte Carlo Markov chain technique is likely to fail and for $d\geqslant 25$ and $w\geqslant 2$, there can be no FPRAS unless $\ensuremath{\mathbf{NP}}=\ensuremath{\mathbf{RP}}$. Table~\ref{tab:aHIS-complexity} summarizes the results.
\begin{table}[t] \centering\renewcommand{\arraystretch}{1.15} \begin{tabular}{|c|c|l|} \hline Degree $d$ & Width $w$ & Approximability of $\ensuremath{\#w\mathrm{\text{-}HIS}}d[d]$ \\ \hline $1$ & $\geqslant 2$ & \ensuremath{\mathbf{FP}} \\ $2$ & $2$ & \ensuremath{\mathbf{FP}} \\ $2$ & $\geqslant 3$ & FPRAS~\cite{DG2000:IS-Markov} \\ $3$ & $2,3$ & FPRAS~\cite{DG2000:IS-Markov} \\ $3,4,5$ & $2$ & PTAS~\cite{Wei2006:IS-threshold} \\ $6,\dots,24$ & $\geqslant 2$ & The MCMC method is likely to fail~\cite{DFJ2002:IS-sparse} \\ $\geqslant 25$ & $\geqslant 2$ & No FPRAS unless $\ensuremath{\mathbf{NP}}=\ensuremath{\mathbf{RP}}$~\cite{DFJ2002:IS-sparse} \\ \hline \end{tabular} \caption{Approximability of $\ensuremath{\#w\mathrm{\text{-}HIS}}d[d]$ (still open for all other values of $d$ and $w$).} \label{tab:aHIS-complexity} \end{table} Returning to bounded-degree \ensuremath{\#\mathrm{CSP}}{}, the case $d=2$ seems to be rather different to degree bounds three and higher. This is also the case for decision CSP --- recall that degree-$d$ $\ensuremath{\mathrm{CSP}}(\Gamma\cup\ensuremath{\Gamma_\mathrm{\!pin}})$ has the same complexity as unbounded-degree $\ensuremath{\mathrm{CSP}}(\Gamma\cup\ensuremath{\Gamma_\mathrm{\!pin}})$ for all $d\geqslant 3$ \cite{DF2003:bdeg-gensat}, while degree-2 $\ensuremath{\mathrm{CSP}}(\Gamma\cup\ensuremath{\Gamma_\mathrm{\!pin}})$ is often easier than the unbounded-degree case \cite{DF2003:bdeg-gensat,Fed2001:Fanout} but the complexity of degree-2 $\ensuremath{\mathrm{CSP}}(\Gamma\cup\ensuremath{\Gamma_\mathrm{\!pin}})$ is still open for some $\Gamma\!$. Our key techniques for determining the complexity of $\ensuremath{\#\mathrm{CSP}}d(\Gamma \cup \ensuremath{\Gamma_\mathrm{\!pin}})$ for $d\geqslant 3$ were the 3-simulation of equality and Theorem~\ref{thrm:trichotomy}, which says that every Boolean relation is in \ensuremath{\mathrm{OR}}conj{}, in \ensuremath{\mathrm{NAND}}conj{} or 3-simulates equality. However, it seems that not all relations that 3-simulate equality also 2-simulate equality, so the corresponding classification of relations does not appear to hold. It seems that different techniques will be required for the degree-2 case. For example, it is possible that there is no FPRAS for $\ensuremath{\#\mathrm{CSP}}d[3](\Gamma\cup\ensuremath{\Gamma_\mathrm{\!pin}})$ except when $\Gamma$ is affine. However, Bubley and Dyer have shown that there is an FPRAS for degree-2 \ensuremath{\#\mathrm{SAT}}{}, even though the exact counting problem is \ensuremath{\#\mathbf{P}}{}-complete \cite{BD1997:Graph-orient}. This shows that there is a class $\mathcal{C}$ of constraint languages for which $\ensuremath{\#\mathrm{CSP}}d[2](\Gamma \cup \ensuremath{\Gamma_\mathrm{\!pin}})$ has an FPRAS for every $\Gamma \in \mathcal{C}$ but for which no exact polynomial-time algorithm is known. We leave the complexity of degree-2 \ensuremath{\#\mathrm{CSP}}{} and of \ensuremath{\#\mathrm{BIS}}{} and the various parameterized versions of the counting hypergraph independent sets problem as open questions. \end{document}
\begin{document} \begin{center} \textbf{\LARGE{Necessary conditions for a minimum in classical calculus of variations problems in the presence of various degenerations}} \end{center} \begin{center} \textbf{M. J. Mardanov}$^{1}$, \textbf{T. K. Melikov}$^{2}$, \textbf{S. T. Melik}$^{3}$ \end{center} \begin{center} $^1$\textit{IMM of ANAS, Vahabzadeh str. 9, Az 1141, Baku; Baku State University, Baku 1148, Z.Khalilov str. 23.} \end{center} \begin{center} \noindent E-mail: [email protected] $^2$\textit{IMM of ANAS, Vahabzadeh str. 9, Az 1141, Baku; Institute of Control Systems of ANAS, Baku, Azerbaijan.} \noindent E-mail: [email protected] \noindent $^3$\textit{Baku Higher Oil School, Baku, Azerbaijan; IMM of ANAS, Vahabzadeh str. 9, Az 1141, Baku.} \noindent E-mail: [email protected] \end{center} \textbf{Abstract. } We offer a method for studying extremals in the classical calculus of variations in the presence of various degenerations. The method is based on the introduction of Weierstrass-type variations characterized by a numerical parameter. To obtain more effective results, the introduced variations are used in two forms: as variations on the right of a given point and as variations on the left of the same point. The study is carried out under the assumption that the Weierstrass condition and the Legendre condition degenerate along the considered extremal, i.e. they are fulfilled as equalities at separate points or on some intervals. Two new types of necessary conditions, of equality type and of inequality type, are obtained both for a strong and for a weak local minimum. Specific examples and a counterexample show that some of the necessary minimum conditions obtained in this article strengthen and refine the corresponding known results in this direction. \textbf{Keywords:} calculus of variations, strong (weak) local minimum, necessary conditions of equality (inequality) type, extremal, degeneration at a point (on an interval) \section{Introduction and Problem Statement} We consider the following vector problem of the classical calculus of variations: \begin{equation} \label{GrindEQ__1_1_} J\left(x\left(\cdot \right)\right)=\int _{t_{0} }^{t_{1} }L\left(t, x\left(t\right), \dot{x}\left(t\right)\right)\, dt\to \min _{x(\cdot )} , \end{equation} \begin{equation} \label{GrindEQ__1_2_} x\left(t_{0} \right)=x_{0} , \quad x\left(t_{1} \right)=x_{1} , \quad x_{0} , x_{1} \in R^{n} , \end{equation} where $R^{n}$ is the $n$-dimensional Euclidean space, $L\left(\cdot \right)$ is a given function, and $x_{0} , x_{1} , t_{0} , t_{1}$ are given points. The function $L\left(t,x,y\right):\left(a,b\right)\times R^{n} \times R^{n} \to R:=\left(-\infty , +\infty \right)$, called the integrand, is assumed to be continuously differentiable with respect to the totality of its variables, where $\left(a, b\right)$ is an interval and $\left[t_{0} , t_{1} \right]\subset \left(a,b\right)$. The sought-for function $x\left(\cdot \right):\left[t_{0} , t_{1} \right]=:I\to R^{n}$ is a piecewise-smooth vector function, i.e.
it is continuous, and its derivative is continuous everywhere on $I$ except for finitely many points $\tau _{i} \in \left(t_{0} , t_{1} \right),\, i=\overline{1,m}$, at which the derivative $\dot{x}\left(\cdot \right)$ has discontinuities of the first kind (at the points $t_{0}$ and $t_{1}$ the derivative $\dot{x}\left(\cdot \right)$ has finite right-hand and left-hand values, respectively). The points $\tau _{i} ,\, i=\overline{1,m}$, are called angular points of the function $x\left(\cdot \right)$. We denote the set of all piecewise-smooth functions on $\left[t_{0} , t_{1} \right]$ by $KC^{1} \left(I, R^{n} \right)$. The functions $x\left(\cdot \right)\in KC^{1} \left(I, R^{n} \right)$ satisfying the boundary condition \eqref{GrindEQ__1_2_} are called admissible functions. Obviously, if $\bar{x}\left(\cdot \right)$ is a fixed admissible function, then for every $\theta \in \left[t_{0} , t_{1} \right)\, \left(\theta \in \left(t_{0} , t_{1} \right]\right)$ there exists a number $\alpha >0$ such that $\bar{x}\left(\cdot \right)$ is continuously differentiable on the semi-interval $\left[\theta , \theta +\alpha \right)\subset I\, \left(\left(\theta -\alpha , \theta \right]\subset I\right)$. We call this statement the property $P\left(\theta +; \bar{x}\left(\cdot \right), \alpha \right)$ $\left(P\left(\theta -; \bar{x}\left(\cdot \right), \alpha \right)\right)$ of the function $\bar{x}\left(\cdot \right)$, and we will use it below. The theory of the classical calculus of variations is set out, for example, in the papers and monographs [1, 4-6, 11, 15-17, 29, 30, 33, etc.], which give a detailed review of the main results obtained for problem (1.1), (1.2) and its various essential generalizations. Let us recall (see [11, p. ]) some notions from the classical calculus of variations. The admissible function $\bar{x}\left(\cdot \right)$ is said to be a strong (weak) local minimum in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_} if there exists a number $\bar{\delta }>0\, \left(\hat{\delta }>0\right)$ such that the inequality $J\left(x\left(\cdot \right)\right)\ge J\left(\bar{x}\left(\cdot \right)\right)$ holds for all admissible functions $x\left(\cdot \right)$ for which \[\left\| x\left(\cdot \right)-\bar{x}\left(\cdot \right)\right\| _{C\left(I,R^{n} \right)} \le \bar{\delta }\, \, \left(\max \left\{\left\| x\left(\cdot \right)-\bar{x}\left(\cdot \right)\right\| _{C\left(I,R^{n} \right)} ,\, \left\| \dot{x}\left(\cdot \right)-\dot{\bar{x}}\left(\cdot \right)\right\| _{L_{\infty } \left(I,R^{n} \right)} \right\}\le \hat{\delta }\right). \] In this case we say that the admissible function $\bar{x}\left(\cdot \right)$ affords a strong (weak) local minimum in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_} with a $\bar{\delta }\, \left(\hat{\delta }\right)$-neighborhood. Obviously, any strong local minimum is at the same time a weak one, but the converse is not always true (see \cite{20}). We also recall (see, e.g., \cite{4}) some known necessary conditions for a strong and a weak local minimum in the considered problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_}. Let $\{\tau\}\subset (t_{0}, t_1) $ be the set of angular points of the admissible function $\overline{x}(\cdot)$.
Then: (i) if $\bar{x}\left(\cdot \right)$ is a weak local minimum in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_}, then at the points $t\in I\setminus \{\tau\}$ it satisfies the Euler equation, i.e. for every $t\in I\setminus \{\tau\}$ we have the equality \begin{equation} \label{GrindEQ__1_3_} \frac{d}{dt} L_{\dot{x}} \left(t,\, \bar{x}\, \left(t\right),\, \dot{\bar{x}}\left(t\right)\right)=L_{x} \left(t,\, \bar{x}\, \left(t\right),\, \dot{\bar{x}}\left(t\right)\right), \end{equation} and also for every $t\in \{\tau\}$ the Weierstrass-Erdmann conditions are fulfilled along the function $\overline{x}(\cdot)$, i.e. the following equalities are valid \begin{equation} \label{GrindEQ__1_4_} \bar{L}_{\dot{x}} \left(t-\right)=\bar{L}_{\dot{x}} \left(t+\right),\, \, \bar{L}\left(t-\right)-\dot{\bar{x}}^{\, T} \left(t-\right)\, \bar{L}_{\dot{x}} \left(t-\right)=\bar{L}\left(t+\right)-\dot{\bar{x}}^{\, T} \left(t+\right)\bar{L}_{\dot{x}} \left(t+\right); \end{equation} (ii) if $\bar{x}\left(\cdot \right)$ is a strong local minimum in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_}, then the Weierstrass condition is fulfilled along it, i.e. for all $\xi \in R^n$ the following inequalities are valid: \[E\left(\bar{L}\right)\, \left(t,\, \xi \right)\ge 0,\, \, \, \forall t\in \left(t_{0} ,\, t_{1} \right)\backslash \left\{\tau \right\},\] \begin{equation} \label{GrindEQ__1_5_} E \left(\bar{L}\right)\, \left(t\pm ,\, \xi \right)\ge 0,\, \, \, \forall t\in \left\{\tau \right\},\, \, \, E\left(\bar{L}\right)\left(t_{0} +,\xi \right)\ge 0,\, \, \, E\left(\bar{L}\right)\, \left(t_{1} -,\, \xi \right)\ge 0. \end{equation} Here, $\bar{L}\left(t\right):=L\left(t,\, \bar{x}\left(t\right),\, \, \dot{\bar{x}}\left(t\right)\right)$, $\bar{L}_{y} \left(t\right):=L_{y} \left(t,\, \bar{x}\left(t\right),\, \, \dot{\bar{x}}\left(t\right)\right),\, \, y\in \left\{x,\, \dot{x}\right\}$, \begin{equation} \label{GrindEQ__1_6_} E\left(\bar{L}\right)\, \left(t,\, \xi \right):=E\left(L\right)\, \left(t,\, \bar{x}\left(t\right),\, \dot{\bar{x}}\, \left(t\right),\, \dot{\bar{x}}\, \left(t\right)+\xi \right)=L\left(t,\, \bar{x}\left(t\right),\, \dot{\bar{x}}\, \left(t\right)+\xi \, \right)-\, \bar{L}\left(t\right)\, -\bar{L}_{\dot{x}}^{\, T} \left(t\right)\, \xi , \end{equation} where the symbol $T$ denotes transposition and $E\left(L\right)\left(t,\, x,\, y,\, z\right)$ is the Weierstrass function for problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_}, determined in the form: \begin{equation} \label{GrindEQ__1_7_} E\left(L\right)\, \left(t,x,y,z\right)=L\left(t,x,z\right)-L\left(t,x,y\right)-L_{y}^{T} \left(t,x,y\right)\left(z-y\right). \end{equation} We underline that here and in what follows the symbol $F\left(t+\right)$ $\left(F\left(t-\right)\right)$ denotes the right (left) hand limit of the function $F\left(\cdot \right)$ at the point $t$; furthermore, the fulfillment of equality \eqref{GrindEQ__1_3_} for $t=t_{0} $ $\left(t=t_{1} \right)$ is understood in the sense of the right (left) hand limit at the point $t_{0} $ $\left(t_{1} \right)$. Following \cite{3, 21, 22}, we give a local modification of the necessary condition for a minimum \eqref{GrindEQ__1_5_}. Let $\bar{x}\, \left(\cdot \right)$ be a weak local minimum in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_}.
Then there exists a number $\delta >0$ for which the following inequalities are fulfilled: \begin{equation} \label{GrindEQ__1_8_} \begin{array}{l} {E\left(\bar{L}\right)\, \left(t,\, \xi \right)\ge 0,\, \, \, \, \forall t\in \left(t_{0} ,\, t_{1} \right)\backslash \left\{\tau \right\},\, \, \, \, E\left(\bar{L}\right)\, \left(t\pm ,\xi \right)\ge 0,\, \, \forall t\in \left\{\tau \right\},\, } \\ {E \left(\bar{L}\right)\, \left(t_{0} +,\, \xi \right)\ge 0,\, \, \, E \left(\bar{L}\right)\, \left(t_{1} -,\, \xi \right)\ge 0,\, \, \, \, \forall \xi \in B_{\delta } \left(0\right).} \end{array} \end{equation} Here and in what follows, the symbol $B_{\delta } \left(0\right)$ denotes the closed ball of radius $\delta $ centered at the point $0\in R^{n} $. It is clear that the Legendre condition follows from necessary condition \eqref{GrindEQ__1_8_} as a corollary. We formulate this condition. Let the admissible function $\bar{x}\left(\cdot \right)$ be a weak local minimum in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_}, and in addition suppose that the integrand $L\left(t,\, x,\, \dot{x}\right)$ is twice differentiable with respect to the variable $\dot{x}$ at the points of the set $\left\{\left(t,\, \bar{x}\left(t\right),\, \dot{\bar{x}}\left(t\right)\right):t\in I\right\}$. Then for all $\xi \in R^{n} $ the following inequalities hold: \begin{equation} \label{GrindEQ__1_9_} \begin{array}{l} {\xi ^{T} \, \bar{L}_{\dot{x}\dot{x}} \left(t\pm \right)\, \xi \ge 0,\, \, \forall t\in \left\{\tau \right\},\, \, \xi ^{T} \, \bar{L}_{\dot{x}\dot{x}} \left(t_{0} +\right)\, \xi \ge 0,\, } \\ {\xi ^{T} \, \bar{L}_{\dot{x}\dot{x}} \, \left(t_{1} -\right)\, \xi \ge 0,\, \, \xi ^{T} \, \bar{L}_{\dot{x}\dot{x}} \left(t\right)\xi \ge 0,\, \, \forall t\in \left(t_{0} ,\, t_{1} \right)\backslash \left\{\tau \right\}.} \end{array} \end{equation} An admissible function that satisfies the Euler equation, i.e. equality \eqref{GrindEQ__1_3_}, is called an extremal of problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_}. In the classical calculus of variations (see, e.g., \cite{4, 15}), the study of extremals for a minimum was the main goal, and various necessary and also sufficient conditions were obtained. Recall (see [4, 15, 33, etc.]) that in the classical calculus of variations a number of the known necessary and also sufficient conditions for a minimum become ineffective in the case when at some point $t=\theta \in I$, for a vector $\xi=\eta \in R^{n} \backslash \left\{0\right\}$, at least one inequality from \eqref{GrindEQ__1_5_}, \eqref{GrindEQ__1_8_} and \eqref{GrindEQ__1_9_} is fulfilled as an equality, i.e. the corresponding necessary condition degenerates at the point $\theta $, or at the point $\theta $ on the right or on the left. Therefore, the study of problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_} in such degenerate cases is of theoretical and practical interest today. It should be noted that similar problems in the theory of optimal control, starting with the work of Kelly \cite{13}, were thoroughly studied by many authors in terms of singular controls, and hundreds of papers and monographs containing a number of important results were published [2, 7-10, 13, 14, 18-21, 24-28, 32, etc.]. In the degenerate cases, when solving problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_}, the application of the results obtained for singular controls is either ineffective or requires suitably justified modifications.
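The kind of degeneration meant above can be seen already in a minimal scalar illustration (our own, assuming the integrand $L=\left(\dot{x}^{2} -1\right)^{2} $ and boundary data $x_{i} =t_{i} $, $i=0,1$; it is not one of the examples treated in the last section). The function $\bar{x}\left(t\right)=t$ satisfies the Euler equation \eqref{GrindEQ__1_3_} and even gives the global minimum $J\left(\bar{x}\left(\cdot \right)\right)=0$, while
\[
E\left(\bar{L}\right)\left(t,\, \xi \right)=\left(\left(1+\xi \right)^{2} -1\right)^{2} =\xi ^{2} \left(\xi +2\right)^{2} \ge 0,\qquad \bar{L}_{\dot{x}\dot{x}} \left(t\right)=12\, \dot{\bar{x}}^{2} \left(t\right)-4=8>0 .
\]
Thus the Legendre condition \eqref{GrindEQ__1_9_} is nondegenerate, whereas the Weierstrass condition \eqref{GrindEQ__1_5_} degenerates at every point of $I$ for the vector $\eta =-2$: $E\left(\bar{L}\right)\left(t,\, -2\right)=0$. In such situations inequality \eqref{GrindEQ__1_5_} is uninformative for the direction $\eta $, and refined conditions of the type obtained below become relevant.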
Although problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_} is a special case of a terminal optimal control problem with equality-type phase constraints, its study as an independent problem allows one to obtain more effective results that are not corollaries of theorems proved in the theory of optimal control. This assertion is confirmed, for example, in [6, p.33; 22, 23] and in the present paper in the presence of various degenerations. In this paper we offer a method for studying an extremum in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_} involving degenerations; the method is based on the introduction of various forms of Weierstrass-type variations characterized by a numerical parameter. Different necessary conditions for a strong and a weak local minimum are obtained. It should be noted that some results of this paper sharpen and refine the corresponding statements of the paper \cite{3}. The structure of this paper is as follows. In the second section we obtain increment formulas of the functional for problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_} under various assumptions on the smoothness of the integrand $L\left(\cdot \right)$ and of the extremal studied for a minimum (see \eqref{GrindEQ__2_15_}, \eqref{GrindEQ__2_16_}, \eqref{GrindEQ__2_32_}, \eqref{GrindEQ__2_33_}, \eqref{GrindEQ__2_36_}, \eqref{GrindEQ__2_37_}). In the third and fourth sections, based on the increment formulas of the functional obtained in the second section, we derive necessary conditions for a strong and a weak local minimum in the presence of various degenerations at separate points and on an interval. In the last section, by means of special examples we discuss the results obtained in the third and fourth sections. \section{Various increment formulas of the functional in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_}} In this section, by means of special variations we obtain increment formulas of the functional under various assumptions on the smoothness of the integrand $L\, \left(\cdot \right)$ and on the considered extremal of problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_}. Note that these formulas are of independent interest and form the basis for the proofs of the main results of this work. Let the admissible function $\bar{x}\, \left(\cdot \right)$ be an extremal in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_} and $\vartheta :=\left(\theta ,\, \lambda ,\, \xi \right)\in \left[t_{0} ,\, t_{1} \right)\times \left[0,\, 1\right)\times R^{n} $ be an arbitrary fixed point. Consider a function of the form \cite{22} \begin{equation} \label{GrindEQ__2_1_} h^{\left(+\right)} \left(t;\vartheta ,\, \varepsilon \right)=\left\{\begin{array}{c} {\left(t-\theta \right)\xi ,\, \, } \\ {\frac{\lambda }{\lambda -1} \left(t-\theta -\varepsilon \right)\xi ,} \\ {0,} \end{array}\right. \begin{array}{c} {t\in \left[\theta ,\, \theta +\lambda \varepsilon \right),} \\ {\, \, \, \, \, \, t\in \left[\theta +\lambda \varepsilon ,\, \theta +\varepsilon \right),} \\ {\, t\in I\backslash \left[\theta ,\, \theta +\varepsilon \right).} \end{array} \end{equation} Here $\varepsilon \in \left(0,\, \varepsilon _{0} \right]$, where $\varepsilon _{0} >0$ is a sufficiently small number; moreover, $\theta +\varepsilon _{0} <t_{1} $.
Obviously, for any $\varepsilon \in \left(0,\, \varepsilon _{0} \right]$ the function $h^{\left(+\right)\, } \left(\cdot \, ;\, \vartheta ,\, \varepsilon \right)$ is an element of the space $KC^{1} \left(I,R^{n} \right)$ and its derivative $\dot{h}^{\left(+\right)\, } \left(\cdot \, ;\, \vartheta ,\, \varepsilon \right)$ is calculated by \begin{equation} \label{GrindEQ__2_2_} \dot{h}^{\left(+\right)} \left(t;\, \vartheta ,\, \varepsilon \right)=\left\{\begin{array}{c} {\xi ,\, \, } \\ {\frac{\lambda }{\lambda -1} \xi ,} \\ {0,} \end{array}\right. \begin{array}{c} {t\in \left[\theta ,\, \theta +\lambda \varepsilon \right],} \\ {\, \, \, \, \, \, \, t\in \left[\theta +\lambda \varepsilon ,\, \, \theta +\varepsilon \right],} \\ {\, t\in I\backslash \left(\theta ,\, \theta +\varepsilon \right).} \end{array} \end{equation} As can be seen, the derivative $\dot{h}^{\left(+\right)\, } \left(\cdot \, ;\, \vartheta ,\, \varepsilon \right)$ at the points $\theta ,\, \theta +\lambda \varepsilon $ and $\theta +\varepsilon $ is calculated both on the right and on the left, and at the points $t_{0} $ and $t_{1} $ on the right and on the left, respectively. Since $\bar{x}\left(\cdot \right)$ is an extremal in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_}, then by virtue of \eqref{GrindEQ__1_3_}, \eqref{GrindEQ__2_1_} and \eqref{GrindEQ__2_2_}, allowing for the property $P\left(\theta +;\, \bar{x}\left(\cdot \right),\, \alpha \right)$, we have the equality \begin{equation} \label{GrindEQ__2_3_} 0=\int _{\theta }^{\theta +\varepsilon }\frac{d}{dt} \, \left(\bar{L}_{\dot{x}}^{\, T} \left(t\right)\, h_{\varepsilon }^{\left(+\right)} \left(t\right)\right)\, dt= \int _{\theta }^{\theta +\varepsilon }\, \, \, \left[\bar{L}_{x}^{\, T} \left(t\right)\, h_{\varepsilon }^{\left(+\right)} \left(t\right)+\bar{L}_{\dot{x}}^{\, T} \left(t\right)\, \dot{h}_{\varepsilon }^{\left(+\right)} \left(t\right)\right]\, dt ,\, \forall \varepsilon \in \left(0,\, \bar{\varepsilon }\right], \end{equation} where $h_{\varepsilon }^{\left(+\right)} \left(t\right):=h^{\left(+\right)} \left(t;\, \vartheta ,\, \varepsilon \right)$ and $\bar{\varepsilon }=\min \left\{\alpha ,\, \varepsilon _{0} \right\}$; the first equality in \eqref{GrindEQ__2_3_} holds because $h_{\varepsilon }^{\left(+\right)} \left(\theta \right)=h_{\varepsilon }^{\left(+\right)} \left(\theta +\varepsilon \right)=0$, while the second follows from the Euler equation \eqref{GrindEQ__1_3_}. Note that relation \eqref{GrindEQ__2_3_} is important in what follows when calculating the increment of the functional in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_}. Consider the special variation of the function $\bar{x}\left(\cdot \right)$: \begin{equation} \label{GrindEQ__2_4_} x^{\left(+\right)} \left(t;\, \vartheta ,\, \varepsilon \right)=\bar{x}\left(t\right)+h^{\left(+\right)} \left(t;\, \vartheta ,\, \varepsilon \right),\, \, t\in I,\, \, \varepsilon \in \left(0,\, \bar{\varepsilon }\right], \end{equation} where $h^{\left(+\right)} \left(\cdot \, ;\, \vartheta ,\, \, \varepsilon \right)$ is determined by \eqref{GrindEQ__2_1_}. We call \eqref{GrindEQ__2_4_} a variation introduced on the right with respect to the point $\theta $. Obviously, for every $\varepsilon \in \left(0,\, \bar{\varepsilon }\right]$ the function $x^{\left(+\right)} \left(\cdot \, ;\, \vartheta ,\, \, \varepsilon \right)$ is admissible.
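For completeness we note (a direct check, immediate from \eqref{GrindEQ__2_1_}) why the function \eqref{GrindEQ__2_4_} is admissible: $h^{\left(+\right)} \left(\cdot ;\vartheta ,\varepsilon \right)$ is continuous at the two junction points and vanishes outside $\left[\theta ,\, \theta +\varepsilon \right)$, so the boundary condition \eqref{GrindEQ__1_2_} is not violated. Indeed,
\[
h^{\left(+\right)} \left(\theta ;\vartheta ,\varepsilon \right)=0,\qquad \lim _{t\to \left(\theta +\lambda \varepsilon \right)-} h^{\left(+\right)} \left(t;\vartheta ,\varepsilon \right)=\lambda \varepsilon \xi =\frac{\lambda }{\lambda -1} \left(\lambda \varepsilon -\varepsilon \right)\xi =h^{\left(+\right)} \left(\theta +\lambda \varepsilon ;\vartheta ,\varepsilon \right),
\]
\[
\lim _{t\to \left(\theta +\varepsilon \right)-} h^{\left(+\right)} \left(t;\vartheta ,\varepsilon \right)=\frac{\lambda }{\lambda -1} \left(\theta +\varepsilon -\theta -\varepsilon \right)\xi =0=h^{\left(+\right)} \left(\theta +\varepsilon ;\vartheta ,\varepsilon \right),
\]
so that $x^{\left(+\right)} \left(\cdot ;\vartheta ,\varepsilon \right)\in KC^{1} \left(I,\, R^{n} \right)$ and $x^{\left(+\right)} \left(t_{i} ;\vartheta ,\varepsilon \right)=x_{i} $, $i=0,1$.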
Similarly to \eqref{GrindEQ__2_4_}, considering the property $P\left(\theta -;\, \bar{x}\left(\cdot \right),\, \alpha \right)$, we introduce into consideration the following variation of the function $\overline{x}\left(\cdot \right)$, the so-called variation introduced on the left with respect to the point $\theta $: \begin{equation} \label{GrindEQ__2_5_} x^{\left(-\right)} \, \left(t;\, \vartheta ,\, \varepsilon \right)=\bar{x}\left(t\right)+h^{\left(-\right)} \left(t;\, \vartheta ,\, \varepsilon \right),\, \, t\in I,\, \, \varepsilon \in \left(0,\, \tilde{\varepsilon }\right], \end{equation} where $\varepsilon \in \left(0,\, \, \tilde{\varepsilon }\right]$ with $\tilde{\varepsilon }=\min \left\{\alpha ,\, \theta -t_{0} \right\}$, further $\vartheta :=\left(\theta ,\, \lambda ,\, \xi \right)\in \left(t_{0} ,\, t_{1} \, \right]\times \left[0,\, 1\right)\times R^{n} $ is some fixed point and the function $h^{\left(-\right)} \, \left(\cdot \, ;\, \vartheta ,\, \varepsilon \right)$ is determined in the form \begin{equation} \label{GrindEQ__2_6_} h^{\left(-\right)} \, \left(t;\, \vartheta ,\, \varepsilon \right)=\left\{\begin{array}{c} {\left(t-\theta \right)\, \xi ,\, \, } \\ {\frac{\lambda }{\lambda -1} \left(t-\theta +\varepsilon \right)\xi ,} \\ {0,} \end{array}\begin{array}{c} {t\in \left(\theta -\lambda \, \varepsilon ,\, \theta \right]\, ,} \\ {\, \, \, \, \, \, \, t\in \left(\theta -\varepsilon ,\, \, \theta -\lambda \, \varepsilon \right]\, ,} \\ {t\in I\backslash \left(\theta -\varepsilon ,\, \theta \right]\, .} \end{array}\right. \end{equation} It is clear that for every $\varepsilon \in \left(0,\, \tilde{\varepsilon }\right]$ we have the inclusion $h^{\left(-\right)} \, \left(\cdot \, ;\, \vartheta ,\, \varepsilon \right)\in KC^{1} \left(I,\, R^{n} \right)$ and its derivative is calculated by \begin{equation} \label{GrindEQ__2_7_} \dot{h}^{\left(-\right)} \, \left(t;\, \vartheta ,\, \varepsilon \right)=\left\{\begin{array}{c} {\, \xi ,\, \, } \\ {\frac{\lambda }{\lambda -1} \xi ,} \\ {0,} \end{array}\, \begin{array}{c} {t\in \left[\theta -\lambda \, \varepsilon ,\, \theta \right]\, ,} \\ {\, \, \, \, \, \, t\in \left[\theta -\varepsilon ,\, \, \theta -\lambda \, \varepsilon \right]\, ,} \\ {t\in I\backslash \left(\theta -\varepsilon ,\, \, \theta \right)\, .} \end{array}\right. \end{equation} By virtue of \eqref{GrindEQ__2_6_} and \eqref{GrindEQ__2_7_} we have that for every $\varepsilon \in \left(0,\, \tilde{\varepsilon }\right]$ the function \eqref{GrindEQ__2_5_} is admissible. Similarly to \eqref{GrindEQ__2_3_}, we confirm that by virtue of \eqref{GrindEQ__1_3_}, \eqref{GrindEQ__2_6_} and \eqref{GrindEQ__2_7_}, allowing for the property $P\left(\theta -;\, \bar{x}\left(\cdot \right),\, \alpha \right)$, the following equality is valid: \begin{equation} \label{GrindEQ__2_8_} 0=\int _{\theta -\varepsilon }^{\theta }\frac{d}{dt} \, \left(\bar{L}_{\dot{x}}^{\, T} \left(t\right)\, h_{\varepsilon }^{\left(-\right)} \left(t\right)\right)\, dt= \int _{\theta -\varepsilon }^{\theta }\left[\bar{L}_{x}^{\, T} \left(t\right)\, h_{\varepsilon }^{\left(-\right)} \left(t\right)+\bar{L}_{\dot{x}}^{\, T} \left(t\right)\, \dot{h}_{\varepsilon }^{\left(-\right)} \left(t\right)\right]dt, \, \forall \varepsilon \in \left(0,\, \tilde{\varepsilon }\right], \end{equation} where $h_{\varepsilon }^{\left(-\right)} \left(t\right):=h^{\left(-\right)} \left(t;\, \vartheta ,\, \varepsilon \right)$.
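The following elementary estimates are not written out in the text, but for $\lambda \in \left(0,\, 1\right)$ they follow at once from \eqref{GrindEQ__2_1_} and \eqref{GrindEQ__2_6_} and explain why the variations \eqref{GrindEQ__2_4_} and \eqref{GrindEQ__2_5_} are suitable both for strong and (for small $\left|\xi \right|$) for weak local minima:
\[
\left\| h^{\left(\pm \right)} \left(\cdot ;\vartheta ,\varepsilon \right)\right\| _{C\left(I,R^{n} \right)} =\lambda \varepsilon \left|\xi \right|,\qquad \left\| \dot{h}^{\left(\pm \right)} \left(\cdot ;\vartheta ,\varepsilon \right)\right\| _{L_{\infty } \left(I,R^{n} \right)} =\max \left\{1,\, \frac{\lambda }{1-\lambda } \right\}\left|\xi \right| .
\]
Hence, as $\varepsilon \to +0$, the varied functions $x^{\left(\pm \right)} \left(\cdot ;\vartheta ,\varepsilon \right)$ enter any $\bar{\delta }$-neighborhood of $\bar{x}\left(\cdot \right)$ in the norm of $C\left(I,R^{n} \right)$ for an arbitrary $\xi \in R^{n} $, whereas they remain in a $\hat{\delta }$-neighborhood in the sense of a weak local minimum only if $\xi $ is sufficiently small.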
We introduce notations that will be convenient in what follows: \begin{equation} \label{GrindEQ__2_9_} \bar{L}_{x} \left(t,\, \xi \right):=L_{x} \left(t,\bar{x}\left(t\right),\, \dot{\bar{x}}\left(t\right)+\xi \right),\, \, \, \, \bar{L}_{xx} \left(t,\, \xi \right):=L_{xx} \left(t,\bar{x}\left(t\right),\, \dot{\bar{x}}\left(t\right)+\xi \right)\, , \end{equation} \begin{equation} \label{GrindEQ__2_10_} \Delta \bar{L}_{x} \left(t,\, \xi \right):=L_{x} \left(t,\bar{x}\left(t\right),\, \dot{\bar{x}}\left(t\right)+\xi \right)-\, \bar{L}_{x} \left(t\right),\, \xi \in R^{n} , \end{equation} \begin{equation} \label{GrindEQ__2_11_} Q_{i} \left(\bar{L}\right)\, \left(t,\, \lambda \, ,\, \xi \right):=\lambda ^{i} E\, \left(\bar{L}\right)\, \left(t,\, \, \xi \right)+\left(1-\lambda ^{i} \right)E\left(\bar{L}\right)\, \left(t,\, \frac{\lambda }{\lambda -1} \xi \right),\, \, i=1,2,3, \end{equation} \begin{equation} \label{GrindEQ__2_12_} M_{i} \left(\bar{L}_{x} \right)\left(t,\, \lambda ,\, \xi \right):=\lambda ^{i} \Delta \, \bar{L}_{x}^{\, T} \left(t,\, \, \xi \right)\xi +\left(1-\lambda \right)\left(\frac{1}{2} +\lambda \right)^{i-1} \Delta \, \bar{L}_{x}^{\, T} \left(t,\, \frac{\lambda }{\lambda -1} \xi \right)\xi ,\, \, \, i=1,2. \end{equation} Considering the above, we prove the following propositions. \textbf{Proposition 2.1. }Let the functions $L\left(\cdot \right)$ and $L_{\dot{x}} \left(\cdot \right)$ be twice continuously differentiable with respect to the totality of variables, and let the admissible function $\bar{x}\left(\cdot \right)$ be an extremal in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_}. Then: (i) if at the point $\theta \in \left[t_{0} ,\, t_{1} \, \right)$ the extremal $\bar{x}\left(\cdot \right)$ is triply differentiable on the right in the right semi-neighborhood $\left[\theta ,\, \theta +\alpha \right)\subset I$ of the point $\theta $, then for every $\left(\lambda ,\, \xi \right)\in \, \left[0,1\right)\times R^{n} $ there exists a number $\varepsilon ^{*} >0$ such that for all $\varepsilon \in \left(0,\, \varepsilon ^{*} \right]$ the increment $J\left(x^{\left(+\right)} \left(\cdot ;\, \vartheta ,\varepsilon \right)\right)-J\left(\bar{x}\left(\cdot \right)\right)=:\Delta _{\varepsilon }^{\left(+\right)} J\left(\bar{x}\left(\cdot \right);\vartheta \right)$ of the functional \eqref{GrindEQ__1_1_}, corresponding to the variation \eqref{GrindEQ__2_4_}, is represented in the form \[\Delta _{\varepsilon }^{\left(+\right)} J\left(\bar{x}\left(\cdot \right);\, \vartheta \right)\, =\varepsilon \, Q_{1} \left(\bar{L}\right)\, \left(\theta +,\, \lambda ,\, \xi \right)+\frac{1}{2} \varepsilon ^{2} W\left(\bar{L}\right)\, \left(\theta +,\, \lambda ,\, \xi \right)+\] \begin{equation} \label{GrindEQ__2_13_} +\frac{1}{6} \varepsilon ^{3} G\left(\bar{L}\right)\, \left(\theta +,\, \lambda ,\, \xi \right)+o\left(\varepsilon ^{3} \right); \end{equation} (ii) if at the point $\theta \in \left(t_{0} ,\, t_{1} \right]$ the extremal $\bar{x}\left(\cdot \right)$ is triply differentiable on the left in the left semi-neighborhood $\left(\theta -\alpha ,\, \theta \right]\subset I$ of the point $\theta $, then for every $\left(\lambda ,\, \xi \right)\in \left[0,\, 1\right)\times R^{n} $ there exists a number $\varepsilon ^{*} >0$ such that for all $\varepsilon \in \left(0,\, \varepsilon ^{*} \right]$ the increment $J\left(x^{\left(-\right)} \left(\cdot ;\, \vartheta ,\, \varepsilon \right)\right)-J\left(\bar{x}\left(\cdot \right)\right)=:\Delta
_{\varepsilon }^{\left(-\right)} J\left(\bar{x}\left(\cdot \right);\, \vartheta \right)$, corresponding to the variation \eqref{GrindEQ__2_5_}, is represented in the form \[\Delta _{\varepsilon }^{\left(-\right)} J\left(\bar{x}\left(\cdot \right);\, \vartheta \right)\, =\varepsilon \, Q_{1} \left(\bar{L}\right)\, \left(\theta -,\, \lambda ,\, \xi \right)-\frac{1}{2} \varepsilon ^{2} W\left(\bar{L}\right)\, \left(\theta -,\, \lambda ,\, \xi \right)+\] \begin{equation} \label{GrindEQ__2_14_} +\frac{1}{6} \varepsilon ^{3} G\left(\bar{L}\right)\, \left(\theta -,\, \lambda ,\, \xi \right)+o\left(\varepsilon ^{3} \right). \end{equation} Here \begin{equation} \label{GrindEQ__2_15_} W\left(\bar{L}\right)\, \left(\theta ,\, \lambda ,\, \xi \right):=\lambda \, M_{1} \left(\bar{L}_{x} \right)\, \left(\theta ,\, \lambda ,\, \xi \right)+\frac{d}{dt} Q_{2} \left(\bar{L}\right)\, \left(\theta ,\, \lambda ,\, \xi \right), \end{equation} \[G\left(\bar{L}\right)\, \left(\theta ,\, \lambda ,\, \xi \right):=\lambda ^{2} \xi ^{T} \left[\lambda \bar{L}_{xx} \left(\theta ,\, \xi \right)+\left(1-\lambda \right)\bar{L}_{xx} \left(\theta ,\, \frac{\lambda }{\lambda -1} \xi \right)\right]\xi +\] \begin{equation} \label{GrindEQ__2_16_} +2\lambda \frac{d}{dt} M_{2} \left(\bar{L}_{x} \right)\, \left(\theta ,\, \lambda ,\, \xi \right)+\frac{d^{2} }{dt^{2} } Q_{3} \left(\bar{L}\right)\, \left(\theta ,\, \lambda ,\, \xi \right), \end{equation} where $Q_{i} \left(\bar{L}\right)\, \left(\cdot \right),\, \, i=1,2,3$, and $M_{i} \left(\bar{L}_{x} \right)\, \left(\cdot \right),\, \, i=1,2$, are determined by \eqref{GrindEQ__2_11_} and \eqref{GrindEQ__2_12_}, allowing for \eqref{GrindEQ__1_6_}, \eqref{GrindEQ__2_9_} and \eqref{GrindEQ__2_10_}. \textbf{Proof. }At first we prove part (i) of the proposition, i.e. the validity of the equality \eqref{GrindEQ__2_13_}. Since the admissible function $\bar{x}\left(\cdot \right)$ is an extremal in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_}, i.e. is a solution of equation \eqref{GrindEQ__1_3_}, then using \eqref{GrindEQ__2_3_} and \eqref{GrindEQ__2_4_}, allowing for \eqref{GrindEQ__2_1_}, \eqref{GrindEQ__2_2_} and the property $P\left(\theta +;\, \bar{x}\left(\cdot \right),\, \alpha \right)$, it is easy to represent the increment $\Delta _{\varepsilon }^{\left(+\right)} \, J\left(\bar{x}\, \left(\cdot \right);\, \vartheta \right)$ in the form: \begin{equation} \label{GrindEQ__2_17_} \Delta _{\varepsilon }^{\left(+\right)} \, J\left(\bar{x}\, \left(\cdot \right);\, \vartheta \right)=J_{1}^{\left(+\right)} \left(\varepsilon ,\, \vartheta \right)+J_{2}^{\left(+\right)} \left(\varepsilon ,\, \vartheta \right),\, \, \varepsilon \in \left(0,\, \bar{\varepsilon }\right].
\end{equation} Here \begin{equation} \label{GrindEQ__2_18_} \begin{split} J_{1}^{\left(+\right)} \left(\varepsilon ,\, \vartheta \right)=\int _{\theta }^{\theta +\varepsilon }\, \, \, \left[L\left(t,\, \bar{x}\left(t\right),\, \dot{\bar{x}}\left(t\right)+\dot{h}{}_{\varepsilon }^{\left(+\right)} \left(t\right)\right)-\bar{L}\left(t\right)-\bar{L}_{\dot{x}}^{T} \left(t\right)\dot{h}{}_{\varepsilon }^{\left(+\right)} \left(t\right)\right] \, \, dt\, , \end{split} \end{equation} \begin{equation} \label{GrindEQ__2_19_} \begin{split} J_{2}^{\left(+\right)} \left(\varepsilon ,\, \vartheta \right)=\int _{\theta }^{\theta +\varepsilon }\, \, \, [L(t,\, \bar{x}\left(t\right)+h{}_{\varepsilon }^{\left(+\right)} \left(t\right),\dot{\bar{x}}\left(t\right)+\dot{h}{}_{\varepsilon }^{\left(+\right)} (t))\\ - L\, \left(t,\, \bar{x}\left(t\right),\, \dot{\bar{x}}\left(t\right)+\dot{h}{}_{\varepsilon }^{\left(+\right)} \left(t\right)\right)-\bar{L}_{x}^{T} \left(t\right)h{}_{\varepsilon }^{\left(+\right)} \left(t\right)]dt, \end{split} \end{equation} where $h{}_{\varepsilon }^{\left(+\right)} \left(t\right):=h^{\left(+\right)} \left(t;\, \vartheta ,\, \varepsilon \right)$, and the number $\bar{\varepsilon }$ is determined above (see \eqref{GrindEQ__2_3_}). We calculate the integrals \eqref{GrindEQ__2_18_} and \eqref{GrindEQ__2_19_} with accuracy $o(\varepsilon ^{3})$. Considering the assumption on the smoothness of the functions $L\, \, \left(\cdot \right)$ and $\bar{x}\left(\cdot \right)$, we apply the Taylor formula. Then: (a) by virtue of \eqref{GrindEQ__2_1_} - \eqref{GrindEQ__2_3_}, allowing for \eqref{GrindEQ__1_6_}, \eqref{GrindEQ__1_7_} and \eqref{GrindEQ__2_11_}, from \eqref{GrindEQ__2_18_} we have \[J_{1}^{\left(+\right)} \left(\varepsilon ,\, \vartheta \right)=\int _{\theta }^{\theta +\lambda \varepsilon }\, E\left(\bar{L}\right)\, \left(t,\, \xi \right) \, dt\, +\int _{\theta +\lambda \varepsilon }^{\theta +\varepsilon }E\left(\bar{L}\right)\, \left(t,\, \frac{\lambda }{\lambda -1} \xi \right)dt= \] \[=\int _{\theta }^{\theta +\lambda \varepsilon }\left[\sum _{i=0}^{2}\frac{1}{i!} \left(\, t-\theta \right)^{i} \frac{d^{i} }{dt^{i} } E\left(\bar{L}\right)\, \left(\theta +,\xi \right)\, +o\left(\left(t-\theta \right)^{2} \right)\right]\, dt+ \] \[+\int _{\theta +\lambda \varepsilon }^{\theta +\varepsilon }\left[\sum _{i=0}^{2}\frac{1}{i!} \left(t-\theta \right)^{i}\frac{d^{i} }{dt^{i} } E\left(\bar{L}\right)\, \left(\theta +,\, \frac{\lambda }{\lambda -1} \xi \right)+o\left(\left(t-\theta \right)^{2} \right)\right]dt= \] \[=\sum _{i=0}^{2}\frac{\varepsilon ^{i+1} }{\left(i+1\right)\, !} \frac{d^{i} }{dt^{i} } \left[\lambda ^{i+1} E\left(\bar{L}\right)\, \left(\theta +,\xi \right)+\left(1-\lambda ^{i+1} \right)E\left(\bar{L}\right)\, \left(\theta +,\, \frac{\lambda }{\lambda -1} \xi \right)\right]+\] \begin{equation} \label{GrindEQ__2_20_} +o\left(\varepsilon ^{3} \right)=\, \sum _{i=0}^{2}\frac{\varepsilon ^{i+1} }{\left(i+1\right)!} \frac{d^{i} }{dt^{i} } Q_{i+1} \left(\bar{L}\right)\, \left(\theta +,\, \lambda ,\, \xi \right)+o\left(\varepsilon ^{3} \right); \end{equation} (b) by virtue of \eqref{GrindEQ__2_1_} - \eqref{GrindEQ__2_3_}, allowing for \eqref{GrindEQ__2_9_} and \eqref{GrindEQ__2_10_}, from \eqref{GrindEQ__2_19_} we get \begin{equation} \label{GrindEQ__2_21_} J{}_{2}^{\left(+\right)} \left(\varepsilon ,\, \vartheta \right)=J{}_{21}^{\left(+\right)} \left(\varepsilon ,\, \vartheta \right)+J{}_{22}^{\left(+\right)} \left(\varepsilon ,\, \vartheta \right),\, \, \forall \varepsilon \in 
\left(0,\, \bar{\varepsilon }\right]. \end{equation} Here the integrals $J{}_{21}^{\left(+\right)} \left(\cdot \right)$ and $J{}_{22}^{\left(+\right)} \left(\cdot \right)$ are calculated by the Taylor formula as follows: \[J{}_{21}^{\left(+\right)} \left(\varepsilon ,\, \vartheta \right)=\int _{\theta }^{\theta +\lambda \varepsilon }\left[\left(t-\theta \right)\Delta \, \bar{L}_{x}^{\, T} \left(t,\, \xi \right)\xi +\frac{1}{2} \left(t-\theta \right)^{2} \xi ^{T} \bar{L}_{xx} \left(t,\, \xi \right)\xi +o\left(\left(t-\theta \right)^{2} \right)\, \right]\, dt =\] \[=\int _{\theta }^{\theta +\lambda \varepsilon }\left(t-\theta \right)\, \left[\Delta \bar{L}_{x}^{\, T} \left(\theta +,\, \xi \right)\, \xi +\left(t-\theta \right)\frac{d}{dt} \Delta \, \bar{L}_{x}^{\, T} \left(\theta +,\, \xi \right)\, \xi +o\left(t-\theta \right)\right] \, dt+\] \[+\frac{1}{6} \varepsilon ^{3} \lambda ^{3} \xi ^{T} \bar{L}_{xx} \left(\theta +,\, \xi \right)\xi +o\, \left(\varepsilon ^{3} \right)=\frac{\varepsilon ^{2} }{2} \lambda ^{2} \Delta \, \bar{L}_{x}^{\, T} \left(\theta +,\, \xi \right)\, \xi +\] \begin{equation} \label{GrindEQ__2_22_} +\frac{\varepsilon ^{3} }{6} \, \left[2\lambda ^{3} \frac{d}{dt} \Delta \, \bar{L}_{x}^{\, T} \left(\theta +,\, \xi \right)\xi +\lambda ^{3} \xi ^{T} \bar{L}_{xx} \left(\theta +,\, \xi \right)\xi \right]+o\left(\varepsilon ^{3} \right), \end{equation} \[J{}_{22}^{\left(+\right)} \left(\varepsilon ,\, \vartheta \right)=\int _{\theta +\lambda \varepsilon }^{\theta +\varepsilon }q\left(t;\theta ,\, \lambda ,\, \varepsilon \right)dt , \] where \[q\left(t;\theta ,\, \lambda ,\, \varepsilon \right)=\left(\lambda -1\right)^{-1} \lambda \, \left(t-\theta -\varepsilon \right)\Delta \bar{L}_{x}^{T} \left(t,\, \left(\lambda -1\right)^{-1} \lambda \xi \right)\xi +\] \[+\frac{1}{2} \left(\lambda -1\right)^{-2} \lambda ^{2} \left(t-\theta -\varepsilon \right)^{2} \xi ^{T} \bar{L}_{xx} \left(t,\, \left(\lambda -1\right)^{-1} \lambda \, \xi \right)\, \xi +o\, \left(\left(t-\theta -\varepsilon \right)^{2} \right). \] Continuing the calculations, we have \[J{}_{22}^{\left(+\right)} \left(\varepsilon ,\, \vartheta \right)=\frac{\lambda }{\lambda -1} \int _{\theta +\lambda \varepsilon }^{\theta +\varepsilon }\left(t-\theta -\varepsilon \right)\left[\Delta \bar{L}_{x}^{T} \left(\theta +,\, \frac{\lambda }{\lambda -1} \xi \right)\xi +\left(t-\theta \right)\frac{d}{dt} \Delta \bar{L}_{x}^{T} \left(\theta +,\, \frac{\lambda }{\lambda -1} \xi \right)\xi +o\left(t-\theta \right)\right] dt+\] \[+\frac{1}{6} \varepsilon ^{3} \lambda ^{2} \left(1-\lambda \right)\xi ^{T} \bar{L}_{xx} \left(\theta +,\, \frac{\lambda }{\lambda -1} \xi \right)\xi +o \left(\varepsilon ^{3} \right)=\frac{\varepsilon ^{2} }{2} \lambda \left(1-\lambda \right)\Delta \bar{L}_{x}^{T} \left(\theta +,\, \frac{\lambda }{\lambda -1} \xi \right)\xi +\] \begin{equation} \label{GrindEQ__2_23_} +\frac{\varepsilon ^{3} }{6} \left[\lambda \left(1-\lambda \right)\left(1+2\lambda \right)\frac{d}{dt} \Delta \bar{L}_{x}^{T} \left(\theta +,\, \frac{\lambda }{\lambda -1} \xi \right)\xi +\lambda ^{2} \left(1-\lambda \right)\xi ^{T} \bar{L}_{xx} \left(\theta +,\, \frac{\lambda }{\lambda -1} \xi \right)\xi \right]+o\left(\varepsilon ^{3} \right).
\end{equation} By virtue of \eqref{GrindEQ__2_22_} and \eqref{GrindEQ__2_23_}, allowing for the notation \eqref{GrindEQ__2_12_}, the equality \eqref{GrindEQ__2_21_} takes the form \[J_{2}^{\left(+\right)} \left(\varepsilon ,\, \vartheta \right)=\frac{1}{2} \varepsilon ^{2} \lambda \, M_{1} \left(\bar{L}_{x} \right)\, \left(\theta +,\, \lambda ,\xi \right)+\frac{1}{6} \varepsilon ^{3} \, \left[\lambda ^{2} \xi ^{T} \left(\lambda \, \bar{L}_{xx} \left(\theta +,\, \xi \right)+\right. \right. \] \begin{equation} \label{GrindEQ__2_24_} \left. \left. +\left(1-\lambda \right)\bar{L}_{xx} \left(\theta +,\left(\lambda -1\right)^{-1} \lambda \xi \right)\right)\xi +2\lambda \frac{d}{dt} M_{2} \left(\bar{L}_{x} \right)\, \left(\theta +,\, \lambda ,\xi \right)\right]+o\left(\varepsilon ^{3} \right). \end{equation} Consequently, having substituted \eqref{GrindEQ__2_20_} and \eqref{GrindEQ__2_24_} into \eqref{GrindEQ__2_17_}, and also having chosen $\varepsilon ^{*} =\bar{\varepsilon }$, allowing for \eqref{GrindEQ__2_10_}-\eqref{GrindEQ__2_12_}, \eqref{GrindEQ__2_15_} and \eqref{GrindEQ__2_16_}, we get the expansion \eqref{GrindEQ__2_13_}, i.e. part (i) of Proposition 2.1 is proved. Similarly to \eqref{GrindEQ__2_13_} we give the proof of part (ii) of Proposition 2.1. For that it is sufficient to calculate the increment $J\left(x^{\left(-\right)} \left(\cdot ;\, \vartheta ,\varepsilon \right)\right)-J\left(\bar{x}\left(\cdot \right)\right)=:\Delta _{\varepsilon }^{\left(-\right)} J\left(\bar{x}\left(\cdot \right);\, \vartheta \right)$ with accuracy $o(\varepsilon ^{3} )$, where the function $x^{(-)} (\cdot ;\, \vartheta ,\, \varepsilon )$ is determined by \eqref{GrindEQ__2_5_} and \eqref{GrindEQ__2_6_}. Using \eqref{GrindEQ__2_5_} and \eqref{GrindEQ__2_8_}, allowing for \eqref{GrindEQ__2_6_}, \eqref{GrindEQ__2_7_} and the property $P\left(\theta -;\, \bar{x}\left(\cdot \right),\, \alpha \right)$, we can represent the increment $\Delta _{\varepsilon }^{\left(-\right)} J\left(\bar{x}\, \left(\cdot \right);\, \vartheta \right)$ in the form \begin{equation} \label{GrindEQ__2_25_} \Delta _{\varepsilon }^{\left(-\right)} J\, \left(\bar{x}\, \left(\cdot \right);\, \vartheta \right)=J_{1}^{\left(-\right)} \left(\varepsilon ,\vartheta \right)+J_{2}^{\left(-\right)} \left(\varepsilon ,\vartheta \right),\, \, \varepsilon \in \left(0,\, \tilde{\varepsilon }\right]. \end{equation} Here the number $\tilde{\varepsilon }>0$ is determined above (see \eqref{GrindEQ__2_5_}) and \begin{equation} \label{GrindEQ__2_26_} J_{1}^{\left(-\right)} \left(\varepsilon ,\, \vartheta \right):=\int _{\theta -\varepsilon }^{\theta }\, \left[L\, \left(t,\, \bar{x}\left(t\right),\, \dot{\bar{x}}\left(t\right)+\dot{h}_{\varepsilon }^{\left(-\right)} \left(t\right)\right)-\bar{L}\left(t\right)-\bar{L}_{\dot{x}}^{T} \left(t\right)\, \dot{h}_{\varepsilon }^{\left(-\right)} \left(t\right)\right]\, dt , \end{equation} \[J_{2}^{\left(-\right)} \left(\varepsilon ,\, \vartheta \right):=\] \begin{equation} \label{GrindEQ__2_27_} \int _{\theta -\varepsilon }^{\theta }\, \left[L\, \left(t,\, \bar{x}\left(t\right)+h_{\varepsilon }^{\left(-\right)} \left(t\right),\, \, \dot{\bar{x}}\left(t\right)+\dot{h}_{\varepsilon }^{\left(-\right)} \left(t\right)\right)-\right. \left.
L\, \left(t,\, \bar{x}\left(t\right),\, \dot{\bar{x}}\left(t\right)+\dot{h}_{\varepsilon }^{\left(-\right)} \left(t\right)\right)-\bar{L}_{x}^{T} \left(t\right)\, h_{\varepsilon }^{\left(-\right)} \left(t\right)\right]\, dt, \end{equation} where $h_{\varepsilon }^{\left(-\right)} \left(t\right):=h^{\left(-\right)} \left(t;\, \vartheta ,\, \varepsilon \right)$. Considering the assumption on the smoothness of the functions $L\left(\cdot \right)$, $L_{\dot{x}} \, \left(\cdot \right)$ and $\bar{x}\left(\cdot \right)$, and applying the Taylor formula, we calculate the integrals \eqref{GrindEQ__2_26_} and \eqref{GrindEQ__2_27_} with accuracy $o(\varepsilon ^{3})$. More exactly, we carry out the calculations in the following way: (a) similarly to \eqref{GrindEQ__2_20_}, by virtue of \eqref{GrindEQ__2_6_} and \eqref{GrindEQ__2_7_}, allowing for the notations \eqref{GrindEQ__1_6_} and \eqref{GrindEQ__2_11_}, from \eqref{GrindEQ__2_26_} by the Taylor formula we get \begin{equation} \label{GrindEQ__2_28_} \begin{split} J_{1}^{\left(-\right)} \left(\varepsilon ,\, \vartheta \right)=\int _{\theta -\lambda \varepsilon }^{\theta }\left[\sum_{i=0}^{2} \frac{1}{i!}(t-\theta)^i \frac{d^i}{dt^i}E(\overline{L})(\theta-, \xi)+o((t-\theta)^2)\right]dt\\ +\int _{\theta -\varepsilon}^{\theta -\lambda \varepsilon }\left[\sum_{i=0}^{2} \frac{1}{i!}(t-\theta)^i \frac{d^i}{dt^i}E(\overline{L})(\theta-, \frac{\lambda}{\lambda-1}\xi)+o((t-\theta)^2)\right]dt \\ =\sum _{i=0}^{2}\left(-1\right)^{i+2} \frac{\varepsilon ^{i+1} }{\left(i+1\right)!} \frac{d^{i} }{dt^{i} } Q_{i+1} \left(\bar{L}\right)\, \left(\theta -,\, \lambda ,\, \xi \right)+o\left(\varepsilon ^{{\rm 3}} \right) ,\, \, \forall \varepsilon \in \left(0,\, \tilde{\varepsilon }\right]; \end{split} \end{equation} (b) similarly to \eqref{GrindEQ__2_22_} and \eqref{GrindEQ__2_23_}, by virtue of \eqref{GrindEQ__2_6_} and \eqref{GrindEQ__2_7_}, allowing for the notations \eqref{GrindEQ__2_9_} and \eqref{GrindEQ__2_10_}, from \eqref{GrindEQ__2_27_} by the Taylor formula we have \begin{equation} \label{GrindEQ__2_29_} J_{2}^{\left(-\right)} \left(\varepsilon ,\, \vartheta \right)=J_{21}^{\left(-\right)} \left(\varepsilon ,\, \vartheta \right)+J_{22}^{\left(-\right)} \left(\varepsilon ,\, \vartheta \right),\, \, \varepsilon \in \left(0,\, \tilde{\varepsilon }\right], \end{equation} where \[J_{21}^{\left(-\right)} \left(\varepsilon ,\, \vartheta \right)=\int _{\theta -\lambda \varepsilon }^{\theta }\left[\left(t-\theta \right)\Delta \bar{L}_{x}^{T} \, \left(t,\, \xi \right)\, \xi +\frac{1}{2} \left(t-\theta \right)^{2} \xi ^{T} \bar{L}_{xx} \left(t,\, \xi \right)\xi +o\left(\left(t-\theta \right)^{2} \right)\right]dt= \] \begin{equation} \label{GrindEQ__2_30_} =-\frac{\varepsilon ^{2} }{2} \lambda ^{2} \Delta \bar{L}_{x}^{T} \, \left(\theta -,\, \xi \right)\, \xi +\frac{1}{6} \varepsilon ^{3} \left. \left[2\lambda ^{3} \frac{d}{dt} \Delta \bar{L}_{x}^{T} \, \left(\theta -,\, \xi \right)\, \xi \right.
+\lambda ^{3} \xi ^{T} \bar{L}_{xx} \left(\theta -,\, \xi \right)\, \xi \right]+o\left(\varepsilon ^{3} \right), \end{equation} \[J_{22}^{\left(-\right)} \left(\varepsilon ,\, \vartheta \right)=\int _{\theta -\varepsilon }^{\theta -\lambda \varepsilon }\frac{\lambda }{\lambda -1} \left(t-\theta +\varepsilon \right)\Delta \bar{L}_{x}^{T} \, \left(t,\, \frac{\lambda }{\lambda -1} \xi \right)\xi \, dt+ \] \[+\int _{\theta -\varepsilon }^{\theta -\lambda \varepsilon }\left[\frac{\lambda ^{2} }{2\left(\lambda -1\right)^{2} } \left(t-\theta +\varepsilon \right)^{2} \xi ^{T} \bar{L}_{xx} \left(t,\, \frac{\lambda }{\lambda -1} \xi \right)\xi +o\left(\left(t-\theta +\varepsilon \right)^{2} \right)\right]\, dt= \] \[=-\frac{1}{2} \varepsilon ^{2} \lambda \left(1-\lambda \right)\Delta \, \bar{L}_{x}^{T} \, \left(\theta -,\, \frac{\lambda }{\lambda -1} \xi \right)\xi +\] \[+\frac{\varepsilon ^{3} }{6} \left[\lambda \left(1-\lambda \right)\, \left(1+2\lambda \right)\frac{d}{dt} \Delta \, \bar{L}_{x}^{T} \, \left(\theta -,\, \frac{\lambda }{\lambda -1} \xi \right)\xi +\lambda ^{2} \left(1-\lambda \right)\xi ^{T} \bar{L}_{xx} \left(\theta -,\, \frac{\lambda }{\lambda -1} \xi \right)\xi \right]+o\left(\varepsilon ^{3} \right).\] Consequently, by virtue of \eqref{GrindEQ__2_28_}-\eqref{GrindEQ__2_30_}, for the increment \eqref{GrindEQ__2_25_}, taking into account \eqref{GrindEQ__2_10_}-\eqref{GrindEQ__2_12_}, \eqref{GrindEQ__2_15_} and \eqref{GrindEQ__2_16_}, and also choosing $\varepsilon ^{*} =\tilde{\varepsilon }$, we get expansion \eqref{GrindEQ__2_14_}, i.e. part (ii) of Proposition 2.1 is proved. By the same token, Proposition 2.1 is completely proved. Based on the technique of the proof of Proposition 2.1, namely, using \eqref{GrindEQ__2_17_}-\eqref{GrindEQ__2_19_} and \eqref{GrindEQ__2_25_}-\eqref{GrindEQ__2_27_}, under weaker assumptions on the smoothness of the functions $L\left(\cdot \right),\, L_{\dot{x}} \left(\cdot \right)$ and $\bar{x}\, \, \left(\cdot \right)$, as a corollary of Proposition 2.1 it is easy to arrive at the following statement. \textbf{Proposition 2.2. }Let the functions $L\left(\cdot \right)$ and $L_{\dot{x}} \left(\cdot \right)$ be continuously differentiable with respect to the totality of variables, and let the admissible function $\bar{x}\, \, \left(\cdot \right)$ be an extremal in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_}.
If at the point $\theta \in \left[t_{0} ,\, t_{1} \right)$ $\left(\theta \in \left(t_{0} ,\, t_{1} \right]\right)$ the extremal $\bar{x}\, \, \left(\cdot \right)$ is twice differentiable on the right (left) in the semi-neighborhood $\left[\theta ,\, \theta +\alpha \right)\subset I$ $\left(\, \left(\theta -\alpha ,\, \theta \right]\subset I\, \right)$ of the point $\theta $, then for every $\left(\lambda ,\, \xi \right)\in \left[0,\, 1\right)\times R^{n} $ there exists a number $\varepsilon ^{*} >0$ such that for all $\varepsilon \in \left(0,\, \varepsilon ^{*} \right]$ the increment $\Delta _{\varepsilon }^{\left(+\right)} \, J\left(\bar{x}\, \left(\cdot \right);\vartheta \right)$ $\left(\Delta _{\varepsilon }^{\left(-\right)} \, J\left(\bar{x}\, \left(\cdot \right);\vartheta \right)\right)$ of the functional in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_}, corresponding to the variation \eqref{GrindEQ__2_4_} (\eqref{GrindEQ__2_5_}), is represented in the form \begin{equation} \label{GrindEQ__2_32_} \Delta _{\varepsilon }^{\left(+\right)} J\left(\bar{x}\left(\cdot \right); \vartheta \right)\, =\varepsilon \, Q_{1} \left(\bar{L}\right)\, \left(\theta +,\, \lambda ,\, \xi \right)+\frac{1}{2} \varepsilon ^{2} W\left(\bar{L}\right)\, \left(\theta +,\, \lambda ,\, \xi \right)+o\left(\varepsilon ^{2} \right) \end{equation} \begin{equation} \label{GrindEQ__2_33_} \left(\Delta _{\varepsilon }^{\left(-\right)} J\left(\bar{x}\left(\cdot \right);\, \vartheta \right)\, =\varepsilon \, Q_{1} \left(\bar{L}\right)\, \left(\theta -,\, \lambda ,\, \xi \right)-\frac{1}{2} \varepsilon ^{2} W\left(\bar{L}\right)\, \left(\theta -,\, \lambda ,\, \xi \right)+o\left(\varepsilon ^{2} \right)\right), \end{equation} where $Q_{1}(\bar{L})(\cdot)$ and $W(\bar{L})(\cdot)$ are determined by \eqref{GrindEQ__2_11_} and \eqref{GrindEQ__2_15_} allowing for \eqref{GrindEQ__1_6_} and \eqref{GrindEQ__2_9_}-\eqref{GrindEQ__2_12_}. We now consider the following special case. Namely, setting $\lambda =\varepsilon \in \left(0,1\right)\bigcap \left(0,\, \bar{\varepsilon }\right]$ $\left(\lambda =\varepsilon \in \left(0,1\right)\bigcap \left(0,\, \tilde{\varepsilon }\right]\right)$ in \eqref{GrindEQ__2_1_} (\eqref{GrindEQ__2_6_}), we introduce a new variation of the extremal $\bar{x}(\cdot)$ in the form: \begin{equation} \label{GrindEQ__2_34_} \left. x^{\left(+\right)} \left(t;\, \tilde{\vartheta },\, \varepsilon \right):=x^{\left(+\right)} \left(t;\, \vartheta ,\, \varepsilon \right)\, \right|_{\vartheta =\tilde{\vartheta }=\left(\theta ,\varepsilon ,\xi \right)} ,\, \varepsilon \in \left(0,1\right)\bigcap \left(0,\bar{\varepsilon }\right] \end{equation} \begin{equation} \label{GrindEQ__2_35_} \left(\left. x^{\left(-\right)} \left(t;\, \tilde{\vartheta },\, \varepsilon \right):=x^{\left(-\right)} \left(t;\, \vartheta ,\, \varepsilon \right)\, \right|_{\vartheta =\tilde{\vartheta }=\left(\theta ,\varepsilon ,\xi \right)} ,\, \varepsilon \in \left(0,1\right)\bigcap \left(0,\tilde{\varepsilon }\right]\right), \end{equation} where $\vartheta =\left(\theta ,\, \lambda ,\, \xi \right)$ and $x^{\left(+\right)} \left(\cdot ;\, \vartheta ,\, \varepsilon \right)$ $\left(x^{\left(-\right)} \left(\cdot ;\, \vartheta ,\, \varepsilon \right)\right)$ is determined by \eqref{GrindEQ__2_4_} allowing for \eqref{GrindEQ__2_1_} (by \eqref{GrindEQ__2_5_} allowing for \eqref{GrindEQ__2_6_}). In this case the following proposition is valid. \textbf{Proposition 2.3.
}Let the functions $L\left(\cdot \right)$ and $L_{\dot{x}} \left(\cdot \right)$ be twice differentiable with respect to the totality of variables, and let the function $\bar{x}\, \, \left(\cdot \right)$ be an extremal in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_}. If at the point $\theta \in \left[t_{0} ,\, t_{1} \right)$ $\left(\theta \in \left(t_{0} ,\, t_{1} \right]\right)$ the extremal $\bar{x}\, \, \left(\cdot \right)$ is twice differentiable on the right (left) in some semi-neighborhood $\left[\theta ,\, \theta +\alpha \right)\subset I$ $\left(\, \left(\theta -\alpha ,\, \theta \right]\subset I\, \right)$ of the point $\theta $, then for every $\xi \in R^{n} $ there exists a number $\varepsilon ^{*} >0$ such that for all $\varepsilon \in \left(0,\, \varepsilon ^{*} \right]\bigcap \left(0,1\right)$ the increment $J\left(x^{\left(+\right)} \, \left(\cdot ;\tilde{\vartheta },\varepsilon \right)\right)-J\left(\bar{x}\left(\cdot \right)\right)=:\Delta _{\varepsilon }^{\left(+\right)} J\left(\bar{x}\left(\cdot \right);\tilde{\vartheta }\right)$ $\left(J\left(x^{\left(-\right)} \, \left(\cdot ;\tilde{\vartheta },\varepsilon \right)\right)-J\left(\bar{x}\left(\cdot \right)\right)=:\Delta _{\varepsilon }^{\left(-\right)} J\left(\bar{x}\left(\cdot \right);\tilde{\vartheta }\right)\right)$ corresponding to the variation \eqref{GrindEQ__2_34_} (\eqref{GrindEQ__2_35_}), is represented in the form \[\Delta _{\varepsilon }^{\left(+\right)} J\, \left(\bar{x}\, \left(\cdot \right);\, \tilde{\vartheta }\right)=\varepsilon ^{2} E\left(\bar{L}\right)\, \left(\theta +,\xi \right)+\frac{\varepsilon ^{3} }{2\left(1-\varepsilon \right)} \xi ^{T} \bar{L}_{\dot{x}\dot{x}} \left(\theta +\right)\xi +\] \begin{equation} \label{GrindEQ__2_36_} +\frac{1}{2} \varepsilon ^{4} \left[K\left(\bar{L}\right)\left(\theta +,\varepsilon ,\, \xi \right)-\frac{1}{3\left(\varepsilon -1\right)^{2} } \left(\xi ^{T} \bar{L}_{\dot{x}\dot{x}} \left(\theta +\right)\xi \right)_{\dot{x}}^{T} \xi \right]+o\left(\varepsilon ^{4} \right) \end{equation} \[\left(\Delta _{\varepsilon }^{\left(-\right)} J\, \left(\bar{x}\, \left(\cdot \right);\, \tilde{\vartheta }\right)=\varepsilon ^{2} E\left(\bar{L}\right)\, \left(\theta -,\xi \right)+\frac{\varepsilon ^{3} }{2\left(1-\varepsilon \right)} \xi ^{T} \bar{L}_{\dot{x}\dot{x}} \left(\theta -\right)\xi -\right. \] \begin{equation} \label{GrindEQ__2_37_} \left. -\frac{1}{2} \varepsilon ^{4} \left[K\left(\bar{L}\right)\left(\theta -,\varepsilon ,\, \xi \right)+\frac{1}{3\left(\varepsilon -1\right)^{2} } \left(\xi ^{T} \bar{L}_{\dot{x}\dot{x}} \left(\theta -\right)\xi \right)_{\dot{x}}^{T} \xi \right]+o\left(\varepsilon ^{4} \right)\right). \end{equation} Here $E(\bar{L})(\theta ,\xi)$ is determined by \eqref{GrindEQ__1_6_} and the function $K(\bar{L})(\theta ,\varepsilon ,\xi)$ has the form \[K\left(\bar{L}\right)\left(\theta ,\, \varepsilon ,\xi \right)=\xi ^{T} \left[\bar{L}_{x} \left(\theta ,\xi \right)-\bar{L}_{x} \left(\theta \right)-\bar{L}_{x\dot{x}} \left(\theta \right)\xi \right]+\frac{d}{dt} E\left(\bar{L}\right)\left(\theta ,\xi \right)+\] \begin{equation} \label{GrindEQ__2_38_} +\frac{1+\varepsilon }{2\left(1-\varepsilon \right)} \frac{d}{dt} \xi ^{T} \bar{L}_{\dot{x}\dot{x}} \left(\theta \right)\xi , \end{equation} where $\left(\xi ^{T} \bar{L}_{\dot{x}\dot{x}} \left(\theta \right)\xi \right)_{\dot{x}} :=\left.
\left(\xi ^{T} L_{\dot{x}\dot{x}} \left(t,\, x,\, \dot{x}\right)\xi \right)_{\dot{x}} \right|_{x=\bar{x}\left(t\right),\, \dot{x}=\dot{\bar{x}}\left(t\right)} $ is the derivative of the function $\xi ^{T} L_{\dot{x}\dot{x}} \left(t,\, x,\, \dot{x}\right)\xi $ with respect to the variable $\dot{x}$, calculated along the extremal $\bar{x}(\cdot)$. \textbf{Proof. }At first, we prove the validity of the expansion formula \eqref{GrindEQ__2_36_}, i.e. we find the expansion with respect to $\varepsilon $ of the increment $J\left(x^{\left(+\right)} \, \left(\cdot ;\tilde{\vartheta },\varepsilon \right)\right)-J\left(\bar{x}\left(\cdot \right)\right)=:\Delta _{\varepsilon }^{\left(+\right)} J\left(\bar{x}\left(\cdot \right);\tilde{\vartheta }\right)$, where $\bar{x}(\cdot)$ is an extremal in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_} and the admissible function $x^{\left(+\right)} (\cdot ;\tilde{\vartheta },\varepsilon)$ is determined by \eqref{GrindEQ__2_34_}. Substituting $\lambda =\varepsilon \in \left(0,\, \bar{\varepsilon }\right]\bigcap \left(0,1\right)$, i.e. $\vartheta =\tilde{\vartheta }$, in \eqref{GrindEQ__2_17_}-\eqref{GrindEQ__2_19_}, we calculate the increment $\Delta _{\varepsilon }^{\left(+\right)} J\left(\bar{x}\left(\cdot \right);\tilde{\vartheta }\right)$ with accuracy $o(\varepsilon ^{4} )$. Then, allowing for \eqref{GrindEQ__1_6_} and \eqref{GrindEQ__2_34_}, we have \begin{equation} \label{GrindEQ__2_39_} \Delta _{\varepsilon }^{\left(+\right)} J\left(\bar{x}\left(\cdot \right);\tilde{\vartheta }\right)=J_{1}^{\left(+\right)} \left(\varepsilon ,\, \tilde{\vartheta }\right)+J_{2}^{\left(+\right)} \left(\varepsilon ,\, \tilde{\vartheta }\right),\, \, \varepsilon \in \left(0,\, \bar{\varepsilon }\right]\bigcap \left(0,1\right), \end{equation} where \begin{equation} \label{GrindEQ__2_40_} J_{1}^{\left(+\right)} \left(\varepsilon ,\, \tilde{\vartheta }\right)=\int _{\theta }^{\theta +\varepsilon ^{2} }E\left(\bar{L}\right)\, \left(t,\, \xi \right)dt+ \int _{\theta +\varepsilon ^{2} }^{\theta +\varepsilon }E\left(\bar{L}\right)\left(t,\frac{\varepsilon }{\varepsilon -1} \, \xi \right)dt , \end{equation} \[J_{2}^{\left(+\right)} \left(\varepsilon ,\, \tilde{\vartheta }\right)=\int _{\theta }^{\theta +\varepsilon ^{2} }\, \, \, \left[L\left(t,\bar{x}\left(t\right)+\left(t-\theta \right)\xi ,\, \dot{\bar{x}}\left(t\right)+\xi \right)-L\left(t,\bar{x}\left(t\right),\, \dot{\bar{x}}\left(t\right)+\, \xi \right)-\left(t-\theta \right)\, \bar{L}_{x}^{T} \left(t\right)\, \xi \right] \, dt+\] \[+\int _{\theta +\varepsilon ^{2} }^{\theta +\varepsilon }\left[L\left(t,\, \bar{x}\left(t\right)+\frac{\varepsilon }{\varepsilon -1} \left(t-\theta -\varepsilon \right)\xi ,\, \dot{\bar{x}}\left(t\right)+\frac{\varepsilon }{\varepsilon -1} \xi \right)\right. \left. -L\left(t,\bar{x}\left(t\right),\dot{\bar{x}}\left(t\right)+\frac{\varepsilon }{\varepsilon -1} \xi \right)\right] dt-\] \begin{equation} \label{GrindEQ__2_41_} -\frac{\varepsilon }{\varepsilon -1} \int _{\theta +\varepsilon ^{2} }^{\theta +\varepsilon }\left(t-\theta -\varepsilon \right)\bar{L}_{x}^{T} \left(t\right)\xi dt . \end{equation} Considering the assumption on the smoothness of the functions $L\left(\cdot \right)$, $L_{\dot{x}} \left(\cdot \right)$ and $\bar{x}\left(\cdot \right)$, we apply the Taylor formula.
Then from \eqref{GrindEQ__2_40_} and \eqref{GrindEQ__2_41_}, allowing for notations \eqref{GrindEQ__1_6_} and \eqref{GrindEQ__2_9_}, we get \[J_{1}^{\left(+\right)} \left(\varepsilon ,\, \tilde{\vartheta }\right)=\int _{\theta }^{\theta +\varepsilon ^{2} }\, \left[E\left(\bar{L}\right)\, \left(\theta +,\, \xi \right)+\left(t-\theta \right)\frac{d}{dt} \, E\left(\bar{L}\right)\, \left(\theta +,\, \xi \right)+o\, \left(t-\theta \right)\right]dt+ \] \[+\int _{\theta +\varepsilon ^{2} }^{\theta +\varepsilon }\left[\frac{\varepsilon ^{2} }{2\left(\varepsilon -1\right)^{2} } \xi ^{T} \bar{L}_{\dot{x}\dot{x}} \left(t\right)\, \xi +\frac{1}{6} \frac{\varepsilon ^{3} }{\left(\varepsilon -1\right)^{3} } \left(\xi ^{T} L_{\dot{x}\dot{x}} \left(t\right)\xi \right)\, _{\dot{x}}^{T} \, \xi +o\left(\varepsilon ^{3} ;t\right)\right]dt= \] \[=\varepsilon ^{2} E\left(\bar{L}\right)\, \left(\theta +,\, \xi \right)+\frac{\varepsilon ^{3} }{2\left(1-\varepsilon \right)} \xi ^{T} \bar{L}_{\dot{x}\dot{x}} \left(\theta +\right)\, \xi +\frac{\varepsilon ^{4} }{2} \left[\frac{d}{dt} E\left(\bar{L}\right)\, \left(\theta +,\, \xi \right)+\right. \] \begin{equation} \label{GrindEQ__2_42_} +\frac{1+\varepsilon }{2\left(1-\varepsilon \right)} \frac{d}{dt} \xi ^{T} \bar{L}_{\dot{x}\dot{x}} \left(\theta +\right)\, \xi -\left. \frac{1}{3\left(1-\varepsilon \right)^{2} } \left(\xi ^{T} \bar{L}_{\dot{x}\dot{x}} \left(\theta +\right)\, \xi \right)\, _{\dot{x}}^{T} \xi \right]+o\left(\varepsilon ^{4} \right), \end{equation} \[J_{2}^{\left(+\right)} \left(\varepsilon ,\, \tilde{\vartheta }\right)\, =\int _{\theta }^{\theta +\varepsilon ^{2} }\, \, \, \left[\left(t-\theta \right)\, \left(\bar{L}_{x}^{T} \left(t,\xi \right)-\bar{L}_{x}^{T} \left(t\right)\right)\xi +o\, \left(t-\theta \right)\, \right] \, \, dt+\] \[+\int _{\theta +\varepsilon ^{2} }^{\theta +\varepsilon }\, \frac{\varepsilon }{\varepsilon -1} \left(t-\theta -\varepsilon \right)\, \, \left[\left. \left(\bar{L}_{x}^{T} \left(t,\, \frac{\varepsilon }{\varepsilon -1} \xi \right)-\bar{L}_{x}^{T} \left(t\right)\right)\, \xi \right]\right. dt+\] \[+\int _{\theta +\varepsilon ^{2} }^{\theta +\varepsilon }\left[\frac{1}{2} \left(t-\theta -\varepsilon \right)^{2} \frac{\varepsilon ^{2} }{\left(\varepsilon -1\right)^{2} } \xi ^{T} \bar{L}_{xx} \left(t,\, \frac{\varepsilon }{\varepsilon -1} \xi \right)\xi +o\left(\left(\, \frac{\varepsilon \, \left(t-\theta -\varepsilon \right)}{\varepsilon -1} \right)^{2} \right)\right] \, dt=\] \begin{equation} \label{GrindEQ__2_43_} =\frac{\varepsilon ^{4} }{2} \left[\xi ^{T} \left(\bar{L}_{x} \left(\theta +,\xi \right)-\bar{L}_{x} \left(\theta +\right)\right)-\xi ^{T} \bar{L}_{x\dot{x}} \left(\theta +\right)\, \xi \right]+o\left(\varepsilon ^{4} \right). \end{equation} Substituting \eqref{GrindEQ__2_42_} and \eqref{GrindEQ__2_43_} in \eqref{GrindEQ__2_39_} and taking into account \eqref{GrindEQ__2_38_}, and also choosing $\varepsilon ^{*} =\bar{\varepsilon }$, we get the validity of the expansion formula \eqref{GrindEQ__2_36_}. \noindent The proof of the validity of the increment formula \eqref{GrindEQ__2_37_} is carried out similar to \eqref{GrindEQ__2_36_} using \eqref{GrindEQ__2_5_}, \eqref{GrindEQ__2_6_} and \eqref{GrindEQ__2_25_}-\eqref{GrindEQ__2_27_}. Namely, substituting $\lambda =\varepsilon \in \left(0,\tilde{\varepsilon }\right]\bigcap \left(0,1\right)$, i.e. 
$\vartheta =\tilde{\vartheta }$, in \eqref{GrindEQ__2_25_}-\eqref{GrindEQ__2_27_}, we calculate the increment $J\left(x^{\left(-\right)} \, \left(\cdot ;\tilde{\vartheta },\varepsilon \right)\right)-J\left(\bar{x}\left(\cdot \right)\right)=:\Delta _{\varepsilon }^{\left(-\right)} J\left(\bar{x}\left(\cdot \right);\tilde{\vartheta }\right)$ with accuracy $o\left(\varepsilon ^{4} \right)$, where the number $\tilde{\varepsilon }>0$ is determined above (see \eqref{GrindEQ__2_5_}), while the admissible function $x^{\left(-\right)} (\cdot ;\tilde{\vartheta },\varepsilon)$ is determined by \eqref{GrindEQ__2_35_}. Then we have \begin{equation} \label{GrindEQ__2_44_} \Delta _{\varepsilon }^{\left(-\right)} J\left(\, \bar{x}\, \left(\cdot \right);\, \tilde{\vartheta }\right)=J_{1}^{\left(-\right)} \left(\varepsilon ,\, \tilde{\vartheta }\right)+J_{2}^{\left(-\right)}(\varepsilon ,\, \tilde{\vartheta }), \,\, \varepsilon \in \left(0,\tilde{\varepsilon }\right]\bigcap \left(0,1\right), \end{equation} where \[J_{1}^{\left(-\right)} \left(\varepsilon ,\, \tilde{\vartheta }\right)=\int _{\theta -\varepsilon ^{2} }^{\theta }\, E\left(\bar{L}\right)\, \left(t,\, \xi \right)dt+ \int _{\theta -\varepsilon }^{\theta -\varepsilon ^{2} }E\left(\bar{L}\right)\, \left(t,\, \frac{\varepsilon }{\varepsilon -1} \xi \right)dt , \] \[J_{2}^{\left(-\right)} \left(\varepsilon ,\, \tilde{\vartheta }\right)=\int _{\theta -\varepsilon ^{2} }^{\theta }\left[L\, \left(t,\, \bar{x}\left(t\right)+\left(t-\theta \right)\, \xi ,\, \dot{\bar{x}}\left(t\right)+\xi \, \right)-L\, \left(t,\, \bar{x}\left(t\right),\, \dot{\bar{x}}\left(t\right)+\xi \right)-\left(t-\theta \right)\, \bar{L}_{x}^{T} \left(t\right)\, \xi \right]\, dt+\] \[+\int _{\theta -\varepsilon }^{\, \theta -\varepsilon ^{2} }\left[L\left(t,\, \bar{x}\left(t\right)+\frac{\varepsilon }{\varepsilon -1} \left(t-\theta +\varepsilon \right)\, \xi ,\, \dot{\bar{x}}\left(t\right)+\frac{\varepsilon }{\varepsilon -1} \xi \right)-L\left(t,\, \bar{x}\left(t\right),\, \dot{\bar{x}}\left(t\right)+\frac{\varepsilon }{\varepsilon -1} \xi \right)\right]dt-\] \[-\frac{\varepsilon }{\varepsilon -1} \int _{\theta -\varepsilon }^{\theta -\varepsilon ^{2} }\left(t-\theta +\varepsilon \right)\, \bar{L}_{x}^{T} \left(t\right)\xi \, dt.
\] Applying the Taylor formula and allowing for the notations \eqref{GrindEQ__1_6_}, \eqref{GrindEQ__2_9_} and \eqref{GrindEQ__2_10_}, for the integrals $J_{1}^{\left(-\right)} \left(\varepsilon ,\, \tilde{\vartheta }\right)$ and $J_{2}^{\left(-\right)} \left(\varepsilon ,\, \tilde{\vartheta }\right)$ we have the following expansions: \[J_{1}^{\left(-\right)} \left(\varepsilon ,\, \tilde{\vartheta }\right)=\int _{\theta -\varepsilon ^{2} }^{\theta }\left[E\, \left(\bar{L}\right)\, \left(\theta -,\, \xi \right)+\left(t-\theta \right)\, \frac{d}{dt} E\, \left(\bar{L}\right)\, \left(\theta -,\, \xi \right)+o\, \left(t-\theta \right)\right] \, \, dt+\] \[+\int _{\theta -\varepsilon }^{\theta -\varepsilon ^{2} }\left[\frac{\varepsilon ^{2} }{2\left(\varepsilon -1\right)^{2} } \xi ^{T} \bar{L}_{\dot{x}\dot{x}} \left(t\right)\xi +\frac{1}{6} \frac{\varepsilon ^{3} }{\left(\varepsilon -1\right)^{3} } \, \left(\xi ^{T} \bar{L}_{\dot{x}\dot{x}} \left(t\right)\, \xi \right)_{\dot{x}}^{T} \xi +o\, \left(\varepsilon ^{3} ;\, t\right)\right] \, dt=\] \[=\varepsilon ^{2} E\, \left(\bar{L}\right)\, \left(\theta -,\, \xi \right)+\frac{\varepsilon ^{3} }{2\left(1-\varepsilon \right)} \xi ^{T} \bar{L}_{\dot{x}\dot{x}} \, \left(\theta -\right)\, \xi -\frac{1}{2} \, \varepsilon ^{4} \left[\frac{d}{dt} E\, \left(\bar{L}\right)\, \left(\theta -,\, \xi \right)+\right. \] \begin{equation} \label{GrindEQ__2_45_} +\frac{1+\varepsilon }{2\left(1-\varepsilon \right)} \frac{d}{dt} \left. \xi ^{T} \bar{L}_{\dot{x}\dot{x}} \left(\theta -\right)\xi +\frac{1}{3\left(1-\varepsilon \right)^{2} } \, \left(\xi ^{T} \bar{L}_{\dot{x}\dot{x}} \left(\theta -\right)\, \xi \right)_{\dot{x}}^{\, T} \xi \right]+o\left(\varepsilon ^{4} \right), \end{equation} \[J_{2}^{\left(-\right)} \left(\varepsilon ,\, \tilde{\vartheta }\right)=\int _{\theta -\varepsilon ^{2} }^{\theta }\left[\, \left(t-\theta \right)\, \Delta \bar{L}_{x}^{T} \left(t,\, \xi \right)\, \xi +o\, \left(t-\theta \right)\, \right] \, dt+\] \[+\int _{\theta -\varepsilon }^{\theta -\varepsilon ^{2} }\left[\frac{\varepsilon \left(t-\theta +\varepsilon \right)}{\varepsilon -1} \Delta \bar{L}_{x}^{T} \left(t,\frac{\varepsilon }{\varepsilon -1} \, \xi \right)\xi +\frac{\varepsilon ^{2} \left(t-\theta +\varepsilon \right)^{2} }{2\left(\varepsilon -1\right)^{2} } \xi ^{T} \bar{L}_{xx} \left(t,\, \frac{\varepsilon }{\varepsilon -1} \xi \right)\xi \right]dt+ \] \[+\int _{\theta -\varepsilon }^{\theta -\varepsilon ^{2} }o\left(\left(\frac{\varepsilon \left(t-\theta +\varepsilon \right)}{\varepsilon -1} \right)^{2} \right)dt =-\frac{\varepsilon ^{4} }{2} \left[\xi ^{T} \left(\bar{L}_{x} \left(\theta -,\, \xi \right)-\bar{L}_{x} \left(\theta -\right)\right)-\right. \] \begin{equation} \label{GrindEQ__2_46_} \left. -\xi ^{T} \bar{L}_{x\dot{x}} \left(\theta -\right)\, \xi \, \, \, \right]+o\left(\varepsilon ^{4} \right). \end{equation} Consequently, substituting \eqref{GrindEQ__2_45_} and \eqref{GrindEQ__2_46_} into \eqref{GrindEQ__2_44_}, taking into account \eqref{GrindEQ__2_38_} and choosing $\varepsilon ^{*} =\tilde{\varepsilon }$, we get the validity of the increment formula \eqref{GrindEQ__2_37_}. Thereby, Proposition 2.3 is proved. \textbf{Remark 2.1. }Based on Proposition 2.3, we note that various ways of choosing the parameter $\lambda $ as a function of $\varepsilon $ allow one to obtain new increment formulas of the functional in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_}.
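As a simple consistency check of Proposition 2.3 (an illustrative computation of ours, not one of the examples of the last section), take the scalar integrand $L=\dot{x}^{4} $ with boundary data $x_{0} =x_{1} =0$, for which $\bar{x}\left(t\right)\equiv 0$ is an extremal and the Legendre condition degenerates everywhere: $\bar{L}_{\dot{x}\dot{x}} \left(t\right)\equiv 0$. For the variation \eqref{GrindEQ__2_34_} a direct computation gives
\[
\Delta _{\varepsilon }^{\left(+\right)} J\left(\bar{x}\left(\cdot \right);\tilde{\vartheta }\right)=\int _{\theta }^{\theta +\varepsilon ^{2} }\xi ^{4} \, dt+\int _{\theta +\varepsilon ^{2} }^{\theta +\varepsilon }\left(\frac{\varepsilon }{\varepsilon -1} \right)^{4} \xi ^{4} \, dt=\varepsilon ^{2} \xi ^{4} +\frac{\varepsilon ^{5} }{\left(1-\varepsilon \right)^{3} } \xi ^{4} ,
\]
while formula \eqref{GrindEQ__2_36_} with $E\left(\bar{L}\right)\left(\theta +,\xi \right)=\xi ^{4} $, $\bar{L}_{\dot{x}\dot{x}} \left(\theta +\right)=0$ and $K\left(\bar{L}\right)\left(\theta +,\varepsilon ,\xi \right)=0$ predicts $\Delta _{\varepsilon }^{\left(+\right)} J=\varepsilon ^{2} \xi ^{4} +o\left(\varepsilon ^{4} \right)$, in agreement with the exact value, since $\varepsilon ^{5} \left(1-\varepsilon \right)^{-3} =o\left(\varepsilon ^{4} \right)$.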
\section{Necessary conditions for a minimum in the presence of various degenerations at a point} In this section, using the results of the previous section, we obtain various necessary conditions for a strong and a weak local minimum under degeneration of the Weierstrass condition, and also under simultaneous degeneration of the Weierstrass and Legendre conditions. \textbf{Theorem 3.1. }Let the functions $L\left(\cdot \right)$ and $L_{\dot{x}} \left(\cdot \right)$ be continuously differentiable in totality of variables, and the admissible function $\bar{x}\left(\cdot \right)$ be a strong local minimum in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_}. Then: (i) if at the point $\theta \in \left[t_{0} ,\, t_{1} \right)$ $\left(\theta \in \left(t_{0} ,\, t_{1} \right]\, \right)$ the function $\bar{x}\left(\cdot \right)$ is twice differentiable on the right (left) in the semi-neighborhood $\left[\theta ,\, \theta +\alpha \right)\subset I$ $\left(\left(\theta -\alpha ,\, \theta \right]\subset I\right)$ of the point $\theta $, and also along it for the vectors $\eta \ne 0$ and $\left(\bar{\lambda }-1\right)^{-1} \bar{\lambda }\eta $, where $\bar{\lambda }\in \left(0,\, 1\right)$, the Weierstrass condition degenerates at the point $\theta $ on the right (left), i.e. the following equality holds:
\begin{equation} \label{GrindEQ__3_1_} E(\bar{L})(\theta +,\eta)=E\left(\bar{L}\right)(\theta +,\left(\bar{\lambda }-1\right)^{-1} \bar{\lambda }\eta)=0 \end{equation}
\begin{equation} \label{GrindEQ__3_2_} (E(\bar{L})(\theta -,\eta)=E(\bar{L})(\theta -,(\bar{\lambda}-1)^{-1} \bar{\lambda }\eta)=0), \end{equation}
then the following inequality is fulfilled:
\begin{equation} \label{GrindEQ__3_3_} \bar{\lambda }M_{1} \left(\bar{L}_{x} \right)(\theta +,\bar{\lambda },\eta)+\frac{d}{dt} Q_{2} \left(\bar{L}\right)(\theta +,\bar{\lambda },\eta)\ge 0 \end{equation}
\begin{equation} \label{GrindEQ__3_4_} \left(\bar{\lambda }M_{1} \left(\bar{L}_{x} \right)(\theta -,\bar{\lambda },\eta)+\frac{d}{dt} Q_{2} \left(\bar{L}\right)(\theta -,\bar{\lambda },\eta)\le 0\right), \end{equation}
where $E(\bar{L})(\cdot)$, $Q_{2} (\bar{L})(\cdot)$ and $M_{1} (\bar{L}_{x} )(\cdot)$ are determined by \eqref{GrindEQ__1_6_}, \eqref{GrindEQ__2_11_}, \eqref{GrindEQ__2_12_} allowing for \eqref{GrindEQ__2_9_} and \eqref{GrindEQ__2_10_}; (ii) if at the point $\theta \in \left(t_{0} ,\, t_{1} \right)$ the function $\bar{x}\left(\cdot \right)$ is twice differentiable and, furthermore, along it for the vectors $\eta \ne 0$ and $\left(\bar{\lambda }-1\right)^{-1} \bar{\lambda }\eta $, where $\bar{\lambda }\in \left(0,1\right)$, the Weierstrass condition degenerates at the point $\theta $, i.e. we have the following equalities
\begin{equation} \label{GrindEQ__3_5_} E\left(\bar{L}\right)(\theta ,\, \eta)=E\left(\bar{L}\right)\left(\theta ,\, \left(\bar{\lambda }-1\right)^{-1} \bar{\lambda }\eta \right)=0, \end{equation}
then the following equality is fulfilled:
\begin{equation} \label{GrindEQ__3_6_} \bar{\lambda }\Delta \bar{L}_{x}^{T} \left(\theta ,\, \eta \right)\eta +\left(1-\bar{\lambda }\right)\Delta \bar{L}_{x}^{T} \left(\theta ,\, \left(\bar{\lambda }-1\right)^{-1} \bar{\lambda }\eta \right)\, \eta =0, \end{equation}
where $\Delta \bar{L}_{x} \left(\cdot \right)$ is determined from \eqref{GrindEQ__2_10_} allowing for \eqref{GrindEQ__2_9_}.
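To illustrate the hypotheses of Theorem 3.1, we give a simple model example; it is not used in the subsequent proofs, and the boundary values in \eqref{GrindEQ__1_2_} are assumed to be zero. Let $n=1$, $I=\left[0,\, 1\right]$, $L\left(t,\, x,\, \dot{x}\right)=\dot{x}^{2} \left(\dot{x}^{2} -1\right)^{2} $ and $\bar{x}\left(t\right)\equiv 0$. Since $L\ge 0$ and $J\left(\bar{x}\left(\cdot \right)\right)=0$, the function $\bar{x}\left(\cdot \right)$ is a global, hence strong local, minimum, and it is an extremal, because $L_{x} \equiv 0$ and $\bar{L}_{\dot{x}} \left(t\right)\equiv 0$. Along $\bar{x}\left(\cdot \right)$, by \eqref{GrindEQ__1_6_} we have
\[E\left(\bar{L}\right)\left(t,\, \xi \right)=\xi ^{2} \left(\xi ^{2} -1\right)^{2} \ge 0,\quad \forall \xi \in R,\]
and this expression vanishes at $\xi =\pm 1$. Hence, taking $\eta =1$ and $\bar{\lambda }=1/2$, so that $\left(\bar{\lambda }-1\right)^{-1} \bar{\lambda }\eta =-1$, we see that conditions \eqref{GrindEQ__3_1_}, \eqref{GrindEQ__3_2_} and \eqref{GrindEQ__3_5_} hold at every point $\theta $, i.e. the Weierstrass condition degenerates, although $\bar{L}_{\dot{x}\dot{x}} \left(t\right)\equiv 2>0$, i.e. the Legendre condition does not degenerate. Since $L$ does not depend on $x$, every quantity constructed from $L_{x} $ (in particular $\Delta \bar{L}_{x} \left(\cdot \right)$ from \eqref{GrindEQ__2_10_}) vanishes, so the necessary condition \eqref{GrindEQ__3_6_} is satisfied trivially, in agreement with the minimality of $\bar{x}\left(\cdot \right)$.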
\textbf{Proof. }At first we prove part (i) of Theorem 3.1, i.e. the validity of inequality \eqref{GrindEQ__3_3_} (\eqref{GrindEQ__3_4_}). We use Proposition 2.2 (this is possible by the conditions of Theorem 3.1). Assume $\xi =\eta $ and $\lambda =\bar{\lambda }$, i.e. $\vartheta =\bar{\vartheta }=\left(\theta ,\, \bar{\lambda },\, \eta \right)$ in statement \eqref{GrindEQ__2_32_} (\eqref{GrindEQ__2_33_}) of Proposition 2.2. Then, by virtue of assumption \eqref{GrindEQ__3_1_} (\eqref{GrindEQ__3_2_}) and notation \eqref{GrindEQ__2_11_}, allowing for \eqref{GrindEQ__2_15_}, the increment formula \eqref{GrindEQ__2_32_} (\eqref{GrindEQ__2_33_}) takes the form
\begin{equation} \label{GrindEQ__3_7_} \Delta _{\varepsilon }^{\left(+\right)} J\left(\bar{x}\left(\cdot \right),\, \bar{\vartheta }\right)=\frac{1}{2} \varepsilon ^{2} \left[\bar{\lambda }M_{1} \left(\bar{L}_{x} \right)\, \left(\theta +,\, \bar{\lambda },\eta \right)+\frac{d}{dt} Q_{2} \left(\bar{L}\right)\, \left(\theta +,\, \bar{\lambda },\eta \right)\right] +o\left(\varepsilon ^{2} \right),\,\, \varepsilon \in \left(0,\, \varepsilon ^{*} \right] \end{equation}
\begin{equation} \label{GrindEQ__3_8_} \left(\Delta _{\varepsilon }^{\left(-\right)} J\left(\bar{x}\left(\cdot \right),\, \bar{\vartheta }\right)=-\frac{1}{2} \varepsilon ^{2} \left[\bar{\lambda }M_{1} \left(\bar{L}_{x} \right)\, \left(\theta -,\, \bar{\lambda },\eta \right)+\frac{d}{dt} Q_{2} \left(\bar{L}\right)\, \left(\theta -,\, \bar{\lambda },\eta \right)\right]+o\left(\varepsilon ^{2} \right),\,\, \varepsilon \in \left(0,\, \varepsilon ^{*} \right]\, \right). \end{equation}
Since the admissible function $\bar{x}\left(\cdot \right)$ is a strong local minimum in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_}, then
\[\varepsilon ^{-2} \Delta _{\varepsilon }^{\left(+\right)} \, J\left(\bar{x}\left(\cdot \right),\, \bar{\vartheta }\right)\ge 0 \,\,\, \left(\varepsilon ^{-2} \Delta _{\varepsilon }^{\left(-\right)} \, J\left(\bar{x}\left(\cdot \right),\, \bar{\vartheta }\right)\ge 0\right)\]
for all $\varepsilon \in \left(0,\, \varepsilon ^{*} \right]$. Therefore, allowing for \eqref{GrindEQ__3_7_} (\eqref{GrindEQ__3_8_}), in the last inequality we pass to the limit as $\varepsilon \to +0$. Then we get the validity of the sought-for inequality \eqref{GrindEQ__3_3_} (\eqref{GrindEQ__3_4_}), i.e. part (i) of Theorem 3.1 is proved. We now prove part (ii) of Theorem 3.1, i.e. the validity of the equality \eqref{GrindEQ__3_6_}. Since $\theta \in \left(t_{0} ,\, t_{1} \right)$, and in addition the function $\bar{x}\left(\cdot \right)$ is a strong local minimum in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_} and is twice differentiable at the point $\theta $, then from inequalities \eqref{GrindEQ__3_3_} and \eqref{GrindEQ__3_4_}, taking into account \eqref{GrindEQ__2_11_} and \eqref{GrindEQ__2_12_}, we get the validity of the following equality
\[\bar{\lambda }\, \left[\, \bar{\lambda }\Delta \bar{L}_{x}^{T} \left(\theta ,\, \eta \right)\eta +\left(1-\bar{\lambda }\right)\Delta \bar{L}_{x}^{T} \left(\theta ,\, \left(\bar{\lambda }-1\right)^{-1} \bar{\lambda }\eta \right)\, \eta \, \right]\, +\]
\begin{equation} \label{GrindEQ__3_9_} +\frac{d}{dt} \left[\bar{\lambda }^{2} E\left(\bar{L}\right)\, \left(\theta ,\, \eta \right)+\left(1-\bar{\lambda }^{2} \right)E\left(\bar{L}\right)\, \left(\theta ,\, \left(\bar{\lambda }-1\right)^{-1} \bar{\lambda }\eta \right)\, \right]=0.
\end{equation} Since $\bar{x}\left(\cdot \right)$ is a strong local minimum in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_}, by virtue of the Weierstrass condition \eqref{GrindEQ__1_5_}, allowing for assumption \eqref{GrindEQ__3_5_}, the functions $E(\bar{L})(t,\, \eta),\, t\in I$ and $E\left(\bar{L}\right)\, \left(t,\, \left(\bar{\lambda }-1\right)^{-1} \bar{\lambda }\eta \right),t\in I$, with respect to the variable $t$ attain a minimum at the point $\theta \in \left(t_{0} ,\, t_{1} \right)$. Then, taking into account the smoothness of the functions $L\left(\cdot \right)$, $L_{\dot{x}} \left(\cdot \right)$ and $\bar{x}\left(\cdot \right)$, by the Fermat theorem [11, p.15] we have $\frac{d}{dt} E\left(\bar{L}\right)\, \left(\theta ,\, \eta \right)=\frac{d}{dt} E\left(\bar{L}\right)\, \left(\theta ,\, \left(\bar{\lambda }-1\right)^{-1} \bar{\lambda }\eta \right)=0$. Consequently, from \eqref{GrindEQ__3_9_}, allowing for the last equalities, we get the proof of the equality \eqref{GrindEQ__3_6_}, i.e. part (ii) of Theorem 3.1 is proved. Thus, Theorem 3.1 is completely proved. Now we consider the case when condition for a minimum \eqref{GrindEQ__3_3_} ((3.4)) also degenerates i.e. we have the equalities \begin{equation} \label{GrindEQ__3_10_} W(\bar{L})(\theta +,\bar{\lambda },\, \eta)=0 \end{equation} \begin{equation}\label{GrindEQ__3_11_} (W(\bar{L})(\theta-, \bar{\lambda}, \eta)=0), \end{equation} where $W(\bar{L})(\cdot ,\bar{\lambda },\, \eta)$ is determined from \eqref{GrindEQ__2_15_}. In this case the following theorem is valid. \textbf{Theorem 3.2. }Let the functions $L\left(\cdot \right)$ and $L_{\dot{x}} \left(\cdot \right)$ be twice continuously differentiable in totality of variables, and the admissible function $\bar{x}\left(\cdot \right)$ be a strong local minimum in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_}. Then: (i)\textbf{ }if at the point $\theta \in \left[t_{0} ,\, t_{1} \right)$ $\left(\theta \in \left(t_{0} ,\, t_{1} \right]\right)$ the function $\bar{x}\left(\cdot \right)$ is triply differentiable on the right (left) in semi-neighborhood $\left[\theta ,\, \theta +\alpha \right)\subset I$ $\left(\, \left(\theta -\alpha ,\, \theta \right]\subset I\right)$ of the point $\theta $, and also along it for the vectors $\eta \ne 0$ and $\left(\bar{\lambda }-1\right)^{-1} \bar{\lambda} \eta $, where $\bar{\lambda }\in \left(0,1\right)$, conditions \eqref{GrindEQ__3_1_} ((3.2)) and \eqref{GrindEQ__3_10_} ((3.11)) are fulfilled, i.e. the Weierstrass condition and the condition for a minimum \eqref{GrindEQ__3_3_} ((3.4)) degenerate at the point $\theta $ on the right (left), then the following inequality is fulfilled: \begin{equation} \label{GrindEQ__3_12_} G\left(\bar{L}\right)\, \left(\theta +,\, \bar{\lambda },\, \eta \right)\ge 0 \end{equation} \begin{equation} \label{GrindEQ__3_13_} \left(G\left(\bar{L}\right)\, \left(\theta -,\, \bar{\lambda },\, \eta \right)\ge 0\right); \end{equation} (ii) if at the point $\theta \in \left(t_{0} ,t_{1} \right)$ the admissible function $\bar{x}\left(\cdot \right)$ is triply differentiable, furthermore, along it for the vectors $\eta \ne 0$ and $\left(\bar{\lambda }-1\right)^{-1} \bar{\lambda }\eta $, where $\bar{\lambda }\in \left(0,1\right)$, the Weierstrass condition degenerates at the point $\theta $, i.e. 
the equality \eqref{GrindEQ__3_5_} holds, then the following inequality is fulfilled
\begin{equation} \label{GrindEQ__3_14_} G\left(\bar{L}\right)\left(\theta ,\, \bar{\lambda },\, \eta \right)\ge 0, \end{equation}
where $G\left(\bar{L}\right)\left(\cdot ,\bar{\lambda },\eta \right)$ is determined from \eqref{GrindEQ__2_16_}, allowing for \eqref{GrindEQ__2_9_}-\eqref{GrindEQ__2_12_}. \textbf{Proof. }We prove part (i) of Theorem 3.2, i.e. the validity of inequality \eqref{GrindEQ__3_12_} (\eqref{GrindEQ__3_13_}). We use Proposition 2.1 (this is possible by virtue of the conditions of Theorem 3.2). Assume $\xi =\eta $ and $\lambda =\bar{\lambda }$, i.e. $\vartheta =\bar{\vartheta }=\left(\theta ,\, \bar{\lambda },\eta \right)$ in statement \eqref{GrindEQ__2_13_} (\eqref{GrindEQ__2_14_}) of Proposition 2.1. Since the function $\bar{x}\left(\cdot \right)$ is a strong local minimum in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_}, then $\Delta _{\varepsilon }^{\left(+\right)} J\left(\bar{x}\left(\cdot \right);\bar{\vartheta }\right)\ge 0$ $\left(\Delta _{\varepsilon }^{\left(-\right)} J\left(\bar{x}\left(\cdot \right);\bar{\vartheta }\right)\ge 0\right)$, $\forall \varepsilon \in \left(0,\, \varepsilon ^{*} \right]$. Then, by virtue of \eqref{GrindEQ__3_1_} (\eqref{GrindEQ__3_2_}) and \eqref{GrindEQ__3_10_} (\eqref{GrindEQ__3_11_}), allowing for notation \eqref{GrindEQ__2_11_}, the first two summands in expansion formula \eqref{GrindEQ__2_13_} (\eqref{GrindEQ__2_14_}) vanish. Therefore, dividing the obtained inequality for $\Delta _{\varepsilon }^{\left(+\right)} J\left(\bar{x}\left(\cdot \right);\, \bar{\vartheta }\right)$ $\left(\Delta _{\varepsilon }^{\left(-\right)} J\left(\bar{x}\left(\cdot \right);\, \bar{\vartheta }\right)\right)$ by $\varepsilon ^{3}$ and passing to the limit as $\varepsilon \to +0$, we get the sought-for inequality \eqref{GrindEQ__3_12_} (\eqref{GrindEQ__3_13_}), i.e. part (i) of Theorem 3.2 is proved. We now prove part (ii) of Theorem 3.2, i.e. the validity of inequality \eqref{GrindEQ__3_14_}. Considering the assumptions of part (ii) of Theorem 3.2, it is easy to get the validity of equality \eqref{GrindEQ__3_9_}, which was obtained when proving part (ii) of Theorem 3.1. By virtue of notations \eqref{GrindEQ__2_11_}, \eqref{GrindEQ__2_12_} and \eqref{GrindEQ__2_15_}, equality \eqref{GrindEQ__3_9_} takes the new form $W\left(\bar{L}\right)(\theta ,\, \bar{\lambda },\, \eta )=0$. Consequently, we confirm that all assumptions of part (i) of Theorem 3.2 are fulfilled, and the function $\bar{x}\left(\cdot \right)$ is triply differentiable at the point $\theta $. Therefore, the proof of part (ii) of Theorem 3.2 follows from the statement of part (i) of Theorem 3.2. Theorem 3.2 is completely proved. We prove the following theorem in the presence of new degenerations. \textbf{Theorem 3.3. }Let the admissible function $\bar{x}\left(\cdot \right)$ be a strong local minimum in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_}. Then: (i) if the integrand $L\left(\cdot \right)$ is continuously differentiable in totality of variables, and it is triply continuously differentiable with respect to the variable $\dot{x}$, furthermore, along the function $\bar{x}\left(\cdot \right)$ for the vector $\eta \ne 0$ the Legendre condition degenerates at the point $\theta $ or at the point $\theta $ on the right (left), i.e.
the equalities \begin{equation} \label{GrindEQ__3_15_} \eta ^{T} \bar{L}_{\dot{x}\dot{x}} \left(\theta \right)\eta =0,\, \, \,\,\,\, \eta ^{T} \bar{L}_{\dot{x}\dot{x}} \left(\theta +\right)\, \eta =0\, \, \, \,\,\,\, \left(\eta ^{T} \bar{L}_{\dot{x}\dot{x}} \left(\theta -\right)\eta =0\right), \end{equation} hold, then the following equalities are fulfilled: \begin{equation} \label{GrindEQ__3_16_} \left(\eta ^{T} \bar{L}_{\dot{x}\dot{x}} \left(\theta \right)\eta \right)_{\, \dot{x}}^{T} \eta =0,\, \, \, \left(\eta ^{T} \bar{L}_{\dot{x}\dot{x}} \left(\theta +\right)\, \eta \right)_{\, \dot{x}}^{T} \eta \, =0\, \, \left(\left(\eta ^{T} \bar{L}_{\dot{x}\dot{x}} \left(\theta -\right)\eta \right)_{\, \dot{x}}^{T} \eta =0\right), \end{equation} where $\theta \in \left(t_{0} ,\, t_{1} \right)\backslash \left\{\tau \right\}$ or $\theta \in \left\{t_{0} \right\}\bigcup \left\{\tau \right\}\, \, \left(\theta \in \left\{\tau \right\}\bigcup \left\{t_{1} \right\}\right)$, moreover $\left\{\tau \right\}$ is the set of angular points of the function $\bar{x}\left(\cdot \right)$, the symbol $\left(\eta ^{T} \bar{L}_{\dot{x}\dot{x}} \left(\cdot \right)\eta \right)\, _{\dot{x}} $ is determined above (see (2.36)); (ii) if the functions $L\left(\cdot \right)$ and $L_{\dot{x}} \left(\cdot \right)$ are twice differentiable in totality of variables, furthermore, at the point $\theta \in \left[t_{0} ,\, t_{1} \right)$ $\left(\theta \in \left(t_{0} ,\, t_{1} \right]\right)$ the function $\bar{x}\left(\cdot \right)$ is twice differentiable in semi-neighborhood $\left[\theta ,\, \theta +\alpha \right)\subset I$ $\left(\left(\theta -\alpha ,\, \theta \right]\subset I\right)$ of the point $\theta $, and along it for the vector $\eta \ne 0$ the Weierstrass and Legendre conditions degenerate at the point $\theta $ on the right (left), i.e. the equalities \begin{equation} \label{GrindEQ__3_17_} E\left(\bar{L}\right)\left(\theta +,\, \eta \right)=\eta ^{T} \bar{L}_{\dot{x}\dot{x}} \left(\theta +\right)\, \eta =0 \end{equation} \begin{equation} \label{GrindEQ__3_18_} \left(E\left(\bar{L}\right)\left(\theta -,\, \eta \right)=\eta ^{T} \bar{L}_{\dot{x}\dot{x}} \left(\theta -\right)\, \eta =0\right), \end{equation} hold, then the following inequalities are fulfilled: \[\eta ^{T} \left(\bar{L}_{x} \left(\theta +,\, \eta \right)-\bar{L}_{x} \left(\theta +\right)-\bar{L}_{x\dot{x}} \left(\theta +\right)\eta \right)+\frac{d}{dt} E\left(\bar{L}\right)\left(\theta +,\, \eta \right)+\] \begin{equation} \label{GrindEQ__3_19_} +\frac{1}{2} \frac{d}{dt} \eta ^{T} \bar{L}_{\dot{x}\dot{x}} \left(\theta +\right)\eta \ge 0 \end{equation} \[\left(\eta ^{T} \left(\bar{L}_{x} \left(\theta -,\, \eta \right)-\bar{L}_{x} \left(\theta -\right)-\bar{L}_{x\dot{x}} \left(\theta -\right)\eta \right)+\frac{d}{dt} E\left(\bar{L}\right)\left(\theta -,\, \eta \right)+\right. \] \begin{equation} \label{GrindEQ__3_20_} \left. 
+\frac{1}{2} \frac{d}{dt} \eta ^{T} \bar{L}_{\dot{x}\dot{x}} \left(\theta -\right)\eta \le 0\right), \end{equation}
where $E\left(\bar{L}\right)\left(\cdot ,\, \eta \right)$ and $\bar{L}_{x} \left(\cdot ,\, \eta \right)$ are determined by \eqref{GrindEQ__1_6_} and \eqref{GrindEQ__2_9_}, respectively; (iii) if the functions $L\left(\cdot \right)$ and $L_{\dot{x}} \left(\cdot \right)$ are twice continuously differentiable in totality of variables, furthermore, at the point $\theta \in \left(t_{0} ,\, t_{1} \right)$ the function $\bar{x}\left(\cdot \right)$ is twice differentiable and along it for the vector $\eta \ne 0$ the Weierstrass and Legendre conditions degenerate at the point $\theta $, i.e. the equalities
\begin{equation} \label{GrindEQ__3_21_} E(\bar{L})(\theta ,\, \eta)=\eta ^{T} \bar{L}_{\dot{x}\dot{x}} \left(\theta \right)\, \eta =0 \end{equation}
hold, then the following equality is fulfilled:
\begin{equation} \label{GrindEQ__3_22_} \eta ^{T} \left[L_{x} \left(\theta ,\, \bar{x}\left(\theta \right),\, \dot{\bar{x}}\left(\theta \right)+\eta \right)-\bar{L}_{x} \left(\theta \right)-\bar{L}_{x\dot{x}} \left(\theta \right)\, \eta \right]=0. \end{equation}
\textbf{Proof. }At first we prove statement \eqref{GrindEQ__3_16_}. For that it suffices to show, for example, the validity of the equality $\left(\eta ^{T} \bar{L}_{\dot{x}\dot{x}} \left(\theta +\right)\eta \right)^T_{\dot{x}}\eta=0$ under the assumption $\eta ^{T} \bar{L}_{\dot{x}\dot{x}} \left(\theta +\right)\eta =0$, because the other equalities from \eqref{GrindEQ__3_16_} are proved quite similarly. Since $\bar{x}\left(\cdot \right)$ is a strong local minimum in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_}, then by virtue of \eqref{GrindEQ__1_5_}, \eqref{GrindEQ__1_6_} we have
\[E\left(\bar{L}\right)\left(\theta +,\, \xi \right)=L\left(\theta ,\, \bar{x}\left(\theta \right),\, \dot{\bar{x}}\left(\theta +\right)+\xi \right)-\bar{L}\left(\theta +\right)-\bar{L}_{\dot{x}} \left(\theta +\right)\xi \ge 0,\, \, \forall \xi \in R^{n} .\]
Hence, assuming $\xi =\varepsilon \eta $, where $\varepsilon \in \left(-q_{0} ,\, q_{0} \right),\, q_{0} >0$, and considering the smoothness of the integrand $L\left(\cdot \right)$ with respect to the variable $\dot{x}$, by the Taylor formula we get
\[E\left(\bar{L}\right)\left(\theta +,\, \varepsilon \eta \right)=\frac{1}{2} \varepsilon ^{2} \eta ^{T} \bar{L}_{\dot{x}\dot{x}} \left(\theta +\right)\eta +\frac{1}{6} \varepsilon ^{3} \left(\eta ^{T} \bar{L}_{\dot{x}\dot{x}} \left(\theta +\right)\eta \right)_{\dot{x}}^{T} \eta +o\left(\varepsilon ^{3} \right)\ge 0, \]
\[\forall \varepsilon \in \left(-q_{0} ,\, q_{0} \right) . \]
Since by the assumption $\eta ^{T} \bar{L}_{\dot{x}\dot{x}} \left(\theta +\right)\eta =0$, the validity of the equality $\left(\eta ^{T} \bar{L}_{\dot{x}\dot{x}} \left(\theta +\right)\eta \right)_{\dot{x}}^{T} \eta =0$ easily follows from the last inequality. \noindent We now prove the validity of inequality \eqref{GrindEQ__3_19_} (\eqref{GrindEQ__3_20_}), i.e. part (ii) of Theorem 3.3. Let $\bar{x}\left(\cdot \right)$ be a strong local minimum in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_}.
Then by virtue of conditions assumed in part (ii) of Theorem 3.3, we confirm that Proposition 2.3 is valid, namely expansion \eqref{GrindEQ__2_36_} (\eqref{GrindEQ__2_37_}) for the increment ${\mathbb D}elta _{\varepsilon }^{\left(+\right)} J\left(\bar{x}\left(\cdot \right);\tilde{\vartheta }\right)$ $\left({\mathbb D}elta _{\varepsilon }^{\left(-\right)} J\left(\bar{x}\left(\cdot \right);\tilde{\vartheta }\right)\right)$ and the inequality ${\mathbb D}elta _{\varepsilon }^{\left(+\right)} J\left(\bar{x}\left(\cdot \right);\tilde{\vartheta }\right)\ge 0$ $\left({\mathbb D}elta _{\varepsilon }^{\left(-\right)} J\left(\bar{x}\left(\cdot \right);\tilde{\vartheta }\right)\ge 0\right)$, $\forall \varepsilon \in \left(0,\, \varepsilon ^{*} \right]\bigcap \left(0,\, 1\right)$ hold. Assume $\xi =\eta $ in the last inequality and take into account \eqref{GrindEQ__3_17_} ((3.18)), \eqref{GrindEQ__2_36_} (\eqref{GrindEQ__2_37_}), \eqref{GrindEQ__2_38_}, and also statement \eqref{GrindEQ__3_16_} of Theorem 3.3. Then we have: \[{\mathbb D}elta _{\varepsilon }^{\left(+\right)} J\left(\bar{x}\left(\cdot \right);\tilde{\vartheta }\right)=\frac{1}{2} \varepsilon ^{4} \left[\eta ^{T} \left(\bar{L}_{x} \left(\theta +,\eta \right)-\bar{L}_{x} \left(\theta +\right)-\bar{L}_{x\dot{x}} \left(\theta +\right)\eta \right)+\frac{d}{dt} \right. E\left(\bar{L}\right)\left(\theta +,\, \eta \right)+\] \[\left. +\frac{1+\varepsilon }{2\left(1-\varepsilon \right)} \frac{d}{dt} \eta ^{T} \bar{L}_{\dot{x}\dot{x}} \left(\theta +\right)\eta \right]+o\left(\varepsilon ^{4} \right)\ge 0\] \[\left({\mathbb D}elta _{\varepsilon }^{\left(-\right)} J\left(\bar{x}\left(\cdot \right)\tilde{\vartheta }\right)=-\frac{1}{2} \varepsilon ^{4} \left[\eta ^{T} \left(\bar{L}_{x} \left(\theta -,\eta \right)-\bar{L}_{x} \left(\theta -\right)-\bar{L}_{x\dot{x}} \left(\theta -\right)\eta \right)+\frac{d}{dt} \right. E\left(\bar{L}\right)\left(\theta -,\, \eta \right)+\right. \] \[\left. \left. +\frac{1+\varepsilon }{2\left(1-\varepsilon \right)} \frac{d}{dt} \eta ^{T} \bar{L}_{\dot{x}\dot{x}} \left(\theta -\right)\eta \right]+o\left(\varepsilon ^{4} \right)\ge 0\right), \forall \varepsilon \in \left(0,\, \varepsilon ^{*} \right]\bigcap \left(0,\, 1\right). \] Dividing the obtained last expression for ${\mathbb D}elta _{\varepsilon }^{\left(+\right)} J\left(\bar{x}\left(\cdot \right);\tilde{\vartheta }\right)$ $\left({\mathbb D}elta _{\varepsilon }^{\left(-\right)} J\left(\bar{x}\left(\cdot \right);\tilde{\vartheta }\right)\right)$ by $\varepsilon ^{4}$ and passing to the limit as $\varepsilon \to +0$, we get the sought-for inequality \eqref{GrindEQ__3_19_} (\eqref{GrindEQ__3_20_} ). Thus, part (ii) of Theorem 3.3 is proved. We now prove part (iii) of Theorem 3.3. Since the function $\bar{x}\left(\cdot \right)$ is a strong, local minimum in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_} and along it \eqref{GrindEQ__3_21_} is fulfilled, we arrive at the conclusion: firstly, assumptions \eqref{GrindEQ__3_17_} and \eqref{GrindEQ__3_18_} are fulfilled, and therefore inequalities \eqref{GrindEQ__3_19_} and \eqref{GrindEQ__3_20_} are valid; secondly, the left hand sides of these inequalities coincide and therefore are equal to zero, i.e. 
the equality
\begin{equation} \label{GrindEQ__3_23_} \eta ^{T} \left(\bar{L}_{x} \left(\theta ,\, \eta \right)-\bar{L}_{x} \left(\theta \right)-\bar{L}_{x\dot{x}} \left(\theta \right)\eta \right)+\frac{d}{dt} E\left(\bar{L}\right)\left(\theta ,\, \eta \right)+\frac{1}{2} \frac{d}{dt} \eta ^{T} \bar{L}_{\dot{x}\dot{x}} \left(\theta \right)\eta =0 \end{equation}
holds; thirdly, by virtue of the Weierstrass and Legendre conditions, the functions $E\left(\bar{L}\right)\left(t,\eta \right)$, $t\in I$, and $\eta ^{T} \bar{L}_{\dot{x}\dot{x}}\left(t\right)\eta $, $t\in I$, with respect to the variable $t$ attain a minimum at the point $\theta \in \left(t_{0} ,\, t_{1} \right)$, and therefore their derivatives with respect to $t$ at the point $\theta $ are equal to zero, i.e. $\frac{d}{dt} E\left(\bar{L}\right)\left(\theta ,\, \eta \right)=\frac{d}{dt} \eta ^{T} \bar{L}_{\dot{x}\dot{x}} \left(\theta \right)\eta =0$. Thus, taking the last equalities into account in \eqref{GrindEQ__3_23_}, we get the sought-for equality \eqref{GrindEQ__3_22_}, i.e. part (iii) of Theorem 3.3 is proved. By the same token, Theorem 3.3 is completely proved. Continuing the study, below we obtain necessary conditions for a weak local minimum which are local modifications of the statements of Theorems 3.1-3.3. Namely, we prove the following theorems. \textbf{Theorem 3.4. }Let the functions $L\left(\cdot \right)$ and $L_{\dot{x}} \left(\cdot \right)$ be continuously differentiable in totality of variables, furthermore, the admissible function $\bar{x}\left(\cdot \right)$ be a weak local minimum in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_}. Then there exists $\delta >0$ for which the following statements are valid: (j) if the assumptions of part (i) of Theorem 3.1 are fulfilled, then for every point $\left(\eta ,\, \left(\bar{\lambda }-1\right)^{-1} \bar{\lambda }\eta ,\, \bar{\lambda }\right)\in B_{\delta } \left(0\right)\times B_{\delta } \left(0\right)\times \left(0,\, 1\right)$ satisfying condition \eqref{GrindEQ__3_1_} (\eqref{GrindEQ__3_2_}), the inequality \eqref{GrindEQ__3_3_} (\eqref{GrindEQ__3_4_}) is valid; (jj) if the assumptions of part (ii) of Theorem 3.1 are fulfilled, then for every point $\left(\eta ,\, \left(\bar{\lambda }-1\right)^{-1} \bar{\lambda }\eta ,\, \bar{\lambda }\right)\in B_{\delta } \left(0\right)\times B_{\delta } \left(0\right)\times \left(0,\, 1\right)$ satisfying the condition \eqref{GrindEQ__3_5_}, the equality \eqref{GrindEQ__3_6_} is valid. \textbf{Proof. }
It is clear that by virtue of the assumptions of Theorem 3.4 we have formula \eqref{GrindEQ__3_7_} (\eqref{GrindEQ__3_8_}) obtained for the increment $\Delta _{\varepsilon }^{\left(+\right)} J\left(\bar{x}\left(\cdot \right),\, \bar{\vartheta }\right)=J\left(x^{\left(+\right)} \left(\cdot ;\, \bar{\vartheta },\, \varepsilon \right)\right)-J\left(\bar{x}\left(\cdot \right)\right)$ $\left(\Delta _{\varepsilon }^{\left(-\right)} J\left(\bar{x}\left(\cdot \right),\, \bar{\vartheta }\right)=J\left(x^{\left(-\right)} \left(\cdot ;\, \bar{\vartheta },\, \varepsilon \right)\right)-J\left(\bar{x}\left(\cdot \right)\right)\right)$, where $x^{\left(+\right)} \left(\cdot ;\, \bar{\vartheta },\, \varepsilon \right)$ $\left(x^{\left(-\right)} \left(\cdot ;\, \bar{\vartheta },\, \varepsilon \right)\right)$ is determined from \eqref{GrindEQ__2_4_} (\eqref{GrindEQ__2_5_}) allowing for \eqref{GrindEQ__2_1_} (\eqref{GrindEQ__2_6_}) and $\vartheta =\bar{\vartheta }=\left(\theta ,\, \bar{\lambda },\, \eta \right)\in \left[t_{0} ,\, t_{1} \right)\times \left(0,1\right)\times R^{n} \backslash \left\{0\right\}$ $\left(\vartheta =\bar{\vartheta }=\left(\theta ,\, \bar{\lambda },\, \eta \right)\in \left(t_{0} ,\, t_{1} \right]\times \left(0,1\right)\times R^{n} \backslash \left\{0\right\}\right)$. By means of \eqref{GrindEQ__2_1_}, \eqref{GrindEQ__2_2_} and \eqref{GrindEQ__2_4_} (\eqref{GrindEQ__2_5_}-\eqref{GrindEQ__2_7_}), for $x^{\left(+\right)} \left(\cdot ;\, \bar{\vartheta },\, \varepsilon \right)$ $\left(x^{\left(-\right)} \left(\cdot ;\, \bar{\vartheta },\, \varepsilon \right)\right)$ for all $\varepsilon \in \left(0,\, \bar{\varepsilon }\right]\bigcap \left(0,\, \tilde{\varepsilon }\right]\bigcap \left(0,\, 1\right]$ the following estimations are valid:
\[\left\| x^{\left(+\right)} \left(\cdot ;\, \bar{\vartheta },\, \varepsilon \right)-\bar{x}\left(\cdot \right)\right\| _{C\left(I,\, R^{n} \right)} =\left\| h^{\left(+\right)} \left(\cdot ,\, \bar{\vartheta },\, \varepsilon \right)\right\| _{C\left(I,\, R^{n} \right)} \le \left\| \eta \right\| _{R^{n} } ,\]
\begin{equation} \label{GrindEQ__3_24_} \left\| \dot{x}^{\left(+\right)} \left(\cdot ;\, \bar{\vartheta },\, \varepsilon \right)-\dot{\bar{x}}\left(\cdot \right)\right\| _{L_{\infty } \left(I,\, R^{n} \right)} =\left\| \dot{h}^{\left(+\right)} \left(\cdot ;\, \bar{\vartheta },\, \varepsilon \right)\right\| _{L_{\infty } \left(I,\, R^{n} \right)} \le \max \left\{\, \left\| \eta \right\| _{R^{n} } ,\, \left(\bar{\lambda }-1\right)^{-1} \bar{\lambda }\left\| \eta \right\| _{R^{n} } \, \right\} \end{equation}
\[\left(\left\| x^{\left(-\right)} \left(\cdot ;\, \bar{\vartheta },\, \varepsilon \right)-\bar{x}\left(\cdot \right)\right\| _{C\left(I,\, R^{n} \right)} =\left\| h^{\left(-\right)} \left(\cdot ;\bar{\vartheta },\, \varepsilon \right)\right\| _{C\left(I,\, R^{n} \right)} \le \left\| \eta \right\| _{R^{n} } \right. , \]
\begin{equation} \label{GrindEQ__3_25_} \left. \left\| \dot{x}^{\left(-\right)} \left(\cdot ;\, \bar{\vartheta },\, \varepsilon \right)-\dot{\bar{x}}\left(\cdot \right)\right\| _{L_{\infty } \left(I,\, R^{n} \right)} =\left\| \dot{h}^{\left(-\right)} \left(\cdot ;\, \bar{\vartheta },\, \varepsilon \right)\right\| _{L_{\infty } \left(I,\, R^{n} \right)} \le \max \left\{\, \left\| \eta \right\| _{R^{n} } ,\, \left(\bar{\lambda }-1\right)^{-1} \bar{\lambda }\left\| \eta \right\| _{R^{n} } \, \right\}\, \right).
\end{equation} \textbf{ }Let the admissible function $\bar{x}\left(\cdot \right)$ be a weak local minimum in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_} with $\hat{\delta }$-neighborhood. Then, considering estimations \eqref{GrindEQ__3_24_} (\eqref{GrindEQ__3_25_}), we confirm that for every point $\left(\eta ,\, \left(\bar{\lambda }-1\right)^{-1} \bar{\lambda }\eta ,\, \bar{\lambda }\right)\in B_{\hat{\delta }} \left(0\right)\times B_{\hat{\delta }} \left(0\right)\times \left(0,\, 1\right)$ satisfying condition \eqref{GrindEQ__3_1_} (\eqref{GrindEQ__3_2_}) or \eqref{GrindEQ__3_5_}, the following inequalities are fulfilled: \noindent ${\mathbb D}elta _{\varepsilon }^{\left(+\right)} J\left(\bar{x}\left(\cdot \right),\, \bar{\vartheta }\right)\ge 0,\, \forall \varepsilon \in \left(0,\, \bar{\varepsilon }\right]\bigcap \left(0,\, 1\, \right]$ $\left({\mathbb D}elta _{\varepsilon }^{\left(-\right)} J\left(\bar{x}\left(\cdot \right),\, \bar{\vartheta }\right)\ge 0,\, \forall \varepsilon \in \left(0,\, \bar{\varepsilon }\right]\bigcap \left(0,\, 1\, \right]\right)$ or ${\mathbb D}elta _{\varepsilon }^{\left(+\right)} J\left(\bar{x}\left(\cdot \right),\, \bar{\vartheta }\right)\ge 0$, ${\mathbb D}elta _{\varepsilon }^{\left(-\right)} J\left(\bar{x}\left(\cdot \right),\, \bar{\vartheta }\right)\ge 0$, $\forall \varepsilon \in \left(0,\, \bar{\varepsilon }\right]\bigcap \left(0,\, 1\, \right]$. Therefore, allowing for the estimation \eqref{GrindEQ__3_24_}, \eqref{GrindEQ__3_25_}, and also choosing $\delta =\hat{\delta }$, the proof of Theorem 3.4 directly follows from Theorem 3.1. Theorem 3.4 is proved. \textbf{ Remark 3.1. }Obviously, if there exists some set $B_{\delta } \left(0\right)\times B_{\delta } \left(0\right)$ that contains no solution of the form $\left(\eta ,\, \left(\bar{\lambda }-1\right)^{-1} \bar{\lambda }\eta \right)\, ,$ where $\eta \ne 0$, $\bar{\lambda }\in \left(0,\, 1\right)$, to each system of equations \eqref{GrindEQ__3_1_}, \eqref{GrindEQ__3_2_} and \eqref{GrindEQ__3_5_}, then Theorem 3.4 is inefficient. Using estimations \eqref{GrindEQ__3_24_} and \eqref{GrindEQ__3_25_}, by means of the reason given in the proof of Theorem 3.4, the following statement follows directly from Theorem 3.2. \textbf{Theorem 3.5. }Let the functions $L\left(\cdot \right)$ and $L_{\dot{x}} \left(\cdot \right)$ be twice continuously differentiable in totality of variables, furthermore, the admissible function $\bar{x}\left(\cdot \right)$ be a weak local minimum in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_}. Then there exists $\delta >0$ for which the following statements are valid:\textbf{} (j) if the assumptions of part (i) of Theorem 3.2 are fulfilled, then for every point $\left(\eta ,\, \left(\bar{\lambda }-1\right)^{-1} \bar{\lambda }\eta ,\, \bar{\lambda }\right)\in B_{\delta } \left(0\right)\times B_{\delta } \left(0\right)\times \left(0,\, 1\right)$ satisfying conditions \eqref{GrindEQ__3_1_} (\eqref{GrindEQ__3_2_}) and \eqref{GrindEQ__3_10_} (\eqref{GrindEQ__3_11_}), the inequality \eqref{GrindEQ__3_12_} (\eqref{GrindEQ__3_13_}) is valid; (jj) if the assumptions of part (ii) of Theorem 3.2 are fulfilled, then for every point $\left(\eta ,\, \left(\bar{\lambda }-1\right)^{-1} \bar{\lambda }\eta ,\, \bar{\lambda }\right)\in B_{\delta } \left(0\right)\times B_{\delta } \left(0\right)\times \left(0,\, 1\right)$ satisfying the condition \eqref{GrindEQ__3_5_}, the inequality \eqref{GrindEQ__3_14_} is valid. \textbf{Theorem 3.6. 
}If the assumptions of part (i) of Theorem 3.3 are fulfilled, then the statement \eqref{GrindEQ__3_16_} of Theorem 3.3 is valid also for the admissible function $\bar{x}\left(\cdot \right)$ being a weak local minimum in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_}. \textbf{Proof. }It suffices to show, for example, the validity of the equality $\left(\eta ^{T} \bar{L}_{\dot{x}\dot{x}} \left(\theta +\right)\eta \right)_{\dot{x}}^{T} \eta =0$ under the assumption
\begin{equation} \label{GrindEQ__3_26_} \eta ^{T} \bar{L}_{\dot{x}\dot{x}} \left(\theta +\right)\eta =0, \end{equation}
because the other equalities from \eqref{GrindEQ__3_16_} are proved quite similarly. Let the function $\bar{x}\left(\cdot \right)$ be a weak local minimum in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_} with $\hat{\delta }$-neighborhood and $\eta \in R^{n} $ be an arbitrary fixed vector satisfying condition \eqref{GrindEQ__3_26_}. We choose a number $\varepsilon _{0} >0$ so that for all $\varepsilon \in \left(-\varepsilon _{0} ,\, \varepsilon _{0} \right)$ the inclusion $\varepsilon \eta \in B_{\hat{\delta }} \left(0\right)$ holds. Assuming $\xi =\varepsilon \eta $ in \eqref{GrindEQ__1_8_} and taking into account \eqref{GrindEQ__1_6_}, we apply the Taylor formula. Then by virtue of \eqref{GrindEQ__3_26_} we have
\[E\left(\bar{L}\right)\left(\theta +,\, \varepsilon \eta \right)=\frac{1}{6} \varepsilon ^{3} \left(\eta ^{T} \bar{L}_{\dot{x}\dot{x}} \left(\theta +\right)\eta \right)_{\dot{x}}^{T} \eta +o\left(\varepsilon ^{3} \right)\ge 0,\, \, \forall \varepsilon \in \left(-\varepsilon _{0} ,\, \varepsilon _{0} \right). \]
Hence we get the validity of the sought-for equality $\left(\eta ^{T} \bar{L}_{\dot{x}\dot{x}} \left(\theta +\right)\eta \right)_{\dot{x}}^{T} \eta =0$. Theorem 3.6 is proved. \textbf{Theorem 3.7. }Let the admissible function $\bar{x}\left(\cdot \right)$ be a weak local minimum in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_}. Then there exists $\delta >0$ for which the following statements are valid: (j) if the assumptions of part (ii) of Theorem 3.3 are fulfilled, then for every point $\eta \in B_{\delta } \left(0\right)$ satisfying condition \eqref{GrindEQ__3_17_} (\eqref{GrindEQ__3_18_}), the inequality \eqref{GrindEQ__3_19_} (\eqref{GrindEQ__3_20_}) is valid; (jj) if the assumptions of part (iii) of Theorem 3.3 are fulfilled, then for every point $\eta \in B_{\delta } \left(0\right)$ satisfying condition \eqref{GrindEQ__3_21_}, the equality \eqref{GrindEQ__3_22_} is valid. \textbf{Proof. }We consider the increment formula \eqref{GrindEQ__2_36_} (\eqref{GrindEQ__2_37_}) obtained for $\Delta _{\varepsilon }^{\left(+\right)} J\left(\bar{x}\left(\cdot \right),\, \tilde{\vartheta }\right)=J\left(x^{\left(+\right)} \left(\cdot ;\tilde{\vartheta },\, \varepsilon \right)\right)-J\left(\bar{x}\left(\cdot \right)\right)$ $\left(\Delta _{\varepsilon }^{\left(-\right)} J\left(\bar{x}\left(\cdot \right),\, \tilde{\vartheta }\right)=J\left(x^{\left(-\right)} \left(\cdot ;\, \tilde{\vartheta },\, \varepsilon \right)\right)-J\left(\bar{x}\left(\cdot \right)\right)\right)$ and used while proving Theorem 3.3. Here $\left. \tilde{\vartheta }=\vartheta =\left(\theta ,\, \lambda ,\, \xi \right)\right|_{\lambda =\varepsilon } $, $\left. x^{\left(+\right)} \left(\cdot ;\, \tilde{\vartheta },\, \varepsilon \right)=x^{\left(+\right)} \left(\cdot ;\, \vartheta ,\, \varepsilon \right)\right|_{\lambda =\varepsilon } $ $\left(\left. x^{\left(-\right)} \left(\cdot ;\tilde{\vartheta },\, \varepsilon \right)=x^{\left(-\right)} \left(\cdot ;\, \vartheta ,\, \varepsilon \right)\right|_{\lambda =\varepsilon } \right)$,
$\varepsilon \in \left(0,\, \bar{\varepsilon }\right]\bigcap \left(0,\, \tilde{\varepsilon }\right]\bigcap \left(0,\, 1\right)$, where $x^{\left(+\right)} \left(\cdot ;\vartheta ,\, \varepsilon \right)$ $\left(x^{\left(-\right)} \left(\cdot ;\vartheta ,\, \varepsilon \right)\right)$ was defined as a variation of the function $\bar{x}\left(\cdot \right)$ from \eqref{GrindEQ__2_4_} (\eqref{GrindEQ__2_5_}) allowing for \eqref{GrindEQ__2_1_} (\eqref{GrindEQ__2_6_}). Considering the definition of the function $x^{\left(+\right)} \left(\cdot ;\tilde{\vartheta },\, \varepsilon \right)$ $\left(x^{\left(-\right)} \left(\cdot ;\tilde{\vartheta },\, \varepsilon \right)\right)$ and assuming $\xi =\eta $, similarly to \eqref{GrindEQ__3_24_} (\eqref{GrindEQ__3_25_}) we have that for all $\varepsilon \in \left(0,\, \bar{\varepsilon }\right]\bigcap \left(0,\, \tilde{\varepsilon }\right]\bigcap \left(0,\, 2^{-1} \right]$ the following estimations are valid:
\begin{equation} \label{GrindEQ__3_27_} \max \left\{\left\| x^{\left(+\right)} \left(\cdot ;\, \tilde{\vartheta },\varepsilon \right)-\bar{x}\left(\cdot \right)\right\| _{C\left(I,\, R^{n} \right)} ,\, \left\| \dot{x}^{\left(+\right)} \left(\cdot ,\, \tilde{\vartheta },\varepsilon \right)-\dot{\bar{x}}\left(\cdot \right)\right\| _{L_{\infty } \left(I,\, R^{n} \right)} \, \, \right\}\le \left\| \eta \right\| _{R^{n} } \end{equation}
\begin{equation} \label{GrindEQ__3_28_} \left(\max \left\{\left\| x^{\left(-\right)} \left(\cdot ;\, \tilde{\vartheta },\varepsilon \right)-\bar{x}\left(\cdot \right)\right\| _{C\left(I,\, R^{n} \right)} ,\, \left\| \dot{x}^{\left(-\right)} \left(\cdot ,\, \tilde{\vartheta },\varepsilon \right)-\dot{\bar{x}}\left(\cdot \right)\right\| _{L_{\infty } \left(I,\, R^{n} \right)} \, \, \right\}\le \left\| \eta \right\| _{R^{n} } \right). \end{equation}
Note that here the inequality $\left|\left(\varepsilon -1\right)^{-1} \varepsilon \right|\le 1$, valid for $\varepsilon \in \left(0,\, 2^{-1} \right]$, was used. Furthermore, carrying out reasoning similar to that in the proof of Theorem 3.4 and allowing for estimations \eqref{GrindEQ__3_27_} and \eqref{GrindEQ__3_28_}, we see that the proof of Theorem 3.7 directly follows from Theorem 3.3. Theorem 3.7 is proved.
\section{Necessary conditions for a minimum in the presence of various degenerations on an interval}
It frequently happens that the Weierstrass condition, and also the Legendre condition, degenerate on some interval. Such a situation is studied in this section as an independent problem. It is important to note that the study of such cases allows us to obtain analogues of statements \eqref{GrindEQ__3_6_}, \eqref{GrindEQ__3_14_} and \eqref{GrindEQ__3_22_} under significantly weakened assumptions on the smoothness of the integrand $L\left(\cdot \right)$ and of the considered extremal of problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_}. Here, as in Section 3, the approach is based on the special variations of the extremal of problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_} introduced above and on the increment formulas for functional \eqref{GrindEQ__1_1_} obtained in Section 2. We prove the following theorems. \textbf{Theorem 4.1. }
Let the integrand $L\left(\cdot \right)$ be continuously differentiable in totality of variables, furthermore, the admissible function $\bar{x}\left(\cdot \right)$ be an extremal in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_}, and along it for the vectors $\eta \ne 0$ and $\left(\bar{\lambda }-1\right)^{-1} \bar{\lambda }\eta $, where $\bar{\lambda }\in \left(0,1\right)$, let the Weierstrass condition degenerate at every point of the interval $\left(\bar{t}_{0} ,\, \bar{t}_{1} \right)\subset \left[t_{0} ,\, t_{1} \right]$, i.e. the following equalities hold:
\begin{equation} \label{GrindEQ__4_1_} E\left(\bar{L}\right)\left(t,\, \eta \right)=E\left(\bar{L}\right)\, \left(t,\, \left(\bar{\lambda }-1\right)^{-1} \bar{\lambda }\eta \, \right)=0,\, \, \, \forall t\in \left(\bar{t}_{0} ,\, \bar{t}_{1} \right), \end{equation}
where $E\left(\bar{L}\right)\left(\cdot \right)$ is determined from \eqref{GrindEQ__1_6_} and the interval $\left(\bar{t}_{0} ,\, \bar{t}_{1} \right)$ does not contain any angular point of the function $\bar{x}\left(\cdot \right)$. Then: (i) if the extremal $\bar{x}\left(\cdot \right)$ is a strong local minimum in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_}, then the following equality is fulfilled:
\begin{equation} \label{GrindEQ__4_2_} \bar{\lambda }\Delta \bar{L}_{x}^{T} \left(t,\, \eta \right)\eta +\left(1-\bar{\lambda }\right)\Delta \bar{L}_{x}^{T} \left(t,\, \left(\bar{\lambda }-1\right)^{-1} \bar{\lambda }\eta \right)\eta =0,\, \, \forall t\in \left(\bar{t}_{0} ,\, \bar{t}_{1} \right), \end{equation}
where $\Delta \bar{L}_{x} \left(\cdot \right)$ is determined by \eqref{GrindEQ__2_10_}; (ii) if the extremal $\bar{x}\left(\cdot \right)$ is a weak local minimum in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_}, then there exists a number $\delta >0$ such that for every point $\left(\eta ,\left(\bar{\lambda }-1\right)^{-1} \bar{\lambda }\eta ,\bar{\lambda }\right)\in B_{\delta } \left(0\right)\times B_{\delta } \left(0\right)\times \left(0,1\right)$ satisfying condition \eqref{GrindEQ__4_1_}, equality \eqref{GrindEQ__4_2_} is fulfilled.
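As a simple illustration (continuing the model example given after the statement of Theorem 3.1, and again not used in what follows), take $L\left(t,\, x,\, \dot{x}\right)=\dot{x}^{2} \left(\dot{x}^{2} -1\right)^{2} $ and $\bar{x}\left(t\right)\equiv 0$ with zero boundary values in \eqref{GrindEQ__1_2_}. Since $L$ does not depend on $t$ and $x$, equalities \eqref{GrindEQ__4_1_} with $\eta =1$ and $\bar{\lambda }=1/2$ hold on the whole interval, i.e. the Weierstrass condition degenerates at every point of $\left(t_{0} ,\, t_{1} \right)$, while both terms in \eqref{GrindEQ__4_2_} vanish because $L_{x} \equiv 0$, so the necessary condition \eqref{GrindEQ__4_2_} is satisfied, in agreement with the fact that $\bar{x}\left(\cdot \right)$ is a global minimum.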
\textbf{Proof. }Since the admissible function $\bar{x}\left(\cdot \right)$ is an extremal in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_}, then along the function $\bar{x}\left(\cdot \right)$ formulas \eqref{GrindEQ__2_17_}-\eqref{GrindEQ__2_19_} are valid for the increment $\Delta _{\varepsilon }^{\left(+\right)} J\left(\bar{x}\left(\cdot \right),\vartheta \right)$, where $\vartheta =\left(\theta ,\, \lambda ,\, \xi \right)\in \left[t_{0} ,\, t_{1} \right)\times \left(0,\, 1\right)\times R^{n} \backslash \left\{0\right\}$ is an arbitrary fixed point, and $\varepsilon \in \left(0,\, \bar{\varepsilon }\right)$. Assume $\theta \in \left(\bar{t}_{0} ,\, \bar{t}_{1} \right)$, $\lambda =\bar{\lambda }$ and $\xi =\eta $, more exactly, $\vartheta =\bar{\vartheta }:=\left(\theta ,\, \bar{\lambda },\eta \right)\in \left(\bar{t}_{0} ,\, \bar{t}_{1} \right)\times \left(0,\, 1\right)\times R^{n} \backslash \left\{0\right\}$ and $\bar{\varepsilon }=\hat{\varepsilon }$, moreover $\hat{\varepsilon }<\bar{t}_{1} -\theta $. Then by virtue of \eqref{GrindEQ__1_6_}, \eqref{GrindEQ__2_1_} and \eqref{GrindEQ__2_2_}, the increment $\Delta _{\varepsilon }^{\left(+\right)} J\left(\bar{x}\left(\cdot \right),\bar{\vartheta }\right)$ takes the form
\[\Delta _{\varepsilon }^{\left(+\right)} J\left(\bar{x}\left(\cdot \right),\bar{\vartheta }\right)=\int _{\theta }^{\theta +\bar{\lambda }\varepsilon }E\left(\bar{L}\right)\left(t,\, \eta \right)dt+\int _{\theta +\bar{\lambda }\varepsilon }^{\theta +\varepsilon }E\left(\bar{L}\right)\, \left(t,\, \left(\bar{\lambda }-1\right)^{-1} \bar{\lambda }\eta \right)dt+ \]
\begin{equation} \label{GrindEQ__4_3_} +\hat{J}_{1}^{\left(+\right)} \left(\varepsilon ,\, \bar{\vartheta }\right)+\hat{J}_{2}^{\left(+\right)} \left(\varepsilon ,\, \bar{\vartheta }\right),\, \, \, \varepsilon \in \left(0,\, \hat{\varepsilon }\right]. \end{equation}
Here
\[\hat{J}_{1}^{\left(+\right)} \left(\varepsilon ,\, \bar{\vartheta }\right)=\int _{\theta }^{\theta +\bar{\lambda }\varepsilon }\, \left[L\left(t,\bar{x}\left(t\right)+\left(t-\theta \right)\eta ,\, \dot{\bar{x}}\left(t\right)+\eta \right)-L\, \left(t,\bar{x}\left(t\right),\, \dot{\bar{x}}\left(t\right)+\eta \right)-\left(t-\theta \right)\, \bar{L}_{x}^{T} \left(t\right)\eta \right]dt ,\]
\[\hat{J}_{2}^{\left(+\right)} \left(\varepsilon ,\, \bar{\vartheta }\right)=\int _{\theta +\bar{\lambda }\varepsilon }^{\theta +\varepsilon }\left[L\left(t,\bar{x}\left(t\right)+\frac{\bar{\lambda }}{\bar{\lambda }-1} \left(t-\theta -\varepsilon \right)\eta ,\, \dot{\bar{x}}\left(t\right)+\frac{\bar{\lambda }}{\bar{\lambda }-1} \eta \right)-L\, \left(t,\bar{x}\left(t\right),\, \dot{\bar{x}}\left(t\right)+\frac{\bar{\lambda }}{\bar{\lambda }-1} \eta \right)-\right. \]
\[\left. -\frac{\bar{\lambda }}{\bar{\lambda }-1} \left(t-\theta -\varepsilon \right)\, \bar{L}_{x}^{T} \left(t\right)\eta \right]\, dt. \]
Applying the Taylor formula and taking into account \eqref{GrindEQ__2_10_}, for $\hat{J}_{1}^{\left(+\right)} \left(\varepsilon ,\, \bar{\vartheta }\right)$ and $\hat{J}_{2}^{\left(+\right)} \left(\varepsilon ,\, \bar{\vartheta }\right)$ we have
\begin{equation} \label{GrindEQ__4_4_} \hat{J}_{1}^{\left(+\right)} \left(\varepsilon ,\, \bar{\vartheta }\right)=\frac{1}{2} \varepsilon ^{2} \bar{\lambda }^{2} \Delta \bar{L}_{x}^{T} \left(\theta ,\eta \right)\eta +o \left(\varepsilon ^{2} \right), \end{equation}
\begin{equation} \label{GrindEQ__4_5_} \hat{J}_{2}^{\left(+\right)} \left(\varepsilon ,\, \bar{\vartheta }\right)=\frac{1}{2} \varepsilon ^{2} \bar{\lambda }\left(1-\bar{\lambda }\right)\Delta \bar{L}_{x}^{T} \left(\theta ,\, \left(\bar{\lambda }-1\right)^{-1} \bar{\lambda }\eta \right)\eta +o \left(\varepsilon ^{2} \right). \end{equation}
Further, since $\theta \in \left(\bar{t}_{0} ,\, \bar{t}_{1} \right)$, $\bar{\lambda }\in \left(0,\, 1\right)$ and $\hat{\varepsilon }<\bar{t}_{1} -\theta $, then for all $\varepsilon \in \left(0,\, \hat{\varepsilon }\right]$ the inclusions $\left[\, \theta ,\, \theta +\bar{\lambda }\varepsilon \, \right]\subset \left(\bar{t}_{0} ,\, \bar{t}_{1} \right)$ and $\left[\, \theta +\bar{\lambda }\varepsilon ,\, \theta +\varepsilon \, \right]\subset \left(\bar{t}_{0} ,\, \bar{t}_{1} \right)$ hold. Therefore, by virtue of assumption \eqref{GrindEQ__4_1_} the first two terms in \eqref{GrindEQ__4_3_} vanish.
Considering this and also \eqref{GrindEQ__4_4_} and \eqref{GrindEQ__4_5_}, from \eqref{GrindEQ__4_3_} we get
\begin{equation} \label{GrindEQ__4_6_} \Delta _{\varepsilon }^{\left(+\right)} J\left(\bar{x}\left(\cdot \right),\, \bar{\vartheta }\right)=\frac{1}{2} \varepsilon ^{2} \left[\bar{\lambda }^{\, 2} \Delta \bar{L}_{x}^{T} \left(\theta ,\, \eta \right)\eta +\bar{\lambda }\left(1-\bar{\lambda }\right)\Delta \bar{L}_{x}^{T} \left(\theta ,\, \left(\bar{\lambda }-1\right)^{-1} \bar{\lambda }\eta \right)\, \eta \right] +o\left(\varepsilon ^{2} \right),\, \, \varepsilon \in \left(0,\, \hat{\varepsilon }\right]. \end{equation}
Now, quite similarly, using \eqref{GrindEQ__2_6_}, \eqref{GrindEQ__2_7_}, \eqref{GrindEQ__2_10_} and \eqref{GrindEQ__2_25_}-\eqref{GrindEQ__2_27_} allowing for $\vartheta =\bar{\vartheta }=\left(\theta ,\, \bar{\lambda },\, \eta \right)$ and choosing the number $\tilde{\varepsilon }=\varepsilon ^{*} <\theta -\bar{t}_{0} $, we prove that by virtue of assumption \eqref{GrindEQ__4_1_} the increment $\Delta _{\varepsilon }^{\left(-\right)} J\left(\bar{x}\left(\cdot \right),\, \bar{\vartheta }\right)$ has the form
\[\Delta _{\varepsilon }^{\left(-\right)} J\left(\bar{x}\left(\cdot \right),\, \bar{\vartheta }\right)=-\frac{1}{2} \varepsilon ^{2} \left[\bar{\lambda }^{\, 2} \Delta \bar{L}_{x}^{T} \left(\theta ,\, \eta \right)\eta +\bar{\lambda }\left(1-\bar{\lambda }\right)\Delta \bar{L}_{x}^{T} \left(\theta ,\, \left(\bar{\lambda }-1\right)^{-1} \bar{\lambda }\eta \right)\eta \right]+\]
\begin{equation} \label{GrindEQ__4_7_} +o\left(\varepsilon ^{2} \right),\, \, \varepsilon \in \left(0,\, \varepsilon ^{*} \right]. \end{equation}
Since the admissible function $\bar{x}(\cdot)$ is a strong local minimum in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_}, then $\Delta _{\varepsilon }^{\left(+\right)} J\left(\bar{x}\left(\cdot \right),\, \bar{\vartheta }\right)\ge 0$ and $\Delta _{\varepsilon }^{\left(-\right)} J\left(\bar{x}\left(\cdot \right),\, \bar{\vartheta }\right)\ge 0$ for all $\varepsilon \in \left(0,\, \hat{\varepsilon }\right]\bigcap \left(0,\, \varepsilon ^{*} \right]$. Considering the last inequalities and the arbitrariness of the point $\theta \in \left(\bar{t}_{0} ,\, \bar{t}_{1} \right)$, by virtue of \eqref{GrindEQ__4_6_} and \eqref{GrindEQ__4_7_} we get the validity of the sought-for equality \eqref{GrindEQ__4_2_}, i.e. part (i) of Theorem 4.1 is proved. We now prove part (ii) of Theorem 4.1. Let us consider formulas \eqref{GrindEQ__4_6_} and \eqref{GrindEQ__4_7_} obtained for the increments $\Delta _{\varepsilon }^{\left(+\right)} J\left(\bar{x}\left(\cdot \right),\, \bar{\vartheta }\right)$ and $\Delta _{\varepsilon }^{\left(-\right)} J\left(\bar{x}\left(\cdot \right),\, \bar{\vartheta }\right)$.
Here, by definition ${\mathbb D}elta _{\varepsilon }^{\left(+\right)} J\left(\bar{x}\left(\cdot \right),\, \bar{\vartheta }\right)=J\left(x^{\left(+\right)} \left(\cdot ;\, \bar{\vartheta },\, \varepsilon \right)\right)-J\left(\bar{x}\left(\cdot \right)\right)$ and ${\mathbb D}elta _{\varepsilon }^{\left(-\right)} J\left(\bar{x}\left(\cdot \right),\, \bar{\vartheta }\right)=J\left(x^{\left(-\right)} \left(\cdot ;\, \bar{\vartheta },\, \varepsilon \right)\right)-J\left(\bar{x}\left(\cdot \right)\right)$, where $x^{\left(+\right)} \left(\cdot ;\, \bar{\vartheta },\, \varepsilon \right)$ is determined from \eqref{GrindEQ__2_4_} allowing for \eqref{GrindEQ__2_1_}, while $x^{\left(-\right)} \left(\cdot ;\, \bar{\vartheta },\, \varepsilon \right)$ is determined from \eqref{GrindEQ__2_5_} allowing for \eqref{GrindEQ__2_6_} at $\vartheta =\bar{\vartheta }=\left(\theta ,\, \eta ,\, \bar{\lambda }\right)$, i.e. at $\xi =\eta ,\, \, \lambda =\bar{\lambda }$ and $\theta \in \left(\bar{t}_{0} ,\, \bar{t}_{1} \right)$. By virtue of definition $x^{\left(+\right)}(\cdot ;\, \bar{\vartheta },\, \varepsilon)$ and $x^{\left(-\right)}(\cdot ;\, \bar{\vartheta },\, \varepsilon)$ allowing for \eqref{GrindEQ__2_2_} and \eqref{GrindEQ__2_7_} the following estimations are valid: \begin{equation} \label{GrindEQ__4_8_} \left\| x^{\left(+\right)} \left(\cdot ;\, \bar{\vartheta },\, \varepsilon \right)-\bar{x}\left(\cdot \right)\right\| _{C\left(I,\, R^{n} \right)} =\left\| h^{\left(+\right)} \left(\cdot ;\, \bar{\vartheta },\, \varepsilon \right)\right\| _{C\left(I,\, R^{n} \right)} \le \left\| \eta \right\| ,\, \, \varepsilon \in \left(0,\, \hat{\varepsilon }\right]\bigcap \left(0,\, 1\right), \end{equation} \begin{equation} \label{GrindEQ__4_9_} \left\| \dot{x}^{\left(+\right)} \left(\cdot ;\, \bar{\vartheta },\, \varepsilon \right)-\dot{\bar{x}}\left(\cdot \right)\right\| _{L_{\infty } \left(I,\, R^{n} \right)} =\left\| \dot{h}^{\left(+\right)} \left(\cdot ;\, \bar{\vartheta },\, \varepsilon \right)\right\| _{{}_{L_{\infty } } \left(I,\, R^{n} \right)} \le \max \left\{\, \left\| \eta \right\| \, _{R^{n} } ,\, \frac{\bar{\lambda }}{1-\bar{\lambda }} \left\| \eta \right\| \, _{R^{n} } \right\} \end{equation} Similar estimations are valid for $\left\| x^{\left(-\right)} \left(\cdot ;\, \bar{\vartheta },\, \varepsilon \right)-\bar{x}\left(\cdot \right)\right\| _{C\left(I,\, R^{n} \right)} $ and $\left\| \dot{x}^{\left(-\right)} \left(\cdot ;\, \bar{\vartheta },\, \varepsilon \right)-\dot{\bar{x}}\left(\cdot \right)\right\| _{L_{\infty } \left(I,\, R^{n} \right)} $ as well. Let the function $\bar{x}\left(\cdot \right)$ be a local minimum in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_} with $\hat{\delta }$-neighborhood. Then, considering the last estimations we confirm that for every point $\left(\eta ,\, \left(\bar{\lambda }-1\right)^{-1} \bar{\lambda }\eta ,\bar{\lambda }\right)\in B_{\hat{\delta }} \left(0\right)\times B_{\hat{\delta }} \left(0\right)\times \left(0,\, 1\right)$, satisfying condition \eqref{GrindEQ__4_1_}, the inequalities ${\mathbb D}elta _{\varepsilon }^{\left(+\right)} J\left(\bar{x}\left(\cdot \right),\, \bar{\vartheta }\right)\ge 0$ and ${\mathbb D}elta _{\varepsilon }^{\left(-\right)} J\left(\bar{x}\left(\cdot \right),\, \bar{\vartheta }\right)\ge 0$ are fulfilled. 
Based on these inequalities, allowing for \eqref{GrindEQ__4_6_}, \eqref{GrindEQ__4_7_} and arbitrariness of $\theta \in \left(\bar{t}_{0} ,\, \bar{t}_{1} \right)$, and also choosing $\delta =\hat{\delta }$, we get the proof of part (ii) of Theorem 4.1. So, Theorem 4.1 is completely proved. \textbf{Theorem 4.2. }Let the functions $L\left(\cdot \right)$ and $L_{x} \left(\cdot \right)$ be continuously differentiable in totality of variables, and the admissible function $\bar{x}\left(\cdot \right)$ be an extremal of problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_}, along it for the vectors $\eta \ne 0$ and $\left(\bar{\lambda }-1\right)^{-1} \bar{\lambda }\eta $, where $\bar{\lambda }\in \left(0,\, 1\right)$, the Weierstrass condition degenerates at any point of the interval $\left(\bar{t}_{0} ,\, \bar{t}_{1} \right)\subset \left[t_{0} ,\, t_{1} \right]$, i.e. condition \eqref{GrindEQ__4_1_} holds. Furthermore, let the function $\bar{x}\left(\cdot \right)$ be twice continuously differentiable on the interval $\left(\bar{t}_{0} ,\, \bar{t}_{1} \right)$. Then: (i) if the extremal $\bar{x}\left(\cdot \right)$ is a strong local minimum in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_}, then the following inequality is fulfilled: \begin{equation} \label{GrindEQ__4_10_} \eta ^{T} \left(\bar{\lambda }\bar{L}_{xx} \left(t,\, \eta \right)+\left(1-\bar{\lambda }\right)\, \bar{L}_{xx} \left(t,\, \left(\bar{\lambda }-1\right)^{-1} \bar{\lambda }\eta \right)\right)\, \eta -\frac{d}{dt} {\mathbb D}elta \bar{L}_{x}^{T} \, \left(t,\, \eta \right)\eta \ge 0,\, \, \forall t\in \left(\bar{t}_{0} ,\, \bar{t}_{1} \right), \end{equation} where $\bar{L}_{xx} \left(t,\, \cdot \right)$ and ${\mathbb D}elta \bar{L}_{x} \left(t,\, \cdot \right)$ are determined by \eqref{GrindEQ__2_9_} and \eqref{GrindEQ__2_10_}, respectively; (ii) if the extremal $\bar{x}\left(\cdot \right)$ is a weak local minimum in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_}, then there exists a number $\delta >0$ such that for every point $\left(\eta ,\, \left(\bar{\lambda }-1\right)^{-1} \bar{\lambda }\eta ,\, \bar{\lambda }\right)\in B_{\delta } \left(0\right)\times B_{\delta } \left(0\right)\times \left(0,\, 1\right)$ satisfying the condition \eqref{GrindEQ__4_1_}, the inequality \eqref{GrindEQ__4_10_} is fulfilled. \textbf{Proof. }Let us\textbf{ }prove part (i) of Theorem 4.2. Obviously Theorem 4.1 is valid subject to the condition of Theorem 4.2. Let us use the formula determined by \eqref{GrindEQ__4_3_} for the increment ${\mathbb D}elta _{\varepsilon }^{\left(+\right)} J\left(\bar{x}\left(\cdot \right),\, \bar{\vartheta }\right)$, where $\bar{\vartheta }=\left(\theta ,\, \bar{\lambda },\, \eta \right)\in \left(\bar{t}_{0} ,\, \bar{t}_{1} \right)\times \left(0,\, 1\right)\times R_{n} \backslash \left\{0\right\}$ and $\varepsilon \in \left(0,\, \hat{\varepsilon }\right]$. For finding the increment ${\mathbb D}elta _{\varepsilon }^{\left(+\right)} J\left(\bar{x}\left(\cdot \right),\, \bar{\vartheta }\right)$ it suffices to find the expansion of the integrals $\hat{J}_{1}^{\left(+\right)} \left(\varepsilon ,\, \bar{\vartheta }\right)$ and $\hat{J}_{2}^{\left(+\right)} \left(\varepsilon ,\, \bar{\vartheta }\right)$ determined by \eqref{GrindEQ__4_4_} and \eqref{GrindEQ__4_5_}, with accuracy $o(\varepsilon ^{3})$. These integrals are calculated similar to \eqref{GrindEQ__2_22_} and \eqref{GrindEQ__2_23_}. 
More exactly, from \eqref{GrindEQ__4_4_} and \eqref{GrindEQ__4_5_}, using the Taylor formula, allowing for \eqref{GrindEQ__2_9_} and \eqref{GrindEQ__2_10_}, we get
\begin{equation} \label{GrindEQ__4_11_} \hat{J}_{1}^{\left(+\right)} \left(\varepsilon ,\, \bar{\vartheta }\right)=\frac{1}{2} \varepsilon ^{2} \bar{\lambda }^{2} \Delta \bar{L}_{x}^{T} \left(\theta ,\, \eta \right)\eta +\frac{1}{6} \varepsilon ^{3} \left[2 \bar{\lambda }^{3} \frac{d}{dt} \Delta \bar{L}_{x}^{T} \left(\theta ,\, \eta \right)\eta +\bar{\lambda }^{3} \eta ^{T} \bar{L}_{xx} \left(\theta ,\, \eta \right)\eta \right]+o\left(\varepsilon ^{3} \right), \end{equation}
\[\hat{J}_{2}^{\left(+\right)} \left(\varepsilon ,\, \bar{\vartheta }\right)=\frac{1}{2} \varepsilon ^{2} \bar{\lambda }\left(1-\bar{\lambda }\right)\Delta \bar{L}_{x}^{T} \left(\theta ,\frac{\bar{\lambda }}{\bar{\lambda }-1} \, \eta \right)\eta +\]
\begin{equation} \label{GrindEQ__4_12_} +\frac{1}{6} \varepsilon ^{3} \left[\bar{\lambda }\left(1-\bar{\lambda }\right)\left(1+2\bar{\lambda }\right)\frac{d}{dt} \Delta \bar{L}_{x}^{T} \left(\theta ,\frac{\bar{\lambda }}{\bar{\lambda }-1} \, \eta \right)\eta +\bar{\lambda }^{2} \left(1-\bar{\lambda }\right)\eta ^{T} \bar{L}_{xx} \left(\theta ,\frac{\bar{\lambda }}{\bar{\lambda }-1} \, \eta \right)\eta \right]+o\left(\varepsilon ^{3} \right). \end{equation}
Since $\theta \in \left(\bar{t}_{0} ,\, \bar{t}_{1} \right)$, $\bar{\lambda }\in \left(0,\, 1\right)$, $\hat{\varepsilon }<\bar{t}_{1} -\theta $ and $\bar{x}\left(\cdot \right)$ is a strong local minimum in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_}, firstly, by assumption \eqref{GrindEQ__4_1_} for all $\varepsilon \in \left(0,\, \hat{\varepsilon }\right]$ the first two terms in \eqref{GrindEQ__4_3_} vanish; secondly, for all $\varepsilon \in \left(0,\, \hat{\varepsilon }\right]$ the increment $\Delta _{\varepsilon }^{\left(+\right)} J\left(\bar{x}\left(\cdot \right),\, \bar{\vartheta }\right)$ is non-negative, and by virtue of statement \eqref{GrindEQ__4_2_} of Theorem 4.1 for all $t\in \left(\bar{t}_{0} ,\, \bar{t}_{1} \right)$ the equality $\Delta \bar{L}_{x}^{T} \left(t,\, \left(\bar{\lambda }-1\right)^{-1} \bar{\lambda }\eta \right)\, \eta =\left(\bar{\lambda }-1\right)^{-1} \bar{\lambda }\Delta \bar{L}_{x}^{T} \left(t,\, \eta \right)\eta $ is valid. By means of \eqref{GrindEQ__4_11_}, \eqref{GrindEQ__4_12_} and the last statements, allowing for the arbitrariness of $\theta \in \left(\bar{t}_{0} ,\, \bar{t}_{1} \right)$, the sought-for inequality \eqref{GrindEQ__4_10_} follows from \eqref{GrindEQ__4_3_}, i.e. part (i) of Theorem 4.2 is proved. The proof of part (ii) of Theorem 4.2, allowing for estimations \eqref{GrindEQ__4_8_}, \eqref{GrindEQ__4_9_} and the definition of a weak local minimum of the function $\bar{x}\left(\cdot \right)$, follows from part (i) of Theorem 4.2. By the same token, Theorem 4.2 is completely proved. Finally, we prove the following theorem. \textbf{Theorem 4.3. }Let the integrand $L\left(\cdot \right)$ be continuously differentiable in totality of variables, and the partial derivatives $L_{xx} \left(\cdot \right),\, L_{x\dot{x}} \left(\cdot \right),\, L_{\dot{x}\dot{x}} \left(\cdot \right)$ and $L_{\dot{x}\dot{x}\dot{x}} \left(\cdot \right)$ be continuous in totality of variables.
Furthermore, let the admissible function $\bar{x}\left(\cdot \right)$ be an extremal of problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_}, and along it for the vector $\eta \ne 0$ let the Weierstrass and Legendre conditions degenerate at any point of the interval $\left(\bar{t}_{0} ,\, \bar{t}_{1} \right)\subset \left[t_{0} ,\, t_{1} \right]$, i.e. let the following equalities hold: \begin{equation} \label{GrindEQ__4_13_} E\left(\bar{L}\right)\left(t,\, \eta \right)=\eta ^{T} \bar{L}_{\dot{x}\dot{x}} \left(t\right)\eta =0,\, \, \forall t\in \left(\bar{t}_{0} ,\, \bar{t}_{1} \right), \end{equation} where the interval $\left(\bar{t}_{0} ,\, \bar{t}_{1} \right)$ does not contain any angular point of the extremal $\bar{x}\left(\cdot \right)$. Then: (i) if the extremal $\bar{x}\left(\cdot \right)$ is a strong local minimum in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_}, then the following equality is fulfilled: \begin{equation} \label{GrindEQ__4_14_} \eta ^{T} \left(L_{x} \left(t,\, \bar{x}\left(t\right),\, \dot{\bar{x}}\left(t\right)+\eta \right)-\bar{L}_{x} \left(t\right)-\bar{L}_{x\dot{x}} \left(t\right)\, \eta \right)=0,\, \, \forall t\in \left(\bar{t}_{0} ,\, \, \bar{t}_{1} \right); \end{equation} (ii) if the extremal $\bar{x}\left(\cdot \right)$ is a weak local minimum in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_}, then there exists a number $\delta >0$ such that for every point $\eta \in B_{\delta } \left(0\right)$ satisfying condition \eqref{GrindEQ__4_13_}, the equality \eqref{GrindEQ__4_14_} is fulfilled. \textbf{Proof. }Let us prove part (i) of Theorem 4.3. We use the formulas \eqref{GrindEQ__2_39_}-\eqref{GrindEQ__2_41_}, obtained for the increment $\Delta _{\varepsilon }^{\left(+\right)} J\left(\bar{x}\left(\cdot \right),\, \tilde{\vartheta }\right)=J_{1}^{\left(+\right)} \left(\varepsilon ,\, \tilde{\vartheta }\right)+J_{2}^{\left(+\right)} \left(\varepsilon ,\, \tilde{\vartheta }\right),\, \varepsilon \in \left(0,\, \bar{\varepsilon }\right]\bigcap \left(0,\, 1\right)$, and take $\tilde{\vartheta }=\hat{\vartheta }$, where $\hat{\vartheta }:=\left(\theta ,\, \varepsilon ,\, \eta \right)\in \left(\bar{t}_{0} ,\, \bar{t}_{1} \right)\times \left(0,\, \hat{\varepsilon }\right]\times R^{n} \backslash \left\{0\right\}$ and $\hat{\varepsilon }=\min \left\{\bar{\varepsilon },\, \bar{t}_{1} -\theta ,\, 1\right\}$. 
Then by the Taylor formula, allowing for $\hat{\vartheta }$ and $\hat{\varepsilon }$, firstly, from \eqref{GrindEQ__2_40_} for $J_1^{\left(+\right)} \left(\varepsilon ,\, \hat{\vartheta }\right)$, $\varepsilon \in \left(0,\, \hat{\varepsilon }\right]$, we get \[J_{1}^{\left(+\right)} \left(\varepsilon ,\, \hat{\vartheta }\right)=\int _{\theta }^{\theta +\varepsilon ^{2} }E\left(\bar{L}\right)\left(t,\, \eta \right)dt+\frac{\varepsilon ^{2} }{2\left(\varepsilon -1\right)^{2} } \int _{\theta +\varepsilon ^{2} }^{\theta +\varepsilon }\eta ^{T} \bar{L}_{\dot{x}\dot{x}} \left(t\right)\, \eta \, dt- \] \begin{equation} \label{GrindEQ__4_15_} -\frac{\varepsilon ^{4} }{6\left(1-\varepsilon \right)^{2} } \left(\eta ^{T} \bar{L}_{\dot{x}\dot{x}} \left(\theta \right)\eta \right)_{\dot{x}}^{T} \eta +o\left(\varepsilon ^{4} \right); \end{equation} secondly, taking into account \eqref{GrindEQ__2_41_}, similarly to \eqref{GrindEQ__2_43_}, for $J_{2}^{\left(+\right)} \left(\varepsilon ,\, \hat{\vartheta }\right)$, $\varepsilon \in \left(0,\, \hat{\varepsilon }\right]$ we have \begin{equation} \label{GrindEQ__4_16_} J_{2}^{\left(+\right)} \left(\varepsilon ,\, \hat{\vartheta }\right)=\frac{1}{2} \varepsilon ^{4} \left[\eta ^{T} \, \left(\bar{L}_{x} \left(\theta ,\, \eta \right)-\bar{L}_{x} \left(\theta \right)\right)-\eta ^{T} \bar{L}_{x\dot{x}} \left(\theta \right)\eta \right]+o\left(\varepsilon ^{4} \right),\, \forall \varepsilon \in \left(0,\, \hat{\varepsilon }\right]. \end{equation} Further, since $\bar{x}\left(\cdot \right)$ is a strong local minimum in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_} and \eqref{GrindEQ__4_13_} is fulfilled, by Theorem 3.6 and by the definition of the point $\theta $ and the number $\hat{\varepsilon }$, allowing for \eqref{GrindEQ__4_15_} and \eqref{GrindEQ__4_16_}, the following inequality is valid for the increment $\Delta _{\varepsilon }^{\left(+\right)} J\left(\bar{x}\left(\cdot \right),\, \hat{\vartheta }\right)$: \begin{equation} \label{GrindEQ__4_17_} \Delta _{\varepsilon }^{\left(+\right)} J\left(\bar{x}\left(\cdot \right),\, \hat{\vartheta }\right)=\frac{1}{2} \varepsilon ^{4} \left[\eta ^{T} \, \left(\bar{L}_{x} \, \left(\theta ,\, \eta \right)-\bar{L}_{x} \left(\theta \right)\right)-\eta ^{T} \bar{L}_{x\dot{x}} \left(\theta \right)\eta \right]+ o\left(\varepsilon ^{4} \right)\ge 0,\, \, \forall \varepsilon \in \left(0,\, \hat{\varepsilon }\right]. \end{equation} Quite similarly, using \eqref{GrindEQ__2_44_}-\eqref{GrindEQ__2_46_} and allowing for $\tilde{\vartheta }=\hat{\vartheta }:=\left(\theta ,\, \varepsilon ,\, \eta \right)\in \left(\bar{t}_{0} ,\, \bar{t}_{1} \right)\times \left(0,\, \varepsilon ^{*} \right]\times R^{n} \backslash \left\{0\right\}$, where $\varepsilon ^{*} =\min \left\{\tilde{\varepsilon },\, \theta -\bar{t}_{0} ,\, 1\right\}$, it is easy to show that for the increment $\Delta _{\varepsilon }^{\left(-\right)} J\left(\bar{x}\left(\cdot \right),\, \hat{\vartheta }\right)$ the inequality of the form \[\Delta _{\varepsilon }^{\left(-\right)} J\left(\bar{x}\left(\cdot \right),\, \hat{\vartheta }\right)=-\frac{1}{2} \varepsilon ^{4} \left[\eta ^{T} \, \left(\bar{L}_{x} \, \left(\theta ,\, \eta \right)-\bar{L}_{x} \left(\theta \right)\right)-\eta ^{T} \bar{L}_{x\dot{x}} \left(\theta \right)\eta \right]+\] \begin{equation} \label{GrindEQ__4_18_} o\left(\varepsilon ^{4} \right)\ge 0,\, \, \forall \varepsilon \in \left(0,\, \varepsilon ^{*} \right) \end{equation} is valid. 
From inequalities \eqref{GrindEQ__4_17_} and \eqref{GrindEQ__4_18_}, allowing for arbitrariness of the point $\theta \in \left(\bar{t}_{0} ,\, \bar{t}_{1} \right)$, the sought-for equality \eqref{GrindEQ__4_14_} follows, i.e. part (i) of Theorem 4.3 is proved. Let us prove part (ii) of Theorem 4.3. Consider the increments $\Delta _{\varepsilon }^{\left(+\right)} J\left(\bar{x}\left(\cdot \right),\, \hat{\vartheta }\right)=J\left(x^{\left(+\right)} \left(\cdot ,\, \hat{\vartheta },\, \varepsilon \right)\right)-J\left(\bar{x}\left(\cdot \right)\right)$ and $\Delta _{\varepsilon }^{\left(-\right)} J\left(\bar{x}\left(\cdot \right),\, \hat{\vartheta }\right)=J\left(x^{\left(-\right)} \left(\cdot ,\, \hat{\vartheta },\, \varepsilon \right)\right)-J\left(\bar{x}\left(\cdot \right)\right)$, determined above while proving part (i) of Theorem 4.3. Here, by virtue of \eqref{GrindEQ__2_34_} and \eqref{GrindEQ__2_35_}, allowing for $\hat{\vartheta} =\left(\theta ,\, \varepsilon ,\, \eta \right)$ we have $x^{\left(+\right)} \left(\cdot ,\, \hat{\vartheta },\, \varepsilon \right)=\left. x^{\left(+\right)} \left(\cdot ;\left(\theta ,\, \lambda ,\, \eta \right),\varepsilon \right)\right|_{\lambda =\varepsilon } $ and $x^{\left(-\right)} \left(\cdot ,\, \hat{\vartheta },\, \varepsilon \right)=\left. x^{\left(-\right)} \left(\cdot ;\left(\theta ,\, \lambda ,\, \eta \right),\varepsilon \right)\right|_{\lambda =\varepsilon } $, where $\varepsilon \in \left(0,\, \hat{\varepsilon }\right)$, and $x^{\left(+\right)} \left(\cdot ;\left(\theta ,\, \lambda ,\, \eta \right),\varepsilon \right)$ and $x^{\left(-\right)} \left(\cdot ;\left(\theta ,\, \lambda ,\, \eta \right),\varepsilon \right)$ are determined from \eqref{GrindEQ__2_4_} and \eqref{GrindEQ__2_5_} allowing for \eqref{GrindEQ__2_1_}, \eqref{GrindEQ__2_6_}, $\lambda =\varepsilon$ and $\xi =\eta $. Therefore, by virtue of estimations \eqref{GrindEQ__4_8_} and \eqref{GrindEQ__4_9_}, for all $\varepsilon \in \left(0,\, 2^{-1} \, \right]\bigcap \left(0,\, \hat{\varepsilon }\right)$ the following estimations are valid: \begin{equation} \label{GrindEQ__4_19_} \max \left\{\left\| x^{\left(+\right)} \left(\cdot ;\, \hat{\vartheta },\, \varepsilon \right)-\bar{x}\left(\cdot \right)\right\| _{C\left(I,\, R^{n} \right)} ,\, \left\| \dot{x}^{\left(+\right)} \left(\cdot ;\, \hat{\vartheta },\, \varepsilon \right)-\dot{\bar{x}}\left(\cdot \right)\right\| _{L_{\infty } \left(I,\, R^{n} \right)} \right\}\le \left\| \eta \right\| _{R^{n} } . \end{equation} In a similar way we have estimations of the form \begin{equation} \label{GrindEQ__4_20_} \max \left\{\left\| x^{\left(-\right)} \left(\cdot ;\, \hat{\vartheta },\, \varepsilon \right)-\bar{x}\left(\cdot \right)\right\| _{C\left(I,\, R^{n} \right)} ,\, \left\| \dot{x}^{\left(-\right)} \left(\cdot ;\, \hat{\vartheta },\, \varepsilon \right)-\dot{\bar{x}}\left(\cdot \right)\right\| _{L_{\infty } \left(I,\, R^{n} \right)} \right\}\le \left\| \eta \right\| _{R^{n} } . \end{equation} Let the function $\bar{x}\left(\cdot \right)$ be a weak local minimum in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_} with a $\hat{\delta }$-neighborhood. 
Then, considering estimations \eqref{GrindEQ__4_19_} and \eqref{GrindEQ__4_20_}, we confirm that for every point $\eta \in B_{\hat{\delta }} \left(0\right)$ satisfying the condition \eqref{GrindEQ__4_13_}, and for all $\varepsilon \in \left(0,\, \, 2^{-1} \right]\bigcap \left(0,\, \hat{\varepsilon }\right)\bigcap \left(0,\, \varepsilon ^{*} \right)$, the inequalities $\Delta _{\varepsilon }^{\left(+\right)} J\left(\bar{x}\left(\cdot \right),\, \hat{\vartheta }\right)\ge 0$, $\Delta _{\varepsilon }^{\left(-\right)} J\left(\bar{x}\left(\cdot \right),\, \hat{\vartheta }\right)\ge 0$ are fulfilled. Based on these inequalities, allowing for \eqref{GrindEQ__4_17_}, \eqref{GrindEQ__4_18_} and arbitrariness of $\theta \in \left(\bar{t}_{0} ,\, \bar{t}_{1} \right)$, and also choosing $\delta =\hat{\delta }$, we get the proof of part (ii) of Theorem 4.3. Thus, Theorem 4.3 is completely proved. \textbf{Remark 4.1. }Consider a classical variational problem, for example one with a free right end; we call it problem (A). If the admissible function $x^{*} \left(\cdot \right)$ is a strong (weak) local minimum in problem (A), then obviously this function gives a strong (weak) local minimum to the functional \eqref{GrindEQ__1_1_} with the boundary conditions $x\left(t_{0} \right)=x_{0} $, $x\left(t_{1} \right)=x_{1}^{*} :=x^{*} \left(t_{1} \right)$. Therefore, all the statements of Theorems 3.1-3.7 and 4.1-4.3 are necessary conditions for a minimum in problem (A) as well. \section{Discussions and examples} We consider a particular problem, more exactly the following problem with a free right end of the form \begin{equation} \label{GrindEQ__5_1_} J\left(x\left(\cdot \right)\right)=\int _{t_{0} }^{t_{1} }L^{*} \left(t,\, x\left(t\right),\, \dot{x}\left(t\right)\right)\, dt\to {\mathop{\min }\limits_{x\left(\cdot \right)}} , \end{equation} \begin{equation} \label{GrindEQ__5_2_} x\left(t_{0} \right)=x_{0} ,\, \, x\left(\cdot \right)\in PC^{1} \left(\left[t_{0} ,\, t_{1} \right],\, R^{n} \right). \end{equation} Here $L^{*} \left(t,\, x,\, \dot{x}\right)=\dot{x}^{T} A\left(t\right)\dot{x}+2\dot{x}^{T} B\left(t\right)x+x^{T}C\left(t\right)x$, where $A\left(\cdot \right),\, B\left(\cdot \right)$ and $C\left(\cdot \right)$ are continuously differentiable $n\times n$ matrix functions. Obviously, for every admissible function $\bar{x}\left(\cdot \right)$ (i.e. a function satisfying condition (5.2)) and every vector $\eta \in R^{n} $ we have the equality $E\left(\bar{L}\right)\left(t,\, \eta \right)=\eta ^{T} \bar{L}_{\dot{x}\dot{x}} \left(t\right)\eta ,\, t\in \left[t_{0} ,\, t_{1} \right]$, where $E\left(\bar{L}\right)\left(t,\, \eta \right)$ is determined by \eqref{GrindEQ__1_6_}, and $\bar{L}_{\dot{x}\dot{x}} \left(t\right)\equiv A\left(t\right)$. Hence, in problem \eqref{GrindEQ__5_1_}, \eqref{GrindEQ__5_2_} each of the degenerations of the forms \eqref{GrindEQ__3_1_}, \eqref{GrindEQ__3_2_}, \eqref{GrindEQ__3_5_}, \eqref{GrindEQ__3_17_}, \eqref{GrindEQ__3_18_}, \eqref{GrindEQ__3_21_}, \eqref{GrindEQ__4_1_} and \eqref{GrindEQ__4_13_} implies the degeneration of the Legendre condition, and vice versa. Taking into account this property and Remark 4.1 when solving problem \eqref{GrindEQ__5_1_}, \eqref{GrindEQ__5_2_}, it is easy to conclude that the Kelly condition (see [8, p. 
111]) and condition \eqref{GrindEQ__4_10_} coincide, namely, we have: if the admissible function $\bar{x}\left(\cdot \right)$ is a strong local minimum in problem \eqref{GrindEQ__5_1_}, \eqref{GrindEQ__5_2_} and for the vector $\eta \ne 0$ the Legendre condition degenerates at any point of the interval $\left(\bar{t}_{0} ,\, \bar{t}_{1} \right)\subset \left[t_{0} ,\, t_{1} \right]$, i.e. $\eta ^{T} \, A\left(t\right)\eta =0$, $\forall t\in \left(\bar{t}_{0} ,\, \bar{t}_{1} \right)$, then the inequality $\eta ^{T} \left[C\left(t\right)-\dot{B}\left(t\right)\right]\, \eta \ge 0$, $\forall t\in \left(\bar{t}_{0} ,\, \bar{t}_{1} \right)$ is valid. Further, unlike the necessary condition for a minimum \eqref{GrindEQ__4_10_}, the Kelly condition is obtained only by means of the second variation of the functional when solving a singular optimal control problem. So, based on what has been said and allowing for Example 5.1 given below, we can say that the necessary condition for a minimum \eqref{GrindEQ__4_10_} is a strengthened variant of the Kelly condition in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_}. Carrying out similar reasoning, we can confirm that in problem \eqref{GrindEQ__1_1_}, \eqref{GrindEQ__1_2_} the minimum condition \eqref{GrindEQ__4_2_}, and also the minimum condition \eqref{GrindEQ__4_14_}, are strengthened variants of the known equality-type necessary condition obtained in \cite{8}. We also note that, following Example 5.1 given below, we come to the conclusion that necessary condition \eqref{GrindEQ__4_10_} is not a corollary of the result of the paper \cite{7} and is more constructive than the condition of \cite{7}; more exactly, there is no need to solve a matrix differential equation. A similar conclusion also applies to the necessary conditions obtained in Section 3 when they are compared with the corresponding results of the monograph \cite{8}. A simple comparison and Examples 5.2 and 5.3 given below show that Theorems 3.3, 3.6 and 3.7 sharpen and refine the corresponding results of the paper \cite{3}. It is important to note that degeneration of the form \eqref{GrindEQ__3_21_}, generally speaking, does not follow from degeneration of the form \eqref{GrindEQ__3_5_}, and vice versa (see Examples 5.2 and 5.4). Therefore, based on this observation and on the formulations of the theorems proved in Sections 3 and 4, it is easy to confirm that each condition for a minimum obtained here has its own domain of applicability. \textbf{Example 5.1. }We consider a problem with a free right end of the form \begin{equation} \label{GrindEQ__5_3_} J\left(x\left(\cdot \right)\right)=\int _{0}^{1}x^{2} \left(1-\dot{x}^{2} \right)dt\to {\mathop{\min }\limits_{x\left(\cdot \right)}} ,\, \, x\left(0\right)=0, \end{equation} where $x\left(\cdot \right)\in KC^{1} \left(\left[0,\, 1\right],\, R\right)$, $L\left(t,\, x,\, \dot{x}\right)=x^{2} \left(1-\dot{x}^{2} \right)$. We study the admissible function $\bar{x}\left(t\right)=0,\, t\in \left[0,\, 1\right]$, for a minimum. 
Along this function, allowing for \eqref{GrindEQ__1_6_}, \eqref{GrindEQ__2_9_} and \eqref{GrindEQ__2_10_}, for all $\xi \in R$ and $t\in \left[0,1\right]$ we have \[\bar{L}\left(t\right)=L\left(t,\, \bar{x}\left(t\right),\, \dot{\bar{x}}\left(t\right)+\xi \right)\equiv 0,\, \, \, \, \bar{L}_{x} \left(t\right)=\bar{L}_{\dot{x}} \left(t\right)=\Delta \bar{L}_{x} \left(t,\, \xi \right)=L_{x} \left(t,\, \bar{x}\left(t\right),\, \dot{\bar{x}}\left(t\right)+\xi \right)\equiv 0,\] \[\bar{L}_{xx} \left(t,\, \xi \right)=2 \left(1-\xi ^{2} \right).\] Hence it is seen that the function $\bar{x}\left(\cdot \right)=0$ is an extremal of problem \eqref{GrindEQ__5_3_} and satisfies the Weierstrass condition \eqref{GrindEQ__1_5_}, and also that along it, for all $\left(\bar{\lambda },\, \eta \right)\in \left(0,1\right)\times R$ and at any point of the interval $(0,1)$, the assumption \eqref{GrindEQ__4_1_} is fulfilled. Therefore, considering Remark 4.1, i.e. the possibility of applying Theorem 4.2, we have: (a) condition \eqref{GrindEQ__4_10_} takes the form $2 \eta ^{2} \left(1-\frac{\bar{\lambda }}{1-\bar{\lambda }} \eta ^{2} \right)\ge 0$ for all $\left(\bar{\lambda },\, \eta \right)\in \left(0,1\right)\times R$, and therefore, for example for $\left(\bar{\lambda },\, \eta \right)=\left(\frac{1}{2} ,\, 2\right)$, it is not fulfilled. This means that, by virtue of the necessary minimum condition \eqref{GrindEQ__4_10_}, the admissible function $\bar{x}\left(\cdot \right)=0$ is not a strong local minimum in problem \eqref{GrindEQ__5_3_}; (b) obviously, the statement of part (ii) of Theorem 4.2 is fulfilled for $\delta =1$, and therefore the extremal $\bar{x}\left(\cdot \right)=0$ can be a weak local minimum in problem \eqref{GrindEQ__5_3_}. Naturally, it is important to compare the necessary condition for a minimum \eqref{GrindEQ__4_10_} with the known necessary conditions for optimality of singular controls, for example, with the Kelly condition and the optimality condition in the paper \cite{7}. To this end, we consider the following problem equivalent to problem \eqref{GrindEQ__5_3_}: \begin{equation} \label{GrindEQ__5_4_} \left\{\begin{array}{c} {\dot{x}_{1} =u,\, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, } \\ {\dot{x}_{2} =x_{1}^{2} \left(1-u^{2} \right),\, } \end{array}\, \, \, \, \, \begin{array}{c} {\, \, u\in R,\, \, \, } \\ {t\in \left[0,\, 1\right]\, \, \, \, \, x_{1} \left(0\right)=x_{2} \left(0\right)=0,} \end{array}\right. x_{2} \left(1\right)\to \min , \end{equation} where $x_{1} \left(\cdot \right)=x\left(\cdot \right)$ is the sought-for admissible function in problem \eqref{GrindEQ__5_3_}. We study the admissible process $\left(\bar{u}\left(\cdot \right),\, \left(\bar{x}_{1} \left(\cdot \right),\, \bar{x}_{2} \left(\cdot \right)\right)^{T} \right)=\left(0,\, \left(0,0\right)^{T} \right)$ for optimality in problem \eqref{GrindEQ__5_4_}. 
Along this process we have: $\bar{\psi }\left(\cdot \right)=\left(\bar{\psi }_{1} \left(\cdot \right),\bar{\psi }_{2} \left(\cdot \right)\right)^{T} =\left(0,\, -1\right)^{T} $, $H\left(\bar{\psi }\left(\cdot \right),\, x_{1} ,\, x_{2} ,\, u,\, t\right)=-x_{1}^{2} \left(1-u^{2} \right)$, \noindent $H_{uu} \left(\bar{\psi }\left(\cdot \right),\, \bar{x}_{1} \left(\cdot \right),\, \bar{x}_{2} \left(\cdot \right),\, \bar{u}\left(\cdot \right),\, t\right)=0$, $H\left(\bar{\psi }\left(\cdot \right),\, \bar{x}_{1} \left(\cdot \right),\, \bar{x}_{2} \left(\cdot \right),\vartheta ,\, t\right)=H\left(\bar{\psi }\left(\cdot \right),\, \bar{x}_{1} \left(\cdot \right),\, \bar{x}_{2} \left(\cdot \right),\, \bar{u}\left(\cdot \right),\, t\right)=0$ for all $\vartheta \in R$ and $t\in \left[0,1\right]$, where $H\left(\cdot \right)$ is the Hamilton-Pontryagin function \cite{31}. Hence we easily get that the control $\bar{u}\left(\cdot \right)=0$ is singular in the classical sense (see, for example, [8, p. 28]) and also singular in Pontryagin's sense (see [8, p. 26]). Simple calculations show that along the singular control $\overline{u}\left(\cdot \right)=0$ the Kelly condition takes the form $2\vartheta ^{2} \ge 0$ for all $\vartheta \in R$. Further, since the control $\bar{u}\left(\cdot \right)=0$ is also singular in the sense of Pontryagin, using the necessary optimality condition of the paper \cite{7} we arrive at the inequality $2\left(t-1\right)\vartheta ^{2} \le 0$ for all $\vartheta \in R$ and $t\in \left[0,\, 1\right]$. As can be seen, the Kelly condition and also the condition of the paper \cite{7} are fulfilled along the control $\bar{u}\left(\cdot \right)=0$, and both of these necessary conditions for optimality keep the control $\bar{u}\left(\cdot \right)=0$ among the contenders for optimality. Consequently, in the considered case, i.e. while studying the extremal $\bar{x}\left(\cdot \right)=0$ of problem \eqref{GrindEQ__5_3_} for a minimum, each of these necessary conditions for optimality is weaker than the minimum condition \eqref{GrindEQ__4_10_}. \textbf{Example 5.2. }Consider the problem \begin{equation} \label{GrindEQ__5_5_} \int _{0}^{1}\left(\left(\dot{x}_{1} -\dot{x}_{2}^{2} \right)^{4} +x_{1} \dot{x}_{2}^{2} \right)dt\to {\mathop{\min }\limits_{x\left(\cdot \right)}} ,\, \, x_{i} \left(0\right)=x_{i} \left(1\right) =0,\, \, i=1,\, 2, \end{equation} where $x=\left(x_{1} ,\, x_{2} \right)^{T} ,\, \, L\left(\cdot \right)=\left(\dot{x}_{1} -\dot{x}_{2}^{2} \right)^{4} +x_{1} \dot{x}_{2}^{2} $. Obviously, the admissible function $\bar{x}\left(\cdot \right)=0$ is an extremal in problem \eqref{GrindEQ__5_5_} and along it the degeneration in the form \eqref{GrindEQ__4_13_} is fulfilled for all $t\in \left[0,\, 1\right]$ and for any vector $\eta =\left(\eta _{1} ,\, \eta _{2} \right)^{T} $ such that $\eta _{1} =\eta _{2}^{2} $, $\eta _{2} \in R$, since $E\left(\bar{L}\right)\left(t,\, \eta \right)=\left(\eta _{1} -\eta _{2}^{2} \right)^{4} $ and $\bar{L}_{\dot{x}\dot{x}}\left(t\right)=0,\, \, t\in \left[0,\, 1\right]$. Also we have: \begin{equation} \label{GrindEQ__5_6_} \eta ^{T} \left(L_{x} \left(t,\, \bar{x}\left(t\right),\, \dot{\bar{x}}\left(t\right)+\eta \right)-\bar{L}_{x} \left(t\right)-\bar{L}_{x\dot{x}} \left(t\right)\eta \right)=\eta _{2}^{4} , \end{equation} where $\eta =\left(\eta _{2}^{2} ,\, \eta _{2} \right)^{T} ,\, \, \eta _{2} \in R$. 
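For completeness, equality \eqref{GrindEQ__5_6_} can be checked directly: along $\bar{x}\left(\cdot \right)=0$ one has $L_{x} \left(t,\, \bar{x}\left(t\right),\, \dot{\bar{x}}\left(t\right)+\eta \right)=\left(\eta _{2}^{2} ,\, 0\right)^{T} $ and $\bar{L}_{x} \left(t\right)=\bar{L}_{x\dot{x}} \left(t\right)=0$, so the left-hand side of \eqref{GrindEQ__5_6_} equals $\eta _{1} \eta _{2}^{2} =\eta _{2}^{4} $ for $\eta =\left(\eta _{2}^{2} ,\, \eta _{2} \right)^{T} $.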
Hence we get that Theorems 2.1 and 2.3 of the paper \cite{3} keep the extremal $\bar{x}\left(\cdot \right)=0$ among the contenders for a minimum and even for a weak local minimum in problem \eqref{GrindEQ__5_5_}. However, from \eqref{GrindEQ__5_6_} it can be seen that the necessary condition for a minimum \eqref{GrindEQ__4_14_} is not fulfilled for any $\eta_2 \ne 0$, so, by virtue of Theorem 4.3, the extremal $\bar{x}\left(\cdot \right)=0$ is not even a weak local minimum in problem \eqref{GrindEQ__5_5_}. Consequently, by virtue of Example 5.2 we come to the conclusion that the statements of Theorem 2.1 in the paper \cite{3} were strengthened in the forms \eqref{GrindEQ__3_16_}, \eqref{GrindEQ__3_22_} and \eqref{GrindEQ__4_14_}, while the statements of Theorem 2.3 in the paper \cite{3} were strengthened by means of Theorem 3.6 and part (jj) of Theorem 3.7. It is important to note that in problem \eqref{GrindEQ__5_5_}, along the extremal $\bar{x}\left(t\right)=0,\, \, t\in \left[0,\, 1\right]$, the degenerations in the forms \eqref{GrindEQ__3_1_}, \eqref{GrindEQ__3_2_} and \eqref{GrindEQ__3_5_} are not fulfilled for any point $\left(\theta ,\, \bar{\lambda },\, \eta \right)$ of the set $\left[0,1\right]\times \left(0,\, 1\right)\times R^{2} \backslash \left\{0\right\}$, since $\frac{\bar{\lambda }}{\bar{\lambda }-1} \left(1-\frac{\bar{\lambda }}{\bar{\lambda }-1} \right)\ne 0$ for all $\bar{\lambda }\in \left(0,\, 1\right)$. Therefore, by virtue of problem \eqref{GrindEQ__5_5_} we confirm that degeneration in the form \eqref{GrindEQ__4_13_}, and also degenerations in the forms \eqref{GrindEQ__3_17_}, \eqref{GrindEQ__3_18_} and \eqref{GrindEQ__3_21_}, generally speaking, do not yield the degenerations in the forms \eqref{GrindEQ__4_1_}, \eqref{GrindEQ__3_1_}, \eqref{GrindEQ__3_2_} and \eqref{GrindEQ__3_5_}, respectively. \textbf{Example 5.3. }Let us consider the problem \begin{equation} \label{GrindEQ__5_7_} J\left(x\left(\cdot \right)\right)=\int _{0}^{1}\left(\left(1-t\right)\dot{x}^{3} -3x\right)dt\to {\mathop{\min }\limits_{x\left(\cdot \right)}} ,\, \, \, \, \, x\left(0\right)=0, \, \, \, x\left(1\right)=1, \end{equation} where $x\left(\cdot \right)\in KC^{1} \left(\left[0,\, 1\right],\, R\right),\, L\left(\cdot \right)=\left(1-t\right)\dot{x}^{3} -3x$. Obviously, the admissible function $\bar{x}\left(t\right)=t$, $t\in \left[0,\, 1\right]$, is an extremal in problem \eqref{GrindEQ__5_7_}. It is a weak local minimum. This follows from the fact that for an arbitrary increment $\Delta x\left(\cdot \right)\in KC^{1} \left(\left[0,\, 1\right],\, R\right)$ for which $\Delta x\left(0\right)=\Delta x\left(1\right)=0$ and $\left\| \Delta \dot{x}\left(\cdot \right)\right\| _{L_{\infty } \left(\left[0,\, 1\right],\, R\right)} \le 3$, the inequality \[\Delta J\left(\bar{x}\left(\cdot \right)\right)=\int _{0}^{1}\left(1-t\right)\left(\Delta \dot{x}\left(t\right)\right)^{2} \left(3+\Delta \dot{x}\left(t\right)\right)dt\ge 0\] is fulfilled. \noindent Let us verify the validity of Theorem 2.4 proved in \cite{3}. Since $E\left(\bar{L}\right)\left(t,\, \xi \right)=\left(1-t\right)\, \xi ^{2} \left(\xi +3\right)$ and $\bar{L}_{\dot{x}\dot{x}} \left(t\right)=6\left(1-t\right)$, the equalities (2.5) and (2.6) of the paper \cite{3} hold at $t=1-0$. Therefore, it is clear that for $\bar{x}\left(t\right)=t$, $t\in \left[0,\, 1\right]$, at the point $t=1$ and for all $\xi \in R$ all the assumptions of Theorem 2.4 in the paper \cite{3} are fulfilled. 
Then, by virtue of the statement of Theorem 2.4 in the paper \cite{3}, there exists $\delta >0$ such that the inequality $-\xi ^{2} \left(\xi +6\right)\ge 0$ is valid for all $\xi \in \left(-\delta ,\, \delta \right)$. Obviously, there is no number $\delta >0$ for which the last inequality is fulfilled. However, the statement of part (j) of Theorem 3.7, i.e. inequality \eqref{GrindEQ__3_20_}, is fulfilled, for example, for $\delta =6$. Consequently, the statement formulated for $\tau _{-} $ in Theorem 2.4 of the paper \cite{3} is wrong; furthermore, Theorem 3.6 and part (j) of Theorem 3.7 are correct and strengthened versions of Theorem 2.4 in the paper \cite{3}. In a similar way we confirm that parts (i) and (ii) of Theorem 3.3 are correct and strengthened versions of Theorem 2.2 in the paper \cite{3}. \textbf{Example 5.4. }Let us consider the problem \begin{equation} \label{GrindEQ__5_8_} J\left(x\left(\cdot \right)\right)=\int _{0}^{1}\left(\left(\dot{x}_{1} -\dot{x}_{2}^{3} \right)^{2} +x_{1} \dot{x}_{2}^{2} \right)\, dt\to {\mathop{\min }\limits_{x\left(\cdot \right)}} ,\, \, \, \, x_{i} \left(0\right)=x_{i} \left(1\right)=0, \, \, \, i=1,\, 2, \end{equation} where $x=\left(x_{1} ,\, x_{2} \right)^{T} ,\, \, x\left(\cdot \right)\in KC^{1} \left(\, \left[0,\, 1\right],\, R^{2} \right),\, \, L\left(\cdot \right)=\left(\dot{x}_{1} -\dot{x}_{2}^{3} \right)^{2} +x_{1} \dot{x}_{2}^{2} $. We study the admissible function $\bar{x}\left(t\right)=0,\, \, t\in \left[0,\, 1\right]$ for a minimum. Along this function, allowing for \eqref{GrindEQ__1_6_} and \eqref{GrindEQ__2_10_}, we have \[\bar{L}\left(t\right)=0,\, \bar{L}_{x} \left(t\right)=\bar{L}_{\dot{x}} \left(t\right)=0,\, \, L_{x} \left(t,\, \bar{x}\left(t\right),\, \dot{\bar{x}}\left(t\right)+\xi \right)=\left(\xi _{2}^{2} ,\, 0\right)^{T} ,\] \begin{equation} \label{GrindEQ__5_9_} E\left(\bar{L}\right)\, \left(t,\, \xi \right)=\left(\xi _{1} -\xi _{2}^{3} \right)^{2} ,\, \, \, \xi ^{T} \bar{L}_{\dot{x}\dot{x}} \left(t\right)\xi =2\xi _{1}^{2} ,\, \, \Delta \bar{L}_{x} \left(t,\, \xi \right)=\left(\xi _{2}^{2} ,\, 0\right)^{T} . \end{equation} Obviously, the admissible function $\bar{x}\left(\cdot \right)=0$ is an extremal in problem \eqref{GrindEQ__5_8_} and along it the Weierstrass condition \eqref{GrindEQ__1_5_} is fulfilled. Considering \eqref{GrindEQ__5_9_}, we easily get that along the extremal $\bar{x}\left(\cdot \right)=0$ in problem \eqref{GrindEQ__5_8_}, for $\bar{\lambda }=\frac{1}{2} $ and for all points $\left(t,\, \eta \right)\in \left(0,\, 1\right)\times \, \left\{\, \, \left(\eta _{2}^{3} ,\, \eta _{2} \right)^{T} :\eta _{2} \in R\, \right\}\, $, the degeneration in the form \eqref{GrindEQ__4_1_} is fulfilled. However, the degeneration in the form \eqref{GrindEQ__4_13_} is fulfilled only for $\eta =0$. Consequently, unlike in Example 5.2, we confirm that, generally speaking, the degenerations in the forms \eqref{GrindEQ__3_1_}, \eqref{GrindEQ__3_2_}, \eqref{GrindEQ__3_5_} and \eqref{GrindEQ__4_1_} do not imply the degenerations in the forms \eqref{GrindEQ__3_17_}, \eqref{GrindEQ__3_18_}, \eqref{GrindEQ__3_21_} and \eqref{GrindEQ__4_13_}, respectively. We continue our study. Since the degeneration in the form \eqref{GrindEQ__4_13_} is fulfilled only for $\eta =0$, Theorem 4.3 keeps the extremal $\bar{x}\left(\cdot \right)=0$ among the contenders for a minimum. Note that Theorem 4.2 is also ineffective while studying the extremal $\bar{x}\left(\cdot \right)=0$. We now apply Theorem 4.1. 
Since along the extremal $\bar{x}\left(\cdot \right)=0$, for $\bar{\lambda }=\frac{1}{2} $ and for all $\left(t,\, \eta \right)\in \left(0,\, 1\right)\times \left\{\, \left(\eta _{2}^{3} ,\, \eta _{2} \right)^{T} :\eta _{2} \in R\, \right\}$, the degeneration in the form \eqref{GrindEQ__4_1_} is fulfilled, allowing for \eqref{GrindEQ__5_9_} the necessary condition \eqref{GrindEQ__4_2_} takes the form $\eta _{2}^{5} =0$ for all $\eta _{2} \in R$. Consequently, the necessary minimum condition \eqref{GrindEQ__4_2_} is not fulfilled for any $\eta_2 \ne 0$, so, by virtue of Theorem 4.1, the extremal $\bar{x}\left(\cdot \right)=0$ is not even a weak local minimum in problem \eqref{GrindEQ__5_8_}. In conclusion, we consider it promising to obtain analogues of some results of this paper, for example, analogues of Theorems 4.1-4.3, in the theory of singular optimal controls. \end{document}
\begin{document} \title{The double of representations of Cohomological Hall Algebra for $A_1$-quiver} \author{Xinli Xiao} \date{\today} \address{Mathematical Department, Kansas State University, Cardwell Hall 128, Manhattan, Kansas, 66502} \email{[email protected]} \keywords{Cohomological Hall algebra, Grassmannians, quivers, double construction} \baselineskip=18pt \begin{abstract} We compute two representations of COHA for the $A_1$-quiver. The two untwisted representations can be combined into a representation of the $D_{n+1}$ Lie algebra. The untwisted increasing representation and the twisted decreasing representation can be combined into a representation of a finite Clifford algebra. \end{abstract} \maketitle \topmargin -0.1in \renewcommand{\thepage}{\arabic{page}} \section{Introduction} The aim of this paper is to define and discuss two representations of the Cohomological Hall algebra, and to combine them into a single representation of the algebra which is called the ``full'' (or ``double'') COHA in \cite{Soi2014}. The Cohomological Hall algebra (COHA for short) was introduced in \cite{KoS2011}. The definition is similar to the definition of the conventional Hall algebra (see e.g. \cite{Sch2006}) or its motivic version (see e.g. \cite{KoS2008}). Instead of spaces of constructible functions on the stack of objects of an abelian category, one considers cohomology groups of the stacks. The product is defined through the pullback-pushforward construction. Details can be found in \cite{KoS2011}. By analogy with the conventional Hall algebra of a quiver, which gives the ``positive'' part of a quantization of the corresponding Lie algebra, one may want to define the ``double'' COHA, for which the one defined in \cite{KoS2011} would be a ``positive part''. Following the discussion in \cite{Soi2014}, we study the double of representations of COHA, and hope to find the double of COHA through its representations. This paper focuses on the $A_1$-quiver. Stable framed representations of the quiver are used to produce two representations of COHA. Since the moduli spaces of stable framed representations of the $A_1$-quiver are Grassmannians, we actually define two representations on the cohomology of Grassmannians. We show that the operators from these two representations form a $D_{n+1}$ Lie algebra. We also make a modification to the decreasing representation and form a twisted decreasing representation. The untwisted increasing operators and the twisted decreasing operators form a finite Clifford algebra. This confirms the conjecture from \cite{Soi2014} that the double of the $A_1$-COHA is the infinite Clifford algebra. \section{Two geometric representations of $A_1$-COHA} \subsection{COHA} Let $Q$ be a quiver with $N$ vertices. Given a dimension vector $\gamma=(\gamma_i)_{i=1}^N$, $ M_{\gamma}$ is the space of complex representations with fixed underlying vector space $\bigoplus_{i=1}^{N} \mathbb{C}^{\gamma_i}$ of dimension vector $\gamma$, and $G_{\gamma}=\prod_{i=1}^NGL_{\gamma_i}(\mathbb{C})$ is the associated gauge group. $[ M_{\gamma}/G_{\gamma}]$ is the stack of representations of $Q$ with fixed dimension vector $\gamma$. As a vector space, the COHA of $Q$ is defined to be $\mathcal H:=\bigoplus_{\gamma}\mathcal H_{\gamma}:=\bigoplus_{\gamma}H^*([M_{\gamma}/G_{\gamma}]):=\bigoplus_{\gamma}H^*_{G_{\gamma}}( M_{\gamma})$. 
Here by equivariant cohomology of a complex algebraic variety $M_{\gamma}$ acted by a complex algebraic group $G_{\gamma}$ we mean the usual (Betti) cohomology with coefficients in $\mathbb{Q}$ of the bundle $EG_{\gamma}\times_{G_{\gamma}}M_{\gamma}$ associated to the universal $G_{\gamma}$-bundle $EG_{\gamma}\rightarrow BG_{\gamma}$ over the classifying space of $G_{\gamma}$. The product $*:\mathcal H\otimes \mathcal H\rightarrow \mathcal H$ is defined by means of the pullback-pushforward construction in \cite{KoS2011}. \subsection{$A_1$-COHA}\label{incre_basis} Let $Q$ be $A_1$. $N=1$. Since there is only one representation with fixed underlying vector space $\mathbb{C}^{d}$ of dimension $d$, $M_{d}$ is a point and $G_{d}=GL_d(\mathbb{C})$. Therefore $\mathcal H_d=H^*_{GL_d(\mathbb{C})}( M_d)=\mathbb{Q}[x_{1,d},\ldots,x_{d,d}]^{S_d}$ is the algebra of symmetric polynomials in variables $x_{1,d},\ldots,x_{d,d}$. It is possible to talk about the geometric interpretation of these variables. They can be treated as the first Chern classes of the tautological bundles over the classifying space of $G_d$. For details see e.g. \cite{Xia2013}. The COHA $\mathcal H$ for quiver $A_1$ is described in \cite{KoS2011}. It is the infinite exterior algebra generated by odd elements $\phi_0, \phi_1,\phi_2\ldots$ with wedge $\wedge$ as its product. Generators $(\phi_i)_{i\geq 0}$ correspond to the additive generators $(x^i_{1,1})_{i\geq0}$ of $\mathcal H_1=\mathbb{Q}[x_{1,1}]$. A monomial in the exterior algebra $$\phi_{k_1}\wedge\ldots\wedge\phi_{k_d}\in\mathcal H_d,\quad 0\leq k_1<\ldots<k_d$$ corresponds to the Schur symmetric polynomial $s_{\lambda}(x_{1,d},\ldots,x_{d,d})$, where $\lambda=(\lambda_d,\ldots,\lambda_1)=(k_d-d+1,k_{d-1}-d+2,\ldots,k_1)$ is a partition. Let $\Phi_{\bf k}=\phi_{k_1}\wedge\ldots\wedge\phi_{k_d}$ with index ${\bf k}=(k_1,\ldots,k_d)$, $0\leq k_1<\ldots<k_d$. Denote by ${\bf k}(\lambda)$ the index related to the partition $\lambda$ and by $\lambda({\bf k})$ the partition related to the basis index $\bf k$. Then we have $\Phi_{{\bf k}(\lambda)}=s_{\lambda(\bf k)}$. \begin{comment} Following the discussion in \cite{KoS2011}, it is easy to see that the embedding $\mathcal H_1\hookrightarrow\mathcal H$ induces a homomorphism of graded algebras $\varphi:\bigwedge^*(\mathcal H_1)\cong\mathcal H$. $\mathcal H_1$ (equipped with the cup product $\cup$) can be identified with the polynomial ring $\mathbb{Q}[x]$. Let $\phi_0,\phi_1,\ldots$ be the basis of $\mathcal H_1$ that are related to $x^0,x^1,\ldots$ under the identification. Then $\{\phi_{k_1}\wedge\ldots\wedge\phi_{k_d}\}$, where $k_1<\ldots<k_d$ is an increasing sequence of $d$ non-negative integers, form a basis of $\bigwedge^d(\mathcal H_1)$. By indu \begin{equation} (\phi_{k_1}\wedge\ldots\wedge\phi_{k_d})(x_1,\ldots,x_d)=s_{\lambda}(x_1,\ldots,x_d), \end{equation} where $s_{\lambda}$ is the Schur polynomial belonging to the partition $\lambda=(k_d-d+1,\ldots,k_1)$. Thus $A_1$-COHA $\mathcal H$ is isomorphic to $\bigwedge^*(\mathcal H_1)$. \end{comment} \subsection{Stable framed representations}\label{section23} Fix a dimension vector ${\bf n}=(n_i)_{i=1}^N$. A {\it framed representation} of $Q$ of dimension vector $\gamma$ is a pair $(V,f)$, where $V$ is an ordinary representation of $Q$ of dimension $\gamma$ and $f=(f_i)_{i=1}^{N}$ is a collection of linear maps from $\mathbb{C}^{n_i}$ to $V_i$. 
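For instance, for the $A_1$-quiver with framing dimension $n$, a framed representation of dimension $d$ amounts to a single linear map $f\colon\mathbb{C}^{n}\rightarrow\mathbb{C}^{d}$, on which the gauge group $G_d=GL_d(\mathbb{C})$ acts by post-composition; this special case is taken up in detail below.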
The set of framed representations of dimension vector $\gamma$ with framed structure dimension vector $\bf n$ is denoted by ${\hat M}_{\gamma,{\bf n}}$. It carries a natural gauge group $G_{\gamma}$-action. See e.g. \cite{Rei2008a}. For the notion of stable framed representation of a quiver, see e.g. \cite{Rei2008} (more general framework of triangulated categories can be found in \cite{Soi2014}). We focus on the trivial stability condition. In this case, a framed representation is called {\it stable} if there is no proper (ordinary) subrepresentation of $V$ which contains the image of $f$. The set of stable framed representations of dimension vector $\gamma$ with framed structure dimension vector $\bf n$ is denoted by $ {\hat M}^{st}_{\gamma,{\bf n}}$. The gauge group $G_{\gamma}$ of $ M_{\gamma,{\bf n}}$ induces a $G_{\gamma}$-action on $\hat M^{st}_{\gamma,{\bf n}}$. The stack of stable framed representations $[\hat M^{st}_{\gamma,{\bf n}}/G_{\gamma}]$ is in fact a smooth projective scheme. We denote it by $\mathcal M^{st}_{\gamma,{\bf n}}$ and call it {\it the smooth model} of quiver $Q$ with dimension $\gamma$ and framed structure $\bf n$. The pullback-pushforward construction is applied to the cohomology of the scheme of stable framed representations. This construction leads to two representations of COHA for the quiver $Q$ which we describe below. Fix two dimension vectors $\gamma_1$ and ${\gamma_2}$. Set $\gamma=\gamma_1+\gamma_2$. Consider the scheme consisting of diagrams \begin{equation} \mathcal M^{st}_{\gamma_2,\gamma,{\bf n}}:=\{\xymatrix{0\ar[r]{}&E_1\ar[r]&E\ar[r]&E_2\ar[r]&0\\&&\mathbb{C}^{\bf n}\ar[u]^{f}\ar[ur]_{f_2}&&}\}, \end{equation} where $E_1\in M_{\gamma_1}$, $(E,f)\in \mathcal M^{st}_{\gamma,{\bf n}}$, $(E_2,f_2)\in \mathcal M^{st}_{\gamma_2,{\bf n}}$. $f:\mathbb{C}^{\bf n}\rightarrow E$ and $f_2:\mathbb{C}^{\bf n}\rightarrow E_2$ are the framed structures attached to $E$ and $E_2$ respectively. The subgroup of the automorphism group of $E$ which preserves the embedding of $E_1$ is denoted by $P_{\gamma_1,\gamma,{\bf n}}$. It plays the role of the automorphism group of $\mathcal M^{st}_{\gamma_2,\gamma,{\bf n}}$. The natural projections from the diagram to its components give the following diagram: \begin{equation}\label{corres} \xymatrix{&\mathcal M^{st}_{\gamma,{\bf n}}&\\&[\hat M^{st}_{\gamma_2,\gamma,{\bf n}}/P_{\gamma_2,\gamma,{\bf n}}]\ar[u]^{p}\ar[dr]^{p_2}\ar[dl]_{p_1}&\\[ M_{\gamma_1}/G_{\gamma_1}]&&\mathcal M^{st}_{\gamma_2,{\bf n}}}. \end{equation} The map $p_*(p_1^*(\phi_1)\cup p_2^*(\varphi_2))$ defines a morphism from $H^*({\mathcal M}^{st}_{\gamma_2,{\bf n}})$ to $H^*({\mathcal M}^{st}_{\gamma,{\bf n}})$ for $\phi_1\in \mathcal H_{\gamma_1}$ and $\varphi_2\in H^*(\mathcal M^{st}_{\gamma_2,{\bf n}})$. This morphism induces a representation of $\mathcal H=\bigoplus_{\gamma}\mathcal H_{\gamma}$ on $\bigoplus_{\gamma}H^*(\mathcal M^{st}_{\gamma,{\bf n}})$. It is called {\it the increasing representation} of COHA for the quiver $Q$, and denoted by $R^+_{\bf n}$. Similarly, the map $(p_2)_*(p_1^*(\phi_1)\cup p^*(\varphi))$ for $\phi_1\in \mathcal H_{\gamma_1}$ and $\varphi\in H^*(\mathcal M^{st}_{\gamma,{\bf n}})$ gives {\it the decreasing representation} $R^-_{\bf n}$ on the cohomology of the smooth model. In order to have well-defined representations one needs to show that $p$ and $p_2$ are proper morphisms. For $A_1$-case the properness is obvious (see Section \ref{2repn_A1} below). \subsection{$A_1$-case} Let $n$ be the framed structure dimension. 
A framed representation $(\mathbb{C}^d,f)$ of $A_1$-quiver is stable if and only if $f:\mathbb{C}^n\rightarrow \mathbb{C}^d$ is surjective. Thus the stable framed moduli space $\mathcal M^{st}_{d,n}$ is the Grassmannian (of quotient spaces) $Gr(d,n)$ for $0\leq d\leq n$, and empty for $d>n$. It is well known (see e.g. \cite{Ful1997}, p.$161$) that the cohomology of full flag variety $Fl(n)$ is isomorphic to $R(n)=\mathbb{Q}[x_1,\ldots,x_n]/(e_1(x_1,\ldots,x_n),\ldots,e_n(x_1,\ldots,x_n))$, where $e_i(x_1,\ldots,x_n)$ represents the $i$-th elementary symmetric polynomial. The cohomology of Grassmannian $Gr(d,n)$ is a subalgebra of $R(n)$ which is generated by Schur polynomials in variables $x_1,\ldots,x_d$. Thus we can use $s_{\lambda}(x_1,\ldots,x_d)$ to represent classes in $H^*(Gr(d,n))$. There is a natural projection $\pi: Fl(n)\rightarrow Gr(d,n)$. By abuse of notations, we use the same symbol $x_i$ to denote the classes in $Gr(d,n)$ whose pullback $\pi^*(x_i)$ is $x_i\in H^*(Fl(n))$. Classes in $H^*(Gr(d,n))$ have an alternative presentation. \begin{lemma}\label{Lemma2.1} In $H^*(Gr(d,n))$, $s_{\lambda}(x_1,\ldots,x_d)=(-1)^{|\lambda|}s_{\lambda'}(x_{d+1},\ldots,x_n)$, where $\lambda'$ is the transpose partition of $\lambda$. \end{lemma} \begin{proof} The above identity can be easily deduced from the identity $\prod_{i=1}^d\frac{1}{1-x_it}=\prod_{i=d+1}^n(1-x_it)$ (see e.g. \cite{Ful1997}, p.$163$) in the ring $R(n)[t]$. Since \begin{equation} \prod_{i=1}^d\frac{1}{1-x_it}=\sum_{r\geq0}h_r(x_1,\ldots,x_d)t^r \end{equation} and \begin{equation} \prod_{i=d+1}^n(1-x_it)=\sum_{r\geq0}e_r(x_{d+1},\ldots,x_n)(-t)^r \end{equation} where $h_r$ (resp. $e_r$) stands for the $r$-th complete symmetric polynomial (resp. elementary symmetric polynomial), we have \begin{equation} h_r(x_1,\ldots,x_d)=(-1)^re_r(x_{d+1},\ldots,x_n), \quad r\geq0. \end{equation} By Jacobi-Trudi identity, \begin{eqnarray*} s_{\lambda'}(x_1,\ldots,x_d)&=&det(e_{\lambda_i-i+j}(x_1,\ldots,x_d))\\ &=&det((-1)^{\lambda_i-i+j}h_{\lambda_i-i+j}(x_{d+1},\ldots,x_n))\\ &=&(-1)^{|\lambda|}det(h_{\lambda_i-i+j}(x_{d+1},\ldots,x_n))\\ &=&(-1)^{|\lambda|}s_{\lambda}(x_{d+1},\ldots,x_n). \end{eqnarray*} The third identity comes from the fact that \begin{eqnarray*} det(h_{\lambda_i-i+j}t^{\lambda_i-i+j})&=&\sum_{\omega}\sum_{i=1}^n(-1)^{\omega}h_{\lambda_i-i+\omega(i)}t^{\lambda_i-i+\omega(i)}\\ &=&\sum_{\omega}t^{\sum_{i=1}^n\lambda_i-i+\omega(i)}\sum_{i=1}^n(-1)^{\omega}h_{\lambda_i-i+\omega(i)}\\ &=&\sum_{\omega}t^{|\lambda|}\sum_{i=1}^n(-1)^{\omega}h_{\lambda_i-i+\omega(i)}\\ &=&t^{|\lambda|}det(h_{\lambda_i-i+j}). \end{eqnarray*} \end{proof} In the following we will use this ``transpose'' presentation to do some computations. \subsection{Two representations of $A_1$-COHA}\label{2repn_A1} The scheme $[M^{st}_{d_2,d,n}/P_{d_2,d,n}]$ in $A_1$-quiver case is isomorphic as a scheme to the two-step flag variety $F_{d_2,d,n}$, which is variety of the flags $\{\mathbb{C}^n\twoheadrightarrow \mathbb{C}^d\twoheadrightarrow \mathbb{C}^{d_2}\}$. Let $\phi_i$ be a generator of $\mathcal H_1$, and $s_{\lambda}$ be the Schur polynomial considered as an element of the cohomology of the Grassmannian $H^*(Gr(d_2,n))$ whose partition is $\lambda$. In this case, $p$ is the obvious projection from $F_{d_2,d,n}$ to $Gr(d,n)$ and $p_2$ is the obvious projection from $F_{d_2,d,n}$ to $Gr(d_2,n)$. 
Therefore both $p$ and $p_2$ are proper morphisms of stacks (which are in fact schemes), and the increasing and decreasing representations introduced in Section \ref{section23} are well defined. Now we want to compute the increasing representation by the formula $p_*(p_1^*(\phi_i)\cup p_2^*(s_{\lambda}))$. Note that in this case, $d_1=1$. Recall that $\phi_i$ represents the polynomial $\phi_i(x_{1,1})=x_{1,1}^i$. Using the geometric interpretation, $x^i_{1,1}$ is treated as the first Chern class of the tautological line bundle $\mathscr O(-i)$ over the classifying space of $G_1$. $\mathscr O(-i)$ will be pulled back through $p_1$ to the line bundle over $F_{d_2,d,n}$ associated to the corresponding character of $G_{d_1}$ when treating $G_{d_1}$ as a subquotient of $P_{d_2,d,n}$. Hence $p_1^*(\phi_i)$ will be the first Chern class of the line bundle described above, which is $\phi_i(x_{d_2+1})=x^i_{d_2+1}$. As homogenous spaces, $Gr(d,n)\approx GL_n(\mathbb{C})/P_{d,n}$, $Gr(d_2,n)\approx GL_n(\mathbb{C})/P_{d_2,n}$ and $F(d_2,d,n)\approx GL_n(\mathbb{C})/P_{d_2,d,n}$. We use the formula in \cite{Bri1996} to compute the pushforward. \begin{thm}\label{thm2.5}\cite{Bri1996}. Let $G$ be a connected reductive algebraic group over $\mathbb{C}$ and $B$ a Borel subgroup. Choose a maximal torus $T\subset B$ with Weyl group $W$. The set of all positive roots of the root system of $(G,T)$ is denoted by $\Delta^+$. Let $P\supset B$ be a parabolic subgroup of $G$, with the set of positive roots $\Delta^+(P)$ and Weyl group $W_P$. Let $L_{\alpha}$ be the complex line bundle over $G/B$ which is associated to the root $\alpha$. The Gysin homomorphism $f_*:H^*(G/B)\rightarrow H^*(G/P)$ is given by \begin{equation} f_*(p)=\sum_{w\in W/W_P}w\cdot\frac{p}{\prod_{\alpha\in\Delta^+\backslash\Delta^+(P)}c_1(L_{\alpha})}. \end{equation} \end{thm} \begin{comment} \begin{thm}\label{thm2.5}\cite{Ped2007}. Let $G$ be a compact, connected Lie group and $T$ a maximal torus in $G$. Let $H$ be a closed, connected subgroup of maximal rank in $G$ which contains $T$. Denote by $W_H$ the Weyl group of $T$ in $H$, and by $\Delta^+(H)$ the set of positive roots of $T$ in $H$. Let $L_{\alpha}$ be the complex line bundle over $G/T$ which is associated to the root $\alpha$. The Gysin homomorphism $f_*:H^*(G/T)\rightarrow H^*(G/H)$ is given by \begin{equation} f_*(p)=\sum_{w\in W_H}w\cdot\frac{p}{\prod_{\alpha\in\Delta^+(H)}c_1(L_{\alpha})}. \end{equation} \end{thm} \end{comment} Applying Thm \ref{thm2.5}, for $s_{\lambda}\in H^{*}(Gr(d_2,n))$, \begin{equation}\label{incre-formula} (\phi_i^+\cdot s_{\lambda})(x_1,\ldots,x_{d_2+1})=\sum_{i_1<\ldots<i_{d_2}}\frac{s_{\lambda}(x_{i_1},\ldots,x_{i_{d_2}})\phi_i(x_{i_{d_2+1}})}{\prod_{j=1}^{d_2}(x_{i_j}-x_{i_{d_2+1}})}. \end{equation} Similarly, the formula of the decreasing actions is \begin{equation}\label{decre-formula-1} (\phi_i^-\cdot s_{\lambda})(x_1,\ldots,x_{d_2-1})=\sum_{i_1<\ldots<i_{d_2}}\frac{s_{\lambda}(x_{i_1},\ldots,x_{i_{d_2}})\phi_i(x_{i_{d_2}})}{\prod_{j=d_2+1}^n(x_{i_{d_2}}-x_{j})}. \end{equation} \begin{comment} \begin{remark} It is well-known that $G/T\simeq G_{\mathbb{C}}/B$ and $G/H\simeq G_{\mathbb{C}}/P$, where $B$ is the Borel subgroup in the complexification $G_{\mathbb{C}}$ of $G$ and $P$ is the parabolic subgroup in $G_{\mathbb{C}}$ corresponding to $H$. For the reason we can use (\ref{incre-formula}) and (\ref{decre-formula-1}) in the framework of our paper. 
\end{remark} \end{comment} \begin{remark} In Formula (\ref{decre-formula-1}), variables $x_i$ for $i>d_2-1$ appear on the right side, which do not belong to the variables on the left side. This is not a contradiction because of the formula $s_{\lambda}(x_1,\ldots,x_d)=(-1)^{|\lambda|}s_{\lambda'}(x_{d+1},\ldots,x_n)$ by Lemma \ref{Lemma2.1}. More details will be discussed in the following section. \end{remark} \begin{remark} The construction above actually only defines an incrasing operator $\phi^+_{i,d}$ from $H^*(Gr(d,n))$ to $H^*(Gr(d+1,n))$ and an decreasing operator $\phi^-_{i,d}$ from $H^*(Gr(d,n))$ to $H^*(Gr(d-1,n))$. The increasing operator we need is $\phi^+_i=\sum_{d=0}^n\phi^+_{i,d}$. The decreasing operator we need is $\phi^-_i=\sum_{d=0}^n\phi^-_{i,d}$. We can then define {\it the twisted decreasing operator} by $\hat{\phi}^-_i=\sum_{d=0}^n(-1)^{d-1}\phi^-_{i,d}$. We call the representation formed by these operators {\it the twisted decreasing representation} and denote it by $\hat{R}^-_n$. \end{remark} \section{Increasing and decreasing operators} \subsection{Increasing operators} The key result of this subsection is adapted from \cite{Fra2013}. \begin{prop}\cite{Fra2013}. The increasing representation structure is induced by the open embedding $j: \hat M^{st}_{d,n}\rightarrow \hat M_{d,n}$. The induced map $j^*:\mathcal H\rightarrow R^+_n$ is $\mathcal H$-linear and surjective. The kernel of $j^*$ equals $\sum_{p\geq0, q>0}\mathcal H_p\wedge(e_q^n\cup \mathcal H_q)$, where $e_q=\prod_{i=1}^dx_i$. \end{prop} \begin{proof} In \cite{Fra2013}, the similar result for $n=1$ is proved. It can be easily generalized to $n>1$ case for $A_1$-quiver. \end{proof} The next lemma follows immediately from the definition of Schur polynomials. \begin{lemma} $s_{(\lambda_d+1,\lambda_{d-1}+1,\ldots,\lambda_1+1)}=e_ds_{\lambda}$ for $s_{\lambda}\in \mathbb{Q}[x_1,\ldots,x_d]^{S_d}$ and $e_d=\prod_{i=1}^dx_i$. Thus $e^n_d\cup \Phi_{\bf k}=\Phi_{\bf k+n}$ for $\Phi_{\bf k}\in \mathcal H_d$, and ${\bf n}=(n,n,\ldots,n)$. \end{lemma} Finally, we come to the result, whose proof is straightforward. \begin{prop}\label{propincrebasis} The increasing representation $R^+_n$ is a quotient of $\mathcal H=\bigwedge^*(\mathcal H_1)$ whose kernel is the subalgebra generated by $\{\phi_i\}_{i\geq n}$. Thus $R_n^+$ is isomorphic to $\bigwedge^*(V(n))$ where $V(n)$ is the linear space spanned by $\phi_0, \ldots, \phi_{n-1}$ and the action is given by wedge product from left. Then $\{\phi_{k_1}\wedge\ldots\wedge\phi_{k_d}\}_{k_1<\ldots<k_d}, 0\leq d\leq n-1$ form a basis of $R^+_n$. \end{prop} \subsubsection{Two presentations of classes in the cohomology of Grassmannian}\label{dual_1} Proposition \ref{propincrebasis} implies that we can use the notations introduced in section \ref{incre_basis} to represent cohomology classes of Grassmannians, as well as those in COHA, since they share the same product structure. Thus in $H^*(Gr(d,n))$, $\Phi_{\bf k}=\phi_{k_1}\wedge\ldots\wedge\phi_{k_d}(x_1,\ldots,x_d)$ with index ${\bf k}=(k_1,\ldots,k_d)$ can represent the Schur polynomial $s_{\lambda({\bf k})}(x_{1,d},\ldots,x_{d,d})$, where $0\leq k_1<\ldots<k_d\leq n-1$ and $\lambda=(\lambda_d,\ldots,\lambda_1)=(k_d-d+1,k_{d-1}-d+2,\ldots,k_1)$ is a partition of length $\leq n$. Let $\lambda'$ be the transpose partition of $\lambda$, and ${\bf k'}={\bf k}(\lambda')$. By Lemma \ref{Lemma2.1}, $\Phi_{\bf k}(x_1,\ldots,x_d)=(-1)^{|\lambda|}\Phi_{\bf k'}(x_{d+1},\ldots,x_n)$. 
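As a simple illustration of this identity, take $n=2$ and $d=1$: in $H^{*}(Gr(1,2))$ the relation $e_1(x_1,x_2)=x_1+x_2=0$ gives $s_{(1)}(x_1)=x_1=-x_2=(-1)^{|\lambda|}s_{\lambda'}(x_2)$, where $\lambda=\lambda'=(1)$, in agreement with Lemma \ref{Lemma2.1}.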
$\Phi_{\bf k}$ is called {\it the ordinary presentation} of the correspondent class $s_{\lambda}$, and $(-1)^{|\lambda|}\Phi_{\bf k'}$ is called {\it the transpose presentation}. \begin{comment} The increasing operators can be realized as the wedge product operators over $\bigwedge^n(V)$. From the previous section, the formula for the operators are \begin{equation} \begin{split} (f*g)(x_1,\ldots,x_d)&=\sum_{\text{shuffle}}\frac{f(x_{i_1},\ldots,x_{i_p})g(x_{j_1},\ldots,x_{i_q})}{\prod_{\mu=1}^p\prod_{v=1}^q(x_{j_v}-x_{i_{\mu}})}\\ sfd&=\\ dsf& \end{split} \end{equation} We want to prove that the representation is isomorphic to $\bigwedge^*(V)$. \end{comment} \subsection{Decreasing operators} \begin{comment} We use the basis used in the increasing representations to study the decreasing representation. The formula for decreasing representations is \begin{equation} (f*g)(x_1,\ldots,x_d)=\sum_{\text{shuffle}}\frac{f(x_{i_1},\ldots,x_{i_p})g(x_{j_1},\ldots,x_{i_q})}{\prod_{\mu=1}^p\prod_{v=1}^q(x_{j_v}-x_{i_{\mu}})}. \end{equation} \end{comment} \begin{comment} \begin{lemma} Schur polynomials structure on $H^*(Gr(d,n))$. \end{lemma} \begin{lemma} In $H^*(Gr(d,N))$, $s_{\lambda}(x_1,\ldots,x_d)=(-1)^{|\lambda|}s_{\lambda'}(x_{d+1},\ldots,x_n)$. [dual] \end{lemma} \end{comment} Our goal is to understand the decreasing representation using the basis $\{\Phi_{\bf k}\}_{\bf k}$ of $R^+_n$. From Section \ref{dual_1}, the equation (\ref{decre-formula-1}) can be rewritten as \begin{equation}\label{decre-formula-ture} \begin{split} (\phi_i^-\cdot \Phi_{\bf k})(x_1,\ldots,x_{d_2-1})&=\sum_{i_1<\ldots<i_{d_2}}\frac{\Phi_{\bf k}(x_{i_1},\ldots,x_{i_{d_2}})\phi_i(x_{i_{d_2}})}{\prod_{j=d_2+1}^n(x_{i_{d_2}}-x_{j})}\\ &= (-1)^{|\lambda(\bf k)|}\sum_{i_{d_2+1}<\ldots<i_n}\frac{\Phi_{\bf k'}(x_{i_{d_2+1}},\ldots,x_{i_n})\phi_i(x_{i_{d_2}})}{\prod_{j=d_2+1}^n(x_{i_{d_2}}-x_{i_{j}})}\\ &=(-1)^{|\lambda|+n-d_2}(\phi_i^+\cdot\Phi_{\bf k'})(x_{d_2},\ldots,x_n). \end{split} \end{equation} This formula suggests an algorithm. Start from an ordinary presentation of a class $ \Phi_{\bf k}= \phi_{k_1}\wedge\ldots\wedge\phi_{k_d}$ in $H^*(Gr(d,n))$, where ${\bf k}=(k_1,\ldots,k_d)$, and $0\leq k_1<\ldots<k_d\leq n-1$. First we change $\Phi_{\bf k}(x_1,\ldots,x_d)$ to $(-1)^{|\lambda(\bf k)|}\Phi_{\bf k'}(x_{d+1},\ldots,x_n)$ by Lemma \ref{Lemma2.1}. Then apply $\phi_i^-$ to $\Phi_{\bf k'}$ using formula (\ref{decre-formula-ture}) and Proposition \ref{propincrebasis}. Finally change the result back to the ordinary presentation. We need the following lemma to help us to do these transformations. \begin{lemma} If $\phi_r$ appears in $\Phi_{{\bf k}'(\lambda)}$, $\phi_{n-r-1}$ will not appear in $\Phi_{{\bf k}(\lambda)}$. On the other hand, if $\phi_r$ doesn't appear in $\Phi_{{\bf k}'(\lambda)}$, $\phi_{n-r-1}$ will appear in $\Phi_{{\bf k}(\lambda)}$. \end{lemma} \begin{proof} From Section \ref{propincrebasis}, $\lambda=(\lambda_d,\ldots,\lambda_1)=(k_d-d+1,k_{d-1}-d+2,\ldots,k_1)$ is a partition of length $\leq n$. The transpose partition is defined by $\lambda'_j=\#\{\lambda_i\geq n-d+1-j\}$ for $1\leq j\leq n-d$. Thus we have \begin{equation} \lambda_{d-i+1}=\begin{cases} n-d&\text{if}\ 1\leq i\leq \lambda'_1,\\ n-d-j&\text{if}\ \lambda'_j+1\leq i\leq \lambda'_{j+1}\ \text{for}\ 1\leq j\leq n-d-1,\\ 0&\text{if}\ \lambda'_{n-d}+1\leq i\leq d. 
\end{cases} \end{equation} From $\lambda=(\lambda_d,\ldots,\lambda_1)=(k_d-d+1,k_{d-1}-d+2,\ldots,k_1)$, it immediately implies \begin{equation} k_{d-i+1} =\begin{cases} n-i&\text{if}\ 1\leq i\leq \lambda'_1,\\ n-i-j&\text{if}\ \lambda'_j+1\leq i\leq \lambda'_{j+1}\ \text{for}\ 1\leq j\leq n-d-1,\\ d-i&\text{if}\ \lambda'_{n-d}+1\leq i\leq d. \end{cases} \end{equation} Then $n-k'_{j+1}= n-j-\lambda'_{j+1}\leq k_{d-i+1}=n-i-j\leq n-j-\lambda'_j-1= n-{k'_j}-2$ if $\lambda'_j+1\leq i\leq \lambda'_{j+1}$ for $1\leq j\leq n-d-1$, or $0=d-d\leq k_{d-i+1}=d-i\leq d-\lambda'_{n-d}-1=n-k'_{n-d}-2$ if $\lambda'_{n-d}+1\leq i\leq d$, or $n-k'_1=n-\lambda'_1\leq k_d=n-i\leq n-1$. Therefore $k_{d-i+1}$ would run over all integers between $n-k'_{j+1}$ and $n-k'_j-2$, or between $0$ and $n-k'_{n-d}-2$, or between $n-k'_1$ and $n-1$. If $\phi_r$ doesn't appear in $\Phi_{{\bf k'}(\lambda)}$, there are three cases. If $k'_s< r< k'_{s+1}$ for $1\leq s\leq n-d-1$, $n-k'_{s+1}\leq n-r-1\leq n-k'_s-2$. If $k'_{n-d}<r\leq d$, $0\leq n-r-1\leq n-k'_{n-d}-2$. If $0\leq r<k'_1$, $n-k'_1\leq n-r-1\leq n-1$. This means that there exists some $1\leq i\leq d$ such that $k_{d-i+1}=n-r-1$. If $\phi_r$ appear in $\Phi_{{\bf k}'(\lambda)}=\phi_{k'_1}\wedge\ldots\wedge\phi_{k'_{n-d}}$, let $r=k'_s$. Then $k_{d-i+1}$ can never be $n-k'_s-1=n-r-1$ for $1\leq i\leq d$. \end{proof} \begin{comment} since $n-k'_{j+1}\leq k_{d-i+1}\leq n-{k'_j}-2$, let $j=s$ and we get $n-k'_{s+1}\leq k_{d-i+1}\leq n-{k'_s}-2$ for $\lambda'_s+1\leq i\leq \lambda'_{s+1}$. $k'_s<r<k'_{s+1}$ is the same as $k'_s+1\leq r\leq k'_{s+1}-1$. Since the cohomology of Grassmannian is $S[x_1,\ldots,x_n]/R$, where $R$ is the subalgebra of symmetric polynomials in $S[x_1,\ldots,x_n]$, we have $S_{\lambda}(x_1,\ldots, x_d)=S_{\lambda'}(x_{d+1},\ldots,x_{n})$, where $\lambda$ is a partition of $d\times (n-d)$ and $\lambda'$ is the conjugate of $\lambda$. Therefore the formula for decreasing representation is[CHECK!] \begin{equation} (\phi_r^-\cdot s_{\lambda})(x_1,\ldots,x_d)=\sum_{\text{shuffle}}\frac{f(x_{i_1},\ldots,x_{i_p})g(x_{j_1},\ldots,x_{i_q})}{\prod_{\mu=1}^p\prod_{v=1}^q(x_{j_v}-x_{i_{\mu}})}, \end{equation} which can be written in the following forms \begin{equation} \phi_r\wedge f(x_{d+1},\ldots,x_n). \end{equation} The above argument For $\phi_i^-\cdot s_{\lambda}$ where $s_\lambda\in \bigwedge ^*(V)$, we first transfer $s_{\lambda}(x_1,\ldots,x_d)$ to $s_{\lambda'}(x_{d+1},\ldots,x_{n})$, do the wedge product, and then change the partition back (along with the basis). We now want to compute $\phi_r^-\cdot \phi_{k_1}\wedge\ldots\wedge\phi_{k_d}$. Let us start from $\phi_{k_1}\wedge\ldots\wedge\phi_{k_d}$. By section \ref{incre_basis}, the associated partition $\lambda=(\lambda_d,\ldots,\lambda_1)$ is given by $\lambda_i=k_i-i+1$ for $1\leq i\leq d$. The conjugate of $\lambda$ is $\lambda'=(\lambda'_{n-d},\ldots,\lambda'_{1})$ where $\lambda'_j=\#\{\lambda_i\geq n-d+1-j\}$ for $1\leq j\leq n-d$. Then the associated conjugate Schur polynomial is $(-1)^{|\lambda|}s_{\lambda'}(x_{d+1},\ldots,x_n)=(-1)^{|\lambda|}\phi_{k'_{1}}\wedge\ldots\wedge \phi_{k'_{n-d}}(x_{d+1},\ldots,x_n)$, where $k'_j=\lambda'_j+j-1$ for $1\leq j\leq n-d$. It immediately follows that $\#\{\lambda_j=n-d+1-i\}=\lambda'_i-\lambda'_{i-1}$ for $2\leq i\leq n-d$ and $\#\{\lambda_j=n-d\}=\lambda'_1$. Now assume $r$ does not appear in $k'$. Let $k'_{s}< r<k'_{s+1}$. 
\begin{defn}
We introduce the {\it right partial derivative operator} $\partial_i^R: \bigwedge^*(V(n))\rightarrow\bigwedge^*(V(n))$ in order to state the following proposition. For $\Phi_{\bf k}=\phi_{k_1}\wedge\ldots\wedge\phi_{k_d}$, if $\phi_{i}$ appears in $\Phi_{\bf k}$, say $\phi_i=\phi_{k_j}$, then $\partial_{i}^R(\Phi_{\bf k})=(-1)^{d-j}\phi_{k_1}\wedge\ldots\wedge\hat{\phi}_{k_j}\wedge\ldots\wedge\phi_{k_d}$. If $\phi_i$ does not appear in $\Phi_{\bf k}$, then $\partial_{i}^R(\Phi_{\bf k})=0$. The {\it left partial derivative operator} $\partial_i^L:\bigwedge^*(V(n))\rightarrow\bigwedge^*(V(n))$ is defined in a similar way: if $\phi_{i}=\phi_{k_j}$ appears in $\Phi_{\bf k}$, then $\partial_{i}^L(\Phi_{\bf k})=(-1)^{j-1}\phi_{k_1}\wedge\ldots\wedge\hat{\phi}_{k_j}\wedge\ldots\wedge\phi_{k_d}$, and $\partial_{i}^L(\Phi_{\bf k})=0$ if $\phi_i$ does not appear in $\Phi_{\bf k}$. It is easy to see that $\partial_i^{R}=(-1)^{d-1}\partial^L_i$ on $\bigwedge^d(V(n))$.
\end{defn}
\begin{prop}
The decreasing operators are the right partial derivative operators on $\bigwedge ^*(V(n))$: $\phi_r^-\cdot\Phi_{\bf k}=\partial_{n-r-1}^R(\Phi_{\bf k})$.
\end{prop}
\begin{proof}
We want to compute $\phi^-_r\cdot \Phi_{\bf k}$. Based on formula (\ref{decre-formula-ture}), we have
\begin{equation}
\begin{split}
(\phi^-_r\cdot \Phi_{\bf k})(x_1,\ldots,x_{d-1})&=(-1)^{|\lambda|+n-d}(\phi_r^+\cdot\Phi_{\bf k'})(x_{d},\ldots,x_n)\\
&=(-1)^{|\lambda|+n-d}(\phi_r\wedge\phi_{k'_1}\wedge\ldots\wedge\phi_{k'_{n-{d}}})(x_{d},\ldots,x_n).
\end{split}
\end{equation}
If $\phi_{n-r-1}$ does not appear in $\Phi_{\bf k}$, then $\phi_r$ appears in $\Phi_{\bf k'}$, so $\phi_r^-\cdot\Phi_{\bf k}(x_1,\ldots,x_{d-1})=(\phi_r\wedge\phi_{k'_1}\wedge\ldots\wedge\phi_r\wedge\ldots\wedge\phi_{k'_{n-d}})(x_{d},\ldots,x_{n})=0$.
If $\phi_{n-r-1}$ appears in $\Phi_{\bf k}=\phi_{k_1}\wedge\ldots\wedge\phi_{k_d}$, then $\phi_r$ does not appear in $\Phi_{\bf k'}=\phi_{k'_1}\wedge\ldots\wedge\phi_{k'_{n-d}}$. Assume $k'_s< r<k'_{s+1}$. We have
\begin{equation}
\phi_r\wedge\phi_{k'_1}\wedge\ldots\wedge\phi_{k'_{n-{d}}}=(-1)^{s}\phi_{k'_1}\wedge\ldots\wedge\phi_{k'_s}\wedge\phi_r\wedge\phi_{k'_{s+1}}\wedge\ldots\wedge\phi_{k'_{n-{d}}}.
\end{equation}
We have to change this back to the ordinary presentation. First, let us find the partition associated to this polynomial. The index ${\bf l'}=(l'_{1},\ldots,l'_{n-d+1})$ is given by
\begin{equation}
l'_i=\begin{cases}
k'_{i-1}&s+2\leq i\leq n-d+1,\\
r&i=s+1,\\
k'_i&1\leq i\leq s.
\end{cases}
\end{equation}
Then the new partition $\mu'=(\mu'_{n-d+1},\ldots,\mu'_1)$ is given by
\begin{equation}
\mu'_i=\begin{cases}
\lambda'_{i-1}-1&s+2\leq i\leq n-d+1,\\
r-s&i=s+1,\\
\lambda'_i&1\leq i\leq s.
\end{cases}
\end{equation}
The next step is to recover the partition $\mu$ from its transpose $\mu'$. From the definition of the transpose partition, $\mu'_j=\#\{\mu_i\geq n-d+2-j\}$ for $1\leq j\leq n-d+1$. Then
\begin{equation}
\mu_{d-i}=\begin{cases}
n-d-j &\text{if}\ \lambda'_{j}\leq i\leq\lambda'_{j+1}-1\ \text{and}\ s+1\leq j\leq n-d-1,\\
n-d-s &\text{if}\ r-s+1\leq i\leq \lambda'_{s+1}-1,\\
n-d+1-s &\text{if}\ \lambda'_s+1\leq i\leq r-s,\\
n-d+1-j&\text{if}\ \lambda'_j+1\leq i\leq \lambda'_{j+1}\ \text{and}\ 2\leq j\leq s-1,\\
n-d+1&\text{if}\ 1\leq i\leq \lambda'_1.
\end{cases}
\end{equation}
By comparing it with
\begin{equation}
\lambda_{d-i+1}=\begin{cases}
n-d-j&\text{if}\ \lambda'_j+1\leq i\leq \lambda'_{j+1}\ \text{and}\ 2\leq j\leq n-d-1,\\
n-d&\text{if}\ 1\leq i\leq \lambda'_1,
\end{cases}
\end{equation}
we notice that $\mu_{i}=\lambda_{i+1}+1$ for $d-r+s\leq i\leq d-1$ and $\mu_{i}=\lambda_i$ for $1\leq i\leq d-r+s-1$. Therefore, since $l_i=\mu_i+i-1$ for $1\leq i\leq d-1$ and $k_j=\lambda_j+j-1$ for $1\leq j\leq d$, it is easy to see that $l_i=k_{i+1}$ for $d-r+s\leq i\leq d-1$ and $l_{i}=k_i$ for $1\leq i\leq d-r+s-1$. Thus the resulting presentation is $(-1)^{n-d+s+|\lambda|+|\mu|}\phi_{k_1}\wedge\ldots\wedge\hat{\phi}_{n-r-1}\wedge\ldots\wedge\phi_{k_d}=(-1)^{r+s}\phi_{k_1}\wedge\ldots\wedge\hat{\phi}_{n-r-1}\wedge\ldots\wedge\phi_{k_d}=\partial_{n-r-1}^R(\Phi_{\bf k})$, that is, the right partial derivative with respect to $\phi_{n-r-1}$ applied to $\Phi_{\bf k}$. If $r<k'_1$ or $r>k'_{n-d}$, a similar computation leads to the same result.
\end{proof}
\subsection{Twisted decreasing operators}
From the above computations, the following proposition about the twisted decreasing operators is immediate.
\begin{prop}
The twisted decreasing operators are the left partial derivative operators on $\bigwedge ^*(V(n))$: $\hat{\phi}_r^-\cdot\Phi_{\bf k}=\partial_{n-r-1}^L(\Phi_{\bf k})$.
\end{prop}
\section{The double of representations}
\subsection{The double of untwisted representations}
Let $V(n)$ be the $n$-dimensional vector space spanned by $\{\phi_i \}_{i=0}^{n-1}$.
The increasing and decreasing representations can be realized as creation operators $\{\alpha_i^+\}_{i=0}^{n-1}$ and annihilation operators $\{\alpha_i^-\}_{i=0}^{n-1}$ on $\bigwedge^*(V(n))$. Here $\alpha_i^+=\phi_i^+$ is the left wedge product, and $\alpha_i^-=\phi_{n-i-1}^-$ is the right partial derivative $\partial_i^R$. Define $H=[\alpha_0^+,\alpha_0^-]$ and the following operators for $0\leq i\leq n-1$: \begin{equation*} T_i=\frac{\alpha_i^++[H,\alpha_i^+]/2}{2},\quad S_i=\frac{\alpha_i^--[H,\alpha_i^-]/2}{2}. \end{equation*} Then define the following operators \begin{equation*} \begin{split} E_{0}&=-\frac{\alpha_0^-+[H,\alpha_0^-]/2}{2}, \quad F_0=\frac{\alpha_0^+-[H,\alpha_0^+]/2}{2},\\ E_1&=S_0,\quad F_1=T_0,\\ E_i&=[T_{i-2},S_{i-1}],\quad F_i=[T_{i-1},S_{i-2}], \quad \text{for}\ \ 2\leq i\leq n,\\ H_i&=[E_i,F_i],\quad \text{for}\ \ 0\leq i\leq n. \end{split} \end{equation*} In the following, let $P_k$ be an arbitrary degree $k$ monomial in $\bigwedge^*(V(n))$. Denote by $R_i^j$ the operator which change the factor $\phi_i$ in $P_k$ to $\phi_j$. \begin{lemma}\label{want-to-prove-matrix} For $2\leq i\leq n$, \begin{enumerate} \item $H(P_k)=(-1)^{k-1}P_k$. \item $E_0(P_k)=-\partial_0^R (P_k)$ if $k$ is even, and $\phi_0$ is included in $P_k$. Otherwise it's 0. \item $F_0(P_k)=\phi_0\wedge P_k$ if $k$ is odd, and $\phi_0$ is NOT included in $P_k$. Otherwise it's 0. \item $E_1(P_k)=\partial_0^R (P_k)$ if $k$ is odd, and $\phi_0$ is included in $P_k$. Otherwise it's 0. \item $F_1(P_k)=\phi_0\wedge P_k$ if $k$ is even, and $\phi_0$ is NOT included in $P_k$. Otherwise it's 0. \item $S_{i-1}(P_k)=\partial_{i-1}^R (P_k)$ if $k$ is odd, and $\phi_{i-1}$ is included in $P_k$. Otherwise it's 0. \item $T_{i-1}(P_k)=\phi_{i-1}\wedge P_k$ if $k$ is even, and $\phi_{i-1}$ is NOT included in $P_k$. Otherwise it's 0. \item $E_i(P_k)=R_{i-1}^{i-2}(P_k)$ if $\phi_{i-1}$ is included in $P_k$ and $\phi_{i-2}$ is NOT. Otherwise it's 0. \item $F_i(P_k)=R_{i-2}^{i-1}(P_k)$ if $\phi_{i-2}$ is included in $P_k$ and $\phi_{i-1}$ is NOT. Otherwise it's 0. \item $H_0(P_k)=\begin{cases} -P_k \quad &\text{$k$ is even and $\phi_0$ is included in $P_k$}\\ P_k &\text{$k$ is odd and $\phi_0$ is NOT included in $P_k$}\\ 0&otherwise \end{cases}$. \item $H_1(P_k)=\begin{cases} P_k \quad &\text{$k$ is even and $\phi_0$ is NOT included in $P_k$}\\ -P_k &\text{$k$ is odd and $\phi_0$ is included in $P_k$}\\ 0&otherwise \end{cases}$. \item $H_i(P_k)=\begin{cases} -P_k \quad &\text{$\phi_{i-1}$ is included in $P_k$ and $\phi_{i-2}$ is NOT included}\\ P_k &\text{$\phi_{i-2}$ is included in $P_k$ and $\phi_{i-1}$ is NOT included}\\ 0&otherwise \end{cases}$. \end{enumerate} \end{lemma} \begin{proof} The proof of the lemma is straightforward. \end{proof} \begin{comment} \begin{proof} $E_0(P_n)=\frac{1+(-1)^n}{2}\phi_1^-(P_n)$, $F_0(P_n)=\frac{1-(-1)^n}{2}\phi_1^+(P_n)$, $E_1(P_n)=\frac{1-(-1)^n}{2}\phi_1^-(P_n)$, $F_1(P_n)=\frac{1+(-1)^n}{2}\phi_1^+(P_n)$. \end{proof} \end{comment} The main theorem below implies that the combination of two representations $R^+_n$ and $R^-_n$ of $A_1$-COHA forms an $D_{n+1}$-Lie algebra. \begin{thm} The above operators satisfy the Serre relations for $0\leq i, j\leq n$: \begin{enumerate} \item $[H_i,H_j]=0$, \item $[E_i,F_j]=\delta_{ij}H_i$, \item $[H_i,E_j]=a_{ji}E_j,\quad [H_i,F_j]=-a_{ji}F_j$, \item $(adE_i)^{-a_{ji}+1}(E_j)=0$, if $i\neq j$, \item $(adF_i)^{-a_{ji}+1}(F_j)=0$, if $i\neq j$, \end{enumerate} where $(a_{ij})$ is the Cartan matrix for $D_{n+1}$-Lie algebras. 
\end{thm}
\begin{proof}
The first statement holds since each $H_i$ is diagonal by Lemma \ref{want-to-prove-matrix}. The second is due to the definition of $H_i$ for $\delta_{ij}=1$. For the other relations, we need to check the following relations, which can be easily verified using Lemma \ref{want-to-prove-matrix}:
\begin{enumerate}
\item $a_{ii}=2$, for $0\leq i\leq n$,
\item $a_{21}=a_{20}=a_{12}=a_{02}=a_{i-1,i}=a_{i,i-1}=-1$ for $3\leq i\leq n$,
\item $a_{10}=a_{01}=a_{0,i}=a_{i,0}=a_{1,i}=a_{i,1}=a_{i,j}=a_{j,i}=0$, for $3\leq i\leq n$, $2\leq j\leq n$ and $|i-j|>1$,
\item $[E_0,F_1]=[E_1,F_0]=[E_0,F_j]=[E_1,F_j]=[E_i,F_0]=[E_i,F_1]=[E_i,F_j]=0$, for $2\leq i\neq j\leq n$,
\item $[E_2,[E_2,E_0]]=[E_0,[E_0,E_2]]=[F_2,[F_2,F_0]]=[F_0,[F_0,F_2]]=0$,
\item $[E_{i-1},[E_{i-1},E_i]]=[E_{i},[E_{i},E_{i-1}]]=[F_{i-1},[F_{i-1},F_i]]=[F_{i},[F_{i},F_{i-1}]]=0$, for $2\leq i\leq n$,
\item $[E_0,E_1]=[E_0,E_i]=[E_1,E_i]=[E_i,E_j]=0$ for $3\leq i\leq n$, $2\leq j\leq n$ and $|i-j|>1$,
\item $[F_0,F_1]=[F_0,F_i]=[F_1,F_i]=[F_i,F_j]=0$ for $3\leq i\leq n$, $2\leq j\leq n$ and $|i-j|>1$.
\end{enumerate}
\end{proof}
\subsection{The double of twisted representations}
Use the setting from the previous subsection. Let $\hat{\alpha}_i^-=\hat{\phi}_{n-i-1}^-$ be the left partial derivative $\partial_i^L$. Now we use the creation operators $\{\alpha_i^+\}_{i=0}^{n-1}$ and the twisted annihilation operators $\{\hat{\alpha}_i^-\}_{i=0}^{n-1}$ to form representations. The relations in the following theorem show that the double of the twisted representations forms a finite Clifford algebra.
\begin{thm}
The operators $\{\alpha_i^{+}\}_{i=0}^{n-1}$ and $\{\hat{\alpha}_i^{-}\}_{i=0}^{n-1}$ satisfy the following relations:
\begin{enumerate}
\item $\alpha^+_i\alpha^+_j+\alpha^+_j\alpha^+_i=0$,
\item $\hat{\alpha}^-_i\hat{\alpha}^-_j+\hat{\alpha}^-_j\hat{\alpha}^-_i=0$,
\item $\alpha^+_i\hat{\alpha}^-_j+\hat{\alpha}^-_j\alpha^+_i=\delta_{i,j}$.
\end{enumerate}
\end{thm}
\begin{proof}
The proof is straightforward by applying the formulas in the definitions of the operators to the basis vectors of $\bigwedge^*(V(n))$.
\end{proof}
\section{Further discussions}
For fixed $n$, the double of $R^+_n$ and $R^-_n$ forms a $D_{n+1}$-Lie algebra, and the double of $R^+_n$ and $\hat{R}^-_n$ forms a finite Clifford algebra. This leads to the following conjecture stated in \cite{Soi2014}.
\begin{conj}\cite{Soi2014}
The full COHA for the quiver $A_1$ is isomorphic to the infinite Clifford algebra $Cl_c$ with generators $\phi_n^{\pm},\ n\in\mathbb{Z}$ and the central element $c$, subject to the standard anticommuting relations between the $\phi_n^+$ (resp.\ $\phi_n^-$) as well as the relation $\phi_n^+\phi_m^-+\phi_m^-\phi_n^+=\delta_{n,m}c$.
\end{conj}
\begin{remark}
As stated in \cite{Soi2014}, in the case of finite-dimensional representations we have $c\mapsto0$ and we see two representations of the infinite Grassmann algebra, which are combined in the representations of the orthogonal Lie algebra.
\end{remark}
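As a minimal illustration of the finite relations above, consider $n=1$ (this value is chosen only for the purpose of the illustration), so that $\bigwedge^*(V(1))$ has basis $\{1,\phi_0\}$. Then $\alpha_0^+(1)=\phi_0$, $\alpha_0^+(\phi_0)=0$, $\hat{\alpha}_0^-(1)=0$ and $\hat{\alpha}_0^-(\phi_0)=1$, so $\alpha^+_0\hat{\alpha}^-_0+\hat{\alpha}^-_0\alpha^+_0$ acts as the identity on both basis vectors, in agreement with the relation $\alpha^+_i\hat{\alpha}^-_j+\hat{\alpha}^-_j\alpha^+_i=\delta_{i,j}$.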
\end{document}
\begin{document} \baselineskip=15pt \title{Graded Brauer tree algebras} \author{Dusko Bogdanic} \address{Dusko Bogdanic \newline Mathematical Institute \\ University of Oxford \\ \newline 24-29 St.\ Giles \\ Oxford OX1 3LB \\ United Kingdom} \email{[email protected]} \date{} \begin{abstract} In this paper we construct non-negative gradings on a basic Brauer tree algebra $A_{\Gamma}$ corresponding to an arbitrary Brauer tree $\Gamma$ of type $(m,e)$. We do this by transferring gradings via derived equivalence from a basic Brauer tree algebra $A_S$, whose tree is a star with the exceptional vertex in the middle, to $A_{\Gamma}$. The grading on $A_S$ comes from the tight grading given by the radical filtration. To transfer gradings via derived equivalence we use tilting complexes constructed by taking Green's walk around $\Gamma$ (cf.\ [\ref{Zak}]). By computing endomorphism rings of these tilting complexes we get graded algebras. We also compute ${\rm Out}^K(A_{\Gamma})$, the group of outer automorphisms that fix isomorphism classes of simple $A_{\Gamma}$-modules, where $\Gamma$ is an arbitrary Brauer tree, and we prove that there is a unique grading on $A_{\Gamma}$ up to graded Morita equivalence and rescaling. \end{abstract} \maketitle \section{Introduction} \noindent In this paper we transfer gradings between Brauer tree algebras via derived equivalences. Our work has been motivated by Theorem 6.4 in an unpublished paper [\ref{Rou}] by Rouquier. This theorem says that gradings are compatible with derived equivalences. We show how this idea applies to the class of Brauer tree algebras. For an arbitrary Brauer tree $\Gamma$ of type $(m,e)$, i.e.\ for a Brauer tree with $e$ edges and multiplicity $m$ of the exceptional vertex, we transfer the tight grading from the basic Brauer tree algebra $A_S$ corresponding to the Brauer star $S$ of the same type $(m,e)$, to the Brauer tree algebra $A_{\Gamma}$. In Section 4 we prove that the resulting grading on $A_{\Gamma}$ is non-negative and we investigate its properties. In particular, this construction associates to each Brauer tree algebra $A$, which is a symmetric algebra, a quasi-hereditary algebra $A_0$, the subalgebra of $A$ consisting of the elements of $A$ which have degree 0. We prove in Section 5 that the knowledge of the subalgebra $A_0$ and of the cyclic ordering of its components is sufficient to recover the whole algebra $A$. In Sections 6 and 7 we give explicit formulae for the graded Cartan matrix and graded Cartan determinant of $A_{\Gamma}$, and we prove that the graded Cartan determinant only depends on the type of the Brauer tree. Sections 9 and 10 deal respectively with the problem of shifting summands of a tilting complex and with the change of the exceptional vertex when the multiplicity of the exceptional vertex is 1. In the last section we compute ${\rm Out}^K(A_{\Gamma})$, the group of outer automorphisms that fix isomorphism classes of simple $A_{\Gamma}$-modules, and we prove that it only depends on the multiplicity of the exceptional vertex. We also classify all gradings on an arbitrary Brauer tree algebra, and we prove that there is a unique grading up to graded Morita equivalence and rescaling. \section{Notation} Throughout this text $k$ will be an algebraically closed field of positive characteristic. All algebras will be finite dimensional algebras over $k$ and all modules will be left modules.
The category of finite dimensional $A$--modules will be denoted by $A$--${\rm mod}$ and the full subcategory of finite dimensional projective $A$--modules will be denoted by $P_A$. The derived category of bounded complexes over $A$--${\rm mod}$ will be denoted by $D^b(A)$ and the homotopy category of bounded complexes over $P_A$ will be denoted by $K^b(P_A)$. Let $A$ be a $k$--algebra. We say that $A$ is a graded algebra if $A$ is the direct sum of subspaces $A=\bigoplus_{i\in\mathbb{Z}} A_i$, such that $A_iA_j\subset A_{i+j}$, $i,j\in \mathbb{Z}$. If $A_i=0$ for $i< 0$, we say that $A$ is non-negatively graded. An $A$-module $M$ is graded if it is the direct sum of its subspaces $M=\bigoplus_{i\in\mathbb{Z}} M_i$, such that $A_iM_j\subset M_{i+j}$, for all $i,j\in \mathbb{Z}$. If $M$ is a graded $A$--module, then $N=M\langle i\rangle$ denotes the graded module given by $N_j=M_{i+j}$, $j\in \mathbb{Z}$. An $A$-module homomorphism $f$ between two graded modules $M$ and $N$ is a homomorphism of graded modules if $f(M_i)\subseteq N_i$, for all $i\in \mathbb{Z}$. For a graded algebra $A$, we denote by $A$--${\rm modgr}$ the category of graded $A$--modules. This category sits inside the category of graded vector spaces. Its objects are graded $A$--modules and morphisms are given by $${\rm Homgr}_A(M,N):=\bigoplus_{i\in \mathbb{Z}}{\rm Hom}_A(M,N\langle i\rangle),$$ where ${\rm Hom}_A(M,N\langle i \rangle)$ denotes the space of all graded homomorphisms between $M$ and $N\langle i\rangle$ (the space of homogeneous morphisms of degree $i$). Let $X=(X^i,d^i)$ be a complex of $A$--modules. We say that $X$ is a complex of graded $A$--modules, or just a graded complex, if for each $i\in \mathbb{Z}$, $X^i$ is a graded module and $d^i$ is a homogeneous homomorphism of graded $A$--modules. If $X$ is a graded complex, then $X\langle j\rangle$ denotes the complex of graded $A$--modules given by $(X\langle j\rangle)^i:=X^i\langle j\rangle$ and $d_{X\langle j\rangle}^i:=d^i$. Let $X$ and $Y$ be graded complexes. A homomorphism $f=\{f^i\}_{i\in\mathbb{Z}}$ between complexes $X$ and $Y$ is a homomorphism of graded complexes if for each $i\in \mathbb{Z}$, $f^i$ is a homomorphism of graded modules. The category of complexes of graded $A$--modules, denoted by $C_{gr}(A)$, is the category whose objects are complexes of graded $A$--modules and morphisms between two graded complexes $X$ and $Y$ are given by $${\rm Homgr}_{{\mathcal{C}}_{gr}(A)}(X, Y) := \bigoplus_{i\in\mathbb{Z}}{\rm Homgr}_A(X,Y\langle i\rangle ),$$ where ${\rm Homgr}_A(X,Y\langle i\rangle )$ denotes the space of graded homomorphisms between $X$ and $Y\langle i\rangle$ (the space of homogeneous morphisms of degree $i$). Unless otherwise stated, for a graded algebra $A$, we will assume that the projective indecomposable $A$-modules are graded in such way that their tops are in degree 0. For an indecomposable bounded graded complex of projective $A$-modules, we will assume that the leftmost non-zero term is graded in such way that its top is in degree 0. We say that two symmetric algebras $A$ and $B$ are derived equivalent if their derived categories of bounded complexes are equivalent. From Rickard's theory we know that $A$ and $B$ are derived equivalent if and only if there exists a tilting complex $T$ of projective $A$--modules such that ${\rm End}_{K^b(P_A)}(T)\cong B^{op}$. For more details on derived categories and derived equivalences we recommend [\ref{KNG}]. \section{Brauer tree algebras} We now introduce Brauer tree algebras. 
For a general reference on Brauer tree algebras we refer the reader to [\ref{alp}]. Let $\Gamma$ be a finite connected tree with $e$ edges. We say that $\Gamma$ is a Brauer tree of type $(m,e)$ if there is a cyclic ordering of the edges adjacent to each vertex, and a distinguished vertex $v$, called the exceptional vertex, to which we assign a positive integer $m$, called the multiplicity of the exceptional vertex. Let $A$ be a finite dimensional symmetric algebra and let $\Gamma$ be a Brauer tree. We say that $A$ is a Brauer tree algebra associated with $\Gamma$ if the isomorphism classes of simple $A$--modules are in one-to-one correspondence with the edges of $\Gamma$, and if $P_j$ denotes the projective cover of the simple $A$--module $S_j$ corresponding to the edge $j$, the following condition is satisfied: The heart ${\rm rad }\, P_j/{\rm soc }\, P_j$ is the direct sum of two uniserial modules $U_v$ and $U_w$, one of which might be zero, where $v$ and $w$ are the vertices of the edge $j$. For $u\in \{v,w\}$, let $j=j_0, j_1,\dots,j_r$ be the cyclic ordering of the $r+1$ edges around $u$. The composition factors of $U_u$, starting from the top, are $$S_{j_1},S_{j_2},\dots, S_{j_r},S_{j_0},S_{j_1},S_{j_2},\dots,S_{j_r},S_{j_0},\dots,S_{j_1},S_{j_2},\dots,S_{j_r}$$ where the number of composition factors is $m(r+1)-1$ if $u$ is the exceptional vertex, and $r$ otherwise. If $A$ is a Brauer tree algebra associated to a Brauer tree $\Gamma$ of type $(m,e)$, we say that $A$ has type $(m,e)$. We will usually label the edges of a Brauer tree by the corresponding simple modules of a Brauer tree algebra associated to this tree. We will always assume that the cyclic ordering of the edges adjacent to a given vertex is given by the counter-clockwise direction in the given planar embedding of the tree. \begin{ex}{\rm A very important Brauer tree of type $(m,e)$ is the Brauer star, the Brauer tree with $e$ edges adjacent to the exceptional vertex which has multiplicity $m$. $$\xymatrix{&&\circ&\\ \dots&\bullet\ar@{-}[ur]^{S_2}\ar@{-}[rr]^{S_1}\ar@{-}[dr]_{S_e}&&\circ\\ & &\circ & \\}$$ The composition factors, starting from the top, of the projective indecomposable module $P_j$ corresponding to the edge $S_j$ are $$ S_j, S_{j+1}, \dots, S_e, S_1, S_2,\dots ,S_{j-1}; \,\, \dots \,\, ;S_j, S_{j+1}, \dots, S_e, S_1, S_2,\dots , S_{j-1}; S_j$$ where $S_j$ appears $m+1$ times and $S_i$ appears $m$ times, for $i\neq j$. We see that a Brauer tree algebra corresponding to the Brauer star of type $(m,e)$ is a uniserial algebra. This is very important because it is easy to do calculations with uniserial modules. } \end{ex} Any two Brauer tree algebras associated to the same Brauer tree $\Gamma$ and defined over the same field $k$ are Morita equivalent (cf.\ [\ref{Cluj2}], Corollary 4.3.3). A basic Brauer tree algebra corresponding to a Brauer tree $\Gamma$ is isomorphic to the algebra $kQ/I$, where $Q$ is a quiver and $I$ is the ideal of relations. This algebra is constructed as follows (cf.\ [\ref{Cluj2}], Section 5): We replace each edge of the Brauer tree by a vertex, and for two adjacent edges, say $j_1$ and $j_2$, which come one after the other in the circular ordering, say $j_2$ comes after $j_1$, we have an arrow connecting the two corresponding vertices, starting at the vertex corresponding to $j_2$ and ending at the vertex corresponding to $j_1$.
If there is only one edge adjacent to the exceptional vertex and $m>1$, then we add a loop starting and ending at the vertex corresponding to the edge that is adjacent to the exceptional vertex. This will give us the quiver $Q$. Notice that, for each vertex of $\Gamma$ that has more than one edge adjacent to it, we get a cycle in the quiver $Q$. The cycle of $Q$ corresponding to the exceptional vertex will be called the exceptional cycle. If we assume that $\Gamma$ is not the star, then the ideal $I$ is generated by two types of relations. The relations of the first type are given by $ab=0$, where arrows $a$ and $b$ belong to different cycles of $Q$. The second type relations are the relations of the form $\alpha=\beta$, for two cycles $\alpha$ and $\beta$ starting and ending at the same vertex, neither of which is the exceptional cycle, and relations of the form $\alpha^m=\beta$, if $\alpha$ is the exceptional cycle. The basic algebra $kQ/I$ constructed in this way will be denoted by $A_{\Gamma}$. If $\Gamma$ is the star, then the corresponding quiver has only one cycle and the ideal of relations is generated by all paths of length $me+1$. This basic algebra corresponding to the Brauer star will be denoted by $A_S$. \section{Transfer of gradings} Let $A$ and $B$ be two symmetric algebras over a field $k$ and let us assume that $A$ is a graded algebra. The following theorem is due to Rouquier. \begin{te}[[\ref{Rou}{]}, {\rm Theorem 6.4}] Let $A$ and $B$ be as above. Let $T$ be a tilting complex of $A$-modules that induces derived equivalence between $A$ and $B$. Then there exists a grading on $B$ and a structure of a graded complex $T^{\prime}$ on $T$, such that $T^{\prime}$ induces an equivalence between the derived categories of graded $A$-modules and graded $B$-modules. \end{te} This theorem tells us that derived equivalences are compatible with gradings, that is, gradings can be transferred between symmetric algebras via derived equivalences. We will now explain how the transfer of gradings via derived equivalences is done in our context of Brauer tree algebras. Brauer tree algebras are determined up to derived equivalence by the multiplicity of the exceptional vertex and the number of edges of the tree ([\ref{RiStab}], Theorem 4.2). We notice here that the basic Brauer tree algebra $A_{S}$, which corresponds to the Brauer star of type $(m,e)$, is naturally graded by putting all arrows in degree 1. This grading is compatible with radical filtration, in other words, $A_{S}$ is tightly graded. This means that $A_S$ is isomorphic to the graded algebra associated with the radical filtration, i.e.\ $$A_S\cong \bigoplus_{i=0}^{\infty}({\rm rad}\, A_S)^i/({\rm rad}\, A_S)^{i+1}.$$ We will transfer this grading from the algebra $A_S$ to the basic Brauer tree algebra $A_{\Gamma}$ corresponding to an arbitrary Brauer tree $\Gamma$ of type $(m,e)$. In order to do that we will construct a tilting complex of $A_S$-modules which tilts from $A_S$ to $A_{\Gamma}$. For a given tilting complex $T$ of $A_S$-modules, which is a bounded complex of finitely generated projective $A_S$--modules, there exists a structure of a complex of graded $A_S$--module $T^{\prime}$ on $T$. If $T$ is a tilting complex that tilts from $A_S$ to $A_{\Gamma}$, then ${\rm End}_{K^b(P_{A_S})}(T)\cong A_{\Gamma}^{op}$. 
Viewing $T$ as a graded complex $T^{\prime}$, and computing this endomorphism ring as a graded object, we get a graded algebra which is isomorphic to the opposite algebra of the basic Brauer tree algebra $A_{\Gamma}$ corresponding to $\Gamma$. We notice here that the choice of a grading on $T^{\prime}$ is unique up to shifting the grading of each indecomposable summand of $T^{\prime}$. \subsection{The tilting complex given by Green's walk} In [\ref{Zak}], the authors give a combinatorial construction of a tilting complex that tilts from the basic Brauer tree algebra $A_S$ corresponding to the star of type $(m,e)$, to the basic Brauer tree algebra $A_{\Gamma}$ corresponding to an arbitrary Brauer tree of type $(m,e)$. The tilting complexes considered in [\ref{Zak}] are direct sums of indecomposable complexes which have no more than two non-zero terms, and a complete classification for all such tilting complexes is given in [\ref{Zak}]. We will use only one special tilting complex that arises in this way. It is the complex constructed by taking Green's walk (cf.\ [\ref{Green}]) around $\Gamma$. We construct it as follows: Starting from the exceptional vertex of a Brauer tree $\Gamma$ of type $(m,e)$, we take Green's walk around $\Gamma$ and enumerate all the edges of $\Gamma$. We start this enumeration from an arbitrary edge adjacent to the exceptional vertex and walk around $\Gamma$ in the counter-clockwise direction. We will eventually show that the resulting grading on $A_{\Gamma}$ does not depend on where we start the enumeration. Define $T$, a tilting complex of $A_S$-modules, to be the direct sum of the complexes $T_i$, $1\leq i\leq e$, which correspond to the edges of $\Gamma$, and which are defined by induction on the distance of an edge from the exceptional vertex in the following way: \begin{itemize} \item[(a)] If $i$ is an edge which is adjacent to the exceptional vertex, then $T_i$ is defined to be the stalk complex $$Q_i\, :\, 0\longrightarrow P_i \longrightarrow 0$$ with $P_i$ in degree 0; \item[(b)] If $i$ is not adjacent to the exceptional vertex and assuming that the shortest path connecting $i$ to the exceptional vertex is $j_1,j_2,\dots,j_t, i$, where $j_1<j_2<\dots<j_t<i$ in the labelling from the Green's walk, then $T_i$ is defined to be the complex $Q_{j_ti}[n_i]$, where $Q_{j_ti}$ is the following complex with two non-zero entries in degrees 0 and 1 $$Q_{j_ti}\, : \, 0 \longrightarrow P_{j_t}\stackrel{h_{j_ti}}{\longrightarrow} P_i\longrightarrow 0,$$ where $h_{j_ti}$ is a homomorphism of the highest possible rank, and $n_i$ is the shift necessary to ensure that $P_{j_t}$ is in the same degree as $P_{j_t}$ in other summands of $T$ which are previously determined. \end{itemize} \noindent For the convenience of the reader we include the following example. 
\begin{ex}\label{CompExample}{\rm Let $\Gamma$ be the following Brauer tree with multiplicity 1 and with edges numbered by taking Green's walk: $$\xymatrix{&\circ\ar@{-}[dr]^{S_4}&&&\circ\\ &&\circ\ar@{-}[r]^{S_3}&\bullet\ar@{-}[dr]^{S_1}\ar@{-}[ur]^{S_2}&\\ &\circ\ar@{-}[ur]^{S_5}&&&\circ\\ \circ\ar@{-}[ur]^{S_6}&&&&\\ }$$ \noindent The tilting complex of $A_S$-modules, where $S$ is the Brauer star with six edges and multiplicity 1, given by taking Green's walk around $\Gamma$ is the direct sum $T=\oplus_{i=1}^6T_i$, where $T_1$, $T_2$ and $T_3$ are the stalk complexes $P_1$, $P_2$ and $P_3$ respectively, in degree 0, $T_4$ is $P_3\stackrel{h_{34}}{\longrightarrow} P_4$ with $P_3$ in degree 0, $T_5$ is $P_3\stackrel{h_{35}}{\longrightarrow} P_5$ with $P_3$ in degree 0, and $T_6$ is $P_5\stackrel{h_{56}}{\longrightarrow} P_6$ with $P_5$ in degree 1. This complex tilts from $A_S$ to $A_{\Gamma}$, ie.\ ${\rm End}_{K^b(P_{A_S})}(T)\cong A_{\Gamma}^{op}.$} \end{ex} \subsection{Calculating ${\bf {\rm\bf End}_{K^b(P_A)}(T)}$} Let $A_S$ be a basic Brauer tree algebra corresponding to the star of type $(m,e)$ and let $A_{\Gamma}$ be a basic Brauer tree algebra corresponding to a given Brauer tree $\Gamma$ of type $(m,e)$. Let $T$ be the tilting complex of $A_S$-modules that tilts from $A_S$ to $A_{\Gamma}$, constructed as in the previous section, i.e.\ constructed by taking Green's walk around $\Gamma$. Viewing each summand $T_i$ of $T$ as a graded complex $T_i^{\prime}$, we have a structure of a graded complex $T^{\prime}$ on $T$. By calculating ${\rm Endgr}_{K^b(P_{A_S})}(T^{\prime})\cong A_{\Gamma}^{op}$ we get $A_{\Gamma}$ as a graded algebra. We will choose $T_i^{\prime}$ to be $T_i\langle r_i\rangle$, where $r_i$ will be the necessary shifts that will ensure that the resulting grading is non-negative. We remind the reader that $T_i$ is assumed to be graded in such way that its leftmost non-zero term has its top in degree 0. We now state the main theorem of this section. \begin{te}\label{glteorema} Let $\Gamma$ be an arbitrary Brauer tree with $e$ edges and multiplicity $m$ of the exceptional vertex and let $A_{\Gamma}$ be the basic Brauer tree algebra determined by this tree. The algebra $A_{\Gamma}$ can be non-negatively graded. \end{te} \noindent {\bf Proof.} In order to grade $A_{\Gamma}$, we need to calculate ${\rm Homgr}_{K^b(P_{A_S})}(T_i^{\prime},T_j^{\prime})$, as graded vector spaces, for those $T_i^{\prime}$ and $T_j^{\prime}$ which correspond to edges $S_i$ and $S_j$ that are adjacent to the same vertex of $\Gamma$, and which come one after the other, $i$ after $j$, in the circular ordering associated to that vertex. This is a consequence of the fact that when identifying ${\rm End}_{K^b(P_{A_S})}(T)^{op}$ with $A_{\Gamma}$, elements corresponding to the vertices of the quiver of $A_{\Gamma} $ are given by $ {\rm id}_{T_i}\in {\rm End}_{K^b(P_{A_S})}(T_i)$, $i=1,2,\dots,e $; and the subspace of $A_{\Gamma}$ generated by all paths starting at the vertex of $A_{\Gamma}$ corresponding to ${\rm id}_{T_i}\in {\rm End}_{K^b(P_{A_S})}(T_i)$ and finishing at the vertex corresponding to ${\rm id}_{T_j} \in {\rm End}_{K^b(P_{A_S})}(T_j)$, is given by ${\rm Hom}_{K^b(P_{A_S})}(T_i, T_j)$. In fact, we only need to calculate the non-zero summand of ${\rm Homgr}_{K^b(P_{A_S})}(T_i^{\prime},T_j^{\prime})$ which is in the lowest degree. That will be degree of the unique arrow of the quiver of $A_{\Gamma}$ pointing from the vertex corresponding to $S_i$ to the vertex corresponding to $S_j$. 
The dimension of ${\rm Hom}_{K^b(A_S)}(T_i,T_j)$ \begin{itemize} \item[{\rm (1)}] is $0$, if the vertices corresponding to ${\rm id}_{T_i}$ and ${\rm id}_{T_j}$ do not belong to the same cycle, \item[{\rm (2)}] \begin{itemize}\item[{\rm (2.a)}]is $m$, if $i\neq j$ and the vertices corresponding to ${\rm id}_{T_i}$ and ${\rm id}_{T_j}$ belong to the exceptional cycle, \item[{\rm (2.b)}] is $1$, if $i\neq j$ and the vertices corresponding to ${\rm id}_{T_i}$ and ${\rm id}_{T_j}$ belong to the same non-exceptional cycle, \end{itemize} \item[{\rm (3)}] \begin{itemize}\item[{\rm (3.a)}] is $m+1$, if $i=j$ and the vertex corresponding to ${\rm id}_{T_i}$ belongs to the exceptional cycle, \item[{\rm (3.b)}]is $2$, if $i=j$ and the vertex corresponding to ${\rm id}_{T_i}$ does not belong to the exceptional cycle. \end{itemize} \end{itemize} Edges $i$ and $j$ of a Brauer tree $\Gamma$ that are adjacent to the same vertex, say $v$, and that come one after the other in the circular ordering of $v$, can either be at the same distance from the exceptional vertex or the distance of one of them, say $i$, is one less than the distance of $j$. If the former holds, then a part of $\Gamma$ is given by $$\xymatrix{ &&&\circ\\&\bullet\ar@{.}[r]&\circ \, v\ar@{-}[ur]^{S_i}\ar@{-}[dr]_{S_j}&\\&&&\circ}$$ where $v$ may or may not be the exceptional vertex. If the latter holds, than a part of $\Gamma$ is given by one of the following two diagrams $$ \xymatrix{&&&\circ\\ \bullet\ar@{.}[r]&\ar@{-}[r]^{S_i}& \circ\, v \ar@{-}[ur]^{S_j}\ar@{--}[dr]^{S_l}&\\ &&&\circ }\quad \quad \quad \xymatrix{&&&\circ\\ \bullet\ar@{.}[r]&\ar@{-}[r]^{S_i}& \circ\, v \ar@{--}[ur]^{S_l}\ar@{-}[dr]^{S_j}&\\ &&&\circ } $$ where the leftmost vertex of $S_i$ may or may not be the exceptional vertex, and there may or may not be more edges, such as $S_l$, adjacent to $v$. In the case of the first diagram we have an arrow from $S_i$ to $S_j$ in the quiver of $A_{\Gamma}$, and in the case of the second diagram we have an arrow from $S_j$ to $S_i$. It follows that it is sufficient to consider the following four cases: {\textbf{CASE 1}} Edges $S_i$ and $S_j$ are adjacent to the exceptional vertex. The corresponding part of $\Gamma$ is $$ \xymatrix{ &&\circ\\&\bullet\ar@{-}[ur]^{S_i}\ar@{-}[dr]_{S_j}&\\&&\circ} $$ In this case, the corresponding summands of the graded tilting complex $T^{\prime}$ are $T_i^{\prime}:=T_i$ and $T_j^{\prime}:=T_j$, where $T_i$ and $T_j$ are stalk complexes $P_i$ and $P_j$ concentrated in degree 0. If $i>j$, then ${\rm Homgr}_{K^b(P_A)}(T_i^{\prime},T_j^{\prime})\cong {\rm Homgr}_{A_S}(P_i, P_j)\cong k\langle -(i-j)\rangle \oplus M$, as graded vector spaces, where $M$ is the sum of the summands that appear in higher degrees than $i-j$. In other words, $i-j$ is the degree of the corresponding arrow of the quiver of $A_{\Gamma}$ whose source is $S_i$ and whose target is $S_j$. If $i<j$, then the degree of the corresponding arrow of the quiver of $A_{\Gamma}$ is $e-(j-i)$, where $e$ is the number of edges of the tree. If there is only one edge adjacent to the exceptional vertex and $m>1$, then the corresponding loop will be in degree $e$. \textbf{CASE 2} Edges $S_i$ and $S_j$ are adjacent to a non-exceptional vertex $v$ and one of them, say $S_i$, is adjacent to the exceptional vertex. 
Then we have that a part of $\Gamma$ is given by one of the following two diagrams $$ \xymatrix{&&\circ\\ \bullet\ar@{-}[r]^{S_i}="a"& \circ\, v \ar@{-}[ur]^{S_j}="b"\ar@{--}[dr]^{S_l}&\\ &&\circ }\quad\quad\quad\quad \xymatrix{&&\circ\\ \bullet\ar@{-}[r]^{S_i}& \circ\, v \ar@{--}[ur]^{S_l}\ar@{-}[dr]^{S_j}&\\ &&\circ } $$ where it could happen that there are more edges adjacent to $v$, such as $S_l$. In this case, the summands of the tilting complex $T^{\prime}$ are $T_i^{\prime}:=Q_i$, the stalk complex with $P_i$ in degree 0, and $T_j^{\prime}:=Q_{ij}$, where $Q_{ij}$ is the previously defined complex $P_i\stackrel{h_{ij}}{\longrightarrow} P_j$, with $P_i$ in degree 0. Since $i<j$, we have that any map from $Q_i$ to $Q_{ij}$ has to map $P_i$ to the kernel of the map $h_{ij}$. It follows that this map has to map ${\rm top}\,P_i$ to ${\rm soc} \,P_i$. This means that the corresponding arrow of the quiver of $A_{\Gamma}$, whose source is $S_i$ and whose target is $S_j$, is in degree $me$. This happens in the case of the first diagram. In the case of the second diagram, there is an arrow from $S_j$ to $S_i$ in the quiver of $A_{\Gamma}$. The identity map on $P_i$ will give us a morphism between the graded complexes $T_j^{\prime}$ and $T_i^{\prime}$ and we conclude that the corresponding arrow from $S_j$ to $S_i$ is in degree 0. \textbf{CASE 3} Edges $S_i$ and $S_j$, where $i<j$, are adjacent to a non-exceptional vertex $v$, $S_i$ is closer to the exceptional vertex than $S_j$, and the leftmost vertex of $S_i$ is non-exceptional. Then we have that a part of $\Gamma$ is given by one of the following two diagrams $$ \xymatrix{&&&&\circ\\ \bullet\ar@{.}[r]&\ar@{-}[r]^{S_l}&\circ \ar@{-}[r]^{S_i}& \circ\, v \ar@{-}[ur]^{S_j}\ar@{--}[dr]^{S_f}&\\ &&&&\circ } \quad\quad \xymatrix{&&&&\circ\\ \bullet\ar@{.}[r]&\ar@{-}[r]^{S_l}&\circ \ar@{-}[r]^{S_i}& \circ\, v \ar@{--}[ur]^{S_f}\ar@{-}[dr]^{S_j}&\\ &&&&\circ } $$ where it could happen that there are more edges adjacent to $v$, such as $S_f$. The leftmost vertex of $S_l$ can be either exceptional or non-exceptional. The summands $T_i^{\prime}$ and $T_j^{\prime}$ of the graded tilting complex $T^{\prime}$ corresponding to the edges $S_i$ and $S_j$ are defined to be the graded complexes $(Q_{li}[-r_{li}])\langle r_{li}\rangle$ and $(Q_{ij}[-r_{li}-1])\langle r_{li}+1 \rangle$, where we set $r_{li}$ to be the distance between the exceptional vertex and the leftmost vertex of $S_l$. The horizontal shifts $[-r_{li}]$ and $[-r_{li}-1]$ are necessary to ensure that $P_i$ appears in the same degree as $P_i$ in all other previously defined summands of $T$. The vertical shifts $\langle r_{li}\rangle$ and $\langle r_{li}+1\rangle$ are necessary to ensure that the top of the rightmost term of $Q_{li}$, which is $P_i$, and the top of the leftmost term of $Q_{ij}$, which is also $P_i$, are in the same degree. This way we avoid negative degrees. If the first diagram occurs, then any morphism of graded complexes from $(Q_{li}[-r_{li}])\langle r_{li} \rangle$ to $(Q_{ij}[-r_{li}-1])\langle r_{li}+1 \rangle$ has to map ${\rm top}\, P_i$ to ${\rm soc}\, P_i$. From this we conclude that the corresponding arrow of the quiver of $A_{\Gamma}$ that points from $S_i$ to $S_j$ is in degree $me$. If the second diagram occurs, then the identity map on $P_i$ gives us a map from $(Q_{ij}[-r_{li}-1])\langle r_{li}+1 \rangle$ to $(Q_{li}[-r_{li}])\langle r_{li} \rangle$ which is not homotopic to zero. From this we have that the arrow from $S_j$ to $S_i$ is in degree 0.
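For concreteness (using the tree of Example \ref{CompExample} only as an illustration), the pair of edges $S_5$ and $S_6$ of that tree falls under this case, with $S_l=S_3$ and $r_{li}=0$: the corresponding summands are $T_5^{\prime}=Q_{35}$ and $T_6^{\prime}=(Q_{56}[-1])\langle 1 \rangle$, which is why $P_5$ appears in degree 1 in $T_6$, and the two arrows between $S_5$ and $S_6$ end up in degrees $me=6$ and $0$.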
\textbf{CASE 4} Edges $S_i$ and $S_j$, where $j<i$, are adjacent to a non-exceptional vertex $v$, and the edge with minimal index among the edges adjacent to $v$, say $S_l$, comes before $S_j$ in the circular ordering of $v$. Then we have that a part of $\Gamma$ is $$ \xymatrix{&&&\circ\\ \bullet\ar@{.}[r]&\ar@{-}[r]^{S_l}&\circ\, v\ar@{-}[ur]^{S_i}\ar@{-}[dr]^{S_j}&\\ &&&\circ } $$ \noindent where the leftmost vertex of $S_l$ can be either exceptional or non-exceptional. In each case, the summands $T_i^{\prime}$ and $T_j^{\prime}$ of the graded tilting complex $T^{\prime}$ corresponding to the edges $S_i$ and $S_j$ are $(Q_{li}[-r_{li}])\langle r_{li} \rangle$ and $(Q_{lj}[-r_{li}])\langle r_{li} \rangle$, where we again set $r_{li}$ to be the distance between the exceptional vertex and the leftmost vertex of $S_l$. The map $({\rm id}_{P_l}; h_{ij})$, where $h_{ij}$ is a map of the maximal rank, from $(Q_{li}[-r_{li}])\langle r_{li} \rangle$ to $(Q_{lj}[-r_{li}])\langle r_{li} \rangle$ will give us a nonzero map in ${\rm Homgr}_{K^b(P_A)}(T_i^{\prime},T_j^{\prime})$ and this map is in degree 0. Therefore, the corresponding arrow from $S_i$ to $S_j$ is in degree 0. These four cases cover all the possible local structures of a Brauer tree $\Gamma$ that we can encounter when putting a grading on the basic Brauer tree algebra $A_{\Gamma}$ that corresponds to this tree. We only need to walk around the Brauer tree $\Gamma$ and recognize which of the four cases occurs for the adjacent edges $S_i$ and $S_j$. In each of the four cases above, we have that the corresponding arrows are in non-negative degrees. Hence, the resulting grading on $A_{\Gamma}$ is non-negative. $\blacksquare$ The grading on $A_{\Gamma}$ constructed in the proof of the previous theorem will be referred to as the grading constructed by taking Green's walk around $\Gamma$. \begin{ex}\label{linePR}{\rm (a) Let $\Gamma$ be the Brauer tree from Example \ref{CompExample}. We first construct the quiver of the basic Brauer tree algebra $A_{\Gamma}$ corresponding to this tree. Each edge is replaced by a vertex and for two adjacent edges, which come one after the other in the circular ordering, we have an arrow connecting two corresponding vertices of the quiver in the opposite ordering of the circular ordering of edges, i.e.\ in the clockwise direction. $$ \xymatrix{ &&&&\stackrel{S_2}{\bullet}\ar[dd]^{1}\\ &\stackrel{S_5}{\bullet}\ar@/^/[dl]^{6}\ar[dr]^{0}&&\stackrel{S_3}{\bullet}\ar[ll]_{6}\ar[ur]^{1}&\\ \stackrel{S_6}{\bullet}\ar@/^/[ur]^{0}&&\stackrel{S_4}{\bullet}\ar[ur]^{0}&&\stackrel{S_1}{\bullet}\ar[ul]^{4}\\ } $$ The degrees of the arrows between $S_1$ and $S_3$, $S_3$ and $S_2$, $S_2$ and $S_1$ are computed using the first case from the proof of the previous theorem. The degree of the arrow between $S_3$ and $S_5$ is 6 and is computed using the second case. Arrows between $S_5$ and $S_4$, and $S_4$ and $S_3$ are in degree 0 and these degrees are computed as in the fourth and the second case respectively. The degree of the arrow between $S_5$ and $S_6$ is 6 and the degree of the arrow between $S_6$ and $S_5$ is 0 and these are computed as in the third case. 
(b) If the Brauer tree is the line with $e$ edges and the exceptional vertex at the end, i.e.\ $$ \xymatrix{\bullet\ar@{-}[r]^{S_1}&\circ\ar@{-}[r]^{S_2}&\circ\ar@{-}[r]^{S_3}&\circ\ar@{-}[r]^{S_{4}}&\dots\ar@{-}[r]^{S_{e-1}}&\circ\ar@{-}[r]^{S_e}&\circ} $$ then the basic Brauer tree algebra $A_{\Gamma}$ is graded and its quiver has $e$ vertices $$ \xymatrix{\bullet\ar@(ul,dl)_{e}\ar@/^/[r]^{me}&\bullet\ar@/^/[r]^{me}\ar@/^/[l]^0&\bullet\ar@/^/[l]^0\ar@/^/[r]^{me}&\dots\ar@/^/[l]^0\ar@/^/[r]^{me}&\bullet\ar@/^/[l]^0\ar@/^/[r]^{me}&\bullet\ar@/^/[l]^0\ar@/^/[r]^{me}&\bullet\ar@/^/[l]^0} $$ \noindent If $m=1$, then there is no loop in the above quiver.} \end{ex} Let $\Gamma$ be an arbitrary Brauer tree. Each edge of $\Gamma$ that is adjacent to the exceptional vertex determines a connected subtree of the Brauer tree $\Gamma$. We call these subtrees the components of the Brauer tree. \begin{lm}\label{NoEdgesComp} Let $\alpha$ be an arrow contained in the exceptional cycle of the quiver of $A_{\Gamma}$ which starts at $S_i$ and ends at $S_j$. If $A_{\Gamma}$ is graded by taking Green's walk around $\Gamma$, then the degree of $\alpha$ is equal to the number of edges in the component of $\Gamma$ corresponding to $S_j$. \end{lm} \noindent {\bf Proof.} If $i>j$, then because of the way we enumerate the edges by taking Green's walk we have that $i=j+s$, where $s$ is the number of edges in the component corresponding to $S_j$. Hence, $\alpha$ is in degree $i-j=s$. If $i<j$, which only happens if $i=1$, then $\alpha$ is in degree $e-(j-i)=e-j+1$, and this number is equal to the number of edges of the component corresponding to $S_j$. $\blacksquare$ From this lemma it follows that the resulting grading of the exceptional cycle does not depend on the edge adjacent to the exceptional vertex from which we start the enumeration of the edges. This leads us to the following proposition. \begin{prop} Let $A_\Gamma$ be the basic Brauer tree algebra whose tree is $\Gamma$ and let us assume that $A_\Gamma$ is graded by taking Green's walk around $\Gamma$. The resulting grading does not depend on the edge adjacent to the exceptional vertex from which we start Green's walk. \end{prop} \noindent {\bf Proof.} Let us assume that we have done two walks around $\Gamma$ starting at a different edge each time. Let us assume that the index of $S$, where $S$ is one of the edges adjacent to the exceptional vertex, is 1 in the first walk, and that its index is $1+l$ in the second walk. Let us assume that these two walks give us two tilting complexes $T_1$ and $T_2$ of $A_S$-modules. These complexes are equal up to a cyclic permutation of the edges of the Brauer star. In other words, each index of each term (which is a projective indecomposable $A_S$-module) of each summand of $T_1$ has been cyclically permuted by $l$ to get the corresponding index of the corresponding term of the corresponding summand of $T_2$. These two tilting complexes will give us the same grading because of the 'cyclic' structure of the Brauer star algebra $A_S$. $\blacksquare$ \begin{lm}\label{lema1} Let $A_\Gamma$ be the basic Brauer tree algebra whose tree is $\Gamma$. Let $Q$ be its quiver and let us assume that $A_\Gamma$ is graded by taking Green's walk around $\Gamma$. The only cycle of $Q$ that does not contain any arrows that are in degree $0$ is the exceptional cycle. For a non-exceptional cycle there is exactly one arrow that is not in degree $0$. This arrow is in degree $me$ and the index of its target is greater than the index of its source.
\end{lm} \noindent{\bf Proof.} If $\alpha$ is an arbitrary arrow of the exceptional cycle, then $\alpha$ is in a positive degree. This follows from Lemma \ref{NoEdgesComp}, since the number of edges in each component is strictly positive. For a non-exceptional cycle, from the last three cases from the proof of Theorem \ref{glteorema}, we see that exactly one arrow is not in degree $0$. This is the arrow whose source is the vertex of that cycle with the minimal index, and whose target is the vertex of that cycle with the maximal index. Its degree is $me$ by the proof of Theorem \ref{glteorema}. $\blacksquare$ For the arrows of the quiver of $A_{\Gamma}$ that are in degree 0, we have that the index of their source is greater than the index of their target. We state this in the following lemma. \begin{lm}\label{indicesArr} Let $\alpha$ be an arrow of a non-exceptional cycle which is in degree $0$. Then the index of the source of $\alpha$ is greater than the index of the target of $\alpha$. \end{lm} \begin{lm} Let $A_{\Gamma}$, $\Gamma$ and $Q$ be as above. The socle of $A_{\Gamma}$ is in degree $me$. \end{lm} \noindent {\bf Proof.} For an arbitrary cycle, say $\gamma$, let $\alpha_1,\dots,\alpha_r$ be the arrows of that cycle, appearing in that cyclic ordering. If $\gamma$ is a non-exceptional cycle of the quiver $Q$, then the paths of the form $\alpha_i\alpha_{i+1}\dots\alpha_{i+r-1}$, where the addition in indices is modulo $r$, and $1\leq i\leq r$, belong to the socle. If $\gamma$ is the exceptional cycle, then the paths $(\alpha_i\alpha_{i+1}\dots\alpha_{i+r-1})^m$ belong to the socle. These elements span the whole socle. For a non-exceptional cycle, the only arrow which is in a non-zero degree is the arrow whose source is the vertex of that cycle with the minimal index, and whose target is the vertex of that cycle with the maximal index. By the cases 2, 3 and 4 from the proof of Theorem \ref{glteorema}, that arrow is in degree $me$. It follows that the path $\alpha_i\alpha_{i+1}\dots\alpha_{i+r-1}$, for all $i$, is in degree $me$. For the exceptional cycle, let us assume that the vertices contained in the exceptional cycle are $S_{l_1}, S_{l_2}, \dots, S_{l_r}$, in that cyclic ordering, and that $l_1>l_2>\dots>l_r=1$. The sum of the degrees of the arrows of the exceptional cycle is $\sum_{j=1}^{r-1}(l_{j}-l_{j+1})+(e-(l_1-l_r))=e$. (This also follows from Lemma \ref{NoEdgesComp}.) Therefore, the $m^{th}$ power of $\alpha_i\alpha_{i+1}\dots\alpha_{i+r-1}$, for all $i$, is in degree $me$. $\blacksquare$ \section{The subalgebra $A_0$} Let $\Gamma$ be a given Brauer tree and let $A_{\Gamma}$ be a basic Brauer tree algebra associated with this tree. Let $T$ be the tilting complex that we constructed by taking Green's walk around $\Gamma$. If we assume that $A_{\Gamma}$ is graded by means of this complex, then the subalgebra $A_0$, consisting of the elements that are in degree 0, has an interesting structure. The quiver of the basic algebra $A_{\Gamma}$ is the union of the cycles contained in it. From Lemma \ref{lema1} it follows that the only cycle that does not contain any arrows that are in degree 0 is the exceptional cycle. If we assume that there are $t$ edges that are adjacent to the exceptional vertex of $\Gamma$, we see immediately that the exceptional cycle divides the quiver of the subalgebra $A_0$ into $t$ disjoint parts, each labelled by a vertex of the exceptional cycle corresponding to an edge adjacent to the exceptional vertex of $\Gamma$.
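For instance, for the tree of Example \ref{CompExample} there are $t=3$ edges adjacent to the exceptional vertex. The arrows of degree 0 in the quiver displayed in Example \ref{linePR}(a) are $S_6\rightarrow S_5$, $S_5\rightarrow S_4$ and $S_4\rightarrow S_3$, so the quiver of $A_0$ consists of three parts: two isolated vertices, corresponding to $S_1$ and $S_2$, and the path $S_6\rightarrow S_5\rightarrow S_4\rightarrow S_3$ rooted at $S_3$, in which the composite of $S_6\rightarrow S_5$ and $S_5\rightarrow S_4$ is zero because these two arrows come from different cycles of the quiver of $A_{\Gamma}$.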
\begin{prop} Let $A_{\Gamma}$ be a basic Brauer tree algebra associated with a given Brauer tree $\Gamma$ and let $T$, $A_0$ and $t$ be as above. The algebra $A_0$ is the direct product of $t$ subalgebras. \end{prop} \noindent{\bf Proof.} The factors of $A_0$ are path algebras of $t$ disjoint subquivers of the quiver of $A_0$. $\blacksquare$ Each of these factors in the previous proposition is labelled by the corresponding vertex which belongs to the exceptional cycle. Let $A_v$ be the connected component of $A_0$ that corresponds to a vertex $v$ of the exceptional cycle. \begin{lm} In the quiver of the component $A_v$ of $A_0$ there is at most one arrow with vertex $v$ as its target. For any other vertex of the quiver of $A_v$ there are at most two arrows with that vertex as a target. \end{lm} \noindent {\bf Proof.} In the quiver $Q$ of the basic Brauer tree algebra $A_{\Gamma}$, each vertex is contained in at most two cycles. Hence, for an arbitrary vertex $v$ of $Q$, there are at most two arrows of $Q$ whose target is $v$. If one of the cycles that contain $v$ is the exceptional cycle, then one of those two arrows whose target is $v$ is in a positive degree. Therefore, for a given vertex $v$ of the exceptional cycle, there is at most one arrow which is in degree 0 that has $v$ as its target. Also, for every other vertex in this component of the quiver of $A_0$, there are at most two arrows with that vertex as a target. From Lemma \ref{indicesArr}, we see that these arrows point from a vertex with a larger index to a vertex with a smaller index. $\blacksquare$ \begin{lm} Let $w$ be a vertex of the quiver of $A_v$ different from $v$. In the quiver of $A_v$ there is exactly one arrow that has $w$ as its source. \end{lm} \noindent {\bf Proof.} The vertex $w$ belongs either to one or to two cycles of $Q$, depending on whether the corresponding edge of the Brauer tree is an end edge or not. Therefore, there are either one or two arrows that have $w$ as its source. If there are two such cycles, then the corresponding edge is not an end edge. Then the arrow that has $w$ as its source and that has vertex of a greater index than $w$ as its target is in a positive degree by Lemma \ref{indicesArr}. The other arrow of $Q$ that has $w$ as its source and a vertex of a smaller index than the index of $w$ as its target is in degree 0, by the same lemma. $\blacksquare$ \begin{prop} The quiver of $A_v$ is a directed rooted tree with vertex $v$ as its root, and with arrows pointing from higher levels of the tree to lower levels of the tree, with root $v$ being in level $0$. \end{prop} \noindent {\bf Proof.} From the previous two lemmas we conclude easily that the component of the quiver of $A_0$ does not have any cycles, because one arrow in each non-exceptional cycle of the quiver of $A_{\Gamma}$ is in a positive degree. Also, it follows that this component is a tree with at most two arrows having the same target, and at most one arrow having an arbitrary vertex as its source. If we view this tree as a rooted tree with the vertex that belongs to the exceptional cycle as the root, then all arrows point from the higher levels to the lower levels by Lemma \ref{indicesArr}, with the root being in level 0. $\blacksquare$ \begin{prop} Each of the components of the subalgebra $A_0$ is the path algebra of a directed rooted tree with arrows pointing from higher levels towards lower levels. 
The only relations that occur in these components are of the form $\alpha\beta=0$, where $\alpha$ and $\beta$ are arrows that belong to different cycles of the quiver of $A_{\Gamma}$, such that the target of $\alpha$ is the source of $\beta$. \end{prop} \noindent {\bf Proof.} It remains to prove that we only have relations of type $\alpha\beta=0$. These relations are inherited from the relations of the algebra $A_{\Gamma}$. The only other relations that appear in $A_{\Gamma}$ are of type $\rho=\sigma$, where $\rho$ and $\sigma$ are two cycles having the same source and target. Since in the quiver of $A_0$ there are no cycles, these relations are not present. $\blacksquare$ \begin{co} The subalgebra $A_0$ is tightly graded. \end{co} \noindent {\bf Proof.} By the previous proposition, the ideal of relations of $A_0$ is generated by elements of the form $\alpha\beta$. If we put the arrows of the quiver of $A_0$ in degree 1, then these generators are homogeneous of degree 2. The ideal of relations is homogeneous, hence the quotient algebra $A_0$ of the path algebra $kQ$ is also graded with arrows in degree 1, i.e.\ it is tightly graded. $\blacksquare$ \begin{ex}\label{NulaPr} {\rm If $\Gamma$ is the following Brauer tree with multiplicity 1 and edges numbered by taking Green's walk: $$ \xymatrix{&&&&&\circ &&\\ \circ \ar@{-}[r]^{S_{11}}&\circ\ar@{-}[r]^{S_{10}}&\circ\ar@{-}[r]^{S_9}&\bullet\ar@{-}[r]^{S_1}&\circ\ar@{-}[dr]^{S_2}\ar@{-}[ur]^{S_8}\ar@{-}[r]^{S_6}&\circ\ar@{-}[r]^{S_7}&\circ &\\ &&&&&\circ\ar@{-}[dr]^{S_3}&&\circ\\ &&&&&&\circ\ar@{-}[dr]^{S_4}\ar@{-}[ur]^{S_5}&\\ &&&&&&&\circ} $$ then the quiver of the graded basic Brauer tree algebra $A_{\Gamma}$ associated with this tree is $$ \xymatrix{ &&&&\stackrel{S_8}{\bullet}\ar@/^/[dr]^0&&\\ \stackrel{S_{11}}{\bullet}\ar@/^/[r]^0&\stackrel{S_{10}}{\bullet}\ar@/^/[l]^{11}\ar@/^/[r]^0&\stackrel{S_9}{\bullet}\ar@/^/[l]^{11}\ar@/^/[r]^8&\stackrel{S_1}{\bullet}\ar@/^/[l]^3\ar@/^/[ur]^{11}&&\stackrel{S_6}{\bullet}\ar@/^/[dl]^0\ar@/^/[r]^{11}&\stackrel{S_7}{\bullet}\ar@/^/[l]^0\\ &&&&\stackrel{S_2}{\bullet}\ar@/^/[dr]^{11}\ar@/^/[ul]^0&&\stackrel{S_5}{\bullet}\ar@/^/[dd]^0\\ &&&&&\stackrel{S_3}{\bullet}\ar@/^/[ul]^0\ar@/^/[ur]^{11}&\\ &&&&&&\stackrel{S_4}{\bullet}\ar@/^/[ul]^0 } $$ The algebra $A_0$ consists of two components because there are two edges adjacent to the exceptional vertex. The quiver of the first component is $$\xymatrix{\bullet\ar[r]^{a_1}&\bullet\ar[r]^{a_2}&\bullet}$$ and the only relation is $a_1a_2=0.$ The quiver of the second component is $$ \xymatrix{ &&&\bullet\ar[dl]^{b_3}&\\ &&\bullet\ar[dl]^{b_1}&&\\ \bullet&\bullet\ar[l]^{b_0}&&\bullet\ar[ul]^{b_4}&\\ &&\bullet\ar[ul]^{b_2}&&\\ &&&\bullet\ar[ul]^{b_5}&\bullet\ar[l]^{b_6}} $$ and the relations are $b_2b_0=0$, $b_5b_2=0$ and $b_4b_1=0.$ } \end{ex} \subsection{Recovering the quiver of $A_{\Gamma}$ from the quiver of $A_0$} The grading resulting from taking Green's walk has some interesting properties. We will see in this section that the algebra $A_0$ carries a lot of information about the algebra $A_{\Gamma}$. Let $\Gamma$, $A_{\Gamma}$ and $A_0$ be as before. If we omit the arrows of the exceptional cycle of the quiver of $A_{\Gamma}$, we see that the resulting quiver consists of the connected components which correspond to the connected components of the quiver of $A_0$. If we look at the components of $A_0$ we see that it is sufficient to know the quiver and relations of such a component to recover the quiver of the corresponding component of $A_{\Gamma}$.
This is a consequence of Lemma \ref{lema1}, which says that in every non-exceptional cycle of the quiver of $A_{\Gamma}$ there is only one arrow that is not in degree 0. Let $Q_v$ be one of the connected components of the quiver of $A_0$ and let $Q_1$ be the corresponding component of the quiver of $A_{\Gamma}$. We have seen in the previous section that $Q_v$ is a rooted tree with the root $v$ belonging to the exceptional cycle. Starting from the root of this tree, we can recover the corresponding component $Q_1$ of the quiver $Q$. \begin{prop} Let $Q_v$ be a connected component of the quiver of $A_0$ and let $Q_1$ be its corresponding connected component of the quiver of $A_{\Gamma}$ when the exceptional cycle is omitted. From the quiver $Q_v$ and its relations we can recover the quiver $Q_1$ and the relations of the corresponding component of the algebra $A_{\Gamma}$. \end{prop} \noindent {\bf Proof.} Start from the root $v$ of the rooted tree $Q_v$. Take the longest non-zero path, say $\rho$, ending at $v$. Add an arrow pointing from $v$ to the source vertex of $\rho$. If there is no such path of length greater than 1, then add an arrow from $v$ to the starting point of the only arrow ending at the root $v$. In this way we recover the cycle of $Q_1$ which has root $v$ as one of its vertices. The added arrow was an arrow of $Q_1$ that is in a non-zero degree. Now, we repeat the same step with an arbitrary vertex in the level 1 of the rooted tree instead of the root, but we only consider paths which do not contain arrows that belong to already recovered cycles. Repeat the same step for all other vertices in level 1 of the rooted tree $Q_v$. Repeat the same steps for vertices in other levels of the rooted tree $Q_v$ until every cycle is recovered. In this way we recovered the whole corresponding component $Q_1$ of the quiver of $A_{\Gamma}$. As far as the relations are concerned, we get relations for the basic Brauer tree algebra corresponding to a given tree, ie.\ for two successive arrows belonging to two different cycles we set their product to be zero and we set two cycles starting and ending at the same vertex to be equal. $\blacksquare$ \begin{ex} {\rm Let $A_{\Gamma}$ be a basic Brauer tree algebra corresponding to the Brauer tree from Example \ref{NulaPr}. The algebra $A_0$ has two components. Let us recover the corresponding components of $A_{\Gamma}$. The first component is given by the quiver $$\xymatrix{\bullet\ar[r]^{a_1}&\bullet\ar[r]^{a_2}&\bullet}$$ and the relation $a_1a_2=0$. Starting from the root we immediately recover the first cycle since there is no non-zero path of length greater than 1 whose target is the root. Consequently, the second cycle is easily recovered and we get that the corresponding component of the quiver of $A_{\Gamma}$ is $$\xymatrix{\bullet\ar@/^/[r]^{a_1}&\bullet\ar@/^/[l]^{a_4}\ar@/^/[r]^{a_2}&\bullet\ar@/^/[l]^{a_3}}$$ The quiver of the second component is $$ \xymatrix{ &&&v_5\bullet\ar[dl]^{b_3}&\\ &&v_3\bullet\ar[dl]^{b_1}&&\\ v_1\bullet&v_2\bullet\ar[l]^{b_0}&&v_6\bullet\ar[ul]^{b_4}&\\ &&v_4\bullet\ar[ul]^{b_2}&&\\ &&&v_7\bullet\ar[ul]^{b_5}&v_8\bullet\ar[l]^{b_6}} $$ and the relations are $b_2b_0=0$, $b_5b_2=0$ and $b_4b_1=0.$ The longest non-zero path ending at $v_1$ is $b_3b_1b_0$. Therefore we have to add an arrow from $v_1$ to $v_5$. 
This will give us the following partial quiver $$ \xymatrix{ &&&v_5\bullet\ar[dl]^{b_3}&\\ &&v_3\bullet\ar[dl]^{b_1}&&\\ v_1\bullet\ar@/^/[rrruu]^{d_1}&v_2\bullet\ar[l]^{b_0}&&v_6\bullet\ar[ul]^{b_4}&\\ &&v_4\bullet\ar[ul]^{b_2}&&\\ &&&v_7\bullet\ar[ul]^{b_5}&v_8\bullet\ar[l]^{b_6}} $$ We move on to the next level and conclude that we need to add an arrow from $v_2$ to $v_4$. We do not add an arrow from $v_2$ to $v_5$ because the arrow from $v_3$ to $v_2$ is already in a fully recovered cycle. For level two vertices we need to add an arrow from $v_3$ to $v_6$ and an arrow from $v_4$ to $v_8$. Finally, the recovered component is given by the quiver $$ \xymatrix{ &&&v_5\bullet\ar[dl]^{b_3}&\\ &&v_3\bullet\ar@/^/[dr]^{c_3}\ar[dl]^{b_1}&&\\ v_1\bullet\ar@/^/[rrruu]^{c_1}&v_2\bullet\ar@/^/[dr]^{c_2}\ar[l]^{b_0}&&v_6\bullet\ar[ul]^{b_4}&\\ &&v_4\bullet\ar@/^/[rrd]^{c_4}\ar[ul]^{b_2}&&\\ &&&v_7\bullet\ar[ul]^{b_5}&v_8\bullet\ar[l]^{b_6}} $$ } \end{ex} \begin{te} Let $A_{\Gamma}$ be a graded basic Brauer tree algebra whose grading is constructed by taking Green's walk around $\Gamma$. From the quiver and relations of $A_0$ and the cyclic ordering of the components of $A_0$ we can recover the quiver and relations of $A_{\Gamma}$. \end{te} \noindent {\bf Proof.} We have seen in the previous proposition that from the quiver and relations of $A_0$ we can recover each of the components of the quiver of $A_{\Gamma}$ that we get when we omit the exceptional cycle. In order to completely recover the quiver of $A_{\Gamma}$, we are left to recover the exceptional cycle. The roots of the components of $A_0$ are the vertices of the exceptional cycle. From the cyclic ordering of the components we get the cyclic ordering of the vertices of the exceptional cycle. Thus, the exceptional cycle is recovered from the cyclic ordering of the components of $A_0$. $\blacksquare$ \subsection{Quasi-hereditary structure on $ A_0$} Let $Q_v$ be the quiver of an arbitrary connected component of $A_0$. We have seen that $Q_v$ is a rooted tree. We can enumerate the vertices of $Q_v$ in a natural way by the levels of the rooted tree. We start with the root $v$, then we enumerate all vertices that are in level 1 of the rooted tree, for example, we enumerate them from left to right in the planar embedding of the tree. Once we have enumerated all vertices of an arbitrary level $r$, we move on to level $r+1$ and repeat the same procedure until we enumerate all vertices. Let $P_i$ be the projective cover of the simple $A_0$-module $S_i$ corresponding to the vertex $v_i$. Then $P_i$ is spanned by paths of $Q_v$ ending at $v_i$. Since $Q_v$ is a rooted tree, we conclude that the only simple modules that occur as composition factors of $P_i$ are the simple modules whose corresponding vertex has index greater than $i$. Also, $S_i$ occurs only once as a composition factor of $P_i$. Hence, we obtain a quasi-hereditary structure on this component, by defining a partial order as follows. Let $v_j$ be the vertex of $Q_v$ corresponding to the simple module $S_{j}$. Then we define $S_{j}< S_{i}$, for $i\neq j$, if there is a path from $v_j$ to $v_i$, where $S_{i}$ is the simple $A_0$-module corresponding to the vertex $v_i$. The standard modules with respect to this order are the projective indecomposable modules and the costandard modules are the simple modules. Therefore, $(A_0,\leq)$ is a quasi-hereditary algebra as a product of quasi-hereditary algebras. 
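As an illustration (a direct check, with the vertices of the component renumbered by levels as just described), consider the first component of the quiver of $A_0$ from Example \ref{NulaPr}: its root $v_1$ corresponds to the edge $S_9$ of $\Gamma$, the level-one vertex $v_2$ to $S_{10}$ and the level-two vertex $v_3$ to $S_{11}$, with arrows $a_2\colon v_2\rightarrow v_1$ and $a_1\colon v_3\rightarrow v_2$ and the relation $a_1a_2=0$. The partial order defined above is $S_3<S_2<S_1$. The projective cover $P_1$ has composition factors $S_1$ and $S_2$ (the path $a_1a_2$ being zero), $P_2$ has composition factors $S_2$ and $S_3$, and $P_3=S_3$ is simple, so that, in agreement with the above, the standard modules are the $P_i$ and the costandard modules are the $S_i$. The Cartan matrix of this component is $$\left( \begin{array}{ccc} 1&0&0\\ 1&1&0\\ 0&1&1 \end{array}\right),$$ a lower triangular matrix with diagonal entries and determinant equal to $1$.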
The Cartan matrix of the path algebra of $Q_v$ is a lower triangular matrix with diagonal elements equal to 1. Since $A_0$ is the product of its components, we have that the following standard result for quasi-hereditary algebras holds for $A_0$ (cf.\ [\ref{KenHer}]). \begin{prop} Let $\Gamma $ be a Brauer tree of type $(m,e)$ and let $A_{\Gamma}$ be a graded basic Brauer tree algebra whose tree is $\Gamma$ and whose grading is constructed by taking Green's walk around $\Gamma$. If $A_0$ is the subalgebra of $A_{\Gamma}$ consisting of the elements in degree $0$, then the Cartan matrix of $A_0$ is a lower triangular matrix with diagonal elements equal to $1$ and with determinant equal to $1$. \end{prop} Quasi-hereditary algebras have finite global dimension (cf.\ [\ref{RingelQHer}]), hence $A_0$ has finite global dimension. We give an upper bound for the global dimension of the quasi-hereditary algebra $A_0$. Let $Q_v$ be the quiver of a connected component of $A_0$ and let $B$ be its path algebra. Let $l(Q_v)$ be the length of the rooted tree $Q_v$, i.e.\ the index of the last level of $Q_v$. The global dimension of $B$ is at most $l(Q_v)$. This can be easily proved by looking at the projective dimensions of the simple $B$-modules. One starts at the bottom of the tree and works by induction on the distance of a vertex from the bottom of the tree. \begin{prop} Let $A_{\Gamma}$, $A_0$ and $Q_v$ be as above. Then, $${\rm gl.dim.}\, A_0\leq\max \{l(Q_v)\,|\, Q_v\,\, {\rm a\,\, component\,\, of\,\, the \,\, quiver\,\, of}\,\, A_0\}.$$ \end{prop} Note that the upper bound is achieved if the relations of $A_0$ are the maximal possible, in the sense that the product of every two arrows is equal to zero. For example, this happens in the case of a Brauer line, where the subalgebra $A_0$ is given by the quiver $$ \xymatrix{\stackrel{v_1}{\bullet}&\stackrel{v_2}{\bullet}\ar@/^/[l]^{a_1}&\stackrel{v_3}{\bullet}\ar@/^/[l]^{a_2} &\dots\ar@/^/[l]^{a_{3}}&\stackrel{v_{e-2}}{\bullet}\ar@/^/[l]^{a_{e-3}}&\stackrel{v_{e-1}}{\bullet}\ar@/^/[l]^{a_{e-2}}&\stackrel{v_e}{\bullet}\ar@/^/[l]^{a_{e-1}}} $$ and the relations are $a_ia_{i-1}=0$, $i=2,3,\dots, e-1$. The other extreme is the case when there are no relations, that is, when $A_0$ is hereditary. Then $A_0$ has global dimension at most $1$, while $l(Q_v)$ can be arbitrarily large. \section{Graded Cartan matrix} Let $A_{\Gamma}$ be a basic Brauer tree algebra of type $(m,e)$ given by the quiver $Q$ and relations $I$. We have seen that $A_{\Gamma}$ is a graded algebra. Let $S_1, S_2,\dots , S_e$ be the simple $A_{\Gamma}$--modules corresponding to the vertices of the quiver $Q$. We assume that the simple modules are enumerated by taking Green's walk around $\Gamma$. We define the graded Cartan matrix $C$ of $A_{\Gamma}$ to be the ($e\times e$)-matrix with entries from the ring $\mathbb{Z}[q,q^{-1}]$ given by $$c_{ij}=C(S_i,S_j):=\sum_{l\in \mathbb{Z}}q^l {\rm dim}\, {\rm Homgr}_{A_{\Gamma}}(P_{S_i},P_{S_j}\langle l \rangle ),$$ where $P_{S_i}$ is the projective cover of $S_i$. Note that the coefficient of $q^l$ is equal to the number of times that $S_i$ appears in degree $l$ as a composition factor of $P_{S_j}$. \begin{prop} Let $A_{\Gamma}$ be a graded basic Brauer tree algebra whose tree is $\Gamma$ and with grading constructed by taking Green's walk around $\Gamma$. Let $S_i$ and $S_j$ be simple modules corresponding to vertices $v_i$ and $v_j$ of the quiver $Q$ of $A_{\Gamma}$.
Then \begin{itemize} \item[{\rm (i)}] if $S_i$ and $S_j$ do not belong to the same cycle of $Q$, then $c_{ij}=0$; \item[{\rm (ii)}] if $S_i$ belongs to the exceptional cycle, we have that $$ c_{ii}=1+q^e+q^{2e}+\dots + q^{me},$$ if $S_i$ does not belong to the exceptional cycle, we have that $$c_{ii}=1+q^{me}.$$ \item[{\rm (iii)}] if $i\neq j$ and $S_i$ and $S_j$ belong to the same non-exceptional cycle, then $$i>j \Rightarrow c_{ij}=1,$$ $$i<j \Rightarrow c_{ij}=q^{me};$$ \item[{\rm (iv)}] if $i\neq j$ and $S_i$ and $S_j$ belong to the exceptional cycle, then $$i>j \Rightarrow c_{ij}=q^{i-j}+q^{i-j+e}+\dots + q^{i-j+(m-1)e},$$ $$i<j \Rightarrow c_{ij}=q^{e-(j-i)}+q^{2e-(j-i)}+\dots + q^{me-(j-i)}.$$ \end{itemize} \end{prop} \noindent{\bf Proof.} Since the projective cover of $S_j$ is spanned by the paths ending at $S_j$, we conclude that the exponents of the non-zero terms of $c_{ij}$ are exactly the degrees of the non-zero paths starting at $S_i$ and ending at $S_j$. Case ${\rm (i)}$ is obvious, because $P_{S_j}$ does not contain $S_i$ as a composition factor. In case ${\rm (ii)}$ the degrees of the paths starting and ending at $S_i$ are $0, e, 2e, \dots, me$ when $S_i$ belongs to the exceptional cycle, and are $0,me$ otherwise. In case ${\rm (iii)}$, if $i>j$, the only non-zero path from $S_i$ to $S_j$ has degree $0$. Similarly, if $i<j$, the only non-zero path from $S_i$ to $S_j$ has degree $me$. In case ${\rm (iv)}$ the same argument shows that, if $i>j$, then the degrees of the paths from $S_i$ to $S_j$ are $i-j, e+(i-j), \dots , (m-1)e+i-j$, and if $i<j$, they are $e-(j-i), 2e-(j-i), \dots , me-(j-i)$. $\blacksquare$ \section{Graded Cartan determinant} \begin{prop} Let $A_{\Gamma}$ be a graded basic Brauer tree algebra whose tree $\Gamma$ is of type $(m,e)$ and whose grading is constructed by taking Green's walk around $\Gamma$. If $C_{A_{\Gamma}}$ is the graded Cartan matrix of $A_{\Gamma}$, then $${\rm det}\, C_{A_{\Gamma}}=1+q^e+q^{2e}+\dots + q^{me^2}.$$ \end{prop} \noindent{\bf Proof.} By [\ref{Rou}], Proposition 5.17, the constant term of ${\rm det } \, C_A$ is equal to the determinant of the Cartan matrix of $A_0$. We have seen that the determinant of the Cartan matrix of $A_0$ is $1$. By Proposition 6.6 in [\ref{Rou}], we also have that if $A$ and $B$ are two graded Brauer tree algebras of the same type $(m,e)$, with gradings constructed by taking Green's walk, then ${\rm det} \, C_A$ is equal to $\pm q^l\,\, {\rm det}\, C_B$ for some integer $l$. Since the constant term is equal to 1, we conclude that $l=0$, and that ${\rm det}\, C_A={\rm det}\, C_B$. Therefore, it is enough to compute ${\rm det}\, C_B$ where $B$ is the graded basic Brauer tree algebra whose tree is the Brauer line of type $(m,e)$ with the exceptional vertex at one of the ends (see Example \ref{linePR}(b)). If $|i-j|>1$, then $c_{ij}=0$, because the corresponding vertices belong to different cycles. Also, $c_{11}=1+q^e+q^{2e}+\dots +q^{me} $, and if $i>1$, then $c_{ii}=1+q^{me}$. Other entries are given by $c_{i,i+1}=q^{me}$ and $c_{i+1,i}=1$. We are left to compute the following $e\times e$ determinant $${\rm det}\, C_B=\left| \begin{array}{cccccc} \alpha & \beta & & & & \\ 1 & \gamma & \beta & & & \\ &1 & \gamma & \beta & & \\ & & & \ddots & & \\ & & & 1 & \gamma & \beta \\ & & & & 1 & \gamma \\ \end{array}\right| $$ where $\alpha=1+q^e+q^{2e}+\dots +q^{me}$, $\beta=q^{me}$, $\gamma=1+q^{me}$ and the omitted entries are all equal to zero. 
If $d_l$ is the determinant of the $l\times l$ block in the lower right corner, then from $d_0=1$, $d_1=\gamma$ and the recursion $$d_l=\gamma d_{l-1}-\beta d_{l-2},$$ it is easy to show that $$d_l=1+q^{me}+\dots+q^{lme}.$$ Expanding the determinant along the first column gives us the desired formula $${\rm det}\, C_B=\alpha d_{e-1}-\beta d_{e-2}=1+q^e+q^{2e}+\dots+q^{me^2}. $$ $\blacksquare$ \section{Brauer lines as trivial extension algebras} Let $\Gamma$ be the Brauer line with $e$ edges and multiplicity of the exceptional vertex equal to 1. Let $A_{\Gamma}$ be a basic Brauer tree algebra whose tree is $\Gamma$ and let us assume that this algebra is graded by taking Green's walk around $\Gamma$. We have seen in Example \ref{linePR} (b) that with respect to such grading the graded quiver of $A_{\Gamma}$ is given by $$\xymatrix{\bullet\ar@/^/[r]^{e}&\bullet\ar@/^/[r]^{e}\ar@/^/[l]^0&\bullet\ar@/^/[l]^0\ar@/^/[r]^{e}&\dots\ar@/^/[l]^0\ar@/^/[r]^{e}&\bullet\ar@/^/[l]^0\ar@/^/[r]^{e}&\bullet\ar@/^/[l]^0\ar@/^/[r]^{e}&\bullet\ar@/^/[l]^0}$$ and we have that the only non-zero degree appearing in this grading is $e$. Therefore, we can divide every degree by $e$ and we will still have a graded algebra whose graded quiver is $$\xymatrix{\bullet\ar@/^/[r]^{1}&\bullet\ar@/^/[r]^{1}\ar@/^/[l]^0&\bullet\ar@/^/[l]^0 \ar@/^/[r]^{1}&\dots\ar@/^/[l]^0\ar@/^/[r]^{1}&\bullet\ar@/^/[l]^0\ar@/^/[r]^{1}&\bullet\ar@/^/[l]^0\ar@/^/[r]^{1}&\bullet\ar@/^/[l]^0}$$ with arrows only in degrees 0 and 1. We call this procedure of dividing each degree by the same integer rescaling. This algebra has an interesting connection with trivial extension algebras. Let $B$ be a finite dimensional algebra over a field $k$. The trivial extension algebra of $B$, denoted $T(B)$, is the vector space $B\oplus B^*$ with multiplication defined by $$(x,f)(y,g):=(xy,xg+fy)$$ where $x,y\in B$ and $f,g \in B^*$ and $B^*$ is the $B$--bimodule ${\rm Hom}_k(B,k)$. This algebra is always symmetric and the map $B\rightarrow T(B)$, given by $b\mapsto (b,0)$, is an embedding of algebras. The algebra $T(B)$ is naturally graded by putting $B$ in degree 0 and $B^*$ in degree 1. This raises the question of whether the graded Brauer tree algebra $A_{\Gamma}$ (with degrees normalized by dividing by $e$) is the trivial extension algebra of some algebra $B$? The obvious candidate would be its subalgebra $A_0$. The quiver of $A_0$ is given by $$ \xymatrix{\stackrel{v_1}{\bullet}&\stackrel{v_2}{\bullet}\ar@/^/[l]^{a_1}&\stackrel{v_3}{\bullet}\ar@/^/[l]^{a_2} &\dots\ar@/^/[l]^{a_{3}}&\stackrel{v_{e-2}}{\bullet}\ar@/^/[l]^{a_{e-3}}&\stackrel{v_{e-1}}{\bullet}\ar@/^/[l]^{a_{e-2}}&\stackrel{v_e}{\bullet}\ar@/^/[l]^{a_{e-1}}} $$ and the following proposition says that the trivial extension algebra of $A_0$ is $A_{\Gamma}$. \begin{prop} Let $A_{\Gamma}$ and $A_0$ be as above. Then $$T(A_0)=A_0\oplus A_0^*\cong A_{\Gamma}.$$ \end{prop} \noindent {\bf Proof.} Let $\{v_1^*,\dots, v_e^*, a_1^*,\dots, a_{e-1}^*\}$ be the basis of $A_0^*$ dual to the basis $\{v_1,\dots, v_e, a_1,\dots, a_{e-1}\}$ of $A_0$ and let $b_i$, $i=1,2,\dots, e-1$, be the arrow of the quiver of $A_{\Gamma}$ starting at the vertex $v_i$ and ending at the vertex $v_{i+1}$. Each $b_i$ is in degree 1. It is now easily verified that the map given by $a\mapsto (a,0)$ for $a \in A_0$ and $b_i\mapsto (0,a_i^*)$, $i=1,2,\dots, e-1$, is an algebra isomorphism between $A_{\Gamma}$ and $T(A_0)$. 
$\blacksquare$ \section{Shifts of gradings} Let $\Gamma $ be a Brauer tree of type $(m,e)$ and let $A_{\Gamma}$ be a basic Brauer tree algebra whose tree is $\Gamma$. We have seen that we can grade this algebra by computing the endomorphism ring of the graded complex $T^{\prime}=\oplus_{i=1}^{e} T_i^{\prime}$ which we constructed by taking Green's walk around $\Gamma$. In other words, we got a structure of a graded algebra $A_{\Gamma}^{\prime}$ on $A_{\Gamma}$. Recall that this was a non-negative grading, i.e.\ a grading such that every homogeneous element is in a non-negative degree. Let $\tilde{T}$ be the shifted graded complex $\oplus_{i=1}^e T_i^{\prime}\langle n_i\rangle$, where $n_i\in \mathbb{Z}$, $i=1,2,\dots, e$. The endomorphism ring of the graded complex $\tilde{T}$ is the graded algebra $\tilde{A}_{\Gamma}$ which is graded Morita equivalent to $A_{\Gamma}^{\prime}$ (see Definition \ref{grMorDef}). The question is if we can choose non-zero integers $n_i$ in such way that the resulting grading is positive, i.e.\ to get such a grading in which all homogeneous elements from the radical of $A_{\Gamma}$ are in strictly positive degrees. The answer to this question is positive, and moreover, these integers $n_i$ can be chosen to be positive integers. Let $S_i$ and $S_j$ be vertices of the quiver of the algebra $\tilde{A}_{\Gamma}$ which belong to the same cycle, and which correspond to the summands $T_i^{\prime}\langle n_i\rangle$ and $T_j^{\prime}\langle n_j\rangle$ of $\tilde{T}$. We need to compute the degree of the arrow $\alpha$ from $S_i$ to $S_j$. Let $d$ be the degree of this arrow for the graded algebra $A_{\Gamma}^{\prime}$. \begin{prop}\label{shiftchange} Let $\alpha$ be the arrow connecting vertices $S_i$ and $S_j$ of the graded quiver of the graded algebra $\tilde{A}_{\Gamma}$. Then, $${\rm deg }(\alpha)=d+n_i-n_j,$$ where $d$ is the degree of the same arrow of the quiver of the graded algebra $A_{\Gamma}^{\prime}$, and $n_i$ and $n_j$ are the shifts of $T_i^{\prime}$ and $T_j^{\prime}$ respectively. \end{prop} \noindent {\bf Proof.} Since $T_i^{\prime}$ are complexes of uniserial modules, the top of the leftmost non-zero term of $T_i^{\prime}\langle n_i\rangle$ is in degree $d_1-n_i$ after the shift, where $d_1$ is the degree of the top of the leftmost non-zero term of $T_i^{\prime}$, ie.\ the degree before the shift. Also, the top of the leftmost non-zero term of $T_j^{\prime}\langle n_j\rangle$ is in degree $d_2-n_j$, where $d_2$ is its degree before the shift. The degree of $\alpha$ after the shift is $(d_2-n_j)-(d_1-n_i)=d+n_i-n_j$. $\blacksquare$ Note that when we compare graded quivers of $A_{\Gamma}^{\prime}$ and $\tilde{A}_{\Gamma}$, the difference is that we added $n_i-n_j$ to the degree of the arrow from $S_i$ to $S_j$. If this arrow is in degree 0, then its degree after the shift is $n_i-n_j$, where $i>j$. The source and the target of such an arrow belong to two consecutive levels of a rooted tree. The arrows that are not part of the exceptional cycle and whose degree was non-zero, are now in degree $me+n_i-n_j$ where $i<j$. Also, the degree of an arrow between two vertices $S_i$ and $S_j$ of the exceptional cycle is now $i-j+n_i-n_j$, if $i>j$, and if $i<j$, it is $e-(j-i)+n_i-n_j$. We want to find integers $n_i$, $i=1,2,\dots, e$, such that all these degrees are positive. 
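For instance (an illustrative check using Example \ref{NulaPr}), the arrow from $S_2$ to $S_1$ is in degree $0$, so after the shift it is in degree $n_2-n_1$; positivity of the degrees of the arrows of the quiver of $A_0$ therefore forces $n_i>n_j$ whenever there is an arrow from $S_i$ to $S_j$ in degree $0$, i.e.\ the shifts must strictly increase with the level of the vertex in the rooted tree. This is precisely the choice made in the proof of the following proposition.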
\begin{prop} Let $\Gamma$ be a Brauer tree of type $(m,e)$ and let $\tilde{T}:=\oplus_{i=1}^e T_i^{\prime}\langle n_i\rangle $ be the shifted tilting complex constructed by taking Green's walk around $\Gamma$. Let $\tilde{A}_{\Gamma}:={\rm Endgr}_{K^b(P_{A_S})} (\tilde{T})^{op}$. There are positive integers $n_i$, $i=1,2,\dots, e$, such that the graded algebra $\tilde{A}_{\Gamma}$ is non-negatively graded with ${\rm deg}(a)>0$ for all homogeneous elements $a\in {\rm rad}\, \tilde{A}_{\Gamma}$. \end{prop} \noindent{\bf Proof.} Let $Q_v$ be an arbitrary component of the quiver of $A_0$, where $A_0$ is as before the subalgebra of $A^{\prime}_{\Gamma}$ of degree 0 elements, and let $S_i$ be a vertex of $Q_v$. If we choose $n_i$ to be $1+l_i$, where $l_i$ is the level of the rooted tree $Q_v$ to whom $S_i$ belongs, we see that all arrows of the graded quiver $Q_v$ are in degree 1 after the shift. Also, the arrows of $Q$, the quiver of $A_{\Gamma}$, that connect two vertices of $Q_v$ and which were not in degree 0 are still in positive degrees after the shift because $n_j-n_i<e$ (the number of levels in each component of the quiver of $A_0$ is less than $e$) and consequently $me+n_i-n_j>0$ . The arrows of the exceptional cycle are in the same degrees as they used to be because we set $n_i:=1$ for each root $S_i$. Then for every homogeneous element $a\in {\rm rad}\, \tilde{A}_{\Gamma}$ we have that ${\rm deg}(a)>0$. $\blacksquare$ We note here that, in general, there are many choices for the integers $n_i$, $i=1,2,\dots,e$. \begin{ex} {\rm Let $\Gamma$ be the tree from Example \ref{NulaPr}. With the notation from the previous proposition the graded quiver of the basic Brauer tree algebra $\tilde{A_{\Gamma}}={\rm Endgr}_{K^b(P_{A_S})}(\oplus_{i=1}^{11}T_i^{\prime}\langle n_i\rangle)^{op}$ is given by $$ \xymatrix{ &&&&&&\stackrel{S_8}{\bullet}\ar@/^/[dr]^{n_8-n_6}&&\\ &\stackrel{S_{10}}{\bullet}\ar@/^/[dl]^{11+n_{10}-n_{11}}\ar@/^/[rr]^{n_{10}-n_9}&&\stackrel{S_9}{\bullet}\ar@/^/[ll]^{11+n_9-n_{10}}\ar@/^/[rr]^{8+n_9-n_1}&&\stackrel{S_1}{\bullet}\ar@/^/[ll]^{3+n_1-n_9}\ar@/^/[ur]^{11+n_1-n_8}&&\stackrel{S_6}{\bullet}\ar@/^/[dl]^{n_6-n_2}\ar@/^/[rr]^{11+n_6-n_7}&&\stackrel{S_7}{\bullet}\ar@/^/[ll]^{n_7-n_6}\\ \stackrel{S_{11}}{\bullet}\ar@/^/[ur]^{n_{11}-n_{10}}&&&&&&\stackrel{S_2}{\bullet}\ar@/^/[d]^{11+n_2-n_3}\ar@/^/[ul]^{n_2-n_1}&&&\\ &&&&&&\stackrel{S_3}{\bullet}\ar@/^/[u]^{n_3-n_2}\ar@/^/[dr]^{11+n_3-n_5}&&&\\ &&&&&\stackrel{S_4}{\bullet}\ar@/^/[ur]^{n_4-n_3}&&\stackrel{S_5}{\bullet}\ar@/^/[ll]^{n_5-n_4}&& }$$ If we set $n_1=n_9=1$, $n_2=n_{10}=2$, $n_3=n_6=n_{11}=3$, $n_4=n_7=n_8=4$ and $n_5=5$, then all arrows are in positive degrees. } \end{ex} \noindent Note that the change of shifts on the summands $T_i^{\prime}$ of the tilting complex $T^{\prime}$ is the same as the change of shifts on the projective indecomposable modules of $A_{\Gamma}^{\prime}\cong \displaystyle{\rm Endgr}_{K^b(P_{A_S})}(\oplus_{i=1}^e T_i^{\prime})^{op}$. Let $\tilde{A}_{\Gamma}\cong {\rm Endgr}_{K^b(P_{A_S})}(\oplus_{i=1}^e T_i^{\prime}\langle n_i\rangle)^{op}$. When we change the shifts, in general, we get a different grading on $A_{\Gamma}$ and the resulting graded algebra $\tilde{A}_{\Gamma}$ is not isomorphic to $A_{\Gamma}^{\prime}$ as a graded algebra. But these two graded algebras are graded Morita equivalent, ie.\ there is an equivalence $A_{\Gamma}^{\prime}$--${\rm grmod}\cong \tilde{A}_{\Gamma}$--${\rm grmod}$ as we shall see in Section 11. 
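Let us also record, as an illustrative check, how the shifts in the example above follow the rule $n_i=1+l_i$ used in the proof of the proposition: the roots $S_1$ and $S_9$ of the two components of the quiver of $A_0$ lie in level $0$ and receive the shift $1$, the level-one vertices $S_2$ and $S_{10}$ receive $2$, and so on, up to $S_5$ in level four, which receives $5$. Every arrow of the quiver of $A_0$ then acquires degree $n_i-n_j=1$, the non-exceptional arrows of positive degree acquire degrees such as $11+n_1-n_8=8$ and $11+n_3-n_5=9$, and the arrows of the exceptional cycle keep their degrees $3$ and $8$ because $n_1=n_9=1$, so that all degrees are indeed strictly positive.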
\section{Change of the exceptional vertex} Let $A_{\Gamma}$ be a basic Brauer tree algebra whose tree $\Gamma$ has $e$ edges and the multiplicity of the exceptional vertex equal to 1. If we change the exceptional vertex, the algebra $A_{\Gamma}$ does not change. But when constructing the tilting complex that tilts from $A_S$ to $A_{\Gamma}$ by taking Green's walk around $\Gamma$ it is obvious that we start from a different vertex, and in general, the resulting tilting complex is different. Therefore, we get different gradings on $A_{\Gamma}$. \begin{ex}\label{promcvora} {\rm Let $\Gamma$ be the following Brauer tree with multiplicity of the exceptional vertex equal to 1 and let $A_{\Gamma}$ be the corresponding Brauer tree algebra. $$\xymatrix{&&& \circ\ar@{-}[dl]_{S_4}\\ \circ\ar@{-}[r]^{S_2}&\circ\ar@{-}[r]^{S_1}&\bullet\ar@{-}[dr]^{S_3}&\\ &&&\circ\\}$$ If $T=\oplus_{i=1}^4T_i$ is the tilting complex constructed by taking Green's walk around $\Gamma$, then the resulting graded quiver of $A_{\Gamma}$ is given by $$ \xymatrix{&&\stackrel{S_4}{\bullet}\ar@/^/[dd]^1\\ \stackrel{S_2}{\bullet}\ar@/^/[r]^0&\stackrel{S_1}{\bullet}\ar@/^/[ur]^1\ar@/^/[l]^4&\\ &&\stackrel{S_3}{\bullet}\ar@/^/[ul]^2} $$ If we change the exceptional vertex, say we have Brauer tree $\Delta$ $$\xymatrix{&&& \circ\ar@{-}[dl]_{S_4}\\ \bullet\ar@{-}[r]^{S_1}&\circ\ar@{-}[r]^{S_2}&\circ\ar@{-}[dr]^{S_3}&\\ &&&\circ\\}$$ \noindent then the basic Brauer tree algebra $A_{\Delta}$ whose tree is $\Delta$, is the same as $A_\Gamma$. Thus, changing the exceptional vertex of $\Gamma$ does not change $A_{\Gamma}$. The tilting complex $D$ constructed by taking Green's walk around $\Delta$ is different from $T$. Therefore, we get a new grading on $A_{\Delta}=A_{\Gamma}\cong {\rm End}_{K^b(P_{A_S})}(D)^{op}$, and the resulting graded quiver of the graded algebra $A_{\Delta}^{\prime}$ is given by $$ \xymatrix{&&\stackrel{S_4}{\bullet}\ar@/^/[dd]^0\\ \stackrel{S_2}{\bullet}\ar@/^/[r]^4&\stackrel{S_1}{\bullet}\ar@/^/[ur]^4\ar@/^/[l]^0&\\ &&\stackrel{S_3}{\bullet}\ar@/^/[ul]^0} $$ \noindent If $\tilde{T}:=\oplus_{i=1}^4T_i^{\prime}\langle n_i\rangle$ is a graded complex given by shifting summands of $T^{\prime}$, then from Proposition \ref{shiftchange} we get another grading on $A_{\Gamma}\cong {\rm End}_{K^b(P_{A_S})}(T)^{op}$ and the resulting graded quiver of the graded algebra $\tilde{A}_{\Gamma}$ is given by $$ \xymatrix{&&&\stackrel{S_4}{\bullet}\ar@/^/[dd]^{1+n_4-n_3}\\ \stackrel{S_2}{\bullet}\ar@/^/[rr]^{n_2-n_1}&&\stackrel{S_1}{\bullet}\ar@/^/[ur]^{1+n_1-n_4}\ar@/^/[ll]^{4+n_1-n_2}&\\ &&&\stackrel{S_3}{\bullet}\ar@/^/[ul]^{2+n_3-n_1}} $$ If we set $n_1=3$, $n_2=7$, $n_3=1$ and $n_4=0$, we see that the resulting grading on $A_{\Gamma}$ is the same as the grading that we got by taking Green's walk around $\Delta$. } \end{ex} In the previous example we had two different gradings on $A_{\Gamma}$, but we were able, by changing shifts of the summands of the graded tilting complex $T^{\prime}$, to move from one grading to another via graded Morita equivalence (Definition \ref{grMorDef}). We will prove in the next section that this holds for all Brauer tree algebras, regardless of the multiplicity of the exceptional vertex. \section{Classification of gradings} In this section we classify, up to graded Morita equivalence and rescaling, all gradings on an arbitrary Brauer tree algebra with $n$ edges and the multiplicity of the exceptional vertex equal to $m$. 
For a finite dimensional $k$-algebra $A$, there is a correspondence between gradings on $A$ and homomorphisms of algebraic groups from $\textbf{G}_m$ to ${\rm Aut}(A)$, where $\textbf{G}_m$ is the multiplicative group $k^*$ of the field $k$. For each grading $A=\oplus_{i\in \mathbb{Z}}A_i$ there is a homomorphism of algebraic groups $\pi \, : \, \textbf{G}_m \rightarrow {\rm Aut}(A)$, where an element $x\in k^*$ acts on $A_i$ by multiplication by $x^i$ (see [\ref{Rou}], Section 5). If $A$ is graded and $\pi$ is the corresponding homomorphism, we will write $(A,\pi)$ to denote that $A$ is graded with grading $\pi$. \begin{de}\label{grMorDef} Let $(A,\pi)$ and $(A,\pi^{\prime})$ be two gradings on a $k$-algebra $A$, and let $P_1, P_2,\dots, P_r$ be the isomorphism classes of the projective indecomposable $A$-modules. We say that $(A,\pi)$ and $(A,\pi^{\prime})$ are graded Morita equivalent if there exist positive integers $n_1,n_2,\dots,n_r$ and integers $d_1,d_2,\dots,d_r$ such that the graded algebras $(A,\pi^{\prime})$ and ${\rm Endgr}_{(A,\pi)}(\oplus_{i=1}^rn_iP_i\langle d_i\rangle)^{\rm op}$ are isomorphic. \end{de} Note that two graded algebras are graded Morita equivalent if and only if their categories of graded modules are equivalent. The following proposition tells us how to classify all gradings on $A$ up to graded Morita equivalence. \begin{prop}[[\ref{Rou}{]}, Corollary 5.8]\label{cokarak} Two graded algebras $(A,\pi)$ and $(A,\pi^{\prime})$ are graded Morita equivalent if and only if the corresponding cocharacters $\,\pi \, : \, \textbf{G}_m \rightarrow {\rm Out}(A)$ and $\,\pi^{\prime} \, : \, \textbf{G}_m \rightarrow {\rm Out}(A)$ are conjugate. \end{prop} From this proposition we see that in order to classify gradings on $A$ up to graded Morita equivalence, we need to compute maximal tori in ${\rm Out}(A)$. Let ${\rm Out}^K (A)$ be the subgroup of ${\rm Out}\, (A)$ of those automorphisms fixing the isomorphism classes of simple $A$-modules. Since ${\rm Out}^K (A)$ contains ${\rm Out}^0(A)$, the connected component of ${\rm Out}(A)$ that contains the identity element, we have that maximal tori in ${\rm Out}(A)$ are actually contained in ${\rm Out}^K (A)$. We start by computing ${\rm Out}^K (A_{\Gamma})$ for a Brauer tree algebra $A_{\Gamma}$ of type $(m,n)$, where $m>1$. We remark here that for the case $m=1$, ${\rm Out}^K (A_{\Gamma})$ has been computed in Lemma 4.3 in [\ref{RouZim}]. Although the proof of this lemma in [\ref{RouZim}] seems to be incomplete, the result is correct if one assumes that the ground field $k$ is algebraically closed. The same result, namely that ${\rm Out}^K (A_{\Gamma})\cong k^*$ when $m=1$, follows directly from our computation below for $m>1$, if we disregard the loop $\rho$ which does not appear in the case $m=1.$ Let $A_{\Gamma}$ be a basic Brauer tree algebra whose tree $\Gamma$ is of type $(m,n)$ where $m>1$. Since ${\rm Out}^K (A_{\Gamma})$ is invariant under derived equivalence (cf.\ [\ref{Link}], Section 4), in order to compute this group for any Brauer tree algebra $A_{\Gamma}$ whose tree is of type $(m,n)$, it is sufficient to compute this group for the Brauer line of the same type. Let $A_{\Gamma}$ be a basic Brauer tree algebra of type $(m,n)$ whose tree is the Brauer line with $n$ edges and with the exceptional vertex at the end of the line.
The quiver of $A_{\Gamma}$ is given by $$ \xymatrix{\stackrel{e_1}{\bullet}\ar@(ul,dl)_{\rho}\ar@/^/[r]^{a_1}&\stackrel{e_2}{\bullet}\ar@/^/[r]^{a_2}\ar@/^/[l]^{b_1}&\stackrel{e_3}{\bullet}\ar@/^/[l]^{b_2}\ar@/^/[r]^{a_3}&\dots\ar@/^/[l]^{b_3}\ar@/^/[r]^{a_{n-3}}&\stackrel{e_{n-2}}{\bullet}\ar@/^/[l]^{b_{n-3}}\ar@/^/[r]^{a_{n-2}}&\stackrel{e_{n-1}}{\bullet}\ar@/^/[l]^{b_{n-2}}\ar@/^/[r]^{a_{n-1}}&\stackrel{e_n}{\bullet}\ar@/^/[l]^{b_{n-1}}} $$ and the relations are given by $a_ia_{i+1}=b_{i+1}b_i=0$ $(i=1,2,\dots, n-2)$, $\rho^m=a_1b_1$ and $a_ib_i=b_{i-1}a_{i-1}$ $(i=2,3,\dots, n-1)$. In order to simplify the calculation, let us set $t_i:=a_i+b_i$, $i=1,2,\dots,n-1.$ Then, $a_i=e_it_i$ and $b_i=e_{i+1}t_i$, for $i=1,2,\dots, n-1$. Then the relations become $t_i^3=0$ $(i=1,2,\dots,n-1)$, $e_1t_1^2=\rho^m$, $e_it_i^2=t_{i-1}^2e_i$ $(i=2,3,\dots,n-1)$. Also, we have that $e_it_i=t_ie_{i+1}$ and $e_{i+1}t_i=t_ie_i$, $i=1,2,\dots, n-1$. Let us write $t_0^2:=\rho^m$. Let $\varphi$ be an arbitrary automorphism of $A_{\Gamma}$ that fixes isomorphism classes of simple $A_{\Gamma}$-modules. We will compose this automorphism with suitably chosen inner automorphisms in order to get an automorphism $\phi$, which represents the same element as $\varphi$ in the group of outer automorphisms, but which is suitable for our computation. If $\{e_1, e_2,\dots, e_n\}$ is a complete set of orthogonal primitive idempotents, then $\{\varphi(e_1),\varphi(e_2),\dots,\varphi(e_n)\}$ is also a complete set of orthogonal primitive idempotents. From classical ring theory (cf.\ [\ref{jac}], Theorem 3.10.2) we know that these two sets are conjugate. Hence, when we compose $\varphi$ with a suitably chosen inner automorphism we get that $x^{-1}\varphi(e_i)x=e_{\pi(i)}$, for all $i$, where $\pi$ is some permutation. Since $\varphi$ fixes isomorphism classes of simple modules, we can assume that, for all $i$, $\varphi(e_i)=e_i.$ Since $\varphi({\rm rad}\, A_{\Gamma})\subset {\rm rad}\, A_{\Gamma}$ we have that $$\varphi(t_i)=\sum_{j=1}^{n-1}A_{ij}e_jt_j+\sum_{j=1}^{n-1}B_{ij}e_{j+1}t_j+\sum_{j=0}^{n-1}C_{ij}e_{j+1}t_j^2+\sum_{j=1}^{m-1}D_{ij}\rho^j,$$ where $A_{ij}$, $B_{ij}$, $C_{ij}$ and $D_{ij}$ are scalars. If $i>1$, then from $e_1t_i=0$ we get that $D_{ij}=0$ for $j=1,2,\dots,m-1$. If $i=1$, then from $e_1t_1=t_1e_2$ we get that $D_{1j}=0$, $j=1,2,\dots,m-1$. In both cases we have that $D_{ij}=0$ for all $i$ and all $j$. If $l\notin \{i,i+1\}$, then $\varphi(e_lt_i)=0$ and $\varphi(t_ie_l)=0$. From the first equality we get that $A_{il}=B_{il-1}=0$ and from the second equality we get that $B_{il}=A_{il-1}=0$. Consequently, we have that $C_{il}=0$ for $l\notin \{i,i-1\}$. Therefore, we must have that $$\varphi(t_i)=A_{ii}e_it_i+B_{ii}e_{i+1}t_i+C_{ii-1}e_{i}t_{i-1}^2+C_{ii}e_{i+1}t_i^2.$$ From $e_it_i=t_ie_{i+1}$ it follows that $\varphi(e_i) \varphi(t_i)=\varphi(t_i)\varphi(e_{i+1})$, and we have $$C_{ii-1}=C_{ii}=0.$$ Also, $A_{ii}\neq 0$ and $B_{ii}\neq 0$. If one of them were zero, then $\varphi(t_i^2)=0$, which is in contradiction with $\varphi$ being an automorphism. Assume now that $$\varphi(\rho)=\sum_{j=1}^{m-1}D_{j}\rho^j+\sum_{j=1}^{n-1}E_{j}e_jt_j+\sum_{j=1}^{n-1}F_{j}e_{j+1}t_j+\sum_{j=0}^{n-1}G_{j}e_{j+1}t_j^2,$$ for some scalars $D_j$, $E_j$, $F_j$ and $G_j$. For $l\geq 2$, from $e_l\rho=0$ we have that $\varphi(e_l)\varphi(\rho)=0$, and it follows that $E_l=F_{l-1}=0$ for all such $l$ for which these scalars are defined.
Similarly, from $\rho e_l=0$, $l\geq 2$, we have that $F_l=E_{l-1}=0$, so that all the scalars $E_j$ and $F_j$ vanish. The same computations give $G_j=0$ for $j\geq 1$, while the remaining term $G_0e_1t_0^2=G_0\rho^m$ can be absorbed into the first sum by setting $D_m:=G_0$. Therefore, we get $$\varphi(\rho)=\sum_{j=1}^m D_j \rho^j.$$ Note that $D_1\neq 0$, because if $D_1=0$, then $\varphi(\rho^m)=0$, which again contradicts the fact that $\varphi$ is an automorphism. Gathering all the information obtained so far, we conclude that, up to inner automorphism, an arbitrary automorphism that fixes isomorphism classes of simple $A_{\Gamma}$-modules has the following action on the set of generators $\{e_i,\, t_l, \, \rho\, |\, i=1,2,\dots,n ; \, l=1,2,\dots,n-1\}$ $$\varphi(e_i)=e_i,\quad \varphi(t_l)=A_{ll}e_lt_l+B_{ll}e_{l+1}t_l, \quad \varphi(\rho)=\sum_{j=1}^m D_j \rho^j.$$ \noindent To compute ${\rm Out}^K (A_{\Gamma})$, for each automorphism $\varphi$ we need to find an automorphism which is in the same class as $\varphi$ in ${\rm Out}^K (A_{\Gamma})$, and which acts by scalar multiplication on as many of the above generators as possible. In order to do that we will compose $\varphi$ with suitably chosen inner automorphisms. First of all, let us see how an arbitrary inner automorphism acts on our set of generators. Let $y$ be an arbitrary invertible element in $A_{\Gamma}$. Then, $$y=\sum_{j=1}^nl_je_j+\sum_{j=1}^{n-1}s_je_jt_j+\sum_{j=1}^{n-1}r_je_{j+1}t_j+\sum_{j=1}^{n-1}p_je_{j+1}t_j^2+\sum_{j=1}^mq_j\rho^j,$$ where $l_i\neq 0$, $i=1,\dots,n$. From $yy^{-1}=1$, we easily compute the scalars $s_j^{\prime}$, $r_j^{\prime}$, $p_j^{\prime}$, $q_j^{\prime}$, in $$y^{-1}=\sum_{j=1}^nl_j^{-1}e_j+\sum_{j=1}^{n-1}s_j^{\prime}e_jt_j+\sum_{j=1}^{n-1}r_j^{\prime}e_{j+1}t_j+\sum_{j=1}^{n-1}p_j^{\prime}e_{j+1}t_j^2+\sum_{j=1}^mq_j^{\prime}\rho^j.$$ \noindent The inner automorphism given by $y$ has the following action on $\rho$ $$f_y(\rho):=y\rho y^{-1}=(l_1\rho+\sum_{j=1}^m q_j\rho^{j+1})y^{-1}=$$ $$=\rho+\sum_{j=1}^ml_1q_j^{\prime}\rho^{j+1}+\sum_{j=1}^ml_1^{-1}q_j\rho^{j+1}+\sum_{j=1}^m q_j \rho^{j+1} \sum_{j=1}^m q_j^{\prime}\rho^{j}=$$ $$=\rho+\rho(\sum_{j=1}^ml_1q_j^{\prime}\rho^{j}+\sum_{j=1}^ml_1^{-1}q_j\rho^{j}+\sum_{j=1}^m q_j \rho^{j} \sum_{j=1}^m q_j^{\prime}\rho^{j}+s_1r_1^{\prime}\rho^m)=\rho.$$ \noindent Therefore, an arbitrary inner automorphism fixes $\rho$. From now on, we will use inner automorphisms given by elements of the form $$x=\sum_{j=1}^nl_je_j.$$ They will be enough to get a class representative of $\varphi$ in ${\rm Out}^K (A_{\Gamma})$ that is easy to work with.
If $f_x$ is the inner automorphism given by an invertible element $x$, then we easily get that $$f_x(t_i)=l_il^{-1}_{i+1}e_it_i+l_i^{-1}l_{i+1}e_{i+1}t_i.$$ \noindent When we compose $f_x$ and $\varphi$ we get that $$f_x\circ \varphi(\rho)=\sum^{m}_{j=1}D_j\rho^j,$$ $$ f_x\circ \varphi(t_i)=A_{ii}l_il_{i+1}^{-1}e_it_i+B_{ii}l_i^{-1}l_{i+1}e_{i+1}t_i,$$ \noindent for all $i$, and $$f_x\circ \varphi(e_i)=e_i.$$ \noindent We want to choose $l_i$'s so that we get $$A_i:=A_{ii}l_il_{i+1}^{-1}=B_{ii}l_i^{-1}l_{i+1}$$ for $i=1,2,\dots,n-1.$ To do this we need to choose $l_i$'s in such way that the following equality holds for all $i$ $$A_{ii}B_{ii}^{-1}=l_i^{-2}l_{i+1}^2.$$ We can choose $l_1=1$ and then inductively, assuming that we have chosen $l_1,l_2,\dots,l_i$, because we are working over an algebraically closed field, we get $l_{i+1}$ from $A_{ii}B_{ii}^{-1}l_i^{2}=l_{i+1}^2.$ If we choose such $x$, then the map $\varphi_1:=f_{x}\circ \varphi$ has the following action on our generating set $$\varphi_1(e_i)=e_i,\quad \varphi_1(\rho)=\sum_{j=1}^mD_j\rho^j,\quad \varphi_1(t_i)=A_{i}t_i.$$ \noindent From the relations $e_it_{i-1}^2=t_{i}^2e_{i}$, for $i=2,3,\dots,n-1$, we get that $A_1^2=A_2^2=\dots=A_{n-1}^2.$ We can assume that $A_1=A_2=\dots=A_{n-1}$, because if not, then by multiplying $\varphi_1$ by an inner automorphism given by $x_1=\sum_{i=1}^nr_ie_i$, where we set $r_1=1$ and then inductively, $r_{i+1}=-r_i$ if $A_{i+1}=-A_i$, and $r_{i+1}=r_i$ if $A_{i+1}=A_i$, we get a new automorphism $\varphi_2$ such that $\varphi_2(t_i)=A_1t_i$, for all $i$. Also from the relation $\rho^m=e_1t_1^2$ we get that $A_1^2=D_1^m.$ This means that for a fixed $D_1$ we have two choices for $A_1$, since there are two square roots of $D_1^m$. These two values of $A_1$ will give us two different automorphisms, but as before, we can assume that after multiplying by an appropriate inner automorphism these two automorphisms represent the same automorphism in ${\rm Out}^K(A_{\Gamma})$. We started with an arbitrary automorphism $\varphi$ that fixes the isomorphism classes of simple $A_{\Gamma}$-modules and we showed that in the group ${\rm Out}^K (A_{\Gamma})$ it represents the same class as the element $\phi$ whose action is given by $$ \phi(e_i)=e_i, \quad \phi(\rho)=\sum^m_{j=1}D_j\rho^j, \quad \phi(t_s)=A_1 t_s,$$ where $A_1^2=D_1^m$, $i=1,\dots,n$, and $s=1,\dots,n-1.$ Therefore, every element in ${\rm Out}^K (A_{\Gamma})$ is uniquely determined by its action on $\rho$, that is, it is uniquely determined by an $m$-tuple $(D_1,D_2,\dots,D_m)$ where $D_1\neq 0$. The map $\theta$ that assigns to each $m$-tuple $D=(D_1,D_2,\dots, D_m)$ an isomorphism $\phi_D$, where $\phi_D(\rho)=\sum_{j=1}^m D_j \rho^j$, is an isomorphism of groups. But what is the group structure on the set $k^*\times \underbrace{k\times k \times\dots\times k}_{m-1}$, where $k^*$ denotes non-zero elements of $k$? If $\alpha=(\alpha_1, \alpha_2, \dots, \alpha_m)$ and $\beta=(\beta_1,\beta_2,\dots,\beta_m)$ are two $m$-tuples and $\phi_{\alpha}$ and $\phi_{\beta}$ are two corresponding automorphisms, then $\phi_{\beta}\circ \phi_{\alpha}=\phi_{\beta*\alpha}$ gives us the definition of the group operation $*$ on $k^*\times \underbrace{k\times k \times\dots\times k}_{m-1}$. 
Computing $\phi_{\beta}\circ \phi_{\alpha}$ gives us that \begin{equation} \label{eqGrupa} \beta * \alpha:=\left(\sum_{i=1}^l\alpha_i(\sum_{\tiny \begin{array}{c}k_1+\dots+k_i=l\\ k_1,\dots,k_i>0\end{array}}\beta_{k_1}\beta_{k_2}\cdots\beta_{k_i})\right)_{l=1}^m \end{equation} \noindent Here are the first few coordinates explicitly: $$\beta * \alpha=(\alpha_1\beta_1,\, \alpha_1\beta_2+\alpha_2\beta_1^2, \alpha_1\beta_3+2\alpha_2\beta_1\beta_2+\alpha_3\beta_1^3,\dots).$$ \begin{de} We define $H_m$ to be the group $(k^*\times \underbrace{k\times k \times\dots\times k}_{m-1}\, , \, *\, )$, where the multiplication $*$ is given by the above equation \eqref{eqGrupa}. \end{de} \noindent The identity element of $H_m$ is $(1,0,\dots,0)$, and this element corresponds to the class of inner automorphisms. The inverse element of an arbitrary $m$-tuple is easily computed inductively from the definition of $*$. Associativity is verified by an elementary, but tedious, computation. \begin{lm} The group $H_m$ is isomorphic to the group of automorphisms of the truncated polynomial algebra $k[x]/(x^{m+1}).$ \end{lm} \noindent {\bf Proof.} An arbitrary automorphism $f$ from ${\rm Aut} (k[x]/(x^{m+1}))$ is given by its action on $x$. Since it has to be surjective, and $f(x)^{m+1}=0$, we have that $x$ has to be mapped to a polynomial $d_1x+d_2x^2+\dots+d_mx^m$, where $d_1\neq 0$. Therefore, every automorphism of ${\rm Aut}(k[x]/(x^{m+1}))$ is given by a unique $m$-tuple $(d_1,d_2,\dots,d_m)$ where $d_1\neq 0$. The structure of a group on the set of all such $m$-tuples is the same as for the group $H_m$. $\blacksquare$ Since the group ${\rm Out}^K(A)$ is invariant under derived equivalence and Brauer tree algebras of the same type are derived equivalent, we have the following theorem. \begin{te} Let $\Gamma$ be a Brauer tree of type $(m,n)$ and let $A_{\Gamma}$ be a basic Brauer tree algebra whose tree is $\Gamma$. Then $${\rm Out}^K (A_{\Gamma})\cong {\rm Aut}(k[x]/(x^{m+1}))\cong H_m.$$ \end{te} We see that the group of outer automorphisms that fix isomorphism classes of simple modules of an arbitrary Brauer tree algebra depends only on the multiplicity $m$ of the exceptional vertex and does not depend on the number of edges. If we take an arbitrary Brauer tree $\Gamma$, and if $\Gamma^{\prime}$ is any connected Brauer subtree of $\Gamma$ that contains the exceptional vertex, then the corresponding Brauer tree algebras $A_{\Gamma}$ and $A_{\Gamma^{\prime}}$ have isomorphic groups of outer automorphisms that fix isomorphism classes of simple modules. If we take the subtree ${\Gamma}^{\prime}$ to be the exceptional vertex with one edge adjacent to it, we get $A_{\Gamma^{\prime}}=k[x]/(x^{m+1}).$ \begin{co} Let $\Gamma$ be a Brauer tree of type $(m,n)$ and let $\Gamma^{\prime}$ be an arbitrary connected Brauer subtree of $\Gamma$ that contains the exceptional vertex. If $A_{\Gamma}$ and $A_{\Gamma^{\prime}}$ are two basic Brauer tree algebras whose trees are $\Gamma$ and $\Gamma^{\prime}$ respectively, then $${\rm Out}^K (A_{\Gamma})\cong {\rm Out}^K(A_{\Gamma^{\prime}})\cong {\rm Aut}(k[x]/(x^{m+1})).$$ \end{co} Let $L$ be the subgroup of $H_m$ consisting of the elements of the form $(1,\alpha_2,\dots, \alpha_m)$ and let $K$ be the subgroup of $H_m$ consisting of the elements of the form $(\alpha_1,0,\dots,0)$. \begin{prop} The group $H_m$ is a semidirect product of $L$ and $K$, where $L\unlhd H_m$ is unipotent and the subgroup $K\cong {\rm\textbf{G}}_m$ is a maximal torus in $H_m$.
\end{prop} We see that, regardless of the multiplicity of the exceptional vertex, the subgroup $K\cong \textbf{G}_m$ is a maximal torus in ${\rm Out}^K (A_{\Gamma})$. From this we deduce the following theorem. \begin{te} Let $\Gamma$ be an arbitrary Brauer tree and let $A_{\Gamma}$ be a basic Brauer tree algebra whose tree is $\Gamma$. Up to graded Morita equivalence and rescaling there is a unique grading on $A_{\Gamma}$. \end{te} \noindent{\bf Proof.} By Proposition \ref{cokarak}, homomorphisms of algebraic groups from ${\textbf{G}}_m$ to ${\textbf{G}}_m$ give us all gradings on $A_{\Gamma}$ up to graded Morita equivalence. Since the only homomorphisms from ${\textbf{G}}_m$ to ${\textbf{G}}_m$ are given by the maps $x\mapsto x^r$, for $x\in {\textbf{G}}_m$ and $r\in \mathbb{Z}$, we have that there is a unique grading on $A_{\Gamma}$ up to rescaling (dividing each degree by the same integer) and graded Morita equivalence (shifting each summand of the tilting complex given by Green's walk). $\blacksquare$ \end{document}
\begin{document} \title{Speeding up the solution of the Site and Power Assignment Problem in Wireless Networks} \titlerunning{Site and Power Assignment in Wireless Networks} \author{Pasquale Avella\inst{1} \and Alice Calamita\inst{2}\thanks{This author has been partially supported by Fondazione Ugo Bordoni, Rome, Italy} \and Laura Palagi\inst{2}} \authorrunning{P. Avella et al.} \institute{DING, Università del Sannio, Benevento, 82100, Italy\\ \email{[email protected]} \and DIAG, Sapienza University of Rome, Rome, 00185, Italy\\ \email{\{alice.calamita,laura.palagi\}@uniroma1.it}} \maketitle \begin{abstract} This paper addresses the site and power assignment problem arising in the optimal design of wireless networks. It is well-known that natural formulations of this problem are sources of numerical instability and make the optimal solution challenging for state-of-the-art solvers, even in small-sized instances. We tackle this limitation from a computational perspective by suggesting two implementation procedures that can speed up the solution of this problem. The first is an extremely effective branching rule for a compact reformulation of this problem. Presolve operations are used as a second strategy to manage numerical instability. The approaches are validated using realistic LTE instances kindly provided by Fondazione Ugo Bordoni. The proposed implementation techniques have proved capable of significantly accelerating the solution of the problem, outperforming a standard solution approach. \keywords{wireless network design \and base station deployment \and power assignment \and 0-1 linear programming \and reduced cost fixing \and fixing heuristic} \end{abstract} \section{Introduction} Wireless network design is the problem of configuring a set of transmitters to provide service coverage to a set of receivers. The term configuring refers both to the optimal identification of the transmitter locations and receiver assignments, the so-called siting problem, and the optimal identification of some parameters of the transmitters, such as transmission power and/or frequency. This paper considers the site and power assignment problem, thus assuming the frequency is fixed. This research was carried out in collaboration with the Fondazione Ugo Bordoni (FUB) \cite{FUB}, a higher education and research institution under the supervision of the Italian Ministero delle Imprese e del Made in Italy (MISE), which operates in the telecommunication field, providing innovative services for government bodies. Wireless networks are becoming denser due to technological advancements and increased traffic, and further rapid growth is expected in the upcoming years \cite{israr2021renewable}. In this context, practitioners' traditional design approach, based on trial-and-error supported by simulation, has exhibited many limitations. The inefficiency of this approach leads to the need for optimization methods, which are critical for lowering costs and meeting user-demanded service quality standards. Many optimization models for Wireless Network Design (WND) have been investigated over the years (we recommend \cite{kennington2010wireless} for a thorough overview of the optimization challenges in modern wireless network design). However, the natural formulation on which most models are based has severe limitations since it involves numerical problems in the problem-solving phase, which emerge even in small instances.
Indeed, the constraint matrices of these models contain coefficients that range in a huge interval, as well as large big-$M$ coefficients, leading to weak bounds. The exact approaches that have been proposed in the literature are mainly oriented towards non-compact formulations and row generation methods. In \cite{d2013gub}, a non-compact binary formulation is investigated. The solution algorithm is based on a row-generation method. The authors of \cite{d2013gub} also present a 0-1 model for a WND variant linked to the feasible server assignment problem in \cite{d2011negative}. In \cite{capone2011new}, a non-compact formulation is proposed for the maximum link activation problem; the formulation uses cover inequalities to replace the source of numerical instability. In \cite{naoum2010nested}, a mixed-integer linear programming (MILP) formulation is introduced, and the proposed exact solution method combines combinatorial and classical Benders decomposition and valid cuts. In \cite{ageyev2015lte,ageyev2014optimization,bondarenko2019optimization,dmytro2015multi}, mixed-integer formulations are used to solve randomly generated instances through standard mixed-integer programming (MIP) solvers. In \cite{avella}, a compact reformulation for the base station deployment problem has been proposed that allows for the exact solution of large instances. In this work, we give implementation details that accelerate the optimal solution of large instances. A first contribution of the paper is an extended formulation, obtained by adding auxiliary variables, whose structure is easier for numerical solvers to handle. We show how suitable branching strategies speed up the solution of the subproblems generated during branch-and-bound. This effect is more marked in the proposed formulation. \\A second contribution is a reduced cost fixing presolve procedure to fix some variables to zero and reduce the big-$M$, strengthening the formulation. Although reduced cost fixing is a well-known technique, its application to this problem has never been investigated before (as far as we know) and shows significant potential thanks to its positive impact on the reduction of numerical problems. To make this technique more effective, we develop a heuristic that produces near-optimal solutions very quickly. The remainder of this paper is organized as follows. Section \ref{ch_formulation} reports the problem statement and its natural formulation. Section \ref{ch_branching_presolve} describes the implementation techniques we propose to accelerate the optimal solution. Section \ref{ch_results} reports the computational results, whereas conclusions are provided in Sect. \ref{ch_conclusions}. \section{A Wireless Network Design Formulation} \label{ch_formulation} A wireless network consists of radio transmitters distributing service (i.e., wireless connection) to a target area. The target area is usually partitioned into elementary areas, called testpoints, in line with the recommendations of the telecommunications regulatory bodies. Each testpoint is considered a representative receiver of all the users in the elementary area. Testpoints receive signals from all the transmitters. The power received is classified as serving power if it relates to the signal emitted by the transmitter serving the testpoint; otherwise it is classified as interfering power (see the Long-Term Evolution (LTE) standard \cite{rumney2013lte}).
A testpoint is regarded as served (or covered) by a base station if the ratio of the serving power to the sum of the interfering powers and noise power (Signal-to-Interference-plus-Noise Ratio or SINR) is above a threshold \cite{rappaport1996wireless}, whose value depends on the desired quality of service. We assume the frequency channel is given and equal for all the transmitters. We also assume that the power emissions of the activated transmitters can be represented by a finite set of power values, which fits with the standard network planning practice of considering a small number of discrete power values. The practice of power discretization for modeling purposes has been introduced in \cite{d2013gub}. Let $\mathcal{B}$ be the finite set of candidate transmitters and $\mathcal{T}$ be the finite set of receivers located at the testpoints. Let $\mathcal P = \{P_1, \dots,P_{|\mathcal P|}\}$ be the finite set of feasible power values assumed by the activated transmitters, with $P_1>0$ and $P_{|\mathcal P|} = P_{max}$. Hence $\mathcal L = \{1, \dots, |\mathcal P|\}$ is the finite set of power value indices (or simply power levels). We introduce the variables $$ z_{bl}=\left\{\begin{array}{ll} 1 & \mbox{if transmitter $b$ is emitting at power $P_l$} \\ 0 & \mbox{otherwise} \end{array}\right. \qquad b\in\mathcal{B}, l\in\mathcal{L} $$ and $$ x_{tb}=\left\{\begin{array}{ll} 1 & \mbox{if testpoint $t$ is served by transmitter $b$} \\ 0 & \mbox{otherwise.} \end{array}\right. \qquad b\in\mathcal{B},\ t\in\mathcal{T} $$ Natural formulations of the WND problem contain the so-called SINR inequalities used to assess service coverage conditions. To formulate the SINR inequalities, we refer to the discrete big-$M$ formulation reported, e.g., in \cite{d2013gub}, which considers a discretization of the power range. Let $a_{tb} > 0$ be the fading coefficient applied to the signal received in $t \in \mathcal T$ and emitted by $b \in \mathcal B$. Then a receiver $t$ is served by a base station $\beta \in \mathcal{B}$ if the ratio of the serving power to the sum of the interfering powers and the noise power $\mu > 0$ (the SINR) is above a given threshold $\delta > 0$, namely \begin{equation} \label{SINR} \frac{{a}_{t\beta}\displaystyle\sum_{l \in \mathcal{L}} P_l z_{\beta l}}{\mu + \displaystyle\sum_{b \in \mathcal{B} \setminus \{\beta\}}{a}_{tb} \sum_{l \in \mathcal{L}} P_l z_{bl}} \geq \delta \qquad t \in \mathcal T, \beta \in \mathcal B:\ x_{t \beta} = 1. \end{equation} Following \cite{d2013gub}, we can rewrite the SINR condition \eqref{SINR} using the big-$M$ constraints \begin{equation} {a}_{t\beta} \sum_{l \in \mathcal L} P_l z_{\beta l} - \delta \sum_{b \in \mathcal{B} \setminus \{\beta\}} {a}_{tb} \sum_{l \in \mathcal L} P_l z_{bl} \geq \delta \mu - M_{t \beta} (1 - x_{t \beta}) \qquad t \in \mathcal{T}, \beta \in \mathcal{B} \label{sinr_ineq} \end{equation} where $M_{t\beta}$ is a large (strictly) positive constant. When $x_{t \beta}=1$, \eqref{sinr_ineq} reduces to \eqref{SINR}; when $x_{t \beta}=0$ and $M_{t\beta}$ is sufficiently large, \eqref{sinr_ineq} becomes redundant. We can set, e.g., \begin{equation} \label{bigM} M_{t \beta} = \delta \mu + \delta P_{max} \sum_{b \in \mathcal{B} \setminus \{\beta\}} a_{tb}. \end{equation} To enforce a minimum territorial coverage, namely a minimum integral number $\alpha \in [0, |\mathcal T|]$ of served testpoints, we introduce \begin{equation} \sum_{b \in \mathcal{B}}\sum_{t \in \mathcal{T}} x_{tb} \geq \alpha.
\label{constr_coverage} \end{equation} \\Each testpoint must be covered by at most one serving base station, namely \begin{equation} \sum_{b \in \mathcal{B}} x_{tb} \leq 1 \qquad t \in \mathcal{T} \label{constr_receivers}. \end{equation} To enforce the choice of only one (strictly positive) power level for each activated transmitter we use \begin{equation} \sum_{l \in \mathcal{L}} z_{bl} \leq 1 \qquad b \in \mathcal{B}. \label{constr_power} \end{equation} \\Variable upper bound constraints \begin{equation} x_{tb} \leq \sum_{l \in \mathcal{L}} z_{bl} \quad t \in \mathcal T, b \in \mathcal B, \label{VUBs} \end{equation} enforce that a testpoint $t$ can be assigned to a transmitter $b$ only if $b$ is activated. Though redundant, they are included in the formulation as they are known to strengthen the quality of linear relaxation bounds significantly. A considerable goal to pursue is the citizens' welfare; therefore, we use a model that aims at identifying solutions with low environmental impact in terms of electromagnetic pollution and/or power consumption. Reducing electromagnetic pollution indeed involves reducing the power emitted by the transmitters \cite{chiaraviglio2018planning}. Hence, we aim to minimize the total number of activated base stations with a penalization on the use of stronger power levels: the cost associated with the use of a power level equal to $l \in \mathcal L$, namely $c_l$, is greater the greater is $P_l$. To summarize, a natural formulation for the WND problem is the following 0-1 Linear Programming: \begin{equation}\label{eq:initial} \begin{array}{rlr} \min_{x,z} \hspace{2mm}& \displaystyle\sum_{b \in \mathcal{B}}\sum_{l \in \mathcal{L}} c_{l} z_{bl} \\ &(x,z)\in S\\ \end{array} \end{equation} where the feasible region $S$ is defined as $$S=\left\{(x,z)\in\{0,1\}^{n+m}: \ \mbox{satisfying } \eqref{sinr_ineq}, \eqref{constr_coverage}, \eqref{constr_receivers}, \eqref{constr_power}, \eqref{VUBs}\right\} $$ with $x=(x_{t b})_{t \in \mathcal{T},b \in \mathcal{B} }, z=(z_{bl})_{b \in \mathcal{B}, l \in \mathcal L } $ and $n=|\mathcal{T}| \times |\mathcal{B}|, m=|\mathcal{B}| \times |\mathcal{L}|$. \section{Implementation Details} \label{ch_branching_presolve} In principle, model \eqref{eq:initial} can be solved by MIP solvers. However, it is well-known (see, e.g.,\cite{avella,d2013gub,d2016towards}) that the following issues may arise: \begin{itemize} \item[\textbullet] the power received in each testpoint lies in a large interval, from very small values to huge, which makes the range of the coefficients in the constraint matrix very large and the solution process numerically unstable and possibly affected by error; \item[\textbullet] the big-$M$ coefficients lead to poor quality bounds that impact the effectiveness of standard solution procedures; \item[\textbullet] real-world problems lead to models with a huge number of variables and constraints. \end{itemize} These issues make solving real-life instances of this problem challenging. Therefore, our study aims at reporting some implementation details that can make the solution of this problem more efficient and fast. \subsection{Reformulation} We can rewrite \eqref{sinr_ineq} as \begin{equation} \label{sinr_ineq_intermedia} M_{t \beta} (1 - x_{t \beta}) \geq \delta \displaystyle\sum_{b \in \mathcal{B} \setminus \{\beta\}} {a}_{tb} \displaystyle\sum_{l \in \mathcal L} P_l z_{bl} - {a}_{t\beta} \displaystyle\sum_{l \in \mathcal L} P_l z_{\beta l} + \delta \mu \qquad t \in \mathcal{T}, \beta \in \mathcal{B}. 
\end{equation} We introduce the auxiliary binary variables $w_{tb}$, defined as \begin{equation} \label{def_w} \begin{array}{l} w_{tb} \leq 1 - x_{tb} \qquad t \in \mathcal{T}, b \in \mathcal{B}\\ w_{tb} \in \{0,1\}. \end{array} \end{equation} Hence, $w_{tb}$ is forced to zero whenever a testpoint $t$ is served by a base station $b$. \\We can rewrite \eqref{sinr_ineq_intermedia} using $w_{tb}$ \begin{equation} \label{new_sinr_ineq} M_{t \beta} w_{t \beta} \geq \delta \displaystyle\sum_{b \in \mathcal{B} \setminus \{\beta\}} {a}_{tb} \displaystyle\sum_{l \in \mathcal L} P_l z_{bl} - {a}_{t\beta} \displaystyle\sum_{l \in \mathcal L} P_l z_{\beta l} + \delta \mu \qquad t \in \mathcal{T}, \beta \in \mathcal{B}. \end{equation} \begin{theorem} Inequalities \eqref{def_w} and \eqref{new_sinr_ineq} are valid for $S$. \end{theorem} \begin{proof} By \eqref{def_w}, we get that when $x_{t \beta}=1$ we must have $w_{t \beta}=0$, and \eqref{sinr_ineq} and \eqref{new_sinr_ineq} enforce the same condition on the SINR. When instead $x_{t \beta}=0$, the big-$M$ activates in constraint \eqref{sinr_ineq} so that no condition is enforced on the corresponding SINR; in turn from \eqref{def_w} we have that $w_{tb}\in\{0,1\}$ so that \eqref{new_sinr_ineq} does not enforce any condition on SINR too. \qed \end{proof} We will denote by \emph{reformulated} WND the following 0-1 Linear Programming: \begin{equation}\label{eq:reformulated} \begin{array}{rlr} \min_{x,w,z} \hspace{2mm}& \displaystyle\sum_{b \in \mathcal{B}}\sum_{l \in \mathcal{L}} c_{l} z_{bl} \\ &(x,w,z)\in S^{\prime}\\ \end{array} \end{equation} where $S^{\prime}=\left\{(x,w,z)\in\{0,1\}^{2n+m}: \ \mbox{satisfying } \eqref{constr_coverage}, \eqref{constr_receivers}, \eqref{constr_power}, \eqref{VUBs}, \eqref{def_w}, \eqref{new_sinr_ineq} \right\}$ with $w=(w_{t b})_{t \in \mathcal{T},b \in \mathcal{B}}$. Reformulation \eqref{eq:reformulated} leads to the adoption of particularly efficient branching rules. Indeed, as shown in the computational tests (see Sect. \ref{ch_results}), branching with a priority on the $w$ variables typically yields faster results than alternative branching rules. \subsection{Coefficient Tightening} Reducing the model size is crucial as real-life instances typically involve many variables and constraints. In this direction, we propose using a Reduced Cost Fixing method, in short RCF (see \cite{achterberg2020presolve} for a survey on presolve techniques). Although this procedure is well-known and widespread, no one has ever tried to see its effects on this type of problem (based on our knowledge). By solving the linear relaxation of the problem, we can get the lower bound $lb$ and the corresponding reduced costs $ \bar{c}_{bl}$ associated with the sole variables $z_{bl}$ in the optimal solution of the linear relaxation. Then, given an upper bound $ub> lb$, if $ \bar{c}_{ b l} \geq ub-lb\text{ for some } b \in \mathcal B, l \in \mathcal L$, the corresponding $z_{ b l}$ must be at its lower bound in every optimal solution; hence we can fix $z_{ b l}=0$. Whenever the fixing of a variable $z_{ b l}$ occurs at a given $ l\in \mathcal L$ such that $P_{ l} = P_{max}$, we can recompute and reduce the big-$M$, resulting in a tightening of the formulation. Indeed, after the RCF, we may have some transmitters $b$ that cannot emit at the maximum power level $l$ such that $P_l = P_{max}$, since the corresponding $z_{bl}$ variables have been fixed to zero. 
In such cases, the value by which $a_{tb}$ is weighted in the big-$M$ (see \eqref{bigM}) can be reduced to the highest power value that $b$ can assume, which is strictly less than $P_{max}$. To formalize it, let us define the set $\mathcal B^R \subseteq \mathcal B$ of base stations affected by RCF, i.e., such that $b \in \mathcal B^R$ if the variable $z_{bl}$ has been fixed to zero for at least one $l \in \mathcal L$. Then, for a given $b \in \mathcal B^R$, we can define the set $\mathcal L_b^R \subset \mathcal L$ of power levels that $b$ can assume after the RCF. We denote by $P_{b, max}^R$ the power value corresponding to the maximum power level that $b \in \mathcal B^R$ can assume. Since $\mathcal L_b^R \subset \mathcal L$, we have that $P_{b, max}^R \leq P_{max}$. Using this notation, we can write down the new value of the big-$M$ $$M^{\prime}_{t \beta} = \delta \mu + \delta \left(P_{max} \sum_{b \in \mathcal{B} \setminus (\mathcal{B}^R \cup \{\beta\})} a_{tb} + \sum_{b \in \mathcal{B}^R \setminus \{\beta\}} P_{b, max}^R a_{tb}\right) = \delta \mu + \delta \sum_{b \in \mathcal{B}\setminus \{\beta\}} \tilde{P}_b a_{tb}$$ which satisfies $M^{\prime}_{t \beta} \leq M_{t \beta}$ since $\tilde P_b = \begin{cases} P_{max} & \text{ if } b \in \mathcal B \setminus \mathcal B^R\\ P^{R}_{b,max} & \text{ if } b \in \mathcal B^R \end{cases}$ and hence $\tilde P_b \leq P_{max}$. The smaller the optimality gap given by the estimated lower and upper bounds, the greater the number of $z_{bl}$ variables that can be fixed to zero, and the smaller the big-$M$ coefficients. Hence applying a standard algorithm, as implemented in commercial MIP solvers, to the tightened formulation (i.e., the formulation obtained after the RCF) produces stronger bounds and a faster solution. We can apply a further reduction (also used in \cite{avella}). Since the upper bound $ub$ provides information on the maximum number of transmitters that can be installed, we can further reduce the value of the big-$M$ by replacing the sum of all the interfering contributions in the testpoint $t$ with the sum of the \emph{major} interfering contributions in $t$. In particular, only the strongest $\gamma$ interferers are considered, where $\gamma$ is the maximum number of transmitters that can be activated. Given $\gamma$, the big-$M$ can be computed as $$M^{\prime\prime}_{t \beta} = \delta \mu + \delta \displaystyle \sum_{b \in \mathcal{A}_t \setminus \{\beta\}} \tilde{P}_{b} a_{tb} \leq M^{\prime}_{t \beta}\leq M_{t \beta}$$ where $\mathcal A_t \subset \mathcal B$ is the set of the $\gamma$ base stations emitting the strongest signals received in $t$, i.e., $|\mathcal{A}_t| = \gamma$. The smaller $\gamma$ is, the smaller the big-$M$; therefore, the estimate of $\gamma$ should be as accurate as possible.
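For illustration only, the following sketch shows how the RCF and the subsequent big-$M$ recomputation described above could be realized with the \texttt{gurobipy} interface used in our experiments; the containers \texttt{z}, \texttt{a}, \texttt{P} and \texttt{max\_level} are hypothetical placeholders for the model data, and the fragment is a simplified outline rather than our actual implementation.
\begin{verbatim}
# Illustrative sketch: reduced cost fixing (RCF) followed by big-M tightening.
# Assumes a gurobipy model `m` with binary power-level variables z[b, l],
# fading coefficients a[t, b], power values P[l], SINR threshold `delta`,
# noise `mu`, and a valid upper bound `ub`; all names are placeholders.

def reduced_cost_fixing(m, z, ub):
    relax = m.relax()                        # LP relaxation of the 0-1 model
    relax.optimize()
    lb = relax.ObjVal                        # lower bound
    fixed = []
    for (b, l), var in z.items():
        rc = relax.getVarByName(var.VarName).RC   # reduced cost of z[b, l]
        if lb + rc > ub:                     # z[b, l] = 1 cannot beat ub
            var.UB = 0.0                     # fix the variable to zero in m
            fixed.append((b, l))
    return lb, fixed

def tightened_big_M(t, beta, B, a, P, delta, mu, max_level):
    # max_level[b]: largest power level still available to b after the RCF
    return delta * mu + delta * sum(a[t, b] * P[max_level[b]]
                                    for b in B if b != beta)
\end{verbatim}
The strict inequality in the fixing test guarantees that at least one optimal solution is preserved; the further reduction to the $\gamma$ strongest interferers is obtained analogously by restricting the sum to $\mathcal A_t$.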
We observe that getting a good lower bound is straightforward since constraints \eqref{VUBs} in $S^{\prime}$ naturally lead to a good linear relaxation value. Instead, getting a good upper bound is harder. Commercial MIP solvers can be used to find a good feasible solution, but this may be time-consuming. Consequently, we developed a fixing heuristic based on the observation that the fractional values of the variables in the LP relaxation are often good predictors of the zero/non-zero variables in an optimal ILP solution. This is especially the case when the LP relaxation is tight. The scheme of our fixing heuristic is: \begin{enumerate} \item solve the LP relaxation of the problem and take the fractional solution; \item identify the set of fractional variables that are likely to be zero (i.e., whose value is less than a very low threshold in the fractional solution); \item fix the selected variables to zero and perform bound strengthening to propagate implications; \item solve the resulting ILP problem to get a near-optimal feasible solution. \end{enumerate} The variables fixed to zero correspond to the power levels that are not needed by the transmitters to cover the target area. \section{Computational Experiments} \label{ch_results} The computational experiments are carried out on eleven realistic instances provided by the FUB, concerning the 4G LTE 800 MHz signal in the Municipality of Bologna (Italy). Each instance differs in the required quality of service, i.e., it is characterized by a threshold $\delta$ chosen in the typical range of LTE services [0.100\,W, 0.316\,W], and in the minimum number of receivers to be covered, i.e., $\alpha$ ranges in [0.95$|\mathcal T|$, $|\mathcal T|$]. The parameters of the optimization are: $|\mathcal B|= 135$, $|\mathcal T| = 4\,693$, $|\mathcal L|= 3$, $\mathcal P =\{20\ W, 40\ W, 80\ W\}$, $c_l=(1,2,4)$, $\mu= 7.998 \times 10^{-14} \,W$. The values of the received power $a_{tb}P_l$, and consequently of the noise $\mu$, are scaled by a factor of $10^{-10}$ to get better accuracy on the optimal solutions, as suggested in \cite{d2016towards}. Hence, $a_{tb}P_l$ ranges in $(10^{-4}, 10^5)$ after the scaling. The code has been implemented in Python, and the experiments have been carried out on an Ubuntu server with an Intel(R) Xeon(R) Gold 5218 CPU and 96 GB of RAM. The MIP solver Gurobi (Gurobi Optimizer 9 \cite{GurobiOptimizer}) has been used to obtain all the results, including the feasible solutions for the RCF. In Gurobi, we use the default configuration with more time spent on identifying feasible solutions; we refer to this configuration as the standard setting in the following. The purpose of this section is to validate the two proposed approaches. As regards the first, Table \ref{tab:PB_comparison} reports the results obtained with the natural formulation \eqref{eq:initial}, denoted by N, and with its reformulation \eqref{eq:reformulated}, denoted by R, under different solution methods. When not specified, the solution method is the standard setting. Otherwise, we use the notation PB followed by a variable name when we adopt a branching rule that prioritizes the specified variable. \begin{table} [h!]
\setlength{\tabcolsep}{5pt} \caption{Computational results to evaluate the first proposal} \begin{center} \resizebox{\textwidth}{!}{ \begin{tabular}{lrrrrrrrr} \toprule & \multicolumn{2}{c}{\textbf{N}} & \multicolumn{2}{c}{\textbf{N + PB}\pmb{$x$}}& \multicolumn{2}{c}{\textbf{R}} & \multicolumn{2}{c}{\textbf{R+PB}\pmb{$w$}}\\%& \multicolumn{2}{c}{{R+PB$x$}}\\ \cmidrule(lr){2-3}\cmidrule(lr){4-5}\cmidrule(lr){6-7}\cmidrule(lr){8-9} \textbf{ID} & \textbf{Nodes} & \textbf{Time[s]} & \textbf{Nodes} & \textbf{Time[s]} & \textbf{Nodes} & \textbf{Time[s]} & \textbf{Nodes} & \textbf{Time[s]}\\%& \textbf{Nodes} & \textbf{Time[s]}\\ \cmidrule(lr){1-1}\cmidrule(lr){2-9} I-10 & 81 & 652.33 & 587 & 941.11 & 189 & {629.35} & 189 & \textbf{528.38}\\% & 189 & 537.11\\ I-9.5 & 80 & {629.35} & 244 & 727.43 & 308 & 987.71 & 82 & \textbf{544.11} \\%& 82 & \textbf{540.74 }\\ I-9 & 88 & 751.76 & 227 & {495.73} & 341 & 2052.64 & 88 & \textbf{435.03} \\%& 88 & \textbf{433.70} \\ I-8.5 & 83 & {652.87} & 390 & \textbf{643.51} & 87 & 686.63 & 87 & 686.86 \\% & 87 & 662.00 \\ I-8 & 2097 & 13939.16 & 169 & \textbf{535.40} & 328 & 1076.34 & 525 & {609.97} \\%& 525 & 727.12\\ I-7.5 & 2102 & 1587.88 & 282 & {621.91} & 251 & 800.53 & 39 & \textbf{348.67}\\% & 39 & 385.21\\ I-7 & 350 & 1093.30 & 80 & \textbf{365.61} & 356 & 1214.33 & 165 & {393.10} \\% & 165 & 395.94\\ I-6.5 & 1528 & 1331.78 & 1528 & 1371.79 & 521 & {1102.29} & 1141 & \textbf{883.88}\\% & 1141 & 902.55\\ I-6 & 1973 & 1265.77 & 2471 & 1432.41 & 1480 & {1096.09} & 997 & \textbf{841.16}\\% & 997 & 852.44\\ I-5.5 & 76 & 2474.90 & 76 & {2470.98} & 76 & 2538.30 & 76 & \textbf{2348.91}\\% & 76 & 2486.41\\ I-5 & 75 & 2934.47 & 72 & {2460.33} & 75 & 2535.57 & 76 & \textbf{ 2403.94}\\% & 76 & 2428.77\\ \bottomrule \end{tabular} } \end{center} \label{tab:PB_comparison} \end{table} The metrics reported in Table \ref{tab:PB_comparison} are the number of branching nodes and the computational time on each tested instance (identified by the ID code) and for each framework. The lowest times per instance are in bold. The instances have been reported with increasing complexity. \\The performance obtained with the standard solution method on the two formulations shows little difference: framework R is faster than N in 6/11 instances (almost 55\%). This result is straightforward since the reformulation cannot yield better bounds. However, the computational difference in using a natural formulation or its reformulation becomes evident when adopting particular branching strategies. Indeed, in R+PB$w$, we can close the MIP gap more quickly than in the N/R frameworks in 10/11 instances (more than 90\%). Such good results cannot be obtained by directly prioritizing the branching of the $x$ variables on the natural formulation. Indeed, even though N+PB$x$ is faster than N in solving most instances, R+PB$w$ is faster than N+PB$x$ in solving 8/11 instances (more than 70\%). \\Other branching rules, such as priority branching on $z$ variables, pseudo reduced cost branching, pseudo shadow price branching, maximum infeasibility branching, and strong branching, all result in deteriorating performance and, therefore, have not been reported. The second proposed technique to speed up the problem solution is a presolve based on reduced cost fixing. In Table \ref{tab:rcf_comparison}, we report the computational results obtained with this technique on the two formulations. The lowest times per instance are in bold. 
We use N+RCF when this procedure is applied to the natural formulation \eqref{eq:initial} and R+RCF when it is applied to its reformulation \eqref{eq:reformulated}. We also combined the two proposals by experimenting with the RCF applied to the reformulation and using priority branching on $w$ when solving the MIP after the RCF: we denote this framework by R+PB$w$+RCF. We compare these frameworks to the standard solution method in the natural formulation N. \begin{table} [h!] \setlength{\tabcolsep}{5pt} \caption{Computational results to evaluate the second proposal and a combination of the two} \begin{center}\resizebox{\textwidth}{!}{ \begin{tabular}{lrrrrrrrr} \toprule & \multicolumn{2}{c}{\textbf{N}} & \multicolumn{2}{c}{\textbf{N + RCF}} & \multicolumn{2}{c}{\textbf{R + RCF}} & \multicolumn{2}{c}{\textbf{R + PB}\pmb{$w$}\textbf{+ RCF}}\\ \cmidrule(lr){2-3}\cmidrule(lr){4-5}\cmidrule(lr){6-7}\cmidrule(lr){8-9} \textbf{ID} & \textbf{Nodes} & \textbf{Time[s]} & \textbf{Nodes} & \textbf{Time[s]} & \textbf{Nodes} & \textbf{Time[s]} & \textbf{Nodes} & \textbf{Time[s]} \\ \cmidrule(lr){1-1}\cmidrule(lr){2-9} I-10 & 80 & 652.33 & 36 & 405.55 & 49 & 484.46 & 65 & \textbf{393.4} \\ I-9.5 & 81 & 629.35 & 248 & 556.98 & 156 & \textbf{493.38} & 156 & 606.58\\ I-9 & 88 & 751.76 & 125 & 647.30 & 186 & \textbf{460.41} & 375 & 1590.17\\ I-8.5 & 83 & 652.87 & 116 & \textbf{430.66} & 86 & 601.51 & 106 & 438.11\\ I-8 & 2097 & 13939.16 & 149 & \textbf{497.47} & 1647 & 8772.59 & 404 & 567.39\\ I-7.5 & 2102 & 1587.88 & 154 & 486.41 & 105 & \textbf{430.23} & 157 & 563.69\\ I-7 & 350 & 1093.30 & 154 & \textbf{461.14} & 111 & 1036.27 & 111 & 1069.02\\ I-6.5 & 1528 & 1331.78 & 124 & 544.68 & 175 & \textbf{496.49} & 256 & 829.04\\ I-6 & 1973 & 1265.77 & 258 & 593.12 & 75 & 459.39 & 75 & \textbf{454.56}\\ I-5.5 & 76 & 2474.90 & 63 & 2795.88 & 37 & \textbf{2368.67} & 37 & 2546.26\\ I-5 & 75 & 2934.47 & 340 & \textbf{2780.13} & 211 & 3768.62 & 211 & 3780.96\\ \bottomrule \end{tabular}} \end{center} \label{tab:rcf_comparison} \end{table} From a computational perspective, applying the RCF-based presolve is preferable in most cases, regardless of the formulation or the branching rules. Indeed, RCF produces smaller problems that are typically faster to solve. Both N+RCF and R+RCF produce good results. However, we remark that the reformulation requires the addition of some variables and constraints. The framework combining the two proposals (R+PB$w$+RCF) is faster than N in 8/11 instances but slower than N/R +RCF. We conclude that applying the RCF to the natural formulation \eqref{eq:initial} is the best strategy. We now want to analyze the effect of the RCF presolve on the problem sparsity and the big-$M$ range and assess the importance of having a fast heuristic for achieving such good results. In Table \ref{tab:rcf_effect}, we report the number of non-zeros for what concerns the sparsity, the maximum value of the big-$M$, and some metrics about the time, including the total solution time (denoted by \emph{Time}). The total solution time is given by the sum of the time to get the lower bound (not reported), the time to get the upper bound using our heuristic (denoted by \emph{HTime}), and the time to solve the reduced problem after the RCF-presolve (denoted by \emph{RTime}). These metrics have been evaluated on the natural formulation \eqref{eq:initial}. The lowest times per instance are in bold. \begin{table} [h!] 
\setlength{\tabcolsep}{5pt} \caption{Effect of the reduced cost fixing presolve} \begin{center}\resizebox{\textwidth}{!}{ \begin{tabular}{lrrrrrrrr} \toprule & \multicolumn{3}{c}{\textbf{N}} & \multicolumn{5}{c}{\textbf{N + RCF}}\\ \cmidrule(lr){2-4}\cmidrule(lr){5-9} \textbf{ID} & \textbf{Non-zeros} & \textbf{MaxBig-}\pmb{$M$} & \textbf{Time[s]} & \textbf{Non-zeros} & \textbf{MaxBig-}\pmb{$M$} & \textbf{HTime[s]} & \textbf{RTime[s]} & \textbf{Time[s]} \\ \cmidrule(lr){1-1}\cmidrule(lr){2-9} I-10 & 12573655 & 91958.73 & 652.33 & 7435231 & 45111.35 & 61.55 & 318.92 & \textbf{405.55}\\ I-9.5 & 12573655 & 103179.39 & 629.35 & 7950820 & 51589.60 & 80.71 & 450.91 & \textbf{556.98}\\ I-9 & 12573655 & 115769.18 & 751.76 & 7950820 & 57884.48 & 51.04 & 562.77 & \textbf{647.30}\\ I-8.5 & 12573655 & 129895.16 & 652.87 & 7950820 & 64947.46 & 49.36 & 356.77 & \textbf{430.66} \\ I-8 & 12573655 & 145744.77 & 13939.16 & 7950820 & 72872.25 & 54.40 & 416.21 & \textbf{497.47}\\ I-7.5 & 12573655 & 161284.63 & 1587.88 & 7497127 & 79119.94 & 48.84 & 410.17 & \textbf{486.41}\\ I-7 & 12573655 & 183481.79 & 1093.30 & 7950820 & 91740.72 & 56.12 & 381.49 & \textbf{461.14}\\ I-6.5 & 12573655 & 205869.96 & 1331.78 & 7950820 & 102934.78 & 56.84 & 462.02 & \textbf{544.68} \\ I-6 & 12573655 & 230989.89 & 1265.77 & 7950820 & 115494.73 & 127.56 & 439.38 & \textbf{593.12}\\ I-5.5 & 12620585 & 259174.92 & \textbf{2474.90} & 3781038 & 62347.47 & 37.96 & 2714.18 & 2795.88\\ I-5 & 12620585 & 290799.04 & 2934.47 & 3768647 & 69955.01 & 36.60 & 2687.69 & \textbf{2780.13}\\ \bottomrule \end{tabular}} \end{center} \label{tab:rcf_effect} \end{table} RCF-based presolve induces more sparsity in the model as the number of non-zeros is noticeably lower. It also intervenes on each big-$M$ by decreasing its value (the maximum value is halved on average) and experimentally reduces the total solution time. It should be noticed that a fast heuristic is required to achieve a low total solution time. However, the RCF presolve has one drawback: despite being a promising and valid tool for efficiently solving large-scale problems of site and power assignment, it fails in the most complex scenarios, that is, when a very high level of service is required (such as in I-5.5 and I-5). In conclusion, the two best implementation techniques are R+PB$w$ and N+RCF. The former reaches the optimum with an average number of nodes equal to 315 and with average times equal to 911.27 seconds, while N+RCF takes on average 160.64 nodes and 927.21 seconds. Both the strategies significantly beat the standard solution framework N, which takes on average 775.73 nodes and 2483.05 seconds. \section{Conclusions} \label{ch_conclusions} This work aimed at providing implementation procedures to accelerate the optimal solution of the site and power assignment problem in wireless networks. We proposed a reformulation leading to more efficient branching and presolve operations for reducing the big-$M$ and the problem size. The presolve is based on a reduced cost fixing procedure. Computational tests on several realistic instances of this problem confirmed the efficiency of our proposals, which, compared to the standard solution of the natural formulation of the problem, resulted in faster solution times. However, the reduced cost fixing procedure should only be used when proper upper bounds can be computed quickly: this is not trivial in all cases, particularly when high-quality service is required. 
In this direction, our fixing heuristic was crucial to the success of this approach. \section*{Acknowledgments} The authors gratefully acknowledge the support of Fondazione Ugo Bordoni, in particular Manuel Faccioli, Federica Mangiatordi, Luca Rea, Guido Riva, Pierpaolo Salvo, and Antonio Sassano. \end{document}
\begin{document} \begin{abstract} Discrete exterior calculus (DEC) is a framework for constructing discrete versions of exterior differential calculus objects, and is widely used in computer graphics, computational topology, and discretizations of the Hodge-Laplace operator and other related partial differential equations. However, a rigorous convergence analysis of DEC has always been lacking; as far as we are aware, the only convergence proof of DEC that has appeared so far is for the scalar Poisson problem in two dimensions, and it is based on reinterpreting the discretization as a finite element method. Moreover, even in two dimensions, there have been some puzzling numerical experiments reported in the literature, apparently suggesting that there is convergence without consistency. In this paper, we develop a general independent framework for analyzing issues such as convergence of DEC without relying on theories of other discretization methods, and demonstrate its usefulness by establishing convergence results for DEC beyond the Poisson problem in two dimensions. Namely, we prove that DEC solutions to the scalar Poisson problem in arbitrary dimensions converge pointwise to the exact solution at least linearly with respect to the mesh size. We illustrate the findings by various numerical experiments, which show that the convergence is in fact of second order when the solution is sufficiently regular. The problems of explaining the second order convergence, and of proving convergence for general $p$-forms remain open. \end{abstract} \maketitle \section{Introduction} \label{s:intro} The main objective of this paper is to establish convergence of discrete exterior calculus approximations on unstructured triangulations for the scalar Poisson problem in general dimensions. There are several approaches to extending the exterior calculus to discrete spaces; what we mean by {\em discrete exterior calculus} (DEC) in this paper is the approach put forward by Anil Hirani and his collaborators, cf. \parencite{Hirani03,Desbrun2005}. Since its conception, DEC has found many applications and has been extended in various directions, including general relativity \parencite{Frau06}, electrodynamics \parencite{Stern2007}, linear elasticity \parencite{Yavari08}, computational modeling \parencite{Desbrun2008}, port-Hamiltonian systems \parencite{SSS2012}, digital geometry processing \parencite{Crane2013}, Darcy flow \parencite{HNC15}, and the Navier-Stokes equations \parencite{MHS16}. However, a rigorous convergence analysis of DEC has always been lacking; as far as we are aware, the only convergence proof of DEC that has appeared so far is for the scalar Poisson problem in two dimensions, and it is based on reinterpreting the discretization as a finite element method, cf. \parencite{HPW06}. In the current paper, we develop a general independent framework for analyzing issues such as convergence of DEC without relying on theories established for other discretization methods, and demonstrate its usefulness by proving that DEC solutions to the scalar Poisson problem in arbitrary dimensions converge to the exact solution as the mesh size tends to $0$. Developing an original framework to study the convergence of DEC allows us to explore to what extent the theory is compatible in the sense of \parencite{arnold2006}. At the turn of the millennium, compatibility emerged as a guiding paradigm for stability.
In this spirit, we here reproduce at the discrete level a standard variational technique in the analysis of PDEs based on the use of the Poincar\'e inequality. In what follows, we would like to describe our results in more detail. Let $K_h$ be a family of $n$-dimensional completely well-centered simplicial complexes triangulating an $n$-dimensional polytope $\mathcal{P}$ in $\mathbb{R}^n$, with the parameter $h>0$ representing the diameter of a typical simplex in the complex. Let $\Delta_h=\delta_h\dif_{\,h}+\dif_{\,h}\delta_h$ be their associated discrete Hodge-Laplace operators, where the discrete exterior derivatives $\dif_{\,h}$ and the codifferentials $\delta_h=(-1)^{n(k-1)+1}\star_h\dif_{\,h}\star_h$ are defined as in \parencite{Desbrun2005,Hirani03} up to a sign. Denote by $C^k(K_h)$ the space of $k$-cochains over $K_h$. In this paper, we study the convergence of solutions $\omega_h\in C^k(K_h)$ solving discrete Hodge-Laplace Dirichlet problems of the form \begin{equation}\label{eq: Discrete Poisson Dirichlet} \begin{cases} \Delta_h \omega_h = R_h f& \mbox{in } K_h\\ \omega_h=R_h g& \mbox{on } \partial K_h, \end{cases} \end{equation} where $f$ and $g$ are differential forms, and $R_h$ is the deRham operator, cf. \parencite{Whitney57}. It is shown in \parencite{Xu04,Xu04b} that under {\em very} special symmetry assumptions on $K_h$, the consistency estimate \begin{equation*} \|\Delta_h R_h \omega - R_h\Delta \omega\|_{\infty}= O(h^2) \end{equation*} holds for sufficiently smooth functions $\omega$. However, as the numerical experiments from \parencite{Xu04,Nong04} revealed, this does \textit{not} hold for general triangulations. In Section~\ref{s:consistency}, we show that a common shape regularity assumption on $K_h$ can only guarantee \begin{equation}\label{eq: consistency estimate} \|\Delta_h R_h \omega - R_h\Delta \omega\| = O(1)+O(h) \end{equation} for sufficiently smooth functions $\omega$, in both the maximum norm $\|\cdot\|_\infty$ and the discrete $L^2$-norm $\|\cdot\|_h$. Although the consistency estimate \eqref{eq: consistency estimate} is not adequate for the Lax-Richtmyer framework, by making use of a special structure of the error term itself, we are able to establish convergence for $0$-forms in general dimensions. Namely, we prove in Section \ref{s:stability} that the approximations $\omega_h\in C^0(K_h)$ obtained from solving (\ref{eq: Discrete Poisson Dirichlet}) still satisfy \begin{equation}\label{intro convergence} \|\omega_h-R_h \omega\|_h = O(h) , \qquad\text{and}\qquad \|\dif_{\,h}(\omega_h-R_h \omega)\|_h=O(h) , \end{equation} where $\omega\in C^2(\mathcal{P})$ is the solution of the corresponding continuous Poisson problem with source term $f$ and boundary condition $g$. We remark that a convergence proof in 2 dimensions can be obtained by reinterpreting the discrete problem as arising from affine finite elements, cf. \parencite{HPW06,Wardetzky08}, which shows that the first quantity in \eqref{intro convergence} should indeed be $O(h^2)$ if the exact solution is sufficiently regular. This is consistent with the numerical experiments from \parencite{Nong04}, and moreover, in Section \ref{s:numerics}, we report some numerical experiments in 3 dimensions, which suggest that one has $O(h^2)$ convergence in general dimensions. Therefore, our $O(h)$ convergence result for $0$-forms should be considered only as a first step towards a complete theoretical understanding of the convergence behavior of discrete exterior calculus.
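In practice, the discrete problem \eqref{eq: Discrete Poisson Dirichlet} for $0$-forms amounts to a sparse linear system, and this is how the numerical experiments of Section \ref{s:numerics} are set up. The following minimal sketch is given for illustration only and does not reflect any particular implementation: it assembles the operator from a signed vertex--edge incidence matrix and diagonal Hodge stars, up to the sign conventions fixed below, and the arrays \texttt{edges}, \texttt{edge\_len}, \texttt{dual\_vertex\_vol}, \texttt{dual\_edge\_vol} and \texttt{boundary} are hypothetical mesh data.
\begin{verbatim}
# Illustrative sketch: assemble and solve the DEC Poisson problem for 0-forms.
# Hypothetical inputs:
#   edges            (E, 2) int array, row (i, j) is the oriented edge [v_i, v_j]
#   edge_len         (E,)  primal edge lengths |e|
#   dual_vertex_vol  (V,)  dual cell volumes |*v|   (|v| = 1 by convention)
#   dual_edge_vol    (E,)  dual edge volumes |*e|
#   f, g             de Rham images R_h f (all vertices), R_h g (boundary vertices)
#   boundary         (V,)  boolean mask of boundary vertices
# Signs and boundary handling are simplified.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_poisson_0form(edges, edge_len, dual_vertex_vol, dual_edge_vol,
                        f, g, boundary):
    V, E = f.size, edges.shape[0]
    rows = np.repeat(np.arange(E), 2)
    cols = edges.ravel()
    vals = np.tile([-1.0, 1.0], E)             # <d_h u, [v_i, v_j]> = u(v_j) - u(v_i)
    d0 = sp.csr_matrix((vals, (rows, cols)), shape=(E, V))
    star1 = sp.diags(dual_edge_vol / edge_len)      # |*e| / |e|
    inv_star0 = sp.diags(1.0 / dual_vertex_vol)     # (|*v| / |v|)^{-1}
    L = (inv_star0 @ d0.T @ star1 @ d0).tocsr()     # delta_h d_h, up to sign
    ii = np.flatnonzero(~boundary)                  # interior vertices
    bb = np.flatnonzero(boundary)                   # boundary vertices
    u = np.zeros(V)
    u[bb] = g                                       # Dirichlet data
    rhs = f[ii] - L[ii][:, bb] @ g                  # move known values to the right
    u[ii] = spla.spsolve(L[ii][:, ii].tocsc(), rhs)
    return u
\end{verbatim}
The interior/boundary splitting mimics the Dirichlet condition in \eqref{eq: Discrete Poisson Dirichlet}, and the diagonal matrices are exactly the volume ratios defining the discrete Hodge star in Subsection \ref{subs: Discrete Operators}.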
Apart from explaining the $O(h^2)$ convergence for $0$-forms, proving convergence for general $p$-forms remains an important open problem. This paper is organized as follows. In the next section, we review the basic notions of discrete exterior calculus, not only to fix notations, but also to discuss issues such as the boundary of a dual cell in detail to clarify some inconsistencies existing in the current literature. Then in Section~\ref{s:consistency}, we treat the consistency question, and in Section~\ref{s:stability}, we establish stability of DEC for the scalar Poisson problem. Our main result \eqref{intro convergence} is proved in Section~\ref{s:stability}. We end the paper by reporting on some numerical experiments in Section \ref{s:numerics}. \section{Discrete environment} We review the basic definitions involved in discrete exterior calculus that are needed for the purposes of this work. Readers unacquainted with these discrete structures are encouraged to go over the material covered in the important works cited below. The shape regularity condition that we impose on our triangulations is discussed in Subsection \ref{subs: Discrete Structures}, and a new definition for the boundary of a dual cell is introduced in Subsection \ref{subs: Discrete Operators}. \subsection{Simplicial complexes and regular triangulations} \label{subs: Discrete Structures} The basic geometric objects upon which DEC is designed are borrowed from algebraic topology. While the use of cube complexes is discussed in \parencite{BH12}, we here consider simplices to be the main building blocks of the theory. The following definitions are given in \parencite{Desbrun2005} and can also be found in any introductory textbook on simplicial homology. By a \textit{$k$-simplex} in $\mathbb{R}^n$, we mean the $k$-dimensional convex span \begin{equation*} \sigma=\lbrack v_0,...,v_k\rbrack=\left\{\sum_{i=0}^k a_i v_i\,\middle\vert\, a_i\geq0,\ \sum_{i=0}^k a_i =1\right\}, \end{equation*} where $v_0,...,v_k\in\mathbb{R}^n$ are affinely independent. We denote its circumcenter by $c(\sigma)$. Any $\ell$-simplex $\tau$ defined from a nonempty subset of these vertices is said to be a face of $\sigma$, and we denote this partial ordering by $\sigma\succ\tau$. We write $\vert\sigma\vert$ for the $k$-dimensional volume of a $k$-simplex $\sigma$ and adopt the convention that the volume of a vertex is $1$. The plane of $\tau$ is defined as the smallest affine subspace of $\mathbb{R}^n$ containing $\tau$ and is denoted by $P(\tau)$. A simplicial \textit{$n$-complex} $K$ in $\mathbb{R}^n$ is a collection of simplices in $\mathbb{R}^n$, of dimension at most $n$, such that: \begin{itemize} \item Every face of a simplex in $K$ is in $K$; \item The intersection of any two simplices of $K$ is either empty or a face of both. \end{itemize} It is well-centered if the circumcenter of a simplex of any dimension is contained in the interior of that simplex. If the set of simplices of a complex $L$ is a subset of the simplices of another complex $K$, then $L$ is called a \textit{subcomplex} of $K$. We denote by $\Delta_k(L)$ the set of all elementary $k$-simplices of $L$. The \textit{star} of a $k$-simplex $\sigma \in K$ is defined as the set $\text{St}(\sigma)=\{\rho\in K \vert \sigma \prec \rho\}$. $\text{St}(\sigma)$ is not closed under taking faces in general. It is thus useful to define the closed star $\overline{\text{St}}(\sigma)$ to be the smallest subcomplex of $K$ containing $\text{St}(\sigma)$.
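These combinatorial notions translate directly into elementary data structures. As a purely illustrative aside, not taken from any implementation referenced here, simplices may be stored as sorted tuples of vertex indices, from which faces, closures and stars are computed as follows.
\begin{verbatim}
# Illustrative sketch: simplices as sorted vertex tuples; faces, closure, star.
from itertools import combinations

def faces(simplex):
    """All nonempty proper faces of a simplex given as a tuple of vertex ids."""
    k = len(simplex) - 1
    return [f for d in range(k) for f in combinations(simplex, d + 1)]

def closure(top_simplices):
    """Smallest simplicial complex containing the given top-dimensional simplices."""
    K = set()
    for s in top_simplices:
        s = tuple(sorted(s))
        K.add(s)
        K.update(faces(s))
    return K

def star(K, tau):
    """St(tau): the simplices of K having tau as a face."""
    return {s for s in K if set(tau) <= set(s)}

# Example: two triangles sharing the edge (1, 2).
K = closure([(0, 1, 2), (1, 2, 3)])
assert (1, 2) in K and {(0, 1, 2), (1, 2, 3)} <= star(K, (1, 2))
\end{verbatim}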
We denote the free abelian group generated by a basis consisting of the oriented k-simplices by $C_k(K)$. This is the space of finite formal sums of $k$-simplices with coefficients in $\mathbb{Z}$. Elements of $C_k(K)$ are called \textit{$k$-chains}. More generally, we will write $\oplus_kC_k(K)$ for the space of finite formal sums of elements in $\cup_k C_k(K)$ with coefficients in $\mathbb{F}_2$. The \textit{boundary} of a $k$-chain is obtained using the linear operator $\partial:\oplus_k C_k(K)\longrightarrow \oplus_k C_{k-1}(K)$ defined on a simplex $[v_0,...,v_k]\in K$ by \begin{equation*} \partial\lbrack v_0,...,v_k\rbrack = \sum_{i=0}^k (-1)^i \lbrack v_0,...,\hat{v_i},...,v_k\rbrack. \end{equation*} Any simplicial $n$-complex $K$ in $\mathbb{R}^n$ such that $\cup_{\sigma\in\Delta_n(K)}\sigma=\mathcal{P}$ is called a \textit{triangulation} of $\mathcal{P}$. We consider in this work families of well-centered triangulations $K_h$ in which each complex is indexed by the size $h>0$ of its longest edge. We write $\gamma_{\tau}$ for the radius of the largest $\dim(\tau)$-ball contained in $\tau$. The following condition imposed on $K_h$ is common in the finite element literature, see, e.g., \parencite{Ciarlet02}. We suppose there exists a shape regularity constant $C_{reg}>0$, independent of $h$, such that for all simplex $\sigma$ in $K_h$, \begin{equation*} \mathrm{diam}(\sigma)\leq C_{\text{reg}}\,\gamma_{\sigma}. \end{equation*} A family of triangulations satisfying this condition is said to be \textit{regular}. Important consequences of this regularity assumption will later follow from the next lemma. \begin{lemma}\label{lem: bounded number of simplices} Let $\{K_h\}$ be a regular family of simplicial complexes triangulating an $n$-dimensional polytope $\mathcal{P}$ in $\mathbb{R}^n$. Then there exists a positive constant $C_{\#}$, independent of $h$, such that \begin{equation*} \#\left\{\sigma\in\Delta_n(K_h) : \sigma\in \overline{\mathrm{St}}(\tau)\right\}\leq C_{\#},\,\,\,\,\,\forall\,\tau\in\Delta_k(K_h). \end{equation*} \end{lemma} \begin{proof} If $\tau\prec\sigma$, then $\sigma$ contains all the vertices of $\tau$. It is thus sufficient to consider the case where $\tau$ is a vertex. Let $v$ be a vertex in $K_h$, and suppose $\sigma\in\overline{\text{St}}(v)$. Let $B_\sigma$ be the $n$-dimensional ball of radius $\gamma_\sigma$ contained in $\sigma$. The set $A=\cup_{\eta\in\Delta_{k-1}(\sigma)}\eta\cap B_{\sigma}\cap\text{St}(\sigma)$ contains exactly $n$ points in $\mathbb{R}^n$. The argument \begin{equation*} \theta=\min_{A}\arctan \left(\frac{\gamma_{\sigma}}{\vert x - v\vert}\right) \end{equation*} is thus well-defined, and it satisfies $\theta\geq \arctan\left(1/C_{\text{reg}}\right)$ by the regularity assumption. Consider an $n$-dimensional cone of height $\gamma_{\sigma}$, apex $v$ and aperture $2\theta$ contained in $\sigma$. Its generatrix determine a spherical cap $V_\sigma$ on the hypersphere $S_{\sigma}$ of radius $h$ centered at $v$. The intersection of spherical caps determined by cones contained in distinct simplices is countable. Therefore, there can be at most \begin{equation*} \frac{\mathrm{vol} (S_{\sigma})}{V(\arctan\left(1/C_{\text{reg}}\right))} = 2\pi\left(\int_0^{\arctan(1/C_{\text{reg}})}\sin^n(t)dt\right)^{-1} \end{equation*} of them, and only so many distinct $n$-simplices containing $v$ as a result. 
\end{proof} \subsection{Combinatorial orientation and duality}\label{sec: Combinatorial Orientation and Duality} A triangulation stands as a geometric discretization of a polytopic domain, but it ought to be equipped with a meaningful notion of orientation if a compatible discrete calculus is to be defined on its simplices. We outline below the exposition found in \parencite{Hirani03}. We felt the need to review the definitions of Subsection \ref{subs: Discrete Structures} partly to be able to stress the fact that the expression of a $k$-simplex $\sigma$ comes naturally with an ordering of its vertices. Defining two orderings to be equivalent if they differ by an even permutation yields two equivalence classes called \textit{orientations}. The vertices themselves are dimensionless. As such, they are given no orientation. By interpreting a permutation $\rho$ of the vertices in $\sigma$ as an ordering for the basis vectors $v_{\rho(1)}-v_{\rho(0)},v_{\rho(2)}-v_{\rho(1)},...,v_{\rho(k)}-v_{\rho(k-1)}$, we see that these equivalence classes coincide with the ones obtained when the affine space $P(\sigma)$ is endowed with an orientation in the usual sense. A simplex is thus oriented by its plane and vice versa. The planes of the $(k-1)$-faces of $\sigma$ inherits an orientation as subspaces of $P(\sigma)$. Correspondingly, we establish that the \textit{induced orientation} by $[v_0,...,v_k]$ on the $(k-1)$-face $v_0,...,\hat{v_i},...,v_k$ is the same as the orientation of $[v_0,...,\hat{v_i},...,v_k]$ if $i$ is even, while the opposite orientation is assigned otherwise. Two $k$-simplices $\sigma$ and $\tau$ in $\mathbb{R}^n$ are hence comparable if $P(\sigma)=P(\tau)$, or if they share a ($k-1$)-face. In the first case, the \textit{relative orientation} $\text{sign}(\sigma,\tau)=\pm 1$ of the two simplices is $+1$ when their bases yield the same orientation of the plane and $-1$ otherwise. The induced orientation on the common face is similarly used to establish relative orientation in the second case. The mechanics of orientation are conveniently captured by the structure of exterior algebra. For example, $v_1-v_0\wedge ... \wedge v_k-v_{k-1}$ could be used to represent the orientation of $\lbrack v_0,...,v_k\rbrack$. From now on, we will always assume that $K_h$ is well-centered and that all its $n$-simplices are positively oriented with respect to each other. The orientations of lower dimensional simplices are chosen independently. We are now ready to introduce the final important objects pertaining to the discrete domain before we move on to the functional aspects of DEC. Denote by $D_h$ the smallest simplicial $n$-complex in $\mathbb{R}^n$ containing every simplex of the form $[c(v),...,c(\tau),...,c(\sigma)]$, where the simplices $v\prec ...\prec\tau\prec ...\prec \sigma$ belong to $K_h$. Let $\ast: \oplus_kC_k(K_h)\longrightarrow \oplus_kC_{n-k}(D_h)$ be the homomorphism acting on $\tau\in K_h$ by \begin{equation*}\label{def: circumcentric duality operator} \ast\tau = \sum_{\tau\prec...\prec\sigma}\pm_{\sigma,...,\tau}[c(\tau),...,c(\sigma)], \end{equation*} where $\pm_{\sigma,...,\tau}=\mathrm{sign}\left(\lbrack v,...,c\left(\tau\right)], \tau\right) \mathrm{sign}\left(\lbrack v,...,c\left(\sigma\right)\rbrack, \sigma\right)$. We define the \textit{oriented circumcentric dual} of $K_h$ by $\ast K_h=\{\ast\tau \,\vert\, \tau\in K_h\}$ (see Figure \ref{fig: example of dual}). 
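From a computational point of view, the circumcenters entering this construction are obtained from a small linear solve in the plane of each simplex. A minimal illustrative sketch, with hypothetical vertex coordinates and not tied to any particular code, is the following.
\begin{verbatim}
# Illustrative sketch: circumcenter c(sigma) of a k-simplex in R^n, i.e. the
# point of its plane P(sigma) equidistant from all of its vertices.
import numpy as np

def circumcenter(vertices):
    """vertices: (k+1, n) array of affinely independent points in R^n."""
    v0 = vertices[0]
    a = vertices[1:] - v0                 # rows a_i = v_i - v_0 span P(sigma) - v_0
    G = a @ a.T                           # Gram matrix, G[i, j] = a_i . a_j
    lam = np.linalg.solve(G, 0.5 * np.diag(G))   # from |c - v_i|^2 = |c - v_0|^2
    return v0 + lam @ a

# Example: the circumcenter of a right triangle is the midpoint of its hypotenuse.
tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
assert np.allclose(circumcenter(tri), [0.5, 0.5])
\end{verbatim}
Chaining such circumcenters over nested faces $\tau\prec...\prec\sigma$ produces the simplices $[c(\tau),...,c(\sigma)]$ whose signed sums form the dual cells defined above.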
Note that the space $C_{n-k}(\ast K_h)$ of finite formal sums of ($n-k$)-dimensional elements of $\ast K_h$ with integer coefficients is a subgroup of $C_{n-k}(D_h)$ onto which the restriction of $\ast$ to $C_k(K_h)$ is an isomorphism. In fact, since a simplex whose vertices are $c(\tau),...,c(\sigma)$ is oriented in the above formula to satisfy $ P(\tau)\times P(c(\tau),...,c(\sigma))\sim \sigma$, we naturally extend the definition of $\ast$ to $\oplus_k C_k(\ast K_h)$ as well by defining $\ast\ast\tau=(-1)^{k(n-k)}\tau$ for every simplex $\tau\in\Delta_k(K_h)$, $k=1,...,n$. \begin{figure} \caption{Boundary and interior faces are perpendicular to their duals. On the left is shown a $1$-dimensional complex in blue and its dual in red. On the right, each shade of grey indicates a $2$-dimensional simplex in $D_h$. The dual of the vertex $\sigma$ is a complex of these simplices, and its boundary is colored in red.} \label{fig: example of dual} \end{figure} \subsection{Discrete operators}\label{subs: Discrete Operators} A strength of discrete exterior calculus lies in the following objects: the theory is built on a natural and intuitive notion of discrete differential forms that admits a straightforward implementation. We call \textit{$k$-cochains} the elements of the space $C^k(K_h) = \text{Hom}(C_k(K_h), \mathbb{R})$, and define $\oplus_kC^k(K_h)$, $C^k(\ast K_h)$ and $\oplus_kC^k(\ast K_h)$ as expected. The \textit{discrete Hodge star} is an isomorphism $\star_h:\oplus_k C^{k}(K_h)\longrightarrow \oplus_kC^{n-k}(\ast K_h)$ satisfying \begin{equation*}\label{def: Hodge star} \langle \star_h\omega_h,\ast\sigma\rangle = \frac{\vert\ast\sigma\vert}{\vert\sigma\vert}\langle \omega_h,\sigma\rangle,\,\,\,\,\,\forall\,\sigma\in K_h. \end{equation*} In complete formal analogy, we further impose that \begin{equation*} \langle \star_h\star_h\omega_h,\ast\ast\sigma\rangle = \left(\frac{\vert\ast\sigma\vert}{\vert\sigma\vert}\right)^{-1}\langle \star_h\omega_h,\ast\sigma\rangle,\,\,\,\,\,\forall\,\sigma\in K_h. \end{equation*} In other words, $\langle \star_h\star_h\omega_h,\sigma\rangle=(-1)^{k(n-k)}\langle \star_h\star_h\omega_h,\ast\ast\sigma\rangle=(-1)^{k(n-k)}\langle \omega_h,\sigma\rangle$ for all $k$-cochains $\omega_h$, and as a consequence $\star_h\star_h=(-1)^{k(n-k)}$ on $C^k(K_h)$. The spaces $C^k(K_h)$ are finite dimensional Hilbert spaces when equipped with the discrete inner product \begin{equation*} \left(\alpha_h,\beta_h\right)_h=\sum_{\tau\in\Delta_k(K_h)}\langle\alpha_h,\tau\rangle\langle\star_h\beta_h,\ast\tau\rangle,\,\,\,\,\,\alpha_h,\beta_h\in C^k(K_h). \end{equation*} A compatible definition of the discrete $L^2$-norm on the dual triangulation is also chosen by enforcing the Hodge star to be an isometry. The norm in the following definition is obtained from the inner product \begin{equation*} \left(\star\alpha_h,\star\beta_h\right)_h=\sum_{\tau\in\Delta_k(K_h)}\langle\star\alpha_h,\ast\tau\rangle\langle\star_h\star_h\beta_h,\ast\ast\tau\rangle,\,\,\,\,\,\star\alpha_h,\star\beta_h\in C^k(\ast K_h), \end{equation*} which is immediately seen to satisfy $\left(\alpha_h,\beta_h\right)_h=\left(\star\alpha_h,\star\beta_h\right)_h$. \begin{definition} The discrete $L^2$-norm on $C^k(\ast K_h)$ is defined by \begin{equation*} \|\star\omega_h\|^2_h = \sum_{\tau\in\Delta_k(K_h)}\left(\frac{\vert\ast\tau\vert}{\vert\tau\vert}\right)^{-1}\langle\star_h\omega_h,\ast\tau\rangle^2,\,\,\,\,\, \omega_h\in C^k(\ast K_h).
\end{equation*} \end{definition} The last step towards a full discretization of the Hodge-Laplace operator is to discretize the exterior derivative. While requiring that \begin{equation*} \langle \dif\omega_h,\tau\rangle =\langle \omega_h,\partial\tau\rangle,\,\,\,\,\,\forall\,\tau\in K_h, \end{equation*} readily defines a \textit{discrete exterior derivative} $\dif_{\,h}:\oplus_kC^k(K_h)\longrightarrow \oplus_kC^{k+1}(K_h)$ that satisfies Stokes' theorem, we need to make precise what is meant by the boundary of a dual element if we hope to define the exterior derivative on $C^k(\ast K_h)$ similarly. We examine the case of a $k$-simplex $\tau=\pm_{\tau}[v_0,...,v_k]$ in $K_h$. It is sufficient to restrict our attention to an $n$-dimensional simplex $\sigma$ to which $\tau$ is a face. We assume that the orientation of $\sigma$ is consistent with $ \left(v_1-v_0\right)\wedge\left(v_2-v_1\right)\wedge...\wedge\left(v_n-v_{n-1}\right), $ and thereon begin our study. The orientation of $\tau$ is represented by $ \pm_{\tau}\left(v_1-v_0\right)\wedge...\wedge\left(v_k-v_{k-1}\right) $. We have seen that enforcing the orientation of its circumcentric dual $ \ast\tau=\pm_{\ast\tau}[c(\tau),c([v_0,...,v_{k+1}]),...,c(\sigma)] $ to satisfy the definition given in Section \ref{sec: Combinatorial Orientation and Duality} is equivalent to requiring that $ \pm_{\tau}\pm_{\ast \tau}\left(v_1-v_0\right)\wedge...\wedge\left(v_n-v_{n-1}\right) \sim \sigma $. Consequently, $\pm_{\tau}=\pm_{\ast \tau}$ by hypothesis (i.e. the signs agree), and we obtain $ \ast\tau\sim \pm_{\tau}\left(v_{k+1}-v_k\right)\wedge...\wedge\left(v_n-v_{n-1}\right) $. Since similar equivalences hold for any $(k+1)$-face $\eta=\pm_{\eta}[v_0,...,v_{k+1}]$ of $\sigma$, the orientation induced by $\eta$ on the face $v_0,...,v_k$ is defined to be the same as the orientation of \begin{equation*} (-1)^{k+1}\pm_{\eta}\pm_{\tau}\tau \sim(-1)^{k+1}\pm_{\eta}\left(v_1-v_0\right)\wedge...\wedge\left(v_{k}-v_{k-1}\right). \end{equation*} Therefore, choosing $\pm_{\text{old}}=(-1)^{k+1}\pm_{\eta}\pm_{\tau}$ makes the induced orientation of $\pm_{\text{old}}\eta$ on that face consistent with the orientation of $\tau$, and yields on the one hand that the orientation of $c(\eta)$, ..., $c(\sigma)$ as a piece of $\ast\pm_{\text{old}}\eta$ is given by \begin{equation*} (-1)^{k+1}\pm_{\tau}(\pm_{\eta})^2\left(v_{k+2}-v_{k+1}\right)\wedge...\wedge\left(v_n-v_{n-1}\right). \end{equation*} On the other hand, $P\left(\pm_{\text{new}}\left(\ast\pm_{\text{old}}\eta\right)\right)$ is a subspace of codimension $1$ in $P(\ast\tau)$. As such, it inherits a boundary orientation in the usual sense through the assignement \begin{equation*} P(\ast\tau)\sim\pm_{\text{new}}(-1)^{k+1}\pm_{\tau}\overrightarrow{\nu}\wedge\left(v_{k+2}-v_{k+1}\right)\wedge...\wedge\left(v_n-v_{n-1}\right), \end{equation*} where $\overrightarrow{\nu}$ is any vector pointing away from $P(\ast\eta)$. Since by hypothesis $\overrightarrow{\nu}=(v_{k+1}-v_k)$ is a valid choice of outward vector, we conclude that the above compatible relation holds if and only if $\pm_{\text{new}}=(-1)^{k+1}$. The claim that the following new definition for the boundary of a dual element is suited for integration by parts rests on the later observation. 
\begin{figure} \caption{This figure illustrates the exposition of Subsection \ref{subs: Discrete Operators}.} \end{figure} \begin{definition}\label{def: boundary of dual} The linear operator $\partial: \oplus_kC_{n-k}\left(\ast K_h,\mathbb{Z}\right)\longrightarrow \oplus_k C_{n-k-1}\left(\ast K_h,\mathbb{Z}\right)$, which we call the \textit{dual boundary operator}, is defined by its action on the dual of $\tau=[v_0,...,v_k]\in K_h$, namely \begin{equation*} \partial\ast\tau = (-1)^{k+1} \sum_{\eta\succ\tau}\ast\eta, \end{equation*} where the sum runs over the $(k+1)$-simplices $\eta$ in $K_h$, each oriented so that the induced orientation of $\eta$ on the face $v_0,...,v_k$ is consistent with the orientation of $\tau$. \end{definition} \begin{example} Let $\sigma=\lbrack v_0,v_1,v_2\rbrack$ be an oriented $2$-simplex. The $2$-dimensional dual of $\tau=v_0$ is given by \begin{equation*} \ast \tau = \pm_{v_0,\lbrack v_0,v_1\rbrack,\sigma}\lbrack v_0,c\left(\lbrack v_0,v_1\rbrack\right), c(\sigma) \rbrack +\pm_{v_0,\lbrack v_0,v_2\rbrack,\sigma}\lbrack v_0,c\left(\lbrack v_0,v_2\rbrack\right), c(\sigma) \rbrack, \end{equation*} where by definition \begin{align*} \pm_{v_0,\lbrack v_0,v_1\rbrack,\sigma} &= \mathrm{sign}\left(\lbrack v_0,c\left(\lbrack v_0,v_1\rbrack\right), c(\sigma) \rbrack, \sigma\right) = 1;\\ \pm_{v_0,\lbrack v_0,v_2\rbrack,\sigma} &= \mathrm{sign}\left(\lbrack v_0,c\left(\lbrack v_0,v_2\rbrack\right), c(\sigma) \rbrack, \sigma\right) = -1. \end{align*} The orientations of the boundary edges of $\ast\tau$ with endpoints $c([v_0,v_2])$, $c(\sigma)$ and $c([v_0,v_1])$, $c(\sigma)$ are compatible with integration by parts if they are assigned an orientation equivalent to the one given in the above expression. Yet, from the definition of the boundary of a dual cell as found in the literature, \begin{align*} \partial\ast\tau &= \ast \left((-1)[v_0,v_1]\right)+\ast\left((-1)[v_0,v_2]\right)\\ & = \ast [v_1,v_0] + \ast [v_2,v_0]\\ &= \pm_{[v_1,v_0],\sigma}[c(v_0,v_1),c(\sigma)] +\pm_{[v_2,v_0],\sigma}[c(v_0,v_2),c(\sigma)], \end{align*} where \begin{align*} \pm_{[v_1,v_0],\sigma} &= \mathrm{sign}\left([v_0,c([v_1,v_0])],[v_1,v_0]\right) \cdot\mathrm{sign}\left(\lbrack v_0,c\left(\lbrack v_0,v_1\rbrack\right),c(\sigma)\rbrack,\sigma\right)= (-1)(+1)=-1; \\ \pm_{[v_2,v_0],\sigma} &= \mathrm{sign}\left([v_0,c([v_2,v_0])],[v_2,v_0]\right) \cdot\mathrm{sign}\left(\lbrack v_0,c\left(\lbrack v_0,v_2\rbrack\right),c(\sigma)\rbrack,\sigma\right)= (-1)(-1)=+1. \end{align*} We illustrate this example in Figure \ref{fig: example 2d dual}. \begin{figure} \caption{In red is shown the orientation of the boundary of $\ast\tau$ as obtained from Definition \ref{def: boundary of dual}.} \label{fig: example 2d dual} \end{figure} \end{example} Using Definition \ref{def: boundary of dual}, we can now extend the exterior derivative to $\oplus_kC^{n-k}(\ast K_h)$ by defining it as the homomorphism satisfying \begin{equation*} \langle \dif_{\,h}\star_h\omega_h,\ast\tau\rangle =\langle \star_h\omega_h,\partial\ast\tau\rangle,\,\,\,\,\,\forall\,\tau\in K_h. \end{equation*} \section{Consistency} \label{s:consistency} In this section, we develop a framework to address consistency questions for operators such as the Hodge-Laplace operator. We begin by proving a preliminary lemma which reduces the question of consistency of the Hodge-Laplace operator to that of the Hodge star. We then proceed with a demonstration of the latter, and derive bounds on the norms of various discrete operators.
We denote by $C^r\Lambda^k(\mathcal{O})$ the space of differential $k$-forms $\omega$ on an open set $\mathcal{O}\subset\mathbb{R}^n$ such that the function \begin{equation*} x\mapsto \omega_x(X_1(x),...,X_k(x)) \end{equation*} is $r$ times continuously differentiable for every $k$-tuples $X_1,...,X_k$ of smooth vector fields on $\mathcal{O}$. Given an $n$-polytope $\mathcal{Q}$, say an $n$-dimensional subcomplex of $K_h$, we write $C^r\Lambda^k(\mathcal{Q})$ for the set of differential $k$-forms obtained by restricting to $\mathcal{Q}$ a differential form in $C^r\Lambda^k(\mathcal{O})$, where $\mathcal{O}$ is any open set in $\mathbb{R}^n$ containing $\mathcal{Q}$. The `exact' discrete representatives for solutions of the continuous problems corresponding to (\ref{eq: Discrete Poisson Dirichlet}) are constructed using the {\em deRham map} $R_h$ defined on $C\Lambda^k(\mathcal{P})$ by \begin{equation*} \langle R_h\omega, s \rangle = \int_{s}\omega, \end{equation*} where $s$ is a $k$-dimensional simplex in $K_h$ or the dual of a $(n-k)$-dimensional one. The next lemma depends on the extremely convenient property that $\dif_{\,h}R_h= R_h\dif\,$. \begin{lemma}\label{lem: reducing consistency} Given $\omega\in C^2\Lambda^k(\mathcal{P})$, we have \begin{multline*} \Delta_h R_h\omega - R_h \Delta \omega = \star_h \dif_{\,h}\left(\star_h R_h-R_h \star \right)\dif\omega +\left(\star_h R_h - R_h\star\right)d\star d\omega \\ +\dif_{\,h}\left(\star_hR_h-R_h\star\right)d\star \omega +\dif_{\,h} \star_h \dif_{\,h} \left(\star_h R_h -R_h \star\right)\omega. \end{multline*} \end{lemma} \begin{proof} Consider the explicit expression \begin{multline*} \Delta_h R_h\omega - R_h \Delta \omega = \left(\delta_h \dif_{\,h}+\dif_{\,h} \delta_h\right)R_h\omega -R_h\left(\delta \dif+\dif \delta\right)\omega \\ = \left(\star_h \dif_{\,h} \star_h \dif_{\,h} R_h - R_h \star \dif \star \dif\right)\omega+ \left(\dif_{\,h} \star_h \dif_{\,h}\star_h R_h - R_h \dif\star \dif\star\right)\omega \end{multline*} of the discrete Hodge-Laplace operator. Substituting $$ \star_h \dif_{\,h} R_h \omega = \star_h R_h \dif \omega= \star_h R_h \dif \omega -R_h \star \dif \omega+R_h \star \dif \omega= \left(\star_h R_h-R_h \star \right)\dif\omega +R_h \star \dif\omega $$ in the first summand, we obtain \begin{align*} \star_h \dif_{\,h} \star_h \dif_{\,h} R_h \omega - R_h \star \dif \star \dif \omega &= \star_h \dif_{\,h}\left(\star_h R_h-R_h \star \right)\dif\omega + \star_h \dif_{\,h} R_h \star d\omega - R_h \star \dif \star \dif\omega \\ &= \star_h \dif_{\,h}\left(\star_h R_h-R_h \star \right)\dif\omega +\left(\star_h R_h - R_h\star\right)\dif\star \dif\omega. \end{align*} The second summand can be rewritten as \begin{align*} \left(\dif_{\,h} \star_h \dif_{\,h}\star_h R_h - R_h \dif\star \dif\star\right)\omega &= \dif_{\,h}\left(\star_h \dif_{\,h}\star_h R_h - R_h \star \dif\star\right)\omega \\ &= \dif_{\,h}\left(\star_h \dif_{\,h}\star_h R_h-\star_h R_h \dif \star +\star_h R_h \dif \star-R_h \star \dif\star\right) \omega \\ &= \dif_{\,h}\left(\star_hR_h-R_h\star\right)\dif\star \omega +\dif_{\,h}\left(\star_h \dif_{\,h} \star_h R_h - \star_h R_h \dif\star\right)\omega. \end{align*} The desired equality then follows from the identity \begin{equation*} \left(\star_h \dif_{\,h} \star_h R_h - \star_h R_h \dif\star\right)\omega = \left(\star_h \dif_{\,h} \star_h R_h - \star_h \dif_{\,h} R_h\star\right)\omega = \star_h \dif_{\,h} \left(\star_h R_h -R_h \star\right)\omega , \end{equation*} which completes the proof. 
\end{proof} Given an $n$-simplex $\sigma$ and a $k$-dimensional face $\tau\prec\sigma$, the subspaces $P(\tau)$ and $P(\ast\tau)$ of $\mathbb{R}^n$ are perpendicular and satisfy $P(\tau)\times P(\ast\tau)=P(\sigma)$. Since the discrete Hodge star maps $k$-cochains on $K_h$ to $(n-k)$-cochains on the dual mesh $\ast K_h$, it captures the geometric gist of the classical Hodge operator $\star$, which acts on wedge products of orthonormal covectors as $\star (\dif\lambda_{\rho(1)}\wedge \dif\lambda_{\rho(2)}\wedge...\wedge \dif\lambda_{\rho(k)}) =\mathrm{sign}(\rho)\,\dif\lambda_{\rho(k+1)}\wedge \dif\lambda_{\rho(k+2)}\wedge...\wedge \dif\lambda_{\rho(n)}$. We hope to exploit this correspondence to estimate the consistency of $\star_h$. By definition, the deRham map directly relates the integral of a differential $k$-form $\omega$ over a $k$-dimensional simplex $\tau$ of $K_h$ with the value of its discrete representative at $\tau$. But one can also integrate any differential $(n-k)$-form, in particular $\star\omega$, over $\ast\tau$. This suggests that simple approximation arguments could be used to compare $\langle R_h\star\omega,\ast\tau\rangle$ with $\langle\star_h\omega_h,\ast\tau\rangle$, since the latter, with $\omega_h=R_h\omega$, is defined as a scaling of the aforesaid integral by the area factor $\vert\ast \tau\vert/\vert \tau\vert$, which is also naturally related to the integrals under consideration. The technique we use is best illustrated in low dimensions. Let $p=(p_1,p_2)$ be a vertex of a triangulation $K_h$ embedded in $\mathbb{R}^2$. For a function $f$ on $\mathbb{R}^2$, we have $\star f=f\dif x\wedge \dif y$. On the one hand, $\langle R_h f, p \rangle$ is simply the evaluation of $f$ at $p$. On the other hand, if $f$ is differentiable, then a Taylor expansion around $p$ yields \begin{align*} \langle R_h\star f,\ast p \rangle &= \iint_{\ast p} f \dif A\\ &= \iint_{\ast p} f(p) + ((x,y)-(p_1,p_2))^TDf(p)+O(h^2)\dif A\\ &= \vert\ast p\vert f(p) + O(h^3). \end{align*} Relying on our convention that $\vert p\vert=1$, we conclude that \begin{equation}\label{eq: consistency prototype estimate} \langle R_h\star f,\ast p \rangle - \langle \star_hR_h f, \ast p \rangle = \langle R_h\star f,\ast p \rangle - \vert\ast p\vert f(p) = O(h^3). \end{equation} This argument is used in the next theorem as a prototype which we generalize in order to derive estimates for differential forms of higher degree in arbitrary dimensions. \begin{theorem}\label{thm:consistency estimate} Let $\sigma$ be an $n$-simplex, and suppose $\tau\prec\sigma$ is $k$-dimensional. Then \begin{equation*} \langle \star_h R_h \omega , \ast\tau \rangle = \langle R_h\star\omega,\ast\tau\rangle+ O\left(h^{n+1}/(\gamma_{\tau})^k\right),\,\,\,\,\,\omega\in C^1\Lambda^k(\sigma). \end{equation*} \end{theorem} \begin{proof} Denote by $\Sigma(k,n)$ the set of strictly increasing maps $\{1,...,k\}\longrightarrow\{1,...,n\}$. Since $P(\tau)\perp P(\ast\tau)$ and $P(\tau)\times P(\ast\tau)=\mathbb{R}^n$, we can choose orthonormal vectors $t_i=x_i-c(\tau)$, $x_i\in \mathbb{R}^n$, such that $\{t_i\}_{i=1,...,k}$ and $\{t_i\}_{i=k+1,...,n}$ are bases for $P(\tau)$ and $P(\ast\tau)$ respectively. Let $\dif\lambda_1,...,\dif\lambda_n$ be a basis for $\text{Alt}^1\mathbb{R}^n$ dual to $\{t_i\}_{i=1,...,n}$. We write $\mathrm{vol}_{\tau}$ for $\dif\lambda_1\wedge...\wedge\dif\lambda_k$ and $\mathrm{vol}_{\ast\tau}$ for $\dif\lambda_{k+1}\wedge...\wedge\dif\lambda_n$.
Following the argument which led to (\ref{eq: consistency prototype estimate}), we approximate the integral $\langle R_h\omega,\tau\rangle$ of a differential $k$-form $\omega=\sum_{\rho\in\Sigma(k,n)}f_{\rho}(\dif\lambda)_\rho$ over $\tau$ using Taylor expansion. This yields \begin{equation*} \int_{\tau}\sum_{\rho\in\Sigma(k,n)}f_{\rho}(\dif\lambda)_\rho =\int_{\tau} f_{1,...,k}\mathrm{vol}_{\tau} =\vert \tau \vert f_{1,...,k}\left(c(\tau)\right)+O(h^{k+1}). \end{equation*} We find similarly that \begin{equation*} \langle R_h\star \omega, \ast\tau \rangle = \int_{\ast \tau}\sum_{\rho\in\sum(k,n)}f_{\rho}\star(\dif\lambda)_{\rho} =\int_{\ast\tau} f_{1,...,k}\mathrm{vol}_{\ast\tau} =\vert\ast\tau\vert f_{1,...,k}\left(c(\tau)\right)+O(h^{n-k+1}). \end{equation*} Combining these two equations, we get \begin{align*} \langle R_h\star \omega, \ast\tau \rangle &=\frac{\vert\ast\tau\vert}{\vert\tau\vert}\langle R_h\omega,\tau\rangle +O\left(h^{n+1}/(\gamma_{\tau})^k\right)+O(h^{n-k+1})\\ &=\langle\star_h R_h\omega,\ast\tau\rangle+O\left(h^{n+1}/(\gamma_{\tau})^k\right) , \end{align*} which completes the proof. \end{proof} \begin{corollary}\label{cor: consistency of hodge star on dual} Integrating on the dual mesh, we obtain \begin{equation*} \langle\star_h R_h (\star\omega),\tau\rangle = \langle R_h\star(\star\omega), \tau\rangle + O(h^{k+1}),\,\,\,\,\, \omega\in C^1\Lambda^k(\mathcal{P}), \end{equation*} when $K_h$ is regular. \end{corollary} \begin{proof} Using Theorem \ref{thm:consistency estimate}, we find that \begin{align*} \langle\star_h R_h (\star\omega),\tau\rangle &= (-1)^{k(n-k)}\langle\star_h R_h (\star\omega),\ast\ast\tau\rangle\\ &= (-1)^{k(n-k)}\left(\frac{\vert\ast\tau\vert}{\vert\tau\vert}\right)^{-1}\langle R_h (\star\omega),\ast\tau\rangle\\ &= (-1)^{k(n-k)}\left(\frac{\vert\ast\tau\vert}{\vert\tau\vert}\right)^{-1}\left(\frac{\vert\ast\tau\vert}{\vert\tau\vert}\langle R_h\omega,\tau\rangle +O(h^{n-k+1})\right)\\ &= \langle R_h \star \left(\star\omega\right),\tau\rangle + O(h^{k+1}). \end{align*} \end{proof} \begin{comment} \begin{theorem}\label{cor: consistency first estimate} Let $\sigma$ be a $n$-simplex, and suppose $\tau\prec\sigma$ is $k$-dimensional. Then \begin{equation*} \langle \star_h R_h \omega , \ast\tau \rangle = \langle\star\omega,\ast\tau\rangle+ O\left(h^{n+1}/(\gamma_{\tau})^k\right),\,\,\,\,\,\omega\in\Lambda^k(\sigma). \end{equation*} \end{theorem} \begin{proof} Denote by $\Sigma(k,n)$ the set of strictly increasing maps $\{1,...,k\}\longrightarrow\{1,...,n\}$. Let $\sigma=[v_0,...,v_n]$ and assume $\tau$ is an orientation of the face $v_0,...,v_k$. Let $\dif\lambda_1,...,\dif\lambda_k$ be a basis for $\text{Alt}^k(\tau)$ dual to $t_i=v_i-v_0$, $i=1,...,n$, i.e. the vectors $t_1,...t_k$ form a basis for $P(\tau)$ and satisfy $\dif\lambda_1\wedge...\wedge\dif\lambda_k(t_1,...,t_n)=1$. Under these hypotheses, \begin{equation*} \vert \tau \vert = \vert\det\left(v_1-v_0,...,v_k-v_0\right)\vert =\vert\det\left(t_1,...,t_k\right)\vert =\frac{1}{k!}\mathrm{vol}_\sigma(t_1,...,t_n). \end{equation*} In the spirit of the argument which led to (\ref{eq: consistency prototype estimate}), we now approximate the integral of a differential $k$-form $\omega=\sum_{\rho\in\Sigma(k,n)}f_{\rho}(\dif\lambda)_\rho$ over $\tau$ by \begin{equation*} \langle\omega,\tau\rangle=\int_{\tau}\sum_{\rho\in\Sigma(k,n)}f_{\rho}(\dif\lambda)_\rho =\frac{1}{\vert \tau\vert k!}\int_{\tau} f_{1,...,k}\mathrm{vol}_{\tau} =\frac{f_{1,...,k}\left(c(\tau)\right)}{k!}+O(h^{k+1}). 
\end{equation*}
Moreover, since $P(\tau)\perp P(\ast\tau)$,
\begin{equation*}
\langle\star \omega, \ast\tau \rangle = \int_{\ast \tau}\sum_{\rho\in\Sigma(k,n)}f_{\rho}\star(\dif\lambda)_{\rho} =\frac{\vert\ast\tau\vert f_{1,...,k}\left(c(\tau)\right)}{\vert \tau\vert k!}+O(h^{n-k+1}).
\end{equation*}
Combining these two equations,
\begin{align*}
\langle\star \omega, \ast\tau \rangle &=\frac{\vert\ast\tau\vert}{\vert\tau\vert}\langle\omega,\tau\rangle +O\left(h^{n+1}/(\gamma_{\tau})^k\right)+O(h^{n-k+1})\\
&=\langle\star R_h\omega,\ast\tau\rangle+O\left(h^{n+1}/(\gamma_{\tau})^k\right).
\end{align*}
\end{proof}
\end{comment}

\begin{corollary}\label{cor: consistency of hodge star}
The estimate
\begin{equation*}
\| \star_h R_h \omega - R_h \star \omega \|_\infty = O(h^{n-k+1}),\,\,\,\,\, \omega\in C^1\Lambda^k(\mathcal{P})
\end{equation*}
holds when $K_h$ is regular.
\end{corollary}

\begin{corollary}\label{cor: consistency of hodge star L2}
The estimate
\begin{equation*}
\| \star_h R_h \omega - R_h \star \omega \|_h= O(h),\,\,\,\,\, \omega\in C^1\Lambda^k(\mathcal{P})
\end{equation*}
holds when $K_h$ is regular.
\end{corollary}
\begin{proof}
Using Theorem \ref{thm:consistency estimate}, we estimate directly
\begin{align*}
\| \star_h R_h \omega - R_h \star \omega \|_h^2 &= \sum_{\tau\in\Delta_k(K_h)}\left(\frac{\vert\ast\tau\vert}{\vert\tau\vert}\right)^{-1}\langle \star_h R_h \omega - R_h \star \omega,\ast\tau\rangle^2\\
&\leq C\sum_{\tau\in\Delta_k(K_h)}\left(\frac{\vert\ast\tau\vert}{\vert\tau\vert}\right)^{-1}\left( h^{n-k+1}\right)^2\\
& \lesssim \sum_{\tau\in\Delta_k(K_h)} h^{n+2}\\
&\lesssim h^2,
\end{align*}
where the last inequality is obtained from the fact that $\#\{\tau:\tau\in\Delta_k(K_h)\}\sim h^{-n}$, which is a consequence of the regularity assumption.
\end{proof}

Finally, the following corollary is of special importance. Its proof is similar to the one used for Corollary \ref{cor: consistency of hodge star L2}.
\begin{corollary} \label{cor: consistency of hodge star L2 on the dual}
The estimate
\begin{equation*}
\| \left(\star_h R_h - R_h \star\right) \left(\star\omega\right) \|_h= O(h),\,\,\,\,\, \omega\in C^1\Lambda^k(\mathcal{P})
\end{equation*}
holds when $K_h$ is regular.
\end{corollary}
\begin{proof}
\begin{align*}
\| \left(\star_h R_h - R_h \star\right) \left(\star\omega\right) \|^2_h &= \sum_{\tau\in\Delta_k(K_h)}\langle\left(\star_h R_h - R_h \star\right) \left(\star\omega\right),\tau\rangle \langle \star_h\left(\star_h R_h - R_h \star\right) \left(\star\omega\right),\ast\tau\rangle\\
&=\sum_{\tau\in\Delta_k(K_h)}\frac{\vert\ast\tau\vert}{\vert\tau\vert}\langle\left(\star_h R_h - R_h \star\right) \left(\star\omega\right),\tau\rangle^2\\
&\lesssim \sum_{\tau\in\Delta_k(K_h)} h^{n-2k}\left(h^{k+1}\right)^2\\
&\lesssim h^2.
\end{align*}
\end{proof}

As claimed, estimates on the norms of the operators defined in Section \ref{subs: Discrete Operators} now allow us to turn the above corollaries into bounds on the consistency expression given in Lemma \ref{lem: reducing consistency}. In particular, since
\begin{equation}\label{eq: Hodge-Laplace on functions}
\Delta_h\omega_h=\delta_h\dif_{\,h}\omega_h,\,\,\,\,\, \omega_h\in C^0(K_h),
\end{equation}
it is sufficient for the Hodge-Laplace problem on $0$-forms to evaluate the operator norm of the discrete exterior derivative and of the discrete Hodge star over $C^k(\ast K_h)$, $k=1,...,n$.
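In matrix form, (\ref{eq: Hodge-Laplace on functions}) reduces to a composition of a signed incidence matrix with two diagonal Hodge stars. The following sketch is purely illustrative and is not the implementation used in Section \ref{s:numerics}; the mesh quantities and function names are hypothetical, and the sign conventions are the ones discussed in that section (for $0$-forms the two sign factors cancel).
\begin{verbatim}
# Minimal sketch (illustrative only): assemble the DEC 0-form Laplacian
# Delta_h = delta_h d_h from a signed incidence matrix and diagonal
# Hodge stars.  Mesh data and names are hypothetical.
import numpy as np
import scipy.sparse as sp

def dec_laplacian_0forms(d0, dual_vertex_vol, primal_edge_len, dual_edge_len):
    # d0: (n_edges, n_vertices) signed incidence matrix, i.e. d_h on 0-cochains
    # dual_vertex_vol: |*v| for every vertex (|v| = 1 by convention)
    # primal_edge_len, dual_edge_len: |e| and |*e| for every edge
    star0 = sp.diags(dual_vertex_vol)                  # |*v| / |v|
    star1 = sp.diags(dual_edge_len / primal_edge_len)  # |*e| / |e|
    star0_inv = sp.diags(1.0 / dual_vertex_vol)
    # delta_h = (-1)^k star_h^{-1} d_h star_h on 1-cochains; combined with
    # the sign of the dual derivative, the resulting operator on 0-cochains is
    L = star0_inv @ d0.T @ star1 @ d0                  # Delta_h
    return L, star0

# For the Dirichlet problem, restrict L to the interior vertices and solve
# L_ii u_i = (R_h f)_i - L_ib g_b.
\end{verbatim}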
More precisely, by (\ref{eq: Hodge-Laplace on functions}) and the proof of Lemma \ref{lem: reducing consistency}, the consistency error of the Hodge-Laplace operator restricted to $0$-forms $\omega$ can be written as
\begin{equation*}
\Delta_h R_h\omega - R_h \Delta \omega = \star_h \dif_{\,h}\left(\star_h R_h-R_h \star \right)\dif\omega +\left(\star_h R_h - R_h\star\right)\dif\star \dif\omega.
\end{equation*}
The following estimates mainly rely on Lemma \ref{lem: bounded number of simplices} and on the regularity assumption imposed on $K_h$. Indeed, for $\omega_h \in C^k(K_h)$, the inequality
\begin{equation}\label{eq: op maximum norm exterior derivative over dual}
\|\dif_{\,h}\star_h\omega_h\|_{\infty}\leq C\|\star_h\omega_h\|_\infty,
\end{equation}
is a direct consequence of the former, whereas
\begin{equation}\label{eq: op maximum norm Hodge star over dual}
\|\star_h\star_h\omega_h\|_{\infty}\sim h^{2k-n}\|\star_h\omega_h\|_{\infty}
\end{equation}
is immediately obtained from the latter. While
\begin{equation}\label{eq: op L2 norm Hodge star over dual}
\|\star_h\|_{\mathcal{L}\left((C^k(\ast K_h),\|\cdot\|_h),(C^{n-k}(K_h),\|\cdot\|_h)\right)}= 1
\end{equation}
is merely a statement about the Hodge star being an isometry, the last required estimate is derived from
\begin{align}
\|\dif_{\,h} \star_h\omega_h\|_h^2 & = \sum_{\eta\in\Delta_{k-1}(K_h)}\left(\frac{\vert\ast\eta\vert}{\vert\eta\vert}\right)^{-1}\langle\star_h\omega_h,\partial\ast\eta\rangle^2 \nonumber\\
&\leq C\sum_{\eta\in\Delta_{k-1}(K_h)}\,\,\sum_{\substack{\tau\in\Delta_k(K_h),\\\tau\succ\eta}}\left(\frac{\vert\ast\eta\vert}{\vert\eta\vert}\right)^{-1}\langle\star_h\omega_h,\ast\tau\rangle^2 \nonumber\\
&\leq Ch^{-2}\|\star_h\omega_h\|_h^2. \label{eq: op L2 norm exterior derivative over dual}
\end{align}
Finally, by combining the estimates (\ref{eq: op maximum norm exterior derivative over dual}), (\ref{eq: op maximum norm Hodge star over dual}), (\ref{eq: op L2 norm Hodge star over dual}) and (\ref{eq: op L2 norm exterior derivative over dual}) with Corollaries \ref{cor: consistency of hodge star on dual}, \ref{cor: consistency of hodge star}, \ref{cor: consistency of hodge star L2} and \ref{cor: consistency of hodge star L2 on the dual}, we infer
\begin{align*}
\|\star_h \dif_{\,h}\left(\star_h R_h-R_h \star \right)\dif\omega\|_h &\leq \|\star_h\|_{\mathcal{L}\left((C^k(\ast K_h),\|\cdot\|_h),(C^{n-k}(K_h),\|\cdot\|_h)\right)}\|\dif_{\,h}\left(\star_h R_h-R_h \star \right)\dif\omega\|_h\\
&\lesssim h^{-1} \|\left(\star_h R_h-R_h \star \right)\dif\omega\|_h\\
&\lesssim h^{-1}h = O(1),
\end{align*}
\begin{align*}
\|\star_h \dif_{\,h}\left(\star_h R_h-R_h \star \right)\dif\omega\|_{\infty} &\lesssim h^{2k-n}\|\dif_{\,h}\left(\star_h R_h-R_h \star \right)\dif\omega\|_{\infty}\\
&\lesssim h^{2k-n} \|\left(\star_h R_h-R_h \star \right)\dif\omega\|_{\infty}\\
& \lesssim h^{2k-n}h^{n-k} = O(h^{k}),
\end{align*}
\begin{equation*}
\|\left(\star_h R_h - R_h\star\right)\dif\star \dif\omega\|_h = O(h),
\end{equation*}
and
\begin{align*}
\|\left(\star_h R_h - R_h\star\right)\dif\star \dif\omega\|_\infty = O(h^{k+1}).
\end{align*}
We have proved the following.
\begin{corollary}\label{c: consistency bound}
If $K_h$ is regular, then
\begin{equation*}
\begin{split}
\Delta_h R_h\omega - R_h \Delta \omega &= \star_h \dif_{\,h}\left(\star_h R_h-R_h \star \right)\dif\omega +\left(\star_h R_h - R_h\star\right)\dif\star \dif\omega \\
&= O(1)+O(h) ,
\end{split}
\end{equation*}
for $\omega\in C^2\Lambda^0(\mathcal{P})$, in both the maximum and the discrete $L^2$-norm.
\end{corollary}

Note that in the displayed equation above, we have used the notation $O(1)+O(h)$ to convey information on the individual sizes of the two terms appearing in the right hand side of the first line. In fact, the preceding corollary will not be employed in what follows, since it only yields a consistency error of $O(1)$. However, the techniques we have used to estimate the individual terms and operator norms will prove to be fruitful.

\section{Stability and convergence}
\label{s:stability}

We carry on with a proof of the first order convergence estimates
\begin{equation*}
\|u_h-R_h u\|_h = O(h)\,\,\,\,\,\text{and}\,\,\,\,\, \|\dif\,(u_h-R_h u)\|_h=O(h),
\end{equation*}
previously stated in (\ref{intro convergence}). The demonstration is based on the discrete variational form of the Poisson problem (\ref{eq: Discrete Poisson Dirichlet}). Its expression can be derived efficiently using the next proposition.

\begin{proposition}\label{prop: adjoint of exterior derivative}
If $\omega_h\in C^{k}(K_h)$ and $\eta_h\in C^{k+1}(K_h)$, then
\begin{equation}
\left(\dif_{\,h}\omega_h,\eta_h \right)_h = \left( \omega_h,\delta_h\eta_h\right)_h.
\end{equation}
\end{proposition}
\begin{proof}
It is sufficient to consider the case where $\omega_h(\tau)=1$ on a $k$-simplex $\tau$ in $K_h$ and vanishes otherwise. On the one hand,
\begin{equation}\label{adjoint derivative equation}
\left(\dif_{\,h}\omega_h,\eta_h \right)_h = \sum_\sigma\langle\omega_h, \partial\sigma\rangle\langle \star_h\eta_h, \ast\sigma\rangle =\langle\omega_h,\tau\rangle \sum_{\sigma\succ\tau}\langle\star_h\eta_h,\ast\sigma\rangle,
\end{equation}
where $\sigma$ is a $(k+1)$-simplex oriented so that it is consistent with the induced orientation on $\tau$. On the other hand, since $\delta_h = (-1)^{k}\star_h^{-1}\dif_{\,h}\star_h$ on $k$-cochains, it follows from definition \ref{def: boundary of dual} that
\begin{equation*}
\left(\omega_h,\delta_h\eta_h\right)_h = (-1)^{k+1}\langle\omega_h,\tau\rangle\langle \dif_{\,h}\star_h\eta_h, \ast\tau\rangle = \langle\omega_h,\tau\rangle\sum_{\sigma\succ\tau}\langle \star_h\eta_h, \ast\sigma\rangle,
\end{equation*}
where $\sigma$ is oriented as in (\ref{adjoint derivative equation}).
\end{proof}

Before deriving this variational form, however, we prove a discrete Poincar\'e-like inequality for $0$-cochains that is essential to the argument. The inequality is established by comparing the discrete norm of these cochains with the continuous $L^2$-norm of Whitney forms
\begin{equation*}
\phi_\tau = k!\sum_{i=0}^k (-1)^i\lambda_i\,\dif\lambda_0\wedge...\wedge\widehat{\dif\lambda_i}\wedge...\wedge\dif\lambda_k,\,\,\,\,\,\tau\in\Delta_k(K_h),
\end{equation*}
where $\lambda_i$ is the piecewise linear hat function evaluating to $1$ at the $i$th vertex of $\tau$ and $0$ at every other vertex of $K_h$. Setting
\begin{equation*}
W_h\omega_h = \sum_{\tau\in\Delta_k(K_h)}\omega_h(\tau)\phi_\tau
\end{equation*}
for all $\omega_h\in C^k(K_h)$ defines linear maps $W_h$, called Whitney maps, which have the key property that $W_h\dif_{\,h}\omega_h=\dif W_h\omega_h$.
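For orientation, the following snippet is a hypothetical illustration (not part of our implementation) of the lowest-order Whitney forms on a single triangle: it evaluates $\phi_e=\lambda_0\dif\lambda_1-\lambda_1\dif\lambda_0$ for the edge $e=[v_0,v_1]$ and checks numerically that $\int_e\phi_e=1$, i.e. that the deRham map recovers the cochain interpolated by the Whitney map.
\begin{verbatim}
# Hypothetical illustration of a Whitney 1-form on one triangle.
import numpy as np

verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.3, 0.8]])  # triangle vertices

# Barycentric coordinates are affine: lambda_i(x, y) = a_i + b_i x + c_i y.
M = np.hstack([np.ones((3, 1)), verts])   # rows (1, x_i, y_i)
coeff = np.linalg.inv(M).T                # row i holds (a_i, b_i, c_i)
grads = coeff[:, 1:]                      # d(lambda_i) as constant covectors

def lam(i, p):
    return coeff[i, 0] + coeff[i, 1:] @ p

def whitney_edge(p, a=0, b=1):
    # phi_[va,vb](p), identified with a vector via the Euclidean metric
    return lam(a, p) * grads[b] - lam(b, p) * grads[a]

# Integrate phi_e along the edge e = [v0, v1] with the midpoint rule.
ts = np.linspace(0.0, 1.0, 1001)[:-1] + 0.0005
tangent = verts[1] - verts[0]
vals = [whitney_edge(verts[0] + t * tangent) @ tangent for t in ts]
print(np.mean(vals))   # approximately 1.0
\end{verbatim}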
\begin{theorem}\label{thm: equivalence with Whitney forms} Let $K_h$ be a family of regular triangulations. There exist two positive constants $c_1$ and $c_2$, independent of $h$, satisfying \begin{equation*} c_1 \|\omega_h\|_h \leq \|W_h\omega_h\|_{L^2\Lambda^k(K_h)}\leq c_2\|\omega_h\|_h, \,\,\,\,\, \omega_h\in C^k(K_h). \end{equation*} \end{theorem} \begin{proof} We proceed using a scaling argument. Suppose $\sigma=[v_0,...,v_n]\in\Delta_n(K_h)$, and assume $\tau\prec\sigma$ is an orientation of the face $v_0,...,v_k$. Let $\{\lambda_i\}$ be the barycentric coordinate functions associated to the vertices of $\sigma$ (these are the piecewise linear hat functions used in the definition of Whitney forms). The $1$-forms $\dif\lambda_0,...,\widehat{\dif\lambda_{\ell}},...,\dif\lambda_n$ are a basis for $\text{Alt}^n(\mathbb{R}^n)$ dual to $t^{\ell}_i=v_i-v_{\ell}$, $i=1,...,n$, i.e. the vectors $t^{\ell}_1,...t^{\ell}_n$ form a basis for $P(\sigma)$ and satisfy $\dif\lambda_1\wedge...\wedge \widehat{\lambda_{\ell}}\wedge...\wedge\dif\lambda_n(t^{\ell}_1,...,t^{\ell}_n)=1$. In this setting, it follows in general that for any oriented face $\rho=[v_{\rho(0)},...,v_{\rho(m)}]$ of $\sigma$, \begin{multline*} \vert \rho \vert = \vert\det\left(v_{\rho(1)}-v_{\rho(0)},...,v_{\rho(m)}-v_{\rho(0)}\right)\vert\\ =\vert\det\left(t^{\rho(0)}_1,...,t^{\rho(0)}_m\right)\vert =\pm_{\rho}\frac{1}{m!}\mathrm{vol}_\rho(t^{\rho(0)}_1,...,t^{\rho(0)}_n). \end{multline*} In other words, \begin{equation}\label{eq: change of basis simplex} \dif\lambda_{\rho(1)}\wedge...\wedge \widehat{\dif\lambda_{\rho(0)}} \wedge...\wedge\dif\lambda_{\rho(m)}=\pm_{\rho}\frac{1}{\vert\rho\vert m!}\mathrm{vol}_{\rho}, \end{equation} where $\pm_{\rho}$ depends on the orientation of $\rho$. Now let $\hat{\sigma}$ denote the standard $n$-simplex in $\mathbb{R}^n$, and consider an affine transformation $F:\hat{\sigma}\longrightarrow\sigma$ of the form $F=B+b$, where $B:\mathbb{R}^n\longrightarrow\mathbb{R}^n$ is a linear map and $b\in\mathbb{R}^n$ a vector. We compute the pullback $\hat{\phi_{\tau}}=F^*\phi_{\tau}$ of a Whitney $k$-form $\phi_{\tau}$ using (\ref{eq: change of basis simplex}). To lighten notation, we write $\hat{\tau}=F^{-1}(\tau)$ and $B_{\hat{\tau}}=B\vert_{P(\hat{\tau})}$. We evaluate directly \begin{align*} \left(F^* \phi_\tau\right)_x &= k!\sum_{i=0}^k \frac{\pm_i\lambda_{i}\circ F}{k!\vert\tau\vert}F^*\mathrm{vol}_{\tau} \\ &= k!\sum_{i=0}^k (-1)^i\left(\lambda_{i}\circ F\right) \vert\det B_{\hat{\tau}}\vert\bigwedge_{\substack{\ell=0\\ \ell\neq i}}^k \dif\lambda_{\ell} \\ &= \vert\det B_{\hat{\tau}}\vert\left(\phi_{\tau}\right)_{F(x)}. \end{align*} A change of variables then yields \begin{align*} \int_{\sigma}\|\sum_{\tau}\omega_h(\tau)\phi_{\tau}\|^2 \mathrm{vol}_{\sigma}&= \int_{\sigma}\|\sum_{\tau}\frac{\omega_h(\tau)}{\vert \det B_{\hat{\tau}}\vert}(\hat{\phi}_{\tau})_{F^{-1}}\|^2\mathrm{vol}_{\sigma}\\ &= \vert\det B\vert\int_{\hat{\sigma}}\|\sum_{\tau}\frac{\omega_h(\tau)}{\vert \det B_{\hat{\tau}}\vert}\hat{\phi}_{\tau}\|^2\mathrm{vol}_{\hat{\sigma}}. \end{align*} Using the equivalence of norms on finite dimensional Banach spaces and the regularity assumption on $K_h$, we therefore conclude that \begin{equation*} \|W_h\omega_h\|^2_{L^2\Lambda^k(\sigma)}\sim \vert\det B\vert\sum_{\tau}\left(\frac{\omega_h(\tau)}{\vert \det B_{\hat{\tau}}\vert}\right)^2 \sim h^{n-2k}\sum_{\tau}\omega_h(\tau)^2 \sim \|\omega_h\|_h^2, \end{equation*} where the equivalences do not depend on $h$. 
\end{proof}

\begin{corollary}\label{cor: poincare}
There exists a constant $C$, independent of $h$, such that the discrete Poincar\'e inequality
\begin{equation*}
\|\omega_h\|_h \leq C\|\dif_{\,h}\omega_h\|_h
\end{equation*}
holds for all $\omega_h\in C^0(K_h)$ such that $\omega_h = 0$ on $\partial K_h$.
\end{corollary}
\begin{proof}
Using Theorem \ref{thm: equivalence with Whitney forms} and the Poincar\'e inequality, we have
\begin{equation}
\|\omega_h\|_h\lesssim\|W_h\omega_h\|_{L^2\Lambda^k(K_h)}\lesssim\|\dif W_h\omega_h\|_{L^2\Lambda^k(K_h)}=\| W_h\dif_{\,h}\omega_h\|_{L^2\Lambda^k(K_h)}\lesssim\|\dif_{\,h}\omega_h\|_h ,
\end{equation}
which establishes the proof.
\end{proof}

The argument is twofold. We first restrict our attention in (\ref{eq: Discrete Poisson Dirichlet}) to the homogeneous boundary condition $g=0$, and introduce inhomogeneous Dirichlet conditions in a second step. Consider the following variational formulation. Suppose that $\nu_h\in C^0(K_h)$ vanishes everywhere but at a vertex $p$. For all $\omega_h \in C^0(K_h)$, we have
\begin{equation*}
\left(\delta_h\dif_{\,h}\omega_h,\nu_h\right)_h - \left( R_h f,\nu_h\right)_h = \nu_h(p)\vert\ast p\vert\left( \langle\delta_h\dif_{\,h}\omega_h,p\rangle - \langle R_hf,p\rangle\right).
\end{equation*}
In other words, $\Delta_h\omega_h = R_h f$ if and only if $\left( \delta_h\dif_{\,h}\omega_h,\nu_h\right)_h = \left( R_h f,\nu_h\right)_h$ for all $\nu_h$. By Proposition \ref{prop: adjoint of exterior derivative}, we may thus equivalently find a discrete function $\omega_h$ vanishing on $\partial K_h$ such that
\begin{equation}\label{eq: critical solutions}
\left(\dif_{\,h}\omega_h,\dif_{\,h}\nu_h\right)_h = \left(R_h f,\nu_h\right)_h
\end{equation}
for all $0$-cochains $\nu_h$ satisfying the homogeneous boundary condition. Note that this problem in turn reduces to the one of minimizing the energy functional
\begin{equation*}\label{eq: energy}
E_h(\nu_h)= \frac{1}{2}\left(\dif_{\,h}\nu_h,\dif_{\,h}\nu_h\right)_h-\left(R_hf,\nu_h\right)_h
\end{equation*}
over the same collection of $\nu_h$. Indeed, a direct computation shows that $\omega_h$ satisfies (\ref{eq: critical solutions}) if and only if $\frac{\dif}{\dif\varepsilon}\big\vert_{\varepsilon = 0}E_h\left(\omega_h+\varepsilon \nu_h\right) = 0$ for all these $\nu_h$, in which case we further conclude from
\begin{equation*}
E_h\left(\omega_h+\nu_h\right)= \underbrace{\frac{1}{2}\left(\dif_{\,h}\omega_h,\dif_{\,h}\omega_h\right)_h-\left(R_hf,\omega_h\right)_h}_{E_h(\omega_h)}+\underbrace{\left(\dif_{\,h}\omega_h,\dif_{\,h}\nu_h\right)_h-\left(R_hf,\nu_h\right)_h}_{= 0\text{ by (\ref{eq: critical solutions})}}+\underbrace{\frac{1}{2}\left(\dif_{\,h}\nu_h,\dif_{\,h}\nu_h\right)_h}_{\geq 0}
\end{equation*}
that $\omega_h$ is a minimizer. Since $\left(\dif_{\,h}u_h,\dif_{\,h}u_h\right)_h=0$ if and only if $u_h$ is constant, and the only constant cochain vanishing on $\partial K_h$ is $0$, the operator $\delta_h\dif_{\,h}$ is invertible over the space of discrete functions vanishing on $\partial K_h$. The existence and uniqueness of a solution to (\ref{eq: critical solutions}) is thus guaranteed for all $h$. Taking $\nu_h=\omega_h$ in (\ref{eq: critical solutions}), applying the Cauchy-Schwarz inequality and Corollary \ref{cor: poincare}, and assuming $R_hf$ is not identically $0$ (otherwise the solution is trivially $\omega_h=0$), we obtain
\begin{equation*}
\left(\dif_{\,h}\omega_h,\dif_{\,h}\omega_h\right)_h\leq\|\omega_h\|_h\|R_hf\|_h\leq C\|\dif_{\,h}\omega_h\|_h\|R_hf\|_h,
\end{equation*}
and another application of that corollary finally yields the stability estimate
\begin{equation*}
\|\omega_h\|_h\leq C \|R_hf\|_h.
\end{equation*}
Unfortunately, convergence cannot be obtained by applying the Lax-Richtmyer theorem to this stability estimate together with the consistency bound derived in Corollary \ref{c: consistency bound}. However, Proposition \ref{prop: adjoint of exterior derivative} comes to our rescue. Given a solution $\omega$ to the associated continuous problem, it allows us to use the expression for $\Delta_h$ given in Lemma \ref{lem: reducing consistency} to evaluate the error $e_h=R_h\omega-\omega_h$ with
\begin{align*}
\left(\dif_{\,h}e_h,\dif_{\,h}e_h\right)_h&=\left(\Delta_he_h,e_h\right)_h\\
&= \left(\star_h \dif_{\,h}\left(\star_h R_h-R_h \star \right)\dif\omega,e_h\right)_h +\left(\left(\star_h R_h - R_h\star\right)\dif\star \dif\omega,e_h\right)_h\\
&=\left(\star_h^{-1}\left(\star_h R_h-R_h \star \right)\dif\omega,\dif_{\,h}e_h\right)_h + \left(\left(\star_h R_h - R_h\star\right)\dif\star \dif\omega,e_h\right)_h\\
&\leq Ch\|\dif_{\,h} e_h\|_h+Ch\|e_h\|_h,
\end{align*}
where the last inequality was obtained from an application of Corollary \ref{cor: consistency of hodge star L2}, (\ref{eq: op L2 norm Hodge star over dual}) and the second-to-last of the estimates leading to Corollary \ref{c: consistency bound}. Using the discrete Poincar\'e-like inequality again completes the proof of the homogeneous case of the following theorem.

\begin{mdframed}
\begin{theorem}
The unique discrete solution $\omega_h\in C^0(K_h)$ of problem (\ref{eq: Discrete Poisson Dirichlet}) stated for $0$-forms over a regular triangulation $K_h$ satisfies
\begin{equation*}
\|e_h\|_h\leq C\|\dif_{\, h}e_h\|_h= O(h),
\end{equation*}
where $e_h=R_h\omega-\omega_h$.
\end{theorem}
\end{mdframed}

As one would expect, the case of inhomogeneous boundary conditions reduces to the homogeneous one. Given $g_h\neq \vec{0}$, the Poisson problem is to find $\omega_h$ in the affine space $\{g_h+\eta_h\,\vert\, \eta_h\in C^0(K_h),\ \eta_h = 0\ \text{on}\ \partial K_h\}$ satisfying (\ref{eq: critical solutions}). That is, we are looking for $u_h\in C^0(K_h)\cap\{\eta_h:\eta_h\vert_{\partial K_h}=0\}$ satisfying
\begin{equation}\label{eq: critical solutions inhomogeneous}
\left(\dif_{\,h}u_h,\dif_{\,h}\nu_h\right)_h= \left(R_hf, \nu_h \right)_h - \left(\dif_{\, h}g_h,\dif_{\, h}\nu_h\right)_h =: F(\nu_h),
\end{equation}
for all $\nu_h$ vanishing on the boundary, which is equivalent to the previous homogeneous problem, but with the linear functional $F:C^0(K_h)\longrightarrow \mathbb{R}$ on the right hand side. Repeating the previous arguments, we obtain the stability estimate
\begin{equation}
\|u_h\|_h\leq C\left( \|R_hf\|_h+\|\dif_{\, h} g_h\|_h\right).
\end{equation}
The rest of the proof goes through similarly, so that the solution $\omega_h=g_h+u_h$ to (\ref{eq: Discrete Poisson Dirichlet}) with inhomogeneous boundary condition $g_h$ is first order convergent.

\section{Numerical experiments}
\label{s:numerics}

In this section, we report on some numerical experiments performed over two and three dimensional triangulations. The discrete solutions are computed from the inverse of $\Delta_h$ built by composing the operators defined in Section \ref{subs: Discrete Operators}. The volumes needed to implement the Hodge stars as diagonal matrices $\star^h_k$ are computed in a standard way. The matrices $\dif^{\,h}_{\,k-1,\text{Primal}}=(\partial^h_{k})^T$ acting on vector representations of cochains (denoted by $[\cdot]$) in $C^{k-1}(K_h)$ are created using the algorithm suggested in \parencite{BH12}.
By construction, the orientation of $\dif^{\,h}_{\,k-1,\text{Primal}}[\tau]\in\Delta_{k}(K_h)$ is therefore consistent with the orientation of the $(k-1)$-simplex $\tau$. Hence, it is suitably oriented for definition \ref{def: boundary of dual} to imply \begin{align*} \left(\dif^{\,h}_{\,\,n-k,\text{Dual}}\star^h_k\lbrack\omega_h\rbrack\right)^T \lbrack\tau\rbrack&= \langle \dif_{\,h}\star_h\omega_h, \ast\tau\rangle\\ &=(-1)^k\sum_{\eta\succ\tau}\langle\star_h\omega_h,\ast\eta\rangle\\ &= (-1)^k\left(\star_h^k[\omega_h]\right)^T\dif^{\,h}_{\,k-1,\text{Primal}}[\tau]\\ &=(-1)^k\left((\dif^{\,h}_{\,k-1,\text{Primal}})^T\star_h^k[\omega_h]\right)^T[\tau] \end{align*} for all $\omega_h\in C^k(K_h)$ and $\tau\in\Delta_{k-1}(K_h)$. This proves the practical definition \begin{equation*} \dif^{\,h}_{\,\,n-k,\text{Dual}}=(-1)^k\left(\dif^{\,h}_{\,k-1,\text{Primal}}\right)^T = (-1)^k \partial^h_k \end{equation*} given in \parencite{Desbrun2008}, which we use in the following. \subsection{Experiments in two dimensions} Convergence of DEC solutions for $0$-forms is first studied over two types of convex polygons. Non-convex polygons are then used to evaluate the consequences of a lack of regularity caused by re-entrant corners. \subsubsection{Regular polygons}\label{Regular n-gons} We conduct numerical experiments on refinements of the form $\{K_{c\cdot2^{-i}}\}_{i=0}^N$ of the wheel graph $W_{5}$ whose boundary is a regular pentagon. Triangulation of the pentagonal domain is performed by recursively subdividing each elementary $2$-simplex of the initial complex into its four inscribed subtriangles delimited by the graph of its medial triangle and its boundary. We illustrate the process in Figure \ref{fig: W5 Initial Mesh}. A discrete solution to the Hodge-Laplace Dirichlet problem with trigonometric exact solution $u(x,y)=x^2\sin(y)$ over this complex is displayed along with its error function in Figure \ref{results convex polygons}. The size of the errors for its collection of refinements with $N=9$ is found in different discrete norms in Table \ref{table: Integral Errors Pentagon Trig}. The results show second order convergence both in the discrete $L^2$ and $H^1$ norms. Other cases where the initial primal triangulation is designed from a wheel graph on $n+1$ vertices are also considered. In particular, repeating the experiment on regular $n$-gons with $6\leq n \leq 8$ yields the same asymptotics. \begin{figure} \caption{Initial complex $K_c$} \caption{Refined complex $K_{c\cdot2^{-1} \caption{The initial primal triangulation of $W_5$ and its first refinement are shown. On the left, (a) was obtained by fixing the length of the interior edges to $1$, consequently setting the initial maximal length for the edges to $c=(2-2\cos(2\pi/5))^{\frac{1} \label{fig: W5 Initial Mesh} \end{figure} \begin{figure} \caption{$u_{C\cdot 2^{-8} \caption{$e_{C\cdot 2^{-8} \caption{This figure refers to the experiment $u(x,y)=x^2\sin(y)$. 
On the left, a DEC solution to the Hodge-Laplace Dirichlet problem over $T_{C\cdot 2^{-8} \label{results convex polygons} \end{figure} \begin{table} \scalebox{0.70}{ \begin{tabular}{|c||c|c|c|c|c|c|} \hline $i$ &$e_i^\infty=\|e_{C\cdot 2^{-i}}\|_{\infty}$& $\log\left(e_i^\infty/e^\infty_{i-1}\right)$ & $e^{H^1_d}=\|\dif e_{C\cdot 2^{-i}}\|_{C\cdot 2^{-i}}$& $\log\left(e_i^{H^1_d}/e^{H^1_d}_{i-1}\right)$ & $e^{L_d^2}=\|e_{C\cdot 2^{-i}}\|_{C\cdot 2^{-i}}$ & $\log\left(e_i^{L^2_d}/e^{L^2_d}_{i-1}\right)$ \\ \hline 0 & 1.561251e-17 & - & 2.975694e-17 & - & 1.487847e-17 & -\\ \hline 1 & 3.202794e-03 & -4.754362e+01 & 1.072846e-02 & -4.835714e+01 & 2.821094e-03 & -4.743002e+01\\ \hline 2 & 7.836073e-04 & 2.031128e+00 & 2.879579e-03 & 1.897512e+00 & 6.332754e-04 & 2.155350e+00\\ \hline 3 & 1.956510e-04 & 2.001848e+00 & 7.353114e-04 & 1.969431e+00 & 1.532456e-04 & 2.046987e+00\\ \hline 4 & 4.891893e-05 & 1.999818e+00 & 1.849975e-04 & 1.990850e+00 & 3.798925e-05 & 2.012183e+00\\ \hline 5 & 1.227086e-05 & 1.995157e+00 & 4.633277e-05 & 1.997401e+00 & 9.477213e-06 & 2.003057e+00\\ \hline 6 & 3.067823e-06 & 1.999949e+00 & 1.158895e-05 & 1.999283e+00 & 2.368052e-06 & 2.000762e+00\\ \hline 7 & 7.669629e-07 & 1.999987e+00 & 2.897627e-06 & 1.999806e+00 & 5.919350e-07 & 2.000190e+00\\ \hline 8 & 1.917491e-07 & 1.999937e+00 & 7.244331e-07 & 1.999948e+00 & 1.479789e-07 & 2.000047e+00\\ \hline \end{tabular} } \caption{Experiment of Subsection \ref{Regular n-gons} with $u(x,y)=x^2\sin(y)$. The terms $C\cdot2^{-i}\in\mathbb{R}$ indicates the values of $h$ in $\|\cdot\|_h$.}\label{table: Integral Errors Pentagon Trig} \end{table} \subsubsection{Rectangular domains}\label{Rectangular Domains} Three different types of regular triangulations were used to investigate convergence of DEC solutions over the unit square. These appear as degenerate examples of circumcentric complexes in which the circumcenters lie on the hypotenuses of the primal triangles. Examples of these triangulations are shown in Figure \ref{fig: Triangulations of the Square}. The computations carried over the unit square produce results similar to the ones previously obtained in Subsection \ref{Regular n-gons}. \begin{figure} \caption{Three regular triangulations of the square.} \label{fig: Triangulations of the Square} \end{figure} \subsubsection{Unstructured meshes}\label{sub: Asymmetric Convex Polygons} Second order convergence in all norm is also observed for unstructured meshes on convex polygons. The discrete solutions behaved similarly over non-convex polygons when the solution is smooth enough. One example of a primal domain over which the algorithm was tested is shown in Figure \ref{fig:asymmetric mesh}. \begin{figure} \caption{Initial complex $K_c$} \caption{Refined complex $K_{c\cdot2^{-1} \caption{Unstructured triangulations were refined as in Subsection \ref{Regular n-gons} \label{fig:asymmetric mesh} \end{figure} \subsubsection{Non-convex polygons}\label{concave Polygons} We study the convergence behavior of DEC when a reentrant corner is present along the boundary of the primal complex. More precisely, we consider re-entrant corners with various angles, leading to the exact solutions $u\in H^{1+\mu}(\mathcal{P})$, $0<\mu<1$, which lack $H^2$ regularity. Elected candidates for $u\in H^{1+\mu}(\mathcal{P})$ were harmonic functions of the form $r^{\mu}\sin(\mu\theta)$. 
These functions suit the model of Figure \ref{fig: Pentagon with Corner} (A) for $\mu=\pi/(2\pi -\beta)=\pi/\alpha$ and the result of the experiments can found in Table \ref{table: Integral Errors Pentagon Corner 5/8}. The initial triangulation for this case is also plotted along its first refinement in Figure \ref{fig: Pentagon with Corner}. The refinement algorithm of Subsection \ref{Regular n-gons} was used. Other experiments were performed and led to the empirical observation that $\|u-u_h\|_h\sim h^{2\mu}$ and $\|\dif\,(u-u_h)\|_h\sim h^{\mu}$. This convergence behavior is precisely what would be expected when a finite element method is applied to the same problem, which may be explained by the fact that our discretization is equivalent to a finite element method, cf. \parencite{HPW06,Wardetzky08} \begin{figure} \caption{Model of exact solutions} \caption{Initial complex $K_c$} \caption{Refined complex $T_{c\cdot2^{-1} \caption{We design an exact solution $u\in H^{1+\mu} \label{fig: Pentagon with Corner} \end{figure} \begin{figure} \caption{$u_{C\cdot 2^{-8} \caption{$e_{C\cdot 2^{-8} \caption{These plots were obtained from the experiment $r^{\mu} \label{results nonconvex polygons} \end{figure} \begin{table} \scalebox{0.70}{ \begin{tabular}{|c||c|c|c|c|c|c|} \hline $i$ &$e_i^\infty=\|e_{C\cdot 2^{-i}}\|_{\infty}$& $\log\left(e_i^\infty/e^\infty_{i-1}\right)$ & $e^{H^1_d}=\|\dif e_{C\cdot 2^{-i}}\|_{C\cdot 2^{-i}}$& $\log\left(e_i^{H^1_d}/e^{H^1_d}_{i-1}\right)$ & $e^{L_d^2}=\| e_{C\cdot 2^{-i}}\|_{C\cdot 2^{-i}}$ & $\log\left(e_i^{L^2_d}/e^{L^2_d}_{i-1}\right)$ \\ \hline 0 & 0 & - & 0 & - & 0 & - \\ \hline 1 & 3.402738e-02 & - & 8.467970e-02 & - & 2.346479e-02 & - \\ \hline 2 & 3.194032e-02 & 9.131748e-02 & 6.533106e-02 & 3.742472e-01 & 1.353817e-02 & 7.934654e-01 \\ \hline 3 & 2.346298e-02 & 4.449927e-01 & 4.496497e-02 & 5.389676e-01 & 6.570546e-03 & 1.042947e+00 \\ \hline 4 & 1.595752e-02 & 5.561491e-01 & 2.983035e-02 & 5.920204e-01 & 2.970932e-03 & 1.145097e+00 \\ \hline 5 & 1.054876e-02 & 5.971636e-01 & 1.952228e-02 & 6.116590e-01 & 1.299255e-03 & 1.193231e+00 \\ \hline 6 & 6.894829e-03 & 6.134867e-01 & 1.270715e-02 & 6.194814e-01 & 5.584503e-04 & 1.218184e+00 \\ \hline 7 & 4.485666e-03 & 6.201927e-01 & 8.252738e-03 & 6.226958e-01 & 2.377754e-04 & 1.231830e+00 \\ \hline 8 & 2.912660e-03 & 6.229847e-01 & 5.354822e-03 & 6.240341e-01 & 1.007013e-04 & 1.239517e+00 \\ \hline \end{tabular} } \caption{Experiment of Subsection \ref{concave Polygons} with $r^{\mu}\sin(\mu\theta)$, $\mu=5/8$. The terms $C\cdot2^{-i}\in\mathbb{R}$ indicates the values of $h$ in $\|\cdot\|_h$.}\label{table: Integral Errors Pentagon Corner 5/8} \end{table} \subsection{Three dimensional case}\label{sub: 3-Dimensional Triangulations} As explained in \parencite{VanderZee2008}, it is not an easy task in general to generate and refine tetrahedral meshes of $3$-dimensional arbitrary convex domains without altering its circumcentric quality. The next example is a degenerate case: the circumcenters of the tetrahedra used to triangulate the cube lies on lower dimensional simplices, but it does allow for a better control over the step size in $h$ (and thus for a better evaluation of the convergence rate), because the mesh can be refined recursively by gluing smaller copies of itself filling the cuboidal domain. The triangulation is displayed in Figure \ref{cubic mesh}. 
Again, a second order convergence in all norms similar to the results of Subsection \ref{Regular n-gons} is obtained and presented in Table \ref{table: errors cubic}. \begin{table} \scalebox{0.70}{ \begin{tabular}{|c||c|c|c|c|c|c|} \hline $i$ &$e_i^\infty=\|e_{C\cdot 2^{-i}}\|_{\infty}$& $\log\left(e_i^\infty/e^\infty_{i-1}\right)$ & $e^{H^1_d}=\|\dif e_{C\cdot 2^{-i}}\|_{C\cdot 2^{-i}}$& $\log\left(e_i^{H^1_d}/e^{H^1_d}_{i-1}\right)$ & $e^{L_d^2}=\|e_{C\cdot 2^{-i}}\|_{C\cdot 2^{-i}}$ & $\log\left(e_i^{L^2_d}/e^{L^2_d}_{i-1}\right)$ \\ \hline 0 & 8.586493e-04 & - & 1.487224e-03 & - & 3.035784e-04 & -\\ \hline 1 & 2.666725e-04 & 1.687000e+00 & 6.216886e-04 & 1.258358e+00 & 1.156983e-04 & 1.391702e+00 \\ \hline 2 & 7.122948e-05 & 1.904523e+00 & 1.774812e-04 & 1.808526e+00 & 3.166206e-05 & 1.869540e+00 \\ \hline 3 & 1.835021e-05 & 1.956678e+00 & 4.594339e-05 & 1.949737e+00 & 8.083333e-06 & 1.969733e+00 \\ \hline 4 & 4.621759e-06 & 1.989283e+00 & 1.158904e-05 & 1.987096e+00 & 2.031176e-06 & 1.992635e+00 \\ \hline \end{tabular} } \caption{Experiment of Subsection \ref{sub: 3-Dimensional Triangulations} with $u(x,y)=x^2\sin(y)+\cos(z)$.The terms $C\cdot2^{-i}\in\mathbb{R}$ indicates the values of $h$ in $\|\cdot\|_h$.}\label{table: errors cubic} \end{table} \begin{figure} \caption{Initial complex $K_c$} \caption{Refined complex $K_{c\cdot2^{-1} \caption{The initial 3-dimensional primal triangulation and its first refinement are shown. A sequence of finer triangulations was created by gluing at each step $4$ smaller copies of the previous refinement into the initial cubic domain.} \label{cubic mesh} \end{figure} \section{Conclusion} This paper fills a gap in the current literature, which has yet to offer any theoretical validation regarding the convergence of discrete exterior calculus approximations for PDE problems in general dimensions. Namely, we prove that discrete exterior calculus approximations to the scalar Poisson problem converge at least linearly with respect to the mesh size for quasi-uniform triangulations in arbitrary dimensions. Nevertheless, it must be emphasized that this {\em first order} convergence result is only partially satisfactory. In accordance with \parencite{Nong04}, the numerical experiments of Section \ref{s:numerics} display pointwise {\em second order} convergence of the discrete solutions over unstructured triangulations. The same behavior was observed also for the discrete $L^2$ norm. The challenges of explaining the second order convergence, and of proving convergence for general $p$-forms persist. Due emphasis must be given to the role played by compatibility in obtaining this result. The reliable framework of the continuous setting was reproduced at the discrete level to a sufficient extent so as to successfully provide a theorem of stability comparable to the one found in the classical PDEs literature. This makes another convincing case for the study and development of structure preserving discretization in general. An accessory consequence of our investigations is the technical modification of the combinatorial definition for the boundary of a dual cell currently found in the early literature. We suggest that it is presently only compatible with the continuous theory up to a sign, and that accounting for the latter yields a new definition which agrees with the algorithms later described in \parencite{Desbrun2008}. A compatible extension of the usual discrete $L^2$-norm to $C^k(\ast K_h)$ was also explicitly introduced. 
\section*{Acknowledgments} This paper is the first author's first research article, and he would like to thank his family for its unwavering support on his journey to become a scientist. The second author would like to thank Gerard Awanou (UI Chicago) for his insights offered during the initial stage of this project, and Alan Demlow (TAMU) for a discussion on interpreting some of the numerical experiments. This work is supported by NSERC Discovery Grant, and NSERC Discovery Accelerator Supplement Program. During the course of this work, the second author was also partially supported by the Mongolian Higher Education Reform Project L2766-MON. \nocite{*} \printbibliography \end{document}
\begin{document}
\twocolumn[
\icmltitle{Towards OOD Detection in Graph Classification \\ from Uncertainty Estimation Perspective}
\icmlsetsymbol{equal}{*}
\begin{icmlauthorlist}
\icmlauthor{Gleb Bazhenov}{Skoltech}
\icmlauthor{Sergei Ivanov}{Skoltech,Criteo}
\icmlauthor{Maxim Panov}{TII,Skoltech}
\icmlauthor{Alexey Zaytsev}{Skoltech}
\icmlauthor{Evgeny Burnaev}{Skoltech}
\end{icmlauthorlist}
\icmlaffiliation{Skoltech}{Skolkovo Institute of Science and Technology, Russia}
\icmlaffiliation{Criteo}{Criteo AI, France}
\icmlaffiliation{TII}{Technology Innovation Institute, UAE}
\icmlcorrespondingauthor{Gleb Bazhenov}{[email protected]}
\icmlkeywords{out-of-distribution detection, graph classification, uncertainty estimation, deep learning}
\vskip 0.3in
]
\printAffiliationsAndNotice{}

\begin{abstract}
The problem of out-of-distribution detection for graph classification is far from being solved. The existing models tend to be overconfident about OOD examples or completely ignore the detection task. In this work, we consider this problem from the uncertainty estimation perspective and perform the comparison of several recently proposed methods. In our experiments, we find that there is no universal approach for OOD detection, and it is important to consider both graph representations and predictive categorical distribution.
\end{abstract}

\section{Introduction}

Out-of-distribution (OOD) detection requires us to distinguish between the in-distribution (ID) observations that can be processed by the model and OOD samples that have to be rejected as inconsistent with the training data. This problem has been studied for a long time across many data modalities, including graph-structured data. One way to deal with it is to equip a particular model with point uncertainty estimates. In general, there are several approaches to uncertainty estimation in the classification task that employ the ensembling techniques \cite{gal2016dropout, lakshminarayanan2017simple}, the probabilistic frameworks based on Dirichlet distribution \cite{malinin2018predictive, sensoy2018evidential, charpentier2020posterior}, and other methods \cite{liu2020simple, van2020simple, van2021improving, mukhoti2021deterministic}. While there have been previous attempts to explore OOD detection for image classification \cite{sinhamahapatra2022all}, their direct application to graph classification requires further research due to the complex network structure and the additional attributes associated with nodes. \\
Recent works \cite{zhao2020uncertainty, stadler2021graph} consider OOD detection in the node classification problem using uncertainty estimation; however, node-level detection differs from graph-level detection because of the dependence between nodes, which is absent between separate graphs. Likewise, some works \cite{wu2021handling, ding2021closer, li2022out} explored the question of OOD generalisation on graph-structured data, where robust models have been proposed to deal with the distribution shifts; however, such works do not consider the detection of OOD samples, which is the focus of our paper.

The problem of OOD detection for graph classification is not solved yet. The existing models tend to be overconfident about OOD examples or completely ignore the detection task. In our work, we consider this problem from the uncertainty estimation perspective and perform the comparison of several recently proposed methods.
In particular, we find that there is no universal approach for OOD detection across different graph datasets, and it is important to consider not only the entropy of predictive distribution, but also the geometry of the latent space produced by graph encoder. In addition, we observe that sometimes the classification task has a \textit{default} ID class for most OOD samples. \section{Methods} \label{methods} In general, we construct the model that for a given graph $x$ and the associated classifier $c(x)$ can provide the uncertainty estimate $u(x)$. Since these estimates are used for ranking the test samples, we require that such model predicts higher uncertainty for OOD examples. In this work, we consider two common distinct classes of methods for OOD detection through uncertainty estimation: \begin{itemize} \item entropy-based methods, which use the entropy of predictive categorical distribution as the measure of uncertainty, treating the predictions with high entropy as less certain; \item density-based methods, which perform probability density estimation using the provided representations in latent space and assign high uncertainty to the points with low density. \end{itemize} The entropy-based methods expect that we have a particular classification model, while the density-based focus on a density estimate $\mathtt{p}(x)$ that can be independent from our classifier. However, in practice, they can share the representations with the main classification model. Most methods that we describe here can distinguish between two sources of uncertainty \cite{gal2016uncertainty} — data uncertainty $u_{\text{data}}$, which reflects the internal noise in data due to class overlap or data labeling errors, and knowledge uncertainty $u_{\text{know}}$, which refers to the lack of knowledge about unseen data. In our experiments, we use both types of uncertainty and investigate which one is more consistent with the OOD detection problem. \subsection{Entropy-Based} \label{methods:entropy} \begin{figure*} \caption{OOD detection performance per class of all the considered methods using \\ knowledge uncertainty KU $u_{\text{know} \label{fig:ood-detection-main} \end{figure*} \begin{figure*} \caption{t-SNE graph embeddings produced by \texttt{single} \label{fig:tsne-main} \end{figure*} The entropy-based methods that we use in our experiments can be described in terms of the probabilistic framework which implies some unknown distribution $\mathtt{p}(\theta|\mathcal{D})$ of model parameters $\theta$ given the training data $\mathcal{D}$. In our classification task, the predictive distribution is expressed as follows: \[ \mathtt{P}(y|x, \mathcal{D}) = \mathbb{E}_{\mathtt{p}(\theta|\mathcal{D})}\mathtt{P}(y|x, \theta), \] where $\mathtt{P}(y|x, \theta)$ is defined by a graph classification model $f_{\theta}(x)$. Each method presented below provides a particular form of distribution $\mathtt{p}(\theta|\mathcal{D})$ and the uncertainty measure that is used for OOD detection in graph classification problem. \subsubsection{Single GNN Model} \label{methods:single} This approach employs the most simple form of distribution $\mathtt{p}(\theta|\mathcal{D})$ which is expressed through a delta function around the MLE of model parameters $\theta$: \[ \mathtt{p}(\theta|\mathcal{D}) = \delta(\theta - \theta_{\text{MLE}}). \] In such case, the predictive distribution has the form \[ \mathtt{P}(y|x, \mathcal{D}) = \mathtt{P}(y|x, \theta_{\text{MLE}}). 
\]
The model consists of several graph convolution layers and global graph pooling that provides a representation $z(x)$, which is then processed by a single-layer fully-connected network in order to obtain the predictive distribution. In this approach, we do not distinguish between data and knowledge uncertainty and base the total uncertainty measure of this model on the entropy of the predictive distribution:
\[
u_{\text{total}} = \mathbb{H}\big[\mathtt{P}(y|x, \theta_{\text{MLE}})\big].
\]
Further in the text, we refer to this method as \texttt{single}.

\subsubsection{Ensemble of GNN Models}
\label{methods:ensemble}

We also consider two ensembling techniques for constructing more robust predictive models — Monte Carlo Dropout \cite{gal2016dropout} and Deep Ensembles \cite{lakshminarayanan2017simple}. To formally define these approaches, we consider the approximation of distribution $\mathtt{p}(\theta|\mathcal{D})$ via a set of models, which either represent the dropout versions of the same estimator described in \cref{methods:single} (MC), or independently trained instances of such a model (DE):
\[
\theta \sim \mathtt{q}(\theta|\mathcal{D}),
\]
where the distribution $\mathtt{q}(\theta|\mathcal{D})$ is discrete uniform over $n$ model instances $\theta_k$. Given this, the predictive distribution can be expressed as a discrete average instead of the original integral:
\[
\mathtt{P}(y|x, \mathcal{D}) = \frac{1}{n}\sum\limits_{k = 1}^n \mathtt{P}(y|x, \theta_k).
\]
It enables us to separate data and knowledge uncertainty in the same way as was done in~\cite{malinin2018predictive}:
\[
\begin{split}
\begin{gathered}
u_{\text{data}} = \mathbb{E}_{\mathtt{q}(\theta|\mathcal{D})} \mathbb{H}\big[\mathtt{P}(y|x, \theta)\big], \\
u_{\text{know}} = \mathbb{H}\big[\mathbb{E}_{\mathtt{q}(\theta|\mathcal{D})} \mathtt{P}(y|x, \theta)\big] - u_{\text{data}}.
\end{gathered}
\end{split}
\]
For the sake of brevity, we refer to Deep Ensemble as \texttt{DE} and Monte Carlo Dropout as \texttt{MC}.

\subsection{Density-Based}
\label{methods:density}

These methods are more flexible and diverse in their formulation, but their main idea is to estimate the probability density in the latent space of graph representations $z(x)$ associated with a particular classification task. All density-based methods face the challenge of density estimation in a high-dimensional embedding space, which existing approaches try to mitigate in various ways.

\subsubsection{NUQ}
\label{methods:nuq}

The first density-based method for uncertainty estimation that we consider is Non-Parametric Uncertainty Quantification \cite{kotelevskii2022nuq}, which can be derived from the problem of statistical risk minimisation and leads to the natural decomposition of total uncertainty into the sum of data uncertainty
\[
\eta_k(x) = \mathtt{P}(y = k|x) \Longrightarrow u_{\text{data}} = \min_k \eta_k(x),
\]
where $\mathtt{P}(y = k|x)$ is defined by the Nadaraya-Watson estimator \cite{nadaraya1964estimating}, and knowledge uncertainty
\[
u_{\text{know}} \sim \max\limits_k \sigma_k^2(z) / \mathtt{p}(z),
\]
where $\sigma_k^2(z) = \eta_k(x)\big[1 - \eta_k(x)\big]$ and the probability density function $\mathtt{p}(z)$ is approximated using the kernel density estimator (KDE) in the graph representation space, which in turn is induced by the already trained graph encoder described in \cref{methods:single}. We use its short notation \texttt{NUQ} for convenience.
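Both families of estimators reduce, at test time, to simple post-processing of model outputs. As an illustration (a hypothetical sketch, not our actual implementation), the decomposition used by \texttt{DE} and \texttt{MC} can be computed from the stacked softmax outputs of the ensemble members as follows; \texttt{NUQ} instead ranks samples by kernel density estimates built on the graph embeddings $z(x)$.
\begin{verbatim}
# Hypothetical sketch of the entropy-based decomposition for ensembles:
# total uncertainty splits into data uncertainty (expected entropy) and
# knowledge uncertainty (mutual information between prediction and model).
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    return -np.sum(p * np.log(p + eps), axis=axis)

def ensemble_uncertainties(probs):
    # probs: (n_members, n_samples, n_classes) softmax outputs of the
    # ensemble members (DE) or dropout passes (MC)
    mean_p = probs.mean(axis=0)           # P(y | x, D)
    u_total = entropy(mean_p)             # H[ E_q P(y | x, theta) ]
    u_data = entropy(probs).mean(axis=0)  # E_q H[ P(y | x, theta) ]
    u_know = u_total - u_data             # knowledge uncertainty
    return u_data, u_know

# OOD candidates are then obtained by ranking samples by u_know (or u_data).
\end{verbatim}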
\subsubsection{NatPostNet} \label{methods:natpostnet} Another view on the uncertainty estimation is given by Natural Posterior Network \cite{charpentier2021natural}, which predicts the parameters of Dirichlet distribution instead of directly modeling the posterior categorical distribution. Using the parameters of Dirichlet distribution $\mathtt{p}(\mu|x, \theta)$, the predictive categorical distribution can be defined as \[ \mathtt{P}(y|x, \mathcal{D}) = \mathbb{E}_{\mathtt{p}(\mu|x, \theta)}\mathtt{P}(y|\mu), \] where the distribution $\mathtt{p}(\mu|x, \theta)$ is learned using the same model $f_{\theta}(x)$ as described in \cref{methods:single} — the only difference is that such GNN predicts the Dirichlet parameters instead of categorical ones. In this framework, data uncertainty is defined as the entropy of predictive distribution: \[ u_{\text{data}} = \mathbb{H}\big[\mathtt{P}(y|x, \mathcal{D})\big]. \] As for knowledge uncertainty, it can be expressed through the density estimate provided by a single normalising flow $g_{\psi}(z)$ \cite{kobyzev2020normalizing} over graph representations $z$: \[ u_{\text{know}} \sim -g_{\psi}(z). \] This method is denoted by \texttt{NatPostNet}. \begin{figure*} \caption{OOD confusion (left) and pair-wise distance (right) matrices \\ produced by \texttt{NatPostNet} \label{fig:confusion-main} \label{figure:distances-main} \label{fig:distances-confusion-main} \end{figure*} \section{Experiments} \label{experiments} Because of the space limitation, we report the graph dataset statistics and their class names in \cref{appendix:datasets}. Moreover, we describe the evaluation procedure and OOD detection metrics for all datasets in \cref{appendix:experiments,appendix:ood-performance}, while here we discuss only the key insights that we have managed to obtain during our experiments. In particular, we formulate and answer the following questions: \textit{Are there methods that provide the optimal OOD detection performance on both chemical and social graph datasets?} Based on \cref{fig:ood-detection-main}, it is natural to conclude that chemical and social datasets vary in the difficulty of OOD detection for the same graph encoder architecture, since the performance of \texttt{DE} and both considered density-based methods on IMDB-MULTI is much better than on ENZYMES. It means that particular applications such as chemistry might require a special design of encoders that model the graph manifold more efficiently. \textit{How the OOD detection performance is influenced by the structure of the graph dataset and the associated latent space induced by the model?} As it can be seen from the last row in \cref{fig:ood-detection-all}, the OOD detection performance of the entropy-based methods on TRIANGLES dataset drops significantly as the OOD class moves from the interior of ID region to its boundary (i.e., from 5 to 1 or 10). We can say that these methods become overconfident for the boundary classes. However, looking on the distribution of graph embeddings (see \cref{fig:tsne-main}), one can notice that for the boundary class (1 triangle) the OOD embeddings seem to be better distinguishable from ID ones than for the classes in the middle of the range (5 triangles). It means that the usage of methods based solely on the entropy of predictive distribution is not enough, and one also may need to take into account the geometry of embeddings. 
\textit{What can the predictive categorical distribution over OOD samples say about the ID classes?} We recall that the classes that are available to the classifier are referred to as ID and those that it has not seen are treated as OOD. By retraining the classifiers on a different set of ID classes we can obtain different predictions for OOD classes. In \cref{fig:confusion-main} (left), one can notice that regardless of a particular OOD class, the model often predicts it to be the \texttt{WorldNews} class, which can be treated as a \textit{default} ID class for any OOD sample.

\textit{Can we confirm that the most distant classes in the latent space appear to be the simplest for OOD detection?} Comparing the OOD detection performance using knowledge uncertainty on the REDDIT-MULTI-12K dataset in the corresponding row of \cref{fig:ood-detection-all} and pair-wise distances between its classes in \cref{fig:confusion-main} (right), one can understand that the most distant classes such as \texttt{TrollXChromosomes} tend to be easier for OOD detection.

\textit{What type of uncertainty is more consistent with the OOD detection problem for graph classification?} In general, the estimates based on knowledge uncertainty appear to be more appropriate for OOD detection than those based on data uncertainty, as can be seen in \cref{fig:ood-detection-all}.

\section{Conclusion}

We investigated the OOD detection problem in the graph classification task by comparing several uncertainty estimation methods and discussing some key insights about the dependence of OOD detection performance on the structure of graph representations and the interpretation of the predictive categorical distribution in the context of OOD detection.

\appendix
\onecolumn

\section{Experimental Setup}
\label{appendix:experiments}

In this work, we consider a particular OOD detection setup in which one of the originally available $C$ classes in a graph dataset is removed and marked as OOD, while the graph classification model is trained on the remaining set of classes in a stratified validation mode. After that, we predict the uncertainty measures for both the ID samples and the previously separated OOD samples. Based on this, we evaluate the ability of the considered methods to rank observations and detect the OOD samples: we compute the AUROC metric on the union of predictions for ID validation samples and OOD samples, treating the OOD detection problem as a formal binary classification task. In other words, we solve one distinct ID classification problem for each of the $C$ possible OOD classes. For the sake of a consistent evaluation, we perform 5 random splits of the ID set into train and validation parts, and then average the obtained metrics.

Along with the OOD detection metrics, we also provide related materials which might be helpful for interpretation. \cref{appendix:tsne} contains the figures of graph embeddings on the TRIANGLES dataset that represent the structure of the latent space in the corresponding OOD detection problem. Moreover, \cref{appendix:confusion} presents the OOD confusion matrices for all the considered datasets, where rows correspond to particular OOD classes, while columns correspond to the predicted ID classes. In \cref{appendix:distances}, we place the pair-wise distance matrices between classes in standard ID classification, which reveal the correlation between the distance to other classes and the difficulty of OOD detection.
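The AUROC computation described above reduces to a few lines; the following sketch is only an illustration with hypothetical names, not our evaluation code.
\begin{verbatim}
# Hypothetical sketch: score OOD detection as a binary ranking problem.
import numpy as np
from sklearn.metrics import roc_auc_score

def ood_auroc(u_id, u_ood):
    # u_id, u_ood: 1-D arrays of uncertainties for ID validation samples
    # and for the held-out OOD class (higher value = "more OOD")
    labels = np.concatenate([np.zeros(len(u_id)), np.ones(len(u_ood))])
    scores = np.concatenate([u_id, u_ood])
    return roc_auc_score(labels, scores)

# The per-class numbers in the figures are obtained by repeating this for
# every choice of OOD class and averaging over the 5 random ID splits.
\end{verbatim}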
\section{Dataset Details}
\label{appendix:datasets}

\subsection{Basic Statistics}
\label{appendix:statistics}

In \cref{tab:datasets}, we report basic graph dataset statistics, such as the number of graphs and the average numbers of nodes and edges.

\begin{table}[!ht]
\centering
\caption{Dataset statistics. Note that only the ENZYMES dataset has natural node features, \\ so we assign the same artificial feature 1 to the nodes in other datasets.}
\label{tab:datasets}
\begin{tabular}{lccccc}
\toprule
Name & \# Graphs & \# Classes & \# Nodes & \# Edges & \# Features \\
\midrule
ENZYMES & 600 & 6 & 32.63 & 62.14 & 3 \\[1pt]
IMDB-MULTI & 1500 & 3 & 13.00 & 65.94 & 1 \\[1pt]
REDDIT-MULTI-5K & 4999 & 5 & 508.52 & 594.87 & 1 \\[1pt]
REDDIT-MULTI-12K & 11929 & 11 & 391.41 & 456.89 & 1 \\[1pt]
TRIANGLES & 45000 & 10 & 20.85 & 32.74 & 1 \\
\bottomrule
\end{tabular}
\end{table}

\subsection{Class Names}
\label{appendix:classes}

\cref{tab:class-names} lists the class names for each considered graph dataset.

\begin{table}[!ht]
\centering
\caption{List of class names for every considered graph dataset.}
\label{tab:class-names}
\begin{tabular}{ll}
\toprule
Name & Class Names \\
\midrule
ENZYMES & \texttt{Oxidoreductases Transferases Hydrolases} \\
 & \texttt{Lyases Isomerases Ligases} \\[8pt]
IMDB-MULTI & \texttt{Comedy Romance Sci-Fi} \\[8pt]
REDDIT-MULTI-5K & \texttt{WorldNews Videos AdviceAnimals} \\
 & \texttt{Aww MildlyInteresting} \\[8pt]
REDDIT-MULTI-12K & \texttt{AskReddit AdviceAnimals Atheism Aww IAmA} \\
 & \texttt{MildlyInteresting ShowerThoughts Videos} \\
 & \texttt{TodayILearned WorldNews TrollXChromosomes} \\[8pt]
TRIANGLES & \texttt{1 2 3 4 5 6 7 8 9 10} \\
\bottomrule
\end{tabular}
\end{table}

\section{Out-of-Distribution Detection Performance}
\label{appendix:ood-performance}

The OOD detection metrics presented here are partially discussed in \cref{experiments}; they enable us to compare the performance of various methods and conclude that there is no universal approach for OOD detection in graph classification.

\begin{figure}
\caption{OOD detection performance per class of all the mentioned methods using \\ data uncertainty DU (left column) and knowledge uncertainty KU (right column) on all the considered datasets.}
\label{fig:ood-detection-all}
\end{figure}

\section{t-SNE Graph Embeddings}
\label{appendix:tsne}

These figures show how the joint distribution of graph representations changes depending on the particular OOD class.

\begin{figure}
\caption{t-SNE graph embeddings produced by \texttt{single}.}
\label{fig:tsne-all}
\end{figure}

\section{OOD Confusion Matrices}
\label{appendix:confusion}

These OOD confusion matrices are used to interpret the predictive categorical distribution over OOD samples and find that sometimes the classification task has a \textit{default} ID class for most OOD samples, as discussed in \cref{experiments}.

\begin{figure}
\caption{OOD confusion matrices produced by \texttt{single}.}
\label{fig:confusion-all}
\end{figure}

\section{Distance Matrices Between Classes}
\label{appendix:distances}

Comparing these matrices and the OOD detection performance on the corresponding dataset, we can confirm that the most distant classes in the latent space appear to be the simplest for OOD detection.

\begin{figure}
\caption{Distance matrices produced by \texttt{single}.}
\label{fig:distances-all}
\end{figure}

\end{document}
\begin{document}
\title{Quantum cluster algebras associated to weighted projective lines}
\author{Fan Xu, Fang Yang*}
\address{Department of Mathematical Sciences\\ Tsinghua University\\ Beijing 100084, P.~R.~China}
\email{[email protected](F. Xu)}
\address{Department of Mathematical Sciences\\ Tsinghua University\\ Beijing 100084, P.~R.~China}
\email{[email protected](F. Yang)}
\subjclass[2010]{17B37, 17B20, 18F20, 20G42}
\keywords{Quantum cluster algebras; Hall algebras; cluster multiplication formulas; ${\mathbb Z}[q^{\pm\frac{1}{2}}]$-bases.}
\thanks{$*$~Corresponding author.}

\begin{abstract}
Let ${\mathbb X}_{\boldsymbol{p},\boldsymbol{\lambda}}$ be a weighted projective line. We define the quantum cluster algebra of ${\mathbb X}_{\boldsymbol{p},\boldsymbol{\lambda}}$ and realize its specialized version as a subquotient of the Hall algebra of ${\mathbb X}_{\boldsymbol{p},\boldsymbol{\lambda}}$ via the quantum cluster character map. Inspired by \cite{Chen2021}, we prove an analogous cluster multiplication formula between quantum cluster characters. As an application, we obtain the polynomial property of the cardinalities of Grassmannian varieties of exceptional coherent sheaves on ${\mathbb X}_{\boldsymbol{p},\boldsymbol{\lambda}}$. Finally, we construct several bar-invariant ${\mathbb Z}[\nu^{\pm}]$-bases for the quantum cluster algebra of the projective line ${\mathbb P}^1$ and show how it coincides with the quantum cluster algebra of the Kronecker quiver.
\end{abstract}
\maketitle
\tableofcontents

\section{Introduction}

A cluster algebra is a commutative algebra generated by a family of generators called cluster variables; cluster algebras were introduced by Fomin and Zelevinsky \cite{SergeyFomin2002, Fomin2003} in order to study total positivity in algebraic groups and the specialization of canonical bases of quantum groups at $q = 1$. In \cite{Buan2006a}, Buan et al. introduce the cluster category as an additive categorification of the cluster algebra. Cluster algebras and cluster categories are closely related by the Caldero-Chapoton map in \cite{Caldero2006a} and the Caldero-Keller multiplication theorem in \cite{Caldero2008,Caldero2006a}. Caldero and Keller \cite{Caldero2008} proved the following formula (called the \emph{cluster multiplication formula})
\begin{equation}\label{eqn1.1}
\chi({\mathbb P} \operatorname{Ext}^1(M,N))X_MX_N=\sum_E (\chi({\mathbb P} \operatorname{Ext}^1(M,N)_E)+\chi({\mathbb P} \operatorname{Ext}^1(N,M)_E))X_E
\end{equation}
for any objects $M, N \in {\mathcal C}_Q$ such that $\operatorname{Ext}^1_{{\mathcal C}_Q}(M, N)\neq 0$, where $Q$ is of finite type. Caldero and Keller \cite{Caldero2006a} also showed that
\begin{equation}\label{eqn1.2}
X_MX_N=X_E+X_{E'}
\end{equation}
for $M,N\in {\mathcal C}_Q$ indecomposable such that $\operatorname{Ext}^1_{{\mathcal C}_Q}(M,N)$ is one-dimensional. Various generalizations of the above formulas were made by Hubery \cite{Hubery2010}, by Xiao and Xu \cite{Xiao2010,Xu2010}, by Fu and Keller \cite{Fu2010} and by Palu \cite{Palu2008,Palu2012}. In cluster theory, the Caldero-Chapoton map and the cluster multiplication theorem play a very important role in proving structural results such as the existence of bases with good properties, the positivity conjecture, the denominator conjecture and so on (cf. \cite{Caldero2008,Ding2013}).
As a quantum analogue of cluster algebras, quantum cluster algebras were defined by Berenstein and Zelevinsky \cite{Berenstein2005} in order to study canonical bases for quantum groups of Kac-Moody type. Under the specialization $q = 1$, the quantum cluster algebras are exactly cluster algebras. As for the quantum cluster algebra of a valued acyclic quiver, Rupel \cite{Rupel2011} defined a quantum analogue of the Caldero-Chapoton map over a finite field. The quantum version of Equation (\ref{eqn1.2}) was proved by Rupel in \cite{Rupel2011} for indecomposable rigid objects for all finite type valued quivers, by Qin \cite{Qin2012} for indecomposable rigid objects for acyclic quivers. Chen-Ding-Zhang \cite{Chen2021} gave the cluster multiplication formulas between any two quantum cluster characters. These formulas were a quantum version of the cluster multiplication formula in Equations (\ref{eqn1.1}) and (\ref{eqn1.2}) for acyclic quantum cluster algebras. In \cite{Ding2020a}, Ding-Xu-Zhang realized an acyclic quantum cluster algebra as a subquotient of certain derived Hall algebra. This result was refined and generalized by Fu-Peng-Zhang \cite{Fu2020} via the integration map from the Hall algebra of an acyclic quiver to certain quantum torus. This provides a connection between Hall algebras and quantum cluster algebras. Then one may define a ``new" quantum cluster algebra as a proper subquotient of the Hall algebra. As shown by Kapranov \cite{Kapranov1997} and then Schiffmann \cite{Schiffmann2004}, the Hall algebra of a weighted projective line gives a categorification of the positive part of the associated quantum loop algebra. If we can define a kind of quantum cluster algebra as a subquotient of the Hall algebra of weighted projective lines, then it may be possible to study the canonical bases of quantum loop algebras by using the quantum cluster algebras of weighted projective lines. The aim of this paper is to define the quantum cluster algebra associated to a weighted projective line ${\mathbb X}_{\boldsymbol{p},\boldsymbol{\lambda}}$ by the Hall algebra of the category $\mathrm{Coh}({\mathbb X}_{\boldsymbol{p},\boldsymbol{\lambda}})$ of coherent sheaves on ${\mathbb X}_{\boldsymbol{p},\boldsymbol{\lambda}}$. To define a cluster algebra as a subalgebra of the quantum torus generated by some elements indexed by a set ${\mathcal J}$ of indecomposable rigid objects in certain cluster category ${\mathcal C}$ over an algebraically closed field, it is required that ${\mathcal J}$ admits a cluster structure (see \cite[Section 1]{Buan2010}). But for the definition of a quantum cluster algebra, the first difficulty is to find a cluster structure independent of finite fields. In this paper, we use the valued regular $m$-tree ${\mathbb T}_m(k)$ (defined in Definition \ref{def3.6}) to denote the cluster structure of the cluster category ${\mathcal C}(\mathrm{Coh}({\mathbb X}_{\tilde{\boldsymbol{p}},\tilde{\boldsymbol{\lambda}}})_k)$. Besides, we also need to find a suitable compatible pair $(\varnothingLambda, \tilde{B})$ (see \cite[Section 3]{Berenstein2005}). Our strategy is firstly to show there is a common valued regular tree ${\mathbb T}_m$ over finite fields ${\mathbb F}_{q^r}$ and the algebraic closure $\bar{{\mathbb F}}_q$ for some fixed prime $q$, whose proof will be given in Appendix \ref{ap}. Then to show that the valued regular trees over algebraic closures of distinct finite characteristics are the same by taking use of quiver with potentials. 
For the skew-symmetrizable matrix $\tilde{B}$, as in the case of acyclic quivers, we let $\tilde{B}$ be the skew-symmetric Euler form on $\tilde{\mathcal A}:=\mathrm{Coh}({\mathbb X}_{\tilde{\boldsymbol{p}},\tilde{\boldsymbol{\lambda}}})$, where each entry of $\tilde{\boldsymbol{p}}$ is odd. Due to \cite[Section 9]{Geigle1991}, the category $\mathrm{Coh}({\mathbb X}_{\boldsymbol{p},\boldsymbol{\lambda}})$ can be embedded into $\mathrm{Coh}({\mathbb X}_{\tilde{\boldsymbol{p}},\tilde{\boldsymbol{\lambda}}})$ if $\boldsymbol{p}\leq \tilde{\boldsymbol{p}}$ and $\boldsymbol{\lambda}=\tilde{\boldsymbol{\lambda}}$, which ensures that the principal part of $\tilde{B}$ is the skew-symmetric Euler form on $\mathrm{Coh}({\mathbb X}_{\boldsymbol{p},\boldsymbol{\lambda}})$. The definition of the quantum cluster algebra ${\mathcal A}(\Lambda,\tilde{B}(\boldsymbol{p},\boldsymbol{\lambda}))$ of ${\mathbb X}_{\boldsymbol{p},\boldsymbol{\lambda}}$ is given in Section \ref{sec3.4}. The first difference between the quantum cluster algebra of a weighted projective line and that of an acyclic quiver is that the exchange matrix of the first mutation from the initial cluster-tilting object generally may not be determined by the initial compatible pair of the quantum cluster algebra of ${\mathbb X}_{\boldsymbol{p},\boldsymbol{\lambda}}$. The essential reason is that the cluster category of a weighted projective line ${\mathbb X}_{\boldsymbol{p},\boldsymbol{\lambda}}$ may not be triangle equivalent to the cluster category of an acyclic quiver except in domestic type (see \cite[Remark 5.4]{Geigle1987}). Hence we do not know whether the quantum cluster algebras of weighted projective lines admit the Laurent phenomenon in general. The second difference between them is that the skew-symmetric form $\Lambda$ of the quantum cluster algebra of ${\mathbb X}_{\boldsymbol{p},\boldsymbol{\lambda}}$ does not change after mutations.
In Section \ref{sec3}, we construct an algebra homomorphism (called quantum cluster character map) $X_?$ from the $\varnothingLambda$-twisted Hall algebra $H_{\varnothingLambda}(\tilde{{\mathcal A}})$ to the specialized complete quantum torus $\hat{{\mathcal T}}_{\varnothingLambda,v}$, then in Section \ref{sec4.1} show a quantum analogue of the cluster multiplication formula (\ref{eqn1.1}) in $\hat{{\mathcal T}}_{\varnothingLambda,v}$ as Chen-Ding-Zhang did in \cite{Chen2021}: \begin{eqnarray*}gin{thmx}[Theorem \ref{thm3.2}] For ${\mathcal M}, {\mathcal N}\in \tilde{{\mathcal A}}$, in $\hat{{\mathcal T}}_{\varnothingLambda,v}$ we have: \begin{eqnarray*}gin{align*} &(q^{[{\mathcal M},{\mathcal N}]^1}-1)X_{{\mathcal M}}X_{{\mathcal N}}=q^{\frac{1}{2}\varnothingLambda(\boldsymbol m^*,\boldsymbol n^*)}\sum_{{\mathcal L}\neq [{\mathcal M}\oplus {\mathcal N}]} |\operatorname{Ext}^1_{\tilde{{\mathcal A}}}({\mathcal M},{\mathcal N})_{{\mathcal L}}| X_{{\mathcal L}}\\ &\ \ \ \ \ \ +\sum_{[{\mathcal G}],[{\mathcal F}]\neq [{\mathcal N}]} q^{\frac{1}{2}\varnothingLambda((\boldsymbol m-\boldsymbol g)^*,(\boldsymbol n+\boldsymbol g)^*)+\frac{1}{2} \langle ngle \boldsymbol m-g,\boldsymbol n \rangle }|_{{\mathcal F}}{\mathrm{Hom}}_{\tilde{{\mathcal A}}}({\mathcal N},\tau{\mathcal M})_{\tau{\mathcal G}}| X_{{\mathcal G}}X_{{\mathcal F}}, \end{align*} \end{thmx} As an application, it is proved in Section \ref{sec4.2} that for an indecomposable rigid object $T_i(t)$ for $t\in {\mathbb T}_m$, there is a ${\mathbb Z}$-polynomial $P(z)$ such that the cardinality of $\mathrm{Gr}_{\boldsymbol{e}}(T_i(t)^k)$ is $P(|k|^{\frac{1}{2}})$. As a result, the generators of the quantum cluster algebra ${\mathcal A}(\varnothingLambda,\tilde{B}(\boldsymbol{p},\boldsymbol{\lambda}))$ defined recursively by mutation formulas can be described as certain quantum cluster characters as stated in the following \begin{eqnarray*}gin{thmx}[Theorem \ref{thm4.6}] The quantum cluster algebra ${\mathcal A}(\varnothingLambda,\tilde{B}(\boldsymbol{p},\boldsymbol{\lambda}))$ as a subalgebra of $\hat{{\mathcal T}}_{\varnothingLambda}$ is generated by $X_{T_i(t)}$ for $t\in {\mathbb T}_n(\boldsymbol{p},\boldsymbol{\lambda})$ and $X_{T_l(t_0)}^{\pm}$ for $n< l\leq m$. \end{thmx} As another application of the cluster multiplication formula, in Section \ref{sec4.3} we show that the specialized quantum cluster algebra ${\mathcal A}_q(\varnothingLambda,B(\tilde{\boldsymbol{p}},\tilde{\boldsymbol{\lambda}}))$ is a subquotient of the $\varnothingLambda$-twisted Hall algebra $H_{\varnothingLambda}(\tilde{{\mathcal A}}_k)$. We prove the following \begin{eqnarray*}gin{thmx}[Theorem \ref{thm4.9}] There is an isomorphism of algebras : $$\phi_k: (\mathrm{CH}'_{\varnothingLambda}(\tilde{{\mathcal A}}_k)\otimes_{{\mathbb Z}[v^{\pm1}]} {\mathbb Q})/I_k \longrightarrow {\mathcal A}_q(\varnothingLambda,B(\tilde{\boldsymbol{p}},\tilde{\boldsymbol{\lambda}}))\otimes_{{\mathbb Z}[v^{\pm1}]} {\mathbb Q},$$ which maps $[T_i(t)^k]$ to $X_{T_i(t)^k}$ for $1\leq i\leq m$ and $t\in {\mathbb T}_m$. \end{thmx} In Section \ref{sec5}, we study the quantum cluster algebra ${\mathcal A}(\varnothingLambda,B)$ of the projective line ${\mathbb P}^1$ and show how it coincides with the quantum cluster algebra of the Kronecker quiver. 
We obtain \begin{eqnarray*}gin{thmx}[Theorem \ref{thm5.14}] Each one of the following sets gives rise to a bar-invariant ${\mathbb Z}[\nu^{\pm1}]$-basis for ${\mathcal A}(\varnothingLambda,B)$: \begin{eqnarray*}gin{center} $ {\mathbb B}_1^{tor}\cup \bar{{\mathbb B}}^{vet}$, $\ \ $ ${\mathbb B}_2^{tor}\cup \bar{{\mathbb B}}^{vet}$, $\ \ $ ${\mathbb B}_3^{tor}\cup \bar{{\mathbb B}}^{vet}$. \end{center} \end{thmx} These bases above are corresponding to the bar-invariant ${\mathbb Z}[\nu^{\pm1}]$-bases of the quantum cluster algebra ${\mathcal A}(2,2)$ of Kronecker quiver constructed by Ding-Xu \cite{Ding2012}. They showed that under the specialization $\nu = 1$, these ${\mathbb Z}[\nu^{\pm1}]$-bases are exactly the canonical basis, semicanonical basis and dual semicanonical basis of the corresponding cluster algebra. \subsection*{Conventions} Throughout this paper, denote by $k$ a finite field. $K$ is denoted to be an algebraically closed field of finite characteristic. Let $\nu$ be a formal variable. Denote by ${\mathbb D}={\mathrm{Hom}}_k(-,k)$ the $k$-duality. ${\mathcal A}_k$ (resp. $\tilde{{\mathcal A}}_k$) is the category of coherent sheaves on weighted projective line ${\mathbb X}_{\boldsymbol{p},\boldsymbol{\lambda}}$ (resp. ${\mathbb X}_{\tilde{\boldsymbol{p}},\tilde{\boldsymbol{\lambda}}}$) over $k$. We will omit ${\mathcal A}_k$ for ${\mathcal A}$ when it does not cause any confusions. In the Hall algebra of ${\mathcal A}$, denote by $[{\mathcal F}]$ the isoclass of ${\mathcal F}$. In the Grothendieck group $K_0({\mathcal A})$, we will denote by $\hat{{\mathcal F}}$ (some times by $[{\mathcal F}]$) the class of ${\mathcal F}\in {\mathcal A}$. Let ${\mathcal C}({\mathcal A})$ be the cluster category of ${\mathcal A}$. every cluster-tilting object is assumed to be basic. Let $\{\begin{eqnarray*}e_i|\ 1\leq i\leq m\}$ be the canonical basis for ${\mathbb Z}^m$. \begin{eqnarray*}gin{itemize} \item ${\mathcal T}_{\varnothingLambda}$: the quantum torus associated to $\varnothingLambda$, \item $\hat{{\mathcal T}}_{\varnothingLambda}$: the complete quantum torus associated to $\varnothingLambda$, \item $\hat{{\mathcal T}}_{\varnothingLambda,v}$: the complete quantum torus specialized at $\nu=v$. \item ${\mathcal A}(\varnothingLambda,\tilde{B}(\boldsymbol{p},\boldsymbol{\lambda}))$: the quantum cluster algebra of ${\mathbb X}_{\boldsymbol{p},\boldsymbol{\lambda}}$, \item ${\mathcal A}_q(\varnothingLambda,\tilde{B}(\boldsymbol{p},\boldsymbol{\lambda}))$: the quantum cluster algebra of ${\mathbb X}_{\boldsymbol{p},\boldsymbol{\lambda}}$ specialized at $\nu=q^{\frac{1}{2}}$. \end{itemize} \maketitle \section{Preliminary} \subsection{Weighted projective lines} Let $k$ be a finite field ${\mathbb F}_q$ with $|k|=q$. Set $\boldsymbol{p}=(p_1,\cdots,p_N)$ be a collection of $N\geq 3$ positive integers. Denote by $S(\boldsymbol{p})$ the polynomial ring $k[X_1,\cdots,X_N]$ and consider the ideal $I(\boldsymbol{p},\boldsymbol{ \langle mbda})$ generated by $X_i^{p_i}=X_2^{p_2}- \langle mbda_{i}X_1^{p_1}$ for $i\geq 3$, where $ \langle mbda_1, \langle mbda_2,\cdots, \langle mbda_N$ are distinct points of ${\mathbb P}^1$ normalized in such a way that $ \langle mbda_1=\infty$, $ \langle mbda_2=0$ and $ \langle mbda_3=1$. Let $S(\boldsymbol{p},\boldsymbol{ \langle mbda})$ be the quotient $S(\boldsymbol{p})/I(\boldsymbol{p},\boldsymbol{\boldsymbol{\lambda}})$. 
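For instance (a small illustrative case, not used later), take $N=3$, $\boldsymbol{p}=(2,2,2)$ and the normalized parameters $\boldsymbol{\lambda}=(\infty,0,1)$. Then $I(\boldsymbol{p},\boldsymbol{\lambda})$ is generated by the single relation indexed by $i=3$, namely $X_3^{2}=X_2^{2}-X_1^{2}$, so that $$S(\boldsymbol{p},\boldsymbol{\lambda})=k[X_1,X_2,X_3]\big/(X_3^{2}-X_2^{2}+X_1^{2}).$$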
Then $S(\boldsymbol{p},\boldsymbol{ \langle mbda})$ is naturally graded by an abelian group $L(\boldsymbol{p}):={\mathbb Z}\vec{x}_1\oplus {\mathbb Z}\vec{x}_2\cdots \oplus {\mathbb Z}\vec{x}_N/ (p_i\vec{x}_i-p_j\vec{x}_j,\forall i,j)$, and $X_i$ is associated with degree $\vec{x}_i$. Note that $S(\boldsymbol{p})$ is $L(\boldsymbol{p})$-graded and $I(\boldsymbol{p},\boldsymbol{ \langle mbda})$ is generated by homogeneous elements, hence $S(\boldsymbol{p},\boldsymbol{ \langle mbda})$ is also $L(\boldsymbol{p})$-graded. Denote $\vec{c}\in L(\boldsymbol{p})$ by $p_i\vec{x}_i$. The weighted projective line ${\mathbb X}_{\boldsymbol{p},\boldsymbol{\lambda}}$ is defined to be the spectrum $\mathrm{Spec}_{L(\boldsymbol{p})}S(\boldsymbol{p},\boldsymbol{\lambda})$. Let $\mathrm{Coh}(X_{\boldsymbol{p},\boldsymbol{\lambda}})$ be the category of coherent sheaves on the weighted projective line ${\mathbb X}_{\boldsymbol{p},\boldsymbol{\lambda}}$, which is an abelian and hereditary category admitting an automorphism $$\tau:\mathrm{Coh}(X_{\boldsymbol{p},\boldsymbol{\lambda}})\to \mathrm{Coh}(X_{\boldsymbol{p},\boldsymbol{\lambda}}),\ {\mathcal F}\mapsto {\mathcal F}(\vec{w}).$$ where $\vec{w}=(N-2)\vec{c}-\sum_{i=1}^N \vec{x}_i=-2\vec{c}+\sum_{i=1}^{N} (p_i-1)\vec{x}_i\in L(\boldsymbol{p})$. Let $\mathrm{Vec}(X_{\boldsymbol{p},\boldsymbol{\lambda}})$ be the subcategory of $\mathrm{Coh}(X_{\boldsymbol{p},\boldsymbol{\lambda}})$ of locally free sheaves, and $\mathrm{Tor}(X_{\boldsymbol{p},\boldsymbol{\lambda}})$ be the subcategory of torsion sheaves. Since every coherent sheaf can be decomposed into a direct sum of a torsion part and a locally free part, we have $$\mathrm{Coh}(X_{\boldsymbol{p},\boldsymbol{\lambda}})=\mathrm{Vec}(X_{\boldsymbol{p},\boldsymbol{\lambda}})\oplus \mathrm{Tor}(X_{\boldsymbol{p},\boldsymbol{\lambda}})$$ Set ${\mathbb L}ambda:=\{ \langle mbda_1,\cdots, \langle mbda_N\}$. these points are called exceptional points on ${\mathbb P}^1$. For any $x\in{\mathbb P}^1$, let $\mathrm{Tor}_x$ be the subcategory of torsion sheaves supported on $x$. \begin{eqnarray*}gin{lemma}[\cite{Schiffmann2012a}] \langle bel{lem2.1} The category $\mathrm{Tor}(X_{\boldsymbol{p},\boldsymbol{\lambda}})$ decomposes as a direct product of orthogonal blocks $$\mathrm{Tor}(X_{\boldsymbol{p},\boldsymbol{\lambda} })=\prod_{x\in {\mathbb P}^1-{\mathbb L}ambda} \mathrm{Tor}_x\times \prod_{i=1}^N \mathrm{Tor}_{ \langle mbda_i}.$$ Moreover, $\mathrm{Tor}_x$ is equivalent to the category $\mathrm{Rep}^{nil}_{k_x}A_0^{(1)}$ of nilpotent representations of the Jordan quiver over the residue field $k_x$, and $\mathrm{Tor}_{ \langle mbda_i}$ is equivalent to the category $\mathrm{Rep}^{nil}_k A_{p_i-1}^{(1)}$ of nilpotent representations of the cyclic quiver $A_{p_i-1}^{(1)}$ over $k$. \end{lemma} Hence, we denote by $S_x\in \mathrm{Tor}_x$ the simple torsion sheaf corresponding to the simple module of $\mathrm{Rep}^{nil}_{k_x}A_0^{(1)}$ for $x\in {\mathbb P}^1-{\mathbb L}ambda$, and by $S_{ij}$ the simple torsion sheaf corresponding to the simple module on the $j$-th vertex of $\mathrm{Rep}^{nil}_k A_{p_i-1}^{(1)}$, $1\leq i\leq N$, $1\leq j\leq p_i$. Denote by ${\mathcal O}$ the structure sheaf on ${\mathbb X}_{\boldsymbol{p},\boldsymbol{\lambda}}$. 
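As a quick consistency check of the two expressions for $\vec{w}$ given earlier in this subsection, note that the relations $p_i\vec{x}_i=\vec{c}$ in $L(\boldsymbol{p})$ give $$-2\vec{c}+\sum_{i=1}^{N}(p_i-1)\vec{x}_i=-2\vec{c}+N\vec{c}-\sum_{i=1}^{N}\vec{x}_i=(N-2)\vec{c}-\sum_{i=1}^{N}\vec{x}_i.$$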
\begin{lemma}[\cite{Schiffmann2012a}] \label{lem1} The Grothendieck group $K_0(X_{\boldsymbol{p},\boldsymbol{\lambda}})$ of $\mathrm{Coh}(X_{\boldsymbol{p},\boldsymbol{\lambda}})$ is isomorphic to $$({\mathbb Z}[{\mathcal O}]\oplus {\mathbb Z}[S_x]\oplus \bigoplus_{1\leq i\leq N,1\leq j\leq p_i} {\mathbb Z}[S_{ij}])\big / J,$$ where $J$ is the subgroup generated by $[S_x]-\sum_{j=1}^{p_i} [S_{i,j}]$ for $1\leq i\leq N$. \end{lemma} As a corollary, we have that $$K_0(X_{\boldsymbol{p},\boldsymbol{\lambda}})\cong {\mathbb Z}[{\mathcal O}]\oplus {\mathbb Z}[S_x]\oplus \bigoplus_{1\leq i\leq N,2\leq j\leq {p_i}} {\mathbb Z}[S_{ij}].$$ \subsection{The Hall algebra of \texorpdfstring{$\mathrm{Coh}{\mathbb X}_{\boldsymbol{p},\boldsymbol{\lambda}}$}{Lg}} Fix $\boldsymbol{p}=(p_1,\cdots,p_N)$ and $\boldsymbol{\lambda}=(\lambda_1,\cdots,\lambda_N)$; this determines a weighted projective line ${\mathbb X}_{\boldsymbol{p},\boldsymbol{\lambda}}$. Let $k={\mathbb F}_q$. Denote by ${\mathcal A}$ the category $\mathrm{Coh}(X_{\boldsymbol{p},\boldsymbol{\lambda}})_k$ over $k$ and by $\mathrm{Iso}({\mathcal A})$ the set of isoclasses of objects in ${\mathcal A}$. Let $\langle\, ,\, \rangle$ be the Euler form of ${\mathcal A}$ on the Grothendieck group $K_0({\mathcal A})$, that is, $$\langle \hat{{\mathcal F}},\hat{{\mathcal G}} \rangle =\dim_k {\mathrm{Hom}}_{{\mathcal A}}({\mathcal F},{\mathcal G})-\dim_k \operatorname{Ext}^1_{{\mathcal A}}({\mathcal F},{\mathcal G}),$$ where ${\mathcal F},{\mathcal G}\in {\mathcal A}$ and $\hat{{\mathcal F}}\in K_0({\mathcal A})$ represents the class of ${\mathcal F}$. The symmetric Euler form is given by $(\hat{{\mathcal F}},\hat{{\mathcal G}}):=\langle \hat{{\mathcal F}},\hat{{\mathcal G}} \rangle +\langle \hat{{\mathcal G}},\hat{{\mathcal F}} \rangle$. To simplify notation, we will write $[{\mathcal F},{\mathcal G}]^0$ for $\dim_k {\mathrm{Hom}}_{{\mathcal A}}({\mathcal F},{\mathcal G})$ and $[{\mathcal F},{\mathcal G}]^1$ for $\dim_k\operatorname{Ext}^1_{{\mathcal A}}({\mathcal F},{\mathcal G})$. Denote $g_{{\mathcal F},{\mathcal G}}^{{\mathcal L}}=\#\{{\mathcal L}_1\subset {\mathcal L}\mid {\mathcal L}_1\cong {\mathcal G},\ {\mathcal L}/{\mathcal L}_1\cong {\mathcal F}\}$. The dual Hall algebra $H^{\vee}({\mathcal A})$ of ${\mathcal A}$ is defined to be the ${\mathbb Q}$-vector space $\bigoplus\limits_{[{\mathcal F}]\in \mathrm{Iso}({\mathcal A})} {\mathbb Q}[{\mathcal F}]$ equipped with the multiplication $$[{\mathcal F}][{\mathcal G}]:=\sum_{[{\mathcal L}]}q^{\langle {\mathcal F},{\mathcal G} \rangle} \frac{|\operatorname{Ext}_{{\mathcal A}}^1({\mathcal F},{\mathcal G})_{{\mathcal L}}|}{|{\mathrm{Hom}}_{{\mathcal A}}({\mathcal F},{\mathcal G})|} [{\mathcal L}].$$ In the sequel, we will write $f_{{\mathcal L}}^{{\mathcal F},{\mathcal G}}$ for $\frac{|\operatorname{Ext}_{{\mathcal A}}^1({\mathcal F},{\mathcal G})_{{\mathcal L}}|}{|{\mathrm{Hom}}_{{\mathcal A}}({\mathcal F},{\mathcal G})|}$. \begin{remark} \label{rem1} (i) Note that we define the Hall algebra $H^{\vee}({\mathcal A})$ by using another multiplication, which is dual to the usual Hall multiplication by counting subobjects.
The original Hall algebra $H({\mathcal A})$ is the ${\mathbb Q}$-vector space $\bigoplus\limits_{[[{\mathcal F}]]\in \mathrm{Iso}({\mathcal A})} {\mathbb Q}[[{\mathcal F}]]$ equipped with the multiplication $$[[{\mathcal F}]][[{\mathcal G}]]:=\sum_{[{\mathcal L}]}q^{ \langle ngle {\mathcal F},{\mathcal G} \rangle } g_{{\mathcal F},{\mathcal G}}^{{\mathcal L}} [[{\mathcal L}]].$$ (ii) The category ${\mathcal A}$ of coherent sheaves does not satisfy the finite subobject condition. For example, the structure sheaf ${\mathcal O}$ has subobjects ${\mathcal O}(r\vec{c})$ for $r<0$. Hence, if we want to give a comultiplication ${\mathbb D}elta:H^{\vee}({\mathcal A})\to H^{\vee}({\mathcal A})\hat{\otimes} H^{\vee}({\mathcal A})$, then $H^{\vee}({\mathcal A})\hat{\otimes} H^{\vee}({\mathcal A})$ is simply the space of all formal (may be infinitely many) linear combinations of $[{\mathcal F}]\otimes [{\mathcal G}]$. \end{remark} \begin{eqnarray*}gin{lemma}[\cite{Schiffmann2012a}] The following defines on $H^{\vee}({\mathcal A})$ the structure of a topological coassociative coproduct: $${\mathbb D}elta([{\mathcal F}])=\sum_{{\mathcal F}_1,{\mathcal F}_2}q^{ \langle ngle {\mathcal F}_1,{\mathcal F}_2 \rangle } g_{{\mathcal F}_1,{\mathcal F}_2}^{{\mathcal F}} [{\mathcal F}_1]\otimes [{\mathcal F}_2].$$ \end{lemma} Define the twisted multiplication on $H^{\vee}({\mathcal A})\hat{\otimes} H^{\vee}({\mathcal A})$ by $$([{\mathcal F}_1]\otimes [{\mathcal F}_2])([{\mathcal G}_1]\otimes [{\mathcal G}_2]):=q^{({\mathcal F}_2,{\mathcal G}_1)+ \langle ngle {\mathcal F}_1,{\mathcal G}_2 \rangle } [{\mathcal F}_1][{\mathcal G}_1]\otimes [{\mathcal F}_2][{\mathcal G}_2].$$ \begin{eqnarray*}gin{lemma}[\cite{Schiffmann2012a}] The comultiplication ${\mathbb D}elta: H^{\vee}({\mathcal A})\longrightarrow H^{\vee}({\mathcal A})\hat{\otimes} H^{\vee}({\mathcal A})$ is a homomorphism of algebras. \begin{eqnarray*}gin{proof} We have $${\mathbb D}elta([{\mathcal F}][{\mathcal G}])=\sum_{{\mathcal L}}q^{ \langle ngle {\mathcal F},{\mathcal G} \rangle }f^{{\mathcal F},{\mathcal G}}_{{\mathcal L}}\sum_{{\mathcal L}_1,{\mathcal L}_2} q^{ \langle ngle {\mathcal L}_1,{\mathcal L}_2 \rangle }g_{{\mathcal L}_1,{\mathcal L}_2}^{{\mathcal L}}[{\mathcal L}_1]\otimes [{\mathcal L}_2].$$ On the other hand, $${\mathbb D}elta([{\mathcal F}]){\mathbb D}elta([{\mathcal G}])=\sum_{{\mathcal L}_1,{\mathcal L}_2}\sum_{{\mathcal F}_i,{\mathcal G}_i}q^ag_{{\mathcal F}_1,{\mathcal F}_2}^{{\mathcal F}}g_{{\mathcal G}_1,{\mathcal G}_2}^{{\mathcal G}} f_{{\mathcal L}_1}^{{\mathcal F}_1,{\mathcal G}_1}f_{{\mathcal L}_2}^{{\mathcal F}_2,{\mathcal G}_2}[{\mathcal L}_1]\otimes [{\mathcal L}_2].$$ where $a= \langle ngle {\mathcal F}_1,{\mathcal F}_2 \rangle + \langle ngle {\mathcal G}_1,{\mathcal G}_2 \rangle +({\mathcal F}_2,{\mathcal G}_1)+ \langle ngle {\mathcal F}_1,{\mathcal G}_2 \rangle + \langle ngle {\mathcal F}_1,{\mathcal G}_1 \rangle + \langle ngle {\mathcal F}_2,{\mathcal G}_2 \rangle = \langle ngle {\mathcal F},{\mathcal G} \rangle + \langle ngle {\mathcal L}_1,{\mathcal L}_2 \rangle $ $- \langle ngle {\mathcal F}_1,{\mathcal G}_2 \rangle $. 
To show ${\mathbb D}elta([{\mathcal F}][{\mathcal G}])={\mathbb D}elta([{\mathcal F}])([{\mathcal G}])$, it suffices to show that $$\sum_{{\mathcal L}}f^{{\mathcal F},{\mathcal G}}_{{\mathcal L}}g_{{\mathcal L}_1,{\mathcal L}_2}^{{\mathcal L}}=\sum_{{\mathcal F}_i,{\mathcal G}_i} q^{- \langle ngle {\mathcal F}_1,{\mathcal G}_2 \rangle }g_{{\mathcal F}_1,{\mathcal F}_2}^{{\mathcal F}}g_{{\mathcal G}_1,{\mathcal G}_2}^{{\mathcal G}} f_{{\mathcal L}_1}^{{\mathcal F}_1,{\mathcal G}_1}f_{{\mathcal L}_2}^{{\mathcal F}_2,{\mathcal G}_2}.$$ for any ${\mathcal L}_1$ and ${\mathcal L}_2$, which is precisely the Green's formula in the \cite[Theorem 2]{Green1995}. \end{proof} \end{lemma} \section{Quantum cluster characters} \langle bel{sec3} \subsection{Compatible pairs} \langle bel{sec3.1} For the weighted projective line ${\mathbb X}_{\boldsymbol{p},\boldsymbol{\lambda}}$. Recall that the Grothendieck group $K_0({\mathcal A})$ is isomorphic to ${\mathbb Z}[{\mathcal O}]\oplus {\mathbb Z}[S_x]\oplus \bigoplus_{i,2\leq j\leq {p_i}} {\mathbb Z}[S_{ij}]\cong{\mathbb Z}^n$ by the corollary of Lemma \ref{lem1}, here $n=2+\sum_{i=1}^N (p_i-1)$. Note that for any element $\hat{{\mathcal F}}$ in $K_0({\mathcal A})$, we will write $\underline{\dim}{\mathcal F}$ for the dimension vector of $\hat{{\mathcal F}}$ under the basis $\mathbf{b}(p, \langle mbda)$: $$\{\hat{{\mathcal O}},\hat{S_x},\hat{S}_{i,j} |1\leq i\leq N, 2\leq j\leq p_i\}$$ Namely, $$\underline{\dim}{\mathcal F}=a_1\hat{{\mathcal O}}+a_2\hat{S_x}+a_3\hat{S}_{1,2}+\cdots+a_{p_1+1}\hat{S}_{1,p_1}+a_{p_1+2}\hat{S}_{2,2}+\cdots+a_n\hat{S}_{N,p_N}.$$ Let $E:=E(\boldsymbol{p},\boldsymbol{\lambda})$ be the $n\times n$ matrix associated to the Euler bilinear form $ \langle ngle , \rangle $ such that $$(\underline{\dim}{\mathcal F})^tE \underline{\dim}{\mathcal G}= \langle ngle \hat{{\mathcal F}},\hat{{\mathcal G}} \rangle .$$ Denote $E^t$ by the transpose of E. Set $B(\boldsymbol{p},\boldsymbol{\lambda}):=E^t-E$. Then by direct computation, $ \langle ngle \hat{{\mathcal O}},\hat{S_{ij}} \rangle =\delta_{p_i,j}$ and $ \langle ngle \hat{S_{ij}},\hat{{\mathcal O}} \rangle =-\delta_{1,j}$ for $1\leq j\leq p_i$, the matrix $B(\boldsymbol{p},\boldsymbol{\lambda})$ has the form: $$\begin{eqnarray*}gin{bmatrix} B_0 &C_1 &C_2 &\cdots &C_N\\ -C_1^t &B_1 &0 &\cdots &0\\ -C_2^t &0 &B_2 &\cdots &0\\ \vdots &0 &0 &\ddots &0\\ -C^t_N &0 &0 &\cdots &B_N\\ \end{bmatrix}$$ where $B_0=\begin{eqnarray*}gin{bmatrix} 0 &-2\\2 &0 \end{bmatrix}$, matrix $C_i=\begin{eqnarray*}gin{bmatrix} 0 &0 &\cdots &-1\\ 0 &0 &\cdots &0\end{bmatrix}$, and $B_i$ is a square matrix of $p_i-1$ as follows: $$\begin{eqnarray*}gin{bmatrix} 0 &1 &0 &\cdots &0 &0 \\ -1 &0 &1 &\cdots &0&0 \\ 0 &-1 &0 &\cdots &0 &0 \\ \vdots & & &\ddots & &\\ 0 &0 &0 &\cdots &0 &1 \\ 0 &0 &0 &\cdots &-1&0 \\ \end{bmatrix}$$ Since $\det{B}(\boldsymbol{p},\boldsymbol{\lambda})$ is the product of $\det{B_i}$, and $B_i$ is invertible iff $p_i-1$ is even for $i=1,\cdots, N$, then $B(\boldsymbol{p},\boldsymbol{\lambda})$ is invertible if and only if all $p_i$ is odd. If $B$ is not invertible, we can embed $B$ into some $m\times m$ invertible matrix $\tilde{B}(\boldsymbol{p},\boldsymbol{\lambda})=\tilde{E}^t-\tilde{E}$ such that $E$ is the upper submatrix of $\tilde{E}$. In the following, we give the construction of $\tilde{B}(\boldsymbol{p},\boldsymbol{\lambda})$ such that it is the matrix of skew-symmetric Euler form of another weighted projective line up to a choice of basis for its Grothendieck group. 
Without loss of generality, we assume that only $B_1$ are noninvertible. Hence $p_1$ is even. Set $\tilde{\boldsymbol{p}}=(p_1+1,p_2,\cdots,p_N)$ and $\tilde{\boldsymbol{\lambda}}=\boldsymbol{\lambda}$. By \cite[Theorem 9.5]{Geigle1991}, if $\tilde{\boldsymbol{p}}=(p_1+1,p_2,\cdots,p_N)$, then there exists an exact equivalence $$\phi_{*}:\tilde{{\mathcal A}}\big/{{\mathrm{add}\tilde{S}_{1,1}}}\simeq {{\mathcal A}},$$ such that $\phi_{*}(\tilde{{\mathcal O}})=\phi_{*}(\tilde{{\mathcal O}}(\vec{x}_1))={\mathcal O}$, $\phi_{*}(\tilde{S}_{i,j})=S_{i,j}$ if $i\neq 1$ and $\phi_{*}(\tilde{S}_{1,j})=S_{1,j-1}$ for $2\leq j\leq p_1+1$. By direct computations, we have that $E(\boldsymbol{p},\boldsymbol{\lambda})$ is the upper-left submatrix of $E(\tilde{\boldsymbol{p}},\tilde{\boldsymbol{\lambda}})^*$, where $E(\tilde{\boldsymbol{p}},\tilde{\boldsymbol{\lambda}})^*$ is the matrix of Euler form on $K_0(\tilde{{\mathcal A}})$ under the following basis $\mathbf{b}(\tilde{\boldsymbol{p}},\tilde{\boldsymbol{\lambda}})^*$: $$[\tilde{{\mathcal O}}],[\tilde{S}_x], [\tilde{S}_{1,3}],\cdots,[\tilde{S}_{1,{p_1+1}}], [\tilde{S}_{2,2}],[\tilde{S}_{2,3}],\cdots, [\tilde{S}_{N,p_N}], [\tilde{S}_{1,2}].$$ Set $\tilde{B}(\boldsymbol{p},\boldsymbol{\lambda})$ to be $E(\tilde{\boldsymbol{p}},\tilde{\boldsymbol{\lambda}})^{*t}-E(\tilde{\boldsymbol{p}},\tilde{\boldsymbol{\lambda}})^*$, which is obtained from $B(\tilde{\boldsymbol{p}},\tilde{\boldsymbol{\lambda}})$ by base change from $\mathbf{b}(\tilde{\boldsymbol{p}},\tilde{\boldsymbol{\lambda}})$ to $\mathbf{b}(\tilde{\boldsymbol{p}},\tilde{\boldsymbol{\lambda}})^*$. Now all $\tilde{p}_i$ is odd, it follows that $B(\tilde{\boldsymbol{p}},\tilde{\boldsymbol{\lambda}})$ and $\tilde{B}(\boldsymbol{p},\boldsymbol{\lambda})$ is invertible. \begin{eqnarray*}gin{example} Let $N=3$, $\boldsymbol{p}=(1,1,4)$ and $\boldsymbol{\lambda}=(0,\infty,1)$, then the Grothendieck group $K_0$ of the coherent category $\mathrm{Coh}({\mathbb X}_{\boldsymbol{p},\boldsymbol{\lambda}})$ has a basis $$\{\hat{{\mathcal O}}, \hat{S_x},\hat{S}_{1,2}, \hat{S}_{1,3},\hat{S}_{1,4}\}.$$ Therefore $B(\boldsymbol{p},\boldsymbol{\lambda})$ looks like $$\begin{eqnarray*}gin{bmatrix} 0 &-2 &0 &0 &-1\\ 2 &0 &0 &0 &0\\ 0 &0 &0 &1 &0 \\ 0 &0 &-1 &0 &1 \\ 1 &0 &0 &-1 &0 \\ \end{bmatrix}.$$ Set $\tilde{\boldsymbol{p}}=(1,1,5)$, then $B(\tilde{\boldsymbol{p}},\boldsymbol{\lambda})$ is as follows: $$\begin{eqnarray*}gin{bmatrix} 0 &-2 &0 &0 &0 &-1\\ 2 &0 &0 &0 &0 &0\\ 0 &0 &0 &1 &0 &0 \\ 0 &0 &-1 &0 &1 &0 \\ 0 &0 &0 &-1 &0 &1\\ 1 &0 &0 &0 &-1 &0 \end{bmatrix}.$$ Move the third column to the last and then the third row to the last, we get $\tilde{B}(\boldsymbol{p},\boldsymbol{\lambda})$: $$\begin{eqnarray*}gin{bmatrix} 0 &-2 &0 &0 &-1 &0\\ 2 &0 &0 &0 &0 &0\\ 0 &0 &0 &1 &0 &-1 \\ 0 &0 &-1 &0 &1 &0\\ 1 &0 &0 &-1 &0 &0\\ 0 &0 &1 &0 &0 &0\\ \end{bmatrix}.$$ It can be easily checked that $\tilde{B}(\boldsymbol{p},\boldsymbol{\lambda})$ is invertible and $B(\boldsymbol{p},\boldsymbol{\lambda})$ is the upper-left submatrix of $\tilde{B}(\boldsymbol{p},\boldsymbol{\lambda})$. \end{example} Fix $\tilde{B}(\boldsymbol{p},\boldsymbol{\lambda})$ constructed as above. Since $\tilde{B}(\boldsymbol{p},\boldsymbol{\lambda})$ is skew-symmetric and invertible, there exists $d\in {\mathbb N}$ and an $m\times m$ skew-symmetric matrix $\varnothingLambda$ of integers such that $$-\varnothingLambda \tilde{B}(\boldsymbol{p},\boldsymbol{\lambda})=d I_m.$$ \begin{eqnarray*}gin{remark} In general, $d$ may not be 1. 
At the level of categorification, this can be realized by working over the field ${\mathbb F}_{q^d}$ rather than ${\mathbb F}_q$. Let $\tilde{B}(\boldsymbol{p},\boldsymbol{\lambda})$ be constructed as above. Say $-\Lambda' \tilde{B}(\boldsymbol{p},\boldsymbol{\lambda})=I_m$ for some skew-symmetric matrix $\Lambda'$ such that $d\Lambda'$ is a matrix of integers. Then $|{\mathbb F}_{q^d}|^{\frac{1}{2}\Lambda'(x,y)}=q^{\frac{d}{2}\Lambda'(x,y)}$ will be a polynomial in $v^{\pm1}$, where $v=q^{\frac{1}{2}}$. \end{remark} In the sequel, we will write $\tilde{E}'$ for $\tilde{E}(\tilde{\boldsymbol{p}},\tilde{\boldsymbol{\lambda}})^{*t}$, and $\tilde{E}$ for $\tilde{E}(\tilde{\boldsymbol{p}},\tilde{\boldsymbol{\lambda}})^*$. \begin{proposition} \label{prop1} We identify $\Lambda$ with the associated bilinear form. Then the following identities hold for $\boldsymbol m,\boldsymbol n\in K_0(\tilde{{\mathcal A}})$: \begin{enumerate} \item[(i)] $\Lambda(\tilde{B}\boldsymbol m,\tilde{E}\boldsymbol n)=\langle \boldsymbol m,\boldsymbol n \rangle$. \item[(ii)] $\Lambda(\tilde{B}\boldsymbol m,\tilde{E}'\boldsymbol n)=\langle \boldsymbol n,\boldsymbol m \rangle$. \item[(iii)] $\Lambda(\tilde{B}\boldsymbol m,\tilde{B}\boldsymbol n)=\langle \boldsymbol n,\boldsymbol m \rangle -\langle \boldsymbol m,\boldsymbol n \rangle$. \item[(iv)] $\Lambda(\tilde{E}\boldsymbol m,\tilde{E}\boldsymbol n)=\Lambda(\tilde{E}'\boldsymbol m,\tilde{E}'\boldsymbol n)$. \end{enumerate} \end{proposition} \begin{proof} These identities are obtained by direct computation. \end{proof} Denote $\boldsymbol m^*:=\tilde{E}'\boldsymbol m$ and $^*\boldsymbol n:=\tilde{E}\boldsymbol n$. \begin{lemma} $\Lambda(-b^*-{}^*a,-d^*-{}^*c)=\Lambda((a+b)^*,(c+d)^*)+\langle b,c \rangle -\langle d,a \rangle .$ \begin{proof} We have \begin{align*} &\Lambda(-b^*-{}^*a,-d^*-{}^*c)\\ &=\Lambda(b^*+a^*,d^*+c^*)-\Lambda(b^*,c^*)-\Lambda(a^*,d^*)+\Lambda(b^*,{}^*c)+\Lambda({}^*a,d^*)\\ &=\Lambda(b^*+a^*,d^*+c^*)+\langle b,c \rangle -\langle d,a \rangle . \end{align*} The first equality follows from (iv) and the second from (i) and (ii) of Proposition \ref{prop1}. \end{proof} \end{lemma} \subsection{Quantum torus and integration maps} Let $B(\boldsymbol{p},\boldsymbol{\lambda})$ be the skew-symmetric matrix associated to ${\mathcal A}$. Then there exist $(\tilde{\boldsymbol{p}},\tilde{\boldsymbol{\lambda}})$ and a skew-symmetric matrix $\Lambda$ such that $-\Lambda B(\tilde{\boldsymbol{p}},\tilde{\boldsymbol{\lambda}})^*=I_m$. Notice that there exists some positive integer $d$ such that $d\Lambda$ is a matrix of integers; we take $d$ to be the minimal one. Denote by $\tilde{{\mathcal A}}_k$ the category $\mathrm{Coh}({\mathbb X}_{\tilde{\boldsymbol{p}},\tilde{\boldsymbol{\lambda}}})_k$ over $k={\mathbb F}_{q^d}$. Let $\nu$ be a formal variable.
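Before introducing the quantum torus, we illustrate numerically how such a pair $(\Lambda,d)$ can be found for the example of Section \ref{sec3.1}. The following is only a small computational sketch: it assumes Python with \texttt{numpy}, hard-codes the $6\times 6$ matrix $\tilde{B}(\boldsymbol{p},\boldsymbol{\lambda})$ displayed in that example, and simply takes $\Lambda=-\tilde{B}^{-1}$, which is skew-symmetric and satisfies $-\Lambda\tilde{B}=I_m$; it is not part of the construction itself.
\begin{verbatim}
import numpy as np
from fractions import Fraction

# Skew-symmetric matrix \tilde{B}(p,lambda) for p=(1,1,4), \tilde{p}=(1,1,5),
# copied from the example in Section 3.1 (basis b(\tilde{p},\tilde{lambda})^*).
B = np.array([
    [ 0, -2,  0,  0, -1,  0],
    [ 2,  0,  0,  0,  0,  0],
    [ 0,  0,  0,  1,  0, -1],
    [ 0,  0, -1,  0,  1,  0],
    [ 1,  0,  0, -1,  0,  0],
    [ 0,  0,  1,  0,  0,  0],
], dtype=float)

det = np.linalg.det(B)      # nonzero, so \tilde{B} is invertible
Lam = -np.linalg.inv(B)     # then -Lam * B = I_m, and Lam is skew-symmetric

def minimal_d(L, bound=100):
    """Smallest positive integer d with d*L an integer matrix (small search)."""
    for d in range(1, bound + 1):
        if np.allclose(d * L, np.round(d * L)):
            return d
    return None

print("det =", round(det))
print("Lambda =")
print(np.array([[Fraction(x).limit_denominator(1000) for x in row] for row in Lam]))
print("minimal d with d*Lambda integral:", minimal_d(Lam))
\end{verbatim}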
${\mathcal T}_m$ is defined to be the ${\mathbb Z}[\nu^{\pm1}]$-algebra with basis $\{X^{\alpha}\mid\alpha\in {\mathbb Z}^m\}$ (namely, ${\mathcal T}_m={\mathbb Z}[\nu^{\pm1}][x_1^{\pm1},x_2^{\pm1},\cdots,x_m^{\pm1}]$, where the $x_i$ are formal variables) and multiplication given by $$X^\alpha X^{\beta}=X^{\alpha+\beta}.$$ The quantum torus ${\mathcal T}_\Lambda$ is the ${\mathbb Z}[\nu,\nu^{-1}]$-algebra with the same underlying vector space as ${\mathcal T}_m$ but with the twisted multiplication $$X^\alpha * X^\beta=\nu^{d\Lambda(\alpha^*,\beta^*)}X^{\alpha+\beta}.$$ Set $v=q^{\frac{1}{2}}$. Denote by ${\mathcal T}_{m,v}$ (resp. ${\mathcal T}_{\Lambda,v}$) the specialization of ${\mathcal T}_m$ (resp. ${\mathcal T}_{\Lambda}$) at $\nu=v$. Let $\hat{{\mathcal T}}_{\Lambda}$ (resp. $\hat{{\mathcal T}}_{\Lambda,v}$) be the completion $${\mathbb Z}[\nu^{\pm1}][X^{-\boldsymbol f_1},X^{\boldsymbol f_2},X^{-\boldsymbol f_3},\cdots,X^{-\boldsymbol f_m}][[X^{\boldsymbol f_1},X^{-\boldsymbol f_2},X^{\boldsymbol f_3},\cdots,X^{\boldsymbol f_m}]]$$ of ${\mathcal T}_{\Lambda}$ (resp. ${\mathcal T}_{\Lambda,v}$), where $\boldsymbol f_i=\tilde{B}(\boldsymbol{p},\boldsymbol{\lambda})\boldsymbol e_i$. \begin{proposition} The integration map $\int: H^{\vee}(\tilde{{\mathcal A}}_{k})\longrightarrow {\mathcal T}_{m,v}$, $[{\mathcal F}]\mapsto X^{\underline{\dim}{\mathcal F}}$, is a homomorphism of algebras. \begin{proof} We have \begin{align*} \int([{\mathcal F}][{\mathcal G}]) &=\sum_{[{\mathcal L}]}q^{-d[{\mathcal F},{\mathcal G}]^1} |\operatorname{Ext}^1_{\tilde{{\mathcal A}}}({\mathcal F},{\mathcal G})_{{\mathcal L}}| X^{\underline{\dim}{\mathcal L}}\\ &=q^{-d[{\mathcal F},{\mathcal G}]^1} |\operatorname{Ext}^1_{\tilde{{\mathcal A}}}({\mathcal F},{\mathcal G})| X^{\underline{\dim} {\mathcal F}+\underline{\dim}{\mathcal G}}\\ &=X^{\underline{\dim}{\mathcal F}}X^{\underline{\dim}{\mathcal G}}\\ &=\int([{\mathcal F}])\int([{\mathcal G}]). \end{align*} \end{proof} \end{proposition} \subsection{\texorpdfstring{$\Lambda$}{Lg}-twisted versions} Given the skew-symmetric form $\Lambda$, we twist the multiplication on $H^{\vee}(\tilde{{\mathcal A}})$ as follows: $$[{\mathcal M}]*[{\mathcal N}]=v^{d\Lambda(\boldsymbol m^*,\boldsymbol n^*)}[{\mathcal M}][{\mathcal N}],$$ where $\boldsymbol m$ (resp. $\boldsymbol n$) is the dimension vector of $[{\mathcal M}]$ (resp. $[{\mathcal N}]$) in $K_0(\tilde{{\mathcal A}}_{k})$. The $\Lambda$-twisted Hall algebra is denoted by $H_{\Lambda}(\tilde{{\mathcal A}}_{k})$. We also twist the multiplication on $H^{\vee}(\tilde{{\mathcal A}}_{k})\hat{\otimes} H^{\vee}(\tilde{{\mathcal A}}_{k})$ so that the coproduct $\Delta$ is still an algebra homomorphism.
Let $(H^{\vee}(\tilde{{\mathcal A}}_{k})\hat{\otimes} H^{\vee}(\tilde{{\mathcal A}}_{k}),*)$ be the tensor algebra with twisted multiplication $*$ given as $$([{\mathcal M}_1]\otimes [{\mathcal M}_2])*([{\mathcal N}_1]\otimes[{\mathcal N}_2]):=v^{d\varnothingLambda((\boldsymbol m_1+\boldsymbol m_2)^*,(\boldsymbol n_1+\boldsymbol n_2)^*)}([{\mathcal M}_1]\otimes [{\mathcal M}_2])([{\mathcal N}_1]\otimes[{\mathcal N}_2]).$$ Hence it can be easily checked that ${\mathbb D}elta: H_{\varnothingLambda}(\tilde{{\mathcal A}}_{k})\to (H^\vee({\mathcal A}_{k})\hat{\otimes} H^{\vee}({\mathcal A}_{k}),*)$ is also an algebra homomorphism. Recall that we have defined an integration map $\int: H^{\vee}(\tilde{{\mathcal A}}_{k})\to {\mathcal T}_{m,v}$, which induces a map $$\int\otimes \int:H^{\vee}(\tilde{{\mathcal A}}_{k})\hat{\otimes} H^{\vee}(\tilde{{\mathcal A}}_{k})\to {\mathcal T}_{m,v}\hat{\otimes} {\mathcal T}_{m,v},\ [{\mathcal M}]\otimes [{\mathcal N}]\mapsto X^{\boldsymbol m}\otimes X^{\boldsymbol n}.$$ Note if the multiplications on $H^{\vee}(\tilde{{\mathcal A}}_{k})\hat{\otimes} H^{\vee}(\tilde{{\mathcal A}}_{k})$ and ${\mathcal T}_{m,v}\hat{\otimes} {\mathcal T}_{m,v}$ both are untwisted, that is $(x_1\otimes y_1)(x_2\otimes y_2)=x_1x_2\otimes y_1y_2$, then $\int\otimes \int$ is a homomorphism of algebras. Since we have twisted the multiplication on $H^{\vee}(\tilde{{\mathcal A}}_{k})\hat{\otimes} H^{\vee}(\tilde{{\mathcal A}}_{k})$, we also twisted the multiplication on ${\mathcal T}_{m,v}\hat{\otimes} {\mathcal T}_{m,v}$ by $$(X^{\alpha_1}\otimes X^{\begin{eqnarray*}ta_1})*(X^{\alpha_2}\otimes X^{\begin{eqnarray*}ta_2}):=q^{\frac{d}{2}\varnothingLambda((\alpha_1+\begin{eqnarray*}ta_1)^*,(\alpha_2+\begin{eqnarray*}ta_2)^*)+d(\begin{eqnarray*}ta_1,\alpha_2)+ d \langle ngle \alpha_1,\begin{eqnarray*}ta_2 \rangle }X^{\alpha_1+\alpha_2}\otimes X^{\begin{eqnarray*}ta_1+\begin{eqnarray*}ta_2}.$$ Then $\int\otimes \int: (H^{\vee}(\tilde{{\mathcal A}}_{k})\hat{\otimes} H^{\vee}(\tilde{{\mathcal A}}_{k}),*)\longrightarrow ({\mathcal T}_{m,v}\hat{\otimes} {\mathcal T}_{m,v},*)$ is a homomorphism of algebras. Finally, following \cite[Proposition 7.11]{Fu2020}, we define an algebra homomorphism $\mu: ({\mathcal T}_{m,v}\hat{\otimes} {\mathcal T}_{m,v},*)\longrightarrow \hat{{\mathcal T}}_{\varnothingLambda,v}$ by $$\mu(X^\alpha\otimes X^\begin{eqnarray*}ta)=v^{-d(\alpha,\begin{eqnarray*}ta)- d \langle ngle \alpha,\begin{eqnarray*}ta \rangle }X^{-^{*}\alpha-\begin{eqnarray*}ta^*}.$$ Therefore we get an algebra homomorphism $X_?: H_{\varnothingLambda}(\tilde{{\mathcal A}}_{k})\longrightarrow \hat{{\mathcal T}}_{\varnothingLambda,v}$ given by the composition $\mu\circ(\int\otimes\int)\circ{\mathbb D}elta$, called the character map. 
Namely, we have the following commutative diagram: $$\begin{eqnarray*}gin{tikzcd} H_{\varnothingLambda}(\tilde{{\mathcal A}}_{k})\arrow[r,"X_?"]\arrow[d,"Delta"] &\hat{{\mathcal T}}_{\varnothingLambda,v}\\ (H^{\vee}(\tilde{{\mathcal A}}_{k})\hat{\otimes} H^{\vee}(\tilde{{\mathcal A}}_{k}),*)\arrow[r,"\int\otimes \int"] &({\mathcal T}_{m,v}\otimes{\mathcal T}_{m,v},*)\arrow[u,"\mu"].\\ \end{tikzcd}$$ Therefore, for ${\mathcal M}\in \tilde{{\mathcal A}}_{k}$, the quantum cluster character of ${\mathcal M}$ is \begin{eqnarray*}gin{align*} X_{{\mathcal M}}&=\sum_{{\mathcal X},{\mathcal Y}} q^{-\frac{d}{2} \langle ngle {\mathcal Y},{\mathcal X} \rangle }g_{{\mathcal X},{\mathcal Y}}^{{\mathcal M}} X^{-y^*-^*x}\\ &=\sum_{\boldsymbol{e}\leq \underline{\dim}{\mathcal M}} q^{-\frac{d}{2} \langle ngle \boldsymbol m-\boldsymbol{e},\boldsymbol{e} \rangle }|\mathrm{Gr}_{\boldsymbol{e}}({\mathcal M})| X^{-(\boldsymbol m-\boldsymbol{e})^*-^*\boldsymbol{e}}. \end{align*} where $y=\underline{\dim}{\mathcal F}$, $x=\underline{\dim}{\mathcal G}$ and $\boldsymbol m=\underline{\dim}{\mathcal M}$, $\mathrm{Gr}_{\boldsymbol{e}}({\mathcal M})$ is the Grassmannian variety of subobjects of ${\mathcal M}$ with dimension vector $\boldsymbol{e}$ and $|\mathrm{Gr}_{\begin{eqnarray*}e}({\mathcal M})|$ is its cardinality. \begin{eqnarray*}gin{example} \langle bel{ex2.5} Let $B$ be the skew-symmetric matrix associated to ${\mathbb P}^1$. Then $$B=\begin{eqnarray*}gin{bmatrix}0 &-2\\ 2 &0\end{bmatrix}=\begin{eqnarray*}gin{bmatrix}1 &-1\\ 1 &0\end{bmatrix}-\begin{eqnarray*}gin{bmatrix}1 &1\\ -1 &0\end{bmatrix},$$ $\varnothingLambda=\begin{eqnarray*}gin{bmatrix}0 &-\frac{1}{2}\\ \frac{1}{2} &0\end{bmatrix}$ and $d=2$. Hence $\tilde{{\mathcal A}}$ is the category $\mathrm{Coh}({\mathbb P}^1)$ over ${\mathbb F}_{q^2}$. (1) $ X_{{\mathcal O}(l)}=X^{-(\begin{eqnarray*}gin{smallmatrix}1+l\\ -1\end{smallmatrix})}+\sum_{r\geq -l} v^{-2(l+r)}[l+r+1]_{q'} X^{-(\begin{eqnarray*}gin{smallmatrix}l+2r+1\\ 1\end{smallmatrix})}$, here $[n]_{q}$ means $\frac{q^{n}-1}{q-1}$, and $q'=q^2$. (2) Let $S_x$ be a simple torsion sheaf supported on $x\in {\mathbb P}^1$ with degree $d$, then $$X_{S_x}=X^{-(\begin{eqnarray*}gin{smallmatrix}d\\ 0\end{smallmatrix})}+X^{-(\begin{eqnarray*}gin{smallmatrix}-d\\ 0\end{smallmatrix})}.$$ \end{example} \subsection{Definition of the quantum cluster algebra of \texorpdfstring{${\mathbb X}_{\boldsymbol{p},\boldsymbol{\lambda}}$}{Lg}} \langle bel{sec3.4} In this subsection, we want to define the quantum cluster algebra of the weighted projective line ${\mathbb X}_{\boldsymbol{p},\boldsymbol{\lambda}}$. Recall the definition of quantum cluster algebras introduced by \cite{Berenstein2005}. Let $n\leq m$, ${{\mathcal T}}_{\varnothingLambda}={{\mathcal T}}({\mathbb Z}^m,\varnothingLambda)$be the quantum torus. Let $(\varnothingLambda,\tilde{B},X)$ be an initial seed (see \cite[Definition 2.1.5]{Qin2012}), and ${\mathbb T}_m$ be an $m$-regular tree with root $t_0$. 
By \cite[Corollary 2.1.10]{Qin2012}, given another seed $(\varnothingLambda',\tilde{B}',X')$, we say $X'$ is mutated from $X$ at $i$ $(1\leq i\leq n)$ if \begin{eqnarray*}gin{itemize} \item[$\cdot$] $X(e_j)=X'(e_j)\qquad$ if $j\neq i$,\\ \item[$\cdot$] \begin{eqnarray*}gin{equation} \langle bel{eq1} \begin{eqnarray*}gin{aligned} &X(e_i)X'(e_i)\\ &=v^{\varnothingLambda(e_i,\sum\limits_{1\leq l\leq m}[b_{li}]_+ e_l)}X({\sum\limits_{1\leq l\leq m}[b_{li}]_+ e_l})+v^{\varnothingLambda(e_i,\sum\limits_{1\leq l\leq m}[-b_{li}]_+ e_l)}X({\sum\limits_{1\leq l\leq m}[-b_{li}]_+ e_l}). \end{aligned} \end{equation} \end{itemize} Write $\begin{eqnarray*}gin{tikzcd}t\arrow[r,no head] &t'\end{tikzcd}$ if $t$ and $t'$ of ${\mathbb T}_m$ are linked by an edge labeled $i$. Then one can associate iteratively each seed mutated from $X$ with each vertex $t$ of ${\mathbb T}_m$. Namely, Set the initial seed to be $(\varnothingLambda(t_0),\tilde{B}(t_0),X(t_0))$. If $\begin{eqnarray*}gin{tikzcd}t\arrow[r,no head] &t'\end{tikzcd}$ and $1\leq i\leq n$, then label the seed mutated from $(\varnothingLambda(t),\tilde{B}(t),X(t))$ at $i$ by $(\varnothingLambda(t'),\tilde{B}(t'),X(t'))$; If $\begin{eqnarray*}gin{tikzcd}t\arrow[r,no head] &t'\end{tikzcd}$ and $n<i\leq m$, then set $(\varnothingLambda(t'),\tilde{B}(t'),X(t'))=(\varnothingLambda(t),\tilde{B}(t),X(t))$. The quantum cluster algebra of $(\varnothingLambda,\tilde{B},X)$ is defined to be a ${\mathbb Z}[\nu^{\pm 1}]$-subalgebra of the quantum torus ${\mathcal T}_{\varnothingLambda}$ generated by quantum cluster variables $X_i(t)$ for all the vertices $t\in {\mathbb T}_m$, $1\leq i\leq n$ and elements $X_j(t_0)^{\pm1}$ for all $n<j\leq m$ in \cite{Qin2012}. There Qin used the refined CC-map to categorify the quantum cluster algebra, and showed that the CC-map $X_{T_i(t)}$ of indecomposable coefficient-free rigid objects $T_i(t)$ in certain cluster category are bijectively corresponding to the quantum cluster variables $X_i(t)$ for $t\in {\mathbb T}_m$. In a similar way, we will give the definition of quantum cluster algebra of a weighted projective line by setting generators indexed by certain indecomposable rigid objects in a cluster category. For a field k, let $\tilde{{\mathcal A}}_k$ be the category of coherent sheaves on ${\mathbb X}_{\tilde{\boldsymbol{p}},\tilde{\boldsymbol{\lambda}}}$ over k. The cluster category ${\mathcal C}_k:={\mathcal C}(\tilde{{\mathcal A}}_k)$ is defined to be the orbit category $D^b(\tilde{{\mathcal A}}_k)/\tau\circ[-1]$, where $\tau$ is the Auslander-Reiten translation. Following \cite[Theorem 6.8]{Buan2006a}, any almost cluster-tilting object $\bar{T}^k$ has exactly two complements $T_i^k$ and $T_i^{*k}$. Such $(T_i^k,T_i^{*k})$ is called an exchange pair. Moreover, $T_i^k$ and $T_i^{*k}$ are linked by exchange triangles: $$T_i^k\stackrelackrel{u}\longrightarrow E^k \stackrelackrel{v}\longrightarrow T_i^{*k} \longrightarrow T_i^k[1],\qquad\text{and}\qquad T_i^{*k}\stackrelackrel{u'}\longrightarrow E^{'k} \stackrelackrel{v'}\longrightarrow T_i \longrightarrow T^{*k}_i[1],$$ where $u$ and $u'$ are minimal left $\mathrm{add}\bar{T}^k$-approximations and $v$ and $v'$ are minimal right $\mathrm{add}\bar{T}^k$-approximations. Write $E^k=\bigoplus_{j \neq i} T_j^{k,\oplus a_{ij}}$ and $E^{'k}=\bigoplus_{j\neq i} T_j^{k,\oplus b_{ij}}$. Let $A_k:=\mathrm{End}_{{\mathcal C}_k}(T^k)$ and $Q_{T^k}$ be the Gabriel quiver of $A_k$. 
Since we have an equivalence $\mathrm{add}\bar{T}^k \stackrel{\sim}\longrightarrow \mathrm{proj}A_k$ of additive categories, $a_{ij}(k)$ is the number of arrows from $j$ to $i$ in $Q_{T^k}$ and $b_{ij}(k)$ is the number of arrows from $i$ to $j$ in $Q_{T^k}$. Hence, we can also construct an $m$-regular tree ${\mathbb T}_m(k)$ as above, where $m$ is the rank of $K_0(\tilde{{\mathcal A}}_k)$. Furthermore, we need to record the numbers of arrows $(a_{ij}(k),b_{ij}(k))$ of the Gabriel quiver $Q_{T^k}$ of the endomorphism algebra of each cluster-tilting object $T^k$ in ${\mathcal C}_k$. Let $T^k:=\bigoplus_{0\leq \vec{l}\leq \vec{c}} {\mathcal O}(\vec{l})^k$ be an initial cluster-tilting object in ${\mathcal C}_k$, which is associated to the root $t_0$ of the tree ${\mathbb T}_m(k)$, i.e.\ $T(t_0)=T^k$. If $T^{'k}$ is mutated from $T^k$ at the $i$-th direct summand $T_i^k$, we set $T(t)=T^{'k}$, where $\begin{tikzcd}[column sep=huge]t_0\arrow[r, no head,"{(a_{ij}(k),b_{ij}(k))}"] &t\end{tikzcd}$. Here $a_{ij}(k)$ (resp. $b_{ij}(k)$) is the number of arrows from $j$ to $i$ (resp. from $i$ to $j$) in the quiver $Q_{T(t_0)}$. \begin{definition} \label{def3.6} The regular $m$-tree ${\mathbb T}_m(T^k)$ constructed as above is called the valued regular $m$-tree over $k$ with initial cluster-tilting object $T^k$ associated to ${\mathcal C}(\tilde{{\mathcal A}}_k)$. \end{definition} According to Theorem \ref{thm7.8}, each valued regular $m$-tree ${\mathbb T}_m(T^{{\mathbb F}_{q^r}})$ over ${\mathbb F}_{q^r}$ is the same as ${\mathbb T}_m(T^{\bar{{\mathbb F}}_q})$ for a fixed prime $q$ and $r\geq 1$. To show that ${\mathbb T}_m(T^{\bar{{\mathbb F}}_q})$ and ${\mathbb T}_m(T^{\bar{{\mathbb F}}_p})$ are identical for distinct primes $p$ and $q$, we need the following \begin{theorem}[{\cite[Theorem 5.2]{Buan2011}}] Let ${\mathcal C}_K$ be a 2-$CY$ triangulated category with a cluster-tilting object $T$ over an algebraically closed field $K$. If the endomorphism algebra $\mathrm{End}_{{\mathcal C}_K}(T)$ is isomorphic to the Jacobian algebra $J(Q,W)$ for some quiver with potential $(Q,W)$, and if no 2-cycles start in the vertex $i$ of $Q$, then we have an isomorphism $$\mathrm{End}_{{\mathcal C}_K}(\mu_i(T))\cong J(\mu_i(Q,W)) .$$ \end{theorem} Here $\mu_i(Q,W)$ is the mutation of the quiver with potential; see \cite[Section 1.2]{Buan2011}. Since $\tilde{{\mathcal A}}_K$ is derived equivalent to a canonical algebra, which is of global dimension $\leq 2$, by \cite[Theorem 6.12]{Keller2011} we have $\mathrm{End}_{{\mathcal C}_K}(T)=J(Q,W)$ for some quiver with potential $(Q,W)$, where $T:=\bigoplus_{0\leq \vec{l}\leq \vec{c}} {\mathcal O}(\vec{l})$. Moreover, the quiver with potential $(Q,W)$ is non-degenerate by \cite[Lemma 3.2]{Fu2021}. Combining the theorem above with the fact that $Q_{T^{\bar{{\mathbb F}}_q}}$ is the same as $Q_{T^{\bar{{\mathbb F}}_p}}$, we conclude that the quiver $Q_{T^{'K}}$ of each cluster-tilting object $T^{'K}$ mutated from $T^K$ is independent of the choice of algebraically closed field. Thus ${\mathbb T}_m(T^{\bar{{\mathbb F}}_p})={\mathbb T}_m(T^{\bar{{\mathbb F}}_q})$ for any primes $p$ and $q$. On the other hand, because the cluster-tilting graph of ${\mathcal C}(\tilde{{\mathcal A}}_k)$ is connected by Corollary \ref{cor7.9}, the valued regular $m$-tree ${\mathbb T}_m(T^k)$ is also independent of the choice of the initial cluster-tilting object $T^k$.
Namely, if $T^{'k}$ is another cluster-tilting object, then $T^{'k}=T(t)$ for some $t\in {\mathbb T}_m(T^k)$ and the valued regular $m$-tree ${\mathbb T}_m(T^{'k})$ with initial object $T^{'k}$ is obtained from ${\mathbb T}_m(T^k)$ by taking $t$ to be the new root. In summary, we obtain the following \begin{eqnarray*}gin{lemma} The valued regular m-tree ${\mathbb T}_m(T^k)$ associated to ${\mathcal C}(\tilde{{\mathcal A}}_k)$ with initial cluster-tilting object $T^k$ is independent of finite fields and the choice of initial cluster-tilting objects. \end{lemma} Set the initial cluster-tilting object to be $$S^k:=\tilde{{\mathcal O}}^k\oplus \tilde{{\mathcal O}}(\vec{\tilde{c}})^k\oplus \bigoplus_{2\leq j\leq \tilde{p}_i} \tilde{S}_{i,j}^k.$$ In the sequel, we will abbreviate ${\mathbb T}_m$ for ${\mathbb T}_m(S^k)={\mathbb T}_m(T^k)$. One of exchange triangles linking $T_i^k$ and $T_i^{*k}$ is induced by a short exact sequence in $\tilde{{\mathcal A}}_k$, say $\mathrm{Ext}^1_{\tilde{{\mathcal A}}_k}(T_i^{*k}, T_i^k)\cong k$ and the other is given by $$T_i^{*k}\longrightarrow E^{'k}\longrightarrow T_i^k\stackrelackrel{f_k}\longrightarrow T_i^{*k}[1],$$ where $E^{'k}\cong {\mathbb K}er f_k\oplus \tau^{-1}\mathrm{Coker} f_k$. Hence in $K_0(\tilde{{\mathcal A}}_k)$ we have $[T_i^{*k}]=[E^k]-[T_i^k]=[E^{'k}]-[T_i^k]+[\mathrm{Im} f_k]+[\tau^{-1}\mathrm{Im} f_k]$, which implies that $[E^k]-[E^{'k}]=[\mathrm{Im} f_k]+[\tau^{-1}\mathrm{Im} f_k]>0$. Similarly, if $\mathrm{Ext}^1_{\tilde{{\mathcal A}}_k}(T_i^k,T_i^{*k})\cong k$, then we have $[E^{'k}]-[E^k]=[\mathrm{Im} g_k]+[\tau^{-1}\mathrm{Im} g_k]>0$, where $g_k: T_i^k\to T_i^{*k}[1]$ is nonzero. Note that if dimension vectors of $[E^k]$, $[E^{'k}]$ and $[T_i^k]\in K_0(\tilde{{\mathcal A}}_k)$ are independent of the choice of fields, then for two distinct fields $k_1$ and $k_2$, we have $\mathrm{Ext}^1_{\tilde{{\mathcal A}}_{k_1}}(T_i^{*k_1}, T_i^{k_1})\cong k_1$ implying that $[E^{'k_2}]-[E^{k_2}]=[E^{'k_1}]-[E^{k_1}]<0$ and then $\mathrm{Ext}^1_{\tilde{{\mathcal A}}_{k_2}}(T_i^{k_2},T_i^{*k_2})$ must be $0$. As a consequence, $$(*)\qquad \dim_{k_1}\mathrm{Ext}^1_{\tilde{{\mathcal A}}_{k_1}}(T_i^{*k_1},T_i^{k_1})= 1 \text{ if and only if } \dim_{k_2}\mathrm{Ext}^1_{\tilde{{\mathcal A}}_{k_2}}(T_i^{*k_2},T_i^{k_2})= 1.$$ Hence, by induction from the root $t_0$ of the tree ${\mathbb T}_m(k)$, it can be showed that the dimension vector of $[T_i(t)]$ for $t\in {\mathbb T}_m$ is independent of the choice of fields. So we will write $d_i(t)$ for the dimension vector $[T_i(t)^k]$ for $t\in {\mathbb T}_m$, $1\leq i\leq m$. Let $B(\boldsymbol{p},\boldsymbol{\lambda})$ be the skew-symmetric matrix associated to ${\mathcal A}$, and $(\varnothingLambda, \tilde{B}(\boldsymbol{p},\boldsymbol{\lambda}))$ the compatible pair given as in Section \ref{sec3.1}, where $\tilde{B}:=\tilde{B}(\boldsymbol{p},\boldsymbol{\lambda})$ is similar to $B(\tilde{\boldsymbol{p}},\tilde{\boldsymbol{\lambda}})$. If $B(\boldsymbol{p},\boldsymbol{\lambda})$ is a proper submatrix of $\tilde{B}(\boldsymbol{p},\boldsymbol{\lambda})$. i.e. $m>n$, then it does not need to do mutations at every direct summand of the initial cluster-tilting object $S$. If $\tilde{\boldsymbol{p}}=(p_1+1,p_2,\cdots,p_N)$, then $\phi_*^{-1}(S')=S/\tilde{S}_{1,2}$ where $S'={\mathcal O}\oplus {\mathcal O}(\vec{c})\oplus \bigoplus_{2\leq j\leq p_i} S_{i,j}$ is a cluster-tilting object in ${\mathcal C}({\mathcal A})$. Therefore, it does not need to do mutations at $\tilde{S}_{i,2}$ for $p_i$ even. 
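To make this concrete in the running example of Section \ref{sec3.1} (a small illustration only): for $\boldsymbol{p}=(1,1,4)$ and $\tilde{\boldsymbol{p}}=(1,1,5)$ we have $$m=2+\sum_{i=1}^{3}(\tilde{p}_i-1)=6,\qquad n=2+\sum_{i=1}^{3}(p_i-1)=5,$$ so exactly one direct summand of $S$, namely $\tilde{S}_{3,2}$ (the one with $p_3=4$ even), is never mutated.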
Write $T(t_0)=\bigoplus_{i=1}^m T_i(t_0)=S$ , we order the direct summands of $S$ as the basis $\boldsymbol{b}(\tilde{\boldsymbol{p}},\tilde{\boldsymbol{\lambda}})^*$ for $K_0(\tilde{{\mathcal A}})$ defined in Section \ref{sec3.1}. Then, we may not do mutations at $T_j(t)^k$ of the cluster-tilting object $T(t)^k$ for $n<j\leq m$, $t\in {\mathbb T}_m$, it follows that the subgraph of ${\mathbb T}_m$ consisting of vertices where we actually do mutations with respect to $(\boldsymbol{p},\boldsymbol{\lambda})$ is a regular $n$-tree, denoted by ${\mathbb T}_n(\boldsymbol{p},\boldsymbol{\lambda})$. In the sequel, we let $T_i(t)$ and $T(t)$ be symbols associated to $t\in {\mathbb T}_m$, $1\leq i\leq m$. Note that $T_i(t)^k$ (resp. $T(t)$) is a indecomposable rigid (resp. cluster-tilting) object in ${\mathcal C}(\tilde{{\mathcal A}}_k)$ labeled by $t\in {\mathbb T}_m$ for some $i$. By $(*)$, $\mathrm{Ext}^1(T_i(t),T_i(t'))=0$ means that $\mathrm{Ext}^1_{\tilde{{\mathcal A}}_k}(T_i(t)^k,T_i(t')^k)\neq 0$. Now we are in the position to give the definition of quantum cluster algebra of ${\mathbb X}_{\boldsymbol{p},\boldsymbol{\lambda}}$. For convenience of notations, we assume that $-\varnothingLambda \tilde{B}(\boldsymbol{p},\boldsymbol{\lambda})=I_m$ with $\varnothingLambda\in \mathrm{Mat}(m,{\mathbb Z})$. If $-\varnothingLambda' \tilde{B}(\boldsymbol{p},\boldsymbol{\lambda})=I_m$ such that $d\varnothingLambda'$ is a matrix of integers for some $d$, then we only need to set the following $\nu$ to be $\nu^d$. Recall $$S^k=\tilde{{\mathcal O}}^k\oplus \tilde{{\mathcal O}}(\vec{\tilde{c}})^k\oplus \bigoplus_{2\leq j\leq \tilde{p}_i} \tilde{S}_{i,j}^k=:\bigoplus_{i=1}^m T_i(t_0)^k$$ is the initial cluster-titling object in ${\mathcal C}(\tilde{{\mathcal A}}_k)$. Every subject of the line bundle ${\mathcal O}(\vec{l})^k$ is of the form ${\mathcal O}(\vec{r})^k$ such that $\vec{r}=\sum_{i=1}^N r_i\vec{x_i}+r_0\vec{c}\leq \vec{l}=\sum_{i=1}^N l_i\vec{x_i}+l_0\vec{c}$. Hence, for each $\begin{eqnarray*}e\leq \dim[{\mathcal O}(\vec{l})]$, there exists a unique isoclasses $[{\mathcal O}(\vec{r_e})^k]$ with dimension vector $\begin{eqnarray*}e$ such that ${\mathcal O}(\vec{r_e})^k$ is a subject of ${\mathcal O}(\vec{l})^k$. It is easy to see that there exists a ${\mathbb Z}$-polynomial $P(z)$ (independent of $k$) such that $|\mathrm{Gr}_{\begin{eqnarray*}e}({\mathcal O}(\vec{l})^k)|=P_i(|k|)$. Denote $|\mathrm{Gr}_{\begin{eqnarray*}e}(T_i(t_0))|_{\nu^2}:=P_i(\nu^2)$ for $i=1,2$. Set $$ X_i(t_0)=\sum_{\boldsymbol{e}\leq d_i(t_0)} \nu^{ \langle ngle d_i(t_0)-\boldsymbol{e},\boldsymbol{e} \rangle }|\mathrm{Gr}_{\boldsymbol{e}}(T_i(t_0))|_{\nu^2} X^{-(d_i(t_0)-\boldsymbol{e})^*-^*\boldsymbol{e}}$$ for $i=1,2$. For $3\leq i\leq m$, $T_i(t_0)^k=\tilde{S}_{l,j}^k$ is a simple torsion sheaf for some $l,j$, then we set $$X_i(t_0)=X^{-\begin{eqnarray*}e_i^*}(X^{\tilde{B}\begin{eqnarray*}e_i}+1),$$ where $\{\begin{eqnarray*}e_i\ | 1\leq i \leq m\}$ is the canonical basis for ${\mathbb Z}^m$. Observe that $X_i(t_0)=x_l+1$ for $n<i\leq m$, since $\tilde{B}\begin{eqnarray*}e_i=-\begin{eqnarray*}e_l$ where $3\leq l\leq m$ such that $\begin{eqnarray*}e_l=\underline{\dim}[\tilde{S}_{j,3}]$. Here $T_i(t_0)=\tilde{S}_{j,2}$ for some $j\in \{1\leq i\leq N|\ p_i \text{ is even}\}$. Hence $X_i(t_0)$ is invertible in $\hat{{\mathcal T}}_{\varnothingLambda}$ for $i>n$. We define another partial order in ${\mathbb Z}^m$ associated to $\boldsymbol f\in {\mathbb Z}^m$. 
Notice that $\tilde{B}(\boldsymbol{e})=\boldsymbol{e}^*-^*\boldsymbol{e}$ and $\tilde{B}$ is invertible, then $\boldsymbol m=\tilde{B}\begin{eqnarray*}e$ is uniquely determined by $\begin{eqnarray*}e$. We say $$ -^*\boldsymbol f+\tilde{B}\begin{eqnarray*}e\leq -^*\boldsymbol f+\tilde{B}\begin{eqnarray*}e' \text{ if and only if } \begin{eqnarray*}e\leq \begin{eqnarray*}e'.$$ Hence the maximal degree of $X_i(t_0)$ is $-^*d_i(t_0)$. \begin{eqnarray*}gin{lemma} \langle bel{lem3.9} $\{X_i(t_0)|1\leq i\leq m\}$ is algebraically independent in $\hat{{\mathcal T}}_{\varnothingLambda}$. \begin{eqnarray*}gin{proof} $T(t_0)^k=\tilde{{\mathcal O}}^k\oplus \tilde{{\mathcal O}}(\vec{\tilde{c}})^k\oplus \bigoplus_{2\leq j\leq \tilde{p}_i} \tilde{S}_{i,j}^k$ is a cluster-tilting object, the set $\{d_i(t_0)| 1\leq i\leq m\}$ forms a basis for $K_0(\tilde{{\mathcal A}})$. Hence the maximal degrees $-^*d_i(t_0)$ of $X_i(t_0)$, $1\leq i\leq m$ forms a basis of ${\mathbb Z}^m$ by noting that $E(\tilde{\boldsymbol{p}},\tilde{\boldsymbol{\lambda}})$ is invertible. Since $$\{X^{\begin{eqnarray*}e_i}=x_i\ |\ 1\leq i\leq m\}$$ are algebraically independent in $\hat{{\mathcal T}}_{\varnothingLambda}$, it follows that $\{X^{-^*d_i(t_0)}|1\leq i\leq m\}$ are algebraically independent. As a consequence, $\{X_i(t_0)\ |\ 1\leq i\leq m\}$ is algebraically independent in $\hat{{\mathcal T}}_{\varnothingLambda}$. \end{proof} \end{lemma} \begin{eqnarray*}gin{definition} \langle bel{def3.10} The quantum cluster algebra ${\mathcal A}(\varnothingLambda,\tilde{B}(\boldsymbol{p},\boldsymbol{\lambda}))$ of the weighted projective line ${\mathbb X}_{\boldsymbol{p},\boldsymbol{\lambda}}$ is the ${\mathbb Z}[\nu^{\pm 1}]$-subalgebra of $\hat{{\mathcal T}}_{\varnothingLambda}$, generated by $X_j(t)$ for $t\in {\mathbb T}_n(\boldsymbol{p},\boldsymbol{\lambda})$, $1\leq j\leq n$ and $X_l(t_0)^{\pm 1}$ for $n<l\leq m$, subject to \begin{eqnarray*}gin{itemize} \item[(1)] if $j\neq i$ $$X_j(t)X_i(t)=\nu^{2\varnothingLambda(d_j(t)^*,d_i(t)^*)}X_i(t)X_j(t),$$ \item [(2)] if $\begin{eqnarray*}gin{tikzcd}[column sep=huge]t\arrow[r, no head,"{(a_{ij}(t),b_{ij}(t))}"] &t'\end{tikzcd}$ and $\mathrm{Ext}^1(T_i(t),T_i(t'))=0$, $$X_i(t')X_i(t)=\nu^{\varnothingLambda(d_i(t')^*,d_i(t)^*)} \nu^s \prod_{j\neq i}X_j(t)^{a_{ij}(t)} + \nu^{\varnothingLambda(d_i(t')^*,d_i(t)^*)-1} \nu^{s'} \prod_{j\neq i}X_j(t)^{b_{ij}(t)},$$ \item [(3)] if $\begin{eqnarray*}gin{tikzcd}[column sep=huge]t\arrow[r, no head,"{(a_{ij}(t),b_{ij}(t))}"] &t'\end{tikzcd}$ and $\mathrm{Ext}^1(T_i(t'),T_i(t))=0$, $$X_i(t')X_i(t)=\nu^{\varnothingLambda(d_i(t')^*,d_i(t)^*)+1} \nu^s \prod_{j\neq i}X_j(t)^{a_{ij}(t)} + \nu^{\varnothingLambda(d_i(t')^*,d_i(t)^*)} \nu^{s'} \prod_{j\neq i}X_j(t)^{b_{ij}(t)},$$ \end{itemize} where $s=-\sum\limits_{l=1}^m \varnothingLambda(a_{il}d_l(t)^*,\sum\limits_{r=l+1}^n a_{ir}d_r(t)^*)$, $s'=-\sum\limits_{l=1}^m \varnothingLambda(b_{il}d_l(t)^*,\sum\limits_{r=l+1}^n b_{ir}d_r(t)^*)$. \end{definition} \begin{eqnarray*}gin{remark} \langle bel{rem2}\ \ (1) Although the mutation relations in Definition \ref{def3.10} are similar to Relations (\ref{eq1}) of the usual quantum cluster algebra, the exchange matrix $\begin{eqnarray*}gin{tikzcd}[column sep=huge]t_0\arrow[r, no head,"{(a_{ij}(t_0),b_{ij}(t_0))}"] &t\end{tikzcd}$ has nothing to do with $\tilde{B}(\boldsymbol{p},\boldsymbol{\lambda})$. 
Indeed, $(a_{ij}(t_0),b_{ij}(t_0))$ can be read from the Gabriel quiver $Q_{T_0}$ of $\mathrm{End}_{{\mathcal C}}(T_0)$, which is (see \cite[Theorem 6.12]{Keller2011}) $$\begin{eqnarray*}gin{tikzcd} &\circ\arrow[ldd,"\alpha_{1,1}"] &\circ\arrow[l,"\alpha_{1,2}"] &\cdots\arrow[l,"\alpha_{1,p_1-2}"] &\circ\arrow[l,"\alpha_{1,p_1-1}"] & \\ & &\vdots & & &\\ \stackrelar\arrow[rrrrr,"\rho_3",bend left]\arrow[rrrrr,"\rho_N",bend right]\arrow[rrrrr,"\rho_4",bend left=20] \ &\circ\arrow[l,"\alpha_{i,1}"] &\circ\arrow[l,"\alpha_{i,2}"] &\cdots\arrow[l,"\alpha_{i,p_i-2}"] &\circ\arrow[l,"\alpha_{i,p_i-1}"] &\ast\arrow[luu,"\alpha_{1,p_1}"]\arrow[l,"\alpha_{i,p_i}"]\arrow[ldd,"\alpha_{N,p_N}"'] \\ & &\vdots & & &\\ &\circ\arrow[luu,"\alpha_{N,1}"'] &\circ\arrow[l,"\alpha_{N,2}"] &\cdots\arrow[l,"\alpha_{N,p_N-2}"] &\circ\arrow[l,"\alpha_{1,p_N-1}"] & \end{tikzcd}$$ Note that $\mathrm{End}_{{\mathcal C}}(T_0)$ is not a hereditary algebra in general, thus the skew-symmetric matrix associated to $Q_{T_0}$ is different from the skew-symmetric matrix $\tilde{B}(\boldsymbol{p},\boldsymbol{\lambda})$ of Euler form. (2) If ${\mathbb X}_{\boldsymbol{p},\boldsymbol{\lambda}}$ is of parabolic type (see \cite[Section 5.4.1]{Geigle1987}) and each term of $\boldsymbol{p}$ is odd (i.e. $\boldsymbol{p}=(2r_1+1,2r_2+1)$), then the cluster category ${\mathcal C}({\mathcal A}_k)$ is triangle equivalent to the cluster category ${\mathcal C}(\mathrm{mod}kQ)$ of the acyclic quiver $Q$ of type $\tilde{A}_{p_1,p_2}$. Then $\tilde{B}(\boldsymbol{p},\boldsymbol{\lambda})$ is the same as the skew-symmetric matrix $B_Q$ associated to the quiver $Q$ up to a choice of basis for ${\mathbb Z}^m$. So that the quantum cluster algebra ${\mathcal A}(\varnothingLambda,B(\boldsymbol{p},\boldsymbol{\lambda}))$ of ${\mathbb X}_{\boldsymbol{p},\boldsymbol{\lambda}}$ is isomorphic to the quantum cluster algebra of the acyclic quiver $Q$. (3) We define the quantum cluster algebra ${\mathcal A}(\varnothingLambda,\tilde{B}(\boldsymbol{p},\boldsymbol{\lambda}))$ of ${\mathbb X}_{\boldsymbol{p},\boldsymbol{\lambda}}$ as a subalgebra of $\hat{{\mathcal T}}_{\varnothingLambda}$, it follows that each $X_i(t)$ may be a infinite sum of monomials in ${\mathbb Z}[\nu^{\pm}][x_1^{\pm},x_2^{\pm},\cdots, x_m^{\pm}]$. However, we can not deduce that any $X_i(t)$ expressed as a fraction of polynomial of $X_1(t_0), X_2(t_0),\cdots, X_m(t_0)$ is a Laurent polynomial (i.e. the denominator is a monomial). In other words, we do not know whether the quantum cluster algebra ${\mathcal A}(\varnothingLambda,\tilde{B}(\boldsymbol{p},\boldsymbol{\lambda}))$ has the Laurent phenomenon in general. \end{remark} \section{Quantum cluster algebras of \texorpdfstring{${\mathbb X}_{\boldsymbol{p},\boldsymbol{\lambda}}$}{Lg}} \langle bel{sec4} \subsection{Cluster multiplication formulas} \langle bel{sec4.1} Let $(\varnothingLambda,\tilde{B}(\boldsymbol{p},\boldsymbol{\lambda}))$ be a compatible pair. Without loss of generality, assume that $-\varnothingLambda\tilde{B}(\boldsymbol{p},\boldsymbol{\lambda})=I_m$. Let $k={\mathbb F}_q$ and $v=q^{\frac{1}{2}}$. Recall that $\tilde{{\mathcal A}}$ is the category $\mathrm{Coh}({\mathbb X}_{\tilde{\boldsymbol{p}},\tilde{\boldsymbol{\lambda}}})$ over $k$. To make notations simpler, we will omit the multiplication symbol $*$ of $H_{\varnothingLambda}(\tilde{{\mathcal A}})$ and $\hat{{\mathcal T}}_{\varnothingLambda,v}$ in the sequel. 
\begin{eqnarray*}gin{lemma} \langle bel{lem2} Let ${\mathcal M}$, ${\mathcal N}\in \tilde{{\mathcal A}}$, the following identity holds: $$|\mathrm{Gr}_{\underline{e}}({\mathcal M}\oplus {\mathcal N})|=\sum_{\substack{A,B,C,D,\\ [B]+[D]=\underline{e}}} q^{[B,C]^0}g_{AB}^{\mathcal M} g_{CD}^{{\mathcal N}}. $$ \begin{eqnarray*}gin{proof} The statement is deduced by applying \cite[Lemma 7]{Hubery2010} to the split exact sequence: $$0\longrightarrow {\mathcal N}\longrightarrow {\mathcal N}\oplus {\mathcal M} \stackrelackrel{\pi}\longrightarrow {\mathcal M} \longrightarrow 0.$$ \end{proof} \end{lemma} Similar to \cite[Theorem 7.4]{Chen2021}, we also have a cluster multiplication formula on the quantum torus $\hat{{\mathcal T}}_{\varnothingLambda,v}$ specialized at $\nu=v$. \begin{eqnarray*}gin{theorem} \langle bel{thm3.2} For ${\mathcal M}$, ${\mathcal N}\in \tilde{{\mathcal A}}$, we have the following equation in $\hat{{\mathcal T}}_{\varnothingLambda,v}$: \begin{eqnarray*}gin{center} \begin{eqnarray*}gin{align*} &(q^{[{\mathcal M},{\mathcal N}]^1}-1)X_{{\mathcal M}}X_{{\mathcal N}}=q^{\frac{1}{2}\varnothingLambda(\boldsymbol m^*,\boldsymbol n^*)}\sum_{{\mathcal L}\neq [{\mathcal M}\oplus {\mathcal N}]} |\operatorname{Ext}^1_{\tilde{{\mathcal A}}}({\mathcal M},{\mathcal N})_{{\mathcal L}}| X_{{\mathcal L}}\\ &\ \ \ \ \ \ +\sum_{[{\mathcal G}],[{\mathcal F}]\neq [{\mathcal N}]} q^{\frac{1}{2}\varnothingLambda((\boldsymbol m-\boldsymbol g)^*,(\boldsymbol n+\boldsymbol g)^*)+\frac{1}{2} \langle ngle \boldsymbol m-g,\boldsymbol n \rangle }|_{{\mathcal F}}{\mathrm{Hom}}_{\tilde{{\mathcal A}}}({\mathcal N},\tau{\mathcal M})_{\tau{\mathcal G}}| X_{{\mathcal G}}X_{{\mathcal F}}, \end{align*} \end{center} where $ |\operatorname{Ext}^1_{\tilde{{\mathcal A}}}({\mathcal M},{\mathcal N})_{{\mathcal L}}| $ means the number of extension classes whose middle term is isomorphic to ${\mathcal L}$, $|_{{\mathcal F}}{\mathrm{Hom}}_{\tilde{{\mathcal A}}}({\mathcal N},\tau{\mathcal M})_{\tau{\mathcal G}}|$ meas the homomorphism $f\in {\mathrm{Hom}}_{\tilde{{\mathcal A}}}({\mathcal N},\tau{\mathcal M})$ such that ${\mathbb K}er f$ isomorphic to ${\mathcal F}$ and ${\mathbb C}oker f$ isomorphic to $\tau{\mathcal G}$. \begin{eqnarray*}gin{proof} Since $X_?:H_{\varnothingLambda}(\tilde{{\mathcal A}})\to \hat{{\mathcal T}}_{\varnothingLambda,v}$ is an algebra homomorphism, we have $$q^{[{\mathcal M},{\mathcal N}]^1}X_{{\mathcal M}}X_{{\mathcal N}}=q^{\frac{1}{2}\varnothingLambda(\boldsymbol m^*,\boldsymbol n^*)}\sum_{[{\mathcal L}]\neq[{\mathcal M}\oplus{\mathcal N}]} |\operatorname{Ext}^1_{\tilde{{\mathcal A}}}({\mathcal M},{\mathcal N})_{{\mathcal L}}|X_{{\mathcal L}}+q^{\frac{1}{2}\varnothingLambda(\boldsymbol m^*,\boldsymbol n^*)}X_{{\mathcal M}\oplus {\mathcal N}}.$$ On the other hand, by Lemma \ref{lem2} we have $$|\mathrm{Gr}_{\underline{e}}({\mathcal M}\oplus {\mathcal N})|=\sum_{\substack{A,B,C,D,\\ [B]+[D]=\underline{e}}} q^{[B,C]^0}g_{AB}^{\mathcal M} g_{CD}^{{\mathcal N}}.$$ Hence, \begin{eqnarray*}gin{align*} &q^{[{\mathcal M},{\mathcal N}]^1}X_{{\mathcal M}}X_{{\mathcal N}}-q^{\frac{1}{2}\varnothingLambda(\boldsymbol m^*,\boldsymbol n^*)}\sum_{[{\mathcal L}]\neq[{\mathcal M}\oplus{\mathcal N}]} |\operatorname{Ext}^1_{\tilde{{\mathcal A}}}({\mathcal M},{\mathcal N})_{{\mathcal L}}|X_{{\mathcal L}}\\ &=q^{\frac{1}{2}\varnothingLambda(\boldsymbol m^*,\boldsymbol n^*)}\sum_{A,B,C,D}q^{- \langle ngle b+d,a+c \rangle }q^{[B,C]^0}g_{AB}^{{\mathcal M}}g_{CD}^{{\mathcal N}}X^{-(b+d)^*-^*(a+c)}. 
\end{align*} Set $\sigma:=\sum_{{\mathcal F},{\mathcal G}}q^{\frac{1}{2}\varnothingLambda((\boldsymbol m-\boldsymbol g)^*,(\boldsymbol n+\boldsymbol g)^*)+\frac{1}{2} \langle ngle \boldsymbol m-\boldsymbol g,\boldsymbol n \rangle }|_{{\mathcal F}}{\mathrm{Hom}}_{\tilde{{\mathcal A}}}({\mathcal N},\tau{\mathcal M})_{\tau{\mathcal G}}|X_{{\mathcal G}}X_{{\mathcal F}}.$ Note that $|_{{\mathcal F}}{\mathrm{Hom}}_{\tilde{{\mathcal A}}}({\mathcal N},\tau{\mathcal M})_{\tau{\mathcal G}}|=\sum_{S} a_S g_{{\mathcal S}{\mathcal F}}^{{\mathcal N}}g_{\tau{\mathcal G},S}^{\tau{\mathcal M}}$, then $$\sigma=\sum_{\substack{{\mathcal F},{\mathcal G},{\mathcal S}\{\mathbb K},L,X,Y}} q^t a_{{\mathcal S}}g_{{\mathcal S},{\mathcal F}}^{{\mathcal N}}g_{{\mathcal G},\tau^{-1}{\mathcal S}}^{{\mathcal M}}g_{K,L}^{{\mathcal G}}g_{X,Y}^{{\mathcal F}}X^{-(l+y)^*- ^*(k+l)}.$$ where $t=\frac{1}{2}(\varnothingLambda((\boldsymbol m-\boldsymbol g)^*,(\boldsymbol n+\boldsymbol g)^*)+ \langle ngle \boldsymbol m-\boldsymbol g,\boldsymbol n \rangle - \langle ngle y,x \rangle - \langle ngle l,k \rangle +\varnothingLambda(-l^*-^*k,-y^*-^*x)).$ Now let us focus on the exponent $t$. Firstly replace the skew-symmetric form $\varnothingLambda$ by $ \langle ngle , \rangle $ as much as possible. Note $(\underline{\dim} \tau^{-1}(S))^*=\tilde{E}'\underline{\dim} \tau^{-1}(S)=-\tilde{E}\underline{\dim} S=-^*s$, then $\tau^{-1}(s)^*=-^*s$. So we have \begin{eqnarray*}gin{align*} 2t&=\varnothingLambda(\boldsymbol m^*,\boldsymbol n^*)+\varnothingLambda(\tau^{-1}(s)^*,\boldsymbol g^*)-\varnothingLambda(\boldsymbol g^*,\boldsymbol n^*)+ \langle ngle \boldsymbol m,\boldsymbol n \rangle - \langle ngle \boldsymbol g,\boldsymbol n \rangle +\\ &\ \ \varnothingLambda((l+k)^*,(y+x)^*)+ \langle ngle l,x \rangle - \langle ngle y,k \rangle - \langle ngle y,x \rangle - \langle ngle l,k \rangle .\\ &=\varnothingLambda(\boldsymbol m^*,\boldsymbol n^*)+\varnothingLambda(\tau^{-1}(s)^*,\boldsymbol g^*)+ \langle ngle \boldsymbol m,\boldsymbol n \rangle +\varnothingLambda(^*s,\boldsymbol g^*)- \langle ngle \boldsymbol g,\boldsymbol f \rangle +\\ &\ \ \langle ngle l,x \rangle - \langle ngle y,k \rangle - \langle ngle y,x \rangle - \langle ngle l,k \rangle .\\ &=\varnothingLambda(\boldsymbol m^*,\boldsymbol n^*)+ \langle ngle \boldsymbol m,\boldsymbol n \rangle - \langle ngle \boldsymbol g,\boldsymbol f \rangle + \langle ngle l,x \rangle - \langle ngle y,k \rangle \\ &\ \ \langle ngle y,x \rangle - \langle ngle l,k \rangle . \end{align*} Secondly, replace $l$ by $\boldsymbol g-k$ and $x$ by $\boldsymbol f-y$, then \begin{eqnarray*}gin{align*} 2t&=\varnothingLambda(\boldsymbol m^*,\boldsymbol n^*)+ \langle ngle \boldsymbol m,\boldsymbol n \rangle - \langle ngle \boldsymbol g,\boldsymbol f \rangle + \langle ngle \boldsymbol g-k,\boldsymbol f-y \rangle - \langle ngle y,k \rangle \\ &\ \ - \langle ngle y,\boldsymbol f-y \rangle - \langle ngle \boldsymbol g-k,k \rangle .\\ &=\varnothingLambda(\boldsymbol m^*,\boldsymbol n^*)+2 \langle ngle \boldsymbol m-k,\boldsymbol n-y \rangle - \langle ngle \boldsymbol m+y-k,\boldsymbol n-y+k \rangle . \end{align*} The second equality is induced by $ \langle ngle y,\boldsymbol n-d \rangle = \langle ngle y,s \rangle =- \langle ngle \tau^{-1}s,y \rangle =- \langle ngle \boldsymbol m-\boldsymbol g,y \rangle $ and $ \langle ngle k,\boldsymbol m-\boldsymbol g \rangle = \langle ngle k,\tau^{-1}s \rangle =- \langle ngle s,k \rangle =- \langle ngle \boldsymbol n-\boldsymbol f,k \rangle $. 
So \begin{eqnarray*}gin{align*} \sigma=q^{\varnothingLambda(\boldsymbol m^*,\boldsymbol n^*)}\sum_{\substack{{\mathcal F},{\mathcal G},{\mathcal S}\{\mathbb K},L,X,Y}}& q^{ \langle ngle \boldsymbol m-k,\boldsymbol n-y \rangle -\frac{1}{2} \langle ngle \boldsymbol m+y-k,\boldsymbol n-y+k \rangle } \\ &a_{{\mathcal S}}g_{{\mathcal S}{\mathcal F}}^{{\mathcal N}}g_{{\mathcal G},\tau^{-1}{\mathcal S}}^{{\mathcal M}}g_{KL}^{{\mathcal G}}g_{XY}^{{\mathcal F}}X^{-(l+y)^*- ^*(k+l)}. \end{align*} By the associativity of Hall algebra $H_{\varnothingLambda}(\tilde{{\mathcal A}})$, we have $$\sum_{{\mathcal F}}g_{{\mathcal S}{\mathcal F}}^{{\mathcal N}}g_{XY}^{{\mathcal F}}=\sum_{D} g_{{\mathcal S} X}^Dg_{DY}^{\mathcal N} \ \ and\ \ \sum_{{\mathcal G}}g_{{\mathcal G},\tau^{-1}{\mathcal S}}^{{\mathcal M}}g_{KL}^{{\mathcal G}}=\sum_{A} g_{L,\tau^{-1}{\mathcal S}}^{A}g_{KA}^{{\mathcal M}}.$$ Then $\sum_{{\mathcal F},{\mathcal G},{\mathcal S}} a_{{\mathcal S}}g_{{\mathcal S}{\mathcal F}}^{{\mathcal N}}g_{{\mathcal G},\tau^{-1}{\mathcal S}}^{{\mathcal M}}g_{KL}^{{\mathcal G}}g_{XY}^{{\mathcal F}}=\sum_{D,A,{\mathcal S}} a_{{\mathcal S}}g_{{\mathcal S} X}^Dg_{DY}^{{\mathcal N}} g_{L,\tau^{-1}{\mathcal S}}^{A}g_{KA}^{{\mathcal M}}$, it follows that $$\sum_{{\mathcal S},L,X}a_{{\mathcal S}}g_{{\mathcal S} X}^D g_{L,\tau^{-1}{\mathcal S}}^{A}=\sum_{X,L}|_ X{\mathrm{Hom}}_{\tilde{{\mathcal A}}}(D,\tau A)_{\tau L}|= |{\mathrm{Hom}}_{\tilde{{\mathcal A}}}(D,\tau A)|=q^{[A,D]^1}.$$ and $$(l+y)^*+^*(k+x)=(l+y+\tau^{-1}s)^*+^*(k+x+s)=(a+y)^*+^*(d+x).$$ We can summarize all ${\mathcal S}$, $L$ and $X$ of $\sigma$ to get $$\sigma=q^{\varnothingLambda(\boldsymbol m^*,\boldsymbol n^*)}\sum_{A,D,Y,K}q^{ \langle ngle \boldsymbol m-k,\boldsymbol n-y \rangle -\frac{1}{2} \langle ngle \boldsymbol m+y-k,\boldsymbol n-y+k \rangle } q^{[a,d]^1}g_{DY}^{{\mathcal N}} g_{KA}^{{\mathcal M}}X^{-(a+y)^*-^*(d+x)}.$$ Replace $A,D,Y,K$ by $B,C,D,A$, we have $$\sigma=q^{\varnothingLambda(\boldsymbol m^*,\boldsymbol n^*)}\sum_{A,B,C,D}q^{ \langle ngle b,c \rangle -\frac{1}{2} \langle ngle b+d,a+c \rangle } q^{[b,c]^1}g_{CD}^{{\mathcal N}} g_{AB}^{{\mathcal M}}X^{-(b+d)^*-^*(a+c)}.$$ which is exactly $X_{{\mathcal M}\oplus {\mathcal N}}$. Hence, we have $$(q^{[{\mathcal M},{\mathcal N}]^1})X_{{\mathcal M}}X_{{\mathcal N}}=-q^{\frac{1}{2}\varnothingLambda(\boldsymbol m^*,\boldsymbol n^*)}\sum_{[{\mathcal L}]\neq[{\mathcal M}\oplus{\mathcal N}]} |\operatorname{Ext}^1_{{\mathcal A}}({\mathcal M},{\mathcal N})_{{\mathcal L}}|X_{{\mathcal L}}+\sigma.$$ If ${\mathcal F}\cong {\mathcal N}$, then $f\in_{{\mathcal F}}{\mathrm{Hom}}_{\tilde{{\mathcal A}}}({\mathcal N},\tau({\mathcal M}))_{\tau({\mathcal G})}$ must be 0. In this case, ${\mathcal G}\cong{\mathcal M}$, so $\sigma=\sigma_{[{\mathcal F}]\neq [{\mathcal N}]}+X_{{\mathcal M}}X_{{\mathcal N}}=:\sigma_2+X_{{\mathcal M}}X_{{\mathcal N}}$ and we complete the proof. 
\end{proof} \end{theorem} \begin{eqnarray*}gin{corollary} \langle bel{cor4.3} For an exchange pair $(T_i,T_i^*)$ in ${\mathcal C}(\tilde{{\mathcal A}_k})$ such that $Ext^1_{\tilde{{\mathcal A}}_k}(T_i^*,T_i)\neq 0$, then we have the following identities: \begin{eqnarray*}gin{equation} \langle bel{eq4.1} X_{T_i^*}X_{T_i}=q^{\frac{1}{2}\varnothingLambda(\boldsymbol m^*,\boldsymbol n^*)} X_{E} + q^{\frac{1}{2}(\varnothingLambda(\boldsymbol m^*,\boldsymbol n^*)-1)}X_{E'}, \end{equation} \begin{eqnarray*}gin{equation} \langle bel{eq4.4} X_{T_i}X_{T_i^*}=q^{\frac{1}{2}\varnothingLambda(\boldsymbol n^*,\boldsymbol m^*)} X_{E} + q^{\frac{1}{2}(\varnothingLambda(\boldsymbol n^*,\boldsymbol m^*)+1)}X_{E'}, \end{equation} where $E$ and $E'$ are the middle terms of the exchange triangles respectively, $\boldsymbol m$ (resp. $\boldsymbol n$) is the dimension vector of $T_i^*$ (resp. $T_i$). \begin{eqnarray*}gin{proof} By Theorem \ref{thm7.3}, $\dim_k \mathrm{Ext}^1_{{\mathcal C}(\tilde{{\mathcal A}}_k)}(T_i,T_i^*)=1$. Note that $$\mathrm{Ext}^1_{{\mathcal C}_k}(T_i,T_i^*)=\mathrm{Ext}^1_{\tilde{{\mathcal A}}_k}(T_i,T_i^*)\oplus \mathrm{Ext}^1_{\tilde{{\mathcal A}}_k}(T_i^*,T_i),$$ and $\mathrm{Ext}^1_{\tilde{{\mathcal A}}_k}(T_i^*,T_i)\neq 0$, it follows that $\dim_k \mathrm{Ext}^1_{\tilde{{\mathcal A}}_k}(T_i^*,T_i)=1$, $\dim_k \mathrm{Ext}^1_{\tilde{{\mathcal A}}_k}(T_i,T_i^*)=0$. Since ${\mathrm{Hom}}_k(T_i,\tau T_i^*)\cong {\mathbb D}\mathrm{Ext}^1_{\tilde{{\mathcal A}}}(T_i^*,T_i)\cong k$, then ${\mathcal F}:={\mathbb K}er f$ (resp. ${\mathcal G}:=\tau^{-1}{\mathbb C}oker f$) are the same for any nonzero homomorphism $f: T_i\to \tau T_i^*$. Denoted by ${\mathcal S}$ the image $\mathrm{Im} f$ of $f$. Following from Theorem \ref{thm3.2} we have that \begin{eqnarray*}gin{equation} \langle bel{eq4.2} X_{T_i^*}X_{T_i}=q^{\frac{1}{2}\varnothingLambda(\boldsymbol m^*,\boldsymbol n^*)} X_{E} + q^{\frac{1}{2}\varnothingLambda((\boldsymbol m-\boldsymbol g)^*,(\boldsymbol n+\boldsymbol g)^*)+\frac{1}{2} \langle ngle \boldsymbol m-g,\boldsymbol n \rangle }X_{{\mathcal G}}X_{{\mathcal F}}, \end{equation} where $g=\underline{\dim}{\mathcal G}$ and $f=\underline{\dim}{\mathcal F}$. Note that $E'\cong {\mathcal F}\oplus {\mathcal G}$ is rigid and $X_?: H_{\varnothingLambda}(\tilde{{\mathcal A}})\to {\mathcal T}_{\varnothingLambda,v}$ is an algebra homomorphism, $X_{{\mathcal G}}X_{{\mathcal F}}=q^{\frac{1}{2}\varnothingLambda(g^*,f^*)}X_{E'}$. 
Comparing Equation (\ref{eq4.2}) with Equation (\ref{eq4.1}), it suffices to show that $$\varnothingLambda((\boldsymbol m-\boldsymbol g)^*,(\boldsymbol n+\boldsymbol g)^*)+ \langle ngle \boldsymbol m-g,\boldsymbol n \rangle+\varnothingLambda(g^*,f^*)=\varnothingLambda(\boldsymbol m^*,\boldsymbol n^*)-1.$$ Using $\boldsymbol m=\tau^{-1}s+g$, $\boldsymbol n=f+s$ and $\tau^{-1}(s)^*=-^*s$, where $s=\underline{\dim}{\mathcal S}$, we have that \begin{eqnarray*}gin{align*} &\varnothingLambda((\boldsymbol m-\boldsymbol g)^*,(\boldsymbol n+\boldsymbol g)^*)+ \langle ngle \boldsymbol m-g,\boldsymbol n \rangle+\varnothingLambda(g^*,f^*) \\ &=\varnothingLambda(\boldsymbol m^*,\boldsymbol n^*)+\varnothingLambda(\tau^{-1}(s)^*,g^*)-\varnothingLambda(g^*,\boldsymbol n^*)+ \langle ngle \boldsymbol m,\boldsymbol n \rangle- \langle ngle g,\boldsymbol n \rangle+\varnothingLambda(g^*,f^*),\\ &=\varnothingLambda(\boldsymbol m^*,\boldsymbol n^*)+\varnothingLambda(\tau^{-1}(s)^*,g^*)+\varnothingLambda(^*s,g^*)+ \langle ngle \boldsymbol m,\boldsymbol n \rangle- \langle ngle g,f \rangle,\\ &=\varnothingLambda(\boldsymbol m^*,\boldsymbol n^*)+ \langle ngle \boldsymbol m,\boldsymbol n \rangle- \langle ngle g,f \rangle. \end{align*} Applying ${\mathrm{Hom}}_{\tilde{{\mathcal A}}_k}({\mathcal F},-)$ to the exact sequence ${\mathcal F}\rightarrowtail T_i\twoheadrightarrow {\mathcal S}$ and note $\mathrm{Ext}^1_{\tilde{{\mathcal A}}_k}({\mathcal F},T_i)=0$, we obtain ${\mathbb D}{\mathrm{Hom}}_{\tilde{{\mathcal A}}}({\mathcal S},\tau{\mathcal F})\cong$ $\mathrm{Ext}^1({\mathcal F},{\mathcal S})=0$. Then apply ${\mathrm{Hom}}_{\tilde{{\mathcal A}}_k}(-,{\mathcal F})$ to the exact sequence $\tau^{-1}S\rightarrowtail T_i^*\twoheadrightarrow {\mathcal G}$, we deduce that ${\mathrm{Hom}}_{\tilde{{\mathcal A}}_k}({\mathcal G},{\mathcal F})\cong {\mathrm{Hom}}_{\tilde{{\mathcal A}}_k}(T_i^*,{\mathcal F})$. Finally apply ${\mathrm{Hom}}_{\tilde{{\mathcal A}}_k}(T_i^*,-)$ to the exact sequence ${\mathcal F}\rightarrowtail T_i\twoheadrightarrow {\mathcal S}$ to get ${\mathrm{Hom}}_{\tilde{{\mathcal A}}_k}(T_i^*,{\mathcal F})\cong {\mathrm{Hom}}_{\tilde{{\mathcal A}}_k}(T_i^*,T_i)$ since ${\mathrm{Hom}}_{\tilde{{\mathcal A}}_k}(T_i^*,{\mathcal S})\rightarrowtail {\mathrm{Hom}}_{\tilde{{\mathcal A}}_k}(T_i^*,\tau T_i^*)\cong D\mathrm{Ext}_{\tilde{{\mathcal A}}_k}^1(T_i^*,T_i^*)=0$. Thus $ \langle ngle g,f\rangle=\dim_k {\mathrm{Hom}}_{\tilde{{\mathcal A}}_k}({\mathcal G},{\mathcal F})= \langle ngle \boldsymbol m,\boldsymbol n\rangle +1$, implying that $$\varnothingLambda(\boldsymbol m^*,\boldsymbol n^*)+ \langle ngle \boldsymbol m,\boldsymbol n \rangle- \langle ngle g,f \rangle=\varnothingLambda(\boldsymbol m^*,\boldsymbol n^*)-1,$$ which gives rise to the first equation. For the second equation, we have $X_{T_i}X_{T_i^*}=q^{\frac{1}{2}\varnothingLambda(\boldsymbol n^*,\boldsymbol m^*)}X_{T_i\oplus T_i^*}$ for $\mathrm{Ext}^1_{\tilde{{\mathcal A}}_k}(T_i,T_i^*)=0$. On the other hand, from $\mathrm{Ext}^1_{\tilde{{\mathcal A}}_k}(T_i^*,T_i)\cong k$ we have that $$qX_{T_i^*}X_{T_i}=q^{\frac{1}{2}\varnothingLambda(\boldsymbol m^*,\boldsymbol n^*)}(q-1)X_E+q^{\frac{1}{2}\varnothingLambda(\boldsymbol m^*,\boldsymbol n^*)} X_{T_i\oplus T_i^*}.$$ Combining with Equation $(\ref{eq4.1})$ and $\varnothingLambda$ is skew-symmetric, we will obtain Equation $(\ref{eq4.4})$. 
\end{proof} \end{corollary} \begin{eqnarray*}gin{example} \langle bel{ex3.3} Take ${\mathbb X}_{\boldsymbol{p},\boldsymbol{\lambda}}$ to be the projective line ${\mathbb P}^1$, then $\tau({\mathcal F})={\mathcal F}(-2)$. The matrix $E$ of the Euler form on $K_0({\mathcal A})$ under the basis $\{\hat{{\mathcal O}},\hat{S}_x\}$ is $E=\begin{eqnarray*}gin{bmatrix}1 &1\\ -1 &0\end{bmatrix}$ and $\varnothingLambda'=\begin{eqnarray*}gin{bmatrix}0 &-1/2\\ 1/2 &0\end{bmatrix}$. Then $-2\varnothingLambda' B=I_2$. $\varnothingLambda=2\varnothingLambda'$ and $\tilde{{\mathcal A}}={\mathcal A}$ is $\mathrm{Coh}({\mathbb P}^1)$ over ${\mathbb F}:={\mathbb F}_{q^2}$. (1) Take ${\mathcal M}={\mathcal O}(2)$, ${\mathcal N}={\mathcal O}$. Their dimension vectors are $m=\begin{eqnarray*}gin{bmatrix}1 \\2 \end{bmatrix}$ and $n=\begin{eqnarray*}gin{bmatrix}1 \\0 \end{bmatrix}$ respectively. Then \begin{eqnarray*}gin{center} $ m^*=\begin{eqnarray*}gin{bmatrix}-1 \\1 \end{bmatrix}$, $n^*=\begin{eqnarray*}gin{bmatrix}1 \\1 \end{bmatrix}$, and $\varnothingLambda(m^*,n^*)=2$. \end{center} The only non-trivial extension of ${\mathcal O}$ by ${\mathcal O}(2)$ is $$0\to {\mathcal O}\to {\mathcal O}(1)^{\oplus 2}\to {\mathcal O}(2)\to 0.$$ Note for any nonzero homomorphism $f\in {\mathrm{Hom}}_{{\mathcal A}}({\mathcal O},\tau({\mathcal O}(2)))\cong {\mathbb F}_{q^2}$, $f$ is isomorphic, thus Theorem \ref{thm3.2} applied to ${\mathcal M},{\mathcal N}$ is \begin{eqnarray*}gin{equation} \langle bel{eqn3.1} (q^2-1)X_{{\mathcal O}(2)}X_{{\mathcal O}}=q(q^2-1)X_{{\mathcal O}(1)^{\oplus 2}}+(q^2-1). \end{equation} On the other hand, the cluster character of the vector bundle ${\mathcal O}(l)$ is $$X_{{\mathcal O}(l)}=X^{-(\begin{eqnarray*}gin{smallmatrix}l+1\\ -1\end{smallmatrix})}+\sum_{r\geq l}q^{-(l+r)}[l+r+1]_{q^2}X^{-(\begin{eqnarray*}gin{smallmatrix}l+2r+1\\ 1\end{smallmatrix})}.$$ By direct computation, we have the following identities in $\hat{{\mathcal T}}_{\varnothingLambda,v}$. \begin{eqnarray*}gin{equation} \langle bel{eqn3.2} X_{{\mathcal O}(2)}X_{{\mathcal O}}=qX_{{\mathcal O}(1)}X_{{\mathcal O}(1)}+1. \end{equation} Note $[{\mathcal O}(1)]*[{\mathcal O}(1)]=[{\mathcal O}(1)^{\oplus 2}]$ in $H_{\varnothingLambda}({\mathcal A})_{{\mathbb F}}$, implying $X_{{\mathcal O}(1)}^2=X_{{\mathcal O}(1)^{\oplus 2}}$. So from identity (\ref{eqn3.2}), we have \begin{eqnarray*}gin{equation} X_{{\mathcal O}(2)}X_{{\mathcal O}}=qX_{{\mathcal O}(1)^{\oplus 2}}+1. \end{equation} which gives the identity (\ref{eqn3.1}). (2) Take ${\mathcal M}=S_x$, ${\mathcal N}={\mathcal O}$, where $\mathrm{deg}(x)=1$. Then \begin{eqnarray*}gin{center} $ m^*=\begin{eqnarray*}gin{bmatrix}-1 \\0 \end{bmatrix}$, and $n^*=\begin{eqnarray*}gin{bmatrix}1 \\1 \end{bmatrix}$, $\varnothingLambda(m^*,n^*)=1$. \end{center} Notice $|\operatorname{Ext}^1_{{\mathcal A}}(S_x,{\mathcal O})|=q^2-1$ and any nonzero homomorphism $g\in {\mathrm{Hom}}_{{\mathcal A}}({\mathcal O},\tau(S_x))\cong k$ is surjective with ${\mathbb K}er g={\mathcal O}(-1)$. Hence the quantum cluster multiplication formula given in Theorem \ref{thm3.2} applied to ${\mathcal M},{\mathcal N}$ is \begin{eqnarray*}gin{equation} (q^2-1)X_{S_x}X_{{\mathcal O}}=q^{\frac{1}{2}}(q^2-1)X_{{\mathcal O}(1)}+q^{-\frac{1}{2}}(q^2-1)X_{{\mathcal O}(-1)}. 
\end{equation} On the other hand, the cluster character of $S_x$ is $$X_{S_x}=X^{-(\begin{eqnarray*}gin{smallmatrix}1\\ 0\end{smallmatrix})}+X^{-(\begin{eqnarray*}gin{smallmatrix}-1\\ 0\end{smallmatrix})}.$$ By direct computations, we have \begin{eqnarray*}gin{equation} \langle bel{eqn3.4} X_{S_x}X_{{\mathcal O}(m)}=q^{\frac{1}{2}}X_{{\mathcal O}(m+1)}+q^{-\frac{1}{2}}X_{{\mathcal O}(m-1)}. \end{equation} which gives rise to the equation (\ref{eqn3.4}). \end{example} \subsection{Quantum F-polynomials} \langle bel{sec4.2} In this subsection, we still assume that $\varnothingLambda \tilde{B}(\boldsymbol{p},\boldsymbol{\lambda})=-I_m$ for some skew-symmetric matrix $\varnothingLambda$ of integers. Write $\tilde{B}:=\tilde{B}(\boldsymbol{p},\boldsymbol{\lambda})$. Let $k$ be a finite field. Recall that we have constructed a valued regular $m$-tree ${\mathcal T}_m$ in Section \ref{sec3.4}. Remind that $T(t)$ is a symbol associated to $t\in {\mathbb T}_m$ such that $T(t)^k$ is the cluster-tilting object in ${\mathcal C}(\tilde{{\mathcal A}}_k)$. In this subsection, we want to show that $\mathrm{Gr}_{e}(T_i(t)^k)$ is a polynomial of $|k|$ for $t\in {\mathbb T}_m$, $1\leq i\leq m$ and $\begin{eqnarray*}e\in {\mathbb Z}^m$. The quantum cluster character of ${\mathcal F}^k\in \tilde{{\mathcal A}}_k$ is $$X_{{\mathcal F}^k}=\sum_{\begin{eqnarray*}e\leq \boldsymbol f} q^{-\frac{1}{2} \langle ngle \boldsymbol f-\begin{eqnarray*}e,\begin{eqnarray*}e \rangle}|\mathrm{Gr}_{\begin{eqnarray*}e}({\mathcal F}^k)|_q X^{-\boldsymbol f^*+\begin{eqnarray*}e^*-^*\begin{eqnarray*}e},$$ where $\boldsymbol f=\underline{\dim}{\mathcal F}^k$. Recall that we have defined a partial order associated to $\boldsymbol f$ on $\{-^*\boldsymbol f+\tilde{B}\begin{eqnarray*}e|\ \begin{eqnarray*}e\leq \boldsymbol f\}$ in Section \ref{sec3.4}. One observation is that each $X_{T_j(t)^k}$ has a unique maximal degree for $1\leq j\leq m$ and $t\in {\mathbb T}_m$. \begin{eqnarray*}gin{theorem} For $T_i(t')$ with $t'\in {\mathbb T}_m$, $1\leq i\leq m$, there exists a ${\mathbb Z}$-polynomial $P(z)$ such that the cardinality $|\mathrm{Gr}_{\begin{eqnarray*}e}(T_i(t')^k)|=P(|k|^{\frac{1}{2}})$. \begin{eqnarray*}gin{proof} We prove the statement by induction on $t$ from the root $t_0\in {\mathbb T}_m$. Notice that we have already shown that $|\mathrm{Gr}_{\begin{eqnarray*}e}(T_i(t_0)^k)|$ is a polynomial of $|k|^{\frac{1}{2}}$ in Section \ref{sec3.4}. Assume that for $\begin{eqnarray*}gin{tikzcd}[column sep=huge]t\arrow[r, no head,"{(a_{ij}(t),b_{ij}(t))}"] &t'\end{tikzcd}$, the statement holds for $T_j(t)$ for any $1\leq j\leq m$. Namely, $|\mathrm{Gr}_{\begin{eqnarray*}e}(T_j(t)^k)|$ is a ${\mathbb Z}$-polynomial of $|k|^{\frac{1}{2}}$ for any $\begin{eqnarray*}e \leq d_j(t)$. Using Corollary \ref{cor4.3}, say one of equations is \begin{eqnarray*}gin{equation*} X_{T_i(t')^k}X_{T_i(t)^{k}}=q^{\frac{1}{2}\varnothingLambda(d_i(t')^*,d_i(t)^*)}q^r\prod_{j\neq i} X_{T_j(t)^k}^{a_{ij}} + q^{\frac{1}{2}(\varnothingLambda(d_i(t')^*,d_i(t)^*)-1)}q^{r'}\prod_{j\neq i} X_{T_j(t)^k}^{b_{ij}}, \end{equation*} comparing degrees from the unique maximal one on both side, we can calculate the cardinality $|\mathrm{Gr}_{\boldsymbol{e}}(T_i(t')^{k})|$ for each $\boldsymbol{e}\leq d_i(t')$. 
Note that $|\mathrm{Gr}_{\begin{eqnarray*}e'}(T_i(t)^k)|$ and $|\mathrm{Gr}_{\begin{eqnarray*}e'}(T_j(t)^{k})|$ for $j\neq i$ are ${\mathbb Z}$-polynomials of $|k|$ by induction hypothesis, it follows that $|\mathrm{Gr}_{\begin{eqnarray*}e}(T_i(t')^k)|$ is equal to ${v^{-s}}f(v)\in {\mathbb Z}[v^{\pm1}]$ where $v=|k|^{\frac{1}{2}}$. Because $|\mathrm{Gr}_{\begin{eqnarray*}e}(T_i(t')^k)|$ is an integer for any $|k|>2$, we have $|\mathrm{Gr}_{\begin{eqnarray*}e}(T_i(t')^k)|\in {\mathbb Z}[v]$. Otherwise $t^{-s}f(t)=f_1(t)+f_2(t^{-1})$ with $f_1\in {\mathbb Z}[t]$ and $f_2\in t^{-1}{\mathbb Z}[t^{-1}]$. when $t=q^r$ goes to $+\infty$, $f_1(t)\in {\mathbb Z}$ while $|f_2(t^{-1})|\leq 1$. This is contradict to $f(t)\in {\mathbb Z}$. The proof is completed. \end{proof} \end{theorem} Recall the quantum cluster character of $T_i(t)^k$ is $$ X_{T_i(t)^k}=\sum_{\begin{eqnarray*}e\leq d_i(t)} |k|^{-\frac{1}{2} \langle ngle d_i(t)-\begin{eqnarray*}e,\begin{eqnarray*}e \rangle }|\mathrm{Gr}_{\begin{eqnarray*}e}(T_i(t)^k)| X^{-(d_i(t)-\begin{eqnarray*}e)^*-^*\begin{eqnarray*}e}.$$ From the last theorem, we know that $|\mathrm{Gr}_{\begin{eqnarray*}e}(T_i(t)^k)|$ is a ${\mathbb Z}$-polynomial of $|k|^{\frac{1}{2}}$, it follows that there exists a unique element $X_{T_i(t)} \in \hat{{\mathcal T}}_{\varnothingLambda}$ such that $$X_{T_i(t)}|_{\nu=|k|^{\frac{1}{2}}}= X_{T_i(t)^k}.$$ By Corollary \ref{cor4.3}, for an exchange pair $(T_i(t)^k,T_i(t')^k)$ with $\begin{eqnarray*}gin{tikzcd}[column sep=huge]t\arrow[r, no head,"{(a_{ij}(t),b_{ij}(t))}"] &t'\end{tikzcd}$, if $\mathrm{Ext}^1_{\tilde{{\mathcal A}}_k}(T_i(t)^k,T_i(t')^k)=0$, then we have that \begin{eqnarray*}gin{equation} \langle bel{eq4.10} X_{T_i(t')^k}X_{T_i(t)^k}=q^{\frac{1}{2}\varnothingLambda(d_i(t')^*,d_i(t)^*)} q^s \prod_{j\neq i} X_{T_j(t)^k}^{a_{ij}} + q^{\frac{1}{2}(\varnothingLambda(d_i(t')^*,d_i(t)^*)-1)}q^{s'} \prod_{j\neq i} X_{T_j(t)^k}^{b_{ij}}. \end{equation} by observing that $E^k=\bigoplus_{j\neq i} T_j(t)^{k,\oplus a_{ij}}$ and $E^{'k}=\bigoplus_{j\neq i} T_j(t)^{k,\oplus b_{ij}}$ are rigid, where $s$ and $s'$ are the same as Definition \ref{def3.10}. If $\mathrm{Ext}^1_{\tilde{{\mathcal A}}_k}(T_i(t')^k,T_i(t)^k)=0$, then we have that \begin{eqnarray*}gin{equation} \langle bel{eq4.11} X_{T_i(t')^k}X_{T_i(t)^k}=q^{\frac{1}{2}\varnothingLambda(d_i(t')^*,d_i(t)^*)+1} q^s \prod_{j\neq i} X_{T_j(t)^k}^{a_{ij}} + q^{\frac{1}{2}(\varnothingLambda(d_i(t')^*,d_i(t)^*))}q^{s'} \prod_{j\neq i} X_{T_j(t)^k}^{b_{ij}}. \end{equation} Moreover, $T(t)^k=\bigoplus_{1\leq j\leq m} T_j(t)^k$ is rigid, it follows that \begin{eqnarray*}gin{equation} \langle bel{eq4.12} X_{T_j(t)^k}X_{T_i(t)^k}=q^{\varnothingLambda(d_j(t)^*,d_i(t)^*)} X_{T_i(t)^k}X_{T_j(t)^k}. \end{equation} Let ${\mathbb T}_n(\boldsymbol{p},\boldsymbol{\lambda})$ be the subgraph of ${\mathbb T}_m$ associated to $(\boldsymbol{p},\boldsymbol{\lambda})$ given in Definition \ref{def3.6}. Note that $X_{T_i(t_0)}=X_i(t_0)$, using Equations (\ref{eq4.10}) and $(\ref{eq4.11})$ we reach the following \begin{eqnarray*}gin{theorem} \langle bel{thm4.6} The quantum cluster algebra ${\mathcal A}(\varnothingLambda,\tilde{B}(\boldsymbol{p},\boldsymbol{\lambda}))$ is a ${\mathbb Z}[\nu^{\pm1}]$-subalgebra of $\hat{{\mathcal T}}_{\varnothingLambda}$ generated by $X_{T_i(t)}$ for $t\in {\mathbb T}_n(\boldsymbol{p},\boldsymbol{\lambda})$, $1\leq i\leq n$ and $X_{T_l(t_0)}^{\pm}$ for $n< l\leq m$. 
\end{theorem} There is a bar-involution $\overline{\bullet}$ on the complete quantum torus $\hat{{\mathcal T}}_{\varnothingLambda}$, given by $\nu^{\pm} \mapsto \nu^{\mp}$, and $X^{\alpha} \mapsto X^{\alpha}$, $\alpha\in {\mathbb Z}^m$. It is clear that $X_{T_i(t_0)}$ and $X_{T_l(t_0)}^{\pm}$ is bar-invariant (i.e. $\overline{X_{T_i(t_0)}}$=$X_{T_i(t_0)}$) for each $1\leq i\leq n$, $n<l\leq m$ . \begin{eqnarray*}gin{proposition}{prop4.7} For $1\leq i\leq n$ and $t\in {\mathbb T}_n(\boldsymbol{p},\boldsymbol{\lambda})$, $X_{T_i(t)}$ is bar-invariant. \begin{eqnarray*}gin{proof} For $\begin{eqnarray*}gin{tikzcd}[column sep=huge]t_0\arrow[r, no head,"{(a_{ij}(t_0),b_{ij}(t_0))}"] &t\end{tikzcd}$, if $\mathrm{Ext}^1(T_i(t_0),T_i(t))=0$, then by Theorem \ref{thm4.6} and Corollary \ref{cor4.3} we have \begin{eqnarray*}gin{equation} X_{T_i(t)}X_{T_i(t_0)}=\nu^{\varnothingLambda(d_i(t)^*,d_i(t_0)^*)} X_{E} + \nu^{(\varnothingLambda(d_i(t)^*,d_i(t_0)^*)-1)}X_{E'}. \end{equation} Note that $E$ and $E'$ are rigid and $X_{T_j(t_0)}$ is bar-invariant, $X_E$ and $X_{E'}$ are bar-invariant. Applying $\overline{\bullet}$ to the equation above, \begin{eqnarray*}gin{equation*} X_{T_i(t_0)}\overline{X_{T_i(t)}}=\nu^{\varnothingLambda(d_i(t_0)^*,d_i(t)^*)} X_{E} + \nu^{(\varnothingLambda(d_i(t_0)^*,d_i(t)^*)+1)}X_{E'}. \end{equation*} On the other hand, using Corollary \ref{cor4.3} again, we also have \begin{eqnarray*}gin{equation*} X_{T_i(t_0)}X_{T_i(t)}=\nu^{\varnothingLambda(d_i(t_0)^*,d_i(t)^*)} X_{E} + \nu^{(\varnothingLambda(d_i(t_0)^*,d_i(t)^*)+1)}X_{E'}, \end{equation*} Therefore, $X_{T_i(t_0)}\overline{X_{T_i(t)}}=X_{T_i(t_0)}X_{T_i(t)}$ in $\hat{{\mathcal T}}_{\varnothingLambda}$, it follows that $\overline{X_{T_i(t)}}=X_{T_i(t)}$. Repeat last procedure, we can prove that $X_{T_i(t)}$ is bar-invariant for any $t\in {\mathbb T}_n(\boldsymbol{p},\boldsymbol{\lambda})$. \end{proof} \end{proposition} \begin{eqnarray*}gin{corollary} \langle bel{co4.8} The map $$\begin{eqnarray*}gin{aligned} \overline{\bullet}: {\mathcal A}(\varnothingLambda,\tilde{B}(\boldsymbol{p},\boldsymbol{\lambda})) &\longrightarrow {\mathcal A}(\varnothingLambda,\tilde{B}(\boldsymbol{p},\boldsymbol{\lambda})),\\ \nu^{\pm} &\mapsto \nu^{\mp},\\ X_{T_i(t)}&\mapsto X_{T_i(t)}. \end{aligned}$$ give rise to a bar-involution. \end{corollary} \subsection{Specialized quantum cluster algebras} \langle bel{sec4.3} Assume $-\varnothingLambda B(\tilde{\boldsymbol{p}},\tilde{\boldsymbol{\lambda}})=dI_m$ where $\tilde{B}:=B(\tilde{\boldsymbol{p}},\tilde{\boldsymbol{\lambda}})$ is invertible. Namely, each $\tilde{p}_i$ is odd. Let $k={\mathbb F}_{q^d}$ and $v=q^{\frac{1}{2}}$. Recall that the specialized quantum cluster algebra ${\mathcal A}_q(\varnothingLambda,\tilde{B})$ of ${\mathbb X}_{\boldsymbol{p},\boldsymbol{\lambda}}$ at $\nu=v$ is the subalgebra of $\hat{{\mathcal T}}_{\varnothingLambda,v}$ generated by quantum cluster characters $X_{T_i(t)^k}$ of $T_i(t)^k$ for $t\in {\mathbb T}_n(\boldsymbol{p},\boldsymbol{\lambda})$, $1\leq i\leq m$ and $X_{T_j(t_0)^k}^{\pm}$ for $n<j\leq m$ Let $\mathrm{CH}'_{\varnothingLambda}(\tilde{{\mathcal A}}_k)$ be the subalgebra of $H_{\varnothingLambda}(\tilde{{\mathcal A}}_k)$ generated by $[{\mathcal O}(l\vec{c})^k]$, and $[S_{i,j}^k]$, for $l\in{\mathbb Z} $, $1\leq i\leq N$, and $1\leq j\leq p_i$. 
Consider $\mathrm{CH}'_{\varnothingLambda}(\tilde{{\mathcal A}}_k)\otimes_{{\mathbb Z}[v^{\pm1}]} {\mathbb Q}$, note that we have the following exact sequences $$0\longrightarrow {\mathcal O}((j-1)\vec{x}_i)^k\longrightarrow {\mathcal O}(j\vec{x}_i)^k\longrightarrow S_{i,j}^k\longrightarrow 0,$$ it follows that ${\mathcal O}(\vec{l})^k\in \mathrm{CH}'_{\varnothingLambda}(\tilde{{\mathcal A}}_k)\otimes_{{\mathbb Z}[v^{\pm1}]} {\mathbb Q}$ for any $\vec{l}\in {\mathbb L}(\tilde{\boldsymbol{p}},\tilde{\boldsymbol{\lambda}})$. An object ${\mathcal F}$ in a hereditary category ${\mathcal A}$ is called exceptional if ${\mathcal F}$ is rigid and $\mathrm{End}_{{\mathcal A}}({\mathcal F})$ is a division ring. In addition, it is well known that $\mathrm{End}_{\tilde{{\mathcal A}}_k}({\mathcal F})\cong k$ for an indecomposable rigid object ${\mathcal F}\in \tilde{{\mathcal A}}_k$ (see \cite[Proposition 6.4.2]{Chen2009}). Thus exceptional objects in $\tilde{{\mathcal A}}_k$ are precisely indecomposable rigid objects in $\tilde{{\mathcal A}}_k$. So the following theorem will be applied to finite fields \cite{CrawleyBoevey1992,Lenzing2011, Meltzer1995,Lenzing2009}. \begin{eqnarray*}gin{theorem}[{\cite[Theorem 1]{Kedzierski2013}}] \langle bel{thm4.7} Let ${\mathcal F}\in {\mathcal A}$ be an exceptional vector bundle of rank greater than one on a weighted projective line ${\mathbb X}_{\boldsymbol{p},\boldsymbol{\lambda}}$ over an algebraically closed field. Then there are exceptional objects ${\mathcal F}'$ and ${\mathcal F}''$ with the following properties: \begin{eqnarray*}gin{itemize} \item [(i)] ${\mathrm{Hom}}_{{\mathcal A}}({\mathcal F}',{\mathcal F}'') = {\mathrm{Hom}}_{{\mathcal A}}({\mathcal F}'',{\mathcal F}') = \mathrm{Ext}_{{\mathcal A}}^1({\mathcal F}',{\mathcal F}'')=0$, and there is a nonsplit exact sequence $$0\longrightarrow {\mathcal F}^{'\oplus a} \longrightarrow {\mathcal F} \longrightarrow {\mathcal F}^{''\oplus b} \longrightarrow 0, $$ where $(a,b)$ is the dimension vector of ${\mathcal F}\in {\mathcal C}({\mathcal F}',{\mathcal F}'')\simeq \mathrm{mod}k{\mathbb T}heta_r$ and $r=\dim \mathrm{Ext}^1_{{\mathcal A}}({\mathcal F}'',{\mathcal F}')$. \item [(ii)] $\mathrm{rank}({\mathcal F}')< \mathrm{rank}({\mathcal F})$ and $\mathrm{rank}({\mathcal F}'')< \mathrm{rank}({\mathcal F})$. \end{itemize} Here ${\mathcal C}({\mathcal F}',{\mathcal F}'')\subset {\mathcal A}$ is the subcategory containing ${\mathcal F}$ and ${\mathcal F}''$, closed under extensions, kernels of epimorphisms, and cokernels of monomorphisms. ${\mathbb T}heta_r$ denotes the $r$-Kronecker quiver: $$\begin{eqnarray*}gin{tikzcd} 1 &2\arrow[l,shift right=4,"\vdots"]\arrow[l, shift right=2]\arrow[l, shift left=3] \ \ (r \text{ arrows}). \end{tikzcd}$$ \end{theorem} \begin{eqnarray*}gin{corollary} \langle bel{cor4.8} Each indecomposable rigid object in $\tilde{{\mathcal A}}_k$ belongs to $\mathrm{CH}'_{\varnothingLambda}(\tilde{{\mathcal A}}_k)\otimes_{{\mathbb Z}[v^{\pm1}]} {\mathbb Q}$. \begin{eqnarray*}gin{proof} Indecomposable rigid objects in $\tilde{{\mathcal A}}_k$ are exceptional vector bundles and exceptional torsion sheaves which lie in $\mathrm{Tor}_{ \langle mbda_i}$ defined in Lemma \ref{lem2.1}. It is clear that all exceptional torsion sheaves and line bundles belong to $\mathrm{CH}'_{\varnothingLambda}(\tilde{{\mathcal A}}_k)\otimes_{{\mathbb Z}[v^{\pm1}]} {\mathbb Q}$. 
For an exceptional vector bundle ${\mathcal F}^k$, by Theorem \ref{thm4.7}, it suffices to show that ${\mathcal F}^k$ of rank $\geq 2$ belongs to the ${\mathbb Q}$-linear composition Hall algebra $\mathrm{CH}_{\varnothingLambda}({\mathcal C}({\mathcal F}',{\mathcal F}''))\otimes_{{\mathbb Z}[v^{\pm1}]} {\mathbb Q}$. Note that ${\mathcal C}({\mathcal F}',{\mathcal F}'')$ in Theorem \ref{thm4.7} is equivalent to the module category $\mathrm{mod}k{\mathbb T}heta_r$, it is equivalent to show that for each indecomposable rigid object $M$ in $\mathrm{mod}k{\mathbb T}heta_r$, $[M]\in \mathrm{CH}(\mathrm{mod}k{\mathbb T}heta_r)\otimes_{{\mathbb Z}[v^{\pm1}]} {\mathbb Q}$. Without loss of generality, we assume $r=2$, namely, ${\mathbb T}heta_r$ is the Kronecker quiver. Now the statement is induced from \cite[Theorem 1]{Zhang1996}. Since every indecomposable object in $\mathrm{mod}k{\mathbb T}heta_2$ is either an indecomposable preprojective or an indecomposable preinjective object. \end{proof} \end{corollary} Let $I_k$ be a 2-sided ideal of $\mathrm{CH}'_{\varnothingLambda}(\tilde{{\mathcal A}}_k)\otimes_{{\mathbb Z}[v^{\pm1}]} {\mathbb Q}$ generated by \begin{eqnarray*}gin{equation} \langle bel{eq4.13} [T_j(t)^k][T_i(t)^k]-v^{2\varnothingLambda(d_j(t)^*,d_i(t)^*)}[T_i(t)^k][T_j(t)^k], \text{ if } j\neq i, \end{equation} and if $\begin{eqnarray*}gin{tikzcd}[column sep=huge]t\arrow[r, no head,"{(a_{ij}(t),b_{ij}(t))}"] &t'\end{tikzcd}$ in ${\mathbb T}_m$ such that $\mathrm{Ext}^1_{\tilde{{\mathcal A}}_k}(T_i(t)^k,T_i(t')^k)=0$, \begin{eqnarray*}gin{equation} \langle bel{eq4.14} [T_i(t')^k][T_i(t)^k]-v^{\varnothingLambda(d_i(t')^*,d_i(t)^*)} v^s \prod_{j\neq i}[T_j(t)^k]^{a_{ij}(t)} - v^{\varnothingLambda(d_i(t')^*,d_i(t)^*)-1} v^{s'} \prod_{j\neq i}[T_j(t)^k]^{b_{ij}(t)}, \end{equation} or if $\begin{eqnarray*}gin{tikzcd}[column sep=huge]t\arrow[r, no head,"{(a_{ij}(t),b_{ij}(t))}"] &t'\end{tikzcd}$ in ${\mathbb T}_m$ such that $\mathrm{Ext}^1_{\tilde{{\mathcal A}}_k}(T_i(t')^k,T_i(t)^k)=0$, \begin{eqnarray*}gin{equation} \langle bel{eq4.15} [T_i(t')^k][T_i(t)^k]-v^{\varnothingLambda(d_i(t')^*,d_i(t)^*)+1} v^s \prod_{j\neq i}[T_j(t)^k]^{a_{ij}(t)} - v^{\varnothingLambda(d_i(t')^*,d_i(t)^*)} v^{s'} \prod_{j\neq i}[T_j(t)^k]^{b_{ij}(t)}, \end{equation} For an element $[{\mathcal F}^k]$ in $\mathrm{CH}'_{\varnothingLambda}(\tilde{{\mathcal A}}_k)\otimes_{{\mathbb Z}[v^{\pm1}]} {\mathbb Q}$, we still denote $[{\mathcal F}^k]$ by the image of $[{\mathcal F}^k]$ in the quotient algebra $(\mathrm{CH}'_{\varnothingLambda}(\tilde{{\mathcal A}}_k)\otimes_{{\mathbb Z}[v^{\pm1}]} {\mathbb Q})/I_k$. \begin{eqnarray*}gin{theorem} \langle bel{thm4.9} There is a homomorphism of algebras : $$\phi_k: (\mathrm{CH}'_{\varnothingLambda}(\tilde{{\mathcal A}}_k)\otimes_{{\mathbb Z}[v^{\pm1}]} {\mathbb Q})/I_k \longrightarrow {\mathcal A}_q(\varnothingLambda,B(\tilde{\boldsymbol{p}},\tilde{\boldsymbol{\lambda}}))\otimes_{{\mathbb Z}[v^{\pm1}]} {\mathbb Q},$$ which maps $[T_i(t)^k]$ to $X_{T_i(t)^k}$ for $1\leq i\leq m$ and $t\in {\mathbb T}_m$. In particular, $\phi_k$ is an isomorphism. \begin{eqnarray*}gin{proof} By Equations (\ref{eq4.10})-(\ref{eq4.12}) and $X_?: H_{\varnothingLambda}(\tilde{{\mathcal A}}_k)\to {\mathcal A}_q(\varnothingLambda,\tilde{B})$ is an algebra homomorphism, it can be seen that $\phi_k$ is a homomorphism of algebras. Moreover, following from Corollary \ref{cor4.8}, $\phi_k$ is surjective. 
On the other hand, the defining relations in ${\mathcal A}_q(\varnothingLambda,\tilde{B})$ are exactly relations (\ref{eq4.13})-(\ref{eq4.15}) by Lemma \ref{lem3.9}, which induces a homomorphism $$\psi_k:{\mathcal A}_q(\varnothingLambda,\tilde{B})\otimes_{{\mathbb Z}[v^{\pm1}]} {\mathbb Q}\longrightarrow (\mathrm{CH}'_{\varnothingLambda}(\tilde{{\mathcal A}}_k)\otimes_{{\mathbb Z}[v^{\pm1}]} {\mathbb Q})/I_k,$$ mapping $X_{T_i(t)^k}$ to $[T_i(t)^k]$. As a consequence, we have $\phi_k\psi_k={ id}$ and $\psi_k\phi_k={ id}$, which means that $\phi_k$ is an isomorphism. \end{proof} \end{theorem} \section{The quantum cluster algebra of \texorpdfstring{${\mathbb P}^1$}{Lg}} \langle bel{sec5} In this section, we study bases of the quantum cluster algebra ${\mathcal A}(\varnothingLambda,B)$ of ${\mathbb P}^1$ \subsection{ \texorpdfstring{${\mathcal A}(\varnothingLambda,B)$}{Lg} and \texorpdfstring{${\mathcal A}(2,2)$}{Lg}} \langle bel{sec5.1}\ \ The compatible pair $(\varnothingLambda,B)$ associated to ${\mathbb P}^1$ is $$\varnothingLambda=\begin{eqnarray*}gin{bmatrix}0 &-1\\ 1 &0\end{bmatrix},\ \ \ B=\begin{eqnarray*}gin{bmatrix}1 &-1\\ 1 &0\end{bmatrix}-\begin{eqnarray*}gin{bmatrix}1 &1\\ -1 &0\end{bmatrix},\ \ \text{and}\ \ -\varnothingLambda B=2I_2.$$ Note that all indecomposable rigid objects in $\mathrm{Coh}({\mathbb P}^1)$ are line bundles and exceptional pairs are exactly $({\mathcal O}(l),{\mathcal O}(l+1))$ and $({\mathcal O}(l+1),{\mathcal O}(l))$. Then ${\mathcal A}(\varnothingLambda,B)$ is the subalgebra of $\hat{{\mathcal T}}_{\varnothingLambda}$ generated by $X_{{\mathcal O}(l)}$ for $l\in {\mathbb Z}$, subject to \begin{eqnarray*}gin{align} \langle bel{id6.1} X_{{\mathcal O}(l+2)}X_{{\mathcal O}(l)}&=\nu^2 X_{{\mathcal O}(l+1)}^2+1,\\ \langle bel{id6.2} X_{{\mathcal O}(l)}X_{{\mathcal O}(l+2)}&=\nu^{-2} X_{{\mathcal O}(l+1)}^2+1,\\ \langle bel{id6.3} X_{{\mathcal O}(l+1)}X_{{\mathcal O}(l)}&=\nu^2X_{{\mathcal O}(l)}X_{{\mathcal O}(l+1)}. \end{align} where $$X_{{\mathcal O}(l)}=X^{-(\begin{eqnarray*}gin{smallmatrix}l+1\\ -1\end{smallmatrix})}+\sum_{r\geq l}\nu^{-2(l+r)}[l+r+1]_{\nu^4}X^{-(\begin{eqnarray*}gin{smallmatrix}l+2r+1\\ 1\end{smallmatrix})}.$$ Note that we have defined a bar-involution $\overline{\bullet}$ on ${\mathcal A}(\varnothingLambda,B)$ in Corollary \ref{co4.8}, the Identity (\ref{id6.2}) can be induced by applying $\overline{\bullet}$ to Identity (\ref{id6.1}). Thus, we will omit Identity (\ref{id6.2}) as defining relations in the sequel. Recall that the compatible pair $(\varnothingLambda({\mathbb T}heta_2),B({\mathbb T}heta_2))$ of the Kronecker quiver ${\mathbb T}heta_2: \begin{eqnarray*}gin{tikzcd} 1 &2\arrow[l,shift left]\arrow[l,shift right] \end{tikzcd}$ is $$B({\mathbb T}heta_2)=\begin{eqnarray*}gin{bmatrix}0 &-2\\ 2 &0\end{bmatrix}\ \ \ \text{and}\ \ \ \varnothingLambda({\mathbb T}heta_2)=\begin{eqnarray*}gin{bmatrix}0 &-1\\ 1 &0\end{bmatrix},$$ which is the same as $(\varnothingLambda,B)$ of ${\mathbb P}^1$. 
The quantum cluster algebra ${\mathcal A}(2,2)$ of the Kronecker quiver defined in \cite{Ding2012b} is the subalgebra of ${\mathcal T}_{\varnothingLambda}$ generated by $X_{V(l)}$ for $l\in {\mathbb Z}$ subject to $$\begin{eqnarray*}gin{aligned} X_{V(l-1)}X_{V(l)}&=\nu^2X_{V(l)}X_{V(l-1)},\\ X_{V(l-2)}X_{V(l)}&=\nu^2 X_{V(l-1)}^2+1, \end{aligned}$$ where $V(l)$ is the indecomposable preprojective $k{\mathbb T}heta_2$-module $P_l=(1-l,-l)$ for $l\leq 0$, the indecomposable preinjective $k{\mathbb T}heta_2$-module $I_{l-2}=(l-3,l-2)$ for $l\geq -$, and $V(1)=P_2[1]$, $V(2)=P_1[1]$ for $l=1,2$ by setting $X_{P_l[1]}=x_l$. By the definition of ${\mathcal A}(2,2)$, we have the following \begin{eqnarray*}gin{proposition} There is an isomorphism of algebras : $$ \kappa :{\mathcal A}(2,2)\longrightarrow {\mathcal A}(\varnothingLambda,B), \qquad X_{V(l)}\mapsto X_{{\mathcal O}(-l)}.$$ \end{proposition} \begin{eqnarray*}gin{remark} In Remark \ref{rem2}, we state that the quantum cluster algebra of ${\mathbb X}_{\boldsymbol{p},\boldsymbol{\lambda}}$ is isomorphic to the quantum cluster algebra of acyclic quiver $Q$ of type $\tilde{A}_{p_1,p_2}$ when $\boldsymbol{p}=(2r_1+1,2r_2+1)$. In particular, in the case of $\boldsymbol{p}=(1,1)$, ${\mathbb X}_{\boldsymbol{p},\boldsymbol{\lambda}}$ is ${\mathbb P}^1$ and $Q$ is the Kronecker quiver. The isomorphism from ${\mathcal A}(2,2)$ to ${\mathcal A}(\varnothingLambda,B)$ is given in the last Proposition. In general, we can explicitly give the isomorphism between them through the equivalence ${\mathcal C}({\mathcal A}_k)\simeq {\mathcal C}(\mathrm{mod}kQ)$ of cluster categories. \end{remark} Denote $X_{n\delta}:=X_{S_{x_n}}=X^{-(\begin{eqnarray*}gin{smallmatrix}n\\ 0\end{smallmatrix})}+X^{-(\begin{eqnarray*}gin{smallmatrix}-n\\ 0\end{smallmatrix})}$ for some simple torsion sheaf $S_{x_n}$ supported on the point $x_n\in {\mathbb P}^1$ of degree n. In the sequel, we will show $X_{n\delta}\in {\mathcal A}(\varnothingLambda,B)$. We will call the subalgebra of $\hat{{\mathcal T}}_{\varnothingLambda}$ generated by $X_{n\delta}$, $n\in {\mathbb N}$ the $torsion$-$part$, and denote it by ${\mathcal A}^{tor}(\varnothingLambda,B)$. Obviously, if we show $X_{n\delta}\in {\mathcal A}(\varnothingLambda,B)$, then ${\mathcal A}^{tor}(\varnothingLambda,B)\subset {\mathcal A}(\varnothingLambda,B)$. \subsection{Bases of the torsion part} Let $E_x^{(n)}\in {\mathcal A}_{{\mathbb F}}$ be the indecomposable torsion sheaf of length $n$ supported on $x\in {\mathbb P}$ with $\mathrm{deg}(x)=1$. Then, the quantum cluster character of $E_x^{(n)}$ is $$X_{E_x^{(n)}}=\sum_{l=0}^n X^{-(\begin{eqnarray*}gin{smallmatrix}n-2l \\ 0 \end{smallmatrix})}.$$ Note that the quantum cluster character of $E_x^{(n)}$ is independent of finite fields. \begin{eqnarray*}gin{definition} (1) The $n$-th Chebyshev polynomials of the first kind is the polynomials $F_n(x)\in {\mathbb Z}[x]$ defined by $$F_0(x)=1, F_1(x)=x,F_2(x)=x^2-2, F_{n+1}(x)=F_1(x)F_n(x)-F_{n-1}(x)\ for\ n\geq 2.$$ (2) The $n$-th Chebyshev polynomials of the second kind is the polynomials $G_n(x)\in {\mathbb Z}[x]$ defined by $$G_0(x)=1, G_1(x)=x,G_2(x)=x^2-1, G_{n+1}(x)=G_1(x)G_n(x)-G_{n-1}(x)\ for\ n\geq 2.$$ \end{definition} \begin{eqnarray*}gin{lemma} \langle bel{lem5.3} We have $$F_n(X_{\delta})=X_{n\delta}\ \ \ and\ \ \ G_n(X_{\delta})=X_{E_x^{(n)}}.$$ for some $x\in {\mathbb P}^1$ with degree 1. 
\begin{eqnarray*}gin{proof} Note $X^{-(\begin{eqnarray*}gin{smallmatrix}r_1 \\ 0 \end{smallmatrix})}X^{-(\begin{eqnarray*}gin{smallmatrix}r_2 \\ 0 \end{smallmatrix})}=X^{-(\begin{eqnarray*}gin{smallmatrix}r_2 \\ 0 \end{smallmatrix})}X^{-(\begin{eqnarray*}gin{smallmatrix}r_1 \\ 0 \end{smallmatrix})}$ for $r_1,r_2\in {\mathbb Z}$. Consequently we can identity $X^{-(\begin{eqnarray*}gin{smallmatrix}n \\ 0 \end{smallmatrix})}$ with $z^{n}$ and $X^{-(\begin{eqnarray*}gin{smallmatrix}-n \\ 0 \end{smallmatrix})}$ with $z^{-n}$. By direct computations, we have $$F_2(X_{\delta})=(z+z^{-1})^2-2=z^2+z^{-2}=X_{2\delta}\ \ and \ \ G_2(X_{\delta})=z^2+1+z^{-2}=X_{E_x^{(2)}}.$$ Assume $F_i(X_{\delta})=X_{i\delta}$ and $G_i(X_{\delta})=X_{E_x^{(i)}}$ for $i\leq n$. Then \begin{eqnarray*}gin{align*} G_{n+1}(X_{\delta})&=X_{\delta}X_{E_x^{n}}-X_{E_x^{(n-1)}} \\ &=(z+z^{-1})(z^n+z^{n-2}+\cdots +z^{-n})-(z^{n-1}+z^{n-3}+\cdots +z^{-(n-1)})\\ &=z^{n+1}+z^{n-1}+\cdots t^{-n+1} +z^{-n-1}=X_{E_x^{(n+1)}}. \end{align*} $$F_{n+1}(X_{\delta})=X_{\delta}X_{n\delta}-X_{(n-1)\delta}=(z+z^{-1})(z^n+z^{-n})-(z^{n+1}+z^{-(n-1)})=X_{(n+1)\delta}.$$ \end{proof} \end{lemma} Next, we will show $X_{\delta}\in {\mathcal A}(\varnothingLambda,B)$, then as a result all $X_{n\delta}$ and $X_{E_x^{(n)}}$ will belong to ${\mathcal A}(\varnothingLambda,B)$. \begin{eqnarray*}gin{lemma} \langle bel{lem5.4} The following relations hold on ${\mathcal A}(\varnothingLambda,B)$: \begin{eqnarray*}gin{align} \langle bel{id5.1} X_{{\mathcal O}(2)}X_{{\mathcal O}}&=\nu^2 X_{{\mathcal O}(1)}^2+1.\\ \langle bel{id5.2} X_{{\mathcal O}(3)}X_{{\mathcal O}}&=\nu^2 X_{{\mathcal O}(2)}X_{{\mathcal O}(1)}+\nu^{-1} X_{\delta}.\\ \langle bel{id5.3} X_{\delta}X_{{\mathcal O}}&=\nu X_{{\mathcal O}(1)}+\nu^{-1}X_{{\mathcal O}(-1)}. \end{align} \begin{eqnarray*}gin{proof} The first one is the defining relation of ${\mathcal A}(\varnothingLambda,B)$. For any finite field $k$, the quantum cluster multiplication formula applied to ${\mathcal O}(3)^k$ and ${\mathcal O}^k$ is $$(|k|^2-1)X_{{\mathcal O}(3)^k}X_{{\mathcal O}^k}=|k|^{\frac{3}{4}}(|k|^2-1)X_{{\mathcal O}(2)^k\oplus {\mathcal O}(1)^k}+|k|^{\frac{-1}{4}} \sum_{i=1}^{|k|+1}(|k|-1)X_{S_{x_i}^k}.$$ where $S_{x_i}^k$ is the simple torsion sheaf supported on $x_i\in {\mathbb P}$ with $\mathrm{deg}(x_i)=1$. Note that $X_{S_{x_i}^k}=X_{\delta}$ for each $x_i$ and $X_{{\mathcal O}(2)^k}X_{{\mathcal O}(1)^k}=|k|^{\frac{1}{4}}X_{{\mathcal O}(1)^k\oplus {\mathcal O}(2)^k}$, we have $$X_{{\mathcal O}(3)^k}X_{{\mathcal O}^k}=|k|^{\frac{1}{2}}X_{{\mathcal O}(2)^k} X_{{\mathcal O}(1)^k}+ |k|^{\frac{-1}{4}}X_{\delta}.$$ Thus, in ${\mathcal A}(\varnothingLambda, B)$ we have $$X_{{\mathcal O}(3)}X_{{\mathcal O}}=\nu^2X_{{\mathcal O}(2)} X_{{\mathcal O}(1)}+\nu^{-1} X_{\delta}.$$ The proof of the third equation is similar due to Equation (\ref{eqn3.4}). \end{proof} \end{lemma} Since $X_{{\mathcal O}(3)}$ and $X_{{\mathcal O}}$ belong to ${\mathcal A}(\varnothingLambda,B)$, then $X_\delta=\nu X_{{\mathcal O}(3)}X_{{\mathcal O}}-\nu^3X_{{\mathcal O}(2)}X_{{\mathcal O}(1)}$ lies in ${\mathcal A}(\varnothingLambda,B)$. 
Applying the twisting operation $\sigma$ successfully to Identities (\ref{id5.1}) and (\ref{id5.2}), we have the following \begin{eqnarray*}gin{corollary} In ${\mathcal A}(\varnothingLambda,B)$, we have \begin{eqnarray*}gin{align} X_{{\mathcal O}(l+2)}X_{{\mathcal O}(l)}&=\nu^2 X_{{\mathcal O}(l+1)}^2+1,\\ X_{{\mathcal O}(l+3)}X_{{\mathcal O}(l)}&=\nu^2 X_{{\mathcal O}(l+2)}X_{{\mathcal O}(l+1)}+\nu^{-1} X_{\delta}.\\ \langle bel{eqn5.8} X_{\delta}X_{{\mathcal O}}&=\nu X_{{\mathcal O}(1)}+\nu^{-1}X_{{\mathcal O}(-1)}. \end{align} \end{corollary} \begin{eqnarray*}gin{lemma} \langle bel{lem5.9} In ${\mathcal A}^{tor}(\varnothingLambda,B)$, we have \begin{eqnarray*}gin{align*} X_{n\delta}X_{m\delta}&=X_{m\delta}X_{n\delta}.\\ X_{n\delta}X_{m\delta}&=X_{(n+m)\delta}+X_{(n-m)\delta}\ for\ n>m.\\ X_{n\delta}X_{n\delta}&=X_{2n\delta}+2,\ \ for\ n\in {\mathbb Z}. \end{align*} Moreover $X_{\delta}^n$ is a ${\mathbb Z}$-linear combinations of $X_0$, $X_{\delta}$, $X_{2\delta}$, $\cdots$, $X_{n\delta}$ with the coefficient of $X_{n\delta}$ being 1. \begin{eqnarray*}gin{proof} The first and second statement is obtained by easy computation. We proceed an induction on $n$ to prove the third one. When $n=1$, the statement holds obviously. Assume it holds for $n-1$, then $X_{\delta}^n=X_{\delta}X_{\delta}^{n-1}$=$X_{\delta}(X_{(n-1)\delta}+a_1X_{(n-2)\delta}+\cdots +a_{n-2}X_{\delta}+a_{n-1})$ for some $a_i\in {\mathbb Z}$. Using the first statement, $X_{\delta}^n=X_{n\delta}+a_1X_{(n-1)\delta}+(a_1+1)X_{(n-2)\delta}+\cdots+b_n$ for $b_i\in {\mathbb Z}$. \end{proof} \end{lemma} \begin{eqnarray*}gin{remark} The Lemma \ref{lem5.9} above has been proved for the quantum cluster algebra ${\mathcal A}(2,2)$ of Kronecker quiver in \cite[Proposition 6(1)]{Ding2012b}. \end{remark} \begin{eqnarray*}gin{proposition} \langle bel{prop5.10} Each one of the following sets forms a ${\mathbb Z}[\nu^{\pm1}]$-basis for ${\mathcal A}^{tor}(\varnothingLambda,B)$: $$\begin{eqnarray*}gin{aligned} {\mathbb B}_1^{tor}&=\{X_{r\delta}|r\in {\mathbb N}\},\\ {\mathbb B}_2^{tor}&=\{X_{\delta}^r|r\in {\mathbb N}\},\\ {\mathbb B}_3^{tor}&=\{X_{E_x^{(r)}}|r\in {\mathbb N}\}.\\ \end{aligned}$$ for some $x\in {\mathbb P}^1$ with degree 1. \begin{eqnarray*}gin{proof} We have shown ${\mathbb B}_1^{tor}$ spans ${\mathcal A}^{tor}(\varnothingLambda,B)$ in Lemma \ref{lem5.9}. Since $X_{r\delta}=z^r+z^{-r}$ for distinct $r$ has different maximal degree, it follows that $\{X_{r\delta}|r\in {\mathbb N}\}$ is linearly independent. As a consequence, ${\mathbb B}_1^{tor}$ is a basis for ${\mathcal A}(\varnothingLambda,B)$. Note that ${\mathbb B}_1^{tor}$ and ${\mathbb B}_i^{tor}$ can be linearly represented by each other for $i=2,3$, which implies ${\mathbb B}_i^{tor}$ is also a basis. \end{proof} \end{proposition} \subsection{Bases of \texorpdfstring{${\mathcal A}(\varnothingLambda,B)$}{Lx}} Since ${\mathcal A}(\varnothingLambda,B)$ is generated by $X_{{\mathcal O}(l)}$ for $l\in {\mathbb Z}$, every element in ${\mathcal A}(\varnothingLambda,B)$ is a ${\mathbb Z}[\nu^{\pm}]$-linear combination of products of several $X_{{\mathcal O}(l_i)}$. To find a basis for ${\mathcal A}(\varnothingLambda,B)$, we proceed by induction on the rank, i.e. the length (=s) of a product $\prod_{i=1}^s X_{{\mathcal O}(l_i)}$. 
For the case when rank is 1, every element in ${\mathcal A}(\varnothingLambda,B)$ of rank 1 is a linear combination of $X_{{\mathcal O}(l)}=X^{-(\begin{eqnarray*}gin{smallmatrix}l+1\\ -1\end{smallmatrix})}+\sum_{r\geq l}\nu^{-2(l+r)}[l+r+1]_{\nu^4}X^{-(\begin{eqnarray*}gin{smallmatrix}l+2r+1\\ 1\end{smallmatrix})}$, for $l\in {\mathbb Z}$. For the case when rank is 2, we need to deal with $X_{{\mathcal O}(n)}X_{{\mathcal O}(m)}$ for any $n,m\in {\mathbb Z}$. Apply the operation of twisting, it suffices to deal with $X_{{\mathcal O}}X_{{\mathcal O}(n)}$ and $X_{{\mathcal O}(n)}X_{{\mathcal O}}$ for $n\geq 0$. Denoted $z_n$ by $X_{n\delta}=F(X_{\delta})$, \begin{eqnarray*}gin{proposition} \langle bel{prop5.11} For $n\in {\mathbb N}$, $n\geq 0$, we have\ \ \begin{eqnarray*}gin{align} \langle bel{eqn5.9}&X_{{\mathcal O}(2n)}X_{{\mathcal O}}=\nu^{2n}X_{{\mathcal O}(n)}^2+\sum_{l=0}^{n-1}\nu^{2(-n+2l+1)}\sum_{i=l+1}^{n}z_{2(n-i)}.\\ \langle bel{eqn5.10}&X_{{\mathcal O}(2n+1)}X_{{\mathcal O}}=\nu^{2n}X_{{\mathcal O}(n+1)}X_{{\mathcal O}(n)}+\sum_{l=0}^{n-1}\nu^{2(-n+2l)+1}\sum_{i=l+1}^{n}z_{2(n-i)+1}. \end{align} \begin{eqnarray*}gin{proof} The proof is similar to the proof of \cite[Proposition 6(3)]{Ding2012b} using Lemma \ref{lem5.4}. \end{proof} \end{proposition} Applying the bar involution $\overline{\bullet}$ to Equations (\ref{eqn5.8}), (\ref{eqn5.9}) and (\ref{eqn5.10}), we have \begin{eqnarray*}gin{align} X_{{\mathcal O}}X_{\delta}&=\nu^{-1}X_{{\mathcal O}(1)}+\nu X_{{\mathcal O}(-1)}.\\ X_{{\mathcal O}}X_{{\mathcal O}(2n)}&=\nu^{-2n}X_{{\mathcal O}(n)}^2+\sum_{l=0}^{n-1}\nu^{-2(-n+2l+1)}\sum_{i=l+1}^{n}z_{2(n-i)}.\\ \langle bel{eqn5.13} X_{{\mathcal O}}X_{{\mathcal O}(2n+1)}&=\nu^{-2n}X_{{\mathcal O}(n)}X_{{\mathcal O}(n+1)}+\sum_{l=0}^{n-1}\nu^{-2(-n+2l)+1}\sum_{i=l+1}^{n}z_{2(n-i)+1}. \end{align} Define a subset ${\mathbb C}_r$ of ${\mathcal A}(\varnothingLambda,B)$ to be $${\mathbb C}_r=\{ X_{{\mathcal O}(l)}^d X_{{\mathcal O}(l+1)}^{r-d}|l\in {\mathbb Z}, 1\leq d\leq r\}.$$ Set ${\mathbb B}^{vet}:=\bigcup_{r\geq 1} {\mathbb C}_r$. \begin{eqnarray*}gin{theorem} \langle bel{thm5.15} The set ${\mathbb B}^{vet}\bigcup {\mathbb B}_2^{tor}$ forms a ${\mathbb Z}[\nu^{\pm1}]$-basis for ${\mathcal A}(\varnothingLambda,B)$. \begin{eqnarray*}gin{proof} The set linearly spans ${\mathcal A}(\varnothingLambda,B)$ following from Equations (\ref{eqn5.8})-(\ref{eqn5.13}). The linear independence of ${\mathbb B}^{vet}\bigcup {\mathbb B}_2^{tor}$ is induced from that each element in this set has distinct minimal degrees. Indeed, the minimal degrees of $X_{{\mathcal O}(l)}$ are $-(\begin{eqnarray*}gin{smallmatrix}l+1\\ -1\end{smallmatrix})$ and $-(\begin{eqnarray*}gin{smallmatrix}3l+1\\ 1\end{smallmatrix})$. Thus the minimal degrees of $X_ {{\mathcal O}(l)}^d X_{{\mathcal O}(l+1)}^{r-d}$ are $-(r-d)(\begin{eqnarray*}gin{smallmatrix}l+1\\ -1\end{smallmatrix})-d(\begin{eqnarray*}gin{smallmatrix}l+2\\ -1\end{smallmatrix})$ and $-(r-d)(\begin{eqnarray*}gin{smallmatrix}3l+1\\ 1\end{smallmatrix})-d(\begin{eqnarray*}gin{smallmatrix}3l+4\\ 1\end{smallmatrix})$, which are $-(\begin{eqnarray*}gin{smallmatrix}rl+d+r\\ -r\end{smallmatrix})$ and $-(\begin{eqnarray*}gin{smallmatrix}3(lr+d)+r\\ r\end{smallmatrix})$ respectively. When $r$ is fixed, two such elements share same minimal degree if and only if $rl+d=rl'+d'$. But $|d'-d|<r$ and $|l-l'|\geq 1$, it is impossible that there exist two different pairs $(l,d)$ and $(l',d')$ such that $rl+d=rl'+d'$. 
On the other hand the minimal degree of $X_\delta^n$ is $-(\begin{eqnarray*}gin{smallmatrix}-n\\ 0\end{smallmatrix})$. Hence different $(r,l,d)$ gives different minimal degrees ($X_\delta^n$ corresponding to $(0,0,n)$), it follows that the set $\{x\in {\mathbb C}_r|r\in {\mathbb Z}^+\}\cup \{X^n_{\delta}|n\in {\mathbb N}\}$ is linearly independent. \end{proof} \end{theorem} \subsection{Bar-invariant bases} \begin{eqnarray*}gin{proposition}\ \ \begin{eqnarray*}gin{itemize} \item [(i)] $\overline{\nu^{-1}X_{{\mathcal O}(l)}X_{{\mathcal O}(l+1)}}=\nu^{-1}X_{{\mathcal O}(l)}X_{{\mathcal O}(l+1)}$ and $\overline{X_{{\mathcal O}(l)}X_{{\mathcal O}(l)}}=X_{{\mathcal O}(l)}X_{{\mathcal O}(l)}$. \item [(ii)]$\nu^{d(r-d)}X_ {{\mathcal O}(l)}^d X_{{\mathcal O}(l+1)}^{r-d}$ is bar-invariant. \end{itemize} \begin{eqnarray*}gin{proof} The first statement is induced from Equation (\ref{id6.2}) and $X_{{\mathcal O}(l)}$ is bar-invariant, i.e., \begin{eqnarray*}gin{center} $\overline{X_{{\mathcal O}(l)}X_{{\mathcal O}(l+1)}}=\overline{X_{{\mathcal O}(l+1)}}$ $\overline{X_{{\mathcal O}(l)}}=\nu^{-2}X_{{\mathcal O}(l)}X_{{\mathcal O}(l+1)}$. \end{center} To show the second argument, by using $\overline{XY}=\overline{Y}$ $\overline{X}$, we have $$\overline{X_{{\mathcal O}(l)}^d X_{{\mathcal O}(l+1)}^{r-d}}=X_{{\mathcal O}(l+1)}^{r-d} X_{{\mathcal O}(l)}^d.$$ Apply Equation \ref{id6.2} successfully on the left, we have $$\overline{\nu^{-d(r-d)}X_{{\mathcal O}(l+1)}^{r-d} X_{{\mathcal O}(l)}^d}=\nu^{-d(r-d)}X_{{\mathcal O}(l)}^d X_{{\mathcal O}(l+1)}^{r-d}.$$ The proof is completed. \end{proof} \end{proposition} Set $$\bar{{\mathbb C}}_r:=\{ \nu^{d(r-d)}X_{{\mathcal O}(l)}^d X_{{\mathcal O}(l+1)}^{r-d}|l\in {\mathbb Z}, 1\leq d\leq r\},$$ and $\bar{{\mathbb B}}:=\bigcup_{r\geq 1} \bar{{\mathbb C}}_r$. Combining Theorem \ref{thm5.15} with Proposition \ref{prop5.10}, we have \begin{eqnarray*}gin{theorem} \langle bel{thm5.14} Each one of the following sets gives rise to a bar-invariant ${\mathbb Z}[\nu^{\pm1}]$-basis for ${\mathcal A}(\varnothingLambda,B)$: \begin{eqnarray*}gin{center} $ {\mathbb B}_1^{tor}\bigcup \bar{{\mathbb B}}^{vet}$, $\ \ $ ${\mathbb B}_2^{tor}\bigcup \bar{{\mathbb B}}^{vet}$, $\ \ $ ${\mathbb B}_3^{tor}\bigcup \bar{{\mathbb B}}^{vet}$. \end{center} \end{theorem} \begin{eqnarray*}gin{remark} The isomorphism $\kappa:{\mathcal A}(2,2)\to {\mathcal A}(\varnothingLambda,B)$ also preserves bar-invariant ${\mathbb Z}[\nu^{\pm1}]$-bases (see \cite[Corollary 9]{Ding2012b}). \end{remark} \appendix \section{Compatibility of exchange triangles} \langle bel{ap} \subsection{Cluster categories of weighted projective lines} Let $k$ be any field. Recall ${\mathbb X}_{\boldsymbol{p},\boldsymbol{\lambda}}$ is the weighted projective line of $(\boldsymbol{p},\boldsymbol{\lambda})$ given in Section 2.1. Denoted by ${\mathcal A}_k$ the hereditary category of coherent sheaves on ${\mathbb X}_{\boldsymbol{p},\boldsymbol{\lambda}}^k$. \begin{eqnarray*}gin{definition} \langle bel{def7.1} The canonical algebra $C(\boldsymbol{p},\boldsymbol{\lambda})$ is defined to be the path algebra $kQ$ of the quiver $Q$ modulo following relations: $$\alpha_{i,1}\alpha_{i,2}\cdots \alpha_{i,p_i}=\alpha_{2,1}\cdots \alpha_{2,p_2}- \langle mbda_{i}\alpha_{1,1}\cdots \alpha_{1,p_1},$$ for $i=3,\cdots, N$. Denote by $S$ the set consisting of the above relations. 
Here $Q$ is given by $$\begin{eqnarray*}gin{tikzcd} &\circ\arrow[ldd,"\alpha_{1,1}"] &\circ\arrow[l,"\alpha_{1,2}"] &\cdots\arrow[l,"\alpha_{1,p_1-2}"] &\circ\arrow[l,"\alpha_{1,p_1-1}"] & \\ & &\cdots & & &\\ \stackrelar &\circ\arrow[l,"\alpha_{i,1}"] &\circ\arrow[l,"\alpha_{i,2}"] &\cdots\arrow[l,"\alpha_{i,p_i-2}"] &\circ\arrow[l,"\alpha_{i,p_i-1}"] &\ast\arrow[luu,"\alpha_{1,p_1}"]\arrow[l,"\alpha_{i,p_i}"]\arrow[ldd,"\alpha_{N,p_N}"'] \\ & &\cdots & & &\\ &\circ\arrow[luu,"\alpha_{N,1}"'] &\circ\arrow[l,"\alpha_{N,2}"] &\cdots\arrow[l,"\alpha_{N,p_N-2}"] &\circ\arrow[l,"\alpha_{1,p_N-1}"] & \end{tikzcd}$$ \end{definition} Let $T^k=\bigoplus_{0\leq \vec{l}\leq \vec{c}} {\mathcal O}(\vec{l})^k$ and $D^b(C(\boldsymbol{p},\boldsymbol{\lambda}))$ denoted by $D^b(\mathrm{mod}C(\boldsymbol{p},\boldsymbol{\lambda}))$. It is known that $T^k$ is a tilting object in ${\mathcal A}_k$ and the derived functor ${\mathbb R}{\mathrm{Hom}}(T^k,-): D^b({\mathcal A}_k)\to D^b(C(\boldsymbol{p},\boldsymbol{\lambda}))$ is a derived equivalence, see \cite{Geigle1991,Chen2009}. Obviously, under this functor, the image of ${\mathcal O}(j\vec{x_i})$ is the indecomposable projective $C(\boldsymbol{p},\boldsymbol{\lambda})$-module $P_{i,j}$ for $1\leq i\leq N$ and $1\leq j\leq p_i-1$. ${\mathbb R}{\mathrm{Hom}}(T^k,{\mathcal O}^k)=P_{\stackrelar}$ and ${\mathbb R}{\mathrm{Hom}}(T^k,{\mathcal O}(\vec{c}))=P_{\ast}$. Let ${\mathcal C}({\mathcal A}_k)$ be the cluster category $D^b({\mathcal A}_k)/\tau\circ [-1]$ of ${\mathcal A}_k$. Since ${\mathcal A}_k$ is derived equivalent to $\mathrm{mod}(C(\boldsymbol{p},\boldsymbol{\lambda}))$, the orbit category $D^b(C(\boldsymbol{p},\boldsymbol{\lambda}))/\tau\circ [-1]$ has a natural triangulated structure induced from ${\mathcal C}({\mathcal A}_k)$ such that the projection functor $D^b(C(\boldsymbol{p},\boldsymbol{\lambda}))\to D^b(C(\boldsymbol{p},\boldsymbol{\lambda}))/\tau\circ [-1]$ is a triangle functor. We say ${\mathcal C}(C(\boldsymbol{p},\boldsymbol{\lambda})):=D^b(C(\boldsymbol{p},\boldsymbol{\lambda}))/\tau\circ [-1]$ is the cluster category of $C(\boldsymbol{p},\boldsymbol{\lambda})$. \begin{eqnarray*}gin{theorem}[{\cite[Theorem 6.8]{Buan2006a}}] \langle bel{thm7.2} Let $T$ be a 2-Calabi-Yau triangulated category with a cluster-tilting object $T$. Let $T_i$ be indecomposable and $T = T_i \oplus \bar{T}$. Then there exists a unique indecomposable $T_i^*$ non-isomorphic to $T_i$ such that $\bar{T} \oplus T_i^*$ is cluster tilting. Moreover $T_i$ and $T_i^*$ are linked by the existence of exchange triangles $$T_i\stackrelackrel{u}\longrightarrow B \stackrelackrel{v}\longrightarrow T_i^* \longrightarrow T_i[1]\qquad\text{and}\qquad T_i^*\stackrelackrel{u'}\longrightarrow B' \stackrelackrel{v'}\longrightarrow T_i \longrightarrow T^*_i[1]$$ where $u$ and $u'$ are minimal left add $\bar{T}$-approximations and $v$ and $v'$ are minimal right add $\bar{T}$-approximations. \end{theorem} \begin{eqnarray*}gin{theorem}[{\cite[Theorem 7.5]{Buan2006a}}] \langle bel{thm7.3} Two indecomposable rigid objects $T_i$ and $T_i^*$ form an exchange pair if and only if $$\dim_{\mathrm{End}_{{\mathcal C}}T_i}\operatorname{Ext}^1_{{\mathcal C}}(T_i,T_i^*)=1=\dim_{\mathrm{End}_{{\mathcal C}}T_i^*}\operatorname{Ext}^1_{{\mathcal C}}(T^*_i,T_i).$$ \end{theorem} It is well known that for an indecomposable rigid object ${\mathcal F}$ in $\tilde{{\mathcal A}}_k$, we have that $\mathrm{End}_{{\mathcal A}_k}({\mathcal F})\cong k$ for any fields $k$. 
Moreover, since the global dimension of ${\mathcal A}_k$ is $2$, we know that $$\frac{\mathrm{End}_{{\mathcal C}_k}{\mathcal F}}{\mathrm{rad}(\mathrm{End}_{{\mathcal C}_k}{\mathcal F})}\cong \frac{\mathrm{End}_{{\mathcal A}_k}{\mathcal F}}{\mathrm{rad}(\mathrm{End}_{{\mathcal A}_k}{\mathcal F}).}$$ Thus, $\mathrm{End}_{{\mathcal C}_k}({\mathcal F})\cong k$. \subsection{Base change functors} Let $q\in {\mathbb N}$ be prime, set $k={\mathbb F}_q$ and $K=$. In the sequel, write $A_{(r)}$ for $C(\boldsymbol{p},\boldsymbol{\lambda})_{{\mathbb F}_{q^r}}$, $A_K$ for $C(\boldsymbol{p},\boldsymbol{\lambda})_{K}$ and ${\mathcal A}_{(r)}$ for ${\mathcal A}_{{\mathbb F}_{q^r}}$. We have the following base change functors: $$\begin{eqnarray*}gin{tikzcd} \mathrm{mod}A_k\arrow[rr,"-\otimes_k {\mathbb F}_{q^r}"]\arrow[rrrr,bend left,"-\otimes_k K"] & &\mathrm{mod}A_{(r)}\arrow[rr,"-\otimes_{{\mathbb F}_{q^r}} K"] & &\mathrm{mod}A_K. \end{tikzcd}$$ Clearly, $-\otimes_k K=(-\otimes_{{\mathbb F}_{q^r}} K)\circ (-\otimes_k {\mathbb F}_{q^r})$. Since these base change functors are exact functors, we have following functors $$\begin{eqnarray*}gin{tikzcd} {\mathcal C}(A_k)\arrow[rr,"-\otimes_k {\mathbb F}_{q^r}"]\arrow[rrrr,bend left,"-\otimes_k K"] & &{\mathcal C}(A_{(r)})\arrow[rr,"-\otimes_{{\mathbb F}_{q^r}} K"] & &{\mathcal C}(A_k). \end{tikzcd}$$ Set $M^{(r)}:=M\otimes_k {\mathbb F}_{q^r}$ and $M^{K}:=M\otimes_k K$ for $M\in \mathrm{mod}A_k$. \begin{eqnarray*}gin{lemma} \langle bel{lem7.3} For $M, N\in \mathrm{mod}A_k$, then $${\mathrm{Hom}}_{A_k}(M,N)\otimes_k K\cong {\mathrm{Hom}}_{A_K}(M^K,N^K).$$ \begin{eqnarray*}gin{proof} For any finite dimensional projective $A_k$-module $P$, we have an isomorphism $${\mathrm{Hom}}_{A_k}(P,N)\otimes_k K\cong {\mathrm{Hom}}_{A_K}(P^K,N^K),$$ since $A_K=A_k\otimes_k K$ and the above isomorphism holds for $A_k$, then for all direct summands of $A_k$. Indeed, ${\mathrm{Hom}}_{A_k}(A_k,N)\otimes_k K=N^K={\mathrm{Hom}}_{A_K}(A_K,N^K)$. Note that the global dimension of $A_k$ is 2, for $M\in A_k$, consider a projective resolution of $M$ $$\begin{eqnarray*}gin{tikzcd} 0\arrow[r] &P_2\arrow[r] &P_1\arrow[r] &P_0\arrow[r] &M\arrow[r] &0. \end{tikzcd}$$ Then applying $-\otimes_k K$, we get a projective resolution of $M^K$. $$\begin{eqnarray*}gin{tikzcd} 0\arrow[r] &P^K_2\arrow[r] &P^K_1\arrow[r] &P^K_0\arrow[r] &M^K\arrow[r] &0. \end{tikzcd}$$ Then for $N\in \mathrm{mod}A_k$, we have following commutative diagram with exact rows $$\begin{eqnarray*}gin{tikzcd} 0\arrow[r] &{\mathrm{Hom}}_{A_k}(M,N)\otimes_k K\arrow[r] &{\mathrm{Hom}}_{A_k}(P_0,N)\otimes_k K\arrow[r]\arrow[d] &{\mathrm{Hom}}_{A_k}(P_1,N)\otimes_k K\arrow[d]\\ 0\arrow[r] &{\mathrm{Hom}}_{A_K}(M^K,N^K)\arrow[r] &{\mathrm{Hom}}_{A_K}(P^K_0,N^K)\arrow[r] &{\mathrm{Hom}}_{A_K}(P^K_1,N^K).\\ \end{tikzcd}$$ Here we use the fact ${\mathrm{Hom}}_{A_k}(M,N)$ is a $k$-linear vector space and $-\otimes_k K$ is exact. Note that the two vertical arrows are isomorphisms, it follows that $${\mathrm{Hom}}_{A_k}(M,N)\otimes_k K\cong {\mathrm{Hom}}_{A_K}(M^K,N^K).$$ \end{proof} \end{lemma} Let $K^b(\mathrm{proj}A_k)$ the bounded homotopy category of complexes of finite dimensional projective $A_k$-modules. Note that the global dimension of $C(\boldsymbol{p},\boldsymbol{\lambda})_k$ is $2$, we known that $D^b(A_k)$ is triangle equivalent to $K^b(\mathrm{proj}A_k)$. For $P_{\bullet}\in K^b(\mathrm{proj}A_k)$, set $P_{\bullet}^K:=P_{\bullet}\otimes_k K$, which belongs to $K^b(\mathrm{proj}A_K)$. 
${\mathrm{Hom}}_{A_k}(P_{\bullet},Q_{\bullet}):=\bigoplus_{i} {\mathrm{Hom}}_{A_k}(P_i,Q_i)$. \begin{eqnarray*}gin{lemma} \langle bel{lem7.4} For $P_{\bullet}$, $Q_{\bullet}\in K^b(\mathrm{proj}A_k)$, we have an isomorphism $${\mathrm{Hom}}_{K^b(\mathrm{proj}A_k)}(P_{\bullet},Q_{\bullet})\otimes_k K\cong {\mathrm{Hom}}_{K^b(\mathrm{proj}A_K)}(P_{\bullet}^K,Q_{\bullet}^K).$$ \begin{eqnarray*}gin{proof} Let $${\mathrm{Hom}}^{\bullet}(P_{\bullet},Q_{\bullet}):=\bigoplus_{i\in {\mathbb Z}}{\mathrm{Hom}}_{A_k}(P_{\bullet},Q_{\bullet}[i])$$ be the complex of vector spaces with differential $d$ given by $d(f^i)=d_{Q}f^i-(-1)^if^id_{P}$ for $f^i\in {\mathrm{Hom}}_{A_k}(P_{\bullet},Q_{\bullet}[i])$. So we have that $${\mathrm{Hom}}_{K^b(\mathrm{proj}A_k)}(P_{\bullet},Q_{\bullet})=H^0({\mathrm{Hom}}^{\bullet}(P_{\bullet},Q_{\bullet})),$$ and $${\mathrm{Hom}}_{K^b(\mathrm{proj}A_K)}(P^K_{\bullet},Q^K_{\bullet})=H^0({\mathrm{Hom}}^{\bullet}(P^K_{\bullet},Q^K_{\bullet})).$$ On the other hand, it can be checked that the following diagram is commutative: $$\begin{eqnarray*}gin{tikzcd} {\mathrm{Hom}}_{A_k}(P_{\bullet},Q_{\bullet}[-1])\otimes_k K\arrow[r]\arrow[d] & {\mathrm{Hom}}_{A_k}(P_{\bullet},Q_{\bullet})\otimes_k K\arrow[r]\arrow[d] & {\mathrm{Hom}}_{A_k}(P_{\bullet},Q_{\bullet}[1])\otimes_k K\arrow[d]\\ {\mathrm{Hom}}_{A_K}(P^K_{\bullet},Q^K_{\bullet}[-1])\arrow[r] &{\mathrm{Hom}}_{A_K}(P^K_{\bullet},Q^K_{\bullet})\arrow[r] &{\mathrm{Hom}}_{A_K}(P^K_{\bullet},Q^K_{\bullet}[1]), \end{tikzcd}$$ where vertical arrows are induced by isomorphisms $${\mathrm{Hom}}_{A_k}(P_i,Q_j)\otimes_k K\cong {\mathrm{Hom}}_{A_K}(P^K_i,Q^K_j),$$ as shown in Lemma \ref{lem7.3}. Hence we have that $$H^0({\mathrm{Hom}}^{\bullet}(P_{\bullet},Q_{\bullet})\otimes_k K)\cong {\mathrm{Hom}}_{K^b(\mathrm{proj}A_K)}(P^K_{\bullet},Q^K_{\bullet}).$$ Notice that we deal with complexes of finite dimensional vector spaces, it follows that $$H^0({\mathrm{Hom}}^{\bullet}(P_{\bullet},Q_{\bullet})\otimes_k K)\cong H^0({\mathrm{Hom}}^{\bullet}(P_{\bullet},Q_{\bullet}))\otimes_k K.$$ Thus, we get $${\mathrm{Hom}}_{K^b(\mathrm{proj}A_k)}(P_{\bullet},Q_{\bullet})\otimes_k K\cong {\mathrm{Hom}}_{K^b(\mathrm{proj}A_K)}(P^K_{\bullet},Q^K_{\bullet}).$$ \end{proof} \end{lemma} Replacing $K$ with ${\mathbb F}_{q^r}$, we can show the above two statements over finite fields also hold. \begin{eqnarray*}gin{lemma} \langle bel{lem7.5}\ \ \begin{eqnarray*}gin{itemize} \item [(i)]For $M, N\in \mathrm{mod}A_k$, we have an isomorphism $${\mathrm{Hom}}_{A_k}(M,N)\otimes_k {\mathbb F}_{q^r}\cong {\mathrm{Hom}}_{A_{(r)}}(M^{(r)},N^{(r)}).$$ \item [(ii)] For $P_{\bullet}$, $Q_{\bullet}\in K^b(\mathrm{Proj}A_k)$, we have an isomorphism $${\mathrm{Hom}}_{K^b(\mathrm{proj}A_k)}(P_{\bullet},Q_{\bullet})\otimes_k {\mathbb F}_{q^r}\cong {\mathrm{Hom}}_{K^b(\mathrm{proj}A_{(r)})}(P^{(r)}_{\bullet},Q^{(r)}_{\bullet}).$$ \end{itemize} \end{lemma} For an indecomposable object ${\mathcal F}\in {\mathcal A}_k$, there exists a complex $P_{\bullet}({\mathcal F})$ of projective $A_k$-modules in $K^b(\mathrm{proj}A_k)$ corresponding to ${\mathcal F}$ under the derived functor ${\mathbb R}{\mathrm{Hom}}(T^k,-)$. Set ${\mathcal F}^K\in {\mathcal A}_K$ (resp. ${\mathcal F}^{(r)}\in {\mathcal A}_{{\mathbb F}_{q^r}}$) such that ${\mathbb R}{\mathrm{Hom}}(T^K,P_{\bullet}({\mathcal F})\otimes_k K)\cong {\mathcal F}^K$ in ${\mathcal C}(A_K)$ (resp. 
${\mathbb R}{\mathrm{Hom}}(T^{(r)},P_{\bullet}({\mathcal F})\otimes_k {\mathbb F}_{q^r})\cong {\mathcal F}^{(r)}$ in ${\mathcal C}(A_{(r)})$). \begin{eqnarray*}gin{lemma} \langle bel{lem6.7} If ${\mathcal F}\in {\mathcal A}_{{\mathbb F}_q}$ is an indecomposable object, then both ${\mathcal F}^{K}$ and ${\mathcal F}^{(r)}$ are indecomposable in ${\mathcal A}_K$ and ${\mathcal A}_{{\mathbb F}_{q^r}}$ respectively, where $K=\bar{{\mathbb F}}_q$. \begin{eqnarray*}gin{proof} Note that ${\mathcal F}$ is also indecomposable in ${\mathcal C}({\mathcal A}_k)$ by \cite[Proposition 2.3]{Barot2010}, it follows that $\mathrm{End}_{{\mathcal C}({\mathcal A}_k)}({\mathcal F})$ is a local ring. Hence $\mathrm{End}_{{\mathcal C}({\mathcal A}_k)}({\mathcal F})\otimes_k K$ is also a local ring. Moreover by Lemma \ref{lem7.4} we have isomorphisms $$\mathrm{End}_{{\mathcal C}({\mathcal A}_k)}({\mathcal F})\otimes_k K\cong \mathrm{End}_{{\mathcal C}(A_k)}(P_{\bullet}({\mathcal F}))\otimes_k K \cong \mathrm{End}_{{\mathcal C}(A_k)}(P_{\bullet}({\mathcal F})\otimes_k K). $$ We can deduce that $\mathrm{End}_{{\mathcal C}({\mathcal A}_K)}({\mathcal F}^K)\cong \mathrm{End}_{{\mathcal C}(A_K)}(P_{\bullet}({\mathcal F})\otimes_k K)$ is a local ring. One can show ${\mathcal F}^{(r)}$ is an indecomposable object in the same way. \end{proof} \end{lemma} Put everything together, by Theorems \ref{thm7.2} and \ref{thm7.3}, we obtain the following \begin{eqnarray*}gin{theorem} \langle bel{thm7.8} Let $(T_i,T_i^*)$ be an exchange pair in ${\mathcal C}({\mathcal A}_k)$ with exchange triangles $$T_i\stackrelackrel{u}\longrightarrow B \stackrelackrel{v}\longrightarrow T_i^* \longrightarrow T_i[1]\qquad\text{and}\qquad T_i^*\stackrelackrel{u'}\longrightarrow B' \stackrelackrel{v'}\longrightarrow T_i \longrightarrow T_i[1.]$$ Then \begin{eqnarray*}gin{itemize} \item [(i)]$(T_i^{(r)},T_i^{*(r)})$ is an exchange pair in ${\mathcal C}({\mathcal A}_{(r)})$, whose exchange triangles are $$T_i^{(r)}\stackrelackrel{u\otimes_k {\mathbb F}_{q^r}}\longrightarrow B^{(r)} \stackrelackrel{v\otimes_k {\mathbb F}_{q^r}}\longrightarrow T_i^{*(r)} \longrightarrow T_i^{(r)}[1],$$ and $$ T_i^{*(r)}\stackrelackrel{u'\otimes_k {\mathbb F}_{q^r}}\longrightarrow B^{'(r)} \stackrelackrel{v'\otimes_k {\mathbb F}_{q^r}}\longrightarrow T_i^{(r)}\longrightarrow T_i^{*(r)}[1].$$ \item[(ii)]$(T_i^{K},T_i^{*K})$ is an exchange pair in ${\mathcal C}({\mathcal A}_{K})$, whose exchange triangles are $$T_i^{K}\stackrelackrel{u\otimes_k K}\longrightarrow B^{K} \stackrelackrel{v\otimes_k K}\longrightarrow T_i^{*K} \longrightarrow T_i^{K}[1],$$ and $$ T_i^{*K}\stackrelackrel{u'\otimes_k K}\longrightarrow B^{'K} \stackrelackrel{v'\otimes_k K}\longrightarrow T_i^{K}\longrightarrow T_i^{*K}[1].$$ \end{itemize} \end{theorem} The cluster-tilting graph of ${\mathcal C}({\mathcal A}_k)$ has as vertices the isomorphism classes of basic cluster-tilting objects of ${\mathcal C}({\mathcal A}_k)$, while two vertices $T$ and $T'$ are connected by an edge if and only if they differ by precisely one indecomposable direct summand. \begin{eqnarray*}gin{corollary} \langle bel{cor7.9} The cluster-tilting graph of ${\mathcal C}({\mathcal A}_k)$ is connected if $k={\mathbb F}_{q^r}$ or $\bar{{\mathbb F}}_q$, where $q$ is a prime and $r\geq 1$. \begin{eqnarray*}gin{proof} By \cite[Theorem 1.2]{Fu2021a}, the cluster-tilting graph of ${\mathcal C}({\mathcal A}_K)$ is connected for an algebraically closed field $K$. 
For any cluster-tilting object $T$ in ${\mathcal C}({\mathcal A}_k)$, $T^K$ is a cluster-tilting object in ${\mathcal C}({\mathcal A}_K)$ by Lemmas \ref{lem7.4} and \ref{lem6.7}. It then follows from Theorem \ref{thm7.8} that the cluster-tilting graph of ${\mathcal C}({\mathcal A}_k)$ coincides with that of ${\mathcal C}({\mathcal A}_K)$, and hence it is connected. The case $k={\mathbb F}_{q^r}$ is similar. \end{proof} \end{corollary} \end{document}
\begin{document} \renewcommand{\thefootnote}{} \begin{center} {\large Optimality Conditions and Exact Penalty for Mathematical Programs with Switching Constraints\footnote{The first author's work was supported in part by NSFC Grant \#11801152, \#12071133, \#11671122. The second author's work was supported by NSERC.}}

{\large Yan-Chao Liang\footnote{Yan-Chao Liang, College of Mathematics and Information Science, Henan Normal University, Xinxiang 453007, China. Email: {\tt [email protected].}} \ and\ Jane J. Ye\footnote{Corresponding author: Jane J. Ye, Department of Mathematics and Statistics, University of Victoria, Victoria, BC, V8W 2Y2, Canada. Email: {\tt [email protected].}}} \end{center}
{ \noindent{\bf Abstract.} In this paper, we give an overview of optimality conditions and exact penalization for the mathematical program with switching constraints (MPSC). MPSC is a new class of optimization problems with important applications. It is well known that if MPSC is treated as a standard nonlinear program, some of the usual constraint qualifications may fail. To deal with this issue, one could reformulate it as a mathematical program with disjunctive constraints (MPDC). In this paper, we first survey recent results on constraint qualifications and optimality conditions for MPDC, and then apply them to MPSC. Moreover, we provide two types of sufficient conditions for the local error bound and exact penalty results for MPSC. One comes from the directional quasi-normality for MPDC, and the other is obtained via the local decomposition approach.

\noindent{\bf Key words.} \ Mathematical program with switching constraints, mathematical program with disjunctive constraints, directional optimality condition, directional pseudo-normality, directional quasi-normality, error bound, exact penalization. }

\noindent{\bf 2010 Mathematics Subject Classification.} 90C30, 90C33, 90C46. \baselineskip 18pt

\section{Introduction} The mathematical program with switching constraints (MPSC) defines a class of optimization problems in which some of the equality constraint functions are products of two functions. The terminology ``switching constraint'' comes from the fact that if the product of two constraint functions is equal to zero, then at least one of them must be equal to zero. MPSC can be used to model the discretized version of the optimal control problem with switching structure (see e.g. \cite{Clason,kanzow-mehlitz-steck,Mehlitz MP} and the references therein), or to reformulate the so-called mathematical programs with either-or-constraints (see \cite[Section 7]{Mehlitz MP}). MPSC has many interesting applications: for example, optimal control problems with switching structures have been used to model certain real-world applications \cite{Gugat,Hante,Seidman}, and the mathematical program with either-or-constraints was used to study some special instances of portfolio optimization \cite{kanzow-mehlitz-steck}. It is well known that if the mathematical program with equilibrium constraints (MPEC) \cite{LuoPangRalph,outrata} or the mathematical program with vanishing constraints (MPVC) \cite{Achitziger,Hoheisel} is treated as a nonlinear program, then there are issues involving the usual constraint qualifications such as the Mangasarian-Fromovitz constraint qualification (MFCQ) and/or the linear independence constraint qualification (LICQ). It is not surprising that this issue also exists for MPSC.
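For intuition (this one-line illustration is ours and is not taken from the references cited above), consider the single switching constraint $G(z)H(z)=0$ with $G(z)=z_1$ and $H(z)=z_2$ on $\mathbb{R}^2$. Then
$$\nabla\big(GH\big)(z)=(z_2,\,z_1)^T,$$
which is the zero vector at the feasible point $\bar z=(0,0)$ where both switching functions vanish. Hence, when the switching constraint is treated as an ordinary nonlinear equality constraint, its gradient at $\bar z$ is linearly dependent (indeed zero), so neither LICQ nor MFCQ can hold there.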
Indeed, Mehlitz \cite[Lemma 4.1]{Mehlitz MP} showed that if an MPSC is treated as a nonlinear program, then MFCQ fails at any feasible point $z^*$ for which there is a pair of switching functions with value equal to zero. Consequently, he introduced the concepts of weak, Mordukhovich (M-), and strong (S-) stationarity for MPSC and presented some associated constraint qualifications. Kanzow et al. \cite{kanzow-mehlitz-steck} adopted several relaxation methods from the numerical treatment of MPEC to MPSC. Li and Guo extended some weak and verifiable constraint qualifications for nonlinear programs to MPSC in \cite{Li-Guo}. In the work of \cite{Mehlitz MP,Li-Guo}, the error bound property was not studied and that is one of the main {focuses} of this paper. {The mathematical {program} with disjunctive constraints (also called the disjunctive program) is a {type of set-constrained optimization problem where the set is the union of finitely many polyhedral convex sets.} Programs such as MPEC, MPVC and MPSC can be reformulated as disjunctive programs. The classical concepts of optimality for disjunctive programs such as S-stationary condition based on the regular normal cone and M-stationary condition based on the limiting/Mordukhovich normal cone for disjunctive programs were introduced by Flegel et al. \cite{Flegel etal 2007}. Although M-stationary condition holds for a local minimizer under very weak constraint {qualifications} such as the generalized Guignard constraint qualification (GGCQ), it may be weak for some problems and it does not exclude feasible descent directions. Based on concepts of metric subregularity and some new developments in variational analysis, for disjunctive programs, Gfrerer \cite{Gfrerer SIAM Optimal 2014} introduced various new concepts of constraint qualifications and stationarity concepts including the strong M-stationarity and the extended M-stationarity which are stronger than M-stationarity. Moreover, a directional version of LICQ and directional first and second order optimality conditions are given in \cite{Gfrerer SIAM Optimal 2014}. Another direction of sharpening optimality conditions and weakening constraint qualifications is to consider directional optimality conditions and constraint qualifications. Bai et al. \cite{Bai-Ye-Zhang2019} introduced the directional quasi/pseudo-normality as sufficent conditions for the metric subregularity {which} are weaker than both the classical quasi/pseudo-normality and the first order sufficient condition for metric subregularity. {Benko et al. \cite{Benko etal2019} generalized the notions of directional pseudo- and quasi-normality to obtain more sufficient conditions for metric subregularity. In particular, they have shown that for the disjunctive program, the (directional) pseudo-normality can always take the simplified form while for a special class of the disjunctive program called the ortho-disjunctive program (which includes MPSC), the (directional) quasi-normality can also take the simplified form. Mehlitz \cite{Mehlitz2020optimization} introduced an alternative concept of LICQ and {obtained} first and second order optimality conditions for disjunctive programs.} {Recall that M-stationary condition does not preclude the existence of feasible descent directions. To deal with this issue, recently }Benko and Gfrerer \cite{Benko-Gfrerer2017} introduced the so-called $\mathcal{Q}$-stationarity and $\mathcal{Q}_M$-stationarity {where $\mathcal{Q}_M$-stationarity is stronger than M-stationarity}. 
A further extension of $\mathcal{Q}$-stationarity and $\mathcal{Q}_M$-stationarity are presented in Benko and Gfrerer \cite{Benko-Gfrerer2018}. To deal with the difficulty of calculating the limiting normal cone to the feasible region, Gfrerer \cite{Gfrerer2019} introduced a new concept of stationary condition for a set-constrained optimization problem called the linearized M-stationary condition. {Recently, sequential optimality conditions and constraint qualifications and their applications in numerical algorithms became a popular topic. A suitable theory has been developed in the context of MPEC in \cite{Andreani etal2019,Ramos2021}. Mehlitz \cite{Mehlitz2020} has generalized the underlying theories to an very general optimization problem which includes MPDC as a special case.} In this paper, we will survey the aforementioned results about new stationarity concepts and sufficient conditions for metric subregularity for disjunctive programs. We then apply these results to obtain various optimality conditions and local error bound results for MPSC. Moreover, we propose to use the local decomposition approach to study sufficient conditions for the error bound property by the corresponding constraint qualifications for each branch as a standard nonlinear program (NLP). The remainder of this paper is organized as follows. {In} Section 2, we review some constraint qualifications from nonlinear programs, and existing constraint qualifications and optimality conditions for MPSC. In Section 3, we summarize the results that we need for disjunctive programs. In Section 4, we apply the results from Section 3 to MPSC. In Section 5, we derive the local error bound and exact penalty results for MPSC. In Section 6, we conclude our discussion and provide relationships among various constraint qualifications, error bound properties and stationary conditions. Throughout the paper, for a differentiable mapping $c : \mathbb{R}^n \rightarrow \mathbb{R}^m$ and a vector $z\in \mathbb{R}^n$, we denote by $\nabla c(z)$ {the Jacobian} of $c$ at $z$. For a differentiable function $f : \mathbb{R}^n \rightarrow \mathbb{R}$, we denote by $\nabla f(z)$ its gradient vector and $\nabla^2 f(z)$ its Hessian matrix at $z$ provided that it is twice differentiable. For a set ${\cal C}$, we denote by ${\cal C}^\circ:=\{x\ |\ x^Ty\leq0,\forall y\in{\cal C}\}$ its polar cone, and by ${\rm dist}_{\cal C}(x)$ the distance between $x$ and ${\cal C}$. Unless otherwise specified, $\|\cdot\|$ denotes an arbitrary norm in $\mathbb{R}^n$. \mathcal{S}ection{Review of constraint qualifications and optimality conditions} In this section, we first recall some constraint qualifications for NLP. Then we review some existing constraint qualifications and optimality conditions for MPSC. The reader is referred to \cite{Mehlitz MP,Li-Guo} for those constraint qualifications for MPSC that are not reviewed here. \mathcal{S}ubsection{Constraint qualifications for NLP} Consider the standard nonlinear program \begin{equation}gin{eqnarray} \min f(z) && {\rm s.t.}\ g(z)\leq 0, \ h (z)=0,\ \label{standard NLP} \varepsilonnd{eqnarray} where ${f}:\mathbb{R}^n\to \mathbb{R}$, ${g}:\mathbb{R}^n\to\mathbb{R}^p$, ${h}:\mathbb{R}^n\to\mathbb{R}^q$ are continuously differentiable. Denote by $\bar{{\cal I}}_{{g}}:={{\cal I}}_{{g}}(\bar z)=\{i\in \{1,\cdots,p\}|{g}_i(\bar{z})=0\}$ the index set of active inequality constraints at $\bar z$. We recall some constraint qualifications for problem (\ref{standard NLP}) that we will refer to in this paper. 
\begin{equation}gin{defi}\rm Let $\bar{z}\in\mathbb{R}^n$ be a feasible point of problem (\ref{standard NLP}). We say that $\bar{z}$ satisfies \begin{equation}gin{itemize} \item[1.] {\varepsilonm linear independence constraint qualification} (LICQ), if the family of gradients $\{\nabla {g}_i(\bar{z})\}_{i\in \bar{{\cal I}}_{{g}}}\cup\{\nabla {h}_i(\bar{z})\}_{i=1}^q$ is linearly independent; \item[2.] {\varepsilonm Mangasarian-Fromovitz constraint qualification} (MFCQ) \cite{MFCQ1967}, or equivalently {\varepsilonm positive-linearly independent constraint qualification} (PLICQ) if the family of gradients $\{\nabla {g}_i(\bar{z})\}_{i\in \bar{{\cal I}}_{{g}}}\cup\{\nabla {h}_i(\bar{z})\}_{i=1}^q$ is positive-linearly independent, i.e. the family of gradients $\{\nabla {g}_i(\bar{z})\}_{i\in \bar{{\cal I}}_{{g}}}\cup\{\nabla {h}_i(\bar{z})\}_{i=1}^q$ is linearly independent with non-negative scalars associated to the gradients of the active inequality constraints; \item[3.] {\varepsilonm constant rank constraint qualification} (CRCQ) \cite{Janin1984}, if there exists a neighborhood $N(\bar{z})$ of $\bar z$ such that for every $I\mathcal{S}ubseteq \bar{{\cal I}}_{{g}}$ and every $J \mathcal{S}ubseteq \{1,\cdots,q\}$, the family of gradients $\{\nabla {g}_i (z)\}_{i\in I} \cup \{\nabla {h}_i (z)\}_{i\in J} $ has the same rank for every $z \in N(\bar{z})$; \item[4.] {\varepsilonm relaxed constant rank constraint qualification} (RCRCQ) \cite{minchenko-stakhovski2011}, if there exists a neighborhood $N(\bar{z})$ of $\bar z$ such that for every $I\mathcal{S}ubseteq \bar{{\cal I}}_{{g}}$ , the family of gradients $\{\nabla {g}_i (z)\}_{i\in I}\cup\{\nabla {h}_i (z)\}_{i=1}^q$ has the same rank for every $z \in N(\bar{z})$; \item[5.] {\varepsilonm constant positive linear dependence constraint qualification} (CPLD) \cite{Qi-Wei2000}, if there exists a neighborhood $N(\bar{z})$ of $\bar z$ such that for every $I\mathcal{S}ubseteq \bar{{\cal I}}_{{g}}$ and every $J \mathcal{S}ubseteq \{1,\cdots,q\}$, whenever the family of gradients $\{\nabla {g}_i(\bar z)\}_{i\in I} \cup \{\nabla {h}_i (\bar z)\}_{i\in J} $ is positive-linearly dependent, then $\{\nabla {g}_i (z)\}_{i\in I} \cup\{\nabla {h}_i (z)\}_{i\in J}$ is linearly dependent for every $z \in N(\bar{z})$; \item[6.] {\varepsilonm relaxed constant positive linear dependence constraint qualification} (RCPLD) \cite{Andreani etal 2012MP}, if there exists a neighborhood $N(\bar{z})$ of $\bar{z}$ such that (i) $\{\nabla {h}_i (z)\}_{i=1}^q$ has the same rank for every $z \in N(\bar{z})$; (ii) For every $I\mathcal{S}ubseteq \bar{{\cal I}}_{{g}}$, if the family of gradients $\{\nabla {g}_i(\bar z)\}_{i\in I} \cup \{\nabla {h}_i (\bar z)\}_{i\in J} $ is positive-linearly dependent, where $J\mathcal{S}ubseteq \{1,\cdots,q\}$ is such that $\{\nabla {h}_i (\bar{z})\}_{i\in J}$ is {a} basis for span $\{\nabla {h}_i (\bar{z})\}_{i=1}^q$, then $\{\nabla {g}_i (z)\}_{i\in I} \cup \{\nabla {h}_i (z)\}_{i\in J}$ is linearly dependent for every $z \in N(\bar{z})$; \item[7.] 
{\em constant rank of subspace component} (CRSC) \cite{Andreani etal 2012}, if there exists a neighborhood $N(\bar{z})$ of $\bar{z}$ such that the rank of $\{\nabla {g}_i(z)\}_{i\in {\cal I}^-}\cup\{\nabla {h}_i(z)\}_{i=1}^q$ remains constant for $z\in N(\bar{z})$, where
$${\cal I}^-:=\left\{l \in \bar{{\cal I}}_{{g}}\ \Big|\ -\nabla g_l(\bar z)\in \Big\{\sum_{i=1}^q \lambda_i \nabla h_i(\bar z)+\sum_{i\in \bar{{\cal I}}_{{g}}\setminus \{l\}} \mu_i \nabla g_i(\bar z)\ \Big|\ \mu_i \geq 0,\ i\in \bar{{\cal I}}_{{g}}\Big\}\right\}.$$
\end{itemize}
\end{defi}
\begin{remark} Let $L(\bar{z}):=\{d\ |\ \nabla g_i(\bar{z})d\leq0,\ i\in \bar {\cal I}_g,\ \nabla h_i(\bar{z})d=0,\ i=1,\cdots,q\}$ be the linearization cone of problem (\ref{standard NLP}) at $\bar{z}$. Kruger et al. \cite{Kruger etal2014} pointed out that, since the polar of the linearization cone is equal to
$$ L(\bar{z})^\circ= \left\{\sum_{i=1}^q \lambda_i \nabla h_i(\bar z)+\sum_{i\in \bar{{\cal I}}_{{g}}} \mu_i \nabla g_i(\bar z)\ \Big|\ \mu_i \geq 0,\ i\in \bar{{\cal I}}_{{g}}\right\},$$
the index set ${\cal I}^-$ can be equivalently written as
$${\cal I}^-=\{l\in \bar{{\cal I}}_g\ |\ \nabla g_l(\bar{z})^{T}d=0,\ \forall d\in L(\bar{z})\}.$$
Hence in \cite{Kruger etal2014}, CRSC is also called {\em relaxed MFCQ}.
\end{remark}
\begin{defi} \rm(see e.g. \cite{Solodov2011}) Let ${{\cal F}}_{NLP}$ be the feasible region of problem (\ref{standard NLP}). We say that an {\em error bound} holds in a neighborhood $N(\bar{z})$ of a feasible point $\bar{z}\in{{\cal F}}_{NLP}$ if there exists $\alpha>0$ such that for every $z\in N(\bar{z})$,
$${\rm dist}_{{{\cal F}}_{NLP}}(z)\leq\alpha\left(\sum_{i=1}^p\max\{g_i(z),0\}+\sum_{i=1}^q|h_i(z)|\right).$$
\end{defi}
It is easy to see that the local error bound condition holds at $\bar{z}$ for NLP if and only if the feasibility mapping $z\rightrightarrows (g(z),h(z))-\mathbb{R}^p_-\times\{0\}^q$ is {\em metrically subregular} (see Definition \ref{definition of directional metric subregularity}) at $(\bar{z},0)$.

Andreani et al. \cite[Theorem 5.5]{Andreani etal 2012} showed that CRSC implies the existence of local error bounds under second-order differentiability of the functions $g$ and $h$. This assumption was removed by Guo et al. \cite{Guo-Zhang-Lin2014}. Finally, we summarize the relations among the constraint qualifications for NLP discussed in this subsection in Figure 1.
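Before moving on, a simple one-dimensional illustration (ours, included only to fix ideas) may be helpful. For the single equality constraint $h(z)=z$ on $\mathbb{R}$, the feasible set is ${\cal F}_{NLP}=\{0\}$ and ${\rm dist}_{{\cal F}_{NLP}}(z)=|z|=|h(z)|$, so the error bound holds with $\alpha=1$. For $h(z)=z^{2}$, the feasible set is again $\{0\}$, but ${\rm dist}_{{\cal F}_{NLP}}(z)=|z|$ while $|h(z)|=z^{2}=o(|z|)$ as $z\to 0$, so no constant $\alpha>0$ works on any neighborhood of $\bar z=0$; equivalently, the feasibility mapping $z\rightrightarrows h(z)-\{0\}$ is not metrically subregular at $(0,0)$. In this second case $\nabla h(0)=0$, so MFCQ fails at $\bar z$ as well (and one can check that the other constraint qualifications in Figure 1 also fail), consistently with the implications summarized there.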
\begin{figure} \centering \scriptsize \tikzstyle{format}=[rectangle,draw,thin,fill=white] \tikzstyle{test}=[diamond,aspect=2,draw,thin] \tikzstyle{point}=[coordinate,on grid,] \begin{tikzpicture} \node[format](LICQ){LICQ}; \node[format,right of=LICQ,node distance=15mm](MFCQ){MFCQ}; \node[format,right of=MFCQ,node distance=15mm](CPLD){CPLD}; \node[format,right of=CPLD,node distance=15mm](RCPLD){RCPLD}; \node[format,right of=RCPLD,node distance=15mm](CRSC){CRSC}; \node[format, below of=MFCQ,node distance=7mm](CRCQ){CRCQ}; \node[format,right of=CRCQ,node distance=15mm](RCRCQ){RCRCQ}; \node[format,right of=CRSC,node distance=27mm](error bound){error bound/metric subregular}; \draw[->](LICQ)--(MFCQ); \draw[->](MFCQ)--(CPLD); \draw[->](CPLD)--(RCPLD); \draw[->](RCPLD)--(CRSC); \draw[->](LICQ)--(CRCQ); \draw[->](CRCQ)--(RCRCQ); \draw[->](CRCQ)--(CPLD); \draw[->](RCRCQ)--(RCPLD); \draw[->](CRSC)--(error bound); \end{tikzpicture} \centering{Fig.1 Relations among constraint qualifications for NLPs} \end{figure}
\subsection{Constraint qualifications and optimality conditions for MPSC}\label{s2.2} In this paper, we consider the following MPSC:
\begin{eqnarray}
\min & & f(z) \nonumber\\
\mbox{s.t.} & & g(z)\le 0, \ h(z)=0, \quad G_i(z) H_i(z)=0,\ i=1,\cdots,m \label{MPSC}
\end{eqnarray}
where $f:\mathbb{R}^n\rightarrow \mathbb{R},~g:\mathbb{R}^n\rightarrow \mathbb{R}^p,~h:\mathbb{R}^n\rightarrow \mathbb{R}^q,~G_1,\cdots,G_m:\mathbb{R}^n\rightarrow \mathbb{R},~H_1,\cdots,H_m:\mathbb{R}^n\rightarrow \mathbb{R}$. Unless otherwise specified, we assume that all defining functions are continuously differentiable. Let ${\cal F}$ denote the feasible region of (\ref{MPSC}). For a feasible point $z^*\in{\cal F}$, we define some useful index sets as follows:
\begin{eqnarray*}
&&{\cal I}_g^*:={\cal I}_g(z^*)=\{i\in\{1,\cdots,p\}\ |\ g_i(z^*)=0\}, \\
&&{\cal I}_G^*:={\cal I}_G(z^*)=\{i\in\{1,\cdots,m\}\ |\ G_i(z^*)=0,\ H_i(z^*)\neq 0\},\\
&&{\cal I}_H^*:={\cal I}_H(z^*)=\{i\in\{1,\cdots,m\}\ |\ G_i(z^*)\neq 0,\ H_i(z^*)=0\},\\
&&{\cal I}_{GH}^*:={\cal I}_{GH}(z^*)=\{i\in\{1,\cdots,m\}\ |\ G_i(z^*)=H_i(z^*)=0\}.
\end{eqnarray*}
Since, by \cite[Lemma 4.1]{Mehlitz MP}, MPSC never satisfies MFCQ at a feasible point $z^*$ with ${\cal I}_{GH}^*\not =\emptyset$, Mehlitz \cite{Mehlitz MP} defined and studied the following alternative stationarity concepts.
\begin{defi}\rm\cite{Mehlitz MP} \label{WMS} We say that $z^*\in{\cal F}$ is a {\em weakly stationary} (W-stationary) point of MPSC (\ref{MPSC}) if there exist multipliers $(\lambda^g,\lambda^h,\lambda^G,\lambda^H)$ such that
\begin{eqnarray}
&&\nabla f(z^*)+\sum_{i\in{\cal I}_g^*}\lambda^g_i\nabla g_i(z^*)+\sum_{i=1}^q\lambda^h_i\nabla h_i(z^*)+\sum_{i=1}^m(\lambda^G_i\nabla G_i(z^*)+\lambda^H_i\nabla H_i(z^*))=0,\label{W-stationary1}\\
&&\lambda^g_i\geq 0,\ i\in{\cal I}_g^*, \quad \lambda^G_i=0,\ i\in{\cal I}_H^*, \quad \lambda^H_i=0,\ i\in{\cal I}_G^*.\label{W-stationary4}
\end{eqnarray}
We say that $z^*\in{\cal F}$ is a {\em Mordukhovich stationary} (M-stationary) point of MPSC (\ref{MPSC}) if there exist multipliers $(\lambda^g,\lambda^h,\lambda^G,\lambda^H)$ such that (\ref{W-stationary1})--(\ref{W-stationary4}) hold and $\lambda^G_i\lambda^H_i=0,\ i\in{\cal I}_{GH}^*.$ Moreover, we call $(\lambda^g,\lambda^h,\lambda^G,\lambda^H)$ an M-multiplier.
We say that $z^*\in{\cal F}$ is a {\em strongly stationary} (S-stationary) point of MPSC (\ref{MPSC}) if there exist multipliers $(\lambda^g,\lambda^h,\lambda^G,\lambda^H)$ such that (\ref{W-stationary1})--(\ref{W-stationary4}) hold and $\lambda^G_i=\lambda^H_i=0,\ i\in{\cal I}_{GH}^*.$ Moreover, we call $(\lambda^g,\lambda^h,\lambda^G,\lambda^H)$ an S-multiplier.
\end{defi}
Consider the associated tightened nonlinear problem at $z^*\in{\cal F}$:
\begin{eqnarray*}
({\rm TNLP})\ \min &&f(z)\\
{\rm s.t.}&&g(z)\leq 0,\ h(z)=0,\ G_i(z)=0,\ i\in{\cal I}_G^*\cup{\cal I}_{GH}^*,\ H_i(z)=0,\ i\in{\cal I}_H^*\cup{\cal I}_{GH}^*.
\end{eqnarray*}
\begin{defi}\rm\cite{Mehlitz MP}\label{definition of MPSC LICQ and MFCQ} Let $z^*$ be a feasible point of MPSC (\ref{MPSC}). We say that $z^*$ satisfies MPSC-LICQ/-MFCQ if LICQ/MFCQ holds for (TNLP) at $z^*$.
\end{defi}
\begin{defi}\rm\label{New CQ definition-1} Let $z^*$ be a feasible point of MPSC (\ref{MPSC}). We say that $z^*$ satisfies MPSC-CRCQ/-CPLD if CRCQ/CPLD holds for (TNLP) at $z^*$.
\end{defi}
\begin{remark} The MPSC-CRCQ/-CPLD defined in Definition \ref{New CQ definition-1} coincide with those defined in \cite[Definition 4.2]{Li-Guo}. The advantage of defining these constraint qualifications as the corresponding ones for the tightened nonlinear program (TNLP) is that we can immediately conclude from the definitions of CRCQ and CPLD for nonlinear programs that MPSC-CRCQ implies MPSC-CPLD, without a separate proof such as part (i) of the proof of \cite[Theorem 4.2]{Li-Guo}.
\end{remark}
\begin{defi}[MPSC-RCPLD]\rm \cite{Li-Guo} Let $z^*$ be a feasible point of MPSC (\ref{MPSC}). We say that $z^*$ satisfies MPSC-RCPLD if there exists a neighborhood $N(z^*)$ of $z^*$ such that
\begin{itemize}
\item[(i)] the vectors $\{\nabla h_i(z)\}_{i=1}^q\cup \{ \nabla G_i(z)\}_{i\in \mathcal{I}_G^*} \cup \{ \nabla H_i(z)\}_{i\in \mathcal{I}_H^*}$ have the same rank for all $z$ in $N(z^*)$;
\item[(ii)] let $I_1\subseteq\{1,2,\cdots,q\}$, $I_2\subseteq{\cal I}_G^*$, $I_3\subseteq{\cal I}_H^*$ be index sets such that the set of vectors $\{\nabla h_i(z^*)\}_{i\in I_1}\cup \{ \nabla G_i(z^*)\}_{i\in I_2} \cup \{ \nabla H_i(z^*)\}_{i\in I_3}$ is a basis for ${\rm span}\,(\{\nabla h_i(z^*)\}_{i=1}^q\cup \{ \nabla G_i(z^*)\}_{i\in \mathcal{I}_G^*}\cup \{ \nabla H_i(z^*)\}_{i\in \mathcal{I}_H^*})$. For each $I_4\subseteq{\cal I}_g^*$ and $I_5,I_6\subseteq{\cal I}_{GH}^*$, if there exist $(\lambda^g,\lambda^h,\lambda^G,\lambda^H)$, not all zero, with $\lambda^g_i\geq0$ for each $i\in I_4$ and $\lambda^G_i\lambda^H_i=0$ for each $i\in{\cal I}_{GH}^*$, such that
\begin{eqnarray*}
\sum_{i\in I_4}\lambda^g_i\nabla g_i(z^*)+\sum_{i\in I_1}\lambda^h_i\nabla h_i(z^*)+\sum_{i\in I_2\cup I_5}\lambda^G_i\nabla G_i(z^*)+\sum_{i\in I_3\cup I_6}\lambda^H_i\nabla H_i(z^*)=0,
\end{eqnarray*}
then for any $z\in N(z^*)$, the set of vectors
$$\left\{\nabla g_i(z) \right\}_{i\in I_4} \cup \{\nabla h_i(z)\}_{i\in I_1}\cup\{\nabla G_i(z)\}_{i\in I_2\cup I_5}\cup \{\nabla H_i(z)\}_{i\in I_3\cup I_6}$$
is linearly dependent.
\end{itemize}
\end{defi}
We now gather the constraint qualifications and the corresponding necessary optimality conditions from \cite{Mehlitz MP,Li-Guo} in the following theorem.
One can find the definition of MPSC No Nonzero Abnormal Multiplier Constraint Qualification (MPSC-NNAMCQ) and MPSC quasi/pseudo-normality from the comments after Definitions \ref{Defi4.3} and \ref{Defi4.4} respectively. \begin{equation}gin{thm}{\rm \cite{Mehlitz MP,Li-Guo}} Let $z^*$ be feasible for problem (\ref{MPSC}). If MPSC-LICQ is fulfilled at $z^*$, then $z^*$ is S-stationary. If {MPSC-MFCQ/-CPLD/-CRCQ/-RCPLD/-NNAMCQ/-quasi/-pseudo-normality} is fulfilled at $z^*$, then $z^*$ is M-stationary. \varepsilonnd{thm} \mathcal{S}ection{Optimality conditions for mathematical programs with disjunctive constraints}\label{sec3} In this section we review some optimality conditions for the mathematical programs with disjunctive constraints (MPDC): \begin{equation}gin{eqnarray}\label{disjunctive constraints} \min &&f(z) \quad {\rm s.t.} \ P(z)\in{\langle}mbda, \varepsilonnd{eqnarray} where $f:\mathbb{R}^n \rightarrow \mathbb{R}$, $ P:\mathbb{R}^n \rightarrow \mathbb{R}^{m}$ are continuously differentiable, and ${\langle}mbda \mathcal{S}ubseteq \mathbb{R}^{m}$ is the union of finitely many convex polyhedral sets. We denote the feasible region by ${\cal F}_{D}:=\left \{z\in \mathbb{R}^{n}| P(z)\in {\langle}mbda \right \}$ and the linearization cone by $$L_{{\cal F}_D}^{\rm lin}(z^*):=\{ d\in \mathbb{R}^{n}|\nabla P(z^*) d \in T_{\langle}mbda (P(z^*))\}.$$} To study the mathematical program with disjunctive constraints (\ref{disjunctive constraints}), we need to study various tangent cones and normal cones to set ${\langle}mbda$. First we recall definitions of tangent cones and normal cones. Suppose that $A\mathcal{S}ubseteq \mathbb{R}^m$ is closed and $x^*\in A$. The tangent/Bouligand cone, the Fr\'{e}chet/regular and the limiting/basic/Mordukhovich normal cone to $A$ at $x^*$ are defined by \begin{equation}gin{eqnarray*} T_{A}(x^*) &:=&\left \{d\in \mathbb{R}^m\ |\ \varepsilonxists\ t_k\downarrow 0,\ d^k\rightarrow d\ \text{such\ that}\ x^*+t_k d^k\in A\right \},\\ \omegaidehat{N}_A(x^*)&:=&\left \{{ z}eta\in\mathbb{R}^m| \langlegle { z}eta, x-x^*{\ranglegle}ngle \leq o(\|x-x^*\|) \quad \forall x \in A \right \},\\ N_A(x^*)&:=& \left \{{ z}eta\in\mathbb{R}^m\ {\left|\ \varepsilonxists\ \{x^k\}\mathcal{S}ubseteq A,\ \varepsilonxists { z}eta^k\ {\rm such\ that}\ x^k\to x^*,\ { z}eta^k\to { z}eta,\ { z}eta^k\in\omegaidehat{N}_A(x^k) \right.}\right \}, \varepsilonnd{eqnarray*} respectively, see e.g. \cite{Rockafellar}. When $A$ is convex, all the normal cones above are equal and they coincide with the normal cone in the sense of convex analysis. Using various normal cones, some stationary conditions were introduced; cf \cite[Definition 1]{Flegel etal 2007}, \cite[Definition 3]{Benko-Gfrerer2017}. \begin{equation}gin{defi}\rm Let $z^*\in {\cal F}_D$. \begin{equation}gin{itemize} \item[(a)] We say that $z^*$ is B-stationary (Bouligand stationary) if $0\in \nabla f(z^*)+ \omegaidehat{N}_{{\cal F} _D}(z^*).$ \item[(b)] We say that $z^*$ is S-stationary (Strongly stationary) if $ 0\in \nabla f(z^*)+ \nabla P(z^*)^T \omegaidehat{N}_{\langle}mbda( P(z^*)).$ \item[(c)] We say that $z^*$ is M-stationary (Mordukhovich stationary) if $0\in \nabla f(z^*)+ \nabla P(z^*)^T N_{\langle}mbda (P(z^*)).$ \varepsilonnd{itemize} \varepsilonnd{defi} \begin{equation}gin{defi}\rm Let $z^*\in {{\cal F} _D}$. We say that the generalized Guignard constraint qualification (GGCQ) holds at $z^*$ if \begin{equation}gin{equation}\omegaidehat{N}_{{\cal F} _D}(z^*)=(L_{{\cal F} _D}^{\rm lin}(z^*))^\circ. 
\label{GGCQ}\varepsilonnd{equation} \varepsilonnd{defi} GGCQ is a rather weak constraint qualification. For example, it holds if the set-valued map $F(z):=-P(z)+{\langle}mbda$ is metrically subregular at $z^*$ (see Definition \ref{definition of directional metric subregularity}). The following necessary optimality conditions are well known. Proposition \ref{prop3.1}(a) follows from the well-known fact that any local optimizer has no feasible descent directions and the fact that $(T_{{\cal F} _D}(z^*))^\circ=\omegaidehat{N}_{{\cal F} _D}(z^*)$. Proposition \ref{prop3.1}(b) follows from (a) and the change of coordinates formula in \cite[Exercise 6.7]{Rockafellar}. \begin{equation}gin{prop}\label{prop3.1} Let $z^*$ be a local optimal solution of problem (\ref{disjunctive constraints}). Then \begin{equation}gin{itemize} \item[(a)] $z^*$ is B-stationary. \item[(b)] If $\nabla P(z^*)$ has full rank $m$, then $z^*$ is S-stationary. \item[(c)]\cite[Theorem 7]{Flegel etal 2007} Suppose GGCQ holds at $z^*$. Then $z^*$ is M-stationary. \varepsilonnd{itemize} \varepsilonnd{prop} Recently some MPDC-tailored versions of LICQ have been introduced in \cite[Definition 3.1]{Mehlitz2020optimization} and in \cite[(31)]{GfrererYeZhou} (see also Definition \ref{definition of MPSC LICQ(d)}). These conditions all ensure that a local optimal solution is S-stationary. It is easy to see that although B-stationary condition does not need any constraint qualification, it is implicit and hence not easy to use. S-stationary condition is sharper than M-stationary condition but requires very strong constraint qualification to hold. M-stationary condition is necessary for optimality under very weak constraint qualification but it can be very weak for certain problems. {Recently, some stationary conditions weaker than B-stationarity but stronger than M-stationarity have been introduced.} We now review these results. {For problems in the form (\ref{disjunctive constraints}) but with ${\langle}mbda$ being an arbitrary closed set, the limiting normal cone in M-stationary condition can be hard to compute and the resulting M-stationary condition can be weak. In order to deal with this difficulty, Gfrerer \cite{Gfrerer2019} introduced the so-called linearized M-stationary condition by a repeated linearization procedure. We now apply the linearization procedure to our problem. Suppose that $z^*$ is B-stationary for problem (\ref{disjunctive constraints}) with ${\langle}mbda$ being a closed set and GGCQ holds at $z^*$. Then since $-\nabla f(z^*)\in \omegaidehat{N}_{{\cal F} _D}(z^*) $, by (\ref{GGCQ}) the point $d^*=0$ is a global minimizer for the linearized problem \begin{equation}gin{equation} \label{Linearizedproblem} \min_d \nabla f(z^*)^Td \mbox{ subject to } \nabla P(z^*) d \in T_{\langle}mbda (P(z^*)).\varepsilonnd{equation} If a constraint qualification holds, then M-stationary condition for the above linearized problem holds at $d^*=0$. In our case since ${\langle}mbda$ is the union of finitely many convex polyhedral sets, the perturbed feasible map ${{\cal F} _D}(d):= \nabla P(z^*) d - T_{\langle}mbda (P(z^*))$ is metric subregular at $(0,0)$ and hence GGCQ holds automatically. 
Then by Proposition \ref{prop3.1}(c), $d^*=0$ is an M-stationary point of the linearized problem which means that \begin{equation}gin{equation}\label{LinearM} 0\in \nabla f({ z}^*) +\nabla P(z^*)^T {N}_{T_{\langle}mbda(P(z^*))} (0) .\varepsilonnd{equation} The linearization procedure would continue if $T_{\langle}mbda(P(z^*))$ is not the union of finitely many convex polyhedral sets, until a series of tangent cones to tangent cones to the set ${\langle}mbda$ is the union of finitely many convex polyhedral sets. The resulting optimality condition is called a linearized M-stationary condition. In general, the linearized M-stationary condition is sharper than M-stationary condition. To see this, suppose $T_{\langle}mbda(P(z^*))$ is the union of finitely many convex polyhedral sets and the original set ${\langle}mbda$ is not. Then the linearized M-stationary condition is (\ref{LinearM}). Since ${N}_{T_{\langle}mbda(P(z^*))} (0)\mathcal{S}ubseteq N_{\langle}mbda(P(z^*))$, cf. \cite[Proposition 6.27]{Rockafellar}, the linearized M-stationary condition is sharper than M-stationary condition. Moreover, ${N}_{T_{\langle}mbda(P(z^*))} (0)$ would be easier to calculate than the normal cone $N_{\langle}mbda(P(z^*))$ in this case. But since in our case,} ${\langle}mbda$ is the union of finitely many convex polyhedral sets, ${N}_{T_{\langle}mbda(P(z^*))} (0)=N_{\langle}mbda(P(z^*))$ (see \cite[p. 59]{Henrion}. Hence the linearized M-stationary condition coincides with M-stationary condition for the disjunctive program. {Another approach taken by Benko and Gfrerer in \cite{Benko-Gfrerer2017} to obtain sharper stationary condition than M-stationary condition for problems in the form (\ref{disjunctive constraints}) but with ${\langle}mbda$ being an arbitrary closed set is to give an accurate estimate for the regular normal cone to the constraint system. The idea is as follows. Let $Q_1,Q_2\mathcal{S}ubseteq T_{\langle}mbda(P(z^*))$ be two closed convex cones. Then $$\nabla P(z^*)^{-1}Q_i \mathcal{S}ubseteq \nabla P(z^*)^{-1}T_{\langle}mbda(P(z^*))=L_{{\cal F} _D}^{\rm lin}(z^*),\quad i=1,2,$$ where $\nabla P(z^*)^{-1}Q_i:=\{ d| \nabla P(z^*) d\in Q_i\}.$ Therefore, if GGCQ holds at $z^*$, we have \begin{equation}gin{eqnarray*} \omegaidehat{N}_{{\cal F} _D}(z^*) &=& (L_{{\cal F} _D}^{\rm lin}(z^*))^\circ \\ &\mathcal{S}ubseteq & (\nabla P(z^*)^{-1}Q_1 \cup \nabla P(z^*)^{-1}Q_2)^\circ\\ &=& (\nabla P(z^*)^{-1}Q_1)^\circ \cap ( \nabla P(z^*)^{-1}Q_2)^\circ \qquad \mbox{ since $Q_1$ and $Q_2$ are convex cones}. \varepsilonnd{eqnarray*} Moreover, suppose the following condition holds: \begin{equation}gin{equation} (\nabla P(z^*)^{-1}Q_i)^\circ=\nabla P(z^*)^TQ_i^\circ,i=1,2. \label{validity} \varepsilonnd{equation} Then it follows that \begin{equation}gin{equation}\omegaidehat{N}_{{\cal F} _D}(z^*) \mathcal{S}ubseteq (\nabla P(z^*)^T Q_1 ^\circ ) \cap ( \nabla P(z^*)^T Q_2^\circ)= \nabla P(z^*)^T \left(Q_1 ^\circ \cap ( \ker \nabla P(z^*)^T+ Q_2^\circ)\right),\label{tight}\varepsilonnd{equation} where $\ker\nabla P(z^*)^T:=\{r| P(z^*)^Tr=0\}$ and the equality follows from \cite[Lemma 1]{Benko-Gfrerer2017}. The right hand side of the inclusion (\ref{tight}) gives an upper estimate for $\omegaidehat{N}_{{\cal F} _D}(z^*)$. In order to have that the above inclusion provides a good estimate for the regular normal cone, it is obvious that we want to choose $Q_1, Q_2$ as large as possible so that the inclusion is tight. 
Furthermore, since one always has $\nabla P(z^*)^T \omegaidehat N_{\langle}mbda(P(z^*)) \mathcal{S}ubseteq \omegaidehat{N}_{{\cal F} _D}(z^*)$ (\cite[Theorem 6.14]{Rockafellar}), if \begin{equation}gin{equation}\label{sufficentc} \nabla P(z^*)^T \left(Q_1 ^\circ \cap ( \ker \nabla P(z^*)^T+ Q_2^\circ) \right)\mathcal{S}ubseteq \nabla P(z^*)^T \omegaidehat N_{\langle}mbda(P(z^*)), \varepsilonnd{equation} then the equality holds in (\ref{tight}) and consequently, \begin{equation}gin{equation}\omegaidehat{N}_{{\cal F} _D}(z^*) =\nabla P(z^*)^T \left(Q_1 ^\circ \cap ( \ker \nabla P(z^*)^T+ Q_2^\circ) \right)=\nabla P(z^*)^T \omegaidehat N_{\langle}mbda(P(z^*)). \label{B-S} \varepsilonnd{equation} How to choose $Q_1,Q_2$ satisfying the condition (\ref{sufficentc})? One good choice is to find $Q_1,Q_2$ satisfying $$Q_1^\circ \cap Q_2^\circ=\omegaidehat{N}_{\langle}mbda (P(z^*)),$$ since then condition (\ref{sufficentc}) holds whenever $\nabla P(z^*)$ has full rank. Based on the estimates of the regular normal cone in (\ref{tight}) and the fact that any local minimizer is an B-stationary point, Benko and Gfrerer in \cite{Benko-Gfrerer2017} introduced the concept of the so-called $\mathcal{Q}$-stationarity. Moreover when an $\mathcal{Q}$-stationary point is also an M-stationary point, then they call it an $\mathcal{Q}_M$-stationary point. In our case, since $T_{\langle}mbda(P(z^*))$ is the union of finitely many convex polyheral sets, we can choose $Q_1,Q_2$ to be closed convex polyhedral cones. By \cite[Proposition 1]{Benko-Gfrerer2017}, the polyhedrality of the cones $Q_i\mathcal{S}ubseteq T_{\langle}mbda(P(z^*)), i=1,2$ ensures validity of (\ref{validity}). We now give definition for $\mathcal{Q}$-stationarity for the disjunctive program. } \begin{equation}gin{defi}{\rm(\cite[Definiton 4 and Lemma 2]{Benko-Gfrerer2017})\label{definition of Q-statioanry} Let $\mathcal{Q}$ denote some collection of pairs $(Q_1,Q_2)$ of closed convex polyhedral cones fulfilling $Q_i\mathcal{S}ubseteq T_{\langle}mbda(P(z^*)), i=1,2$. \begin{equation}gin{itemize} \item[(i)] Given $(Q_1,Q_2)\in\mathcal{Q}$, we say that $z^*$ is $\mathcal{Q}$-stationary with respect to $(Q_1,Q_2)$ for program (\ref{disjunctive constraints}) if \begin{equation}gin{equation} 0\in\nabla f(z^*)+\nabla P(z^*)^T\left(Q_1^\circ\cap(\ker\nabla P(z^*)^T+Q_2^\circ)\right).\nonumber \varepsilonnd{equation} \item[(ii)] We say that $z^*$ is $\mathcal{Q}$-stationary for program (\ref{disjunctive constraints}), if $z^*$ is $\mathcal{Q}$-stationary with respect to some pair $(Q_1,Q_2)\in\mathcal{Q}$. \item [(iii)] We say that $z^*$ is $\mathcal{Q}_M$-stationary provided that $z^*$ is both M-stationary and $\mathcal{Q}$-stationary with respect to some pair $(Q_1,Q_2)\in\mathcal{Q}$, i.e., there exists a pair $(Q_1,Q_2)\in\mathcal{Q}$ such that $$0\in\nabla f(z^*)+\nabla P(z^*)^T\left(Q_1^\circ\cap(\ker\nabla P(z^*)^T+Q_2^\circ)\cap N_{\langle}mbda(P(z^*))\right).$$ \varepsilonnd{itemize}} \varepsilonnd{defi} Based on the discussion before Definition \ref{definition of Q-statioanry}, we obtain the following optimality conditions. \begin{equation}gin{prop}\label{Prop3.3} Let $z^*$ be a local optimal solution for program (\ref{disjunctive constraints}). If GGCQ holds at $z^*$, then $z^*$ is $\mathcal{Q}$-stationary with respect to every pair $(Q_1,Q_2)\in \mathcal{Q}$. {Moreover,} $z^*$ is $\mathcal{Q}_M$-stationary with respect to every pair $(Q_1,Q_2)\in \mathcal{Q}$. 
Conversely, if $z^*$ is $\mathcal{Q}$-stationary with respect to some pair $(Q_1,Q_2)\in \mathcal{Q}$ fulfilling (\ref{sufficentc}), then $z^*$ is S-stationary and consequently also B-stationary. \varepsilonnd{prop} \begin{equation}gin{proof} Let $z^*$ be a local optimal solution for program (\ref{disjunctive constraints}). Then by Proposition \ref{prop3.1}, $z^*$ is a B-stationary point, i.e., $-\nabla f(z^*)\in \omegaidehat{N}_{{\cal F} _D} (z^*)$. If GGCQ holds at $z^*$, then by (\ref{tight}) and Definition \ref{definition of Q-statioanry}, $z^*$ is $\mathcal{Q}$-stationary with respect to every pair $(Q_1,Q_2)\in \mathcal{Q}$. {Moreover,} by Proposition \ref{prop3.1}, it is also an M-stationary point and hence $\mathcal{Q}_M$-stationary. Conversely, suppose that $z^*$ is $\mathcal{Q}$-stationary with respect to some pair $(Q_1,Q_2)\in \mathcal{Q}$ fulfilling (\ref{sufficentc}). Then by definition of $\mathcal{Q}$-stationarity and (\ref{B-S}), $z^*$ is also S-stationary and B-stationary. \varepsilonnd{proof} Now we review the asymptotical version of M-stationarity. Using a simple penalization argument, \cite[Theorem 3.2]{Mehlitz2020} showed that any local minimizer $z^*$ of MPDC must be AM-stationary for MPDC. The question is under what conditions, is an AM-stationary point M-stationary? In \cite[Definition 3.8]{Mehlitz2020}, Mehlitz defined the so-called asymptotically Mordukhovich-regularity (AM-regularity for short) and showed that under AM-regularity, an AM-stationary point is M-stationary. Moreover, for the case of MPDC, according to the equivalence theorem shown in \cite[Theorem 5.3]{Mehlitz2020}, we can define AM-{regularity} as follows. \begin{equation}gin{defi}\label{AM-stationary}\rm\cite[Definition 3.1]{Mehlitz2020} Let $z^*\in {{\cal F} _D}$. We say that $z^*$ is asymptotically M-stationary (AM-stationary) if there exist sequences $\{z^k\},\,\{{\rm Var}epsilon^k\}\mathcal{S}ubseteq \mathbb{R}^n$ with $z^k\rightarrow z^*,{\rm Var}epsilon^k\rightarrow0$ such that $${\rm Var}epsilon^k\in \nabla f(z^k)+ \nabla P(z^k)^T N_{\langle}mbda (P(z^k))\quad \forall k.$$ \varepsilonnd{defi} \begin{equation}gin{defi}\label{AM-regular}\rm\cite[Theorem 5.3]{Mehlitz2020} Let $z^*\in {{\cal F} _D}$. Define a set-valued mapping $\mathcal{K}:\mathbb{R}^n\rightrightarrows\mathbb{R}^n$ by means of $$\mathcal{K}(z):=\nabla P(z)^TN_{\langle}mbda(P(z^*))\quad \forall z\in\mathbb{R}^n.$$ We say that $z^*$ is AM-regular if the following condition holds: $$\limsup_{z\rightarrow z^*}\mathcal{K}(z)\mathcal{S}ubseteq \mathcal{K}(z^*),$$ where \begin{equation}gin{eqnarray*}{ \limsup_{z\to z^*}\mathcal{K}(z):=\{y\in\mathbb{R}^n|\varepsilonxists \ z^k\to z^*,y^k\to y, \mbox{ s.t. } y^k\in \mathcal{K}(z^k)\ \forall k \}.} \varepsilonnd{eqnarray*} \varepsilonnd{defi} \begin{equation}gin{prop}{{\rm(\cite[Theorem 3.2 and Theorem 3.9]{Mehlitz2020})}\label{AM-Prop} Let $z^*$ be a local minimizer of MPDC. Then $z^*$ is AM-stationary. {Moreover,} suppose that $z^*$ is AM-regular. Then $z^*$ is M-stationary.} \varepsilonnd{prop} Recently, the following directional version of the limiting normal cone was introduced. \begin{equation}gin{defi}[directional normal cones] \rm (\cite[Definition 2]{Gfrerer SVAA2013} or \cite[Definition 2.3]{Ginchev and Mordukhovich2011}) Let $A \mathcal{S}ubseteq \mathbb{R}^m$ be closed, $x^*\in A$ and $d\in\mathbb{R}^m$. 
The limiting normal cone to $A$ at $x^*$ in direction $d$ is defined by $$N_A(x^*;d):=\left\{{ z}eta\in\mathbb{R}^m {\left|\varepsilonxists t_k\downarrow0, d^k\rightarrow d,{ z}eta^k\rightarrow{ z}eta, \mbox{ s.t. }{ z}eta^k\in\omegaidehat{N}_A(x^*+t_kd^k)\right.}\right\}.$$ \varepsilonnd{defi} From the definition, it is obvious that the limiting normal cone to $A$ at $x^*$ in direction $d=0$ is equal to the limiting normal cone. It is also easy to see that $N_A(x^*;d)=\varepsilonmptyset$ if $d\notin T_A(x^*)$ and $N_A(x^*;d)\mathcal{S}ubseteq N_A(x^*)$ for all $d$. If $A$ is convex, then by \cite[Lemma 2.1]{Gfrerer SIAM Optimal 2014}, the following relationship holds \begin{equation}gin{equation}N_A(x^*;d)=N_A(x^*)\cap \{d\}^\perp \quad \forall d\in T_A(x^*).\label{convexcase} \varepsilonnd{equation} \begin{equation}gin{defi} \rm (\cite[Definition 3.3]{Ye-Zhou2018MP}) Let $A \mathcal{S}ubseteq \mathbb{R}^m$ be closed, $x^*\in A$ and $d\in\mathbb{R}^m$. We say that set $A$ is directionally regular at $x^*$ if $$N_A(x^*;d)=N_A^i(x^*;d) \quad \forall d,$$ where $N_A^i(x^*;d):=\left\{{ z}eta\in\mathbb{R}^m {\left|\forall t_k\downarrow0, \varepsilonxists d^k\rightarrow d,{ z}eta^k\rightarrow{ z}eta, \mbox{ s.t. }{ z}eta^k\in\omegaidehat{N}_A(x^*+t_kd^k)\right.}\right\}.$ If $A$ is directionally regular at any point $x\in A$, we say that the set $A$ is directionally regular. \varepsilonnd{defi} By \cite[Proposition 3.5]{Ye-Zhou2018MP}, any closed convex set is directionally regular. The following calculus rules will be useful. It is a special case of \cite[Proposition 3.3]{Ye-Zhou2018MP}. \begin{equation}gin{prop}{\rm(\cite[Proposition 3.3]{Ye-Zhou2018MP})}\label{proposition} Let $A:=A_1\times A_2\times\cdots\times A_l$, where $A_{i}\mathcal{S}ubseteq \mathbb{R}^{m_i}$ is closed for $i=1,2,\cdots,l$ and $m=m_1+m_2+\cdots+m_l$. Consider a point $x^*=(x^*_1,\cdots,x^*_l)\in A$ and a direction $d=(d_1,\cdots,d_l)\in\mathbb{R}^m$. Moreover, suppose that all except at most one of $A_i$ for $i=1,\cdots,l$ are directionally regular at $x^*_i$, then \begin{equation}gin{eqnarray*} T_A(x^*)= T_{A_1}(x^*_1)\times\cdots\times T_{A_l}(x^*_l), \quad N_A(x^*;d)= N_{A_1}(x^*_1;d_1)\times\cdots\times N_{A_l}(x^*_l;d_l). \varepsilonnd{eqnarray*} \varepsilonnd{prop} \begin{equation}gin{defi}[directional metric subregularity]\label{definition of directional metric subregularity} \rm (\cite[Definition 2.1]{Gfrerer SVAA2013}) Let $F(z):=P(z)-{\langle}mbda$ be a set-valued map induced by $P(z)\in{\langle}mbda$. We say that the set-valued map $F$ is metrically subregular in direction $d$ at $(z^*,0)\in {\rm gph}F$, where ${\rm gph}F:=\{(z,y)|y\in F(z)\}$ is the graph of $F$, if there exist $\kappa>0$ and $\rho,\delta>0$ such that $\mathop{\rm dist}_{F^{-1}(0)}(z)\leq\kappa \mathop{\rm dist}_{{\langle}mbda}(P(z)), \ \forall z\in z^*+V_{\rho,\delta}(d),$ where $ V_{\rho,\delta}(d):=\left\{z\in B_\rho(0)|\|\|d\|z-\|z\|d\|\leq\delta\|z\| \|d\|\right\} $ is the so-called directional neighborhood in direction $d$, and $F^{-1}(y):=\{z|y\in F(z)\}$ denotes the inverse of $F$ at $y$. If $d=0$ in the above definition, then we say $F$ is metrically subregular at $(z^*,0)$. 
\varepsilonnd{defi} According to \cite {Benko etal2019}, when the disjunctive set ${\langle}mbda:=\cup_{l=1}^N {\langle}mbda^l, {\langle}mbda^l=\Pi_{i=1}^m [a_i^l,b_i^l], $ where $ a_i^l,b_i^l $ are given numbers with $ a_i^l\leq b_i^l $, with possibility of $ a_i^l=-\infty$ and $ b_i^l =+\infty$, we call (\ref{disjunctive constraints}) the ortho-disjunctive program. We now recall the following sufficient conditions for directional metric subregularity for the ortho-disjunctive program. \begin{equation}gin{defi}\rm Let $z^*$ be a feasible solution to the ortho-disjunctive program. \label{directional normality} \textcolor{red}{} \begin{equation}gin{itemize} {\rm\item[(a)] ({\cite[Corollary 5.1]{Benko etal2019}})} We say that the quasi-normality holds at $z^*$ in direction {$d\in L_{{\cal F} _D}^{\rm lin}(z^*)$} if there exists no ${ z}eta\neq0$ such that \begin{equation}gin{eqnarray}\label{no-zeta} 0=\nabla P(z^*)^T{ z}eta ,\quad { z}eta\in N_{\langle}mbda(P(z^*);\nabla P(z^*)d), \varepsilonnd{eqnarray} and {\begin{equation}gin{eqnarray*} \varepsilonxists d^k \to d \ t_k\downarrow0\ s.t.\ \ { z}eta_i(P_i(z^*+t_kd^k)-P_i(z^*))>0\ if\ { z}eta_i\neq0. \label{quasi} \varepsilonnd{eqnarray*}} We say that the directional quasi-normality holds at $z^*$ if the quasi-normality holds at $z^*$ in any direction {$d\in L_{{\cal F} _D}^{\rm lin}(z^*)$}. {\rm\item[(b)] ({\cite[Corollary 4.5]{Benko etal2019}})} We say that the pseudo-normality holds at $z^*$ in direction {$d\in L_{{\cal F} _D}^{\rm lin}(z^*)$} if there exists no ${ z}eta\neq0$ such that (\ref{no-zeta}) holds and {\begin{equation}gin{eqnarray*} \varepsilonxists d^k \to d \ t_k\downarrow0\ s.t.\ \langlegle{ z}eta,P(z^*+t_kd^k)-P(z^*){\ranglegle}ngle>0. \label{pseudo} \varepsilonnd{eqnarray*}} We say that the directional pseudo-normality holds at $z^*$ if the pseudo-normality holds at $z^*$ in any direction {$d\in L_{{\cal F} _D}^{\rm lin}(z^*)$}. {\rm\item[(c)] (\cite[Theorem 4.3]{Gfr132})} We say that the first-order sufficient condition for metric subregularity (FOSCMS) holds at $z^*$ in direction {$d\in L_{{\cal F} _D}^{\rm lin}(z^*)$} if there exists no ${ z}eta\neq0$ such that (\ref{no-zeta}) holds. {\rm\item[(d)] (\cite[Theorem 4.3]{Gfr132})} Suppose $P$ is second-order differentiable at $z^*$. We say that the second-order sufficient condition for metric subregularity (SOSCMS) holds at $z^*$ in direction {$d\in L_{{\cal F} _D}^{\rm lin}(z^*)$} if there exists no ${ z}eta\neq0$ such that (\ref{no-zeta}) and the following second-order condition hold $$ {\mathcal{S}um_{i=1}^{{m}} { z}eta_i d^T \nabla^2 P_i(z^*)d \geq 0.}$$ \varepsilonnd{itemize} \varepsilonnd{defi} {Note that the concepts of directional quasi/pseudo-normality for the ortho-disjunctive program as defined in Definition \ref{directional normality} correspond precisely to the ones introduced for the general set-constrained optimization problem in \cite{Bai-Ye-Zhang2019}.} It was shown in {\cite{Benko etal2019,Gfr132}} that the following implication holds: $${{\mbox{ FOSCMS in } d \Longrightarrow}\mbox{ SOSCMS in } d} \Longrightarrow \mbox{ pseudo-normality in } d\Longrightarrow \mbox{ quasi-normality in } d .$$ We refer the reader to higher order sufficient condition for metric subregularity and other sufficient conditions for metric subregularity in \cite{Benko etal2019}. The following result is a directional version of \cite[Corollary 4.1]{Bai-Ye-Zhang2019}. 
\begin{equation}gin{prop}{\rm(\cite[Corollary 4.1]{Bai-Ye-Zhang2019})} Suppose that the quasi-normality holds at $z^*$ in {$d\in L_{{\cal F} _D}^{\rm lin}(z^*)$}. Then the set-valued map $F(z):=P(z)-{\langle}mbda$ is metrically subregular at $(z^*,0)$ in direction $d$. \varepsilonnd{prop} We now summarize some first and second order necessary optimality conditions for MPDC in the following propositions. \begin{equation}gin{prop} \label{Thm3.1}{\rm(\cite[Theorems 3.3]{Gfrerer SIAM Optimal 2014})} Let $z^*$ be a local minimizer of problem (\ref{disjunctive constraints}) and $d\in {\cal C}(z^*)$, where $ {\cal{C}}(z^*):=\{{d\in L_{{\cal F} _D}^{\rm lin}(z^*)}|\nabla f(z^*)d\leq0\}$ is the critical cone at $z^*$. If $F(z):=P(z)-{\langle}mbda$ is metrically subregular at $(z^*,0)$ in direction $d$, then M-stationary condition in direction $d$ holds. That is, there exists ${ z}eta$ such that \begin{equation}gin{equation} 0= \nabla f(z^*)+ \nabla P(z^*)^T { z}eta ,\quad { z}eta\in N_{\langle}mbda (P(z^*); \nabla P(z^*)d). \label{firstorder} \varepsilonnd{equation} Moreover, if $f$ and $P$ are twice differentiable at $z^*$, then there exists ${ z}eta$ satisfying (\ref{firstorder}) such that the second order condition holds: $$d^T\nabla^2_z\mathcal{L}(z^*,{ z}eta) d\geq0,$$ where $\mathcal{L}(z,{ z}eta) := f(z)+ P(z)^T{ z}eta $ is the Lagrangian. \varepsilonnd{prop} We also give a sufficient optimality condition based on S-stationary condition below. One may refer to \cite[Theorems 3.3]{Gfrerer SIAM Optimal 2014} for more general sufficient optimality condition. \begin{equation}gin{prop}\label{prop3.6}{\rm(\cite[Theorems 3.3]{Gfrerer SIAM Optimal 2014}) or \cite[Theorem 4.3]{Mehlitz2020optimization}}) Let $z^*$ be a feasible solution for problem (\ref{disjunctive constraints}) where $f$ and $P$ are twice differentiable at $z^*$. Suppose for each $0\not =d \in {\cal C}(z^*)$, there exists an S-multiplier ${ z}eta$ satisfying S-stationary condition $$ 0= \nabla f(z^*)+ \nabla P(z^*)^T { z}eta, \quad { z}eta\in \omegaidehat{N}_{\langle}mbda(P(z^*)) $$ and the second order condition $$d^T\nabla^2_z\mathcal{L}(z^*,{ z}eta) d> 0.$$ Then there is a constant $C>0$ and $N(z^*)$ a neighborhood of $z^*$ such that the following quadratic growth condition is valid: $$f(z) \geq f(z^*) +C\|z-z^*\|^2 \qquad \forall z\in {{\cal F} _D}\cap N(z^*).$$ In particular, $z^*$ is a strict local minimizer of MPDC. \varepsilonnd{prop} \begin{equation}gin{defi}\rm (see (31) in \cite{GfrererYeZhou}) \label{Defn3.6} Let {$d\in L_{{\cal F} _D}^{\rm lin}(z^*)$}. We say that LICQ in direction $d$ (LICQ($d$)) holds at $z^*$ if $$\nabla P(z^*)^T \lambda =0,\quad \lambda \in {\rm span } N_{\langle}mbda (P(z^*);\nabla P(z^*) d) \Longrightarrow \lambda=0.$$ \varepsilonnd{defi} \begin{equation}gin{prop}{\rm (\cite[Lemma 7]{GfrererYeZhou})} \label{Thm3.2} Let $z^*$ be a local minimizer of problem (\ref{disjunctive constraints}) and $d\in{\cal C}(z^*)$. Suppose that LICQ(d) holds. Then S-stationary condition in direction $d$ holds. That is, there exists ${ z}eta$ such that \begin{equation}gin{equation}0= {\nabla f(z^*)+} \nabla P(z^*)^T{ z}eta ,\quad { z}eta\in \omegaidehat{N}_{T_{\langle}mbda(P(z^*))} (\nabla P(z^*)d).\label{directionalS} \varepsilonnd{equation} Moreover, the multiplier ${ z}eta$ is unique. 
\varepsilonnd{prop} \mathcal{S}ection{Optimality conditions for MPSC from the corresponding ones for MPDC} In this section, we reformulate MPSC as the following disjunctive program and derive the corresponding optimality conditions from Section \ref{sec3}. \begin{equation}gin{eqnarray}\min f(z) \mbox{ s.t. } P(z)\in {\langle}mbda,\label{equivalent-MPSC}\varepsilonnd{eqnarray} where \begin{equation}gin{equation} P(z):=(g(z),h(z), (G(z),H(z))), \quad {\langle}mbda:=\mathbb{R}^p_-\times \{0\}^q \times \Omegamega_{SC}^m \label{definingfunctions} \varepsilonnd{equation} with the switching cone \begin{equation}gin{eqnarray}\label{C-set} \Omegamega_{SC}:=\{(a,b)\in\mathbb{R}^2|ab=0\}. \varepsilonnd{eqnarray} Since the switching cone $\Omegamega_{SC}$ is the union of the two subspaces $\mathbb{R}\times\{0\}$ and $\{0\}\times\mathbb{R}$, the cone ${\langle}mbda$ is the union of $2^m$ convex polyhedral sets. By a straightforward calculation, we can obtain the formulas for various tangent and normal cones for the switching cone $\Omegamega_{SC}$ defined in (\ref{C-set}) as follows. \begin{equation}gin{lema}\label{normal cone lema} For all $a=(a_1,a_2)\in \Omegamega_{SC}$ we have \begin{equation}gin{eqnarray*} T_{\Omegamega_{SC}}(a)=\left\{\begin{equation}gin{array}{ll} \{0\}\times \mathbb{R} & if\ a_1=0,a_2\neq0,\\ {\Omegamega_{SC}} & if\ a_1=a_2=0,\\ \mathbb{R} \times \{0\} & if\ a_1\neq0,a_2=0 \varepsilonnd{array}\right\}, \varepsilonnd{eqnarray*} \begin{equation}gin{eqnarray*} \omegaidehat{N}_{\Omegamega_{SC}}(a)=\left\{\begin{equation}gin{array}{ll} \mathbb{R} \times \{0\} & if\ a_1=0,a_2\neq0,\\ \{0\}\times \{0\} & if\ a_1=a_2=0,\\ \{0\}\times \mathbb{R} & if\ a_1\neq0,a_2=0 \varepsilonnd{array}\right\}, \varepsilonnd{eqnarray*} \begin{equation}gin{eqnarray*} N_{\Omegamega_{SC}}(a) = \left\{\begin{equation}gin{array}{ll} \mathbb{R} \times \{0\} & if\ a_1=0,a_2\neq0,\\ {\Omegamega_{SC}} & if\ a_1=a_2=0,\\ \{0\}\times \mathbb{R} & if\ a_1\neq0,a_2=0 \varepsilonnd{array}\right\}, \varepsilonnd{eqnarray*} \begin{equation}gin{eqnarray*} \omegaidehat{N}_{{T}_{\Omegamega_{SC}} (a)}(d)& =& \left\{\begin{equation}gin{array}{ll} \mathbb{R} \times \{0\} & if\ a_1=0,a_2\neq0 , d_1=0,\\%, d_2\not =0 \{0\} \times \mathbb{R} & if\ a_1\neq0, a_2=0 , d_2 =0,\\%d_1\not =0,, \mathbb{R} \times \{0\} & if\ a_1=a_2=0, d_1=0, d_2\not =0,\\ \{0\} \times \mathbb{R} & if\ a_1=a_2=0, d_1\not =0, d_2 =0,\\ \{0\}\times \{0\}& if \ a_1=a_2=d_1=d_2 =0 , \varepsilonnd{array}\right. \varepsilonnd{eqnarray*} \begin{equation}gin{eqnarray*} N_{\Omegamega_{SC}} (a;d)=N_{\Omegamega_{SC}}^i (a;d)=\left\{\begin{equation}gin{array}{ll} \mathbb{R} \times \{0\} & if\ a_1=0,a_2\neq0, d_1=0,\\% d_2\not =0,\\ \{0\} \times \mathbb{R} & if\ a_1\neq0, a_2=0, d_2 =0,\\%d_1\not =0,,\\ \mathbb{R} \times \{0\} & if\ a_1=a_2=0, d_1=0, d_2\not =0,\\ \{0\} \times \mathbb{R} & if\ a_1=a_2=0, d_1\not =0, d_2 =0,\\ {\Omegamega_{SC}} & if \ a_1=a_2=d_1=d_2 =0 . \varepsilonnd{array}\right. \varepsilonnd{eqnarray*} Hence the switching cone $\Omegamega_{SC}$ is directionally regular. \varepsilonnd{lema} {Since $\mathbb{R}^p_-$ and $ \{0\}^q $ are obviously directionally regular and the switching cone $\Omegamega_{SC}$ is directionally regular, the calculus rules for tangent and directional normal cones of ${\langle}mbda$ as a Cartesian product in Proposition \ref{proposition} hold. 
Hence for any $z^*$ such that $P(z^*)\in {\langle}mbda$, we can obtain the expression for the tangent cone to ${\langle}mbda$ at $P(z^*)$ as follows: \begin{equation}gin{eqnarray}\label{tangentcone} {T_{\langle}mbda(P(z^*))}= T_{\mathbb{R}^p_-}(g(z^*)) \times T_{\{0\}^q}( 0)\times \Pi_{i=1}^m T_{\Omegamega_{SC}} (G_i(z^*),H_i(z^*)). \varepsilonnd{eqnarray}} First we study $\mathcal{Q}$ and $\mathcal{Q}_M$ stationary conditions for MPSC. Let $\mathcal{P}({\cal I}_{GH}^*)$ be the set of all (disjoint) bipartitions of ${\cal I}_{GH}^*$. For fixed $(\begin{equation}ta_1,\begin{equation}ta_2)\in\mathcal{P}({\cal I}_{GH}^*)$, we define the convex polyhedral cone \begin{equation}gin{eqnarray*} Q_{SC}^{\begin{equation}ta_1,\begin{equation}ta_2}:= T_{\mathbb{R}^p_-}(g(z^*)) \times \{0\}^q\times \prod_{i=1}^m \tau_i^{\begin{equation}ta_1,\begin{equation}ta_2} \varepsilonnd{eqnarray*} where $\tau_i^{\begin{equation}ta_1,\begin{equation}ta_2}:=T_{\Omegamega_{SC}}(G_i(z^*),H_i(z^*))$ if $i\in{\cal I}_G^*\cup{\cal I}_H^*$ and \begin{equation}gin{eqnarray*} \tau_i^{\begin{equation}ta_1,\begin{equation}ta_2}:=\left\{\begin{equation}gin{array}{cc} \{0\}\times\mathbb{R}& {\rm if}\ i\in\begin{equation}ta_1,\\ \mathbb{R}\times\{0\}& {\rm if}\ i\in \begin{equation}ta_2.\varepsilonnd{array}\right. \varepsilonnd{eqnarray*} {By (\ref{tangentcone}) and the formula for the tangent cone to $\Omegamega_{SC}$ in Lemma \ref{normal cone lema}, it is easy to see that $Q_{SC}^{\begin{equation}ta_1,\begin{equation}ta_2}$ is a subset of $T_{\langle}mbda(P(z^*))$ as required by $\mathcal{Q}$-stationarity. Moreover, similarly to \cite[Lemma 3]{Benko-Gfrerer2017}, we can show that for any $(\begin{equation}ta_1,\begin{equation}ta_2)\in\mathcal{P}({\cal I}_{GH}^*)$, $$(Q_{SC}^{\begin{equation}ta_1,\begin{equation}ta_2})^\circ \cap (Q_{SC}^{\begin{equation}ta_2,\begin{equation}ta_1})^\circ=\omegaidehat{N}_{\langle}mbda(P(z^*)).$$ Hence according to the discussion in Section \ref{sec3}, $Q_1:=Q_{SC}^{\begin{equation}ta_1,\begin{equation}ta_2}, Q_2:=Q_{SC}^{\begin{equation}ta_2,\begin{equation}ta_1}$ would be a good choice for the $\mathcal{Q}$-stationarity. Similar to \cite[Proposition 4]{Benko-Gfrerer2017}, we can derive the definition of $\mathcal{Q}$-stationarity for MPSC by using the corresponding definitions for the disjunctive program in Definition \ref{definition of Q-statioanry} by using $(Q_1,Q_2):=(Q_{SC}^{\begin{equation}ta_1,\begin{equation}ta_2},Q_{SC}^{\begin{equation}ta_2,\begin{equation}ta_1})$. By Definition \ref{definition of Q-statioanry}, $z^*$ is $\mathcal{Q}$-stationary with respect to $(Q_{SC}^{\begin{equation}ta_1,\begin{equation}ta_2},Q_{SC}^{\begin{equation}ta_2,\begin{equation}ta_1})$ if \begin{equation}gin{eqnarray*} -\nabla f(z^*)& \in & \nabla P(z^*)^T\left((Q_{SC}^{\begin{equation}ta_1,\begin{equation}ta_2})^\circ \cap(\ker\nabla P(z^*)^T+(Q_{SC}^{\begin{equation}ta_2,\begin{equation}ta_1})^\circ\right)\\ &=& \left \{\mathcal{S}um_{i=1}^p\lambda_i^g\nabla g_i(z^*)+\mathcal{S}um_{i=1}^q\lambda_i^h\nabla h_i(z^*)+\mathcal{S}um_{i=1}^m\left(\lambda_i^G\nabla G_i(z^*)+\lambda_i^H\nabla H_i(z^*)\right) \right . \\ && \qquad\qquad \left| (\lambda^h,\lambda^g,\lambda^G,\lambda^H)\in (Q_{SC}^{\begin{equation}ta_1,\begin{equation}ta_2})^\circ \cap(\ker\nabla p(z^*)^T+(Q_{SC}^{\begin{equation}ta_2,\begin{equation}ta_1})^\circ) \right \}. 
\varepsilonnd{eqnarray*}} {Next we are aiming to find a formula for $(Q_{SC}^{\begin{equation}ta_1,\begin{equation}ta_2})^\circ \cap\left(\ker\nabla P(z^*)^T+(Q_{SC}^{\begin{equation}ta_2,\begin{equation}ta_1})^\circ\right) $ in the above. Obviously, we have $(Q_{SC}^{\begin{equation}ta_1,\begin{equation}ta_2})^\circ=N_{\mathbb{R}_-^p} (g(z^*))\times \mathbb{R}^q \times (\prod_{i=1}^m \tau_i^{\begin{equation}ta_1,\begin{equation}ta_2})^\circ$ and the set $(Q_{SC}^{\begin{equation}ta_1,\begin{equation}ta_2})^\circ\cap \left(\ker\nabla P(z^*)^T+(Q_{SC}^{\begin{equation}ta_2,\begin{equation}ta_1})^\circ\right) $ consists of all $\lambda=(\lambda^h, \lambda^g,\lambda^G,\lambda^H)$ such that there exists $\mu=(\mu^h,\mu^g,\mu^G,\mu^H) \in \ker \nabla P(z^*)^T$ and $(\varepsilonta^h,\varepsilonta^g,\varepsilonta^G,\varepsilonta^H) \in (Q_{SC}^{\begin{equation}ta_2,\begin{equation}ta_1})^\circ=N_{\mathbb{R}_-^p} (g(z^*))\times \mathbb{R}^q \times (\prod_{i=1}^m \tau_i^{\begin{equation}ta_2,\begin{equation}ta_1})^\circ$ such that $$\lambda=\mu +\varepsilonta\in (Q_{SC}^{\begin{equation}ta_1,\begin{equation}ta_2})^\circ.$$ We now analyse the following different cases. \begin{equation}gin{itemize} \item Equality constraints: We obtain $\lambda^h=\mu^h+\varepsilonta^h\in \mathbb{R}^q, \varepsilonta^h \in \mathbb{R}^q$, i.e., $\lambda^h,\mu^h \in \mathbb{R}^q$. \item Inequality constraints: For $i \in {\cal I}_g^*${,} we have $\lambda^g_i=\mu^g_i+\varepsilonta^g_i \geq 0, \varepsilonta_i^g \geq 0$ or equivalently $\lambda_i^g \geq \max \{0, \mu_i^g \}$, whereas for $i\in \{1,\dots, p\}\mathcal{S}etminus {\cal I}_g^*$, we obtain $\lambda_i^g=\mu_i^g=0$. \item {$i\in {\cal I}_G^*$}: Since $(\tau_i^{\begin{equation}ta_1,\begin{equation}ta_2})^\circ=\mathbb{R}\times \{0\}$, we obtain $\lambda_i^H=\varepsilonta_i^H=0$ and so $\mu_i^H=0$. \item {$i\in {\cal I}_H^*$}: Similarly as in the previous case{,} we obtain $\lambda_i^G=\mu_i^G=0.$ \item $i\in \begin{equation}ta_1$: Since $(\tau_i^{\begin{equation}ta_1,\begin{equation}ta_2})^\circ=\mathbb{R}\times \{0\}$ and $(\tau_i^{\begin{equation}ta_2,\begin{equation}ta_1})^\circ=\{0\} \times \mathbb{R}$, we have $$(\lambda_i^G,\lambda_i^H)=(\mu_i^G,\mu_i^H)+(\varepsilonta_i^G,\varepsilonta_i^H)\in \mathbb{R}\times \{0\}$$ and $(\varepsilonta_i^G,\varepsilonta_i^H)\in \{0\} \times \mathbb{R}$. Equivalently, we obtain $\lambda_i^H=0$ and $\lambda_i^G=\mu_i^G$. \item $i\in \begin{equation}ta_2$: Similarly as in the previous case{,} we obtain $\lambda_i^G=0$ and $\lambda_i^H=\mu_i^H$. \varepsilonnd{itemize} We now denote two multiplier sets \begin{equation}gin{eqnarray*} \begin{equation}gin{array}{rr}\mathcal{R}_{SC}:=\{(\mu^g,\mu^h,\mu^G,\mu^H) \in\mathbb{R}^p\times\mathbb{R}^q\times\mathbb{R}^m\times\mathbb{R}^m| \mu_i^g=0,i=\{1,\cdots,p\}\mathcal{S}etminus{\cal I}_g^*,\quad \\ \mu_i^G=0,i\in{\cal I}_H^*,\mu_i^H=0,i\in{\cal I}_G^* \}\varepsilonnd{array} \varepsilonnd{eqnarray*}} and \mathcal{S}mall{\begin{equation}gin{eqnarray*} \mathcal{N}_{SC}:= \left\{(\mu^g,\mu^h,\mu^G,\mu^H)\in \mathcal{R}_{SC}\left|\mathcal{S}um_{i=1}^p\mu_i^g\nabla g_i(z^*)+\mathcal{S}um_{i=1}^q\mu_i^h\nabla h_i(z^*)+\mathcal{S}um_{i=1}^m(\mu_i^G\nabla G_i(z^*)+\mu_i^H\nabla H_i(z^*))=0\right.\right\}. \varepsilonnd{eqnarray*}} Based on the discussion above, we can now give the following definition. \begin{equation}gin{defi}\rm Let $z^*\in{\cal F}$. 
We say that $z^*$ is $\mathcal{Q}$-stationary for MPSC (\ref{MPSC}) with respect to $(\begin{equation}ta_1,\begin{equation}ta_2)\in\mathcal{P}({\cal I}_{GH}^*)$ if there exists multipliers $(\lambda^g,\lambda^h,\lambda^G,\lambda^H)\in \mathcal{R}_{SC}$ such that $$0=\nabla f(z^*)+\mathcal{S}um_{i=1}^p\lambda_i^g\nabla g_i(z^*)+\mathcal{S}um_{i=1}^q\lambda_i^h\nabla h_i(z^*)+\mathcal{S}um_{i=1}^m(\lambda_i^G\nabla G_i(z^*)+\lambda_i^H\nabla H_i(z^*))$$ and $(\mu^g,\mu^h,\mu^G,\mu^H)\in \mathcal{N}_{SC}$ such that $$ \lambda_i^g\geq\max\{\mu_i^g,0\},i\in{\cal I}_g^*,\lambda_i^H=0, \lambda^G_i=\mu^G_i, i\in\begin{equation}ta_1,\lambda_i^G=0,\lambda^H_i=\mu^H_i, i\in\begin{equation}ta_2.$$ \varepsilonnd{defi} It is easy to see that for MPSC, an $\mathcal{Q}$-stationary point is also {an M-stationary point}. Hence for MPSC, $\mathcal{Q}$-stationary condition coincides with $\mathcal{Q}_M$-stationary condition. Applying Proposition \ref{Prop3.3}, similar to \cite[Proposition 4 and Theorem 5]{Benko-Gfrerer2017}, we have the following optimality conditions. \begin{equation}gin{prop}Let $z^*$ be a local optimal solution for MPSC. If GGCQ holds at $z^*$, then $z^*$ is $\mathcal{Q}$-stationary with respect to every pair $(\begin{equation}ta_1,\begin{equation}ta_2)\in\mathcal{P}({\cal I}_{GH}^*)$. Conversely, if $z^*$ is $\mathcal{Q}$-stationary with respect to some pair $(\begin{equation}ta_1,\begin{equation}ta_2)\in\mathcal{P}({\cal I}_{GH}^*)$ such that for every $\mu\in \mathcal{N}_{SC}$ there holds \begin{equation}gin{eqnarray} &&\mu_i^G\mu_{i'}^G=0,\mu_i^H\mu_{i'}^H=0\quad\forall(i,i')\in\begin{equation}ta_1\times\begin{equation}ta_2,\label{propassum1}\\ &&\mu_i^G\mu_{i'}^H=0\quad\forall(i,i')\in\begin{equation}ta_1\times\begin{equation}ta_1,\label{propassum2}\\ &&\mu_i^G\mu_{i'}^H=0\quad\forall(i,i')\in\begin{equation}ta_2\times\begin{equation}ta_2.\label{propassum3} \varepsilonnd{eqnarray} Then $z^*$ is S-stationary. \varepsilonnd{prop} \begin{equation}gin{proof} {The first statement follows directly from Proposition \ref{Prop3.3}. We now prove the converse statement.} Suppose that $z^*$ is $\mathcal{Q}$-stationary with respect to some pair $(\begin{equation}ta_1,\begin{equation}ta_2)\in\mathcal{P}({\cal I}_{GH}^*)$. Then by definition, there exist $(\lambda^g,\lambda^h,\lambda^G,\lambda^H)\in \mathcal{R}_{SC}$ and $(\mu^g,\mu^h,\mu^G,\mu^H)\in \mathcal{N}_{SC}$ satisfying the $\mathcal{Q}$-stationary condition. {Since by the definition of $\mathcal{Q}$-stationarity, $\lambda_i^H=0, i\in \begin{equation}ta_1$ and $\lambda_i^G=0, i\in \begin{equation}ta_2$. So if $\lambda_i^G=0, i\in \begin{equation}ta_1$ and $\lambda_i^H=0, i\in \begin{equation}ta_2$, then $z^*$ must be S-stationary in this case.} Otherwise, either there is some $j\in \begin{equation}ta_1$ such that $\lambda_j^G\not =0$ or some $j\in \begin{equation}ta_2$ such that $\lambda_j^H\not =0$. First consider the case when $\lambda_j^G\not =0$ for some $j\in \begin{equation}ta_1$. Set $(\tilde \lambda^g,\tilde\lambda^h,\tilde\lambda^G,\tilde\lambda^H):=(\lambda^g-\mu^g,\lambda^h-\mu^h,\lambda^G-\mu^G,\lambda^H-\mu^H)$. 
Then $$0=\nabla f(z^*)+\mathcal{S}um_{i=1}^p\tilde \lambda_i^g\nabla g_i(z^*)+\mathcal{S}um_{i=1}^q \tilde \lambda_i^h\nabla h_i(z^*)+\mathcal{S}um_{i=1}^m(\tilde \lambda_i^G\nabla G_i(z^*)+\tilde \lambda_i^H\nabla H_i(z^*))$$ and {$$\tilde \lambda_i^g= 0, i\notin {\cal I}_g^*, \tilde \lambda_i^g\geq 0, i\in {\cal I}_g^*, \tilde \lambda_i^G=0,i\in{\cal I}_H^*,\tilde \lambda_i^H=0,i\in{\cal I}_G^*, \tilde \lambda_i^G=0, i\in \begin{equation}ta_1, \tilde \lambda_i^H=0, i\in \begin{equation}ta_2.$$}Further, since $0\not = \lambda_j^G=\mu_j^G$, {then by (\ref{propassum1}) we have $\mu_i^G=0 \ \forall i\in \begin{equation}ta_2$ and by (\ref{propassum2}) we have $\mu_i^H=0\ \forall i\in \begin{equation}ta_1$ .} Consequently, $\tilde \lambda_i^G=0$, and {$\tilde\lambda_i^H=0$} holds for all $i\in \begin{equation}ta_1\cup \begin{equation}ta_2$. Hence $z^*$ is S-stationary. The proof for the case when $\lambda_j^H\not =0$ for some $j\in \begin{equation}ta_2$ is similar and (\ref{propassum1}) and (\ref{propassum3}) are used to derive the result in this case. \varepsilonnd{proof} {Applying Definition \ref{AM-stationary}-\ref{AM-regular} and Lemma \ref{normal cone lema}, we have the following AM-stationary condition and AM-regular for MPSC. \begin{equation}gin{defi}\rm Let $z^*\in{\cal F}$. We say that $z^*$ is AM-stationary of MPSC if there exist sequences $\{z^k\}\mathcal{S}ubseteq{\cal F},\,\{{\rm Var}epsilon^k\}\mathcal{S}ubseteq\mathbb{R}^n$ and multipliers $\left\{(\lambda^{g,k},\lambda^{h,k},\lambda^{G,k},\lambda^{H,k})\right\}\mathcal{S}ubseteq \mathbb{R}^p\times\mathbb{R}^q\times\mathbb{R}^m\times\mathbb{R}^m$ with ${\rm Var}epsilon^k\rightarrow0,z^k\rightarrow z^*$ such that $\forall k$ \begin{equation}gin{eqnarray*} &&\nabla f(z^k)+\mathcal{S}um_{i=1}^p\lambda^{g,k}_i\nabla g_i(z^k)+\mathcal{S}um_{i=1}^q\lambda^{h,k}_i\nabla h_i(z^k)+\mathcal{S}um_{i=1}^m(\lambda^{G,k}_i\nabla G_i(z^k)+\lambda^{H,k}_i\nabla H_i(z^k))={\rm Var}epsilon^k,\\ &&\lambda_i^{g,k}=0,if\ g_i(z^k)<0;\lambda^{g,k}_i\geq 0\ if\ g_i(z^k)=0;\ \lambda^{G,k}_i=0,\ if\ H_i(z^k)=0\neq G_i(z^k);\\ &&\lambda^{H,k}_i=0,\ if\ G_i(z^k)=0\neq H_i(z^k); \ \lambda^{G,k}_i\lambda^{H,k}_i=0,\ if\ G_i(z^k)=H_i(z^k)=0. \varepsilonnd{eqnarray*} \varepsilonnd{defi} \begin{equation}gin{defi}\rm Let $z^*\in{\cal F}$. Define a set-valued mapping $\mathcal{K}:\mathbb{R}^n\rightrightarrows\mathbb{R}^{n}$ by means of \begin{equation}gin{eqnarray*} \mathcal{K}(z):=\left\{\begin{equation}gin{array}{ll} \mathcal{S}um_{i=1}^p\lambda^{g}_i\nabla g_i(z)+\mathcal{S}um_{i=1}^q\lambda^{h}_i\nabla h_i(z)\\+\mathcal{S}um_{i=1}^m(\lambda^{G}_i\nabla G_i(z)+\lambda^{H}_i\nabla H_i(z))\varepsilonnd{array}\left|\begin{equation}gin{array}{ll}(\lambda^{g},\lambda^{h},\lambda^{G},\lambda^{H})\}\mathcal{S}ubseteq\mathbb{R}^p_+\times\mathbb{R}^q\times\mathbb{R}^m\times\mathbb{R}^m,\\ \lambda_i^g=0\ {\rm for}\ i\notin{\cal I}_g^*,\\ \lambda_i^H=0\ {\rm for}\ i\in{\cal I}_G^*,\lambda_i^G=0\ {\rm for}\ i\in{\cal I}_H^*,\\ \lambda_i^G\lambda_i^H=0\ {\rm for}\ i\in{\cal I}_{GH}^*\varepsilonnd{array}\right.\right\}. \varepsilonnd{eqnarray*} We say that $z^*$ is AM-regular if the following condition holds: $$\limsup_{z\rightarrow z^*}\mathcal{K}(z)\mathcal{S}ubseteq \mathcal{K}(z^*).$$ \varepsilonnd{defi}} {Applying Proposition \ref{AM-Prop}, we have the following conclusion. \begin{equation}gin{thm}Let $z^*$ be a local minimizer of MPSC, then $z^*$ is AM-stationary. {Moreover,} suppose that $z^*$ is AM-regular. Then $z^*$ is M-stationary. 
\varepsilonnd{thm}} We now apply Propositions \ref{Thm3.1} and \ref{Thm3.2} to MPSC in the form of (\ref{equivalent-MPSC}). {By the expressions for $T_{{\langle}mbda}(P(z^*))$ in (\ref{tangentcone}) and the expression for the tangent cone of the switching set in Lemma \ref{normal cone lema}}, the linearization cone of the feasible region ${\cal F}$ can be expressed as follows: \begin{equation}gin{eqnarray*} L_{\cal F}^{\rm lin}(z^*)&:=& \big\{d|\nabla P(z^*)d\in T_{{\langle}mbda}(P(z^*)) \big \}\\ &=&\left\{d\in\mathbb{R}^n{\left|\begin{equation}gin{array}{ll} \nabla g_i(z^*)d\leq0 & i\in {\cal I}_g^*,\\ \nabla h_j(z^*)d=0 & j=1,\cdots,q,\\ \nabla G_i(z^*)d=0 & i\in{\cal I}_G^*,\\ \nabla H_i(z^*)d=0 & i\in{\cal I}_H^*,\\ {(\nabla G_i(z^*)d)(\nabla H_i(z^*)d)=0} & i\in{\cal I}_{GH}^* \varepsilonnd{array}\right.}\right\}. \varepsilonnd{eqnarray*} Denote the critical cone at $z^*$ by $ {\cal{C}}_{\cal F}(z^*):=\{d\in L^{\rm lin}_{\cal F}(z^*)|\nabla f(z^*)d\leq0\}. $ Given $d\in L_{\cal F}^{\rm lin}(z^*)$, we define \begin{equation}gin{eqnarray*} &&{\cal I}_g^*(d):=\left \{i\in{\cal I}_g^*\ |\ \nabla g_i(z^*)d=0 \right \}, \\ &&{\cal I}_G^*(d):=\left \{i\in{\cal I}_{GH}^*\ |\ \nabla G_i(z^*)d=0,\ \nabla H_i(z^*)d\neq 0 \right \},\\ &&{\cal I}_H^*(d):= \left \{i\in{\cal I}_{GH}^*\ |\ \nabla G_i(z^*)d\neq 0,\ \nabla H_i(z^*)d=0 \right \},\\ &&{\cal I}_{GH}^*(d):=\left \{i\in{\cal I}_{GH}^*\ |\ \nabla G_i(z^*)d=\nabla H_i(z^*)d=0 \right \}. \varepsilonnd{eqnarray*} Then by the Cartesian product rule in Proposition \ref{proposition}, the expressions for the tangent cone and the directional limiting normal cone to the switching cone in Lemma \ref{normal cone lema} we have, \begin{equation}gin{eqnarray*} N_{\langle}mbda(P(z^*);\nabla P(z^*)d) &=& N_{\mathbb{R}^p_-}(g(z^*); \nabla g(z^*)d) \times N_{\{0\}^q}( h(z^*);\nabla h(z^*)d)\\&&\times \Pi_{i=1}^m N_{\Omegamega_{SC}} ((G_i(z^*),H_i(z^*));(\nabla G_i(z^*)d, \nabla H_i(z^*)d)),\varepsilonnd{eqnarray*} with \begin{equation}gin{eqnarray*} N_{\Omegamega_{SC}} ((G_i(z^*),H_i(z^*));(\nabla G_i(z^*)d, \nabla H_i(z^*)d)) = \left\{( \lambda^G,\lambda^H)\left |\begin{equation}gin{array}{ll} \lambda_i^G=0 & i\in{\cal I}_H^*\cup {\cal I}_H^*(d),\\ \lambda_i^H=0 & i\in{\cal I}_G^*\cup {\cal I}_G^*(d),\\ \lambda_i^G\lambda_i^H=0 & i\in{\cal I}_{GH}^*(d) \varepsilonnd{array}\right. \right\}. \varepsilonnd{eqnarray*} Since $d\in L_{\cal F}^{\rm lin}(z^*)$ implies $\nabla g_i(z^*) d\leq 0 \ \forall i\in {\cal I}_g^*$ and by (\ref{convexcase})$$N_{\mathbb{R}^p_-}(g(z^*); \nabla g(z^*)d)=N_{\mathbb{R}^p_-}(g(z^*)) \cap \{\nabla g(z^*)d\}^\perp,$$ for any $\lambda^g\in N_{\mathbb{R}^p_-}(g(z^*); \nabla g(z^*)d)$, we have that $\lambda_i^g =0 \ \forall i\not \in {\cal I}_g^*(d)$ and $\lambda_i^g \geq 0 \ \forall i \in {\cal I}_g^*(d)$. Hence \begin{equation}gin{eqnarray} {N_{\langle}mbda(P(z^*);\nabla P(z^*)d)} = \left\{( \lambda^g,\lambda^h,\lambda^G,\lambda^H)\left |\begin{equation}gin{array}{ll} \lambda_i^g\geq 0 & i\in {\cal I}_g^*(d),\ \lambda_i^g= 0, \ i \not \in {\cal I}_g^*(d),\\ \lambda_i^G=0 & i\in{\cal I}_H^*\cup {\cal I}_H^*(d),\\ \lambda_i^H=0 & i\in{\cal I}_G^*\cup {\cal I}_G^*(d),\\ \lambda_i^G\lambda_i^H=0 & i\in{\cal I}_{GH}^*(d) \varepsilonnd{array}\right. \right\}.\label{Ncone} \varepsilonnd{eqnarray} Based on the {directional} M-stationary condition (\ref{firstorder}) and directional S-stationary condition (\ref{directionalS}), we now define the directional version of the W, S, M-stationarity for MPSC. 
\begin{equation}gin{defi}\rm\label{doptimalitycondition} Let $z^*$ be a feasible solution of MPSC and $d \in {\cal C}_{\cal F}(z^*)$. We say that $z^*$ is a W-stationary point of MPSC (\ref{MPSC}) in direction $d$ if there exists $(\lambda^g,\lambda^h,\lambda^G,\lambda^H)$ such that \begin{equation}gin{eqnarray} && 0=\nabla f(z^*)+\mathcal{S}um_{i\in{\cal I}_g^*(d)}\lambda^g_i\nabla g_i(z^*)+\mathcal{S}um_{i=1}^q\lambda^h_i\nabla h_i(z^*)+\mathcal{S}um_{i=1}^m(\lambda^G_i\nabla G_i(z^*)+\lambda^H_i\nabla H_i(z^*)), \qquad \label{W-stationary1-d}\\ &&\lambda^g_i\geq 0,\ i\in{\cal I}_g^*(d),\ \lambda^G_i=0,\ i\in{\cal I}_H^*\cup{\cal I}_H^*(d),\ \lambda^H_i=0,\ i\in{\cal I}_G^*\cup{\cal I}_G^*(d).\label{W-stationary4-d} \varepsilonnd{eqnarray} We say that $z^*$ is a M-stationary point of MPSC (\ref{MPSC}) in direction $d$ if there exists $(\lambda^g,\lambda^h,\lambda^G,$ $\lambda^H)$ such that (\ref{W-stationary1-d})--(\ref{W-stationary4-d}) hold and $\lambda^G_i\lambda^H_i=0,\ i\in{\cal I}_{GH}^*(d).$ We say that $z^*$ is a S-stationary point of MPSC (\ref{MPSC}) in direction $d$ if there exists $(\lambda^g,\lambda^h,\lambda^G,$ $\lambda^H)$ such that (\ref{W-stationary1-d})--(\ref{W-stationary4-d}) hold and $\lambda^G_i=\lambda^H_i=0,\ i\in{\cal I}_{GH}^*(d).$ \varepsilonnd{defi} Using the formula in (\ref{Ncone}), we have \begin{equation}gin{eqnarray*} {\rm span }N_{\langle}mbda(P(z^*);\nabla P(z^*)d) = \left\{( \lambda^g,\lambda^h,\lambda^G,\lambda^H) \left |\begin{equation}gin{array}{ll} \lambda_i^g= 0 & \ i \not \in {\cal I}_g^*(d),\\ \lambda_i^G=0 & i\in{\cal I}_H^*\cup {\cal I}_H^*(d),\\ \lambda_i^H=0 & i\in{\cal I}_G^*\cup {\cal I}_G^*(d). \varepsilonnd{array}\right.\right\}. \varepsilonnd{eqnarray*} Hence based on Definition \ref{Defn3.6}, we define the following directional version of the MPSC-LICQ. \begin{equation}gin{defi}\rm\label{definition of MPSC LICQ(d)} Let $z^*$ be a feasible solution of MPSC (\ref{MPSC}) and $d\in L_{\cal F}^{\rm lin}(z^*)$. We say that the MPSC-LICQ in direction $d$ (MPSC-LICQ({$d$})) holds at $z^*$ if and only if the gradients \begin{equation}gin{eqnarray*} \{\nabla g_i(z^*)|i\in{\cal I}_g^*(d)\}\cup\{\nabla h_j(z^*)|j=1,2,\cdots,q\}\cup\{\nabla G_i(z^*)|i\in{\cal I}_G^*\cup{\cal I}_G^*(d)\cup{\cal I}_{GH}^*(d)\}\\\cup\{\nabla H_i(z^*)|i\in{\cal I}_H^*\cup{\cal I}_H^*(d)\cup{\cal I}_{GH}^*(d)\} \varepsilonnd{eqnarray*} are linearly independent. \varepsilonnd{defi} Since ${\cal I}_g^*(0)={\cal I}_g^*$, ${\cal I}_G^*(0)={\cal I}_H^*(0)=\varepsilonmptyset$ and ${\cal I}_{GH}^*(0)={\cal I}_{GH}^*$, it is easy to see that MPSC-LICQ(0) is exactly the MPSC-LICQ. It is easy to see that MPSC is an ortho-disjunctive program. Hence by Definition \ref{directional normality}, it is easy to see that the directional {quasi/pseudo-normality } for constraint system of MPSC (\ref{MPSC}) can be rewritten in the following form. \begin{equation}gin{defi}\label{Defi4.3}\rm\label{definition of MPSC directional normality} Let $z^*$ be a feasible solution of MPSC (\ref{MPSC}). 
$z^*$ is said to be MPSC quasi- or pseudo-normal in direction $ d\in L^{\rm lin}_{\cal F}(z^*)$ if there exists no $(\lambda^g,\lambda^h,\lambda^G,\lambda^H)\neq0$ such that \begin{equation}gin{itemize} \item[\rm(i)]$0=\nabla g(z^*)^T\lambda^g+\nabla h(z^*)^T\lambda^h+\nabla G(z^*)^T\lambda^G+\nabla H(z^*)^T\lambda^H$; \item[\rm(ii)]$\lambda^g_i\geq0,i\in{\cal I}_g^*(d);\lambda^g_i=0,i\notin{\cal I}^*_g(d)$; $\lambda^H_i=0,i\in{\cal I}^*_G\cup{\cal I}^*_G(d)$;$\lambda^G_i=0,i\in{\cal I}^*_H\cup{\cal I}^*_H(d)$; $\lambda^G_i\lambda^H_i=0,i\in{\cal I}^*_{G,H}(d)$; \item[\rm(iii)]$\varepsilonxists d^k\rightarrow d$ and $t_k\downarrow0$ such that \begin{equation}gin{eqnarray*} \left\{\begin{equation}gin{array}{ll} \lambda^g_ig_i(z^*+t_kd^k)>0,\ {\rm if}\ \lambda^g_i\neq0,\\ \lambda^h_ih_i(z^*+t_kd^k)>0,\ {\rm if}\ \lambda^h_i\neq0,\\ \lambda^G_iG_i(z^*+t_kd^k)>0, \ {\rm if}\ \lambda^G_i\neq0,\\ \lambda^H_iH_i(z^*+t_kd^k)>0,\ {\rm if}\ \lambda^H_i\neq0, \varepsilonnd{array}\right. \varepsilonnd{eqnarray*} or \begin{equation}gin{eqnarray*} {\lambda^g}^Tg(z^*+t_kd^k)+ {\lambda^h}^Th(z^*+t_kd^k)+ {\lambda^G}^TG(z^*+t_kd^k)+ {\lambda^H}^TH(z^*+t_kd^k)>0, \varepsilonnd{eqnarray*} respectively. \varepsilonnd{itemize} $z^*$ is said to be directionally quasi- or pseudo-normal if it is quasi- or pseudo-normal in all directions from $L^{\rm lin}_{\cal F}(z^*)$. \varepsilonnd{defi} {Note that MPSC quasi/pseudo-normality in direction $d=0$ coincides with MPSC quasi/pseudo-normality defined as in \cite{Li-Guo}} and when $d\not =0$, the directional one is weaker. We now apply Definition \ref{directional normality} to obtain FOSCMS/SOSCMS for MPSC. \begin{equation}gin{defi}\label{Defi4.4} \rm Let $z^*$ be a feasible solution of MPSC (\ref{MPSC}) and $d\in L_{\cal F}^{\rm lin}(z^*)$. We say that MPSC first order sufficient condition for metric subregularity (MPSC-FOSCMS) in direction $d$ holds at $z^*$ if there exists no $(\lambda^g,\lambda^h,\lambda^G,\lambda^H)\neq0$ such that (i)--(ii) in Definition \ref{definition of MPSC directional normality} holds. \varepsilonnd{defi} {Note that MPSC-FOSCMS in direction $d=0$ coincides with the MPSC-NNAMCQ defined as in \cite{Mehlitz MP} and when $d\not =0$, MPSC-FOSCMS is weaker than MPSC-NNAMCQ.} \begin{equation}gin{defi}\rm\label{second-order sufficient condition} Let $z^*$ be a feasible solution of MPSC (\ref{MPSC}) and $d\in L_{\cal F}^{\rm lin}(z^*)$. We say that MPSC second-order sufficient condition for metric subregularity (MPSC-SOSCMS) in direction $d$ holds at $z^*$ if there exists no $(\lambda^g,\lambda^h,\lambda^G,\lambda^H)\neq0$ such that {(i)--(ii) in Definition \ref{definition of MPSC directional normality} hold} and { $$d^T\nabla^2{\cal L}^0(z^*, \lambda^g,\lambda^h,\lambda^G,\lambda^H) d\geq0,$$ where ${\cal L}^0(z, \lambda^g,\lambda^h,\lambda^G,\lambda^H):=\langlegle \lambda^g, g(z) {\ranglegle}ngle +\langlegle \lambda^h, h(z) {\ranglegle}ngle+ \langlegle \lambda^G, G(z) {\ranglegle}ngle+\langlegle \lambda^H, H(z) {\ranglegle}ngle.$} \varepsilonnd{defi} The following result follows from Propositions \ref{Thm3.1}-\ref{Thm3.2}. {The reader is referred to Figure 3 for sufficient conditions for MPSC quasi-normality. \begin{equation}gin{thm}\label{stationary-d} Let $z^*$ be a local minimizer for MPSC (\ref{MPSC}) and let $d\in {\cal {C}}_{\cal F}(z^*)$. If MPSC-LICQ(d) holds, then $z^*$ is an S-stationary point in direction $d$. If {MPSC quasi-normality} holds at $z^*$ in direction $d$, then $z^*$ is an M-stationary point in direction $d$. 
If $f$ and $F$ are twice differentiable at $z^*$ then there exist an M-multiplier in direction $d$ denoted by $(\lambda^g,\lambda^h,\lambda^G,\lambda^H)$ such that the second-order condition holds: $$d^T\nabla^2_z\mathcal{L}(z^*,\lambda^g,\lambda^h,\lambda^G,\lambda^H)d\geq0,$$ where ${\cal L}(z, \lambda^g,\lambda^h,\lambda^G,\lambda^H):=f(z)+\langlegle \lambda^g, g(z) {\ranglegle}ngle +\langlegle \lambda^h, h(z) {\ranglegle}ngle+ \langlegle \lambda^G, G(z) {\ranglegle}ngle+\langlegle \lambda^H, H(z) {\ranglegle}ngle.$ Conversely, suppose that $z^*$ is a feasible solution to MPSC and for each $0\not = d\in {\cal {C}}_{\cal F}(z^*)$, there is an S-multiplier denoted by $(\lambda^g,\lambda^h,\lambda^G,\lambda^H)$ and the second-order condition $$d^T\nabla^2_z\mathcal{L}(z^*,\lambda^g,\lambda^h,\lambda^G,\lambda^H)d> 0$$ holds, then $z^*$ is a strict local minimizer of MPSC. \varepsilonnd{thm} For MPEC, Gfrerer \cite{Gfrerer SIAM Optimal 2014} pointed out that the extended M-stationary condition (which means the directional M-stationary condition holds at every critical direction) is usually hard to verify and introduced the strong M-stationary condition to build a bridge between M-stationarity and S-stationarity. Similarly we can propose a concept of strong M-stationary condition in a critical direction. In what follows we denote by $r(z^*{;d})$ the rank of the family of gradients \begin{equation}gin{eqnarray*}\label{family of gradients} \begin{equation}gin{array}{cc}\{\nabla g_i(z^*)|i\in {{\cal I}_g^*(d)}\}\cup\{\nabla h_j(z^*)|j=1,\cdots,q\}\cup\{\nabla G_i(z^*)|i\in{\cal I}_G^*\cup {\cal I}_G^*(d) \cup {\cal I}_{GH}^*(d)\}\\ \cup\{\nabla H_i(z^*)|i\in{\cal I}_H^* \cup {\cal I}_H^*(d) \cup{\cal I}_{GH}^*(d)\}. \varepsilonnd{array} \varepsilonnd{eqnarray*} \begin{equation}gin{defi}\rm A triple of index sets $(J_g,J_G,J_H)$ with $J_g\mathcal{S}ubseteq{\cal I}_g^*(d),J_G\mathcal{S}ubseteq{\cal I}_G^*\cup {\cal I}_G^*(d) \cup {\cal I}_{GH}^*(d),J_H\mathcal{S}ubseteq{\cal I}_H^* \cup {\cal I}_H^*(d) \cup{\cal I}_{GH}^*(d)$ is called an MPSC working set in direction $d$ for MPSC (\ref{MPSC}), if $J_G\cup J_H=\{1,2,\cdots,m\}$, $$|J_g|+q+|J_G|+|J_H|=r(z^*{;d}),$$ and the family of gradients \begin{equation}gin{eqnarray*} \{\nabla g_i(z^*)|i\in J_g\}\cup\{\nabla h_j(z^*)|j=1,\cdots,q\}\cup\{\nabla G_i(z^*)|i\in J_G\} \cup\{\nabla H_i(z^*)|i\in J_H\} \varepsilonnd{eqnarray*} is linearly independent. The point $z^*$ is called strongly M-stationary in direction $d$ for MPSC (\ref{MPSC}), if there exists an MPSC working set $(J_g,J_G,J_H)$ in direction $d$ together with $\lambda=(\lambda^g,\lambda^h,\lambda^G,\lambda^H)$, an M-multiplier in direction $d$, satisfying \begin{equation}gin{eqnarray*} &&\lambda^g_i=0,i\in\{1,\cdots,p\}\mathcal{S}etminus J_g,\label{Strongly M 1}\\ &&\lambda^G_i=0,i\in\{1,\cdots,m\}\mathcal{S}etminus J_G,\label{Strongly M 2}\\ &&\lambda^H_i=0,i\in\{1,\cdots,m\}\mathcal{S}etminus J_H,\label{Strongly M 3}\\ &&\lambda^G_i=\lambda^H_i=0,i\in J_G\cap J_H.\label{Strongly M 4} \varepsilonnd{eqnarray*} \varepsilonnd{defi} Similarly as in \cite[Theorem 4.3]{Gfrerer SIAM Optimal 2014}, we have the following result. \begin{equation}gin{thm} Assume that $z^*$ is M-stationary in direction $d\in {\cal C}_{\cal F}(z^*)$ for MPSC (\ref{MPSC}) and assume that there exists some MPSC working set in direction $d$. Then, $z^*$ is strongly M-stationary in direction $d$. 
\varepsilonnd{thm} \begin{equation}gin{thm} Let $z^*$ be feasible for MPSC (\ref{MPSC}) and assume that MPSC-LICQ{(d)} is fulfilled at $z^*$. Then $z^*$ is strongly M-stationary in direction $d$ if and only if it is S-stationary in direction $d$. \varepsilonnd{thm} \begin{equation}gin{proof} The statement follows immediately from the fact that under MPSC-LICQ{({$d$})} there exists exactly one MPSC working set and this set fulfills $J_g={\cal I}_g^*(d),J_G={\cal I}_G^*\cup {\cal I}_G^*(d)\cup {\cal I}_{GH}^*(d),J_H={\cal I}_H^*\cup{\cal I}_H^*(d)\cup {\cal I}_{GH}^*(d)$. \varepsilonnd{proof} In \cite[Example 5.2]{Mehlitz MP}, it was shown that the optimal solution of the following problem is M-stationary but not S-stationary. But we can show that MPSC-LICQ({$d$}) holds at $z^*$ and $z^*$ is S-stationary in any nonzero critical direction. \begin{equation}gin{ex}{\rm\cite[Example 5.2]{Mehlitz MP}} Consider the following optimization problem \begin{equation}gin{eqnarray*} \min&&z_1+z_2^2\\ {\rm s.t.}&&-z_1+z_2\leq0,\quad z_1z_2=0. \varepsilonnd{eqnarray*} Its unique global minimizer is given by $z^*:=(0,0)$. The linearization cone and critical cone of this problem at $z^*$ are given by \begin{equation}gin{eqnarray*} L^{\rm lin}_{\cal F}(z^*)&=&\{d\in\mathbb{R}^2|-d_1+d_2\leq0,d_1d_2=0\},\\ \mathcal{C}_{\cal F}(z^*)&=&\{d\in\mathbb{R}^2|-d_1+d_2\leq0,d_1d_2=0,d_1\leq0\}=\{d\in\mathbb{R}^2|d_1=0,d_2\leq0\}. \varepsilonnd{eqnarray*} Define $g(z):=-z_1+z_2, {G(z):=z_1, H(z):=z_2}$. Let $0\neq d\in\mathcal{C}_{\cal F}(z^*)$, then ${\cal I}_g^*(d),{\cal I}_H^*(d),{\cal I}_{GH}^*(d)$ are all empty but the index set ${\cal I}_G^*(d)=\{1\}$. Hence MPSC-LICQ(d) holds at $z^*$. It is easy to check that $z^*$ is indeed S-stationary in any direction $0\neq d\in\mathcal{C}_{\cal F}(z^*)$. \varepsilonnd{ex} The strong M-stationarity in direction $d$ builds a bridge between M-stationarity in direction $d$ and S-stationarity in direction $d$. We summarize the relations among the various stationarity concepts in Figure 2. 
\begin{equation}gin{figure} \centering \mathcal{S}criptsize \tikzstyle{format}=[rectangle,draw,thin,fill=white] \tikzstyle{test}=[diamond,aspect=2,draw,thin] \tikzstyle{point}=[coordinate,on grid,] \begin{equation}gin{tikzpicture} \node[format](S){S-stationary}; \node[format,below of=S,node distance=10mm](S direction){S-stationary in direction $d$}; \node[format,below of=S direction,node distance=7mm](Strong M direction){strongly M-stationary in direction $d$}; \node[format,below of=Strong M direction,node distance=7mm](M direction){M-stationary in direction $d$}; \node[format,left of=S,node distance=35mm](QM){$\mathcal{Q}_M$-stationary}; \node[format,below of=QM,node distance=14mm](Q){$\mathcal{Q}$-stationary}; \node[format,below of=Q,node distance=10mm](M-stationary){M-stationary}; \node[format,below of=M-stationary,node distance=10mm](linearized){linearized M-stationary}; \node[format,right of=linearized,node distance=35mm](AM-stationary){AM-stationary}; \draw[->](S)--(S direction); \draw[->](S direction)--(Strong M direction); \draw[->](Strong M direction)--(M direction); \draw[->](M direction)--(M-stationary); \draw[->](S)--(QM); \draw[->](QM)--(Q); \draw[->](Q)--(QM); \draw[->](Q)--(M-stationary); \draw[->](linearized)--(M-stationary); \draw[->](M-stationary)--(linearized); \draw[->] (M-stationary)--(AM-stationary); \varepsilonnd{tikzpicture} \centering{Fig.2 Relation among stationarities} \varepsilonnd{figure} \mathcal{S}ection{Error bound and exact penalty for MPSC} In this section we show the error bound property under two types of constraint qualifications: one is based on the local decomposition approach and the other is based on the directional quasi-normality. First we discuss the local decomposition approach. Let $\mathcal{P}({\cal I}_{GH}^*)$ be the set of all (disjoint) bipartitions of ${\cal I}_{GH}^*$. For fixed $(\begin{equation}ta_1,\begin{equation}ta_2)\in\mathcal{P}({\cal I}_{GH}^*)$, define \begin{equation}gin{eqnarray*} {\rm NLP(\begin{equation}ta_1,\begin{equation}ta_2)}\ \min &&f(z)\\ {{\rm s.t.}}&&g(z)\leq 0, h(z)=0,G_i(z)=0,\ i\in{\cal I}_G^*\cup\begin{equation}ta_1,H_i(z)=0,\ i\in{\cal I}_H^*\cup\begin{equation}ta_2. \varepsilonnd{eqnarray*} \begin{equation}gin{defi}\rm\label{New CQ definition} Let $z^*$ be a feasible point of MPSC (\ref{MPSC}). We say that $z^*$ satisfies \begin{equation}gin{itemize} \item MPSC piecewise MFCQ/CRCQ/CPLD/RCRCQ/RCPLD/CRSC, if for each $(\begin{equation}ta_1,\begin{equation}ta_2)$ $\in\mathcal{P}({\cal I}_{GH}^*)$, MFCQ/CRCQ/CPLD/RCRCQ/RCPLD/CRSC holds for $({\rm NLP(\begin{equation}ta_1,\begin{equation}ta_2)})$ at $z^*$. \varepsilonnd{itemize} \varepsilonnd{defi} We now compare the piecewise constraint qualifications just defined with MPSC-MFCQ/-CRCQ/-CPLD as defined in subsection \ref{s2.2}. {It is easy to see that if MFCQ/CRCQ/CPLD holds for (TNLP) at $z^*$ then for any $(\begin{equation}ta_1,\begin{equation}ta_2)\in\mathcal{P}({\cal I}_{GH}(z^*))$, MFCQ/CRCQ/CPLD holds for $({\rm NLP(\begin{equation}ta_1,\begin{equation}ta_2)})$ at $z^*$. Hence MPSC-MFCQ/-CRCQ/-CPLD implies MPSC piecewise MFCQ/CRCQ/CPLD.} {MPSC piecewise MFCQ/CRCQ/CPLD does not imply MPSC-MFCQ/-CRCQ/-CPLD. For example, consider MPSC with constraint system $G(z)=-z_1,H(z)=z_1-z_1^2z_2^2$ at $z^*=(0,0)$. $\nabla G(z)=(-1,0)^T,\nabla H(z)=(1-2z_1z_2^2,-2z_1^2z_2)$. 
For (TNLP), CPLD does not hold at $z^*$, but for $({\rm NLP(\begin{equation}ta_1,\begin{equation}ta_2)})$, LICQ holds at $z^*$, then MFCQ/CRCQ/CPLD holds at $z^*$.} This counter example shows that MPSC piecewise MFCQ/CRCQ/CPLD is strictly weaker than MPSC-MFCQ/-CRCQ/-CPLD. Since piecewise constraint qualifications are required to hold for all pieces, {they} may be harder to verify than the non-piecewise version. {However sometimes, these two concepts may be equivalent. For example, it was shown in \cite{MengweiYe} that MPSC piecewise RCPLD is equivalent to MPSC-RCPLD.} In Theorem \ref{CRSCerrorb} we will show that MPSC piecewise CRSC which is the weakest one among all the piecewise constraint qualifications introduced will imply the error bound property. For this purpose, we first give the following definition for local error bound property of MPSC (\ref{MPSC}). \begin{equation}gin{defi}\rm We say that {\varepsilonm MPSC local error bound} holds around $z^*\in{\cal F}$ if there exists a neighborhood $V(z^*)$ of $z^*$ and $\alpha>0$ such that \begin{equation}gin{eqnarray*} {\rm dist}_{\cal F}(z)\leq\alpha\left(\mathcal{S}um_{i=1}^p\max\{g_i(z),0\} +\mathcal{S}um_{j=1}^q|h_j(z)|+\mathcal{S}um_{i=1}^m\min\{|G_i(z)|,|H_i(z)|\}\right) \ \forall z\in V(z^*). \varepsilonnd{eqnarray*} \varepsilonnd{defi} \begin{equation}gin{thm}\label{CRSCerrorb} If $z^*\in{\cal F}$ verifies MPSC piecewise CRSC, then MPSC local error bound holds in a neighborhood of $z^*$. \varepsilonnd{thm} \begin{equation}gin{proof} Recall that the definition of MPSC piecewise CRSC means that for any $(\begin{equation}ta_1,\begin{equation}ta_2)\in\mathcal{P}({\cal I}_{GH}^*)$, CRSC holds for nonlinear programs $({\rm NLP(\begin{equation}ta_1,\begin{equation}ta_2)})$ at $z^*$. {When $i\in {\cal I}_G^*$, $|H_i(z^*)|>|G_i(z^*)|=0$, there exists a neighborhood $V_G(z^*)$ of $z^*$ such that $|H_i(z)|\geq|G_i(z)|$, then we have $\min\{|G_i(z)|,|H_i(z)|\}=|G_i(z)|$, for $i\in{\cal I}_G^*$ and $z\in V_G(z^*)$. Similarly, there exists a neighborhood $V_H(z^*)$ of $z^*$ such that $\min\{|G_i(z)|,|H_i(z)|\}=|H_i(z)|$, for $i\in{\cal I}_H^*$ and $z\in V_H(z^*)$.} Thus by \cite[Corollary 4.1]{Guo-Zhang-Lin2014} we have that for $(\begin{equation}ta_1,\begin{equation}ta_2)\in \mathcal{P}({\cal I}_{GH}^*)$, there exist a neighborhood $V_{\begin{equation}ta_1,\begin{equation}ta_2}(z^*)$ and a constant $\alpha_{\begin{equation}ta_1,\begin{equation}ta_2}$ such that \begin{equation}gin{eqnarray*} {\rm dist}_{\cal F}(z)&\leq&\alpha_{\begin{equation}ta_1,\begin{equation}ta_2} \left (\mathcal{S}um_{i=1}^p\max\{g_i(z),0\} +\mathcal{S}um_{j=1}^q|h_j(z)|+\mathcal{S}um_{i\in{\cal I}_G^*\cup \begin{equation}ta_1}|G_i(z)|+\mathcal{S}um_{i\in{\cal I}_H^*\cup \begin{equation}ta_2}|H_i(z)|\right ) \\ &=&\alpha_{\begin{equation}ta_1,\begin{equation}ta_2}\left (\mathcal{S}um_{i=1}^p\max\{g_i(z),0\} +\mathcal{S}um_{j=1}^q|h_j(z)|+\mathcal{S}um_{i\in{\cal I}_G^*}\min\{|G_i(z)|,|H_i(z)|\} \right. \\ &&\qquad\qquad\qquad\left .+\mathcal{S}um_{i\in{\cal I}_H^*}\min\{|G_i(z)|,|H_i(z)|\} +\mathcal{S}um_{i\in\begin{equation}ta_1}|G_i(z)|+\mathcal{S}um_{i\in\begin{equation}ta_2}|H_i(z)|\right ), \varepsilonnd{eqnarray*} for all $z\in V_{\begin{equation}ta_1,\begin{equation}ta_2}(z^*)$. 
Taking $\alpha:=\max_{(\begin{equation}ta_1,\begin{equation}ta_2)\in\mathcal{P}({\cal I}_{GH}^*)}\alpha_{\begin{equation}ta_1,\begin{equation}ta_2}$, $V(z^*):=\cap_{(\begin{equation}ta_1,\begin{equation}ta_2)\in\mathcal{P}({\cal I}_{GH}^*)} V_{\begin{equation}ta_1,\begin{equation}ta_2}(z^*)$, we get for all $z\in V(z^*)$ \begin{equation}gin{eqnarray*}{\rm dist}_{\cal F}(z)\leq\alpha\left(\mathcal{S}um_{i=1}^p\max\{g_i(z),0\} +\mathcal{S}um_{j=1}^q|h_j(z)|+\mathcal{S}um_{i\in{\cal I}_G^*}\min\{|G_i(z)|,|H_i(z)|\}\right.\\ \left.+\mathcal{S}um_{i\in{\cal I}_H^*}\min\{|G_i(z)|,|H_i(z)|\} +\mathcal{S}um_{i\in\begin{equation}ta_1}|G_i(z)|+\mathcal{S}um_{i\in\begin{equation}ta_2}|H_i(z)|\right). \varepsilonnd{eqnarray*} Finally, it holds for all $(\begin{equation}ta_1,\begin{equation}ta_2)\in \mathcal{P}({\cal I}_{GH}^*)$. {Set $$\begin{equation}ta_1^*(z):=\{i\in{\cal I}^*_{GH}||G_i(z)|=\min\{|G_i(z)|,|H_i(z)|\}\},\quad \begin{equation}ta_2^*(z):={\cal I}^*_{GH}\mathcal{S}etminus\begin{equation}ta_1^*(z),$$ then $(\begin{equation}ta_1^*(z),\begin{equation}ta_2^*(z))\in \mathcal{P}({\cal I}_{GH}^*)$.} Then we have \begin{equation}gin{eqnarray*}{\rm dist}_{\cal F}(z)\leq\alpha\left(\mathcal{S}um_{i=1}^p\max\{g_i(z),0\} +\mathcal{S}um_{j=1}^q|h_j(z)|+\mathcal{S}um_{i\in{\cal I}_G^*}\min\{|G_i(z)|,|H_i(z)|\}\right.\\ +\mathcal{S}um_{i\in{\cal I}_H^*}\min\{|G_i(z)|,|H_i(z)|\} +\mathcal{S}um_{i\in\begin{equation}ta_1^*(z)}\min\{|G_i(z)|,|H_i(z)|\}\quad\\ \left.+\mathcal{S}um_{i\in\begin{equation}ta_2^*(z)}\min\{|G_i(z)|,|H_i(z)|\}\right)\\ =\alpha\left(\mathcal{S}um_{i=1}^p\max\{g_i(z),0\} +\mathcal{S}um_{j=1}^q|h_j(z)|+\mathcal{S}um_{i=1}^m\min\{|G_i(z)|,|H_i(z)|\}\right). \varepsilonnd{eqnarray*} This completes the proof. \varepsilonnd{proof} Now we discuss the second approach based on the directional quasi-normality. First we need the following calculation. \begin{equation}gin{lema}\label{lema-distance function} Under the $l_1$-norm, the distance functions are given by the following expressions for $a,b\in\mathbb{R}:$ \begin{equation}gin{eqnarray*} &&{\rm dist}_{(-\infty,0]}(a)=\max\{a,0\},\quad {\rm dist}_{\{0\}}(a)=|a|,\\ &&{\rm dist}_{\Omegamega_{SC}}((a,b))=\min\{|a|,|b|\}=\left(\begin{equation}gin{array}{ll} a\ or\ b & a=b\geq0,\\ b\quad &|a|>b\geq0,\\ -b\quad &|a|>-b\geq0,\\ a\quad &|b|>a\geq0,\\ -a&|b|>-a\geq0,\\ -a\ or\ -b &a=b\leq0. \varepsilonnd{array}\right. \varepsilonnd{eqnarray*} \varepsilonnd{lema} \begin{equation}gin{thm} Let $z^*\in{\cal F}$ such that MPSC directional quasi-normality holds. Then MPSC local error bound holds in a neighborhood of $z^*$. \varepsilonnd{thm} \begin{equation}gin{proof} If MPSC directional quasi-normality holds at $z^*$, then by \cite[Corollary 4.1]{Bai-Ye-Zhang2019}, the set-valued map $F(z):=P(z)-{\langle}mbda$ is metrically subregular at $(z^*,0)$. By the definition of metric subregularity, there exist $\alpha\geq0$ and a neighborhood $N(z^*)$ of $z^*$ such that \begin{equation}gin{eqnarray*} {\rm dist}_{F^{-1}(0)}(z)\leq \alpha{\rm dist}_{\langle}mbda(P(z))\quad \forall z\in N(z^*). \varepsilonnd{eqnarray*} Recall the distance functions in Lemma \ref{lema-distance function}, we complete the proof. \varepsilonnd{proof} By Clarke's exact penalty principle \cite[Proposition 2.4.3]{Clark1990}, we obtain the following exact penalty result immediately. \begin{equation}gin{thm} Let $z^*$ be a local optimal solution of MPSC (\ref{MPSC}). 
If either MPSC directional quasi-normality or MPSC piecewise CRSC holds at $z^*$, then $z^*$ is a local optimal solution of the penalized problem: \begin{equation}gin{eqnarray*} \min f(z)+L_f \alpha\left[\mathcal{S}um_{i=1}^p\max\{0,g_i(z\})+\mathcal{S}um_{j=1}^q|h_j(z)|+\mathcal{S}um_{i=1}^m\min\{|G_i(z)|,|H_i(z)|\}\right], \varepsilonnd{eqnarray*} where $\alpha$ is the error bound constant and $L_f$ is the Lipschitz constant of $f$ around $z^*$. \varepsilonnd{thm} \mathcal{S}ection{Conclusions} \begin{equation}gin{figure} \mathcal{S}criptsize \tikzstyle{format}=[rectangle,draw,thin,fill=white] \tikzstyle{test}=[diamond,aspect=2,draw,thin] \tikzstyle{point}=[coordinate,on grid,] \begin{equation}gin{tikzpicture} \node[format] (MPSC-LICQ){MPSC-LICQ}; \node[format,below of=MPSC-LICQ,node distance=7mm] (MPSC-MFCQ){MPSC-MFCQ}; \node[format,below of=MPSC-MFCQ,node distance=7mm] (MPSC-CPLD){MPSC-CPLD}; \node[format,right of=MPSC-LICQ,node distance=35mm] (LCQ){MPSC Linear CQ}; \node[format,below of=LCQ,node distance=7mm](MPSC-CRCQ){MPSC-CRCQ}; \node[format,below of=MPSC-CRCQ,node distance=7mm] (piecewise CRCQ){MPSC piecewise CRCQ}; \node[format,below of=piecewise CRCQ,node distance=7mm](piecewise RCRCQ){MPSC piecewise RCRCQ}; \node[format,below of=piecewise RCRCQ,node distance=12mm] (piecewise RCPLD){MPSC piecewise RCPLD}; \node[format,below of=piecewise RCPLD,node distance=13mm](MPSC-RCPLD){MPSC-RCPLD}; \node[format,left of=MPSC-MFCQ,node distance=30mm](piecewise MFCQ){MPSC piecewise MFCQ}; \node[format,below of=piecewise MFCQ,node distance=7mm](NNAMCQ){MPSC-NNAMCQ}; \node[format,below of=NNAMCQ,node distance=12mm](pseudo){MPSC pseudo-normality}; \node[format,below of=pseudo,node distance=13mm](quasi){MPSC quasi-normality}; \node[format,left of=MPSC-LICQ,node distance=65mm](LICQ-d){MPSC-LICQ(d)}; \node[format,below of=LICQ-d,node distance=13mm](FOSCMS){MPSC-FOSCMS in direction $d$}; \node[format,below of=LICQ-d,node distance=20mm](SOSCMS){MPSC-SOSCMS in direction $d$}; \node[format,below of=SOSCMS,node distance=14mm](pseudo directional){MPSC pseudo-normality in direction $d$}; \node[format,below of=pseudo directional,node distance=12mm](quasi directional){MPSC quasi-normality in direction $d$}; \node[format,below of=quasi directional,node distance=7mm](M in directional){M-stationarity in direction $d$}; \node[format,below of=MPSC-CPLD,node distance=32mm](error bound){Metric subregularity/Error bound}; \node[format,below of=error bound,node distance=7mm](M-stationary){M-stationarity}; \node[format,below of=MPSC-CPLD,node distance=7mm](piecewise CPLD){MPSC piecewise CPLD}; \node[format,below of=piecewise CPLD,node distance=12mm](MPSC-CRSC){MPSC piecewise CRSC}; \node[format,below of=MPSC-RCPLD,node distance=7mm](AM-regular){AM-{regularity}}; \draw[->] (MPSC-LICQ)--(MPSC-MFCQ); \draw[->] (MPSC-LICQ)--(MPSC-CRCQ); \draw[->](MPSC-CRCQ)--(piecewise CRCQ); \draw[->](piecewise CRCQ)--(piecewise RCRCQ); \draw[->](piecewise RCRCQ)--(piecewise RCPLD); \draw[->](MPSC-MFCQ)--(MPSC-CPLD); \draw[->](MPSC-CPLD)--(piecewise CPLD); \draw[->](piecewise CPLD)--(piecewise RCPLD); \draw[->](MPSC-CRCQ)--(MPSC-CPLD); \draw[->](piecewise RCPLD)--(MPSC-CRSC); \draw[->](NNAMCQ)--(piecewise MFCQ); \draw[->](piecewise MFCQ)--(NNAMCQ); \draw[->](NNAMCQ)--(pseudo); \draw[->](pseudo)--(quasi); \draw[->](MPSC-MFCQ)--(piecewise MFCQ); \draw[->](pseudo)--(pseudo directional); \draw[->](quasi)--(quasi directional); \draw[->](pseudo directional)--(quasi directional); \draw[->](quasi directional)--(error bound); \draw[->](quasi 
directional)--(M in directional); \draw[->](M in directional)--(M-stationary); \draw[->](MPSC-CRSC)--(error bound); \draw[->](error bound)--(M-stationary); \draw[->](MPSC-LICQ)--(LICQ-d); \draw[->](LICQ-d)--(FOSCMS); \draw[->](NNAMCQ)--(FOSCMS); \draw[->](FOSCMS)--(SOSCMS); \draw[->](SOSCMS)--(pseudo directional); \draw[->](piecewise CRCQ)--(piecewise CPLD); \draw[->](piecewise RCPLD)--(MPSC-RCPLD); \draw[->](MPSC-RCPLD)--(piecewise RCPLD); \draw[->](MPSC-RCPLD)--(M-stationary); \draw[->](LCQ)--(MPSC-CRCQ); \draw[->] (AM-regular)--(M-stationary); \draw[->] (MPSC-RCPLD)--(AM-regular); \varepsilonnd{tikzpicture} \centering{Fig.3 Relation among CQs, stationary conditions and error bounds for MPSC} \varepsilonnd{figure} In Figure 3, we give a diagram displaying the relations of various constraint qualifications, stationary conditions and error bounds. Note that in the diagram, the arrows pointing to stationary points only hold for local optimal solutions. MPSC Linear CQ means all defining constraint functions $g,h, G,H$ are all affine. The relation between MPSC piecewise RCPLD and MPSC-RCPLD can be checked easily by using definitions. {The proof of relation between MPSC-RCPLD and AM-regularity is similar to \cite[Theorem 4.8]{Andreani etal2019}.} To obtain all other relationships, we use definitions and the results presented here together with the results from \cite{Mehlitz MP,Li-Guo,Bai-Ye-Zhang2019,Gfr132}. From the diagram, we can see that directional conditions in a nonzero critical direction $d$ are weaker than the corresponding nondirectional ones. \begin{equation}gin{thebibliography}{99} \bibitem{Clason}{Clason C., Rund A. and Kunisch K.}: {Nonconvex penalization of switching control of partial differential equations}. Syst. Control Lett. 106, 1--8 (2017) \bibitem{kanzow-mehlitz-steck}{Kanzow C., Mehlitz P. and Steck D.}: {Relaxation schemes for mathematical programmes with switching constraints}. Optim. Methods Softw. (2019) DOI: 10.1080/10556788.2019.1663425 \bibitem{Mehlitz MP} {Mehlitz P.}: {Stationarity conditions and constraint qualifications for mathematical programs with switching constraints with appications to either-or-constrainted programming}. Math. Program. 181, 149--186 (2020) \bibitem{Gugat}{Gugat M.}: {Optimal switching boundary control of a string to rest in finite time}. ZAMM J. Appl. Math. Mech. 88, 283--305 (2008) \bibitem{Hante}{Hante F.M., Sager S.}: Relaxation methods for mixed-integer optimal control of partial differential equations. Comput. Optim. Appl. 55, 197--225 (2013) \bibitem{Seidman}{Seidman T.I.}: Optimal control of a diffusion/reaction/switching system. Evolut. Equ. Control Theory. 2, 723--731 (2013) \bibitem{LuoPangRalph} Luo Z.-Q., Pang J.-S. and Ralph D.: {Mathematical Programs with Equilibrium Constraints}. Cambridge University Press, Cambridge, UK (1996) \bibitem{outrata} Outrata J.V., Kocvara M. and Zowe J.: {Nonsmooth Approach to Optimization problems with Equilibrium Constraints}. Kluwer Academic, Dordrecht (1998) \bibitem{Achitziger} Achtziger W. and Kanzow C.: {Mathematical programs with vanishing constraints: optimality conditions and constraint qualifications}. Math. Program. 114, 69--99 (2008) \bibitem{Hoheisel}{Hoheisel T. and Kanzow C.}: {Stationary conditions for mathematical programs with vanishing constraints using weak constraint qualifications}. J. Math. Anal. Appl. 337, 292--310 (2008) \bibitem{Li-Guo} {Li G. 
and Guo L.}: {Mordukhovich stationarity for mathematical programs with switching constraints under weak constraint qualifications}. http://www.optimization-online.org/DB$\_$HTML/2019/07/7288.html \bibitem{Flegel etal 2007}{Flegel M.L., Kanzow C. and Outrata J.V.}: Optimality conditions for disjunctive programs with application to mathematical programs with equilibrium constraints. Set-Valued Analysis, 15, 139--162 (2007) \bibitem{Gfrerer SIAM Optimal 2014}{Gfrerer H.}: {Optimality conditions for disjunctive programs based on generalized differentiation with application to mathematical programs with equilibrium constraints}. SIAM J. Optim. 24, 898--931 (2014) \bibitem{Bai-Ye-Zhang2019}{ Bai K., Ye J.J. and Zhang J.}: {Directional quasi-/pseudo-normality as sufficient conditions for metric subregularity}. SIAM J. Optim. 29, 2625--2649 (2019) \bibitem{Benko etal2019}{Benko M., $\rm\check{C}$ervinka M. and Hoheisel T.}: {Suffcient conditions for metric subregularity of constraint systems with applications to disjunctive and ortho-disjunctive programs}. {Set-Valued Var. Anal., (2021) DOI: 10.1007/s11228-020-00569-7} \bibitem{Mehlitz2020optimization}{Mehlitz P.}: On the linear independence constraint qualification in disjunctive programming. Optimization, 69, 2241--2277 (2020) \bibitem{Benko-Gfrerer2017}{Benko M. and Gfrerer H.}: On estimating the regular normal cone to constraint systems and stationarity conditions. Optimization, 66, 61--92 (2017) \bibitem{Benko-Gfrerer2018}{Benko M. and Gfrerer H.}: New verifiable stationarity concepts for a class of mathematical programs with disjunctive constraints. Optimization, 67, 1--23 (2018) \bibitem{Gfrerer2019}{Gfrerer H.}: Linearized M-stationarity conditions for general optimization problems. Set-Valued Var. Anal. 27, 819--840 (2019) \bibitem{Andreani etal2019}{Andreani R., Haeser G., Secchin L.D., and Silva P.J.S.}: New sequential optimality conditions for mathematical programs with complementarity constraints and algorithmic consequences. SIAM J. Optim., 29, 3201--3230 (2019) \bibitem{Ramos2021} Ramos A.: Mathematical programs with equilibrium constraints: a sequential optimality condition, new constraint qualifications and algorithmic consequences, Optim. Methods Softw., 36, 1--37 (2021) \bibitem{Mehlitz2020} Mehlitz P.: Asymptotic stationarity and regularity for nonsmooth optimization problems. J. Nonsmooth Anal. Optim., 1, 6575 (2020) \bibitem{MFCQ1967} {Mangasarian O.L. and Fromovitz S.}: {The Fritz John optimality conditions in the presence of equality and inequality constraints}. J. Math. Anal. Appl. 17, 37--47 (1967) \bibitem{Janin1984} {Janin R.}: {Direction derivative of the marginal function in nonlinear programming}. Math. Program. Stud. 21, 110--126 (1984) \bibitem{minchenko-stakhovski2011} {Minchenko L. and Stakhovski S.}: {On relaxed constant rank regularity condition in mathematical programming}. Optimization 60, 429--440 (2011) \bibitem{Qi-Wei2000}{ Qi L. and Wei Z.}: {On the constant positive linear dependence condition and its application to SQP methods}. SIAM J. Optim. 10, 963--981 (2000) \bibitem{Andreani etal 2012MP}{Anderani R., Haeser G., Schuverdt M.L. and Silva P.J.S.}: {A relaxed constant positive linear dependence constraint qualification and applications}. Math. Program. 135, 255--273 (2012) \bibitem{Andreani etal 2012} {Anderani R., Haeser G., Schuverdt M.L. and Silva P.J.S.}: {Two new weak constraint qualifications and applications}. SIAM J. Optim. 
22, 1109--1135 (2012)
\bibitem{Kruger etal2014}{Kruger A.Y., Minchenko L. and Outrata J.V.}: {On relaxing the Mangasarian-Fromovitz constraint qualification}. Positivity 18, 171--189 (2014)
\bibitem{Solodov2011}{Solodov M.V.}: {Constraint qualifications}. Wiley Encyclopedia of Operations Research and Management Science, John Wiley, Hoboken, NJ (2011)
\bibitem{Guo-Zhang-Lin2014}{Guo L., Zhang J. and Lin G.-H.}: {New results on constraint qualifications for nonlinear extremum problems and extensions}. J. Optim. Theory Appl. 163, 737--754 (2014)
\bibitem{Rockafellar}{Rockafellar R.T. and Wets R.J.-B.}: {Variational Analysis}. Springer-Verlag, Berlin (1998)
\bibitem{GfrererYeZhou}{Gfrerer H., Ye J.J. and Zhou J.C.}: {Second-order optimality conditions for non-convex set-constrained optimization problems}. preprint, arXiv:1911.04076
\bibitem{Henrion}{Henrion R. and Outrata J.V.}: On calculating the normal cone to a finite union of convex polyhedra. Optimization 57, 57--78 (2008)
\bibitem{Gfrerer SVAA2013} {Gfrerer H.}: {On directional metric regularity, subregularity and optimality conditions for nonsmooth mathematical programs}. Set-Valued Var. Anal. 21, 151--176 (2013)
\bibitem{Ginchev and Mordukhovich2011}{Ginchev I. and Mordukhovich B.S.}: {On directionally dependent subdifferentials}. C.R. Bulg. Acad. Sci. 64, 497--508 (2011)
\bibitem{Ye-Zhou2018MP}{Ye J.J. and Zhou J.}: {Verifiable sufficient conditions for the error bound property of second-order cone complementarity problems}. Math. Program. 171, 361--395 (2018)
\bibitem{Gfr132}{Gfrerer H.}: {On directional metric subregularity and second-order optimality conditions for a class of nonsmooth mathematical programs}. SIAM J. Optim. 23, 632--665 (2013)
\bibitem{MengweiYe}{Xu M. and Ye J.J.}: Relaxed constant positive linear dependence constraint qualification for disjunctive programs. preprint, {arXiv:1912.12372v1}
\bibitem{Clark1990}{Clarke F.H.}: {Optimization and Nonsmooth Analysis}. Wiley Interscience, New York, 1983; reprinted as Classics Appl. Math. 5, SIAM, Philadelphia, PA (1990)
\end{thebibliography}
\end{document}
\begin{document} \date{\today} \title{A Modified Schur Method for Robust Pole Assignment\\ in State Feedback Control\thanks{This research was supported in part by NSFC under grant 61075119 and the Fundamental Research Funds for the Central Universities (BUPT2013RC0903).}} \blfootnote{Email addresses: [email protected] (Z.C. Guo), [email protected] (Y.F. Cai), [email protected] (J. Qian), [email protected] (S.F. Xu)}
\begin{abstract} Recently, a \textbf{SCHUR} method was proposed in \cite{Chu2} to solve the robust pole assignment problem in state feedback control. It takes the departure from normality of the closed-loop system matrix $A_c$ as the measure of robustness, and attempts to minimize it via the real Schur form of $A_c$. The \textbf{SCHUR} method works well for real poles, but when complex conjugate poles are involved, it does not produce the real Schur form of $A_c$ and can be problematic. In this paper, we put forward a modified Schur method, which improves the efficiency of \textbf{SCHUR} when complex conjugate poles are to be assigned. Besides producing the real Schur form of $A_c$, our approach also leads to a relatively small departure from normality of $A_c$. Numerical examples show that our modified method produces results that are better than, or at least comparable to, those of the \textbf{place} and \textbf{robpole} algorithms, with much lower computational costs. \vskip 2mm \noindent {\bf Key words.} pole assignment, state feedback control, robustness, departure from normality, real Schur form \vskip2mm \noindent {\bf AMS subject classification.} 15A18, 65F18, 93B55. \end{abstract}
\section{Introduction} Let the matrix pair $(A, B)$ denote the dynamic state equation \begin{align}\label{eqstate-equation} \dot{x}(t)=Ax(t)+Bu(t) \end{align} of a time invariant linear system, where $A\in \mathbb{R}^{n\times n}$ and $B\in \mathbb{R}^{n\times m}$ are the open-loop system matrix and the input matrix, respectively. The dynamic behavior of \eqref{eqstate-equation} is governed by the eigen-structure of $A$, especially the poles (eigenvalues). In order to change the dynamic behavior of the open-loop system \eqref{eqstate-equation} in some desirable way (to achieve stability or to speed up the response), one needs to modify the poles of \eqref{eqstate-equation}. Typically, this may be accomplished by the state-feedback control \begin{align}\label{eqfeedback} u(t)=Fx(t), \end{align} where the feedback matrix $F\in \mathbb{R}^{m\times n}$ is to be chosen such that the closed-loop system \begin{align}\label{eqstate-equation2} \dot{x}(t)=(A+BF)x(t)\equiv A_c x(t) \end{align} has the specified poles. Mathematically, the {\em state-feedback pole assignment problem} can be stated as follows.
\noindent{\bf State-Feedback Pole Assignment Problem (SFPA)} Given $A\in\mathbb{R}^{n\times n}$, $B\in\mathbb{R}^{n\times m}$ and a set of $n$ complex numbers $\mathfrak{L}=\{\lambda_1, \lambda_2, \ldots, \lambda_n\}$, closed under complex conjugation, find an $F\in\mathbb{R}^{m\times n}$ such that $\lambda(A+BF)=\mathfrak{L}$, where $\lambda(A+BF)$ is the eigenvalue set of $A+BF$.
A necessary and sufficient condition for the solvability of the {\bf SFPA} for any set $\mathfrak{L}$ of $n$ self-conjugate complex numbers is that $(A, B)$ is controllable, or equivalently, that the controllability matrix $\begin{bmatrix} B&AB&\cdots& A^{n-1}B\end{bmatrix}$ is of full row rank \cite{XU, Won, Wonh}.
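As a quick illustration (not part of the development that follows), this solvability test amounts to a rank computation and can be checked numerically; the following is a minimal numpy sketch with randomly generated data, and the function name is ours.
\begin{verbatim}
import numpy as np

def is_controllable(A, B, tol=None):
    """Check controllability of (A, B) via the rank of [B, AB, ..., A^{n-1}B]."""
    n = A.shape[0]
    blocks, M = [B], B
    for _ in range(n - 1):
        M = A @ M              # next block A^k B
        blocks.append(M)
    C = np.hstack(blocks)      # controllability matrix
    return np.linalg.matrix_rank(C, tol=tol) == n

# Example: a randomly generated pair (A, B) is controllable with probability one.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
B = rng.standard_normal((5, 2))
print(is_controllable(A, B))   # expected: True
\end{verbatim}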
Many algorithms have been put forward to solve the {\bf SFPA}, such as the invariant subspace method \cite{PCK1}, the QR-like method \cite{MP2, MP3}, etc. We refer readers to \cite{ASC, BD, Chu0, Ka, RM, PM, Var, FO} for some other approaches.
When $m>1$, the solution to the {\bf SFPA} is generally not unique. We may then utilize the freedom in $F$ to achieve some other desirable properties of the closed-loop system. In applications, one desirable characteristic in system design is that the eigenvalues of the closed-loop system matrix $A_c$ are insensitive to perturbations, which leads to the following {\em state-feedback robust pole assignment problem}:
\noindent{\bf State-Feedback Robust Pole Assignment Problem (SFRPA)} Find a solution $F\in\mathbb{R}^{m\times n}$ to the {\bf SFPA}, such that the closed-loop system is robust, that is, the eigenvalues of $A_c$ are as insensitive to perturbations on $A_c$ as possible.
The key to solving the {\bf SFRPA} is to choose an appropriate measure of robustness formulated in quantitative form. Some measures can be found in \cite{XU, KNV, BN, Chu2, Dic}, such as the \texttt{condition number measure} $\kappa_F(X)=\|X\|_F\|X^{-1}\|_F$, where $X$ is the eigenvector matrix of $A_c$, the \texttt{departure from normality} $\Delta_F(A_c)=\sqrt{\|A_c\|_F^2-\sum_{j=1}^{n}|\lambda_j|^2}$, and so on. Ramar and Gourishankar \cite{RG} made an early contribution to the {\bf SFRPA}, and since then various optimization methods have been proposed based on different measures \cite{BN, CB, Chu2, KNV, Dic, Tits, LY}. The most classical methods are perhaps those proposed by Kautsky, Nichols and Van Dooren in \cite{KNV}, where $\kappa_F(X)$ is used as the measure of robustness of the closed-loop system matrix. However, Method $0$ in \cite{KNV} may fail to converge, Method $1$ may suffer from slow convergence, and Method $2/3$ may not perform well on ill-conditioned problems. Based on Method $0$ in \cite{KNV}, Tits and Yang \cite{Tits} proposed a method for solving the {\bf SFRPA} by trying to maximize the absolute value of the determinant of the eigenvector matrix $X$. The optimization processes are iterative, and hence generally expensive.
Recently, Chu \cite{Chu2} put forward a Schur-type method for the {\bf SFRPA}, which attempts to minimize the departure from normality of the closed-loop system matrix $A_c$ via the Schur decomposition of $A_c$. It computes the matrices $X$ and $T$ column by column, where $A_c=XTX^{-1}$, $X,T$ are real and $T$ is upper quasi-triangular, such that the strictly block upper triangular part of $T$ is minimized in each step. If $\lambda_1,\dots,\lambda_n$ are all real, \textbf{SCHUR} \cite{Chu2} will generate an orthogonal matrix $X$, that is, $A_c=XTX^{-1}$ is the real Schur decomposition of $A_c$. This implies that the departures from normality of $A_c$ and $T$ are the same. Hence the strategy of minimizing the departure from normality of $T$ also applies to $A_c$. However, in the case of complex conjugate poles, \textbf{SCHUR} cannot produce an orthogonal $X$, which means that the departure from normality of $A_c$ is generally not identical to that of $T$. Hence, although it attempts to optimize the departure from normality of $T$, that of $A_c$ may still be large.
In this paper, we propose a modified Schur method based on \textbf{SCHUR} \cite{Chu2}, where the poles are assigned via the real Schur decomposition $A_c=XTX^{\top}$, with $X$ being real orthogonal and $T$ being real upper quasi-triangular.
In each step (assigning a real pole or a pair of conjugate poles), one optimization problem arises for the purpose of minimizing the departure from normality of $T$. When assigning a real pole, we improve the efficiency of \textbf{SCHUR} by computing the SVD of a matrix, instead of computing the GSVD of a matrix pencil. When assigning a pair of conjugate poles, by exploring the properties of the posed optimization problem, we provide a refined way to obtain a suboptimal solution. Numerical examples show that our method outperforms \textbf{SCHUR} when complex conjugate poles are involved. We also compare our method with the MATLAB functions \textbf{place} (an implementation of Method 1 in \cite{KNV}), \textbf{robpole} (an implementation of the method in \cite{Tits}) and the \textbf{O-SCHUR} algorithm (an implementation of an optimization method in \cite{Chu2}) on some benchmark examples and randomly generated examples; the numerical results show that our method is comparable in accuracy and robustness, while having lower computational costs.
The paper is organized as follows. In Section 2, we give some preliminaries which will be used in subsequent sections. Our method is developed in Section 3, including both the real case and the complex conjugate case. Numerical results are presented in Section 4. Some concluding remarks are finally drawn in Section 5.
\section{Preliminaries} In this section, we briefly review the parametric solutions to the {\bf SFPA} and the departure from normality.
\subsection{Solutions to the SFPA} The parametric solutions to the {\bf SFPA} can be expressed in several ways. In this paper, as in \cite{Chu2}, we formulate them by using the real Schur decomposition of $A_c=A+BF$. Assume that the real Schur decomposition of $A+BF$ is \begin{equation}\label{eqrealschur} A + BF = XTX^{\top}, \end{equation} where $X\in \mathbb{R}^{n\times n}$ is orthogonal and $T\in \mathbb{R}^{n\times n}$ is upper quasi-triangular with only $1\times 1$ and $2\times2$ diagonal blocks. Without loss of generality, we may assume that $B$ is of full column rank. Let \begin{equation}\label{eqqrofb} B=Q\begin{bmatrix}R\\ 0\end{bmatrix}=\begin{bmatrix}Q_1&Q_2\end{bmatrix}\begin{bmatrix}R\\ 0\end{bmatrix}=Q_1R \end{equation} be the QR decomposition of $B$, where $Q\in\mathbb{R}^{n\times n}$ is orthogonal, $Q_1\in \mathbb{R}^{n\times m}$, and $R\in\mathbb{R}^{m\times m}$ is nonsingular and upper triangular. It follows from \eqref{eqrealschur} that \begin{equation}\label{eqchangingofrealschur} AX + BFX - XT =0. \end{equation} Pre-multiplying \eqref{eqchangingofrealschur} by $\diag(R^{-1}, I_{n-m})\begin{bmatrix}Q_1&Q_2\end{bmatrix}^{\top}$ on both sides gives \begin{equation}\label{eqsolve} \left\{ \begin{array}{l} R^{-1}Q_1^{\top}AX + FX - R^{-1}Q_1^{\top}XT=0, \\ Q_2^{\top}(AX -XT)=0. \end{array}\right. \end{equation} Consequently, if we get an orthogonal matrix $X$ and an upper quasi-triangular matrix $T$ from the second equation of \eqref{eqsolve}, then a solution $F$ to the {\bf SFPA} will be obtained immediately from the first equation of \eqref{eqsolve} as \begin{equation} \label{eqsolveoff} F=R^{-1}Q_1^{\top}(XTX^{\top}-A).
\end{equation}
\subsection{Departure from normality} In this paper, we adopt the departure from normality of $A_c=A+BF$ as the measure of robustness of the closed-loop system matrix, as in \cite{Chu2}. It is defined as (see \cite{Hen, SSun}) \[ \Delta_F(A_c)=\sqrt{\|A_c\|_F^2-\sum_{j=1}^{n}|\lambda_j|^2}, \] where $\lambda_1,\dots,\lambda_n$ are the poles to be assigned, and hence the eigenvalues of $A_c$. Now let $D$ be the block diagonal part of $T$ with only $1\times 1$ and $2\times 2$ blocks on its diagonal. Each $1\times1$ block of $D$ contains a real eigenvalue $\lambda_j$ of $T$, while each $2\times 2$ block of $D$ corresponds to a pair of conjugate eigenvalues $\lambda_{j}=\alpha_j+ i \beta_j, \lambda_{j+1}=\bar{\lambda}_j$ and is of the form $D_{j}=\begin{bmatrix}\begin{smallmatrix}\alpha_j&\delta_j\beta_j\\-\frac{\beta_j}{\delta_j}&\alpha_j \end{smallmatrix}\end{bmatrix}\in \mathbb{R}^{2\times2}$ with $\delta_j\beta_j\ne 0$, where $\delta_j$ is some real number. Let $N=T-D=\begin{bmatrix}\breve{v}_1&\breve{v}_2&\cdots&\breve{v}_n\end{bmatrix}$ be the strictly upper quasi-triangular part of $T$, with $\breve{v}_k=\begin{bmatrix}v_k^\top&0\end{bmatrix}^\top$, $v_k\in \mathbb{R}^{k-1}$ or $\mathbb{R}^{k-2}$. Direct calculations give rise to \begin{equation}\label{departure} \Delta_F^2(A_c)=\Delta_F^2(T)=\|N\|_F^2+\sum_{j}(\delta_j-\frac{1}{\delta_j})^2\beta_j^2, \end{equation} where the summation is over all $2\times 2$ blocks of $D$. When all poles $\lambda_1,\dots,\lambda_n$ are real, the second part of $\Delta_{F}^2(A_c)$ in \eqref{departure} vanishes. However, when some poles are non-real, not only the strictly block upper triangular part $N$ but also the block diagonal part $D$ contributes to the departure from normality. When some $|\delta_j|$ is large or close to zero, the second term can be quite large, which means that it is not negligible.
\section{Solving the SFRPA via the real Schur form} In this section, we solve the {\bf SFRPA} by finding an orthogonal matrix $X=\begin{bmatrix}x_1&x_2&\cdots&x_{n}\end{bmatrix}$ and an upper quasi-triangular matrix $T=D+N$ satisfying the second equation of \eqref{eqsolve}, such that $\Delta_{F}^2(A_c)$ in \eqref{departure} is minimized. Obtaining a globally optimal solution to the problem $\min\{\Delta_F^2(A_c)\}$ is rather difficult. In this paper, we propose an efficient method to get a suboptimal solution, which balances the contributions of $N$ and $D$ to the departure from normality. As in \cite{Chu2}, we compute the matrices $X$ and $T$ column by column. For any matrix $S$, we denote its range space and null space by $\mathcal{R}(S)$ and $\mathcal{N}(S)$, respectively. Assume that we have already obtained $X_{j}=\begin{bmatrix}x_1&x_2&\cdots&x_{j}\end{bmatrix} \in\mathbb{R}^{n\times j}$ and $T_{j} \in\mathbb{R}^{j\times j}$ satisfying \begin{align}\label{eq1} Q_2^\top (AX_{j} - X_{j}T_{j})=0,\qquad X_{j}^\top X_{j}=I_{j}, \end{align} where $T_j$ is upper quasi-triangular and $\lambda(T_j)=\{\lambda_k\}_{k=1}^{j}$. We then assign the pole $\lambda_{j+1}$ (if $\lambda_{j+1}$ is real) or the poles $\lambda_{j+1},\bar{\lambda}_{j+1}$ (if $\lambda_{j+1}$ is non-real) to get $x_{j+1}$, $\breve{v}_{j+1}$ or $x_{j+1}, x_{j+2}$, $\breve{v}_{j+1}, \breve{v}_{j+2}$, such that the departure from normality of $A_c$ is optimized in some sense. This procedure is repeated until all columns of $X$ and $T$ are obtained, and eventually a solution $F$ to the {\bf SFRPA} is computed from \eqref{eqsolveoff}.
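For concreteness, once the full $X$ and $T$ have been computed, the feedback matrix and the departure from normality can be evaluated directly from \eqref{eqsolveoff} and the definition above. The following is a minimal numpy sketch (illustration only; the function names are ours and are not part of any existing package):
\begin{verbatim}
import numpy as np

def feedback_from_schur(A, B, X, T):
    # F = R^{-1} Q1^T (X T X^T - A), where B = Q1 R is the reduced QR factorization of B
    Q1, R = np.linalg.qr(B)        # default 'reduced' mode: Q1 is n-by-m, R is m-by-m
    return np.linalg.solve(R, Q1.T @ (X @ T @ X.T - A))

def departure_from_normality(Ac, poles):
    # Delta_F(Ac) = sqrt(||Ac||_F^2 - sum_j |lambda_j|^2); clipped at zero for rounding
    d2 = np.linalg.norm(Ac, 'fro')**2 - np.sum(np.abs(np.asarray(poles, dtype=complex))**2)
    return np.sqrt(max(d2, 0.0))
\end{verbatim}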
In the following subsections we distinguish two cases, according to whether $\lambda_{j+1}$ is real or non-real. Before that, we show how to get the first one (two) column(s) of $X$ and $T$.
If $\lambda_1$ is real, the first column of $T$ is then $\lambda_1e_1$, or $T_1=\lambda_1$, and the first column $x_1$ of $X$ must satisfy \begin{equation}\label{initial} Q_2^{\top}(A-\lambda_1I_n)x_1=0, \end{equation} and $\|x_1\|_2=1$. Let the columns of $S\in\mathbb{R}^{n\times r}$ form an orthonormal basis of $\mathcal{N}(Q_2^{\top}(A-\lambda_1I_n))$; then $x_1$ can be chosen to be any unit vector in $\mathcal{R}(S)$. We take \begin{align}\label{x1real} x_1=(S\begin{bmatrix}1& \ldots&1\end{bmatrix}^{\top})/{\|S\begin{bmatrix}1& \ldots&1\end{bmatrix}^{\top}\|_2} \end{align} in our algorithm as in \cite{Chu2}, and then initially set $X_1=x_1,T_1=\lambda_1$.
If $\lambda_1=\alpha_1+i\beta_1$ is non-real, to get the real Schur form, we should place $\bar{\lambda}_1=\alpha_1-i\beta_1$ together with $\lambda_1$. Notice that $T_2$ is of the form $T_2=\begin{bmatrix}\begin{smallmatrix}\alpha_{1} & \delta_1 \beta_{1} \\ -\beta_{1}/\delta_1 & \alpha_{1}\end{smallmatrix}\end{bmatrix}$ with $0\ne\delta_1\in\mathbb{R}$, so the first two columns $x_1,x_2\in\mathbb{R}^n$ of $X$ should be chosen to satisfy \begin{align}\label{x1x2} Q_2^{\top}(A\begin{bmatrix}x_1 &x_2\end{bmatrix}-\begin{bmatrix}x_1 &x_2\end{bmatrix}T_2)=0, \quad x_1^\top x_2=0, \quad \|x_1\|_2=\|x_2\|_2=1, \end{align} so that $(\delta_1-\frac{1}{\delta_1})^2\beta_1^2$ is minimized, which obviously achieves its minimum when $\delta_1=1$. Let the columns of $S\in\mathbb{C}^{n\times r}$ form an orthonormal basis of $\mathcal{N}(Q_2^{\top}(A-\lambda_1I_n))$, and let $S_R=\mbox{Re}(S)$, $S_I=\mbox{Im}(S)$. Direct calculations show that such $x_1,x_2$ satisfying \eqref{x1x2} with $\delta_1=1$ can be obtained by \begin{align}\label{x1x2get} x_1&=\begin{bmatrix}S_{R}& -S_I\end{bmatrix}\begin{bmatrix}\gamma_1&\ldots& \gamma_r& \zeta_1& \ldots& \zeta_r\end{bmatrix}^{\top}, \qquad x_2&=\begin{bmatrix}S_I& S_R\end{bmatrix}\begin{bmatrix}\gamma_1&\ldots& \gamma_r& \zeta_1& \ldots& \zeta_r\end{bmatrix}^{\top}, \end{align} with $x_1^\top x_2=0$ and $\|x_1\|_2=\|x_2\|_2=1$. Clearly, \begin{equation}\label{initial_com} \begin{split} &x_1^{\top}x_2+x_2^{\top}x_1\\ =&\begin{bmatrix}\gamma_1&\ldots& \gamma_r&\zeta_1& \ldots& \zeta_r\end{bmatrix} \begin{bmatrix}S_R^{\top}S_I+S_I^{\top}S_R&S_R^{\top}S_R-S_I^{\top}S_I\\ S_R^{\top}S_R-S_I^{\top}S_I&-(S_R^{\top}S_I+S_I^{\top}S_R) \end{bmatrix} \begin{bmatrix}\gamma_1&\ldots& \gamma_r& \zeta_1& \ldots& \zeta_r\end{bmatrix}^{\top},\\ &x_1^{\top}x_1-x_2^{\top}x_2\\ =&\begin{bmatrix}\gamma_1&\ldots& \gamma_r& \zeta_1& \ldots& \zeta_r\end{bmatrix} \begin{bmatrix}S_R^{\top}S_R-S_I^{\top}S_I&-(S_R^{\top}S_I+S_I^{\top}S_R)\\ -(S_R^{\top}S_I+S_I^{\top}S_R)&S_I^{\top}S_I-S_R^{\top}S_R \end{bmatrix} \begin{bmatrix}\gamma_1&\ldots& \gamma_r& \zeta_1& \ldots& \zeta_r\end{bmatrix}^{\top}. \end{split} \end{equation} Note that the two matrices appearing in the above two equations are symmetric Hamiltonian matrices with special properties. We therefore present two simple results on symmetric Hamiltonian matrices, which will be used here and again when assigning complex conjugate poles. Both results can be verified directly, and we omit the proofs.
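As a quick numerical sanity check of the two lemmas stated below (for illustration only; the construction uses randomly generated symmetric matrices and is not part of the algorithm), the claimed relations can be verified as follows:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)); A = (A + A.T) / 2   # symmetric A
B = rng.standard_normal((n, n)); B = (B + B.T) / 2   # symmetric B
M = np.block([[A, B], [B, -A]])
N = np.block([[B, -A], [-A, -B]])

lam, V = np.linalg.eigh(M)                   # M is symmetric
x, y = V[:n, -1], V[n:, -1]                  # eigenvector for the largest eigenvalue
W = np.column_stack([np.r_[x, y], np.r_[-y, x]])
R = (np.sqrt(2) / 2) * np.array([[1.0, -1.0], [-1.0, -1.0]])
D = np.diag([lam[-1], -lam[-1]])

print(np.allclose(M @ W, W @ D))             # first relation of the first lemma below
print(np.allclose(N @ (W @ R), W @ R @ D))   # second relation of the first lemma below
\end{verbatim}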
\begin{Lemma}\label{Lemma3.1} Let $A, B\in \mathbb{R}^{n\times n}$ satisfying $A^\top=A, B^\top=B.$ If $\lambda$ is an eigenvalue of $\begin{bmatrix}A&B\\B&-A\end{bmatrix}$ and $\begin{bmatrix}x^\top&y^\top\end{bmatrix}^\top$ is the corresponding eigenvector, then \begin{align*} \begin{bmatrix}A&B\\B&-A\end{bmatrix} \begin{bmatrix}x&-y\\y&x\end{bmatrix}=\begin{bmatrix}x&-y\\y&x\end{bmatrix} \begin{bmatrix}\lambda& \\ & -\lambda\end{bmatrix}, \end{align*} and \begin{align*} \begin{bmatrix}B&-A\\-A&-B\end{bmatrix}\begin{bmatrix}x&-y\\y&x\end{bmatrix} \begin{bmatrix}\frac{\sqrt{2}}{2}&-\frac{\sqrt{2}}{2}\\-\frac{\sqrt{2}}{2}&-\frac{\sqrt{2}}{2}\end{bmatrix} =\begin{bmatrix}x&-y\\y&x\end{bmatrix} \begin{bmatrix}\frac{\sqrt{2}}{2}&-\frac{\sqrt{2}}{2}\\-\frac{\sqrt{2}}{2}&-\frac{\sqrt{2}}{2}\end{bmatrix} \begin{bmatrix}\lambda& \\ & -\lambda\end{bmatrix}. \end{align*} \end{Lemma} \begin{Lemma}(Property of Two Hamiltonian Systems)\label{Lemma3.2} Let $A, B\in \mathbb{R}^{n\times n}$ be symmetric, and let $\begin{bmatrix}A&B\\B&-A\end{bmatrix}=U\diag(\Theta, -\Theta) U^\top$ be the spectral decomposition, where $\Theta=\diag(\theta_1, \theta_2, \ldots, \theta_n)$ with $\theta_1\geq \theta_2\geq\ldots \geq\theta_n\geq 0$. If the $j$-th column $u_{j}$ and the $(n+j)$-th column $u_{n+j}$ of $U$ satisfy $u_{n+j}=\begin{bmatrix}&-I_n\\I_n&\end{bmatrix}u_{j}$, then $\begin{bmatrix}B&-A\\-A&-B\end{bmatrix}=U\begin{bmatrix}0&-\Theta\\-\Theta&0\end{bmatrix}U^\top$. \end{Lemma} Applying Lemma \ref{Lemma3.2} to the two symmetric Hamiltonian systems which appeared in \eqref{initial_com}, that is \begin{align*} \begin{bmatrix}S_R^{\top}S_I+S_I^{\top}S_R&S_R^{\top}S_R-S_I^{\top}S_I\\ S_R^{\top}S_R-S_I^{\top}S_I&-(S_R^{\top}S_I+S_I^{\top}S_R) \end{bmatrix}=&U\diag(\Theta, -\Theta) U^\top,\\ \begin{bmatrix}S_R^{\top}S_R-S_I^{\top}S_I&-(S_R^{\top}S_I+S_I^{\top}S_R)\\ -(S_R^{\top}S_I+S_I^{\top}S_R)&S_I^{\top}S_I-S_R^{\top}S_R \end{bmatrix}=&U\begin{bmatrix}0&-\Theta\\-\Theta&0\end{bmatrix}U^{\top}, \end{align*} then if we let \begin{align}\label{gammamunu} \begin{bmatrix}\gamma_1&\ldots& \gamma_r&\zeta_1& \ldots& \zeta_r\end{bmatrix}^\top=U \begin{bmatrix}\mu_1&\ldots&\mu_r& \nu_1&\ldots&\nu_r\end{bmatrix}^{\top}, \end{align} $x_1^{\top}x_2+x_2^{\top}x_1=\sum_{j=1}^r\theta_j(\mu_j^2-\nu_j^2)$ and $x_1^{\top}x_1-x_2^{\top}x_2=-2\sum_{j=1}^r\theta_j\mu_j\nu_j$ follow. Without loss of generality, we may assume that $\theta_1\geq\theta_2\geq\ldots\geq\theta_r\geq0$, then by taking \begin{subequations}\label{munuinitial} \begin{align} &\mu_3=\nu_3=\ldots=\mu_r=\nu_r=0, \quad \mu_1=-\nu_1=\sqrt{\frac{\theta_2}{\theta_1}\mu_2^2}, \\ &\mu_2=\nu_2=\frac{1}{\|\begin{bmatrix}S_R&-S_I\end{bmatrix}U \begin{bmatrix}\sqrt{\frac{\theta_2}{\theta_1}}&1&0&\cdots&0&-\sqrt{\frac{\theta_2}{\theta_1}}&1&0&\cdots&0\end{bmatrix}^{\top}\|_2}, \end{align} \end{subequations} it is easy to verify that \eqref{x1x2} holds with $x_1$ and $x_2$ computed by \eqref{x1x2get} and \eqref{gammamunu}. Hence, we can still choose initial vectors $x_1$ and $x_2$, so that $(\delta_1-\frac{1}{\delta_1})^2\beta_1^2=0$. We then initially set \begin{align}\label{initialnonreal} X_2=\begin{bmatrix}x_1 &x_2\end{bmatrix},\qquad T_2=\begin{bmatrix}\alpha_{1} & \beta_{1} \\ -\beta_{1}& \alpha_{1}\end{bmatrix}. \end{align} Now assume that \eqref{eq1} has been satisfied with $j\geq1$, we shall then assign the next pole $\lambda_{j+1}$. 
\subsection{Assigning a real pole} Assume that $\lambda_{j+1}$ is real; then the $(j+1)$-th diagonal element of $T$ must be $\lambda_{j+1}$. Comparing the $(j+1)$-th column of $Q_2^{\top}AX - Q_2^{\top}XT=0$ gives rise to \begin{equation} Q_2^\top Ax_{j+1} - Q_2^\top X_{j}v_{j+1}- \lambda_{j+1}Q_2^\top x_{j+1}=0. \end{equation} Recalling the definition of the departure from normality of $A_c$ in \eqref{departure} and noticing that we are now computing the $(j+1)$-th columns of $X$ and $T$, it is natural to consider the following optimization problem: \begin{gather}\label{eqreal-opt-equal} \min_{\|x_{j+1}\|_2=1}\|v_{j+1}\|_2^2\\ \mbox{s.t. } M_{j+1} \begin{bmatrix}x_{j+1}\\v_{j+1}\end{bmatrix}=0 ,\label{eqreal-opt-constrain} \end{gather} where \begin{align}\label{M} M_{j+1}=\begin{bmatrix}Q_2^\top(A-\lambda_{j+1}I_n)& -Q_2^\top X_j\\X_j^\top&0\end{bmatrix}. \end{align} Let $r=\dim\mathcal{N}(M_{j+1})$. Then it follows from the controllability of $(A, B)$ that $Q_2^\top(A-\lambda_{j+1} I_n)$ is of full row rank, indicating that $n-m\leq \rank (M_{j+1})\leq n-m+j$ and $\mathcal{N}(M_{j+1})\neq\{0\}$ (\cite{Chu2}). Suppose that the columns of $S=\begin{bmatrix}S_{1}^\top &S_{2}^\top\end{bmatrix}^\top$ with $S_{1}\in\mathbb{R}^{n\times r}, S_{2}\in\mathbb{R}^{j\times r}$ form an orthonormal basis of $\mathcal{N}(M_{j+1})$; then \eqref{eqreal-opt-constrain} shows that \begin{align}\label{eqrealxv} \begin{array}{lll} x_{j+1}=S_{1}y,& v_{j+1}=S_{2}y, &\qquad \forall y\in \mathbb{R}^r. \end{array} \end{align} Consequently, the optimization problem \eqref{eqreal-opt-equal} subject to \eqref{eqreal-opt-constrain} is equivalent to the following problem: \begin{equation}\label{eqreal-opt-equal-2} \min_{y^\top S_{1}^\top S_{1}y=1}y^\top S_{2}^\top S_{2}y. \end{equation} Note that the discussion above can also be found in \cite{Chu2}, where the constrained optimization problem \eqref{eqreal-opt-equal-2} is solved via the GSVD of the matrix pencil $(S_1,S_2)$. We put forward a simpler approach here. Actually, since $S^\top S=I_r$, we have $S_{2}^\top S_{2}=I_r-S_{1}^\top S_{1}$. Thus the problem \eqref{eqreal-opt-equal-2} is equivalent to \begin{equation}\label{eqreal-opt-equal-3} \min_{y^\top S_{1}^\top S_{1}y=1}y^\top y, \end{equation} whose minimum value is attained when $y$ is an eigenvector of $S_1^{\top}S_1$ corresponding to its greatest eigenvalue, scaled so that $y^\top S_{1}^\top S_{1}y=1$. Once such a $y$ is obtained, $x_{j+1}$ and $v_{j+1}$ are given by \eqref{eqrealxv}. We may then update $X_j$ and $T_j$ as \begin{align}\label{updatereal} X_{j+1}=\begin{bmatrix}X_j &x_{j+1}\end{bmatrix}\in\mathbb{R}^{n\times (j+1)},\qquad T_{j+1}=\begin{bmatrix}T_j &v_{j+1}\\0 &\lambda_{j+1}\end{bmatrix}\in\mathbb{R}^{(j+1)\times (j+1)}, \end{align} and continue with the next pole $\lambda_{j+2}$.
\subsection{Assigning a pair of conjugate poles} In this subsection, we consider the case that $\lambda_{j+1}$ is non-real. To obtain a real matrix $F$ from the real Schur form of $A_c=A+BF$, we assign $\lambda_{j+1}$ and $\lambda_{j+2}=\bar{\lambda}_{j+1}$ simultaneously to get the $(j+1)$-th and $(j+2)$-th columns of $X$ and $T$.
\subsubsection{Initial optimization problem} Assume that $\lambda_{j+1}=\alpha_{j+1} + i \beta_{j+1} \,\,(\beta_{j+1}\ne 0)$ and let $D_{\delta}=\begin{bmatrix}\alpha_{j+1} & \delta \beta_{j+1} \\ -\beta_{j+1}/\delta & \alpha_{j+1}\end{bmatrix}$ be the diagonal block in $T$ whose eigenvalues are $\lambda_{j+1}$ and $\bar{\lambda}_{j+1}$.
By comparing the $(j+1)$-th and $(j+2)$-th columns of $Q_2^{\top}AX - Q_2^{\top}XT=0$, we have \begin{equation} Q_2^\top A \begin{bmatrix}x_{j+1}&x_{j+2}\end{bmatrix} - Q_2^\top X_{j}\begin{bmatrix}v_{j+1}&v_{j+2}\end{bmatrix}- Q_2^\top \begin{bmatrix}x_{j+1}&x_{j+2}\end{bmatrix} D_{\delta}=0. \end{equation} Recalling the form of $\Delta_F^2(A_c)$ in \eqref{departure}, it is then natural to consider the following optimization problem: \begin{subequations}\label{opt1} \begin{align} \min_{\delta, v_{j+1}, v_{j+2}}&\|v_{j+1}\|_2^2 + \|v_{j+2}\|_2^2 + \beta_{j+1}^2 (\delta -\frac{1}{\delta})^2\label{eqcomplex-opt}\\ \mbox{s.t.}\quad &Q_2^\top (A \begin{bmatrix}x_{j+1}& x_{j+2}\end{bmatrix} -X_j \begin{bmatrix}v_{j+1} &v_{j+2}\end{bmatrix} - \begin{bmatrix}x_{j+1}& x_{j+2}\end{bmatrix}D_{\delta})=0,\label{eqcomplex-constraina}\\ &X_j^\top \begin{bmatrix}x_{j+1}& x_{j+2}\end{bmatrix}=0,\label{eqcomplex-constrainb}\\ &\begin{bmatrix}x_{j+1}& x_{j+2}\end{bmatrix}^\top \begin{bmatrix}x_{j+1}& x_{j+2}\end{bmatrix}=I_2.\label{eqcomplex-constrainc} \end{align} \end{subequations} The constraints \eqref{eqcomplex-constraina} and \eqref{eqcomplex-constrainc} are nonlinear. In \cite{Chu2}, the author solves this optimization problem by taking $\delta=1$ and neglecting the orthogonality requirement $x_{j+1}^\top x_{j+2}=0$. These simplifications reduce the difficulty of the problem significantly. However, they cannot lead to the real Schur form of the closed-loop system matrix $A_c$, since $x_{j+1}$ is generally not orthogonal to $x_{j+2}$. Moreover, the minimum value of the simplified optimization problem in \cite{Chu2} may be much greater than that of the original problem \eqref{opt1}.
We may rewrite the optimization problem \eqref{opt1} in another equivalent form. If we write $\delta=\frac{\delta_2}{\delta_1}$ with $\delta_1,\delta_2>0$, and set $D_0=\begin{bmatrix}\begin{smallmatrix}\alpha_{j+1} & \beta_{j+1} \\ -\beta_{j+1} & \alpha_{j+1}\end{smallmatrix}\end{bmatrix}$, then $D_{\delta}=\begin{bmatrix}\begin{smallmatrix}1/{\delta_1}&\\&1/{\delta_2}\end{smallmatrix}\end{bmatrix} D_0 \begin{bmatrix}\begin{smallmatrix}\delta_1&\\&\delta_2\end{smallmatrix}\end{bmatrix}$. Redefining ${x}_{j+1}\triangleq\frac{x_{j+1}}{\delta_1}, {x}_{j+2}\triangleq\frac{x_{j+2}}{\delta_2}, {v}_{j+1}\triangleq\frac{v_{j+1}}{\delta_1}, {v}_{j+2}\triangleq\frac{v_{j+2}}{\delta_2}$, the optimization problem \eqref{opt1} is equivalent to \begin{subequations}\label{opt2} \begin{align} \min_{\delta_1, \delta_2, v_{j+1}, v_{j+2}}&\|\delta_1v_{j+1}\|_2^2 + \|\delta_2v_{j+2}\|_2^2 + \beta_{j+1}^2 (\frac{\delta_1}{\delta_2} -\frac{\delta_2}{\delta_1})^2\label{eqcomplex-opt-2}\\ \mbox{s.t.}\quad &Q_2^\top (A \begin{bmatrix}x_{j+1}& x_{j+2}\end{bmatrix} -X_j \begin{bmatrix}v_{j+1} &v_{j+2}\end{bmatrix} - \begin{bmatrix}x_{j+1}& x_{j+2}\end{bmatrix}D_0)=0,\label{eqcomplex-opt-2-constraina}\\ &X_j^\top \begin{bmatrix}x_{j+1}& x_{j+2}\end{bmatrix}=0,\label{eqcomplex-opt-2-constrainb}\\ &\begin{bmatrix}x_{j+1}& x_{j+2}\end{bmatrix}^\top \begin{bmatrix}x_{j+1}& x_{j+2}\end{bmatrix}=\begin{bmatrix}1/{\delta_1^2}&\\&1/{\delta_2^2}\end{bmatrix}. \label{eqcomplex-opt-2-constrainc} \end{align} \end{subequations} Here the constraint \eqref{eqcomplex-opt-2-constraina} becomes linear.
Once a solution to the optimization problem \eqref{opt2} is obtained, we need to redefine \[ v_{j+1}\triangleq\frac{v_{j+1}}{\|x_{j+1}\|_2},\quad v_{j+2}\triangleq\frac{v_{j+2}}{\|x_{j+2}\|_2},\quad x_{j+1}\triangleq\frac{x_{j+1}}{\|x_{j+1}\|_2},\quad x_{j+2}\triangleq\frac{x_{j+2}}{\|x_{j+2}\|_2} \] as the corresponding columns of $T$ and $X$.
The constraints \eqref{eqcomplex-opt-2-constraina} and \eqref{eqcomplex-opt-2-constrainb} are linear. Actually, all vectors $x_{j+1},x_{j+2},v_{j+1},v_{j+2}$ satisfying these two constraints can be found via the null space of the matrix \begin{align}\label{Mcomplex} M_{j+1}=\begin{bmatrix}Q_2^{\top}(A-(\alpha_{j+1}+i\beta_{j+1})I_n)& -Q_2^{\top}X_j\\ X_j^{\top}&0\end{bmatrix}. \end{align} Specifically, for any $x_{j+1},x_{j+2},v_{j+1},v_{j+2}$ satisfying \eqref{eqcomplex-opt-2-constraina} and \eqref{eqcomplex-opt-2-constrainb}, direct calculations show that $M_{j+1}\begin{bmatrix}x_{j+1}+i x_{j+2}\\v_{j+1}+i v_{j+2}\end{bmatrix}=0$. Conversely, for any vector $\begin{bmatrix}z^{\top}&w^{\top}\end{bmatrix}^{\top}\in\mathcal{N}(M_{j+1})$, the vectors $x_{j+1}=\mbox{Re}(z), x_{j+2}=\mbox{Im}(z), v_{j+1}=\mbox{Re}(w), v_{j+2}=\mbox{Im}(w)$ satisfy \eqref{eqcomplex-opt-2-constraina} and \eqref{eqcomplex-opt-2-constrainb}.
The constraint \eqref{eqcomplex-opt-2-constrainc} shows that $x_{j+1}^\top x_{j+2}=0$. For any vector $\begin{bmatrix}z^{\top}&w^{\top}\end{bmatrix}^{\top}\in\mathcal{N}(M_{j+1})$ with $\mbox{Re}(z)$ and $\mbox{Im}(z)$ being linearly independent, we may then orthogonalize $\mbox{Re}(z)$ and $\mbox{Im}(z)$ by the following Jacobi transformation to get $x_{j+1}$ and $x_{j+2}$ satisfying $x_{j+1}^\top x_{j+2}=0$. Let $\varrho_1=\|\mbox{Re}(z)\|_2^2,\ \varrho_2=\|\mbox{Im}(z)\|_2^2, \ \gamma=\mbox{Re}(z)^{\top}\mbox{Im}(z)$ and $\tau=\frac{\varrho_2-\varrho_1}{2\gamma}$, and define $t$ as \begin{displaymath} t=\left\{ \begin{array}{ll} 1/(\tau+\sqrt{1+\tau^2}), & \text{ if}\quad \tau\geq0,\\ -1/(-\tau+\sqrt{1+\tau^2}), & \text{ if}\quad \tau<0.\\ \end{array} \right. \end{displaymath} Let $c=1/\sqrt{1+t^2}$, $s=tc$. Then $x_{j+1}$ and $x_{j+2}$ obtained by \begin{align} \label{Jocobi_x} \begin{bmatrix}x_{j+1}& x_{j+2} \end{bmatrix}=\begin{bmatrix}\mbox{Re}(z)& \mbox{Im}(z) \end{bmatrix} \begin{bmatrix}c & s\\ -s&c\end{bmatrix} \end{align} satisfy $x_{j+1}^\top x_{j+2}=0$. Moreover, if we let \begin{align}\label{Jocobi_v} \begin{bmatrix}v_{j+1}&v_{j+2}\end{bmatrix}=\begin{bmatrix}\mbox{Re}(w)& \mbox{Im}(w) \end{bmatrix} \begin{bmatrix}c & s\\ -s&c\end{bmatrix}, \end{align} then $x_{j+1},x_{j+2},v_{j+1},v_{j+2}$ satisfy \eqref{eqcomplex-opt-2-constraina} and \eqref{eqcomplex-opt-2-constrainb}. Hence, we can get $x_{j+1},x_{j+2},v_{j+1},v_{j+2}$ satisfying the constraints \eqref{eqcomplex-opt-2-constraina}-\eqref{eqcomplex-opt-2-constrainc} in this way. Furthermore, \begin{equation} \label{eqortho-one} 1/{\delta_1^2}=\|x_{j+1}\|_2^2=\|x\|_2^2-\omega,\quad 1/{\delta_2^2}=\|x_{j+2}\|_2^2=\|y\|_2^2+\omega, \end{equation} where $x=\mbox{Re}(z)$, $y=\mbox{Im}(z)$, $\omega=\frac{2(x^{\top}y)^2}{\|y\|_2^2-\|x\|_2^2+\sqrt{4(x^{\top}y)^2+(\|y\|_2^2-\|x\|_2^2)^2}}$ if $\|x\|_2<\|y\|_2$, and $\omega=\frac{2(x^{\top}y)^2}{\|y\|_2^2-\|x\|_2^2-\sqrt{4(x^{\top}y)^2 +(\|y\|_2^2-\|x\|_2^2)^2}}$ if $\|x\|_2\ge\|y\|_2$.
\subsubsection{The suboptimal strategy} It is hard to get an optimal solution to \eqref{opt2}, since it is a nonlinear optimization problem with quadratic constraints. Even if such an optimal solution can be found, the computational cost will be high.
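(As a brief aside before describing the strategy: the Jacobi orthogonalization \eqref{Jocobi_x} above is straightforward to implement. The following minimal numpy sketch, with a function name of our own choosing, returns two orthogonal columns from the real and imaginary parts of a complex null-space vector $z$; it is an illustration only.)
\begin{verbatim}
import numpy as np

def jacobi_orthogonalize(z):
    """Rotate [Re(z), Im(z)] by the Jacobi rotation so that the columns become orthogonal."""
    X = np.column_stack([z.real, z.imag])
    rho1, rho2 = X[:, 0] @ X[:, 0], X[:, 1] @ X[:, 1]
    gamma = X[:, 0] @ X[:, 1]
    if abs(gamma) < 1e-15:                 # columns are already orthogonal
        return X
    tau = (rho2 - rho1) / (2.0 * gamma)
    if tau >= 0:
        t = 1.0 / (tau + np.sqrt(1.0 + tau**2))
    else:
        t = -1.0 / (-tau + np.sqrt(1.0 + tau**2))
    c = 1.0 / np.sqrt(1.0 + t**2)
    s = t * c
    return X @ np.array([[c, s], [-s, c]])
\end{verbatim}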
So instead of finding an optimal solution, we prefer to get a suboptimal one with less computational cost. Let the columns of $S=\begin{bmatrix}S_1^{\top}&S_2^{\top}\end{bmatrix}^{\top}\in \mathbb{C}^{(n+j)\times r}$ with $S_1\in \mathbb{C}^{n\times r}$ and $S_2\in \mathbb{C}^{j\times r}$ form an orthonormal basis of $\mathcal{N}(M_{j+1})$, and let $S_1=U\Sigma V^{*}$ be the SVD of $S_1$. Since $S_1^{*}S_1+S_2^{*}S_2=I_r$, it follows that $S_2^{*}S_2=V(I_r-\Sigma^{*}\Sigma)V^{*}$. For any vector $\begin{bmatrix}z^{\top}&w^{\top}\end{bmatrix}^{\top}\in \mathcal{N}(M_{j+1})$ with $z\in \mathbb{C}^{n}$ and $w\in \mathbb{C}^{j}$, there exists $b\in \mathbb{C}^{r}$ such that $z=S_1b=U(\Sigma V^{*}b)$ and $w=S_2b$. Hence \begin{align*} \|z\|_2\leq\sigma_1 \|b\|_2 \qquad \text{ and } \qquad \|w\|_2^2\geq (1-\sigma_1^2)\|b\|_2^2, \end{align*} where $\sigma_1$ is the largest singular value of $S_1$.
Now suppose that the real part and the imaginary part of $z$ are linearly independent and satisfy $\|\mbox{Re}(z)\|_2\leq\|\mbox{Im}(z)\|_2$, and that $x_{j+1}, x_{j+2}$, $v_{j+1}, v_{j+2}$ are obtained from the Jacobi orthogonalization process \eqref{Jocobi_x}, \eqref{Jocobi_v}. Define $C=\frac{\|z\|_2}{\|x_{j+1}\|_2}$; then $C\geq\sqrt{2}$ and the objective function in \eqref{eqcomplex-opt-2} becomes \begin{equation}\label{value_cost} \begin{split} &\|\delta_1v_{j+1}\|_2^2 + \|\delta_2v_{j+2}\|_2^2 + \beta_{j+1}^2 (\frac{\delta_1}{\delta_2} -\frac{\delta_2}{\delta_1})^2\\ =&\frac{C^2}{C^2-1}\frac{\|w\|_2^2}{\|z\|_2^2}+\frac{C^4-2C^2}{C^2-1}\frac{\|v_{j+1}\|_2^2}{\|z\|_2^2}+\beta_{j+1}^2(C^2-3+\frac{1}{C^2-1}). \end{split} \end{equation} Obviously, \begin{align}\label{value_cost_ieq} \frac{C^2}{C^2-1}\frac{\|w\|_2^2}{\|z\|_2^2}\leq \frac{C^2}{C^2-1}\frac{\|w\|_2^2}{\|z\|_2^2}+\frac{C^4-2C^2}{C^2-1}\frac{\|v_{j+1}\|_2^2}{\|z\|_2^2}\leq C^2\frac{\|w\|_2^2}{\|z\|_2^2}. \end{align} So the objective function in \eqref{eqcomplex-opt-2} depends on $\frac{\|w\|_2^2}{\|z\|_2^2}$ and $C$, with $\min\frac{\|w\|_2^2}{\|z\|_2^2}=\frac{1-\sigma_1^2}{\sigma_1^2}$. In our suboptimal strategy, we first take $b$ from $\subspan\{Ve_1\}$, where $e_i$ is the $i$-th column of the identity matrix. With this choice, $\frac{\|w\|_2^2}{\|z\|_2^2}$ achieves its minimum value. The following theorem shows the relevant results.
\begin{Theorem}\label{Theorem3.1} With the notations above, let $u_1$ be the first column of $U$ and assume that $\mbox{Re}(u_1)$ and $\mbox{Im}(u_1)$ are linearly independent. Let $x_{j+1}$ and $x_{j+2}$ be the vectors obtained from $\mbox{Re}(u_1)$ and $\mbox{Im}(u_1)$ via the Jacobi orthogonalization process \begin{align*} \begin{bmatrix}x_{j+1}&x_{j+2}\end{bmatrix}=\begin{bmatrix}\mbox{Re}(u_1)&\mbox{Im}(u_1)\end{bmatrix}\begin{bmatrix}c&s\\-s&c\end{bmatrix}, \end{align*} and let \begin{align*} \begin{bmatrix}v_{j+1}&v_{j+2}\end{bmatrix}=\begin{bmatrix}\mbox{Re}(w)&\mbox{Im}(w)\end{bmatrix}\begin{bmatrix}c&s\\-s&c\end{bmatrix}, \end{align*} where $w=S_2Ve_1/\sigma_1$. Then $x_{j+1},x_{j+2},v_{j+1},v_{j+2}$ satisfy the constraints \eqref{eqcomplex-opt-2-constraina}-\eqref{eqcomplex-opt-2-constrainc}, and the value of the corresponding objective function specified by \eqref{eqcomplex-opt-2} will be no larger than \[\frac{1}{\min\{\|x_{j+1}\|_2^2, \|x_{j+2}\|_2^2\}}(\frac{1-\sigma_1^2}{\sigma_1^2}+\beta_{j+1}^2).\] \end{Theorem} \begin{proof} The first part of the theorem is obvious. To prove the second part, note that here $b=\frac{Ve_1}{\sigma_1}$, $\|z\|_2=\|u_1\|_2=1$ and $\|w\|_2^2=\frac{1-\sigma_1^2}{\sigma_1^2}$.
If $\|\mbox{Re}(u_1)\|_2\leq\|\mbox{Im}(u_1)\|_2$, the claim then follows directly from \eqref{value_cost}, \eqref{value_cost_ieq} and $C^2-3+\frac{1}{C^2-1}\leq C^2$ with $C=\frac{1}{\|x_{j+1}\|_2}$. The case when $\|\mbox{Re}(u_1)\|_2\geq\|\mbox{Im}(u_1)\|_2$ can be proved similarly. \end{proof}
Theorem \ref{Theorem3.1} shows that if $\mbox{Re}(u_1)$ and $\mbox{Im}(u_1)$ are linearly independent, and $\min\{\|x_{j+1}\|_2, \|x_{j+2}\|_2\}$ is not pathologically small, the above procedure will generate $x_{j+1},x_{j+2},v_{j+1},v_{j+2}$ satisfying the constraints \eqref{eqcomplex-opt-2-constraina}-\eqref{eqcomplex-opt-2-constrainc}, and the value of the corresponding objective function in \eqref{eqcomplex-opt-2} is not too large. We then take these $x_{j+1},x_{j+2},v_{j+1},v_{j+2}$ as the suboptimal solution. However, if $\mbox{Re}(u_1)$ and $\mbox{Im}(u_1)$ are linearly dependent, we cannot get orthogonal $x_{j+1}$ and $x_{j+2}$ via the Jacobi orthogonalization process. Even if $\mbox{Re}(u_1)$ and $\mbox{Im}(u_1)$ are linearly independent, the resulting $\min\{\|x_{j+1}\|_2, \|x_{j+2}\|_2\}$ might be fairly small, which means that the corresponding value of the objective function might be large. In this case, we choose $b$ from $\subspan\{Ve_1, Ve_2\}$. Define \begin{align}\label{tildexy} &\tilde{x}_1+i\tilde{y}_1=z_1=u_1=\frac{S_1Ve_1}{\sigma_1}, &&w_1=\frac{S_2Ve_1}{\sigma_1},\notag\\ &\tilde{x}_2+i\tilde{y}_2=z_2=u_2=\frac{S_1Ve_2}{\sigma_2}, &&w_2=\frac{S_2Ve_2}{\sigma_2}, \end{align} where $\sigma_1, \sigma_2$ are the two greatest singular values of $S_1$. Let $b=\begin{bmatrix}\begin{smallmatrix}\frac{Ve_1}{\sigma_1}&\ &\frac{Ve_2}{\sigma_2}\end{smallmatrix}\end{bmatrix} \begin{bmatrix}\begin{smallmatrix}\gamma_1+i\zeta_1\\ \gamma_2+i\zeta_2\end{smallmatrix}\end{bmatrix}$ with $\gamma_1^2+\gamma_2^2+\zeta_1^2+\zeta_2^2=1$; then \begin{align}\label{xyw} x+iy=z=S_1b=\begin{bmatrix}z_1&z_2\end{bmatrix}\begin{bmatrix}\gamma_1+i\zeta_1\\ \gamma_2+i\zeta_2\end{bmatrix}, \quad w=S_2b=\begin{bmatrix}w_1&w_2\end{bmatrix}\begin{bmatrix}\gamma_1+i\zeta_1\\ \gamma_2+i\zeta_2\end{bmatrix}. \end{align} Denoting $\tilde{X}=\begin{bmatrix}\tilde{x}_1&\tilde{x}_2\end{bmatrix}$, $\tilde{Y}=\begin{bmatrix}\tilde{y}_1&\tilde{y}_2\end{bmatrix}$, it can be easily verified that \begin{align} x=\begin{bmatrix}\tilde{X}&-\tilde{Y}\end{bmatrix}\begin{bmatrix}\gamma_1&\gamma_2&\zeta_1& \zeta_2\end{bmatrix}^{\top}, \qquad y=\begin{bmatrix}\tilde{Y}&\tilde{X}\end{bmatrix}\begin{bmatrix}\gamma_1&\gamma_2& \zeta_1& \zeta_2\end{bmatrix}^{\top}, \end{align} and \begin{align}\label{eq-orthogonal} x^{\top}y+y^{\top}x=\begin{bmatrix}\gamma_1& \gamma_2& \zeta_1& \zeta_2\end{bmatrix} \begin{bmatrix}\tilde{X}^{\top}\tilde{Y}+\tilde{Y}^{\top}\tilde{X}&\tilde{X}^{\top}\tilde{X}-\tilde{Y}^{\top}\tilde{Y}\\ \tilde{X}^{\top}\tilde{X}-\tilde{Y}^{\top}\tilde{Y}&-(\tilde{X}^{\top}\tilde{Y}+\tilde{Y}^{\top}\tilde{X}) \end{bmatrix} \begin{bmatrix}\gamma_1& \gamma_2& \zeta_1& \zeta_2\end{bmatrix}^\top, \end{align} \begin{align}\label{eq-equal-length} x^{\top}x-y^{\top}y=\begin{bmatrix}\gamma_1& \gamma_2& \zeta_1& \zeta_2\end{bmatrix} \begin{bmatrix}\tilde{X}^{\top}\tilde{X}-\tilde{Y}^{\top}\tilde{Y}&-(\tilde{X}^{\top}\tilde{Y}+\tilde{Y}^{\top}\tilde{X})\\ -(\tilde{X}^{\top}\tilde{Y}+\tilde{Y}^{\top}\tilde{X})&\tilde{Y}^{\top}\tilde{Y}-\tilde{X}^{\top}\tilde{X} \end{bmatrix} \begin{bmatrix}\gamma_1& \gamma_2& \zeta_1& \zeta_2\end{bmatrix}^\top.
\end{align} Obviously, the two matrices in \eqref{eq-orthogonal} and \eqref{eq-equal-length} are symmetric Hamiltonian systems and they satisfy the property in Lemma \ref{Lemma3.2}. Hence we can get the following lemma. \begin{Lemma}\label{Lemma3.3} Let $\phi_m, \phi_M$ be the two smallest singular values of $\begin{bmatrix}\tilde{Y}&\tilde{X}\end{bmatrix}$ and $\begin{bmatrix}\begin{smallmatrix}p_1\\q_1\end{smallmatrix}\end{bmatrix}, \begin{bmatrix}\begin{smallmatrix}p_2\\q_2\end{smallmatrix}\end{bmatrix}$ be the corresponding right singular vectors respectively. Define \begin{align}\label{Omega} \Omega=\begin{bmatrix}p_1&p_2&-q_1&-q_2\\q_1&q_2&p_1&p_2\end{bmatrix}, \end{align} $\Phi=\diag (\phi_1, \phi_2, -\phi_1, -\phi_2)$ with $\phi_1=1-2\phi_m^2$, $\phi_2=1-2\phi_M^2$, then \begin{align}\label{specdecom} \begin{bmatrix}\tilde{X}^{\top}\tilde{X}-\tilde{Y}^{\top}\tilde{Y}&-(\tilde{X}^{\top}\tilde{Y}+\tilde{Y}^{\top}\tilde{X})\\ -(\tilde{X}^{\top}\tilde{Y}+\tilde{Y}^{\top}\tilde{X})&\tilde{Y}^{\top}\tilde{Y}-\tilde{X}^{\top}\tilde{X} \end{bmatrix}= \Omega\Phi \Omega^{\top}, \end{align} and \begin{align*} \begin{bmatrix}\tilde{X}^{\top}\tilde{Y}+\tilde{Y}^{\top}\tilde{X}&\tilde{X}^{\top}\tilde{X}-\tilde{Y}^{\top}\tilde{Y}\\ \tilde{X}^{\top}\tilde{X}-\tilde{Y}^{\top}\tilde{Y}&-(\tilde{X}^{\top}\tilde{Y}+\tilde{Y}^{\top}\tilde{X}) \end{bmatrix} =\Omega\left(\begin{array}{c|c} \begin{array}{cc} &\\ &\\ \end{array} &\begin{array}{cc} \phi_1&\\ &\phi_2\\ \end{array}\\ & \\[-2mm] \hline & \\[-2mm] \begin{array}{cc} \phi_1&\\ &\phi_2\\ \end{array} &\begin{array}{cc} &\\ &\\ \end{array}\\ \end{array}\right)\Omega^{\top}. \end{align*} \end{Lemma} \begin{proof} Since $(\tilde{X}^{\top}-i\tilde{Y}^{\top})(\tilde{X}+i\tilde{Y})=\begin{bmatrix}z_1&z_2\end{bmatrix}^{*} \begin{bmatrix}z_1&z_2\end{bmatrix}=I_2$, so $\tilde{X}^{\top}\tilde{X}+\tilde{Y}^{\top}\tilde{Y}=I_2$ and $\tilde{X}^{\top}\tilde{Y}=\tilde{Y}^{\top}\tilde{X}$. Thus \begin{align*} \begin{bmatrix}\tilde{X}^{\top}\tilde{X}-\tilde{Y}^{\top}\tilde{Y}&-(\tilde{X}^{\top}\tilde{Y}+\tilde{Y}^{\top}\tilde{X})\\ -(\tilde{X}^{\top}\tilde{Y}+\tilde{Y}^{\top}\tilde{X})&\tilde{Y}^{\top}\tilde{Y}-\tilde{X}^{\top}\tilde{X}\end{bmatrix} =\begin{bmatrix}I_2-2\tilde{Y}^{\top}\tilde{Y}&-2\tilde{Y}^{\top}\tilde{X}\\ -2\tilde{X}^{\top}\tilde{Y}&I_2-2\tilde{X}^{\top}\tilde{X} \end{bmatrix} =I_4-2\begin{bmatrix}\tilde{Y}^{\top}\\ \tilde{X}^{\top}\end{bmatrix}\begin{bmatrix}\tilde{Y}&\tilde{X}\end{bmatrix}. \end{align*} From the above equation, it obviously holds that $\phi_1, \phi_2$ are the two nonnegative eigenvalues of $\begin{bmatrix}\tilde{X}^{\top}\tilde{X}-\tilde{Y}^{\top}\tilde{Y}&-(\tilde{X}^{\top}\tilde{Y}+\tilde{Y}^{\top}\tilde{X})\\ -(\tilde{X}^{\top}\tilde{Y}+\tilde{Y}^{\top}\tilde{X})&\tilde{Y}^{\top}\tilde{Y}-\tilde{X}^{\top}\tilde{X}\end{bmatrix}$ with $\begin{bmatrix}p_1\\q_1\end{bmatrix}$, $\begin{bmatrix}p_2\\q_2\end{bmatrix}$ being the corresponding eigenvectors. Note that $\begin{bmatrix}\tilde{X}^{\top}\tilde{X}-\tilde{Y}^{\top}\tilde{Y}&-(\tilde{X}^{\top}\tilde{Y}+\tilde{Y}^{\top}\tilde{X})\\ -(\tilde{X}^{\top}\tilde{Y}+\tilde{Y}^{\top}\tilde{X})&\tilde{Y}^{\top}\tilde{Y}-\tilde{X}^{\top}\tilde{X}\end{bmatrix}$ is a Hamiltonian matrix, thus the results follow immediately from Lemma \ref{Lemma3.1} and Lemma \ref{Lemma3.2}. 
\end{proof} Now by defining \begin{align}\label{trans} \begin{bmatrix}\mu_1& \mu_2& \nu_1& \nu_2\end{bmatrix}^{\top}=\Omega^{\top} \begin{bmatrix}\gamma_1& \gamma_2& \zeta_1& \zeta_2\end{bmatrix}^{\top}, \end{align} we have \begin{align}\label{xdoty-xdotxplusydoty} x^{\top}y+y^{\top}x=2\phi_1\mu_1\nu_1+2\phi_2\mu_2\nu_2, \qquad x^{\top}x-y^{\top}y=\phi_1(\mu_1^2-\nu_1^2)+\phi_2(\mu_2^2-\nu_2^2). \end{align}
\begin{Theorem}\label{Theorem3.2} With the notations above, there exist $\mu_1, \mu_2, \nu_1, \nu_2\in\mathbb{R}$ such that $x^{\top}y=0$ and $\|x\|_2=\|y\|_2=\frac{\sqrt{2}}{2}$. For these $\mu_1, \mu_2, \nu_1, \nu_2$, let $\gamma_1, \gamma_2, \zeta_1, \zeta_2$ be computed from \eqref{trans}, where $\Omega$ is as in \eqref{Omega}. Then $x_{j+1}=x, x_{j+2}=y, v_{j+1}=\mbox{Re}(w)$ and $v_{j+2}=\mbox{Im}(w)$, where $w$ is computed by \eqref{xyw}, satisfy the constraints \eqref{eqcomplex-opt-2-constraina}-\eqref{eqcomplex-opt-2-constrainc}, and the value of the corresponding objective function in \eqref{eqcomplex-opt-2} will be no larger than $\frac{2(1-\sigma_2^2)}{\sigma_2^2}$. \end{Theorem}
\begin{proof} It is easy to check that all solutions of the following system of equations \begin{align}\label{eqtwominimal} \left\{\begin{array}{ll} \phi_1\mu_1\nu_1+\phi_2\mu_2\nu_2&=0,\\ \phi_1(\mu_1^2-\nu_1^2)+\phi_2(\mu_2^2-\nu_2^2)&=0,\\ \mu_1^2+\mu_2^2+\nu_1^2+\nu_2^2&=1 \end{array}\right. \end{align} are given by \begin{equation}\label{munu1} \begin{array}{lll} \left\{\begin{split} \mu_2&=\pm\sqrt{\frac{\phi_1}{\phi_1+\phi_2}-\nu_2^2}\\ \mu_1&=-\sqrt{\frac{\phi_2}{\phi_1}}\nu_2\\ \nu_1&=\pm\sqrt{\frac{\phi_2}{\phi_1+\phi_2}-\frac{\phi_2}{\phi_1}\nu_2^2} \end{split}\right.& \textrm{and}& \left\{\begin{split} \mu_2&=\pm\sqrt{\frac{\phi_1}{\phi_1+\phi_2}-\nu_2^2}\\%[6pt] \mu_1&=\sqrt{\frac{\phi_2}{\phi_1}}\nu_2\\%[6pt] \nu_1&=\mp\sqrt{\frac{\phi_2}{\phi_1+\phi_2}-\frac{\phi_2}{\phi_1}\nu_2^2} \end{split}\right. \end{array} \end{equation} with $\nu_2^2\leq\frac{\phi_1}{\phi_1+\phi_2}$. Noting \eqref{xdoty-xdotxplusydoty} and $\|x\|_2^2+\|y\|_2^2=1$, with the values in \eqref{munu1} it holds that $x^{\top}y=0$ and $\|x\|_2=\|y\|_2=\frac{\sqrt{2}}{2}$. Since $\begin{bmatrix}z^\top&w^\top\end{bmatrix}^\top\in\mathcal{N}(M_{j+1})$, the matrices $\begin{bmatrix}\begin{smallmatrix}x_{j+1}&\ &x_{j+2}\\v_{j+1}&\ &v_{j+2}\end{smallmatrix}\end{bmatrix}= \begin{bmatrix}\begin{smallmatrix}x&\ &y\\ \mbox{Re}(w)& \ &\mbox{Im}(w)\end{smallmatrix}\end{bmatrix}$ satisfy the constraints \eqref{eqcomplex-opt-2-constraina}-\eqref{eqcomplex-opt-2-constrainc} with $\delta_1=\delta_2=\sqrt{2}$. Hence \begin{align*} &\|\delta_1v_{j+1}\|_2^2 + \|\delta_2v_{j+2}\|_2^2 + \beta_{j+1}^2 (\frac{\delta_1}{\delta_2} -\frac{\delta_2}{\delta_1})^2\\ =&2\|w\|_2^2=2(\gamma_1^2+\zeta_1^2)\frac{1-\sigma_1^2}{\sigma_1^2}+ 2(\gamma_2^2+\zeta_2^2)\frac{1-\sigma_2^2}{\sigma_2^2}\leq\frac{2(1-\sigma_2^2)}{\sigma_2^2}, \end{align*} which completes the proof of the theorem. \end{proof}
From the proof of Theorem \ref{Theorem3.2} we can see that, with this choice of $x_{j+1},x_{j+2},v_{j+1},v_{j+2}$, the value of the corresponding objective function is exactly $2\|w\|_2^2$.
Defining $\xi_1=p_1^{\top}\Xi p_1, \xi_2=p_2^{\top}\Xi p_2, \eta_1=q_1^{\top}\Xi q_1, \eta_2=q_2^{\top}\Xi q_2, \zeta_{12}=q_1^{\top}\Xi p_2, \zeta_{21}=q_2^{\top}\Xi p_1$, with $\Xi=\diag\{ (1-\sigma_1^2)/\sigma_1^2, (1-\sigma_2^2)/\sigma_2^2\}$, it then follows that \begin{align}\label{lengthw} \|w\|_2^2= \left\{ \begin{array}{ll} \frac{\phi_2}{\phi_1+\phi_2}(\xi_1+\eta_1)+\frac{\phi_1}{\phi_1+\phi_2}(\xi_2+\eta_2)+ 2\sqrt{\frac{\phi_2}{\phi_1}}\frac{\phi_1}{\phi_1+\phi_2}(\zeta_{21}-\zeta_{12}) & \textrm{if} \ \ (\mu_1\nu_2)\leq0,\\ \frac{\phi_2}{\phi_1+\phi_2}(\xi_1+\eta_1)+\frac{\phi_1}{\phi_1+\phi_2}(\xi_2+\eta_2)+ 2\sqrt{\frac{\phi_2}{\phi_1}}\frac{\phi_1}{\phi_1+\phi_2}(\zeta_{12}-\zeta_{21}) & \textrm{if} \ \ (\mu_1\nu_2)>0.\\ \end{array} \right. \end{align} So in order to get a smaller $\|w\|_2$, we can take $\mu_1, \mu_2, \nu_1, \nu_2$ satisfying $\mu_1\nu_2\leq0$ if $\zeta_{21}\leq\zeta_{12}$, and $\mu_1\nu_2>0$ if $\zeta_{21}>\zeta_{12}$.
So far we have proposed two strategies for computing $x_{j+1}, x_{j+2}, v_{j+1}, v_{j+2}$. The first strategy computes $x_{j+1}, x_{j+2}, v_{j+1}$ and $v_{j+2}$ by using the Jacobi orthogonalization process \eqref{Jocobi_x} and \eqref{Jocobi_v} with $z=u_1$ and $w=\frac{S_2Ve_1}{\sigma_1}$. The second strategy first computes $\mu_1, \mu_2, \nu_1, \nu_2$ by \eqref{munu1} satisfying $\mu_1\nu_2\leq0$ if $\zeta_{21}\leq\zeta_{12}$, and $\mu_1\nu_2>0$ if $\zeta_{21}>\zeta_{12}$, then computes $\gamma_1, \gamma_2, \zeta_1, \zeta_2$ from \eqref{trans}, where $\Omega$ is as in \eqref{Omega}, and finally sets $x_{j+1}=x, x_{j+2}=y, v_{j+1}=\mbox{Re}(w)$ and $v_{j+2}=\mbox{Im}(w)$, where $x,y,w$ are computed by \eqref{xyw}. We cannot tell in advance which strategy is better, so we suggest applying both strategies, comparing the corresponding values of the objective function, and adopting the one that gives the better result. Specifically, if the value of the objective function corresponding to the first strategy is smaller, we update $X_j$ and $T_j$ as \begin{align}\label{updatecomplex1} X_{j+2}=\begin{bmatrix}X_j &\delta_1x_{j+1}&\delta_2x_{j+2}\end{bmatrix}\in\mathbb{R}^{n\times (j+2)},\qquad T_{j+2}=\begin{bmatrix}T_j &\delta_1v_{j+1}&\delta_2v_{j+2}\\ 0 &\alpha_{j+1}&\delta\beta_{j+1}\\0&-\frac{1}{\delta}\beta_{j+1}&\alpha_{j+1}\end{bmatrix}\in\mathbb{R}^{(j+2)\times (j+2)}, \end{align} where $\delta_1=\frac{1}{\|x_{j+1}\|_2}, \delta_2=\frac{1}{\|x_{j+2}\|_2}, \delta=\frac{\delta_2}{\delta_1}$. Otherwise, we update $X_j$ and $T_j$ as \begin{align}\label{updatecomplex2} X_{j+2}=\begin{bmatrix}X_j &\sqrt{2}x&\sqrt{2}y\end{bmatrix}\in\mathbb{R}^{n\times (j+2)},\quad T_{j+2}=\begin{bmatrix}T_j &\sqrt{2}\mbox{Re}(w)&\sqrt{2}\mbox{Im}(w)\\0 &\alpha_{j+1}&\beta_{j+1}\\0&-\beta_{j+1}&\alpha_{j+1}\end{bmatrix}\in\mathbb{R}^{(j+2)\times(j+2)}, \end{align} with $x, y$ and $w$ defined as in \eqref{xyw}. This completes the assignment of the complex conjugate poles $\lambda_{j+1},\lambda_{j+2}=\bar{\lambda}_{j+1}$, and we can then continue with the next pole $\lambda_{j+3}$.
These two strategies essentially choose $z$ from $\mathcal{R}(u_1)$ and $\mathcal{R}(\begin{bmatrix}u_1&u_2\end{bmatrix})$, respectively. If the results produced by these two strategies are not satisfactory, we can, in theory, choose $z$ from a higher dimensional space, i.e., $z\in \subspan\{u_1, u_2, \ldots, u_k\}, k\geq3$, with $u_l$ being the $l$-th column of $U$. However, the resulting optimization problem is much more complicated.
More importantly, numerical examples show that these two strategies with $k=1,2$ can produce fairly satisfactory results for most problems.
\subsection{Algorithm} In this subsection, we give the framework of our algorithm.
\begin{algorithm} \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand\algorithmicensure {\textbf{Output:} } \caption{ Framework of our {\bf Schur-rob} algorithm.} \label{alg:Framwork} \begin{algorithmic}[1] \REQUIRE ~~\\ $A, B$ and $\mathfrak{L}=\{\lambda_1,\dots,\lambda_n\}$ (complex conjugate poles appear in pairs). \ENSURE ~~\\ The feedback matrix $F$. \STATE If $\lambda_1$ is real, compute $x_1$ by \eqref{x1real} and set $X_1=x_1,T_1=\lambda_1,j=1$. If $\lambda_1$ is non-real, compute $x_1,x_2$ by \eqref{x1x2get}, \eqref{gammamunu}, \eqref{munuinitial}, and set $X_2,T_2$ as in \eqref{initialnonreal}, $j=2$. \label{code:fram:extract} \WHILE{$j<n$} \IF{ $\lambda_{j+1}$ is real} \STATE Find $S=\begin{bmatrix}S_1^{\top}&S_2^{\top}\end{bmatrix}^{\top}$, whose columns form an orthonormal basis of $\mathcal{N}(M_{j+1})$ in \eqref{M}; \STATE Compute $y$ by \eqref{eqreal-opt-equal-3}; \STATE Compute $x_{j+1}$ and $v_{j+1}$ by \eqref{eqrealxv}, update $X_j$ and $T_j$ as \eqref{updatereal} and set $j=j+1$. \ELSE \STATE Find $S=\begin{bmatrix}S_1^{\top}&S_2^{\top}\end{bmatrix}^{\top}$, whose columns form an orthonormal basis of $\mathcal{N}(M_{j+1})$ in \eqref{Mcomplex}; \STATE Compute the SVD of $S_1$ as $S_1=U\Sigma V^{*}$; \IF{ $\mbox{Re}(Ue_1)$ and $\mbox{Im}(Ue_1)$ are linearly independent} \STATE Compute $x_{j+1}, x_{j+2}, v_{j+1}, v_{j+2}$ by \eqref{Jocobi_x} and \eqref{Jocobi_v} with $z=\frac{S_1Ve_1}{\sigma_1}, w=\frac{S_2Ve_1}{\sigma_1}$; \STATE Set $\delta_1=\frac{1}{\|x_{j+1}\|_2}, \delta_2=\frac{1}{\|x_{j+2}\|_2}$ and $\delta=\frac{\delta_2}{\delta_1}$; \STATE Compute $dep_1=\|\delta_1v_{j+1}\|_2^2+\|\delta_2v_{j+2}\|_2^2+\beta_{j+1}^2(\delta-\frac{1}{\delta})^2$; \ELSE \STATE Set $dep_1=\infty$; \ENDIF \STATE Let $\tilde{X}=\begin{bmatrix}\tilde{x}_1&\tilde{x}_2\end{bmatrix}$, $\tilde{Y}=\begin{bmatrix}\tilde{y}_1&\tilde{y}_2\end{bmatrix}$ with $\tilde{x}_1, \tilde{y}_1, \tilde{x}_2, \tilde{y}_2$ defined as in \eqref{tildexy}, and compute the spectral decomposition \eqref{specdecom}; \STATE Compute $\mu_1, \mu_2, \nu_1, \nu_2$ by \eqref{munu1} satisfying $\mu_1\nu_2\leq0$ if $\zeta_{21}\leq\zeta_{12}$, and $\mu_1\nu_2>0$ if $\zeta_{21}>\zeta_{12}$, and then compute $\gamma_1, \gamma_2, \zeta_1, \zeta_2$ from \eqref{trans}, where $\Omega$ is as in \eqref{Omega}; \STATE Compute $z$, $w$ by \eqref{xyw}, set $x_{j+1}=\mbox{Re}(z), x_{j+2}=\mbox{Im}(z), v_{j+1}=\mbox{Re}(w)$ and $v_{j+2}=\mbox{Im}(w)$. Compute $dep_2=2[(\gamma_1^2+\zeta_1^2)\frac{1-\sigma_1^2}{\sigma_1^2}+(\gamma_2^2+\zeta_2^2)\frac{1-\sigma_2^2}{\sigma_2^2}]$; \STATE If $dep_1<dep_2$, update $X_{j}$ and $T_{j}$ as in \eqref{updatecomplex1}; otherwise, update them as in \eqref{updatecomplex2}. Set $j=j+2$. \ENDIF \ENDWHILE \STATE Set $X=X_n, T=T_n$, and compute $F$ by \eqref{eqsolveoff}. \end{algorithmic} \end{algorithm}
\section{Numerical Examples} In this section, we give some numerical examples to illustrate the performance of our \textbf{Schur-rob} algorithm, and compare it with some of the different versions of \textbf{SCHUR} in \cite{Chu2} and the MATLAB functions \textbf{robpole} \cite{Tits} and \textbf{place} \cite{KNV}. Each algorithm computes a feedback matrix $F$ such that the eigenvalues of $A+BF$ are those given in $\mathfrak{L}$, and $A+BF$ is robust.
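As a sketch of how such a comparison can be verified numerically (illustration only; the helper below simply checks the assigned eigenvalues of $A+BF$ and evaluates the two robustness measures discussed later in this section, and its name is ours), one may proceed as follows:
\begin{verbatim}
import numpy as np

def check_assignment(A, B, F, poles):
    """Compare eig(A+BF) with the prescribed poles and evaluate robustness measures."""
    Ac = A + B @ F
    lam = np.sort_complex(np.linalg.eigvals(Ac))
    target = np.sort_complex(np.asarray(poles, dtype=complex))
    # sorting is a simple pairing heuristic; assumes all prescribed poles are nonzero
    rel_err = np.max(np.abs(lam - target) / np.abs(target))
    # precs in the text corresponds roughly to floor(-log10(rel_err))
    dep = np.sqrt(max(np.linalg.norm(Ac, 'fro')**2 - np.sum(np.abs(target)**2), 0.0))
    _, X = np.linalg.eig(Ac)                       # eigenvector matrix
    kappaF = np.linalg.norm(X, 'fro') * np.linalg.norm(np.linalg.inv(X), 'fro')
    return rel_err, dep, kappaF
\end{verbatim}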
When applying \textbf{robpole} to all test examples, we set the maximum number of sweeps to the default value $5$. All calculations are carried out on an Intel\textregistered Core\texttrademark i3, dual core, 2.27 GHz machine, with $2.00$ GB RAM. MATLAB R2012a is used, with machine epsilon $\varepsilon\approx 2.2\times 10^{-16}$.
When $\lambda_1\in\mathbb{R}$, the particular choice of $x_1$ in \textbf{Schur-rob} does not exploit the remaining freedom in $x_1$. Inspired by \textbf{O-SCHUR} \cite{Chu2}, we may regard $x_1$ as a free parameter and try to optimize the robustness over it. Specifically, we may run \textbf{Schur-rob} with several different choices of $x_1$, and keep the solution $F$ corresponding to the minimum departure from normality. We denote this method by ``\textbf{O-Schur-rob}''.
In this section, results on precision and robustness obtained by the different algorithms are displayed. Here the precision refers to the accuracy of the eigenvalues of the computed $A_c=A+BF$, compared with the prescribed poles in $\mathfrak{L}$. Precisely, we list \[ precs=\left\lfloor\min_{1\le j\le n}(-\log(|\frac{\lambda_j-\hat{\lambda}_j}{\lambda_j}|)) \right\rfloor, \] where $\hat{\lambda}_j, j=1, \ldots, n$ are the eigenvalues of the computed $A_c=A+BF$. Larger values of $precs$ indicate more accurate computed eigenvalues. The robustness is, however, more complicated, since different measures of robustness are used in these algorithms. Specifically, let the spectral decomposition and the real Schur decomposition of $A+BF$ respectively be \[ A+BF=X\Lambda X^{-1},\qquad A+BF=UTU^\top, \] where $\Lambda$ is a diagonal matrix whose diagonal elements are those in $\mathfrak{L}$, $U$ is orthogonal, and $T$ is the real Schur form. The MATLAB function \textbf{place} tends to minimize $\|X^{-1}\|_F$, and \textbf{robpole} aims to maximize $|\det(X)|$. Both measures are closely related to the condition number $\kappa_F(X)=\|X\|_F\|X^{-1}\|_F$. The different versions of \textbf{SCHUR} \cite{Chu2} and our \textbf{Schur-rob}, on the other hand, try to minimize the departure from normality of $A_c=A+BF$. Hence, in the following tests, we adopt the following two measures of robustness: the departure from normality of $A_c$ (denoted by ``dep.'') and the condition number of $X$ (denoted by ``$\kappa_F(X)$'').
\begin{Example}{\rm~ Let \begin{align*} \begin{array}{ll} A=\begin{bmatrix}1&0&0\\ 0&I_{n-2}&0\\0& 0.5\times e^\top& 0.5 \end{bmatrix}, & B=\begin{bmatrix}I_{n-1}\\0\end{bmatrix}, \\ \mathfrak{L}=\{ randn(1, n-2), \ 0.5+ki,\ 0.5-ki\}, \end{array} \end{align*} where $e^\top$ is the row vector with all entries equal to $1$, and ``$randn(1, n-2)$'' is a row vector of dimension $n-2$, generated by the MATLAB function \textbf{randn}. We set $k$ as $1e+1, 1e+2, 1e+3, 1e+4, 1e+5$, and apply the four algorithms \textbf{SCHUR}, \textbf{SCHUR-D}, \textbf{O-SCHUR} and \textbf{Schur-rob} to these examples, where ``\textbf{SCHUR-D}'' denotes the algorithm combining the $D_k$ varying strategy in \cite{Chu2} with \textbf{SCHUR}. In \cite{Chu2}, the author points out that minimizing the departure from normality via the $D_k$ varying technique can be achieved by optimizing the condition number of $X^\top X$ or $X$, which is actually hard to realize. So here, the numerical results associated with ``\textbf{SCHUR-D}'' are obtained by taking many different vectors from the null space of $(6)$ in \cite{Chu2}, which lead to orthogonal columns in $X$ when placing complex conjugate poles, and adopting the one with the minimal departure from normality as the solution to the {\bf SFRPA}.
All numerical results are summarized in Table~\ref{table_add}, which shows that our algorithm outperforms \textbf{SCHUR} and \textbf{O-SCHUR} on these examples, in which complex conjugate poles are to be assigned.
\tabcolsep 0.0001in \begin{table} \centering \begin{tabular}{c|c|c|c|c|c|c|c|c} \Xhline{2pt} \multirow{2}*{$(n,k)$} & \multicolumn{4}{c|}{$dep.$}&\multicolumn{4}{c}{$precs$} \\ \Xcline{2-9}{0.1pt} & \textbf{SCHUR} & \textbf{SCHUR-D}& \textbf{O-SCHUR}&\textbf{Schur-rob} & \textbf{SCHUR} & \textbf{SCHUR-D}&\textbf{O-SCHUR}&\textbf{Schur-rob} \\ \Xhline{1pt} (4, 1e+1)&9.5e+1& 2.2e+1 &4.3e+1 & 2.7e+0 & 14&14 & 14&15\\ (4, 1e+2)&1.5e+4 &8.2e+2 & 1.4e+4& 3.3e+2 & 11& 13&11 & 14\\ (4, 1e+3)&1.4e+6& 6.6e+4 &1.2e+6 & 6.6e+2&7&8 &7 & 10\\ (4, 1e+4)&2.9e+8&9.9e+5 &4.3e+7&1.0e+4 &4& 10&6 &13 \\ (4, 1e+5)&1.8e+10&7.3e+6 &1.2e+10 &3.8e+5 & 3& 7 &3 & 10\\ \Xhline{1pt} (20,1e+1)&4.0e+1& 7.6e+0&1.7e+1 &4.6e+0 & 13&14&14 &14 \\ (20,1e+2)& 7.7e+4& 2.6e+2&2.4e+2&1.8e+1 & 9 &12&11 &12 \\ (20,1e+3)&2.0e+5&4.4e+3& 9.3e+4&4.7e+2 &9& 11&10&12 \\ (20,1e+4)&3.2e+7& 2.4e+4&5.2e+6 &1.9e+3 & 6&10& 8& 11 \\ (20,1e+5)&1.7e+9& 1.2e+6&8.8e+8 &6.0e+4 & 3&9&6 &10\\ \Xhline{1pt} (50,1e+1)& 1.1e+1& 2.9e+0&4.4e+0 &4.4e+0 & 13& 12 &13 &13 \\ (50,1e+2)& 2.0e+4& 5.9e+2&8.8e+2 &1.8e+1 & 10& 12 & 11 & 12 \\ (50,1e+3)& 1.1e+6& 7.8e+2&5.8e+4 &5.5e+2 & 8 &11& 9 &12 \\ (50,1e+4)& 8.8e+7&3.2e+4&9.6e+6 &2.1e+3 & 6 &10&7 & 11 \\ (50,1e+5)& 8.4e+9& 2.0e+5&4.8e+8 &3.7e+4 & 3& 9 & 5&10 \\ \Xhline{2pt} \end{tabular} \caption{Numerical results for Example 4.1} \label{table_add} \end{table} }\end{Example}
We now compare our \textbf{Schur-rob} and \textbf{O-Schur-rob} algorithms with the MATLAB functions \textbf{place}, \textbf{robpole} and the \textbf{SCHUR}, \textbf{O-SCHUR} algorithms by applying them to some benchmark sets. The tested benchmark sets include eleven illustrative examples from \cite{BN}, and ten multi-input CARE examples and nine multi-input DARE examples from the benchmark collections \cite{AB, AB2}. All examples are numbered in the order in which they appear in the references.
\begin{Example}{\rm~ The first benchmark set includes eleven small examples from \cite{BN}. Applying the six algorithms to these examples, we find that all of them produce comparable precision for the assigned poles (always greater than $10$), so we omit these results here. Table~\ref{table_1} lists the two measures of robustness, i.e., $dep.$ and $\kappa_F(X)$, for five examples. The results are generally comparable. The remaining six examples are not displayed in the table, as the results of the six algorithms applied to these examples are quite similar.
\tabcolsep 0.02in \begin{table} \centering \begin{tabular}{c|c|c|c|c|c|c} \Xhline{2pt} \multicolumn{2}{c|}{$num.$} & 5 & 7& 8& 9& 11\\ \Xcline{1-7}{1pt} \multirow{6}*{$dep.$}&\textbf{place}& 7.4e-1 &3.5e+0 &1.3e+1 &1.2e+1 & 2.5e-3 \\ & \textbf{robpole} &7.4e-1 & 3.4e+0&5.0e+0 &1.2e+1 & 3.6e-1 \\ & \textbf{SCHUR} & 7.2e-1&7.2e+0 &7.0e+0 &1.9e+1 & 2.3e+0 \\ & \textbf{O-SCHUR} & 7.1e-1& 4.8e+0& 6.0e+0&1.7e+1 & 6.0e-1 \\ & \textbf{Schur-rob} &7.2e-1 &3.7e+0 &7.5e+0 &1.8e+1 & 2.4e-1\\ & \textbf{O-Schur-rob} &7.1e-1 &3.2e+0 &3.3e+0 &1.1e+1 & 1.4e-1\\ \Xcline{1-7}{1pt} \multirow{6}*{$\kappa_F(X)$} &\textbf{place}& 1.5e+2& 1.2e+1&3.7e+1& 2.4e+1&4.0e+0 \\ & \textbf{robpole}&1.5e+2&1.2e+1& 6.2e+0&2.4e+1&4.1e+0\\ & \textbf{SCHUR} &2.7e+3&1.3e+2& 1.1e+1&5.6e+1&6.0e+0 \\ & \textbf{O-SCHUR} & 1.1e+3&4.5e+1&7.5e+0&5.5e+1&4.1e+0\\ & \textbf{Schur-rob} &1.9e+3&2.5e+1&1.2e+1&5.8e+1&4.1e+0\\ & \textbf{O-Schur-rob} &1.2e+3&2.2e+1&9.6e+0&3.3e+1&4.0e+0\\ \Xcline{1-7}{1pt} \Xhline{2pt} \end{tabular} \caption{Robustness of the closed-loop system for the examples from \cite{BN}} \label{table_1} \end{table} Now we apply the six algorithms on ten CARE and nine DARE examples from the SLICOT CARE/DARE benchmark collections \cite{AB, AB2}. Table~\ref{table_2} to Table~\ref{table_5} present the numerical results, respectively. The ``-"s in the first columns in Table~\ref{table_3} and Table~\ref{table_5} corresponding to \textbf{place}, \textbf{robpole}, \textbf{SCHUR} and \textbf{O-SCHUR} mean that all four algorithms fail to output a solution, since the multiplicity of some pole is greater than $m$. Note that the $``precs"$ in the last six columns associated with \textbf{SCHUR} and \textbf{O-SCHUR} in Table~\ref{table_2} and those in the third and eighth columns in Table~\ref{table_3} are also `` -"s, which suggest that there exists at least one eigenvalue of $A+BF$, which owns no relative accuracy compared with the assigned poles. From Table~\ref{table_2}, we know that the relative accuracy $``precs"$ of the poles in example $4$ and $5$ corresponding to \textbf{Schur-rob} and \textbf{O-Schur-rob} are lower than those produced by \textbf{place} and \textbf{robpole}. And the reason is that there are semi-simple eigenvalues in both examples. So how to dispose the issue that semi-simple eigenvalues can achieve higher relative accuracy deserves further exploration and we will treat it in a separate paper. For the sixth column in Table~\ref{table_2}, $``precs"$ from our algorithms are also smaller than those obtained from \textbf{place} and \textbf{robpole} for the existence of poles which are relatively badly separated from the imaginary axis. And this is a weakness of our algorithm. 
\begin{table} \begin{minipage}{0.5\textwidth} \centering \begin{tabular}{ccccccccccc} \Xhline{2pt} \multicolumn{10}{c}{ $precs$}\\ \Xhline{0.4pt} & $1$& $2$&$3$ & $4$& $5$ & $6$& $7$& $8$& $9$&$10$\\ \textbf{place}&14&14&11&11&11&9 &14&11& 13&11\\ \textbf{robpole}&14&14&12&13&12&11&14&14&13&10\\ \textbf{SCHUR}&12&13&9&6&-&-&-&-&-&-\\ \textbf{O-SCHUR}&14&16&10&7&-&-&-&-&-&-\\ \textbf{Schur-rob}&14&14&12&8&9&6&14&14&12&9\\ \textbf{O-Schur-rob}&15&15&13&8&9&6&14&14&12&9\\ \Xhline{2pt} \end{tabular} \caption{Accuracy for CARE examples} \label{table_2} \end{minipage} \begin{minipage}{0.5\textwidth} \centering \begin{tabular}{cccccccccc} \Xhline{2pt} \multicolumn{10}{c}{ $precs$}\\ \Xhline{0.4pt} & $1$& $2$&$3$ & $4$& $5$ & $6$& $7$& $8$& $9$\\ \textbf{place}&-&15&14& 14&7&11 &5& -& 13\\ \textbf{robpole}&-&15&14&14&7&11&1&-&13\\ \textbf{SCHUR}&-&1&-&14&7&8&1&-&12\\ \textbf{O-SCHUR}&-&1&-&14&8&9&2&-&15\\ \textbf{Schur-rob}&15&15&15&15&8&10&4&-&12\\ \textbf{O-Schur-rob}&15&15&15&15&8&10&4&-&13\\ \Xhline{2pt} \end{tabular} \caption{Accuracy for DARE examples} \label{table_3} \end{minipage} \end{table} \tabcolsep 0.001in \begin{table} \centering \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c} \Xhline{2pt} \multicolumn{2}{c|}{$num.$} & 1& 2& 3& 4& 5& 6 & 7& 8& 9& 10\\ \Xcline{1-12}{1pt} \multirow{5}*{$dep.$} &\textbf{place}&5.2e+0&3.0e-1&7.3e+2&1.5e+6&2.9e+6&2.3e+7&7.6e+0&2.2e+1&6.1e+0&4.9e+9 \\ & \textbf{robpole}& 5.2e+0&2.9e-1&5.7e+2& 7.5e+5&2.9e+6&2.3e+7&8.1e+0&2.0e+1& 6.0e+0&3.8e+9 \\ & \textbf{SCHUR} &8.4e+1& 7.2e+0&5.0e+2&1.7e+6&3.0e+9 &5.3e+7&6.2e+1& 8.9e+2&7.5e+0& 4.4e+17\\ & \textbf{O-SCHUR} &4.7e+1&2.6e+0& 3.8e+2& 8.0e+5&5.4e+8 &2.6e+7 &7.3e+0&1.7e+2&6.8e+0& 2.3e+17 \\ & \textbf{Schur-rob}&7.6e+0&3.0e-1& 1.4e+2&1.1e+5&7.3e+6&2.3e+7&7.5e+0&2.1e+1&8.4e+0&2.2e+10 \\ & \textbf{O-Schur-rob}&7.3e+0&2.6e-1& 1.4e+2&1.1e+5&2.5e+6&2.3e+7&6.8e+0&2.0e+1&6.8e+0&2.2e+10 \\ \Xcline{1-12}{1pt} \multirow{5}*{$\kappa_F(X)$} &\textbf{place}&7.4e+0&8.0e+0&4.3e+1&1.7e+15&8.5e+4&4.8e+6&1.6e+1&9.8e+1&1.5e+2&2.3e+6 \\ & \textbf{robpole} &7.3e+0&8.0e+0&4.2e+1&2.2e+7&8.9e+4&3.2e+6&1.6e+1&9.0e+1&1.4e+2& 2.3e+6\\ & \textbf{SCHUR} &2.2e+2&1.0e+1&1.7e+3&9.1e+9& 6.0e+11&4.0e+13 &3.5e+8& 6.1e+9&1.3e+9& 4.6e+13\\ & \textbf{O-SCHUR} &1.2e+2& 5.1e+1&2.1e+3&1.0e+9&2.4e+10 &1.2e+8&1.0e+8& 3.7e+9& 4.1e+9&5.7e+13\\ & \textbf{Schur-rob} &1.1e+1& 8.2e+0&9.2e+2&9.0e+7&2.0e+6&3.2e+8&3.3e+1&5.7e+2&6.5e+3&4.3e+6\\ & \textbf{O-Schur-rob} &1.0e+1&8.0e+0&9.1e+2&6.5e+7&1.3e+6&1.2e+8&2.8e+1&4.2e+2&3.4e+3&4.3e+6\\ \Xcline{1-12}{1pt} \Xhline{2pt} \end{tabular} \caption{Robustness of the closed-loop system matrix for ten CARE examples} \label{table_4} \end{table} \tabcolsep 0.02in \begin{table} \centering \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c} \Xhline{2pt} \multicolumn{2}{c|}{$num.$} & 1& 2& 3& 4& 5& 6 & 7& 8& 9\\ \Xcline{1-11}{1pt} \multirow{5}*{$dep.$} &\textbf{place}&-&2.2e-1&3.9e-1&4.3e-1&1.7e+0&1.4e+0&2.3e+1&4.3e+7&8.9e+0 \\ & \textbf{robpole}&-&2.2e-1&3.9e-1&3.6e-1&1.7e+0&1.3e+0&1.8e+1&3.9e+12& 8.0e+0\\ & \textbf{SCHUR} &-&4.1e-1&1.1e+2&5.9e-1&1.8e+0&1.1e+1&3.2e+2&3.4e+2&1.1e+1\\ & \textbf{O-SCHUR} &-&3.3e-1&4.9e+1&4.1e-1&1.7e+0&1.1e+0& 1.7e+2&1.2e+1&8.0e+0 \\ & \textbf{Schur-rob}&1.0e-1&2.5e-1&1.3e+0&3.4e-1& 1.7e+0&2.0e+0&1.9e+1&9.8e+0&9.9e+0\\ & \textbf{O-Schur-rob}&1.0e-1&2.5e-1&1.3e+0&3.4e-1& 1.7e+0&1.2e+0&1.8e+1&9.4e+0&6.6e+0\\ \Xcline{1-11}{1pt} \multirow{5}*{$\kappa_F(X)$} &\textbf{place}&-&5.2e+0& 4.9e+0&5.4e+0&1.8e+1&1.3e+1&2.3e+8&9.2e+292&3.4e+2\\ & \textbf{robpole} &-&5.2e+0&5.0e+0&5.3e+0&1.8e+1&1.2e+1&2.9e+8&1.3e+308&3.0e+2\\ & 
\textbf{SCHUR} &-&4.0e+7&1.2e+9&5.7e+0& 1.8e+1&5.8e+3&1.9e+11&2.8e+295&4.7e+3\\
& \textbf{O-SCHUR} &-&3.3e+7&8.0e+8&5.4e+0&1.8e+1&1.7e+3&2.0e+11&3.3e+295&2.6e+3\\
& \textbf{Schur-rob} &7.1e+15&5.5e+0&5.6e+0&7.2e+0&1.8e+1&3.8e+1&1.7e+9&5.6e+292&2.2e+4\\
& \textbf{O-Schur-rob} &2.5e+15&5.5e+0&5.5e+0&7.2e+0&1.8e+1&3.8e+1&1.2e+9&5.6e+292&4.7e+3\\
\Xcline{1-11}{1pt} \Xhline{2pt}
\end{tabular}
\caption{Robustness of the closed-loop system matrix for nine DARE examples}
\label{table_5}
\end{table}
}\end{Example}

We now test the five methods \textbf{place}, \textbf{robpole}, \textbf{SCHUR}, \textbf{O-SCHUR} and \textbf{Schur-rob} on some random examples generated by the MATLAB function \textbf{randn}.

\begin{Example}{\rm~
This test set includes $33$ examples, where $n$ varies from $3$ to $25$ in steps of $2$ and $m$ is set to $2, \lfloor\frac{n}{2}\rfloor, n-1$ for each $n$. The examples are generated as follows. We first randomly generate the matrices $A, B$ and $F$ by the MATLAB function \textbf{randn}, and then obtain $\mathfrak{L}$ using the MATLAB function \textbf{eig}, that is, $\mathfrak{L}=eig(A+BF)$. We then apply the five algorithms with $A$, $B$ and $\mathfrak{L}$ as input. Fig.~\ref{fig1} to Fig.~\ref{fig4} exhibit, respectively, the departure from normality of the computed $A_c$, the condition number of the eigenvector matrix $X$, the relative accuracy of the poles and the CPU time of the five algorithms applied to these randomly generated examples. In these figures, the $x$-axis represents the index of the $33$ different pairs $(n,m)$; for example, $(3,2)$, $(5,2)$ and $(5,4)$ correspond to $1$, $2$ and $3$ on the $x$-axis, respectively. The values along the $y$-axis are the mean values over $50$ trials for each $(n,m)$.
\begin{figure} \caption{$dep.$ over 50 trials} \label{fig1} \caption{$\kappa_F(X)$ over 50 trials} \label{fig2} \end{figure}
\begin{figure} \caption{$precs$ over 50 trials} \label{fig3} \caption{CPU time over 50 trials} \label{fig4} \end{figure}
All these figures show that our \textbf{Schur-rob} algorithm can produce results comparable to, or even better than, those of \textbf{place} and \textbf{robpole}, but with much less CPU time.
}\end{Example}

\section{Conclusion}
The pole assignment problem for multi-input control is generally under-determined. Utilizing this freedom to make the closed-loop system matrix as insensitive to perturbations as possible gives rise to the state-feedback robust pole assignment problem ({\bf SFRPA}). Based on {\bf SCHUR} \cite{Chu2}, we propose a new direct method for the {\bf SFRPA}, which obtains the real Schur form of the closed-loop system matrix and tends to minimize its departure from normality via solving some standard eigen-problems. Many numerical examples show that our algorithm can produce results comparable to or even better than existing methods, at much lower computational cost than the two classic methods \textbf{place} and \textbf{robpole}.
\end{document}
\begin{document} \preprintTitle{A Matlab Tutorial for Diffusion-Convection-Reaction Equations using DGFEM} \preprintAuthor{Murat Uzunca \footnote{Department of Mathematics, Middle East Technical University, 06800 Ankara, Turkey, \textit{email}: [email protected] }, B\"{u}lent Karas\"{o}zen \footnote{Department of Mathematics and Institute of Applied Mathematics, Middle East Technical University, 06800 Ankara, Turkey, \textit{email}: [email protected] }} \preprintAbstract{\small We present a collection of MATLAB routines using discontinuous Galerkin finite elements method (DGFEM) for solving steady-state diffusion-convection-reaction equations. The code employs the sparse matrix facilities of MATLAB with "vectorization" and uses multiple matrix multiplications {\it "MULTIPROD"} \cite{leva} to increase the efficiency of the program. } \preprintKeywords{Discontinuous Galerkin FEMs, Diffusion-convection-reaction equations, Matlab} \preprintDate{August 2014} \preprintNo{2014-4} \makePreprintCoverPage \section{DG discretization of the linear model problem} Many engineering problems such as chemical reaction processes, heat conduction, nuclear reactors, population dynamics etc. are governed by convection-diffusion-reaction partial differential equations (PDEs). The general model problem used in the code is \begin{subequations}\label{1} \begin{align} \alpha u - \epsilon\Delta u + {\bf b}\cdot\nabla u &= f \quad \; \text{ in } \; \Omega, \\ u &= g^D \quad \text{on } \; \Gamma^D, \\ \epsilon\nabla u\cdot {\bf n} &= g^N \quad \text{on } \; \Gamma^N. \end{align} \end{subequations} The domain $\Omega$ is bounded, open, convex in $\mathbb{R}^2$ with boundary $\partial\Omega =\Gamma^D\cup\Gamma^N$, $\Gamma^D\cap\Gamma^N=\emptyset$, $0<\epsilon\ll 1 $ is the diffusivity constant, $f\in L^2(\Omega)$ is the source function, ${\bf b}\in\left(W^{1,\infty}(\Omega)\right)^2$ is the velocity field, $g^D\in H^{3/2}(\Gamma^D )$ is the Dirichlet boundary condition, $g^N\in H^{1/2}(\Gamma^N )$ is the Neumann boundary condition and ${\bf n}$ denote the unit outward normal vector to the boundary.\\ \noindent The weak formulation of (\ref{1}) reads as: find $u\in U$ such that \begin{equation} \label{2} \int_{\Omega}(\epsilon\nabla u\cdot\nabla v+{\bf b}\cdot\nabla uv+\alpha uv)dx = \int_{\Omega}fvdx +\int_{\Gamma^N}g^Nv ds\; , \quad \forall v\in V \end{equation} where the solution space $U$ and the test function space $V$ are given by \begin{equation*} U= \{ u\in H^1(\Omega) \, : \; u=g^D \text{ on } \Gamma^D\}, \quad V= \{ v\in H^1(\Omega) \, : \; v=0 \text{ on } \Gamma^D\}. \end{equation*} Being the next step, an approximation to the problem (\ref{2}) is found in a finite-dimensional space $V_h$. In case of classical (continuous) FEMs, the space $V_h$ is set to be the set of piecewise continuous polynomials vanishing on the boundary $\partial\Omega$. In contrast to the continuous FEMs, the DGFEMs uses the set of piecewise polynomials that are fully discontinuous at the interfaces. In this way, the DGFEMs approximation allows to capture the sharp gradients or singularities that affect the numerical solution locally. Moreover, the functions in $V_h$ do not need to vanish at the boundary since the boundary conditions in DGFEMs are imposed weakly.\\ \noindent In our code, the discretization of the problem (\ref{1}) is based on the discontinuous Galerkin methods for the diffusion part \cite{arnold02uad,riviere08dgm} and the upwinding for the convection part \cite{ayuso09dgm,houston02dhp}. 
Let $\{\xi_h\}$ be a family of shape regular meshes with the elements (triangles) $K_i\in\xi_h$ satisfying $\overline{\Omega}=\cup \overline{K}$ and $K_i\cap K_j=\emptyset$ for $K_i$, $K_j$ $\in\xi_h$. Let us denote by $\Gamma_0$, $\Gamma_D$ and $\Gamma_N$ the set of interior, Dirichlet boundary and Neumann boundary edges, respectively, so that $\Gamma_0\cup\Gamma_D\cup\Gamma_N$ forms the skeleton of the mesh. For any $K\in\xi_h$, let $\mathbb{P}_k(K)$ be the set of all polynomials of degree at most $k$ on $K$. Then, set the finite dimensional solution and test function space by $$ V_h=\left\{ v\in L^2(\Omega ) : v|_{K}\in\mathbb{P}_k(K) ,\; \forall K\in \xi_h \right\}\not\subset V. $$ Note that the trial and test function spaces are the same because the boundary conditions in discontinuous Galerkin methods are imposed in a weak manner. Since the functions in $V_h$ may have discontinuities along the inter-element boundaries, along an interior edge, there would be two different traces from the adjacent elements sharing that edge. In the light of this fact, let us first introduce some notations before giving the DG formulation. Let $K_i$, $K_j\in\xi_h$ ($i<j$) be two adjacent elements sharing an interior edge $e=K_i\cap K_j\subset \Gamma_0$ (see Fig.\ref{jump}). Denote the trace of a scalar function $v$ from inside $K_i$ by $v_{i}$ and from inside $K_j$ by $v_{j}$. Then, set the jump and average values of $v$ on the edge $e$ $$ [v]= v_{i}{\bf n}_e- v_{j}{\bf n}_e , \quad \{ v\}=\frac{1}{2}(v_{i}+ v_{j}), $$ where ${\bf n}_e$ is the unit normal to the edge $e$ oriented from $K_i$ to $K_j$. Similarly, we set the jump and average values of a vector valued function ${\bf q}$ on e $$ [{\bf q}]= {\bf q}_{i}\cdot {\bf n}_e- {\bf q}_{j}\cdot {\bf n}_e , \quad \{ {\bf q}\}=\frac{1}{2}({\bf q}_{i}+ {\bf q}_{j}), $$ Observe that $[v]$ is a vector for a scalar function $v$, while, $[{\bf q}]$ is scalar for a vector valued function ${\bf q}$. On the other hands, along any boundary edge $e=K_i\cap \partial\Omega$, we set $$ [v]= v_{i}{\bf n} , \quad \{ v\}=v_{i}, \quad [{\bf q}]={\bf q}_{i}\cdot {\bf n}, \quad \{ {\bf q}\}={\bf q}_{i} $$ where ${\bf n}$ is the unit outward normal to the boundary at $e$. \begin{figure} \caption{Two adjacent elements sharing an edge (left); an element near to domain boundary (right)} \label{jump} \end{figure} \noindent We also introduce the inflow parts of the domain boundary and the boundary of a mesh element $K$, respectively $$ \Gamma^- = \{ x\in\partial \Omega \; : \; {\bf b}(x)\cdot {\bf n}(x)<0\} \; , \quad \partial K^- = \{ x\in\partial K \; : \; {\bf b}(x)\cdot {\bf n}_K(x)<0\}. 
$$
Then, the DG discretization of the problem (\ref{1}), combined with the upwind discretization of the convection part, reads: find $u_h\in V_h$ such that
\begin{equation} \label{ds}
a_{h}(u_{h},v_{h})=l_{h}(v_{h}) \qquad \forall v_h\in V_h,
\end{equation}
\begin{subequations}
\begin{align}
a_{h}(u_{h}, v_{h})=& \sum \limits_{K \in {\xi}_{h}} \int_{K} \epsilon \nabla u_{h}\cdot\nabla v_{h} dx + \sum \limits_{K \in {\xi}_{h}} \int_{K} ({\bf b} \cdot \nabla u_{h}+\alpha u_h) v_{h} dx\nonumber \\
&- \sum \limits_{ e \in \Gamma_{0}\cup\Gamma_{D}}\int_{e} \{\epsilon \nabla u_{h}\} \cdot [v_{h}] ds + \kappa \sum \limits_{ e \in \Gamma_{0}\cup\Gamma_{D}} \int_{e} \{\epsilon \nabla v_{h}\} \cdot [u_{h}] ds \nonumber \\
&+ \sum \limits_{K \in {\xi}_{h}}\int_{\partial K^-\setminus\partial\Omega } {\bf b}\cdot {\bf n} (u_{h}^{out}-u_{h}^{in}) v_{h} ds - \sum \limits_{K \in {\xi}_{h}} \int_{\partial K^-\cap \Gamma^{-}} {\bf b}\cdot {\bf n} u_{h}^{in} v_{h} ds \nonumber \\
&+ \sum \limits_{ e \in \Gamma_{0}\cup\Gamma_{D}}\frac{\sigma \epsilon}{h_{e}} \int_{e} [u_{h}] \cdot [v_{h}] ds, \nonumber \\
l_{h}( v_{h})=& \sum \limits_{K \in {\xi}_{h}} \int_{K} f v_{h} dx + \sum \limits_{e \in \Gamma_{D}} \int_e g^D \left( \frac{\sigma \epsilon}{h_{e}} v_{h} - {\epsilon\nabla v_{h}} \cdot {\bf n} \right) ds \nonumber \\
&- \sum \limits_{K \in {\xi}_{h}}\int_{\partial K^-\cap \Gamma^{-}} {\bf b}\cdot {\bf n} g^D v_{h} ds + \sum \limits_{e \in \Gamma_{N}} \int_e g^N v_{h} ds, \nonumber
\end{align}
\end{subequations}
where $u_{h}^{out}$ and $u_{h}^{in}$ denote the values on an edge from outside and inside of an element $K$, respectively. The parameter $\kappa$ determines the type of DG method and takes the values $\{ -1,1,0 \}$: $\kappa =-1$ gives the {\it "symmetric interior penalty Galerkin"} (SIPG) method, $\kappa =1$ gives the {\it "non-symmetric interior penalty Galerkin"} (NIPG) method and $\kappa =0$ gives the {\it "incomplete interior penalty Galerkin"} (IIPG) method. The parameter $\sigma\in\mathbb{R}_0^+$ is called the penalty parameter; it should be sufficiently large and independent of the mesh size $h$ and the diffusion coefficient $\epsilon$ \cite{riviere08dgm} [Sec. 2.7.1]. In our code, we choose the penalty parameter $\sigma$ on interior edges depending on the polynomial degree $k$ as $\sigma=3k(k+1)$ for the SIPG and IIPG methods, whereas we take $\sigma=1$ for the NIPG method. On boundary edges, we take the penalty parameter as twice the penalty parameter on interior edges.
\section{Descriptions of the MATLAB code}
The given codes are mostly self-explanatory, with comments explaining what each section of the code does. In this section, we give a description of our main code. The use of the main code consists of four parts:
\begin{enumerate}
\item Mesh generation,
\item Entry of user defined quantities (boundary conditions, order of basis etc.),
\item Forming and solving the linear systems,
\item Plotting the solutions.
\end{enumerate}
\noindent Except for the last one, all the parts above take place in the m-file {\it Main\_Linear.m}, which is the main code to be used by the users for linear problems without the need to edit any other m-file. The last part, plotting the solutions, takes place in the m-file {\it dg\_error.m}.
\subsection{Mesh generation}
In this section, we define the data structure of a triangular mesh on a polygonal domain in $\mathbb{R}^2$.
The data structure presented here is based on simple arrays \cite{chen08fem}, which are stored in a MATLAB "struct" that collects two or more data fields in one object that can then be passed to routines. To obtain an initial mesh, we first define the nodes, elements, Dirichlet and Neumann conditions in the m-file {\it Main\_Linear.m}, and we call the {\it getmesh} function to form the initial mesh structure {\it mesh}.
\begin{lstlisting}
Nodes = [0,0;0.5,0;1,0;0,0.5;0.5,0.5;1,0.5;0,1;0.5,1;1,1];
Elements = [4,1,5;1,2,5;5,2,6; 2,3,6;7,4,8;4,5,8;8,5,9;5,6,9];
Dirichlet = [1,2;2,3;1,4;3,6;4,7;6,9;7,8;8,9];
Neumann = [];
mesh = getmesh(Nodes,Elements,Dirichlet,Neumann);
\end{lstlisting}
\noindent Each row in the {\bf Nodes} array corresponds to a mesh node: the first column stores the $x$-coordinate of the node and the second the $y$-coordinate, and the $i$-th row of the {\bf Nodes} array is referred to as the node with index $i$. In the {\bf Elements} array, each row with 3 columns corresponds to a triangular element in the mesh and contains the indices of the nodes forming the 3 vertices of the triangle in counter-clockwise orientation. Finally, in the {\bf Dirichlet} and {\bf Neumann} arrays, each row with 2 columns corresponds to a Dirichlet or Neumann boundary edge and contains the indices of its starting and ending nodes, respectively (see Fig.\ref{meshdata}).
\begin{figure} \caption{Initial mesh on the unit square $\Omega = [0,1]^2$ with nodes $n_i$, triangles $E_j$ and edges $e_k$} \label{meshdata} \end{figure}
The mesh "struct" in the code has the following fields:
\begin{itemize}
\item Nodes, Elements, Edges, intEdges, DbdEdges, NbdEdges,
\item vertices1, vertices2, vertices3,
\item Dirichlet, Neumann, EdgeEls, ElementsE,
\end{itemize}
which can be reached by {\it mesh.Nodes}, {\it mesh.Elements} and so on, and which are used by the other functions to form the DG construction. The initial mesh can be uniformly refined several times in a "for loop" by calling the function {\it uniformrefine}.
\begin{lstlisting}
for jj=1:2
  mesh=uniformrefine(mesh);
end
\end{lstlisting}
\subsection{User defined quantities}
There are certain input values that have to be supplied by the user. Here, we describe how one can define these quantities in the main code {\it Main\_Linear.m}. \\
\noindent One determines the type of the DG method (SIPG, NIPG or IIPG) and the order of the polynomial basis to be used by the variables {\it method} and {\it degree}, respectively. According to these choices, the values of the penalty parameter and of the parameter $\kappa \in \{-1,1,0\}$ defining the DG method in (\ref{ds}) are set by calling the sub-function {\it set\_parameter}.
\begin{lstlisting}
method=2;
degree=1;
[penalty,kappa]=set_parameter(method,degree);
\end{lstlisting}
The next step is to supply the problem parameters. The diffusion constant $\epsilon$, the advection vector $\bf b$ and the linear reaction term $\alpha$ are defined via the sub-functions {\it fdiff}, {\it fadv} and {\it freact}, respectively.
\begin{lstlisting}
function diff = fdiff(x,y)
diff = (10^(-6)).*ones(size(x));
end

function [adv1,adv2] = fadv(x,y)
adv1 =(1/sqrt(5))*ones(size(x));
adv2 =(2/sqrt(5))*ones(size(x));
end

function react = freact(x,y)
react = ones(size(x));
end
\end{lstlisting}
The exact solution (if it exists) and the source function $f$ are defined via the sub-functions {\it fexact} and {\it fsource}, respectively.
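As an illustration of this interface (our own variation, not part of the distributed code), a user who wants, for instance, a rotating convection field ${\bf b}(x,y)=(-y,\,x)^T$ and no linear reaction only needs to edit these sub-functions; their signatures must be kept unchanged, since they are passed as function handles to {\it global\_system}.
\begin{lstlisting}
function [adv1,adv2] = fadv(x,y)
% illustrative variation: rotating velocity field b = (-y, x)
adv1 = -y;
adv2 =  x;
end

function react = freact(x,y)
% illustrative variation: no linear reaction term
react = zeros(size(x));
end
\end{lstlisting}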
Finally, the boundary conditions are supplied via the sub-functions {\it DBCexact} and {\it NBCexact}. \\ \begin{lstlisting} yex_x=(-1./(sqrt(5*diff))).*(sech((2*x-y-0.25)./... (sqrt(5*diff)))).^2; yex_y=((0.5)./(sqrt(5*diff))).*(sech((2*x-y-0.25)./... (sqrt(5*diff)))).^2; yex_xx=((0.8)./diff).*tanh((2*x-y-0.25)./(sqrt(5*diff))).*... (sech((2*x-y-0.25)./(sqrt(5*diff)))).^2; yex_yy=((0.2)./diff).*tanh((2*x-y-0.25)./(sqrt(5*diff))).*... (sech((2*x-y-0.25)./(sqrt(5*diff)))).^2; source=-diff.*(yex_xx+yex_yy)+(adv1.*yex_x+adv2.*yex_y)+... \end{lstlisting} \subsection{Forming and solving linear systems} To form the linear systems, firstly, let us rewrite the discrete DG scheme (\ref{ds}) as \begin{equation}\label{dgscheme} a_{h}(u_{h},v_{h}):= D_{h}(u_{h},v_{h}) + C_{h}(u_{h},v_{h}) + R_{h}(u_{h},v_{h}) =l_{h}(v_{h}) \qquad \forall v_h\in V_h, \end{equation} where the forms $D_{h}(u_{h},v_{h})$, $C_{h}(u_{h},v_{h})$ and $R_{h}(u_{h},v_{h})$ corresponding to the diffusion, convection and linear reaction parts of the problem, respectively \begin{subequations} \begin{align} D_{h}(u_{h}, v_{h})=& \sum \limits_{K \in {\xi}_{h}} \int_{K} \epsilon \nabla u_{h}\cdot\nabla v_{h} dx + \sum \limits_{ e \in \Gamma_{0}\cup\Gamma_{D}}\frac{\sigma \epsilon}{h_{e}} \int_{e} [u_{h}] \cdot [v_{h}] ds \nonumber \\ &- \sum \limits_{ e \in \Gamma_{0}\cup\Gamma_{D}}\int_{e} \{\epsilon \nabla u_{h}\} \cdot [v_{h}] ds + \kappa \sum \limits_{ e \in \Gamma_{0}\cup\Gamma_{D}} \int_{e} \{\epsilon \nabla v_{h}\} \cdot [u_{h}] ds \nonumber \\ C_{h}(u_{h}, v_{h})=& \sum \limits_{K \in {\xi}_{h}} \int_{K} {\bf b} \cdot \nabla u_{h} v_{h} dx\nonumber \\ &+ \sum \limits_{K \in {\xi}_{h}}\int_{\partial K^-\setminus\partial\Omega } {\bf b}\cdot {\bf n} (u_{h}^{out}-u_{h}^{in}) v_{h} ds - \sum \limits_{K \in {\xi}_{h}} \int_{\partial K^-\cap \Gamma^{-}} {\bf b}\cdot {\bf n} u_{h}^{in} v_{h} ds \nonumber \\ R_{h}(u_{h}, v_{h})=& \sum \limits_{K \in {\xi}_{h}} \int_{K} \alpha u_h v_{h} dx \nonumber \\ l_{h}( v_{h})=& \sum \limits_{K \in {\xi}_{h}} \int_{K} f v_{h} dx + \sum \limits_{e \in \Gamma_{D}} \int_e g^D \left( \frac{\sigma \epsilon}{h_{e}} v_{h} - {\epsilon\nabla v_{h}} \cdot {\bf n} \right) ds \nonumber \\ &- \sum \limits_{K \in {\xi}_{h}}\int_{\partial K^-\cap \Gamma^{-}} {\bf b}\cdot {\bf n} g^D v_{h} ds + \sum \limits_{e \in \Gamma_{N}} \int_e g^N v_{h} ds, \nonumber \end{align} \end{subequations} For a set of basis functions $\{\phi_i\}_{i=1}^N$ spanning the space $V_h$, the discrete solution $u_h\in V_h$ is of the form \begin{equation}\label{sol} u_h = \sum_{j=1}^N \upsilon_j\phi_j \end{equation} where $\upsilon=(\upsilon_1,\upsilon_2, \ldots , \upsilon_N)^T$ is the unknown coefficient vector. After substituting (\ref{sol}) into (\ref{dgscheme}) and taking $v_h=\phi_i$, we get the linear system of equations \begin{equation}\label{system} \sum_{j=1}^N \upsilon_j D_{h}(\phi_j,\phi_i) + \sum_{j=1}^N \upsilon_j C_{h}(\phi_j,\phi_i) + \sum_{j=1}^N \upsilon_j R_{h}(\phi_j,\phi_i) = l_h(\phi_i) \; , \quad i=1,2,\ldots , N \end{equation} Thus, for $i=1,2,\ldots , N$, to form the linear system in matrix-vector form, we need the matrices $D, C, R\in \mathbb{R}^{N\times N}$ related to the terms including the forms $D_{h}$, $C_{h}$ and $R_{h}$ in (\ref{system}), respectively, satisfying $$ D\upsilon + C\upsilon + R\upsilon = F $$ with the unknown coefficient vector $\upsilon$ and the vector $F\in\mathbb{R}^N$ related to the linear rhs functionals $l_h(\phi_i)$ such that $F_i=l_h(\phi_i)$, $i=1,2,\ldots , N$. 
In the code {\it Main\_Linear.m}, all the matrices $D,C,R$ and the vector $F$ are obtained by calling the function {\it global\_system}, in which the sub-functions introduced in the previous subsection are used. We set the stiffness matrix, {\it Stiff}, as the sum of the obtained matrices and we solve the linear system for the unknown coefficient vector {\it coef}$:=\upsilon$. \begin{lstlisting} [D,C,R,F]=global_system(mesh,@fdiff,@fadv,@freact,... @fsource,@DBCexact,@NBCexact,penalty,kappa,degree); Stiff=D+C+R; coef=Stiff\F; \end{lstlisting} \subsection{Plotting the solution} After solving the problem for the unknown coefficient vector, the solutions are plotted via the the function {\it dg\_error}, and also the $L^2$-error between the exact and numerical solution is computed. \begin{lstlisting} [l2err,hmax]=dg_error(coef,mesh,@fexact,@fdiff,degree); \end{lstlisting} \section{Models with non-linear reaction mechanisms} Most of the problems include non-linear source or sink terms. The general model problem in this case is \begin{subequations}\label{nonlin} \begin{align} \alpha u - \epsilon\Delta u + {\bf b}\cdot\nabla u + r(u) &= f \quad \; \text{ in } \; \Omega, \\ u &= g^D \quad \text{on } \; \Gamma^D, \\ \epsilon\nabla u\cdot {\bf n} &= g^N \quad \text{on } \; \Gamma^N. \end{align} \end{subequations} which arises from the time discretization of the time-dependent non-linear diffusion-convection-reaction equations. Here, the coefficient of the linear reaction term, $\alpha >0$, stand for the temporal discretization, corresponding to $1/\Delta t$, where $\Delta t$ is the discrete time-step. The model (\ref{nonlin}) differs from the model (\ref{1}) by the additional non-linear term $r(u)$. To have a unique solution, in addition to the assumptions given in Section 1, we assume that the non-linear reaction term, $r(u)$, is bounded, locally Lipschitz continuous and monotone, i.e. satisfies for any $s, s_1, s_2\ge 0$, $s,s_1, s_2 \in \mathbb{R}$ the following conditions \cite{uzunca14adg} \begin{align*} |r_i(s)| &\leq C , \quad C>0 \\ \| r_i(s_1)-r_i(s_2)\|_{L^2(\Omega)} &\leq L\| s_1-s_2\|_{L^2(\Omega)} , \quad L>0 \\ r_i\in C^1(\mathbb{R}_0^+), \quad r_i(0) =0, &\quad r_i'(s)\ge 0. \end{align*} The non-linear reaction term $r(u)$ occur in chemical engineering usually in the form of products and rational functions of concentrations, or exponential functions of the temperature, expressed by the Arrhenius law. Such models describe chemical processes and they are strongly coupled as an inaccuracy in one unknown affects all the others.\\ \noindent To solve the non-linear problems, we use the m-file {\it Main\_Nonlinear} which is similar to the m-file {\it Main\_Linear}, but now we use Newton iteration to solve for $i=1,2,\ldots , N$ the non-linear system of equations \begin{equation}\label{nonlinsystem} \sum_{j=1}^N \upsilon_j D_{h}(\phi_j,\phi_i) + \sum_{j=1}^N \upsilon_j C_{h}(\phi_j,\phi_i) + \sum_{j=1}^N \upsilon_j R_{h}(\phi_j,\phi_i) + \int_{\Omega} r(u_h)\phi_i dx = l_h(\phi_i) \end{equation} Similar to the linear case, the above system leads to the matrix-vector form $$ D\upsilon + C\upsilon + R\upsilon + H(\upsilon) = F $$ where, in addition to the matrices $D,C,R\in\mathbb{R}^{N\times N}$ and the vector $F\in\mathbb{R}^N$, we also need the vector $H\in\mathbb{R}^N$ related to the non-linear term such that $$ H_i(\upsilon) = \int_{\Omega} r\left( \sum_{j=1}^N \upsilon_j \phi_j \right)\phi_i dx \; , \quad i=1,2,\ldots , N. $$ We solve the nonlinear system by Newton method. 
For an initial guess $\upsilon^0=(\upsilon^0_1,\upsilon^0_2, \ldots , \upsilon^0_N)^T$, we solve the system
\begin{eqnarray} \label{newton}
J^kw^k &=& -Res^k \\
\upsilon^{k+1} &=& w^k + \upsilon^k \; , \quad k=0,1,2,\ldots \nonumber
\end{eqnarray}
until a user defined tolerance is satisfied. In (\ref{newton}), $Res^k$ and $J^k$ denote the residual vector of the system and its Jacobian matrix at the current iterate $\upsilon^k$, respectively, given by
\begin{eqnarray*}
Res^k &=& (D+C+R)\upsilon^k + H(\upsilon^k) - F \\
J^k &=& D+C+R + HJ(\upsilon^k)
\end{eqnarray*}
where $HJ(\upsilon^k)$ is the Jacobian matrix of the non-linear vector $H$ at $\upsilon^k$,
$$
HJ(\upsilon^k)= \begin{bmatrix} \frac{\partial H_1(\upsilon^k)}{\partial \upsilon^k_1} & \frac{\partial H_1(\upsilon^k)}{\partial \upsilon^k_2} & \cdots & \frac{\partial H_1(\upsilon^k)}{\partial \upsilon^k_N} \\ \vdots & \ddots & & \vdots \\ \frac{\partial H_N(\upsilon^k)}{\partial \upsilon^k_1} & \frac{\partial H_N(\upsilon^k)}{\partial \upsilon^k_2} & \cdots & \frac{\partial H_N(\upsilon^k)}{\partial \upsilon^k_N} \end{bmatrix}
$$
In the code {\it Main\_Nonlinear}, obtaining the matrices $D,C,R$ and the rhs vector $F$ is similar to the linear case, but now, additionally, we introduce an initial guess for the Newton iteration and solve the nonlinear system by Newton's method.
\pagebreak
\begin{lstlisting}
coef=zeros(size(Stiff,1),1);
noi=0;
for ii=1:50
    noi=noi+1;
    [H,HJ]=nonlinear_global(coef,mesh,@freact_nonlinear,degree);
    Res = Stiff*coef + H - F;
    J = Stiff + HJ ;
    w = J \ (-Res);
    coef = coef + w;
    if norm(J*w+Res) < 1e-20
        break;
    end
end
\end{lstlisting}
\noindent To obtain the non-linear vector $H$ and its Jacobian $HJ$ at the current iterate, we call the function {\it nonlinear\_global}, which uses the function handle {\it freact\_nonlinear}, a sub-function in the file {\it Main\_Nonlinear}. The sub-function {\it freact\_nonlinear} has to be supplied by the user as the non-linear term $r(u)$ and its derivative $r'(u)$.
\begin{lstlisting}
function [r,dr] = freact_nonlinear(u)
r = u.^2;
dr = 2*u;
end
\end{lstlisting}
\section{MATLAB routines for main code}
Here, we give the main m-file {\it Main\_Nonlinear.m} of the code. The full code is available upon request to the e-mail address \email{[email protected]}.
\begin{lstlisting}
function Main_Nonlinear()

clear all
clc

Nodes = [0,0;0.5,0;1,0;0,0.5;0.5,0.5;1,0.5;0,1;0.5,1;1,1];
Elements = [4,1,5;1,2,5;5,2,6; 2,3,6;7,4,8;4,5,8;8,5,9;5,6,9];
Dirichlet = [1,2;2,3;1,4;3,6;4,7;6,9;7,8;8,9];
Neumann = [];

mesh = getmesh(Nodes,Elements,Dirichlet,Neumann);

for jj=1:2
    mesh=uniformrefine(mesh);
end

method=2;
degree=1;
[penalty,kappa]=set_parameter(method,degree);

[D,C,R,F]=global_system(mesh,@fdiff,@fadv,@freact,...
    @fsource,@DBCexact,@NBCexact,penalty,kappa,degree);

Stiff=D+C+R;

coef=zeros(size(Stiff,1),1);

noi=0;
for ii=1:50
    noi=noi+1;

    [H,HJ]=nonlinear_global(coef,mesh,@freact_nonlinear,degree);

    Res = Stiff*coef + H - F;
    J = Stiff + HJ ;
    w = J \ (-Res);
    coef = coef + w;

    if norm(J*w+Res) < 1e-20
        break;
    end
end

[l2err,hmax]=dg_error(coef,mesh,@fexact,@fdiff,degree);

dof=size(mesh.Elements,1)*(degree+1)*(degree+2)*0.5;

fprintf(' DoFs      h_max     L2-error    #it\n')
fprintf(' 
    dof, hmax ,l2err,noi);
end

function diff = fdiff(x,y)
diff = (10^(-6)).*ones(size(x));
end

function [adv1,adv2] = fadv(x,y)
adv1 =(1/sqrt(5))*ones(size(x));
adv2 =(2/sqrt(5))*ones(size(x));
end

function react = freact(x,y)
react = ones(size(x));
end

function [r,dr] = freact_nonlinear(u)
r = u.^2;
dr = 2*u;
end

function [yex,yex_x,yex_y] = fexact(fdiff,x,y)
diff = feval(fdiff,x,y);
yex=0.5*(1-tanh((2*x-y-0.25)./(sqrt(5*diff))));
yex_x=(-1./(sqrt(5*diff))).*(sech((2*x-y-0.25)./...
    (sqrt(5*diff)))).^2;
yex_y=((0.5)./(sqrt(5*diff))).*(sech((2*x-y-0.25)./...
    (sqrt(5*diff)))).^2;
end

function source = fsource(fdiff,fadv,freact,x,y)
diff = feval(fdiff,x,y );
[adv1,adv2] = feval(fadv,x, y );
reac = feval(freact,x,y);

yex=0.5*(1-tanh((2*x-y-0.25)./(sqrt(5*diff))));
yex_x=(-1./(sqrt(5*diff))).*(sech((2*x-y-0.25)./...
    (sqrt(5*diff)))).^2;
yex_y=((0.5)./(sqrt(5*diff))).*(sech((2*x-y-0.25)./...
    (sqrt(5*diff)))).^2;
yex_xx=((0.8)./diff).*tanh((2*x-y-0.25)./(sqrt(5*diff))).*...
    (sech((2*x-y-0.25)./(sqrt(5*diff)))).^2;
yex_yy=((0.2)./diff).*tanh((2*x-y-0.25)./(sqrt(5*diff))).*...
    (sech((2*x-y-0.25)./(sqrt(5*diff)))).^2;

source=-diff.*(yex_xx+yex_yy)+(adv1.*yex_x+adv2.*yex_y)+...
    reac.*yex+yex.^2;
end

function DBC=DBCexact(fdiff,x,y)
diff = feval(fdiff,x,y);
DBC=0.5*(1-tanh((2*x-y-0.25)./(sqrt(5*diff))));
end

function NC = NBCexact(mesh,fdiff,x,y)
NC=zeros(size(x));
end

function [penalty,kappa]=set_parameter(method,degree)

global Equation;

Equation.b0=1;
Equation.base=2;

switch method
    case 1
        Equation.method=1;
        kappa=1;
        penalty=1;
    case 2
        Equation.method=2;
        kappa=-1;
        penalty=3*degree*(degree+1);
    case 3
        Equation.method=3;
        kappa=0;
        penalty=3*degree*(degree+1);
end

end
\end{lstlisting}
\end{document}
\begin{document} \title{The Cascading Metric Tree} \author{{Jeffrey~Uhlmann,~ Miguel~R.\ Zuniga} \IEEEcompsocitemizethanks{\IEEEcompsocthanksitem Prof.\ Uhlmann is a faculty of the Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO, 65211.\protect\\ \IEEEcompsocthanksitem M.\ Zuniga is head of biosystems analytics at New Generation Software.} \thanks{Manuscript received September 1, 2021; revised December 26, 2021.}} \markboth{Journal of \LaTeX\ Class Files,~Vol.~14, No.~8, August~2015} {The Cascading Metric Tree} \IEEEtitleabstractindextext{ \begin{abstract} This paper presents the Cascaded Metric Tree (CMT) for efficient satisfaction of metric search queries over a dataset of $N$ objects. It provides extra information that permits query algorithms to exploit all distance calculations performed along each path in the tree for pruning purposes. In addition to improving standard metric range (ball) query algorithms, we present a new algorithm for exploiting the CMT cascaded information to achieve near-optimal performance for $k$-nearest neighbor (kNN) queries. We demonstrate the performance advantage of CMT over classical metric search structures on synthetic datasets of up to 10 million objects and on the $564K$ Swiss-Prot protein sequence dataset containing over $200$ million amino acids. As a supplement to the paper, we provide reference implementations of the empirically-examined algorithms to encourage improvements and further applications of CMT to practical scientific and engineering problems. \end{abstract} \begin{IEEEkeywords} Biosequence databases, CMT, GIS, metric search, spatial data structures. \end{IEEEkeywords}} \maketitle \IEEEdisplaynontitleabstractindextext \IEEEpeerreviewmaketitle \section{Introduction}\label{sec:introduction} \IEEEPARstart{I}{n} this paper we examine new methods, as well as combinations of known methods, for augmenting metric search structures to provide a level of performance necessary for practical applications, e.g., involving large-scale protein and genomic sequence datasets. Our first contribution is the use of {\em metric cascading}, in the form of a {\em cascaded metric tree} (CMT) to permit every distance calculation performed along a path in a tree to provide an additional pruning test at every subsequent node along that path. In other words, if $P$ distance calculations have been performed along the path from the root node to a given node, then $O(P)$ fast independent scalar-inequality tests (i.e., not requiring computationally-expensive metric distance calculations) are available for pruning the search at that node. We formally define a general family of cascaded metric tree structures for which the number of distance calculations per node is parameterized. However, our goal in this paper is to define a search structure and associated query algorithms that maximally exploit information from each distance calculation such that near-optimal performance can be expected without need for application-specific tuning. To this end, our tests only examine the most basic form with no parametric tuning of any kind. 
\section{Background}\label{sec:background}
\IEEEPARstart{P}{erhaps} the simplest generalization of the familiar 1d (scalar) balanced binary search tree (BST) to accommodate general metric search queries can be constructed by choosing one of the $n$ objects in a dataset and computing the distance (radius) from it to the remaining objects, with the object and the median distance stored at the root node. Applying this process recursively, such that objects at less than the median radius at a given node form the left subtree and the remainder form the right subtree, yields a balanced {\em metric} binary search tree (MBST) such that the path to a given object is uniquely determined by a sequence of distance calculations starting at the root node. For example, if the distance between a given query object $s$ and the object $r$ stored at the root node is less than the radius stored at the node then $s$ must be in the left subtree, otherwise it must be in the right subtree. This provides a recursive search algorithm that is guaranteed to find $s$, or determine that object $s$ is not in the tree, using only $O(\log(N))$ distance calculations.

Generalization of the MBST search algorithm to satisfy queries asking for all objects within a given distance from a query object, referred to as metric {\em range} or {\em ball} queries, is directly analogous to the satisfaction of interval range queries in a BST, where both subtrees of a node must be searched if the query range spans the partitioning value. In the case of the simple MBST, a metric range query can only prune a subtree from the search process when the metric ball determined by the object and radius at a given node does not intersect the query volume (ball). This requires only the calculation of the distance between the query object and the object at the node, followed by a triangle inequality test involving their respective radii. Thus, the triangle inequality property of metric distances is absolutely necessary for correctness of the algorithm when pruning is applied\footnote{The books of Samet \cite{sametbook} and Zezula {\em et al.} \cite{zezula2006similarity}, and the review article by Chavez {\em et al.} \cite{chavez2001searching}, provide outstanding coverage of the literature on metric search structures and algorithms.}.

As described thus far, the simple MBST only permits left subtrees to be pruned because there is no available bound on the spatial extent of objects in the right subtree. If, in addition to the median, information defining the minimum-volume {\em bounding ball} is also stored at each node, then an additional criterion for pruning will be available at each node during the search process. Unfortunately, this bounding ball cannot necessarily be made tight except in the case of vector spaces (which permit arithmetic operations to be performed on objects, e.g., to create a new object as the mean of two other objects) because there is no way in general to construct a virtual ``spatial centroid'' object in the general case of black-box distance functions and objects. However, it {\em is} possible to identify an object for which the maximum radius to any other object is minimized, but this incurs the cost of a potentially quadratic number of distance calculations during the tree construction process.
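To make the construction concrete, the following MATLAB sketch (our illustration, not code from this paper) builds the simple MBST just described; {\tt dist} is assumed to be a handle to the black-box metric, and objects whose distance to the pivot is at most the median go to the left subtree (a slight variation on the strict inequality used above).
\begin{verbatim}
function node = mbstBuild(S, dist)
% S: cell array of objects; dist: handle to the black-box metric d(x,y).
if isempty(S), node = []; return; end
node.p = S{1};                                % pivot object stored at the node
rest   = S(2:end);
if isempty(rest)
    node.radius = 0; node.left = []; node.right = []; return;
end
r          = cellfun(@(x) dist(node.p, x), rest);  % one distance per object
[r, order] = sort(r);  rest = rest(order);
h          = ceil(numel(rest)/2);             % median position
node.radius = r(h);                           % objects with d <= radius go left
node.left   = mbstBuild(rest(1:h),     dist);
node.right  = mbstBuild(rest(h+1:end), dist);
end
\end{verbatim}
A point search for an object $s$ then follows a single root-to-leaf path by comparing $d(s,\mbox{\tt node.p})$ with {\tt node.radius} at each visited node, which is the $O(\log(N))$ behavior noted above for a balanced tree.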
Clearly there are many degrees of freedom available during the tree construction process that can be tailored to potentially improve MBST query performance (i.e., to reduce the average number of distance calculations per query) based on the characteristics of datasets that are deemed typical for a particular application. For example, the strict binary structure can be relaxed to a $b$-ary tree with branching-value $b$ determined empirically based on tests over representative datasets for a given application. Unfortunately, too many degrees of freedom can mask the fact that the search algorithm is suboptimal in its sensitivity to application-specific variables. Ideally, there would exist a data structure that provides consistent near-optimal performance such that minimal improvement can be expected from additional heuristic application-specific tuning. In other words, search structures and/or algorithms with few or no tunable parameters are likely to exhibit more robust practical performance than those for which good performance demands extensive tuning of many parameters.

\section{Metric Search}\label{sec:metric}
\IEEEPARstart{A}{} function $d(x,y)$ defines a metric distance on the set $S$ if, for all elements $x,y,z$ of $S$, the properties in Table \ref{table:metric} hold.
\begin{table}[htb]
\centering
\caption{Metric Properties $\forall\, x,y,z \in S$}
\label{table:metric}
\begin{tabular}{ l | l }
\hline
$d(x,y) \ge 0$ & non-negativity\vphantom{\Huge x} \\
$d(x,y) = d(y,x)$ & symmetry \\
$x = y \Rightarrow d(x,y) = 0$ & identity\\
$d(x,z) \leq d(x,y) + d(y,z)$ & triangle inequality \\
\hline
\end{tabular}
\end{table}
Given a query object $q$, a search radius $r$, and a set $S$ of objects, a metric range query $R(q,r)$ over $S$ reports all objects in $S$ within distance $r$ of the object $q$. An exhaustive (brute force) search can satisfy the query by performing $|S|$ distance calculations, but this may not be practical when $S$ is very large or when the relevant distance function $d(x,y)$ is computationally expensive to evaluate. However, the metric triangle inequality property offers a means for potentially reducing the number of distance calculations needed to satisfy such queries.

For example, consider object $p \in S$ and $C \equiv \{ S \setminus p \}$. Also assume that the nearest and farthest objects in $C$ from $p$ are at known distances $d(p,p_n)$ and $d(p,p_f)$, respectively. Given these conditions there exist queries $R(q,r)$ for which performing only one additional distance calculation $d(p,q)$ establishes that {\em none} of the objects in $C$ satisfy the query. For example, in the case when the query object $q$ is ``far'' from $p$ in the sense that it is farther from $p$ than any object in $S$, i.e., $d(q,p) > d(p,p_f)$, then $d(q, p) > d(p,p_f) + r$ implies $d(q,x) > r, \: \forall \, x \in C$:
\begin{quote}
\begin{proof} Let $ d(q, p) - d(p,p_f) > r$, so
\begin{flalign*}
&d(q, x)\! +\! d(x, p)\! -\! d(p, p_f) > r ~\text{({triangle inequality})}\\
&d(q, x) > r + [d(p,p_f) - d(x, p)] \\
&d(q, x) > r ~~~~ \text{({since $d(p,p_f)\! -\! d(x, p)$ is nonnegative})} \\
\qedhere
\end{flalign*}
\end{proof}
\end{quote}
An analogous result holds for the case of a ``near'' query object, i.e., $d(q,p) < d(p,p_n)$: if $d(q,p) < d(p, p_n) - r$ then no object in $C$ can satisfy the query.
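These two tests reduce to a single boolean predicate over scalars. The sketch below is our illustration (the function and variable names are hypothetical, not taken from the paper): it returns true when the single computed distance $d(q,p)$ proves that no object at distance between $d(p,p_n)$ and $d(p,p_f)$ from $p$ can lie within distance $r$ of $q$.
\begin{verbatim}
function prune = canPruneSubtree(dqp, r, dNear, dFar)
% dqp   : d(q,p), the one distance calculation performed for this test
% dNear : d(p,p_n), distance from p to its nearest object in C
% dFar  : d(p,p_f), distance from p to its farthest object in C
% The query interval [dqp-r, dqp+r] must intersect [dNear, dFar];
% otherwise no object of C can lie within distance r of q.
prune = (dqp > dFar + r) || (dqp < dNear - r);
end
\end{verbatim}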
It is convenient to view $d(p,p_n)$ and $d(p,p_f)$ as defining a distance interval $\mbox{DI}(p, C) \equiv [d(p,p_n), d(p,p_f)]$ that bounds the distances from $p$ to the objects in $C$ and which may permit an immediate determination (via the triangle inequality) that the query ball {\em cannot} contain any objects in $C$. In other words, either of two simple inequality tests may establish that further searching is unnecessary. Various such inequalities have been examined in the literature and provide the standard means for exploiting metric assumptions for pruning searches of objects in large datasets. The limitation of conventional metric search trees during the query process is that information obtained from the computation of a distance calculation between the query object and an object stored at a given node is only exploited for pruning purposes {\em at that node}. In the following section we describe a means for augmenting the search tree with additional information so that the pruning step at the current node can exploit all previous distance calculations performed along the path from the root to the current node. \subsection {Cascaded Metric Trees} \IEEEPARstart{C}{onsider} a dataset that is recursively decomposed into a metric search tree with one or more objects stored at each node. Purely for convenience of exposition we will assume a balanced binary tree with a single object stored at each node. We refer to the structure as a {\em metric} search structure because it is further assumed that any search algorithm must traverse the tree based on information obtained from black-box metric distance calculations involving objects stored along the sequence of visited nodes. Ideally, this information will permit the search space (i.e., the amount of the tree that must be traversed) to be significantly reduced so that the total number of performed distance calculations does not greatly exceed the number of objects that are found to satisfy the query. \begin{figure*} \caption{In addition to an object, each CMT node contains distances between its object and all ancestor objects. (These distances are pre-computed during construction of the tree.) Because the query object has already computed distances between itself and the ancestor objects of the current node at depth $d$, an extra $d$ triangle-inequality tests can be performed without need for any extra distance calculations.} \label{fig:tree} \end{figure*} The principal feature of the cascaded metric tree (CMT) is that each node contains not only an object, but also stores the distance between that object and each object along the path from that node to the root of the tree (see Fig.\,\ref{fig:tree}). Because a distance calculation must have been performed between the query object and the {\em ancestor} objects visited along the path to the current node, all the information necessary to perform triangle-inequality pruning tests with respect those ancestor objects is available with no need for additional distance calculations. In other words, the number of ``free'' pruning tests at each node increases linearly with depth. More fundamentally, each distance calculation performed at a node during the search process is exploited in a pruning test at each subsequent node along that path in the tree, as opposed to a single pruning test at the current node. 
Importantly, the CMT construction algorithm performs only $O(N\log N)$ total distance calculations, which is the same as that required for conventional linear-space metric search trees, despite the fact that CMT stores $O(\log N)$ ancestral distances at each node. This is possible because the CMT construction process is able to exploit cascaded distance information in a manner that is analogous to the search process. To formalize the specification of the tree and search algorithms, we define the following notation for information defined for or maintained by the query algorithm: \begin{itemize} \item $q$ is the query object. \item $r\geq 0$ is the query radius. A value of $\infty$ may be implicitly or explicitly defined for the case in which no bound is placed on the size of the query radius. \item $k\geq 1$ is an integer limit on the size of the returned query set of nearest objects. A value of $\infty$ (or sufficiently large number greater than or equal to the size of the dataset) may be implicitly or explicitly defined for the case in which no limit is placed on the number of objects that may be returned. \item {\em depth} is the number of levels the current node is from the root. Thus, {\em depth}\,-\,1 is the depth of the parent node, {\em depth}~-~2 is the grandparent node, and the ancestor at level {\em depth} is the root node of the overall tree. The value of {\em depth} is dynamically maintained during the search/traversal process and thus is not a query parameter. \item {\em dist}[j] is the distance from $q$ to the $j$th ancestor of the current node $1 \leq j <$\;{\em depth}. This indexed list/vector is dynamically maintained during the search/traversal process and thus is not a query parameter. \end{itemize} And we define the following notation for information maintained at each CMT node: \begin{itemize} \item {\em near}[i] and {\em far}[i] provide near and far distance interval information with respect to the $i$th ancestor of the node. For example, {\em near}[1] is the distance of the object in the subtree rooted at the current node (which includes $p$) that is nearest to the node-object in the parent, and {\em far}[1] is analogously defined with respect to the farthest object in the subtree from the parent object. More generally, distance interval information with respect to the $i$th ancestor of the current node is similarly available for each index $1 \leq i <$\;{\em depth}. The values {\em near}[0] and {\em far}[0] provide the near and far distances to objects in the subtree beneath the current node, i.e., excluding the object $p$ at the current node. \item {\em count} provides the number of objects in the subtree of the current node. This is necessary to satisfy {\em counting} queries, which ask for the number of objects that satisfy a given query without actually retrieving those objects. More specifically, if the query volume completely contains the bounding volume of objects in the subtree of a given node (which can be determined from $p$ and {\em far}[0]), then the number of satisfying objects is immediately given by {\em count} without need to traverse the subtree. This permits counting queries to be satisfied with complexity sublinear in the number of objects that satisfy the query, e.g., $O(1)$ if the query volume is sufficiently large to enclose all objects in the tree. \item {\em left} and {\em right} are the left and right child nodes (subtrees) of the current node. 
\end{itemize} If the query interval/ball fails to intersect the distance interval/annulus for any ancestor $i$ then the entire subtree (including the node-object $p$) can be pruned from further consideration. This provides $O(\log N)$ separate pruning tests, each of which may immediately terminate search of the current subtree. More specifically, the number of pruning tests increases from $O(1)$ at the root node to $O(\mbox{\em height})$ at leaf nodes, which is $O(\log N)$ if the tree is balanced. \begin{table}[htb] \centering \caption{Information Relating to the Current Query} \label{table:queryparms} \begin{tabular}{c|l} \hline $q$\vphantom{\Huge x} & The query object: goal is to find the $k$-nearest\\ ~ & objects within distance $r$ of $q$.\\ $r$ & The maximum allowed radius (distance) from the\\ ~ & query object $q$. \\ $k$ & The maximum allowed number of objects nearest\\ ~ & to $q$ to be returned. \\ ~ & ~~~({\em Below is information maintained during}\\ ~ & ~~~~{\em execution of each query}) \\ {\em depth} & The number of levels the current node is\\ ~ & from the root. \\ {\em dist}[j] & The distance from $q$ to the node-object\\ ~ & of the $j$th ancestor of the current node. \\ \hline \end{tabular} \end{table} \begin{table}[htb] \centering \caption{Information Stored at Each Node of the CMT} \label{table:treeparms} \begin{tabular}{c|l} \hline $p$\vphantom{\Huge x} & The node-object, i.e., the object stored at\\ ~ & \,the node. \\ {\em near}[i],\vphantom{\Huge x} & The min and max distances from the $i$th\\ {\em far}[i] & ancestor object to the objects in the subtree.\\ ~ & ({\em Index $0$ refers to min and max distances from}\\ ~ & \,{\em $p$ to objects in its subtree, excluding $p$ itself}).\\ {\em count}\vphantom{\Huge x} & The number of objects in the subtree.\\ {\em left, right}\vphantom{\Huge x} & The left (right) child/subtree. \\ \hline \end{tabular} \end{table} It should be recognized that a conventional metric range/ball query (i.e., asking for all objects in the tree that are within a specified radius $r$ of a given query object $q$) can be parameterized with $k$ implicitly equal to infinity. Similarly, a $k$-nearest query can be parameterized with $r$ implicitly equal to infinity. More generally, nontrivial values can be specified for both $r$ and $k$. This allows the use of $r$ so that the return set from a $k$-nearest query does not include elements that are of an impractically-large distance $r$ from $q$. Similarly, $k$ can be specified to ensure that an impractically-large number of objects within distance $r$ of $q$ are returned by a ball query. In practice, the efficiency of a $k$-nearest query can often be improved if a nontrivial upper-bound radius $r$ can be provided to guide the search. The benefits of this {\em range-bounded} near-neighbor search has long been recognized \cite{uhlmannImp1991,uhlmannDAconf91,10.5555/338219.338273}. We further propose additional parameters $nv$ and $dc$ to limit the total number of nodes visited and the total number of distance calculations performed, respectively, during the query. These additional parameters can be used impose rigid limits on the query complexity and thus would provide only approximate query solutions based on a limited search of the tree. However, we will not exploit the $nv$ and $dc$ parameters here because our focus in this paper is on {\em exact} metric query satisfaction. More generally, the node-object $p$ in each node can be replaced with a list of $m$ node-objects $p[j]$, $1<j\leq m$. 
Thus each $p[j]$ provides a distinct {\em near/far} distance interval for each descendant, which means that two indices are required: {\em near}$[i][j]$ and {\em far}$[i][j]$. Note that even in the case that $m$ is chosen to be the same value $c$ for every node, or for every node at a given level of the tree, an explicit integer is required for each node to accommodate the case of a number of objects not equal to $c$ at leaf nodes. Note also that in addition to maintaining ancestor distance information with respect to $q$, the number of node-objects at each ancestor must also be maintained. \begin{table}[htb] \centering \caption{Information Stored at Each Node of General CMT} \label{table:gcmtparms} \begin{tabular}{c|l} \hline $m$\vphantom{\Huge x} & Strictly positive integer giving the number\\ ~ & of node-objects at the node.\\ $p[j]$\vphantom{\Huge x} & The list of $m$ node-objects, $1\leq j \leq m$.\\ {\em near}$[i][j]$,\vphantom{\Huge X} & Min and max distances from the $i$th ancestor\\ {\em far}$[i][j]$ & object to the objects in the subtree.\\ {\em count}\vphantom{\Huge x} & The number of objects in the subtree.\\ {\em left, right}\vphantom{\Huge x} & The left (right) child/subtree. \\ \hline \end{tabular} \end{table} \noindent The generalization to $m>1$ objects per node provides a complex tradeoff involving more space and more distance calculations per node in return for a multiplicative increase in pruning tests per node\footnote{In fact, the number of distance calculations per node can be varied, e.g., more at nodes near the top of the tree and decreasing to zero at leaf nodes}. At the opposite limit, a leaf-oriented CMT (or LCMT) can be defined with {\em zero} objects stored at each interior node, thus offering a means to reduce the overall number of distance calculations if certain assumptions are satisfied. While these generalizations offer vast opportunities for application-specific tailoring, our stated goal for this paper is to focus on core performance without any discretionary variables. Therefore, we will assume $m=1$ and leave the examination of alternatives for future work. Regarding test results, our principal measure of performance will be the total number of distance calculations performed because those calculations are fundamental to the problem and are expected to dominate the overall running time for most nontrivial metrics. Although our tests will also include information about the number of nodes visited, we note that this is an unreliable measure that can easily be ``gamed'' simply by front-loading simple termination tests that cannot actually reduce the number of distance calculations -- {\em and may even increase the overall computational overhead} -- but can reduce node visitations at the frontier of the query traversal by as much as a factor of two. In other words, the overall computation cost may be increased while ``{\em number of nodes visited}'' as a performance metric spuriously suggests otherwise. (We note that it may be worthwhile to reduce node visitations in some contexts, e.g., to reduce external memory accesses, but that choice is available for any tree-type data structure. In many respects it is analogous to increasing the branching factor of a tree to compress its height in order to optimize an application-specific utility function.) 
To summarize, the focus of this paper is the method of {\em metric cascading}, which involves the storing of ancillary information at each node to permit distance calculations performed at ancestor nodes during a search to subsequently be exploited for potential pruning of the search. In the most general formulation this information grows linearly with the depth of each node in the cascading metric tree (CMT). Thus, under our assumptions with $N=|S|$, the total space of the CMT increases from $O(N)$ to $O(N\log(N))$. This extra information consists of the {\em near} and {\em far} distances to objects in the tree rooted at each node with respect to each of the objects stored in the {\em ancestor} nodes on the path from the root to the node. This means that every distance calculation performed by the search algorithm prior to the current node can be exploited for pruning purposes at the current node. This represents a significant departure from traditional metric search structures which do not exploit any information from distance calculations performed at previously-visited nodes. This will be demonstrated respectively for range and $k$-nearest neighbor (kNN) in the next two sections. \section{CMT Range (Ball) Queries} \IEEEPARstart{T}{he} basic algorithm for satisfying a range query on a metric search tree consists of determining whether the query ball intersects the bounding ball associated with the currently-visited node. If so then the subtree of that node must be searched, otherwise it can be eliminated (pruned) from further search. In a conventional search tree, the intersection test is a simple application of the triangle inequality based on the distance of the query object to the node object, the query radius $r$, and the bounding radius for the node. What the CMT provides is the distances between the current node object and the objects in all of its ancestral nodes. In its traversal to the current node, the search process has performed distance calculations between the query object and those ancestral objects. If the results of those distance calculations have been accumulated, then each can be exploited to provide an independent triangle-inequality pruning test. The details of the basic CMT range query algorithm are given in Appendix\,\ref{app:BasicRQ}. The adjective {\em basic} is applied because it does not fully exploit all available opportunities for pruning. Specifically, the stored bounding radius at each node defines a metric ball within which all objects in the subtree of the node are contained. The pruning test of the simple range query algorithm checks whether the query volume intersects the bounding volume of the node and prunes the search if it does not. However, if the query volume {\em encloses} that bounding volume, then it can be concluded that all objects in the subtree must satisfy the query. This means that a {\em collection/aggregation} operation can be performed to add all of the objects in the subtree to the retrieval set without need for any distance calculations with the query object. Details of the full CMT range query algorithm with collection are provided in Appendix\,\ref{app:collectRQ}. An extreme example of the benefit provided by collection is the case in which the search algorithm can determine at the root node that all objects in the tree are contained within the query volume. In this case a single distance calculation is sufficient to identify that all objects in the tree satisfy the query. 
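The enclosure test that triggers collection is a one-line consequence of the triangle inequality. Let $a_i$ denote the node-object of the $i$th ancestor of the current node (with $i=0$ denoting the current node-object $p$), and let {\em far}$[i]$ be the stored maximum distance from $a_i$ to the objects in the current node's subtree. If $d(q,a_i) + \mbox{\em far}[i] \leq r$ for any $i$, then every object $x$ in the subtree satisfies
\[
d(q,x) \;\leq\; d(q,a_i) + d(a_i,x) \;\leq\; d(q,a_i) + \mbox{\em far}[i] \;\leq\; r ,
\]
and can therefore be added to the result set without any further distance calculations.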
Collection is particularly effective when the object stored at each CMT node is selected to be a radius-minimizing centroid of the objects in its subtree\footnote{The CMT construction algorithm used for our tests (Appendix\,\ref{app:bom}) randomly selects the object stored at each node from the set of objects comprising its subtree, whereas a radius-minimizing centroid would clearly be a superior choice to maximally exploit collection. However, our test results demonstrate it to be effective because it is an element of the set of objects in the subtree. If the object were not a member of the set then the bounding radius required to contain the set would likely be impractically large to effectively support collection.}. It can be expected that collection will most likely be triggered at nodes lower in the tree that have relatively smaller bounding distances, or for queries with relatively large search radii. \subsection{Collection/Aggregation for Counting Queries} In addition to its use for more efficiently satisfying range queries, collection can also be applied to calculate only the {\em size} of the return set for a given query without actually performing retrieval of the objects. Specifically, if each node stores an integer giving the size of its associated subtree, then a {\em counting query} can be performed in which only the {\em number} of objects satisfying the query is computed without returning that set of objects. For example, instead of performing a collection operation at a given node, the number of objects in the subtree is simply added to the count without need to even traverse the subtree. Counting queries are important in the context of interactive data analysis applications to permit analysts to refine their queries to guarantee that subsequent retrieval queries have focused return sets of manageable size. To be effective, however, results of all distance calculations must be maintained in a hash table (or other kind of efficient map \cite{Cormen2001introduction}) to avoid redundant calculations during subsequent queries within the interactive session for a given query object. Collection-enhanced query satisfaction also offers potential benefit in applications for which the distance function used for queries is actually a surrogate for a far more complex metric or in place of a measure of similarity for which a tight equivalent distance function cannot be found. This is common in domains involving protein and biosequence objects for which the most explicit and intuitively understood models for assessing {\em similarity} may not translate to a measure of distance, i.e., of {\em dissimilarity}, that satisfies the metric conditions necessary for the application of efficient metric search structures. In such cases a computationally expensive surrogate distance function may be applied as a culling or {\em gating} strategy to retrieve a superset of candidate objects to which a more complex and even more computationally expensive measure of similarity or dissimilarity will subsequently be applied. In this context it may be expected that a significant fraction of the dataset may be retrieved, but the goal is to retrieve them as efficiently as possible before the ``true'' measure is applied to obtain the desired set of objects. What is critical to note is that in applications of this type there is no subsequent use made of the distances calculated using the surrogate distance function during the search process, so the savings obtained from collection are achieved at no cost. 
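Returning to counting queries, the mechanism can be summarized by the following illustrative C++ fragment. It is a simplified sketch rather than the reference implementation: for brevity it applies only the node's own distance interval (index $0$), whereas the full CMT version would also apply the ancestral pruning and collection tests of the appendices. The member names follow the sketch given earlier, the scratch variable \texttt{pd} is a placeholder, and \texttt{count} is assumed to include the node-object itself.
\begin{verbatim}
#include <cstddef>

// Count the objects within distance r of q, without retrieving them.
template <typename Node, typename Object, typename DistFn>
std::size_t countWithin(const Node* n, const Object& q, double r, DistFn dist) {
    if (n == nullptr) return 0;
    double d = dist(q, n->p);                      // one distance calculation
    if (n->left == nullptr && n->right == nullptr) // leaf: only the node-object
        return (d <= r) ? 1 : 0;
    if (d + n->farDist[0] <= r)                    // query ball encloses the subtree:
        return n->count;                           //   add the whole count, no traversal
    std::size_t c = (d <= r) ? 1 : 0;              // does the node-object itself qualify?
    double pd = 0;                                 // pruning distance w.r.t. [near, far]
    if (d < n->nearDist[0])      pd = n->nearDist[0] - d;
    else if (d > n->farDist[0])  pd = d - n->farDist[0];
    if (pd <= r) {                                 // subtree may still contain matches
        c += countWithin(n->left,  q, r, dist);
        c += countWithin(n->right, q, r, dist);
    }
    return c;
}
\end{verbatim}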
We note that if distances to retrieved objects are required then it may seem that collection cannot be productively applied. However, the aggregation of objects in a subtree, followed by distance calculations performed as a post-processing step, is significantly more efficient than incurring the overhead of a full search traversal of the subtree. In addition, the post-calculation of distances with respect to the query object is trivially parallelizable, whereas the process of performing distance calculations during the full traversal process is not.

\section{CMT Range Query Tests}
\IEEEPARstart{I}{n} this section we examine CMT performance on range queries to assess the benefits of cascading\footnote{The tree construction algorithm, given in Appendix\,\ref{app:bom}, produces a balanced binary tree of height $\lceil\log_2 N\rceil$.}. We do this by comparing with a variant of the CMT range query algorithm in which pruning is limited only to ancestral information up to the parent node, {\em CMT-1}, and with a conventional linear-space metric search structure, referred to as {\em Baseline}, which does not maintain any ancestral information and serves to represent the performance of prior linear-space metric search structures (e.g., metric trees \cite{Uhlmann1991}, VP trees \cite{Yianilos1993}, and their many variants \cite{sametbook})\footnote{A C++ reference implementation of all algorithms discussed in this paper, including the Baseline search structure and algorithms, is provided at https://github.com/ngs333/CMT.}.

\begin{figure*}
\caption{{\em Range/Radius Search Performance of CMT for Euclidean and Edit-Distance Metrics}}
\label{test:rqs}
\end{figure*}

Our first battery of tests is performed on datasets of $10^7$ uniformly-distributed Euclidean-space objects in $3$ and $10$ dimensions. Figures \ref{test:rqradius3d} and \ref{test:rqradius3dz} show the fraction of objects for which distance calculations are performed as a function of query radius/volume. As the query radius increases, the number of distance calculations can be expected to increase until the query volume becomes so large that collection begins to dominate (i.e., an increasing number of objects can be identified as satisfying the query without need for individual distance calculations), and this is clearly seen. CMT performs roughly half as many distance calculations as Baseline across most of the range of possible query radii, and the performance of CMT-1 is roughly between that of CMT and Baseline. The performance of CMT-1 indicates that only one generation of ancestral information is sufficient to provide more than half of the pruning power of CMT in this case.

Figures \ref{test:rqradius10d} and \ref{test:rqradius10dz} show the results of analogous tests in $10$ dimensions. Because the intrinsic difficulty of search increases with dimensionality, it is not surprising to see that the advantage of CMT over Baseline increases from a factor of $2$ to a factor of $5$ reduction in needed distance calculations. On the other hand, uniformly-distributed objects in Euclidean space make the problem relatively easy in the sense that it is amenable to effective approximate search methods based on grid or orthogonal range-search structures such as the kd-tree. This may partly explain why the performance of CMT-1 degrades less relative to that of CMT when the dimensionality is increased from $3$ to $10$.
Figures \ref{test:rqsprot} and \ref{test:rqsprotz} show results for edit-distance (more specifically Levenshtein distance \cite{10.1093/bioinformatics/btw753,10.1145/375360.375365}) range queries on the 2021 SwissProt database of $564$K protein sequences \cite{swissprot}, with an average protein length of 360 amino acids. Figure \ref{test:rqsprot} shows that the performance of CMT-1 is degraded significantly relative to CMT and approaches that of Baseline. This tends to suggest that CMT-1's lack of cascaded information beyond the parent significantly limits its pruning power, i.e., the value of the pruning information provided by distant ancestors tends to increase with problem difficulty.

Figures \ref{test:rqradius3d}, \ref{test:rqradius10d}, and \ref{test:rqsprot} show the power of collection for large retrieval sets due to large query volumes. Specifically, the number of distance calculations eventually plateaus and then diminishes with increasing search radius. Although conventional metric search structures examined in the literature have not exploited collection capability, we have equipped the Baseline range search algorithm to exploit collection. Without it, the number of distance calculations increases exponentially with query radius and cannot be meaningfully graphed against searches performed with collection.

\section{{\em k}-Nearest Neighbor (kNN) Queries}
\IEEEPARstart{T}{he} $k$-nearest neighbor query (or $K(q,k)$) asks for the $k$ objects in a dataset $S$ that are nearest to a given query object $q$. Unlike range queries, for which the order of tree traversal has no effect on performance (i.e., the number of required distance calculations and/or node visitations), the ordering of nodes visited for kNN queries has a substantial effect. This is because the bounding radius for the currently-identified nearest $k$ objects during the search imposes a pruning constraint on how close future objects must be to $q$ in order to displace the farthest element of the current set of $k$ nearest objects. Thus, the sooner a ``good'' set of $k$ objects can be found, the sooner non-candidates can be pruned.

A priority queue provides precisely the needed means for steering the sequence of nodes visited to those with higher likelihood of containing objects nearer to $q$. This is achieved by computing a ``priority'' for each visited node and placing the node (with its priority) into the queue. The next node to be visited is then obtained as the node with highest priority in the queue. The resulting performance of the overall algorithm thus depends, of course, on the formula used to determine the priorities. Given the distance $d(p,q)$ between query object $q$ and node object $p$, and the bounding radius for $p$ that contains all objects in its associated subtree, a triangle inequality calculation can produce a lower bound on the distance to the nearest object in the subtree. Alternatively, a lower bound can be obtained for the distance of the nearest possible object outside the bounding radius of the node\footnote{CMT nodes actually contain more precise information in the form of both the minimal bounding radius for objects within the subtree {\em and} the maximal radius that does not include another object in the dataset. Appendix\,\ref{app:kNNquery} includes details for how this information can be exploited to achieve tighter lower-bound distances to nearest objects depending on whether $q$ is within or outside of the node's bounding volume.}.
In the case of non-cascaded metric search trees, a lower-bound distance between $q$ and the nearest neighbor in the subtree of the current node can be directly calculated. It is zero if $q$ is within the node's bounding volume; otherwise it is the distance from $q$ to the maximum bounding volume that is guaranteed to not include any objects within the bounding volume of the node. In either case, the computed value is the obvious choice for use as a search priority. What is critical to note is that this leads to what is essentially a proximity-based greedy search of the tree.

For CMT, by contrast, multiple lower-bound distances can be computed: one from the current node, and one from use of the triangle inequality with the available distances $d(q,a_i)$, $d(a_i,p)$, and $d(q,p)$ for each ancestor object $a_i$. Because each represents a lower bound, the {\em largest} of the lower bounds represents the most informative estimate of the smallest distance to an object in the subtree, and it is used as the priority for the node by the CMT kNN query algorithm (given in Appendix\,\ref{app:kNNquery}). In other words, the priority function exploits information available in the CMT that is not available to structures of linear $O(N)$ size. This permits the kNN search algorithm to pursue a risk-averse ordering of nodes rather than a greedy proximity-focused ordering.

\subsection{Range-Optimal kNN Search}
In the computer science literature, many data structures for satisfying {\em orthogonal} (also known as {\em coordinate-aligned} or {\em box}) queries have been developed that are amenable to proving rigorous worst-case complexity bounds on the search time. For example, the optimal linear-size range-search data structure \cite{bentley1975multidimensional} can provably satisfy multidimensional range queries in $O(N^{(1-1/d)}+m)$ time, where $d$ is the dimensionality of the space and $m$ is the number of retrieved objects. Unfortunately, few rigorous claims can be made about the performance of metric search structures without reference to properties of the specific metric that will be used. This presents a challenge to metric search as a research area because performance can typically only be expressed in the form of limited empirical comparisons to other methods.

We now propose a step forward in this regard by defining the optimality of a given kNN algorithm for a particular metric search structure in terms of its performance relative to a range query (without collection) on that search structure with assumed knowledge of the minimum-size bounding ball containing the $k$-nearest neighbors.

\begin{quote} {\bf Definition}: A $k$-nearest neighbor search algorithm is {\em range-optimal} if its empirically assessed performance (e.g., in terms of distance calculations or number of nodes visited) approaches that of the best range query algorithm for the search structure of interest when given the smallest search radius enclosing the $k$-nearest objects. \end{quote}

The rationale for this definition is that the satisfaction of a range query is not sensitive to the order in which the satisfying objects are identified during the search process. More specifically, whether or not a particular object satisfies the query can be determined in isolation (independently) without regard to knowledge about other objects in the dataset. In other words, the sole information available for pruning is given by the radius of the range query.
By contrast, a kNN search initially has no pruning information and therefore must dynamically {\em acquire} it in the form of a variable bounding radius at each point during the search process based on the objects examined up to that point. This means that an optimally-defined range query should represent a lower bound on the best performance possible for satisfying a kNN query\footnote{Said another way, a kNN algorithm that outperforms a given range query algorithm with the optimal $k$-enclosing radius should be interpreted as evidence {\em that the range query algorithm is suboptimal}.}. It should be noted that optimality as defined here is search-structure-specific in that kNN performance is bounded by the best possible range-search algorithm {\em for that structure}. As an example, an unstructured dataset of size $N$ offers range-optimality in the trivial sense that both kNN and range searches must perform exactly $N$ distance calculations during an exhaustive brute-force examination of all objects in the dataset.

The value of the range-optimal kNN criterion is realized when assessing the relative performance of different priority-search methods for satisfying kNN queries on sophisticated data structures. Specifically, if a kNN method is found that achieves the same performance as a minimum-radius range query for the $k$-nearest neighbors, no other method can possibly perform better. During the kNN tests discussed in the next section, we assessed the range-optimality of CMT and Baseline. We found CMT to be 99\% range-optimal, which means that improvements to the CMT kNN search algorithm can at most reduce the number of distance calculations and/or node visitations by a small fraction. Baseline was similarly found to be between 97\% and 99\% range-optimal. This implies that empirically-observed performance advantages of CMT for kNN queries cannot be attributed to possible use of a highly suboptimal kNN query algorithm for the Baseline search tree.

\subsection{Range-Bounded kNN Queries}
As discussed earlier, the query model can be extended to a general pruning function over any number of application-relevant parameters, e.g.,
\begin{equation} f(r,k,nv,dc,t,...) ~<~ \mbox{\em threshold} \end{equation}
where in isolation a bound $r$ defines an ordinary range query; $k$ defines a bound on the number of returned nearest neighbors; $nv$ defines a bound on nodes visited; $dc$ defines a bound on distance calculations; $t$ defines a bound on total execution time; and so on. Various multi-criteria pruning methods have been examined, often termed {\em approximate} queries when the criteria do not guarantee that the $m$ returned objects are the {\em nearest} $m$ objects \cite{Mao2006MoBIoSI,4498354}.

Of particular practical interest is the satisfaction of range-bounded kNN queries of the form $K(b,k)$, where $b$ defines a bound on the maximum range/radius and $k$ defines the number of nearest neighbors to be returned from within that bounded range. Its practical significance derives from the fact that a standard kNN query can be thought of as a type of dynamic range query in which the range is progressively reduced during evaluation of the query based on the bounding radius of the current set of $k$-nearest neighbors. This means that the effective bounding radius is infinite until a first set of $k$ objects has been accumulated, and the maximum radius of those initial $k$ objects is likely to be much larger than that of the final returned set of $k$-nearest objects.
In applications for which it is known that objects beyond some radius are either not of interest, or that the radius is virtually guaranteed to contain the nearest $k$ objects, the bound $b$ can provide substantial pruning capability early in the query process. As will be demonstrated in the next section, this can lead to orders-of-magnitude improvements in performance; a minimal sketch of the underlying bounded-radius mechanism is given at the end of that section.

\begin{figure*}
\caption{{\em kNN Search Performance for Euclidean and Edit Distance Metrics}}
\label{fig:knnsearch}
\end{figure*}

\section{CMT {\em k}-Nearest Neighbor (kNN) Tests}\label{sec:kNN}
\IEEEPARstart{I}{n} this section we examine the satisfaction of kNN queries for CMT, CMT-1, and Baseline. Specifically, Figure\,\ref{fig:knnsearch} shows the percentage of the dataset examined (i.e., requiring distance calculations) as a function of the number $k$ of returned nearest neighbors. To more realistically model practical usage, query objects were not chosen from the actual dataset. Thus, $k$=$1$ does not necessarily return an object of distance zero from the query object.

Figure\,\ref{test:knn3d} shows results for Euclidean kNN queries on a dataset of $10^7$ uniformly-distributed points in 3 dimensions for $k$ from 1 to 1000. At $k$=$100$ it can be seen that Baseline performs over 5 times as many distance calculations as CMT, and over 3 times as many as CMT-1. As $k$ approaches 1000, the performance of all three methods degrades rapidly. The relative behavior of the three methods in 10 dimensions, shown in Figure\,\ref{test:knn10d}, is qualitatively similar to that in 3 dimensions, but with almost two orders of magnitude more distance calculations per corresponding value of $k$.

Figure\,\ref{test:knnsprot} shows edit-distance kNN results for the same $564$K protein sequence dataset as Figure\,\ref{test:rqs}. However, the discrete nature of edit distance does not admit a smooth functional relationship with $k$: each increment of distance leads to an integral increase in the number of sequences, such that there are typically many more than $k$ objects of strictly-equal distance to the query object. This accounts for the nonsmooth graphs in Figure\,\ref{test:knnsprot}.

Another issue that arises with kNN queries in practical applications is that the choice of $k$ may dramatically affect query complexity when one of the $k$ objects is much farther from the query object than the others. For example, the SwissProt dataset of protein sequences has high intrinsic dimensionality due to wide variation in sequence lengths, such that $k$=$10$ can require $60\%$ of the dataset to be examined. In practice, however, returning sequences that are more than a factor of $2$ larger or smaller than the query sequence is not meaningful because the similarity is essentially equivalent to that between random sequences. Figure\,\ref{test:sprotseq} is more practically revealing by providing results for range-bounded kNN queries, $K(b,k)$, with $b$ determined as a percentage $p$ of the length of the query sequence. For $p$=$10$, the Baseline evaluations are reduced from about $60\%$ of the dataset size to about $35\%$, and for CMT the reduction is from about $40\%$ to $7\%$. For smaller ranges, the advantage of CMT over Baseline is even more dramatic. In the case of $p\leq 2$, for example, the number of distance calculations performed by CMT is $30$ times less than that of Baseline.
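The bounded-radius mechanism referred to above can be sketched as follows. This is an illustrative C++ fragment, not the reference implementation; the type name \texttt{KNearestSet} and its members are placeholders, and only distances (not the objects themselves) are tracked for brevity. A standard kNN query corresponds to an infinite initial radius, while a range-bounded query $K(b,k)$ simply starts the radius at $b$, enabling pruning before the first $k$ candidates have been accumulated.
\begin{verbatim}
#include <cstddef>
#include <limits>
#include <queue>

struct KNearestSet {
    std::size_t k;
    double radius;                        // current effective search radius
    std::priority_queue<double> kBest;    // max-heap of the k best distances so far

    explicit KNearestSet(std::size_t k_,
        double bound = std::numeric_limits<double>::infinity())
        : k(k_), radius(bound) {}

    void offer(double d) {                // candidate object at distance d from q
        if (d > radius) return;           // cannot displace any current candidate
        kBest.push(d);
        if (kBest.size() > k) kBest.pop();           // drop the (k+1)-th farthest
        if (kBest.size() == k) radius = kBest.top(); // shrink the pruning radius
    }
};
\end{verbatim}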
\section{Discussion}\label{sec:discussion}
\IEEEPARstart{I}{n} this paper we have introduced the notion of cascading as a means for permitting metric search algorithms to exploit more pruning information from distance calculations performed along each path of a search tree. The {\em cascaded metric tree} (CMT) is a concrete realization of this concept, and we have provided test results showing the performance advantages it provides for range (ball) queries and $k$-nearest neighbor (kNN) queries. Our tests of the benefits provided strictly by cascading are novel in that we compare against both a Baseline metric search tree structure, which we believe serves as a fair proxy for prior-art methods, and a version of CMT, referred to as CMT-1, with only one level of cascading\footnote{It would not have been inaccurate, though potentially confusing, for Baseline to have been referred to as CMT-0 in the sense that it does not provide any ancestral pruning information.}. These results give us confidence that independent comparisons of prior-art methods to the CMT will corroborate its significant advantages.

A contribution of this paper in the context of range queries is an emphasis on the importance of {\em collection} as being not only useful for reducing distance calculations needed for retrieval, but also for {\em interactive query systems}, e.g., those that use counting queries to determine the number of objects that satisfy a given query without retrieving them. Although satisfaction of isolated counting queries on metric search structures is likely to be only marginally more efficient than retrieval queries, the number of counting queries performed in interactive data analysis applications may be proportionally much higher because of their iterative-refinement use in formulating range queries with practical-sized return sets.

Another contribution of potentially broader interest is our metric-independent definition of kNN algorithmic optimality, which we refer to as {\em range-optimality}, that can be assessed for any kNN algorithm as applied with any metric search structure for any choice of metric distance function. This is a step toward establishing more rigorous and objective statements about the performance of particular metric search structures and query algorithms.

A potentially more important practical contribution of the paper is our set of range and kNN edit-distance tests on the SwissProt dataset of $564$K protein sequences. These tests remain far removed from capturing the full scope and detail of the kinds of queries needed for practical sequence analysis, but they provide strong evidence of the {\em potential} to significantly outperform optimized brute-force methods as dataset sizes continue to increase.

Future work is needed to more fully assess the query requirements for large-scale use of metric search structures for applications relating to proteomic and genomic sequence analyses. These applications often require the finding of objects that are within a specified threshold on {\em similarity} -- rather than {\em distance} -- according to a highly complex non-metric similarity function. Methods for converting similarity queries of this kind to metric distances have been examined in the biomedical literature, but they tend to achieve only a relatively loose correspondence, i.e., the resulting metric search query is expected to return a superset of the desired objects.
If this looseness proves unavoidable, highly efficient metric search algorithms may be necessary to mitigate the resulting computational overhead. Next-generation genomic and proteomic applications may also demand use of much more sophisticated metrics such as tree-edit \cite{ECIR-2013-LaitangPB} and graph-edit distance functions \cite{10.1007/978-3-540-89689-0_33, VLDB-2009-ZengTWFZ}. Tree-edit distance, which determines the number of primitive edit operations necessary to transform one tree structure to another, can be expected to require time that is cubic in the size of the compared objects, rather than the quadratic time of string-edit distance \cite{10.1145/2746539.2746612, Andoni2010PolylogarithmicAF, 10.1007/978-3-319-68474-1_11, Pawlik:2011:RRA:2095686.2095692}. Graph-edit distance, by contrast, has been proven to be NP-complete \cite{abuaisheh:hal-01168816}, and thus can be expected to demand computation time that is exponential in the size of the molecular structures contained in the search structure. Consequently, approximation methods may be necessary for both tree and graph-edit distance calculations \cite{DBLP:series/acvpr/Riesen15}, despite the possibility that query results (e.g., obtained using CMT) may not equate to what would be obtained using the true distance functions.

In summary, we believe the results in this paper serve to advance the practical application of metric search algorithms to a wider array of real-world problem domains. Of particular interest are the enormous datasets presently being generated in biomedical, astronomical, and experimental physics applications. In such applications, reducing the number of distance calculations by only a factor of $4$ or $5$ can reduce the total processing time for a large data analysis program from a month down to a week.

\appendices
\section*{APPENDICES}
Sufficient detail is provided in the main text to understand and implement the CMT and its associated search algorithms. For completeness, however, we provide the following appendices with explicit pseudocode based on our C++ reference implementation.

\section{Basic CMT Range Query Algorithm}\label{app:BasicRQ}
Pruning operations based on metric triangle-inequality tests can be neatly encapsulated for implementation using the concept of {\em pruning distances}. Consider the set $C$ of all objects in a node's subtree, and let {\em near} and {\em far} be the distances from the node object to the nearest and farthest objects in $C$. The pruning distance is the amount by which the distance from the query object to the node object lies outside the interval $D \coloneqq \left[\mbox{\em near},\,\mbox{\em far}\right]$, and it is zero when that distance lies within the interval (see Algorithm\,\ref{alg:prunedist}). When a node is visited during the search, if the search radius is less than the pruning distance, the node's subtree can be excluded from the search. We extend the concept for use with ancestral distance intervals and define the {\em maximum pruning distance} as the maximum of the pruning distances over the ancestral distance intervals.
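Equivalently, in symbols: writing $d_{qi}$ for the (already computed) distance from the query object to the node-object of the $i$th ancestor (with $i=0$ referring to the current node itself), the pruning distance with respect to interval $i$ and the maximum pruning distance are
\[
\mathit{PD}_i \;=\; \max\bigl(\,\mbox{\em near}[i] - d_{qi},\;\; d_{qi} - \mbox{\em far}[i],\;\; 0\,\bigr),
\qquad
\mathit{maxPD} \;=\; \max_{i} \mathit{PD}_i ,
\]
and, by the triangle inequality, every object $x$ in the subtree satisfies $d(q,x) \geq \mathit{maxPD}$. (The symbols $d_{qi}$, $\mathit{PD}_i$, and $\mathit{maxPD}$ are introduced here only as shorthand for the quantities computed by the pseudocode below.)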
\begin{algorithm}
\SetKwFunction{PruningDistance}{\textbf{PruningDistance}}
\SetKwData{Node}{node} \SetKwData{Near}{near} \SetKwData{Far}{far} \SetKwData{DAQ}{distancePQ} \SetKwData{pruningDistance}{pruningDistance}
\SetKwBlock{Begin}{begin}{end}
\PruningDistance{\DAQ , \Node, l}
\Begin(){
\If {$\DAQ < \Node.\Near[l]$} { \pruningDistance $\leftarrow \Node.\Near[l] - \DAQ$ }
\ElseIf {$\DAQ > \Node.\Far[l]$}{ \pruningDistance $\leftarrow \DAQ - \Node.\Far[l]$ }
\Else { \pruningDistance $\leftarrow 0$ }
\KwRet{\pruningDistance}
}
\caption{\vphantom{\Huge x} Calculation of the pruning distance by the function $PruningDistance( d_{pq}, n, 0)$.}
\label{alg:prunedist}
\end{algorithm}

Algorithm \ref{alg:BasicRangeQuery} performs basic CMT range queries. Its recursive traversal of the tree visits each node at most once. Upon visiting a node, the maximum pruning distance with respect to the set of ancestral objects is determined; if this is greater than the query radius then the current search path can be terminated. If the search is not pruned at this step, then a distance calculation must be performed between the query object and the node object, which provides one more pruning test with respect to the bounding volume of the node. If the node object satisfies the query then it is added to the result set. Regardless, the distance between the query object and the node object is pushed onto the stack of ancestral distances for use in pruning tests at subsequently visited nodes along the current path in the tree, and the search proceeds recursively to the left subtree and then the right subtree.

\begin{algorithm}
\SetKwFunction{BasicRangeQuery}{\textbf{BasicRangeQuery}}
\SetKwData{Object}{object} \SetKwData{MaxPDF}{MaxPruningDistance} \SetKwData{MaxPDV}{maxPD} \SetKwData{PDF}{PruningDistance} \SetKwData{PD}{PD} \SetKwData{Stack}{stack} \SetKwData{SQ}{query} \SetKwData{Dist}{distance} \SetKwData{MDI}{MDI} \SetKwData{Node}{node} \SetKwData{Left}{left} \SetKwData{Right}{right} \SetKwData{null}{null}
\SetKwBlock{Begin}{begin}{end}
\BasicRangeQuery{\SQ, \Node, \Stack}
\Begin(){
\If { \Node is \null}{ return }
\MaxPDV $\leftarrow$ \MaxPDF(\Stack, \Node)
\If { $\MaxPDV \leq \SQ.searchRadius()$ } {
\Dist $\leftarrow \Dist(\SQ.\Object, \Node.\Object) $
\If {$\Dist \leq \SQ.searchRadius()$ }{ \SQ.addResult(\Node.\Object, \Dist) }
\PD $\leftarrow$ \PDF(\Dist, \Node, 0)
\If { $\PD \leq \SQ.searchRadius()$ }{
\Stack.push( \Dist )
\BasicRangeQuery(\SQ, \Node.\Left, \Stack )
\BasicRangeQuery(\SQ, \Node.\Right, \Stack )
\Stack.pop()
}
}
\KwRet{}
}
\caption{Basic CMT Range Query Algorithm}
\label{alg:BasicRangeQuery}
\end{algorithm}

\section{Collection CMT Range Query Algorithm}\label{app:collectRQ}
{\em Collection} is an enhancement to the basic range query (Algorithm \ref{alg:BasicRangeQuery}) with tests added to identify cases in which the query volume encloses a node's bounding volume. We define the {\em collection distance} as the distance between the query object and the node object plus the {\em far} component of the node's distance interval. We also define the {\em minimum collection distance} as the minimum of the collection distances over the ancestral distance intervals. More intuitively, the minimum collection distance reflects the intersection of the constraining bounding volumes of the ancestor objects and that of the current node.
If the query volume contains this intersection volume (i.e., if the minimum collection distance does not exceed the search radius) then all objects in the subtree rooted at the current node must necessarily satisfy the query, and therefore they may be collected into the result set without need for explicit distance calculations.

Algorithm \ref{alg:CollectRangeQuery} performs CMT range queries with collection. The algorithm is similar to Algorithm \ref{alg:BasicRangeQuery} except for the addition of a test against the minimum collection distance and a test against the collection distance. When either of these distances is within the search radius, all objects in the subtree (including the node object) are added to the result set.

\begin{algorithm}
\SetKwFunction{CollectRangeQuery}{\textbf{CollectRangeQuery}}
\SetKwData{Object}{object} \SetKwData{MaxPDF}{MaxPruningDistance} \SetKwData{MaxPDV}{maxPruningDist} \SetKwData{MinCDF}{MinCollectionDistance} \SetKwData{CDF}{CollectionDistance} \SetKwData{MinCDV}{minCD} \SetKwData{CDV}{cd} \SetKwData{PDF}{PruningDistance} \SetKwData{PD}{pruningDist} \SetKwData{Stack}{stack} \SetKwData{SQ}{query} \SetKwData{Dist}{distance} \SetKwData{MDI}{MDI} \SetKwData{Node}{node} \SetKwData{Left}{left} \SetKwData{Right}{right} \SetKwData{null}{null}
\SetKwBlock{Begin}{begin}{end}
\CollectRangeQuery{\SQ, \Node, \Stack}
\Begin(){
\If { \Node is \null}{ return }
\MaxPDV $\leftarrow$ \MaxPDF(\Stack, \Node)
\If { $\MaxPDV \leq \SQ.searchRadius()$ } {
\MinCDV $\leftarrow$ \MinCDF(\Stack, \Node)
\If { $\MinCDV \leq \SQ.searchRadius()$ } { \SQ.addSubtree(\Node, \MinCDV) }
\Else{
\Dist $\leftarrow \Dist(\SQ.\Object, \Node.\Object) $
\CDV $\leftarrow \CDF(\Dist, \Node) $
\If {$\CDV \leq \SQ.searchRadius()$ }{ \SQ.addSubtree(\Node, \CDV) }
\Else{
\If {$\Dist \leq \SQ.searchRadius()$ }{ \SQ.addResult(\Node.\Object, \Dist) }
\PD $\leftarrow$ \PDF(\Dist, \Node, 0)
\If { $\PD \leq \SQ.searchRadius()$ }{
\Stack.push( \Dist )
\CollectRangeQuery(\SQ, \Node.\Left, \Stack )
\CollectRangeQuery(\SQ, \Node.\Right, \Stack )
\Stack.pop()
}
}
}
}
\KwRet{}
}
\caption{Full range query algorithm with collection/aggregation.}
\label{alg:CollectRangeQuery}
\end{algorithm}

\section{CMT {\em k}-Nearest Neighbor (kNN) Algorithm}\label{app:kNNquery}
The CMT kNN query algorithm (Algorithm \ref{alg:kNNQuery}) uses a priority queue to determine the node visitation order. The priority queue provides the standard push() and pop() functions; the data members of each queue node are given in Table\,\ref{tab:pqnode}.

\begin{table}[h]
\centering
\caption{Data Members of Priority-Queue Node}
\begin{tabular}{r|l}
\hline
\textbf{tree node}\vphantom{\Huge x} & The \emph{tree} node associated with the queue node. \\
\textbf{parent} & The parent queue node of the queue node. \\
\textbf{distance} & The distance between the tree node object and\\
~ & the query object. \\
\textbf{priority} & Measure of likelihood that close neighbors\\
~ & will be found in the subtree of the tree node.\\
\hline
\end{tabular}
\label{tab:pqnode}
\end{table}

The max pruning distance between the tree node and the query object is used for the priority, causing the search algorithm to defer visits to nodes with greater likelihood of being pruned\footnote{This is typically done using the distance between the query object and the node object as a priority \cite{uhlmannImp1991}.}. The algorithm visits a tree node by removing a queue node from the priority queue. The tree node is pruned if its priority (i.e., max pruning distance) is greater than the search radius.
The distance between the query and node objects is then calculated. If the object is within the current range then it is added to the result set; if the size of the set then exceeds $k$, the farthest of the $k+1$ objects is removed and the minimum bounding radius of the remaining $k$ objects is updated. This bounding radius (distance) is also used to determine the max pruning distances of the two child nodes, and if they cannot be immediately pruned then they are each placed into the priority queue with their respective max pruning distances as their priorities. This means that a pruning test is applied both before a node is added to the queue {\em and} when it is later removed from the queue, where the latter test can be productive if the bounding radius of the $k$ objects has decreased in the interim.

\begin{algorithm}[h]
\SetKwFunction{kNNQuery}{\textbf{kNNQuery}}
\SetKwData{Object}{object} \SetKwData{MaxPDF}{MaxPruningDistance} \SetKwData{MaxPDV}{maxPruningDist} \SetKwData{PDF}{PruningDistance} \SetKwData{PQNode}{queueNode} \SetKwData{PQNodeType}{QueueNode} \SetKwData{PQ}{queue} \SetKwData{SQ}{query} \SetKwData{Dist}{distance} \SetKwData{MDI}{MDI} \SetKwData{Node}{node} \SetKwData{Left}{left} \SetKwData{Right}{right} \SetKwData{Priority}{priority} \SetKwData{null}{null}
\SetKwBlock{Begin}{begin}{end}
\kNNQuery{\SQ, \PQ}
\Begin(){
\While {\PQ \textbf{is not empty}} {
\PQNode $\leftarrow$ \PQ.pop
\Node $\leftarrow \PQNode.\Node$
\MaxPDV $\leftarrow \PQNode.priority $
\If { $\MaxPDV \leq \SQ.searchRadius()$ } {
\Dist $\leftarrow \Dist(\SQ.\Object, \Node.\Object) $
\If {$\Dist \leq \SQ.searchRadius()$ }{ \SQ.addResult(\Node.\Object, \Dist) }
\PQNode.dist $\leftarrow$ \Dist
\If { $\PDF(\Dist, \Node, 0) \leq \SQ.searchRadius()$ }{
\If { $\Node.\Left \neq \null$ } {
\Priority $\leftarrow$ \MaxPDF(\PQNode, \Node.\Left)
\If { $\Priority \leq \SQ.searchRadius()$ } { \PQ.push(new \PQNodeType (\Node.\Left, \PQNode , \Priority )) }
}
\If { $\Node.\Right \neq \null $ } {
\Priority $\leftarrow$ \MaxPDF(\PQNode, \Node.\Right)
\If { $\Priority \leq \SQ.searchRadius()$ } { \PQ.push(new \PQNodeType (\Node.\Right, \PQNode , \Priority )) }
}
}
}
}
\KwRet{}
}
\caption{CMT $k$NN Query algorithm}
\label{alg:kNNQuery}
\end{algorithm}

\section{Tree Construction}\label{app:bom}
In the context of spatial search trees, the term ``pivot'' is commonly used to refer to an object used to discriminate between partitions of a dataset. Typically, the pivot object at a node is selected/determined during construction of the tree to define a partition of the dataset, and that object is stored at the node for subsequent use by a search algorithm to prune the traversal of certain subtrees. In the more general case, the pivot objects used for partitioning during the construction process need not be the same objects as those stored for use during the search process. In other words, a distinction must be made between {\em partition} pivots and {\em search} pivots, and we will explicitly distinguish between the two when there is potential ambiguity based on the context of the discussion. It should be noted that the general CMT definition does not specifically prescribe the manner of partitioning or the selection of search pivots.
Ideally, the search performance properties of the tree should be relatively invariant to discretionary choices made during the construction process, i.e., search efficiency should {\em not} depend on meticulous tailoring of the construction algorithm to exploit assumed characteristics of datasets arising in a particular application. However, the degree to which this ideal is achieved must be empirically assessed by examining different construction algorithms.

For our tests we simply choose each partitioning pivot at random and then compute the median distance to the remaining objects to obtain a balanced partition. If the median distance is not unique, then each object with distance equal to the median will be arbitrarily assigned to one of the two subsets such that equal cardinality is maintained\footnote{In the reference implementation, this functionality is provided by the C++ std::nth\_element function.}. This method ensures that the resulting tree is balanced with $O(\log N)$ height, which minimizes CMT memory requirements by limiting the number of ancestral distance intervals stored at any given node to $O(\log N)$. We refer to this as {\em balanced object median} (BOM) partitioning.

The CMT construction of Algorithm\,\ref{algorithm:buildcmt} is preceded by a pre-processing step in which a set $S$ of nodes is allocated for the set $O$ of objects, with $|S| = |O| = N$, and one object is assigned to each node. The tree is then constructed by recursively applying a procedure that selects a node $p$ from $S$, which will serve as the root (parent) of the current subtree. This is followed by partitioning the set $S \setminus \{p\}$ into two subsets, $S_l$ and $S_r$, which will be recursively constructed as the left and right subtrees, respectively. In the process of partitioning, the metric distances between $p$ and each object in $S_l \cup S_r$ are calculated. Additionally, the minimum and maximum of those distances are stored in the {\em near} and {\em far} data members of the node's distance interval. Finally, a vector $p.pDistances{[}{]}$ is stored as a member of each node and contains the distance between the node object and each of the node's ancestor objects. The function ComputeADIV() uses these vectors to compute, for the local root, one distance interval per ancestor\footnote{ $p.pDistances{[}{]}$ contains information that is only used in the tree construction process and can be deleted after construction, i.e., it is not a necessary member of CMT nodes.}.
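Before presenting the pseudocode, we note how the balanced median split of BOM partitioning can be realized in practice. The following C++ fragment is an illustrative sketch only (the reference implementation may differ); it assumes each node carries a scratch field \texttt{pivotDist} holding its distance to the current partition pivot, and the member name \texttt{p} for the node-object follows the sketch given earlier.
\begin{verbatim}
#include <algorithm>
#include <cstddef>
#include <vector>

// Partition 'nodes' around a chosen pivot by median distance.
// Returns the split index: nodes[0, mid) -> left subtree, nodes[mid, end) -> right.
template <typename Node, typename DistFn>
std::size_t bomPartition(std::vector<Node*>& nodes, const Node* pivot, DistFn dist) {
    for (Node* n : nodes)                         // one distance calculation per object
        n->pivotDist = dist(n->p, pivot->p);
    const std::size_t mid = nodes.size() / 2;     // balanced split point
    // nth_element places the median-distance node at index 'mid'; objects whose
    // distance equals the median may fall on either side of it, so the two
    // halves retain (near-)equal cardinality, as required by BOM partitioning.
    std::nth_element(nodes.begin(), nodes.begin() + mid, nodes.end(),
                     [](const Node* a, const Node* b)
                     { return a->pivotDist < b->pivotDist; });
    return mid;
}
\end{verbatim}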
\begin{algorithm}
\SetKwFunction{BuildCMT}{\textbf{BuildCMT}} \SetKwFunction{ComputeADIV}{\textbf{ComputeADIV}}
\SetKwData{Cnodes}{cnodes} \SetKwData{Pnode}{pnode} \SetKwData{Depth}{depth} \SetKwData{Dist}{distance} \SetKwData{Nearest}{nearest} \SetKwData{Furthest}{furthest} \SetKwData{DI}{DI} \SetKwData{PDL}{pdl} \SetKwData{Intervals}{intervals} \SetKwData{PD}{pDistances} \SetKwData{Root}{root} \SetKwData{Left}{left} \SetKwData{Right}{right} \SetKwData{Null}{null} \SetKwData{Nodes}{nodes} \SetKwData{Node}{node} \SetKwData{Object}{object} \SetKwData{DMR}{dmr} \SetKwData{Size}{size}
\SetKwInOut{Input}{input}
\SetKwBlock{Begin}{begin}{end}
\ComputeADIV{\Pnode, \Cnodes, \Depth }
\Begin(){
\For{$l\in \left[0, \Depth \right)$}{
\PDL $\leftarrow \Depth - l -1 $
\Nearest $\leftarrow \underset{node \in {\Cnodes \cup \Pnode}} {\mathrm{min} \{node.\PD[\PDL] \}}$
\Furthest $\leftarrow \underset{node \in {\Cnodes \cup \Pnode}} {\mathrm{max} \{node.\PD[\PDL] \}}$
\Pnode.near[$l$] $\leftarrow \Nearest $
\Pnode.far[$l$] $\leftarrow \Furthest $
}
}
\caption{\vphantom{\Huge x} Calculating the ancestral distance intervals for node $pnode$ }
\label{algorithm:computedadiv}
\BuildCMT{\Nodes, \Depth}
\Begin(){
\If{ $|\Nodes | == 0 $}{ \KwRet{\Null} }
\If{$ | \Nodes | == 1 $} { \Root $\leftarrow$ first \{ \Nodes \} \KwRet{ \Root } }
\Root $\leftarrow$ a random node $\in$ \Nodes
\Nodes $\leftarrow \{ \Nodes \setminus \Root \} $
\For{\Node $\in$ \Nodes}{ \Node.\PD[ \Depth ] $\leftarrow \Dist ( \Node.object, \Root.object ) $ }
\Root.near[0] $\leftarrow \underset{\Node \in \Nodes }{\mathrm{min}\{ \Node.\PD[ \Depth ] \}}$
\Root.far[0] $\leftarrow \underset{\Node \in \Nodes }{\mathrm{max}\{ \Node.\PD[ \Depth ] \}}$
MedDist $\leftarrow$ {\em median distance to random pivot}
\Left $\leftarrow$ \emph{\{$\Node \in \Nodes \mid \Node.\PD[ \Depth ] < $ MedDist \}}
\Right $\leftarrow \{ \Nodes \setminus \Left \} $
\Root.right$\leftarrow$\BuildCMT{\Right, $\Depth + 1$}
\Root.left$\leftarrow$\BuildCMT{\Left, $\Depth + 1$}
ComputeADIV ( \Root, \Nodes, \Depth)
\KwRet{\Root}
}
\caption{\vphantom{\Huge x} BuildCMT() - Building the CMT with random pivots and BOM partitioning.}
\label{algorithm:buildcmt}
\end{algorithm}

ComputeADIV() (Algorithm\,\ref{algorithm:computedadiv}) computes the ancestral distance intervals for the current node. This algorithm is executed for a given node after its children are recursively processed by the calling algorithm. When this algorithm is called for node $pnode$ at depth $d$ and with descendant node set $cnodes$, each descendant node already has $d+1$ distance values in its own $pDistances$ array. (The index into $pDistances$ starts with the root node of the entire tree indexed as $nd.pDistances[0]$.) The construction algorithm for the {\em Baseline} tree is nearly identical to Algorithm\,\ref{algorithm:buildcmt}, and the number of distance function evaluations is the same, but no ancestral information is maintained.

\subsection{Tree construction complexity} \label{sec:buildcomp}
The computational complexity of building CMT with BOM partitioning is technically $O(N \log^2 N)$ because the tree has $O(N)$ nodes with $O(\log N)$ scalar ancestral distance values. However, the $O(N \log^2 N)$ component of the runtime is simply due to the maintenance of $O(\log N)$ ancestral distances per node. Thus the coefficient of the $O(N \log^2 N)$ component is relatively small compared to that of the $O(N\log N)$ distance calculations performed during partitioning in BuildCMT().
Consequently, for any nontrivial distance function the practical build time for the tree will be dominated by the $O(N \log N)$ component. \ifCLASSOPTIONcompsoc \section*{Acknowledgments} \else \section*{Acknowledgment} \fi Thanks to Seth Wiesman and Yeshwanthi Pachalla for their contributions to earlier implementations and tests of the data structure. A portion of this work was sponsored by the Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-18-2-0285. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein. \end{document}
\begin{document} \title{The type-independent resource theory of local operations and shared randomness} \author{David Schmid} \affiliation{Perimeter Institute for Theoretical Physics, 31 Caroline St. N, Waterloo, Ontario, N2L 2Y5, Canada} \affiliation{Institute for Quantum Computing and Dept. of Physics and Astronomy, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada} \email{[email protected]} \author{Denis Rosset} \affiliation{Perimeter Institute for Theoretical Physics, 31 Caroline St. N, Waterloo, Ontario, N2L 2Y5, Canada} \author{Francesco Buscemi} \affiliation{Graduate School of Informatics, Nagoya University, Chikusa-ku, 464-8601 Nagoya, Japan} \begin{abstract} In space-like separated experiments and other scenarios where multiple parties share a classical common cause but no cause-effect relations, quantum theory allows a variety of nonsignaling resources which are useful for distributed quantum information processing. These include quantum states, nonlocal boxes, steering assemblages, teleportages, channel steering assemblages, and so on. Such resources are often studied using nonlocal games, semiquantum games, entanglement-witnesses, teleportation experiments, and similar tasks. We introduce a unifying framework which subsumes the full range of nonsignaling resources, as well as the games and experiments which probe them, into a common resource theory: that of local operations and shared randomness (LOSR). Crucially, we allow these LOSR operations to locally change the type of a resource, so that players can convert resources of {\em any} type into resources of any other type, and in particular into strategies for the specific type of game they are playing. We then prove several theorems relating resources and games of different types. These theorems generalize a number of seminal results from the literature, and can be applied to lessen the assumptions needed to characterize the nonclassicality of resources. As just one example, we prove that semiquantum games are able to perfectly characterize the LOSR nonclassicality of every resource of {\em any} type (not just quantum states, as was previously shown). As a consequence, we show that any resource can be characterized in a measurement-device-independent manner. \end{abstract} \maketitle \section{Introduction} A key focus in quantum foundations is the study of nonclassicality. Starting from the Einstein-Podolsky-Rosen paradox~\cite{Einstein1935}, special focus has been given to experiments involving space-like separated subsystems. In the modern language of causality~\cite{Wood2015,Costa_2016,Allen2017,Lorenz2019}, the key feature of these scenarios is that the subsystems which are being probed share a classical common cause, but do not share any cause-effect channels between them. In such scenarios, quantum theory allows for distributed quantum channels which act as valuable nonclassical resources for accomplishing tasks which would otherwise be impossible. The most common examples of such resources are entangled quantum states~\cite{horodecki2009quantum} and boxes producing nonlocal correlations~\cite{brunner2013Bell}; but there are many other types of useful resources. 
We develop a resource-theoretic~\cite{resthry} framework which unifies a wide variety of these, including quantum states~\cite{Watrous}, boxes~\cite{brunner2013Bell}, steering assemblages~\cite{wisesteer,Cavalcanti2017}, channel steering assemblages~\cite{channelsteer}, teleportages~\cite{Hoban_2018,telep}, distributed measurements~\cite{Bennett}, measurement-device-independent steering channels~\cite{steer}, Bob-with-input steering channels~\cite{BobWI}, and generic no-signaling quantum channels~\cite{Watrous}. Free (or classical) resources are those that can be generated freely by local operations and shared randomness (LOSR), encompassing the specific cases of separable quantum states, local boxes, unsteerable assemblages, and so on. Any resource which cannot be simulated by LOSR operations is said to be nonfree, or nonclassical. A resource is said to be at least as nonclassical as another resource if it can be transformed to the second using LOSR transformations. Crucially, such comparisons can be made for resources of arbitrary and potentially differing types. Some works in the past have focused on LOSR as a resource theory {\em in specific scenarios}, such as for quantum states~\cite{sq,LOSRvsLOCCentang}, for nonlocal correlations~\cite{de2014nonlocality,gallego2016nonlocality,Bellquantified,LOSRvsLOCCentang}, and for steering assemblages~\cite{steer} (albeit under a different name). These previous works focused on one or two types of resources, and most commonly on quantum states. Our framework is more general, but subsumes each of these as a special case. In addition to introducing this encompassing framework, our second primary goal herein is to study how the type of a resource impacts the methods by which one can characterize its nonclassicality in practice. For example, nonlocal boxes have classical inputs and outputs, and so only weak assumptions~\cite{Bell2004,Putz2014} about one's laboratory instruments are required for their characterization. However, when a resource has a quantum output, one requires a well-characterized quantum measurement to probe that output and consequently the resource~\cite{Rosset2012a}. In such a case, the test of nonclassicality is said to be {\em device-dependent}, while in adversarial scenarios such as cryptography, the terminology of {\em trust} is also used~\cite{Pironio2016}. The same idea applies to a quantum input, which must be probed using a well-characterized quantum state preparation device. Thus, only nonlocal boxes can be probed in a {\em device-independent} manner; {\em a priori}, quantum states require well-characterized quantum measurement devices; while other objects, such as steering assemblages, require a mixture of both~\cite{Cavalcanti2015}. Consequently, it is important to determine under what circumstances devices of one type may be converted into devices of a second type {\em in a manner that does not degrade their usefulness as a resource}. If such a conversion is possible, then one may be able to lessen the assumptions and technological requirements needed to characterize one's devices. In some particular cases, previous work has studied this question of whether the nonclassicality of a quantum state can be characterized by first applying free operations which convert it to another type of resource. 
For example, we know that some Werner states~\cite{Werner, Barrettlocal} have a local model for all measurements; such nonclassical states can only be transformed into classical boxes, and so all information about their nonclassicality is lost in the conversion. In contrast, the main result of Ref.~\cite{sq} proves that every entangled state can have its nonclassicality encoded in a semiquantum channel. Additionally, in Ref.~\cite{telep}, it is shown that every entangled state can generate a type of no-signaling channel (recently termed a teleportage~\cite{Hoban_2018}) which could not be generated by any separable state and which is useful for some task related to quantum teleportation~\cite{LipkaBartosik2019}. It is useful to distinguish between qualitative versus quantitative characterizations of nonclassicality. To highlight the distinction, it is instructive to examine one particular line of research. Ref.~\cite{sq} is often advertised as proving that the nonclassicality of every entangled state can be revealed in a generalization of nonlocal games termed {\em semiquantum games} (which were later used to construct {\em measurement-device-independent entanglement witnesses}~\cite{Branciard2013}). However, this claim is actually a (qualitative) corollary of the (quantitative) main theorem, which showed that the performance of states in semiquantum games {\em exactly} reproduces the classification of entangled states under LOSR transformations. Subsequent works~\cite{Branciard2013,Rosset2013a} focused on the qualitative distinction between classical and nonclassical resources, but still later works reinterpreted the payoffs of semiquantum games as measures of entanglement~\cite{Shahandeh2017,Rosset2018a}, thus reconnecting with the quantitative nature of Buscemi's original work. Note also that the quantitative study of entanglement is historically linked to entanglement monotones~\cite{vidal2000entanglement}. However, the study of nonclassicality cannot be reduced to a single such measure, as there are many inequivalent species of nonclassicality even in the simplest cases~\cite{Bellquantified}. Informed by the recent formalization of resource theories~\cite{resthry}, we study the fundamental mathematical object---the preorder of resources under LOSR transformations. One can then derive specific nonclassicality witnesses and monotones~\cite{rosset2019characterizing}, each of which provides an incomplete characterization of the preorder. As implied just above, the mathematical structure which best allows for comparison between objects that need not be strictly ordered is a {\em preorder}. Formally, a preorder is an ordering relation that is reflexive ($a \succeq a$) and transitive ($a \succeq b$ and $b \succeq c$ implies $a \succeq c$)\footnote{A preorder is distinguished from a partial order by the fact that $a \succeq b$ and $b \succeq a$ need not imply $a = b$. In a partial order, $a \succeq b$ and $b \succeq a$ implies $a = b$.}. Our work focuses on three distinct preorders, which the reader should be careful to distinguish. First, there is the preorder $R \succeq_{\rm LOSR} R'$ (sometimes denoted $R \LOSRconv R'$) that indicates if a resource $R$ can be converted into another resource $R'$ by LOSR transformations (Definition~\ref{LOSRorder}). Second, there is the preorder $\succeq_{\rm type}$ over resource types that orders those types according to their ability to encode nonclassicality (Definition~\ref{typeorder}). Finally, there is the preorder $\succeq_{\mathcal{G}_{\! 
T}}$ that ranks resources according to their performance with respect to the set $\mathcal{G}_{\! T}$ of all games of a particular type $T$ (Definition~\ref{orderwrtgame}). This paper is best read alongside Ref.~\cite{rosset2019characterizing}. In the current paper, we present a general framework to study quantum resources of arbitrary types, and we quantify the nonclassicality of these resources within a type-independent resource theory of local operations and shared randomness. Here, our main results center on showing how resources of one type can be more easily characterized by first converting them to resources of a second type. In Ref.~\cite{rosset2019characterizing}, our aim is practical and computational, focusing on how data can be used to characterize one's resources using off-the-shelf software. There, we include type-independent techniques for computing witnesses which can certify the nonclassicality of a resource, as well as techniques for computing the value of type-independent monotones (which we introduce therein). \subsection{Organization of the paper} In Section~\ref{sectypes}, we discuss various types of resources. We inventory the 9 possible types of a single party's partition of a resource, where that party's input and output may each be trivial, classical, or quantum. Focusing on the 81 bipartite resource types for simplicity, we recognize 10 types that have been studied in the literature and identify 5 new nontrivial resource types. All other bipartite resource types are either trivial or equivalent up to a symmetry. We then define LOSR transformations between resources of arbitrary types, as well as the ordering over resources that this induces. In Section~\ref{expressivity}, we define a precise sense in which some types can express the LOSR nonclassicality of other types. In many cases, conversions from a resource of one type to another type necessarily degrade the nonclassicality of the resource, as in Werner's example. In other cases, one can perfectly encode the nonclassicality of any given resource into some resource of the target type, as in Buscemi's example. For every single-party type, we ask which can perfectly encode the nonclassicality of which others, and we answer this question for almost every pair, with the exception of one open question. From these considerations of single party types, one can deduce encodings of more complicated resource types which involve multiple parties. Most strikingly, we show that semiquantum channels (with quantum inputs and classical outputs) are universal, in the sense that the nonclassicality of all resources can be encoded into them. In Section~\ref{unifiedgames}, we give an abstract framework for probing the nonclassicality of resources, subsuming as special cases the notions of nonlocal games~\cite{Bellreview}, semiquantum games~\cite{sq}, steering~\cite{wisesteer,steer} and teleportation~\cite{telep} experiments, and entanglement witnessing~\cite{Chruscinski2014}. In our framework, every type of resource has a corresponding type of game, where a game of some type maps every resource of that type to a real number. (E.g., in nonlocal and semiquantum games, this number is the usual average game payoff). We then show how resources of any type can be used to play a game designed for one specific type. In some cases, games of one type can {\em completely} characterize the nonclassicality of every resource of another type. 
For example, Ref.~\cite{sq} showed that the LOSR nonclassicality of every quantum state is perfectly characterized by the set of semiquantum games. We generalize these ideas by proving that if one type can encode another, then games of the first type can perfectly characterize the LOSR nonclassicality of all resources of the second type. Together with our results on which types can encode which others, this expands the known methods for quantifying LOSR nonclassicality in practice and in theory. For example, our result on the universality of the semiquantum type implies that any resource of any type can be characterized by some semiquantum game, and hence can be characterized in a measurement-device-independent manner. In Section~\ref{extending}, we relate our work to existing results. First, we note how our results generalize the main result of Ref.~\cite{sq}, showing that semiquantum games can completely characterize the LOSR nonclassicality of arbitrary resource, not just of quantum states. Next, we show that the results of Ref.~\cite{steer} are a special case of two of our theorems when one applies steering experiments to quantify the nonclassicality of quantum states; further, our theorems provide a generalization of these arguments to more general experiments and types of resources. Finally, we show that the LOSR nonclassicality of every quantum state is {\em completely} characterized by the set of teleportation games, and thus that the results of Ref.~\cite{telep} can be extended to be quantitative as well as qualitative. \section{Resource types and LOSR transformations between them} \label{sectypes} We are interested in scenarios where the relevant parties share a classical common cause but do not share any cause-effect channels. For example, parties who perform experiments at space-like separation cannot access classical communication. For simplicity, we henceforth focus on bipartite scenarios; however, all of our results generalize immediately to arbitrarily many parties. We will consider only nonsignaling resources~\cite{Popescu1994,Barrett2005} throughout this work.\footnote{In fact, if one wishes to interpret resourcefulness as {\em nonclassicality}, then one must further restrict the enveloping theory to those resources which can be generated by local operations and quantum common causes. For non-signaling resources that {\em cannot} be realized in this manner~\cite{causallocaliz}, resourcefulness may originate in the nonclassicality of a common-cause process {\em or} in {\em classical} communication channels (which are fine-tuned so as to not exhibit signaling).} We will not specifically consider post-quantum channels in this work, although one might naturally extend our work to include these as resources. Hence, in this work a resource is a completely positive~\cite{NielsenAndChuang,Schmidcausal}, trace-preserving, nonsignaling quantum channel. The parties may share various types of resources, which we now classify by type. \subsection{Partition-types and global types} In this paper, we use the term {\bf type} (of a resource) to refer exclusively to whether the various input and output systems are trivial ($\mathsf{I}$), classical ($\mathsf{C}$), or quantum ($\mathsf{Q}$). A system is said to be trivial if it has dimension one, is said to be classical if all operators on its Hilbert space are diagonal, and is otherwise said to be quantum. (See Ref.~\cite{rosset2019characterizing} for more details.) 
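To make this three-way classification concrete, a minimal Python sketch is given below (our own illustration, not part of the paper's formalism; the function name and the representation of a system by a spanning set of allowed states are hypothetical conventions). It labels a system as trivial, classical, or quantum exactly as in the definition above.
\begin{verbatim}
import numpy as np

def classify_system(dim, spanning_states):
    """Label a system I (trivial), C (classical), or Q (quantum).

    `spanning_states` is a list of density matrices assumed to span the
    operators available on the system (illustrative convention only)."""
    if dim == 1:
        return "I"  # trivial: one-dimensional Hilbert space
    # classical: every available operator is diagonal in the fixed basis
    if all(np.allclose(rho, np.diag(np.diag(rho))) for rho in spanning_states):
        return "C"
    return "Q"

# A qubit restricted to |0><0| and |1><1| behaves classically ...
print(classify_system(2, [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]))  # C
# ... whereas access to |+><+| makes it genuinely quantum.
print(classify_system(2, [np.full((2, 2), 0.5)]))                      # Q
\end{verbatim}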
Additionally, if a resource has more than one input (output), which may be of different types, we imagine grouping them together, yielding an effective input (output) whose type is the least expressive type which embeds all those in the grouping, where quantum systems embed classical systems, which embed trivial systems. We will denote the type of a single party's share of a resource by $T_i := X_i \! \! \to \! \! Y_i$, where $i$ labels the party and $X,Y \in \{\mathsf{I}, \mathsf{C}, \mathsf{Q} \}$, with $X$ labeling whether the input to that party is trivial ($\mathsf{I}$), classical ($\mathsf{C}$), or quantum ($\mathsf{Q}$) and $Y$ labeling the output similarly. We will refer to $T_i$ as the {\bf partition-type} of party $i$. We can then denote the {\bf global type} of an $n$-party resource as $T := T_1 T_2 ...T_n \simeq X_1X_2 ... X_n\! \! \to \! \! Y_1 Y_2...Y_n $. Note that while the specification of the global type of a resource fixes the number of parties and the types of their partitions of the resource, the specification of a partition-type constrains neither the number of other parties who share the resource nor the types of those other partitions. One could also consider partition-types for partitions of a resource which involve more than one party, but this paper makes use only of partition-types which involve a single party. We now describe the ten examples of resource types from Fig.~\ref{types}, setting up some explicit terminology and conventions as we go. We graphically depict trivial, classical, and quantum systems by the lack of a wire, a single wire, and a double wire, respectively. \begin{figure} \caption{Common types of no-signaling resources, where classical systems are represented by single wires and quantum systems are represented by double wires. (a) A quantum state $\rho$ has type $\mathsf{II} \! \! \to \! \! \mathsf{QQ}$; the remaining panels (b)--(j) depict the other types of resources described in the main text.} \label{types} \end{figure} Fig.~\ref{types}(a) depicts a {\bf quantum state}, the canonical quantum resource. Bipartite quantum states have type $\mathsf{II} \! \! \to \! \! \mathsf{QQ}$; that is, they have no inputs and both outputs are quantum. The nonclassicality of quantum states is often quantified using the resource theory of local operations and classical communication (LOCC). While this is appropriate in some contexts, allowing classical communication for free is not appropriate in the context of space-like separated experiments, nor in any other scenario where distributed systems are unable to causally influence one another. In such cases, LOSR operations are the relevant ones for quantifying nonclassicality of any resource, including quantum states, and it is {\em LOSR-entanglement}, not LOCC-entanglement, that is relevant, as argued extensively in Ref.~\cite{LOSRvsLOCCentang}. Fig.~\ref{types}(b) depicts another canonical type of resource~\cite{Barrett2005, brunner2013Bell}, often termed a correlation or a box-type resource, or {\bf box} for short. Bipartite boxes have type $\mathsf{CC} \! \! \to \! \! \mathsf{CC}$; that is, both parties have a classical input and a classical output. Extensive research has been done on boxes, e.g. to characterize the set of local boxes~\cite{brunner2013Bell} and the possible LOSR conversions between them~\cite{Horodecki2015, Bellquantified, Vicente2014}.
The fact that we wish to subsume boxes in our framework provides another reason to focus on LOSR as opposed to LOCC, since LOSR has been argued to be the appropriate set of free operations in this context~\cite{Bellquantified}. Furthermore, under unbounded LOCC {\em all} boxes would be deemed free, even nonlocal or signaling boxes. Fig.~\ref{types}(c) depicts the type of resource that arises naturally in a steering scenario~\cite{Einstein1935, schrodinger_1935,wisesteer, Skrzypczyk2014, Gallego2015, Piani2015a, Cavalcanti2017, Uola2019}, often termed an {\bf assemblage}~\cite{Pusey2013}. Such resources have type $\mathsf{CI} \! \! \to \! \! \mathsf{CQ}$; that is, the first party has a classical input and classical output, while the second party has no input and a quantum output. Fig.~\ref{types}(d) depicts a type of resource that arises naturally in a teleportation scenario~\cite{telep, Supic2018}, termed {\bf teleportages}~\cite{Hoban_2018}. Such resources have type $\mathsf{QI} \! \! \to \! \! \mathsf{CQ}$. Intuitively, given a teleportage, one would complete the standard teleportation protocol by applying one of a set of unitaries on the quantum output, conditioned on the classical output. The precise operational sense in which these teleportages relate to the possibility of implementing an effective quantum channel is still being investigated~\cite{LipkaBartosik2019}\footnote{While LOSR is clearly the correct set of free operations for studying resources in Bell scenarios and other common cause scenarios, the same is not true for teleportation experiments, which might be better described by another resource theory (such as LOCC). The surprising insight which follows from Ref.~\cite{telep} is that a great deal can nonetheless be learned about teleportation scenarios by studying LOSR. }. Fig.~\ref{types}(e) depicts the type of resource that arises naturally in semiquantum games, namely type $\mathsf{QQ} \! \! \to \! \! \mathsf{CC}$. We will term these {\bf distributed measurements} or {\bf semiquantum channels}, since they arise in multiple contexts where one term~\cite{Bennett} or the other~\cite{sq} is more natural. Fig.~\ref{types}(f) depicts the type of resource that arises naturally in measurement-device-independent (MDI) steering scenarios~\cite{steer}, namely type $\mathsf{CQ} \! \! \to \! \! \mathsf{CC}$. We will term these {\bf MDI-steering channels}. Fig.~\ref{types}(g) depicts the type of resource that arises naturally in channel steering scenarios~\cite{channelsteer}, often termed a {\bf channel assemblage}. Such resources have type $\mathsf{CQ} \! \! \to \! \! \mathsf{CQ}$. Fig.~\ref{types}(h) depicts the type of resource that arises when one generalizes a steering scenario to have a classical input on the steered party~\cite{BobWI}, termed a {\bf Bob-with-input steering channel}. Such resources have type $\mathsf{CC} \! \! \to \! \! \mathsf{CQ}$. Fig.~\ref{types}(i) depicts a distributed classical-to-quantum channel, of type $\mathsf{CC} \! \! \to \! \! \mathsf{QQ}$. We will term these {\bf ensemble-preparing channels}. An interesting example of such a channel can be found in Ref.~\cite{causallocaliz} (see Eq.~82). Fig.~\ref{types}(j) depicts a generic bipartite {\bf quantum channel}, of type $\mathsf{QQ} \! \! \to \! \! \mathsf{QQ}$. This list is not exhaustive. Even in the bipartite case, one might wonder how many nontrivial resource types there are, and whether all of these have been studied. First, note that the partition-type $\mathsf{I} \! \! \to \! \!
\mathsf{I}$ corresponds to a trivial party. As there are no nonclassical resources involving only one party, all bipartite types involving partition-type $\mathsf{I}\! \! \to \! \! \mathsf{I}$ for either party are trivial. Two other partition-types, $\mathsf{C} \! \! \to \! \! \mathsf{I}$ and $\mathsf{Q} \! \! \to \! \! \mathsf{I}$, are also trivial, since the no-signaling principle guarantees that their input cannot affect the operation of the remaining parties~\cite{rosset2019characterizing}. Moreover, some global types are equivalent up to exchange of parties, in which case we will consider only a single representative. This leads us to our first open question. \begin{customopq}{1} Even in the bipartite case, there are five nontrivial global types of resources that have not (to our knowledge) been previously studied, namely $\mathsf{QC} \! \! \to \! \! \mathsf{CQ}$, $\mathsf{CQ} \! \! \to \! \! \mathsf{QQ}$, $\mathsf{IQ} \! \! \to \! \! \mathsf{QQ}$, $ \mathsf{QQ} \! \! \to \! \! \mathsf{CQ}$, and $\mathsf{CI} \! \! \to \! \! \mathsf{QQ}$. Do any of these correspond to scenarios which are interesting in their own right? \end{customopq} \noindent At the very least, each new type implies a novel form of `nonlocality'. What remains to be seen is whether these will be directly relevant for quantum information processing tasks. \subsection{Free versus nonfree resources} A nonsignaling resource (of any type) is {\bf free} with respect to LOSR, or {\bf classical}\footnote{In reference to the fact that such resources can be generated by classical common causes. Classicality of a {\em resource} is not to be confused with classicality of input and output systems.}, if the parties can generate it freely using local operations and shared randomness. This notion of being free with respect to LOSR subsumes the established notions of classicality for every type of resource in Fig.~\ref{types}; e.g. for states it coincides with separability~\cite{horodecki2009quantum}, for boxes it coincides with admitting of a local hidden variable model~\cite{brunner2013Bell}, for assemblages it coincides with unsteerability~\cite{Cavalcanti2017,Uola2019}, for teleportages it coincides with the inability to outperform classical teleportation~\cite{telep}, and so on, as pictured in Fig.~\ref{freeset}. \begin{figure} \caption{Free LOSR resources are those which can be simulated by local operations (in black) and shared randomness (in purple). We depict four canonical types of free resources here: separable states, local boxes, unsteerable assemblages, and classical teleportages.} \label{freeset} \end{figure} Any resource which cannot be simulated by local operations and shared randomness is {\em nonfree} and constitutes a resource of LOSR nonclassicality. The purpose of our type-independent resource theory of LOSR is to quantitatively characterize nonfree resources of arbitrary types, as we now do. \subsection{Type-changing LOSR operations} Two parties in an LOSR scenario transform resources using free LOSR operations.
Most previous works which studied LOSR focused on conversions between specific types of resources; for example, Refs.~\cite{de2014nonlocality,gallego2016nonlocality,Bellquantified} considered LOSR conversions from boxes to boxes, Ref.~\cite{sq} considered LOSR conversions from quantum states to quantum states, and Ref.~\cite{steer} considered LOSR conversions\footnote{In this last case, the authors introduced the term local operations with steering and shared randomness (LOSSR); however, the operations they consider involve all and only the subset of LOSR operations from quantum states to assemblages, so there is no need for the new term LOSSR.} from quantum states to assemblages. In keeping with our aim to unify a range of scenarios in one framework, and because local operations can freely change the type of a resource, we do {\em not} restrict attention to conversions among resources of fixed type, but rather allow conversions among resources of all types. We denote the set of all operations which can be generated by local operations and shared randomness by \LOSR. As depicted in Fig.~\ref{typechange}(a), the most general local operation on a given party is given by a comb~\cite{qcombs09}, and the different parties may correlate their choice of comb using their shared randomness. Note that this shared randomness can be transmitted down the side channel of each local comb, which implies that this depiction of LOSR is completely general and is convex~\cite{Bellquantified} for conversions from one fixed type to another. We will denote an element of this set by $\tau \in \LOSR$ and a generic resource of arbitrary type by $R$. As in any resource theory~\cite{resthry}, the set of free operations induces a preorder over the set of all resources. Here, we write $R \LOSRconv R'$ whenever there exists some $\tau \in \LOSR$ such that $R'= \tau \circ R$, and we say that $R$ is {\bf at least as nonclassical} (as resourceful) as $R'$. We denote the ordering relation for the preorder defined by LOSR conversions as $\succeq_{\rm LOSR}$: \begin{defn} \label{LOSRorder} For resources $R$ and $R'$ of arbitrary (and possibly different) types, we say that $R \succeq_{\rm LOSR} R'$ iff $R \LOSRconv R'$. \end{defn} \noindent This definition allows us to make rigorous, quantitative comparisons of LOSR nonclassicality among resources of arbitrary types. The relation $\succeq_{\rm LOSR}$ is a preorder, as there exists an identity LOSR transformation (reflexivity), and LOSR transformations compose (transitivity). Two resources $R$ and $R'$ are equally nonclassical if they are interconvertible under LOSR; that is, if $R \LOSRconv R'$ and $R' \LOSRconv R$. We denote this $R \LOSRinterconv R'$, and we say that $R$ and $R'$ are in the same LOSR equivalence class. We give several examples of conversions among resource types in Fig.~\ref{typechange}, depicting wires of unspecified (and arbitrary) type by dashed double lines. \begin{figure} \caption{ Some type-changing operations (in green), as described in the main text. Dashed wires denote systems of arbitrary and unspecified type. (a) A generic bipartite type-changing LOSR transformation. (b) A transformation taking partition-type $\mathsf{Q} \! \! \to \! \! \mathsf{Q}$ to $\mathsf{C} \! \! \to \! \! \mathsf{C}$; panels (c) and (d) depict the further examples described in the main text.} \label{typechange} \end{figure} Fig.~\ref{typechange}(a) depicts a generic bipartite type-changing LOSR operation. Fig.~\ref{typechange}(b) depicts an example of a specific transformation which takes the left partition of the resource from $\mathsf{Q} \! \! \to \! \! \mathsf{Q}$ to $\mathsf{C} \! \! \to \! \! \mathsf{C}$.
It is generated by composition with a local ensemble-preparing channel and a local measurement channel, respectively. Fig.~\ref{typechange}(c) depicts an example of a specific transformation which takes the left partition of the resource from $\mathsf{Q} \! \! \to \! \! \mathsf{I}$ to $\mathsf{I} \! \! \to \! \! \mathsf{Q}$. The transformation is generated by (sequential) composition with half of an entangled state and parallel composition with a classical system in some fixed state. In this example, the output system type is quantum, since it is comprised of a classical and quantum system. Fig.~\ref{typechange}(d) depicts an example of a specific transformation which takes the left partition of the resource from $\mathsf{C} \! \! \to \! \! \mathsf{Q}$ to $\mathsf{Q} \! \! \to \! \! \mathsf{C}$, generated by a stochastic transformation on the classical input to the resource and performing a joint quantum measurement channel on the quantum output of the resource together with some new quantum input. \section{Encoding nonclassicality of one type of resource in another type} \label{expressivity} We now consider a preorder over {\em types of resources} (rather than over the resources themselves). This allows us to formally compare the different manifestations of nonclassicality. For example, this preorder provides a {\em formal} sense in which entanglement and nonlocality are incomparable types of nonclassicality. Surprisingly, we will also show that not all types of nonclassicality are incomparable. \begin{defn} Global type $T$ {\bf encodes the nonclassicality of} global type $T'$, denoted $T \succeq_{\rm type} T'$, if for every resource $R'$ of type $T'$, there exists at least one resource $R$ of type $T$ such that $R' \LOSRinterconv R $. \end{defn} In other words, there exists some resource of the higher type in every equivalence class of resources of the lower type. Several well-known examples of such encodings will be given shortly. To study the preorder over global types, it is also useful to consider a preorder over partition-types; that is, over the nine possible types $T_i := X_i \! \! \to \! \! Y_i$ of a single party's share of a resource. Considering without loss of generality the first party, denoted by subscript $1$, we say that type $T_1$ is higher in the preorder than type $T_1'$ if for every resource of type $T_1' T_2 ... T_n $, there exists a resource of type $T_1 T_2 ... T_n $ which is in the same LOSR equivalence class (for all numbers of parties $n$). Equivalently, this means that the LOSR equivalence class of any resource with partition-type $T_1'$ on the first party always contains at least one resource of partition-type $T_1$ (on the first party). We denote this second ordering relation $\succeq_{\rm type}$: \begin{defn} \label{typeorder} We say that $T_1 \succeq_{\rm type} T_1'$ iff for all $R'$ of type $T_1' T_2 ... T_n $ (as one ranges over all $T_2, ..., T_n$ and all $n$), there exists $R$ of type $T_1 T_2 ... T_n $ in the LOSR equivalence class of $R'$, that is, satisfying $R' \LOSRinterconv R $. \end{defn} \noindent In such cases, we say that partition-type $T_1$ encodes (the nonclassicality of) all resources of partition-type $T_1'$, or more simply that type $T_1$ encodes type $T_1'$. If every partition-type of some given global type is higher than the corresponding partition-type of a second global type on every partition, then the first type is necessarily higher in the preorder over global types. 
Hence, orderings over global types can often be deduced from orderings over partition-types. As a trivial example, it is clear that the global type $\mathsf{QQ} \! \! \to \! \! \mathsf{QQ}$ (that of bipartite quantum channels) is above every other bipartite type. For example, it is above the global type $\mathsf{II} \! \! \to \! \! \mathsf{QQ}$ (that of bipartite quantum states) in the preorder, so that $\mathsf{QQ} \! \! \to \! \! \mathsf{QQ} \succeq_{\rm type} \mathsf{II} \! \! \to \! \! \mathsf{QQ}$, since the latter is an instance of the former where the inputs to the channel are trivial. In other words: given any bipartite quantum state, there is a bipartite quantum channel which is in the same LOSR equivalence class---namely, the quantum state itself, viewed as a channel from the trivial system to a quantum system on each partition. We will refer to such trivial instances of ordering among types as {\bf embeddings} of one type into the other. Two resource types are in the same equivalence class over types if any resource of either type can be converted into a resource of the other type which is in the same LOSR equivalence class. For example, the three partition-types $\mathsf{I}\! \! \to \! \! \mathsf{I}$, $\mathsf{C}\! \! \to \! \! \mathsf{I}$, and $\mathsf{Q}\! \! \to \! \! \mathsf{I}$ are all in the lowest equivalence class over partition-types, since (as discussed above) they never play any role in the nonclassicality of any nonsignaling resource. Understanding the scope of nonclassicality-preserving conversions between resources of different global types is particularly useful for devising experimental measures and witnesses of nonclassicality, as we discuss in Section~\ref{implictypetoperf} (and in Ref.~\cite{rosset2019characterizing}). Abstractly, this is because one type is above another type if there exists an embedding of the partial order over equivalence classes of resources of the lower type into the partial order of the higher type. When this is the case, techniques for characterizing the preorder of the higher type give direct information about the preorder of the lower type. \subsection{Determining which types encode the nonclassicality of which others} \label{deterencode} In this section, we derive all but two of the ordering relations that hold between the possible pairings of partition-types by leveraging various results from the literature. These results are summarized in Table~\ref{typeorderings}. As discussed above, orderings over global types can be deduced from these. \begin{table}[htb!] \centering \includegraphics[width=0.49\textwidth]{figures/typeorderings} \caption{ A green check mark in a given cell indicates that the column type $T$ is higher in the order over partition-types than the row type $T'$ (denoted $T \succeq_{\rm type} T'$), while a red cross indicates that it is not higher (denoted $T \not\succeq_{\rm type} T'$). The text in each cell alludes to the proof (given in the main text) of that ordering relation. Two relations are unknown, as indicated by blue question marks. } \label{typeorderings} \end{table} As discussed above, there are no nonfree resources which nontrivially involve the types $\mathsf{I} \! \! \to \! \! \mathsf{I}$, $\mathsf{C} \! \! \to \! \! \mathsf{I}$, or $\mathsf{Q} \! \! \to \! \! \mathsf{I}$, so we need not discuss them further. There remain 6 nontrivial types, and hence 36 ordering relations to check. These are all shown in the table.
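Before walking through the arguments cell by cell, we note that the bookkeeping is simple enough to automate. The following Python sketch (our own illustration, not part of the proofs) encodes the trivial embedding relations among the six nontrivial partition-types, adds the nontrivial positive relations established below (Buscemi's result and Theorem~\ref{squniv}, by which $\mathsf{Q} \! \! \to \! \! \mathsf{C}$ encodes every partition-type), and closes the set under transitivity; the cells it does not mark affirmatively correspond to the red crosses and the two open question marks in the table.
\begin{verbatim}
from itertools import product

# The six nontrivial partition-types X->Y (types with trivial output are excluded).
types = ["I->C", "I->Q", "C->C", "C->Q", "Q->C", "Q->Q"]

# Quantum systems embed classical systems, which embed trivial systems.
embeds = {"I": {"I"}, "C": {"I", "C"}, "Q": {"I", "C", "Q"}}

def embed_above(t, tp):
    """Trivial orderings: t is above tp whenever t embeds tp."""
    (x, y), (xp, yp) = t.split("->"), tp.split("->")
    return xp in embeds[x] and yp in embeds[y]

# Start from the embeddings, then add the nontrivial positive results
# quoted in the text (Q->C encodes every other partition-type).
above = {(t, tp) for t, tp in product(types, repeat=2) if embed_above(t, tp)}
above |= {("Q->C", tp) for tp in types}

# Close under transitivity, mirroring the `trans.' arguments of this section.
changed = True
while changed:
    larger = above | {(a, c) for (a, b) in above for (b2, c) in above if b == b2}
    changed = larger != above
    above = larger

print(f"{len(above)} of the 36 ordering relations hold affirmatively.")
\end{verbatim}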
If the column resource type $T$ is higher in the order than the row type $T'$, so that $T \succeq_{\rm type}T'$, then we indicate this with a green check mark in the corresponding cell in the table. If instead $T \not\succeq_{\rm type}T'$, we indicate this with a red cross. In each case, we briefly allude to the logic behind the proofs for that particular ordering---proofs which we now give. As stated in Section~\ref{expressivity}, a type is higher in the order than all types which it embeds, where quantum systems embed classical systems, which embed trivial systems. In the table, we indicate these trivial ordering relations by the word `embed'. Next, recall that Werner proved the existence of entangled states which cannot violate any Bell inequality involving projective measurements~\cite{Werner}. It was subsequently proved that this holds true even for arbitrary local measurements~\cite{Barrettlocal}, a result that holds even if the choice of local measurements is made in a correlated fashion using shared randomness. This constitutes the most general LOSR conversion scheme from quantum states to boxes. In other words, an entangled Werner state cannot be converted into {\em any} nonfree box, much less into a box that is in its LOSR equivalence class (as would be required for encoding its nonclassicality into a box-type resource). It follows that global type $\mathsf{CC} \! \! \to \! \! \mathsf{CC}$ is not above global type $\mathsf{II} \! \! \to \! \! \mathsf{QQ}$, which in turn implies that partition-type $\mathsf{C} \! \! \to \! \! \mathsf{C}$ is not above partition-type $\mathsf{I} \! \! \to \! \! \mathsf{Q}$. That is, $\mathsf{C} \! \! \to \! \! \mathsf{C} \not\succeq_{\rm type} \mathsf{I} \! \! \to \! \! \mathsf{Q}$, as is indicated in the table by the phrase `Werner states'. In addition, it is well known that LOCC can generate arbitrary boxes and yet cannot generate any entangled state. Since LOSR operations form a subset of LOCC operations, this implies that LOSR operations applied to any box (of type $\mathsf{CC} \! \! \to \! \! \mathsf{CC}$) cannot generate {\em any} nonfree state (of type $\mathsf{II} \! \! \to \! \! \mathsf{QQ}$), much less a state in its LOSR equivalence class. Hence, global type $\mathsf{II} \! \! \to \! \! \mathsf{QQ}$ is not above global type $\mathsf{CC} \! \! \to \! \! \mathsf{CC}$, which in turn implies that partition-type $\mathsf{I} \! \! \to \! \! \mathsf{Q}$ is not above partition-type $\mathsf{C} \! \! \to \! \! \mathsf{C}$. That is, $\mathsf{I} \! \! \to \! \! \mathsf{Q} \not\succeq_{\rm type} \mathsf{C} \! \! \to \! \! \mathsf{C}$, as is indicated in the table by the phrase `LOSR cannot entangle'. We can use transitivity of the ordering relation to prove that $\mathsf{I} \! \! \to \! \! \mathsf{C}$ is not above $\mathsf{I} \! \! \to \! \! \mathsf{Q}$ and is not above $\mathsf{C} \! \! \to \! \! \mathsf{C}$, and that none of $\mathsf{I} \! \! \to \! \! \mathsf{C}$, $\mathsf{I} \! \! \to \! \! \mathsf{Q}$, or $\mathsf{C} \! \! \to \! \! \mathsf{C}$ are above any of $\mathsf{C} \! \! \to \! \! \mathsf{Q}$, $\mathsf{Q} \! \! \to \! \! \mathsf{C}$, and $\mathsf{Q} \! \! \to \! \! \mathsf{Q}$. For example, from the fact that $\mathsf{C} \! \! \to \! \! \mathsf{C}$ is above $\mathsf{I} \! \! \to \! \! \mathsf{C}$ and the fact that $\mathsf{C} \! \! \to \! \! \mathsf{C}$ is not above $\mathsf{I} \! \! \to \! \! \mathsf{Q}$, it must be that $\mathsf{I} \! \! \to \! \! \mathsf{C}$ is not above $\mathsf{I} \! \! \to \! \! \mathsf{Q}$.
If it were otherwise, one would have $\mathsf{C} \! \! \to \! \! \mathsf{C}$ above $\mathsf{I} \! \! \to \! \! \mathsf{C}$ above $\mathsf{I} \! \! \to \! \! \mathsf{Q}$ $\implies$ $\mathsf{C} \! \! \to \! \! \mathsf{C}$ above $\mathsf{I} \! \! \to \! \! \mathsf{Q}$, which is false. The other transitivity arguments run analogously. In the table, we indicate all such ordering relations by the abbreviation `trans.'. One of the authors proved in Ref.~\cite{sq} that there exists some semiquantum channel (of type $\mathsf{QQ} \! \! \to \! \! \mathsf{CC}$) in the same equivalence class as any given quantum state (of type $\mathsf{II} \! \! \to \! \! \mathsf{QQ}$). A slight reframing of this result implies that the semiquantum partition-type $\mathsf{Q} \! \! \to \! \! \mathsf{C}$ is higher in the order than $\mathsf{I} \! \! \to \! \! \mathsf{Q}$, as we show below. That is, $\mathsf{Q} \! \! \to \! \! \mathsf{C} \succeq_{\rm type} \mathsf{I} \! \! \to \! \! \mathsf{Q}$, as is indicated in the table by the phrase `semiquantum games'. Finally, as we prove in Theorem~\ref{squniv}, the semiquantum partition-type $\mathsf{Q} \! \! \to \! \! \mathsf{C}$ is higher in the order than all other partition-types. The ordering relations that follow from our proof but not from previous work, namely $\mathsf{Q} \! \! \to \! \! \mathsf{C} \succeq_{\rm type} \mathsf{C} \! \! \to \! \! \mathsf{Q}$ and $\mathsf{Q} \! \! \to \! \! \mathsf{C} \succeq_{\rm type} \mathsf{Q} \! \! \to \! \! \mathsf{Q}$, are indicated in the table by the phrase `Thm~3'. This proves all the results shown in the table. There remain two unknown ordering relations, indicated in the table by question marks; namely whether $\mathsf{C} \! \! \to \! \! \mathsf{Q}$ is higher in the order than either $\mathsf{Q} \! \! \to \! \! \mathsf{C}$ or $\mathsf{Q} \! \! \to \! \! \mathsf{Q}$. Because $\mathsf{Q} \! \! \to \! \! \mathsf{C}$ and $\mathsf{Q} \! \! \to \! \! \mathsf{Q}$ are in the same equivalence class (at the top of the order), the answer to both of these questions must be the same; that is, either $\mathsf{C} \! \! \to \! \! \mathsf{Q}$ encodes them both, or it encodes neither. Such an encoding could have dramatic practical consequences. For example, if the encoding can be done with a fixed transformation (which is not a function of the resource to be converted), then this would enable the possibility of {\em preparation-device-independent} quantification of nonclassicality. \begin{customopq}{2} Can the LOSR nonclassicality of any resource be perfectly characterized in a {\em preparation-device-independent} manner? \end{customopq} \subsection{Semiquantum channels are universal encoders of nonclassicality} \label{sec:sq} To complete the arguments of the last section, we prove that the semiquantum partition-type can encode any other partition-type. The consequences of this fact are fleshed out further in Section~\ref{implictypetoperf}. \begin{thm} \label{squniv} The semiquantum partition-type $\mathsf{Q} \! \! \to \! \! \mathsf{C}$ is in the unique equivalence class at the top of the order over partition-types. That is, it can encode the nonclassicality of all other partition-types. \end{thm} \begin{proof} Consider a bipartite channel $\mathcal{E}$ which has a quantum output of dimension $d$, together with arbitrary other outputs and inputs (denoted by dashed double lines), as shown in black in Fig.~\ref{SQandBack}(a). 
One can transform $\mathcal{E}$ into a resource with a quantum input of dimension $d$ and a classical output of dimension $d^2$ by composing $\mathcal{E}$ with a Bell measurement as shown in green in Fig.~\ref{SQandBack}(a); that is, by performing a measurement in a maximally entangled basis on the quantum output of $\mathcal{E}$ and a new quantum input of the same dimension $d$. \begin{figure} \caption{ (a) A free transformation (in green) that converts a quantum output to a classical output together with a new quantum input. (b) This transformation does not change the LOSR equivalence class, since it has a left inverse (shown in pink) which is a free transformation.} \label{SQandBack} \end{figure} To see that this transformation preserves LOSR equivalence class, it suffices to note that there exists a local (and hence free) operation, shown in pink on the left-hand side of Fig.~\ref{SQandBack}(b), which takes the transformed channel back to the original channel $\mathcal{E}$. In particular, this local operation feeds one half of a maximally entangled state $\Phi_{\rm max}$ into the Bell measurement, and then performs a correcting unitary operation $U$ on the other half of the entangled state, conditioned on the classical outcome of the Bell measurement. For the correct choice of correction operations, the overall transformation on $\mathcal{E}$ is just the well-known teleportation protocol~\cite{Bennett93}, and so the equality shown in Fig.~\ref{SQandBack}(b) holds. Hence, the channel in Fig.~\ref{SQandBack}(a) is in the same LOSR equivalence class as $\mathcal{E}$, which implies that every partition of a resource can be transformed to a resource of type $\mathsf{Q} \! \! \to \! \! \mathsf{C}$ in the same equivalence class. \end{proof} Note that $\mathsf{Q} \! \! \to \! \! \mathsf{Q}$ is trivially also at the top of the order, since every other type embeds into it. It is thus in the same equivalence class as $\mathsf{Q} \! \! \to \! \! \mathsf{C}$. \section{A unified framework for distributed games of all types } \label{unifiedgames} A variety of `games' have been studied for the purposes of quantifying nonclassicality of various types of resources. For instance, the nonclassicality of quantum states has been studied from the point of view of nonlocal games and semiquantum games, as well as teleportation, steering, and entanglement witnessing experiments. Nonlocal games have also been used to study the nonclassicality of boxes. In fact, there is a natural class of distributed tasks for every type of resource, including one for each of the common types in Section~\ref{sectypes}. \begin{defn} \label{defn:Tgame} For a given global type $T$, we define a distributed $\mathbf{T}${\bf -game} as a linear map from resources of type $T$ to the real numbers. \end{defn} The set $\mathcal{G}_{\! T}$ of all such maps for fixed $T$ is the set of $T$-games, and a resource of type $T$ is said to be a {\bf strategy} for a $T$-game. This last terminology is motivated by the fact that no matter how complicated the players' tactics, their score for a given $T$-game only depends on the resource of type $T$ that they ultimately share with the referee. We will refer to any game of any type as a distributed game. In Fig.~\ref{games}, we depict four distributed games together with the type of resource that acts as a strategy for that game. We represent a game diagrammatically as a monolithic comb with appropriate input and output structure such that composition of the comb corresponding to a game $G_{\! 
T}$ with a strategy $\mathcal{E}_T$ of type $T$ yields a circuit with no open inputs or outputs, representing the real number $G_{\! T}(\mathcal{E}_T)$. \begin{figure} \caption{ Some games and their strategies. (a) Boxes are strategies for nonlocal games. (b) Semiquantum channels are strategies for semiquantum games. (c) Teleportages are strategies for teleportation games. (d) Entangled states are strategies for entanglement witnesses. } \label{games} \end{figure} \subsection{Implementations of a game} We have noted that a variety of games and experiments can be viewed abstractly under the umbrella of $T$-games. The {\em practical} meaning of such games is made more clear by considering the following two-step procedure, by which a referee can implement any game (of any type $T$). This procedure is depicted on the right-hand side of Fig.~\ref{TCgames}. First, the referee performs a tomographically complete measurement on the composite system defined by the collection of output systems of the given strategy $\mathcal{E}_T$, and implements a preparation drawn at random from a tomographically complete set of preparations on the composite system defined by the collection of all the systems which are inputs of $\mathcal{E}_T$. In fact, it suffices for the referee to perform tomographically complete measurements and preparations {\em independently} on every input and output, as depicted in the dashed box in Fig.~\ref{TCgames}. We will refer to this process as the application of an {\bf analyzer} $Z$ to the given strategy. That is, an analyzer $Z$ is a linear and tomographically complete map from strategies to correlations of the form $P_{Z\circ \mathcal{E}_T}(ab|xy) := Z\circ \mathcal{E}_T$, with $a,b$ labeling the values of the classical outputs of $Z$ and $x,y$ the values of the classical inputs of $Z$. Second, the referee uses a fixed payoff function $F_{\rm payoff}(abxy)$ to assign a real number $G_{\! T}(\mathcal{E}_T)= \sum_{abxy} F_{\rm payoff}(abxy) P_{Z\circ \mathcal{E}_T}(ab|xy)$ to strategy $\mathcal{E}_T$. This point of view on games is useful for the proof of Theorem~\ref{subsumestrat}, and it is also useful for establishing a physical picture of games of each type. For example, in a Bell experiment, one applies LOSR operations (or often just LO operations) in order to convert one's quantum state to a conditional probability distribution, and the payoff function in the game constitutes the Bell inequality that one tests. As a second example, see Ref.~\cite{lipkabartosik2019operational} for a study of various teleportation games. As noted therein, there are interesting teleportation tasks (which admit of a simple operational interpretation) beyond merely attempting to establish an identity channel between two parties using shared entanglement. However, in the rest of this paper it will be simpler to view a game in the abstract (simply as a linear map from resources of a given type to the real numbers), and we will leave the further investigation of such games (beyond the cases which have already been studied) to future work. \begin{figure} \caption{ A depiction of the concrete two-step process by which a referee can implement a game (of any type).
The referee first applies a tomographically complete analyzer $Z$, and then assigns a real number to the resulting statistics using a payoff function $F_{\rm payoff}$.} \label{TCgames} \end{figure} \subsection{Performance of resources of arbitrary type with respect to a game} By definition, every $T$-game assigns a real number to every resource of type $T$. At this stage, the number need not be related in any way to the nonclassicality of resources; e.g., the score need not behave monotonically under LOSR. Nonetheless, one can use any $T$-game to learn about the LOSR ordering of resources of type $T$; indeed, the full set of $T$-games perfectly characterizes this preorder. (In case this is not completely obvious, it will follow as a corollary of our Theorem~\ref{proporder}.) Furthermore, one can use a $T$-game to (partially) quantify the nonclassicality of a resource {\em of arbitrary type}, not only of type $T$. For example, nonlocal games and semiquantum games have been used to probe the nonclassicality of quantum states~\cite{sq,Shahandeh2017,Supic2017,Rosset2018a}. This is because---although a $T$-game does not {\em directly} assign a score to resources of any type other than $T$---it can quantify the performance of a resource of any type by a maximization over all $\tau \in {\rm LOSR}$ which map the given resource to one of type $T$. That is: \begin{defn} The (optimal) {\em performance} of a resource $R$ of arbitrary type with respect to a game $G_{\! T}$ of arbitrary type $T$ is given by \begin{equation} \omega_{G_{\! T}}(R) = \max_{\tau: \mathbf{Type}[R] \! \to \! T} G_{\! T}(\tau \circ R). \end{equation} \end{defn} Clearly, $\omega_{G_{\! T}}(R)$ is a measure of how well an arbitrary resource $R$ can perform at the $T$-game $G_{\! T}$. Because of the maximization over LOSR operations, $\omega_{G_{\! T}}(R)$ is by construction a monotone with respect to LOSR. Constructions of this sort are often termed {\em yield monotones}~\cite{Gonda2019}. We discuss monotones further in Ref.~\cite{rosset2019characterizing}, as monotones are useful tools for obtaining partial information about the preorder over resources and for relating the preorder to practical tasks. The set $\mathcal{G}_{\! T}$ of all games of a given type $T$ defines a preorder over all resources of all types, where resource $R$ is above $R'$ in the order if for every $T$-game, $R$ can achieve a value at least as high as $R'$ can. We denote this third ordering relation $\succeq_{\mathcal{G}_{\! T}}$: \begin{defn} \label{orderwrtgame} For resources $R$ and $R'$ of arbitrary (and possibly different) types, we say that $R \succeq_{\mathcal{G}_{\! T}} R'$ iff $\omega_{G_{\! T}}(R) \ge \omega_{G_{\! T}}(R')$ for every $G_{\! T} \in \mathcal{G}_{\! T}$. \end{defn} Next, we prove that if one resource outperforms a second at all possible games of a given type, then it can also generate {\em any specific strategy} of that type which the second resource can generate. This is a nontrivial result, since it need not be the case that the first resource is higher in the LOSR order. \begin{thm} \label{subsumestrat} For resources $R$ and $R'$ of arbitrary (and possibly different) types and any resource $\mathcal{E}_{T}$ of type $T$, $R \succeq_{\mathcal{G}_{\! T}} R'$ iff $R' \LOSRconv \mathcal{E}_{T} \implies R \LOSRconv \mathcal{E}_{T}$. That is, any strategy $\mathcal{E}_{T}$ for games of type $T$ that can be freely generated from $R'$ can {\em also} be freely generated from $R$.
\end{thm} \begin{proof} If $R' \LOSRconv \mathcal{E}_{T} \implies R \LOSRconv \mathcal{E}_{T}$, then $R$ can generate any strategy for any given game $G_{\! T}$ that $R'$ can, and so always performs at least as well as $R'$ at $T$-games, and so $R \succeq_{\mathcal{G}_{\! T}} R'$. To prove the converse, consider a set of games of type $T$ defined by ranging over all possible payoff functions $F_{\rm payoff}(abxy)$ for some fixed analyzer $Z$---that is, a specific tomographically complete measurement for each output system of the resource and a specific tomographically complete set of states for each input system of the resource. Assume that $R' \LOSRconv \mathcal{E}_{T}$ for some strategy $\mathcal{E}_{T}$, and define $P_{Z\circ \mathcal{E}_T}(ab|xy) = Z \circ \mathcal{E}_{T}$. For $R \succeq_{\mathcal{G}_{\! T}} R'$, it must be that $R \LOSRconv \mathcal{E}_{T}'$ for at least one strategy $\mathcal{E}_{T}'$ satisfying $P_{Z\circ \mathcal{E}'_T}(ab|xy) = Z \circ \mathcal{E}_{T}'$. If this were {\em not} the case, then the convex set $S(R)$ of all correlations which $R$ can generate in this scenario, $S(R):=\left\{P_{Z \circ \tau \circ R}(ab|xy)=Z \circ \tau \circ R \right\}_{\tau \in {\rm LOSR}}$, would not contain $P_{Z\circ \mathcal{E}_T}(ab|xy)$, and the hyperplane which separated $P_{Z\circ \mathcal{E}_T}(ab|xy)$ from $S(R)$ would constitute a payoff function $F_{\rm payoff}$ for which $R'$ outperformed $R$, which would be in contradiction with the claim that $R \succeq_{\mathcal{G}_{\! T}} R'$. By tomographic completeness, the preimage of every correlation under $Z$ contains at most one strategy. Hence, if two strategies map to the same correlation, then they must be the same strategy, and so it must be that $\mathcal{E}_{T}=\mathcal{E}_{T}'$ in the argument above. That is, we have shown that if $R \succeq_{\mathcal{G}_{\! T}} R'$ and $R' \LOSRconv \mathcal{E}_{T}$, then $R \LOSRconv \mathcal{E}_{T}$. \end{proof} \subsection{Implications from the type of a resource to its performance at games} \label{implictypetoperf} We now prove that games of a higher type perfectly characterize the LOSR nonclassicality of resources of a lower type. \begin{thm} \label{proporder} If $T \succeq_{\rm type} T'$, then for resources $R_1,R_2$ of type $T'$, $R_1 \succeq_{\rm LOSR} R_2$ iff $R_1 \succeq_{\mathcal{G}_{\! T}} R_2$. Equivalently: if type $T$ is above type $T'$, then for resources of type $T'$, the orders defined by $\succeq_{\rm LOSR}$ and $\succeq_{\mathcal{G}_{\! T}}$ are identical. \end{thm} \begin{proof} Consider the set $\mathcal{G}_{\! T}$ of all games of type $T$ and two resources $R_1$ and $R_2$, both of type $T'$, where $T \succeq_{\rm type} T'$. Clearly $R_1 \succeq_{\rm LOSR} R_2$ implies $R_1 \succeq_{\mathcal{G}_{\! T}} R_2$, since $R_1 \succeq_{\rm LOSR} R_2$ implies that $R_1$ can be used to freely generate $R_2$ and hence to generate any strategy which can be generated using $R_2$. Next, we prove that $R_1 \succeq_{\mathcal{G}_{\! T}} R_2$ implies $R_1 \succeq_{\rm LOSR} R_2$. By assumption, $T \succeq_{\rm type} T'$, and so for $R_2$ of type $T'$, there exists a strategy $\mathcal{E}_{ T}$ for games of type $T$ such that $R_2 \LOSRinterconv \mathcal{E}_{ T}$. Since $R_1 \succeq_{\mathcal{G}_{\! T}} R_2$, Theorem~\ref{subsumestrat} tells us that $R_2 \LOSRconv \mathcal{E}_{ T}$ implies $R_1 \LOSRconv \mathcal{E}_{ T}$, and hence $R_1 \LOSRconv R_2$ by transitivity.
Hence we have proven that the two orderings are the same; that is, $R_1 \succeq_{\rm LOSR}R_2$ if and only if $R_1 \succeq_{\mathcal{G}_{\! T}}R_2$. \end{proof} A consequence of this result is that if $T \succeq_{\rm type} T'$, then every nonfree resource of type $T'$ is useful for some $T$-game. Two special cases of this fact that were previously proved are that all entangled states are useful for semiquantum games and that all entangled states are useful for teleportation. If one views the encoding of one type into another type as an embedding of the partial order over equivalence classes of resources of the lower type into the partial order of the higher type, then this result can be seen as a consequence of the fact that games of type $T$ are sufficient for characterizing the partial order over resources of type $T$. A corollary of Theorem~\ref{squniv} and Theorem~\ref{proporder} is that semiquantum games fully characterize the LOSR ordering among all resources of arbitrary type. \begin{cor} \label{Buscgen} For any resources $R$ and $R'$ (which may be of arbitrary and different types), $R \succeq_{\rm LOSR} R'$ if and only if $R \succeq_{\mathcal{G}_{\rm SQ}}R'$. \end{cor} \noindent This generalizes the main result of Ref.~\cite{sq} from quantum states to resources {\em of arbitrary type}. Since semiquantum games characterize the LOSR nonclassicality of arbitrary resources, and since referees in semiquantum games do not require any well-characterized quantum measurement devices~\cite{steer}, it follows that the nonclassicality of any resource of any type can be characterized in a measurement-device-independent manner. Note that for such tests to be practically useful, it must be possible to convert an {\em unknown} resource into a semiquantum channel in the same LOSR equivalence class. This is indeed possible, because for all resources of a given type, there is a {\em single} transformation which implements the conversion, namely, the Bell measurement in Fig.~\ref{SQandBack}(a). Critically, this transformation is not a function of the resource to be converted. \section{Extending results from the literature } \label{extending} We now give further applications of our results, in particular showing how our framework extends a number of seminal results from the literature. \subsection{Applying semiquantum games to perfectly characterize arbitrary quantum channels} Buscemi proved in Ref.~\cite{sq} that the order over quantum states with respect to LOSR is equivalent to the order over quantum states defined by their performance with respect to semiquantum games. This result is an instance of our Corollary~\ref{Buscgen} where $R$ and $R'$ are both quantum states. For concreteness, we now briefly reiterate the argument in this specific context. The existence of the invertible transformation in Fig.~\ref{SQandBack} implies that $\mathsf{II} \! \! \to \! \! \mathsf{QQ}$ is below $\mathsf{QQ} \! \! \to \! \! \mathsf{CC}$ in the order on global types, and hence that for every quantum state $\sigma$ there exists a semiquantum channel $\mathcal{E}_{\rm SQ}$ such that $\sigma \LOSRinterconv \mathcal{E}_{\rm SQ}$. For this $\sigma$ and $\mathcal{E}_{\rm SQ}$ such that $\sigma \LOSRconv \mathcal{E}_{\rm SQ}$, Theorem~\ref{subsumestrat} states that if $\rho \succeq_{\mathcal{G}_{\rm SQ}} \sigma$, then $\rho \LOSRconv \mathcal{E}_{\rm SQ}$. Since $\mathcal{E}_{\rm SQ} \LOSRconv \sigma$, transitivity gives that $\rho \succeq_{\mathcal{G}_{\rm SQ}} \sigma \implies \rho \LOSRconv \sigma$.
Since the converse implication is self-evident, one sees that the LOSR order over quantum states is equivalent to the order over quantum states defined by their performance with respect to semiquantum games. This proof is inspired by the original argument in Ref.~\cite{sq}, but our framework makes the proof shorter and more intuitive. As we saw in Corollary~\ref{Buscgen}, it also allowed us to generalize the result from quantum states to arbitrary resources. As stated above, this implies that the LOSR nonclassicality of {\em any} resource can be witnessed and quantified in a measurement-device-independent~\cite{Branciard2013,steer} manner. \subsection{Applying measurement-device-independent steering games to perfectly characterize assemblages } Cavalcanti, Hall, and Wiseman proved in Ref.~\cite{steer} that the LOSR order over quantum states defined by subset inclusion over the assemblages that each can generate via LOSR is equivalent to the order over quantum states defined by their performance with respect to steering games. This result is a special case of our Theorem~\ref{subsumestrat}, where $R$ and $R'$ are quantum states and $\mathcal{E}_T$ is a steering assemblage: \begin{cor} $\rho \succeq_{\mathcal{G}_{\rm steer}} \sigma$ iff $\sigma \LOSRconv \mathcal{E}_{\rm steer} \implies \rho \LOSRconv \mathcal{E}_{\rm steer}$. \end{cor} \noindent Our Theorem~\ref{subsumestrat} extends this result to arbitrary resource types and games. Additionally, the existence of the invertible transformation in Fig.~\ref{SQandBack} immediately implies that \begin{equation} \mathsf{CQ} \! \! \to \! \! \mathsf{CC} \;\succeq_{\rm type}\; \mathsf{CI} \! \! \to \! \! \mathsf{CQ}. \end{equation} \noindent In other words, $\mathsf{CI} \! \! \to \! \! \mathsf{CQ}$ is below $\mathsf{CQ} \! \! \to \! \! \mathsf{CC}$ in the order on global types. Our Theorem~\ref{proporder} then gives a new result, which is the direct analogue of the result in Ref.~\cite{sq} in this new context: the LOSR order over assemblages is equivalent to the order over assemblages defined by performance relative to all measurement-device-independent steering games. Explicitly: the fact (proven in Section~\ref{deterencode}) that $T_{\rm MDI} \succeq_{\rm type} T_{\rm steer}$ implies that \begin{cor} For two assemblages $\mathcal{E}_{\rm steer}$ and $\mathcal{E}_{\rm steer}'$, one has $\mathcal{E}_{\rm steer} \succeq_{\rm LOSR} \mathcal{E}_{\rm steer}'$ iff $\mathcal{E}_{\rm steer} \succeq_{\mathcal{G}_{\rm MDI}} \mathcal{E}_{\rm steer}'$. \end{cor} \noindent Indeed, this theorem holds not just for assemblages, but for any resource type which is lower in the global order than measurement-device-independent steering channels, including channel steering assemblages and Bob-with-input assemblages. \subsection{Applying teleportation games to perfectly characterize quantum states} \label{app:tel} Cavalcanti, Skrzypczyk, and $\check{\rm S}$upi\'{c} proved in Ref.~\cite{telep} that the nonclassicality of every entangled state can be witnessed by some teleportation experiment. We apply arguments analogous to those of the last two subsections to strengthen their results, most notably in Corollary~\ref{quantup}, which provides the quantitative analogue of their (qualitative) main result. First, the existence of the invertible transformation in Fig.~\ref{SQandBack} again implies that \begin{equation} \mathsf{QI} \! \! \to \! \! \mathsf{CQ} \;\succeq_{\rm type}\; \mathsf{II} \! \! \to \! \! \mathsf{QQ}. \end{equation} \noindent In other words, $\mathsf{II} \! \! \to \! \! \mathsf{QQ}$ is below $\mathsf{QI} \! \! \to \! \! \mathsf{CQ}$ in the order on global types.
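Each of the embeddings invoked in this section rests on the same invertible gadget of Fig.~\ref{SQandBack}: a Bell measurement trades a quantum output for a classical output plus a fresh quantum input, and feeding half of a maximally entangled state into that input and applying an outcome-conditioned Pauli correction to the other half undoes it. As a numerical sanity check of the underlying teleportation identity, the following short numpy sketch (our own illustration for a single qubit, not part of the paper's argument) verifies that Bell projection plus correction returns the original state exactly.
\begin{verbatim}
import numpy as np

# Pauli corrections indexed by the four Bell-measurement outcomes.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
corrections = [I2, X, Z, X @ Z]

# Bell basis vectors (I (x) P_k)|Phi+> on the pair (old output, new input).
phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
bell = [np.kron(I2, P) @ phi_plus for P in corrections]

# A random pure qubit state standing in for the quantum output to be converted.
rng = np.random.default_rng(0)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

# Total state: output-to-convert (A) (x) maximally entangled pair (B, C);
# B is fed into the Bell measurement, C will carry the corrected output.
state = np.kron(psi, phi_plus).reshape(2, 2, 2)

recovered = np.zeros((2, 2), dtype=complex)
for P, b in zip(corrections, bell):
    # Unnormalized post-measurement state on C for this Bell outcome.
    amp = np.einsum('ab,abc->c', b.reshape(2, 2).conj(), state)
    recovered += P @ np.outer(amp, amp.conj()) @ P.conj().T

# Summing the corrected (unnormalized) outcomes reproduces the input exactly,
# which is why the Bell-measurement conversion has a free left inverse.
assert np.allclose(recovered, np.outer(psi, psi.conj()))
print("Bell measurement + Pauli correction recovers the input state.")
\end{verbatim}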
Our Theorem~\ref{proporder} again yields a result analogous to that in Ref.~\cite{sq}, namely, that the LOSR order over entangled states is equivalent to the order over entangled states with respect to performance at teleportation games\footnote{It is worth noting that there are subtleties in the relationship between teleportation games (as defined here, and see also Ref.~\cite{lipkabartosik2019operational}) and the usual conception of teleportation experiments (as attempts to establish an identity channel between two parties using shared entanglement). For example, note that any nonfree assemblage constitutes a special instance of a teleportage which is useless for generating a coherent quantum channel between two parties, and yet which {\em is} useful for some teleportation game.}. Explicitly: denoting the type of quantum states by $T_{\rho}$, the fact that $T_{\rm tel} \succeq_{\rm type} T_{\rho}$ implies that \begin{cor} \label{quantup} $\rho \succeq_{\rm LOSR} \sigma$ iff $\rho \succeq_{\mathcal{G}_{\rm tel}} \sigma$. \end{cor} \noindent Indeed, this theorem holds not just for quantum states, but for any resource type which is lower in the global order than teleportages, including, for example, steering assemblages. Our Theorem~\ref{subsumestrat} can also be applied to teleportation games, yielding a result analogous to that in Ref.~\cite{steer}. That is, any resource which outperforms a second resource at all teleportation games can generate any specific strategy that the second can generate: \begin{cor} $R \succeq_{\mathcal{G}_{\rm tel}} R'$ iff $R' \LOSRconv \mathcal{E}_{\rm tel} \implies R \LOSRconv \mathcal{E}_{\rm tel}$. \end{cor} \section{Open questions} Our framework suggests a great deal of open questions for future study, two important examples of which were highlighted above. Ideally, one would have type-independent methods for characterizing nonclassicality in practice. We begin developing such a toolbox in Ref.~\cite{rosset2019characterizing}. For each of the fifteen bipartite global types mentioned above, it is interesting to study the basic features of the (type-specific) LOSR resource theory. While this has been done for boxes, little attention has been given to this problem in other cases, even for quantum states. Such features include the geometry of the free set of resources, the LOSR preorder, useful monotones and witnesses, and so on. Ultimately, we advocate not just for these type-specific investigations, but for research in the type-independent context. Part of the project of characterizing the preorder will be to characterize the sense in which there exist inequivalent kinds of nonclassicality. At the top of the preorder, the situation for bipartite LOCC-entanglement is quite simple: there is a single maximally entangled state of a given dimension, from which all other states can be obtained by LOCC transformations. This is no longer the case for multipartite LOCC-entanglement~\cite{Dur2000twoways}, nor for LOSR-entanglement even in the bipartite case~\cite{LOSRvsLOCCentang}. For resources beyond quantum states and for more parties, the situation gets even more complex. As an example, our work implies that there exist semiquantum channels in the equivalence class of Werner states, and semiquantum channels in the equivalence class of nonlocal boxes, and that these semiquantum channels exhibit inequivalent forms of nonclassicality. \begin{customopq}{3} What are the key features of the type-independent preorder over LOSR resources? 
What inequivalent forms of nonclassicality do these resources exhibit? \end{customopq} If one was interested only in {\em witnessing} nonclassicality as opposed to {\em quantifying} it, one could consider a preorder over types defined by a less restrictive condition, where type $T$ is above type $T'$ if every nonfree resource of type $T'$ could freely generate at least one nonfree resource of type $T$. All the known results in Table~\ref{typeorderings} hold for this definition as well; however, the two definitions might yield different answers for the open questions that remain. One could also consider modifying our Definition~\ref{typeorder} such that local operations were taken to be free rather than local operations and shared randomness. Note that the operations required in the proof of Theorem~\ref{SQandBack} do not make use of any shared randomness, and so the theorem would still hold. In fact, one can readily verify that all the orderings in Figure~\ref{typeorderings} would continue to hold. However, Theorem~\ref{subsumestrat} requires convexity (through its use of the separating hyperplane theorem), as do Theorem~\ref{proporder} and Corollary~\ref{Buscgen} (since they rely on Theorem~\ref{subsumestrat}). If one were to modify Definition~\ref{typeorder} so that local operations and classical communication were free, the situation is less clear, as one would presumably need to widen the scope of applicability to signaling resources. \begin{customopq}{4} What can be learned by considering a type-independent framework of LOCC nonclassicality? \end{customopq} This would be the relevant resource theory, for example, for distributed parties who share quantum memories and the ability to communicate classically. Our framework has focused on the divide between classical and quantum resources. One can also study the divide between quantum and post-quantum resources, as we do in Ref.~\cite{postquantumschmid}. A final open question regards the relationship between our work and self-testing~\cite{Mayers2004,Montanaro2016,Supic2019a}. In self-testing, correlations (e.g. of type $\mathsf{CC} \! \! \to \! \! \mathsf{CC}$) certify the existence of an underlying valuable quantum resource (say $\mathsf{II} \! \! \to \! \! \mathsf{QQ}$). For example, the quantum correlations violating the CHSH inequality~\cite{CHSH} maximally are a signature of an underlying quantum state that is at least as nonclassical as a singlet state (see~\cite{Supic2019a} for a pedagogical derivation). Recently, the self-testing line of research has expanded beyond self-testing of states, and now has also been applied to steering assemblages~\cite{Supic2016a,Gheorghiu2015}, entangled measurements~\cite{Bancal2018a,Renou2018}, prepare-and-measure scenarios~\cite{Tavakoli2018c}, and quantum gates~\cite{Sekatski2018a}. However, the correlations that are a signature of the given resource cannot be converted back to that quantum state, and so are not in the same LOSR equivalence class. Rather, they merely allow one to {\em infer the prior existence} of the self-tested resource. As such, the precise relationship with our work is left for exploration. In the present work, we did not consider the Hilbert space dimensions as part of the resource type. One could consider a more fine-grained study of conversions between resources of different sizes. For example, the notion of nonclassical dimension for bipartite quantum states is encoded by the Schmidt rank~\cite{Sperling2011}. 
We leave as an open question the generalization of this notion to other resource types; note that Ref.~\cite{rosset2019characterizing} includes a discussion of Hilbert space dimensions solely for the purposes of implementing numerical algorithms. As a final remark, we recall that the semiquantum games introduced in~\cite{Buscemi2012LOSR} to test bipartite states in a measurement-device-independent fashion~\cite{Branciard2013} can be transformed into guessing games suitable for testing quantum channels and quantum memories, again in a measurement-device-independent fashion~\cite{Buscemi-ProbInfTrans-2016,Rosset-Buscemi-Liang-PRX}. More generally, such single-party guessing games have found application in the context of measurement resources~\cite{Buscemi-ProbInfTrans-2016,Skr-Linden-2019,Skr-Supic-Cavalcanti-2019} and general convex resource theories~\cite{Takagi-etal-2019,Uola-etal-2019,uola2019quantification,Takagi-Regula-PRX}. We leave further investigation of the relations between these works and ours for future research. \section{Conclusions} We have presented a resource-theoretic framework that unifies the various types of nonclassical resources which arise when multiple parties have access to classical common causes but no cause-effect relations. This type-independent resource theory allows us to compare the LOSR nonclassicality of resources of arbitrary types and to quantify them using games of arbitrary types. We then derived several theorems which ultimately can be used to simplify the methods by which one characterizes the nonclassicality of resources. Our theorems additionally generalize, unify, and simplify the seminal results of Refs.~\cite{sq,steer,telep}, and our framework leads to a number of exciting questions for future work. \section{Acknowledgments} The authors acknowledge useful discussions with T.C.~Fraser, T.~Gonda, R.W.~Spekkens, and E.~Wolfe and helpful feedback on the manuscript from M.~Hoban, M.~Hall and A.B.~Sainz. D.S. is supported by a Vanier Canada Graduate Scholarship. F.B. is grateful for the hospitality of Perimeter Institute where part of this work was carried out and acknowledges partial support from the Japan Society for the Promotion of Science (JSPS) KAKENHI, Grant No.20K03746. Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Colleges and Universities. This publication was made possible through the support of a grant from the John Templeton Foundation. The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of the John Templeton Foundation. \setlength{\bibsep}{2pt plus 1pt minus 2pt} \nocite{apsrev41Control} \onecolumngrid \begin{center} \line(1,0){250} \end{center} \twocolumngrid \onecolumngrid \end{document}
\begin{document} \title{Quantum computation capability verification protocol for NISQ devices with dihedral coset problem} \author{Ruge Lin} \affiliation{Technology Innovation Institute, Abu Dhabi, UAE.} \affiliation{Departament de F\'{i}sica Qu\`{a}ntica i Astrof\'{i}sica and Institut de Ci\`{e}ncies del Cosmos (ICCUB), Universitat de Barcelona, Mart\'{i} i Franqu\`{e}s 1, 08028 Barcelona, Spain.} \author{Weiqiang Wen} \affiliation{LTCL, Telecom Paris, Institut Polytechnique de Paris, France.} \begin{abstract} In this article, we propose an interactive protocol for one party (the verifier) holding a quantum computer to verify the quantum computation power of the device of another party (the prover) via a one-way quantum channel. This protocol is referred to as the dihedral coset problem (DCP) challenge. The verifier needs to prepare quantum states encoding secrets (DCP samples) and send them to the prover, who is then tasked with recovering those secrets with a certain accuracy. Numerical simulation demonstrates that this accuracy is sensitive to errors in quantum hardware. Additionally, the DCP challenge serves as a benchmarking protocol for the locally fully connected (LFC) quantum architecture and is intended to be performed on current and near-future quantum hardware. We conduct a $4$-qubit experiment on one of the IBM Q devices. \end{abstract} \maketitle \section{Introduction} In 2019, Google succeeded in reaching quantum supremacy with their Sycamore processor \cite{Supremacy}. However, a fully functioning quantum computer remains a long way off. At this moment, only noisy intermediate-scale quantum (NISQ) \cite{NISQ} devices are available, and a method is needed to verify their computing power. Current approaches such as random circuit sampling and cross-entropy benchmarking \cite{RCS,CEB} primarily test the quantumness of a device rather than its computation capability. It is desirable to have a performance test for quantum hardware whose outcome can be proven to a verifier and cannot be falsified. Recent works \cite{Regev09,Brakerski0,Brakerski1,Zhu} employ a classical verifier. In particular, they rely on the hardness of the learning with errors (LWE) problem and need thousands of qubits, which is not feasible on present quantum hardware. Such a test should be designed around two principles: it should be dynamic enough to adapt to various processors, and friendly enough to NISQ devices that it can be directly applied in an experiment. In order to be dynamic, we focus on the LFC quantum architecture. LFC means that the chip consists of $m$ unit cells of $n+1$ qubits, with $m \geq 2$ and $n \geq 1$. Within each cell, the $n+1$ qubits are fully connected; each cell has a leader qubit, and the $m$ leader qubits are fully connected to each other. LFC shares many similarities with the Chimera and Pegasus topologies of the D-Wave quantum annealing processors \cite{Dwave}. Notice that in reality, hardware for gate-based quantum computing rarely follows this geometry, but $SWAP$ gates can be applied to emulate it. A test based on the LFC structure can cover any quantum chip whose number of qubits is at least $4$ and not prime. Moreover, a quantum device should pass a test based on the LFC architecture to demonstrate its potential for fully connected circuits, such as Shor's algorithm \cite{Shor} and Grover's algorithm \cite{Grover}. Furthermore, to be applicable to NISQ devices, the test should contain only shallow circuits and should not rely on quantum memory.
Nowadays, classical simulation programs for quantum circuits such as Cirq \cite{Cirq}, Qiskit \cite{Qiskit} and Qibo \cite{Qibo,QiboGithub} can mimic noisy or noiseless quantum devices for up to dozens of qubits on classical hardware. It is hard to distinguish between a quantum device and a simulator around this scale. Therefore, we can consider introducing a quantum verifier. In previous works \cite{fitzsimons2018post,takeuchi2021divide}, the quantum verifier(s) are asked to witness particular states generated by the prover. However, in \cite{fitzsimons2018post}, the target state is too complicated for NISQ devices. Also, the method provided in \cite{takeuchi2021divide} is designed for sparse quantum chips with certain geometry restrictions. This article presents the DCP challenge, a verification protocol of quantum computation capability, requiring a quantum verifier and a one-way quantum channel from the verifier to the prover. It is an interactive protocol for Alice, the verifier holding an $(n+1)$-qubit quantum device, to test the quantum computing power of Bob, the prover holding an $m\times\left(n+1\right)$-qubit device, which runs on the LFC architecture. In contrast to the method in \cite{takeuchi2021divide}, where the verifier needs more than half of the qubits of the prover, the DCP challenge only needs a fraction, implying a quantum channel with fewer qubits. In particular, Alice needs to provide simple quantum states (DCP samples) as a superposition of two possibilities, which can be easily verified by measurement, and send them to Bob, who solves the problem essentially using the quantum Fourier transform on $n$ qubits. The advantage of the prover being the receiver of the quantum states is that the measurement error is also tested. We have also performed simulations of our protocol. On the one hand, we show that in the error-free model, the quantum computing capability of the prover can be successfully verified with overwhelming probability. On the other hand, in the noisy setting, our protocol is very sensitive to the presence of errors, while remaining robust to sufficiently small errors. This property also makes the DCP challenge a promising benchmarking protocol when preparing samples and solving the problem are performed by the same quantum device. \section{Preliminary} \subsection{Dihedral coset problem}\label{ssec:dcp} The dihedral coset problem has been a fundamental problem in studying the quantum hardness of the hidden subgroup problem over the (non-abelian) dihedral group in the last two decades \cite{MEPH,GSVV01,Regev02,FIMSS03,HRTS00,RoBe98}. Informally, it asks to recover the hidden subgroup of a dihedral group given random cosets of the hidden subgroup as superpositions. A dihedral group is generated by reflections and rotations of an $E$-gon (regular polygon with $E$ edges). The first part of the superposition encodes the reflection; from now on, we call it the reflection qubit. The second part encodes the rotation. Normalization is omitted for every equation in this article. \begin{definition}[Dihedral coset problem, DCP]\label{def:DCP} The input of the DCP$_{E}^{\ell}$ with modulus $E$ consists of $\ell$ samples.
Each sample is a quantum state of the form \begin{equation} \ket{\psi_{x,s}}=\ket{0}\ket{x} \ + \ \ket{1}\ket{\left(x+s\right) \bmod E}, \end{equation} stored in $1+\lceil\log_2 E\rceil$ qubits, where $x\in\{0,1,...,E-1\}$ is randomly and uniformly selected for each sample and $s\in\{0,1,...,E-1\}$ is fixed throughout all the states. The task is to output the secret $s$. \end{definition} The problem is hypothesized to be unsolvable by direct measurement in the computational basis, since one cannot obtain $x$ and $\left(x+s\right) \bmod E$ at the same time; hence the best-known classical strategy is a random guess. The DCP is known to be solvable in sub-exponential time when given a sub-exponential number of samples \cite{Kuperberg05,Regev04,Kuperberg13}. These solving algorithms were designed with different optimization targets. So far, Kuperberg's algorithm \cite{Kuperberg05} achieves the smallest running time $2^{O(\sqrt{\log E})}$ but requires $2^{O(\sqrt{\log E})}$ space, while Regev's variant \cite{Regev04} requires only polynomial (in $\log E$) space at the cost of a slightly worse running time of $2^{O(\sqrt{\log E\log\log E})}$. Both start by running the quantum Fourier transform on the given DCP samples (excluding the reflection qubit) and measuring them, a step which naturally possesses an LFC structure. The main drawback of these two algorithms is that some quantum states need to be maintained throughout the whole process. In this work, given the constraints of current quantum computing devices (e.g., NISQ), the circuit depth and quantum memory required by both Kuperberg's and Regev's algorithms cannot be satisfied. Therefore, we consider a slightly different variant of the DCP problem and algorithm, minimizing circuit depth and limiting the number of quantum registers. Before introducing them, we first recall the quantum Fourier transform. \begin{definition}[Quantum Fourier transform, QFT]\label{def:QFT} The quantum Fourier transform on the computational basis $\ket{0},...,\ket{N-1}$ of an $n$-qubit state is defined to be a linear operator with the following action on the basis states, \begin{equation} \ket{j} \mapsto \sum_{k=0}^{N-1} \omega_N^{jk} \ket{k}, \end{equation} where $\omega_N = \textrm{e}^{\frac{2\pi i}{N}}$. \end{definition} The evaluation time of the QFT is $\mathcal{O}\left(n^2\right)$ \cite[Section~5.1]{NiCh00}. \subsection{New variant} Currently, NISQ devices have limited registers, low coherence time, low relaxation time, and imperfect gate implementation. They can only efficiently perform shallow circuits. Therefore, we slightly modify the DCP to fit these constraints. First, we set $E=N=2^{n}$. Then, instead of asking for the secret $s$, we ask for the parity of $s$, which represents the same order of complexity. FIG. \ref{fig: s_even} and FIG. \ref{fig: s_odd} are two example circuits of this new variant. Alice can prepare the state $\ket{\psi_{x,s}}$ with only $H$, $X$ and $CNOT$ (all of which are Clifford gates), using $\mathcal{O}\left(n\right)$ gates. She can verify the accuracy of $\ket{\psi_{x,s}}$ by measuring it. Notice that for the $N^2$ combinations of $x$ and $s$ there are $N^2$ combinations of $X$ and $CNOT$ gates; however, there is no direct relation between $x$, $s$ and each individual gate. To solve the parity of $s$ within $m$ cells of $n+1$ qubits, using the shallowest circuit currently known, we use a highly simplified version of Kuperberg's algorithm \cite{Kuperberg05}, and name it \emph{ParitySolve}.
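As a minimal illustration of a DCP sample in the case $E=N=2^{n}$, the following NumPy sketch (written for this presentation, with a big-endian qubit ordering chosen purely for convenience; it is not the circuit-level preparation described above) builds the state vector directly and applies the QFT of the definition above to the $n$ data qubits.
\begin{verbatim}
import numpy as np

def dcp_sample(n, x, s):
    # State vector of |0>|x> + |1>|(x+s) mod N> on n+1 qubits, normalized
    # here for numerical convenience. Basis states are indexed as b*N + k,
    # where b is the reflection qubit and k the data register (an ordering
    # chosen for this sketch only).
    N = 2 ** n
    psi = np.zeros(2 * N, dtype=complex)
    psi[x % N] = 1 / np.sqrt(2)              # |0>|x>
    psi[N + (x + s) % N] = 1 / np.sqrt(2)    # |1>|(x+s) mod N>
    return psi

def qft_on_data_register(psi, n):
    # Apply |j> -> (1/sqrt(N)) sum_k w^{jk} |k> to the n data qubits only.
    N = 2 ** n
    w = np.exp(2j * np.pi / N)
    F = np.array([[w ** (j * k) for k in range(N)]
                  for j in range(N)]) / np.sqrt(N)
    return (psi.reshape(2, N) @ F).reshape(2 * N)

# Check that the amplitude of |b>|k> after the QFT is w^{k(x + b*s)}/sqrt(2N).
n, x, s = 3, 5, 2
N = 2 ** n
w = np.exp(2j * np.pi / N)
phi = qft_on_data_register(dcp_sample(n, x, s), n)
expected = np.array([w ** (k * (x + b * s)) for b in (0, 1)
                     for k in range(N)]) / np.sqrt(2 * N)
assert np.allclose(phi, expected)
\end{verbatim}
The final assertion confirms the amplitude structure $\omega_N^{k(x+bs)}/\sqrt{2N}$, which is exactly what \emph{ParitySolve} exploits below.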
Bob performs the QFT on the last $n$ qubits and measures them. Here we highlight that he always needs more computation resources and operation steps than Alice; otherwise, it would not be a challenge. After the QFT is applied on the last $n$ qubits of the DCP sample, the total state becomes \begin{equation} \sum_{k=0}^{N-1}\left(\ket{0}+\omega^{ks}_{N}\ket{1}\right)\ket{k}, \hspace{0.5cm}\omega_N=e^{\frac{2\pi i}{N}} \end{equation} (a phase $\omega_N^{kx}$ on the $\ket{k}$ branch is omitted, as it becomes irrelevant once the last $n$ qubits are measured). Bob then checks the measurement outcomes after the QFT. He needs a pair of outcomes in which the most significant qubit differs and the remaining qubits are identical; we call such a pair a collision. If he does not have one, he resets all registers to $\ket{0}$ and starts another \emph{ParitySolve}. After the measurement, the reflection qubit becomes \begin{equation} \ket{\phi_{\hat{x},s}}=\ket{0}+\omega_N^{\hat{x}s}\ket{1}, \end{equation} for some uniformly distributed random measurement outcome $\hat{x}\in\{0,1,...,N-1\}$. Assume that Bob has a collision $\hat{x}_1$ and $\hat{x}_2$; then the tensor product of $\ket{\phi_{\hat{x}_1,s}}$ and $\ket{\phi_{\hat{x}_2,s}}$ gives \begin{equation} \ket{0,0}+\omega_N^{\hat{x}_{1}s}\ket{1,0}+\omega_N^{\hat{x}_{2}s}\ket{0,1}+\omega_N^{\left(\hat{x}_{1}+\hat{x}_{2}\right)s}\ket{1,1}. \end{equation} Bob performs a $CNOT$ gate on these two reflection qubits. The state becomes \begin{equation} \ket{0,0}+\omega_N^{\hat{x}_{1}s}\ket{1,1}+\omega_N^{\hat{x}_{2}s}\ket{0,1}+\omega_N^{\left(\hat{x}_{1}+\hat{x}_{2}\right)s}\ket{1,0}. \end{equation} Then he measures the target qubit; with probability $\frac{1}{2}$ he measures $\ket{1}$. If $\ket{0}$ is measured, he needs to reset all registers to $\ket{0}$ and start another \emph{ParitySolve}. After $\ket{1}$ on the target qubit is measured, the control qubit becomes \begin{equation} \ket{0}+\omega_N^{\left(\hat{x}_1-\hat{x}_2\right)s}\ket{1}=\ket{0}+\left(-1\right)^{s}\ket{1}. \end{equation} The equality holds because if $\hat{x}_1$ and $\hat{x}_2$ form a collision, then $\hat{x}_{1}-\hat{x}_{2} \mod N=\frac{N}{2}$. Finally, the parity of $s$ lies inside the phase of $\ket{1}$. Bob can solve it by applying an $H$ gate on the remaining qubit and measuring it. If the result is $\ket{0}$, then $s$ is even and he replies $0$ to Alice. Otherwise, $s$ is odd and he replies $1$. The solution is completely correct if the quantum channel and devices are noiseless. \begin{figure} \caption{A toy circuit for $m=2$, $n=3$ and $s=2$; the first four qubits correspond to the state $\ket{\psi_{0,2}}$ with $x=0$.} \label{fig: s_even} \end{figure} \begin{figure} \caption{A toy circuit for $m=2$, $n=3$ and $s=3$; the first four qubits correspond to the state $\ket{\psi_{2,3}}$ with $x=2$.} \label{fig: s_odd} \end{figure} \subsection{Measurement method} We have found a new method to solve the parity of $s$ when $E=N$. As indicated previously, $s$ is unsolvable by direct measurement in the computational basis. However, it is possible to have a minor advantage with single-qubit measurements in a different basis, equivalent to applying one layer of identical single-qubit unitary gates before measuring in the computational basis, as shown in FIG. \ref{fig: measurement circuit}.
The general single-qubit unitary gate $U_3$ can be written as \begin{equation} \begin{aligned} &U_3\left(a,b,c\right)= \begin{pmatrix} e^{-i\left(b+c\right)/2}\cos\left(\frac{a}{2}\right) & -e^{-i\left(b-c\right)/2}\sin\left(\frac{a}{2}\right) \\ e^{i\left(b-c\right)/2}\sin\left(\frac{a}{2}\right) & e^{i\left(b+c\right)/2}\cos\left(\frac{a}{2}\right) \end{pmatrix}, \end{aligned} \end{equation} with $a\in \left[0,\pi\right)$, $b\in \left[0,4\pi\right)$ and $c\in \left[0,2\pi\right)$. When $a=\frac{\pi}{2}$, $b=0$ and $c=\pi$, $U_3$ is an $H$ gate (up to a global phase). The parity of $s$ can be distinguished with $a=\frac{\pi}{2}$, $c\in\{0,\pi\}$, and an arbitrary $b$. The measurement is read as $M=m_0 2^0 + m_1 2^1 + ... + m_{n} 2^{n}$. We denote by $M_{non}$ a value that can never be measured, by $M_{even}$ a value that is measurable only when $s$ is even, and by $M_{odd}$ a value that is measurable only when $s$ is odd. Therefore, $M_{even}$ and $M_{odd}$ can be considered as the feature values of the parity of $s$, which can be determined when one of them appears. When $c=0$, $M_{non}=N-1$, $M_{even}=2N-2$ and $M_{odd}=N-2$. When $c=\pi$, $M_{non}=N$, $M_{even}=1$ and $M_{odd}=N+1$. The probability of measuring $M_{even}$ or $M_{odd}$ is $\frac{1}{N}$. For example, Bob can measure every DCP sample in the $H$ basis; if any outcome is $1$, then $s$ is even, and if any outcome is $N+1$, then $s$ is odd. This new technique was found by brute-force simulation for $n<10$ and is conjectured to be valid for larger $n$; it potentially leads to a solution of the DCP using only measurements. \begin{figure} \caption{A toy circuit for the measurement method for $n=3$.} \label{fig: measurement circuit} \end{figure} \section{Protocol} In this section, we use an example to demonstrate the full protocol of the DCP challenge. A diagram is in FIG. \ref{fig: protocol}. Alice is the verifier who has a quantum computer. Bob is the prover who declares having a quantum computer and wants to prove his quantum computation capability to Alice. To perform the challenge, Alice needs to have $n+1$ qubits to prepare DCP samples, and Bob needs $m\times\left(n+1\right)$ qubits to solve them. Before starting the challenge, they agree on the choice of $m$, $n$, the number of iterations $t$ and the number of repetitions $r$. In every repetition, there are $t$ iterations. At the first stage, Alice uniformly selects two numbers $x\in\{0,1,..., N-1\}$ and $s\in\{0,1,..., N-1\}$, both of which she keeps secret. She generates $\ket{\psi_{x,s}}$ from $x$ and $s$. Alice sends $m$ DCP samples one by one to Bob via a quantum channel in every iteration. Bob stores them into his $m$ cells of registers and attempts to solve the parity of $s$ using \emph{ParitySolve}. In the first repetition, every sample has the same $s=s_1$ and a different random $x$. If Bob does not have a result after $t$ iterations, he randomly guesses $0$ or $1$. Here we have $\ell=m\times t$. Then Alice starts new repetitions, each time with a different $s$, until she completes the challenge with a secret $s_r$. The number of repetitions $r$ can be any number large enough to reflect Bob's probability of success, also called the accuracy $\mathbf{p}$. In this article, we use bold letters for simulation or experimental outcomes. Bob sends his results in a bit-string back to Alice via a classical channel. Finally, Alice verifies Bob's probability of success $\mathbf{p}$. If Bob has an error-free device, his accuracy is expected to be $p$, which can be calculated or simulated numerically.
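Since the measured $\hat{x}$ after the QFT is uniform, a collision succeeds with probability $\frac{1}{2}$, and a feature value appears with probability $\frac{1}{N}$ per sample, the expected accuracies $p$ and $p_B$ of an error-free device can indeed be estimated by a purely classical Monte-Carlo simulation. The sketch below was written for this presentation (it is independent of the simulation code of Ref.~\cite{Github}; function names are ours) and follows exactly this logic.
\begin{verbatim}
import random

def parity_solve_success(n, m, t, trials=100_000, seed=0):
    # Monte-Carlo estimate of the ideal accuracy p of ParitySolve.
    # Model (from the description above): each x_hat is uniform on
    # {0,...,N-1}; a collision is a pair differing only in the most
    # significant bit; a collision solves the parity with probability 1/2;
    # Bob guesses if nothing is found after t iterations.
    rng = random.Random(seed)
    N, half = 2 ** n, 2 ** (n - 1)
    wins = 0
    for _ in range(trials):
        solved = False
        for _ in range(t):                 # one iteration = m fresh samples
            xs = {rng.randrange(N) for _ in range(m)}
            if any((x ^ half) in xs for x in xs) and rng.random() < 0.5:
                solved = True
                break
        wins += 1 if solved else int(rng.random() < 0.5)
    return wins / trials

def measurement_method_success(n, m, t, trials=100_000, seed=1):
    # Baseline p_B: each of the m*t samples reveals a feature value
    # (M_even or M_odd) with probability 1/N, as stated above.
    rng = random.Random(seed)
    N = 2 ** n
    wins = 0
    for _ in range(trials):
        solved = any(rng.random() < 1 / N for _ in range(m * t))
        wins += 1 if solved else int(rng.random() < 0.5)
    return wins / trials

# Example with parameters used in the appendices: m = 9, n = 6, t = 4.
# print(parity_solve_success(6, 9, 4), measurement_method_success(6, 9, 4))
\end{verbatim}
Such a sketch can be used to compare $p$ with $p_B$ for any choice of $m$, $n$ and $t$ before running the protocol on hardware.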
Furthermore, the choice of $m$, $n$, $t$ depends on the number of qubits of both parties, the maximum transmission capacity of the quantum channel and the difference $p-p_B$, where $p_B$ is the expected accuracy of performing the measurement method. Details are shown in appendix \ref{app_birthday}. Moreover, the presence of noise implies a loss of computing power that is reflected in the accuracy. Since NISQ hardware is not likely to be error-free, we expect to have $p \geq \mathbf{p} \geq \frac{1}{2}$. The quantum computation capability of Bob's processor is verified when $\mathbf{p}>p_B$. When $\mathbf{p}\sim p$, the device is qualified for a stricter test. The numerical simulation of this verification protocol can be found in appendix \ref{app_simulation_verification}. \begin{figure*} \caption{The DCP challenge in a diagram.} \label{fig: protocol} \end{figure*} \section{Possible cheating methods}\label{cheating} There is no known method to cheat the DCP challenge without a quantum computer of better performance, unless a new algorithm is found that reaches the expected accuracy $p$ with a shallower circuit than \emph{ParitySolve}. However, it is possible to cheat with such a device and obtain $\mathbf{p}>p$ using even fewer than $m\times\left(n+1\right)$ qubits. Two methods are outlined below. The first method assumes Bob's quantum computer has a longer relaxation time, such that unmeasured qubits do not quickly return to $\ket{0}$. Instead of receiving $m$ samples and erasing them all if he could not find a collision, he can erase one sample at a time and receive another one. Once a collision is found, if he measures $\ket{0}$ after the $CNOT$ gate, he can erase both samples and receive another two. This method wastes fewer DCP samples and leads to a larger probability of success. The second method assumes Bob's quantum computer has low enough noise to perform $SWAP$ gates efficiently. Bob can move reflection qubits to the register of measured qubits after resetting them to $\ket{0}$; therefore, he has more room to store the reflection qubit of every sample. This method increases the probability of collision, thus increasing the accuracy. Both methods rely on an enhanced quantum computation capability, so they should not really be considered cheating. Once quantum computers become powerful enough to ``cheat'' accurately, the ``cheating method'' can become the standard protocol. All Alice needs to do is to reduce $t$ or raise $p$ accordingly. There are more methods to reach $\mathbf{p}>p$ when Regev's \cite{Regev02,Regev04} and Kuperberg's \cite{Kuperberg05,Kuperberg13} complete algorithms can be performed. Nonetheless, we can prohibit all these cheating methods by setting the time interval between iterations long enough to exceed any plausible relaxation time in the near future, but still short enough for an experiment. For example, we can set the interval to one second (the relaxation time of the Sycamore processor is of the order of $\mu s$), so the device loses all its memory of the previous iteration. In this way, Bob has no cheating method unless he has a quantum computer with an extremely long relaxation time. \section{Other applications} The DCP challenge has more applications than merely verifying quantum computation power; here are some examples. This protocol can be used to benchmark a quantum computer: it is a straightforward method for evaluating the performance of NISQ hardware.
Gate-based quantum devices are manufactured using various techniques and thus have distinct connection geometries and parameters. Even when they have the same number of qubits, direct comparison of their computation capability is difficult. The readout of the DCP challenge is the numerical accuracy $\mathbf{p}$ after applying a large number of simple pre-defined circuits. It provides a quantitative insight into a quantum computer, which can be regarded as a score. The numerical simulation of using the DCP challenge as a benchmarking protocol is in appendix \ref{app_simulation_benchmarking}. The smallest instance using the DCP challenge for benchmarking only requires $4$ qubits in a line. In this case, the $QFT$ is a single $H$ gate. We perform this experiment on the first $4$ qubits of the $5$-qubit IBM Q processor \emph{ibmq\_manila} \cite{IBM}, as detailed in appendix \ref{app_IBM}. The DCP challenge also helps to benchmark a quantum channel. If Bob tests the challenge locally on his processor and obtains a probability of success $\mathbf{p}$, they should anticipate a comparable level of accuracy when Alice transmits the challenge to Bob through the channel. This protocol can also help to spot eavesdropping on a quantum channel. Suppose Alice and Bob used to obtain a probability of success $\mathbf{p}$ and the probability suddenly drops; if they are both certain there are no technical issues with the channel or their hardware, then perhaps Eve is intercepting. She steals some DCP samples from the channel, and when she puts some fake samples back, she is unaware of the parity of $s$. Even if Eve can also intercept the classical channel from Bob to Alice and change the result, she has no method to raise the probability unless she replaces Bob completely. The DCP challenge is a very elementary puzzle game for NISQ devices; much of its potential remains unexplored. \section{Generalization} Here we give a more general interactive verification protocol. The central assumption is a question encoding a secret in $\ell$ quantum states (samples), a quantum algorithm that solves the secret with probability $p$, and a classical or measurement-based algorithm that solves it with a baseline probability $p_B$. The key to verifying the quantum computing power lies in the inequality $p>p_B$. Alice sends a fixed number $\ell$ of samples in each repetition. Bob needs to solve the secret and send his result back to Alice, who verifies the accuracy and compares it with $p_B$. By increasing the number of repetitions, Alice can confirm Bob's quantum computation capability. The protocol can be optimized by lowering the number of qubits and simplifying Alice's process of preparing the samples, creating a computation imbalance between the verifier and prover. The DCP is chosen with the extra advantage that its current solutions naturally possess an LFC structure. Moreover, even within the DCP framework, our readers are free to design new protocols for more advanced quantum computers; for example, Alice can ask for the full $s$ instead of its parity, or she can choose another modulus $E<N$. New algorithms for solving the DCP or its variants will be invented in the future, and the protocol can be updated accordingly. However, the sub-exponential quantum complexity of the DCP remains relatively solid, since it secures the hardness of the LWE problem \cite{brakerski2018learning}. \section{Conclusions} In this article, the DCP challenge has been proposed. Its computation has been explained, numerical simulations have been performed, and different cheating strategies have been evaluated.
Other applications have been described and a generalization has been presented. Our readers may perceive it as a quantum game rather than a methodology for confirming quantum processing capacity. Its rules are flexible and can be adapted to different situations. The DCP challenge is designed for NISQ devices, and it is intended to serve only temporarily. One day, when quantum computers are powerful enough to outperform classical computers in various tasks such as factoring big integers, the protocol will lose its purpose as a proof of computation. Nevertheless, its other applications remain. \begin{thebibliography}{10} \bibitem{Supremacy} Frank Arute, Kunal Arya, Ryan Babbush, Dave Bacon, Joseph~C Bardin, Rami Barends, Rupak Biswas, Sergio Boixo, Fernando~GSL Brandao, David~A Buell, et~al. \newblock Quantum supremacy using a programmable superconducting processor. \newblock {\em Nature}, 574(7779):505--510, 2019. \bibitem{NISQ} John Preskill. \newblock Quantum computing in the NISQ era and beyond. \newblock {\em Quantum}, 2:79, 2018. \bibitem{RCS} Sean Mullane. \newblock Sampling random quantum circuits: a pedestrian's guide. \newblock {\em arXiv preprint arXiv:2007.07872}, 2020. \bibitem{CEB} Sergio Boixo, Sergei~V Isakov, Vadim~N Smelyanskiy, Ryan Babbush, Nan Ding, Zhang Jiang, Michael~J Bremner, John~M Martinis, and Hartmut Neven. \newblock Characterizing quantum supremacy in near-term devices. \newblock {\em Nature Physics}, 14(6):595--600, 2018. \bibitem{Regev09} Oded Regev. \newblock On lattices, learning with errors, random linear codes, and cryptography. \newblock {\em Journal of the ACM (JACM)}, 56(6):1--40, 2009. \bibitem{Brakerski0} Zvika Brakerski, Paul Christiano, Urmila Mahadev, Umesh Vazirani, and Thomas Vidick. \newblock A cryptographic test of quantumness and certifiable randomness from a single quantum device. \newblock {\em Journal of the ACM (JACM)}, 68(5):1--47, 2021. \bibitem{Brakerski1} Zvika Brakerski, Venkata Koppula, Umesh Vazirani, and Thomas Vidick. \newblock Simpler proofs of quantumness. \newblock {\em arXiv preprint arXiv:2005.04826}, 2020. \bibitem{Zhu} Daiwei Zhu, Gregory~D Kahanamoku-Meyer, Laura Lewis, Crystal Noel, Or~Katz, Bahaa Harraz, Qingfeng Wang, Andrew Risinger, Lei Feng, Debopriyo Biswas, et~al. \newblock Interactive protocols for classically-verifiable quantum advantage. \newblock {\em arXiv preprint arXiv:2112.05156}, 2021. \bibitem{Dwave} Nike Dattani, Szilard Szalay, and Nick Chancellor. \newblock Pegasus: The second connectivity graph for large-scale quantum annealing hardware. \newblock {\em arXiv preprint arXiv:1901.07636}, 2019. \bibitem{Shor} Peter~W Shor. \newblock Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. \newblock {\em SIAM review}, 41(2):303--332, 1999. \bibitem{Grover} Lov~K Grover. \newblock A fast quantum mechanical algorithm for database search. \newblock In {\em Proceedings of the twenty-eighth annual ACM symposium on Theory of computing}, pages 212--219, 1996. \bibitem{Cirq} Cirq. \newblock https://github.com/quantumlib/cirq. \bibitem{Qiskit} Qiskit. \newblock https://github.com/qiskit/qiskit. \bibitem{Qibo} Stavros Efthymiou, Sergi Ramos-Calderer, Carlos Bravo-Prieto, Adri{\'a}n P{\'e}rez-Salinas, Diego Garc{\'\i}a-Mart{\'\i}n, Artur Garcia-Saez, Jos{\'e}~Ignacio Latorre, and Stefano Carrazza. \newblock Qibo: a framework for quantum simulation with hardware acceleration. \newblock {\em Quantum Science and Technology}, 7(1):015018, 2021. \bibitem{QiboGithub} Qibo.
\newblock https://github.com/qiboteam/qibo. \bibitem{fitzsimons2018post} Joseph~F Fitzsimons, Michal Hajdu{\v{s}}ek, and Tomoyuki Morimae. \newblock Post hoc verification of quantum computation. \newblock {\em Physical review letters}, 120(4):040501, 2018. \bibitem{takeuchi2021divide} Yuki Takeuchi, Yasuhiro Takahashi, Tomoyuki Morimae, and Seiichiro Tani. \newblock Divide-and-conquer verification method for noisy intermediate-scale quantum computation. \newblock {\em arXiv preprint arXiv:2109.14928}, 2021. \bibitem{MEPH} Mark Ettinger and Peter H\o{}yer. \newblock On quantum algorithms for noncommutative hidden subgroups. \newblock {\em Advances in Applied Mathematics}, 25(3):239--251, 2000. \bibitem{GSVV01} Michelangelo Grigni, Leonard Schulman, Monica Vazirani, and Umesh Vazirani. \newblock Quantum mechanical algorithms for the nonabelian hidden subgroup problem. \newblock In {\em Proceedings of the thirty-third annual ACM symposium on Theory of computing}, pages 68--74, 2001. \bibitem{Regev02} Oded Regev. \newblock Quantum computation and lattice problems. \newblock {\em SIAM Journal on Computing}, 33(3):738--760, 2004. \bibitem{FIMSS03} Katalin Friedl, G{\'a}bor Ivanyos, Fr{\'e}d{\'e}ric Magniez, Miklos Santha, and Pranab Sen. \newblock Hidden translation and orbit coset in quantum computing. \newblock In {\em Proceedings of the thirty-fifth annual ACM symposium on Theory of computing}, pages 1--9, 2003. \bibitem{HRTS00} Sean Hallgren, Alexander Russell, and Amnon Ta-Shma. \newblock Normal subgroup reconstruction and quantum computation using group representations. \newblock In {\em Proceedings of the thirty-second annual ACM symposium on Theory of computing}, pages 627--635, 2000. \bibitem{RoBe98} Martin Roetteler and Thomas Beth. \newblock Polynomial-time solution to the hidden subgroup problem for a class of non-abelian groups. \newblock {\em arXiv preprint quant-ph/9812070}, 1998. \bibitem{Kuperberg05} Greg Kuperberg. \newblock A subexponential-time quantum algorithm for the dihedral hidden subgroup problem. \newblock {\em SIAM Journal on Computing}, 35(1):170--188, 2005. \bibitem{Regev04} Oded Regev. \newblock A subexponential time algorithm for the dihedral hidden subgroup problem with polynomial space. \newblock {\em arXiv preprint quant-ph/0406151}, 2004. \bibitem{Kuperberg13} Greg Kuperberg. \newblock Another subexponential-time quantum algorithm for the dihedral hidden subgroup problem. \newblock {\em arXiv preprint arXiv:1112.3333}, 2011. \bibitem{NiCh00} Michael~A Nielsen and Isaac Chuang. \newblock Quantum computation and quantum information, 2002. \bibitem{IBM} IBM Quantum. \newblock https://quantum-computing.ibm.com/. \bibitem{brakerski2018learning} Zvika Brakerski, Elena Kirshanova, Damien Stehl{\'e}, and Weiqiang Wen. \newblock Learning with errors and extrapolated dihedral cosets. \newblock In {\em IACR International Workshop on Public Key Cryptography}, pages 702--727. Springer, 2018. \bibitem{Github} Ruge Lin. \newblock https://github.com/gogoko699/dcp-challenge. \bibitem{datasheet} Google~Quantum AI. \newblock Quantum computer datasheet. \end{thebibliography} \appendix \section{Analytical probability}\label{app_birthday} To obtain the ideal probability of success $p$, we need to determine $k_{collision}$, the probability of NOT having a collision in $m$ cells among $N$ possibilities. We can use the formula of "Birthday Paradox" to provide its upper bound and a lower bound. 
$k_{lower}$ is a direct application of the formula for the probability of NOT having two identical elements when choosing $m$ times out of $N$ possibilities, \begin{equation} k_{lower}=\prod_{i=0}^{m-1} \frac{N-i}{N}. \end{equation} In order NOT to have a pair of identical elements, each choice must be different. However, in the case of NOT having a collision, we can keep the same choice. So it is slightly easier to avoid a collision than to avoid a pair of identical elements, which is why $k_{lower}$ is indeed a lower bound for $k_{collision}$. For calculating $k_{upper}$, we first consider the probability of NOT having two identical elements when choosing $m$ times out of $\frac{N}{2}$ possibilities (for the $n-1$ qubits excluding the most significant one). Then, we take into account that a pair coinciding on those $n-1$ qubits avoids being a collision only if the most significant qubits also coincide, which happens with probability $\frac{1}{2}$. We have \begin{equation} k_{upper}=\frac{1}{2}+\frac{1}{2}\prod_{i=0}^{m-1} \frac{N/2-i}{N/2}. \end{equation} $k_{upper}$ ignores the case of having more than one pair of identical elements when we are considering the first $n-1$ qubits. We have \begin{equation} k_{upper} > k_{collision} > k_{lower}. \end{equation} But each collision has only a probability of $\frac{1}{2}$ of solving the parity of $s$, so the chance of NOT being able to solve after $t$ iterations is \begin{equation} 2-2p=\left(\frac{1+k_{collision}}{2}\right)^t. \end{equation} Finally, when Bob is unable to solve, he has to randomly guess a result, which is correct with probability $\frac{1}{2}$. So in total, his probability of success is \begin{equation} p=\left(2-\left(\frac{1+k_{collision}}{2}\right)^t\right)/2. \label{eq_p} \end{equation} And we have the relation \begin{equation} p_{upper} > p > p_{lower}, \end{equation} with \begin{equation} p_{upper}=\left(2-\left(\frac{1+k_{lower}}{2}\right)^t\right)/2, \label{eq_p_upper} \end{equation} and \begin{equation} p_{lower}=\left(2-\left(\frac{1+k_{upper}}{2}\right)^t\right)/2. \label{eq_p_lower} \end{equation} Although we do not have an analytical expression for $p$, we can obtain it numerically. We can generate $m$ random bit strings of length $n$ and search for a collision. By repeating this procedure, we obtain a numerical estimate of $k_{collision}$ and use it to calculate $p$. To achieve a given $p$, we can estimate $t$ as \begin{equation} t\sim\frac{\log\left(2-2p\right)}{\log\left(\frac{1+k_{lower}}{2}\right)}. \label{eq_t} \end{equation} We use $k_{lower}$ instead of $k_{upper}$ because it is numerically closer to $k_{collision}$. The choice of $t$ is flexible, but setting it too large is not just a waste of resources: if Bob can solve the parity of $s$ multiple times within $t$ iterations and take the majority result, the protocol becomes less sensitive to error. Here, when $\ket{0}$ is measured after the $CNOT$, we simply reset all registers and pass directly to the next iteration, ignoring the fact that there might be another collision in the same group. This is because the probability of having two collisions in the same group is significantly lower than that of having one, and this difference vanishes in $p$ with the exponent $t$. We would like to keep the problem as simple as possible. Also, we would like to maintain the shallowest circuit. This situation is easy to simulate classically. In some situations, especially when $N \gg m$, $t$ can be too large ($>1,000$) to fit in an experiment. In this case, we can set a lower $p$ and increase $r$ for precision. The rules of this protocol are adjustable.
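For convenience, the bounds above and the estimate of $t$ are straightforward to evaluate numerically; the following helper functions (a sketch written for this presentation, not the code of Ref.~\cite{Github}) implement Eqs.~(\ref{eq_p})--(\ref{eq_t}) together with the baseline accuracy of Eq.~(\ref{eq_p_B}) given below.
\begin{verbatim}
import math

def k_lower(m, N):
    # Probability of NOT drawing two identical values among m uniform
    # draws from N possibilities (birthday-paradox product).
    prod = 1.0
    for i in range(m):
        prod *= (N - i) / N
    return prod

def k_upper(m, N):
    # Upper bound on the no-collision probability k_collision.
    prod = 1.0
    for i in range(m):
        prod *= (N / 2 - i) / (N / 2)
    return 0.5 + 0.5 * prod

def p_from_k(k, t):
    # Success probability for a given no-collision probability k, Eq. (eq_p).
    return (2 - ((1 + k) / 2) ** t) / 2

def p_baseline(N, m, t):
    # Expected accuracy of the measurement method, Eq. (eq_p_B).
    return (2 - ((N - 1) / N) ** (m * t)) / 2

def t_estimate(p_target, m, N):
    # Number of iterations needed for a target p, Eq. (eq_t).
    return math.log(2 - 2 * p_target) / math.log((1 + k_lower(m, N)) / 2)

# Example matching the numerical indication below: n = 19, m = 50.
m, N = 50, 2 ** 19
t = math.ceil(t_estimate(0.80, m, N))                       # roughly 785
p_upper = p_from_k(k_lower(m, N), t)
p_lower = p_from_k(k_upper(m, N), t)
print(t, p_upper, p_lower, p_baseline(N, m, t))
\end{verbatim}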
For a numerical indication, for a device with $1,000$ qubits, if we set $n=19$ and $m=50$, $t$ should be $\geq 785$ to have $p>80\%$ according to our protocol. Therefore, our protocol is still feasible in the advanced NISQ era. By then, quantum computers might be capable of ``cheating'' accurately (as in Section~\ref{cheating}), and we can even set a lower $t$. There is a probability of $2-2p$ that a random guess needs to be made. Using the standard deviation formula, we can calculate the fluctuation from the expectation $p$, \begin{equation} \sigma_p=\sqrt{\frac{1-p}{2r}}. \label{eq_SD} \end{equation} This formula of the standard deviation is also valid for the experimental $\mathbf{p}$. When $\mathbf{p}=\frac{1}{2}$, the fluctuation becomes the same as flipping a coin $r$ times. For the measurement method, the probability of measuring $M_{even}$ or $M_{odd}$ is $\frac{1}{N}$ and the probability of measuring both of them in the same repetition is $0$. Therefore, the probability of measuring neither of them, i.e., of not being able to solve the parity of $s$ within $m\times t$ samples, is \begin{equation} 2-2p_B=\left(\frac{N-1}{N}\right)^{mt}. \end{equation} The expected accuracy is \begin{equation} p_B=\left(2-\left(\frac{N-1}{N}\right)^{mt}\right)/2. \label{eq_p_B} \end{equation} The standard deviation can also be calculated with Eq.~(\ref{eq_SD}). From Eq.~(\ref{eq_p_upper}) and Eq.~(\ref{eq_p_B}), we can compare $p_{upper}$, which is numerically closer to $p$, with $p_{B}$ for different $m$, $n$ and $t$. For less than $1,600$ qubits, a minor difference is shown in FIG. \ref{fig: small_gap} for an indication. The number $1600$ is the most up-to-date lower bound of a verification protocol with a classical verifier \cite{Zhu}; ideally, a quantum verifier is no longer needed beyond this scale. A distinguishable difference is shown in FIG. \ref{fig: mid_gap}, where \emph{ParitySolve} and the measurement method can be distinguished for $r\sim 1,000$. A significant difference is shown in FIG. \ref{fig: big_gap}; in this case, Bob's quantum computation capability can be verified even with moderate error. \begin{figure*} \caption{Smallest instance with a minor difference between $p_{upper}$ and $p_{B}$: $m=6$, $n=4$, $t=1$.} \label{fig: small_gap} \caption{Smallest instance with a distinguishable difference: $m=9$, $n=6$, $t=4$.} \label{fig: mid_gap} \caption{Smallest instance with a significant difference: $m=21$, $n=9$, $t=9$.} \label{fig: big_gap} \caption{Combinations of $m$ and $n$ considered when comparing $p_{upper}$ and $p_{B}$.} \label{fig:three graphs} \end{figure*} \section{Numerical simulation for verification}\label{app_simulation_verification} One advantage of the DCP challenge is that it is effortless to simulate the whole process classically. Due to the LFC structure and the shallowness of the circuit, instead of simulating the entire circuit of $m\times\left(n+1\right)$ qubits, we can simulate each DCP sample individually and store the measurement bit-string and the state vector of the remaining qubit. Qibo can efficiently simulate a quantum circuit for up to $31$ qubits on a laptop. Therefore, it can simulate a DCP circuit for up to $n=30$ qubits. If Bob has a quantum chip with $m=9$ and $n=6$, Alice can first choose, guided by FIG. \ref{fig: mid_gap}, $t=4$. Then she can calculate $p_{upper}$ with Eq. (\ref{eq_p_upper}) and $p_{B}$ with Eq. (\ref{eq_p_B}), or even calculate $p$ using Eq. (\ref{eq_p}) with a numerical $k_{collision}$, to see the probability that she expects. She prepares $r=1,000$ repetitions to challenge Bob, each with $m=9$ samples of $n+1=7$ qubits, and $t=4$ iterations.
In total, she needs to prepare $36,000$ DCP samples, and Bob needs to perform about that many QFTs. Finally, Bob sends his $1,000$ answers back to Alice, who verifies his accuracy. FIG. \ref{fig: verification} shows a simulation with Qibo of $\mathbf{p}_{clean}$, the accuracy of the error-free simulation of \emph{ParitySolve}, and $\mathbf{p}_{Hbasis}$, the accuracy of solving by measuring in the $H$ basis. We have $\overline{\mathbf{p}_{clean}}=p$ and $\overline{\mathbf{p}_{Hbasis}}=p_B$. The code is on Github \cite{Github}. We consider $r=1,000$ acceptable since it is trivial to distinguish the probability distributions of the clean circuit performing \emph{ParitySolve} and of the measurement method despite the fluctuations. \begin{figure} \caption{Normalized probability distribution of $\mathbf{p}_{clean}$ and $\mathbf{p}_{Hbasis}$.} \label{fig: verification} \end{figure} \section{Numerical simulation for benchmarking}\label{app_simulation_benchmarking} We use $p\sim 80\%$ for simulation. It is not too close to $50\%$, so we can notice the decrease of accuracy due to noise. Our readers can also choose $p\sim 90\%$, which means $t$ needs to be $\sim 1.75$ times greater according to Eq. (\ref{eq_t}). TABLE \ref{simulation} compares the analytical $p_{lower}$ and $p_{upper}$ with the error-free circuit simulation $\mathbf{p}_{clean}$ and the noisy circuit simulation $\mathbf{p}_{error}$ \cite{Github}. For the noise map, we use $1\%$ for bit error and phase error, and $3\%$ for measurement error; the choice of errors is inspired by the Quantum Computer Datasheet \cite{datasheet} from Google. In the table, $p_{upper}>\mathbf{p}_{clean}>p_{lower}$ for $m<8$, which is what we expected. But $p$ can be very close to $p_{upper}$. As we can see, when $m=8$, $\mathbf{p}_{clean}$ is actually larger than $p_{upper}$ because we obtain it through sampling and there is minor fluctuation. To benchmark a quantum chip with $n=5$ and $m=6$, we can first predict with Eq. (\ref{eq_t}) that $t=5$, then calculate $p_{upper}$ with Eq. (\ref{eq_p_upper}) and $p_{lower}$ with Eq. (\ref{eq_p_lower}), or even calculate $p$ using Eq. (\ref{eq_p}) with a numerical $k_{collision}$, to see the probability that we expect. If we set $r=1,000$, we need to prepare $30,000$ DCP samples in total and perform about that many QFTs. FIG. \ref{fig: PD} is a simulation of $\mathbf{p}_{clean}$ and $\mathbf{p}_{error}$ in this case. Compared to $\mathbf{p}_{clean}$, the accuracy $\mathbf{p}_{error}$ shifts towards $\frac{1}{2}$ due to the noise. We consider $r=1,000$ acceptable since it is trivial to distinguish the probability distributions of the clean circuit and the noisy circuit despite the fluctuations. \begin{table}[t] \centering \begin{tabular}{ |c|c|c|c|c|c|c| } \hline $m$ & 3 & 4 & 5 & 6 & 7 & 8\\ \hline $t$ & 3 & 3 & 4 & 5 & 7 & 9\\ \hline $p_{upper}$ & 83.75\% & 82.47\% & 84.18\% & 83.22\% & 83.17\% & 80.62\%\\ \hline $p_{lower}$ & 78.90\% & 76.86\% & 79.38\% & 79.59\% & 80.61\% & 78.91\%\\ \hline $\mathbf{p}_{clean}$& 81.60\% & 80.86\% & 83.05\% & 82.45\% & 82.38\% & 80.69\%\\ \hline $\mathbf{p}_{error}$& 69.02\% & 65.38\% & 61.77\% & 58.74\% & 55.65\% & 53.30\%\\ \hline \end{tabular} \caption{Here we assume $m=n+1$, so there are in total $m^2$ qubits, and $t$ is the minimal number of iterations for $p_{upper}>80\%$.
$\mathbf{p}_{clean}$ and $\mathbf{p}_{error}$ are results for $r=10,000$ repetitions.} \label{simulation} \end{table} \begin{figure} \caption{Normalized probability distribution of $\mathbf{p}_{clean}$ and $\mathbf{p}_{error}$.} \label{fig: PD} \end{figure} \section{IBM Q experiment}\label{app_IBM} We run the DCP challenge on $4$ superconducting qubits provided by IBM Q quantum computers. Since the IBM Quantum Composer interface does not allow applying gates after measurement, the DCP challenge cannot be directly implemented. We need to perform multiple experiments on every possible configuration and then use the output data to reconstruct the DCP challenge. When $n=1$ and $m=2$, there are in total $2^3=8$ possible cases for two DCP samples, as shown in FIG. \ref{cases}. Notice that the reflection qubits are in the centre to avoid $SWAP$ gates. We perform five tests of each case on the first $4$ qubits of the $5$-qubit quantum processor \emph{ibmq\_manila}, which has a linear architecture. By default, each test consists of $1024$ shots. Then we select the measurements that have a collision and give $\ket{1}$ on the target qubit of the $CNOT$ gate, i.e., $q_0 \neq q_3$ and $q_2=1$. If the device is noiseless, we should have $q_1=s$. The results are in TABLE \ref{IBM}. We can see that the error when $s=1$ is more significant since the circuit has more $CNOT$ gates for preparing DCP samples. The difference will decrease with larger $n$. Eventually, the dominant gate error will come from the $QFT$s or $SWAP$ gates, depending on the structure of the device. Furthermore, due to the imbalanced measurement error \cite{datasheet}, it is more likely to measure $\ket{0}$ than $\ket{1}$ on a current quantum processor. The same situation can also be caused by the low relaxation time. If Alice chooses $s$ uniformly, Bob is very likely to have more $0$ than $1$ in his result, and he can have a rough estimation of his performance. However, this extra information does not allow him to cheat since he does not know which $0$ should be replaced by $1$. We use the erroneous data to reconstruct the DCP challenge \cite{Github}; the result is shown in FIG. \ref{fig: IBM}. The performance of \emph{ibmq\_manila} is not perfect but satisfactory. Our readers can also use the DCP challenge to benchmark other processors of IBM Q, such as \emph{ibmq\_santiago}. \begin{figure} \caption{Case A, with $s=0$, $x_0=0$ and $x_1=0$.} \label{fig: caseA} \caption{Case B, with $s=0$, $x_0=0$ and $x_1=1$.} \label{fig: caseB} \caption{Case C, with $s=0$, $x_0=1$ and $x_1=0$.} \label{fig: caseC} \caption{Case D, with $s=0$, $x_0=1$ and $x_1=1$.} \label{fig: caseD} \caption{Case E, with $s=1$, $x_0=0$ and $x_1=0$.} \label{fig: caseE} \caption{Case F, with $s=1$, $x_0=0$ and $x_1=1$.} \label{fig: caseF} \caption{Case G, with $s=1$, $x_0=1$ and $x_1=0$.} \label{fig: caseG} \caption{Case H, with $s=1$, $x_0=1$ and $x_1=1$.} \label{fig: caseH} \caption{Eight possible cases for two DCP samples of $n=1$.} \label{cases} \end{figure} \begin{table}[t] \centering \begin{tabular}{ |c|c|c|c|c|c|c|c|c| } \hline $q_3 q_2 q_1 q_0$ & A & B & C & D & E & F & G & H\\ \hline $\ket{0101}$ & 517 & 638 & 624 & 642 & 89 & 73 & 61 & 78 \\ \hline $\ket{0111}$ & 9 & 9 & 14 & 17 & 603 & 563 &526 & 560 \\ \hline $\ket{1100}$ & 682 & 575 & 632 & 583 & 56 & 50 &51 & 75 \\ \hline $\ket{1110}$ & 20 & 14 & 13 & 13 & 477 & 490 &552 & 545 \\ \hline error &2.4\%&1.9\%&2.1\%&2.4\%&11.8\%&10.5\%&9.4\%&12.2\%\\ \hline \end{tabular} \caption{Post-selected measurements from the IBM Q \emph{ibmq\_manila} processor.
In total there are $5426$ shots measuring $q_1=0$ and $4425$ shots measuring $q_1=1$. There are $>1,000$ shots for each case, which allows us to reconstruct the DCP challenge with $r=1,000$.} \label{IBM} \end{table} \begin{figure} \caption{Normalized probability distribution of $\mathbf{p}$ reconstructed from the IBM Q data.} \label{fig: IBM} \end{figure} \end{document}
\begin{document} \title{\LARGE \bf Time-Optimal Paths for Simple Cars with Moving Obstacles in the Hamilton-Jacobi Formulation } \thispagestyle{empty} \pagestyle{empty} \begin{abstract} We consider the problem of time-optimal path planning for simple nonholonomic vehicles. In previous similar work, the vehicle has been simplified to a point mass and the obstacles have been stationary. Our formulation accounts for a rectangular vehicle, and involves the dynamic programming principle and a time-dependent Hamilton-Jacobi-Bellman (HJB) formulation which allows for moving obstacles. To our knowledge, this is the first HJB formulation of the problem which allows for moving obstacles. We design an upwind finite difference scheme to approximate the equation and demonstrate the efficacy of our model with a few synthetic examples. \end{abstract} \section{INTRODUCTION} \begin{figure} \caption{A simple rectangular car.} \label{fig:car} \end{figure} As automated driving technology becomes more prevalent, it is ever more important to develop interpretable trajectory planning algorithms. In this manuscript, we address the problem of trajectory planning for simple self-driving cars using a method rooted in optimal control theory and dynamic programming. We consider the vehicle pictured in \cref{fig:car}. The configuration space for the car is $(x,y,\theta)$ where $(x,y)$ denotes the coordinate for the center of mass of the car, and $\theta \in [0,2\pi)$ denotes the angle of inclination from the horizontal. The rear axle has length $2R$ and the distance from the rear axle to the center of mass is $d$. Such cars are typically propelled using actuators which supply torque to either of the rear wheels \cite{Bertozzi}. The motion is subject to a nonholonomic constraint \begin{equation} \dot y \cos \theta - \dot x \sin \theta = d\dot \theta. \end{equation} This ensures motion (approximately) tangential to the rear wheels; indeed, in the case that $d = 0$, the constraint reduces to $dy/dx = \tan \theta$. We also assume the car has a minimum turning radius, or equivalently, a maximum angular velocity so that $\lvert \dot \theta\rvert \le W$, for some $W >0$. LaValle \cite[Chap. 13]{LaValle} includes an extended discussion of models for this and similar vehicles. Trajectory planning for simple cars goes back to Dubins \cite{Dubins} who considered the case that $d = R =0$ (so that the car is simplified to a point mass) and only allowed unidirectional (``forward'') movement. Reeds and Shepp \cite{ReedsShepp} considered forward and backward motion, and proved that in the absence of obstacles, the optimal trajectories are combinations of straight lines and arcs of circles of minimum radius, and that optimal trajectories have at most two kinks where the car changes from moving forward to backward or vice versa. Later effort was devoted toward adding obstacles \cite{Barraquand}, and developing an algorithm for near-optimal trajectories which are robust to perturbation \cite{AgarwalWang}. All of this work was carried out in a discrete and combinatorial fashion, breaking the paths into ``turning'' or ``straight'' segments and proving results regarding the possible combinations of these pieces. To the authors' knowledge, this problem was first analyzed in the context of optimal control theory by Boissonnat et al. \cite{boissonnat1,boissonnat2,boissonnat3} who gave shorter proofs and extensions of results of \cite{Dubins} and \cite{ReedsShepp}. Later, Takei and Tsai et al.
\cite{TakeiTsai1,TakeiTsai2} used dynamic programming to derive a partial differential equation (PDE) which is solved by the optimal travel time function. Throughout all this work, the car was still simplified to a point mass. Later, the same approach was applied while considering the rectangular vehicle pictured in \cref{fig:car} \cite{ParkinsonCar}. PDE-based optimal path planning algorithms have also been developed for a number of applications besides simple self-driving cars, including underwater path planning in dynamic currents \cite{Lolla}, human navigation in a number of contexts \cite{cartee2019time,Parkinson,Parkinson2}, and recent models for environmental crime \cite{Arnold,cartee2020control,Chen}. Other recent work has been devoted to machine learning and variational approaches to the problem; for example, \cite{Shukla,Gao,Johnson}. Such approaches often rely on hierarchical algorithms with global trajectory generation and local collision avoidance as in \cite{Rodriguez,Mao}. \subsection{Our Contribution} We present a PDE-based optimal path planning algorithm for simple self-driving vehicles. Our method is in the same spirit as \cite{TakeiTsai1,TakeiTsai2,ParkinsonCar}. We use dynamic programming to derive a Hamilton-Jacobi-Bellman (HJB) equation which is satisfied by the optimal travel time function. The optimal steering plan is generated using the solution to the HJB equation. To the authors' knowledge, in all previous HJB formulations of optimal trajectory planning for simple self-driving cars, the obstacles are stationary. We present a time-dependent formulation in which obstacles are allowed to move, which is a significant step in adding realism to this formulation. In general, the time-dependent HJB equation has the form \begin{equation} \label{eq:generalHJB }u_t + H(\boldsymbol{x}, \nabla u (\boldsymbol{x})) = r(x).\end{equation} Because the equation is nonlinear and solutions develop kinks \cite{Falcone}, some care is needed when solving HJB equations numerically; for example, they are not amenable to the simplest finite difference methods. Accordingly, we present an upwind finite difference scheme to solve our equation. The HJB formulation of minimal time path planning has a number of natural advantages. Because it is rooted in optimal control theory, there are some theoretical guarantees and there are no ``black box'' components, so the results are interpretable. It is also a very robust modeling framework, wherein one can easily account for a number of other realistic concerns such as energy minimization. Finally, it eschews the need for hierarchical algorithms, and after a single PDE solve, this formulation can resolve optimal paths from any initial configuration to the desired ending configuration. \section{MATHEMATICAL FORMULATION} Our algorithm is based on a control theoretic formulation. Generally, to analyze control problems using dynamic programming, one derives a Hamilton-Jacobi-Bellman equation which is satisfied by the optimal travel time function. Solving the equation provides the optimal travel time from any given starting configuration to a fixed ending configuration, and the derivatives of the travel time function determine the optimal steering plan. For a general treatment of this approach (in both theory and practice), see \cite{Fleming,Bertsekas}. \subsection{Equations of Motion \& Control Problem} We consider a kinematic model of a self-driving car which moves about a domain $\Omega \subset \mathbb R^2$ in the presence of moving obstacles.
Fix a horizon time $T > 0$. At any time $t \in [0,T]$, the obstacles occupy a set $\Omega_{\text{obs}}(t) \subset \Omega$, so that the free space is given by $\Omega_{\text{free}}(t) = \Omega \setminus \Omega_{\text{obs}}(t)$. As described above, we track the current configuration of the car using variables $(\boldsymbol{x},\boldsymbol{y},\boldsymbol{\theta}) : [0,T] \to \Omega \times [0,2\pi)$, which obey the kinematic equations \begin{equation} \label{eq:kinematic} \begin{split} \dot{\boldsymbol{x}} &= \boldsymbol{v} \cos \boldsymbol{\theta} - \boldsymbol{\omega} W d \sin\boldsymbol{\theta}, \\ \dot{\boldsymbol{y}} &= \boldsymbol{v}\sin \boldsymbol{\theta} + \boldsymbol{\omega} W d \cos \boldsymbol{\theta}, \\ \dot{\boldsymbol{\theta}} &= \boldsymbol{\omega} W. \end{split} \end{equation} Here $W > 0$ is a bound on the angular velocity of the vehicle which enforces bounded curvature of the trajectory. The control variables are $\boldsymbol{v}(\cdot),\boldsymbol{\omega}(\cdot) \in [-1,1]$, representing the tangential and angular velocities, respectively. By taking velocities to be control variables, we are neglecting some of the ambient dynamics. In a more complete dynamic model, the control variables would be the torques applied to the rear wheels by the actuators. For a derivation of both this kinematic model and a dynamic model, see \cite{Moret}, and for generalizations of the kinematic model, see \cite{Triggs}. For configurations $(x,y,\theta) \in \Omega \times [0,2\pi)$, define $C(x,y,\theta) \subset \Omega$ to be the space occupied by the car. In general, this could be any shape, but for our purposes it will be a rectangle as pictured in \cref{fig:car}. Then given a desired ending configuration $(x_f,y_f,\theta_f)$, a trajectory $(\boldsymbol{x}(t),\boldsymbol{y}(t),\boldsymbol{\theta}(t))$ is referred to as \emph{admissible} if each of the following is true: \begin{itemize} \item[(1)] it obeys \eqref{eq:kinematic} for $t \in [0,T]$, \item[(2)] $C(\boldsymbol{x}(t),\boldsymbol{y}(t),\boldsymbol{\theta}(t)) \cap \Omega_{\text{obs}}(t) = \varnothing$ for all $t \in [0,T]$, \item[(3)] $(\boldsymbol{x}(T),\boldsymbol{y}(T),\boldsymbol{\theta}(T)) = (x_f,y_f,\theta_f)$. \end{itemize} Here (2) signifies that the car does not collide with obstacles, and (3) signifies that the trajectory ends at the desired ending configuration. Given an initial configuration $(x,y,\theta)$, the goal is then to resolve the steering plan $\boldsymbol{v}(t),\boldsymbol{\omega}(t)$ that determines the minimal time required to traverse an admissible trajectory from $(x,y,\theta)$ to $(x_f,y_f,\theta_f)$.
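For concreteness, the right-hand side of \eqref{eq:kinematic} can be written out directly in code. The following Python sketch is purely illustrative: the function name \texttt{car\_rhs} is our own, and the default values $W=4$ and $d=0.07$ are simply the dimensionless test values used later in \cref{sec:examp}.
\begin{verbatim}
import numpy as np

# Sketch: right-hand side of the kinematic model (eq:kinematic).
# v and w are the controls, each taking values in [-1, 1].
def car_rhs(x, y, theta, v, w, W=4.0, d=0.07):
    xdot = v * np.cos(theta) - w * W * d * np.sin(theta)
    ydot = v * np.sin(theta) + w * W * d * np.cos(theta)
    thetadot = w * W
    return np.array([xdot, ydot, thetadot])
\end{verbatim}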
\subsection{The Dynamic Programming Approach} We resolve the optimal steering plan using dynamic programming and a Hamilton-Jacobi-Bellman (HJB) equation. To analyze the problem in the dynamic programming framework, we first define the travel-time function. For a given configuration $(x,y,\theta) \in \Omega \times [0,2\pi)$ and time $t \in [0,T]$, we restrict ourselves to trajectories $(\boldsymbol{x}(\cdot),\boldsymbol{y}(\cdot),\boldsymbol{\theta}(\cdot))$ such that $(\boldsymbol{x}(t),\boldsymbol{y}(t),\boldsymbol{\theta}(t)) = (x,y,\theta)$. For such trajectories, if $\boldsymbol{v}(\cdot),\boldsymbol{\omega}(\cdot)$ is the corresponding steering plan, we define the first arrival time \begin{equation}\label{eq:arrivalTime} t^*_{\boldsymbol{v},\boldsymbol{\omega}} = \inf\{s \, : \, (\boldsymbol{x}(s),\boldsymbol{y}(s),\boldsymbol{\theta}(s)) = (x_f,y_f,\theta_f)\}.\end{equation} The cost functional for the control problem is then \begin{equation}\label{eq:functional} \mathscr{T}(x,y,\theta,t,\boldsymbol{v}(\cdot),\boldsymbol{\omega}(\cdot)) = \left\{\begin{matrix}t^*_{\boldsymbol{v},\boldsymbol{\omega}}, & \text{ if } t^*_{\boldsymbol{v},\boldsymbol{\omega}} \le T, \\ +\infty, & \text{otherwise}. \end{matrix} \right.\end{equation} The optimal travel time function is then defined by \begin{equation} \label{eq:valueFunc}u(x,y,\theta,t) = \inf_{\boldsymbol{v}(\cdot),\boldsymbol{\omega}(\cdot)} \mathscr{T}(x,y,\theta,t,\boldsymbol{v}(\cdot),\boldsymbol{\omega}(\cdot)). \end{equation} Intuitively, $u(x,y,\theta,t)$ is the minimal time required to steer the car to $(x_f,y_f,\theta_f)$, given that the car is at $(x,y,\theta)$ at time $t$. Note that if $(x,y,\theta)$ is far from $(x_f,y_f,\theta_f)$ and $t$ is close to $T$, there may be no way to steer the car to the ending configuration in the allotted time. If this is the case, then $u(x,y,\theta,t) = +\infty$. However, if there are any admissible trajectories $(\boldsymbol{x}(\cdot),\boldsymbol{y}(\cdot),\boldsymbol{\theta}(\cdot))$ such that $(\boldsymbol{x}(t),\boldsymbol{y}(t),\boldsymbol{\theta}(t)) = (x,y,\theta)$, then $u(x,y,\theta,t) \le T$. We want to derive a partial differential equation satisfied by the optimal travel time function. The dynamic programming principle \cite{Bellman} for this control problem is \begin{equation} \label{eq:DPP} \begin{split}u(&x,y,\theta,t) = \\ &\delta + \inf_{\boldsymbol{v}(\cdot),\boldsymbol{\omega}(\cdot)}\{u(\boldsymbol{x}(t+\delta), \boldsymbol{y}(t+\delta), \boldsymbol{\theta}(t+\delta),t+\delta)\}\end{split} \end{equation} where $(\boldsymbol{x}(t),\boldsymbol{y}(t),\boldsymbol{\theta}(t)) = (x,y,\theta)$ and the infimum is taken with respect to the values $\boldsymbol{v}(s),\boldsymbol{\omega}(s)$ for $s \in (t,t+\delta)$. Supposing that $u(x,y,\theta,t)$ is smooth, we can rearrange, divide by $\delta$, and take the limit as $\delta \to 0$ to arrive at \begin{equation} \label{eq:preHJB} \inf_{v,\omega} \big\{ u_t + \dot{\boldsymbol{x}} u_x + \dot{\boldsymbol{y}} u_y + \dot{\boldsymbol{\theta}} u_{\theta} \big\} = -1, \end{equation} whereupon inserting \eqref{eq:kinematic} yields \begin{equation} u_t + \inf_{v,\omega} \left\{ \begin{split} (&u_x \cos\theta + u_y \sin \theta)v\,\,+ \\ &W(-du_x\sin\theta + du_y\cos\theta+ u_\theta)\omega \end{split}\right\} = -1. \end{equation} Notice the minimization is linear in the variables $v,\omega \in [-1,1]$, and thus the minimizing values can be resolved explicitly. We see that \begin{equation} \label{eq:controls} \begin{split} v &= - \text{sign}(u_x \cos\theta + u_y \sin \theta), \\ \omega &= -\text{sign}(-d\sin\theta u_x + d\cos\theta u_y + u_\theta), \end{split} \end{equation} where $u(x,y,\theta,t)$ solves the HJB equation \begin{equation} \label{eq:HJB} \begin{split} u_t& - \abs{u_x \cos \theta + u_y \sin \theta}\\ &\,\,\,\,\,\,- W\abs{-d u_x \sin\theta + du_y \cos \theta + u_\theta} \end{split} = -1. \end{equation} This derivation is only valid when $u(x,y,\theta,t)$ is smooth, which is not expected to be the case. However, under very general conditions, the travel time function is the unique viscosity solution of \eqref{eq:HJB} \cite{CrandallLions}. For a fully rigorous derivation of the Hamilton-Jacobi-Bellman equation, see \cite{Bardi1997}.
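For illustration, given (approximate) partial derivatives of $u$ at a single point, the feedback law \eqref{eq:controls} and the Hamiltonian appearing in \eqref{eq:HJB} may be evaluated as in the following Python sketch; the function names are our own choices and not part of the formulation.
\begin{verbatim}
import numpy as np

# Sketch: bang-bang feedback law (eq:controls) and the Hamiltonian
# in (eq:HJB), given partial derivatives of u at a single point.
def feedback_controls(ux, uy, utheta, theta, d=0.07):
    p = ux * np.cos(theta) + uy * np.sin(theta)                    # multiplies v
    q = -d * ux * np.sin(theta) + d * uy * np.cos(theta) + utheta  # multiplies omega
    return -np.sign(p), -np.sign(q)   # (v, omega)

def hamiltonian(ux, uy, utheta, theta, W=4.0, d=0.07):
    p = ux * np.cos(theta) + uy * np.sin(theta)
    q = -d * ux * np.sin(theta) + d * uy * np.cos(theta) + utheta
    return -abs(p) - W * abs(q)       # so that u_t + H = -1
\end{verbatim}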
There are a few natural conditions appended to \eqref{eq:HJB}. At the terminal time $T$, the cost functional \eqref{eq:functional} assigns a value of either 0 or $+\infty$, depending on whether the car is at the ending configuration or not. Thus, we have the terminal condition \begin{equation}\label{eq:terminal} u(x,y,\theta,T) = \left\{\begin{matrix} 0, & (x,y,\theta) = (x_f,y_f,\theta_f), \\ +\infty, & \text{otherwise}, \end{matrix} \right. \end{equation} and we want to resolve $u(x,y,\theta,t)$ for \emph{preceding} times $t \in [0,T)$. So the equation runs ``backwards'' in time. Likewise, if the trajectory has already arrived at the ending configuration, the remaining travel time is 0, so we have the boundary condition \begin{equation}\label{eq:boundary} u(x_f,y_f,\theta_f,t) = 0, \,\,\,\,\,\, t \in [0,T]. \end{equation} Lastly, to ensure the car does not collide with obstacles, we assign $u(x,y,\theta,t) = +\infty$ for any $(x,y,\theta,t)$ such that $C(x,y,\theta) \cap \Omega_{\text{obs}}(t) \neq \varnothing$. By \eqref{eq:controls}, the only possible values of the control variables are $v,\omega\in\{-1,0,1\}$, resulting in a bang-bang controller which has a ``no bang'' option. This makes intuitive sense because there is never incentive to drive or turn slower than the maximum possible speed, unless one needs to wait for an obstacle to move out of the way (whereupon $v=0$) or one needs to drive in a straight line (whereupon $\omega = 0$). When no obstacles are present, one can eliminate the $v = 0$ option and the path will consist of straight lines and arcs of circles of minimum radius, which agrees with early analysis of the problem \cite{Dubins,ReedsShepp}. As a final note, this derivation is very similar to that in \cite{ParkinsonCar}. However, when the obstacles are stationary, as in \cite{TakeiTsai1,TakeiTsai2,ParkinsonCar}, the optimal travel time function does not depend on $t$, since the optimal trajectory depends only upon the current configuration, not upon the time $t$ when the car occupies that configuration. In that case, one can eliminate the time horizon $T$, and opt instead for a stationary HJB equation. One can then visualize solving the stationary HJB equation by evolving a front outward from the final configuration, and recording the time as the front passes through other configurations, terminating when each point in the domain $\Omega \times [0,2\pi)$ has been assigned a value. This is the philosophy behind level-set inspired optimal path planning \cite{Parkinson,Parkinson2}, and numerical implementations like fast sweeping \cite{tsai2003fast,kao2004lax,RotatingGrid} and fast marching methods \cite{Tsitsiklis,Sethian1}. In theory, something similar is possible here. If one does not care to enforce a finite time horizon, then making the substitution $\tau = T-t$ and taking $T \to \infty$ will do away with it. However, in practice, we will want to discretize the HJB equation in order to solve it computationally, which will require choosing a fixed time horizon. Thus we cannot do away with $T$, but to minimize its effect, we set it large enough that the travel time $u(x,y,\theta,0)$ is finite for all $(x,y,\theta) \in \Omega \times [0,2\pi)$.
In this manner, any initial configuration (not overlapping the obstacles) will have admissible paths which reach $(x_f,y_f,\theta_f)$ within time $T$. \section{NUMERICAL METHODS} In this section, we design a numerical scheme to approximate \eqref{eq:HJB}. Since Hamilton-Jacobi equations admit non-smooth solutions which cannot be approximated by simple finite difference schemes, effort has been expended to develop schemes which resolve the viscosity solution. For a survey of numerical methods for Hamilton-Jacobi equations, see \cite{Falcone,osher2003level}. \subsection{An Upwind, Monotone Scheme for \eqref{eq:HJB}} For simplicity, we confine ourselves to a rectangular spatial domain $\Omega = [x_{\text{min}}, x_{\text{max}}] \times [y_{\text{min}},y_{\text{max}}]$. Choosing $I,J,K,N \in \mathbb N$, let $(x_i)_{i=0}^I, (y_j)_{j=0}^{J}, (\theta_k)_{k=0}^K, (t_n)_{n=0}^N$ be uniform discretizations of their respective domains with grid parameters $\Delta x, \Delta y, \Delta \theta, \Delta t$, and let $u_{ijk}^n$ be our approximation to $u(x_i,y_j,\theta_k,t_n)$. For each $v,\omega \in \{-1,0,1\}$, define \begin{equation} \label{eq:coeffs} \begin{split} A_k(v,\omega) &= v\cos \theta_k - \omega Wd\sin\theta_k, \\ a_k(v,\omega) &= \text{sign}(v\cos \theta_k - \omega Wd\sin\theta_k), \\ B_k(v,\omega) &= v\sin \theta_k + \omega W d \cos \theta_k,\\ b_k(v,\omega) &= \text{sign}(v\sin \theta_k + \omega W d \cos \theta_k). \end{split} \end{equation} Then \eqref{eq:preHJB} can be rewritten as \begin{equation}\label{eq:discEq} u_t + \min_{v,\omega}\{A_k(v,\omega)u_x +B_k(v,\omega)u_y + \omega W u_\theta\} = -1. \end{equation} Recall that the terminal values $u^N_{ijk}$ are supplied here, and we need to integrate this equation backwards in time. Thus at time step $t_n$, we need to resolve $u^n_{ijk}$ given known values $u^{n+1}_{ijk}$. This suggests backward Euler time integration \begin{equation} \label{eq:timeDisc} (u_t)^n_{ijk} = \frac{u^{n+1}_{ijk} - u^{n}_{ijk}}{\Delta t}. \end{equation} The upwind approximations to the other derivatives in \eqref{eq:discEq} using $u^{n+1}_{ijk}$ are given by \begin{equation} \label{eq:upwindApprox} \begin{split} (A_k(v,\omega)u_x)^{n+1}_{ijk} &= \abs{A_{k}(v,\omega)} \left(\frac{u^{n+1}_{i+a_k(v,\omega),j,k} - u^{n+1}_{ijk}}{\Delta x}\right), \\ (B_k(v,\omega)u_y)^{n+1}_{ijk} &= \abs{B_{k}(v,\omega)}\left( \frac{u^{n+1}_{i,j+b_k(v,\omega),k} - u^{n+1}_{ijk}}{\Delta y}\right), \\ (\omega W u_\theta)^{n+1}_{ijk} &= \abs \omega W \left(\frac{u^{n+1}_{i,j,k+\text{sign}(\omega)} - u^{n+1}_{ijk}}{\Delta\theta}\right). \end{split} \end{equation} We insert these approximations in \eqref{eq:discEq} to arrive at \begin{equation} \label{eq:update} \begin{split} u^n_{ijk} = u^{n+1}_{ijk} + &\Delta t \left(1 + \min_{v,\omega} \{(A_k(v,\omega)u_x)^{n+1}_{ijk} \right.\\&+ \left.(B_k(v,\omega)u_y)^{n+1}_{ijk} + (\omega W u_\theta)^{n+1}_{ijk} \} \right). \end{split} \end{equation} Since there are only finitely many pairs $(v,\omega)$, we can compute the right hand side for each pair and explicitly choose the pair which suggests the minimum possible value. Using this formula and stepping through $n=N-1, N-2,\ldots, 1,0$, we arrive at our approximation of the travel time function. To initialize, we set $u_{ijk}^n = +\infty$ (or some very large number) for all $i,j,k,n$ except at the node $(i_f,j_f,k_f)$ representing the configuration nearest to $(x_f,y_f,\theta_f)$, where we set $u^n_{i_f,j_f,k_f} = 0$ for all $n$. We then only update the node $u^n_{ijk}$ if the value suggested by \eqref{eq:update} is smaller than the value already stored at $u^n_{ijk}$. This ensures that the scheme is monotone so long as the CFL condition \begin{equation} \label{eq:CFL} \Delta t \left(\frac{1+Wd}{\Delta x} + \frac{1+Wd}{\Delta y} + \frac{W}{\Delta \theta} \right) \le 1 \end{equation} is satisfied \cite{Falcone,osher2003level}. In this case, since the scheme is also consistent, the approximation converges to the viscosity solution of \eqref{eq:HJB} as $\Delta x, \Delta y, \Delta \theta, \Delta t \to 0$.
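Purely as an illustration of how \eqref{eq:update} might be organized in practice, the following Python sketch performs one backward time step. The array layout, the helper names, the use of a large finite number in place of $+\infty$, and the vectorization with NumPy are all our own choices, and the obstacle and boundary handling anticipate the implementation notes below.
\begin{verbatim}
import numpy as np

# Sketch of one backward time step of (eq:update).  u_next holds the
# values at t_{n+1} on an (I+1) x (J+1) x (K+1) grid, theta is the 1-D
# array of grid angles, obstacle is a boolean mask of illegal nodes at
# time t_n, and target is the index (i_f, j_f, k_f).  A large finite
# number stands in for +infinity, as suggested in the text.
LARGE = 1e9
PAIRS = [(1, 1), (1, -1), (-1, 1), (-1, -1), (1, 0), (-1, 0), (0, 0)]

def step_backward(u_next, theta, obstacle, target, dx, dy, dth, dt,
                  W=4.0, d=0.07):
    best = np.full_like(u_next, np.inf)       # min over the control pairs
    th = theta[None, None, :]                 # broadcast over (i, j, k)
    for v, w in PAIRS:
        A = v * np.cos(th) - w * W * d * np.sin(th)
        B = v * np.sin(th) + w * W * d * np.cos(th)
        # upwind neighbours u_{i+sign(A)}, u_{j+sign(B)}, u_{k+sign(w)}
        ux = np.where(A >= 0, np.roll(u_next, -1, 0), np.roll(u_next, 1, 0))
        uy = np.where(B >= 0, np.roll(u_next, -1, 1), np.roll(u_next, 1, 1))
        ut = np.roll(u_next, -w, 2) if w else u_next   # periodic in theta
        cand = (np.abs(A) * (ux - u_next) / dx
                + np.abs(B) * (uy - u_next) / dy
                + abs(w) * W * (ut - u_next) / dth)
        best = np.minimum(best, cand)
    u_n = np.minimum(u_next + dt * (1.0 + best), LARGE)  # update only if smaller
    u_n[0, :, :] = u_n[-1, :, :] = LARGE   # car may not leave the domain
    u_n[:, 0, :] = u_n[:, -1, :] = LARGE
    u_n[obstacle] = LARGE                  # illegal configurations at t_n
    u_n[target] = 0.0                      # boundary condition (eq:boundary)
    return u_n
\end{verbatim}
Stepping such a routine from $n=N-1$ down to $n=0$, with $\Delta t$ chosen according to \eqref{eq:CFL}, reproduces the procedure described above.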
We include a few implementation notes. First, to account for obstacles, at each time step $n$, we first need to find the illegal nodes (i.e., those which correspond to configurations wherein the car collides with an obstacle). At these nodes $(i^*,j^*,k^*)$, we do not use \eqref{eq:update}, but rather set $u^n_{i^*,j^*,k^*} = +\infty$. In previous work, this could be done in pre-processing since the obstacles were stationary and illegal configurations only needed to be resolved once. In this work, since the obstacles move, this must be repeated at every time step. Second, we use \eqref{eq:update} for $i = 2,\ldots,I-1$, $j = 2,\ldots,J-1$. The values $u_{ijk}^n$ at nodes corresponding to the spatial boundary are never updated, but should be given the value $+\infty$. This will ensure that the car never leaves the domain. Because we enforce the correct causality, the boundary nodes have no effect on interior nodes. Third, one needs to enforce periodic boundary conditions in $\theta$ by identifying the nodes at $k=0$ and $k=K$. Lastly, above it is stated that $v,\omega \in \{-1,0,1\}$. However, because it is impossible to turn a car without moving backward or forward, one should eliminate the cases $(v,\omega) = (0,\pm1)$. So there are seven possible pairs of $(v,\omega)$ to consider in total; in short, $(v,\omega) = (\pm1, \pm1), (\pm1, 0), (0,0)$. \subsection{Generating Optimal Trajectories} There are a few different manners in which one can obtain optimal control values and generate optimal trajectories. It is possible to resolve control values $v^n_{ijk},\omega^n_{ijk}$ while evaluating \eqref{eq:update}. One can define them to be the pair that achieves the minimum in \eqref{eq:update} at any node $(i,j,k,n)$. Alternatively, after resolving $u^{n}_{ijk}$, one can interpolate to off-grid values and use \eqref{eq:controls} to resolve the optimal steering plan at any point $(x,y,\theta,t)$. In either case, after choosing an initial point, one can insert the optimal control values into \eqref{eq:kinematic} and integrate the equations of motion until the trajectory reaches $(x_f,y_f,\theta_f)$. This is the approach taken by \cite{ParkinsonCar}. Here, instead, we opt for a semi-Lagrangian path-planner as in \cite{TakeiTsai2,cartee2019time}. Specifically, we first interpolate $u^n_{ijk}$ to off-grid values, so we have an approximate travel time function $u(x,y,\theta,t)$. Then, choosing an initial point $(\boldsymbol{x}_0,\boldsymbol{y}_0,\boldsymbol{\theta}_0)$ and a time step $\delta > 0$, and rewriting \eqref{eq:kinematic} as $(\dot{\boldsymbol{x}}, \dot{\boldsymbol{y}},\dot{\boldsymbol{\theta}}) = F(\boldsymbol{x},\boldsymbol{y},\boldsymbol{\theta},\boldsymbol{v},\boldsymbol{\omega})$, we set \begin{equation} \label{eq:SLscheme} \begin{split} (v^*,\omega^*) = \argmin_{v,\omega} u( (\boldsymbol{x}_\ell,\boldsymbol{y}_\ell,\boldsymbol{\theta}_\ell) + \delta F(\boldsymbol{x}_\ell,\boldsymbol{y}_\ell,\boldsymbol{\theta}_\ell,v,\omega),\ell\delta),\\ (\boldsymbol{x}_{\ell+1},\boldsymbol{y}_{\ell+1},\boldsymbol{\theta}_{\ell+1}) = (\boldsymbol{x}_\ell,\boldsymbol{y}_\ell,\boldsymbol{\theta}_\ell) + \delta F(\boldsymbol{x}_\ell,\boldsymbol{y}_\ell,\boldsymbol{\theta}_\ell,v^*,\omega^*), \end{split} \end{equation} for $\ell = 0,1,2,\ldots$, halting when $(\boldsymbol{x}_\ell,\boldsymbol{y}_\ell,\boldsymbol{\theta}_\ell)$ is within some tolerance of $(x_f,y_f,\theta_f)$.
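The following Python sketch illustrates this path extraction; it is not a definitive implementation. The interpolant \texttt{u\_interp} is assumed to be built from the grid values $u^n_{ijk}$ (for instance with \texttt{scipy.interpolate.RegularGridInterpolator}), \texttt{car\_rhs} refers to the kinematic sketch given earlier, and the stopping rule is deliberately crude.
\begin{verbatim}
import numpy as np

PAIRS = [(1, 1), (1, -1), (-1, 1), (-1, -1), (1, 0), (-1, 0), (0, 0)]

# Sketch of the semi-Lagrangian path extraction (eq:SLscheme).
# u_interp(x, y, theta, t) interpolates the grid values u^n_{ijk};
# car_rhs is the kinematic right-hand side sketched earlier.
def extract_path(u_interp, start, goal, delta, W=4.0, d=0.07,
                 tol=1e-2, max_steps=10000):
    z = np.array(start, dtype=float)            # (x, y, theta)
    path = [z.copy()]
    for ell in range(max_steps):
        best_val, best_z = np.inf, None
        for v, w in PAIRS:
            cand = z + delta * car_rhs(*z, v, w, W=W, d=d)
            val = u_interp(cand[0], cand[1], cand[2] % (2 * np.pi),
                           ell * delta)          # time argument as in (eq:SLscheme)
            if val < best_val:
                best_val, best_z = val, cand
        z = best_z
        path.append(z.copy())
        # crude stopping rule: within tolerance of the final configuration
        if np.linalg.norm(z - np.asarray(goal, dtype=float)) < tol:
            break
    return np.array(path)
\end{verbatim}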
\section{RESULTS \& EXAMPLES} \label{sec:examp} We present results of our algorithm in three examples. In all cases, we use the spatial domain $\Omega = [-1,1] \times [-1,1]$. We take the car to be a rectangle as pictured in \cref{fig:car} with $d = 0.07$ and $R = 0.04$, and we take the maximum angular velocity to be $W = 4$. These are dimensionless variables used for testing purposes. In each of the following pictures, the final configuration $(x_f,y_f,\theta_f)$ will be marked with a red star and the initial configurations of the various cars will be marked with green stars. We use a $101 \times 101 \times 101$ discretization of $\Omega \times [0,2\pi)$ and then choose $\Delta t$ according to the CFL condition \eqref{eq:CFL}. We choose the time horizon $T= 10$. As mentioned before, this simply needs to be chosen so that there are admissible paths from every point on the domain to the final configuration which take time less than $T$ to traverse. In some of the examples it could likely be smaller, but $T = 10$ was sufficiently large for all of them. \begin{figure*} \caption{In the first example, we have several cars starting in the corners and ending in the center facing due west. The obstacles (black and blue sectors) rotate counterclockwise with black obstacles rotating at three times the speed of the blue ones.} \label{fig:examp1} \end{figure*} In the first example, the final configuration is $(x_f, y_f, \theta_f) = (0,0,\pi)$, meaning the cars will end at the center of the domain, facing due west. In this case, the obstacles $\Omega_{\text{obs}}(t)$ are four annular sectors which rotate about the origin in the counterclockwise direction. These are represented in black and blue in \cref{fig:examp1}. The black obstacles rotate at three times the speed of the blue obstacles. Notice that in the second panel, the green and blue cars (respectively bottom right and top left corners) need to stop and wait to let the obstacles pass before completing their paths. The grey and pink cars are essentially unoccluded and can travel directly to the destination. We note that these paths are generated individually, and simply plotted over each other. There is no competition between the cars. In the second example, the final configuration is $(x_f,y_f,\theta_f) = (0.8, 0.8, \pi/4)$ so that the car needs to end near the top right corner of the domain facing northeast. The car begins in the bottom left corner of the domain as seen in \cref{fig:doors}, and must navigate through three moving doorways.
The black bars represent the obstacles and they oscillate as indicated by the arrows. The car is able to navigate through the domain without stopping to wait for the doors. In the third example, we consider the more realistic scenario of a car changing lanes in between two other cars as seen in \cref{fig:changingLanes}. In this case, the two blue cars are treated as obstacles and the orange car must slide in between them. \section{CONCLUSION \& DISCUSSION} We present a Hamilton-Jacobi-Bellman formulation for time-optimal paths of simple vehicles in the presence of moving obstacles. This is distinguished from previous similar formulations which could only handle stationary obstacles. There are many ways in which this work could be extended. Some simple improvements would be to account for other realistic concerns such as energy minimization or instrumentation noise, which can both be added to the model in a straightforward manner, though they may complicate the numerical methods. Perhaps the biggest drawback of this method is that it is currently too computationally intensive for real-time applications. The simulations for each of the examples in \cref{sec:examp} required several minutes of CPU time (on the authors' home computers). However, one may be able to apply recent methods for high-dimensional Hamilton-Jacobi equations \cite{Darbon,Lin}. These methods are based on Hopf-Lax type formulas and trade finite differences for optimization problems. It may be difficult to account for crucial boundary conditions in our model when using such schemes, so some care would be required. However, if they could be applied to this problem, it would also provide an opportunity to extend the model to higher dimensions where finite difference methods are infeasible. \begin{figure*} \caption{In the second example, one car attempts to navigate through moving doorways. The black obstacles oscillate up and down as indicated by the arrows in each panel.} \label{fig:doors} \end{figure*} \begin{figure} \caption{A car (orange) changing lanes between two other cars (blue). Here the blue cars are the obstacles.} \label{fig:changingLanes} \end{figure} \section*{ACKNOWLEDGMENT} The authors were supported in part by NSF DMS-1937229 through the Data Driven Discovery Research Training Group at the University of Arizona. \end{document}
\begin{document} \setlength{\baselineskip}{15pt} \title{The almost Gorenstein Rees algebras over two-dimensional regular local rings} \author{Shiro Goto} \address{Department of Mathematics, School of Science and Technology, Meiji University, 1-1-1 Higashi-mita, Tama-ku, Kawasaki 214-8571, Japan} \email{[email protected]} \author{Naoyuki Matsuoka} \address{Department of Mathematics, School of Science and Technology, Meiji University, 1-1-1 Higashi-mita, Tama-ku, Kawasaki 214-8571, Japan} \email{[email protected]} \author{Naoki Taniguchi} \address{Department of Mathematics, School of Science and Technology, Meiji University, 1-1-1 Higashi-mita, Tama-ku, Kawasaki 214-8571, Japan} \email{[email protected]} \author{Ken-ichi Yoshida} \address{Department of Mathematics, College of Humanities and Sciences, Nihon University, 3-25-40 Sakurajosui, Setagaya-Ku, Tokyo 156-8550, Japan} \email{[email protected]} \thanks{2010 {\em Mathematics Subject Classification.} 13H10, 13H15, 13A30} \thanks{{\em Key words and phrases.} almost Gorenstein local ring, almost Gorenstein graded ring, Rees algebra} \thanks{The first author was partially supported by JSPS Grant-in-Aid for Scientific Research 25400051. The second author was partially supported by JSPS Grant-in-Aid for Scientific Research 26400054. The third author was partially supported by Grant-in-Aid for JSPS Fellows 26-126 and by JSPS Research Fellow. The fourth author was partially supported by JSPS Grant-in-Aid for Scientific Research 25400050.} \begin{abstract} Let $(R,\mathfrak m)$ be a two-dimensional regular local ring with infinite residue class field. Then the Rees algebra $\mathcal{R}(I)= \bigoplus_{n \ge 0}I^n$ of $I$ is an almost Gorenstein graded ring in the sense of \cite{GTT} for every $\mathfrak m$-primary integrally closed ideal $I$ in $R$. \end{abstract} \maketitle \section{Introduction}\label{intro} The main purpose of this paper is to prove the following. \begin{thm}\label{3.4} Let $(R,\mathfrak m)$ be a two-dimensional regular local ring with infinite residue class field and $I$ an $\mathfrak m$-primary integrally closed ideal in $R$. Then the Rees algebra $\mathcal{R}(I)=\bigoplus_{n \ge 0}I^n$ of $I$ is an almost Gorenstein graded ring. \end{thm} As a direct consequence one has the following. \begin{cor}\label{3.5} Let $(R,\mathfrak m)$ be a two-dimensional regular local ring with infinite residue class field. Then $\mathcal{R}(\mathfrak m^\ell)$ is an almost Gorenstein graded ring for every integer $\ell > 0$. \end{cor} The proof of Theorem 1.1 depends on a result of J. Verma \cite{V} which guarantees the existence of joint reductions with joint reduction number zero. Therefore our method of proof works also for two-dimensional rational singularities, which we shall discuss in the forthcoming paper \cite{GTY1}. Here, before entering details, let us recall the notion of almost Gorenstein {\it graded/local} ring as well as some historical notes about it.
Almost Gorenstein rings are a new class of Cohen-Macaulay rings, which are not necessarily Gorenstein, but still good, possibly next to the Gorenstein rings. The notion of these local rings dates back to the paper \cite{BF} of V. Barucci and R. Fr\"oberg in 1997, where they dealt with one-dimensional analytically unramified local rings. Because their notion is not flexible enough to analyze analytically ramified rings, in 2013 S. Goto, N. Matsuoka, and T. T. Phuong \cite{GMP} extended the notion to arbitrary (but still of dimension one) Cohen-Macaulay local rings. The reader may consult \cite{GMP} for examples of analytically ramified almost Gorenstein local rings. In 2015 S. Goto, R. Takahashi, and N. Taniguchi \cite{GTT} finally gave the definition of almost Gorenstein graded/local rings of higher dimension. In \cite{GTT} the authors developed a very nice theory, exploring many concrete examples. We recall here the definition of almost Gorenstein graded rings given in \cite{GTT}. We refer to \cite[Definition 1.1]{GTT} for the definition of almost Gorenstein {\it local} rings. \begin{defn}[{\cite[Definition 1.5]{GTT}}]\label{1.2} Let $R = \bigoplus_{n \ge 0}R_n$ be a Cohen-Macaulay graded ring such that $R_0$ is a local ring. Suppose that $R$ possesses the graded canonical module $\mathrm{K}_R$. Let $\mathfrak{M}$ be the unique graded maximal ideal of $R$ and $a = \mathrm{a}(R)$ the $\mathrm{a}$-invariant of $R$. Then we say that $R$ is an almost Gorenstein {\it graded} ring, if there exists an exact sequence $$ 0 \to R \to \mathrm{K}_R(-a) \to C \to 0 $$ of graded $R$-modules such that $\mu_R(C) = \mathrm{e}_\mathfrak{M}^0(C)$, where $\mu_R(C)$ denotes the number of elements in a minimal system of generators of $C$ and $\mathrm{e}_\mathfrak{M}^0(C)$ is the multiplicity of $C$ with respect to $\mathfrak{M}$. Remember that $\mathrm{K}_R(-a)$ stands for the graded $R$-module whose underlying $R$-module is the same as that of $\mathrm{K}_R$ and whose grading is given by $[\mathrm{K}_R(-a)]_n = [\mathrm{K}_R]_{n-a}$ for all $n \in \mathbb{Z}$. \end{defn} Definition \ref{1.2} means that if $R$ is an almost Gorenstein graded ring, then even though $R$ is not a Gorenstein ring, $R$ can be embedded into the graded $R$-module $\mathrm{K}_R(-a)$, so that the difference $\mathrm{K}_R(-a)/R$ is a graded Ulrich $R$-module (\cite{BHU}) and behaves well. The reader may consult \cite{GTT} about the basic theory of almost Gorenstein graded/local rings and the relation between the graded theory and the local theory, as well. For instance, it is shown in \cite{GTT} that certain Cohen-Macaulay local rings of finite Cohen-Macaulay representation type, including two-dimensional rational singularities, are almost Gorenstein local rings. The almost Gorenstein local rings which are not Gorenstein are G-regular (\cite[Corollary 4.5]{GTT}) in the sense of \cite{T}. They are now getting revealed to enjoy good properties.
The theory of Rees algebras has been satisfactorily developed and nowadays one knows many Cohen-Macaulay Rees algebras (see, e.g., \cite{GS, HU, MU, SUV}). Among them Gorenstein Rees algebras are rather rare (\cite{Ikeda}). Nevertheless, although they are not Gorenstein, some of Cohen-Macaulay Rees algebras are still good and could be {\it almost Gorenstein} rings, which we are eager to report in this paper. Except \cite[Theorems 8.2, 8.3]{GTT}, our Theorem 1.1 is the first attempt to answer the question of when the Rees algebras are almost Gorenstein graded rings. For these reasons our Theorem 1.1 might have some significance. We briefly explain how this paper is organized. The proof of Theorem 1.1 shall be given in Section 2. For the Rees algebras of modules over two-dimensional regular local rings we have a similar result, which we give in Section 2 (Corollary \ref{3.6}). In Section 3 we explore the case where the ideals are linearly presented over power series rings. The result (Theorem \ref{4.1}) seems to suggest that almost Gorenstein Rees algebras are still rare, when the dimension of base rings is greater than two, which we shall discuss in the forthcoming paper \cite{GTY2}. In Section 4 we study the Rees algebras of socle ideals $Q:\mathfrak m$ in a two-dimensional regular local ring $(R,\mathfrak m)$ and show that Rees algebras are not necessarily almost Gorenstein graded rings even for these ideals (Corollary \ref{badex}). In what follows, unless otherwise specified, let $(R,\mathfrak m)$ denote a Cohen-Macaulay local ring. For each finitely generated $R$-module $M$ let $\mu_R(M)$ (resp. $\mathrm{e}_\mathfrak m^0(M)$) denote the number of elements in a minimal system of generators for $M$ (resp. the multiplicity of $M$ with respect to $\mathfrak m$). Let $\mathrm{K}_R$ stand for the canonical module of $R$. \section{Proof of Theorem 1.1} The purpose of this section is to prove Theorem 1.1. Let $(R, \mathfrak m)$ be a Gorenstein local ring with $\dim R = 2$ and let $I \subsetneq R$ be an $\mathfrak m$-primary ideal of $R$. Assume that $I$ contains a parameter ideal $Q=(a,b)$ of $R$ such that $I^2 = Q I$. We set $J=Q:I$. Let $$ \mathcal{R} = R[It] \subseteq R[t] \ \ \text{and} \ \ T = R[Qt]\subseteq R[t], $$ where $t$ stands for an indeterminate over $R$. Remember that the Rees algebra $\mathcal{R}$ of $I$ is a Cohen-Macaulay ring (\cite{GS}) with $\mathrm{a}(\mathcal{R}) = -1$ and $\mathcal{R} = T + T{\cdot}It$, while the Rees algebra $T$ of $Q$ is a Gorenstein ring of dimension $3$ and $\mathrm{a}(T)=-1$ (remember that $T \cong R[x,y]/(bx-ay)$). Hence $\mathrm{K}_T(1) \cong T$ as a graded $T$-module, where $\mathrm{K}_T$ denotes the graded canonical module of $T$. Let us begin with the following, which is a special case of \cite[Theorem 2.7 (a)]{U}. We note a brief proof. \begin{prop}\label{3.1} $\mathrm{K}_{\mathcal{R}}(1) \cong J \mathcal{R}$ as a graded $\mathcal{R}$-module.
\end{prop} \begin{proof} Since $\mathcal{R}$ is a module-finite extension of $T$, we get $$ \mathrm{K}_{\mathcal{R}}(1) \cong \operatorname{Hom}_T(\mathcal{R}, \mathrm{K}_T)(1) \cong \operatorname{Hom}_T(\mathcal{R}, T) \cong T:_F \mathcal{R} $$ as graded $\mathcal{R}$-modules, where $F=\mathrm{Q}(T) =\mathrm{Q}(\mathcal{R})$ is the total ring of fractions. Therefore $T:_F \mathcal{R} = T:_T It$, since $\mathcal{R} = T + T{\cdot}It$. Because $Q^n \cap [Q^{n+1}:I] = Q^n[Q:I]$ for all $n \ge 0$, we have $T:_T It = JT$. Hence $T:_F \mathcal{R}=JT$, so that $JT = J\mathcal{R}$. Thus $\mathrm{K}_{\mathcal{R}}(1) \cong J \mathcal{R}$ as a graded $\mathcal{R}$-module. \end{proof} \begin{cor}\label{3.2} The ideal $J =Q:I$ in $R$ is integrally closed, if $\mathcal{R}$ is a normal ring. \end{cor} \begin{proof} Since $J\mathcal{R} \cong \mathrm{K}_\mathcal{R}(1)$, the ideal $J\mathcal{R}$ of $\mathcal{R}$ is unmixed and of height one. Therefore, if $\mathcal{R}$ is a normal ring, $J\mathcal{R}$ must be integrally closed in $\mathcal{R}$, whence $J$ is integrally closed in $R$ because $\overline{J} \subseteq J\mathcal{R}$, where $\overline{J}$ denotes the integral closure of $J$. \end{proof} Let us give the following criterion for $\mathcal{R}$ to be a special kind of almost Gorenstein graded rings. Notice that Condition (2) in Theorem \ref{3.3} requires the existence of joint reductions of $\mathfrak m$, $I$, and $J$ with reduction number zero (cf. \cite{V}). \begin{thm}\label{3.3} With the same notation as above, set $\mathfrak{M} = \mathfrak m\mathcal{R} + \mathcal{R}_+$, the graded maximal ideal of $\mathcal{R}$. Then the following conditions are equivalent. \begin{enumerate}[$(1)$] \item There exists an exact sequence $$ 0 \to \mathcal{R} \to \mathrm{K}_{\mathcal{R}}(1) \to C \to 0 $$ of graded $\mathcal{R}$-modules such that $\mathfrak{M} C = (\xi, \eta)C$ for some homogeneous elements $\xi, \eta$ of $\mathfrak{M}$. \item There exist elements $f\in \mathfrak m$, $g \in I$, and $h \in J$ such that $$ IJ = gJ + I h \ \ \ \text{and} \ \ \ \mathfrak m J = f J + \mathfrak m h. $$ \end{enumerate} When this is the case, $\mathcal{R}$ is an almost Gorenstein graded ring. \end{thm} \begin{proof} $(2) \Rightarrow (1)$ Notice that $\mathfrak{M}{\cdot}J\mathcal{R} \subseteq (f, gt){\cdot}J\mathcal{R} + \mathcal{R} h$, since $IJ = gJ + I h$ and $\mathfrak m J = f J + \mathfrak m h$. Consider the exact sequence $$ \mathcal{R} \overset{\varphi}{\to} J\mathcal{R} \to C \to 0 $$ of graded $\mathcal{R}$-modules where $\varphi(1) = h$.
We then have $\mathfrak{M} C = (f, gt)C$, so that $\dim_{\mathcal{R}_{\mathfrak{M}}}C_{\mathfrak{M}} \le 2$. Hence by \cite[Lemma 3.1]{GTT} the homomorphism $\varphi$ is injective and $\mathcal{R}$ is an almost Gorenstein graded ring. $(1) \Rightarrow (2)$ Suppose that $\mathcal{R}$ is a Gorenstein ring. Then $\mu_R(J) = 1$, since $\mathrm{K}_\mathcal{R}(1)\cong J\mathcal{R}$. Hence $J = R$ as $\mathfrak m \subseteq \sqrt{J}$, so that choosing $h=1$ and $f = g= 0$, we get $IJ = gJ + I h$ and $\mathfrak m J = f J + \mathfrak m h$. Suppose that $\mathcal{R}$ is not a Gorenstein ring and consider the exact sequence $$ 0 \to \mathcal{R} \overset{\varphi}{\to} J\mathcal{R} \to C \to 0 $$ of graded $\mathcal{R}$-modules with $C \ne (0)$ and $\mathfrak{M} C = (\xi, \eta)C$ for some homogeneous elements $\xi, \eta$ of $\mathfrak{M}$. Hence $\mathcal{R}_{\mathfrak{M}}$ is an almost Gorenstein local ring in the sense of \cite[Definition 3.3]{GTT}. We set $h = \varphi(1) \in J$, $m=\deg \xi$, and $n=\deg \eta$; hence $C = J\mathcal{R}/\mathcal{R} h$. Remember that $h \not\in \mathfrak m J$, since $\mathcal{R}_\mathfrak{M}$ is not a regular local ring (see \cite[Corollary 3.10]{GTT}). If $\min\{m , n\} > 0$, then $\mathfrak{M} C \subseteq \mathcal{R}_+ C$, whence $\mathfrak m C_0 = (0)$ (notice that $[\mathcal{R}_+ C]_0 = (0)$, as $C=\mathcal{R} C_0$). Therefore $\mathfrak m J \subseteq (h)$, so that we have $J =(h)=R$. Thus $\mathcal{R} h = J \mathcal{R}$ and $\mathcal{R}$ is a Gorenstein ring, which is impossible. Assume $m=0$. If $n=0$, then $\mathfrak{M} C = \mathfrak m C$ since $\xi, \eta \in \mathfrak m$, so that $$ C_1 \subseteq \mathcal{R}_+ C_0 \subseteq \mathfrak m C $$ and therefore $C_1 = (0)$ by Nakayama's lemma. Hence $IJ = Ih$ as $[J\mathcal{R}]_1 = \varphi (\mathcal{R}_1)$, which shows $(h)$ is a reduction of $J$, so that $(h) = R=J$. Therefore $\mathcal{R}$ is a Gorenstein ring, which is impossible. If $n \ge 2$, then because $$\mathfrak{M}{\cdot}J\mathcal{R} \subseteq \xi{\cdot}J\mathcal{R} + \eta{\cdot}J\mathcal{R} + \mathcal{R} h,$$ we get $IJ \subseteq \xi IJ + Ih$, whence $IJ = Ih$. This is impossible as we have shown above. Hence $n =1$. Let us write $\eta = gt$ with $g \in I$ and take $f = \xi$. We then have $$ \mathfrak{M}{\cdot}J\mathcal{R} \subseteq (f, gt){\cdot}J\mathcal{R} + \mathcal{R} h, $$ whence $\mathfrak m J \subseteq fJ + R h$. Because $h \not\in \mathfrak m J$, we get $\mathfrak m J \subseteq fJ + \mathfrak m h$, so that $\mathfrak m J = fJ+\mathfrak m h$, while $IJ =g J + I h$, because $IJ \subseteq fIJ + gJ + Ih$. This completes the proof of Theorem \ref{3.3}. \end{proof} Let us explore two examples to show how Theorem \ref{3.3} works.
\begin{ex} Let $S = k[[x, y, z]]$ be the formal power series ring over an infinite field $k$. Let $\mathfrak n = (x,y,z)$ and choose $f \in \mathfrak n^2 \setminus \mathfrak n^3$. We set $R = S/(f)$ and $\mathfrak m = \mathfrak n/(f)$. Then for every integer $\ell > 0$ the Rees algebra $\mathcal{R}(\mathfrak m^\ell)$ of $\mathfrak m^\ell$ is an almost Gorenstein graded ring and $\mathrm{r}(\mathcal{R}) = 2\ell + 1$, where $\mathrm{r}(\mathcal{R})$ denotes the Cohen-Macaulay type of $\mathcal{R}$. \end{ex} \begin{proof} Since $\mathrm{e}^0_\mathfrak m(R) = 2$, we have $\mathfrak m^2 = (a,b)\mathfrak m$ for some elements $a, b \in \mathfrak m$. Let $\ell > 0$ be an integer and set $I = \mathfrak m^\ell$ and $Q = (a^\ell, b^\ell)$. We then have $I^2 =QI$ and $Q:I = I$, so that $\mathcal{R} = \mathcal{R}(I)$ is a Cohen-Macaulay ring and $\mathrm{K}_\mathcal{R}(1) \cong I\mathcal{R}$ by Proposition \ref{3.1}, whence $\mathrm{r}(\mathcal{R}) = \mu_R(I) = 2\ell + 1$. Because $\mathfrak m^{\ell +1} = a\mathfrak m^\ell + b^\ell \mathfrak m$ and $Q:I = I = \mathfrak m^\ell$, by Theorem \ref{3.3} $\mathcal{R}$ is an almost Gorenstein graded ring. \end{proof} \begin{ex} Let $(R,\mathfrak m)$ be a two-dimensional regular local ring with $\mathfrak m = (x,y)$. Let $1 \le m \le n$ be integers and set $I = (x^m)+\mathfrak m^n$. Then $\mathcal{R}(I)$ is an almost Gorenstein graded ring. \end{ex} \begin{proof} We may assume $m > 1$. We set $Q = (x^m,y^n)$ and $J = Q:I$. Then $Q \subseteq I$ and $I^2 = QI$. Since $I = (x^m) +(x^iy^{n-i} \mid 0 \le i \le m-1)$, we get \begin{eqnarray*} J = Q : (x^iy^{n-i} \mid 0 \le i \le m-1)&=& \bigcap_{i=1}^{m-1}\left[(x^m,y^n):x^iy^{n-i}\right] \\ &=& \bigcap_{i=1}^{m-1}(x^{m-i},y^i)\\ &=&\mathfrak m^{m-1}. \end{eqnarray*} Take $f = x \in \mathfrak m$, $g = x^m \in I$, and $h = y^{m-1}\in J=\mathfrak m^{m-1}$. We then have $\mathfrak m J = fJ + \mathfrak m h$ and $IJ = Ih + gJ$, so that by Theorem \ref{3.3} $\mathcal{R}(I)$ is an almost Gorenstein graded ring. \end{proof} To prove Theorem 1.1 we need a result of J. Verma \cite{V} about joint reductions of integrally closed ideals. Let $(R,\mathfrak m)$ be a Noetherian local ring. Let $I$ and $J$ be ideals of $R$ and let $a \in I$ and $b \in J$. Then we say that $a, b$ are a {\it joint reduction} of $I, J$ if $aJ + Ib$ is a reduction of $IJ$. Joint reductions always exist (see, e.g., \cite{HS}), if the residue class field of $R$ is infinite. We furthermore have the following. \begin{thm}[{\cite[Theorem 2.1]{V}}]\label{verma} Let $(R,\mathfrak m)$ be a two-dimensional regular local ring. Let $I$ and $J$ be $\mathfrak m$-primary ideals of $R$. Assume that $a, b$ are a joint reduction of $I$, $J$.
Then $IJ = aJ + Ib$, if $I$ and $J$ are integrally closed. \end{thm} We are now ready to prove Theorem 1.1. \begin{proof}[Proof of Theorem 1.1.] Let $(R,\mathfrak m)$ be a two-dimensional regular local ring with infinite residue class field and let $I$ be an $\mathfrak m$-primary integrally closed ideal in $R$. We choose a parameter ideal $Q$ of $R$ so that $Q \subseteq I$ and $I^2 = QI$ (this choice is possible; see \cite[Appendix 5]{ZS} or \cite{H}). Therefore the Rees algebra $\mathcal{R} = \mathcal{R}(I)$ is a Cohen-Macaulay ring (\cite{GS}). Because $\mathcal{R}$ is a normal ring (\cite{ZS}), by Corollary \ref{3.2} $J = Q:I$ is an integrally closed ideal in $R$. Consequently, choosing three elements $f \in \mathfrak m$, $g \in I$, and $h \in J$ so that $f, h$ are a joint reduction of $\mathfrak m, J$ and $g, h$ are a joint reduction of $I, J$, we readily get by Theorem \ref{verma} the equalities $$\mathfrak m J = fJ + \mathfrak m h \ \ \text{and} \ \ IJ = gJ + Ih$$ stated in Condition (2) of Theorem \ref{3.3}. Thus $\mathcal{R} = \mathcal{R}(I)$ is an almost Gorenstein graded ring. \end{proof} We now explore the almost Gorenstein property of the Rees algebras of modules. To state the result we need additional notation. For the rest of this section let $(R, \mathfrak m)$ be a two-dimensional regular local ring with infinite residue class field. Let $M \neq (0)$ be a finitely generated torsion-free $R$-module and assume that $M$ is non-free. Let $(-)^* = \operatorname{Hom}_R(-, R)$. Then $F=M^{**}$ is a finitely generated free $R$-module and we get a canonical exact sequence $$ 0 \to M \overset{\varphi}{\to} F \to C \to 0 $$ of $R$-modules with $C\neq (0)$ and $\ell_R(C) < \infty$. Let $\operatorname{Sym}(M)$ and $\operatorname{Sym}(F)$ denote the symmetric algebras of $M$ and $F$ respectively and let $\operatorname{Sym}(\varphi) : \operatorname{Sym}(M) \to \operatorname{Sym}(F)$ be the homomorphism induced from $\varphi : M \to F$. Then the Rees algebra $\mathcal{R}(M)$ of $M$ is defined by $$ \mathcal{R}(M) = \operatorname{Im} \left[\operatorname{Sym}(M) \overset{\operatorname{Sym}(\varphi)}{\longrightarrow} \operatorname{Sym}(F)\right] $$ (\cite{SUV}). Hence $\mathcal{R}(M) = \operatorname{Sym}(M)/T$ where $T=t(\operatorname{Sym}(M))$ denotes the torsion part of $\operatorname{Sym}(M)$, so that $M = [\mathcal{R}(M)]_1$ is an $R$-submodule of $\mathcal{R}(M)$. Let $x \in F$. Then we say that $x$ is integral over $M$, if it satisfies an integral equation $$x^n + c_1x^{n-1} + \cdots + c_n = 0$$ in the symmetric algebra $\operatorname{Sym}(F)$ with $n >0$ and $c_i \in M^i$ for each $1 \le i \le n$. Let $\overline{M}$ be the set of elements of $F$ which are integral over $M$. Then $\overline{M}$ forms an $R$-submodule of $F$, which is called the integral closure of $M$. We say that $M$ is {\it integrally closed}, if $\overline{M} = M$. With this notation we have the following. \begin{cor}\label{3.6} Let $\mathfrak{M} = \mathfrak m \mathcal{R}(M) + \mathcal{R}(M)_+$ be the unique graded maximal ideal of $\mathcal{R}(M)$ and suppose that $M$ is integrally closed.
Then $\mathcal{R}(M)_\mathfrak{M}$ is an almost Gorenstein local ring in the sense of \cite{GTT}. \end{cor} \begin{proof} Let $U = R[x_1,x_2, \ldots, x_n]$ be the polynomial ring with sufficiently large $n > 0$ and set $S = U_{\mathfrak m U}$. We denote by $\mathfrak n$ the maximal ideal of $S$. Then thanks to \cite[Theorem 3.5]{SUV} and \cite[Theorem 3.6]{HongU}, we can find some elements $f_1, f_2, \ldots, f_{r-1} \in S \otimes_R M$ ($r = \mathrm{rank}_RF$) and an $\mathfrak n$-primary integrally closed ideal $I$ in $S$, so that $f_1, f_2, \ldots, f_{r-1}$ form a regular sequence in $\mathcal{R}(S \otimes_R M)$ and $$ \mathcal{R}(S \otimes_R M)/(f_1, f_2, \ldots, f_{r - 1}) \cong \mathcal{R}(I) $$ as a graded $S$-algebra. Therefore, because $\mathcal{R}(I)$ is an almost Gorenstein graded ring by Theorem 1.1, $S \otimes_R\mathcal{R}(M) = \mathcal{R}(S \otimes_R M)$ is an almost Gorenstein graded ring (cf. \cite[Theorem 3.7 (1)]{GTT}). Consequently $\mathcal{R}(M)_\mathfrak{M}$ is an almost Gorenstein local ring by \cite[Theorem 3.9]{GTT}. \end{proof} \section{Almost Gorenstein property in Rees algebras of ideals with linear presentation matrices} Let $R=k[[x_1, x_2, \ldots, x_d]]~(d\ge 2)$ be the formal power series ring over an infinite field $k$. Let $I$ be a perfect ideal of $R$ with $\operatorname{grade}_R I = 2$, possessing a linear presentation matrix $\varphi$ $$\ \ \ \ \ 0\to R^{\oplus (n-1)} \overset{\varphi}{\to} R^{\oplus n} \to I \to 0,$$ that is, each entry of the matrix $\varphi$ is contained in $\sum_{i=1}^dkx_i$. We set $n = \mu_R(I)$ and $\mathfrak m = (x_1, x_2, \ldots, x_d)$; hence $I = \mathfrak m^{n-1}$ if $d=2$. In what follows we assume that $n > d$ and that our ideal $I$ satisfies the condition $({\rm G}_d)$ of \cite{AN}, that is $\mu_{R_\mathfrak{p}}(IR_\mathfrak{p}) \le \dim R_\mathfrak{p}$ for every $\mathfrak p \in \mathrm{V}(I) \setminus \{\mathfrak m\}$. Then thanks to \cite[Theorem 1.3]{MU} and \cite[Proposition 2.3]{HU}, the Rees algebra $\mathcal{R} = \mathcal{R}(I)$ of $I$ is a Cohen-Macaulay ring with $\mathrm{a}(\mathcal{R}) = -1$ and $$\mathrm{K}_{\mathcal{R}}(1) \cong \mathfrak m^{n-d}\mathcal{R}$$ as a graded $\mathcal{R}$-module. We are interested in the question of when $\mathcal{R}$ is an almost Gorenstein ring. Our answer is the following, which suggests almost Gorenstein Rees algebras might be rare in dimension greater than two. \begin{thm}\label{4.1} Let $\mathfrak{M}$ be the graded maximal ideal of $\mathcal{R}$. Then $\mathcal{R}_{\mathfrak{M}}$ is an almost Gorenstein local ring if and only if $d=2$. \end{thm} \begin{proof} If $d=2$, then $I = \mathfrak m^{n-1}$ and so $\mathcal{R}$ is an almost Gorenstein graded ring (Corollary \ref{3.5}). We assume that $d > 2$ and that $\mathcal{R}_{\mathfrak{M}}$ is an almost Gorenstein local ring. The goal is to produce a contradiction.
Let $\Delta_i = (-1)^{i+1} \det \varphi_i$ for each $1 \le i \le n$, where $\varphi_i$ stands for the $(n-1)\times (n-1)$ matrix which is obtained from $\varphi$ by deleting the $i$-th row. Hence $I = (\Delta_1, \Delta_2, \ldots, \Delta_n)$ and the ideal $I$ has a presentation $$(*)\ \ \ 0 \to R^{\oplus (n-1)} \overset{\varphi}{\to} R^{\oplus n} \overset{\left[\begin{smallmatrix} \Delta_1&\Delta_2&\cdots&\Delta_n\\ \end{smallmatrix}\right]} {\longrightarrow} I \to 0.$$ Notice that $\mathcal{R}$ is not a Gorenstein ring, since $\mathrm{r}(\mathcal{R})= \mu_R(\mathfrak m^{n-d})= \binom{n-1}{d-1} >1$. We set $A = \mathcal{R}_\mathfrak{M}$ and $\mathfrak n = \mathfrak{M} A$; hence $\mathrm{K}_A = [\mathrm{K}_\mathcal{R}]_\mathfrak{M}$. We take an exact sequence $$ 0 \to A \overset{\phi}{\to} \mathrm{K}_A \to C \to 0 $$ of $A$-modules such that $C \neq (0)$ and $C$ is an Ulrich $A$-module. Let $f = \phi(1)$. Then $f \not\in \mathfrak n \mathrm{K}_A$ by \cite[Corollary 3.10]{GTT} and we get the exact sequence $$(**) \ \ \ \ \ 0 \to \mathfrak n f \to \mathfrak n \mathrm{K}_A \to \mathfrak n C \to 0. $$ Because $ \mathfrak n C = (f_1, f_2, \ldots, f_d) C $ for some $f_1, f_2, \ldots, f_d \in \mathfrak n$ (\cite[Proposition 2.2]{GTT}) and $\mu_A(\mathfrak n) = d+n$, we get by the exact sequence ($**$) that $$ \mu_{\mathcal{R}}(\mathfrak{M} \mathrm{K}_{\mathcal{R}}) = \mu_{A}(\mathfrak{n} \mathrm{K}_A) \le (d + n) +d {\cdot}\left(\mathrm{r}(A)-1\right) = d \binom{n-1}{d-1} + n, $$ while $$ \mu_{\mathcal{R}}(\mathfrak{M} \mathrm{K}_{\mathcal{R}}) = \mu_R(\mathfrak m^{n-d+1}) + \mu_R(\mathfrak m^{n-d}I) = \binom{n}{d-1} + \mu_R(\mathfrak m^{n-d}I) $$ since $\mathfrak{M} =(\mathfrak m, It)\mathcal{R}$ and $\mathrm{K}_\mathcal{R}(1) = \mathfrak m^{n-d}\mathcal{R}$. Consequently we have $$ \mu_R(\mathfrak m^{n-d} I ) \le d \binom{n-1}{d-1} + n - \binom{n}{d-1}. $$ To estimate the number $\mu_R(\mathfrak m^{n-d} I )$ from below, we consider the homomorphism $$\psi : \mathfrak m^{n-d}\otimes_R I \to \mathfrak m^{n-d} I$$ defined by $x\otimes y\mapsto xy $ and set $X= \operatorname{Ker} \psi$. Let $x \in X$ and write $x = \sum_{i=1}^nx_i\otimes \Delta_i$ with $x_i \in \mathfrak m^{n-d}$. Then since $\sum_{i=1}^nx_i\Delta_i= 0$ in $R$ and since every entry of the matrix $\varphi$ is linear, the presentation $(*)$ of $I$ guarantees the existence of elements $y_j \in \mathfrak m^{n-d-1}$~$(1 \le j \le n-1)$ such that $$ \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}= \varphi \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_{n-1} \end{pmatrix}.
$$ Hence $X$ is a homomorphic image of $[\mathfrak m^{n-d-1}]^{\oplus (n-1)}$ and therefore $$ \mu_R(X) \le (n-1) \binom{n-2}{d-1} $$ in the exact sequence $$0 \to X \to \mathfrak m^{n-d}\otimes_RI \to \mathfrak m^{n-d}I \to 0.$$ Consequently $$ \mu_R(\mathfrak m^{n-d}I) \ge n \binom{n-1}{d-1} - (n-1)\binom{n-2}{d-1}, $$ so that \begin{eqnarray*} 0 &\le& \left[d\binom{n-1}{d-1} + n - \binom{n}{d-1}\right] -\left[n \binom{n-1}{d-1} - (n-1) \binom{n-2}{d-1}\right] \\ &=& \left[(d-n) \binom{n-1}{d-1} + (n-1) \binom{n-2}{d-1} \right] + \left[ n - \binom{n}{d-1} \right] \\ &=& n - \binom{n}{d-1} < 0 \end{eqnarray*} which is the required contradiction. Thus $A$ is not an almost Gorenstein local ring, if $d>2$. \end{proof} Before closing this section, let us note one concrete example. \begin{ex} Let $R = k[[x,y,z]]$ be the formal power series ring over an infinite field $k$. We set $I =(x^2y,y^2z,z^2x,xyz)$ and $Q = (x^2y, y^2z, z^2x)$. Then $Q$ is a minimal reduction of $I$ with $\mathrm{red}_Q(I)=2$. The ideal $I$ has a presentation of the form $$0 \to {R}^{\oplus 3} \overset{\varphi}{\to} R^{\oplus 4} \to I \to 0$$ with $\varphi = \left(\begin{smallmatrix} x&0&0\\ 0&y&0\\ 0&0&z\\ y&z&x \end{smallmatrix}\right)$ and it is direct to check that $I$ satisfies all the conditions required for Theorem \ref{4.1}. Hence Theorem \ref{4.1} shows that $\mathcal{R}(I)$ cannot be an almost Gorenstein graded ring, while $Q$ is not a perfect ideal of $R$ but its Rees algebra $\mathcal{R}(Q)$ is an almost Gorenstein graded ring with $\mathrm{r}(\mathcal{R}) = 2$; see \cite{Kamoi}. \end{ex} \section{The Rees algebras of socle ideals $I = (a,b):\mathfrak m$} Throughout this section let $(R,\mathfrak m)$ denote a two-dimensional regular local ring and let $Q=(a,b)$ be a parameter ideal of $R$. We set $I = Q:\mathfrak m$, $\mathcal{R} = \mathcal{R}(I)$, and $\mathfrak{M} = \mathfrak m \mathcal{R}+ \mathcal{R}_+$. In this section we are interested in the question of when $\mathcal{R}_\mathfrak{M}$ is an almost Gorenstein local ring. If $Q \not\subseteq \mathfrak m^2$, then $Q:\mathfrak m$ is a parameter ideal of $R$ (or the whole ring $R$) and $\mathcal{R}$ is a Gorenstein ring. Assume that $Q \subseteq \mathfrak m^2$. Hence $I^2 = QI$ (see e.g., \cite{W}) with $\mu_R(I)=3$. We write $I = (a,b,c)$ and $\mathfrak m = (x,y)$. Then since $xc, yc \in Q$, we get equations $$f_1a + f_2b + xc = 0 \ \ \ \text{and}\ \ \ g_1a + g_2b + yc = 0$$ with $f_i, g_i \in \mathfrak m$~($i= 1,2$). With this notation we have the following. \begin{thm}\label{thm1} If $(f_1, f_2, g_1,g_2) \subseteq \mathfrak m^2$, then $\mathcal{R}_\mathfrak{M}$ is not an almost Gorenstein local ring. \end{thm} We divide the proof of Theorem \ref{thm1} into a few steps. Let us begin with the following.
\begin{lem}\label{lem1} Let $\mathbb{M} = \begin{pmatrix} f_1&f_2&x\\ g_1&g_2&y \end{pmatrix}$. Then $R/I$ has a minimal free resolution $$0 \to R^{\oplus 2} \overset{{}^t\mathbb{M}}{\longrightarrow} R^{\oplus 3} \overset{[\begin{smallmatrix} a&b&c \end{smallmatrix}]}{\longrightarrow} R \to R/I \to 0.$$ \end{lem} \begin{proof} Let $\mathbf{f} = \begin{pmatrix} f_1\\ f_2\\ x \end{pmatrix}$ and $\mathbf{g} = \begin{pmatrix} g_1\\ g_2\\ y \end{pmatrix}$. Then $\mathbf{f}, \mathbf{g} \in \mathfrak m{\cdot}R^{\oplus 3}$. As $\mathbf{f}, \mathbf{g}~\mathrm{mod}~\mathfrak m^2{\cdot}R^{\oplus 3}$ are linearly independent over $R/\mathfrak m$, the complex $$0 \to R^{\oplus 2} \overset{{}^t\mathbb{M}}{\longrightarrow} R^{\oplus 3} \overset{[\begin{smallmatrix} a&b&c \end{smallmatrix}]}{\longrightarrow} R \to R/I \to 0$$ is exact and gives rise to a minimal free resolution of $R/I$. \end{proof} Let $\mathcal{S} =R[X,Y,Z]$ be the polynomial ring and let $\varphi : \mathcal{S} \to \mathcal{R} =R[It]$~($t$ an indeterminate) be the $R$-algebra map defined by $\varphi (X) = at$, $\varphi (Y) = bt$, and $\varphi (Z) = ct$. Let $K= \operatorname{Ker} \varphi$. Since $c^2 \in QI$, we have a relation of the form $$c^2 = a^2f +b^2g + abh + bci + caj$$ with $f,g,h, i,j \in R$. We set \begin{eqnarray*} F &=& Z^2 - \left(fX^2 + gY^2 + hXY + iYZ + jZX \right),\\ G&=& f_1X+f_2Y+xZ,\\ H&=&g_1X + g_2Y + yZ. \end{eqnarray*} Notice that $F\in \mathcal{S}_2$ and $G,H \in \mathcal{S}_1$. \begin{prop}\label{lem2} $\mathcal{R}$ has a minimal graded free resolution of the form $$0 \to \mathcal{S}(-2)\oplus\mathcal{S}(-2) \overset{{}^t\mathbb{N}}{\longrightarrow} \mathcal{S}(-2)\oplus \mathcal{S}(-1) \oplus \mathcal{S}(-1) \overset{[\begin{smallmatrix} F&G&H \end{smallmatrix}]}{\longrightarrow} \mathcal{S} \to \mathcal{R} \to0,$$ so that the graded canonical module $\mathrm{K}_\mathcal{R}$ of $\mathcal{R}$ has a presentation $$ \mathcal{S}(-1)\oplus \mathcal{S}(-2) \oplus \mathcal{S}(-2) \overset{\mathbb{N}}{\longrightarrow} \mathcal{S}(-1)\oplus \mathcal{S}(-1)\to \mathrm{K}_\mathcal{R} \to 0.$$ \end{prop} \begin{proof} We have $K= \mathcal{S} K_1 + (F)$ (remember that $I^2 = QI$ and $c^2 \in QI$).
Hence $\mathcal{R}$ has a minimal graded free resolution of the form $$(*)\ \ \ \ 0 \to \mathcal{S}(-m)\oplus\mathcal{S}(-\ell) \overset{{}^t\mathbb{N}}{\longrightarrow} \mathcal{S}(-2)\oplus \mathcal{S}(-1) \oplus \mathcal{S}(-1) \overset{[\begin{smallmatrix} F&G&H \end{smallmatrix}]}{\longrightarrow} \mathcal{S} \to \mathcal{R} \to 0$$ with $m, \ell \ge 1$. We take the $\mathcal{S}(-3)$-dual of the resolution ($*$). Then as $\mathrm{K}_\mathcal{S} = \mathcal{S}(-3)$, we get the presentation $$ \mathcal{S}(-1)\oplus \mathcal{S}(-2) \oplus \mathcal{S}(-2) \overset{\mathbb{N}}{\longrightarrow} \mathcal{S}(m-3)\oplus \mathcal{S}(\ell -3)\to \mathrm{K}_\mathcal{R} \to 0$$ of the canonical module $\mathrm{K}_\mathcal{R}$ of $\mathcal{R}$. Hence $m, \ell \le 2$ because $\mathrm{a}(\mathcal{R}) = -1$. Assume that $m = 1$. Then the matrix ${}^t\mathbb{N}$ has the form ${}^t\mathbb{N} = \begin{pmatrix} 0&\beta_1\\ \alpha_2&\beta_2\\ \alpha_3&\beta_3 \end{pmatrix}$ with $\alpha_2, \alpha_3 \in R$. We have $\alpha_2G + \alpha_3 H = 0$, or equivalently $\alpha_2 \begin{pmatrix} f_1\\ f_2\\ x \end{pmatrix} + \alpha_3\begin{pmatrix} g_1\\ g_2\\ y \end{pmatrix}= \mathbf{0}$, whence $\alpha_2 = \alpha_3 = 0$ by Lemma \ref{lem1}. This is impossible, since the first column of ${}^t\mathbb{N}$ would then vanish, contradicting the injectivity of ${}^t\mathbb{N}$; whence $m= 2$. We similarly have $\ell = 2$ and the assertion follows. \end{proof} We are now ready to prove Theorem \ref{thm1}. \begin{proof}[Proof of Theorem \ref{thm1}] Let $\mathbb{N}$ be the matrix given by Proposition \ref{lem2} and write $\mathbb{N} = \begin{pmatrix} \alpha&F_1&F_2\\ \beta&G_1&G_2 \end{pmatrix}$. Then Proposition \ref{lem2} shows that $F_i,G_i \in \mathcal{S}_1$~($i=1,2$) and $\alpha, \beta \in \mathfrak m$. We write $F_i = \alpha_{i1}X + \alpha_{i2}Y + \alpha_{i3}Z$ and $G_i = \beta_{i1}X + \beta_{i2}Y + \beta_{i3}Z$ with $\alpha_{ij}, \beta_{ij} \in R$. Let $\Delta_j$ denote the determinant of the matrix obtained by deleting the $j$-th column from $\mathbb{N}$.
Then by the theorem of Hilbert-Burch we have $G= -\varepsilon \Delta_2$ and $H=\varepsilon \Delta_3$ for some unit $\varepsilon$ of $R$, so that $$(**) \ \ \ \begin{pmatrix} f_1\\ f_2\\ x \end{pmatrix} = (\varepsilon\beta) \begin{pmatrix} \alpha_{21}\\ \alpha_{22}\\ \alpha_{23} \end{pmatrix} -(\varepsilon\alpha) \begin{pmatrix} \beta_{21}\\ \beta_{22}\\ \beta_{23} \end{pmatrix}\ \ \ \text{and} \ \ \begin{pmatrix} g_1\\ g_2\\ y \end{pmatrix} = (\varepsilon\alpha) \begin{pmatrix} \beta_{11}\\ \beta_{12}\\ \beta_{13} \end{pmatrix} - (\varepsilon\beta) \begin{pmatrix} \alpha_{11}\\ \alpha_{12}\\ \alpha_{13} \end{pmatrix}.$$ Hence $$x = \varepsilon\left(\beta \alpha_{23} - \alpha \beta_{23}\right) \ \ \ \text{and} \ \ \ y = \varepsilon\left(\alpha \beta_{13}- \beta \alpha_{13}\right),$$ which shows $(x,y)=(\alpha, \beta)=\mathfrak m$, because $(x,y) \subseteq (\alpha, \beta) \subseteq \mathfrak m$. Therefore if $(f_1, f_2, g_1,g_2) \subseteq \mathfrak m^2$, then equations ($**$) above show $\alpha_{ij}, \beta_{ij} \in \mathfrak m$ for all $i,j =1,2$, whence $$\mathbb{N} \equiv \begin{pmatrix} \alpha&\alpha_{13}Z&\alpha_{23}Z\\ \beta&\beta_{13}Z&\beta_{23}Z \end{pmatrix} \ \mathrm{mod}~\mathfrak{N}^2$$ where $\mathfrak{N} = \mathfrak m \mathcal{S} +\mathcal{S}_+$ is the graded maximal ideal of $\mathcal{S}$. We set $B = \mathcal{S}_\mathfrak{N}$. Then it is clear that, after any elementary row and column operations, the matrix $\mathbb{N}$ over the regular local ring $B$ of dimension 5 is not equivalent to a matrix of the form $$ \begin{pmatrix} \alpha_1&\alpha_2&\alpha_3\\ \beta_1&\beta_2&\beta_3 \end{pmatrix} $$ with $\alpha_1, \alpha_2, \alpha_3$ a part of a regular system of parameters of $B$. Hence by \cite[Theorem 7.8]{GTT} $\mathcal{R}_\mathfrak{M}$ cannot be an almost Gorenstein local ring. \end{proof} As a consequence of Theorem \ref{thm1} we get the following. \begin{cor} Suppose that $Q=(a,b) \subseteq \mathfrak m^3$. Then $\mathcal{R}_\mathfrak{M}$ is not an almost Gorenstein local ring.
\end{cor} \begin{proof} We write $\begin{pmatrix} a\\ b \end{pmatrix} = \begin{pmatrix} f_{11}&f_{12}\\ f_{21}&f_{22} \end{pmatrix} \begin{pmatrix} x\\ y \end{pmatrix}$ with $f_{ij} \in \mathfrak m^2$~($i,j=1,2$) and set $c = \det \begin{pmatrix} f_{11}&f_{12}\\ f_{21}&f_{22} \end{pmatrix}$. Then $Q : c = \mathfrak m$ and $Q:\mathfrak m = Q + (c)$. We have $$(-f_{22})a+f_{12}b +cx = 0 \ \ \text{and} \ \ f_{21}a + (-f_{11})b+cy = 0.$$ Hence by Theorem \ref{thm1} $\mathcal{R}_\mathfrak{M}$ is not an almost Gorenstein local ring. \end{proof} \begin{cor}\label{badex} Let $m \ge n \ge 2$ be integers and set $Q = (x^m, y^n)$. Then $\mathcal{R}$ is an almost Gorenstein graded ring if and only if $n =2$. \end{cor} \begin{proof} Suppose $n = 2$. Then $\mathbb{N} = \begin{pmatrix} x&Z&X\\ y&x^{m-2}Y&Z \end{pmatrix}$. Hence by \cite[Theorem 7.8]{GTT} $\mathcal{R}$ is an almost Gorenstein graded ring. Conversely, suppose that $\mathcal{R}$ is an almost Gorenstein graded ring. Then $n =2$ by Theorem \ref{thm1}, because $\mathcal{R}_\mathfrak{M}$ is an almost Gorenstein local ring. \end{proof}

\begin{thebibliography}{20}
\bibitem{AN} {\sc M. Artin and M. Nagata}, Residual intersections in Cohen-Macaulay rings, {\em J. Math. Kyoto Univ.}, {\bf 12} (1972), 307--323.
\bibitem{BF} {\sc V. Barucci and R. Fr\"{o}berg}, One-dimensional almost Gorenstein rings, {\em J. Algebra}, {\bf 188} (1997), no. 2, 418--442.
\bibitem{BHU} {\sc J. P. Brennan, J. Herzog and B. Ulrich}, Maximally generated maximal Cohen-Macaulay modules, {\em Math. Scand.}, {\bf 61} (1987), no. 2, 181--203.
\bibitem{GMP} {\sc S. Goto, N. Matsuoka and T. T. Phuong}, Almost Gorenstein rings, {\em J. Algebra}, {\bf 379} (2013), 355--381.
\bibitem{GS} {\sc S. Goto and Y. Shimoda}, On the Rees algebras of Cohen-Macaulay local rings, Commutative algebra (Fairfax, Va., 1979), 201--231, Lecture Notes in Pure and Appl. Math., 68, Dekker, New York, 1982.
\bibitem{GTT} {\sc S. Goto, R. Takahashi and N. Taniguchi}, Almost Gorenstein rings -- towards a theory of higher dimension, {\em J. Pure Appl. Algebra}, {\bf 219} (2015), 2666--2712.
\bibitem{GTY1} {\sc S. Goto, N. Taniguchi and K.-i. Yoshida}, The almost Gorenstein Rees algebras of $p_g$-ideals, Preprint 2015.
\bibitem{GTY2} {\sc S. Goto, N. Matsuoka, N. Taniguchi and K.-i. Yoshida}, The almost Gorenstein Rees algebras of parameters, Preprint 2015.
\bibitem{H} {\sc C. Huneke}, Complete ideals in two-dimensional regular local rings, {\em MSRI Publications}, {\bf 15} (1989), 325--338.
\bibitem{HongU} {\sc J. Hong and B. Ulrich}, Specialization and integral closure, {\em J. London Math. Soc.}, {\bf 90} (3) (2014), 861--878.
\bibitem{HU} {\sc C. Huneke and B. Ulrich}, Residual intersections, {\em J. reine angew. Math.}, {\bf 390} (1988), 1--20.
\bibitem{Ikeda} {\sc S. Ikeda}, On the Gorensteinness of Rees algebras over local rings, {\em Nagoya Math. J.}, {\bf 102} (1986), 135--154.
\bibitem{Kamoi} {\sc Y. Kamoi}, Gorenstein Rees algebras and Huneke-Ulrich ideals, in preparation.
\bibitem{MU} {\sc S. Morey and B. Ulrich}, Rees algebras of ideals with low codimension, {\em Proc. Amer. Math. Soc.}, {\bf 124} (1996), 3653--3661.
\bibitem{HS} {\sc I. Swanson and C. Huneke}, Integral Closure of Ideals, Rings, and Modules, {\em Cambridge University Press}, 2006.
\bibitem{SUV} {\sc A. Simis, B. Ulrich and W. V. Vasconcelos}, Rees algebras of modules, {\em Proc. London Math. Soc.}, {\bf 87} (3) (2003), 610--646.
\bibitem{T} {\sc R. Takahashi}, On G-regular local rings, {\em Comm. Algebra}, {\bf 36} (2008), no. 12, 4472--4491.
\bibitem{U} {\sc B. Ulrich}, Ideals having the expected reduction number, {\em Amer. J. Math.}, {\bf 118} (1996), no. 1, 17--38.
\bibitem{V} {\sc J. K. Verma}, Joint reductions and Rees algebras, {\em Math. Proc. Camb. Phil. Soc.}, {\bf 109} (1991), 335--342.
\bibitem{W} {\sc H.-J. Wang}, Links of symbolic powers of prime ideals, {\em Math. Z.}, {\bf 256} (2007), 749--756.
\bibitem{ZS} {\sc O. Zariski and P. Samuel}, Commutative Algebra Volume II, {\em Springer}, 1960.
\end{thebibliography}
\end{document}
\begin{document} \title[Pre-rigid Monoidal Categories]{Pre-rigid Monoidal Categories} \thanks{This article was written while the first and the third author were members of the ``National Group for Algebraic and Geometric Structures, and their Applications'' (GNSAGA-INdAM). They were both partially supported by MIUR within the National Research Project PRIN 2017. \\The authors would also like to thank Joost Vercruysse and Miodrag C. Iovanov for helpful discussions. } \begin{abstract} Liftable pairs of adjoint functors between braided monoidal categories in the sense of \cite{GV-OnTheDuality} provide auto-adjunctions between the associated categories of bialgebras. Motivated by finding interesting examples of such pairs, we study general pre-rigid monoidal categories. Roughly speaking, these are monoidal categories in which for every object $X$, an object $X^{\ast}$ and a nicely behaving evaluation map from $X^{\ast}\otimes X$ to the unit object exist. A prototypical example is the category of vector spaces over a field, where $X^{\ast}$ is not a categorical dual if $X$ is not finite-dimensional. We explore the connection with related notions such as right closedness, and present meaningful examples. We also study the categorical frameworks for Turaev's Hopf group-(co)algebras in the light of pre-rigidity and closedness, filling some gaps in literature along the way. Finally, we show that braided pre-rigid monoidal categories indeed provide an appropriate setting for liftability in the sense of loc. cit. and we present an application, varying on the theme of vector spaces, showing how -in favorable cases- the notion of pre-rigidity allows to construct liftable pairs of adjoint functors when right closedness of the category is not available. \end{abstract} \keywords{Pre-rigid, closed, Turaev, Zunino and braided monoidal categories, liftable pairs of functors, bialgebras} \author{Alessandro Ardizzoni} \address{ \parbox[b]{\linewidth}{University of Turin, Department of Mathematics ``G. Peano'', via Carlo Alberto 10, I-10123 Torino, Italy}} \email{[email protected]} \urladdr{\url{https://sites.google.com/site/aleardizzonihome}} \author{Isar Goyvaerts} \address{ \parbox[b]{\linewidth}{Department of Mathematics, Faculty of Engineering, Vrije Universiteit Brussel, Pleinlaan 2, B-1050 Brussel, Belgium}} \email{[email protected]} \author{Claudia Menini} \address{ \parbox[b]{\linewidth}{University of Ferrara, Department of Mathematics and Computer Science, Via Machiavelli 30, Ferrara, I-44121, Italy}} \email{[email protected]} \urladdr{\url{https://sites.google.com/a/unife.it/claudia-menini/}} \subjclass[2010]{Primary 18M05; Secondary 18D15; 18M15; 16W50; 16T10} \maketitle \tableofcontents \section{Introduction} Basic Linear Algebra teaches us that any vector space $V$ (over a field $k$) has a dual space, $V^{\ast}$, which is unique up to isomorphism. When $V$ is moreover finite-dimensional, we have two $k$-linear maps, ${\rm ev}_{V}:V^{\ast }{\otimesimes}_k V\rightarrow k$ (the evaluation at $V$) and $\mathrm{coev}_{V}:k\rightarrow V {\otimesimes}_k V^{\ast }$ (the coevaluation at $V$), satisfying two compatibility relations in the form of triangular identities (cf. \cite[Definition 2.10.1]{EGNO} e.g.). With respect to the tensor product (over $k$) and the ground field as unit object, this extra data turns the category of finite-dimensional $k$-vector spaces into a rigid monoidal category. 
When $V$ is infinite dimensional, $V^{\ast}$ and ${\rm ev}_{V}$ can still be defined, but there is no coevaluation map anymore. In other words, the monoidal category of all $k$-vector spaces is no longer rigid; in turn, it is the prototype of a so-called {\it pre-rigid} monoidal category. \\A monoidal category $(\mathcal{C},\otimesimes,\mathds{I})$ is said to be pre-rigid if for every object $X$ there exists an object $X^{\ast }$ and a morphism ${\rm ev}_{X}:X^{\ast }\otimesimes X\rightarrow \mathds{I}$ such that the map $${\rm Hom} _{\mathcal{C}}\left( T,X^{\ast }\right) \rightarrow {\rm Hom} _{\mathcal{C}}\left( T\otimesimes X,\mathds{I}\right) :u\mapsto {\rm ev} _{X}\circ \left( u\otimesimes X\right) $$ is bijective for every object $T$ in $\mathcal{C}$. The notion of pre-rigidity, in its original form, stems from \cite{GV-OnTheDuality} although, as we will see, it turns out to be equivalent to the definition of \emph{weak dual} given in \cite{DP}. A basic fact is that a (right) rigid monoidal category is right closed. It is also easily verified that right closedness implies pre-rigidity (cf. Proposition \ref{prop:prerigid2} below). Thus, what is in a sense missing in the notion of pre-rigidity compared to the notion of rigidity is the coevaluation. \\Pre-rigidity arose in the study of so-called \textit{liftable} pairs of adjoint functors between monoidal categories. An adjoint pair of functors $(L,R)$ between monoidal categories $\mathcal{A}$ and $\mathcal{B}$ such that $R$ is a lax monoidal functor (or, equivalently, $L$ is colax monoidal) is called liftable if the induced functor $\overline{R}={\sf Alg}(R):{{\sf Alg}}({\mathcal{A}})\rightarrow {{\sf Alg}}({\mathcal{B}})$ between the respective categories of algebra objects has a left adjoint and if the functor $\underline{L}={\sf Coalg}(L):{{\sf Coalg}}({\mathcal{B}})\rightarrow {{\sf Coalg}}({\mathcal{A}})$ between the respective categories of coalgebra objects has a right adjoint. If $\mathcal{A}$ and $\mathcal{B}$ come both endowed with a braiding, it is shown in \cite[Theorem 2.7]{GV-OnTheDuality} that such a liftable pair of functors $(L,R)$ gives rise to an adjunction between the respective categories of bialgebra objects $$ \xymatrix{ {\sf Bialg}(\mathcal{A}) \ar@<.5ex>[rr]^-{\underline{\overline{R}}} && {\sf Bialg}(\mathcal{B}) \ar@<.5ex>[ll]^-{\underline{\overline{L}}} } $$ provided the functor $R$ enjoys the property of being braided with respect to the braidings of $\mathcal{A}$ and $\mathcal{B}$. Using this fact, a theorem originally due to Michaelis (cf. \cite{Michaelis-ThePrimitives}) is proven for a particular class of liftable pairs of adjoint functors between symmetric monoidal categories, the main application being a version of this theorem for Turaev's Hopf group-(co)algebras (cf. \cite[Theorem 4.16]{GV-OnTheDuality}). \\\\In loc.cit., however, the liftability condition in the motivating examples is shown to hold by rather ad-hoc methods. The present article's origin lies in finding a general setting where the liftability condition can be proved to hold. The final Section \ref{finalsection} of this paper shows that braided pre-rigid monoidal categories indeed provide an appropriate setting for liftability, but, while dealing with liftability and while having a closer look at the categories where Turaev's Hopf group-(co)algebras live, we came to study pre-rigid monoidal categories (that are not necessarily braided) \textit{an sich}, as we realised the bare notion of pre-rigidity may have its own right to exist. 
To the best of our knowledge such a study has not appeared elsewhere in the literature. \\Upon first sight, as hinted at above, the pre-rigidity property has some taste of right closedness of a monoidal category. We believed it was worth further investigating the relationship between these two concepts, giving rise to the work carried out in Section \ref{liftablesection}. Section \ref{Sectionexamples} dives deeper into the categories where Turaev's Hopf group-(co)algebras reside, which are variations on categories of families of objects of a given monoidal category, known as Zunino and Turaev categories. In this section, pre-rigidity and closedness of these categories are studied in detail, providing a broader picture of the motivating examples of the article \cite{GV-OnTheDuality} -where such questions were not addressed- and filling some other gaps in the literature along the way. Let us now sketch in more detail the content of the present article. \\\\In Section \ref{liftablesection}, we first recall the definition of pre-rigid category and we provide a first instance of a pre-rigid monoidal category that is not closed, see Example \ref{ex:lambek}, which is related to syntactic calculus for categorical grammars. We then investigate the connections between pre-rigidity and the existence of a dualizable object on the one hand, amongst other things connecting pre-rigidity to the notion of a category with weak duals (introduced in \cite{DP}), and between pre-rigidity and closedness on the other. Here we also present two examples of not necessarily braided categories that provide an instance of a strict monoidal functor that does not preserve pre-duals. Although Example \ref{ex:lambek} already showed that pre-rigidity and closedness are not synonymous, by the earlier-mentioned Proposition \ref{prop:prerigid2} one expects that closed monoidal categories and pre-rigid ones do share some properties, of course. For instance, it is known that a monoidal co-reflective full subcategory of a monoidal closed category is also closed. Proposition \ref{pro:prigadj} extends this result to the pre-rigid case, but it holds in a more general setting than a co-reflection since the relevant isomorphism need not be the unit. Proposition \ref{pro:terminal} enables us to obtain further examples of pre-rigid monoidal categories (although with trivial pre-dual), some of which are not closed and which do not necessarily allow for a braiding (see Examples \ref{exa:terminal}). Given a pre-rigid monoidal category $\mathcal{C}$, we conclude Section \ref{liftablesection} by considering the construction of a functor $(-)^{*}:\mathcal{C}^\mathrm{op} \rightarrow \mathcal{C}$ acting as the pre-dual on objects and we investigate some of its properties. In Section \ref{finalsection} we will study under which conditions this functor is part of a liftable pair of adjoint functors.
It is worth noticing that, in contrast with classical Hopf algebras, the definition of a Hopf group-coalgebra is not selfdual. In the discussion right above, we denoted the monoidal category of vector spaces over a field $k$ as ${\sf Vec}$; when ${\sf Vec}$ is replaced by ${{\sf Vec}}^{\textrm{f}}$, the category of finite-dimensional vector spaces, objects in ${\sf Fam}({{\sf Vec}}^{\textrm{f}})$ and ${\sf Maf}({{\sf Vec}}^{\textrm{f}})$ are called ``locally finite''. Using this framework, Corollary 4.5 in \cite{GV-OnTheDuality} provides an equivalence between the categories of locally finite Hopf group-algebras and Hopf group-coalgebras, by establishing a suitable adjunction between ${\sf Fam}({{\sf Vec}}^{\textrm{f}})$ and ${\sf Maf}({{\sf Vec}}^{\textrm{f}})$. However, in \cite{GV-OnTheDuality}, the question of the pre-rigidity of these categories was not studied (let alone the question of their closedness), the liftability condition being the main concern. \\We first recall the structures of ${\sf Fam}(\mathcal{C})$ and ${\sf Maf}(\mathcal{C})$. In Proposition \ref{pro:Famprerig}, we prove the pre-rigidity of the category ${\sf Fam}(\mathcal{C})$ for $\mathcal{C}$ any pre-rigid monoidal category possessing products of pre-duals. From this, we may deduce that the Zunino category ${\sf Fam}({{\sf Vec}})$ is pre-rigid. We notice, however, that for Proposition \ref{pro:Famprerig} to hold, arbitrary products of pre-duals are needed: Remark \ref{Remark-predualsnecessary} teaches that ${\sf Fam}({{\sf Vec}}^{\textrm{f}})$ is not pre-rigid, for instance. To the best of our knowledge, it was unknown in literature under which conditions ${\sf Fam}(\mathcal{C})$ inherits closedness from $\mathcal{C}$, except in case $\mathcal{C}$ is cartesian (which was considered in \cite{Carboni} and in \cite{AR}). Proposition \ref{pro:Famclosed} fills this gap: ${\sf Fam}(\mathcal{C})$ is shown to be closed monoidal whenever $\mathcal{C}$ is closed monoidal and has products. \\${\sf Maf}(\mathcal{C})$ is a slightly different story: ${\sf Maf}(\mathcal{C})$ is \emph{never} pre-rigid (Proposition \ref{pro:Maf}). This implies that ${\sf Maf}({{\sf Vec}})$, the category where generic Hopf group-coalgebras reside, does not enjoy the same pre-rigidity property as ${\sf Fam}({{\sf Vec}})$, the home of Hopf group-algebras. We are able, however, to adjust the situation a bit: we study ${\sf FamRel}(\mathcal{C})$, an interesting variant of the category ${\sf Maf}(\mathcal{C})$ (cf. \cite{LMM}), and prove, in Proposition \ref{pro:Faf}, that it is pre-rigid monoidal whenever $\mathcal{C}$ is. Next, Proposition \ref{pro:catfun} asserts that, given a small category $\mathcal{I}$ and a complete pre-rigid monoidal category $\mathcal{C}$, the functor category $\left[ \mathcal{I},\mathcal{C}\right] $ is pre-rigid as well. Finally, we conclude this section by considering the category $\mathcal{M}^G$ of externally $G$-graded $\mathcal{M}$-objects where $G$ is a monoid and $\mathcal{M}$ is a given monoidal category. In Proposition \ref{pro:funcatG}, under mild assumptions, we prove that the category $\mathcal{M}^G$ is pre-rigid whenever $\mathcal{M}$ is. This will allow us to provide another interesting example of a pre-rigid monoidal category which is not right closed, namely the category of externally $\mathbb{N}$-graded finite-dimensional vector spaces, see Example \ref{examplerightclosed}. 
\\\\In the final Section \ref{finalsection}, we propose to study liftability of adjoint pairs of functors in the light of general pre-rigid braided monoidal categories. As already mentioned above, in \cite{GV-OnTheDuality}, the liftability condition in the main examples is verified by ad-hoc methods. It is our purpose here to treat the case of generic pre-rigid braided monoidal categories in a more systematic way. We first recall what this liftability condition precisely is (Definition \ref{def:liftable}) and what this condition means for the bialgebra objects in the involved categories. Example \ref{ex:liftable} seems to be new and is considered to be of independent interest: we show that not every suitable adjunction is liftable. More precisely, setting $S=\frac{k \left[ X\right]}{\left( X^{2}\right) },$ we obtain that the induced functor $\overline{R^{f}}={\sf Alg}(R^f):{\sf Alg} ({\sf Vec} ^{f} ) \to {\sf Alg}( {\sf Vec} ^{f}) $ of the functor $$R^{f}:{\sf Vec} ^{\textrm{f}}\rightarrow {\sf Vec} ^{\textrm{f}},\quad V\mapsto S{\otimesimes}_{k} V$$ has no left adjoint (although $R^{f}$ itself does have one!). In Proposition \ref{prop:prerigid}, we show that the pre-dual construction defines a special type of self-adjoint functor $R=(-)^*:\mathcal{C}^\mathrm{op}\to\mathcal{C}$. In Proposition \ref{lem:Barop}, we show that this type of functor gives rise to a liftable pair whenever the functor $\overline{R}={\sf Alg}(R)$ it induces at the level of algebras has a left adjoint and we apply this result to the specific functor $R=(-)^*:\mathcal{C}^\mathrm{op}\to\mathcal{C}$ in Corollary \ref{coro:Isar}. Then, in Proposition \ref{pro:monadj}, we provide a criterion to transport the desired liftability from one category to another in presence of a suitable monoidal adjunction and we apply it, in Corollary \ref{coro:externlift}, to transfer liftability from a category $\mathcal{M}$ to the category $\mathcal{M}^{\mathbb N}$ of externally ${\mathbb N}$-graded objects. As a consequence, we arrive at the final Example \ref{example-prerigidnotclosed} which revisits Example \ref{examplerightclosed} and provides an instance of a situation in which Corollary \ref{coro:externlift} (properly) holds; it shows how -in favorable cases- the notion of pre-rigidity allows to construct liftable pairs of adjoint functors when right closedness of the category is not available. \subsection{Notational conventions}\label{notations} When $X$ is an object in a category $\mathcal{C}$, we will denote the identity morphism on $X$ by $1_X$ or $X$ for short. For categories $\mathcal{C}$ and $\mathcal{D}$, a functor $F:\mathcal{C}\to \mathcal{D}$ will be the name for a covariant functor; it will only be a contravariant one if it is explicitly mentioned. By ${\sf id}_{\mathcal{C}}$ we denote the identity functor on $\mathcal{C}$. For any functor $F:\mathcal{C}\to \mathcal{D}$, we denote ${\sf Id}_{F}$ (or sometimes -in order to lighten notation in some computations- just $F$, if the context does not allow for confusion) the natural transformation defined by ${\sf Id}_{FX}=1_{FX}$. \\Let $\mathcal{C}$ be a category. Denote by $\mathcal{C}^{\mathrm{op} }$ the opposite category of $\mathcal{C}$. Using the notation of \cite[page 12]{Pareigis-CatFunct}, an object $X$ and a morphism $f:X\rightarrow Y$ in $ \mathcal{C}$ will be denoted by $X^{\mathrm{op} }$ and $f^{\mathrm{op} }:Y^{ \mathrm{op} }\rightarrow X^{\mathrm{op} }$ when regarded as object and morphism in $\mathcal{C}^{\mathrm{op} }$. 
Given a functor $F:\mathcal{C}\to \mathcal{D}$, one defines its opposite functor $F^\mathrm{op} :\mathcal{C}^\mathrm{op} \to \mathcal{D}^\mathrm{op} $ by setting $F^\mathrm{op} X^\mathrm{op} =(FX)^\mathrm{op} $ and $F^\mathrm{op} f^\mathrm{op} =(Ff)^\mathrm{op} $. If $\alpha:F\to G$ is a natural transformation, its opposite $\alpha^\mathrm{op}$ is defined by $(\alpha^\mathrm{op})_{X^\mathrm{op}}:=(\alpha_X)^\mathrm{op}$ for every object $X$. \\\\Throughout the paper, we will work in the setting of monoidal categories. It is useful to recall the following notation. Let $(\mathcal{M},\otimesimes ,\mathds{I},a,l,r)$ be a monoidal category. Following \cite[0.1.4, 1.4]{SaavedraRivano}, we have that $\mathcal{M}^{\mathrm{op} }$ is also monoidal, the monoidal structure being given by \begin{gather*} X^\mathrm{op} \otimesimes Y^\mathrm{op} :=\left( X\otimesimes Y\right) ^\mathrm{op} ,\quad\text{ the unit is }\mathds{I}^{\mathrm{op} }, \\ a_{X^\mathrm{op} ,Y^\mathrm{op} ,Z^\mathrm{op} } :=\left( a_{X,Y,Z}^{-1}\right) ^{\mathrm{op} },\quad l_{X^\mathrm{op} } :=\left( l_{X}^{-1}\right) ^{\mathrm{op} },\qquad r_{X^\mathrm{op} }:=\left( r_{X}^{-1}\right) ^{\mathrm{op} }. \end{gather*} If $\mathcal{M}$ is moreover braided (with braiding $c$), then so is $\mathcal{M}^{ \mathrm{op} }$, the braiding being given by \begin{equation*} c_{X^\mathrm{op} ,Y^\mathrm{op} }:=\left( c_{X,Y}^{-1}\right) ^{ \mathrm{op} }. \end{equation*} As already mentioned, in this note we will operate within the framework of monoidal categories, which will be assumed to be strict from now on. By Mac Lane's Coherence Theorem, this does not impose restrictions on the obtained results. We will moreover consider braided and (pre)additive monoidal categories. A basic reference for these notions is \cite{MacLane}, for instance. Recall (see e.g. \cite[Definition 3.1]{Aguiar-Mahajan}) that a functor $F:\mathcal{A}\to\mathcal{B}$ between monoidal categories $(\mathcal{A},\otimes,{\mathds{I}}_{\mathcal{A}})$ and $(\mathcal{B},\otimes',{\mathds{I}}_{\mathcal{B}})$ is said to be a \emph{lax monoidal functor} if it comes equipped with a family of natural morphisms $\phi_{2}(X,Y):F(X)\otimes' F(Y)\to F(X\otimes Y)$, $X,Y\in \mathcal{A}$ and a $\mathcal{B}$-morphism $\phi_0:{\mathds{I}}_{\mathcal{B}}\to F({\mathds{I}}_{\mathcal{A}})$, satisfying the known suitable compatibility conditions with respect to the associativity and unit constraints of $\mathcal{A}$ and $\mathcal{B}$. Moreover, $(F,\phi_0,\phi_{2})$ is called {\it strong} if $\phi_0$ is an isomorphism and $\phi_{2}(X,Y)$ is a natural isomorphism for any objects $X,Y\in\mathcal{A}$. $(F,\phi_0,\phi_2)$ is called {\it strict} if $\phi_0$ is the identity morphism and $\phi_2$ is the identity natural transformation. Dually, colax monoidal functors are defined.\\ Also recall that given a lax monoidal functor $(F,\phi_2,\phi_0)$, then $(F^{\mathrm{op} },\phi_2^{\mathrm{op} },\phi_0^{\mathrm{op} })$ is a colax monoidal functor, where we set $\phi_2^{\mathrm{op} }(X^{\mathrm{op} },Y^{\mathrm{op} }):=\phi_2(X,Y)^{\mathrm{op} }$, see e.g. \cite[Proposition 3.7]{Aguiar-Mahajan}. Throughout the paper, $k$ will be a field. The category of vector spaces over $k$ will be denoted by ${\sf Vec}$ and endowed with its usual structure of monoidal category where the tensor product is $\otimes_k$ and the unit object is $k$. If we take objects only to be finite-dimensional vector spaces, we will use the notation ${{\sf Vec} ^{\textrm{f}}}$. 
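\\As a quick illustration of the lax monoidal terminology just recalled (a standard example, recorded here only to fix the notation and not needed in the sequel; the symbols $F$ and $A$ are ours), for an arbitrary $k$-algebra $A$ the endofunctor $F=(-)\otimes _k A$ of ${\sf Vec}$ carries a lax monoidal structure
\begin{gather*}
\phi_{2}(X,Y):(X\otimes _k A)\otimes _k(Y\otimes _k A)\to (X\otimes _k Y)\otimes _k A,\quad (x\otimes a)\otimes (y\otimes b)\mapsto (x\otimes y)\otimes ab,\\
\phi_0:k\to F(k)=k\otimes _k A,\quad 1\mapsto 1\otimes 1_A,
\end{gather*}
the compatibility conditions reducing to the associativity and unitality of the multiplication of $A$. This $F$ is strong monoidal precisely when the unit map $k\to A$ is an isomorphism, and a functor of essentially this form (tensoring on the left by $S=k[X]/(X^2)$) reappears in Example \ref{ex:liftable} below.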
\subsection{Some basic known facts}\label{sub:basics} We recall some basic notions and properties which are well-known and serve as a comparison for the results we will deal with in the present paper. Here $(\mathcal{C},\otimesimes,\mathds{I})$ denotes a monoidal category and every notion stated on the right admits its proper left analogue. \begin{itemize} \item An object $X$ in $\mathcal{C}$ is called \emph{right dualizable} if there are an object $X^*$, called the \emph{right dual} of $X$, and morphisms ${\rm ev}_X:X^*\otimesimes X\to\mathds{I} $ (called the counit or the \emph{evaluation}) and ${\rm coev}_X:\mathds{I}\to X\otimesimes X^*$ (called the unit or the \emph{coevaluation}) in $\mathcal{C}$ that fulfill the triangle identities $({\rm ev}_X\otimesimes X)(X^*\otimesimes{\rm coev}_X)={\sf id}_{X^*}$ and $(X\otimesimes {\rm ev}_X)\circ ({\rm coev}_X\otimesimes X)={\sf id}_X$. The left dual of an object $X$, if any, is denoted by $^*X$. \item If every object in $\mathcal{C}$ is right dualizable, the category $\mathcal{C}$ is called right \emph{rigid} (or \emph{autonomous}). \item $\mathcal{C}$ is said to be right \emph{closed}, if for every object $X\in \mathcal{C}$ the functor $\left( -\right) \otimesimes X: \mathcal{C}\rightarrow \mathcal{C}$ has a right adjoint $\left[ X,-\right] : \mathcal{C}\rightarrow \mathcal{C}$, called the internal-hom. Note that any right rigid monoidal category is right closed with $[X,-]:=(-)\otimesimes X^*$, see e.g. \cite[Theorem 1.3]{DP}. On the other hand closedness does not imply rigidity, e.g. ${\sf Vec}$ is closed but not rigid. \item If $\mathcal{C}$ is braided then $\mathcal{C}$ is right rigid if and only if it is left rigid and the dual object is the same, see e.g. \cite[page 780]{Street12}. However there exist monoidal categories with objects $X$ such that ${}^{\ast} X\ncong X^{\ast }$ (see \cite[Example 7.19.5]{EGNO}\footnote{Note that, by \cite[Remark 2.10.3]{EGNO}, if $X$ has both a left and a right dual, then $(^{\ast }X)^\ast\cong X$. In particular, if ${}^{\ast} X\cong X^{\ast }$, then $X^{\ast\ast }\cong X$.}, e.g.). \item If $\mathcal{C}$ is braided then $\mathcal{C}$ is right closed if and only if it is left closed as the braiding provides a functorial isomorphism $(-)\otimesimes X\cong X\otimesimes(-)$. \item If $\mathcal{C}$ is a symmetric monoidal category which is also rigid, then $\mathcal{C}$ is called \emph{compact closed}. \item $\mathcal{C}$ is called \emph{$*$-autonomous category}, see \cite[(4.3), page 13]{Ba-star-auto}, if it is symmetric monoidal and equipped with a fully faithful functor $(-)^{*}:\mathcal{C}^\mathrm{op} \rightarrow \mathcal{C}$ such that there is a natural isomorphism ${\rm Hom} _{\mathcal{C}}\left( A\otimesimes B,C^{\ast }\right) \cong {\rm Hom} _{\mathcal{C}}\left( A,(B\otimesimes C)^{\ast }\right) $. Note that a $*$-autonomous category is in particular right closed monoidal with $[X,Y]:=(X\otimesimes Y^*)^*$. \item Strong monoidal functors between right rigid categories preserve duals, see e.g. \cite[Proposition 3]{Li}. \end{itemize} \section{Pre-rigid monoidal categories}\label{liftablesection} In this section we recall the definition of pre-rigid category, we connect it to the notion of weak dual introduced in \cite{DP} and we investigate its connections to the notions of rigid and closed category, providing meaningful examples. Then we study pre-rigid categories with constant pre-dual. 
We conclude this section by considering the functor $(-)^{*}:\mathcal{C}^\mathrm{op} \rightarrow \mathcal{C}$ induced by the pre-dual which will play a central role in Section \ref{finalsection} in the context of liftability. The following definition, in its original form, see \cite[4.1.3]{GV-OnTheDuality}, required the monoidal category to be braided, the motivation being to be able to consider bi and Hopf algebra objects therein. We here remove this hypothesis, since there are interesting examples of non-braided monoidal categories that fulfil the remaining conditions, see Example \ref{ex:closed} for instance. \begin{definition} A monoidal category $(\mathcal{C},\otimesimes,\mathds{I})$ is called right {\it pre-rigid} if for every object $X$ there exists an object $X^{\ast }$ (a \emph{pre-dual} of $X$) and a morphism ${\rm ev}_{X}:X^{\ast }\otimesimes X\rightarrow \mathds{I}$ (the \emph{evaluation at $X$}) with the following universal property: For every morphism $t:T\otimesimes X\rightarrow \mathds{I}$ there is a unique morphism $t^\dag:T\rightarrow X^{\ast }$ such that $t={\rm ev} _{X}\circ \left(t^\dag\otimesimes X\right) .$ Equivalently the map \begin{equation}\label{maprerig} {\rm Hom} _{\mathcal{C}}\left( T,X^{\ast }\right) \rightarrow {\rm Hom} _{\mathcal{C}}\left( T\otimesimes X,\mathds{I}\right) :u\mapsto {\rm ev} _{X}\circ \left( u\otimesimes X\right) \end{equation} is bijective for every object $T$ in $\mathcal{C}$. Similarly, one could define a monoidal category $(\mathcal{C},\otimesimes,\mathds{I})$ to be left pre-rigid if for every object $X$ there exists an object ${}^{\ast} X$ and a morphism ${\rm ev}'_{X}:X \otimesimes {}^{\ast} X\rightarrow \mathds{I}$ such that the map \begin{equation*} {\rm Hom} _{\mathcal{C}}\left( T,{}^{\ast}X\right) \rightarrow {\rm Hom} _{\mathcal{C}}\left( X\otimesimes T,\mathds{I}\right) :u\mapsto {\rm ev}' _{X}\circ \left( X\otimesimes u\right) \end{equation*} is bijective for every object $T$ in $\mathcal{C}$. \end{definition} We now give a first example of left and right pre-rigid category related to categorical grammars. \begin{example}\label{ex:lambek} Recall that a \emph{pomonoid} is quadruple $(P,\leq,\cdot,1)$ where $(P,\leq)$ is a poset and $(P,\cdot,1)$ is a monoid such that the multiplication is monotone, i.e. $a\leq c$ and $b\leq d$ implies $a\cdot b\leq c\cdot d$ for all $a,b,c,d\in P$. A pomonoid $(P,\leq,\cdot,1)$ can be considered as a monoidal category $\mathcal{P}$ whose objects are the elements in $P$ and where ${\rm Hom}_\mathcal{P}(a,b)$ is a singleton if $a\leq b$ and is empty otherwise; the tensor product is given by $\cdot$ and the unit object is $1$. It is well-known that \begin{itemize} \item $\mathcal{P}$ is left and right rigid if and only if the pomonoid $(P,\leq,\cdot,1)$ is a \emph{pregroup}, meaning that for every $t\in P$ there are elements $t^*,\,^*t\in P$, called \emph{proto-inverses}, such that $t^*\cdot t\leq 1\leq t\cdot t^*$ and $t\cdot {^* t}\leq 1\leq {^*t}\cdot t$; \item $\mathcal{P}$ is left and right closed if and only if the pomonoid $(P,\leq,\cdot,1)$ is a \emph{residuated pomonoid}, meaning that for every $b,c\in P$ there are elements in $P$, denoted by $c/b$ and $a\backslash c$ and called \emph{residuals}, such that $a\cdot b\leq c\Leftrightarrow a\leq c/b\Leftrightarrow b\leq a\backslash c$ for every $a,b,c\in P$. 
\end{itemize} Instead, if we just ask $\mathcal{P}$ to be left and right pre-rigid, we get what we can name a \emph{contractive pomonoid}, meaning that, for every $t\in P$, there are $t^*,\,^*t\in P$ such that \begin{gather} t^*\cdot t\leq 1 \text{ and } t\cdot {^* t}\leq 1,\text{ for every } t\in P\quad \text{(contractions)};\label{pomon1}\\ a\cdot b\leq 1\text{ implies both } a\leq b^* \text{ and } b\leq {^* a} ,\text{ for every } a,b\in P.\label{pomon2} \end{gather} It is easy to check that \eqref{pomon1} and \eqref{pomon2} are equivalent to requiring that \begin{gather} a\cdot b\leq 1\Leftrightarrow a\leq b^*\Leftrightarrow b\leq {^* a}\text{ for every }a,b\in P.\label{pomon3} \end{gather} \begin{invisible} Assume that \eqref{pomon1} holds true. Then $a\leq b^*\Rightarrow a\cdot b\leq b^*\cdot b\leq 1\Rightarrow a\cdot b\leq 1$. Similarly $b\leq {^* a}\Rightarrow a\cdot b\leq a\cdot {^* a}\leq 1\Rightarrow a\cdot b\leq 1$. Adding \eqref{pomon2} we get \eqref{pomon3}. Conversely by assuming \eqref{pomon3} and observing that $t^*\leq t^*$ and ${^* t}\leq {^* t}$, we get \eqref{pomon1}, while \eqref{pomon2} is evident. \end{invisible} In particular, if $\mathcal{P}$ is left and right pre-rigid, then the pomonoid $(P,\leq,\cdot,1)$ fulfills \eqref{pomon1}, i.e. it is a special instance of a \emph{protogroup}, a notion introduced by Lambek in \cite{Lambek} (together with the one of pregroup) as an aid for checking which strings of words in a natural language, such as English, are well-formed sentences. Note that the condition $a\leq b^*\Leftrightarrow b\leq {^* a}$ means that $(-)^*:P\to P$ and $^*(-):P\to P$ define an order-reversing Galois connection. It follows from the definition that in a contractive pomonoid $t^*,\,^*t\in P$ are necessarily unique and the following properties follow easily: $$1^*=1={^*1},\quad t\leq {}^*(t^*),\quad t\leq(^*t)^*,\quad b^*\cdot a^*\leq(a\cdot b)^*,\quad {}^*b\cdot {}^*a\leq {}^*(a\cdot b).$$ \begin{invisible} In fact, if there is $t^\star$ fulfilling $t^\star\cdot t\leq 1$ and $a\cdot t\leq 1\Rightarrow a\leq t^\star$, then $t^*\cdot t\leq 1$ plus $\eqref{pomon2}^\star$ imply $t^*\leq t^\star$ while $t^\star\cdot t\leq 1$ plus \eqref{pomon2} imply $t^\star\leq t^*$ and hence $t^*= t^\star$. Similarly on the left. Since $1\cdot 1\leq 1$ and $a\cdot 1\leq 1\Rightarrow a\leq 1$, we get $1^*=1$. Similarly ${^*1}=1.$ \end{invisible} Note that \begin{center} pregroups $\subseteq$ residuated pomonoids $\subseteq$ contractive pomonoids $\subseteq$ protogroups. \end{center} Explicitly, every residuated pomonoid is contractive through $t^*:=1/t$ and $^*t:=t\backslash 1$. A simple example of a contractive pomonoid is given by a pomonoid $(P,\leq,\cdot,1)$ where the identity $1$ is a maximum and $t^*={^*t}=1$ for every $t\in P$, as \eqref{pomon3} is trivially satisfied. Consider $P=\{\frac{m}{10^n}\mid m,n\in{\mathbb N},m\geq10^n\}$, the set of terminating decimals greater than or equal to $1$. Then $(P,\leq,\cdot,1)$, with the ordinary order and product of rational numbers, is a pomonoid with $1$ as a minimum, so that its opposite is a contractive pomonoid. We point out that this pomonoid is not residuated: otherwise it would admit the residual $4/3=\min\{a\in P\mid 3a\geq 4\}$, but such a minimum does not exist in $P$. \begin{invisible} $1.34>\frac{4}{3}>1.33$; $1.334>\frac{4}{3}>1.333$; $1.3334>\frac{4}{3}>1.3333$; ...
\end{invisible} \end{example} From now on we restrict our attention to right pre-rigid monoidal categories, and henceforth we omit the adjective ``right''. The lemma below provides a different characterization of pre-rigidity. \begin{lemma}\label{lem:presheaf} A monoidal category $(\mathcal{C},\otimes,\mathds{I})$ is pre-rigid if and only if for every object $X$ there exists an object $X^{\ast }$ and an isomorphism of presheaves on $\mathcal{C}$ \begin{equation*} \Pi:{\rm Hom} _{\mathcal{C}}\left( -,X^{\ast }\right) \rightarrow {\rm Hom} _{\mathcal{C}}\left( -\otimes X,\mathds{I}\right) \end{equation*} i.e. if and only if the presheaf ${\rm Hom} _{\mathcal{C}}\left( -\otimes X,\mathds{I}\right):\mathcal{C}^{\mathrm{op} }\to\mathsf{Set}$ is representable. \end{lemma} \begin{proof} If $\mathcal{C}$ is pre-rigid we can set $\Pi_T(u):={\rm ev} _{X}\circ \left( u\otimes X\right)$ and this assignment is natural in $T$. Conversely, given $\Pi$ we can set ${\rm ev} _{X}:=\Pi_{X^{\ast }}(1_{X^{\ast}})$. Given $u:T\to X^\ast$, the naturality of $\Pi$ yields the equality $\Pi_T\circ {\rm Hom} _{\mathcal{C}}\left( u,X^{\ast }\right)={\rm Hom} _{\mathcal{C}}\left( u\otimes X,\mathds{I}\right)\circ \Pi_{X^{\ast }}$ which, evaluated on $1_{X^{\ast}}$, gives the equality $\Pi_T(u)={\rm ev} _{X}\circ \left( u\otimes X\right)$. \end{proof} Note that an object $X$ whose presheaf ${\rm Hom} _{\mathcal{C}}\left( -\otimes X,\mathds{I}\right):\mathcal{C}^{\mathrm{op} }\to\mathsf{Set}$ is representable is called \emph{semi-dualizable} in \cite[Definition 4.5]{ST}. Moreover the object $X^*$ which represents this functor is called the \emph{weak dual} of $X$ in \cite[page 86]{DP}. Therefore, Lemma \ref{lem:presheaf} says that a monoidal category $\mathcal{C}$ is pre-rigid if and only if every object in $\mathcal{C}$ is semi-dualizable and that the notion of pre-dual coincides with the one of weak dual. As a consequence, we have the following result. \begin{corollary}\label{cor:uniqueness} In a pre-rigid monoidal category, for every object $X$, the pre-dual $X^*$ is unique up to isomorphism. Moreover $\mathds{I}^*\cong \mathds{I}.$ \end{corollary} \begin{proof} This is already mentioned in \cite[page 86, footnote]{DP}. We include here a proof for the reader's sake. The uniqueness follows from Lemma \ref{lem:presheaf} and Yoneda's Lemma. If $r_T:T\otimes \mathds{I}\to T$ denotes the right unit constraint, then ${\rm Hom} _{\mathcal{C}}\left( r_{-},\mathds{I}\right):{\rm Hom} _{\mathcal{C}}\left( -,\mathds{I}\right) \rightarrow {\rm Hom} _{\mathcal{C}}\left( -\otimes \mathds{I},\mathds{I}\right) $ is a natural isomorphism. Thus $\mathds{I}$ is a pre-dual of $\mathds{I}$. \end{proof} The following result provides us with a large class of examples of pre-rigid monoidal categories. \begin{proposition}\label{prop:prerigid2} Let $\mathcal{C}$ be a right closed monoidal category. Then $ \mathcal{C}$ is pre-rigid. \end{proposition} \begin{proof} As mentioned in \cite[4.2]{ST}, in a right closed monoidal category $\mathcal{C}$ every object is semi-dualizable. By the foregoing this means that $\mathcal{C}$ is pre-rigid.
Explicitly, if the functor $\left( -\right) \otimesimes X: \mathcal{C}\rightarrow \mathcal{C}$ has a right adjoint $\left[ X,-\right] : \mathcal{C}\rightarrow \mathcal{C}$, the pre-dual is given by $X^{\ast }:=\left[ X,\mathds{I} \right] \in \mathcal{C}.$ Moreover, if $\epsilon : \left[ X,-\right] \otimesimes X\rightarrow \mathrm{Id}$ is the counit of the adjunction, then the evaluation is ${\rm ev} _{X}:=\epsilon _{\mathds{I}}.$ \begin{invisible} As $\mathcal{C}$ is assumed to be right closed, this means that for every object $X\in \mathcal{C}$ the functor $\left( -\right) \otimesimes X: \mathcal{C}\rightarrow \mathcal{C}$ has a right adjoint $\left[ X,-\right] : \mathcal{C}\rightarrow \mathcal{C}$. Set $X^{\ast }:=\left[ X,\mathds{I} \right] \in \mathcal{C}.$ Consider the counit of the adjunction $\epsilon : \left[ X,-\right] \otimesimes X\rightarrow \mathrm{Id}$ and set ${\rm ev} _{X}:=\epsilon _{\mathds{I}}.$ Clearly we have the following bijection. \begin{equation*} {\rm Hom} _{\mathcal{C}}\left( T,X^{\ast }\right) \rightarrow {\rm Hom} _{\mathcal{C}}\left( T\otimesimes X,\mathds{I}\right) :u\mapsto {\rm ev} _{X}\circ \left( u\otimesimes X\right) . \end{equation*} This shows that $\mathcal{C}$ is pre-rigid. \end{invisible} \end{proof} \subsection{Connections with rigid categories}We here discuss some connections with rigidity or, more generally, with the existence of a dualizable object. \begin{remark} We have observed that the notion of pre-dual coincides with the one of weak dual in the sense of \cite{DP}. As a consequence, if $X$ is a dualizable object in a pre-rigid monoidal category $\mathcal{C}$, then the pre-dual $X^*$ is exactly its dual in $\mathcal{C}$, cf. \cite[Theorem 1.3]{DP} plus the uniqueness of the pre-dual stated in Corollary \ref{cor:uniqueness}. \end{remark} \begin{remark}We observed in Subsection \ref{sub:basics} that a right rigid monoidal category is right closed. On the other hand, by Proposition \ref{prop:prerigid2}, right closed implies pre-rigid. Thus, what is in a sense missing in the notion of pre-rigidity compared to the notion of rigidity is the coevaluation. \end{remark} \begin{remark}In Subsection \ref{sub:basics} we also observed that there is no distinction between left and right rigidity or closedness for a braided monoidal category $(\mathcal{C},\otimesimes,\mathds{I})$. The same is true for pre-rigidity. For instance, if $\mathcal{C}$ is right pre-rigid, then it is also left pre-rigid with $^*X:=X^*$ and ${\rm ev}'_X:={\rm ev}_X\circ c_{X,X^*}:X\otimesimes X^*\to\mathds{I}$, where $c$ denotes the braiding of $\mathcal{C}$. \begin{invisible}The following bijection \begin{equation*} {\rm Hom} _{\mathcal{C}}\left( T,X^*\right) \rightarrow{\rm Hom} _{\mathcal{C}}\left( T\otimesimes X,\mathds{I}\right) \overset{{\rm Hom} _{\mathcal{C}}\left( c_{X,T},\mathds{I}\right)}\rightarrow {\rm Hom} _{\mathcal{C}}\left( X\otimesimes T,\mathds{I}\right) \end{equation*} maps $u:T\to X^*$ to ${\rm ev}_X\circ ( u\otimesimes X)\circ c_{X,T}={\rm ev}_X\circ c_{X,X^*}\circ ( X\otimesimes u)={\rm ev}'_X\circ ( X\otimesimes u).$ \end{invisible} \end{remark} We describe the pre-dual in case of a particular example of compact closed monoidal category. \begin{example}\label{ex:Rel} Consider the category ${\sf Rel} $ whose objects are sets and morphisms are binary relations, see \cite[page 26]{MacLane}. 
A binary relation $\mathcal{R} \subseteq I\times J$ is then denoted by $\mathcal{R} :I\relbar\joinrel\mapstochar\joinrel\rightarrow J.$ Given $\mathcal{S }:U\relbar\joinrel\mapstochar\joinrel\rightarrow I$ the composition $\mathcal{R} \circ \mathcal{S}:U\relbar\joinrel\mapstochar\joinrel\rightarrow J$ is defined by setting \begin{equation*} \mathcal{R} \circ \mathcal{S}:=\left\{ \left( u,j\right) \in U\times J\mid \exists i\in I\quad \left( i,j\right) \in \mathcal{R} \quad \left( u,i\right) \in \mathcal{S}\right\} . \end{equation*} By \cite[page 194]{Kelly-Laplaza}, the category ${\sf Rel} $ is compact closed where $\otimesimes$ is the cartesian product $\times$ and the unit object the singleton $\left\{ \ast \right\} .$ In particular it is rigid whence a fortiori both closed and pre-rigid. Explicitly the internal-hom is given by $\left[ J,K\right] :=J\times K$ as we have the obvious bijection \begin{equation*} {\rm Hom} _{{\sf Rel} }\left( I,\left[ J,K\right] \right) \cong \mathrm{ Hom}_{{\sf Rel} }\left( I\times J,K\right) . \end{equation*} As a consequence the pre-dual of an object $I$ is $I^*:=I$. The evaluation ${\rm ev}_{I}:I\times I\relbar\joinrel\mapstochar\joinrel\rightarrow \left\{ \ast \right\} $ is the binary relation ${\rm ev}_{I}:=\left\{ \left( i,i,\ast \right) \mid i\in I\right\} .$ Given a binary relation $\mathcal{R}:I\times J\relbar\joinrel\mapstochar\joinrel\rightarrow \left\{ \ast \right\} \ $we define $\mathcal{R}^\dag:I\relbar\joinrel\mapstochar\joinrel\rightarrow J$ by setting $ \mathcal{R}^\dag:=\left\{ \left( i,j\right) \in I\times J\mid \left( \left( i,j\right) ,\ast \right) \in \mathcal{R}\right\} $ so that $\mathcal{R}^\dag$ is the unique binary relation such that ${\rm ev}_{J}\circ \left( \mathcal{R}^\dag\times \mathrm{Id}_{J}\right) =\mathcal{R}$. For future use, note that, given binary relations $ \mathcal{R} :I\relbar\joinrel\mapstochar\joinrel\rightarrow J$ and $\mathcal{R} ^{\prime }:I^{\prime }\relbar\joinrel\mapstochar\joinrel\rightarrow J^{\prime }$, their cartesian product $\mathcal{R} \times \mathcal{R} ^{\prime }:I\times I^{\prime }\relbar\joinrel\mapstochar\joinrel\rightarrow J\times J^{\prime }$ is defined by \begin{equation*} \mathcal{R} \times \mathcal{R} ^{\prime }:=\left\{ \left( i,i^{\prime },j,j^{\prime }\right) \mid \left( i,j\right) \in \mathcal{R} \quad \left( i^{\prime },j^{\prime }\right) \in \mathcal{R} ^{\prime }\right\} . \end{equation*} \end{example} \subsection{Connections with closed categories} We observed in Subsection \ref{sub:basics} that ${\sf Vec}$ is closed but not rigid. In particular, by Proposition \ref{prop:prerigid2}, ${\sf Vec}$ is pre-rigid but not rigid. Yet, it can be given a braided (symmetric) structure by considering the twist. We now consider two examples of not necessarily braided categories that provide an instance of a strict monoidal functor that does not preserve pre-duals. In contrast, as mentioned in Subsection \ref{sub:basics}, the strong monoidal functors between right rigid categories preserve duals. \begin{example}\label{ex:closed} The monoidal category of vector spaces graded by a monoid $G$ will be denoted by ${\sf Vec}_G$. Let us briefly recall its structure, see e.g. \cite[Chapter A]{NvO} and \cite[Example 2.3.6]{EGNO}. \\If $V,W \in {\sf Vec}_{G}$, then $V\otimes W:= \bigoplus_{g}(V\otimes W)_g$, where $(V\otimes W)_g:=\mathrm{op}lus_{xy=g} V_x \otimes_k W_y$, becomes an object in ${\sf Vec}_G$. The unit object is $k=k_e$, $e$ being the neutral element in $G$. 
For objects $V=\mathrm{op}lus_{g\in G}V_g, W=\mathrm{op}lus_{g\in G}W_g \in {\sf Vec}_{G}$, we set $[V,W] =\mathrm{op}lus _{g\in G}[ V,W] _{g}$ where \begin{equation*} [V,W]_{g}=\left\{ f\in \mathrm{Hom} _{k}\left( V,W\right) \mid f\left( V_{h}\right) \subseteq W_{gh}\text{ for every }h\in G\right\}. \end{equation*} This defines an endofunctor $[V,-]$ of $\mathsf{Vec} _{G}$ which is a right adjoint of the endofunctor $\left( -\right) \otimesimes V$. On the other hand the right adjoint of the endofunctor $ V \otimesimes\left( -\right)$ is $\mathrm{HOM}\left(V,-\right)$ which is defined by setting $\mathrm{HOM}\left( V,W\right) =\mathrm{op}lus _{g\in G}\mathrm{HOM}\left( V,W\right) _{g}$ where \begin{equation*} \mathrm{HOM}\left( V,W\right) _{g}=\left\{ f\in \mathrm{Hom} _{k}\left( V,W\right) \mid f\left( V_{h}\right) \subseteq W_{hg}\text{ for every }h\in G\right\}, \end{equation*} for any $V,W\in \mathsf{Vec}_{G}$. Note that this latter inner-hom is the one usually employed in graded theory. Since the endofunctor $\left( -\right) \otimesimes V$ of $\mathsf{Vec }_{G}$ has a right adjoint $[V,-]$, the category ${\sf Vec}_G$ is right closed. Proposition \ref{prop:prerigid2} teaches that ${\sf Vec}_G$ is also pre-rigid and the pre-dual $V^{\ast}$ becomes $[V,k]$ for any $G$-graded vector space $V$. Now, unless $G$ is a commutative monoid, ${\sf Vec}_G$ can not be endowed with a braiding. Notice also that, unless $G$ is trivial, the pre-dual $V^{\ast}$ of an object $V$ in ${\sf Vec}_G$ does not coincide with the set of morphisms from $V$ to $k$ in this category. Thus the forgetful functor $U:{\sf Vec}_G\to {\sf Vec}$ between these two pre-rigid monoidal categories is strict monoidal but does not preserve pre-duals. \end{example} The category ${\sf Vec}_G$ is a particular instance of the following more general situation for $H=kG$, the group algebra. \begin{example}\label{ex:comodclosed} Let $H$ be a $k$-Hopf algebra. Then the category $^H\mathfrak{M}$ of left $H$-comodules is closed monoidal (as mentioned in \cite[page 362]{Schauenburg-HopExt}; see also \cite[Theorem 1.3.1]{Hovey}). Hence by Proposition \ref{prop:prerigid2} it is also pre-rigid. Note that the category itself needs not to be braided unless $H$ is coquasi-triangular (dual of \cite[Proposition XIII.1.4.]{Kassel}). From a communication with G. B\"{o}hm, it emerged that the existence of the antipode in Hovey's proof is superfluous so that the category of left $H$-comodules is closed even if $H$ is just a bialgebra. \\As in the particular case of ${\sf Vec}_G$, here also the forgetful functor $U:\!^H\mathfrak{M}\to {\sf Vec}$ between these two pre-rigid monoidal categories is strict monoidal but does not preserve pre-duals. \end{example} The very definition of a pre-rigid monoidal category together with Proposition \ref{prop:prerigid2} suggests a close connection between the notions of closedness and pre-rigidity. However, these notions are no synonyms. We already encountered an example of a pre-rigid monoidal category which is not closed namely the category associated to a contractive pomonoid, cf. Example \ref{ex:lambek}. We come back to other examples soon in Examples \ref{exa:terminal} and \ref{examplerightclosed}. \\But closed monoidal categories and pre-rigid ones share quite some properties, of course. For instance, it is known that a monoidal co-reflective full subcategory of a monoidal closed category is also closed (although the closed structure may not be preserved by the inclusion), see e.g. \cite[Lemma 4.1]{Hasegawa}. 
The following proposition extends this result to the pre-rigid case, but it holds in a more general setting than a co-reflection since the relevant isomorphism need not be the unit.
\begin{proposition}
\label{pro:prigadj}Let $\mathcal{A}$ and $\mathcal{B}$ be monoidal categories and let $\left( L,R\right) $ be an adjunction such that $L: \mathcal{B}\rightarrow \mathcal{A}$ is strong monoidal. If $\mathcal{A}$ is pre-rigid then the presheaf ${\rm Hom} _{\mathcal{B}}\left( -\otimes B,RL\left( \mathds{I}_{\mathcal{B}}\right) \right)$ is representable. If furthermore $\mathds{I}_{\mathcal{B}}\cong RL\left( \mathds{I}_{\mathcal{B}}\right) $ (e.g. $L$ is fully faithful), then $\mathcal{B}$ is pre-rigid too.
\end{proposition}
\begin{proof}
Denote by $\left( L,\psi _{2},\psi _{0}\right) $ the strong monoidal structure of $L$. For every $B\in \mathcal{B}$ we set $B^{\ast }:=R\left( \left( LB\right) ^{\ast }\right) $. Then, for every $T\in \mathcal{B}$, we have the following chain of isomorphisms
\begin{align*}
{\rm Hom} _{ \mathcal{B}}\left( T,R\left( \left( LB\right) ^{\ast }\right) \right) &\cong {\rm Hom} _{\mathcal{A}}\left( LT,\left( LB\right) ^{\ast }\right) \cong {\rm Hom} _{\mathcal{A}}\left( LT\otimes LB,\mathds{I}_{\mathcal{A} }\right) \cong{\rm Hom} _{\mathcal{A}}\left( L\left( T\otimes B\right) ,\mathds{I}_{\mathcal{A}}\right)\\
&\cong {\rm Hom} _{\mathcal{B}}\left( T\otimes B,R\left( \mathds{I}_{\mathcal{A}}\right) \right)\cong {\rm Hom} _{\mathcal{B}}\left( T\otimes B,RL\left( \mathds{I}_{\mathcal{B}}\right) \right)
\end{align*}
where the last isomorphism is induced by the isomorphism $R\psi _{0}: RL\left( \mathds{I}_{\mathcal{B}}\right)\to R\left(\mathds{I}_{\mathcal{A}}\right)$. The above displayed composition is natural in $T$ so that we get a natural isomorphism ${\rm Hom} _{ \mathcal{B}}\left( -,R\left( \left( LB\right) ^{\ast }\right) \right)\cong {\rm Hom} _{\mathcal{B}}\left( -\otimes B,RL\left( \mathds{I}_{\mathcal{B}}\right) \right)$ which says that the presheaf ${\rm Hom} _{\mathcal{B}}\left( -\otimes B,RL\left( \mathds{I}_{\mathcal{B}}\right) \right)$ is representable. Now $\mathds{I}_{\mathcal{B}}\cong RL\left( \mathds{I}_{\mathcal{B}}\right) $ induces an isomorphism ${\rm Hom} _{\mathcal{B}}\left( -\otimes B,RL\left( \mathds{I}_{\mathcal{B}}\right) \right)\cong {\rm Hom} _{\mathcal{B}}\left( -\otimes B,\mathds{I}_{\mathcal{B}} \right)$ so that the latter presheaf is representable as well. By Lemma \ref{lem:presheaf} we conclude. Note that if $L$ is fully faithful then the unit $\eta:{\sf id}_{\mathcal{B}}\to RL$ is an isomorphism by the dual of \cite[Proposition 3.4.1]{Borceux1}. In particular $\eta \mathds{I}_{\mathcal{B}}:\mathds{I}_{\mathcal{B}}\to RL\left( \mathds{I}_{\mathcal{B}}\right)$ is an isomorphism.
\end{proof}
Now follows a variant of Proposition \ref{pro:prigadj} that holds in case $\mathcal{B}$ is Cauchy complete, i.e. when every idempotent in $\mathcal{B}$ splits. Recall that an object $A$ in a category is called a \emph{retract} of an object $B$ if there are morphisms $i:A\to B$ and $p:B\to A$ such that $p\circ i= 1_A$. A functor $F: \mathcal{C} \rightarrow \mathcal{D}$ is called \emph{separable} if the obvious natural transformation $\mathcal{F} : {\rm Hom} _{\mathcal{C}}(-,-)\rightarrow {\rm Hom} _{\mathcal{D}}(F(-), F(-))$ is a split natural monomorphism.
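As a quick illustration of this notion (included only as an aside; the retraction $\mathcal{P}$ below is notation introduced here for this purpose), observe that every fully faithful functor $F$ is separable: each component $\mathcal{F}_{X,Y}:{\rm Hom} _{\mathcal{C}}(X,Y)\to {\rm Hom} _{\mathcal{D}}(F(X),F(Y))$ is then bijective, and the inverses
\begin{equation*}
\mathcal{P}_{X,Y}:=\left(\mathcal{F}_{X,Y}\right)^{-1}:{\rm Hom} _{\mathcal{D}}(F(X),F(Y))\to {\rm Hom} _{\mathcal{C}}(X,Y),\qquad \mathcal{P}\circ \mathcal{F}={\sf id},
\end{equation*}
assemble into the required natural retraction of $\mathcal{F}$.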
\begin{proposition}
\label{pro:prigadj2}Let $\mathcal{A}$ and $\mathcal{B}$ be monoidal categories and let $\left( L,R\right) $ be an adjunction such that $L: \mathcal{B}\rightarrow \mathcal{A}$ is strong monoidal. Assume that $\mathcal{B}$ is Cauchy complete and that $\mathds{I}_{\mathcal{B}}$ is a retract of $RL\left( \mathds{I}_{\mathcal{B}}\right) $ (e.g. when $L$ is a separable functor). Then, if $\mathcal{A}$ is pre-rigid, so is $\mathcal{B}$.
\end{proposition}
\begin{proof}
By definition of retract, there are morphisms $p:RL\left( \mathds{I}_{\mathcal{B}}\right)\to \mathds{I}_{\mathcal{B}}$ and $i:\mathds{I}_{\mathcal{B}}\to RL\left( \mathds{I}_{\mathcal{B}}\right)$ such that $p\circ i=1_{\mathds{I}_{\mathcal{B}}}$. Then ${\rm Hom} _{\mathcal{B}}\left( T\otimes B,p\right)\circ {\rm Hom} _{\mathcal{B}}\left( T\otimes B,i\right)$ is the identity and hence ${\rm Hom} _{\mathcal{B}}\left( -\otimes B,\mathds{I}_{\mathcal{B}} \right)$ is a retract of the presheaf ${\rm Hom} _{\mathcal{B}}\left( -\otimes B,RL\left( \mathds{I}_{\mathcal{B}}\right) \right)$, which is representable by Proposition \ref{pro:prigadj}. Thus, by \cite[Lemma 6.5.6]{Borceux1} (which can be applied since $\mathcal{B}$ is Cauchy complete), we get that ${\rm Hom} _{\mathcal{B}}\left( -\otimes B,\mathds{I}_{\mathcal{B}} \right)$ is representable as well. By Lemma \ref{lem:presheaf}, $\mathcal{B}$ is pre-rigid.
\\Notice that if $L$ is a separable functor then the unit $\eta:{\sf id}\to RL$ is a split natural monomorphism by Rafael's Theorem. In particular $i:=\eta \mathds{I}_{\mathcal{B}}:\mathds{I}_{\mathcal{B}}\to RL\left( \mathds{I}_{\mathcal{B}}\right)$ is a split monomorphism.
\end{proof}
\begin{example}
Recall that a \emph{monoidal comonad} on a monoidal category $(\mathcal{M},\otimes,\mathds{I})$ consists of a comonad $(\bot,\delta,\epsilon)$ on $\mathcal{M}$ such that $\bot:\mathcal{M}\to\mathcal{M}$ is lax monoidal and $\delta:\bot\to\bot\bot$ and $\epsilon:\bot\to{\sf id}_{\mathcal{M}}$ are monoidal natural transformations, see e.g. \cite{Pastro-Street}. Let $\bot$ be a comonad on a monoidal category $\mathcal{M}$. By the dual version of \cite[Corollary 3.13]{McCrudden} (see also \cite[Theorem 3.19]{Bohm}) there is a bijective correspondence between monoidal structures on the Eilenberg-Moore category of $\bot$-comodules $\mathcal{M}^\bot$ such that the forgetful functor $L:\mathcal{M}^\bot\to \mathcal{M}$ is strict monoidal and monoidal comonad structures on the functor $\bot$. Thus if $\bot$ is a monoidal comonad and also a coseparable comonad, i.e. when $L$ is a separable functor (see \cite[Theorem 1.6]{EV}), then we can apply Proposition \ref{pro:prigadj2} to conclude that $\mathcal{M}^\bot$ is pre-rigid in case $\mathcal{M}$ is pre-rigid and $\mathcal{M}^\bot$ is Cauchy complete (note that $L$ has a right adjoint given by the cofree functor). Consider a $k$-bialgebra $B$ whose underlying coalgebra is coseparable. Let $\mathfrak{M}^B$ be the category of right $B$-comodules. Since $k$ is a field, $\bot:=-\otimes_k B:\mathfrak{M}\to \mathfrak{M}$ is left exact and hence the forgetful functor $U:\mathfrak{M}^B\to \mathfrak{M}$ creates equalizers. Since $U$ has a right adjoint given by the functor $-\otimes_k B$, we can apply Beck's Theorem (see \cite{MacLane}) to get that the comparison functor $K:\mathfrak{M}^B\to \mathfrak{M}^\bot$ is a category isomorphism. Since $B$ is coseparable, the functor $U=L\circ K$ is separable and hence also $L:\mathfrak{M}^\bot\to \mathfrak{M}$ is separable.
Since $B$ is a bialgebra, $\mathfrak{M}^B$ is monoidal and $U$ is strict monoidal. As a consequence $\mathfrak{M}^\bot$ is monoidal and $L$ is strict monoidal. Hence, by the foregoing, $\mathfrak{M}^\bot$ is pre-rigid. Unfortunately for our theory developed so far, this can also be deduced from the stronger fact that $\mathfrak{M}^B$ is closed, which holds even if $B$ is not coseparable, see Example \ref{ex:comodclosed}.
\end{example}
\subsection{Examples with constant pre-dual}
The next result enables us to obtain further examples of pre-rigid monoidal categories (although with trivial pre-dual) some of which are not closed and which do not necessarily allow for a braiding. It also gives a criterion to exclude right closedness.
\begin{proposition}\label{pro:terminal}
Let $(\mathcal{C},\otimes,\mathds{I})$ be a monoidal category.
\begin{enumerate}
\item[$1)$] If the unit object $\mathds{I}$ is terminal in $\mathcal{C}$, then $\mathcal{C}$ is pre-rigid where $X^*:=\mathds{I}$ for every $X$ in $\mathcal{C}$.
\item[$2)$] If the unit object $\mathds{I}$ is initial in $\mathcal{C}$ and $\mathcal{C}$ is pre-rigid, then $\mathds{I}$ is terminal in $\mathcal{C}$.
\item[$3)$] If the unit object $\mathds{I}$ is initial in $\mathcal{C}$ and the skeleton of $\mathcal{C}$ is not the trivial category, then $\mathcal{C}$ is not right closed.
\end{enumerate}
\end{proposition}
\begin{proof}
$1)$. If $\mathds{I}$ is terminal, for every object $X$ there is a unique morphism $t_X:X\to \mathds{I}$ such that ${\rm Hom} _{\mathcal{C}}\left(X, \mathds{I}\right)=\{t_X\} $. Set $X^*:=\mathds{I}$ and ${\rm ev}_{X}:=t_{X^*\otimes X}$. The map \eqref{maprerig} is trivially bijective for every object $T$ in $\mathcal{C}$ and hence $\mathcal{C}$ is pre-rigid.
$2)$. For every object $X$ in $\mathcal{C}$ we have ${\rm Hom} _{\mathcal{C}}\left(X, \mathds{I}\right)\cong{\rm Hom} _{\mathcal{C}}\left(\mathds{I}\otimes X, \mathds{I}\right)\cong{\rm Hom} _{\mathcal{C}}\left(\mathds{I}, X^*\right)$ and the latter is a singleton since $\mathds{I}$ is an initial object. Thus $\mathds{I}$ is also terminal in this case.
$3)$. Assume that $\mathds{I}$ is an initial object and that $\mathcal{C}$ is right closed. Then, for every pair of objects $X,Y$ in $\mathcal{C}$, the functor $(-)\otimes X$ has a right adjoint $[X,-]$ and hence we have ${\rm Hom} _{\mathcal{C}}\left(X, Y\right)\cong {\rm Hom} _{\mathcal{C}}\left(\mathds{I}\otimes X, Y\right)\cong {\rm Hom} _{\mathcal{C}}\left(\mathds{I}, [X,Y]\right)$ and the latter is a singleton. Then $X$ is an initial object as well, for every $X$. So $X\cong \mathds{I}$ and the skeleton of $\mathcal{C}$ is the trivial category.
\end{proof}
\begin{remark}
A monoidal category $\mathcal{C}$ whose unit object $\mathds{I}$ is a terminal object is called \emph{semicartesian} in the literature. Rephrasing the first item from the above proposition, we have that being semicartesian implies pre-rigidity for a monoidal category. The converse of this statement does not hold; consider ${\sf Vec}$ for instance.
\end{remark}
\begin{example}
As observed in \cite[page 15]{Kelly}, the symmetric cartesian monoidal category ${\sf Top}$ is not closed. Note that the unit object of this monoidal category is the singleton and it is also a terminal object. Therefore, by Proposition \ref{pro:terminal}, ${\sf Top}$ is pre-rigid.
\end{example}
\begin{example}\label{ex:slice}
Let $(\mathcal{C},\otimes,\mathds{I})$ be a monoidal category.
Then the slice category $\mathcal{C}/\mathds{I}$ becomes monoidal with unit object $(\mathds{I},{\sf id}_\mathds{I})$, see \cite[page 3]{BJT}, which is also terminal. By Proposition \ref{pro:terminal}, $\mathcal{C}/\mathds{I}$ is pre-rigid.
\end{example}
\begin{examples}\label{exa:terminal}
We collect here further examples where Proposition \ref{pro:terminal} applies. Here $(\mathcal{C},\otimes,\mathds{I})$ denotes a braided monoidal category.
\begin{enumerate}[1.]
\item Since $\mathcal{C}$ is braided, the category ${\sf Coalg}(\mathcal{C})$ of coalgebra objects in $\mathcal{C}$ is monoidal. It has unit object $\mathds{I}$ which is also a terminal object in ${\sf Coalg}(\mathcal{C})$ via the counit. Hence this category is pre-rigid. Note that there are conditions on $\mathcal{C}$ ensuring that ${\sf Coalg}(\mathcal{C})$ is closed, see e.g. \cite[3.2]{Po}.
\item The category ${\sf Alg}^+(\mathcal{C})$ of augmented algebra objects in $\mathcal{C}$ is monoidal too with unit object $\mathds{I}$ which is also a zero object (both terminal and initial) in ${\sf Alg}^+(\mathcal{C})$. Hence this category is pre-rigid but not right closed. The fact that ${\sf Alg}^+(\mathcal{C})$ is pre-rigid can also be deduced from Example \ref{ex:slice} and the identification ${\sf Alg}^+(\mathcal{C})\cong {\sf Alg}(\mathcal{C})/\mathds{I}$.
\item If we further assume that $\mathcal{C}$ is symmetric monoidal, then the category ${\sf Bialg}(\mathcal{C})$ of bialgebra objects in $\mathcal{C}$ is monoidal too (see \cite[1.2.7]{Aguiar-Mahajan}) with unit object $\mathds{I}$ which is also a zero object in ${\sf Bialg}(\mathcal{C})$. Hence this category is pre-rigid but not right closed, as its skeleton is not trivial, in general. The same argument applies to ${\sf Hopfalg}(\mathcal{C})$.
\item Assume that $\mathcal{C}$ has a terminal object $\mathbf{1}$ such that $\mathbf{1}\ncong \mathds{I}$ (e.g. in ${\sf Vec}$ one has $\{0\}\ncong k$). The unique morphisms $m:\mathbf{1}\otimes \mathbf{1}\to \mathbf{1}$ and $u:\mathds{I}\to \mathbf{1}$ turn $(\mathbf{1},m,u)$ into an object in ${\sf Alg}(\mathcal{C})$: indeed, $m\circ(m\otimes \mathbf{1})$ and $m\circ(\mathbf{1}\otimes m)$ coincide as they both have $\mathbf{1}$ as target. Analogously one checks that $m$ is unitary. By a similar argument one shows that this algebra is a terminal object in ${\sf Alg}(\mathcal{C})$. On the other hand $\mathds{I}$ is initial in ${\sf Alg}(\mathcal{C})$ via the unit. We deduce that ${\sf Alg}(\mathcal{C})$ is not pre-rigid by the contrapositive of 2) in Proposition \ref{pro:terminal}.
\end{enumerate}
\end{examples}
\subsection{The functor induced by the pre-dual}
Here, given a pre-rigid monoidal category $\mathcal{C}$ we consider the construction of a functor $(-)^{*}:\mathcal{C}^\mathrm{op} \rightarrow \mathcal{C}$ acting as the pre-dual on objects and investigate some of its properties. In Section \ref{finalsection} we will study under which conditions this functor is part of a liftable pair of adjoint functors.\\
The following fact is implicitly understood in \cite{GV-OnTheDuality}. We write it for future reference.
\begin{lemma}\label{lem:contravariant}
Let $\mathcal{C}$ be a pre-rigid monoidal category. Then the assignment $X\mapsto X^*$ induces a functor $(-)^{*}:\mathcal{C}^\mathrm{op} \rightarrow \mathcal{C}$ such that \eqref{maprerig} is natural in $T$ and $X$.
\end{lemma}
\begin{proof}
For every morphism $t:T\otimes X\rightarrow \mathds{I}$ there is a unique morphism $t^\dag:T\rightarrow X^{\ast }$ such that $t={\rm ev} _{X}\circ \left( t^\dag\otimes X\right) .$ For every morphism $f:X\rightarrow Y$ define $f^{\ast }:=({{\rm ev} _{Y}\circ \left( Y^{\ast }\otimes f\right) })^\dag:Y^{\ast }\rightarrow X^{\ast }$ so that
\begin{equation}
{\rm ev}_{X}\circ \left( f^{\ast }\otimes X\right) ={\rm ev}_{Y}\circ \left( Y^{\ast }\otimes f\right) . \label{form:dual}
\end{equation}
This defines the desired functor. The naturality of \eqref{maprerig} in $T$ has already been observed, while the one in $X$ follows from \eqref{form:dual}.
\end{proof}
\begin{remark}
We observed in Subsection \ref{sub:basics} that a $*$-autonomous category $\mathcal{C}$ is necessarily closed whence, a fortiori, pre-rigid. By definition, such a category is equipped with a functor $(-)^{*}:\mathcal{C}^\mathrm{op} \rightarrow \mathcal{C}$ and a natural isomorphism ${\rm Hom} _{\mathcal{C}}\left( A\otimes B,C^{\ast }\right) \cong {\rm Hom} _{\mathcal{C}}\left( A,(B\otimes C)^{\ast }\right) $. Note that, if $\mathcal{C}$ is just a pre-rigid monoidal category, in view of Lemma \ref{lem:contravariant}, such a functor always exists and there is also the aforementioned isomorphism (without asking $\mathcal{C}$ to be symmetric) as
$${\rm Hom} _{\mathcal{C}}\left( A\otimes B,C^{\ast }\right)\cong{\rm Hom} _{\mathcal{C}}\left( (A\otimes B)\otimes C,\mathds{I}\right) \cong {\rm Hom} _{\mathcal{C}}\left( A\otimes (B\otimes C),\mathds{I}\right) \cong{\rm Hom} _{\mathcal{C}}\left( A,(B\otimes C)^{\ast }\right) .$$
\end{remark}
The next result characterizes the situation when the functor $(-)^{*}:\mathcal{C}^\mathrm{op} \rightarrow \mathcal{C}$ is \emph{self-adjoint (on the right)}, i.e. when there is a bijection ${\rm Hom} _{\mathcal{C}}\left( Y,X^{\ast }\right) \cong {\rm Hom} _{\mathcal{C}}\left( X,Y^{\ast }\right)$, natural both in $X$ and $Y$, see e.g. \cite[Chapter IV, Section 5]{MacLane-Moerdijk}.
\begin{proposition}\label{pro:adjprig}
Let $\mathcal{C}$ be a pre-rigid monoidal category. The following are equivalent.
\begin{enumerate}
\item[$(1)$] There is a bijection ${\rm Hom} _{\mathcal{C}}\left( Y,X^{\ast }\right) \cong {\rm Hom} _{\mathcal{C}}\left( X,Y^{\ast }\right) $ natural both in $X$ and $Y$.
\item[$(2)$] There is a bijection ${\rm Hom} _{\mathcal{C}}\left( X\otimes Y,\mathds{I}\right) \cong {\rm Hom} _{\mathcal{C}}\left( Y\otimes X,\mathds{I}\right) $ natural both in $X$ and $Y$.
\end{enumerate}
\end{proposition}
\begin{proof}
The statement follows from the following diagram where the two vertical arrows are bijective by pre-rigidity and natural in $X$ and $Y$ by Lemma \ref{lem:contravariant}.
\begin{equation*}\xymatrixcolsep{1cm}\xymatrixrowsep{.5cm}
\xymatrix{
{\rm Hom} _{\mathcal{C}}\left( X,Y^{\ast }\right) \ar[rr]\ar[d]^\wr &&{\rm Hom} _{\mathcal{C}}\left( Y,X^{\ast }\right)\ar[d]^\wr\\
{\rm Hom} _{\mathcal{C}}\left( X\otimes Y,\mathds{I}\right) \ar[rr]&& {\rm Hom} _{\mathcal{C}}\left( Y\otimes X,\mathds{I}\right) }
\end{equation*}\qedhere
\end{proof}
\begin{remark}
The easiest way to apply Proposition \ref{pro:adjprig} to get that $(-)^*$ is self-adjoint is to require that the category is braided (this idea will be applied in the proof of Proposition \ref{prop:prerigid}). Let us show, by means of an example, that the existence of the braiding is not necessary.
Let $(\mathcal{C},\otimes,\mathds{I})$ be a monoidal category such that the unit object $\mathds{I}$ is terminal in $\mathcal{C}$. By Proposition \ref{pro:terminal}, $\mathcal{C}$ is pre-rigid. Although $\mathcal{C}$ is not necessarily braided, we can apply Proposition \ref{pro:adjprig} to get that the functor $(-)^*:\mathcal{C}^\mathrm{op} \to\mathcal{C}$ is self-adjoint: the bijection ${\rm Hom} _{\mathcal{C}}\left( X\otimes Y,\mathds{I}\right) \cong {\rm Hom} _{\mathcal{C}}\left( Y\otimes X,\mathds{I}\right) $ holds trivially, since $\mathds{I}$ is a terminal object, and it is natural both in $X$ and $Y$. An instance of this situation is given by the category ${\sf Coalg}(\mathcal{C})$ for $\mathcal{C}$ braided monoidal, see Example \ref{exa:terminal}. Note that in general ${\sf Coalg}(\mathcal{C})$ is not braided unless $\mathcal{C}$ is symmetric, see e.g. \cite[1.2.7]{Aguiar-Mahajan}.
\end{remark}
\section{Natural constructions of pre-rigid categories}\label{Sectionexamples}
One of the aims of the present paper is to explore different constructions of new pre-rigid monoidal categories starting from known ones. In \cite{GV-OnTheDuality}, the first examples of braided pre-rigid monoidal categories that are considered in the context of liftability of the adjoint pair of functors provided by taking pre-duals are ${\sf Vec}$ with the twist as symmetry and ${\sf Vec}_{{\mathbb Z}_{2}}$ endowed with the ``super'' symmetry. In this section, we consider some further examples, some of which play a key role in loc. cit.: the category ${\sf Fam}(\mathcal{C})$ of families of a base category $\mathcal{C}$ and the category ${\sf Maf}(\mathcal{C})={\sf Fam}(\mathcal{C}^{\mathrm{op}})^{\mathrm{op}}$, which both appeared in \cite{CaeDel} with an eye on Turaev's group Hopf-(co)algebras; see the definitions below. Note that in \cite{GV-OnTheDuality}, the question of their pre-rigidity was not studied.
\\In Proposition \ref{pro:Famprerig} we prove the pre-rigidity of the category ${\sf Fam}(\mathcal{C})$ for $\mathcal{C}$ any pre-rigid monoidal category possessing products of pre-duals; conversely, if $\mathcal{C}$ has an initial object and ${\sf Fam}(\mathcal{C})$ is pre-rigid then $\mathcal{C}$ necessarily has products of pre-duals. We notice that for Proposition \ref{pro:Famprerig} to hold, arbitrary products of pre-duals are needed. Indeed, Remark \ref{Remark-predualsnecessary} teaches that ${\sf Fam}({{\sf Vec} ^{\textrm{f}}})$ is not pre-rigid, although ${{\sf Vec} ^{\textrm{f}}}$ has a rigid monoidal structure. For the sake of ``completeness'' we also show in Proposition \ref{pro:Famclosed} that ${\sf Fam}(\mathcal{C})$ is closed monoidal whenever $\mathcal{C}$ is closed and has products.
\\${\sf Maf}(\mathcal{C})$ is a slightly different story: ${\sf Maf}(\mathcal{C})$ is \emph{never} pre-rigid (Proposition \ref{pro:Maf}). We are able, however, to adjust the situation a bit: we study a variant ${\sf FamRel}(\mathcal{C})$ of this category and prove, in Proposition \ref{pro:Faf}, that it is pre-rigid monoidal whenever $\mathcal{C}$ is. Next, Proposition \ref{pro:catfun} asserts that, given a small category $\mathcal{I}$ and a complete pre-rigid monoidal category $\mathcal{C}$, the functor category $\left[ \mathcal{I},\mathcal{C}\right] $ is pre-rigid as well.
Finally, we conclude this section by considering the category $\mathcal{M}^G$ of externally $G$-graded $\mathcal{M}$-objects, where $G$ is a monoid and $\mathcal{M}$ is a given monoidal category: in Proposition \ref{pro:funcatG}, we prove that, under mild assumptions, the category $\mathcal{M}^G$ is pre-rigid whenever $\mathcal{M}$ is pre-rigid. This will allow us to provide a non-trivial example of a pre-rigid monoidal category which is not right closed, see Example \ref{examplerightclosed}.
\subsection{The ``Zunino'' category of families \texorpdfstring{${\sf Fam}(\mathcal{C})$}{TEXT}}
Recall from \cite[\S 3]{Benabou-Fibered} the definition of the category ${\sf Fam}(\mathcal{C})$ of families of $\mathcal{C}$. An object in this category is a pair $\underline{X}:=\left( X_{i}\right) _{i\in I}=\left( I,X_{i}\right) $ where $I$ is a set and $X_{i}$ is an object in $\mathcal{C}$ for all $i\in I$. Given two objects $\underline{X}=\left( I,X_{i}\right) $ and $\underline{Y}=\left( J,Y_{j}\right) $ a morphism $ \underline{X}\rightarrow \underline{Y}$ is a pair $\underline{\phi }:=\left( f,\phi _{i} \right):\underline{X}\rightarrow \underline{Y}$ where $f:I\rightarrow J$ is a map and $\phi _{i }:X_{i}\rightarrow Y_{f(i)}$ is a morphism in $\mathcal{C}$ for all $i \in I$. If $\mathcal{C}$ is a monoidal category, so is ${\sf Fam}(\mathcal{C})$, as follows (see e.g. \cite[Section 4]{GV-OnTheDuality}). Given objects $\underline{X}$ and $\underline{Y}$ as above, their tensor product is defined by $\underline{X}\otimes \underline{Y}=\left( I\times J,X_{i}\otimes Y_{j}\right) .$ Given morphisms $\underline{\phi } :=\left( f,\phi _{i}\right) :\underline{X} \rightarrow \underline{Y}$ and $\underline{\phi }^{\prime }:=\left( f^{\prime },\phi _{i'}^{\prime }\right) :\underline{X}^{\prime }\rightarrow \underline{Y}^{\prime }$, their tensor product is $\underline{\phi }\otimes \underline{\phi }^{\prime }=\left( f\times f^{\prime },\phi _{i}\otimes \phi _{i'}^{\prime }\right) $. The unit object of ${\sf Fam}(\mathcal{C})$ is $\underline{\mathds{I}}=\left( \left\{ \ast \right\} ,\mathds{I} \right) $, where $\mathds{I}$ is the unit object of $\mathcal{C}$.
\begin{proposition}\label{pro:Famprerig}
Let $\mathcal{C}$ be a pre-rigid monoidal category. If $\mathcal{C}$ has products of pre-duals, then ${\sf Fam}(\mathcal{C})$ is pre-rigid and the pre-dual of $\underline{Y}=(J,Y_j)$ is $ \underline{Y}^{\ast }=( \left\{ \ast \right\} ,\prod\limits_{j\in J}Y_{j}^{\ast }) .$ Conversely, if $\mathcal{C}$ has an initial object and ${\sf Fam}(\mathcal{C})$ is pre-rigid, then $\mathcal{C}$ has products of pre-duals.
\end{proposition}
\begin{proof}
Consider the diagonal functor $F:\mathcal{C}\rightarrow {\sf Fam}( \mathcal{C}):X\mapsto \left( \left\{ \ast \right\} ,X\right)$ introduced in \cite[\S 3]{Benabou-Fibered} (where it is denoted by $\eta _{\mathcal{C}}$). Note that for every set $S$, there is a unique map $S\rightarrow \left\{ \ast \right\} $ that will be denoted by $t_{S}.$ The assignment $\left( t_{I}:I\rightarrow \left\{ \ast \right\} ,\phi _{i}:X_{i}\rightarrow Y\right) \mapsto \left( \phi _{i}\right) _{i\in I}$ defines a bijection (natural in $Y$ but not in $\underline{X}$)
\begin{equation}\label{form:homFam}
{\rm Hom} _{{\sf Fam}(\mathcal{C})}\left( \underline{X},FY\right) \cong \prod\limits_{i\in I}{\rm Hom} _{\mathcal{C}}\left( X_{i},Y\right) .
\end{equation}
Assume $\mathcal{C}$ has products of pre-duals.
Then we can consider $\prod\limits_{j\in J}Y_{j}^{\ast }$ and we get a chain of bijections
\begin{gather*}
{\rm Hom} _{{\sf Fam}(\mathcal{C})}\left( \underline{X},F( \prod\limits_{j\in J}Y_{j}^{\ast }) \right) \overset{\eqref{form:homFam}}\cong \prod\limits_{i\in I} {\rm Hom} _{\mathcal{C}}\left( X_{i},\prod\limits_{j\in J}Y_{j}^{\ast }\right) \cong \prod\limits_{i\in I}\prod\limits_{j\in J}{\rm Hom} _{ \mathcal{C}}\left( X_{i},Y_{j}^{\ast }\right) \\
\cong \prod\limits_{\left( i,j\right) \in I\times J}{\rm Hom} _{\mathcal{C} }\left( X_{i}\otimes Y_{j},\mathds{I}\right) \overset{\eqref{form:homFam}}\cong {\rm Hom} _{{\sf Fam} (\mathcal{C})}\left( (I\times J,X_i\otimes Y_j),F\mathds{I}\right) = {\rm Hom} _{{\sf Fam}(\mathcal{C})}\left( \underline{X}\otimes \underline{Y},\underline{\mathds{I}}\right) .
\end{gather*}
Since the first isomorphism above is not natural in $\underline{X}$ we cannot immediately conclude that the composition is natural and hence that ${\sf Fam}(\mathcal{C})$ is pre-rigid. Nevertheless this composition maps $\left( t_{I}:I\rightarrow \left\{ \ast \right\} ,\phi _{i}:X_{i}\rightarrow \prod\limits_{j\in J}Y_{j}^{\ast }\right) $ to $\left( t_{I\times J}:I\times J\rightarrow \left\{ \ast \right\} ,\left( {\rm ev} _{Y_{j}}\circ \left( p_{j}\phi _{i}\otimes Y_{j}\right) \right) \right)$, where $p_{j}:\prod\limits_{j\in J}Y_{j}^{\ast }\rightarrow Y_{j}^{\ast }$ denotes the canonical projection. We set $\underline{Y}^{\ast }:=F\left( \prod\limits_{j\in J}Y_{j}^{\ast }\right) .$ If we take $ \underline{X}=\underline{Y}^{\ast }$ and apply the above composition to $ \mathrm{Id}_{\underline{Y}^{\ast }}$, we get ${\rm ev}_{\underline{Y} }:=\left( t_{\left\{ \ast \right\} \times J},{\rm ev}_{Y_{j}}\circ \left( p_{j}\otimes Y_{j}\right) \right) .$ By the following computation we see that the composition above is $\left( t_{I},\phi _{i}\right) \mapsto {\rm ev}_{\underline{Y}}\circ \left( \left( t_{I},\phi _{i}\right) \otimes {\underline{Y}}\right) .$
\begin{eqnarray*}
{\rm ev}_{\underline{Y}}\circ \left( \left( t_{I},\phi _{i}\right) \otimes {\underline{Y}}\right) &=&\left( t_{\left\{ \ast \right\} \times J},{\rm ev}_{Y_{j}}\circ \left( p_{j}\otimes Y_{j}\right) \right) \circ \left( \left( t_{I},\phi _{i}\right) \otimes \left( {J}, {Y_{j}}\right) \right) \\
&=&\left( t_{\left\{ \ast \right\} \times J},{\rm ev}_{Y_{j}}\circ \left( p_{j}\otimes Y_{j}\right) \right) \circ \left( t_{I}\times {J},\phi _{i}\otimes {Y_{j}}\right) \\
&=&\left( t_{\left\{ \ast \right\} \times J}\circ \left( t_{I}\times {J}\right) ,{\rm ev}_{Y_{j}}\circ \left( p_{j}\otimes Y_{j}\right) \circ \left( \phi _{i}\otimes {Y_{j}}\right) \right) \\
&=&\left( t_{I\times J},{\rm ev}_{Y_{j}}\circ \left( p_{j}\phi _{i}\otimes Y_{j}\right) \right)
\end{eqnarray*}
As a consequence ${\sf Fam}(\mathcal{C})$ is pre-rigid and the pre-dual of $\underline{Y}$ is $\underline{Y}^{\ast }:=F( \prod\limits_{j\in J}Y_{j}^{\ast }) .$ Conversely, assume that $\mathcal{C}$ has an initial object $\mathbf{0}$ and that ${\sf Fam}(\mathcal{C})$ is pre-rigid. Let $\underline{Y}=(J,Y_j)$ be a family of objects in $\mathcal{C}$. By hypothesis this family has a pre-dual $\underline{Y}^*\in {\sf Fam}(\mathcal{C})$. Thus, there is a set $S$ and an object $L_s$ for every $s\in S$ such that $\underline{Y}^*=(S,L_s)$.
We have a chain of bijections
\begin{gather*}
S\cong{\rm Hom} _{\mathsf{Set}}\left( \{*\},S \right) \cong {\rm Hom} _{{\sf Fam}(\mathcal{C})}\left( (\{*\},\mathbf{0}),(S,L_s) \right) \cong {\rm Hom} _{{\sf Fam}(\mathcal{C})}\left( F(\mathbf{0}),\underline{Y}^* \right) \cong {\rm Hom} _{{\sf Fam}(\mathcal{C})}\left( F(\mathbf{0})\otimes \underline{Y},\underline{\mathds{I}} \right)\\
= {\rm Hom} _{{\sf Fam}(\mathcal{C})}\left( F(\mathbf{0})\otimes \underline{Y},F{\mathds{I}} \right) \overset{\eqref{form:homFam}} \cong \prod\limits_{j\in J}{\rm Hom} _{\mathcal{C}}\left( \mathbf{0}\otimes Y_j,\mathds{I} \right)\cong \prod\limits_{j\in J}{\rm Hom} _{\mathcal{C}}\left( \mathbf{0},Y_j^* \right)
\end{gather*}
and the latter is a singleton as $\mathbf{0}$ is an initial object. Thus $S$ is a singleton and hence we can choose $\underline{Y}^*=F(L)$ for some $L\in \mathcal{C}$. Note that the evaluation ${\rm ev}_{\underline{Y}}:\underline{Y}^*\otimes \underline{Y}\to \underline{\mathds{I}}$ is of the form $(t_{\{*\}\times J},{\rm ev}_{j})$ for some morphisms ${\rm ev}_{j}:L\otimes {Y_j}\to {\mathds{I}}$. Set $p_j:=({\rm ev}_{j})^\dag:L\to Y_j^*$. Let us show that $(L,(p_j)_{j\in J})$ is the product of the family of pre-duals $(Y_j^*)_{j\in J}$. To this aim, consider the following chain of bijections
\begin{gather*}
{\rm Hom} _{\mathcal{C}}\left( X,L \right)\overset{\eqref{form:homFam}}\cong {\rm Hom} _{{\sf Fam}(\mathcal{C})}\left( FX,FL \right) = {\rm Hom} _{{\sf Fam}(\mathcal{C})}\left( FX,\underline{Y}^* \right)\cong {\rm Hom} _{{\sf Fam}(\mathcal{C})}\left( FX\otimes \underline{Y},\underline{\mathds{I}} \right)\\
\cong {\rm Hom} _{{\sf Fam}(\mathcal{C})}\left( FX\otimes \underline{Y},F{\mathds{I}} \right)\overset{\eqref{form:homFam}}\cong \prod\limits_{j\in J}{\rm Hom} _{\mathcal{C}}\left( X\otimes Y_j,\mathds{I} \right)\cong \prod\limits_{j\in J}{\rm Hom} _{\mathcal{C}}\left( X ,Y_j^* \right).
\end{gather*}
A direct computation shows that this composition gives the bijection
$${\rm Hom} _{\mathcal{C}}\left( X,L \right)\to \prod\limits_{j\in J}{\rm Hom} _{\mathcal{C}}\left( X ,Y_j^* \right),\qquad f\mapsto \left(({\rm ev}_{j}\circ (f\otimes Y_j))^\dag\right)_{j\in J}.$$
Note that ${\rm ev}_{Y_j}\circ ((p_j\circ f)\otimes Y_j)={\rm ev}_{Y_j}\circ (p_j\otimes Y_j)\circ (f\otimes Y_j)={\rm ev}_{Y_j}\circ (({\rm ev}_{j})^\dag\otimes Y_j)\circ (f\otimes Y_j)= {\rm ev}_{j}\circ (f\otimes Y_j)$ so that $({\rm ev}_{j}\circ (f\otimes Y_j))^\dag=p_j\circ f$. Hence the above bijection maps $f$ to the family $\left(p_j\circ f\right)_{j\in J}$. This means that $(L,(p_j)_{j\in J})$ is the product of the family $(Y_j^*)_{j\in J}$.
\end{proof}
\begin{remark}\label{Remark-predualsnecessary}
Consider any pre-rigid monoidal category $(\mathcal{C},\otimes, \mathds{I})$ with an initial object and suppose ${\sf Fam}(\mathcal{C})$ is pre-rigid. By Proposition \ref{pro:Famprerig} we get that $\mathcal{C}$ has products of pre-duals. In particular, for every set $S$, one has that $\prod_S^{\mathcal{C}}\mathds{I}^*$ exists, where $\prod^{\mathcal{C}}$ denotes the product in $\mathcal{C}$. Now, by Corollary \ref{cor:uniqueness}, one has $\mathds{I}^*\cong \mathds{I}.$ Thus $\prod_S^{\mathcal{C}}\mathds{I}$ exists.
As an instance of this situation, consider a monoidal category $(\mathcal{A},\otimes, \mathds{I})$ and let $\mathcal{C}$ be $\mathcal{A}^f$, the full subcategory of $\mathcal{A}$ consisting of all rigid objects in $\mathcal{A}$, which is known to form a rigid monoidal category $(\mathcal{A}^f,\otimes, \mathds{I})$. By the foregoing, if $\mathcal{A}^f$ has an initial object and $\prod_S^{\mathcal{A}^f}\mathds{I}$ does not exist, we can conclude that ${\sf Fam}(\mathcal{A}^f)$ is not pre-rigid. The following are examples of this situation.
\begin{enumerate}
\item Putting $\mathcal{A}={\sf Vec}$, $\mathcal{A}^f$ turns out to be ${\sf Vec}^{\textrm{f}}$. It has $0$ as a zero (whence initial) object. Note that, if $\prod_{{\mathbb N}}^{{\sf Vec}^{f}}k$ existed, then we would have isomorphisms of vector spaces
\begin{gather*}
\prod_{{\mathbb N}}^{{\sf Vec}^{f}}k\cong {\rm Hom} _{{\sf Vec}^{f}}\left(k,\prod_{{\mathbb N}}^{{\sf Vec}^{f}}k\right)\cong \prod_{{\mathbb N}}^{\mathsf{Set}}{\rm Hom} _{{\sf Vec}^{f}}(k,k)\cong \prod_{{\mathbb N}}^{\mathsf{Set}}k\notin {\sf Vec}^{f};
\end{gather*}
a contradiction. It follows that ${\sf Fam}({\sf Vec}^{\textrm{f}})$ is not pre-rigid.
\item More generally consider a commutative ring $R$ and the monoidal category $(R\text{-}\mathsf{Mod},\otimes_R,R)$ of $R$-modules. It is well-known that $R\text{-}\mathsf{Mod}^f$ coincides with the category of finitely-generated projective $R$-modules (cf. \cite[Proposition 2.6]{De}). It has $0$ as a zero object. Moreover, by an argument similar to the above one, one checks that $\prod_{{\mathbb N}}^{R\text{-}\mathsf{Mod}^f}R$ does not exist. It follows that ${\sf Fam}(R\text{-}\mathsf{Mod}^f)$ is not pre-rigid.
\item Similarly, for $R$ a noetherian commutative ring, if we denote by $(\mathcal{A}:=\mathsf{Comod}\text{-}H,\otimes_R,R)$ the category of right $H$-comodules for a Hopf $R$-algebra $H$, then every object in $\mathcal{A}^f$ is finitely-generated projective over $R$ (see \cite[Example]{Ulbrich}). Moreover $0$ is a zero object in $\mathcal{A}^f$. Let us check that $\prod_{{\mathbb N}}^{\mathcal{A}^f}R$ does not exist. Suppose it does; then it is a finitely generated $R$-module, hence noetherian ($R$ being a noetherian ring). As a consequence its $R$-submodule $\left(\prod_{{\mathbb N}}^{\mathcal{A}^f}R\right)^{\mathrm{co}H}$ of $H$-coinvariant elements must be finitely generated. This leads to the desired contradiction. In fact, since $R$ is a comodule via the trivial coaction, we get
\begin{gather*}
\left(\prod_{{\mathbb N}}^{\mathcal{A}^f}R\right)^{\mathrm{co}H}\cong {\rm Hom} _{\mathcal{A}^f}\left(R,\prod_{{\mathbb N}}^{\mathcal{A}^f}R\right)\cong \prod_{{\mathbb N}}^{\mathsf{Set}}{\rm Hom} _{\mathcal{A}^f}\left(R,R\right)\cong \prod_{{\mathbb N}}^{\mathsf{Set}}R^{\mathrm{co}H}= \prod_{{\mathbb N}}^{\mathsf{Set}}R.
\end{gather*}
Thus ${\sf Fam}(\mathcal{A}^f)$ is not pre-rigid.
\end{enumerate}
\end{remark}
So far we have dealt with the pre-rigidity of ${\sf Fam}(\mathcal{C})$ but, to the best of our knowledge, it is even unknown whether ${\sf Fam}(\mathcal{C})$ inherits closedness from $\mathcal{C}$ except in case $\mathcal{C}$ is cartesian, which was considered in \cite[Lemma 4.1]{Carboni} and in \cite[Theorem 2.11]{AR}, the latter giving a complete characterization of cartesian closedness of ${\sf Fam}(\mathcal{C}):=\Sigma\mathcal{C}$. The following result fills this gap.
\begin{proposition}\label{pro:Famclosed}
Let $\mathcal{C}$ be a complete closed monoidal category with products.
Then ${\sf Fam}(\mathcal{C})$ is closed monoidal, with $[(J,Y_j),( U,Z_u)]:=([J,U],[\underline{Y},\underline{Z}] _{\alpha})$ where for each function $\alpha:J\to U$, we set $\left[ \underline{Y},\underline{Z}\right] _{\alpha }:=\prod\limits_{j\in J}\left[ Y_{j},Z_{\alpha \left( j\right) }\right]$. Given $\underline{c}=\left( q:U\rightarrow U^{\prime },z_{u}:Z_{u}\rightarrow Z_{q\left( u\right) }^{\prime }\right) :\underline{Z }\rightarrow \underline{Z}^{\prime }$ we set
$$\left[ \underline{Y}, \underline{c}\right] :=\left( \left[ J,q\right] :\left[ J,U\right] \rightarrow \left[ J,U^{\prime }\right] ,\prod\limits_{j\in J}\left[ Y_{j},z_{\alpha \left( j\right) }\right] :\left[ \underline{Y},\underline{Z} \right] _{\alpha }\rightarrow \left[ \underline{Y},\underline{Z}^{\prime } \right] _{q\circ \alpha }\right) :\left[ \underline{Y},\underline{Z}\right] \rightarrow \left[ \underline{Y},\underline{Z}^{\prime }\right] .$$
\end{proposition}
\begin{proof}
The category $\mathsf{Set}$ is monoidal closed. In fact, we have a bijection
\begin{equation*}
{\rm Hom} _{\mathsf{Set}}\left( I\times J,U\right) \cong {\rm Hom} _{ \mathsf{Set}}\left( I,\left[ J,U\right] \right)
\end{equation*}
that assigns to a map $f:I\times J\rightarrow U$ the map $f^\dag :I\rightarrow \left[ J,U\right] $, where $f^\dag\left( i\right) \left( j\right) =f\left( i,j\right) .$ Set ${\rm ev}_{J,U}:\left[ J,U\right] \times J\rightarrow U,\left( \alpha ,j\right) \mapsto \alpha \left( j\right) .$ As a consequence we have that the map $ \alpha =\alpha _{\underline{X},\underline{Z}}:{\rm Hom} _{{\sf Fam}( \mathcal{C})}\left( \underline{X}\otimes \underline{Y},\underline{Z}\right) \rightarrow {\rm Hom} _{{\sf Fam}(\mathcal{C})}\left( \underline{X}, \left[ \underline{Y},\underline{Z}\right] \right)$ defined by the assignment
\begin{eqnarray*}
\left( f:I\times J\rightarrow U,\phi _{\left( i,j\right) }:X_{i}\otimes Y_{j}\rightarrow Z_{f\left( i,j\right) }\right) &\mapsto &\left( f^\dag :I\rightarrow \left[ J,U\right] ,\Delta \left( (\phi _{\left( i,j\right) })^\dag\right) _{j\in J}:X_{i}\rightarrow \prod\limits_{j\in J}\left[ Y_{j},Z_{f\left( i,j\right) }\right] \right)
\end{eqnarray*}
is invertible, where, given $h:X\otimes Y\rightarrow Z,$ we denoted by $h^\dag:X\rightarrow \left[ Y,Z\right] $ the unique morphism such that ${\rm ev}_{Y,Z}\circ \left( h^\dag\otimes Y\right) =h$ (here ${\rm ev} _{Y,Z}:\left[ Y,Z\right] \otimes Y\rightarrow Z$). Its inverse $\beta =\beta _{\underline{X},\underline{Z}}:{\rm Hom} _{{\sf Fam}(\mathcal{C} )}\left( \underline{X},\left[ \underline{Y},\underline{Z}\right] \right) \rightarrow {\rm Hom} _{{\sf Fam}(\mathcal{C})}\left( \underline{X} \otimes \underline{Y},\underline{Z}\right) $ maps $\left( g:I\rightarrow \left[ J,U\right] ,\psi _{i}:X_{i}\rightarrow \prod\limits_{j\in J}\left[ Y_{j},Z_{g\left( i\right) \left( j\right) }\right] \right) $ to $\left( {\rm ev}_{J,U}\circ \left( g\times J\right) :I\times J\rightarrow U, {\rm ev}_{Y_{j},Z_{g\left( i\right) \left( j\right) }}\circ \left( p_{j}\psi _{i}\otimes Y_{j}\right) :X_{i}\otimes Y_{j}\rightarrow Z_{g\left( i\right) \left( j\right) }\right) $.
In fact,
\begin{eqnarray*}
\beta \alpha \left( \left( f,\phi _{\left( i,j\right) }\right) \right) &=&\beta \left( \left( f^\dag,\Delta \left( (\phi _{\left( i,j\right) })^\dag\right) _{j\in J}\right) \right) \\
&=&\left( {\rm ev}_{J,U}\circ \left( f^\dag\times J\right) ,{\rm ev}_{Y_{j},Z_{f\left( i,j\right) }}\circ \left( p_{j}\Delta \left( ( \phi _{\left( i,j\right) })^\dag\right) _{j\in J}\otimes Y_{j}\right) \right) \\
&=&\left( f,{\rm ev}_{Y_{j},Z_{f\left( i,j\right) }}\circ \left( ( \phi _{\left( i,j\right) })^\dag\otimes Y_{j}\right) \right) =\left( f,\phi _{\left( i,j\right) }\right) .
\end{eqnarray*}
Conversely
\begin{eqnarray*}
\alpha \beta \left( \left( g,\psi _{i}\right) \right) &=&\alpha \left( {\rm ev}_{J,U}\circ \left( g\times J\right) ,{\rm ev} _{Y_{j},Z_{g\left( i\right) \left( j\right) }}\circ \left( p_{j}\psi _{i}\otimes Y_{j}\right) \right) \\
&=&\left( \left( {\rm ev}_{J,U}\circ \left( g\times J\right) \right) ^\dag,\Delta \left( \left( {\rm ev}_{Y_{j},Z_{g\left( i\right) \left( j\right) }}\circ \left( p_{j}\psi _{i}\otimes Y_{j}\right) \right) ^\dag\right) _{j\in J}\right) \\
&=&\left( g,\Delta \left( p_{j}\psi _{i}\right) _{j\in J}\right) =\left( g,\psi _{i}\right) .
\end{eqnarray*}
In order to check that $(-)\otimes \underline{Y}\dashv\left[ \underline{Y},-\right]$ it suffices to check the naturality of $\beta _{\underline{X},\underline{Z}}$. Given $\underline{a}=\left( p:I^{\prime }\rightarrow I,x_{i^{\prime }}:X_{i^{\prime }}^{\prime }\rightarrow X_{p\left( i^{\prime }\right) }\right) :\underline{X}^{\prime }\rightarrow \underline{X},$ we have
\begin{eqnarray*}
&&\left( \beta _{\underline{X^{\prime }},\underline{Z^{\prime }}}\circ {\rm Hom} _{{\sf Fam}(\mathcal{C})}\left( \underline{a},\left[ \underline{Y},\underline{c}\right] \right) \right) \left( g,\psi _{i}\right) \\
&=&\beta _{\underline{X^{\prime }},\underline{Z^{\prime }}}\left( \left[ \underline{Y},\underline{c}\right] \circ \left( g,\psi _{i}\right) \circ \underline{a}\right) \\
&=&\beta _{\underline{X^{\prime }},\underline{Z^{\prime }}}\left( \left( \left[ J,q\right] ,\prod\limits_{j\in J}\left[ Y_{j},z_{\alpha \left( j\right) }\right] \right) \circ \left( g,\psi _{i}\right) \circ \left( p,x_{i^{\prime }}\right) \right) \\
&=&\beta _{\underline{X^{\prime }},\underline{Z^{\prime }}}\left( \left( \left[ J,q\right] \circ g\circ p,\left( \prod\limits_{j\in J}\left[ Y_{j},z_{g\left( i\right) \left( j\right) }\right] \right) \circ \psi _{p\left( i^{\prime }\right) }\circ x_{i^{\prime }}\right) \right) \\
&=&\left( {\rm ev}_{J,U^{\prime }}\circ \left( \left[ J,q\right] gp\times J\right) ,{\rm ev}_{Y_{j},Z_{q\left( g\left( i\right) \left( j\right) \right) }^{\prime }}\circ \left( p_{j}\left( \prod\limits_{j\in J}\left[ Y_{j},z_{g\left( i\right) \left( j\right) }\right] \right) \psi _{p\left( i^{\prime }\right) }x_{i^{\prime }}\otimes Y_{j}\right) \right) \\
&=&\left( {\rm ev}_{J,U^{\prime }}\circ \left( \left[ J,q\right] gp\times J\right) ,{\rm ev}_{Y_{j},Z_{q\left( g\left( i\right) \left( j\right) \right) }^{\prime }}\circ \left( \left[ Y_{j},z_{g\left( i\right) \left( j\right) }\right] p_{j}\psi _{p\left( i^{\prime }\right) }x_{i^{\prime }}\otimes Y_{j}\right) \right) \\
&=&\left( q\circ {\rm ev}_{J,U}\circ \left( g\times J\right) \circ \left( p\times J\right) ,z_{g\left( i\right) \left( j\right) }\circ {\rm ev} _{Y_{j},Z_{g\left( i\right) \left( j\right) }}\circ \left( p_{j}\psi _{p\left( i^{\prime }\right) }\otimes Y_{j}\right) \circ \left( x_{i^{\prime }}\otimes
Y_{j}\right) \right) \\
&=&\left( q,z_{u}\right) \circ \left( {\rm ev}_{J,U}\circ \left( g\times J\right) ,{\rm ev}_{Y_{j},Z_{g\left( i\right) \left( j\right) }}\circ \left( p_{j}\psi _{i}\otimes Y_{j}\right) \right) \circ \left( p\times J,x_{i^{\prime }}\otimes Y_{j}\right) \\
&=&\left( q,z_{u}\right) \circ \left( {\rm ev}_{J,U}\circ \left( g\times J\right) ,{\rm ev}_{Y_{j},Z_{g\left( i\right) \left( j\right) }}\circ \left( p_{j}\psi _{i}\otimes Y_{j}\right) \right) \circ \left( \left( p,x_{i^{\prime }}\right) \otimes \underline{Y}\right) \\
&=&\underline{c}\circ \beta _{\underline{X},\underline{Z}}\left( g,\psi _{i}\right) \circ \left( \underline{a}\otimes \underline{Y}\right) =\left( {\rm Hom} _{{\sf Fam}(\mathcal{C})}\left( \underline{a} \otimes \underline{Y},\underline{c}\right) \circ \beta _{\underline{X}, \underline{Z}}\right) \left( g,\psi _{i}\right) .
\end{eqnarray*}
This shows that $\beta _{\underline{X},\underline{Z}}$ is natural so that $ -\otimes \underline{Y}\dashv \left[ \underline{Y},-\right] $ and hence $ {\sf Fam}(\mathcal{C})$ is closed.
\end{proof}
Since a closed category is necessarily pre-rigid (Proposition \ref{prop:prerigid2}), the previous result ensures that, if $\mathcal{C}$ is a complete closed monoidal category with products, then ${\sf Fam}(\mathcal{C})$ is in particular pre-rigid. However, these conditions on $\mathcal{C}$ are considerably stronger than those assumed in Proposition \ref{pro:Famprerig}.
\subsection{The ``Turaev'' category \texorpdfstring{${\sf Maf} (\mathcal{C})$}{TEXT}}
Recall from \cite[Section 4]{GV-OnTheDuality} the definition of the ``Turaev'' category ${\sf Maf} (\mathcal{C})$ as being ${\sf Fam}(\mathcal{C}^{\mathrm{op}})^{\mathrm{op}}$. Objects in this category are the same as in ${\sf Fam}(\mathcal{C})$, i.e. pairs $\underline{X}:=\left( X_{i}\right) _{i\in I}=\left( I,X_{i}\right) $ where $I$ is a set and $X_{i}$ is an object in $\mathcal{C}$ for all $i\in I$. A morphism between two objects $\underline{X}=\left( I,X_{i}\right) $ and $\underline{Y}=\left( J,Y_{j}\right) $, however, is a pair $\underline{\phi }:=\left( f,\phi _{j} \right):\underline{X}\rightarrow \underline{Y}$ where $f:J\rightarrow I$ is a map and $\phi _{j }:X_{f(j)}\rightarrow Y_{j}$ is a morphism in $\mathcal{C}$ for all $j \in J$. If $\mathcal{C}$ is a monoidal category, so is ${\sf Maf} (\mathcal{C})$, as follows (see e.g. \cite[Section 4]{GV-OnTheDuality}). Given objects $\underline{X}$ and $\underline{Y}$ as above, their tensor product is defined by $\underline{X}\otimes \underline{Y}=\left( I\times J,X_{i}\otimes Y_{j}\right) .$ Given morphisms $\underline{\phi } :=\left( f,\phi _{j}\right) :\underline{X} \rightarrow \underline{Y}$ and $\underline{\phi }^{\prime }:=\left( f^{\prime },\phi _{j'}^{\prime }\right) :\underline{X}^{\prime }\rightarrow \underline{Y}^{\prime }$, their tensor product is $\underline{\phi }\otimes \underline{\phi }^{\prime }=\left( f\times f^{\prime },\phi _{j}\otimes \phi _{j'}^{\prime }\right) $. The unit object of ${\sf Maf} (\mathcal{C})$ is $\underline{\mathds{I}}=\left( \left\{ \ast \right\} ,\mathds{I} \right) $, where $\mathds{I}$ is the unit object of $\mathcal{C}$. The following result shows how the category ${\sf Maf} (\mathcal{C})$ fails to be pre-rigid.
\begin{proposition}\label{pro:Maf}
Let $\mathcal{C}$ be a monoidal category. Then the category ${\sf Maf}(\mathcal{C})$ is never pre-rigid.
\end{proposition}
\begin{proof}
${\sf Maf} (\mathcal{C} )$ has a terminal object given by the empty family of objects $\mathbf{1}:=\left( \emptyset ,-\right) .$ Note that, given any object $\underline{X}=\left( I,X\right) $, we have $ \underline{X}\otimes \mathbf{1}=\left( I\times \emptyset ,-\right) \cong \left( \emptyset ,-\right) =\mathbf{1}$. Now suppose that $\mathbf{1}$ has a pre-dual $\mathbf{1}^{\ast }$ in ${\sf Maf} (\mathcal{C} )$. Then $ \mathrm{Id}_{\mathbf{1}^{\ast }}\in \mathrm{Hom}_{{\sf Maf} (\mathcal{C} )}\left( \mathbf{1}^{\ast },\mathbf{1}^{\ast }\right) \cong \mathrm{Hom }_{{\sf Maf} (\mathcal{C} )}\left( \mathbf{1}^{\ast }\otimes \mathbf{1},\underline{\mathds{I}}\right) \cong \mathrm{Hom}_{{\sf Maf} ( \mathcal{C} )}\left( \mathbf{1},\underline{\mathds{I}}\right) $ but there can be no morphism $\mathbf{1}\rightarrow \underline{\mathds{I}}$ as part of it would be a map $\{*\} \rightarrow \emptyset $, a contradiction.
\end{proof}
\begin{remark}
Proposition \ref{pro:Maf} ensures that ${\sf Maf} ({\sf Vec})$ is not pre-rigid. The contrapositive of Proposition \ref{prop:prerigid2} then delivers that ${\sf Maf} ({\sf Vec})$ is not even closed. Similarly ${\sf Maf} ({\sf Vec}^{\textrm{f}})$ is not pre-rigid, whence not even closed.
\end{remark}
\begin{remark}\label{TuraevZunino}
${\sf Vec}$ can be given the obvious symmetric monoidal structure by considering the twist. It is shown in \cite[Section 2.1]{CaeDel} (resp. Section 2.2) that this induces a symmetric monoidal structure on the Turaev category ${\sf Maf}({\sf Vec})$ (resp. Zunino category ${\sf Fam}({\sf Vec})$). \cite[Proposition 2.5]{CaeDel} (resp. Proposition 2.10) asserts that Hopf group-coalgebras (resp. Hopf group-algebras), introduced in \cite{Tu}, are precisely Hopf algebra objects in ${\sf Maf}({\sf Vec})$ (resp. ${\sf Fam}({\sf Vec})$) endowed with this symmetry. Notice that the definition of a Hopf group-coalgebra is not self-dual. The results obtained so far in this section show that, with regard to pre-rigidity and closedness, the categorical settings where Hopf group-coalgebras and Hopf group-algebras live behave differently as well.
\end{remark}
\subsection{The variant of the category of families \texorpdfstring{${\sf FamRel}(\mathcal{C})$}{TEXT}}
We now turn to the study of the category ${\sf FamRel}(\mathcal{C})$, for which, in contrast to ${\sf Maf}(\mathcal{C})$ above, pre-rigidity is again inherited from $\mathcal{C}$.
\begin{definition}[The category ${\sf FamRel}(\mathcal{C})$]
An object in ${\sf FamRel}(\mathcal{C})$, see \cite{LMM}, is a pair $\underline{X}:=\left( X_{i}\right) _{i\in I}=\left( I,X_{i}\right) $ where $I$ is a set and $X_{i}$ is an object in $\mathcal{C}.$ Given two objects $\underline{X}=\left( I,X_{i}\right) $ and $\underline{Y}=\left( J,Y_{j}\right) $, a morphism $ \underline{X}\rightarrow \underline{Y}$ is a set of triples $(i,j,f)$ where $i\in I,j\in J$ and $f:X_i\to Y_j$ is a morphism in $\mathcal{C}$. We can reorganize such a set of triples into a pair $\underline{\phi }:=\left( \mathcal{R} ,\phi _{\left( i,j\right) }\right) :\underline{X}\rightarrow \underline{Y}$ where $\mathcal{R} :I\relbar\joinrel\mapstochar\joinrel\rightarrow J$ is a binary relation (see Example \ref{ex:Rel}), i.e. a subset $\mathcal{R} \subseteq I\times J,$ while $\phi _{\left( i,j\right) }\subseteq {\rm Hom} _\mathcal{C} (X_{i}, Y_{j})$ for every $\left( i,j\right) \in \mathcal{R} $.
Given two morphisms $\underline{\phi }:=\left( \mathcal{R} ,\phi _{\left( i,j\right) }\right) :(I,X_i)\rightarrow (J,Y_j)$ and $\underline{\psi }:=\left( \mathcal{S} ,\psi _{\left( j,k\right) }\right) :(J,Y_j)\rightarrow (K,Z_k)$ their composition is defined to be $\underline{\psi}\circ \underline{\phi }:=\left( \mathcal{S}\circ\mathcal{R} ,\bigcup_{j\in J_{(i,k)}}\psi _{(j,k)}\circ\phi _{(i,j)}\right)$, where $J_{(i,k)}:=\{j\in J\mid (i,j)\in\mathcal{R}, (j,k)\in\mathcal{S} \}$ and $\psi _{(j,k)}\circ\phi _{(i,j)}:=\{f\circ g\mid f\in\psi _{(j,k)},g\in\phi _{(i,j)}\}$. The identity morphism is $({\sf id}_I,\{{\sf id}_{X_i}\}):(I,X_i)\to (I,X_i)$.
\begin{invisible}
The associativity of the composition $(I,X_i)\overset{(\mathcal{R},\phi_{(i,j)})}\to (J,Y_j)\overset{(\mathcal{S},\psi_{(j,k)})}\to(K,Z_k)\overset{(\mathcal{T},\lambda_{(k,t)})}\to (T,W_t)$ follows from
\begin{align*}
(\lambda_{(k,t)})_{(k,t)\in \mathcal{T}}\circ((\psi_{(j,k)})_{(j,k)\in \mathcal{S}}\circ (\phi_{(i,j)})_{(i,j)\in \mathcal{R}}) &=(\lambda_{(k,t)})_{(k,t)\in \mathcal{T}}\circ(\bigcup_{j\in J_{(i,k)}}\psi _{(j,k)}\circ\phi _{(i,j)})_{(i,k)\in \mathcal{S}\circ\mathcal{R}}\\
&=(\bigcup_{k\in K_{(i,t)}}\lambda_{(k,t)}\circ\bigcup_{j\in J_{(i,k)}}\psi _{(j,k)}\circ\phi _{(i,j)})_{(i,t)\in \mathcal{T}\circ\mathcal{S}\circ\mathcal{R}}\\
&=(\bigcup_{k\in K_{(i,t)}}\bigcup_{j\in J_{(i,k)}}\lambda_{(k,t)}\circ\psi _{(j,k)}\circ\phi _{(i,j)})_{(i,t)\in \mathcal{T}\circ\mathcal{S}\circ\mathcal{R}}\\
&=(\bigcup_{j\in J_{(i,t)}}\bigcup_{k\in K_{(j,t)}}\lambda_{(k,t)}\circ\psi _{(j,k)}\circ\phi _{(i,j)})_{(i,t)\in \mathcal{T}\circ\mathcal{S}\circ\mathcal{R}}\\
&=(\bigcup_{k\in K_{(j,t)}}\lambda_{(k,t)}\circ\psi _{(j,k)})_{(j,t)\in\mathcal{T}\circ\mathcal{S}}\circ(\phi_{(i,j)})_{(i,j)\in \mathcal{R}}\\
&=((\lambda_{(k,t)})_{(k,t)\in \mathcal{T}}\circ(\psi_{(j,k)})_{(j,k)\in \mathcal{S}})\circ (\phi_{(i,j)})_{(i,j)\in \mathcal{R}}.
\end{align*}
\end{invisible}
If $(\mathcal{C},\otimes,\mathds{I})$ is a monoidal category, so is ${\sf FamRel}(\mathcal{C})$, as follows. Given objects $\underline{X}$ and $\underline{Y}$ as above, their tensor product is defined by $\underline{X}\otimes \underline{Y}=\left( I\times J,X_{i}\otimes Y_{j}\right) .$ Given morphisms $\underline{\phi } :=\left( \mathcal{R} ,\phi _{\left( i,j\right) }\right) :\underline{X} \rightarrow \underline{Y}$ and $\underline{\phi }^{\prime }:=\left( \mathcal{ R}^{\prime },\phi _{\left( i^{\prime },j^{\prime }\right) }^{\prime }\right) :\underline{X}^{\prime }\rightarrow \underline{Y}^{\prime }$, their tensor product is $\underline{\phi }\otimes \underline{\phi }^{\prime }=\left( \mathcal{R} \times \mathcal{R} ^{\prime },\phi _{\left( i,j\right) }\otimes \phi _{\left( i^{\prime },j^{\prime }\right) }^{\prime }\right) $ where $ \mathcal{R} \times \mathcal{R} ^{\prime }:=\left\{ \left( \left( i,i^{\prime }\right) ,\left( j,j^{\prime }\right) \right) \mid \left( i,j\right) \in \mathcal{R} \text{ and }\left( i^{\prime },j^{\prime }\right) \in \mathcal{R} ^{\prime }\right\} $ is the cartesian product of binary relations and $\phi _{\left( i,j\right) }\otimes \phi _{\left( i^{\prime },j^{\prime }\right) }^{\prime }:=\{f\otimes f'\mid f\in\phi _{\left( i,j\right) },f'\in \phi _{\left( i^{\prime },j^{\prime }\right) }^{\prime }\}$.
The unit object is $\underline{\mathds{I}}=\left( \left\{ \ast \right\} ,\mathds{I} \right) .$
\end{definition}
\begin{remark}
As observed in \cite{LMM}, if $\mathbf{1}$ denotes the terminal category, then ${\sf FamRel}(\mathbf{1})$ identifies with the category ${\sf Rel}$.
\end{remark}
\begin{proposition}\label{pro:Faf}
If $\mathcal{C}$ is a pre-rigid monoidal category, so is ${\sf FamRel}(\mathcal{C})$.
\end{proposition}
\begin{proof}
Given an object $\underline{Y}=\left( J,Y_{j}\right) \in {\sf FamRel}(\mathcal{C})$ we set $\underline{Y}^{\ast }:=\left( J,Y_{j}^{\ast }\right) .$ Consider now the map
\begin{eqnarray*}
{\rm Hom} _{{\sf FamRel}(\mathcal{C})}\left( \underline{X},\underline{Y} ^{\ast }\right) &\rightarrow &{\rm Hom} _{{\sf FamRel}(\mathcal{C})}\left( \underline{X}\otimes \underline{Y},\underline{\mathds{I}}\right) \\
\left( \mathcal{R} :I\relbar\joinrel\mapstochar\joinrel\rightarrow J,\phi _{\left( i,j\right) }\subseteq{\rm Hom} _\mathcal{C}(X_{i}, Y_{j}^{\ast })\right) &\mapsto &\left( {\rm ev} _{J}\circ \left( \mathcal{R} \times \mathrm{Id}_{J}\right) ,\{{\rm ev}_{Y_{j}}\}\circ \left( \phi _{\left( i,j\right) }\otimes \{\mathrm{Id} _{Y_{j}}\}\right) \right)\subseteq {\rm Hom} _\mathcal{C}(X_i\otimes Y_j,\mathds{I}).
\end{eqnarray*}
where ${\rm ev}_{J}\circ \left( \mathcal{R} \times \mathrm{Id}_{J}\right) $ is the binary relation considered in Example \ref{ex:Rel}. Its inverse is given by $\left( \mathcal{R} ,\phi _{\left( \left( i,j\right) ,\ast \right) }\right) \mapsto \left( \mathcal{R}^\dag,(\phi _{\left( \left( i,j\right) ,\ast \right) })^\dag\right) $ where for every binary relation $ \mathcal{R} :I\times J\relbar\joinrel\mapstochar\joinrel\rightarrow \left\{ \ast \right\}$ we define $ \mathcal{R}^\dag:I\relbar\joinrel\mapstochar\joinrel\rightarrow J$ as in Example \ref{ex:Rel} while for every $\phi _{\left( \left( i,j\right) ,\ast \right) }\subseteq {\rm Hom} _\mathcal{C}(X_{i}\otimes Y_{j},\mathds{I})$, we set $(\phi _{\left( \left( i,j\right) ,\ast \right) })^\dag:=\{f^\dag\mid f\in \phi _{\left( \left( i,j\right) ,\ast \right) }\}\subseteq{\rm Hom} (X_{i},Y_{j}^{\ast })$ and for every morphism $f:X_{i}\otimes Y_{j}\rightarrow \mathds{I}$ the morphism $f^\dag:X_{i}\rightarrow Y_{j}^{\ast }$ is the unique morphism in $\mathcal{C}$ such that ${\rm ev}_{Y_{j}}\circ \left( f^\dag\otimes Y_{j}\right) =f$, given by the pre-rigidity of $\mathcal{C}$. Define ${\rm ev}_{\underline{Y}}:\underline{Y}^{\ast }\otimes \underline{Y} \rightarrow \underline{\mathds{I}}$ by setting ${\rm ev}_{\underline{Y} }:=\left( {\rm ev}_{J},\{{\rm ev}_{Y_{j}}\}\right) .$ We compute
\begin{eqnarray*}
{\rm ev}_{\underline{Y}}\circ \left( \left( \mathcal{R} ,\phi _{\left( i,j\right) }\right) \otimes \mathrm{Id}_{\underline{Y}}\right) &=&\left( {\rm ev}_{J},\{{\rm ev}_{Y_{j}}\}\right) \circ \left( \left( \mathcal{R} ,\phi _{\left( i,j\right) }\right) \otimes \left( \mathrm{Id}_{J},\{\mathrm{Id} _{Y_{j}}\}\right) \right) \\
&=&\left( {\rm ev}_{J},\{{\rm ev}_{Y_{j}}\}\right) \circ \left( \mathcal{R} \times \mathrm{Id}_{J},\phi _{\left( i,j\right) }\otimes \{\mathrm{Id} _{Y_{j}}\}\right) \\
&=&\left( {\rm ev}_{J}\circ \left( \mathcal{R} \times \mathrm{Id} _{J}\right) ,\{{\rm ev}_{Y_{j}}\}\circ \left( \phi _{\left( i,j\right) }\otimes \{\mathrm{Id} _{Y_{j}}\}\right) \right) .
\end{eqnarray*}
Thus the bijection above is exactly $\left( \mathcal{R} ,\phi _{\left( i,j\right) }\right) \mapsto {\rm ev}_{\underline{Y}}\circ \left( \left( \mathcal{R} ,\phi _{\left( i,j\right) }\right) \otimes \mathrm{Id}_{ \underline{Y}}\right) $ and hence ${\sf FamRel}(\mathcal{C})$ is pre-rigid.
\end{proof}
\begin{remark}
Note that ${\sf Fam}(\mathcal{C})$ is a subcategory of ${\sf FamRel}( \mathcal{C})$ via the strong monoidal embedding (identity-on-objects faithful functor)
\begin{equation*}
{\sf Fam}(\mathcal{C})\rightarrow {\sf FamRel}(\mathcal{C}):\quad\left( I,X_{i}\right) \mapsto \left( I,X_{i}\right) ,\quad\left( f,\phi _{i}\right) \mapsto \left( f,\{\phi _{i}\}\right) .
\end{equation*}
This induces a functor from the Turaev category
\begin{equation*}
{\sf Maf} (\mathcal{C}):={\sf Fam}(\mathcal{C}^{\mathrm{op}})^{\mathrm{op}}\rightarrow {\sf FamRel}(\mathcal{C}^{\mathrm{op}})^{\mathrm{op}}.
\end{equation*}
Note also that we have a strong monoidal embedding
\begin{equation*}
{\sf Maf} (\mathcal{C})\rightarrow {\sf FamRel}(\mathcal{C}):\left( I,X_{i}\right) \mapsto \left( I,X_{i}\right), \left( f:I\rightarrow J,\phi _{i}:X_{f\left( i\right) }\rightarrow Y_{i}\right) \mapsto \left( f^{\sharp }:J\rightarrow I,\{\phi _{i}\}\subseteq{\rm Hom} _\mathcal{C}(X_{f\left( i\right) }, Y_{i})\right)
\end{equation*}
where $\mathcal{R} ^{\sharp }=\left\{ \left( j,i\right) \mid \left( i,j\right) \in \mathcal{R} \right\} :J\relbar\joinrel\mapstochar\joinrel\rightarrow I$ is the converse relation of a binary relation $\mathcal{R} :I\relbar\joinrel\mapstochar\joinrel\rightarrow J$.
\end{remark}
Summing up, for a monoidal category $\mathcal{C}$, both the monoidal categories ${\sf Fam}(\mathcal{C})$ and ${\sf Maf}(\mathcal{C})$ embed in ${\sf FamRel}(\mathcal{C})$. Moreover ${\sf FamRel}(\mathcal{C})$ inherits pre-rigidity from $\mathcal{C}$ with no further assumptions, ${\sf Fam}(\mathcal{C})$ inherits it provided $\mathcal{C}$ has products of pre-duals, while ${\sf Maf}(\mathcal{C})$ never does.
\subsection{The functor category \texorpdfstring{$\left[ \mathcal{I},\mathcal{C}\right] $}{TEXT}}
Given a small category $\mathcal{I}$ and a complete closed monoidal category $\mathcal{C}$, it is well-known that the functor category $\left[ \mathcal{I},\mathcal{C}\right] $ is closed as well, see e.g. \cite[Theorem B.14] {Aguiar-Mahajan-Bimonoids}. Explicitly, given functors $F,G:\mathcal{I}\to \mathcal{C}$, for any object $I$ in $\mathcal{I}$, $[F,G](I)$ is defined to be the universal object in $\mathcal{C}$ with the following property: For any morphism $f:I\to X$ in $\mathcal{I}$ there is a morphism $\eta_f:[F,G](I)\to [F(X),G(X)]$ in $\mathcal{C}$ such that for any $g:X\to Y$ in $\mathcal{I}$ the following diagram commutes.
\begin{equation*}
\begin{aligned}
\xymatrixcolsep{2cm}\xymatrix{[F,G](I)\ar[r]^{\eta_f}\ar[d]_-{\eta_{g\circ f }}&[F(X),G(X)]\ar[d]^-{[F(X),G(g)]} \\ [F(Y),G(Y)] \ar[r]^{[F(g),G(Y)]} & [F(X),G(Y)] }
\end{aligned}
\end{equation*}
It has also the following description as an end of a functor, see e.g. \cite[(B.22)] {Aguiar-Mahajan-Bimonoids}.
\begin{equation}\label{endhomfuncat}
[F,G](I)=\int_{J\in \mathcal{I}}\prod_{{\rm Hom} _{\mathcal{I} }(I,J)}[F(J),G(J)].
\end{equation}
Our next aim is to show that a similar result holds in case $\mathcal{C}$ is just pre-rigid.
\begin{proposition}\label{pro:catfun}
Let $\mathcal{I}$ be a small category and let $\mathcal{C}$ be a complete monoidal category.
If $\mathcal{C}$ is pre-rigid, so is the functor category $ \left[ \mathcal{I},\mathcal{C}\right] .$ \end{proposition} \begin{proof} By e.g. \cite[Exercise 4, page 165]{MacLane}, we know that $\left[ \mathcal{I}, \mathcal{C}\right] $ is monoidal where, for any functors $T,F:\mathcal{I} \rightarrow \mathcal{C}$, we have $\left( T\otimes F\right) \left( x\right) =T\left( x\right) \otimes F\left( x\right) $ and the unit object of $ \mathcal{C}^{\mathcal{I}}$ is the constant functor $\mathds{I}^{\prime }: \mathcal{I}\rightarrow \mathcal{C}$ on the unit object $\mathds{I}\in \mathcal{C}$. Consider a functor $F:\mathcal{I}\rightarrow \mathcal{C}$. Define $ S\left( x,y\right) :=\prod_{{\rm Hom} _{\mathcal{I}}(x,y)}F(y)^{\ast }$ and denote by $p_{g}:S\left( x,y\right) \rightarrow F(y)^{\ast }$ the canonical projection for every $g\in {\rm Hom} _{\mathcal{I}}(x,y)$. Given morphisms $u :x_{1}\rightarrow x_{2}$ and $v :y_{2}\rightarrow y_{1}, $ there is a unique morphism $S\left( u,v\right) :S\left( x_1,y_{1}\right) \rightarrow S\left( x_2,y_{2}\right) $ such that the following diagram commutes for every $g:x_{2}\rightarrow y_{2}$ \begin{equation}\label{def:Suv} \begin{aligned} \xymatrixcolsep{1.5cm}\xymatrix{S\left( x_{1},y_{1}\right)\ar@{.>}[r]^{S\left( u ,v \right)}\ar[d]_-{p_{v \circ g\circ u }}&S\left( x_{2},y_{2}\right)\ar[d]^-{p_{g}} \\ F(y_{1})^{\ast } \ar[r]^{F(v )^{\ast }} & F(y_{2})^{\ast } } \end{aligned} \end{equation} In this way we have defined a functor $S:\mathcal{I}\times \mathcal{I}^{^{ \mathrm{op} }}\rightarrow \mathcal{C}:\left( x,y^\mathrm{op} \right) \mapsto S\left( x,y\right) .$ Set $F^{\ast }(x):=\underleftarrow{\lim }_{y\in \mathcal{I}}\prod_{ {\rm Hom} _{\mathcal{I}}(x,y)}F(y)^{\ast }:=\underleftarrow{\lim }S\left( x,-\right) $ i.e. the limit of the functor $S\left( x,-\right) :\mathcal{I} ^\mathrm{op} \rightarrow \mathcal{C}:y^\mathrm{op} \mapsto S\left( x,y\right) =\prod_{{\rm Hom} _{\mathcal{I}}(x,y)}F(y)^{\ast }.$ Given $ u :x_{1}\rightarrow x_{2}$ in $\mathcal{I}$ we set $F^{\ast }(u):= \underleftarrow{\lim }S\left(u,-\right) :\underleftarrow{\lim } S\left( x_{1},-\right) \rightarrow \underleftarrow{\lim }S\left( x_{2},-\right) .$ This defines a functor $F^{\ast }=\underleftarrow{\lim } _{y\in \mathcal{I}}\prod_{{\rm Hom} _{\mathcal{I}}(-,y)}F(y)^{\ast }.$ Let us check that $F^{\ast }$ is a pre-dual of $F$ and construct explicitly an isomorphism \begin{equation*} \mathrm{Nat}(T\otimes F,\mathds{I}^{\prime })\cong \mathrm{Nat}\left( T,F^{\ast }\right) \end{equation*} for any functors $T,F:\mathcal{I}\rightarrow \mathcal{C}$.
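As a sanity check, note that if $\mathcal{I}$ is a discrete category then ${\rm Hom} _{\mathcal{I}}(x,y)=\emptyset $ for $y\neq x$, so that $S\left( x,y\right) $ is a terminal object for $y\neq x$ (the empty product, which exists as $\mathcal{C}$ is complete) while $S\left( x,x\right) =F(x)^{\ast }$; hence $F^{\ast }(x)\cong F(x)^{\ast }$ and the isomorphism we are after reduces to the pre-rigidity of $\mathcal{C}$ applied componentwise.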
Given $\alpha :T\otimesimes F\rightarrow \mathds{I}^{\prime }$, its components are morphisms $\alpha _{y}:T(y)\otimesimes F(y)\rightarrow \mathds{I}$ with $ y\in \mathcal{I}.$ Since $\mathcal{C}$ is pre-rigid we can consider $ (\alpha _{y})^\dag:T(y)\rightarrow F(y)^{\ast }\ $where $F(y)^{\ast }$ denotes the pre-dual of $F(y).$ For every $x\in \mathcal{I}$, there is a unique morphism $\alpha _{y}^{x}:T(x)\rightarrow S\left( x,y\right) $ such that, for every $g:x\rightarrow y$ in $\mathcal{I}$, we have \begin{equation}\label{def:alphaxy} \begin{aligned} \xymatrixcolsep{1.5cm}\xymatrix{T\left( x\right)\ar@{.>}[r]^{\alpha _{y}^{x}}\ar[d]_-{T\left( g\right)}&S\left( x,y\right)\ar[d]^-{p_{g}} \\ T\left( y\right) \ar[r]^{(\alpha _{y})^\dag} & F(y)^{\ast } } \end{aligned} \end{equation} Given morphisms $u :x_{1}\rightarrow x_{2}$ and $v :y_{2}\rightarrow y_{1},$ for every $g:x_{2}\rightarrow y_{2}$ we have \begin{eqnarray*} p_{g}\circ S\left( u ,v \right) \circ \alpha _{y_{1}}^{x_{1}} &\overset{\eqref{def:Suv}}=&F(v )^{\ast }\circ p_{v \circ g\circ u }\circ \alpha _{y_{1}}^{x_{1}}\overset{\eqref{def:alphaxy}}=F(v )^{\ast }\circ (\alpha _{y_{1}})^\dag\circ T\left( v\circ g\circ u \right) \\ &=&F(v )^{\ast }\circ (\alpha _{y_{1}})^\dag\circ T\left( v \right) \circ T\left( g\right) \circ T\left( u \right) \overset{\left( \ref{form:alphahat}\right) }{=}(\alpha _{y_{2}})^\dag\circ T\left( g\right) \circ T\left( u \right) \overset{\eqref{def:alphaxy}}=p_{g}\circ \alpha _{y_{2}}^{x_{2}}\circ T\left( u \right) \end{eqnarray*} where we applied the following formula \begin{equation} F(v )^{\ast }\circ (\alpha _{y_{1}})^\dag\circ T\left(v \right) = (\alpha _{y_{2}})^\dag,\text{ for every }v :y_{2}\rightarrow y_{1}, \label{form:alphahat} \end{equation} that can be proved as follows. The naturality of $\alpha $ tells $\alpha _{y_{1}}\circ \left( T\left( v \right) \otimesimes F\left( v \right) \right) =\alpha _{y_{2}}$ so that \begin{align*} {\rm ev}_{F(y_{2})}&\circ \left( F(v )^{\ast }\otimesimes F(y_{2})\right) \circ \left( (\alpha _{y_{1}})^\dag\otimesimes F(y_{2})\right) \circ \left( T\left( v \right) \otimesimes F(y_{2})\right) \\ &\overset{\left( \ref{form:dual}\right) }{=}{\rm ev}_{F(y_{1})}\circ \left( F(y_{1})^{\ast }\otimesimes F(v )\right) \circ \left( (\alpha _{y_{1}})^\dag\otimesimes F(y_{2})\right) \circ \left( T\left( v \right) \otimesimes F(y_{2})\right) \\ &={\rm ev}_{F(y_{1})}\circ \left( (\alpha _{y_{1}})^\dag\otimesimes F(y_{1})\right) \circ \left( T(y_{1})\otimesimes F(v )\right) \circ \left( T\left( v \right) \otimesimes F(y_{2})\right) =\alpha _{y_{1}}\circ \left( T\left( v \right) \otimesimes F\left( v \right) \right) \overset{\text{nat.}\alpha}=\alpha _{y_{2}} \end{align*} which means that $F(v)^{\ast }\circ (\alpha _{y_{1}})^\dag\circ T\left( v \right) =(\alpha _{y_{2}})^\dag.$ Thus $p_{g}\circ S\left( u ,v \right) \circ \alpha ^{ x_{1}} _{y_{1}}=p_{g}\circ \alpha ^{ x_{2}} _{y_{2}}\circ T\left( u \right) $ and hence \begin{equation} S\left( u ,v \right) \circ \alpha _{y_{1}}^{x_{1}}=\alpha _{y_{2}}^{x_{2}}\circ T\left( u\right) ,\text{ for every }u :x_{1}\rightarrow x_{2}\text{ and }v :y_{2}\rightarrow y_{1}. 
\label{form:S} \end{equation} In particular, taking $u =1_{x},$ we obtain $S\left( x,v \right) \circ \alpha _{y_{1}}^{x}=\alpha _{y_{2}}^{x}$ for all $v :y_{2}\rightarrow y_{1}.$ This means that $\left( T(x),\alpha _{y}^{x}:T(x)\rightarrow S\left( x,y\right) \right) _{y\in \mathcal{I}}$ is a cone for the functor $S\left( x,-\right) :\mathcal{I}^{^{\mathrm{op} }}\rightarrow \mathcal{C}:y^\mathrm{op} \mapsto S\left( x,y\right) $ and hence it defines a unique morphism $(\alpha^\dag )_{x}:T(x)\rightarrow F^{\ast }(x):=\underleftarrow{\lim }_{y\in \mathcal{I}}S\left( x,y\right) $ such that $q^x_{y}\circ (\alpha^\dag)_{x}=\alpha _{y}^{x},$ where $ q^x_{y}:F^{\ast }(x)\rightarrow S\left( x,y\right) $ is the canonical map defining the limit. \begin{equation}\label{def:alphahatx} \begin{aligned} \xymatrixcolsep{1.5cm}\xymatrix{T\left( x\right)\ar@{.>}[rd]_{(\alpha^\dag )_{x}}\ar[rr]^{\alpha _{y}^{x}}&&S\left( x,y\right) \\ &F^*(x)\ar[ru]_{q^x_{y}} } \end{aligned} \end{equation} Let us check it is natural in $x.$ Given $u :x_1\rightarrow x_2$ in $\mathcal{I}$, we have \begin{equation*} q^{x_2}_{y}\circ F^{\ast }(u)\circ(\alpha^\dag)_{x_1}\overset{\text{def.} F^*}=S\left(u ,y\right) \circ q^{x_1}_{y}\circ (\alpha^\dag) _{x_1}\overset{\eqref{def:alphahatx}}=S\left( u ,y\right) \circ \alpha _{y}^{x_1}\overset{\eqref{form:S}}=\alpha _{y}^{x_2}\circ T(u )\overset{\eqref{def:alphahatx}}=q^{x_2}_{y}\circ (\alpha^\dag) _{x_2}\circ T(u ). \end{equation*} Thus $F^{\ast }(u )\circ (\alpha^\dag)_{x_1}=(\alpha^\dag) _{x_2}\circ T(u)$ i.e. $(\alpha^\dag)_{x}$ is natural in $ x $ and it defines $\alpha^\dag:T\rightarrow F^{\ast }.$ This way we get \begin{equation*} \Phi :\mathrm{Nat}(T\otimesimes F,\mathds{I}^{\prime })\rightarrow \mathrm{Nat} \left( T,F^{\ast }\right) :\alpha \rightarrow \alpha^\dag. \end{equation*} We have to check it is invertible and that its inverse is the one arising from evaluation. Let us first define this evaluation. We have to construct a natural transformation ${\rm ev}_{F}:F^{\ast }\otimesimes F\rightarrow \mathds{I}^{\prime }.$ We define it on the component $ x $ as follows \begin{equation*} F^{\ast }(x)\otimesimes F(x)\overset{q^{x}_{x}\otimesimes F(x)}{\longrightarrow }S\left( x,x\right) \otimesimes F\left( x\right) \overset{p_{\mathrm{Id}}\otimesimes F(x)}{ \longrightarrow }F\left( x\right) ^{\ast }\otimesimes F\left( x\right) \overset{ {\rm ev}_{F\left( x\right) }}{\longrightarrow }\mathds{I} \end{equation*} so that $\left( {\rm ev}_{F}\right) _{x}:={\rm ev}_{F\left( x\right) }\circ \left( p_{\mathrm{Id}}q^{x}_{x}\otimesimes F(x)\right) .$ The naturality follows from the following computation performed for every $f:x\rightarrow y$ . 
\begin{align*} \left( {\rm ev}_{F}\right) _{y}\circ &\left( F^{\ast }\otimesimes F\right) \left( f\right) ={\rm ev}_{F\left( y\right) }\circ \left( p_{\mathrm{Id} }q^{y}_{y}\otimesimes F(y)\right) \circ \left( F^{\ast }\left( f\right) \otimesimes F\left( f\right) \right) ={\rm ev}_{F\left( y\right) }\circ \left( p_{ \mathrm{Id}}q^{y}_{y}F^{\ast }\left( f\right) \otimesimes F(f)\right) \\ &\overset{\text{def.} F^*}={\rm ev}_{F\left( y\right) }\circ \left( p_{\mathrm{Id}}S\left( f,y\right) q^{x}_{y}\otimesimes F(f)\right)\overset{\eqref{def:Suv}} ={\rm ev}_{F\left( y\right) }\circ \left( F(\mathrm{Id}_{y})^{\ast }p_{f}q^{x}_{y}\otimesimes F(f)\right) ={\rm ev} _{F\left( y\right) }\circ \left( p_{f}q^{x}_{y}\otimesimes F(f)\right) \\ &={\rm ev}_{F\left( y\right) }\circ \left( F(y)^{\ast }\otimesimes F(f)\right) \circ \left( p_{f}q^{x}_{y}\otimesimes F(x)\right) \overset{\left( \ref {form:dual}\right) }{=}{\rm ev}_{F\left( x\right) }\circ \left( F(f)^{\ast }\otimesimes F(x)\right) \circ \left( p_{f}q^x_{y}\otimesimes F(x)\right) \\ &={\rm ev}_{F\left( x\right) }\circ \left( F(f)^{\ast }p_{f}q^{x}_{y}\otimesimes F(x)\right) \overset{\eqref{def:Suv}}={\rm ev}_{F\left( x\right) }\circ \left( p_{\mathrm{Id} }S\left( x,f\right) q^{x}_{y}\otimesimes F(x)\right) \\ &\overset{q^{x}_{y}\text{ cocone}}{=}{\rm ev}_{F\left( x\right) }\circ \left( p_{\mathrm{Id}}q^{x}_{x}\otimesimes F(x)\right) =\left( {\rm ev}_{F}\right) _{x}=\mathds{I}^{\prime }\left( f\right) \circ \left( {\rm ev}_{F}\right) _{x}. \end{align*} Define now \begin{equation*} \Psi :\mathrm{Nat}\left( T,F^{\ast }\right) \rightarrow \mathrm{Nat} (T\otimesimes F,\mathds{I}^{\prime }):\lambda \rightarrow {\rm ev}_{F}\circ \left( \lambda \otimesimes F\right) \end{equation*} For $\alpha :=\Psi \left( \lambda \right) ={\rm ev}_{F}\circ \left( \lambda \otimesimes F\right) $, we have \begin{equation}\label{form:pslmbdht}(\alpha _{y})^\dag =\left( \left( {\rm ev} _{F}\right) _{y}\circ \left( \lambda _{y}\otimesimes F\left( y\right) \right) \right) ^\dag =\left( {\rm ev}_{F\left( y\right) }\circ \left( p_{\mathrm{Id} }q^y_{y}\lambda _{y}\otimesimes F(y)\right) \right) ^\dag=p_{\mathrm{Id} }\circ q^y_{y}\circ\lambda _{y} \end{equation} and hence, for every $g:x\to y$, \begin{align*} p_{g}\circ q^{x}_{y}\circ \Phi \left( \Psi \left( \lambda \right) \right) _{x} &=p_{g}\circ q^{x}_{y}\circ (\alpha^\dag) _{x} \overset{\eqref{def:alphahatx}}=p_{g}\circ \alpha ^{x}_{y} \overset{\eqref{def:alphaxy}}= (\alpha _{y})^\dag\circ T\left( g\right) \overset{\eqref{form:pslmbdht}}=p_{\mathrm{Id} }\circ q^y_{y}\circ\lambda _{y}\circ T\left( g\right) \\ &\overset{\text{nat.}\lambda}=p_{\mathrm{Id} }\circ q^y_{y}\circ F^*\left( g\right)\circ\lambda _{x}\overset{\text{def.} F^*}=p_{\mathrm{Id} }\circ S\left( g,y\right)\circ q^x_{y}\circ \lambda _{x}\overset{\eqref{def:Suv}}=p_{g }\circ q^x_{y}\circ \lambda _{x} \end{align*} so that $\Phi \left( \Psi \left( \lambda \right) \right) =\lambda .$ Conversely \begin{eqnarray*} \Psi \left( \Phi \left( \alpha \right) \right) _{x} &=&\left( {\rm ev} _{F}\circ \left( \Phi \left( \alpha \right) \otimesimes F\right) \right) _{x}=\left( {\rm ev}_{F}\right) _{x}\circ \left( (\alpha^\dag) _{x}\otimesimes F\left( x\right) \right) ={\rm ev}_{F\left( x\right) }\circ \left( p_{\mathrm{Id}}q_{x}\otimesimes F(x)\right) \circ \left( (\alpha^\dag) _{x}\otimesimes F\left( x\right) \right) \\ &=&{\rm ev}_{F\left( x\right) }\circ \left( p_{\mathrm{Id}}q^x_{x}(\alpha^\dag)_{x}\otimesimes F(x)\right) \overset{\eqref{def:alphahatx}}{=} {\rm 
ev}_{F\left( x\right) }\circ \left( p_{\mathrm{Id}}\alpha _{x}^{x}\otimesimes F(x)\right) \overset{\eqref{def:alphaxy}}{=}\mathrm{ev }_{F\left( x\right) }\circ \left((\alpha _{x})^\dag\otimesimes F(x)\right) =\alpha _{x} \end{eqnarray*} so that $\Psi \left( \Phi \left( \alpha \right) \right) =\alpha $. As a consequence $\Psi $ is bijective and hence $\mathcal{C}^{\mathcal{I}}$ is pre-rigid. \end{proof} \begin{remark} For those who are familiar with the language of ends, we sketch here a different approach to the proof of the previous result; it can be seen as an adaptation of \cite{Shulman-mathoverflow}. Consider the same functor $S:\mathcal{I}\times \mathcal{I}^\mathrm{op} \rightarrow \mathcal{C}:\left( x,y^\mathrm{op} \right) \mapsto S\left( x,y\right) .$ Define now the functor $S^{\prime }\left( x\right) :\mathcal{I}^{^{\mathrm{op }}}\times \mathcal{I}\rightarrow \mathcal{C}:\left( y^{^{\mathrm{op} }},z\right) \mapsto S\left( x,y\right) $ which is constant in $z$. By \cite[ Corollary 2, page 224]{MacLane}, the end of $S^{\prime }\left( x\right) $ exists and coincide with the limit of the functor $S\left( x,-\right) : \mathcal{I}^\mathrm{op} \rightarrow \mathcal{C}:y^{^{\mathrm{op} }}\mapsto S\left( x,y\right) \ $i.e. with $F^{\ast }(x)$ (left-hand version of \cite[Proposition 3, page 225]{MacLane} applied $S^{\prime }\left( x\right) $ represented as the composition $\mathcal{I}^{^{\mathrm{op} }}\times \mathcal{I}\overset{Q}{\rightarrow }\mathcal{I}^\mathrm{op} \overset{S\left( x,-\right) }{\rightarrow }\mathcal{C}$ where $Q$ is the first projection). As a consequence we can write \begin{equation*} F^{\ast }(x)=\int_{y\in \mathcal{I}}\prod_{{\rm Hom} _{\mathcal{I} }(x,y)}F(y)^{\ast } \end{equation*} (note that this description agrees with \eqref{endhomfuncat} in case $\mathcal{C}$ is closed, once we take $G=\mathds{I}^{\prime }$). We compute \begin{align*} \mathrm{Nat}(T\otimesimes F,\mathds{I}^{\prime })&\overset{(a)}{\cong }\int_{y\in \mathcal{I}}{\rm Hom} _{\mathcal{C}}\left( (T\otimesimes F)(y),\mathds{I} ^{\prime }(y)\right) =\int_{y\in \mathcal{I}}{\rm Hom} _{\mathcal{C} }\left( T(y)\otimesimes F(y),\mathds{I}\right) \\ \cong& \int_{y\in \mathcal{I}}{\rm Hom} _{\mathcal{C}}\left( T(y),F(y)^{\ast }\right) \overset{(b)}{\cong }\int_{y\in \mathcal{I} }\int_{x\in \mathcal{I}}{\rm Hom} _{\mathcal{C}}\left( T(x),\prod_{\mathrm{ Hom}_{\mathcal{I}}(x,y)}F(y)^{\ast }\right) \\ \overset{(c)}{\cong }&\int_{x\in \mathcal{I}}\int_{y\in \mathcal{I}}\mathrm{ Hom}_{\mathcal{C}}\left( T(x),\prod_{{\rm Hom} _{\mathcal{I} }(x,y)}F(y)^{\ast }\right) \\ \overset{(d)}{\cong }&\int_{x\in \mathcal{I}}{\rm Hom} _{\mathcal{C}}\left( T(x),\int_{y\in \mathcal{I}}\prod_{{\rm Hom} _{\mathcal{I} }(x,y)}F(y)^{\ast }\right)\\ \cong &\int_{x\in \mathcal{I}}{\rm Hom} _{ \mathcal{C}}\left( T(x),F^{\ast }(x)\right) \overset{(a)}{\cong }\mathrm{Nat} \left( T,F^{\ast }\right) \end{align*} where in $(a)$ we used \cite[(2) on page 223]{MacLane}, in $(c)$ the Fubini rule for ends \cite[page 231]{MacLane}, in $(d)$ we used \cite[(4) on page 225]{MacLane} and in $\left( b\right) $ we applied for $C=F(y)^{\ast }$ the isomorphism ${\rm Hom} _{\mathcal{C}}\left( T(y),C\right) \cong \int_{x\in I}{\rm Hom} _{\mathcal{C}}\left( T(x),\prod_{{\rm Hom} _{ \mathcal{I}}(x,y)}C\right) \ $that can be achieved by the following argument. 
Given a functor $G:\mathcal{J}\rightarrow \mathsf{Set}$, by the Yoneda Lemma one has, for $y\in \mathcal{J}$ \begin{equation*} G\left( y\right) \cong \mathrm{Nat}({\rm Hom} _{\mathcal{J}}\left( y,-\right) ,G)\overset{(a)}{\cong }\int_{x\in \mathcal{J}}{\rm Hom} _{ \mathsf{Set}}({\rm Hom} _{\mathcal{J}}\left( y,x\right) ,G\left( x\right) )=\int_{x\in \mathcal{J}}\prod_{{\rm Hom} _{\mathcal{J}}(y,x)}G\left( x\right) . \end{equation*} Note that, by \cite[Formula (3), page 242]{MacLane}, the last term coincides with the right Kan extension $\mathrm{Ran}_{\mathrm{Id}_{\mathcal{J}}}G$ of $G$ along $ \mathrm{Id}_{\mathcal{J}}$. The above isomorphism can then be seen as a consequence of the fact that $\mathrm{Ran}_{K}G$, for some functor $K,$ is uniquely determined by $\mathrm{Nat}(T,\mathrm{Ran}_{K}G)\cong \mathrm{Nat}(T\circ K,G)$ and by the trivial equality $\mathrm{Nat}(T,G)=\mathrm{Nat}(T\circ \mathrm{Id}_{\mathcal{J}},G)$. In case $\mathcal{J}=\mathcal{I}^\mathrm{op} $ and $G:={\rm Hom} _{ \mathcal{C}}\left( T(-),C\right) :\mathcal{I}^\mathrm{op} \rightarrow \mathsf{Set}:x^\mathrm{op} \mapsto {\rm Hom} _{\mathcal{C}}\left( T(x),C\right) ,$ we get \begin{eqnarray*} {\rm Hom} _{\mathcal{C}}\left( T(y),C\right) &=&\int_{x^{^{\mathrm{op} }}\in \mathcal{I}^\mathrm{op} }\prod_{{\rm Hom} _{\mathcal{I}^{^{ \mathrm{op} }}}\left( y^\mathrm{op} ,x^\mathrm{op} \right) }\mathrm{ Hom}_{\mathcal{C}}\left( T(x),C\right) \\ &=&\int_{x\in \mathcal{I}}\prod_{{\rm Hom} _{\mathcal{I}}(x,y)}{\rm Hom} _{\mathcal{C}}\left( T(x),C\right) \cong \int_{x\in \mathcal{I}}{\rm Hom} _{\mathcal{ C}}\left( T(x),\prod_{{\rm Hom} _{\mathcal{I}}(x,y)}C\right) . \end{eqnarray*} \end{remark} \subsection{The category of \texorpdfstring{$G$}{TEXT}-graded vector spaces \texorpdfstring{$\mathcal{M}^G$}{TEXT}} Here we consider the construction of the category of externally $G$-graded $\mathcal{M}$-objects where $G$ is a monoid and $\mathcal{M}$ is a given monoidal category. As we will see below, this will allow us to provide a non-trivial example of a pre-rigid monoidal category which is not right closed, see Example \ref{examplerightclosed}. \begin{claim}\label{claim:MG}Let $\left( \mathcal{M},\otimes ,\mathds{I}\right) $ be a monoidal category and let $G$ be a monoid with neutral element $e$. Assume that $\mathcal{M}$ has an initial object $\mathbf{0}$ and coproducts indexed by $S_{g}:=\left\{ \left( a,b\right) \in G\times G\mid ab=g\right\} $ for every $g\in G$ and that $\otimes $ preserves them. Then we can consider the monoidal category $\mathcal{M} ^{G}$ of externally $G$-graded $\mathcal{M}$-objects, see e.g. \cite[Section 3]{Mitchell-Low}. Recall that an object in $\mathcal{M}^{G}$ is a sequence $(X_{g})_{g\in G}$ of objects in $ \mathcal{M}$ and a morphism is a sequence $(f_{g})_{g\in G}$ of morphisms in $\mathcal{M}$. We can define the tensor product $X\otimes Y$ in $\mathcal{M}^{G}$ of $ X=(X_{g})_{g\in G}$ and $Y=(Y_{g})_{g\in G}$ by the rule $$\left( X\otimes Y\right) _{g}:=\oplus _{(a,b)\in S_g}X_{a}\otimes Y_{b}=\oplus _{ab=g}X_{a}\otimes Y_{b},$$ and the unit by $\mathds{I} ^{G}:=\left( \delta _{g,e}\mathds{I}\right) _{g\in G}$ where $\delta _{g,e}\mathds{I}=\mathds{I}$ if $ g=e$ and $\delta _{g,e}\mathds{I}=\mathbf{0}$ otherwise.
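For instance, for $G={\mathbb Z}_{2}=\left\{ e,g\right\} $ the tensor product above reads \begin{equation*} \left( X\otimes Y\right) _{e}=\left( X_{e}\otimes Y_{e}\right) \oplus \left( X_{g}\otimes Y_{g}\right) ,\qquad \left( X\otimes Y\right) _{g}=\left( X_{e}\otimes Y_{g}\right) \oplus \left( X_{g}\otimes Y_{e}\right) , \end{equation*} so that, for $\mathcal{M}={\sf Vec}$, one recovers the usual tensor product of ${\mathbb Z}_{2}$-graded (super) vector spaces.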
\end{claim} In the case when $\mathcal{M}$ is the category ${\sf Vec}$ of vector spaces, the category ${\sf Vec}^G$ is monoidally equivalent to the category ${\sf Vec}_G$ of $G$-graded vector spaces through the functor $F:{\sf Vec}^G\to{\sf Vec}_G$ which maps an object $(V_g)_{g\in G}$ to $\oplus_{g\in G}V_g$ and a morphism $(f_g)_{g\in G}$ to the morphism $\oplus_{g\in G}f_g$. We already observed that the monoidal category ${\sf Vec}_G$ is closed in Example \ref{ex:closed}. The following result shows to what extent the same property is true for $\mathcal{M}^G$. \begin{proposition} \label{pro:funcatClosed}In the setting of \ref{claim:MG}, assume further that $\mathcal{M}$ has products indexed by $G$. If $\mathcal{M}$ is closed, so is the category $\mathcal{M}^{G}$, where $ [V,W]$ is defined by $[V,W]_{g}:=\prod_{h\in G}[V_{h},W_{gh}]$ for all objects $V$ and $W$ in $\mathcal{M}^{G}$. \end{proposition} \begin{proof} Recalling that $S_{g}=\left\{ \left( a,b\right) \in G\times G\mid ab=g\right\} ,$ the conclusion comes from the following chain of bijections whose composition is natural in $X$ and $Z$. \begin{gather*} \mathrm{Hom}_{\mathcal{M}^{G}}\left( X\otimes Y,Z\right) =\prod_{g\in G} \mathrm{Hom}_{\mathcal{M}}\left( \oplus _{\left( a,b\right) \in S_{g}}X_{a}\otimes Y_{b},Z_{g}\right) \\ \cong \prod_{g\in G}\prod_{\left( a,b\right) \in S_{g}}\mathrm{Hom}_{ \mathcal{M}}\left( X_{a}\otimes Y_{b},Z_{g}\right) \cong \prod_{g\in G}\prod_{\left( a,b\right) \in S_{g}}\mathrm{Hom}_{\mathcal{M}}\left( X_{a},[Y_{b},Z_{g}]\right) \\ \cong \prod_{a\in G}\prod_{b\in G}\mathrm{Hom}_{\mathcal{M}}\left( X_{a},[Y_{b},Z_{ab}]\right) \cong \prod_{a\in G}\mathrm{Hom}_{\mathcal{M} }\left( X_{a},\prod_{b\in G}[Y_{b},Z_{ab}]\right) \\ =\prod_{a\in G}\mathrm{Hom}_{\mathcal{M}}\left( X_{a},[Y,Z]_{a}\right) = \mathrm{Hom}_{\mathcal{M}^{G}}\left( X,[Y,Z]\right) .\qedhere \end{gather*} \end{proof} The next result concerns the pre-rigidity of $\mathcal{M}^G$. \begin{proposition} \label{pro:funcatG}In the setting of \ref{claim:MG}, assume further that the initial object $\mathbf{0}$ is also terminal (i.e. a zero object). Then, if $\mathcal{M}$ is pre-rigid so is the category $\mathcal{M}^{G}$.
Explicitly, the pre-dual of $X=(X_{g})_{g\in G}$ is defined by setting $\left( X^{\ast }\right) _{g}:=\left( \oplus _{h\in G,gh=e}X_{h}\right) ^{\ast }.$ \end{proposition} \begin{proof} First note that, if we set $W^g _{a,b}:=\delta _{a,g}X_{b},$ we have that, by hypothesis, $\mathcal{M}$ contains $\oplus _{\left( a,b\right) \in S_{e}}W^g _{a,b}=\oplus _{\left( a,b\right) \in S_{e}}\delta _{a,g}X_{b}\cong \oplus _{b\in G,gb=e}X_{b}$ so that it makes sense to define $\left( X^{\ast }\right) _{g}:=\left( Y_g\right) ^{\ast }$, where we set $Y_g:= \oplus _{h\in G,gh=e}X_{h}.$ Since $\mathbf{0}$ is a zero object, for every morphism $f$ in $\mathcal{M}$, we can define $\delta _{g,e}f$ to be $f$ if $g=e$ and to be the zero morphism otherwise. Consider the functors \begin{eqnarray*} L:\mathcal{M}^{G}\rightarrow \mathcal{M}, &&\qquad X=(X_{g})_{g\in G}\mapsto X_{e},\qquad f=(f_{g})_{g\in G}\mapsto f_{e}, \\ R:\mathcal{M}\rightarrow \mathcal{M}^{G}, &&\qquad V\mapsto \left( \delta _{g,e}V\right) _{g\in G},\qquad f\mapsto \left( \delta _{g,e}f\right) _{g\in G}. \end{eqnarray*} Note that $LRV=\left( RV\right) _{e}=V$ and let $\epsilon _{V}:=\mathrm{Id} _{V}.$ Moreover $RLX=RX_{e}=\left( \delta _{g,e}X_{e}\right) _{g\in G}.$ Define $\eta _{X}:=\left( \delta _{g,e}\mathrm{Id}_{X_{e}}\right) _{g\in G}:X\rightarrow RLX.$ This way we get natural transformations $\eta $ and $ \epsilon $ such that $\left( L,R,\eta ,\epsilon \right) $ is an adjunction. Set $G^{r}:=\left\{ a\in G\mid \exists b\in G,ab=e\right\} .$ Then \begin{eqnarray*} \oplus _{a\in G}T_{a}\otimes Y_a&=&\oplus _{a\in G}T_{a}\otimes (\oplus _{b\in G,ab=e}X_{b}) \cong\oplus _{a\in G}\oplus _{b\in G,ab=e}T_{a}\otimes X_{b} \\ &=&\left( \oplus _{a\in G^{r}}\oplus _{b\in G,ab=e}T_{a}\otimes X_{b}\right) \oplus \left( \oplus _{a\in G\backslash G^{r}}\oplus _{b\in G,ab=e}T_{a}\otimes X_{b}\right) \\ &=&\left( \oplus _{\left( a,b\right) \in G\times G,ab=e}T_{a}\otimes X_{b}\right) \oplus \left( \oplus _{a\in G\backslash G^{r}}\oplus _{b\in \emptyset }T_{a}\otimes X_{b}\right) \\ &=&\left( \oplus _{ab=e}T_{a}\otimes X_{b}\right) \oplus \left( \oplus _{a\in G\backslash G^{r}}\mathbf{0}\right) \cong \oplus _{ab=e}T_{a}\otimes X_{b}=(T\otimes X)_e=L(T\otimes X). \end{eqnarray*} Moreover, since $\mathds{I}^{G}=\left( \delta _{g,e}\mathds{I}\right) _{g\in G}=R\mathds{I}$, we get \begin{gather*} {\rm Hom} _{\mathcal{M}^{G}}\left( T\otimes X,\mathds{I}^{G}\right) = {\rm Hom} _{\mathcal{M}^{G}}\left( T\otimes X,R\mathds{I}\right) \cong {\rm Hom} _{\mathcal{M}}\left( L\left( T\otimes X\right) ,\mathds{I}\right) \\ \cong{\rm Hom} _{\mathcal{M}}\left( \oplus _{a\in G}T_{a}\otimes Y_a,\mathds{I} \right) \cong \prod_{a\in G}{\rm Hom} _{\mathcal{M}}\left( T_{a}\otimes Y_a , \mathds{I}\right) \\ \cong \prod_{a\in G}{\rm Hom} _{\mathcal{M}}\left( T_{a},\left( Y_a\right) ^{\ast }\right) =\prod_{a\in G}{\rm Hom} _{ \mathcal{M}}\left( T_{a},\left( X^{\ast }\right) _{a}\right) ={\rm Hom} _{ \mathcal{M}^{G}}\left( T,X^{\ast }\right) . \end{gather*} A direct computation shows that this yields the bijection ${\rm Hom} _{ \mathcal{M}^{G}}\left( T,X^{\ast }\right)\to {\rm Hom} _{\mathcal{M}^{G}}\left( T\otimes X,\mathds{I}^{G}\right)$, $u\mapsto {\rm ev}_{X}\circ(u\otimes X)$ (whence $\mathcal{M}^G$ is pre-rigid), where ${\rm ev}_{X}$ is defined as follows. Consider, for every $a,b\in G$ such that $ab=e$, the canonical inclusion $i_b:X_b\to Y_a$ and the morphism $f_{a,b}$ defined by $(X^*)_a\otimes X_b=(Y_a)^*\otimes X_b\overset{(Y_a)^*\otimes i_b}{\to}(Y_a)^*\otimes Y_a\overset{{\rm ev}_{Y_a}}{\to}\mathds{I}$. Then $({\rm ev}_{X})_g:\oplus _{ab=g}(X^*)_{a}\otimes X_b\to \mathds{I}^G_g$ is defined to be the zero morphism if $g\neq e$ and to be the codiagonal map of the $f_{a,b}$'s otherwise. \end{proof} As a consequence we get the following result. \begin{proposition} \label{pro:funcat} Let $\mathcal{M}$ be a monoidal category having finite coproducts that are preserved by $\otimes $. Assume that the initial object is also terminal.
If $\mathcal{M}$ is pre-rigid so is $\mathcal{M}^{{\mathbb N}}$. Explicitly, the pre-dual of $ X=\left( X_{n}\right) _{n\in {\mathbb N}}$ is defined by $(X^{\ast })_n:=\delta _{n,0} \left(X_{0}\right) ^{\ast }$ for every $n\in{\mathbb N}$. \end{proposition} \begin{proof} Note that, since $\mathcal{M}$ has finite coproducts, it also has the empty coproduct, i.e. the initial object. Given $n\in {\mathbb N}$, the set $ S_{n}:=\left\{ \left( a,b\right) \in {\mathbb N} \times {\mathbb N} \mid a+b=n\right\} $ is finite so that $\mathcal{M}$ contains all coproducts indexed by $S_{n}$. As a consequence we are in the setting of \ref{claim:MG} and hence can consider the monoidal category $ \mathcal{M}^{{\mathbb N}}$ with unit defined by $\left(\mathds{I}^{{\mathbb N}}\right)_{n}=\delta _{n,0}\mathds{I}$. By Proposition \ref{pro:funcatG}, we get that $\mathcal{M} ^{{\mathbb N}}$ is pre-rigid. Explicitly, the pre-dual of $X=(X_{n})_{n\in {\mathbb N} }$ is defined by setting $\left( X^{\ast }\right) _{n}=\left( \oplus _{h\in {\mathbb N} ,n+h=0}X_{h}\right) ^{\ast }=\left( \delta _{n,0}X_{0}\right) ^{\ast }$. Note that, since $\mathbf{0}$ is an initial object, $\mathbf{0}^*$ is a terminal object, as ${\rm Hom} _{ \mathcal{M}}\left( T,\mathbf{0}^{\ast }\right) \cong {\rm Hom} _{\mathcal{M}}\left( T\otimes \mathbf{0},\mathds{I}\right) \cong {\rm Hom} _{\mathcal{M}}\left( \mathbf{0},\mathds{I}\right) $ is a singleton (we are using that $T\otimes \left( -\right) $ preserves finite coproducts and in particular $\mathbf{0}$, i.e. $T\otimes \mathbf{0}\cong \mathbf{0}$). Thus, we get $\mathbf{0}^{\ast }\cong \mathbf{0}$. As a consequence we arrive at $\left( \delta _{n,0}X_{0}\right) ^{\ast }\cong \delta _{n,0}\left( X_{0}\right) ^{\ast }$. \end{proof} \begin{example} \label{examplerightclosed} Consider $ {\sf Vec} ^{\textrm{f}}$ and denote by $ \mathcal{A}$ the category $\left( {\sf Vec} ^{\textrm{f}}\right) ^{{\mathbb N}}$ of externally ${\mathbb N}$-graded $ {\sf Vec} ^{\textrm{f}}$-objects. Since ${\sf Vec} ^{\textrm{f}}$ is an abelian monoidal category whose tensor product preserves finite coproducts, we can apply Proposition \ref{pro:funcat} to conclude that $ \mathcal{A}$ is a pre-rigid monoidal category too. Let us check that $\mathcal{A}$ is not right closed. Suppose the contrary, i.e. assume that $-\otimes V\dashv [V,-]$ for $ V=(k)_{n\in {\mathbb N}}\in \mathcal{A}$. Thus, if we consider the unit object $U=\left( \delta _{n,0}k\right) _{n\in {\mathbb N}}$, we get \begin{equation*} {\rm Hom} _{\mathcal{A}}\left( V,V \right) \cong{\rm Hom} _{\mathcal{A}}\left( U\otimes V,V \right) \cong {\rm Hom} _{\mathcal{A}}\left( U,\left[V ,V \right] \right) \cong \left[ V ,V \right] _{0}. \end{equation*} Since the latter is finite-dimensional, we obtain the desired contradiction by observing that \begin{gather*} {\rm Hom} _{\mathcal{A}}\left( V ,V \right) ={\rm Hom} _{\mathcal{A}}\left( (k)_{n\in {\mathbb N}},(k)_{n\in {\mathbb N}}\right) =\prod _{n\in {\mathbb N}}{\rm Hom} _{k}\left( k,k\right) \cong \prod _{n\in {\mathbb N}}k= k^{{\mathbb N}}. \end{gather*} Note that, by the same argument used above, the category ${\sf Vec}^{{\mathbb N}}$ is a pre-rigid monoidal category too. In contrast, ${\sf Vec}^{{\mathbb N}}$ is closed, where $[V,W]_n:=\prod_{t\in {\mathbb N}}{\rm Hom} _{k}(V_t,W_{t+n})$, as Proposition \ref{pro:funcatClosed} shows.
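Note also that, according to Proposition \ref{pro:funcat}, the pre-dual of $V=(k)_{n\in {\mathbb N}}$ in $\mathcal{A}$ is given by $\left( V^{\ast }\right) _{n}=\delta _{n,0}\left( V_{0}\right) ^{\ast }=\delta _{n,0}k^{\ast }\cong \delta _{n,0}k$, that is $V^{\ast }\cong U$: the pre-dual only retains the degree zero component, although $V$ is non-zero in every degree.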
\end{example} \section{Pre-rigidity and liftability}\label{finalsection} In this final section, we propose to study liftability of adjoint pairs of functors in the light of general pre-rigid braided monoidal categories. In \cite{GV-OnTheDuality}, the liftability condition in the motivating examples is shown to hold by rather ad-hoc methods. It is our purpose here to treat the case of generic pre-rigid braided monoidal categories in a more systematic way. We first recall what this liftability condition precisely is (Definition \ref{def:liftable}) and what this condition means for the bialgebra objects in the involved categories. In Example \ref{ex:liftable}, which seems to be new and is considered to be of independent interest, we show that not every adjunction is liftable. In Proposition \ref{prop:prerigid}, we show that the pre-dual construction defines a special type of self-adjoint functor $R=(-)^*:\mathcal{C}^\mathrm{op}\to\mathcal{C}$. In Proposition \ref{lem:Barop}, we show that this type of functor gives rise to a liftable pair whenever the functor it induces at the level of algebras has a left adjoint, and we apply this result to the specific functor $R=(-)^*:\mathcal{C}^\mathrm{op}\to\mathcal{C}$ in Corollary \ref{coro:Isar}. Then, in Proposition \ref{pro:monadj}, we provide a criterion to transport the desired liftability from one category to another in the presence of a suitable monoidal adjunction and we apply it, in Corollary \ref{coro:externlift}, to transfer liftability from a category $\mathcal{M}$ to the category $\mathcal{M}^{\mathbb N}$ of externally ${\mathbb N}$-graded objects. As a consequence, we arrive at Example \ref{example-prerigidnotclosed} which revisits Example \ref{examplerightclosed} and provides an instance of a situation in which Corollary \ref{coro:externlift} (properly) holds; it shows how, in favorable cases, the notion of pre-rigid category allows one to construct a liftable pair of adjoint functors when right-closedness is not available. \subsection{Liftability of adjoint pairs} Let $\left( L:\mathcal{B}\rightarrow \mathcal{A},R:\mathcal{A} \rightarrow \mathcal{B}\right) $ be an adjunction with unit $\eta$ and counit $\epsilon$. It is known, see e.g. \cite[Proposition 3.84] {Aguiar-Mahajan}, that if $(L,\psi _{2},\psi _{0})$ is a colax monoidal functor, then $(R,\phi _{2},\phi _{0})$ is a lax monoidal functor where, for every $ X,Y\in \mathcal{A}$, \begin{gather} \phi _{2}\left( X,Y\right) =\left(\xymatrixcolsep{35pt}\xymatrix{RX\otimes RY\ar[r]^-{\eta_{ \left( RX\otimes RY\right)}}&RL\left( RX\otimes RY\right)\ar[r]^{R\psi _{2}\left( RX,RY\right)} &R\left( LRX\otimes LRY\right)\ar[r]^-{R\left( \epsilon_{X}\otimes \epsilon_{Y}\right)} & R\left( X\otimes Y\right)}\right), \label{form:PhiFromPsi2}\\ \phi _{0} =\left( \xymatrix{ \mathds{I}_{\mathcal{B}}\ar[r]^-{\eta_{ \mathds{I}_{\mathcal{ B}}}}& RL\left( \mathds{I}_{\mathcal{B}}\right) \ar[r]^-{R\psi _{0}}&R\left( \mathds{I}_{\mathcal{A}}\right)} \right).
\label{form:PhiFromPsi0} \end{gather} Conversely, if $(R,\phi _{2},\phi _{0})$ is a lax monoidal functor, then $ (L,\psi _{2},\psi _{0})$ is a colax monoidal functor where, for every $ X,Y\in \mathcal{B}$ \begin{gather} \psi _{2}\left( X,Y\right) :=\left(\xymatrixcolsep{35pt}\xymatrix{ L\left( X\otimesimes Y\right) \ar[r]^-{ L\left( \eta_{ X}\otimesimes \eta_{Y}\right) }&L\left( RLX\otimesimes RLY\right) \ar[r]^-{L\phi _{2}\left( LX,LY\right) } &LR\left( LX\otimesimes LY\right) \ar[r]^-{\epsilon_{\left( LX\otimesimes LY\right) }}& LX\otimesimes LY}\right) , \label{form:PsiFromPhi2}\\ \psi _{0} =\left( \xymatrix{L\left( \mathds{I}_{\mathcal{B}}\right) \ar[r]^-{L\phi _{0} }& LR\left( \mathds{I}_{\mathcal{A}}\right)\ar[r]^-{ \epsilon_{\mathds{I}_{\mathcal{A}}}}&\mathds{I}_{\mathcal{A} }}\right) . \label{form:PsiFromPhi0} \end{gather} Let $(R,\phi_2 ,\phi_0 ):\mathcal{A}\rightarrow \mathcal{B}$ be a lax monoidal functor. It is well-known that $R$ induces a functor $\overline{R}:={{\sf Alg}}(R):{{\sf Alg}}({\mathcal{A}})\rightarrow {{\sf Alg}}({\mathcal{B}})$ such that the diagram on the right-hand side in (\ref{diag:bar}) commutes (cf. \cite[Proposition 6.1, page 52]{Benabou-IntrodBicat}; see also \cite[Proposition 3.29] {Aguiar-Mahajan}). Explicitly, \begin{equation*} \overline{R}\left( A,m,u\right) =\left( RA,\xymatrix{RA\otimesimes RA\ar[r]^-{\phi _{2}\left( A,A\right) }&R\left( A\otimesimes A\right)\ar[r]^-{ Rm}&RA},\xymatrix{\mathds{I}_{{\mathcal{B}}}\ar[r]^-{\phi _{0}}&R\mathds{I}_{{\mathcal{A}}}\ar[r]^-{Ru}& RA}\right). \end{equation*} Dually, a colax monoidal functor $(L,\psi _{2},\psi _{0}):\mathcal{B} \rightarrow \mathcal{A}$ colifts to a functor $\underline{L}:={{\sf Coalg}}(L): {{\sf Coalg}}({\mathcal{B}})\rightarrow {{\sf Coalg}}({\mathcal{A}})$ such that the diagram on the left-hand side in (\ref{diag:bar}) commutes. Explicitly, \begin{equation*} \underline{L}\left( C,\Delta ,\varepsilon \right) =\left( LC,\xymatrix{LC\ar[r]^-{ L\Delta }&L\left( C\otimesimes C\right) \ar[r]^-{\psi _{2}\left( C,C\right) }&LC\otimesimes LC},\xymatrix{LC\ar[r]^-{ L\varepsilon }&L\mathds{I}_{{\mathcal{B}}}\ar[r]^-{\psi _{0} }&\mathds{I}_{{\mathcal{A}}}}\right). \end{equation*} The vertical arrows in the two diagrams below are the obvious forgetful functors. \begin{equation} \begin{array}{ccc} \xymatrix{{{\sf Coalg}}({\mathcal{B}})\ar[d]_ {\mho '=\mho _{\mathcal{B}} }\ar[rr]^-{\underline{L}={\sf Coalg}(L)} && {{{\sf Coalg}}({\mathcal{A}})}\ar[d]^{\mho =\mho _{{\mathcal{A}}}} \\ {\mathcal{B}} \ar[rr]^-{L} && {\mathcal{A}} } \end{array} \qquad \begin{array}{ccc} \xymatrix{{{\sf Alg}}({\mathcal{A}})\ar[d]_ {\Omega =\Omega _{{\mathcal{A}}}}\ar[rr]^-{\overline{R}={\sf Alg}(R)} && {{{\sf Alg}}({\mathcal{B}})}\ar[d]^{\Omega ' =\Omega _{{\mathcal{B}}}} \\ {\mathcal{A}} \ar[rr]^-{R} && {\mathcal{B}} } \end{array} \label{diag:bar} \end{equation} \begin{definition}[{\protect\cite[Definition 2.3]{GV-OnTheDuality}}] \label{def:liftable}Suppose $\mathcal{A}$ and $\mathcal{B}$ are monoidal categories and $R:\mathcal{A}\rightarrow \mathcal{B}$ is a lax monoidal functor with a left adjoint $L$. The pair $(L,R)$ is called \textit{liftable} if the induced functor $\overline{R}={\sf Alg}(R):{\sf Alg}(\mathcal{A})\to {\sf Alg}(\mathcal{B})$ has a left adjoint, denoted by $\overline{L}$, and the induced functor $\underline{L}={\sf Coalg}(L):{\sf Coalg}(\mathcal{B})\to {\sf Coalg}(\mathcal{A})$ has a right adjoint, denoted by $\underline{R}$. 
\end{definition} \subsection{Liftability for braided categories} Recall that when a category is \textit{braided} monoidal, its category of algebras and its category of coalgebras inherit the monoidal structure, see e.g. \cite[1.2.2]{Aguiar-Mahajan}. Let $\mathcal{A}$ and $\mathcal{B}$ now be braided monoidal categories and let $R:\mathcal{A}\rightarrow \mathcal{B}$ be a braided lax monoidal functor having a left adjoint $L$. By \cite[Proposition 3.80] {Aguiar-Mahajan}, the functor $\overline{R}$ is lax monoidal too. Explicitly, the lax monoidal functors $ (R,\phi _{2},\phi _{0})$ and $(\overline{R},\overline{\phi }_{2},\overline{ \phi }_{0})$ are connected by the following equalities, for every $\overline{ A}=\left( A,m_{A},u_{A}\right) ,\overline{B}=\left( B,m_{B},u_{B}\right) \in {{\sf Alg}}({\mathcal{A}})$ \begin{equation} \Omega _{{\mathcal{B}}}\circ \overline{R}=R\circ \Omega _{\mathcal{ A}} ,\qquad \Omega _{{\mathcal{B}}} (\overline{\phi } _{2}\left( \overline{A},\overline{B}\right)) =\phi _{2}\left( A,B\right) ,\qquad \Omega _{{\mathcal{B}}}(\overline{\phi }_{0})=\phi _{0}. \label{form:phiOver} \end{equation} Note that $R$ is a braided lax monoidal functor if and only if $L$ is a braided colax monoidal functor, see e.g. \cite[Proposition 3.85] {Aguiar-Mahajan}. Moreover, if $L$ is a braided colax monoidal functor one shows in a similar fashion that $\underline{L}$ is colax monoidal. The colax monoidal functors $(L,\psi _{2},\psi _{0})$ and $( \underline{L},\underline{\psi }_{2},\underline{\psi }_{0})$ are connected by the following equalities for every $\underline{C}=\left( C,\Delta _{C},\varepsilon _{C}\right) ,\underline{D}=\left( D,\Delta _{D},\varepsilon _{D}\right) \in {{\sf Coalg}}({\mathcal{B}})$ \begin{equation} \mho _{\mathcal{A}}\circ \underline{L}=L\circ \mho _{{\mathcal{B}} } ,\qquad \mho _{\mathcal{A}}( \underline{\psi } _{2}\left( \underline{C},\underline{D}\right) )=\psi _{2}\left( C,D\right) ,\qquad \mho _{\mathcal{A}}(\underline{\psi }_{0})=\psi _{0}. \label{form:psiUnder} \end{equation} As the following Example \ref{ex:liftable} shows, a pair $(L,R)$, where $R:\mathcal{A}\rightarrow \mathcal{B}$ is a (braided) lax monoidal functor between (braided) monoidal categories $\mathcal{A}$ and $\mathcal{B}$, having a left adjoint $L$, need not be liftable, \textit{a priori}. But, in case $\mathcal{A}$ and $\mathcal{B}$ are braided monoidal categories and $R:\mathcal{A}\rightarrow \mathcal{B}$ is a braided lax monoidal functor having a left adjoint $L$ such that the pair $(L,R)$ \textit{is} liftable, then, by \cite[Lemma 2.4 and Theorem 2.7]{GV-OnTheDuality}, there is an adjunction $\left( \overline{\underline{L}}, \overline{\underline{R}}\right) $ that fits into the following commutative diagrams (and explains the choice of the perhaps somewhat fuzzy term ``liftable'') \begin{equation} \xymatrix{{{\sf Bialg}}({\mathcal{B}})\ar[d]_ {\overline{\mho '}}\ar[rr]^-{\overline{\underline{L}}={\sf Coalg}(\overline{L})} && {{{\sf Bialg}}({\mathcal{A}})}\ar[d]^{\overline{\mho}} \\ {{\sf Alg}(\mathcal{B})} \ar[rr]^-{\overline{L}} && {{\sf Alg}(\mathcal{A})} } \qquad \xymatrix{{{\sf Bialg}}({\mathcal{A}})\ar[d]_ {\underline{\Omega}}\ar[rr]^-{\underline{\overline{R}}={\sf Alg}(\underline{R})} && {{{\sf Bialg}}({\mathcal{B}})}\ar[d]^{\underline{\Omega '}} \\ {{\sf Coalg}(\mathcal{A})} \ar[rr]^-{\underline{R}} && {{\sf Coalg}(\mathcal{B})} } \label{diag:bibar} \end{equation} In these diagrams, all vertical arrows are forgetful functors.
One could wonder whether \textit{any} appropriate adjunction $(L,R)$ is liftable. The answer is no: below we present an (apparently original) example of a lax monoidal functor $R$ between monoidal categories that has a left adjoint $L$, but for which $\overline{R}$ does not have a left adjoint. \begin{example}\label{ex:liftable} Let $k$ be a field and set $S:=\frac{k \left[ X\right]}{\left( X^{2}\right) }.$ Consider the functor $$R^{f}:{\sf Vec} ^{\textrm{f}}\rightarrow {\sf Vec} ^{\textrm{f}},\quad V\mapsto S{\otimesimes}_{k} V.$$ Note that the functor $R^{f}$ has a left adjoint $L^{f}$, where $ L^{f}\left( V\right) =S^{\ast }{\otimesimes}_{k} V.$ As $S$ is an algebra, the functor $R^{f}$ is lax monoidal with respect to \begin{gather*} \phi_2(X,Y):(S{\otimesimes}_{k}X)\otimesimes (S{\otimesimes}_{k}Y)\to S{\otimesimes}_{k}(X\otimesimes Y),\quad(s\otimesimes_kx)\otimesimes (t\otimesimes_ky)\mapsto st\otimesimes_k(x\otimesimes y)\\ \phi_0:k\to S{\otimesimes}_{k}k,\quad q\mapsto 1_S\otimesimes_k q \end{gather*} so that it induces a functor $\overline{R^{f}}:{\sf Alg} ^{f}\rightarrow {\sf Alg}^{f}$, where we used the notation ${\sf Alg} ^{f}={\sf Alg}\left( {\sf Vec} ^{f}\right) $ for the category of finite-dimensional algebras. \\Our aim is to check that $\overline{R^{f}}$ has no left adjoint. \\To this end, suppose that there is a left adjoint $ \overline{L^{f}}$ of $\overline{R^{f}}$ and denote by $\overline{\eta ^{f}}$ and $\overline{\epsilon ^{f}}$ the corresponding unit and counit. Consider the functor $R:{\sf Vec} \rightarrow {\sf Vec} :V\mapsto S{\otimesimes}_{k} V.$ This functor has a left adjoint $L$ and induces a functor $\overline{R}:\mathsf{ Alg}\rightarrow {\sf Alg}.$ By a result of Tambara (cf. \cite[Remark 1.5]{Tambara}), this functor has a left adjoint $\overline{L}=a(S,-)$ with unit and counit $\overline{\eta }$ and $ \overline{\epsilon }.$ By \cite[Example 1.2(ii)]{Tambara}, one has \begin{equation*} \overline{L}\left( S\right) =a\left( S,S\right) =k\left\{ X,Y\right\} /\left( X^{2},XY+YX\right) . \end{equation*} Notice that this algebra is not finite-dimensional. Consider the forgetful functor $\overline{\Lambda }:{\sf Alg} ^{f}\rightarrow {\sf Alg}$. Clearly $\overline{\Lambda }\circ \overline{ R^{f}}=\overline{R}\circ \overline{\Lambda }$. We will negate that $\overline{\Lambda }\overline{L^{f}} \left( S\right)$ is finite-dimensional by showing that the following map is injective when restricted to some infinite-dimensional subspace of its domain. $$ \zeta :=\left( {\overline{\epsilon }}_{\overline{\Lambda }\overline{L^{f}} }\circ \overline{L}\,\overline{\Lambda }\overline{\eta ^{f}}\right) _{S}:\overline{L}\,\overline{ \Lambda }\left( S\right) \rightarrow \overline{\Lambda }\overline{L^{f}} \left( S\right) .$$ It is easy to check that the obvious chain of isomorphisms ${\sf Alg}\left( \overline{\Lambda }\overline{L^{f}}(S),\overline{\Lambda } (B)\right) \cong {\sf Alg}^{f}\left( \overline{L^{f}}(S),B\right) \cong {\sf Alg}^{f}\left( S,\overline{R^{f}}(B)\right) \cong {\sf Alg}\left( \overline{\Lambda }(S),\overline{\Lambda }\overline{R^{f}}(B)\right) =\mathsf{Alg }\left( \overline{\Lambda }(S),\overline{R}\,\overline{\Lambda }(B)\right) \cong {\sf Alg}\left( \overline{L}\,\overline{\Lambda }(S),\overline{\Lambda } (B)\right)$ is exactly ${\sf Alg}\left( \zeta,\overline{\Lambda } (B)\right)$ so that the latter is invertible for every $B\in {\sf Alg}^{f}$. 
Since $ k\left[ \left[ Y\right] \right] $ is the inverse limit of $\overline{\Lambda }\left(\frac{k\left[ Y\right] }{\left( Y^{n}\right) }\right),$ we have that ${\sf Alg}\left( \zeta,k\left[ \left[ Y\right] \right]\right)\cong {\sf Alg}\left( \zeta,\underleftarrow{\lim} \overline{\Lambda }\left(\frac{k\left[ Y\right] }{\left( Y^{n}\right) }\right)\right)\cong \underleftarrow{\lim}{\sf Alg}\left( \zeta, \overline{\Lambda }\left(\frac{k\left[ Y\right] }{\left( Y^{n}\right) }\right)\right)$ is invertible too. We now construct the diagram \begin{equation*} \xymatrix{k\left[ Y\right]\ar@{^(->}[r]^\gamma\ar@{^(->}[dr]_\tau & \overline{L}\,\overline{\Lambda }(S)\ar[r]^\zeta\ar[d]_{\pi}& \overline{\Lambda }\overline{L^{f}}(S)\ar@{.>}[dl]^{\beta}\\ &k\left[ \left[ Y\right] \right] } \end{equation*} Consider the following maps \begin{itemize} \item $\pi :\overline{L}\overline{\Lambda}\left( S\right) =\frac{k\left\{ X,Y\right\} }{ \left( X^{2},XY+YX\right) }\longrightarrow k\left[ \left[ Y\right] \right]:\overline{X}\mapsto 0;\overline{Y}\mapsto \overline{Y}.$ \item $\gamma:k[Y]\hookrightarrow \frac{k\left\{ X,Y\right\} }{ \left( X^{2},XY+YX\right) }$ and $\tau=\pi\circ\gamma :k[Y]\hookrightarrow k[[Y]]$ are the canonical injections. \end{itemize} Since ${\sf Alg}\left( \zeta,k[[Y]]\right)$ is invertible, there is a unique $\beta \in {\sf Alg}\left( \overline{L}\,\overline{\Lambda}\left( S\right) ,K[[Y]]\right) $ such that $\beta \circ \zeta =\pi $. Now we compute $ \beta \circ \zeta \circ \gamma =\pi \circ \gamma = \tau .$ Thus, since $\tau$ is injective, so is $ \zeta \circ \gamma $ and we obtain that $\overline{\Lambda }\overline{L^{f}}\left( S\right) $ contains a copy of $k[Y]$, which implies that $\overline{\Lambda }\overline{L^{f}}\left( S\right) $ is not finite-dimensional. This is a contradiction. \end{example} \begin{remark} With respect to the ``liftability'' terminology, it seems opportune to mention some related work, carried out by Porst and Street in \cite{PS}. \\In Section 3 of loc. cit., the authors assume $\overline{R}$ to admit a left adjoint $\overline{L}$ and are concerned with investigating which of the properties of Sweedler's finite dual functor $(-)^{\circ}$ might be shared by $\overline{L}$. We note that they also use a notion of ``liftability'' (Definition 14 in loc. cit.) which does not coincide with the notion of a liftable pair of functors as in Definition \ref{def:liftable} here above. \\It is also instructive to remark that, in Section 3.3.2 of \cite{PS}, the authors study symmetric monoidal functors, obtaining the following result (item 1 of Proposition 33 in their article). Let $\mathcal{A}$ and $\mathcal{B}$ be symmetric monoidal closed categories and $R:\mathcal{A}\rightarrow \mathcal{B}$ be a symmetric lax monoidal functor having a left adjoint $L$ such that $\overline{R}$ has a left adjoint. Assuming that $\mathcal{B}$ is locally presentable, $\underline{\overline{L}}: {\sf Bialg}({\mathcal{B}})\to {\sf Bialg}({\mathcal{A}})$ has a right adjoint. \end{remark} \subsection{Liftability of the functor computing pre-duals} Having recalled the theory of liftable functors, starting from a pre-rigid braided monoidal category $\mathcal{C}$, which is not necessarily closed, we aim to construct a self-adjoint (on the right) functor $(-)^{*}: \mathcal{C}^{\mathrm{op} }\rightarrow \mathcal{C}$, in Proposition \ref{prop:prerigid}, and afterwards to provide sufficient conditions to obtain a liftable adjunction from it. 
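Concretely, self-adjointness on the right of $(-)^{*}$ amounts to a bijection \begin{equation*} {\rm Hom} _{\mathcal{C}}\left( Y,X^{\ast }\right) \cong {\rm Hom} _{\mathcal{C}}\left( X,Y^{\ast }\right) , \end{equation*} natural in $X$ and $Y$; for $\mathcal{C}={\sf Vec}$, for instance, both sides identify with the space of bilinear forms on $X\times Y$, recovering the familiar symmetry of the linear dual.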
An occurrence of this situation is the case $\mathcal{C}={\sf Vec}$, expounded in \cite[Section 3]{GV-OnTheDuality}. This example, however, is closed monoidal. The first part of the following result is \cite[Proposition 4.2]{GV-OnTheDuality}: there is however some difference in the proof, which is explained in Remark \ref{rem:GV4.2}. \begin{proposition}\label{prop:prerigid} When $\mathcal{C}$ is a pre-rigid braided monoidal category, the assignment $X\mapsto X^*$ induces a functor $R=(-)^{*}:\mathcal{C}^{\mathrm{op}}\rightarrow \mathcal{C}$ with a left adjoint $L=R^{\mathrm{op}}=(-)^{*}:\mathcal{C}\rightarrow \mathcal{C}^{\mathrm{op}}.$ Moreover there are $\phi _{2},\phi _{0}$ such that $\left( R,\phi _{2},\phi _{0}\right) $ is lax monoidal and the induced colax monoidal structure on $L$ by \eqref{form:PsiFromPhi2} and \eqref{form:PsiFromPhi0} is specifically $(\phi _{2}^{\mathrm{op}},\phi _{0}^{\mathrm{op}})$. \end{proposition} \begin{proof}Let $\mathcal{C}$ be a pre-rigid braided monoidal category. Since $\mathcal{C}$ has a braiding, we can apply Proposition \ref{pro:adjprig} to get a bijection ${\rm Hom} _{\mathcal{C}}\left( Y,X^{\ast }\right) \cong {\rm Hom} _{\mathcal{C}}\left( X,Y^{\ast }\right) $ natural both in $X$ and $Y$, whence the claimed adjunction. In order to write it explicitly, note that for every morphism $t:T\otimesimes X\rightarrow \mathds{I}$ there is a unique morphism $t^\dag:T\rightarrow X^{\ast }$ such that $t={\rm ev} _{X}\circ \left( t^\dag\otimesimes X\right) .$ Set \begin{align*} \eta_{X} &:=({{\rm ev}_{X}\circ c_{X,X^{\ast }}})^\dag : X\rightarrow X^{\ast \ast }, \\ j_{X} &:=({{\rm ev}_{X}\circ \left( c_{X^{\ast },X}\right) ^{-1}})^\dag :X\rightarrow X^{\ast \ast }. \end{align*} Equivalently \begin{eqnarray} {\rm ev}_{X}\circ c_{X,X^{\ast }} &=&{\rm ev}_{X^{\ast }}\circ \left( \eta_{X}\otimesimes X^{\ast }\right) , \label{form:etaX}\\ {\rm ev}_{X}\circ \left( c_{X^{\ast },X}\right) ^{-1} &=&{\rm ev} _{X^{\ast }}\circ \left( j_{X}\otimesimes X^{\ast }\right)\label{form:jX} . \end{eqnarray} By Lemma \ref{lem:contravariant} we have a functor $R=(-)^{*}:\mathcal{C}^{\mathrm{op}}\rightarrow \mathcal{C}$ defined by $R(X^{\mathrm{op} }):=X ^{\ast }$ and $R(f^{\mathrm{op} }):=f ^{\ast }$. Then $(L=R^{\mathrm{op} },R,\eta,\epsilon)$ is an adjunction, where we set ${\epsilon }_{X^{\mathrm{op} }}=\left(j_{X}\right) ^{\mathrm{op} }$. Define $\varphi _{2}\left( X,Y\right) :=({\left( {\rm ev} _{X}\otimesimes {\rm ev}_{Y}\right) \circ ( X^{\ast }\otimesimes \left( c_{X,Y^{\ast }}\right) ^{-1}\otimesimes Y) })^\dag :X^{\ast }\otimesimes Y^{\ast }\rightarrow \left( X\otimesimes Y\right) ^{\ast }$, i.e. the morphism that corresponds to $\left( {\rm ev} _{X}\otimesimes {\rm ev}_{Y}\right) \circ ( X^{\ast }\otimesimes \left( c_{X,Y^{\ast }}\right) ^{-1}\otimesimes Y)$ via the bijection \begin{equation}\label{mapvaphi} {\rm Hom} _{\mathcal{C}}\left( X^{\ast }\otimesimes Y^{\ast },\left( X\otimesimes Y\right) ^{\ast }\right) \overset{\cong }{\longrightarrow }{\rm Hom} _{ \mathcal{C}}\left( X^{\ast }\otimesimes Y^{\ast }\otimesimes X\otimesimes Y,\mathds{I} \right) . \end{equation} Define $\phi _{0}:\mathds{I}\rightarrow \mathds{I}^{\ast }$ by $\phi _{0}=(m_{\mathds{I}})^\dag$, i.e. such that ${\rm ev}_{\mathds{I}}\circ \left( \phi _{0}\otimesimes \mathds{I}\right) =m_{\mathds{I}}$, and define $\phi _{2}\left( X^{\mathrm{op}},Y^{\mathrm{op}}\right) :=\varphi _{2}\left( X,Y\right)$. It is straightforward to check that $\left( R,\phi _{2},\phi _{0}\right) $ is lax monoidal. 
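For instance, in the case $\mathcal{C}={\sf Vec}$ with the flip braiding, $\varphi _{2}\left( X,Y\right) $ is the canonical map $X^{\ast }\otimes Y^{\ast }\rightarrow \left( X\otimes Y\right) ^{\ast }$ sending $f\otimes g$ to the functional $x\otimes y\mapsto f(x)g(y)$, while $\phi _{0}$ is the isomorphism $k\cong k^{\ast }$ determined by the multiplication of $k$.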
Now by \eqref{form:PsiFromPhi2} and \eqref{form:PsiFromPhi0}, we know that $\left( L,\psi _{2},\psi _{0}\right) $ is colax monoidal where $\psi _{2}\left( X,Y\right) =\epsilon _{\left( LX\otimesimes LY\right) }\circ L\phi _{2}\left( LX,LY\right) \circ L\left( \eta _{X}\otimesimes \eta _{Y}\right) $ and $\psi _{0}=\epsilon _{ \mathds{I}^{\mathrm{op}}}\circ L \phi _{0} .$ We compute \begin{eqnarray*} \psi _{2}\left( X,Y\right) ^{\mathrm{op}} &=&\left[ L\left( \eta _{X}\otimesimes \eta _{Y}\right) \right] ^{\mathrm{op}}\circ \left[ L\phi _{2}\left( LX,LY\right) \right] ^{\mathrm{op}}\circ \left[ \epsilon _{\left( LX\otimesimes LY\right) }\right] ^{\mathrm{op}} \\ &=&\left( \eta_{X}\otimesimes \eta_{Y}\right)^* \circ (\varphi _{2}\left( X^*,Y^*\right))^* \circ j_{X^*\otimesimes Y^*} \overset{(*)}{=}\varphi _{2}\left( X,Y\right) =\phi _{2}\left( X^{\mathrm{op}},Y^{\mathrm{op}}\right), \end{eqnarray*} where $(*)$ can be checked by applying the bijection (\ref{mapvaphi}) on both sides. Finally, $ \psi _{0} ^{\mathrm{op}} =\left( L \phi _{0} \right) ^{\mathrm{op}}\circ \left( \epsilon _{\mathds{I}^{\mathrm{op}}}\right) ^{\mathrm{op}}=\left( \phi _{0}\right)^* \circ j_{\mathds{I}}=\phi _{0} $, where the last equality follows by applying the bijection ${\rm Hom} _{\mathcal{C}}\left( \mathds{I},\mathds{I}^*\right)\to {\rm Hom} _{\mathcal{C}}\left( \mathds{I}\otimesimes \mathds{I},\mathds{I}\right)$, $u\mapsto {\rm ev}_{\mathds{I}}\circ \left( u\otimesimes \mathds{I}\right)$, on both sides. \end{proof} \begin{remark}\label{rem:GV4.2} \cite[Proposition 4.2]{GV-OnTheDuality} asserts that if $\mathcal{A}$ is a pre-rigid braided monoidal category, then $(-)^*: \mathcal{A}^{\mathrm{op}}\rightarrow \mathcal{A}$ is a self-adjoint covariant functor. Although the assertion is true for general pre-rigid braided monoidal categories (as shown in the above Proposition \ref{prop:prerigid}), the proof is erroneously communicated in loc. cit.. Indeed, the last sentence of the argument appearing in the printed version of the above-cited proposition only works in case the braiding is moreover symmetric (notice this does not harm the conclusions of the work carried out in loc.cit., as all involved braidings there {\it are} symmetric) as, in general, the unit and counit are given by different underlying morphisms, see above (note that a functor $F:\mathcal{A}^{\mathrm{op}}\to \mathcal{A}$ such that $F^\mathrm{op}\dashv F$ where the unit and the counit are given by the same underlying morphism is sometimes called a ``self-dual adjunction'' in the literature). The requirement that the braiding is symmetric has been added in \cite[Proposition 4.2]{GV-OnTheDuality-rev}. We point out that, even if $\mathcal{C}$ is rigid, in general we cannot conclude that the unit and the counit are given by the same underlying morphism unless ${\rm ev}_{X}\circ c_{X,X^{\ast }}\circ c_{X^{\ast },X} ={\rm ev}_{X}$ for every object $X$, in view of the equalities \eqref{form:etaX} and \eqref{form:jX}. \end{remark} In Proposition \ref{prop:prerigid}, the functor $R=(-)^{*}:\mathcal{C}^{\mathrm{op}}\rightarrow \mathcal{C}$ is proved to be self-adjoint on the right. Notice that the unit and counit do not share the same underlying morphism here. Moreover, the induced colax monoidal structure on $L=R^\mathrm{op}$ by \eqref{form:PsiFromPhi2} and \eqref{form:PsiFromPhi0} is specifically $(\phi _{2}^{\mathrm{op}},\phi _{0}^{\mathrm{op}})$, where $(\phi _{2},\phi _{0})$ is the lax monoidal structure of $R$. 
The last property seems to be a particular feature of $(-)^{*}$ as we cannot prove in general that it holds true for an arbitrary functor $R$ which is self-adjoint on the right. Next aim is to show that, when it holds true, then $(L,R)$ is liftable whenever $\overline{R}$ has a left adjoint. Of course this will be applied to examine whether the pair $((-)^{*\mathrm{op}}, (-)^*)$ is liftable. To this aim recall that an adjunction $(L,R,\eta,\varepsilon)$ gives rise to an adjunction $(R^\mathrm{op} ,L^\mathrm{op} ,\varepsilon^\mathrm{op} ,\eta^\mathrm{op} )$. \begin{proposition} \label{lem:Barop} For a monoidal category $\mathcal{C}$, suppose a lax monoidal functor $\left( R,\phi _{2},\phi _{0}\right) :\mathcal{C}^{\mathrm{op}}\to \mathcal{C}$ has a left adjoint $L=R^{\mathrm{op}}$. If the induced colax monoidal structure on $L$ by \eqref{form:PsiFromPhi2} and \eqref{form:PsiFromPhi0} is specifically $(\phi _{2}^{\mathrm{op}},\phi _{0}^{\mathrm{op}})$, then $\overline{R}=\left( \underline{L}\right) ^{\mathrm{op}}$. Moreover, if $\overline{R}$ has a left adjoint, then $\left( L,R\right) $ is liftable. \end{proposition} \begin{proof} We check that $\left( \underline{L}\right) ^{\mathrm{op}}=\overline{R}$. First observe that the domain and codomain of $\left( \underline{L}\right) ^{ \mathrm{op}}$ are respectively \begin{equation*} \left( {\sf Coalg} (\mathcal{C})\right) ^{\mathrm{op}}={\sf Alg}\left(\mathcal{C}^{\mathrm{op}}\right) \qquad \text{and}\qquad \left( {\sf Coalg}\left( \mathcal{C}^{\mathrm{op}}\right) \right) ^{\mathrm{op}}={\sf Alg}\left( \mathcal{C}\right) \end{equation*} so that the domain and codomain of $\left( \underline{L}\right) ^{\mathrm{op} }$ and $\overline{R}$ are the same. Next, by means of the equality $\left(L,\psi _{2},\psi _{0}\right)=\left( R^{\mathrm{op}},\phi _{2}^{\mathrm{op}},\phi _{0}^{\mathrm{op}}\right)$, one checks, in a straightforward fashion, that $\left( \underline{L}\right) ^{\mathrm{op}}$ and $\overline{R}$ coincide on objects. They also agree on morphisms (whence $\left( \underline{L}\right) ^{\mathrm{op}}=\overline{R}$) by the following computation $$\Omega _{\mathcal{C}}\circ \left( \underline{L}\right) ^{\mathrm{op}}=\left( \mho _{\mathcal{C}^{\mathrm{op} }}\right) ^\mathrm{op} \circ \left( \underline{L}\right) ^{\mathrm{op}} =\left( \mho _{\mathcal{C}^{\mathrm{op}}}\circ \underline{L} \right)^\mathrm{op}=\left( L\circ \mho _{\mathcal{C}} \right)^\mathrm{op} =L^\mathrm{op}\circ \left(\mho _{\mathcal{C}} \right)^\mathrm{op} =R\circ \Omega _{\mathcal{C}^\mathrm{op}}=\Omega _{\mathcal{C}}\circ \overline{R} $$ together with the faithfulness of $\Omega _{\mathcal{C}}$. We now prove the final sentence of the statement. Assume $\overline{R}$ has a left adjoint $\overline{L}$. Thus we have the adjunction $((\overline{R})^{\mathrm{op}},(\overline{L})^{\mathrm{op}})$. Now, by the first part, we have that $\left( \underline{L}\right) ^{\mathrm{op}}=\overline{R}$ and hence $\underline{L}=(\overline{R})^{\mathrm{op}}$. Thus $\underline{L}$ has a right adjoint and hence $\left( L,R\right) $ is liftable. \end{proof} \begin{corollary}\label{coro:Isar} Let $\mathcal{C}$ be a pre-rigid braided monoidal category. If $\overline{(-)^{*}}:{\sf Alg}(\mathcal{C}^{\mathrm{op}})\to {\sf Alg}(\mathcal{C})$ has a left adjoint, then $\left( (-)^{*}:\mathcal{C}\rightarrow \mathcal{C}^{\mathrm{op}},(-)^{*}:\mathcal{C}^{\mathrm{op}}\rightarrow \mathcal{C}\right)$ is a liftable pair of adjoint functors. 
\end{corollary} \begin{proof} It follows by Proposition \ref{prop:prerigid} and Proposition \ref{lem:Barop}. \end{proof} Recall that an adjunction $(L,R)$ between two lax monoidal functors $L$ and $R$ is called a \emph{monoidal adjunction} whenever the unit and the counit of the adjunction are monoidal natural transformations. The following result allows to transfer the condition required in Corollary \ref{coro:Isar} to have liftability from a pre-rigid braided monoidal category $\mathcal{M}$ to another one $\mathcal{N}$ whenever these categories are connected by a suitable monoidal adjunction $L\dashv R:\mathcal{M}\to\mathcal{N}$. \begin{proposition} \label{pro:monadj}Let $\mathcal{M}$ and $\mathcal{N}$ be braided monoidal categories. Assume that $\mathcal{M}$ is pre-rigid and that there is a monoidal adjunction $L\dashv R:\mathcal{M}\to\mathcal{N}$ with $L$ and $R$ both strict monoidal and $L$ braided monoidal. Then $\mathcal{N}$ is pre-rigid, with pre-dual $N^*=R((LN)^*)$, for every object $N$ in $\mathcal{N}$. If the assumption in Corollary \ref{coro:Isar} holds for $\mathcal{M}$, then the analogous conclusion holds for $\mathcal{N}$. \end{proposition} \begin{proof}Since $L$ is strict monoidal, it is in particular strong monoidal. Moreover, since $L$ and $R$ are strict monoidal we have $\mathds{I}_{\mathcal{N}}= R(\mathds{I}_{\mathcal{M}})= RL(\mathds{I}_{\mathcal{N}})$. Thus we are in the setting of Proposition \ref{pro:prigadj} so that $\mathcal{N}$ is pre-rigid, with pre-dual $N^*=R((LN)^*)$, for every object $N$ in $\mathcal{N}$. Now, consider the adjunctions \begin{align*} \left( L_{1},R_{1}\right) &=\left( (-)^{\ast }:\mathcal{ M}\rightarrow \mathcal{M}^{\mathrm{op} },(-)^{\ast }:\mathcal{M}^{\mathrm{op} }\rightarrow \mathcal{M}\right) \\ \left( L_{2},R_{2}\right) &=\left( (-)^{\ast }:\mathcal{N}\rightarrow \mathcal{N} ^{\mathrm{op} },(-)^{\ast }:\mathcal{N} ^{\mathrm{op} }\rightarrow \mathcal{N}\right) . \end{align*} as in the following diagram \begin{equation*} \xymatrixcolsep{1.5cm}\xymatrix{{\mathcal{M}}^\mathrm{op}\ar@<.5ex>[r]^{R^\mathrm{op}}\ar@<.5ex>[d]^{R_1} & \mathcal{N}^\mathrm{op} \ar@<.5ex>[d]^{R_2}\ar@<.5ex>[l]^{L^\mathrm{op}}\\ \mathcal{M}\ar@<.5ex>[u]^{L_1}\ar@<.5ex>[r]^{R}&\mathcal{N}\ar@<.5ex>[u]^{L_2}\ar@<.5ex>[l]^{L}} \end{equation*} By Proposition \ref{prop:prerigid}, we have the functors $\overline{R_{1}}={\sf Alg}(R_1):{ {\sf Alg}}(\mathcal{M}^{\mathrm{op} })\rightarrow {{\sf Alg}}(\mathcal{M })$ and $\overline{ R_{2}}={\sf Alg}(R_2):{{\sf Alg}}(\mathcal{N} ^{\mathrm{op} })\rightarrow {{\sf Alg}}(\mathcal{N}).$ Assume that $\overline{R_{1}}$ has a left adjoint, say $\overline{L_{1}}$, and let us check that the functor $\overline{ R_{2}}$ admits a left adjoint too. Since the functors $L$ and $R$ are strict monoidal, they are in particular lax monoidal whence they induce $\overline{L}={\sf Alg}(L):{{\sf Alg}}( \mathcal{N})\rightarrow {{\sf Alg}}(\mathcal{M}),$ $ \overline{R}={\sf Alg}(R):{{\sf Alg}}(\mathcal{M})\rightarrow {{\sf Alg}}(\mathcal{N})$ and by \cite[Proposition 3.91]{Aguiar-Mahajan}, we have that $\overline{L}\dashv \overline{R}$. Since the functors $L^{\mathrm{op} }$ and $R^{\mathrm{ op}}$ are also strict monoidal, we have the functors $\overline{L^\mathrm{op}}={\sf Alg}(L^\mathrm{op}):{{\sf Alg}}(\mathcal{N} ^{\mathrm{op} })\rightarrow {{\sf Alg}}(\mathcal{M}^{\mathrm{op} }),$ $\overline{R^{ \mathrm{op} }}={\sf Alg}(R^\mathrm{op}):{{\sf Alg}}(\mathcal{M}^{\mathrm{op} })\rightarrow {\mathsf{ Alg}}(\mathcal{N} ^{\mathrm{op} })$. 
Note the $ L\dashv R$ implies that ${R^{\mathrm{op} }}\dashv {L^{ \mathrm{op} }}$ and hence $\overline{R^{\mathrm{op} }}\dashv \overline{L^{ \mathrm{op} }}$. As a consequence $\overline{R\circ R_{1}\circ L^{\mathrm{op} }}=\overline{R}\circ \overline{R_{1}}\circ \overline{L^{\mathrm{op} }}$ has $\overline{R^{\mathrm{op} }}\circ \overline{ L_{1}}\circ \overline{L}$ as a left adjoint. It remains to check that $\overline{R_{2}}=\overline{R\circ R_{1}\circ L^{\mathrm{op} }}$. To this aim we have to check that $R_{2}$ and $R\circ R_{1}\circ L^{\mathrm{op} }$ are the same as monoidal functor. For $N$ an object in $\mathcal{N}$, we have $R R_{1} L^{\mathrm{op} }(N^\mathrm{op})=RR_{1} ((LN)^\mathrm{op})=R((LN)^*)=N^*=R_2(N^\mathrm{op})$. The same holds on morphisms so that $R \circ R_{1}\circ L^{\mathrm{op} }=R_2.$ Using the fact that the unit and the counit of the adjunction $(L,R)$ are monoidal natural transformations and that $L$ is braided monoidal, one easily checks that $R \circ R_{1}\circ L^{\mathrm{op} }$ and $R_2$ have the same monoidal structure. \begin{invisible} Let us check that $RR_{1}L^{\mathrm{op} }=R_{2}$ as monoidal functors i.e. that $\phi _{2}^{RR_{1}L^{\mathrm{op} }}=\phi _{2}^{R_{2}}$ and that $\phi _{0}^{RR_{1}L^{\mathrm{op} }}=\phi _{0}^{R_{2}}.$ If $L:\mathcal{N}\rightarrow \mathcal{M}$ is strict monoidal so is $L^{ \mathrm{op} }:\mathcal{N}^{\mathrm{op} }\rightarrow \mathcal{M}^{\mathrm{op} }$ as \begin{eqnarray*} L^{\mathrm{op} }X^{\mathrm{op} }\otimesimes L^{\mathrm{op} }Y^{\mathrm{op} } &=&\left( LX\right) ^{\mathrm{op} }\otimesimes \left( LY\right) ^{\mathrm{op} }=\left( LX\right) ^{\mathrm{op} }\otimesimes \left( LY\right) ^{\mathrm{op} }=\left( LX\otimesimes LY\right) ^{\mathrm{op} }=\left( L\left( X\otimesimes Y\right) \right) ^{\mathrm{op} } \\ &=&L^{\mathrm{op} }\left( X\otimesimes Y\right) ^{\mathrm{op} }=L^{\mathrm{op} }\left( X^{\mathrm{op} }\otimesimes Y^{\mathrm{op} }\right) , \\ L^{\mathrm{op} }I_{\mathcal{N}}^{\mathrm{op} } &=&\left( LI_{\mathcal{N} }\right) ^{\mathrm{op} }=I_{\mathcal{M}}^{\mathrm{op} } \end{eqnarray*} Since also $R$ is strict monoidal, The monoidal structure of $RR_{1}L^{ \mathrm{op} }$ is given by \begin{eqnarray*} \phi _{2}^{RR_{1}L^{\mathrm{op} }} &:&=\left( RR_{1}L^{\mathrm{op} }X^{\mathrm{ op}}\otimesimes RR_{1}L^{\mathrm{op} }Y^{\mathrm{op} }=R\left( R_{1}L^{\mathrm{op} }X^{\mathrm{op} }\otimesimes R_{1}L^{\mathrm{op} }Y^{\mathrm{op} }\right) \overset{ R\phi _{2}^{R_{1}}\left( L^{\mathrm{op} }X^{\mathrm{op} },L^{\mathrm{op} }Y^{ \mathrm{op} }\right) }{\longrightarrow }RR_{1}\left( L^{\mathrm{op} }X^{ \mathrm{op} }\otimesimes L^{\mathrm{op} }Y^{\mathrm{op} }\right) =RR_{1}L^{\mathrm{ op}}\left( X^{\mathrm{op} }\otimesimes Y^{\mathrm{op} }\right) \right) , \\ \phi _{0}^{RR_{1}L^{\mathrm{op} }} &:&=\left( I_{\mathcal{N}}=RI_{\mathcal{M}} \overset{R\phi _{0}^{R_{1}}}{\longrightarrow }RR_{1}I_{\mathcal{M}}^{\mathrm{ op}}=RR_{1}L^{\mathrm{op} }I_{\mathcal{N}}^{\mathrm{op} }\right) . \end{eqnarray*} Since $R_{1}=\left( -\right) ^{\ast }:\mathcal{M}^{\mathrm{op} }\rightarrow \mathcal{M}$, we can write explicitly $\phi _{2}^{R_{1}}\ $and $\phi _{0}^{R_{1}}$. Explicitly \begin{equation*} \phi _{2}^{R_{1}}\left( A^{\mathrm{op} },B^{\mathrm{op} }\right) =\varphi _{2}^{R_{1}}\left( A,B\right) =\left[ \left( \mathrm{ev}_{A}\otimesimes \mathrm{ ev}_{B}\right) \circ \left( A^{\ast }\otimesimes \left( c_{A,B^{\ast }}\right) ^{-1}\otimesimes B\right) \right] ^{\dag } \end{equation*} i.e. 
$\varphi _{2}^{R_{1}}\left( A,B\right) $ is uniquely determined by the equality \begin{equation*} \mathrm{ev}_{A\otimesimes B}\circ \left( \varphi _{2}^{R_{1}}\left( A,B\right) \otimesimes B\otimesimes A\right) =\left( \mathrm{ev}_{A}\otimesimes \mathrm{ev} _{B}\right) \circ \left( A^{\ast }\otimesimes \left( c_{A,B^{\ast }}\right) ^{-1}\otimesimes B\right) . \end{equation*} In particular $\phi _{2}^{R_{1}}\left( L^{\mathrm{op} }X^{\mathrm{op} },L^{ \mathrm{op} }Y^{\mathrm{op} }\right) =\phi _{2}^{R_{1}}\left( \left( LX\right) ^{\mathrm{op} },\left( LY\right) ^{\mathrm{op} }\right) =\varphi _{2}^{R_{1}}\left( LX,LY\right) $ is uniquely determined by the equality \begin{equation*} \mathrm{ev}_{LX\otimesimes LY}\circ \left( \varphi _{2}^{R_{1}}\left( LX,LY\right) \otimesimes LY\otimesimes LY\right) =\left( \mathrm{ev}_{LX}\otimesimes \mathrm{ev}_{LY}\right) \circ \left( \left( LX\right) ^{\ast }\otimesimes \left( c_{LX,\left( LY\right) ^{\ast }}\right) ^{-1}\otimesimes LY\right) . \end{equation*} On the other hand since $R_{2}=\left( -\right) ^{\ast }:\mathcal{N}^{\mathrm{ op}}\rightarrow \mathcal{N}$, we can write explicitly $\phi _{2}^{R_{2}}\ $ and $\phi _{0}^{R_{2}}$. Explicitly $\phi _{0}^{R_{2}}=\left( m_{I}\right) ^{\dag }$ while \begin{equation*} \phi _{2}^{R_{2}}\left( X^{\mathrm{op} },Y^{\mathrm{op} }\right) =\varphi _{2}^{R_{2}}\left( X,Y\right) =\left[ \left( \mathrm{ev}_{X}\otimesimes \mathrm{ ev}_{Y}\right) \circ \left( X^{\ast }\otimesimes \left( c_{X,Y^{\ast }}\right) ^{-1}\otimesimes Y\right) \right] ^{\dag } \end{equation*} i.e. $\varphi _{2}^{R_{2}}\left( X,Y\right) $ is uniquely determined by the equality \begin{equation*} \mathrm{ev}_{X\otimesimes Y}\circ \left( \varphi _{2}^{R_{2}}\left( X,Y\right) \otimesimes X\otimesimes Y\right) =\left( \mathrm{ev}_{X}\otimesimes \mathrm{ev} _{Y}\right) \circ \left( X^{\ast }\otimesimes \left( c_{X,Y^{\ast }}\right) ^{-1}\otimesimes Y\right) . \end{equation*} We have to check that $\phi _{2}^{RR_{1}L^{\mathrm{op} }}=\phi _{2}^{R_{2}}$ i.e. that $R\phi _{2}^{R_{1}}\left( L^{\mathrm{op} }X^{\mathrm{op} },L^{ \mathrm{op} }Y^{\mathrm{op} }\right) =\phi _{2}^{R_{2}}\left( X^{\mathrm{op} },Y^{\mathrm{op} }\right) $ i.e. that $R\varphi _{2}^{R_{1}}\left( LX,LY\right) =\varphi _{2}^{R_{2}}\left( X,Y\right) .$ Thus we have to check that \begin{equation*} \mathrm{ev}_{X\otimesimes Y}\circ \left( R\varphi _{2}^{R_{1}}\left( LX,LY\right) \otimesimes X\otimesimes Y\right) =\left( \mathrm{ev}_{X}\otimesimes \mathrm{ev}_{Y}\right) \circ \left( X^{\ast }\otimesimes \left( c_{X,Y^{\ast }}\right) ^{-1}\otimesimes Y\right) . \end{equation*} By construction, for $X\in \mathcal{N}$ we have $X^{\ast }:=R\left( \left( LX\right) ^{\ast }\right) $ and $\mathrm{ev}_{X}$ is uniquely determined (easly checked) by \begin{equation*} X^{\ast }\otimesimes X\overset{\eta _{X^{\ast }\otimesimes X}^{R}}{\longrightarrow } RL\left( X^{\ast }\otimesimes X\right) \overset{R\psi _{2}^{L}\left( X^{\ast },X\right) }{\longrightarrow }R\left( L\left( X^{\ast }\right) \otimesimes LX\right) =R\left( LR\left( \left( LX\right) ^{\ast }\right) \otimesimes LX\right) \overset{R\left( \epsilon _{\left( LX\right) ^{\ast }}^{R}\otimesimes LX\right) }{\longrightarrow }R\left( \left( LX\right) ^{\ast }\otimesimes LX\right) \overset{R\left( \mathrm{ev}_{LX}\right) }{\longrightarrow }RI_{ \mathcal{M}}\overset{R\left( \psi _{0}^{L}\right) ^{-1}}{\longrightarrow } RLI_{\mathcal{N}}\overset{\cong }{\longrightarrow }I_{\mathcal{N}}. 
\end{equation*} Since $L$ and $R$ are strict monoidal this reduces to \begin{equation*} X^{\ast }\otimesimes X\overset{\eta _{X^{\ast }\otimesimes X}^{R}}{\longrightarrow } RL\left( X^{\ast }\otimesimes X\right) =R\left( L\left( X^{\ast }\right) \otimesimes LX\right) =R\left( LR\left( \left( LX\right) ^{\ast }\right) \otimesimes LX\right) \overset{R\left( \epsilon _{\left( LX\right) ^{\ast }}^{R}\otimesimes LX\right) }{\longrightarrow }R\left( \left( LX\right) ^{\ast }\otimesimes LX\right) \overset{R\left( \mathrm{ev}_{LX}\right) }{\longrightarrow }RI_{ \mathcal{M}}=I_{\mathcal{N}} \end{equation*} so that $\mathrm{ev}_{X}=R\left( \mathrm{ev}_{LX}\right) \circ R\left( \epsilon _{\left( LX\right) ^{\ast }}^{R}\otimesimes LX\right) \circ \eta _{X^{\ast }\otimesimes X}^{R}$ for very $X$ in $\mathcal{N}$. Via the adjunction this equality becomes $\epsilon _{I_{\mathcal{M}}}^{R}\circ L\left( \mathrm{ ev}_{X}\right) =\mathrm{ev}_{LX}\circ \left( \epsilon _{\left( LX\right) ^{\ast }}^{R}\otimesimes LX\right) .$ Note that, $\left( L,R,\eta ,\epsilon \right) $ is a monoidal natural transformation, the two frunctors $L$ and $R$ are lax monoidal and $\epsilon :LR\rightarrow \mathrm{Id}$ and $\eta :\mathrm{Id}\rightarrow RL$ are monoidal natural transformations. Since $\epsilon :LR\rightarrow \mathrm{Id}$ and $\eta :\mathrm{Id}\rightarrow RL$ are a monoidal natural transformation between strict monoidal functors, we have $\epsilon _{A\otimesimes B}^{R}=\epsilon _{A}^{R}\otimesimes \epsilon _{B}^{R}$, $\epsilon _{I_{\mathcal{M }}}^{R}=I_{\mathcal{M}}$, $\eta _{A\otimesimes B}^{R}=\eta _{A}^{R}\otimesimes \eta _{B}^{R}$ and $\eta _{I_{\mathcal{N}}}^{R}=I_{\mathcal{N}}.$ Since $L$ is braided monoidal, we also have $L\left( c_{A,B}\right) =c_{LA,LB}.$ Thus we compute \begin{eqnarray*} &&\left( \mathrm{ev}_{X}\otimesimes \mathrm{ev}_{Y}\right) \circ \left( X^{\ast }\otimesimes \left( c_{X,Y^{\ast }}\right) ^{-1}\otimesimes Y\right) \\ &=&\left( R\left( \mathrm{ev}_{LX}\right) \otimesimes R\left( \mathrm{ev} _{LY}\right) \right) \circ \left( R\left( \epsilon _{\left( LX\right) ^{\ast }}^{R}\otimesimes LX\right) \otimesimes R\left( \epsilon _{\left( LY\right) ^{\ast }}^{R}\otimesimes LY\right) \right) \circ \left( \eta _{X^{\ast }\otimesimes X}^{R}\otimesimes \eta _{Y^{\ast }\otimesimes Y}^{R}\right) \circ \left( X^{\ast }\otimesimes \left( c_{X,Y^{\ast }}\right) ^{-1}\otimesimes Y\right) \\ &=&R\left[ \left( \mathrm{ev}_{LX}\otimesimes \mathrm{ev}_{LY}\right) \circ \left( \epsilon _{\left( LX\right) ^{\ast }}^{R}\otimesimes LX\otimesimes \epsilon _{\left( LY\right) ^{\ast }}^{R}\otimesimes LY\right) \right] \circ \left( \eta _{X^{\ast }\otimesimes X}^{R}\otimesimes \eta _{Y^{\ast }\otimesimes Y}^{R}\right) \circ \left( X^{\ast }\otimesimes \left( c_{X,Y^{\ast }}\right) ^{-1}\otimesimes Y\right) \\ \text{if }\eta _{A\otimesimes B}^{R} &=&\eta _{A}^{R}\otimesimes \eta _{B}^{R} \\ &=&R\left[ \left( \mathrm{ev}_{LX}\otimesimes \mathrm{ev}_{LY}\right) \circ \left( \epsilon _{\left( LX\right) ^{\ast }}^{R}\otimesimes LX\otimesimes \epsilon _{\left( LY\right) ^{\ast }}^{R}\otimesimes LY\right) \right] \circ \eta _{X^{\ast }\otimesimes X\otimesimes Y^{\ast }\otimesimes Y}^{R}\circ \left( X^{\ast }\otimesimes \left( c_{X,Y^{\ast }}\right) ^{-1}\otimesimes Y\right) \\ &=&R\left[ \left( \mathrm{ev}_{LX}\otimesimes \mathrm{ev}_{LY}\right) \circ \left( \epsilon _{\left( LX\right) ^{\ast }}^{R}\otimesimes LX\otimesimes \epsilon _{\left( LY\right) ^{\ast }}^{R}\otimesimes LY\right) \right] \circ RL\left( X^{\ast 
}\otimesimes \left( c_{X,Y^{\ast }}\right) ^{-1}\otimesimes Y\right) \circ \eta _{X^{\ast }\otimesimes Y^{\ast }\otimesimes X\otimesimes Y}^{R} \\ &=&R\left[ \left( \mathrm{ev}_{LX}\otimesimes \mathrm{ev}_{LY}\right) \circ \left( \epsilon _{\left( LX\right) ^{\ast }}^{R}\otimesimes LX\otimesimes \epsilon _{\left( LY\right) ^{\ast }}^{R}\otimesimes LY\right) \circ \left( L\left( X^{\ast }\right) \otimesimes L\left( c_{X,Y^{\ast }}\right) ^{-1}\otimesimes LY\right) \right] \circ \eta _{X^{\ast }\otimesimes Y^{\ast }\otimesimes X\otimesimes Y}^{R} \\ \text{if }L\left( c_{A,B}\right) &=&c_{LA,LB} \\ &=&R\left[ \left( \mathrm{ev}_{LX}\otimesimes \mathrm{ev}_{LY}\right) \circ \left( \epsilon _{\left( LX\right) ^{\ast }}^{R}\otimesimes LX\otimesimes \epsilon _{\left( LY\right) ^{\ast }}^{R}\otimesimes LY\right) \circ \left( L\left( X^{\ast }\right) \otimesimes \left( c_{LX,L\left( Y^{\ast }\right) }\right) ^{-1}\otimesimes LY\right) \right] \circ \eta _{X^{\ast }\otimesimes Y^{\ast }\otimesimes X\otimesimes Y}^{R} \\ &=&R\left[ \left( \mathrm{ev}_{LX}\otimesimes \mathrm{ev}_{LY}\right) \circ \left( \left( LX\right) ^{\ast }\otimesimes \left( c_{LX,\left( LY\right) ^{\ast }}\right) ^{-1}\otimesimes LY\right) \circ \left( \epsilon _{\left( LX\right) ^{\ast }}^{R}\otimesimes \epsilon _{\left( LY\right) ^{\ast }}^{R}\otimesimes LX\otimesimes LY\right) \right] \circ \eta _{X^{\ast }\otimesimes Y^{\ast }\otimesimes X\otimesimes Y}^{R} \\ \text{if }\epsilon _{A\otimesimes B}^{R} &=&\epsilon _{A}^{R}\otimesimes \epsilon _{B}^{R} \\ &=&R\left[ \left( \mathrm{ev}_{LX}\otimesimes \mathrm{ev}_{LY}\right) \circ \left( \left( LX\right) ^{\ast }\otimesimes \left( c_{LX,\left( LY\right) ^{\ast }}\right) ^{-1}\otimesimes LY\right) \circ \left( \epsilon _{\left( LX\right) ^{\ast }\otimesimes \left( LY\right) ^{\ast }}^{R}\otimesimes LX\otimesimes LY\right) \right] \circ \eta _{X^{\ast }\otimesimes Y^{\ast }\otimesimes X\otimesimes Y}^{R} \end{eqnarray*} which equals \begin{eqnarray*} &&\mathrm{ev}_{X\otimesimes Y}\circ \left( R\varphi _{2}^{R_{1}}\left( LX,LY\right) \otimesimes X\otimesimes Y\right) \\ &=&R\left( \mathrm{ev}_{L\left( X\otimesimes Y\right) }\right) \circ R\left( \epsilon _{\left( L\left( X\otimesimes Y\right) \right) ^{\ast }}^{R}\otimesimes L\left( X\otimesimes Y\right) \right) \circ \eta _{\left( X\otimesimes Y\right) ^{\ast }\otimesimes X\otimesimes Y}^{R}\circ \left( R\varphi _{2}^{R_{1}}\left( LX,LY\right) \otimesimes X\otimesimes Y\right) \\ &=&R\left( \mathrm{ev}_{LX\otimesimes LY}\right) \circ R\left( \epsilon _{\left( LX\otimesimes LY\right) ^{\ast }}^{R}\otimesimes LX\otimesimes LY\right) \circ RL\left( R\varphi _{2}^{R_{1}}\left( LX,LY\right) \otimesimes X\otimesimes Y\right) \circ \eta _{X^{\ast }\otimesimes Y^{\ast }\otimesimes X\otimesimes Y}^{R} \\ &=&R\left[ \left( \mathrm{ev}_{LX\otimesimes LY}\right) \circ \left( \epsilon _{\left( LX\otimesimes LY\right) ^{\ast }}^{R}\otimesimes LX\otimesimes LY\right) \circ \left( LR\varphi _{2}^{R_{1}}\left( LX,LY\right) \otimesimes LX\otimesimes LY\right) \right] \circ \eta _{X^{\ast }\otimesimes Y^{\ast }\otimesimes X\otimesimes Y}^{R} \\ &=&R\left[ \mathrm{ev}_{LX\otimesimes LY}\circ \left( \varphi _{2}^{R_{1}}\left( LX,LY\right) \otimesimes LX\otimesimes LY\right) \circ \left( \epsilon _{\left( LX\right) ^{\ast }\otimesimes \left( LY\right) ^{\ast }}^{R}\otimesimes LX\otimesimes LY\right) \right] \circ \eta _{X^{\ast }\otimesimes Y^{\ast }\otimesimes X\otimesimes Y}^{R} \\ &=&R\left[ \left( \mathrm{ev}_{LX}\otimesimes \mathrm{ev}_{LY}\right) 
\circ \left( \left( LX\right) ^{\ast }\otimesimes \left( c_{LX,\left( LY\right) ^{\ast }}\right) ^{-1}\otimesimes LY\right) \circ \left( \epsilon _{\left( LX\right) ^{\ast }\otimesimes \left( LY\right) ^{\ast }}^{R}\otimesimes LX\otimesimes LY\right) \right] \circ \eta _{X^{\ast }\otimesimes Y^{\ast }\otimesimes X\otimesimes Y}^{R}. \end{eqnarray*} It remains to prove that $\phi _{0}^{RR_{1}L^{\mathrm{op} }}=\phi _{0}^{R_{2}} $ i.e. that $R\phi _{0}^{R_{1}}=\left( m_{I_{\mathcal{N}}}\right) ^{\dag }$ i.e. that \begin{equation*} \mathrm{ev}_{I_{\mathcal{N}}}\circ \left( R\phi _{0}^{R_{1}}\otimesimes I_{ \mathcal{N}}\right) =m_{I_{\mathcal{N}}}. \end{equation*} The fact that $\epsilon :LR\rightarrow \mathrm{Id}$ is a monoidal natural transformation between strict monoidal functors, we have $\epsilon _{A\otimesimes B}^{R}=\epsilon _{A}^{R}\otimesimes \epsilon _{B}^{R}$ and $\epsilon _{I_{\mathcal{M}}}^{R}=I_{\mathcal{M}}$ so that \begin{eqnarray*} \mathrm{ev}_{I_{\mathcal{N}}}\circ \left( R\phi _{0}^{R_{1}}\otimesimes I_{ \mathcal{N}}\right) &=&R\left( \mathrm{ev}_{LI_{\mathcal{N}}}\right) \circ R\left( \epsilon _{\left( LI_{\mathcal{N}}\right) ^{\ast }}^{R}\otimesimes LI_{ \mathcal{N}}\right) \circ \eta _{\left( I_{\mathcal{N}}\right) ^{\ast }\otimesimes I_{\mathcal{N}}}^{R}\circ \left( R\phi _{0}^{R_{1}}\otimesimes I_{ \mathcal{N}}\right) \\ &=&R\left( \mathrm{ev}_{LI_{\mathcal{N}}}\right) \circ R\left( \epsilon _{\left( LI_{\mathcal{N}}\right) ^{\ast }}^{R}\otimesimes LI_{\mathcal{N} }\right) \circ RL\left( R\phi _{0}^{R_{1}}\otimesimes I_{\mathcal{N}}\right) \circ \eta _{RI_{\mathcal{M}}\otimesimes I_{\mathcal{N}}}^{R} \\ &=&R\left[ \mathrm{ev}_{LI_{\mathcal{N}}}\circ \left( \epsilon _{\left( LI_{ \mathcal{N}}\right) ^{\ast }}^{R}\otimesimes LI_{\mathcal{N}}\right) \circ \left( LR\phi _{0}^{R_{1}}\otimesimes LI_{\mathcal{N}}\right) \right] \circ \eta _{RI_{\mathcal{M}}\otimesimes I_{\mathcal{N}}}^{R} \\ &=&R\left[ \mathrm{ev}_{I_{\mathcal{M}}}\circ \left( \epsilon _{I_{\mathcal{M }}^{\ast }}^{R}\otimesimes I_{\mathcal{M}}\right) \circ \left( LR\phi _{0}^{R_{1}}\otimesimes I_{\mathcal{M}}\right) \right] \circ \eta _{RI_{\mathcal{ M}}\otimesimes I_{\mathcal{N}}}^{R} \\ &=&R\left[ \mathrm{ev}_{I_{\mathcal{M}}}\circ \left( \phi _{0}^{R_{1}}\otimesimes I_{\mathcal{M}}\right) \circ \left( \epsilon _{I_{ \mathcal{M}}}^{R}\otimesimes I_{\mathcal{M}}\right) \right] \circ \eta _{RI_{ \mathcal{M}}\otimesimes I_{\mathcal{N}}}^{R} \\ &=&R\left[ m_{I_{\mathcal{M}}}\circ \left( \epsilon _{I_{\mathcal{M} }}^{R}\otimesimes I_{\mathcal{M}}\right) \right] \circ \eta _{RI_{\mathcal{M} }\otimesimes I_{\mathcal{N}}}^{R} \\ \text{since }\epsilon _{I_{\mathcal{M}}}^{R} &=&I_{\mathcal{M}} \\ &=&R\left[ m_{I_{\mathcal{M}}}\circ \left( \epsilon _{I_{\mathcal{M} }}^{R}\otimesimes \epsilon _{I_{\mathcal{M}}}^{R}\right) \right] \circ \eta _{RI_{\mathcal{M}}\otimesimes I_{\mathcal{N}}}^{R} \\ \text{since }\epsilon _{A\otimesimes B}^{R} &=&\epsilon _{A}^{R}\otimesimes \epsilon _{B}^{R} \\ &=&R\left[ m_{I_{\mathcal{M}}}\circ \epsilon _{I_{\mathcal{M}}\otimesimes I_{ \mathcal{M}}}^{R}\right] \circ \eta _{RI_{\mathcal{M}}\otimesimes RI_{\mathcal{M} }}^{R} \\ &=&Rm_{I_{\mathcal{M}}}\circ R\epsilon _{I_{\mathcal{M}}\otimesimes I_{\mathcal{M }}}^{R}\circ \eta _{R\left( I_{\mathcal{M}}\otimesimes I_{\mathcal{M}}\right) }^{R} \\ &=&Rm_{I_{\mathcal{M}}}\overset{(\ast )}{=}m_{RI_{\mathcal{M}}}=m_{I_{ \mathcal{N}}}. 
\end{eqnarray*}
It remains to check $\left( \ast \right) $ but
\begin{equation*}
Rm_{I_{\mathcal{M}}}=Rr_{I_{\mathcal{M}}}=Rr_{I_{\mathcal{M}}}\circ \phi _{2}^{R}\left( I_{\mathcal{M}},I_{\mathcal{M}}\right) \circ \left( RI_{\mathcal{M}}\otimes \phi _{0}^{R}\right) \overset{R\text{ monoidal}}{=} r_{RI_{\mathcal{M}}}=m_{RI_{\mathcal{M}}}.
\end{equation*}
\end{invisible}
\end{proof}

Under mild assumptions, by means of Proposition \ref{pro:monadj}, we are now able to transfer the main condition of Corollary \ref{coro:Isar} from a braided monoidal category $\mathcal{M}$ to the category $\mathcal{M}^{{\mathbb N}}$. This will be applied to provide an explicit example of a pre-rigid braided monoidal category which is not right closed and where liftability is available.

\begin{proposition}
\label{coro:externlift}Let $\mathcal{M}$ be a braided monoidal category where $\mathcal{M}$ is abelian and the tensor product is additive and exact in each argument. Assume that $\mathcal{M}$ is pre-rigid and that the assumption in Corollary \ref{coro:Isar} holds for $\mathcal{M}$. Then the analogous conclusion holds for the category $\mathcal{M}^{{\mathbb N}}$ of externally ${\mathbb N}$-graded $\mathcal{M}$-objects, too.
\end{proposition}

\begin{proof}
By Proposition \ref{pro:funcat}, we know that $\mathcal{M}^{{\mathbb N}}$ is a pre-rigid monoidal category with pre-dual of $X=\left( X_{n}\right) _{n\in {\mathbb N}}$ given by $X^{\ast }=\left( \delta _{n,0}X_{0}^{\ast }\right) _{n\in {\mathbb N}}$. We also know that the hypotheses that $\mathcal{M}$ is abelian and the tensor product is additive and exact in each argument guarantee that the category $\mathcal{M}^{{\mathbb N}}$ is moreover braided, with braiding $c_{X,Y}$ defined by $(c_{X,Y})_n=\oplus_{i=0}^nc_{X_i,Y_{n-i}}$ for all $X,Y$ objects in $\mathcal{M}^{{\mathbb N}}$, see e.g. \cite[Definition 2.1]{Schauenburg}. We want to apply Proposition \ref{pro:monadj} in case $\mathcal{N}:=\mathcal{M}^{{\mathbb N}}$. To this aim, note that the functors
\begin{eqnarray*}
L:\mathcal{M}^{{\mathbb N}}\rightarrow \mathcal{M}, &&\qquad X=\left( X_{n}\right) _{n\in {\mathbb N}}\mapsto X_{0},\qquad f=\left( f_{n}\right) _{n\in {\mathbb N}}\mapsto f_{0}, \\
R:\mathcal{M}\rightarrow \mathcal{M}^{{\mathbb N}}, &&\qquad V\mapsto \left( \delta _{n,0}V\right) _{n\in {\mathbb N}},\qquad f\mapsto \left( \delta _{n,0}f\right) _{n\in {\mathbb N}}.
\end{eqnarray*}
considered in the proof of Proposition \ref{pro:funcatG}, where the counit is $\epsilon :={\sf id}$ and the unit is defined on $X=(X_n)_{n\in{\mathbb N}}$ by $\eta _{X}:=\left( \delta _{n,0}\mathrm{Id}_{X_{0}}\right) _{n\in {\mathbb N}}:X\rightarrow RLX=\left( \delta _{n,0}X_{0}\right) _{n\in {\mathbb N}}$, are both strict monoidal as
\begin{align*}
L\left( X\otimes Y\right) &=L\left((\oplus _{i=0}^{n}X_{i}\otimes Y_{n-i})_{n\in{\mathbb N}}\right) =\oplus _{i=0}^{0}X_{i}\otimes Y_{0-i}=X_{0}\otimes Y_{0}=LX\otimes LY, \\
L\left( \mathds{I}^{{\mathbb N} }\right) &=L\left((\delta_{n,0} \mathds{I})_{n\in{\mathbb N}}\right) =\delta _{0,0}\mathds{I}=\mathds{I}, \\
R\left( V\right) \otimes R\left( W\right) &=\left( \delta _{n,0}V\right) _{n\in {\mathbb N}}\otimes \left( \delta _{n,0}W\right) _{n\in {\mathbb N} }=\left( \oplus _{i=0}^{n}\delta _{i,0}V\otimes \delta _{n-i,0}W\right) _{n\in {\mathbb N}}=\left( V\otimes \delta _{n-0,0}W\right) _{n\in {\mathbb N} }\\&=\left( \delta _{n,0}V\otimes W\right) _{n\in {\mathbb N}}=R\left( V\otimes W\right) , \\
R\left( \mathds{I}\right) &=\left( \delta _{n,0}\mathds{I}\right) _{n\in {\mathbb N}}=\mathds{I}^{{\mathbb N} }\text{.}
\end{align*}
Moreover, $\epsilon_{V\otimes W}={\sf id}_{V\otimes W} =\epsilon_{V}\otimes \epsilon_{W}$, $\epsilon_\mathds{I}={\sf id}_\mathds{I}$, $\eta_{X\otimes Y}=\left( \delta _{n,0}\mathrm{Id}_{(X\otimes Y)_{0}}\right) _{n\in {\mathbb N}}=\left( \delta _{n,0}\mathrm{Id}_{X_0\otimes Y_0}\right) _{n\in {\mathbb N}}=\eta_X\otimes\eta_Y$ and $\eta_{\mathds{I}^{{\mathbb N} }}=\left( \delta _{n,0}\mathrm{Id}_{\mathds{I}^{{\mathbb N} }_{0}}\right) _{n\in {\mathbb N}}=\left( \delta _{n,0}\mathrm{Id}_{\mathds{I}}\right) _{n\in {\mathbb N}}={\sf id}_{\mathds{I}^{{\mathbb N} }}$, so that $\eta$ and $\epsilon$ are monoidal natural transformations and hence $(L,R,\eta,\epsilon)$ is a monoidal adjunction. Furthermore, $L(c_{X,Y})=(c_{X,Y})_0=c_{X_0,Y_0}=c_{LX,LY}$, so that $L$ is braided monoidal. We conclude by Proposition \ref{pro:monadj}.
\\Note that the pre-dual obtained via this proposition is $R((LX)^*)=R(X_0^*)=\left( \delta _{n,0}X_{0}^{\ast }\right) _{n\in {\mathbb N}}$, i.e.\ the same object declared at the beginning of this proof.
\end{proof}

\begin{example}\label{example-prerigidnotclosed}
Set $\mathcal{M}={\sf Vec} ^{\text{f}}$. As we have seen in Example \ref{examplerightclosed}, $\mathcal{M}$ is a pre-rigid braided monoidal category which is not right closed. Note that $\mathcal{M}$ fulfills the requirements of Proposition \ref{coro:externlift}. In fact, the adjunction $\left( L_{1},R_{1}\right) =\left( (-)^{\ast }:\mathcal{M}\rightarrow \mathcal{M}^{ \mathrm{op} },(-)^{\ast }:\mathcal{M}^{\mathrm{op} }\rightarrow \mathcal{M} \right) $ in this case is a category equivalence as $V^{\ast \ast }\cong V$ for $V\in {\sf Vec} ^{\text{f}}$. As a consequence, by definition of their monoidal structures, both $L_{1}$ and $R_{1}$ are strong monoidal and the equivalence $\left( L_1,R_1\right) $ induces, by \cite[Proposition 3.91]{Aguiar-Mahajan}, an adjunction $\left( \overline{ L_{1}},\overline{R_{1}}\right) $, which is a category equivalence as well. In particular, $\overline{R_{1}}$ has a left adjoint, namely $\overline{L_{1}}$.
By Proposition \ref{coro:externlift}, if we consider the adjunction $\left( L_{2},R_{2}\right) =\left( (-)^{\ast }: \mathcal{M}^{{\mathbb N}}\rightarrow \left( \mathcal{M}^{{\mathbb N}}\right) ^{ \mathrm{op} },(-)^{\ast }:\left( \mathcal{M}^{{\mathbb N}}\right) ^{\mathrm{op} }\rightarrow \mathcal{M}^{{\mathbb N}}\right) ,$ we get that $\overline{R_{2}}$ has a left adjoint, too. In this way we have obtained that $\mathcal{M}^{{\mathbb N}}$ is an example of a pre-rigid braided monoidal category which is not right closed and such that Corollary \ref{coro:Isar} applies, so that $\left( L_{2},R_{2}\right) $ is a liftable pair of adjoint functors.
\end{example}

The above Example \ref{example-prerigidnotclosed} shows how, in favorable cases, the notion of pre-rigid category allows one to construct a liftable pair of adjoint functors when right-closedness is not available. This was achieved by first proving that the pre-dual construction defines a self-adjoint functor $R=(-)^*:\mathcal{C}^\mathrm{op}\to\mathcal{C}$ whose lax monoidal structure is the opposite of the canonical colax monoidal structure induced on its left adjoint $L=R^\mathrm{op}$ (Proposition \ref{prop:prerigid}), then by showing that such a functor gives rise to a liftable pair $(L,R)$ whenever the induced functor $\overline{R}={\sf Alg}(R)$ has a left adjoint (Proposition \ref{lem:Barop}), and finally by transporting the desired liftability from a possibly closed braided monoidal category $\mathcal{M}$ to a (desirably non-closed) braided monoidal category $\mathcal{N}$ when these are connected by a suitable monoidal adjunction (Proposition \ref{pro:monadj}).

\end{document}
\begin{document}
\title{A Definability Dichotomy for Finite Valued CSPs}
\author{Anuj Dawar and Pengming Wang}
\affil{University of Cambridge Computer Laboratory\\ \texttt{\{anuj.dawar, pengming.wang\}@cl.cam.ac.uk}}
\maketitle
\begin{abstract}
Finite valued constraint satisfaction problems are a formalism for describing many natural optimization problems, where constraints on the values that variables can take come with rational weights and the aim is to find an assignment of minimal cost. Thapper and \v{Z}ivn\'y have recently established a complexity dichotomy for finite valued constraint languages. They show that each such language either gives rise to a polynomial-time solvable optimization problem, or to an $\ensuremath{\mathrm{NP}}\xspace$-hard one, and establish a criterion to distinguish the two cases. We refine the dichotomy by showing that all optimization problems in the first class are definable in fixed-point logic with counting, while all problems in the second class are not definable, even in infinitary logic with counting. Our definability dichotomy is not conditional on any complexity-theoretic assumption.
\end{abstract}
\section{Introduction}
\label{sec:intro}
Constraint Satisfaction Problems (CSPs) are a widely-used formalism for describing many problems in optimization, artificial intelligence and many other areas. The classification of CSPs according to their tractability has been a major area of theoretical research ever since Feder and Vardi~\cite{FV98} formulated their dichotomy conjecture. The main aim is to classify various constraint satisfaction problems as either tractable (i.e.\ decidable in polynomial time) or $\ensuremath{\mathrm{NP}}\xspace$-hard, and a number of dichotomies have been established for special cases of the CSP as well as generalizations of it. In particular, Cohen et al.~\cite{ccjk06:ai} extend the algebraic methods that have been very successful in the classification of CSPs to what they call \emph{soft constraints}, that is, constraint problems involving optimization rather than decision problems. In this context, a recent result by Thapper and \v{Z}ivn\'y~\cite{tz13:stoc} established a complexity dichotomy for \emph{finite valued CSPs} (VCSPs). This is a formalism for defining optimization problems that can be expressed as sums of explicitly given rational-valued functions (a more formal definition is given in Section~\ref{sec:background}). As Thapper and \v{Z}ivn\'y argue, the formalism is general enough to include a wide variety of natural optimization problems. They show that every finite valued CSP is either in $\ensuremath{\mathrm{P}}\xspace$ or $\ensuremath{\mathrm{NP}}\xspace$-hard and provide a criterion, in terms of the existence of a definable $\ensuremath{\mathrm{XOR}}\xspace$ function, that determines which of the two cases holds.

In this paper we are interested in the \emph{definability} of constraint satisfaction problems in a suitable logic. Definability in logic has been a significant tool for the study of CSPs for many years. A particular logic that has received attention in this context is $\logic{Datalog}$, the language of inductive definitions by function-free Horn clauses. A dichotomy of definability has been established in the literature, which shows that every constraint satisfaction problem on a fixed template is either definable in $\logic{Datalog}$ or it is not definable even in the much stronger $\ensuremath{C^{\omega}}\xspace$---an infinitary logic with counting.
This result has not been published as such but is an immediate consequence of results in~\cite{abd09:tcs}, where it is shown that every CSP satisfying a certain algebraic condition is not definable in $\ensuremath{C^{\omega}}\xspace$, and in~\cite{BartoK14}, where it is shown that those that fail to satisfy this condition have bounded width and are therefore definable in $\logic{Datalog}$. The definability dichotomy so established does not line up with the (conjectured) complexity dichotomy, as it is known that there are tractable CSPs that are not definable in $\logic{Datalog}$.

In the context of the definability of optimization problems, one needs to distinguish three kinds of definability. In general, an optimization problem asks for a \emph{solution} (which will typically be an assignment of values from some domain $D$ to the variables $V$ of the instance) minimising the value of a \emph{cost} function. This problem is standardly turned into a decision problem by including a budget $b$ in the instance and asking if there is a solution that achieves a cost of at most $b$. Sentences in a logic naturally define decision problems, and in the context of definability a natural question is whether the decision problem is definable. Asking for a formula that defines an actual optimal solution may not be reasonable, as such a solution may not be uniquely determined by the instance and formulas in logic are generally invariant under automorphisms of the structure on which they are interpreted. An intermediate approach is to ask for a term in the logic that defines the \emph{cost} of an optimal solution, and this is our approach in this paper.

Our main result is a definability dichotomy for finite valued CSPs. In the context of optimization problems involving numerical values, $\logic{Datalog}$ is unsuitable, so we adopt as our yardstick definability in fixed-point logic with counting ($\logic{FPC}$). This is an important logic that defines a natural and powerful proper fragment of the polynomial-time decidable properties (see~\cite{Daw15}). It should be noted that $\ensuremath{C^{\omega}}\xspace$ properly extends the expressive power of $\logic{FPC}$ and therefore undefinability results for the former yield undefinability results for the latter. We establish that every finite valued CSP is either definable in $\logic{FPC}$ or undefinable in $\ensuremath{C^{\omega}}\xspace$. Moreover, this dichotomy lines up exactly with the complexity dichotomy of Thapper and \v{Z}ivn\'y. All the valued CSPs that they show to be tractable are in fact definable in $\logic{FPC}$, and all the ones that are $\ensuremath{\mathrm{NP}}\xspace$-hard are provably not definable in $\ensuremath{C^{\omega}}\xspace$. It should be emphasised that, unlike the complexity dichotomy, our definability dichotomy is not conditional on any complexity-theoretic assumption. Even if it were the case that $\ensuremath{\mathrm{P}}\xspace = \ensuremath{\mathrm{NP}}\xspace$, the finite valued CSPs would still divide into those definable in $\logic{FPC}$ and those that are not, on these same lines.

The positive direction of our result builds on the recent work of Anderson et al.~\cite{adh13:lics} showing that solutions to explicitly given instances of linear programming are definable in $\logic{FPC}$. Thapper and \v{Z}ivn\'y show that for the tractable VCSPs the optimal solution can be found by solving their basic linear programming (BLP) relaxation.
Thus, to establish the definability of these problems in $\logic{FPC}$ it suffices to show that the reduction to the BLP is itself definable in $\logic{FPC}$, which we do in Section~\ref{s:exp}. For the negative direction, we use the reductions used in~\cite{tz13:stoc} to establish $\ensuremath{\mathrm{NP}}\xspace$-hardness of VCSPs and show that these reductions can be carried out within $\logic{FPC}$. We start with the standard CSP form of 3-SAT, which is not definable in $\ensuremath{C^{\omega}}\xspace$ as a consequence of results from~\cite{abd09:tcs}. Details of all these reductions are presented in Section~\ref{s:inexp}. There is one issue with regard to the representation of instances of VCSPs as relational structures which we need to consider in the context of definability. An instance is defined over a language which consists of a set $\Gamma$ of functions from a finite domain $D$ to the rationals. If $\Gamma$ is a finite set, it is reasonable to fix the relational signature to have a relation for each function in $\Gamma$, and the $\logic{FPC}$ formula defining the class of VCSPs would be in this fixed relational signature. Indeed, the result of Thapper and \v{Z}ivn\'y~\cite{tz13:stoc} is stated for infinite sets $\Gamma$ but is really about finite subsets of it. That is, they show that if $\Gamma$ does not have the $\ensuremath{\mathrm{XOR}}\xspace$ property, then \emph{every} finite subset of $\Gamma$ determines a tractable VCSP and that if $\Gamma$ does have the $\ensuremath{\mathrm{XOR}}\xspace$ property then it contains a finite subset $\Gamma'$ such that $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma')$ is $\ensuremath{\mathrm{NP}}\xspace$-hard. Our definability dichotomy replicates this precisely. However, we can also consider the \emph{uniform} definability of $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$ when $\Gamma$ is infinite (note that only finitely many functions from the language $\Gamma$ are used in constraints in any instance). A natural way to represent this is to allow the functions themselves to be elements of the relational structure coding an instance. We can show that our dichotomy holds even under this uniform representation. For simplicity of exposition, we present the results for finite $\Gamma$ and then, in Section~\ref{s:inf}, we explain how the proof can be modified to the uniform case where the functions are explicitly given as elements of the structure. \section{Background}\label{sec:background} \textbf{Notation.} We write $\ensuremath{\mathbb{N}}$ for the natural numbers, $\ensuremath{\mathbb{Z}}$ for the integers, $\ensuremath{\mathbb{Q}}$ for the rational numbers and $\ensuremath{\mathbb{Q}}^+$ to denote the positive rationals. We use bars $\bar{v}$ to denote vectors. A vector over a set $A$ indexed by a set $I$ is a function $\bar{v}:I\rightarrow A$. We write $v_a$ for $\bar{v}(a)$. Often, but not always, the index set $I$ is $\{1,\dots,d\}$, an initial segment of the natural numbers. In this case, we also write $|\vec{v}|$ for the length of $\vec{v}$, i.e.\ $d$. A matrix $M$ over $A$ indexed by two sets $I,J$ is a function $M:I\times J\rightarrow A$. We use the symbol $\ \dot\cup\ $ for the disjoint union operator on sets. If $\bar{v}$ is an $I$-indexed vector over $A$ and $f: A \rightarrow B$ is a function, we write $f(\bar{v})$ to denote the $I$-indexed vector over $B$ obtained by applying $f$ componentwise to $\bar{v}$. \subsection{Valued Constraint Satisfaction} We begin with the basic definitions of valued constraint satisfaction problems. 
These definitions are based, with minor modifications, on the definitions given in~\cite{tz13:stoc}.
\begin{definition}
Let $D$ be a finite domain. A \emph{valued constraint language} $\Gamma$ over $D$ is a set of functions, where each $f \in \Gamma$ has an associated arity $m = \ensuremath{\mathrm{ar}}(f)$ and $f:D^{m} \rightarrow \mathbb{Q}^+$.
\end{definition}
\begin{definition}
An instance of the \emph{valued constraint satisfaction problem (\ensuremath{\mathrm{VCSP}\xspace})} over a valued constraint language $\Gamma$ is a pair $I=(V,C)$, where $V$ is a finite set of \emph{variables} and $C$ is a finite set of \emph{constraints}. Each constraint in $C$ is a triple $(\sigma, f, q)$, where $f\in \Gamma$, $\sigma \in V^{\ensuremath{\mathrm{ar}}(f)}$ and $q \in \ensuremath{\mathbb{Q}}$. A \emph{solution} to an instance $I$ of $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$ is an assignment $h:V\rightarrow D$ of values in $D$ to the variables in $V$. The \emph{cost} of the solution $h$ is given by $cost_I(h):=\sum_{(\sigma, f, q)\in C} q\cdot f(h(\sigma))$. The valued constraint satisfaction problem is then to find a solution with minimal cost. In the \emph{decision version} of the problem, an additional threshold constant $t\in\mathbb{Q}$ is given, and the question becomes whether there is a solution $h$ with $cost_I(h)\leq t$.
\end{definition}
Given a valued constraint language $\Gamma$, there are certain natural closures $\Gamma'$ of this set of functions for which the computational complexity of $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$ and $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma')$ coincide. The first we consider is called the \emph{expressive power} of $\Gamma$, which consists of the functions that can be defined by minimising the cost function of a fixed $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$-instance $I$ over all assignments extending a given assignment to some distinguished variables of $I$ (this is defined formally below). The second closure of $\Gamma$ we consider is under scaling and translation. Both of these are given formally in the following definition.
\begin{definition} \label{def:expower}
Let $\Gamma$ be a valued constraint language over $D$. We say a function $f:D^m\rightarrow\mathbb{Q}$ is \emph{expressible} in $\Gamma$ if there is some instance $I_f=(V_f,C_f)\in \ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$ and a tuple $\tup{v}=(v_1,\ldots,v_m)\in V^m_f$ such that \[f(\bar{x}) = \min_{h\in H_{\bar{x}}} cost_{I_f}(h),\] where $H_{\bar{x}}:=\{h:V_f\rightarrow D \mid h(v_i)=x_i,\ 1\leq i\leq m\}$. We then say the function $f$ is \emph{expressed} by the instance $I_f$ and the tuple $\tup v$, and call the set of all functions that can be expressed by an instance of $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$ the \emph{expressive power} of $\Gamma$, denoted by $\langle\Gamma\rangle$. Furthermore, we write $f'\equiv f$ if $f'$ is obtained from $f$ by \emph{scaling} and \emph{translation}, i.e.\ there are $a,b\in\ensuremath{\mathbb{Q}}, a>0$ such that $f'=a\cdot f+b$. For a valued constraint language $\Gamma$, we write $\Gamma_{\equiv}$ to denote the set $\{f' \mid f'\equiv f\ \mbox{for some}\ f\in\Gamma\}$.
\end{definition}
The next two lemmas establish that closing $\Gamma$ under these operations does not change the complexity of the corresponding problem. The first of these is implicit in the literature, and we prove a stronger version of it in Lemma~\ref{lem:equiv_to_gamma}.
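To fix intuition, here is a small illustrative example (the particular function and weights are our own choice and are not taken from~\cite{tz13:stoc}). Let $D=\{0,1\}$ and let $\Gamma$ consist of the single binary function $f_{=}$ with $f_{=}(x,y)=1$ if $x=y$ and $f_{=}(x,y)=0$ otherwise. Consider the instance $I=(V,C)$ with $V=\{u,v,w\}$ and $C=\{((u,v),f_{=},1),\,((v,w),f_{=},1),\,((u,w),f_{=},2)\}$. The solution $h$ with $h(u)=h(v)=0$ and $h(w)=1$ has cost \[cost_I(h)=1\cdot f_{=}(0,0)+1\cdot f_{=}(0,1)+2\cdot f_{=}(0,1)=1,\] and this is optimal: over a two-element domain every assignment makes at least one of the three pairs equal, and the cheapest pair to leave equal has weight $1$. Note also that $f_{=}$ itself is expressible in $\Gamma$ in the sense of Definition~\ref{def:expower}, witnessed by the instance consisting of the single constraint $((v_1,v_2),f_{=},1)$ together with the tuple $(v_1,v_2)$.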
\begin{lemma}\label{lem:equiv}
Let $\Gamma$ and $\Gamma'$ be valued constraint languages on domain $D$ such that $\Gamma' \subseteq \Gamma_\equiv$. Then $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma')$ is polynomial-time reducible to $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$.
\end{lemma}
\begin{lemma}[Theorem 3.4, \cite{ccjk06:ai}]\label{lem:expower}
Let $\Gamma$ and $\Gamma'$ be valued constraint languages on domain $D$ such that $\Gamma' \subseteq \langle\Gamma\rangle$. Then $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma')$ is polynomial-time reducible to $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$.
\end{lemma}
In the study of constraint satisfaction problems, and of structure homomorphisms more generally, the \emph{core} of a structure plays an important role. The corresponding notion for valued constraint languages is given in the following definition.
\begin{definition}
We call a valued constraint language $\Gamma$ over domain $D$ a \emph{core} if for all $a\in D$, there is some instance $I_a\in \ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$ such that in every minimal cost solution of $I_a$, some variable is assigned $a$. A valued constraint language $\Gamma'$ over a domain $D'\subseteq D$ is a \emph{sub-language} of $\Gamma$ if it contains exactly the functions of $\Gamma$ restricted to $D'$. We say $\Gamma'$ is a \emph{core of} $\Gamma$ if $\Gamma'$ is a sub-language of $\Gamma$ and also a core.
\end{definition}
\begin{lemma}[Lemma 2.4, \cite{tz13:stoc}]\label{lem:core_equiv}
Let $\Gamma'$ be a core of $\Gamma$. Then, $\min_h cost_I(h)=\min_h cost_{I'}(h)$ for all $I\in \ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$ and $I'\in \ensuremath{\mathrm{VCSP}\xspace}(\Gamma')$ where $I'$ is obtained from $I$ by replacing each function of $\Gamma$ by its restriction in $\Gamma'$.
\end{lemma}
Finally, we consider the closure of $\Gamma$ under parameterized definitions. That is, we define $\Gamma_c$, the language obtained from $\Gamma$ by allowing functions that are obtained from those in $\Gamma$ by fixing some parameters.
\begin{definition}\label{def:gamma_c}
Let $\Gamma$ be a core over $D$. We denote by $\Gamma_c$ the language that contains exactly those functions $f: D^m \rightarrow \ensuremath{\mathbb{Q}}$ for which there exist
\begin{itemize}
\item a function $g\in\Gamma$, with $g: D^n \rightarrow \ensuremath{\mathbb{Q}}$ with $n\geq m$,
\item an injective mapping $s_{f}:\{1,\ldots,m\}\rightarrow\{1,\ldots,n\}$,
\item an index set $T_{f}\subseteq\{1,\ldots,n\}$,
\item and a partial assignment $t_{f}:T_{f}\rightarrow D$,
\end{itemize}
such that $f$ is $g$ restricted on $t_{f}$, i.e.\ $f(x_{s_{f}(1)},\ldots,x_{s_{f}(m)})=g(t(x_1),\ldots,t(x_n))$, where $t(x_i)=t_{f}(i)$ if $i\in T_{f}$, and $t(x_i)=x_i$ otherwise. Furthermore, we fix a mapping $\gamma:\Gamma_c\rightarrow \Gamma$ that assigns to each $f\in\Gamma_c$ a function $g=\gamma(f)\in\Gamma$ with the above properties.
\end{definition}
For example, if $f(x_1,x_2,x_3)\in\Gamma$, then $g(x_1,x_2):=f(x_1,a,x_2)$ for $a\in D$ is in $\Gamma_c$.
\subsection{Linear Programming}
\begin{definition}\label{def:lin-opt}
Let $\mathbb{Q}^V$ be the rational Euclidean space indexed by a set $V$. A \emph{linear optimization problem} is given by a \emph{constraint matrix} $A\in\mathbb{Q}^{C\times V}$ and vectors $\bar{b}\in\mathbb{Q}^C, \bar{c}\in\mathbb{Q}^V$. Let $P_{A,\bar{b}}:=\{\bar{x}\in\mathbb{Q}^V \mid A\bar{x}\leq \bar{b}\}$ be the set of \emph{feasible solutions}.
The linear optimization problem is then to determine either that $P_{A,\bar{b}}=\emptyset$, or to find a vector $\bar{y}=\mathrm{argmax}_{\bar{x}\in P_{A,\bar{b}}} \bar{c}^T\bar{x}$, or to determine that $\max_{\bar{x}\in P_{A,\bar{b}}} \bar{c}^T\bar{x}$ is unbounded. We speak of the \emph{integer linear optimization problem} if the set of feasible solutions is instead defined as $P_{A,\bar{b}}:=\{\bar{x}\in\mathbb{Z}^V \mid A\bar{x}\leq \bar{b}\}$. In the decision version of the problem, an additional constant $t\in\mathbb{Q}$ is given, and the task is to determine whether there exists a feasible solution $\bar{x}\in P_{A,\bar{b}}$ such that $\bar{c}^T\bar{x}\geq t$.
\end{definition}
It is often convenient to describe the linear optimization problem $(A,\bar{b},\bar{c})$ as a system of linear inequalities $A\bar{x}\leq\bar{b}$ along with the objective $\max_{\bar{x}\in P_{A,\bar{b}}} \bar{c}^T\bar{x}$. We may alternatively describe an instance with a minimization objective. It is easy to see that such a system can be converted to the standard form of Definition~\ref{def:lin-opt}.

Let $\Gamma$ now be a valued constraint language over $D$, and let $I=(V,C)$ be an instance of $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$. We associate with $I$ the following linear optimization problem in variables $\lambda_{c,\nu}$ for each $c \in C$ with $c=(\sigma,f,q)$ and $\nu \in D^{\ensuremath{\mathrm{ar}}(f)}$, and $\mu_{x,a}$ for each $x\in V$ and $a \in D$.
\begin{equation}
\min \sum_{c\in C} \sum_{\nu\in D^{\ensuremath{\mathrm{ar}}(f)}} \!\! \lambda_{c,\nu} \cdot q \cdot f(\nu) \quad \text{where } c = (\sigma,f,q)
\end{equation}
subject to the following constraints.\\
For each $c\in C$ with $c = (\sigma,f,q)$, each $i$ with $1\leq i \leq \ensuremath{\mathrm{ar}}(f)$ and each $a\in D$, we have
\begin{equation}\label{eqn:constr}
\sum_{\nu\in D^{\ensuremath{\mathrm{ar}}(f)}: \nu_i = a} \!\! \!\! \lambda_{c,\nu} \; = \; \mu_{\sigma_i,a} ;
\end{equation}
for each $x \in V$, we have
\begin{equation}
\sum_{a\in D} \mu_{x,a} \; = \; 1;
\end{equation}
and for all variables $\lambda_{c,\nu}$ and $\mu_{x,a}$ we have
\begin{equation}
0\leq \lambda_{c,\nu}\leq 1 \quad \text{and} \quad 0 \leq \mu_{x,a} \leq 1 .
\end{equation}
A feasible \emph{integer} solution to the above system defines a solution $h: V \rightarrow D$ to the instance $I$, given by $h(x) = a$ iff $\mu_{x,a} = 1$. Equations~\ref{eqn:constr} then ensure that $\lambda_{c,\nu} = 1$ for $c=(\sigma,f,q)$ just in case $h(\sigma) = \nu$. Thus, it is clear that an optimal integer solution gives us an optimal solution to $I$. If we consider \emph{rational} solutions instead of integer ones, we obtain the \emph{basic LP-relaxation} of $I$, which we denote $\ensuremath{\mathrm{BLP}\xspace}(I)$. The following theorem characterises those languages $\Gamma$ for which $\ensuremath{\mathrm{BLP}\xspace}(I)$ has the same optimal solutions as $I$. For the statement of the dichotomy result from \cite{tz13:stoc}, we need to introduce an additional notion. We say the property $(\ensuremath{\mathrm{XOR}}\xspace)$ holds for a valued constraint language $\Gamma$ over domain $D$ if there are $a,b\in D, a\neq b$, such that $\langle\Gamma\rangle$ contains a binary function $f$ with $\mathrm{argmin}\ f=\{(a,b),(b,a)\}$.
\begin{theorem}[Theorem 3.3, \cite{tz13:stoc}]\label{thm:dichotomy}
Let $\Gamma$ be a core over some finite domain $D$.
\begin{itemize} \item Either for each instance $I$ of $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$, the optimal solutions of $I$ are the same as $\ensuremath{\mathrm{BLP}\xspace}(I)$; \item or property $(\ensuremath{\mathrm{XOR}}\xspace)$ holds for $\Gamma_c$ and $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$ is $\ensuremath{\mathrm{NP}}\xspace$-hard. \end{itemize} \end{theorem} \subsection{Logic} A relational \emph{vocabulary} (also called a \emph{signature} or a \emph{language}) $\tau$ is a finite sequence of relation and constant symbols $(R_1, \dots, R_k, c_1, \dots, c_l)$, where every relation symbol $R_i$ has a fixed \emph{arity} $a_i \in \ensuremath{\mathbb{N}}$. A structure $\struct A = (\univ A, R_1^{\struct A}, \dots, R_k^{\struct A}, c_1^{\struct A}, \dots, c_l^{\struct A})$ over the signature $\tau$ (or a \emph{$\tau$-structure}) consists of a non-empty set $\univ A$, called the \emph{universe} of $\struct A$, together with relations $R_i^{\struct A} \subseteq \univ A^{a_i}$ and constants $c_j^{\struct A} \in \univ A$ for each $1 \leq i \leq k$ and $1 \leq j \leq l$. Members of the set $\univ A$ are called the \emph{elements} of $\struct A$ and we define the \emph{size} of $\struct A$ to be the cardinality of its universe. \subsubsection{Fixed-point Logic with Counting} Fixed-point logic with counting (\logic{FPC}) is an extension of inflationary fixed-point logic with the ability to express the cardinality of definable sets. The logic has two sorts of first-order variable: \emph{element variables}, which range over elements of the structure on which a formula is interpreted in the usual way, and \emph{number variables}, which range over some initial segment of the natural numbers. We write element variables with lower-case Latin letters $x, y, \dots$ and use lower-case Greek letters $\mu, \eta, \dots$ to denote number variables. The atomic formulas of $\logic{FPC}[\tau]$ are all formulas of the form $\mu = \eta$ or $\mu \le \eta$, where $\mu, \eta$ are number variables; $s = t$ where $s,t$ are element variables or constant symbols from $\tau$; and $R(t_1, \dots, t_m)$, where each $t_i$ is either an element variable or a constant symbol and $R$ is a relation symbol (i.e.\ either a symbol from $\tau$ or a relational variable) of arity $m$. Each relational variable of arity $m$ has an associated type from $\{\mathrm{elem},\mathrm{num}\}^m$. The set $\logic{FPC}[\tau]$ of \emph{$\logic{FPC}$ formulas} over $\tau$ is built up from the atomic formulas by applying an inflationary fixed-point operator $[\logicoperator{ifp}_{R,\tup x}\phi](\tup t) $; forming \emph{counting terms} $\countingTerm{x} \phi$, where $\phi$ is a formula and $x$ an element variable; forming formulas of the kind $s = t$ and $s \le t$ where $s,t$ are number variables or counting terms; as well as the standard first-order operations of negation, conjunction, disjunction, universal and existential quantification. Collectively, we refer to element variables and constant symbols as \emph{element terms}, and to number variables and counting terms as \emph{number terms}. For the semantics, number terms take values in $\{0,\ldots,n\}$, where $n= \univ{A}$ and element terms take values in $\mathrm{dom}(\struct A)$. The semantics of atomic formulas, fixed-points and first-order operations are defined as usual (c.f., e.g., \cite{EF99} for details), with comparison of number terms $\mu \le \eta$ interpreted by comparing the corresponding integers in $\{0,\ldots,n\}$. 
Finally, consider a counting term of the form $\countingTerm{x}\phi$, where $\phi$ is a formula and $x$ an element variable. Here the intended semantics is that $\countingTerm{x}\phi$ denotes the number (i.e.\ the element of $\{0,\ldots,n\}$) of elements that satisfy the formula $\phi$. For a more detailed definition of $\logic{FPC}$, we refer the reader to~\cite{EF99, Lib04}.

We also consider $\ensuremath{C^{\omega}}\xspace$---the infinitary logic with counting and finitely many variables. We will not define it formally (the interested reader may consult~\cite{Ott97}) but we need the following two facts about it: its expressive power properly subsumes that of $\logic{FPC}$, and it is closed under $\logic{FPC}$-reductions, defined below.

It is known by the Immerman--Vardi theorem~\cite{EF99} that fixed-point logic can express all polynomial-time properties of finite ordered structures. It follows that in $\logic{FPC}$ we can express all polynomial-time relations on the number domain. In particular, we have formulas with free number variables $\alpha,\beta$ for defining sum and product, and we simply write $\alpha+ \beta$ and $\alpha\cdot \beta$ to denote these formulas. For a number term $\alpha$ and a non-negative integer $m$, we write $\alpha=m$ as short-hand for the formula that says that $\alpha$ is exactly $m$. We write $\ensuremath{\mathrm{BIT}\xspace}(\alpha,\beta)$ to denote the formula that is true just in case the $\beta$-th bit in the binary expansion of $\alpha$ is $1$. Finally, for each constant $c$, we assume a formula $\ensuremath{\mathrm{MULT}\xspace}_c(W,x,y)$ which works as follows. If $B$ is an ordered set and $W\subseteq B$ is a unary relation that codes the binary representation of an integer $w$, then $\ensuremath{\mathrm{MULT}\xspace}_c$ defines a binary relation $R \subseteq B^2$ which, under the lexicographic order on $B^2$, defines the binary representation of $c\cdot w$.

\subsubsection{Reductions}
We frequently consider ways of defining one structure within another in some logic $\logic L$, such as first-order logic or $\logic{FPC}$. Consider two signatures $\sigma$ and $\tau$ and a logic $\logic L$. An \emph{$m$-ary $\logic L$-interpretation of $\tau$ in $\sigma$} is a sequence of formulae of $\logic L$ in vocabulary $\sigma$ consisting of: (i) a formula $\delta(\tup x)$; (ii) a formula $\varepsilon(\tup x, \tup y)$; (iii) for each relation symbol $R \in \tau$ of arity $k$, a formula $\phi_R(\tup x_1, \dots, \tup x_k)$; and (iv) for each constant symbol $c \in \tau$, a formula $\gamma_c(\tup x)$, where each $\tup x$, $\tup y$ or $\tup x_i$ is an $m$-tuple of free variables. We call $m$ the \emph{width} of the interpretation. We say that an interpretation $\Theta$ associates a $\tau$-structure $\struct B$ to a $\sigma$-structure $\struct A$ if there is a surjective map $h$ from the $m$-tuples $\{ \tup a \in \univ{A}^m \mid \struct A \models \delta[\tup a] \}$ to $\struct B$ such that:
\begin{itemize}
\item $h(\tup a_1) = h(\tup a_2)$ if, and only if, $\struct A \models \varepsilon[\tup a_1, \tup a_2]$;
\item $R^\struct{B}(h(\tup a_1), \dots, h(\tup a_k))$ if, and only if, $\struct A \models \phi_R[\tup a_1, \dots, \tup a_k]$;
\item $h(\tup a) = c^\struct{B}$ if, and only if, $\struct A \models \gamma_c[\tup a]$.
\end{itemize}
\noindent Note that an interpretation $\Theta$ associates a $\tau$-structure with $\struct A$ only if $\varepsilon$ defines an equivalence relation on $\univ{A}^m$ that is a congruence with respect to the relations defined by the formulae $\phi_R$ and $\gamma_c$. In such cases, however, $\struct B$ is uniquely defined up to isomorphism and we write $\Theta(\struct A) := \struct B$. Throughout this paper, we will often use interpretations where $\varepsilon$ is simply defined as the usual equality on $\tup a_1$ and $\tup a_2$. In these instances, we omit the explicit definition of $\varepsilon$.

The notion of interpretations is used to define logical reductions. Let $C_1$ and $C_2$ be two classes of $\sigma$- and $\tau$-structures respectively. We say $C_1$ \emph{$\logic L$-reduces} to $C_2$ if there is an $\logic L$-interpretation $\Theta$ of $\tau$ in $\sigma$ such that $\Theta(\struct{A})\in C_2$ if and only if $\struct{A}\in C_1$, and we write $C_1\leq_{\logic L}C_2$. It is not difficult to show that formulas of \logic{FPC} compose with reductions in the sense that, given an interpretation $\Theta$ of $\tau$ in $\sigma$ and a $\sigma$-formula $\phi$, we can define a $\tau$-formula $\phi'$ such that $\struct A \models \phi'$ if, and only if, $\Theta(\struct A) \models \phi$. Moreover, $\ensuremath{C^{\omega}}\xspace$ is closed under $\logic{FPC}$-reductions. So if $C_2$ is definable in $\ensuremath{C^{\omega}}\xspace$ and $C_1 \leq_{\logic{FPC}} C_2$, then $C_1$ is also definable in $\ensuremath{C^{\omega}}\xspace$.

\subsubsection{Representation}
In order to discuss definability of constraint satisfaction and linear programming problems, we need to fix a representation of instances of these problems as relational structures. Here, we describe the representation we use.

\noindent \textbf{Numbers and Vectors.} We represent an integer $z$ as a relational structure in the following way. Let $z=s\cdot x$, with $s\in\{-1,1\}$ being the sign of $z$ and $x\in\mathbb{N}$, and let $b\geq \lceil\log_2(x)\rceil$. We represent $z$ as the structure $\struct{z}$ with universe $\{1,\ldots,b\}$ over the vocabulary $\tau_{\mathbb{Z}}=\{X,S,<\}$, where $<$ is interpreted as the usual linear order on $\{1,\ldots,b\}$; $S^{\struct{z}}$ is a unary relation where $S^{\struct{z}}=\emptyset$ indicates that $s=1$, and $s=-1$ otherwise; and $X^{\struct{z}}$ is a unary relation that encodes the bit representation of $x$, i.e.\ $X^{\struct{z}}=\{k\in\{1,\ldots,b\} \mid \ensuremath{\mathrm{BIT}\xspace}(x,k)=1\}$. In a similar vein, we represent a rational number $q=s\cdot \frac{x}{d}$ by a structure $\struct{q}$ over the vocabulary $\tau_{\mathbb{Q}}=\{X,D,S,<\}$, where the additional relation $D^{\struct{q}}$ encodes the binary representation of the denominator $d$ in the same way as before. In order to represent vectors and matrices over integers or rationals, we use multi-sorted universes. Let $T$ be a non-empty set, and let $v$ be a vector of integers indexed by $T$. We represent $v$ as a structure $\struct{v}$ with a two-sorted universe with an index sort $T$ and a bit sort $\{1,\ldots,b\}$, where $b\geq\lceil\log_2(m)\rceil$ and $m=\max_{t\in T}|v_t|$, over the vocabulary $(X,D,S,<)$. Now, the relation $S$ is of arity $2$, and $S^{\struct{v}}(t,\cdot)$ encodes the sign of the integer $v_t$ for $t\in T$. Similarly, $X$ is a binary relation interpreted as $X^{\struct{v}}=\{(t,k)\in T\times \{1,\ldots,b\}\mid \ensuremath{\mathrm{BIT}\xspace}(|v_t|,k)=1\}$.
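The following small Python sketch (ours, purely illustrative and not part of the formal development; the helper names and the choice of which bit position witnesses a negative sign are our own, since the encoding above only requires $S$ to be non-empty for negative numbers) shows how the bit-level relations of these structures can be computed.
\begin{verbatim}
def encode_int(z, b):
    # S is empty for non-negative z; we (arbitrarily) let position 1 witness a negative sign
    S = set() if z >= 0 else {1}
    x = abs(z)
    # position k (counted from 1, least significant bit first) is in X iff BIT(x, k) = 1
    X = {k for k in range(1, b + 1) if (x >> (k - 1)) & 1}
    return S, X

def encode_vector(v):
    # v: dict mapping indices t to integers; S and X become binary relations
    m = max((abs(z) for z in v.values()), default=0)
    b = max(1, m.bit_length())
    S = {(t, 1) for t, z in v.items() if z < 0}
    X = {(t, k) for t, z in v.items()
              for k in range(1, b + 1) if (abs(z) >> (k - 1)) & 1}
    return b, S, X

# encode_vector({'a': 5, 'b': -3}) yields b = 3, S = {('b', 1)} and
# X = {('a', 1), ('a', 3), ('b', 1), ('b', 2)}, since 5 = 101 and 3 = 011 in binary.
\end{verbatim}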
In order to represent matrices $M\in \mathbb{Z}^{T_1\times T_2}$, indexed by two sets $T_1, T_2$, we allow three-sorted universes with two sorts of index sets. The generalisation to rationals carries over from the numbers case. We write $\tau_{\text{vec}}$ to denote the vocabulary for vectors over $\ensuremath{\mathbb{Q}}$ and $\tau_{\text{mat}}$ for the vocabulary for matrices over $\ensuremath{\mathbb{Q}}$. \noindent \textbf{Linear Programs.} Let an instance of a linear optimization problem be given by a constraint matrix $A\in \mathbb{Q}^{C\times V}$, and vectors $\tup b\in\mathbb{Q}^C, \tup c\in\mathbb{Q}^V$ over some set of variables $V$ and constraints $C$. We represent this instance in the natural way as a structure over the vocabulary $\tau_{\ensuremath{\mathrm{LP}\xspace}}=\tau_{\text{vec}}\ \dot\cup\ \tau_{\text{mat}}$. We can now state the result from~\cite{adh13:lics} that we require, to the effect that there is an $\logic{FPC}$ interpretation that can define solutions to linear programs. \begin{theorem}[Theorem 11, \cite{adh13:lics}]\label{thm:lpdefinable} Let an instance $(A\in \mathbb{Q}^{C\times V}, \tup b\in\mathbb{Q}^C, \tup c\in\mathbb{Q}^V)$ of an LP be explicitly given by a relational representation in $\tau_{\ensuremath{\mathrm{LP}\xspace}}$. Then, there is an $\logic{FPC}$-interpretation that defines a representation of $(f\in\mathbb{Q},\tup v\in\mathbb{Q}^V)$, such that $f=1$ if and only if $\max_{\tup x\in P_{A,\tup b}} \tup c^T\tup x$ is unbounded, $\tup v\notin P_{A,\tup b}$ if and only if there is no feasible solution, and $f=0, \tup v=\mathrm{argmax}_{\tup x\in P_{A,\tup b}} \tup c^T\tup x$ otherwise. \end{theorem} \noindent \textbf{CSPs.} We next examine how instances of $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$ for finite $\Gamma$ are represented as relational structures. We return to the case of infinite $\Gamma$ in Section~\ref{s:inf}. For a fixed finite language $\Gamma = \{f_1,\ldots,f_k\}$, we represent an instance $I$ of $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$ as a structure $\struct{I}=(\univ I,<,(R^{\struct I}_f)_{f\in\Gamma},W_N^\struct{I},W_D^\struct{I})$ over the vocabulary $\tau_\Gamma$. The universe $\univ I =V\ \dot\cup\ C\ \dot\cup\ B$ is a three-sorted set, consisting of variables $V$, constraints $C$, and a set $B$ of \emph{bit positions}. We assume that $|B|$ is at least as large as the number of bits required to represent the numerator and denominator of any rational weight occurring in $I$. The relation $<$ is a linear order on $B$. The relation $R_f^\struct{I} \subseteq V^{ar(f)}\times C$ contains $(\sigma,c)$ if $c=(\sigma, f,q)$ is a constraint in $I$. The relations $W_N^\struct{I},W_D^\struct{I} \subseteq C\times B$ encode the weights of the constraints: $W_N^\struct{I}(c,\beta)$ (or $W_D^\struct{I}(c,\beta)$) holds if and only if the $\beta$-th bit of the bit-representation of the numerator (or denominator, respectively) of the weight of constraint $c$ is one. For the decision version of the VCSP, we have two additional unary relations $T_N$ and $T_D$ in the vocabulary which encode the binary representation of the numerator and denominator of the threshold constant of the instance. We are now ready to define what it means to express $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$ in a logic such as $\logic{FPC}$.
For a fixed finite language $\Gamma$, we say that the decision version of $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$ is definable in a logic $\logic L$ if there is some $\tau_{\Gamma} \cup \{T_N,T_D\}$-sentence $\phi$ of $\logic L$ such that $\struct I \models \phi$ if, and only if, $I$ has a solution whose cost is at most the threshold encoded by $T_N$ and $T_D$. We say that $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$ is definable in $\logic{FPC}$ if there is an $\logic{FPC}$ interpretation $\Theta$ of the vocabulary $\tau_{\ensuremath{\mathbb{Q}}}$ in $\tau_{\Gamma}$ such that for any $\struct I$, $\Theta(\struct I)$ codes the value of an optimal solution for the instance $I$. \section{Definable Reductions} \label{sec:reductions} An essential part of the machinery that leads to Theorem~\ref{thm:dichotomy} is that the computational complexity of $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$ is robust under certain changes to $\Gamma$. In other words, closing the class of functions $\Gamma$ under certain natural operations does not change the complexity of the problem. This is established by showing that the distinct problems obtained are inter-reducible under polynomial-time reductions. Our aim in this section is to show that these reductions can be expressed as interpretations in a suitable logic (in some cases first-order logic suffices, and in others we need the power of counting). The following lemma is analogous to Lemma~\ref{lem:expower} and shows that the reductions there can be expressed as logical interpretations. \begin{lemma}\label{lem:expower_to_gamma} Let $\Gamma$ and $\Gamma'$ be valued constraint languages of finite size over a finite domain $D$ such that $\Gamma' \subseteq \langle\Gamma\rangle$. Then $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma') \le_{\logic{FPC}} \ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$. \end{lemma} \begin{proof} The construction of the reduction closely follows the proof of Theorem 3.4 in \cite{ccjk06:ai}, while ensuring it is definable in $\logic{FPC}$. Let $I=(V,C)$ be a given instance of $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma')$. We fix for each function $f\in\Gamma'$ of arity $m$ an instance $I_f=(V_f,C_f)$ of $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$ and an $m$-tuple of distinct elements $\tup{v}_f\in V_f^m$ that together express $f$ in the sense of Definition~\ref{def:expower}. The idea is now to replace each constraint $c=(\sigma,f,q)\in C$ by a copy of $I_f$ where the variables $v_{f1},\ldots,v_{fm}$ in $I_f$ are identified with $\sigma_1,\ldots,\sigma_m$, and the remaining variables are fresh. Since each $I_f$ is an instance of $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$, the instance $J=(U,E)$ obtained after all replacements is again an instance of $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$. Furthermore, by Definition~\ref{def:expower} it has the same optimal solution as $I$. Formally, we define the instance $J=(U,E)$ as follows. The set of variables $U$ consists of the variables in $V$ plus a fresh copy of the variables in $V_f$ for each constraint in $C$ that uses the function $f$, so we can identify $U$ with the following set. \[U=V\ \dot\cup\ \{(v,c)\mid c=(\sigma,f,q) \in C,\ v\in V_f\}.\] Each constraint $c=(\sigma,f,q)\in C$ gives rise to a set of constraints $E_c$, representing a copy of the constraints in $C_f$. \[E_c=\{(h_c(\nu),g,q\cdot r)\mid (\nu,g,r)\in C_f\},\] where $h_c:V_f\rightarrow U$ is defined as the mapping $h_c(v)=\sigma_i$, if $v = v_{fi}$, and $h_{c}(v)=(v,c)$ otherwise. The set of constraints $E$ is then simply the union of all sets $E_c$.
\[E=\bigcup_{c\in C}E_c.\] Let $\tau_\Gamma=(<,(R_f)_{f\in\Gamma},W_N,W_D)$ and $\tau_{\Gamma'}=(<,(R_f)_{f\in\Gamma'},W_N,W_D)$ be the vocabularies for instances of $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$ and $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma')$ respectively. We aim to define an $\logic{FPC}$ reduction $\Theta=(\tup{\delta},\varepsilon,\phi_{<},(\phi_{R_f})_{f\in\Gamma},\phi_{W_N},\phi_{W_D})$ such that $\struct J=\Theta(\struct I)$ corresponds to the above construction of the instance $J$. Let an instance $I=(V,C)$ of $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma')$ be given as a structure $\struct I$ over $\tau_{\Gamma'}$ with the three-sorted universe $\univ I=V\ \dot\cup\ C\ \dot\cup\ B$. For each $m$-ary function $f\in\Gamma'$ we have fixed an instance $I_f=(V_f,C_f)$ and a tuple $\tup{v}_f=(v_{f1},\ldots,v_{fm})$ that together express $f$. As the construction of $\struct J$ depends on these instances, we fix an encoding of them in an initial segment of the natural numbers. To be precise, as the sets $\hat V=\bigcup_{f\in\Gamma'}V_f$ and $\hat C=\bigcup_{f\in\Gamma'}C_f$ are of fixed size (independent of $I$), let $n_{\hat V}=|\hat V|$ and $n_{\hat C}=|\hat C|$. We then fix bijections $\mathit{var}:\hat V \rightarrow \{1,\ldots,n_{\hat V}\}$ and $\mathit{con}:\hat C \rightarrow \{1,\ldots,n_{\hat C}\}$ such that for each $f\in\Gamma'$, there are intervals $\mathcal{V}_f=[lv_f,rv_f]$ and $\mathcal{C}_f =[lc_f,rc_f]$ such that $\mathit{var}(V_f) = \mathcal{V}_f$ and $\mathit{con}(C_f) = \mathcal{C}_f$. We assume that $|\univ I|$ is larger than $\max(n_{\hat V},n_{\hat C})$ so that we can use number terms to index the elements of $\hat V$ and $\hat C$. There are only finitely many instances $I$ smaller than this, and they can be handled in the interpretation $\Theta$ individually. In defining the formulas below, for an integer interval $\mathcal{I}$ we write $\mu \in \mathcal{I}$ as shorthand for the formula $\bigvee_{m \in \mathcal{I}} \mu = m$. The universe of $\struct J$ is a three-sorted set $\univ{J}=U\ \dot\cup\ E\ \dot\cup\ B'$ consisting of variables $U$, constraints $E$, and bit positions $B'$. The set $U$ is defined by the formula \begin{align*}\delta_U(x,\mu)=& \left(x\in C \wedge \bigvee_{f\in\Gamma'}(\exists \tup{y}\in V^{ar(f)}: R_f(\tup{y},x)\wedge \mu \in \mathcal{V}_f)\right) \\&\vee(\mu = 0 \wedge x\in V). \end{align*} In other words, the elements of $U$ consist of pairs $(x,\mu)$, where $x \in V \cup C$ and $\mu$ is an element of the number domain and we make the following case distinction: Either $x\in C$ and there is a constraint $x=(\tup y, f, q)$ in $I$, and a variable $v \in V_f$ with $\mathit{var}(v) = \mu$; then the pair represents one of the fresh variables in $C\times \hat V$. Or, $x\in V$ and $\mu = 0$ and the pair simply represents an element of $V$. Similarly, the constraints $E$ are given by \begin{align*} \delta_E(x,\mu)= x\in C \wedge \bigvee_{f\in\Gamma'}(\exists \tup{y}\in V^{ar(f)}: R_f(\tup{y},x)\wedge \mu\in \mathcal{C}_f). \end{align*} Again, the elements of $E$ are pairs $(x,\mu)$, with $x\in C$ and $\mu$ an element of the number domain, and we require that if there is a constraint of the form $x=(\tup{y},f,q)$, then there is a constraint $c \in C_f$ with $\mathit{con}(c) = \mu$. For the domain of bit positions, we just need to make sure that the set is large enough to encode all weights in $J$. Taking $B'=B^2$ suffices, so \[\delta_{B'}(x_1,x_2)=x_1,x_2\in B\] and we take $\phi_{<}(\tup x, \tup y)$ to be the formula that defines the lexicographic order on pairs.
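As an illustration of this encoding, suppose $c=(\sigma,f,q)\in C$ is a constraint with a binary function $f\in\Gamma'$, and suppose the fixed instance $I_f$ has variables $V_f=\{v_{f1},v_{f2},w\}$ and constraints $C_f=\{d_1,d_2\}$. Then $c$ contributes the single fresh variable $(c,\mathit{var}(w))$ to $U$ and the two constraints $(c,\mathit{con}(d_1))$ and $(c,\mathit{con}(d_2))$ to $E$, while the scope variables $\sigma_1$ and $\sigma_2$ are represented by the pairs $(\sigma_1,0)$ and $(\sigma_2,0)$.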
The constraints of $J$ are encoded in the relations $R_g$, $g\in\Gamma$. For an $m$-ary function $g$, this is defined by a formula $\phi_{R_g}$ in the free variables $(x_1,\mu_1,\ldots,x_m,\mu_m,e,\nu)$ where each $(x_i,\mu_i)$ ranges over elements of $U$, and $(e,\nu)$ ranges over elements of $E$. To be precise, we define the formula by: \begin{align*}\phi_{R_g} = \bigvee_{f\in\Gamma'} &\left(\exists \tup{y} \in V^{ar(f)}:R_f(\tup y,e) \wedge \nu \in \mathcal{C}_f \right. \\ &\wedge \left. \bigvee_{e' = (\rho,g,r) \in C_f} \left( \nu = \mathit{con}(e') \land \bigwedge_{i,j:\, \rho_i = v_{fj}} (x_i = y_j \land \mu_i = 0) \right.\right. \\ &\land \left.\left. \bigwedge_{i:\, \rho_i \not\in \tup{v}_f} (x_i = e \land \mu_i = \mathit{var}(\rho_i)) \right) \right). \end{align*} Finally, we define the weight relations. The weight of a constraint $\tup e=(e_1,e_2)$ is the product of the weight of the original constraint $e_1\in C$ and the weight of the constraint in $\hat C$ coded by $e_2$. We have \begin{align*} \phi_{W_N}(\tup e, \tup \beta)=\bigvee_{e'\in\hat C}\left(e_2=\mathit{con}(e') \wedge \ensuremath{\mathrm{MULT}\xspace}_{w}(W_N(e_1,\cdot),\tup \beta)\right), \end{align*} where $w$ is the numerator of the weight of the constraint $e'$. The definition of the denominator relation is analogous. \end{proof} The next lemma similarly establishes that the reduction in Lemma~\ref{lem:equiv} can be realised as an $\logic{FPC}$ interpretation. \begin{lemma}\label{lem:equiv_to_gamma} Let $\Gamma$ and $\Gamma'$ be valued constraint languages of finite size over a finite domain $D$ such that $\Gamma' \subseteq \Gamma_\equiv$. Then $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma') \le_{\logic{FPC}} \ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$. \end{lemma} \begin{proof} Note that adding constants to the value of constraints does not change the optimal solution of the instance. Hence, we only need to adapt to the scaling of the constraint functions. This can be achieved by changing the weights accordingly. Let $I=(V,C)$ be an instance of $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma')$, given as the relational structure $\struct{I}=(\univ I,<,(R_f)_{f\in\Gamma'},W_N,W_D)$. We aim to construct an instance $J=(U,E)$ of $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$ with the same optimal solution. The set of variables of $J$ is $V$. For any $f\in\Gamma'$ we fix a function $S(f) \in \Gamma$ such that $S(f) \equiv f$. Then, the formula $\phi_{R_g}(\sigma, d)=\bigvee_{f\in\Gamma';g=S(f)}R_f(\sigma, d)$ defines the constraints of $J$. Let $d=(\sigma,g,r)$ be any constraint in $E$, and $c=(\sigma,f,q)$ be the corresponding constraint in $C$ where $g=S(f)$, and $f=a\cdot g+b$ for some $a,b\in\ensuremath{\mathbb{Q}}$ with $a>0$. We then set the weight $r$ of the constraint $d$ to be $a\cdot q$. This can again be defined by a formula in $\logic{FPC}$. \end{proof} Next, we show that there is a definable reduction from $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$ to the problem defined by a core of $\Gamma$. \begin{lemma}\label{lem:gamma_to_core} Let $\Gamma$ be a valued language over $D$, and $\Gamma'$ a core of $\Gamma$. Then, $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)\le_{\logic{FO}}\ensuremath{\mathrm{VCSP}\xspace}(\Gamma')$. \end{lemma} \begin{proof} Since the functions in $\Gamma'$ are exactly those in $\Gamma$, only restricted to some subset of $D$, we can interpret any instance of $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$ directly as an instance of $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma')$. Since, by Lemma \ref{lem:core_equiv}, the optima of both instances coincide, this constitutes a reduction.
\end{proof} The next two lemmas together show that $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$ and $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma_c)$ are $\logic{FPC}$-equivalent. The proof follows closely the proof from \cite{hkp14:joc} that they are polynomial-time equivalent. \begin{lemma}[Lemma 2, \cite{hkp14:joc}]\label{lem:perm_vcsp} Let $\Gamma$ be a core over domain $D$. There exists an instance $I_p$ of $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$ with variables $V=\{x_a \mid a\in D\}$ such that $h_{id}(x_a)=a$ is an optimal solution of $I_p$ and for every optimal solution $h$, the following hold: \begin{enumerate} \item $h$ is injective; and \item for every instance $I'$ of $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$ and every optimal solution $h'$ of $I'$, the mapping $s_h\circ h'$ is also an optimal solution, where $s_h(a):= h(x_a)$. \end{enumerate} \end{lemma} \begin{lemma}\label{lem:gammac_to_gamma} Let $\Gamma$ be a core over a domain $D$ of finite size. Then, $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma_c)\leq_{\logic{FPC}}\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$. \end{lemma} \begin{proof} Let $I_c=(V_c,C_c)$ be an instance of $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma_c)$, and let $I_p=(V_p,C_p)$ be an instance of $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$ that satisfies the conditions of Lemma~\ref{lem:perm_vcsp}. We construct an instance $I=(V,C)$ of $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$ as follows. The set of variables $V$ is \[V := V_c \ \dot\cup\ V_p = V_c \ \dot\cup\ \{x_a\mid a\in D\}.\] By Definition~\ref{def:gamma_c}, each function $f\in \Gamma_c$ is associated with some function $g=\gamma(f)\in\Gamma$, such that $f$ is obtained from $g$ by fixing the values of some set of variables of $g$. Let $T_{f}$ be the corresponding index set, $t_{f}:T_{f}\rightarrow D$ the corresponding partial assignment of variables of $g$, and $s_{f}$ the injective mapping between parameter positions of $f$ and $g$. Then, we add for each constraint $c'=(\sigma', f, q)\in C_c$ the constraint $c=(\sigma, g, q)$ to $C$, where we replace each parameter of $g$ that is fixed to $a\in D$ by the variable $x_a$, or formally, $\sigma_i=x_{t_{f}(i)}$ if $i\in T_{f}$, and $\sigma_i = \sigma'_{s_{f}^{-1}(i)}$ otherwise. Additionally, we add each constraint of $C_p$ to $C$ with its weight multiplied by some sufficiently large factor $M$ such that every optimal solution to $I$, when restricted to $\{x_a\mid a\in D\}$, also constitutes an optimal solution to $I_p$. For instance, $M$ can be chosen as $M:=\sum_{(\sigma,g,q)\in C\backslash C_p}q\cdot \max_{f\in\Gamma_c;\tup{x}}f(\tup{x})$. Note that since the domain and the constraint language are finite, and the functions are finite valued, the value of $\max_{f\in\Gamma_c;\tup{x}}f(\tup{x})$ exists and is a constant. Together, the set of constraints $C$ is defined as \begin{align*} C =& \{(\sigma,g,q)\mid \exists \sigma',f: g=\gamma(f),(\sigma',f,q)\in C_c,\ \forall i\in T_{f}: \sigma_i = x_{t_{f}(i)},\ \forall i\notin T_{f}: \sigma_i = \sigma'_{s_{f}^{-1}(i)}\}\\ &\cup \{(\sigma,g,M\cdot q)\mid (\sigma,g,q)\in C_p\}. \end{align*} In order to see that this construction is a reduction, consider the optimal solutions of $I_c$. Each such optimal solution $h_c$ gives rise to an optimal solution $h$ of $I$, where $h(x)=h_c(x)$ for $x\in V_c$, and $h(x)=h_{id}(x)$ for $x\in V_p$. In the other direction, let $h$ be an optimal solution to $I$; then its restriction to $V_p$, $h_p:=h_{|V_p}$, is an optimal solution to $I_p$.
By Lemma \ref{lem:perm_vcsp}, the operation $s_{h_p}$ is a permutation on $D$, and in particular, by repeatedly applying the second part of Lemma \ref{lem:perm_vcsp}, the inverse permutation $s_{h_p}^{-1}$ is an optimal solution to $I_p$ as well. Now, again by application of the second part of Lemma \ref{lem:perm_vcsp}, we can obtain an optimal solution $h':=s_{h_p}^{-1}\circ h$ to $I$, for which $h'(x_a)=a$ for each $a\in D$. That means that the restriction of $h'$ to $V_c$ is an optimal solution to $I_c$. We now formulate the above construction as an $\logic{FPC}$ interpretation. Let $I_c$ be given as a structure $\struct I_c$ over $\tau_{\Gamma_c}=(<,(R_f)_{\Gamma_c},W_N,W_D)$. Furthermore, let $I_p=(V_p,C_p)$ be some fixed instance of $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$ that satisfies the conditions of Lemma~\ref{lem:perm_vcsp}. We construct an $\logic{FPC}$-interpretation $\Theta=(\vec{\delta},\varepsilon,\phi_{<},(\phi_{R_f})_{f\in\Gamma},\phi_{W_N},\phi_{W_D})$ that defines $\struct I=\Theta(\struct I_c)$. The universe $\univ{I_c}$ is the three-sorted set $V_c\ \dot\cup\ C_c\ \dot\cup\ B_c$. In the same way, the universe of the structure $\struct I$ is a three-sorted set $V\ \dot\cup\ C\ \dot\cup\ B$. Just as in the proof of Lemma~\ref{lem:expower_to_gamma}, to code elements of $V_p$ and $C_p$, we fix bijections $\mathit{var}:V_p \rightarrow \{1,\ldots,|V_p|\}$ and $\mathit{con}:C_p \rightarrow \{1,\ldots,|C_p|\}$. The set $V$ is then defined by the formula \[\delta_V(x)=x\in V_c \lor x \in \{1,\ldots,|V_p|\}.\] Similarly, we define $C$ by \[\delta_C(x)=x\in C_c \lor x \in \{1,\ldots,|C_p|\}.\] The set of bit positions is chosen to be large enough to encode all weights. We can choose $B=B_c^2$. \[\delta_B(x_1,x_2)=x_1,x_2\in B_c,\] and let $\phi_{<}$ define the lexicographic order on $B_c^2$. For each $m$-ary function $g\in\Gamma$, we have the formula \begin{align*} \phi_{R_g}(\vec{x},c)= &\bigvee_{e=(\rho,g,r)\in C_p} \left(c=\mathit{con}(e) \land \bigwedge_{1\leq i\leq m}x_i=\mathit{var}(\rho_i) \right)\\ &\vee\bigvee_{f: \gamma(f) = g}\left( \exists \vec{y} \in V_c^{\ensuremath{\mathrm{ar}}(f)}: R_f(\vec{y},c) \wedge \bigwedge_{i\in T_f}x_i=\mathit{var}(x_{t_f(i)}) \wedge \bigwedge_{i\notin T_f} x_i=y_{s^{-1}_f(i)}\right). \end{align*} The weights are given by \[\phi_{W_N}(c,\tup{\beta}) = (c\in C_c \wedge W_N(c,\beta) ) \lor \bigvee_{e=(\rho,g,r)\in C_p} (c=\mathit{con}(e) \wedge \ensuremath{\mathrm{MULT}\xspace}_{r\cdot L}(B_c,\tup \beta)),\] where $L$ is given by \[L = \max_{f\in\Gamma_c;\tup{x}\in D^{ar(f)}}f(\tup x).\] The denominator is given by \[\phi_{W_D}(c,\tup \beta)=(c\in C_c \wedge W_D(c,\beta)) \lor \bigvee_{e\in C_p} (c=\mathit{con}(e) \wedge \ensuremath{\mathrm{BIT}\xspace}(1,\beta)).\] Here, another case distinction is in order. Either we have $c\in C_c$, and the weight is simply the same as given by $W_N$ and $W_D$. Or, the constraint $c$ corresponds to some constraint $e=(\rho,g,r)\in C_p$, and we assign the weight $L\cdot 2^{|B_c|}\cdot r$ to $c$. \end{proof} \section{Expressibility Result}\label{s:exp} The fact that $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$ is definable in $\logic{FPC}$ whenever $\Gamma_c$ does not have the $(\ensuremath{\mathrm{XOR}}\xspace)$ property is obtained quite directly from Theorems~\ref{thm:dichotomy} and~\ref{thm:lpdefinable}. Here we state the result in a somewhat more general form.
\begin{theorem}\label{thm:exp} For any valued constraint language $\Gamma$ over a finite domain $D$, there is an $\logic{FPC}$ interpretation $\Theta$ of $\tau_{\ensuremath{\mathbb{Q}}}$ in $\tau_{\Gamma}$ that takes an instance $I$ to a representation of the optimal value of $\ensuremath{\mathrm{BLP}\xspace}(I)$. \end{theorem} \begin{proof} We show that it is possible to interpret $\ensuremath{\mathrm{BLP}\xspace}(I)$ as a $\tau_{\ensuremath{\mathrm{LP}\xspace}}$-structure in $I$ by means of an $\logic{FPC}$-interpretation. The statement then follows by Theorem \ref{thm:lpdefinable} and the composition of $\logic{FPC}$-reductions. Let $I=(V,C)$ be given as the $\tau_\Gamma$ structure $\struct{I}$ with universe $\univ I=V\ \dot\cup\ C\ \dot\cup\ B$. Our goal is to define a $\tau_{\ensuremath{\mathrm{LP}\xspace}}$-structure $\struct{P}$ representing $\ensuremath{\mathrm{BLP}\xspace}(I)$, given by $(A,\tup b, \tup c)$. The set of variables of $\struct{P}$ is the union of the two sets \[\lambda=\{\lambda_{c,\nu}\mid c=(\sigma,f,q)\in C, \nu\in D^{|\sigma|}\}\] and \[\mu=\{\mu_{x,a}\mid x\in V, a\in D\}.\] In order to refer to elements of $D$ in our interpretation, we fix a bijection $\mathit{dom}: D \rightarrow \{1,\ldots,|D|\}$ between $D$ and an initial segment of the natural numbers. Then, the sets $\lambda$ and $\mu$ are defined by \[\lambda(c,\vec{\nu})=\bigvee_{f\in\Gamma}\left(\exists \vec{y} \in V^{\ensuremath{\mathrm{ar}}(f)}:R_f(\vec{y},c) \land \bigwedge_{1\leq i\leq ar(f)} \bigvee_{a\in D}\nu_i=\mathit{dom}(a)\right).\] Here, we assume that $\vec{\nu}$ is a tuple of number variables of length $\max_{f \in \Gamma} \ensuremath{\mathrm{ar}} (f)$. This creates some redundant variables, related to constraints whose arity is less than the maximum. We also have \[\mu(x,\alpha)=x\in V \wedge \bigvee_{a\in D}\alpha=\mathit{dom}(a).\] For the set of linear constraints, we observe that the constraints resulting from the equalities of the form $(2)$ can be indexed by the set \[J_{(2)}=\{j_{c,i,a,b}\mid c=(\sigma,f,q)\in C, i\in \{1,\ldots,|\sigma|\},a\in D, b\in \{0,1\}\},\] since we have for each $c\in C$, $i\in\{1,\ldots,|\sigma|\}$, and $a\in D$ a single equality, and hence two inequalities, one for each value of $b$. This can be expressed by \begin{align*} J_{(2)}(c,\iota,\alpha,\beta) =\ & c\in C \wedge \bigvee_{f\in\Gamma}\left(\exists \vec{y} \in V^{ar(f)}: R_f(\vec{y},c)\wedge\iota \leq ar(f)\right)\\ &\wedge \bigvee_{a\in D}\alpha=\mathit{dom}(a)\\ &\wedge \beta \in \{0,1\}. \end{align*} Similarly, the constraints resulting from $(3)$ can be indexed by \[J_{(3)}=\{j_{x,b}\mid x\in V, b\in\{0,1\}\}.\] Or, as a formula, \[J_{(3)}(x,\beta) = x\in V \wedge \beta \in\{0,1\}.\] Finally, we have two inequalities bounding the range of each variable, indexed by \[J_{(4)}=\{j_{v,b}\mid v\in \lambda\cup\mu, b\in\{0,1\}\},\] defined by \[J_{(4)}(\tup v,\beta)=\left(\lambda(\tup{v})\vee \mu(\tup{v})\right) \wedge \beta\in\{0,1\}.\] The universe $\univ{P}$ is then the three-sorted set $Q\ \dot\cup\ R\ \dot\cup\ B'$ with index sets $Q$ and $R$ for columns and rows respectively, and a domain for bit positions $B'$, defined by \[\delta_Q(\tup x)=\lambda(\tup x)\vee \mu(\tup x),\] \[\delta_R(\tup x)=J_{(2)}(\tup x)\vee J_{(3)}(\tup x)\vee J_{(4)}(\tup x),\] \[\delta_{B'}(x)=x\in B.\] The entries in the matrix $A\in\mathbb{Q}^{R\times Q}$, and the two vectors $\tup b\in\mathbb{Q}^{R}$ and $\tup c\in\mathbb{Q}^{Q}$ consist only of elements of $\{0,1,-1\}$ and the weight of some constraint in $C$. It is easily seen that these can be suitably defined in $\logic{FPC}$.
\end{proof} Combining this with Theorem~\ref{thm:dichotomy} yields immediately the positive half of the definability dichotomy. \begin{corollary}\label{cor:exp} If $\Gamma$ is a valued constraint language such that property $(\ensuremath{\mathrm{XOR}}\xspace)$ does not hold for $\Gamma_c$, then $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$ is definable in $\logic{FPC}$. \end{corollary} \section{Inexpressibility Result}\label{s:inexp} We now turn to the other direction and show that if $\Gamma$ is such that $\Gamma_c$ has the $(\ensuremath{\mathrm{XOR}}\xspace)$ property, then $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$ is not definable in $\logic{FPC}$. In fact, we will prove the stronger inexpressibility result that those $\ensuremath{\mathrm{VCSP}\xspace}$s are not even definable in the stronger logic $\ensuremath{C^{\omega}}\xspace$. Our proof proceeds as follows. The main result in \cite{tz13:stoc} characterizes the intractable constraint languages $\Gamma$ as exactly those languages whose extension $\Gamma_c$ has the property $(\ensuremath{\mathrm{XOR}}\xspace)$, by constructing a polynomial-time reduction from $\ensuremath{\mathrm{MAXCUT}\xspace}$ to $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$. We show that this reduction can also be carried out within $\logic{FPC}$. It then remains to show that $\ensuremath{\mathrm{MAXCUT}\xspace}$ itself is not definable in $\ensuremath{C^{\omega}}\xspace$. To this end, we describe a series of $\logic{FPC}$-reductions from 3-$\ensuremath{\mathrm{SAT}\xspace}$ to $\ensuremath{\mathrm{MAXCUT}\xspace}$ which roughly follow their classical polynomial-time counterparts. Finally, results of \cite{bjk05:joc} and \cite{abd09:tcs} establish that 3-$\ensuremath{\mathrm{SAT}\xspace}$ is not definable in $\ensuremath{C^{\omega}}\xspace$, concluding the proof. We consider the problem $\ensuremath{\mathrm{MAXCUT}\xspace}$, where one is given an undirected graph $G=(V,E)$ along with a weight function $w:E\rightarrow \ensuremath{\mathbb{Q}}^+$ and is looking for a bipartition of vertices $p:V\rightarrow \{0,1\}$ that maximises the payout function $b(p)=\sum_{(u,v)\in E; p(u)\neq p(v)}w(u,v)$. In the decision version of the problem, an additional constant $t\in\ensuremath{\mathbb{Q}}^+$ is given and the question is then whether there is a partition $p$ with $b(p)\geq t$. An instance of (decision) $\ensuremath{\mathrm{MAXCUT}\xspace}$ is given as a relational structure $\struct I$ over the vocabulary $\tau_\ensuremath{\mathrm{MAXCUT}\xspace}=(E,<,W_N,W_D,T_N,T_D)$. The universe $\univ I$ is a two-sorted set $U=V\ \dot\cup\ B$, consisting of vertices $V$, and a set $B$ of bit positions, linearly ordered by $<$. In addition to the edge relation $E\subseteq V\times V$, there are two weight relations $W_N,W_D\subseteq V\times V\times B$ which encode the numerator and denominator of the weight of the edge between two vertices. Finally, the unary relations $T_N,T_D\subseteq B$ encode the numerator and denominator of the threshold constant of the instance. \begin{lemma}\label{lem:maxcut_to_gamma} Let $\Gamma$ be a language over $D$ for which $(\ensuremath{\mathrm{XOR}}\xspace)$ holds. Then, $\ensuremath{\mathrm{MAXCUT}\xspace}\leq_{\logic{FPC}}\ensuremath{\mathrm{VCSP}\xspace}(\langle\Gamma\rangle_{\equiv})$. \end{lemma} \begin{proof} Let $I=(V,E,w,t)$ be a given $\ensuremath{\mathrm{MAXCUT}\xspace}$ instance. We define an equivalent instance $J=(U,C,t')$ of $\ensuremath{\mathrm{VCSP}\xspace}(\langle\Gamma\rangle_{\equiv})$ as follows.
Since $(\ensuremath{\mathrm{XOR}}\xspace)$ holds for $\Gamma$, there are two distinct elements $a,b\in D$ for which $\langle\Gamma\rangle_{\equiv}$ contains a binary function $f$ such that $f(x,y)=1$ if $x=y$ and $f(x,y)=0$ otherwise, for $x,y\in\{a,b\}$. By creating a variable for each vertex in $V$ and adding a constraint $((u,v),f,w(e))$ for each edge $e=(u,v)\in E$, we obtain a $\ensuremath{\mathrm{VCSP}\xspace}$ instance with the same optimal solutions. The threshold constant $t'$ is then set to $t'=M-t$, where $M:=\sum_{e\in E}w(e)$. We now define a $\logic{FPC}$-interpretation $\Theta$ of $\tau_{\langle\Gamma\rangle_\equiv}$ in $\tau_{\ensuremath{\mathrm{MAXCUT}\xspace}}$ that carries out the construction. Let $\struct{I}$ be the relational representation of $I$ over $\tau_{\ensuremath{\mathrm{MAXCUT}\xspace}}$ with the two-sorted universe $V\ \dot\cup\ B$. The structure $\struct J = \Theta (\struct I)$ has a three-sorted universe $\univ{J}=U\ \dot\cup\ C\ \dot\cup\ B'$ consisting of variables $U=V$, constraints $C=V^2$, and bit positions $B'=B \times \{1,\ldots,|E|\}$. \[\delta_U(x) = x\in V,\] \[\delta_C(x_1,x_2)=x_1,x_2\in V,\] \[\delta_{B'}(x,\mu)=x\in B \land \mu \leq \#_{y,z} E(y,z).\] Since $M \leq |E|\max_{e\in E} w(e)$, and each $w(e)$ can be represented by $|B|$ bits, $|E|\cdot |B|$ bits suffice to represent the threshold $M-t$. Each edge $e=(u,v)$ gives rise to a constraint $((u,v),f,w(e))$, which is then encoded in $R_f$. \[\phi_{R_f}(\tup x,\tup c) = E(\tup x) \wedge \tup{x}=\tup{c}.\] The weights are simply carried over. \[\phi_{W_N}(\tup{c},b) = W_N(\tup c,b)\] \[\phi_{W_D}(\tup{c},b) = W_D(\tup c,b)\] The threshold is set to $M-t$. As $\logic{FPC}$ can define any polynomial-time computable function on an ordered domain, it is possible to write formulas $\phi_{T_N}$ and $\phi_{T_D}$ defining the numerator and denominator of the threshold $M-t$ on the ordered sort $B'$. The remaining relations $R_g$ corresponding to functions $g\in\langle\Gamma\rangle_\equiv\backslash\{f\}$ are simply empty. \end{proof} The next ingredient is to show that the classical series of polynomial-time reductions from $3$-$\ensuremath{\mathrm{SAT}\xspace}$ to $\ensuremath{\mathrm{MAXCUT}\xspace}$ can also be carried out within $\logic{FPC}$. The chain of reductions consists of three steps: the first reduces $3$-$\ensuremath{\mathrm{SAT}\xspace}$ to $4$-$\ensuremath{\mathrm{NAE}\xspace}SAT$ (Not All Equal SAT), the second reduces $4$-$\ensuremath{\mathrm{NAE}\xspace}SAT$ to $3$-$\ensuremath{\mathrm{NAE}\xspace}SAT$, and the last reduces $3$-$\ensuremath{\mathrm{NAE}\xspace}SAT$ to $\ensuremath{\mathrm{MAXCUT}\xspace}$. We begin by defining the relational representations of these problems. An instance of $3$-$\ensuremath{\mathrm{SAT}\xspace}$ is given as a relational structure over the vocabulary $\tau_{3\ensuremath{\mathrm{SAT}\xspace}}=(R_{000},\ldots,R_{111})$ with eight ternary relations, one for each possible set of negations of literals within a clause (e.g.\ $(a,b,c)\in R_{000}$ may represent the clause $(a\vee b\vee c)$ while $(a,b,c)\in R_{101}$ may represent $(\neg a\vee b\vee \neg c)$). Similarly, we assume $3$-$\ensuremath{\mathrm{NAE}\xspace}SAT$ instances to be given as structures over $\tau_{3\ensuremath{\mathrm{NAE}\xspace}SAT}=(N_{000},\ldots,N_{111})$, where $(a,b,c)\in N_{000}$ represents the constraint that $a$, $b$ and $c$ must not all evaluate to the same value.
Finally, a $4$-$\ensuremath{\mathrm{NAE}\xspace}SAT$ instance is represented as a structure over $\tau_{4\ensuremath{\mathrm{NAE}\xspace}SAT}=(N_{0000},\ldots,N_{1111})$, only now with sixteen 4-ary relations encoding the clauses. \begin{lemma}\label{lem:3sat_to_maxcut} $3$-$\ensuremath{\mathrm{SAT}\xspace}\leq_{\logic{FPC}}\ensuremath{\mathrm{MAXCUT}\xspace}$. \end{lemma} \begin{proof} $3$-$\ensuremath{\mathrm{SAT}\xspace}\leq_{\logic{FO}}4$-$\ensuremath{\mathrm{NAE}\xspace}SAT$: Let $\struct I=(V,R_{000},\ldots,R_{111})$ be any given $3$-$\ensuremath{\mathrm{SAT}\xspace}$ instance. Consider a $4$-$\ensuremath{\mathrm{NAE}\xspace}SAT$ instance $\struct{J}=(U,N_{0000},\ldots,N_{1111})$ with $V\subset U$, i.e.\ there is at least one variable in $U$ not contained in $V$. Furthermore, let $(a,b,c,z)\in N_{ijk0}$ hold if, and only if, $(a,b,c)\in R_{ijk}$ and $z\in U\backslash V$, and let the relations $N_{ijk1}$ be empty. The instance $\struct{J}$ is now satisfiable if, and only if, $\struct{I}$ is satisfiable: Whenever there is a satisfying assignment for $\struct{I}$, the same assignment extended with $z=0$ for all $z\in U\backslash V$ will also be a satisfying assignment for $\struct{J}$. In the other direction, if there is a satisfying assignment for $\struct{J}$, there is always a satisfying one that sets $z=0$ for all $z\in U\backslash V$, since negating every variable does not change the value of a $\ensuremath{\mathrm{NAE}\xspace}$-clause, and each clause only contains one variable in $U\backslash V$. In terms of a $\logic{FPC}$-interpretation, this construction looks as follows. We take as universe $\univ{J}$ the set $V^2$, and interpret an element $(a,a)$ as representing the variable $a\in V$, and any element $(a,b), a\neq b$ as a fresh variable in $U\backslash V$. \[\delta_U(x_1,x_2) = x_1,x_2\in V \] \[\phi_{N_{ijk0}}(\bar{x},\bar{y},\bar{z},\bar{w}) = R_{ijk}(x_1,y_1,z_1) \wedge w_1 \neq w_2 \land \bigwedge_{\bar{v}\in\{\bar{x},\bar{y},\bar{z}\}} v_1=v_2\] \[\phi_{N_{ijk1}}(\bar{x},\bar{y},\bar{z},\bar{w}) = \mathrm{False}\] $4$-$\ensuremath{\mathrm{NAE}\xspace}SAT\leq_{\logic{FPC}}3$-$\ensuremath{\mathrm{NAE}\xspace}SAT$: Let $\struct{I}=(V,N_{0000},\ldots,N_{1111})$ be an instance of $4$-$\ensuremath{\mathrm{NAE}\xspace}SAT$. Note that we can split every clause $\ensuremath{\mathrm{NAE}\xspace}(a,b,c,d)$ into two smaller $3$-$\ensuremath{\mathrm{NAE}\xspace}SAT$ clauses $\ensuremath{\mathrm{NAE}\xspace}(a,b,z)$ and $\ensuremath{\mathrm{NAE}\xspace}(\neg z,c,d)$ for some fresh variable $z$. The following interpretation realises this conversion. In order to introduce a fresh variable for each clause of the $4$-$\ensuremath{\mathrm{NAE}\xspace}SAT$ instance, the universe of the $3$-$\ensuremath{\mathrm{NAE}\xspace}SAT$ instance will consist of tuples from $V^4\times \{0,1\}^5$, where the first eight components encode a clause in $\struct I$ and the last component is a flag indicating whether the element represents a fresh variable or one that appears already in $V$. The convention is then that an element of the form $(a,a,a,a,0,\ldots,0)$ represents the variable $a\in V$, and an element of the form $(a,b,c,d,i,j,m,n,1)$ represents the fresh variable that is used to split the clause $N_{ijmn}(a,b,c,d)$. 
\[\delta(\bar{x}) = \bar{x}\in V^4 \times \{0,1\}^5 \] \begin{align*} \phi_{N_{ij1}}(\bar{x},\bar{y},\bar{z}) = &\bigvee_{m,n\in\{0,1\}}\exists u,v\in V: N_{ijmn}(x_1,y_1,u,v) \\ &\wedge\ x_1=x_2=x_3=x_4 \wedge x_5=\ldots =x_9=0\\ &\wedge\ y_1=y_2=y_3=y_4 \wedge y_5=\ldots =y_9=0 \\ &\wedge\ \bar{z} = (x_1,y_1,u,v,i,j,m,n,1) \end{align*} \begin{align*} \phi_{N_{0ij}}(\bar{x},\bar{y},\bar{z}) = &\bigvee_{m,n\in\{0,1\}}\exists u,v\in V: N_{mnij}(u,v,y_1,z_1) \\ &\wedge\ y_1=y_2=y_3=y_4 \wedge y_5=\ldots =y_9=0 \\ &\wedge\ z_1=z_2=z_3=z_4 \wedge z_5=\ldots =z_9=0 \\ &\wedge\ \bar{x} = (u,v,y_1,z_1,m,n,i,j,1) \end{align*} The remaining relations are defined as empty. $3$-$\ensuremath{\mathrm{NAE}\xspace}SAT\leq_{\logic{FPC}}\ensuremath{\mathrm{MAXCUT}\xspace}$: The following construction transforms a given $3$-$\ensuremath{\mathrm{NAE}\xspace}SAT$ instance $\struct{I}=(V,N_{000},\ldots,N_{111})$ into an equivalent (decision) $\ensuremath{\mathrm{MAXCUT}\xspace}$ instance $\struct{J}=(\univ J,E,W_N,W_D,T_N,T_D)$. Let $m$ be the number of clauses in $\struct I$, and fix $M:=10m$. For each variable $v\in V$, we have two vertices in our graph, denoted $v_0$ and $v_1$, along with an edge $(v_0,v_1)$ of weight $M$. For each tuple $(x,y,z)\in N_{ijk}$ we add a triangle between the vertices $x_i$, $y_j$, and $z_k$ with edge-weight $1$. Setting the cut threshold to $t:=|V|\cdot M+2m$ gives us an equivalent instance: If $\struct{I}$ is satisfiable, say by an assignment $f$, then the partition given by $p(v_i)=f(v)+i\ \mathrm{mod}\ 2$ cuts through every edge of the form $(v_0,v_1)$, and through two edges in every triangle, resulting in a payout of $|V|\cdot M+2m$. On the other hand, any bipartition of payout at least $|V|\cdot M+2m$ has to cut through all edges of the form $(v_0,v_1)$, since it can only cut through two edges in each triangle. Hence, any such bipartition induces a satisfying assignment to the $3$-$\ensuremath{\mathrm{NAE}\xspace}SAT$ instance. We use the following $\logic{FPC}$-interpretation to realise this construction. The universe of $\struct{J}$ is defined as a two-sorted set $\univ J=U\ \dot\cup\ B$, consisting of vertices $U=V\times\{0,1\}$ and bit positions $B=\{1,\ldots,\alpha\}$ for some sufficiently large $\alpha$. In particular, $\alpha$ has to be chosen larger than $\log_2 t$. Since $m$ is at most $|V|^3$, taking $\alpha=|V|^4$ suffices. \[\delta_U(x_1,x_2)=x_1\in V \wedge x_2\in \{0,1\} \] \[\delta_B(\vec{\mu})= \bigwedge_{1\leq i \leq 4} \mu_i \leq \#_v v \in V.\] The edge relation is given by \begin{align*} \phi_E(\bar{x},\bar{y}) = &\left(x_1=y_1 \wedge x_2\neq y_2\right)\\ &\vee \bigvee_{i,j,k\in\{0,1\}}\exists u,v,w\in V: N_{ijk}(u,v,w) \wedge \bar{x},\bar{y}\in \{(u,i),(v,j),(w,k)\} \wedge \bar{x}\neq\bar{y} . \end{align*} The edge weights and the cut threshold are defined by \begin{align*} \phi_{W_N}(\bar{x},\bar{y},\beta) = &\left(x_1=y_1 \wedge x_2\neq y_2 \wedge \ensuremath{\mathrm{BIT}\xspace}\left(10 \cdot \sum_{i,j,k\in\{0,1\}}\#_{u,v,w}N_{ijk}(u,v,w),\beta\right)\right) \\ &\vee \left(\neg\left(x_1=y_1 \wedge x_2\neq y_2\right) \wedge \ensuremath{\mathrm{BIT}\xspace}(1,\beta)\right), \end{align*} so that each edge of the form $(v_0,v_1)$ receives weight $M=10m$ and each triangle edge receives weight $1$, and by \[\phi_{T_N}(\beta) = \ensuremath{\mathrm{BIT}\xspace}\left((2+10\cdot \#_v v\in V) \cdot \sum_{i,j,k\in\{0,1\}}\#_{u,v,w}N_{ijk}(u,v,w),\beta\right),\] \begin{align*} \phi_{W_D}(\bar{x},\bar{y},\beta) = \ensuremath{\mathrm{BIT}\xspace}(1,\beta), \end{align*} \[\phi_{T_D}(\beta) = \ensuremath{\mathrm{BIT}\xspace}(1,\beta).\] Note that the weights and the cut threshold are integers, hence the denominator relations simply code $1$.
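As a small illustration of this last construction, consider a single clause $N_{000}(x,y,z)$ over $V=\{x,y,z\}$, so that $m=1$, $M=10$ and $t=3\cdot 10+2=32$. The graph consists of the three heavy edges $(x_0,x_1)$, $(y_0,y_1)$, $(z_0,z_1)$ of weight $10$ and a triangle on $x_0$, $y_0$, $z_0$ whose edges have weight $1$. The NAE-satisfying assignment $x=1$, $y=z=0$ induces the partition $p$ with $p(x_0)=1$ and $p(y_0)=p(z_0)=0$ (and the opposite values on $x_1$, $y_1$, $z_1$), which cuts all three heavy edges and exactly two triangle edges, for a payout of $32=t$; an assignment with $x=y=z$, by contrast, cuts no triangle edge, and its induced partition reaches only $30<t$.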
\end{proof} \begin{lemma}\label{lem:3sat} $3$-$\ensuremath{\mathrm{SAT}\xspace}$ is not expressible in $\ensuremath{C^{\omega}}\xspace$. \end{lemma} \begin{proof} Note that a $3$-$\ensuremath{\mathrm{SAT}\xspace}$ instance $\struct{I}=(V,R^{\struct{I}}_{000},\ldots,R^{\struct{I}}_{111})$ can also be interpreted as an instance of $\ensuremath{\mathrm{CSP}\xspace}(\Gamma_{3\ensuremath{\mathrm{SAT}\xspace}})$ for $\Gamma_{3\ensuremath{\mathrm{SAT}\xspace}}=\{R_{000},\ldots,R_{111}\}$ and $R_{ijk}=\{0,1\}^3\backslash \{(i,j,k)\}$. Hence, we can apply results from the algebraic classification of CSPs to determine the definability of $3$-$\ensuremath{\mathrm{SAT}\xspace}$. In this context, it has been shown in \cite{bjk05:joc} that the algebra of polymorphisms corresponding to $\Gamma_{3\ensuremath{\mathrm{SAT}\xspace}}$ contains only essentially unary operations. It follows from the result in \cite{abd09:tcs} that $3$-$\ensuremath{\mathrm{SAT}\xspace}$ is not definable in $\ensuremath{C^{\omega}}\xspace$. \end{proof} \begin{theorem}\label{thm:inexp} Let $\Gamma$ be a valued constraint language of finite size and let $\Gamma'$ be a core of $\Gamma$. If $(\ensuremath{\mathrm{XOR}}\xspace)$ holds for $\Gamma'_c$, then $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$ is not expressible in $\ensuremath{C^{\omega}}\xspace$. \end{theorem} \begin{proof} Assume property $(\ensuremath{\mathrm{XOR}}\xspace)$ holds for $\Gamma'_c$. By Lemma \ref{lem:maxcut_to_gamma}, $\ensuremath{\mathrm{MAXCUT}\xspace}$ $\logic{FPC}$-reduces to $\ensuremath{\mathrm{VCSP}\xspace}(\langle\Gamma'_c\rangle_{\equiv})$. Lemmas \ref{lem:expower_to_gamma} to \ref{lem:gammac_to_gamma} provide a chain of $\logic{FPC}$-reductions from $\ensuremath{\mathrm{VCSP}\xspace}(\langle\Gamma'_c\rangle_{\equiv})$ to $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$. Since $\ensuremath{C^{\omega}}\xspace$ is closed under $\logic{FPC}$-reductions, Lemmas \ref{lem:3sat_to_maxcut} and \ref{lem:3sat} together show that $\ensuremath{\mathrm{MAXCUT}\xspace}$ is not definable in $\ensuremath{C^{\omega}}\xspace$, and hence neither is $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$. \end{proof} \section{Constraint Languages of Infinite Size}\label{s:inf} In representing the problem $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$ as a class of relational structures, we have chosen to fix a finite relational signature $\tau_{\Gamma}$ for each finite $\Gamma$. An alternative, \emph{uniform} representation would be to fix a single signature which allows for the representation of instances of $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$ for arbitrary $\Gamma$ by coding the functions in $\Gamma$ explicitly in the instance. In this section, we give a description of how this can be done. Our goal is to show that our results generalise to this case, and that the definability dichotomy still holds. Let $\Gamma$ now be a valued constraint language over some finite domain $D$. The challenge of fixing a relational signature for instances of $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$ is that different instances may use different sets of functions of $\Gamma$ in their constraints, and hence, we cannot represent each function as a relation in the signature. Instead, we make the functions part of the universe, together with tuples over $D$ of different arities as their input. Let $I$ be an instance of $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$ where the constraints use functions from a finite subset $\Gamma_I\subset\Gamma$, and let $m$ be the maximal arity of any function in $\Gamma_I$.
We then represent $I$ as a structure $\struct{I}$ with the multi-sorted universe $\univ I=V\ \dot\cup\ C\ \dot\cup\ B\ \dot\cup\ F\ \dot\cup\ T$, where $V$ is a set of variables, $C$ a set of constraints, $B$ a set of numbers on which we have a linear order, $F$ a set of function symbols corresponding to functions in $\Gamma_I$, and $T$ is a set of tuples from $D\cup D^2\cup\ldots\cup D^m$, over the signature $\tau_D=(<,R_{\mathrm{fun}},R_{\mathrm{scope}},W_N,W_D,\mathit{Def}_N,\mathit{Def}_D,\mathit{Enc})$. Here, the relations encode the following information. \begin{itemize} \item $R_{\mathrm{fun}}\subseteq C\times F$: This relation matches functions and constraints, i.e.\ $(c,f)\in R_{\mathrm{fun}}$ denotes that $c=(\sigma,f,q)$ is a constraint of the instance for some scope $\sigma$ and weight $q$. \item $R_{\mathrm{scope}}\subseteq C\times V\times B$: This relation fixes the scope of a constraint, i.e.\ $(c,v,\beta)\in R_{\mathrm{scope}}$ denotes that $c=(\sigma,f,q)$ is a constraint for some function $f$ and weight $q$, where the $\beta$-th component of $\sigma$ is $v$. \item $W_N,W_D\subseteq C\times B$: This is analogous to the finite case. These two relations together encode the rational weights of the constraints. \item $\mathit{Def}_N,\mathit{Def}_D\subseteq F\times T\times B$: These two relations together fix the definition of some function symbol in $F$. That is, $(f,t,\beta)\in \mathit{Def}_N$ denotes that the $\beta$-th bit of the numerator of the value of $f$ on input $t$ is $1$, and similarly for $\mathit{Def}_D$ and the denominator. \item $\mathit{Enc}\subseteq T\times D\times B$: This relation fixes the encoding of tuples as elements in $T$, i.e.\ $(t,a,\beta)\in \mathit{Enc}$ denotes that the $\beta$-th component of the tuple $t$ is the element $a\in D$. \end{itemize} The above signature now allows instances $I$, $I'$ with different sets of functions $\Gamma_I$ and $\Gamma_{I'}$ to be represented as structures of the same vocabulary. Since the set of function symbols is part of the universe, the relations $\mathit{Def}_N, \mathit{Def}_D$ are required to give concrete meaning to these function symbols. We now say, for a (potentially infinite) valued constraint language $\Gamma$, that $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$ is \emph{uniformly definable} in $\logic{FPC}$ if there is an $\logic{FPC}$-interpretation of $\tau_{\ensuremath{\mathbb{Q}}}$ in $\tau_D$ which takes an instance $\struct I$ of $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$ to the cost of its optimal solution. Our inexpressibility result, Theorem~\ref{thm:inexp}, immediately carries over to this setting as it is easy to construct an $\logic{FPC}$ reduction from the $\tau_{\Gamma}$ representation of $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$ to the $\tau_D$ representation. \begin{theorem}\label{thm:inf-inexp} Let $\Gamma$ be a valued constraint language and let $\Gamma'$ be a core of $\Gamma$. If $(\ensuremath{\mathrm{XOR}}\xspace)$ holds for $\Gamma'_c$, then $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$ is not uniformly definable in $\ensuremath{C^{\omega}}\xspace$. \end{theorem} For the positive direction, i.e.\ to show that $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$ is \emph{uniformly definable} in $\logic{FPC}$ in all other cases, we simply need to adapt the proof of Theorem \ref{thm:exp} to fit the new representation. \begin{theorem}\label{thm:inf-exp} Let $\Gamma$ be a valued constraint language and let $\Gamma'$ be a core of $\Gamma$.
If $(\ensuremath{\mathrm{XOR}}\xspace)$ does not hold for $\Gamma'_c$, then $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$ is uniformly definable in $\logic{FPC}$. \end{theorem} \begin{proof} We adapt the proof of Theorem \ref{thm:exp} for potentially infinite languages $\Gamma$. The main challenge is to work around the variable arities of the constraints. Let $\Gamma$ be a constraint language over some finite domain $D$, and let $I$ be an instance of $\ensuremath{\mathrm{VCSP}\xspace}(\Gamma)$, and $\struct{I}$ its relational representation in $\tau_D$. Recall that the set of variables of $\ensuremath{\mathrm{BLP}\xspace}(I)$ for $I=(V,C)$ is given by the union of the two sets \[\lambda=\{\lambda_{c,\nu}\mid c=(\sigma,f,q)\in C, \nu\in D^{|\sigma|}\}\] and \[\mu=\{\mu_{x,a}\mid x\in V, a\in D\}.\] These sets can now be defined in $\logic{FPC}$ from $\struct{I}$ as follows. \[\lambda(c, s)=\exists f\in F:R_{\mathrm{fun}}(c,f) \wedge \exists \beta\in B: \mathrm{Ar}_{R}(c,\beta)\wedge\mathrm{Ar}_{\mathit{Enc}}(s,\beta)\] and \[\mu(x,a)=x\in V \wedge a\in T \wedge \mathrm{Ar}_{\mathit{Enc}}(a,1).\] Here, we make use of formulas $\mathrm{Ar}_{R}(x,\beta)$ and $\mathrm{Ar}_{\mathit{Enc}}(x,\beta)$ by which we mean that the scope of the constraint $x$, respectively the tuple encoded by the element $x$, has the arity $\beta$. The formulas can be defined as follows. \[\mathrm{Ar}_{R}(x,\beta)=\exists v\in V: \left(R_{\mathrm{scope}}(x,v,\beta) \wedge \forall u\in V, \beta'\in B: \beta'> \beta\Rightarrow \neg R_{\mathrm{scope}}(x,u,\beta')\right),\] \[\mathrm{Ar}_{\mathit{Enc}}(x,\beta)=\bigvee_{a\in D} \left(\mathit{Enc}(x,a,\beta) \wedge \bigwedge_{a'\in D} \forall \beta'\in B: \beta'> \beta\Rightarrow \neg \mathit{Enc}(x,a',\beta')\right).\] In words, the formulas ensure that the $\beta$-th component of $x$ (of its scope, respectively of the tuple it encodes) is defined in the structure, and that for any position $\beta'> \beta$, the $\beta'$-th component of $x$ is not defined in the structure. The rest of the proof closely follows the original one for Theorem \ref{thm:exp}, without substantial changes. \end{proof} \input{document_full.bbl} \end{document}
\begin{document} \title[On the derived category...]{On the derived category of the Hilbert scheme of points on an Enriques surface} \author[A.\ Krug]{Andreas Krug} \address{Mathematisches Institut, Universit\"at Bonn\\ Endenicher Allee 60\\ 53115 Bonn, Germany} \email{[email protected]} \author[P.\ Sosna]{Pawel Sosna} \address{Fachbereich Mathematik der Universit\"at Hamburg\\ Bundesstra\ss e 55\\ 20146 Hamburg, Germany} \email{[email protected]} \begin{abstract} We use semi-orthogonal decompositions to construct autoequivalences of Hilbert schemes of points on Enriques surfaces and of Calabi--Yau varieties which cover them. While doing this, we show that the derived category of a surface whose irregularity and geometric genus vanish embeds into the derived category of its Hilbert scheme of points. \end{abstract} \maketitle \section{Introduction} The bounded derived category of coherent sheaves on a smooth projective complex variety $Z$, denoted by ${\rm D}^{\rm b}(Z)$, is now widely recognized as an important invariant which can be used to study the geometry of $Z$. It is therefore quite natural to consider the group of autoequivalences ${\rm Aut}({\rm D}^{\rm b}(Z))$. This group always contains the subgroup ${\rm Aut}^{\text{st}}({\rm D}^{\rm b}(Z))$ of so-called standard autoequivalences, namely those generated by automorphisms of $Z$, the shift functor and tensor products with line bundles. Note that all these equivalences send coherent sheaves to (shifts of) coherent sheaves. A classical result by Bondal and Orlov, see \cite{Bondal-Orlov}, states that ${\rm Aut}^{\text{st}}({\rm D}^{\rm b}(Z))\simeq {\rm Aut}({\rm D}^{\rm b}(Z))$ if $\omega_Z$ is either ample or anti-ample. Therefore, we expect the most interesting behaviour if the canonical bundle is trivial. For instance, if $\omega_Z\simeq{\mathcal O}_Z$ and $\Ho^i(Z,\omega_Z)=0$ for all $0<i<\dim(Z)$, then ${\rm Aut}({\rm D}^{\rm b}(Z))$ contains so-called spherical twists, which are more interesting than the ones mentioned above since, in general, they do not preserve the abelian category of coherent sheaves; see \cite{Seidel-Thomas}. It is usually quite difficult to construct new autoequivalences of a given variety. However, if ${\mathcal A}$ is some triangulated category and an exact functor $\Phi\colon {\mathcal A} \xymatrix@1@=15pt{\ar[r]&} {\rm D}^{\rm b}(Z)$ is a so-called spherical or $\mathbb{P}^n$-functor, see Subsection \ref{sphericalpn}, we do get an autoequivalence of ${\rm D}^{\rm b}(Z)$. For example, let $\tilde{X}$ be a K3 surface and $\tilde{X}^{[n]}$ its Hilbert scheme of $n$ points. It was shown in \cite{Addington} that the Fourier--Mukai functor ${\rm D}^{\rm b}(\tilde{X})\xymatrix@1@=15pt{\ar[r]&} {\rm D}^{\rm b}(\tilde{X}^{[n]})$ with the ideal sheaf of the universal family as kernel is a $\mathbb{P}^{n-1}$-functor whose associated autoequivalence is new. Interestingly, for an abelian surface $A$ the corresponding functor is not a $\mathbb{P}^{n-1}$-functor, but pulling everything to the generalised Kummer variety does give one; see \cite{Meachan}. Another example was given in \cite{Krug1} where it was shown that, given any surface $S$, there is a $\mathbb{P}^{n-1}$-functor ${\rm D}^{\rm b}(S)\xymatrix@1@=15pt{\ar[r]&} {\rm D}^{\rm b}(S^{[n]})$ which is defined using equivariant methods. In this paper we will construct new examples of spherical functors using Enriques surfaces and Hilbert schemes of points on them. Let $X$ be an Enriques surface and $X^{[n]}$ the Hilbert scheme of $n$ points on $X$. 
The canonical cover of $X^{[n]}$ is a Calabi--Yau variety (see \cite{Nieper} or \cite{Oguiso-Schroer}) and will be denoted by $\mathrm{CY}_n$. Write $\pi\colon \mathrm{CY}_n\xymatrix@1@=15pt{\ar[r]&} X^{[n]}$ for the quotient map. Consider the Fourier--Mukai functor $F\colon {\rm D}^{\rm b}(X)\xymatrix@1@=15pt{\ar[r]&}{\rm D}^{\rm b}(X^{[n]})$ induced by the ideal sheaf of the universal family. \begin{thm}[Theorem \ref{maintheorem}] The functor \[\tilde{F}=\pi^*F\colon {\rm D}^{\rm b}(X)\xymatrix@1@=15pt{\ar[r]&} {\rm D}^{\rm b}(\mathrm{CY}_n)\] is split spherical for all $n\geq 2$ and the associated twist $\tilde{T}$ is equivariant, so descends to an autoequivalence of $X^{[n]}$. The autoequivalence $\tilde{T}$ of ${\rm D}^{\rm b}(\mathrm{CY}_n)$ is not standard and not a spherical twist. \end{thm} Under some conditions we can also compare our twist $\tilde{T}$ to the autoequivalences constructed in \cite{Ploog-Sosna}; see Proposition \ref{comparisontoboxes}. Once we establish Theorem \ref{hilb-semiorth} below, the first part of the theorem is an incarnation of the following general principle (see Proposition \ref{sphericaldeg2cover}): If $Y$ is a smooth projective variety whose canonical bundle is of order $2$, $\tilde{Y}$ its canonical cover with quotient map $\pi\colon \tilde{Y}\xymatrix@1@=15pt{\ar[r]&} Y$, and ${\mathcal A}$ an admissible subcategory of ${\rm D}^{\rm b}(Y)$ with embedding functor $i\colon {\mathcal A}\xymatrix@1@=15pt{\ar[r]&} {\rm D}^{\rm b}(Y)$, then $\pi^*i$ is a split spherical functor and the associated twist is equivariant. The following result might be of independent interest. \begin{thm}\label{hilb-semiorth} If $S$ is any surface with $p_g=q=0$, then the FM-transform $F\colon {\rm D}^{\rm b}(S)\xymatrix@1@=15pt{\ar[r]&} {\rm D}^{\rm b}(S^{[n]})$, whose kernel is the ideal sheaf of the universal family, is fully faithful, hence ${\rm D}^{\rm b}(S)$ is an admissible subcategory of ${\rm D}^{\rm b}(S^{[n]})$. \end{thm} Since there are many semi-orthogonal decompositions of ${\rm D}^{\rm b}(X^{[n]})$, we have many, potentially non-standard, twists associated with them. The paper is organised as follows. In Section 2 we present some background information, before proving our main results in Section 3. In Section \ref{exceptionalsequences} we give a general construction of exceptional sequences on the Hilbert scheme $S^{[n]}$ out of exceptional sequences on a surface $S$. This construction has been independently considered by Evgeny Shinder. In particular, we have the following result which is probably well-known to experts. \begin{proposition} If $S$ is a surface having a full exceptional collection, then the same holds for $S^{[n]}$. \end{proposition} In the last section we describe what we call truncated ideal functors which provide us with a further example of a fully faithful functor ${\rm D}^{\rm b}(X)\xymatrix@1@=15pt{\ar[r]&} {\rm D}^{\rm b}(X^{[n]})$ for $X$ an Enriques surface, and, in some cases, $\mathbb{P}^n$-functors on smooth Deligne--Mumford stacks. This section also gives some background on the proof of Proposition \ref{rfidentity}, the main ingredient in the proof of Theorem \ref{hilb-semiorth}. \noindent \textbf{Conventions.} We will work over the complex numbers and all functors are assumed to be derived. We will write ${\mathcal H}^i(E)$ for the $i$-th cohomology object of a complex $E\in {\rm D}^{\rm b}(Z)$ and $\Ho^*(E)$ for the complex $\oplus_i \Ho^i(Z,E)[-i]$.
If $F$ is a functor, its right adjoint will be denoted by $F^R$ and the left adjoint by $F^L$. \noindent \textbf{Acknowledgements.} We thank Ciaran Meachan for comments. A.K.\ was supported by the SFB/TR 45 of the DFG (German Research Foundation). P.S.\ was partially financially supported by the RTG 1670 of the DFG. \section{Preliminaries}\label{section:preliminaries} \subsection{Hilbert schemes of surfaces with $p_g=q=0$} Let $S$ be a surface with $p_g=q=0$ and consider $S^{[n]}$, the Hilbert scheme of $n$ points on $S$. Then we have $\Ho^k(S^{[n]},{\mathcal O}_{S^{[n]}})=0$ for all $k>0$, compare \cite{Oguiso-Schroer}. Indeed, by the Künneth formula $\Ho^*(S^n,\mathcal O_{S^n})=\Ho^*(S,\mathcal O_S)^{\otimes n}$ is concentrated in degree zero. As a consequence, the structure sheaf of the $n$-th symmetric product has no higher cohomology, and the same then also holds for $S^{[n]}$, because the symmetric product has rational singularities. For example, we can consider an \emph{Enriques surface}, which is a smooth projective surface $X$ with $p_g=q=0$ such that the canonical bundle $\omega_X$ is of order $2$. \subsection{Canonical covers} Let $Y$ be a variety with torsion canonical bundle of (minimal) order $k$. The \emph{canonical cover} $\tilde{Y}$ of $Y$ is the unique (up to isomorphism) variety with trivial canonical bundle and an \'etale morphism $\pi\colon \tilde{Y} \xymatrix@1@=15pt{\ar[r]&} Y$ of degree $k$ such that $\pi_*{\mathcal O}_{\tilde{Y}}=\bigoplus_{i=0}^{k-1}\omega_Y^i$. In this case, there is a free action of the cyclic group $\mathbb{Z}/k\mathbb{Z}$ on $\tilde{Y}$ such that $\pi$ is the quotient morphism. As an example, the canonical cover of an Enriques surface $X$ is a K3 surface $\tilde{X}$ and $X$ is the quotient of $\tilde{X}$ by the action of a fixed point free involution. Furthermore, the canonical bundle of $X^{[n]}$ has order $2$ and the associated canonical cover, denoted here by $\mathrm{CY}_n$, is a Calabi--Yau variety; see \cite[Prop.\ 1.6]{Nieper} or \cite[Thm.\ 3.1]{Oguiso-Schroer}. \subsection{Fourier--Mukai transforms and kernels}\label{fmtransforms} Recall that given an object ${\mathcal E}$ in ${\rm D}^{\rm b}(Z\times Z')$, where $Z$ and $Z'$ are smooth and projective, we get an exact functor ${\rm D}^{\rm b}(Z)\xymatrix@1@=15pt{\ar[r]&} {\rm D}^{\rm b}(Z')$, $\alpha\xymatrix@1@=15pt{\ar@{|->}[r]&} (p_{Z'})_*({\mathcal E}\otimes p_Z^*\alpha)$. Such a functor, denoted by $\mathrm{FM}_{\mathcal E}$, is called a \emph{Fourier--Mukai transform} (or FM-transform) and ${\mathcal E}$ is its kernel. See \cite{Huybrechts} for a concise introduction to FM-transforms. For example, $\mathrm{FM}_{\Delta_*{\mathcal L}}(\alpha)=\alpha\otimes{\mathcal L}$, where $\Delta\colon Z\xymatrix@1@=15pt{\ar[r]&} Z\times Z$ is the diagonal map and ${\mathcal L}\in {\rm Pic}(Z)$. In particular, $\mathrm{FM}_{{\mathcal O}_{\Delta}}$ is the identity functor. \noindent {\textbf{Convention.}} We will write $\mathrm{M}_{\mathcal L}$ for the functor $\mathrm{FM}_{\Delta_*{\mathcal L}}$.
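Note also that if $f\colon Z\xymatrix@1@=15pt{\ar[r]&} Z'$ is a morphism with graph $\Gamma_f\subset Z\times Z'$, then $\mathrm{FM}_{{\mathcal O}_{\Gamma_f}}\simeq f_*$, since $p_Z$ restricts to an isomorphism $\Gamma_f\xymatrix@1@=15pt{\ar[r]&} Z$; we will not use this example in the sequel, but it illustrates how geometric functors arise as FM-transforms.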
Let $S$ be any smooth projective surface, ${\mathcal Z}_n\subset S\times S^{[n]}$ be the universal family and consider its structure sequence \begin{align*}\xymatrix{0\ar[r] & {\mathcal I}_{{\mathcal Z}_n}\ar[r] & {\mathcal O}_{S\times S^{[n]}} \ar[r] & {\mathcal O}_{{\mathcal Z}_n}\ar[r] & 0.}\end{align*} Using the objects from the above sequence as kernels, we get a triangle $F\xymatrix@1@=15pt{\ar[r]&} F'\xymatrix@1@=15pt{\ar[r]&} F''$ of functors ${\rm D}^{\rm b}(S)\xymatrix@1@=15pt{\ar[r]&} {\rm D}^{\rm b}(S^{[n]})$. Since all these functors are FM-transforms, they have left and right adjoints; see \cite[Prop.\ 5.9]{Huybrechts}. \subsection{Equivalences of canonical covers}\label{ccequi} The relation between autoequivalences of a variety $Y$ with torsion canonical bundle and those of the canonical cover $\tilde{Y}$ was studied in \cite{Bridgeland-Maciocia}. We recall some facts in the special case where the order of $\omega_Y$ is $2$. An autoequivalence $\tilde{\varphi}$ of ${\rm D}^{\rm b}(\tilde{Y})$ is \emph{equivariant} under the conjugation action of $G=\mathbb{Z}/2\mathbb{Z}$ on ${\rm Aut}({\rm D}^{\rm b}(\tilde{Y}))$ if there is a group automorphism $\mu\in {\rm Aut}(G)$ such that $g_*\tilde{\varphi}\simeq\tilde{\varphi} \mu(g)_*$ for all $g\in G$. Of course, in our case, $\mu={\rm id}$. By \cite[Sect.\ 4]{Bridgeland-Maciocia}, an equivariant functor $\tilde{\varphi}$ descends to a functor $\varphi\in{\rm Aut}({\rm D}^{\rm b}(Y))$ with functor isomorphisms $\pi_*\tilde{\varphi} \simeq \varphi \pi_*$ and $\pi^* \varphi \simeq \tilde{\varphi}\pi^*$; moreover, two descents $\varphi$, $\varphi'$ of $\tilde{\varphi}$ are unique up to a line bundle twist with a power of $\omega_Y$. In the other direction, it is also shown in \cite[Sect.\ 4]{Bridgeland-Maciocia} that every autoequivalence of ${\rm Aut}({\rm D}^{\rm b}(Y))$ has an equivariant lift. Two lifts differ up to the action of $G$ in ${\rm Aut}({\rm D}^{\rm b}(\tilde{Y}))$. \subsection{Spherical functors}\label{sphericalpn} Now consider two triangulated categories ${\mathcal A}$ and ${\mathcal B}$ and any exact functor $F\colon {\mathcal A} \xymatrix@1@=15pt{\ar[r]&}{\mathcal B}$ with left and right adjoints $F^L, F^R\colon{\mathcal B}\xymatrix@1@=15pt{\ar[r]&}{\mathcal A}$. Define the \emph{twist} $T$ to be the cone on the counit $\epsilon\colon FF^R\xymatrix@1@=15pt{\ar[r]&} {\rm id}_{\mathcal B}$ of the adjunction and the \emph{cotwist} $C$ to be the cone on the unit $\eta\colon {\rm id}_{\mathcal A}\xymatrix@1@=15pt{\ar[r]&} F^RF$. \begin{remark}\label{dglifts1} Of course, one needs to make sure that the above cones actually exist. If one works with Fourier--Mukai-transforms, this is not a problem, because the maps between the functors come from the underlying kernels and everything works out, even for (reasonable) schemes which are not necessarily smooth and projective; see \cite{Anno-Logvinenko}. More generally, everything works out if one uses an appropriate notion of a spherical DG-functor; see \cite{Anno-Logvinenko2}. \end{remark} So, as we will see, in the cases of interest to us, we have triangles $FF^R\xymatrix@1@=15pt{\ar[r]&} {\rm id}_{\mathcal B}\xymatrix@1@=15pt{\ar[r]&} T$ and ${\rm id}_{\mathcal A}\xymatrix@1@=15pt{\ar[r]&} F^RF\xymatrix@1@=15pt{\ar[r]&} C$. Following \cite{Anno-Logvinenko2}, we call $F$ \emph{spherical} if $C$ is an equivalence and $F^R\simeq C F^L$. 
If ${\mathcal A}$ and ${\mathcal B}$ admit Serre functors ${\mathcal S}_{\mathcal A}$ and ${\mathcal S}_{\mathcal B}$, the last condition is equivalent to ${\mathcal S}_{\mathcal B} F C\simeq F{\mathcal S}_{\mathcal A}$. If $F$ is a spherical functor, then $T$ is an equivalence. If the triangle ${\rm id}_{\mathcal A}\xymatrix@1@=15pt{\ar[r]&} F^RF\xymatrix@1@=15pt{\ar[r]&} C$ splits, we call $F$ \emph{split spherical}. For an example of a (split) spherical functor consider a $d$-dimensional variety $Z$ and a \emph{spherical object} $E\in {\rm D}^{\rm b}(Z)$, that is, $E\otimes \omega_Z\simeq E$ and ${\rm Hom}^*(E,E)\simeq \mathbb{C}\oplus\mathbb{C}[-d]$. The functor \[F=-\otimes E\colon {\rm D}^{\rm b}({\rm Sp}ec(\mathbb{C}))\xymatrix@1@=15pt{\ar[r]&} {\rm D}^{\rm b}(Z)\] is then spherical and the associated autoequivalence of ${\rm D}^{\rm b}(Z)$ is the spherical twist from the introduction, denoted by $\mathrm{ST}_E$ in the following. \subsection{$\mathbb{P}^n$-functors} Following \cite{Addington}, a \emph{$\mathbb{P}^n$-functor} is a functor $F\colon {\mathcal A}\xymatrix@1@=15pt{\ar[r]&} {\mathcal B}$ of triangulated categories such that \begin{enumerate} \item There is an autoequivalence $H_F=H$ of ${\mathcal A}$ such that \[F^RF\simeq {\rm id}\oplus H\oplus H^2\oplus\dots \oplus H^n.\] \item The map \[HF^RF\xymatrix@1@=15pt{\ar@{^(->}[r]&} F^RFF^RF\xrightarrow{F^R\epsilon F} F^RF,\] with $\epsilon$ being the counit of the adjunction is, when written in the components \begin{align*}H\oplus H^2\oplus\dots\oplus H^n\oplus H^{n+1}\xymatrix@1@=15pt{\ar[r]&} {\rm id}\oplus H\oplus H^2\oplus\dots\oplus H^n,\end{align*} of the form \begin{align}\label{monadmatrix}\begin{pmatrix} * & * &\cdots &*&*\\ 1&*&\cdots&*&*\\ 0&1&\cdots&*&*\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&\cdots&1&* \end{pmatrix}. \end{align} \item $F^R\simeq H^nF^L$. If ${\mathcal A}$ and ${\mathcal B}$ have Serre functors, this is equivalent to ${\mathcal S}_{\mathcal B} FH^n\simeq F{\mathcal S}_{\mathcal A}$. \end{enumerate} If $F$ is a $\mathbb{P}^n$-functor, then there is also an associated autoequivalence of ${\mathcal B}$, denoted by $P_F=P$. A $\mathbb{P}^1$-functor is precisely a split spherical functor and for the associated equivalences we have $T^2\simeq P$. If $\tilde{X}$ is a K3 surface, the functor $F=\mathrm{FM}_{{\mathcal I}_{{\mathcal Z}_n}}$ defined in Subsection \ref{fmtransforms} is a $\mathbb{P}^{n-1}$-functor; see \cite{Addington}. \subsection{Values of autoequivalences} Let $Z,Z'$ be smooth projective varieties. Recall that an exact functor $F\colon {\rm D}^{\rm b}(Z)\xymatrix@1@=15pt{\ar[r]&} {\rm D}^{\rm b}(Z')$ is said to have \emph{cohomological amplitude} $[a,b]$ if for every complex $E\in {\rm D}^{\rm b}(Z)$ whose cohomology is concentrated in degrees between $p$ and $q$, the cohomology of $F(E)$ is concentrated in degrees between $p-a$ and $q+b$. We will need the following slight generalisation of a Proposition in {\cite[Sect.\ 1.4]{Addington}}. The case that $\Phi_1=\Phi_2={\rm id}$ could be generally useful when comparing twists along spherical and $\mathbb{P}^n$-functors $F$ whose cotwist is not simply a shift but a standard autoequivalence. \begin{proposition}\label{prop:Addingtongeneralised} Let $T\in{\rm Aut}({\rm D}^{\rm b}(Z'))$ and $F\colon{\rm D}^{\rm b}(Z)\xymatrix@1@=15pt{\ar[r]&} {\rm D}^{\rm b}(Z')$ be a Fourier--Mukai transform. 
Furthermore, let $\Psi\in {\rm Pic}(Z)\rtimes{\rm Aut}(Z)\subset{\rm Aut}^{\mathrm{st}}({\rm D}^{\rm b}(Z))$ be a non-shifted standard autoequivalence and let $\Phi_1,\Phi_2\in \{{\rm id},\mathrm{M}_{\omega_{Z'}}\}\subset{\rm Aut}({\rm D}^{\rm b}(Z'))$. If $\alpha\in {\rm D}^{\rm b}(Z')$ is such that $T(\alpha)\simeq\Phi_1(\alpha)[i]$ and there is an isomorphism of functors $T F\simeq\Phi_2 F \Psi[j]$ with $i\neq j\in \mathbb{Z}$, then $F(\beta)\in \alpha^\bot$ and $\alpha \in F(\beta)^\bot$ for every $\beta\in {\rm D}^{\rm b}(Z)$. \end{proposition} \begin{proof} We have $T^m(\alpha)\simeq\Phi_1^m(\alpha)[mi]$ and $T^mF(\beta)\simeq\Phi_2^mF\Psi^m(\beta)[mj]$ for all $m\in \mathbb{Z}$, since $\mathrm{M}_{\omega_Z}$ commutes with every autoequivalence. Hence, \begin{align*} {\rm Hom}\bigl(\alpha,F(\beta)[k]\bigr)&\simeq{\rm Hom}\bigl(T^m(\alpha),T^mF(\beta)[k]\bigr)\\&\simeq{\rm Hom}\bigl(\Phi_1^m(\alpha)[mi],\Phi_2^mF\Psi^m(\beta)[mj+k]\bigr) \\&\simeq{\rm Hom}\bigl(\Psi^{-m}F^L\Phi_2^{-m}\Phi_1^m(\alpha),\beta[(j-i)m+k]\bigr). \end{align*} for every $k\in \mathbb{Z}$. This vanishes for $m\gg0$ since $\Phi_1$, $\Phi_2$, and $\Psi$ have cohomological amplitude $[0,0]$ and $F^L$ has finite cohomological amplitude by \cite[Prop.\ 2.5]{Kuz}. The proof that $\alpha\in F(\beta)^\bot$ is analogous. \end{proof} \subsection{Semi-orthogonal decompositions} References for the following facts are, for example, \cite{Bon} and \cite{Bondal-Orlov2}. Let ${\mathcal T}$ be a triangulated category. A \emph{semi-orthogonal decomposition} of ${\mathcal T}$ is a sequence of strictly full triangulated subcategories ${\mathcal A}_1,\ldots,{\mathcal A}_m$ such that (a) if $A_i\in \mathcal{A}_i$ and $A_j\in \mathcal{A}_j$, then $\text{Hom}(A_i,A_j[l])=0$ for $i>j$ and all $l$, and (b) the $\mathcal{A}_i$ generate ${\mathcal T}$, that is, the smallest triangulated subcategory of ${\mathcal T}$ containing all the $\mathcal{A}_i$ is already ${\mathcal T}$. We write ${\mathcal T}=\langle{\mathcal A}_1,\ldots,{\mathcal A}_m\rangle$. If $m=2$, these conditions boil down to the existence of a functorial exact triangle $A_2\xymatrix@1@=15pt{\ar[r]&} T\xymatrix@1@=15pt{\ar[r]&} A_1$ for any object $T\in {\mathcal T}$. A subcategory ${\mathcal A}$ of ${\mathcal T}$ is \emph{right admissible} if the embedding functor $i$ has a right adjoint $i^R$, and it is called \emph{left admissible} if $i$ has a left adjoint $i^L$. We say that ${\mathcal A}$ is \emph{admissible} if both adjoints exist. Note that if ${\mathcal T}$ admits a Serre functor, then the existence of one of the adjoints implies the existence of the other. Given any subcategory ${\mathcal A}$, the category ${\mathcal A}^\bot$ consists of objects $b$ such that ${\rm Hom}(a,b[k])=0$ for all $a\in {\mathcal A}$ and all $k\in \mathbb{Z}$. If ${\mathcal A}$ is right admissible, then ${\mathcal T}=\langle{\mathcal A}^\bot,{\mathcal A}\rangle$ is a semi-orthogonal decomposition. Similarly, ${\mathcal T}=\langle{\mathcal A}, {}^\bot{\mathcal A}\rangle$ is a semi-orthogonal decomposition if ${\mathcal A}$ is left admissible, where ${}^\bot{\mathcal A}$ is defined in the obvious way. Examples typically arise from so-called exceptional objects. Recall that an object $E\in {\rm D}^{\rm b}(Z)$ (or any $\mathbb{C}$-linear triangulated category) is called \emph{exceptional} if ${\rm Hom}(E,E)=\mathbb{C}$ and ${\rm Hom}(E,E[k])=0$ for all $k\neq 0$. 
The smallest triangulated subcategory containing $E$ is then equivalent to ${\rm D}^{\rm b}({\rm Sp}ec(\mathbb{C}))$ and this category, by abuse of notation again denoted by $E$, is admissible, leading to a semi-orthogonal decomposition ${\rm D}^{\rm b}(Z)=\langle E^\bot, E\rangle$. We call a sequence of exceptional objects $E_1,\ldots,E_n$ an \emph{exceptional collection} if ${\rm D}^{\rm b}(Z)=\langle (E_1,\ldots,E_n)^\bot,E_1,\ldots,E_n\rangle$, where $(E_1,\ldots,E_n)^\bot$ is the category of objects $F$ which satisfy ${\rm Hom}(E_i,F[k])=0$ for all $i,k$. The collection is called \emph{full} if $(E_1\ldots,E_n)^\bot=0$. Note that any fully faithful FM-transform $i\colon{\mathcal A}={\rm D}^{\rm b}(Z')\xymatrix@1@=15pt{\ar[r]&} {\rm D}^{\rm b}(Z)$ gives a semi-orthogonal decomposition ${\rm D}^{\rm b}(Z)=\langle i({\mathcal A})^\bot,i({\mathcal A})\rangle$. We will need the following well-known and easy fact. \begin{lemma}\label{serreadmissible} If ${\mathcal T}$ has a Serre functor ${\mathcal S}_{\mathcal T}$ and ${\mathcal A}$ is an admissible subcategory, then ${\mathcal A}$ has a Serre functor ${\mathcal S}_{\mathcal A}\simeq i^R{\mathcal S}_{\mathcal T} i$. \end{lemma} \begin{proof} Given $a,a'\in {\mathcal A}$, we compute ${\rm Hom}_{\mathcal A}(a,a')\simeq{\rm Hom}_{\mathcal T}(i(a),i(a'))\simeq {\rm Hom}_{\mathcal T}(i(a'),{\mathcal S}_{\mathcal T} i(a))^\vee\simeq {\rm Hom}_{\mathcal A}(a',i^R{\mathcal S}_{\mathcal T} i(a))^\vee$. \end{proof} \begin{remark}\label{dglifts2} Assume ${\mathcal A}$ is an admissible subcategory of ${\rm D}^{\rm b}(Z)$. The embedding functor $i$ lifts to DG-enhancements (see, for example, \cite{Lunts-Kuznetsov} for this notion). It can be checked that the adjoints $i^R$ and $i^L$ also lift to so-called DG quasi-functors, which follows, for example, from \cite[Lem.\ 4.4 and Prop.\ 4.10]{Lunts-Kuznetsov}. So the composition of $i$ with any FM-transform will lift to the DG-level, hence we are in a position to use the results of \cite{Anno-Logvinenko2} and all required cones of natural transformations will exist. \end{remark} \subsection{Group actions and derived categories}\label{subsection:equivariant} Let $G$ be a finite group acting on a smooth projective variety $Z$. The \emph{equivariant derived category}, denoted by ${\rm D}^{\rm b}_G(Z)$, is defined as ${\rm D}^{\rm b}(\mathbb Coh^G(Z))$, see, for example, \cite{Ploog} for details. Recall that for every subgroup $H\subset G$ the restriction functor $\Res\colon {\rm D}^{\rm b}_G(Z)\xymatrix@1@=15pt{\ar[r]&} {\rm D}^{\rm b}_H(Z)$ has the inflation functor $\mathcal Inf\colon {\rm D}^{\rm b}_H(Z)\xymatrix@1@=15pt{\ar[r]&} {\rm D}^{\rm b}_G(Z)$ as a left and right adjoint (see e.g.\ \cite[Sect.\ 1.4]{Ploog}). It is given for $A\in {\rm D}^{\rm b}(Z)$ by \begin{align}\label{infdef} \mathcal Inf(A)=\bigoplus_{[g]\in H\setminus G}g^* A \end{align} with the linearisation given by permutation of the summands. If $G$ acts trivially on $Z$, there is also the functor ${\rm tr}iv\colon{\rm D}^{\rm b}(Z)\xymatrix@1@=15pt{\ar[r]&} {\rm D}^{\rm b}_{G}(Z)$ which equips an object with the trivial $G$-linearisation. Its left and right adjoint is the functor $(-)^G\colon {\rm D}^{\rm b}_G(Z)\xymatrix@1@=15pt{\ar[r]&} {\rm D}^{\rm b}(Z)$ of invariants. \noindent \textbf{Convention.} When working with Fourier--Mukai transforms, we will frequently identify the functor with its kernel. 
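In particular, under this identification the adjoints of an FM-transform are again given by explicit kernels: for ${\mathcal E}\in{\rm D}^{\rm b}(Z\times Z')$ one has (see \cite[Prop.\ 5.9]{Huybrechts}) \[\mathrm{FM}_{\mathcal E}^L\simeq \mathrm{FM}_{{\mathcal E}^\vee\otimes p_{Z'}^*\omega_{Z'}[\dim Z']}\,,\qquad \mathrm{FM}_{\mathcal E}^R\simeq \mathrm{FM}_{{\mathcal E}^\vee\otimes p_{Z}^*\omega_{Z}[\dim Z]}\,.\] This is used, for instance, in the proof of Lemma \ref{rank} below.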
\section{Proofs of the main results}\label{section:proofs} \subsection{Surfaces with $p_g=q=0$} Recall the FM-transforms from Subsection \ref{fmtransforms} and write $R$, $R'$ and $R''$ for the right adjoints of $F$, $F'$ and $F''$, respectively. To compute $RF$ in the examples known so far, one usually works out the various compositions such as, for example, $R''F'$. This can be done rather quickly in certain situations: \begin{proposition} Let $S$ be a surface with $p_g=q=0$. Then the following hold: \begin{align*} R''F'&\simeq {\mathcal O}_{S\times S},\\ R'F'&\simeq ({\mathcal O}_S\boxtimes \omega_S)[2],\\ R''F''&\simeq {\mathcal O}_{\rm D}elta\oplus {\mathcal O}_{S\times S},\\ R'F''&\simeq ({\mathcal O}_S\boxtimes \omega_S)[2]. \end{align*} \end{proposition} \begin{proof} Follows immediately from the results in Section 6 of \cite{Meachan} by using $\Ho^*(S^{[n]}, {\mathcal O}_{S^{[n]}})\simeq \mathbb{C}$. \end{proof} \begin{lemma}\label{rprimef} We have $R'F\simeq 0$ and $R''F\simeq \mathcal O_{\rm D}elta[-1]$. \end{lemma} \begin{proof} The map $F'\xymatrix@1@=15pt{\ar[r]&} F''$ induces an isomorphism $R'F'\xymatrix@1@=15pt{\ar[r]&} R'F''$ and the component $\mathcal O_{S\times S}\xymatrix@1@=15pt{\ar[r]&} \mathcal O_{S\times S}$ of the induced map $R''F'\xymatrix@1@=15pt{\ar[r]&} R''F''$ is an isomorphism too; see \cite[Sect.\ 6]{Meachan} or Section \ref{truncated}. Hence, the first assertion follows from the triangle $R'F\xymatrix@1@=15pt{\ar[r]&} R'F'\xymatrix@1@=15pt{\ar[r]&} R'F''$. We then consider the triangle $R''F\xymatrix@1@=15pt{\ar[r]&} R''F'\xymatrix@1@=15pt{\ar[r]&} R''F''$ and check that the cokernel of the map $R''F'\xymatrix@1@=15pt{\ar[r]&} R''F''$ is isomorphic to ${\mathcal O}_{\rm D}elta$. Indeed, if $\varphi=(\varphi_1,\varphi_2)\colon A\xymatrix@1@=15pt{\ar[r]&} A\oplus B$ is a map in an abelian category such that the first component is an isomorphism (so $\varphi$ is an injection), the cokernel has to be isomorphic to $B$. To see this, embed the situation into the abelian category of modules over some ring using the Freyd--Mitchell theorem and define $c\colon A\oplus B\xymatrix@1@=15pt{\ar[r]&} B$ by $(a,b)\xymatrix@1@=15pt{\ar@{|->}[r]&} b-\varphi_2(\varphi_1^{-1}(a))$. It is clear that $c\circ\varphi=0$. Now given a morphism $f\colon A\oplus B\xymatrix@1@=15pt{\ar[r]&} C$ such that $f\circ\varphi=0$, define $g\colon B\xymatrix@1@=15pt{\ar[r]&} C$ by $b\xymatrix@1@=15pt{\ar@{|->}[r]&} f(0,b)$. It is then straightforward to check that $g\circ c=f$. \end{proof} \begin{proposition}\label{rfidentity} The composition $RF$ is isomorphic to the identity. \end{proposition} \begin{proof} Use the triangle $R''F\xymatrix@1@=15pt{\ar[r]&} R'F\xymatrix@1@=15pt{\ar[r]&} RF$ and the above lemma. \end{proof} We are now ready to show our first main result. \begin{proof}[Proof of Theorem \ref{hilb-semiorth}] Since $RF\simeq {\rm id}$, $F$ is fully faithful. On the other hand, $F$ has adjoints, so $F({\rm D}^{\rm b}(S))$ is an admissible subcategory of ${\rm D}^{\rm b}(S^{[n]})$. \end{proof} \begin{remark} The above shows that for any surface with $p_g=q=0$, the functor $F$ is quite far from being a spherical or a $\mathbb{P}^n$-functor. \end{remark} \begin{remark} There exist surfaces $S$ of general type with $p_g=q=0$ such that ${\rm D}^{\rm b}(S)$ contains an admissible subcategory whose Hochschild homology is trivial and whose Grothendieck group is finite or torsion, see, for example, \cite{BBKS12}. Therefore, by Theorem \ref{hilb-semiorth}, ${\rm D}^{\rm b}(S^{[n]})$ also contains such a (quasi-)phantom category. 
\end{remark} \subsection{Canonical bundles of order two and spherical functors} Consider a $d$-dimensional variety $Y$ whose canonical bundle is of order $2$ and its canonical cover $\pi\colon\tilde{Y}\xymatrix@1@=15pt{\ar[r]&} Y$ with deck transformation $\tau\colon \tilde Y\xymatrix@1@=15pt{\ar[r]&} \tilde Y$. \begin{lemma} The functor $\pi^*\colon{\rm D}^{\rm b}(Y)\xymatrix@1@=15pt{\ar[r]&} {\rm D}^{\rm b}(\tilde Y)$ is split spherical with cotwist $C=(-)\otimes \omega_Y$ and twist $T_{\pi^*}=\tau^*[1]$. \end{lemma} \begin{proof} This follows from the identities $\pi_*\pi^*\simeq (-)\otimes (\mathcal O_Y\oplus \omega_Y)$, $\pi^*\omega_Y\simeq \omega_{\tilde Y}$, and $\pi^*\pi_*\simeq {\rm id}\oplus \tau^*$. \end{proof} \begin{proposition}\label{sphericaldeg2cover} If ${\mathcal A}$ is an admissible subcategory of ${\rm D}^{\rm b}(Y)$ with embedding functor $i$, then the functor $\pi^*i\colon {\mathcal A}\xymatrix@1@=15pt{\ar[r]&}{\rm D}^{\rm b}(\tilde{Y})$ is split spherical. The associated twist $\tilde{T}_{{\mathcal A}}:=T_{\pi^*i}$ is equivariant. \end{proposition} \begin{proof} That $\pi^*i$ is split spherical follows by the previous lemma together with Lemma \ref{serreadmissible}; compare \cite[Prop.\ on p.7]{Addington}. To see that $\tilde{T}_{\mathcal A}$ is equivariant, we note that $\pi^*ii^R\pi_*\tau_*\simeq \pi^*ii^R\pi_*\simeq \tau_*\pi^*ii^R\pi_*$, so $\tau_*\tilde{T}_{\mathcal A}\simeq \tilde{T}_{\mathcal A} \tau_*$, since both are a cone of $\pi^*ii^R\pi_*\xymatrix@1@=15pt{\ar[r]&} \tau_*$. \end{proof} \begin{example} If ${\mathcal A}\simeq {\rm D}^{\rm b}({\rm Sp}ec(\mathbb{C}))$ is the category generated by an exceptional object $E$, then the twist associated to $\pi^*i$ is the spherical twist $\mathrm{ST}_A$, where $A\simeq \pi^*i(E)$ is a spherical object by \cite[Prop.\ 3.13]{Seidel-Thomas}. \end{example} \begin{remark} Let ${\rm D}^{\rm b}(Y)=\langle {\mathcal A},{\mathcal B}\rangle$ be a semi-orthogonal decomposition. By \cite[Thm.\ 11]{Addington-Aspinwall} we have $\tilde T_{{\mathcal A}}\tilde T_{{\mathcal B}}\simeq \tau^*[1]$. \end{remark} \begin{proposition}\label{descentdesc} If $\langle {\mathcal A},{\mathcal A}\otimes \omega_Y\rangle ^\perp\neq 0$ and ${\mathcal A}\simeq {\rm D}^{\rm b}(Z)$ for some smooth projective variety with $\dim Z\le d-2$, then $\tilde{T}_{\mathcal A}$ and its descents are non-standard equivalences of ${\rm D}^{\rm b}(\tilde{Y})$ and ${\rm D}^{\rm b}(Y)$, respectively. \end{proposition} \begin{proof} One of the two descents of $\tilde{T}_{\mathcal A}$ is \begin{align}\label{coneofT} T_{\mathcal A}:=\cone( i i^R\oplus \mathrm{M}_{\omega_Y}i i^R \mathrm{M}_{\omega_Y} \xrightarrow{c} {\rm id})\end{align} where the components of $c$ are given by the counits of the adjunctions. We also get \begin{align}\label{Taction}T_{\mathcal A} i\simeq \mathrm{M}_{\omega_Y} i {\mathcal S}_{\mathcal A}[-d+1]\,,\quad T_{\mathcal A} \mathrm{M}_{\omega_Y} i\simeq i S_{\mathcal A}[-d+1]\end{align} by Lemma \ref{serreadmissible}. In particular, for every $\mathrm{M}_{\omega_Z}$-invariant object $\alpha\in {\rm D}^b(Z)$, for example a skyscraper sheaf, $T_{\mathcal A} i(\alpha)=\alpha[\dim Z-d+1]$. Furthermore, $T_{\mathcal A}$ acts as the identity on $\langle {\mathcal A},{\mathcal A}\otimes \omega_Y\rangle ^\perp$. Thus, $T_{\mathcal A}$ has cohomological amplitude at least $[\dim Z-d+1,0]$. In contrast, the cohomological amplitude of a standard autoequivalence is of the form $[c,c]$ for some $c\in \mathbb{Z}$. 
\end{proof} \begin{remark} By the previous proof we have a description of the restriction of $T_{\mathcal A}$ to ${\mathcal C}:={\mathcal A}\cup {\mathcal A}\otimes \omega_Y \cup \langle{\mathcal A}, {\mathcal A}\otimes\omega_Y\rangle^\perp$. Note that ${\mathcal C}$ is a spanning class (see \cite[Sect.\ 1.3]{Huybrechts} for details regarding this notion). Indeed, if ${\rm Hom}(\beta,\gamma)=0$ for all $\gamma\in {\mathcal C}$, then, in particular, ${\rm Hom}(\beta,\alpha)=0={\rm Hom}(\beta,\alpha\otimes\omega_Y)$ for all $\alpha\in {\mathcal A}$. By Serre duality, $\beta\in\langle{\mathcal A}, {\mathcal A}\otimes\omega_Y\rangle^\perp\subset {\mathcal C}$, hence $\beta\simeq 0$. Similarly, one proves that ${\rm Hom}(\gamma,\beta)=0$ implies that $\beta\simeq 0$. \end{remark} \begin{remark}\label{twistrel} As in the case of spherical or $\mathbb{P}^n$-twists (see \cite[Lem.\ 2.3]{Krug1}), we have for any $\Psi\in {\rm Aut}({\rm D}^{\rm b}(Y))$ the relation $\Psi T_{\mathcal A}\simeq T_{\Psi({\mathcal A})}\Psi$. For this one uses the cone description of $T_{\mathcal A}$ given by Equation (\ref{coneofT}) and the fact that the embedding functor of $\Psi({\mathcal A})$ is $\Psi i$. \end{remark} \begin{remark} More generally, $\pi^*$ is a $\mathbb{P}^{n-1}$-functor if $Y$ has torsion canonical bundle of order $n\geq 2$. But the analogue of Proposition \ref{sphericaldeg2cover} does not hold for $n\ge 3$, that is, in general, for a fully faithful admissible embedding $i\colon {\mathcal A}\xymatrix@1@=15pt{\ar[r]&} {\rm D}^{\rm b}(Y)$ the composition $\pi^*i$ is not a $\mathbb{P}^{n-1}$-functor. The reason is that Lemma \ref{serreadmissible} does not generalise to powers of the Serre functor, that is, in general, ${\mathcal S}^k_{\mathcal A}\not\simeq i^R{\mathcal S}^k_{\mathcal T} i$ for $k\geq 2$. \end{remark} \subsection{Application to Enriques surfaces} In the following we want to apply Proposition \ref{sphericaldeg2cover} to the Hilbert scheme of points on an Enriques surface and investigate the relation of the associated twist with some known autoequivalences. For this we first need the following statement. \begin{lemma}\label{rank} Let $S$ be any surface and let $\xi\in S^{[n]}$ be a point representing $n$ pairwise distinct points on $S$. Then $FR(k(\xi))$ has rank $\chi-2n$ where $\chi:=\chi(\omega_S)=\chi(\mathcal O_S)$. \end{lemma} \begin{proof} We will follow the computations in \cite[Lem.\ 5.7]{Krug1}. So let $\xi \in S^{[n]}$ correspond to $n$ pairwise distinct points and write $\alpha=k(\xi)$. Recall that $F'$ has kernel ${\mathcal O}_{S\times S^{[n]}}$ and $F''$ has kernel ${\mathcal O}_{{\mathcal Z}_n}$. The kernels of the right adjoints $R'$ and $R''$ are then given by $p_S^*\omega_S[2]$ and ${\mathcal O}_{{\mathcal Z}_n}^\vee\otimes p_S^*\omega_S[2]$, respectively. Hence, $R'(\alpha)=\omega_S[2]$ and $R''(\alpha)\simeq {\mathcal O}_\xi\otimes\omega_S\simeq{\mathcal O}_\xi$, since outside the locus of points representing non-reduced subschemes of $S$, ${\mathcal O}_{{\mathcal Z}_n}^\vee$ is a line bundle shifted into degree 2. We have $F'=\Ho^*(-)\otimes \mathcal O_{S^{[n]}}$. It follows that $\rk F'(\omega_S)=\chi$ and $\rk F'(\mathcal O_\xi)=n$. Since ${\mathcal Z}_n$ is flat and finite of degree $n$ over $S^{[n]}$, we have $\rk F''(\omega_S)=n$ and $\rk F''(\mathcal O_\xi)=0$. Using that the rank is compatible with exact triangles, we get \begin{align*} \rk FR(\alpha)&=\rk F'R'(\alpha)-\rk F'R''(\alpha)-\rk F''R'(\alpha)+\rk F''R''(\alpha)\\&=\chi -n-n+0=\chi-2n. 
\end{align*} \end{proof} \begin{corollary}\label{frsupport} Let $S$ be a surface with $p_g=q=0$. Then the support of $FR(k(\xi))$ is $S^{[n]}$.\hspace*{\fill}$\Box$ \end{corollary} \begin{thm}\label{maintheorem} Let $X$ be an Enriques surface, $F\colon {\rm D}^{\rm b}(X)\xymatrix@1@=15pt{\ar[r]&} {\rm D}^{\rm b}(X^{[n]})$ the FM-transform induced by the ideal sheaf of the universal family, $\mathrm{CY}_n$ the canonical cover of $X^{[n]}$ and ${\mathcal A}\subset {\rm D}^{\rm b}(X)$ an admissible subcategory with embedding functor $i$. Then $\pi^*Fi$ is a split spherical functor whose induced twist is equivariant for all $n\geq 2$. If ${\mathcal A}={\rm D}^{\rm b}(X)$, then the twist $\widetilde{T}=\tilde{T}_{{\rm D}^{\rm b}(X)}$ associated to $\widetilde{F}=\pi^*F$ is a non-standard autoequivalence of $\mathrm{CY}_n$. Furthermore, it is not a spherical twist. \end{thm} \begin{proof} The first part is just an application of Proposition \ref{sphericaldeg2cover} together with Theorem \ref{hilb-semiorth}, so let us investigate $\tilde{T}$. By Corollary \ref{frsupport}, $FR(k(\xi))$ is supported on all of $X^{[n]}$, hence the same holds for the image under $\tilde{F}\tilde{R}$ of a skyscraper sheaf of a point on $\mathrm{CY}_n$ mapping to $\xi$. Using the triangle defining $\tilde{T}$ and the fact that for any triangle $\alpha\xymatrix@1@=15pt{\ar[r]&} \beta\xymatrix@1@=15pt{\ar[r]&} \gamma$ in ${\rm D}^{\rm b}(Z)$, we have $\operatorname{Supp}(\alpha)\subset \operatorname{Supp}(\beta)\cup\operatorname{Supp}(\gamma)$, we conclude that $\tilde{T}$ does not respect the dimension of the support. Therefore, $\tilde{T}$ cannot be a standard autoequivalence. Now, if $A$ is a spherical object in ${\rm D}^{\rm b}(\mathrm{CY}_n)$, then the spherical twist $\mathrm{ST}_A$ acts as identity on $A^\bot$ and $\mathrm{ST}_A(A)\simeq A[1-2n]$. The discussion on p.\ 9 in \cite{Addington} shows that $\widetilde{T}\widetilde{F}\simeq\widetilde{F}C[1]\simeq\widetilde{F}\mathrm{M}_{\omega_X}[3-2n]$. If $\widetilde{T}$ were isomorphic to $\mathrm{ST}_A$ for some $A$, then, by Proposition \ref{prop:Addingtongeneralised}, $\widetilde{F}(\alpha)$ would have to be orthogonal to the spanning class $A\cup A^\bot$ (see \cite[Prop.\ 8.6]{Huybrechts} for details about this spanning class), hence zero for all $\alpha\in {\rm D}^{\rm b}(X)$. But clearly $\widetilde{F}$ is not trivial, a contradiction. \end{proof} \begin{remark}\label{kerR} The above shows that there are objects which $\tilde{T}$ shifts by $[3-2n]$. On the other hand, we know that $\tilde{T}$ acts as identity on $\ker\widetilde{R}$. However, finding a non-trivial element in $\ker\widetilde{R}$ seems to be difficult. \end{remark} \begin{remark} The functor $\widehat{F}\colon {\rm D}^{\rm b}(\tilde{X})\xymatrix@1@=15pt{\ar[r]&} {\rm D}^{\rm b}(X^{[n]})$ defined as $F \pi_{X*}$ is not spherical. Note that $\pi_X^!=\pi_X^*$ in this case, so $\widehat{R} \widehat{F}\simeq \pi_X^* \pi_{X*}\simeq {\rm id}\oplus \tau_X^*$ and $C\simeq \tau_X^*$ is an autoequivalence of ${\rm D}^{\rm b}(\tilde{X})$. But the condition $\widehat{F} {\mathcal S}_{\tilde{X}}\simeq {\mathcal S}_{X^{[n]}} \widehat{F} C$ is not satisfied. Indeed, $\widehat{F} {\mathcal S}_{\tilde{X}}(\alpha)\simeq F(\pi_*\alpha)$ and ${\mathcal S}_{X^{[n]}} \widehat{F} C(\alpha)$ are non-isomorphic objects for $\alpha\in {\rm D}^b(\tilde X)$ since their non-vanishing cohomologies lie in different degrees. 
Furthermore, the functor $\overline{F}\colon {\rm D}^{\rm b}(\tilde{X})\xymatrix@1@=15pt{\ar[r]&} {\rm D}^{\rm b}(\mathrm{CY}_n)$ defined as $\pi^* F \pi_{X*}$ is also not spherical. Indeed, $\overline{R}=\pi_X^* R\pi_*$, hence \begin{align*} \overline{R}\,\overline{F}&\simeq \pi_X^*\tilde{R}\tilde{F}\pi_{X*}\simeq \pi_X^*\pi_{X*}\oplus \pi_X^*\pi_{X*}[2-2n]\\ &\simeq {\rm id}\oplus \tau_X^*\oplus[2-2n]\oplus \tau_X^*[2-2n]. \end{align*} \end{remark} We can say something about the autoequivalences on $X^{[n]}$ the twist $\tilde{T}$ descends to. \begin{proposition} Let $T'\in {\rm Aut}({\rm D}^{\rm b}(X^{[n]}))$ be a descent of $\tilde{T}$. Then $T'$ is not contained in the subgroup of ${\rm Aut}({\rm D}^{\rm b}(X^{[n]}))$ generated by ${\rm Aut}^{\rm{st}}({\rm D}^{\rm b}(X^{[n]}))$ and the equivalence $P$ arising from the $\mathbb{P}^{n-1}$-functor constructed in \cite{Krug1}. \end{proposition} \begin{proof} The twist $P$ is rank-preserving since all the objects in the image of the corresponding $\mathbb{P}^{n-1}$-functor are supported on a proper subset of $X^{[n]}$; see \cite[Rem.\ 4.7]{Krug1}. The same holds for every standard autoequivalence (up to the sign -1 occurring for odd shifts). But by Lemma \ref{rank} and Equation (\ref{coneofT}) we have $\rk(T(k(\xi))=4n-2$, where $T=T_{{\rm D}^{\rm b}(X)}$. Since the other descent is given by $\mathrm{M}_{\omega_{X^{[n]}}}T$, the claim follows. \end{proof} \begin{lemma}\label{notwist} There is no $0\neq \alpha\in {\rm D}^{\rm b}(X^{[n]})$ such that a descent $T'$ of $\tilde{T}$ satisfies $T'(\alpha)=\alpha[\ell]$ or $T'(\alpha)=\alpha\otimes \omega_{X^{[n]}}[\ell]$ for $\ell\notin\{0,3-2n\}$. There is also no $0\neq\tilde \alpha\in {\rm D}^{\rm b}(\mathrm{CY}_n)$ such that $\tilde T(\tilde \alpha)=\tilde \alpha[\ell]$ for $\ell\notin\{0,3-2n\}$. \end{lemma} \begin{proof} It is sufficient to consider the descent $T$ described in Proposition \ref{descentdesc}. Indeed, the other descent is given by $\mathrm{M}_{\omega_{X^{[n]}}}T$. Let $\alpha\in {\rm D}^b(X^{[n]})$ with $T(\alpha)=\alpha[\ell]$, $\ell\notin\{0,3-2n\}$. By Equation (\ref{Taction}) and Proposition \ref{prop:Addingtongeneralised} (here we use that $l\neq 3-2n$) we get that $\alpha\in F(\beta)^\perp$ and $F(\beta)\in \alpha^\perp$ for all $\beta\in {\rm D}^{\rm b}(X)$. Hence $\alpha\in {\mathcal A}^\perp$ and $\alpha\in {}^\bot{\mathcal A}$, where ${\mathcal A}=F({\rm D}^{\rm b}(X))$. By Serre duality also $\alpha\in ({\mathcal A}\otimes \omega_{X^{[n]}})^\perp$ and ${}^\bot({\mathcal A}\otimes \omega_{X^{[n]}})$. The object $\alpha$ is also orthogonal to $\langle {\mathcal A},{\mathcal A}\otimes \omega_{X^{[n]}}\rangle^\perp$ on which $T$ acts trivially. To see this, use again Proposition \ref{prop:Addingtongeneralised}, this time with $F$ being the embedding of the subcategory $\langle {\mathcal A},{\mathcal A}\otimes \omega_{X^{[n]}}\rangle^\perp$. Therefore, $\alpha$ is orthogonal to the spanning class ${\mathcal A}\cup {\mathcal A}\otimes\omega_{X^{[n]}}\cup \langle {\mathcal A},{\mathcal A}\otimes \omega_{X^{[n]}}\rangle^\perp$, hence zero. The proof in the case that $T(\alpha)=\alpha\otimes \omega_{X^{[n]}}[\ell]$ is similar and the statement about $\tilde{\alpha}$ follows by applying $\pi_*$. \end{proof} Next, we want to compare our equivalences to the ones constructed in \cite{Ploog-Sosna}. For this we need to recall some facts which will also be useful in the next section. Let $Z$ be a smooth projective variety and $n\ge 2$. 
We consider the cartesian power $Z^n$ equipped with the natural $\mathfrak S_n$-action given by permuting the factors. For $E\in {\rm D}^{\rm b}(Z)$ an exceptional object, the box product $E^{\boxtimes n}$ is again exceptional, since by the Künneth formula \[ {\rm Ext}^*_{{\rm D}^{\rm b}_{\mathfrak S_n}(Z^n)}(E^{\boxtimes n},E^{\boxtimes n})\simeq {\rm Ext}^*_{Z^n}(E^{\boxtimes n},E^{\boxtimes n})^{\mathfrak S_n}\simeq S^n{\rm Ext}^*_Z(E,E)\simeq \mathbb{C}[0]. \] More generally, for $\rho$ an irreducible representation of $\mathfrak S_n$, the object $E^{\boxtimes n}\otimes \rho$ is exceptional and the objects obtained this way are pairwise orthogonal, i.e. ${\rm Ext}^*(E^{\boxtimes n}\otimes \rho,E^{\boxtimes n}\otimes \rho')=0$ for $\rho\neq\rho'$. The case of greatest interest is when $Z$ is a surface. Then there is the Bridgeland--King--Reid--Haiman equivalence (see \cite{BKR} and \cite{Hai}) \[\Phi\colon {\rm D}^{\rm b}(Z^{[n]})\xrightarrow{\simeq} {\rm D}^{\rm b}_{\mathfrak S_n}(Z^n).\] In particular, when $Z=X$ is an Enriques surface, we get further induced autoequivalences of $X^{[n]}$ and its cover $\mathrm{CY}_n$ using Proposition \ref{sphericaldeg2cover}. Let now $Z=X$ be an Enriques surface and assume that there exists an object $0\neq F\in \langle E, E\otimes \omega_X\rangle ^\perp$. This is equivalent to $\tilde E^\perp$ being non-trivial, where $\tilde E:=\pi^*E$ is the corresponding spherical object on the K3 cover of $X$. In this case $F^{\boxtimes n}$ is orthogonal to every $E^{\boxtimes n}\otimes \rho$ and $\omega_{X^n}\otimes E^{\boxtimes n}\otimes \rho$. Thus, by Proposition \ref{descentdesc}, all the associated twists $T_\rho:=T_{E^{\boxtimes n}\otimes \rho}\in{\rm Aut}( {\rm D}^{\rm b}(X^{[n]}))$ and $\tilde T_\rho:=\tilde T_{E^{\boxtimes n}\otimes \rho}\in{\rm Aut}({\rm D}^{\rm b}(\mathrm{CY}_n))$ are non-standard. In our situation there is also another induced non-standard autoequivalence, namely $\widetilde{T_{E}^{\boxtimes n}}\colon {\rm D}^{\rm b}(\mathrm{CY}_n)\xymatrix@1@=15pt{\ar[r]&} {\rm D}^{\rm b}(\mathrm{CY}_n)$; see \cite{Ploog-Sosna}. It is given as a lift of $T_E^{\boxtimes n}=\FM_{{\mathcal P}^{\boxtimes n}}\in {\rm Aut}({\rm D}^{\rm b}(X^{[n]}))$, where ${\mathcal P}\in{\rm D}^{\rm b}(X\times X)$ denotes the Fourier--Mukai kernel of $T_E$; see \cite{Ploog}. Recall that the isomorphism classes of irreducible representations of $\mathfrak S_n$ are in bijection with the set $P(n)$ of partitions of $n$. For an exceptional object $E\in {\rm D}^{\rm b}(X)$ we set \begin{align*}&G_E:=\langle[1], T_E^{\boxtimes n},T_\rho:\,\rho\in P(n)\rangle\subset{\rm Aut}({\rm D}^{\rm b}(X^{[n]})), \\ & \tilde G_E:=\langle[1], \widetilde{T_E^{\boxtimes n}},\tilde T_\rho:\,\rho\in P(n)\rangle\subset{\rm Aut}({\rm D}^{\rm b}(\mathrm{CY}_n)). \end{align*} \begin{proposition}\label{comparisontoboxes} Assume that $E\in {\rm D}^{\rm b}(X)$ is exceptional and $0\neq F\in\langle E, E\otimes \omega_X\rangle ^\perp$. We have $G_E\simeq \mathbb{Z}^{p(n)+2}\simeq \tilde G_E$, where $p(n)=|P(n)|$. Furthermore, $T'\notin G_E$ and $\tilde T\notin \tilde G_E$, where $T'$ is a descent of $\tilde{T}$. \end{proposition} \begin{proof} For $0\le k\le n$ we consider the objects \[E^k\cdot F^{n-k}:=\mathcal Inf_{\mathfrak S_k\times \mathfrak S_{n-k}}^{\mathfrak S_n}(E^{\boxtimes k}\boxtimes F^{\boxtimes (n-k)})\in {\rm D}^{\rm b}_{\mathfrak S_n}(X^n)\,.\] We have $T_E(E)=E[-1]$ and $T_E(F)=F$. 
It follows that \begin{align}\label{1} T_E^{\boxtimes n}\colon E^k\cdot F^{n-k}\xymatrix@1@=15pt{\ar@{|->}[r]&} E^k\cdot F^{n-k}[-k]\,,\quad E^{\boxtimes n}\otimes \rho\xymatrix@1@=15pt{\ar@{|->}[r]&} E^{\boxtimes n}\otimes \rho[-n]. \end{align} By Remark \ref{twistrel}, the latter shows that $T_E^{\boxtimes n}$ commutes with all the $T_\rho$. Note that for $k<n$ and $\rho\not\simeq \rho'$ we have \[{\rm Ext}^*(E^{\boxtimes n}\otimes \rho,E^k\cdot F^{n-k})=0={\rm Ext}^*(E^{\boxtimes n}\otimes \rho,E^{\boxtimes n}\otimes \rho')\,.\] Thus, by Equations (\ref{coneofT}) and (\ref{Taction}), and Remark \ref{twistrel} \begin{align}\label{2}T_\rho\colon &E^k\cdot F^{n-k}\xymatrix@1@=15pt{\ar@{|->}[r]&} E^k\cdot F^{n-k} \text{ for $k<n$}\,,\\\notag T_\rho\colon& E^{\boxtimes n}\otimes \rho'\xymatrix@1@=15pt{\ar@{|->}[r]&} \begin{cases} E^{\boxtimes n}\otimes \rho' &\text{ if $\rho\neq\rho'$}\\ \omega_{X^n}\otimes E^{\boxtimes n}\otimes \rho'[-(2n-1)] &\text{ if $\rho=\rho'$}\\ \end{cases} \end{align} which, in particular, shows that the $T_\rho$ pairwise commute. Now consider $\Psi= (T_E^{\boxtimes n})^a\circ \prod_{\rho} T_\rho^{b_\rho}[c]\in G_E$ and assume that $\Psi\simeq {\rm id}$. We have $\Psi(F^{\boxtimes n})=F^{\boxtimes n}[c]$ and thus $c=0$. It follows that $\Psi(E^1\cdot F^{n-1})=E^1\cdot F^{n-1}[-a]$ and thus $a=0$. Finally, $\Psi(E^{\boxtimes n}\otimes \rho)=\omega_{X^n}\otimes E^{\boxtimes n}\otimes \rho[-b_\rho(2n-1)]$ shows that $b_\rho=0$ for all $\rho$. The assertion that $T'\notin G_E$ follows in a similar way using Lemma \ref{notwist}. The identities (\ref{1}) and (\ref{2}) lift to identities in ${\rm D}^{\rm b}(\mathrm{CY}_n)$. Therefore, one can analogously show that $\tilde G_E\simeq \mathbb{Z}^{p(n)+2}$ and that $\tilde T\notin \tilde G_E$. \end{proof} \begin{remark} Let $\mathfrak a$ be the alternating representation, i.e. the one-dimensional representation on which $\mathfrak S_n$ acts by multiplication by $\sgn$. By Remark \ref{twistrel} we have $T_\mathfrak a\simeq \mathrm{M}_\mathfrak a\circ T_{E^{\boxtimes n}}\circ \mathrm{M}_\mathfrak a$ where $\mathrm{M}_\mathfrak a\in {\rm Aut}({\rm D}^{\rm b}_{\mathfrak S_n}(X^n))$ is the involution $(-)\otimes \mathfrak a$. Note that for higher dimensional irreducible representations $\rho$ there is no such relation since $\mathrm{M}_\rho$ is not an equivalence. \end{remark} \section{Exceptional sequences on $X^{[n]}$}\label{exceptionalsequences} Let $Z$ be a smooth projective variety and $n\ge 2$. In this section we will construct exceptional sequences in the equivariant derived category ${\rm D}^{\rm b}_{\mathfrak S_n}(Z^n)$ out of exceptional sequences in ${\rm D}^{\rm b}(Z)$. \begin{proposition}[{\cite[Cor.\ 1]{Sam}}]\label{Sam} Let $Z$ and $Z'$ be smooth projective varieties with full exceptional sequences $E_1,\dots,E_k$ and $F_1,\dots, F_\ell$ respectively. Then \[E_1\boxtimes F_1, E_1\boxtimes F_2,\dots ,E_1\boxtimes F_\ell, E_2\boxtimes F_1,\dots,E_k\boxtimes F_\ell\] is a full exceptional sequence of ${\rm D}^{\rm b}(Z\times Z')$. \end{proposition} Let now $E_1,\dots,E_k$ be an exceptional sequence on $Z$. We consider for every multi-index $\alpha=(\alpha_1,\dots,\alpha_n)\in [k]^n:=\{1,\dots,k\}^n$ the object \[E(\alpha):=E_{\alpha_1}\boxtimes \dots \boxtimes E_{\alpha_n}\in {\rm D}^{\rm b}(Z^n).\] By Proposition \ref{Sam}, these objects form a full exceptional sequence of ${\rm D}^{\rm b}(Z^n)$ when considering them with the ordering given by the lexicographical order $<_{\text{lex}}$ on $[k]^n$. 
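For instance, in the simplest case $n=k=2$ the lexicographically ordered sequence reads \[E_1\boxtimes E_1,\quad E_1\boxtimes E_2,\quad E_2\boxtimes E_1,\quad E_2\boxtimes E_2;\] the $\mathfrak S_2$-action fixes the two outer objects and interchanges $E_1\boxtimes E_2$ and $E_2\boxtimes E_1$, and it is this behaviour that makes the reordering introduced below necessary.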
\begin{remark}\label{ext} Let $\alpha,\beta\in [k]^n$. By the K\"unneth formula \begin{equation} \label{ex}{\rm Ext}^*(E(\alpha),E(\beta))={\rm Ext}^*(E_{\alpha_1},E_{\beta_1})\otimes \dots\otimes {\rm Ext}^*(E_{\alpha_n},E_{\beta_n}). \end{equation} Thus, we have ${\rm Ext}^*(E(\alpha),E(\beta))=0$, whenever $\alpha_i>\beta_i$ for some $i\in [n]$. \end{remark} \begin{thm}[{\cite{Ela}}]\label{Ela2} Let $G$ be a finite group acting on a variety $M$. Consider a (full) exceptional sequence of ${\rm D}^{\rm b}(M)$ of the form \[E_1^{(1)},\dots,E^{(1)}_{k_1},E^{(2)}_{1},\dots,E^{(2)}_{k_2},\dots, E^{(\ell)}_{1},\dots,E^{(\ell)}_{k_\ell}\] such that $G$ acts transitively on every block $E^{(i)}_1,\dots,E^{(i)}_{k_i}$, i.e.\ for every $i\in [\ell]$ and every pair $a,b\in [k_i]$ there is a $g\in G$ such that $g^*E^{(i)}_a\simeq E^{(i)}_b$ (and conversely for every $g\in G$ and every $a\in[k_i]$ there is a $b\in [k_i]$ such that $g^*E^{(i)}_a\simeq E^{(i)}_b$). Let $H_i:={\rm Stab}_G(E^{(i)}_1)$ and assume that $E^{(i)}_1$ carries an $H_i$-linearisation, i.e. there exists an ${\mathcal E}^{(i)}\in {\rm D}^{\rm b}_{H_i}(M)$ such that $\Res({\mathcal E}^{(i)})=E^{(i)}_1$. Then \begin{align*} &\mathcal Inf_{H_1}^G({\mathcal E}^{(1)}\otimes V^{(1)}_1),\dots,\mathcal Inf_{H_1}^G({\mathcal E}^{(1)}\otimes V^{(1)}_{m_1}),\dots,\\ &\mathcal Inf_{H_\ell}^G({\mathcal E}^{(\ell)}\otimes V^{(\ell)}_1),\dots,\mathcal Inf_{H_\ell}^G({\mathcal E}^{(\ell)}\otimes V^{(\ell)}_{m_\ell}) \end{align*} is a (full) exceptional sequence of ${\rm D}^{\rm b}_G(M)$ with $V_1^{(i)},\dots, V_{m_i}^{(i)}$ being all the irreducible representations of $H_i$. \end{thm} \begin{proof} In \cite{Ela} the Theorem is only stated in the case of full exceptional sequences. But one can easily infer from the proof that non-full exceptional sequences also induce non-full exceptional sequences. \end{proof} In order to apply Theorem \ref{Ela2} we have to reorder the sequence consisting of the $E(\alpha)$ as follows. For a multi-index $\alpha\in[k]^n$ we denote the unique non-decreasing representative of its $\mathfrak S_n$-orbit by $\nd(\alpha)$. Then we define a total order $\lhd$ on $[k]^n$ by \[\alpha\lhd \beta:\iff \begin{cases} \nd(\alpha)<_{\mathrm{lex}} \nd(\beta) \quad\text{or}\\ \nd(\alpha) = \nd(\beta)\text{ and } \alpha<_{\mathrm{lex}} \beta \end{cases} \] Now the group $\mathfrak S_n$ acts transitively on the blocks consisting of all $E(\alpha)$ with fixed $\nd(\alpha)$ because $\sigma^*E(\alpha)\simeq E(\sigma^{-1}\cdot \alpha)$. Furthermore, every $E(\alpha)$ has a canonical ${\rm Stab}(\alpha)$-linearisation given by permutation of the factors in the box product. It remains to show that $(E(\alpha))_\alpha$ with the ordering given by $\lhd$ is still an exceptional sequence. This follows by Remark \ref{ext} and the last item of the following lemma. \begin{lemma}\label{combi} Let $\alpha,\beta\in[k]^n$. \begin{enumerate} \item Let $\nd(\alpha)=\nd(\beta)$ but $\alpha\neq \beta$. Then there exists an $i\in [n]$ such that $\alpha_i<\beta_i$. \item Let $\sigma\in \mathfrak S_n$. Then there exists an $i\in [n]$ such that $\alpha_i<\beta_i$ if and only if there exists a $j\in[n]$ such that $(\sigma\cdot\alpha)_j<(\sigma\cdot \beta)_j$. \item If $\nd(\alpha)<_{\mathrm{lex}}\nd(\beta)$, then there exists an $i\in [n]$ such that $\alpha_i<\beta_i$. \item Let $\alpha\lhd \beta$. Then there exists an $i\in [n]$ such that $\alpha_i<\beta_i$. 
\end{enumerate} \end{lemma} \begin{proof} If $\nd(\alpha)=\nd(\beta)$, we have $\alpha_1+\dots+\alpha_n=\beta_1+\dots+\beta_n$. This shows (1). By setting $j=\sigma(i)$ we obtain (2). In order to show (3) we may now assume using (2) that $\alpha=\nd(\alpha)$. Let $\sigma\in \mathfrak S_n$ be such that $\beta=\sigma\cdot\nd(\beta)$ and let $m:=\min\{\ell\in[n]\mid \nd(\alpha)_\ell\ne \nd(\beta)_\ell\}$. Then $\nd(\alpha)_m<\nd(\beta)_m$. If $\sigma(m)\le m$, we have $\alpha_{\sigma(m)}=\nd(\alpha)_{\sigma(m)}\le \nd(\alpha)_m<\nd(\beta)_m=\beta_{\sigma(m)}$. If $\sigma(m)>m$, there exists $\ell>m$ such that $\sigma(\ell)\le m$. This yields \[\alpha_{\sigma(\ell)}=\nd(\alpha)_{\sigma(\ell)}\le\nd(\alpha)_m<\nd(\beta)_m\le\nd(\beta)_\ell=\beta_{\sigma(\ell)}.\] Finally, (4) follows from (1) and (3). \end{proof} We summarize all of the above in the following \begin{proposition} If $\alpha\in[k]^n$ is a non-decreasing multi-index and $V_i^{(\alpha)}$ is an irreducible representation of $H_\alpha={\rm Stab}(\alpha)$, then the collection of objects ${\mathcal E}(\alpha,V_i^{(\alpha)}):=\mathcal Inf_{H_\alpha}^{\mathfrak S_n}\left(E(\alpha)\otimes V^{(\alpha)}_i\right)$ forms an exceptional sequence of ${\rm D}^{\rm b}_{\mathfrak S_n}(Z^n)$. The induced sequence is full if and only if the original sequence on ${\rm D}^{\rm b}(Z)$ is full.\hspace*{\fill}$\Box$ \end{proposition} \begin{remark} An exceptional sequence is called \textit{strong} if all the higher extension groups between its members vanish. Using Equation (\ref{ex}) one can show that $({\mathcal E}(\alpha,V^{(\alpha)}_i))_{\alpha,i}$ is strong if and only if $(E_\ell)_\ell$ is strong. Thus, in the case that the full exceptional sequence $E_1,\dots,E_k$ of ${\rm D}^{\rm b}(Z)$ is strong, there is an equivalence of triangulated categories ${\rm D}^{\rm b}_{\mathfrak S_n}(Z^n)\simeq {\rm D}^{\rm b}(\text{Mod}-\text{End}_{\mathfrak S_n}({\mathcal M}))$ where ${\mathcal M}:=\bigoplus_{\alpha,i} {\mathcal E}(\alpha,V^{(\alpha)}_i)$; see \cite{Bon}. \end{remark} \begin{remark} Using \cite{Ela2} one can also construct semi-orthogonal decompositions of ${\rm D}^{\rm b}_{\mathfrak S_n}(Z^n)$ out of general semi-orthogonal decompositions of ${\rm D}^{\rm b}(Z)$ in a similar way. Furthermore, if $T\in {\rm D}^{\rm b}(Z)$ is a tilting object, so is $\oplus_{\rho\in P(n)}(T^{\boxtimes n}\otimes \rho)\in {\rm D}^{\rm b}_{\mathfrak S_n}(Z^n)$. \end{remark} \begin{remark} Any Enriques surface has a completely orthogonal exceptional sequence $(E_i)_i$ of length $10$; see \cite{Zube}. The induced exceptional sequence in ${\rm D}^{\rm b}_{\mathfrak S_n}(X^n)\simeq {\rm D}^{\rm b}(X^{[n]})$ is again completely orthogonal and the same holds for the corresponding sequence of spherical objects on $CY_n$. By \cite[Cor.\ 2.4]{Krug1} it follows that the spherical twists give an embedding $\mathbb{Z}^{\ell(10,n)}\xymatrix@1@=15pt{\ar@{^(->}[r]&} {\rm Aut}({\rm D}^{\rm b}(\mathrm{CY}_n))$ where $\ell(10,n)$ denotes the length of the induced sequence. By arguments similar to those in the proof of Proposition \ref{comparisontoboxes} we also get an embedding $\mathbb{Z}^{\ell(10,n)}\xymatrix@1@=15pt{\ar@{^(->}[r]&} {\rm Aut}({\rm D}^{\rm b}(X^{[n]}))$. 
\end{remark} \begin{remark} Let $Z=X$ be an Enriques surface and let \[\tilde {\mathcal E}(\alpha):=\pi^*({\mathcal E}(\alpha,{\rm tr}iv))\in {\rm D}^{\rm b}(\mathrm{CY}_n).\] In the case that $\tilde E_1,\dots,\tilde E_k$ is an $A_k$-sequence of spherical objects on ${\rm D}^{\rm b}(\tilde X)$, the sequence \[\tilde {\mathcal E}(1,\dots,1),\tilde{\mathcal E}(1,\dots,2),\dots,\tilde {\mathcal E}(2,\dots,2),\dots,\tilde {\mathcal E}(k,\dots,k)\] is a $A_{(n-1)k+1}$ sequence of spherical objects in ${\rm D}^{\rm b}(\mathrm{CY}_n)$. Thus, there is an embedding $B_{(n-1)k+1}\xymatrix@1@=15pt{\ar@{^(->}[r]&} {\rm Aut}({\rm D}^{\rm b}(\mathrm{CY}_n))$ of the braid group; see \cite{Seidel-Thomas}. \end{remark} \section{The truncated universal ideal functor}\label{truncated} The arguments in this section follow those of \cite[Sect.\ 6]{Meachan}. The key new observation is that the functor $\hat F:=\Phi F\colon {\rm D}^{\rm b}(Z)\xymatrix@1@=15pt{\ar[r]&} {\rm D}^{\rm b}_{\mathfrak S_n}(Z^n)$ for $Z=S$ a surface can be truncated to a functor $G$ in such a way that $G^RG\simeq \hat F^R\hat F$ and that this functor generalises in a nicer way to varieties of arbitrary dimension than $\hat F$ does. If $Z=S$ is a surface, then $\hat F''=\Phi\FM_{{\mathcal O}_{{\mathcal Z}_n}}= \FM_{\mathcal C^\bullet}$, where $\mathcal C^\bullet$ is the complex concentrated in degrees $0,\dots,n-1$ given by \[0\xymatrix@1@=15pt{\ar[r]&} \bigoplus\limits_{i=0}^n\mathcal O_{D_i}\xymatrix@1@=15pt{\ar[r]&} \bigoplus\limits_{|I|=2}\mathcal O_{D_I}\otimes \mathfrak a_I\xymatrix@1@=15pt{\ar[r]&} \bigoplus\limits_{|I|=3}{\mathcal O}_{D_I}\otimes \mathfrak a_I \dots \xymatrix@1@=15pt{\ar[r]&} \mathcal O_{D_{[n]}}\otimes \mathfrak a_{[n]}\xymatrix@1@=15pt{\ar[r]&} 0\,;\] see \cite{Sca1}. For $I\subset [n]:=\{1,\dots,n\}$, the reduced subvariety $D_I\subset S\times S^n$ is given by $D_I=\{(y,x_1,\dots,x_n)\mid y=x_i\,\forall\, i\in I\}$ and $\mathfrak a_I$ is the alternating representation of $\mathfrak S_I$. Furthermore, $\hat F'=\Phi \FM_{\mathcal O_{S\times S^{[n]}}}\simeq \FM_{\mathcal O_{S\times S^n}}$ and the induced map $\hat F'\xymatrix@1@=15pt{\ar[r]&} \hat F''$ is given by the morphism of kernels $\mathcal O_{S\times S^n}\xymatrix@1@=15pt{\ar[r]&} \mathcal C^0=\oplus_i\mathcal O_{D_i}$ whose components are given by restriction of sections. We set $G'':=\FM_{\mathcal C^0}$. As explained in \cite[Sect.\ 6]{Meachan}, the main steps in the computation of the formulas of \cite{Sca1} and \cite{Krug} can be translated into the following statement \begin{align}\label{cbul}\hat F'^R\hat F''\simeq \hat F'^RG'',\quad \hat F''^R F'\simeq G''^RF',\quad \hat F''^R \hat F''\simeq G''^RG''.\end{align} \begin{definition} Let $Z$ be a smooth projective variety of arbitrary dimension $d$ and $n\ge 2$. The \textit{truncated universal ideal functor} $G=\FM_{\mathcal G}\colon {\rm D}^{\rm b}(Z)\xymatrix@1@=15pt{\ar[r]&} {\rm D}^{\rm b}_{\mathfrak S_n}(Z^n)$ is the Fourier--Mukai transform whose kernel is the complex \[{\mathcal G}:={\mathcal G}^\bullet:=(0\xymatrix@1@=15pt{\ar[r]&} \mathcal O_{Z\times Z^n}\xymatrix@1@=15pt{\ar[r]&} \oplus_{i=1}^n\mathcal O_{D_i}\xymatrix@1@=15pt{\ar[r]&} 0)\in {\rm D}^{\rm b}_{\mathfrak S_n}(Z\times Z^n).\] \end{definition} Thus, there is the triangle of FM transforms $G\xymatrix@1@=15pt{\ar[r]&} G'\xymatrix@1@=15pt{\ar[r]&} G''$ with \[ G':=\FM_{{\mathcal G}'}=\hat F'=\Ho^*(X,-)\otimes \mathcal O_{X^n},\quad G''=\FM_{{\mathcal G}''}=\mathcal Inf_{\mathfrak S_{n-1}}^{\mathfrak S_n} p_{n}^* {\rm tr}iv. 
\] For $E\in {\rm D}^{\rm b}(Z)$ we have $G''(E)=\oplus_{i=1}^n p_i^*E$ (see also Subsection \ref{subsection:equivariant} for details about the inflation functor $\mathcal Inf$ and its right adjoint $\Res$). The right-adjoints are \begin{align*} G'^R=\FM_{{\mathcal G}'^R}=\Ho^*(Z^n,-)^{\mathfrak S_n}\otimes \omega_Z[d]&,\quad {\mathcal G}'^R=\mathcal O_{Z^n}\boxtimes\omega_Z [d],\\ G''^R=\FM_{{\mathcal G}''^R}=[-]^{\mathfrak S_{n-1}}\circ p_{n*}\circ \Res_{\mathfrak S_n}^{\mathfrak S_{n-1}}&,\quad {\mathcal G}''^R=\oplus_{i=0}^n \mathcal O_{D_i}.\end{align*} In the surface case Equation (\ref{cbul}) gives the following \begin{lemma}\label{surftrunc} If $Z=S$ is a surface, $\hat F^R\hat F\simeq G^RG$.\hspace*{\fill}$\Box$ \end{lemma} We compute the compositions of the kernels: \begin{align*} {\mathcal G}'^R{\mathcal G}'&=(\mathcal O_Z\boxtimes \omega_Z)\otimes S^n\Ho^*(\mathcal O_Z)[d],\\ {\mathcal G}'^R{\mathcal G}''&=(\mathcal O_Z\boxtimes \omega_Z)\otimes S^{n-1}\Ho^*(\mathcal O_Z)[d],\\ {\mathcal G}''^R{\mathcal G}'&=(\mathcal O_Z\boxtimes \mathcal O_Z)\otimes S^{n-1}\Ho^*(\mathcal O_Z),\\ {\mathcal G}''^R{\mathcal G}''&=\mathcal O_{\rm D}elta\otimes S^{n-1}\Ho^*(\mathcal O_Z)\oplus (\mathcal O_Z\boxtimes \mathcal O_Z)\otimes S^{n-2}\Ho^*(\mathcal O_Z). \end{align*} The induced map ${\mathcal G}'^R{\mathcal G}'\xymatrix@1@=15pt{\ar[r]&} {\mathcal G}'^R{\mathcal G}''$ under these isomorphisms is given by evaluation as follows. Let $(e_i)_i$ be a basis of $\Ho^*(\mathcal O_Z)={\rm Hom}(\mathcal O_Z,\mathcal O_Z[*])$. Then the component \[(\mathcal O_Z\boxtimes \omega_Z)\cdot e_{i_1}\cdots e_{i_n}[d]\xymatrix@1@=15pt{\ar[r]&} (\mathcal O_Z\boxtimes \omega_Z)\cdot e_{i_1}\cdots\hat e_{i_k}\cdots e_{i_n}[d]\] is $e_{i_k}\boxtimes {\rm id}[d]$. The component \[(\mathcal O_Z\boxtimes \mathcal O_Z)\otimes S^{n-1}\Ho^*(\mathcal O_Z)\xymatrix@1@=15pt{\ar[r]&} (\mathcal O_Z\boxtimes \mathcal O_Z)\otimes S^{n-2}\Ho^*(\mathcal O_Z)\] of ${\mathcal G}''^R{\mathcal G}'\xymatrix@1@=15pt{\ar[r]&} {\mathcal G}''^R{\mathcal G}''$ is given in the same way and the component \[(\mathcal O_Z\boxtimes \mathcal O_Z)\otimes S^{n-1}\Ho^*(\mathcal O_Z)\xymatrix@1@=15pt{\ar[r]&} \mathcal O_{\rm D}elta\otimes S^{n-1}\Ho^*(\mathcal O_Z)\] is the restriction map. The map ${\mathcal G}''^R{\mathcal G}'\xymatrix@1@=15pt{\ar[r]&} {\mathcal G}'^R{\mathcal G}'$ is given as follows. Let $(\theta_i)_i$ be the basis of $\Ho^*(\mathcal O_Z)^\vee$ dual to $(e_i)_i$. Furthermore, let $\theta_i$ correspond to the morphism $\tilde \theta_i\in {\rm Hom}(\mathcal O_Z,\omega_Z[d-*])\simeq {\rm Hom}(\mathcal O_Z,\mathcal O_Z[*])^\vee$ under Serre duality. Then the component \[(\mathcal O_Z\boxtimes \mathcal O_Z)\cdot e_{i_1}\cdots\hat e_{i_k}\cdots e_{i_n}\xymatrix@1@=15pt{\ar[r]&} (\mathcal O_Z\boxtimes \omega_Z)\cdot e_{i_1}\cdots e_{i_n}[d]\] is $\tilde \theta_{i_k}$. In the following we will use the commutative diagram \begin{align}\label{lattice}\xymatrix{ {\mathcal G}''^R{\mathcal G}\ar[r]\ar[d] & {\mathcal G}''^R{\mathcal G}' \ar[r]\ar[d] & {\mathcal G}''^R{\mathcal G}'' \ar[d]\\ {\mathcal G}'^R{\mathcal G} \ar[r]\ar[d] & {\mathcal G}'^R{\mathcal G}' \ar[r]\ar[d] & {\mathcal G}'^R{\mathcal G}'' \ar[d] \\ {\mathcal G}^R{\mathcal G} \ar[r] & {\mathcal G}^R{\mathcal G}' \ar[r] & {\mathcal G}^R{\mathcal G}'' \,. } \end{align} with exact triangles as columns and rows in order to deduce formulae for $G^RG=\FM_{{\mathcal G}^R{\mathcal G}}$. 
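As a consistency check of the formula for ${\mathcal G}'^R{\mathcal G}'$ above, one can also evaluate the corresponding functors directly: for $E\in{\rm D}^{\rm b}(Z)$, the K\"unneth formula $\Ho^*(Z^n,\mathcal O_{Z^n})\simeq \Ho^*(\mathcal O_Z)^{\otimes n}$ gives \[G'^RG'(E)\simeq \Ho^*\bigl(Z^n,\Ho^*(E)\otimes\mathcal O_{Z^n}\bigr)^{\mathfrak S_n}\otimes\omega_Z[d]\simeq \Ho^*(E)\otimes S^n\Ho^*(\mathcal O_Z)\otimes\omega_Z[d],\] since the $\mathfrak S_n$-action only permutes the factors of $Z^n$ and acts trivially on the multiplicity space $\Ho^*(E)$; this is precisely the value on $E$ of the FM-transform with kernel $(\mathcal O_Z\boxtimes\omega_Z)\otimes S^n\Ho^*(\mathcal O_Z)[d]$.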
\subsection{The case of an even dimensional Calabi--Yau variety} If $Z$ is a Calabi--Yau variety of even dimension $d$, then $\Ho^*(\mathcal O_Z)=\mathbb{C}[0]\oplus \mathbb{C}[-d]$ and $S^k\Ho^*(\mathcal O_Z)=\mathbb{C}[0]\oplus \mathbb{C}[-d]\oplus\dots\oplus \mathbb{C}[-dk]$. Let $u\in \Ho^d(\mathcal O_Z)$ be the basis vector whose dual $\theta\in {\rm Hom}(\mathcal O_Z,\mathcal O_Z[d])^\vee$ corresponds to ${\rm id}\in {\rm Hom}(\mathcal O_Z,\mathcal O_Z)$ under Serre duality. We denote the induced degree $d\ell$ basis vector of $S^k\Ho^*(\mathcal O_Z)$ by $u^\ell$. \begin{lemma} For a Calabi--Yau variety $Z$ of even dimension $d$ we have \[G^RG\simeq {\rm id}\oplus [-d]\oplus\dots \oplus [-d(n-1)]\,.\] \end{lemma} \begin{proof} By the description of the last subsection, the components $\mathcal O_{Z^2}\cdot u^\ell\xymatrix@1@=15pt{\ar[r]&} \mathcal O_{Z^2}\cdot u^{\ell}$ of ${\mathcal G}''^R{\mathcal G}'\xymatrix@1@=15pt{\ar[r]&} {\mathcal G}''^R{\mathcal G}''$ and ${\mathcal G}'^R{\mathcal G}'\xymatrix@1@=15pt{\ar[r]&} {\mathcal G}'^R{\mathcal G}''$ equal the identity. By \cite[Lem.\ 5]{Hub}, the top of diagram (\ref{lattice}) is isomorphic to \begin{align*}\xymatrix{ {\mathcal G}''^R{\mathcal G} \ar[r]\ar[d] & \mathcal O_{Z^2}[-d(n-1)] \ar[r]\ar[d] & \mathcal O_{\rm D}elta([0]\oplus[-d]\oplus\dots\oplus [-d(n-1)]) \ar[d]\\ {\mathcal G}'^R{\mathcal G} \ar[r] & \mathcal O_{Z^2}[-d(n-1)] \ar[r] & 0 } \end{align*} where the middle vertical map is the component $\mathcal O_{Z^2} \cdot u^{n-1}\xymatrix@1@=15pt{\ar[r]&} \mathcal O_{Z^2} u^n[d]$ of ${\mathcal G}''^R{\mathcal G}'\xymatrix@1@=15pt{\ar[r]&} {\mathcal G}'^R{\mathcal G}'$. By the above description it is the identity. Now the claim follows by the octahedral axiom. \end{proof} \begin{thm} If $Z$ is a Calabi--Yau variety of even dimension, then $G\colon {\rm D}^{\rm b}(Z)\xymatrix@1@=15pt{\ar[r]&} {\rm D}^{\rm b}_{\mathfrak S_n}(Z^n)$ is a $\mathbb{P}^{n-1}$-functor. \end{thm} \begin{proof} By the previous lemma, condition (1) of the definition of a $\mathbb{P}^{n-1}$-functor is satisfied. The proof that condition (2) is satisfied is analogous to the proof in the case of the non-truncated functor when $X$ is a K3 surface. Basically, one has to go through \cite[Sec.\ 2.5]{Addington} and replace $F$ by $G$, $q^*$ by $p_n^*\circ {\rm tr}iv$ and its right adjoint $q_*$ by $(-)^{\mathfrak S_{n-1}}\circ p_{n*}$, $g_*$ by $\mathcal Inf$ and its right adjoint $g^!$ by $\Res$, $\HH^*(S^{[n]})$ by $\HH^*(Z^n)^{\mathfrak S_n}$, $\Ho^*(\mathcal O_{S^{[n]}})$ by $\Ho^*(Z^n)^{\mathfrak S_n}$, 2 by $d$, and the symplectic form $\sigma$ by a generator of $\Ho^d(\mathcal O_Z)$. Very roughly, the idea is the following: $G^RG$ is identified with the direct summand $(p_n^*\circ {\rm tr}iv)^R(p_n^*\circ {\rm tr}iv)\simeq {\rm id} \otimes \Ho^*(Z^{n-1},\mathcal O_{Z^{n-1}})^{\mathfrak S_{n-1}}$ of $G''^RG''$ and the monad structure of $(p_n^*\circ {\rm tr}iv)^R(p_n^*\circ {\rm tr}iv)$ is given by the cup product on $\Ho^*(Z^{n-1},\mathcal O_{Z^{n-1}})^{\mathfrak S_{n-1}}$ which has the right form. Condition (3) is easy to check since all occurring autoequivalences are simply shifts. \end{proof} \subsection{The case of an odd dimensional Calabi--Yau variety} In this case $\Ho^*(\mathcal O_Z)=\mathbb{C}[0]\oplus \mathbb{C}[-d]$ and $S^k\Ho^*(\mathcal O_Z)=\mathbb{C}[0]\oplus \mathbb{C}[-d]$ for $k\ge 1$. The reason for the vanishing of the higher degrees is that the symmetric product is taken in the graded sense. 
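More explicitly, if $u\in\Ho^d(\mathcal O_Z)$ denotes a generator, then $u$ is of odd degree, so $u\cdot u=-u\cdot u=0$ in the graded symmetric algebra, and therefore \[S^k\Ho^*(\mathcal O_Z)=\mathbb{C}\cdot 1\oplus\mathbb{C}\cdot u=\mathbb{C}[0]\oplus\mathbb{C}[-d]\quad\text{for every } k\ge 1.\]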
\begin{lemma} For $n\ge 3$ we have ${\mathcal G}^R{\mathcal G}\simeq \mathcal O_{\rm D}elta([0]\oplus [-d])$. \end{lemma} \begin{proof} The component $\mathcal O_{Z^2} \otimes S^{n-1}\Ho^*(\mathcal O_Z)\xymatrix@1@=15pt{\ar[r]&} \mathcal O_{Z^2} \otimes S^{n-2}\Ho^*(\mathcal O_Z)$ of the map ${\mathcal G}''^R{\mathcal G}'\xymatrix@1@=15pt{\ar[r]&} {\mathcal G}''^R{\mathcal G}''$ as well as the whole ${\mathcal G}'^R{\mathcal G}'\xymatrix@1@=15pt{\ar[r]&} {\mathcal G}'^R{\mathcal G}''$ are isomorphisms. This yields ${\mathcal G}''^R{\mathcal G}\simeq \mathcal O_{\rm D}elta\otimes S^{n-1}\Ho^*(\mathcal O_Z)[-1]\simeq \mathcal O_{{\rm D}elta}([0]\oplus[-d])[-1]$ and ${\mathcal G}'^R{\mathcal G}=0$. The result now follows from the triangle ${\mathcal G}''^R{\mathcal G}\xymatrix@1@=15pt{\ar[r]&} {\mathcal G}'^R{\mathcal G}\xymatrix@1@=15pt{\ar[r]&} {\mathcal G}^R{\mathcal G}$. \end{proof} That means that the cotwist of $G$ is an equivalence. For dimension reasons the second axiom of a spherical functor cannot hold for $G$. Thus, one could call $G$ a \textit{sphere-like functor} in analogy with the sphere-like objects of \cite{HKP}. \subsection{The case $\Ho^*(\mathcal O_Z)=\mathbb{C}[0]$} In this case also $S^k\Ho^*(\mathcal O_Z)=\mathbb{C}[0]$ for $k\ge 1$. \begin{proposition} The functor $G$ is fully faithful, i.e. ${\mathcal G}^R{\mathcal G}=\mathcal O_{\rm D}elta$. \end{proposition} \begin{proof} The maps ${\mathcal G}''^R{\mathcal G}'\xymatrix@1@=15pt{\ar[r]&} {\mathcal G}''^R{\mathcal G}''$ and ${\mathcal G}'^R{\mathcal G}'\xymatrix@1@=15pt{\ar[r]&} {\mathcal G}'^R{\mathcal G}''$ are the identity on the components $\mathcal O_{Z^2}$. Hence, ${\mathcal G}''^R{\mathcal G}\simeq \mathcal O_{\rm D}elta\otimes S^{n-1}\Ho^*(\mathcal O_Z)[-1]$ and ${\mathcal G}'^R{\mathcal G}=0$. It follows that ${\mathcal G}^R{\mathcal G}=\mathcal O_{\rm D}elta\otimes S^{n-1}\Ho^*(\mathcal O_Z)=\mathcal O_{\rm D}elta$. \end{proof} \begin{remark} In particular, when $Z=S$ is a surface, the Proposition in conjunction with Lemma \ref{surftrunc} reproves Theorem \ref{hilb-semiorth}. \end{remark} \begin{remark} In contrast to the case of the non-truncated ideal functor (see Remark \ref{kerR}) it is, at least for $n\ge 3$, easy to find an object in ${\mathcal E}r G^R$, namely $\mathcal O_{Z^n}\otimes \mathfrak a$. Since for even $d=\dim Z$ also $G^R(\omega_{Z^n}\otimes \mathfrak a)=0$, we have in the case that $Z=X$ is an Enriques surface $T_G(\mathcal O_{X^n}\otimes \mathfrak a)=\mathcal O_{X^n}\otimes \mathfrak a$ by Equation (\ref{coneofT}). This shows that, although the functors $\hat F$ and $G$ are very similar, the induced twists differ at least slightly. It also follows by Remark \ref{twistrel} that $T_G$ and $T_{\mathcal O_{X^n}\otimes \mathfrak a}$ commute. \end{remark} \end{document}
\begin{document} \title[Stochastic Properties of the Laplacian on Riemannian Submersions]{Stochastic Properties of the Laplacian on Riemannian Submersions } \author{M. Cristiane Brand\~ao, Jobson Q. Oliveira } \address{Departamento de Matem\'atica-UFC\\ 60455-760-Fortaleza-CE-Br} \email{[email protected]}\email{[email protected]} \urladdr{http://www.mat.ufc.br/} \keywords {Feller Property, Stochastic Completeness, Parabolicity, Riemannian Immersions and Submersions} \begin{abstract} Based on ideas of Pigola and Setti \cite{PS} we prove that properly immersed submanifolds with bounded mean curvature of Cartan-Hadamard manifolds are Feller. We also consider Riemannian submersions $\pi \colon M \to N$ with compact minimal fibers, and based on various criteria for parabolicity and stochastic completeness, see \cite{Grygor'yan}, we prove that $M$ is Feller, parabolic or stochastically complete if and only if the base $N$ is Feller, parabolic or stochastically complete, respectively. \end{abstract} \maketitle \section{\bf Introduction} Let $M$ be a geodesically complete Riemannian manifold and $\triangle=\operatorname{div} \circ \operatorname{grad} $ the Laplace-Beltrami operator acting on the space $C_{0}^{\infty}(M)$ of smooth functions with compact support. The operator $\triangle$ is symmetric with respect to the $L^{2}(M)$-scalar product and it has a unique self-adjoint extension to a semi-bounded operator, also denoted by $\triangle$, whose domain is the set $W^{2}_{0}(M)=\{ f\in W_{0}^{1}(M),\, \triangle\! f\in L^{2}(M)\}$, see details in \cite{davies}, where $W_{0}^{1}(M)$ is the closure of $C_{0}^{\infty}(M)$ with respect to the norm $$ (u,v)_{1}=\int_{M}u\, v \,d\mu + \int_{M}\langle \nabla u, \, \nabla v \rangle \,d\mu.$$ The operator $\triangle$ defines the heat semi-group $\{e^{t\triangle}\}_{t\geq 0}$, a family of positive definite bounded self-adjoint operators in $L^{2}(M)$ such that for any $u_{0}\in L^{2}(M)$, the function $u(t,x)\colon\!\!=(e^{t\triangle }u_{0})(x)\in C^{\infty}((0, \infty)\times M)$ solves the heat equation \begin{equation}\left\{\begin{array}{rcl}\displaystyle\frac{\partial }{\partial \,t}u(t,x)&=&\triangle_{x}u(t,x)\\ &&\\ u(t,x)&\stackrel{L^{2}(M)}{\longrightarrow }&u_{0}(x)\,\,\;\;\;{\rm as}\,\,\;\;\; t\to 0^{+}\end{array}\right. \end{equation} Moreover, there exists a smooth function $p\in C^{\infty}(\mathbb{R}_{+}\times M \times M)$, called the heat kernel of $M$, such that \begin{equation} e^{t\triangle}u(x)=\int_{M}p(t,x,y)\,u(y)\,d\mu_{y},\end{equation}see \cite{dodziuk}, \cite{G-book}. In \cite{azencott}, Azencott studied (among other things) Riemannian manifolds such that the heat semi-group $e^{t\triangle }$ preserves the set of continuous functions vanishing at infinity. He introduced the concept of Feller manifolds in the following definition. \begin{definition}A complete Riemannian manifold $M$ is Feller or enjoys the Feller property for the Laplace-Beltrami operator if \begin{equation} e^{t\triangle }(C_{0}(M))\subset C_{0}(M)\end{equation} where $C_{0}(M)=\{u\colon M\to \mathbb{R}, \,{\rm continuous}\colon u(x)\to 0 \, \,{\rm as}\,\, x\to \infty\}.$ \end{definition} On the other hand, it is well known that the heat kernel has the following properties, \begin{equation}\begin{array}{rll}\displaystyle \frac{\partial p}{\partial t} &=& \triangle_{y}p, \\ &&\\ p(t, x,y) &> &0,\\ &&\\ p(t, x,y)&=&\int_{M}p(s, x,z)p(t-s,z,y)dz,\\&& \\ \int_{M}p(t, x,y)dy &\leq &1\end{array}\end{equation} for all $x\in M$ and all $t>0$, $s\in (0, t)$.
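A standard example, recalled here only for illustration, is $M=\mathbb{R}^{n}$ with the Gauss--Weierstrass kernel
\[
p(t,x,y)=(4\pi t)^{-n/2}\,e^{-\frac{\vert x-y\vert^{2}}{4t}},
\]
for which $\int_{\mathbb{R}^{n}}p(t,x,y)\,dy=1$ for all $x$ and $t>0$; moreover $e^{t\triangle}u=p(t,\cdot,\cdot)\ast u$ maps $C_{0}(\mathbb{R}^{n})$ into itself, so $\mathbb{R}^{n}$ is both stochastically complete (in the sense recalled below) and Feller.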
From these properties one can construct a Markov process $X_{t}$ on $M$, called Brownian motion on $M$, with transition density $p(t,x,y)$, see \cite[p.143]{Grygor'yan}. The corresponding measure in the space of all paths issuing from a point $x$ is denoted by $\mathbb{P}_{x}$. If $X_{0}=x$ and $U\subset M$ is an open set, then $$\mathbb{P}_{x}\left( \{ X_{t}\in U\}\right)=\int_{U}p(t,x,y)dy.$$ The process $X_{t}$ is stochastically complete if the total probability of the particle being found in $M$ is equal to $1$. This motivates the following definition. \begin{definition}A Riemannian manifold $M$ is said to be stochastically complete if for some, equivalently for all, $(x,t)\in M\times (0, \infty)$, $$ \int_{M}p(t,x,y)dy =1.$$ \end{definition} From this probabilistic point of view, Azencott \cite{azencott} proved the following probabilistic characterization of Feller manifolds. \begin{theorem}[Azencott]A Riemannian manifold $M$ is Feller if and only if, for every compact $K\subset M$ and for every $t_{0}>0$, the Brownian motion $X_{t}$ starting at $X_{0}=x_{0}$ enters $K$ before time $t_{0}$ with probability that tends to zero as $x_{0}\to \infty$. \end{theorem} After Azencott's paper, many authors \cite{dodziuk}, \cite{hsu-1}, \cite{hsu-2}, \cite{LK}, \cite{pinsky}, \cite{yau-1} contributed to the theory of Feller manifolds by establishing geometric conditions implying the Feller property. Most of those geometric conditions are on the Ricci curvature of the manifold, although the methods employed differ, ranging from parabolic equations to probabilistic methods. An interesting approach was taken recently by Pigola and Setti in \cite{PS}: they used a characterization of minimal solutions of certain elliptic problems due to Azencott \cite{azencott} to set up a very useful criterion for the Feller property of Riemannian manifolds (Theorem \ref{supersol compar}), similar to those used to prove parabolicity and stochastic completeness of Riemannian manifolds. Stemming from \cite{PS}, the paper \cite{BPS} considers Riemannian manifolds that are simultaneously stochastically complete and Feller, studies solutions of certain PDEs outside a compact set, and proves a number of geometric applications; see \cite[Thms.\ 16, 18, 20, 21]{BPS}. There are many examples of manifolds that are Feller and stochastically complete, like the Cartan-Hadamard manifolds with sectional curvature with quadratic decay, the Ricci solitons, the properly immersed minimal submanifolds of Cartan-Hadamard manifolds, etc. In order to apply the machinery developed in \cite{BPS}, it is important to establish geometric criteria to ensure stochastic completeness and the Feller property of Riemannian manifolds. In our first result we show that any properly immersed submanifold of a Cartan-Hadamard manifold with bounded mean curvature vector is Feller. It is known that properly immersed submanifolds with bounded mean curvature vector are stochastically complete, \cite{PRS}. We also prove stochastic completeness and the Feller property for an important class of Riemannian manifolds, the total spaces of Riemannian submersions with compact minimal fibers. Riemannian submersions were introduced by O'Neill \cite{One66}, \cite{One67} and A. Gray \cite{Gra67} in order to produce new examples of manifolds with non-negative sectional curvature or positive Ricci curvature, and as a laboratory to test conjectures.
Examples of Riemannian submersions are the covering spaces $\pi\colon \widetilde{M}\to M$ and the warped product manifolds $\pi\colon (X\times_{\psi}Y, dX^{2}+\psi^{2}(x)dY^{2})\to X$. To give examples of Riemannian submersions with minimal fibers, let $G$ be a Lie group endowed with a bi-invariant metric and $K$ be a closed subgroup; then the natural projection $\pi\colon G\to G/K$ is a Riemannian submersion with totally geodesic fibers diffeomorphic to $K$. Other examples are the homogeneous $3$-dimensional Riemannian manifolds with isometry group of dimension four, described in detail in \cite{Sco}. In our second result, we show that if $\pi\colon M \to N$ is a Riemannian submersion with compact minimal fibers $F$, then $M$ is Feller, stochastically complete or parabolic if and only if $N$ is Feller, stochastically complete or parabolic, respectively. \section{\bf Statement of the results} Pigola and Setti \cite{PS}, as a consequence of the relations between Faber-Krahn isoperimetric inequalities and the Feller property, proved the following result. \begin{theorem}[Pigola-Setti] Let $\varphi\colon M \hookrightarrow N$ be an immersion of an $m$-submanifold with mean curvature vector $H$ into a Cartan-Hadamard $n$-manifold $N$. If \begin{equation} \int_{M}\vert H\vert^{m}d\mu_{M} < \infty\end{equation} then $M$ is Feller. In particular \begin{itemize} \item[a.] Every Cartan-Hadamard manifold is Feller. \item[b.]Every complete minimal submanifold of a Cartan-Hadamard manifold is Feller. \end{itemize}\label{thmPS} \end{theorem} In our first result we replace the condition $ \Vert H\Vert_{L^{m}(M)} < \infty$ in Theorem \ref{thmPS} by properness of the immersion and boundedness of the mean curvature vector. We prove the following theorem. \begin{theorem} \label{Theorem 2.1}Let $\varphi\colon M \hookrightarrow N$ be a proper immersion of an $m$-submanifold with mean curvature vector $H$ into a Cartan-Hadamard $n$-manifold $N$. If $\varphi $ has bounded mean curvature vector, $\sup_{M}\vert H\vert <\infty$, then $M$ is Feller. \end{theorem} We should remark that properly immersed submanifolds of Cartan-Hadamard manifolds whose mean curvature vector has controlled growth\footnote{Meaning that $\sup_{B_{N}(p,t)\cap \varphi (M)} \vert H\vert \leq c^{2}\cdot t^{2}\cdot \log^{2}(t+2)$, with $c$ a constant and $t\gg 1$.} are stochastically complete; see details in \cite{PRS}. To put our second result in context let us consider a Riemannian covering $\pi\colon \widetilde{M}\to M$. It is known that $\widetilde{M}$ is stochastically complete if and only if $M$ is stochastically complete. A proof, based on the fact that Brownian paths in $M$ lift to Brownian paths in $\widetilde{M}$ and Brownian paths in $\widetilde{M}$ project to Brownian paths in $M$, can be found in Elworthy's book \cite{elworthy}. For parabolicity, the situation is different. If $\widetilde{M}$ is parabolic then $M$ is parabolic; however, the converse is not true in general: as observed in \cite[p.24]{PS}, the doubly punctured disc is parabolic and it is covered by the Poincar\'{e} disc, which is not parabolic. In our next theorem we consider parabolicity and stochastic completeness on Riemannian submersions $\pi \colon M\to N$ with minimal fibers $F_{p}=\pi^{-1}(p)$, $p\in N$. \begin{theorem}\label{Theorem 2.2} Let $\pi \colon M \to N$ be a Riemannian submersion with minimal fibers $F_{p}=\pi^{-1}(p)$, $p\in N$. Then \begin{itemize}\item[i.] If $M$ is parabolic then $N$ is parabolic. \item[ii.]
If $M$ is stochastically complete then $N$ is stochastically complete. \end{itemize}If, in addition to minimality, the fibers $F_{p}$ are compact, then we have the following. \begin{itemize}\item[iii.]If $N$ is parabolic then $M$ is parabolic. \item[iv.] If $N$ is stochastically complete then $M$ is stochastically complete. \end{itemize} \end{theorem} \noindent \textbf{Observations.} \begin{itemize}\item A Riemannian covering is a particular example of a Riemannian submersion with minimal fibers, thus items i. and ii. extend the well-known facts about parabolicity and stochastic completeness cited above. \item The compactness of the fibers in items iii. and iv. cannot be removed, as one can see in the following examples. \item[] \begin{itemize}\item[1.]$\pi\colon \mathbb{R}^{3}\to \mathbb{R}^{2}$ is a Riemannian submersion with non-compact minimal fibers $\mathbb{R}$. The base $\mathbb{R}^{2}$ is parabolic while $\mathbb{R}^{3}$ is not. \item[2.] Let $M_1$, $M_{2}$ be stochastically incomplete and stochastically complete Riemannian manifolds, respectively. The projection $\pi\colon M_{1}\times M_{2}\to M_{2}$ is a Riemannian submersion with totally geodesic fibers $F\approx M_{1}$. The base space $M_{2}$ is stochastically complete while the total space $M_{1}\times M_{2}$ is not. \end{itemize} \end{itemize} Regarding the Feller property, Pigola and Setti proved the following result. \begin{theorem}[Pigola-Setti] Let $\pi\colon \widetilde{M}\to M$ be a $k$-fold Riemannian covering, $k<\infty$. Then $\widetilde{M}$ is Feller if and only if $M$ is Feller. \label{thmFellerCov} \end{theorem} Moreover, they give an example of an $\infty$-fold covering $\pi\colon \widetilde{M}\to M$ such that $\widetilde{M}$ is Feller while $M$ is not. However, they prove that if $M$ is Feller then any $k$-fold Riemannian covering $\widetilde{M}$, $k\leq \infty$, is Feller; see \cite[Thm.\ 9.5]{PS}. Our third result is an extension of Pigola-Setti's Theorem \ref{thmFellerCov}; however, it does not extend Theorem 9.5 of \cite{PS}. We prove the following theorem. \begin{theorem}\label{thmFellerSub} Let $\pi \colon M \to N$ be a Riemannian submersion with compact minimal fibers $F$. Then $M$ is Feller if and only if $N$ is Feller. \end{theorem} \section{\bf Proof of the Results} Let $ \varphi : M \hookrightarrow N$ be an isometric immersion of a Riemannian $m$-manifold $M$ into a Riemannian $n$-manifold $N$. Let $g: N \rightarrow \mathbb{R}$ be a smooth function and consider the function $f = g \circ \varphi$. It is well known that (identifying $X$ with $d\varphi X$), \begin{equation*} {\rm Hess}\, f(p)(X,Y) = {\rm Hess}\, g(\varphi (p))(X,Y) + \langle \alpha (X,Y), \operatorname{grad} g \rangle (\varphi (p)),\,\, \forall \,X,Y\in T_{p}M \label{equation 2} \end{equation*} Taking an orthonormal basis $\{X_{1}, \ldots, X_{m}\}$ of $T_{p}M$ and taking the trace we obtain \begin{equation} \triangle f(p) = \sum_{i=1}^{m}{\rm Hess}\, g(\varphi(p))(X_{i},X_{i}) + \langle H, \operatorname{grad} g \rangle(\varphi(p)) \label{equation 3} \end{equation} where $H={\rm Trace}\,\alpha$ denotes the mean curvature vector and $\alpha$ is the second fundamental form of the immersion; see \cite{JK}. \subsection{Proof of Theorem \ref{Theorem 2.1}}The theorem below is due to Azencott \cite{azencott}, see \cite{PS}. It relates the Feller property and the decay at infinity of a minimal solution of a certain Dirichlet problem. \begin{theorem}[Azencott]\label{feller equiv} The following statements are equivalent. \begin{enumerate} \item[a.] $M$ is Feller. \item[b.]
For any $\Omega \subset \subset M$ with smooth boundary and for any constant $\lambda > 0$, the minimal solution $h: M \setminus \Omega \rightarrow \mathbb{R}$ of the problem \begin{equation}\label{eqFeller} \left\{ \begin{array}{lll} \Delta h = \lambda h, & \mbox{on} & M \setminus \Omega \\ h = 1, & \mbox{on} & \partial \Omega \\ h > 0, & \mbox{on} & M \setminus \Omega \\ \end{array} \right. \end{equation} satisfies $h(x) \rightarrow 0$ as $x \rightarrow \infty$. \end{enumerate} \end{theorem} \noindent The minimal positive solution $h$ for the problem \eqref{eqFeller} always exists; see \cite{azencott}. \begin{definition}We say that $u\colon M\setminus \Omega\to \mathbb{R}$ is a super-solution of the exterior Dirichlet problem \eqref{eqFeller} if $u$ satisfies \begin{equation}\label{eqFeller2} \left\{ \begin{array}{rllll} \Delta u & \leq & \lambda u, & \mbox{on} & M \setminus \Omega \\ u & \geq &1, & \mbox{on} & \partial \Omega \\ \end{array} \right. \end{equation}A sub-solution is similarly defined, reversing the inequalities in \eqref{eqFeller2}. \end{definition} The next theorem, due to Pigola and Setti \cite{PS}, establishes a comparison between super-solutions and the minimal solution of the Dirichlet problem \eqref{eqFeller}. \begin{theorem}[Pigola-Setti]\label{supersol compar} Let $\Omega$ be a relatively compact open set with smooth boundary $\partial \Omega$ in a Riemannian manifold $M$ and let $ \lambda > 0$. Let $u$ and $h$ be a positive super-solution and the minimal solution of the problem \eqref{eqFeller}, respectively. Then $$h(x) \leq u(x), \,\, \forall x \in M \setminus \Omega .$$ \\ In particular, if $u(x) \rightarrow 0$ as $x \rightarrow \infty$ then $M$ is Feller. \end{theorem} Using Theorem \ref{supersol compar} we prove Theorem \ref{Theorem 2.1}. \begin{proof} Let $p\in \varphi (M)\subset N$ and let $\rho_{N}(x)={\rm dist}_{N}(p,x)$ be the distance function in $N$. Let $\lambda, R > 0$ be positive constants and define $G\colon N\setminus B_{N}(p,R)\to \mathbb{R}$ by $G(x)=g(\rho_{N}(x))$, where $g: [R, +\infty) \rightarrow \mathbb{R}$ is given by $$g(t)= e^{-\sqrt{\lambda}(t-R)}$$ and $B_{N}(p,R)$ is the geodesic ball of radius $R$ and center at $p$. Let $\Omega =\varphi^{-1} (B_{N}(p,R))$, a relatively compact open subset of $M$ (recall that $\varphi$ is a proper immersion), and define $u\colon M\setminus \Omega\to \mathbb{R}$ by $u=G\circ \varphi$. Let $x\in M$ be such that $\varphi (x)\in N\setminus B_{N}(p,R)$.
By Formula \eqref{equation 3}, taking an orthonormal basis for $T_{x}M$, we have \begin{eqnarray*} \triangle u(x) & = & \sum_{i=1}^{m}{\rm Hess}\, (g \circ \rho_{N})(\varphi(x))(e_{i},e_{i}) + \langle H, \operatorname{grad} (g \circ \rho_{N}) \rangle(\varphi(x)). \\ \end{eqnarray*} Let $t=\rho_{N}(\varphi (x))$. Choosing the orthonormal basis $\{ e_{i} \}$ for $T_{x}M$ such that $e_{2}, \ldots, e_{m}$ are tangent to the sphere $\partial B_{N}(p,t)$ and $e_{1} = a\cdot (\partial/\partial t) + b\cdot (\partial/ \partial\theta)$, $a^{2}+b^{2}=1$, where $\partial/ \partial\theta \in [[e_{2}, \ldots, e_{m}]]$, $\vert \partial/\partial \theta\vert =1$, $ \partial/\partial t = \operatorname{grad} \rho_{N}$, we obtain \begin{eqnarray}\label{eqFeller3} \triangle u(x)& = & \sum_{i=1}^{m}{\rm Hess}\,(g \circ \rho_{N})(\varphi(x))(e_{i}, e_{i}) + \langle H, \operatorname{grad} (g \circ \rho_{N}) \rangle(\varphi(x)) \nonumber \\ & = & a^{2}g''(t) + b^{2}g'(t){\rm Hess}\, \rho_{N}(\varphi(x))(\frac{\partial}{\partial\theta},\frac{\partial}{\partial\theta} ) \nonumber\\ && + g'(t)\sum_{i=2}^{m}{\rm Hess}\, \rho_{N}(\varphi(x))(e_{i}, e_{i}) + g'(t) \langle \operatorname{grad} \rho_{N}, H \rangle \nonumber\\ & = & (1-b^{2})g''(t) + b^{2}g'(t){\rm Hess}\, \rho_{N}(\varphi(x))(\frac{\partial}{\partial\theta},\frac{\partial}{\partial\theta} )\nonumber\\ && + g'(t)\sum_{i=2}^{m}{\rm Hess}\, \rho_{N}(\varphi(x))(e_{i}, e_{i}) + g'(t) \langle \operatorname{grad} \rho_{N}, H \rangle \nonumber\\ & \leq & g''(t) + g'(t)\langle \operatorname{grad} \rho_{N}, H \rangle \end{eqnarray} where the last inequality holds because $g''>0$, $g'<0$ and ${\rm Hess}\, \rho_{N}\geq 0$, since $N$ is a Cartan-Hadamard manifold. Indeed, $g>0$, $g'= -\sqrt{\lambda}\,g<0$ and $ g'' = \lambda\, g>0.$ Thus \begin{eqnarray*} \triangle u(x) & \leq & g''(t) + g'(t) \langle \operatorname{grad} \rho_{N}, H\rangle \\ & = & \lambda g(t) + (-\sqrt{\lambda})g(t)\langle \operatorname{grad} \rho_{N}, H\rangle \\ & \leq & (\lambda + \sqrt{\lambda}\,\displaystyle\sup_{M} \vert H\vert )g(t) \\ & = & \mu \cdot g(t) \\ & = & \mu \cdot u(x) \end{eqnarray*} where $\mu = \lambda + \sqrt{\lambda}\,\sup_{M}\vert H\vert$. Moreover, if $x\in \partial \Omega$ then $u(x)=1$, and when $x \rightarrow \infty$ in $M$ then $\varphi (x)\to \infty $ in $N$, since $\varphi$ is proper. Therefore $u(x)\to 0$ as $x\to \infty$. Let $h > 0$ be the minimal solution of the problem $$ \left\{ \begin{array}{rllll} \Delta h & =& \mu\cdot h, & \mbox{on} & M \setminus \Omega \\ h & =& 1, & \mbox{on} & \partial \Omega \\ h & >& 0, & \mbox{on} & M \setminus \Omega \\ \end{array} \right. $$ According to Theorem \ref{supersol compar}, $$h(x) \leq u(x), \,\, \forall x \in M \setminus \Omega.$$ Taking, in the inequality above, the limit as $x \rightarrow \infty$ we obtain $$0 \leq \displaystyle\lim_{x \to \infty}h(x) \leq \displaystyle\lim_{x \to \infty}u(x)=0$$ and we conclude that $M$ is Feller. \end{proof} \subsubsection{\bf Riemannian Submersions}\label{sub:notterm}In this section we discuss basic facts related to Riemannian submersions needed in the proofs of our results; see \cite{One66} for more details. Let $M$ and $N$ be Riemannian manifolds. A smooth surjective map $\pi\colon M\to N$ is a {\em submersion} if the differential $\mathrm d\pi (q)$ has maximal rank for every $q\in M$. If $\pi:M\to N$ is a submersion, then for all $p\in N$ the inverse image $F_p=\pi^{-1}(p)$ is a smooth embedded submanifold of $M$, which is called the \emph{fiber} over $p$.
\begin{definition}A submersion $\pi:M\to N$ is called a \emph{Riemannian submersion} if for all $p\in N$ and all $q\in F_p$, the restriction of $\mathrm d\pi(q)$ to the orthogonal subspace $T_q F_p^\perp$ is an isometry onto $T_pN$. \end{definition} Given $p\in N$ and $q\in F_p$, a tangent vector $\xi\in T_qM$ is said to be \emph{vertical} if it is tangent to $ F_p$ and it is said to be \emph{horizontal} if it belongs to the orthogonal space $(T_q F_p)^\perp$. Given $\xi\in TM$, its horizontal and vertical components are denoted respectively by $\xi^{h}$ and $\xi^{v}$. The second fundamental form of the fibers is a symmetric tensor $\mathcal S^F:\mathcal D^\perp\times\mathcal D^\perp\to\mathcal D$, where $\mathcal D$ denotes the horizontal distribution and $\mathcal D^{\perp}$ the vertical one, defined by \[\mathcal S^F(v,w)=(\displaystyle\nabla^{M}_vW)^{h},\] where $W$ is a vertical extension of $w$ and $\nabla^M$ is the Levi--Civita connection of $M$. For any given vector field $X\in\mathfrak X(N)$, there exists a unique horizontal vector field $\widetilde{X}\in\mathfrak X(M)$ which is $\pi$-related to $X$, that is, $\mathrm{d}\pi_{q}(\widetilde{X}_{q})=X_{p}$ for any $p\in N$ and $q\in F_p$; it is called the {\em horizontal lifting} of $X$. On the other hand, a horizontal vector field $\widetilde{X}\in\mathfrak X(M)$ is called \emph{basic} if it is $\pi$-related to some vector field $X\in\mathfrak X(N)$. Observe that the fibers are totally geodesic submanifolds of $M$ exactly when $\mathcal S^F=0$. The \emph{mean curvature} vector of the fiber is the horizontal vector field $H$ defined\footnote{Sometimes the mean curvature vector is defined as $H(q) = \sum_{i=1}^{k}\mathcal S^F(q)(e_{i}, e_{i})$} by \begin{equation}H(q) =-\sum_{i=1}^{k}\mathcal S^F(q)(e_{i}, e_{i})=-\sum_{i=1}^{k}(\displaystyle\nabla^{M}_{e_{i}}e_{i})^{h}\label{eq:defmeancurvature} \end{equation} where $(e_i)_{i=1}^k$ is a local orthonormal frame for the fiber through $q$. Observe that $H$ is not basic in general. For instance, when the fibers are hypersurfaces of $M$, then $H$ is basic if and only if all the fibers have constant mean curvature. The fibers are \emph{minimal} submanifolds of $M$ when $H\equiv0$. The following lemma, whose proof can be found in \cite{BMP}, will play an important role in the proofs of Theorem \ref{Theorem 2.2} and Theorem \ref{thmFellerSub}. \begin{lemma}[Main]\label{Lemma 1} Let $\widetilde{X} \in \mathfrak{X}(M)$ be a basic vector field, $\pi$-related to $X \in \mathfrak{X}(N)$. Then the following relation between the divergence of $\widetilde{X}$ at $\widetilde{x} \in F_{x}$ and the divergence of $X$ at $x \in N$ holds: \begin{eqnarray} {\rm div}\,_{M}(\widetilde{X})(\widetilde{x}) & = & {\rm div}\,_{N}(X)(x) + g_{M}(\widetilde{X}_{\widetilde{x}}, H_{\widetilde{x}}) \nonumber \\ & = & {\rm div}\,_{N}(X)(x) + g_{N}(d \pi_{\widetilde{x}}(\widetilde{X}_{\widetilde{x}}) ,d \pi_{\widetilde{x}}(H_{\widetilde{x}})) \nonumber \end{eqnarray} If the fibers are minimal, then ${\rm div}\,_{M}\widetilde{X} = {\rm div}\,_{N}X\circ \pi$. \end{lemma} Let $u\colon N\to \mathbb{R}$ be a smooth function and denote by $\widetilde{u}=u \circ \pi \colon M\to \mathbb{R}$ its lifting to $M$. It is easy to show that $\widetilde{\operatorname{grad}_{N} u}= \operatorname{grad}_{M}\widetilde{u}$, that is, the horizontal lifting of $\operatorname{grad}_{N}u$ is the gradient of the lifted function $\widetilde{u}$. Here and in what follows a tilde superscript, as in $\widetilde{X}$ and $\widetilde{u}$, denotes the lifting of $X$ and $u$, respectively. \subsection{\bf Proof of Theorem \ref{Theorem 2.2}, items i.
and ii.} The proof of items i. and ii. follows from a characterization of parabolicity and stochastic completeness in terms of the weak Omori-Yau maximum principle at infinity, proved by Pigola-Rigoli-Setti in \cite{PRS-0}, \cite{PRS}. Precisely, they proved the following theorem. \begin{theorem}[Pigola-Rigoli-Setti]\label{thm-PRS-parabolicity} A Riemannian manifold $M$ is parabolic $($resp. stochastically complete$)$ if and only if for every $u\in C^{2}(M)$ with $u^{\ast}=\sup_{M}u<\infty$ and for every $\eta>0$ one has \begin{equation}\label{eqPRS-parabolicity} \inf_{\Omega_{\eta}}\triangle u <0,\,\,\, (resp.\,\leq 0) \end{equation}where $\Omega_{\eta}=\{u>u^{\ast}-\eta\}$. \end{theorem} Let $\pi\colon M\to N$ be a Riemannian submersion with minimal fibers $F$ where $M$ is parabolic (resp. stochastically complete). Let us suppose, by contradiction, that $N$ is non-parabolic (resp. stochastically incomplete). By Theorem \ref{thm-PRS-parabolicity} there exist $\eta>0$ and $u\in C^{2}(N)$ with $u^{\ast}<\infty$ such that $\inf_{\Omega_{\eta}}\triangle u \geq 0\,\,\, (resp.\,> 0)$. Let $\widetilde{u}\in C^{2}(M)$ be the horizontal lifting of $u$. Applying Lemma \ref{Lemma 1} to $X=\operatorname{grad} u$ and $\widetilde{X}=\operatorname{grad} \widetilde{u}$ one has that $\triangle_{N} u (x)=\operatorname{div}_{N}X(x)=\operatorname{div}_{M}\widetilde{X}(y)=\triangle_{M}\widetilde{u}(y)$ for any $y\in F_{x}=\pi^{-1}(x)$. It is clear that $\widetilde{u}^{\ast}=\sup_{M}\widetilde{u}=u^{\ast}<\infty$ and, defining $\widetilde{\Omega}_{\eta}=\{\widetilde{u}>\widetilde{u}^{\ast}-\eta\}$, one has that $$\inf_{\widetilde{\Omega}_{\eta}}\triangle_{M}\widetilde{u}= \inf_{\Omega_{\eta}}\triangle u \geq 0,\,\, (resp. \,>0) $$ showing that $M$ is non-parabolic (resp. stochastically incomplete), contradicting the hypothesis that $M$ is parabolic (resp. stochastically complete). In fact, we can prove that if $M$ is $L^{\infty}$-Liouville then $N$ is $L^{\infty}$-Liouville; recall that $M$ is $L^{\infty}$-Liouville if every bounded harmonic function $u\colon M\to\mathbb{R}$ is constant. Just lift to $M$ a bounded harmonic function $u\in C^{\infty}(N)$: the lifting $\widetilde{u}\in C^{\infty}(M)$ is bounded and harmonic, thus constant, which implies that $u$ is also constant. \subsection{Proof of Theorem \ref{Theorem 2.2}, item iii. }We start with two definitions. \begin{definition} Let $M$ be a complete Riemannian manifold and $v:M \rightarrow \mathbb{R}$ be a continuous function. We say that $v$ is an exhaustion function if all the sublevel sets $B^{v}_{r} = \{ x \in M; v(x) < r \} $ are pre-compact. \end{definition}If the exhaustion function $v$ is smooth, $C^{\infty}(M)$, then the boundaries $\partial B^{v}_{r}$ are smooth hypersurfaces for almost all $r\in v(M)\subset \mathbb{R}$. \begin{definition} The flux of the function $v$ through a smooth oriented hypersurface $\Gamma$ is defined by ${\rm flux}\,_{\Gamma} v =\displaystyle\int_{\Gamma}\langle \operatorname{grad} v,\,\nu\rangle d\sigma$ where $\nu$ is the outward unit vector field normal to $\Gamma$. \end{definition} The following theorem, proved by Grigor'yan \cite[Thm. 7.6]{Grygor'yan}, is fundamental in the proof of item iii. \begin{theorem}[Grigor'yan] A manifold $M$ is parabolic if and only if there exists a smooth exhaustion $v$ on $M$ such that $$\displaystyle\int_{1}^{\infty}\frac{dr}{{\rm flux}\,_ {\partial B^{v}_{r}}v}=\infty .$$ \end{theorem}Let $\pi \colon M\to N$ be a Riemannian submersion with compact minimal fibers $F$.
Since $N$ is parabolic, Grigor'yan's theorem provides a smooth exhaustion function $v\colon N\to \mathbb{R}$ with $\int_{1}^{\infty} dr/{\rm flux}\,_{\partial B^{v}_{r}}v=\infty$; let $B^{v}_{r}$, $r>0$, be its sublevel sets. It is clear that the lifting $\widetilde{v}$ of $v$ is an exhaustion function of $M$, since the fibers are compact. Moreover, the sublevel sets $B_{r}^{\widetilde{v}}$ of $\widetilde{v}$ are exactly the sets $\widetilde{B^{v}_{r}}=\pi^{-1}(B^{v}_{r})=\bigcup\{F_{p}:\, p \in B^{v}_{r}\}$. Let $\nu$ be the outward unit vector field normal to $\partial B^{v}_{r}$. The lifting $\widetilde{\nu}$ of $\nu$ is the outward unit vector field normal to $\partial \widetilde{B^{v}_{r}}$. Therefore, $$ \langle \operatorname{grad}_{M}\widetilde{v}, \widetilde{\nu}\rangle(q)=\langle \operatorname{grad}_{N}v, \nu \rangle (p), \,\,\forall p\in \partial B^{v}_{r}\,\, {\rm and}\,\, \forall q\in F_{p}.$$ Hence, $${\rm flux}\,_{\partial B_{r}^{\widetilde{v}} }\widetilde{v}=\int_{\partial B_{r}^{\widetilde{v}}}\langle \operatorname{grad}_{M}\widetilde{v}, \widetilde{\nu}\rangle\,d\widetilde{\sigma}= \int_{ \partial B_{r}^{v}}\Big(\int_{F_{p}}dF_{p}\Big)\langle \operatorname{grad}_{N}v, \nu \rangle\, d\sigma(p)={\rm vol}(F_{p})\cdot {\rm flux}\,_{\partial B_{r}^{v}}v.$$ Thus, $$\int_{1}^{\infty}\frac{dr}{{\rm flux}\,_{\partial B_{r}^{\widetilde{v}} }\widetilde{v}}=\frac{1}{{\rm vol}(F_{p})}\cdot \int_{1}^{\infty}\frac{dr}{{\rm flux}\,_{\partial B_{r}^{v}}v}=\infty.$$ This proves that $M$ is parabolic. Observe that we used the fact that in a Riemannian submersion with compact minimal fibers, the volume of the fibers is constant, see \cite{BMP} for more details. \subsection{\bf Proof of Theorem \ref{Theorem 2.2}, item iv.} The proof of item iv. is an application of a recent result due to L. Mari and D. Valtorta \cite{MV} where they prove the equivalence between the Khas'minskii criterion and stochastic completeness. We can summarize a simplified version of their result as follows. \begin{theorem}[Khas'minskii-Mari-Valtorta]An open Riemannian manifold $M$ is stochastically complete if and only if there exists a smooth exhaustion function (called Khas'minskii function) $\gamma\colon M\to \mathbb{R}$ satisfying $\triangle \gamma \leq \lambda \, \gamma$ for some/all $\lambda >0$. \end{theorem} By hypothesis we have a Riemannian submersion $\pi \colon M\to N$ with compact minimal fibers and the base space $N$ is stochastically complete. According to the Khas'minskii-Mari-Valtorta Theorem there is a Khas'minskii function $\gamma$ on $N$. It is straightforward to show that the lifting $\widetilde{\gamma}$ is a Khas'minskii function on $M$; note that the compactness of the fibers ensures that $\widetilde{\gamma}$ is again an exhaustion. This shows that $M$ is stochastically complete. It should be observed that in \cite{bessa-piccione} the authors proved item iv. assuming only that the mean curvature vector of the fibers has controlled growth. \subsection{\bf Proof of Theorem \ref{thmFellerSub}} The idea here is to explore the relation between the minimal solutions of the exterior Dirichlet problems in $N \setminus \Omega$ and in $M \setminus \widetilde{\Omega}$ via an exhaustion argument, which allows us to deduce the Feller property of $N$ from that of $M$ and conversely. Let $\{ \Omega_{n}\}_{n=1}^{\infty}$ be an exhaustion of $N$ by compact sets with smooth boundaries. Take $\Omega \subset \Omega_{1}$ a fixed open set with smooth boundary and $\lambda >0$. Setting $\widetilde{\Omega} = \pi^{-1}(\Omega)$ and $\widetilde{\Omega}_{n} = \pi^{-1}(\Omega_{n})$, we obtain an exhaustion of $M$ by compact sets with smooth boundaries, the preimages being compact because the fibers are compact.
For each $n\geq 1$, consider $h_{n}$ to be the minimal solution of the Dirichlet problem \begin{equation} \left\{ \begin{array}{lll} \triangle_{N} h_{n} = \lambda h_{n}, & \mbox{on} & \Omega_{n} \setminus \Omega \\ h_{n} = 1, & \mbox{on} & \partial \Omega \\ h_{n} = 0, & \mbox{on} & \partial\Omega_{n} \\ \end{array} \right. \end{equation} \\ Let $\widetilde{h}_{n} = h_{n} \circ \pi$ be the lifting of $h_{n}$. Since the fibers are minimal, the $\widetilde{h}_{n}$'s satisfy the Dirichlet problem \begin{equation} \left\{ \begin{array}{lll} \triangle_{M} \widetilde{h}_{n} = \lambda \widetilde{h}_{n}, & \mbox{on} & \widetilde{\Omega}_{n} \setminus \widetilde{\Omega} \\ \widetilde{h}_{n} = 1, & \mbox{on} & \partial \widetilde{\Omega} \\ \widetilde{h}_{n} = 0, & \mbox{on} & \partial\widetilde{\Omega}_{n} \\ \end{array} \right. \end{equation} In fact, as the fibers $F_{p}$ are minimal, we have by \cite{BMP} (see Lemma \ref{Lemma 1}) that ${\rm div}\,_{M}(\widetilde{X}) = {\rm div}\,_{N}(X)$ whenever $\widetilde{X}$ and $X$ are $\pi$-related. In particular, \begin{eqnarray*} \triangle_{M}\widetilde{h}_{n}(\widetilde{x}) & = & {\rm div}\,_{M}(\operatorname{grad}_{M}\widetilde{h}_{n})(\widetilde{x}) \\ & = & {\rm div}\,_{N}(\operatorname{grad}_{N}h_{n})(\pi (\widetilde{x})) \\ & = & \triangle_{N}h_{n}(\pi (\widetilde{x})) \\ & = & \lambda h_{n}(\pi (\widetilde{x})) \\ & = & \lambda \widetilde{h}_{n}(\widetilde{x}) \\ \end{eqnarray*} From $\pi(\partial \pi^{-1} (\Omega)) \subset \partial \Omega$ and $\pi(\partial \pi^{-1}(\Omega_{n})) \subset \partial {\Omega}_{n}$ we conclude that $\widetilde{h}_{n} = 1$ on $\partial \widetilde{\Omega}$ and $\widetilde{h}_{n} = 0$ on $\partial \widetilde{\Omega}_{n}$. Applying Theorem \ref{supersol compar} we see that, for each $n\geq 1$, the function $\widetilde{h}_{n}$ is the minimal solution of the lifted Dirichlet problem above. Letting $n \to \infty$, the functions $h_{n}$ and $\widetilde{h}_{n}$ converge, respectively, to the minimal solution $h$ of the exterior problem \eqref{eqFeller} on $N \setminus \Omega$ and to $\widetilde{h}=h\circ \pi$, the minimal solution on $M \setminus \widetilde{\Omega}$. Suppose that $M$ is Feller. We must show that, given $\varepsilon > 0$, there exists a compact $K \subset N$ such that $h(x) < \varepsilon$ for all $x \in N \setminus K$. Since $M$ is Feller, given $\varepsilon > 0$ there exists a compact $\widetilde{K} \subset M$ such that $\widetilde{h}(\widetilde{x}) < \varepsilon$ for all $\widetilde{x} \in M \setminus \widetilde{K}$. Set $K = \pi (\widetilde{K})$ and $\widetilde{K}_{0} = \pi^{-1}(K)$. Then $K$ is compact, $\widetilde{K} \subset \widetilde{K}_{0}$ and hence $M \setminus \widetilde{K}_{0} \subset M \setminus \widetilde{K}$; this means that $\widetilde{h}(\widetilde{x}) < \varepsilon$ for all $\widetilde{x} \in M \setminus \widetilde{K}_{0}$. But $\widetilde{h} = h \circ \pi$; hence, for all $x \in N \setminus K$, choosing any $\widetilde{x}\in \pi^{-1}(x)\subset M\setminus \widetilde{K}_{0}$ we have $$ h(x) = h(\pi (\widetilde{x})) = \widetilde{h}(\widetilde{x}) < \varepsilon. $$ By Theorem \ref{feller equiv} we obtain that $N$ is Feller. \\ Now, suppose that $N$ is Feller, so that $$\displaystyle\lim_{x \to \infty}h(x) = 0.$$ Since the fibers $F_{p}$ are compact, if $\widetilde{x} \to \infty $ in $M$ then $\pi(\widetilde{x}) \to \infty$ in $N$; hence $$ \displaystyle\lim_{\widetilde{x} \to \infty}\widetilde{h}(\widetilde{x}) = \displaystyle\lim_{\widetilde{x} \to \infty}h(\pi (\widetilde{x})) = \displaystyle\lim_{x \to \infty} h(x) = 0, $$ that is, $M$ is Feller. \noindent \textbf{ Acknowledgments:} The authors want to express their gratitude to G. Pacelli Bessa for his comments and suggestions during the preparation of this paper. \end{document}
\begin{document} \begin{abstract} Let $p\ge 7$ be a prime, and $m\ge 5$ an integer. A natural generalization of Bring's curve, valid over any field $\mathbb{K}$ of zero characteristic or positive characteristic $p$, is the algebraic variety $V$ of $\textrm{PG}(m-1,\mathbb{K})$ which is the complete intersection of the projective algebraic hypersurfaces of homogeneous equations $x_1^k+\cdots +x_m^{k}=0$ with $1\leq k\leq m-2$. In positive characteristic, we also assume $m\le p-1$. Up to a change of coordinates in $\textrm{PG}(m-1,\mathbb{K})$, we show that $V$ is a projective, absolutely irreducible, non-singular curve of $\textrm{PG}(m-2,\mathbb{K})$ with degree $(m-2)!$, genus $\mathfrak{g}= \frac{1}{4} ((m-2)(m-3)-4)(m-2)!+1$, and tame automorphism group $G$ isomorphic to $\textrm{Sym}_m$. We compute the genera of the quotient curves of $V$ with respect to the stabilizers of one or more coordinates under the action of $G$. In positive characteristic, the two extremal cases, $m=5$ and $m=p-1$, are investigated further. For $m=5$, we show that there exist infinitely many primes $p$ such that $V$ is an $\mathbb{F}_{p^2}$-maximal curve of genus $4$. The smallest such primes are $29,59,149,239,839$. For $m=p-1$ we prove that $V$ has as many as $(p-2)!$ points over $\mathbb{F}_p$ and has no further points over $\mathbb{F}_{p^2}$. We also point out a connection with previous work of R\'edei about the famous Minkowski conjecture proven by Haj\'os (1941), as well as with a more recent result of Rodr\'iguez Villegas, Voloch and Zagier (2001) on plane curves attaining the St\"ohr-Voloch bound, and the regular sequence problem for systems of diagonal equations introduced by Conca, Krattenthaler and Watanabe (2009). \end{abstract} \title{A generalization of Bring's curve in any characteristic} \noindent {\em Keywords}: algebraic curves, function fields, positive characteristic, automorphism groups. \noindent \noindent {\em Subject classifications}: \noindent 14H37, 14H05. \section{Introduction} Bring's curve is well known from classical geometry as being the curve with the largest automorphism group among all genus $4$ complex curves; see \cite{BN}. Its canonical representation in the complex $4$-space is the complete intersection of three algebraic hypersurfaces of equations $x_1^k+\cdots +x_5^k=0$ with $1\leq k\leq 3$. A natural generalization of Bring's curve, valid over any field $\mathbb{K}$ of zero characteristic or positive characteristic $p\ge 7$, is the algebraic variety $V$ of $\textrm{PG}(m-1,\mathbb{K})$ which is the complete intersection of the projective algebraic hypersurfaces of homogeneous equations \begin{equation} \label{sy}\left\{ \begin{array}{llll} X_1 + X_2 + \ldots + X_m=0;\\ X_1^2 + X_2^2 + \ldots + X_m^2=0;\\ \cdots\cdots\\ \cdots\cdots\\ X_1^{m-2} + X_2^{m-2} + \ldots+X_m^{m-2}=0; \end{array} \right. \end{equation} where $m\geq 5$. From now on we assume $m\le p-1$ when $\mathbb{K}$ has characteristic $p$. The first equation in (\ref{sy}) implies that $V$ is contained in the hyperplane of homogeneous equation $X_1 + X_2 + \ldots + X_m=0$. Up to a change of coordinates in $\textrm{PG}(m-1,\mathbb{K})$, we show that $V$ is a projective, absolutely irreducible, non-singular curve of degree $(m-2)!$ embedded in $\textrm{PG}(m-2,\mathbb{K})$; see Theorem \ref{the2502}. The symmetric group ${\rm{Sym}}_m$ has a natural action on the coordinates $(X_1:\ldots: X_m)$ of $\textrm{PG}(m-1,\mathbb{K})$.
Therefore, the automorphism group $\mbox{\rm Aut}(V)$ of $V$ has a subgroup $G$ isomorphic to ${\rm{Sym}}_m$. It seems plausible that $G=\mbox{\rm Aut}(V)$, and this is proven to be true if $\mbox{\rm Aut}(V)$ is tame, and so in particular in zero characteristic; see Theorem \ref{the171021}. The non-tame case, i.e. if $V$ has an automorphism of order $p$ fixing a point of $V$, remains to be worked out. In Section \ref{secqc} we investigate the quotient curves of $V$ with respect to subgroups of $G$. The most interesting cases for the choice of a subgroup are the stabilizers of one or more coordinates. Since two such subgroups are conjugate in $G$ if they fix the same number of coordinates, it is enough to consider the stabilizer $G_{d+1,\ldots,m}$ of the coordinates $X_{d+1},\ldots,X_{m}$ in $\mbox{\rm Aut}(V)$ with $1\le d \le m-1$. A careful analysis of the actions of these stabilizers on the points of $V$, carried out in Section \ref{secqc}, together with the Hurwitz genus formula, allows us to compute the genus of the quotient curve $V_d=V/G_{d+1,\ldots,m}$; see Proposition \ref{pro11giugno}. The quotient curve $V_d$ turns out to be rational for $d=m-2$; otherwise its genus $\mathfrak{g}$ is quite large; see (\ref{eq11giugno}). The projection of $V$ from the $(m-d-1)$-dimensional projective subspace $\Sigma$ of equations $X_1=0,\ldots,X_d=0$ is also a useful tool for the study of $V$, by virtue of the fact that $\Sigma\cap V=\emptyset$. Let $\bar{V}_d$ be the curve obtained by projecting $V$ from the vertex $\Sigma$ to a projective subspace $\Sigma'$ of dimension $d-1$ disjoint from $\Sigma$. Comparison of $\bar{V}_d$ with the quotient curve $V_d$ shows that they are actually isomorphic; see Proposition \ref{pro5mar}. Therefore, the function field extension $\mathbb{K}(V):\mathbb{K}(\bar{V}_d)$ is Galois, that is, the vertex $\Sigma$ can be viewed as a Galois subspace (also called higher dimensional Galois-point) for $V$. This shows that there are many higher dimensional Galois subspaces for $V$, which is a somewhat rare phenomenon. For results and open problems on Galois points, see the recent papers \cite{AR,Fu,FH,H,KLT}. In Section \ref{sq} the case $d=m-3$ is investigated more closely. An explicit equation of $V_{m-3}$ is given in Theorem \ref{the24jun}, which shows that if $\mathbb{K}$ is either an algebraic closure of the rational field $\mathbb{Q}$, or of the prime field $\mathbb{F}_p$, then $V_{m-3}$ coincides with the plane curve of degree $m-2$ investigated by Rodr\'iguez Villegas, Voloch and Zagier \cite{voloch}, who pointed out that this curve has many points; in particular, for $m=p-1$, it is a non-singular plane curve attaining the St\"ohr-Voloch bound. From an algebraic number theory point of view, (\ref{sy}) is a particular system of diagonal equations. Proposition \ref{prop10.03} shows that every solution of (\ref{sy}) also satisfies further diagonal equations. On the other hand, Lemmas \ref{lem21jun21} and \ref{lem21jun21bis} imply that $\{1,2,\ldots,m\}$ is associated to a regular sequence of symmetric polynomials, i.e. the system of diagonal equations $x_1^k+\cdots +x_m^k=0$ with $k$ ranging over $\{1,2,\ldots, m\}$ has only the trivial solution $(0,0,\ldots,0)$. This gives a different proof for Proposition 2.9 of the paper of Conca, Krattenthaler and Watanabe \cite{CKW}, where the authors relied on the interpretation of the $q$-analogue of the binomial coefficient as a Hilbert function.
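To illustrate the regular sequence property in the smallest case, not needed later, take $m=3$ and write $p_k=x_1^k+x_2^k+x_3^k$ and $e_1,e_2,e_3$ for the elementary symmetric polynomials. Newton's identities give
\[
p_1=e_1,\qquad p_2=e_1p_1-2e_2,\qquad p_3=e_1p_2-e_2p_1+3e_3,
\]
so $p_1=p_2=p_3=0$ forces $e_1=e_2=e_3=0$ (here $2$ and $3$ are invertible since the characteristic is $0$ or $p\ge 7$), whence $x_1,x_2,x_3$ are the roots of $T^3=0$ and the only solution is the trivial one.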
In the same paper \cite{CKW}, the general notion of a regular sequence associated to $\mathcal{A}\subset \mathbb{N}^*$ was introduced as any system of diagonal equations $x_1^k+\cdots +x_m^{k}=0$ with $k\in \mathcal{A}$ which has only the trivial solution $(0,0,\ldots,0)$. \cite[Lemma 2.8]{CKW} shows (over the complex field) a simple necessary condition on $\mathcal{A}$ to be associated to a regular sequence, namely that $m!$ divides the product of the integers in $\mathcal{A}$. In this context, the major issue, namely to find easily expressible sufficient conditions, turned out to be surprisingly difficult. For the smallest case $m=3$, the conjecture is that the above necessary condition is also sufficient, that is, $\mathcal{A}=\{a,b,c\}$ is associated to a regular sequence whenever $abc\equiv 0 \pmod 6$. Evidence for this conjecture was given in \cite{CKW} and later in \cite{chen}. For further developments related to regular sequences, see \cite{DZ,FS,GGW,KM,MSW}. In positive characteristic, an algebraic closure of the prime field $\mathbb{F}_p$ with $p\ge 7$ is chosen for $\mathbb{K}$ and two extremal cases are investigated further, namely $m=5$ and $m=p-1$. In the former case, $V$ can be viewed as the characteristic $p$ version of the Bring curve, and for infinitely many primes $p$ we prove that $V$ is an $\mathbb{F}_{p^2}$-maximal curve, that is, the number of points of $V$ defined over $\mathbb{F}_{p^2}$ attains the Hasse-Weil upper bound $p^2+1+2\mathfrak{g}p=p^2+1+8p$. We also show that the smallest such primes are $29,59,149,239,839$. Our result gives a new contribution to the study of $\mathbb{F}_{p^2}$-maximal curves of small genera, initiated by Serre in 1985 and continued to the present day; see \cite{bgkm,gklm,gkm,ser}. For $m=p-1$, instead, $V$ is never $\mathbb{F}_{p^2}$-maximal. In fact, the number of points of $V$ does not increase passing from $\mathbb{F}_p$ to $\mathbb{F}_{p^2}$, so that $V$ has as many as $(p-2)!$ points defined over $\mathbb{F}_{p^2}$ whereas its genus is equal to $\frac{1}{4}(((p-3)(p-4)-4)(p-3)!)+1$; see Lemmas \ref{lem1210}, \ref{lem11octE}, \ref{lem12oct}, and Theorem \ref{nopuntiFp2}. A more general question for $m=p-1$ is to determine the size of the set $V(\mathbb{F}_{p^i})$ consisting of all points of $V$ in $\textrm{PG}(p-2,\mathbb{F}_{p^i})$. For $i=1$, Lemma \ref{lem1210} shows $|V(\mathbb{F}_{p})|=(p-2)!$. Unfortunately, the elementary argument used in the proof of Lemma \ref{lem1210} seems far away from being sufficient to tackle the general case. Even in the second particular case $i=2$, settled in Theorem \ref{nopuntiFp2}, the proof relies on the St\"ohr-Voloch bound and on the structure of the $G$-orbits on the points of $V$ determined in Section \ref{secautgroup}. On the other hand, Theorem \ref{the181021} yields $|V(\mathbb{F}_{p^d})|> |V(\mathbb{F}_{p})|$ where $d$ stands for the smallest integer such that $(p-2)\mid (p^d-1)$. The proof uses methods from Galois theory. The problem whether $|V(\mathbb{F}_{p^i})|> |V(\mathbb{F}_{p})|$ holds for some intermediate value of $i$ remains unsolved. An important question on the geometry of curves is to compute the possible intersection multiplicities $I(P,V\cap \Pi)$ between $V$ and hyperplanes $\Pi$ through a point $P$ generally chosen in $V$, also called orders. In the classical case (zero characteristic), the orders are precisely the non-negative integers not exceeding the dimension of the space where the curve is embedded, but this may fail in positive characteristic.
If this occurs then the curve is called non-classical. For plane curves, non-classicality means that all non-singular points of the curve are flexes. In our case, Lemma \ref{lem17ag21} shows the existence of a hyperplane $\Pi$ such that $I(P,V\cap \Pi)$ is at least $p$. Since $V$ is embedded in $\textrm{PG}(p-3,\mathbb{K})$, this implies that $V$ is non-classical; see Theorem \ref{the17ag21}. In Section \ref{secc1}, the intersection multiplicities $I(P,V\cap \Pi)$ for $P\in V(\mathbb{F}_p)$ are investigated. By Proposition \ref{pro211021}, $1,2,3$ are such intersection multiplicities. This together with Lemma \ref{lem17ag21} yields that $1,2,3,p$ are orders of $V$. Another concept of non-classicality, due to St\"ohr and Voloch \cite{sv}, arose from their studies on the maximum number $N_{p^i}=N_{p^i}(\mathfrak{g},r,d)$ of points that an irreducible curve of a given genus $\mathfrak{g}$, embedded in $\textrm{PG}(r,\mathbb{F}_{p^i})$ as a curve of degree $d$, can have. The St\"ohr-Voloch upper bound on $N_{p^i}=N_{p^i}(\mathfrak{g},r,d)$ is known to be a deep result; in particular it may be used to give a proof for the Hasse-Weil upper bound. Also, the St\"ohr-Voloch bound shows that the curves with large $N_{p^i}=N_{p^i}(\mathfrak{g},r,d)$ have the property that the osculating hyperplane to the curve at a generically chosen point on it also passes through the Frobenius image of the point. A curve with this purely geometric property can only exist in positive characteristic, and if this occurs the curve is called Frobenius non-classical. Apart from the trivial case $p^i=2$, Frobenius non-classical curves are also non-classical. Theorem \ref{the191021} shows that $V$ is Frobenius non-classical. However, this does not hold true for the quotient curves of $V$ with respect to the stabilizers of the coordinates. For instance, for $m=p-1$, $V_{m-3}$ has degree $p-3<p$ and hence it is a classical (and Frobenius classical) plane curve. Over a finite field, the number of solutions of a system of diagonal equations has been the subject of many papers where the authors mainly rely on character sums and the distributions of their values. In the present paper we do not move in this direction. We point out a connection between (\ref{sy}) over $\mathbb{F}_p$ and R\'edei's work related to the famous Minkowski conjecture, originally proven by Haj\'os in 1941. R\'edei proved that the Minkowski conjecture holds if the following claim is true: if an elementary abelian group of order $p^2$ is factored as the product of two subsets of size $p$, both containing the identity element, then at least one of the factors is a subgroup; see \cite{redei}. R\'edei and later on Wang, Panico and Szab\'o \cite{szabo} showed that this claim is true if each solution $[\xi_1,\ldots,\xi_p]$ over $\mathbb{F}_p$ of the system of diagonal equations \begin{equation} \label{syA}\left\{ \begin{array}{llll} X_1 + X_2 + \ldots + X_p=0;\\ X_1^2 + X_2^2 + \ldots + X_p^2=0;\\ \cdots\cdots\\ \cdots\cdots\\ X_1^{(p-1)/2} + X_2^{(p-1)/2} + \ldots+X_p^{(p-1)/2}=0. \end{array} \right. \end{equation} has either equal components or $[\xi_1,\ldots,\xi_p]$ is a permutation of the elements of $\mathbb{F}_p$. That such particular $p$-tuples are exactly the solutions over $\mathbb{F}_p$ of system (\ref{syA}) was first shown by R\'edei himself in \cite{redei}. The above quoted paper by Wang, Panico and Szab\'o also contains a proof.
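Both types of $p$-tuples are indeed solutions of (\ref{syA}); we record the easy verification for the reader's convenience. A constant $p$-tuple $[\xi,\ldots,\xi]$ satisfies every equation since $p\,\xi^k=0$ in $\mathbb{F}_p$. If $[\xi_1,\ldots,\xi_p]$ is a permutation of the elements of $\mathbb{F}_p$ and $1\le k\le (p-1)/2$, then
\[
\sum_{j=1}^{p}\xi_j^{\,k}=\sum_{x\in\mathbb{F}_p}x^{k}=\sum_{i=0}^{p-2}g^{ik}=\frac{g^{k(p-1)}-1}{g^{k}-1}=0,
\]
where $g$ is a generator of the cyclic group $\mathbb{F}_p^*$ and $g^{k}\neq 1$ because $(p-1)\nmid k$. For instance, for $p=7$ the $7$-tuple $[0,1,2,3,4,5,6]$ is a solution of (\ref{syA}).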
In Section \ref{secredei}, we give a geometric interpretation of their results in terms of the variety $V$. \section{Basic definitions and notation} \label{back} \subsection{Automorphism groups} In this paper, $p\ge 7$ stands for a prime and $\mathbb{K}$ for an algebraically closed field of characteristic zero or $p$. Also, we let $m\geq 5$, and we assume $m\leq p-1$ when $\mathbb{K}$ has characteristic $p$. Henceforth, ${\bf{V}}_{m}$ denotes the $m$-dimensional $\mathbb{K}$-vector space with basis $\{X_1,X_2,\ldots,X_{m}\}$, and $\textrm{PG}(m-1,\mathbb{K})$ the $(m-1)$-dimensional projective space over $\mathbb{K}$ arising from ${\bf{V}}_{m}$. The group $G={\rm{Sym}}_{m}$, viewed as the symmetric group on $\{X_1,\ldots, X_{m}\}$, has a natural faithful representation in ${\bf{V}}_{m}$, where $g\in G$ takes the vector ${\bf{v}}=\sum_{i=1}^{m}\lambda_iX_i$ to the vector ${\bf{v}}'=\sum_{i=1}^{m}\lambda_ig(X_i)$. With ${\bf{v}}'=g({\bf{v}})$, $g$ becomes a linear transformation of ${\bf{V}}_{m}$ and $G$ is viewed as a subgroup of $\textrm{GL}(m,\mathbb{K})$. Since no non-trivial element of $G$ preserves each $1$-dimensional subspace of ${\bf{V}}_{m}$, $G$ acts faithfully on $\textrm{PG}(m-1,\mathbb{K})$ and $G$ can also be regarded as a subgroup of the projective linear group $\textrm{PGL}(m,\mathbb{K})$ of $\textrm{PG}(m-1,\mathbb{K})$. For $1\le i < j \le m$, the transposition $\sigma=(X_iX_j)$, as a linear transformation of ${\bf{V}}_{m}$, is associated with the $(0,1)$-matrix $(g_{l,n})_{l,n}$ whose $1$ entries are $g_{l,l}$ with $l\neq i,j$, together with $g_{ij}$ and $g_{ji}$. As a projectivity of $\textrm{PG}(m-1,\mathbb{K})$, $\sigma$ is the involutory homology whose center is the point $C=(0:\cdots: 1:\cdots:-1:\cdots:0)$, where $1$ and $-1$ are in the $i$-th and $j$-th positions, respectively, and whose axis is the hyperplane $\Pi$ of equation $X_i=X_j$. Therefore, the fixed points of $\sigma$ are those of $\Pi$ and $C$, whereas the fixed hyperplanes of $\sigma$ are those through $C$ and $\Pi$. Since any two transpositions with no common fixed point commute, the center of either lies on the axis of the other. The $m$-cycle $\sigma=(X_{m}X_{m-1}\ldots X_1)$, as a linear transformation of ${\bf{V}}_{m}$, is associated with the $(0,1)$-matrix $(g_{l,n})_{l,n}$ whose $1$ entries are $g_{l,l+1}$ for $l=1,\ldots,m-1$ and $g_{m,1}$. As a projectivity, $\sigma$ fixes the point $P_\omega=(\omega:\omega^2:\cdots:\omega^{m}=1)$, where $\omega$ is any element whose order divides $m$. Similarly, the $(m-1)$-cycle $\sigma=(X_{m-1}X_{m-2}\cdots X_1)$, as a linear transformation of ${\bf{V}}_{m}$, is associated with the $(0,1)$-matrix $(g_{l,n})_{l,n}$ whose $1$ entries are $g_{l,l+1}$ for $l=1,\ldots,m-2$ and $g_{m-1,1},g_{m,m}$. As a projectivity, $\sigma$ fixes the point $P_\varepsilon=(\varepsilon:\varepsilon^2:\cdots:\varepsilon^{m-1}=1:0)$ for any element $\varepsilon$ of $\mathbb{K}$ whose order divides $m-1$. \subsection{Projections} We also recall how the projection from a linear subspace $\Sigma$ of dimension $d$ to a disjoint linear subspace $\Sigma'$ is performed. If $\mathcal C$ is an irreducible, non-singular algebraic curve embedded in $\textrm{PG}(m-1,\mathbb{K})$, which is disjoint from $\Sigma$, then the projection determines a regular mapping $\Pi: \mathcal C \rightarrow \textrm{PG}(m-2-d,\mathbb{K})$. The geometric interpretation of $\Pi$ is straightforward.
Take any ($m-2-d$)-dimensional subspace $\Sigma'$ of $\textrm{PG}(m-1,\mathbb{K})$ disjoint from $\Sigma$. Then through any point $P\in \mathcal C$ there is a unique ($d+1$)-dimensional subspace $\Lambda_P$ containing $\Sigma$ and this subspace meets $\Sigma'$ in a unique point, the image $\Pi(P)$ of $P$ projected from $\Sigma$. Moreover, $\mathcal C$ is projected into an irreducible, possibly singular, curve $\mathcal F$ embedded in $\Sigma'\cong \textrm{PG}(m-2-d,\mathbb{K})$.\\ In the particular case where $\Sigma$ and $\Sigma^{\prime}$ are the subspaces defined as the intersections of the hyperplanes of equations $X_{d+2}=\ldots=X_{m}=0$ and $X_{1}=\ldots=X_{d+1}=0$, respectively, the projection from the vertex $\Sigma$ to $\Sigma^{\prime}$ maps a point $P=(x_1:\ldots:x_{m})\in \textrm{PG}(m-1,\mathbb{K})\setminus \Sigma$ to the point $P'=(0:\ldots:0:x_{d+2}:\ldots:x_{m})$.\\ Let $H$ be the subgroup of $\textrm{PGL}(m,\mathbb{K})$ which preserves $\mathcal C$ and fixes each ($d+1$)-dimensional subspace through $\Sigma$. The points of the quotient curve $\bar{\mathcal C}=\mathcal C/H$ can be viewed as $H$-orbits of the points of $\mathcal C$. Therefore every point $\bar{P}\in\bar{\mathcal C}$ is identified by a unique $H$-orbit, say $\overline{H_P}$. Moreover, such an $H$-orbit $\overline{H_P}$ is contained in a unique ($d+1$)-dimensional subspace through $\Sigma$ which also meets $\mathcal C$ in a unique point. Therefore, the notation $\Lambda_P$ for that subspace passing through the point $P\in\mathcal C$ is meaningful. Thus the map $\bar{P}=\overline{H_P}\rightarrow \Lambda_P \rightarrow \Pi(P)$ is well defined. Actually it is a surjective morphism from $\bar{\mathcal C}$ to $\mathcal F$. It is bijective if and only if $|H|=|\Lambda_P\cap\mathcal C|$ for all but finitely many points $P\in \mathcal C$. In this case $\Sigma$ is called a $d$-dimensional outer \emph{Galois subspace}, and for $d=0$ an outer \emph{Galois}-point; see \cite{KLT}. If $N$ is a subgroup of the normalizer of $H$ in the $\mathbb{K}$-automorphism group of $\mathcal C$ then the quotient group $N/H$ is a subgroup of the $\mathbb{K}$-automorphism group of $\bar{\mathcal C}$. The genera $\mathfrak{g}(\mathcal C)$ and $\mathfrak{g}(\bar{\mathcal C})$ of the curves $\mathcal C$ and $\bar{\mathcal C}$ are linked by the Hurwitz genus formula. In particular, if $H$ is tame, that is, the characteristic of $\mathbb{K}$ is either zero or equal to $p$ and, in the latter case, $p$ is prime to the order $\ell=|H|$ of $H$, then \begin{equation} \label{hufo} 2\mathfrak{g}(\mathcal C)-2=|H| (2\mathfrak{g}(\bar{\mathcal C})-2)+\sum_{i=1}^r(\ell-\ell_i) \end{equation} where $\ell_1,\ldots,\ell_r$ denote the lengths of the short orbits of $H$ on $\mathcal C$. The equations in (\ref{sy}) define a (possibly reducible and singular) projective variety $V$ of $\textrm{PG}(m-1,\mathbb{K})$ so that the points of $V$ are the nontrivial solutions of (\ref{sy}) up to a non-zero scalar. Actually, $V$ is contained in the hyperplane of $\textrm{PG}(m-1,\mathbb{K})$ of equation $X_1+X_2+\ldots +X_{m}=0$. Therefore, $V$ is a projective variety embedded in $\textrm{PG}(m-2,\mathbb{K})$. Moreover, $G={\rm{Sym}}_{m}$ preserves $V$ and no nontrivial element of $G$ fixes $V$ pointwise. Therefore, $G$ is a subgroup of the $\mathbb{K}$-automorphism group of $V$.
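As a quick illustration of formula (\ref{hufo}), standard and not needed later, consider the special case where $H$ has prime order $\ell$ (with $\ell$ prime to $p$ in positive characteristic). Then every short orbit is a fixed point, so $\ell_i=1$ for all $i$, and if $H$ has exactly $\delta$ fixed points on $\mathcal C$ the formula reads
\[
2\mathfrak{g}(\mathcal C)-2=\ell\,(2\mathfrak{g}(\bar{\mathcal C})-2)+\delta(\ell-1);
\]
for an involution ($\ell=2$) this gives $\mathfrak{g}(\mathcal C)=2\mathfrak{g}(\bar{\mathcal C})+\tfrac{\delta}{2}-1$.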
\subsection{B\'ezout's theorem} The higher dimensional generalization of B\'ezout's theorem about the number of common points of $m-1$ hypersurfaces $\mathcal H_1,\ldots,\mathcal H_{m-1}$ of $\textrm{PG}(m-1,\mathbb{F}_q)$ states that either that number is infinite or it does not exceed the product $\mbox{\rm deg}(\mathcal H_1)\cdot\ldots \cdot\mbox{\rm deg}(\mathcal H_{m-1})$. For a discussion on B\'ezout's theorem and its generalization, see \cite{Vog}. \subsection{Background on algebraic curves} We report some background from \cite[Chapter 7]{HKT}; see also \cite{sv}. Let $\Gamma$ be a projective, absolutely irreducible, not necessarily non-singular curve, embedded in a projective space ${\rm{PG}}(r,\mathbb{K})$. For a non-singular model $\mathcal X$ of $\Gamma$, let $\mathbb{K}(\mathcal X)=\mathbb{K}(\Gamma)$ denote the function field of $\mathcal X$. There exist bijections between the points of $\mathcal X$, the places of $\mathbb{K}(\mathcal X)$, and the branches of $\Gamma$. For any point $P\in\Gamma$ (more precisely, for any branch of $\Gamma$ centered at $P$) the different intersection multiplicities of hyperplanes with the curve at $P$ are considered. There is only a finite number of these intersection multiplicities, the number being equal to $r+1$. There is a unique hyperplane with the maximum intersection multiplicity, called the osculating hyperplane. The hyperplanes cut out on $\Gamma$ a simple, fixed point free, not necessarily complete linear series $\Sigma$ of dimension $r$ and degree $n$, where $n$ is the degree of the curve $\Gamma$. An integer $j$ is a $(\Sigma,P)$-order if there is a hyperplane $H$ such that $I(P,H\cap \Gamma)=j$. Notice that, if $P$ is a singular point, then $P$ is intended as a branch of $\Gamma$ centered at $P$. In the case that $\Sigma$ is the canonical series, it follows from the Riemann--Roch theorem that $j$ is a $(\Sigma,P)$-order if and only if $j+1$ is a Weierstrass gap. For any non-negative integer $i$, consider the set of all hyperplanes $H$ of $\textrm{PG}(r,\mathbb{K})$ for which the intersection number is at least $i$. Such hyperplanes correspond to the points of a subspace $\overline{\Pi}_i$ in the dual space of $\textrm{PG}(r,\mathbb{K})$. Then we have the decreasing chain $$ \textrm{PG}(r,\mathbb{K}) = \overline{\Pi}_0 \supset \overline{\Pi}_1 \supset \overline{\Pi}_2 \supset \cdots. $$ An integer $j$ is a $(\Sigma,P)$-order if and only if $\overline{\Pi}_j$ is not equal to the subsequent space in the chain. In this case $\overline{\Pi}_{j+1}$ has codimension 1 in $\overline{\Pi}_j$. Since $\mbox{\rm deg}\,\, \Sigma=n$, we have that $\overline{\Pi}_i$ is empty as soon as $i> n$. The number of $(\Sigma,P)$-orders is exactly $r+1$; they are $j_0(P),j_1(P),\ldots,j_r(P)$ in increasing order, and $(j_0(P),j_1(P),\ldots,j_r(P))$ is the order-sequence of $\Gamma$ at $P$. Here $j_0(P)=0$, and $j_1(P)=1$ if and only if the branch $P$ is linear (in particular when $P$ is a non-singular point). Consider the intersection $\Pi_i$ of the hyperplanes $H$ of $\textrm{PG}(r,\mathbb{K})$ for which $$ I(P,H\cap\gamma)\geq j_{i + 1}. $$ Then the flag $ \Pi_0\subset \Pi_1\subset \cdots \subset \Pi_{r-1} \subset \textrm{PG}(r,\mathbb{K}) $ can be viewed as the algebraic analogue of the Frenet frame in differential geometry.
Notice that $\mathbf Pi_0$ is just $P$, the centre of the branch $\gamma$, and $\mathbf Pi_1$ is the tangent line to the branch $\gamma$ at $P$. Furthermore, $\mathbf Pi_{r-1}$ is the osculating hyperplane at $P$. The order-sequence is the same for all but finitely many points of $\Gamma$, each such exceptional point is called a $\mathbf Sigma$-Weierstrass point of $\Gamma$. The order-sequence at a generally chosen point of $\Gamma$ is the order sequence of $\Gamma$ and denoted by $(\varepsilon_0,\varepsilon_1,\elldots,\varepsilon_r)$. Here $j_i(P)$ is at least $\varepsilon_i$ for $0\elle i \elle r$ at any point of $\Gamma$. Now let $\Gamma$ be defined over $\mathbb{F}_\ell$ and viewed as a curve over the algebraic closure $\mathbb{K}=\bar{\mathbb{F}}_\ell$. St\"ohr and Voloch \cite{sv} (see also \cite[Chapter 8]{HKT}) introduced a divisor with support containing all points of $P\in \Gamma$ for which the osculating hyperplane contains the Frobenius image $\mathbf Phi(P)$ of $P$. Since every $\mathbb{F}_\ell$-rational point has this property an upper bound on the number of $\mathbb{F}_\ell$-rational points is obtained, unless all (but a finitely many) osculating hyperplanes have that property. In such an exceptional case, the curve is called Frobenius non-classical. Curves with many rational points are often Frobenius non classical (and in particular, non-classical for $q\neq 2$). Actually, St\"ohr and Voloch were able to give a bound on the number of $\mathbb{F}_\ell$-rational points for any (Frobenius classical, or non-classical) curve $\Gamma$. There exists a sequence of increasing non-negative integers $\nu_0,\elldots,\nu_{r-1}$ with $\nu_0\gammaeq 0$ such that $ \mbox{\rm det}(W_{\gammaz}^{\nu_0,\elldots,\nu_{r-1}}(x_0,\elldots,x_r)) \neq 0. $ In fact, if the $\nu_i$ are chosen minimally in lexicographical order$,$ then there exists an integer $I$ with $0 < I \elleq r$ such that $$ \nu_i = \elleft\{ \begin{array}{ll} \varepsilon_i & \quad \mbox{for $i < I$},\\ \varepsilon_{i+1} & \quad \mbox{for $i \gammaeq I$}. \end{array}\right. $$ The St\"ohr-Voloch divisor of $\Gamma$ is $$ S = \mathbf Div(W_{\gammaz}^{\nu_0,\elldots,\nu_{r-1}}(x_0,\elldots,x_r)) + (\nu_1 + \cdots + \nu_{r-1})\,\mathbf Div(d\gammaz) + (q + r)E, $$ where $E$ and $e_P$ are defined as before for the ramification divisor. The sequence $(\nu_0,\nu_1,\elldots,\nu_{r-1})$ is the Frobenius order-sequence, and $\Gamma$ is Frobenius-classical if $\nu_i=i$ for $0\elle i \elle r$. Also, $\mbox{\rm deg}\, S = (\nu_1 + \cdots + \nu_{r-1})(2\mathfrak{g} - 2) + (q+r)n$, where $n=\mbox{\rm deg}(\Gamma).$ From this the St\"ohr-Voloch bound follows: $$|\Gamma(\mathbb{F}_\ell)|\elle \frac{1}{r}\Big( (\nu_1 + \cdots+ \nu_{r-1})(2\mathfrak{g}-2) + (\ell +r)n\Big),$$ which is a deep result on the number of $\mathbb{F}_{\ell}$-rational points of $\mathbb{F}_\ell$-rational curves. For instance, the Hasse-Weil upper bound was re-proven in \cite{sv}; see also \cite[Chapter 9]{HKT}. \section{Some particular solutions of system (\ref{sy}) of diagonal equations} \begin{lemma} \ellabel{lem6pct} Let $\varepsilon\in\mathbb{K}$ be a $(m-1)$-th primitive root of unity. Then $(\varepsilon,\varepsilon^2,\elldots,\varepsilon^{m-1}=1,0)$ is a solution of system \eqref{sy}. \end{lemma} \begin{proof} Take an integer $i$ such that $1\elle i \elle m-2$, and let $\theta=\varepsilon ^i$. Then $\theta$ is a ($m-1$)-th root of unity (non necessarily primitive). Furthermore, $\theta\neq 1$ as $\varepsilon$ is a primitive ($m-1$)-th root of unity. 
The $i$-th equation in (\ref{sy}) is satisfied by $(\varepsilon,\varepsilon^2,\elldots,\varepsilon^{m-1},0)$ if and only if $\sum_{k=1}^{m-1} \theta^k=0$. This sum equals $(\theta^{m}-1)/(\theta-1)-1$, and hence it is zero. \end{proof} The same argument also proves the following result. \begin{lemma}\ellabel{lem1210} Let $\omega\in \mathbb{K}$ be a $m$-th primitive root of unity. Then $(\omega,\omega^2,\elldots,\omega^{m}=1)$ is a solution of system \eqref{sy}. \end{lemma} \begin{lemma} \ellabel{lem11octA} System (\ref{sy}) has no nontrivial solution $(x_1,x_2,\elldots,x_{m-1},x_{m})$ for $x_{m-1}=x_{m}=0.$ \end{lemma} \begin{proof} For a solution ${\bf{x}}=(x_1,x_2,\elldots,x_{m-1},x_{m})$ of system (\ref{sy}), let $y_1,\elldots,y_k$ be the pairwise distinct non-zero coordinates of ${\bf{x}}$. For $1\elle j \elle k$, the multiplicity of $y_j$ is defined to be the number $n_j$ of the coordinates of ${\bf{x}}$ which are equal to $y_j$. Since $x_{m-1}=x_{m}=0$ we have $k\elle m-2$. The first $k$ equations of (\ref{sy}) read \begin{equation} \ellabel{sy11}\elleft\{ \begin{array}{llll} n_1y_1+n_2y_2+\elldots + n_ky_k =0;\\ n_1y_1^2+n_2y_2^2+\elldots+n_ky_k^2=0;\\ \cdots\cdots\\ \cdots\cdots\\ n_1y_1^k+n_2y_2^k+ \elldots+n_ky_k^k=0. \end{array} \right. \end{equation} Then $(y_1,y_2,\elldots,y_k)$ is a nontrivial solution of the linear system \begin{equation} \ellabel{sy11U}\elleft\{ \begin{array}{llll} n_1X_1 + n_2X_2 + \elldots + n_kX_k=0;\\ n_1y_1X_1 + n_2y_2X_2+\elldots +n_ky_kX_k=0;\\ \cdots\cdots\\ \cdots\cdots\\ n_1y_1^{k-1}X_1 + n_2y_2^{k-1}X_2+ \elldots+n_ky_k^{k-1}X_k=0. \end{array} \right. \end{equation} whose determinant is equal to the product of ${\bf P}rod_{i=1}^k n_i$ by the $k\times k$ Vandermonde determinant $\mathbf Delta={\bf P}rod_{1\elle i <j \elle k} (y_i-y_j)$. Since either $p=0$ or, if $p>0$ then $n_i<p$ holds, it turns out that $\mathbf Delta=0$ and hence $y_i=y_j$ for some $i\ne j$. But this contradicts the definition of $y_1,\elldots,y_k$. \end{proof} \begin{lemma} \ellabel{lem11octB} No nontrivial solution $(x_1,x_2,\elldots,x_{m-1},x_{m})$ of system (\ref{sy}) has either \begin{itemize} \item[\rm(i)] four coordinates $x_{i_1},x_{i_2}, x_{i_3},x_{i_4}$ such that $x_{i_1}=x_{i_2}$ and $x_{i_3}=x_{i_4}$ with pairwise distinct indices $i_1,i_2,i_3,i_4$, or \item[\rm(ii)] three coordinates $x_{i_1},x_{i_2}, x_{i_3}$ such that $x_{i_1}=x_{i_2}=x_{i_3}$ with pairwise distinct indices $i_1,i_2,i_3$, or \item[\rm(iii)] three coordinates $x_{i_1},x_{i_2}, x_{i_3}$ such that $x_{i_1}=x_{i_2}$ and $x_{i_3}=0$ with two distinct indices $i_1,i_2$. \end{itemize} \end{lemma} \begin{proof} We use the argument in the proof of Lemma \ref{lem11octA}. For a non-trivial solution ${\bf{x}}=(x_1,x_2,\elldots,x_{m-1},x_{m})$ of system (\ref{sy}), let $y_1,\elldots,y_k$ be the pairwise distinct non-zero coordinates of ${\bf{x}}$ with multiplicities $n_j$ for $j=1,\elldots, k$. If ${\bf{x}}$ is a counterexample to Lemma \ref{lem11octB} then $k\elle m-2$. But the proof of Lemma \ref{lem11octA} shows that this is actually impossible. \end{proof} \begin{lemma} \ellabel{lem14oct} Up to a non-zero scalar, system (\ref{sy}) has finitely many solutions $(x_1,x_2,\elldots,x_{m-2},x_{m-1},0)$. \end{lemma} \begin{proof} We again use the idea from the proof of Lemma \ref{lem11octA}. For a non-trivial solution ${\bf{x}}=(x_1,x_2,\elldots,x_{m-1},0)$ of system (\ref{sy}), let $y_1,\elldots,y_k$ be the pairwise distinct non-zero coordinates of ${\bf{x}}$ with multiplicities $n_j$ for $j=1,\elldots, k$. 
Furthermore, one of them can be assumed to be equal to $1$. Up to a relabelling of indices, $n_k$ counts the coordinates equal to $1$. Then $(y_1,\elldots,y_{k-1})$ is a solution of \begin{equation} \ellabel{sy111}\elleft\{ \begin{array}{llll} n_1X_1 + n_2X_2 + \elldots + n_{k-1}X_{k-1}=-n_k;\\ n_1y_1X_1 + n_2y_2X_2+\elldots +n_{k-1}y_{k-1}X_{k-1}=-n_k;\\ \cdots\cdots\\ \cdots\cdots\\ n_1y_1^{k-2}X_1 + n_2y_2^{k-2}X_2+ \elldots+n_{k-1}y_{k-1}^{k-2}X_{k-1}=-n_k. \end{array} \right. \end{equation} Assume on the contrary that system (\ref{sy111}) has infinitely many solutions. Since $n_k\neq 0$, the determinant of (\ref{sy111}) vanishes. On the other hand, up to a non-zero constant, it is a Vandermonde determinant with parameters $(y_1,\elldots,y_{k-1})$. Therefore, $y_i=y_j$ must occur for some $i\ne j$ contradicting the definition of $y_1,\elldots,y_k$. \end{proof} \section{The variety defined by system \eqref{sy}} \ellabel{secvar} We keep upon with the notation introduced in Section \ref{back}. In particular, $V$ stands for the projective algebraic variety of ${\rm{PG}}(m-1,\mathbb{K})$ defined by the equations of system (\ref{sy}). Our goal is to prove that $V$ is an irreducible non-singular curve. This requires some technical lemmas involving both the geometry of $V$ and the transpositions of $G$. We begin by stating and proving them. \begin{lemma} \ellabel{lem1D} $V$ has dimension $1$. \end{lemma} \begin{proof} From \cite[Corollary 5]{sha} applied to $r=m-2$ and $n=m-1$, we have $\mbox{\rm dim}(V)\gammae 1$. On the other hand, we show that there exists a projective subspace of codimension $2$ which is disjoint from $V$. By \cite[Corollary 4]{sha}, this will yield $\mbox{\rm dim}(V)\elle m-1-(m-1-2)-1\elle 1$. Lemma \ref{lem11octA} ensures that a good choice for such a subspace of codimension 2 is the intersection $\Lambda$ of the hyperplanes of equation $X_{m}=0$ and $X_{m-1}=0$. \end{proof} \begin{lemma} \ellabel{lem1} $V$ is non-singular. \end{lemma} \begin{proof} The Jacobian matrix of $V$ is $$\nabla(V)=\frac{{\bf P}artial(f_1,\elldots,f_{m-2})}{{\bf P}artial(X_1,\elldots,X_{m})}= \elleft( \begin{array}{llll} \frac{{\bf P}artial f_1}{{\bf P}artial X_1} & \cdots & \frac{{\bf P}artial f_1}{{\bf P}artial X_{m}} \\ \frac{{\bf P}artial f_2}{{\bf P}artial X_1} & \cdots & \frac{{\bf P}artial f_2}{{\bf P}artial X_{m}} \\ \cdots & \cdots & \cdots \\ \frac{{\bf P}artial f_{m-2}}{{\bf P}artial X_1} & \cdots & \frac{{\bf P}artial f_{m-2}}{{\bf P}artial X_{m}} \\ \end{array} \right)= \elleft( \begin{array}{llll} 1 & \cdots & 1 \\ 2X_1 & \cdots & 2X_{m} \\ \cdots & \cdots & \cdots \\ (m-2)X_1^{m-3} & \cdots & (m-2)X_{m}^{m-3} \\ \end{array} \right). $$ Up to the non-zero factor $(m-2)!$, the determinants of maximum order $m-2$ are Vandermonde determinants. Therefore, for some point $P=(x_1:\elldots:x_{m})\in V$, $\nabla(V)$ evaluated at $P$ has rank less than $m-2$ if and only if $(x_1,\elldots,x_{m})$ has one of the properties (i) and (ii). But Lemma \ref{lem11octB} rules out these possibilities, and hence the point $P$ is non-singular. \end{proof} \begin{lemma} \ellabel{lem8oct} $\mbox{\rm deg}(V)=(m-2)!$. \end{lemma} \begin{proof} Let $\mathbf Pi$ be the hyperplane with equation $X_{m-1}=X_{m}$. Since $V$ has dimension $1$, $\mathbf Pi$ intersects $V$ in at least one point $P=(x_1:x_2:\elldots:x_{m-2}:x_{m-1}:x_{m-1})$. By Lemma \ref{lem11octA}, $x_{m-1}\neq 0$ and so $P=(x_1/x_{m-1}:\elldots:x_{m-2}/x_{m-1}:1:1$). 
Also, from Lemma \ref{lem11octB} $x_1/x_{m-1},\elldots,x_{m-2}/x_{m-1}$ are pairwise distinct. Since any permutation of $x_1/x_{m-1},\elldots,x_{m-2}/x_{m-1}$ gives rise to a new point of $V$ on $P$, we obtain $|\mathbf Pi\cap V|=(m-2)!$ and hence $\mbox{\rm deg}(V)\gammae (m-2)!$. On other hand, the higher dimensional generalization of B\'ezout's theorem yields $\mbox{\rm deg}(V)\elle (m-2)!$ whence the claim follows. \end{proof} \begin{lemma} \ellabel{lem10giugno} Let $P=(\mathbf xi_1:\mathbf xi_2:\cdots:\mathbf xi_{m})$ be a point of $V$, and fix $r\gammae 2$ indices $1\elle j_1<\cdots <j_r\elle m$. Let $R=(x_1:x_2:\cdots:x_{m})$ be a point of $V$ such that $x_{j_1}=\mathbf xi_{j_1},\elldots, x_{j_r}=\mathbf xi_{j_r}$. Then there is a permutation $\rho$ on $\{1,\elldots,m\}$ with $\rho(j_i)=j_i$ for $i=1,\elldots,r$ such that $x_k=\mathbf xi_{\rho(k)}$ for $k=1,\elldots,m$. \end{lemma} \begin{proof} If there is a pair $\{i,\ell\}$ with $i\ne \ell$ such that $\mathbf xi_{j_i}=\mathbf xi_{j_\ell}$, let $\mathbf Pi$ be the hyperplane of equation $X_{j_1}=X_{j_\ell}$. Lemma \ref{lem11octB} shows that the $m-2$ coordinates of $P$ other than $\mathbf xi_{j_i}$ and $\mathbf xi_{j_\ell}$ are pairwise distinct. Therefore the permutations on the coordinates other than $X_{j_i}$ and $X_{j_\ell}$ give rise as many as $(m-2)!$ pairwise distinct points of $V$. By Lemma \ref{lem8oct} these are all points in $V\cap \mathbf Pi$. From this the claim follows. If no such pair $\{i,\ell\}$ exists, take for $\mathbf Pi$ the hyperplane of equation $\mathbf Pi: X_{j_1}=\mathbf xi_{j_1}X_{m}$. Then the above argument still works, and the claim holds true. \end{proof} \begin{lemma} \ellabel{lem10octAchar0} For a $m$-th primitive root of unity $\omega$, let $P_\omega=(\omega:\omega^2:\cdots:\omega^{m}=1)$ be a point of $V$. Then the stabilizer of $P_\omega$ in $G$ is a cyclic group of order $m$, and it acts on $\{X_1,\elldots,X_{m}\}$ as a $m$-cycle. Moreover if $\mathcal{O}_\omega$ is the $G$-orbit of $P_\omega$, then $|\mathcal{O}_\omega|=(m-1)!$, and if $\mathbf Pi_\omega$ is the hyperplane $X_{m-1}=\omega X_{m}$, then $\mathbf Pi_\omega\cap \mathcal{O}_\omega=V\cap \mathbf Pi_\omega$. \end{lemma} \begin{proof} Let $u\in {\rm{PGL}}(m-1,\mathbb{K})$ be given by the $(0,1)$-matrix $\{g_{i,j}\}$ whose $1$ entries are $g_{i,i+1}$ for $i=1,2,\elldots, p-2$, and $g_{m,1}$. Clearly, $u\in G$ and $P_\omega$ is fixed by $u$. Also $u$ has order $m$ and acts on $\{X_1,\elldots,X_{m}\}$ as a $m$-cycle. Therefore, $|G_P|\gammaeq m$. We show that equality holds. Let $\sigma$ be permutation on $\{ 1, 2,\dots,m-1\}$. If $P_\omega=P_\omega^\sigma$, then $\omega^i=\omega^{\sigma(i)}$ for any $i\elleq m-1$. Thus $\sigma(i)-i\equiv 0{\bf P}mod{m}$ whence $\sigma(i)=i$ follows. Therefore $|G_P|= m$. From this $|\mathcal{O}_\omega|=|G|/|G_P|=(m-1)!$ follows. Since $\mathcal{O}_\omega\subset \mathbf Pi_\omega$ and $|V\cap \mathbf Pi_\omega|\elle \mbox{\rm deg}(V)$ by Lemma \ref{lem1}, the last claim follows from Lemma \ref{lem8oct}. \end{proof} \begin{lemma} \ellabel{lem7oct} For a $(m-1)$-th primitive root of unity $\varepsilon$, let $P_\varepsilon=(\varepsilon:\varepsilon^2:\elldots,\varepsilon^{m-1}=1:0)$. Then the stabilizer of $P_{\varepsilon}$ in $G$ is a cyclic group of order $m-1$ which acts on $\{X_1,\elldots,X_{m-1}\}$ as a cycle. \end{lemma} \begin{proof} Let $h\in {\rm{PGL}}(m-1,\mathbb{K})$ be given by the $0,1$-matrix $\{g_{i,j}\}$ whose $1$ entries are $g_{i,i+1}$ for $i=1,2,\elldots, m-2$, $g_{m-1,1}$ and $g_{m,m}$. 
Clearly $h$ takes $P_{\varepsilon}$ to the point $Q=(\varepsilon^2:\varepsilon^3:\elldots:1:\varepsilon:0)$. Actually, $Q=P_{\varepsilon}$ as the coordinates of $Q$ are proportional to those of $P_{\varepsilon}$. It remains to show that any $g\in G$ fixing $P_{\varepsilon}$ is a power of $h$. Let $g:\,(X_1:X_2:\elldots:X_{m-1}:0)\rightarrow (Y_1:Y_2:\elldots:Y_{m-1}:0)$ where $Y_1Y_2\cdots Y_{m-1}$ is a permutation ${\bf P}i$ of $X_1X_2\cdots X_{m-1}$. Since $g$ fixes $P_{\varepsilon}$, there exists $(y_1:y_2:\elldots:y_{m-1})$ with ${\bf P}i(\varepsilon^i)=y_i$ for $i=1,\elldots,m-1$ such that $(\varepsilon,\varepsilon^2,\elldots,\varepsilon^{m-1}=1)$ and $(y_1,y_2,\elldots,y_{m-1})$ are proportional. Then $y_{m-1}\varepsilon^i=y_i$ for $i=1,\elldots, m-1$. Also, there exists $1\elle j \elle m-1$ such that $y_j=1$. Therefore $y_{m-1}=\varepsilon^{-j}$ whence $y_i=\varepsilon^{i-j}.$ Thus ${\bf P}i(\varepsilon^i)=\varepsilon^{i-j}$ whence $Y_i=X_{i+(m-1-j)}$ where the indices are taken modulo $m-1$. Hence $g=h^{m-1-j}$ which proves the first claim. Since $h$ fixes $X_{m}$, $G$ acts on $\{X_1,\elldots,X_{m-1}\}$ as a cycle. \end{proof} \begin{lemma} \ellabel{lem8octU} Let $\mathbf Pi_\infty$ be the hyperplane of equation $X_{m}=0$. Then $\mathbf Pi_\infty$ intersects $V$ transversally at each of their $(m-2)!$ common points. Moreover, $\mathcal{O}_\varepsilon=V\cap\mathbf Pi_\infty$ is the orbit of $P_\epsilon$ under the action of the stabilizer of $\mathbf Pi_\infty$ in $G$. \end{lemma} \begin{proof} Since $|G|=m!$, the subgroup of $G$ preserving $\mathbf Pi_\infty$ has order $(m-1)!$. Since $P_\varepsilon\in \mathbf Pi_\infty$, Lemma \ref{lem7oct} yields that $\mathbf Pi_\infty$ contains at least $(m-1)!/(m-1)=(m-2)!$ pairwise distinct points of $V$. On the other hand, $|V\cap \mathbf Pi_\infty |\elle \mbox{\rm deg}(V)$ by Lemma \ref{lem1}. As $\mbox{\rm deg}(V)=(m-2)!$ by Lemma \ref{lem8oct}, the first claim follows. Since the stabilizer of $\mathbf Pi_\infty$ in $G$ has order $(m-1)!$, the first claim together with Lemma \ref{lem7oct} prove the second claim. \end{proof} \begin{lemma} \ellabel{lem18oct} Let $g\in G$ be a nontrivial element fixing a point of $V$. If $g$ acts on ${\bf{V}}_{m}$ fixing $X_i$ and $X_j$ then $g$ is a transposition on $\{X_1,X_2,\elldots,X_{m}\}$. Furthermore, the fixed points of $g$ in $V$ are as many as $(m-2)!$ and they are all the common points of $V$ with the hyperplane of equation $X_l=X_n$ where $X_n=g(X_l)$, and $l\ne n$. \end{lemma} \begin{proof} Up to a change of the indices, $g$ fixes $X_{m}$ and $X_{m-1}$. Let $P=(x_1,\elldots,x_{m-1},x_{m})\in V$ be a fixed point of $g$. From Lemma \ref{lem11octA}, $x_{m-1}$ and $x_{m}$ do not vanish simultaneously. Therefore, $x_{m}=1$ may be assumed. Furthermore, if $X_{g(l)}=g(X_l)$ then $g(P)=P$ yields $x_{g(l)}=cx_l$ for some $c\in \mathbb{K}$ and $l=1,\elldots,m$. In particular, since $g(X_{m})=X_{m}$, we have $x_{m}=cx_{m}$ whence $c=1$. As $g$ is nontrivial, the set $\mathcal{M}=\{l\mid l\neq g(l), 1\elle l \elle m-2\}$ is non empty. By $c=1$, $l\in \mathcal{M}$ yields that $x_{g(l)}=x_{l}$. By (i) and (ii) of Lemma \ref{lem11octB}. this implies $|\mathcal{M}|\elle 2$. Since also $|\mathcal{M}|\gammae 2$ holds, the claim follows. To show the other claims, observe that the transposition $g=(X_l X_n)$ is the involutory homology of ${\rm{PG}}(m-1,\mathbb{K})$ associated with the $(0,1)$ matrix $\{g_{u,v}\}$ whose $1$ entries are $g_{u,u}$ for $u\in \{1,2,\elldots, m\}\setminus\{l,n\}$, $g_{l,n}$ and $g_{n,l}$. 
In particular, its axis is the hyperplane $\mathbf Pi$ of equation $X_{l}=X_{n}$, and its center is the point $C=(0:0:\cdots:-1:\cdots :0: \cdots :1:\cdots :0)$. Moreover, $x_{l}=x_{n}$ as $P$ is fixed by $g$. From Lemma \ref{lem11octA}, $x_{l}=x_{n}=1$ may be assumed. This together with (i) of Lemma \ref{lem11octB} yield that $x_u\neq x_v$ for $1\elle u<v \elle m$ and $(u,v)\neq(i,j).$ Therefore, the images of $P$ under the action of the stabilizer $H$ of $X_{l}$ and $X_{n}$ in $G$ are all pairwise distinct and their number is $|H|=(m-2)!$. Since $H$ preserves both $V$ and $\mathbf Pi$, all the images of $P$ are in $V\cap \mathbf Pi$. Thus $|V\cap \mathbf Pi|\gammae (m-2)!$. On the other hand, Lemma \ref{lem8oct} shows $|V\cap \mathbf Pi|\elle (m-2)!$ whence the second part of the claim follows. \end{proof} \begin{lemma} \ellabel{lem5oct} $V$ has a unique component of dimension $1$. \end{lemma} \begin{proof} From Lemma \ref{lem1D}, the irreducible components of $V$ have dimension at most $1$, and at least one of them is an absolutely irreducible curve $\mathcal C$. Let $P$ be a common point of $\mathcal C$ and $\mathbf Pi_\infty$. By the second claim of Lemma \ref{lem8octU}, we can assume $P=P_{\varepsilon}$ for a $(m-1)$-th primitive root of unity $\varepsilon$. Since $P$ is a nonsingular point of $V$, $\mathcal C$ is the unique irreducible component of $V$ containing $P$. The last claim in Lemma \ref{lem8octU} ensures that this holds for each point in $V\cap \mathbf Pi_\infty$. In particular, the $1$-dimensional irreducible components of $V$ are exactly the images of $\mathcal C$ under the action of $G$. Clearly, these absolutely irreducible curves $\mathcal C=\mathcal C_1,\mathcal C_2,\elldots,\mathcal C_l$ have the same degree $k=\mbox{\rm deg}(\mathcal C)$, and $kl=\mbox{\rm deg}(V)$. The hyperplane $\mathbf Pi$ of equation $X_{m}=X_{m-1}$ meets $\mathcal C$ nontrivially, and let $R\in \mathcal C\cap \mathbf Pi$. Clearly the transposition $g=(X_{m}X_{m-1})$ in $G$ which fixes each $X_i$ for $1\elle i \elle m-2$ and interchanges $X_{m}$ with $X_{m-1}$ fixes $R$. Furthermore, $\mathcal C$ contains a point $S\in V$ lying on the hyperplane of equation $\mathbf Pi_\omega :X_{m-1}=\omega X_{m}$. By Lemma \ref{lem10octAchar0}, $S\in \mathcal{O}_\omega$ and some element $u\in G_S$ acts on the basis $(X_1,X_2,\elldots,X_{m})$ as a $m$-cycle. Since $S$ is a nonsingular point, $u$ preserves $\mathcal C$. Indeed, assume on the contrary that $u$ takes $\mathcal C$ to $\mathcal{C}_i$, $i\neq 1$. Then, since $u$ fixes $S$, $S$ must be in the intersection $\mathcal C\cap \mathcal{C}_i$, a contradiction with the non-singularity of $S$. Therefore $g$ and $u$ are automorphisms of $\mathcal C$. By a well known result, the group generated by the transposition $g$ and the $m$-cycle $u$ is the whole symmetric group $G={\rm{Sym}}_{m}$. This shows that $G$ preserves $\mathcal C$ and hence $l=1$. \end{proof} From now on, the irreducible non-singular curve $\mathcal C$ stands for the unique $1$-dimensional component of $V$. As $G$ preserves $\mathcal C$ and $G\elle {\rm{PGL}}(m,\mathbb{K})$, $G$ is a $\mathbb{K}$-automorphism group of $\mathcal C$. Since every hyperplane meets $\mathcal C$, the final claim of Lemma \ref{lem10octAchar0} shows that some point of $\mathcal{O}_\omega$ (and hence $P_\omega$) is in $\mathcal C$. Therefore, the $G$-orbit $\Omega_\omega$ of $P_\omega$ has length $|G|/m=(m-1)!$. 
Similarly, Lemmas \ref{lem7oct} and \ref{lem8octU} yield that $P_\varepsilon \in \mathcal C$, and hence the $G$-orbit $\Omega_\varepsilon$ of $P_\varepsilon$ has length $|G|/(m-1)=m(m-2)!$. A third short $G$-orbit $\Omega_\theta$ arises from transpositions, as Lemma \ref{lem18oct} shows that every transposition of $G$ fixes a point of $\mathcal C$. \begin{lemma} \ellabel{lem14octC} The hyperplane of equation $X_1+X_2+\elldots+X_{m}=0$ is the unique hyperplane of ${\rm{PG}}(m-1,\mathbb{K})$ which contains $\mathcal C$. \end{lemma} \begin{proof} We have $P_\omega\in \mathcal C$. Since $G$ preserves $\mathcal C$, this yields that $Q=(\omega:1:\omega^2:\cdots:\omega^{m-1})$ is in $\mathcal C$, as well. Assume that $\mathcal C$ is contained in a hyperplane $\mathbf Pi$ of equation $\alpha_1X_1+\alpha_2X_2+\elldots+\alpha_{m}X_{m}=0$ with $\alpha_i\in \mathbb{K}$. Then $\alpha_1+\omega\alpha_2+\omega^2\alpha_3+\elldots+\omega^{m-1}\alpha_{m-1}=0$ and $\omega\alpha_1+\alpha_2+\omega^2\alpha_3+\elldots+\omega^{m-1}\alpha_{m-1}=0$ whence $\alpha_1=\alpha_2$ by subtraction. Similar argument shows that any two consecutive coefficient in the equation of $\mathbf Pi$ are equal. This yields $\alpha_1=\alpha_2=\elldots=\alpha_{m-1}$, that is $\mathbf Pi$ has equation $X_1+X_2+\elldots+X_{m}=0$. On the other hand, as we have already pointed out in Section \ref{back}, the first equation in System (\ref{sy}) shows that hyperplane of equation $X_1+X_2+\elldots+X_{m}=0$ contains $\mathcal C$. \end{proof} \begin{lemma} \ellabel{lem11octC} Every transposition of $G$ fixes exactly $(m-2)!$ points of $\mathcal C$. \end{lemma} \begin{proof} We have already pointed out in the proof of Lemma \ref{lem5oct} that the transposition $g=(X_{m-1}X_{m})$ fixes a point $R\in \mathcal C\cap \mathbf Pi$ where $\mathbf Pi$ is the hyperplane of equation $X_{m}=X_{m-1}$. By Lemma \ref{lem11octA} and \ref{lem11octB}, we can assume $R=(x_1:\elldots:x_{m-2}:1:1)$ and that $x_1,\dots,x_{m-2}$ are pairwise distinct and non zero. Therefore any point whose first $m-2$ coordinates are a permutation on $\{x_1,\elldots,x_{m-2}\}$ is also fixed by $g$. Thus $g$ has at least $(m-2)!$ fixed points. On the other hand, if $P\in\mathcal C$ is not on $\mathbf Pi$ then the last two coordinates of $P$ are different and hence $P$ is not fixed by $g$. Therefore, the claim holds for $g$. Since the transpositions are pairwise conjugate in $G$, the claim holds true for every transposition in $G$. \end{proof} \begin{lemma} \ellabel{lem18octA} Let $P$ be a point of $\mathcal C$ which is fixed by a transposition $g\in G$. Then the tangent $\ell$ to $\mathcal C$ at $P$ is the line joining $P$ with the center of $g$. \end{lemma} \begin{proof} W.l.o.g. we can assume $g=(X_{m-1}X_{m})$. Then $g$ is an homology with center $C=(0:0:\cdots:0:-1:1)$ and axis the (pointwise fixed) hyperplane $\mathbf Pi$ of equation $X_{m}=X_{m-1}$. Also, $P=(x_1:\cdots:x_{m-2}:1:1)$. The tangent line $\ell$ is the intersection of the tangent hyperplanes in $P$ of the hypersurfaces of equation $f_i=0$ of system \eqref{sy}, $i=1,\dots,m-2$, and hence its equation is given by \begin{equation}\ellabel{sys11} \elleft\{ \begin{array}{llll} X_1 + X_2 + \elldots + X_{m-2}+X_{m-1}+X_{m}=0;\\ x_1X_1 + x_2X_2+\elldots +x_{m-2}X_{m-2}+X_{m-1}+X_{m}=0;\\ \cdots\cdots\\ \cdots\cdots\\ x_1^{m-2-1}X_1 + x_2^{m-2-1}X_2+ \elldots+x_{m-2}^{m-2-1}X_{m-2}+X_{m-1}+X_{m}=0. \end{array} \right. \end{equation} Since the coordinates of $C$ satisfy system \eqref{sys11}, the claim follows. 
\end{proof} \begin{lemma} \ellabel{lem19oct} Let $H$ be the stabilizer of $X_i$ and $X_j$ in $G$. Then the quotient curve $\tilde{\mathcal C}$ of $\mathcal C$ with respect to $H$ is rational. \end{lemma} \begin{proof} Up to a reordering of the coordinates, we can assume $(i,j)=(m-1,m)$. The hyperplanes $\mathbf Pi_{\ellambda,\mu}$ of equations $\ellambda X_{m-1}+\mu X_{m}=0$ form the pencil through the intersection $\mathbf Sigma$ of the hyperplanes $X_{m-1}=0$ and $X_{m}=0$. $\mathbf Sigma$ is a subspace of ${\rm{PG}}(m,\mathbb{F})$ of codimension $2$, and it is disjoint from $\mathcal C$ by Lemma \ref{lem11octA}. Furthermore, $H$ preserves each $\mathbf Pi_{\ellambda,\mu}$. Take any point of $P\in\mathcal C$ whose stabilizer $H_P$ in $H$ is trivial. Then the $H$-orbit $\mathbf Delta$ of $P$ has length $(m-2)!$ and $\mathbf Delta$ is contained in the unique hyperplane $\mathbf Pi_{\ellambda,\mu}$ of the pencil which contains $P$. Since $\mbox{\rm deg}(\mathcal C)=(m-2)!$, $\mathbf Delta$ coincides with the intersection of $\mathcal C$ with $\mathbf Pi_{\ellambda,\mu}$. If the stabilizer $H_P$ of $P\in \mathcal C$ in $H$ is nontrivial, from Lemma \ref{lem18oct} and the proof of Lemma \ref{lem11octC}, then $|H_P|=2$ and the only nontrivial element in $H_P$ is a transposition $h$. Indeed if $h'$ is a nontrivial element of $H_P$, from Lemma \ref{lem18oct} it is a transposition $(X_i X_j)$, and from the proof of Lemma \ref{lem11octC}, $x_i=x_j$ and $x_{m-1}=x_{m}$, which is a contradiction with Lemma \ref{lem11octB}. From Lemma \ref{lem18octA}, the tangent line $\ell$ to $\mathcal C$ at $P$ contains the center $C$ of $h$. The hyperplane $\mathbf Pi$ of equation $X_{m-1}-X_{m}=0$ is the axis of the transposition $g$ which interchanges $X_{m}$ and $X_{m-1}$. Since $g$ and $h$ commute, it follows that $C\in \mathbf Pi$. Therefore, $\ell$ is contained in $\mathbf Pi$, and hence $I(P,\mathcal C\cap \mathbf Pi)\gammae 2$, where $I(P,\mathcal C\cap \mathbf Pi)$ is the intersection multiplicity of $\mathcal C$ and $\mathbf Pi$ at $P$. From the higher dimensional generalization of B\'ezout's theorem, $|\mathcal C\cap \mathbf Pi|\elle \ha (m-2)!$. Thus, $|\mathcal C\cap \mathbf Pi|= \ha (m-2)!$, and, again, $\mathbf Delta$ coincides with $\mathcal C\cap \mathbf Pi$. This shows that $\tilde{\mathcal C}$ is isomorphic to the rational curve which is the projection of $\mathcal C$ from the vertex $\mathbf Sigma$. \end{proof} \begin{lemma} \ellabel{lem11octE} Let $\gammag$ be the genus of $\mathcal C$. Then \begin{equation} \ellabel{eq11octB} 2\gammag-2= \ha ((m-2)(m-3)-4)(m-2)!. \end{equation} \end{lemma} \begin{proof} Let $H\cong \mathbf Sym_{m-2}$ be the subgroup of $G$ which fixes both $X_{m}$ and $X_{m-1}$. From Lemma \ref{lem18oct}, if $h\in H$ has a fixed point in $\mathcal C$ then $h$ is a transposition. If $P$ is a fixed point of a transposition then, up to a reordering of the coordinates, $P=(1,1,x_3,\dots,x_{m-1},x_{m})$. A point whose coordinates are a permutation of those of $P$ is another fixed point of the transposition. Among these points, those which are contained in the hyperplane $X_{m-1}=X_{m}$, fixed by $H$, are as many as $(m-2)!$, and no more, since $\mbox{\rm deg}(\mathcal{C})=(m-2)!$. Therefore, the number of short orbits of $H$ is equal to the number of choices of two values among $x_3,\dots,x_{m}$. As these are $m-2$ such distinct values, the short orbits are as many as $(m-2)(m-3)$. The claim follows from the Riemann-Hurwitz formula. 
\end{proof} \begin{lemma} \ellabel{lem12oct} $V$ is irreducible, that is, $V=\mathcal C$. \end{lemma} \begin{proof} Suppose on the contrary the existence of a point $Q=(q_1:\cdots:q_{m-1}:q_{m})$ of $V$ which is not in $\mathcal C$. Since $(1:0:0:\cdots:0)$ is not a point of $V$, both $q_{m}=1$ and $q_{m-1}=\ellambda\neq 0$ may be assumed. Then $Q$ is contained in the hyperplane $\mathbf Pi$ of equation $X_{m-1}=\ellambda X_{m}$. Choose a point $P\in \mathcal C$ lying on $\mathbf Pi$. The stabilizer $H$ of $X_{m-1}$ and $X_{m}$ in $G$ preserves $\mathbf Pi$. Since $H$ also preserves $\mathcal C$, the $H$-orbit $\mathbf Delta$ of $P$ is contained in $\mathbf Pi$. As $P\in\mathbf Pi$, this together with Lemma \ref{lem8oct} yield $|\mathbf Delta|<|H|$, that is, $H_P$ is nontrivial. As in the proof of Lemma \ref{lem19oct}, from Lemma \ref{lem18oct} and Lemma \ref{lem11octC} we obtain $|H_P|=2$. Therefore, $|\mathbf Delta|=\ha (m-2)!$. Thus, the higher dimensional generalization of B\'ezout's theorem yields that the length of the $H$-orbit of $Q$ is also less than $ (m-2)!$, that is, $H_Q$ is non-trivial. This is a contradiction, since Lemma \ref{lem18oct} together with Lemma \ref{lem11octC}, yields that all the fixed points of any transposition of $G$ are on $\mathcal C$.\end{proof} Lemmas \ref{lem1} and \ref{lem12oct} have the following corollary. \begin{theorem} \ellabel{the2502} $V$ is an irreducible non-singular curve of ${\rm{PG}}(m-2,\mathbb{K})$. \end{theorem} \section{The automorphism group of $V$} \ellabel{secautgroup} As we have already pointed out, $G$ has at least three short orbits on $V$, named $\Omega_\omega$, $\Omega_\varepsilon$ and $\Omega_\theta$. We prove that they are the only short $G$-orbits. \begin{lemma}\ellabel{3orbite} The short $G$-orbits on $V$ are exactly $\Omega_\omega$, $\Omega_\varepsilon$ and $\Omega_\theta$, and they have lengths $(m-1)!$, $m(m-2)!$ and $m!/2$. \end{lemma} \begin{proof} Since $|\Omega_\omega|=(m-1)!$, $|\Omega_\varepsilon|=m(m-2)!$ and $\Omega_\theta$ are short $G$-orbits, and $|\Omega_\theta|\elleq \ha m!$, the Hurwitz genus formula (\ref{hufo}) applied to the $G$ yields \begin{equation*} 2\gammag-2\gammaeq -2m!+(m!-(m-1)!)+(m!-m(m-2)!)+(m!-\textstyle\frac{1}{2}m!). \end{equation*} Comparison with \eqref{eq11octB} shows that equality holds. Therefore, no further short $G$-orbits on $V$ exists, and $|\Omega_\theta|=\ha m!$, that is, the stabilizer of a point on $V$ which is fixed by a transposition contains no more nontrivial element of $G$. \end{proof} \begin{lemma} \ellabel{lem21jun21} The short $G$-orbit $\Omega_\omega$ consists of the common points of $V$ and the hypersurface $\mathbf Sigma_{m-1}$ of equation \begin{equation} \ellabel{eq21jun21} X_1^{m-1} + X_2^{m-1} + \elldots+X_{m}^{m-1}=0. \end{equation} \end{lemma} \begin{proof} We observe first that no point of the $G$-orbit $\Omega_\varepsilon$ is in $\mathbf Sigma_{m-1}$. In fact, if $P=(\mathbf xi_1:\elldots:\mathbf xi_{m-1}:0)\in \Omega_\varepsilon$ then $\mathbf xi_j^{m-1}=1$ and hence $\sum_{j=1}^{m-1}\mathbf xi_j^{m-1}=m-1\neq 0$, thus $P\not\in \mathbf Sigma_{m-1}$. On the other hand, it is readily seen that $P\in \mathbf Sigma_{m-1}$ for any $P\in \Omega_{\omega}$. From the higher dimensional generalization of B\'ezout's theorem, $|V\cap \mathbf Sigma_{m-1}|\elle (m-1)!$. Furthermore, since $G$ also preserves $\mathbf Sigma_{m-1}$, the intersection $V\cap \mathbf Sigma_{m-1}$ is $G$-invariant, as well. 
Therefore, Lemma \ref{3orbite} yields $V\cap \mathbf Sigma_{m-1}=\Omega_\omega$. \end{proof} A similar argument can be used to prove the following lemmas. \begin{lemma} \ellabel{lem21jun21bis} The short $G$-orbit $\Omega_\varepsilon$ consists of the common points of $V$ and the hypersurface $\mathbf Sigma_{m}$ of equation \begin{equation} \ellabel{eq21jun21bis} X_1^{m} + X_2^{m} + \elldots+X_{m}^{m}=0. \end{equation} \end{lemma} \begin{lemma} \ellabel{lem21jun21ter} The short $G$-orbit $\Omega_\theta$ consists of the common points of $V$ and the hypersurface $\mathbf Sigma_{m(m-1)/2}$ of equation \begin{equation} \ellabel{eq21jun21ter} X_1^{m(m-1)/2} + X_2^{m(m-1)/2} + \elldots+X_{m}^{m(m-1)/2}=0. \end{equation} \end{lemma} \begin{lemma} \ellabel{le8mar} Let $P_\omega$ be the point of $V$ given in Lemma \ref{lem10octAchar0}. Then the number of fixed points on $V$ of the involution in the stabilizer of $P_\omega$ in $G$ is \begin{equation}\ellabel{eq8mar} 2^{m/2}\frac{(m/2)!}{m}, \end{equation} if $m$ is even, and \begin{equation}\ellabel{eq8mar1} 2^{(m-1)/2}\frac{((m-1)/2)!}{m}, \end{equation} if $m$ is odd. \end{lemma} \begin{proof} We will perform the proof for $m$ even. The same approach can be applied for $m$ being odd. Let $u$ be the (unique) involution in $G$ which fixes $P_\omega$. Then $u$ acts on $(X_1,\elldots,X_{m})$ as the involutory permutation $(X_1X_{(m+2)/2})(X_2X_{(m+4)/2})\cdots (X_{m/2}X_{m})$. Therefore, the centralizer $C_G(u)$ of $u$ in $G$ has order $2^{m/2}(m/2)!$, and hence $u$ has as many as $$k=\frac{m!}{2^{m/2}(m/2)!}$$ conjugate in $G$. We show that if $v$ is conjugate of $u$ in $G$ and $u\neq v$ then $u$ and $v$ has no common fixed point. Assume on the contrary the existence of a point $Q\in V$ such that $u(Q)=v(Q)$. Then $u$ and $v$ are two distinct involutions in the stabilizer $G_Q$ of $Q$ in $G$. On the other hand, since either $p=0$ or $p>0$ and $p\nmid |G|$, $G_Q$ is cyclic, and hence it contains at most one involution; a contradiction. Therefore, each point in the $G$-orbit $\Omega_\omega$ is the fixed point of exactly one involution which is conjugate to $u$ in $G$. If $N_u$ counts the fixed points of $u$ in $\Omega_\omega$, this yields that $|\Omega_\omega|=k N_u$. Therefore, the number of fixed points $u$ in $\Omega_\omega$ equals (\ref{eq8mar}). It remains to show that $u$ has no further fixed points on $V$. By Lemma \ref{lem7oct} the stabilizer of any point $Q\in \Omega_\varepsilon$ in $G$ has odd order and hence contains no involution. By Lemma \ref{3orbite}, the 1-point stabilizer of the remaining short orbit has order $2$ and its non-trivial element is a transposition. Since $u$ is not a transposition, the claim follows. \end{proof} \begin{theorem} \ellabel{the171021} If $\mathbb{K}$ has zero characteristic, or it has characteristic $p$ and the $\mathbb{K}$-automorphism group of $V$ is tame, then $G$ is the $\mathbb{K}$-automorphism group of $V$. \end{theorem} \begin{proof} By way of a contradiction assume that $G={\rm{Sym}}_{m}$ is a proper subgroup of the $\mathbb{K}$-automorphism group $\Gamma$ of $V$. Then two cases arise, according as $G$ is a normal subgroup of $\Gamma$ or is not. In the former case, assume that the centralizer $C_\Gamma(G)$ of $G$ in $\Gamma$ is trivial. Then for any $\alphamma\in\Gamma$ the map $g\mapsto \alphamma^{-1}g\alphamma$ is a non trivial automorphism of $G$. Hence $\Gamma$ is isomorphic to a subgroup of $\mathbf Aut(G)$. However, if $m\neq 6$, then $\mathbf Aut(G)\cong G$ and hence $G=\Gamma$, a contradiction. 
In the remaining case, $m = 6$, then $G\cong P\alphamma L(2, 9)$ and $\mathbf Aut(G)\cong P\Gamma L(2,9)$. Therefore, $[\Gamma : G] = 2$. Lemma \ref{3orbite} shows that $G$ has a unique orbit of length $(m-1)! = 120$, namely $\Omega_\omega$. Since $G$ is a normal subgroup of $\Gamma$, this yields that $\Gamma$ also preserves $\Omega_\omega$. Hence, the stabilizer of $P_\omega \in \Omega_\omega$ of $\Gamma$ has order $12$. This yields that $\Gamma$ has a cyclic subgroup of order $12$, but this is impossible since $P\Gamma L(2, 9)$ has two subgroups of order $12$, up to conjugation, but neither is cyclic. Otherwise, let $C_\Gamma(G)$ be a nontrivial subgroup disjoint from $G$. Furthermore, since $C_\Gamma(G)$ is contained in the normalizer of $G$ in $\Gamma$, $C_\Gamma(G)$ induces a permutation group on the set of the short orbits of $G$. By Lemma \ref{3orbite}, there are three such orbits and they have pairwise different lengths. Therefore, this permutation group is trivial, that is, $C_\Gamma(G)$ preserves each of the three short orbits of $G$. Also, $\bar{G}=C_\Gamma(G)\cong C_\Gamma(G)G/G$ is a nontrivial $\mathbb{K}$-automorphism group of the quotient curve $\bar{V}=V/G$ which $\bar{G}$ fixes three points of $\bar{V}$. However, this is impossible as $\bar{V}$ is rational by Lemma \ref{lem19oct}. In the case where $G$ is not normal in $\Gamma$, take any conjugate $\tilde{G}=\alphamma^{-1}G\alphamma$ of $G$ with $\alphamma\in\Gamma$, and consider their intersection $N=G\cap \tilde{G}$. Assume first $N={\rm{Alt}}_{m}$. If this occurs for every $\alphamma \in \Gamma$, then $N$ is a normal subgroup of $\Gamma$. As shown before, this is impossible. Now take $\tilde{G}$ such that $N\neq {\rm{Alt}}_{m}$. Then $|N|\elle (m-1)!$; see \cite[Theorem 5.2B]{DM}. Therefore, $|<G,\tilde{G}>|\gammae |G|^2/(m-1)!=m\cdot m!$, whence $|\Gamma|\gammae m\cdot m!$. A comparison with (\ref{eq11octB}) gives $$ \frac{|\Gamma|}{\gammag-1}=\frac{4m^2(m-1)}{(m-2)(m-3)-4}$$ whence $|\Gamma|>84(\gammag-1)$ for $m=5,6$ and $m>15$ whereas $40(\gammag-1)<|\Gamma|\elleq 84(\gammag-1)$ for the remaining cases of $m$. Assume that $\Gamma$ is tame. Then $\Gamma$ has exactly three short orbits, and the classical Hurwitz bound yields $|\Gamma|\elle 84(\gammag-1)$. So we are left with $7\elleq m\elleq 15$. From Hurwitz's proof given in \cite{sti} or \cite[Theorem 11.56]{HKT}, it follows that $|\Gamma|>40(\gammag-1)$ is only possible when $V$ has two points with stabilizers of $\Gamma$ of order $2$ and $3$, respectively. But Lemma \ref{3orbite} shows that the three nontrivial point stabilizers of $\Gamma$ have order $m,m-1$ and $2$. This completes the proof. \end{proof} \section{More equations} \ellabel{secme} \begin{proposition} \ellabel{prop10.03} Every solution of {\rm{(\ref{sy})}} also satisfies the following equations. \begin{equation} \ellabel{syplus} \elleft\{ \begin{array}{lll} X_1^{m+2} + X_2^{m+2} + \elldots+X_{m}^{m+2}=0,\\ X_1^{m+3} + X_2^{m+3} + \elldots+X_{m}^{m+3}=0,\\ \cdots\cdots\\ \cdots\cdots\\ X_1^{2m-3} + X_2^{2m-3} + \elldots+X_{m}^{2m-3}=0. \end{array} \right. \end{equation} \end{proposition} \begin{proof} Let $P=(a_1,\elldots,a_{m})$ be any point of $V$. If $P\in \Omega_\omega$ then $a_i=a_i^{m+1}$ and hence $a_i^{m+1+l}=a_i^{1+l}$ for any positive integer $l$. Therefore, the claim holds for every $P\in \Omega_\omega$. Let $\mathcal H_j$ be the (irreducible) hypersurface of equation $X_1^{m+1+j} + X_2^{m+1+j} + \elldots+X_{m}^{m+1+j}=0$. Then $\Omega_\omega$ is contained in $\mathcal H_j$. 
We prove that $\mathcal H_j$ contains $V$ as far as $j\elle m-4$. Assume on the contrary that this does not occur for some $j$. Then Lemma \ref{lem8oct} together with B\'ezout's theorem applied to the intersection of $\mathcal H_j$ with $V$ yield \begin{equation} \ellabel{eqbez} \sum_{Q\in \mathcal H\cap V}I(Q,\mathcal H\cap V)=(m+1+j)(m-2)!. \end{equation} We show that $\mathcal H_j$ contains some points of $V$ other than those in $\Omega_\omega$. As $G$ also preserves $\mathcal H_j$, the intersection number $I(Q,\mathcal H_j\cap V)$ is invariant when $Q$ ranges over an $G$-orbit. For a point $Q\in \Omega_\omega$, let $\ellambda=I(P,\mathcal H_j\cap V)$. Then $\sum_{Q\in \Omega_\omega}I(Q,\mathcal H_j\cap V)=\ellambda (m-1)!$ whence $\ellambda(m-1)\elle m+1+j$. For $\ellambda\gammae 2$, this would yield $2(m-1)\elle m+1+j$, that is, $j>m-4$, a contradiction. Therefore $\ellambda=1$. Since $(m+1+j)(m-2)!>(m-1)!$, the claim follows. Let $R$ be a point $R\in \mathcal H\cap V$ not in $\Omega_\omega$. From Lemma \ref{3orbite}, the $G$-orbit of $R$ has length at least $m(m-2)!$. Then $\mathcal H_j$ and $V$ have at least $((m-1)+m)(m-2)!=(2m-1)(m-2)!$ common points. Since $j\elle m-3$ implies $2m-1>m+1+j$, this contradicts (\ref{eqbez}). \end{proof} \begin{lemma}\ellabel{lambda12} The group $G_{m,m-1,m-2}$ acts on $\Omega_\theta$ with $\ellambda_1$ short orbits and $\ellambda_2$ long orbits where \begin{equation*} \begin{split} &\ellambda_1=(m-2)(m-3)(m-4),\quad \ellambda_2=3(m-2)^2. \end{split} \end{equation*} \end{lemma} \begin{proof} Fix a point $P\in\Omega_\theta$. As in the proof of Lemma \ref{lem11octC}, assume that $P=(x_1,\dots,x_{m})$ where $x_i=x_j=t$ for some $1\elleq i<j\elleq m$, $t\in\mathbb{K}$ and $x_k\neq x_l$ whenever $(k,l)\neq(i,j)$. So, the coordinates $(x_1,\dots,x_{m})$ of $P$ are $m-1$ different values from $\mathbb{K}$. The points of $\Omega_\theta$ are those whose coordinates are a permutation of $x_1,\dots,x_{m}$, that is $Q\in\Omega_\theta$ if and only if $Q=(x_{\sigma(1)},\dots,x_{\sigma(m)})$ for a permutation $\sigma \in G$. Two points are in the same short $G_{m,m-1,m-2}$-orbit if and only if they share the last three coordinates, hence a short $G_{m,m-1,m-2}$-orbit arises every time we fix an ordered triple $(x_a,x_b,x_c)$, with $x_a,x_b,x_c\neq t$. This can be done in $(m-2)(m-3)(m-4)$ different ways. A long $G_{m,m-1,m-2}$-orbit arises every time we fix an ordered triple $(x_a,x_b,x_c)$, with either one or two of the values $x_a,x_b,x_c$ being equal to $t$. This can be done in $3(m-2)(m-3)$ and $3(m-2)$ different ways respectively. Therefore $\ellambda_1=(m-2)(m-3)(m-4)$ and $\ellambda_2=3(m-2)^2$. \end{proof} \section{Quotient curves of $V$} \ellabel{secqc} We have already determined the quotient curve of $V$ with respect to the subgroup of $G$ which fixes two given coordinates $X_i$ and $X_j$; see Lemma \ref{lem19oct}. In this section, we consider the more general case where the subgroup $H_l$ of $G$ fixes $l\gammae 3$ coordinates. Let $d=m-1-l$. W.l.o.g. these coordinates are assumed to be $X_{d+2},\elldots,X_{m}$. The hyperplanes $\mathbf Pi_i: X_i=0$ with $i=d+2,\elldots,m$ meet in a $d$-dimensional subspace $\mathbf Sigma$ which is disjoint from $V$ by Proposition \ref{lem11octA}. Clearly, $H_l$ preserves $\mathbf Sigma$. Furthermore, the hyperplanes $\mathbf Pi_i: X_i=0$ with $i=1,\elldots,d+1$ meet in a $(m-d-2)$-dimensional subspace $\mathbf Sigma'$ disjoint from $\mathbf Sigma$. Clearly, $H_l$ fixes $\mathbf Sigma'$ pointwise. 
Projecting $V$ from $\mathbf Sigma$ on $\mathbf Sigma'$ produces a curve $\bar{V}$ of $\mathbf Sigma'$ whose degree is equal to $(m-2)!/(m-l)!$. \begin{proposition} \ellabel{pro5mar} Let $H_l$ be the stabilizer of $(X_{j_1},\elldots,X_{j_l})$ in $G$. If $l\gammae 2$ then the quotient curve $V/H_l$ is isomorphic to $\bar{V}$. \end{proposition} \begin{proof} Take a point $P\in V$ such that no nontrivial element of $H_l$ fixes $P$. Then the $H_l$-orbit $\Omega$ of $P$ has length $(m-l)!=(d+1)!$. On the other hand, Lemma \ref{lem10giugno} shows that $\mathbf Sigma$ and $P$ generate a $(d+1)$-dimensional subspace $\tilde{\mathbf Sigma}$ that cuts out on $V$ a set of $(d+1)!$ points. Since $\Omega$ is contained in $\tilde{\mathbf Sigma}$, it turns out that $\Omega=V\cap \mathbf Sigma'$ whence the claim follows. \end{proof} \begin{rem} \ellabel{rem11giugno} {\emph{Proposition \ref{pro5mar} shows that $\mathbf Sigma$ is an outer Galois subspace of $V$. }} \end{rem} \begin{proposition} \ellabel{pro11giugno} Let $\bar{\gammag}$ be the genus of the quotient curve $V/H_l$ where $H_l$ is the stabilizer of $(X_{j_1},\elldots,X_{j_l})$. If $l\gammae 2$ then \begin{equation} \ellabel{eq11giugno} 2\bar{\gammag}-2= \frac{(m-2)(m-3)-4-(m-l)(m-1-l)}{2(m-l)!}(m-2)! \end{equation} \end{proposition} \begin{proof} The number of transpositions in $H_l$ is equal to $\ha(m-l)(m-1-l)$. By Lemma \ref{lem11octC} each such transposition has as many as $(m-2)!$ fixed points. From Lemmas \ref{lem10octAchar0}, \ref{lem7oct}, and \ref{3orbite}, no nontrivial element of $H_l$ other than its transpositions fixes a point of $V$. Therefore, the claim follows from the Hurwitz genus formula (\ref{hufo}). \end{proof} \subsection{Quotient curve of $V$ by the $3$-coordinate stabilizer of $G$} \ellabel{sq} In this section $H=G_{m,m-1,m-2}$ is the stabilizer of $X_{m}, X_{m-1}, X_{m-2}$ in $G$, and $\mathcal X$ is the quotient curve of $V$ with respect to $H$. Then $\mathcal X$ is an irreducible plane curve of degree $m-2$ whose genus equals $\ha(m^2-7m+12)$ by (\ref{eq11giugno}). Hence $\mathcal X$ is non-singular. The following proposition shows that $\mathcal X$ coincides with the curve introduced and investigated in \cite{voloch}. \begin{theorem} \ellabel{the24jun} $\mathcal X$ has homogeneous equation $G_{m-2}(x,y,z)=0$ where \begin{equation}\ellabel{eqvoloch} G_{m-2}(x,y,z)=\sum_{i,j,k\gammaeq0,i+j+k=m-2}x^iy^jz^k. \end{equation} \end{theorem} \begin{proof} Since $|H|=(m-3)!$, Lemma \ref{lem10octAchar0} yields that $\mathcal X$ contains as many as $(m-1)(m-2)$ points $(\alpha:\beta: 1)$ with $\alpha^{m}=\beta^{m}=1$ but $\alpha,\beta \ne 1$. Let $\tilde{\mathcal X}$ be the plane curve with homogeneous equation $G_{m-2}(x,y,z)=0$. By \cite[Equation (2)]{voloch} \begin{equation}\ellabel{Gmeq2} G_{m-2}(x,y,z)=\frac{1}{x-y}\elleft(\frac{x^{m}-z^{m}}{x-z}-\frac{y^{m}-z^{m}}{y-z}\right). \end{equation} This shows that $\tilde{\mathcal X}$ also contains each point $(\alpha:\beta: 1)$ with $\alpha^{m}=\beta^{m}=1$ but $\alpha,\beta \ne 1$. Therefore, $\mathcal X$ and $\tilde{\mathcal X}$ have at least $(m-1)(m-2)$ pairwise distinct points. On the other hand, since $\mathcal{X}$ and $\tilde{\mathcal X}$ both have degree $m-2$, B\'ezout's theorem applied to $\mathcal X$ and $\tilde{\mathcal X}$ yields that if $\mathcal X$ and $\tilde{\mathcal X}$ were distinct then they could share at most $(m-2)^2$ points. Therefore, $\mathcal X=\tilde{\mathcal X}$. 
\end{proof} Theorem \ref{the24jun} shows that $\mathbb{K}(\mathcal X)=\mathbb{K}(x,y)$ with $G_{m-2}(x,y,1)=0$. From Lemma \ref{lem19oct}, the quotient curve of $V$ with respect to the stabilizer of $X_{m},X_{m-1}$ is rational. Therefore its function field is $\mathbb{K}(x)$. \begin{theorem} \ellabel{th200721} The Galois closure $M$ of $\mathbb{K}(\mathcal X)|\mathbb{K}(x)$ is $\mathbb{K}(V)$, with Galois group isomorphic to ${\rm{Sym}}_{m-2}$. \end{theorem} \begin{proof} Since $\mathbb{K}(V)|\mathbb{K}(x)$ is a Galois extension, $M$ may be assumed to be a subfield of $\mathbb{K}(V)$. Thus $\mathbb{K}(V)|M$ is a Galois extension, as well. Galois theory yields that ${\rm{Gal}}(\mathbb{K}(V)|M)$ is a normal subgroup of ${\rm{Gal}}(\mathbb{K}(V)|\mathbb{K}(x))$. Since ${\rm{Gal}}(\mathbb{K}(V)|\mathbb{K}(x))\simeq {\rm{Sym}}_{m-2}$, for $ m\gammae 7$ and $m=5$, the unique non-trivial normal subgroup of ${\rm{Gal}}(\mathbb{K}(V)|\mathbb{K}(x))$ is ${\rm{Alt}}_{m-2}$. On the other hand, as $\mathbb{K}(\mathcal X)$ is the fixed field of $G_{m,m-1,m-2}\cong {\rm{Sym}}_{m-3}$, $\mathbb{K}(\mathcal X)\subseteq M\ne \mathbb{K}(V)$ would imply that ${\rm{Alt}}_{m-2}$ is isomorphic to a subgroup of ${\rm{Sym}}_{m-3}$ which is impossible by $|{\rm{Sym}}_{m-3}|<|{\rm{Alt}}_{m-2}|$. If $m=6$, a further case arises, namely ${\rm{Gal}}(\mathbb{K}(V)|M)$ is normal in ${\rm{Alt}}_4$ and of order $4$. However, this is again impossible since $4\nmid |{\rm{Sym}}_3|=6$. Thus $M=\mathbb{K}(V)$ and ${\rm{Gal}}(\mathbb{K}(\mathcal X)|\mathbb{K}(x))$ is isomorphic to the subgroup of $G$ fixing $x$. Since this subgroup is isomorphic to the stabilizer of $X_{m},X_{m-1}$ in $G$, we have ${\rm{Gal}}(\mathbb{K}(\mathcal X)|\mathbb{K}(x))\cong {\rm{Sym}}_{m-2}$. \end{proof} In the subsequent sections we further investigate the two extremal cases in positive characteristic, namely $m=5$ and $m=p-1$. Accordingly, from now on, $\mathbb{K}$ stands for the algebraic closure of the field $\mathbb{F}_p$ where $p\gammaeq 7$ is a prime. Then, $V$ is defined over $\mathbb{F}_p$ and viewed as a curve defined over $\mathbb{K}$. The group $G$ is also defined over $\mathbb{F}_p$, and it preserves the set $\mathcal X(\mathbb{F}_{p^i})$ of points of $V$ defined over $\mathbb{F}_{p^i}$ for every $i \gammae 1$. \section{The case of positive characteristic; $m=5$} In this section $m=5$, that is, $V$ is the $p$-characteristic analog of the Bring curve. Therefore, $V$ has genus $4$ and is embedded in $\mathbf PG(4,\mathbb{K})$ as the complete intersection of an hyperplane, a quadratic and a cubic surface, both non-singular. The (homogeneous) function field $\mathbb{K}(V)$ is $F=\mathbb{K}(x_1,x_2,x_3,x_4,x_5)$ with \begin{equation} \ellabel{syAA}\elleft\{ \begin{array}{llll} x_1 + x_2 +x_3+x_4+x_5=0;\\ x_1^2 + x_2^2 + x_3^2+ x_4^2+ x_5^2=0;\\ x_1^3 + x_2^3 + x_3^3+ x_4^3+ x_5^3=0. \end{array} \right. \end{equation} Our goal is to show that $V$ is an $\mathbb{F}_{p^2}$-maximal curve, that is the number of points of $V$ defined over $\mathbb{F}_{p^2}$ attains the Hasse-Weil upper bond $p^2+1+2\mathfrak{g}p=p^2+1+8p$ for infinitely many values of $p$. 
The essential tool for the proof is the Jacobian variety $J_V$ associated with $V$, with the following characterization of maximal curves due to Tate \cite[Theorem 2(d)]{tate} and explicitly pointed out by Lachaud \cite[Proposition 5]{la}: a curve defined over $\mathbb{F}_p$ of genus $\mathfrak{g}$ is an $\mathbb{F}_{p^2}$-maximal curve if and only if its Jacobian is $\mathbb{F}_{p^2}$-isogenous to the $\mathfrak{g}$-th power of an $\mathbb{F}_{p^2}$-maximal elliptic curve. To determine $J_V$ we use the following Kani-Rosen theorem \cite[Theorem B]{kr} about the decomposition of the Jacobian variety of an algebraic curve with respect to an automorphism group $G$ equipped by a partition, that is, the group $G$ has a family of subgroups $\{H_1, \dots, H_t\}$ such that $G=H_1\cup\cdots\cup H_t$ and $H_i\cap H_j=\{1\}$ for $1\elle i<j\elle t$. \begin{theorem}[Kani-Rosen]\ellabel{kanirosen} Let $G$ be a finite automorphism group of an algebraic curve $\mathcal X$. If $G$ is equipped by a partition, then the following isogeny relation holds: \begin{equation}\ellabel{kaniroseneq1} J_{\mathcal X}^{t-1}\times J_{\mathcal X/G}^{|G|}\sim J_{\mathcal X/{H_1}}^{h_1}\times\cdots\times J_{\mathcal X/{H_t}}^{h_t}, \end{equation} where $H_1, \dots, H_t$ are the components of the partition and $h_i=|H_i|$ for $i=1,\elldots t$. \end{theorem} In fact, the Kani-Rosen theorem applies to $G$ as $G\cong {\rm{Sym}}_5$ and ${\rm{Sym}}_5\cong \mbox{\rm PGL}(2,5)$ is equipped with a partition whose components form three conjugacy classes of lengths $15,6,10$, namely those consisting of all cyclic subgroups of order $4,5$ and $6$, respectively. Since $\mbox{\rm Aut}(V)$ acts on the set of five coordinates $\{X_1,\elldots X_5\}$, representatives of the conjugacy classes are: $C_4=\ellangle (X_1,X_2,X_3,X_4)\rangle$, $C_5=\ellangle(X_1,X_2,X_3,X_4,X_5)\rangle$, and $C_6=\ellangle (X_1,X_2,X_3)(X_4,X_5)\rangle$, respectively. In our case, since $V/G$ is rational, Theorem \ref{kanirosen} reads \begin{equation}\ellabel{kaniroseneq} J_{V}^{30}\sim J_{V/{C_4}}^{60}\times J_{V/{C_5}}^{30}\times J_{V/{C_6}}^{60} \end{equation} Therefore, $V$ is an $\mathbb{F}_{p^2}$-maximal curves if each $J_{V/C_i}$ with $i=4,5,6$ is either rational, or a $\mathbb{F}_{p^2}$-maximal elliptic curve. We show first that several quotient curves of $V$ are rational. \begin{proposition}\ellabel{quozrazionale} Each of the following quotient curves is a rational curve: \begin{itemize} \item $V/C_5$ \item $V/G_8$, where $G_8$ is a (dihedral) subgroup of $G$ of order $8$; \item $V/G_{24}$, where $G_{24}$ is a subgroup (isomorphic to $\rm{Sym}_{4}$) of $G$ of order $24$; \item $V/G_{12}$, where $G_{12}$ is a subgroup of $G$ of order $12$; \item $V/G_{20}$, where $G_{20}$ is a subgroup of $G$ of order $20$. \end{itemize} \end{proposition} \begin{proof} We apply the Riemann-Hurwitz formula to each of the groups $C_5,G_8,G_{24},$ and $G_{12}$. For $i\in\{5,8,12,24\}$, let $\gammag_i$ denote the genus of the corresponding quotient curve. From Theorem \ref{3orbite}, $G$ acts on $V$ with exactly $3$ short orbits, namely $\Omega_\omega$, $\Omega_\epsilon$ and $\Omega_\theta$, of length $24$, $30$ and $60$ respectively. For $C_5$ the Riemann-Hurwitz formula reads \begin{equation*} 6=10(\bar{\gammag}_5-1)+\sum (5-|o_i|); \end{equation*} with $o_i$ running over the set of short orbits of $C_5$. Since $C_5$ has at least four fixed points on $\Omega_\omega$, it follows $\gammag_5=0$. 
Also, since a group of order $20$ contains a subgroup of order $5$, this implies $\gammag_{20}=0$. For $G_8$ the Riemann-Hurwitz formula reads \begin{equation*} 6=16(\bar{\gammag}_8-1)+\sum (8-|o_i|); \end{equation*} with $o_i$ running over the set of short orbits of $G_8$. This implies $\bar{\gammag}_8\elle 1$. If equality holds then $G_8$ has a unique short orbit of length $2$. On the other hand, since neither $30$ nor $60$ is divisible by $8$, $G_8$ has a short orbit in both $\Omega_\epsilon$ and $\Omega_\theta$, a contradiction which implies $\bar{\gammag}_8=0$. Finally, since $\rm{Sym}_4$ contains a subgroup of order $8$, $\gammag_{24}=0$ also holds. For $G_{12}$, the Riemann-Hurwitz formula reads \begin{equation*} 6=24(\bar{\gammag}_{12}-1)+\sum (12-|o_i|); \end{equation*} with $o_i$ running over the set of short orbits of $G_{12}$. This implies $\bar{\gammag}_{12}\elle 1$. If equality holds then $G_{12}$ has a unique short orbit of length $12$, contained in $\Omega_\epsilon$. From the orbit-stabilizer theorem follows that $G_{12}$ contains an involution fixing a point in $\Omega_\epsilon$. This implies that the involution must be the product of two transposition, but it can be checked that the group $G_{12}$ does not contains any such element. It follows $\gammag_{12}=0$. \end{proof} Next we show for every $p \gammae 7$ that $J_{V/C_i}$ with $i=4,6$ is an elliptic curve, and that these two elliptic curves are pairwise isogenous over $\mathbb{F}_{p^2}$. The above defined $C_4$ can be viewed as an automorphism group of $F$ generated by the automorphism $(x_1,x_2,x_3,x_4,x_5)\mapsto (x_2,x_3,x_4,x_1,x_5)$. A Magma aided computation shows that each of following elements $b,c,d,\in F$ is left invariant by $C_4$: $$ \begin{array}{lll} b=-48 x_2 x_3^2 x_4^4 x_5^2 - 48 x_3^2 x_4^5 x_5^2 - 48 x_2 x_4^6 x_5^2 -48 x_4^7 x_5^2 - 24 x_2 x_3^2 x_4^3 x_5^3 - 24 x_2 x_3 x_4^4 x_5^3 - 48 x_3^2 x_4^4 x_5^3 - \\ 48 x_2 x_4^5 x_5^3 - 24 x_3 x_4^5 x_5^3 - 72 x_4^6 x_5^3 - 36 x_2 x_3^2 x_4^2 x_5^4 - 12 x_2 x_3 x_4^3 x_5^4 - 48 x_3^2 x_4^3 x_5^4 - 84 x_2 x_4^4 x_5^4 - 24 x_3 x_4^4 x_5^4 -\\ 108 x_4^5 x_5^4 - 208/9 x_2 x_3^2 x_4 x_5^5 - 28/9 x_2 x_3 x_4^2 x_5^5 - 370/9 x_3^2 x_4^2 x_5^5 - 668/9 x_2 x_4^3 x_5^5 - 82/9 x_3 x_4^3 x_5^5 - 1046/9 x_4^4 x_5^5 - \\ 16/3 x_2 x_3^2 x_5^6 - 104/9 x_2 x_3 x_4 x_5^6 - 152/9 x_3^2 x_4 x_5^6 - 44 x_2 x_4^2 x_5^6 - 118/9 x_3 x_4^2 x_5^6 - 730/9 x_4^3 x_5^6 + 64/27 x_2 x_3 x_5^7 -\\ 8/3 x_3^2 x_5^7 - 712/27 x_2 x_4 x_5^7 - 92/27 x_3 x_4 x_5^7 - 1306/27 x_4^2 x_5^7 - 2128/243 x_2 x_5^8 + 32/27 x_3 x_5^8 - 5332/243 x_4 x_5^8 - \\1064/243 x_5^9, \end{array} $$ $$ \begin{array}{lll} c=-67/3 x_2 x_3^2 x_4^2 x_5^4 + 1/3 x_2 x_3 x_4^3 x_5^4 + 68/3 x_3^2 x_4^3 x_5^4 - 67/3 x_2 x_4^4 x_5^4 + 67/3 x_4^5 x_5^4 - 11 x_2 x_3 x_4^2 x_5^5 + 1/6 x_3^2 x_4^2 x_5^5 - \\11 x_2 x_4^3 x_5^5 + 23/2 x_3 x_4^3 x_5^5 + 67/6 x_4^4 x_5^5 - 68/9 x_2 x_3^2 x_5^6 + 2 x_2 x_3 x_4 x_5^6 + 86/9 x_3^2 x_4 x_5^6 - 217/9 x_2 x_4^2 x_5^6 + \\ 5/18 x_3 x_4^2 x_5^6 + 439/18 x_4^3 x_5^6 + 272/81 x_2 x_3 x_5^7 + 578/81 x_3^2 x_5^7 - 790/81 x_2 x_4 x_5^7 - 97/81 x_3 x_4 x_5^7 + 107/6 x_4^2 x_5^7 -\\ 116/81 x_2 x_5^8 + 632/81 x_3 x_5^8 + 217/81 x_4 x_5^8 + 554/81 x_5^9, \end{array} $$ $$ \begin{array}{lll} d=72 x_2 x_3^2 x_4^5 x_5 + 72 x_2 x_4^7 x_5 + 54 x_2 x_3^2 x_4^4 x_5^2 + 36 x_2 x_3 x_4^5 x_5^2 + 18 x_3^2 x_4^5 x_5^2 + 90 x_2 x_4^6 x_5^2 + 18 x_4^7 x_5^2 + 63 x_2 x_3^2 x_4^3 x_5^3 + \\ 27 x_2 x_3 x_4^4 x_5^3 + 36 x_3^2 x_4^4 x_5^3 + 144 x_2 x_4^5 x_5^3 + 9 x_3 x_4^5 x_5^3 + 45 x_4^6 x_5^3 + 178/3 x_2 x_3^2 
x_4^2 x_5^4 + 9 x_2 x_3 x_4^3 x_5^4 + 16 x_3^2 x_4^3 x_5^4 + \\ 154 x_2 x_4^4 x_5^4 + 55/3 x_3 x_4^4 x_5^4 + 142/3 x_4^5 x_5^4 + 50/3 x_2 x_3^2 x_4 x_5^5 + 24 x_2 x_3 x_4^2 x_5^5 + 29 x_3^2 x_4^2 x_5^5 + 298/3 x_2 x_4^3 x_5^5 +\\ 8/3 x_3 x_4^3 x_5^5 + 209/3 x_4^4 x_5^5 + 52/9 x_2 x_3^2 x_5^6 - 2/9 x_2 x_3 x_4 x_5^6 + 110/9 x_3^2 x_4 x_5^6 + 613/9 x_2 x_4^2 x_5^6 + 74/9 x_3 x_4^2 x_5^6 +\\ 139/3 x_4^3 x_5^6 - 208/81 x_2 x_3 x_5^7 + 532/81 x_3^2 x_5^7 + 2260/81 x_2 x_4 x_5^7 + 226/27 x_3 x_4 x_5^7 + 2738/81 x_4^2 x_5^7 + 4 x_2 x_5^8 + \\ 208/81 x_3 x_5^8 + 1460/81 x_4 x_5^8 + 676/81 x_5^9. \end{array} $$ Moreover, let $b_1=b/c,d_1=d/c$. From the above computation, \begin{equation} \ellabel{quoc4} 135b_1^3-360b_1^2+240b_1+256+256d_1^2=0. \end{equation} Let $F_1$ be the subfield of $F$ generated by $b_1,d_1$. Then $F_1$ is elliptic and the linear map $(b_1,d_1)\mapsto (x,y)$ with $x=-256/135b_1$, $y=-65536/18225d_1$, gives an equation $$y^2 = x^3 + 2^{11}3^{-4}5^{-1} x^2 +2^{20}3^{-8}5^{-2} x - 2^{32}3^{-12}5^{-4}$$ for $F_1$. Furthermore, $F_1$ is the fixed field $F^{C_4}$ of $C_4$. Indeed, since both $b_1$ and $d_1$ are fixed by $C_4$, $F_1 \subseteq F^{C_4}$ holds. On the other hand, if equality does not hold then Galois theory yields that $F_1$ is the fixed field of a subgroup $N$ of $G$ strictly containing $C_4$. Since $C_4$ is cyclic and $G\cong \rm{Sym}_5$, this yields that the order of $N$ is either $8$, $12$, $20$ or $24$. But then $V/N$ is rational by Proposition \ref{quozrazionale}. Therefore, $F_1=F^{C_4}$. The quotient curve $V/C_6$ can be investigated in a similar way. The subfield $F^{C_6}$ of $F$ is generated by $A,B$ with \begin{equation} \begin{array}{lll} 5585034240000A^4 + 23225726880000A^3B + 27897294510000A^2B^2 +\\ 7952734845000AB^3 + 1056082140000B^4 + 13606338560000A^2B +\\ 28775567360000AB^2 + 6849136640000B^3 + 11767644160000B^2=0. \end{array} \end{equation} The birational map $$\begin{array}{lll} (A,B)\mapsto(2^{19}3^{-6}5^{-2} AB + 2^{20} 3^{-6}5^{-2} B^2,2^{32}37\cdot3^{-12}5^{-4} A^2 + \\2^{30}313\cdot 3^{-12}5^{-4} AB + 2^{29}149\cdot 3^{-12}5^{-4} B^2 +\\ 2^{38} 3^{-12}5^{-4} B, -A^2 - 4AB - 4B^2), \end{array} $$ is a birational isomorphism over $\mathbb{F}_p$ to the elliptic curve $E_2$ with equation \begin{equation} y_2^2 = x_2^3 - 2^{20}3^{-8}5^{-2}71 x_2^2 - 2^{43}3^{-16}5^{-4}41 x_2 - 2^{64}23 \cdot 3^{-24}5^{-6}. \end{equation} Since, by Proposition \ref{quozrazionale}, the function field of the quotient curve of any subgroup of $G$ properly containing $C_6$ is rational, it follows that $\mathbb{K}( A,B)=\mathbb{K}(x_2,y_2)$ is the function field of $V/C_6$. Furthermore, the elliptic curves $E_1$ and $E_2$ are isogenous over $\mathbb {F}_p$ via the isogeny $$ \begin{array}{lll} (x , y ) \mapsto ((2^{10}3^{-4}x^3 + 2^{20}\cdot 17\cdot3^{-8} 5^{-2} x^2 + 2^{30}31\cdot3^{-12}\cdot 5^{-3} x + 2^{40}11 \cdot 3^{-16}5^{-4})\\ / (x^2 - 2^{11}3^{-4}5^{-1} x + 2^{20}3^{-8} 5^{-2}) , (2^{15}3^{-6} x^3y - 2^{25}3^{-9}5^{-1} x^2y - 2^{35}13\cdot 3^{-14}5^{-2} xy - \\ 2^{45}53\cdot 3^{-18}5^{-4} y) / (x^3 - 2^{10}3^{-3}5^{-1} x^2 + 2^{20}3^{-7}5^{-2} x - 2^{30}3^{-12}5^{-3}) ). \end{array} $$ Observe that, since $p\gammaeq 7$ the denominators do not vanish. Summing up, the following theorem holds. 
\begin{theorem} \label{the27nov21} For $p\geq 7$, the Jacobian variety $J_V$ of $V$ has a decomposition over $\mathbb{F}_p$ of the form $J_V\sim E^4$ where $E=E_1$ is the elliptic curve with Weierstrass equation $$y^2 = x^3 + 2^{11}3^{-4}5^{-1} x^2 + 2^{20}3^{-8}5^{-2} x - 2^{32}3^{-12}5^{-4} .$$ \end{theorem} Since $E$ is maximal over $\mathbb{F}_{p^2}$ for infinitely many primes \cite{elkies}, Theorem \ref{the27nov21} has the following corollary. \begin{corollary} $V$ is an $\mathbb{F}_{p^2}$-maximal curve for infinitely many primes $p$. \end{corollary} \begin{rem}\em{ The primes $p\leq 10000$ such that $V$ is $\mathbb{F}_{p^2}$-maximal are $$p=29, 59, 149, 239, 269, 839, 1439, 1559, 2789, 2909, 4079, 4799, 5519, 6959, 8069, 8819, 9479, 9749.$$} \end{rem} \section{The case of positive characteristic: $m=p-1$} In this section we assume $m=p-1$. \subsection{Further results on $V(\mathbb{F}_{p^i})$.} \begin{theorem} \label{redeith} System (\ref{sy}) mod $p$ has as many as $(p-1)!$ solutions. \end{theorem} \begin{proof} One can count the solutions of (\ref{sy}) modulo $p$ up to a non-zero constant factor by computing the number of points of $V$ over $\mathbb{F}_p$. Since every primitive $(p-1)$-th root of unity in $\mathbb{K}$ lies in $\mathbb{F}_p$, Lemma \ref{lem10octAchar0} yields $|V(\mathbb{F}_{p})|=(p-2)!$, and the claim follows. \end{proof} From the proof of Theorem \ref{redeith}, $V(\mathbb{F}_p)$ is the $G$-orbit $\mathcal{O}_\omega$ defined in Lemma \ref{lem10octAchar0}. Furthermore, Lemmas \ref{lem7oct} and \ref{lem8octU} have the following corollary. \begin{lemma} \label{lem170621} The $G$-orbit $\mathcal{O}_\varepsilon$ is contained in $V(\mathbb{F}_{p^i})$ but not in $V(\mathbb{F}_{p^j})$ for $j<i$, where $\mathbb{F}_{p^i}$ is the smallest subfield of $\mathbb{K}$ containing a primitive $(p-2)$-th root of unity $\varepsilon$. \end{lemma} We are in a position to prove the following theorem. \begin{theorem}\label{nopuntiFp2} The curve $V$ has no proper $\mathbb{F}_{p^2}$-rational point, that is, $V(\mathbb{F}_{p^2})=V(\mathbb{F}_{p})$. \end{theorem} \begin{proof} By way of contradiction, assume $|V(\mathbb{F}_{p^2})|>|V(\mathbb{F}_{p})|$. Then, since $G$ takes $\mathbb{F}_{p^2}$-rational points to $\mathbb{F}_{p^2}$-rational points, there exists a $G$-orbit $\Omega$ entirely contained in $V(\mathbb{F}_{p^2})\setminus V(\mathbb{F}_{p})$. Assume first that $\Omega$ is a long orbit, that is, $|\Omega|=(p-1)!$, and let $H$ be the stabilizer of $X_{p-1}, X_{p-2}, X_{p-3}$ in $G$. Then $H$ partitions $\Omega$ into $(p-1)(p-2)(p-3)$ long $H$-orbits. Since each $H$-orbit corresponds to a point of $\mathcal{X}=V/H$, it follows that \begin{equation*} |\mathcal{X}(\mathbb{F}_{p^2})\setminus \mathcal{X}(\mathbb{F}_{p})|\geq (p-1)(p-2)(p-3). \end{equation*} On the other hand, the St\"ohr-Voloch bound \cite{sv} (see also \cite[Theorem 8.41]{HKT}) applied to $\mathcal X$ gives \begin{equation*} 2|\mathcal{X}(\mathbb{F}_{p^2})| \leq (2\mathfrak{g}(\mathcal X)-2)+(p^2+2)(p-3)=(p-6)(p-3)+(p^2+2)(p-3)=(p-3)(p^2+p-4). \end{equation*} Therefore, $2(p-1)(p-2)(p-3)\le (p-3)(p^2+p-4)$. But then $p<7$, a contradiction. Assume now that $\Omega$ is a short orbit. From Lemma \ref{3orbite}, the only possibility is that $\Omega=\Omega_\theta$ for some transposition $\theta\in G$.
Since each $H$-orbit corresponds to a point of $\mathcal{X}$, Lemma \ref{lambda12} implies \begin{equation*} |\mathcal{X}(\mathbb{F}_{p^2})\setminus \mathcal{X}(\mathbb{F}_{p})|\gammaeq \ellambda_1+\ellambda_2=(p-3)(p^2-6p+11). \end{equation*} This time the St\"ohr-Voloch bound gives \begin{equation*} 2\cdot (p-3)(p^2-6p+11)\elleq (p-3)(p^2+p-4), \end{equation*} which is only possible for $p=7$. However, a Magma aided computation rules out this possibility. \end{proof} From Lemmas \ref{lem10octAchar0} and \ref{lem7oct}, $\Omega_\omega=V(\mathbb{F}_p)$ and $\Omega_\varepsilon \subset V(\mathbb{F}_{p^j})$ respectively, where $\mathbb{F}_{p^j}$ is the smallest subfield of $\mathbb{K}$ containing a primitive ($p-2$)-th root of unity. We prove an analog claim for $\Omega_\theta$. As pointed out in the proof of Lemma \ref{lem11octC}, $\Omega_\theta$ has a point $P$ whose last two coordinates are equal $1$. \begin{lemma} \ellabel{lem22jun21} Let $P=(\mathbf xi_1,\mathbf xi_2,\elldots,\mathbf xi_{p-3},1,1)$ be a point of $\Omega_\theta$. Then $\mathbf xi_{p-3}\in \mathbb{F}_{p^{j}}$ for some $1<j\elle p-3$. Furthermore, if $\mathbf xi_{p-3}\not \in \mathbb{F}_{p^{j}}$ with $j<p-3$ then, up to a permutation of the indices $\{1,2,\elldots, p-4\}$, $$x_j=x_{p-3}^{p^j}\quad j=1,2,\elldots p-4.$$ \end{lemma} \begin{proof} Since both $V$ and $\Omega_\theta$ are defined over $\mathbb{F}_p$, Lemma \ref{3orbite} shows that the Frobenius map $\mathbf Phi$ takes $P$ to the point $P^{(p)}\in \Omega_\theta$ where $P^{(p)}=(\mathbf xi_1^p,\mathbf xi_2^p,\elldots,\mathbf xi_{p-3}^p,1,1)$. Also, $\mathbf Phi^i$ takes $P$ to the point $$P^{(p^i)}=(\mathbf xi_1^{p^i},\mathbf xi_2^{p^i},\elldots,\mathbf xi_{p-3}^{p^i},1,1)$$ of $\Omega_\theta$. To prove the first claim, assume on the contrary $\mathbf xi_{p-3}\not \in \mathbb{F}_{p^{j}}$ for $j\elle p-3$. Then $\mathbf xi_{p-3}, \mathbf xi_{p-3}^p,\elldots, \mathbf xi_{p-3}^{p^{p-3}}$ are pairwise distinct. On the other hand, from Lemma \ref{lem10giugno}, $\mathbf xi_{p-3}^{p^i}\in \{\mathbf xi_1,\mathbf xi_2,\elldots,\mathbf xi_{p-3}\}$ for any $i\gammae 0$; a contradiction. To prove the second claim, we may assume that $\mathbf xi_{p-3}\in \mathbb{F}_{p^{p-3}}$. Then the previous argument shows that $\{\mathbf xi_{p-3}, \mathbf xi_{p-3}^p,\elldots, \mathbf xi_{p-3}^{p^{p-3}}\}=\{\mathbf xi_1,\mathbf xi_2,\elldots,\mathbf xi_{p-3}\}$ whence the claim follows. \end{proof} Theorem \ref{the24jun} together with Lemma \ref{lem22jun21} have the following corollary. \begin{proposition} \ellabel{lem24jun21} Let $\mathbb{F}_{p^j}$ be the subfield of $\mathbb{K}$ which is the splitting field of the polynomial $f(X)=X^{p-3}+2X^{p-4}+3X^{p-5}+\elldots+(p-3)X+p-2$. Then $\Omega_\theta \subset V(\mathbb{F}_{p^j})$ but $\Omega_\theta \nsubseteq V(\mathbb{F}_{p^i})$ for $i<j$. \end{proposition} The proof of the following theorem relies on Proposition \ref{lem24jun21}. \begin{theorem} \ellabel{the181021} Let $d$ be the smallest positive integer such that $(p-2)\mid (p^d-1)$. Then $\Omega_\theta, \Omega_\epsilon \subset V(\mathbb{F}_{p^d})$ but $\Omega_\theta, \Omega_\epsilon \nsubseteq V(\mathbb{F}_{p^i})$ for $i<d$. \end{theorem} \begin{proof} By definition, $\mathbb{F}_{p^d}$ is the smallest extension of $\mathbb{F}_p$ containing a primitive $(p-2)$-th root of unity. So, the claim holds for $\Omega_\epsilon$. To complete the proof, consider the polynomial $$g(X)=\frac{X^{p-2}-1}{X-1}=X^{p-3}+X^{p-4}+\elldots+X+1,$$ whose splitting field is $\mathbb{F}_{p^d}$. 
We prove that $g(1-X)=f(X)$. Indeed $$ g(1-X)=\sum_{i=0}^{p-3} \sum_{j=0}^i \binom{i}{j}(-X)^j, $$ and for $i\in \{0,\elldots,p-3\}$ the coefficient of $X^i$ in $g(1-X)$ is $$(-1)^i\sum_{k=i}^{p-3}\binom{k}{i}=(-1)^i\sum_{k=0}^{p-3-i}\binom{i+k}{i}=(-1)^i\binom{p-2}{p-3-i}=(-1)^i\binom{p-2}{i+1}.$$ Thus, $$ g(1-X)=\sum_{i=0}^{p-3}\binom{p-2}{i+1}(-1)^i X^i=\sum_{i=0}^{p-3}(p-2-i)X^i=f(X). $$ Therefore the splitting field of $f(X)$ coincides with the splitting field of $g(X)$ and Proposition \ref{lem24jun21} yields the claim. \end{proof} \subsection{Non-classicality and Frobenius non-classicality of $V$} \ellabel{nc} For $m=p-1$, Proposition \ref{prop10.03} has the following corollary. \begin{proposition} \ellabel{prop10.03A} If $\mathbb{K}$ has characteristic $p$ then $V$ is contained in the Hermitian variety $\mathcal H_{p-3}$ which is the intersection of the hyperplane $\mathbf Pi$ of equation $X_1 + X_2 + \elldots + X_{p-1}=0$ with the Hermitian variety $\mathcal H_{p-2}$ of equation $X_1^{p+1} + \elldots+X_{p-1}^{p+1}=0.$ \end{proposition} \begin{lemma} \ellabel{lem17ag21} For a point $P\in V$, let $\mathbf Pi_P$ be the tangent hyperplane to $\mathcal H_{p-3}$ at $P$. Then $I(P,V\cap \mathbf Pi_P)\gammae p$. \end{lemma} \begin{proof} Clearly, $\mathbf Pi_P$ is the intersection of the hyperplane $\mathbf Pi$ with the tangent hyperplane $\alpha_P$ to the Hermitian variety $\mathcal H_{p-2}$ at $P$. Since $V$ is contained in $\mathbf Pi$, $I(P,V\cap \mathbf Pi_P)=I(P,V\cap \alpha_P)$ holds. Hence, it is enough to show that $I(P,V\cap \alpha_P)$ is at least $p$. Let $P=(\mathbf xi_1:\cdots:\mathbf xi_{p-2}:\mathbf xi_{p-1}).$ Up to a change of coordinates, $\mathbf xi_{p-1}=1$ may be assumed. In the affine space ${\rm{AG}}(p-2,\mathbb{K})$ with infinite hyperplane $X_{p-1}=0$, $\mathcal H_{p-2}$ has equation $X_1^{p+1}+X_2^{p+1}+\elldots + X_{p-2}^{p+1}+1=0$. For every $i=1,\elldots,p-2,$ let $x_i(t)=\mathbf xi_i+\rho_i(t)$ with $x_i(t),\rho_i(t)\in \mathbb{K}[[t]]$ and ${\rm{ord}}(\rho_i(t))\gammae 1$ be a primitive representation of the unique branch of $V$ centered at $P$. By Proposition \ref{prop10.03A}, $V$ is contained in $\mathcal H$. Therefore, $x_1(t)^{p+1} + x_2(t)^{p+1} + \elldots+x_{p-2}(t)^{p+1}+1$ vanishes in $\mathbb{K}[[t]].$ From this $$\mathbf xi_1^p(\mathbf xi_1+\rho_1(t))+\mathbf xi_2^p(\mathbf xi_2+\rho_2(t))+\elldots +\mathbf xi_{p-2}^p(\mathbf xi_{p-2}+\rho_{p-2}(t))+1=t^pv(t),\,\,v(t)\in \mathbb{K}[[t]],{\rm{ord}}(v(t))\gammae 0.$$ Since $\mathbf Pi_P$ has equation $\mathbf xi_1^{p}X_1+\mathbf xi_2^{p}X_2+\elldots +\mathbf xi_{p-2}^{p}X_{p-2}+1=0$ in ${\rm{AG}}(p-2,\mathbb{K})$, the claim follows. \end{proof} Since the dimension of ${\rm{PG}}(p-2,\mathbb{K}))$ is smaller than $p$, Lemma \ref{lem17ag21} has the following corollary. \begin{theorem} \ellabel{the17ag21} $V$ is a non-classical curve. \end{theorem} \begin{theorem} \ellabel{the191021} $V$ is a Frobenius non-classical curve. \end{theorem} \begin{proof} Assume on the contrary that $V$ is Frobenius classical. Then $0,1,\elldots,p-4$ are orders at a generically chosen point of $V$. Lemma \ref{lem17ag21} yields that the last order, $\varepsilon_{p-3}$, is equal to $p$. Therefore, Lemma \ref{lem17ag21} also yields that the osculating tangent hyperplane to $V$ at $P$ coincides with the tangent hyperplane $\mathbf Pi$ to Hermitian variety $\mathcal H_p$. Since $\mathbf Pi$ passes through the Frobenius image of $P$, it follows that $V$ is Frobenius non-classical, a contradiction. 
\end{proof} \subsection{Some results on the orders of $V$ at $\mathbb{F}_p$-rational points} \ellabel{secc1} Since $G$ is transitive on $V(\mathbb{F}_p)$, the orders of $V$ are the same at every point $P\in V(\mathbb{F}_p)$. From Lemma \ref{lem1210}, such a point is $P=(1:\eta^{p-2}:\eta^{p-3}:\dots:\eta)$ for a primitive element $\eta$ of $\mathbb{F}_p$. From Lemma \ref{lem10octAchar0}, the stabilizer of $P$ in $G$ is a cyclic group of order $p-1$ generated by the projectivity $\sigma$ associated with the matrix \begin{equation}\ellabel{circulant} M_\sigma= \begin{pmatrix} 0 & 1 & 0 & &\cdots & 0\\ 0 & 0 & 1 & &\cdots & 0\\ & & & \ddots & & \\ \vdots& & &\ddots & &\\ 0 & 0 & 0 &\cdots & 0 & 1\\ 1 & 0 & 0 &\cdots & 0 & 0\\ \end{pmatrix}. \end{equation} The eigenvalues of $M_\sigma$ are $\ellambda_i=\eta^i$ for $i=0,\dots,p-2$. Moreover, then eigenvectors of $M_\sigma$ are ${\bf{w}}_i=(1,\eta^i,\eta^{2i},\dots,\eta^{(p-2)i})$. Thus the point of ${\rm{PG}}(p-2,\mathbb{F}_p)$ represented by ${\bf{w}}_i$ is in $V$ if and only if ${\rm{g.c.d.}}(i,p-1)=1$. Therefore, $\sigma$ has as many as $\varphi(p-1)$ fixed points on $V$. Let $\eta_i=\eta^{i}$ for $i=0,\elldots,p-2$, and consider the change of the projective frame from $(X_1:X_2:\elldots:X_{p-1})$ to $(Y_1:Y_2:\elldots:Y_{p-1})$ defined by \begin{equation}\ellabel{varchange} \begin{cases} X_1=Y_1+Y_2+\cdots+Y_{p-1};\\ X_2=Y_1+\eta_1Y_2+\eta_1^2Y_3\cdots+\eta_{1}^{p-2}Y_{p-1};\\ X_3=Y_1+\eta_2Y_2+\cdots+\eta_{2}^{p-2}Y_{p-1};\\ \vdots\\ X_{p-1}=Y_1+\eta_{p-2}Y_2+\cdots+\eta_{p-2}^{p-2}Y_{p-1}. \end{cases} \end{equation} With this transformation, $P$ is taken to the fundamental point $O=(0:0:\cdots:0 :1)$. Let $R$ be the matrix whose rows are the vectors ${\bf{w}}_i$, $i=1,\dots,p-1$. Then $R^{-1}M_\sigma R$ is the diagonal matrix \begin{equation*} D=\begin{pmatrix} \eta &0&&\cdots&0\\ 0&\eta^2&0&\cdots&0\\ \vdots& &&\ddots &\\ 0&\cdots&& 0&\eta^{2u} \end{pmatrix}, \end{equation*} where $2u=p-1$, and hence it is the matrix associated to $\sigma$ in the projective frame $(Y_1:Y_2:\elldots:Y_{p-1})$. Replacing $X_i$ by (\ref{varchange}) in \eqref{sy}, we obtain the equations of $V$ in the projective frame $(Y_1:Y_2:\elldots:Y_{p-1})$. Now, we explicitly write down the equations defining $V$ in the projective frame $(Y_1:Y_2:\elldots:Y_{p-1})$. From the first equation in \eqref{sy}, we obtain $Y_1=0$. In fact, $\sum_{i=1}^{p-1}X_i=(p-1)Y_1+Y_2\sum_{j=0}^{p-2}\eta_1^j+\cdots+Y_{p-1}\sum_{j=0}^{p-2}\eta_{p-2}^j,$ and for $i=1\dots,p-2$ we have $\sum_{j=0}^{p-2}\eta_{i}^j=(\eta_i^{p-1}-1)/(\eta_i-1)=0$. Therefore, $V$ is a projective variety of ${\rm{PG}}(p-3,\mathbb{F}_p)$ with projective frame $(Y_2:Y_3:\cdots :Y_{p-1})$. Since $O$ is off the hyperplane of homogenous equation $Y_{p-1}=0$, a branch representation of the unique branch $\alphamma$ of $V$ centered at $O$ has as components $y_2=y_2(t),\dots,y_{p-2}=y_{p-2}(t),y_{p-1}=1$. Here $y_i(t)\in \mathbb{F}_p[[t]]$. Furthermore, we can assume $ord(y_{p-2}(t))=1$, since $O$ is a simple point of $V$ and the tangent hyperplane to $V$ at $O$ does not contain the line of homogeneous equation $Y_2=0,\elldots, Y_{p-3}=0$. Therefore, \begin{equation}\ellabel{ramo} \begin{cases} &y_2(t)= \alpha_{2,1}t+\alpha_{2,2}t^2+\dots\\ &y_3(t)= \alpha_{3,1}t+\alpha_{3,2}t^2+\dots\\ &\vdots \\ &y_{p-3}(t)=\alpha_{p-3,1}t+\alpha_{p-3,2}t^2+\dots\\ &y_{p-2}(t)=t\\ &y_{p-1}(t)=1. \end{cases} \end{equation} Since the projectivity $\sigma$ preserves $V$ and fixes $O$, it also preserves $\alphamma$. 
Moreover, $\sigma$ is associated to the above diagonal matrix $D$. This yields that an equivalent branch representation is given by \begin{equation}\ellabel{ramo2} \begin{cases} &\bar{y}_1(t)=0\\ &\bar{y}_2(t)=\eta^2( \alpha_{2,1}t+\alpha_{2,2}t^2+\dots)\\ &\bar{y}_3(t)=\eta^3( \alpha_{3,1}t+\alpha_{3,2}t^2+\dots)\\ &\vdots \\ &\bar{y}_{p-3}(t)=\eta^{p-3}(\alpha_{p-3,1}t+\alpha_{p-3,2}t^2+\dots)\\ &\bar{y}_{p-2}(t)=\eta^{p-2}t=\eta^{-1}t\\ &\bar{y}_{p-1}(t)=1. \end{cases} \end{equation} Replacing $t$ by $\eta^{-1}t$, the equations of \eqref{ramo2} become \begin{equation}\ellabel{ramo3} \begin{cases} &\bar{y}_1(t)=0\\ &\bar{y}_2(t)=\eta^2( \eta\alpha_{2,1}t+\eta^2\alpha_{2,2}t^2+\dots)\\ &\bar{y}_3(t)=\eta^3( \eta\alpha_{3,1}t+\eta^2\alpha_{3,2}t^2+\dots)\\ &\vdots \\ &\bar{y}_{p-3}(t)=\eta^{p-3}(\eta\alpha_{p-3,1}t+\eta^2\alpha_{p-3,2}t^2+\dots)\\ &\bar{y}_{p-2}(t)=t.\\ \end{cases} \end{equation} Therefore \eqref{ramo} and \eqref{ramo3} are the same branch representation of $\alphamma$, whence \begin{equation*} \begin{cases} &\eta^3\alpha_{2,1}=\alpha_{2,1};\\ &\vdots\\ &\eta^{2+i}\alpha_{2,i}=\alpha_{2,i};\\ &\vdots\\ &\eta^{2u-1}\alpha_{2u-1,1}=\alpha_{2u-1,1};\\ &\vdots\\ &\eta^{2u-2+i}\alpha_{2u-2,i}=\alpha_{2u-2,i};\\ &\vdots\\ \end{cases} \end{equation*} From this $\alpha_{2,i}=0$ for $i< 2u-2$. More generally, $\alpha_{k,i}=0$ for $2\elleq k\elleq p-3$ and $i<p-1-k$. Therefore, $ord(y_k(t))\gammaeq p-1-k$ for $2\elleq k\elleq p-3$. Moreover, the only coefficients $\alpha_{k,i}\neq 0$ are among those verifying $i+k\equiv 0{\bf P}mod{p-1}$, where $2\elleq k\elleq p-3$ and $i\gammaeq 1$. Thus, the branch representation is \begin{equation}\ellabel{ramo3bis} \begin{cases} &y_2(t)= \alpha_{2,p-3}t^{p-3}+\alpha_{2,2p-4}t^{2p-4}+\dots+\alpha_{2,w(p-1)-2}t^{w(p-1)-2}+\dots\\ &y_3(t)= \alpha_{3,p-4}t^{p-4}+\alpha_{3,4u-3}t^{4u-3}+\dots+\alpha_{3,w(p-1)-3}t^{w(p-1)-3}+\dots\\ &\vdots \\ &y_{p-4}(t)=\alpha_{p-4,3}t^3+\alpha_{p-4,p+2}t^{p+2}+\dots+\alpha_{p-4,w(p-1)+3}t^{w(p-1)+3}+\dots\\ &y_{p-3}(t)=\alpha_{p-3,2}t^2+\alpha_{p-3,p+1}t^{p+1}+\dots+\alpha_{p-3,w(p-1)+2}t^{w(p-1)+2}+\dots\\ &y_{p-2}(t)=t\\ &y_{p-1}(t)=1. \end{cases} \end{equation} We now show how to compute the remaining orders.\\ Recall that, in the new coordinates, the $k$-th equation in \eqref{sy} reads \begin{equation}\ellabel{nuoveEquazioni} \sum_{j=1}^{p-1}\bigg(\sum_{i=2}^{p-2}\eta_{j-1}^{i-1}Y_i+\eta_{j-1}^{p-2}Y_{p-1}\bigg)^k, \end{equation} and that by the multinomial theorem \begin{equation*} \bigg(\sum_{i=2}^{p-2}s_i+v\bigg)^k=\sum_{k_2+\dots+k_{p-2}+l=k}\binom{k}{k_2,\dots,k_{p-2},l}{\bf P}rod_{w=2}^{p-2}s_w^{k_w}\cdot v^l. \end{equation*} Replacing $Y_{i}$ by $y_i(t)=\sum_{s=1}^\infty\alpha_{i,(p-1)s-i}t^{(p-1)s-i}$ in the $k$-th equation, it is obtained \begin{equation}\ellabel{eqramo} \begin{split} &\sum_{j=1}^{p-1}\bigg(\sum_{i=2}^{p-2}\eta_{j-1}^{i-1}\big(\sum_{s=1}^\infty\alpha_{(p-1)s-i,i}t^{(p-1)s-i}\big)+\eta_{j-1}^{p-2}\bigg)^k=0. \end{split} \end{equation} First, $ord(y_{p-3}(t))=2$, and, more precisely, $\alpha_{p-3,2}=2\neq 0$. This can be observed by looking at the quadratic term in $t$ in the $(p-3)$-th equation, after the substitution $Y_i=y_i(t)$. By Proposition \ref{prop10.03}, $k\in\{1,2,\dots,p-3\}\cup\{p+1,p+2,\dots,2p-5\}$.\\ In order to compute the orders of $y_i(t)$, with $i<p-1-2$, let $k\in \{p+1,p+2,\dots,2p-5\}$, and write $k=2p-2-\tilde{k}$, $\tilde{k}\in\{3,\dots,p-3\}$. 
\begin{rem}\ellabel{obs1} For any fixed $p$, the coefficients $\alpha_{p-1-\tilde{k},\tilde{k}}$, for $\tilde{k}\in\{3,\dots,p-3\}$, can be computed by taking into account the following constraints. \end{rem} The expansion of \eqref{eqramo} is of the form $$\sum_{j=1}^{p-1}\sum_{k_2+\cdots+k_{p-2}+l}\binom{k}{k_2,\dots,k_{p-2},l}\eta_{j-i}^{(2p-2)l}{\bf P}rod_{i=2}^{p-2}\eta_{j-1}^{(i-1)k_i}t^{(p-1-i)k_i}.$$ Here, $k_i\elleq \tilde{k}$ for terms of degree $\tilde{k}$ in $t$. Moreover, since $k>p$ and $$\binom{k}{k_1,k_2,\dots,k_{p-1-1},l}=\frac{(k)!}{k_1!k_2!\cdots k_{2u-1}!l!};$$ we see that $l\gammaeq p$ is a necessary condition in order to have a non-zero multinomial coefficient. Further, $l+\sum_{i=1}^{p-2}k_i=\tilde{k}$, and each term of degree $\tilde{k}$ corresponds to a choice of the indices such that $\sum(p-1-i)k_i=\tilde{k}$. Moreover, since for $u\neq 0 {\bf P}mod{p-1}$, \begin{equation*} \sum_{j=1}^{2u}\bigg(\eta^{j-1}_{u}\bigg)= \sum_{j=0}^{2u-1}\eta^j_{u}= \frac{\eta^{2u}_{u}-1}{\eta_{u}-1}=0, \end{equation*} every non-zero term must satisfy $\sum(i-1)k_i\equiv l{\bf P}mod{p-1}$.\\ To illustrate Remark \ref{obs1}, we show that $\alpha_{p-1-3,3}$ is non-zero, whereas $\alpha_{2,p-1-2}=0$.\\ \begin{itemize} \item Let $\tilde{k}=3$, $k=2p-5$. In the $k$-th equation, the terms of degree $\tilde{k}$ in $t$ are precisely the three terms corresponding to the choices $s=1$, $i=p-1-1$, $k_i=3$ and $l=2p-5-3$ (for $j\neq i$ it will be $k_j=0$), or $s=1$, $i=p-1-1$, $k_i=1$ and $l=2p-5-1$(for $j\neq i$, $k_j=0$), and $s=1$, $i_1=p-1-1$, $k_{i_1}=1$, $i_2=p-1-2$, $k_{i_2}=1$, $i_2=1$ and $l=2p-5-2$(for $j\neq i$, $k_j=0$). $$\binom{2p-5}{k_2,\dots,k_{2u-1},l}=\frac{(2p-5)!}{k_2!\cdots k_{2u-1}!l!}$$ Therefore, the term of degree $3$ in the $(2p-5)$-th equation is $$ \Bigg[ \binom{2p-5}{1,1,0,\dots,0,2p-5-2}\alpha_{p-1-2,2}+\binom{2p-5}{3,0,0\dots,0,2p-5-3}+\binom{2p-5}{0,0,1,0\dots,0,2p-5-1}\alpha_{p-1-3,3}\Bigg]t^3=$$ $$ =\Bigg[ \alpha_{p-1-2,2}\cdot (-5)\cdot (-6)+(-1)(-5)(-7)-5\alpha_{p-1-3,3}\Bigg]t^3 $$ Since $\alpha_{p-3,3}=2$, it follows $\alpha_{p-1-3,3}=5\neq 0$. \item Let $\tilde{k}=p-3$, $k=(p+1)$. Observe that, in this case, the factors in the multinomial theorem are of the form $\binom{p+1}{k_1,k_2,\dots,k_{2u-1},l}$ with $k_1+k_2+\dots+k_{2u-1}+l=p+1$. Also, the only non-zero coefficients are those with $k_i=p$ or $k_i=p+1$ for some $i$. Moreover, for $u\neq 0 {\bf P}mod{p-1}$, \begin{equation*} \sum_{j=1}^{2u}\bigg(\eta^{j-1}_{u}\bigg)= \sum_{j=0}^{2u-1}\eta^j_{u}= \frac{\eta^{2u}_{u}-1}{\eta_{u}-1}=0. \end{equation*} Therefore, for $k=p+1$, the possibly non-vanishing terms in equation \eqref{eqramo}, are those obtained for $i+\tilde{i}\equiv 2{\bf P}mod{2u}$, taking $\eta_{j-1}^{i-1}y_i(t)$ with multiplicity $p$ and $\eta_{j-1}^{\tilde{i}-1}y_{\tilde{i}}(t)$ with multiplicity $1$ or taking $\eta_{j-1}^{i-1}y_i(t)$ with multiplicity $1$ and $\eta_{j-1}^{\tilde{i}-1}y_{\tilde{i}}(t)$ with multiplicity $p$, and those obtained for $i=m+1$ and $\eta_{j-1}^{i-1}y_2(t)$ with multiplicity $p+1$.\\ Therefore, the term of degree $2u-2$ in $t$, can only be obtained by taking $\eta_{j-1}^{2-1}y_2(t)$ with multiplicity $1$ and $\eta_{j-1}^{2u-1}$ with multiplicity $p$, namely it is $-\alpha_{2,2u-2}t^{2u-2}$, and therefore $\alpha_{2,2u-2}=0$. As a consequence, $ord(y_2(t))\gammaeq 2p-4$. \end{itemize} \begin{proposition} \ellabel{pro211021} Each of the integers $1,2,3$ are intersection multiplicities $I(P,V\cap {\bf P}i)$ at any $\mathbb{F}_p$-rational point. 
Furthermore, the last order is at least $2p-4$. \end{proposition} \subsection{Case $p=7$} From Section \ref{secc1}, $y_1(t)=0$, $y_5(t)=t$, and $ord(y_4(t))=2$, with $\alpha_{4,2}=2$, that is $y_4(t)=2t^2+\alpha_{4,4}t^4+\cdots$. The second equation reads $$-y_3^3+y_2y_3y_4+4y_2^2y_5-y_5^3+y_4y_5+4y_3=0 ;$$ whence $\alpha_{3,3}=5$, that is $y_3(t)=5t^3+\alpha_{3,6}t^6+\cdots$. The eighth equation reads $$y_2^7+y_2+y_3^7y_5+y_3y_5^7+y_4^8=0.$$ Therefore it must be $ord(y_2(t))=10$.\\ Thus, the order sequence of the curve $V$ at the origin is $(0,1,2,3,10)$. \subsection{Case $p=11$} With the support of MAGMA the first terms of the branch expansions of the $y_i$ can be computed. \begin{equation*} \begin{split} y_1(t)=0;\\ y_2(t)=7t^{18}+\cdots ;\\ y_3(t)=5t^7+6t^{17}+\cdots;\\ y_4(t)=3t^6+2t^{16}+\cdots;\\ y_5(t)=9t^5+t^{15}+\cdots;\\ y_6(t)=3t^4+0t^{14}+\cdots;\\ y_7(t)=5t^3+6t^{13}+\cdots;\\ y_8(t)=2t^2+4t^{12}+\cdots;\\ y_9(t)=t. \end{split} \end{equation*} This shows that the order sequence of $V$ at the origin is $(0,1,2,3,4,5,6,7,18)$. \subsection{Connection with R\'edei's work on the Minkowski conjecture} \ellabel{secredei} From \cite[Theorem 2.1]{szabo} and \cite[Lemma 6.1]{ln}, every solution $(\mathbf xi_1,\elldots,\mathbf xi_p)$ with $\mathbf xi_i\in \mathbb{F}_p$ also satisfies the diagonal equations $X_1^{k} + X_2^{k} + \elldots+X_{p}^{k}=0$ for $k=\ha(p+1),\elldots p-3$. Now, let W be the algebraic variety of $\textrm{PG}(p-1,\mathbb{K})$ associated with the system \begin{equation} \ellabel{syconp}\elleft\{ \begin{array}{llll} X_1 + X_2 + \elldots + X_{p}=0;\\ X_1^2 + X_2^2 + \elldots + X_{p}^2=0;\\ \cdots\cdots\\ \cdots\cdots\\ X_1^{p-3} + X_2^{p-3} + \elldots+X_{p}^{p-3}=0. \end{array} \right. \end{equation} of diagonal equations. Clearly $E=(1:1:\cdots: 1)$ is a point of $W$. Furthermore, any $\ell$ line through $E$ and another point $P\in W$ is entirely contained in $W$. In fact, if $P=(a_1:\cdots:a_p)$ then the points on $\ell$ distinct from $E$ are $Q=(\ellambda+a_1:\cdots:\ellambda+a_p)$ with $\ellambda \in \mathbb{K}$, and a straightforward computation, $(\ellambda+a_1,\elldots, \ellambda+a_p)$ is also a solution of (\ref{syconp}). Since the hyperplane $\mathbf Pi$ of equation $X_p=0$ does not contain $E$, the algebraic variety cut out on $W$ by $\mathbf Pi$ is associated with (\ref{sy}) for $m=p-1$. Since this variety is $V$, Theorem \ref{redeith} and Lemma \ref{lem1210} show that the points of $W$ over $\mathbb{F}_p$ are $E$ together with the points $(\mathbf xi_1:\cdots: \mathbf xi_p)$ such that $[\mathbf xi_1,\elldots,\mathbf xi_p]$ is a permutation of the elements of $\mathbb{F}_p$. This gives a geometric interpretation for the result of R\'edei \cite{redei} and of Wang, Panico and Szab\'o \cite{redei} reported in the Introduction. \end{document}
\begin{document} \begin{abstract} We consider the $q$th root number function for the symmetric group. Our aim is to develop an asymptotic formula for the multiplicities of the $q$th root number function as $q$ tends to $\infty$. We use character theory, number theory and combinatorics. \end{abstract} \title{The Multiplicities of Root Number Functions} \section{Introduction} Let $q$ be a positive integer. We define the $q$th root number function $r_q\colon S_n\to\mathbb{N}_0$ via \begin{align*} r_q(\pi):=\#\{\sigma\in S_n:~\sigma^q=\pi\}. \end{align*} Obviously, $r_q$ is a class function of the symmetric group $S_n$. For each irreducible character $\chi$ of $S_n$ let \begin{equation*} m_{\chi}^{(q)}:=\langle r_q,\chi\rangle \end{equation*} be the multiplicity of $\chi$ in $r_q$.\par Scharf \cite{Sc1} proved that the $q$th root number functions $r_q$ are proper characters, that is, the multiplicities $m_{\chi}^{(q)}$ are non-negative integers. A good account of results on root number functions can be found in \cite[Chapter 6.2 and 6.3]{Ke1}. \par We now pay attention to the multiplicities $m_{\chi}^{(q)}$. Müller and Schlage-Puchta established estimates for the multiplicities $m_{\chi_{\lambda}}^{(q)}$ (cf. \cite[Proposition 2 and 3]{Mu1}). In addition, they showed the following: Let $\Delta\in\mathbb{N}$ be fixed and let $q\geqslant 2$ be an integer. Given a partition $\mu\vdash\Delta$, there exists some constant $C_{\mu}^q$, depending only on $\mu$ and $q$, such that for $n$ sufficiently large and for a partition $\lambda=(\lambda_1,\ldots,\lambda_l)\vdash n$ with $\lambda\backslash \lambda_1:=(\lambda_2,\ldots,\lambda_l)=\mu$, we have $m_{\chi_{\lambda}}^{(q)}=C_{\mu}^q$. In particular, we get \begin{align*} C_{(1)}^q&=\sigma_0(q)-1\\ C_{(2)}^q&=\frac{1}{2}(\sigma_1(q)+\sigma_0(q)^2-3\sigma_0(q)+\sigma_0'(q))\\ C_{(1,1)}^q&=\frac{1}{2}(\sigma_1(q)+\sigma_0(q)^2-3\sigma_0(q)-\sigma_0'(q))+1, \end{align*} where $\sigma_0'(q)$ is the number of odd divisors of $q$.\par Our aim is to generalize this result. We establish an asymptotic formula for the multiplicities $m_{\chi_{\lambda}}^{(q)}$ as $q$ tends to $\infty$. More precisely, we claim the following: \begin{thm} \label{3-1-1} Let $q\in\mathbb{N}$ be sufficiently large and let $\Delta\in\mathbb{N}$ with $\Delta\leqslant \frac{\log q}{\log 2}$. In addition, let $n\geqslant \Delta q$ be an integer. Then, for partitions $\lambda\vdash n$ and $\mu\vdash \Delta$ with $\lambda\backslash\lambda_1=\mu$, we have \begin{align*} m_{\chi_{\lambda}}^{(q)}=\begin{cases} \sigma_0(q)+\mathcal{O}(1) &\text{if }\Delta=1\\[0.5ex] \frac{1}{2}\sigma_1(q)+\mathcal{O}\left((\sigma_0(q))^2\right) &\text{if }\Delta=2\\[0.5ex] \frac{\chi_{\mu}(1)}{6}\sigma_2(q)+\mathcal{O}(\sigma_0(q)\sigma_1(q)) &\text{if }\Delta=3\\[0.5ex] \frac{\chi_{\mu}(1)}{\Delta!}\sigma_{\Delta-1}(q)+\mathcal{O}\left(\frac{\chi_{\mu}(1)}{\Delta!}q^{\Delta-2}\left(\Delta\sigma_0(q)+2^{\Delta}\right)\right) &\text{if }\Delta\geqslant 4. \end{cases} \end{align*} \par \noindent The $\mathcal{O}$-constant is universal. \end{thm} \begin{rem} \label{3-1-2} The error term in our asymptotic formula is essentially optimal. \end{rem} \noindent \par The proof of our Theorem proceeds as follows: At first, we realize that for $\pi\in S_n$ the value $\chi_{\lambda}(\pi)$ is a polynomial in the number $c_i(\pi)$ of $i$-cycles of the permutation $\pi$ for $i=1,\ldots,n$. 
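To illustrate this strategy in its simplest case, take $\lambda=(n-1,1)$, so that $\mu=\lambda\backslash\lambda_1=(1)$ and $\Delta=1$; then the polynomial is simply $\chi_{\lambda}(\pi)=c_1(\pi)-1$, and, using the identity $m_{\chi_{\lambda}}^{(q)}=\frac{1}{n!}\sum_{\pi\in S_n}\chi_{\lambda}(\pi^q)$ recalled in Section 2, the computation reduces to
\begin{equation*}
m_{\chi_{(n-1,1)}}^{(q)}=\frac{1}{n!}\sum_{\pi\in S_n}\bigl(c_1(\pi^q)-1\bigr)=\sum_{k\mid q}k\cdot\frac{1}{k}-1=\sigma_0(q)-1\qquad\text{for }n\geqslant q,
\end{equation*}
where we used that $c_1(\pi^q)=\sum_{k\mid q}k\,c_k(\pi)$ and that the mean of $c_k$ over $S_n$ equals $\frac{1}{k}$ for $n\geqslant k$ (both facts are recalled in Section 4); this recovers the value $C_{(1)}^q=\sigma_0(q)-1$ quoted above.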
We use this result to establish a formula with main and error term for $m_{\chi_{\lambda}}^{(q)}$, where the random variables $c_i$ appear again. Secondly, we summarize identities and estimates for Stirling numbers of the first and second kind and then we review bounds for the divisor function. Thirdly, we examine the distribution of cycle in $S_n$ and compute the mean of $(c_{k_1})^{m_1}\cdot\ldots\cdot(c_{k_j})^{m_j}$. The formula for the mean includes Stirling numbers of the second kind. Finally, we calculate the main and the error term of $m_{\chi_{\lambda}}^{(q)}$ obtained in the first step using the outcomes of step two and three. \par \noindent \textbf{Some notation.} A \emph{partition} of $n$ is a sequence $\lambda=(\lambda_1,\lambda_2,\ldots,\lambda_l)$ of positive integers such that $\lambda_1\geqslant\lambda_2\geqslant\ldots\geqslant\lambda_l$ and $\lambda_1+\lambda_2+\ldots+\lambda_l=n$. We write $\lambda\vdash n$ to indicate that $\lambda$ is a partition of $n$. By $\lambda\backslash \lambda_1$ we mean the partition $\lambda\backslash \lambda_1=(\lambda_2,\lambda_3,\ldots,\lambda_l)$. The \textit{weight} $|\lambda|$ of $\lambda$ is $|\lambda|=\sum_{j=1}^l\lambda_j$. For partitions $\lambda,\mu$ we write $\mu\subset\lambda$, if $\mu_j\leqslant\lambda_j$ for all $j$.\\ We denote by $\chi_{\lambda}$ the irreducible character of the symmetric group $S_n$ corresponding to the partition $\lambda$ of $n$.\\ For a permutation $\pi\in S_n$ and $1\leqslant i\leqslant n$ let $c_i(\pi)$ be the number of $i$-cycles of $\pi$.\\ Furthermore, for $\alpha\in\mathbb{R}$ let \begin{equation*} \sigma_{\alpha}(q):=\sum_{d|q}d^{\alpha} \end{equation*} be the \emph{divisor function}. As usual, we denote by $\zeta(s)$ the Riemann zeta function and we write $(n,k)$ for the greatest common divisor of $n$ and $k$.\\ We denote by ${n\brack k}$ and ${n\brace k}$ the \textit{Stirling numbers of the first and second kind}, respectively. Finally, $(x)_n$ denotes the falling factorial. \section{Proof of the Theorem} In this section, we use character theory to derive a formula with main and error term for the multiplicities $m_{\chi_{\lambda}}^{(q)}$. We apply this result to prove our Theorem.\\ Müller and Schlage-Puchta \cite[Lemma 7]{Mu1} established the following. \begin{lem} \label{3-2-1} Let $\lambda\vdash n$ be a partition, $\mu=\lambda\backslash\lambda_1$, and let $\pi\in S_n$ be a permutation. Then \begin{equation*} \chi_{\lambda}(\pi)=\sum_{\substack{\tilde{\mu}\subseteq\mu\\\tilde{\mu}_1\leqslant 1}}(-1)^{|\tilde{\mu}|}\sum_{\boldsymbol{c}\subseteq S_{|\mu|-|\tilde{\mu}|}}\chi_{\mu,\tilde{\mu}}(\boldsymbol{c})\prod_{i\leqslant|\mu|}\binom{c_i(\pi)}{c_i}, \end{equation*} where $\boldsymbol{c}$ runs over all conjugacy classes of $S_{|\mu|-|\tilde{\mu}|}$, $\chi_{\mu,\tilde{\mu}}(\boldsymbol{c})$ denotes the number of ways to obtain $\tilde{\mu}$ from $\mu$ by removing rim hooks according to the cycle structure of $\boldsymbol{c}$, counted with the sign prescribed by the Murnaghan-Nakayama rule\footnote{Cf. for instance \cite[Theorem 4.10.2]{Sa1}}, and $c_i$ is the number of $i$-cycles of an element of $\boldsymbol{c}$. \end{lem} \noindent This result shows that $\chi_{\lambda}(\pi)$ is a polynomial in $c_i(\pi)$ for $i=1,\ldots,|\mu|$ with leading term $\chi_{\mu}(1)(|\mu|!)^{-1}c_1(\pi)^{|\mu|}$. We now observe: \begin{lem} \label{3-2-2} Let $\lambda\vdash n$ and $\mu\vdash\Delta$ be partitions with $\mu=\lambda\backslash\lambda_1$, and let $\pi\in S_n$ be a permutation. 
Then we have \begin{equation*} \chi_{\lambda}(\pi)=\chi_{\mu}(1)\binom{c_1(\pi)}{\Delta}+\mathcal{O}\left(\chi_{\mu}(1)\sum_{j=1}^{\Delta}\binom{c_1(\pi)+\ldots+c_{j+1}(\pi)}{\Delta-j}\right). \end{equation*} \end{lem} \begin{proof} Applying Lemma \ref{3-2-1}, we obtain \begin{equation*} \chi_{\lambda}(\pi)=\chi_{\mu}(1)\binom{c_1(\pi)}{\Delta}+\sum_{\substack{\tilde{\mu}\subseteq\mu\\\tilde{\mu_1}\leqslant 1}}(-1)^{|\tilde{\mu}|}\sum_{\boldsymbol{c}\subseteq S_{\Delta-|\tilde{\mu}|}}\chi_{\mu,\tilde{\mu}}(\boldsymbol{c})\prod_{i\leqslant\Delta}\binom{c_i(\pi)}{c_i}, \end{equation*} where $\boldsymbol{c}$ runs over all conjugacy classes of $S_{\Delta-|\tilde{\mu}|}$ except the trivial class of $S_{\Delta}$. Therefore, we realize the expected main term. We shall show that the second term in the above formula can be absorbed into the error term.\\ At first, we observe that $|\chi_{\mu,\tilde{\mu}}(\boldsymbol{c})|\leqslant \chi_{\mu}(1)$ for a conjugacy class $\boldsymbol{c}$ of $S_{\Delta-|\tilde{\mu}|}$. Secondly, let $\boldsymbol{c}$ be a conjugacy class of $S_k$ with $1\leqslant k\leqslant \Delta$ and let $c_i$ be the number of $i$-cycles of an element of $\boldsymbol{c}$. Suppose that $c_1+c_2+\ldots+c_{\Delta}=\Delta-j$ for some positive integer $j$. Then we have $c_i=0$ for all $i\geqslant j+2$.\\ Therefore, it follows that the absolute value of the considered second term in the preceding formula is bounded above by \begin{equation*} \chi_{\mu}(1)\sum_{j=1}^{\Delta}\sum_{\substack{(c_1,\ldots,c_{j+1})\in\mathbb{N}_0^{j+1}\\1c_1+\ldots+(j+1)c_{j+1}\leqslant\Delta\\c_1+\ldots+c_{j+1}=\Delta-j}}\prod_{i\leqslant j+1}\binom{c_i(\pi)}{c_i}\leqslant\chi_{\mu}(1)\sum_{j=1}^{\Delta}\binom{c_1(\pi)+\ldots+c_{j+1}(\pi)}{\Delta-j}. \end{equation*} This yields our assertion. \end{proof} \noindent Due to \begin{equation*} m_{\chi_{\lambda}}^{(q)}=\frac{1}{n!}\sum_{\pi\in S_n}\chi_{\lambda}(\pi^q), \end{equation*} we obtain as immediate consequence of the preceding Lemma the following result. \begin{prop} \label{3-2-3} Let $\lambda\vdash n$ and $\mu\vdash\Delta$ be partitions with $\mu=\lambda\backslash\lambda_1$, and let $q\in\mathbb{N}$. Then we get \begin{equation*} m_{\chi_{\lambda}}^{(q)}=\frac{\chi_{\mu}(1)}{n!}\sum_{\pi\in S_n}\binom{c_1(\pi^q)}{\Delta}+\mathcal{O}\left(\frac{\chi_{\mu}(1)}{n!}\sum_{j=1}^{\Delta}\sum_{\pi\in S_n}\binom{c_1(\pi^q)+\ldots+c_{j+1}(\pi^q)}{\Delta-j}\right). \end{equation*} \end{prop} \noindent Now, we give the \begin{proof}[Proof of Theorem \ref{3-1-1}] We stated a formula with main and error term for $m_{\chi_{\lambda}}^{(q)}$ in Proposition \ref{3-2-3}. In section 5, we will evaluate the main term: see Proposition \ref{3-5-3}. In section 6, we will estimate the error term: cf. Lemma \ref{3-6-2}. Therefore, the proof of our Theorem is completed. Moreover, the error term in our Theorem is essentially optimal due to the Remark \ref{3-5-4}. \end{proof} \noindent In the next two sections, we shall establish some auxiliary results. \section{Combinatorics and number theory} In this section we review some results about Stirling numbers of the first and second kind as well as basic facts about the divisor function. \begin{defn} Let $n$ and $k$ be positive integers. The \emph{Stirling numbers of the second kind} $n\brace k$ count the number of ways to partition a set of $n$ labeled objects into $k$ nonempty unlabeled subsets. \end{defn} \begin{lem} \label{3-3-1} Let $n$ and $k$ be positive integers. 
\begin{compactenum}[1)] \item We have ${n\brace 2}=2^{n-1}-1$ and ${n\brace n-1}=\binom{n}{2}$. \item In addition, we state the recurrence \begin{equation*} k!{n\brace k}=k^n-\sum_{j=1}^{k-1}\frac{k!}{(k-j)!}{n\brace j}. \end{equation*} \end{compactenum} \end{lem} \begin{proof} 1) follows from the definition. For the proof of 2) see \cite[Theorem 7.2.6]{Mo1}. \end{proof} \noindent This recurrence yields an upper bound for $n\brace k$. Next, we would like to represent the ordinary powers $x^n$ by falling factorials $(x)_k:=x(x-1)\cdot\ldots\cdot (x-k+1)$. \begin{lem} \label{3-3-2} Let $n$ be a positive integer. Then the identity \begin{equation*} x^n=\sum_{k=1}^n {n\brace k}(x)_k \end{equation*} holds. \end{lem} \begin{proof} See for instance \cite[Formula (6.10)]{Gr1}. \end{proof} \noindent Now, we pay attention to the Stirling numbers of the first kind. \begin{defn} Let $n$ and $k$ be positive integers. The \emph{Stirling numbers of the first kind} $n\brack k$ count the number of ways to arrange $n$ objects into $k$ cycles. So $n\brack k$ equals the number of permutations of $n$ elements with exactly $k$ disjoint cycles. \end{defn} \begin{lem} \label{3-3-3} Let $n$ and $k$ be positive integers. \begin{compactenum}[1)] \item We have the recurrence \begin{equation*} {n\brack k}=(n-1){n-1\brack k}+{n-1\brack k-1}. \end{equation*} \item We obtain the estimate \begin{equation*} {n\brack k}\leqslant\frac{(n-1)!}{k!}\binom{n}{k-1}. \end{equation*} \end{compactenum} \end{lem} \begin{proof} 1) Cf. \cite[Formula (6.8)]{Gr1}.\\ 2) By induction over $n$: Obviously, the estimate is true for $n\leqslant k$. Applying the recurrence 1) yields for $n\geqslant k$ \begin{equation*} {n+1\brack k}\leqslant\frac{n!}{k!}\left(\binom{n}{k-1}+\frac{k}{n}\binom{n}{k-2}\right)\leqslant\frac{n!}{k!}\binom{n+1}{k-1}. \end{equation*} \end{proof} \noindent Stirling numbers of the first kind are (up to sign) the coefficients of ordinary powers that yield the falling factorial $(x)_n$. More precisely, we get \begin{lem} \label{3-3-4} Let $n$ be a positive integer. Then the identity \begin{equation*} (x)_n=\sum_{k=1}^n (-1)^{n-k}{n\brack k}x^k \end{equation*} holds. \end{lem} \begin{proof} See for instance \cite[Formula (6.13)]{Gr1}. \end{proof} \noindent Finally, we state results about the divisor function. \begin{lem} \label{3-3-5} \begin{compactenum}[1)] \item Let $\epsilon>0$. Then we have $~~~\sigma_0(q)\leqslant (2+\epsilon)^{\frac{\log q}{\log \log q}}~~~$ for all $q\geqslant q_0(\epsilon)$. \item $\sigma_1(q)\ll q\log\log q~~~$ for all $q\geqslant q_0$. \item Let $k\geqslant 2$. Then $~~~q^k\leqslant\sigma_k(q)\leqslant \zeta(2)q^k.$ \end{compactenum} \end{lem} \begin{proof} For 1) and 2) see \cite[Theorem 317 and Theorem 323]{Ha1}.\\ 3) The lower bound is obvious. The upper bound follows from the fact \begin{equation*} \sigma_k(q)=q^k\sum_{d|q}\frac{1}{d^k}. \end{equation*} \end{proof} \section{Statistics of the symmetric group} Müller and Schlage-Puchta \cite[Lemma 13]{Mu1} showed that, for $\pi\in S_n$ chosen at random, the distribution of $c_k(\pi)$ converges to a Poisson distribution with mean $\frac{1}{k}$ as $n\to\infty$. In addition, they proved that the mean of $\left(c_k(\cdot)\right)^m$ converges to $\sum_{s=1}^m{m\brace s}k^{-s}$ as $n\to\infty$. We generalize this result and make it more explicit. \begin{prop} \label{3-4-1} Let $k_1,\ldots,k_l$ be distinct positive integers and let $m_j\in\mathbb{N}$ for $j=1,\ldots,l$. 
Then \begin{equation*} \frac{1}{n!}\sum_{\pi\in S_n}\prod_{j=1}^l\left(c_{k_j}(\pi)\right)^{m_j}\leqslant \prod_{j=1}^l\left(\sum_{s=1}^{m_j}{m_j\brace s}k_j^{-s}\right). \end{equation*} If $~~\sum_{j=1}^l k_j m_j\leqslant n~~$ is fulfilled, then we have an equality. \end{prop} \begin{proof} 1) Let $k_1,\ldots,k_l$ be distinct positive integers and let $s_j\in\mathbb{N}$ for $j=1,\ldots,l$. We observe \begin{equation*} \sum_{\pi\in S_n}\prod_{j=1}^l\binom{c_{k_j}(\pi)}{s_j}= \begin{cases} 0&\textit{if }\sum_{j=1}^l k_j s_j>n\\ n!\left(\prod_{j=1}^l k_j^{s_j}s_j!\right)^{-1}&\textit{if }\sum_{j=1}^l k_j s_j\leqslant n. \end{cases} \end{equation*} You can see this equality as follows.\\ \textit{Case 1:} Let $~\sum_{j=1}^l k_j s_j>n.~~~$ Then there exists no $\pi\in S_n$ such that $c_{k_j}(\pi)\geqslant s_j$ for all $j=1,\ldots,l$. Therefore, the considered sum is equal to $0$.\\ \textit{Case 2:} Let $~\sum_{j=1}^l k_j s_j\leqslant n.~~~$ Then the left hand side of the equation is equal to the number of tuples $(\tau_1,\ldots,\tau_{l+1})$, which satisfy the following condition: There exists distinct, disjoint cycles $\sigma_{i j}$ and a non-negative integer $s_{l+1}$ such that $\tau_j=\sigma_{1j}\cdot\ldots\cdot\sigma_{s_jj}$ for all $j=1,\ldots,l+1$, the cycles $\sigma_{ij}$ have length $k_j$ for all $j=1,\ldots,l$ and $\prod_{j=1}^{l+1}\prod_{i=1}^{s_j}\sigma_{ij}$ is the cycle decomposition for a permutation from $S_n$. Finally, the number of these tuples is equal to \begin{align*} \frac{n!}{k_1^{s_1}s_1!(n-k_1s_1)!}&\frac{(n-k_1s_1)!}{k_2^{s_2}s_2!(n-k_1s_1-k_2s_2)!}\cdot\ldots\\ &\qquad\qquad\ldots\cdot\frac{\left(n-\sum_{j=1}^{l-1}k_js_j\right)!}{k_l^{s_l}s_l!\left(n-\sum_{j=1}^{l}k_js_j\right)!}\biggl(n-\sum_{j=1}^{l}k_js_j\biggr)! \end{align*} Canceling yields our assertion.\\ 2) It follows from 1) that \begin{equation*} \frac{1}{n!}\sum_{\pi\in S_n}\prod_{j=1}^l\left(c_{k_j}(\pi)\right)_{s_j}= \begin{cases} 0&\textit{if }\sum_{j=1}^l k_j s_j>n\\ \prod_{j=1}^l k_j^{-s_j} &\textit{if }\sum_{j=1}^l k_j s_j\leqslant n. \end{cases} \end{equation*} \noindent 3) Eventually, we compute the desired mean of a product of random variables $c_k(\cdot)$. Applying Lemma \ref{3-3-2} and 2) yields \begin{align*} \operatorname{E}\Biggl(\prod_{j=1}^l\left(c_{k_j}\right)^{m_j}\Biggr)&=\sum_{\substack{(s_1,\ldots,s_l)\\1\leqslant s_j\leqslant m_j}}\operatorname{E}\Biggl(\prod_{j=1}^l(c_{k_j})_{s_j}\Biggr)\prod_{i=1}^l{m_i\brace s_i}\\ &\leqslant \sum_{\substack{(s_1,\ldots,s_l)\\1\leqslant s_j\leqslant m_j}}\prod_{j=1}^l \biggl(k_j^{-s_j}{m_j\brace s_j}\biggr)\\ &=\prod_{j=1}^l\left(\sum_{s=1}^{m_j}{m_j\brace s}k_j^{-s}\right). \end{align*} Obviously, we have an equality if $~~\sum_{j=1}^l k_j m_j\leqslant n~~$ is fulfilled. \end{proof} \noindent Furthermore, Müller and Schlage-Puchta \cite[Formula (33)]{Mu1} established the following useful identity. \begin{lem} \label{3-4-2} Let $d$ and $q$ be positive integers and let $\pi\in S_n$. Then \begin{equation*} c_d(\pi^q)=\sum_{\substack{k\\k/(k,q)=d}}(k,q)c_k(\pi). \end{equation*} \end{lem} \section{The main term} We carry out the first step of the plan formulated in the proof at the end of section 2: We compute the main term in Proposition \ref{3-2-3}. At first, we draw our attention to the mean of a power of $c_1(\pi^q)$. \begin{lem} \label{3-5-1} Let $q\in\mathbb{N}$ be sufficiently large and let $\delta\in\mathbb{N}$ with $\delta\leqslant \frac{\log q}{\log 2}$. In addition, let $n\geqslant \delta q$ be an integer. 
Then we obtain \begin{align*} \frac{1}{n!}\sum_{\pi\in S_n}\Bigl(c_1(\pi^q)\Bigr)^{\delta}=\begin{cases} \sigma_0(q) &\text{if }\delta=1\\[0.5ex] \sigma_1(q)+(\sigma_0(q))^2 &\text{if }\delta=2\\[0.5ex] \sigma_2(q)+\mathcal{O}(\sigma_0(q)\sigma_1(q)) &\text{if }\delta=3\\[0.5ex] \sigma_{\delta-1}(q)+\mathcal{O}\Bigl(q^{\delta-2}\left(\delta\sigma_0(q)+2^{\delta}\right)\Bigr) &\text{if }\delta\geqslant 4. \end{cases} \end{align*} \par \noindent The $\mathcal{O}$-constant is universal. \end{lem} \begin{proof} 1) At first, we consider the case $\delta\in\{1,2\}$. We sketch the argument for $\delta=2$ (the case $\delta=1$ is similar). Using Lemma \ref{3-4-2} we get \begin{equation*} \frac{1}{n!}\sum_{\pi\in S_n}\Bigl(c_1(\pi^q)\Bigr)^2=\sum_{k|q}k^2\frac{1}{n!}\sum_{\pi\in S_n}\left(c_k(\pi)\right)^2+\sum_{\substack{(k_1,k_2)\\k_i|q\\k_1\neq k_2}}k_1k_2\frac{1}{n!}\sum_{\pi\in S_n}c_{k_1}(\pi)c_{k_2}(\pi). \end{equation*} Since $n\geqslant 2q$, it follows with Proposition \ref{3-4-1} that the considered mean is equal to $\sigma_1(q)+\left(\sigma_0(q)\right)^2$. This shows our claim for $\delta=2$.\par \noindent 2) We generalize this method for an arbitrary $\delta$. Let $n\geqslant \delta q$. Applying Lemma \ref{3-4-2} and Proposition \ref{3-4-1} yields \begin{equation*} \frac{1}{n!}\sum_{\pi\in S_n}\Bigl(c_1(\pi^q)\Bigr)^{\delta}=\sum_{l=1}^{\delta}\sum_{\{M_1,\ldots,M_l\}}\sum_{\substack{(k_1,\ldots,k_l)\\k_i|q\\ k_i\neq k_j~(i\neq j)}}\prod_{j=1}^l\sum_{s=1}^{|M_j|}{|M_j|\brace s}k_j^{|M_j|-s}, \end{equation*} where the second sum on the right is over all set partitions of $\{1,\ldots\delta\}$ in exactly $l$ sets $M_1,\ldots,M_l$.\par \noindent 3) We direct our attention to \begin{equation*} T_m:=\sum_{k|q}\sum_{s=1}^m{m\brace s}k^{m-s}=\sum_{s=1}^m{m\brace s}\sigma_{m-s}(q). \end{equation*} \textit{Let $q$ be sufficiently large and $m\leqslant \frac{\log q}{\log 2}$. Then} \begin{equation*} T_m\leqslant \begin{cases} \sigma_0(q)&\textit{if } m=1\\ \sigma_0(q)+\sigma_1(q)&\textit{if } m=2\\ 3q^{m-1}&\textit{if } m\geqslant 3. \end{cases} \end{equation*} \textit{In particular, we have} $T_2\leqslant q\sigma_0(q)$.\\ You can see this estimate as follows: The cases $m=1$ and $m=2$ are obvious. Let $m\geqslant 3$. It results from Lemma \ref{3-3-1} and Lemma \ref{3-3-5} for a constant $C>0$ \begin{equation*} T_m\leqslant Cm^2q\log\log q+\zeta(2)q^{m-1}\sum_{s=1}^{m-2}{m\brace s}q^{-s+1}. \end{equation*} Applying the estimate ${m\brace s}\leqslant s^m (s!)^{-1}$ (see Lemma \ref{3-3-1}) we find that \begin{equation*} \sum_{s=1}^{m-2}{m\brace s}q^{-s+1}\leqslant e-1 \end{equation*} \noindent This proves our assertion. \par \noindent 4) \textit{Let $q$ be sufficiently large and $\delta\leqslant\frac{\log q}{\log 2}$. Then, for positive integers $m_i$ such that $m_1+\ldots+m_l=\delta$, we have} \begin{equation*} \prod_{j=1}^lT_{m_j}\leqslant q^{\delta-l}\Bigl(\max\{3,\sigma_0(q)\}\Bigr)^l. \end{equation*} This results immediately from 3). 
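As a numerical illustration of the bound in 3), not needed for the argument: for $q=60$ and $m=3$ one finds $T_3={3\brace 1}\sigma_2(60)+{3\brace 2}\sigma_1(60)+{3\brace 3}\sigma_0(60)=5460+504+12=5976\leqslant 3\cdot 60^{2}=10800$.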
\par \noindent 5) Using the outcomes of step 2) and 4), we obtain \begin{align*} \frac{1}{n!}\sum_{\pi\in S_n}\Bigl(c_1(\pi^q)\Bigr)^{\delta}&=\sum_{k|q}\sum_{s=1}^{\delta}{\delta\brace s}k^{\delta-s}+\mathcal{O}\left(\sum_{l=2}^{\delta}\sum_{\{M_1,\ldots,M_l\}}\prod_{j=1}^l T_{|M_j|}\right)\\[1.3ex] &=\sigma_{\delta-1}(q)+\mathcal{O}\Bigl(F_1+F_2+F_3\Bigr), \end{align*} where \begin{align*} F_1&:=\sum_{s=2}^{\delta}{\delta\brace s}\sigma_{\delta-s}(q),\\ F_2&:=\sum_{\{M_1,M_2\}}T_{|M_1|}T_{|M_2|},\\ F_3&:=\sum_{l=3}^{\delta}{\delta\brace l}q^{\delta-l}\Bigl(\operatorname{max}\{3,\sigma_0(q)\}\Bigr)^l. \end{align*} The sum in $F_2$ is over all set partitions of $\{1,\ldots,\delta\}$ in exactly two sets $M_1, M_2$.\\ Therefore, we realize the expected main term. We shall show that the error term is sufficiently small. \par \noindent 6) For the rest of the proof let $3\leqslant\delta\leqslant\frac{\log q}{\log 2}$. Lemma \ref{3-3-5} and Lemma \ref{3-3-1} yield \begin{equation*} F_1\ll \begin{cases} \sigma_1(q) &\textit{if }\delta=3\\ 2^{\delta}q^{\delta-2} &\textit{if }\delta\geqslant 4 \end{cases} \end{equation*} and \begin{equation*} F_3\ll2^{\delta}q^{\delta-2}, \end{equation*} which is sufficiently small. \par \noindent 7) Finally, we examine $F_2$. The term $F_2$ determines the order of the error term. More precisely, we get \begin{align*} F_2&\leqslant \binom{\delta}{1}T_1T_{\delta-1}+\binom{\delta}{2}T_2T_{\delta-2}+\sum_{\substack{\{M_1,M_2\}\\|M_i|\geqslant3}}T_{|M_1|}T_{|M_2|}\\ &\ll \begin{cases} \sigma_0(q)\sigma_1(q)&\textit{if }\delta=3\\ \sigma_0(q)q^2&\textit{if }\delta=4\\ q^{\delta-2}\left(\delta\sigma_0(q)+2^{\delta}\right)&\textit{if }\delta\geqslant 5. \end{cases} \end{align*} In the last estimate, we used the outcome of step 3). In addition, we applied the inequality $(\sigma_1(q))^2\leqslant\frac{9}{8}\sigma_0(q)q^2$ for $\delta=4$ and the fact that $\delta^2\sigma_1(q)q^{-1}\leqslant\delta^2\left(\log\sigma_0(q)+1\right)\ll\max\{2^{\delta},\delta\sigma_0(q)\}$ for the case $\delta\geqslant 5$. Therefore, the proof is completed. \end{proof} \noindent Our next aim is to show that the result stated in Lemma \ref{3-5-1} is essentially optimal. \begin{lem} \label{3-5-2} Let $q$ be prime and let $\delta\leqslant \frac{\log q}{\log 2}$ be a positive integer. In addition, let $n\geqslant \delta q$ be an integer. Then we get \begin{align*} \frac{1}{n!}\sum_{\pi\in S_n}\Bigl(c_1(\pi^q)\Bigr)^{\delta}=\begin{cases} \sigma_0(q) &\textit{if }\delta=1\\[0.5ex] \sigma_1(q)+(\sigma_0(q))^2 &\textit{if }\delta=2\\[0.5ex] \sigma_{\delta-1}(q)+q^{\delta-2}\left(\delta+2^{\delta-1}-1\right)+\mathcal{O}\left(3^{\delta} q^{\delta-3}\right) &\textit{if }\delta\geqslant 3. \end{cases} \end{align*} \par \noindent The $\mathcal{O}$-constant is universal. \end{lem} \begin{proof} 1) For $\delta\in\{1,2\}$ see Lemma \ref{3-5-1}. So let $\delta\geqslant3$. 
Since $q$ is prime, Lemma \ref{3-4-2} and Proposition \ref{3-4-1} yield \begin{align*} \frac{1}{n!}\sum_{\pi\in S_n}\Bigl(c_1(\pi^q)\Bigr)^{\delta}&=\frac{1}{n!}\sum_{\pi\in S_n}\sum_{k=0}^{\delta}\binom{\delta}{k}(c_1(\pi))^{\delta-k}(qc_q(\pi))^k\\ &=\sigma_{\delta-1}(q)+q^{\delta-2}(\delta+2^{\delta-1}-1)+F_1+F_2+F_3, \end{align*} where \begin{align*} F_1:=&\sum_{s=2}^{\delta}{\delta\brace s}+\sum_{t=3}^{\delta}{\delta\brace t}q^{\delta-t},\\ F_2:=&\sum_{k=1}^{\delta-2}\binom{\delta}{k}\left(\sum_{s=1}^{\delta-k}{\delta-k\brace s}\right)\left(\sum_{t=1}^k{k\brace t}q^{k-t}\right),\\ F_3:=&\binom{\delta}{\delta-1}\sum_{t=2}^{\delta-1}{\delta-1\brace t}q^{\delta-1-t}. \end{align*} So we found the expected main term. We shall show that $F_1,F_2$ and $F_3$ can be absorbed into the error term. \par \noindent 2) Taking into account that $3\leqslant\delta\leqslant\frac{\log q}{\log2}$ and ${\delta\brace t}\leqslant t^{\delta}(t!)^{-1}$, we obtain \begin{equation*} F_1\ll{\delta\brace 2}+q^{\delta-3}\left({\delta\brace 3}+\sum_{t=4}^{\delta}{\delta\brace t}q^{3-t}\right)\ll3^{\delta}q^{\delta-3}, \end{equation*} and \begin{equation*} F_2\ll\sum_{k=1}^{\delta-2}\binom{\delta}{k}\left(\sum_{s=1}^{\delta-k}{\delta-k\brace s}\right)q^{k-1}\ll q^{\delta-3}\sum_{k=1}^{\delta-2}\binom{\delta}{k}\leqslant 2^{\delta}q^{\delta-3} \end{equation*} as well as \begin{equation*} F_3\ll2^{\delta}q^{\delta-3}\delta. \end{equation*} So we are done. \end{proof} \begin{prop} \label{3-5-3} Let $q\in\mathbb{N}$ be sufficiently large and let $\Delta\in\mathbb{N}$ with $\Delta\leqslant \frac{\log q}{\log 2}$. In addition, let $n\geqslant \Delta q$ be an integer. Then \begin{align*} \frac{1}{n!}\sum_{\pi\in S_n}\binom{c_1(\pi^q)}{\Delta}= \begin{cases} \sigma_0(q)&\textit{if }\Delta=1\\[0.5ex] \frac{1}{2}\sigma_1(q)+\mathcal{O}\left((\sigma_0(q))^2\right) &\textit{if }\Delta=2\\[0.5ex] \frac{1}{6}\sigma_2(q)+\mathcal{O}(\sigma_0(q)\sigma_1(q)) &\textit{if }\Delta=3\\[0.5ex] \frac{1}{\Delta!}\sigma_{\Delta-1}(q)+\mathcal{O}\left(\frac{1}{\Delta!}q^{\Delta-2}\left(\Delta\sigma_0(q)+2^{\Delta}\right)\right) &\textit{if }\Delta\geqslant 4. \end{cases} \end{align*} \par \noindent The $\mathcal{O}$-constant is universal. \end{prop} \begin{proof} 1) It follows from Lemma \ref{3-3-4} that \begin{equation*} \Delta!\frac{1}{n!}\sum_{\pi\in S_n}\binom{c_1(\pi^q)}{\Delta}=\frac{1}{n!}\sum_{\pi\in S_n}(c_1(\pi^q))^{\Delta}+\mathcal{O}(F), \end{equation*} where \begin{equation*} F:=\sum_{\delta=1}^{\Delta-1}{\Delta\brack\delta}\frac{1}{n!}\sum_{\pi\in S_n}(c_1(\pi^q))^{\delta}. \end{equation*} The main term in the above formula is the mean we computed in Lemma \ref{3-5-1}. Applying this Lemma, we get the expected main and error term. Therefore, it only remains to be examined whether the error term $F$ is sufficiently small. \par \noindent 2) Before we analyze the error term, we give two technical estimates. \\ \textit{Let $\Delta\leqslant \frac{\log q}{\log 2}$. Then Lemma \ref{3-3-3} yields:}\\ \textit{i) For $\Delta\geqslant 3$ and $\delta\in\{1,2\}$ we have ${\Delta\brack\delta}\ll q^{\Delta-3}.$}\\ \textit{ii) For $\Delta\geqslant 4$ and $1\leqslant\delta\leqslant\Delta-1$ we get ${\Delta\brack\delta}\leqslant\Delta^2 q^{\Delta-\delta-1}.$ } \par \noindent 3) Now we estimate the error term $F$. For $\Delta=1$ we obtain $F=0$. For $\Delta=2$ Lemma \ref{3-5-1} yields, that $F=\sigma_0(q)$. So let $\Delta\geqslant 3$. 
Applying Lemma \ref{3-5-1}, Lemma \ref{3-3-5} and the upper bounds of step 2) we get \begin{equation*} F\ll q^{\Delta-3}\sigma_1(q)+\sum_{\delta=3}^{\Delta-1}\Delta^2q^{\Delta-\delta-1}\sigma_{\delta-1}(q)\leqslant q^{\Delta-2}\left(\frac{\sigma_1(q)}{q}+\Delta^3\right), \end{equation*} which is sufficiently small. \end{proof} \begin{rem} \label{3-5-4} The error term in the preceding Proposition is essentially optimal: Confer Lemma \ref{3-5-2} and step 1) in the above proof. \end{rem} \section{The error term} We implement the second step of our plan: We compute the error term in Proposition \ref{3-2-3}. \begin{lem} \label{3-6-1} Let $q\in\mathbb{N}$ be sufficiently large and let $1\leqslant r\leqslant \exp(q^{1/3})$. In addition let $\delta\in\mathbb{N}$ with $\delta\leqslant \frac{\log q}{\log 2}$ and $n$ be a positive integer. Then \begin{equation*} \frac{1}{n!}\sum_{\pi\in S_n}\left(\sum_{d=1}^rc_d(\pi^q)\right)^{\delta}\ll \begin{cases} \sigma_0(q)H_r &\text{if }\delta=1\\[0.5ex] (\sigma_0(q))^2H_r^2+\sigma_1(q)H_r &\text{if }\delta=2\\[0.5ex] q^2H_r &\text{if }\delta=3\\[0.5ex] q^{\delta-1}H_r^2 &\text{if }\delta\geqslant 4, \end{cases} \end{equation*} where $H_r:=\sum_{d=1}^r\frac{1}{d}$. The $\mathcal{O}$-constant is universal. \end{lem} \begin{proof} 1) Similar to step 2) in the proof of Lemma \ref{3-5-1}, we obtain \begin{equation*} \frac{1}{n!}\sum_{\pi\in S_n}\left(\sum_{d=1}^rc_d(\pi^q)\right)^{\delta}\leqslant \sum_{l=1}^{\delta}\sum_{\{M_1,\ldots,M_l\}}\prod_{j=1}^l\left(\sum_{\substack{k\\k\leqslant (k,q)r}}(k,q)^{|M_j|}\sum_{s=1}^{|M_j|}{|M_j|\brace s}k^{-s}\right), \end{equation*} where the second sum on the right is over all set partitions of $\{1,\ldots\delta\}$ in exactly $l$ sets $M_1,\ldots,M_l$.\par \noindent 2) We draw our attention to \begin{align*} \sum_{\substack{k\\k\leqslant (k,q)r}}(k,q)^{m}\sum_{s=1}^{m}{m\brace s}k^{-s}&=\sum_{d=1}^r\sum_{s=1}^{m}\frac{1}{d^s}{m\brace s}\sum_{\substack{k\\k=(k,q)d}}\left(\frac{k}{d}\right)^{m-s}\\ &\leqslant\sum_{d=1}^r\frac{1}{d}\sum_{s=1}^m{m\brace s}\sigma_{m-s}(q)\\[0.6ex] &=H_rT_m, \end{align*} where $T_m$ is defined as in step 3) in the proof of Lemma \ref{3-5-1}. \par \noindent 3) The previous considerations in combination with Step 4) in the proof of Lemma \ref{3-5-1} yield \begin{align*} \frac{1}{n!}&\sum_{\pi\in S_n}\left(\sum_{d=1}^rc_d(\pi^q)\right)^{\delta}\\ &\leqslant H_r T_{\delta}+H_r^2\sum_{\{M_1,M_2\}}T_{|M_1|}T_{|M_2|}+\sum_{l=3}^{\delta}H_r^l{\delta\brace l}q^{\delta-l}\Bigl(\max\{3,\sigma_0(q)\}\Bigr)^l. \end{align*} Taking into account that $H_r\leqslant \log r+1$, our claim follows from step 3) and 7) in the proof of Lemma \ref{3-5-1} and from the estimates in Lemma \ref{3-3-1} and \ref{3-3-5}. \end{proof} \begin{lem} \label{3-6-2} Let $q\in\mathbb{N}$ be sufficiently large and let $\Delta\in\mathbb{N}$ with $\Delta\leqslant \frac{\log q}{\log 2}$. In addition, let $n$ be a positive integer. Then \begin{equation*} \frac{1}{n!}\sum_{j=1}^{\Delta}\sum_{\pi\in S_n}\binom{c_1(\pi^q)+\ldots+c_{j+1}(\pi^q)}{\Delta-j}\ll \begin{cases} 1 &\text{if } \Delta=1\\[0.1ex] \sigma_0(q) &\text{if } \Delta=2\\[0.1ex] \sigma_1(q) &\text{if } \Delta=3\\[0.1ex] \frac{1}{(\Delta-1)!}q^{\Delta-2} &\text{if }\Delta\geqslant 4. \end{cases} \end{equation*} The $\mathcal{O}$-constant is universal. \end{lem} \begin{proof} 1) The case $\Delta=1$ is obvious. So let $\Delta\geqslant 2$. 
For $1\leqslant i\leqslant \Delta$ consider \begin{align*} Q(i,\Delta):&=\frac{i!}{n!}\sum_{\pi\in S_n}\binom{c_1(\pi^q)+\ldots+c_{\Delta-i+1}(\pi^q)}{i}\\ &=\sum_{\delta=1}^i (-1)^{i-\delta}{i\brack\delta}\frac{1}{n!}\sum_{\pi\in S_n}\left(\sum_{d=1}^{\Delta-i+1}c_d(\pi^q)\right)^{\delta}. \end{align*} The above equality is true due to Lemma \ref{3-3-4}. It follows with step 2) in the proof of Proposition \ref{3-5-3} and with Lemma \ref{3-6-1} that \begin{equation*} Q(i,\Delta)\ll \begin{cases} \sigma_0(q)H_{\Delta} &\textit{if }i=1\\ \sigma_1(q)H_{\Delta-1}^2&\textit{if }i=2\\ q^{i-1}H_{\Delta-i+1}^2&\textit{if }i\geqslant 3. \end{cases} \end{equation*} In the case $i=2$ we also used the estimate $(\sigma_0(q))^2\leqslant 2\sigma_1(q)$. \par \noindent 2) Finally, we look at \begin{equation*} R_{\Delta}:=\frac{1}{n!}\sum_{j=1}^{\Delta}\sum_{\pi\in S_n}\binom{c_1(\pi^q)+\ldots+c_{j+1}(\pi^q)}{\Delta-j}=1+\sum_{i=1}^{\Delta-1}\frac{Q(i,\Delta)}{i!}. \end{equation*} Due to the result of step 1), we find that \begin{equation*} R_{\Delta}\ll \begin{cases} \sigma_0(q) &\textit{if }\Delta=2\\ \sigma_1(q) &\textit{if }\Delta=3\\ \frac{1}{(\Delta-1)!}q^{\Delta-2} &\textit{if }\Delta\geqslant 4. \end{cases} \end{equation*} So we are done. \end{proof} \par \noindent \textbf{Author information}\\ Stefan-Christoph Virchow, Institut für Mathematik, Universität Rostock\\ Ulmenstr. 69 Haus 3, 18057 Rostock, Germany\\ E-mail: [email protected] \end{document}
\begin{document} \title{A block lower triangular preconditioner for a class of complex symmetric system of linear equations} \author{{ Davod Khojasteh Salkuyeh and Tahereh Salimi Siahkalaei}\\[2mm] \textit{{\small Faculty of Mathematical Sciences, University of Guilan, Rasht, Iran}} \\ \textit{{\small E-mails: [email protected], [email protected]}}} \date{} \maketitle \noindent{\bf Abstract.} We present a block lower triangular (BLT) preconditioner to accelerate the convergence of Krylov subspace iterative methods, such as the generalized minimal residual (GMRES) method, for solving a broad class of complex symmetric systems of linear equations. We analyze the eigenvalue distribution of the preconditioned coefficient matrix. Numerical experiments are given to demonstrate the effectiveness of the BLT preconditioner. \\[-3mm] \noindent{\bf AMS subject classifications}: 65F10, 65F50, 65F08.\\ \noindent{\bf Keywords}: {complex linear systems, block lower triangular, symmetric positive definite, preconditioning, GMRES, GSOR, MHSS.}\\ \pagestyle{myheadings}\markboth{D.K. Salkuyeh, and T. Salimi}{BLT preconditioner for complex symmetric linear systems} \thispagestyle{empty} \section{Introduction}\rm Consider the complex system of linear equations of the form \begin{equation}\label{Eq1} Au=b, \end{equation} where \[ A=W+iT,\quad W,T \in \mathbb{R}^{n \times n}, \] in which $u=x+iy$ and $b=p+iq$, such that the vectors $x,y,p$ and $q$ are in $\mathbb{R}^{n}$ and $i=\sqrt{-1}$. We assume that $W$ and $T$ are symmetric positive semidefinite matrices such that at least one of them, e.g., $W$, is positive definite. Complex symmetric linear systems of this kind arise in many important problems in scientific computing and engineering applications, for example, the FFT-based solution of certain time-dependent PDEs \cite{Bertaccini}, diffuse optical tomography \cite{Arridge}, algebraic eigenvalue problems \cite{Moro, Schmitt}, molecular scattering \cite{Poirier}, structural dynamics \cite{Feriani} and lattice quantum chromodynamics \cite{Frommer}. In recent years, there have been many works on solving (\ref{Eq1}), and several iterative methods have been presented in the literature. For example, based on the Hermitian and skew-Hermitian splitting of the matrix $A$, Bai et al. in \cite{Bai1} introduced the Hermitian/skew-Hermitian splitting (HSS) method to solve positive definite systems of linear equations. Next, Bai et al. presented a modified version of the HSS iterative method (MHSS) \cite{Bai2} to solve systems of the form (\ref{Eq1}). The matrix $A$ naturally possesses the Hermitian/skew-Hermitian (HS) splitting \begin{equation}\label{Eq2} A=H+S, \end{equation} where \[ H=\frac{1}{2}(A+A^H)=W, \quad S=\frac{1}{2}(A-A^H)=iT, \] with $A^H$ being the conjugate transpose of $A$. In this case, the HSS and MHSS methods to solve \eqref{Eq1} can be written as follows. \noindent {\bf The HSS method}: \verb"Let" $u^{(0)} \in \mathbb{C}^{n}$ \verb"be an initial guess". \verb"For" $k=0,1,2,\ldots$, \verb"until" $\{u^{(k)}\}$ \verb"converges", \verb"compute" ${u^{(k+1)}}$ \verb"according to the" \verb"following sequence": \begin{equation}\label{Eq3} \begin{cases} (\alpha I+W){u^{(k+\frac{1}{2})}}=(\alpha I-iT){u^{(k)}}+b, \\ (\alpha I+iT){u^{(k+1)}}=(\alpha I-W){u^{(k+\frac{1}{2})}}+b, \end{cases} \end{equation} \verb"where" $\alpha$ \verb"is a given positive constant and" $I$ \verb"is the identity matrix".
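To make the two half-steps of \eqref{Eq3} concrete, the following short Python/NumPy sketch (an illustration only, not taken from the paper: it uses dense solves and a simple residual-based stopping test, whereas a practical code would factor $\alpha I+W$ and $\alpha I+iT$ once and reuse the factorizations) carries out the HSS iteration.
\begin{verbatim}
import numpy as np

def hss(W, T, b, alpha, u0, maxit=500, tol=1e-10):
    # Illustrative dense HSS iteration for (W + iT) u = b, cf. Eq. (3).
    n = W.shape[0]
    I = np.eye(n)
    A = W + 1j * T
    u = u0.astype(complex)
    for _ in range(maxit):
        # half-step 1: (alpha*I + W) u_half = (alpha*I - i*T) u + b
        u_half = np.linalg.solve(alpha * I + W, (alpha * I - 1j * T) @ u + b)
        # half-step 2: (alpha*I + i*T) u_new = (alpha*I - W) u_half + b
        u = np.linalg.solve(alpha * I + 1j * T, (alpha * I - W) @ u_half + b)
        if np.linalg.norm(A @ u - b) <= tol * np.linalg.norm(b):
            break
    return u
\end{verbatim}
Each sweep costs two linear solves, one with the real SPD matrix $\alpha I+W$ and one with the complex matrix $\alpha I+iT$; it is this second, complex solve that the MHSS variant below replaces by a solve with the real SPD matrix $\alpha I+T$.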
The HSS iteration can be reformulated into the standard form $$ u^{(k+1)}=G_{\alpha}u^{(k)}+C_{\alpha}b, \quad k=0,1,2,\ldots, $$ where $$ G_{\alpha}=(\alpha I+iT)^{-1}(\alpha I-W)(\alpha I+W)^{-1}(\alpha I-iT), $$ and $$ C_{\alpha}=2\alpha(\alpha I+iT)^{-1}(\alpha I+W)^{-1}. $$ \noindent Assumming $$ M_{\alpha}=\frac{1}{2\alpha}(\alpha I+W)(\alpha I+iT), \quad N_{\alpha}=\frac{1}{2\alpha}(\alpha I-W)(\alpha I-iT), $$ it holds that $$ A=M_{\alpha}-N_{\alpha}, \quad G_{\alpha}={M_{\alpha}}^{-1}N_{\alpha}. $$ Hence, it follows that the matrix $M_{\alpha}$ can be used as a preconditioner for the complex symmetric system \eqref{Eq1}. Note that the multiplicative factor $1/(2\alpha)$ has no effect on the preconditioned system and therefore it can be dropped. Hence, the HSS preconditioner can be considered as $P_{\alpha}=(\alpha I+W)(\alpha I+iT)$. \noindent {\bf The MHSS method}: \verb"Let" $u^{(0)} \in {\mathbb{C }^{n}}$ \verb"be an initial guess". \verb"For" $k=0,1,2,\ldots$, until $\{u^{(k)}\}$ \verb"converges, compute" ${u^{(k+1)}}$ \verb"according to the following sequence": \begin{equation}\label{Eq4} \begin{cases} (\alpha I+W){u^{(k+\frac{1}{2})}}=(\alpha I-iT){u^{(k)}}+b, \\ (\alpha I+T){u^{(k+1)}}=(\alpha I+iW){u^{(k+\frac{1}{2})}}-ib, \end{cases} \end{equation} \verb"where" $\alpha$ \verb"is a given positive constant and" $I$ \verb"is the identity matrix". In \cite{Bai2}, it has been shown that if $W$ and $T$ are symmetric positive definite and symmetric positive semidefinite, respectively, then the MHSS iterative method is convergent. Since $\alpha I+T$ and $\alpha I+W$ are symmetric positive definite, the involving sub-system in each step of the MHSS iteration can be solved exactly by using the Cholesky factorization of the coefficient matrices or inexactly by the conjugate gradient (CG) method. Similar to the HSS method, Eq. (\ref{Eq4}) can be written in the stationary form $$ u^{(k+1)}=L_{\alpha}u^{(k)}+R_{\alpha}b, \quad k=0,1,2,\ldots, $$ where $$ L_{\alpha}=(\alpha I+T)^{-1}(\alpha I+iW)(\alpha I+W)^{-1}(\alpha I-iT), $$ and $$ R_{\alpha}=(1-i)\alpha(\alpha I+T)^{-1}(\alpha I+W)^{-1}. $$ Then, $A=F_{\alpha}-H_{\alpha}$ and $L_{\alpha}={F_{\alpha}}^{-1}H_{\alpha}$, where \[ F_{\alpha}=\frac{1+i}{2\alpha}(\alpha I+W)(\alpha I+T),\quad H_{\alpha}=\frac{1+i}{2\alpha}(\alpha I+iW)(\alpha I-iT). \] Therefore matrix $Q_{\alpha}=(\alpha I+W)(\alpha I+T)$ can be used as the MHSS preconditioner. It is possible to convert the complex system (\ref{Eq1}) to the real-valued form \begin{equation}\label{Eq5} \mathcal{A} u= \begin{bmatrix} W & -T \\ T & W \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} p \\ q \end{bmatrix}. \end{equation} \noindent Under our hypotheses, it can be easily proved that the matrix $\mathcal{A}$ is nonsingular. Recently, Salkuyeh et al. in \cite{Salkuyeh1} solved system (\ref{Eq5}) by the generalized successive overrelaxation (GSOR) iterative method. They split the coefficient matrix of the system (\ref{Eq5}) as $$ \mathcal{A}=\mathcal{D}-\mathcal{L}-\mathcal{U}, $$ where $$ \mathcal{D}= \begin{bmatrix} W & 0 \\ 0 & W \end{bmatrix},\quad \mathcal{L}= \begin{bmatrix} 0 & 0 \\ -T & 0 \end{bmatrix}, \quad \mathcal{U}= \begin{bmatrix} 0 & T \\ 0 & 0 \end{bmatrix}. 
$$ So, for $0\neq \alpha \in \mathbb{R}$, they constructed the GSOR method as $$ \begin{bmatrix} x^{k+1}\\ y^{k+1} \end{bmatrix}=\mathcal{G}_{\alpha} \begin{bmatrix} x^{k}\\ y^{k} \end{bmatrix}+\mathcal{C}_{\alpha} \begin{bmatrix} p\\ q \end{bmatrix}, $$ where $$ \mathcal{G}_{\alpha}={(\mathcal{D}-\alpha \mathcal{L})}^{-1}((1-\alpha)\mathcal{D}+\alpha \mathcal{U})= \begin{bmatrix} W & 0\\ \alpha T & W \end{bmatrix}^{-1} \begin{bmatrix} (1-\alpha)W & \alpha T\\ 0 & (1-\alpha)W \end{bmatrix}, $$ and $$ \mathcal{C}_{\alpha}=\alpha(\mathcal{D}-\alpha \mathcal{L})^{-1}=\alpha \begin{bmatrix} W & 0\\ \alpha T & W \end{bmatrix}^{-1}. $$ After some simplifications, the GSOR method can be written as follows. \noindent {\bf The GSOR iteration method}: \verb"Let" $(x^{(0)}; y^{(0)}) \in {\mathbb{R }^{2n}}$ \verb"be an initial guess. For" $k=0,1,2,\ldots$, \verb"until" $\{(x^{(k)};y^{(k)})\}$ \verb"converges, compute" ${(x^{(k+1)};y^{(k+1)})}$ \verb"according to the" \\ \verb"following sequence" \begin{equation}\label{Eq6} \begin{cases} W x^{(k+1)}=(1-\alpha)Wx^{(k)}+\alpha T y^{(k)}+\alpha p, \\ Wy^{(k+1)}=-\alpha T x^{(k+1)}+(1-\alpha)W y^{(k)}+ \alpha q, \end{cases} \end{equation} \verb"where" $\alpha$ \verb"is a given positive constant". In \cite{Salkuyeh1}, it has been shown that if $W$ and $T$ are symmetric positive definite and symmetric, respectively, then the GSOR method is convergent for suitable values of $\alpha$. Letting $$ \mathcal{M}_{\alpha}=\frac{1}{\alpha}(\mathcal{D}-\alpha \mathcal{L}), \quad \mathcal{N}_{\alpha}=\frac{1}{\alpha}((1-\alpha)\mathcal{D}+\alpha \mathcal{U}), $$ it holds that $$ \mathcal{A}=\mathcal{M}_{\alpha}-\mathcal{N}_{\alpha}, \quad \mathcal{G}_{\alpha}={\mathcal{M}_{\alpha}}^{-1}\mathcal{N}_{\alpha}. $$ Hence $$ \mathcal{M}_{\alpha}=\frac{1}{\alpha}(\mathcal{D}-\alpha \mathcal{L}) $$ can be used as a preconditioner for the system \eqref{Eq5}. As usual, the multiplicative factor $1/\alpha$ can be neglected. In this paper we propose a block lower triangular (BLT) preconditioner for the system \eqref{Eq5} and investigate the eigenvalue distribution of the preconditioned coefficient matrix. Then the preconditioned system is solved by the restarted version of the GMRES (GMRES($m$)) method. The rest of the paper is organized as follows. In Section \ref{Sec2} our BLT preconditioner is introduced and its properties are investigated. Section \ref{Sec3} is devoted to some numerical experiments that show the effectiveness of the BLT preconditioner. Finally, some concluding remarks are given in Section \ref{Sec4}. \section{The new block lower triangular preconditioner} \label{Sec2} We introduce a preconditioner of the form \begin{equation}\label{Eq7} \mathcal{G}_{\alpha}= \begin{bmatrix} W & 0\\ \alpha I & W \end{bmatrix}, \end{equation} for the system \eqref{Eq5} and propose using a Krylov subspace method, such as GMRES, to accelerate the convergence of the iteration, where $\alpha>0$. It is known that for SPD problems the rate of convergence of CG depends on the distribution of the eigenvalues of the system coefficient matrix. For nonsymmetric problems the eigenvalues may not describe the convergence of nonsymmetric matrix iterations like GMRES. Nevertheless, a clustered spectrum (away from 0) often results in rapid convergence \cite{Benzi3,Cao,BaiCAM}. Therefore, we need to examine the eigenvalue distribution of the matrix $\mathcal{G}_{\alpha}^{-1}\mathcal{A}$.
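\noindent Before analyzing the spectrum, the following \textsc{Matlab} sketch shows how $\mathcal{G}_{\alpha}$ is applied in practice; it is included for illustration only (the test problem, the function names and the value of $\alpha$ are ours and are not part of the experiments reported in Section \ref{Sec3}). A single Cholesky factorization of $W$ is reused for both block solves inside GMRES(5):
\begin{verbatim}
function blt_demo
% Illustrative only: G_alpha = [W 0; alpha*I W] as a preconditioner
% for the real system (5), applied within restarted GMRES.
m = 100;  h = 1/(m+1);
W = (1/h^2)*spdiags(ones(m,1)*[-1 2 -1], -1:1, m, m) + speye(m);  % SPD
T = speye(m);                                                     % SPSD
A = [W -T; T W];
b = A*ones(2*m,1);                      % manufactured right-hand side
alpha = 1;                              % illustrative parameter value
R = chol(W);                            % W = R'*R, reused in both solves
mfun = @(r) blt_apply(r, R, alpha, m);  % returns G_alpha \ r
[u,flag,relres,iter] = gmres(A, b, 5, 1e-10, 500, mfun);
end

function z = blt_apply(r, R, alpha, m)
% block forward substitution with the lower triangular G_alpha
z1 = R \ (R' \ r(1:m));
z2 = R \ (R' \ (r(m+1:2*m) - alpha*z1));
z  = [z1; z2];
end
\end{verbatim}
\noindent Each application of $\mathcal{G}_{\alpha}^{-1}$ thus costs only two solves with $W$ (four triangular solves), which is what makes the BLT preconditioner inexpensive.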
\begin{lem}\label{Lem1} Let $W\in \mathbb{R }^{n \times n}$ be symmetric positive definite and $T\in \mathbb{R}^{n \times n}$ be symmetric positive semidefinite. Let also $\lambda_k$ be an eigenvalue of the preconditioned matrix $\mathcal{G}_{\alpha}^{-1} \mathcal{A}$, and $u_k=(x_k;y_k)$ be the corresponding eigenvector. If $y_k=0$, then $\lambda_k=1$. \end{lem} \begin{proof} We have $\mathcal{G}_{\alpha}^{-1}\mathcal{A}u_k=\lambda_k u_k$, or \[ \begin{bmatrix} W & -T \\ T & W \end{bmatrix} \begin{bmatrix} x_k \\ y_k \end{bmatrix}=\lambda_k \begin{bmatrix} W & 0\\ \alpha I & W \end{bmatrix} \begin{bmatrix} x_k \\ y_k \end{bmatrix}, \] which is itself equivalent to \begin{equation}\label{Eq8} \begin{cases} Wx_k-Ty_k=\lambda_k Wx_k, \\ Tx_k+Wy_k=\alpha \lambda_k x_k+\lambda_k Wy_k. \end{cases} \end{equation} If $y_k=0$, then from the first equation of \eqref{Eq8} we get $(1-\lambda_k)Wx_k=0$. Since $W$ is symmetric positive definite, we see that $x_k=0$ or $\lambda_k=1$. If $x_k=0$ then $u_k=0$, which is a contradiction. Hence $\lambda_k=1$. \end{proof} \begin{lem}\label{Lem2} Let $W\in \mathbb{R }^{n \times n}$ and $T \in \mathbb{R }^{n \times n}$ be symmetric positive definite and symmetric positive semidefinite, respectively, and $(\lambda_k,u_k=(x_k;y_k))$ be an eigenpair of the preconditioned matrix $\mathcal{G}_{\alpha}^{-1}\mathcal{A}$. For $y_k\neq 0$, set \begin{equation}\label{Eq9} a_k=\frac{y_k^{*}Ty_k}{y_k^* y_k},\quad b_k=\frac{y_k^{*}TW^{-2}Ty_k}{y_k^* y_k},\quad c_k=\frac{y_k^{*}TW^{-1}T W^{-1}Ty_k}{y_k^* y_k}. \end{equation} Then, $\lambda_k=1$ or \begin{equation}\label{Eq10} \lambda_k-1=\frac{\alpha b_k \pm \sqrt{\Delta(\alpha)}}{2a_k}, \end{equation} where $\Delta(\alpha)=\alpha^{2}b_k^{2}+4a_k(\alpha b_k-c_k)$. \end{lem} \begin{proof} As we saw in Lemma \ref{Lem1}, $\mathcal{G}_{\alpha}^{-1}\mathcal{A}u_k=\lambda_ku_k$ is equivalent to (\ref{Eq8}). It follows from the first equation of \eqref{Eq8} that $(1-\lambda_k)Wx_k=Ty_k$. If $\lambda_k\neq 1$, then we get \[ x_k=\frac{1}{1-\lambda_k} W^{-1}Ty_k. \] Substituting $x_k$ into the second equation of \eqref{Eq8} yields $$ (1-\lambda_k)Wy_k=\alpha \frac{\lambda_k}{1-\lambda_k}W^{-1}Ty_k-\frac{1}{1-\lambda_k}TW^{-1}Ty_k. $$ Simplifying this equation gives \begin{equation}\label{Eq11} (1-\lambda_k)^2 Wy_k=\alpha \lambda_k W^{-1}Ty_k-TW^{-1}Ty_k. \end{equation} According to Lemma \ref{Lem1} if $y_k=0$, then $\lambda_k=1$ and there is nothing to prove. Otherwise, multiplying both sides of \eqref{Eq11} by ${y_k}^{*}TW^{-1}$ results in $$ (1-\lambda_k)^{2} \frac{{y_k}^{*}Ty_k}{{y_k}^{*}y_k}=\alpha \lambda_k \frac{{y_k}^{*}TW^{-2}Ty_k}{{y_k}^{*}y_k}-\frac{{y_k}^{*}TW^{-1}TW^{-1}Ty_k}{{y_k}^{*}y_k}. $$ Now from the latter equation and Eq. \eqref{Eq9}, we get \begin{equation}\label{Eq12} a_k(1-\lambda_k)^2=\alpha \lambda_k b_k-c_k. \end{equation} Obviously, both of the matrices $TW^{-2}T$ and $TW^{-1}T W^{-1}T$ are symmetric positive semidefinite. Therefore, we deduce that $a_k,b_k,c_k\geq 0$. We consider the following two cases. \begin{itemize} \item If $a_k=0,$ then from \eqref{Eq9}, we have ${y_k}^{*}Ty_k=0$. Hence $Ty_k=0$ and therefore from the first equation in \eqref{Eq8} we obtain $\lambda_k=1$ or $x_k=0$. If $x_k=0$ then from the second equation in \eqref{Eq8}, $(1-\lambda_k)W y_k=0$. This implies that $\lambda_k=1$, since $W$ is nonsingular and $y_k \neq0$. 
\item If $a_k \neq 0$, then by solving the quadratic equation \eqref{Eq12} we get \[ \lambda_k-1=\frac{\alpha b_k \pm \sqrt{\Delta(\alpha)}}{2a_k}, \] where $\Delta(\alpha)=\alpha^{2}b_k^{2}+4a_k(\alpha b_k-c_k).$ \end{itemize} Therefore, the proof is complete. \end{proof} \rm \begin{thm}\label{Thm1} Let $W$ and $T \in \mathbb{R }^{n \times n}$ be symmetric positive definite and symmetric positive semidefinite, respectively. Let also \begin{equation}\label{Eq13} 0<\alpha \leq \min\left\{\frac{2}{b_k}(\sqrt{a_k(a_k+c_k)}-a_k):~b_k \neq 0\right\}. \end{equation} Then the eigenvalues of the matrix $\mathcal{G}_{\alpha}^{-1} \mathcal{A}$ are enclosed in a circle of radius $\sqrt{{(c_k-\alpha b_k)}/{a_k}}$ centered at (1,0) where $a_k$, $b_k$ and $c_k$ were defined in \eqref{Eq9}. \end{thm} \begin{proof} Let $\lambda_{k}$ be an eigenvalue of the matrix $\mathcal{G}^{-1} \mathcal{A}$. From Lemma \ref{Lem2}, we have $\lambda_k=1$ or $\lambda_k-1=({\alpha b_k \pm \sqrt{\Delta(\alpha)}})/{2a_k}$. If $\lambda_k=1$, there is nothing to prove. Otherwise, if $\Delta(\alpha)>0$ then the quadratic equation \eqref{Eq12} has two roots $\lambda_k^+$ and $\lambda_k^-$ which satisfy \[ |\lambda_k^{+}-1|=\frac{|\alpha b_k + \sqrt{\Delta(\alpha)}|}{2a_k}=:r_1,\quad |\lambda_k^{-}-1|=\frac{|\alpha b_k - \sqrt{\Delta(\alpha)}|}{2a_k}=:r_2. \] In this case, we deduce that these eigenvalues are enclosed in a circle of radius $\max\{r_1,r_2\}=r_1$ centered at $1$. On the other hand, if $\Delta(\alpha) \leq 0$, then from Eq. \eqref{Eq10}, we have $$ \lambda_k^{\pm}-1=\frac{\alpha b_k \pm i\sqrt{-\alpha^{2}b_k^{2}+4a_k(c_k-\alpha b_k)}}{2a_k}, $$ which gives \begin{equation}\label{Eq14} \vert{\lambda_k^{\pm}-1}\vert=\sqrt{\frac{c_k-\alpha b_k}{a_k}}=:r. \end{equation} This means that the eigenvalues of the preconditioned matrix are enclosed in a circle of radius $r$ centered at $1$. It is not difficult to see that $r\leq r_1$. Therefore, to have a more clustered eigenvalues around $1$ we should choose the parameter $\alpha$ such that $\Delta(\alpha)\leq 0$. We have $\Delta(\alpha)\leq 0 $ if and only if $\alpha\in[\alpha_1,\alpha_2]$, where \[ \alpha_1=\frac{-2}{b_k}\left(a_k+\sqrt{a_k(a_k+c_k)}\right)\quad \textrm{and} \quad \alpha_2=\frac{-2}{b_k}\left(a_k-\sqrt{a_k(a_k+c_k)}\right). \] It is necessary to mention that if $b_k=0$, then we have $\Delta(\alpha)=-4a_kc_k\leq 0$. Since $a_k,b_k$ and $c_k$ are nonnegative, we deduce that $\alpha_1$ is nonpositive and $\alpha_2$ is nonnegative. Therefore, having in mind that $\alpha>0$ it is enough to choose $\alpha\in (0,\alpha_2]$. Obviously, if we consider the above result over all the eigenpairs the desired result is obtained. \end{proof} \begin{thm}\rm Let $W$ and $T \in \mathbb{R }^{n \times n}$ be symmetric positive definite and symmetric positive semidefinite, respectively and $0 < \nu_1 \le \nu_2 \le \cdots \le \nu_n$ and $0 \le \mu_1 \le \mu_2 \le \cdots \le \mu_n \neq 0$ be the eigenvalues of $W$ and $T$, respectively. If $\alpha$ satisfies (\ref{Eq13}), then the eigenvalues of $\mathcal{G}^{-1}_{\alpha}\mathcal{A}$ lie in the following disk \begin{equation}\label{Eq15} \vert{\lambda-1}\vert \le \sqrt{\frac{\mu^3_n \nu^2_n-\alpha {\mu^2_1} {\nu^2_1}}{\nu^2_1 \nu^2_n \mu_1}}. \end{equation} \end{thm} \begin{proof} Since the matrix $T$ is symmetric, using the Courant-Fisher min–max theorem \cite{Axe}, we get \begin{equation}\label{Eq16} \mu_1 \le a_k \le \mu_n. 
\end{equation} On the other hand, \[ b_k=\frac{y_k^{*}TW^{-2}Ty_k}{y_k^* y_k}=\frac{{y_k}^{*}TW^{-2}Ty_k}{{y_k}^{*}T Ty_k}\frac{{y_k}^{*}T Ty_k}{{y_k}^{*}y_k}. \] Letting $z_k=T y_{k}$, again by using the Courant--Fischer min--max theorem it follows that \[ \lambda_{\min} (W^{-2}) \lambda_{\min} (T^2) \le b_k=\frac{{z_k}^{*}W^{-2}z_k}{{z_k}^{*}z_k} \frac{{y_k}^{*}T^2 y_k}{{y_k}^{*}y_k} \le \lambda_{\max} (W^{-2}) \lambda_{\max} (T^2), \] where $\lambda_{\min} (\cdot)$ and $\lambda_{\max} (\cdot)$ stand for the smallest and largest eigenvalues of the matrix, respectively. Hence, \[ \lambda^2_{\min} (W^{-1}) {\mu^2_1} \le b_k \le \lambda^2_{\max} (W^{-1}) {\mu^2_n}, \] and therefore \begin{equation}\label{Eq17} (\frac{\mu_1}{\nu_n})^2 \le b_k \le (\frac{\mu_n}{\nu_1})^2. \end{equation} Similarly, it is easy to see that \begin{equation}\label{Eq18} \frac{\mu^3_1}{\nu^2_n} \le c_k \le \frac{\mu^3_n}{\nu^2_1}. \end{equation} Now from Theorem \ref{Thm1} and Eqs. (\ref{Eq16}), (\ref{Eq17}) and (\ref{Eq18}) the desired result is obtained. \end{proof} \begin{remark}\rm Setting $\alpha_k=\frac{2}{b_k}(\sqrt{a_k(a_k+c_k)}-a_k)$, from Eqs. (\ref{Eq16}), (\ref{Eq17}), (\ref{Eq18}) we obtain \[ \alpha_k=\frac{2}{b_k} \frac{a_k c_k}{\sqrt{a_k(a_k+c_k)}+a_k} \ge \frac{2\nu^3_1 \mu^4_1}{\nu^2_n \mu^3_n(\sqrt{\nu^2_1+\mu^2_n}+\nu_1)}=\alpha^* \] for all $k$. Therefore, Eq. (\ref{Eq13}) can be replaced by $0<\alpha \le \alpha^{*}$. \end{remark} \begin{remark}\rm If we choose \[ \alpha=\tilde{\alpha}=\frac{\mu^3_n \nu^2_n}{\nu^2_1 \mu^2_1}, \] then the right-hand side of Eq. (\ref{Eq15}) becomes zero. However, $\tilde{\alpha}$ may not belong to the interval $(0,\alpha^*]$. Hence, we select the value of $\alpha$ in the interval $(0,\alpha^{*}]$ nearest to $\tilde{\alpha}$. \end{remark} \section{Numerical experiments}\label{Sec3} We use three test problems from \cite{Bai1} and an example from \cite{Yang} to illustrate the feasibility and effectiveness of the BLT preconditioner when it is employed as a preconditioner for the GMRES method to solve the real system \eqref{Eq5}. To do so, we compare the numerical results of the restarted generalized minimal residual (GMRES($m$)) method in conjunction with the BLT preconditioner with those of the MHSS and the GSOR preconditioners and the restarted GMRES method without preconditioning. Numerical results are compared in terms of both the number of iterations and the CPU time, which are denoted by ``IT" and ``CPU" in the tables, respectively. In the tables, a dagger $(\dag)$ means that the method fails to converge within 500 iterations. In all tests we use a zero vector as the initial guess, and the stopping criterion $$ \frac{\parallel b-A u^{(k)} \parallel_{2}}{\parallel b\parallel_{2}}<10^{-10} $$ is always used, where $u^{(k)}=x^{(k)}+iy^{(k)}$. In the implementation of the preconditioners, the inner symmetric positive definite systems of linear equations are solved by the sparse Cholesky factorization incorporated with the symmetric approximate minimum degree reordering \cite{saad}. To do so we have used the \verb"symamd.m" command of \textsc{Matlab}. All runs are implemented in \textsc{Matlab} R2014b on a personal computer with a 2.40 GHz central processing unit (Intel(R) Core(TM) i7-5500), 8 GB memory and the Windows 10 operating system.
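\noindent Before turning to the examples, the eigenvalue bound of the previous section can also be checked numerically; the following \textsc{Matlab} fragment is purely illustrative (it is not part of the reported experiments) and takes $T$ positive definite so that the right-hand side of \eqref{Eq15} is finite:
\begin{verbatim}
% Illustrative check of the bound (15) on a small random test problem
n = 50;  rng(1);
B = randn(n);  W = B*B' + n*eye(n);     % symmetric positive definite
C = randn(n);  T = C*C' + eye(n);       % symmetric positive definite
nu = sort(eig(W));  mu = sort(eig(T));  % nu_1<=...<=nu_n, mu_1<=...<=mu_n
alphastar = 2*nu(1)^3*mu(1)^4 / ...
    (nu(n)^2*mu(n)^3*(sqrt(nu(1)^2 + mu(n)^2) + nu(1)));   % alpha^*
alpha = alphastar;                      % any alpha in (0, alpha^*]
A = [W -T; T W];  G = [W zeros(n); alpha*eye(n) W];
lam = eig(G\A);
radius = sqrt((mu(n)^3*nu(n)^2 - alpha*mu(1)^2*nu(1)^2) ...
              /(nu(1)^2*nu(n)^2*mu(1)));
fprintf('max|lambda-1| = %.3e, bound = %.3e\n', max(abs(lam-1)), radius);
\end{verbatim}
\noindent For such a small dense problem the eigenvalues of $\mathcal{G}_{\alpha}^{-1}\mathcal{A}$ can be computed directly, and their distance from $1$ should stay below the radius given by \eqref{Eq15}.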
\begin{example}\label {Ex1}\rm (see \cite{Bai1}) Consider the system of linear equations \begin{equation} \big[(K+\frac{3-\sqrt{3}}{\tau}I)+i(K+\frac{3+\sqrt{3}}{\tau}I)\big]x=b, \end{equation} where $\tau$ is the time step-size and $K$ is the five-point centered difference matrix approximating the negative Laplacian operator $L=-\Delta$ with homogeneous Dirichlet boundary conditions, on a uniform mesh in the unit square $[0, 1] \times [0, 1]$ with the mesh-size $h=1/(m + 1)$. The matrix $K\in\mathbb{R }^{n \times n}$ possesses the tensor-product form $K=I\otimes V_m + V_m \otimes I$, with $V_m=h^{-2} tridiag (-1, 2,-1)\in \mathbb{R }^{m \times m}$. Hence, $K$ is an $n \times n$ block-tridiagonal matrix, with $n = m^2$. We take $$ W=K+\frac{3-\sqrt{3}}{\tau}I, \quad \textrm{and} \quad T=K+\frac{3+\sqrt{3}}{\tau}I $$ and the right-hand side vector $b$ with its $j$th entry $b_j$ given by $$ b_j=\frac{(1-i)j}{\tau(j+1)^2}, \quad j=1,2, \ldots ,n. $$ In our tests, we take $\tau= h$. Furthermore, we normalize the coefficient matrix and the right-hand side by multiplying both by $h^2$. \end{example} \begin{example}\label {Ex2}\rm (See \cite{Bai1}) Consider the system of linear equations \eqref{Eq1} in the form $$ \big[ (-\omega ^2 M+K) +i(\omega C_V +C_H) \big]x =b, $$ where $M$ and $K$ are the inertia and the stiffness matrices, $C_V$ and $C_H$ are the viscous and the hysteretic damping matrices, respectively, and $ \omega$ is the driving circular frequency. We take $C_H = \mu K$ with $\mu$ a damping coefficient, $M = I, C_V = 10I$, and $K$ the five-point centered difference matrix approximating the negative Laplacian operator with homogeneous Dirichlet boundary conditions, on a uniform mesh in the unit square $[0, 1] \times [0, 1]$ with the mesh-size $h =1/(m+1)$. The matrix $K \in \mathbb{R }^{n \times n}$ possesses the tensor-product form $K = I \otimes V_m+V_m \otimes I$, with $V_m =h^{-2} tridiag(-1, 2,-1) \in\mathbb{R }^{m \times m} $. Hence, $K$ is an $n \times n$ block-tridiagonal matrix, with $n = m^2$. In addition, we set $\mu = 8$, $\omega = \pi$, and the right-hand side vector $b$ to be $b = (1 + i)A\textbf{1}$, with \textbf{1} being the vector of all entries equal to 1. As before, we normalize the system by multiplying both sides through by $h^2$.
\end{example} \begin{table}[!ht] \caption{Numerical results for Example \ref{Ex1}.}\label{Table1} \begin{tabular}{ccc|c|cc|cc|cc|cc|cc|cc|cc|} \hspace{-0.2cm}line \multicolumn{1}{c}{Method} & \multicolumn{1}{c}{} & \multicolumn{2}{c}{$m=32$} & \multicolumn{2}{c}{$m=64$} & \multicolumn{2}{c}{$m=128$} & \multicolumn{2}{c}{$m=256$} & \multicolumn{2}{c}{$m=512$} & \multicolumn{2}{c}{$m=1024$}\\ [2mm] \hspace{-0.2cm}line \multicolumn{1}{c}{GMRES(5)} & \multicolumn{1}{c}{IT} & \multicolumn{2}{c}{349} & \multicolumn{2}{c}{\dag} & \multicolumn{2}{c}{\dag} & \multicolumn{2}{c}{\dag} & \multicolumn{2}{c}{\dag} & \multicolumn{2}{c}{\dag} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{CPU} & \multicolumn{2}{c}{0.39} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} \\ \\ \multicolumn{1}{c}{MHSS-GMRES(5)} & \multicolumn{1}{c}{$\alpha_{opt}$} & \multicolumn{2}{c}{10} & \multicolumn{2}{c}{9.1} & \multicolumn{2}{c}{4.7} & \multicolumn{2}{c}{5.1} & \multicolumn{2}{c}{10.5} & \multicolumn{2}{c}{} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{IT} & \multicolumn{2}{c}{54} & \multicolumn{2}{c}{26} & \multicolumn{2}{c}{71} & \multicolumn{2}{c}{114} & \multicolumn{2}{c}{179} & \multicolumn{2}{c}{} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{CPU} & \multicolumn{2}{c}{0.73} & \multicolumn{2}{c}{0.12} & \multicolumn{2}{c}{6.01} & \multicolumn{2}{c}{50.84} & \multicolumn{2}{c}{541.41} & \multicolumn{2}{c}{} \\ \\ \multicolumn{1}{c}{GSOR-GMRES(5)} & \multicolumn{1}{c}{$\alpha_{opt}$} & \multicolumn{2}{c}{0.037} & \multicolumn{2}{c}{0.457} & \multicolumn{2}{c}{0.432} & \multicolumn{2}{c}{0.418} & \multicolumn{2}{c}{0.412} & \multicolumn{2}{c}{0.411} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{IT} & \multicolumn{2}{c}{23} & \multicolumn{2}{c}{25} & \multicolumn{2}{c}{26} & \multicolumn{2}{c}{26} & \multicolumn{2}{c}{27} & \multicolumn{2}{c}{27} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{CPU} & \multicolumn{2}{c}{0.11} & \multicolumn{2}{c}{0.43} & \multicolumn{2}{c}{2.57} & \multicolumn{2}{c}{14.92} & \multicolumn{2}{c}{83.01} & \multicolumn{2}{c}{413.93} \\ \\ \multicolumn{1}{c}{BLTP-GMRES(5)} & \multicolumn{1}{c}{$\alpha_{opt}$} & \multicolumn{2}{c}{1.4} & \multicolumn{2}{c}{1.4} & \multicolumn{2}{c}{1.5} & \multicolumn{2}{c}{1.5} & \multicolumn{2}{c}{1.5} & \multicolumn{2}{c}{1.5} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{IT} & \multicolumn{2}{c}{6} & \multicolumn{2}{c}{7} & \multicolumn{2}{c}{7} & \multicolumn{2}{c}{7} & \multicolumn{2}{c}{7} & \multicolumn{2}{c}{7} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{CPU} & \multicolumn{2}{c}{0.02} & \multicolumn{2}{c}{0.1} & \multicolumn{2}{c}{0.59} & \multicolumn{2}{c}{3.51} & \multicolumn{2}{c}{18.98} & \multicolumn{2}{c}{98.37} \\ \hspace{-0.2cm}line \end{tabular} \label{Tbl1} \caption{Numerical results for Example \ref{Ex2}.} \begin{tabular}{ccc|c|cc|cc|cc|cc|cc|cc|cc|} \hspace{-0.2cm}line \multicolumn{1}{c}{Method} & \multicolumn{1}{c}{} & \multicolumn{2}{c}{$m=32$} & \multicolumn{2}{c}{$m=64$} & \multicolumn{2}{c}{$m=128$} & \multicolumn{2}{c}{$m=256$} & \multicolumn{2}{c}{$m=512$} & \multicolumn{2}{c}{$m=1024$}\\ [2mm] \hspace{-0.2cm}line \multicolumn{1}{c}{GMRES(5)} & \multicolumn{1}{c}{IT} & \multicolumn{2}{c}{\dag} & \multicolumn{2}{c}{\dag} & \multicolumn{2}{c}{\dag} & \multicolumn{2}{c}{\dag} & \multicolumn{2}{c}{\dag} & \multicolumn{2}{c}{\dag} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{CPU} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} & 
\multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} \\ \\ \multicolumn{1}{c}{MHSS-GMRES(5)} & \multicolumn{1}{c}{$\alpha_{opt}$} & \multicolumn{2}{c}{81} & \multicolumn{2}{c}{110} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{IT} & \multicolumn{2}{c}{73} & \multicolumn{2}{c}{243} & \multicolumn{2}{c}{\dag} & \multicolumn{2}{c}{\dag} & \multicolumn{2}{c}{\dag} & \multicolumn{2}{c}{\dag} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{CPU} & \multicolumn{2}{c}{0.29} & \multicolumn{2}{c}{4.65} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} \\ \\ \multicolumn{1}{c}{GSOR-GMRES(5)} & \multicolumn{1}{c}{$\alpha_{opt}$} & \multicolumn{2}{c}{0.099} & \multicolumn{2}{c}{0.099} & \multicolumn{2}{c}{0.099} & \multicolumn{2}{c}{0.099} & \multicolumn{2}{c}{0.099} & \multicolumn{2}{c}{0.099} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{IT} & \multicolumn{2}{c}{65} & \multicolumn{2}{c}{70} & \multicolumn{2}{c}{71} & \multicolumn{2}{c}{67} & \multicolumn{2}{c}{63} & \multicolumn{2}{c}{61} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{CPU} & \multicolumn{2}{c}{0.23} & \multicolumn{2}{c}{1.07} & \multicolumn{2}{c}{6.97} & \multicolumn{2}{c}{38.01} & \multicolumn{2}{c}{196.88} & \multicolumn{2}{c}{959.75} \\ \\ \multicolumn{1}{c}{BLTP-GMRES(5)} & \multicolumn{1}{c}{$\alpha_{opt}$} & \multicolumn{2}{c}{0.4} & \multicolumn{2}{c}{0.4} & \multicolumn{2}{c}{0.4} & \multicolumn{2}{c}{0.4} & \multicolumn{2}{c}{0.4} & \multicolumn{2}{c}{0.4} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{IT} & \multicolumn{2}{c}{8} & \multicolumn{2}{c}{8} & \multicolumn{2}{c}{8} & \multicolumn{2}{c}{8} & \multicolumn{2}{c}{8} & \multicolumn{2}{c}{8} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{CPU} & \multicolumn{2}{c}{0.03} & \multicolumn{2}{c}{0.11} & \multicolumn{2}{c}{0.7} & \multicolumn{2}{c}{3.96} & \multicolumn{2}{c}{21.99} & \multicolumn{2}{c}{112.15} \\ \hspace{-0.2cm}line \end{tabular} \label{Tbl2} \end{table} \begin{table}[!ht] \caption{Numerical results for Example \ref{Ex3}.} \begin{tabular}{ccc|c|cc|cc|cc|cc|cc|cc|cc|} \hspace{-0.2cm}line \multicolumn{1}{c}{Method} & \multicolumn{1}{c}{} & \multicolumn{2}{c}{$m=32$} & \multicolumn{2}{c}{$m=64$} & \multicolumn{2}{c}{$m=128$} & \multicolumn{2}{c}{$m=256$} & \multicolumn{2}{c}{$m=512$} & \multicolumn{2}{c}{$m=1024$}\\ [2mm] \hspace{-0.2cm}line \multicolumn{1}{c}{GMRES(5)} & \multicolumn{1}{c}{IT} & \multicolumn{2}{c}{235} & \multicolumn{2}{c}{\dag} & \multicolumn{2}{c}{\dag} & \multicolumn{2}{c}{\dag} & \multicolumn{2}{c}{\dag} & \multicolumn{2}{c}{\dag} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{CPU} & \multicolumn{2}{c}{0.41} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} \\ \\ \multicolumn{1}{c}{MHSS-GMRES(5)} & \multicolumn{1}{c}{$\alpha_{opt}$} & \multicolumn{2}{c}{52} & \multicolumn{2}{c}{18} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{IT} & \multicolumn{2}{c}{120} & \multicolumn{2}{c}{272} & \multicolumn{2}{c}{\dag} & \multicolumn{2}{c}{\dag} & \multicolumn{2}{c}{\dag} & \multicolumn{2}{c}{\dag} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{CPU} & \multicolumn{2}{c}{0.6} & \multicolumn{2}{c}{6.95} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} \\ \\ \multicolumn{1}{c}{GSOR-GMRES(5)} & 
\multicolumn{1}{c}{$\alpha_{opt}$} & \multicolumn{2}{c}{0.776} & \multicolumn{2}{c}{0.566} & \multicolumn{2}{c}{0.354} & \multicolumn{2}{c}{0.199} & \multicolumn{2}{c}{0.106} & \multicolumn{2}{c}{0.055} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{IT} & \multicolumn{2}{c}{7} & \multicolumn{2}{c}{8} & \multicolumn{2}{c}{11} & \multicolumn{2}{c}{22} & \multicolumn{2}{c}{52} & \multicolumn{2}{c}{117} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{CPU} & \multicolumn{2}{c}{0.07} & \multicolumn{2}{c}{0.21} & \multicolumn{2}{c}{1.76} & \multicolumn{2}{c}{20.39} & \multicolumn{2}{c}{255.93} & \multicolumn{2}{c}{3926.97} \\ \\ \multicolumn{1}{c}{BLTP-GMRES(5)} & \multicolumn{1}{c}{$\alpha_{opt}$} & \multicolumn{2}{c}{0.4} & \multicolumn{2}{c}{0.7} & \multicolumn{2}{c}{1.0} & \multicolumn{2}{c}{1.4} & \multicolumn{2}{c}{1.7} & \multicolumn{2}{c}{2.0} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{IT} & \multicolumn{2}{c}{4} & \multicolumn{2}{c}{5} & \multicolumn{2}{c}{7} & \multicolumn{2}{c}{9} & \multicolumn{2}{c}{12} & \multicolumn{2}{c}{18} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{CPU} & \multicolumn{2}{c}{0.02} & \multicolumn{2}{c}{0.11} & \multicolumn{2}{c}{0.94} & \multicolumn{2}{c}{7.33} & \multicolumn{2}{c}{55.15} & \multicolumn{2}{c}{556.44} \\ \hspace{-0.2cm}line \end{tabular} \label{Tbl3} \caption{Numerical results for Example \ref{Ex4}.} \begin{tabular}{ccc|c|cc|cc|cc|cc|cc|cc|cc|} \hspace{-0.2cm}line \multicolumn{1}{c}{Method} & \multicolumn{1}{c}{} & \multicolumn{2}{c}{$m=32$} & \multicolumn{2}{c}{$m=64$} & \multicolumn{2}{c}{$m=128$} & \multicolumn{2}{c}{$m=256$} & \multicolumn{2}{c}{$m=512$} & \multicolumn{2}{c}{$m=1024$}\\ [2mm] \hspace{-0.2cm}line \multicolumn{1}{c}{GMRES(5)} & \multicolumn{1}{c}{IT} & \multicolumn{2}{c}{138} & \multicolumn{2}{c}{\dag} & \multicolumn{2}{c}{\dag} & \multicolumn{2}{c}{\dag} & \multicolumn{2}{c}{\dag} & \multicolumn{2}{c}{\dag} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{CPU} & \multicolumn{2}{c}{0.15} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} \\ \\ \multicolumn{1}{c}{MHSS-GMRES(5)} & \multicolumn{1}{c}{$\alpha_{opt}$} & \multicolumn{2}{c}{130} & \multicolumn{2}{c}{10} & \multicolumn{2}{c}{13} & \multicolumn{2}{c}{8} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{IT} & \multicolumn{2}{c}{12} & \multicolumn{2}{c}{28} & \multicolumn{2}{c}{84} & \multicolumn{2}{c}{283} & \multicolumn{2}{c}{\dag} & \multicolumn{2}{c}{\dag} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{CPU} & \multicolumn{2}{c}{0.08} & \multicolumn{2}{c}{0.41} & \multicolumn{2}{c}{5.75} & \multicolumn{2}{c}{88.92} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} \\ \\ \multicolumn{1}{c}{GSOR-GMRES(5)} & \multicolumn{1}{c}{$\alpha_{opt}$} & \multicolumn{2}{c}{0.038} & \multicolumn{2}{c}{0.038} & \multicolumn{2}{c}{0.038} & \multicolumn{2}{c}{0.038} & \multicolumn{2}{c}{0.038} & \multicolumn{2}{c}{0.037} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{IT} & \multicolumn{2}{c}{69} & \multicolumn{2}{c}{92} & \multicolumn{2}{c}{75} & \multicolumn{2}{c}{66} & \multicolumn{2}{c}{67} & \multicolumn{2}{c}{152} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{CPU} & \multicolumn{2}{c}{0.22} & \multicolumn{2}{c}{1.3} & \multicolumn{2}{c}{19.08} & \multicolumn{2}{c}{194.10} & \multicolumn{2}{c}{231.36} & \multicolumn{2}{c}{2422.25} \\ \\ \multicolumn{1}{c}{BLTP-GMRES(5)} & \multicolumn{1}{c}{$\alpha_{ opt}$} & \multicolumn{2}{c}{2.1} & \multicolumn{2}{c}{2.2} & \multicolumn{2}{c}{2.3} & 
\multicolumn{2}{c}{2.4} & \multicolumn{2}{c}{2.5} & \multicolumn{2}{c}{2.3} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{IT} & \multicolumn{2}{c}{21} & \multicolumn{2}{c}{21} & \multicolumn{2}{c}{19} & \multicolumn{2}{c}{21} & \multicolumn{2}{c}{20} & \multicolumn{2}{c}{20} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{CPU} & \multicolumn{2}{c}{0.06} & \multicolumn{2}{c}{0.3} & \multicolumn{2}{c}{9.30} & \multicolumn{2}{c}{13.20} & \multicolumn{2}{c}{63.25} & \multicolumn{2}{c}{369.66} \\ \hspace{-0.2cm}line \end{tabular} \label{Tbl4} \end{table} \begin{example}\label {Ex3}\rm (See \cite{Bai1}) Consider the system of linear equations \eqref{Eq1} with $$ T=I\otimes V + V \otimes I \quad \textrm{and} \quad W=10(I \otimes V_c + V_c \otimes I) +9(e_1 e^T_m + e_m e^T_1) \otimes I, $$ where $V = tridiag(-1, 2,-1) \in \mathbb{R}^{m \times m}$, $V_c = V - e_1 e^T_m - e_m e^T_1 \in \mathbb{R}^{m \times m}$ and $e_1$ and $e_m$ are the first and last unit vectors in $\mathbb{R}^m$, respectively. We take the right-hand side vector $b$ to be $b = (1 + i)A\textbf{1}$, with \textbf{1} being the vector of all entries equal to 1. Here $T$ and $W$ correspond to the five-point centered difference matrices approximating the negative Laplacian operator with homogeneous Dirichlet boundary conditions and periodic boundary conditions, respectively, on a uniform mesh in the unit square $[0, 1]\times [0, 1]$ with the mesh-size $h = 1/(m+ 1)$. \end{example} \begin{example}\label {Ex4}\rm (See \cite{Bai1,Yang}) We consider the complex Helmholtz equation $$ -\Delta u + \sigma_1 u +i \sigma_2 u = f, $$ where $\sigma_1$ and $\sigma_2$ are real coefficient functions, $u$ satisfies Dirichlet boundary conditions in $D = [0, 1] \times [0, 1]$ and $i = \sqrt{-1}$. We discretize the problem with finite differences on an $m \times m$ grid with mesh size $h = {1}/{(m+ 1)}$. This leads to a system of linear equations $$ ((K+ \sigma_1 I) + i\sigma_2 I) x = b, $$ where $K = I \otimes V_m + V_m \otimes I$ is the discretization of $-\Delta$ by means of centered differences, wherein $V_m = h^{-2}tridiag(-1, 2,-1) \in \mathbb{R}^{m\times m}$. The right-hand side vector $b$ is taken to be $b = (1+i)A\textbf{1}$, with \textbf{1} being the vector of all entries equal to 1. Furthermore, before solving the system we normalize the coefficient matrix and the right-hand side vector by multiplying both by $h^2$. For the numerical tests we set $\sigma_1 =-10$ and $\sigma_2= 500$. \end{example} Numerical results for Examples \ref{Ex1}-\ref{Ex4} are listed in Tables \ref{Tbl1}-\ref{Tbl4}, respectively. We use GMRES(5) as the iterative method. In the tables, the numerical results of the GMRES(5) method in conjunction with the MHSS, the GSOR and the BLT preconditioners are denoted by MHSS-GMRES(5), GSOR-GMRES(5) and BLTP-GMRES(5). For the BLT and MHSS preconditioners the optimal values of $\alpha$ ($\alpha_{opt}$) were found experimentally and are the ones resulting in the least number of iterations. For the GSOR preconditioner the value was computed by using Theorem 2 in \cite{Salkuyeh1}. As the numerical results show, for all four examples the preconditioned GMRES(5) with the BLT preconditioner outperforms the other methods in terms of the number of iterations and the CPU time. Surprisingly, we see from Tables \ref{Tbl1}, \ref{Tbl2} and \ref{Tbl4} that the number of iterations of the BLT preconditioner, in contrast to the other methods, does not grow with the problem size.
For Example \ref{Ex3}, however, the number of iterations does grow, but the growth is slow for the BLT preconditioner. Another observation which can be made here is that, for Examples \ref{Ex1}, \ref{Ex2} and \ref{Ex4}, the optimal value of the parameter $\alpha$ is not sensitive to the problem size. \section{Conclusion}\label{Sec4} We have established and analyzed a block lower triangular (BLT) preconditioner to accelerate the convergence of Krylov subspace methods, such as GMRES, for solving an important class of complex symmetric systems of linear equations. The eigenvalue distribution of the preconditioned matrix has been investigated. Since the proposed preconditioner involves a parameter, we have determined an interval from which a suitable parameter can be chosen. We have compared the numerical results of GMRES(5) in conjunction with the BLT preconditioner with those of the GSOR and MHSS preconditioners. Numerical results show that the BLT preconditioner is superior to the other methods in terms of both the number of iterations and the CPU time. \end{document}
\begin{document} \title{Asymptotic expansion of the Bergman kernel \\ for weakly pseudoconvex tube domains in {\bf C}$^2$} \author{by \\ Joe {\sc Kamimoto} \\ {\small Graduate School of Mathematical Sciences, The University of Tokyo,} \\ {\small 3-8-1, Komaba, Meguro, Tokyo, 153 Japan.}\\ {\it E-mail} : {\tt [email protected]} } \date{October 10, 1996} \maketitle \begin{abstract} \footnote{{\it Math Subject Classification.} 32A40, 32F15, 32H10.} \footnote{{\it Key Words and Phrases.} Bergman kernel, Szeg\"o kernel, weakly pseudoconvex, of finite type, tube, asymptotic expansion, real blowing-up, admissible approach region.} In this paper we give an asymptotic expansion of the Bergman kernel for certain weakly pseudoconvex tube domains of finite type in ${\bf C}^2$. Our asymptotic formula asserts that the singularity of the Bergman kernel at weakly pseudoconvex points is essentially expressed by using two variables ; moreover certain real blowing-up is necessary to understand its singularity. The form of the asymptotic expansion with respect to each variable is similar to that in the strictly pseudoconvex case due to C. Fefferman. We also give an analogous result in the case of the Szeg\"o kernel. \end{abstract} \section{Introduction} The purpose of this paper is to give an asymptotic expansion of the Bergman kernel for certain class of weakly pseudoconvex tube domains of finite type in ${\bf C}^2$. We also give an analogous result of the Szeg\"o kernel for the same class of tube domains. Let $\Omega$ be a domain with smooth boundary in ${\bf C}^n$. The Bergman space $B(\Omega)$ is the subspace of $L^2(\Omega)$ consisting of holomorphic $L^2$-functions on $\Omega$. The Bergman projection is the orthogonal projection ${\bf B}:L^2(\Omega)\to B(\Omega)$. We can write ${\bf B}$ as an integral operator $$ {\bf B}f(z)=\int_{\Omega} K(z,w)f(w)dV(w) \quad \mbox{ for $f\in L^2(\Omega)$}, $$ where $K:\Omega\times\Omega\to{\bf C}$ is the {\it Bergman kernel} of the domain $\Omega$ and $dV$ is the Lebesgue measure on $\Omega$. In this paper we restrict the Bergman kernel on the diagonal of the domain and study the boundary behavior of $K(z)=K(z,z)$. Although there are many explicit computations for the Bergman kernels of specific domains (\cite{ber},\cite{cha},\cite{ise},\cite{dan1},\cite{grs},\cite{bol},\cite{dan3},\cite{frh1},\cite{frh2},\cite{joe1}), it seems difficult to express the Bergman kernel in closed form in general. Therefore appropriate approximation formulas are necessary to know the boundary behavior of the Bergman kernel. From this viewpoint the following studies have great success in the case of strictly pseudoconvex domains. Assume $\Omega$ is a bounded strictly pseudoconvex domain. L. H\"ormander \cite{hor} shows that the limit of $K(z)d(z\!-\!z^0)^{n+1}$ at $z^0 \in \partial\Omega$ equals the determinant of the Levi form at $z^0$ times $n!/4\pi^n$, where $d$ is the Euclidean distance. Moreover C. Fefferman \cite{fef} and L. Boutet de Monvel and J. Sj\"ostrand \cite{bos} give the following asymptotic expansion of the Bergman kernel of $\Omega$ : \begin{equation} K(z)=\frac{\varphi(z)}{r(z)^{n+1}}+\psi(z)\log r(z), \label{eqn:1.1} \end{equation} where $r\in C^{\infty}(\bar{\Omega})$ is a defining function of $\Omega$ (i.e. $\Omega=\{r>0\}$ and $|dr|>0$ on $\partial\Omega$) and $\varphi$, $\psi \in C^{\infty}(\bar{\Omega})$ can be expanded asymptotically with respect to $r$. On the other hand, there are not so strong results in the weakly pseudoconvex case. 
Let us recall important studies in this case. Many sharp estimates of the size of the Bergman kernel are obtained (\cite{her1},\cite{ohs1},\cite{dho},\cite{cat},\cite{mcn1}, \cite{her2},\cite{dih},\cite{her3},\cite{mcn2},\cite{cho},\cite{ohs2},\cite{goz},\cite{mcn3}). In particular D. Catlin \cite{cat} gives a complete estimate from above and below for domains of finite type in ${\bf C}^2$. Recently H. P. Boas, E. J. Straube and J. Yu \cite{bsy} have computed a boundary limit in the sense of H\"ormander for a large class of domains of finite type on a non-tangential cone. However asymptotic formulas are yet to be explored more extensively. In this paper we give an asymptotic expansion of the Bergman kernel for certain class of weakly pseudoconvex tube domains of finite type in ${\bf C}^2$. N. W. Gebelt \cite{geb} and F. Haslinger \cite{has2} have recently computed for the special cases, but the method of our expansion is different from theirs. Our main idea used to analyse the Bergman kernel is to introduce certain real blowing-up. Let us briefly indicate how this blowing-up works for the Bergman kernel at a weakly pseudoconvex point $z^0$. Since the set of strictly pseudoconvex points are dense on the boundary of the domain of finite type, it is a serious problem to resolve the difficulty caused by strictly pseudoconvex points near $z^0$. This difficulty can be avoided by restricting the argument on a non-tangential cone in the domain (\cite{her1},\cite{dho},\cite{her2},\cite{dih},\cite{bsy}). We surmount the difficulty in the case of certain class of tube domains in the following. By blowing up at the weakly pseudoconvex point $z^0$, we introduce two new variables. The Bergman kernel can be developed asymptotically in terms of these variables in the sense of Sibuya \cite{sib}. (See also Majima \cite{maj}.) The expansion, regarded as a function of the first variable, has the form of Fefferman's expansion (\ref{eqn:1.1}), and hence it reflects the strict pseudoconvexity. The characteristic influence of the weak pseudoconvexity appears in the expansion with respect to second variable. Though the form of this expansion is similar to (\ref{eqn:1.1}), we must use $m$th root of the defining function, i.e. $r^{\frac{1}{m}}$, as the expansion variable when $z^0$ is of type $2m$. We remark that a similar situation occurs in the case of another class of domains in \cite{geb}. Our method of the computation is based on the studies \cite{fef},\cite{bos},\cite{boc},\cite{nak}. Our starting point is certain integral representation in \cite{kor},\cite{nak}. After introducing the blowing-up to this representation, we compute the asymptotic expansion by using the stationary phase method. For the above computation, it is necessary to localize the Bergman kernel near a weakly pseudoconvex point. This localization can be obtained in a fashion similar to the case of some class of Reinhardt domains (\cite{boc},\cite{nak}). This paper is organized as follows. Our main theorem is established in Section 2. The next three sections prepare the proof of the theorem. First an integral representation is introduced, which is a clue to our analysis in Section 3. Second the usefulness of our blowing-up is shown by using a simple tube domain $\{(z_1,z_2)\in {\bf C}^2 ; {\rm Im}z_2>[{\rm Im}z_1]^{2m}\},$ $m=2,3,\ldots$, in Section 4. This domain is considered to be a model domain for more general case. Third a localization lemma is established in Section 5, which is necessary to the computation in the proof of our theorem. 
Our main theorem is proved in Section 6. After an appropriate localization (\S 6.1) and the blowing-up at a weakly pseudoconvex point, an easy computation shows that two propositions are sufficient to prove our theorem (\S 6.2). In order to prove these propositions, we compute the asymptotic expansion of two functions by using the stationary phase method (\S 6.3, 6.4). The rest of Section 6 (\S6.5, 6.6) is devoted to proving these two propositions. In Section 7 an analogous theorem about the Szeg\"o kernel is established. I would like to thank Professors Kazuo Okamoto, Takeo Ohsawa and Katsunori Iwasaki for their generosity and several useful conversations. I would also like to thank Professor Iwasaki, who carefully read the manuscript and supplied many corrections. \section{Statement of main result} Let $f \in C^{\infty}({\bf R})$ be a function satisfying \begin{eqnarray} \left\{ \begin{array}{rl} \!\!& \mbox{ $f''\geq 0$ on ${\bf R}$ and $f$ has the form in some neighborhood of $0$:}\\ \!\!& \mbox{ $f(x)=x^{2m}g(x)$ where $m=2,3,\ldots$, $g(0)>0$ and $xg'(x) \leq 0$}. \end{array}\right. \label{eqn:2.1} \end{eqnarray} Let $\omega_f \subset {\bf R}^2$ be the domain defined by $ \omega_f = \{(x,y) ; y > f(x) \}. $ Let $\Omega_f \subset {\bf C}^2$ be the tube domain over $\omega_f$, i.e., $$ \Omega_f={\bf R}^2 + i \omega_f. $$ Let $\pi: {\bf C}^2 \to {\bf R}^2$ be the projection defined by $\pi(z_1,z_2)= ({\rm Im}z_1,{\rm Im}z_2)$. It is easy to check that $\Omega_f$ is a pseudoconvex domain; moreover $z^0 \in \partial\Omega_f$, with $\pi(z^0)=O$, is a weakly pseudoconvex point of type $2m$ (or $2m-1$) in the sense of Kohn or D'Angelo and $\partial\Omega_f \setminus \pi^{-1}(O)$ is strictly pseudoconvex near $z^0$. Now we introduce the transformation $\sigma$, which plays a key role in our analysis. Set $\Delta=\{(\tau,\varrho); 0<\tau \leq 1, \,\, \varrho>0 \}$. The transformation $\sigma : \overline{\omega_f} \to \overline{\Delta}$ is defined by \begin{eqnarray} \sigma : \left\{ \begin{array}{rl} \!\!\!\!\! &\tau = \chi(1-\frac{f(x)}{y}), \\ \!\!\!\!\! &\varrho= y, \end{array}\right. \label{eqn:2.2} \end{eqnarray} where the function $\chi \in C^{\infty}([0,1))$ satisfies the conditions: $\chi'(u) \geq 1/2$ on $[0,1]$, and $\chi(u)=u$ for $0 \leq u \leq 1/3$ and $\chi(u)=1-(1-u)^{\frac{1}{2m}}$ for $1-1/3^{2m} \leq u \leq 1$. Then $\sigma\circ\pi$ is a transformation from $\overline{\Omega_f}$ to $\overline{\Delta}$. The transformation $\sigma$ induces an isomorphism of $\omega_f \cap \{x\geq 0\}$ (or $\omega_f \cap \{x\leq 0\}$) onto $\Delta$. The boundary of $\omega_f$ is transferred by $\sigma$ as follows: $\sigma((\partial \omega_f)\setminus \{O\})= \{(0,\varrho); \varrho>0\}$ and $ \sigma^{-1}(\{(\tau,0);0\leq \tau \leq 1\})=\{O\} $. This indicates that $\sigma$ is the real blowing-up of $\partial\omega_f$ at $O$, so we may say that $\sigma\circ\pi$ is the blowing-up at the weakly pseudoconvex point $z^0$. Moreover $\sigma$ equips $\omega_f$ with the coordinates $(\tau,\varrho)$, which can be regarded as polar coordinates around $O$. We call $\tau$ the angular variable and $\varrho$ the radial variable, respectively. Note that if $z$ approaches a strictly (resp. weakly) pseudoconvex point, then $\tau(\pi(z))$ (resp. $\varrho(\pi(z))$) tends to $0$ in the coordinates $(\tau,\varrho)$. The following theorem asserts that the singularity of the Bergman kernel of $\Omega_f$ at $z^0$, with $\pi(z^0)=O$, can be essentially expressed in terms of the polar coordinates $(\tau,\varrho)$.
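Before stating it, we illustrate the coordinates $(\tau,\varrho)$ in the model case $f(x)=x^{2m}$; this elementary computation is included only as an illustration. If $\pi(z)$ approaches a boundary point $(x_0,x_0^{2m})$ with $x_0\neq 0$, that is, a strictly pseudoconvex point, then $f(x)/y \to 1$, so that $$ \tau=\chi\Big(1-\frac{f(x)}{y}\Big) \longrightarrow \chi(0)=0, \qquad \varrho=y \longrightarrow x_0^{2m}>0. $$ If instead $z$ approaches $z^0$ along the vertical ray $x=0$, then $\tau=\chi(1)=1$ stays constant while $\varrho=y \to 0$. Thus $\tau$ detects the approach to the strictly pseudoconvex part of the boundary and $\varrho$ the approach to $O$, in accordance with the terminology above.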
\begin{thm} The Bergman kernel of $\Omega_f$ has the following form in some neighborhood of $z^0$: \begin{equation} K(z)=\frac{\Phi(\tau, \varrho^{\frac{1}{m}})} {\varrho^{2+\frac{1}{m}}} +\tilde{\Phi}(\tau,\varrho^{\frac{1}{m}}) \log \varrho^{\frac{1}{m}}, \label{eqn:2.3} \end{equation} where $\Phi \in C^{\infty}((0,1]\times [0,\varepsilon))$ and $\tilde{\Phi} \in C^{\infty}([0,1]\times [0,\varepsilon))$ with some $\varepsilon>0$. Moreover, on the set $\{\tau> \alpha \varrho^{\frac{1}{2m}}\}$ with some $\alpha>0$, $\Phi$ is written in the following form: for every nonnegative integer $\mu_0$, \begin{equation} \Phi(\tau,\varrho^{\frac{1}{m}}) =\sum_{\mu=0}^{\mu_0} c_{\mu}(\tau) \varrho^{\frac{\mu}{m}} + R_{\mu_0}(\tau, \varrho^{\frac{1}{m}}) \varrho^{\frac{\mu_0}{m}+\frac{1}{2m}}, \label{eqn:2.4} \end{equation} where \begin{equation} c_{\mu}(\tau) = \frac{\varphi_{\mu}(\tau)} {\tau^{3+2\mu}} +\psi_{\mu}(\tau)\log \tau, \label{eqn:2.5} \end{equation} with $\varphi_{\mu},\psi_{\mu} \in C^{\infty}([0,1])$, $\varphi_0$ positive on $[0,1]$, and $R_{\mu_0}$ satisfies $ |R_{\mu_0}(\tau,\varrho^{\frac{1}{m}})| \leq C_{\mu_0}[\tau-\alpha\varrho^{\frac{1}{2m}}]^{-4-2\mu_0} $ for some positive constant $C_{\mu_0}$. \end{thm} Let us describe the asymptotic expansion of the Bergman kernel $K$ in more detail. Considering the meaning of the variables $\tau,\varrho$, we may say that the expansion with respect to $\tau$ is induced by the strict pseudoconvexity and the expansion with respect to $\varrho^{\frac{1}{m}}$ by the weak pseudoconvexity. Actually the expansion (\ref{eqn:2.5}) has the same form as that of Fefferman (\ref{eqn:1.1}). By (\ref{eqn:2.4}),(\ref{eqn:2.5}), in order to see the characteristic influence of the weak pseudoconvexity on the singularity of the Bergman kernel $K$, it suffices to study $K$ on the region $$ {\cal U}_{\alpha}=\{z \in {\bf C}^2; \tau\circ\pi(z) > \alpha^{-1}\} \quad (\alpha>1). $$ This is because ${\cal U}_{\alpha}$ is the {\it widest} region where the coefficients $c_{\mu}(\tau)$ are bounded. We call ${\cal U}_{\alpha}$ an admissible approach region of the Bergman kernel of $\Omega_f$ at $z^0$. The region ${\cal U}_{\alpha}$ seems deeply connected with the admissible approach regions studied in \cite{kra1},\cite{kra2},\cite{ala},\cite{kra3}, etc. We remark that, on the region ${\cal U}_{\alpha}$, replacing the expansion variable $\varrho^{\frac{1}{m}}$ by $r^{\frac{1}{m}}$, where $r$ is a defining function of $\Omega_f$ (e.g. $r(x,y)=y-f(x)$), does not affect the form of the expansion. Now let us compare the asymptotic expansion (\ref{eqn:2.3}) on ${\cal U}_{\alpha}$ with Fefferman's expansion (\ref{eqn:1.1}). The essential difference between them only appears in the expansion variable (i.e. $r^{\frac{1}{m}}$ in (\ref{eqn:2.3}) and $r$ in (\ref{eqn:1.1})). A similar phenomenon occurs in subelliptic estimates for the $\bar{\partial}$-Neumann problem. As is well known, the finite-type condition is equivalent to the condition that a subelliptic estimate holds, i.e., $$ |||\phi |||_{\epsilon}^2 \leq C(||\bar{\partial}\phi||^2+||\bar{\partial}^* \phi||^2 +||\phi||^2) \quad\, (\epsilon>0), $$ (refer to \cite{koh2} for the details). Here, in the two-dimensional case, this estimate holds for any $0<\epsilon\leq \frac{1}{2}$ in the strictly pseudoconvex case and for $0<\epsilon\leq \frac{1}{2m}$ in the weakly pseudoconvex case of type $2m$, respectively. The difference between these two cases only appears in the value of $\epsilon$.
From this viewpoint, our expansion (\ref{eqn:2.3}) seems to be a natural generalization of Fefferman's expansion (\ref{eqn:1.1}) in the strictly pseudoconvex case. {\it Remarks}\,\,{\it 1}.\,\, The idea of the blowing-up $\sigma$ is originally introduced in the study of the Bergman kernel of the domain ${\cal E}_m=\{z\in {\bf C}^n; \sum_{j=1}^n |z_j|^{2m_j}<1\}$ $(m_j \in {\bf N}$, $m_n \neq 1)$ in \cite{joe1}. Since ${\cal E}_m$ has high homogeneity, the asymptotic expansion with respect to the radial variable does not appear (see also \S4). {\it 2}. \,\, If we consider the Bergman kernel on the region ${\cal U}_{\alpha}$, then we can remove the condition $xg'(x)\leq 0$ in (\ref{eqn:2.1}). Namely even if the condition $xg'(x)\leq 0$ is not satisfied, we can still obtain (\ref{eqn:2.3}),(\ref{eqn:2.4}) in the theorem where $c_{\mu}$'s are bounded on ${\cal U}_{\alpha}$. But the condition $xg'(x) \leq 0$ is necessary to obtain the asymptotic expansion with respect to $\tau$. {\it 3}. \,\, From the definition of asymptotic expansion of functions of several variables in \cite{sib},\cite{maj}, the expansion in the theorem is not complete. In order to get a complete asymptotic expansion, we must take a further blowing-up at the point $(\tau, \varrho)=(0,0)$. The real blowing-up $(\tau, \varrho)\mapsto (\tau, \varrho\tau^{-2m})$ is sufficient for this purpose. {\it 4}. \,\, The limit of $\varrho^{2+\frac{1}{m}}K(z)$ at $z^0$ is $c_0(\tau)$, so the boundary limit depends on the angular variable $\tau$. But this limit is determined uniquely ($c_0 (1)=\varphi_0(1)$) on a non-tangential cone in $\Omega_f$ (see \cite{bsy}). {\it Notation}.\,\,\, In this paper we use $c$, $c_j$, or $C$ for various constants without further comment. \section{Integral representation} In this section we give an integral representation of the Bergman kernel, which is a clue to our analysis. Kor\'anyi \cite{kor}, Nagel \cite{nag} and Haslinger \cite{has1} obtain similar representations of Bergman kernels or Szeg\"o kernels for certain tube domains. In this section we assume that $f\in C^{\infty}({\bf R})$ is a function such that $f(0)=0$ and $f''(x) \geq 0$. The tube domain $\Omega_f \subset {\bf C}^2$ is defined as in Section 2. Let $\Lambda, \Lambda^{\ast} \subset {\bf R}^2$ be the cones defined by \begin{eqnarray*} \Lambda \!\!\!&=&\!\!\! \{(x,y); \mbox{$(tx,ty) \in \omega_f$ for any $t>0$} \},\\ \Lambda^{\ast} \!\!\!&=&\!\!\! \{(\zeta_1,\zeta_2); \mbox{$x\zeta_1+y\zeta_2>0$ for any $(x,y) \in \Lambda$} \}, \end{eqnarray*} respectively. We call $\Lambda^{\ast}$ the dual cone of $\omega_f$. Actually $\Lambda^{\ast}$ can be computed explicitly: $$ \Lambda^{\ast}= \{(\zeta_1,\zeta_2); -R^- \zeta_2 < \zeta_1 < R^+ \zeta_2 \}, $$ where $ (R^{\pm})^{-1}=\lim_{x\to \mp\infty} f(x)|x|^{-1}>0, $ respectively. We allow that $R^{\pm}=\infty$. If $\lim_{|x|\to\infty} f(x)|x|^{-1-\varepsilon}>0$ with some $\varepsilon>0$, then $R^{\pm}=\infty$, i.e. $\Lambda^{\ast}=\{(\zeta_1,\zeta_2);\zeta_2>0\}$. The Bergman kernel of $\Omega$ is expressed in the following. Set $(x,y)=({\rm Im} z_1,{\rm Im} z_2)$. \begin{equation} K(z)=\frac{1}{(4\pi)^2} \int\!\!\!\int_{\Lambda^{\ast}} e^{-x\zeta_1-y\zeta_2} \frac{\zeta_2}{D(\zeta_1,\zeta_2)} d\zeta_1d\zeta_2, \label{eqn:3.3} \end{equation} where \begin{equation} D(\zeta_1,\zeta_2) =\int_{-\infty}^{\infty} e^{-\xi\zeta_1-f(\xi)\zeta_2} d\xi. 
\label{eqn:3.4} \end{equation} The above representation can be obtained by a slight generalization of the argument of Kor\'anyi \cite{kor}, so we omit the proof. \section{Analysis on a model domain} Let $\omega_0 \subset {\bf R}^2$ be the domain defined by $\omega_0=\{(x,y); y>g x^{2m}\}$, where $m =2,3,\ldots$ and $g>0$. Set $\Omega_0={\bf R}^2+i \omega_0$. F. Haslinger \cite{has2} computes the asymptotic expansion of the Bergman kernel of $\Omega_0$ (not only on the diagonal but also off the diagonal). In his result only an expansion of Fefferman's form appears. In this paper, we consider $\Omega_0$ as a model domain for the study of the singularity of the Bergman kernel for more general domains. The following proposition shows the reason why we take $\Omega_0$ as a model domain. Set $(x,y)=({\rm Im}z_1, {\rm Im}z_2)$. \begin{proposition} The Bergman kernel $K$ of $\Omega_0$ has the form: \begin{equation} K(z)=\frac{\Phi(\tau)}{\varrho^{2+\frac{1}{m}}}, \label{eqn:4.1} \end{equation} where $\tau= \chi(1-g x^{2m} y^{-1})$, $\varrho=y$ (see (\ref{eqn:2.2})) and $$ \Phi(\tau)=\frac{\varphi(\tau)}{\tau^3} + \psi(\tau)\log \tau, $$ with $\varphi, \psi \in C^{\infty}([0,1])$ and $\varphi$ positive on $[0,1]$. \end{proposition} {\it Proof}. \,\,\, Normalizing the integral representation (\ref{eqn:3.3}) and introducing the variables $t= g^{\frac{1}{2m}} x y^{-\frac{1}{2m}}$ and $\varrho=y$, we have (\ref{eqn:4.1}) where \begin{eqnarray} &&\Phi(\tau)= \frac{2m}{(4\pi)^2} g^{\frac{1}{m}} \int_0^{\infty} e^{-s^{2m}} L(ts) s^{4m+1} ds, \label{eqn:4.3}\\ &&L(u)=\int_{-\infty}^{\infty} e^{uv}\frac{1}{\phi(v)}dv,\nonumber\\ &&\phi(v)=\int_{-\infty}^{\infty} e^{-w^{2m}+vw}dw. \nonumber \end{eqnarray} It turns out from (\ref{eqn:4.3}) and the definition of $\tau$ that $\Phi \in C^{\infty}((0,1])$. Now let $\hat{\Phi}$ be defined by \begin{equation} \hat{\Phi}(\tau) =\frac{2m}{(4\pi)^2} g^{\frac{1}{m}} \int_1^{\infty} e^{-s^{2m}} L(ts)s^{4m+1} ds. \label{eqn:4.5} \end{equation} If we admit Lemma 6.2 in Subsection 6.4 below, we have \begin{equation} L(u)=u^{2m-2} e^{u^{2m}} \tilde{L}(u), \label{eqn:4.6} \end{equation} where $\tilde{L}(u)\sim \sum_{j=0}^{\infty}c_j u^{-2mj}$ as $u\to\infty$. Substituting (\ref{eqn:4.6}) into (\ref{eqn:4.5}), we have \begin{eqnarray} \hat{\Phi}(\tau) \!\!\!&=&\!\!\! \frac{2m}{(4\pi)^2} g^{\frac{1}{m}} t^{2m-2} \int_1^{\infty} e^{-[1-t^{2m}]s^{2m}} \tilde{L}(ts)s^{6m-1} ds \nonumber\\ \!\!\!&=&\!\!\! \frac{g^{\frac{1}{m}}}{(4\pi)^2} \int_1^{\infty} e^{-\chi^{-1}(\tau)\sigma} \hat{L}(\tau,\sigma)\sigma^2 d\sigma. \nonumber \end{eqnarray} Since $\hat{L}(\tau,\sigma)\sim \sum_{j=0}^{\infty}c_j(\tau) \sigma^{-j}$ as $\sigma \to \infty$ for $c_j \in C^{\infty}([0,1])$, we have $$ \hat{\Phi}(\tau)=\frac{\hat{\varphi}(\tau)}{\tau^3} + \hat{\psi}(\tau)\log \tau, $$ with $\hat{\varphi}, \hat{\psi} \in C^{\infty}([0,1])$ and $\hat{\varphi}$ positive on $[0,1]$. Finally, since the difference between $\Phi$ and $\hat{\Phi}$ is smooth on $[0,1]$, we obtain Proposition 4.1. \mbox{$\Box$ \ } \section{Localization lemma} In this section we prepare a lemma which is necessary for the proof of Theorem 2.1. This lemma shows that the singularity of the Bergman kernel for a certain class of domains is determined by the local information about the boundary. The method of the proof is similar to the case of a certain class of Reinhardt domains (\cite{boc},\cite{nak}). Throughout this section, $j$ stands for $1$ or $2$.
Let $f_1$, $f_2 \in C^{\infty}({\bf R})$ be functions such that $f_j(0)=f'_j(0)=0$, $f_j''\geq 0$ on ${\bf R}$ and $f_1(x)=f_2(x)$ on $|x|<\delta$. Let $\omega_j \subset {\bf R}^2$ be a domain defined by $\omega_j =\{(x,y): y>f_j(x)\}$. Set $\Omega_j={\bf R}^2+i \omega_j \subset {\bf C}^2$. \begin{lemma} Let $K_j$ be the Bergman kernels of $\Omega_j$ for $j=1,2$, respectively. Then we have $$ K_1(z)- K_2(z) \in C^{\omega}(U), $$ where $U$ is some neighborhood of $z^0$. \end{lemma} {\it Proof}. \,\,\, Let $\Lambda_j^{\ast}$ be the dual cone of $\omega_j$, i.e. $\Lambda_j^{\ast} =\{(\zeta_1,\zeta_2); -R_j^- \zeta_2 < \zeta_1 < R_j^+ \zeta_2 \}$, where $(R_j^{\pm})^{-1}=\lim_{x\to\mp} f(x)|x|^{-1}$, respectively (see \S3). Let $K_j[\Delta](x,y)$ be defined by $$ K_j[\Delta](x,y) =\frac{1}{(4\pi)^2}\int\!\!\!\int_{\Delta} e^{-y \zeta_2 -x \zeta_1} \frac{\zeta_2}{D_j(\zeta_1,\zeta_2)} d\zeta_1 d\zeta_2, $$ where $\Delta \subset {\bf R}^2$ and $D_j(\zeta_1,\zeta_2) =\int_{-\infty}^{\infty} e^{-\zeta_2 f_j(\xi) - \zeta_1\xi} d\xi$. Set $\Lambda_{\varepsilon} =\{ (\zeta_1,\zeta_2); |\zeta_1|< \varepsilon\zeta_2 \}$, where $\varepsilon>0$ is small. Now the following claims (i), (ii) imply Lemma 5.1. Set $O=(0,0)$. \begin{eqnarray*} &{\rm (i)}& K_j[\Lambda_j^{\ast}] \equiv K_j[\Lambda_{\varepsilon}] \quad \mbox{modulo $C^{\omega}(\{O\})$ \,\, for any $\varepsilon>0$}, \\ &{\rm (ii)}& K_1[\Lambda_{\varepsilon_0}] \equiv K_2[\Lambda_{\varepsilon_0}] \quad \mbox{modulo $C^{\omega}(\{O\})$ for some $\varepsilon_0>0$}. \end{eqnarray*} In fact if we substitute $(x,y)\!=\!({\rm Im}z_1,{\rm Im}z_2)$, then $K_1\!=\!K_1[\Lambda_1^{\ast}] \!\equiv\! K_1[\Lambda_{\varepsilon_0}] \!\equiv\! K_2[\Lambda_{\varepsilon_0}] \!\equiv\! K_2[\Lambda_2^{\ast}] \!=\! K_2$ modulo $C^{\omega}(\{z^0\})$. Let us show the above claims. (i)\,\,\, Set $\Lambda_{\varepsilon}^{\pm} =\{(\zeta_1,\zeta_2); 0 < \varepsilon\zeta_2 < \pm\zeta_1 < R_j^{\pm} \zeta_2 \}$, respectively. Since $K_j[\Lambda_j^{\ast}]-K_j[\Lambda_{\varepsilon}] =K_j[\Lambda_{\varepsilon}^+]+K_j[\Lambda_{\varepsilon}^-]$, it is sufficient to show $K_j[\Lambda_{\varepsilon}^{\pm}] \in C^{\omega}(\{O\})$. We only consider the case of $K_j[\Lambda_{\varepsilon}^{+}]$. Changing the integral variables, we have $$ K_j[\Lambda_{\varepsilon}^+](x,y) =\frac{1}{(4\pi)^2} \int_0^{\infty}\!\!\int_{\varepsilon}^{R_j^+} H_j(\zeta,\eta; x,y) d\zeta d\eta, $$ where \begin{eqnarray*} &&H_j(\zeta,\eta; x,y)\,(=H_j) = e^{-y\eta+x\eta\zeta} \frac{\eta^2}{E_j(\zeta,\eta)}, \\ &&E_j(\zeta,\eta)= \int_{-\infty}^{\infty} e^{-\eta[f_j(\xi)-\zeta\xi]}d\xi. \end{eqnarray*} It is an important remark that $K_j[\Lambda_{\varepsilon}^+]$ is real analytic on the region where $H_j$ is integrable on $\{(\zeta,\eta); \zeta> \varepsilon, \eta>0\}$. If we take $\delta_1>0$ such that $|f_j(\xi)\xi^{-1}|<\frac{1}{2}\varepsilon$ if $|\xi|<\delta_1$, then we have \begin{eqnarray} E_j(\zeta,\eta) &\geq& 2\int_0^{\delta_1} e^{\eta\xi[\zeta-f(\xi)\xi^{-1}]}d\xi \nonumber\\ &\geq& 2\int_0^{\delta_1} e^{\frac{1}{2}\varepsilon\eta\xi}d\xi \geq \frac{4}{\varepsilon\eta}[e^{\frac{1}{2}\delta_1\varepsilon\eta}-1] \label{eqn:5.4} \end{eqnarray} By (\ref{eqn:5.4}), we have $H_j(\zeta,\eta;x,y)\leq C \eta e^{-[y-x\zeta+\frac{1}{2}\delta_1\varepsilon]\eta}$ for $\eta\geq 1$. This inequality implies that if $x<0$ and $y>-\frac{1}{2}\delta_1\varepsilon$, then $H_j$ is integrable on $\{(\zeta,\eta); \zeta> \varepsilon, \eta>0\}$. 
Thus $K_j[\Lambda_{\varepsilon}^+]$ is real analytic on $\omega_j \cup \{(x,y); x<0, y>-\frac{1}{2}\delta_1\varepsilon\} =:\omega_j^+$. By regarding $x,y$ as two complex variables, $K_j[\Lambda_{\varepsilon}^+](x,y)$ is holomorphic on $\omega_j^+ +i{\bf R}^2$, so the shape of $\omega_j^+$ implies that $K_j[\Lambda_{\varepsilon}^+]$ can be extended holomorphically to a region containing some neighborhood of $\{O\}+i {\bf R}^2$. Consequently we have $K_j[\Lambda_{\varepsilon}^+]\in C^{\omega}(\{O\})$. (ii)\,\,\, Changing the integral variables, we have $$ K_1[\Lambda_{\varepsilon}](x,y)- K_2[\Lambda_{\varepsilon}](x,y) = \frac{1}{(4\pi)^2}\int_{0}^{\infty}\!\!\! \int_{-\varepsilon}^{\varepsilon} (H_1-H_2) d\zeta d\eta. $$ We remark that $K_1[\Lambda_{\varepsilon}]- K_2[\Lambda_{\varepsilon}]$ is real analytic at those points $(x,y)$ for which $H_1-H_2$ is integrable on $\{(\zeta,\eta); |\zeta|<\varepsilon, \eta>0\}$. To find a positive number $\varepsilon_0$ satisfying (ii), let us consider the integrability of \begin{equation} |H_1-H_2| =\eta^2 e^{-y\eta+x\eta\zeta} \frac{|E_2(\zeta,\eta)-E_1(\zeta,\eta)|} {|E_1(\zeta,\eta) \cdot E_2(\zeta,\eta)|}. \label{eqn:5.41} \end{equation} First we give an estimate of $|E_2(\zeta,\eta)-E_1(\zeta,\eta)|$. Let $\varepsilon_1 >0$ be such that $|f_j(\xi)\xi^{-1}|\geq \varepsilon_1$ for $|\xi| \geq \delta$. If $|\zeta|\leq \frac{1}{2}\varepsilon_1$, then \begin{equation} \int_{|\xi|\geq \delta} e^{-\eta\xi[f_j(\xi)\xi^{-1}-\zeta]} d\xi \leq 2 \int_{\delta}^{\infty} e^{-\frac{1}{2}\varepsilon_1\eta\xi} d\xi \,\,\, \leq \frac{4}{\varepsilon_1\eta} e^{-\frac{1}{2}\delta\varepsilon_1\eta}. \label{eqn:5.5} \end{equation} Therefore (\ref{eqn:5.5}) implies \begin{equation} |E_2(\zeta,\eta)-E_1(\zeta,\eta)| \leq \sum_{j=1}^2 \int_{|\xi|\geq \delta} e^{-\eta[f_j(\xi)-\zeta\xi]} d\xi \leq \frac{8}{\varepsilon_1 \eta} e^{-\frac{1}{2}\delta\varepsilon_1\eta}. \label{eqn:5.6} \end{equation} Second we give an estimate of $E_j(\zeta,\eta)$. By Taylor's formula, we can choose $\varepsilon_2>0$ satisfying the following. If $|\zeta|<\varepsilon_2$, then there is a function $\alpha_j(\zeta)=\alpha_j$ ($\alpha_j(0)=0$) such that $f_j'(\alpha_j)=\zeta$ and moreover there is a bounded function $R_j(\zeta,\xi)$ on $[-\varepsilon_2,\varepsilon_2] \times [-\xi_0,\xi_0]$, with some $\xi_0>0$, such that $F_j(\xi)$ $(:=f_j(\xi)-\zeta\xi)$ $=F_j(\alpha_j)+R_j(\zeta, \xi-\alpha_j)(\xi-\alpha_j)^2$ with $F_j(\alpha_j)\leq 0$. Then if $|\zeta|<\varepsilon_2$, we have \begin{eqnarray} E_j(\zeta,\eta) \!\!\!&=&\!\!\! \int_{-\infty}^{\infty} e^{-\eta F_j(\xi)} d\xi \geq e^{-\eta F_j(\alpha_j)} \int_{-\xi_0}^{\xi_0} e^{-\eta R_j(\zeta,\xi)\xi^2} d\xi \nonumber\\ \!\!\!&\geq&\!\!\! \frac{C}{\sqrt{\eta}} e^{-\eta F_j(\alpha_j)} \geq \frac{C}{\sqrt{\eta}}. \label{eqn:5.8} \end{eqnarray} Now we set $\varepsilon_0=\min\{\frac{1}{2}\varepsilon_1,\varepsilon_2\}$. Then by putting (\ref{eqn:5.41}),(\ref{eqn:5.6}),(\ref{eqn:5.8}) together, we have $$ |H_1-H_2| \leq C \eta^2 e^{-\eta[y-x\zeta+\frac{1}{2}\delta\varepsilon_1]} \quad \,\,\,\mbox{for $|\zeta|<\varepsilon_0, \eta>0$}. $$ This inequality implies that if $y-\varepsilon_0 |x|+\frac{1}{2}\delta\varepsilon_1>0$, then $|H_1-H_2|$ is integrable on $\{(\zeta,\eta);|\zeta|<\varepsilon_0, \eta>0\}$. Hence $K_1[\Lambda_{\varepsilon_0}]-K_2[\Lambda_{\varepsilon_0}]$ is real analytic on the region $\{(x,y); y-\varepsilon_0 |x|+\frac{1}{2}\delta\varepsilon_1 >0\}$, which contains $O$. This completes the proof of Lemma 5.1.
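{\it Example}. \,\,\, As a simple illustration of Lemma 5.1 (this particular instance is stated only for orientation and is not used later), take $f_1(x)= g x^{2m}$, so that $\Omega_1$ coincides with the model domain $\Omega_0$ of Section 4, and let $f_2$ be any function satisfying the above assumptions with $f_2=f_1$ on $|x|<\delta$. Then Proposition 4.1 and Lemma 5.1 give, on some neighborhood $U$ of $z^0$, $$ K_2(z)=\frac{\Phi(\tau)}{\varrho^{2+\frac{1}{m}}}+E(z), \qquad E \in C^{\omega}(U), $$ where $E$ simply denotes the real analytic difference $K_2-K_1$. In other words, only the shape of the boundary curve $y=f_2(x)$ near $x=0$ affects the singularity of the Bergman kernel at $z^0$.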
\mbox{$\Box$ \ } \section{Proof of Theorem 2.1} In this section we give a proof of Theorem 2.1. The definitions of $f$, $\omega_f$ and $\Omega_f$ are given as in Section 2. \subsection{Localization} From the previous section, it turns out that the singularity of the Bergman kernel of $\Omega_f$ at $z^0$ is determined by local information about $\partial\Omega_f$ near $z^0$. Thus we construct an appropriate domain whose boundary coincides with $\partial\Omega_f$ near $z^0$ for the computation below. We can easily construct a function $\tilde{g} \in C^{\infty}({\bf R})$ such that \begin{eqnarray} &\tilde{g}(x)=\left\{ \begin{array}{rl} g(x) & \quad \mbox{for $|x|\leq \delta$} \\ \frac{9}{10} g(0) & \quad \mbox{for $|x|\geq 1$} \end{array}\right. \quad \mbox{and} \label{eqn:6.1} \\ &0\leq -x\tilde{g}'(x),\, |x^2\tilde{g}''(x)| < \frac{1}{5}g(0) \quad \mbox{for $x\in {\bf R}$}, \label{eqn:6.2} \end{eqnarray} for some small positive constant $\delta < 1$. Note that $\frac{9}{10}g(0)\leq \tilde{g}(x)\leq g(0)$. Set $\tilde{f}(x)=x^{2m}\tilde{g}(x)$ and $\omega_{\tilde{f}}= \{ (x,y)\in {\bf R}^2; y>\tilde{f}(x) \}$. Let $\Omega_{\tilde{f}} \subset {\bf C}^2$ be the tube domain over $\omega_{\tilde{f}}$, i.e. $\Omega_{\tilde{f}}={\bf R}^2 + i \omega_{\tilde{f}}$. Here we remark that the boundary of $\Omega_{\tilde{f}}$ is strictly pseudoconvex off the set $\{(z_1,z_2);{\rm Im}z_1\!=\!{\rm Im} z_2\!=\!0\}$. In fact we can easily check that $\tilde{f}''(x)>0$ if $x\neq0$ by (\ref{eqn:6.1}),(\ref{eqn:6.2}). Let $\tilde{K}$ be the Bergman kernel of $\Omega_{\tilde{f}}$. By Lemma 5.1, in order to obtain Theorem 2.1, it is sufficient to consider the singularity of the Bergman kernel $\tilde{K}$ near $z^0$. \subsection{Two propositions and the proof of Theorem 2.1} The key to our analysis of the Bergman kernel is the integral representation in Section 3. Normalizing this representation, the Bergman kernel $\tilde{K}$ of $\Omega_{\tilde{f}}$ can be expressed as follows: $$ \tilde{K}(z)= \frac{2m}{(4\pi)^2}g(0)^{\frac{1}{m}} \int_0^{\infty} e^{-y u^{2m}} P(x,u) u^{4m+1} du, $$ with \begin{eqnarray*} P(x,u) \!\!\!&=&\!\!\! \int_{-\infty}^{\infty} e^{g(0)^{\frac{1}{2m}}xuv} \frac{1}{\phi(v,u^{-1})} dv,\\ \phi(v,X) \!\!\!&=&\!\!\! \int_{-\infty}^{\infty} e^{-\hat{g}(Xw)w^{2m} +vw}dw, \end{eqnarray*} where $\hat{g}(x)=\tilde{g}(x)/ g(0)$. In order to prove Theorem 2.1, it is sufficient to consider the following function $\bar{K}$ instead of $\tilde{K}$: \begin{equation} \bar{K}(z)= \frac{2m}{(4\pi)^2} g(0)^{\frac{1}{m}} \int_1^{\infty} e^{-y u^{2m}} P(x,u) u^{4m+1} du. \label{eqn:6.5} \end{equation} In fact the difference between $\tilde{K}$ and $\bar{K}$ is smooth. By introducing the variables $t_0=g(0)^{\frac{1}{2m}}x y^{\frac{-1}{2m}}$, $\xi=y^{\frac{1}{2m}}$ into the integral representation (\ref{eqn:6.5}), we have \begin{equation} \bar{K}(z)= \frac{2m}{(4\pi)^2}\xi^{-4m-2}g(0)^{\frac{1}{m}} \int_{\xi}^{\infty} e^{-s^{2m}}L(t_0,\xi;s)s^{4m+1}ds, \label{eqn:6.051} \end{equation} where \begin{equation} L(t_0,\xi;s)=\int_{-\infty}^{\infty}e^{t_0 sv} \frac{1}{\phi(v, \xi s^{-1})} dv. \label{eqn:6.052} \end{equation} We divide the integral in (\ref{eqn:6.051}) into two parts: \begin{equation} \bar{K}(z)=\frac{2m}{(4\pi)^2}g(0)^{\frac{1}{m}}\xi^{-4m-2} \{K^{\langle 1 \rangle}(z)+K^{\langle 2 \rangle}(z)\}, \label{eqn:6.053} \end{equation} where \begin{eqnarray} K^{\langle 1 \rangle}(z) \!\!\!&=&\!\!\! \int_{1}^{\infty} e^{-s^{2m}}L(t_0,\xi;s)s^{4m+1}ds, \label{eqn:6.6}\\ K^{\langle 2 \rangle}(z) \!\!\!&=&\!\!\!
\int_{\xi}^{1} e^{-s^{2m}}L(t_0,\xi;s)s^{4m+1}ds. \label{eqn:6.7} \end{eqnarray} Since the function $[\phi(v,X)]^{-1}$ is smooth function of $X$ on $[0,1]$, for any positive integer $\mu_0$ \begin{equation} \frac{1}{\phi(v,X)} =\sum_{\mu=0}^{\mu_0} a_{\mu}(v)X^{\mu} + r_{\mu_0}(v,X)X^{\mu_0+1}, \label{eqn:6.071} \end{equation} where \begin{eqnarray} a_{\mu}(v) \!\!\!& = &\!\!\! \left. \frac{1}{\mu!} \frac{\partial^{\mu}}{\partial X^{\mu}} \frac{1}{\phi(v,X)}\right|_{X=0}, \label{eqn:6.8}\\ r_{\mu_0}(v,X) \!\!\!& = &\!\!\! \left. \frac{1}{\mu_0!} \int_{0}^{1} (1-p)^{\mu_0} \frac{\partial^{\mu_0+1}}{\partial Y^{\mu_0+1}} \frac{1}{\phi(v,Y)}\right|_{Y=Xp} dp. \nonumber \end{eqnarray} Substituting (\ref{eqn:6.071}) into (\ref{eqn:6.052}), we have \begin{equation} L(t_0,\xi;s) =\sum_{\mu=0}^{\mu_0}L_{\mu}(t_0 s)\xi^{\mu} s^{-\mu} + \tilde{L}_{\mu_0}(t_0,\xi;s)\xi^{\mu_0+1}s^{-\mu_0-1}, \label{eqn:6.091} \end{equation} where \begin{eqnarray*} L_{\mu}(u) \!\!\!&=&\!\!\! \int_{-\infty}^{\infty} e^{uv}a_{\mu}(v) dv,\\ \tilde{L}_{\mu_0}(t_0,\xi;s) \!\!\!&=&\!\!\! \int_{-\infty}^{\infty} e^{t_0 sv}r_{\mu_0}(v,\xi s^{-1})dv. \end{eqnarray*} Moreover substituting (\ref{eqn:6.091}) into (\ref{eqn:6.6}),(\ref{eqn:6.7}), we have $$ K^{\langle j \rangle}(z)= \sum_{\mu=0}^{\mu_0} K_{\mu}^{\langle j \rangle}(\tau,\xi) \xi^{\mu} + \tilde{K}_{\mu_0}^{\langle j \rangle}(\tau,\xi) \xi^{\mu_0+1}, $$ for $j = 1,2$ where \begin{eqnarray*} K_{\mu}^{\langle 1 \rangle}(\tau,\xi) \!\!\!&=&\!\!\! \int_{1}^{\infty} e^{-s^{2m}}L_{\mu}(t_0 s)s^{4m+1-\mu}ds,\\ \tilde{K}_{\mu_0}^{\langle 1 \rangle}(\tau,\xi) \!\!\!&=&\!\!\! \int_{1}^{\infty} e^{-s^{2m}} \tilde{L}_{\mu_0}(t_0,\xi;s)s^{4m-\mu_0}ds,\\ K_{\mu}^{\langle 2 \rangle}(\tau,\xi) \!\!\!&=&\!\!\! \int_{\xi}^{1} e^{-s^{2m}}L_{\mu}(t_0 s)s^{4m+1-\mu}ds,\\ \tilde{K}_{\mu_0}^{\langle 2 \rangle}(\tau,\xi) \!\!\!&=&\!\!\! \int_{\xi}^{1} e^{-s^{2m}} \tilde{L}_{\mu_0}(t_0,\xi;s)s^{4m-\mu_0}ds. \end{eqnarray*} The following two propositions are concerned with the singularities of the above functions. Their proofs are given in Subsections 6.5, 6.6. \begin{proposition} $(i)$\,\,For any nonnegative integer $k_0$, $K_{\mu}^{\langle 1 \rangle}$ is expressed in the form: $$ K_{\mu}^{\langle 1 \rangle}(\tau,\xi) = \sum_{k=0}^{k_0} c_{\mu,k}(\tau)\xi^k + \tilde{K}_{\mu,k_0}^{\langle 1 \rangle}(\tau,\xi)\xi^{k_0+1}, $$ where $$ c_{\mu,k}(\tau) = \frac{\varphi_{\mu,k}(\tau)} {\tau^{3+\mu+k}} +\psi_{\mu,k}(\tau)\log \tau, $$ for $\varphi_{\mu,k},\psi_{\mu,k} \in C^{\infty}([0,1])$ and $\tilde{K}_{\mu,k_0}^{\langle 1 \rangle}$ satisfies $|\tilde{K}_{\mu,k_0}^{\langle 1 \rangle}(\tau,\xi)|< C_{\mu,k_0}[\tau-\alpha\xi]^{-4-\mu-k_0}$ for some positive constants $C_{\mu,k_0}$ and $\alpha$. $(ii)$\,\,$\tilde{K}_{\mu_0}^{\langle 1 \rangle}$ satisfies $ |\tilde{K}_{\mu_0}^{\langle 1 \rangle}(\tau,\xi)| < C_{\mu_0}[\tau-\alpha\xi]^{-4-\mu_0} $ for some positive constants $C_{\mu_0}$ and $\alpha$. \end{proposition} \begin{proposition} $(i)$\,\,$(a)$\,\,For $0 \leq \mu \leq 4m+1$, $ K_{\mu}^{\langle 2 \rangle} \in C^{\infty}([0,1] \times [0,\varepsilon)). $ $(b)$\,\, For $\mu \geq 4m+2$, $K_{\mu}^{\langle 2 \rangle}$ can be expressed in the form: $$ K_{\mu}^{\langle 2 \rangle}(\tau,\xi) \xi^{-4m-2+\mu} =H_{\mu}(\tau, \xi) \xi^{-4m-2+\mu} \log \xi +\tilde{H}_{\mu}(\tau,\xi), $$ where $H_{\mu},\tilde{H}_{\mu} \in C^{\infty} ([0,1] \times [0,\varepsilon))$. 
$(ii)$\,\, For any positive integer $r$, there is a positive integer $\mu_0$ such that $$ \tilde{K}_{\mu_0}^{\langle 2 \rangle}(\tau,\xi) \xi^{-4m-1+\mu_0} \in C^{r}([0,1] \times [0,\varepsilon)). $$ \end{proposition} First by Proposition 6.1, $K^{\langle 1 \rangle}$ can be expressed in the form: \begin{equation} K^{\langle 1 \rangle}(z) = \sum_{\mu=0}^{\mu_0} c_{\mu}(\tau)\xi^{\mu} +R_{\mu_0}(\tau,\xi) \xi^{\mu_0+1}, \label{eqn:6.200} \end{equation} where $c_{\mu}$'s are expressed as in (\ref{eqn:2.5}) in Theorem 2.1 and $R_{\mu_0}$ satisfies $|R_{\mu_0}|<C_{\mu_0} [\tau-\alpha\xi]^{-4-\mu_0}$. Next by Proposition 6.2, $K^{\langle 2 \rangle}(z)$ can be expressed in the form: for any positive integer $r$, \begin{equation} \xi^{-4m-2} K^{\langle 2 \rangle}(z) =H(\tau,\xi) \log \xi + \tilde{H}(\tau,\xi), \label{eqn:6.210} \end{equation} where $H\in C^{\infty}([0,1] \times [0,\varepsilon))$ and $\tilde{H} \in C^{r}([0,1] \times [0,\varepsilon))$. Hence putting (\ref{eqn:6.053}),(\ref{eqn:6.200}),(\ref{eqn:6.210}) together, we can obtain Theorem 2.1. Note that $K(z)$ is an even function of $\xi$. \mbox{$\Box$ \ } \subsection{Asymptotic expansion of $a_{\mu}$} By a direct computation in (\ref{eqn:6.8}), $a_{\mu}(v)$ can be expressed in the following form: \begin{equation} a_{\mu}(v) = \sum_{|\alpha|=\mu} C_{\alpha} \frac{ \phi^{[\alpha_1]}(v)\cdots \phi^{[\alpha_{\mu}]}(v) }{\phi(v)^{\mu+1}}, \label{eqn:6.16} \end{equation} where $\phi^{[k]}(v)= \left.\frac{\partial^k}{\partial X^k}\phi(v,X)\right|_{X=0}$, $C_{\alpha}$'s are constants depending on $\alpha=(\alpha_1,\ldots,\alpha_{\mu})\in {\bf Z}_{\geq 0}^{\mu}$ and $|\alpha|=\sum_{j=1}^{\mu}\alpha_j$. Since $$ \frac{\partial^k}{\partial X^k}\phi(v,X) =\int_{-\infty}^{\infty} \left\{ w^k\sum_{j=1}^k c_{kj}(Xw)w^{2mj} \right\} e^{-\hat{g}(Xw)w^{2m} +vw}dw, $$ for $k \geq 1$ where $c_{kj} \in C^{\infty}({\bf R})$, we have \begin{equation} \phi^{[k]}(v)= \left.\frac{\partial^k}{\partial X^k}\phi(v,X)\right|_{X=0} =\sum_{j=1}^k c_{kj}(0)\phi_{2mj+k}(v), \label{eqn:6.161} \end{equation} for $k \geq 1$ where $$\phi_{l}(v)=\int_{-\infty}^{\infty} w^{l} e^{-w^{2m} +vw}dw. $$ Here the following lemma is concerned with the asymptotic expansion of $\phi_{l}$ at infinity. \begin{lemma} Set $a=[(2m)^{\frac{-1}{2m-1}}-(2m)^{\frac{-2m}{2m-1}}]>0$. Then we have $$ \phi_l(v) \sim v^{\frac{1-m+l}{2m-1}}\cdot \exp\{a v^{\frac{2m}{2m-1}}\}\cdot \sum_{j=0}^{\infty}c_j v^{-\frac{2m}{2m-1}j} \,\,\,\,\,\,\,{\rm as}\,\,v \to +\infty\,\,\,\, {\rm for}\,\,l \geq 0. $$ \end{lemma} The proof of the above lemma will be given soon later. Lemma 6.1 and (\ref{eqn:6.161}) imply $$ \frac{\phi^{[k]}(v)} {\phi(v)} \sim v^{\frac{(2m+1)k}{2m-1}} \cdot \sum_{j=0}^{\infty}c_j v^{-\frac{2m}{2m-1}j} \,\,\,\,\,\,\,{\rm as}\,\,v\to\infty\,\,\,\,{\rm for}\,\,k \geq 1. $$ Moreover, we have \begin{equation} \frac{\phi^{[\alpha_1]}(v)\cdots \phi^{[\alpha_{\mu}]}(v)} {\phi(v)^{\mu}} \sim v^{\frac{(2m+1)\mu}{2m-1}} \cdot \sum_{j=0}^{\infty}c_j v^{-\frac{2m}{2m-1}j} \,\,\,\,\,\,\,{\rm as}\,\,v\to\infty. \label{eqn:6.162} \end{equation} Therefore (\ref{eqn:6.16}),(\ref{eqn:6.162}) and Lemma 6.1 imply $$ a_{\mu}(v) \sim v^{\frac{m-1+(2m+1)\mu}{2m-1}} \cdot \exp\{-a v^{\frac{2m}{2m-1}}\} \cdot\sum_{j=0}^{\infty}c_j v^{-\frac{2m}{2m-1}j} \,\,\,\,\,\,\,{\rm as}\,\,v\to\infty. 
$$ {\it Proof of Lemma 6.1}.\,\,\, Changing the integral variable, we have \begin{equation} \phi_l(v)=v^{\frac{l+1}{2m-1}} \int_{-\infty}^{\infty} t^l e^{-\tilde{v}p(t)}dt, \label{eqn:6.163} \end{equation} where $\tilde{v}=v^{\frac{2m}{2m-1}}$ and $p(t)=t^{2m}-t$. We divide (\ref{eqn:6.163}) into two parts: \begin{equation} \phi_l(v)=v^{\frac{l+1}{2m-1}} \{I_1(\tilde{v})+I_2(\tilde{v})\}, \label{eqn:6.17} \end{equation} with $$ I_1(\tilde{v}) = \int_{|t-\alpha|\leq\delta} t^l e^{-\tilde{v}p(t)}dt \,\,\, \mbox{and} \,\,\, I_2(\tilde{v}) = \int_{|t-\alpha|>\delta} t^l e^{-\tilde{v}p(t)}dt, $$ where $\delta>0$ is small and $\alpha=(2m)^{\frac{-1}{2m-1}}$. Note that $p'(\alpha)=0$. First we consider the function $I_1$. By Taylor's formula, we have \begin{eqnarray*} I_1(\tilde{v}) \!\!\!&=&\!\!\! \int_{|t-\alpha|<\delta} t^l \exp\{ -\tilde{v}[p(\alpha)+\tilde{p}(t-\alpha)(t-\alpha)^2] \}dt \\ \!\!\!&=&\!\!\! e^{a\tilde{v}}\int_{|t|\leq\delta} (t+\alpha)^l e^{-\tilde{v}\tilde{p}(t)t^2}dt, \end{eqnarray*} where $a=-p(\alpha) =[(2m)^{\frac{-1}{2m-1}}-(2m)^{\frac{-2m}{2m-1}}]>0$ and $\tilde{p}(t)=\int_0^1 (1-u)p''(ut+\alpha) du$. Set $s=\tilde{p}(t)^{\frac{1}{2}}t$ for $|t|\leq\delta$ and $\delta_{\pm}=\tilde{p}(\pm \delta)^{\frac{1}{2}}\delta$, respectively. Then there is a function $\hat{p}\in C^{\infty}([-\delta_-,\delta_+])$ such that $t=\hat{p}(s)$ and $\hat{p}'>0$. Changing the integral variable, we have $$ I_1(\tilde{v})=\int_{-\delta_-}^{\delta_+} e^{-\tilde{v}s^2} \Psi(s) ds, $$ where $\Psi(s)=(\hat{p}(s)+\alpha)^l \cdot \hat{p}'(s)$. Since $\Psi\in C^{\infty}([-\delta_-,\delta_+])$, we have \begin{eqnarray} I_1(v) \!\!\!&=&\!\!\! \tilde{v}^{-\frac{1}{2}} \int_{-\delta_- \tilde{v}^{\frac{1}{2}}} ^{\delta_+ \tilde{v}^{\frac{1}{2}}} e^{-u^2} \Psi(u \tilde{v}^{-\frac{1}{2}})du \nonumber\\ \!\!\!&\sim&\!\!\! \tilde{v}^{-\frac{1}{2}} \sum_{j=0}^{\infty} c_j \tilde{v}^{-j} \quad \mbox{as $\tilde{v}\to\infty$.} \label{eqn:6.22} \end{eqnarray} We remark that $\int_{-\infty}^{\infty}e^{-u^2}u^j du=0$ if $j \in {\bf Z}$ is odd. Next we consider the function $I_2$. Let $p_d$ be the function defined by $ p_d(t)= d|t-\alpha|-a $ where $d>0$. We can choose $d>0$ such that $p(t) \geq p_d(t)$ for $|t-\alpha|>\delta$. Then we have \begin{eqnarray} |I_2(\tilde{v})| &\leq& \int_{|t-\alpha|>\delta} e^{-\tilde{v}p_d(t)} dt \nonumber\\ &\leq& 2 C e^{a\tilde{v}} \int_{\delta}^{\infty} e^{-d\tilde{v}t} dt \leq 2C\tilde{v}^{-1} e^{[a-d\delta]\tilde{v}}. \label{eqn:6.23} \end{eqnarray} Finally putting (\ref{eqn:6.17}),(\ref{eqn:6.22}),(\ref{eqn:6.23}) together, we have the asymptotic expansion in Lemma 6.1. \mbox{$\Box$ \ } \subsection{Asymptotic expansion of $L_{\mu}$} Let $A \in C^{\infty}({\bf R})$ be an even or odd function (i.e. $A(-v)=A(v)$ or $-A(v)$) satisfying $$ A(v)\sim v^{\frac{n}{2m-1}}\cdot\exp\{-av^{\frac{2m}{2m-1}}\} \cdot\sum_{j=0}^{\infty}c_j v^{-\frac{2m}{2m-1}j} \quad \mbox{as $v\to +\infty$,} $$ where $n\in {\bf N}$ and the constant $a$ is as in Lemma 6.1. Let $L\in C^{\omega}({\bf R})$ be the function defined by \begin{equation} L(u)=\int_{-\infty}^{\infty} e^{uv} A(v) dv. \label{eqn:6.230} \end{equation} In this section we give the asymptotic expansion of $L$ at infinity. \begin{lemma} \quad\, $ L(u) \sim u^{m-1+n} \cdot e^{u^{2m}}\cdot \sum_{j=0}^{\infty} c_j u^{-2mj} \,\,\,\,\,\,\,{\rm as}\,\, u \to +\infty. 
$ \end{lemma} {\it Remark}.\,\,\, Lemmas 6.1, 6.2 imply that for $\mu,l \geq 0$, \begin{eqnarray} &&L_{\mu}^{(l)}(u) = \frac{d^l}{du^l}L_{\mu}(u) \nonumber\\ &&\quad\sim u^{2m-2+(2m+1)\mu+(2m-1)l} \cdot e^{u^{2m}}\cdot \sum_{j=0}^{\infty} c_j u^{-2mj} \quad \mbox{ as $u \to \infty$.} \label{eqn:6.24} \end{eqnarray} {\it Proof}. \,\,\, We only show Lemma 6.2 in the case where $A$ is an even function. Let $\hat{A}\in C^{\infty}({\bf R}\setminus\{0\})$ be defined by \begin{equation} A(v)=v^{\frac{n}{2m-1}}\cdot\exp\{-a v^{\frac{2m}{2m-1}}\} \cdot \hat{A}(v^{\frac{2m}{2m-1}}), \label{eqn:6.241} \end{equation} then $\hat{A}(x)\sim\sum_{j=0}^{\infty}c_j x^{-j}$ as $x\to\infty$. Substituting (\ref{eqn:6.241}) into (\ref{eqn:6.230}), we have $$ L(u)=\int_{-\infty}^{\infty} \exp\{-a|v|^{\frac{2m}{2m-1}} +uv\} |v|^{\frac{n}{2m-1}} \hat{A}(|v|^{\frac{2m}{2m-1}}) dv. $$ Changing the integral variable and setting $q(t)=at^{2m}-t^{2m-1}$, we have \begin{equation} L(u)= (2m-1)u^{2m+n-1} \int_{-\infty}^{\infty} e^{-u^{2m}q(t)} \hat{A}(t^{2m}u^{2m}) t^{2m+n-2}dt. \label{eqn:6.240} \end{equation} Now we divide the integral in (\ref{eqn:6.240}) into two parts: \begin{equation} L(u)=(2m-1)u^{2m+n-1}\{ J_1(\tilde{u})+J_2(\tilde{u}) \}, \label{eqn:6.25} \end{equation} with \begin{eqnarray} J_1(\tilde{u})\!\!\!&=&\!\!\! \int_{|t-\beta|\leq\delta} e^{-\tilde{u}q(t)} \hat{A}(\tilde{u}t^{2m})t^{2m+n-2}dt, \nonumber\\ J_2(\tilde{u})\!\!\!&=&\!\!\! \int_{|t-\beta|>\delta} e^{-\tilde{u}q(t)} \hat{A}(\tilde{u}t^{2m})t^{2m+n-2}dt, \nonumber \end{eqnarray} where $\tilde{u}=u^{2m}$, $\delta>0$ is small and $\beta=(2m\!-\!1)\cdot(2m a)^{-1}$. Note that $q'(\beta)=0$. First we consider the function $J_1$. By Taylor's formula, we have $$ J_1(\tilde{u})=e^{\tilde{u}}\int_{|t|\leq\delta} e^{-\tilde{u}\tilde{q}(t)t^2} \hat{A}(\tilde{u}(t+\beta)^{2m})(t+\beta)^{2m+n-2}dt, $$ where $\tilde{q}(t)=\int_0^1(1-v)q''(vt+\beta)dv$. Note that $q(\beta)=-1$. Set $s=\tilde{q}(t)^{\frac{1}{2}}t$ for $|t|\leq\delta$ and $\tilde{\delta}_{\pm}=\tilde{q}(\pm\delta)^{\frac{1}{2}}\delta$, respectively. Then there is a function $\hat{q}\in C^{\infty}([-\tilde{\delta}_-,\tilde{\delta}_+])$ such that $t=\hat{q}(s)$ and $\hat{q}'>0$. Changing the integral variable, we have \begin{equation} J_1(\tilde{u})=\int_{-\tilde{\delta}_-}^{\tilde{\delta}_+} e^{-\tilde{u}s^2} \tilde{\Psi}(s,\tilde{u}) ds, \label{eqn:6.280} \end{equation} where $ \tilde{\Psi}(s,\tilde{u})=\hat{A}(\tilde{u}(\hat{q}(s)+\beta)^{2m}) (\hat{q}(s)+\beta)^{2m+n-2} \hat{q}'(s). $ Since $\hat{A}(x)\sim \sum_{j=0}^{\infty}c_j x^{-j}$ as $x\to \infty$, we have \begin{equation} \tilde{\Psi}(s,\tilde{u})\sim \sum_{j=0}^{\infty}c_j(s)\tilde{u}^{-j}\quad \mbox{as $\tilde{u}\to\infty$}, \label{eqn:6.29} \end{equation} where $c_j \in C^{\infty}([-\tilde{\delta}_-,\tilde{\delta}_+])$. Substituting (\ref{eqn:6.29}) into (\ref{eqn:6.280}), we have \begin{equation} J_1(\tilde{u})\sim \tilde{u}^{-\frac{1}{2}} e^{\tilde{u}}\sum_{j=0}^{\infty} c_j \tilde{u}^{-j} \quad \mbox{as $\tilde{u}\to\infty$}. \label{eqn:6.30} \end{equation} Next we consider the function $J_2$. By an argument similar to the estimate of $I_2(\tilde{v})$ in the proof of Lemma 6.1, we can obtain \begin{equation} |J_2(\tilde{u})|\leq C \tilde{u}^{-1} e^{[1-\varepsilon]\tilde{u}}, \label{eqn:6.31} \end{equation} where $\varepsilon$ is a positive constant. Finally putting (\ref{eqn:6.25}),(\ref{eqn:6.30}),(\ref{eqn:6.31}) together, we obtain the asymptotic expansion in Lemma 6.2.
\mbox{$\Box$ \ } \subsection{Proof of Proposition 6.1} We can construct the function $h\in C^{\infty}([0,\infty))$ such that if $Y=\tilde{f}(X)^{\frac{1}{2m}}$, then $X=Yh(Y)$. In fact $\frac{d}{dX}[\tilde{f}(X)^{\frac{1}{2m}}]>0$ for $X \geq 0$. Set $t=\tilde{f}(x)^{\frac{1}{2m}}\xi^{-1}$. Then we can write $t_0=th(t \xi)$. Note that $\hat{g}(X)^{\frac{1}{2m}} \cdot h(X)=1$ for $X, Y \geq 0$ and hence $h'(Y) \geq 0$ for $Y \geq 0$. Let us prepare two lemmas for the proof of Propositions 6.1, 6.2. \begin{lemma} \,\, Assume that the functions $a,b$ and $c$ on $[0,\varepsilon) \times [0,1]$ satisfy $a(\xi,t_0)=b(\xi,t)=c(\xi,\tau)$. If one of these functions belongs to $C^{\infty}([0,\varepsilon)\times [0,1])$, then so do the others. \end{lemma} {\it Proof}.\,\,\, This lemma is directly shown by the relation between the three variables $t_0,t$ and $\tau$. \mbox{$\Box$ \ } \begin{lemma} There is a positive number $\alpha$ such that $1-t_0^{2m} \geq \tau-\alpha\xi$. \end{lemma} We remark that the above constant $\alpha$ is the same as that in Proposition 6.1. {\it Proof}.\,\,\, By definition, we have $$ 1-t_0^{2m}=1-t^{2m}h(t \xi)^{2m}=(1-t^{2m})-(h(t\xi)^{2m}-1)t^{2m}.$$ Since $h(0)=1$ and $h'(X)\geq 0$, we have $ h(t\xi)^{2m}-1\leq \alpha t\xi $ for some positive number $\alpha$. Therefore we have $$ 1-t_0^{2m}\geq \tau-\alpha t^{2m+1}\xi \geq \tau-\alpha\xi. $$ \mbox{$\Box$ \ } {\it Proof of Proposition 6.1.} \,(i) \, Recall the definition of the function $K_{\mu}^{\langle 1 \rangle}$: \begin{equation} K_{\mu}^{\langle 1 \rangle}(\tau,\xi) = \int_{1}^{\infty} e^{-s^{2m}}L_{\mu}(t_0 s)s^{4m+1-\mu}ds, \label{eqn:6.32} \end{equation} where $$ L_{\mu}(u) = \int_{-\infty}^{\infty} e^{uv}a_{\mu}(v) dv \quad {\rm and} \quad a_{\mu}(v) = \left. \frac{1}{\mu!} \frac{\partial^{\mu}}{\partial X^{\mu}} \frac{1}{\phi(v,X)}\right|_{X=0}. $$ We obtain the Taylor expansion of $L_{\mu}(t_0 s)=L_{\mu}(ts h(t\xi))$ with respect to $\xi$: \begin{equation} L_{\mu}(t_0 s)= \sum_{k=0}^{k_0} L_{\mu,k}(t;s) \xi^k +\tilde{L}_{\mu,k_0}(t,\xi;s)\xi^{k_0+1}, \label{eqn:6.330} \end{equation} where \begin{eqnarray} L_{\mu,k}(t;s) \!\!\!&=&\!\!\! \frac{1}{k!} \left.\frac{\partial^k}{\partial \xi^k} L_{\mu}(ts h(t \xi))\right|_{\xi=0}, \label{eqn:6.34}\\ \tilde{L}_{\mu,k_0}(t,\xi;s) \!\!\!&=&\!\!\! \frac{1}{k_0!} \int_0^1 (1-p)^{k_0} \left. \frac{\partial^{k_0+1}}{\partial X^{k_0+1}} L_{\mu}(ts h(t X))\right|_{X=\xi p} dp. \label{eqn:6.340} \end{eqnarray} Substituting (\ref{eqn:6.330}) into (\ref{eqn:6.32}), we have $$ K_{\mu}^{\langle 1 \rangle}(\tau,\xi) = \sum_{k=0}^{k_0} K_{\mu,k}(t) \xi^{k} +\tilde{K}_{\mu,k_0}(t,\xi)\xi^{k_0+1}, $$ where \begin{eqnarray} K_{\mu,k}(t) \!\!\!&=&\!\!\! \int_1^{\infty} e^{-s^{2m}} L_{\mu,k}(t;s)s^{4m+1-\mu}ds \label{eqn:6.37} \\ \tilde{K}_{\mu,k_0}(t,\xi) \!\!\!&=&\!\!\! \int_{1}^{\infty} e^{-s^{2m}}\tilde{L}_{\mu,k_0} (t,\xi;s)s^{4m+1-\mu}ds. \label{eqn:6.38} \end{eqnarray} First we consider the singularity of $K_{\mu,k}$ at $t=1$. By a direct computation in (\ref{eqn:6.34}), we have $$ L_{\mu,k}(t;s)= \sum_{l=1}^{k} h_{l}(t) s^l L_{\mu}^{(l)}(ts), $$ where $h_{l} \in C^{\infty}([0,1])$ (which depends on $\mu,k$). We define the function $S_{\mu,k}(t;s^{2m})$ by \begin{equation} L_{\mu,k}(t;s)= s^{2m-2+(2m+1)\mu +2mk}\cdot e^{t^{2m}s^{2m}}\cdot S_{\mu,k}(t;s^{2m}). \label{eqn:6.380} \end{equation} Lemma 6.2 implies \begin{equation} S_{\mu,k}(t;s^{2m}) \sim \sum_{j=0}^{\infty} c_j(t) s^{-2mj} \,\,\,\,\,\,{\rm as}\,\,\,s \to \infty, \label{eqn:6.381} \end{equation} where $c_j \in C^{\infty}([0,1])$.
Substituting (\ref{eqn:6.380}) into (\ref{eqn:6.37}), we have \begin{eqnarray} K_{\mu,k}(t) \!\!\!&=&\!\!\! \int_{1}^{\infty} e^{-[1-t^{2m}]s^{2m}} S_{\mu,k}(t;s^{2m})s^{6m-1+2m\mu+2mk} ds \nonumber\\ \!\!\!&=&\!\!\! \frac{1}{2m} \int_{1}^{\infty} e^{-\chi^{-1}(\tau) \sigma} S_{\mu,k}(t;\sigma)\sigma^{2+\mu+k} d\sigma. \label{eqn:6.40} \end{eqnarray} Moreover substituting (\ref{eqn:6.381}) into (\ref{eqn:6.40}), we have $$ K_{\mu,k}(t)=\frac{\varphi_{\mu,k}(\tau)}{\tau^{3+\mu+k}} +\psi_{\mu,k}(\tau) \log \tau, $$ where $\varphi_{\mu,k},\psi_{\mu,k} \in C^{\infty}([0,1])$. Next we obtain the inequality $|\tilde{K}_{\mu,k_0}(t,\xi)| \leq C_{\mu,k_0}[\tau-\alpha\xi]^{-4-\mu-k_0}$ for some positive constant $C_{\mu,k_0}$. By a direct computation in (\ref{eqn:6.340}), we have $$ \frac{\partial^{k_0+1}}{\partial X^{k_0+1}} L_{\mu}(tsh(tX))= \sum_{l=1}^{k_0+1} \tilde{h}_l (t,X) s^l L_{\mu}^{(l)}(tsh(tX)), $$ where $\tilde{h}_l$ are bounded functions (depending on $\mu,k_0$). Since $h'(X) \geq0$, we can obtain \begin{eqnarray} |\tilde{L}_{\mu,k_0}(t,\xi;s)| \!\!\!&\leq&\!\!\! C \sum_{l=1}^{k_0+1} s^{l}L_{\mu}^{(l)}(tsh(ts))\nonumber\\ \!\!\!&\leq&\!\!\! C s^{k_0+1} L_{\mu}^{(k_0+1)}(t_0 s)\nonumber\\ \!\!\!&\leq&\!\!\! C s^{2m-2+(2m+1)\mu+2m(k_0+1)}\cdot e^{t_0^{2m}s^{2m}} \label{eqn:6.50} \end{eqnarray} for $s \geq 1$. Substituting (\ref{eqn:6.50}) into (\ref{eqn:6.38}), we obtain \begin{eqnarray*} \tilde{K}_{\mu,k_0}(t,\xi) \!\!\!&\leq&\!\!\! C\int_{1}^{\infty} e^{-[1-t_0^{2m}]s^{2m}} s^{6m-1+2m\mu+2m(k_0+1)} ds \\ \!\!\!&\leq&\!\!\! C [1-t_0^{2m}]^{-4-\mu-k_0} \leq C [\tau-\alpha\xi]^{-4-\mu-k_0} \end{eqnarray*} by Lemma 6.4. This completes the proof of Proposition 6.1 (i). (ii)\,\, Recall the definition of the function $\tilde{K}_{\mu_0}^{\langle 1 \rangle}$: \begin{equation} \tilde{K}_{\mu_0}^{\langle 1 \rangle}(\tau,\xi) = \int_{1}^{\infty} e^{-s^{2m}} \tilde{L}_{\mu_0}(t_0,\xi;s)s^{4m-\mu_0}ds, \label{eqn:6.52} \end{equation} where \begin{eqnarray} &&\tilde{L}_{\mu_0}(t_0,\xi;s) = \int_{-\infty}^{\infty} e^{t_0 s v} r_{\mu_0}(v,\xi s^{-1})dv, \label{eqn:6.53}\\ &&r_{\mu_0}(v,X) = \frac{1}{\mu_0 !} \int_0^1(1-p)^{\mu_0} \left. \frac{\partial^{\mu_0+1}}{\partial Y^{\mu_0+1}} \frac{1}{\phi(v,Y)}\right|_{Y=Xp} dp. \label{eqn:6.54} \end{eqnarray} The following lemma is necessary to obtain the estimate of $\tilde{K}_{\mu_0}^{\langle 1 \rangle}$ in (ii). \begin{lemma} \,\,\, $ |r_{\mu_0}(v,X)|< C |v|^{\frac{(2m+1)(\mu_0+1)+m-1}{2m-1}} e^{-a |v|^{\frac{2m}{2m-1}}}$\,\, for $v\in {\bf R}$, $X\geq0$. \end{lemma} We remark that the constant $a$ is as in Lemma 6.1. The proof of the above lemma is given below. Applying Lemma 6.5 to (\ref{eqn:6.53}), we have \begin{eqnarray} |\tilde{L}_{\mu_0}(t_0,\xi,s)| \!\!\!&\leq&\!\!\! C \int_{-\infty}^{\infty} |v|^{\frac{(2m+1)(\mu_0+1)+m-1}{2m-1}} e^{t_0 sv-a |v|^{\frac{2m}{2m-1}}}dv \nonumber\\ \!\!\!&\leq&\!\!\! C s^{2m-2+(2m+1)(\mu_0+1)} e^{t_0^{2m} s^{2m}}. \label{eqn:6.55} \end{eqnarray} The second inequality is given by Lemma 6.2. Moreover substituting (\ref{eqn:6.55}) into (\ref{eqn:6.52}), we have \begin{eqnarray*} |\tilde{K}_{\mu_0}^{\langle 1 \rangle}(\tau,\xi)| \!\!\!&\leq&\!\!\! C \int_1^{\infty} e^{-[1-t_0^{2m}]s^{2m}}s^{8m-1+2m\mu_0} ds \\ \!\!\!&\leq&\!\!\! C [1-t_0^{2m}]^{-4-\mu_0} \leq C [\tau-\alpha \xi]^{-4-\mu_0}, \end{eqnarray*} by Lemma 6.4. Therefore we obtain the estimate of $\tilde{K}_{\mu_0}^{\langle 1 \rangle}$ in Proposition 6.1 (ii).
\mbox{$\Box$ \ } {\it Proof of Lemma 6.5}.\,\, We only consider the case where $v$ is positive. The proof for the case where $v$ is negative is given in the same way. By a direct computation, we have \begin{equation} \frac{\partial^{\mu_0+1}}{\partial X^{\mu_0+1}} \frac{1}{\phi(v,X)} = \sum_{|\alpha|=\mu_0+1} C_{\alpha} \frac{ \phi^{[\alpha_1]}(v,X)\cdots \phi^{[\alpha_{\mu_0+1}]}(v,X) }{\phi(v,X)^{\mu_0+2}}, \label{eqn:6.550} \end{equation} where $\phi^{[k]}(v,X)= \frac{\partial^k}{\partial X^k}\phi(v,X)$ and $C_{\alpha}$'s are constants depending on $\alpha=(\alpha_1,\ldots,\alpha_{\mu_0+1}) \in {\bf Z}_{\geq 0}^{\mu_0+1}$. By a direct computation, the functions $\phi^{[k]}$ are expressed in the form: \begin{equation} \phi^{[k]}(v,X)= \sum_{|\beta|=k} c_{\beta} F_{\beta}(\tilde{v},\tilde{X}), \label{eqn:6.56} \end{equation} where $\tilde{v}=v^{\frac{2m}{2m-1}}$, $\tilde{X}=X v^{\frac{-1}{2m}}$, $c_{\beta}$'s are constants and \begin{eqnarray} F_{\beta}(\tilde{v},\tilde{X}) \!\!\!&=&\!\!\! \int_{-\infty}^{\infty} e^{-\hat{g}(X w)w^{2m}+vw} w^{\gamma} \prod_{\beta} \hat{g}^{(\beta_k)}(Xw) dw \nonumber\\ \!\!\!&=&\!\!\! \tilde{v}^{\frac{\gamma+1}{2m}} \int_{-\infty}^{\infty} e^{-\tilde{v} p(s,\tilde{X})} s^{\gamma}\prod_{\beta} {\hat{g}}^{(\beta_k)}(\tilde{X}s) ds, \label{eqn:6.57} \end{eqnarray} with $p(s,\tilde{X}) =\hat{g}(\tilde{X}s)s^{2m}-s$, $\beta_k \in {\bf N}$ and $\gamma \in {\bf N}$ depending on $\beta=(\beta_k)_k$. In order to apply the stationary phase method to the above integral, we must know the location of the critical points of the function $p(\cdot,\tilde{X})$. The lemma below gives this information. \begin{lemma} There exists a function $\alpha \in C^{\infty}([0,\infty))$ such that \begin{equation} \frac{\partial}{\partial s} p(\alpha(\tilde{X}), \tilde{X})=0 \, \mbox{ and $\alpha_0 \leq \alpha(\tilde{X}) \le \alpha_1$ \, for \, $\tilde{X} \geq 0$}, \label{eqn:6.58} \end{equation} where $\alpha_0=(2m)^{\frac{-1}{2m-1}}$ and $\alpha_1=(\frac{4}{5}2m)^{\frac{-1}{2m-1}}$. \end{lemma} {\it Proof}. \,\,\, By a direct computation, we have \begin{eqnarray*} \frac{\partial}{\partial s} p(s,\tilde{X}) \!\!\!&=&\!\!\! s^{2m-1}[\hat{g}'(\eta)\eta+ 2m \hat{g}(\eta)]-1,\\ \frac{\partial^2}{\partial s^2} p(s,\tilde{X}) \!\!\!&=&\!\!\! s^{2m-2}[\hat{g}''(\eta)\eta^2+ 4m \hat{g}'(\eta)\eta +2m(2m-1)\hat{g}(\eta)], \end{eqnarray*} where $\eta=\tilde{X}s$. It is easy to obtain the following inequalities by using the conditions (\ref{eqn:6.1}),(\ref{eqn:6.2}): \begin{eqnarray} &&\frac{\partial}{\partial s} p(\alpha_0,\tilde{X}) \leq 0 < \frac{\partial}{\partial s} p(\alpha_1,\tilde{X}) \quad \mbox{for $\tilde{X}\geq 0$}, \label{eqn:6.61}\\ &&\frac{\partial^2}{\partial s^2} p(s,\tilde{X})\geq c >0 \quad \mbox{on $[\alpha_0,\alpha_1]\times [0,\infty)$}, \label{eqn:6.62} \end{eqnarray} for some positive constant $c$. Then the inequalities (\ref{eqn:6.61}),(\ref{eqn:6.62}) imply the claim in Lemma 6.6 by the implicit function theorem. \mbox{$\Box$ \ } Now we divide the integral in (\ref{eqn:6.57}) into two parts: $$ F_{\beta}(\tilde{v},\tilde{X}) = \tilde{v}^{\frac{\gamma+1}{2m}} \{ I_1(\tilde{v}, \tilde{X})+I_2(\tilde{v}, \tilde{X}) \}, $$ where \begin{eqnarray*} I_1(\tilde{v}, \tilde{X}) \!\!\!&=&\!\!\! \int_{|t-\alpha(\tilde{X})| \leq \delta} e^{-\tilde{v}p(t,\tilde{X})} t^{\gamma} \prod_{\beta} {\hat{g}}^{(\beta_k)}(\tilde{X}t) dt, \\ I_2(\tilde{v}, \tilde{X}) \!\!\!&=&\!\!\!
\int_{|t-\alpha(\tilde{X})| > \delta} e^{-\tilde{v}p(t,\tilde{X})} t^{\gamma} \prod_{\beta} {\hat{g}}^{(\beta_k)}(\tilde{X}t) dt, \end{eqnarray*} where $\delta>0$ is small. First we consider the function $I_1$. By Lemma 6.6 and Taylor's formula, we have \begin{equation} I_1(\tilde{v}, \tilde{X}) = e^{a(\tilde{X})\tilde{v}} \int_{|t|\leq \delta} e^{-\tilde{v}\tilde{p}(t,\tilde{X})t^2} (t+\alpha(\tilde{X}))^{\gamma} \prod_{\beta}\hat{g}^{(\beta_k)} (\tilde{X}(t+\alpha(\tilde{X}))) dt, \end{equation} where $a(\tilde{X})=-p(\alpha(\tilde{X}),\tilde{X})$ and $\tilde{p}(t,\tilde{X}) = \int_{0}^{1}(1-u) \frac{\partial^2}{\partial s^2}p(ut+\alpha(\tilde{X}),\tilde{X}) du$. Set $s=\tilde{p}(t,\tilde{X})^{\frac{1}{2}} t$ on $[-\delta,\delta]\times [0,\infty)$ and $\delta_{\pm}(\tilde{X})= \tilde{p}(\pm\delta, \tilde{X})^{\frac{1}{2}}\delta$, respectively. Then there is a function $\hat{p}\in C^{\infty} ([-\delta_-(\tilde{X}),\delta_+(\tilde{X})]\times [0,\infty))$ such that $t=\hat{p}(s,\tilde{X})$ and $\frac{\partial}{\partial s} \hat{p}(s,\tilde{X})>0$. Changing the integral variable, we have $$ I_1(\tilde{v}, \tilde{X}) = e^{a(\tilde{X})\tilde{v}} \int_{-\delta_-(\tilde{X})}^{\delta_+(\tilde{X})} e^{-\tilde{v}s^2} \Psi(s,\tilde{X})ds, $$ where $\Psi(s,\tilde{X}) = (\hat{p}(s,\tilde{X})+\alpha(\tilde{X}))^{\gamma}\cdot \prod_{\beta}\hat{g}^{(\beta_k)} (\tilde{X}(\hat{p}(s,\tilde{X})+\alpha(\tilde{X})))\cdot \frac{\partial}{\partial s}\hat{p}(s,\tilde{X}). $ Since $\Psi \in C^{\infty} ([-\delta_-(\tilde{X}),\delta_+(\tilde{X})]\times [0,\infty))$, we have \begin{eqnarray} I_1(\tilde{v}, \tilde{X}) \cdot \{\tilde{v}^{-\frac{1}{2}} e^{a(\tilde{X})\tilde{v}}\}^{-1} \!\!\!&=&\!\!\! \int_{-\delta_-(\tilde{X})\tilde{v}^{\frac{1}{2}}} ^{\delta_+(\tilde{X})\tilde{v}^{\frac{1}{2}}} e^{-u^2} \Psi(u \tilde{v}^{-\frac{1}{2}}, \tilde{X})du \nonumber\\ \!\!\!&\to&\!\!\! \sqrt{\pi}\Psi(0,\tilde{X}) \quad {\rm as}\,\, \tilde{v}\to\infty. \label{eqn:6.68} \end{eqnarray} Note that $\Psi(0,\tilde{X}) =\alpha(\tilde{X})^{\gamma}\cdot \{\frac{1}{2} \frac{\partial^2}{\partial t^2} p(\alpha(\tilde{X}),\tilde{X})\}^{-\frac{1}{2}} \cdot \prod_{\beta}\hat{g} ^{(\beta_k)}(\tilde{X} \alpha(\tilde{X}))$. Next we consider the function $I_2$. By an argument similar to the estimate of $I_2(\tilde{v})$ in the proof of Lemma 6.1, we can obtain \begin{equation} |I_2(\tilde{v},\tilde{X})| \leq C \tilde{v}^{-1} e^{a(\tilde{X})\tilde{v} -\varepsilon\tilde{v}}, \label{eqn:6.69} \end{equation} where $\varepsilon$ is a positive constant. Putting (\ref{eqn:6.68}),(\ref{eqn:6.69}) together, we have \begin{equation} \lim_{\tilde{v} \to \infty} F_{\beta}(\tilde{v},\tilde{X})\cdot \{ \tilde{v}^{\frac{\gamma+1-m}{2m}} e^{a(\tilde{X})\tilde{v}} \}^{-1}= \sqrt{\pi} \Psi(0,\tilde{X}). \label{eqn:6.70} \end{equation} Now under the condition $|\beta|=k$, the number $\gamma$ in (\ref{eqn:6.57}) attains the maximum value $(2m+1)k$ when $\beta=(1,\ldots,1)$. Therefore (\ref{eqn:6.56}),(\ref{eqn:6.70}) imply that \begin{equation} \left| \frac{\phi^{[k]}(v,X)}{\phi(v,X)} \right| \leq C \tilde{v}^{\frac{(2m+1)k}{2m}}. \label{eqn:6.700} \end{equation} Moreover (\ref{eqn:6.550}),(\ref{eqn:6.70}),(\ref{eqn:6.700}) imply that $$ \left| \frac{\partial^{\mu_0+1}}{\partial X^{\mu_0+1}} \frac{1}{\phi(v,X)} \right| \leq C \tilde{v}^{\frac{(2m+1)(\mu_0+1)+m-1}{2m}} e^{-a(\tilde{X})\tilde{v}}. $$ Now we admit the following lemma. \begin{lemma}\quad $a(\tilde{X})\geq a(0)=a.$ \end{lemma} We remark that the constant $a$ is as in Lemma 6.1.
The above lemma implies \begin{equation} \left| \frac{\partial^{\mu_0+1}}{\partial X^{\mu_0+1}} \frac{1}{\phi(v,X)} \right| \leq C \tilde{v}^{\frac{(2m+1)(\mu_0+1)+m-1}{2m}} e^{-a \tilde{v}}. \label{eqn:6.71} \end{equation} Finally substituting (\ref{eqn:6.71}) into (\ref{eqn:6.54}), we can obtain the estimate of $r_{\mu_0}$ in Lemma 6.5. \mbox{$\Box$ \ } {\it Proof of Lemma 6.7}. \,\,\, The definition of $\alpha(\tilde{X})$ in (\ref{eqn:6.58}) implies that \begin{eqnarray*} a'(\tilde{X}) \!\!\!&=&\!\!\! -\alpha'(\tilde{X})\cdot \alpha(\tilde{X})^{2m-1} [\hat{g}'(\tilde{X}\alpha(\tilde{X}))\tilde{X}\alpha(\tilde{X}) +2m \hat{g}(\tilde{X}\alpha(\tilde{X}))] \\ && \quad \quad \quad +\alpha'(\tilde{X}) -\hat{g}'(\tilde{X}\alpha(\tilde{X}))\alpha(\tilde{X})^{2m+1}\\ \!\!\!&=&\!\!\! -\hat{g}'(\tilde{X}\alpha(\tilde{X}))\alpha(\tilde{X})^{2m+1}. \end{eqnarray*} Since the condition $xg'(x)\leq 0$ in (\ref{eqn:2.1}) implies $\tilde{X}a'(\tilde{X})= -\tilde{X} \hat{g}'(\tilde{X}\alpha(\tilde{X})) \cdot\alpha(\tilde{X})^{2m+1} \geq 0$, $a(\tilde{X})$ takes its minimum value when $\tilde{X}=0$. It is easy to check that $a(0)=a$. \mbox{$\Box$ \ } \subsection{Proof of Proposition 6.2} (i)\,\,\, Recall the definition of the function $K_{\mu}^{\langle 2 \rangle}$: $$ K_{\mu}^{\langle 2 \rangle}(\tau,\xi) = \int_{\xi}^{1} e^{-s^{2m}}L_{\mu}(t_0 s)s^{4m+1-\mu}ds, $$ where $$ L_{\mu}(u) = \int_{-\infty}^{\infty} e^{uv}a_{\mu}(v) dv. $$ We remark that $L_{\mu}$ extends to an entire function. By the residue formula, we have \begin{equation} K_{\mu}^{\langle 2 \rangle}(\tau,\xi) = h_{\mu}(t_0) \xi^{-4m-2+\mu} \log \xi + \tilde{h}_{\mu}(t_0,\xi), \end{equation} where \begin{equation} h_{\mu}(t_0)= \frac{1}{2 \pi i} \oint_{|\zeta|=\delta} e^{-\zeta^{2m}}L_{\mu}(t_0 \zeta) \zeta^{4m+1-\mu} d\zeta \label{eqn:6.74} \end{equation} with $\delta$ a small positive number and $\tilde{h}_{\mu} \in C^{\infty}([0,1]\times[0, \varepsilon))$. Here (\ref{eqn:6.74}) implies that $h_{\mu}\equiv 0$ for $0 \leq \mu \leq 4m+1$ and $h_{\mu} \in C^{\infty}([0,1])$ for $\mu \geq 4m+2$. By Lemma 6.3, we can obtain (i) in Proposition 6.2. (ii)\,\,\, Changing the integral variable, we have \begin{eqnarray*} && \xi^{-4m-2+\mu_0} \tilde{K}_{\mu_0}^{\langle 2 \rangle}(\tau,\xi) = \int_1^{\xi^{-1}} e^{-\xi^{2m}u^{2m}} \tilde{L}_{\mu_0}(t_0,\xi,\xi u) u^{4m-\mu_0}du, \\ && \tilde{L}_{\mu_0}(t_0,\xi,\xi u) =\int_{-\infty}^{\infty} e^{t_0\xi uv}r_{\mu_0}(v,u^{-1})dv. \end{eqnarray*} Note that $r_{\mu_0}$ satisfies the inequality in Lemma 6.5. Keeping the above integrals in mind, we define the function $H$ by \begin{equation} H(\alpha,\beta,\gamma,\delta) =t_0^{\delta}\xi^{\alpha} \int_1^{\xi^{-1}} e^{-\xi^{2m}u^{2m}}\frac{\partial^{\gamma}}{\partial X^{\gamma}} I(t_0 \xi u) u^{-\beta-1} du, \label{eqn:6.77} \end{equation} with \begin{equation} I(X)=\int_{-\infty}^{\infty}e^{Xv} r(v)dv, \label{eqn:6.770} \end{equation} where $\alpha,\beta,\gamma(\geq 0),\delta$ are integers and the function $r$ satisfies $|r(v)| \leq C e^{-c|v|^{\frac{2m}{2m-1}}}$ for some positive constants $c$, $C$. Note that $H$ is a function of $(t_0,\xi)$. By a direct computation, we have \begin{eqnarray} \frac{\partial}{\partial t_0} H(\alpha,\beta,\gamma,\delta) \!\!\!&=&\!\!\! H(\alpha+1,\beta-1,\gamma+1,\delta),\label{eqn:6.78}\\ \frac{\partial}{\partial \xi} H(\alpha,\beta,\gamma,\delta) \!\!\!&=&\!\!\! -e^{-1}t_0^{\delta}\,\xi^{\alpha+\beta-1} \frac{\partial^{\gamma}}{\partial X^{\gamma}}I(t_0) +\alpha H(\alpha-1,\beta,\gamma,\delta)\nonumber\\ &&\!\!\!
-2m H(\alpha+2m-1,\beta-2m,\gamma,\delta)\nonumber\\ &&\!\!\! +H(\alpha,\beta-1,\gamma+1,\delta+1). \label{eqn:6.79} \end{eqnarray} Since $\frac{\partial^{\gamma}}{\partial X^{\gamma}}I$ is bounded on $[0,1]$, we have \begin{equation} |H(\alpha,\beta,\gamma,\delta)| \leq C|\xi|^{\alpha+\beta}, \label{eqn:6.80} \end{equation} by (\ref{eqn:6.77}). By induction, we have $$ \left| \frac{\partial^k}{\partial t_0^k}\frac{\partial^l}{\partial \xi^l} H(\alpha,\beta,\gamma,\delta) \right| \leq C|\xi|^{\alpha+\beta-l}, $$ by (\ref{eqn:6.78}),(\ref{eqn:6.79}),(\ref{eqn:6.80}). Now if we replace $r(v)$ by $r_{\mu_0}(v,Y)$ in (\ref{eqn:6.770}), then $\xi^{-4m-2+\mu_0} \tilde{K}_{\mu_0}^{\langle 2 \rangle}(\tau,\xi)= H(0,-4m+\mu_0-1,0,0)$. Therefore if $\mu_0>4m+1+l$, then $\frac{\partial^k}{\partial t_0^k} \frac{\partial^l}{\partial \xi^l} K_{\mu_0}^{\langle 2 \rangle}(\tau,\xi)$ is a continuous function of $(t_0, \xi)\in [0,1]\times[0,\varepsilon)$. Therefore we can obtain (ii) in Proposition 6.2 by Lemma 6.3. This completes the proof of Proposition 6.2. \mbox{$\Box$ \ } \section{The Szeg\"o kernel} Let $\Omega_f$ be a tube domain satisfying the condition in Section 2. Let $H^2(\Omega_f)$ be the subspace of $L^2(\Omega_f)$ consisting of holomorphic functions $F$ on $\Omega_f$ such that $$ \sup_{\epsilon>0} \int_{\partial\Omega_f} |F(z_1,z_2+i\epsilon)|^2 d\sigma(z) <\infty, $$ where $d\sigma$ is the measure on $\partial\Omega_f$ given by Lebesgue measure on ${\bf C}\times{\bf R}$ when we identify $\partial\Omega_f$ with ${\bf C}\times{\bf R}$ ($(z,t+if({\rm Im}z))\mapsto (z,t)$). The Szeg\"o projection is the orthogonal projection ${\bf S} : L^2(\partial\Omega_f)\to H^2(\Omega_f)$ and we can write $$ {\bf S}F(z)=\int_{\partial\Omega_f} S(z,w)F(w)d\sigma(w), $$ where $S:\Omega_f\times\Omega_f\to {\bf C}$ is the {\it Szeg\"o kernel} of the domain $\Omega_f$. We are interested in the restriction of the Szeg\"o kernel on the diagonal, so we write $S(z)=S(z,z)$. The Szeg\"o kernel of $\Omega_f$ has an integral representation : $$ S(z)=\frac{1}{(4\pi)^2} \int\!\!\!\int_{\Lambda^{\ast}} e^{-x\zeta_1-y\zeta_2} \frac{1}{D(\zeta_1,\zeta_2)} d\zeta_1d\zeta_2, $$ where $(x,y)=({\rm Im}z_1,{\rm Im}z_2)$ and $D(\zeta_1,\zeta_2)$ is as in Section 3 (\ref{eqn:3.4}). We also give an asymptotic expansion of the Szeg\"o kernel of $\Omega_f$. The theorem below can be obtained in a fashion similar to the case of the Bergman kernel, so we omit the proof. \begin{thm} The Szeg\"o kernel of $\Omega_f$ has the form in some neighborhood of $z^0$: $$ S(z)=\frac{\Phi^S(\tau, \varrho^{\frac{1}{m}})} {\varrho^{1+\frac{1}{m}}} +\tilde{\Phi}^S(\tau,\varrho^{\frac{1}{m}}) \log \varrho^{\frac{1}{m}}, $$ where $\Phi^S \in C^{\infty}((0,1]\times [0,\varepsilon))$ and $\tilde{\Phi}^S \in C^{\infty}([0,1]\times [0,\varepsilon))$, with some $\varepsilon>0$. 
Moreover $\Phi^S$ is written in the form on the set $\{\tau> \alpha \varrho^{\frac{1}{2m}}\}$ with some $\alpha>0$: for every nonnegative integer $\mu_0$ $$ \Phi^S(\tau,\varrho^{\frac{1}{m}}) =\sum_{\mu=0}^{\mu_0} c_{\mu}^S(\tau) \varrho^{\frac{\mu}{m}} + R_{\mu_0}^S(\tau, \varrho^{\frac{1}{m}}) \varrho^{\frac{\mu_0}{m}+\frac{1}{2m}}, $$ where $$ c_{\mu}^S(\tau) = \frac{\varphi_{\mu}^S(\tau)} {\tau^{2+2\mu}} +\psi_{\mu}^S(\tau)\log \tau, $$ for $\varphi_{\mu}^S,\psi_{\mu}^S \in C^{\infty}([0,1])$, $\varphi_0^S$ is positive on $[0,1]$ and $R_{\mu_0}^S$ satisfies $ |R_{\mu_0}^S(\tau,\varrho^{\frac{1}{m}})| \leq C_{\mu_0}^S[\tau-\alpha\varrho^{\frac{1}{2m}}]^{-3-2\mu_0} $ for some positive constant $C_{\mu_0}^S$. \end{thm} \end{document}
\begin{document} \title{Linear multi-step methods for BSDEs} \begin{abstract} We study the convergence rate of a class of linear multi-step methods for BSDEs. We show that, under a sufficient condition on the coefficients, the schemes enjoy a fundamental stability property. Coupling this result to an analysis of the truncation error allows us to design approximations with arbitrary order of convergence. Contrary to the analysis performed in \cite{zhazha10}, we consider a general diffusion model and BSDEs with driver depending on $z$. The class of methods we consider contains well-known methods from the ODE framework such as the Nystr\"om, Milne or Adams methods. We also study a class of Predictor-Corrector methods based on Adams methods. Finally, we provide a numerical illustration of the convergence of some methods. \end{abstract} \noindent{\bf Key words:} Backward SDEs, High order discretization, Linear multi-step methods. \noindent {\bf MSC Classification (2000):} 60H10, 65C30. \section{Introduction} In this paper, we are interested in the discrete-time approximation of the solution of a (decoupled) Backward Stochastic Differential Equation (BSDE), i.e.\ a triplet $(X,Y,Z)$ satisfying \begin{align} X_{t}& = X_{0} + \int_{0}^{t}b(X_{s})\ud s + \int_{0}^{t}\sigma(X_{s})\ud W_{s},\label{eq sde X} \\ Y_{t} &= g(X_{T})+ \int_{t}^{T} f(Y_{t},Z_{t}) \ud t -\int_{t}^{T} Z_{t} \ud W_{t} \,. \label{eq bsde YZ} \end{align} The functions $(b,\sigma): \R^d \mapsto \R^d \times {\cal M}_d$ and $f:\R \times \R^d \mapsto \R$ are Lipschitz-continuous, and $g: \R^d \mapsto \R$ is differentiable with continuous and bounded first derivative\footnote{These assumptions will be strengthened in the following sections.}. The positive constant $T$ is given and $W$ is a Brownian motion supported by a filtered probability space $(\Omega, {\cal F}, ({\cal F}_t)_{0 \le t \le T}, \mathbb{P})$. The process $Y$ is a one-dimensional stochastic process, the processes $X$ and $Z$ are valued in $\R^d$ and $Z$ is written, by convention, as a row vector. Under the Lipschitz assumption on the coefficients, the processes $X$ and $Y$ belong to the set ${\cal S}^2$ of continuous adapted processes with square integrable supremum and $Z$ belongs to ${\cal H}^2$, the set of progressively measurable processes satisfying $\esp{\int_0^T |Z_s|^2 \ud s} < \infty$. The existence and uniqueness of solutions of the system \eqref{eq sde X}-\eqref{eq bsde YZ} was first addressed by Pardoux and Peng in \cite{parpen90}. Moreover, in \cite{parpen92}, they show that \[ Y_t=u(t,X_t),\ \ \ \ Z_t=\nabla u^{\!\top}\!(t,X_t)\sigma(X_t), \ \ \ \ t\in [0,T], \] where $u\in C^{1,2}([0,T]\times \mathbb{R}^d)$ is the solution of the final value Cauchy problem \begin{align} L^{(0)}u(t,x)&=-f\left( u(t,x),\nabla u^{\!\top}\!(t,x)\sigma\left( x\right) \right) ,\quad t\in [0,T),\,x\in \mathbb{R}^{d} \label{b_pde_t} \\ u(T,x)&=g(x),\quad x\in \mathbb{R}^{d} \label{b_pde_T} \end{align} with $L^{(0)}$ defined to be the second order differential operator \begin{equation} L^{(0)}=\partial _{t} + \sum_{i=1}^{d}b_i\partial_{x_i}+ \frac12 \sum_{i,j=1}^{d}a_{ij} \partial_{x_i}\partial_{x_j}\label{operator}, \end{equation} and $a=(a_{ij})=\sigma\sigma^{\top}$. To approximate \eqref{eq sde X}-\eqref{eq bsde YZ}, one has to come up with an approximation of the SDE part and the BSDE part.
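For the forward part we do not fix a particular method here; as a simple illustration only (the notation $X^{\pi}$ below is introduced just for this remark), the classical Euler scheme, see e.g. \cite{klopla92}, on a grid $\pi=\{0=t_{0}<\dots<t_{n}=T\}$ reads \begin{align*} X^{\pi}_{t_{0}}:=X_{0}, \qquad X^{\pi}_{t_{i+1}}:= X^{\pi}_{t_{i}} + b(X^{\pi}_{t_{i}})\,(t_{i+1}-t_{i}) + \sigma(X^{\pi}_{t_{i}})\,(W_{t_{i+1}}-W_{t_{i}}), \quad 0\le i \le n-1\;. \end{align*}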
Obtaining approximations of the distribution of the forward component $X$ has been largely resolved in the last thirty years. There is a large literature on the subject and one can refer to \cite{klopla92} and the references therein for a systematic study of numerical methods for approximating $X$. Here, we focus on the approximation of $(Y,Z)$ instead. Numerical methods approximating this backward component have already been proposed. They are mainly based on an Euler approximation, see \cite{boutou04, zha04, goblab07, criman09} and the references therein. These methods have been successfully extended to a broader class of BSDEs: reflected BSDEs \cite{balpag03, cha08}, BSDEs with jumps \cite{boueli08}, BSDEs with driver of quadratic growth \cite{ric10}, see also the references therein. In a very specific framework, \cite{zhaliy12, zhache06, zhawan09, zhazha10} proposed some high order methods to approximate the solution of the BSDE. Recently, high order methods of Runge--Kutta type have been studied \cite{criman10, chacri12} in the general framework of \eqref{eq sde X}-\eqref{eq bsde YZ}. In this paper, we consider another type of high order method, very well known for ODEs, namely linear multi-step methods. The approximations presented below are associated with an arbitrary, but fixed, partition $\pi$ of the interval $[0,T]$, $\pi = \set{t_{0} = 0 <\dots<t_i<t_{i+1}<\dots< t_{n} = T}$. We define $h_{i} = t_{i+1}-t_{i}$, $i=0,...,n-1$ and $|\pi| = \max_{i}h_{i}$ and denote by $(Y_i,Z_i)$ the approximation of $(Y_{t_i},Z_{t_i})$ for $i=1,...,n$. The construction of the approximating process is done in a recursive manner, backwards in time. We describe in the following the salient features of the class of approximations considered in this paper. \begin{Definition} (Linear multi-step methods)\label{de multi-step scheme} (i) To initialise the scheme with $r$ steps, $r\ge 1$, we are given $r$ terminal conditions $(Y_{n-j},Z_{n-j})$, ${\cal F}_{t_{n-j}}$-measurable square integrable random variables, $0 \le j \le r-1$. (ii) For $i \le n-r$, the computation of $(Y_i,Z_i)$ involves $r$ steps and is given by \begin{align*} \left \{ \begin{array}{rcl} Y_i &=& \EFp{t_i}{ \sum_{j=1}^r a_{j}Y_{i+j} + h \sum_{j=0}^r b_{i,j} f(Y_{i+j},Z_{i+j}) } \\ Z_i &=&\EFp{t_i}{ \sum_{j = 1}^r \alpha_{j} H^Y_{i,j} Y_{i+j} + h \sum_{j=1}^r \beta_{i,j} H^f_{i,j} f(Y_{i+j},Z_{i+j}) } \end{array} \right. \end{align*} where $a_{j}$, $b_{i,j}$, $\alpha_{j}$, $\beta_{i,j}$ are real numbers satisfying \begin{align*} |a_{j}|+ |b_{i,j}| + |\alpha_{j}|+ |\beta_{i,j}| \le \Lambda \;, \; 0 \le i \le n-r\;, 0 \le j \le r\;, \end{align*} and $\Lambda$ is a positive constant. We impose the so-called pre-consistency condition, i.e. \begin{align*} \sum_{j=1}^r{a_j} = \sum_{j=1}^r{\alpha_j} = 1\;. \end{align*} The coefficients $H^Y_{i,j}$, $H^f_{i,j}$, $0 \le i \le n-r$, $1 \le j \le r$ are ${\cal F}_{t_{i+j}}$-measurable random variables satisfying, for all $j$, \begin{align*} h_i\esp{|H^Y_{i,j}|^2+|H^f_{i,j}|^2} \le \Lambda \quad\text{ and } \quad \EFp{t_i}{H^Y_{i,j}} = \EFp{t_i}{H^f_{i,j}} = 0 \,.
\end{align*} \end{Definition} \begin{Remark} \label{re intro} (i) The value $(Y_n,Z_n)$ is generally given by $(g(X_T),\nabla g^\top (X_T) \sigma(X_T))$. If $r>1$, one needs to specify other initialisation values. This choice is important because it will impact the global rate of convergence. One can use a Runge--Kutta type scheme \cite{chacri12} with a high order of convergence. (ii) When $r=1$, the schemes of Definition \ref{de multi-step scheme} are one-step schemes. See \cite{boutou04, zha04, goblab07, criman10, chacri12} and the references therein for a study of these schemes. \end{Remark} The global error we investigate here is a \emph{time discretization error} and is, given a grid $\pi$, $({\cal E}_Y(\pi),{\cal E}_Z(\pi))$ with \begin{align*} {\cal E}_Y(\pi) := \max_i \esp{|Y_{t_i}-Y_i|^2} \text{ and } {\cal E}_Z(\pi) := \sum_i h_{i} \esp{|Z_{t_i}-Z_i|^2}. \end{align*} To implement high order schemes in practice, we need to specify a particular form for the $H$-coefficients appearing in Definition \ref{de multi-step scheme} above. Let us first introduce a special class of random variables, which was already considered in \cite{chacri12}. \begin{Definition}\label{de psi-H} (i) For $m \ge 0$, we denote by ${\cal B}^{m}_{[0,1]}$ the set of bounded measurable functions $\psi :[0,1] \rightarrow \R$ satisfying \begin{align*} \int_{0}^{1}\psi(u)\ud u = 1 \text{ and if } m \ge 1,\; \int_{0}^{1}\psi(u)u^{k}\ud u =0\;,\; 1\le k \le m. \end{align*} (ii) For $(\psi^{\ell})_{1\le \ell \le d} \in {\cal B}^{m}_{[0,1]}$, $t \in [0,T]$ and $h>0$ such that $t+h\le T$, we define \begin{align*} H^{\psi}_{t,h} := (\frac1h\int_{t}^{t+h}\psi^{\ell}(\frac{u-t}{h}) \ud W^\ell_{u})_{1\le \ell \le d} \;, \end{align*} which is a row vector. By convention, we set $H^{\psi}_{t,0} = 0$. \end{Definition} In the sequel, when studying the order of convergence of the scheme and depending on the order we want to retrieve, we will assume that, for $1\le j \le r$, \begin{align} H^Y_{i,j} := H^\psi_{t_i,jh} \text{ and } H^f_{i,j} := H^\phi_{t_i,jh} \label{eq def H scheme} \end{align} for some functions $\psi$ and $\phi$ in ${\cal B}^m$, $m \ge 0$, see Theorem \ref{th main conv res} below. The convergence analysis is done in a classical way. We first prove a fundamental stability property for the schemes, under a reasonable sufficient condition, see Proposition \ref{pr L2 stab}. Then, assuming smoothness of the value function $u$ given by \eqref{b_pde_t}-\eqref{b_pde_T}, we study the truncation error associated with the above methods. We prove a sufficient condition on the coefficients to retrieve methods of any order. These two steps allow us to obtain general convergence results and to design new high order methods for BSDEs. Contrary to the analysis performed in \cite{zhazha10}, we work with a general diffusion model given by \eqref{eq sde X} and BSDEs with driver depending on $z$. As an example of application, we extend some classical schemes used in the ODE framework and then proceed with the study of Adams type methods. Based on these methods, we also design Predictor-Corrector methods and study their convergence. To the best of our knowledge, it is the first time that these methods are considered for BSDEs. Finally, we illustrate our theoretical results with some numerical experiments showing empirical convergence rates. The rest of this paper is organised as follows.
In section 2, we prove our general convergence result which relies heavily on a stability property. In section 3, we study Adams methods and Predictor-Corrector methods in the context of BSDEs. The main results are stated in the multi-dimensional case but for the reader's convenience the proofs are done with $d=1$. Finally, in section 4, we provide a numerical example. \paragraph{Notations} We denote by $\mathcal{M}_d$ the set of matrices with $d$ rows and $d$ columns. For a matrix $A\in {\cal M}_d$, $\text{Tr}[A]$ denotes its trace, $A^{.j}$ its $j$-th column, $A^{i.}$ its $i$-th row, and $A^{ij}$ the $i$-th term of $A^{.j}$. $I_d$ is the identity matrix of ${\cal M}_d$. The transpose of a matrix or a vector $y$ will be denoted $y^\top$. The sup-norm for both vectors and matrices is denoted $|\,.\,|_{\infty}$. In the sequel $C$ is a positive constant whose value may change from line to line depending on $T$, $d$, $\Lambda$, $X_0$ but which does not depend on $\pi$. We write $C_p$ if it depends on some positive parameter $p$. For $t \in \pi$, $R$ a random variable and $r$ a real number, the notation $R=O_t(r)$ means that $|R| \le \lambda^\pi_t r$ where $\lambda^\pi_t$ is a positive random variable satisfying \begin{align*} \esp{|\lambda^\pi_t|^p} \le C_p\;, \end{align*} for all $p > 0$, $t \in \pi$ and all grids $\pi$. \section{General convergence results} In this part, we study the convergence properties of the schemes given in Definition \ref{de multi-step scheme}. We first establish a stability property for the schemes. We then state a sufficient condition on the coefficients which allows us to retrieve high order schemes. \subsection{$L^2$-stability} To investigate the \emph{stability} of the schemes given in Definition \ref{de multi-step scheme}, we introduce a perturbed scheme \begin{align} \label{eq multi-step scheme stab pert} \left \{ \begin{array}{rcl} \widetilde Y_i &=& \EFp{t_i}{ \sum_{j=1}^r a_{j}\widetilde Y_{i+j} + h \sum_{j=0}^r b_{i,j} f(\widetilde Y_{i+j},\widetilde Z_{i+j})} + \errYi{} \\ \widetilde Z_i &=&\EFp{t_i}{ \sum_{j = 1}^r \alpha_{j} H^Y_{i,j} \widetilde Y_{i+j} + h \sum_{j=1}^r \beta_{i,j} H^f_{i,j} f(\widetilde Y_{i+j},\widetilde Z_{i+j}) } + \errZi{} \end{array} \right. \end{align} where $ \errYi{}$, $\errZi{}$ are random variables belonging to $L^2({\cal F}_{t_i})$, for $i \le n-r$. The notion of \emph{stability} we consider here is the following. \begin{Definition}($L^2$-Stability)\label{de stab} The scheme given in Definition \ref{de multi-step scheme} is said to be $L^2$-stable if \begin{align*} \max_{0 \le i \le n-r} \esp{|\dYi{}|^2} & + \sum_{i=0}^{n-r} h_{i} \esp{|\dZi{}|^2} \le \\ & C \Big(\max_{0\le j \le r-1} \esp{|\dY_{n-j}|^2 + |\pi| |\dZ_{n-j}|^2} + |\pi|\sum_{i=0}^{n-r} \esp{\frac{1}{h_{i}^2} |{\errYi{}}|^2 + |{\errZi{}}|^2} \Big) \end{align*} for all sequences $\errYi{}$, $\errZi{}$ of $L^2({\cal F}_{t_i})$-random variables, $i\le n-r$, and terminal values $(Y_{n-j}, Z_{n-j})$, $(\tilde{Y}_{n-j}, \tilde{Z}_{n-j})$ belonging to $L^2({\cal F}_{t_{n-j}})$, $0\le j \le r-1$.
\end{Definition} \begin{Proposition} \label{pr L2 stab} Assume that the following holds: \begin{itemize} \item[\HYP{c}] The coefficients $(a_j)$ are non-negative, $\sum_{j=1}^r a_j = 1$ and, for $1 \le j \le r$, $a_j=0 \implies \alpha_j = 0$. \end{itemize} Then the scheme given in Definition \ref{de multi-step scheme} is $L^2$-stable, in the sense of Definition \ref{de stab}. \end{Proposition} \proof We define $U_i = (Y_i,\dots,Y_{i+r-1})^\top$, $\tilde{U}_i = (\widetilde Y_i,\dots,\widetilde Y_{i+r-1})^\top$, and \begin{align*} \Phi^Y_i &= \left( \begin{array}{c} \sum_{j=0}^r b_{i,j} f(Y_{i+j},Z_{i+j}) \\ \mathbf{0} \\ \end{array} \right)_{r,1} \text{ , } \tilde{\Phi}^Y_i = \left( \begin{array}{c} \sum_{j=0}^r b_{i,j} f(\widetilde Y_{i+j},\widetilde Z_{i+j}) \\ \mathbf{0} \\ \end{array} \right)_{r,1} \\ \text{ and } & \tilde{\Theta}^Y_i = \left( \begin{array}{c} \errYi{}\\ \mathbf{0} \\ \end{array} \right)_{r,1} \end{align*} and denote $\delta U_i = U_i-\tilde{U}_i$, $\delta \Phi^Y_i = \Phi^Y_i - \tilde{\Phi}^Y_i$, \begin{align*} a =(a_1, \dots,a_r),\; \alpha=(\alpha_1, \dots,\alpha_r) \text{ and } A = \left( \begin{array}{c|c} a_1, \dots, a_{r-1} & a_r \\ \hline I_{r-1} & \mathbf{0} \\ \end{array} \right)_{r,r} \;. \end{align*} The scheme and the perturbed scheme can then be rewritten, for the $Y$ part, as \begin{align*} \EFp{t_i}{\Ui{}} &= \EFp{t_i}{ A \Ui{+1} + h \Phi^Y_i } \end{align*} and \begin{align*} \EFp{t_i}{\tilde{U}_i} &= \EFp{t_i}{ A\tilde{U}_{i+1} + h \tilde{\Phi}^Y_i + \tilde{\Theta}^Y_i } \;. \end{align*} 1.a For $i \le j \le n-r$, we compute that \begin{align*} |\EFp{t_i}{\dUj{}}|_{\infty} \le | A |_{\infty} |\EFp{t_i}{\dUj{+1}}|_{\infty} + h_j| \EFp{t_i}{ \delta \Phi^Y_j}|_{\infty} + |\EFp{t_i}{\tilde{\Theta}^Y_j}|_{\infty}. \end{align*} Under $\HYP{c}$, we observe that $|A|_{\infty} = 1$ and we get \begin{align*} | \EFp{t_i}{\dUj{}}|_{\infty} \le |\EFp{t_i}{\dUj{+1}}|_{\infty} + h_j|\EFp{t_i}{ \delta \Phi^Y_j}|_{\infty} + |\EFp{t_i}{\tilde{\Theta}^Y_j}|_{\infty}. \end{align*} Iterating on $j$, we compute that \begin{align*} | \EFp{t_i}{\dUj{}}|_{\infty} \le |\EFp{t_i}{\delta U_{n-r+1}}|_{\infty} + \sum_{k=j}^{n-r}h_k|\EFp{t_i}{ \delta \Phi^Y_k}|_{\infty} + \sum_{k=j}^{n-r} |\EFp{t_i}{\tilde{\Theta}^Y_k}|_{\infty} \;. \end{align*} In particular, we have for $i = j$, and $|\pi|$ small enough, \begin{align}\label{eq stab Y step 1} |\dYi{}| \le C\Big( \sum_{k=i}^{n-r}h_k\EFp{t_i}{|\dYk{}| + |\dZk{}|}+ \sum_{k=i}^{n-r}\EFp{t_i}{|\errYk{}|} + \sum_{k=n-r+1}^{n}\EFp{t_i}{|\dY_{k}|} \Big)\;. \end{align} We then compute \begin{align}\label{eq stab Y step 2} \esp{|\dYi{}|^2} \le C\Big( |\pi|\sum_{k=i}^{n-r}\esp{|\dYk{}|^2} + \sum_{k=i}^{n-r}h_k\esp{|\dZk{}|^2} &+ \sum_{k=i}^{n-r}\frac1{h_k}\esp{|\errYk{}|^2} + \sum_{k=n-r+1}^{n}\esp{|\dY_{k}|^2} \Big) \;. \end{align} 1.b We will now control the term $ h\sum_{k=i}^{n-r}\esp{|\dZk{}|^2}$ appearing in \eqref{eq stab Y step 2}.
Using Cauchy-Schwartz inequality, we obtain that, if $a_j \ensuremath{ \circ }eq 0$ then \begin{eqnarray}gin{align*} |\EFp{{t_i}{}}{\alpha_j H_{i,j}^Y\dYi{+j}}|^2 & \le C(a_j \EFp{{t_i}{}}{|\dYi{+j}|^2 - a_j |\EFp{{t_i}{}}{\dYi{+j}}|^2}) \end{align*} which leads to, under \HYP{c}, \begin{eqnarray}gin{align} \label{eq stab Z step 1} h_i\esp{|\dZi{}|^2} &\le C\Big(\sum_{j=1}^r a_j\esp{|\dYi{+j}|^2} - \sum_{j=1}^r\esp{a_j|\EFp{{t_i}{}}{ \dYi{+j}}|^2}) \ensuremath{ \circ }onumber \\ &\quad+ |\pi|^2 \sum_{j=1}^r\esp{|\dYi{+j}|^2 + |\dZi{+j}|^2} + |\pi|\esp{|\errZi{}|^2} \;. \Big) \end{align} Under $\HYP{c}$, we have that, \begin{eqnarray}gin{align*} - \sum_{j=1}^r\esp{a_j |\EFp{{t_i}{}}{ \dYi{+j}}|^2} & \le -\esp{|\sum_{j=1}^r\EFp{{t_i}{}}{a_j \dYi{+j}}|^2} \;. \end{align*} Then, recalling that \begin{eqnarray}gin{align*} \sum_{j=1}^r\EFp{{t_i}{}}{a_j \dYi{+j}} = \delta Y_i - h_i\sum_{j=0}^r \EFp{{t_i}{}}{\delta \mathbb{P}hi^Y_{j+r}} - \errYi{} \end{align*} we compute \begin{eqnarray}gin{align*} - \sum_{j=1}^r\esp{|\EFp{{t_i}{}}{a_j \dYi{+j}}|^2} &\le -\esp{| \dYi{}|^2} + C|\pi|\esp{| \dYi{}|\sum_{j=0}^r\EFp{{t_i}{}}{|\dYi{+j}|+|\dZi{+j}|}} + C\esp{| \dYi{}|\,|\errYi{}| } \end{align*} which leads, for $0<\ensuremath{{\varepsilonilon}_{i} }lon \le 1$ to be fixed later on, to \begin{eqnarray}gin{align*} - \sum_{j=1}^r\esp{|\EFp{{t_i}{}}{a_j \dYi{+j}}|^2} &\le -\esp{|\dYi{}|^2} + C |\pi| \sum_{j=0}^r\EFp{{t_i}{}}{\frac1\ensuremath{{\varepsilonilon}_{i} }lon|\dYi{+j}|^2+\ensuremath{{\varepsilonilon}_{i} }lon|\dZi{+j}|^2} + \frac{C}{h_i}\esp{|\errYi{}|^2}\;. \end{align*} Combining the last inequality with \eqref{eq stab Z step 1} and summing over $i$, we obtain, for $|\pi|$ small enough \begin{eqnarray}gin{align*} \sum_{k=i}^{n-r} h_k\esp{|\dZk{}|^2} &\le C\Big(\sum_{k=i}^{n-r}(\sum_{j=1}^r a_j\esp{|\dYi{+j}|^2} - \esp{|\dYi{}|^2}) + C(1+\frac1\ensuremath{{\varepsilonilon}_{i} }lon) |\pi| \sum_{k=n-r+1}^{n}\esp{|\dYk{}|^2+|\dZk{}|^2} \\ &+ C(1+\frac1\ensuremath{{\varepsilonilon}_{i} }lon)|\pi| \sum_{k=i}^{n-r}\esp{|\dYk{}|^2} +\frac{C}{\ensuremath{{\varepsilonilon}_{i} }lon} |\pi| \sum_{k=i}^{n-r}\esp{|\dZk{}|^2} +|\pi|\sum_{k=i}^{n-r}\esp{|{\errZk{}}|^2}+ \sum_{k=i}^{n-r} \frac{C}{h_k}\esp{|{\errYk{}}|^2} \Big) \end{align*} Using \HYP{c}, setting $\ensuremath{{\varepsilonilon}_{i} }lon := \frac{C}{2}$, we then obtain \begin{eqnarray}gin{align} \sum_{k=i}^{n-r} h_k \esp{|\dZk{}|^2} &\le C\Big( \sum_{k=n-r+1}^{n}\esp{|\dYk{}|^2+|\pi||\dZk{}|^2} + |\pi| \sum_{k=i}^{n-r}\esp{|\dYk{}|^2} + \sum_{k=i}^{n-r}\esp{\frac1{h_k}|{\errYk{}}|^2 +h_k|{\errZk{}}|^2} \Big) \label{eq stab interm Z} \end{align} 1.c Combining the last inequality with \eqref{eq stab Y step 2}, we get \begin{eqnarray}gin{align}\label{eq stab Y step 1.c-1} \esp{|\dYi{}|^2} \le C\Big( |\pi|\sum_{j=i}^{n-r}\esp{|\dYj{}|^2} + \sum_{k=i}^{n-r}|\pi|\esp{\frac1{h_k^2}|\EFp{\tk{}}{\errYk{}}|^2 +|\EFp{\tk{}}{\errZk{}}|^2} + \sum_{k=n-r+1}^{n}\esp{|\dY_{k}|^2+|\pi||\dZk{}|^2} \Big) \end{align} 2.a Let us define \begin{eqnarray}gin{align*} \delta_i &:= \sum_{j=i}^{n-r}\esp{|\dYj{}|^2} \;, \\ \theta_i &:=\sum_{k=i}^{n-r}|\pi|\esp{\frac1{h_k^2}|{\errYk{}}|^2 +|{\errZk{}}|^2} + \sum_{k=n-r+1}^{n}\esp{|\dY_{k}|^2+ |\pi| |\dZk{}|^2} \;. 
\end{align*} Equation \eqref{eq stab Y step 1.c-1} reads then \begin{eqnarray}gin{align}\label{eq stab Y step 2-1} \delta_i-\delta_{i+1} \le C|\pi|\delta_{i} + C\theta_i \end{align} Using a discrete version of Gronwall Lemma, we then compute \begin{eqnarray}gin{align*} \delta_i \le C(\delta_{n-r} + \sum_{k=i}^{n-r} \theta_k e^{C(n-r-k)|\pi|}) \end{align*} Since $\delta_{n-r} \le \eta_i$ and $\theta_k \le \theta_i$ for $k \ge i$, we compute \begin{eqnarray}gin{align*} \delta_i \le C \theta_i \frac{1}{1-e^{C|\pi|}} \end{align*} This last equation combined with \eqref{eq stab Y step 2-1} leads to \begin{eqnarray}gin{align*} \esp{|\dYi{}|^2} \le C \theta_i \end{align*} which concludes the proof for the $Y$-part. 2.b For the $Z$-part, the proof is concluded pluging last inequality in \eqref{eq stab interm Z}, with $i=0$ in this equation. \eproof \begin{eqnarray}gin{Remark} It is easily checked that $\HYP{c}$ implies that the roots of the following polynomial equations \begin{eqnarray}gin{align*} y^{r+1}-\sum_{j=1}^r a_j y^{r-j+1} = 0\,. \end{align*} are in the closed unit disc and the multiple roots are in the open unit disc. \\ It is known that in the ODEs framework this is a necessary and sufficient condition to get stability of linear multi-step schemes, see e.g \cite{but08, dem06}. In our context, this condition is only necessary. We have to imposed $\HYP{c}$ essentially because we need to deal with the new process $Z$. \end{Remark} \begin{eqnarray}gin{Remark} \label{re pr L2 stab} Proposition \ref{pr L2 stab} is generic in the sense that we do not use the particular property of the probability space nor the fact that $({\cal F}_{t})_{t {\rm (i)}n [0,T]}$ is a Brownian filtration. We will use this property in the last section of this paper. \end{Remark} \subsection{Study of the order} \subsubsection{Definitions} To study the order of the schemes, we use the following definition of truncation errors. The \emph{local truncation error} for the pair $(Y,Z)$ defined as \begin{eqnarray}gin{align} \eta_i := \eta^Y_i + \eta^Z_i,\ \ \ \ (\eta^Y_i ,\eta^Z_i):=\left(\frac1{h_{i}^2} \esp{|Y_{{t_i}{}} - \check{Y}_{{t_i}{}}|^2},\esp{ |Z_{{t_i}{}} - \check{Z}_{{t_i}{}}|^2}\right) ,\; i\le n-r\;, \label{eq de trunc error loc YnZ} \end{align} with \begin{eqnarray}gin{align} \check{Y}_{{t_i}{}} &=\EFp{{t_i}{}}{\sum_{j=1}^r a_jY_{{t_i}{+j}} + h_i \sum_{j=1}^r b_{i,j} f(Y_{{t_i}{+j}},Z_{{t_i}{+j}}) + h_i b_{i,0}f(\check{Y}_{{t_i}{}},\check{Z}_{{t_i}{}}) } \label{eq def hY} \\ \check{Z}_{{t_i}{}} &= \EFp{{t_i}{}}{ \sum_{j = 1}^r \alpha_j H^\psi_{{t_i}{},jh} Y_{{t_i}{+j}} + h_j \sum_{j=1}^r \begin{eqnarray}ta_{i,j} H^\phi_{{t_i}{},jh} f(Y_{{t_i}{+j}},Z_{{t_i}{+j}}) \label{eq def hZ} } \end{align} where $\psi$, $\phi$ belongs to ${\cal B}^0$. The \emph{global truncation error} for a given grid $\pi$ is given by \begin{eqnarray}gin{align} {\cal T}(\pi) := {\cal T}_Y(\pi) + \mathcal{T}_Z(\pi),\ \ \ ({\cal T}_Y(\pi), {\cal T}_Z(\pi)) :=\left( \sum_{i=0}^{n-r} h_{i} \eta^Y_i,\ \ \ \sum_{i=0}^{n-r} h_{i} \eta^Z_i \right), \label{eq de trunc error YnZ} \end{align} where ${\cal T}_Y$ is the global truncation error for $Y$ and ${\cal T}_Z$ is the global truncation error for $Z$ defined as above. 
\begin{Definition}\label{de order} An approximation is said to have a \emph{global truncation error} of order $m$ if \begin{align*} {\cal T}(\pi) \le C |\pi|^{2m} \end{align*} for all sufficiently smooth\footnote{The required regularity assumptions will be stated in the Theorems below.} solutions to \eqref{b_pde_t} and all partitions $\pi$ with sufficiently small mesh size. \end{Definition} \subsubsection{Expansion of the truncation error} We study the order of the methods given in Definition \ref{de multi-step scheme} using It\^o-Taylor expansions \cite{klopla92}. This requires the smoothness of the value function $u$ introduced in \eqref{b_pde_t}-\eqref{b_pde_T}. In order to state these assumptions precisely, we recall some notations from Chapter 5 (see Section 5.4) of \cite{klopla92}. Let \begin{align*} {\cal M} := \set{\oslash}\cup \bigcup_{m =1}^{\infty}\set{0,\dots,d}^m \end{align*} be the set of multi-indices with entries in $\set{0,\dots,d}$, where $\ell$ denotes the length of a multi-index ($\ell(\oslash)=0$ by convention). We introduce the concatenation operator $*$ on ${\cal M}$ for multi-indices with finite length: for $\alpha= (\alpha_1,\dots,\alpha_p)$ and $\beta= (\beta_1,\dots,\beta_q)$, $\alpha*\beta = (\alpha_1,\dots,\alpha_p,\beta_1,\dots,\beta_q)$. A non-empty subset ${\cal A} \subset {\cal M}$ is called a hierarchical set if \begin{align*} \sup_{\alpha \in {\cal A}} \ell(\alpha) < \infty \text{ and } -\alpha \in {\cal A} , \; \forall \alpha \in {\cal A} \setminus \set{\oslash}, \end{align*} where $-\alpha$ denotes the multi-index obtained by deleting the first component of $\alpha$. For any hierarchical set ${\cal A}$, we consider the remainder set ${\cal B}({\cal A})$ given by \begin{align*} {\cal B} ({\cal A}) := \set{\alpha \in {\cal M}\setminus {\cal A} \;|\; -\alpha \in {\cal A}}. \end{align*} We will use in the sequel the following sets of multi-indices, for $n\ge 0$: \begin{align*} {\cal A}_{n} := \set{ \alpha \; | \; \ell(\alpha) \le n} \end{align*} and observe that ${\cal B}({\cal A}_{n})={\cal A}_{n+1}\setminus {\cal A}_{n}$. For $j \in \set{1, \dots,d}$, we consider the operators \begin{align*} L^{(j)} = \sum_{k=1}^d \sigma^{kj} \partial_{x_k}. \end{align*} For a multi-index $\alpha=(\alpha_1,\dots,\alpha_p)$, the iteration of these operators has to be understood in the following sense: \begin{align*} L^\alpha := L^{(\alpha_1)}\circ \dots \circ L^{(\alpha_p)}. \end{align*} By convention, $L^{\oslash}$ is the identity operator; recall also the definition of $L^{(0)}$ given in \eqref{operator}. One can observe that $L^{\alpha*\beta}=L^\alpha \circ L^\beta$. For a multi-index $\alpha$ with finite length, we consider the set ${\cal G}^\alpha$ of functions $v:[0,T]\times \R^d \rightarrow \R$ for which $L^\alpha v$ is well defined and continuous. We also introduce ${\cal G}^\alpha_b$, the subset of functions $v \in {\cal G}^\alpha$ such that $L^\alpha v$ is bounded. For $v \in {\cal G}^\alpha$, we denote $L^\alpha v$ by $v^\alpha$. Finally, for $n\ge 1$, we define the set ${\cal G}^n_b$ of functions $u$ such that $u^\alpha \in {\cal G}^\alpha_b$ for all $\alpha \in {\cal A}_n\setminus \set{\oslash}$. The two following Propositions are key results to prove the high order rate of convergence of the schemes. We refer to \cite{chacri12} for proofs.
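For instance, in the scalar case $d=1$ (writing $\sigma$ for $\sigma^{11}$), the multi-indices have entries in $\set{0,1}$, so that ${\cal A}_1 = \set{\oslash,(0),(1)}$ and ${\cal B}({\cal A}_1) = {\cal A}_2\setminus{\cal A}_1 = \set{(0,0),(0,1),(1,0),(1,1)}$. The only first-order operator is then $L^{(1)} = \sigma \partial_x$ and, recalling the definition of $L^{(0)}$ in \eqref{operator}, one has for example $u^{(1,0)} = L^{(1)}\big(L^{(0)} u\big)$ while, for the multi-index $(1)*(0)_m$ appearing below, $u^{(1)*(0)_m} = L^{(1)}\big(L^{(0)}\big)^m u$.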
\begin{eqnarray}gin{Proposition}\label{pr weak expansion Y} Assume $d=1$. Let $m\ge 0$, then for a function $v {\rm (i)}n {\cal G}^{m+1}_b$, we have that \begin{eqnarray}gin{align*} \EFp{t}{v(t+h,X_{t+h})} &= v_t + hv_t^{(0)} + \frac{h^2}{2}v_t^{(0,0)} + \dots + \frac{h^m}{m!}v_t^{(0)_m} + O_t(h^{m+1}) \end{align*} \end{Proposition} \begin{eqnarray}gin{Proposition}\label{pr weak expansion Z} Assume $d=1$. (i) Let $m \ge 0$, for $\psi {\rm (i)}n {\cal B}^m_{[0,1]}$, assuming that $v {\rm (i)}n {\cal G}^{m+2}_b$, we have \begin{eqnarray}gin{align*} \EFp{t}{H^{\psi}_{t, h}v(t+ h,X_{t+ h})} &= v^{(1)}_t + h v^{(1,0)}_t + \dots + \frac{h^{m}}{m!}v^{(1)*(0)_{m}}_t + O_t(h^{m+1}) \end{align*} \\ (ii) For $\psi {\rm (i)}n {\cal B}^0_{[0,1]}$, assuming that $v {\rm (i)}n {\cal G}^1_b$, we have \begin{eqnarray}gin{align*} \EFp{t}{H^{\psi}_{t, h}v(t+ h,X_{t+ h})} &= O_t(1)\,. \end{align*} \\ (iii) If $L^{(0)}\circ L^{(1)} = L^{(1)}\circ L^{(0)}$, then the expansion of (i) holds true with $\psi = 1$. \end{Proposition} \subsubsection{Sufficient condition for Order $m$} \label{subsubse order m} For the reader's convenience, we assume in this paragraph a constant time step for the grid $\pi$ i.e. $h_{i} = h = |\pi| := \frac{T}{n}$, for all $i$ and that the coefficients $b$, $\begin{eqnarray}ta$ do not depend of $i$. Under these conditions, the scheme given in Definition \ref{de multi-step scheme}, recalling \eqref{eq def H scheme}, rewrites, for $i\le n-r$, \begin{eqnarray}gin{align} \label{eq scheme general} \left \{ \begin{eqnarray}gin{array}{rcl} \ensuremath{Y }i{} &=& \EFp{{t_i}{}}{ \sum_{j=1}^r a_{j}\ensuremath{Y }i{+j} + h \sum_{j=0}^r b_{j} f(\ensuremath{Y }i{+j},\ensuremath{ Z^{i} }{+j}) } \\ \ensuremath{ Z^{i} }{} &=&\EFp{{t_i}{}}{ \sum_{j = 1}^r \alpha_{j} H^\psi_{{t_i}{},jh} \ensuremath{Y }i{+j} + h \sum_{j=1}^r \begin{eqnarray}ta_{j} H^\phi_{{t_i}{},jh} f(\ensuremath{Y }i{+j},\ensuremath{ Z^{i} }{+j}) } \end{array} \right. \end{align} \begin{eqnarray}gin{Proposition} (Order m) \label{pr order m} For $m\ge 2$, assume that the following holds \begin{eqnarray}gin{align*} (C^Y)_m&:\;\; \sum_{j=1}^r a_j j^p - p\sum_{j=0}^r b_j j^{p-1} = 0,\; 1\le p \le m \\ \text{ and } \quad (C^Z)_m& :\;\; \sum_{j=1}^r \alpha_j j^p - p \begin{eqnarray}ta_j j^{p-1} = 0,\; 1\le p \le m-1 \end{align*} and that $u {\rm (i)}n {\cal G}^{m+1}_b$, then we have \begin{eqnarray}gin{align*} {\cal T}_Y(\pi) + {\cal T}_Z(\pi) \le C |\pi|^{2m}, \end{align*} provided that $\psi {\rm (i)}n {\cal B}^{m-1}_{[0,1]}$ and $\phi {\rm (i)}n {\cal B}^{m-2}_{[0,1]}$, recalling \eqref{eq scheme general}. \end{Proposition} \proof 1. We first study the truncation error for the Z-part. We have that \begin{eqnarray}gin{align*} \check{Z}_{{t_i}{}} &=\EFp{{t_i}{}}{ \sum_{j = 1}^r \alpha_j H^\psi_{{t_i}{},jh} Y_{{t_i}{+j}} + h \sum_{j=1}^r \begin{eqnarray}ta_j H^\phi_{{t_i}{},jh} f(Y_{{t_i}{+j}},Z_{{t_i}{+j}}) } \\ &= \EFp{{t_i}{}}{\sum_{j=1}^r \alpha_j H^\psi_{{t_i}{},jh} u({t_i}{+j},X_{{t_i}{+j}}) - h \sum_{j=1}^r \begin{eqnarray}ta_j H^\phi_{{t_i}{},jh} u^{(0)}({t_i}{+j},X_{{t_i}{+j}})} \;. 
\end{align*} Using Proposition \ref{pr weak expansion Z}, we compute \begin{eqnarray}gin{align*} \check{Z}_{{t_i}{}} = \sum_{p=0}^{m-1} \sum_{j=1}^r \alpha_j j^p \frac{h^p}{p!} u^{(1)*(0)_p}({t_i}{},X_{{t_i}{}}) - \sum_{p=0}^{m-2} \sum_{j=1}^r \begin{eqnarray}ta_j j^{p} \frac{h^{p+1}}{p!} u^{(1)*(0)_{p+1}}({t_i}{},X_{{t_i}{}}) +O_{{t_i}{}}(|\pi|^{m}) \end{align*} which leads to \begin{eqnarray}gin{align*} \check{Z}_{{t_i}{}} & - Z_{{t_i}{}} = (\sum_{j=1}^r \alpha_j-1) u^{(1)}({t_i}{},X_{{t_i}{}}) + \sum_{p=1}^{m-1} \frac{h^p}{p!}u^{(1)*(0)_p}({t_i}{},X_{{t_i}{}})(\sum_{j=1}^r \alpha_j j^p - p\sum_{j=1}^r j^{p-1}\begin{eqnarray}ta_j ) +O_{{t_i}{}}(|\pi|^{m}) \end{align*} Under $(C^Z)_m$, we obtain \begin{eqnarray}gin{align*} \check{Z}_{{t_i}{}} & - Z_{{t_i}{}} = O_{{t_i}{}}(|\pi|^{m}) \end{align*} which leads directly to \begin{eqnarray}gin{align} \eta^Z_i = O(|\pi|^{2m}),\quad i\le n-r. \label{eq trunc error gene Z} \end{align} 2.a We now study the truncation error for the Y-part. Let us introduce \begin{eqnarray}gin{align*} {\ensuremath{\bar{a}}}r{Y}_{{t_i}{}} &=\EFp{{t_i}{}}{\sum_{j=1}^r a_jY_{{t_i}{+j}} + h \sum_{j=0}^r b_{j} f(Y_{{t_i}{+j}},Z_{{t_i}{+j}}) } \end{align*} We have that \begin{eqnarray}gin{align*} \check{Y}_{{t_i}{}} &= {\ensuremath{\bar{a}}}r{Y}_{{t_i}{}} + h b_{0} \Big(f(\check{Y}_{{t_i}{}},\check{Z}_{{t_i}{}}) - f(Y_{{t_i}{}},Z_{{t_i}{}})\Big) \end{align*} Since $f$ is Lipschitz-continuous, we get that for $|\pi|$ small enough, \begin{eqnarray}gin{align} \label{eq majo Y 1} \check{Y}_{{t_i}{}} - Y_{{t_i}{}} = O_{{t_i}{}}({\ensuremath{\bar{a}}}r{Y}_{{t_i}{}} - Y_{{t_i}{}}) + |\pi| O_{{t_i}{}}(\check{Z}_{{t_i}{}} - Z_{{t_i}{}}) \;. \end{align} 2.b Now observe that \begin{eqnarray}gin{align*} {\ensuremath{\bar{a}}}r{Y}_{{t_i}{}} &= \EFp{{t_i}{}}{\sum_{j=1}^r a_j u({t_i}{+j},X_{{t_i}{+j}}) - h \sum_{j=0}^r b_j u^{(0)}({t_i}{+j},X_{{t_i}{+j}}) } \;. \end{align*} Using Proposition \ref{pr weak expansion Y}, we compute \begin{eqnarray}gin{align*} {\ensuremath{\bar{a}}}r{Y}_{{t_i}{}} = \sum_{p=0}^m \sum_{j=1}^r a_j j^p \frac{h^p}{p!} u^{(0)_p}({t_i}{},X_{{t_i}{}}) - \sum_{p=0}^{m-1} \sum_{j=0}^r b_j j^{p} \frac{h^{p+1}}{p!} u^{(0)_{p+1}}({t_i}{},X_{{t_i}{}}) +O_{{t_i}{}}(|\pi|^{m+1}) \end{align*} which leads to \begin{eqnarray}gin{align*} {\ensuremath{\bar{a}}}r{Y}_{{t_i}{}} & - Y_{{t_i}{}} = (\sum_{j=1}^r a_j-1) u({t_i}{},X_{{t_i}{}}) + \sum_{p=1}^m \frac{h^p}{p!}u^{(0)_p}({t_i}{},X_{{t_i}{}})(\sum_{j=1}^r a_j j^p - p\sum_{j=0}^r j^{p-1}b_j ) +O_{{t_i}{}}(|\pi|^{m+1}). \end{align*} Under $(C^Y)_m$, we thus get \begin{eqnarray}gin{align*} {\ensuremath{\bar{a}}}r{Y}_{{t_i}{}} & - Y_{{t_i}{}} = O_{{t_i}{}}(|\pi|^{m+1}) \end{align*} 2.c Combining the last inequality with \eqref{eq majo Y 1} and \eqref{eq trunc error gene Z}, we then obtain \begin{eqnarray}gin{align*} \eta^Y_i = O(|\pi|^{2m+2})\;,\; i \le n-r\;. \end{align*} 3. Combining the last equation with \eqref{eq trunc error gene Z}, we conclude that \begin{eqnarray}gin{align*} {\cal T}(\pi)= O(|\pi|^{2m}) , \end{align*} and so the scheme is of order $m$. 
\eproof \subsection{Convergence results and examples of high order methods} \begin{Theorem} \label{th main conv res} Under \HYP{c}, assuming that the scheme is of order $m$ according to Definition \ref{de order} and that \begin{align} \max_{0\le j \le r-1} \esp{|Y_{t_{n-j}} - Y_{n-j}|^2 + h|Z_{t_{n-j}}- Z_{n-j}|^2} \le C|\pi|^{2m}, \label{eq order cond term} \end{align} we have \begin{align*} {\cal E}_Y(\pi) + {\cal E}_Z(\pi) \le C |\pi|^{2m}\;. \end{align*} \end{Theorem} \proof We simply observe that the solution $(Y,Z)$ of the BSDE is also the solution of a perturbed scheme with $\errYi{}:= \check{Y}_{t_i}-Y_{t_i}$ and $\errZi{}:= \check{Z}_{t_i}-Z_{t_i}$. The proof then follows directly from Proposition \ref{pr L2 stab}. \eproof In particular, in the special setting of paragraph \ref{subsubse order m}, a straightforward application of Theorem \ref{th main conv res} and Proposition \ref{pr order m} leads to \begin{Corollary}\label{co gen res} Under \HYP{c} and $(C^Y)_m$-$(C^Z)_m$, assuming that \eqref{eq order cond term} holds, we have \begin{align*} {\cal E}_Y(\pi) + {\cal E}_Z(\pi) \le C |\pi|^{2m}\;, \end{align*} provided $u \in {\cal G}^{m+1}_b$ and $\psi \in {\cal B}^{m-1}_{[0,1]}$, $\phi \in {\cal B}^{m-2}_{[0,1]}$. \end{Corollary} To illustrate the previous results, we conclude this section by giving two examples of high order methods which can be designed using Corollary \ref{co gen res}. \begin{Example} (Nystr\"om's method) \label{ex leapfrog} The following scheme is --for the $Y$-part-- inspired by the \emph{leap-frog (or Nystr\"om) method} for ODEs, namely \begin{align*} \left \{ \begin{array}{rcl} Y_{i} &=& \EFp{t_i}{ Y_{i+2} + 2h f(Y_{i+1},Z_{i+1}) } \\ Z_{i} &=&\EFp{t_i}{ H^\psi_{t_i,2h} Y_{i+2} + 2 h H^\phi_{t_i,2h} f(Y_{i+2},Z_{i+2})}. \end{array} \right. \end{align*} This 2-step method is convergent and the rate of convergence is at least of order 2, assuming that $u \in {\cal G}^3_b$ and $\psi \in {\cal B}^1_{[0,1]}$, $\phi \in {\cal B}^0_{[0,1]}$. \end{Example} \begin{Example} (Milne's method) \label{ex milne's} The second scheme we propose is inspired --for the $Y$-part-- by \emph{Milne's method} for ODEs, namely \begin{align*} \left \{ \begin{array}{rcl} Y_{i} &=& \EFp{t_i}{ Y_{i+4} + h\Big(\frac83 f(Y_{i+1},Z_{i+1}) - \frac43f(Y_{i+2},Z_{i+2}) + \frac83 f(Y_{i+3},Z_{i+3}) \Big) } \\ Z_{i} &=& \EFp{t_i}{ H^\psi_{t_i,4h} Y_{i+4} + h \Big( \frac83H^\phi_{t_i,h} f(Y_{i+1},Z_{i+1}) -\frac43H^\phi_{t_i,2h} f(Y_{i+2},Z_{i+2}) + \frac83H^\phi_{t_i,3h} f(Y_{i+3},Z_{i+3}) \Big) } . \end{array} \right. \end{align*} This 4-step method is convergent and the rate of convergence is at least of order 4, assuming that $u \in {\cal G}^5_b$ and $\psi \in {\cal B}^3_{[0,1]}$, $\phi \in {\cal B}^2_{[0,1]}$. \end{Example} \section{Adams Methods} In this section, we introduce methods for BSDEs inspired by Adams methods from the ODE framework.
These methods are of two kinds: explicit methods, also called Adams-Bashforth methods, and implicit methods, also called Adams-Moulton methods. The schemes introduced in Definition \ref{de multi-step scheme} are always explicit for the $Z$-part but may be implicit for the $Y$-part. So, for the $Z$-part, we use an Adams-Bashforth type approximation, which may then be combined with an explicit or implicit approximation for the $Y$-part. We first study methods combining an Adams-Moulton type approximation for the $Y$-part and an Adams-Bashforth type approximation for the $Z$-part. We show that these methods are very efficient, since a high order rate of convergence can be achieved, assuming smoothness of the value function. We then briefly discuss the case of explicit methods, i.e. Adams-Bashforth type approximations for both the $Y$-part and the $Z$-part. At the end of this section, we use these Adams type approximations to design Predictor-Corrector methods for BSDEs. \subsection{Implicit methods} These methods are inspired by the Adams-Moulton method for the $Y$-part and the Adams-Bashforth method for the $Z$-part. They have the following form, for $i\le n-r$, \begin{align*} (AMB)_r\,: \;\left \{ \begin{array}{rcl} Y_{i} &=& \EFp{t_i}{ Y_{i+1} + h_i \sum_{j=0}^r b_{i,j,r} f(Y_{i+j},Z_{i+j}) } \\ Z_{i} &=&\EFp{t_i}{ H^\psi_{t_i,h} Y_{i+1} + h_i \sum_{j=1}^r \beta_{i,j,r} H^\phi_{t_i,jh} f(Y_{i+j},Z_{i+j}) } \end{array} \right. \end{align*} where $\psi,\phi \in {\cal B}^0_{[0,1]}$. The coefficients for the $Y$-part are given by \begin{align} b_{i,j,r} = \frac1{h_{i}}\int_{t_i}^{t_{i+1}} L_{i,j,r}(t) \ud t\, ,\; \text{ with } L_{i,j,r}(t) = \prod_{k=0,k \neq j}^{r} \frac{t-t_{i+k} }{t_{i+j}-t_{i+k} } \;, \; 0 \le j \le r. \label{eq de b AM} \end{align} The Lagrange polynomials $L_{i,j,r}$ are of degree $r$ and $L_{i,j,r}(t_{i+j})=1$, which implies \begin{align} \sum_{j=0}^r (t_{i+j}-t_i)^k L_{i,j,r}(t) = (t-t_i)^k\,,\quad 0\le k \le r\,. \label{eq useful Lijr} \end{align} The definition of the $b$-coefficients means that \begin{align*} Y_{i} = \EFp{t_i}{ Y_{i+1} + \int_{t_i}^{t_{i+1}} Q^Y_{i,r}(t) \ud t } \end{align*} where $Q^Y_{i,r}$ is a polynomial of degree at most $r$ satisfying \begin{align*} Q^Y_{i,r}(t_{i+j}) = f(Y_{i+j},Z_{i+j})\;,\; 0 \le j \le r\;. \end{align*} In the case where the time step is constant, the coefficients do not depend on $i$ and are given by \begin{align*} b_{j,r} = \int_0^1 \ell_{j,r}(s) \ud s\, , \; \text{ with } \ell_{j,r}(s) = \prod_{k=0,k \neq j}^{r} \frac{s-k}{j-k} \;,\; 0 \le j \le r\;. \end{align*} The coefficients for the $Z$-part are given by \begin{align} \beta_{i,j,r} = \frac1{h_{i}}\int_{t_i}^{t_{i+1}} \tilde{L}_{i,j,r}(t) \ud t\, , \; 1 \le j \le r, \text{ with } \tilde{L}_{i,j,r}(t) = \prod_{k=1,k \neq j}^r \frac{t-t_{i+k} }{t_{i+j}-t_{i+k} } \;.
\label{eq de tilde L} \end{align} The Lagrange polynomials $\tilde{L}_{i,j,r}$ are of degree $r-1$ and $\tilde{L}_{i,j,r}(t_{i+j})=1$, which implies \begin{align} \sum_{j=1}^r (t_{i+j}-t_i)^k \tilde{L}_{i,j,r}(t) = (t-t_i)^k\,,\quad 0\le k \le r-1\,. \label{eq useful tilde Lijr} \end{align} The definition of the $\beta$-coefficients means that \begin{align*} Z_{i} = \EFp{t_i}{ H^\psi_{t_i,h} Y_{i+1} + \int_{t_i}^{t_{i+1}} Q^Z_{i,r}(t) \ud t } \end{align*} where $Q^Z_{i,r}$ is a polynomial of degree at most $r-1$ satisfying \begin{align*} Q^Z_{i,r}(t_{i+j}) = H^\phi_{t_i,jh} f(Y_{i+j},Z_{i+j})\;,\; 1 \le j \le r\;. \end{align*} In the case where the time step is constant, the coefficients do not depend on $i$ and are given by \begin{align*} \beta_{j,r} = \int_0^1 \tilde{\ell}_{j,r}(s) \ud s\, , \; \text{ with } \tilde{\ell}_{j,r}(s) = \prod_{k=1,k \neq j}^r \frac{s-k}{j-k} \;. \end{align*} When the time step is constant, the table below gives the $b$-coefficients and $\beta$-coefficients for $r \le 4$: \begin{align*} \begin{array}{c|ccccc|cccc} r & b_0 & b_1 & b_2 & b_3 & b_4& \beta_1 & \beta_2 & \beta_3 & \beta_4 \\ \hline 1 & \frac12 & \frac12 & & & & 1 & & & \\ 2 & \frac5{12} & \frac8{12} & - \frac1{12}& & & \frac32 & - \frac12 & & \\ 3 & \frac9{24}& \frac{19}{24} & -\frac5{24} & \frac1{24} & & \frac{23}{12} & - \frac{16}{12} & \frac{5}{12} & \\ 4 & \frac{251}{720} & \frac{646}{720} &-\frac{264}{720} & \frac{106}{720} & -\frac{19}{720} & \frac{55}{24} & - \frac{59}{24} & \frac{37}{24} & - \frac9{24} \end{array} \end{align*} \begin{Proposition}\label{pr AM r} The $(AMB)_r$ method is convergent and at least of order $r+1$, provided that $\psi \in {\cal B}^{r}$, $\phi \in {\cal B}^{r-1}$ and $u \in {\cal G}^{r+2}_b$. \end{Proposition} \proof 1. The stability of the scheme comes from a direct application of Proposition \ref{pr L2 stab}, since $\HYP{c}$ obviously holds for $(AMB)_r$. Following Theorem \ref{th main conv res}, we only have to study the order of the method. 2.a We first study the truncation error for the $Z$-part.
Observe that, recalling \eqref{eq def hZ}, \begin{eqnarray}gin{align*} \check{Z}_{{t_i}{}} &:= \EFp{{t_i}{}}{H^\psi_{{t_i}{},h} Y_{{t_i}{+1}} + \sum_{j=1}^r H^\phi_{{t_i}{},jh} f(Y_{{t_i}{+j}},Z_{{t_i}{+j}}){\rm (i)}nt_{{t_i}{}}^{{t_i}{+1}} L_{i,j,r}(t) \ud t } \\ &= \EFp{{t_i}{}}{H^\psi_{{t_i}{},h} u({t_i}{+1},X_{{t_i}{+1}}) - \sum_{j=1}^r H^\phi_{{t_i}{},jh} u^{(0)}({t_i}{+j},X_{{t_i}{+j}}){\rm (i)}nt_{{t_i}{}}^{{t_i}{+1}} L_{i,j,r}(t) \ud t } \end{align*} Using Proposition \ref{pr weak expansion Z}, we get \begin{eqnarray}gin{align*} \check{Z}_{{t_i}{}} - Z_{{t_i}{}} = \sum_{k=1}^{r}\frac{h_{i}^k}{k!}u^{(1)*(0)_k}({t_i}{},X_{{t_i}{}}) - \sum_{j=1}^r {\rm (i)}nt_{{t_i}{}}^{{t_i}{+1}} {t_i}lde{L}_{i,j,r}(t) \ud t \sum_{k=0}^{r-1} \frac{u^{(1)*(0)_{k+1}}({t_i}{},X_{{t_i}{}})}{k!}({t_i}{+j}-{t_i}{})^{k} + O_{{t_i}{}}(|\pi|^{r+1}) \end{align*} which reads also \begin{eqnarray}gin{align*} \check{Z}_{{t_i}{}} - Z_{{t_i}{}} = \sum_{k=1}^{r}\frac{h_{i}^k}{k!}u^{(1)*(0)_k}({t_i}{},X_{{t_i}{}}) - \sum_{k=0}^{r-1} \frac{u^{(1)*(0)_{k+1}}({t_i}{},X_{{t_i}{}})}{k!} {\rm (i)}nt_{{t_i}{}}^{{t_i}{+1}} \sum_{j=1}^r ({t_i}{+j}-{t_i}{})^{k} {t_i}lde{L}_{i,j,r}(t) \ud t + O_{{t_i}{}}(|\pi|^{r+1})\;. \end{align*} Using \eqref{eq useful tilde Lijr}, we obtain \begin{eqnarray}gin{align*} \check{Z}_{{t_i}{}} - Z_{{t_i}{}} & = \sum_{k=1}^r\Big(\frac{h_{i}^k}{k!}- \frac1{(k-1)!}{\rm (i)}nt_{{t_i}{}}^{{t_i}{+1}} (t-{t_i}{})^{k-1}\ud t \Big)u^{(1)*(0)_k}({t_i}{},X_{{t_i}{}}) + O_{{t_i}{}}(|\pi|^{r+1}) \\ & = O_{{t_i}{}}(|\pi|^{r+1}). \end{align*} 2.b We now study the truncation error for the $Y$ part. Let us define, \begin{eqnarray}gin{align*} {\ensuremath{\bar{a}}}r{Y}_{{t_i}{}} &:= \EFp{{t_i}{}}{Y_{{t_i}{+1}} + \sum_{j=0}^r f(Y_{{t_i}{+j}},Z_{{t_i}{+j}}){\rm (i)}nt_{{t_i}{}}^{{t_i}{+1}} L_{i,j,r}(t) \ud t } \end{align*} Observe that \begin{eqnarray}gin{align*} \check{Y}_{{t_i}{}} &:= {\ensuremath{\bar{a}}}r{Y}_{{t_i}{}} + \Big( f(\check{Y}_{{t_i}{}},\check{Z}_{{t_i}{}})-f(Y_{{t_i}{}},Z_{{t_i}{}}) \Big) {\rm (i)}nt_{{t_i}{}}^{{t_i}{+1}} L_{i,0,r}(t) \ud t \end{align*} which leads since $f$ is Lipschitz continuous, for $|\pi|$ small enough, to \begin{eqnarray}gin{align}\label{eq adams interm 1} \check{Y}_{{t_i}{}} - Y_{{t_i}{}}= O_{{t_i}{}}(|{\ensuremath{\bar{a}}}r{Y}_{{t_i}{}}- Y_{{t_i}{}}|) + |\pi|O_{{t_i}{}}(|\check{Z}_{{t_i}{}}- Z_{{t_i}{}}|)\;. \end{align} Now, \begin{eqnarray}gin{align*} {\ensuremath{\bar{a}}}r{Y}_{{t_i}{}} &= \EFp{{t_i}{}}{u({t_i}{+1},X_{{t_i}{+1}}) - \sum_{j=0}^r u^{(0)}({t_i}{+1},X_{{t_i}{+1}}){\rm (i)}nt_{{t_i}{}}^{{t_i}{+1}} L_{i,j,r}(t) \ud t } \end{align*} Using Proposition \ref{pr weak expansion Y}, we get \begin{eqnarray}gin{align*} {\ensuremath{\bar{a}}}r{Y}_{{t_i}{}} - Y_{{t_i}{}} = \sum_{k=1}^{r+1}\frac{h_{i}^k}{k!}u^{(0)_k}({t_i}{},X_{{t_i}{}}) - \sum_{j=0}^r {\rm (i)}nt_{{t_i}{}}^{{t_i}{+1}} L_{i,j,r}(t) \ud t \sum_{k=0}^{r} \frac{u^{(0)_{k+1}}({t_i}{},X_{{t_i}{}})}{k!}({t_i}{+j}-{t_i}{})^{k} + O_{{t_i}{}}(|\pi|^{r+2}) \end{align*} which reads also \begin{eqnarray}gin{align*} {\ensuremath{\bar{a}}}r{Y}_{{t_i}{}} - Y_{{t_i}{}} = \sum_{k=1}^{r+1}\frac{h_{i}^k}{k!}u^{(0)_k}({t_i}{},X_{{t_i}{}}) - \sum_{k=0}^{r} \frac{u^{(0)_{k+1}}({t_i}{},X_{{t_i}{}})}{k!} {\rm (i)}nt_{{t_i}{}}^{{t_i}{+1}} \sum_{j=0}^r ({t_i}{+j}-{t_i}{})^{k} L_{i,j,r}(t) \ud t + O_{{t_i}{}}(|\pi|^{r+2})\;. 
\end{align*} Using \eqref{eq useful Lijr}, we obtain \begin{align*} \bar{Y}_{t_i} - Y_{t_i} & = \sum_{k=1}^{r+1}\Big(\frac{h_{i}^k}{k!}- \frac1{(k-1)!}\int_{t_i}^{t_{i+1}} (t-t_i)^{k-1}\ud t \Big)u^{(0)_k}(t_i,X_{t_i}) + O_{t_i}(|\pi|^{r+2}) \\ & = O_{t_i}(|\pi|^{r+2}). \end{align*} Thus, using \eqref{eq adams interm 1}, \begin{align*} \check{Y}_{t_i} - Y_{t_i} = O_{t_i}(|\pi|^{r+2}). \end{align*} 2.c Combining the results of steps 2.a and 2.b, we obtain \begin{align*} {\cal T}(\pi)= O(|\pi|^{2(r+1)})\;, \end{align*} which concludes the proof. \eproof \subsection{Explicit methods} These methods are inspired by the Adams-Bashforth method for both the $Y$-part and the $Z$-part: \begin{align*} (ABB)_r \left \{ \begin{array}{rcl} Y_{i} &=& \EFp{t_i}{ Y_{i+1} + h_{i} \sum_{j=1}^r b_{i,j,r} f(Y_{i+j},Z_{i+j}) } \\ Z_{i} &=&\EFp{t_i}{ H^\psi_{t_i,h} Y_{i+1} + h_i \sum_{j=1}^r \beta_{i,j,r} H^\phi_{t_i,jh} f(Y_{i+j},Z_{i+j}) } \end{array} \right. \end{align*} where $\psi, \phi \in {\cal B}^0_{[0,1]}$. Now, the coefficients for the $Y$-part are given by \begin{align*} b_{i,j,r} = \frac1{h_{i}}\int_{t_i}^{t_{i+1}} \tilde{L}_{i,j,r}(t) \ud t\, , \end{align*} recalling \eqref{eq de tilde L}. From step 2.a of the proof of Proposition \ref{pr AM r}, we know that we can obtain a truncation error for the $Z$-part such that $\check{Z}_{t_i} - Z_{t_i} =O_{t_i}(|\pi|^{r+1})$. But here, because the $Y$-part is explicit, the global error is only of order $r$, so a truncation error of order $r$ for the $Z$-part is sufficient. This simply means that the scheme, for the $Z$-part, has one more coefficient than needed. One can thus set \begin{align*} \beta_{i,j,r} = b_{i,j,r} \end{align*} or \begin{align*} \beta_{i,r,r}= 0 \text{ and } \beta_{i,j,r} = b_{i,j,r-1}, \; 1\le j \le r-1\;. \end{align*} Following the arguments of the proof of Proposition \ref{pr AM r}, one obtains \begin{Proposition}\label{pr AB r} The $(ABB)_r$ method is convergent and at least of order $r$, provided that $\psi \in {\cal B}^{r-1}$, $\phi \in {\cal B}^{r-2}$ and $u \in {\cal G}^{r+1}_b$. \end{Proposition} \subsection{Predictor-Corrector methods} These methods are fully explicit but have a better rate of convergence than the $(ABB)_r$ methods presented above. Nevertheless, they require the computation of one more conditional expectation per step. In practice, this has to be compared with the Picard iterations required by the $(AMB)_r$ approximation.
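The Adams schemes above and the Predictor-Corrector schemes introduced next rely on the same $b$- and $\beta$-coefficients, which are nothing but integrals of Lagrange basis polynomials. As a sanity check, they can be recomputed symbolically; the following minimal Python/\texttt{sympy} sketch tabulates the constant-step values for $r\le 4$ and reproduces the table given in the subsection on implicit methods.
\begin{verbatim}
import sympy as sp

s = sp.symbols('s')

def lagrange_weight(nodes, node):
    # integral over [0,1] of the Lagrange basis polynomial attached to `node`
    L = sp.Integer(1)
    for k in nodes:
        if k != node:
            L *= (s - k) / sp.Integer(node - k)
    return sp.integrate(L, (s, 0, 1))

for r in range(1, 5):
    b = [lagrange_weight(range(0, r + 1), j) for j in range(0, r + 1)]      # nodes 0,...,r
    beta = [lagrange_weight(range(1, r + 1), j) for j in range(1, r + 1)]   # nodes 1,...,r
    print(r, b, beta)
# e.g. r = 2 gives b = [5/12, 2/3, -1/12] and beta = [3/2, -1/2]
\end{verbatim}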
\begin{eqnarray}gin{Definition} \label{de pece} \begin{eqnarray}gin{align} (PC)_r \left \{ \begin{eqnarray}gin{array}{rcl} \ensuremath{ Z^{i} }{} &=&\EFp{{t_i}{}}{ H^\psi_{{t_i}{},h} \ensuremath{Y }i{+1 } + h_{i} \sum_{j=1}^r \begin{eqnarray}ta_{i,j,r} H^\phi_{{t_i}{},jh} f(\ensuremath{Y }i{+j},\ensuremath{ Z^{i} }{+j}) } \\ {}^p\ensuremath{Y }i{} &=& \EFp{{t_i}{}}{ \ensuremath{Y }i{+1} + h_{i} \sum_{j=1}^r \begin{eqnarray}ta_{i,j,r} f(\ensuremath{Y }i{+j},\ensuremath{ Z^{i} }{+j}) } \\ \ensuremath{Y }i{} &=& \EFp{{t_i}{}}{ \ensuremath{Y }i{+1} + h_{i} \sum_{j=1}^r b_{i,j,r} f(\ensuremath{Y }i{+j},\ensuremath{ Z^{i} }{+j}) } + h_{i} b_{i,0,r} f({}^p\ensuremath{Y }i{},\ensuremath{ Z^{i} }{}) \end{array} \right. \label{eq de pece} \end{align} where $\psi, \phi {\rm (i)}n {\cal B}^0_{[0,1]}$. The $b$-coefficients are given by \eqref{eq de b AM} and the $\begin{eqnarray}ta$-coefficients by \eqref{eq de tilde L}. \end{Definition} \begin{eqnarray}gin{Theorem} \label{th conv pece} The $(PC)_r$ method is convergent and at least of order $r+1$, provided that $\psi {\rm (i)}n {\cal B}^{r}$, $\phi {\rm (i)}n {\cal B}^{r-1}$ and $u {\rm (i)}n {\cal G}^{r+2}_b$. \end{Theorem} As usual, the proof of this Theorem is splitted in two steps below. We first study the stability of the above schemes and then their truncation errors. \subsubsection{Stability} To study the \emph{stability} of the methods \eqref{eq de pece}, we introduce first a pertubed version of the scheme \begin{eqnarray}gin{align*} \left \{ \begin{eqnarray}gin{array}{rcl} \ensuremath{\widetilde Z }i{} &=&\EFp{{t_i}{}}{ H^\psi_{{t_i}{},h} \ensuremath{\widetilde Y }i{+1 } + h_{i} \sum_{j=1}^r \begin{eqnarray}ta_{i,j,r} H^\phi_{{t_i}{},jh} f(\ensuremath{\widetilde Y }i{+j},\ensuremath{\widetilde Z }i{+j}) + \errZi{}} \\ {}^p\ensuremath{\widetilde Y }i{} &=& \EFp{{t_i}{}}{ \ensuremath{\widetilde Y }i{+1} + h_{i} \sum_{j=1}^r b_{i,j,r} f(\ensuremath{\widetilde Y }i{+j},\ensuremath{\widetilde Z }i{+j}) } \\ \ensuremath{\widetilde Y }i{} &=& \EFp{{t_i}{}}{ \ensuremath{\widetilde Y }i{+1} + h_{i} \sum_{j=1}^r b^*_{i,j,r} f(\ensuremath{\widetilde Y }i{+j},\ensuremath{\widetilde Z }i{+j}) + h_{i} b^*_{i,0,r} f({}^p\ensuremath{\widetilde Y }i{},\ensuremath{\widetilde Z }i{}) + \errYi{}} \end{array} \right. \end{align*} where $ \errYi{}$, $\errZi{}$ are random variables belonging to $L^2({\cal F}_{{t_i}{}})$, for $i \le n-r$. \begin{eqnarray}gin{Proposition} \label{pr pc stab} The scheme given in \eqref{eq de pece} is $L^2$-stable, recalling Definition \ref{de stab}. \end{Proposition} \proof For $|\pi|$ small enough, we compute, denoting ${}^p \dY = {}^pY - {}^p{t_i}lde{Y} $, \begin{eqnarray}gin{align} \esp{|{}^p\dYi{}|^2} & \le (1+ |\pi|)\esp{|\dYi{+1}|^2} + C\sum_{j=1}^rh_i \esp{|\dYi{+j}|^2 +|\dZi{+j}|^2 } \label{eq stab pece 1} \\ \esp{|\dYi{}|^2} & \le (1+ |\pi|)\esp{|\dYi{+1}|^2} + C\Big(h_i \esp{|{}^p\dYi{}|^2} + \sum_{j=1}^rh_i \esp{|\dYi{+j}|^2 +|\dZi{+j}|^2 } \Big) + \frac{C}{h_i}|\errYi{}|^2 \label{eq stab pece 2} \end{align} Plugging \eqref{eq stab pece 1} into \eqref{eq stab pece 2} and using the discrete version of Gronwall's Lemma, we obtain \begin{eqnarray}gin{align}\label{eq pdY interm 1} \esp{|\dYi{}|^2} \le C\Big( |\pi|\sum_{k=i}^{n-r}\esp{|\dYk{}|^2} + \sum_{k=i}^{n-r}h_k\esp{|\dZk{}|^2} &+ \sum_{k=i}^{n-r}\frac1{h_k}\esp{|\errYk{}|^2} + \sum_{k=n-r+1}^{n}\esp{|\dY_{k}|^2} \Big) \;. 
\end{align} Using the same arguments as in step 1.b of the proof of Proposition \ref{pr L2 stab}, we retrieve that \begin{eqnarray}gin{align} \sum_{k=i}^{n-r} h_k \esp{|\dZk{}|^2} &\le C\Big( \sum_{k=n-r+1}^{n}\esp{|\dYk{}|^2+|\pi||\dZk{}|^2} + |\pi| \sum_{k=i}^{n-r}\esp{|\dYk{}|^2} + \sum_{k=i}^{n-r}\esp{\frac1{h_k}|{\errYk{}}|^2 +h_k|{\errZk{}}|^2} \Big) \label{eq stab interm Z pece} \end{align} This leads, using \eqref{eq pdY interm 1}, \begin{eqnarray}gin{align} \esp{|\dYi{}|^2} \le C\Big( |\pi|\sum_{j=i}^{n-r}\esp{|\dYj{}|^2} + \sum_{k=i}^{n-r}|\pi|\esp{\frac1{h_k^2}|\EFp{\tk{}}{\errYk{}}|^2 +|\EFp{\tk{}}{\errZk{}}|^2} + \sum_{k=n-r+1}^{n}\esp{|\dY_{k}|^2+|\pi||\dZk{}|^2} \Big) \end{align} which corresponds to \eqref{eq stab Y step 1.c-1}. The proof is then concluded using the same arguments as in step $2$ of the proof of Proposition \ref{pr L2 stab}. \eproof \subsubsection{Truncation error} \begin{eqnarray}gin{Proposition}\label{pr pc truncation error} The scheme given in Definition \ref{de pece} is at least of order $r+1$ provided that $\psi {\rm (i)}n {\cal B}^{r}$, $\phi {\rm (i)}n {\cal B}^{r-1}$ and $u {\rm (i)}n {\cal G}^{r+2}_b$. \end{Proposition} \proof 1. The truncation error for the $Z$-part is the same that the one of the $(AMM)_r$ method. From the proof of Proposition \ref{pr AM r} step 1, we get \begin{eqnarray}gin{align*} \check{Z}_{{t_i}{}} - Z_{{t_i}{}} = O_{{t_i}{}}(|\pi|^{r+1})\;. \end{align*} 2. The study of the truncation error for the Y-part is slightly more involved. Let us define \begin{eqnarray}gin{align*} Y^*_{{t_i}{}} := \EFp{{t_i}{}}{ Y_{{t_i}{+1}} + h_{i} \sum_{j=1}^r b_{i,j,r} f(Y_{{t_i}{+j}},Z_{{t_i}{+j}}) } + h_{i} b_{i,0,r} f(Y^*_{{t_i}{}},\hat{Z}_{{t_i}{}}) \end{align*} Using the proof of Proposition \ref{pr AM r} step 2, we know that \begin{eqnarray}gin{align} \label{eq trunc error corr} Y^*_{{t_i}{}} - Y_{{t_i}{}} = O_{{t_i}{}}(|\pi|^{r+2}) \end{align} this quantity represents the truncation error for the Y-part of the Adams-Moulton method. We also define \begin{eqnarray}gin{align*} {}^p\check{Y}_{{t_i}{}}&= \EFp{{t_i}{}}{ Y_{{t_i}{+1}} + h_{i} \sum_{j=1}^r \begin{eqnarray}ta_{i,j,r} f(Y_{{t_i}{+j}},Z_{{t_i}{+j}}) } \\ \check{Y}_{{t_i}{}}&= \EFp{{t_i}{}}{ Y_{{t_i}{+1}} + h_{i} \sum_{j=1}^r b_{i,j,r} f(Y_{{t_i}{+j}},Z_{{t_i}{+j}}) } + h_{i} b_{i,0,r} f({}^p\check{Y}_{{t_i}{}},\check{Z}_{{t_i}{}}) \end{align*} The term ${}^p\check{Y}_{{t_i}{}}- Y_{{t_i}{}}$ represents then the truncation error for the Predictor part, adapting the arguments of Proposition \ref{pr AM r} step 1, we have \begin{eqnarray}gin{align} \label{eq trunc error pred} {}^p\check{Y}_{{t_i}{}}- Y_{{t_i}{}} = O_{{t_i}{}}(|\pi|^{r+1}) \end{align} The term $\check{Y}_{{t_i}{}}- Y_{{t_i}{}}$ is the truncation error we are interested in. 
We then observe that \begin{align*} \check{Y}_{t_i} &= Y^*_{t_i} + h_{i} b_{i,0,r} \Big( f({}^p\check{Y}_{t_i},\check{Z}_{t_i})- f(Y^*_{t_i},\check{Z}_{t_i}) \Big) \\ &= Y^*_{t_i} + h_{i} b_{i,0,r} \Big( f({}^p\check{Y}_{t_i},\check{Z}_{t_i})-f(Y_{t_i},\check{Z}_{t_i}) + f(Y_{t_i},\check{Z}_{t_i})- f(Y^*_{t_i},\check{Z}_{t_i}) \Big) \end{align*} Since $f$ is Lipschitz continuous, we obtain \begin{align*} \check{Y}_{t_i} - Y_{t_i} = O_{t_i}(| Y^*_{t_i} - Y_{t_i} |) + |\pi|O_{t_i}(|{}^p\check{Y}_{t_i}- Y_{t_i}| ) \end{align*} Combining \eqref{eq trunc error corr} and \eqref{eq trunc error pred}, we then obtain \begin{align*} \check{Y}_{t_i} - Y_{t_i} = O_{t_i}(|\pi|^{r+2})\;. \end{align*} 3. From steps 1 and 2 above, we obtain that \begin{align*} {\cal T}(\pi) = O(|\pi|^{2(r+1)}), \end{align*} which concludes the proof. \eproof \section{Numerical illustration} In this part, we provide a numerical illustration of the results presented above. The scheme given in Definition \ref{de multi-step scheme} is still a theoretical one, because in practice one has to compute the conditional expectations involved. Many methods have already been studied in the context of BSDEs: regression methods \cite{goblem06}, quantization methods \cite{balpag03,delmen06}, Malliavin calculus methods \cite{boutou04, bouwar11} and tree-based methods, e.g. cubature methods \cite{criman10}. To illustrate our previous results, i.e. the order of the time discretization error, we focus on the simple case where $d=1$ and $X=W$, in the spirit of \cite{zhazha10}. Obviously, further numerical experiments are needed, especially in high dimension. Since we are aiming at high order approximations, it seems reasonable to combine the present multi-step schemes with cubature methods \cite{criman10}. This is left for further research. In the sequel, we also assume that the $r$ terminal conditions are perfectly known. Generally, this will not be the case, but it is not really a problem; see Remark \ref{re intro} (i). We explain below how the Brownian motion is approximated and give the expression of the numerical scheme which is implemented in practice. We show that this scheme is convergent and characterise its convergence order. The error we are dealing with is now composed of the discrete time error and the space discretization error. Finally, we provide some numerical results, where we compute the empirical convergence rate. \subsection{Empirical schemes} In order to define the scheme implemented in practice, we use a multinomial approximation of the Brownian motion. Let us consider a discrete random variable $\xi$ matching the moments of a Gaussian random variable $G$ up to order $K$, i.e. \begin{align*} \esp{\xi^k} = \esp{G^k}, \quad 0 \le k \le K. \end{align*} In dimension 1, an efficient way to construct $\xi$ is to use a quadrature formula. On a (discrete, but large enough) probability space $(\widehat{\Omega}, \widehat{\mathbb{P}})$, we are then given $(\xi_i)_{1\le i \le n}$, i.i.d. random variables with the same law as $\xi$, and we define \begin{align} \Wpi{} = x + \sum_{j=1}^i \sqrt{t_{j}-t_{j-1}}\, \xi_j\;,\quad \forall \, t_i \in \pi\,. \label{de approx W} \end{align} For later use, we say that $\Wpi{}$ is an \emph{order $K$ approximation} of the Brownian motion.
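In dimension 1, a natural quadrature choice is the Gauss-Hermite rule: with $M$ quadrature points, the resulting discrete random variable matches the Gaussian moments up to order $K=2M-1$. A minimal Python sketch of this construction (relying only on \texttt{numpy}) reads as follows; the three-point rule below is the order $5$ approximation used in the experiments of the application subsection.
\begin{verbatim}
import numpy as np

def gaussian_quadrature_rv(M):
    # nodes and probabilities of a discrete variable matching the standard
    # Gaussian moments up to order 2M-1 (an order K = 2M-1 approximation)
    nodes, weights = np.polynomial.hermite_e.hermegauss(M)  # weight exp(-x^2/2)
    return nodes, weights / np.sqrt(2.0 * np.pi)

def gaussian_moment(k):
    # E[G^k] for G ~ N(0,1): 0 for odd k, (k-1)!! for even k
    return 0.0 if k % 2 else float(np.prod(np.arange(1, k, 2)))

nodes, probs = gaussian_quadrature_rv(3)  # nodes (-sqrt(3), 0, sqrt(3)), probs (1/6, 2/3, 1/6)
for k in range(6):                        # moments 0,...,5 are matched exactly
    assert abs(probs @ nodes**k - gaussian_moment(k)) < 1e-12

# one simulated path of the approximated Brownian motion, constant step h
h, n, rng = 0.01, 100, np.random.default_rng(0)
xi = rng.choice(nodes, size=n, p=probs)
W_hat = np.concatenate(([0.0], np.cumsum(np.sqrt(h) * xi)))
\end{verbatim}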
We also denote by $(\widehat{{\cal F}})_{t {\rm (i)}n \pi}$ the filtration generated by $(\widehat{W}^{0,x}_{t})_{t{\rm (i)}n \pi}$ and $\whEFp{t}{\cdot}$ the related conditional expectation. We can now define the numerical scheme which is used in practice. \begin{eqnarray}gin{Definition} (Linear multi-step) \label{de lms numerical scheme} (i) To initialise the scheme with $r$ steps, $r\ge 1$, we set, for $0 \le j \le r-1$, $$(Y_{n-j},Z_{n-j})=(u(t_{n-j},\widehat{W}^{0,x}_{t_{n-j}}),\partial_x u (t_{n-j},\widehat{W}^{0,x}_{t_{n-j}})).$$ (ii) For $i \le n-r$, the computation of $(Y_i,Z_i)$ involves $r$ steps and is given by \begin{eqnarray}gin{align} \label{de numerics scheme} \left \{ \begin{eqnarray}gin{array}{rcl} \whYi{} &=& \whEFp{{t_i}{}}{ \sum_{j=1}^r a_{j}\whYi{+j}+ h_i \sum_{j=0}^r b_{j} f(\whYi{+j},\whZi{+j}) } \\ \whZi{} &=&\whEFp{{t_i}{}}{ \sum_{j = 1}^r \whHi{,j} \Big( \alpha_{j} \whYi{+j} + h_i \sum_{j=1}^r \begin{eqnarray}ta_{j} f(\whYi{+j},\whZi{+j}) \Big) } \end{array} \right. \end{align} The coefficient $H_{i,j}$ are the discrete version of coefficients given in \eqref{eq def H scheme}. From Proposition \ref{pr weak expansion Z} (iii), we observe that, in the case $X=W$, they can simply be defined as approximation of the Brownian increment, i.e. \begin{eqnarray}gin{align} \label{de whH} \whHi{,j} = \frac{\Wpi{+j}-\Wpi{}}{{t_i}{+j}-{t_i}{}}\;. \end{align} \end{Definition} When implementing Predictor-Corrector methods, we use \begin{eqnarray}gin{Definition} (Predictor-Corrector) \label{de PC numerical scheme} \\ (i) To initialise the scheme with $r$ steps, $r\ge 1$, we set, for $0 \le j \le r-1$, $$(Y_{n-j},Z_{n-j})=(u(t_{n-j},\widehat{W}^{0,x}_{t_{n-j}}),\partial_x u (t_{n-j},\widehat{W}^{0,x}_{t_{n-j}})).$$ (ii) For $i \le n-r$, the computation of $(Y_i,Z_i)$ involves $r$ steps and is given by \begin{eqnarray}gin{align*} (PC)_r \left \{ \begin{eqnarray}gin{array}{rcl} \whZi{} &=&\whEFp{{t_i}{}}{ \whHi{,1} \whYi{+1} + h_i \sum_{j = 1}^r \whHi{,j} \sum_{j=1}^r \begin{eqnarray}ta_{j} f(\whYi{+j},\whZi{+j}) } \\ {}^p\whYi{} &=& \whEFp{{t_i}{}}{ \whYi{+1} + h_{i} \sum_{j=1}^r \begin{eqnarray}ta_{i,j,r} f(\whYi{+j},\whZi{+j}) } \\ \whYi{} &=& \whEFp{{t_i}{}}{ \whYi{+1} + h_{i} \sum_{j=1}^r b_{i,j,r} f(\whYi{+j},\whZi{+j}) } + h_{i} b_{i,0,r} f({}^p\whYi{},\whZi{}) \end{array} \right. \end{align*} where the $b$-coefficients are given by \eqref{eq de b AM}, the $\begin{eqnarray}ta$-coefficients by \eqref{eq de tilde L} and the $\widehat{H}$-coefficients by \eqref{de whH}. \end{Definition} \begin{eqnarray}gin{Proposition} \label{pr conv results} (i) In Definition \ref{de lms numerical scheme}, if we assume that the method given by the coefficients $a$, $b$, $\alpha$, $\begin{eqnarray}ta$ is of order $m$, according to Definition \ref{de order}, and that the multinomial approximation of the Brownian motion is of order $K = 2m+1$ then we have \begin{eqnarray}gin{align*} Y_0 - \wh{Y}_0 = O(h^m)\;, \end{align*} provided that the coefficient $f$ and the value function $u$ are smooth enough. (ii) In Definition \ref{de PC numerical scheme} for $(PC)_r$ method, if we assume that the multinomial approximation of the Brownian motion is of order $K = 2r+3$ then we have \begin{eqnarray}gin{align*} Y_0 - \wh{Y}_0 = O(h^{r+1})\;, \end{align*} provided that the coefficient $f$ and the value function $u$ are smooth enough. \end{Proposition} The proof of this proposition is postponed to the end of this section. We can now turn to a concrete example which illustrates the above order of convergence. 
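Before turning to that example, the following Python sketch illustrates the backward recursion of Definition \ref{de lms numerical scheme} in its simplest instance (implicit Euler: $r=1$, $a_1=\alpha_1=1$, $b_0=1$, $b_1=\beta_1=0$), using the three-point (order $5$) increment approximation, for which the set of reachable points is a recombining lattice. This is only a minimal sketch, not the implementation used to produce the figures below; the driver and terminal condition are those of the worked example of the next subsection, and the fixed number of Picard iterations for the implicit step is an arbitrary choice of this sketch.
\begin{verbatim}
import numpy as np

def implicit_euler_bsde(g, f, T=1.0, n=100, x0=0.0, n_picard=5):
    # Backward recursion Y_i = E_i[Y_{i+1}] + h f(Y_i, Z_i),
    # Z_i = E_i[(dW/h) Y_{i+1}], on the recombining trinomial lattice
    # generated by increments in {-sqrt(3h), 0, +sqrt(3h)}.
    h = T / n
    dx = np.sqrt(3.0 * h)
    p = np.array([1.0 / 6, 2.0 / 3, 1.0 / 6])   # probabilities of down/middle/up moves
    Y = g(x0 + dx * np.arange(-n, n + 1))       # terminal values on reachable points
    Z = np.zeros_like(Y)
    for i in range(n - 1, -1, -1):
        # point k at step i moves to points k-1, k, k+1 at step i+1
        Yd, Ym, Yu = Y[:-2], Y[1:-1], Y[2:]
        EY = p[0] * Yd + p[1] * Ym + p[2] * Yu  # E_i[Y_{i+1}]
        Z = (p[2] * Yu - p[0] * Yd) * dx / h    # E_i[(dW_{i+1}/h) Y_{i+1}]
        Y = EY
        for _ in range(n_picard):               # Picard iterations for the implicit step
            Y = EY + h * f(Y, Z)
    return Y[0], Z[0]

# driver and terminal condition of the worked example of the next subsection
T = 1.0
f = lambda y, z: -z * (0.75 - y)
g = lambda x: 1.0 / (1.0 + np.exp(-x - T / 4.0))
print(implicit_euler_bsde(g, f, T=T, n=200))    # exact values: Y_0 = 0.5, Z_0 = 0.25
\end{verbatim}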
\subsection{Application} As in \cite{zhazha10}, we consider the process, on $[0,T]$, \begin{align*} (X_t,Y_t,Z_t) = \Big(W_t,\frac1{1 + \exp(-W_t-\frac{t}{4})},\frac{\exp(-W_t-\frac{t}4)}{(1+\exp(-W_t-\frac{t}4))^2}\Big) \;. \end{align*} This process is the solution of the (decoupled) FBSDE \begin{align*} X_t &= W_t \\ Y_t &= g_T(W_T) + \int_t^T f(Y_s,Z_s) \ud s - \int_t^T Z_s \ud W_s \end{align*} where the driver $f$ and the terminal condition $g_T$ are given by \begin{align} f(y,z) = -z\Big(\frac34-y\Big)\;, \label{de num fyz} \quad\text{ and }\quad g_T(x) = \frac1{1 + \exp(-x-\frac{T}{4})}. \end{align} To approximate the value of $Y_0$, we consider the following methods: \begin{enumerate} \item Implicit Euler approximation, coupled with an order 3 Brownian approximation. \item Crank-Nicolson approximation, coupled with an order 5 Brownian approximation. \item Explicit two-step Adams method, coupled with an order 5 Brownian approximation. \item Implicit two-step Adams method, coupled with an order 7 Brownian approximation. \item Heun method, which is a Predictor-Corrector method, coupled with an order 5 Brownian approximation. \end{enumerate} The log-log graph in Fig$.$ 1 below shows the rates of convergence of the methods, which are in accordance with the theoretical ones. The Adams methods produce empirical rates slightly below the expected ones; nevertheless, the higher the theoretical convergence order, the smaller the error observed in practice. \begin{figure}[h!] \label{fig1} \includegraphics[width = \textwidth]{convergence4.pdf} \caption{Illustration of the convergence rate} \end{figure} The graph in Fig$.$ 2 below shows the impact of the space discretization on the global order of the method. The empirical convergence rates are in accordance with the theoretical ones. \begin{figure}[h!] \label{fig2} \includegraphics[width = \textwidth]{spaceerror2.pdf} \caption{Impact of space discretization} \end{figure} \subsection{Proof of Proposition \ref{pr conv results}} We only provide the proof of (i); the proof of (ii) follows from the same arguments, using Proposition \ref{pr pc stab} and Proposition \ref{pr pc truncation error}. 1. \emph{Notations.} We first need to consider a ``functional'' version of the schemes above. Let us introduce the following operators, related to the theoretical schemes given in Definition \ref{de multi-step scheme}.
$\Rzi{j}:(C^1_b)^{2}\rightarrow C^1_b$ \begin{eqnarray}gin{align*} \Rzi{j}[\varphi^Y,\varphi^Z](x) = \esp{H^{\mathbf{1},x}_{{t_i}{},jh} \Big( \alpha_{j} \varphi^Y(\Wxi{+j}) + h \begin{eqnarray}ta_{j} f(\varphi^Y(\Wxi{+j}),\varphi^Z(\Wxi{+j})) \Big)} \end{align*} $\Ryi{j}:(C^1_b)^{2}\rightarrow C^1_b$ \begin{eqnarray}gin{align*} \Ryi{j}[\varphi^Y,\varphi^Z](x) = \esp{a_{j} \varphi^Y(\Wxi{+j}) + h b_{j} f(\varphi^Y(\Wxi{+j}),\varphi^Z(\Wxi{+j})) } \end{align*} Similarly, let us define - for the fully discrete scheme - the operators $\whRzi{j}:(C^1_b)^{2}\rightarrow C^1_b$ \begin{eqnarray}gin{align*} \whRzi{j}[\varphi^Y,\varphi^Z](x) = \whesp{ \widehat{H}_{i,j} \Big( \alpha_{j} \varphi^Y(\whWxi{j}) + h \begin{eqnarray}ta_{j} f(\varphi^Y(\whWxi{j}),\varphi^Z(\whWxi{j})) \Big)} \end{align*} $\whRyi{j}:(C^1_b)^{2}\rightarrow C^1_b$ \begin{eqnarray}gin{align*} \whRyi{j}[\varphi^Y,\varphi^Z](x) = \whesp{a_{j} \varphi^Y(\whWxi{+j}) + h b_{j} f(\varphi^Y(\whWxi{+j}),\varphi^Z(\whWxi{+j})) } \end{align*} The functional version of the schemes given in Definition \ref{de lms numerical scheme} reads then, for $i \le n-r$, \begin{eqnarray}gin{align*} \left \{ \begin{eqnarray}gin{array}{rcl} \whyi{}(x) &=& \sum_{j=1}^r \whRyi{j}[\whyi{+j},\whzi{+j}](x) \\ \whzi{}(x) &=& \sum_{j=1}^r \whRzi{j}[\whyi{+j},\whzi{+j}](x) \end{array} \right. \end{align*} given $r$ initial data $(\widehat{y}_{n-j},\widehat{z}_{n-j})=(u(t_{n-j},\cdot), \partial_x u(t_{n-j},\cdot))$, $0\le j \le r-1$. Due to the markov property of the discrete process $(\widehat{W}^{0,x}_{t})_{t{\rm (i)}n \pi}$, it is easily checked that \begin{eqnarray}gin{align*} \whYi{} = \whyi{}(\Wpi{}) \text{ and } \whZi{} = \whzi{}(\Wpi{}) \;. \end{align*} Finally, we define \begin{eqnarray}gin{align*} \wtYi{} = u({t_i}{},\Wpi{}) \text{ and } \wtZi{} = \partial_x u({t_i}{},\Wpi{}) \;. \end{align*} Observe that $\widetilde{Y}_0 = u(0,x)$ and that, for $0\le j \le r-1$, $(\widehat{Y}_{n-j},\widehat{Z}_{n-j}) = (\widetilde{Y}_{n-j},\widetilde{Z}_{n-j})$. 2. \emph{Stability} The key observation is here that $(\wtYi{},\wtZi{})$ can be seen as a perturbed version of the scheme given in \eqref{de numerics scheme}, namely \begin{eqnarray}gin{align*} \left \{ \begin{eqnarray}gin{array}{rcl} \wtYi{} &=& \whEFp{{t_i}{}}{ \sum_{j=1}^r a_{j}\wtYi{+j}+ h \sum_{j=0}^r b_{j} f(\wtYi{+j},\wtZi{+j}) } + {}^{t}\errYi{} + {}^{s}\errYi{} \\ \wtZi{} &=&\whEFp{{t_i}{}}{ \sum_{j = 1}^r H \Big( \alpha_{j} \wtYi{+j} + h \sum_{j=1}^r \begin{eqnarray}ta_{j} f(\wtYi{+j},\wtZi{+j}) \Big) }+ {}^{t}\errZi{} + {}^{s}\errZi{} \end{array} \right. \end{align*} where the local error due to the time-discretization is \begin{eqnarray}gin{align} \left \{ \begin{eqnarray}gin{array}{rcl} {}^t\errYi{} &=&\esp{Y_{{t_i}{}} - \check{Y}_{{t_i}{}} | X_{{t_i}{}} = \Wpi{}} \\ {}^t\errZi{} &=& \esp{Z_{{t_i}{}} - \check{Z}_{{t_i}{}} | X_{{t_i}{}} = \Wpi{}} \end{array} \right. \end{align} recalling \eqref{eq def hY}-\eqref{eq def hZ} and the local error due to the 'space-discretization' is \begin{eqnarray}gin{align} \left \{ \begin{eqnarray}gin{array}{rcl} {}^s\errYi{} &=& \sum_{j=1}^r (\Ryi{j} - \whRyi{j})[u({t_i}{+j},\cdot),\partial_x u({t_i}{+j},\cdot)](\Wpi{}) \\ {}^s\errZi{} &=& \sum_{j=1}^r (\Rzi{j} - \whRzi{j})[u({t_i}{+j},\cdot),\partial_x u({t_i}{+j},\cdot)](\Wpi{}) \end{array} \right. . 
\end{align} Now, we can apply Proposition \ref{pr L2 stab}, recalling Remark \ref{re pr L2 stab}, to obtain in particular that \begin{eqnarray}gin{align} \label{eq stab numerics} |\widetilde{Y}_0 - \widehat{Y}_0|^2 \le |\pi|\sum_{i=0}^{n-r} \whesp{\frac{1}{h_{i}^2} |{}^t\errYi{} +{}^s\errYi{}|^2 + |{}^t\errZi{} +{}^s\errZi{}|^2} \end{align} 3. \emph{Study of the local error} We now turn to the study of the local errors $(\errYi{},\errZi{})_{0 \le i \le n-r}$. Assuming that the function are smooth enough we compute the following expansion \begin{eqnarray}gin{align*} (\Ryi{j} - \whRyi{j})[u({t_i}{+j},\cdot),u^{(1)}({t_i}{+j},\cdot)](x) = \sum_{k=0}^K \frac1{k!} \chi^{(k)}_{i,j}(x)\esp{(\Woi{+j})^k} &- \sum_{k=0}^K \frac1{k!} \chi^{(k)}_{i,j}(x) \whesp{(\whWoi{+j})^k} \\&+ O(|\pi|^{\frac{K+1}{2}}) \end{align*} where $\chi_{i,j}$ are functions depending on $f$, $u$ and the coefficients of the methods. Using the matching moment property of $\widehat{W}^{0,{t_i}{}}$, we easily obtain that \begin{eqnarray}gin{align*} \whesp{|(\Ryi{j} - \whRyi{j})[u({t_i}{+j},\cdot),u^{(1)}({t_i}{+j},\cdot)](\Wpi{})|^2} \le C|\pi|^{K+1} \end{align*} For the $Z$ part, we have \begin{eqnarray}gin{align*} (\Rzi{j} - \whRzi{j})[u({t_i}{+j},\cdot),u^{(1)}({t_i}{+j},\cdot)](x) = \sum_{k=0}^{K-1} \frac1{k!} \chi^{(k)}_{i,j}(x)\esp{H_{i,j}(\Woi{+j})^k} &- \sum_{k=0}^{K-1} \frac1{k!} \chi^{(k)}_{i,j}(x) \whesp{(\widehat{H}_{i,j}\whWoi{+j})^k} \\ & +O(|\pi|^\frac{K-1}{2}) \end{align*} Using the matching moment property of $\widehat{W}^{0,{t_i}{}}$, we easily obtain that \begin{eqnarray}gin{align*} \whesp{|(\Rzi{j} - \whRzi{j})[u({t_i}{+j},\cdot),u^{(1)}({t_i}{+j},\cdot)](\Wpi{})|^2} \le C |\pi|^{K-1} \end{align*} Combining the above estimates with \eqref{eq stab numerics} and the fact that the discrete-time error is of order $m$, leads to \begin{eqnarray}gin{align*} |\widetilde{Y}_0 - \widehat{Y}_0| \le C|\pi|^{m} + C|\pi|^{\frac{K-1}{2}}. \end{align*} which concludes the proof since $K=2m+1$. \eproof \renewcommand{{t_i}}{t_{i}} \begin{eqnarray}gin{thebibliography}{aa12} \bibitem{balpag03} {\sc Bally V. and G. Pag\`es} (2003). Error analysis of the quantization algorithm for obstacle problems. {\sl Stochastic Processes and their Applications}, {\bf 106}, 1-40. \bibitem{boueli08} {\sc Bouchard B. and R. Elie (2008)}, Discrete-time approximation of decoupled forward-backward SDE with jumps, {{\rm (i)}t Stochastic Processes and their Applications}, {\bf 118}, 53-75. \bibitem{boutou04} {\sc Bouchard B. and N. Touzi (2004)}, Discrete-Time Approximation and Monte-Carlo Simulation of Backward Stochastic Differential Equations. {{\rm (i)}t Stochastic Processes and their Applications}, {\bf 111} (2), 175-206. \bibitem{bouwar11} {\sc Bouchard B. and X. Warin} (2011) Monte-Carlo valorisation of American options: facts and new algorithms to improve existing methods, to appear in {\sl Numerical Methods in Finance} , Springer Proceedings in Mathematics, ed. R. Carmona, P. Del Moral, P. Hu and N. Oudjane , 2011. \bibitem{but08} {\sc Butcher J. C.} (2008), \textit{Numerical Methods for Ordinary Differential Equations}, Second Edition, Wiley. \bibitem{cha08} {\sc Chassagneux J.-F.} (2008) Processus r\'efl\'echis en finance et probabilit\'e num\'erique, {\sl phd thesis}, Universit\'e Paris Diderot - Paris 7. \bibitem{chacri12} {\sc Chassagneux J.-F. and D. Crisan} (2012), Runge-Kutta Scheme for BSDEs, {{\rm (i)}t preprint}. \bibitem{criman09} {\sc Crisan D. and K. 
Manolarakis (2009)}, Solving Backward Stochastic Differential Equations using the Cubature Method, {{\rm (i)}t preprint}. \bibitem{criman10} {\sc Crisan D. and K. Manolarakis (2010)}, Second order discretization of a Backward SDE and simulation with the cubature method, {{\rm (i)}t preprint}. \bibitem{delmen06} {\sc Delarue, F. and S. Menozzi} (2006) A forward backward algorithm for quasi-linear PDEs, {\sl Annals of Applied Probability} ,{\bf 16}, 140-184. \bibitem{dem06} {\sc Demailly J.-P.} \textit{Analyse num\'erique et \'equations diff\'erentielles}, 3e \'edition, EDP Sciences. \bibitem{karpen97} {\sc El Karoui N., S. Peng, M.C. Quenez} (1997), Backward Stochastic Differential Equation in finance {\sl Mathematical finance}, {\bf 7} (1), 1-71. \bibitem{goblab07} {\sc Gobet E. and C. Labart (2007)}, Error expansion for the discretization of bacward stochastic differential equations, {{\rm (i)}t Stochastic Processes and their Applications}, {\bf 117}, 803-829. \bibitem{goblem06}{\sc Gobet E., J.-P. Lemor and X. Warin} (2006) Rate of convergence of an empirical regression method for solving generalized backward stochastic differential equations, {\sl Bernoulli}, {\bf 12}(5), 889-916. \bibitem{klopla92} {\sc Kloeden P. E. and E. Platen} (1992), \textit{Numerical solutions of Stochastic Differential Equations}, Applied Math. 23, Springer, Berlin. \bibitem{parpen90} {\sc Pardoux E. and S. Peng} (1990), Adapted solution of a backward stochastic differential equation, {\sl Systems and Control Letters}, {\bf 14}, 55-61. \bibitem{parpen92} {\sc Pardoux E. and S. Peng} (1992), Backward stochastic differential equations and quasilinear parabolic partial differential equations. In Stochastic partial differential equations and their applications (Charlotte, NC, 1991), 200-217, \ensuremath{ \circ }ewblock volume 176 of {\sl Lecture Notes in Control and Inform. Sci., Springer, Berlin, 1992.} \bibitem{ric10} {\sc Richou, A.} (2010) Numerical simulation of BSDEs with drivers of quadratic growth. Forthcoming in {\sl The Annals of Applied Probability}. \bibitem{zhache06} {\sc Zhao W., L. Chen and S. Peng} (2006), A new kind of accurate numerical method for backward stochastic differential equations, {\sl SIAM J. Sci. Comput.}, {\bf 28}, 1563-1581. \bibitem{zhaliy12}{\sc Zhao W., Y. Li and G. Zhang}, (2012) A generalized theta-scheme for solving backward stochastic differential equations , {\sl Dis. Cont. Dyn. Sys. B}, {\bf 117}, 1585-1603. \bibitem{zhawan09} {\sc Zhao W., J. Wang and S. Peng} (2009) Error estimates of the theta-scheme for backward stochastic differential equations , {\sl Dis. Cont. Dyn. Sys. B}, {\bf 12}, 905-924. \bibitem{zhazha10} {\sc Zhao W., G. Zhang and L. Ju} (2010) A stable multistep scheme for solving Backward Stochastic Differential Equations. {\sl SIAM J. Numer. Anal.} {\bf 48}(4), 1369-1394. \bibitem{zha04} {\sc Zhang J.} (2004), A numerical scheme for backward stochastic differential equation, {\sl Annals of Applied Probability}, {\bf 14}(1), 459-488. \end{thebibliography} \end{document}
\begin{document} \title{Scalable estimation of pure multi-qubit states} \author{L.~Pereira} \email{[email protected]} \affiliation{Instituto de F\'{\i}sica Fundamental IFF-CSIC, Calle Serrano 113b, Madrid 28006, Spain} \author{L.~Zambrano} \affiliation{Instituto Milenio de Investigaci\'on en \'Optica y Departamento de F\'{\i}sica, Universidad de Concepci\'on, casilla 160-C, Concepci\'on, Chile} \author{A.~Delgado} \affiliation{Instituto Milenio de Investigaci\'on en \'Optica y Departamento de F\'{\i}sica, Universidad de Concepci\'on, casilla 160-C, Concepci\'on, Chile} \begin{abstract} We introduce an inductive $n$-qubit pure-state estimation method. It is based on projective measurements on the states of $2n+1$ separable bases or of $2$ entangled bases plus the computational basis. Thus, the total number of measurement bases scales as $O(n)$ and $O(1)$, respectively. Thereby, the proposed method exhibits a very favorable scaling in the number of qubits when compared to other estimation methods. Monte Carlo numerical experiments show that the method can achieve a high estimation fidelity. For instance, an average fidelity of $0.88$ on the Hilbert space of $10$ qubits is achieved with 21 separable bases. The use of separable bases makes our estimation method particularly well suited for applications in noisy intermediate-scale quantum computers, where entangling gates are much less accurate than local gates. We experimentally demonstrate the proposed method on one of IBM's quantum processors by estimating a 4-qubit Greenberger-Horne-Zeilinger state with a fidelity close to $0.875$ via separable bases. Other 10-qubit separable and entangled states achieve an estimation fidelity on the order of $0.85$ and $0.7$, respectively. \end{abstract} \date{\today} \pacs{Valid PACS appear here} \maketitle \section{Introduction} In recent years, great advances have been achieved in the development of technologies based on quantum systems. These quantum technologies aim to provide significant performance improvements over their classical counterparts. A contemporary example is quantum computing, where large numbers of two-dimensional quantum systems (qubits) are controlled via canonical sets of operations to solve certain computational tasks more efficiently than classical computers \cite{Wright,Arute}. Several quantum systems, such as trapped ions \cite{Ions1,Ions2,Ions3,Ions4}, superconducting circuits \cite{Circuits1,Circuits2,Circuits3,Circuits4}, quantum dots \cite{dots1} and photonic platforms \cite{Multiports1,Multiports2,Multiports3,XANADU}, among others, have been proposed and tested to encode qubits and implement quantum gates, in order to build quantum simulators and quantum computers. Today, private companies provide cloud-based access to their quantum hardware and software. Currently, the above experimental platforms, among others, have evolved into devices that have been described as noisy intermediate-scale quantum (NISQ) technologies \cite{noise4}. These are devices that operate with imperfect gates on roughly 50 to a few hundred qubits. They are beyond the simulation capabilities of current classical computing devices but are still far from exhibiting a clear improvement in useful computing power. In particular, noisy quantum gates and decoherence limit the depth of the circuits and prevent the realization of complex algorithms \cite{noise1,noise2,noise3}.
Because of the properties of NISQ computers, the design of methods for the characterization, certification, and benchmarking of these quantum devices has become increasingly relevant and difficult \cite{review_characterization}. For instance, the characterization of states generated by this class of devices requires measurements on a large set of bases. These are physically implemented by applying sets of universal unitary transformations on the qubits followed by measurements on the computational basis. Typically, an effort to reduce the number of bases leads to a new set of bases that require the application of entangling unitary transformations, such as, for instance, the controlled-NOT (CNOT) gate, whose implementation is associated with a large error. Consequently, the estimation of states generated by NISQ devices could be affected by a large build-up of inaccuracy in the measurement procedure. Several methods have been proposed to assess the performance of NISQ devices. Randomized benchmarking \cite{RB1,RB2,RB3} and direct fidelity estimation \cite{Fidelity1,Fidelity2} are popular tools for this purpose. However, they do not give complete information about the device in question. In contrast, quantum state estimation \cite{review_QT} gives complete information about the device, since it fully determines the state of the system from a set of suitably chosen measurements. Today, several protocols to perform quantum state estimation of $d$-dimensional quantum systems are available. These attempt to estimate the $d^2-1$ real parameters that characterize density operators. Standard quantum tomography is based upon the measurement of the $d^2-1$ generalized Gell-Mann matrices \cite{SQT1,SQT2,SQT3}. Quantum state estimation based on projections onto the states of $d+1$ mutually unbiased bases has been suggested \cite{MUB1,MUB2,MUB3,MUB4,MUB5,MUB6,MUB7} to reduce the total number of measurement outcomes. This number can be reduced to a minimum using a symmetric, informationally complete, positive operator-valued measure \cite{SIC1,SIC2,SIC3,SIC4,SIC5,SIC6,SIC7}, which has exactly $d^2$ measurement outcomes. Despite the progress made in the reduction of the total number of measurement outcomes used by quantum state estimation methods, the exponential increase of this number prevails in the case of multipartite systems. This is known as the \emph{curse of dimensionality}. For example, standard quantum tomography for an $n$-qubit system requires the measurement of $3^n$ local Pauli settings. A solution to overcome this problem is the use of {\it a priori} information about the state to be determined. Compressed sensing \cite{CS,Ahn} allows a large reduction in the total number of measurements as long as the rank of the state is known. Another alternative is to consider states with special properties, such as, for instance, matrix product states \cite{MPS} or permutationally invariant states \cite{Toth}, which also leads to a significant reduction in the total number of measurements. The important case of pure states has also been studied. In this case a set of five bases allows one to estimate with high accuracy all pure quantum states \cite{5B,CI5BB,5BFIJAS}. Recently, a tomographic scheme based on three bases has been introduced to estimate a single qudit \cite{3B}. The generalization of this method to multiple qubits leads to entangled bases.
Adaptivity has also been introduced as a means of increasing the estimation accuracy of tomographic methods \cite{Mahler, ADAPTIVE, STRUCHALIN, Utreras, Fernandes} and even reaching fundamental accuracy limits \cite{Guo, Zambrano}. Here, we present a method to estimate unknown pure quantum states in $n$-qubit Hilbert spaces, as used by NISQ computers, by means of $mn+1$ local bases (with $m$ an integer number), which correspond to tensor products of $n$ single-qubit bases, or by means of the computational basis plus $m$ entangled bases, that is, bases that cannot be decomposed as the tensor product of single-qubit bases. Thereby, the number of bases scales as $O(mn)$ and $O(m)$, respectively, with the number $n$ of qubits. This is a great advantage over standard methods, which are of exponential order. The $mn+1$ local bases also provide an advantage in the estimation of states generated via NISQ devices, such as current prototypes of quantum processors, because in this case projective measurements can be carried out without the use of typically noisy entangling gates. The computational basis together with $m=2$ entangled bases also leads to a clear reduction with respect to the 5-bases tomographic method \cite{5B,CI5BB}. The method proposed here also improves over the 3-bases method, since the latter requires calculating the likelihood function of $2^{2^n-1}$ states to obtain an estimate. For a large number of qubits, this stage exponentially increases the computational cost of the method. Furthermore, in order to obtain distinguishable values of the likelihood, a large ensemble size is required. Our estimation method does not require such a procedure. The present method estimates pure multi-qubit quantum states using an inductive process. For an arbitrary $n$-qubit state $| \Psi \rangle$ we define $n$ sets of $2^{n-j}$ reduced states of dimension $2^{j}$, with $j=1, 2,..., n$ and with the reduced state of dimension $2^n$ equal to $| \Psi \rangle$ up to a global phase. We first show how to estimate the reduced states of dimension $2$ and then show that the knowledge of the reduced states of dimension $2^{j-1}$, together with some measurement outcomes, allows one to estimate the reduced states of dimension $2^{j}$. This points to an inductive estimation method: we first estimate the reduced states of dimension $2$, and in $n$ iterations we arrive at the reduced state of dimension $2^n$, completing the estimation procedure. We show that the set of reduced states can be estimated by projecting the state onto $mn+1$ local bases or onto $m$ entangled bases plus the computational basis. Throughout the proposed estimation method, it is necessary to solve several systems of linear equations that might have vanishing determinants if the number of measurements is insufficient, leading to the failure of the method. However, the presence of noise, such as finite statistics, helps to mitigate this problem. We also show that a small increase in the number of local or entangled bases solves this problem and improves the overall fidelity of the estimation. We study the present method through Monte Carlo numerical simulations. We randomly generate, according to a Haar-uniform distribution, a set of unknown pure states and calculate the average estimation fidelity as a function of the number of qubits. Projective measurements on each basis are simulated considering a fixed number of repetitions. We show that $2n+1$ separable bases achieve an average estimation fidelity of 0.88 for $n=10$.
We improve this figure by considering a larger number of bases, that is, $3n+1$ and $4n+1$, which lead to an average estimation fidelity of 0.915 and 0.93 for $n=10$, respectively. If the average estimation fidelity is calculated with respect to the set of separable states, then for the above numbers of bases we obtain the values $0.95$, $0.955$ and $0.96$, respectively. The use of entangled bases leads to a less favorable picture, where $2$, $3$ and $4$ entangled bases plus the computational basis lead to an average estimation fidelity of 0.2, 0.6 and 0.8 for $n=6$, respectively. These figures increase when estimating separable states, where we achieve an average estimation fidelity of 0.75, 0.95 and 0.95 for $2$, $3$ and $4$ entangled bases plus the computational basis, respectively. The large gap between separable and entangled bases can be explained by recalling that measurements are simulated with a fixed number of repetitions or, equivalently, with a fixed ensemble size. Thereby, the total ensemble size is much smaller in the case of the entangled bases, which decreases the estimation fidelity. We test the proposed method by means of experiments carried out using the quantum processor \emph{ibmq\_manhattan} provided by IBM. We consider the estimation of fixed quantum states from 2 to 10 qubits using the local bases, and states up to 4 qubits using the entangled bases. We first estimate two completely separable states. One of these states is such that the matrices to be inverted are ill-conditioned. Nevertheless, the estimation of these states via local bases leads to very similar fidelities above 0.83 for $n=10$. The estimation via entangled bases leads to fidelities in the interval $[0.1,0.3]$ for $n=4$. We also consider the estimation of an $n/2$-fold tensor product of a Bell state, where local bases lead to a fidelity above $0.67$ and entangled bases to a fidelity below $0.5$ for $n=10$. Finally, we test the estimation of a Greenberger-Horne-Zeilinger state for 2, 3 and 4 qubits. The proposed method leads to an estimation fidelity of approximately 0.87 for $n=4$ via local bases, while in the case of entangled bases the fidelity is approximately 0.81, 0.53 and 0.47 for 2, 3 and 4 entangled bases plus the computational basis, respectively. The large decrease in the estimation of entangled states by means of entangled bases can be explained by the use of low-accuracy entangling gates in the preparation of the states as well as in the projection onto the states of the entangled bases. Our simulations and experimental results indicate that the method proposed here allows the efficient and scalable estimation of $n$-qubit pure states by means of $mn+1$ separable bases in NISQ computers, while the $m$ entangled bases together with the computational basis can offer a large advantage in quantum computers based on high-accuracy entangling gates. This article is organized as follows: in Section II we present the method and its properties. In Section III we show the results of several Monte Carlo simulations aimed at studying the overall behavior of the fidelity as a function of the number of qubits. In Section IV we present the results of implementing the proposed estimation method on IBM's quantum processors. In Section V we summarize and conclude.
\section{Method} Let us consider an $n$-qubit system described by the pure state \begin{align} |\Psi\rangle = \sum_{\alpha=0}^{2^n-1} c_\alpha e^{i\phi_\alpha} |\alpha\rangle_n, \end{align} where $c_\alpha\geq0$ and $|\alpha\rangle_n =|\alpha_{n-1}\rangle\otimes\cdots\otimes|\alpha_0\rangle$ is the $n$-qubit computational basis, with $\alpha = \sum_{k=0}^{n-1}2^k\alpha_k $ the integer associated with the $n$-bit binary string $\alpha_{n-1}\cdots \alpha_0$. Our main aim is to estimate the values of the amplitudes $\{c_\alpha\}$ and the phases $\{\phi_\alpha\}$ with a total number of measurements that does not scale exponentially with the number $n$ of qubits. For this purpose, we employ an iterative algorithm based on estimating reduced states, which are non-normalized $j$-qubit vectors defined by \begin{align} |\Psi_{\beta}^j\rangle =\sum_{\alpha=0}^{2^j-1} {}_{n}\langle 2^{j}\beta + \alpha|\Psi\rangle |\alpha\rangle_j, \end{align} with $1\leq j \leq n$ and $ 0 \leq \beta \leq 2^{n-j} -1$. Thus, for a given state $| \Psi\rangle$ we have $n$ sets of $2^{n-j}$ reduced states, one set for each $j$ and one reduced state for each $\beta$. Every reduced state carries some partial information about the full state. For instance, if $j=1$ we have $2^{n-1}$ reduced states that are given by \begin{align} |\Psi_{\beta}^1\rangle = c_{2\beta}e^{i\phi_{2\beta}}|0\rangle + c_{2\beta+1}e^{i\phi_{2\beta+1}}|1\rangle. \label{Psi_j1} \end{align} Note that these amplitudes and phases are the same as those entering the state $| \Psi \rangle$. Thereby, we can reduce the problem of estimating the unknown state $|\Psi\rangle$ to the problem of estimating the set of reduced states. The main obstacle of this approach is that measurements in quantum mechanics do not contain information about global phases; hence, if we try to find every reduced state independently from the results of some measurements, we can only obtain the reduced states $|\Psi_{\beta}^j\rangle$ up to a global phase, which in this case would be a relative phase of $|\Psi \rangle$. Thus, the only reduced state that fully characterizes the system is $|\Psi_{0}^n\rangle$, which is the same as $|\Psi \rangle$. Nevertheless, the remaining reduced states will also be useful in the task of reconstructing $|\Psi \rangle$, because we can find the reduced states $| \Psi_\beta^{j} \rangle$ (up to a global phase) using the reduced states $| \Psi_\beta^{j-1} \rangle$, as we will show. For a large enough set of identical copies of the state $|\Psi\rangle$, measurements in the computational basis $\{| \alpha \rangle \}$ give us a histogram of observations such that we have approximations $p_\alpha$ to $|\langle\alpha|\Psi\rangle|^2$. Then the amplitudes can be estimated as $c_{\alpha} = \sqrt{p_{\alpha}}$. On the other hand, the phases can be determined from projective measurements on the following states \begin{align} |P_{a\beta}^j\rangle = |\beta\rangle_{n-j}\otimes|+_a\rangle\otimes|-_a\rangle^{\otimes j-1}, \label{ProjTomo} \end{align} where $|+_a\rangle = u_{a}|0\rangle + v_{a}e^{i\varphi_a}|1\rangle$ and $|-_a\rangle = v_{a}|0\rangle - u_{a}e^{i\varphi_a}|1\rangle$ are orthonormal single-qubit states, with $a = 1,\dots, m$. Here, $m$ is the number of bases $\{ |\pm_a\rangle\}$ considered, which must be large enough to carry out the algorithm. In addition, these bases all have to be different from the computational one. To start the algorithm, we set $j=1$ and the reduced states are given by Eq. \eqref{Psi_j1}.
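With this indexing, a reduced state is simply a contiguous slice of the amplitude vector of $|\Psi\rangle$. The following minimal NumPy sketch illustrates the definition (the function name is ours and the snippet is only an illustration):
\begin{verbatim}
import numpy as np

def reduced_state(psi, j, beta):
    """Non-normalized reduced state |Psi^j_beta>: the contiguous slice of the
    2^n amplitude vector psi with indices 2^j*beta, ..., 2^j*(beta + 1) - 1."""
    return psi[(2 ** j) * beta:(2 ** j) * (beta + 1)].copy()

# For j = 1 this reproduces the two-component reduced state above:
# reduced_state(psi, 1, beta) = [psi[2*beta], psi[2*beta + 1]].
\end{verbatim}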
The probabilities of projecting $|\Psi\rangle$ onto $|P^1_{a\beta}\rangle$ are given by \begin{align} P^1_{a\beta} =& u_{a}^2c_{2\beta}^2 + v_{a}^2c_{2\beta+1}^2 \nonumber \\ & + 2u_{a}v_{a}c_{2\beta}c_{2\beta+1} [ \cos(\varphi_a)\cos(\phi_{2\beta+1}-\phi_{2\beta}) \nonumber \\ & +\sin(\varphi_a)\sin(\phi_{2\beta+1}-\phi_{2\beta}) ]. \label{P1abeta} \end{align} These generate a set of equations, one equation for each value of $\beta$, that are linear combinations of the cosine and sine of the relative phase $\delta\phi_\beta = \phi_{2\beta+1}-\phi_{2\beta}$, with coefficients depending on $|+_a\rangle$. Thereby, we can form a linear system of equations $L_\beta \vec{a}_\beta = \vec{b}_\beta$ for trigonometric functions of the relative phase, which is explicitly given by \begin{align} \begin{bmatrix} \cos\varphi_1 & \sin\varphi_1 \\ \vdots & \vdots \\ \cos\varphi_m & \sin\varphi_m \end{bmatrix} \begin{bmatrix} \cos\delta\phi_\beta\\ \sin\delta\phi_\beta \end{bmatrix} = \begin{bmatrix} \tilde{P}^{1}_{1\beta}\\ \vdots \\ \tilde{P}^{1}_{m\beta} \end{bmatrix},\label{EqSyst1} \end{align} where we have defined \begin{align} \tilde{P}_{a\beta}^{1} = \frac{1}{2u_{a}v_{a}c_{2\beta}c_{2\beta+1} }(P_{a\beta}^{1} - u_{a}^2c_{2\beta}^2 - v_{a}^2c_{2\beta+1}^2 ). \end{align} Since we know the projectors $|P^1_{a\beta}\rangle$ and the coefficients $c_\alpha$, this system can be solved for $m\geq 2$ by inverting $L_\beta$ with the Moore-Penrose pseudo-inverse $A^+=(A^TA)^{-1}A^T$. We can guarantee that the pseudo-inversion is possible by suitably choosing the coefficients $u_a$, $v_a$ and the phases $\varphi_a$ entering the states $|P_{a\beta}^1\rangle$. For example, in the particular case $m=2$ the system has the solution \begin{align} \begin{bmatrix} \cos \delta\phi_\beta\\ \sin \delta\phi_\beta \end{bmatrix} =\frac{1}{\det(L_\beta)} \begin{bmatrix} \sin \varphi_2 & -\sin \varphi_1 \\ -\cos \varphi_2 & \cos \varphi_1 \end{bmatrix} \begin{bmatrix} \tilde{P}^{1}_{1\beta}\\ \tilde{P}^{1}_{2\beta} \end{bmatrix} \end{align} as long as the determinant of $L_\beta$ does not vanish, that is, $\det(L_\beta)=\cos(\varphi_1)\sin(\varphi_2)-\sin(\varphi_1)\cos(\varphi_2) \neq 0$. Thereby, taking $\varphi_1=0$ and $\varphi_2=\pi/2$, or equivalently $|\pm_1\rangle=u_1|0\rangle\pm v_1|1\rangle$, $|\pm_2\rangle=u_2|0\rangle\pm i v_2|1\rangle$, the system of equations can always be solved since $\det(L_\beta)=1$, and the exponential of the relative phase is given by $e^{i\delta\phi_\beta} = \tilde{P}^1_{1\beta} + i\tilde{P}^1_{2\beta}$. Therefore, the reduced states $|\Psi_{\beta}^1\rangle$ can be determined up to a global phase, \begin{align} |\tilde\Psi_{\beta}^1\rangle = c_{2\beta}|0\rangle + c_{2\beta+1}e^{i\delta\phi_\beta}|1\rangle, \label{Eq:ReducedStatej1} \end{align} provided we measure the computational basis and at least two projectors $|P_{a\beta}^1 \rangle$ for every $\beta$ in $0, 1,..., 2^{n-1}-1$. Nevertheless, when some of the computational basis probabilities $p_\alpha$ are null, we have to consider a particular rule to obtain the reduced state. If either or both of the coefficients $c_{2\beta}=\sqrt{p_{2\beta}}$ or $c_{2\beta+1}=\sqrt{p_{2\beta+1}}$ are null, the reduced state is determined only by the computational basis, setting $|\tilde\Psi_{\beta}^1\rangle = c_{2\beta} | 0 \rangle$, $|\tilde\Psi_{\beta}^1\rangle = c_{2\beta + 1} | 1 \rangle$ or $|\tilde\Psi_{\beta}^1\rangle = \vec{0}$ in Eq. \eqref{Eq:ReducedStatej1}.
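To make this first iteration concrete, the following is a minimal NumPy sketch of the $j=1$ reconstruction for the choice $\varphi_1=0$, $\varphi_2=\pi/2$ discussed above, assuming the probabilities have already been estimated; the function and variable names are ours and the snippet is only illustrative:
\begin{verbatim}
import numpy as np

def reduced_states_j1(p, P1, P2, u, v):
    """j = 1 step: build the 2^(n-1) reduced states
    c_{2b}|0> + c_{2b+1} e^{i dphi_b}|1> from the computational-basis
    probabilities p[alpha] and the probabilities P1[b], P2[b] of projecting
    onto the states built from |+_1> = u[0]|0> + v[0]|1>  (varphi_1 = 0)
    and |+_2> = u[1]|0> + i v[1]|1>  (varphi_2 = pi/2)."""
    c = np.sqrt(p)                          # amplitude estimates c_alpha
    states = []
    for b in range(len(p) // 2):
        c0, c1 = c[2 * b], c[2 * b + 1]
        if c0 == 0 or c1 == 0:              # null coefficient: no phase needed
            states.append(np.array([c0, c1], dtype=complex))
            continue
        # rescaled probabilities for a = 1, 2
        t1 = (P1[b] - u[0]**2 * c0**2 - v[0]**2 * c1**2) / (2 * u[0] * v[0] * c0 * c1)
        t2 = (P2[b] - u[1]**2 * c0**2 - v[1]**2 * c1**2) / (2 * u[1] * v[1] * c0 * c1)
        dphi = np.arctan2(t2, t1)           # cos(dphi) ~ t1, sin(dphi) ~ t2
        states.append(np.array([c0, c1 * np.exp(1j * dphi)]))
    return states
\end{verbatim}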
As a consequence of this rule, there may be null reduced states, which will affect the algorithm in later iterations. In the following iterations, that is, for $j>1$, the reduced states can be expressed as linear combinations of the previous ones, \begin{align} |\tilde\Psi_{\beta}^j\rangle = |0\rangle\otimes|\tilde\Psi_{2\beta}^{j-1} \rangle + e^{i\delta\phi_\beta^j}|1\rangle\otimes|\tilde\Psi_{2\beta+1}^{j-1} \rangle, \end{align} with $|\tilde\Psi_{\beta}^j\rangle = e^{-i\phi_{2^j\beta}}|\Psi_{\beta}^j\rangle$ the corresponding reduced states up to a global phase and $\delta\phi_\beta^j =\phi_{2^{j}(\beta+1/2)}-\phi_{2^{j}\beta} $ the relative phases. Thus, assuming that we know the reduced states of the previous iteration (up to a global phase), we can determine the reduced states of the current iteration (up to a global phase) simply by determining the relative phase. Analogously to the first iteration, if either or both of the previous reduced states $|\tilde\Psi_{2\beta}^{j-1} \rangle$ or $|\tilde\Psi_{2\beta+1}^{j-1} \rangle$ are null, there is no relative phase to determine and the next reduced state is simply obtained with the choice $\delta\phi_{\beta}^j=0$. Otherwise, we have to determine the relative phase from the projections $|P_{a\beta}^j\rangle$. Considering $j>1$, the probability of projecting the state $|\Psi\rangle$ onto the state $|P^j_{a\beta}\rangle$ is given by \begin{align} P^j_{a\beta} =& u_{a}^2 |\langle W_a^j | \tilde\Psi_{2\beta}^{j-1} \rangle|^2 +v_{a}^2 |\langle W_a^j | \tilde\Psi_{2\beta+1}^{j-1} \rangle|^2 \nonumber \\ & + 2u_{a}v_{a}\Re[ e^{i(\delta\phi_\beta^j-\varphi_a)} \langle \tilde\Psi_{2\beta}^{j-1} |W_a^j \rangle\langle W_a^j | \tilde\Psi_{2\beta+1}^{j-1} \rangle ], \end{align} where $|W_a^j\rangle = | -_a\rangle^{\otimes j-1}$. Defining the quantities \begin{align} X_{a\beta}^j = e^{-i\varphi_a} \langle \tilde\Psi_{2\beta}^{j-1} |W_a^j \rangle\langle W_a^j | \tilde\Psi_{2\beta+1}^{j-1} \rangle \end{align} and \begin{align} \tilde{P}_{a\beta}^j = \frac{1}{2u_{a}v_{a} }\big( P_{a\beta}^j - u_{a}^2 |\langle W_a^j | \tilde\Psi_{2\beta}^{j-1} \rangle|^2 - v_{a}^2 |\langle W_a^j | \tilde\Psi_{2\beta+1}^{j-1} \rangle|^2 \big), \end{align} we obtain the following system of equations for the relative phases \begin{align} \begin{bmatrix} \Re X_{1\beta}^j & -\Im X _{1\beta}^j\\ \vdots & \vdots \\ \Re X_{m\beta}^j & -\Im X _{m\beta}^j \end{bmatrix} \begin{bmatrix} \cos\delta\phi_\beta^j\\ \sin\delta\phi_\beta^j \end{bmatrix} = \begin{bmatrix} \tilde{P}_{1\beta}^j\\ \vdots \\ \tilde{P}_{m\beta}^j \end{bmatrix}, \label{EqSyst} \end{align} where $\Re$ and $\Im$ stand for the real and imaginary parts, respectively. Again, the phases can be obtained by solving a linear system of equations $L_{\beta}^j\vec{a}_{\beta}^j=\vec{b}_{\beta}^j$ given by Eq. \eqref{EqSyst} through the Moore-Penrose pseudo-inverse. If, for a fixed $j$, we are able to pseudo-invert the matrices $L_{\beta}^j$, we can find all the $|\tilde\Psi_{\beta}^j\rangle$. Then, we continue inductively with $j+1$ until reaching $j=n$. At that point, the protocol ends because the reduced state $|\tilde\Psi_{0}^n\rangle$ is equal to the state $|\Psi\rangle$ up to a global phase. One might think that, analogously to the case $j = 1$, the matrix $L_{\beta}^j$ can be inverted for $m\geq 2$.
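For illustration, a compact NumPy sketch of one such step, solving Eq. \eqref{EqSyst} by the pseudo-inverse and assembling the next reduced state (the names are ours, reduced states are stored as amplitude vectors, and the inputs are assumed to be NumPy arrays):
\begin{verbatim}
import numpy as np

def next_reduced_state(psi_even, psi_odd, X, P_tilde):
    """From the previous reduced states psi_even, psi_odd, the coefficients
    X[a] and the rescaled probabilities P_tilde[a] (a = 1, ..., m), return
    the next reduced state |0>psi_even + e^{i dphi}|1>psi_odd."""
    if not psi_even.any() or not psi_odd.any():     # a null previous state
        dphi = 0.0
    else:
        L = np.column_stack([X.real, -X.imag])      # m x 2 matrix of the system
        cos_s, sin_s = np.linalg.pinv(L) @ P_tilde  # least-squares solution
        dphi = np.arctan2(sin_s, cos_s)
    return np.concatenate([psi_even, np.exp(1j * dphi) * psi_odd])
\end{verbatim}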
If that were the case, to completely estimate the unknown state $|\Psi\rangle$ we would need a minimum of $2(2^n-1)$ different projections onto states $|P_{a\beta}^j\rangle$ in \eqref{ProjTomo}, because there are $2^n-1$ different reduced states. However, the matrix $L_{\beta}^j$ to be pseudo-inverted depends not only on the projectors $|P^j_{a\beta}\rangle$ but also on the reduced states of the previous iteration, so that we cannot guarantee that the pseudo-inversion is feasible. For example, taking $m=2$, the solution of the system of equations \eqref{EqSyst} is \begin{align} \begin{bmatrix} \cos\delta\phi_\beta^j\\ \sin\delta\phi_\beta^j \end{bmatrix} = \frac{1}{\det(L_\beta^j)}\begin{bmatrix} -\Im X _{2\beta}^j& \Im X_{1\beta}^j\\ -\Re X_{2\beta}^j & \Re X_{1\beta}^j \end{bmatrix} \begin{bmatrix} \tilde{P}_{1\beta}^j\\ \tilde{P}_{2\beta}^j \end{bmatrix}, \label{EQ-SYS} \end{align} with $\det(L_\beta^j)=\Im(X_{1\beta}^j[X_{2\beta}^j]^*)$ the determinant of $L_\beta^j$. Clearly, this solution is only valid when $\det(L_\beta^j)\neq 0$. A non-invertible system of equations for a $|\tilde\Psi_{\beta}^j\rangle$ means that all the equations are linearly dependent and that it is sufficient to consider a single equation $\Re X_{\beta}^j\cos(\delta\phi_{\beta}^j)-\Im X_{\beta}^j\sin(\delta\phi_{\beta}^j) = \tilde{P}_{\beta}^j$, from which the trigonometric functions of the relative phase cannot be solved. However, using the identity $\cos^2(\delta\phi_{\beta}^j)+\sin^2(\delta\phi_{\beta}^j)=1$, we can obtain the trigonometric functions except for their corresponding quadrant, that is, we have two possible estimates or ambiguities for the reduced states. Thus, in the worst case, where the matrix $L_{\beta}^j$ cannot be pseudo-inverted for any reduced state, the algorithm leads to a finite set of $2^{n-1}$ possible estimates of $|\Psi\rangle$. This problem can be avoided by increasing the number $m$ of bases for given values of $j$ and $\beta$ until the systems can be solved. An alternative solution is to consider an extra adaptive measurement to discriminate between the ambiguities. Furthermore, since the probabilities are experimentally estimated by means of a sample of finite size, with high probability the system of equations can be solved due to the finite-sample noise, although not necessarily with high accuracy, since $L_{\beta}^j$ might have a bad condition number, $\textrm{cond}(A) = ||A|| \times ||A^+||$, with $||\cdot||$ a matrix norm. In order to apply our method based on the inductive estimation of reduced states, we need to find an efficient implementation of projections onto the states $|P_{a\beta}^j\rangle$. For this purpose we consider the measurement of two different sets of bases. The first set of $n\times m$ bases is given by \begin{equation} \mathcal{L}_{ab} = \left\{|\beta\rangle_{n-b}\otimes |\pm_a\rangle \otimes \cdots \otimes |\pm_a \rangle \right\}, \label{local_observables} \end{equation} with $ 0 \leq \beta \leq 2^{n-b} -1$. These bases can be implemented via local gates, as shown in Fig.~\ref{fig:circuits}. These bases define a larger set of projections than required by the method. However, these extra projections can also be used in the algorithm, since the states $|\beta\rangle_{n-j}\otimes |\pm_a\rangle \otimes \cdots \otimes |\pm_a \rangle $ form a system of $2^{j}$ equations at the $j$th iteration.
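To illustrate Eq. \eqref{local_observables}, the following short NumPy sketch builds the $2^n$ states of a local basis $\mathcal{L}_{ab}$ as explicit vectors, which is convenient, e.g., for numerical simulations; the function names are ours:
\begin{verbatim}
import numpy as np
from itertools import product

def single_qubit_basis(u, v, varphi):
    """|+_a> = u|0> + v e^{i varphi}|1>,  |-_a> = v|0> - u e^{i varphi}|1>."""
    plus  = np.array([u,  v * np.exp(1j * varphi)])
    minus = np.array([v, -u * np.exp(1j * varphi)])
    return plus, minus

def local_basis(n, b, u, v, varphi):
    """The 2^n states |beta>_{n-b} (x) |s_1> (x) ... (x) |s_b> of the basis
    L_{ab}, with each s_k in {+_a, -_a} and 0 <= beta <= 2^(n-b) - 1."""
    plus, minus = single_qubit_basis(u, v, varphi)
    comp = np.eye(2 ** (n - b))                      # computational states |beta>
    basis = []
    for beta in range(2 ** (n - b)):
        for signs in product([plus, minus], repeat=b):
            vec = comp[beta].astype(complex)
            for s in signs:
                vec = np.kron(vec, s)
            basis.append(vec)
    return basis
\end{verbatim}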
The second set of $m$ bases is given by \begin{equation} \mathcal{E}_a = \left\{ |\beta\rangle_{n-j}\otimes |+_a\rangle \otimes |-_a\rangle ^{\otimes j-1}, |-_a\rangle^{\otimes n} \right\}, \label{ent_observables} \end{equation} with $1\leq j \leq n$ and $ 0 \leq \beta \leq 2^{n-j} -1$. Although each member of these bases is a product state, the bases themselves are entangled, in the sense that many multi-controlled $U$ gates are needed to implement the corresponding measurements, as shown in Fig.~\ref{fig:circuits}. In a quantum computer, these gates can be decomposed in terms of CNOT gates \cite{MultiControlGates}. \begin{figure} \caption{Local basis.} \label{circ_local} \caption{Entangled basis.} \label{circ_entangled} \caption{Quantum circuits to implement the local bases (\ref{circ_local}) and the entangled bases (\ref{circ_entangled}).} \label{fig:circuits} \end{figure} As an example, let us consider a 2-qubit system in the state \begin{align} |\Psi\rangle = c_0e^{i\phi_0}|0\rangle_2 + c_1e^{i\phi_1}|1\rangle_2 +c_2e^{i\phi_2}|2\rangle_2 +c_3e^{i\phi_3}|3\rangle_2. \end{align} Omitting the Kronecker product for compactness, the corresponding reduced states are \begin{align} |\tilde\Psi^1_0\rangle &= c_0|0\rangle + c_1e^{i(\phi_1-\phi_0)}|1\rangle,\\ |\tilde\Psi^1_1\rangle &= c_2|0\rangle +c_3e^{i(\phi_3-\phi_2)}|1\rangle,\\ |\tilde\Psi^2_0\rangle &= |0\rangle\big(c_0|0\rangle + c_1e^{i(\phi_1-\phi_0)}|1\rangle \big) \nonumber \\ &\quad + e^{i(\phi_2-\phi_0)} |1\rangle\big( c_2|0\rangle +c_3e^{i(\phi_3-\phi_2)}|1\rangle \big), \end{align} and the bases to be measured are \begin{align} \mathcal{L}_{a1} =& \{\: |0\rangle|+_a\rangle,\: |0\rangle|-_a\rangle,\: |1\rangle|+_a\rangle,\: |1\rangle|-_a\rangle \: \},\\ \mathcal{L}_{a2} =& \{\: |+_a\rangle|+_a\rangle,\: |+_a\rangle|-_a\rangle, \:|-_a\rangle|+_a\rangle, \:|-_a\rangle|-_a\rangle \: \}, \end{align} or \begin{align} \mathcal{E}_a =\{\: |0\rangle|+_a\rangle,\: |1\rangle|+_a\rangle,\: |+_a\rangle|-_a\rangle,\: |-_a\rangle|-_a\rangle\: \}. \end{align} \begin{figure*} \caption{Randomly generated states, local bases} \label{fig:sim_2_to_10_ent} \caption{Randomly generated separable states, local bases} \label{fig:sim_2_to_10_sep} \caption{Randomly generated states, entangled bases} \label{fig:sim_2_to_6_ent_ent} \caption{Randomly generated separable states, entangled bases} \label{fig:sim_2_to_6_sep_ent} \caption{Median fidelity (solid dots) between arbitrary states and their estimates as a function of the number of qubits. The colors label the number of bases: for (a) and (b) we have $2n+1$ (blue), $3n+1$ (yellow) and $4n+1$ (purple) local bases; for (c) and (d) we have $2$ (blue), $3$ (yellow) and $4$ (purple) entangled bases plus the computational basis. Shaded areas represent the corresponding interquartile range.} \label{Figure2} \end{figure*} \section{Numerical Simulations} We are interested in how similar an unknown state and its estimate are. To quantify this we use the fidelity defined by \begin{align} F = | \langle \Psi | \Psi_{est} \rangle |^2, \end{align} between an arbitrary state $| \Psi \rangle $ and its estimate $|\Psi_{est} \rangle$. A vanishing fidelity indicates that $|\Psi\rangle$ and $|\Psi_{est}\rangle$ can be perfectly distinguished, while a unit fidelity indicates that $|\Psi\rangle$ and $|\Psi_{est}\rangle$ are equal. Thus, a good pure-state estimation method will be characterized by high values of the fidelity. We are interested in the behavior of the fidelity as a function of the dimension or, equivalently, the number of qubits, and in the impact on the fidelity of shot noise or finite-statistics effects.
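Before presenting the numerical experiments, the following minimal NumPy sketch shows how the ingredients of a single simulated run can be put together (Haar-random target, multinomial shot noise on each basis, and the fidelity of the estimate); it is only an illustration under our own naming conventions, not the code used for the results below:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1234)

def haar_random_state(n):
    """Haar-distributed pure state of n qubits."""
    z = rng.normal(size=2 ** n) + 1j * rng.normal(size=2 ** n)
    return z / np.linalg.norm(z)

def simulated_probabilities(psi, basis, shots=2 ** 13):
    """Finite-statistics estimates of |<b|psi>|^2 for the states of `basis`
    (a list of orthonormal vectors), using a fixed number of repetitions."""
    probs = np.array([abs(np.vdot(b, psi)) ** 2 for b in basis])
    counts = rng.multinomial(shots, probs / probs.sum())
    return counts / shots

def fidelity(psi, psi_est):
    """F = |<Psi|Psi_est>|^2 for normalized pure states."""
    return abs(np.vdot(psi, psi_est)) ** 2
\end{verbatim}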
To study the performance of the proposed method we carried out numerical experiments. For each number of qubits $n=2,3,\dots,10$ we randomly generate a set $\{|\Psi^{(i)}\rangle\}$ of $100$ Haar-distributed pure states, which play the role of the unknown state to be estimated. Afterward, we simulate projective measurements on each $|\Psi^{(i)}\rangle$ using the entangled bases, Eq.~\eqref{ent_observables}, and the local bases, Eq.~\eqref{local_observables}. These measurements are simulated considering $2^{13}$ repetitions for each basis. Thereafter, using the set of estimated probabilities obtained from the simulated measurements, we apply our estimation method to each state $|\Psi^{(i)}\rangle$, which leads to a set $\{|\Psi_{est}^{(i)} \rangle\}$ of estimates. Finally, we calculate the value of the fidelity between each unknown state and its estimate. Thereby, we obtain a set of fidelity values for each number $n$ of qubits, the statistics of which we study in the following figures. Figure~\ref{fig:sim_2_to_10_ent} shows the median of the fidelity as a function of the number $n$ of qubits for three different numbers $mn + 1$ of local bases, with $m=2, 3$, and $4$ from bottom to top. For each number of bases, the fidelity decreases as the number of qubits increases. This is a general feature of quantum state estimation methods. Since for each basis the number of repetitions is fixed, the probabilities entering the systems of linear equations, such as Eq.~(\ref{EQ-SYS}), are estimated with a decreasing accuracy as $n$ increases, which leads to lower fidelity values. Figure~\ref{fig:sim_2_to_10_ent} also shows that the median value of the fidelity can be increased, while keeping the number of repetitions fixed, by using a larger number of local bases. The proposed method requires the pseudo-inverse of every matrix in Eq.~\eqref{EqSyst}. The quality of this inversion can be improved by reducing the condition numbers of the linear equation systems, which can be achieved by increasing the number of bases employed in the estimation. An interesting feature of Fig.~\ref{fig:sim_2_to_10_ent} is that the interquartile range is very narrow for each basis, which indicates that the estimation method generates very similar fidelity values for all the simulated states. Figure \ref{fig:sim_2_to_6_ent_ent} exhibits the median of the fidelity as a function of the number $n$ of qubits for 2, 3 and 4 entangled bases plus the computational basis. In this case, the estimation method achieves a poor performance in comparison to the use of local bases. The use of 2 entangled bases leads to two equations for every reduced state, which generates an ill-conditioned equation system. Thereby, a poor estimation of the phases of the probability amplitudes is obtained, which propagates through the iterations of the algorithm, leading to a low fidelity. However, as Fig.~\ref{fig:sim_2_to_6_ent_ent} shows, the fidelity can be greatly improved by increasing the number of entangled bases used for the estimation. The simulation was carried out employing a fixed ensemble size of $2^{13}$ for each basis. This was done to allow a comparison between our simulations and the results of experiments on IBM's quantum processors. The total ensemble employed with the local and entangled bases scales as $mn\times2^{13}$ and $m\times2^{13}$, respectively. Thereby, the total ensemble used with the local bases is much larger than in the case of the entangled bases, which leads to a better estimation via local bases.
An increase of the ensemble size used with the entangled bases could lead to an improvement of the fidelity. Nevertheless, the entangled bases are capable of delivering a fidelity close to $0.9$ for $n=6$. Figures \ref{fig:sim_2_to_10_sep} and \ref{fig:sim_2_to_6_sep_ent} also display the fidelity as a function of the number of qubits for local and entangled bases, respectively. However, in this case the simulation has been carried out on the set of completely separable states. In both cases we obtain an increase of the performance of the proposed estimation method, achieving fidelities of around $0.95$ for $10$ qubits when employing local bases. This result indicates that the performance of the proposed estimation method on the set of entangled states should be closer to that depicted in Figs.~\ref{fig:sim_2_to_10_ent} and \ref{fig:sim_2_to_6_ent_ent}. \begin{figure*} \caption{$|\Phi_1^{n}\rangle$, local bases.} \label{fig:1.1} \caption{$|\Phi_2^{n}\rangle$, local bases.} \label{fig:1.2} \caption{$|\Phi_3^{n}\rangle$, local bases.} \label{fig:1.3} \caption{$|\Phi_4^{n}\rangle$, local bases.} \label{fig:1.4} \caption{$|\Phi_1^{n}\rangle$, entangled bases.} \label{fig:2.1} \caption{$|\Phi_2^{n}\rangle$, entangled bases.} \label{fig:2.2} \caption{$|\Phi_3^{n}\rangle$, entangled bases.} \label{fig:2.3} \caption{$|\Phi_4^{n}\rangle$, entangled bases.} \label{fig:2.4} \caption{Fidelity achieved by projective measurements on $mn+1$ local bases (upper row, $m=2$ (blue), $m=3$ (yellow), and $m=4$ (purple)) and $m$ entangled bases plus the computational basis (lower row, $m=2$ (blue), $m=3$ (yellow), and $m=4$ (purple)), for the states $|\Phi_1^{n}\rangle$, $|\Phi_2^{n}\rangle$, $|\Phi_3^{n}\rangle$, and $|\Phi_4^{n}\rangle$ as a function of the number $n$ of qubits.} \label{fig:Experiment} \end{figure*} \section{Experimental Realization} We carried out an experimental demonstration of our estimation method using the quantum processor \emph{ibmq\_manhattan} developed by IBM. This NISQ device is accessible online and can be programmed with Qiskit, an open-source development framework for working with quantum computers in the Python programming language. To test our estimation method we prepare the following $n$-qubit states \begin{align} |\Phi_1^{n}\rangle = \frac{1}{\sqrt{2^n}} \left( |0\rangle - e^{i\pi/4}|1\rangle \right)^{\otimes n}, \end{align} \begin{align} |\Phi_2^{n}\rangle = \frac{1}{\sqrt{2^n}} \left( |0\rangle + e^{i\pi/4}|1\rangle \right)^{\otimes n}, \end{align} and \begin{align} |\Phi_3^{n}\rangle = \Bigg\{ \begin{array}{cc} \frac{1}{2^{n/4}} \left( |00\rangle + |11\rangle \right)^{\otimes n/2}, & n \text{ even,} \\ \frac{1}{2^{(n-1)/4} } \left( |00\rangle + |11\rangle \right)^{\otimes (n-1)/2}\otimes|0 \rangle, & n \text{ odd.} \end{array} \end{align} States $|\Phi_1^{n}\rangle$ and $|\Phi_2^{n}\rangle$ are strictly separable and thus can be prepared efficiently with a single-qubit gate acting on each qubit. The state $|\Phi_3^{n}\rangle$ has maximal entanglement between pairs of qubits. For $n$ even (odd), this state can be prepared with $n/2$ ($(n-1)/2$) CNOT gates and $n/2$ ($(n-1)/2$) single-qubit gates. Thus, states $|\Phi_1^{n}\rangle$, $|\Phi_2^{n}\rangle$, and $|\Phi_3^{n}\rangle$ can be generated by circuits with short depth. However, the state $|\Phi_3^{n}\rangle$ is generated with a lower preparation fidelity than states $|\Phi_1^{n}\rangle$ and $|\Phi_2^{n}\rangle$, due to the fact that CNOT gates are implemented with an error much larger than the one achieved in the implementation of single-qubit gates. We also perform the estimation of $n$-partite GHZ states, \begin{align} |\Phi_4^{n}\rangle = \frac{1}{\sqrt{2}}\left( |0\rangle^{\otimes n}+|1\rangle^{\otimes n} \right). \end{align} These states are implemented by a long-depth circuit which contains $n-1$ CNOT gates.
Consequently, the fidelity achieved in its generation can be very low. Thus, we focus our study on small numbers of qubits, $n=2,3,4$, for both local and entangled bases. Each basis was measured employing $2^{13}$ repetitions, that is, the size of the ensemble of equally prepared copies of the unknown quantum state. This is currently the maximal sample size that can be employed in IBM's quantum processors. Figure~\ref{fig:Experiment} summarizes the results obtained by implementing our estimation method on IBM's quantum processor, where the fidelity between the states $|\Phi_1^{n}\rangle$, $|\Phi_2^{n}\rangle$, $|\Phi_3^{n}\rangle$, $|\Phi_4^{n}\rangle$ and their corresponding estimates is displayed as a function of the number $n$ of qubits for the cases of $mn+1$ local bases (with $m=2,3$ and 4) and $m$ entangled bases plus the computational basis. Shaded areas correspond to the error in the values of the fidelity obtained by a bootstrapping method. The solid blue line is the maximal achievable value of the fidelity of the state preparation stage $\mathcal{E}$ considering a white-noise model for the device based on the average error per gate $r$ provided by IBM, \begin{align} \mathcal{E}_{noise}(|0\rangle^{\otimes n}) = \left( 1-\frac{2^nr}{2^n-1}\right)\mathcal{E}(|0\rangle^{\otimes n}) + \frac{2^nr}{2^n-1}I. \end{align} The error model was applied to each gate necessary to prepare the states. We use as average error per gate $r_1=5\times10^{-4}$ for local gates and $r_2 =2\times10^{-2}$ for CNOT gates. According to Figs.~\ref{fig:1.1} and \ref{fig:1.2}, the estimation of the local states $|\Phi_1^{n}\rangle$ and $|\Phi_2^{n}\rangle$ through local bases leads to values that are comparable with the theoretical predictions shown in Figs.~\ref{fig:sim_2_to_10_ent} and \ref{fig:sim_2_to_10_sep}, which only consider noise due to finite statistics. For the particular case of $n=10$ qubits, the theoretical predictions for the estimation fidelity using local bases are within the intervals $[0.88, 0.93]$ for randomly generated states and $[0.95, 0.96]$ for randomly generated separable states, while the experiment leads to an estimation fidelity in the interval $[0.82, 0.89]$. The estimation of the entangled states $|\Phi_3^{n}\rangle$ and $|\Phi_4^{n}\rangle$ by local bases, exhibited in Figs.~\ref{fig:1.3} and \ref{fig:1.4}, respectively, also leads to good fidelities, albeit lower than in the case of states $|\Phi_1^{n}\rangle$ and $|\Phi_2^{n}\rangle$. This is to be expected, since states $|\Phi_3^{n}\rangle$ and $|\Phi_4^{n}\rangle$ exhibit entanglement and thus are generated by applying several CNOT gates, which increase the preparation error. Nevertheless, for the state $|\Phi_3^{n}\rangle$ with $n=10$ our method provides an estimation fidelity close to $0.67$, where the maximum achievable value according to the noise model is $0.9$ (blue solid line in Fig.~\ref{fig:1.3}). A comparison between the estimated fidelity for the bi-local state $|\Phi_3^{n}\rangle$ and the GHZ state $|\Phi_4^{n}\rangle$ for $n=2,3$ and $4$ qubits indicates that these entangled states are estimated with similar fidelities. This indicates that the method delivers estimates of similar quality for states with different types of entanglement. The experimental realization of our estimation method shows that the use of local bases allows us to estimate pure states of large numbers of qubits. The fidelity of the estimation is mainly constrained by the ensemble size and the number of local bases.
An increase of any of these quantities leads to an improvement in the quality of the estimation. In particular, states of a larger number of qubits can be reliably estimated by increasing the number of local bases. As shown in Figs.~\ref{fig:1.1}, \ref{fig:1.2}, and \ref{fig:1.3}, the use of $4n+1$ local bases leads to higher values of the estimation fidelity than the cases of $2n+1$ and $3n+1$ local bases. The use of entangled bases shows a different picture, where the physical realization of the method and the particular class of states to be estimated lead to a reduction of the fidelity when compared to the case of local bases. Figures~\ref{fig:2.1} and \ref{fig:2.2} exhibit the estimation fidelities achieved for the states $|\Phi_1^n\rangle$ and $|\Phi_2^n\rangle$, respectively. A comparison for $n=4$ with Figs.~\ref{fig:1.1} and \ref{fig:1.2} shows an acute decrease of the fidelity from approximately 0.995 to 0.2. A similar decrease can also be observed in the case of the states $|\Phi_3^n\rangle$ and $|\Phi_4^n\rangle$. The low achieved fidelity finds its origin in the circuits employed to implement the measurements on the entangled bases. For $n=2,3$, and $4$, 2 entangled bases require the use of 2, 7 and 27 CNOT gates, respectively. This number becomes 3027 for $n=10$. Since the CNOT gate has a high error rate in NISQ devices, the implemented bases differ significantly from the actual bases $\mathcal{E}_a$ to be implemented. Moreover, this also affects the generated states. Figure~\ref{fig:2.2} shows that for $n=3$ the use of $2$ and $3$ entangled bases plus the computational basis leads to a fidelity value of $0.1$, much lower than in the case of $4$ entangled bases plus the computational basis. This originates in the state $|\Phi_2^3\rangle$ to be reconstructed, which for $2$ and $3$ entangled bases exhibits ill-conditioned equation systems. This is not present in the case of $4$ entangled bases, where the fidelity is on the order of $0.95$. Let us note that this effect does not appear in Fig.~\ref{fig:1.2}, where the use of $mn+1$ local bases allows us to obtain well-conditioned equation systems. Ultimately, the combination of these factors leads to a low estimation fidelity. \section{Conclusions} Estimating states of $d$-dimensional quantum systems requires a minimal number of measurement outcomes that scales quadratically with the dimension. In the case of composite systems, such as quantum computers, the scaling becomes exponential in the number of subsystems, which makes estimation by generic methods unfeasible except for few-component systems. NISQ computers are characterized by low-accuracy entangling gates, which are required for implementing measurements of arbitrary observables, and by a fixed number of repetitions, which constrains the size of statistical samples and decreases the estimation accuracy as the number of qubits increases. Therefore, estimating $n$-qubit states of NISQ computers is a difficult task. We have proposed a method to estimate pure states of $n$-qubit systems that is well suited for NISQ computers. The method is based on the reconstruction of the so-called reduced states $|\tilde\Psi_{\beta}^j \rangle$, which for an unknown state $|\Psi \rangle$ are $n$ sets of $2^{n-j}$ non-normalized states, one set for each $j=1, 2, ..., n$. If we know the reduced states $|\tilde\Psi_{\beta}^j \rangle$ for a fixed $j$, then it is easy to reconstruct the reduced states $|\tilde\Psi_{\beta}^{j+1} \rangle$ from the results of a well-defined set of measurements.
The method begins by measuring a set of projectors that allows us to reconstruct all the reduced states for $j=1$ and then, iterating, the rest of the reduced states until reaching $|\tilde\Psi_{0}^n \rangle$, the estimate of the unknown pure state. The set of measurements employed by the proposed method corresponds to projective measurements onto a set of bases. We have first shown that $mn+1$ bases, with $m$ an integer number equal to or greater than 2, allow us to estimate most pure states, up to a null-measure set. Thereby, a total of $(mn+1)2^n$ measurement outcomes is required. This number compares favorably with other estimation methods. Mutually unbiased bases, SIC-POVM, and compressed sensing require $(2^n+1)2^n$, $2^{2n}$, and on the order of $2^{2n}n^2$ measurement outcomes, respectively, to reconstruct pure states. The $mn+1$ bases are local, that is, they can be cast as the tensor product of $n$ single-qubit bases. Projective measurements onto the states of these bases are carried out on a NISQ computer by applying local gates onto each qubit followed by a projection onto the computational basis. Thus, no entangling gates are necessary. In principle, $2n+1$ bases are enough. We have shown, by means of numerical simulations, that the use of a larger number of local bases increases the estimation accuracy for larger numbers of qubits. However, if high-accuracy entangling gates are available, then the set of measurements employed by the proposed method can be reduced to projective measurements on $m$ entangled bases plus the computational basis, with $m$ an integer number equal to or greater than 3. In this case, an increase in the number of entangled bases also allows us to increase the estimation accuracy. We tested the proposed estimation method on IBM's quantum processing units. The estimation of local pure states via local bases up to 10 qubits provides fidelities that agree with the numerical simulations and are above 90\%. The estimation of entangled states via local bases was also tested. In this case, the estimation of tensor products of two-qubit Bell states up to $n=10$ led to fidelities above 70\%. In this case, however, the preparation of the state is affected by large errors due to the use of entangling gates. Finally, we tested the estimation of a GHZ state for $n=2,3$ and 4 qubits, obtaining fidelities above 86\%. Entangled bases were also tested. Nevertheless, due to the massive use of entangling gates in the preparation and measurement stages, very low fidelities were achieved, as expected. The estimation via $k$ entangled bases plus the computational basis, where $k$ is high enough but does not scale with the number $n$ of qubits, can lead to a large fidelity in future fault-tolerant quantum architectures \cite{Knill}, where large numbers of qubits and high-accuracy entangling gates are required. Also, it might be possible to increase the estimation fidelity by using alternative implementations of the multi-controlled NOT gates that employ ancillary qubits. This allows a reduction in the depth of the circuit required to implement measurements on the proposed entangled bases \cite{MultiControlGates}. The main characteristic of our protocol is its scalability, which is a consequence of the use of {\it a priori} information about the states to be estimated. This approach is common among many estimation methods such as, for instance, compressed sensing and matrix product states.
The purity assumption is reasonable in systems that are able to prepare high-purity states or in systems whose purity can be certified, for instance, through randomized benchmarking \cite{RandomizedBenchmark}. However, this is not necessarily the case on NISQ devices. Decoherence and gate errors can degrade the preparation of a pure state, so that the actually prepared state is mixed. In this case our protocol cannot be applied. Nevertheless, it has been shown that experimental results can be improved by reducing the error of the raw data employing error-mitigation techniques \cite{CI5BB,noise1,error_mitigation_1,error_mitigation_2}. Combining scalable error-mitigation methods with our estimation method could extend its applicability to high-noise systems. \begin{acknowledgments} This work was supported by ANID -- Millennium Science Initiative Program -- ICN17\_012. AD was supported by FONDECYT Grant 1180558. LP was supported by ANID-PFCHA/DOCTORADO-BECAS-CHILE/2019-72200275. LZ was supported by ANID-PFCHA/DOCTORADO-NACIONAL/2018-21181021. We thank the IBM Quantum Team for making multiple devices available to the CSIC-IBM Quantum Hub via the IBM Quantum Experience. \end{acknowledgments} \end{document}
\begin{document} \thispagestyle{empty} \title{Experimental approaches to the difference in the Casimir force through the varying optical properties of the boundary surface} \author{R.~Castillo-Garza${}^{1}$, C.-C.~Chang${}^{1}$, D.~Jimenez${}^{1}$, G.~L.~Klimchitskaya${}^2$, V.~M.~Mostepanenko${}^3$, and U.~Mohideen${}^1$} \affiliation{${}^{1}$Department of Physics and Astronomy, University of California, Riverside, California 92521, USA \\ ${}^2$North-West Technical University, Millionnaya St. 5, St.Petersburg 191065, Russia\\ ${}^3$Noncommercial Partnership ``Scientific Instruments'', Tverskaya St.{\ }11, Moscow 103905, Russia} \begin{abstract} We propose two novel experiments on the measurement of the Casimir force acting between a gold-coated sphere and semiconductor plates with markedly different charge carrier densities. In the first of these experiments a patterned Si plate is used, which consists of two sections of different dopant densities and oscillates in the horizontal direction below a sphere. The measurement scheme in this experiment is differential, i.e., it allows the direct high-precision measurement of the difference of the Casimir forces between the sphere and the sections of the patterned plate, or of the difference of the equivalent pressures between Au and patterned parallel plates, with static and dynamic techniques, respectively. The second experiment proposes to measure the Casimir force between the same sphere and a VO${}_2$ film which undergoes the insulator-metal phase transition as the temperature increases. We report the present status of the interferometer-based variable-temperature apparatus developed to perform both experiments and present the first results on the calibration and sensitivity. The magnitudes of the Casimir forces and pressures in the experimental configurations are calculated using different theoretical approaches proposed in the literature for the description of the optical and conductivity properties of semiconductors at low frequencies. It is shown that the suggested experiments will aid in the resolution of theoretical problems arising in the application of the Lifshitz theory at nonzero temperature to real materials. They will also open new opportunities in nanotechnology. \end{abstract} \pacs{12.20.Fv, 12.20.Ds, 68.37.Ps, 42.50.Nn} \maketitle \section{Introduction} The Casimir effect \cite{1} implies that there is a force acting between closely spaced electrically neutral bodies, arising from the zero-point oscillations of the electromagnetic field. The Casimir force can be viewed as an extension of the van der Waals force to large separations, where retardation effects come into play. Within a decade of Casimir's work, Lifshitz and collaborators \cite{2,3} introduced the role of the optical properties of the material into the theory of the van der Waals and Casimir forces. In the last few years, advances in both fundamental physics and nanotechnology have motivated careful experimental and theoretical investigations of the Casimir effect. The first modern experiments were made with metal test bodies in a sphere-plate configuration, and their results are summarized in Ref.~\cite{4}. In subsequent experiments the lateral Casimir force between corrugated surfaces \cite{5} and the pressure in the original Casimir configuration \cite{6} have been demonstrated. Later experiments \cite{7,8,8a} have brought the most precise determination of the Casimir pressure between two metal plates.
The rapid theoretical progress has raised fundamental questions on our understanding of the Casimir force between real metals at nonzero temperature. Specifically, the role of conductivity processes and the related optical properties of metals at quasi-static frequencies has become the subject of discussions \cite{9,10,11,12,13,14,15,16}. One of the most important applications of the Casimir effect is the design, fabrication and operation of micro- and nanoelectromechanical systems such as micromirrors, microresonators, nanotweezers and nanoscale actuators \cite{17,18,19,20,21}. The separations between the adjacent surfaces in such devices are rapidly falling below a micrometer, i.e., to a region where the Casimir force becomes comparable with typical electrostatic forces. It is important that investigations of the Casimir force be done in semiconductors, as they are the material of choice for the fabrication of optomechanical, micro- and nanoelectromechanical systems. While the role of the conductivity and optical properties of materials can be checked in metals, semiconductors offer better control of the related parameters (charge carrier density, defect density, size, etc.) and will provide an exhaustive check of the various models. Reference \cite{22} pioneered the measurement of the Casimir force acting between a semiconductor surface (a single-crystal Si wafer) and a gold-coated sphere. The experimental data obtained for a wafer with a charge carrier concentration of $\approx 3\times 10^{19}\,\mbox{cm}^{-3}$ were compared with the Lifshitz theory at zero temperature and good agreement was found at a 95\% confidence level. At the same time, the theory describing a ``dielectric'' Si plate with a concentration $\sim 5\times 10^{12}\,\mbox{cm}^{-3}$ was excluded by experiment at 70\% confidence. This allows one to conclude that the Casimir force is sensitive to the conductivity properties of semiconductors. This conclusion has found direct experimental confirmation in Ref.~\cite{23}, where the Casimir forces between a gold-coated sphere and two different Si wafers with charge carrier concentrations of $\approx 3.2\times 10^{20}\,\mbox{cm}^{-3}$ and $1.2\times 10^{16}\,\mbox{cm}^{-3}$ have been measured. The difference of the measured forces for the two conductivities was found to be in good agreement with the corresponding difference of the theoretical results computed at zero temperature (note that the sensitivity of the force measurements in Refs.~\cite{22,23} was not sufficient to detect the thermal corrections predicted in Refs.~\cite{9,10,11,12,13,14,15,16}). In the most precise experiment on the Casimir force between a metal and a semiconductor \cite{24}, the density of charge carriers in a Si membrane of 4$\,\mu$m thickness is changed from $5\times 10^{14}\,\mbox{cm}^{-3}$ to $2\times 10^{19}\,\mbox{cm}^{-3}$ through the absorption of photons from a laser pulse. This is a differential experiment where only the difference of the Casimir forces in the presence and in the absence of a laser pulse was measured. This decreases the experimental error to a fraction of 1\,pN and allows one to check the role of conductivity processes in semiconductors at the laboratory temperature $T=300\,$K. The experimental data for the difference Casimir force as a function of separation were compared with the Lifshitz theory and the outcome was somewhat puzzling.
The data were found to be in excellent agreement with the theoretical difference force computed at $T=300\,$K under the assumption that in the absence of laser light Si possesses a finite static dielectric permittivity. By contrast, if the theory takes into account the dc conductivity of Si in the absence of laser light, it is excluded by the data at a 95\% confidence level. This is somewhat analogous to the above-mentioned problems for two real metals, where the inclusion of the actual conductivity processes at low frequencies also leads to disagreement with experiment \cite{7,8,8a}. The fundamental questions on the role of scattering processes and conductivity at low frequencies in the Casimir force have to be clarified for further progress in the field, and this calls for new precise experiments. In this paper we propose two experiments on the Casimir force between a metal sphere and a semiconductor plate which can shed light on the applicability of the Lifshitz theory at nonzero temperature to real materials. In the first of these experiments, a patterned Si plate with two sections of different dopant densities is oscillated in the horizontal direction below the Au-coated sphere. As a result, the sphere is subject to a difference Casimir force which can be measured using static and dynamic techniques. This experimental scheme promises a record sensitivity to force differences at the level of 1\,fN. In the second experiment, we propose to demonstrate the modulation of the Casimir force by optically switching the insulator-metal transition in VO${}_2$ films \cite{25}. The phase transition between the insulator and the metal leads to a change in the charge carrier density by a factor of order $10^4$, which is sufficient to bring about a large change in the Casimir force. For both experiments the related theory is elaborated and the magnitudes of the Casimir forces are computed for the experimental configurations. The effects of using different theoretical approaches to the description of conductivity processes are carefully analyzed and shown to be observable in the proposed experiments. The present status of the apparatus under development at UC Riverside, together with its calibration and sensitivity, is presented. The proposed experiments offer a precision test of the role of conductivity, optical properties and scattering in the Lifshitz theory of the van der Waals and Casimir force at nonzero temperature. They also open up possibilities of radically new nanomechanical devices using the Casimir force in imaging applications. The paper is organized as follows. In Sec.~II, we describe the proposed experiment on the difference Casimir force with the patterned semiconductor plate. A brief description of the experimental apparatus and preliminary results are also provided. Section~III contains the calculation of the difference Casimir force and equivalent pressure in the patterned geometry using the Lifshitz theory at nonzero temperature and different models of the conductivity processes at low frequencies. In Sec.~IV we propose the experiment on the modulation of the Casimir force through a metal-insulator transition. The experimental scheme and some preliminary tests are discussed. Section~V presents theoretical computations of the Casimir forces across the insulator-metal transition on the basis of the Lifshitz theory at nonzero temperature and using different models for the conductivity processes. Section~VI contains our conclusions and a discussion.
\section{Proposed experiment on the difference Casimir force with the patterned semiconductor plate} The aim of this experiment is to gain a fundamental understanding of the role of carrier density in the Casimir force using a nanofabricated patterned semiconductor plate. The proposed design for the experiment is shown schematically in Fig.~1. A gold coated polystyrene sphere of about 100$\,\mu$m radius is attached to a cantilever of an atomic force microscope (AFM) specially adapted for making sensitive force measurements. Instead of the simple single crystal Si substrate used in the previous experiments \cite{22,23}, here a patterned Si plate is employed. This plate is composed of single crystal Si specifically fabricated to have adjacent sections of two different charge carrier densities $n\sim 10^{16}\,\mbox{cm}^{-3}$ and $\tilde{n}\sim 10^{20}\,\mbox{cm}^{-3}$. In this range of doping densities, the plasma frequency $\omega_p$ will change by a factor of 100. Additional changes in $\omega_p$ can be brought about by using both $p$(B) and $n$(P) type dopants, as electrons and holes differ in their effective mass by 30\%. The preparation of the Si sample with the two sections having different conductivities is shown schematically in Fig.~2. First, one half of the bare Si wafer of 0.3 to 0.5\,mm thickness \cite{22,23} having the lower conductivity shown in (a) is masked with a photoresist as shown in (b). Next, in (c) the exposed half of the Si wafer is doped with P ions using ion implantation, leading to a higher density of electrons in the exposed half in (d). Rapid thermal annealing and chemical mechanical polishing of the patterned Si plate will be done as the last step. This is to ensure that there are no surface features resulting from the fabrication. Similar nanofabrication procedures for semiconductors were used in our previous work \cite{23,26,27}. Sharp transition boundaries of width less than 200\,nm between the two sections of the Si plate are possible. The limitation comes from interdiffusion and the resolution of the ion implantation procedure to be used. It might be necessary to further limit interdiffusion by the creation of a narrow 100\,nm barrier between the two doped regions. Identically prepared but unpatterned samples will be used to measure the properties which are needed for theoretical computations. The carrier concentration will be measured using Hall probes. This will yield an independent measurement of the plasma frequencies. A four probe technique will be used to measure the conductivity $\sigma$. From the conductivity and the charge carrier concentration, the scattering time $\tau=\sigma/(\varepsilon_0\omega_p^2)$ will be found, where $\varepsilon_0$ is the permittivity of free space (a short numerical estimate relating these quantities is given below). The first important improvement of this experiment, as compared with Refs.~\cite{22,23}, is the direct measurement of difference forces when the patterned plate is oscillated below the sphere. This measurement is performed as follows. The patterned Si plate will be mounted on the piezo below an Au coated sphere as is shown in Fig.~1. The Si plate is positioned such that the boundary is below the vertical diameter of the sphere.
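For orientation, the sketch below combines the relation quoted above, $\tau=\sigma/(\varepsilon_0\omega_p^2)$, with the standard expression $\omega_p^2=ne^2/(\varepsilon_0 m^{*})$ to estimate the Drude parameters from the measured carrier density and conductivity. It is a convenience script only, not part of the measurement procedure; the conductivity effective mass $m^{*}\approx 0.26\,m_e$ for electrons in Si is an assumed textbook value.
\begin{verbatim}
# Hedged estimate of the Drude parameters from Hall (n) and four-probe (sigma)
# data. The relation omega_p^2 = n e^2/(eps0 m*) and the effective mass
# m* ~ 0.26 m_e for electrons in Si are standard assumptions, not data from
# this proposal.
from scipy.constants import e, m_e, epsilon_0

m_eff = 0.26 * m_e                 # assumed conductivity effective mass

def plasma_frequency(n_cm3):
    """Plasma frequency [rad/s] for a carrier density given in cm^-3."""
    n = n_cm3 * 1e6                # convert cm^-3 to m^-3
    return (n * e**2 / (epsilon_0 * m_eff)) ** 0.5

def scattering_time(sigma, omega_p):
    """Scattering time tau = sigma/(eps0 omega_p^2) for sigma in S/m."""
    return sigma / (epsilon_0 * omega_p**2)

for n_cm3 in (1e16, 1e20):         # the two doping levels of the patterned plate
    print(f"n = {n_cm3:.0e} cm^-3 -> omega_p ~ {plasma_frequency(n_cm3):.2e} rad/s")
# The two plasma frequencies differ by a factor of 100, as stated in the text.
\end{verbatim}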
The distance between the sphere and the Si plate $z$ will be kept fixed and the Si plate will be oscillated in the horizontal direction using the piezo such that the sphere crosses the boundary in the perpendicular direction during each oscillation (a similar approach was exploited in Ref.~\cite{28} for constraining new forces from the oscillations of the Au coated sphere over two dissimilar metals, Au and Ge). The Casimir force on the sphere changes as the sphere crosses the boundary. This change corresponds to the differential force \begin{equation} \Delta F(z)=F_{\tilde{n}}(z)-F_{n}(z), \label{eq1} \end{equation} \noindent equal to the difference of the Casimir forces due to the two different charge carrier densities $\tilde{n}$ and $n$, respectively. This causes a difference in the deflection of the cantilever. In order to reduce the random noise by averaging, the periodic horizontal movement of the plate will be performed at a frequency $\Omega\sim 0.1\,$Hz. The amplitude of the plate oscillations is limited by the piezo characteristics, but will be of order 100\,$\mu$m, much larger than the typical transition region width of 200\,nm. The experiment will be repeated for different sphere-plate separations in the region from 100 to 300\,nm. The measurement of absolute separations will be performed by the application of voltages to the test bodies as described in Ref.~\cite{22}. The second major improvement in this experiment in comparison with all previous measurements of the Casimir force is the increased sensitivity. This will be achieved through the use of the interferometer based low temperature AFM capable of operating over a wide temperature range spanning from 360 to 4\,K, and the use of two measurement techniques, a static one and a dynamic one. A picture of the newly constructed experimental apparatus of a low-temperature AFM is shown in Fig.~3. Here the cantilever deflection is measured interferometrically, which provides much higher sensitivity than the photodiodes used in the previous work \cite{22,23,24}. The detection of a difference force $\Delta F(z)$ will be done by two alternative techniques. The first technique, a static one, reduces to the direct measurement of $\Delta F(z)$ as described above. We are presently performing the initial tests and calibration trials at 77\,K. An oil free vacuum with a pressure of around $2\times 10^{-7}\,$Torr is used. The instrument is magnetically damped to yield low mechanical coupling to the environment. The temperature can be varied with a precision of 0.2\,K. We have fabricated special conductive cantilevers with a spring constant $k=0.03\,$N/m. The magnitude of $k$ is found by applying electrostatic voltages to the plate as discussed in Ref.~\cite{22}. To accomplish this, Si cantilevers were thermal diffusion doped to achieve the necessary conductivity. Note that conductive cantilevers are necessary to reduce electrostatic effects. The cantilever-sphere arrangement has been checked to be stable at 77\,K. The experimental setup using the static measurement technique allows not only a demonstration but also a detailed investigation of the influence of carrier density, conductivity and scattering in semiconductors on the Casimir force. According to the results of Refs.~\cite{22,23,24} and calculations below in Sec.~III, the magnitude of the difference force to be measured is about several pN.
Different theoretical models of conductivity processes at low frequencies lead to predictions differing by approximately 1\,pN within a wide separation region (see Fig.~5 in Sec.~III). The described setup provides an excellent opportunity for the precise measurement of forces of order 1\,pN and below. We have measured a resonance frequency of the cantilever of $f_r=1130.9\,$Hz, a quality factor $Q=5889.2$, and an equivalent noise bandwidth $B=0.3\,$Hz. The resultant force sensitivity of our cantilever at $T=77\,$K with the gold coated sphere attached was determined following Ref.~\cite{29} to be \begin{equation} \delta F_{\min}=\left( \frac{2k_B TkB}{\pi Qf_r}\right)^{1/2}\approx 0.96\times 10^{-15}\,\mbox{N}\approx 1\,\mbox{fN}, \label{eq2} \end{equation} \noindent where $k_B$ is the Boltzmann constant (a quick numerical check of this estimate is given at the end of this section). Even bearing in mind that the systematic error may be up to an order of magnitude larger, the sensitivity (\ref{eq2}) presents considerable possibilities for the precise investigation of the difference Casimir force $\Delta F$. The second technique for the detection of the difference Casimir force is a dynamic one \cite{7,8,20}. Here this technique is applied not for a direct measurement of $\Delta F$ but rather for the experimental determination of the equivalent difference Casimir pressure between the two parallel plates (one made of Au and the other one, a patterned Si plate). The cantilever-sphere system oscillates in the vertical direction due to its thermal noise with a resonant frequency $\omega_r=(k/M)^{1/2}=2\pi f_r$ in the absence of the Casimir force, where $M$ is the mass of the system. The thermal noise spectrum of the sphere-cantilever system is measured and fit to a Lorentzian to identify the peak resonant frequency, $\omega_r$. The shift of $\omega_r$ in the presence of the Casimir force when, for example, the sphere is positioned above a section of the patterned Si plate with the density of charge carriers $\tilde{n}$ is equal to \cite{7,8,20} \begin{equation} \omega_{r,\tilde{n}}-\omega_r=-\frac{\omega_r}{2k}\, \frac{\partial F_{\tilde{n}}(z)}{\partial z}. \label{eq3} \end{equation} Next the plate is oscillated in the horizontal direction with a frequency $\Omega$. As a result, the frequency shift \begin{equation} \omega_{r,\tilde{n}}-\omega_{r,n}=-\frac{\omega_r}{2k}\, \frac{\partial\Delta F(z)}{\partial z} \label{eq4} \end{equation} \noindent between the resonant frequencies above the two different sections of the patterned Si plate is measured. Using the proximity force approximation \cite{4,30}, we determine the difference Casimir pressure \begin{equation} \Delta P(z)=-\frac{1}{2\pi R}\, \frac{\partial\Delta F(z)}{\partial z} \label{eq5} \end{equation} \noindent between the two parallel plates (the Au one and the patterned Si). Note that the systematic error from the use of the proximity force approximation was recently confirmed to be less than $z/R$ \cite{31,32,33,34,34a}. Equations (\ref{eq4}) and (\ref{eq5}) express the difference Casimir pressure through the measured shift of the resonance frequency above the two halves of the patterned plate. As is shown in the next section, the measurements of the difference Casimir pressure using the dynamic technique provide one more test of the predictions of the different models of conductivity processes at low frequencies. The experiment on the difference Casimir force from a patterned Si plate, as described in this section, allows variation of the charge carrier density by the preparation of different semiconductor samples.
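As a quick sanity check of the thermal-noise estimate (\ref{eq2}), the snippet below (a convenience script only) evaluates $\delta F_{\min}$ with the measured cantilever parameters quoted above.
\begin{verbatim}
# Numerical check of Eq. (2) with the measured cantilever parameters.
from math import pi, sqrt
from scipy.constants import k as k_B      # Boltzmann constant [J/K]

T   = 77.0        # temperature [K]
k   = 0.03        # cantilever spring constant [N/m]
B   = 0.3         # equivalent noise bandwidth [Hz]
Q   = 5889.2      # quality factor
f_r = 1130.9      # resonance frequency [Hz]

dF_min = sqrt(2.0 * k_B * T * k * B / (pi * Q * f_r))
print(f"delta F_min = {dF_min:.2e} N")    # ~0.96e-15 N, i.e. about 1 fN
\end{verbatim}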
Thus, the proposed measurements should provide a comprehensive understanding of the role of conductivity and optical processes in the Casimir force for nonmetallic materials and discriminate between competing theoretical approaches. \section{Calculation of the difference Casimir force and \protect{\\} pressure in the patterned geometry} The difference Casimir force and the equivalent Casimir pressure from the oscillation of the patterned Si plate below an Au coated sphere at $T=300\,$K in thermal equilibrium are given by the Lifshitz theory. In the static technique the data to be compared with theory is the difference of Casimir forces acting between the sphere and two sections of the patterned plate. This difference is obtained from the Casimir energy between two parallel plates, as given by the Lifshitz theory, using the proximity force approximation \cite{2,3,4} \begin{eqnarray} && \Delta F(z)=k_BTR\sum\limits_{l=0}^{\infty}\left(1- \frac{1}{2}\delta_{l0}\right)\int_{0}^{\infty} k_{\bot}dk_{\bot} \label{eq6} \\ &&\phantom{aa}\times \ln\frac{\left[1-r_{TM;\tilde{n}}(\xi_l,k_{\bot}) r_{TM}(\xi_l,k_{\bot})e^{-2q_lz}\right]\,\left[1- r_{TE;\tilde{n}}(\xi_l,k_{\bot}) r_{TE}(\xi_l,k_{\bot})e^{-2q_lz}\right]}{\left[1- r_{TM;{n}}(\xi_l,k_{\bot}) r_{TM}(\xi_l,k_{\bot})e^{-2q_lz}\right]\,\left[1- r_{TE;{n}}(\xi_l,k_{\bot}) r_{TE}(\xi_l,k_{\bot})e^{-2q_lz}\right]}\,. \nonumber \end{eqnarray} \noindent Here $\xi_l=2\pi k_BTl/\hbar$ with $l=0,\,1,\,2,\,\ldots$ are the Matsubara frequencies, $q_l=(k_{\bot}^2+\xi_l^2/c^2)^{1/2}$, and $k_{\bot}$ is the projection of the wave vector on the boundary planes. The reflection coefficients on the Au plane for the two independent polarizations of the electromagnetic field (transverse magnetic and transverse electric modes) are \begin{equation} r_{TM}(\xi_l,k_{\bot})= \frac{\varepsilon_lq_l-k_l}{\varepsilon_lq_l+k_l}, \qquad r_{TE}(\xi_l,k_{\bot})= \frac{k_l-q_l}{k_l+q_l}, \label{eq7} \end{equation} \noindent where $k_l=(k_{\bot}^2+\varepsilon_l\xi_l^2/c^2)^{1/2}$ and $\varepsilon_l=\varepsilon(i\xi_l)$ is the dielectric permittivity of Au along the imaginary frequency axis. In a similar way, the reflection coefficients on the two sections of a patterned Si plate with charge carrier densities $\tilde{n}$ and $n$ are given, respectively, by \begin{equation} r_{TM;\tilde{n},n}(\xi_l,k_{\bot})= \frac{\varepsilon_{l;\tilde{n},n}q_l- k_{l;\tilde{n},n}}{\varepsilon_{l;\tilde{n},n}q_l+k_{l;\tilde{n},n}}, \qquad r_{TE;\tilde{n},n}(\xi_l,k_{\bot})= \frac{k_{l;\tilde{n},n}-q_l}{k_{l;\tilde{n},n}+q_l}, \label{eq8} \end{equation} \noindent where $k_{l;\tilde{n},n}= (k_{\bot}^2+\varepsilon_{l;\tilde{n},n}\xi_l^2/c^2)^{1/2}$ and $\varepsilon_{l;\tilde{n},n}=\varepsilon_{\tilde{n},n}(i\xi_l)$ are the dielectric permittivities of Si with charge carrier densities $\tilde{n}$ and $n$ along the imaginary frequency axis. In the dynamic technique the data to be compared with theory is the equivalent difference Casimir pressure between two parallel plates, one made of Au and the other one a patterned Si plate. Using the same notations as above, the difference Casimir pressure is given by \begin{eqnarray} && \Delta P(z)=-\frac{k_BT}{\pi}\sum\limits_{l=0}^{\infty}\left(1- \frac{1}{2}\delta_{l0}\right)\int_{0}^{\infty} k_{\bot}dk_{\bot}q_l \label{eq9} \\ &&\phantom{aa}\times \left\{\left[r_{TM;\tilde{n}}^{-1}(\xi_l,k_{\bot}) r_{TM}^{-1}(\xi_l,k_{\bot})e^{2q_lz}-1\right]^{-1}+\left[ r_{TE;\tilde{n}}^{-1}(\xi_l,k_{\bot}) r_{TE}^{-1}(\xi_l,k_{\bot})e^{2q_lz}-1\right]^{-1}\right.
\nonumber \\ && \phantom{aaaa}\left. -\left[r_{TM;{n}}^{-1}(\xi_l,k_{\bot}) r_{TM}^{-1}(\xi_l,k_{\bot})e^{2q_lz}-1\right]^{-1}- \left[r_{TE;{n}}^{-1}(\xi_l,k_{\bot}) r_{TE}^{-1}(\xi_l,k_{\bot})e^{2q_lz}-1\right]^{-1}\right\}\,. \nonumber \end{eqnarray} \noindent Note that in Eqs.~(\ref{eq6}) and (\ref{eq9}) we have replaced the 100\,nm Au coating and the 0.3--0.5\,mm Si plate by Au and Si semispaces, respectively. Using the Lifshitz formula for layered structures \cite{4} it is easy to calculate the force and pressure errors due to this replacement. For example, for an Au layer at a typical separation of 100\,nm this error is about 0.01\%. For Si a finite thickness of the plate $d$ markedly affects the Casimir force when the separation distance $z$ exceeds the thickness, i.e., $z/d>1$ \cite{35a}. In our case, however, even at the largest separation considered ($z=300\,$nm) the ratio of the separation to the plate thickness $z/d\leq 10^{-3}$. This is similar to the case of the experiment \cite{24} where the finite thickness of the Si membrane also does not influence the magnitude of the Casimir force because at separations $z\leq 200\,$nm, where statistically meaningful results were obtained, $z/d\leq0.05$. We have performed computations of the difference Casimir force (\ref{eq6}) and difference Casimir pressure (\ref{eq9}) for samples with typical values of charge carrier concentrations $\tilde{n}$ and $n$ as used in experiments \cite{22,23,24}. Both sections of the Si plate were chosen to have electron conductivity and to be doped with P. For the section of the plate with higher concentration of charge carriers the values $\tilde{n}_1=3.2\times 10^{20}\,\mbox{cm}^{-3}$ (such a sample was fabricated in Ref.~\cite{23}) and $\tilde{n}_2=3.2\times 10^{19}\,\mbox{cm}^{-3}$ were used in the computations. The respective dielectric permittivity along the imaginary frequency axis can be represented in the form \cite{34b} \begin{equation} \varepsilon_{\tilde{n}}(i\xi_l)=\varepsilon^{Si}(i\xi_l)+ \frac{\omega_{p;\tilde{n}}^2}{\xi_l(\xi_l+\gamma_{\tilde{n}})}. \label{eq10} \end{equation} \noindent Here $\varepsilon^{Si}(i\xi_l)$ is the permittivity of high-resistivity (dielectric) Si along the imaginary frequency axis computed in Ref.~\cite{35} by means of the dispersion relation using the tabulated optical data for the complex index of refraction \cite{36}. The values of the plasma frequencies and relaxation parameters are the following \cite{23}: $\omega_{p;{\tilde{n}_1}}=2.0\times 10^{15}\,$rad/s, $\gamma_{{\tilde{n}_1}}=2.4\times 10^{14}\,$rad/s, $\omega_{p;{\tilde{n}_2}}=6.3\times 10^{14}\,$rad/s, $\gamma_{{\tilde{n}_2}}=1.8\times 10^{13}\,$rad/s. In Fig.~4 the dielectric permittivities of the samples with high concentrations of charge carriers $\tilde{n}_1$ and $\tilde{n}_2$ are shown as solid lines 1 and 2, respectively. In the same figure, the dashed line $a$ shows the permittivity of high-resistivity (dielectric) Si \cite{35} and the dotted line the permittivity of Au computed in Ref.~\cite{35} using the tabulated optical data of Ref.~\cite{36}. Below we will use two models for the permittivity of the section of the Si plate with lower concentration of charge carriers $n$. Calculations show that for any $0<n\leq 1.0\times 10^{17}\,\mbox{cm}^{-3}$ (this interval includes the experimental value of $n\approx 1.2\times 10^{16}\,\mbox{cm}^{-3}$ in Ref.~\cite{23}), the obtained values of $F_n(z)$ and, thus, of $\Delta F(z)$ do not depend on $n$.
Because of this we use in the computations $n= 1.0\times 10^{17}\,\mbox{cm}^{-3}$, the plasma frequency $\omega_{p;n}=3.5\times 10^{13}\,$rad/s and the relaxation parameter $\gamma_{n}=1.8\times 10^{13}\,$rad/s \cite{23,37} (note that for $n\leq 1.0\times 10^{17}\,\mbox{cm}^{-3}$ the value of the relaxation parameter does not affect the magnitude of the Casimir force). Then the dielectric permittivity of this section of the Si plate along the imaginary frequency axis is given by \begin{equation} \varepsilon_{{n}}^{(b)}(i\xi_l)=\varepsilon^{Si}(i\xi_l)+ \frac{\omega_{p;{n}}^2}{\xi_l(\xi_l+\gamma_{{n}})} \label{eq11} \end{equation} \noindent and is shown as the dashed line $b$ in Fig.~4. This is one model of Si with a lower concentration of charge carriers referred to below as model ($b$). As is seen in Fig.~4, the dashed line $b$ (and thereby all respective lines for samples with the concentration of charge carriers smaller than $1.0\times 10^{17}\,\mbox{cm}^{-3}$) deviates from the permittivity of dielectric Si (line $a$) only at frequencies below the first Matsubara frequency $\xi_1$. Because of this, it is common (see, e.g., \cite{2,3,38}) to neglect the small conductivity of high-resistivity materials at low frequencies and describe them in the frequency region below the first Matsubara frequency by the static dielectric permittivity. In our case this leads to \begin{equation} \varepsilon_{{n}}^{(a)}(i\xi_l)=\varepsilon^{Si}(i\xi_l), \label{eq12} \end{equation} \noindent which is the other model for Si with a lower concentration of charge carriers referred to below as model ($a$). From Eq.~(\ref{eq12}) and Fig.~4 it follows that at all frequencies $\xi\leq\xi_1$ one has $\varepsilon_{{n}}^{(a)}(i\xi)=\varepsilon^{Si}(0)=11.66$. To be exact, at any $T>0$ the density of free charge carriers $n$ in semiconductors (and even in dielectrics) and thus the conductivity are nonzero ($n>0$). Thus, the model (\ref{eq11}) should be considered as more exact than the model (\ref{eq12}). At the same time, if we note that for $n\leq 1.0\times 10^{17}\,\mbox{cm}^{-3}$ the conductivity is small, it should be expected that both models lead to practically identical results. This is, however, not so. In Fig.~5 we present the computational results for the difference Casimir force using Eq.~(\ref{eq6}). The solid line $1a$ demonstrates the values of the difference Casimir force versus separation for the patterned Si plate with a higher concentration of charge carriers $\tilde{n}_1$ computed under the assumption that the lower concentration section of the plate is described by Eq.~(\ref{eq12}), i.e., the conductivity processes at low frequencies are neglected. The dashed line $1b$ shows the difference Casimir force as a function of separation computed with the same $\tilde{n}_1$ but taking into account the conductivity processes at low frequencies in accordance with Eq.~(\ref{eq11}). As is seen from the comparison of lines $1a$ and $1b$, the difference Casimir forces computed using Eqs.~(\ref{eq12}) and (\ref{eq11}) differ by 1.2\,pN at a separation $z=100\,$nm and this difference slowly decreases to approximately 0.14\,pN at a separation $z=300\,$nm. The lines $2a$ and $2b$ present similar results for the case when the higher charge carrier density is equal to $\tilde{n}_2$.
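For readers who wish to reproduce the order of magnitude of these results, the sketch below evaluates Eq.~(\ref{eq6}) numerically. It is not the code used for Fig.~5: the tabulated optical data for Au and for dielectric Si are replaced by a simple Drude model and a single-oscillator approximation with assumed parameters, and the Matsubara sum is truncated. Since the gap between models ($a$) and ($b$) is dominated by the zero-frequency term, which is treated exactly, this gap should nevertheless come out close to the $\approx 1.2\,$pN quoted above at $z=100\,$nm.
\begin{verbatim}
# Hedged numerical sketch of the difference Casimir force, Eq. (6).
# Stand-ins (assumptions): Drude parameters for Au and a single-oscillator
# "core" permittivity for dielectric Si replace the tabulated optical data.
import numpy as np
from scipy.integrate import quad
from scipy.constants import k as k_B, hbar, c

T, R = 300.0, 100e-6            # temperature [K], sphere radius [m]
EPS_SI0 = 11.66                 # static permittivity of dielectric Si

def eps_au(xi):
    wp, g = 1.37e16, 5.3e13     # assumed Drude parameters of Au [rad/s]
    return 1.0 + wp**2 / (xi * (xi + g))

def eps_si_core(xi):
    w0 = 6.6e15                 # assumed effective absorption frequency [rad/s]
    return 1.0 + (EPS_SI0 - 1.0) / (1.0 + xi**2 / w0**2)

def eps_si_doped(xi, wp, g):    # Eqs. (10)/(11): core permittivity plus carriers
    return eps_si_core(xi) + wp**2 / (xi * (xi + g))

def r_tm(eps, xi, q, kp):       # Eqs. (7)/(8), TM polarization
    kmat = np.sqrt(kp**2 + eps * xi**2 / c**2)
    return (eps * q - kmat) / (eps * q + kmat)

def r_te(eps, xi, q, kp):       # Eqs. (7)/(8), TE polarization
    kmat = np.sqrt(kp**2 + eps * xi**2 / c**2)
    return (kmat - q) / (kmat + q)

def delta_F(z, dc_conductivity, wp_hi=2.0e15, g_hi=2.4e14,
            wp_lo=3.5e13, g_lo=1.8e13, l_max=300):
    """Eq. (6); dc_conductivity=True -> model (b), False -> model (a)."""
    total = 0.0
    for l in range(l_max + 1):
        xi = 2.0 * np.pi * k_B * T * l / hbar
        def integrand(kp):
            q = np.sqrt(kp**2 + xi**2 / c**2)
            ex = np.exp(-2.0 * q * z)
            if l == 0:                            # zero-frequency term
                rM_au, rE_au = 1.0, 0.0           # Drude Au
                rM_hi, rE_hi = 1.0, 0.0           # Eq. (13)
                rM_lo = 1.0 if dc_conductivity else (EPS_SI0 - 1) / (EPS_SI0 + 1)
                rE_lo = 0.0                       # Eq. (14)
            else:
                e_au, e_hi = eps_au(xi), eps_si_doped(xi, wp_hi, g_hi)
                e_lo = (eps_si_doped(xi, wp_lo, g_lo) if dc_conductivity
                        else eps_si_core(xi))     # Eq. (11) vs Eq. (12)
                rM_au, rE_au = r_tm(e_au, xi, q, kp), r_te(e_au, xi, q, kp)
                rM_hi, rE_hi = r_tm(e_hi, xi, q, kp), r_te(e_hi, xi, q, kp)
                rM_lo, rE_lo = r_tm(e_lo, xi, q, kp), r_te(e_lo, xi, q, kp)
            num = (1 - rM_hi * rM_au * ex) * (1 - rE_hi * rE_au * ex)
            den = (1 - rM_lo * rM_au * ex) * (1 - rE_lo * rE_au * ex)
            return kp * np.log(num / den)
        val, _ = quad(integrand, 0.0, 40.0 / z, limit=200)
        total += (0.5 if l == 0 else 1.0) * val
    return k_B * T * R * total

z = 100e-9
gap = delta_F(z, dc_conductivity=False) - delta_F(z, dc_conductivity=True)
print(f"model (a) minus model (b): {gap:.2e} N")  # magnitude close to 1.2 pN
\end{verbatim}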
As is seen from Fig.~5, decreasing the higher concentration by an order of magnitude decreases the predicted magnitude of the difference Casimir force by more than two times, but leaves the same gap between the predictions of the two different models of the permittivity at low frequencies. Importantly, our predictions do not depend on the discussions mentioned in the Introduction on the optical properties of metals at quasi-static frequencies \cite{9,10,11,12,13,14,15,16}. The resolution of this controversy affects only the value of the Au reflection coefficient $r_{TE}(0,k_{\bot})$ at zero frequency. The latter, however, does not contribute to the difference Casimir force (\ref{eq6}) and pressure (\ref{eq9}) because for dielectrics and semiconductors $r_{TE;\tilde{n},n}(0,k_{\bot})=0$ regardless of what model (\ref{eq11}) or (\ref{eq12}) is used for the description of the dielectric permittivity at low frequencies. The obtained difference between the lines $1a-1b$ and $2a-2b$ in Fig.~5 is completely explained by the different contributions of the semiconductor reflection coefficient $r_{TM;n}(0,k_{\bot})$ when one uses Eq.~(\ref{eq11}) or Eq.~(\ref{eq12}) to describe the dielectric permittivity at low frequencies. Regarding the semiconductor section with a higher charge carrier density $\tilde{n}$, from Eqs.~(\ref{eq8}), (\ref{eq10}) it always holds that \begin{equation} r_{TM;\tilde{n}}(0,k_{\bot})=1. \label{eq13} \end{equation} \noindent However, for the section of the plate with a lower charge carrier density $n$ it follows from Eq.~(\ref{eq8}) that \begin{equation} r_{TM;{n}}(0,k_{\bot})=1\quad\mbox{or}\quad r_{TM;{n}}(0,k_{\bot})= \frac{\varepsilon^{Si}(0)-1}{\varepsilon^{Si}(0)+1} \label{eq14} \end{equation} \noindent when Eq.~(\ref{eq11}) or Eq.~(\ref{eq12}) is used, respectively. Thus, the difference between the lines $1a$ and $1b$ (and the same difference between the lines $2a$ and $2b$) can be found analytically. Taking only the zero-frequency contribution in Eq.~(\ref{eq6}) and subtracting the difference Casimir force calculated using Eq.~(\ref{eq11}) [model ($b$)] from the difference Casimir force calculated using Eq.~(\ref{eq12}) [model ($a$)] one obtains \begin{equation} \Delta F_{a}^{(0)}-\Delta F_{b}^{(0)}= -\frac{k_BTR}{8z^2}\left\{\zeta(3)-\mbox{Li}_3 \left[\frac{\varepsilon^{Si}(0)-1}{\varepsilon^{Si}(0)+1} \right]\right\}. \label{eq15} \end{equation} \noindent Here $\zeta(z)$ is the Riemann zeta function, and Li${}_3(z)$ is the polylogarithm function. The results using the analytic Eq.~(\ref{eq15}) coincide with the differences between the lines $1a-1b$ and $2a-2b$ in Fig.~5 computed numerically (a short numerical check is given below). In the experiment \cite{24} the difference Casimir force between an Au coated sphere and a Si plate illuminated with laser pulses was measured for the first time. In the presence of light the charge carrier density was about $2\times 10^{19}\,\mbox{cm}^{-3}$ and in the absence of light about $5\times 10^{14}\,\mbox{cm}^{-3}$. The experimental data were shown to be in agreement with model ($a$) which uses the finite static dielectric permittivity of Si. The model ($b$) which includes the dc conductivity of Si was excluded at 95\% confidence within the separation region from 100 to 200\,nm. As was discussed above, in the framework of the Lifshitz theory this result is rather unexpected.
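The analytic expression (\ref{eq15}) is easy to evaluate directly; the short check below (a convenience script, with the polylogarithm summed from its defining series) reproduces the gap of about 1.2\,pN at $z=100\,$nm and 0.13--0.14\,pN at $z=300\,$nm for the sphere radius $R=100\,\mu$m quoted in Sec.~II.
\begin{verbatim}
# Check of the analytic zero-frequency difference, Eq. (15).
from scipy.constants import k as k_B

def li3(x, terms=400):
    """Polylogarithm Li_3(x) summed from its defining series (|x| <= 1)."""
    return sum(x**m / m**3 for m in range(1, terms + 1))

T, R, eps_si0 = 300.0, 100e-6, 11.66
bracket = li3(1.0) - li3((eps_si0 - 1.0) / (eps_si0 + 1.0))  # zeta(3) - Li_3(...)

for z in (100e-9, 300e-9):
    dF = -k_B * T * R / (8.0 * z**2) * bracket
    print(f"z = {z*1e9:.0f} nm:  Delta F_a - Delta F_b = {dF:.2e} N")
# -> about -1.2e-12 N at 100 nm and -1.3e-13 N at 300 nm
\end{verbatim}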
Bearing in mind that illumination with laser pulses leads to several additional sources of errors discussed in Ref.~\cite{24}, it is of vital interest to verify the obtained conclusions in a more precise experiment with patterned Si plates. The comparison of the experimental sensitivities presented in Sec.~II with the magnitudes of the difference Casimir forces computed here using different theoretical models demonstrates that the proposed experiment with a patterned semiconductor plate will bring decisive results on the discussed problems in the Lifshitz theory at nonzero temperature. The calculations of the difference Casimir pressure determined in the dynamic mode of the proposed experiment lead to results analogous to those for the difference force. The calculation results using Eq.~(\ref{eq9}) with the same values of parameters as above and the two models of lower conductivity Si are presented in Fig.~6. Here the difference Casimir pressures between an Au plate and a patterned Si plate with the higher densities of charge carriers $\tilde{n}_{1,2}$ (one section of the plate) and lower $n$ (another section of the plate) are shown with solid lines $1a$ and $2a$, respectively, computed under the assumption that Si with the lower $n$ possesses a finite permittivity (\ref{eq12}) at zero frequency. The dashed lines $1b$ and $2b$ are obtained under the assumption that Si with the lower $n$ is described by the permittivity (\ref{eq11}) which goes to infinity when the frequency goes to zero. As is seen in Fig.~6, the difference Casimir pressure with a patterned plate with charge carrier densities $\tilde{n}_1$ and $n$ equals 250\,mPa at a separation $z=100\,$nm [model ($a$) for the low conductivity section of the plate] and the difference in predictions for the two models equals 38.6\,mPa. The proposed measurement of the difference Casimir pressure can reliably discriminate between the solid and dashed lines in Fig.~6, thus providing one more test for the Lifshitz theory at nonzero temperature. Notice that, in a similar way to the force, the differences between the lines $1a-1b$ and $2a-2b$ in Fig.~6 are expressed analytically by taking the zero-frequency contributions in Eq.~(\ref{eq9}): \begin{equation} \Delta P_{a}^{(0)}-\Delta P_{b}^{(0)}= -\frac{k_BT}{8\pi z^3}\left\{\zeta(3)-\mbox{Li}_3 \left[\frac{\varepsilon^{Si}(0)-1}{\varepsilon^{Si}(0)+1} \right]\right\}. \label{eq16} \end{equation} \noindent Calculations using Eq.~(\ref{eq16}) lead to the same differences between the lines $1a-1b$ and $2a-2b$ as were computed numerically in Fig.~6. \section{Proposed experiment on the modulation of the Casimir force through an insulator-metal transition} An exciting possibility for the modulation of the Casimir force due to a change of charge carrier density is offered by semiconductor materials that undergo an insulator-metal transition with the increase of temperature. Such a transition leads to a change of the carrier density of order $10^4$. Although in the literature it is common to speak about an insulator-metal transition, this can be considered as a transformation between two semiconductor phases with lower and higher charge carrier densities $n$ and $\tilde{n}$, respectively. As was shown above, this is sufficient to bring about a large change in the Casimir force. From a fundamental point of view, the modulation of the Casimir force due to the phase transition will offer one more precision test of the role of conductivity and optical properties in the Lifshitz theory of the Casimir force.
This experiment offers some advantages as compared to the difference force measurement with a patterned plate considered in Secs.~II and III. First, because of the large change in the magnitude and bandwidth of the optical properties in a phase transition, the modulation of the Casimir force will be larger. Second, an insulator-metal transition does not require the special fabrication of patterned plates with one section having a high carrier density, which might not be compatible with robust device design. Keeping in mind that the increase of temperature necessary for the phase transition can be induced by laser light, this opens up the possibility of radically new nanomechanical devices using the Casimir force in image detection. The phase transition can also be brought about through electrical heating of the material. In this experiment we propose to measure the change of the Casimir force acting between an Au coated sphere and a vanadium dioxide (VO${}_2$) film deposited on a sapphire substrate, which undergoes the insulator-metal transition with the increase of temperature. It has been known that VO${}_2$ crystals and thin films undergo an abrupt transition from a semiconducting monoclinic phase at room temperature to a metallic tetragonal phase at 68\,${}^{\circ}$C \cite{25,39,40,41,42,43,44}. The phase transition causes the resistivity of the sample to decrease by a factor of $10^4$ from 10\,$\Omega\,$cm to $10^{-3}\,\Omega\,$cm (i.e., the same change as for the two semiconductor half plates in Sec.~II with lower and higher charge carrier densities). In addition, the optical transmission for a wide region of wavelengths extending from $1\,\mu$m to greater than $10\,\mu$m decreases by a factor of 10--100. The schematic of the experimental setup is shown in Fig.~7. In this figure, light from a chopped 980\,nm laser will be used to heat the VO${}_2$ film \cite{39,40}. About 10--100\,mW of power from the 980\,nm laser is required to bring about all-optical switching of VO${}_2$ films. The same procedure as outlined in Sec.~II (the static technique) will be used in the measurement of the modulation of the Casimir force, including the interferometric detection of the cantilever deflection. The schematic of the setup is similar to the one used in Ref.~\cite{24} in the demonstration of optically modulated dispersion forces. An important point is that in Ref.~\cite{24} the absorption of light from a 514\,nm Ar laser led to an increase of the charge carrier density. By contrast, here the wavelength of the laser is selected in such a way that the light only heats the VO${}_2$ film but does not change the number of free charge carriers \cite{39}. As a first step towards studying the role of the insulator-metal transition in the Casimir force, we have recently fabricated thin films of VO${}_2$ on sapphire plates. The preliminary results are shown in Fig.~8. It is observed that we have obtained more than a factor of 10 change in the resistivity of the film. These films were prepared by thermal evaporation of VO${}_2$ powder. While films of appropriate thickness approaching 100\,nm and roughness of about 2\,nm (shown in Fig.~9) can be obtained by this procedure, it is not optimal as it leads to the non-stoichiometric formation of mixed valence states of vanadium oxide (VO${}_x$). In the future, rf magnetron sputtering will be used to make the films \cite{39}.
VO${}_2$ films grown using this technique have been shown to exhibit the $10^4$ change in resistivity and a correspondingly large change in optical reflectivity and spectrum. The aim of the proposed experiment on the influence of the insulator-metal transition on the Casimir force is twofold: to explore applications for the actuation of nanodevices through a modulation of the Casimir force, and to perform fundamental tests of the theory of dispersion forces. To accomplish this, two types of measurements are planned. In the first we plan to demonstrate the modulation of the Casimir force through an optical switching of the insulator-metal transition. This modulation will lead to novel microdevices such as optical and electrical switches, optical modulators, optical filters and IR detectors that can be actuated optically through the absorption of IR radiation. Importantly, such devices can be integrated with Si technology, which is used in the fabrication of microelectromechanical systems \cite{39}. In the second type of measurements, the variable temperature atomic force microscope described in Sec.~II will be used to perform precision measurements of the Casimir force between a gold coated sphere and a VO${}_2$ film. Here, the Casimir force will be measured at different temperatures from room temperature up to 80\,${}^{\circ}$C. This temperature range spans the dielectric (semiconducting) and metallic regions of VO${}_2$. Careful comparison of the experimental data and the theory (see the next section) will be done to understand the role of conductivity and losses in both phases of VO${}_2$. \section{Calculation of the Casimir force in an insulator-metal transition} The Casimir forces acting between a metal coated sphere and the VO${}_2$ film on a sapphire plate before and after the phase transition (i.e., in the insulating and metallic phases or, more exactly, in the semiconductor phases with lower and higher charge carrier densities) are expressed by the Lifshitz formulas in accordance with Eqs.~(\ref{eq1}) and (\ref{eq6}). As above, we label the higher concentration of charge carriers $\tilde{n}$ and the lower concentration $n$. To compute the Casimir force before and after the phase transition one needs the optical properties of VO${}_2$ on a sapphire plate in a wide frequency region. In Ref.~\cite{43} the dielectric permittivity of VO${}_2$ is measured and fitted to an oscillator model, both for bulk VO${}_2$ and for a 100\,nm thick VO${}_2$ film deposited on a bulk sapphire plate, within the frequency region from 0.25\,eV to 5\,eV. This modelling was performed both before and after the phase transition. The typical thickness of the sapphire substrate is about 0.3\,mm, i.e., the same as the thickness of the patterned Si plate in Secs.~II and III. Because of this, when calculating the Casimir force between the gold coated sphere and the VO${}_2$ film on a sapphire substrate, we can use the Lifshitz formula for bulk test bodies (see Sec.~III for details). The application region of the models presented in Ref.~\cite{43} should be extended in order to perform computations of the Casimir force within the separation region from 100 to 300\,nm, where contributions from optical data up to about 10\,eV have to be taken into account. For this purpose, we have supplemented the equations of Ref.~\cite{43} with additional terms taking into account the frequency-dependent electronic transitions at high frequencies \cite{45,46}.
As a result, the effective dielectric permittivity of the VO${}_2$ film on a sapphire substrate before the phase transition (at $T=300\,$K) is given by \begin{equation} \varepsilon_n(i\xi_l)=1+\sum\limits_{i=1}^{7} \frac{s_{n,i}}{1+\frac{\xi_l^2}{\omega_{n,i}^2}+ \Gamma_{n,i}\frac{\xi_l}{\omega_{n,i}}}+ \frac{\varepsilon_{\infty}^{(n)}-1}{1+ \frac{\xi_l^2}{\omega_{\infty}^2}}. \label{eq17} \end{equation} \noindent Here the values of the oscillator frequencies $\omega_{n,i}$, dimensionless relaxation parameters $\Gamma_{n,i}$ and oscillator strengths $s_{n,i}$, taken from Fig.~5 in Ref.~\cite{43}, are presented in Table~I. The constants related to the contribution of high-frequency electronic transitions [the last term on the right-hand side of Eq.~(\ref{eq17})] are $\varepsilon_{\infty}^{(n)}=4.26$ \cite{43} and $\omega_{\infty}=15\,$eV. If we put $\xi_l=0$ in the last term on the right-hand side of Eq.~(\ref{eq17}), this equation is the same as the result in Ref.~\cite{43}. After the phase transition we have a phase with increased charge carrier density $\tilde{n}$. Similar to Eq.~(\ref{eq10}), the effective dielectric permittivity of the VO${}_2$ film on a sapphire substrate can be described by the dielectric permittivity \begin{eqnarray} && \varepsilon_{\tilde{n}}(i\xi_l)=1+ \frac{\omega_{p;\tilde{n}}^2}{\xi_l(\xi_l+\gamma_{\tilde{n}})} \label{eq18} \\ && \phantom{aa}+ \sum\limits_{i=1}^{4} \frac{s_{\tilde{n},i}}{1+\frac{\xi_l^2}{\omega_{\tilde{n},i}^2}+ \Gamma_{\tilde{n},i}\frac{\xi_l}{\omega_{\tilde{n},i}}}+ \frac{\varepsilon_{\infty}^{(\tilde{n})}-1}{1+ \frac{\xi_l^2}{\omega_{\infty}^2}}. \nonumber \end{eqnarray} \noindent Parameters $\omega_{\tilde{n},i}$, $\Gamma_{\tilde{n},i}$ and $s_{\tilde{n},i}$ can be found in Fig.~6 of Ref.~\cite{43} and are listed in Table~II. The other parameters are $\varepsilon_{\infty}^{(\tilde{n})}=3.95,$ $\omega_{p;\tilde{n}}=3.33\,$eV, $\gamma_{\tilde{n}}=0.66\,$eV \cite{43}. Setting $\xi_l=0$ in the last term on the right-hand side of Eq.~(\ref{eq18}) (this term takes high-frequency electronic transitions into account) returns Eq.~(\ref{eq18}) to the original form suggested in Ref.~\cite{43}. Note that the recently suggested model for the dielectric permittivity of VO${}_2$ films \cite{47} is applicable not only before and after a phase transition but also at intermediate temperatures. This model is, however, restricted to a narrower frequency region from 0.73 to 3.1\,eV and uses the simplified description of two oscillators before the phase transition and only one oscillator with nonzero frequency after it. In Fig.~10, the effective dielectric permittivity of the 100\,nm thick VO${}_2$ film on a sapphire substrate before and after the phase transition, as given by Eqs.~(\ref{eq17}) and (\ref{eq18}), is shown by the solid lines 1 and 2, respectively. In the same figure, the dielectric permittivity of Au versus frequency is shown as dots. The vertical line indicates the position of the first Matsubara frequency at $T=340\,$K (i.e., in the region of the phase transition). In Fig.~11 we present the computational results for the Casimir force between the Au coated sphere and the VO${}_2$ film on a sapphire substrate versus separation, obtained by substituting the dielectric permittivity (\ref{eq17}) (VO${}_2$ before the phase transition, solid line 1) and (\ref{eq18}) (VO${}_2$ after the phase transition, solid line 2) into the Lifshitz formula.
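The oscillator representations (\ref{eq17}) and (\ref{eq18}) are straightforward to evaluate; the sketch below (a convenience script only) does so with the parameters of Tables~I and II, with all frequencies in eV. In particular, the static value of Eq.~(\ref{eq17}) printed by the script reproduces the number used in Eq.~(\ref{eq20}) below.
\begin{verbatim}
# Evaluation of the oscillator models (17) and (18) along the imaginary axis.
# All frequencies are in eV; the parameters are those of Tables I and II.
import numpy as np

# Before the phase transition, Eq. (17) (Table I):
w_n = np.array([1.02, 1.30, 1.50, 2.75, 3.49, 3.76, 5.1])
G_n = np.array([0.55, 0.55, 0.50, 0.22, 0.47, 0.38, 0.385])
s_n = np.array([0.79, 0.474, 0.483, 0.536, 1.316, 1.060, 0.99])
eps_inf_n, w_inf = 4.26, 15.0

def eps_before(xi):
    osc = np.sum(s_n / (1.0 + xi**2 / w_n**2 + G_n * xi / w_n))
    return 1.0 + osc + (eps_inf_n - 1.0) / (1.0 + xi**2 / w_inf**2)

# After the phase transition, Eq. (18) (Table II plus the Drude term):
w_m = np.array([0.86, 2.8, 3.48, 4.6])
G_m = np.array([0.95, 0.23, 0.28, 0.34])
s_m = np.array([1.816, 0.972, 1.04, 1.05])
eps_inf_m, wp_m, gam_m = 3.95, 3.33, 0.66

def eps_after(xi):
    drude = wp_m**2 / (xi * (xi + gam_m))
    osc = np.sum(s_m / (1.0 + xi**2 / w_m**2 + G_m * xi / w_m))
    return 1.0 + drude + osc + (eps_inf_m - 1.0) / (1.0 + xi**2 / w_inf**2)

print(eps_before(0.0))                   # static value 4.26 + sum(s_n) = 9.909
print(eps_before(1.0), eps_after(1.0))   # at 1 eV the metallic phase is larger
\end{verbatim}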
As is seen in Fig.~11, after the phase transition the magnitudes of the Casimir force increase due to an increase in the charge carrier density. For a comparison with the proposed experiment on the difference Casimir force from a patterned Si plate, in Fig.~12 (solid line) we plot the difference of the Casimir forces after and before the phase transition, i.e., the difference of lines 2 and 1 in Fig.~11. It is seen that the difference Casimir force from a phase transition changes from 13\,pN at $z=100\,$nm to 1.2\,pN at $z=300\,$nm, i.e., the magnitudes of the difference from the phase transition are greater than those from the patterned Si plate. The difference Casimir force in the insulator-metal phase transition provides us with one more test of the proper modelling of the dielectric permittivity in the Lifshitz theory of dispersion forces. Similar to Sec.~III, we arrive at different results for the difference Casimir force after and before the phase transition if the conductivity of dielectric VO${}_2$ at zero frequency is taken into account in our computations. The shift in the values of the difference Casimir force is completely determined by the change of the zero-frequency term in the Lifshitz formula. By analogy with Eq.~(\ref{eq15}) one obtains \begin{equation} \Delta F_{a}^{(0)}-\Delta F_{b}^{(0)}= -\frac{k_BTR}{8z^2}\left\{\zeta(3)-\mbox{Li}_3\left[ \frac{\varepsilon^{{\rm VO}_{2}}(0)-1}{\varepsilon^{{\rm VO}_2}(0)+1} \right]\right\}, \label{eq19} \end{equation} \noindent where $b$ represents the case when the dc conductivity of an insulating VO${}_2$ is taken into account, and $a$ represents the case when insulating VO${}_2$ is described by the permittivity (\ref{eq17}). From Eq.~(\ref{eq17}) and Table~I one obtains \begin{equation} \varepsilon^{{\rm VO}_{2}}(0)\equiv\varepsilon_n(0)= \varepsilon_{\infty}^{(n)}+\sum\limits_{i=1}^{7}s_{n,i} =9.909. \label{eq20} \end{equation} In Fig.~12 the difference Casimir force between an Au coated sphere and the VO${}_2$ film on a sapphire substrate after and before the phase transition, computed including the dc conductivity of insulating VO${}_2$, is plotted versus separation with the dashed line. The difference between the solid and dashed lines is determined by Eq.~(\ref{eq19}). This difference changes from 1.6\,pN at $z=100\,$nm to 0.2\,pN at $z=300\,$nm. Thus, in the phase transition experiment the predicted discrepancies between the two theoretical approaches to the description of conductivity properties at low frequencies are larger than in the experiment with the patterned semiconductor plate. This will help to experimentally discriminate between the two approaches and to probe deeply the role of the material properties in the Lifshitz theory at nonzero temperature. \section{Conclusions and discussion} In the above we have proposed two experiments on the measurement of the difference Casimir force acting between a metal coated sphere and a semiconductor with different charge carrier densities. One of these experiments is based on the formation of a special patterned Si plate, two sections of which have charge carrier densities differing by several orders of magnitude. The measurement scheme in this experiment is differential, i.e., adapted for the direct measurement of the difference in the Casimir forces between the sphere and each section of the patterned plate. This allows one to obtain high precision within a wide measurement range.
Using the dynamic measurement technique, this experiment also permits the measurement of the difference Casimir pressure between two parallel plates, one of which is coated with gold while the other is patterned and consists of two sections with different charge carrier densities. The other proposed experiment, directed to the same objective, is novel in that it uses the insulator-metal phase transition, brought about by an increase of temperature, in the measurements of the Casimir force. This transition also leads to a change of the charge carrier density by several orders of magnitude while not requiring the formation of special patterned samples. The expected difference in the Casimir forces after and before the phase transition is even larger than in the experiment with the patterned Si plate. Both proposed experiments are motivated by the uncertainties in the application of the theory of dispersion forces at nonzero temperature. As was shown above, different models of the conductivity of semiconductors at low frequencies used in the literature predict variations of the difference Casimir force at the level of 1\,pN. An even greater concern is that the model taking into account the dc conductivity of dielectrics violates the Nernst heat theorem \cite{49a,49b,49c}. We have reported an apparatus developed at UC Riverside that has a force measurement sensitivity at the level of 1\,fN and is well adapted for the systematic investigation of the proposed effects in a wide range of separations. This apparatus includes an interferometer based atomic force microscope operated in high vacuum over a temperature range from 360\,K to 4\,K. The proposed experiments are feasible using the developed techniques and will aid in the resolution of theoretical problems on the application of the Lifshitz theory at nonzero temperature to real materials. Another motivation of the proposed experiments is in the application to nanotechnology. The separations between the adjacent surfaces in micro- and nanoelectromechanical devices are rapidly falling to a region below a micrometer where the Casimir force becomes dominant. Keeping in mind that semiconducting materials are used for micromachines, the detailed investigation of the dependence of the Casimir force on the properties of semiconductors is important. The proposed experiments and related theory clearly demonstrate that it is possible to control the Casimir force with semiconductor surfaces by changing the charge carrier density with doping or excitation. This opens new opportunities, discussed above, for using the Casimir force in both the operation and function of novel nanomechanical devices. In addition to the previously performed experiments on the Casimir force (see review \cite{4} and Refs.~\cite{5,6,7,8,20,22,23,24}), a number of new experiments have recently been proposed in the literature. Thus, Ref.~\cite{48} proposes to measure the Casimir torque between two parallel birefringent plates with in-plane optical anisotropy separated by either a vacuum gap or ethanol. In Ref.~\cite{48a} it is suggested to measure the vacuum torque between corrugated mirrors. References \cite{49,50,51} propose measurements of the Casimir force between metallic surfaces, such as a spherical lens and a plate, a cylinder and a plate, or two parallel plates, at large separations of a few micrometers. These experiments are aimed at resolving the theoretical problems arising in the Lifshitz theory when it is applied to real metals.
In Refs.~\cite{52,52a} a proposal to measure the influence of the Casimir energy on the value of the critical magnetic field in the transition from a superconductor to a normal state has been made. References~\cite{53,53a} proposed the measurement of the dynamic Casimir effect resulting in the creation of photons. The experiments proposed here on the difference Casimir force through the use of patterned semiconductor samples or using the insulator-metal phase transition indicate important new promising directions for future investigations of the Casimir effect. \section*{Acknowledgments} G.L.K. and V.M.M. are grateful to the Department of Physics and Astronomy of the University of California (Riverside) for its kind hospitality. The work on the difference Casimir force with patterned semiconductor samples and the insulator-metal phase transition was supported by the DOE Grant No.~DE-FG02-04ER46131. The development of the interferometer based low temperature AFM was supported by the NSF Grant No.~PHY0355092. R.C.-G. is grateful for the financial support of UCMEXUS and CONACYT. {\bf Figures} \\[2mm] {\bf Fig.~1.} {(Color online) Schematic diagram of the experimental setup for the measurement of the difference Casimir force. The patterned Si plate with two sections of different dopant densities is mounted on a piezo below the Au coated sphere attached to a cantilever of an atomic force microscope. The piezo oscillates in the horizontal direction, moving different regions of the plate below the sphere and causing the cantilever to flex in response to the Casimir force. } {\bf Fig.~2.} {(Color online) Steps in the fabrication of the Si plate with patterned doping (see text for more details). } {\bf Fig.~3.} {(Color online) Image of the interferometer based variable temperature atomic force microscope with a force sensitivity down to 1\,fN fabricated at UC Riverside. The critical components are labeled. } {\bf Fig.~4.} { The dielectric permittivities of Si along the imaginary frequency axis for samples with high concentrations of charge carriers $\tilde{n}_1$ and $\tilde{n}_2$ are shown by the solid lines 1 and 2, respectively. For the sample with a low concentration of charge carriers $n$ the permittivity versus frequency is shown by the dashed lines $a$ and $b$, depending on whether the static permittivity is finite or infinitely large. The permittivity of Au is indicated by the dotted line. } {\bf Fig.~5.} { The difference Casimir forces versus separation in the case when the higher concentration of charge carriers is equal to $\tilde{n}_1$ and the sample with a lower concentration, $n$, is described by a finite or infinitely large static permittivity are shown by the solid line 1$a$ and dashed line 1$b$, respectively. The analogous difference forces when the higher concentration of charge carriers is equal to $\tilde{n}_2$ are shown by the solid line 2$a$ and dashed line 2$b$. } {\bf Fig.~6.} { The difference Casimir pressures versus separation in the case when the higher concentration of charge carriers is equal to $\tilde{n}_1$ and the sample with a lower concentration, $n$, is described by a finite or infinitely large static permittivity are shown by the solid line 1$a$ and dashed line 1$b$, respectively. The analogous difference pressures when the higher concentration of charge carriers is equal to $\tilde{n}_2$ are shown by the solid line 2$a$ and dashed line 2$b$.
} {\bf Fig.~7.} { Schematic of the experimental setup for the observation of the modulation of the Casimir force in an insulator-metal phase transition. Light from a chopped 980\,nm laser heats a VO${}_2$ film, leading to a phase transition to a state with a higher concentration of charge carriers (the sapphire substrate is not shown). Cooling in between pulses causes the transition to a state with a lower concentration of carriers. The cantilever of an atomic force microscope flexes in response to the difference Casimir force. } {\bf Fig.~8.} {(Color online) Preliminary results on the resistance of a VO${}_2$ film grown at UC Riverside as a function of temperature are shown as black squares (heating) and dots (cooling). } {\bf Fig.~9.} {(Color online) Morphology of the same VO${}_2$ film as in Fig.~8, grown by thermal evaporation. The heights of the roughness peaks are about 2\,nm. } {\bf Fig.~10.} { The effective dielectric permittivities of the VO${}_2$ film on a sapphire substrate along the imaginary frequency axis before and after the phase transition are shown by the solid lines 1 and 2, respectively. The permittivity of Au is indicated by the dotted line. } {\bf Fig.~11.} { The Casimir forces between an Au coated sphere and the VO${}_2$ film on a sapphire substrate versus separation before and after the phase transition are shown by the solid lines 1 and 2, respectively. } {\bf Fig.~12.} { The difference of the Casimir forces after and before the phase transition versus separation computed using a finite static dielectric permittivity (solid line) and taking into account the dc conductivity of VO${}_2$ in the dielectric state (dashed line). } \begingroup \squeezetable \begin{table} \caption{Values of the oscillator resonant frequencies $\omega_{n,i}$, dimensionless relaxation parameters $\Gamma_{n,i}$ and oscillator strengths $s_{n,i}$ of the VO${}_2$ film on a sapphire substrate before the phase transition. } \begin{ruledtabular} \begin{tabular}{llll} i & $\omega_{n,i}\,$(eV) & $\Gamma_{n,i}$ & $s_{n,i}$ \\ \hline 1 & 1.02 & 0.55 & 0.79 \\ 2 & 1.30 & 0.55 & 0.474 \\ 3 & 1.50 & 0.50 & 0.483 \\ 4 & 2.75 & 0.22 & 0.536 \\ 5 & 3.49 & 0.47 & 1.316 \\ 6 & 3.76 & 0.38 & 1.060 \\ 7 & 5.1 & 0.385 & 0.99 \end{tabular} \end{ruledtabular} \end{table} \endgroup \begingroup \squeezetable \begin{table} \caption{Values of the oscillator resonant frequencies $\omega_{\tilde{n},i}$, dimensionless relaxation parameters $\Gamma_{\tilde{n},i}$ and oscillator strengths $s_{\tilde{n},i}$ of the VO${}_2$ film on a sapphire substrate after the phase transition.} \begin{ruledtabular} \begin{tabular}{llll} i & $\omega_{\tilde{n},i}\,$(eV) & $\Gamma_{\tilde{n},i}$ & $s_{\tilde{n},i}$ \\ \hline 1 & 0.86 & 0.95 & 1.816 \\ 2 & 2.8 & 0.23 & 0.972 \\ 3 & 3.48 & 0.28 & 1.04 \\ 4 & 4.6 & 0.34 & 1.05 \end{tabular} \end{ruledtabular} \end{table} \endgroup \end{document}
\begin{document} \title{A boosted DC algorithm for non-differentiable DC components with non-monotone line search\thanks{Originally submitted at 2021-11-02, 14:49. Current version was submitted to the editors at 2022-06-13, 13:07. {The first author was supported in part by CNPq grant 304666/2021-1. The second author was supported in part by CAPES. The third author was supported in part by CNPq grant 424169/2018-5.}}} \author{O. P. Ferreira \thanks{Instituto de Matem\'atica e Estat\'istica, Universidade Federal de Goi\'as, CEP 74001-970 - Goi\^ania, GO, Brazil, E-mail: {\tt [email protected]}}, E. M. Santos \thanks{Instituto Federal de Educa\c{c}\~{a}o, Ci\^{e}ncia e Tecnologia do Maranh\~{a}o, A\c{c}ail\^{a}ndia, MA, Brazil E-mail: {\tt [email protected]}.}, J. C. O. Souza \thanks{Department of Mathematics, Federal University of Piau\'{i}, Teresina, PI, Brazil, E-mail: {\tt [email protected]} } } \maketitle \begin{abstract} We introduce a new approach to apply the boosted difference of convex functions algorithm (BDCA) for solving non-convex and non-differentiable problems involving the difference of two convex functions (DC functions). Supposing that the first DC component is differentiable and the second one is possibly non-differentiable, the main idea of BDCA is to use the point computed by the DC algorithm (DCA) to define a descent direction and to perform a monotone line search that further decreases the objective function value, thereby accelerating the convergence of the DCA. However, if the first DC component is non-differentiable, then the direction computed by BDCA can be an ascent direction and a monotone line search cannot be performed. Our approach uses a non-monotone line search in the BDCA (nmBDCA) to enable a possible growth in the objective function values controlled by a parameter. Under suitable assumptions, we show that any cluster point of the sequence generated by the nmBDCA is a critical point of the problem under consideration and provide some iteration-complexity bounds. Furthermore, if the first DC component is differentiable, we present different iteration-complexity bounds and prove the full convergence of the sequence under the Kurdyka-\L{}ojasiewicz property of the objective function. Some numerical experiments show that the nmBDCA outperforms the DCA, much like its monotone version. \end{abstract} \begin{keywords} {DC function}, boosted difference of convex functions algorithm, DC algorithm, non-monotone line search, Kurdyka-\L{}ojasiewicz property. \end{keywords} \begin{AMS} {90C26}, 65K05, 65K10, 47N10 \end{AMS} \section{Introduction} In this paper, we are interested in solving the following non-convex and non-differentiable DC optimization problem: \begin{equation}\label{Pr:DCproblem} \begin{array}{c} \min \phi(x):=g(x)-h(x), \qquad \mbox{s.t. } ~x\in \mathbb{R}^{n}, \end{array} \end{equation} where $g, h:\mathbb{R}^n \to \mathbb{R} $ are convex functions, possibly non-differentiable.
DC programming has been developed and studied in the last decades and successfully applied in different fields including but not limited to image processing \cite{LouZengOsherXin2015}, compressed sensing \cite{YinLouQi2015}, location problems \cite{AnNamYen2916, Brimberg1995, CruzNetoLopesSantosSouza2019}, sparse optimization problems \cite{GotohTakeda2018}, the minimum sum-of-squares clustering problem \cite{ARAGON2019, CuongYaoYen2020, OrdiBagirov2015}, the bilevel hierarchical clustering problem \cite{NamGeremewReynoldsTran2018}, clusterwise linear regression \cite{BagirovUgon2018}, the multicast network design problem \cite{GeremewNamSemenovBoginski2018} and the multidimensional scaling problem \cite{LeTao2001, ARAGON2019, BeckHallak2020}. To the best of our knowledge, DCA was the first algorithm directly designed to solve \eqref{Pr:DCproblem}; see \cite{TaoLe1997,Pham1986}. Since then, several variants of DCA have arisen and its theoretical and practical properties have been investigated over the years, resulting in a wide literature on the subject; see \cite{OliveiraABC,DCAFirst2018}. Algorithmic aspects of DC programming have received significant attention lately, including subgradient-type methods (see \cite{BeckHallak2020, KhamaruWainwright2019}), proximal-subgradient \cite{CruzNetoEtAl2018, Moudafi2006, SOUZA3016, SunSampaio2003}, proximal bundle \cite{Welington2019}, double bundle \cite{KaisaBagirov2018}, codifferential \cite{BagirovUgon2011} and inertial methods \cite{WelingtonTcheou2019}. Nowadays, DC programming plays an important role in high-dimensional non-convex programming, and DCA (and its variants) is commonly used due to its simplicity, inexpensiveness and efficiency. Recently, \cite{ARAGON2017, ARAGON2019} proposed a boosted DC algorithm (BDCA) for solving \eqref{Pr:DCproblem} which has the property of accelerating the convergence of DCA. In \cite{ARAGON2017}, the convergence of BDCA is proved when both functions $g$ and $h$ are differentiable, and the non-differentiable case is considered in \cite{ARAGON2019} but still supposing that $g$ is differentiable and $h$ is possibly non-differentiable. Although DCA is a descent method, the main idea of BDCA is to define a descent direction using the point computed by DCA and a line search to take a longer step than the DCA, obtaining in this way a larger decrease of the objective value per iteration. In addition to accelerating the convergence of DCA, it has been observed that BDCA escapes from bad local solutions thanks to the line search, and hence, BDCA can be used both to accelerate DCA and to provide better solutions. The differentiability of the DC component $g$ is needed to guarantee a descent direction from the point computed by DCA; otherwise such a direction can be an ascent direction; see \cite[Remark~1]{ARAGON2017} and \cite[Example~3.4]{ARAGON2019}. The aim of this paper is to provide a version of BDCA which can still be applied if both DC components are non-differentiable. The key idea is to use a non-monotone line search allowing some growth in the objective function values, controlled by a parameter. We study the convergence of the non-monotone BDCA (nmBDCA) as well as its iteration-complexity bounds. Therefore, nmBDCA enlarges the applicability of BDCA to a broader class of non-smooth functions while keeping its efficiency and simplicity.
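To fix ideas, the sketch below contrasts a plain DCA step with the kind of boosted step performed by nmBDCA on a toy problem in which both DC components are non-smooth, namely $\phi(x)=\frac12\|x\|^2+\|x\|_1-2\|x-a\|_1$. It is only a schematic illustration under assumed ingredients: the precise non-monotone line search and the rule for the parameters $\nu_k$ that control the allowed growth are those specified later in the paper, whereas the acceptance test used here, $\phi(y+\lambda d)\le\phi(y)-\rho\lambda^2\|d\|^2+\nu_k$ with summable $\nu_k$, is a stand-in chosen for illustration.
\begin{verbatim}
# Schematic DCA / nmBDCA iteration on a toy DC problem (illustration only).
# phi(x) = g(x) - h(x), g(x) = 0.5*||x||^2 + ||x||_1, h(x) = 2*||x - a||_1,
# so both DC components are convex and non-differentiable.
import numpy as np

a = np.array([1.0, -2.0])

def phi(x):
    return 0.5 * np.dot(x, x) + np.abs(x).sum() - 2.0 * np.abs(x - a).sum()

def subgrad_h(x):
    return 2.0 * np.sign(x - a)                 # a subgradient of h at x

def dca_point(v):
    # argmin_x g(x) - <v, x> has a closed form here (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - 1.0, 0.0)

def nmbdca_step(x, nu, rho=0.5, beta=0.5, lam_min=1e-8):
    y = dca_point(subgrad_h(x))                 # plain DCA would return y
    d = y - x
    if np.linalg.norm(d) < 1e-12:
        return y
    lam = 1.0
    # Non-monotone Armijo-type search: growth of up to nu above phi(y) allowed.
    while phi(y + lam * d) > phi(y) - rho * lam**2 * np.dot(d, d) + nu:
        lam *= beta
        if lam < lam_min:                       # give up boosting, keep DCA point
            return y
    return y + lam * d

x = np.array([3.0, 0.5])
for k in range(20):
    x = nmbdca_step(x, nu=1.0 / (k + 1) ** 2)   # assumed summable growth terms
print(x, phi(x))   # approaches the global minimizer (-1, 1), where phi = -7
\end{verbatim}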
The concept of non-monotone line search, which we use here as a synonym for inexact line search, was first proposed in \cite{Grippo1986}, and later a new non-monotone search was considered in \cite{ZhangHager2004}. In \cite{SachsSachs2011}, an interesting general framework for non-monotone line searches was proposed, and more recently some variants have been considered; see for instance \cite{GrapigliaSachs2017, GrapigliaSachs2020}. Furthermore, we prove the global convergence under the Kurdyka-\L{}ojasiewicz property as well as different iteration-complexity bounds for the case where the DC component $g$ is differentiable. We present some numerical experiments involving academic test functions to show the efficiency of the method, comparing its performance with that of DCA. The rest of this paper is organized as follows. In Section~\ref{sec2}, we recall some definitions and preliminary results used throughout the paper. The methods DCA, BDCA and nmBDCA are presented in Section~\ref{sec3}. In Section~\ref{sec4}, we present the convergence analysis and iteration-complexity analysis of nmBDCA for a possibly non-differentiable $g$. In Section~\ref{sec5}, we establish some iteration-complexity bounds for the sequence generated by nmBDCA under the assumption that the function $g$ is differentiable, and the full convergence of the sequence is also established under the Kurdyka-\L{}ojasiewicz property. In Section~\ref{Sec6}, we present some numerical experiments to illustrate the performance of the method. The last section contains some final remarks and future research directions. \section{Preliminaries} \label{sec2} In this section we present some notations, definitions and results that will be used throughout the paper; they can be found in \cite{Amir, clarke1983optimization, Lemarechal1993}. \begin{definition}[{\cite[Definition 1.1.1, p. 144, and Proposition 1.1.2, p. 145]{Lemarechal1993}}] A function $f\colon\mathbb{R}^{n}\to \mathbb{R} $ is said to be convex if $f(\lambda x + (1-\lambda)y)\leq \lambda f(x) + (1-\lambda) f(y)$, for all $x,y\in {\mathbb{R}^{n}}$ and $\lambda \in\, ]0,1[$. Moreover, $f$ is said to be strongly convex with modulus $\sigma > 0 $ if $f(\lambda x + (1-\lambda)y)\leq \lambda f(x) + (1-\lambda) f(y)-\frac{\sigma}{2}\lambda(1-\lambda)\|x-y\|^2$, for all $x,y\in {\mathbb{R}^{n}}$ and $\lambda \in \, ]0,1[$. \end{definition} Note that the second inequality above with $\sigma=0$ reduces to the definition of convexity. \begin{definition}[{\cite[p. 25]{clarke1983optimization}}] We say that $f\colon\mathbb{R}^{n}\to\mathbb{R}$ is locally Lipschitz if, for all $x\in \mathbb{R}^{n}$, there exist a constant $K_{x}>0$ and a neighborhood $U_{x}$ of $x$ such that $|f(u)-f(y)|\leq K_{x}\|u-y\|$, for all $ u,y\in U_{x}.$ \end{definition} If $f\colon\mathbb{R}^{n}\to \mathbb{R}$ is convex, then $f$ is locally Lipschitz; see \cite[p. 34]{clarke1983optimization}. \begin{definition}[{\cite[p. 27]{clarke1983optimization}}] Let $f\colon\mathbb{R}^{n}\to\mathbb{R}$ be a locally Lipschitz function.
The Clarke subdifferential of $f$ at $x\in \mathbb{R}^{n}$ is given by $\partial _{c} f(x)=\{v \in \mathbb{R}^{n}\:|\: f^{\circ }(x;d)\geq \langle v,d \rangle, ~ \forall d \in \mathbb{R}^{n} \},$ where $f^{\circ }(x;d)$ is the generalized directional derivative of $f$ at $x$ in the direction $d$ given by $ f^{\circ }(x;d)= \limsup _{ u\rightarrow x, t\downarrow 0} (f(u+td)-f(u))/t.$ \end{definition} If $f$ is convex, then $\partial _{c}f(x)$ coincides with the subdifferential $\partial f(x)$ in the sense of convex analysis and $f^{\circ }(x;d)$ coincides with the usual directional derivative $f'(x;d)$; see {\cite[p. 36]{clarke1983optimization}}. \begin{theorem}[{\cite[Proposition 2.1.2, p. 27]{clarke1983optimization}}] Let $f\colon\mathbb{R}^{n}\to\mathbb{R}$ be a locally Lipschitz function. Then, for all $x\in \mathbb{R}^{n}$, $\partial _{c}f(x)$ is a non-empty, convex, compact subset of $\mathbb{R}^{n}$ and $\|v\|\leq K_{x},$ for all $v\in \partial _{c}f(x)$, where $K_{x}>0$ is the Lipschitz constant of $f$ around $x$. \end{theorem} \begin{proposition} \label{cont_subdif} Let $f\colon\mathbb{R}^{n}\rightarrow \mathbb{R}$ be convex and $(x^{k})_{k\in\mathbb{N}}$ a sequence such that $\lim _{k\rightarrow\infty}x^{k}=x^{*}$. If $(y^{k})_{k\in\mathbb{N}}$ is a sequence such that $y^{k}\in \partial f(x^{k})$ for every $k\in \mathbb{N}$, then $(y^{k})_{k\in\mathbb{N}}$ is bounded and its cluster points belong to $\partial f(x^{*}).$ \end{proposition} \begin{theorem}[{\cite[Theorem 5.25, p. 122 and Corollary 3.68, p. 76]{Amir}}] Let $f\colon\mathbb{R}^{n}\to \mathbb{R} $ be a strongly convex function. Then, $f$ has a unique minimizer $x^{*}\in \mathbb{R}^{n}$, and $0\in \partial f(x^{*})$. \end{theorem} \begin{lemma}[{\cite[Lemma 5.20, p. 119]{Amir}}] Let $f\colon\mathbb{R}^{n}\to \mathbb{R} $ be a strongly convex function with modulus $\sigma>0$, and let $\bar{f}:\mathbb{R}^{n}\to \mathbb{R}$ be convex. Then $f + \bar{f}$ is a strongly convex function with modulus $\sigma>0$. \end{lemma} \begin{theorem}[{\cite[Theorem 5.24, p. 119]{Amir}}] \label{teo2} Let $f\colon\mathbb{R}^{n}\to \mathbb{R}$ be a convex function. Then, for a given $\sigma> 0$, the following statements are equivalent: \begin{enumerate}[label={({\roman*})}, ref={(\roman*)}] \item $f$ is a strongly convex function with modulus $\sigma> 0$. \item \label{it:teo2:fct-geq} $f(y)\geq f(x) + \langle v, y-x \rangle + \frac{\sigma}{2} \| y-x\|^{2}$, for all $x,y\in \mathbb{R} ^{n} $ and all $v\in \partial f(x)$. \item \label{it:teo2:inner-geq} $\langle w-v,x-y \rangle \geq \sigma \| y-x\|^{2}$, for all $x,y\in \mathbb{R} ^{n}$, all $w\in \partial f(x)$ and all $v\in \partial f(y).$ \end{enumerate} \end{theorem} \begin{definition}[{\cite[p. 107]{Amir}}]\label{def.lipschtz} A differentiable function $f\colon\mathbb{R}^n \to \mathbb{R} $ has a Lipschitz continuous gradient with constant $L >0$ whenever $\|\nabla f(x) -\nabla f(y)\|\leq L\|x-y\|$, for all $x,y \in \mathbb{R}^{n}$. \end{definition} \begin{lemma}[Descent lemma {\cite[Lemma 5.7, p. 109]{Amir}}]\label{eq:IneqLip} Assume that $f$ satisfies Definition~\ref{def.lipschtz}. Then, for all $x, d\in \mathbb{R}^n$ and all $\lambda \in \mathbb{R}$, there holds $f\left(x+ \lambda d\right) \leq f(x) +\lambda \left\langle \nabla f(x), d \right\rangle + L\lambda^2 \|d\|^2/2$. \end{lemma} \begin{theorem}[{\cite[p.
38-39]{clarke1983optimization}}]\label{subdif_DC} Let $g:\mathbb{R}^n \to \mathbb{R} $ be a locally Lipschitz and differentiable function and $h:\mathbb{R}^n \to \mathbb{R}$ be a convex function. Then, for every $x\in \mathbb{R}^{n},$ we have $\partial _{c}(g-h)(x) =\{\nabla g(x)\}-\partial h(x)$ \end{theorem} \section{Non-monotone BDCA} \label{sec3} In this section we introduce the non-monotone boosted DC algorithm (nmBDCA) to solve \eqref{Pr:DCproblem}. Throughout the paper we need the following two assumptions: \begin{enumerate}[label={(\textbf{H\arabic*})}, ref={(H\arabic*)}] \item\label{it:ghstronglyconvex} $g,h:\mathbb{R}^{n}\to \mathbb{R}$ are both strongly convex functions with modulus $\sigma>0$; \item\label{it:phistarinf} $\phi ^{*}:=\inf _{x\in \mathbb{R}^n} \{ \phi (x)=g(x)-h(x) \}>-\infty .$ \end{enumerate} Before proceeding with our study let us first discuss the assumptions \ref{it:ghstronglyconvex} and \ref{it:phistarinf} in next remark. \begin{remark}\label{infinitydecomp} We first note that \ref{it:ghstronglyconvex} is not restrictive. Indeed, given two convex functions $g$ and $h$ we can add to both a strongly convex term $({\sigma}/{2})\| x \|^{2}$ to obtain $\phi(x):=(g(x)+{\sigma}/{2}\| x \|^{2}) - (h(x)+{\sigma}/{2}\| x \|^{2})$. Hence, $\phi$ is rewritten as a difference of two strongly convex functions with modulus $\sigma>0$. Assumption \ref{it:phistarinf} is usual in the context of DC programming, see e.g. \cite{ARAGON2017,ARAGON2019,CruzNetoEtAl2018}. \end{remark} Let us present the conceptual statement of BDCA, but first we recall the DCA given in Algorithm~\ref{Alg:DCA}. \begin{algorithm}[h] \caption{The DC Algorithm (DCA)}\label{Alg:DCA} \begin{algorithmic}[1] \STATE { Choose an initial point $x^0\in \mathbb{R}^{n}$. Set $k=0$.} \STATE{Choose $w^{k}\in\partial h(x^{k})$ and compute $y^{k}$ the solution of the following problem \begin{equation} \label{eq:DCAS} \min _{x\in \mathbb{R}^{n}} g(x)-\left \langle w^{k},x-x^{k} \right\rangle. \end{equation}} \STATE{If $y^{k}=x^{k}$, then STOP and return $x^{k}$. Otherwise, set $x^{k+1}:=y^{k}$, $k \leftarrow k+1$ and go to Step~2.} \end{algorithmic} \end{algorithm} Note that, in the DCA, if $d^k\neq 0$, then the next iterate is $x^{k+1} = y^k$. In this case, under the assumption~\ref{it:ghstronglyconvex}, it can be proved that $\phi (y^{k})< \phi (x^{k})-\sigma \|d^k\| ^{2}$, see for example \cite[Proposition~3.1]{ARAGON2019}. The conceptual monotone BDCA is given in Algorithm~\ref{Alg:BDCAM}. \begin{algorithm}[h] \caption{The monotone boosted DC Algorithm (BDCA)}\label{Alg:BDCAM} \begin{algorithmic}[1] \STATE {Fix $\lambda _{-1}>0$, $\rho>0$ and $\zeta \in (0,1)$. Choose an initial point $x^0\in \mathbb{R}^{n}$. Set $k=0$.} \STATE{Select $w^{k}\in\partial h(x^{k})$ and compute $y^{k}$ the solution of the following problem \begin{equation} \label{eq:BDCAMu} \min _{x\in \mathbb{R}^{n}} g(x)-\left \langle w^{k},x-x^{k} \right\rangle. \end{equation}} \STATE{Set $ d^{k}:=y^{k}-x^{k}$. If $d^{k} =0$, then STOP and return the point $x^{k}$. Otherwise, set $\lambda _{k}:= \zeta^{j_k}\lambda_{k-1}$, where \begin{equation} \label{eq:BDCAjku} j_k:=\min \big\{j\in {\mathbb N}: ~\phi( y^{k}+\zeta^{j} \lambda _{k-1}d^{k})\leq \phi (y^{k})-\rho \left(\zeta^{j}{\lambda _{k-1}}\right)^{2}\| d^{k}\| ^{2}\big\}. 
\end{equation}} \STATE{Set $x^{k+1}:=y^{k}+\lambda _{k}d^{k}$; set $k \leftarrow k+1$ and go to Step~2.} \end{algorithmic} \end{algorithm} It is known that a necessary condition for a point $x$ to be a local minimizer of $\phi$ is that $\partial h(x)\subseteq \partial g(x)$; see \cite{Toland}. In this case, $x$ is called inf-stationary. Such a condition is not easy to verify and hence a relaxed form of inf-stationarity has been considered in the DC literature. \begin{definition} A point $x^{*}\in \mathbb{R}^{n}$ is a critical point of $\phi(x)=g(x)-h(x)$ in \eqref{Pr:DCproblem} if $$\partial g(x^{*}) \cap \partial h(x^{*}) \neq~\emptyset.$$ \end{definition} From subdifferential calculus, we have that $\partial_c \phi(x)\subseteq \partial g(x)-\partial h(x)$; see \cite{clarke1983optimization}. Thus, criticality is a weaker condition than Clarke stationarity, i.e., $0\in \partial _{c} \phi(x)$. Reference \cite{KaisaBagirov2018} pointed out some interesting relationships between inf-stationary, Clarke stationary and critical points. Namely, inf-stationarity implies Clarke stationarity, and Clarke stationarity implies criticality. However, the converses of these implications do not hold without some extra assumptions. In other words, it is possible that a critical point is neither a local optimum nor a saddle point of the objective $\phi$. Therefore, the quality of the solution found by an algorithm is something that must be discussed. \subsection{The algorithm} Next, we formally introduce our non-monotone version of BDCA to solve~\eqref{Pr:DCproblem}. \begin{algorithm}[h] \caption{Non-monotone boosted DC Algorithm (nmBDCA)} \label{Alg:ASSPM} \begin{algorithmic}[1] \STATE {Fix $\lambda _{-1}>0$, $\rho>0$ and $\zeta \in (0,1)$. Choose an initial point $x^0\in \mathbb{R}^{n}$. Set $k=0$.} \STATE{Select $w^{k}\in\partial h(x^{k})$ and compute $y^{k}$ the solution of the following problem \begin{equation} \label{eq:ASSPMu} \min _{x\in \mathbb{R}^{n}} \psi _{k}(x):= g(x)-\left \langle w^{k},x-x^{k} \right\rangle. \end{equation}} \STATE{Set $ d^{k}:=y^{k}-x^{k}$. If $d^{k} =0$, then STOP and return $x^{k}$. Otherwise, take $\nu _{k}\in \mathbb{R}_{+}$ (to be specified later) and set $\lambda _{k}:= \zeta^{j_k}\lambda_{k-1}$, where \begin{equation} \label{eq:jku} j_k:=\min \big\{j\in {\mathbb N}: ~\phi( y^{k}+\zeta^{j} \lambda _{k-1}d^{k})\leq \phi (y^{k})-\rho \left(\zeta^{j}{\lambda _{k-1}}\right)^{2}\| d^{k}\| ^{2} + \nu _{k}\big\}. \end{equation}} \STATE{Set $x^{k+1}:=y^{k}+\lambda _{k}d^{k}$; set $k \leftarrow k+1$ and go to Step~2.} \end{algorithmic} \end{algorithm} From now on, we denote by $(x^k)_{k\in\mathbb{N}}$ the sequence generated by Algorithm~\ref{Alg:ASSPM}, which is a non-monotone version of BDCA for solving \eqref{Pr:DCproblem} with both DC components possibly non-differentiable. It is worth mentioning that we will study the convergence of Algorithm~\ref{Alg:ASSPM} both in the case where $g$ is differentiable and in the case where it is non-differentiable. If $g$ is non-differentiable, we will suppose that $\nu_k>0$, for all $k\in\mathbb{N}$, and hence we will extend the results of BDCA~\cite{ARAGON2017,ARAGON2019} to the non-differentiable setting. If $g$ is differentiable, we will suppose that $\nu_k\geq 0$, for all $k\in\mathbb{N}$. In this case, if $\nu _{k}= 0$, for all $k\in\mathbb{N}$, then the non-monotone line search \eqref{eq:jku} coincides with the monotone line search \eqref{eq:BDCAjku}. Otherwise, if $\nu _{k}> 0$, for all $k\in\mathbb{N}$, then nmBDCA can be viewed as an inexact version of BDCA.
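For an operational view of Algorithm~\ref{Alg:ASSPM}, the following Python sketch mirrors Steps 2--4 above. It is an illustrative sketch only: the callables \texttt{phi}, \texttt{subgrad\_h}, \texttt{solve\_subproblem} and \texttt{nu\_seq} are hypothetical placeholders, not part of the formal statement of the method, standing for the objective $\phi$, a subgradient oracle for $h$, a solver of the strongly convex subproblem \eqref{eq:ASSPMu}, and a rule producing the parameters $\nu_{k}$, e.g. one of the strategies \ref{it:deltamin}--\ref{it:nubound} discussed below.
\begin{verbatim}
import numpy as np

def nmBDCA(x0, phi, subgrad_h, solve_subproblem, nu_seq,
           lam_init=1.0, rho=0.1, zeta=0.5, tol=1e-8, max_iter=500):
    # Hypothetical helper signatures (illustrative assumptions):
    #   phi(x)                 -> objective value g(x) - h(x)
    #   subgrad_h(x)           -> some w in the subdifferential of h at x
    #   solve_subproblem(w, x) -> unique minimizer y of g(z) - <w, z - x>
    #   nu_seq(k, d)           -> nu_k >= 0, e.g. omega*||d||^2/(k + 1)
    x, lam = np.asarray(x0, dtype=float), lam_init
    for k in range(max_iter):
        w = subgrad_h(x)                 # Step 2: w^k in the subdiff. of h
        y = solve_subproblem(w, x)       # Step 2: DCA point y^k
        d = y - x                        # Step 3: direction d^k
        if np.linalg.norm(d) <= tol:     # d^k = 0: x^k is critical, stop
            break
        nu = nu_seq(k, d)
        # Step 3: non-monotone backtracking; for nu > 0 it terminates even
        # if d^k is an ascent direction at y^k (well-definedness is proved later).
        while phi(y + lam * d) > phi(y) - rho * lam**2 * (d @ d) + nu:
            lam *= zeta
        x = y + lam * d                  # Step 4: x^{k+1} = y^k + lam_k d^k
    return x
\end{verbatim}
Note that, exactly as in Step~3 of the algorithm, the trial step at iteration $k$ starts from the previously accepted value $\lambda_{k-1}$ and is only reduced by the factor $\zeta$.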
Before proceeding, recall that \eqref{eq:ASSPMu} always has a unique solution $y^{k}$, which is characterized by \begin{equation}\label{eq:charyk} w^{k}\in \partial g (y^{k}), \qquad \forall k\in \mathbb{N}. \end{equation} It is remarked in \cite[Remark 1]{ARAGON2017} (differentiable case) and \cite[Example 3.4]{ARAGON2019} (non-differentiable case) that the differentiability of the DC component $g$ is necessary to apply the boosted technique proposed by the authors. The next example shows that the search direction $d^k\neq 0$ used by Algorithm~\ref{Alg:ASSPM} can be an ascent direction at the point $y^k$. Consequently, the line search in the usual BDCA proposed in \cite{ARAGON2017,ARAGON2019} cannot be performed. However, the non-monotone line search proposed in Algorithm~\ref{Alg:ASSPM} overcomes this drawback, as we will illustrate in the sequel. \begin{example}{\cite[Example 3.4]{ARAGON2019}}\label{ExampleAlgorithm} Consider the problem \eqref{Pr:DCproblem}, where the functions $g$ and $h$ are given, respectively, by \begin{equation*} g(x_1,x_2)=-\frac{5}{2}x_1+x_1^2+x_2^2+|x_1|+|x_2|, \qquad h(x_1,x_2)=\frac{1}{2}(x_1^2+x_2^2). \end{equation*} The function $\phi(x_1,x_2) = g(x_1,x_2)-h(x_1,x_2)=\frac{1}{2}(x_1^2+x_2^2)+|x_1|+|x_2|-\frac{5}{2}x_1$ has only one critical point (the global minimum) at $x^*=(1.5, 0)$. Clearly, $g$ is a non-differentiable function. Some calculations show that, letting $x^0=(\frac{1}{2},1)$, we have that $w^0=(\frac{1}{2},1)$, $y^0=(1,0)$ is the solution of \eqref{eq:ASSPMu} and $d^0=(\frac{1}{2}, -1)$. We can check that the directional derivative of $\phi$ at $y^0$ in the direction of $d^0$ is $\phi'(y^0; d^0)=\frac{3}{4}$. Thus, $d^0$ is not a descent direction for $\phi$ at $y^0$. Indeed, due to $\phi(y^0)=-1$ and $\phi(y^0+\lambda d^0) =-1+\frac{3}{4}\lambda+\frac{5}{8}\lambda^2$, we conclude that $\phi(y^0+\lambda d^0)>\phi(y^0)$, for all $\lambda>0$. Hence, a usual monotone line search cannot be performed. On the other hand, \begin{equation*} \phi(y^0+\lambda d^0)-\phi(y^0)+\rho \lambda ^{2}\| d^{0}\| ^{2}= \frac{3}{4}\lambda+\frac{5}{8}\lambda^2 + \frac{5}{4} \rho \lambda ^{2}, \end{equation*} and $ \lim_{\lambda \to 0^+}\left(\frac{3}{4}\lambda+\frac{5}{8}\lambda^2 + \frac{5}{4} \rho \lambda ^{2}\right)=0$. Thus, for $\nu_0>0$ there exists $\delta_0>0$ such that $\phi(y^0+\lambda d^0)-\phi(y^0)+\rho \lambda ^{2}\| d^{0}\| ^{2}<\nu_0$, for all $\lambda\in (0, \delta_0)$. Therefore, the non-monotone line search \eqref{eq:jku} can be performed; see Figure~\ref{fig1a}. Using $\lambda _{-1}=1$, $\rho=0.1$ and $\zeta = 0.5$, one has that, although $\phi(x^{k+1})$ does not improve on the value attained by the corresponding DCA point (namely, $\phi(y^k)$) for all $k$ (see Figure~\ref{fig1b}), Algorithm~\ref{Alg:ASSPM} still has a better performance than DCA, as we can see in Figure~\ref{fig1c}. Both algorithms return the global minimum $x^*=(1.5, 0)$, with Algorithm~\ref{Alg:ASSPM} requiring 6 iterations while DCA computes 17 iterations until the stopping rule is satisfied. We will return to this example with more details in Section~\ref{Sec6}. \begin{figure*} \caption{Illustration of Example~\ref{ExampleAlgorithm}.} \label{fig1a} \label{fig1b} \label{fig1c} \end{figure*} \end{example} In the previous example, we illustrated how the non-monotone line search \eqref{eq:jku} can be performed in BDCA. In fact, in the next section, we will show that, in general, the non-monotone line search \eqref{eq:jku} can be performed.
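The failure of the monotone search and the success of the non-monotone one in Example~\ref{ExampleAlgorithm} can also be checked numerically. The short Python snippet below is an illustrative verification only; the value $\nu_0=0.1$ is an assumption made here for concreteness (the example itself only requires $\nu_0>0$). With $\nu_0=0$ the loop would never stop, since $\phi(y^0+\lambda d^0)-\phi(y^0)+\rho\lambda^{2}\|d^{0}\|^{2}=\frac{3}{4}\lambda+\frac{3}{4}\lambda^{2}>0$ for every $\lambda>0$ when $\rho=0.1$, whereas with $\nu_0>0$ a sufficiently small step is accepted.
\begin{verbatim}
import numpy as np

# Data of the example above with x^0 = (1/2, 1): y^0 = (1, 0), d^0 = (1/2, -1).
phi = lambda x: 0.5 * (x @ x) + np.abs(x).sum() - 2.5 * x[0]
y0, d0 = np.array([1.0, 0.0]), np.array([0.5, -1.0])
rho, zeta, nu0 = 0.1, 0.5, 0.1   # nu_0 > 0 is an assumed illustrative value

lam = 1.0                        # lambda_{-1}
while phi(y0 + lam * d0) > phi(y0) - rho * lam**2 * (d0 @ d0) + nu0:
    lam *= zeta                  # trial step rejected: backtrack
print(lam, phi(y0), phi(y0 + lam * d0))
# accepts lam = 0.0625; phi increases from -1.0 to about -0.95, a growth
# bounded by nu_0, exactly as permitted by the non-monotone rule.
\end{verbatim}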
We end this section with a basic result in the study of DCA; see, for example, \cite[Proposition 2]{TaoLe1997}. In particular, it shows that the solution of Problem~\eqref{eq:ASSPMu}, which coincides with the solution of Problems~\eqref{eq:DCAS} and \eqref{eq:BDCAMu} in Algorithms~\ref{Alg:DCA} and \ref{Alg:BDCAM} respectively, provides a decrease in the value of the objective function $\phi$. For the sake of completeness, we include its proof here. \begin{proposition}\label{pr:ffr} For each $k\in\mathbb{N}$, the following statements hold: \begin{enumerate}[label={({\roman*})}, ref={(\roman*)}] \item\label{it:ffr:critical} If $d^{k}=0$, then $x^{k}$ is a critical point of $\phi$; \item\label{it:ffr:ineq} There holds $\phi (y^{k})\leq \phi (x^{k})-\sigma \| d^{k}\| ^{2}$. \end{enumerate} \end{proposition} \begin{proof} Before starting the proof, we recall that $d^{k}=y^{k}-x^{k}$. To prove item~\ref{it:ffr:critical}, recall that, since $y^{k}$ is the solution of \eqref{eq:ASSPMu}, it satisfies \eqref{eq:charyk}. Thus, if $d^{k}=0$, then $y^{k}=x^{k}$ and $w^k\in \partial g (x^{k})\cap \partial h (x^{k})\not=\emptyset$. Consequently, $x^k$ is a critical point of $\phi$. To prove item~\ref{it:ffr:ineq}, take $w^{k}\in\partial h(x^{k})$. Now, we use the strong convexity of $g$ with modulus $\sigma>0$ and Theorem~\ref{teo2}\,\ref{it:teo2:fct-geq} to conclude that \begin{equation*} g(x^k)\geq g(y^k) - \langle v, d^k \rangle + \frac{\sigma}{2} \| d^k\|^{2}, \qquad \forall v\in \partial g(y^k). \end{equation*} Thus, due to $y^{k}$ being the solution of Problem~\eqref{eq:ASSPMu}, \eqref{eq:charyk} and the last inequality, we have \begin{equation} \label{eq:ffrs} g(x^k)\geq g(y^k) - \langle w^k, d^k \rangle + \frac{\sigma}{2} \|d^k\|^{2}. \end{equation} We also know that $h$ satisfies \ref{it:ghstronglyconvex}, i.e., $h$ is strongly convex with modulus $\sigma>0$. Thus, since $w^{k}\in\partial h(x^{k})$, it follows from Theorem~\ref{teo2}\,\ref{it:teo2:fct-geq} that \begin{equation} \label{eq:ffrsa} h(y^k)\geq h(x^k) + \langle w^k, d^k \rangle + \frac{\sigma}{2} \| d^k\|^{2}. \end{equation} Thus, adding \eqref{eq:ffrs} and \eqref{eq:ffrsa} we have that $g(x^k)+h(y^k)\geq g(y^k)+h(x^k) + {\sigma}\| d^k\|^{2}$. Hence, using the definition of $\phi$ in \eqref{Pr:DCproblem} we conclude that $ \phi(x^k)\geq \phi(y^k) + {\sigma}\|d^k\|^{2}$, which is equivalent to the desired inequality, finishing the proof of item~\ref{it:ffr:ineq}. \end{proof} \subsection{Strategies to choose $\nu_k$} Next, we discuss some strategies to choose the sequence of parameters $(\nu_{k})_{k\in \mathbb{N}}$. We emphasize that throughout the paper each one of the following strategies will be used separately and only when explicitly stated: \begin{enumerate}[label={(\textbf{S\arabic*})}, ref={(S\arabic*)}] \item \label{it:deltamin} Given $\delta _{min}\in [0,1)$, the sequence $(\nu _{k})_{k\in \mathbb{N}}\subset\mathbb{R}_{+}$ is defined as follows: $\nu _{0}\geq 0$ and, for each $ \delta _{k+1} \in [\delta _{min},1]$, $\nu_{k+1}$ satisfies the following condition \begin{equation}\label{nmr1} 0 \leq \nu_{k+1} \leq (1-\delta _{k+1})(\phi (x^{k})- \phi (x^{k+1})+\nu _{k}), \quad \forall k\in \mathbb{N}.
\end{equation} \item \label{it:nusummable} $(\nu _{k})_{k\in \mathbb{N}}\subset\mathbb{R}_{+}$ is such that $\sum _{k=0}^{+\infty}\nu _{k} <+\infty$; \item \label{it:nubound} $(\nu _{k})_{k\in \mathbb{N}}\subset\mathbb{R}_{+}$ is such that for every $\delta>0$, there exists $k_{0}\in \mathbb{N}$ such that $\nu _{k}\leq \delta \|d^{k}\|^{2},$ for all $k\geq k_{0}$. \end{enumerate} \begin{remark} \label{re:pip} First note that, by using Proposition~\ref{prop15u}\,\ref{it:prop15u:decrease}, we have $ 0\leq \sigma \|d^{k}\|^{2} \leq \phi (x^{k})-\phi (x^{k+1})+\nu _{k}$, for all $k\in \mathbb{N}$. Thus, we can take $\nu_{k+1}\geq 0$ satisfying \eqref{nmr1}. Furthermore, if $(\nu _{k})_{k\in \mathbb{N}}$ satisfies \ref{it:deltamin} with $\delta_{min}>0$, then $(\nu _{k})_{k\in \mathbb{N}}$ also satisfies \ref{it:nusummable}. Indeed, it follows from \eqref{nmr1} that \begin{equation} \label{eq:ccsa} 0\leq \delta _{k+1}(\phi (x^{k})-\phi (x^{k+1})+\nu _{k}) \leq (\phi (x^{k})+\nu _{k})- (\phi (x^{k+1})+\nu _{k+1}). \end{equation} Since $\delta _{k+1}\geq\delta _{min}>0$, $\phi (x^{k})-\phi (x^{k+1})+\nu _{k}\geq 0$, for all $k\in \mathbb{N}$, and $\phi$ satisfies \ref{it:phistarinf} we obtain $\delta _{min} \sum _{k=0}^{N} (\phi (x^{k})-\phi (x^{k+1})+\nu _{k}) \leq \phi (x^{0})-\phi ^{*} + \nu _{0}<\infty$. Hence, due to $\nu _{k+1}\leq (1-\delta _{min})(\phi (x^{k})-\phi (x^{k+1})+\nu _{k})$ for all $k\in \mathbb{N}$, we have $\sum _{k=0}^{+\infty}\nu _{k} < \nu_0+ ((1-\delta _{min})/\delta _{min})(\phi (x^{0})-\phi ^{*} + \nu _{0})<\infty$. Therefore, $(\nu _{k})_{k\in \mathbb{N}}$ satisfies \ref{it:nusummable} and the claim is proved. \end{remark} Although strategy \ref{it:deltamin} seems to be theoretical, we will see in the sequel a practical and efficient example satisfying this condition. A sequence $(\nu _{k})_{k\in \mathbb{N}}$ satisfying \ref{it:nusummable} is simple and exogenous, i.e., it can be taken {\it a priori}. Finally, at a first glance, strategy \ref{it:nubound} seems to be a strong condition, but actually there are simple examples of sequences $(\nu _{k})_{k\in \mathbb{N}}$ satisfying \ref{it:nubound} which are easy to implement numerically. Alternatively, we can consider the following strategy: \begin{enumerate}[label={(\textbf{S3'})}, ref={(S3')}] \item \label{eq:nub2} Fix $\bar{\delta} \in (0, \sigma)$. There exists $k_{0}\in \mathbb{N}$ such that $\nu _{k}\leq \bar{\delta} \|d^{k}\|^{2},$ for all $k\geq k_{0}$. \end{enumerate} Since it changes according to $d^k$, in our point of view, it makes this strategy very interesting. Next, we present some examples of sequences $(\nu_{k})_{k\in \mathbb{N}}$ according to the above strategies. \begin{example} Let us first recall the definition of the sequence of ``cost updates" $(C_k)_{k\in\mathbb{N}}$ that characterizes the non-monotone line search proposed in \cite{ZhangHager2004}. Consider $0\leq \eta_{min}\leq \eta_{max}<1$, $C_0 >\phi (x^0)$ and $Q_0 = 1$. Choose $\eta_k\in [\eta_{min}, \eta_{max}]$ and set \begin{equation} \label{eq:zhs} Q_{k+1}:=\eta_kQ_{k}+1, \qquad C_{k+1} := ({\eta_k}Q_kC_k + \phi(x^{k+1}))/Q_{k+1}, \qquad \forall k \in \mathbb{N}. \end{equation} Note that, after some algebraic manipulations, we can show that \eqref{eq:zhs} is equivalent to $C_{k+1} = (1-1/Q_{k+1})C_{k}+\phi(x^{k+1})/Q_{k+1}$, for all $k \in \mathbb{N}$. Thus, setting $\nu_{k}=C_k-\phi(x^k)$ and $\delta_{k+1}=1/Q_{k+1}$, we conclude that $\nu _{k+1} =(1-\delta _{k+1})(\phi (x^{k})-\phi (x^{k+1})+\nu _{k})$, for all $k \in \mathbb{N}$. 
Moreover, from \eqref{eq:zhs} we have $Q_{k+1}>1,$ for all $k\in \mathbb{N}$, and then $(1-\delta _{k+1})>0$ for all $k\in \mathbb{N}$. Since $\nu_0=C_0 -\phi (x^0)>0$, induction argument combined with Proposition~\ref{prop15u}\,\ref{it:prop15u:decrease} imply that $\nu _{k+1}>0$ for all $k\in \mathbb{N}$. Moreover, $(\nu _{k})_{k\in \mathbb{N}}$ satisfies \ref{it:deltamin}. It is worth noting that the non-monotone line search technique proposed in \cite{ZhangHager2004} outperforms the one proposed in \cite{Grippo1986} in many problems; see \cite[Section 4]{ZhangHager2004}. \end{example} \begin{example} Take any $\nu _{0}>0$, and define $\delta _{k+1}$ and $\nu _{k}$ as follows \begin{equation}\label{eq:ndgeo} 0< \delta _{min}\leq \delta _{k+1}<1, \qquad 0< \nu _{k+1}:= (1-\delta _{k+1})(\sigma +\rho \lambda _{k}^{2})\|d^{k}\|^{2}, \qquad \forall k\in \mathbb{N}. \end{equation} Then Proposition~\ref{prop15u}\,\ref{it:prop15u:decrease} yields $(\sigma+\rho \lambda _{k}^{2})\|d^{k}\|^{2}\leq \phi (x^{k})-\phi (x^{k+1})+\nu _{k}.$ Thus, whenever $d^k\neq 0$, we have $0<\nu _{k+1} \leq (1-\delta _{k+1})(\phi (x^{k})-\phi (x^{k+1})+\nu_{k}).$ Therefore, $(\nu _{k})_{k\in \mathbb{N}}$ defined in \eqref{eq:ndgeo} satisfies \ref{it:deltamin}. \end{example} \begin{example} \label{eq:omega} Let $\omega>0$ be a constant. Then, the sequence $(\nu _{k})_{k\in \mathbb{N}}\subset\mathbb{R}_{++}$ defined by $ \nu _{k}:=\omega\|d^{k}\|^{2}/(k+1)$, for all $ k\in \mathbb{N}$, satisfies \ref{it:nubound}. Indeed, due to $\lim _{k\to\infty} \omega /(k+1)=0$, for every $\delta>0$, there exists $k_{0}\in \mathbb{N}$ such that $k\geq k_{0}$ implies that $ \omega/(k+1)\leq \delta $. Thus, we have that $ \nu _{k}\leq \delta \|d^{k}\|^{2}.$ Similarly, we can show that $(\nu _{k})_{k\in \mathbb{N}}\subset\mathbb{R}_{++}$ defined by $ \nu _{k}:=\omega\|d^{k}\|^{2}/\ln(k+2)$, for all $ k\in \mathbb{N}$, also satisfies \ref{it:nubound}. \end{example} \begin{example} Take an integer $M>0$, set $m_{0}=0$ and for $k>0$ take $0\leq m_{k}\leq \min \{m_{k-1}+1,M\}$. Setting $\phi (x^{\ell(k)}):=\max _{0\leq j\leq m_{k}}\phi (x^{k-j})$ and \begin{equation}\label{eq:grippo} \nu _{k}:=\phi (x^{\ell(k)})-\phi (x^{k}),\qquad 0=\delta_{min}\leq \delta_{k+1}\leq \frac{ \phi(x^{\ell(k)})- \phi(x^{\ell(k+1)})}{\phi(x^{\ell(k)})-\phi(x^{k+1})}, \end{equation} the definitions of $\nu _{k}$ and $\delta _{k+1}$ in \eqref{eq:grippo} satisfies \ref{it:deltamin} with $\delta _{min}=0$. 
In fact, from the definition of $\phi (x^{\ell(k)})$ it follows that $\nu _{0}=0$ and $\phi (x^{k})\leq \phi (x^{\ell (k)}),$ for all $k\in\mathbb{N}$, which ensures that $\nu _{k}\geq 0.$ From Proposition~\ref{prop15u}\,\ref{it:prop15u:decrease} and the definition of $\nu _{k}$ in \eqref{eq:grippo} it follows that $\phi (x^{k+1}) < \phi (x^{\ell (k)}).$ Since $m_{k+1}\leq m_{k}+ 1$, we conclude that $\phi (x^{\ell(k+1)})\leq \phi (x^{\ell(k)}).$ Thus, we have $\phi(x^{\ell(k)})- \phi(x^{\ell(k+1)}) \leq \phi(x^{\ell(k)})-\phi(x^{k+1}),$ which shows that $\delta _{k+1}\in [0,1].$ By using the definitions of $\nu _{k}$ and $\delta _{k+1}$ in \eqref{eq:grippo}, we have \begin{align*} \nu_{k+1} &=\frac{\phi(x^{\ell(k)})-\phi(x^{k+1})-\big(\phi(x^{\ell(k)})- \phi(x^{\ell(k+1)})\big)}{\phi(x^{\ell(k)})-\phi(x^{k+1})}\big( \phi (x^{k})-\phi(x^{k+1})+\nu _{k} \big) \\ &=\left(1-\frac{\phi(x^{\ell(k)})- \phi(x^{\ell(k+1)})}{\phi(x^{\ell(k)})-\phi(x^{k+1})}\right)\left( \phi (x^{k})-\phi(x^{k+1})+\nu _{k} \right) \\ & \leq (1-\delta _{k+1})\big(\phi (x^{k})-\phi(x^{k+1})+\nu _{k} \big), \end{align*} which shows that $\nu _{k+1}$ satisfies \eqref{nmr1}. Therefore, the strategy defined in \eqref{eq:grippo} is a particular instance of \ref{it:deltamin} which turns Algorithm~\ref{Alg:ASSPM} into a non-monotone boosted version of the DCA employing the non-monotone line search proposed in \cite{Grippo1986}. It is worth mentioning that, since the non-monotone rule \eqref{eq:grippo} does not satisfy the condition $\nu_{k}>0$ for all $k\in \mathbb{N}$, it cannot be used to boost the DCA in the case where $g$ is non-differentiable (see Proposition~\ref{prop15u} in the sequel). Therefore, such a rule will be used only in the case where $g$ is continuously differentiable. \end{example} \section{Convergence analysis: $g$ possibly non-differentiable} \label{sec4} The aim of this section is to present convergence results and an iteration-complexity analysis of nmBDCA when the function $g$ is possibly non-differentiable. It is worth mentioning that in the next result we need to assume that $\nu _{k}>0$. We begin by showing that Algorithm~\ref{Alg:ASSPM} is well-defined. \begin{proposition}\label{prop15u} Let $(x^{k} )_{k \in \mathbb{N}}$ be the sequence generated by Algorithm~\ref{Alg:ASSPM}. For each $k\in\mathbb{N}$, assume that $d^{k}\neq 0$ and $\nu _{k}>0$. Then, the following statements hold: \begin{enumerate}[label={({\roman*})}, ref={(\roman*)}] \item\label{it:prop15u:ddsc} There holds ${\hat{\delta} _{k}}:=\nu _{k}/ (g(y^{k}+d^{k})+g(x^k)-2g(y^{k}))>0$, and \begin{equation*} \phi (y^{k}+\lambda d^{k})\leq \phi (y^{k}) - \rho\lambda^2 \| d^{k}\| ^{2}+\nu_k, \qquad \forall \lambda \in (0,{\delta}_{k}], \end{equation*} where ${\delta} _{k}:= \min \{ {\hat{\delta} _{k}},1, ({3\sigma})/({2\rho})\}$. Consequently, the line search in Step 3 is well-defined. \item\label{it:prop15u:decrease} $\phi (x^{k+1}) \leq \phi (x^{k}) - (\sigma+\rho\lambda_k^2) \| d^{k}\| ^{2}+\nu_k$. \end{enumerate} \end{proposition} \begin{proof} Before starting the proof, we recall that $d^{k}=y^{k}-x^{k}$. To prove item~\ref{it:prop15u:ddsc}, assume that $d^{k}\neq 0$ and take $w^{k}\in\partial h(x^{k})$. Since $h$ is strongly convex with modulus $\sigma>0$, it follows from Theorem~\ref{teo2}\,\ref{it:teo2:fct-geq} that \begin{equation} \label{eq:sch} h(y^{k}+\lambda d^{k})\geq h(y^k) + \lambda \langle s, d^k \rangle + \frac{\sigma}{2} \lambda^2\|d^k\|^{2}, \qquad \forall s \in\partial h(y^{k}).
\end{equation} Moreover, taking into account that $w^{k}\in\partial h(x^{k})$, we can apply Theorem~\ref{teo2}\,\ref{it:teo2:inner-geq} to obtain that $\langle s, d^k \rangle \geq \langle w^k, d^k \rangle+ {\sigma}\|d^k\|^{2}$. Hence, \eqref{eq:sch} becomes \begin{equation} \label{eq:sscs} h(y^{k}+\lambda d^{k})\geq h(y^k) + \lambda \langle w^k, d^k \rangle + {\sigma}\lambda\|d^k\|^{2}+ \frac{\sigma}{2} \lambda^2\|d^k\|^{2}. \end{equation} Considering that $y^{k}$ is the solution of \eqref{eq:ASSPMu} we have that $g(y^{k})- \left \langle w^{k},d^{k} \right\rangle \leq g(x^{k})$, which combining with \eqref{eq:sscs} yields \begin{equation} \label{eq:tscs} -(h(y^{k}+\lambda d^{k})-h(y^k)) \leq \lambda\left( g(x^{k})- g(y^{k}) \right) - {\sigma}\lambda\|d^k\|^{2}- \frac{\sigma}{2} \lambda^2\|d^k\| ^{2}. \end{equation} On the other hand, by using the strong convexity of $g$ with modulus $\sigma>0$ we have \begin{align}\label{eq:IX} g(y^{k}+\lambda d^{k})-g(y^{k}) & = g( \lambda (y^{k}+d^{k}) +(1-\lambda) y^{k} )-g(y^{k}) \notag \\ & \leq \lambda g(y^{k}+d^{k}) + (1-\lambda)g(y^{k})- \frac{\sigma}{2}\lambda(1-\lambda)\|d^k\|^2 -g(y^{k})\notag \\ & = \lambda\left(g(y^{k}+d^{k})-g(y^{k})\right)- \frac{\sigma}{2}\lambda(1-\lambda)\|d^k\|^2, \end{align} for all $\lambda \in (0,1]$. Combining the definition of $ \phi $ in \eqref{Pr:DCproblem} with \eqref{eq:tscs} and \eqref{eq:IX}, we obtain \begin{align}\label{eq:phiin} \phi (y^{k}+\lambda d^{k})-\phi (y^{k}) &=g(y^{k}+\lambda d^{k})-g(y^{k})-\left(h(y^{k}+\lambda d^{k})-h(y^{k}) \right) \notag \\ &\leq -\frac{3\sigma}{2}\lambda\|d^k\|^2+\lambda\left(g(y^{k}+d^{k})+g(x^{k})-2g(y^{k})\right). \end{align} Moreover, it follows from Theorem~\ref{teo2}\,\ref{it:teo2:fct-geq} that \begin{equation*} g(y^{k}+d^{k})\geq g(y^{k}) +\langle w, d^k \rangle + \frac{\sigma}{2} \|d^k\|^{2}, \qquad g(x^{k}) \geq g(y^{k}) -\langle w, d^k \rangle + \frac{\sigma}{2} \|d^k\|^{2}, \end{equation*} for all $w\in \partial g(y^{k}),$ which implies that $g(y^{k}+d^{k})+g(x^k)-2g(y^{k}) \geq \sigma \|d^k\|^{2} >0$. Thus, due to $\nu _{k}>0$, we have $0<{\hat{\delta} _{k}}:=\nu _{k}/ (g(y^{k}+d^{k})+g(x^k)-2g(y^{k}))$, which proves the first statement of item~\ref{it:prop15u:ddsc}. Moreover, we have \begin{equation*} 0<\lambda \left(g(y^{k}+d^{k})+g(x^k)-2g(y^{k}) \right) \leq \nu _{k}, \qquad \lambda \in ( 0,{\hat{\delta} _{k}}]. \end{equation*} Set ${\delta} _{k}:= \min \{ {\hat{\delta} _{k}},1, ({3\sigma})/({2\rho})\}$. Hence, the last inequality together with \eqref{eq:phiin} implies \begin{align*} \phi(y^{k}+\lambda d^{k})- \phi(y^{k}) \leq - \rho\lambda^2\|d^{k}\|^{2} +\nu _{k} ,\qquad \forall \lambda \in (0, \delta _{k}], \end{align*} which concludes the second statement of the item~\ref{it:prop15u:ddsc}. Finally, considering that $\lim _{j\to \infty} \zeta ^{j} \lambda _{k-1} =0$, it follows from the last inequality that the line search in {Step~3} is well-defined, and the proof of item~\ref{it:prop15u:ddsc} is concluded. To prove item~~\ref{it:prop15u:decrease}, we first note that item~\ref{it:prop15u:ddsc} implies that Step 4 is well-defined for $\nu _{k}>0$. Thus, \eqref{eq:jku} implies $\phi (y^{k}+\lambda_k d^{k})\leq \phi (y^{k}) - \rho\lambda_k^2 \| d^{k}\| ^{2}+\nu_k$, which combined with Proposition~\ref{pr:ffr}\,\ref{it:ffr:ineq} implies item~\ref{it:prop15u:decrease} and the proof of the proposition is completed. 
\end{proof} Note that if $x^{k+1}=x^k$, then from the definition of Algorithm~\ref{Alg:ASSPM} one can easily show that $d^{k}=0$, and hence, $x^{k}$ is a critical point of $\phi$. Therefore, from now on we assume that $d^{k}\neq 0$, or equivalently, that the sequence $(x^{k}) _{k\in \mathbb{N}}$ generated by Algorithm~\ref{Alg:ASSPM} is infinite. \begin{remark} \label{re:aapn} In the case that $g$ is convex and non-differentiable the direction $d^{k}\neq 0$ generated by Step~3 in Algorithm~\ref{Alg:ASSPM} is not in general a descent direction of $\phi$ at $y^{k}$, see Example~\ref{ExampleAlgorithm}. For this reason in Step~3 of Algorithm~\ref{Alg:ASSPM} we must assume that $\nu _{k}>0$, otherwise we cannot compute $\lambda _{k}>0$ satisfying \eqref{eq:jku}. However, as we will see in the Section~\ref{sec5}, whenever $g$ is convex and differentiable we just need to assume that $\nu _{k}\geq 0$ to compute $\lambda _{k}>0$ satisfying \eqref{eq:jku}. \end{remark} \subsection{Asymptotic convergence analysis} Next, we prove the main results of this section. \begin{theorem}\label{pr:assym} If $\lim _{k\to\infty}\|d^{k}\|=0$, then every cluster point of $(x^k)_{k\in\mathbb{N}}$, if any, is a critical point of $\phi$. \end{theorem} \begin{proof} Let ${\bar x}$ be a cluster point of $(x^k)_{k\in\mathbb{N}}$, and $(x^{k_{\ell}})_{\ell\in \mathbb{N}}$ a subsequence of $(x^k)_{k\in\mathbb{N}}$ such that $\lim _{\ell\to \infty}x^{k_{\ell}}={\bar x}$. Let $(w^{k_{\ell}})_{\ell\in \mathbb{N}}$ and $(y^{k_{\ell}})_{\ell\in \mathbb{N}}$ be the according sequences generated by Algorithm~\ref{Alg:ASSPM}, i.e., $w^{k_{\ell}}\in\partial h(x^{k_{\ell}})$. From \eqref{eq:charyk} we have that $ w^{k_{\ell}} \in \partial g (y^{k_{\ell}})$. Since $\lim _{k\to\infty}\|d^{k}\|=0$ and $\lim _{\ell\to \infty}x^{k_{\ell}}={\bar x}$ we obtain that $\lim _{\ell\to \infty}y^{k_{\ell}}={\bar x}$. Considering that $w^{k_{\ell}}\in \partial h(x^{k_{\ell}})\cap \partial g(y^{k_{\ell}})$ and due to the convexity of $g$ and $h$, without loss of generality, we can apply Proposition \ref{cont_subdif} to obtain that $\lim _{\ell\to \infty}w^{k_{\ell}}=\bar{w}\in \partial g({\bar x}) \cap \partial h({\bar x}),$ which concludes the proof. \end{proof} \begin{theorem}\label{prop1a} If $(\nu_k)_{k\in\mathbb{N}}$ is chosen according to strategy \ref{it:deltamin}, then $(\phi (x^{k})+\nu _{k}) _{k\in \mathbb{N}}$ is non-increasing and convergent. \end{theorem} \begin{proof} It follows from \eqref{eq:ccsa} in Remark~\ref{re:pip} that $(\phi (x^{k})+\nu _{k})_{k\in \mathbb{N}}$ is non-increasing. Therefore, by using \ref{it:phistarinf} and $(\nu _{k})_{k\in \mathbb{N}}\subset\mathbb{R}_{+}$, the desired result directly follows. \end{proof} \begin{corollary} If $(\nu_k)_{k\in\mathbb{N}}$ is chosen according to strategy \ref{it:deltamin} and $\lim _{k\to\infty}\nu _{k}=0$, then every cluster point of $(x^k)_{k\in\mathbb{N}}$, if any, is a critical point of $\phi$. \end{corollary} \begin{proof} Since $\lim _{k\to\infty} \nu_{k}=0$ from Theorem~\ref{prop1a} we have that $(\phi (x^{k})) _{k\in \mathbb{N}}$ is convergent. On the other hand, by Proposition~\ref{prop15u}\,\ref{it:prop15u:decrease} we obtain $0\leq \sigma \| d^{k}\| ^{2} \leq \phi (x^{k})+ \nu _{k}-\phi (x^{k+1})$, for all $k\in \mathbb{N}$. Therefore, taking limit in the last inequality we have that $\lim _{k\to\infty}\|d^{k}\|=0$. Finally, we apply Theorem~\ref{pr:assym} and the proof is complete. 
\end{proof} The next result proves the asymptotic convergence of Algorithm~\ref{Alg:ASSPM} when $(\nu _{k})_{k\in\mathbb{N}}$ is summable. \begin{corollary}\label{coro:A2} If $(\nu_k)_{k\in\mathbb{N}}$ is chosen according to strategy \ref{it:nusummable}, then every cluster point of $(x ^{k})_{k\in\mathbb{N}}$, if any, is a critical point of $\phi$. \end{corollary} \begin{proof} Proposition~\ref{prop15u}\,\ref{it:prop15u:decrease} gives $0\leq \sigma \|d^{k}\|^{2}\leq \phi (x^{k})-\phi (x^{k+1})+\nu _{k}$, for all $k\in \mathbb{N}$. Thus, using \ref{it:phistarinf} we obtain \begin{equation*} \sum _{k=0}^{\infty}\|d^{k}\|^{2}\leq \frac{1}{\sigma}\Big(\phi (x^{0})-\phi ^{*}+\sum _{k=0}^{\infty}\nu _{k}\Big)<+\infty, \end{equation*} which implies that $\lim _{k\to\infty}\| d^{k}\|=0$. The desired result follows from Theorem~\ref{pr:assym}. \end{proof} \begin{corollary} Suppose that $(\nu_k)_{k\in\mathbb{N}}$ is chosen according to strategy \ref{it:deltamin}. If $\delta _{min}>0,$ then every cluster point of $(x^{k})_{k\in \mathbb{N}}$, if any, is a critical point of $\phi$. \end{corollary} \begin{proof} It follows by combining Remark~\ref{re:pip} with Corollary~\ref{coro:A2}. \end{proof} \begin{corollary}\label{coro:A3} If $(\nu_k)_{k\in\mathbb{N}}$ is chosen according to strategy \ref{it:nubound}, then every cluster point of $(x ^{k})_{k\in\mathbb{N}}$, if any, is a critical point of $\phi$. \end{corollary} \begin{proof} From the definition of strategy \ref{it:nubound}, there exists $k_{0}\in \mathbb{N}$ such that $0\leq \nu _{k}\leq \sigma\|d^{k}\|^{2}/2$, for all $k\geq k_{0}$. Thus, $\sigma\|d^{k}\|^{2}/2\leq \sigma\|d^{k}\|^{2} - \nu_k$, for all $k\geq k_{0}$. Hence, using Proposition~\ref{prop15u}\,\ref{it:prop15u:decrease} we have $0\leq \sigma\|d^{k}\|^{2}/2 \leq \phi (x^{k})-\phi (x^{k+1})$, for all $k\geq k_{0}$. Then, using \ref{it:phistarinf} we conclude that $(\phi (x^{k})) _{k\geq k_{0}}$ is convergent. Furthermore, it follows that $\lim _{k\to\infty}\|d^{k}\|=0$. Therefore, applying Theorem~\ref{pr:assym} we obtain the desired result. \end{proof} \begin{remark} In particular, Corollary~\ref{coro:A3} is also valid if we replace strategy~\ref{it:nubound} by the alternative strategy \ref{eq:nub2}. Indeed, if we assume that $(\nu _{k})_{k\in \mathbb{N}}$ satisfies~\ref{eq:nub2}, then using Proposition~\ref{prop15u}\,\ref{it:prop15u:decrease} we have $0< \sigma\|d^{k}\|^{2} \leq \phi (x^{k})-\phi (x^{k+1}) + \nu _{k} \leq \phi (x^{k})-\phi (x^{k+1}) + \bar{\delta}\|d^{k}\|^{2}$ for all $k\geq k_{0}$, which implies that $0< (\sigma - \bar{\delta})\|d^{k}\|^{2} \leq \phi (x^{k})-\phi (x^{k+1}),$ for all $k\geq k_{0}$. Thus, using \ref{it:phistarinf} we conclude that $(\phi (x^{k})) _{k\geq k_{0}}$ is convergent and $\lim _{k\to\infty}\|d^{k}\|=0$. Therefore, the assertion holds by using Theorem~\ref{pr:assym}. \end{remark} \subsection{Iteration-complexity analysis} In this section some iteration-complexity bounds for $(x^{k})_{k\in \mathbb{N}}$ generated by Algorithm~\ref{Alg:ASSPM} are presented. Our results establish iteration-complexity bounds for the case where $(\nu_k)_{k\in\mathbb{N}}$ is chosen according to each one of the strategies \ref{it:nusummable} and \ref{it:nubound}. Before presenting them, we first note that, in particular, Proposition~\ref{prop15u}\,\ref{it:prop15u:decrease} implies that \begin{equation}\label{eq:CGnd} \sigma\| d^{k}\| ^{2}\leq \phi (x^{k})-\phi (x^{k+1}) +\nu _{k},\qquad \forall k\in \mathbb{N}.
\end{equation} \begin{theorem} Suppose that $(\nu_k)_{k\in\mathbb{N}}$ is chosen according to strategy \ref{it:nusummable}. For each $N\in\mathbb{N},$ we have \begin{equation}\label{eq:comp_A2nd} \min \left\{\| d^{k}\|:~k=0,1,\cdots,N-1\right\}\leq {\frac{ \sqrt{\phi(x^{0})-\phi^{*}+\sum _{k=0}^{\infty}\nu _{k}}}{ \sqrt{ \sigma }}}\frac{1}{ \sqrt{N}}. \end{equation} Consequently, for a given accuracy $\epsilon>0$, if $N\geq \left({ \phi (x^{0})-\phi ^{*}+\sum _{k=0}^{\infty}\nu _{k} }\right)/(\sigma \epsilon^{2})$, then $\min \{\| d^{k}\|: ~k=0,1,\cdots,N-1\}\leq \epsilon.$ \end{theorem} \begin{proof} Since $\phi^{*}:=\inf _{x\in \mathbb{R}^{n}} \phi(x)\leq \phi (x^{k})$ for all $k\in\mathbb{N}$, from \eqref{eq:CGnd} we obtain that \begin{equation*} \sum _{k=0}^{N-1}\|d^{k}\|^{2} \leq \frac{1}{\sigma}\Big(\phi(x^{0})-\phi(x^{N})+\sum _{k=0}^{N-1}\nu _{k}\Big) \leq \frac{1}{\sigma} \Big(\phi(x^{0})-\phi^{*}+\sum _{k=0}^{\infty}\nu _{k}\Big). \end{equation*} Therefore, $ N\min \{\| d^{k}\| ^{2}: ~k=0,1,\cdots,N-1\}\leq (\phi(x^{0})-\phi^{*}+\sum _{k=0}^{\infty}\nu _{k})/\sigma,$ and \eqref{eq:comp_A2nd} follows. The second statement is a directly consequence of the first one. \end{proof} \begin{theorem}\label{th:comp_A3nd} Suppose that $(\nu_k)_{k\in\mathbb{N}}$ is chosen according to strategy \ref{it:nubound}. Let $0< \varsigma <1$ and $k_{0}\in \mathbb{N} $ such that $\nu _{k}\leq \varsigma \sigma\|d^{k}\|^{2}$, for all $k\geq k_{0}$. Then, for each $N\in\mathbb{N}$ such that $N> k_0$, one has \begin{equation*} \min \{\|d^{k}\| : k=0,1,\cdots,N-1\} \leq {\frac{ \sqrt{\phi(x^{0})-\phi^{*}+\sum _{k=0}^{k_0-1}\nu _{k}}}{ \sqrt{ (1-\varsigma)\sigma }}}\frac{1}{ \sqrt{N}}. \end{equation*} Consequently, for a given $\epsilon>0$ and $k_{0}\in \mathbb{N}$ such that $\nu _{k}\leq \varsigma \sigma\|d^{k}\|^{2}$ for all $k\geq k_{0}$, if $N\geq \max\{k_0, ({ \phi (x^{0})-\phi ^{*}+\sum _{k=0}^{k_0-1}\nu _{k} })/(\sigma(1-\varsigma ) \epsilon^{2})\}$, then the following inequality holds $\min \left\{\| d^{k}\|: ~k=0,1,\cdots,N-1\right\}\leq \epsilon.$ \end{theorem} \begin{proof} Let $\varsigma \in (0,1)$ and $k_{0}\in \mathbb{N} $ such that $\nu _{k}\leq \varsigma \sigma\|d^{k}\|^{2}$, for all $k\geq k_{0}$. It follows from \eqref{eq:CGnd} that $\sigma\|d^{k}\|^{2}\leq \phi (x^{k})-\phi (x^{k+1})+\nu _{k}$, for all $ k=0,1,\cdots, N-1$. Summing up the last inequality from $k=0$ to $k=N-1$ and using assumption \ref{it:phistarinf} we have \begin{equation*} \sigma\sum _{k=0}^{N-1}\|d^{k}\|^{2} \leq \phi (x^{0})-\phi ^{*}+\sum _{k=0}^{k_{0}-1} \nu _{k}+\sum _{k=k_{0}}^{N-1} \nu _{k}. \end{equation*} Hence, considering that $\nu _{k}\leq \varsigma \sigma\|d^{k}\|^{2}$, for all $k\geq k_{0}$, the last inequality becomes \begin{equation*} \sum _{k=0}^{N-1}\sigma\|d^{k}\|^{2} \leq \phi (x^{0})-\phi ^{*}+\sum _{k=0}^{k_{0}-1} \nu _{k}+\sum _{k=0}^{N-1} \varsigma \sigma\|d^{k}\|^{2}, \end{equation*} which is equivalent to $ \sum _{k=0}^{N-1} (1-\varsigma)\sigma\|d^{k}\|^{2} \leq \phi (x^{0})-\phi ^{*}+\sum _{k=0}^{k_{0}-1} \nu _{k}$. Therefore, we have $ N\min \{\| d^{k}\| ^{2}: ~k=0,1,\cdots,N-1\}\leq (\phi(x^{0})-\phi^{*}+\sum _{k=0}^{k_0-1}\nu _{k})/((1-\varsigma){\sigma})$, and the first inequality follows. The last inequality follows from the first one. \end{proof} \begin{remark}\label{rmk:ExemploA3} Theorem~\ref{th:comp_A3nd} may not seem very useful at first look, since the integer $k_0$ is not always known. 
However, specifically for the sequences $(\nu_{k})_{k\in \mathbb{N}}$ given in Example~\ref{eq:omega}, we are able to compute such an integer $k_0$ explicitly. Indeed, given $\omega>0$ and $0<\varsigma <1$, if $\nu _{k}=\omega\|d^{k}\|^{2}/(k+1),$ then the integer $k_0$ such that $k\geq k_0$ implies $\nu_k\leq \varsigma \sigma\|d^k\|^{2}$ must satisfy $k_{0}\geq ({\omega}/{\varsigma\sigma})-1$. On the other hand, if $\nu _{k}={\omega}\|d^{k}\|^{2}/\ln (k+2),$ then some calculations show that $k_0\geq e^{({\omega}/{\varsigma \sigma})}-2.$ \end{remark} \section{Convergence analysis: $g$ continuously differentiable} \label{sec5} In this section, we present an iteration-complexity analysis of nmBDCA when the function $g$ is continuously differentiable. We remark that in this section we just need to assume that $\nu _{k}\geq 0$. Hence, it is worth mentioning that, if $\nu _{k}= 0$ for all $k\in\mathbb{N}$, then the non-monotone line search \eqref{eq:jku} reduces to the monotone line search \eqref{eq:BDCAjku}, i.e., Algorithm~\ref{Alg:ASSPM} is a natural extension of the BDCA introduced in \cite{ARAGON2019}. If $\nu _{k}> 0$, for all $k\in\mathbb{N}$, then nmBDCA can be viewed as an inexact version of BDCA. To proceed with the analysis of Algorithm~\ref{Alg:ASSPM} we need to assume, in addition to \ref{it:ghstronglyconvex} and \ref{it:phistarinf}, the following condition: \begin{enumerate}[label={(\textbf{H\arabic*})}, start=3, ref={(H\arabic*)}] \item\label{it:gdiff} $g$ is continuously differentiable and $\nabla g$ is Lipschitz continuous with constant $L>0$. \end{enumerate} Our first task is to establish that Algorithm~\ref{Alg:ASSPM} is well-defined, which will be done in the next proposition. \begin{proposition}\label{prop:gdif} Suppose that $g:\mathbb{R}^{n}\to \mathbb{R}$ satisfies \ref{it:gdiff}. For each $k\in\mathbb{N}$, assume that $d^{k}\neq 0$ and $\nu _{k}\geq 0$. Then, the following statements hold: \begin{enumerate}[label={({\roman*})}, ref={(\roman*)}] \item\label{it:gdif:deltak} $\phi '(y^{k};d^{k})\leq -\sigma \|d^{k}\|^{2}<0$ and there exists a constant ${\delta}_{k}>0$ such that $\phi (y^{k}+\lambda d^{k})\leq \phi (y^{k}) - \rho\lambda^2 \| d^{k}\| ^{2}+\nu_k$, for all $\lambda \in (0,{\delta}_{k}]$. Consequently, the line search in Step 3 is well-defined. \item\label{it:gdif:ineq} $\phi (x^{k+1}) \leq \phi (x^{k}) - (\sigma+\rho\lambda_k^2) \| d^{k}\| ^{2}+\nu_k$. \end{enumerate} \end{proposition} \begin{proof} The proof of item~\ref{it:gdif:deltak} follows from \cite[Proposition 3.1(ii)-(iii)]{ARAGON2019} together with the fact that $\nu _{k}\geq 0$. Finally, the proof of item~\ref{it:gdif:ineq} follows from Proposition~\ref{pr:ffr}\,\ref{it:ffr:ineq} and item~\ref{it:gdif:deltak}. \end{proof} In the sequel, we will establish a positive lower bound for the step-size $\lambda _{k}>0$ defined in Step~3 of Algorithm~\ref{Alg:ASSPM} when $g$ satisfies \ref{it:gdiff}. Before proving such a result, we will obtain a result that generalizes Lemma~\ref{eq:IneqLip} to DC functions. In fact, instead of assuming that the whole function $\phi=g-h$ has a Lipschitz continuous gradient, we assume that only the first DC component $g$ has such a property. The statement is as follows. \begin{lemma}\label{le:LipcLeDC} Let $\phi:\mathbb{R}^{n}\to \mathbb{R}$ be given by $\phi(x)=g(x)-h(x)$, where $g$ satisfies \ref{it:gdiff} and $h$ is convex.
Then, for all $x, d\in \mathbb{R}^n$ and $\lambda \in \mathbb{R}$, there holds \begin{equation*} \phi (x+ \lambda d) \leq \phi (x) +\lambda \left\langle \nabla g(x)-w, d \right\rangle + \frac{L}{2} \lambda^2 \|d\|^2, \qquad \forall w \in \partial h(x). \end{equation*} Moreover, if $h$ is strongly convex with modulus $\sigma>0$, then \begin{equation*} \phi (x+ \lambda d) \leq \phi (x) +\lambda \left\langle \nabla g(x)-w, d \right\rangle + \frac{(L-\sigma)}{2} \lambda^2 \|d\|^2, \qquad \forall w \in \partial h(x). \end{equation*} \end{lemma} \begin{proof} Let $x\in \mathbb{R}^{n}$ and an arbitrary $w \in \partial h(x)$. Define the function $\psi :\mathbb{R}^{n}\to \mathbb{R}$ by $\psi (z)=g(z)-\langle w,z\rangle$. Thus, we have $\nabla \psi(z)= \nabla g(z)-w$ and, since $\nabla g$ is Lipschitz continuous with constant $L$ we obtain that $\nabla \psi $ is also Lipschitz continuous with constant $L$. Given $d\in \mathbb{R}^n$ and $\lambda \in \mathbb{R}$, by using Lemma \ref{eq:IneqLip} with $\phi=\psi$, we obtain that $\psi(x+\lambda d)\leq \psi(x) + \lambda \langle \nabla g(x)-w, d \rangle + L\lambda^2\|d\|^2/2$. Since $\psi(z)=g(z)-\langle w,z\rangle$, the last inequality is equivalent to \begin{align}\label{eq:dsclemmaDC} g(x+\lambda d) \leq g(x)+\lambda \langle w, d \rangle + \lambda \langle \nabla g(x)-w,d \rangle + \frac{L}{2}\lambda^2\|d\|^{2}. \end{align} Since $h$ is convex and $w\in \partial h(x)$, we have $\lambda \langle w,d \rangle\leq h(x+\lambda d)-h(x)$. Thus, by using \eqref{eq:dsclemmaDC}, we obtain that $g(x+\lambda d)-h(x+\lambda d) \leq g(x)-h(x) + \lambda \langle \nabla g(x)-w, d \rangle + L\lambda^2\|d\|^{2}/2$. Due to $\phi=g-h$, the last inequality is equivalent to the first assertion. On the other hand, if we assume that $h$ is strongly convex with modulus $\sigma> 0$ and $w\in \partial h(x),$ it follows from item~$(ii)$ of Theorem~\ref{teo2} that $\lambda \langle w,d \rangle\leq h(x+\lambda d)-h(x)-\sigma\lambda^2\|d\|^2/2 $. Hence, the last inequality together \eqref{eq:dsclemmaDC} yield $g(x+\lambda d)-h(x+\lambda d) \leq g(x)-h(x) + \lambda \langle \nabla g(x)-w, d \rangle + (L-\sigma)\lambda^2\|d\|^{2}/2$. Therefore, taking into account that $\phi=g-h$ the proof is concluded. \end{proof} \begin{remark} It is worth to note that in Lemma~\ref{le:LipcLeDC} it is sufficient to assume \ref{it:gdiff}. In this case {\cite[Corollary of Proposition 2.2.1, p. 32]{clarke1983optimization}} ensures that if $g$ is continuously differentiable, then $g$ is locally Lipschitz. Hence, by Theorem~\ref{subdif_DC}, we have $\partial _{c}\phi(x)=\{\nabla g(x)\}-\partial h(x)$, and the desired result follows. We also note that the Lemma~\ref{le:LipcLeDC} generalizes Lemma~\ref{eq:IneqLip}. Indeed, taking $h\equiv 0$ in the first part of Lemma~\ref{le:LipcLeDC}, it becomes Lemma~\ref{eq:IneqLip}. \end{remark} Before stating the next result, we need to define the following useful constant: \begin{equation}\label{eq:thetamu} \lambda _{\min}:=\min \left\lbrace \lambda _{-1},\frac{2\zeta\sigma}{(L+2\rho)}\right\rbrace. \end{equation} \begin{lemma}\label{le:lmin} If $g$ satisfies \ref{it:gdiff}, then $\lambda _{k}\geq \lambda _{\min},$ for all $k\in \mathbb{N}.$ \end{lemma} \begin{proof} We will show by induction on $k$ that $\lambda _{k}\geq \lambda _{\min},$ for all $k\in \mathbb{N}.$ Set $k=0$. If $j_{0}=0$, then $\lambda _{0}=\lambda _{-1}$. Thus, \eqref{eq:thetamu} implies that $\lambda _{0}\geq \lambda _{\min}$. Otherwise, assume that $j _{0}>0$. 
Since $\lambda _{0}=\zeta^{j_0}{\lambda _{-1}}$ we conclude from \eqref{eq:jku} that \begin{equation}\label{eq:lmin10} \phi \left(y^{0} + \frac{\lambda _{0}}{\zeta}d^{0} \right)- \phi (y^{0})> - \rho \frac{\lambda_{0}^{2}}{\zeta^{2}}\|d^{0}\|^{2} + \nu _{0}. \end{equation} On the other hand, using Lemma~\ref{le:LipcLeDC} with $x=y^{0}$, $\lambda = \lambda_{0}/\zeta$ and $d=d^{0}$ we obtain \begin{equation}\label{eq:lmin19.5} \phi \left( y^{0}+\frac{\lambda_{0}}{\zeta} d^{0} \right)-\phi (y^{0}) \leq \frac{\lambda_{0}}{\zeta} \langle \nabla g(y^{0})-s^{0}, d^{0} \rangle + \frac{L}{2}\frac{\lambda_{0}^{2}}{\zeta^{2}}\| d^{0}\| ^{2},\qquad \forall s^{0}\in \partial h(y^{0}). \end{equation} From \eqref{eq:charyk} we have $\nabla g(y^{0})=w^{0}\in \partial h(x^{0}).$ Thus, from the strong convexity of $h$ and Theorem~\ref{teo2}\,\ref{it:teo2:inner-geq} we have $\langle \nabla g(y^{0})-s^{0}, d^{0} \rangle = \langle w^{0}-s^{0},y^{0}-x^{0} \rangle \leq -\sigma \|d^{0}\|^{2}.$ Therefore, since $\nu _{0}\geq 0,$ we obtain from the last inequality and \eqref{eq:lmin19.5} that \begin{equation}\label{eq:lmin20} \phi \left( y^{0}+\frac{\lambda_{0}}{\zeta} d^{0} \right)-\phi (y^{0}) \leq -\frac{\lambda_{0}\sigma}{\zeta} \| d^{0}\|^{2} + \frac{L}{2}\frac{\lambda_{0}^{2}}{\zeta^{2}}\| d^{0}\| ^{2}+\nu_{0}. \end{equation} Combining \eqref{eq:lmin10} and \eqref{eq:lmin20} we have \begin{equation*} - \frac{\lambda_{0}^{2}\rho }{\zeta^{2}}\|d^{0}\|^{2}< -\frac{ \lambda _{0} \sigma }{ \zeta} \|d^{0}\|^{2} + \frac{L}{2}\frac{\lambda_{0}^{2}}{\zeta^{2}}\| d^{0}\| ^{2} . \end{equation*} Hence, considering that $\rho>0$, $L>0$ and $d^{0}\neq 0$, some algebraic manipulations show that $\lambda_{0} > 2 \zeta \sigma /(L +2\rho)\geq \lambda_{\min}$. Therefore, the inequality holds for $k=0$. Now, we assume that $\lambda _{k-1}\geq \lambda _{\min}$ for some $k>0$. If $j^{k}=0$, then $\lambda _{k}=\lambda _{k-1}\geq \lambda _{\min}.$ Otherwise, if $j_{k}>0$, then repeating the above argument with $\lambda_0$ replaced by $\lambda_{k-1}$, we obtain that $\lambda_{k} > 2 \zeta \sigma /(L +2\rho)\geq \lambda_{\min},$ which completes the proof. \end{proof} \begin{corollary}\label{coro:lmin} Assume that the function $g$ satisfies conditions \ref{it:gdiff}. Then $(\sigma + \rho \lambda _{\min}^{2})\|d^{k}\|^{2}\leq \phi (x^{k})-\phi (x^{k+1})+\nu _{k}$, for all $k\in \mathbb{N}$. \end{corollary} \begin{proof} It follows from Proposition~\ref{prop:gdif}\,\ref{it:gdif:ineq} and Lemma~\ref{le:lmin}. \end{proof} \subsection{Iteration complexity bounds} The aim of this section is to present some iteration-complexity bounds for the sequence $(x^{k}) _{k\in \mathbb{N}}$ generated by Algorithm~\ref{Alg:ASSPM} in the case that $g$ is differentiable. \begin{lemma} \label{eq:nfeas} Suppose that $g$ satisfies \ref{it:gdiff}. Let $j_{k}\in \mathbb{N}$ be the integer defined in \eqref{eq:jku}, and $J_{k}$ be the number of function evaluations $\phi$ in \eqref{eq:jku} after $k\geq 1$ iterations of Algorithm \ref{Alg:ASSPM}. Then, $j_{k}\leq {({\log \lambda_{\min}-\log {\lambda _{-1}} })}/{\log \zeta} $ and \begin{equation*} J_{k}\leq 2(k+1)+ \frac{\log \lambda_{\min} - \log {\lambda _{-1}}}{\log \zeta} . \end{equation*} \end{lemma} \begin{proof} It follows from the {step~4} in Algorithm~\ref{Alg:ASSPM} together with Lemma~\ref{le:lmin} that $0< \lambda_{\min}\leq \lambda _{k}= \zeta ^{j_{k}}{\lambda _{k-1}}\leq \lambda _{-1}$, for all $k\in \mathbb{N}$. 
Thus, taking logarithms in the last inequalities, we obtain that $\log \lambda_{\min}\leq\log \lambda _{k}=j_{k} \log \zeta + \log {\lambda _{k-1}}\leq \log {\lambda _{-1}}$, for all $k\in \mathbb{N}$. Hence, taking into account that $\zeta \in (0,1)$ and $ 0<\lambda _{k-1}\leq \lambda _{-1}$, we have \begin{equation*} j_{k}= \frac{\log \lambda _{k}-\log {\lambda _{k-1}}}{\log \zeta} \leq \frac{\log \lambda_{\min}-\log {\lambda _{k-1}}}{ \log \zeta } \leq \frac{\log \lambda_{\min}-\log {\lambda _{-1}}}{ \log \zeta }, \quad \forall k\in \mathbb{N}. \end{equation*} This proves the first inequality. To prove the second assertion, we sum the identity $j_{\ell}=(\log \lambda_{\ell}-\log \lambda _{\ell -1})/\log \zeta$ from $\ell=0$ to $\ell=k$ and obtain \begin{align*} \sum _{\ell=0}^{k}j_{\ell} = \sum _{\ell =0}^{k} \frac{\log \lambda_{\ell}-\log {\lambda _{\ell -1}} }{\log \zeta} =\frac{\log \lambda_{k} - \log {\lambda _{-1}}}{\log \zeta} \leq \frac{\log \lambda_{\min} - \log {\lambda _{-1}}}{\log \zeta}. \end{align*} On the other hand, the definition of $J_{k}$ implies that $J_{k}= \sum _{\ell=0}^{k}(j_{\ell}+2)=2(k+1)+\sum _{\ell=0}^{k}j_{\ell}$. Therefore, by using the last inequality the desired inequality follows. \end{proof} The next results establish iteration-complexity bounds when $(\nu _{k})_{k\in \mathbb{N}}$ is summable. \begin{theorem}\label{th:comp_A2} Suppose that $(\nu_k)_{k\in\mathbb{N}}$ is chosen according to strategy \ref{it:nusummable} and $g$ satisfies \ref{it:gdiff}. For each $N\in\mathbb{N},$ we have \begin{equation}\label{eq:comp_A2} \min \left\{\| d^{k}\|: ~k=0,1,\cdots,N-1\right\}\leq {\frac{ \sqrt{\phi(x^{0})-\phi^{*}+\sum _{k=0}^{\infty}\nu _{k}}}{ \sqrt{ \sigma + \rho \lambda _{\min}^{2} }}}\frac{1}{ \sqrt{N}}. \end{equation} Consequently, for a given $\epsilon>0$, if $ N\geq { \big(\phi (x^{0})-\phi ^{*}+\sum _{k=0}^{\infty}\nu _{k} \big)}/{((\sigma + \rho \lambda _{\min}^{2}) \epsilon ^{2})}$, then $\min \left\{\| d^{k}\|: ~k=0,1,\cdots,N-1\right\}\leq \epsilon.$ \end{theorem} \begin{proof} Since $\phi^{*}:=\inf _{x\in \mathbb{R}^{n}} \phi(x)\leq \phi (x^{k})$ for all $k\in\mathbb{N}$, from Corollary~\ref{coro:lmin}, we obtain \begin{equation*} (\sigma + \rho \lambda _{\min}^{2})\sum _{k=0}^{N-1}\|d^{k}\|^{2} \leq \phi(x^{0})-\phi(x^{N})+\sum _{k=0}^{N-1}\nu _{k} \leq \phi(x^{0})-\phi^{*}+\sum _{k=0}^{\infty}\nu _{k}. \end{equation*} Thus, \begin{equation*} N\min \{\| d^{k}\| ^{2}: ~k=0,1,\cdots,N-1\} \leq \frac{ \phi(x^{0})-\phi^{*}+\sum_{k=0}^{\infty}\nu _{k}}{ \sigma +\rho\lambda_{\min}^{2}}, \end{equation*} and \eqref{eq:comp_A2} follows. The second statement is an immediate consequence of the first one. \end{proof} \begin{theorem} Suppose that $(\nu_k)_{k\in\mathbb{N}}$ is chosen according to strategy \ref{it:nusummable} and $g$ satisfies \ref{it:gdiff}. For a given $\epsilon>0$, the number of evaluations of $\phi$ in Algorithm \ref{Alg:ASSPM} needed to compute $d^k$ such that $\| d^{k}\|\leq \epsilon$ is at most \begin{equation*} 2\Big(\frac{\phi (x^{0})-\phi ^{*}+\sum _{k=0}^{\infty}\nu _{k}}{(\sigma +\rho\lambda_{\min}^{2}) \epsilon ^{2}}+1\Big)+ \frac{\log \lambda_{\min} - \log {\lambda _{-1}}}{\log \zeta}. \end{equation*} \end{theorem} \begin{proof} The proof follows upon combining Lemma~\ref{eq:nfeas} with Theorem~\ref{th:comp_A2}. \end{proof} \begin{theorem}\label{th:comp_A3_dif} Suppose that \ref{it:nubound} holds and $g$ satisfies \ref{it:gdiff}. Let $0<\varsigma <1$ and $k_{0}\in \mathbb{N} $ be such that $\nu _{k}\leq \varsigma (\sigma +\rho\lambda_{\min}^{2})\|d^{k}\|^{2}$, for all $k\geq k_{0}$.
Then, for each $N\in\mathbb{N}$ such that $N >k_0$, there holds \begin{equation*} \min \{\|d^{k}\| : k=0,1,\cdots,N-1\} \leq {\frac{ \sqrt{\phi(x^{0})-\phi^{*}+\sum _{k=0}^{k_{0}-1}\nu _{k}}}{ \sqrt{ (1-\varsigma)(\sigma +\rho\lambda_{\min}^{2}) }}}\frac{1}{ \sqrt{N}}. \end{equation*} Consequently, for a given $\epsilon>0$ and $k_{0}\in \mathbb{N}$ such that $\nu _{k}\leq \varsigma (\sigma +\rho\lambda_{\min}^{2})\|d^{k}\|^{2}$ for all $k\geq k_{0}$, if $N\geq \max\{k_0, ({\phi (x^{0})-\phi ^{*}+\sum _{k=0}^{k_{0}-1}\nu _{k}})/({(1-\varsigma )(\sigma +\rho\lambda_{\min}^{2}) \epsilon ^{2}})\}$, then the following inequality holds $\min \left\{\| d^{k}\|: ~k=0,1,\cdots,N-1\right\}\leq \epsilon.$ \end{theorem} \begin{proof} Let $\varsigma \in (0,1)$ and $k_{0}\in \mathbb{N} $ such that $\nu _{k}\leq \varsigma (\sigma +\rho\lambda_{\min}^{2})\|d^{k}\|^{2}$, for all $k\geq k_{0}$. It follows from Corollary~\ref{coro:lmin} that \begin{align*} (\sigma +\rho\lambda_{\min}^{2})\|d^{k}\|^{2}\leq \phi (x^{k})-\phi (x^{k+1})+\nu _{k}, \qquad k=0,1,\cdots, N-1. \end{align*} Summing the last inequality from $k=0$ to $k=N-1$ and using assumption \ref{it:phistarinf}, we have \begin{equation*} (\sigma +\rho\lambda_{\min}^{2})\sum _{k=0}^{N-1}\|d^{k}\|^{2} \leq \phi (x^{0})-\phi ^{*}+\sum _{k=0}^{k_{0}-1} \nu _{k}+\sum _{k=k_{0}}^{N-1} \nu _{k}. \end{equation*} Hence, due to $\nu _{k}\leq \varsigma (\sigma +\rho\lambda_{\min}^{2})\|d^{k}\|^{2}$, for all $k\geq k_{0}$, the last inequality becomes \begin{equation*} (\sigma +\rho\lambda_{\min}^{2})\sum _{k=0}^{N-1}\|d^{k}\|^{2} \leq \phi (x^{0})-\phi ^{*}+\sum _{k=0}^{k_{0}-1} \nu _{k}+\varsigma (\sigma +\rho\lambda_{\min}^{2})\sum _{k=0}^{N-1} \|d^{k}\|^{2}, \end{equation*} which is equivalent to $(1-\varsigma)(\sigma +\rho\lambda_{\min}^{2}) \sum _{k=0}^{N-1} \|d^{k}\|^{2} \leq \phi (x^{0})-\phi ^{*}+\sum _{k=0}^{k_{0}-1} \nu _{k}$. Thus $ N\min \{\| d^{k}\| ^{2}: ~k=0,1,\cdots,N-1\}\leq (\phi(x^{0})-\phi^{*}+\sum _{k=0}^{k_0-1}\nu _{k})/((1-\varsigma){(\sigma +\rho\lambda_{\min}^{2})})$, and the first inequality follows. The second statement follows from the first one. \end{proof} \begin{remark} For each one of the sequences $(\nu_{k})_{k\in \mathbb{N}}$ that appear in Example~\ref{eq:omega}, we already know the value of $k_0$ satisfying Theorem~\ref{th:comp_A3_dif}, see Remark~\ref{rmk:ExemploA3}. \end{remark} \begin{theorem} Suppose that $(\nu_k)_{k\in\mathbb{N}}$ is chosen according to strategy \ref{it:nubound} and $g$ satisfies \ref{it:gdiff}. Let $0<\varsigma <1$ and $k_{0}\in \mathbb{N} $ such that $\nu _{k}\leq \varsigma (\sigma +\rho\lambda_{\min}^{2})\|d^{k}\|^{2}$, for all $k\geq k_{0}$. Then, the number of evaluations of $\phi$ in Algorithm \ref{Alg:ASSPM} to compute $d^k$ such that $\| d^{k}\|\leq \epsilon$ is at most \begin{equation*} 2\Big(\frac{\phi (x^{0})-\phi ^{*}+\sum _{k=0}^{k_{0}-1}\nu _{k}}{(1-\varsigma)(\sigma +\rho\lambda_{\min}^{2}) \epsilon ^{2}}+1\Big) + \frac{\log \lambda_{\min} - \log {\lambda _{-1}}}{\log \zeta}. \end{equation*} \end{theorem} \begin{proof} The proof follows combining Lemma~\ref{eq:nfeas} with Theorem~\ref{th:comp_A3_dif}. \end{proof} \subsection{Full convergence under the Kurdyka-\L{}ojasiewicz property} The aim of this section is to establish the full convergence of the sequence $(x^{k}) _{k\in \mathbb{N}}$ generated by Algorithm~\ref{Alg:ASSPM} under the assumption that $\phi$ satisfies the Kurdyka-\L{}ojasiewicz property (in short K\L{} property) at a cluster point $x^{*}$ of $(x^{k})_{k\in \mathbb{N}}$.
Before, let us recall the definition of K\L{} property; see for instance \cite{AttouchSoubeyran2010, AttouchSvaiter2013} and \cite{ARAGON2019,CruzNetoEtAl2018} for this concept in the DC context. \begin{definition} Let $C^{1}[(0,+\infty)]$ be the set of all continuously differentiable functions defined in $(0,+\infty)$, $f\colon\mathbb{R}^{n}\to \mathbb{R}$ be a locally Lipschitz function and $ \partial _{c} f(\cdot)$ be the Clarke's subdifferential of $f$. The function $f$ is said to have the Kurdyka-\L{}ojasiewicz property at $x^{*}$ if there exist $\eta\in (0,+\infty]$, a neighborhood $U$ of $x^{*}$ and a continuous concave function $\gamma : [0,\eta)\to \mathbb{R}_{+}$ (called desingularizing function) such that $\gamma (0)=0,\; \gamma \in C^{1}[(0,+\infty)]$ and $\gamma'(t)>0$ for all $t\in (0,\eta)$. In addition, the function $f$ satisfies $\gamma '(f (x)-f (x^{*})) \mbox{dist}(0, \partial _{c} f(x))\geq 1$, for all $x\in U\cap \{x\in\mathbb{R}^{n}\;|\; f (x^{*})<f (x)<f (x^{*})+\eta \}$, where $\mbox{dist}(0, \partial _{c} f(x)):=\inf\{\|s\|: ~s\in \partial _{c} f(x)\}.$ \end{definition} The technique in the proof of next theorem is similar to the one used in seminal works \cite{AttouchSoubeyran2010, AttouchSvaiter2013}. Since we used strategy \ref{it:nubound}, we decide to include the proof here for sake of completeness. \begin{theorem}\label{th:conv_kl} Suppose that $(\nu_k)_{k\in\mathbb{N}}$ is chosen according to strategy \ref{it:nubound}. Assume that $(x^{k}) _{k\in \mathbb{N}}$ has a cluster point $x^{*}$, $\nabla g$ is locally Lipschitz continuous around $x^{*}$, and that $\phi$ satisfies the K-\L{} property at $x^{*}$. Then $(x^{k}) _{k\in \mathbb{N}}$ converges to $x^{*}$, which is a critical point of $\phi$. \end{theorem} \begin{proof} Since $(\nu _{k}) _{k\in \mathbb{N}}$ satisfies \ref{it:nubound}, there exists $k_{0}\in \mathbb{N}$ such that $ \nu _{k}\leq (\sigma / 2) \|d^{k}\|^{2}$, for all $k\geq k_{0}$. Hence, we have \begin{equation*} 0< (\sigma / 2) \|d^{k}\|^{2} = \sigma \|d^{k}\|^{2} - (\sigma / 2) \|d^{k}\|^{2} \leq \sigma \|d^{k}\|^{2} -\nu_{k}, \qquad \forall k\geq k_{0}. \end{equation*} Combining the last inequality with Proposition~\ref{prop:gdif}\,\ref{it:gdif:ineq} we obtain \begin{equation}\label{eq:monoto} 0 < (\sigma / 2) \|d^{k}\|^{2} \leq (\sigma +\rho \lambda _{k}^{2}) \|d^{k}\|^{2}-\nu _{k} \leq \phi (x^{k})-\phi (x^{k+1}), \qquad \forall k \geq k_{0}. \end{equation} Since $x^{*}$ is a cluster point of $(x^{k}) _{k\in \mathbb{N}},$ there exists a subsequence $(x^{k_{\ell}}) _{\ell \in \mathbb{N}}$ of $(x^{k}) _{k\in \mathbb{N}}$ such that $\lim _{\ell \to\ \infty}x^{k_{\ell}}=x^{*}$, which combined with \eqref{eq:monoto} implies that $\lim _{k\to \infty }\phi (x^{k})=\phi (x^{*})$. If there exists an integer $k\geq k_{0}$ such that $\phi (x^{k})=\phi (x^{*}),$ then \eqref{eq:monoto} implies that $d^{k}=0.$ In this case, Algorithm~\ref{Alg:ASSPM} stops after a finite number of steps and the proof is concluded. Now, suppose that $\phi (x^{k})>\phi (x^{*})$ for all $k\geq k_{0}.$ Since $\nabla g$ is locally Lipschitz around $x^{*}$, there exist ${\hat \delta}>0$ and $L>0$ such that \begin{equation}\label{gLip} \|\nabla g(x)-\nabla g(y) \| \leq L \|x-y \|, \qquad \forall x,y \in B(x^{*},{\hat \delta}). 
\end{equation} Since $\phi$ satisfies the Kurdyka-\L{}ojasiewicz inequality at $x^{*}$, there exist $\eta \in (0, +\infty]$, a neighborhood $U$ of $x^{*}$, and a continuous and concave function $\gamma: [0,\eta) \to \mathbb{R} _{+} $ such that for every $x \in U$ with $\phi (x^{*}) < \phi(x) < \phi(x^{*}) + \eta$, we have \begin{equation}\label{eq:kl} \gamma '(\phi (x)-\phi (x^{*}))\mbox{dist}(0,\partial _{c}\phi (x))\geq 1. \end{equation} Take ${\tilde \delta}>0$ such that $B(x^{*},{\tilde \delta})\subset U$ and set $\delta :=\frac{1}{2}\min \{{\hat \delta},{\tilde \delta}\}>0$. Considering that $\lim _{k\to \infty }\phi (x^{k})=\phi (x^{*})$, it follows from \eqref{eq:monoto} that $\lim _{k\to \infty}d^{k}=0$. Then, there exists $k_{1}\in \mathbb{N}$ such that $\|y^{k}-x^{k}\|=\|d^k\|\leq \delta $ for all $k\geq k_{1}.$ Thus, for all $k\geq k_{1}$ such that $x^{k}\in B(x^{*},{\delta})$ we obtain that $ \|y^{k}-x^{*}\|\leq \|y^{k}-x^{k}\|+\|x^{k}-x^{*}\|\leq 2\delta \leq {\hat \delta}$. Hence, for all $k\geq k_{1}$ such that $x^{k}\in B(x^{*},{\delta})$ we obtain $x^k, y^{k}\in B(x^{*},{\hat \delta})$, and using \eqref{gLip} we conclude that $\|\nabla g(x^k)-\nabla g(y^k) \| \leq L \|x^k-y^k \|$. Hence, using that $\nabla g(x^{k})-w^{k}\in \partial _{c}\phi (x^{k})$, $w^{k}=\nabla g(y^{k})$ and $x^{k+1}-x^k=(1+\lambda_k) (y^k-x^k)$ we have \begin{equation} \label{eq:fckl} \mbox{dist}(0,\partial _{c}\phi (x^{k}))\leq \|\nabla g(x^{k})-w^{k}\|= \|\nabla g(x^{k})-\nabla g(y^{k})\|\leq \frac{L}{1+\lambda _{k}} \|x^{k+1}-x^{k}\|, \end{equation} for all $k\geq k_{1}$ such that $x^{k}\in B(x^{*},{\delta})$. To simplify the notation we set \begin{equation} \label{eq:Kkl} K:= \frac{2L(1+\lambda _{-1})}{\sigma}>0. \end{equation} Since $\lim _{\ell \to \infty}x^{k_{\ell}} = x^{*}$, $\lim _{k\to \infty} \phi(x^{k}) = \phi (x^{*})$ and $\phi( x^{k} ) > \phi (x^{*})$, for all $k\geq k_{0}$, and $\phi$ is continuous, we can take an index $N\geq \max\{ k_{0},k_{1}\}$ such that \begin{equation}\label{eq:xN} x^{N}\in B(x^{*},\delta)\subset U, \qquad \phi (x^{*})<\phi (x^{N})<\phi (x^{*})+\eta. \end{equation} Furthermore, due to $\gamma (0)=0$, we can also assume that $N\geq \max\{ k_{0},k_{1}\}$ satisfies \begin{equation}\label{eq:xN2} \|x^{N}-x^{*}\| + K \gamma (\phi (x^{N})-\phi (x^{*}))<\delta. \end{equation} On the other hand, for $k \geq N$ such that $x^{k} \in B(x^{*}, \delta)\subset U$, \eqref{eq:kl} and \eqref{eq:fckl} yield \begin{equation*} \gamma '(\phi (x^{k})-\phi (x^{*}))\geq \frac{1}{\mbox{dist}(0,\partial _{c}\phi (x^{k}))}\geq \frac{1+\lambda_{k} }{L\|x^{k}-x^{k+1}\|}. \end{equation*} Thus, since $\gamma$ is concave, combining the last inequality with \eqref{eq:monoto} we have \begin{align*} \gamma (\phi(x^{k})-\phi (x^{*})) -\gamma (\phi(x^{k+1})-\phi (x^{*})) &\geq \gamma '(\phi(x^{k})-\phi (x^{*}))(\phi (x^{k})-\phi (x^{k+1})) \\ & \geq \frac{ 1+\lambda _{k}}{L\|x^{k+1}-x^{k}\|} \frac{ \sigma \| d^{k}\|^{2}}{2}. \end{align*} Hence, using that $x^{k+1}-x^k=(1+\lambda_k) d^k$, $0<\lambda_k\leq \lambda_{-1}$ and \eqref{eq:Kkl}, we obtain \begin{equation}\label{eq:sum1} \|x^{k+1}-x^{k}\| \leq K\left( \gamma (\phi(x^{k})-\phi (x^{*})) -\gamma (\phi(x^{k+1})-\phi (x^{*})) \right), \end{equation} for all $k \geq N$ such that $x^{k}\in B(x^{*},\delta)$. In the next step we will prove by induction that $x^{k}\in B(x^{*},\delta)$ for all $k\geq N$. For $k=N,$ the statement is valid due to the inclusion in \eqref{eq:xN}.
Now, suppose that $x^{k}\in B(x^{*},\delta)$ for all $k=N+1, \cdots,N+p-1$ for some $p\geq 2$. Since $\phi( x^{k} ) > \phi (x^{*})$ for all $k\geq k_{0}$, from \eqref{eq:monoto}, \eqref{eq:xN} and $N\geq \max\{ k_{0},k_{1}\}$ we conclude that $\phi(x^{*})<\phi (x^{k+1})<\phi (x^{k})<\phi(x^{*})+\eta$, for all $k=N, \cdots,N+p-1$. We proceed to prove that $x^{N+p}\in B(x^{*},\delta)$. First, using the triangle inequality, the induction hypothesis and \eqref{eq:sum1}, we have \begin{align*} \| x^{N+p}-x^{*}\| &\leq \| x^{N}-x^{*} \| + \sum _{i=1}^{p}\| x^{N+i}-x^{N+i-1}\|\\ &\leq \| x^{N}-x^{*} \| + K \sum _{i=1}^{p}\left[ \gamma (\phi(x^{N+i-1})- \phi (x^{*})) -\gamma (\phi(x^{N+i})-\phi (x^{*})) \right]. \end{align*} Telescoping the last sum and taking into account that $ \gamma(\phi(x^{N+p})-\phi (x^{*}))\geq 0$ and \eqref{eq:xN2}, we obtain \begin{align*} \| x^{N+p}-x^{*}\| &\leq \| x^{N}-x^{*} \| + K \gamma(\phi(x^{N})-\phi (x^{*}))- K \gamma(\phi(x^{N+p})-\phi (x^{*}))\\ &\leq \| x^{N}-x^{*} \| + K \gamma\big(\phi(x^{N})-\phi (x^{*})\big) <\delta, \end{align*} which concludes the induction. Finally, considering that $x^{k}\in B(x^{*},\delta)$ for all $k\geq N$, a similar argument to the one used above together with \eqref{eq:xN2} and \eqref{eq:sum1} yields \begin{align*} \sum _{k=N}^{N+p} \|x^{k+1}-x^{k}\| & \leq \sum _{k=N}^{N+p} K\left( \gamma (\phi(x^{k})-\phi (x^{*})) -\gamma (\phi(x^{k+1})-\phi (x^{*})) \right)\\ & = K \gamma(\phi(x^{N})-\phi (x^{*}))- K \gamma(\phi(x^{N+p+1})-\phi (x^{*}))\\ &\leq K \gamma\big(\phi(x^{N})-\phi (x^{*})\big) <\delta. \end{align*} Taking the limit in the last inequality as $p\to \infty$ we have $\sum _{k=N}^{\infty }\| x^{k}-x^{k+1} \| <\infty,$ which implies that $ (x^k)_{k\in\mathbb{N}}$ is a Cauchy sequence. Hence, since $x^{*}$ is a cluster point of $ (x^k)_{k\in\mathbb{N}}$, the whole sequence $ (x^k)_{k\in\mathbb{N}}$ converges to $x^{*}$. Therefore, by using Corollary~\ref{coro:A3}, the proof is concluded. \end{proof} \begin{remark} If $\nu_{k}\equiv 0$, then Algorithm~\ref{Alg:ASSPM} becomes the BDCA given in \cite{ARAGON2019}, and consequently Theorem~\ref{th:conv_kl} reduces to \cite[Theorem 4.3]{ARAGON2019}. \end{remark} \section{Numerical experiments} \label{Sec6} In this section, we present some numerical experiments to verify the practical efficiency of the proposed non-monotone BDCA. The experiments were coded in MATLAB R2020b and run on a notebook with 8 GB of RAM and an Intel Core i7 processor. To evaluate its performance, we run it on some academic test functions from the DC literature (see \cite{ARAGON2019,CruzNetoLopesSantosSouza2019,Joki2017}). The aim of this section is to show that the non-monotone BDCA performs as well as its monotone version proposed in \cite{ARAGON2019} when compared to the classical DC algorithm (DCA \cite{Pham1986}). Additionally, we also compare its performance with the proximal point method for DC functions (PPMDC~\cite{Moudafi2006, SOUZA3016, SunSampaio2003}). All the methods require solving a minimization problem at each iteration (referred to here as the ``subproblem''). We solve the subproblems of all methods using \texttt{fminsearch}, a built-in MATLAB solver, with the options \texttt{optimset('TolX',1e-7,'TolFun',1e-7)}. The stopping rule of the outer loop in all methods is $\|x^{k+1}-x^k\|< 10^{-7}$. In each run, all methods start from the same random initial point $x^0 \in [-10,10]^n \subset \mathbb{R}^n$. In PPMDC, we take the proximal parameter in the regularization term to be $\alpha_k=0.01$, for all $k\in\mathbb{N}$.
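For concreteness, the snippet below sketches how one convex subproblem can be handed to \texttt{fminsearch} with the tolerances above. It is only an illustration of the interface, not the released implementation: the handles \texttt{g} and \texttt{subgrad\_h}, as well as the variable names, are ours, and the model being minimized is the usual DCA/BDCA subproblem whose minimizer $y^{k}$ satisfies $\nabla g(y^{k})=w^{k}\in\partial h(x^{k})$.
\begin{verbatim}
% Sketch of one convex subproblem solved with fminsearch (illustration only).
% g         : handle evaluating the first DC component
% subgrad_h : handle returning some w in the subdifferential of h at xk
opts = optimset('TolX',1e-7,'TolFun',1e-7);
wk   = subgrad_h(xk);              % w^k in the subdifferential of h at x^k
sub  = @(y) g(y) - wk'*(y - xk);   % convex model; its minimizer y^k solves grad g(y^k) = w^k
yk   = fminsearch(sub, xk, opts);  % warm start at the current iterate x^k
dk   = yk - xk;                    % direction d^k used in the line search
\end{verbatim}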
In nmBDCA, for all problems, we take the same configuration of parameters $\rho=\zeta =0.5$ and $ \nu _{k}:=\omega\|d^{k}\|^{2}/(k+1)$ as suggested in Example~\ref{eq:omega} with $\omega=0.01$. The initial values of $\lambda _{-1}$ are taken as follows: $\lambda _{-1}=3.9$, $\lambda _{-1}=16.0$, $\lambda _{-1}=1.5$, $\lambda _{-1}=5.4$, $\lambda _{-1}=2.8$, $\lambda _{-1}=30.0$ and $\lambda _{-1}=6.6$ for Problems~\ref{prob7}--\ref{prob5}, respectively. We make the MATLAB implementations of the solvers nmBDCA, DCA and PPMDC, as well as the list of initial points and the test problems, freely available at the link \href{https://sites.google.com/ufpi.edu.br/souzajco/publications}{https://sites.google.com/ufpi.edu.br/souzajco/publications}. To carry out the comparison, we run each method 100 times, starting from the same random initial points. We show the results of nmBDCA, DCA and PPMDC in Table~\ref{tabnmBDCA}, where $n$ denotes the number of variables of the problem, the columns \texttt{min. k} (resp. \texttt{min. time}), \texttt{max. k} (resp. \texttt{max. time}) and \texttt{med. k} (resp. \texttt{med. time}) present the minimum, maximum and median number of iterations (resp. CPU time in seconds) until the stopping rule is satisfied, $\phi(x^k)$ denotes the best value of the objective function over all the solutions found, and \texttt{\% opt. value} presents the rate at which the method approximately found the best known solution. The values of $|\phi(x^k)-\phi^*|$ and $\|x^{k+1}-x^k\|$ for one run of each method are presented in Figures~\ref{fig8}--\ref{fig6}. In these figures, we can clearly see that, although nmBDCA sometimes does not decrease the objective as much as DCA (when the red line lies above the blue line in the $|\phi(x^k)-\phi^*|$ plots), nmBDCA still outperforms DCA and PPMDC. We run all the methods from two different starting points: for one of them, all the methods find the global solution, while for the other, DCA and PPMDC stop at a critical point whereas nmBDCA keeps running until it finds the global solution (except for Problem~\ref{prob1}, where all the methods always find the global minimum). \begin{problem}\label{prob7}\cite[Problem 4.1]{CruzNetoLopesSantosSouza2019} Let $\phi:\mathbb{R}^2 \to \mathbb{R}$ be a DC function with DC components $$ g(x)=\sin\left(\sqrt{|3x_1+|x_1-x_2| +2x_2|}\right)+5(x_1^2+x_2^2)$$ and $$h(x)=5(x_1^2+x_2^2).$$ The optimum value is $\phi^*=-1$. \end{problem} \begin{problem}\label{prob6} Example~\ref{ExampleAlgorithm} revisited (\cite[Example 3.4]{ARAGON2019}). Let $\phi:\mathbb{R}^2 \to \mathbb{R}$ be a DC function with DC components $$ g(x)=-\frac{5}{2}x_1+x_1^2+x_2^2+|x_1|+|x_2|$$ and $$h(x)=\frac{1}{2}(x_1^2+x_2^2).$$ The minimum point of $\phi$ is $x^*=(1.5, 0)^{\top}$ and the optimum value is $\phi^*=-1.125$. \end{problem} \begin{problem}\label{prob1}\cite[Problem 1]{Joki2017} Let $\phi:\mathbb{R}^2 \to \mathbb{R}$ be a DC function with DC components $$g(x)=\max\{f_{1,1}(x),f_{1,2}(x),f_{1,3}(x)\}+f_{2,1}(x)+f_{2,2}(x)+f_{2,3}(x)$$ and $$h(x)=\max\{f_{2,1}(x)+f_{2,2}(x),f_{2,2}(x)+f_{2,3}(x),f_{2,1}(x)+f_{2,3}(x)\},$$ where $f_{1,1}(x)=x_1^4+x_2^2$, $f_{1,2}(x)=(2-x_1)^2+(2-x_2)^2$, $f_{1,3}(x)=2e^{-x_1+x_2}$, $f_{2,1}(x)=x_1^2-2x_1+x_2^2-4x_2+4$, $f_{2,2}(x)=2x_1^2-5x_1+x_2^2-2x_2+4$ and $f_{2,3}(x)=x_1^2+2x_2^2-4x_2+1$. The minimum point of $\phi$ is $x^*=(1,1)^{\top}$ and the optimum value is $\phi^*=2$.
\end{problem} \begin{problem}\label{prob2}\cite[Problem 2]{Joki2017} Let $\phi:\mathbb{R}^2 \to \mathbb{R}$ be a DC function with DC components $$g(x)=|x_1-1|+200\max\{0,|x_1|-x_2\}$$ and $$h(x)=100(|x_1|-x_2).$$ The minimum point of $\phi$ is $x^*=(1,1)^{\top}$ and the optimum value is $\phi^*=0$. \end{problem} \begin{problem}\label{prob3}\cite[Problem 3]{Joki2017} Let $\phi:\mathbb{R}^4 \to \mathbb{R}$ be a DC function with DC components \begin{eqnarray*} g(x)&=& |x_1-1|+200\max\{0,|x_1| - x_2\}+180\max\{0,|x_3| - x_4\}+|x_3-1|\\ &&+10.1(|x_2-1|+|x_4-1|)+4.95|x_2+x_4-2| \end{eqnarray*} and $$h(x)=100(|x_1| - x_2)+90(|x_3| - x_4)+4.95|x_2 - x_4|.$$ The minimum point of $\phi$ is $x^*=(1,1,1,1)^{\top}$ and the optimum value is $\phi^*=0$. \end{problem} \begin{problem}\label{prob4}\cite[Problem 7]{Joki2017} Let $\phi:\mathbb{R}^2 \to \mathbb{R}$ be a DC function with DC components \begin{eqnarray*} g(x)&=& |x_1-1|+200\max\{0,|x_1| - x_2\}\\ &&+10\max\{x_1^2+x_2^2+|x_2|,x_1+x_1^2+x_2^2+|x_2|-0.5,|x_1-x_2|+|x_2|-1,x_1+x_1^2+x_2^2\} \end{eqnarray*} and $$h(x)=100(|x_1| - x_2)+10(x_1^2+x_2^2+|x_2|).$$ The minimum point of $\phi$ is $x^*=(0.5,0.5)^{\top}$ and the optimum value is $\phi^*=0.5$. \end{problem} \begin{problem}\label{prob5}\cite[Problem 8]{Joki2017} Let $\phi:\mathbb{R}^3 \to \mathbb{R}$ be a DC function with DC components \begin{eqnarray*} g(x)&=& 9-8x_1-6x_2-4x_3+2|x_1|+2|x_2|+2|x_3|\\ &&+4x_1^2+2x_2^2+2x_3^2 + 10\max\{0,x_1+x_2+2x_3-3,-x_1,-x_2,-x_3\} \end{eqnarray*} and $$h(x)=|x_1 - x_2|+|x_1 - x_3|.$$ The minimum point of $\phi$ is $x^*=(0.75,1.25,0.25)^{\top}$ and the optimum value is $\phi^*=3.5$. \end{problem} As we can see in Table~\ref{tabnmBDCA}, nmBDCA outperforms DCA and PPMDC in the quality of the solution found in all the test problems. In three of the seven test problems, nmBDCA finds the global minimum in all runs, while the other methods do so in only one test problem. In terms of performance (number of iterations and CPU time), nmBDCA also outperforms DCA and PPMDC. This is clear from the rows for Problems~\ref{prob7} and \ref{prob1} in Table~\ref{tabnmBDCA}, where all the methods have the same rate of finding the global solution. In these cases, nmBDCA is 16 times and 3 times more efficient than DCA and PPMDC in terms of the median number of iterations and CPU time for Problems~\ref{prob7} and \ref{prob1}, respectively. In Problems~\ref{prob6} and \ref{prob4}, nmBDCA performs better in terms of the median number of iterations and CPU time compared with DCA and PPMDC. Note that in these problems nmBDCA finds the global solution at rates of $100\%$ and $56\%$, respectively, against $63\%$ and $30\%$ for DCA and PPMDC. In Problems~\ref{prob2}, \ref{prob3} and \ref{prob5}, nmBDCA needs more iterations and CPU time to obtain a solution than DCA and PPMDC. The reason is that, in these problems, DCA and PPMDC stop after a few iterations but find the global solution only at rates of $49\%$, $17\%$ and $18\%$ for DCA and $48\%$, $18\%$ and $18\%$ for PPMDC, while nmBDCA finds the global solution at rates of $100\%$, $31\%$ and $67\%$, respectively. It is worth noting that, in these problems, nmBDCA underperforms the other methods mainly because the sequence $\nu _{k}:=\omega\|d^{k}\|^{2}/(k+1)$ (with $\omega=0.01$) allows a large increase of $\phi(x^{k+1})$ relative to $\phi(y^{k})$ in the first steps.
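To make the role of $\nu_{k}$ in this discussion concrete, the following minimal MATLAB sketch reproduces the non-monotone backtracking of step~4 of Algorithm~\ref{Alg:ASSPM} with an externally supplied tolerance $\nu_{k}$ (for instance $\nu _{k}=\omega\|d^{k}\|^{2}/(k+1)$ as above). The function handle \texttt{phi} and all variable names are ours; the snippet is an illustration under these assumptions rather than the released code.
\begin{verbatim}
% Non-monotone backtracking of step 4 of nmBDCA (illustration only).
% phi : handle for the DC objective; yk = xk + dk is the subproblem solution.
function [lamk, xk1] = nm_linesearch(phi, yk, dk, lam_prev, nuk, rho, zeta)
    lam  = lam_prev;            % first trial step is lambda_{k-1} (j = 0)
    phiy = phi(yk);
    nd2  = norm(dk)^2;
    % accept lam when phi(yk + lam*dk) <= phi(yk) - rho*lam^2*||dk||^2 + nuk
    while phi(yk + lam*dk) > phiy - rho*lam^2*nd2 + nuk
        lam = zeta*lam;         % lambda = zeta^j * lambda_{k-1}
    end
    lamk = lam;
    xk1  = yk + lamk*dk;        % next iterate x^{k+1} = y^k + lambda_k d^k
end
\end{verbatim}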
Our simulations show that if we consider small values of $\omega$ in these problems, then the performance of nmBDCA is quite similar to that of DCA and PPMDC, but with better rates of finding the global minimum. Summing up, our numerical experiments show that nmBDCA performs well compared with DCA and PPMDC, as does its monotone version BDCA. The freedom of possible growth allowed by the parameter $\nu_k$ does not affect the efficiency of the method. This is an important feature, which widens the range of applications of boosted DC algorithms.
\begin{sidewaystable} \captionsetup{position=top} \caption{Summary of the numerical results of nmBDCA, DCA and PPMDC over 100 runs.} \label{tabnmBDCA}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline Problem & $n$ & min. $k$ & max. $k$ & med. $k$ & min. time & max. time & med. time & $\phi(x^k)$ & \% opt. value \\ \hline
\multicolumn{10}{|c|}{nmBDCA} \\ \hline
\ref{prob7} & 2 & 3 & 83 & 46.28 & 0.0026649 & 0.0568138 & 0.030465146 & -0.999999999999859 & 97 \\ \hline
\ref{prob6} & 2 & 7 & 16 & 10.82 & 0.005585 & 0.0136846 & 0.008780246 & -1.125000000000000 & 100 \\ \hline
\ref{prob1} & 2 & 6 & 15 & 9.81 & 0.0082839 & 0.0211974 & 0.012511895 & 2.000000000000004 & 100 \\ \hline
\ref{prob2} & 2 & 3 & 6 & 4.02 & 0.0022413 & 0.0081915 & 0.004182691 & 3.960432204408448e-09 & 100 \\ \hline
\ref{prob3} & 4 & 4 & 13 & 7.28 & 0.0078985 & 0.0395438 & 0.021201167 & 4.348665016973285e-08 & 31 \\ \hline
\ref{prob4} & 2 & 3 & 21 & 8.8 & 0.0022562 & 0.0213988 & 0.009153369 & 0.500000002033778 & 56 \\ \hline
\ref{prob5} & 3 & 3 & 8 & 6.41 & 0.009874 & 0.0205232 & 0.013033188 & 3.499999999999999 & 67 \\ \hline
\multicolumn{10}{|c|}{DCA} \\ \hline
\ref{prob7} & 2 & 2 & 1072 & 749.5599999 & 0.0023714 & 0.8463685 & 0.499823531 & -0.999999999996628 & 97 \\ \hline
\ref{prob6} & 2 & 2 & 27 & 17.19 & 0.0011749 & 0.0150196 & 0.008793059 & -1.125000000000000 & 63 \\ \hline
\ref{prob1} & 2 & 23 & 35 & 30.5599999 & 0.0268213 & 0.0439881 & 0.036820655 & 2.000000000000052 & 100 \\ \hline
\ref{prob2} & 2 & 2 & 5 & 2.15 & 0.0012365 & 0.0040823 & 0.001967687 & 2.274440946692380e-08 & 49 \\ \hline
\ref{prob3} & 4 & 3 & 12 & 6.59 & 0.0068355 & 0.043065 & 0.018513736 & 9.012336862346258e-08 & 17 \\ \hline
\ref{prob4} & 2 & 2 & 359 & 58.4399999 & 0.0013686 & 0.3862458 & 0.062961467 & 0.500000013656637 & 30 \\ \hline
\ref{prob5} & 3 & 2 & 6 & 2.54 & 0.0035983 & 0.0169846 & 0.006560558 & 3.500000000000000 & 18 \\ \hline
\multicolumn{10}{|c|}{PPMDC} \\ \hline
\ref{prob7} & 2 & 2 & 1067 & 751.4299999 & 0.0025088 & 0.8645138 & 0.510410394 & -0.999999999997184 & 97 \\ \hline
\ref{prob6} & 2 & 2 & 27 & 17.51 & 0.0012693 & 0.0180685 & 0.009237261 & -1.125000000000000 & 63 \\ \hline
\ref{prob1} & 2 & 23 & 35 & 30.53 & 0.027967 & 0.0532839 & 0.038431424 & 2.000002000000055 & 100 \\ \hline
\ref{prob2} & 2 & 2 & 4 & 2.09 & 0.0012844 & 0.0036622 & 0.001996839 & 1.526064075108025e-08 & 48 \\ \hline
\ref{prob3} & 4 & 3 & 14 & 6.5 & 0.0064865 & 0.0394513 & 0.019424618 & 6.207930470791823e-08 & 18 \\ \hline
\ref{prob4} & 2 & 2 & 359 & 58.46 & 0.0014268 & 0.4039013 & 0.065365052 & 0.500000011319287 & 30 \\ \hline
\ref{prob5} & 3 & 2 & 29 & 5.3 & 0.0067166 & 0.0536524 & 0.0127099 & 3.500000000000002 & 18 \\ \hline
\end{tabular} \end{sidewaystable}
\begin{figure} \caption{Value of $|\phi(x^k)-\phi^*|$ and $\|x^{k+1}-x^k\|$ per iteration for one run of each method.} \label{fig8} \end{figure}
\begin{figure} \caption{Value of $|\phi(x^k)-\phi^*|$ and $\|x^{k+1}-x^k\|$ per iteration for one run of each method.} \label{fig7} \end{figure}
\begin{figure} \caption{Value of $|\phi(x^k)-\phi^*|$ and $\|x^{k+1}-x^k\|$ per iteration for one run of each method.} \label{fig2} \end{figure}
\begin{figure} \caption{Value of $|\phi(x^k)-\phi^*|$ and $\|x^{k+1}-x^k\|$ per iteration for one run of each method.} \label{fig3} \end{figure}
\begin{figure} \caption{Value of $|\phi(x^k)-\phi^*|$ and $\|x^{k+1}-x^k\|$ per iteration for one run of each method.} \label{fig4} \end{figure}
\begin{figure} \caption{Value of $|\phi(x^k)-\phi^*|$ and $\|x^{k+1}-x^k\|$ per iteration for one run of each method.} \label{fig5} \end{figure}
\begin{figure} \caption{Value of $|\phi(x^k)-\phi^*|$ and $\|x^{k+1}-x^k\|$ per iteration for one run of each method.} \label{fig6} \end{figure}
It is worth mentioning that the freedom in the choice of the parameters $\lambda _{-1}$, $\rho$ and $\zeta$ in the line search of nmBDCA opens the possibility of speeding up the method, especially as done in \cite{ARAGON2019} with a self-adaptive choice of the trial step size $\lambda_k$. It is remarked in \cite{ARAGON2019} that this strategy made BDCA roughly twice as fast in their numerical experiments, when compared with the constant strategy. Likewise, other choices of the trial step size could improve the performance of nmBDCA. Another important question is the computational influence on nmBDCA of the strong convexity modulus $\sigma>0$ of the DC components. It does not appear explicitly in nmBDCA, DCA or PPMDC, but its influence can be seen, for instance, in Proposition~\ref{pr:ffr}~\ref{it:ffr:ineq} and Proposition~\ref{prop15u}~\ref{it:prop15u:decrease}. As mentioned in Remark~\ref{infinitydecomp}, given a DC function $\phi(x)$ with DC decomposition $\phi(x)=g(x) - h(x)$, we can add to both DC components a strongly convex term $({\sigma}/{2})\| x \|^{2}$ to obtain a new DC representation $\phi(x)=(g(x)+\frac{\sigma}{2}\| x \|^{2}) - (h(x)+\frac{\sigma}{2}\| x \|^{2})$. This leads to an open problem whose answer is crucial to a deep understanding of the DC structure: does there exist an ``optimal'' (in some sense) DC decomposition? This problem is intimately related to the notion of more convex and less convex (domination concept) introduced by Moreau; see \cite{Moreau}. This issue has been dealt with for polynomial functions in \cite{FerrerMartinezLegaz2009} and for quadratic functions in \cite{BomzeLocatelli2004}. In order to clarify the importance of this question, let us consider the following concept and a simple example. \begin{definition} We say that $\phi(x)=g(x) - h(x)$ is an undominated DC decomposition for $\phi$ if there is no other DC decomposition $\tilde{g}$ and $\tilde{h}$ for $\phi$ such that $g(x)=\tilde{g}(x)+p(x)$ and $h(x)=\tilde{h}(x)+p(x)$, for some non-constant convex function $p$. \end{definition} The key idea of algorithms for DC functions is to minimize convex bound functions instead of the possibly non-convex DC function. The following simple example shows that the interest in undominated DC decompositions lies in the fact that they allow us to deliver better bounds. \begin{example}\cite[Example 4.85]{LocatelliSchoen2013} Let $\phi(x)=x^3-x^2$ in $X=[0,1]$. Then, $$g_t(x)=x^3+tx^2\quad\mbox{and}\quad h_t(x)=(t+1)x^2,\quad t\geq 0,$$ define an infinite class of DC decompositions of $\phi$ over $X$, all dominated by the decomposition with $t=0$.
Assume that we want to find the convex underestimator of $\phi$ over $X$, i.e., $\psi_{t}(x)=g_t(x) - (t+1)x$, which is obtained by replacing the concave function $-h_t(x)$ by its convex envelope over $X$. Since $\phi(x)-\psi_t(x)=(1+t)x(1-x)$, the maximum distance between $\phi$ and its convex underestimator $\psi_{t}$ is attained at $x=0.5$ and is equal to $$\max_{x\in X}\left[\phi(x)-\psi_t(x)\right]=\frac{1}{4}(1+t).$$ Therefore, the maximum distance is minimized for $t=0$, where $g_t$ and $h_t$ are undominated. \end{example} From a theoretical point of view, $\sigma$ adds more structure to the DC representation. Nevertheless, from a computational point of view, adding $\sigma$ may be a drawback. Next, we run nmBDCA, DCA and PPMDC in order to find a DC decomposition (parametrized by $\sigma$) of Problems~\ref{prob7} and \ref{prob6} which needs fewer iterations and less CPU time until the methods stop (in this sense it is more efficient). The results are presented in Figures~\ref{fig9} and \ref{fig10}. To this end, we consider the following DC components for Problem~\ref{prob7} $$g(x)=\sin\left(\sqrt{|3x_1+|x_1-x_2| +2x_2|}\right)+\sigma(x_1^2+x_2^2) \quad \mbox{and}\quad h(x)=\sigma(x_1^2+x_2^2),$$ and for Problem~\ref{prob6} $$g(x)=-\frac{5}{2}x_1+|x_1|+|x_2|+\sigma(x_1^2+x_2^2) \quad \mbox{and}\quad h(x)=(\sigma-0.5)(x_1^2+x_2^2).$$
\begin{figure} \caption{Median over 100 runs for different values of $\sigma$ in Problem~\ref{prob7}.} \label{fig10} \end{figure}
\begin{figure} \caption{Median over 100 runs for different values of $\sigma$ in Problem~\ref{prob6}.} \label{fig9} \end{figure}
In our numerical experiments, we consider positive integer values of $\sigma$ from $1$ to $20$. In Figure~\ref{fig10}, we can see that $\sigma=5$ provides the best performance for nmBDCA, while $\sigma=1$ is the best choice for DCA and PPMDC in Problem~\ref{prob7}. In Problem~\ref{prob6}, all the methods have the best performance for $\sigma=1$, as we can see in Figure~\ref{fig9}. In both problems, Figures~\ref{fig10} and \ref{fig9} clearly show that the higher the value of $\sigma$, the worse the performance of the methods. However, some questions still arise. Which is the best DC decomposition from a computational point of view and how can it be obtained? Could it be connected with some suitable theoretical concept? We refrain from discussing these questions in our general context of non-smooth DC functions because we believe they deserve to be studied in depth and perhaps cannot be completely answered unless specific classes of functions are considered. To illustrate how difficult these questions are, we refer to \cite{BomzeLocatelli2004}, where a quadratic problem admitting an infinite number of undominated DC decompositions is presented. \section{Conclusions} We have developed a non-monotone version of the boosted DC algorithm (BDCA) proposed in \cite{ARAGON2019} for DC programming when both DC components are not differentiable. Under mild conditions on the parameter that controls the non-monotonicity of the objective function and standard assumptions on the DC function, some convergence results and iteration-complexity bounds were obtained. In the case where the first DC component is differentiable, the global convergence and different iteration-complexity bounds were established assuming the Kurdyka-\L{}ojasiewicz property of the objective function. We have applied this non-monotone boosted DC algorithm (nmBDCA) to some academic test problems.
Our numerical experiments indicate that nmBDCA outperforms the DC algorithm (DCA~\cite{Pham1986}) and the proximal point method for DC functions (PPMDC~\cite{SunSampaio2003}) in both computational performance and quality of the solution found. A very interesting topic for future research is that the idea of using a non-monotone line search to establish the well-definedness of nmBDCA can also be employed in other methods of non-differentiable convex optimization. For instance, let $f:\mathbb{R}^2\to \mathbb{R}$ be the non-differentiable convex function given by $f(x,y)=(x^2+y^2)/4+|x|+2 |y|$. The subdifferential of $f$ is given by \begin{align} \label{eq:sdex} \partial f(x)=\begin{cases} ([-1,1], y/2+2 \operatorname{sgn}(y)) , \qquad &\mbox{if } x=0, y\neq 0 ;\\ (x/2+\operatorname{sgn}(x), [-2,2]) , \qquad &\mbox{if } x\neq 0, y=0 ;\\ ([-1,1], [-2,2]) , \qquad &\mbox{if } x= 0, y=0 ;\\ (x/2+ \operatorname{sgn}(x) , y/2+2 \operatorname{sgn}(y)) , \qquad &\mbox{if } x\neq 0, y\neq 0. \end{cases} \end{align} Take $x^0:=(4,4)$. Since $f$ is differentiable at $x^{0}$, we have $\partial f(x^{0})=\{(3,4)\}$ and, setting $s^{0}=(3,4)\in \partial f(x^{0})$, we know that $-s^0$ is a descent direction of $f$ at $x^0$. Taking $\lambda _{-1}=1>0$, $\rho= \zeta = 1/2\in (0,1)$ we obtain \begin{align*} 5/4=f(x^0- \zeta ^{0}\lambda _{-1}s^0) \leq f(x^0)-\rho \zeta^{0}\lambda _{-1}\|s^0\|^{2}=30/4, \end{align*} where $j_0=0$. Thus, $\lambda_{0}=1$ and, setting $x^{1}=x^{0}-\lambda_{0}s^{0} =(1,0)$, we obtain that $$f(x^{1})\leq f(x^{0})-\rho\lambda_{0}\|s^0\|^{2}.$$ This shows that, starting with $x^0$, we can apply a monotone line search in order to find $x^1=(1,0)$. This was possible because $f$ is differentiable at $x^0=(4,4)$ and $-s^0=(-3,-4)$ is a descent direction of $f$ at $x^0$. On the other hand, since $f$ is not differentiable at $x^1=(1,0),$ given an arbitrary $s^1\in \partial f(x^1)$, the direction $-s^1$ may not be a descent direction, which means that we cannot apply monotone line search strategies in this case. Indeed, first note that \eqref{eq:sdex} gives $\partial f(x^1)=(3/2, [-2,2])$. Taking $s^1:=(3/2, -2)$ and $\lambda \in (0, 2/3)$, we have \begin{equation*} f(x^1-\lambda s^1)-f(x^1)= \frac{7}{4} \lambda+\frac{25}{16} \lambda^2. \end{equation*} Hence, we conclude that $f'(x^{1}, -s^1)=7/4 >0$, which means that $-s^1=(-3/2, 2)$ is an ascent direction of $f$ at $x^1$. Thus, a monotone line search cannot be performed in this case. However, $ \lim_{\lambda \to 0^+} (f(x^1-\lambda s^1)-f(x^1)+\rho \lambda \| s^{1}\| ^{2})=0$. Thus, for any $\nu_1>0$, there exists $\delta >0$ such that $f(x^1-\lambda s^1)-f(x^1)+\rho \lambda \| s^1\| ^{2}<\nu_1$, for all $\lambda\in (0, \delta)$. Therefore, a non-monotone line search such as \begin{equation*} j_k:=\min \left\{j\in {\mathbb N}: ~f( x^{k}-\zeta^{j} \lambda _{k-1}s^{k})\leq f(x^{k})-{\rho} \left(\zeta^{j}{\lambda_{k-1} }\right)\| s^{k}\| ^{2} +\nu _{k} \right\} \end{equation*} can be performed. This motivates us to define the following subgradient method with non-monotone line search to minimize a convex function $f:\mathbb{R}^n\to \mathbb{R}$. \begin{algorithm}[H] \caption{SubGrad method with non-monotone line search}\label{Alg1s} \begin{algorithmic}[1] \STATE {Fix $\lambda _{-1}>0$, $0<\rho<1$ and $\zeta \in (0,1)$. Choose $x^0\in \mathbb{R}^{n}$. Set $k=0$.} \STATE{Choose $s^{k}\in\partial f(x^{k})$. If $s^k=0$, then STOP and return $x^{k}$.
Otherwise, take $\nu _{k}\in \mathbb{R}_{++}$ and set $\lambda _{k}:= \zeta ^{j_{k}}\lambda_{k-1},$ where \begin{equation} \label{eq:jk} j_k:=\min \left\{j\in {\mathbb N}: ~f( x^{k}-\zeta^{j} \lambda _{k-1}s^{k})\leq f(x^{k})-{\rho} \left(\zeta^{j}{\lambda_{k-1} }\right)\| s^{k}\| ^{2} +\nu _{k} \right\}. \end{equation}} \STATE{ Set $x^{k+1}:=x^{k}-\lambda _{k}s^{k}$, set $k \leftarrow k+1$ and go to Step~2.} \end{algorithmic} \end{algorithm} Let us verify that Algorithm~\ref{Alg1s} is well defined. More precisely, if $\nu _{k}>0$, for all $k\in\mathbb{N}$, then Algorithm~\ref{Alg1s} is well defined. Indeed, since $f$ is a convex function, it is also continuous. Thus, we conclude that $\lim_{\lambda \to 0^+}(f(x^{k}-\lambda s^{k}) - f(x^{k})+\rho \lambda \|s^{k}\|^{2})=0$. Hence, due to $\nu _{k}>0$, there exists $\eta _{k}>0$ such that \begin{equation} \label{eq:ismwd} f(x^{k}-\lambda s^{k}) - f(x^{k})+\rho \lambda \|s^{k}\|^{2}< \nu _{k}, \qquad \forall \lambda \in (0,{\eta _{k}}]. \end{equation} On the other hand, due to $\zeta \in (0,1)$ we have $\lim_{j\to \infty} \zeta^{j}{\lambda_{k-1}}=0$. Hence, considering that $\eta_k>0$, there exists ${j_*}\in {\mathbb N}$ such that $\zeta^{j}{\lambda_{k-1}}\in (0, \eta_k]$, for all $j\geq {j_*}$. Therefore, \eqref{eq:ismwd} implies that there exists $j_k$ satisfying \eqref{eq:jk}, and the claim is proved. It is worth pointing out that $-s^k$, with $s^{k}$ chosen in step 2 of Algorithm~\ref{Alg1s}, is not in general a descent direction at $x^k$. However, it follows from \cite[Theorem 4.2.3]{Lemarechal1993} that the set where convex functions fail to be differentiable is of zero measure. Consequently, for almost every $x^{k}$ with $s^k\neq 0$, the direction $-s^{k}$ is a descent direction. Therefore, we expect that Algorithm~\ref{Alg1s} behaves similarly to the gradient method with non-monotone line search. We believe this is an issue that deserves to be investigated. \section*{Acknowledgments} We would like to thank the reviewers for their constructive remarks, which allowed us to improve the paper. \end{document}
\begin{document} \title{Predicting the supremum: optimality of ``stop at once or not at all''} \begin{abstract} Let $(X_t)_{0\leq t\leq T}$ be a one-dimensional stochastic process with independent and stationary increments, either in discrete or continuous time. This paper considers the problem of stopping the process $(X_t)$ ``as close as possible'' to its eventual supremum $M_T:=\sup_{0\leq t\leq T}X_t$, when the reward for stopping at time $\tau\leq T$ is a nonincreasing convex function of $M_T-X_\tau$. Under fairly general conditions on the process $(X_t)$, it is shown that the optimal stopping time $\tau$ takes a trivial form: it is either optimal to stop at time $0$ or at time $T$. For the case of random walk, the rule $\tau\equiv T$ is optimal if the steps of the walk stochastically dominate their opposites, and the rule $\tau\equiv 0$ is optimal if the reverse relationship holds. An analogous result is proved for L\'evy processes with finite L\'evy measure. The result is then extended to some processes with nonfinite L\'evy measure, including stable processes, CGMY processes, and processes whose jump component is of finite variation. {\it AMS 2000 subject classification}: 60G40, 60G50, 60J51 (primary); 60G25 (secondary) {\it Key words and phrases}: Random walk; L\'evy process; optimal prediction; ultimate supremum; stopping time; skew symmetry; convex function \end{abstract} \section{Introduction} In recent years there has been a great deal of interest in optimal prediction problems of the form \begin{equation} \sup_{\tau\leq T}\sE[f(M_T-X_\tau)], \label{eq:intro-objective} \end{equation} where $f$ is a nonincreasing function, $(X_t)_{t\geq 0}$ a one-dimensional stochastic process, $T>0$ a finite time horizon, and $M_T:=\sup\{X_t: 0\leq t\leq T\}$. The supremum in \eqref{eq:intro-objective} is taken over the set of all stopping times adapted to the process $(X_t)_{t\geq 0}$ for which $\mathrm{P}(\tau\leq T)=1$. For the case of Brownian motion, the problem \eqref{eq:intro-objective} has been investigated for several reward functions $f$, though it is often formulated as a penalty-minimization problem in the form \begin{equation} \inf_{\tau\leq T}\sE[\tilde{f}(M_T-X_\tau)], \label{eq:minimize-penalty} \end{equation} where $\tilde{f}:=-f$. For instance, Graversen et al.~\cite{GPS} solved \eqref{eq:minimize-penalty} for standard Brownian motion and $\tilde{f}(x)=x^2$. Their result was generalized to $\tilde{f}(x)=x^\alpha$ for arbitrary $\alpha>0$ by Pedersen \cite{Pedersen}, who also considered the function $f=\chi_{[0,\varepsilon]}$ for $\varepsilon>0$ in \eqref{eq:intro-objective}. Du Toit and Peskir \cite{DuToit1} were the first to extend these results (for power functions $f$) to Brownian motion with arbitrary drift, which required an entirely new approach. More recently, Shiryaev et al.~\cite{SXZ} considered the problem \eqref{eq:intro-objective} for Brownian motion with drift and $f(x)=e^{-\sigma x}$, where $\sigma>0$. In that case the problem has the natural interpretation of maximizing the expected ratio of the selling price to the eventual maximum price in the Black-Scholes model for stock price movements. They observed that when the drift parameter lies outside a certain critical interval, the optimal rule $\tau^*$ becomes trivial; that is, either $\tau^*\equiv 0$ or $\tau^*\equiv T$. A year later, Du Toit and Peskir \cite{DuToit2} managed to prove that the optimal rule is trivial also in the critical interval.
More precisely, their result was that $\tau^*\equiv 0$ when the drift is negative, and $\tau^*\equiv T$ when the drift is positive. While this may seem intuitively quite plausible, it is nontrivial to prove. Since the optimal rule changes abruptly from $0$ to $T$ as the drift parameter passes through $0$, Du Toit and Peskir \cite{DuToit2} called it a ``bang-bang'' stopping rule. They also showed that for the (seemingly quite similar) problem \eqref{eq:minimize-penalty} with $\tilde{f}(x)=e^{\sigma x}$, the optimal rule is not of bang-bang form, but transitions from $\tau^*\equiv 0$ to $\tau^*\equiv T$ in a nontrivial way throughout the critical interval. In the discrete-time setting, an analogous result for Bernoulli random walk was obtained later the same year by Yam et al.~\cite{YYZ}, using ideas from \cite{DuToit2}. Here we put $T=N$, a positive integer, and write $X_n$ instead of $X_t$, where $\{X_n\}_{0\leq n\leq N}$ is a simple random walk with parameter $p$. Yam et al.~\cite{YYZ} considered both the function $f=\chi_0$, the characteristic function of the set $\{0\}$ (in which case the expectation in \eqref{eq:intro-objective} is just the probability of stopping at the ``top'' of the random walk) and the function $f(x)=e^{-\sigma x}$, and concluded that in both cases, the optimal rule is of bang-bang type. Precisely, the optimal rule is $\tau\equiv N$ when $p>1/2$; $\tau\equiv 0$ when $p<1/2$; or any stopping rule $\tau$ satisfying $\mathrm{P}(X_\tau=M_\tau \ \mbox{or}\ \tau=N)=1$ when $p=1/2$. It is worth noting that the case $f=\chi_0$ had already been considered for general {\em symmetric} random walks more than 20 years earlier by Hlynka and Sheahan \cite{Hlynka}. The results for both discrete and continuous time were recently extended in Allaart \cite{Allaart}, where it is shown that the bang-bang principle holds for both Bernoulli random walk and Brownian motion with drift whenever $f$ is nonincreasing and convex. Equivalently, it holds for problem \eqref{eq:minimize-penalty} when $\tilde{f}$ is nondecreasing and concave, which is the case, for instance, for the natural penalty function $\tilde{f}(x)=x^\alpha$ with $0<\alpha\leq 1$. Allaart \cite{Allaart} gives simple sufficient conditions on $f$ for the optimal rules to be unique in the discrete-time case, and necessary and sufficient conditions for the case of Brownian motion. The aim of the present paper is to extend the result further still, to include more general random walks as well as certain L\'evy processes. First, in Section \ref{sec:random-walk}, it is shown that the bang-bang principle holds for any random walk whose increments stochastically dominate their opposites, or vice versa (see Theorem \ref{thm:random-walk} below). In Section \ref{sec:levy} an analogous result is proved for L\'evy processes, first for the case of finite L\'evy measure (Theorem \ref{thm:Levy-finite}), then for the more general case (Theorem \ref{thm:Levy-general}). This appears to require some notion of drift, and therefore it seems necessary to impose some additional conditions pertaining to the ``small jumps'' of the process. One of these conditions can be omitted in the case when $f$ is continuous and bounded (Theorem \ref{thm:bounded-f}), but the author does not know whether it is needed in the general case. The extra conditions may seem restrictive, but they are satisfied by several commonly studied types of L\'evy processes including subordinators, symmetric stable processes, and CGMY processes.
A possible application of this research is in finance. Suppose you buy a share of stock on the first day of the month, which you must sell some time by the end of the month. Perhaps the stock price follows a random walk in discrete time, and your objective is to maximize the probability of selling the stock at the highest price over the month. In that case, let $X_t$ be the random walk, and let $f=\chi_0$. Or perhaps the stock price follows an exponentiated L\'evy process, such as geometric Brownian motion, and your goal is to maximize the expected ratio of the price at the time you sell to the eventual maximum price. In that case, let $X_t$ be the L\'evy process, and put $f(x)=e^{-\sigma x}$, where $\sigma>0$. In both examples the results of this paper imply, under suitable conditions on the process $X_t$, that it is either optimal to sell the stock immediately, or to keep it until the last day of the month. In fact, the result for the second example remains valid if one takes as objective function an arbitrary increasing convex function $g$ of the price ratio, since if $g:(0,\infty)\to\RR$ is increasing and convex, then $f(x)=g(e^{-\sigma x})$ is decreasing and convex. After this work was begun, the author learnt that D. Orlov has also extended the bang-bang principle to certain L\'evy processes. Unfortunately, an English version of his paper was not available at the time the present article was nearing completion. In addition, a paper by Bernyk et al. \cite{BDP2} appeared in which problem \eqref{eq:minimize-penalty} is solved for stable L\'evy processes of index $\alpha\in(1,2)$ with no negative jumps, for the penalty function $\tilde{f}(x)=x^p$ with $p>1$. (We observe that for this case, $f=-\tilde{f}$ is not convex, so the results of the present note do not apply; indeed, the optimal rule is nontrivial and its determination requires significant analytical tools.) Some of the preparatory work for this last paper was done in \cite{BDP1}. \section{The maximum of a random walk} \label{sec:random-walk} In this section, let $\{X_n\}_{n=0,1,\dots}$ be a random walk with general steps satisfying a form of skew-symmetry as follows: $X_0\equiv 0$, and for $n\geq 1$, $X_n=\sum_{k=1}^n\xi_k$, where $\xi,\xi_1,\xi_2,\dots$ are independent, identically distributed (i.i.d.) random variables for which either $\xi\geq_{\st}-\xi$ or $\xi\leq_{\st}-\xi$. Here, $\geq_{\st}$ denotes the usual stochastic order of random variables, defined by \begin{equation*} X\geq_{\st} Y \quad\Longleftrightarrow\quad \sP(X>t)\geq \sP(Y>t) \quad\mbox{for all $t\in\RR$}. \end{equation*} (See Chapter 17 of Marshall and Olkin \cite{Marshall} for a general treatment of the stochastic order.) Let $M_n:=\max_{0\leq k\leq n}X_k$ for $n=0,1,\dots,N$, where $N\in\NN$ is a finite time horizon. For a nonincreasing function $f:[0,\infty)\to\RR$, consider the optimal stopping problem \begin{equation} \sup_{0\leq\tau\leq N}\mathrm{E}[f(M_N-X_\tau)], \label{eq:objective} \end{equation} where the supremum is over the set of all stopping times $\tau\leq N$ adapted to the natural filtration $\{\FF_n\}_{0\leq n\leq N}$ of the process $\{X_n\}_{0\leq n\leq N}$. We note that since $f$ is bounded above, the expectation in \eqref{eq:objective} always exists, though it could take the value $-\infty$. The above setup includes Bernoulli random walk with arbitrary parameter $p\in(0,1)$ as a special case, but is of course much more general. 
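Although the analysis below is purely probabilistic, the objective \eqref{eq:objective} is easy to explore numerically. The following MATLAB sketch (ours, included only as an illustration) estimates $\mathrm{E}[f(M_N-X_\tau)]$ by Monte Carlo for the two trivial rules $\tau\equiv 0$ and $\tau\equiv N$, with $f=\chi_0$ and Gaussian steps of mean $0.1$, for which $\xi\geq_{\st}-\xi$ holds.
\begin{verbatim}
% Monte Carlo estimate of E[f(M_N - X_tau)] for tau = 0 and tau = N,
% with f = chi_{0}: the probability of stopping exactly at the maximum.
N = 20; m = 1e5;
xi = randn(m, N) + 0.1;             % i.i.d. steps distributed N(0.1, 1)
X  = [zeros(m,1), cumsum(xi, 2)];   % paths X_0, X_1, ..., X_N
M  = max(X, [], 2);                 % M_N; the zero column makes M == 0 an exact test
fprintf('tau = 0: %.4f   tau = N: %.4f\n', mean(M == 0), mean(M == X(:,end)));
\end{verbatim}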
\begin{theorem} \label{thm:random-walk} Assume that either $\xi\geq_{\st}-\xi$ or $\xi\leq_{\st}-\xi$, and let $f:[0,\infty)\to\RR$ be nonincreasing and convex. Consider the problem \eqref{eq:objective}. \begin{enumerate}[(i)] \item If $\xi\geq_{\st}-\xi$, the rule $\tau\equiv N$ is optimal. \item If $\xi\leq_{\st}-\xi$, the rule $\tau\equiv 0$ is optimal. \item If $\xi\stackrel{d}{=}-\xi$, any rule $\tau$ satisfying $\mathrm{P}(X_\tau=M_\tau\ \mbox{or}\ \tau=N)=1$ is optimal. \end{enumerate} \end{theorem} \begin{remark} {\rm By the assumption of convexity $f$ must be continuous on $(0,\infty)$, but it may have a jump discontinuity at $x=0$. Thus, in particular, the theorem covers the important case $f=\chi_0$, the characteristic function of the set $\{0\}$. In that case, the problem comes down to maximizing the probability of stopping at the highest point of the walk, so it can be thought of as a random walk version of the secretary (or best-choice) problem. } \end{remark} \begin{remark} {\rm The condition $\xi\geq_{\st}-\xi$ holds for any random variable $\xi$ whose distribution is symmetric about some point $m\geq 0$, as is easy to see. For instance, any normal random variable $\xi$ with a nonnegative mean satisfies $\xi\geq_{\st}-\xi$. It follows that Theorem \ref{thm:random-walk} applies to all $\xi$ with symmetric distributions. } \end{remark} \begin{example} {\rm An example of a nonsymmetric distribution for which $\xi\geq_{\st}-\xi$ is the Gumbel extreme value distribution, with distribution function $F(x)=\exp(-e^{-x})$, $x\in\RR$. To see this, let $g(x)=\exp(-e^x)+\exp(-e^{-x})$. Then \begin{equation*} g'(x)=\exp(-e^x+x)\left[\exp(e^x-e^{-x}-2x)-1\right]. \end{equation*} Since it is easy to see (for instance by using a series expansion) that $e^x-e^{-x}-2x\geq 0$ for $x\geq 0$, it follows that $g$ is increasing on $[0,\infty)$. And since $\lim_{x\to\infty}g(x)=1$, this means that $g(x)<1$ for $x\geq 0$. Hence, \begin{equation*} 1-F(x)\geq F(-x), \qquad x\geq 0. \end{equation*} So if $\xi\sim F$, then $\xi\geq_{\st}-\xi$. } \end{example} \begin{example} {\rm The condition $\xi\geq_{\st}-\xi$ in statement (i) cannot be replaced by the condition $\mathrm{E}(\xi)\geq 0$. For instance, let $\mathrm{P}(\xi=3)=1/3=1-\sP(\xi=-1)$, let $f=\chi_0$, and take $n=2$. Even though $\mathrm{E}(\xi)=1/3>0$, the optimal rule is easily seen to be $\tau\equiv 0$ rather than $\tau\equiv 2$. } \end{example} In case of Bernoulli random walk, simple sufficient conditions on the function $f$ such that the optimal rules given above be unique are given in \cite{Allaart}. There an example is also given to show that without convexity of $f$, the conclusion of Theorem \ref{thm:random-walk} may fail in general. The proof of Theorem \ref{thm:random-walk} uses the following generalization of Lemma 2.1 in \cite{Allaart}. Note that, compared to that lemma, a somewhat different method of proof is needed here. \begin{lemma} \label{lem:key-inequality} Let $f$ be as in Theorem \ref{thm:random-walk}, and suppose $\xi\geq_{\st}-\xi$. Then \begin{equation} \sE[f(z\vee M_n-X_n)]\geq \sE\big[f\big(z\vee (M_n-X_n)\big)\big] \label{eq:key-inequality} \end{equation} for all $n\leq N$ and all $z\geq 0$. \end{lemma} Since the statement of the lemma involves only expectations, we may construct the random walk on a convenient probability space. Recall first that if $X\geq_{\st}Y$, then $X$ and $Y$ can be defined on a common probability space $(\Omega,\FF,\sP)$ so that $X(\omega)\geq Y(\omega)$ for all $\omega\in\Omega$. 
(See, for instance, \cite[Theorem 17.B.1]{Marshall}.) Thus, on a sufficiently large probability space, we can construct the random variables $\xi_1,\dots,\xi_N$ together with another set of random variables $\tilde{\xi}_1,\dots,\tilde{\xi}_N$ such that the random vectors $(\xi_1,\tilde{\xi}_1),\dots,(\xi_N,\tilde{\xi}_N)$ are independent, $\tilde{\xi}_i\stackrel{d}{=}-\xi_1$ for each $i$, and $\xi_i\geq\tilde{\xi}_i$ for each $i$. Let $\tilde{X}_0\equiv 0$, and $\tilde{X}_n=\sum_{k=1}^n\tilde{\xi}_k$, for $n=1,2,\dots,N$. Finally, define $\tilde{M}_n:=\max_{0\leq k\leq n}\tilde{X}_k$, $n=0,1,\dots,N$. Clearly, $X_n\geq\tilde{X}_n$ and $M_n\geq\tilde{M}_n$ for every $n$. It is also useful to define \begin{equation*} Z_n:=M_n-X_n \quad\mbox{and}\quad \tilde{Z}_n:=\tilde{M}_n-\tilde{X}_n, \qquad n=0,1,\dots,N. \end{equation*} One checks easily that \begin{equation} Z_n\leq\tilde{Z}_n, \qquad n=0,1,\dots,N. \label{eq:Z-relationship} \end{equation} The key to the proof of the lemma is that, for each fixed $n$, \begin{equation} (M_n-X_n,X_n)\stackrel{d}{=}(\tilde{M}_n,-\tilde{X}_n), \label{eq:reflection} \end{equation} as follows from an easy time-reversal argument. \begin{proof}[Proof of Lemma \ref{lem:key-inequality}] The lemma holds trivially (with equality) when $z=0$, so assume $z>0$. We must first deal separately with the case when $\sE[f(z\vee M_n-X_n)]=-\infty$. Let $\alpha:=[f(z)-f(0)]/z$. Then the convexity of $f$ implies that, for all $u\geq 0$, \begin{equation} f(u+z)-f(u)\geq\alpha z. \label{eq:convexity-bound} \end{equation} Using the algebraic inequality $z\vee m-x\leq z\vee(m-x)+z$ (valid for $z\geq 0$ and $m\geq 0$) and the fact that $f$ is nonincreasing, we get \begin{equation*} f(z\vee m-x)\geq f(z\vee(m-x)+z)\geq f(z\vee(m-x))+\alpha z, \end{equation*} in view of \eqref{eq:convexity-bound}. Thus, if $\sE[f(z\vee M_n-X_n)]=-\infty$, then $\sE\big[f\big(z\vee (M_n-X_n)\big)\big]=-\infty$ as well, and the lemma holds in this case. Assume for the remainder of the proof that $\sE[f(z\vee M_n-X_n)]>-\infty$. Since $n$ is fixed, we omit the subscripts and write $M=M_n$, $X=X_n$, $Z=Z_n$, and similarly for their tilded counterparts. Let \begin{equation*} h(z,m,x):=f(z\vee m-x)-f\big(z\vee(m-x)\big), \end{equation*} so that it is to be shown that \begin{equation} \mathrm{E}[h(z,M,X)]\geq 0. \label{eq:h-expectation} \end{equation} The above expectation exists and is finite, because $\alpha z\leq h(z,m,x)\leq|\alpha|z$. We begin by writing \begin{equation*} \mathrm{E}[h(z,M,X)]=\sE[h(z,M,X);X>0]+\sE[h(z,M,X);X<0]. \end{equation*} Using \eqref{eq:reflection}, we can write the second expectation as \begin{equation} \sE[h(z,M,X);X<0]=\sE[h(z,\tilde{M}-\tilde{X},-\tilde{X});\tilde{X}>0]. \label{eq:rewrite-second} \end{equation} On the other hand, we claim that \begin{equation} \mathrm{E}[h(z,M,X);X>0]\geq\sE[h(z,\tilde{M},\tilde{X});\tilde{X}>0]. \label{eq:expectation-relationship} \end{equation} To see this, note that $h(z,M,X)=0$ on $\{X>0, M-X>z\}$, and hence, \begin{align*} h(z,M,X)\sI(X>0)&=\big(f(z\vee M-X)-f(z)\big)\sI(X>0, M-X\leq z)\\ &=\big(f(\max\{z-X,Z\})-f(z)\big)\sI(X>0, Z\leq z)\\ &\geq \big(f(\max\{z-X,Z\})-f(z)\big)\sI(\tilde{X}>0, \tilde{Z}\leq z)\\ &\geq \big(f(\max\{z-\tilde{X},\tilde{Z}\})-f(z)\big)\sI(\tilde{X}>0, \tilde{Z}\leq z)\\ &=h(z,\tilde{M},\tilde{X})\sI(\tilde{X}>0). 
\end{align*} Here the first inequality follows since $\{\tilde{X}>0, \tilde{Z}\leq z\}\subset \{X>0, Z\leq z\}$ by \eqref{eq:Z-relationship}, $\max\{z-X,Z\}\leq z$ on $\{X>0, Z\leq z\}$, and $f$ is nonincreasing. The second inequality follows since $f$ is nonincreasing and $\max\{z-X,Z\}\leq \max\{z-\tilde{X},\tilde{Z}\}$. Combining \eqref{eq:rewrite-second} and \eqref{eq:expectation-relationship}, we obtain \begin{equation} \mathrm{E}[h(z,M,X)]\geq \sE[h(z,\tilde{M},\tilde{X})+h(z,\tilde{M}-\tilde{X},-\tilde{X});\tilde{X}>0]. \label{eq:together} \end{equation} Next, the convexity of $f$ implies that for all $0\leq x<y$ and all $d>0$, \begin{equation} f(x)-f(x+d)\geq f(y)-f(y+d), \label{eq:convex-property} \end{equation} as is easily checked. Thus, for $z\geq 0$ and $0<x\leq m$, we have \begin{align*} h(z,m,x)+h(z,m-x,-x)&=\big[f(z\vee m-x)-f\big(z\vee(m-x)\big)\big]\\ &\qquad+\big[f\big(z\vee(m-x)+x\big)-f(z\vee m)\big]\\ &=[f(z\vee m-x)-f(z\vee m)]\\ &\qquad-\big[f\big(z\vee(m-x)\big)-f\big(z\vee(m-x)+x\big)\big]\\ &\geq 0, \end{align*} where the inequality follows by \eqref{eq:convex-property} with $d=x$, since $x>0$ implies that $z\vee m-x\leq z\vee(m-x)$. This, together with \eqref{eq:together}, yields \eqref{eq:h-expectation}. \end{proof} \begin{corollary} \label{cor:consequence} Under the hypotheses of Lemma \ref{lem:key-inequality}, \begin{equation} \sE[f(z\vee M_n-X_n)]\geq \sE[f(z\vee M_n)] \label{eq:p-inequality} \end{equation} for all $n\leq N$ and all $z\geq 0$. \end{corollary} \begin{proof} By \eqref{eq:reflection}, the inequality \eqref{eq:key-inequality} can be expressed alternatively as \begin{equation} \sE[f(z\vee M_n-X_n)]\geq \sE[f(z\vee \tilde{M}_n)]. \label{eq:alternative-form} \end{equation} But $\mathrm{E}[f(z\vee \tilde{M}_n)]\geq \sE[f(z\vee M_n)]$, since $\tilde{M}_n\leq_{\st} M_n$ and $f$ is nonincreasing. Thus, \eqref{eq:p-inequality} follows. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:random-walk}] The main idea in the proof below is essentially due to Du Toit and Peskir \cite{DuToit2}; see Yam et al.~\cite{YYZ} for the discrete-time case. (i) Suppose first that $\xi_1\geq_{\st}-\xi_1$. Construct the random variables $\xi_k$, $X_k$, $M_k$, $Z_k$ and $\tilde{\xi}_k$, $\tilde{X}_k$, $\tilde{M}_k$ and $\tilde{Z}_k$ on a common probability space as in the discussion following the statement of Lemma \ref{lem:key-inequality}. Define the $\sigma$-algebras \begin{equation*} \GG_k:=\sigma(\{\xi_1,\dots,\xi_k,\tilde{\xi}_1,\dots,\tilde{\xi}_k\}), \qquad k=0,1,\dots,N. \end{equation*} It will be important later in the proof that the increments $X_k-X_j$ and $\tilde{X}_k-\tilde{X}_j$ are independent of $\GG_j$, for all $0\leq j\leq k$. Note further that if the stopping time $\tau\equiv N$ is optimal among the set of all stopping times relative to the filtration $\{\GG_k\}$, then it is certainly optimal among the stopping times relative to $\{\FF_k\}$. Thus, it is sufficient to show that \begin{equation} \mathrm{E}[f(M_N-X_\tau)]\leq\sE[f(M_N-X_N)] \label{eq:domination} \end{equation} for any stopping time $\tau$ relative to $\{\GG_k\}$. Define the functions \begin{equation*} G(k,z):=\sE[f(z\vee M_k)],\qquad D(k,z):=\sE[f(z\vee M_k-X_k)], \end{equation*} for $z\geq 0$ and $k=0,1,\dots,N$. Note that $G(k,z)$ and $D(k,z)$ can possibly take the value $-\infty$. Let $\tau\leq N$ be any stopping time. 
An easy exercise using the independent and stationary increments of the random walk $\{X_k\}$ leads to \begin{equation} \mathrm{E}[f(M_N-X_\tau)|\GG_\tau]=G(N-\tau,Z_\tau), \label{eq:first-conditional} \end{equation} and \begin{equation} \mathrm{E}[f(M_N-X_N)|\GG_\tau]=D(N-\tau,Z_\tau). \label{eq:second-conditional} \end{equation} Now Corollary \ref{cor:consequence} says that $D(k,z)\geq G(k,z)$, and hence \begin{equation*} \mathrm{E}[f(M_N-X_\tau)|\GG_\tau]\leq \sE[f(M_N-X_N)|\GG_\tau]. \end{equation*} Taking expectations on both sides gives \eqref{eq:domination}, as desired. (ii) Suppose next that $\xi_1\leq_{\st}-\xi_1$. Apply again the construction following the statement of Lemma \ref{lem:key-inequality}, but this time with $\xi_i\leq\tilde{\xi}_i$ for all $i$. Observe that all the other relationships between random variables and their tilded counterparts are now reversed as well, i.e. \begin{equation*} X_k\leq \tilde{X}_k, \qquad M_k\leq\tilde{M}_k, \qquad Z_k\geq\tilde{Z}_k, \end{equation*} for $k=0,1,\dots,N$. Define the filtration $\{\GG_k\}$ and the function $G(k,z)$ as in the proof of part (i) above, and let \begin{equation*} \tilde{D}(k,z):=\sE[f(z\vee \tilde{M}_k-\tilde{X}_k)]. \end{equation*} In place of \eqref{eq:alternative-form}, we now have the inequality \begin{equation*} \sE[f(z\vee \tilde{M}_k-\tilde{X}_k)]\geq \sE[f(z\vee M_k)], \end{equation*} or in other words, $\tilde{D}(k,z)\geq G(k,z)$. Furthermore, the fact that $f$ is nonincreasing implies that $G(k,z)$ is nonincreasing in $z$, and therefore, \begin{equation*} G(N-j,Z_j)\leq G(N-j,\tilde{Z}_j) \end{equation*} for each $j$. By \eqref{eq:reflection}, $\mathrm{E}[f(M_N)]=\sE[f(\tilde{Z}_N)]$. Putting these facts together, we obtain for any stopping time $\tau$ relative to $\{\GG_k\}$, by the same kind of reasoning as in the proof of part (i), \begin{align} \begin{split} \mathrm{E}[f(M_N-X_\tau)]&=\sE[G(N-\tau,Z_\tau)]\leq \sE[G(N-\tau,\tilde{Z}_\tau)]\\ &\leq \sE[\tilde{D}(N-\tau,\tilde{Z}_\tau)]=\sE[f(\tilde{Z}_N)]=\sE[f(M_N)]. \end{split} \label{eq:chain} \end{align} Hence, the rule $\tau\equiv 0$ is optimal. (iii) Suppose finally that $\xi_1\stackrel{d}{=}-\xi_1$. This is a special case of part (i), so the rule $\tau\equiv N$ is optimal. Now let $\tau$ be any stopping time such that with probability one, $X_\tau=M_\tau$ or $\tau=N$. Since $G(0,z)=f(z)=D(0,z)$ for all $z\geq 0$ and $G(k,0)=\sE[f(M_k)]=\sE[f(\tilde{Z}_k)]=\sE[f(Z_k)]=D(k,0)$ for all $k$, \eqref{eq:first-conditional} and \eqref{eq:second-conditional} give equality in \eqref{eq:domination}. Hence, $\tau$ is optimal. \end{proof} \section{The maximum of a L\'evy process} \label{sec:levy} A careful study of the proofs in the previous section reveals that the essential property of the random walk is its independent and stationary increments. Furthermore, in order to construct the random walk $\{X_n\}$ and its dual $\{\tilde{X}_n\}$ on a common probability space in such a way that the increments of $\{X_n\}$ uniformly dominate those of $\{\tilde{X}_n\}$ (or vice versa), the step-size distribution had to satisfy a type of skew symmetry. With this in mind, we can now extend the result to a much larger class of stochastic processes. The general continuous-time analog of a random walk is a {\em L\'evy process}, which is defined as a stochastic process on $[0,\infty)$ with independent and stationary increments which starts at $0$ and is continuous in probability. 
Following standard practice, we assume also that the process has almost surely right-continuous sample paths with left-hand limits everywhere (or, for short, that the process is {\em rcll}). If $X=(X_t)_{t\geq 0}$ is a (one-dimensional) L\'evy process, it is uniquely determined by the L\'evy-Khintchine formula \begin{equation*} \mathrm{E}\left[e^{iuX_t}\right]=e^{t\eta(u)}, \end{equation*} where \begin{equation} \eta(u)=i\gamma u-\frac{\sigma^2 u^2}{2} +\int_{\RR\backslash\{0\}}\left[e^{iuy}-1-iuy\chi_{(-1,1)}(y)\right]\nu(dy). \label{eq:LK-representation-2} \end{equation} In this expression, the {\em L\'evy measure} $\nu$ satisfies $\int_{\RR\backslash\{0\}}(y^2\wedge 1)\nu(dy)<\infty$, but $\nu$ need not be finite. We say that $X$ is {\em generated by the triplet $(\gamma,\sigma^2,\nu)$}. Define the supremum process $M=(M_t)_{t\geq 0}$ by \begin{equation*} M_t:=\sup_{0\leq s\leq t}X_s, \qquad t\geq 0. \end{equation*} If $\nu$ is finite, then $X$ is simply the sum of a Brownian motion with drift and a compound Poisson process, and it is straightforward to adapt the result of the previous section. This is done in Subsection \ref{subsec:interlacing} below. If $\nu$ is not finite, however, complications arise in attempting to couple the process $X$ with its dual, and some additional conditions appear to be needed to overcome these difficulties. This is made precise in Subsection \ref{subsec:general}. Finally, in Subsection \ref{subsec:bounded}, we eliminate one of the extra conditions in the case when $f$ is continuous and bounded. \subsection{The case of finite $\nu$} \label{subsec:interlacing} We consider first the case when $\nu$ is finite. Then we may put \begin{equation*} b:=\gamma-\int_{0<|y|<1}y\nu(dy), \end{equation*} and express $X_t$ pathwise in the form \begin{equation} X_t=bt+\sigma B_t+\sum_{i=1}^{N(t)}\xi_i, \label{eq:path-representation} \end{equation} where $B_t$ is a standard Brownian motion, $\xi_1, \xi_2,\dots$ are i.i.d. random variables with distribution $\nu/|\nu|$, and $(N(t))_{t\geq 0}$ is a Poisson process with intensity $|\nu|$. In this representation, the Poisson process, the Brownian motion and the $\xi_i$'s are all independent of one another. \begin{definition} Let $X=(X_t)_{t\geq 0}$ be a L\'evy process of the form \eqref{eq:path-representation}, with finite L\'evy measure $\nu$. \begin{enumerate}[(i)] \item $X$ is {\em right skew symmetric (RSS)} if $b\geq 0$ and $\nu\big((a,\infty)\big)\geq\nu\big((-\infty,-a)\big)$ for all $a>0$. \item $X$ is {\em left skew symmetric (LSS)} if $b\leq 0$ and $\nu\big((a,\infty)\big)\leq\nu\big((-\infty,-a)\big)$ for all $a>0$. \item $X$ is {\em symmetric} if $b=0$ and $\nu\big((a,\infty)\big)=\nu\big((-\infty,-a)\big)$ for all $a>0$. \end{enumerate} \end{definition} Note that the condition regarding $\nu$ in the definition of RSS is equivalent to $\xi_1\geq_{\st}-\xi_1$, because if the inequality holds for all $a>0$, it holds for all $a\in\RR$. The following result is the analog of Theorem \ref{thm:random-walk} for a L\'evy process with finite L\'evy measure $\nu$. \begin{theorem} \label{thm:Levy-finite} Let $X=(X_t)_{t\geq 0}$ be a L\'evy process with finite L\'evy measure $\nu$, adapted to a filtration $\{\FF_t\}$, such that $X_t-X_s$ is independent of $\FF_s$ for all $0\leq s\leq t$. Assume $X$ is either RSS or LSS, and let $f$ be as in Theorem \ref{thm:random-walk}. 
For fixed $T>0$, consider the problem \begin{equation} \sup_{0\leq\tau\leq T} \sE[f(M_T-X_\tau)], \label{eq:Levy-objective} \end{equation} where the supremum is over all stopping times $\tau$ relative to the filtration $\{\FF_t\}$ with $\mathrm{P}(\tau\leq T)=1$. \begin{enumerate}[(i)] \item If $X$ is RSS, the rule $\tau\equiv T$ is optimal. \item If $X$ is LSS, the rule $\tau\equiv 0$ is optimal. \item If $X$ is symmetric, any rule $\tau$ satisfying $\mathrm{P}(X_\tau=M_\tau\ \mbox{or}\ \tau=T)=1$ is optimal. \end{enumerate} \end{theorem} If $\nu=0$, then $X$ is a Brownian motion with drift. Thus, the above theorem generalizes recent results of Shiryaev et al.~\cite{SXZ}, Du Toit and Peskir \cite[Section 4]{DuToit2} and Allaart \cite{Allaart}. \begin{definition} Let $X=(X_t)_{t\geq 0}$ be a L\'evy process. The {\em dual process} of $X$, denoted $\tilde{X}$, is a process such that $(\tilde{X}_t)_{t\geq 0}\stackrel{d}{=}(-X_t)_{t\geq 0}$. The {\em dual supremum process}, denoted $\tilde{M}$, is the process defined by $\tilde{M}_t:=\sup_{0\leq s\leq t}\tilde{X}_s$, for $t\geq 0$. \end{definition} If $X$ is a L\'evy process generated by the triplet $(\gamma,\sigma^2,\nu)$, then $\tilde{X}$ is a L\'evy process with triplet $(-\gamma,\sigma^2,\tilde{\nu})$, where $\tilde{\nu}(A)=\nu(-A)$ for any Borel set $A\subset \RR$. Note that if $X$ is RSS, then $\tilde{X}$ is LSS and vice versa. \begin{lemma} \label{lem:Levy-reflection} Let $X$ be any L\'evy process. Then, for each fixed $t\geq 0$, \begin{equation*} (M_t-X_t,X_t)\stackrel{d}{=}(\tilde{M}_t,-\tilde{X}_t). \end{equation*} \end{lemma} \begin{proof} This is essentially a known fact. Let $I_t:=\inf\{X_s: 0\leq s\leq t\}$. Then plainly \begin{equation*} (\tilde{M}_t,-\tilde{X}_t) \stackrel{d}{=} (-I_t,X_t). \end{equation*} According to Proposition 3 of Bertoin \cite[p.~158]{Bertoin}, \begin{equation*} (-I_t,X_t-I_t)\stackrel{d}{=}(M_t-X_t,M_t), \end{equation*} which is equivalent to \begin{equation*} (-I_t,X_t)\stackrel{d}{=}(M_t-X_t,X_t). \end{equation*} Thus, the lemma follows. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:Levy-finite}] Assume for the moment that $X$ is RSS. Recall the representation \eqref{eq:path-representation}. On the same probability space on which the process $X$ is defined, we construct the dual $\tilde{X}$ as follows. For each $i\in\NN$, we can construct out of $\xi_i$ (using an external randomization if necessary) a random variable $\tilde{\xi}_i$ such that $\tilde{\xi}_i\stackrel{d}{=}-\xi_i$, and $\xi_i\geq\tilde{\xi}_i$ pointwise. Now put \begin{equation*} \tilde{X}_t:=-bt+\sigma B_t+\sum_{i=1}^{N(t)}\tilde{\xi}_i, \qquad t\geq 0. \end{equation*} Then it is easy to see that $(\tilde{X}_t)_{t\geq 0}\stackrel{d}{=}(-X_t)_{t\geq 0}$, and moreover, the processes $X$ and $\tilde{X}$ satisfy the property that, for all $0\leq s<t$ and for all $\omega\in\Omega$, \begin{equation} X_t(\omega)-X_s(\omega)\geq \tilde{X}_t(\omega)-\tilde{X}_s(\omega). \label{eq:increment-domination} \end{equation} For $t\geq 0$, define \begin{equation*} Z_t:=M_t-X_t, \qquad \tilde{Z}_t:=\tilde{M}_t-\tilde{X}_t. \end{equation*} As in Section \ref{sec:random-walk}, it follows from \eqref{eq:increment-domination} that \begin{equation*} M_t\geq\tilde{M}_t \qquad\mbox{and}\qquad Z_t\leq\tilde{Z}_t \qquad\mbox{for all $t\geq 0$}.
\end{equation*} Using these relationships and Lemma \ref{lem:Levy-reflection}, we can show in exactly the same way as in the proof of Lemma \ref{lem:key-inequality}, that \begin{equation*} \sE[f(z\vee M_t-X_t)]\geq \sE\big[f\big(z\vee (M_t-X_t)\big)\big] \end{equation*} for all $t\geq 0$ and all $z\geq 0$. Next, for $t\geq 0$, let $\GG_t$ be the smallest $\sigma$-algebra containing both $\FF_t$ and $\sigma(\{\tilde{X}_s:0\leq s\leq t\})$. Then $\{\GG_t\}_{t\geq 0}$ is a filtration with respect to which both $X$ and $\tilde{X}$ are adapted, and for each $0\leq s\leq t$, both $X_t-X_s$ and $\tilde{X}_t-\tilde{X}_s$ are independent of $\GG_s$. The rest of the proof is now the same (modulo subscript notation) as the proof of Theorem \ref{thm:random-walk}, where the analogs of \eqref{eq:first-conditional} and \eqref{eq:second-conditional} follow since $X$, being a L\'evy process, obeys the strong Markov property. \end{proof} \begin{question} {\rm It is clear that when $X$ is RSS, we have $X_t\geq_{\st}\tilde{X}_t$ for all $t\geq 0$. Does the converse of this statement hold? } \end{question} \subsection{The general case} \label{subsec:general} For a general L\'evy process with nonfinite L\'evy measure $\nu$, the construction of the previous subsection is no longer possible because the jump times are dense in the time interval $[0,T]$. Here we shall use the fact that a general L\'evy process on $[0,T]$ can always be obtained as the almost sure uniform limit of a sequence of processes of the form \eqref{eq:path-representation}. However, in order to ensure that this can be done while preserving the uniform domination of increments (i.e. \eqref{eq:increment-domination}), some extra conditions appear to be needed. Let the L\'evy-Khintchine representation of $X=(X(t))_{t\geq 0}$ be given by \eqref{eq:LK-representation-2}. (In what follows, it will be notationally more convenient to write $X(t)$ instead of $X_t$.) \begin{definition} We say $X$ is {\em balanced in its small jumps (BSJ)}, if \begin{equation} L:=\lim_{\varepsilon\downarrow 0}\int_{\varepsilon\leq |y|<1}y\nu(dy) \quad\mbox{exists and is finite}. \label{eq:almost-symmetric} \end{equation} \end{definition} This condition is always satisfied when $\nu$ is symmetric on a sufficiently small interval $(-\varepsilon,\varepsilon)$ where $\varepsilon>0$, or when $\int_{0<|y|<1}|y|\nu(dy)<\infty$. (In the latter case, the non-Gaussian part of $X$ has finite variation.) In the case when $\int_{0<|y|<1}|y|\nu(dy)=\infty$, \eqref{eq:almost-symmetric} may be interpreted as saying that $\nu$ is {\em almost} symmetric in a sufficiently small neighborhood of the origin. Roughly speaking, this means that we allow the small jumps of the process to be dense in time, provided that the positive and negative jumps more or less balance each other. It allows us to still think of the number $\gamma-L$ as the `drift' of the process. It is clear that if $X$ is BSJ, then so is its dual $\tilde{X}$. Denote by $\tilde{\nu}$ the dual measure of $\nu$, so that $\tilde{\nu}(A)=\nu(-A)$ for $A\subset\RR$. If $\mu$ and $\nu$ are measures on $\RR$ and $E\subset\RR$, we say $\mu$ {\em majorizes} $\nu$ on $E$ if $\mu(F)\geq\nu(F)$ for every $F\subset E$. \begin{definition} \label{def:SRSS} Let $X=(X(t))_{t\geq 0}$ be a L\'evy process. 
\begin{enumerate}[(i)] \item We say $X$ is {\em strongly right skew symmetric (SRSS)} if all of the following hold: \begin{enumerate} \item $X$ is balanced in its small jumps; \item $\gamma\geq L$, where $L$ is the limit in \eqref{eq:almost-symmetric}; \item $\nu\big((a,\infty)\big)\geq\nu\big((-\infty,-a)\big)$ for all $a>0$; \item There exists $\varepsilon>0$ such that $\nu$ majorizes $\tilde{\nu}$ on $(0,\varepsilon)$. \end{enumerate} \item We say $X$ is {\em strongly left skew symmetric (SLSS)} if $\tilde{X}$ is SRSS. \item We say $X$ is {\em symmetric} if $\gamma=0$ and $\nu=\tilde{\nu}$. \end{enumerate} \end{definition} \begin{remark} {\rm ({\em a}) If $X$ is symmetric, then it is both SRSS and SLSS, since \eqref{eq:almost-symmetric} holds with $L=0$. ({\em b}) If $X$ is SRSS (resp. SLSS) and $\nu$ is finite, then $X$ is RSS (resp. LSS), since $b=\gamma-L$. The undesirable fourth condition in the definition of SRSS seems to be needed in order to carry out the pathwise construction of $X$ and its dual, below. At this point, the author does not see how to get around this technical difficulty, except in the special case when $f$ is bounded and continuous (see Subsection \ref{subsec:bounded} below). ({\em c}) The SRSS and SLSS conditions can be made more concrete in case $\nu$ has a density. Let $f,g: (0,\infty)\to[0,\infty)$ and suppose that \begin{equation*} \nu(dx)=\left(f(x)\chi_{(0,\infty)}(x)+g(-x)\chi_{(-\infty,0)}(x)\right)dx. \end{equation*} Then $\nu$ is a L\'evy measure if and only if $\int_0^\infty (x^2 \wedge 1)[f(x)+g(x)]\,dx<\infty$. The BSJ condition is now equivalent to convergence of the integral $\int_0^1 x[f(x)-g(x)]\,dx$. Conditions (c) and (d) in the definition of SRSS become, respectively (c)'\ \ $\int_a^\infty [f(x)-g(x)]\,dx\geq 0$ for all $a>0$. (d)'\ \ There is $\varepsilon>0$ such that $f(x)\geq g(x)$ for all $x\in(0,\varepsilon)$. The easiest way to satisfy both (c)' and (d)' is, of course, to take $f\geq g$ everywhere. This way, we may obtain nontrivial examples of nonsymmetric L\'evy processes that are SRSS (or SLSS). For instance, let \begin{equation*} f(x)=\frac{c}{x^p}, \qquad g(x)=\frac{c}{x^p+x^r}, \qquad\mbox{where} \quad c>0, \quad 2\leq p<3, \quad r>2p-2. \end{equation*} Then $r>p$, and $\nu$ satisfies \eqref{eq:almost-symmetric} with \begin{equation*} L=\int_0^1 x[f(x)-g(x)]\,dx=\int_0^1 \frac{cx^{r-2p+1}}{1+x^{r-p}}\,dx, \end{equation*} a convergent integral. Since (c)' and (d)' are obviously satisfied, the process will be SRSS if $\gamma\geq L$. Since $r>p$, the `large' positive jumps of $X(t)$ tend to be greater in magnitude (and occur more frequently) than the `large' negative jumps. On the other hand, the small jumps of the process in either direction are comparable in size. } \end{remark} \begin{example} \label{ex:stable} {\rm ({\em Stable processes}\,) Let $X$ be a stable L\'evy process with index of stability $\alpha$ ($0<\alpha\leq 2$). If $\alpha=2$, then $X$ is just a Brownian motion with drift, and the optimal rule is already specified by Theorem \ref{thm:Levy-finite}. (In fact, in this case the optimal rules are unique except for some trivial cases; see Allaart \cite{Allaart}.) If $\alpha<2$, then $\sigma=0$ and the L\'evy measure $\nu$ is of the form \begin{equation*} \nu(dx)=\left(\frac{c_1}{x^{1+\alpha}}\chi_{(0,\infty)}(x)+\frac{c_2}{|x|^{1+\alpha}}\chi_{(-\infty,0)}(x)\right)dx, \end{equation*} where $c_1\geq 0$, $c_2\geq 0$, and $c_1+c_2>0$ (see, e.g. Sato \cite{Sato}, p.~80). 
It follows that if $1\leq\alpha<2$, then $X$ is BSJ if and only if $c_1=c_2$, in which case $\nu$ is symmetric. In that case, $X$ is SRSS if $\gamma\geq 0$, and $X$ is SLSS if $\gamma\leq 0$. On the other hand, if $0<\alpha<1$, then the BSJ condition \eqref{eq:almost-symmetric} is always satisfied, with \begin{equation*} L=\int_{0<|x|<1}x\nu(dx)=\frac{c_1-c_2}{1-\alpha}, \end{equation*} and $X$ is SRSS if $\gamma\geq L\geq 0$; or similarly, $X$ is SLSS if $\gamma\leq L\leq 0$. Note that in the stable case, condition (d) in Definition \ref{def:SRSS} is satisfied whenever (a)-(c) are. } \end{example} \begin{example} \label{ex:CGMY} {\rm ({\em CGMY processes}\,) Another example of nonsymmetric processes that are SRSS or SLSS is given by the CGMY processes, which are frequently used in financial modeling. The CGMY process, named for Carr, Geman, Madan and Yor (see \cite{CGMY}), is a L\'evy process with L\'evy measure \begin{equation*} \nu(dx)=C\cdot\frac{e^{-G|x|}\chi_{(-\infty,0)}(x) + e^{-Mx}\chi_{(0,\infty)}(x)}{|x|^{1+Y}}, \end{equation*} where $C>0$, $G\geq 0$, $M\geq 0$ and $Y<2$, and it is assumed that $G>0$ and $M>0$ if $Y\leq 0$. The CGMY processes include the symmetric stable processes (take $G=M=0$) and are sometimes called {\em tempered stable processes}. The CGMY process with $Y=0$ is known as the {\em variance gamma process}. The very small jumps of a CGMY process behave essentially as in the symmetric stable case, and it is easy to check that all CGMY processes have the BSJ property. Furthermore, conditions (c) and (d) in Definition \ref{def:SRSS} are satisfied if and only if $M\leq G$. Hence, the CGMY process is SRSS if $M\leq G$ and $\gamma\geq L$, with $L$ as in \eqref{eq:almost-symmetric}; and it is SLSS if $M\geq G$ and $\gamma\leq L$. } \end{example} We can now state the result for the most general case. \begin{theorem} \label{thm:Levy-general} Let $X=(X(t))_{t\geq 0}$ be a L\'evy process, and let $f$ be as in Theorem \ref{thm:random-walk}. For fixed $T>0$, consider the problem \eqref{eq:Levy-objective}. \begin{enumerate}[(i)] \item If $X$ is SRSS, the rule $\tau\equiv T$ is optimal. \item If $X$ is SLSS, the rule $\tau\equiv 0$ is optimal. \item If $X$ is symmetric, any rule $\tau$ satisfying $\mathrm{P}\big(X(\tau)=M(\tau)\ \mbox{or}\ \tau=T\big)=1$ is optimal. \end{enumerate} \end{theorem} The proof of Theorem \ref{thm:Levy-general} hinges on the following construction. Once this is accomplished, the rest of the proof is the same as before. \begin{lemma} \label{lem:general-construction} Let $X$ be an SRSS L\'evy process. Then, on a suitable probability space $(\Omega,\FF,\sP)$, we can construct $X$ and its dual $\tilde{X}$ in such a way that there exists a set $\Omega_0\subset \Omega$ with $\mathrm{P}(\Omega_0)=1$ such that, for all $0\leq s<t$ and for all $\omega\in\Omega_0$, \begin{equation} X(t;\omega)-X(s;\omega)\geq \tilde{X}(t;\omega)-\tilde{X}(s;\omega). \label{eq:increment-domination-general} \end{equation} \end{lemma} \begin{proof} Let $\varepsilon>0$ be as in the definition of SRSS.
Then $X(t)$ can be expressed by the {\em L\'evy-Ito decomposition} \begin{equation*} X(t)=\gamma' t+\sigma B(t)+\int_{|y|<\varepsilon} yN'(t,dy)+\int_{|y|\geq\varepsilon}yN(t,dy), \end{equation*} where $B(t)$ is a standard Brownian motion on $\RR$, $\gamma':=\gamma-\int_{\varepsilon\leq|y|<1}y\nu(dy)$, $(N(t,\cdot))_{t\geq 0}$ is a Poisson random measure with intensity measure $\nu$ which is independent of the Brownian motion, and $N'(t,\cdot)$ is defined by \begin{equation*} N'(t,dy)=N(t,dy)-t\nu(dy), \qquad t\geq 0. \end{equation*} In general, the integrals $\int_{|y|<\varepsilon} yN(t,dy)$ and $\int_{|y|<\varepsilon} y\nu(dy)$ need not converge, but the `compensated sum of small jumps', $\int_{|y|<\varepsilon} yN'(t,dy)$, always does. Now we will construct a sequence of L\'evy processes $Y_1,Y_2,\dots$ and their duals $\tilde{Y}_1,\tilde{Y}_2,\dots$, as follows. Let $\varepsilon=\varepsilon_1>\varepsilon_2>\dots$ be a sequence of numbers decreasing to zero. Define first \begin{equation*} Y_1(t):=(\gamma-L)t+\sigma B(t)+\int_{|y|\geq \varepsilon}yN(t,dy). \end{equation*} Then $Y_1$ has finite L\'evy measure $\nu_1$, where $\nu_1$ is the restriction of $\nu$ to the set $\{y:|y|\geq\varepsilon\}$. Clearly $\nu_1\big((a,\infty)\big)\geq\nu_1\big((-\infty,-a)\big)$ for all $a>0$, since $\nu_1$ simply inherits this property from $\nu$. Since $\gamma\geq L$, we can construct $Y_1$ and its dual $\tilde{Y}_1$ on the same probability space so that these processes satisfy the increment property \eqref{eq:increment-domination}. Next, for $n\geq 2$, let \begin{equation*} Y_n(t)=\int_{\varepsilon_n\leq|y|<\varepsilon_{n-1}}yN(t,dy), \end{equation*} and note that by the usual independence property of Poisson point processes, the processes $Y_n$, $n\in\NN$ may be constructed independently of each other. Now for each $n\geq 2$, $Y_n$ is a compound Poisson process with (finite) L\'evy measure $\nu_n$, where $\nu_n$ is the restriction of $\nu$ to the set $\{y:\varepsilon_n\leq|y|<\varepsilon_{n-1}\}$. Since $\nu$ majorizes $\tilde{\nu}$ on $(0,\varepsilon)$, it follows that $\nu_n\big((a,\infty)\big)\geq\tilde{\nu}_n\big((a,\infty)\big)$ for all $n\geq 2$. (Note that this fact would not be guaranteed without the fourth condition in the definition of SRSS.) Thus, we can construct $Y_n$ and its dual $\tilde{Y}_n$ together as in the previous subsection in such a way that these processes satisfy \eqref{eq:increment-domination}. Finally, put \begin{equation*} X_n(t):=Y_1(t)+\dots+Y_n(t), \qquad \tilde{X}_n(t):=\tilde{Y}_1(t)+\dots+\tilde{Y}_n(t) \end{equation*} for $n\in\NN$, so that $\tilde{X}_n$ is the dual of $X_n$. Since the property \eqref{eq:increment-domination} is clearly preserved under addition of two or more processes, we have that, for all $0\leq s<t$, \begin{equation} X_n(t)-X_n(s)\geq\tilde{X}_n(t)-\tilde{X}_n(s) \label{eq:partial-increment} \end{equation} pointwise on $\Omega$. Finally, note that $X_n(t)$ can be written as \begin{align*} X_n(t)&=(\gamma-L)t+\sigma B(t)+\int_{|y|\geq\varepsilon_n}yN(t,dy)\\ &=\gamma_n t+\sigma B(t)+\int_{\varepsilon_n\leq|y|<\varepsilon}yN'(t,dy)+\int_{|y|\geq \varepsilon}yN(t,dy), \end{align*} where \begin{equation*} \gamma_n:=\gamma-L+\int_{\varepsilon_n\leq|y|<\varepsilon}y\nu(dy). 
\end{equation*} By \eqref{eq:almost-symmetric}, $\gamma_n\to\gamma'$, and it follows from Theorem 2.6.2 in \cite{Applebaum} that $X_n(t)\to X(t)$ uniformly in $[0,T]$ with probability one, as long as the sequence $\{\varepsilon_n\}$ decreases fast enough so that \begin{equation} \int_{0<|y|<\varepsilon_n}y^2\nu(dy)\leq\frac{1}{8^n} \label{eq:epsilon-condition} \end{equation} for every $n$. Similarly, $\tilde{X}_n(t)\to\tilde{X}(t)$ uniformly in $[0,T]$ with probability one. And, by taking limits in \eqref{eq:partial-increment}, we see that $X$ and $\tilde{X}$ satisfy \eqref{eq:increment-domination-general} everywhere on the set on which both processes converge. \end{proof} \subsection{The case of bounded and continuous $f$} \label{subsec:bounded} In general, it seems difficult to eliminate the unnatural condition (d) in the definition of SRSS, except when the reward function $f$ is bounded and continuous on $[0,\infty)$. This case includes, for instance, the natural reward function $f(x)=e^{-\sigma x}$ with $\sigma>0$. Say a general L\'evy process $X=(X(t))_{t\geq 0}$ with L\'evy-Khintchine representation \eqref{eq:LK-representation-2} is {\em right skew symmetric} (RSS) if \begin{equation} \gamma\geq\liminf_{\delta\downarrow 0}\int_{\delta<|y|<1}y\nu(dy), \label{eq:weaker-RSS-condition} \end{equation} and $\nu\big((a,\infty)\big)\geq\nu\big((-\infty,-a)\big)$ for all $a>0$. Say $X$ is {\em left skew symmetric} (LSS) if $\tilde{X}$ is right skew symmetric. \begin{theorem} \label{thm:bounded-f} Let $X=(X(t))_{t\geq 0}$ be a L\'evy process, and let $f:[0,\infty)\to\RR$ be bounded, nonincreasing, continuous and convex. For fixed $T>0$, consider the problem \eqref{eq:Levy-objective}. \begin{enumerate}[(i)] \item If $X$ is RSS, the rule $\tau\equiv T$ is optimal. \item If $X$ is LSS, the rule $\tau\equiv 0$ is optimal. \end{enumerate} \end{theorem} (Observe that the symmetric case is already covered by Theorem \ref{thm:Levy-general}.) \begin{proof} Suppose first that $X$ is RSS. Let $L:=\liminf_{\delta\downarrow 0}\int_{\delta<|y|<1}y\nu(dy)$, and choose a sequence $\delta_1>\delta_2>\dots>0$ so that $\lim_{k\to\infty}\int_{\delta_k<|y|<1}y\nu(dy)=L$. For each $n$, choose $k_n$ so that $\varepsilon_n:=\delta_{k_n}$ satisfies \eqref{eq:epsilon-condition}. Now we construct the process $X$ as an almost-sure uniform limit of a sequence of processes $X_n=(X_n(t))_{t\geq 0}$, $n\in\NN$, exactly as in the proof of Lemma \ref{lem:general-construction}. Then each $X_n$ is RSS in the sense of Subsection \ref{subsec:interlacing}. (Note that in order to construct the processes $X_n$ in this way, without their duals, condition (d) in Definition \ref{def:SRSS} is not needed.) For each $t\geq 0$, let $\FF_t$ be the smallest $\sigma$-algebra containing each $\sigma(\{X_n(s): 0\leq s\leq t\})$, $n\in\NN$. Let $\Omega_0$ be the subset of $\Omega$ on which $X_n(t)$ converges uniformly in $t$. By arbitrarily redefining $X(t;\omega)\equiv 0$ for $\omega\in\Omega\backslash\Omega_0$, we see that $X$ is adapted to $\{\FF_t\}$, and clearly $X_t-X_s$ is independent of $\FF_s$ for each $0\leq s\leq t$. Thus, by Theorem \ref{thm:Levy-finite}, for any stopping time $\tau$ relative to $\{\FF_t\}$, \begin{equation} \mathrm{E}\big[f\big(M_n(T)-X_n(\tau)\big)\big]\leq\sE\big[f\big(M_n(T)-X_n(T)\big)\big].
\label{eq:finite-n-inequality} \end{equation} Now it follows from the uniform convergence of $X_n$ to $X$ that, pointwise on $\Omega_0$, $M_n(T)\to M(T)$ and $X_n(\tau)\to X(\tau)$, and hence, by the continuity of $f$, $f\big(M_n(T)-X_n(\tau)\big)\to f\big(M(T)-X(\tau)\big)$ and $f\big(M_n(T)-X_n(T)\big)\to f\big(M(T)-X(T)\big)$. Thus, taking limits in \eqref{eq:finite-n-inequality} we see via the Bounded Convergence Theorem that \begin{equation*} \mathrm{E}\big[f\big(M(T)-X(\tau)\big)\big]\leq \sE\big[f\big(M(T)-X(T)\big)\big]. \end{equation*} Therefore, the rule $\tau\equiv T$ is optimal. A similar argument shows that the rule $\tau\equiv 0$ is optimal if $X$ is LSS. \end{proof} \begin{remark} {\rm If we try to extend the above reasoning to unbounded continuous $f$ via the Dominated Convergence Theorem, we run into the difficulty of bounding expectations such as $\mathrm{E}|f(M_n(T))|$ uniformly in $n$, since there is no guarantee that $\mathrm{E}|f(M_n(T))|$ converges to $\mathrm{E}|f(M(T))|$. } \end{remark} \begin{remark} {\rm It may seem that in Theorem \ref{thm:Levy-general} we could have weakened the SRSS condition similarly, replacing (a) and (b) in Definition \ref{def:SRSS} with \eqref{eq:weaker-RSS-condition}. But this would not actually give a weaker hypothesis, since in the presence of condition (d), the integral in \eqref{eq:almost-symmetric} increases monotonically as $\varepsilon\downarrow 0$. } \end{remark} \end{document}
\begin{document} \title{Transverse coherence of photon pairs generated in spontaneous parametric down-conversion} \author{Martin Hamar} \author{Jan Pe\v{r}ina Jr.} \author{Ond\v{r}ej Haderka} \author{V\'{a}clav Mich\'{a}lek} \affiliation{Joint Laboratory of Optics of Palack\'{y} University and Institute of Physics of Academy of Sciences of the Czech Republic, 17. listopadu 50a, 772 07 Olomouc, Czech Republic} \begin{abstract} Coherence properties of the down-converted beams generated in spontaneous parametric down-conversion are investigated in detail using an iCCD camera. Experimental results are compared with those from a theoretical model developed for pulsed pumping with a Gaussian transverse profile. The results make it possible to tailor the shape of the correlation area of the signal and idler photons using pump-field and crystal parameters. As an example, splitting of the correlation area caused by a two-peak pump-field spectrum is experimentally studied. \end{abstract} \pacs{42.50.Ar,42.65.Lm} \maketitle \section{Introduction} Light emitted from spontaneous parametric down-conversion in a nonlinear crystal is composed of photon pairs. The two photons comprising a photon pair are called the signal and idler photons for historical reasons. The first theoretical investigation of this process was carried out in 1968 \cite{Giallorenziho1968}. Already this study revealed that the frequencies and emission directions of the two photons in a pair are fully determined by the laws of energy and momentum conservation. For this reason, a strong correlation (entanglement) occurs between the properties of the signal and idler photons. In the ideal case of an infinitely long and wide nonlinear crystal and monochromatic plane-wave pumping, a plane-wave signal photon at frequency $ \omega_s $ is paired with exactly one plane-wave idler photon at frequency $ \omega_i $ determined by the conservation of energy. The emission angles of these photons are given by momentum conservation, which forms the phase-matching conditions. Possible signal (and similarly idler) emission directions lie on a cone whose axis coincides with the pump-beam propagation direction. However, real experimental conditions have enforced the consideration of crystals with finite dimensions \cite{Hong1985,Wang1991}, pump beams with nonzero divergence \cite{Grayson1994,Steuernagel1998}, as well as pulsed pumping \cite{Keller1997,DiGiuseppe1997,Grice1997,PerinaJr1999}. During this investigation, the approximation based on a multidimensional Gaussian spectral two-photon amplitude has been found extraordinarily useful \cite{Joobeur1994,Joobeur1996}. The developed models have revealed that the spatial characteristics of the pump beam are transferred, to a certain extent, to those of a photon pair generated in a nonlinear crystal, especially in the case of short crystals \cite{Nasr2002,Walborn2004,Monken1998,Molina-Terriza2005}. These models have also been useful in quantifying real effects in applied experimental setups utilizing photon pairs \cite{Shih2003,Law2004}. They have also been recently extended to photonic \cite{Centini2005,PerinaJr2006} and wave-guiding \cite{Ding1995,DeRossi2002,Booth2002,Walton2003,Walton2004,PerinaJr2008} structures. Effects at nonlinear boundaries have also been taken into account \cite{PerinaJr2009,PerinaJr2009a}.
In this paper, we continue the previous investigations of spatial photon-pair properties \cite{Saleh1998,Howell2004,DAngelo2004,Brambilla2004} with an experimental study of the transverse profiles of the down-converted beams as well as of the correlation areas of the signal and idler photons using an iCCD camera \cite{Jost1998,Haderka2005,Haderka2005a}. Special attention is paid to the role of pump-beam parameters. Experimental results are compared with a theoretical model that considers a Gaussian spectrum and an elliptical pump-beam profile. We note that sensitive CCD cameras have also been found useful in investigations of spatial properties of more intense twin beams \cite{Jiang2003,Jedrkiewicz2004,Jedrkiewicz2006}. The paper is organized as follows. A theoretical model is presented in Sec.~II. Sec.~III presents a theoretical analysis of the parameters of the correlation area as well as of the spectral properties of the down-converted fields. An experimental method based on the use of an iCCD camera is discussed in detail in Sec.~IV. The experimentally observed dependence of parameters of the correlation area on pump-beam characteristics and crystal length is reported in Sec.~V. Sec.~VI is devoted to splitting of the correlation area and its experimental observation. Conclusions are drawn in Sec.~VII. \section{Theory} The process of spontaneous parametric down-conversion is described by the following interaction Hamiltonian $ \hat{H}_{\rm int} $ \cite{Hong1985,Saleh1991,Shih2003}: \begin{eqnarray} \hat{H}_{\rm int}(t) &=& \varepsilon_0 \int\limits_V d{\bf r} \chi^{(2)} : {\bf E}_{p}^{(+)}({\bf r},t) \hat{\bf E}_{s}^{(-)} ({\bf r},t) \hat{\bf E}_{i}^{(-)}({\bf r},t) \nonumber \\ & & \mbox{} + {\rm H.c.} , \label{1} \end{eqnarray} where $ {\bf E}_{p}^{(+)} $ is the positive-frequency part of the pump electric-field amplitude, whereas $ {\bf E}_{s}^{(-)} $ ($ {\bf E}_{i}^{(-)} $) stands for the negative-frequency part of the signal (idler) electric-field amplitude operator. Symbol $ \chi^{(2)} $ denotes the second-order susceptibility tensor and $ : $ is shorthand for tensor reduction with respect to its three indices. The vacuum permittivity is denoted as $ \varepsilon_0 $, the interaction volume as $ V $, and $ {\rm H.c.} $ stands for the Hermitian-conjugated term. We further consider parametric down-conversion in a LiIO$ {}_3 $ crystal with its optical axis perpendicular to the $ z $ axis, along which the fields propagate, and a type-I interaction. The pump field is assumed to be polarized vertically (it propagates as an extraordinary wave) whereas the signal and idler fields are polarized horizontally (they propagate as ordinary waves). In this specific configuration, scalar optical fields are sufficient for the description. The interacting optical fields can then be decomposed into monochromatic plane waves with frequencies $ \omega_a $ and wave vectors $ {\bf k}_a $: \begin{eqnarray} E_a^{(+)}({\bf r},t) &=& \int d{\bf k}_a E_a^{(+)}({\bf k}_a) \exp(i{\bf k}_a{\bf r}-i\omega_a t) + {\rm H.c.}; \nonumber \\ & & \hspace{3cm} a=p,s,i. \label{2} \end{eqnarray} The signal and idler fields at the single-photon level have to be described quantum mechanically and so their spectral amplitudes $ \hat{E}_a^{(+)}({\bf k}_a) $ can be expressed as $ \hat{E}_a^{(+)}({\bf k}_a) = i \sqrt{\hbar\omega_a}/\sqrt{2\varepsilon_0 c {\cal A} n_a(\omega_a)} \hat{a}_a({\bf k}_a) $ using annihilation operators $ \hat{a}_a({\bf k}_a) $ that remove one photon from a plane-wave mode $ {\bf k}_a $ in field $ a $.
Symbol $ \hbar $ stands for the reduced Planck constant, $ c $ is the speed of light in vacuum, $ {\cal A} $ the transverse area of the beam, and $ n_a $ the index of refraction of field $ a $. Under these conditions, the interaction Hamiltonian $ \hat{H}_{\rm int} $ in Eq.~(\ref{1}) takes the form \cite{Joobeur1994}: \begin{eqnarray} \hat{H}_{\rm int}(t) &=& A_n(\omega_s^0,\omega_i^0) \int d\mathbf{k}_s \int d\mathbf{k}_i \int d\mathbf{k}_p E_{p}^{(+)}(\mathbf{k}_p ) \nonumber \\ & & \mbox{} \times \exp\left\{i\left[ \omega(\mathbf{k}_p) - \omega(\mathbf{k}_s) - \omega(\mathbf{k}_i) \right] t \right\} \nonumber \\ & & \mbox{} \times \int\limits_V d\mathbf{r} \exp\left[-i\left(\mathbf{k}_p - \mathbf{k}_s - \mathbf{k}_i \right)\mathbf{r}\right] \nonumber \\ & & \mbox{} \times \hat{a}^{\dagger}_s(\mathbf{k}_s) \hat{a}^{\dagger}_i(\mathbf{k}_i) + {\rm H.c.} \label{3} \end{eqnarray} We have assumed in deriving Eq.~(\ref{3}) that the function $ A_n (\omega_s,\omega_i) = - \hbar\sqrt{\omega_s\omega_i} \chi^{(2)} / \left(2c{\cal A} \sqrt{n_s(\omega_s) n_i(\omega_i)}\right) $ is a slowly varying function of frequencies $ \omega_s $ and $ \omega_i $ and can be approximated by its value taken at the central frequencies $ \omega_s^0 $ and $ \omega_i^0 $. The quantum state $ | \Psi \rangle $ of a generated photon pair can be obtained by solving the Schr\"{o}dinger equation to the first power of the interaction constant, which results in the formula: \begin{equation} | \Psi \rangle = - \frac{i}{\hbar} \int_{-\infty}^{\infty} dt \hat{H}_{\rm int} (t) | {\rm vac} \rangle; \label{4} \end{equation} $ |{\rm vac} \rangle $ denotes the vacuum state. Substitution of the interaction Hamiltonian $ \hat{H}_{\rm int} $ from Eq.~(\ref{3}) into Eq.~(\ref{4}) provides the following form for the quantum state $ |\Psi\rangle $ \cite{Grayson1994,Shih2003,Ou1989,Mandel1995}: \begin{equation} | \Psi \rangle = \int d \mathbf{k}_{s} \int d \mathbf{k}_{i} S(\mathbf{k}_{s},\mathbf{k}_{i}) \hat{a}_s^\dagger({\bf k}_s) \hat{a}_i^\dagger({\bf k}_i) |{\rm vac}\rangle , \label{5} \end{equation} where the newly introduced two-photon amplitude $ S $ takes the form: \begin{eqnarray} S(\mathbf{k}_{s},\mathbf{k}_{i}) &=& A'_n \int d \mathbf{k}_{p} E_{p}^{( + )}(\mathbf{k}_{p}) \delta(\omega_p - \omega_s - \omega_i) \nonumber \\ & & \times \int_V d \mathbf{r} \exp\left[-i( \mathbf{k}_{p} - \mathbf{k}_{s} - \mathbf{k}_{i}) \mathbf{r} \right] \label{6} \end{eqnarray} and $ A'_n = -(2\pi i / \hbar) A_n(\omega_s^0,\omega_i^0) $. We note that the squared modulus $ |S({\bf k}_s,{\bf k}_i)|^2 $ of the two-photon amplitude gives the probability density of simultaneous generation of a signal photon with wave vector $ {\bf k}_s $ and its twin with wave vector $ {\bf k}_i $. Spectral resolution is usually not available in experiments with photon pairs; the photon-pair coincidence-count rate is then proportional to the fourth-order correlation function $ G_{s,i} $ defined as: \begin{eqnarray} G_{s,i}(\xi_s,\delta_s,\xi_i,\delta_i) &=& \frac{ \sin(\xi_s)\sin(\xi_i) }{c^6} \int d\omega_s \omega_s^2 \nonumber \\ & & \hspace{-3cm} \times \int d\omega_i \omega_i^2 |h(\omega_s) h(\omega_i)|^2 |S(\xi_s,\delta_s,\omega_s, \xi_i,\delta_i,\omega_i)|^2; \label{7} \end{eqnarray} $ S(\xi_s,\delta_s,\omega_s,\xi_i,\delta_i,\omega_i) \equiv S({\bf k}_s,{\bf k}_i) $.
The propagation direction of a photon is parameterized by radial emission angles $ \xi_a $ (determining declination from the $ z $ axis) and azimuthal emission angles $ \delta_a $ (describing rotation around the $ z $ axis starting from the $ x $ axis); $ a=s,i $ (see also Fig.~\ref{fig8}). Functions $ h(\omega_s) $ and $ h(\omega_i) $ introduced in Eq.~(\ref{7}) describe amplitude spectral and/or geometrical filtering of the photons in front of the detectors. More detailed information is contained in the intensity spectrum $ S_s $ of the signal field for photon pairs emitted into fixed signal- and idler-photon directions given by angles $ \xi_s $, $ \delta_s $, $ \xi_i $, and $ \delta_i $: \begin{eqnarray} S_s(\omega_s;\xi_s,\delta_s,\xi_i,\delta_i) &=& \frac{ \sin(\xi_s)\sin(\xi_i) \omega_s^2 |h(\omega_s)|^2}{c^6} \nonumber \\ & & \hspace{-2.5cm} \times \int d\omega_i \omega_i^2 |h(\omega_i)|^2 |S(\xi_s,\delta_s,\omega_s, \xi_i,\delta_i,\omega_i)|^2. \label{8} \end{eqnarray} If the signal-photon emission direction described by angles $ \xi_s $ and $ \delta_s $ is not resolved, an integrated signal-field emission spectrum $ S_s^{\rm int} $ is observed: \begin{equation} S_s^{\rm int}(\omega_s;\xi_i,\delta_i) = \int_{-\pi/2}^{\pi/2} d\xi_s \int_{-\pi}^{\pi} d\delta_s S_s(\omega_s;\xi_s,\delta_s,\xi_i,\delta_i). \label{9} \end{equation} Formulas similar to those in Eqs.~(\ref{8}) and (\ref{9}) can also be derived for the idler field. On the other hand, when the emission directions are not resolved, spectral correlations between the signal and idler fields are characterized by a two-photon spectral amplitude $ \Phi_{s,i} $ whose squared modulus is defined as: \begin{eqnarray} |\Phi_{s,i}(\omega_s,\omega_i)|^2 &=& \frac{\omega_s^2 \omega_i^2}{c^6} \int d\delta_s \int d\xi_s \int d\delta_i \int d\xi_i \nonumber \\ & & \hspace{-1cm} \sin(\xi_s) \sin(\xi_i) |h(\omega_s) h(\omega_i)|^2 \nonumber \\ & & \hspace{-1cm} \mbox{} \times |S(\xi_s,\delta_s,\omega_s,\xi_i,\delta_i,\omega_i)|^2. \label{10} \end{eqnarray} We further consider a Gaussian pump beam with the electric-field amplitude $ E_{p}^{(+)} $ in the form: \begin{eqnarray} E_p^{(+)}({\bf r},t) &=& \int d\omega_p A_p(\omega_p)\exp(i{\bf k}_{pz}z -i\omega_p t) \nonumber \\ & & \hspace{-10mm} \times \frac{1}{W_{px}(z)} \exp\left[ -\frac{x^2}{W_{px}^2(z)} \right] \exp\left[ -ik_p\frac{x^2}{2R_{px}(z)} \right] \nonumber \\ & & \hspace{-10mm} \times \frac{1}{W_{py}(z)} \exp\left[ -\frac{y^2}{W_{py}^2(z)} \right] \exp\left[ -ik_p\frac{y^2}{2R_{py}(z)} \right] \nonumber \\ & & \hspace{-10mm} \times \exp[i\zeta_p(z)]; \label{11} \end{eqnarray} $ k_p = |{\bf k}_p| $. The functions $ W_{pa} $, $ R_{pa} $, and $ \zeta_p $ are defined as: \begin{eqnarray} W_{pa}(z) &=& W_{pa}^0 \sqrt{ 1+ \frac{z^2}{(z_{pa}^0)^2} }, \hspace{2mm} W_{pa}^0 = \sqrt{\frac{2z_{pa}^0}{k_p}}, \\ R_{pa}(z) &=& z \left[ 1+ \frac{(z_{pa}^0)^2}{z^2} \right] , \hspace{10mm} a=x,y ,\\ \zeta_p(z) &=& [\arctan(z/z_{px}^0) + \arctan(z/z_{py}^0)]/2. \end{eqnarray} The function $ A_p $ introduced in Eq.~(\ref{11}) gives the pump-field amplitude temporal spectrum. Constants $ z_{px}^0 $ and $ z_{py}^0 $ are the Rayleigh ranges corresponding to the waist radii $ W_{px}^0 $ and $ W_{py}^0 $ in the $ x $ and $ y $ directions, respectively. Function $ W_{px}(z) $ [$ W_{py}(z) $] gives the radius of the beam in the $ x $ [$ y $] direction, and $ R_{px}(z) $ [$ R_{py}(z) $] its wavefront radius of curvature, at position $ z $.
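As an illustrative aside, the beam functions $ W_{pa}(z) $, $ R_{pa}(z) $, and $ \zeta_p(z) $ defined after Eq.~(\ref{11}) are straightforward to evaluate numerically. The following minimal Python sketch assumes NumPy; the wavelength and Rayleigh-range values are placeholders chosen only for illustration and do not correspond to the experimental parameters.
\begin{verbatim}
import numpy as np

# Gaussian pump-beam functions following the definitions after Eq. (11).
# lambda_p, z0x and z0y are assumed, illustrative values.
lambda_p = 405e-9                    # pump wavelength [m] (assumed)
k_p = 2 * np.pi / lambda_p           # pump wave number
z0x, z0y = 0.05, 0.05                # Rayleigh ranges z_px^0, z_py^0 [m] (assumed)

def beam_width(z, z0):
    """W_pa(z) = W_pa^0 sqrt(1 + z^2/(z_pa^0)^2), W_pa^0 = sqrt(2 z_pa^0/k_p)."""
    W0 = np.sqrt(2.0 * z0 / k_p)
    return W0 * np.sqrt(1.0 + (z / z0) ** 2)

def wavefront_radius(z, z0):
    """R_pa(z) = z [1 + (z_pa^0)^2/z^2]."""
    return z * (1.0 + (z0 / z) ** 2)

def gouy_phase(z, z0x, z0y):
    """zeta_p(z) = [arctan(z/z_px^0) + arctan(z/z_py^0)]/2."""
    return 0.5 * (np.arctan(z / z0x) + np.arctan(z / z0y))

z = 0.02                             # position along the beam [m] (assumed)
print(beam_width(z, z0x), wavefront_radius(z, z0x), gouy_phase(z, z0x, z0y))
\end{verbatim}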
We assume that the nonlinear crystal is sufficiently short so that changes of the pump-field amplitude $ E_p^{(+)} $ in the transverse plane along the $ z $ axis can be neglected. In this case, the pump-field amplitude $ E_p^{(+)} $ can be characterized both by its temporal spectrum $ A_p(\omega_p) $ and spatial spectrum $ F_{p}(k_{px},k_{py}) $ in the transverse plane: \begin{eqnarray} E_p^{(+)}({\bf r},t) &=& \int d\omega_p A_p(\omega_p)\int d{\bf k}_{px} \int d{\bf k}_{py} \nonumber \\ & & \hspace{-15mm} F_p({\bf k}_{px},{\bf k}_{py}) \exp(i{\bf k}_{px}x) \exp(i{\bf k}_{py}y) \nonumber \\ & & \hspace{-15mm} \times \exp(i{\bf k}_{pz}z) \exp(-i\omega_p t); \label{15} \end{eqnarray} The spatial spectrum $ F_p $ corresponding to the Gaussian beam written in Eq.~(\ref{11}) and propagating along the $ z $ axis can be expressed as: \begin{eqnarray} F_p({\bf k}_{px},{\bf k}_{py}) &=& \frac{1}{W_{px}(z_0) W_{py}(z_0)} \frac{2}{ \bar{W}_{px} \bar{W}_{py} } \nonumber \\ & & \hspace{-20mm} \times \exp\left[ -\frac{{\bf k}_{px}^2}{\bar{W}_{px}^2} \right] \exp\left[ -\frac{{\bf k}_{py}^2}{\bar{W}_{py}^2} \right] \exp[i\zeta_p(z_0)], \label{16} \end{eqnarray} where the position $ z_0 $ lies inside the crystal. Complex spectral half-widths $ \bar{W}_{px} $ and $ \bar{W}_{py} $ of the spatial spectrum in the transverse plane are given as follows: \begin{equation} \bar{W}_{pa} = 2\sqrt{ \frac{1}{W_{pa}^2(z_0)} + \frac{ik_p}{2R_{pa}(z_0)} }, \hspace{5mm} a=x,y . \end{equation} In the following, we consider a chirped Gaussian pump pulse whose temporal amplitude spectrum $ A_p $ can be expressed in the form: \begin{equation} A_p(\omega_p) = \xi_p \frac{\tau_p}{\sqrt{2(1+ia_p)}} \exp\left[ -\frac{\tau_p^2}{4(1+ia_p)} \omega_p^2 \right]. \label{18} \end{equation} In Eq.~(\ref{18}), $ \tau_p $ denotes the pump-pulse duration, $ a_p $ stands for the chirp parameter, and $ \xi_p $ is the pump-field amplitude. We note that the amplitude width $ \Delta\omega_p $ (given as full width at $ 1/e $ of the maximum) of the pulse written in Eq.~(\ref{18}) equals $ 4\sqrt{1+a_p^2}/\tau_p $. Considering the pump-field amplitude $ E_p^{(+)} $ as given in Eq.~(\ref{15}), the two-photon amplitude $ S $ defined in Eq.~(\ref{6}) can be recast into the form: \begin{eqnarray} S(\xi_s,\delta_s,\omega_s, \xi_i,\delta_i,\omega_i) &=& c A_p(\omega_s+\omega_i) \nonumber \\ & & \hspace{-32mm} \times F_p({\bf k}_{sx}+{\bf k}_{ix},{\bf k}_{sy}+{\bf k}_{iy}) \nonumber \\ & & \hspace{-32mm} \times L_z {\rm sinc} \left\{ \frac{[{\bf k}_{pz}(\omega_s+\omega_i)-{\bf k}_{sz}(\omega_s)-{\bf k}_{iz}(\omega_i)]L_z}{2} \right\} \nonumber \\ & & \hspace{-32mm} \times \exp \left\{ -i \frac{[{\bf k}_{pz}(\omega_s+\omega_i)-{\bf k}_{sz}(\omega_s)-{\bf k}_{iz}(\omega_i)]L_z}{2} \right\} ; \nonumber \\ & & \label{19} \end{eqnarray} $ {\rm sinc}(x) = \sin(x)/x $. In deriving Eq.~(\ref{19}), we have assumed that the crystal extends from $ z = -L_z $ to $ z = 0 $, $ L_z $ being the crystal length. The transverse profile of the crystal is also assumed to be sufficiently wide. \section{Correlation area, spectral properties} The correlation area is defined by the profile of the probability density of detecting a signal photon in the direction described by angles ($\xi_s,\delta_s $) provided that its idler twin has been detected in a fixed direction given by angles ($\xi_i,\delta_i $). In coherence theory, this probability is given by the fourth-order correlation function $ G_{s,i} $ defined in Eq.~(\ref{7}).
Because the correlation function $ G_{s,i} $ is usually a smooth function of its arguments, it can be conveniently parameterized using angular widths (given as full-widths at $ 1/e $ of maximum) in the radial ($ \Delta\xi_s $) and azimuthal ($ \Delta\delta_s $) directions. In general, the parameters of the correlation area depend on the properties of the crystal material as well as on the crystal length, pump-field spectral bandwidth, and transverse pump-beam profile. The last two parameters make it possible to tailor the characteristics of the correlation area over wide ranges. In the theoretical analysis of Sec.~III, we use radial ($ \xi $) and azimuthal ($ \delta $) angles inside the nonlinear crystal. The reason is that we want to exclude from the discussion the effect of mixing of the spatial and frequency domains at the output plane of the crystal. However, starting from Sec.~IV, radial ($ \xi $) and azimuthal ($ \delta $) angles outside the nonlinear crystal are naturally used in the presentation of the experimental results. In the radial direction, the crystal length and pump-field spectral bandwidth, as well as the transverse pump-beam profile, play a role. The dependence of the radial width of the correlation area on the crystal length $ L_z $ emerges through the phase-matching condition in the $ z $ direction. This condition is mathematically described by the expression $ {\rm sinc}(\Delta {\bf k}_z L_z/2) $ in Eq.~(\ref{19}); $ \Delta {\bf k}_z = {\bf k}_{pz} - {\bf k}_{sz} - {\bf k}_{iz} $. The actual radial width is determined by this condition and conservation of energy ($ \omega_p = \omega_s + \omega_i $). According to the formula in Eq.~(\ref{19}), the longer the crystal, the smaller the radial width. Analytical theory also predicts narrowing of the signal- and idler-field spectra with an increasing crystal length. If pulsed pumping is considered, the wider the pump-field spectrum, the greater the radial width and also the greater the signal-field spectral width (compare Figs.~\ref{fig1}c, d with Figs.~\ref{fig1}a, b). This can be understood as follows: more pump-field frequencies are present in a wider pump-field spectrum and so more signal- and idler-field frequencies are allowed to obey the phase-matching conditions in the $ z $ direction and conservation of energy. In more detail and following the graphs in Figs.~\ref{fig1}c and d, signal-field photons with different wavelengths are emitted into different radial emission angles $ \xi_s $. Superposition of photon fields emitted into different radial emission angles $ \xi_s $ then broadens the overall signal-field spectrum. It is important to note that all idler-field photons have nearly the same wavelengths, which means that signal-field photons emitted into different radial emission angles $ \xi_s $ use different wavelengths of the pulsed-pump spectrum. The transverse pump-beam profile affects the radial width through the phase-matching condition in the radial plane. This radial phase-matching condition is an additional requirement that must be fulfilled by a generated photon pair. Qualitatively, the more the pump beam is focused, the wider its spatial spectrum in the radial direction and so the weaker the radial phase-matching condition. However, this dependence is rather weak for radial angles, as follows from the comparison of graphs in Figs.~\ref{fig1}a, b and Figs.~\ref{fig1}e, f. On the other hand, focusing of the pump beam leads to considerable broadening of the signal- and idler-field spectra in all radial emission directions.
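Before turning to the combined case of a focused pulsed pump, we note as an illustrative aside that the narrowing of the longitudinal phase-matching factor with increasing crystal length, discussed above, can be checked with a short numerical sketch. The Python code below (assuming NumPy) scans the mismatch $ \Delta k_z $ directly instead of deriving it from the dispersion of LiIO$ {}_3 $, so the numbers are only indicative.
\begin{verbatim}
import numpy as np

# Toy sketch of the factor sinc^2(Delta k_z L_z / 2) from Eq. (19).
# Delta k_z is scanned directly; no dispersion model is used, so the
# resulting numbers are only illustrative.
def sinc2(x):
    return np.sinc(x / np.pi) ** 2        # numpy sinc(t) = sin(pi t)/(pi t)

dk = np.linspace(-5e4, 5e4, 20001)        # phase mismatch Delta k_z [1/m]
for L in (2e-3, 5e-3):                    # crystal lengths L_z [m] (assumed)
    profile = sinc2(dk * L / 2.0)
    inside = dk[profile >= np.exp(-1.0)]
    width = inside.max() - inside.min()   # full width at 1/e of the maximum
    print("L_z = %.0f mm: 1/e width of Delta k_z = %.0f 1/m" % (L * 1e3, width))
# The width scales as 1/L_z: the longer the crystal, the narrower the
# range of allowed Delta k_z and hence the smaller the radial width.
\end{verbatim}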
Finally, if a focused pulsed pump beam is assumed (see Figs.~\ref{fig1}g, h), broadening of the correlation area in the radial direction as well as broadening of the overall signal- and idler-field spectra is observed due to the finite pump-field spectral width. On top of that, broadening of the signal- and idler-field spectra corresponding to any given radial emission angle $ \xi_s $ occurs as a consequence of pump-beam focusing. This behavior is related to the fact that the indices of refraction of the interacting fields are nearly constant inside the correlation area. In general, the spectral widths of the signal and idler fields behave qualitatively in the same way as the radial width of the correlation area. \begin{figure} \caption{Contour plots of signal- [$ S_s(\lambda_s) $] and idler-field [$ S_i(\lambda_i) $] intensity spectra as they depend on radial signal-field emission angle $ \xi_s $; idler-field emission angle $ \xi_i $ is fixed. Spectra are determined for cw plane-wave pumping (a, b), pulsed plane-wave pumping (c, d; $ \Delta\lambda_p = 2.8 $~nm), cw focused pumping (e, f), and focused pulsed pumping (g, h).} \label{fig1} \end{figure} Comparison of the signal- and idler-field spectra in Figs.~\ref{fig1}c, d valid for pulsed pumping with those in Figs.~\ref{fig1}a, b for cw pumping leads to a remarkable observation. Photon pairs generated into different signal-photon radial emission angles $ \xi_s $ use different pump-field frequencies. A spectral asymmetry occurs between the signal and idler fields that originates in the different detection angles considered: whereas the idler-field detection angle is fixed, the signal-photon detection angle varies. This asymmetry determines the preferred direction of the signal- and idler-field frequency correlations, visible in the shape of the squared modulus $ |\Phi_{s,i}|^2 $ of the two-photon spectral amplitude introduced in Eq.~(\ref{10}) [a large signal-field detector is assumed]. The contour plot of the squared modulus $ |\Phi_{s,i}|^2 $ of the two-photon amplitude has a typical cigar shape. In the cw case, the main axis of this cigar is rotated by 45 degrees counter-clockwise with respect to the $ \lambda_i $ axis (see Fig.~\ref{fig2}a) in order to describe perfect frequency anti-correlation. If pulsed pumping is taken into account, the cigar axis tends to rotate clockwise; the broader the pump-field spectrum, the greater the rotation angle. Even states with positively correlated signal- and idler-field frequencies can be observed for sufficiently broad pump-field spectra (see Fig.~\ref{fig2}b). We note that different dispersion properties at different propagation angles have been fully exploited in the method of achromatic phase matching that allows one to generate photon pairs with an arbitrary orientation of the two-photon spectral amplitude \cite{Torres2005,Torres2005a,Molina-Terriza2005}. \begin{figure} \caption{Contour plots of the squared modulus $ |\Phi_{s,i}(\omega_s,\omega_i)|^2 $ of the two-photon spectral amplitude for (a) cw pumping and (b) pulsed pumping with a broad pump-field spectrum.} \label{fig2} \end{figure} The azimuthal width of the correlation area is determined predominantly by the pump-beam transverse profile for geometric reasons. To be more specific, it is the pump-beam spatial spectrum in the azimuthal direction that affects the azimuthal extension of the correlation area through the phase-matching conditions in the azimuthal direction.
As the material dispersion characteristics of the crystal are rotationally symmetric with respect to the $ z $ axis (the signal and idler fields propagate as ordinary waves), the azimuthal width of the spatial pump-beam spectrum has practically no influence on the spectral properties of the signal and idler fields. We illustrate the dependence of the correlation area on pump-beam focusing using a 5~mm long crystal and both cw and pulsed pumping in Fig.~\ref{fig3}. We can see in Fig.~\ref{fig3}a that the signal-field azimuthal width $ \Delta\delta_s $ is inversely proportional to the width $ W_p^{0,f} $ (full-width at $ 1/e $ of the maximum; $ W_p^{0,f} \equiv W_{px}^{0,f} = W_{py}^{0,f} $) of the pump-beam waist, whereas the radial width $ \Delta\xi_s $ practically does not depend on the width $ W_p^{0,f} $ of the pump-beam waist. This is caused by the fact that the phase-matching condition in the $ z $ direction is much stronger than that in the radial direction for a 5-mm long crystal and so the radial width $ \Delta\xi_s $ is sensitive only to the pump-field spectral width in this case. Pulsed pumping gives a broader correlation area in the radial direction as well as a broader signal-field spectrum compared to the cw case (see Fig.~\ref{fig3}b). Increasing pump-beam focusing relaxes the phase-matching conditions and naturally leads to a broader signal-field spectrum. \begin{figure} \caption{a) Radial ($ \Delta\xi_s $, solid curves) and azimuthal ($ \Delta\delta_s $, dashed curves) widths of the correlation area and b) signal-field spectral width $ \Delta\omega_s $ as they depend on the width $ W_p^{0,f} $ of the pump-beam waist.} \label{fig3} \end{figure} Contrary to the azimuthal width, the radial width $ \Delta\xi_s $ depends on the pump-field spectral width $ \Delta\lambda_p $. The larger the pump-field spectral width $ \Delta\lambda_p $, the greater the radial width $ \Delta\xi_s $ and also the greater the signal-field spectral width $ \Delta\omega_s $, as documented in Fig.~\ref{fig4} for a focused pump beam. We can also see in Fig.~\ref{fig4}a that the radial width $ \Delta\xi_s $ reaches a constant value for sufficiently narrow pump-field spectra. This value is determined by the phase-matching condition in the $ z $ direction for the central pump-field frequency $ \omega_p^0 $ and so depends on the crystal length $ L_z $ (together with material dispersion properties of the crystal). The longer the crystal, the smaller the radial width $ \Delta\xi_s $. \begin{figure} \caption{a) Radial width $ \Delta\xi_s $ of the correlation area and b) signal-field spectral width $ \Delta\omega_s $ as functions of the pump-field spectral width $ \Delta\lambda_p $ for a 5-mm (circles) and 10-cm (triangles) long crystal assuming a focused pump beam.} \label{fig4} \end{figure} The dependencies described above make it possible to generate photon pairs with highly elliptic profiles of the correlation area, provided that the pump-beam profile in the transverse plane is highly elliptic. As an example, we consider a pump beam having $ W_{py}^{0,f}/W_{px}^{0,f} = 10 $. The dependence of the radial ($ \Delta\xi_s $) and azimuthal ($ \Delta\delta_s $) widths and the signal-field spectral width $ \Delta\omega_s $ on the central azimuthal signal-photon emission angle $ \delta_{s0} $ is shown in Fig.~\ref{fig5} in this case. Whereas the radial and azimuthal widths are comparable for the azimuthal signal-field emission angle $ \delta_{s0} = \pi/2 $, their ratio $ \Delta\delta_s/\Delta\xi_s $ equals approx. 20 for $ \delta_{s0} = 0 $.
Focusing the pump beam from 200~$ \mu $m to 20~$ \mu $m in the radial direction results in doubling of the signal-field spectral width $ \Delta\omega_s $, as documented in Fig.~\ref{fig5}b (see also Figs.~\ref{fig1}a and e). \begin{figure} \caption{a) Radial ($ \Delta\xi_s $, solid curve) and azimuthal ($ \Delta\delta_s $, dashed curve) widths of the correlation area and b) signal-field spectral width $ \Delta\omega_s $ as they depend on the central azimuthal signal-field emission angle $ \delta_{s0} $.} \label{fig5} \end{figure} \section{Experimental setup} We have used a negative uniaxial crystal made of LiIO$_3$ cut for non-critical phase matching, i.e. the optical axis was perpendicular to the pump-beam propagation direction. We have considered crystals of two different lengths ($L_z$=2~mm and 5~mm) pumped both by cw and pulsed lasers. As for cw pumping, a semiconductor laser Cube 405 (Coherent) delivered 31.6~mW at 405~nm with a spectral bandwidth $\Delta \lambda_p= 1.7 $~nm. The second-harmonic field of an amplified femtosecond Ti:sapphire system (Mira+RegA, Coherent) providing $\sim$250-fs-long pulses at 800~nm was used in the pulsed regime. The mean SHG power was 2.5~mW at the crystal input for a repetition rate of 11~kHz. The spectral bandwidth was adjusted between 4.8 and 7.4~nm by fine tuning of the SHG process. A dispersion prism was used to separate the fundamental and SHG beams (for details, see Fig.~\ref{fig6}). The transverse profile of the pump beam and its divergence were controlled by changing the focal length of the converging lens L1 or using a beam expander (BE2X, Thorlabs). The focal lengths $f_{L1}$ of lens L1 used lay in the interval from 30 to 75~cm. As we wanted the pump beam to be as homogeneous as possible along the $ z $ axis, we chose the distance $z_{L1}$ between the lens L1 and the nonlinear crystal such that the beam waist was placed far behind the crystal, i.e. $z_{L1}<f_{L1}$. The spatial spectrum of the pump beam in the transverse plane, a very important parameter in our experiment, was measured by a CCD camera (Lu085M, Lumenera) placed at the focal plane of a converging lens L3. Spatial spectra in the horizontal and vertical directions were determined as marginal spectra, and the parameters $ \tilde{W}_{px} $ and $ \tilde{W}_{py} $ characterizing their widths were found by fitting the experimental data. A fiber-optic spectrometer (HR4000CG-UV-NIR, Ocean Optics) was used to obtain the pump-beam temporal spectrum after propagation through the nonlinear crystal. \begin{figure} \caption{Experimental setup used for the determination of angular widths: a) Entire setup that includes both cw and pulsed pumping as well as pump-beam diagnostics (for more details, see the text). b) Detail of the setup showing paths of the signal and idler beams.} \label{fig6} \end{figure} The experiment was done with photon pairs degenerate in frequencies ($\lambda _{s0}=\lambda _{i0} = 800$~nm) and emitted in opposite parts of a cone layer (the central radial emission angle was 33.4~deg behind the crystal). As shown in Fig.~\ref{fig6}b, the signal beam was captured directly by the detector, whereas the idler beam propagated to the detector after being reflected off a high-reflectivity mirror. Both beams were detected on the photocathode of an iCCD camera with an image intensifier (PI-MAX:512-HQ, Princeton Instruments). Before detection, both beams were transmitted through a converging lens L2, one narrow-bandwidth filter, and two high-pass edge filters.
The geometry of the setup was chosen such that the lens L2 mapped the signal and idler photon emission angles onto positions on the photocathode; the photocathode was placed in the focal plane of lens $ L2 $. For convenience, lenses L2 with different focal lengths ($f_{L2}$= 12.5, 15, and 25~cm) were used. The bandwidth filter used was 11~nm wide and centered at 800~nm. Edge filters (Andover, ANDV7862) had high transmittances at 800~nm (98\%) and blocked wavelengths below 666~nm. The active area of the photocathode, in the form of a rectangle 12.36~mm wide (see Fig.~\ref{fig7}), was divided into $512\times 512$ pixels. The spatial resolution of the camera was 38~$\mu$m (FWHM); its main limitation came from imperfect contrast transfer in the image intensifier. To speed up data acquisition, the resolution was further decreased by grouping $4\times 4$ or $8\times 8$ pixels into one super-pixel in the camera hardware. Consequently, several tens of camera frames could be captured per second. The overall quantum detection efficiency, including the components between the crystal and the photocathode, was 7\%, as derived from the covariance of the signal and idler photon numbers. The widths of the signal and idler strips are given by the bandwidth filter and the focal length of lens L2. As for timing, in the pulsed regime a 10~ns long camera gate was used synchronously with the laser pulses. In the cw case, a 2~$\mu$s long gate was applied together with internal triggering. This timing, together with appropriate pump-field intensities, ensured that the probability of detecting two photons in a single super-pixel was negligible. In other words, the number of detection events divided by the quantum detection efficiency had to be much lower than the number of super-pixels. \begin{figure} \caption{Photocathode with registered photons after a) illumination by light coming from 20 000 consecutive pump pulses, b) one pump pulse. The signal and idler strips image small sections of the cone layer and are slightly curved. The curvatures are oriented in the same sense in both strips because the idler beam is reflected off a mirror. } \label{fig7} \end{figure} The level of noise was also monitored in a third narrow strip; 1.82\% of the detection events came from noise. Detailed analysis has shown that 90~\% of the noise photons were red photons originating from fluorescence inside the crystal. Scattered pump photons contributed 8.4~\%, and only 1.6\% of the noise counts were dark counts of the iCCD camera. The experimental signal-idler correlation functions $ g_x $ and $ g_y $ in the transverse plane, described by the horizontal ($x'$) and vertical ($y'$) coordinates of the reference system in this plane, have been determined after processing many experimental frames. The formula for the determination of the correlation function $ g_x $ can be written as follows (see also Fig.~\ref{fig7}b): \begin{equation} g_x( x'_s ,x'_i ) = \sum_{p = 1}^N \sum_{m = 1}^{M_p } \sum_{l = 1}^{L_p } \delta \left( {x'}_{s}^{pm} - x'_s \right)\delta \left( {x'}_{i}^{pl} - x'_i \right). \label{20} \end{equation} In Eq.~(\ref{20}), $p$ indexes frames ($N$ gives the number of frames) and $m$ ($l$) counts signal (idler) detection events [up to $M_p$ ($L_p$) in the $p$-th frame]. The symbol $ {x'}_{s}^{pm}$ ($ {x'}_{i}^{pl}$) denotes the horizontal position of the $m$-th ($l$-th) detection event in the signal (idler) strip of the $p$-th frame. Correlations in the vertical direction, given by the correlation function $ g_y $, can be determined similarly.
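As an illustration of Eq.~(\ref{20}), the accumulation of the histogram $ g_x $ from per-frame lists of detection positions can be sketched as follows (a minimal Python sketch; the data structures and function names are ours and not part of the experimental software):
\begin{verbatim}
import numpy as np

def correlation_histogram(frames, bins):
    """Accumulate g_x(x'_s, x'_i) according to Eq. (20): every signal
    detection of a frame is paired with every idler detection of the
    same frame, and all pairings are histogrammed."""
    g = np.zeros((len(bins) - 1, len(bins) - 1))
    for xs, xi in frames:                  # p-th frame: signal and idler x' positions
        xs = np.asarray(xs, dtype=float)
        xi = np.asarray(xi, dtype=float)
        if xs.size == 0 or xi.size == 0:
            continue
        pairs_s = np.repeat(xs, xi.size)   # all (m, l) combinations
        pairs_i = np.tile(xi, xs.size)
        h, _, _ = np.histogram2d(pairs_s, pairs_i, bins=[bins, bins])
        g += h
    return g
\end{verbatim}
Accidental pairings produce the flat plateau discussed below, while true photon pairs show up as a ridge on top of it.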
The formula in Eq.~(\ref{20}) takes into account all possible combinations of pairwise detection events. Only some of them correspond to the detection of both photons from one pair. The remaining combinations are artificial in the sense that they do not correspond to the detection of a photon pair. This imposes the following restriction on the method. The number of artificial combinations, which occur at random positions, has to be large enough to create a plateau in a 2D graph of the correlation function $ g_x( x'_s,x'_i ) $. Real detections of photon pairs are then visible on top of this plateau (see Fig.~\ref{fig9} below). Cartesian coordinates $ x'_j $ and $ y'_j $, $ j=s,i $, in the transverse plane can be conveniently transformed into angles $ \beta_j $ and $ \gamma_j $ measured from the middle ($ x'^{\rm cent}_j $, $ y'^{\rm cent}_j $) of the $ j $th strip and defined in Fig.~\ref{fig8} using the formulas: \begin{eqnarray} \gamma_j &=& \arctan\left[(x'_j- x'^{\rm cent}_j) /f_{L2}\right] , \nonumber \\ \beta_j &=& \arctan\left[ (y'_j-y'^{\rm cent}_j) \cos(\gamma_j) /f_{L2} \right], \hspace{0.3cm} j=s,i; \nonumber \\ & & \label{21} \end{eqnarray} $ f_{L2} $ denotes the focal length of lens $ L2 $. The angles $ \beta_j $ and $ \gamma_j $ are related to the radial and azimuthal angles $ \xi_j $ and $ \delta_j $ by the following transformation: \begin{eqnarray} \beta_j &=& \arcsin\left[ \sin(\xi_j)\sin(\delta_j)\right] , \nonumber \\ \gamma_j &=& \arctan \left[ \tan(\xi_j) \cos(\delta_j) \right] - \xi_{j,{\rm det}} , \hspace{0.2cm} j=s,i, \label{22} \end{eqnarray} where the radial angle $ \xi_{j,{\rm det}} $ describes the position of the detector in beam $ j $. \begin{figure} \caption{Sketch showing the geometry of the signal and idler beams. The photon emission direction is described by radial ($ \xi $) and azimuthal ($ \delta $) emission angles. In the detector plane, Cartesian coordinates $ x' $ and $ y' $ are used. Photon propagation directions are then conveniently parameterized by the angles $ \beta $ and $ \gamma $.} \label{fig8} \end{figure} \section{Experimental determination of parameters of correlation area} In the experiment, the spatial and temporal spectra of the pump beam have been characterized first. Typical results are shown in Figs.~\ref{fig9}a and b and have been used in the model for the determination of the expected parameters of the correlation area. The correlation area, or more specifically its radial and angular profiles, has been characterized using the histograms $ g_x(x'_s,x'_i) $ and $ g_y(y'_s,y'_i) $. The histogram $ g_x(x'_s,x'_i) $ [$ g_y(y'_s,y'_i) $] gives the number of paired detections with a signal photon detected at position $ x'_s $ [$ y'_s $] together with an idler photon registered at position $ x'_i $ [$ y'_i $]. These histograms usually contain experimental data from several hundred thousand frames. As the graphs in Figs.~\ref{fig9}c and d show, detections of correlated photon pairs lead to higher values in the histograms $ g_x $ and $ g_y $ around the diagonals going from the upper-left to the lower-right corners of the plots. The finite spreads of these diagonals originate in imperfect phase matching and can be characterized by their widths $ \Delta x'_s $ and $ \Delta y'_s $ or, more conveniently, by the uncertainties in the determination of the angles $ \beta_s $ and $ \gamma_s $: $\Delta \beta_{s} \approx \Delta y'_{s}/f_{L2}$ and $\Delta \gamma_{s} \approx \Delta x'_{s}/f_{L2}$.
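For completeness, the angle transformations of Eqs.~(\ref{21}) and (\ref{22}) can also be written down directly; the following Python sketch uses helper names of our own choosing, and the grouping of the $\cos\gamma_j$ factor follows Eq.~(\ref{21}) as stated above:
\begin{verbatim}
import numpy as np

def detector_to_angles(x, y, x_cent, y_cent, f_L2):
    """Eq. (21): detector-plane coordinates -> angles (beta, gamma)
    measured from the middle of the strip; f_L2 is the focal length of L2."""
    gamma = np.arctan((x - x_cent) / f_L2)
    beta = np.arctan((y - y_cent) * np.cos(gamma) / f_L2)
    return beta, gamma

def emission_to_detector_angles(xi, delta, xi_det):
    """Eq. (22): radial (xi) and azimuthal (delta) emission angles ->
    (beta, gamma); xi_det is the radial angle of the detector position."""
    beta = np.arcsin(np.sin(xi) * np.sin(delta))
    gamma = np.arctan(np.tan(xi) * np.cos(delta)) - xi_det
    return beta, gamma
\end{verbatim}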
As a detailed inspection of the histogram $ g_x $ ($ g_y $) in Fig.~\ref{fig9}c (d) has shown, cuts of this histogram along the lines with constant values of $ x'_i $ ($ y'_i $) do not depend on the value of $ x'_i $ ($ y'_i $). This reflects the fact that idler photons detected at different positions inside the investigated area on the photocathode have identical (signal-photon) correlation areas. This allows us to combine the data obtained for idler photons detected at different positions and thus increase the measurement precision. This approach provides the radial cross-section $ \langle G_{s,i}\rangle_{\beta_s} $ of the correlation area along the radial angle $ \gamma_s $ as a mean value over all possible values of the signal-field azimuthal angle $ \beta_s $. Moreover, consideration of different idler-photon detection positions means averaging over the angles $ \gamma_i $ and $ \beta_i $. The averaging is indicated by the symbol $ \langle \rangle $. Mathematically, the radial cross-section $ \langle G_{s,i}\rangle_{\beta_s} $ expressed in the coordinate $ x'_s $ can be obtained from the formula \begin{equation} \langle G_{s,i}\rangle_{\beta_s}(x'_s) = \sum_{x'_i} g_x \left[ x'_s - x'^{\rm mid}_s(x'_i),x'_i\right] , \label{23} \end{equation} where the function $ x'^{\rm mid}_s(x'_i) $ gives the central position (locus) of the cut of the histogram $ g_x(x'_s,x'_i) $ for a fixed value of the coordinate $ x'_i $. In the theory, the radial cross-section $ \langle G_{s,i}\rangle_{\beta_s} $ is determined from the fourth-order correlation function $ G_{s,i} $ written in Eq.~(\ref{7}) by substituting the angles $ \xi_s $, $ \delta_s $, $ \xi_i $, and $ \delta_i $ with the angles $ \gamma_s $, $ \beta_s $, $ \gamma_i $, and $ \beta_i $ [the inverse of the transformation in Eq.~(\ref{22})] and finally integrating over the angles $ \beta_s $, $ \gamma_i $, and $ \beta_i $. Similarly, the azimuthal cross-section $ \langle G_{s,i}\rangle_{\gamma_s} $ of the correlation area along the azimuthal angle $ \beta_s $ arises after averaging over the angles $ \gamma_s $, $ \gamma_i $, and $ \beta_i $ and can be determined by a formula analogous to that given in Eq.~(\ref{23}). The radial and azimuthal cross-sections $ \langle G_{s,i}\rangle_{\beta_s} $ and $ \langle G_{s,i}\rangle_{\gamma_s} $ corresponding to the pump beam with the characteristics defined in Figs.~\ref{fig9}a and b are plotted in Figs.~\ref{fig9}e and f. Solid lines in Figs.~\ref{fig9}e and f refer to the results of the numerical model and are in good agreement with the experimental data. \begin{figure} \caption{Typical measurement of a correlation area for pulsed pumping, based on 327,600 frames; $L_z=5$~mm. a) Spatial spectrum of the pump beam determined in the focal plane of lens L3. b) Temporal intensity spectrum of the pump beam as determined by a spectrometer (diamonds), the solid line represents a multi-peak Gaussian fit. c), d) Experimental histograms $ g_x(x'_s,x'_i) $ (c) and $ g_y(y'_s,y'_i) $ (d). e), f) Experimental radial ($ \langle G_{s,i}\rangle_{\beta_s} $) and azimuthal ($ \langle G_{s,i}\rangle_{\gamma_s} $) cross-sections of the correlation area.} \label{fig9} \end{figure} The radial width $\langle\Delta \gamma_{s}\rangle_{\beta_s}$ (measured as the full-width at $ 1/e $ of the maximum) of the radial cross-section $ \langle G_{s,i}\rangle_{\beta_s} $ depends mainly on the pump-beam spectral width $\Delta \lambda_{p}$. It holds that the greater the pump-beam spectral width $\Delta \lambda_{p}$, the larger the radial width $\langle\Delta \gamma_{s}\rangle_{\beta_s}$, as documented in Fig.~\ref{fig10} for 2- and 5-mm long crystals.
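The averaging in Eq.~(\ref{23}), i.e. re-centring each fixed-$x'_i$ cut of the histogram at its central position and summing the cuts, can be sketched numerically as follows (a minimal illustration with names of our own choosing; the centre of each cut is estimated here by its centre of mass):
\begin{verbatim}
import numpy as np

def radial_cross_section(g, x_s):
    """Eq. (23): sum the cuts g[:, n] of the histogram over idler positions
    after shifting each cut so that its centre x'_mid lies at the middle bin.
    g:   2D array with g[m, n] ~ g_x(x_s[m], x_i[n]);
    x_s: bin centres of the signal coordinate (uniform spacing assumed)."""
    step = x_s[1] - x_s[0]
    centre = x_s[len(x_s) // 2]
    cross = np.zeros(len(x_s))
    for n in range(g.shape[1]):
        cut = g[:, n]
        if cut.sum() == 0:
            continue
        x_mid = np.average(x_s, weights=cut)   # centre of the cut
        shift = int(round((x_mid - centre) / step))
        cross += np.roll(cut, -shift)          # re-centre (edges wrap; fine for a sketch)
    return cross
\end{verbatim}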
In the experiment, 11-nm wide frequency filters have been applied to cut the noise. However, a certain fraction of the photons comprising photon pairs has also been blocked. According to the theoretical model, this has also resulted in a small narrowing of the radial cut of the correlation area (compare the solid and dashed curves in Fig.~\ref{fig10}). The theoretical curve in Fig.~\ref{fig10} has been experimentally confirmed for several values of the width $W_{px}^{0,f}$ of the pump-beam waist, both for cw and pulsed pumping. \begin{figure} \caption{Radial width $\langle\Delta \gamma_{s}\rangle_{\beta_s}$ of the radial cross-section $ \langle G_{s,i}\rangle_{\beta_s} $ as it depends on the pump-beam spectral width $ \Delta\lambda_p $ for 2- and 5-mm long crystals.} \label{fig10} \end{figure} On the other hand, and in our geometry, it is the width $ W_{py}^{0,f} $ of the pump-beam waist that determines the angular width $\langle\Delta \beta_{s}\rangle_{\gamma_s}$ of the angular cross-section $ \langle G_{s,i}\rangle_{\gamma_s} $. Predictions of the model for 2- and 5-mm long crystals are shown in Fig.~\ref{fig11} by a solid curve. This curve has been checked experimentally for several values of the width $ W_{py}^{0,f} $ of the pump-beam waist, both for cw and pulsed pumping. We note that these curves do not depend on the pump-beam spectral width $ \Delta\lambda_p $. We can see in Fig.~\ref{fig11} that the measured points agree with the theoretical curve for smaller values of the width $ W_{py}^{0,f} $. Larger values of the width $ W_{py}^{0,f} $ lead to small angular widths $\langle\Delta \beta_{s}\rangle_{\gamma_s}$ that could not be correctly measured because of the limited spatial resolution of the iCCD camera. \begin{figure} \caption{Angular width $\langle\Delta \beta_{s}\rangle_{\gamma_s}$ of the angular cross-section $ \langle G_{s,i}\rangle_{\gamma_s} $ as it depends on the width $ W_{py}^{0,f} $ of the pump-beam waist.} \label{fig11} \end{figure} \section{Engineering the shape of a correlation area} As the above results have shown, the parameters of a correlation area can be efficiently controlled using the pump-beam parameters, namely its temporal spectrum and transverse profile. Even the shape of the correlation area can be considerably modified. Splitting of the correlation area into two parts, which occurs as a consequence of splitting of the pump-field temporal spectrum, can serve as an example. Using our femtosecond pump system, we were able to experimentally confirm this behavior. We have generated a pump beam with the spatial spectrum given in Fig.~\ref{fig12}a. Its temporal spectrum, containing two peaks, as acquired by a spectrometer, is plotted in Fig.~\ref{fig12}b. The experimental radial cross-section $ \langle G_{s,i}\rangle_{\beta_s} $ given in Fig.~\ref{fig12}c shows that the two-peak structure of the pump-field spectrum resulted in splitting of the correlation area into two parts. On the other hand, and in agreement with the theory, the angular cross-section $ \langle G_{s,i}\rangle_{\gamma_s} $ was not affected by the pump-field spectral splitting (see Fig.~\ref{fig12}d). For comparison, the theoretical profile of the correlation area given by the correlation function $ G_{s,i} $ and appropriate for the pump-beam parameters given in Figs.~\ref{fig12}a and b is plotted in Fig.~\ref{fig12}e. It indicates a good agreement of the model with the experimental data. Moreover, the squared modulus $ |\Phi_{s,i}|^2 $ of the theoretical two-photon spectral amplitude reveals that splitting of the correlation area is accompanied by splitting of the signal-field spectrum (see Fig.~\ref{fig12}f). \begin{figure} \caption{Determination of a correlation area for pulsed pumping composed of two spectral peaks; $L_z=5$~mm. a) Spatial spectrum of the pump beam.
b) Temporal pump-field intensity spectrum (experimental points are indicated by diamonds, the solid line represents a multi-peak Gaussian fit). c), d) Experimental radial ($ \langle G_{s,i}\rangle_{\beta_s} $) and azimuthal ($ \langle G_{s,i}\rangle_{\gamma_s} $) cross-sections of the correlation area. e) Theoretical profile of the correlation area given by the correlation function $ G_{s,i} $. f) Squared modulus $ |\Phi_{s,i}|^2 $ of the theoretical two-photon spectral amplitude.} \label{fig12} \end{figure} \section{Conclusions} We have developed a method for the determination of the profiles of a correlation area using an intensified CCD camera. Single detection events in many experimental frames are processed and provide histograms from which cross-sections of the correlation area can be recovered. This method has been used to investigate the dependence of the parameters of the correlation area on the pump-beam characteristics and the crystal length. The experimentally obtained curves have been successfully compared with a theoretical model giving fourth-order correlation functions. The radial profile of the correlation area depends mainly on the pump-field spectrum and the crystal length. On the other hand, the azimuthal profile of the correlation area is sensitive only to the transverse profile of the pump beam. Splitting of the correlation area caused by a two-peak structure of the pump-field spectrum has also been experimentally observed. \acknowledgments This research has been supported by the projects IAA100100713 of GA AV \v{C}R, 1M06002 and COST OC 09026 of the Ministry of Education of the Czech Republic. \end{document}
\begin{document} \title{Global SPACING Constraint\ (Technical Report)\thanks{NICTA is funded by the Australian Government as represented by the Department of Broadband, Communications and the Digital Economy and the Australian Research Council.}} \begin{abstract} We propose a new global $\algid{Spacing}$ constraint that is useful in modeling events that are distributed over time, like learning units scheduled over a study program or repeated patterns in music compositions. First, we investigate theoretical properties of the constraint and identify tractable special cases. We propose efficient $\emph{DC}$ filtering algorithms for these cases. Then, we experimentally evaluate the performance of the proposed algorithms on a music composition problem and demonstrate that our filtering algorithms outperform the state-of-the-art approach for solving this problem. \end{abstract} \section{Introduction} When studying a new topic, it is often better to spread the learning over a long period of time and to revise topics repeatedly. This ``spacing effect'' was first identified by Hermann Ebbinghaus in 1885 \cite{spacingeffect}. It has subsequently become ``one of the most studied phenomena in the 100-year history of learning research'' \cite{dempster88}. It has been observed across domains (e.g. learning mathematical concepts or a foreign language, as well as learning a motor skill), across species (e.g. in pigeons, rats and humans), across age groups and individuals, and across timescales (e.g. from seconds to months). To enable learning software to exploit this effect, Novikoff, Kleinberg and Strogatz have proposed a simple mathematical model \cite{nkspnas12}. They consider learning a sequence of educational units, and model the spacing effect with a constraint defined by two sequences, $A = \seq{a_1, a_2, \ldots}$ and $B = \seq{b_1, b_2, \ldots}$. For each unit being taught, the $i + 1$st time that it should be reviewed is between $a_i$ and $b_i$ time steps after the $i$th time. This technical report, after giving a brief background in the following section, describes and analyses the Global \algid{Spacing}\ Constraint in Section 3. Sections 4, 5, 6, 7 and 8 then explore restrictions of the constraint and identify tractable special cases. In Sections 5 and 6 we propose efficient filtering algorithms for these tractable cases. In Section 9, we describe a useful application of the constraint to solving a music composition problem. Section 10 presents an experimental evaluation of the \algid{Spacing}\ constraint on the music composition problems. \section{Background} \subsection{Constraint Satisfaction Problems (CSP)} A constraint satisfaction problem consists of a set of variables, each with a finite domain of values, and a set of constraints specifying allowed combinations of values for subsets of variables. We use capital letters for variables (e.g. $X$ and $Y$), and lower case for values (e.g. $d$ and $d'$). We write $D(X)$ for the domain of a variable $X$ and $D = \bigcup_{i = 1}^{n} D(X_i)$ for the set of all the domain values. Assigning a value $d \in D(X)$ to a variable $X$ means removing all the other values from its domain. A solution is an assignment of values to the variables satisfying the constraints. Constraint solvers typically explore partial assignments, enforcing a local consistency property using either specialized or general purpose propagation (or filtering) algorithms.
A \emph{support} for a constraint $C$ is an assignment that assigns to each variable some value from its domain and satisfies $C$. A constraint is \emph{domain consistent} (\emph{DC}) iff for each variable $X_i$, every value in $D(X_i)$ belongs to some support. \subsection{Matching Theory} We also give some background on matching. A \emph{bipartite graph} is a graph $G=(U,V,E)$ with the set of nodes partitioned between $U$ and $V$ such that there is no edge between two nodes in the same partition. A \emph{matching} in a graph $G$ is a subset of $E$ where no two edges have a node in common. A \emph{maximum matching} is a matching of maximum cardinality. R\'egin~\cite{regin1} proposed an efficient propagator based on a maximum matching algorithm. \subsection{Propositional Satisfiability (SAT)} \emph{Propositional Satisfiability} (SAT) is the problem of finding a model of a propositional formula. The propositional formula is usually in \emph{Conjunctive Normal Form} (CNF), which is a set of clauses. We consider the clauses to be sets of literals. A literal is either a propositional variable (e.g. $p$) or a negated propositional variable (e.g. $\neg p$). The set of all variables from $\phi$ is $\funid{var}(\phi)$ and $\neg \funid{var}(\phi) = \setcomp{\neg p}{p \in \funid{var}(\phi)}$. Without loss of generality we can order the variables from $\funid{var}(\phi)$ into a sequence $\seq{p_1, \ldots, p_\ensuremath{v}}$ and the clauses of $\phi$ into a sequence $\seq{C_1, \ldots, C_\ensuremath{c}}$. Let $\funid{lit}(\phi) = \funid{var}(\phi) \cup \neg\funid{var}(\phi)$. Then a set of literals $I \subseteq \funid{lit}(\phi)$ is an interpretation of $\phi$ if it is maximal and consistent, i.e. it does not contain a pair of complementary literals ($l \in I \rightarrow \overline{l} \notin I$). An interpretation $I$ of $\phi$ is a model of $\phi$ if it contains at least one literal from each clause of $\phi$. We also use the following notation: if $L$ is a set of literals, then $L' = \setcomp{l'}{l \in L}$ and $L^i = \setcomp{l^i}{l \in L}$. \section{The Global \algid{Spacing}\ Constraint} First we describe the Global \algid{Spacing}\ Constraint and then its further restrictions in the following sections. For simplicity, we introduce a function that returns the number of occurrences of a domain value $d \in D$ in a sequence of variables $X = \seq{X_1, \ldots, X_n}$: $\funid{occ}(d, X) = |\setcomp{i}{X_i = d, 1 \leq i \leq n}|$. Now we define the Global \algid{Spacing}\ Constraint as follows: \begin{definition}\label{def:spacing} Let $X = \seq{X_1, \ldots, X_n}$ be a sequence of $n$ variables and let $S \subseteq D$ be a set of domain values. Let $A = \seq{a_1, \ldots, a_{k-1}}$ and $B = \seq{b_1, \ldots, b_{k-1}}$ be sequences of natural numbers such that $a_i \leq b_i$ for $1 \leq i \leq k-1$. Then $\algid{Spacing}(S, A, B, X)$ holds iff for all $i$ s.t. $1 \leq i \leq k-1$ and for all $d \in S$ it holds that if there exists $j$ s.t. $1 \leq j \leq n$ and $X_j = d$ and $\funid{occ}(d, \seq{X_1, \ldots, X_j}) = i$ then there exists $j' \leq n$ s.t. $j + a_i \leq j' \leq j + b_i$ and $X_{j'} = d$ and $\funid{occ}(d, \seq{X_1, \ldots, X_{j'}}) = i+1$. \end{definition} In other words, each value $d \in S$ either does not occur in $X$ at all, or it occurs in at least $k$ different places and the distances between the places are determined by the sequences $A$ and $B$.
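To make Definition~\ref{def:spacing} concrete, the following Python sketch (our own illustration, not part of the report) checks whether a fully assigned sequence satisfies $\algid{Spacing}(S, A, B, X)$:
\begin{verbatim}
def occ(d, X):
    """occ(d, X): number of occurrences of the value d in the sequence X."""
    return sum(1 for x in X if x == d)

def spacing_holds(S, A, B, X):
    """Check Spacing(S, A, B, X) on a fully assigned sequence X (Definition 1)."""
    k = len(A) + 1
    for d in S:
        positions = [j for j, x in enumerate(X) if x == d]  # 0-based positions of d
        if not positions:
            continue                 # d need not occur at all
        if len(positions) < k:
            return False             # once d occurs, it must occur at least k times
        for i in range(k - 1):       # spacing between the i-th and (i+1)-st occurrences
            gap = positions[i + 1] - positions[i]
            if not (A[i] <= gap <= B[i]):
                return False
    return True

# e.g. for the radio playlist discussed below:
# spacing_holds(top_ten, [30, 30, 30], [90, 90, 90], playlist)
\end{verbatim}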
The \emph{minimum distance condition} forces the $i+1$st occurrence of the value $d$ to be no closer than $a_i$ places from its $i$th occurrence and the \emph{maximum distance condition} forces the $i+1$st occurrence to be no further than $b_i$ places from the $i$th occurrence. For example, suppose we need to prepare a playlist for a radio station. The \algid{Spacing}\ Constraint allows us to specify that any of the top ten songs is either not played at all, or it is played at least four times in the 360-song playlist, with consecutive plays separated by at least 30 and at most 90 songs. The constraint would be imposed on the sequence $X = \seq{X_1, \ldots, X_{360}}$ of $n = 360$ variables. The domain values would represent the songs and $S \subseteq D$ would be the set of the top ten songs. The number of spaced occurrences is $k = 4$, so the sequences $A$ and $B$ would be of length $k-1 = 3$. Overall the constraint would be specified as $\algid{Spacing}(S, \seq{30,30,30}, \seq{90,90,90}, X)$. \begin{theorem}\label{thm:intractS} Enforcing \emph{DC}\ on the Global \algid{Spacing}\ Constraint is NP-hard. \end{theorem} \begin{proof} We prove this by reduction of SAT\ to the problem of finding a support for \algid{Spacing}. Let $\phi$ be an arbitrary CNF\ with $\ensuremath{v}$ propositional variables and $\ensuremath{c}$ clauses. We will abuse the notation slightly by using literals as domain values. There is a model of $\phi$ iff there is a support for the constraint $\algid{Spacing}(S, \seq{a_1, \ldots, a_{k-1}}, \seq{b_1, \ldots, b_{k-1}}, X)$ with \begin{itemize} \item $S = \funid{lit}(\phi)$ \item $k = \ensuremath{c}+1$ \item $a_i = 1$ and $b_i = \ensuremath{v}+1$ for $1 \leq i \leq k-1$ \item $X$ is a sequence of $\ensuremath{v}\ensuremath{c}+\ensuremath{v}+\ensuremath{c}$ variables with domains as described below. \end{itemize} If we cut the sequence $X$ into slices $\ensuremath{v}+1$ variables long and put the slices under each other, we obtain a table with $\ensuremath{c}+1$ rows where the last cell is empty. For simplicity the variables will be indexed $X_{j,i}$, where $j$ is a row number and $i$ is a column number, i.e. $X_{j,i}$ stands for $X_{(j-1)(\ensuremath{v}+1) + i}$. The first $\ensuremath{v}$ columns will represent the propositional variables of $\phi$, so \begin{itemize} \item the domains of variables $X_{j,i}$ for $1 \leq i \leq \ensuremath{v}$ and $1 \leq j \leq \ensuremath{c}+1$ are $\set{p_i, \neg p_i}$. \end{itemize} The last column represents $\phi$, each clause in one row, so the domains are sets of literals from the particular clauses. Simply, \begin{itemize} \item the domains of variables $X_{j,\ensuremath{v}+1}$ for $1 \leq j \leq \ensuremath{c}$ are $C_j$.
\end{itemize} For example, take the CNF $\phi = \seq{\set{\neg p, q, r}, \set{\neg q, r}, \set{\neg p, \neg q}, \set{p, q}}$; the equivalent constraint would be $\algid{Spacing}(S, \seq{1,1,1,1}, \seq{4,4,4,4}, X)$ with $S = \set{p, q, r, \neg p, \neg q, \neg r}$ and $X$ would contain $19$ variables with domains ordered into the following table: \begin{center} \begin{tabular}{r|cccc} $i = $ & 1 & 2 & 3 & 4 \\ \hline $j = 1$ & $\set{p, \neg p}$ & $\set{q, \neg q}$ & $\set{r, \neg r}$ & $\set{\neg p, q, r}$ \\ $j = 2$ & $\set{p, \neg p}$ & $\set{q, \neg q}$ & $\set{r, \neg r}$ & $\set{\neg q, r}$ \\ $j = 3$ & $\set{p, \neg p}$ & $\set{q, \neg q}$ & $\set{r, \neg r}$ & $\set{\neg p, \neg q}$ \\ $j = 4$ & $\set{p, \neg p}$ & $\set{q, \neg q}$ & $\set{r, \neg r}$ & $\set{p, q}$ \\ $j = 5$ & $\set{p, \neg p}$ & $\set{q, \neg q}$ & $\set{r, \neg r}$ & \\ \end{tabular} \end{center} Note that $D \subseteq S$. So if a value occurs in a support for the constraint, it has to occur in it at least $\ensuremath{c}+1$ times. Due to the construction of the domains, one value can occur in at most $2\ensuremath{c}+1$ places (in one of the first $\ensuremath{v}$ columns and in the last column). Each value shares $\ensuremath{c}+1$ of these places with its complement; thus if a value occurs in a support, it occupies at least $\ensuremath{c}+1$ of the $2\ensuremath{c}+1$ places, so its complement does not have space to repeat enough times to satisfy the constraint. Hence if a value occurs in a support, it will occupy one of the first $\ensuremath{v}$ columns completely, because its complement cannot. Consequently, if we have a support for the constraint, the set of values assigned to the first $\ensuremath{v}$ variables is a model of $\phi$, because each value selected in the last column has to occur in it and no complement of these values can occur in it. On the other hand, if we have a model of $\phi$, we can obtain a support for the constraint by assigning the literals from the model to all the variables as values. This is always possible, because a model interprets all the literals and is consistent, so there will be exactly one value for each of the variables in the first $\ensuremath{v}$ columns, and a model satisfies each clause, so there will be some value for each variable in the last column. This will indeed be a support for the constraint, because each value that occurs in it will occur in each row at least once. \qed \end{proof} Note that the proof does not make use of the minimum distance condition imposed by the sequence $A$. Also, the sequence $B$ is constant ($b_i = b_{i+1}$ for all $1 \leq i \leq k-2$), so the proof also holds for a much simpler constraint. In fact, the proof does not use the condition that the values have to be repeated; it is based merely on the number of occurrences. The reason why \algid{Spacing}\ is NP-hard is that each value may either occur in a support or not. Next we will examine a case that does not provide such a choice. \section{The Forced Global \algid{Spacing}\ Constraint (\algid{Spacing}F)} This section describes a restriction of the constraint where all the values from $S$ are forced to occur in the sequence; later it also analyses a restriction with the distance conditions relaxed. \begin{definition}\label{def:spacingF} Let $X = \seq{X_1, \ldots, X_n}$ be a sequence of $n$ variables and let $S \subseteq D$ be a set of domain values.
Let $A = \seq{a_1, \ldots, a_{k-1}}$ and $B = \seq{b_1, \ldots, b_{k-1}}$ be sequences of natural numbers such that $a_i \leq b_i$ for $1 \leq i \leq k-1$. Then $\algid{Spacing}F(S, A, B, X)$ holds iff $\algid{Spacing}(S, A, B, X)$ holds and $S \subseteq \set{X_1, \ldots, X_n}$. \end{definition} This means that each value $d \in S$ occurs in the sequence in at least $k$ different places. \begin{theorem}\label{thm:intractSF} Enforcing \emph{DC}\ on the Global \algid{Spacing}F\ Constraint is NP-hard. \end{theorem} \begin{proof}[no minimum distance condition] We prove this by reduction of SAT\ to the problem of finding a support for \algid{Spacing}F. The reduction is the same as in the proof of Theorem~\ref{thm:intractS}, except that the sequence $X$ has additional $(\ensuremath{c}+2)(\ensuremath{v}+1)+1$ variables (in total $(2\ensuremath{c}+3)(\ensuremath{v}+1)$ variables) with domains as follows: If we order the variables into the same table as before, with rows of length $\ensuremath{v}+1$, the variables in the rest of the last column, as well as the variables in the $\ensuremath{c}+2$nd row, can only take a dummy value \ensuremath{0}: \begin{itemize} \item the domains of variables $X_{j,\ensuremath{v}+1}$ for $\ensuremath{c}+1 \leq j \leq 2\ensuremath{c}+3$ and variables $X_{\ensuremath{c}+2,i}$ for $1 \leq i \leq \ensuremath{v}$ are $\set{\ensuremath{0}}$, where $\ensuremath{0} \not\in S$ \end{itemize} and the first $\ensuremath{v}$ cells of the remaining $\ensuremath{c}+1$ rows again represent the propositional variables, so \begin{itemize} \item the domains of variables $X_{j,i}$ for $1 \leq i \leq \ensuremath{v}$ and $\ensuremath{c}+3 \leq j \leq 2\ensuremath{c}+3$ are again $\set{p_i, \neg p_i}$. \end{itemize} For example, the CNF $\phi = \seq{\set{\neg p, q, r}, \set{\neg q, r}, \set{\neg p, \neg q}, \set{p, q}}$ from the previous example would be reduced to the constraint $\algid{Spacing}F(S, \seq{1,1,1,1}, \seq{4,4,4,4}, X)$ with $S = \set{p, q, r, \neg p, \neg q, \neg r}$ and $X$ would contain $44$ variables with domains ordered into the following table: \begin{center} \begin{tabular}{r|cccc} $i = $ & 1 & 2 & 3 & 4 \\ \hline $j = 1$ & $\set{p, \neg p}$ & $\set{q, \neg q}$ & $\set{r, \neg r}$ & $\set{\neg p, q, r}$ \\ $j = 2$ & $\set{p, \neg p}$ & $\set{q, \neg q}$ & $\set{r, \neg r}$ & $\set{\neg q, r}$ \\ $j = 3$ & $\set{p, \neg p}$ & $\set{q, \neg q}$ & $\set{r, \neg r}$ & $\set{\neg p, \neg q}$ \\ $j = 4$ & $\set{p, \neg p}$ & $\set{q, \neg q}$ & $\set{r, \neg r}$ & $\set{p, q}$ \\ $j = 5$ & $\set{p, \neg p}$ & $\set{q, \neg q}$ & $\set{r, \neg r}$ & $\set{\ensuremath{0}}$ \\ $j = 6$ & $\set{\ensuremath{0}}$ & $\set{\ensuremath{0}}$ & $\set{\ensuremath{0}}$ & $\set{\ensuremath{0}}$ \\ $j = 7$ & $\set{p, \neg p}$ & $\set{q, \neg q}$ & $\set{r, \neg r}$ & $\set{\ensuremath{0}}$ \\ $j = 8$ & $\set{p, \neg p}$ & $\set{q, \neg q}$ & $\set{r, \neg r}$ & $\set{\ensuremath{0}}$ \\ $j = 9$ & $\set{p, \neg p}$ & $\set{q, \neg q}$ & $\set{r, \neg r}$ & $\set{\ensuremath{0}}$ \\ $j = 10$ & $\set{p, \neg p}$ & $\set{q, \neg q}$ & $\set{r, \neg r}$ & $\set{\ensuremath{0}}$ \\ $j = 11$ & $\set{p, \neg p}$ & $\set{q, \neg q}$ & $\set{r, \neg r}$ & $\set{\ensuremath{0}}$ \\ \end{tabular} \end{center} We can divide the sequence into 2 parts separated by the $\ensuremath{c}+2$nd row. We will call the first one the \emph{positive} part and the second one the \emph{negative} part.
If a value from $S$ occurs in one part, it will occur at least $\ensuremath{c}+1$ times in that part, because the gap of \ensuremath{0}-s between the parts is longer than any $b_i$, so all the necessary repetitions must occur in one part. Also, if a value $d \in S$ occurs in one part, its complement $\overline{d}$ cannot occur in the same part because of a counting argument analogous to the one in the proof of Theorem~\ref{thm:intractS}; so, since all the values from $S$ must occur in the support, the complement $\overline{d}$ must occur in the other part (where $d$ will not occur due to the same argument). Thanks to this, the proof of Theorem~\ref{thm:intractS} holds for the positive part and the values that cannot occur in the positive part will occur in the negative part. It is obvious that there is enough space for all the values: $|S| = 2\ensuremath{v}$, while $\ensuremath{v}$ values occur in the positive part and the other $\ensuremath{v}$ values occur in the negative part. Hence, we can obtain a support for the constraint from a model of $\phi$ and a model from a support in the same manner as in the proof of Theorem~\ref{thm:intractS}. \qed \end{proof} As before, this proof does not make use of the condition on the minimum distance between two consecutive occurrences of values from $S$ ($A$ is a sequence of 1-s). In fact, even if we relax the condition on the maximum distance between two consecutive occurrences of values from $S$ ($B$ being a sequence of $n$-s), the constraint is still intractable. \begin{theorem}\label{thm:intractB} Enforcing \emph{DC}\ on the Global \algid{Spacing}F\ Constraint with $b_i = n$ for $1 \leq i \leq k-1$ is NP-hard. \end{theorem} \begin{proof}[no maximum distance condition] We prove this by reduction of SAT\ to the problem of finding a support for \algid{Spacing}F. Let $\phi$ be an arbitrary CNF\ with $\ensuremath{v}$ propositional variables and $\ensuremath{c}$ clauses. There is a model of $\phi$ iff there is a support for the constraint $\algid{Spacing}F(S, \seq{a_1, \ldots, a_{k-1}}, \seq{b_1, \ldots, b_{k-1}}, X)$ with \begin{itemize} \item $S = \funid{lit}(\phi)$ \item $k = \ensuremath{c}$ \item $a_i = 5\ensuremath{v}+1$ and $b_i = (7\ensuremath{v}+1)\ensuremath{c}$ for $1 \leq i \leq k-1$ \item $X$ is a sequence of $(7\ensuremath{v}+1)\ensuremath{c}$ variables with the domains as described below.
\end{itemize} If we organize the variables from $X$ into a table with $\ensuremath{c}$ rows and $7\ensuremath{v}+1$ columns, the first column will represent $\phi$ in the way that \begin{itemize} \item the domains of variables $X_{j,1}$ for $1 \leq j \leq \ensuremath{c}$ are $C_j$, \end{itemize} following $2\ensuremath{v}$ columns together with the first column will represent satisfied literals \begin{itemize} \item the domains of variables $X_{j, i+1}$ for $1 \leq i \leq \ensuremath{v}$ and $1 \leq j \leq \ensuremath{c}$ are $\set{p_i, \ensuremath{0}}$ and \item the domains of variables $X_{j, i+(1+\ensuremath{v})}$ for $1 \leq i \leq \ensuremath{v}$ and $1 \leq j \leq \ensuremath{c}$ are $\set{\neg p_i, \ensuremath{0}}$, \end{itemize} following $2\ensuremath{v}$ columns will be a padding of \ensuremath{0}-s \begin{itemize} \item the domains of variables $X_{j, i+(1+2\ensuremath{v})}$ for $1 \leq i \leq 2\ensuremath{v}$ and $1 \leq j \leq \ensuremath{c}$ are $\set{\ensuremath{0}}$, \end{itemize} following $\ensuremath{v}$ columns will represent unsatisfied literals \begin{itemize} \item the domains of variables $X_{j, i+(1+4\ensuremath{v})}$ for $1 \leq i \leq \ensuremath{v}$ and $1 \leq j \leq \ensuremath{c}$ are $\set{p_i, \neg p_i}$ \end{itemize} and the last $2\ensuremath{v}$ columns will be again a padding of \ensuremath{0}-s. \begin{itemize} \item the domains of variables $X_{j, i+(1+5\ensuremath{v})}$ for $1 \leq i \leq 2\ensuremath{v}$ and $1 \leq j \leq \ensuremath{c}$ are $\set{\ensuremath{0}}$. \end{itemize} For example, the CNF from our running example would be reduced to the constraint $\algid{Spacing}F(S, \seq{16,16,16}, \seq{88,88,88}, X)$ with $S = \set{p, q, r, \neg p, \neg q, \neg r}$ and $X$ would contain $88$ variables with domains ordered into the following table: \begin{center} \newcommand{\compd}[1]{\scriptsize\begin{tabular}{>{$}c<{$}} #1 \end{tabular}} \begin{tabular}{r|*{22}{|>{\centering\arraybackslash}c}} $i = $ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 & 21 & 22 \\ \hline \hline $j = 1$ & \compd{\neg p \\ q \\ r} & \compd{p \\ \ensuremath{0}} & \compd{q \\ \ensuremath{0}} & \compd{r \\ \ensuremath{0}} & \compd{\neg p \\ \ensuremath{0}} & \compd{\neg q \\ \ensuremath{0}} & \compd{\neg r \\ \ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{p \\ \neg p} & \compd{q \\ \neg q} & \compd{r \\ \neg r} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} \\ \hline $j = 2$ & \compd{\neg q \\ r} & \compd{p \\ \ensuremath{0}} & \compd{q \\ \ensuremath{0}} & \compd{r \\ \ensuremath{0}} & \compd{\neg p \\ \ensuremath{0}} & \compd{\neg q \\ \ensuremath{0}} & \compd{\neg r \\ \ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{p \\ \neg p} & \compd{q \\ \neg q} & \compd{r \\ \neg r} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} \\ \hline $j = 3$ & \compd{\neg p \\ \neg q} & \compd{p \\ \ensuremath{0}} & \compd{q \\ \ensuremath{0}} & \compd{r \\ \ensuremath{0}} & \compd{\neg p \\ \ensuremath{0}} & \compd{\neg q \\ \ensuremath{0}} & \compd{\neg r \\ \ensuremath{0}} & \compd{\ensuremath{0}} 
& \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{p \\ \neg p} & \compd{q \\ \neg q} & \compd{r \\ \neg r} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} \\ \hline $j = 4$ & \compd{p \\ q} & \compd{p \\ \ensuremath{0}} & \compd{q \\ \ensuremath{0}} & \compd{r \\ \ensuremath{0}} & \compd{\neg p \\ \ensuremath{0}} & \compd{\neg q \\ \ensuremath{0}} & \compd{\neg r \\ \ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{p \\ \neg p} & \compd{q \\ \neg q} & \compd{r \\ \neg r} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} \\ \end{tabular} \end{center} We can split the table into two parts. We will refer to the first $2\ensuremath{v}+1$ columns as the \emph{positive} part and to the $4\ensuremath{v} + 2$nd to $5\ensuremath{v} + 1$-st columns as the \emph{negative} part. (The rest is padding of \ensuremath{0}-s.) In order for an assignment to $X$ to be a support, the following must hold: \begin{itemize} \item A value $d \in S$ can occur at most once per row because of the minimum distance condition imposed by the sequence $A$. Since there are $k$ rows and any value $d \in S$ has to occur at least $k$ times, all values from $S$ have to occur in each row exactly once. \item Two complementary values $d, \overline{d} \in S$ cannot occur in the same part of any row. For the negative part this is due to the construction of the domains; if both occurred in the positive part, neither of them would be able to occur in the negative part of the same row, so there would be a variable with an empty domain, which is a contradiction. \item If a value from $S$ occurs in one part of some row, it will occur in the same part in each row. This is because of the following: \begin{itemize} \item If a value $d \in S$ occurs in the negative part of a row number $j$, it cannot occur in the positive part of the following row $j+1$, because of the minimum distance condition, so it will occur in the negative part of the row $j+1$. \item Suppose a value $d \in S$ occurs in the positive part of a row number $j$. Then the complementary value $\overline{d}$ has to occur in the negative part of the row $j$. Now if $j \neq k$ and if the value $d$ did not occur in the positive part of the following row $j+1$, then it would have to occur in the negative part. Then, however, the complementary value $\overline{d}$ would not be able to occur in that negative part of the row $j+1$, which is a contradiction with the above. So $d$ has to occur in the positive part of the row $j+1$. \item Each value from $S$ has to occur in the first row. So if a value from $S$ occurred in some row in a different part than in the first row, we would reach a contradiction by applying the above. \end{itemize} \end{itemize} Consequently, if we have a support for the constraint, the assignment to the first $2\ensuremath{v}+1$ variables (the positive part) represents a model of $\phi$. The set of the values assigned to these variables (without the value $\ensuremath{0}$) is clearly an interpretation, because it cannot contain complementary literals.
Some value is selected for each variable representing a clause (first column); this value has to occur in the positive part of each row and its complement cannot occur in the positive part. Hence the interpretation is also a model, because it contains at least one literal from each clause. We can obtain a support for the constraint from a model $I$ of $\phi$ by assigning a value that represents a literal satisfied by $I$ to the variables in the first column. This is always possible, because $I$ must satisfy at least one literal from each clause. The assignment to the variables in the positive part will be as follows: \begin{inparaenum}[(1)] \item The value \ensuremath{0}\ is assigned to the variable whose domain contains the value already assigned to the first variable in the row. \item The literals satisfied by $I$ are assigned to the rest of the variables as values. \item Where this is not possible, \ensuremath{0}\ is assigned. \end{inparaenum} This is always possible, because each literal that may occur in $\phi$ is represented by some value from the domains of these variables and no two representations of two different literals occur in the domain of one variable. Further, we assign the variables in the negative part so that they contain representations of each literal that is falsified by $I$. Such an assignment is always possible because: \begin{inparaenum}[(1)] \item Each of these variables will be assigned, because $I$ falsifies either $p_i$ or $\neg p_i$ for each $1 \leq i \leq \ensuremath{v}$. \item There will be at most one possible value for each variable, because $I$ is consistent. \end{inparaenum} We assign \ensuremath{0}\ to the rest of the variables (their domains are $\set{\ensuremath{0}}$). Now the assignment is a support for the constraint, because each value $d \in S$ occurs exactly once in each row, which gives $\ensuremath{c} = k$ occurrences. Further, due to the fact that the representations of the satisfied literals always occur in the positive part and the representations of the falsified literals always occur in the negative part, the distance between two successive occurrences of each $d \in S$ is always at least $5\ensuremath{v}+1$, as required by the minimum distance condition $a_i = 5\ensuremath{v}+1$. In detail, the last possible occurrence of a representation of a satisfied literal in the $j$th row is at the position $(j-1)(7\ensuremath{v}+1)+(2\ensuremath{v}+1)$ and the next possible occurrence of a representation of a satisfied literal is at the beginning of the following row, $(j+1-1)(7\ensuremath{v}+1)+1$. The difference of these two positions is exactly $5\ensuremath{v}+1 = a_i$. The last possible occurrence of a representation of a falsified literal in the $j$th row is at the position $(j-1)(7\ensuremath{v}+1)+(5\ensuremath{v}+1)$ and the next possible occurrence of a representation of a falsified literal is at the position $(j+1-1)(7\ensuremath{v}+1)+(4\ensuremath{v}+2)$ of the following row. The difference of these two positions is $6\ensuremath{v}+2 \geq 5\ensuremath{v}+1 = a_i$. \qed \end{proof} Another intractable restriction of the constraint arises when all values from $S$ are forced to occur in the first $p \leq n$ places; e.g., in the reduction from the last proof it was the first row.
The following corollary summarizes all the intractable restrictions found. \begin{corollary}\label{crl:intract} Enforcing \emph{DC}\ on the \algid{Spacing}\ constraint is NP-hard even if any combination of the following conditions, not containing (3) and (4) simultaneously, holds: \begin{enumerate}[(1)] \item All the values from $S$ must occur in the first $p \leq n$ places of the sequence. \item $A$ and $B$ are constant sequences, i.e. $a_1=\ldots=a_{k-1}$ and $b_1=\ldots=b_{k-1}$. \item There is no minimum distance condition ($a_i = 1$ for $1 \leq i \leq k-1$). \item There is no maximum distance condition ($b_i = n$ for $1 \leq i \leq k-1$). \end{enumerate} \end{corollary} \begin{proof} The proof of Theorem~\ref{thm:intractB} holds for all combinations of the restrictions not including (3) and the proof of Theorem~\ref{thm:intractSF} holds for all combinations of the restrictions not including (4). \qed \end{proof} \section{Bounded Size of $S$} We identified two useful restrictions of the $\algid{Spacing}$ constraint that allow polynomial time $\emph{DC}$ filtering algorithms. The first restriction bounds the size of $S$, $|S| = O(1)$. It can be used to model an educational process where the number of learning units is naturally bounded. \begin{theorem}\label{thm:tractS} Enforcing $\emph{DC}$ on the $\algid{Spacing}(S, A, B, X)$ constraint can be done in $O(n^{|S|+2}|S|)$ time. \end{theorem} \begin{proof} We can define an automaton for accepting sequences satisfying the $\algid{Spacing}$ constraint. The states of the automaton just need to keep count of the number of steps since the last occurrence of each value in $S$. There are $O(n^{|S|})$ possible states in this automaton, which is polynomial for $|S| = O(1)$.\qed \end{proof} \section{The One Voice Global \algid{Spacing}\ Constraint (\algid{Spacing}ONE)} The second tractable restriction of the $\algid{Spacing}$ constraint ensures that all values from $S$ occur in the first \emph{period} of length $p$ and that they repeat in the successive $k-1$ periods in the same \emph{places}. In other words, the first period of length $p$ is cycled $k$ times. This restriction is useful in music composition problems~\cite{MusicBook}, where the composer wants to generate one voice consisting of a $p$-beat-long rhythmical pattern that is played $k$ times. The pattern consists of $|S|$ onsets (beginnings of notes) that must be played exactly $k$ times in the whole voice. This can be encoded using a restriction of \algid{Spacing}\ that is defined as follows: \begin{definition}\label{def:spacing1} Let $X = \seq{X_1, \ldots, X_n}$ be a sequence of $n$ variables and let $S \subseteq D$ be a set of domain values. Let $p$ and $k$ be natural numbers such that $p \leq n$ and $pk \leq n$. Then $\algid{Spacing}ONE(S, p, k, X)$ holds iff $\algid{Spacing}(S, A, B, X)$ with $S \subseteq \set{X_1, \ldots, X_p}$, $a_i = b_i = p$ for $1 \leq i \leq k-1$ and $|\setcomp{j}{X_j = d, 1 \leq j \leq n}| = k$ for all $d \in S$ holds. \end{definition} \begin{theorem}\label{thm:tract1} For any constraint $\algid{Spacing}ONE(S, p, k, X)$, there is a bipartite graph $G = (U, V, E)$ such that there is a support for the constraint iff there is a matching covering all the nodes of $G$. Enforcing \emph{DC}\ on the constraint takes $O(p^2 k + p^{2.5})$ time down a branch of the search tree. \end{theorem} \begin{proof} First, we observe that the values in $D \setminus S$ are interchangeable, as we do not distinguish between values $d$ outside $S$, $d \notin S$.
Therefore, we perform channeling of the variables $X$ to variables $Y$ and map all values outside of $S$ into a dummy value $\ensuremath{0}$: $X_i \in S \leftrightarrow Y_i = X_i$ and $X_i \not\in S \leftrightarrow Y_i = \ensuremath{0}$ for $1 \leq i \leq n$, where $\ensuremath{0} \notin S$ is a fresh value. Second, we exploit the special structure of $\algid{Spacing}ONE$. Namely, the variables in positions $i, p + i,\ldots, (k-1)p + i$, $1 \leq i \leq p$, must take the same value. Hence, to check whether a value $d$ can be assigned to one of the variables $Y_i, Y_{p + i},\ldots, Y_{(k-1)p + i}$, we need to check whether $d \in \bigcap_{j=0}^{k-1} D(Y_{j p + i})$. We use a \emph{folding} procedure to identify the possible positions for each value. We \emph{fold} the domains of $Y$ into $P_i = \bigcap_{j=0}^{k-1} D(Y_{j p + i})$ for $1 \leq i \leq p$. Finally, we need to match the values $S \cup \set{\ensuremath{0}}$ with the positions in one period. To avoid using generalized matching, we introduce $p - |S|$ copies of the dummy value; otherwise we would have to match $\ensuremath{0}$ with $p - |S|$ nodes. Next we describe the construction of the graph $G$. The sets of nodes are $U = S \cup \setcomp{\ensuremath{0}_j}{1 \leq j \leq p - |S|}$, $V = \set{1, \ldots, p}$. The set of edges is $E = \setcomp{(d, i)}{d \in P_i \cap S, 1 \leq i \leq p} \cup \setcomp{(\ensuremath{0}_j, i)}{1 \leq j \leq p - |S|, \ensuremath{0} \in P_i, 1 \leq i \leq p}$. Now we show that there exists a support for $\algid{Spacing}ONE$ iff there exists a matching in $G$ covering all the nodes. Having a support for the constraint, we can obtain a subgraph of $G$ with the same sets of nodes and the set of edges $M$ that consists of two parts: \begin{inparaenum}[(1)] \item the edges between the places of the variables in the first period and the values from $S$ that are assigned to them in the support, $M_1 = \setcomp{(d,i)}{d = X_i, d \in S, 1 \leq i \leq p}$; \item if we order the rest of the places in the first period (those containing values not in $S$) into a new sequence $N = \left[ i | X_i \notin S, 1 \leq i \leq p \right]$ (note that $|N| = p - |S|$), the edges between these places and the dummy values with the same index, $M_2 = \setcomp{(\ensuremath{0}_j,N_j)}{1 \leq j \leq p - |S|}$. \end{inparaenum} So $M = M_1 \cup M_2$. We indeed have a subgraph of $G$, because $M$ is a subset of $E$: since we have a support, values from $S$ have to repeat in the same places in each period, so $X_i \in P_i$ if $X_i \in S$ and $\ensuremath{0} \in P_i$ if $X_i \notin S$ for $1 \leq i \leq p$, thus $M_1 \subseteq \setcomp{(d, i)}{d \in P_i \cap S, 1 \leq i \leq p}$ and $M_2 \subseteq \setcomp{(\ensuremath{0}_j, i)}{1 \leq j \leq p - |S|, \ensuremath{0} \in P_i, 1 \leq i \leq p}$. $M$ is also a matching in $G$, because exactly one value is assigned to each variable and each dummy value is connected to exactly one place. $M$ covers all the nodes, because each value from $S$ has to be assigned to some variable in the first period and the number of dummy values is exactly the number of variables in the first period to which values not in $S$ are assigned. Hence, we have obtained a matching covering all the nodes of $G$ from a support for the constraint. Conversely, given such a matching $M$ in $G$, we can obtain a support for the constraint by assigning the values from $S$ to the places in each period to which they are matched by $M$, and any value not in $S$ to the places in each period to which a dummy value is matched by $M$.
This is a valid assignment, because $M$ matches some value with each place, as it covers all the nodes. The assignment is always possible, because $G$ connects each $d \in S$ only to places of variables whose domains contain $d$ in each period, and $G$ connects dummy values only to places of variables whose domains contain a value not in $S$ in each period. The assignment is indeed a support, because each value from $S$ is matched with exactly one place of a period and repeats in the same place in each period, i.e. altogether $k$ times. Our $\emph{DC}$ propagator is based on the $\emph{DC}$ propagator for $\algid{AllDifferent}$ by R\'egin~\cite{regin1}. First, we determine the set of edges that do not belong to any maximum matching in the same way as the propagator for $\algid{AllDifferent}$ enforces $\emph{DC}$. This takes $O(p^{2.5})$ time down a branch of the search tree. If an edge $(u,i)$ does not belong to any maximum matching, then the value $u$ can be removed from the domains of variables $Y_i, Y_{p + i},\ldots, Y_{(k-1)p + i}$, which takes $O(k)$ time. There can be at most $O(p^2)$ removals down a branch, so the overall complexity down a branch of the search tree is $O(p^2 k + p^{2.5})$. \qed \end{proof} For example, consider the \algid{Spacing}ONE\ constraint with $D = \set{a,b,c,o}$, $S = \set{a,b,c}$ representing the onsets, pattern length $p = 5$ and $k = 3$ repetitions, imposed on a sequence of 15 variables. The variable domains are as shown below: \begin{center} \newcommand{\compd}[1]{\scriptsize\begin{tabular}{>{$}c<{$}} #1 \end{tabular}} \begin{tabular}{r|*{15}{|>{\centering\arraybackslash}p{3.5mm}}} $i = $ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 \\ \hline \hline $D(X_i) =$ & \compd{ a \\ b \\ \ \\ o \\ } & \compd{ a \\ b \\ c \\ o \\ } & \compd{ a \\ b \\ c \\ o \\ } & \compd{ a \\ b \\ \ \\ \ \\ } & \compd{ \ \\ b \\ c \\ o \\ } & \compd{ a \\ b \\ c \\ o \\ } & \compd{ a \\ b \\ c \\ \ \\ } & \compd{ \ \\ \ \\ c \\ \ \\ } & \compd{ a \\ b \\ c \\ o \\ } & \compd{ \ \\ b \\ c \\ o \\ } & \compd{ a \\ b \\ c \\ o \\ } & \compd{ a \\ \ \\ c \\ o \\ } & \compd{ a \\ b \\ c \\ o \\ } & \compd{ a \\ b \\ c \\ o \\ } & \compd{ \ \\ b \\ c \\ o \\ } \\ \end{tabular} \end{center} During propagation, channeling simply replaces $o$ with \ensuremath{0}. The folded domains then look like this: \begin{center} \newcommand{\compd}[1]{\scriptsize\begin{tabular}{>{$}c<{$}} #1 \end{tabular}} \begin{tabular}{r|*{5}{|>{\centering\arraybackslash}p{3.5mm}}} $i = $ & 1 & 2 & 3 & 4 & 5 \\ \hline \hline $P_i =$ & \compd{ a \\ b \\ \ \\ \ensuremath{0} \\ } & \compd{ a \\ \ \\ c \\ \ \\ } & \compd{ \ \\ \ \\ c \\ \ \\ } & \compd{ a \\ b \\ \ \\ \ \\ } & \compd{ \ \\ b \\ c \\ \ensuremath{0} \\ } \\ \end{tabular} \end{center} Finally, the bipartite graph is given in Figure~\ref{f:f1}. There are two maximum matchings in this graph: $\set{(c,3), (a,2), (b,4), (\ensuremath{0}_1, 1), (\ensuremath{0}_2, 5)}$ and $\set{(c,3), (a,2), (b,4), (\ensuremath{0}_2, 1), (\ensuremath{0}_1, 5)}$. This means that the only support is $\seq{o,a,c,b,o,o,a,c,b,o,o,a,c,b,o}$. \begin{figure} \caption{The bipartite graph $G$ for the example \algid{Spacing}ONE\ constraint.} \label{f:f1} \end{figure} \section{The $h$-Voice Global \algid{Spacing}\ Constraint (\algid{Spacing}H)} A composer would usually want to compose several voices playing at the same time with no overlapping onsets. The $h$-Voice Global \algid{Spacing}\ Constraint, which is simply a conjunction of several \algid{Spacing}ONE\ constraints on the same sequence of variables, can be used for this purpose.
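Before moving to the multi-voice case, the folding and matching steps of the \algid{Spacing}ONE\ propagation can be illustrated on the example above by the following Python sketch (our own illustration; it uses a simple augmenting-path matching instead of the machinery underlying R\'egin's propagator, and it only finds one covering matching rather than performing the full edge filtering):
\begin{verbatim}
def fold(domains, p, k):
    """P_i: intersection of the domains D(Y_{jp+i}) over the k periods (i is 0-based)."""
    return [set.intersection(*(set(domains[j * p + i]) for j in range(k)))
            for i in range(p)]

def covering_matching(values, p, edge):
    """Augmenting-path matching between values and period positions 0..p-1.
    edge(v, i) tells whether value v may sit at position i; returns {position: value}."""
    match = {}
    def augment(v, seen):
        for i in range(p):
            if edge(v, i) and i not in seen:
                seen.add(i)
                if i not in match or augment(match[i], seen):
                    match[i] = v
                    return True
        return False
    for v in values:
        augment(v, set())
    return match

# Running example from the text: D = {a, b, c, o}, S = {a, b, c}, p = 5, k = 3.
domains = [set('abo'),  set('abco'), set('abco'), set('ab'),   set('bco'),
           set('abco'), set('abc'),  set('c'),    set('abco'), set('bco'),
           set('abco'), set('aco'),  set('abco'), set('abco'), set('bco')]
S, p, k = {'a', 'b', 'c'}, 5, 3
P = fold(domains, p, k)            # {a,b,o}, {a,c}, {c}, {a,b}, {b,c,o}; o plays the role of 0
values = sorted(S) + ['0_1', '0_2']                 # p - |S| copies of the dummy value
edge = lambda v, i: (v in P[i]) if v in S else bool(P[i] - S)
m = covering_matching(values, p, edge)
assert len(m) == p                 # all 5 positions covered, hence a support exists
\end{verbatim}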
\begin{definition}[\algid{Spacing}H]\label{def:spacingh} Let $X = \seq{X_1, \ldots, X_n}$ be a sequence of $n$ variables and let $S = \seq{S_1, \ldots, S_h}$ be a sequence of pairwise disjoint sets of domain values ($\bigcup_{l = 1}^{h} S_l \subseteq D$ and $S_{l_1} \cap S_{l_2} = \emptyset$ for all $1 \leq l_1 < l_2 \leq h$). Let $\seq{p_1, \ldots, p_h}$ be a sequence of natural numbers and let $k$ be a natural number such that $p_{l}k \leq n$ for all $1 \leq l \leq h$. Then $\algid{Spacing}H(S, \seq{p_1, \ldots, p_h}, k, X)$ holds iff the conjunction of the constraints $\algid{Spacing}ONE(S_l, p_l, k, X)$ for all $1 \leq l \leq h$ holds. \end{definition} \begin{theorem}\label{thm:intractSH} Enforcing \emph{DC}\ on the Global \algid{Spacing}H\ Constraint is NP-hard even for $h=2$. \end{theorem} \begin{proof} We prove this by reduction of SAT\ to the problem of finding a support for \algid{Spacing}H. The main idea of the proof is similar to the other hardness proofs: the alternative choice of a literal satisfying a clause is modeled by the choice of a value for a variable, and the mutual exclusion of complementary literals is enforced by properties of \algid{Spacing}. For the \algid{Spacing}H\ constraint it is the mutual exclusion of values from the same variable domain (assignment of one excludes the others). The mechanism by which choosing two complementary literals to be true leads to the mutual exclusion of values is quite involved, because we have little freedom (just two voices). The idea is that the literal chosen to satisfy a clause is copied to the part representing the model. This is achieved through the impossibility of having more than one occurrence of the same value in one period of a voice, so from domains of cardinality 2 containing this value, the other value has to be chosen. These other values (which represent the model) belong, however, to the other voice, which has a period of a different length, so repetitions of these values will be aligned with different places in the other periods of the first voice. On the one hand, this is used for copying the clause satisfiers to the same model representation; on the other hand, it is used to align the values for complementary literals with each other so that their assignments mutually exclude each other. Now we describe the reduction in detail. Please recall that if $L$ is a set of literals, then $L' = \setcomp{l'}{l \in L}$ and $L^i = \setcomp{l^i}{l \in L}$. Let $\phi$ be an arbitrary CNF\ with $\ensuremath{v}$ propositional variables and $\ensuremath{c}$ clauses. There is a model of $\phi$ iff there is a support for the constraint $\algid{Spacing}H(\seq{S_1, S_2}, \seq{p_1, p_2}, k, X)$ where \begin{itemize} \item $S_1 = \funid{lit}(\phi)' \cup \bigcup_{j = 1}^\ensuremath{c} \left( \funid{lit}(\phi)^j \right)$ \item $S_2 = \funid{lit}(\phi)$ \item $p_1 = \ensuremath{c} + 6\ensuremath{c}\ensuremath{v}$ \item $p_2 = \ensuremath{c} + 6\ensuremath{c}\ensuremath{v} + 2\ensuremath{v}$ \item $k = \ensuremath{c}$ \item $X$ is a sequence of $(\ensuremath{c} + 6\ensuremath{c}\ensuremath{v} + 2\ensuremath{v})k$ variables with the domains as described below. \end{itemize} We will describe the domains voice by voice, so ultimately the domains are the minimal sets satisfying the following conditions. The period of the first voice is $p_1 = \ensuremath{c} + 6\ensuremath{c}\ensuremath{v}$; let us split the sequence into rows of this length and order them into a table (one row is one period of the first voice). Let $X^1_{j,i}$ denote $X_{(j-1)p_1 + i}$.
Then the first $\ensuremath{c}$ columns represent $\phi$ in the following fashion: The cells on the main diagonal represent the clauses and the other cells are filled with representation of all the literals, \begin{itemize} \item the domains of variables $X^1_{j,i}$ for $1 \leq j,i \leq \ensuremath{c}$ contain $\setcomp{p^i}{p \in C_j}$ if $i = j$, otherwise $\funid{lit}(\phi)^i$. \end{itemize} The following $4\ensuremath{c}\ensuremath{v}$ columns contain representations of all literals for each clause two times, \begin{itemize} \item the domains of variables $X^1_{j,(\ensuremath{c}) + (i-1)2\ensuremath{v} + \ensuremath{v} y + x}$ for $1 \leq j,i \leq \ensuremath{c}$, $0 \leq y \leq 1$ and $1 \leq x \leq \ensuremath{v}$ contain $p^i_x$ if $y=0$ and $\neg p^i_x$ if $y=1$, \item the domains of variables $X^1_{j,(\ensuremath{c} + 2\ensuremath{c}\ensuremath{v}) + (i-1)2\ensuremath{v} + \ensuremath{v} y + x}$ for $1 \leq j,i \leq \ensuremath{c}$, $0 \leq y \leq 1$ and $1 \leq x \leq \ensuremath{v}$ contain $p^i_x$ if $y=0$ and $\neg p^i_x$ if $y=1$. \end{itemize} The last $2\ensuremath{c}\ensuremath{v}$ columns contain representations of all literals in the usual order in their first $2\ensuremath{v}$ columns, in their last $2\ensuremath{v}$ in the order where positive and negative literals are swapped, and \ensuremath{0}\ in the rest of their columns, \begin{itemize} \item the domains of variables $X^1_{j,(\ensuremath{c} + 4\ensuremath{c}\ensuremath{v}) + x}$ for $1 \leq j \leq \ensuremath{c}$ and $1 \leq x \leq \ensuremath{v}$ contain $p'_x$, \item the domains of variables $X^1_{j,(\ensuremath{c} + 4\ensuremath{c}\ensuremath{v}) + \ensuremath{v} + x}$ for $1 \leq j \leq \ensuremath{c}$ and $1 \leq x \leq \ensuremath{v}$ contain $\neg p'_x$, \item the domains of variables $X^1_{j,(\ensuremath{c} + 4\ensuremath{c}\ensuremath{v}) + 2\ensuremath{v} + x}$ for $1 \leq j \leq \ensuremath{c}$ and $1 \leq x \leq 2\ensuremath{c}\ensuremath{v} - 4\ensuremath{v}$ contain $\ensuremath{0}$, \item the domains of variables $X^1_{j,(\ensuremath{c} + 4\ensuremath{c}\ensuremath{v}) + (2\ensuremath{c}\ensuremath{v}-2\ensuremath{v}) + x}$ for $1 \leq j \leq \ensuremath{c}$ and $1 \leq x \leq \ensuremath{v}$ contain $\neg p'_x$, \item the domains of variables $X^1_{j,(\ensuremath{c} + 4\ensuremath{c}\ensuremath{v}) + (2\ensuremath{c}\ensuremath{v}-\ensuremath{v}) + x}$ for $1 \leq j \leq \ensuremath{c}$ and $1 \leq x \leq \ensuremath{v}$ contain $p'_x$. \end{itemize} (This is not well defined for $\ensuremath{c}=1$, but $\phi$ is trivially satisfiable in this case.) The arrangement of the values of the second voice is much more simple. The period of the second voice is $p_2 = \ensuremath{c} + 6\ensuremath{c}\ensuremath{v} + 2\ensuremath{v}$, so let $X^2_{j,i}$ denote $X_{(j-1)p_2 + i}$. 
Then places from $\ensuremath{c}+1$ to $\ensuremath{c}+2\ensuremath{v}$ and from $\ensuremath{c}+4\ensuremath{c}\ensuremath{v}+1$ to $\ensuremath{c}+4\ensuremath{c}\ensuremath{v}+2\ensuremath{v}$ of each period contain representations of all literals in the usual order, \begin{itemize} \item the domains of variables $X^2_{j,(\ensuremath{c}) + x}$ for $1 \leq j \leq \ensuremath{c}$ and $1 \leq x \leq \ensuremath{v}$ contain $p_x$, \item the domains of variables $X^2_{j,(\ensuremath{c} + \ensuremath{v}) + x}$ for $1 \leq j \leq \ensuremath{c}$ and $1 \leq x \leq \ensuremath{v}$ contain $\neg p_x$, \item the domains of variables $X^2_{j,(\ensuremath{c} + 4\ensuremath{c}\ensuremath{v}) + x}$ for $1 \leq j \leq \ensuremath{c}$ and $1 \leq x \leq \ensuremath{v}$ contain $p_x$, \item the domains of variables $X^2_{j,(\ensuremath{c} + 4\ensuremath{c}\ensuremath{v} + \ensuremath{v}) + x}$ for $1 \leq j \leq \ensuremath{c}$ and $1 \leq x \leq \ensuremath{v}$ contain $\neg p_x$. \end{itemize} Finally, the domains of variables that are not on the first $\ensuremath{c}$ places of any period of the first voice and do not contain a value from $S_2$ contain \ensuremath{0}: \begin{itemize} \item the domains of variables $X_i$ for $i \notin \setcomp{(j-1)p_1 + x}{1 \leq j,x \leq \ensuremath{c}} \cup \setcomp{(j-1)p_2 + \ensuremath{c} + x}{1 \leq j \leq \ensuremath{c}, 1 \leq x \leq 2\ensuremath{v}} \cup \setcomp{(j-1)p_2 + \ensuremath{c} + 4\ensuremath{c}\ensuremath{v} + x}{1 \leq j \leq \ensuremath{c}, 1 \leq x \leq 2\ensuremath{v}}$ contain \ensuremath{0}. \end{itemize} The constraint for the CNF from our running example would be imposed on a sequence of variables with domains displayed in the Table~\ref{t:exampleh}. The last $2\ensuremath{v} k$ domains of $\set{\ensuremath{0}}$ are not displayed in the table. \begin{sidewaystable} \caption{ \label{t:exampleh}Example for reduction from SAT\ to \algid{Spacing}H. 
} \begin{center} \newcommand{\compd}[1]{\scriptsize\begin{tabular}{>{$}c<{$}} #1 \end{tabular}} \begin{tabular}{r|*{4}{|>{\centering\arraybackslash}c}|*{24}{|>{\centering\arraybackslash}c}} & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 & 21 & 22 & 23 & 24 & 25 & 26 & 27 & 28 \\ \hline \hline 1 & \compd{\neg p^{1} \\ q^{1} \\ r^{1}} & \compd{p^{2} \\ q^{2} \\ r^{2} \\ \neg p^{2} \\ \neg q^{2} \\ \neg r^{2}} & \compd{p^{3} \\ q^{3} \\ r^{3} \\ \neg p^{3} \\ \neg q^{3} \\ \neg r^{3}} & \compd{p^{4} \\ q^{4} \\ r^{4} \\ \neg p^{4} \\ \neg q^{4} \\ \neg r^{4}} & \compd{p^{1} \\ p} & \compd{q^{1} \\ q} & \compd{r^{1} \\ r} & \compd{\neg p^{1} \\ \neg p} & \compd{\neg q^{1} \\ \neg q} & \compd{\neg r^{1} \\ \neg r} & \compd{p^{2} \\ \ensuremath{0}} & \compd{q^{2} \\ \ensuremath{0}} & \compd{r^{2} \\ \ensuremath{0}} & \compd{\neg p^{2} \\ \ensuremath{0}} & \compd{\neg q^{2} \\ \ensuremath{0}} & \compd{\neg r^{2} \\ \ensuremath{0}} & \compd{p^{3} \\ \ensuremath{0}} & \compd{q^{3} \\ \ensuremath{0}} & \compd{r^{3} \\ \ensuremath{0}} & \compd{\neg p^{3} \\ \ensuremath{0}} & \compd{\neg q^{3} \\ \ensuremath{0}} & \compd{\neg r^{3} \\ \ensuremath{0}} & \compd{p^{4} \\ \ensuremath{0}} & \compd{q^{4} \\ \ensuremath{0}} & \compd{r^{4} \\ \ensuremath{0}} & \compd{\neg p^{4} \\ \ensuremath{0}} & \compd{\neg q^{4} \\ \ensuremath{0}} & \compd{\neg r^{4} \\ \ensuremath{0}} \\ \hline 2 & \compd{p^{1} \\ q^{1} \\ r^{1} \\ \neg p^{1} \\ \neg q^{1} \\ \neg r^{1}} & \compd{\neg q^{2} \\ r^{2}} & \compd{p^{3} \\ q^{3} \\ r^{3} \\ \neg p^{3} \\ \neg q^{3} \\ \neg r^{3}} & \compd{p^{4} \\ q^{4} \\ r^{4} \\ \neg p^{4} \\ \neg q^{4} \\ \neg r^{4}} & \compd{p^{1} \\ \ensuremath{0}} & \compd{q^{1} \\ \ensuremath{0}} & \compd{r^{1} \\ \ensuremath{0}} & \compd{\neg p^{1} \\ \ensuremath{0}} & \compd{\neg q^{1} \\ \ensuremath{0}} & \compd{\neg r^{1} \\ \ensuremath{0}} & \compd{p^{2} \\ p} & \compd{q^{2} \\ q} & \compd{r^{2} \\ r} & \compd{\neg p^{2} \\ \neg p} & \compd{\neg q^{2} \\ \neg q} & \compd{\neg r^{2} \\ \neg r} & \compd{p^{3} \\ \ensuremath{0}} & \compd{q^{3} \\ \ensuremath{0}} & \compd{r^{3} \\ \ensuremath{0}} & \compd{\neg p^{3} \\ \ensuremath{0}} & \compd{\neg q^{3} \\ \ensuremath{0}} & \compd{\neg r^{3} \\ \ensuremath{0}} & \compd{p^{4} \\ \ensuremath{0}} & \compd{q^{4} \\ \ensuremath{0}} & \compd{r^{4} \\ \ensuremath{0}} & \compd{\neg p^{4} \\ \ensuremath{0}} & \compd{\neg q^{4} \\ \ensuremath{0}} & \compd{\neg r^{4} \\ \ensuremath{0}} \\ \hline 3 & \compd{p^{1} \\ q^{1} \\ r^{1} \\ \neg p^{1} \\ \neg q^{1} \\ \neg r^{1}} & \compd{p^{2} \\ q^{2} \\ r^{2} \\ \neg p^{2} \\ \neg q^{2} \\ \neg r^{2}} & \compd{\neg p^{3} \\ \neg q^{3}} & \compd{p^{4} \\ q^{4} \\ r^{4} \\ \neg p^{4} \\ \neg q^{4} \\ \neg r^{4}} & \compd{p^{1} \\ \ensuremath{0}} & \compd{q^{1} \\ \ensuremath{0}} & \compd{r^{1} \\ \ensuremath{0}} & \compd{\neg p^{1} \\ \ensuremath{0}} & \compd{\neg q^{1} \\ \ensuremath{0}} & \compd{\neg r^{1} \\ \ensuremath{0}} & \compd{p^{2} \\ \ensuremath{0}} & \compd{q^{2} \\ \ensuremath{0}} & \compd{r^{2} \\ \ensuremath{0}} & \compd{\neg p^{2} \\ \ensuremath{0}} & \compd{\neg q^{2} \\ \ensuremath{0}} & \compd{\neg r^{2} \\ \ensuremath{0}} & \compd{p^{3} \\ p} & \compd{q^{3} \\ q} & \compd{r^{3} \\ r} & \compd{\neg p^{3} \\ \neg p} & \compd{\neg q^{3} \\ \neg q} & \compd{\neg r^{3} \\ \neg r} & \compd{p^{4} \\ \ensuremath{0}} & \compd{q^{4} \\ \ensuremath{0}} & \compd{r^{4} \\ \ensuremath{0}} & \compd{\neg p^{4} \\ \ensuremath{0}} & \compd{\neg q^{4} \\ \ensuremath{0}} & 
\compd{\neg r^{4} \\ \ensuremath{0}} \\ \hline 4 & \compd{p^{1} \\ q^{1} \\ r^{1} \\ \neg p^{1} \\ \neg q^{1} \\ \neg r^{1}} & \compd{p^{2} \\ q^{2} \\ r^{2} \\ \neg p^{2} \\ \neg q^{2} \\ \neg r^{2}} & \compd{p^{3} \\ q^{3} \\ r^{3} \\ \neg p^{3} \\ \neg q^{3} \\ \neg r^{3}} & \compd{p^{4} \\ q^{4}} & \compd{p^{1} \\ \ensuremath{0}} & \compd{q^{1} \\ \ensuremath{0}} & \compd{r^{1} \\ \ensuremath{0}} & \compd{\neg p^{1} \\ \ensuremath{0}} & \compd{\neg q^{1} \\ \ensuremath{0}} & \compd{\neg r^{1} \\ \ensuremath{0}} & \compd{p^{2} \\ \ensuremath{0}} & \compd{q^{2} \\ \ensuremath{0}} & \compd{r^{2} \\ \ensuremath{0}} & \compd{\neg p^{2} \\ \ensuremath{0}} & \compd{\neg q^{2} \\ \ensuremath{0}} & \compd{\neg r^{2} \\ \ensuremath{0}} & \compd{p^{3} \\ \ensuremath{0}} & \compd{q^{3} \\ \ensuremath{0}} & \compd{r^{3} \\ \ensuremath{0}} & \compd{\neg p^{3} \\ \ensuremath{0}} & \compd{\neg q^{3} \\ \ensuremath{0}} & \compd{\neg r^{3} \\ \ensuremath{0}} & \compd{p^{4} \\ p} & \compd{q^{4} \\ q} & \compd{r^{4} \\ r} & \compd{\neg p^{4} \\ \neg p} & \compd{\neg q^{4} \\ \neg q} & \compd{\neg r^{4} \\ \neg r} \\ \end{tabular} \begin{tabular}{r|*{9}{|>{\centering\arraybackslash}c}|*{24}{|>{\centering\arraybackslash}c}} & 29 & 30 & 31 & 32 & & 49 & 50 & 51 & 52 & 53 & 54 & 55 & 56 & 57 & 58 & 59 & 60 & 61 & 62 & 63 & 64 & 65 & 66 & 67 & 68 & 69 & 70 & 71 & 72 & 73 & 74 & 75 & 76 \\ \hline \hline 1 & \compd{p^{1} \\ \ensuremath{0}} & \compd{q^{1} \\ \ensuremath{0}} & \compd{r^{1} \\ \ensuremath{0}} & \compd{\neg p^{1} \\ \ensuremath{0}} & \compd{\ldots} & \compd{r^{4} \\ \ensuremath{0}} & \compd{\neg p^{4} \\ \ensuremath{0}} & \compd{\neg q^{4} \\ \ensuremath{0}} & \compd{\neg r^{4} \\ \ensuremath{0}} & \compd{p' \\ p} & \compd{q' \\ q} & \compd{r' \\ r} & \compd{\neg p' \\ \neg p} & \compd{\neg q' \\ \neg q} & \compd{\neg r' \\ \neg r} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\neg p' \\ \ensuremath{0}} & \compd{\neg q' \\ \ensuremath{0}} & \compd{\neg r' \\ \ensuremath{0}} & \compd{p' \\ \ensuremath{0}} & \compd{q' \\ \ensuremath{0}} & \compd{r' \\ \ensuremath{0}} \\ \hline 2 & \compd{p^{1} \\ \ensuremath{0}} & \compd{q^{1} \\ \ensuremath{0}} & \compd{r^{1} \\ \ensuremath{0}} & \compd{\neg p^{1} \\ \ensuremath{0}} & \compd{\ldots} & \compd{r^{4} \\ \ensuremath{0}} & \compd{\neg p^{4} \\ \ensuremath{0}} & \compd{\neg q^{4} \\ \ensuremath{0}} & \compd{\neg r^{4} \\ \ensuremath{0}} & \compd{p' \\ \ensuremath{0}} & \compd{q' \\ \ensuremath{0}} & \compd{r' \\ \ensuremath{0}} & \compd{\neg p' \\ \ensuremath{0}} & \compd{\neg q' \\ \ensuremath{0}} & \compd{\neg r' \\ \ensuremath{0}} & \compd{\ensuremath{0} \\ p} & \compd{\ensuremath{0} \\ q} & \compd{\ensuremath{0} \\ r} & \compd{\ensuremath{0} \\ \neg p} & \compd{\ensuremath{0} \\ \neg q} & \compd{\ensuremath{0} \\ \neg r} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\neg p' \\ \ensuremath{0}} & \compd{\neg q' \\ \ensuremath{0}} & \compd{\neg r' \\ \ensuremath{0}} & \compd{p' \\ \ensuremath{0}} & \compd{q' \\ \ensuremath{0}} & \compd{r' \\ \ensuremath{0}} \\ \hline 3 & \compd{p^{1} \\ \ensuremath{0}} & \compd{q^{1} \\ \ensuremath{0}} & \compd{r^{1} \\ \ensuremath{0}} & \compd{\neg 
p^{1} \\ \ensuremath{0}} & \compd{\ldots} & \compd{r^{4} \\ \ensuremath{0}} & \compd{\neg p^{4} \\ \ensuremath{0}} & \compd{\neg q^{4} \\ \ensuremath{0}} & \compd{\neg r^{4} \\ \ensuremath{0}} & \compd{p' \\ \ensuremath{0}} & \compd{q' \\ \ensuremath{0}} & \compd{r' \\ \ensuremath{0}} & \compd{\neg p' \\ \ensuremath{0}} & \compd{\neg q' \\ \ensuremath{0}} & \compd{\neg r' \\ \ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0} \\ p} & \compd{\ensuremath{0} \\ q} & \compd{\ensuremath{0} \\ r} & \compd{\ensuremath{0} \\ \neg p} & \compd{\ensuremath{0} \\ \neg q} & \compd{\ensuremath{0} \\ \neg r} & \compd{\neg p' \\ \ensuremath{0}} & \compd{\neg q' \\ \ensuremath{0}} & \compd{\neg r' \\ \ensuremath{0}} & \compd{p' \\ \ensuremath{0}} & \compd{q' \\ \ensuremath{0}} & \compd{r' \\ \ensuremath{0}} \\ \hline 4 & \compd{p^{1} \\ \ensuremath{0}} & \compd{q^{1} \\ \ensuremath{0}} & \compd{r^{1} \\ \ensuremath{0}} & \compd{\neg p^{1} \\ \ensuremath{0}} & \compd{\ldots} & \compd{r^{4} \\ \ensuremath{0}} & \compd{\neg p^{4} \\ \ensuremath{0}} & \compd{\neg q^{4} \\ \ensuremath{0}} & \compd{\neg r^{4} \\ \ensuremath{0}} & \compd{p' \\ \ensuremath{0}} & \compd{q' \\ \ensuremath{0}} & \compd{r' \\ \ensuremath{0}} & \compd{\neg p' \\ \ensuremath{0}} & \compd{\neg q' \\ \ensuremath{0}} & \compd{\neg r' \\ \ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\ensuremath{0}} & \compd{\neg p' \\ p} & \compd{\neg q' \\ q} & \compd{\neg r' \\ r} & \compd{p' \\ \neg p} & \compd{q' \\ \neg q} & \compd{r' \\ \neg r} \\ \end{tabular} \end{center} \end{sidewaystable} Let us organize the sequence $X$ into a table with $\ensuremath{c} + 6\ensuremath{c}\ensuremath{v}$ columns again (using $X^1_{j,i}$ notation) and ignore the last $2\ensuremath{c}\ensuremath{v}$ variables with domains $\set{\ensuremath{0}}$. Each row belongs to one clause. Also there is one set of literal representing values for each clause ($\funid{lit}(\phi)^j$ for $C_j$). Clauses form domains on the main diagonal using their sets of values. The rest of the domains in the first $\ensuremath{c}$ columns are constructed to enable repetitions of values selected for the clause variables. The next $2\ensuremath{c}\ensuremath{v}$ columns are called \emph{positive} part, because the values from $S_2$ it contains will represent a model of $\phi$. Due to the difference between periods, in each row $j$ these values from $S_2$ are aligned with the values of the clause of the row (values from $\funid{lit}(\phi)^j$). Thanks to this, when a value $d^j$ is assigned to the variable of clause $C_j$, $d^j$ cannot occur in the rest of the row, thus also not in the positive part, so the appropriate value $d$ has to be selected in this part. This is why, having a solution of the constraint, at least one literal from each clause will be assigned as a value from $S_2$ to some variable in the positive part. The last $2\ensuremath{c}\ensuremath{v}$ columns are \emph{consistency} part, because they ensure the consistency of the model represented by a solution. When a value $d \in S_2$ is assigned to a variable in the positive part, it has to be repeated in the positive part in each row. 
Also it cannot be repeated in the rest of any row. Hence a corresponding primed value $d' \in S_1$ must be assigned to one of the first $2\ensuremath{v}$ places of the consistency part of the first row (and also of each other row), thus $d'$ cannot repeat in the last $2\ensuremath{v}$ places of the consistency part in any row, in particular not in the last one. Note that the positive and negative primed values are switched in the last $2\ensuremath{v}$ places, so that, in the last row, the negative primed values are aligned with the positive values from $S_2$ and the positive ones are aligned with the negative ones.

This whole construction of the domains ensures that if two complementary literal values $d, \overline{d} \in S_2$ are assigned to variables in the positive part, they cannot be assigned to the first $2\ensuremath{v}$ variables in the first row of the consistency part, so both $d', \overline{d'} \in S_1$ have to be assigned in these first $2\ensuremath{v}$ variables and cannot be assigned to the last $2\ensuremath{v}$ variables in the last row of the consistency part. However, the same assignment of values from $S_2$ must be repeated in each row of this part, so two variables in the last $2\ensuremath{v}$ places of the last row would be left with empty domains. Analogous reasoning rules out the case when none of $d, \overline{d} \in S_2$ is assigned in the positive part. It is also easy to see that when exactly one of $d, \overline{d} \in S_2$ is assigned in the positive part, no contradiction is reached. This shows that, given a solution of the constraint, the set of literals assigned as values from $S_2$ in the positive part is consistent. So, together with the result above, given a solution of the constraint, the set of literals assigned as values from $S_2$ in the positive part is a model of $\phi$.

We can obtain a solution of the constraint from a model of $\phi$ in the following way: we assign the literals from the model as values of the second voice to the variables in the positive part. Then, for the clause variables, we choose values that represent literals satisfied by the model, and we assign the rest of the variables so that the constraint holds. This is always possible. It is always possible to assign the literals from the model to the positive part, because there is one variable for each literal in each row of the part and these variables repeat with the period of the second voice (where the values are from). It is always possible to complete the assignment to the positive part, because each value has an alternative in each domain and each value can be repeated with its period. As long as the model is consistent, no contradiction can be reached in the consistency part because of the following: for any pair of complementary literals $d, \overline{d} \in S_2$, one of them is assigned in the positive part and the other one is not. The selection of the values from $S_2$ in the consistency part is reversed in comparison to the positive part, so the mutual exclusion still holds. The selection of the primed values in the first $2\ensuremath{v}$ columns of the consistency part will represent the model again and the selection of the primed values in the last $2\ensuremath{v}$ columns will be reversed again.
Due to the fact that the positive values are swapped for the negative ones in the last $2\ensuremath{v}$ columns, the assignment of the primed values complements the assignment of values from $S_2$ in the first as well as in the last $2\ensuremath{v}$ columns, and no domains are emptied. It is always possible to select some value for the clause variables because of the following: in the positive part of each row $j$, the selection of the values indexed with $j$ will be complementary to the selection of the values from the model. This leaves the possibility for the indexed values representing literals satisfied by the model to be selected elsewhere in the row, for instance in the clause variable. Each clause variable always contains at least one value representing a literal satisfied by the model. The assignment to the clause variables can be repeated in each row, and the primed values that could not be selected in the positive part nor in the clause variables can still be assigned to some of the variables in the columns between $\ensuremath{c}+2\ensuremath{c}\ensuremath{v}+1$ and $\ensuremath{c}+4\ensuremath{c}\ensuremath{v}$, called the \emph{negative} part. \qed \end{proof}

\section{Incomplete filtering of \algid{Spacing}H}

Since even a conjunction of two \algid{Spacing}ONE\ constraints is NP-hard, we devise an additional incomplete rule that facilitates propagation between two \algid{Spacing}ONE\ constraints on the same sequence. If the periods of these two constraints have different lengths, assigning a value from the first one to one variable forbids assigning any value from the second one to multiple variables.

For example, take $\algid{Spacing}ONE(\set{a,b}, 5, 4, X)$ and $\algid{Spacing}ONE(\set{c,d}, 7, 3, X)$ on the same sequence $X$ of 21 variables. We start with full domains for all variables. After assigning $a$ to the first variable and filtering, the domains are:
\begin{center} \newcommand{\compd}[1]{\scriptsize\begin{tabular}{>{$}c<{$}} #1 \end{tabular}} \begin{tabular}{r|*{21}{|>{\centering\arraybackslash}p{3.7mm}}} $i = $ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 & 21 \\ \hline \hline $D(X_i) =$ & \compd{ a \\ \ \\ \ \\ \ \\ \ } & \compd{ \ \\ b \\ \ \\ \ \\ \ensuremath{0} } & \compd{ \ \\ b \\ c \\ d \\ \ensuremath{0} } & \compd{ \ \\ b \\ \ \\ \ \\ \ensuremath{0} } & \compd{ \ \\ b \\ c \\ d \\ \ensuremath{0} } & \compd{ a \\ \ \\ \ \\ \ \\ \ } & \compd{ \ \\ b \\ c \\ d \\ \ensuremath{0} } & \compd{ \ \\ b \\ \ \\ \ \\ \ensuremath{0} } & \compd{ \ \\ b \\ \ \\ \ \\ \ensuremath{0} } & \compd{ \ \\ b \\ c \\ d \\ \ensuremath{0} } & \compd{ a \\ \ \\ \ \\ \ \\ \ } & \compd{ \ \\ b \\ c \\ d \\ \ensuremath{0} } & \compd{ \ \\ b \\ \ \\ \ \\ \ensuremath{0} } & \compd{ \ \\ b \\ c \\ d \\ \ensuremath{0} } & \compd{ \ \\ b \\ \ \\ \ \\ \ensuremath{0} } & \compd{ a \\ \ \\ \ \\ \ \\ \ } & \compd{ \ \\ b \\ c \\ d \\ \ensuremath{0} } & \compd{ \ \\ b \\ \ \\ \ \\ \ensuremath{0} } & \compd{ \ \\ b \\ c \\ d \\ \ensuremath{0} } & \compd{ \ \\ b \\ \ \\ \ \\ \ensuremath{0} } & \compd{ \ \\ \ \\ c \\ d \\ \ensuremath{0} } \\ \end{tabular} \end{center}
If we assign $b$ to $X_4$, the assignment must be repeated every 5 variables. These repetitions occur on different places of the period of the second constraint, so a sufficient number of repetitions of $c$ and $d$ will not be possible.
Subsequent filtering would remove circled values \begin{center} \newcommand{\compd}[1]{\scriptsize\begin{tabular}{>{$}c<{$}} #1 \end{tabular}} \begin{tabular}{r|*{21}{|>{\centering\arraybackslash}p{3.7mm}}} $i = $ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 & 21 \\ \hline \hline $D(X_i) =$ & \compd{ a \\ \ \\ \ \\ \ \\ \ } & \compd{ \ \\ \textcircled b \\ \ \\ \ \\ \ensuremath{0} } & \compd{ \ \\ \textcircled b \\ c \\ d \\ \ensuremath{0} } & \compd{ \ \\ b \\ \ \\ \ \\ \textcircled \ensuremath{0} } & \compd{ \ \\ \textcircled b \\ \textcircled c \\ \textcircled d \\ \ensuremath{0} } & \compd{ a \\ \ \\ \ \\ \ \\ \ } & \compd{ \ \\ \textcircled b \\ \textcircled c \\ \textcircled d \\ \ensuremath{0} } & \compd{ \ \\ \textcircled b \\ \ \\ \ \\ \ensuremath{0} } & \compd{ \ \\ b \\ \ \\ \ \\ \textcircled \ensuremath{0} } & \compd{ \ \\ \textcircled b \\ c \\ d \\ \ensuremath{0} } & \compd{ a \\ \ \\ \ \\ \ \\ \ } & \compd{ \ \\ \textcircled b \\ \textcircled c \\ \textcircled d \\ \ensuremath{0} } & \compd{ \ \\ \textcircled b \\ \ \\ \ \\ \ensuremath{0} } & \compd{ \ \\ b \\ \textcircled c \\ \textcircled d \\ \textcircled \ensuremath{0} } & \compd{ \ \\ \textcircled b \\ \ \\ \ \\ \ensuremath{0} } & \compd{ a \\ \ \\ \ \\ \ \\ \ } & \compd{ \ \\ \textcircled b \\ c \\ d \\ \ensuremath{0} } & \compd{ \ \\ \textcircled b \\ \ \\ \ \\ \ensuremath{0} } & \compd{ \ \\ b \\ \textcircled c \\ \textcircled d \\ \textcircled \ensuremath{0} } & \compd{ \ \\ \textcircled b \\ \ \\ \ \\ \ensuremath{0} } & \compd{ \ \\ \ \\ \textcircled c \\ \textcircled d \\ \ensuremath{0} } \\ \end{tabular} \end{center} and render the second constraint unsatisfiable. The same course of reasoning holds for assigning $b$ to $X_2$ and $X_5$. Thus only possibility is to assign $b$ to $X_3$, which leaves only two places free in the period of the second \algid{Spacing}ONE, hence two symmetrical solutions. Formally, we first need to count a number of places that may contain a value from $S_l$ for each \algid{Spacing}ONE\ constraint $l$. We denote it by $u_l$ and define as $$ u_l = |\setcomp{i}{S_l \cap P^l_i \neq \emptyset, 1 \leq i \leq p_l}| $$ where $P^l_i$ are the folded domains from the proof of Theorem~\ref{thm:tract1} for the constraint $l$. Now we need to count a number of places that are not blocked for values from $S_{l_2}$, but would become blocked after assigning a value from $S_{l_1}$ to a variable $X_i$ in the first period of $l_1$ (i.e. $1 \leq i \leq p_{l_1}$). This count is denoted by $b^{l_1, l_2}_i$ and defined as $$ |\setcomp{x}{x = ((i + j p_{l_1} -1) \mbox{\scriptsize \rm \ mod \ } p_{l_2})+1, 0 \leq j < k_{l_1}, i + j p_{l_1} \leq k_{l_2} p_{l_2}, S_{l_2} \cap D(X_x) \neq \emptyset}| $$ for each pair of different voices $l_1$, $l_2$ and $1 \leq i \leq p_{l_1}$. \begin{theorem}\label{thm:soundR} Let $l_1 = \algid{Spacing}ONE(S_{l_1}, p_{l_1}, k_{l_1}, X)$ and $l_2 = \algid{Spacing}ONE(S_{l_2}, p_{l_2}, k_{l_2}, X)$ be two constraints defined on the same sequence $X$ with $S_{l_1} \cap S_{l_2} = \emptyset$ that are \emph{DC}. Let $u_{l_2}$ and $b^{l_1, l_2}_i$ be defined as above. If $b^{l_1, l_2}_i > u_{l_2} - |S_{l_2}|$ then there is no support for $l_1 \ \wedge \ l_2$ with any value from $S_{l_1}$ assigned to $X_i$ within $1 \leq i \leq p_{l_1}$. \end{theorem} \begin{proof} Suppose the premises of the theorem hold. 
The places in the first period of $l_2$ that may hold a value from $S_{l_2}$ (together with the places that already hold one) are called \emph{free}. The rest of the places in the first period of $l_2$ are called \emph{blocked}. The number of the free places is $u_{l_2}$. Suppose that we assigned a value $d_1 \in S_{l_1}$ to $X_i$. Then $d_1$ has to repeat on the same place in each period of $l_1$, that is, on the places $i + j p_{l_1}$ for each $0 \leq j < k_{l_1}$. Now no value from $S_{l_2}$ can be put on those of these places that are constrained by $l_2$, i.e.\ $i + j p_{l_1} \leq k_{l_2} p_{l_2}$ for $0 \leq j < k_{l_1}$. So no value from $S_{l_2}$ can repeat on whatever places in the period $p_{l_2}$ these places are aligned with. An arbitrary index $y$ into the sequence $X$ is the $((y-1) \mbox{\scriptsize \rm \ mod \ } p_{l_2})+1$-st place in a period of $l_2$, so no value from $S_{l_2}$ can be assigned to any variable with index $x = ((i + j p_{l_1} -1) \mbox{\scriptsize \rm \ mod \ } p_{l_2})+1$ where $i + j p_{l_1} \leq k_{l_2} p_{l_2}$ for $0 \leq j < k_{l_1}$. Since $l_2$ is \emph{DC}\ and $1 \leq x \leq p_{l_2}$, we have $S_{l_2} \cap D(X_x) = S_{l_2} \cap P^{l_2}_x$. This restricts the places we are counting in $b^{l_1, l_2}_i$ to those that were free, so $b^{l_1, l_2}_i$ is the number of places that are blocked exclusively by assigning $d_1$ to $X_i$.

Finally, suppose $b^{l_1, l_2}_i > u_{l_2} - |S_{l_2}|$ holds. This is equivalent to $u_{l_2} - b^{l_1, l_2}_i < |S_{l_2}|$, which means that the number of free places minus the places blocked solely by $d_1$ on place $i$ (the places left free after the assignment) is less than the number of values from $S_{l_2}$ we need to assign to one period of $l_2$. The constraint $l_2$ is obviously unsatisfiable in this case, so there is no support for $l_1 \ \wedge \ l_2$ with any value $d \in S_{l_1}$ assigned to $X_i$. \qed \end{proof}

\section{Application}

One of the applications of CSP in music composition is the problem of \emph{Asynchronous rhythms} described by Truchet~\cite{TruchetPhD,MusicBook}. Given $h$ voices and a time horizon $H$, the goal is to construct one rhythmical pattern for each voice. The patterns are played repeatedly until the time horizon is reached. The pattern of voice $l$ is of length $p_l$ and consists of $m_l$ onsets that have to be placed so that no two onsets are played at the same time. We also consider an extension of this problem, where the composer may want to impose additional constraints, such as forbidding or enforcing particular onsets on particular places, or restricting the density of the onsets, e.g.\ at most one onset on two successive places.

Truchet~\cite{TruchetPhD,MusicBook} formalized this problem in terms of variables $V_{l,i}$ that denote the time when onset $i$ is played in the pattern of voice $l$ (so $D(V_{l,i}) = \set{1, \ldots, p_l}$) and the following constraints: $\algid{AllDifferent}(\set{V_{l,1}, \ldots, V_{l,m_l}})$ for each voice $l$, and $V_{l_1,i_1} + j_1 p_{l_1} \neq V_{l_2,i_2} + j_2 p_{l_2}$ for each pair of different voices $l_1$, $l_2$, each onset $i_1$, $i_2$ of the respective voices, and each $j_1$, $j_2$ s.t.\ $V_{l_1,i_1} + j_1 p_{l_1} \leq H$ and $V_{l_2,i_2} + j_2 p_{l_2} \leq H$. This, however, cannot be encoded directly as a CSP model, because if $H \mbox{\scriptsize \rm \ mod \ } p_l \neq 0$ for some voice $l$ then the range of $j_l$ in the latter constraint depends on $V_{l,i}$ for each onset $i$.
We fix this by a minor modification in which the pattern of each voice $l$ is repeated exactly $k_l$ times and the indexes $j_1$, $j_2$ in the last constraint are quantified by $0 \leq j_1 < k_{l_1}$ and $0 \leq j_2 < k_{l_2}$. We will call this model \algid{$OM$}.

We encoded the problem into three new models. The first model (\algid{$SM$}) uses one $\algid{Spacing}ONE(S_l, p_l, k_l, X)$ for each voice $l$ on the same sequence of variables $X$. The sets of values $S_l$ represent the onsets for each voice $l$, so they must be pairwise disjoint. The length of the pattern of voice $l$ is $p_l$ and the number of its repetitions is $k_l$. Each variable $X_j$ represents the onset played in beat $j$, so $D(X_j) = \bigcup_{l = 1}^h S_l \cup \set{\ensuremath{0}}$ where $\ensuremath{0} \not\in \bigcup_{l = 1}^h S_l$.

\begin{theorem}\label{thm:strongS} Enforcing \emph{DC}\ on \algid{$SM$}\ is strictly stronger than enforcing \emph{DC}\ on \algid{$OM$}. \end{theorem}

\begin{proof} First we show that if every constraint in \algid{$SM$}\ is \emph{DC}, then also every constraint in \algid{$OM$}\ is \emph{DC}. Suppose every constraint in \algid{$SM$}\ is \emph{DC}. The first question is what the correspondence between the two models is in this case, i.e.\ if \algid{$SM$}\ is \emph{DC}, with what domains we should check \algid{$OM$}\ for \emph{DC}. The correspondence should be semantic: for every placing of every onset, it should agree on whether it is possible to place the other onsets so that each constraint of the model is satisfied. Since all the \algid{Spacing}ONE\ constraints are \emph{DC}, the domains restricted to their values are the same in each of their periods. So the correspondence between the models can be described only on the first periods. It is: $d \in D(X_i)$, where $d \in S_l$ and $1 \leq i \leq p_l$, iff $i \in D(V_{l,d})$.

If every constraint in \algid{$SM$}\ is \emph{DC}, then also every \algid{AllDifferent}\ in \algid{$OM$}\ is \emph{DC}, because the bipartite graph in the propagator of \algid{Spacing}ONE\ of voice $l$ is a supergraph of that of \algid{AllDifferent}\ for voice $l$ in \algid{$OM$}. So, since the \algid{Spacing}ONE\ constraint is \emph{DC}, all edges in its graph belong to some maximum matching. If we remove each \ensuremath{0}\ node, we obtain the graph of the corresponding \algid{AllDifferent}. Each of the edges in the new graph still belongs to some maximum matching in it, because removing nodes preserves maximum matchings: since each edge is in some maximum matching and each node is in at most one edge of a matching, each maximum matching loses exactly one edge per removed node.

If every constraint in \algid{$SM$}\ is \emph{DC}, then also every difference constraint in \algid{$OM$}\ is \emph{DC}. The semantics of a difference constraint $V_{l_1,i_1} + j_1 p_{l_1} \neq V_{l_2,i_2} + j_2 p_{l_2}$ is that the onset $i_1$ in the $(j_1+1)$-st period of the voice $l_1$ cannot be played at the same time as the onset $i_2$ in the $(j_2+1)$-st period of the voice $l_2$. If a difference constraint were not \emph{DC}, this would mean that for some placing of the onset $i_1$ it is not possible to place the onset $i_2$ so that they would be played at different times in the $(j_1+1)$-st and $(j_2+1)$-st periods of their respective voices. This is possible only if the placing of $i_2$ is fixed ($D(V_{l_2,i_2}) = \set{x}$).
In \algid{$SM$}, however, this means that the value of this onset has only one place where it can occur in one period of $l_2$ ($i_2 \in D(X_y)$ iff $y = x$ for $1\leq y \leq p_{l_2}$), so the propagator of \algid{Spacing}ONE\ for this voice would instantiate it and hence remove the other values from the domain of the variable of that place ($D(X_x) = \set{i_2}$). For the \algid{Spacing}ONE\ of $l_1$ this means instantiating the variable of that place to \ensuremath{0}\ ($D(Y^{l_1}_x) = \set{\ensuremath{0}}$), so, assuming \algid{$SM$}\ is \emph{DC}, we have a contradiction with the possibility of placing $i_1$ so that the difference constraint cannot be satisfied ($x \notin D(V_{l_1,i_1})$ because $i_1 \notin D(Y^{l_1}_x) = \set{\ensuremath{0}}$), so the difference constraint must be \emph{DC}.

To show strictness, consider an instance of \emph{Asynchronous rhythms} with two voices, the first having $m_1 = p_1 = k_1 = 2$ and the second one $m_2 = 1$, $p_2 = 3$, $k_2 = 2$. Additionally, the very first place has to contain an onset of the first voice. The domains of the variables of \algid{$OM$}\ for this instance are $D(V_{1,a}) = D(V_{1,b}) = \set{1,2}$ and $D(V_{2,c}) = \set{2,3}$, and the constraints are $\algid{AllDifferent}(\set{V_{1,a}, V_{1,b}})$ and $V_{1,i} \neq V_{2,c}$, $V_{1,i} + 2 \neq V_{2,c}$, $V_{1,i} \neq V_{2,c} + 3$, $V_{1,i} + 2 \neq V_{2,c} + 3$ for $i \in \set{a,b}$. All the constraints of this model are \emph{DC}. However, the domains of the variables of \algid{$SM$}\ are $D(X_1) = \set{a,b}$ and $D(X_2) = \ldots = D(X_6) = \set{a,b,c,\ensuremath{0}}$ where $S_1 = \set{a,b}$ and $S_2 = \set{c}$. The propagation of \algid{Spacing}ONE\ for the first voice will remove all values not in $S_1$ from $D(X_2)$ up to $D(X_4)$. This leaves the \algid{Spacing}ONE\ for the second voice unsatisfiable. So \algid{$SM$}\ determines without search that the instance is unsatisfiable, while \algid{$OM$}\ is not able to do so. \qed \end{proof}

A drawback of \algid{$SM$}\ is that, due to the interchangeability of onsets within one voice, it generates a lot of symmetrical solutions. This observation led to the second model (\algid{$SB$}). It is the same as the first one, except that it uses a version of \algid{Spacing}ONE\ that does not distinguish between different values in one voice, which is a piece of information a composer does not need when constructing rhythmical patterns. This new constraint, denoted $\algid{Spacing}\ensuremath{_{SB}}(d, m, p, k, X)$, can be defined as $\algid{Spacing}ONE(S, p, k, X)$ where $S$ is a multiset of $m$ values $d$. In other words, $\algid{Spacing}\ensuremath{_{SB}}(d, m, p, k, X)$ is satisfied iff there are exactly $m$ occurrences of $d$ in the first $p$ places of $X$ and this pattern is repeated $k$ times.

The filtering algorithm of \algid{Spacing}\ensuremath{_{SB}}\ takes $O(n)$ time down a branch of the search tree, as we can avoid the maximum matching step. After folding the domains $P_i = \bigcap_{j=0}^{k-1} D(X_{j p + i})$ for $1 \leq i \leq p$, the filtering algorithm simply counts $u = |\setcomp{i}{d \in P_i, 1 \leq i \leq p}|$ and $v = |\setcomp{i}{P_i = \set{d}, 1 \leq i \leq p}|$. The algorithm fails iff $u < m$ or $v > m$; if $u = m$ it instantiates all $X_i$ with $d \in D(X_i)$ to $d$, and if $v = m$ it removes $d$ from all domains with $D(X_i) \neq \set{d}$ (see the sketch below).
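A minimal sketch of this counting-based filter on the folded domains follows. It is illustrative Python only (not the incremental implementation used later); the names are ours, and reflecting the folded domains back into every period, described next, is reduced to a single intersection per place.
\begin{verbatim}
# Minimal sketch of the Spacing_SB counting filter (illustrative only).
# 'domains' is a list of n = p*k sets; d is the repeated onset value.
def filter_spacing_sb(domains, d, m, p, k):
    # Fold the k periods: P_i is the intersection of the domains on place i.
    P = [set.intersection(*(domains[j * p + i] for j in range(k)))
         for i in range(p)]
    u = sum(1 for Pi in P if d in Pi)        # places that may still hold d
    v = sum(1 for Pi in P if Pi == {d})      # places already fixed to d
    if u < m or v > m:
        raise ValueError("Spacing_SB is unsatisfiable")
    if u == m:                               # every candidate place must hold d
        for i in range(p):
            if d in P[i]:
                P[i] = {d}
    elif v == m:                             # no further place may hold d
        for i in range(p):
            if P[i] != {d}:
                P[i].discard(d)
    # Reflect the folded domains back into every period.
    for i in range(p):
        for j in range(k):
            domains[j * p + i] &= P[i]
    return domains
\end{verbatim}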
Of course, the algorithm reflects the folded domains $P_i$ back to the domains $D(X_i)$ by removing $d$ if $d \notin P_i$ or instantiating to $d$ if $P_i = \set{d}$, and it repeats all the instantiations and all the removals on the appropriate places in each period.

\begin{theorem}\label{thm:correctBS} Enforcing \emph{DC}\ on \algid{Spacing}\ensuremath{_{SB}}\ can be done in $O(n)$ time down a branch of the search tree. \end{theorem}

\begin{proof} In order to show that the filtering algorithm described above really establishes \emph{DC}, we need to show that it fails iff the constraint is unsatisfiable and that, if it does not fail, the constraint is \emph{DC}. Thanks to the folding of the domains, the reflecting of the folded domains back, and the repetition of instantiations and removals in each period, for the domains on the same places of each period ($\setcomp{D(X_{jp+i})}{0 \leq j < k}$ for each $i$) it holds that one contains $d$ or is $\set{d}$ iff all the others contain $d$ or are $\set{d}$, respectively. This allows us to restrict our reasoning to one period.

First, we show that the algorithm fails iff the constraint is unsatisfiable. Given what is shown in the previous paragraph, the only way to falsify the constraint is to put either too many or too few occurrences of $d$ into one period. Obviously, $u < m$ means that there are not enough variables in one period with $d$ in their domains, and $v > m$ means that $d$ is assigned to too many variables in one period. So $u < m$ or $v > m$ holds iff the constraint is unsatisfiable, so the algorithm fails iff the constraint is unsatisfiable.

Second, we show that if the algorithm removes a value from the domain of a variable, there is no solution of the constraint with this value assigned to this variable. Suppose the algorithm removes $d$ from $D(X_i)$. Apart from the cases considered in the first paragraph, the algorithm does so only when $v = m$ and $i$ is not one of the places counted in $v$. So if there were a solution with $X_i = d$, the number of occurrences of $d$ in one period would be $m+1$, which is a contradiction. Suppose the algorithm removes $d' \neq d$ from $D(X_i)$. Apart from the cases considered in the first paragraph, the algorithm does so only when $u = m$ and $i$ is one of the places counted in $u$. So if there were a solution with $X_i = d'$, the number of occurrences of $d$ in one period would be $m-1$, which is a contradiction.

Finally, we show that if the constraint is satisfiable and there is no solution of the constraint with a value assigned to a variable, the algorithm removes the value from the domain of the variable. Suppose the constraint is satisfiable and there is no solution of the constraint with a value $d'$ assigned to a variable $X_i$. Apart from the cases considered in the first paragraph, the constraint may have no solution only in two cases: the value $d$ is assigned to too many or to too few variables in one period. The constraint is satisfiable without the assignment $X_i = d'$. So the first case is possible only if there are exactly $m$ variables in one period with $d$ assigned to them and the assignment $X_i = d'$ increases this number. This means that $X_i$ is not instantiated to $d$ yet, $d' = d$ and $v = m$ holds, so the algorithm will remove $d$ from the domains of all the variables that are not instantiated to $d$ yet, hence also from $D(X_i)$. The second case is possible only if there are exactly $m$ variables in one period with $d$ in their domains and the assignment $X_i = d'$ decreases this number.
This means that $X_i$ contains $d$ in its domain, $d' \neq d$ and $u = m$ holds, so the algorithm will assign $d$ to all the variables that contain $d$ in their domains, hence remove $d'$ from $D(X_i)$.

The algorithm runs in $O(n)$ time down a branch of the search tree, because the folded domains $P_i$ are represented implicitly. At the beginning of the search, folding and reflecting back to the domains takes $O(pk)$, because it is done simply by, for every $1 \leq i \leq p$, removing $d$ from each $D(X_{jp+i})$, $0 \leq j < k$, if it is missing in some of them, and instantiating each $X_{jp+i}$, $0 \leq j < k$, to $d$ if some of them is already instantiated to $d$. During the search it is sufficient just to update $u$ and $v$ upon removal of $d$ or instantiation to $d$. On each update, the same action (removal or instantiation) has to be repeated in each of the $k$ periods. Down a branch there will be at most $p$ such updates, because after $p$ updates the pattern of $d$-s in a period is fully determined. That makes $O(pk)$ steps down a branch. Since $pk \leq n$, the filtering runs in $O(n)$ time down a branch of the search tree. \qed \end{proof}

The last model (\algid{$SR$}) is the same as \algid{$SM$}, but it additionally makes use of the incomplete filtering rule for \algid{Spacing}H\ (described in the previous section) between each pair of constraints.

\section{Experimental results}

To compare the performance of the different models, we carried out a series of experiments on random instances of the \emph{Asynchronous rhythms} problem. The instances were generated for a fixed number of voices $h$, a mean length of the pattern of the first voice $p_1$ and a fixed number of repetitions of the last voice $k_h$. The models were tested on ten instances for each tuple $(h, p_1, k_h)$. The generation of the instances followed the philosophy that the first voice should be the base voice with a short pattern and a small number of onsets, and the other voices should have richer patterns. So the pattern of the second voice was $4 \pm 1$ beats longer than the pattern of the first voice, and each following voice had a pattern on average twice as long ($\pm 3$ beats) as that of the voice two positions before it. The numbers of repetitions were set so that the voices overlap as much as possible. The total number of onsets was approximately $75\%$ of the length of the sequence, distributed uniformly among the voices. The experiments were run with a 5-minute timeout and with a heuristic under which all models performed better. The left side of Table~\ref{t:t1} shows the performance of the models on the \emph{basic} Asynchronous rhythms problem. The right side of Table~\ref{t:t1} shows the performance of the models on the \emph{extended} Asynchronous rhythms problem, where the composer imposes additional constraints forbidding some onsets in certain positions. We randomly removed $10\%$ of values from the domains to model this restriction. \algid{$SB$}\ was not tested on the extended problem, because in this case the onsets are not interchangeable due to the additional constraints on them. The experiments were run with CHOCO Solver 2.1.5 on an Intel Xeon 3 CPU at 2.0 GHz with 4 GB RAM.
\begin{table} \begin{center} {\scriptsize \caption{ \label{t:t1}Number of solved instances / average time to solve in sec / average number of backtracks in thousands.
} \begin{tabular}{|ccc||crr|crr|crr|crr||crr|crr|crr|} \hline & & & \multicolumn{12}{|c||}{Basic problem} & \multicolumn{9}{c|}{Extended problem} \\ $h$ & $p_1$ & $k_h$ & \multicolumn{3}{|c}{\algid{$OM$}} & \multicolumn{3}{c}{\algid{$SM$}} & \multicolumn{3}{c}{\algid{$SR$}} & \multicolumn{3}{c||}{\algid{$SB$}} & \multicolumn{3}{c}{\algid{$OM$}} & \multicolumn{3}{c}{\algid{$SM$}} & \multicolumn{3}{c|}{\algid{$SR$}} \\ \hline 3 & 12 & 2 & 9 & 4.99 & 99.76 & 9 & 9.54 & 61.03 & 9 & 4.83 & 24.38 & \textbf{10} & \textbf{0.17} & \textbf{4.58} & 8 & \textbf{0.72} & \textbf{4.29} & \textbf{10} & 21.85 & 143.66 & \textbf{10} & 9.82 & 42.08 \\ 3 & 12 & 3 & \textbf{10} & 0.84 & 2.80 & \textbf{10} & 1.13 & 1.80 & \textbf{10} & 0.60 & 0.46 & \textbf{10} & \textbf{0.09} & \textbf{0.11} & 9 & 8.56 & 180.02 & \textbf{10} & 0.17 & 0.02 & \textbf{10} & \textbf{0.12} & \textbf{0.00} \\ 3 & 12 & 4 & \textbf{10} & 3.92 & 35.17 & \textbf{10} & 4.86 & 24.88 & \textbf{10} & 1.50 & 2.92 & \textbf{10} & \textbf{0.18} & \textbf{0.33} & \textbf{10} & 0.57 & 0.98 & \textbf{10} & 0.31 & 0.04 & \textbf{10} & \textbf{0.20} & \textbf{0.02} \\ 3 & 18 & 2 & 7 & 21.02 & 244.97 & 7 & 20.83 & 102.16 & 7 & 9.86 & 35.60 & \textbf{10} & \textbf{0.64} & \textbf{18.05} & 5 & 7.22 & 138.13 & \textbf{9} & 29.60 & 109.82 & \textbf{9} & \textbf{5.96} & \textbf{13.47} \\ 3 & 18 & 3 & 8 & 13.48 & 128.47 & 8 & 16.56 & 58.54 & 8 & 7.58 & \textbf{22.13} & \textbf{10} & \textbf{1.01} & 26.95 & 6 & 10.38 & 71.18 & 8 & \textbf{0.25} & \textbf{0.03} & \textbf{9} & 6.86 & 14.60 \\ 3 & 18 & 4 & 7 & 42.77 & 270.29 & 7 & 44.88 & 119.39 & 7 & 9.25 & \textbf{14.21} & \textbf{10} & \textbf{1.05} & 17.42 & \textbf{10} & 31.71 & 137.36 & \textbf{10} & 0.44 & 0.30 & \textbf{10} & \textbf{0.34} & \textbf{0.13} \\ 3 & 24 & 2 & 6 & 5.70 & 48.93 & 7 & 11.39 & 29.52 & 7 & \textbf{5.56} & \textbf{8.31} & \textbf{10} & 20.80 & 1409.94 & 3 & 10.37 & 176.43 & 7 & \textbf{3.31} & \textbf{10.80} & \textbf{8} & 37.01 & 63.47 \\ 3 & 24 & 3 & 3 & 1.58 & 6.79 & 3 & 1.20 & 2.62 & 3 & \textbf{0.46} & \textbf{0.34} & \textbf{10} & 4.49 & 90.87 & 2 & 121.36 & 286.81 & \textbf{9} & 47.57 & 166.14 & \textbf{9} & \textbf{9.58} & \textbf{21.07} \\ 3 & 24 & 4 & 2 & 8.38 & \textbf{40.97} & 2 & 22.11 & 41.56 & 4 & 86.38 & 59.74 & \textbf{10} & \textbf{5.10} & 91.03 & 4 & 51.02 & 195.04 & \textbf{10} & 46.92 & 114.67 & \textbf{10} & \textbf{5.71} & \textbf{9.36} \\ \hline 4 & 12 & 2 & 9 & 3.78 & 63.49 & 9 & 6.37 & 35.25 & 9 & 2.48 & \textbf{8.06} & \textbf{10} & \textbf{0.43} & 10.00 & 7 & \textbf{3.15} & 46.06 & \textbf{10} & 8.62 & 20.04 & \textbf{10} & 4.69 & \textbf{7.74} \\ 4 & 12 & 3 & 8 & 26.01 & 523.55 & 7 & 0.90 & 1.19 & 7 & 0.64 & \textbf{0.41} & \textbf{10} & \textbf{0.36} & 4.90 & \textbf{10} & 39.23 & 450.50 & \textbf{10} & 4.76 & 15.61 & \textbf{10} & \textbf{3.06} & \textbf{8.95} \\ 4 & 12 & 4 & 8 & 21.81 & 546.20 & 8 & 31.88 & 208.21 & 9 & 38.60 & 63.99 & \textbf{10} & \textbf{0.39} & \textbf{4.54} & 9 & \textbf{1.70} & 14.58 & \textbf{10} & 9.59 & 36.59 & \textbf{10} & 3.91 & \textbf{9.87} \\ 4 & 18 & 2 & 8 & 29.46 & 438.73 & 8 & 26.11 & 88.45 & 8 & 17.56 & 50.75 & \textbf{10} & \textbf{0.61} & \textbf{20.28} & 5 & 62.00 & 837.42 & \textbf{8} & 3.13 & 10.41 & \textbf{8} & \textbf{1.30} & \textbf{2.52} \\ 4 & 18 & 3 & 6 & 1.40 & 12.64 & 6 & 1.95 & 5.02 & 6 & \textbf{1.04} & \textbf{1.44} & \textbf{10} & 1.41 & 20.64 & 7 & 6.89 & 20.61 & \textbf{10} & 13.20 & 17.57 & \textbf{10} & \textbf{6.75} & \textbf{8.35} \\ 4 & 18 & 4 & 5 & 5.06 & 15.08 & 5 & 10.33 & 11.09 & 5 
& \textbf{1.64} & \textbf{0.63} & \textbf{10} & 2.85 & 36.14 & 8 & 19.17 & 144.31 & \textbf{10} & 1.43 & 1.35 & \textbf{10} & \textbf{0.78} & \textbf{0.39} \\ 4 & 24 & 2 & 5 & \textbf{0.14} & 0.01 & 5 & 0.30 & \textbf{0.00} & 6 & 37.27 & 37.70 & \textbf{8} & 2.06 & 54.92 & 3 & 51.49 & 501.10 & 7 & \textbf{6.69} & \textbf{18.52} & \textbf{8} & 25.07 & 38.43 \\ 4 & 24 & 3 & 4 & 9.41 & 23.10 & 4 & 14.15 & 15.90 & 5 & \textbf{7.43} & \textbf{4.34} & \textbf{9} & 24.67 & 507.45 & 2 & \textbf{1.41} & \textbf{0.44} & \textbf{8} & 17.49 & 23.72 & \textbf{8} & 4.05 & 4.50 \\ 4 & 24 & 4 & 1 & 1.35 & 1.24 & 1 & 1.41 & 0.84 & 1 & \textbf{0.65} & \textbf{0.17} & \textbf{10} & 8.39 & 125.89 & 4 & 27.64 & 101.35 & \textbf{9} & 5.38 & 10.56 & \textbf{9} & \textbf{3.16} & \textbf{4.93} \\ \hline 5 & 12 & 2 & \textbf{10} & 0.98 & 6.69 & \textbf{10} & 0.71 & 0.52 & \textbf{10} & 0.53 & 0.32 & \textbf{10} & \textbf{0.08} & \textbf{0.07} & 6 & 1.30 & 11.45 & \textbf{10} & \textbf{0.26} & 0.05 & \textbf{10} & 0.28 & \textbf{0.03} \\ 5 & 12 & 3 & 7 & \textbf{0.07} & 0.02 & 7 & 0.37 & \textbf{0.01} & 9 & 26.66 & 31.66 & \textbf{10} & 0.50 & 4.82 & 9 & 3.96 & 57.02 & \textbf{10} & 0.31 & 0.04 & \textbf{10} & \textbf{0.27} & \textbf{0.03} \\ 5 & 12 & 4 & \textbf{10} & 1.40 & 2.94 & \textbf{10} & 5.18 & 4.79 & \textbf{10} & 1.05 & \textbf{0.47} & \textbf{10} & \textbf{0.32} & 0.50 & \textbf{10} & 13.77 & 134.70 & \textbf{10} & 0.13 & 0.00 & \textbf{10} & \textbf{0.13} & \textbf{0.00} \\ 5 & 18 & 2 & 6 & \textbf{0.43} & 0.18 & 7 & 1.39 & 0.42 & 7 & 0.65 & \textbf{0.12} & \textbf{10} & 0.51 & 5.24 & 6 & 20.51 & 80.34 & 9 & \textbf{5.55} & \textbf{9.45} & \textbf{10} & 25.57 & 22.88 \\ 5 & 18 & 3 & 7 & 6.52 & 6.77 & 7 & 6.61 & 2.36 & 7 & 3.64 & \textbf{1.04} & \textbf{10} & \textbf{0.70} & 12.51 & 7 & \textbf{0.93} & \textbf{7.00} & 9 & 14.12 & 38.74 & \textbf{10} & 16.38 & 31.21 \\ 5 & 18 & 4 & 7 & 16.95 & 14.71 & 7 & 38.33 & 17.58 & 7 & \textbf{2.04} & \textbf{0.53} & \textbf{10} & 6.74 & 83.41 & \textbf{10} & 0.30 & 0.12 & \textbf{10} & \textbf{0.18} & 0.00 & \textbf{10} & 0.18 & \textbf{0.00} \\ 5 & 24 & 2 & 5 & 38.12 & 31.28 & 4 & \textbf{2.56} & \textbf{1.24} & 5 & 8.11 & 1.77 & \textbf{10} & 29.43 & 841.65 & 1 & 0.82 & 0.10 & \textbf{8} & 0.45 & 0.04 & \textbf{8} & \textbf{0.45} & \textbf{0.01} \\ 5 & 24 & 3 & 6 & 34.80 & 18.90 & 5 & \textbf{0.66} & \textbf{0.06} & 6 & 8.71 & 0.92 & \textbf{9} & 4.26 & 35.88 & 4 & 54.11 & 76.50 & \textbf{10} & 16.73 & 15.73 & \textbf{10} & \textbf{6.08} & \textbf{3.64} \\ 5 & 24 & 4 & 3 & \textbf{19.47} & 106.09 & 4 & 33.87 & 47.46 & 5 & 29.96 & \textbf{27.82} & \textbf{8} & 21.05 & 124.15 & 6 & 4.66 & 10.80 & \textbf{10} & 1.02 & 0.46 & \textbf{10} & \textbf{0.82} & \textbf{0.36} \\ \hline \end{tabular} } \end{center} \end{table} \begin{comment} \begin{table} \begin{center} {\scriptsize \caption{ \label{t:t2}Shrinked domains. Number of instances solved in 5 min / average time to solve in sec / average number of backtracks. 
} \begin{tabular}{|ccc|r@{ / }r@{ / }r|r@{ / }r@{ / }r|r@{ / }r@{ / }r|} \hline $h$ & $p_1$ & $k_h$ & \multicolumn{3}{|c|}{\algid{$OM$}} & \multicolumn{3}{|c|}{\algid{$SM$}} & \multicolumn{3}{|c|}{\algid{$SR$}} \\ \hline \hline 3 & 12 & 2 & 8 & \textbf{0.72} & \textbf{4286} & \textbf{10} & 21.85 & 143657 & \textbf{10} & 9.82 & 42083 \\ 3 & 12 & 3 & 9 & 8.56 & 180021 & \textbf{10} & 0.17 & 20 & \textbf{10} & \textbf{0.12} & \textbf{5} \\ 3 & 12 & 4 & \textbf{10} & 0.57 & 979 & \textbf{10} & 0.31 & 36 & \textbf{10} & \textbf{0.20} & \textbf{22} \\ 3 & 18 & 2 & 5 & 7.22 & 138133 & \textbf{9} & 29.60 & 109818 & \textbf{9} & \textbf{5.96} & \textbf{13472} \\ 3 & 18 & 3 & 6 & 10.38 & 71178 & 8 & \textbf{0.25} & \textbf{26} & \textbf{9} & 6.86 & 14605 \\ 3 & 18 & 4 & \textbf{10} & 31.71 & 137356 & \textbf{10} & 0.44 & 305 & \textbf{10} & \textbf{0.34} & \textbf{126} \\ 3 & 24 & 2 & 3 & 10.37 & 176428 & 7 & \textbf{3.31} & \textbf{10797} & \textbf{8} & 37.01 & 63466 \\ 3 & 24 & 3 & 2 & 121.36 & 286806 & \textbf{9} & 47.57 & 166139 & \textbf{9} & \textbf{9.58} & \textbf{21073} \\ 3 & 24 & 4 & 4 & 51.02 & 195037 & \textbf{10} & 46.92 & 114668 & \textbf{10} & \textbf{5.71} & \textbf{9361} \\ \hline 4 & 12 & 2 & 7 & \textbf{3.15} & 46062 & \textbf{10} & 8.62 & 20038 & \textbf{10} & 4.69 & \textbf{7736} \\ 4 & 12 & 3 & \textbf{10} & 39.23 & 450498 & \textbf{10} & 4.76 & 15612 & \textbf{10} & \textbf{3.06} & \textbf{8954} \\ 4 & 12 & 4 & 9 & \textbf{1.70} & 14576 & \textbf{10} & 9.59 & 36590 & \textbf{10} & 3.91 & \textbf{9872} \\ 4 & 18 & 2 & 5 & 62.00 & 837424 & \textbf{8} & 3.13 & 10405 & \textbf{8} & \textbf{1.30} & \textbf{2520} \\ 4 & 18 & 3 & 7 & 6.89 & 20613 & \textbf{10} & 13.20 & 17566 & \textbf{10} & \textbf{6.75} & \textbf{8351} \\ 4 & 18 & 4 & 8 & 19.17 & 144310 & \textbf{10} & 1.43 & 1353 & \textbf{10} & \textbf{0.78} & \textbf{388} \\ 4 & 24 & 2 & 3 & 51.49 & 501102 & 7 & \textbf{6.69} & \textbf{18520} & \textbf{8} & 25.07 & 38427 \\ 4 & 24 & 3 & 2 & \textbf{1.41} & \textbf{439} & \textbf{8} & 17.49 & 23716 & \textbf{8} & 4.05 & 4502 \\ 4 & 24 & 4 & 4 & 27.64 & 101346 & \textbf{9} & 5.38 & 10558 & \textbf{9} & \textbf{3.16} & \textbf{4934} \\ \hline 5 & 12 & 2 & 6 & 1.30 & 11445 & \textbf{10} & \textbf{0.26} & 46 & \textbf{10} & 0.28 & \textbf{31} \\ 5 & 12 & 3 & 9 & 3.96 & 57020 & \textbf{10} & 0.31 & 35 & \textbf{10} & \textbf{0.27} & \textbf{26} \\ 5 & 12 & 4 & \textbf{10} & 13.77 & 134698 & \textbf{10} & 0.13 & 2 & \textbf{10} & \textbf{0.13} & \textbf{1} \\ 5 & 18 & 2 & 6 & 20.51 & 80338 & 9 & \textbf{5.55} & \textbf{9446} & \textbf{10} & 25.57 & 22878 \\ 5 & 18 & 3 & 7 & \textbf{0.93} & \textbf{7005} & 9 & 14.12 & 38745 & \textbf{10} & 16.38 & 31210 \\ 5 & 18 & 4 & \textbf{10} & 0.30 & 117 & \textbf{10} & \textbf{0.18} & 1 & \textbf{10} & 0.18 & \textbf{0} \\ 5 & 24 & 2 & 1 & 0.82 & 101 & \textbf{8} & 0.45 & 42 & \textbf{8} & \textbf{0.45} & \textbf{14} \\ 5 & 24 & 3 & 4 & 54.11 & 76498 & \textbf{10} & 16.73 & 15728 & \textbf{10} & \textbf{6.08} & \textbf{3637} \\ 5 & 24 & 4 & 6 & 4.66 & 10799 & \textbf{10} & 1.02 & 461 & \textbf{10} & \textbf{0.82} & \textbf{360} \\ \hline \end{tabular} } \end{center} \end{table} \end{comment} As we can see in the results, \algid{$SB$}\ solved almost all instances and, where comparable, it was the fastest and needed the least backtracks in solving basic model. \algid{$SB$}\ is so successful because it removes symmetries and the filtering algorithm of \algid{Spacing}\ensuremath{_{SB}}\ runs in $O(n)$ down a branch. 
On the basic problem, \algid{$SM$}\ is not obviously better than \algid{$OM$}; however, \algid{$SM$}\ performs much better on the extended problem and needs significantly fewer backtracks than \algid{$OM$}. That supports the theory that \algid{$SM$}\ achieves more propagation than \algid{$OM$}. Finally, the additional rule significantly improves the performance of \algid{$SR$}\ over \algid{$SM$}; in particular, the number of backtracks is lower by an order of magnitude, which shows that the rule really facilitates propagation between the \algid{Spacing}\ constraints.

\section{Conclusions}

The global $\algid{Spacing}$ constraint is useful in modeling events that are distributed over time, like learning units scheduled over a study program or repeated patterns in music compositions. We have investigated the theoretical properties of the constraint and shown that enforcing domain consistency (\emph{DC}) is intractable even in very restricted cases. On the other hand, we have identified two tractable restrictions and implemented efficient \emph{DC}\ filtering algorithms for one of them. The algorithm takes $O(p^2 k + p^{2.5})$ time down a branch of the search tree. We have also proposed an incomplete filtering algorithm for one of the intractable cases. We have experimentally evaluated the performance of the algorithms on a music composition problem and demonstrated that our filtering algorithms outperform the state-of-the-art approach for solving this problem in both speed and number of backtracks.

\end{document}
\begin{document} \title{Input independence} \footnotetext{The authors are partially supported by the US Army Research Office under W911NF-20-1-0297.} \thispagestyle{empty} \begin{abstract} We establish the following \emph{input independence} principle. If a quantum circuit \C\ computes a unitary transformation $U_\mu$ along a computation path $\mu$, then the probability that computation of \C\ follows path $\mu$ is independent of the input. \end{abstract} \title{Input independence} \thispagestyle{empty} \section{Introduction} \label{s:intro} In the analysis of quantum computations, estimating the probability that a given circuit computes the desired output is rather important. Typically, the circuit is good enough if that probability is high enough. Static analysis of the circuit may reveal good computation paths, those leading to the desired outcome. But estimating the probability $P$ that the computation follows a good path may not be easy. A priori $P$ depends on the input. The main result of this paper is the \emph{input independence} principle: If a quantum circuit \C\ computes a unitary transformation $U_\mu$ along a computation path $\mu$, then the probability that computation of\, \C\ follows path $\mu$ is independent of the input. The principle implies that, if the desired output is given by a unitary transformation $U$, then the probability $P$ that computation of \C\ follows a path leading to the desired outcome does not depend on the input. Furthermore, if the unitary transformation is allowed to depend on the path, then $P$ still does not depend on the input. First, based on the syntax and semantics of quantum circuits in \cite{G250}, we make the principle mathematically precise. In the process, we recall terminology from \cite{G250} and repeat some, though not all, of the definitions. \begin{setting}\label{s1}\mbox{} \begin{enumerate}[A.] \item Following the standard textbook \cite{NC} of Michael Nielsen and Isaac Chuang, we define a \emph{(general) measurement} over a Hilbert space \ensuremath{\mathcal K}\ as an indexed family $M = \ang{L_i: i\in I}$ of (bounded linear) operators on \ensuremath{\mathcal K}\ where $\sum_{i\in I} L_i^\dag L_i$ is the identity operator $\Id$ on \ensuremath{\mathcal K}; elements of the index set $I$ are \emph{(classical) outcomes} of $M$ \cite[\S2.2.3]{NC}. However, contrary to \cite{NC}, we do not presume that \ensuremath{\mathcal K}\ is finite dimensional or that $I$ is finite. Instead, as in \cite{BLM}, we presume that \ensuremath{\mathcal K}\ is separable (and complex), $I$ is countable, and the sum converges in the weak operator topology \footnote{We began this work in the context of quantum computing and required that Hilbert spaces are finite dimensional and measurements have only finitely many outcomes. However, the results work just as well in the broader setting adopted here.}. A \emph{state} in a Hilbert space \ensuremath{\mathcal K}\ is given by a density operator on \ensuremath{\mathcal K}; the set of these density operators is denoted DO(\ensuremath{\mathcal K}). \item In much of the literature, the meaning of ``quantum circuit'' is ambiguous. A circuit typically contains unitary operations, often contains measurements, and may contain classical channels. The ambiguity is whether the circuit also tells which gates fire simultaneously and, for those that don't, the order in which they fire. In the present paper, a quantum circuit may contain unitary operations, measurements, and classical channels. 
And by default a circuit comes together with a fixed execution schedule $(B_1, B_2, \dots, B_N)$ where $B_1$ is the set of gates to be fired first, $B_2$ is the set of gates to be fired next, and so on\footnote{In \cite{G250}, we distinguished between circuits with execution schedules and without. A circuit with a fixed execution schedule was called a \emph{circuit algorithm} there.}. It is required of course that the sets $B_n$ are disjoint, that their union contains all the gates, and that, for every $n$ and every gate $G\in B_n$, the set $\bigcup_{k<n}B_k$ of gates scheduled to fire before $G$ contains all (quantum and classical) prerequisites of $G$. \item For uniformity, we treat unitary operators as measurements with a single outcome, so that all gates are measurement gates. Every gate $G$ in a quantum circuit \C\ is assigned a finite nonempty set of measurements, called $G$ \emph{measurements}. It is assumed without loss of generality that different $G$ measurements have disjoint outcome sets. In addition, $G$ is assigned a \emph{selection} function $\Sigma_G$ that, in runtime, picks the $G$ measurement to be executed. Let \CS{G} be the set of \emph{classical sources} of $G$, i.e.\ the gates with classical channels to $G$. Given outcomes $o_F$ of the measurements performed by the classical sources $F$ of $G$, the $G$ measurement $\Sigma_G \iset{o_F: F\in \CS{G}}$ is picked. Without loss of generality, we presume that there are no superfluous $G$ measurements: every $G$ measurement is in the range of $\Sigma_G$. In particular, if $G$ has no classical sources, then it is assigned (a set comprising) a single $G$ measurement. \item A \emph{(computation) path}\footnote{Paths were called tracks in \cite{G250}} $\mu$ through a quantum circuit \C\ is an assignment to each gate $G$ of an outcome $\mu(G)$ of the $G$-measurement $M_\mu(G) = \Sigma_G\iset{\mu(F): F\in \CS{G}}$. Note that this last equation is a coherence requirement on the measurements performed along the path. Given an input for \C, a path $\mu$ represents a potential computation of \C\ which is said to \emph{follow} path $\mu$ on the given input. \item Consider quantum circuits \C\ with principal inputs and outputs (being states) in a Hilbert space \ensuremath{\mathcal H}\ of dimension $\ge2$. \C\ may also use ancillas (initialized in a fixed pure state) and may produce garbage to be discarded at the end of the computation. Let \A\ be a Hilbert space hosting the ancillas (which we may view as a single higher-dimensional ancilla) at the beginning of computation and the garbage at the end. Accordingly, a full input of \C\ has the form $\rz = \r\ox\a$ where $\r\in \DO{\ensuremath{\mathcal H}}$ and \ket a is a fixed unit vector in \A. If the computation of \C\ on input \rz\ follows $\mu$ then the output is a density operator $\Out_\C(\mu\, | \rz)$ on $\ensuremath{\mathcal H}\ox\A$, and the principal output is the density operator on \ensuremath{\mathcal H}\ given by the partial trace $\ensuremath{\mathrm{Tr}}_\A\big(\Out_\C(\mu\, | \rz)\big)$. \qef \end{enumerate} \end{setting} \begin{theorem}[Input independence]\label{t:ii} Suppose that a quantum circuit \C\ computes a unitary operator $U_\mu: \ensuremath{\mathcal H}\to\ensuremath{\mathcal H}$ along a computation path $\mu$ in the sense that density operators $\ensuremath{\mathrm{Tr}}_\A\big( \Out_\C (\mu\, |\rz) \big)$ and $U_\mu\rho\, U_\mu^\dag$ represent the same state in \ensuremath{\mathcal H}\ for all $\r\in \DO{\ensuremath{\mathcal H}}$.
Then the probability $P_\C(\mu\, |\rz)$ that the computation of\, \C\ follows path $\mu$ on principal input \r\ does not depend on \r. \end{theorem} The proof of the theorem gives a tiny bit more: It suffices to assume that \C\ computes $U_\mu$ along $\mu$ just on pure inputs. To simplify notation, we assumed in Setting~\ref{s1}.E that principal inputs and outputs of \C\ are in the same Hilbert space and that the ancillas and garbage are in the same Hilbert space. Accordingly, in Theorem~\ref{t:ii}, \C\ computes a unitary operator (rather than unitary transformation) along path $\mu$. The generalization where the two assumptions are dropped is obvious. \begin{corollary}\label{c:iimult} Let \G\ be a set of computation paths through a quantum circuit \C. Suppose that \C\ computes a unitary operator along every computation path in \G, possibly different operators along different paths. Then the probability that a computation of \C\ follows a path in \G\ does not depend on the input. \end{corollary} The \G\ in the corollary can represent the set of good paths mentioned in the motivating paragraphs. \subsubsection*{Related work} In 2017, Vadym Kliuchnikov conjectured that, if a quantum circuit computes a unitary operator (the same unitary operator along every computation path), then the probability that its computation follows a particular path does not depend on the input. His conjecture provoked this investigation. We confirm the conjecture. \begin{corollary}\label{c:iiall} Suppose that a quantum circuit \C\ computes a unitary operator $U$ in the sense that \C\ computes $U$ along every computation path. Then the probability that a computation of \C\ follows a particular path does not depend on the input. \end{corollary} Paul Busch formulated the ``no information without disturbance'' principle for quantum measurements \cite[Theorem~2]{Busch} and sketched the proof of the principle; a slightly more informative version of his sketch appeared earlier in a book by Paul Busch, Pekka J. Lahti, and Peter Mittelstaedt \cite[p.~32]{BLM}. Busch's principle is a special case of Corollary~\ref{c:iiall} where \C\ is just a single measurement and $U$ the identity operator. (Busch works with positive operator-valued measurements, POVMs, rather than general measurements but this is not important for our story. ``POVMs are best viewed as a special case of the general measurement formalism, providing the simplest means by which one can study general measurement statistics, without the necessity for knowing the post-measurement state'' \cite[\S2.2.6, Box~2.5]{NC}. Under our broader assumptions in Setting~\ref{s1}, POVMs still can be viewed as a special case of the general measurement formalism.) \subsubsection*{Organization of this paper} The input independence theorem is proved in the rest of this paper. In order to analyze circuit computation, it is useful to untangle and separate distinct computation paths. We do that in \S\ref{s:reduce} where we introduce measurement trees whose branches can be viewed as their computation paths. The reduction theorem, Theorem~\ref{t:reduction}, asserts that for every quantum circuit \C\ there exist a measurement tree \T\ and a one-to-one correspondence $\mu \mapsto \b_\mu$ from the computation paths of \C\ to the branches of \T\ such that the probability that computation of \C\ follows $\mu$ on a given (full) input is exactly the probability that the computation of \T\ follows branch $\b_\mu$ on that input and the two computations produce the same output. 
After investigating some basic properties of measurement trees in \S\ref{s:trees}, we prove the input independence theorem for measurement trees, Theorem~\ref{t:iit}, in \S\ref{s:ii}. By the reduction theorem, the input independence theorem for circuits follows from that for measurement trees. Appendix~A establishes a function constancy criterion which is used in \S\ref{s:ii} and which is arguably of independent interest. Relegating that task to an appendix makes the exposition in \S\ref{s:ii} cleaner. Appendix~B is devoted to a generalization of Theorem~\ref{t:ii}. \section{Reduction to measurement trees}\label{s:reduce} As usual we take density operators over a Hilbert space \ensuremath{\mathcal K}\ to be nonzero, positive semidefinite, Hermitian operators on \ensuremath{\mathcal K}. However, instead of requiring them to have trace 1, we only require them to have finite traces, i.e., to be trace-class operators. The set of density operators over \ensuremath{\mathcal K}\ will be denoted \DO{\ensuremath{\mathcal K}}. A density operator is \emph{normalized} if its trace is 1. \begin{convention}\label{con} We use density operators, not necessarily normalized, to represent (possibly mixed) states over \ensuremath{\mathcal K}. A density operator $\rho$ represents the same state as its normalized version $\rho/\ensuremath{\mathrm{Tr}}(\rho)$. If a measurement $M = \ang{L_j: j\in J}$ in the state (represented by) $\rho$ produces outcome $j$, then we will usually use density operator $L_j\rho L_j^\dag$ (rather than any scalar multiple of it) to represent the post-measurement state. Notice that $\rho\mapsto L_j\rho L_j^\dag$ is a linear operator and the probability of outcome $j$ is $\ensuremath{\mathrm{Tr}}(L_j\rho L_j^\dag)/\ensuremath{\mathrm{Tr}}(\rho)$. \qef \end{convention} To analyze computations of a quantum circuit, it is convenient to represent the circuit by a tree where the computations are represented by branches. \begin{definition}\label{d:meas tree} A \emph{measurement tree}, a \emph{meas tree} in short, over a Hilbert space \ensuremath{\mathcal K}\ is a directed rooted tree with a finite bound on the lengths of branches, where every non-leaf node $x$ has a measurement $M_x$ over \ensuremath{\mathcal K}\ associated to it, and the edges emanating from $x$ are labeled in one-to-one fashion with the classical outcomes of $M_x$. \qef \end{definition} All our results about meas trees would remain true if the bounded-branch-length assumption were replaced by a weaker assumption that every branch is of finite length. But the bounded-branch-length trees suffice to support the reduction theory, Theorem~\ref{t:reduction}, below. Let $\T$ be a meas tree. A \emph{route} on \T\ is a sequence of edges $(x_0,x_1), (x_1,x_2), \dots, (x_{n-1},x_n)$ where $x_0$ is the root; it may be given by the labels of those edges: $(o_1,\dots,o_n)$ where $o_{i+1}$ is an outcome of the measurement $M_{x_i}$. The same notation $(o_1,\dots,o_n)$ may be used for the final node $x_n$ of the route; to avoid ambiguity, we may also use notation \End{o_1,\dots,o_n} for $x_n$. A route is a \emph{branch} if its final node is a leaf. View \T\ as the algorithm which works as follows on input $\sigma_0\in\DO{\ensuremath{\mathcal K}}$. If the root is a leaf (and thus the only node), do nothing. Otherwise, perform the root's measurement in state $\sigma_0$ producing, with some probability $p_1$, an outcome $o_1$ and post-measurement state $\sigma_1$. 
If node \End{o_1} is not a leaf, perform its measurement in state $\sigma_1$ producing, with some probability $p_2$, an outcome $o_2$ and post-measurement state $\sigma_2$, and so on until a leaf is reached. Let \b\ be the resulting branch $(o_1, o_2, \dots, o_N)$. The (quantum) output $\Out_\T(\b |\sigma_0)$ of this computation is the post-measurement state $\sigma_N$ resulting from the last measurement performed. The probability $P_\T(\b |\sigma_0)$ that computation of \T\ follows branch \b\ on input $\sigma_0$ is $p_1\!\cdot p_2\cdots p_N$. Consider quantum circuits \C\ with (full) inputs and outputs in some Hilbert space \ensuremath{\mathcal K}. Assume Setting~\ref{s1} (so that $\ensuremath{\mathcal K} = \ensuremath{\mathcal H} \ox \A$ but the internal structure of \ensuremath{\mathcal K}\ will play no role in this section.) Without loss of generality, we may assume that \C\ comes with a fixed linear order of its gates. A \emph{bout} of gates of \C\ is a nonempty set $B$ of gates where no gate is a (quantum or classical) prerequisite for another gate; thus all $B$ gates could be executed in parallel after all their prerequisites have been executed. A \emph{schedule} of the circuit is a sequence $(B_1, B_2, B_3, \dots, B_N)$ of disjoint gate bouts such that every gate set $\bigcup_{i\le n} B_i$ is closed under prerequisites and $\bigcup_{n\le N} B_n$ contains all the gates. A schedule is \emph{linear} if all its bouts are (composed of) single gates. Paths through a circuit with linear schedule $(G_1, G_2, \dots, G_N)$ can be represented by sequences $(o_1, o_2, \dots, o_N)$ where each $o_n$ is an outcome of the $G_n$ measurement $\Sigma_{G_n}\iset{o_i: G_i \in \CS{G_n}}$. \begin{lemma}\label{l:linear} Let \C\ be a quantum circuit with a linear schedule $(G_1, G_2, \dots, G_N)$ with (full) inputs and outputs in Hilbert space \ensuremath{\mathcal K}. There exists a meas tree \T\ over \ensuremath{\mathcal K}\ such that for every path $\mu$ through circuit \C\ and every $\sigma$ in \DO{\ensuremath{\mathcal K}} we have: \begin{enumerate} \item $\b_\mu = (\mu(G_1), \mu(G_2), \dots, \mu(G_N))$ is a branch of \T, and every branch of \T\ is obtained that way, \item $\Out_\C(\mu|\sigma) = \Out_\T(\b_\mu | \sigma)$, and \item $P_\C(\mu|\sigma) = P_\T(\b_\mu | \sigma)$. \end{enumerate} \end{lemma} \begin{proof} We construct the desired meas tree \T. The nodes of \T are initial segments $(o_1, \dots, o_n)$ of paths through \C. If $n<N$ then segments $(o_1, \dots, o_n, o_{n+1})$ are children of the segment $(o_1, \dots, o_n)$, the label of the edge from segment $(o_1, \dots, o_n)$ to segment $(o_1, \dots, o_n, o_{n+1})$ is $o_{n+1}$, and the measurement assigned to segment $(o_1, \dots, o_n)$ is $\Sigma_{G_{n+1}}\!\iset{o_i: G_i \in \CS{G_{n+1}}}$. In particular, the empty segment is assigned the unique measurement of the first gate $G_1$. As in the definition of computation paths in Setting~\ref{s1}, this description of children incorporates the coherence requirement for the measurements along the route $(o_1, \dots, o_{n+1})$. All three claims are easy to verify from the definition of \T. \end{proof} Given measurements $M_1 = \iset{A_i: i\in I}$ and $M_2 = \iset{B_j: j\in J}$ over Hilbert spaces $\ensuremath{\mathcal K}_1, \ensuremath{\mathcal K}_2$ respectively, consider the tensor product $M_1 \ox M_2$, i.e.\ the measurement \iset{A_i\ox B_j: (i,j)\in I\times J} over $\ensuremath{\mathcal K}_1 \ox \ensuremath{\mathcal K}_2$. 
If the probabilities of outcomes $i,j$ on inputs $\sigma_1, \sigma_2$ are $p_i, q_j$ respectively, then the probability of outcome $(i,j)$ on input $\sigma_1\ox\sigma_2$ is $p_i\cdot q_j$. The tensor products of more than two (but finitely many) measurements are defined similarly. The following theorem extends Lemma~\ref{l:linear} to the case of schedules which may not be linear. \begin{theorem}[Reduction]\label{t:reduction} Let \C\ be a quantum circuit with (full) inputs and outputs in Hilbert space \ensuremath{\mathcal K}. There exist a meas tree \T\ over \ensuremath{\mathcal K}\ and a one-to-one correspondence $\mu \mapsto \b_\mu$ from the set of paths through \C\ onto the set of branches of \T\ such that for every path $\mu$ through circuit \C\ and every $\sigma$ in \DO{\ensuremath{\mathcal K}} we have: \begin{enumerate} \item $\Out_\C(\mu|\sigma) = \Out_\T(\b_\mu | \sigma)$, and \item $P_\C(\mu|\sigma) = P_\T(\b_\mu | \sigma)$. \end{enumerate} \end{theorem} \begin{proof} Let $(B_1, B_2, \dots, B_N)$ be the schedule of \C. By Lemma~\ref{l:linear}, it suffices to construct a circuit $\C'$ with linear schedule admitting a one-to-one correspondence $\mu \mapsto \mu'$ from the paths through \C\ onto those of $\C'$ so that $\Out_\C(\mu|\sigma) = \Out_{\C'}(\mu' | \sigma)$ and $P_\C(\mu|\sigma) = P_{\C'}(\mu'|\sigma)$ for all $\sigma$ in \DO{\ensuremath{\mathcal K}} and all paths $\mu$ through \C. We construct the desired circuit $\C'$. The gates of $\C'$ are the bouts $B_n$ of the schedule of \C. If the \C\ gates of $B_n$ are $F_1 < F_2 < \dots$ (in the fixed order of \C\ gates), then $B_n$ measurements are tensor products $M_{F_1} \ox M_{F_2} \ox\dots$ where each $M_{F_j}$ is an $F_j$ measurement in \C. Since, for every gate $F$ in \C, the outcomes of different $F$ measurements are disjoint, the outcomes of different $B_n$ measurements are disjoint. Next we define the selection function $\Sigma'_n = \Sigma'_{B_n}$ for each $n = 1, \dots, N$. A $\C'$ gate $B_i$ is a classical source for a $\C'$ gate $B_n$ if some \C\ gate in $B_i$ is a classical source for a \C\ gate in $B_n$. If the \C\ gates of $B_i$ are $E_1 < E_2 < \dots$ and, in runtime, they produce outcomes $o_1, o_2, \dots$ respectively, then the outcome $O_i = (o_1, o_2, \dots)$ of $B_i$ is sent to $B_n$. Note that $O_i$ determines the outcomes of all \C\ gates in $B_i$. It follows that, for every \C\ gate $F$ in $B_n$, the outcomes $O_i$ determine outcomes of all classical sources $E$ of $F$ in \C, and therefore determine an $F$ measurement $M_F$ chosen by the selection function $\Sigma_F$ of \C. If the \C\ gates of $B_n$ are $F_1 < F_2 < \dots$, then $\Sigma'_n\iset{O_i: B_i \in \CS{B_n}} = M_{F_1} \ox M_{F_2} \ox\dots$. Finally, for every path $\mu$ through \C, we define a path $\mu'$ through $\C'$. For each $n = 1,\dots, N$, we need to define an outcome $\mu'(B_n)$. If the \C\ gates of $B_n$ are $F_1 < F_2 < \dots$, then $\mu'(B_n) = (\mu(F_1), \mu(F_2), \dots)$. It is easy to see that every path through $\C'$ is obtained this way and that $\Out_\C(\mu|\sigma) = \Out_{\C'}(\mu' | \sigma)$ and $P_\C(\mu|\sigma) = P_{\C'}(\mu'|\sigma)$ for all $\sigma$ in \DO{\ensuremath{\mathcal K}} and all paths $\mu$ through \C. \end{proof} \section{Cumulative operators} \label{s:trees} Let \T\ be a meas tree over a Hilbert space \ensuremath{\mathcal K}.
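Before introducing cumulative operators, the following small Python/NumPy sketch may help the reader connect the tree formalism with concrete matrices. It simulates, in the algorithmic view of the previous section, a tiny meas tree over a two-dimensional Hilbert space; the tree and all operator choices below are our own illustration and are not taken from \cite{G250}. Since every operator involved is a scaled unitary, every branch computes a unitary, and the printed branch probabilities come out independent of the input, as the input independence theorem predicts.
\begin{verbatim}
import numpy as np

H  = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard
X  = np.array([[0, 1], [1, 0]], dtype=complex)                # Pauli X
Z  = np.array([[1, 0], [0, -1]], dtype=complex)               # Pauli Z
I2 = np.eye(2, dtype=complex)

root  = [H / np.sqrt(2), X / np.sqrt(2)]    # root measurement {L_0, L_1}
node0 = [Z / np.sqrt(2), I2 / np.sqrt(2)]   # measurement at node (0); node (1) is a leaf
branches = {(0, 0): [root[0], node0[0]],
            (0, 1): [root[0], node0[1]],
            (1,):   [root[1]]}

def follow(ops, rho):
    # Apply the measurement operators of a branch in order; return the
    # branch probability p_1 * p_2 * ... and the (unnormalized) output state.
    prob = 1.0
    for L in ops:
        post = L @ rho @ L.conj().T
        prob *= (np.trace(post) / np.trace(rho)).real
        rho = post
    return prob, rho

rng = np.random.default_rng(1)
for _ in range(3):
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)   # random pure input
    rho = np.outer(psi, psi.conj())
    probs = {b: follow(ops, rho)[0] for b, ops in branches.items()}
    print(probs)   # -> {(0, 0): 0.25, (0, 1): 0.25, (1,): 0.5} for every input
    assert abs(sum(probs.values()) - 1.0) < 1e-12
\end{verbatim}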
A branch \b\ of \T\ is \emph{attainable on} input $\sigma$ in DO(\ensuremath{\mathcal K}) if the probability $P(\b |\sigma)$ that computation of \T\ on input $\sigma$ follows branch \b\ is strictly positive. \begin{definition} For every branch $\b = (o_1, o_2, \dots, o_N)$ of \T, the \emph{cumulative operator} $C_\b$ is the composition \[ C_\b = A_N \circ A_{N-1} \circ \cdots \circ A_2 \circ A_1 \] where each $A_{n+1}$ is the operator of the measurement at node \End{o_1,\dots, o_n} that produces the outcome $o_{n+1}$. In the special case of the one-node tree, the length-zero branch has $C_\b=\Id$. \qef \end{definition} Notice that, for every input $\sigma$ in DO(\ensuremath{\mathcal K}), $C_\b\sigma C_\b^\dag$ is exactly $\Out_\T(\b | \sigma)$. \begin{lemma}\label{l:cum} \begin{enumerate}\mbox{} \item $\sum_\b C_\b^\dag C_\b = \Id$, \item $\displaystyle\pbs = \frac{\ensuremath{\mathrm{Tr}}(C_\b\sigma C_\b^\dag)}{\ensuremath{\mathrm{Tr}}(\sigma)}$, \item $\pbs=0 \iff C_\b\sigma C_\b^\dag=0$. \end{enumerate} \end{lemma} \begin{proof} Induction on the number of non-leaf nodes in \T. \end{proof} By the first two claims of the lemma, the indexed family of operators $C_\b$ is a measurement (that could be called the \emph{aggregate measurement} of \T\ in terms of \cite{G250}). \begin{setting}\label{s2} Let \A, \ensuremath{\mathcal H}, and \ket a be as in Setting~\ref{s1}.E. Consider a meas tree \T\ with principal inputs and outputs in DO(\ensuremath{\mathcal H}). Like circuits, \T\ may use ancillas and may produce garbage; the full input of \T\ has the form $\r\ox \a$ where \r\ ranges over DO(\ensuremath{\mathcal H}) and \ket{a} is a fixed unit vector in \A. Again, we abbreviate $\r\ox \a$ to \rz. Let $U$ be a unitary operator on \ensuremath{\mathcal H}. \end{setting} \begin{definition}\label{d:det2} \T\ \emph{computes $U$ on principal input \r\ along branch \b} if the operator $\ensuremath{\mathrm{Tr}}_\A(C_\b \rz C_\b^\dag)$ agrees with $U\r\,U^\dag$ up to a scalar factor and thus represents the same state. \qef \end{definition} \begin{corollary}\label{c:det2} If \T\ computes $U$ on principal input \r\ along branch \b\ then\\ $\ensuremath{\mathrm{Tr}}_\A(C_\b \rz C_\b^\dag) = P_\T(\b|\rz)\cdot U\r\,U^\dag$. \end{corollary} \begin{proof} According to the definition of ``computes,'' the two sides of the claimed equation differ by only a positive scalar factor. So it suffices to check that they have the same trace. Since $\ensuremath{\mathrm{Tr}}(\rz) = \ensuremath{\mathrm{Tr}}(\r)$ and, by unitarity of $U$, $\ensuremath{\mathrm{Tr}}(U\r\,U^\dag) = \ensuremath{\mathrm{Tr}}(\r)$, we have $ \ensuremath{\mathrm{Tr}}\left(\ensuremath{\mathrm{Tr}}_\A(C_\b \rz C_\b^\dag)\right) = \ensuremath{\mathrm{Tr}}\left(C_\b \rz C_\b^\dag\right) = \pbrz \ensuremath{\mathrm{Tr}}(\rz) = \ensuremath{\mathrm{Tr}}\Big(\pbrz U\r\,U^\dag\Big). $ \end{proof} \section{Input independence} \label{s:ii} We use Setting~\ref{s2}. \begin{theorem}\label{t:iit} Suppose that a meas tree \T\ computes a unitary operator $U$ on all pure inputs along a branch \b.
Then there is a vector $b$ in \A\ such that for all (pure or not) principal inputs $\rho\in\DO{\ensuremath{\mathcal H}}$, we have: \begin{enumerate} \item $C_\b \rz C_\b^\dag = U\rho\, U^\dag \ox \ketbra{b}{b}$, \item the probability \pbrz\ that a computation of\, \T\ follows branch $\b$ on input \rz\ is $\ensuremath{\mathrm{Tr}}(\ketbra{b}{b}) = \Vert b\Vert^2$ and thus is independent of $\rho$, and \item if $\b$ is attainable on some input, then it is attainable on all inputs. \end{enumerate} \end{theorem} \begin{proof} Claim~3 obviously follows from claim~2. Claim~2 follows from claim~1: \[ \pbrz = \frac{\ensuremath{\mathrm{Tr}}(C_\b \rz C_\b^\dag)}{\ensuremath{\mathrm{Tr}}(\rz)} = \frac{\ensuremath{\mathrm{Tr}}(U\rho\, U^\dag \ox \ketbra{b}{b})}{\ensuremath{\mathrm{Tr}}(\rz)} = \frac{\ensuremath{\mathrm{Tr}}(\rho)\times\ensuremath{\mathrm{Tr}}(\ketbra{b}{b})}{\ensuremath{\mathrm{Tr}}(\rz)} = \ensuremath{\mathrm{Tr}}(\ketbra{b}{b}). \] In the rest of the proof we prove Claim~1. By the linearity of the equation in Claim~1, we may assume without loss of generality that $\rho$ is pure. Then $U\rho\, U^\dag$ is also pure. Since \T\ computes $U$ on $\rho$, $\ensuremath{\mathrm{Tr}}_\A (C_\b \rz C_\b^\dag)$ and $U\rho\, U^\dag$ agree up to a scalar factor. By the Pure State Factor Theorem\footnote{In \cite{Ballentine} Ballentine works with normalized density operators, but the theorem remains true, because any positive scalar factor can be shifted to $\xi(\rho)$.} in \cite[\S8.3]{Ballentine}, \[ C_\b \rz C_\b^\dag = U\rho\, U^\dag \ox \xi(\rho) \] for some (possibly mixed) state $\xi(\rho)$ in \A, and therefore $\ensuremath{\mathrm{Tr}}(C_\b \rz C_\b^\dag) = \ensuremath{\mathrm{Tr}}(\rho)\times \ensuremath{\mathrm{Tr}}(\xi(\rho))$. Notice that a (not necessarily normalized) density operator $\sigma$ is pure if and only if $\ensuremath{\mathrm{Tr}}(\sigma^2) = \ensuremath{\mathrm{Tr}}(\sigma)^2$. Since $\rho$ is pure, the states \rz\ and $C_\b\rz C_\b^\dag$ are pure. We have \begin{align*} \left(\ensuremath{\mathrm{Tr}}(\rho)\right)^2 \times \left(\ensuremath{\mathrm{Tr}}(\xi(\rho))\right)^2 &= \big(\ensuremath{\mathrm{Tr}}(C_\b \rz C_\b^\dag)\big)^2\\ &= \ensuremath{\mathrm{Tr}}\big((C_\b \rz C_\b^\dag)^2\big) = \ensuremath{\mathrm{Tr}}(\rho^2) \times \ensuremath{\mathrm{Tr}}\big((\xi(\rho))^2\big). \end{align*} Cancelling $\ensuremath{\mathrm{Tr}}(\rho)^2 = \ensuremath{\mathrm{Tr}}(\rho^2)$, we get $\left(\ensuremath{\mathrm{Tr}}(\xi(\rho))\right)^2 = \ensuremath{\mathrm{Tr}}\left(\xi(\rho)^2\right)$, and so $\xi(\rho)$ is pure as well. Let $\rho = \ketbra\psi\psi$. Then there is an \A\ vector \ket{f(\psi)} such that $\xi(\rho) = \ket{f(\psi)}\bra{f(\psi)}$. Thus, $C_\b \rz C_\b^\dag = U\rho\, U^\dag \ox \ket{f(\psi)}\bra{f(\psi)}$. Hence $C_\b (\ket\psi\ox\ket{a}) = U\ket\psi\ox \ket{f(\psi)}$ up to a phase, which we absorb into $\ket{f(\psi)}$. Since $U\ket\psi\ox \ket{f(\psi)} = C_\b (\ket\psi\ox\ket{a})$ is linear in \ket\psi, we can apply the function constancy criterion Theorem~\ref{t:const} in Appendix~A with $V_1=\ensuremath{\mathcal H}$, $V_2 = \ensuremath{\mathcal H}$, $V_3=\A$, $L = U$, and $f$ the function $\ket\psi \mapsto \ket{f(\psi)}$. We learn that $f$ is constant, with the same value \ket{b} on all nonzero vectors \ket\psi\ in $\ensuremath{\mathcal H}$. So we have $C_\b\big(\ket\psi\ox\ket{a}\big) = U\ket\psi\ox\ket{b}$ for every vector \ket\psi\ in $\ensuremath{\mathcal H}$.
In terms of density operators, $C_\b \rz C_\b^\dag = U\rho\, U^\dag \ox \ketbra{b}{b}$ for all pure $\rho\in\DO{\ensuremath{\mathcal H}}$ and, by linearity, for all $\rho\in\DO{\ensuremath{\mathcal H}}$. \end{proof} \begin{corollary}\label{c:puretoall} If \T\ computes $U$ on pure inputs along a branch, then it computes $U$ on all inputs along that branch. \end{corollary} \begin{proof}[Proof of Theorem~\ref{t:ii}] The reduction theorem, Theorem~\ref{t:reduction}, allows us to transfer the results of Theorem~\ref{t:iit} from meas trees to circuits. Thus Theorem~\ref{t:ii} follows from the reduction theorem and Theorem~\ref{t:iit}. \end{proof} \begin{appendices} \section{A function constancy criterion} Let $V_1, V_2, V_3$ be vector spaces, $L: V_1\to V_2$ a linear transformation of rank $\ge2$, $f$ an arbitrary function from $V_1$ to $V_3$, and $\L = L\ox f$ meaning that $\L x = Lx \ox fx$. Notice that, if $f$ is constant, then \L\ is linear, because $L$ is linear and \ox\ is bilinear. The same conclusion follows if $f$ is constant only on those vectors $x$ in $V_1$ for which $Lx\ne0$. Indeed, changing the value of $fx$ arbitrarily for those $x$ with $Lx=0$ has no effect on \L. The following theorem provides a useful converse to this observation. \begin{theorem}\label{t:const} Suppose that \L\ is linear. Then $fx = fy$ for all $x,y$ with nonzero $Lx, Ly$. \end{theorem} \begin{proof} It suffices to prove $fx=fy$ when $Lx,Ly$ are linearly independent. Indeed, there are vectors $v_1, v_2$ such that $Lv_1, Lv_2$ are independent because the rank of $L$ is $\ge2$. If $Lx,Ly$ are linearly dependent but nonzero, then some $Lv_i$ is independent from both $Lx$ and $Ly$. Then $fx = fv_i = fy$. So suppose that $Lx, Ly$ are independent, and choose bases \begin{itemize} \item $v_1, v_2, \dots$ for $V_2$, where $v_1 = Lx$ and $v_2 = Ly$, and \item $w_1, w_2, \dots$ for $V_3$, where $w_1 = fx$ and $fy = aw_1 + bw_2$ for some scalars $a,b$. \end{itemize} Then $f(x+y) = \sum_j c_j w_j$ for some scalars $c_j$. Notice that $V_2$ and $V_3$ are arbitrary vector spaces and need not have any inner product specified. Even if they (or some of them) are infinite dimensional, we are talking about bases in the simple sense of linear algebra. Therefore $\sum_j c_j w_j$ is a finite sum, i.e., all but finitely many of the coefficients $c_j$ are 0. By linearity of $L$ and \L, we have \begin{align*} & (v_1 + v_2) \ox \sum_j c_j w_j = (Lx + Ly) \ox f(x+y) = L(x+y) \ox f(x+y) = \\ & \L(x+y) = \L x + \L y = (Lx \ox fx) + (Ly \ox fy) = v_1\ox w_1 + v_2\ox(a w_1 + bw_2). \end{align*} Vectors $v_i\ox w_j$ form a basis of $V_2\ox V_3$. Comparing the coefficients of basis vectors\\ $v_1\ox w_1, v_1\ox w_2, v_2\ox w_1, v_2\ox w_2$ at the left end and the right end, we have \[ c_1 = 1,\quad c_2 = 0,\quad c_1 = a = 1,\quad c_2 = b = 0, \] and therefore $ fy = 1w_1 + 0w_2 = w_1 = fx $. \end{proof} \section{Isometries} A linear operator $L$ on a Hilbert space \ensuremath{\mathcal K}\ is an \emph{isometry} if it preserves norms: $\norm{L\ket\psi} = \norm{\ket\psi}$ for every vector \ket\psi\ in \ensuremath{\mathcal K}\ \cite[Def.~2.10]{Moretti}. Equivalently, $L$ preserves inner products \cite[Prop.~3.8]{Moretti}. It follows that a (bounded) linear operator $L$ is an isometry if and only if $L^\dag L = \Id$: \[ \braket{L\psi}{L\phi} = \braket\psi\phi\text{ for all } \ket\psi, \ket\phi \iff \braket{L^\dag L\psi}\phi = \braket{\Id\psi}\phi\text{ for all } \ket\psi, \ket\phi \iff L^\dag L = \Id. 
\] Notice that a single-outcome measurement consists of a single linear operator $L$ with $L^\dag L = \Id$ and therefore amounts to an isometry. If $L$ is an isometry then $\ensuremath{\mathrm{Tr}}(L\r L^\dag) = \ensuremath{\mathrm{Tr}}(L^\dag L\r) = \ensuremath{\mathrm{Tr}}(\r)$ for all \r\ in DO(\ensuremath{\mathcal K}). In finite dimensions $L^\dag L = \Id$ implies that $L L^\dag = \Id$, so that $L^\dag = L^{-1}$ and $L$ is unitary. \begin{theorem}\label{t:iit2} The input independence theorem, Theorem~\ref{t:ii}, remains valid if ``a unitary operator'' is replaced by ``an isometry''. \end{theorem} \begin{proof} By the reduction theorem, Theorem~\ref{t:reduction}, it suffices to prove the version of Theorem~\ref{t:iit} where ``a unitary operator'' is replaced by ``an isometry''. We walk through the proof of Theorem~\ref{t:iit} and examine all places where the unitarity is or seems to be used. \begin{itemize} \item The derivation of Claim~2 from Claim~1 uses only the equality $\ensuremath{\mathrm{Tr}}(U\r\,U^\dag) = \ensuremath{\mathrm{Tr}}(\r)$ which is true for isometries $U$. \item The application of the Pure State Factor Theorem remains valid because, if \r\ is a pure density operator \ketbra\psi\psi, then $U\rho U^\dag = U\ketbra\psi\psi U^\dag = \ketbra{U\psi}{U\psi}$ is pure as well for any linear operator $U$. \item The proof that $\xi(\r)$ is pure requires only a tiny modification: \begin{multline*} \ensuremath{\mathrm{Tr}}\big((U\r\,U^\dag)^2\big) \times \left(\ensuremath{\mathrm{Tr}}(\xi(\rho))\right)^2 = \big(\ensuremath{\mathrm{Tr}}(U\r\,U^\dag)\big)^2 \times \left(\ensuremath{\mathrm{Tr}}(\xi(\rho))\right)^2 = \big(\ensuremath{\mathrm{Tr}}(C_\b \rz C_\b^\dag)\big)^2\\ = \ensuremath{\mathrm{Tr}}\big((C_\b \rz C_\b^\dag)^2\big) = \ensuremath{\mathrm{Tr}}\big((U\r\,U^\dag)^2\big) \times \ensuremath{\mathrm{Tr}}\big((\xi(\rho))^2\big). \end{multline*} Cancelling $\ensuremath{\mathrm{Tr}}\big((U\r\,U^\dag)^2\big)$, we get $\left(\ensuremath{\mathrm{Tr}}(\xi(\rho))\right)^2 = \ensuremath{\mathrm{Tr}}\left(\xi(\rho)^2\right)$, and so $\xi(\rho)$ is pure. \end{itemize} Thus, only the isometricity of $U$ is used in the proof of Theorem~\ref{t:iit}. \end{proof} The following proposition shows that the isometricity requirement in Theorem~\ref{t:iit2} cannot be substantially weakened, at least in the case where the same linear operator is computed along every computation path. \begin{proposition} If a quantum circuit computes the same linear operator $U$ on \ensuremath{\mathcal H}\ along every computation path, then $U$ is an isometry up to a positive scalar factor. \end{proposition} \begin{proof} By the reduction theorem, Theorem~\ref{t:reduction}, it suffices to prove the meas tree version of the proposition: If a meas tree computes the same linear operator $U$ along every branch \b, then $U$ is an isometry up to a positive scalar factor. By claim~1 of Theorem~\ref{t:iit2}, for every branch \b\ there is a vector $b_\b$ in \A\ such that $C_\b \rz C_\b^\dag = U\rho\, U^\dag \ox \ketbra{b_\b}{b_\b}$. It follows that \[ \ensuremath{\mathrm{Tr}}_\A\sum_\b(C_\b \rz C_\b^\dag) = \Big(\sum_\b \ensuremath{\mathrm{Tr}}\ketbra{b_\b}{b_\b}\Big)\, U\r\,U^\dag = t^2\cdot U\r\,U^\dag, \] where $t = \sqrt{\sum_\b \braket{b_\b}{b_\b}}$. We show that $tU$ is isometric. Let \ket\psi\ be an arbitrary vector in \ensuremath{\mathcal H}\ and $\r = \ketbra\psi\psi$. By Lemma~\ref{l:cum}, $\sum_\b C_\b^\dag C_\b = \Id$.
We have: \begin{align*} \langle tU\psi | tU\psi\rangle & = \ensuremath{\mathrm{Tr}}(t^2\cdot U\r\,U^\dag) =\ensuremath{\mathrm{Tr}}\Big(\ensuremath{\mathrm{Tr}}_\A\sum_\b(C_\b \rz C_\b^\dag)\Big)\\ & = \sum_\b \ensuremath{\mathrm{Tr}}(C_\b^\dag C_\b \rz) = \ensuremath{\mathrm{Tr}}(\rz) = \ensuremath{\mathrm{Tr}}(\rho) =\braket\psi\psi. \qedhere \end{align*} \end{proof} \begin{corollary} Suppose that the Hilbert space \ensuremath{\mathcal H}\ is finite dimensional. If a quantum circuit computes the same linear operator $U$ on \ensuremath{\mathcal H}\ along every computation path, then $U$ is unitary up to a positive scalar factor. \end{corollary} \end{appendices} \end{document}
\begin{document} \title[Conformal Scattering]{H\"ormander's method for the characteristic Cauchy problem and conformal scattering for a non linear wave equation} \author{J\'er\'emie Joudioux} \address{Albert-Einstein-Institut, Max-Planck-Institut f\"ur Gravitationsphysik, Am M\"uhlenberg 2, 14476 Potsdam, Germany } \email{[email protected]} \urladdr{jeremiejoudioux.eu} \maketitle \begin{abstract} The purpose of this note is to prove the existence of a conformal scattering operator for the cubic defocusing wave equation on a non-stationary background. The proof essentially relies on solving the characteristic initial value problem by the method developed by H\"ormander. This method consists in slowing down the propagation speed of the waves to transform a characteristic initial value problem into a standard Cauchy problem. \end{abstract} \section*{Introduction} \subsection*{The result} Consider a globally hyperbolic smooth manifold \((M,\hat g)\), whose metric $\hat g$ satisfies the vacuum Einstein equations. Let $\Sigma$ be a Cauchy hypersurface in \(M\), and $T$ be a future oriented timelike vector normal to $\Sigma$. Consider the Cauchy problem for the defocusing cubic equation \begin{equation}\label{eq:cauchycubicwave} \begin{aligned} \hat\nabla_\alpha \hat\nabla^\alpha \hat{\phi} &= \hat{\phi}^3 \\ (\hat{\phi}, T(\hat{\phi})) |_{t =0} &\in C_0^\infty (\Sigma) \times C_0^\infty (\Sigma), \end{aligned} \end{equation} where $\hat \nabla$ is the Levi-Civita connection associated with $\hat{g}$. Since \((M,\hat g)\) is globally hyperbolic, there exists a time foliation on the manifold, and the global existence of solutions and the existence of a scattering operator can be discussed for this problem. Nonetheless, since the metric is a priori not static, the standard approaches might fail: proving the global existence, for instance through Strichartz estimates, and proving the existence of the scattering operator, which would require a proof of local energy decay. The cubic wave equation \eqref{eq:cauchycubicwave} nonetheless enjoys good properties under conformal transformations. Indeed, if one considers a metric ${g}$ conformal to the original metric $\hat{g}$ \[ g = \Omega^2 \hat g, \] one obtains the following identity \[ \Omega^{-3}(\hat \nabla_\alpha \hat \nabla^\alpha \hat\phi - \hat\phi^3) = {\nabla}_\alpha {\nabla}^\alpha (\Omega^{-1}\hat\phi) + \dfrac16\text{scal}_g (\Omega^{-1}\hat\phi) - (\Omega^{-1}\hat\phi) ^3, \] where ${\nabla}_\alpha$ is the Levi-Civita connection associated with $g$. Hence, if $\hat\phi$ is a solution to the problem \eqref{eq:cauchycubicwave}, then $\Omega^{-1} \hat\phi$ is a solution to the corresponding geometric equation arising from the conformal metric ${g}$. This property can be exploited to prove the global existence of solutions to the problem \eqref{eq:cauchycubicwave} by considering the local existence for the conformal problem, and to construct a scattering operator by relying on this conformal transformation. Penrose introduced in the '60s a class of space-times admitting a conformal compactification. These space-times, known as asymptotically simple space-times, model isolated bodies. The compactification is obtained by completing the manifold with two disconnected hypersurfaces, null for the conformal metric when the cosmological constant vanishes, which represent the endpoints of future and past null geodesics. These hypersurfaces are denoted by ${\mathscr I}^+$ and ${\mathscr I}^-$ respectively.
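Before describing the construction, let us record a quick sanity check of the conformal identity above; this check is added here for convenience and is not part of the original argument. Specialise to a constant conformal factor $\Omega$: since $\hat g$ satisfies the vacuum Einstein equations and $\Omega$ is constant, one has $\text{scal}_g = \Omega^{-2}\,\text{scal}_{\hat g} = 0$ and ${\nabla}_\alpha{\nabla}^\alpha = \Omega^{-2}\,\hat\nabla_\alpha\hat\nabla^\alpha$, so that
\[
{\nabla}_\alpha {\nabla}^\alpha (\Omega^{-1}\hat\phi) + \dfrac16\text{scal}_g\, (\Omega^{-1}\hat\phi) - (\Omega^{-1}\hat\phi)^3
= \Omega^{-3}\,\hat\nabla_\alpha\hat\nabla^\alpha\hat\phi - \Omega^{-3}\,\hat\phi^3,
\]
which is exactly the left-hand side $\Omega^{-3}(\hat\nabla_\alpha\hat\nabla^\alpha\hat\phi - \hat\phi^3)$; the scalar curvature term only plays a role for non-constant conformal factors.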
The strategy for the construction of the scattering operator exploiting the existence of the conformal compactification is the following. For convenience, the construction is done for smooth data with compact support. Consider smooth compactly supported initial data $(\hat\phi_0, \hat\phi_1)$ on $\Sigma$ for the Cauchy problem \eqref{eq:cauchycubicwave}. These data are appropriately transformed into data \(({\phi}_0, {\phi}_1)\) for the conformal cubic wave equation: \begin{equation} \label{eq:confcauchycubicwave} \begin{aligned} {\nabla}_\alpha {\nabla}^\alpha (\Omega^{-1}\hat\phi) + \dfrac16\text{scal}_g (\Omega^{-1}\hat\phi) - (\Omega^{-1}\hat\phi) ^3 = 0\\ ({\phi}, \hat{T}({\phi}))|_{t =0} = ({\phi}_0, {\phi}_1) \in C^\infty_0(\hat{\Sigma}) \times C^\infty_0(\hat{\Sigma}). \end{aligned} \end{equation} Since the problem is now set on a space which is compact in time, the global existence problem for the Cauchy problem \eqref{eq:cauchycubicwave} turns into the much easier problem of local existence for the rescaled equation \eqref{eq:confcauchycubicwave}. The existence result of Cagnac-Choquet-Bruhat \cite{MR789558} can be used in this context. The radiation profile of the function $\hat{\phi}$ can be obtained by considering the trace of $\Omega^{-1} \hat{\phi}$ on ${\mathscr I}^\pm$. We define the mapping, generalising the inverse wave operators: \begin{equation} \mathfrak{T}^\pm: ({\phi}_0, {\phi}_1) \in C^\infty_0(\hat{\Sigma}) \times C^\infty_0(\hat{\Sigma}) \mapsto \Omega^{-1} \hat \phi |_{{\mathscr I}^\pm}. \end{equation} Energy estimates can be proven for the cubic wave equation, and we prove in fact $$ \mathfrak{T}^\pm \left( C^\infty_0(\hat{\Sigma}) \times C^\infty_0(\hat{\Sigma})\right) \subset H^1({\mathscr I}^\pm). $$ The existence of the wave operators \(\left(\mathfrak{T}^\pm\right)^{-1}\), defined on \(H^1({\mathscr I}^\pm)\) as the inverses of the trace operators, is obtained by solving the characteristic Cauchy problem for the conformal cubic wave equation with data on ${\mathscr I}^{\pm}$. The generalisation of the standard scattering operator is defined as the operator \[ \mathfrak{S} = \mathfrak{T}^+ \circ \left(\mathfrak{T}^-\right) ^{-1}. \] In this paper, we prove the existence of a bi-Lipschitz conformal scattering operator, which extends the notion of scattering operator to space-times which are not static: \begin{theorem*} There exists a bi-Lipschitz operator \(\mathfrak{S} : H^{1}({\mathscr I}^{-}) \rightarrow H^{1}({\mathscr I}^{+}) \), which associates to past radiation profiles of solutions to the problem \eqref{eq:cauchycubicwave} the corresponding future radiation profiles. \end{theorem*} \subsection*{Conformal techniques, scattering, asymptotic behaviour} The well-posedness of the Cauchy problem for this equation, and in this geometric setting, has been addressed in \cite{MR789558}. The existence of a scattering operator on flat space-time has been considered in \cite{Karageorgis:2006wg} for general semi-linear wave equations, and by conformal methods in \cite{MR1073286}. The existence of a scattering operator, under more restrictive assumptions, has been obtained by the author in \cite{Joudioux:2012eo}; the present work extends this previous result to the generic cubic wave equation. In particular, we remove an artificial decay assumption on the coefficient of the non-linearity.
The purpose of this assumption was to compensate for the blow-up of the Sobolev constant associated with the Sobolev embedding of $H^{1}$ into $L^{6}$ in dimension $3$. Since the metric is not stationary, this geometrical setting is a priori not amenable to standard analytic techniques to prove the existence of a scattering operator. To construct this operator, we exploit the conformal invariance of the equation, in conjunction with the conformal compactification of the space-time. The rescaled solution $\phi = \Omega^{-1}\hat\phi$ can be extended up to the boundaries ${\mathscr I}^\pm$ by means of Equation \eqref{eq:confcauchycubicwave}. The traces of $\Omega^{-1}\hat\phi$ on the boundaries are, in this conformal setting, the radiation profiles discussed by Friedlander \cite{MR583989}. Hence, the trace operators $\mathfrak{T}^{\pm}$, associating to initial data for the Cauchy problem for the conformal wave equation \eqref{eq:confcauchycubicwave} the traces on the boundaries ${\mathscr I}^{\pm}$ of the corresponding solutions to \eqref{eq:confcauchycubicwave}, generalise the standard inverse wave operators of classical scattering theory, see \cite{mn04}. The scattering operator is then constructed by inverting the trace operators, that is to say, by solving, within the appropriate function spaces, the characteristic Cauchy problem with data on the boundaries of $M$. Conformal methods to study global problems for partial differential equations in relativity go back to Penrose and Sachs and the peeling of higher spin fields. They were further studied in the context of the Cauchy problem in the mid-'80s and early '90s (see for instance \cite{MR789558}). Friedlander \cite{MR583989} pioneered scattering theory in relativity. The reader who wishes to have a more exhaustive bibliography should refer to \cite{Joudioux:2010tn,mokdad:tel-01502657, pham:tel-01630023}. In the past years, this approach to the asymptotic behaviour of solutions of field equations in general relativity through conformal techniques has been developed in various contexts: Mason and Nicolas \cite{mn04} obtained the first analytic result for linear fields, followed by peeling results on the Schwarzschild background, for the scalar wave equation \cite{MR2461904} and for spin $1/2$ and spin $1$ fields \cite{MR2888989}. This has later been extended to non-linear waves on Kerr black holes \cite{2018arXiv180108996N}. As mentioned before, the conformal scattering construction was extended to a non-linear wave equation \cite{Joudioux:2012eo}. Similar constructions based on local energy estimates \cite{2017arXiv170406441M} were obtained on Reissner-Nordstr\"om black holes \cite{2017arXiv170606993M} for the Maxwell equations. More recently, the existence of a conformal scattering operator has been proven for Yang-Mills fields on the de Sitter background \cite{2018arXiv180901559T}. \subsection*{Description of the work} We now review the key points of the result. As already mentioned, this work is an extension of previous work \cite{Joudioux:2012eo}. We are in particular building on the estimates performed in the asymptotic region near spacelike infinity $i^0$. The main improvement of this paper is the possibility to handle a more general non-linearity. In the previous paper, we assumed that the non-linearity was multiplied by a decaying function $b$. This restriction has its origin in the treatment of the characteristic initial value problem on ${\mathscr I}^\pm$ to construct the inverses of the trace operators $\mathfrak{T}^\pm$.
The technique relies on a fixed point argument in the energy space obtained by foliating the interior of the light cones at infinity ${\mathscr I}^{\pm}$. Because of the power non-linearity, Sobolev embeddings are required. However, the volume of the leaves goes to zero, and a well-known consequence is the blow-up of the related Sobolev constant, associated with the Sobolev embedding from $H^{1}$ into $L^6$. In that previous work, the non-linearity was therefore multiplied by a function decaying sufficiently fast to compensate for this blow-up of the Sobolev constant. To circumvent that issue, the strategy to approach the characteristic initial value problem is changed. In particular, this strategy avoids the use of the Sobolev embeddings on a shrinking foliation, and relies on the work by H\"ormander \cite{MR1073287} for the wave equation. The principle is the following. One assumes the existence of solutions to the Cauchy problem for the considered geometric equation\footnote{By geometric equation, we would like to insist that the operator defining the equation depends on the metric.}. One considers an initial characteristic surface $C$. A time function being chosen to split the metric \[ g = -N^2 dt^2 + h_{\Sigma_t}, \] where $h_{\Sigma_t}$ is the Riemannian metric on the leaves $(\Sigma_t)$ of the foliation induced by the time function, the propagation speed of the waves is slowed down by setting \[ g_\lambda = - \lambda^2 N^2 dt^2 + h_{\Sigma_t},\mbox{ for } \lambda \in \left[1/2, 1\right). \] The initial characteristic hypersurface $C$ becomes spacelike for $g_{\lambda}$, and the Cauchy problem for the geometric equation associated with $g_{\lambda}$, with data on $C$, can be solved. In the case of the cubic wave equation, this result has been obtained by Choquet-Bruhat - Cagnac \cite{MR789558}. Given initial data, a family of solutions is then obtained, depending on $\lambda$. Standard compactness arguments can then be used to prove that, as \(\lambda\) goes to one, this family admits an accumulation point which satisfies the characteristic initial value problem. This method by H\"ormander has been extended to metrics of weak regularity in \cite{MR2244222} for the linear wave equation. This paper clarifies some points of \cite{MR2244222}, in particular in the way some energy estimates are performed and in the use of trace theorems for intermediate derivatives. This technique by H\"ormander can only be used in a neighbourhood of timelike infinity, where the Penrose compactification leads indeed to a compact space. Near spacelike infinity $i^0$, the metric coincides with the Schwarzschild metric, and the chosen conformal factor does not lead to a compactification in the neighbourhood of $i^0$. There, the existence of solutions to the characteristic initial value problem can be proven by relying on an explicit foliation. Global solutions to the Cauchy problem in the past of ${\mathscr I}^+$ (resp. the future of ${\mathscr I}^{-}$) are obtained by appropriately gluing the different solutions to the characteristic initial value problem in the neighbourhoods of $i^\pm$ and of $i^0$.
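To make the speed-reduction mechanism concrete, here is a minimal numerical illustration, a toy example of ours and not taken from H\"ormander's work or from \cite{MR2244222}, in $1+1$-dimensional Minkowski space with lapse $N = 1$: the past light cone of the origin is null for $g$, but its conormal becomes timelike, and hence the cone becomes spacelike, for $g_\lambda$ as soon as $\lambda < 1$.
\begin{verbatim}
# Toy illustration (assumed example) of the speed reduction in 1+1 Minkowski
# space with lapse N = 1.  The past light cone of the origin, C = {t + |x| = 0},
# has conormal n = dt + sign(x) dx.  A hypersurface is spacelike exactly when
# its conormal is timelike, i.e. when g_lambda^{-1}(n, n) < 0 in signature (-,+).
import numpy as np

def conormal_norm(lam, x_sign=1.0):
    # g_lambda^{-1}(n, n) for the conormal n of the cone, where
    # g_lambda = -lam**2 dt**2 + dx**2.
    inv_metric = np.diag([-1.0 / lam**2, 1.0])   # inverse of diag(-lam^2, 1)
    n = np.array([1.0, x_sign])                   # covector components (n_t, n_x)
    return n @ inv_metric @ n

for lam in (0.5, 0.9, 0.99, 1.0):
    val = conormal_norm(lam)
    kind = "spacelike" if val < 0 else ("null" if abs(val) < 1e-12 else "timelike")
    print(f"lambda = {lam}: g_lambda^(-1)(n, n) = {val:+.4f} -> cone is {kind}")

# Output: lambda < 1 gives a negative (timelike) conormal norm, so the cone is
# spacelike for g_lambda and the characteristic data become ordinary Cauchy
# data; lambda = 1 recovers the null cone.
\end{verbatim}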
\subsection*{Organisation of the paper} Section \ref{sec:preliminaries} contains the precise geometric framework, in Section \ref{sec:asymptoticallysimple}, and reminders on the cubic wave equation, in Section \ref{sec:remindercubicwave}: in particular, the global existence of solutions to the defocusing wave equation (see Proposition \ref{hor3}) and some estimates for the characteristic initial value problem (see Proposition \ref{prop:aprioricar}). Section \ref{sec:car} solves the characteristic initial value problem for the cubic wave equation, by H\"ormander's method, see Theorem \ref{thm:existence}. Section \ref{sec:uniqueness} addresses the proof of the uniqueness of solutions to the Cauchy problem, see Proposition \ref{charest2}. The next section, Section \ref{sec:neighbourhoodi0}, addresses the characteristic initial value problem in the neighbourhood of spacelike infinity $i^0$, see Proposition \ref{prop:existenceschwarz}. The global characteristic initial value problem is solved in Section \ref{sec:globalcauchy}. Finally, the trace operators, as well as the scattering operator, are obtained in Section \ref{sec:scattering}, see Theorem \ref{thm:mainthm}, and its corollary, Corollary \ref{cor:existencescatt}. Appendix \ref{app:trace} contains reminders on a trace theorem, see in particular Theorem \ref{tracelm}, and its use when energy estimates can be proven. \begin{sloppypar} \section{Geometrical and analytical preliminaries}\label{sec:preliminaries} \subsection{Conventions, notations} All along the paper, the metrics have signature $(-+++)$, and we use the Einstein summation convention. The expression $a \lesssim b$, where $a$ and $b$ are two functions defined on $M$, means that there exists a constant $C>0$ depending on the geometry such that $$ a \leq C b \mbox{ on }M. $$ If $a \lesssim b$ and $b \lesssim a$, then one writes $a \approx b$. \subsection{Regular asymptotically simple space-time}\label{sec:asymptoticallysimple} Penrose introduced asymptotically simple space-times \cite{MR944085} as space-times, that is to say manifolds endowed with a metric satisfying the Einstein equations, modelling spatially localised gravitational perturbations. The existence of such space-times (solutions to the Einstein equations in vacuum with vanishing cosmological constant) can be obtained by combining the gluing results of Corvino - Schoen and Chru\'sciel - Delay \cite{MR1920322,MR1902228,MR2225517}, and the result by Andersson - Chru\'sciel \cite{MR1269390} on the existence of space-times with smooth ${\mathscr I}^{+}$ arising from regular enough data on a hyperboloid. The important point is that this construction generates a family of solutions to the Einstein equations which are not static and nonetheless develop a regular ${\mathscr I}$. Such solutions are by construction future and past geodesically complete \cite{MR868737}, and therefore can be seen as perturbations of Minkowski space-time, though the precise relation to any Minkowski stability result \cite{MR1946854} is yet to be clarified.
\begin{definition}[Regular asymptotically simple space-times]\label{def:asymptoticallysimple} A space-time $(\hat {M},\hat{g})$, satisfying the Einstein equations in vacuum, is a regular asymptotically simple space-time if there exists a manifold $ M$, a Lorentzian metric $g$ on $ M$ and a $C^\infty$-conformal factor $\Omega$ such that: \begin{enumerate} \item $\hat M$ is embedded in $M$ and, on $\hat M$, $ g = \Omega^2 \hat g$; \item $ g$ and $\Omega$ are $C^\infty$ over $M$; \item $\Omega$ is positive on $\hat M$, vanishes on ${\mathscr I}^+\cup{\mathscr I}^-$, and $\textrm{d} \Omega \neq 0$ there; \item Any inextensible null geodesic admits a future (resp. past) endpoint on ${\mathscr I}^+$ (resp. ${\mathscr I}^-$). \end{enumerate} \end{definition} The framework of this paper imposes the use of energy estimates to prove the existence of a solution of the Cauchy problem set up on the characteristic cones at infinity. These energy estimates usually rely on the use of Stokes' theorem, requiring that the metric is regular enough at the tip of the cone. This is why an extra regularity assumption is made at the tips $i^\pm$ of the boundary of the manifold. \begin{assumption} Let $(\hat M,\hat g)$ be a regular asymptotically simple space-time whose conformal compactification is denoted by $( M, g = \Omega^2 \hat g)$. We assume that there exist a neighbourhood $U$ of $i^0$ in $ M$ and a system of coordinates $(t, r, \theta, \phi)$ on $U\cap \hat M$ such that, in $U\cap \hat M$, the metric is the Schwarzschild metric $$ \hat g = -\left(1-\frac{2m}{r}\right) \textrm{d} t^2 + \left(1-\frac{2m}{r}\right)^{-1}\textrm{d} r^2 +r^2 \left(\textrm{d}\theta ^2 + \sin^2\left(\theta\right) \textrm{d} \phi^2\right). $$ \end{assumption} Another important assumption, made on the structure of the asymptotically simple space-time, is the following; it is the most important restriction on the geometry: \begin{assumption} Assume that there exists a neighbourhood \(U\) of $i^\pm$ in \({M}\), and an embedding of \(U\) into a manifold $\bar{M}$, such that the metric $g$ extends to a \(C^\infty\)-metric on \(\bar{M}\). \end{assumption} \subsection{Some reminders on the cubic wave equation}\label{sec:remindercubicwave} \subsubsection{Conformal changes for the cubic wave equation} Consider the conformally related metrics \(g\) and \({\hat g} = \Omega^{-2} g\). Then, it is a straightforward calculation that \(\hat \phi\) is a solution to the cubic wave equation on the manifold \(\hat M\) \[ \hat \nabla_\alpha \hat \nabla^\alpha \hat \phi + \dfrac16 \text{scal}_{\hat g} \hat\phi = \hat \phi^3 \] if, and only if, on \(\hat M\), the function $\phi = \Omega^{-1} \hat\phi$ satisfies the equation \[ \nabla_\alpha \nabla^\alpha \phi + \dfrac16 \text{scal}_g \phi = \phi^3. \] \subsubsection{Estimates for the defocusing wave equation} It should be noticed that we are working here with the \emph{defocusing} wave equation. Given a time foliation $(\Sigma_t)$ on $\hat M$, with unit future oriented normal $T$, the energy $E(t)$ at time $t$ \[ \Vert \phi \Vert^4_{H^1(\Sigma_t)} + \Vert D\phi \Vert^2_{L^2(\Sigma_t)} + \Vert T(\phi) \Vert^2_{L^2(\Sigma_t)} \lesssim \Vert D \phi \Vert^2_{L^2(\Sigma_t)} + \Vert T(\phi) \Vert^2_{L^2(\Sigma_t)} + \Vert \phi \Vert^4_{L^4(\Sigma_t)} = E(t) \] is approximately conserved along the evolution, in the sense that $$ E(t) \lesssim E(0). $$ Hence, for the defocusing wave equation, it is possible to prove global existence for large data in $H^1(\Sigma_0)\times L^2(\Sigma_0)$. The key tool is a Sobolev embedding from $H^1(\Sigma_t)$ into $L^4(\Sigma_t)$ in dimension 3.
As long as these embeddings can be performed uniformly (for instance, when all the leaves are diffeomorphic to $\mathbb{R}^3$ or to the $3$-sphere), the global existence for all data is ensured. The strategy to perform the estimates is the following: \begin{itemize} \item whenever we are working with a finite time interval, we use the standard energy estimates for the linear wave equation, similar to \cite[Chapter 1, \S 3]{sogge}, where, for the local existence, $H^s$-norms are propagated; \item we otherwise use the conservation of the energy with the nonlinear term. \end{itemize} \subsubsection{The Cauchy problem for the cubic wave equation} This section contains some preliminary known results about the global existence of solutions to the Cauchy problem for the cubic wave equation \[ \hat\square \hat\phi = \hat\phi^3 , \] where \(\hat\square = \hat \nabla_\alpha \hat \nabla^\alpha\) is the wave operator associated with the metric \(\hat g\). The well-posedness result of Cagnac and Choquet-Bruhat \cite{MR789558} is recalled: \begin{proposition}\label{hor3} Let $X$ be a compact manifold. Consider the Lorentzian manifold $(X\times \mathbb{R}, g)$ such that the slices $X\times \{t\}$ form a family of uniformly spacelike hypersurfaces. Assume that the deformation tensor of the vector field $\partial_t$ is bounded, as well as a sufficient number of derivatives of the metric. \newline The Cauchy problem \begin{equation*}\left\{ \begin{array}{l} \square \phi = \phi^3\\ \phi|_{X} = \phi_{0}\in H^1(X)\\ T^a\nabla_a \phi|_{X} = \psi_0 \in L^2(X) \end{array}\right. \end{equation*} admits a global solution in $C^1(\mathbb{R}, L^2(X))\cap C^0(\mathbb{R}, H^1(X))$. Furthermore, the a priori estimate holds, for all $t\in \mathbb{R}$: $$ E_{\phi}(\{t\}\times X)\lesssim E_{\phi}(\{0\}\times X). $$ \end{proposition} \begin{remark}\label{rem:bfunction} We could add a function $b$ in the equation as follows: \[ \square \phi = b \phi^3 . \] Cagnac and Choquet-Bruhat in \cite{MR789558} addressed the local well-posedness for this equation under the following assumptions on the function $b$: the function $b$ is bounded and once differentiable on $\hat M$, admits a $C^1$-extension to $M$ and, for a given future-oriented unit timelike vector field $T^a$ on the unphysical space-time $M$, satisfies $$ |T^a\nabla_a b| \lesssim b. $$ The conformal scattering operator could be constructed in that situation in a similar fashion. This extension has nonetheless very little meaning in the context and for the purpose of the present work, and is therefore ignored. \end{remark} \subsubsection{A priori estimates for the characteristic initial value problem} In previous work, we have proven various a priori estimates. Since these estimates are the basis of H\"ormander's method for solving the characteristic Cauchy problem, we recall in this section the various energy estimates which were previously proved. It is important to note that, in this previous work \cite{Joudioux:2012eo}, the a priori energy estimates are proved without the decay assumption on the function $b$ at the vertex of the cone considered in Remark \ref{rem:bfunction}. The first result is a local a priori energy estimate in the neighbourhood of the vertex of the cone. The geometric and analytic settings are the following: let $p$ be a point in $\hat M$ and $U$ an open neighbourhood of $p$ in $M$, with compact closure. Assume that the past light cone $C^-(p)$ of $p$ can be defined globally in $U$.
Let $t$ be a time function defining a foliation $\{\Sigma_t\}_{t= 0\cdots 1 }$ of the past of $C^-(p)$. The unit future oriented normal to $\Sigma_t$ is denoted by $T^a$. On $C^-(p)$, let $l$ be a generator of future oriented null geodesics ending up at $p$. Let $n$ be a null vector field transverse to $C^-(p)$ such that $$ T = \frac{1}{2}\left(l+n\right). $$ The family $(l,n)$ is completed into a family $(l,n,e_1, e_2)$ so that the family $(l,e_1, e_2)$ is tangent to the cone $C^-(p)$ and $(e_1, e_2)$ is orthogonal to $(l,n)$ . The derivatives with respect to the vectors $e_1, e_2$ are denoted by $\nabla_{\mathbb{S}^2}$.\\ We consider the standard Sobolev spaces on $\Sigma_t$, $H^1(\Sigma_t)$ and $L^2(\Sigma_t)$. The set of smooth functions on the cone $C^-(p)$ in the future of $\Sigma_0$ with compact support away from the tip $p$ is endowed with the norm: $$ \Vert \phi \Vert^2_{H^1(C^-(p))} = \int_{C^-(p)}\left( \vert \nabla_{l} \phi\vert ^2 + \vert\nabla_{\mathbb{S}^2}\phi \vert^2 + \vert\phi \vert^2 \right)n\lrcorner \textrm{d} \mu[\hat g] $$ where $n\lrcorner \textrm{d} \mu[\hat g]$ is the contraction of the ambient four dimensional volume form $\textrm{d} \mu[\hat g]$ with the null vector $n$ transverse to the light cone $C^-(p)$. The completion for this norm of the set of smooth functions with compact support away from $p$ is denoted by $H^1(C^{-}(p))$. Using standard energy estimates, one gets the following proposition: \begin{proposition}\label{prop:aprioricar} Let $\phi$ be a solution of the equation $$ \square \phi + \frac {1}{6} \text{Scal}_{g} \phi = \phi^3. $$ The following inequalities hold: $$ \Vert \phi \Vert_{H^1(C^-(p))}^2 + \Vert \phi \Vert^4_{L^4(C^-(p))} \lesssim \left ( \Vert \phi \Vert_{H^1(\Sigma_0)}^2 + \Vert T^a\nabla_a \phi\Vert^2_{L^2(\Sigma_0)} + \Vert \phi \Vert^4_{L^4(\Sigma_0)} \right) $$ and $$ \Vert \phi \Vert_{H^1(\Sigma_0)}^2 + \Vert T^a\nabla_a \phi\Vert^2_{L^2(\Sigma_0)} + \Vert \phi \Vert^4_{L^4(\Sigma_0)} \lesssim\left(\Vert \phi \Vert_{H^1(C^-(p))}^2 + \Vert \phi \Vert^4_{L^4(C^-(p))} \right). $$ Furthermore, for all $t$, one has $$ \Vert \phi \Vert_{H^1(\Sigma_t)}^2 + \Vert T^a\nabla_a \phi\Vert^2_{L^2(\Sigma_t)} + \Vert \phi \Vert^4_{L^4(\Sigma_t)} \lesssim \left ( \Vert \phi \Vert_{H^1(\Sigma_0)}^2 + \Vert T^a\nabla_a \phi\Vert^2_{L^2(\Sigma_0)} + \Vert \phi \Vert^4_{L^4(\Sigma_0)} \right) $$ \end{proposition} \begin{remark} Using Sobolev embedding from $H^1(\Sigma_0)$ into $L^4(\Sigma_0)$ and from $H^1(C^-(p))$ into $L^4(C^-(p))$, one immediately gets the same inequality without the $L^4$ norms. \end{remark} \section{Characteristic Cauchy problem \emph{\`a la} H\"ormander}\label{sec:car} The purpose of this section is to establish an a priori well-posedness result of the characteristic Cauchy problem for the wave equation on a curved space-time based on the result by H\"ormander \cite{MR1073287}, extended in \cite{MR2244222} and partially used in \cite{Joudioux:2012eo} to prove the existence and uniqueness of a weak solution to the characteristic Cauchy problem in $H^1(M)$. More regularity is required when establishing the Lipschitz continuity of the wave operator. The method used by H\"ormander is based on a reduction of the propagation speed of the considered wave equation so that one can resort to the standard existence result of the wave. In the following, one restricts oneself to a light cone, but the result can be extended in the same way to arbitrary weakly lightlike hypersurfaces. 
The geometric setting is the following: consider a point $p$ in $M$. One denotes by $C^-(p)$ the light cone from $p$. Consider a time function $t$ defined in the interior of $C^-(p)$. The induced foliation is denoted by $\{\Sigma_t\}$. One assumes that the slice $\Sigma_0$ is in the past of $p$ and the slice containing $p$ is denoted by $\Sigma_T$. The unit future oriented normal vector to the time slices is denoted by $T^a$. One considers a 3+1 splitting of the metric in the following form:
$$ g =- N^2 \textrm{d} t^2 + h_{\Sigma_t} $$
where $h_{\Sigma_t}$ is a Riemannian metric on $\Sigma_t$ and the lapse $N$ is given by:
$$ N^{-2} = -g(\nabla t, \nabla t). $$
The functional setting is given by: \begin{itemize} \item on the time slice $\Sigma_t$, one defines the energy of a function $\phi$: $$ E_\phi(\Sigma_t) = \Vert\phi\Vert^2_{H^1(\Sigma_t)} + \Vert T^a\nabla_a\phi\Vert^2_{L^2(\Sigma_t)}; $$ \item on the light cone $C^-(p)$, the following $H^1$ norm is considered: $$ \Vert\phi\Vert^2_{H^1(C^-(p))} = \int_{C^-(p)} \left(\vert\nabla_l \phi\vert^2 + |\nabla_{\mathbb{S}^2}\phi|^2+\phi^2\right) T^a\lrcorner \textrm{d} \mu[g] $$ where $l$ is a non-vanishing generator of the null directions of $C^-(p)$ and $T^a\lrcorner \textrm{d} \mu[g]$ is the contraction of the space-time volume form with the vector field $T^a$. The space $H^1(C^-(p))$ is defined as being the completion of the space of smooth functions whose compact support does not contain $p$. \end{itemize}
\begin{remark} One could have defined, like H\"ormander, $H^1 (C^-(p))$ by transporting the $H^1$ structure of a spacelike slice onto $C^-(p)$. It happens that the two definitions coincide~\cite{Joudioux:2012eo}. \end{remark}
Consider the characteristic Cauchy problem \begin{equation}\label{hor1}\left\{ \begin{array}{l} \square \phi = \phi^3\\ \phi|_{C^-(p)} = \phi_{0}\in H^1(C^-(p)). \end{array}\right. \end{equation} Note that, for the purpose of the discussion in this section, the scalar curvature term is omitted; it can be treated in a similar fashion, exploiting the fact that the curvature is bounded. The purpose of this section is to prove that there exists a strong solution to the characteristic Cauchy problem up to the hypersurface $\Sigma_0$ in the past of $p$. One then restricts oneself to the intersection of the future of $\Sigma_0$ and the past of $p$. Using Proposition \ref{hor3}, one can then prove the following theorem:
\begin{theorem}\label{thm:existence} The characteristic Cauchy problem \eqref{hor1} admits a global strong solution down to $\Sigma_0$ in $C^0([0,T], H^1(\Sigma_\tau))\cap C^1([0,T], L^2(\Sigma_\tau))$. The following a priori estimates furthermore hold: $$ \forall \tau\in[0,T], E_{\phi}(\Sigma_\tau)\approx E_{\phi}(C^-(p)). $$ \end{theorem}
\begin{proof} Assuming that there exists a solution in $C^0([0,T], H^1(\Sigma_\tau))$, the a priori estimates are a consequence of \cite[Proposition 4.15]{Joudioux:2012eo}. The proof of the theorem will at one point require the use of Sobolev embeddings. This requires working with an extension of the foliation $(\Sigma_\tau)$ to a cylinder. One then considers a smooth isometric embedding of $(J^-(p)\cap J^+(\Sigma_0),g)$ into a compact cylinder $(U, g)$. The foliation $(\Sigma_\tau)$ is extended on the cylinder $U$ into a spacelike foliation of $U$ for the extension of the metric $g$.
We still denote by $(\Sigma_\tau)$ this extension; all the leaves of this foliation are now topological 3-spheres endowed with a Riemannian metric, and as a consequence, all the Sobolev spaces $H^k(\Sigma_\tau)$ considered on the leaf $\Sigma_\tau$ are equivalent since the leaf is compact. The proof of the well-posedness relies on the method introduced by H\"ormander \cite{MR1073287} which consists in slowing down the propagation speed of the waves. A detailed proof of H\"ormander's paper is given in\cite{MR2244222} when the metric is only Lipschitz. The following proof follows step by step this paper (and especially the scheme given in the proof of theorem 4.3), modifying it when necessary. The reduction of the propagation speed is realised by introducing a parameter $\lambda$ in $[\frac12, 1]$ and a family of metric $g_\lambda$: $$ g_{\lambda} = -\lambda^2 N^2 \textrm{d} t^2 +h_{\Sigma_t} $$ so that the hypersurface $C^-(p)$ becomes spacelike for the metric $g_\lambda$. According to Proposition \ref{hor3}, for any $\lambda$ in $[\frac12, 1]$, the Cauchy problem \begin{equation*}\left\{ \begin{array}{l} \square_\lambda \phi = \phi^3\\ \phi|_{C^-(p)} = \phi_{0}\in H^1(C^-(p))\\ T^a\nabla_a \phi|_{C^-(p)} =0 \in L^2(C^-(p)) \end{array}\right. \end{equation*} admits a solution $\phi_\lambda$ down to $\Sigma_0$ in $C^1([0,T], L^2(\Sigma_\tau))\cap C^0([0,T], H^1(\Sigma_\tau))$ such that: $$ E_{\phi_{\lambda}} \leq c_{\lambda} E_{\phi_\lambda}(C^-(p)) = \Vert\phi_0\Vert_{H^1_{\lambda}(C^-(p))}\lesssim C \Vert\phi_0\Vert_{H^1(C^-(p))}. $$ where $c_\lambda$ is a constant which depends continuously on the scalar curvature of $g_\lambda$. Since the interval under consideration is compact and since, as a consequence of the previous remark, $c_\lambda$ depends continuously on $\lambda$, one can replace $c_\lambda$ by its supremum over $[\frac12, 1]$. Furthermore, $C^-(p)$ is compact and, as consequence, all the Sobolev spaces associated with a smooth metric are equivalent. This includes $H^1(C^-(p))$. The family $(\phi_\lambda)_{\lambda\in [\frac12, 1]}$ is then uniformly bounded in $L^\infty([0,T], H^1(\Sigma_\tau))$: $$ E_{\phi_{\lambda}} \lesssim \Vert\phi_0\Vert_{H^1(C^-(p))} $$ Consider now a sequence $(\lambda_n)$ converging to $1$ and one denotes by $(\phi_n)$ the associated sequence. One denotes by $U$ the volume delimited by $\Sigma_0$ and $C^-(p)$. \begin{remark}\label{hor4} Before studying the convergence process, let us remind that, using a priori estimates of Proposition \ref{prop:aprioricar}, the energy on the initial time slice $\Sigma_0$ controls all the energies on the time slices $\Sigma\tau$ for $\tau\in [0,T]$. As a consequence, a convergence stated for $H^1(\Sigma_0)$ actually holds in $L^\infty([0,T], H^1(\Sigma_\tau))$. \end{remark} One now proceeds by extracting successively sequences of $(\lambda_n)$ as follows: \begin{enumerate}[wide, labelwidth=!, labelindent=0pt] \item Since $(\phi_n)$ is bounded in $C^1([0,T], L^2(\Sigma_\tau))\cap C^0([0,T], H^1(\Sigma_\tau))$, $(\phi_n)$ is bounded in $H^1(U)$. Hence, using Kakutani's theorem, there exists a sub-sequence of $(\phi_n)$ converging weakly in $H^1(U)$ towards a function $\phi$ in $H^1(U)$. Furthermore, using Rellich-Kondrachov's theorem, since $H^1(U)$ is compactly embedded in $H^\sigma(U)$ for $\sigma<1$, a diagonal extraction process gives: \begin{eqnarray} \phi_n&\stackrel{w - H^1(U)}{\xrightarrow{\hspace*{3cm}}}& \phi\label{hor5}\\ \phi_n&\stackrel{H^\sigma(U)}{\xrightarrow{\hspace*{3cm}}}& \phi\label{hor6}. 
\end{eqnarray} \item Since $(\phi_n)$ is bounded in $L^\infty([0,T], H^1(\Sigma_\tau))$, accordingly to Remark \ref{hor4}, up to an extraction, using Kakutani's theorem, one has : \begin{eqnarray} \phi_n&\stackrel{w - L^\infty([0,T],H^1(\Sigma_\tau))}{\xrightarrow{\hspace*{3cm}}}& \phi\label{hor7} \end{eqnarray} \item Furthermore, since $(\phi_n)$ converges strongly in $H^{\frac12}(U)$, by continuity of the trace operator from $H^{\frac12}(U)$ into $L^2(\Sigma_0)$ and using Remark \ref{hor4}, one gets that \begin{equation} \phi_n \stackrel{C^0([0,T], L^2(\Sigma_\tau))}{\xrightarrow{\hspace*{3cm}}} \phi\label{hor8}. \end{equation} We have also used that $\phi_n$ lies for all $n$ in $C^0([0,T], L^2(\Sigma_\tau))$ which is closed in $L^\infty([0,T], L^2(\Sigma_\tau))$. \item Furthermore, $(\phi_k)$ and $(\partial_\tau \phi_k)$ are both bounded respectively in $L^\infty([0,T], H^1(\Sigma_\tau))$ and $L^\infty([0,T],L^2(\Sigma_\tau))$. Using Banach-Alaoglu-Bourbaki theorem, up to extractions, one has \begin{eqnarray} \phi_n&\stackrel{\star-w - L^\infty([0,T], H^1(\Sigma_\tau))}{\xrightarrow{\hspace*{3cm}}}& \phi\label{hor9}\\ \phi_n&\stackrel{\star-w-L^\infty([0,T],L^2(\Sigma_\tau))}{\xrightarrow{\hspace*{3cm}}}& \phi.\label{hor10} \end{eqnarray} \item Since the foliation $\{\Sigma_\tau\}$ has the volume of its leaves bounded away from $0$, Sobolev embeddings can be realized uniformly over the foliation. Using these Sobolev embeddings, Remark \ref{hor4} and the convergence \eqref{hor7}, the following convergence holds \begin{equation} \phi_n \stackrel{w-L^\infty([0,T], L^2\cap L^6(\Sigma_\tau))}{\xrightarrow{\hspace*{3cm}}} \phi\label{hor11}\\ \end{equation} and, as a consequence, \begin{equation} \phi_n^3 \stackrel{w-L^\infty([0,T], L^2(\Sigma_\tau))}{\xrightarrow{\hspace*{3cm}}} \phi^3\label{hor12}. \end{equation} \item Furthermore, since $g_\lambda$ is smooth, $\square_\lambda \phi_\lambda$ also converges in the sense of distributions towards $\square \phi$. Using the convergence \eqref{hor12}, $(\phi_n^3)$ converges towards $\phi^3$ in the sense of distributions. This means that the function $\phi$ satisfies, in the sense of distributions, $$ \square \phi = \phi^3. $$ \item It remains to prove that $\phi$ satisfies the initial conditions. This is a direct consequence of Equation \eqref{hor8}: $$ \phi|_{C^-(p)} = \phi_0. $$ \end{enumerate} The next step of the proof consists in proving that the solution is in fact a strong solution of the equation: \begin{enumerate}[wide, labelwidth=!, labelindent=0pt] \item Since we are working on a compact space, the trace operator is in fact compact from $H^1(U)$ in $L^2(C^-(p))$. Up to an extraction, $(\phi_n)$ converges then strongly in $L^2(C^-(p))$. As a consequence, $\phi|_{C^-(p)}$ is equal to $\phi_0$ in $L^2(C^-(p))$. \item One already has, as a consequence of the a priori estimates, \begin{eqnarray*} \phi &\in & L^\infty([0,T],H^1(\Sigma_\tau))\\ \phi &\in & C^0([0,T],L^2(\Sigma_\tau))\\ \partial_\tau\phi &\in & L^\infty([0,T],L^2(\Sigma_\tau)). \end{eqnarray*} \item One then considers the operator $$ L_0=\square_\lambda - \frac{1}{N^2}\partial_\tau^2 = \alpha \partial_t\times L_1 + L_2. $$ where $\alpha$ is a smooth function on $U$, $L_1$ is a first order purely spatial operator and $L_2$ is a second order purely spatial operator. 
It is clear from this decomposition that, if $\phi$ lies in $L^\infty([0,T], H^1(\Sigma_\tau))$ and $\partial_\tau \phi$ in $L^\infty([0,T], L^2(\Sigma_\tau))$, then $L_0\phi$ lies $L^\infty([0,T], H^{-1}(\Sigma_\tau))$ since $H^1(\Sigma_\tau)$ and $L^2(\Sigma_\tau)$ embed themselves continuously and uniformly over the foliation in $H^{-1}(\Sigma_\tau)$. \item One finally gets, using the principle of intermediate derivatives of Lions \cite{Lions:1963tu}, as described in Appendix \ref{tracelm}, since \begin{eqnarray*} \phi&\in&L^\infty([0,T], H^1(\Sigma_\tau))\\ \partial_\tau\phi &\in& L^\infty([0,T], L^2(\Sigma_\tau))\\ \partial_\tau^2\phi &\in& L^\infty([0,T], H^{-1}(\Sigma_\tau)), \end{eqnarray*} and since the energy is continuous, one gets \begin{eqnarray} \phi &\in &C^0([0,T], L^2(\Sigma_\tau)) \nonumber\\ \partial_\tau \phi &\in & C^0([0,T], L^2(\Sigma_\tau)) \label{hor13} \end{eqnarray} that is to say that \(\phi \in C^1([0,T], L^2(\Sigma_\tau))\). \item It remains to prove that the function $\phi$ belongs to $C^0([0,T], H^1(\Sigma_\tau))$. This is done as follows: \begin{enumerate}[wide, labelwidth=!, labelindent=0pt] \item The weak convergence \eqref{hor7} of $(\phi_n)$ in $L^\infty([0,T], H^1(\Sigma_\tau))$ and the strong convergence \eqref{hor8} of $(\phi_n)$ in $C^0([0,T], L^2(\Sigma_\tau))$ imply that $$ \forall t\in[0,T], \phi_n(t, \star)\in H^1(\Sigma_\tau). $$ \item The a priori estimates and Equation \eqref{hor13} give: \begin{eqnarray*} \phi &\in & L^\infty([0,T], H^1(\Sigma_\tau))\\ \partial_\tau \phi & \in & C^0([0,T], L^2(\Sigma_\tau)) \end{eqnarray*} As a consequence (this fact is proved in \cite[p. 535]{MR2244222}), we have: $$ \phi\in C^0([0,T], w-H^1(\Sigma_\tau)). $$ In this context, on can prove the a priori estimates over the foliation $\Sigma_\tau$: $$ |E_\phi(\Sigma_\tau)-E_\phi(\Sigma_\mu)|\leq C\left(\int_{[\tau;\mu]}E_\phi(\Sigma_r)\textrm{d} r\right). $$ The energy is then locally Lipschitz continuous. As a consequence, since $\phi$ is in $C^0([0,T], w-H^1(\Sigma_\tau))$, $\phi$ is also in $C^0([0,T], H^1(\Sigma_\tau))$. \end{enumerate} \end{enumerate} \end{proof} \section{Estimates for the characteristic Cauchy problem}\label{sec:uniqueness} In the previous section, it has been proved that the Cauchy problem admits a local solution to the characteristic Cauchy problem, relying only on the existence of a priori estimates for the solutions. The uniqueness has, so far, not be proved yet, as well as the continuity in the initial data. This can be achieved in various ways. The path we chose to follow is based on the work of Baez-Segal-Zhou \cite{MR1073286} and relies on an a priori existence result of solutions to the Cauchy problem, which has been handled in a specific way separately in section \ref{sec:car}. The estimates are established by relying on a reduction to the Cauchy problem. The setting is both the framework introduced by H\"ormander and Baez-Segal-Zhou. Hence, one considers the following geometric context: \begin{itemize} \item Let $p$ a point in $M$. \item One considers an open geodesically convex hyperbolic neighbourhood $\Omega$ of $p$. \item Let $C^-(p)$ be the past light cone from $p$. \item Let $t$ be a time function on $\Omega$; the time foliation arising from it is denoted by $\{S_t\}$. One assumes that $S_0\cap J^-(p)$ is entirely on $\Omega$. $S_T$ denotes the time slice containing $p$. \item Let $T^a$ be a unit normal vector field to the foliation $S_t$ and considers the flow generated by $T$. 
The hypersurface $S_T$ is transported by the flow generated by $T^a$ up to $p$. This creates a foliation from $S_T$, which is denoted by $\{\Sigma_t\}$. Up to a rescaling and a shift, one can assume that the slice $\Sigma_0$ contains $p$. \item The closure of the manifold which is obtained is now homeomorphic to $\Sigma_T\times[T, 0]$. \end{itemize}
\begin{remark} The setting introduced by H\"ormander considers only space-times of the form $X\times \mathbb{R}$, where $X$ is a compact manifold, a priori without boundary. The result can nonetheless be immediately extended to the case when the manifold $X$ has a boundary. This can be seen using the following remark: the energy space remains $H^1(X)$ (and not $\dot{H}{}^1(X)$). Assume that the manifold with boundary $\overline{X}$ is embedded in a bigger compact manifold without boundary $\tilde{X}$ (which can always be done). If the boundary of $X$ is at least $C^1$, functions in $H^1(X)$ extend continuously to $H^1(\tilde{X})$. As a consequence, all energy estimates involving $\tilde{X}$ can be brought back onto $X$. In this specific case, the relevant manifold with boundary is $\Sigma_T\cap J^-(p)$, whose boundary, the intersection of $\Sigma_T$ with the cone $C^-(p)$, is smooth. \end{remark}
One introduces first the following operator: $$ \mathfrak{T} : H^1(\Sigma_T)\times L^2(\Sigma_T) \rightarrow H^1(C_T) $$ which associates to initial data $(\phi, \psi)$ the trace over $C_{T}$ of the solution of the \underline{linear} wave equation with initial data $(\phi, \psi)$ on $\Sigma_T$; here $C_T$ denotes the part of the cone $C^-(p)$ in the future of $\Sigma_T$. Following H\"ormander \cite{MR1073287}, this operator has the following properties:
\begin{theorem}[H\"ormander] The linear operator $\mathfrak{T}$ is one-one, onto and bi-continuous, that is to say that there exists a constant $C$ depending only on the geometry of the manifold such that, for all $(\phi, \psi)$ in $H^1(\Sigma_T)\times L^2(\Sigma_T)$: $$ \Vert\mathfrak{T}(\phi, \psi)\Vert_{H^1(C_T)} \lesssim \Vert(\phi, \psi)\Vert_{H^1(\Sigma_T)\times L^2(\Sigma_T)} $$ and $$ \Vert(\phi, \psi)\Vert_{H^1(\Sigma_T)\times L^2(\Sigma_T)} \lesssim \Vert\mathfrak{T}(\phi, \psi)\Vert_{H^1(C_T)}. $$ \end{theorem}
The remark of Baez, Segal and Zhou \cite[Proof of Theorems 13 and 16]{MR1073286} is then the following:
\begin{lemma} Let $\delta_0$ be an initial data set in $H^1(C_T)$. Let $H$ be a function defined over the foliation $\{\Sigma_t\}$, lying in $C^1([0, T], L^2(\Sigma_t))\cap C^0([0, T], H^1(\Sigma_t))$. Then the corresponding solution $\delta$ can be extended to the past of $\Sigma_T$ by solving the Cauchy problem on $\Sigma_T$ with initial data $\mathfrak{T}^{-1}(\delta_0)$ for the equation: $$ \square \delta + \textbf{1}_{J^-(C_T)} H^2 \delta=0 $$ where the function $H$ is extended by 0 outside $J^-(C_T)$. \end{lemma}
For such an equation, the energy estimates are simple to obtain, since the leaves of the reference foliation used to establish them have non-vanishing volume.
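More precisely, since the leaves $\Sigma_t$ are compact with volume bounded away from zero, the Sobolev embedding $H^1(\Sigma_t)\hookrightarrow L^6(\Sigma_t)$ holds with a constant which may be chosen uniformly in $t$; combined with H\"older's inequality, this provides the only non-standard ingredient of the estimates below, namely
$$
\int_{\Sigma_t}H^4\delta^2\,\textrm{d}\mu_{\Sigma_t}
\leq \Vert H\Vert^4_{L^6(\Sigma_t)}\,\Vert\delta\Vert^2_{L^6(\Sigma_t)}
\lesssim E_H(\Sigma_t)^2\,E_\delta(\Sigma_t).
$$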
For the sake of consistency, these estimates are nonetheless proved here: \begin{proposition}\label{charest} The following energy estimates hold for $\delta$: there exists an increasing function $C: \mathbb{R}^+\rightarrow \mathbb{R}^+$ such that: $$ E_\delta(C_T)\leq C (\sup_{[0,T]} E^2_H(\Sigma_t)) E_\delta(\Sigma_T) $$ and $$ E_\delta(\Sigma_T) \leq C \sup_{[0,T]} E^2_H(\Sigma_t))E_\delta(C_T), $$ where: \begin{itemize} \item the energy on $C_T$ is taken to be the squared $H^1$-norm of the function on $C_T$: $$ E_\delta(C_T) = \Vert \delta\Vert^2_{H^1(C_T)} $$ \item the energy on the time slice $\Sigma_T$ is taken to be the standard energy: $$ E_\delta(\Sigma_T) = \Vert \delta\Vert^2_{H^1(\Sigma_T)}+ \Vert T^a\nabla_a \delta\Vert^2_{L^2(\Sigma_T)}. $$ \end{itemize} \end{proposition} \begin{proof} Since the techniques under considerations are standard, the proof is only sketched. Consider the stress energy tensor: $$ T_{ab}=\nabla_a \delta \nabla_b \delta +g_{ab}\left(-\frac12 \nabla_c\delta\nabla^c\delta + \frac{\delta^2}{2}\right) $$ whose error term is given by: $$ \nabla^a(T^bT_{ab}) = \nabla^{(a}T^{b)}T_{ab} - T^a\nabla_a\delta H^2 \delta. $$ Applying Stokes theorem between $\Sigma_T$ and $\Sigma_s$, one gets immediately the following inequality: $$ \left|E_{\delta}(\Sigma_T) - E_\delta(\Sigma_s)\right| \lesssim \int_s^TE_{\delta}(\Sigma_\tau) \textrm{d} \tau + \int_{s}^T\int_{\Sigma_\tau} H^4\delta^2 \textrm{d}\mu_{\Sigma_\tau}\textrm{d} \tau, $$ One deals with the non linear term using Sobolev embeddings over the compact foliation $\Sigma_\tau$, whose volume of leaves does not go to 0: \begin{gather*} \int_{s}^T\int_{\Sigma_\tau} H^4\delta^2 \textrm{d}\mu_{\Sigma_\tau}\textrm{d} \tau \lesssim \int_{s}^T\left(\int_{\Sigma_\tau} H^6 \textrm{d}\mu_{\Sigma_\tau} \right)^{2/3}\left(\int_{\Sigma_\tau} \delta^6\textrm{d}\mu_{\Sigma_\tau}\right)^{1/3}\textrm{d} \tau\\ \lesssim\sup_{\tau\in[0,T]}E^2_H(\Sigma_\tau)\int_{s}^TE_{\delta}(\Sigma_\tau) \textrm{d} \tau \end{gather*} One finally gets $$ \left|E_{\delta}(\Sigma_t) - E_\delta(\Sigma_s)\right| \lesssim \left(\sup_{\tau\in[0,T]} E^2_H(\Sigma_\tau)+1\right) \int_{s}^TE_{\delta}(\Sigma_\tau) \textrm{d} \tau $$ and using Gr\"onwall's inequality closes the estimates. \end{proof} Let now be $\theta$ and $\xi$ be two functions in $H^1(C_T)$ and consider the two characteristic Cauchy problems: $$\left\{ \begin{array}{l} \square u +\frac16 \text{scal}_g u = u^3\\ u|_{C_T}=\theta \end{array}\right. \text{ and } \left\{ \begin{array}{l} \square v +\frac16 \text{scal}_g v = v^3\\ v|_{C_T}=\xi \end{array}\right. $$ The function $\delta$ defined as being the difference between $u$ and $v$ satisfies the wave equation $$ \square \delta + \frac16 \text{scal}_g \delta = H^2\delta $$ where $$ H^2 = \left(u^2+v^2+uv \right) = \left(u+\frac{v}{2}\right)^2+\frac34v^2. 
$$ A direct consequence of Proposition \ref{charest} are the following energy estimates:
\begin{proposition}\label{charest2} There exists an increasing function $C$ such that the following inequalities hold: \begin{gather*} \Vert u-v \Vert^2_{H^1(\Sigma_T)} +\Vert T^a\nabla_a(u-v)\Vert^2_{L^2(\Sigma_T)} \leq C\left(\Vert u\Vert^2_{H^1(C_T)}+\Vert v\Vert^2_{H^1(C_T)} \right)\cdot \Vert\theta-\xi\Vert^2_{H^1(C_T)}\\ \Vert\theta-\xi\Vert^2_{H^1(C_T)}\leq C\left(\Vert u\Vert^2_{H^1(C_T)}+\Vert v\Vert^2_{H^1(C_T)}\right) \cdot \left(\Vert u-v \Vert^2_{H^1(\Sigma_T)} +\Vert T^a\nabla_a(u-v)\Vert^2_{L^2(\Sigma_T)}\right) \end{gather*} \end{proposition}
\begin{proof} The proof of these energy inequalities is a direct consequence of Proposition \ref{charest}. It suffices to notice that, for $H^2=u^2 +uv + v^2$, using the triangle inequality, \begin{eqnarray*} \sup_{[T,0]} E_H(\Sigma_t)^2 \lesssim \left(\sup_{[T,0]} E_u(\Sigma_t)^2 +\sup_{[T,0]} E_v(\Sigma_t)^2\right). \end{eqnarray*} Using a priori estimates such as the one proved in \cite[Proposition 6.2]{Joudioux:2012eo}, the latter terms can be bounded by either $$ E_u(\Sigma_T)^2+ E_v(\Sigma_T)^2 \text{ or } E_u(C_T)^2+ E_v(C_T)^2. $$ \end{proof}
Finally, an important consequence of the energy estimates of Proposition \ref{charest2} and of Theorem \ref{thm:existence} is the following proposition:
\begin{proposition}\label{uniqueness} The characteristic Cauchy problem: $$\left\{ \begin{array}{l} \square u +\frac16 \textrm{Scal}_g u = u^3\\ u|_{C_T}=\theta \in H^1(C_T) \end{array}\right. $$ admits at most one local solution up to time $T$ in $C^0([T, 0],H^1(\Sigma_t))\cap C^1([T,0], L^2(\Sigma_t))$. \end{proposition}
\section{Solving the characteristic Cauchy problem in the neighbourhood of $i^0$}\label{sec:neighbourhoodi0} \subsection{Preliminary result} The purpose of this section is to explain how the characteristic Cauchy problem can be solved in the neighbourhood of the spacelike infinity $i^0$ of the asymptotically simple manifold. The neighbourhood of $i^0$ is assumed to be isometric to the Schwarzschild space-time to agree with the work of Corvino-Schoen and Chrusciel-Delay. The Schwarzschild metric reads, in the standard spherical coordinates, $$ \hat{g}_S = \left(1-\frac{2m}{r}\right) \textrm{d} t^2- \left(1-\frac{2m}{r}\right)^{-1} \textrm{d} r^2 -r^2\textrm{d}\omega_{\mathbb{S}^2}. $$ Performing the change of coordinates $$ r^\ast = r+ 2m \log \left(r-2m\right), \quad R = \frac{1}{r} \quad\text{ and }\quad u = t-r^\ast, $$ the metric takes the form $$ \hat{g}_S =(1-2mR)\textrm{d} u^2 -\frac{2}{R^2} \textrm{d} u \textrm{d} R -\frac{1}{R^2} \textrm{d} \omega_{\mathbb{S}^2}. $$ A conformal rescaling with the conformal factor $\Omega =R$ is finally performed to define the unphysical metric $g_S=\Omega^2\hat{g}_S$: $$ g_S =R^2(1-2mR)\textrm{d} u^2 - 2 \textrm{d} u \textrm{d} R -\textrm{d} \omega_{\mathbb{S}^2}. $$ Consider the domain $\Omega^+_{u_0} = \{u\leq u_0\}$, for some $u_0$ to be chosen later. It has been proved \cite{Joudioux:2012eo, mn04} that the following lemma holds:
\begin{lemma}\label{schwarzestimates} Let $\epsilon>0$. There exists $u_0<0$, $|u_0|$ large enough, such that the following decay estimates in the coordinates $(u,r,\theta, \psi)$ hold: \begin{gather*} r<r^\ast<(1+\epsilon)r, \quad 1<Rr^\ast<1+\epsilon, \quad 0<R|u|<1+\epsilon,\\ 1-\epsilon<1-2mR<1, \quad 0<s=\frac{|u|}{r^\ast}<1. \end{gather*} Furthermore, the vector field $$ T^a = u^2\partial_u-2(1+uR)\partial_R $$ is uniformly timelike in the region $\Omega^+_{u_0}$.
\end{lemma}
The parameter $\epsilon$ will be chosen later when we perform the energy estimates. We define, in $\Omega^+_{u_0}=\{t>0, u<u_0\}$, the following hypersurfaces, for $u_0$ given in $\mathbb{R}$: \begin{itemize} \item $S_{u_0}=\{u=u_0\}$, a null hypersurface transverse to ${\mathscr I}^+$; \item $\Sigma_{0}^{u_0<}=\Sigma_0\cap\{u<u_0\}$, the part of the initial data surface $\Sigma_0$ in the past of $S_{u_0}$; \item ${\mathscr I}_{u_0}^+=\Omega^+_{u_0}\cap {\mathscr I}^+$, the part of ${\mathscr I}^+$ beyond $S_{u_0}$; \item $\mathcal{H}_s=\Omega^+_{u_0}\cap\{u=-sr^\ast\}$, for $s$ in $[0,1]$, a foliation of $\Omega^+_{u_0}$ by spacelike hypersurfaces accumulating on ${\mathscr I}^+$. \end{itemize} The volume form associated with $ g$ in the coordinates $(R,u,\omega_{\mathbb{S}^2})$ is then: \begin{equation}\label{volumeformcoor} \mu[ g]= \textrm{d} u\wedge\textrm{d} R \wedge \textrm{d}^2\omega_{\mathbb{S}^2}. \end{equation} If $\phi$ is a function defined on $\Omega^+_{u_0}$, the energies of $\phi$ on the hypersurfaces introduced above are defined, in the domain $\Omega_{u_0}^+$, via the stress-energy tensor contracted with the timelike vector field of Lemma \ref{schwarzestimates}; they satisfy the following estimates:
\begin{proposition}\label{energyequivalenceschwarzschild} There exists $u_0$ such that the following energy estimates hold on $\mathcal{H}_s$ in $\Omega^+_{u_0}$, for all $s$ in $[0,1]$: $$ E_{\phi}(\mathcal{H}_s) = \int_{\mathcal{H}_s}i^\star_{\mathcal{H}_s}\left(\star \hat T^a T_{ab}\right)\approx \int_{\mathcal{H}_s}\left(u^2(\partial_u\phi)^2+\frac{R}{|u|}(\partial_R\phi)^2+|\nabla_{\mathbb{S}^2}\phi|^2+\frac{\phi^2}{2}+\frac{\phi^4}{4}\right)\textrm{d} u \wedge \textrm{d} \omega_{\mathbb{S}^2}. $$ Furthermore, if one introduces the parameter \begin{equation}\label{parametrization} \tau: \begin{array}{ccc} [0,1]&\longrightarrow& [0,2]\\ s&\longmapsto& -2(\sqrt{s}-1), \end{array} \end{equation} the following energy estimates hold: \begin{itemize} \item Energy decay: $$ E_{\phi}(\mathcal{H}_s) \lesssim E_{\phi}({\Sigma_0^{u_0<}}) $$ \item A priori estimates: $$ E_{\phi}({\mathscr I}^+_{u_0})+\int_{{\mathscr I}^+_{u_0}}\phi^4\,\textrm{d} \mu_{{\mathscr I}^+_{u_0}}+E_{\phi}(S_{u_0})+ \int_{S_{u_0}}\phi^4\,\textrm{d} \mu_{S_{u_0}} \approx E_{\phi}({\Sigma_0^{u_0<}}) + \int_{{\Sigma_0^{u_0<}}} \phi^4 \,\textrm{d}\mu_{{\Sigma_0^{u_0<}}} $$ \end{itemize} \end{proposition}
\subsection{Local existence for the characteristic Cauchy problem on ${\mathscr I}^+$} The purpose of this section is to prove local existence and uniqueness for the Cauchy problem: \begin{equation}\label{eq:carschwarzschild} \begin{array}{l} \square \phi + \frac16\text{Scal}_{ g} \phi = \phi^3 \\ \phi|_{{\mathscr I}^+_{u_0}} = \theta^+ \in H^1({\mathscr I}^+_{u_0}) \text{ and } \phi|_{S_{u_0}} = \theta^0 \in H^{1}(S_{u_0}) \end{array} \end{equation} where $H^{1}(S_{u_0})$ is defined as being the completion of the space of traces of smooth functions on $S_{u_0}$ for the norm $$ \int_{S_{u_0}} i^\star_{S_{u_0}}\left(\star \hat T^a T_{\text{lin}\,ab}\right), $$ $T_{\text{lin}\,ab}$ being the stress-energy tensor associated with the linear wave equation. In a previous work \cite{Joudioux:2012eo}, the following results have been proved: \begin{itemize} \item using the fact that the leaves of the foliation $\mathcal{H}_s$, endowed with their induced metrics, are uniformly equivalent to a cylinder, it is possible to establish energy estimates for the difference of solutions of the non linear equation; \item it is possible to establish an existence result for solutions of the nonlinear equation for small data, using either a Picard iteration (as in \cite{Joudioux:2010tn}) or H\"ormander's slowing-down process.
\end{itemize}
\begin{proposition}\label{prop:schwarz1} Let $\phi$ and $\psi$ be two solutions of the characteristic Cauchy problem \eqref{eq:carschwarzschild} (possibly with different data) and denote by $\delta = \phi-\psi$ their difference. One denotes the energy of $\delta$ by $$ E_{\delta}(\mathcal{H}_s) = \int_{\mathcal{H}_s}i^\star_{\mathcal{H}_s}\left(\star \hat T^a T_{\text{lin}\,ab}\right)\approx \int_{\mathcal{H}_s}\left(u^2(\partial_u\delta)^2+\frac{R}{|u|}(\partial_R\delta)^2+|\nabla_{\mathbb{S}^2}\delta|^2+\frac{\delta^2}{2} \right)\textrm{d} u \wedge \textrm{d} \omega_{\mathbb{S}^2} $$ where $T_{\text{lin}\,ab}$ is the stress energy tensor for $\delta$ associated with the linear wave equation. Then $\delta$ satisfies the following energy estimate on the foliation $\mathcal{H}_s$: $$ E_\delta(\mathcal{H}_s) \lesssim \left(1+\sup_{\sigma\in [0,s]} \left( \Vert\phi\Vert^4_{H^1(\mathcal{H}_\sigma)} + \Vert\psi\Vert^4_{H^1(\mathcal{H}_\sigma)}\right)\right) \left(E_\delta({\mathscr I}^+_{u_0})+E_\delta(S_{u_0})\right). $$ In particular, the Cauchy problem \eqref{eq:carschwarzschild} admits at most one solution in $C^0([0,\epsilon], H^1(\mathcal{H}_s))\cap C^1([0,\epsilon], L^2(\mathcal{H}_s))$ and the energy $$ s\mapsto E_\delta(\mathcal{H}_s) $$ is continuous. \end{proposition}
\begin{proof} Let $\phi$ and $\psi$ be two global solutions of \eqref{eq:carschwarzschild}. Their difference $\delta =\phi -\psi$ satisfies the equation: $$ \square \delta + \frac{1}{6}\text{scal}_{g} \delta = \left(\phi^2+\phi\psi+\psi^2\right)\delta. $$ Using the Stokes theorem between ${\mathscr I}^+_{u_0}$, $S_{u_0}$ and $\mathcal{H}_s$, one gets, using Proposition \ref{energyequivalenceschwarzschild}: \begin{gather*} E_{\delta}(\mathcal{H}_s) \lesssim \int_{0}^{s} E_\delta(\mathcal{H}_\sigma) \textrm{d} \sigma + \int_{0}^{s}\int_{\mathcal{H_{\sigma}}} \vert \hat T^a\hat \nabla_{a} \delta \cdot \delta\vert + (\phi^2+\psi^2)\vert\hat T^a\hat \nabla_{a} \delta \cdot \delta \vert \textrm{d} \mu_{\mathcal{H}_{\sigma}} \textrm{d} \sigma\\ + E_{\delta}({\mathscr I}^+_{u_0}) + E_{\delta } (S_{u_0}) . \end{gather*} As already said, Sobolev estimates can be performed uniformly on the foliation $(\mathcal{H}_s)$. As a consequence, using H\"older's inequality, one gets: $$ \int_{0}^{s}\int_{\mathcal{H_{\sigma}}}(\phi^2+\psi^2)\vert \hat T^a\hat \nabla_{a} \delta \cdot \delta \vert \textrm{d} \mu_{\mathcal{H}_{\sigma}}\textrm{d}\sigma\leq C\left(1+ \sup_{\sigma\in[0,\epsilon]} \left(E_{\phi}(\mathcal{H}_\sigma)^2+E_{\psi}(\mathcal{H}_\sigma)^2\right)\right)\int_{0}^{s} E_\delta(\mathcal{H}_\sigma) \textrm{d} \sigma $$ where $\hat T^a$ is the timelike vector field $u^2\partial_u-2(1+uR)\partial_R$ introduced in Lemma \ref{schwarzestimates}. The constant depends only on the $L^\infty$-bound of the scalar curvature. Finally, using Gr\"onwall's lemma, one gets that $$ E_{\delta}(\mathcal{H}_s) \lesssim e^{C\epsilon} \left(1+\sup_{\sigma\in[0,\epsilon]} \left(E_{\phi}(\mathcal{H}_\sigma)^2+E_{\psi}(\mathcal{H}_\sigma)^2\right)\right) \left(E_{\delta}({\mathscr I}^+_{u_0}) + E_\delta (S_{u_0})\right). $$ As a consequence, the mapping \[ \begin{array}{ccc} B(0,r)\subset H^1({\mathscr I}^+_{u_0}) \times H^1(S_{u_0}) & \longrightarrow & L^\infty ([0,\epsilon], H^1(\mathcal{H}_s))\\ (\theta^+, \theta ^0) & \longmapsto & \phi \end{array} \] is continuous. Furthermore, the Cauchy problem \eqref{eq:carschwarzschild} admits at most one solution in $C^0([0,\epsilon], H^1(\mathcal{H}_s))\cap C^1([0,\epsilon], L^2(\mathcal{H}_s))$. \end{proof}
\begin{proposition}\label{prop:existenceschwarz} Let $\varepsilon$ be a positive real number.
Then, for $\varepsilon$ small enough, the characteristic Cauchy problem \eqref{eq:carschwarzschild} admits a unique solution in the neighbourhood of ${\mathscr I}^+_{u_0}$ $$ \bigcup_{s\in[0, \varepsilon]} \mathcal{H}_{s} = \Omega^+_{u_0}\cap\left\{ u \geq -\varepsilon\, r^\ast \right\}. $$ \end{proposition}
\begin{proof} The proof of the proposition is based on a Picard iteration defined as follows: let $\phi_0$ be a solution of the linear Cauchy problem: \begin{equation}\label{initialstep} \begin{array}{l} \square \phi_0 +\frac16\text{Scal}_{ g} \phi_0 = 0 \\ \phi_0|_{{\mathscr I}^+_{u_0}} = \theta \in H^1({\mathscr I}^+_{u_0}), \end{array} \end{equation} and define the sequence as follows, for $n>0$: \begin{equation}\label{recursion} \begin{array}{l} \square \phi_n +\frac16\text{Scal}_{ g} \phi_n = \phi^3_{n-1} \\ \phi_n|_{{\mathscr I}^+_{u_0}} = \theta \in H^1({\mathscr I}^+_{u_0}). \end{array} \end{equation} H\"ormander's theorem \cite{MR1073287} ensures that these two Cauchy problems admit global solutions in $L^\infty ([0,1], H^1(\mathcal{H}_s))$. Using the stress energy tensor associated with the linear wave equation, one obtains the following energy estimates: \begin{equation} E_{\phi_n}(\mathcal{H}_s) \leq C \int_{0}^s E_{\phi_n}(\mathcal{H}_\sigma) \textrm{d} \sigma + \int_{0}^s \int_{\mathcal{H}_\sigma}\phi_{n-1}^6 \,\textrm{d} \mu_{\mathcal{H}_\sigma}\textrm{d} \sigma, \end{equation} and, using Gr\"onwall's lemma and the Sobolev embedding on the foliation $\mathcal{H}_s$, one gets immediately that \begin{equation} \sup_{\sigma\in [0,s]}E_{\phi_n} (\mathcal{H}_\sigma) \leq C s E_{\theta}({\mathscr I}^+_{u_0})\left(\sup_{\sigma\in [0,s]} E_{\phi_{n-1}}(\mathcal{H}_{\sigma}) \right)^3. \end{equation} An immediate recursion gives that, for all $n>0$ and $s=\epsilon$, \begin{eqnarray*} \sup_{\sigma\in [0,\epsilon]}E_{\phi_n} (\mathcal{H}_\sigma) & \leq & \left( C\epsilon E_{\theta}({\mathscr I}^+_{u_0}) \left(\sup_{\sigma\in [0,\epsilon]} E_{\phi_{0}}(\mathcal{H}_{\sigma}) \right)^\frac12 \right)^{3^n}\\ &\leq&\left( C\epsilon \left( E_{\theta}({\mathscr I}^+_{u_0})\right)^{\frac12} \right)^{3^n}. \end{eqnarray*} As a consequence, for $\epsilon$ small enough, the sequence $(\phi_n)$ is a Cauchy sequence in the complete space $L^\infty\left([0,\epsilon], H^1(\mathcal{H}_s) \right)$ and, as a consequence, converges strongly towards a function $\phi$ in the same space. Passing to the limit in \eqref{recursion}, the function $\phi$ is a weak solution of the Cauchy problem \eqref{eq:carschwarzschild}. As a consequence, Proposition \ref{prop:schwarz1} proves that the solution actually belongs to $C^0\left([0,\epsilon], H^1(\mathcal{H}_s) \right)$. This concludes the proof of the existence of a solution to \eqref{eq:carschwarzschild} in $C^0\left([0,\epsilon], H^1(\mathcal{H}_s) \right)$. \end{proof}
\section{Global Cauchy problem on ${\mathscr I}^+$}\label{sec:globalcauchy} The existence of global solutions to the Cauchy problem is proved using a gluing process: the solution of the Cauchy problem with data in $H^1({\mathscr I}^+)$ is first constructed up to a spacelike hypersurface and then, considering the traces of the solution on two given hypersurfaces in the neighbourhood of $i^0$, the problem is solved up to the initial time slice $\Sigma_0$. The main issue arising in this process is that the constants arising in the energy estimates depend on the $L^\infty$-bounds of the metric and its inverse on the neighbourhood of $i^0$ under consideration. Since the compactification we are working with has the particularity of having an asymptotic end at $i^0$, these constants are not bounded on the whole future of $\Sigma_0$.
This problem is avoided as follows: \begin{itemize} \item Let $u_0$ be in $\mathbb{R}$ such as in Proposition \ref{energyequivalenceschwarzschild}. \item Let $\Sigma$ be a given spacelike hypersurface such that: \begin{itemize} \item $\Sigma$ is transverse to ${\mathscr I}^+$; \item $\Sigma$ is in the past of $S_{u_0}\cap J^+(\{t = 0\})$ and in the future of $\Sigma_{0}$; in particular, $\Sigma$ coincides with $\Sigma_{0}$ far from $i^0$. \end{itemize} \end{itemize} These choices of $u_0$ and $\Sigma$ are made once and for all.
\begin{proposition}\label{prop:global} The characteristic Cauchy problem $$ \begin{array}{l} \square \phi + \frac{1}{6}\text{Scal}_{{g}} \phi = \phi ^3 \\ \phi|_{{\mathscr I}^+} = \theta \in H^1({\mathscr I}^+) \end{array} $$ admits a unique global solution in \({C^0([0,1], H^1(\tilde \Sigma_s)) \cap C^1([0,1],L^2(\tilde \Sigma_s))}\) where $(\tilde \Sigma_s)_{s>0}$ is a smooth spacelike foliation extending the definition of the foliation $(\mathcal{H}_s)$ in the neighbourhood of $i^0$. Furthermore, if $\tilde \phi$ is another solution with initial data $\tilde \theta$, the following energy estimate holds: there exists an increasing function $f$ such that $$ E_{\phi-\tilde \phi}(\Sigma_0) \leq C f\left( \sup_{\sigma \in [0,1]} \left(\Vert\phi\Vert^2_{H^1(\tilde\Sigma_\sigma)} + \Vert\tilde \phi\Vert^2_{H^1(\tilde\Sigma_\sigma)}\right) \right) \Vert\theta -\tilde \theta \Vert^2_{H^1({\mathscr I}^+)}. $$ \end{proposition}
\begin{proof} The proof relies on the consecutive use of Theorem \ref{thm:existence}, Proposition \ref{uniqueness} and Proposition \ref{prop:existenceschwarz}. Let $\theta$ be in $H^1({\mathscr I}^+)$. Using Theorem \ref{thm:existence}, one proves that the Cauchy problem admits a solution $\phi$ up to $\Sigma$, as defined earlier. Then, using Proposition \ref{prop:existenceschwarz}, there exists an $\epsilon>0$ such that the Cauchy problem given by \begin{equation}\label{intermediate1} \begin{array}{l} \square \phi + \frac16\text{Scal}_{ g} \phi = \phi^3 \\ \phi|_{{\mathscr I}^+_{u_0}} = \theta \in H^1({\mathscr I}^+_{u_0}) \text{ and } \phi|_{S_{u_0}} = \phi_{S_{u_0}} \in H^{1}(S_{u_0}) \end{array} \end{equation} admits a solution in $$ \bigcup_{s\in[0, \epsilon]} \mathcal{H}_{s} = \Omega^+_{u_0}\cap\left\{ u \geq -\epsilon\, r^\ast \right\}. $$ Consider the time slice $\mathcal{H}_\epsilon$. This time slice can be extended, within the unphysical space-time, into a spacelike Cauchy hypersurface denoted by $\mathcal{H}$. The timelike vector field $T^a$ is extended as a timelike vector field along $\mathcal{H}$. One now considers the functions $(\psi_0,\psi_1)$ defined piecewise by: \begin{equation*} \left\{\begin{array}{lcl} \psi_0 |_{\mathcal{H_\epsilon}} &=& \tilde{\phi} \in H^1(\mathcal{H}_\epsilon)\\ \psi_0 |_{\mathcal{H}\cap J^+(S_{u_0})} &=& \tilde{\phi}\in H^1({\mathcal{H}\cap J^+(S_{u_0})}) \\ \end{array} \right. \end{equation*} and \begin{equation*} \left\{\begin{array}{lcl} \psi_1 |_{\mathcal{H_\epsilon}} &=& T^a \nabla_a \tilde{\phi}\\ \psi_1 |_{\mathcal{H}\cap J^+(S_{u_0})} &=& T^a \nabla_a \tilde{\phi}\in L^2({\mathcal{H}\cap J^+(S_{u_0})}) \\ \end{array} \right. \end{equation*} where $\tilde{\phi}$ denotes, on $\mathcal{H}_\epsilon$, the solution of \eqref{intermediate1} and, on $\mathcal{H}\cap J^+(S_{u_0})$, the solution obtained from Theorem \ref{thm:existence}. By construction, since $\mathcal{H}$ is a spacelike Cauchy surface of the unphysical space-time, the result of Cagnac-Choquet-Bruhat \cite{MR789558} can be applied immediately to prove the existence of a unique solution to the Cauchy problem with data $(\psi_0, \psi_1)$ on $\mathcal{H}$ up to the Cauchy surface $\Sigma_0$. Furthermore, this solution belongs to \({C^0([0,1], H^1(\tilde \Sigma_s)) \cap C^1([0,1],L^2(\tilde \Sigma_s))}\).
Consequently, there exists a global solution of the Cauchy problem: $$ \begin{array}{l} \square \phi + \frac{1}{6}\text{Scal}_{{g}} \phi = \phi ^3 \\ \phi|_{{\mathscr I}^+} = \theta \in H^1({\mathscr I}^+), \end{array} $$ obtained by gluing the solutions of the Cauchy problems obtained in Theorem \ref{thm:existence} and Proposition \ref{prop:existenceschwarz}. \end{proof}
\section{Existence and regularity of the scattering operator}\label{sec:scattering} The purpose of this section is to prove the existence and regularity of a conformal scattering operator for the non-linear wave equation $$ \hat\square \hat u = \hat u^3. $$ In the context of asymptotically simple space-times with regular $i^\pm$, such as the ones considered in \cite{Joudioux:2012eo,mn04}, it has been proved by Mason and Nicolas, for the scalar wave equation, that the trace operators $$ \mathfrak{T}^\pm: (u,T^a\nabla_a u)|_{t=0} \in H^1(\Sigma_0)\times L^2(\Sigma_0) \longmapsto u|_{{\mathscr I}^\pm} \in H^1({\mathscr I}^\pm) $$ can be obtained from the inverse wave operators $\tilde\Omega^\pm$ as follows: let $F_\pm$ be the null geodesic flows identifying the hypersurface $\Sigma_0$ with ${\mathscr I}^\pm$; then, $\mathfrak{T}^\pm$ are given by $$ \mathfrak{T}^\pm = F^\star_\pm \tilde \Omega^\pm $$ where $F^\star_\pm$ is the pullback by the null geodesic flows $F_\pm$. The same result will hold in the context of the nonlinear equation. The conformal scattering result which is stated here should consequently be considered as a standard scattering result for a non-stationary metric for a non-linear wave equation. The existence of scattering operators was obtained on the flat background for sub-critical wave equations in \cite{Tsutaya:ep,Tsutaya:1992vx,Karageorgis:2006wg,hi03,Hidano:1998da}. One considers the operators $$ \mathfrak{T}^\pm: H^1(\Sigma_0)\times L^2(\Sigma_0) \rightarrow H^1({\mathscr I}^\pm) $$ defined by $$ \mathfrak{T}^\pm(\phi, \psi) = u|_{{\mathscr I}^\pm} $$ where $u$ is the unique solution of the Cauchy problem on the conformally compactified space-time $$ \left\{ \begin{array}{c} \square u +\frac16 \text{Scal}_{ g}u = u^3 \\ u|_{t=0} = \phi, \quad T^a\nabla_a u|_{t=0} = \psi. \end{array} \right. $$ One finally introduces the conformal scattering operator defined as $$ \mathfrak{S} = \mathfrak{T}^+ \circ \left(\mathfrak{T}^-\right)^{-1}. $$
\begin{theorem} \label{thm:mainthm} The operators $\mathfrak{T}^\pm : H^1(\Sigma_0)\times L^2(\Sigma_0) \rightarrow H^1({\mathscr I}^\pm)$ are well-defined, invertible and locally bi-Lipschitz, that is to say that $\mathfrak{T}^\pm$ and $(\mathfrak{T}^\pm)^{-1}$ are Lipschitz on any ball in $H^1(\Sigma_0)\times L^2(\Sigma_0) $ and $H^1({\mathscr I}^\pm)$ respectively. \end{theorem}
\begin{proof} The fact that the operators are well-defined and continuous is guaranteed by the existence of solutions to the Cauchy problem for the conformal wave equation, see Proposition \ref{hor3}. The existence of their inverses, and their continuity, are obtained thanks to the solution of the characteristic Cauchy problem and to the Lipschitz continuity guaranteed by the energy estimate of Proposition \ref{prop:global}. \end{proof}
The final product of this paper, the existence of a scattering operator, is achieved by composing the operator \((\mathfrak{T}^{-})^{-1}\) with \(\mathfrak{T}^{+}\):
\begin{corollary}\label{cor:existencescatt} There exists a locally bi-Lipschitz invertible operator \(\mathfrak{S} : H^{1}({\mathscr I}^-) \rightarrow H^1({\mathscr I}^+) \).
This operator generalises the notion of classical scattering operator in the situation when the metric is static. \end{corollary} \begin{proof} As mentioned before, the construction of \(\mathfrak{S} \) is a straightforward consequence of Theorem \ref{thm:mainthm}. The fact that the operator coincides with the classical notion of scattering operator is stated in \cite{mn04}. \end{proof} \appendix \section{A trace theorem}\label{app:trace} One of the critical point of the proof of the existence theorem \ref{hor3} is a trace theorem, which has already been used in \cite[p. 19, proof of Theorem 4.3]{MR2244222}. Most of the material presented here can be found in \cite[Section 4.3, p. 281 sqq.]{MR1395148} for the interpolation and in \cite[Theorems 2.3 and 3.1]{Lions:1972ww} for the continuity of trace operators. Finally, it is important to note that the essential ideas and proofs of this appendix are contained in \cite[Chapter 3, Section 8.2]{Lions:1972ww}. The purpose of this appendix is also to give further understanding, and details on the work of H\"ormander on the Cauchy problem \cite{MR1073287} and clarify some points contained in \cite{MR2244222}. Let $X$ and $Y$ be two Hilbert spaces such that \begin{itemize} \item $X$ is continuously embedded in $Y$; \item $X$ is dense in $Y$. \end{itemize} The interpolation spaces between $X$ and $Y$ are denoted by $$ [X,Y]_{\theta} \text{ for } \theta \in [0,1]. $$ Let $a,b$ be two elements in $\mathbb{R}\cup\{\pm \infty\}$ and $m$ an positive integer. One considers the following space: $$ W(a,b) =\left\{u |u \in L^2([a,b],X), \frac{\partial^m u}{\partial t^m} \in L^2([a,b],Y)\right\}. $$ The following trace theorem then holds, see \cite[theorems 2.3 and 3.1, chapter 1,]{Lions:1972ww}: \begin{theorem}[Lions-Magenes]\label{tracelm} Let $u$ be in $W(a,b)$. Then, for all $j$ in $\{0,\dots,m-1 \}$, the derivatives of $u$ satisfy: \begin{itemize} \item $\frac{\partial^j u}{\partial t^j}$ is in $L^2([a,b];[X,Y]_{\frac{j}{m}})$; \item $\frac{\partial^j u}{\partial t^j}$ is in $C^0_b([a,b];[X,Y]_{\frac{j+1/2}{m}})$. \end{itemize} \end{theorem} \begin{remark} The theory developed by Lions and Magenes applies to general Hilbert spaces, independently of the geometric context (in particular, if one works on a space with boundary or on a compact manifold). \end{remark} One applies this result to the following situation: let $M$ be a compact manifold and consider \begin{itemize} \item $X= H^1(M)$; \item $Y = H^{-1}(M)$. \end{itemize} The interpolation spaces between both are given by \cite[Section 4.3, Proposition 3.1]{MR1395148}: $$ [H^1(M),H^{-1}(M)]_\theta = H^{1-2\theta}(M) \text{ for } \theta \in [0,1]. $$ Let $a,b$ be in $\mathbb{R}$ and $m=2$. Applying Theorem \ref{tracelm} then gives: if u satisfies: $$ u\in L^2([a,b],H^1(M)) \text{ and }\frac{\partial^2 u}{\partial t^2} \in L^2([a,b],H^{-1}(M)) $$ then: \begin{itemize} \item $\frac{\partial u}{\partial t}$ is in $L^2([a,b],L^2(M))$ ($\theta = \frac12$); \item $u\in C^0([a,b],L^2(M))$; \item $\frac{\partial u}{\partial t}$ is in $C^0([a,b],H^{-\frac12}(M))$ ($\theta = \frac34$). 
\end{itemize}
\begin{remark} Obtaining that the derivative is actually in $C^0([a,b],H^{0}(M))$ would require $$ u\in L^2([a,b],H^{1}(M)) \text{ and }\frac{\partial^2 u}{\partial t^2} \in L^2([a,b],H^{-\frac13}(M)). $$ \end{remark}
To improve this result, one notices the following:
\begin{lemma} Let $I=[a,b]$ be a time interval, let $\sigma<s$ be two real numbers and consider a function $\phi$ such that $$ \phi \in L^\infty(I,H^s(M)) \cap C^0(I,H^\sigma (M)). $$ Then, $\phi$ belongs to $C^0(I,H^s (M)-w)$. \end{lemma}
\begin{remark} This is a particular case of \cite[Chapter 3, Lemma 8.1]{Lions:1972ww}. \end{remark}
\begin{proof} Let $\phi$ be as in the lemma and consider $w$ in $H^s(M)$. One considers the function $$ f(t) = <\phi(t),w>_{H^s}. $$ Let $t_0$ be in $I$. The purpose is to prove that $f$ is continuous in $t_0$. Let $(w_k)$ be a sequence of smooth functions converging towards $w$ in $H^s$. Let $\epsilon$ be a positive real number and consider: \begin{eqnarray*} f(t)-f(t_0) &=& <\phi(t)-\phi(t_0),w>_{H^s}\\ &=&<\phi(t)-\phi(t_0),w-w_k>_{H^s} +<\phi(t)-\phi(t_0),w_k>_{H^s}. \end{eqnarray*} Since, using the Cauchy-Schwarz inequality, $$ |<\phi(t)-\phi(t_0),w-w_k>_{H^s}|\leq \Vert w-w_k\Vert_{H^s}\Vert\phi(t)-\phi(t_0)\Vert_{H^s}, $$ and since $(w_k)$ converges towards $w$ in $H^s$, and since $\phi$ is in $L^\infty(I,H^s(M))$, there exists a $K$ such that: $$ \left|<\phi(t)-\phi(t_0),w-w_K>_{H^s}\right|\leq \frac{\epsilon}{2}. $$ One now considers the interpolation operators between $L^2(M)$ and $H^s(M)$ (for arbitrary $s$): $$ \mathfrak{F}_s : L^2(M) \longrightarrow H^s(M). $$ This is a family of unbounded, self-adjoint operators for which: $$ \Vert \mathfrak{F}^{-1}_s (\phi) \Vert_{L^2} \leq C \Vert\phi\Vert_{H^s}. $$ Since $w_K$ is smooth, it admits a pre-image by $\mathfrak{F}^{-1}_s\circ\mathfrak{F}_\sigma$. As a consequence, one has: \begin{eqnarray*} |<\phi(t)-\phi(t_0),w_K>_{H^s}|&=&|<\mathfrak{F}^{-1}_s\left(\phi(t)-\phi(t_0)\right),\mathfrak{F}^{-1}_s(w_K)>_{L^2}|\\ &=& |<\mathfrak{F}^{-1}_\sigma\circ\mathfrak{F}^{-1}_s\left(\phi(t)-\phi(t_0)\right),\mathfrak{F}_\sigma\circ \mathfrak{F}^{-1}_s(w_K)>_{L^2}|. \end{eqnarray*} The Cauchy-Schwarz inequality then implies: $$ | <\phi(t)-\phi(t_0),w_K>_{H^s}|\leq \Vert\phi(t)-\phi(t_0)\Vert_{H^\sigma}\Vert\mathfrak{F}^{-1}_\sigma(w_K)\Vert_{H^\sigma}. $$ Since $\phi$ is in $C^0(I,H^\sigma (M))$, there exists an open neighborhood $U$ of $t_0$ in $I$ such that, for all $t$ in $U\subset I$: $$ |<\phi(t)-\phi(t_0),w_K>_{H^s}|\leq \frac{\epsilon}{2}. $$ As a consequence, for all $t$ in $U$, $$ |f(t)-f(t_0)|\leq \epsilon, $$ that is to say that $f$ is continuous in $t_0$. \end{proof}
One now considers a model case fitting the context of the wave equation. Let $(I \times M, N^2 \textrm{d} t^2-g_t)$ be a Lorentzian manifold, $I$ being a compact interval. One defines the energy, as a function of $t$, of $\phi$ in $H^1(M)$ and of $\psi$ in $L^2(M)$, by: $$ E(t,\phi, \psi) = \Vert\phi\Vert^2_{H^1(\{t\}\times M)} + \Vert\psi\Vert^2_{L^2(\{t\}\times M)}. $$ The final lemma required to achieve the wanted regularity for the solution of the characteristic Cauchy problem for the wave equation is the following:
\begin{lemma} One assumes that, for all $(\phi, \psi)$ in $H^1(M)\times L^2(M)$, the function: $$ t\longmapsto E(t, \phi,\psi) $$ is continuously differentiable in $I$.
Let $\phi$ be a function such that: \begin{itemize} \item $\phi$ is in $C^0(I,H^1(M)-w)$; as a consequence, $\phi$ lies in $L^\infty(I,H^1(M))$; \item $\partial_t \phi$ is in $C^0(I,L^2(M)-w)$; as a consequence, $\partial_t\phi$ lies in $L^\infty(I,L^2(M))$. \end{itemize} The function: $$ t\longmapsto E(t, \phi(t),\partial_t\phi(t)) $$ is furthermore assumed to be continuous in $t$.\\ Then $\phi$ and $\partial_t \phi$ are in fact in $C^0(I,H^1(M))$ and $C^0(I,L^2(M))$. \end{lemma}
\begin{proof} The proof of this fact can be found in \cite[Chapter 3, Section 8.4, p. 279]{Lions:1972ww}. For the sake of self-consistency, the proof is quoted here, with some adaptations to our framework. Let $t$ be in $I$, consider a sequence $(t_n)_n$ of elements of $I$ converging towards $t$, and define the quantity: \begin{gather*} \xi_n = \Vert\phi(t)-\phi(t_n)\Vert^2_{H^1(M)} + \Vert\partial_t\phi(t)-\partial_t\phi(t_n)\Vert^2_{L^2(M)}\\ =E(t_n, \phi(t_n), \partial_t \phi(t_n))+E(t, \phi(t), \partial_t \phi(t))\\ -2 <\phi(t),\phi(t_n)>_{H^1(M)} -2 <\partial_t\phi(t),\partial_t \phi(t_n)>_{L^2(M)} \end{gather*} Since $\phi$ is in $C^0(I,H^1(M)-w)$ and $\partial_t \phi$ is in $C^0(I,L^2(M)-w)$, the last two terms converge towards $-2E(t, \phi(t),\partial_t\phi(t))$. Since the energy is assumed to be continuous, the first two terms converge towards $2E(t, \phi(t),\partial_t\phi(t))$. The sequence $(\xi_n)$ then converges towards $0$ as $n$ goes to infinity. As a consequence, $\phi$ and $\partial_t\phi$ are in $C^0(I,H^1(M))$ and $C^0(I,L^2(M))$, respectively. \end{proof}
\printbibliography \end{sloppypar} \end{document}
\begin{document} \thanks{The authors were supported by ISF grants No. 555/21.} \begin{abstract} We show that an infinite group $G$ definable in a $1$-h-minimal field $K$ admits a strictly $K$-differentiable structure with respect to which $G$ is a (weak) Lie group, and show that definable local subgroups sharing the same Lie algebra have the same germ at the identity. We conclude that infinite fields definable in $K$ are definably isomorphic to finite extensions of $K$ and that $1$-dimensional groups definable in $K$ are finite-by-abelian-by-finite. Along the way we develop the basic theory of definable weak $K$-manifolds and definable morphisms between them. \end{abstract} \maketitle
\section{Introduction} Various Henselian valued fields are amenable to model theoretic study. Those include the $p$-adic numbers (more generally, $p$-adically closed fields), and (non-trivially) valued real closed and algebraically closed fields, as well as various expansions thereof (e.g. by restricted analytic functions). Recently, a new axiomatic framework for tame valued fields (of characteristic $0$) was introduced. This framework, known as Hensel-minimality\footnote{In \cite{hensel-min} and \cite{hensel-minII} various notions of Hensel-minimality -- $n$-h-minimality -- for $n\in {\mathbb{N}}\cup \{\omega\}$, were introduced. For the sake of clarity of exposition, we will only discuss $1$-h-minimality.}, was suggested in \cite{hensel-min} and \cite{hensel-minII} as a valued field analogue of o-minimality. The notion of $1$-h-minimality is both broad and powerful. Known examples include, among others, all pure Henselian valued fields of characteristic $0$ as well as their expansions by restricted analytic functions. Known tameness consequences of $1$-h-minimality include a well-behaved dimension theory, and strong regularity of definable functions (e.g., a generic Taylor approximation theorem for definable functions). In the present paper, we initiate a study of groups definable in $1$-h-minimal fields. Using the above mentioned tameness and regularity conditions provided by $1$-h-minimality, and inspired by similar studies in the o-minimal setting (initiated in \cite{Pi5}) and in $p$-adically closed fields (\cite{PilQp}), our first theorem (Proposition \ref{definable-is-lie}, stated here in a slightly weaker form) is:
\begin{introtheorem}\label{T: main intro} Let $K$ be a $1$-h-minimal field, $G$ an infinite group definable in $K$. Then $G$ admits a definable weak ${\mathcal C}^k$ (any $k$) manifold structure with respect to which $G$ has the structure of a strictly differentiable weak ${\mathcal C}^k$-Lie group. I.e., the forgetful functor from definable strictly differentiable weak Lie groups to definable groups is an equivalence of categories. If algebraic closure coincides with definable closure in $K$, then a definable weak Lie group is a definable Lie group. \end{introtheorem}
Above by a definable weak Lie group (over $K$) we mean a Lie group whose underlying $K$-manifold structure may not have a definable (so, in particular, finite) atlas but can be covered by (the domains of) finitely many compatible \'etale maps.
We do not know whether this is a necessary requirement for the correctness of the statement, or an artifact of the proof: we follow Pillay's argument in the o-minimal and $p$-adic contexts (\cite{Pi5}, \cite{PilQp}), but the fact that, in the present setting, finite covers are not generically trivial requires that we work with definable weak manifolds, in the above sense. To pursue this argument, we have to extend the study of definable functions beyond what was done in \cite{hensel-min} (and its sequel). Specifically, instead of working with continuously differentiable functions (as is the case in the o-minimal setting) we are working with strictly differentiable functions, and for those we prove an inverse function theorem, allowing us to deduce an implicit function theorem for definable functions as well as other standard consequences of these theorems. We do not know whether strict differentiability follows in the $1$-h-minimal context from continuous differentiability (as is the case in real analysis), but it can be easily inferred from a multi-variable Taylor approximation theorem for definable functions available in this context. \\ Having established that definable groups are Lie, our next theorem establishes the natural Lie correspondence (asserting that the germ of a definable group morphism at the identity is determined by its derivative at that point). For applications it is convenient to state the result for local groups (Corollary \ref{dimension-of-kernel}):
\begin{introtheorem} Let $K$ be a $1$-h-minimal field, $U$ and $V$ definable strictly differentiable local Lie groups and $g,f:U\to V$ definable strictly differentiable local Lie group morphisms. If we denote $Z=\{x\in U: g(x)=f(x)\}$, then $\dim_e Z=\dim(\ker(f'(e)-g'(e)))$. \end{introtheorem}
We then prove two applications. First, we show -- adapting techniques from the o-minimal context -- that every infinite field definable in a $1$-h-minimal field, $K$, is definably isomorphic to a finite extension of $K$, Proposition \ref{field}. This generalizes an analogous result for real closed valued fields (\cite{BaysPet}) and $p$-adically closed fields (\cite{PilQp}). It will be interesting to know whether these results can be extended to \emph{interpretable} fields (in the spirit of \cite{HaHaPeVF} or \cite[\S 6]{HrRid}) under suitable additional assumptions on the RV-sort. Our next application is a proof that definable $1$-dimensional groups are finite-by-abelian-by-finite, Corollary \ref{one-dimensional}. This generalizes analogous results in the o-minimal context (\cite{Pi5}) and in $p$-adically closed fields (\cite{PilQp}), and combines with \cite{AcostaACVF} to give a complete classification of $1$-dimensional groups definable in ACVF$_0$. \\ The present paper is a first step toward the study of groups definable in $1$-h-minimal fields. It seems that more standard results on Lie groups over complete local fields can be extended to this context. Thus, for example, it can be shown that any definable local group contains a definable open subgroup. As the proof is long and involves new techniques we postpone it to a subsequent paper.
\subsection{Structure of the paper} In Section \ref{preliminaries} we review the basics of $1$-h-minimality and dimension theory in geometric structures.
In Section \ref{taylor-section} we prove a multi-variable Taylor approximation theorem for $1$-h-minimal fields, and formulate some strong regularity conditions (implied, generically, by Taylor's theorem) that will be needed in later parts of the paper. These results are, probably, known to the experts, and we include them mostly for the sake of completeness and clarity of exposition (as some of them do not seem to exist in writing). In Section \ref{smooth-map} we prove the inverse function theorem and related theorems on the local structure of immersions, submersions and constant rank functions. Though some of the proofs are similar to those of analogous statements in real analysis (and, more generally, in the o-minimal context) this is not true throughout. Specifically, $1$-h-minimality is invoked in a crucial way in the proof that a function with vanishing derivative is locally constant, which -- in turn -- is used in our proof of the Lie correspondence for definable groups. Using the results of the first sections, our study of definable groups starts in Section \ref{groups-section}. We first show that definable groups can be endowed with an, essentially unique, strictly differentiable weak Lie group structure, and that the germs of definable group morphisms are determined by their derivatives at the identity. We then define the (definable) Lie algebra associated with a definable Lie group, and show that it satisfies the familiar properties of Lie algebras. This is done using a local computation, after characterizing the Lie bracket as the second order part of the commutator function near the identity. Section \ref{fields-section} is dedicated to the classification of fields definable in $1$-h-minimal fields, and in Section \ref{one-dimensional-section} we prove our results on definable one dimensional groups.
\section{Preliminaries}\label{preliminaries} In this section we set some background definitions and notation, and describe basic relevant results used in later sections. Most of the terminology below is either standard or taken from \cite{hensel-min}. Throughout, $K$ will denote a non-trivially valued field. We will not distinguish, notationally, between the structure and its universe. Formally, we allow $K$ to be a multi-sorted structure (with all sorts coming from $K^{eq}$), but by a definable set we mean (unless explicitly stated otherwise) a subset of $K^n$ definable with parameters. All tuples are finite, and we write (as is common in model theory) $a\in K$ for $a\in K^n$ for $n=\mathrm{length}(a)$. We apply the same convention to variables. To stress the analogy of the current setting with the real numbers we use multiplicative notation for the valuation. Thus, the value group is denoted $(\Gamma,\cdot)$ and the valuation $|\cdot|:K\to \Gamma_0=\Gamma\cup \{0\}$, and if $x\in K^n$ we set $|x|:=\max_{1\leq k\leq n}|x_k|$. An open ball of (valuative) radius $r\in \Gamma$ in $K^n$ is a set of the form $B=\{x\in K^n: |x-a|<r\}$ for $a\in K^n$. The balls endow $K$ with a field topology (the valuation topology). Up until Section \ref{groups-section} all topological notions mentioned in the text will refer solely to this topology. We denote $\mathcal{O}:=\{x: |x|\le 1\}$, the valuation ring, $\mathcal{M}:=\{x\in {\mathcal O}: |x|<1\}$, the valuation ideal, and $k:={\mathcal O}/{\mathcal M}$, the residue field. We also denote $RV=K^{\times}/1+\mathcal{M}$.
More generally, whenever $s\in \Gamma$ and $s\leq 1$, we denote $\mathcal{M}_s=\{x\in K: |x|<s\}$, and $RV_s=K^{\times}/(1+\mathcal{M}_s)$. If $K$ has mixed characteristic $(0,p)$, we denote $RV_{p,n}=RV_{|p|^n}$ and $RV_{p,\bullet}=\bigcup_n RV_{p,n}$. \\ It is convenient, when discussing approximation theorems, to adopt the big-O notation from real analysis. For the sake of clarity we recall this notation in the valued field setting: \begin{definition}\label{big-o} \begin{enumerate} \item If $f:U\to K^m$ and $g:U\to \Gamma_0$ are functions defined in an open neighborhood of $0$ in $K^n$, then $f(x)=O(g(x))$ means that there are $r, M>0$ in $\Gamma$ such that if $|x|<r$ then $|f(x)|\le Mg(x)$. We also write $f_1(x)=f_2(x)+O(g(x))$ if $f_1(x)-f_2(x)=O(g(x))$. \item If $g:U\to K^r$ and $s\in \mathbb{N}$, then $O(g(x)^s)$ stands for $O(|g(x)|^s)$. \item If $f:Y\times U\to K^m$ is a function, where $U$ is an open neighborhood of $0$ in $K^n$, and if $g:U\to \Gamma_0$, then $f(y,x)=O_y(g(x))$ means that for every $y\in Y$ there are $r_y,M_y>0$ such that if $|x|<r_y$ then $|f(y,x)|\le M_yg(x)$. \end{enumerate} \end{definition} As mentioned in the introduction, in the present paper we work with the notion of strict differentiability, which we now recall: \begin{definition} Let $U\subseteq K^n$ be an open subset and $f:U\to K^m$ be a map. Then $f$ is strictly differentiable at $a\in U$ if there is a linear map $A:K^n\to K^m$ such that for every $\epsilon>0$ there exists $\delta>0$ satisfying $|f(x)-f(y)-A(x-y)|\leq \epsilon |x-y|$ for every $x,y$ such that $|x-a|<\delta$ and $|y-a|<\delta$. $f$ is strictly differentiable in $U$ if it is strictly differentiable at every point of $U$. \end{definition} In the situation of the definition the linear map $A$ is uniquely determined and denoted $f'(a)$. If $f$ is strictly differentiable in an open $U$, then it is continuously differentiable. \begin{definition} Let $U\subseteq K^n$ and $V\subseteq K^n$ be open subsets. Then $f:U\to V$ is a strict diffeomorphism if it is strictly differentiable, bijective and its inverse is strictly differentiable. \end{definition} As we will see, a strict diffeomorphism is just a strictly differentiable diffeomorphism. \\ Given an open ball $B\subseteq K^n$ of radius $r$, a subset $Y$ of $K^n$, and an element $s\in \Gamma$ with $s\leq 1$, we say that $B$ is $s$-next to $Y$ if $B'\cap Y=\emptyset$ for $B'$ the open ball of radius $s^{-1}r$ containing $B$. Note that every point not in the closure of $Y$ is contained in a ball $s$-next to $Y$. This is because if $B$ is an open ball of radius $r$ disjoint from $Y$, then every open ball of radius $sr$ contained in $B$ is $s$-next to $Y$. Following \cite{hensel-min} we say that a finite set $Y\subseteq K$ prepares the set $X\subseteq K$ if every ball $B$ disjoint from $Y$ is either disjoint from $X$ or contained in $X$. More generally, if $s\in \Gamma$ is such that $s\leq 1$, then $Y$ $s$-prepares $X$ if every open ball $B$ $s$-next to $Y$ is either contained in $X$ or disjoint from $X$. If $K$ is a valued field of mixed characteristic $(0,p)$, given an integer $m\in \mathbb{N}$, an open ball, $B\subseteq K^n$, and a set $Y\subseteq K^n$, we say that $B$ is $m$-next to $Y$ if it is $|p|^m$-next to $Y$. Similarly, if $s\in \Gamma$ and $s\leq 1$ then $B$ is $m$-$s$-next to $Y$ if it is $|p|^ms$-next to $Y$. Given a finite $Y\subseteq K$ and $X\subseteq K$, we say that $Y$ $m$-prepares (resp.
$m$-$s$-prepares) the set $X$ if $Y$ $|p|^m$-prepares $X$ (resp. $Y$ $|p|^ms$-prepares $X$). Next, we recall the definition of $1$-h-minimality, given in the equi-characteristic $0$ (\cite{hensel-min}) and mixed characteristic (\cite{hensel-minII}) settings: \begin{definition} Let $K$ be an $\aleph_0$-saturated non-trivially valued field of characteristic $0$, which is a structure in a language extending the language of valued fields. \begin{enumerate} \item If $K$ has residue characteristic $0$ then $K$ is $1$-h-minimal if, for any $s\le 1$ in $\Gamma$, any $A\subseteq K$, any singleton $A'\in RV_s$ and every $(A\cup RV\cup A')$-definable set $X\subseteq K$, there is an $A$-definable finite set $Y\subseteq K$ $s$-preparing $X$. \item If $K$ has mixed characteristic $(0,p)$ then $K$ is $1$-h-minimal if, for any $s\le 1$ in $\Gamma$, any $A\subseteq K$, any singleton $A'\in RV_{s}$ and every $(A\cup RV_{p,\bullet}\cup A')$-definable set $X\subseteq K$, there is $m\in {\mathbb{N}}$ and an $A$-definable finite set $Y\subseteq K$ which $m$-$s$-prepares $X$. \end{enumerate} \end{definition} In the sequel, when appealing directly to the definition, we will only need the case $s=1$ (so $A'$ does not appear). The parameter $s$ does appear implicitly, though, when applying properties of $1$-h-minimality such as generic continuity of definable functions (see \cite[Proposition 5.1.1]{hensel-min}). Below we will need to study properties of ``one-to-finite definable functions'' (definable correspondences, in the terminology of \cite{SimWal}). It turns out that statements regarding such objects can sometimes be reduced to statements on definable functions in expansions of the language by algebraic Skolem functions (i.e., Skolem functions for definable finite sets). For this, the following will be convenient (see \cite[Proposition 4.3.3]{hensel-min} and \cite[Proposition 3.2.2]{hensel-minII}): \begin{fact}\label{acl=dcl} Suppose $K$ is a 1-h-minimal valued field. Then there exists a language $\mathcal{L}'\supseteq \mathcal{L}$, an elementary extension $K'$ of $K$, and an $\aleph_0$-saturated $\mathcal{L}'$-structure on $K'$ extending the $\mathcal{L}$-structure of $K'$, such that $K'$ is 1-h-minimal as an $\mathcal{L}'$-structure, and such that $\acl_{\mathcal{L}'}(A)=\dcl_{\mathcal{L}'}(A)$ for all $A\subseteq K'$. \end{fact} Above and throughout, algebraic and definable closures are always assumed to be taken in the $K$ sort. In the sequel we will refer to the property appearing in the conclusion of Fact \ref{acl=dcl} simply as ``$\acl=\dcl$''. \begin{remark}\label{acl=dcl-remark} Given an ${\mathcal L}$-definable set, $S$, statements concerning topological or geometric properties of $S$ are often expressible by first-order ${\mathcal L}$-formulas. As the topology on $K$ is definable in the valued field language, and the dimension of definable sets in $1$-h-minimal structures is determined by the topology (see Proposition \ref{dimension}), the truth values of the hypothesis and conclusion of such statements (for our fixed ${\mathcal L}$-definable set $S$) are the same in $K$ and in any elementary extension $K\prec K'$, as well as in any $1$-h-minimal expansion of the latter. Therefore, by Fact \ref{acl=dcl}, in the proof of such statements (for a fixed definable $S$) there is no harm in assuming $\acl=\dcl$.
\end{remark} \subsection{Geometric structures} Geometric structures were introduced in \cite[\S 2]{HruPil}. Let us recall the definition: an $\aleph_0$-saturated structure, $M$, is \emph{pregeometric} if $\acl(\cdot)$ is a matroid, that is, it satisfies the exchange property: \[ \text{if } a\in \acl(Ab)\setminus \acl(A), \text{ then } b\in \acl(Aa) \text{ for singletons } a,b\in M. \] In this situation the matroid gives a notion of dimension: $\dim(a/b)$, the dimension of a tuple $a$ over a tuple $b$, is the smallest length of a sub-tuple $a'$ of $a$ such that $a\in \acl(a'b)$, and the dimension of a $b$-definable set $X$ is the maximum of the dimensions $\dim(a/b)$ with $a\in X$ (this does not depend on $b$). As is customary, we set $\dim(\emptyset)=-\infty$. We recall the basic properties of dimension (see \cite[\S 2]{HruPil} for all references). This dimension satisfies the additivity property \[ \dim(ab/c)=\dim(a/bc)+\dim(b/c), \] which we will invoke without further reference. We call $a$ and $b$ algebraically independent over $c$ if $\dim(a/bc)=\dim(a/c)$. Note that by additivity of dimension this is a symmetric relation. Note also that additivity implies that if $b,c$ are inter-algebraic over $a$, meaning $b\in \acl(ac)$ and $c\in \acl(ab)$, then $\dim(b/a)=\dim(c/a)$ (in particular, this holds when $c$ is the image of $b$ under an $a$-definable bijection). If $M$ is a pregeometric structure and $f:X\to Y$ is a surjective definable function with fibers of constant dimension $k$, then $\dim(X)=\dim(Y)+k$. This is a consequence of the additivity formula. Given an $a$-definable set $X$, a generic element of $X$ over $a$ is an element $b\in X$ such that $\dim(b/a)=\dim(X)$. By compactness, generic elements can always be found in saturated enough models. We call $Y\subseteq X$ large if $\dim(X\setminus Y)<\dim(X)$. This is equivalent to $Y$ containing every generic point of $X$. A pregeometric structure, $M$, eliminating the quantifier $\exists^\infty$ is called geometric. If $M$ is geometric, dimension is definable in definable families. Namely, for $\{X_a\}_{a\in S}$ a definable family, the set $\{a\in S: \dim(X_a)=k\}$ is definable. The following simple fact is a translation of the definition of a pregeometry to a property of definable sets. Note as an aside that this reformulation implies that the property of being a pregeometry is preserved under reducts. I.e., if $M$ is an $\aleph_0$-saturated pregeometric $\mathcal{L}'$-structure, and $\mathcal{L}\subseteq \mathcal{L}'$, then $M$ is also a pregeometric $\mathcal{L}$-structure. For the sake of completeness, we give the proof: \begin{fact}\label{pregeometry-definable-set-criterion} Suppose $M$ is an $\aleph_0$-saturated structure. Then $M$ is pregeometric if and only if for every definable $X\subseteq M\times M$, if the projection $\pi_1:X\to M$ onto the first factor is finite-to-one, and $\pi_2:X\to M$ is the projection onto the second factor, then the set $Y=\{c\in M: \pi_2^{-1}(c)\cap X\text{ is infinite}\}$ is finite. \end{fact} \begin{proof} Suppose $M$ is pregeometric and suppose $X\subseteq M\times M$ is $A$-definable such that $\pi_1^{-1}(x)\cap X$ is finite for all $x\in M$. Suppose also that $Y=\{y\in M: \pi_2^{-1}(y)\cap X\text{ is infinite}\}$ is infinite.
By compactness and saturation, we can choose $b\in Y$ such that $\dim(b/A)=1$. Similarly, we can find $a\in \pi_2^{-1}(b)\cap X$ such that $\dim(a/Ab)=1$. We conclude that $\dim(ab/A)=2$, and so $\dim(X)\ge 2$. This contradicts the fact that $\pi_1^{-1}(x)\cap X$ is finite for all $x\in M$. For the converse, suppose $A$ is a finite subset of $M$ and $a,b\in M$ are singletons such that $a\in\acl(Ab)\setminus \acl(A)$. Then there is an $A$-definable set $X\subseteq M\times M$ such that $(b,a)\in X$ and $\pi_1^{-1}(b)\cap X$ is finite, say of cardinality $k$. If we take $Z=\{c\in M: \pi_1^{-1}(c)\cap X\text{ has cardinality }k\}$, then we may replace $X$ by $X\cap (Z\times M)$, and so we may assume that $\pi_1^{-1}(c)\cap X$ is either empty or of constant finite cardinality for all $c\in M$. In this case, by the hypothesis we conclude that $Y=\{y\in M: \pi_2^{-1}(y)\cap X\text{ is infinite}\}$ is finite. Note that $Y$ is $A$-invariant and definable, so it is $A$-definable. We conclude that $a\notin Y$, because $a\notin \acl(A)$, and so $b\in \acl(Aa)$ as required. \end{proof} The next characterization of the $\acl$-dimension should be well known: \begin{fact}\label{dimension-is-canonical} Suppose $M$ is an $\aleph_0$-saturated structure which eliminates the $\exists^{\infty}$ quantifier. Suppose there is a function, $X\mapsto d(X)$, from the non-empty definable subsets of (cartesian powers of) $M$ into $\mathbb{N}$ satisfying: \begin{enumerate} \item If $X\subseteq M^n\times M$ is such that the first coordinate projection $\pi_1:X\to M^n$ is finite to one, then $d(X)=d(\pi_1(X))$. \item If $X\subseteq M^n\times M$ is such that the first coordinate projection $\pi_1:X\to M^n$ has infinite fibers, then $d(X)=d(\pi_1(X))+1$. \item If $\pi:M^n\to M^n$ is a coordinate permutation, then $d(X)=d(\pi(X))$. \item $d(X\cup Y)=\max\{d(X),d(Y)\}$. \item $d(M)=1$. \item $d(X)=0$ if and only if $X$ is finite. \end{enumerate} Then $M$ is a geometric structure and $d$ coincides with its $\acl$-dimension. \end{fact} \begin{proof} It suffices to show that $M$ is pregeometric. We use Fact \ref{pregeometry-definable-set-criterion}. Let $X\subseteq M\times M$ be such that $\pi_1^{-1}(x)\cap X$ is finite for all $x\in M$. Take $Y=\{y\in M: \pi_2^{-1}(y)\cap X\text{ is infinite}\}$. Because $M$ eliminates the $\exists^{\infty}$ quantifier we have that $Y$ is definable. If $Y$ is infinite we conclude that $d(X)\ge d(X\cap \pi_2^{-1}(Y))=d(Y)+1\ge 2$: the first inequality holds by item (4), the equality by items (3) and (2), and the last inequality by item (6). On the other hand $d(X)=d(\pi_1(X))\le d(M)=1$, the first equality by item (1), the inequality by item (4) and the last equality by item (5). This is a contradiction and finishes the proof. In order to see that $d(X)=\dim(X)$ for $X\subseteq M^n$ we may proceed by induction on $n$. The base case $n=1$ follows from items (4), (5) and (6). So suppose that $X\subseteq M^n\times M$. Denote $Y=\{x\in M^n: \pi_1^{-1}(x)\cap X\text{ is infinite}\}$. By hypothesis $Y$ is a definable set. Denote $X_1=\pi_1^{-1}(Y)\cap X$ and $X_2=X\setminus X_1$. Then by items (1), (2) and (4) we conclude that $d(X)=\max\{d(X_1),d(X_2)\}=\max\{d(Y)+1,d(\pi_1(X)\setminus Y)\}$. For the same reason we have the formula $\dim(X)=\max\{\dim(Y)+1,\dim(\pi_1(X)\setminus Y)\}$, so $d(X)=\dim(X)$ as required.
\end{proof} The next fact is also standard: \begin{fact}\label{cell-decomposition0} Suppose $M$ is a geometric structure and $X\subseteq M^n$ is $a$-definable. Then there is a partition of $X$ into a finite number of $a$-definable sets $X=X_1\cup\cdots\cup X_n$, such that for each member $X_k$ of the partition there is a coordinate projection $\pi:X_k\to M^r$ which is finite to one and has image of dimension $r$. \end{fact} \begin{remark}\label{terminology-cell-decomposition-0} For this statement we need to allow the identity $\mathrm{id}:M^n\to M^n$ as a coordinate projection. Also, recall that $M^{0}$ is a set consisting of one element; we also need to allow the constant function $M^n\to M^{0}$ as a coordinate projection. \end{remark} \begin{proof} By induction on the dimension of the ambient space $n$. Consider the projection onto the first $n-1$ coordinates $\pi_1:M^n\to M^{n-1}$. Then the set $Y\subseteq M^{n-1}$ of $y$ such that the fibers $X_y=\pi_1^{-1}(y)\cap X$ are infinite is definable. So, partitioning $X$, we may assume all the nonempty fibers of $X$ over $M^{n-1}$ are finite, or all are infinite. If all the fibers of $X\to M^{n-1}$ are finite then we finish by induction. If all the nonempty fibers are infinite then by the induction hypothesis there is a partition $Y=\bigcup_i Y_i$ and for each $Y_i$ there is a coordinate projection $\tau: Y_i\to M^r$ with finite fibers and $r=\dim(Y_i)$. Denote by $\pi_2:M^{n}\to M$ the projection onto the last coordinate. Then, setting $X_i=X\cap \pi_1^{-1}(Y_i)$, the projection $\pi(x)=(\tau(\pi_1(x)),\pi_2(x))$ has the desired properties. \end{proof} The next proposition is key. It asserts that $1$-h-minimal fields are geometric, and it connects (combined with the previous fact) topology and dimension in such structures: \begin{proposition}\label{dimension} Suppose $K$ is a 1-h-minimal valued field. Then: \begin{enumerate} \item $K$ is a geometric structure. \item Every definable $X\subseteq K^n$ satisfies: $\dim(X)=n$ if and only if $X$ has nonempty interior, and $\dim(X)<n$ if and only if $X$ is nowhere dense. \item For definable $X\subseteq K^n$, we have $\dim(X)=\max\limits_{x\in X}\dim_x(X)$, where we denote by $\dim_x(X)$ the local dimension of $X$ at $x$, defined as $\dim_x(X)= \min\{\dim(B\cap X): x\in B \text{ is an open ball}\}$. \end{enumerate} \end{proposition} \begin{proof} This is essentially items (1)-(5) of \cite[Proposition 5.3.4]{hensel-min} in residue characteristic $0$ and contained in \cite[Proposition 3.1.1]{hensel-minII} in mixed characteristic. For example, assume $K$ has residue characteristic $0$. That $K$ is geometric is proved in the course of the proof of \cite[Proposition 5.3.4]{hensel-min}. We can also derive it from Fact \ref{dimension-is-canonical} and \cite[Proposition 5.3.4]{hensel-min}. The topological characterization $\dim(X)=n$ if and only if $X$ has nonempty interior is a particular case of item (1) in \cite[Proposition 5.3.4]{hensel-min}. That $\dim(X)<n$ if and only if $X$ is nowhere dense follows from this. Indeed, if $\dim(X)=n$, then $X$ has nonempty interior and so it is not nowhere dense. If $\dim(X)<n$ and $U\subseteq K^n$ is nonempty open, then $\dim(U\setminus X)=n$, and so $U\setminus X$ has nonempty interior. This implies $X$ is nowhere dense.
That dimension is the maximum of the local dimensions is item (5) of \cite[Proposition 5.3.4]{hensel-min}. \end{proof} \begin{proposition}\label{generic-continuity} Suppose $K$ is a 1-h-minimal field and $f:U\to K^m$ is a definable function. Then there is a definable open dense subset $U'\subseteq U$ such that $f:U'\to K^m$ is continuous. \end{proposition} \begin{proof} This is essentially a particular case of \cite[Proposition 5.1.1]{hensel-min} in residue characteristic $0$, and contained in \cite[Proposition 3.1.1]{hensel-minII} in mixed characteristic. Indeed, because the intersection of finitely many open dense sets is open and dense, we reduce to the case $m=1$. From those propositions one gets that the set $Z$ of points where $f$ is continuous is dense in $U$. As $Z$ is not nowhere dense, we conclude using item (2) of Proposition \ref{dimension} that $\dim(Z)=n$ and so $Z$ has nonempty interior. If $V\subseteq U$ is a nonempty open definable subset then, as $Z\cap V$ is the set of points at which $f|_{V}$ is continuous, by what we just proved $Z\cap V$ has nonempty interior. We conclude that the set of points at which $f$ is continuous has dense interior in $U$, as desired. \end{proof} Next we describe a topology on $Y^{[s]}$, the set of subsets of $Y$ of cardinality $s$, for $Y$ a Hausdorff topological space and $s$ a positive integer. We prove a slightly more general statement, which will be applied when $X$ is $Y^s\setminus \Delta$, the set of tuples of $Y^s$ with pairwise distinct coordinates, and the group is the symmetric group, $S_s$, on $s$ elements acting on $Y^s$ by coordinate permutations; in this case the orbit space is identified with $Y^{[s]}$. \begin{fact}\label{topology-group-action} Suppose $X$ is a Hausdorff topological space and $G$ is a finite group acting on $X$ by homeomorphisms, such that every $x\in X$ has a trivial stabilizer in $G$. Then $X/G$ equipped with the quotient topology is Hausdorff and the map $p:X\to X/G$ is a closed finite covering map. In fact, for every $x\in X$ there is an open set $x\in U\subseteq X$ such that $\{gU: g\in G\}$ are pairwise disjoint, $p^{-1}p(U)=\bigcup_g gU$ and $p|_{gU}$ is a homeomorphism onto $p(U)$. \end{fact} \begin{proof} We know that $p$ is open, since $p^{-1}p(U)=\bigcup_{g\in G}gU$ is open for $U$ open. Consider the orbit $\{gx\}_{g\in G}$ of $x$. By assumption, if $g\neq h$, then $gx\neq hx$. Let $V$ be an open set in $X$ containing $\{gx\}_{g\in G}$. Now, because $X$ is Hausdorff, we conclude that there are open neighborhoods $U_g$ of $gx$, contained in $V$, such that $U_g\cap U_h=\emptyset$ for $g\neq h$. If we take $U=\bigcap_{g\in G}g^{-1}U_g$, then $gU\subseteq U_g$ and so $\{gU: g\in G\}$ are pairwise disjoint. We conclude that $p$ is closed and restricted to $gU$ is a homeomorphism. That $X/G$ is Hausdorff now follows from this. Indeed, if $p(x)\neq p(y)$, then there are open sets $V_1$ and $V_2$ of $X$, which are disjoint and such that $p^{-1}p(x)\subseteq V_1$ and $p^{-1}p(y)\subseteq V_2$. Because $p$ is closed there are open sets $p(x)\in U_1$ and $p(y)\in U_2$ in $X/G$ such that $p^{-1}(U_i)\subseteq V_i$. We conclude that $U_1$ and $U_2$ are disjoint. \end{proof} \begin{proposition}\label{generic-regularity-finite} Let $K$ be a 1-h-minimal valued field. Suppose $U\subseteq K^n$ is open and $f:U\to (K^r)^{[s]}$ is definable. Then there is an open dense definable set $U'\subseteq U$ such that $f$ is continuous in $U'$.
\end{proposition} \begin{proof} This statement is equivalent to saying that the interior of the set of points at which $f$ is continuous is dense. As this property is expressible by a first order formula, we may assume $\acl=\dcl$, see Fact \ref{acl=dcl} and the remark following it. In that case we have a definable section $\sigma:(K^r)^{[s]}\to K^{rs}$, and if $V\subseteq U$ is open and dense such that $\sigma\circ f$ is continuous, as provided by Proposition \ref{generic-continuity}, then $f$ is continuous in $V$. \end{proof} \begin{proposition}\label{cell-decomposition} Suppose $X\subseteq K^n$ is $b$-definable. Then there is a finite partition of $X$ into $b$-definable sets, such that for each element $Y$ of the partition there is a coordinate projection $\pi:Y\to U$ onto an open set $U\subseteq K^m$, such that the fibers of $\pi$ all have the same cardinality, equal to $s$, and the associated map $f:U\to (K^{n-m})^{[s]}$ is continuous. \end{proposition} \begin{remark} As in Remark \ref{terminology-cell-decomposition-0}, we need to allow the two cases $m=0$ and $m=n$. The set $K^0$ consists of a single point, and has a unique topology. \end{remark} \begin{proof} This is a consequence of dimension theory and the previous observation. In more detail, we proceed by induction on the dimension of $X$. First, recall that $X$ has a finite partition into $b$-definable sets such that for each set $X'$ in the partition there is a coordinate projection $\pi:X'\to K^r$ with finite fibers and $r=\dim(X')$, see Fact \ref{cell-decomposition0}. So now assume $\pi:X\to K^r$ is a coordinate projection with finite fibers and $r=\dim(X)$, and denote by $\pi':X\to K^{n-r}$ the projection onto the other coordinates. There is an integer $s$ which bounds the cardinality of the fibers of $\pi$. If we denote by $Y_k$ the set of elements $a\in K^r$ such that $X_a=\pi'(\pi^{-1}(a))$ has cardinality $k$, then we get $Y_0\cup\cdots\cup Y_s=K^r$. Now let $V_j\subseteq Y_j$ be open, dense in the interior of $Y_j$, and such that the map $V_j\to (K^{n-r})^{[j]}$ given by $a\mapsto X_a$ is continuous, see Proposition \ref{generic-regularity-finite}. Then the set $\{x\in X: \pi(x)\in Y_j\setminus V_j \text{ for some } 1\leq j\leq s\}$ is of lower dimension than $X$, by item 2 of Proposition \ref{dimension}, and so we may apply the induction hypothesis to it. \end{proof} Recall that a subset $Y\subseteq X$ of a topological space $X$ is locally closed if it is the intersection of an open set and a closed set. This is equivalent to $Y$ being relatively open in its closure. It is also equivalent to the existence, for every point $y\in Y$, of a neighborhood $V$ of $y$ such that $Y\cap V$ is relatively closed in $V$. \begin{proposition}\label{definable-is-constructible} Suppose $K$ is 1-h-minimal and $X\subseteq K^n$ is an $a$-definable set. Then $X$ is a finite union of $a$-definable locally closed subsets of $K^n$. \end{proposition} \begin{proof} This is a consequence of Proposition \ref{cell-decomposition}. Namely, there is a partition of $X$ into a finite union of $a$-definable subsets for each of which there is a coordinate projection with finite fibers onto an open set $U$, so we may assume $X$ is of this form. We may further assume that the fibers have constant cardinality $k$ and the associated mapping $U\to (K^r)^{[k]}$ is continuous. Then $X$ is closed in $U\times K^r$ and so locally closed. \end{proof} We finish by reviewing a more difficult property of dimension.
We will only use this in Proposition \ref{generic-embedding}, Proposition \ref{subgroups-are-closed} and Corollary \ref{subgroup-tangent}, which are not used in the main theorems. \begin{proposition}\label{dimension-boundary-function} Suppose $K$ is a 1-h-minimal field and $X\subseteq K^n$ is definable. Then $\dim(\cl(X)\setminus X)<\dim(X)$. \end{proposition} This is item 6 of \cite[Proposition 5.3.4]{hensel-min} in residue characteristic $0$ and it is contained in \cite[Proposition 3.1.1]{hensel-minII} in the mixed characteristic case. \section{Taylor approximations}\label{taylor-section} In this section we show that, in the $1$-h-minimal setting, the generic one variable Taylor approximation theorem (\cite[Theorem 3.1.2]{hensel-minII}) implies a multi-variable version of the theorem. In equicharacteristic $(0,0)$ this is \cite[Theorem 5.6.1]{hensel-min}. Though the proof in mixed characteristic is essentially similar, we give the details for the sake of completeness and in view of the importance of this result in the sequel. We then proceed to introduce some regularity conditions for definable functions (implied, in the present context, by Taylor's approximation theorem) necessary for computations related to the Lie algebra of definable groups. First, we recall the multi-index notation. If $i=(i_1,\dots,i_n)\in \mathbb{N}^n$, we denote $|i|=i_1+\cdots+i_n$ and $i!=i_1!\cdots i_n!$. For $x=(x_1,\dots,x_n)\in K^n$ we denote $x^i=x_1^{i_1}\cdots x_n^{i_n}$. Also, if $f:U\to K$ is a function defined on an open set of $K^n$, we denote $f^{(i)}(x)=(\frac{\partial^{i_1}}{\partial x_1^{i_1}} \cdots \frac{\partial^{i_n}}{\partial x_n^{i_n}}f)(x)$ whenever it exists. Note that we are not assuming equality of mixed derivatives, but see Corollary \ref{partial-derivatives-commute}. \begin{proposition}\label{taylor0} Let $K$ be a 1-h-minimal field of residue characteristic $0$. Suppose $f:U\to K$ is an $a$-definable function with $U\subseteq K^n$ open and let $r\in {\mathbb{N}}$. Then there is an $a$-definable set $C$, of dimension strictly smaller than $n$, such that for any open ball $B\subseteq U$ disjoint from $C$, the derivative $f^{(i)}$ exists in $B$ for every $i$ with $|i|\leq r$ and has constant valuation in $B$. Moreover, \begin{displaymath} \left | f(x)-\sum_{\{i:|i|<r\}}\frac{1}{i!}f^{(i)}(x_0)(x-x_0)^i \right | \leq \max_{\{i:|i|=r\}} \left |\frac{1}{i!}f^{(i)}(x_0)(x-x_0)^i \right| \end{displaymath} for every $x,x_0\in B$. \end{proposition} This is \cite[Theorem 5.6.1]{hensel-min}. Our first order of business is to generalize this result to positive residue characteristic. The following fact is proved by a standard compactness argument, and is often applied implicitly. We include the argument for convenience. \begin{fact}\label{standard-compactness} Let $M$ be an $\aleph_0$-saturated structure, and let $\{\Phi^l(\bar{D})\}_{l\in I}$ be a family of properties of definable sets $\bar{D}=(D_1,\dots,D_n)$ in $M$, indexed by a directed set $I$. Let $b$ be a tuple in $M$, and $S$ be a $b$-definable set. Assume that: \begin{enumerate} \item For all $l$ the property $\Phi^l$ is definable in definable families. I.e., if $\{D_{i,a}\}_{a\in T}$ are $b$-definable families, then the set $\{a\in T: \Phi^l({\bar{D}_a})\text{ holds}\}$ is $b$-definable. \item $\Phi^l$ implies $\Phi^{l'}$ for all $l\leq l'$. \item For every $a\in S$, there are $ba$-definable sets $D_{i,a}$, satisfying $\Phi^{l_a}$ for some $l_a\in I$.
\end{enumerate} Then there are $b$-definable families $\{D_{i,a}\}_{a\in S}$ of sets, and a fixed $l\in I$, such that $\Phi^l(\bar{D}_{a})$ holds for every $a\in S$. \end{fact} \begin{remark} Formally, $\Phi^l$ is a subset of \[ \{(D_1,\dots,D_n): D_i \text{ is a definable set}\}, \] and we say $\Phi^l(D_1,\dots,D_n)$ holds if the tuple $(D_1,\dots,D_n)$ belongs to $\Phi^l$. Note also that the tuple $(D_1,\dots,D_n)$ can be replaced with $D_1\times \cdots \times D_n$, so there is no loss of generality in taking $\Phi^l$ of the form $\Phi^l(D)$. \end{remark} \begin{proof} Let $a\in S$. By hypothesis there are $b$-definable families $\{D_{i,a'}^a\}_{a'\in S_i^{0,a}}$ and an element $l^a\in I$, such that $(D_{1,a}^a,\dots,D_{n,a}^a)$ satisfies $\Phi^{l^a}$. Let $S^a$ be the set of $a'\in S$ such that $a'\in S_i^{0,a}$ for $i=1,\dots,n$, and such that $\Phi^{l^a}(\bar{D}_{a'}^a)$ holds. By hypothesis, this is a $b$-definable set contained in $S$ and containing $a$. We conclude that $S=\bigcup_{a\in S}S^a$ is a cover of $S$ by $b$-definable sets, and so by compactness and saturation there is a finite sub-cover, say $S=S^1\cup\dots \cup S^k$ for $S^r=S^{a_r}$. Indeed, if there were no finite sub-cover, then the partial type expressing $x\in S$ and $x\notin S^a$ for all $a\in S$ would be a consistent $b$-type, and a realization of it in $M$ would contradict $S=\bigcup_{a\in S}S^a$. Then the sets $D_{i,a}$, defined as $D_{i,a}^{a_r}$ if $a\in S^r\setminus \bigcup_{r'<r}S^{r'}$, are such that $\{D_{i,a}\}_{a\in S}$ form $b$-definable families. If we take $l$ such that $l\geq l^{a_1},\dots,l^{a_k}$, then we get that $\Phi^l(\bar{D}_{a})$ holds for every $a\in S$, as required. \end{proof} \begin{notation} If $D\subseteq E\times F$ and $a\in E$, we often denote $D_a=\{b\in F: (a,b)\in D\}$. If $b\in F$ we denote, when no ambiguity can occur, $D_b=\{a\in E: (a,b)\in D\}$. If $f:D\to C$ is a function, we let $f_a:D_a\to C$ denote the function $f_a(b)=f(a,b)$, and similarly $f_b$ for $b\in F$. \end{notation} \begin{proposition}\label{taylor} Let $K$ be a 1-h-minimal field of positive residue characteristic, $f:U\to K$ an $a$-definable function with $U\subseteq K^n$ open, and let $r\in {\mathbb{N}}$. Then there is an integer $m$ and a closed $a$-definable set $C$ with $\dim(C)<n$, such that for every open ball $B\subseteq U$ that is $m$-next to $C$, $f^{(i)}$ exists in $B$ for every $i$ with $|i|\leq r$, and $f^{(i)}$ has constant valuation in $B$. Moreover, \begin{displaymath} \left | f(x)-\sum_{\{i:|i|<r\}}\frac{1}{i!}f^{(i)}(x_0)(x-x_0)^i \right | \leq \max_{\{i:|i|=r\}} \left |\frac{1}{i!}f^{(i)}(x_0)(x-x_0)^i \right | \end{displaymath} for every $x,x_0\in B$. \end{proposition} \begin{proof} We proceed by induction on $n$, the case $n=1$ being \cite[Theorem 3.1.2]{hensel-minII}. Assume the result for $n$ and let $f:U\to K$ be an $a$-definable function with $U\subseteq K^n\times K$ open, and let $i$ denote a multi-index with $|i|\le r$. Then for every $x\in K^n$ there is a finite $ax$-definable set $C_x\subseteq K$ and an integer $m_x$ such that \begin{equation}\label{taylor-1} |f_x(y)-\sum_{s<r}\frac{1}{s!}f_x^{(s)}(y_0)(y-y_0)^s| \leq |\frac{1}{r!}f_x^{(r)}(y_0)(y-y_0)^r| \end{equation} for every $y$ and $y_0$ in an open ball $m_x$-next to $C_x$, and such that $f_x^{(s)}$ exists and $|f_x^{(s)}|$ is constant in any such open ball.
By a standard compactness argument (see Fact \ref{standard-compactness}), we may assume that the $C_x$ are uniformly definable and that there is some $m\in {\mathbb{N}}$ such that $m_x=m$ for all $x$. Define $C=\bigcup_x\{x\}\times C_x$. By induction, for each $y\in K$ we can approximate the functions $g_{s,y}(x)=f_x^{(s)}(y)$, defined on $V_y=\mathrm{Int}(U_y\setminus C_y)$, up to order $r-s$. By the same compactness argument we obtain a natural number $m'$ and an $a$-definable family $\{D_y\}_{y\in K}$ of subsets $D_y\subseteq V_y$ with $\dim(D_y)<n$ such that $g_{s,y}^{(i)}$ exists and has constant valuation on any ball $m'$-next to $D_y$ in $V_y$, for every multi-index $i$ with $|i|\leq r-s$. Moreover, \begin{equation}\label{taylor-2} | g_{s,y}(x)-\sum_{\{i:|i|<r-s\}}\frac{1}{i!}g_{s,y}^{(i)}(x_0)(x-x_0)^i | \leq \max_{\{i:|i|=r-s\}} |\frac{1}{i!}g_{s,y}^{(i)}(x_0)(x-x_0)^i|. \end{equation} Replacing $m$ and $m'$ by their maximum, we may assume $m=m'$. Define $D:=\bigcup_y D_y\times \{y\}$. By additivity of dimension $\dim(C)\le n$ and $\dim(D)\le n$. Finally, take $E=C\cup D\cup \bigcup_y (U_y\setminus V_y)\times\{y\}$. Similar dimension considerations show that $\dim(E)<n+1$. Note that for $(x,y)\in U\setminus E$ we have that, for $i$ and $s$ such that $|i|+s\leq r$, $f^{(i,s)}(x,y)$ and $g_{s,y}^{(i)}(x)$ exist and are equal. Now, for $x\in K^n$ define $W_x=\mathrm{Int}(U_x\setminus E_x)$, and for the functions $h_{x,s,i}:y\mapsto f^{(i,s)}(x,y)$ with $s+|i|\leq r$, defined on $W_x$, we find a finite set $F_x\subseteq W_x$ such that $\{F_x\}_x$ is an $a$-definable family, and there is an integer $m'$ such that in every ball in $W_x$ $m'$-next to $F_x$, $h_{x,s,i}$ has constant valuation. We may assume that $m'=m$ as before. Let $G$ be the closure of $E\cup\bigcup_x \{x\}\times F_x\cup \bigcup_x \{x\}\times (U_x\setminus W_x)$. Note that $\dim(G)<n+1$. Take $B_1\times B_2$ a ball in $U$, $m$-next to $G$. Then for every $x\in B_1$ we get that $B_2$ is $m$-next to both $C_x$ and $F_x$ and $B_2\subseteq W_x$. Similarly, for every $y\in B_2$, $B_1\subseteq V_y$ is $m$-next to $D_y$. We conclude that for every $(x,y)\in B_1\times B_2$, $f^{(i,s)}(x,y)$ exists and has constant valuation, for every index $(i,s)$ such that $|(i,s)|\leq r$. Indeed, we have for every $(x,y), (x',y')\in B_1\times B_2$ that \[ |f^{(i,s)}(x',y')|=|g_{s,y'}^{(i)}(x')|=|g_{s,y'}^{(i)}(x)|=|h_{x,s,i}(y')| =|h_{x,s,i}(y)|=|f^{(i,s)}(x,y)|, \] where the second equality follows from the condition on $D_{y'}$, and the fourth from those on $F_x$ and $W_x$. Now, if $(x,y)$ and $(x_0,y_0)$ are in $B_1\times B_2$, then equations \ref{taylor-1} and \ref{taylor-2} hold, and for the error term of \ref{taylor-1} we have $|f_x^{(r)}(y_0)|=|f^{(0,r)}(x,y_0)|= |f^{(0,r)}(x_0,y_0)|$. Denote \[M=\max\left \{\left|\frac{1}{i!s!} f^{(i,s)}(x_0,y_0)(x-x_0)^i(y-y_0)^s\right|: |(i,s)|=r\right \}.\] Then Equation \ref{taylor-1} yields $|f(x,y)-\sum_{s<r}\frac{1}{s!}f^{(0,s)}(x,y_0)(y-y_0)^s|\leq M$.
Also from Equation \ref{taylor-2} we have that \[\left |\frac{1}{s!}f^{(0,s)}(x,y_0)(y-y_0)^s- \sum_{\{i:|i|<r-s\}}\frac{1}{i!s!}f^{(i,s)}(x_0,y_0)(x-x_0)^i(y-y_0)^s \right | \leq M.\] Taking the sum over $s$ smaller than $r$ and using the ultrametric inequality we obtain \[\left |\sum_{s<r} \frac{1}{s!}f^{(0,s)}(x,y_0)(y-y_0)^s- \sum_{\{(i,s):|i|+s<r\}}\frac{1}{i!s!}f^{(i,s)}(x_0,y_0)(x-x_0)^i(y-y_0)^s\right |\leq M.\] Combining this with Equation \ref{taylor-1} and using the ultrametric inequality once more we conclude. \end{proof} As a consequence of the previous theorem we obtain that partial derivatives of definable functions commute generically. \begin{corollary}\label{partial-derivatives-commute} Suppose $f:U\to K$ is a definable function for some open $U\subseteq K\times K$. Then there exists an open dense $U'\subseteq U$ such that for every $(x,y)\in U'$ \[\frac{\partial}{\partial x}\frac{\partial}{\partial y}f(x,y)= \frac{\partial}{\partial y}\frac{\partial}{\partial x}f(x,y),\] and, in particular, the terms of the above equation exist in $U'$. Moreover, if $f:U\to K$ is such that the partial derivatives $\frac{\partial}{\partial x}\frac{\partial}{\partial y}f(x,y)$, $\frac{\partial}{\partial y}\frac{\partial}{\partial x}f(x,y)$ exist and are continuous in $U$, then they are equal. \end{corollary} \begin{proof} Take a $1$-dimensional closed $C\subseteq K\times K$ and an integer $m$ as provided by the Taylor approximation property for errors of order $3$. We may also assume that $\pi(C)$ and $m$ satisfy the same Taylor approximation property for the function $f\pi$, where $\pi$ is the coordinate permutation $(x,y)\mapsto (y,x)$. Then for $(x,y),(x_0,y_0)\in B_1\times B_2$ in a ball $m$-next to $C$ we obtain (see Definition \ref{big-o} for the big-$O$ notation) that \begin{align*} f(x,y)=& f(x_0,y_0)+ (x-x_0)\frac{\partial}{\partial x}f(x_0,y_0)+ (y-y_0)\frac{\partial}{\partial y}f(x_0,y_0)+ (x-x_0)^2\frac{1}{2}\frac{\partial^2}{\partial x^2}f(x_0,y_0)+ \\ & (y-y_0)^2\frac{1}{2}\frac{\partial^2}{\partial y^2}f(x_0,y_0)+ (x-x_0)(y-y_0)\frac{\partial}{\partial x}\frac{\partial}{\partial y}f(x_0,y_0)+ O((x-x_0,y-y_0)^3). \end{align*} Similarly, \begin{align*} f\pi(y,x)=f(x,y)=& f(x_0,y_0)+ (x-x_0)\frac{\partial}{\partial x}f(x_0,y_0)+ (y-y_0)\frac{\partial}{\partial y}f(x_0,y_0)+ (x-x_0)^2\frac{1}{2}\frac{\partial^2}{\partial x^2}f(x_0,y_0)+ \\ & (y-y_0)^2\frac{1}{2}\frac{\partial^2}{\partial y^2}f(x_0,y_0)+ (x-x_0)(y-y_0)\frac{\partial}{\partial y}\frac{\partial}{\partial x}f(x_0,y_0)+ O((x-x_0,y-y_0)^3). \end{align*} Taking the difference we obtain \begin{center} $(x-x_0)(y-y_0)\frac{\partial}{\partial y}\frac{\partial}{\partial x}f(x_0,y_0)- (x-x_0)(y-y_0)\frac{\partial}{\partial x}\frac{\partial}{\partial y}f(x_0,y_0)= O((x-x_0,y-y_0)^3).$ \end{center} Taking $h=(x-x_0)=(y-y_0)$ small we get $h^2 (\frac{\partial}{\partial y}\frac{\partial}{\partial x}f(x_0,y_0)- \frac{\partial}{\partial x}\frac{\partial}{\partial y}f(x_0,y_0))= O(h^3)$, so $\frac{\partial}{\partial y}\frac{\partial}{\partial x}f(x_0,y_0)- \frac{\partial}{\partial x}\frac{\partial}{\partial y}f(x_0,y_0)=O(h)$. This is only possible when the left-hand side is $0$, as desired. \end{proof} The following notation is intended to look similar to the monomial $ax^n$ for $a\in K$ and $x\in K$, for the purpose of expressing the Taylor approximation of a multivariate function. \begin{definition}\label{monomial} Let $m, n$ be positive integers, and $r\in {\mathbb{N}}$. Let $J=J(r,n)=\{j\in\mathbb{N}^n: |j|=r\}$. Let $a=(a_j)_{j\in J}$ be such that $a_j\in K^m$ for all $j\in J$.
Then, for $x\in K^n$ we define $ax^r=\sum\limits_{j\in J}a_jx^j$, where $x^j:=\prod_{i=1}^n x_i^{j(i)}$. Note that $x\mapsto ax^r$ is a function $K^n\to K^m$. \end{definition} As an example, consider, in the above notation, the case $r=1$. In this case $J=\{e_1,\dots, e_n\}$ and for $j=e_i\in J$ we have $x^j=x_i$ (where $x=(x_1,\dots, x_n)$), so for $a=(a_j)_{j\in J}$ with $a_j\in K^m$ we get that $ax=A\cdot x$ where $A$ is the matrix whose $i$-th column is $a_{e_i}$. \begin{definition}\label{tn} Let $U\subseteq K^k$ be open, $f:U\to K^m$ a function and $a\in U$. We say that $f$ is $P_n$ at $a$ if it is approximable by polynomials of degree $n$ near $a$ in the following sense: there are constants $b_0,\dots,b_n$ such that $f(a+x)=\sum_{r\leq n}b_rx^r+O(x^{n+1})$. \end{definition} In view of the above example, it follows immediately from the definition that a $P_1$ function is differentiable at $a$ and for the coefficient $b_1$ in the definition we may take $f'(a)$ (or, more precisely, $b_1^t=f'(a)$). It follows from Lemma \ref{uniqueness-taylor} below that -- in fact -- $b_1=f'(a)^t$ whenever $f$ is $P_n$ for any $n\ge 1$. \begin{definition} Let $U\subseteq K^k$ be open, $f:U\to K^m$ a function and $a\in U$. We say $f$ is $T_n$ at $a$ if there is $\gamma\in \Gamma$ such that for every $x,x'$ with $|x-a|,|x'-a|<\gamma$, we have $f(x)=\sum_{r\leq n}c_r(x')(x-x')^r+O(x-x')^{n+1}$ for $c_r$ a $P_{n-r}$ function at $a$. \end{definition} Note that in the previous definition, the constant implicit in the notation $O(x-x')^{n+1}$ (see Definition \ref{big-o}) does not depend on $x'$; so this definition requires some uniformity with respect to the center $x'$ which is not implied by simply assuming $f$ is $P_n$ at every point of a ball around $a$. Note also that if $f$ is $T_n$ at $c=(b,a)$ then, in particular, $f(z,a+x)=f(z,a)+f_1(z)x+\cdots+f_n(z)x^n+O(x^{n+1})$, for $P_{n-k}$ functions $f_k$ at $b$ (and a constant in $O(x^{n+1})$ uniform in $z$). This follows from the definition by taking $x=(z,a+x)$ and $x'=(z,a)$. Sums and products of $P_n$ (resp. $T_n$) functions are $P_n$ (resp. $T_n$), and a vector function is $P_n$ (resp. $T_n$) if and only if its coordinate functions are $P_n$ (resp. $T_n$). We could also require the stronger condition, $ST_n$, defined similarly to $T_n$ but requiring inductively that the functions $c_r$ be $ST_{n-r}$ for $r=1,\dots,n$ (the base case being $ST_0=T_0$). This definition may be more natural, and one can prove the same results for this notion in what follows, but the stated definition is enough for Lemma \ref{xy}, which is our main motivation. \begin{lemma}\label{uniqueness-taylor} If $f$ is $P_n$ at $a$, then for every number $i\leq n$ the coefficients $b_i$ in Definition \ref{tn} are determined by $f$. \end{lemma} \begin{proof} The problem readily reduces to the case of $f$ a polynomial restricted to some open neighborhood, $U$, of the origin. I.e., we have to show that if $\sum_{i\leq n}b_ix^i=O(x^{n+1})$ on an open neighborhood $U\subseteq K^r$ of $0$, then $b_i=0$ for all $i$. For $x_0$ fixed let $x=tx_0$ and consider the single variable polynomial $P(t)=\sum_{i\le n} (b_ix_0^i)t^i=O(t^{n+1})$. If we knew the result for $r=1$ this would give that $b_ix_0^i=0$. Since $x_0\in U$ was arbitrary and $U$ contains a cartesian product of $r$ infinite sets, this implies $b_i=0$ for all $i$. So we are reduced to proving the result for $r=1$. In this case, if $i$ is the smallest index with $b_i\neq 0$ we get $x^i=O(x^{i+1})$, which is a contradiction.
\end{proof} \begin{proposition} If $g$ is $P_n$ at $a$ and $f$ is $P_n$ at $g(a)$ then the composition $f\circ g$ is $P_n$ at $a$. If $g$ is $T_n$ at $a$ and $f$ is $T_n$ at $g(a)$ then $f\circ g$ is $T_n$ at $a$. \end{proposition} \begin{proof} For the first statement we write \begin{align*} & f(g(a+x))= \\ & f(g(a)+g_1(a)x+\cdots+g_n(a)x^n+O(x^{n+1}))= \\ & f(g(a))+f_1(g(a))h(a,x)+f_2(g(a))h(a,x)^2+\cdots+O(h(a,x)^{n+1})= \\ & fg(a)+b_1(a)x+\cdots+b_n(a)x^n+O(x^{n+1})+O(h(a,x)^{n+1}), \end{align*} where \begin{enumerate} \item $h(a,x)=g(a+x)-g(a)=g_1(a)x+\cdots+g_n(a)x^n+O(x^{n+1})$, \item the second equality is an application of the assumption that $f$ is $P_n$ at $g(a)$, \item the coefficients $b_i$ arise by expanding the expressions \[ f_k(g(a))h(a,x)^k=f_k(g(a))(g_1(a)x+\cdots+g_n(a)x^n+O(x^{n+1}))^k. \] \end{enumerate} To conclude, we note that, as in the proof of Lemma \ref{uniqueness-taylor}, $h(a,x)=O(x)$, and so $O(h(a,x)^{n+1})=O(x^{n+1})$. The proof of the second statement is, essentially, similar: \begin{align*} & f(g(x))=\\ & f(g(x')+g_1(x')(x-x')+\cdots+g_n(x')(x-x')^n+O(x-x')^{n+1})= \\ & f(g(x'))+f_1(g(x'))h(x,x')+f_2(g(x'))h(x,x')^2+\cdots+f_n(g(x'))h(x,x')^n+O(h(x,x')^{n+1})= \\ & f(g(x'))+b_1(x')(x-x')+\cdots+b_n(x')(x-x')^n+O(x-x')^{n+1}+O(h(x,x')^{n+1}), \end{align*} where $h(x,x')=g(x)-g(x')=g_1(x')(x-x')+\cdots+g_n(x')(x-x')^n+O(x-x')^{n+1}= O(x-x')$, and the coordinates of the coefficients $b_k(x')$ are sums and products of the coordinates of the coefficients of $f_i(g(x'))$ and $g_j(x')$ with $i,j\leq k$. By what we have just proved, those are $P_{n-i}$ functions. The constant appearing in $h(x,x')=O(x-x')$ does not depend on $x'$ because the $g_i$ are continuous at $a$. We conclude that the $b_k$ are $P_{n-k}$ at $a$, as claimed. \end{proof} \begin{proposition}\label{generic-regularity-0} Let $K$ be 1-h-minimal and $f:U\to K^m$ a definable function. Then there exists $U'\subseteq U$ definable, open and dense, such that $f$ is $T_n$ at every point of $U'$. In particular, for every definable $f:U\to K^m$ there is a definable open dense subset $U'\subseteq U$ such that $f$ is strictly differentiable in $U'$. \end{proposition} This follows from Taylor's approximation theorem (Proposition \ref{taylor0} in residue characteristic $0$ and Proposition \ref{taylor} in positive residue characteristic). The second statement follows because a $T_1$ function is strictly differentiable. In the next section we show that a strictly differentiable map with invertible derivative, definable in a 1-h-minimal valued field, is a local homeomorphism. Here we show that the local inverse is strictly differentiable. We then proceed to show that the properties $P_n$ and $T_n$ are also preserved by this local inverse, though this latter fact is not used for the proof of our main results. \begin{proposition}\label{inverse-derivative} Suppose $f:U\to V$ is a bijection where $U\subseteq K^n$ and $V\subseteq K^n$ are open. Suppose $f$ satisfies $|f(x)-f(y)-(x-y)|<|x-y|$ for $x,y\in U$ distinct. Assume $f$ is differentiable at $a$. Then $f'(a)$ is invertible, $f^{-1}$ is differentiable at $b=f(a)$ and $(f^{-1})'(b)=f'(f^{-1}(b))^{-1}$. If $f$ is strictly differentiable at $a$, then $f^{-1}$ is strictly differentiable at $b$. \end{proposition} \begin{proof} Note that the hypothesis implies $|f(x)-f(x')|=|x-x'|$. This implies that $f'(a)$ is invertible. Indeed, assume otherwise, and take $x$ close to $a$ such that $f'(a)(x-a)=0$, to get $|f(x)-f(a)|<|x-a|$, a contradiction.
Assume that $f$ is strictly differentiable at $a$. Take $\epsilon>0$ in $\Gamma$. Then there is $0<r\in \Gamma$ such that if $|x-a|,|x'-a|<r$, then $|f(x)-f(x')-f'(a)(x-x')|\le \epsilon |x-x'|$. If we denote $y=f(x)$ and $y'=f(x')$, then we have $|y-y'|=|x-x'|$, so multiplying the above inequality by $f'(a)^{-1}$ we obtain \begin{align*} & |f^{-1}(y)-f^{-1}(y')-f'(a)^{-1}(y-y')|= \\ & |f'(a)^{-1}(f'(a)(x-x')-(f(x)-f(x')))|\le \\ & |f'(a)^{-1}||f(x)-f(x')-f'(a)(x-x')|\le \\ & \epsilon |f'(a)^{-1}||x-x'|= \epsilon |f'(a)^{-1}||y-y'|, \end{align*} where, for a linear map $A$ represented by the matrix $(a_{ij})_{i,j}$, we denote $|A|=\max_{i,j}|a_{ij}|$ and use the ultrametric inequality to get $|Ax|\le |A||x|$, which we apply to obtain the first inequality in the above computation. So we conclude that $|f^{-1}(y)-f^{-1}(y')-f'(a)^{-1}(y-y')|\leq \epsilon |f'(a)^{-1}||y-y'|$ for any $y,y'$ such that $|y-b|=|x-a|<r$ and $|y'-b|=|x'-a|<r$. We have, thus, shown that $f^{-1}$ is strictly differentiable at $b$ and $(f^{-1})'(b)=f'(f^{-1}(b))^{-1}$. \\ To show that $f^{-1}$ is differentiable at $b$ when $f$ is merely differentiable at $a$, substitute $x'=a$ in the above argument. \end{proof} \begin{proposition} Suppose $f:U\to V$ is a bijection where $U\subseteq K^n$ and $V\subseteq K^n$ are open. Suppose $f$ satisfies $|f(x)-f(y)-(x-y)|<|x-y|$ for $x,y\in U$ distinct. Then if $f$ is $P_n$ (resp. $T_n$) at $b$, $f^{-1}$ is $P_n$ (resp. $T_n$) at $f(b)$. \end{proposition} \begin{proof} Denote $a=f(b)$. Note that the hypothesis implies that $|f(x)-f(y)|=|x-y|$ for all distinct $x,y\in U$, so the inverse map $f^{-1}$ is continuous, and in fact satisfies $|f^{-1}(x)-f^{-1}(y)|=|x-y|$ for distinct $x,y\in V$. In particular, $f^{-1}$ is $T_0$ in $V$. Now, assume that $f$ is $P_n$ at $b$, with $n\ge 1$. In particular, by Proposition \ref{inverse-derivative} it is differentiable and $f'(b)$ is invertible. Apply the fact that $f$ is $P_n$ (and see also the discussion following the definition) to get: \[ f(y)-f(b)=f'(b)(y-b)+f_2(b)(y-b)^2+\cdots+f_n(b)(y-b)^n+O(y-b)^{n+1}. \] Rearranging, we get: \[ y-b=f'(b)^{-1}(f(y)-f(b))-f'(b)^{-1}f_2(b)(y-b)^2-\cdots- f'(b)^{-1}f_n(b)(y-b)^n+O(y-b)^{n+1}. \] Putting $y=f^{-1}(x)$, and remembering that $|x-a|=|y-b|$, we conclude \[ f^{-1}(x)-f^{-1}(a)=f'(f^{-1}(a))^{-1}(x-a)+\sum_{2\leq i\leq n}c_i(a)(f^{-1}(x)-f^{-1}(a))^i+O(x-a)^{n+1}, \tag{$\diamond$} \] for some constants $c_i(a)$. Next we proceed to show (by induction on $k\le n$) that $f^{-1}$ is $P_k$ at $a$. As $P_0$ follows from the equality $|f^{-1}(x)-f^{-1}(y)|=|x-y|$, we may assume that $f^{-1}$ is $P_{k-1}$ at $a$. Using this, we can write $f^{-1}(x)-f^{-1}(a)=\sum_{1\leq j<k}b_j(a)(x-a)^j+O(x-a)^k$, and apply a direct computation to obtain that \[ c_i(a)(f^{-1}(x)-f^{-1}(a))^i= \sum_{i\leq j\leq k}d_{ij}(a)(x-a)^j+O(x-a)^{k+1} \] for some constants $d_{ij}$. Note that, as $i\ge 2$, we obtain the improved error $O(x-a)^{k+1}$. Substituting this in $(\diamond)$, the conclusion follows. Now suppose $f$ is $T_n$ at $b$. The proof in this case is similar: \[f^{-1}(x)-f^{-1}(x')=f'(f^{-1}(x'))^{-1}(x-x')+\sum_{2\leq i\leq n}c_i(f^{-1}(x'))(f^{-1}(x)-f^{-1}(x'))^i+O(x-x')^{n+1},\] so, as above, if $f^{-1}$ is $T_{k-1}$ we obtain \[f^{-1}(x)-f^{-1}(x')=f'(f^{-1}(x'))^{-1}(x-x')+\sum_{2\leq i\leq k}d_i(x')(x-x')^i+O(x-x')^{k+1}.\] Here note that $f'$ is $P_{n-1}$ at $b$ and so $f'(f^{-1}(x'))^{-1}$ is $P_{k-1}$ at $a$.
Also, following the above argument, we see that the coordinates of $d_i(x')$ are sums and products of functions of the form $b_{i'}(x')$ with $1\leq i'<i$, for $b_{i'}(x')$ a $P_{k-1-i'}$ function at $a$ (by the induction hypothesis that $f^{-1}$ is $T_{k-1}$), and functions of the form $c_{i'}(f^{-1}(x'))$ for $c_{i'}(y')$ a $P_{k-i'}$ function at $b$, $i'\leq i$ (by the assumption that $f$ is $T_n$). So $d_i(x')$ is $P_{k-i}$ at $a$. \end{proof} The next lemma will be important in our study of the differential structure of definable groups. For the statement, recall that $O_x$ means that the constant implicit in the notation depends on $x$, see Definition \ref{big-o}. \begin{lemma}\label{xy} Let $f:U\times V\to K^r$ be a definable function, where $U\subseteq K^n$ and $V\subseteq K^m$ are open sets around $0$. Suppose $f(x,y)$ is $T_2$ at $(0,0)$, and $f(x,y)=O(x,y)^3$. If $axy+f(x,y)=O_x(y^2)$, then $a=0$. \end{lemma} \begin{proof} By the definition of $T_2$ (with $x=x'$, $y'=0$) we get $f(x,y)=f_0(x)+f_1(x)y+O(y^2)$, for $f_1$ a $P_1$ function at $0$ and $f_0$ a $P_2$ function at $0$. Fixing $x$ and expanding the Taylor polynomial of $f(x,y)$, the uniqueness of Taylor coefficients (Lemma \ref{uniqueness-taylor}) gives, using our assumption, $axy+f_0(x)+f_1(x)y=0$. Expanding $f_0,f_1$ around $0$ and keeping in mind $f(x,y)= O(x,y)^3$ we get $f_0(x)= O(x^3)$ and $f_1(x)= O(x^2)$. Indeed, we have $f_0(x)=b_0+b_1x+b_2x^2+O(x^3)$ and $f_1(x)=c_0+c_1x+O(x^2)$, so $f(x,y)=b_0+b_1x+b_2x^2+c_0y+c_1xy+O(x,y)^3=O(x,y)^3$, so from the uniqueness of the Taylor coefficients we get $b_0=b_1=b_2=c_0=c_1=0$. Now from $axy=O(x,y)^3$ we get $a=0$, by the uniqueness of Taylor coefficients again. \end{proof} \section{Strictly differentiable definable maps}\label{smooth-map} In this section we prove an inverse function theorem for definable strictly differentiable maps in 1-h-minimal valued fields. This is done by adapting a standard argument from real analysis using Banach's fixed point theorem. In the present section we use definable spherical completeness to obtain a definable version of Banach's fixed point theorem, implying, almost formally, the desired inverse function theorem. From the inverse function theorem we deduce results on the local structure of immersions and submersions in the usual way. We then proceed to prove a generic version of the theorem on the local structure of functions of constant rank (Proposition \ref{constant-rank}). This last result is obtained only generically. The reason is that a definable function whose partial derivative with respect to a variable $x$ is $0$ on an open set $U$ need not be locally constant in $x$ on $U$. For that reason, we give a different argument for a weaker result, see Proposition \ref{generic-relative-locally-constant} and the discussion preceding it. \\ Throughout the rest of this section, we fix an $\aleph_0$-saturated 1-h-minimal valued field $K$. \\ We start with a fixed point theorem, mentioned in \cite[Remark 2.7.3]{hensel-min}. We first note that a version of definable spherical completeness of $1$-h-minimal fields (\cite[Lemma 2.7.1]{hensel-min}) holds in positive residue characteristic: \begin{lemma}\label{spherical-completeness} Suppose $K$ has positive residue characteristic $p$. Suppose $\{B_i\}_i$ is a definable chain of open balls or a definable chain of closed balls. Suppose, further, that for every $i$ there is $j$ such that $\text{rad}(B_j)\leq |p|\,\text{rad}(B_i)$.
Then $\bigcap_i B_i\neq \emptyset$. \end{lemma} \begin{proof} The proof is similar to that of spherical completeness in residue characteristic $0$, see \cite[Lemma 2.7.1]{hensel-min}. It is enough to consider the $1$-dimensional case, since the higher dimensional case follows by considering the coordinate projections. Note also that our assumption implies that the chain $\{B_i\}$ has no minimal element (as such an element would have valuative radius $0$). The closed case follows from the open case as follows. Given a definable chain $\{B_i\}_{i\in I}$ of closed balls, for each $i$ let $r_i$ be the valuative radius of $B_i$ and let $B_i'$ be the unique open ball $B\subseteq B_i$ of valuative radius $r_i$ with the additional property that $B\supseteq B_j$ for all $j<i$. Obviously, $\bigcap_i B_i=\bigcap_i B_i'$ (unless the chain $B_i$ has a minimal element, in which case there is nothing to prove). Note that, in the above notation, the map $B_i\mapsto r_i$ is injective, so there is no harm in assuming that $\{B_i\}$ is indexed by a subset of $\Gamma$. Thus, our chain $\{B_i\}$ has index set interpretable in $RV$, so by \cite[Proposition 2.3.2]{hensel-minII} there is a finite set $C$ $m$-preparing the chain $\{B_i\}$ for some $m\in {\mathbb{N}}$. We claim that $C\cap B_i\neq \emptyset$ for all $i\in I$. This would finish the proof, since $C$ is finite. Assume, therefore, that this is not the case, and let $i_0\in I$ be such that $B_{i_0}\cap C=\emptyset$. By assumption we can find $i<i_0$ such that $r_i<|p^m|r_{i_0}$. Then $B_i$ is a ball $m$-next to $C$, and since our chain has no minimal element, any ball $B\varsubsetneq B_i$ that is an element of our chain is not $m$-prepared by $C$, a contradiction. \end{proof} Note that by \cite[Example 1.5]{BWHal} infinitely ramified $1$-h-minimal fields of positive residue characteristic need not be definably spherically complete. Thus, the extra condition in the assumption of the above lemma is not superfluous. \begin{proposition}\label{fixed-point} Let $B_r=\{x\in K^n: |x|\leq r\}$. Suppose $f:B_r\to B_r$ is a definable function. Assume that for distinct $x,y\in B_r$ we have \begin{enumerate} \item $|f(x)-f(y)|<|x-y|$ if the residue characteristic is $0$; \item $|f(x)-f(y)|\leq |p||x-y|$ if the residue characteristic is $p>0$. \end{enumerate} Then $f$ has a unique fixed point in $B_r$. \end{proposition} \begin{proof} Uniqueness is immediate from the hypothesis. For existence take the family of balls of the form $B(a)_{|f(a)-a|}$. It is a definable chain of balls indexed by $a\in B_r$: indeed, if $a,b\in B_r$ are distinct and the corresponding balls are disjoint, then $|f(a)-f(b)|=|a-b|$, as the distance between points in disjoint balls does not change, contradicting our assumption on $f$. Note that in positive residue characteristic the additional hypothesis in Lemma \ref{spherical-completeness} is satisfied, because $|f(f(a))-f(a)|\leq |p||f(a)-a|$ by assumption 2 on $f$. By the appropriate version of definable spherical completeness of 1-h-minimal fields (\cite[Lemma 2.7.1]{hensel-min} for residue characteristic $0$, Lemma \ref{spherical-completeness} otherwise), we obtain a point $x$ in the intersection of all the balls. Then $x$ is a fixed point of $f$. Indeed, if we assume otherwise then for $y=f(x)$ we have $|f(y)-y|<|f(x)-x|$ by the hypothesis. On the other hand, if $a$ is arbitrary then, as $x\in B(a)_{|f(a)-a|}$, one has $|f(x)-f(a)|\leq |x-a|\leq |f(a)-a|$ and so $|f(x)-x|\leq |f(a)-a|$; applying this to $a=y$ gives a contradiction and finishes the proof.
\end{proof} Just as in real analysis, this fixed point theorem implies an inverse function theorem. \begin{proposition}\label{bilipschitz-inverse-function} Suppose $f:U\to K^n$ is a definable function from an open set $U\subseteq K^n$ satisfying the following ``bilipschitz condition'': for every $x,y\in U$ distinct, \begin{enumerate} \item $|f(x)-f(y)-(x-y)|<|x-y|$ if the residue characteristic is $0$; \item $|f(x)-f(y)-(x-y)|\leq |p||x-y|$ if the residue characteristic is $p>0$. \end{enumerate} Then $f(U)$ is open and $f$ is a homeomorphism from $U$ to $f(U)$. If $f$ is (strictly) differentiable then $f^{-1}$ is (strictly) differentiable. \end{proposition} \begin{proof} Injectivity of the map follows directly from the hypothesis. The same assumptions also imply that if $x,y\in U$ are distinct then $|f(x)-f(y)|=|x-y|$, implying continuity of the inverse. The main difficulty is showing that $f(U)$ is open. Translating, we may assume $0\in U$ and $f(0)=0$. We have to find an open neighborhood of $0$ in $f(U)$. Take $r>0$ such that $0\in B_r\subseteq U$. Then $B_{r}\subseteq f(U)$. Indeed, if $|a|\leq r$ then, by the same reasoning as above, the function $g(x)=x+a-f(x)$ satisfies $g(B_r)\subseteq B_r$. By the assumptions on $f$ this implies that $g$ satisfies the hypothesis of Proposition \ref{fixed-point}. So $g(x_0)=x_0$ for some $x_0$, that is, $a=f(x_0)$, as claimed. Differentiability (and strict differentiability) of $f^{-1}$ now follow from Proposition \ref{inverse-derivative}. \end{proof} We can finally formulate and prove the inverse function theorem for $1$-h-minimal fields: \begin{proposition}\label{inverse-function} Suppose $f:U\to K^n$ is a definable function from an open set $U\subseteq K^n$. Suppose $f$ is strictly differentiable at $a$ and $f'(a)$ is invertible. Then there is an open set $V\subseteq U$ around $a$ such that $f(V)$ is open and $f:V\to f(V)$ is a bijection whose inverse is strictly differentiable at $f(a)$. \end{proposition} \begin{proof} By the definition of strict differentiability, the function $f'(a)^{-1}f$ satisfies the hypothesis of the previous proposition in a small open ball around $a$. The conclusion follows. \end{proof} We do not know whether there can exist a definable function $f:U\to K$ with continuous partial derivatives which is not strictly differentiable in $U$\footnote{In real analysis it is well known that a function $f: U \to {\mathbb{R}}$ is ${\mathcal C}^1$ in $U$ if and only if it is strictly differentiable there.}. Clearly, sums, products and compositions of strictly differentiable functions are strictly differentiable, and so are locally analytic functions. Moreover, strict differentiability is first order definable, and therefore extends to elementary extensions. Also, by the generic Taylor approximation theorem, in the $1$-h-minimal context, any definable function on an open subset of $K^n$ is strictly differentiable on a dense open subset; see Proposition \ref{generic-regularity-0}. \\ Our next goal is to study definable functions of constant rank. We first note that, without the assumption of definability, a strictly differentiable function whose derivative vanishes identically need not be locally constant: \begin{example} Consider a function $f:\mathcal{O}\to\mathcal{O}$ that is locally constant on $\mathcal{O}\setminus \{0\}$ but near $0$ grows like $x^2$. Such a function $f$ will be strictly differentiable, with $f'\equiv 0$, but $f$ is not locally constant at $0$.
\end{example}
Roughly, a function as in the above example involves an infinite number of choices, so it is not definable. In contrast we have:
\begin{proposition}\label{locally-constant}
Let $f:U\to K^m$ be a function definable on an open set $U\subseteq K^n$. Assume $f$ is continuous. Assume $f$ is differentiable with derivative $0$ on an open dense subset of $U$. Then $f$ is locally constant with finite image.
\end{proposition}
\begin{proof}
We proceed by induction on $n$, the dimension of the domain. We may assume $m=1$. First assume $n=1$. By the valuative Jacobian property in \cite[Corollary 3.1.6]{hensel-min} and \cite[Corollary 3.1.3]{hensel-minII}, there is a finite set $C\subseteq U$ such that on $U\setminus C$ the function $f$ is locally constant. This implies that the fibers of $f|_{U\setminus C}$ are of dimension 1, and so the image of $f$ is finite. As $f$ is continuous, it is locally constant: the fibers form a finite partition of $U$ into closed, and hence open, sets on which $f$ is constant.
Now assume the proposition is valid for $n$, and suppose $U\subseteq K^n\times K$. We denote by $\pi_1:K^n\times K\to K^n$ the projection onto the first factor and by $\pi_2:K^n\times K\to K$ the projection onto the second factor. Let $V\subseteq U$ be an open dense subset such that $f$ is differentiable on $V$ with derivative $0$. Denote $C=U\setminus V$. Let $T=\{x\in K^n\mid \dim(\pi_1^{-1}(x)\cap C)=1\}$; then $\dim(T)<n$. We conclude that there is an open dense set $W\subseteq K^{n}$ such that $\dim(\pi_1^{-1}(x)\cap C)=0$ for all $x\in W$. Similarly, there is an open dense set $P\subseteq K$ such that $\dim(\pi_2^{-1}(x)\cap C)<n$ for every $x\in P$. Shrinking $V$ to $V\cap \pi_1^{-1}(W)\cap \pi_2^{-1}(P)$ we may assume that if $(x,y)\in V$ then $V_x$ is an open dense subset of $U_x$ and $V_y$ is an open dense subset of $U_y$. By the induction hypothesis and the $n=1$ case, we conclude that $f_x$ and $f_y$ are locally constant. This implies that the fiber $f^{-1}f(x,y)$ has dimension $n+1$. Indeed, if $B$ is an open neighborhood of $y$ on which $f_x$ is constant and for each $y'\in B$ we take $R_{y'}\subseteq U_{x}$ an open neighborhood of $x$ on which $f_{y'}$ is constant, then $f$ is constant on $\bigcup_{y'\in B} R_{y'}\times\{y'\}$. By dimension considerations, we conclude that the image $f(V)$ is finite. As $f$ is continuous, $f^{-1}f(V)$ is closed in $U$, and as $V$ is dense in $U$, we conclude $f(U)=f(V)$ is finite. As $f$ is continuous, we conclude $f$ is locally constant as before.
\end{proof}
Given the previous proposition we might expect that a definable strictly differentiable function $f:U\to K^m$ with open domain $U\subseteq K^r\times K^s$ and satisfying $D_yf(x,y)=0$ is locally of the form $f(x,y)=g(x)$. Unfortunately this is not true.
\begin{example}
Take $f:\mathcal{O}\times \mathcal{O}\to \mathcal{O}$ defined by $f(x,y)=0$ if $|y|> |x|$ and $f(x,y)=x^2$ if $|y|\leq |x|$. Then $f$ is strictly differentiable, $f(x,\cdot)$ is locally constant, but $f$ is not of the form $g(x)$ near $(0,0)$; a verification is sketched after the next paragraph.
\end{example}
It is due to this pathology that the conclusion of Proposition \ref{constant-rank} below only holds generically.
\\
Below, we let $D_yf(x,y)$ be the differential of the function $f_x$, given by $f_x(y)=f(x,y)$; we call this the derivative of $f$ with respect to $y$ (where $y$ can be a tuple of variables).
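Here is one way to verify the example above (a sketch, not part of the original argument). At every point other than the origin, $f$ is locally either constant or equal to $x^2$, hence strictly differentiable; at the origin, strict differentiability with derivative $0$ follows from the estimate
\[
|f(x,y)-f(x',y')|\;\leq\;\max(|x|,|x'|)\cdot\bigl|(x,y)-(x',y')\bigr|.
\]
Indeed, if both points satisfy $|y|\leq|x|$ and $|y'|\leq|x'|$, the left-hand side is $|x^2-x'^2|\leq\max(|x|,|x'|)\,|x-x'|$; if both satisfy the reverse inequalities the left-hand side is $0$; and in the mixed case, say $|y|\leq|x|$ and $|y'|>|x'|$, the left-hand side is $|x|^2$, while either $|x-x'|\geq|x|$, or else $|x'|=|x|$ and hence $|y-y'|=|y'|>|x|$, so in both subcases the right-hand side is at least $|x|^2$. Since $\max(|x|,|x'|)\to 0$ as both points tend to $(0,0)$, the estimate gives strict differentiability at $(0,0)$ with derivative $0$. On the other hand, on every neighborhood of $(0,0)$ and for every small $x\neq 0$ the function $f(x,\cdot)$ takes both values $0$ and $x^2\neq 0$, so $f$ is not of the form $g(x)$ near the origin.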
\bar begin{proposition}\langlebel{generic-relative-locally-constant} Suppose $U\subseteqset K^n$, and $V\subseteqset K^r$ are open and $f:U\times V\to K^m$ is a definable function such that $f$ is continuous and $D_yf=0$ on a dense open subset of $\dom(f)$. Then there exists an open dense set $U'\subseteqset U$ such that $f|_{U'}$ is locally of the form $g(x)$. \end{proposition} \bar begin{proof} The set, $D$ of points $x\in U$ such that for every point of $\{x\}\times V$ $f$ is locally of the form $g(x)$ is definable. More precisely, $x\in D$ exactly when for all $y\in V$, there exists an open ball $B\ni (x,y)$, such that for all $(x',y'), (x',y'')\in B$ we have $f(x',y')=f(x',y'')$. Thus, the statement that $D$ has dense interior in $U$ is a first order expressible property, so we may assume that $\bar bar acl=\dcl$, see Fact \ref{acl=dcl} and the subsequent remark. In the course of the proof we may replace $U$ by a dense open subset a finite number of times. Fix, $W\subseteqset U\times V$, a dense open set where $f$ is differentiable and its derivative with respect to $y$ is $0$. Shrinking $U$ we may assume $W_x\subseteqset V$ is dense open for all $x\in U$. By Proposition \ref{locally-constant} we know that $f_x$ is locally constant with finite image for every $x\in U$ (recall that $f_x(y):=f(x,y)$). The sets $\textbf{m}athrm{Im}(f_x)$ form a definable family of finite sets indexed by $x\in U$, so there is uniform bound, $n$, on their cardinalities. Denoting $A_k=\{x\in U: |\textbf{m}athrm{Im}(f_x)|=k\}$ we have $U=A_1\cup\cdots\cup A_n$, so the union of the interiors of the $A_k$ form a dense open subset of $U$. Since the closures of the $\textbf{m}athrm{Int}(A_k)$ are pairwise disjoint, we may assume that $|\textbf{m}athrm{Im}(f_x)|=k$ for all $x$ and some fixed $k$. Since we assumed that $\bar bar acl=\dcl$, there are definable functions $r_1,\dots, r_k:U\to K^m$ such that $\{r_1(x),\dots,r_k(x)\}=\textbf{m}athrm{Im}(f_x)$. By generic continuity of definable functions (Proposition \ref{generic-continuity}) we may assume that $r_i$ are all continuous. Then the sets $B_i=\{(x,y): f(x,y)=r_i(x)\}$ form a finite partition of $U\times V$ into closed, and so open subsets. \end{proof} The next two results, describing the local structure of definable maps of full rank, are standard applications of the inverse function theorem: \bar begin{proposition}\langlebel{immersion} Suppose $U\subseteqset K^k$ is a definable open set and $f:U\to K^k\times K^r$ is a definable, strictly differentiable map. Suppose that for some $a\in U$ the derivative $f'(a)$ has full rank. Then there is a ball $a\in B\subseteqset U$, a ball $B_2\ni 0$, a definable open set $V\subseteqset K^k\times K^r$, and a definable strict diffeomorphism $\varphi:V\to B\times B_2$ such that $f(B)\subseteqset V$ and the composition $\varphi f:B\to B\times B_2$ is the inclusion $b\textbf{m}apsto (b,0)$. \end{proposition} \bar begin{proof} After a coordinate permutation in the target we may assume the principal $k\times k$ minor of $f'(a)$ is invertible. Consider the function $g:U\times K^r\to K^k\times K^r$ defined as $g(x,y)=f(x)+(0,y)$. Then $g$ is strictly differentiable and has invertible derivative at $(a,0)$ so by the inverse function theorem, Proposition \ref{inverse-function}, we can find a ball $B$ around $a$ and a ball $B_2$ around $0$, and open set $f(a)\in V$ such that $g$ restrict to a strict diffeomorphism $g:B\times B_2\to V$. 
If $i:B\to B\times B_2$ is the inclusion $i(b)=(b,0)$ then we get that $gi=f$, so we conclude the statement is valid with $\varphi=g^{-1}$. \end{proof} \bar begin{proposition}\langlebel{submersion} Suppose $U\subseteqset K^k\times K^r$ is a definable open set, and $f:U\to K^k$ is a definable strictly differentiable map. Let $a\in U$. Suppose $f'(a)$ has full rank. Then there exists a definable open set $a\in U'\subseteqset U$, a ball $f(a)\in B$, a ball $B_2\subseteq K^r$, and a definable strict diffeomorphism $\varphi:B\times B_2\to U'$, such that $f(U')\subseteqset B$ and the composition $f\varphi:B\times B_2\to B$ is the projection $(b,c)\textbf{m}apsto b$. \end{proposition} \bar begin{proof} After applying a coordinate permutation to $U$ we may assume that the principal $k\times k$ minor of $f'(a)$ is invertible. Consider the function $g:U\to K^k\times K^r$ defined as $g(x,y)=(f(x,y),y)$. Then $g$ is strictly differentiable with invertible differential, so by the inverse function theorem, Proposition \ref{inverse-function}, there is an open set $a\in U'\subseteqset U$ such that $g(U')$ is open and $g:U'\to g(U')$ is a strict diffeomorphism. Making $U'$ smaller we may assume $g(U')=B\times B_2$ is a product of two balls. Then if $p:B\times B_2\to B$ is the projection $p(b,c)=b$, we get that $pg=f$ and so the statement is valid with $\varphi=g^{-1}$. \end{proof} We can finally prove our result on the local structure of definable functions of constant rank: \bar begin{proposition}\langlebel{constant-rank} Let $U\subseteqset K^k\times K^r$ and $V\subseteqset K^k\times K^s$ be open definable sets and let $f:U\to V$ be a definable strictly differentiable map such that for all $a\in U$ the rank of $f'(a)$ is constant equal to $k$. Then there exist $U'\subseteqset U$ and $V'\subseteqset V$ definable open sets, such that $f(U')\subseteqset V'$ and there are definable strict diffeomorphisms $\varphi_1:B_1\times B_2\to U'$ and $\varphi_2:V'\to B_1\times B_3$, such that the composition $\varphi_2f\varphi_1:B_1\times B_2\to B_1\times B_3$ is the map $(a,b)\textbf{m}apsto (a,0)$. \end{proposition} \bar begin{proof} Take a point $(b,c)\in U$. After a coordinate permutation in $U$ and $V$ we may assume $f'(b,c)$ has its first $k\times k$ minor invertible. Then by the theorem on submersions, Proposition \ref{submersion}, applied to the composition of $f:U\to K^k\times K^s$ with the projection $K^k\times K^s\to K^k$ onto the first factor, we may assume that $U$ is of the form $B_1\times B_2$ and $f$ is of the form $f(x,y)=(x,g(x,y))$. As $f'$ has constant rank equal to $k$ we conclude that $D_yg=0$. By the Proposition \ref{generic-relative-locally-constant} we may assume $g(x,y)$ is of the form $g(x,y)=g(x)$ (after passing to smaller open balls of $B_1$ and $B_2$ not necessarily containing $(b,c)$). Now the function $h:B_1\to K^k\times K^s$ defined by $h(x)=(x,g(x))$ is a definable strictly differentiable immersion so by the theorem on immersions \ref{immersion} we may after shrinking $B_1$ and composing with a definable diffeomorphism in the target assume that $h$ is of the form $h(x)=(x,0)$. This finishes the proof. \end{proof} \section{Strictly differentiable definable manifolds}\langlebel{manifold-section} In this section we define definable manifolds in a 1-h-minimal field, and variants. These are manifolds which are covered by a finite number of definable charts, with compatibility functions of various kinds. 
Throughout, we keep the convention that $K$ is an $\aleph_0$-saturated $1$-h-minimal field.
In case $\acl_K$ is not the same as $\dcl_K$ it is better to take ``\'etale domains'' instead of open subsets of $K^n$ as the local model of the manifold. This is because the cell decomposition, as provided by Proposition \ref{cell-decomposition}, decomposes a definable set into a finite number of pieces, each of which is only a finite cover of an open set, instead of an open set. We describe this notion formally below:
\begin{definition}
Let $S\subseteq K^m$. A definable function $f:S\to K^n$ is (topologically) étale if it is a local homeomorphism. In other words, for every $x\in S$ there is a ball $x\in B$ such that $f(B\cap S)$ is open and the inverse map $f(B\cap S)\to B\cap S\to K^m$ is continuous.
\end{definition}
Informally, we think of étale maps as similar to open immersions, and will denote such maps accordingly, e.g., $i:U\to K^n$. We now proceed to describing the differential structure of \'etale maps (or, rather, \'etale domains):
\begin{definition}
Suppose $i:U\to K^n$ and $j:V\to K^m$ are étale maps. A definable function $f:U\to V$ is strictly differentiable at $x\in U$ if there are balls $x\in B$ and $f(x)\in B'$ such that $i:B\cap U\to i(B\cap U)$, $j:B'\cap V\to j(B'\cap V)$ are homeomorphisms onto open sets, such that $f(B\cap U)\subseteq B'\cap V$, and the map $i(B\cap U)\xrightarrow{i^{-1}} B\cap U\xrightarrow{f} B'\cap V\xrightarrow{j} j(B'\cap V)$ is strictly differentiable at $i(x)$. In this case the derivative $f'(x)$ is defined as the derivative of $i(B\cap U)\to j(B'\cap V)$. The function $f:U\to V$ is called $T_k$ at $x$ if the composition $i(B\cap U)\to j(B'\cap V)$ is $T_k$ at $i(x)$.
\end{definition}
Note that with this definition the given inclusion $U\subseteq K^r$ is not necessarily strictly differentiable, because the local inverses $i(U\cap B)\to K^r$ of the map $i:U\to K^n$ are only topological embeddings, so not necessarily strictly differentiable.
\\
\textbf{For the rest of this section let $\mathcal{P}$ stand for any one of the following adjectives: topological, strictly differentiable, or $T_n$}.
\begin{definition}\label{weak-manifolds-def}
A definable weak $\mathcal{P}$-$n$-manifold is a definable set, $M$, equipped with a finite number of definable injections, $\varphi_i:U_i\to M$, where each $U_i$ comes equipped with an étale map $r_i:U_i\to K^n$. We require further that the sets $U_{ij}:=\varphi_i^{-1}(\varphi_j(U_j))$ are open in $U_i$, and that the transition maps $U_{ij}\to U_{ji}$, $x\mapsto \varphi_j^{-1}\varphi_i(x)$, are $\mathcal{P}$-maps. We further define:
\begin{enumerate}
\item A definable weak $\mathcal{P}$-manifold is a definable weak $\mathcal{P}$-$n$-manifold for some $n$.
\item A weak $\mathcal{P}$-manifold is equipped with a topology making the structure maps, $\varphi_i$, open immersions.
\item A morphism of definable weak $\mathcal{P}$-manifolds is a definable function $f:M\to N$, such that for any charts $\varphi_i:U_i\to M$ and $\tau_j:V_j\to N$ the set $W_{ij}=\varphi_i^{-1}f^{-1}\tau_j(V_j)$ is open in $U_i$ and the map $W_{ij}\to V_j$ given by $x\mapsto \tau_j^{-1}f\varphi_i(x)$ is a $\mathcal{P}$-map.
\item A definable $\mathcal{P}$-$n$-manifold is a definable weak $\mathcal{P}$-$n$-manifold where the $U_i$ are open subsets of $K^n$ (and the maps $U_i\to K^n$ are inclusions).
\item A morphism of definable $\textbf{m}athcal{P}$-manifolds is a morphism of weak definable $\textbf{m}athcal{P}$-manifolds. \end{enumerate} \end{definition} Definable weak $K$-manifolds are, immediately from the definition, (abstract) manifolds over $K$. As such, definable differentiable weak manifold inherit the classical differential structure. For the sake of completeness we remind the relevant definitions: \bar begin{definition} If $M$ is a definable strictly differentiable weak manifold and $x\in M$, then the tangent space of $M$ at $x$, $T_x(M)$, is the disjoint union of $T_i=T_{\varphi_i^{-1}(x)}(U_i)=K^n$ for $(U_i,\varphi_i)$ a chart around $x$, under the identification of the spaces $T_i$ and $T_j$ associated with the charts $U_i,U_j$ via the map $(\varphi_j^{-1}\varphi_i)'(\varphi_i^{-1}(x))$. For a strictly differentiable definable morphism $f:M\to N$ of definable strictly differentiable weak manifolds, we have a map of $K$-vector spaces $f'(x):T_x(M)\to T_{f(x)}(N)$ given by the differential of the map appearing in Definition \ref{weak-manifolds-def} above. \end{definition} As usual, once we have a chart around a point in a weak strictly differentiable manifold, we get an identification of $T_x(M)$ with $K^n$, but distinct charts may give distinct isomorphisms. \bar begin{definition} A definable (weak) $\textbf{m}athcal{P}$-Lie group is a group object in the category of definable (weak) $\textbf{m}athcal{P}$-manifolds. \end{definition} \bar begin{lemma}\langlebel{generic-regularity-etale} Suppose $i:U\to K^n$ and $j:V\to K^m$ are étale and $f:U\to V$ is a definable map. Then $f$ is continuous in an open dense subset of $U$. Also $f$ is strictly differentiable and $T_k$ in an open dense subset of $U$. \end{lemma} \bar begin{proof} For the statement about continuity, note that $V$ has the subspace topology (of $V\subseteqset K^r$) so we may assume $V=K^r$. If we denote $U'$ the interior (relative to $U$) of the set of points of $U$ where $f$ is continuous, then in every ball $B$ where $i$ is a homeomorphism $i:B\cap U\to i(B\cap U)$ we get that $B\cap U\cap U'$ is dense in $B\cap U$, by generic continuity of definable functions. We conclude that $U'$ is dense as required. For strict differentiability and $T_k$, by the above we may assume that $f$ is continuous. Let $U'$ be the interior of the set of all points where $f:U\to V$ is strictly differentiable and $T_k$. This is a definable open set. By generic differentiability and generic $T_k$ property for functions defined on open sets, for every point $x\in U$ there is an open ball $B\ni x$, such that $U\cap B\cap U'$ is dense in $U\cap B$. Thus, we conclude that $U'$ is dense in $U$. \end{proof} Note that the previous lemma implies that a (weak) definable topological manifold $M$ contains an open dense subset $U\subseteqset M$, which admits a structure of a (weak) definable $T_n$ manifold extending the given (weak) definable topological manifold structure. As a consequence of Proposition \ref{generic-regularity} below we have that this structure on $U$ is unique up to isomorphism and restriction to a definable open dense subset. For that reason, several of the statements below hold (essentially unaltered) for definable weak manifolds (without further assumptions on differentiability or $T_n$). For the sake of clarity of the exposition, we keep these assumptions. 
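Before stating the next proposition, we record some basic illustrations (standard examples, not taken from the text) of the notions just defined. The additive group $(K^n,+)$, the multiplicative group $K^\times$, and
\[
\mathrm{GL}_n(K)=\{A\in K^{n\times n}\;:\;\det(A)\neq 0\}\subseteq K^{n^2}
\]
are definable Lie groups, each with a single chart given by the identity map on an open definable subset of affine space. The group operations are given coordinatewise by polynomials, and inversion in $K^\times$ and in $\mathrm{GL}_n(K)$ by rational functions with nonvanishing denominator (for matrices, $A^{-1}=\det(A)^{-1}\operatorname{adj}(A)$). Such maps are locally analytic, hence strictly differentiable (cf. the remarks after Proposition \ref{inverse-function}), so these are in particular definable topological and strictly differentiable Lie groups; since the operations are locally analytic, one expects the $T_k$ conditions to hold for them as well.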
\begin{proposition}\label{generic-regularity}
If $f:M\to N$ is a definable function and $M$, $N$ are definable weak $\mathcal{P}$-manifolds, then $f$ is a $\mathcal{P}$-map on an open dense subset of $M$.
\end{proposition}
\begin{proof}
Considering the charts of $M$ we may assume $M=U\to K^n$ is étale. Now if $(V_i,\tau_i)$ are charts for $N$, then the sets $f^{-1}\tau_i(V_i)$ cover $U$, and so the union of their interiors is open dense in $U$. So we may assume $N=V\to K^m$ is étale. This case is Lemma \ref{generic-regularity-etale}.
\end{proof}
Recall that the local dimension of a definable set $X$ at a point $x$ is defined as
\[
\dim_xX=\min\{\dim(B\cap X): B\text{ a definable open neighborhood of } x\text{ in } M\}.
\]
The next lemma is standard:
\begin{lemma}\label{local-dimension}
Suppose $M$ is a definable topological weak manifold. Let $X\subseteq M$ be a definable subset. Then $\dim(X)=\max_{x\in X}\dim_{x}(X)$. If $G$ is a definable weak topological group and $H$ is a subgroup, then the dimension of $H$ is the local dimension of $H$ at any point.
\end{lemma}
\begin{proof}
If $M=U_1\cup\cdots\cup U_n$ is a covering by open sets and $\varphi_i:U_i\to V_i$ is a homeomorphism onto a set $V_i$, with an étale map $V_i\to K^n$, then ${\rm dim}(X)=\max_i({\rm dim}(\varphi_i(X\cap U_i)))$, and the local dimension of $X$ at $x\in X\cap U_i$ is the local dimension of $\varphi_i(X\cap U_i)$ at $\varphi_i(x)$, so we reduce to the case where $M=V$ is étale over $K^n$. In fact, the result is true whenever $M\subseteq K^m$ carries the subspace topology, as then the local dimension of $X\subseteq M$ at a point $x$ equals the local dimension of $X$ at $x$ in $K^m$, and so the result follows from Proposition \ref{dimension}(3).
If $G$ is a definable weak topological group and $H$ is a subgroup, then the local dimension of $H$ at any point $h\in H$ is constant, independent of $h$. Indeed, the left translation $L_h:G\to G$ is a definable homeomorphism that sends $e$ to $h$ and satisfies $L_h(H)=H$, so ${\rm dim}_e(H)={\rm dim}_h(H)$.
\end{proof}
\begin{proposition}\label{etale-example}
Suppose $T\subseteq K^m$ is such that there is a coordinate projection $\pi:T\to U$ onto an open subset $U\subseteq K^n$ whose fibres are finite of constant cardinality $s$. Assume the associated map $f:U\to (K^{m-n})^{[s]}$ is continuous. Then $T\to K^n$ is étale.
\end{proposition}
\begin{proof}
Let $x\in T$. Replacing $U$ by a smaller neighborhood around $\pi(x)$ we may assume, using Fact \ref{topology-group-action}, that $f$ lifts to a continuous function $g:U\to (K^{m-n})^s$, $g=(g_1,\cdots,g_s)$. In this case one gets that $T$ is homeomorphic to $\bigsqcup_{i=1}^sU$ over $U$, via the map $(a,i)\mapsto (a,g_i(a))$.
\end{proof}
\begin{lemma}\label{glue}
Suppose $M=\bigcup_{i=1}^r\varphi_i(U_i)$ where $\varphi_i:U_i\to M$ are definable functions, such that the $U_i$ are definable (weak) $\mathcal{P}$-$n$-manifolds. Suppose further that for all $i,j$ the sets $U_{ij}:=\varphi_i^{-1}(\varphi_j(U_j))$ are open in $U_i$, and the transition maps $U_{ij}\to U_{ji}$ given by $x\mapsto \varphi_j^{-1}\varphi_i(x)$ are $\mathcal{P}$-maps. Then $M$ has a unique structure of a definable (weak) $\mathcal{P}$-$n$-manifold such that each $\varphi_i:U_i\to M$ is an open immersion.
\end{lemma}
The proof is straightforward and omitted.
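As a concrete instance of Proposition \ref{etale-example} (a standard example, not taken from the text; recall that in our setting $K$ is a henselian valued field of characteristic $0$), consider the definable set of square roots
\[
T=\{(x,y)\in K^{2}\;:\;y^2=x,\ x\neq 0\},\qquad \pi:T\to K,\ (x,y)\mapsto x .
\]
The image $U=\pi(T)$ is the set of nonzero squares, which is open: by Hensel's lemma, any $x'$ with $|x'-x|$ sufficiently small relative to $|x|$ (how small depends on $|2|$) is a square whenever $x$ is. The fibres of $\pi$ over $U$ have constant cardinality $2$, and the associated map $U\to K^{[2]}$, $x\mapsto\{y,-y\}$ where $y^2=x$, is continuous, since on a small ball around a given point of $U$ the two square roots can be followed continuously, again by Hensel's lemma. Hence $T\to K$ is étale. Finite covers of open sets of this kind are exactly the local models that the notion of an étale domain is meant to accommodate when $\acl\neq\dcl$.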
\begin{proposition}\label{large-equals-open-dense}
Suppose $M$ is a definable weak topological manifold. Then $X\subseteq M$ is large if and only if the interior of $X$ in $M$ is dense in $M$.
\end{proposition}
\begin{proof}
Because the dimension of $M\setminus X$ is the maximum of its local dimensions, by Lemma \ref{local-dimension}, both conditions are local, and so we may assume $M=U\subseteq K^n$ is open. Here the result follows from dimension theory.
\end{proof}
\begin{proposition}\label{definable-is-constructible-manifold}
Suppose $M$ is a weak definable topological manifold. Suppose $X\subseteq M$ is definable. Then $X$ is a finite union of locally closed definable subsets of $M$.
\end{proposition}
\begin{proof}
There is an immediate reduction to the case where $M=U\to K^n$ is étale. In this case $U$ has the subspace topology $U\subseteq K^s$ for some $s$. So it is enough to prove this for $X\subseteq K^s$. This is a consequence of Proposition \ref{definable-is-constructible}.
\end{proof}
In case $\acl=\dcl$ a weak manifold is generically a manifold:
\begin{proposition}\label{weak-manifold-is-generically-strong}
Suppose $\acl=\dcl$. If $M$ is a definable weak $\mathcal{P}$-manifold, then there is a definable open dense subset $U\subseteq M$ which is a definable $\mathcal{P}$-manifold.
\end{proposition}
\begin{proof}
There is an immediate reduction to the case in which $i:M=U\to K^n$ is étale. Let $r$ be a uniform bound for the cardinality of the fibers of $U$. If we denote by $X_k\subseteq K^n$ the set of points $x$ such that $i^{-1}(x)$ has cardinality $k$, and by $U_k\subseteq X_k$ the interior of $X_k$, then $\bigcup_{k\leq r} U_k$ is open dense in $K^n$. Replacing $U$ with $i^{-1}(U_k)$ we may assume that the nonempty fibers of $i$ have constant cardinality, say $s$. From the assumption $\acl=\dcl$ we conclude that the map $i(U)\to (K^r)^{[s]}$ lifts to a definable map $i(U)\to (K^r)^s$. There is an open dense subset $V'\subseteq i(U)$ such that $V'\to i(U)\to (K^r)^s$ is a $\mathcal{P}$-map, see Lemma \ref{generic-regularity-etale}, and we conclude that $i^{-1}(V')\cong \bigsqcup_{i=1}^sV'$ over $V'$, which is clearly a $\mathcal{P}$-manifold.
\end{proof}
It seems possible that in this situation a weak manifold is already a manifold, but we do not need this so we do not try to prove it.
\\
The next couple of results are not used in the main theorems, but may be of independent interest.
\begin{definition}
Suppose $M$ and $N$ are definable strictly differentiable weak manifolds, and $f:M\to N$ is a definable strictly differentiable function. Then $f$ is called an immersion if the derivative $f'(x)$ is injective at all points $x\in M$; $f$ is called an embedding if $f$ is an immersion and a homeomorphism onto its image; and $f$ is called a submersion if the derivative $f'(x)$ is surjective for all $x\in M$.
\end{definition}
These notions have the expected properties.
\begin{proposition}
Suppose $f:M\to N$ is a strictly differentiable definable map of strictly differentiable definable weak manifolds. If $f$ is an immersion then $M$ satisfies the following universal property: for every strictly differentiable weak definable manifold $P$ and $g:P\to M$, the function $g$ is strictly differentiable and definable if and only if $fg$ is strictly differentiable and $g$ is definable and continuous.
If $f$ is an embedding and $g:P\to M$ is as above, then $g$ is a strictly differentiable definable map if and only if $fg$ is a strictly differentiable definable map.
\end{proposition}
\begin{proposition}
If $f:M\to N$ is a surjective submersion, then a map $g:N\to K$ is a strictly differentiable definable function if and only if the composition $gf$ is strictly differentiable and definable.
\end{proposition}
These two properties are a consequence of the theorems on the local structure of immersions and submersions, Propositions \ref{immersion} and \ref{submersion}. We leave the details for the interested reader to fill.\\
Suppose $M$ is a definable strictly differentiable weak manifold. If $M\to N$ is a surjective map of sets, it determines at most one structure of a definable strictly differentiable weak manifold on $N$ in such a way that $M\to N$ is a submersion. Also, an injective map $N\to M$ determines at most one structure of a strictly differentiable weak manifold on $N$ in such a way that $N\to M$ is an embedding. The subsets $N\subseteq M$ admitting such a structure are called submanifolds of $M$. We also get that if $N$ is a definable topological space, and $N\to M$ is a definable and continuous function, then there is at most one structure of a strictly differentiable definable manifold on $N$ extending the given topology and for which $N\to M$ is an immersion. In other words, the strictly differentiable weak manifold structure that makes $N\to M$ an embedding is determined by the set $N$, and the strictly differentiable weak manifold structure that makes $N\to M$ an immersion is determined by the topological space $N$.
\begin{proposition}\label{generic-immersion}
Suppose $M,N$ are definable strictly differentiable weak manifolds, and let $f:M\to N$ be an injective definable map. Then there is a definable open dense subset $U\subseteq M$ such that $f|_U$ is an immersion.
\end{proposition}
\begin{proof}
By Proposition \ref{generic-regularity} we may assume $f$ is strictly differentiable. We have to show that the interior of the set $\{x\in M: f'(x) \text{ is injective} \}$ is dense in $M$. If this is not the case, we can find an open nonempty subset of $M$ on which $f$ is not an immersion at any point. So suppose $M$ is an open subset of $K^n$, $N$ is an open subset of $K^m$ and $f$ is not an immersion at any point. For dimension reasons $n\leq m$. If we define $X_k$ to be the set of points $x$ of $M$ such that $f'(x)$ is of rank $k$, then $X_0\cup\dots\cup X_{n-1}=M$, and so if $U_r$ is the interior of $X_r$ we have that $U_0\cup\cdots\cup U_{n-1}$ is open dense in $M$. So we may assume that $f$ is of constant rank. This contradicts the result in Proposition \ref{constant-rank}, since the map $(x,y)\mapsto (x,0)$ is not injective.
\end{proof}
The following facts are standard, and are probably well known:
\begin{fact}\label{finite-cover-trivial}
Suppose $X$ is a Hausdorff space and $p:X\to Y$ is a surjective continuous map which is a local homeomorphism with fibers of constant cardinality $s$. Then the map $t:Y\to X^{[s]}$ given by the fibers of $p$ is continuous.
\end{fact}
\begin{proof}
Let $\pi:X^{s}\setminus \Delta\to X^{[s]}$ be the canonical projection. Take $y\in Y$ and $\{x_1,\dots,x_s\}=p^{-1}(y)=t(y)$. A basic open neighborhood of $t(y)$ is of the form $\pi(U_1\times\dots\times U_s)$ for $x_k\in U_k$ open and $U_k$ pairwise disjoint. Shrinking $U_k$ we may assume $p|_{U_k}$ is a homeomorphism onto an open set. If $V=\bigcap_kp(U_k)$, then $t(V)\subseteq \pi(U_1\times\dots\times U_s)$, which proves continuity of $t$ at $y$.
\end{proof}
\begin{fact}\label{finite-cover-rigid}
Let $X,Y,Z$ be topological spaces, $p:X\to Z$, $q:Y\to Z$ be surjective continuous functions and $f:X\to Y$ a continuous bijection such that $qf=p$. Assume that $X$ and $Y$ are Hausdorff spaces, and $p:X\to Z$ has finite fibers of constant cardinality, $s$. If the map $t:Z\to X^{[s]}$, given by $x\mapsto p^{-1}(x)$, is continuous then $f$ is a homeomorphism.
\end{fact}
\begin{proof}
Since $f$ is continuous and bijective, we only need to show that it is open, which is a local property. Fix some $x\in X$ and $z=p(x)$. By Fact \ref{topology-group-action} the map $Z\to X^{[s]}$ lifts, locally near $z$, to a continuous map $(l_1,\dots, l_s):Z\to X^s$. Shrinking $Z$ to this neighborhood, and reducing $X$ and $Y$ accordingly, we may assume that $X$ is homeomorphic to $\bigsqcup_{i\leq s}Z$ over $Z$, via the homeomorphism $F_l:\bigsqcup_{i\leq s}Z\to X$, given by $(i,z)\mapsto l_i(z)$. To see that this is a homeomorphism, note that the image of the $i$-th cofactor via $F_l$ is the set $X_i=\{x\in X\mid x=l_ip(x)\}$, which is closed in $X$ (because $X$ is Hausdorff). Since there are only finitely many $X_i$ (and they are pairwise disjoint) they are also open. Finally, the inverse of $F_l$ restricted to $X_i$ coincides with $p$, which is continuous. Similarly, we have a homeomorphism $F_{fl}:\bigsqcup_{i\leq s}Z\to Y$, which is compatible with $f$ in the sense that $fF_l=F_{fl}$. So $f$ restricted to $F_l(Z)$ is a homeomorphism, so $f$ is open at $x$. Since $x$ was arbitrary, the conclusion follows.
\end{proof}
\begin{proposition}\label{generic-embedding}
Let $M, N$ be strictly differentiable weak manifolds, and $f:M\to N$ be an injective definable function. Then there is a dense open $U\subseteq M$ such that $f|_U$ is an embedding.
\end{proposition}
\begin{proof}
By Proposition \ref{generic-immersion} we may assume that $f$ is an immersion. If $V_1,\dots,V_n$ is a finite open cover of $N$ and the statement is valid for each $f:f^{-1}V_i\to V_i$, then it is also valid for $f$. So we may assume $N=V\to K^m$ is étale. From the definition we have that $V\subseteq K^d$ has the subspace topology. We have already seen that $f:M\to V$ is an immersion, so if it is a topological embedding into $K^d$, then it is an embedding into $V$. So we may assume $N=K^m$.
Now consider $U_1\cup\dots\cup U_n=M$ a finite open cover of $M$. Assume $f|_{U_i}$ is an embedding. Define $U_i'=\text{Int}(U_i\setminus \bigcup _{j<i}U_j)$, and $U_i''=U_i'\setminus \bigcup_{j\neq i}f^{-1}\cl(f(U_j'))$. Note that $\bigcup_i U_i'\subseteq M$ is an open dense set. Also, note that $f(U_i'\setminus U_i'')= \bigcup_{j\neq i}f(U_i') \cap \cl(f(U_j'))\subseteq \bigcup_{j\neq i}\cl(f(U_j'))\setminus f(U_j')$. So we conclude that $\dim(U_i'\setminus U_i'')<\dim(M)$, by Proposition \ref{dimension-boundary-function}. Thus, replacing $U_i$ with $U_i''$ we may assume $\cl (f(U_i))\cap f(U_j)=\emptyset$ for distinct $i,j$. In this case one verifies that $f$ is a topological embedding. We are thus reduced to the case where $M=U\to K^n$ is étale.
Consider for each $I\subseteq \{1,\dots,m\}$ of size $n$, the set $A_I$ of $x\in U$ such that the $I$-th minor of $f'(x)$ is invertible.
As $U=\bar bigcup_IA_I$ we conclude that $U'=\bar bigcup_I\text{Int}(A_I)$ is open dense in $U$, so by the reduction in the previous paragraph we may assume that the composition of $f:U\to K^m$ with the projection onto the first $n$ coordinates $p:K^m\to K^n$ is an étale immersion. If $s$ is a uniform bound for the size of the fibers of $U$ over $K^n$, then we can take $A_k$ the set of $x\in K^m$ such that the fiber $(pf)^{-1}(x)$ has $k$ elements, and consider $U'=\bar bigcup_{k\leq s}\text{Int}(A_k)$. So we may assume that if $V=pf(U)$, the fibers of $U$ over $V$ have the same size $s$. In this case the function $V\to U^{[s]}$, given by $x\textbf{m}apsto (pf)^{-1}(x)$, is continuous, and the function $f$ is topological homeomorphism $U\to f(U)$, see Facts \ref{finite-cover-trivial} and \ref{finite-cover-rigid}. \end{proof} We can now prove a $1$-h-minimal version of Sard's Lemma (compare with \cite[Theorem 2.7]{WilComp} for an analogous result in the o-minimal setting). Namely, given a definable strictly differentiable morphism, $f$, of definable weak manifolds, call a value $x$ of $f$ regular when $f$ is a submersion at every point of $f^{-1}(x)$. The statement is, then, that the set of singular values is small: \bar begin{proposition}\langlebel{sard} Suppose $M$ and $N$ are definable strictly differentiable weak manifolds. If $f:M\to N$ is a strictly differentiable map, then there exists an open dense subset $U\subseteqset N$ such that $f:f^{-1}(U)\to U$ is a submersion. \end{proposition} \bar begin{proof} We have to see that the image via $f$ of the set of points $x\in M$ such that $f'(x)$ is not surjective, is nowhere dense in $N$. This property is expressible by a first order formula, so we may assume that $\bar bar acl=\dcl$, see Fact \ref{acl=dcl} and the subsequent remark. Let $m=\dim(N)$. Let $X\subseteqset M$ a definable set such that for all $x\in X$, $f'(x)$ is not surjective. We have to see that $\dim f(X)<m$. We do this by induction on the dimension of $X$. The base case, when $X$ is finite, is trivial. The dimension of $f(X)$ is the maximum of the local dimensions at points, see Proposition \ref{local-dimension}. So we may assume $N\subseteqset K^m$ is open. Covering $M$ by a finite number of charts we may assume $M\to K^n$ is étale, say $M\subseteqset K^r$. Then by Proposition \ref{cell-decomposition} there exists a finite partition of $X$ into definable sets such that if $X'$ is an element of the partition, there exists a coordinate projection $p:K^r\to K^l$, which restricted to $X'$ is a surjection $X'\to U$ onto an open subset $U\subseteqset K^l$ with finite fibers of constant cardinality. If we prove that $\dim f(X')<m$ for every element $X'$ of the partition then also $\dim f(X)<m$. So we may assume there is a coordinate projection $p:X\to U$ onto an open subset $U\subseteqset K^l$, such that $p^{-1}(u)$ has $t$ elements for all $u\in U$. From the assumption $\bar bar acl=\dcl$, we get that there are definable sections $s_1,\dots,s_t:U\to X$, such that $\{s_1(u),\cdots,s_t(u)\}=p^{-1}(u)$, for all $u\in U$. As $X=\bar bigcup_{i\le t}s_i(U)$, we may assume $p:X\to U$ is a bijection with inverse $s:U\to X$. The map $s:U\to M$ becomes strictly differentiable in an open dense $V\subseteqset U$, see Proposition \ref{generic-regularity}. As $s(U\setminus V)$ has smaller dimension than $X$, we may assume that $s$ is strictly differentiable. Now note that $s$ has image in $X$ and so the composition $U\to M\to N$ has derivative which is not surjective at any point of $U$. 
So we have reduced to the case in which $M=U\subseteq K^n$ is open and $X=U$. If we consider $A_k\subseteq U$, the set defined by $A_k=\{x\in U: f'(x)\text{ has rank }k\}$, then $\bigcup_{k<m}A_k=U$ and so $\bigcup_{k<m}\text{Int}(A_k)$ is open and dense in $U$. We conclude by the induction hypothesis that the image of $U\setminus \bigcup_{k<m}\text{Int}(A_k)$ is nowhere dense in $N$, and so we may assume that $f'(x)$ has constant rank $k$ in $U$, for some $k<m$. Consider the set $Y=\{x\in U: \dim f^{-1}f(x)\geq \dim(U)-k\}$. Then $Y$ is definable, because dimension is definable in definable families. Also, $Y$ has dense interior by the constant rank theorem, Proposition \ref{constant-rank}. So once more by the induction hypothesis we may assume $f^{-1}f(x)$ has dimension at least $\dim(U)-k$ for all $x\in U$. Then the dimension of $f(U)$ is at most $k$, by the additivity of dimension.
\end{proof}
If $M\to N$ is a map of strictly differentiable weak manifolds and $y\in N$ is a regular value, then one can show that $f^{-1}(y)\subseteq M$ is a strictly differentiable weak submanifold.
\section{Definable Lie groups}\label{groups-section}
In this section we show that every definable group is a definable weak Lie group and that the germ of a definable weak Lie group morphism is determined by its derivative at the identity.
\\
The proof of the following lemma was communicated to us by Martin Hils.
\begin{lemma}\label{generic-in-group-0}
Suppose $G$ is a group $a$-definable in a pregeometric theory, and that $X,Y\subseteq G$ are non-empty $a$-definable sets of dimension smaller than $G$. If $g\in G$ is such that $\dim(gX\cap Y)=\dim(X)$, then $\dim(g/a)\leq \dim(Y)$. In particular, there exists $g\in G$ such that $\dim(gX\cap Y)<\dim(X)$.
\end{lemma}
\begin{proof}
Denote $d=\dim(X)$ and $d'=\dim(Y)$. Suppose $\dim(gX\cap Y)=d$. Note that $d\leq d'$. Let $h'\in gX\cap Y$ be such that $\dim(h'/ag)=d$. Let $h=g^{-1}h'$. As $h\in X$ we have $d\geq \dim(h/a)\geq \dim(h/ag)=\dim(h'/ag)=d$. The first inequality holds because $h\in X$, the third relation because $h$ and $h'$ are inter-definable over $ag$, and the fourth by the choice of $h'$. We conclude that $h$ and $g$ are algebraically independent over $a$. Then we obtain $d'\geq \dim(h'/a)\geq \dim(h'/ah)=\dim(g/ah)=\dim(g/a)$. The first inequality holds because $h'\in Y$, the third relation because $h'$ and $g$ are inter-definable over $ah$, and the fourth because $h$ and $g$ are algebraically independent over $a$.
For the second statement, note that if $g\in G$ is such that $\dim(g/a)=\dim(G)$, or more generally $\dim(g/a)>d'$, then $\dim(gX\cap Y)<\dim(X)$.
\end{proof}
The next lemma generalizes Lemma 2.4 of \cite{Pi5} for o-minimal theories. Pillay's proof can be seen to generalize, with some effort, to geometric theories. We give a different proof:
\begin{lemma}\label{generic-in-group}
Suppose $G$ is a group definable in a pregeometric theory and suppose $X\subseteq G$ is such that $\dim(G\setminus X)<\dim(G)$. Then a finite number of translates of $X$ cover $G$.
\end{lemma}
\begin{proof}
Suppose we have $g_0,\cdots,g_n\in G$ such that $\dim(G\setminus(\bigcup_k g_kX))=m$. By Lemma \ref{generic-in-group-0} applied to $G\setminus (\bigcup_kg_kX)$ and $G\setminus X$, there is $g\in G$ such that $\dim\bigl(g(G\setminus\bigcup_kg_kX)\cap (G\setminus X)\bigr)<m$. Setting $g_{n+1}=g^{-1}$, we have $G\setminus\bigcup_{k\leq n+1}g_kX=g^{-1}\bigl(g(G\setminus\bigcup_{k\leq n}g_kX)\cap(G\setminus X)\bigr)$, so $\dim(G\setminus\bigcup_{k\leq (n+1)}g_kX)<m$. Since the dimension of a nonempty definable set cannot strictly decrease indefinitely, after finitely many steps the complement is empty; that is, finitely many translates of $X$ cover $G$, which finishes the proof.
\end{proof} \bar begin{lemma}\langlebel{large-generates} Suppose $G$ is a definable group in a pregeometric theory and $V\subseteqset G$ is large. Then every $g\in G$ is a product of two elements in $V$. \end{lemma} \bar begin{proof} The proof of \cite[Lemma 2.1]{Pi5} works: if we take $h\in G$ generic over $g$, then $h^{-1}g$ is also generic over $g$, and so $h,h^{-1}g\in V$ and their product is $g$. \end{proof} \bar begin{proposition}\langlebel{definable-is-lie} A definable group can be given the structure of a definable strictly differentiable weak $T_k$-Lie group. The forgetful functor from definable strictly differentiable weak Lie groups to definable groups is an equivalence of categories. If $\bar bar acl=\dcl$ the forgetful functor from definable strictly differentiable $T_k$-Lie groups to definable groups is an equivalence of categories. \end{proposition} \bar begin{proof} That the forgetful functor is full follows from Proposition \ref{generic-regularity}. Indeed, suppose $G$ and $H$ are strictly differentiable or $T_k$-Lie groups, and let $f:G\to H$ be a definable group morphism. Then by Proposition \ref{generic-regularity} there is an open dense $U\subseteqset G$ such that $f:U\to H$ is strictly differentiable or $T_k$. If $g_0\in U$ is arbitrary, and $g\in G$, consider the formula $f=L_{f(g)f(g_0)^{-1}}fL_{g_0g^{-1}}$, where we are denoting $L_h$ the left translate by $h$. Now, $L_{g_0g^{-1}}$ is a strict diffeomorphism or $T_k$-isomorphism which sends $g$ to $g_0$, and $L_{f(g)f(g_0)^{-1}}$ is a strict diffeomorphism or $T_k$-isomorphism. We conclude that $f$ being strictly differentiable or $T_k$ at $g_0$ implies that $f$ is strictly differentiable of $T_k$ at $g$. To see that the forgetful functor is essentially surjective one follows the proof of \cite[Proposition 2.5]{Pi5}. Namely, let $G$ be of dimension $n$. Decompose $G$ as in Proposition \ref{cell-decomposition}, and let $V_0\subseteqset G$ be the union of the $n$-dimensional pieces $U_0,\cdots, U_r$. Give $V_0$ the structure of a weak strictly differentiable manifold with charts the inclusions $U_i\to V_0$, see Proposition \ref{etale-example}. Note that $V_0^{-1}\subseteqset G$ is large in $G$, as the inverse function is a definable bijection sending the large subset $V_0$ onto $V_0^{-1}$. As the intersection of two large sets is large, we conclude that $V_0\cap V_0^{-1}$ is large in $G$. A fortiori, $V_0\cap V_0^{-1}$ is large in $V_0$ and so it contains an open dense subset of $V_0$, see Proposition \ref{large-equals-open-dense}. Let $V_1\subseteqset V_0\cap V_0^{-1}$ be open dense in $V_0$ such that the inverse function on $V_1$ (and into $V_0$) is strictly differentiable and $T_k$, see Proposition \ref{generic-regularity}. In a similar way, we have that $V_0\times V_0\cap m^{-1}(V_0)$ is large in $G\times G$ Indeed, $m^{-1}(V_0)$ is the inverse image of the large subset $G\times V_0$ of $G\times G$ under the definable bijection $(\id,m):G\times G\to G\times G$. In the same way as before, we find $Y_0\subseteqset V_0\times V_0\cap m^{-1}(V_0)$ open and dense in $V_0\times V_0$ such that the multiplication map $Y_0\to V_0$ is strictly differentiable and $T_k$. Now we take \[ V_1'=\{g\in V_1: (h,g),(h^{-1},hg)\in Y_0\text{ for all }h\text{ generic over }g\}. \] Note that $V_1'$ is definable, because $g\in V_1'$ is equivalent to $\dim(G\setminus X_g)<n$ for $X_g=\{h\in G: (h,g),(h^{-1},hg)\in Y_0\}$, and dimension is definable in definable families in geometric theories. 
Note also that $V_1'$ is large in $G$, because if $g\in G$ is generic and $h\in G$ is generic over $g$, then $(h,g)$ is generic in $G\times G$, and $(h^{-1},hg)$, being the image of a definable bijection at $(h,g)$ is also generic in $G\times G$, so they belong to $Y_0$, because $Y_0$ is large in $G\times G$. Now take $V_2$ the interior of $V_1'$ in $V_0$ and $V=V_2\cap V_2^{-1}$. Then $V_2$ is large in $V_0$ by Proposition \ref{large-equals-open-dense}, and so it is also large in $G$. So we conclude that $V$ is an open dense subset of $V_0$. Define also $Y=\{(g,h): g,h, gh\in V, (g,h)\in Y_0\}$, then $Y$ is open dense in $V_0\times V_0$. This is because $Y$ is large in $G\times G$, with arguments as above, and it is open in $Y_0$, because multiplication is continuous in $Y_0$. Then we have shown: \bar begin{enumerate} \item $V$ is large in $G$. \item $Y$ is dense open subset of $V\times V$, and multiplication $Y\to V$ is strictly differentiable and $T_k$. \item Inversion is a strictly differentiable $T_k$-map from $V$ onto $V$. \item If $g\in V$ and $h\in G$ is generic in $G$ over $g$ then $(h,g),(h^{-1},hg)\in Y$ \end{enumerate} For the last item, note that $h,hg,h^{-1}\in G$ are generic, and so they belong to $V$. Also, because $g\in V_1'$, one has that $(h,g), (h^{-1},hg)\in Y_0$. From this one gets \bar begin{enumerate}[(a)] \item For every $g,h\in G$ the set $Z=\{x\in V: gxh\in V\}$ is open and $Z\to V$ given by $x\textbf{m}apsto gxh$ is strictly differentiable and $T_k$. \item For every $g,h\in G$, the set $W=\{(x,y)\in V\times V: gxhy\in V\}$ is open in $V\times V$ and the map $W\to V$ given by $(x,y)\textbf{m}apsto gxhy$ is strictly differentiable and $T_k$. \end{enumerate} Indeed, for (a), assume $x_0\in Z$, take $h_1$ generic over $h$ and $k$ generic over $g,x,h,h_1$. Take $h_2=h_1^{-1}h$. Note that $h_1,h_2\in V$. Now one writes $f(x)=gxh$ as a composition of strictly differentiable and $T_k$ functions defined on an open neighborhood of $x_0$ in the following way. Consider the set $Z_1=\{x\in V:(kg,x)\in Y, (kgx,h_1)\in Y, (kgxh_1,h_2)\in Y, (k^{-1},kgxh)\in Y\}$, then by item 2 we have that $Z_1$ is open and the map $x\textbf{m}apsto gxh=k^{-1}(((kgx)h_1)h_2)$ is a composition of strictly differentiable and $T_k$ functions. Also $x_0\in Z_1$ by item 4. Similarly for (b) given $(x_0,y_0)\in W$ the set \[ W_1=\{(x,y)\in V: (kg,x),(kgx,h_1), (kgxh_1,h_2), (kgxh,y), (k^{-1},kgxhy)\in Y\} \] is open by item (3), contains $(x_0,y_0)$ by item (4) and in $W_1$ the required map is a composition of strictly differentiable and $T_k$ functions. By (1) above and Lemma \ref{generic-in-group} a finite number of translates, $g_0V,\dots, g_nV$, cover $G$. Consider the maps $\varphi_i:V\to G$ given by $\varphi_i(x)=g_ix$. It is straightforward to verify, using (a), (b) and (3) above that these charts endow $G$ with a (unique) structure of a strictly differentiable or $T_k$ manifold, as in Lemma \ref{glue}, and with this structure $G$ is a Lie group. For example, to see that the transition maps are strictly differentiable or $T_k$, we have to see that the sets $V\cap g_i^{-1}g_jV$ are open, and the maps $\varphi_{i,j}: V\cap g_i^{-1}g_jV\to V\cap g_j^{-1}g_iV$ given by $x\textbf{m}apsto g_j^{-1}g_ix$ are strictly differentiable or $T_k$. This is a particular case of (a). Similarly, (b) translates into the multiplication being strictly differentiable or $T_k$ and (a) and (3) translate into the inversion being strictly differentiable and $T_k$. 
When $\bar bar acl=\dcl$ an appropriate version of cell decomposition in Proposition \ref{cell-decomposition} gives the result by repeating the above proof. Alternatively, we can see it directly from the result we have just proved and Proposition \ref{weak-manifold-is-generically-strong} and Lemma \ref{generic-in-group} (and the appropriate version of the Lemma \ref{glue}). Indeed, if $G$ is a definable group in a 1-h-minimal field with $\bar bar acl=\dcl$, then $G$ has the structure of a weak strictly differentiable or $T_k$-Lie group. By Proposition \ref{weak-manifold-is-generically-strong} there is an open dense $U\subseteqset G$ such that $U$ is a strictly differentiable or $T_k$-manifold. By Proposition \ref{large-equals-open-dense} and Lemma \ref{generic-in-group} a finite number of translates of $U$ cover $G$, $g_1U\cup\cdots\cup g_nU=G$. Then the functions $\varphi_i:U\to G$ given by $x\textbf{m}apsto g_ix$ form a gluing data for $G$ which makes it a strictly differentiable or $T_k$-Lie group. \end{proof} As the previous result implies that every definable group $G$ admits a structure of a definable weak Lie group which is unique up to a unique isomorphism, whenever we mention a property of the weak Lie group structure we understand it with respect to this structure. \bar begin{definition} A definable strictly differentiable local Lie group is given by a definable open set containing a distinguished point $e\in U\subseteqset K^n$, a definable open subset $e\in U_1\subseteqset U$, and definable strictly differentiable maps $U_1\times U_1\to U$ denoted as $(a,b)\textbf{m}apsto a\cdot b$ and $U_1\to U$ denoted as $a\textbf{m}apsto a^{-1}$, such that there exists $e\in U_2\subseteqset U_1$ definable open such that \bar begin{itemize} \item $a\cdot e=e\cdot a=a$ for $a\in U_2$. \item If $a,b,c\in U_2$ then $a\cdot b\in U_1, b\cdot c\in U_1$ and $(a\cdot b)\cdot c=a\cdot (b\cdot c)$. \item If $a\in U_2$ then $a^{-1}\in U_1$ and $a\cdot a^{-1}=a^{-1}\cdot a=e$. \end{itemize} Given two definable strictly differentiable local Lie groups, $U$ and $V$, a definable strictly differentiable local Lie group morphism is given by a definable strictly differentiable map $f:U'\to V_1$ for a $e\in U'\subseteqset U_1$ open, with $U_1$ and $V_1$ as in the above definition, and such that $f(e)=e$, $f(a\cdot b)=f(a)\cdot f(b)$ and $f(a^{-1})=f(a)^{-1}$ for $a\in U'$. Also two such maps $f_1$ and $f_2$ are identified as morphisms if they have the same germ around $0$, in other words, if there is a definable open neighborhood of the identity $W\subseteqset \dom(f_1)\cap \dom(f_2)$ such that $f_1|_W=f_2|_W$. \end{definition} It is common to only consider local groups where $e=0$, and translating we see that every local group is isomorphic to one with this condition. In this case we denote the distinguished element by $e$ whenever we emphasize its role as a local group identity. We will usually identify a local group with its germ at $e$. In those terms, the prototypical example of a local Lie group is the germ around the identity of a Lie group. \\ The following fact is a well known application of the chain rule. We give the short proof for completeness: \bar begin{fact}\langlebel{m'} Suppose $U$ is a local definable strictly differentiable Lie group. Then the multiplication map $m:U_1 \times U_1\to U_0$ has derivative the $m'(0)(u,v)=u+v$. The inverse $i:U_1\to U_0$ has derivative $i'(0)(x)=-x$. 
The $n$-power $p_n:U_n\to U_0$ has derivative $p_n'(0)(x)=nx$ \end{fact} \bar begin{proof} The formula for $m'(0)$ follows formally from the equations $m(x,0)=x, m(0,y)=y$. Indeed if $m(x,y)=ax+by+o(x,y)$, then plugging $y=0$ we obtain $a=1$ and plugging $x=0$ we obtain $b=1$. Here we are using the small $o$ notation, $f=o(x,y)$, meaning that for all $\epsilon>0$ there is an $r$ such that if $|(x,y)|<r$ then $|f(x,y)|\leq \epsilon|(x,y)|$, and we are using the uniqueness of derivatives for the strictly differentiable functions $m(x,0)$ and $m(0,y)$. From this the formula for $i'(0)$ follows from $m(x,i(x))=0$ and the chain rule. The formula for $p_n$ follows inductively from the chain rule and $p_n(x)=m(p_{n-1}(x),x)$. \end{proof} We give some results on subgroups and quotient groups These are not needed for the main applications. \bar begin{proposition} Suppose $f:G\to H$ is a surjective definable group morphism. Then $f$ is a submersion. \end{proposition} \bar begin{proof} This is a consequence of Sard's Lemma, Proposition \ref{sard}. \end{proof} \bar begin{fact}\langlebel{constructible-nowhere-dense} Suppose $X$ is a topological space and $Y\subseteqset X$ is a finite union of locally closed subsets of $X$. Then every open nonempty subset of $X$ contains an open nonempty subset which is disjoint from $Y$ or contained in $Y$. \end{fact} \bar begin{proof} The property mentioned is closed under Boolean combinations and is true for open subsets. \end{proof} \bar begin{proposition}\langlebel{subgroups-are-closed} Suppose $G$ is a definable group and $H\subseteqset G$ is a definable subgroup. Then $H$ is closed in $G$. \end{proposition} \bar begin{proof} Recall that $H$ is a finite union of locally closed subsets of $G$, see for instance Proposition \ref{definable-is-constructible-manifold}. So by applying Fact \ref{constructible-nowhere-dense} to $H\subseteqset \bar bar{H}$, we conclude that $H$ has nonempty relative interior in $\bar bar{H}$. As $H$ is a subgroup we conclude by translation that $H$ is open in $\bar bar{H}$. An open subgroup is the complement of some of its translates, so it is also closed. We conclude that $H=\bar bar{H}$ is closed. \end{proof} \bar begin{proposition}\langlebel{subgroups-are-submanifolds} Suppose $H\subseteqset G$ is a subgroup of $G$. Then with the structure of weak definable strictly differentiable manifolds on $G$ and $H$, the inclusion $i:H\to G$ is a closed embedding. \end{proposition} \bar begin{proof} By Proposition \ref{generic-embedding}, there is an open dense set $U\subseteqset H$ such that $i|_{U}$ is an embedding. Replacing $U$ by $U'=U\setminus \cl(i(H)\setminus i(U))$ if necessary, and keeping in mind Proposition \ref{dimension-boundary-function} to show $U'$ is large in $H$, we may assume $i(U)$ is open in $i(H)$. By translation we conclude that $i$ is an immersion. Also for an open set $V\subseteqset H$ we have that $i(V)=i(\bar bigcup_{h\in H} hU\cap V)=\bar bigcup_hhi(U\cap h^{-1}V)$ is open in $i(H)$. Since $i$ is injective, the conclusion follows. \end{proof} As a consequence of the theorem on constant rank functions, Proposition \ref{constant-rank}, we have the following result: \bar begin{corollary}\langlebel{dimension-of-kernel} Suppose $U$ and $V$ are definable strictly differentiable local Lie groups and let $g,f:U\to V$ be definable strictly differentiable local Lie group morphisms. If we denote $Z=\{x\in U: g(x)=f(x)\}$, then $\dim_e Z=\dim(\ker (f'(e)-g'(e)))$. 
In particular, if $G$ and $H$ are definable strictly differentiable weak Lie groups and $g,f$ are definable strictly differentiable Lie group morphisms, then $\dim\{x: f(x)=g(x)\}=\dim(\ker(f'(e)-g'(e)))$.
\end{corollary}
\begin{proof}
The second result follows from the first because of Lemma \ref{local-dimension}. In order to keep the proof readable we only verify the first statement in the case of weak Lie groups. The proof for local Lie groups is similar.
By translating in $G$ we see that the map $f\cdot g^{-1}:G\to H$ has, at any point of $G$, derivative of constant rank equal to ${\rm dim}(G)-k$, for $k=\dim(\ker(f'(e)-g'(e)))$. Indeed, if $u\in G$, then $(f\cdot g^{-1})L_u=L_{f(u)}R_{g(u)^{-1}}(f\cdot g^{-1})$, where $L_u$, $R_u$ denote the left and right translates by $u$, respectively. By the chain rule we get $(f\cdot g^{-1})'(u)L_u'(e)=(L_{f(u)}R_{g(u)^{-1}})'(e)(f\cdot g^{-1})'(e)$. As $L_u$ and $L_{f(u)}R_{g(u)^{-1}}$ are definable strict diffeomorphisms, their derivatives at any point are vector space isomorphisms, so we conclude that the rank of $(f\cdot g^{-1})'(u)$ equals the rank of $(f\cdot g^{-1})'(e)=f'(e)-g'(e)$ (see Fact \ref{m'}), as desired.
By the theorem on constant rank functions, Proposition \ref{constant-rank}, we conclude that there are nonempty open sets $U\subseteq G$ and $V\subseteq H$, balls $B_1, B_2$ and $B_3$ around the origin, and definable strictly differentiable isomorphisms $\varphi_1:U\to B_1\times B_2$ and $\varphi_2:V\to B_1\times B_3$, such that $(f\cdot g^{-1})(U)\subseteq V$ and $\varphi_2(f\cdot g^{-1})=\alpha\varphi_1$ for $\alpha:B_1\times B_2\to B_1\times B_3$ the function $(x,y)\mapsto (x,0)$. Translating in $G$ we may assume $e\in U$. More precisely, from the formula $(f\cdot g^{-1})L_u=(L_{f(u)}R_{g(u)^{-1}})(f\cdot g^{-1})$ discussed before, if $u\in U$ maps to $(0,0)$ under $\varphi_1$, then $e\in u^{-1}U$, so we may replace $(U,V,\varphi_1,\varphi_2)$ by $(u^{-1}U, f(u)^{-1}Vg(u), \varphi_1L_u, \varphi_2L_{f(u)}R_{g(u)^{-1}})$. Note also that $\varphi_1(e)=(0,0)$. In this case we obtain $\{0\}\times B_2=\varphi_1(Z\cap U)$, so the local dimension of $Z$ at $e$ is the local dimension of $\{0\}\times B_2\subseteq B_1\times B_2$ at $(0,0)$, which is the dimension of $B_2$ and is as in the statement.
\end{proof}
In the particular case where the dimension $\dim(\ker(f'(e)-g'(e)))$ in the previous statement equals the dimension of $G$ we get:
Indeed, we know $H_1$, $H_2$ and $H_1\cap H_2$ are strictly differentiable definable weak Lie groups and the inclusion maps $H_1\cap H_2\to H_i$ and $H_i\to G$ are strictly differentiable immersions, for example by Proposition \ref{subgroups-are-submanifolds}, so the statement makes sense. We also have the diagonal map $\Delta:H_1\cap H_2\to H_1\times H_2$ is the equalizer of the two projections $p_1:H_1\times H_2\to G$ and $p_2:H_1\times H_2\to G$. The kernel of the $p_1'(e)-p_2'(e)$ is the image under the diagonal map of $T_e(H_1)\cap T_e(H_2)$. So by the equality of the dimensions in Corollary \ref{dimension-of-kernel} we conclude $T_e(H_1\cap H_2)=T_e(H_1)\cap T_e(H_2)$. \end{proof} \bar begin{corollary}\langlebel{subgroup-tangent} If $G$ is a definable strictly differentiable weak Lie group and $H_1,H_2$ are subgroups, then there is $U\subseteqset G$ an open neighborhood of $e$ such that $U\cap H_1=U\cap H_2$ if and only if $T_e(H_1)=T_e(H_2)$. \end{corollary} \bar begin{proof} By Corollary \ref{intersection-tangent} we get $T_e(H_3)=T_e(H_1)=T_e(H_2)$ for $H_3=H_1\cap H_2$. Then as the inclusion $H_3\to H_1$ produces an isomorphism of tangent spaces at the identity we conclude by the inverse function theorem \ref{inverse-function} that there is $U\subseteqset G$ an open neighborhood of the identity, such that $U\cap H_1=U\cap H_3$. Note that this also uses that the topology of $H_1$ and $H_3$ which makes them strictly differentiable definable Lie groups coincides with the subgroup topology coming from $G$, see Proposition \ref{subgroups-are-submanifolds}. Symmetrically we have $U'\cap H_2=U'\cap H_3$ for some open $U'$. \end{proof} Next we give the familiar definition of the Lie bracket in $T_e(G)$ for the definable Lie group $G$, and show it forms a Lie algebra. \bar begin{definition} Suppose $G$ is a definable strictly differentiable weak Lie group. For $g\in G$ we consider the map $c_g:G\to G$ defined by $c_g(h)=ghg^{-1}$. Then $c_g$ is a definable group morphism and so it is strictly differentiable, see Proposition \ref{generic-regularity}. Its derivative produces a map $\mathrm{Ad}:G\to \textbf{m}athrm{Aut}_K(T_e(G))$, $g\textbf{m}apsto c_g'(e)$ which is a definable map and a group morphism by the chain rule and the equation $c_gc_h=c_{gh}$. Then $\mathrm{Ad}$ is strictly differentiable and so its derivative at $e$ gives a linear map $\mathrm{ad}:T_e(G)\to \textbf{m}athrm{End}_K(T_e(G))$. In other words this gives a bilinear map $(x,y)\textbf{m}apsto \mathrm{ad}(x)(y)$, $T_e(G)\times T_e(G)\to T_e(G)$ denoted $(x,y)\textbf{m}apsto [x,y]$. This map is called the Lie bracket. \end{definition} \bar begin{proposition}\langlebel{local-lie-bracket} Let $G$ be a definable weak $T_2$-Lie group. Let $0\in U\subseteqset K^n$ be an open set and $i:U\to G$ a $T_2$-diffeomorphism of $U$ onto an open subset of $G$, that sends $0$ to $e$. Make $U$ into a local definable group via $i$. Then under the identification $i'(0):K^n\to T_e(G)$ we have that the Lie bracket is characterized by the property $x\cdot y\cdot x^{-1}\cdot y^{-1}=[x,y]+O(x,y)^3$, for $x,y\in U$. \end{proposition} \bar begin{proof} We have that the function $f(x,y)=x\cdot y\cdot x^{-1}$ satisfies $f(0,y)=y$ and $f(x,0)=0$, so its Taylor approximation of order 2 is of the form $f(x,y)=y+axy+O(x,y)^3$. 
Indeed it is of the form $a_0+a_1x+a_2y+a_3x^2+a_4xy+a_5y^2+O(x,y)^3$; plugging in $x=0$ and using the uniqueness of the Taylor approximation we get $a_0=a_5=0$ and $a_2=1$, and a similar argument with $y=0$ gives $a_1=a_3=0$, so $f(x,y)=y+axy+O(x,y)^3$ as claimed. From the definition of $\mathrm{Ad}(x)$ we get $f(x,y)=\mathrm{Ad}(x)y+O_x(y^2)$, where $O_x$ means that the coefficient may depend on $x$. Note that the definition of $\mathrm{ad}(x)$ gives $\mathrm{Ad}(x)(y)=y+[x,y]+O(x^2y)$. Indeed, we have $\mathrm{Ad}(x)=\mathrm{Ad}(0)+\mathrm{Ad}'(0)(x)+O(x^2)=I+\mathrm{ad}(x)+O(x^2)$, where $I$ is the identity matrix, and evaluating at $y$ we conclude $\mathrm{Ad}(x)y=y+[x,y]+O(x^2y)$. We conclude that $y+axy+O(x,y)^3=y+[x,y]+O_x(y^2)$. This implies $[x,y]=axy$, see Lemma \ref{xy}. Now from $x\cdot y\cdot x^{-1}=y+axy+O(x,y)^3$, and the formula $x\cdot y^{-1}=x-y+b_0x^2+b_1xy+b_2y^2+O(x,y)^3$ (see Fact \ref{m'}), we get $x\cdot y\cdot x^{-1}\cdot y^{-1}=(x\cdot y\cdot x^{-1})\cdot y^{-1}=axy+b_3y^2+O(x,y)^3$. On the other hand, if $c(x,y)=x\cdot y\cdot x^{-1}\cdot y^{-1}$ then $c(0,y)=0$ implies that $b_3=0$, as required. \end{proof} \begin{proposition} Let $G$ be a definable strictly differentiable weak Lie group. Then $(T_e(G),[\cdot,\cdot])$ is a Lie algebra. \end{proposition} \begin{proof} We have to prove $[x,x]=0$ and the Jacobi identity. We will use the characterization of Proposition \ref{local-lie-bracket} (we may assume $G$ is $T_2$ by Proposition \ref{definable-is-lie}). $[x,x]=0$ now follows immediately. The idea of the proof of the Jacobi identity is to express $xyz$ as $f(x,y,z)zyx$ in two different ways using associativity: the first one permutes from left to right, the second permutes $yz$ first and then permutes from left to right. The details follow. Writing $c(x,y)=xyx^{-1}y^{-1}$ one has \begin{center} $xyz=c(x,y)yxz=c(x,y)yc(x,z)zx=c(x,y)([y,c(x,z)]+O(y,c(x,z))^3)c(x,z)yzx= c(x,y)([y,[x,z]]+O(x,y,z)^4)c(x,z)c(y,z)zyx= (c(x,y)+[y,[x,z]]+c(x,z)+c(y,z)+O(x,y,z)^4)zyx$. \end{center} At the last step we use the formula $xy=x+y+O(x,y)^2$. On the other hand, \begin{center} $xyz=xc(y,z)zy=([x,c(y,z)]+O(x,c(y,z))^3)c(y,z)xzy= ([x,[y,z]]+O(x,y,z)^4)c(y,z)c(x,z)zxy= ([x,[y,z]]+O(x,y,z)^4)c(y,z)c(x,z)zc(x,y)yx= ([x,[y,z]]+O(x,y,z)^4)c(y,z)c(x,z)([z,[x,y]]+O(x,y,z)^4)c(x,y)zyx= ([x,[y,z]]+c(y,z)+c(x,z)+c(x,y)+[z,[x,y]]+O(x,y,z)^4)zyx$. \end{center} From this we get $[y,[x,z]]=[x,[y,z]]+[z,[x,y]]+O(x,y,z)^4$, and from the uniqueness of Taylor expansions we obtain $[y,[x,z]]=[x,[y,z]]+[z,[x,y]]$, which is the Jacobi identity. \end{proof} Given a strictly differentiable definable weak Lie group $G$, we denote by $\mathrm{Lie}(G)$ the tangent space $T_e(G)$ considered as a Lie algebra with the Lie bracket $[x,y]$. \section{Definable fields}\label{fields-section} In this section we prove that if $L$ is a definable field in a 1-h-minimal valued field $K$ then, as a definable field, $L$ is isomorphic to a finite field extension of $K$. This result generalizes \cite[Theorem 4.2]{BaysPet}, where this is proved for real closed valued fields, and \cite[Theorem 4.1]{PilQp}, where this is proven for $p$-adically closed fields. With the terminology and results we have developed in the previous section, the main ingredients of the proof are similar to those appearing in the classification of infinite fields definable in o-minimal structures, \cite[Theorem 1.1]{OtPePi}. \begin{lemma}\label{definable-subfield} Suppose $K$ is 1-h-minimal and $L\subseteq K$ is a definable subfield. Then $L=K$.
\end{lemma} \begin{proof} $L$ is a definable set which is infinite because the characteristic of $K$ is $0$. We conclude that there is a nonempty open ball $B\subseteq L$, for example by dimension theory, item 2 of Proposition \ref{dimension}. The field generated by a nonempty open ball is $K$. Indeed, $B-B$ contains a ball $B'$ around the origin, $C=(B'\setminus \{0\})^{-1}$ is the complement of a closed ball, and $C-C=K$. \end{proof} \begin{lemma}\label{k-is-rigid} Suppose $K$ is 1-h-minimal. Let $F_1$ and $F_2$ be finite extensions of $K$, and consider them as definable fields in $K$. If $\varphi:F_1\to F_2$ is a definable field morphism, then $\varphi$ is a morphism of $K$-extensions, in other words it is the identity when restricted to $K$. \end{lemma} \begin{proof} The set $\{x\in K: \varphi(x)=x\}$ is a definable subfield of $K$, so Lemma \ref{definable-subfield} gives the desired conclusion. \end{proof} \begin{proposition}\label{field} Suppose $K$ is 1-h-minimal and $F$ is a definable field. Then $F$ is isomorphic as a definable field to a finite extension of $K$. The forgetful functor from finite $K$-extensions to definable fields is an equivalence of categories. \end{proposition} \begin{proof} That the functor is full is Lemma \ref{k-is-rigid}. Let $F$ be a definable field. By Proposition \ref{definable-is-lie} we have that $(F,+)$ is a definable strictly differentiable weak Lie group. If $a\in F$, the map $L_a:x\mapsto ax$ is a definable group morphism and so it is strictly differentiable, by the fullness in Proposition \ref{definable-is-lie}. We get a definable map $f:F\to M_n(K)$ defined as $a\mapsto L_a'(0)$. By the chain rule we have $f(ab)=f(a)f(b)$ for all $a,b\in F$. Clearly $f(1)=1$. Finally one has $f(a+b)=f(a)+f(b)$ (the derivative of multiplication $G\times G\to G$ in a Lie group is the sum map, see for instance Fact \ref{m'}). We conclude that $f$ is a ring map, and because $F$ is a field it is injective. If we set $i:K\to M_n(K)$ given by $i(k)=kI$, where $I$ is the identity matrix, then $i^{-1}f(F)\subseteq K$ is a definable subfield of $K$, and so by Lemma \ref{definable-subfield} one has $i(K)\subseteq f(F)$. So $F/K$ is a finite field extension as required. \end{proof} \section{One dimensional groups are finite-by-abelian-by-finite} \label{one-dimensional-section} In this section we prove that if $K$ is a 1-h-minimal valued field and $G$ is a one dimensional group definable in $K$ then $G$ is finite-by-abelian-by-finite. This generalizes \cite[Theorem 2.5]{PiYao}, where it is proved that one dimensional groups definable in $p$-adically closed fields are abelian-by-finite. This result is analogous to \cite[Corollary 2.16]{Pi5}, where it is shown that a one dimensional group definable in an o-minimal structure is abelian-by-finite. The proof here is not a straightforward adaptation of either, since we do not assume NIP, which makes the argument more involved. \begin{definition}\label{cw} Let $G$ be a group. We let $C^w$ denote the set of elements $x\in G$ whose centralizer, $c_G(x)$, has finite index in $G$. \end{definition} Note that $C^w$ is a characteristic subgroup of $G$. \begin{lemma}\label{abelian-core} Suppose $G$ is an (abstract) group. Take $C^w$ as in Definition \ref{cw} and let $Z$ be its center. Then $C^w$ and $Z$ are characteristic subgroups of $G$, and $Z$ is commutative. Moreover $Z$ has finite index in $G$ if and only if $G$ is abelian-by-finite.
When $G$ is definable in a geometric theory, $C^w$ and $Z$ are definable. Also $x\in C^w$ if and only if $\dim(c_G(x))=\dim(G)$. \end{lemma} \begin{proof} It is clear that $C^w$ and $Z$ are characteristic, and that $Z$ is abelian. So, in particular, if $[G:Z]<\infty$ then $G$ is abelian-by-finite. On the other hand, if $A$ is an abelian subgroup of finite index then $A\subseteq C^w$, as $A\subseteq c_G(a)$ for every $a\in A$. If $a_1,\dots, a_n$ is a set of representatives for the left cosets of $A$ in $C^w$, then $\bigcap_{k=1}^n c_G(a_k)\cap A\subseteq Z$, and as $a_k\in C^w$, the $c_G(a_k)$ have finite index in $G$, and so $Z$ has finite index in $G$. If $G$ is definable in a geometric theory, note that $x\in C^w$ if and only if $x^G$, the orbit of $x$ under conjugation, is finite. This is because the fibers of the map $g\mapsto x^g$ are cosets of $c_G(x)$. So $C^w$ is definable because a geometric theory eliminates the quantifier $\exists^\infty$. We also get that if $\dim(c_G(x))=\dim(G)$ then $c_G(x)$ is of finite index. \end{proof} \begin{lemma}\label{locally-constant-pregeometry} Suppose $f:X\times Y\to Z$ is a function definable in a pregeometric theory. Denote $n=\dim(X)$ and $m=\dim(Y)$. Suppose that for all $x\in X$ the nonempty fibers of the function $f_x(y)=f(x,y)$ have dimension $m$, and that for all $y\in Y$ the nonempty fibers of the function $f_y(x)=f(x,y)$ have dimension $n$. Then $f$ has finite image. \end{lemma} \begin{proof} We claim that the nonempty fibers of $f$ have dimension $n+m$. Indeed, if $(x_0,y_0)\in X\times Y$, then $f^{-1}f(x_0,y_0)$ contains $\bigcup_{x\in f_{y_0}^{-1}f_{y_0}(x_0)}\{x\}\times f_x^{-1}f_x(y_0)$, which has dimension $n+m$ by the additivity of dimension. As the nonempty fibers of $f$ thus have dimension $n+m=\dim(X\times Y)$, the additivity of dimension shows that the image of $f$ has dimension $0$, hence is finite. \end{proof} \begin{lemma}\label{c-is-finite} Suppose $G$ is definable in a pregeometric theory and $G=C^w$. Then the image of the commutator map $c:G\times G\to G$ is finite. \end{lemma} \begin{proof} The commutator map $c(x,y)$ is constant when $x$ is fixed and $y$ varies over a coset of $c_G(x)$, and it is constant when $y$ is fixed and $x$ varies over a coset of $c_G(y)$. Since $G=C^w$, all centralizers have finite index and hence dimension $\dim(G)$, so the image of $c$ is finite by Lemma \ref{locally-constant-pregeometry}. \end{proof} \begin{lemma}\label{almost-commutative} Suppose $G$ is an $n$-dimensional group definable in a pregeometric theory such that $G=C^w$. Then there is a definable characteristic subgroup, $G_1$, of finite index with a characteristic finite subgroup $L$, central in $G_1$, such that $G_1/L$ is abelian. If $Z$ is the center of $G_1$ then $Z/L$ contains $(G_1/L)^m$, the $m$-th powers of $G_1/L$, for some $m$. If the theory is NIP then the center of $G$ has finite index. \end{lemma} \begin{proof} If the theory is NIP then the center has finite index by the Baldwin--Saxl lemma (e.g., \cite[Lemma 1.3]{PoiGroups}). Indeed, $Z=\bigcap_{g\in G}c_G(g)$ is an intersection of a definable family of subgroups, each of which has finite index by the assumption $G=C^w$. So, as $G$ is NIP, one gets that $Z$ is the intersection of finitely many of the centralizers and $[G:Z]<\omega$. In general, by Lemma \ref{c-is-finite} we know that $c(G,G)$ is finite. The centralizer $G_1$ of $c(G,G)$ has finite index in $G$ by the hypothesis that $G=C^w$. Clearly $G_1$ is characteristic in $G$, so we may replace $G$ by $G_1$ and assume that $c(G,G)$ is contained in the center of $G$. In this case we prove that $c(G,G)$ generates a finite central characteristic group $L=D(G)$.
Indeed, since $c(G,G)$ is central, a simple computation shows $c(gh,x)=c(g,x)c(h,x)$ for all $g,h,x\in G$. It follows that $c(g,h)^m=c(g^m,h)$ lies in the finite set $c(G,G)$ for every $m$, so $c(g,h)$ has finite order. As $c(G,G)$ is central and consists of elements of finite order, the group it generates is central and finite. It is obviously characteristic. We also see that if $m$ is the order of $D(G)$ then $g^m\in Z$ for all $g\in G$. This is because $c(g^m,h)=c(g,h)^m=1$, so $(G/D(G))^m$ is contained in $Z/D(G)$ as required. \end{proof} \begin{lemma}\label{abelian-n-torsion} Suppose $G$ is an $n$-dimensional abelian group definable in a 1-h-minimal theory. Then the $m$-torsion of $G$ is finite and $G^m\subseteq G$ is a subgroup of dimension $n$. \end{lemma} \begin{proof} The map $x\mapsto x^m$ is a definable group morphism with invertible derivative at the identity, see for instance Fact \ref{m'}, so by Corollary \ref{dimension-of-kernel} we get that the $m$-torsion of $G$ is finite, and so by additivity of dimension $\dim(G^m)=\dim(G)=n$. \end{proof} \begin{lemma}\label{almost-center} Suppose $G$ is an $n$-dimensional group definable in a 1-h-minimal field. Then $C^w$ is the kernel of the map $\mathrm{Ad}:G\to GL_n(K)$. If the Lie algebra $\mathrm{Lie}(G)$ is abelian, then $C^w$ has finite index. \end{lemma} \begin{proof} The first statement follows from Corollary \ref{map-germ-at-1} and Lemma \ref{abelian-core}. If the Lie algebra is abelian, then, by the definition of the Lie bracket, the derivative of $\mathrm{Ad}$ at $e$ is $0$. This means that $C^w=\ker(\mathrm{Ad})$ contains an open neighborhood of $e$ by Corollary \ref{map-germ-at-1}, so $C^w$ is $n$-dimensional. As the kernel of $\mathrm{Ad}$ is $C^w$, we conclude that $C^w$ has finite index in $G$ by the additivity of dimension. \end{proof} \begin{lemma}\label{center-finite-by-abelian} Suppose $G$ is an $n$-dimensional finite-by-abelian group definable in a 1-h-minimal theory. Then $G=C^w$ and the center of $G$ has dimension $n$. \end{lemma} \begin{proof} Let $H$ be finite normal such that $G/H$ is abelian. By finite elimination of imaginaries in fields we have that $G/H$ is definable. Also, by Corollary \ref{dimension-of-kernel} we see that the quotient map $p:G\to G/H$ induces an isomorphism of tangent spaces at the identity, and under this isomorphism $1=\mathrm{Ad}(p(g))=\mathrm{Ad}(g)$ for all $g\in G$. We conclude that $G=C^w$, by Lemma \ref{almost-center}. Now if we apply Lemma \ref{almost-commutative} we get characteristic groups $L\subseteq G_1\subseteq G$ such that $G/G_1$ is finite, $L$ is finite and $G_1/L$ is abelian, and $Z(G_1)/L\supset (G_1/L)^m$ for some $m$. Note that $Z(G)$ contains a finite index subgroup of $Z(G_1)$, namely the intersection of $Z(G_1)$ with the centralizers of a set of coset representatives of $G_1$ in $G$, each of which has finite index because $G=C^w$; so we just have to see that $(G_1/L)^m$ has dimension $n$. This follows from Lemma \ref{abelian-n-torsion}. \end{proof} \begin{proposition}\label{abelian} Suppose $K$ is 1-h-minimal. Suppose $G$ is an $n$-dimensional strictly differentiable definable weak Lie group. Then $\mathrm{Lie}(G)$ is abelian if and only if $G$ is finite-by-abelian-by-finite. In this case $G$ has characteristic definable subgroups $L\subseteq G_1\subseteq G$ such that $G/G_1$ is finite, $G_1/L$ is abelian, and $L$ is finite and central in $G_1$. Also, if $Z$ is the center of $G_1$, then $Z$ is $n$-dimensional and $Z/L$ contains $(G_1/L)^m$ for some $m$. If $K$ is NIP then we may take $L=1$.
\end{proposition} \begin{proof} This follows by putting the previous results together, Lemmas \ref{almost-center}, \ref{almost-commutative} and \ref{center-finite-by-abelian}. \end{proof} \begin{corollary}\label{one-dimensional} Suppose $G$ is a one dimensional group definable in a 1-h-minimal valued field. Then $G$ is finite-by-abelian-by-finite. If the theory is NIP then $G$ is abelian-by-finite. \end{corollary} \begin{proof} By Proposition \ref{definable-is-lie} we get that $G$ is a strictly differentiable definable weak Lie group. The result now follows from Proposition \ref{abelian} because the only one dimensional Lie algebra is abelian. \end{proof} In the NIP case this corollary follows more directly from the fact that a definable group is definably weakly Lie. Indeed, this implies that for every $n$ there is an element $x\in G$ with $x^n\neq e$ (because the derivative of the map $x\mapsto x^n$ at $e$ is $v\mapsto nv$, which is not equal to $0$, see Fact \ref{m'}). By $\aleph_0$-saturation there is then an $x\in G$ such that the group generated by $x$ is infinite. Then by \cite{PiYao}, Remark 2.4, one has that the centralizer of the centralizer of $x$ has finite index and is abelian. Indeed, note that if $a\in c_G(x)$, then $c_G(a)$ contains the group generated by $x$, and so it is of dimension $1$. The last corollary shows that the classification of one dimensional abelian groups definable in ACVF carried out in \cite{AcostaACVF} extends to all definable $1$-dimensional groups, for $K$ of characteristic $0$ (see also the main result of \cite{HaHaPeGps}). That is, since ACVF$_0$ is $1$-h-minimal and NIP, $1$-dimensional definable groups are abelian-by-finite, and the classification of definable $1$-dimensional abelian groups of \cite{AcostaACVF} applies. We do not know if this corollary is true in ACVF$_{p,p}$. Similarly, the commutativity assumption is unnecessary in the classification of $1$-dimensional groups definable in pseudo-local fields of residue characteristic $0$. As those are pure henselian, they, too, are $1$-h-minimal, so we may apply Proposition \ref{abelian}. To get the full result we observe that, though pseudo-local fields are not NIP, an inspection of the list of the definable $1$-dimensional abelian groups, $A$, obtained in \cite{AcostaACVF} shows they are almost divisible (i.e., $nA$ has finite index in $A$ for all $n$). Therefore, in the notation of Proposition \ref{abelian}, the center of $G_1$ has finite index in $G_1$, and so every one dimensional group is abelian-by-finite. \begin{question} If $G$ is finite-by-abelian, does the center of $G$ have finite index in $G$? This is true if the theory is NIP or if $nA$ has finite index in $A$ for every abelian definable group, by Lemmas \ref{center-finite-by-abelian} and \ref{almost-commutative}. \end{question} \begin{remark} ACVF$_{p,p}$ does not fit into the framework of 1-h-minimality. However, many of the ingredients in the previous sections translate to this setting. For example: $K$ is geometric, a subset of $K^n$ has dimension $n$ if and only if it contains a nonempty open set, one-to-finite functions defined on an open set are generically continuous, functions definable on an open set are generically continuous, and $K$ is definably spherically complete. That one-to-finite functions are generically continuous follows from the fact that $\mathrm{acl}(a)$ coincides with the field-theoretic algebraic closure of $a$ and from a suitable result about continuity of roots.
That functions are generically continuous follows from the fact that $\mathrm{dcl}(a)$ is the Henselization of the perfect closure of $a$, so a definable function is definably piecewise a composition of rational functions, the inverse of the Frobenius automorphism, and roots of Hensel polynomials, all of which are continuous. However, the inverse of the Frobenius is not differentiable anywhere, so Proposition \ref{generic-regularity-0} does not hold. Also, the Frobenius is a homeomorphism with derivative $0$, so for example Proposition \ref{locally-constant} does not hold. \end{remark} \bibliographystyle{plain} \bibliography{harvard} \end{document}
\begin{document}
\title{\bfseries Minimal Distance of Propositional Models\thanks{This paper is the final version of a trilogy of conference papers \emph{Give Me Another One!}}}
\maketitle
\begin{abstract} We investigate the complexity of three optimization problems in Boolean propositional logic related to information theory: Given a conjunctive formula over a set of relations, find a satisfying assignment with minimal Hamming distance to a given assignment that satisfies the formula ($\NextSol$, $\XSOL$) or that does not need to satisfy it ($\NearestSol$, $\NSOL$). The third problem asks for two satisfying assignments with a minimal Hamming distance among all such assignments ($\MinSolDistance$, $\MSD$). For all three problems we give complete classifications with respect to the relations admitted in the formula. We give polynomial time algorithms for several classes of constraint languages. For all other cases we prove hardness or completeness with respect to $\APX$ or $\pAPX$, or equivalence to well-known hard optimization problems. \end{abstract} \section{Introduction} \label{sec:intro} We investigate the solution spaces of Boolean constraint satisfaction problems built from atomic constraints by means of conjunction and variable identification. We study three minimization problems in connection with Hamming distance: Given an instance of a constraint satisfaction problem in the form of a generalized conjunctive formula over a set of atomic constraints, the first problem asks to find a satisfying assignment with minimal Hamming distance to a given assignment ($\NearestSol$, $\NSOL$). Note that for this problem we assume neither that the given assignment satisfies the formula nor that the solution is different from the assignment. The second problem is similar to the first one, but this time the given assignment has to satisfy the formula and we look for another solution with minimal Hamming distance ($\NextSol$, $\XSOL$). The third problem is to find two satisfying assignments with minimal Hamming distance among all satisfying assignments ($\MinSolDistance$, $\MSD$). Note that the dual problem $\MaxSolDistance$ has been studied in~\cite{CrescenziR-02}. The $\NSOL$ problem appears in several guises throughout the literature. E.g., a common problem in Artificial Intelligence is to find solutions of constraints close to an initial configuration; our problem is an abstraction of this setting for the Boolean domain. Bailleux and Marquis~\cite{BailleuxM-06} describe such applications in detail and introduce the decision problem $\DistanceSAT$: Given a propositional formula~$\varphi$, a partial interpretation~$I$, and a bound~$k$, is there a satisfying assignment differing from~$I$ in no more than $k$ variables? It is straightforward to show that $\DistanceSAT$ corresponds to the decision variant of our problem with existential quantification (called $\dNSOLpp$ later on). While \cite{BailleuxM-06} investigates the complexity of $\DistanceSAT$ for a few relevant classes of formulas and empirically evaluates two algorithms, we analyze the decision and the optimization problem for arbitrary semantic restrictions on the formulas. Hamming distance also plays an important role in belief revision. The result of revising/updating a formula~$\varphi$ by another formula~$\psi$ is characterized by the set of models of~$\psi$ that are closest to the models of~$\varphi$. Dalal~\cite{Dalal-88} selects the models of~$\psi$ having a minimal Hamming distance to models of~$\varphi$ to be the models that result from the change.
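For a small illustration of such distance-minimal models, consider the formula $\psi = (x \lor y) \land (\neg x \lor z)$ with models $010$, $011$, $101$, and $111$ (listing the values of $x,y,z$ in this order). The model closest to the assignment $000$ is $010$, at Hamming distance~$1$; thus the $\DistanceSAT$ instance consisting of $\psi$, the interpretation $000$, and a bound~$k$ is positive exactly for $k \geq 1$, and $010$ is likewise the single model resulting from Dalal's revision of a belief base with unique model $000$ by~$\psi$.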
As is common, we analyze the complexity of our optimization problems modulo a parameter that specifies the atomic constraints allowed to occur in the constraint satisfaction problem. We give a complete classification of the approximation complexity with respect to this parameterization. It turns out that our problems can either be solved in polynomial time, or they are complete for a well-known optimization class, or else they are equivalent to well-known hard optimization problems. Our study can be understood as a continuation of the minimization problems investigated by Khanna et al.\ in~\cite{KhannaSTW-01}, especially that of $\OptMinOnes$. The $\OptMinOnes$ optimization problem asks for a solution of a constraint satisfaction problem with the minimal Hamming weight, i.e., minimal Hamming distance to the $0$-vector. Our work generalizes these results by allowing the given vector to be also different from zero. Our work can also be seen as a generalization of questions in coding theory. In fact, our problem $\MSD$ restricted to affine relations is the well-known problem $\MinDist$ of computing the minimum distance of a linear code. This quantity is of central importance in coding theory, because it determines the number of errors that the code can detect and correct. Moreover, our problem $\NSOL$ restricted to affine relations is the problem $\NCW$ of finding the nearest codeword to a given word, which is the basic operation when decoding messages received through a noisy channel. Thus our work can be seen as a generalization of these well-known problems from affine to general relations. In the case of $\NearestSol$ we are able to apply methods from clone theory, even though the problem turns out to be more intricate than pure satisfiability. The other two problems, however, cannot be shown to be compatible with existential quantification easily, which makes classical clone theory inapplicable. Therefore we have to resort to weak co-clones that require only closure under conjunction and equality. In this connection, we apply the theory developed in~\cite{SchnoorS-08,SchnoorDiss} as well as the minimal weak bases of Boolean co-clones from~\cite{Lagerkvist-14}. This paper is structured as follows. Section~\ref{sec:prelim} recalls basic definitions and notions. Section~\ref{sec:results} introduces the trilogy of optimization problems studied in this paper, namely Nearest Solution (denoted by $\NSOL$), Nearest Other Solution (denoted by $\XSOL$), and Minimum Solution Distance (denoted by $\MSD$), as well as their decision versions. It also states our three main results, i.e., a complete classification of complexity for these optimization problems, depicted in Figures~\ref{fig:nsol-coclones}, \ref{fig:xsol-coclones}, and~\ref{fig:msd-coclones}. Section~\ref{sec:proofsPP} investigates the (non-)applicability of clone theory to our problems. It also provides a duality result for the constraint languages used as parameters. Section~\ref{sec:proofsNSOL} contains the proofs of complexity classification results for $\NearestSol$, Section~\ref{sec:proofsXSOL} for $\NextSol$, and Section~\ref{sec:proofsMSD} for $\MinSolDistance$. Finally, the concluding remarks in Section~\ref{sec:conclusion} compare our theorems to previously existing similar results and put our results into perspective. 
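Before turning to the technical development, we illustrate this coding-theoretic reading on a toy example. The binary repetition code of length three, $\set{000,111}$, is the solution set of the system $x_1\oplus x_2 = 0$, $x_2\oplus x_3 = 0$ over~$\ZZ_2$, or equivalently the set $\Set{x\in\ZZ_2^3}{Ax=0}$ for the parity check matrix~$A$ with rows $110$ and $011$. Its minimum distance, the optimal value of $\MSD$ and of $\MinDist$, is $3$, and for the received word $110$ the nearest codeword is $111$ at Hamming distance~$1$; this distance is the optimal value of $\NSOL$ on this instance and illustrates the decoding problem $\NCW$.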
\section{Preliminaries} \label{sec:prelim} \subsection{Boolean Relations and Relational Clones} An $n$-ary \emph{Boolean relation}~$R$ is a subset of $\set{0,1}^n$; its elements $(b_1, \ldots, b_n)$ are also written as $b_1 \cdots b_n$. Let~$V$ be a set of variables. An \emph{atomic constraint}, or an \emph{atom}, is an expression $R(\vec x)$, where~$R$ is an $n$-ary relation and $\vec x$ is an $n$-tuple of variables from~$V$. Let~$\Gamma$ be a non-empty finite set of Boolean relations, also called a \emph{constraint language}. A (conjunctive) \emph{$\Gamma$-formula} is a finite conjunction of atoms $R_1(\vec{x_1}) \land \cdots \land R_k(\vec{x_k})$, where the~$R_i$ are relations from~$\Gamma$ and the $\vec{x_i}$ are variable tuples of suitable arity. For technical reasons in connection with reductions we also allow empty conjunctions ($k=0$) here. Such formulas elegantly take care of certain marginal cases at the cost of adding only one additional trivial problem instance. An \emph{assignment} is a mapping $m\colon V \rightarrow \set{0,1}$ assigning a Boolean value $m(x)$ to each variable $x \in V$. In a given context we can assume $V$ to be finite, by restricting it e.g.\ to the variables occurring in a formula. If we impose an arbitrary but fixed order on the variables, say $x_1, \ldots, x_n$, then the assignments can be identified with elements from $\set{0,1}^n$. The $i$-th component of a tuple~$m\in\set{0,1}^n$ is denoted by~$m[i]$ and corresponds to the value of the $i$-th variable, i.e., $m[i]=m(x_i)$. The \emph{Hamming weight} $\hw(m) = \card{\Set{i}{m[i]=1}}$ of~$m$ is the number of~$1$s in the tuple~$m$. The \emph{Hamming distance} $\hd(m,m') = \card{\Set{i}{m[i] \neq m'[i]}}$ of~$m$ and~$m'$ is the number of coordinates on which the tuples disagree. The complement~$\cmpl{m}$ of a tuple~$m$ is its pointwise complement, $\cmpl m[i] = 1- m[i]$. An assignment~$m$ satisfies a constraint $R(x_1,\ldots ,x_n)$ if $(m(x_1), \ldots , m(x_n))\in R$ holds. It satisfies the formula~$\varphi$ if it satisfies all its atoms; $m$ is said to be a \emph{model} or \emph{solution} of~$\varphi$ in this case. We use $[\varphi]$ to denote the set of models of~$\varphi$. For a term~$t$, $[t]$ is the set of assignments for which $t$ evaluates to~$1$. Note that $[\varphi]$ and $[t]$ represent Boolean relations. If the variables of~$\varphi$ are not explicitly enumerated in parentheses as parameters, they are implicitly considered to be ordered lexicographically. In sets of relations represented this way we usually omit the brackets. A \emph{literal} is a variable~$v$, or its negation~$\neg v$. Assignments are extended to literals by defining $m(\neg v)=1-m(v)$. Table~\ref{tab:funrel} defines Boolean functions and relations needed later on, in particular exclusive or $[x \oplus y]$, not-all-equal $\nae^3$, $k$-ary disjunction $\bor^k$, and $k$-ary negated conjunction $\nand^k$. 
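For instance, for the assignments $m = 0110$ and $m' = 1100$ we have $\hw(m) = 2$, $\hd(m,m') = 2$, and $\cmpl{m} = 1001$; moreover, $m$ satisfies the atom $\nae^3(x_1,x_2,x_3)$, since the tuple $(0,1,1)$ avoids both $000$ and $111$.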
\begin{table}[t] \caption{List of some Boolean functions and relations} \label{tab:funrel} \centering \begin{displaymath} \begin{array}[t]{@{}rcl@{\qquad}rcl@{}} x \oplus y &=& x + y \pmod{2} & \bor^k &=& \set{0,1}^k\smallsetminus\set{0 \dotsm 0} \text{ for $k \geq 1$}\\ {\eq} &=& \set{00,11} & \nand^k &=& \set{0,1}^k\smallsetminus\set{1 \dotsm 1} \text{ for $k \geq 1$}\\ \dup^3 &=& \set{0,1}^3 \smallsetminus \set{010, 101} &\even^k &=& \set{(a_1, \ldots, a_k) \in \set{0,1}^k \mid \sum_{i=1}^k a_i \text{ is even}}\\ \nae^3 &=& \set{0,1}^3 \smallsetminus \set{000, 111} & \odd^k &=& \set{(a_1, \ldots, a_k) \in \set{0,1}^k \mid \sum_{i=1}^k a_i \text{ is odd}}\\ S_0 &=& [(x_1\land x_4) \eq (x_2\land x_3)] & S_1 &=& [S_0(\neg x_1,\neg x_2,\neg x_3,\neg x_1)]\\ S_2 &=& [(\neg x_1\lor\neg x_2) \to \neg x_3]\\ \multicolumn{6}{c}{\even^k_{k\neq} = \set{(a_1, \ldots, a_{2k}) \in \set{0,1}^{2k} \mid \even^k(a_1, \ldots, a_k) \land \bigwedge_{i=1}^k \left(a_{k+i}\eq \neg a_i \right)}} \end{array} \end{displaymath} \end{table} \begin{table}[b] \caption{Some relevant Boolean co-clones with bases} \label{tab:clones} \centering \begin{displaymath} \begin{array}[t]{@{}ll@{}} \iS_0^k & \set{\bor^k} \\ \iS_1^k & \set{\nand^k} \\ \iS_{00}^k & \set{\bor^k, x \to y, \neg x, x} \\ \iS_{10}^k & \set{\nand^k, \neg x, x, x \to y } \\ \iD_1 & \set{x \oplus y, x} \\ \iD_2 & \set{x \oplus y, x \to y} \end{array} \qquad \begin{array}[t]{@{}ll@{}} \iL & \set{\even^4} \\ \iL_2 & \set{\even^4, \neg x, x} \\ \iV & \set{x \lor y \lor \neg z} \\ \iV_2 & \set{x \lor y \lor \neg z, \neg x, x} \\ \iE & \set{\neg x \lor \neg y \lor z} \\ \iE_2 & \set{\neg x \lor \neg y \lor z, \neg x, x} \end{array} \qquad \begin{array}[t]{@{}ll@{}} \iN & \set{\dup^3} \\ \iN_2 & \set{\nae^3} \\ \iI & \set{\even^4, x \to y} \\ \iI_0 & \set{\even^4, x \to y, \neg x}\\ \iI_1 & \set{\even^4, x \to y, x}\\ \iM_2 & \set{x \to y, \neg x, x} \end{array} \end{displaymath} \end{table} Throughout the text we refer to different types of Boolean constraint relations following Schaefer's terminology~\cite{Schaefer-78} (see also the monograph~\cite{CreignouKS-01} and the survey~\cite{BoehlerCRV-04}). A Boolean relation~$R$ is \begin{inparaenum}[(1)] \item \emph{$1$-valid} if $1 \cdots 1 \in R$ and \emph{$0$-valid} if $0 \cdots 0 \in R$, \item \emph{Horn} (\emph{dual Horn}) if~$R$ can be represented by a formula in conjunctive normal form (CNF) with at most one unnegated (negated) variable per clause, \item \emph{monotone} if it is both Horn and dual Horn, \item \emph{bijunctive} if it can be represented by a CNF formula with at most two literals per clause, \item \emph{affine} if it can be represented by an affine system of equations $A x = b$ over~$\ZZ_2$, \item \emph{complementive} if for each $m \in R$ also $\cmpl m \in R$, \item \emph{implicative hitting set-bounded$+$} with bound~$k$ (denoted by $\IHSBp k$) if~$R$ can be represented by a CNF formula with clauses of the form $(x_1 \lor \cdots \lor x_k)$, $(\neg x \lor y)$, $x$, and~$\neg x$, \item \emph{implicative hitting set-bounded$-$} with bound~$k$ (denoted by $\IHSBm k$) if~$R$ can be represented by a CNF formula with clauses of the form $(\neg x_1 \lor \cdots \lor \neg x_k)$, $(\neg x \lor y)$, $x$, and~$\neg x$. 
\end{inparaenum} A set~$\Gamma$ of Boolean relations is called $0$-valid ($1$-valid, Horn, dual Horn, monotone, affine, bijunctive, complementive, $\IHSBp k$, $\IHSBm k$) if \emph{every} relation in~$\Gamma$ is $0$-valid ($1$-valid, Horn, dual Horn, monotone, affine, bijunctive, complementive, $\IHSBp k$, $\IHSBm k$). A formula constructed from atoms by conjunction, variable identification, and existential quantification is called a \emph{primitive positive formula} (\emph{pp-formula}). If $\varphi$ is such a formula, we write again $[\varphi]$ for its set of models, i.e., the Boolean relation defined by $\varphi$. As above the coordinates of this relation are understood to be the variables of~$\varphi$ in lexicographic order, unless otherwise stated by explicit enumeration. We denote by $\cc \Gamma$ the set of all relations that can be expressed using relations from $\Gamma\cup\set{\eq}$, conjunction, variable identification (and permutation), cylindrification, and existential quantification, i.e., the set of all relations that are primitive positively definable from $\Gamma$ and equality. The set~$\cc \Gamma$ is called the \emph{co-clone} generated by~$\Gamma$. A \emph{base} of a co-clone~$\B$ is a set of relations~$\Gamma$ such that $\cc \Gamma = \B$, i.e., just a generating set with regard to primitive positive definability including equality. Note that traditionally (e.g.~\cite{JanovMucnik1959}), the notion of \emph{base} also involves minimality with respect to set inclusion. Our use of the term \emph{base} is in accordance with~\cite{BoehlerRSV-05}, where finite bases for all Boolean co-clones have been determined. Some of these are listed in Table~\ref{tab:clones}. The sets of relations being $0$-valid, $1$-valid, complementive, Horn, dual Horn, affine, bijunctive, 2affine (both bijunctive and affine), monotone, $\IHSBp k$, and $\IHSBm k$ each form a co-clone denoted by $\iI_0$, $\iI_1$, $\iN_2$, $\iE_2$, $\iV_2$, $\iL_2$, $\iD_2$, $\iD_1$, $\iM_2$, $\iS_{00}^k$, and~$\iS_{10}^k$, respectively; see Table~\ref{tab:cclones}. \begin{table}[t] \caption{Sets of Boolean relations with their names determined by co-clone inclusions} \label{tab:cclones} \centering \begin{displaymath} \begin{array}{lcl@{\hspace*{2cm}}lcl} \Gamma \subseteq \iI_0 & \Leftrightarrow & \Gamma \text{ is $0$-valid} & \Gamma \subseteq \iI_1 & \Leftrightarrow & \Gamma \text{ is $1$-valid}\\ \Gamma \subseteq \iE_2 & \Leftrightarrow & \Gamma \text{ is Horn} & \Gamma \subseteq \iV_2 & \Leftrightarrow & \Gamma \text{ is dual Horn}\\ \Gamma \subseteq \iM_2 & \Leftrightarrow & \Gamma \text{ is monotone} & \Gamma \subseteq \iD_2 & \Leftrightarrow & \Gamma \text{ is bijunctive}\\ \Gamma \subseteq \iL_2 & \Leftrightarrow & \Gamma \text{ is affine} & \Gamma \subseteq \iD_1 & \Leftrightarrow & \Gamma \text{ is 2affine}\\ \Gamma \subseteq \iN_2 & \Leftrightarrow & \Gamma \text{ is complementive} & \Gamma \subseteq \iI & \Leftrightarrow & \Gamma \text{ is both $0$- and $1$-valid}\\ \Gamma \subseteq \iS_{00}^k & \Leftrightarrow & \Gamma \text{ is $\IHSBp k$} & \Gamma \subseteq \iS_{10}^k & \Leftrightarrow & \Gamma \text{ is $\IHSBm k$} \end{array} \end{displaymath} \end{table} We will also use a weaker closure than $\cc \Gamma$, called \emph{conjunctive closure} and denoted by~$\ccc \Gamma$, where the constraint language~$\Gamma$ is closed under conjunctive definitions, but not under existential quantification or addition of explicit equality constraints. 
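For example, the binary disjunction $\bor^2 = [x\lor y]$ lies in the co-clone $\cc{\set{\bor^3, [x\to y]}}$, as witnessed by the pp-definition $[x\lor y] = [\exists z\,((x\lor y\lor z)\land(z\to x))]$; it even lies in the conjunctive closure $\ccc{\set{\bor^3}}$, because identifying variables in a single atom already gives $[x\lor y] = [\bor^3(x,y,y)]$.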
Sets of relations of the form $W = \ccc{W\cup\set{\eq}}$ are called \emph{weak systems} and are in a one-to-one correspondence with so-called strong partial clones~\cite{Romov1981}. It is a well-known consequence of the Galois theory developed in~\cite{Romov1981} that for every co-clone $\cc{\Gamma'}$ whose corresponding clone is finitely generated (this presents no restriction in the Boolean case), there is a largest partial clone whose total part coincides with that clone, cf.~\cite[Theorem~20.7.2]{Lau-06} or see~\cite[Theorems~4.6, 4.7, 4.11]{SchnoorS-08} for a proof in the Boolean case. This largest partial clone even is a strong partial clone, and hence, there is a least weak system~$W$ under inclusion such that $\cc W = \cc{\Gamma'}$. Any \emph{finite} weak generating set~$\Gamma$ of this weak system~$W$, i.e., $W =\ccc{\Gamma\cup\set{\eq}}$, is called a \emph{weak base} of~$\cc{\Gamma'}$, see~\cite[Definition~4.2]{SchnoorS-08}. Such a set~$\Gamma$, in particular, is a finite base of the co-clone~$\cc{\Gamma'}$. Finally, to get from the closure operator~$\ccc{\Gamma\cup\set{\eq}}$ (which is hard to handle in the context of our problems) to~$\ccc{\Gamma}$ (which is easy to handle), one needs the notion of irredundancy. A relation~$R$ is called \emph{irredundant}, if it has neither duplicate nor fictitious coordinates. It can be observed from the proofs of Proposition~5.2 and Corollary~5.6 in~\cite{SchnoorS-08} or from~\cite[Proposition~3.11]{SchnoorDiss}, that $R\in \ccc{\Gamma\cup\set{\eq}}$ implies $R\in \ccc{\Gamma}$ for any irredundant relation~$R$. Following Schnoor~\cite[p.~30]{SchnoorDiss}, we call a weak base of~$\cc{\Gamma'}$ consisting exclusively of irredundant relations an \emph{irredundant weak base}. Thus, if~$\Gamma$ is an irredundant weak base of~$\cc{\Gamma'}$, then the minimality of the weak system $W = \ccc{\Gamma\cup\set{\eq}}$ implies that $\Gamma\subseteq W\subseteq \ccc{\Gamma'\cup\set{\eq}}$ (cf.~\cite[Corollary~4.3]{SchnoorS-08}), and thus $\Gamma\subseteq \ccc{\Gamma'}$ because of irredundancy. Hence, we obtain the following useful tool. \begin{theorem}[{Schnoor~\cite[Corollary~3.12]{SchnoorDiss}}]\label{thm:weakbases} If\/~$\Gamma$ is an irredundant weak base of a co-clone~$\iC$, e.g.\ a minimal weak base of\/~$\iC$, then $\Gamma\subseteq \ccc{\Gamma'}$ holds for any base~$\Gamma'$ of\/~$\iC$. \end{theorem} According to Lagerkvist~\cite{Lagerkvist-14}, a \emph{minimal weak base} is an irredundant weak base satisfying an additional minimality property that ensures small cardinality. The utility of Theorem~\ref{thm:weakbases} comes in particular from the fact that Lagerkvist determined minimal weak bases for all finitely generated Boolean co-clones in~\cite{Lagerkvist-14}. For our purposes we note that each of the co-clones $\iV$, $\iV_0$, $\iV_1$, $\iV_2$, $\iN$, $\iN_2$, and $\iI$ is generated by a minimal weak base consisting of a single relation (Table~\ref{tab:weakbases}). Another source of weak base relations without duplicate coordinates comes from the following construction: let $\graphic$ be the $2^{n}$-ary relation that is given by the value tables (in some chosen enumeration) of the~$n$ distinct $n$-ary projection functions. 
More formally, let $\beta\colon 2^{n}\to \set{0,1}^n$ be the reader's preferred bijection between the index set $2^n = \set{0,\dotsc,2^{n}-1}$ and the set of all arguments of an $n$-ary Boolean function~-- often lexicographic enumeration is chosen here for presentational purposes, but the order of enumeration of the $n$-tuples does not matter as long as it remains fixed. Then $\graphic = \Set{e_i\circ \beta}{1\leq i\leq n}$, where $e_i\colon \set{0,1}^n\to\set{0,1}$ denotes the projection function onto the $i$-th coordinate. Let~$C$ be a clone with corresponding co-clone~$\iC$. Since $\iC$ is closed with respect to intersection of relations of identical arity, for any $k$-ary relation $R$, there is a least $k$-ary relation in $\iC$ containing $R$, scilicet $\GammaF{C}{R} :=\bigcap\Set{R'\in \iC}{R'\supseteq R, R'\text{ $k$-ary}}$. Traditionally, e.g.~\cite[Sect.~2.8, p.~134]{Lau-06} or \cite[Definition~1.1.16, p.~48]{PoeschelK-79}, this relation is denoted by $\Gamma_C(R)$, but here we have chosen a different notation to avoid confusion with constraint languages. It is well known, e.g.~\cite[Satz~1.1.19(i), p.~50]{PoeschelK-79}, and easy to see that $\GammaF{C}{R}$ is completely determined by the $\ell$-ary part of~$C$ whenever $\ell \geq \card{R}$: given any enumeration of $\emptyset\neq R = \set{r_1,\dotsc,r_{\ell}}$ (for technical reasons we have to exclude the case $\ell = 0$ in this presentation because we do not consider clones with nullary operations here) we have $\GammaF{C}{R} =\Set{f\circ(r_1,\dotsc,r_{\ell})}{f\in C, f \text{ $\ell$-ary}}$, where $f\circ(r_1,\dotsc,r_{\ell})$ denotes the row-wise application of~$f$ to a matrix whose columns are formed by the tuples $r_1,\dotsc,r_{\ell}$. Relations of the form $\GammaF{C}{\graphic}$ represent the $n$-ary part of the clone $C$ as a $2^{n}$-ary relation and are called the \emph{$n$-th graphic} of $C$ (cf.\ e.g.~\cite[p.~133 and Theorem~2.8.1(b)]{Lau-06}). Indeed, the previous characterization of $\GammaF{C}{\graphic}$ yields $\GammaF{C}{\graphic} =\Set{f\circ(e_1\circ\beta,\dotsc,e_n\circ\beta)}{ f\in C, f \text{ $n$-ary}} =\Set{f\circ(e_1,\dotsc,e_n)\circ\beta}{ f\in C, f \text{ $n$-ary}} =\Set{f\circ\beta}{f\in C, f \text{ $n$-ary}}$. With the help of this description of $\GammaF{C}{\graphic}$ and standard clone theoretic manipulations, one can easily verify the following result, identifying possible candidates for irredundant singleton weak bases. \begin{theorem}[{\cite[Theorem~4.11]{SchnoorS-08}}] \label{thm:weakbases-from-graphics} If $C$ is a clone and $R =\GammaF{C}{\set{r_1,\dotsc,r_n}}$ with $n\geq 1$, then $\GammaF{C}{\graphic}$ gives a singleton weak base of $\cc{\set{R}}$ without duplicate coordinates.
\end{theorem} \begin{table}[b] \caption{Minimal weak bases for some co-clones} \label{tab:weakbases} \centering \begin{displaymath} \begin{array}[t]{@{}lcl@{}} R_{\iL} &=& \even^4\\ R_{\iL_0} &=& {\even^3} \times \set{0}\\ R_{\iL_1} &=& {\odd^3} \times \set{1}\\ R_{\iL_2} &=& {\even^3_{3\neq}} \times \set{0} \times \set{1}\\ R_{\iL_3} &=& \even^4_{4\neq}\\ R_{\iN} &=& {\even^4}\cap S_0 \end{array} \qquad \begin{array}[t]{@{}lcl@{}} R_{\iV} &=& (S_1\times \set{0,1})\cap (\set{0,1}\times S_2)\\ R_{\iV_0} &=& S_1\times \set{0}\\ R_{\iV_1} &=& R_{\iV}\times \set{1}\\ R_{\iV_2} &=& S_1\times \set{0}\times \set{1}\\ R_{\iN_2} &=& [R_{\iN}(x_1,\dotsc,x_4)\land \bigwedge_{i=1}^4 x_{i+4} \eq\neg x_i]\\ R_{\iI} &=& [S_1(\neg x_1,\neg x_2,\neg x_3)\land S_1(x_4,x_2,x_3)] \end{array} \end{displaymath} \end{table} \subsection{Approximability, Reductions, and Completeness} We assume that the reader has a basic knowledge of approximation algorithms and complexity theory. We recall some basic notions of approximation algorithms and complexity theory; for details see the monographs~\cite{AusielloCGKMSP-99,CreignouKS-01}. A \emph{combinatorial optimization problem}~$\Pcal$ is a quadruple $(I, \sol, \obj, \goal)$, where: \begin{compactitem}[$\bullet$] \item $I$ is the set of admissible \emph{instances} of~$\Pcal$. \item $\sol(x)$ denotes the set of \emph{feasible solutions} for every instance~$x\in I$. \item $\obj(x,y)$ denotes the non-negative integer measure of~$y$ for every instance~$x\in I$ and every feasible solution~$y\in\sol(x)$; $\obj$ is also called \emph{objective function}. \item $\goal \in \set{\min, \max}$ denotes the \emph{optimization goal} for~$\Pcal$. \end{compactitem} A combinatorial optimization problem is said to be an \emph{$\NP$-optimization problem} ($\NPO$-problem) if \begin{compactitem}[$\bullet$] \item the instances and solutions are recognizable in polynomial time, \item the size of the solutions in~$\sol(x)$ is polynomially bounded in the size of~$x$, and \item the objective function $\obj$ is computable in polynomial time. \end{compactitem} The optimal value of the objective function for the solutions of an instance~$x$ is denoted by $\OPT(x)$. In our case the optimization goal will always be minimization, i.e., $\OPT(x)$ will be the minimum. Given an instance $x \in I$ with a feasible solution $y \in \sol(x)$ and a real number $r\geq 1$, we say that $y$ is \emph{$r$-approximate} if $\obj(x,y)\leq r\OPT(x)$ holds and our goal is minimization, or $\obj(x,y)\geq \OPT(x)/r$ and we consider a maximization problem. Let~$A$ be an algorithm that for any instance~$x$ of~$\Pcal$ such that $\sol(x)\neq \emptyset$ returns a feasible solution $A(x) \in \sol(x)$. Given an arbitrary function $r\colon \NN \to [1,\infty)$, we say that~$A$ is an $r(n)$-approximate algorithm for~$\Pcal$ if for any instance $x \in I$ having feasible solutions the algorithm returns an $r(\card{x})$-approximate solution, where~$\card x$ is the size of~$x$. If an $\NPO$ problem~$\Pcal$ admits an $r(n)$-approximate polynomial-time algorithm, we say that~$\Pcal$ is approximable within $r(n)$. An $\NPO$ problem~$\Pcal$ is in the class $\PO$ if the optimum is computable in polynomial time (i.e. if~$\Pcal$ admits a $1$-approximate polynomial-time algorithm). $\Pcal$ is in the class $\APX$ ($\pAPX$) if it is approximable within a constant (polynomial) function in the size of the instance~$x$. 
$\NPO$ is the class of all $\NPO$ problems and $\NPOPB$ is the class of all $\NPO$ problems where the objective function is polynomially bounded. The following inclusions hold for these approximation complexity classes: $\PO \subseteq \APX \subseteq \pAPX \subseteq \NPO$. All inclusions are strict unless $\P = \NP$. For reductions among decision problems we use the polynomial-time many-one reduction denoted by~$\mle$. Many-one equivalence between decision problems is denoted by~$\meq$. For reductions among optimization problems we use approximation preserving reductions, also called $\AP$-reductions, denoted by~$\aple$. $\AP$-equivalence between optimization problems is denoted by~$\apeq$. We say that an optimization problem $\Pcal$ \emph{$\AP$-reduces} to another optimization problem $\Qcal$, denoted $\Pcal \aple \Qcal$, if there are two polynomial-time computable functions~$f$ and~$g$ and a real constant~$\alpha\geq 1$ such that for all $r>1$ and all $\Pcal$-instances~$x$ the following conditions hold. \begin{compactitem}[$\bullet$] \item $f(x)$ is a $\Qcal$-instance or the generic unsolvable instance~$\bot$ (which is not part of~$\Qcal$). \item If $x$ admits feasible solutions, then $f(x)$ is different from $\bot$ and also admits feasible solutions. \item For any feasible solution~$y'$ of~$f(x)$, $g(x,y')$ is a feasible solution of~$x$. \item If~$y'$ is an $r$-approximate solution of the $\Qcal$-instance~$f(x)$, then $g(x,y')$ is an $(1+(r-1)\alpha + \Lo(1))$-approximate solution of the $\Pcal$-instance~$x$, where $\Lo(1)$ refers to the size of~$x$. \end{compactitem} Our definition of $\AP$-reducibility slightly extends the one in~\cite{AusielloCGKMSP-99} by introducing a generic unsolvable instance~$\bot$. This extension allows us to reduce problems with unsolvable instances to such without as long as the unsolvable instances can be detected in polynomial time, by making $f$ map the unsolvable instances to~$\bot$. This practice has been implicit in previous work, e.g.~\cite{KhannaSTW-01}. We also need a slightly non-standard variation of $\AP$-reductions. We say that an optimization problem $\Pcal$ \emph{$\AP$-Turing-reduces} to another optimization problem $\Qcal$ if there is a polynomial-time oracle algorithm~$A$ and a constant~$\alpha\geq 1$ such that for all $r>1$ on any input~$x$ for~$\Pcal$ \begin{compactitem}[$\bullet$] \item if all oracle calls with a $\Qcal$-instance~$x'$ are answered with a feasible $\Qcal$-solution~$y$ for~$x'$, then~$A$ outputs a feasible $\Pcal$-solution for~$x$, and \item if for every call the oracle answers with an $r$-approximate solution, then $A$ computes a $(1+(r-1)\alpha + \Lo(1))$-approximate solution for the $\Pcal$-instance~$x$. \end{compactitem} It is straightforward to check that $\AP$-Turing-reductions are transitive. Moreover, if $\Pcal$ $\AP$-Turing-reduces to $\Qcal$ with constant $\alpha$ and $\Qcal$ has an $r(n)$-approximation algorithm, then there is an $\alpha r(n)$-approximation algorithm for $\Pcal$. We will relate our problems to well-known optimization problems, by calling the problem~$\Pcal$ under investigation $\Qcal$-complete if $\Pcal\apeq \Qcal$. This notion of completeness is stricter than the one in~\cite{KhannaSTW-01}, since the latter relies on $\mathrm{A}$-reductions. For $\Qcal$, we will consider the following optimization problems analyzed in~\cite{KhannaSTW-01}. 
\optproblem{$\OptMinOnes(\Gamma)$} {A conjunctive formula~$\varphi$ over relations from~$\Gamma$.} {An assignment~$m$ satisfying~$\varphi$.} {Minimum Hamming weight $\hw(m)$.} \optproblem{$\OptWMinOnes(\Gamma)$} {A conjunctive formula~$\varphi$ over relations from~$\Gamma$ and a weight function $w\colon V \to \NN$ assigning non-negative integer weights to the variables of $\varphi$.} {An assignment~$m$ satisfying~$\varphi$.} {Minimum value $\sum_{x: m(x)=1}w(x)$.} We now define some well-studied problems to which we will relate our problems. Note that these problems do not depend on any parameter. \optproblem{$\NCW$} {A matrix $A \in \ZZ_2^{k\times l}$ and a vector $m\in \ZZ_2^l$.} {A vector $x\in \ZZ_2^k$.} {Minimum Hamming distance $\hd(xA,m)$.} \optproblem{$\MinDist$} {A matrix $A\in \ZZ_2^{k\times l}$.} {A non-zero vector $x \in \ZZ_2^l$ with $A x = 0$.} {Minimum Hamming weight $\hw(x)$.} \optproblem{$\MinHD$} {A conjunctive formula~$\varphi$ over relations from~$\set{x\lor y\lor \neg z, x, \neg x}$.} {An assignment $m$ to $\varphi$.} {Minimum number of unsatisfied conjuncts of $\varphi$.} $\NCW$, $\MinDist$ and $\MinHD$ are known to be $\NP$-hard to approximate within a factor $2^{\LOmega(\log^{1-\varepsilon}(n))}$ for every $\varepsilon > 0$~\cite{AroraBSS-97,DumerMS-03,KhannaSTW-01}. Thus if a problem $\Pcal$ is equivalent to any of these problems, it follows that $\Pcal \notin \APX$ unless $\P=\NP$. \subsection{Satisfiability} We also use the classic problem $\SAT(\Gamma)$ asking for the satisfiability of a given conjunctive formula over a constraint language~$\Gamma$. Schaefer~\cite{Schaefer-78} completely classified its complexity. $\SAT(\Gamma)$ is polynomial-time decidable if~$\Gamma$ is $0$-valid $(\Gamma \subseteq \iI_0)$, $1$-valid $(\Gamma \subseteq \iI_1)$, Horn $(\Gamma \subseteq \iE_2)$, dual Horn $(\Gamma \subseteq \iV_2)$, bijunctive $(\Gamma \subseteq \iD_2)$, or affine $(\Gamma \subseteq \iL_2)$; otherwise it is $\NP$-complete. Moreover, we need the decision problem $\AnotherSat(\Gamma)$: Given a conjunctive formula over~$\Gamma$ and a satisfying assignment~$m$, is there another satisfying assignment~$m'$ different from~$m$? The complexity of this problem was completely classified by Juban~\cite{Juban-99}. $\AnotherSat(\Gamma)$ is polynomial-time decidable if~$\Gamma$ is both $0$- and $1$-valid $(\Gamma \subseteq \iI)$, complementive $(\Gamma \subseteq \iN_2)$, Horn $(\Gamma \subseteq \iE_2)$, dual Horn $(\Gamma \subseteq \iV_2)$, bijunctive $(\Gamma \subseteq \iD_2)$, or affine $(\Gamma \subseteq \iL_2)$; otherwise it is $\NP$-complete. \subsection{Linear and Integer Programming} A \emph{unimodular matrix} is a square integer matrix having determinant $+1$ or $-1$. A \emph{totally unimodular matrix} is a matrix for which every square non-singular submatrix is unimodular. A totally unimodular matrix need not be square itself. Any totally unimodular matrix has only $0$, $+1$ or $-1$ entries. If~$A$ is a totally unimodular matrix and $\vec b$ is an integral vector, then for any given linear functional~$f$ such that the linear program $\min\Set{f(\vec x)}{A \vec x \geq\vec b}$ has a real minimum~$\vec{x}$, it also has an integral minimum point~$\vec{x}$. That is, the feasible region $\Set{\vec{x}}{A\vec{x}\geq \vec{b}}$ is an integral polyhedron. For this reason, linear programming methods can be used to obtain the solutions for integer linear programs in this case. 
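For instance, the matrix with rows $110$ and $011$ is totally unimodular, since all of its entries and all of its $2\times 2$ minors lie in $\set{0,+1,-1}$, whereas the vertex-edge incidence matrix of a triangle, with rows $110$, $101$, and $011$, has determinant of absolute value $2$ and is therefore not totally unimodular.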
Linear programs can be solved in polynomial time, hence so can integer programs with totally unimodular matrices. For details see the monograph by Schrijver~\cite{Schrijver-86}. \section{Results} \label{sec:results} This section presents the problems we consider and our results; the proofs follow in subsequent sections. The input to all our problems is a conjunctive formula over a constraint language. The satisfying assignments of the formula, i.e.\ its models or solutions, form a Boolean relation that can be understood as an associated generalized binary code. As for linear codes, the minimization target is always the Hamming distance between the codewords or models. Our three problems differ in the information additionally available for computing the required Hamming distance. Given a formula and an arbitrary assignment, the first problem asks for a solution closest to the given assignment. \optproblem{$\NearestSol(\Gamma)$, $\NSOL(\Gamma)$} {A conjunctive formula~$\varphi$ over relations from~$\Gamma$ and an assignment~$m$ to the variables occurring in~$\varphi$, which is not required to satisfy~$\varphi$.} {An assignment~$m'$ satisfying~$\varphi$ (i.e.\ a codeword of the code described by~$\varphi$).} {Minimum Hamming distance $\hd(m,m')$.} Note that the problem generalizes the $\OptMinOnes$ problem from~\cite{KhannaSTW-01}. Indeed, if we take the all-zero assignment $m = 0 \cdots 0$ as part of the input, we get exactly the $\OptMinOnes$ problem as a special case. \begin{theorem}[{\textnormal{illustrated in Figure~\ref{fig:nsol-coclones}}}]\label{thm:NSol} For a given Boolean constraint language~$\Gamma$ the optimization problem $\NSOL(\Gamma)$ is \begin{compactenum}[(i)] \item in~$\PO$ if\/ $\Gamma$ is \begin{compactenum}[(a)] \item 2affine $(\Gamma\subseteq\iD_1)$ or \item monotone $(\Gamma\subseteq\iM_2)$; \end{compactenum} \item $\APX$-complete if \begin{compactenum}[(a)] \item $\Gamma$ generates $\iD_2$ $(\cc \Gamma = \iD_2)$, or \item $[x\lor y] \in \cc \Gamma$ and $\Gamma$ is $\IHSBp k$ $(\iS_0^2 \subseteq \cc \Gamma \subseteq \iS_{00}^k)$ for some $k \in \NN$, $k\geq 2$, or \item $[\neg x \lor \neg y] \in \cc \Gamma$ and $\Gamma$ is $\IHSBm k$ $(\iS_1^2 \subseteq \cc \Gamma \subseteq \iS_{10}^k)$ for some $k \in \NN$, $k\geq 2$, or \end{compactenum} \item $\NCW$-complete if\/ $\Gamma$ is exactly affine $(\iL\subseteq\cc \Gamma\subseteq\iL_2)$; \item $\MinHD$-complete if\/ $\Gamma$ is \begin{compactenum}[(a)] \item exactly Horn $(\iE\subseteq\cc \Gamma\subseteq \iE_2)$ or \item exactly dual Horn $(\iV\subseteq\cc \Gamma\subseteq \iV_2)$; \end{compactenum} \item $\pAPX$-complete if\/ $\Gamma$ does not contain an affine relation and it is \begin{compactenum}[(a)] \item $0$-valid $(\iN\subseteq\cc \Gamma \subseteq \iI_0)$ or \item $1$-valid $(\iN\subseteq\cc \Gamma \subseteq \iI_1)$; and \end{compactenum} \item otherwise $(\iN_2\subseteq\cc\Gamma)$ it is $\NP$-complete to decide whether a feasible solution for $\NSOL(\Gamma)$ exists. \end{compactenum} \end{theorem} \begin{proof} The proof is split into several propositions presented in Section~\ref{sec:proofsNSOL}. \begin{compactenum}[(i)] \item See Propositions~\ref{prop:NSOL-iD1} and~\ref{prop:NSOL-iM2}. \item See Propositions~\ref{prop:NSOL-iS0^2-iS1^2}, \ref{prop:NSOL-D_2}, and~\ref{prop:NSOL-S_00}. \item See Corollary~\ref{cor:NSOL-NCW-hard-L} and Proposition~\ref{prop:NSOL-MinOnes-to-weak_base-L_2}. \item See Propositions~\ref{prop:iV_2_to_MinOnes} and~\ref{prop:MinOnes_to_iV_2}. \item See Proposition~\ref{prop:duphardnc}. 
\item See Proposition~\ref{prop:NSOL-iN_2-BR}. \qedhere \end{compactenum} \end{proof} \newcommand\NSOLstyle {R/.cstyle=PO,M/.cstyle=PO,D/.cstyle=PO, D2/.cstyle=APX,S2/.cstyle=APX,S3/.cstyle=APX, S0/.cstyle=NA,S1/.cstyle=NA, E/.cstyle=MHD,V/.cstyle=MHD, L/.cstyle=NCW, N/.cstyle=pAPX,N2/.cstyle=NPO,N/.lstyle={text=white}, I/.cstyle=pAPX,I2/.cstyle=NPO,I/.lstyle={text=white} } \begin{figure} \caption{Lattice of co-clones with complexity classification for $\NSOL$.} \label{fig:nsol-coclones} \end{figure} Given a constraint and one of its solutions, the second problem asks for another solution closest to the given one. \optproblem{$\NextSol(\Gamma)$, $\XSOL(\Gamma)$} {A conjunctive formula~$\varphi$ over relations from~$\Gamma$ and a satisfying assignment~$m$ (to the variables mentioned in~$\varphi$).} {An assignment~$m'\neq m$ satisfying~$\varphi$.} {Minimum Hamming distance $\hd(m,m')$.} The difference between the problems $\NearestSol$ and $\NextSol$ is the knowledge, or its absence, whether the input assignment satisfies the constraint. Moreover, for $\NearestSol$ we may output the given assignment if it satisfies the formula while for $\NextSol$ we have to output an assignment different from the one given as the input. \begin{theorem}[{\textnormal{illustrated in Figure~\ref{fig:xsol-coclones}}}]\label{thm:NOSol} For every constraint language~$\Gamma$ the optimization problem $\XSOL(\Gamma)$ is \begin{compactenum}[(i)] \item in~$\PO$ if \begin{compactenum}[(a)] \item $\Gamma$ is bijunctive $(\Gamma\subseteq\iD_2)$ or \item $\Gamma$ is $\IHSBp k$ $(\Gamma\subseteq\iS_{00}^k)$ for some $k \in \NN$, $k \geq 2$ or \item $\Gamma$ is $\IHSBm k$ $(\Gamma\subseteq\iS_{10}^k)$ for some $k \in \NN$, $k \geq 2$; \end{compactenum} \item $\MinDist$-complete if\/ $\Gamma$ is exactly affine $(\iL\subseteq\cc \Gamma\subseteq\iL_2)$; \item $\MinHD$-complete under $\AP$-Turing-reductions if\/~$\Gamma$ is \begin{compactenum}[(a)] \item exactly Horn $(\iE\subseteq\cc \Gamma\subseteq \iE_2)$ or \item exactly dual Horn $(\iV\subseteq\cc \Gamma\subseteq \iV_2)$; \end{compactenum} \item in $\pAPX$ if\/~$\Gamma$ is \begin{compactenum}[(a)] \item exactly both $0$-valid and $1$-valid $(\cc \Gamma = \iI)$ or \item exactly complementive $(\iN \subseteq \cc \Gamma \subseteq \iN_2)$, \end{compactenum} where $\XSOL(\Gamma)$ is $n$-approximable but not $(n^{1-\varepsilon})$-approximable for any $\varepsilon>0$ unless $\P=\NP$; \item and otherwise $(\iI_0\subseteq\cc \Gamma$ or $\iI_1\subseteq\cc \Gamma)$ it is $\NP$-complete to decide whether a feasible solution for $\XSOL(\Gamma)$ exists. \end{compactenum} \end{theorem} \begin{proof} The proof is split into several propositions presented in Section~\ref{sec:proofsXSOL}. \begin{compactenum}[(i)] \item See Propositions~\ref{prop:XSOL-iD2} and~\ref{prop:XSOL-iI00m}. \item See Proposition~\ref{prop:MinDist-hardness-XSOL}. \item See Corollary~\ref{cor:XSOL-Horn-dual_Horn}. \item See Propositions~\ref{prop:XSOL-iI_0-iI_1} and~\ref{prop:tightxsol}. \item See Proposition~\ref{prop:XSOL-iI_0-iI_1}. 
\qedhere \end{compactenum} \end{proof} \newcommand\NOSOLstyle {R/.cstyle=PO,M/.cstyle=PO,D/.cstyle=PO, S2/.cstyle=PO,S3/.cstyle=PO, S0/.cstyle=NA,S1/.cstyle=NA, E/.cstyle=MHD,V/.cstyle=MHD, L/.cstyle=MD, N/.cstyle=pAPX,N/.lstyle={text=white}, I/.cstyle=NPO,Ix/.cstyle=pAPX,I/.lstyle={text=white} } \begin{figure} \caption{Lattice of co-clones with complexity classification for $\XSOL$.} \label{fig:xsol-coclones} \end{figure} The third problem does not take any assignments as input, but asks for two solutions which are as close to each other as possible. We optimize once more the Hamming distance between the solutions. \optproblem{$\MinSolDistance(\Gamma)$, $\MSD(\Gamma)$} {A conjunctive formula~$\varphi$ over relations from~$\Gamma$.} {Two satisfying truth assignments~$m\neq m'$ to the variables occurring in~$\varphi$.} {Minimum Hamming distance $\hd(m,m')$.} The $\MinSolDistance$ problem enlarges the notion of minimum distance of an error correcting code. The following theorem is a more fine-grained analysis of the result published by Vardy in~\cite{Vardy-97}, extended to an optimization problem. \begin{theorem}[{\textnormal{illustrated in Figure~\ref{fig:msd-coclones}}}]\label{thm:MSD} For any constraint language~$\Gamma$ the optimization problem $\MSD(\Gamma)$ is \begin{compactenum}[(i)] \item in~$\PO$ if\/~$\Gamma$ is \begin{compactenum}[(a)] \item bijunctive $(\Gamma\subseteq\iD_2)$ or \item Horn $(\Gamma\subseteq\iE_2)$ or \item dual Horn $(\Gamma\subseteq\iV_2)$; \end{compactenum} \item $\MinDist$-complete if\/~$\Gamma$ is exactly affine $(\iL\subseteq\cc \Gamma\subseteq\iL_2)$; \item in $\pAPX$ if\/ $\dup^3 \in \cc \Gamma$ and~$\Gamma$ is both $0$-valid and $1$-valid $(\iN \subseteq \cc \Gamma \subseteq \iI)$, where $\MSD(\Gamma)$ is $n$-approximable but not $(n^{1{-}\varepsilon})$-approximable for any $\varepsilon>0$ unless $\P=\NP$; and \item otherwise $(\iN_2\subseteq\cc \Gamma$ or $\iI_0\subseteq\cc \Gamma$ or $\iI_1\subseteq\cc \Gamma)$ it is $\NP$-complete to decide whether a feasible solution for $\MSD(\Gamma)$ exists. \end{compactenum} \end{theorem} \begin{proof} The proof is split into several propositions presented in Section~\ref{sec:proofsMSD}. \begin{compactenum}[(i)] \item See Propositions~\ref{prop:MSD-iD2} and~\ref{prop:MSD-iE2-iV2}. \item See Proposition~\ref{prop:msdaffine}. \item For $\Gamma \subseteq \iI$, every formula~$\varphi$ over~$\Gamma$ has at least two solutions since it is both $0$-valid and $1$-valid. Thus $\TSSAT(\Gamma)$ is in~$\P$, and Proposition~\ref{prop:linearapprox} yields that $\MSD(\Gamma)$ is $n$-approximable. By Proposition~\ref{prop:tightnessOfLinearapprox} this approximation is indeed tight. \item According to~\cite{Juban-99}, $\AnotherSat(\Gamma)$ is $\NP$-hard for $\iI_0\subseteq\cc \Gamma$, or $\iI_1\subseteq\cc \Gamma$. By Lemma~\ref{lem:asattotwo} it follows that $\TSSAT(\Gamma)$ is $\NP$-hard, too. For $\iN_2\subseteq\cc \Gamma$ we can reduce the $\NP$-hard problem $\SAT(\Gamma)$ to $\TSSAT(\Gamma)$. Hence it is $\NP$-complete to decide whether a feasible solution for $\MSD(\Gamma)$ exists in all three cases. 
\qedhere \end{compactenum} \end{proof} \newcommand\MSDstyle {R/.cstyle=PO,M/.cstyle=PO,D/.cstyle=PO, E/.cstyle=PO,V/.cstyle=PO, S2/.cstyle=PO,S3/.cstyle=PO, S0/.cstyle=NA,S1/.cstyle=NA, L/.cstyle=MD, N/.cstyle=pAPX,N2/.cstyle=NPO,N/.lstyle={text=white}, I/.cstyle=NPO,Ix/.cstyle=pAPX,I/.lstyle={text=white} } \begin{figure} \caption{Lattice of co-clones with complexity classification for $\MSD$.} \label{fig:msd-coclones} \end{figure} The three optimization problems can be transformed into decision problems in the usual way. We add an integer bound $k$ to the input and ask if the Hamming distance satisfies the inequality $\hd(m,m') \leq k$. This way we obtain the corresponding decision problems $\dNSOL$, $\dXSOL$, and $\dMSD$, respectively. Their complexity follows immediately from the theorems above: all cases in $\PO$ become polynomial-time decidable, whereas the other cases, which are $\APX$-hard, become $\NP$-complete. This yields the following dichotomy theorems, classifying the decision problems as polynomial-time decidable or $\NP$-complete for all sets~$\Gamma$ of relations. \begin{corollary}\label{cor:dMSD} For each constraint language~$\Gamma$ \begin{compactitem} \item $\dNSOL(\Gamma)$ is in~$\P$ if\/~$\Gamma$ is 2affine or monotone, and it is $\NP$-complete otherwise. \item $\dXSOL(\Gamma)$ is in~$\P$ if\/~$\Gamma$ is bijunctive, $\IHSBp k$, or $\IHSBm k$, and it is $\NP$-complete otherwise. \item $\dMSD(\Gamma)$ is in~$\P$ if\/~$\Gamma$ is bijunctive, Horn, or dual Horn, and it is $\NP$-complete otherwise. \end{compactitem} \end{corollary} \section{Applicability of Clone Theory and Duality} \label{sec:proofsPP} We show that clone theory is applicable to the problem $\NSOL$, and we exhibit inner symmetries between co-clones (duality) that can be exploited to shorten several proofs in the following sections. \subsection{Nearest Solution} There are two natural versions of $\NSOL(\Gamma)$. In one version the formula~$\varphi$ is quantifier-free, while in the other one we allow existential quantification. We call the former version $\NSOL(\Gamma)$ and the latter $\NSOLpp(\Gamma)$ and show that both versions are equivalent. Let $\dNSOL(\Gamma)$ and $\dNSOLpp(\Gamma)$ be the decision problems corresponding to $\NSOL(\Gamma)$ and $\NSOLpp(\Gamma)$, asking whether there is a satisfying assignment within a given bound. \begin{proposition}\label{prop:quantifiers} For any constraint language~$\Gamma$ we have the equivalences $\dNSOL(\Gamma)\meq\dNSOLpp(\Gamma)$ and $\NSOL(\Gamma)\apeq\NSOLpp(\Gamma)$. \end{proposition} \begin{proof} The reduction from left to right is trivial in both cases. For the other direction, consider first an instance of $\dNSOLpp(\Gamma)$ with formula~$\varphi$, assignment~$m$, and bound~$k$. Let $x_1, \ldots, x_n$ be the free variables of $\varphi$ and let $y_1, \ldots, y_\ell$ be the existentially quantified ones, which we can assume to be disjoint. By discarding variables $y_i$ while not changing $[\varphi]$, we can assume that each variable $y_i$ occurs in at least one atom of~$\varphi$. We construct a quantifier-free formula $\varphi'$, where the non-quantified variables of~$\varphi$ get duplicated by a factor $\lambda:=(n+\ell+1)^2$ such that the effect of quantified variables becomes negligible.
For each variable~$z$ we define the set $B(z)$ as follows: \begin{eqnarray*} B(z) &=& \begin{cases} \set{x_i^1,\dots,x_i^\lambda} & \text{if $z=x_i$ for some $i\in \set{1, \ldots, n}$},\\ \set{y_i} & \text{if $z=y_i$ for some $i\in \set{1, \ldots, \ell}$}. \end{cases} \end{eqnarray*} For every atom $R(z_1, \ldots, z_s)$ in~$\varphi$, the quantifier-free formula $\varphi'$ over the variables $\bigcup_{i=1}^n B(x_i) \cup \bigcup_{i=1}^\ell B(y_i)$ contains the atom $R(z_1', \ldots, z_s')$ for every combination $(z_1', \ldots, z_s')$ from $B(z_1)\times \cdots \times B(z_s)$. Moreover, we construct an assignment $B(m)$ of~$\varphi'$ by assigning to every variable~$x_i^j$ the value $m(x_i)$ and to~$y_i$ the value~$0$. Note that because there is an upper bound on the arities of relations from~$\Gamma$, this is a polynomial time construction. We claim that $\varphi$ has a solution $m'$ with $\hd(m, m') \le k$ if and only if $\varphi'$ has a solution $m''$ with $\hd(B(m), m'')\le k\lambda +\ell$. First, observe that if $m'$ with the desired properties exists, then there is an extension $m'_{\mathrm{e}}$ of $m'$ to the $y_i$ that satisfies all atoms. Define $m''$ by setting $m''(x_i^j):= m'(x_i)$ and $m''(y_i):= m'_{\mathrm{e}}(y_i)$ for all $i$ and $j$. Then $m''$ is clearly a satisfying assignment of $\varphi'$. Moreover, $m''$ and $B(m)$ differ in at most $k\lambda$ variables among the $x_i^j$. Since there are only~$\ell$ other variables~$y_i$, we get $\hd(m'', B(m))\leq k\lambda + \ell$ as desired. Now suppose $m''$ satisfies $\varphi'$ with $\hd(B(m), m'') \le k\lambda +\ell$. We may assume for each~$i$ that $m''(x_i^1) = \cdots = m''(x_i^\lambda)$. Indeed, if this is not the case, then setting all~$x_i^j$ to $B(m)(x_i^j)=m(x_i)$ will result in a satisfying assignment closer to $B(m)$. After at most~$n$ iterations we get some~$m''$ as desired. Now define an assignment~$m'$ for~$\varphi$ by setting $m'(x_i):=m''(x_i^1)$. Then~$m'$ satisfies~$\varphi$, because the variables~$y_i$ can be assigned values as in $m''$. Moreover, whenever $m(x_i)$ differs from $m'(x_i)$, the inequality $B(m)(x_i^j) \neq m''(x_i^j)$ holds for every~$j$. Thus we obtain $\lambda \hd(m, m') \leq \hd(B(m), m'') \leq k\lambda+\ell$. Therefore, we have the inequality $\hd(m, m') \leq k + \ell /\lambda$ and hence $\hd(m, m')\leq k$, since $\ell /\lambda<1$. This completes the many-one reduction. To see that the construction above is also an $\AP$-reduction, let~$m''$ be an $r$-approximation for~$\varphi'$ and $B(m)$, i.e., $\hd(B(m), m'')\leq r \cdot \OPT(\varphi', B(m))$. Construct~$m'$ as before, so $\lambda \hd(m, m') \leq \hd(B(m), m'') \leq r \cdot \OPT(\varphi', B(m))$. Since $\OPT(\varphi', B(m))$ is at most $\lambda \OPT(\varphi, m) + \ell$ as before, we get $\lambda \hd(m, m') \leq r ( \lambda \OPT(\varphi, m) + \ell)$. This implies the inequality $\hd(m,m') \leq r \cdot \OPT(\varphi, m) + r\cdot \ell / \lambda = (r + \Lo(1))\cdot \OPT(\varphi, m)$ and shows that the construction is an $\AP$-reduction with $\alpha = 1$. \end{proof} \begin{remark}\label{rem:zero} Note that in the reduction from $\dNSOLpp(\Gamma)$ to $\dNSOL(\Gamma)$ we construct the assignment $B(m)$ as an extension of~$m$ by setting all new variables to~$0$. In particular, if~$m$ is the constant $0$-assignment, then so is $B(m)$. We use this observation as we continue. \end{remark} The following result is a technical lemma, which allows us to consider constraints with disjoint variables independently. 
\begin{lemma}\label{lem:split} Let $\varphi(\vec x, \vec y) = \psi(\vec x) \land \chi(\vec y)$ be a $\Gamma$-formula over a constraint language~$\Gamma$ and~$m$ an assignment over disjoint variable blocks $\vec x$ and $\vec y$. Let $(\varphi, m)$ be an instance of\/ $\NSOL(\Gamma)$. Then $\OPT(\varphi, m) = \OPT(\psi, m\Restriction_{\vec x}) + \OPT(\chi, m\Restriction_{\vec y})$. \end{lemma} \begin{proof} If $s\in[\varphi]$, then $s\Restriction_{\vec{x}} \in [\psi]$ and $s\Restriction_{\vec{y}}\in[\chi]$. Conversely, if $s_\psi\in[\psi]$ and $s_\chi\in[\chi]$, then $s:= s_\psi\cup s_\chi$ is a model of $\varphi$. If $s\in[\varphi]$ is optimal, i.e.\ $\hd(s,m)=\OPT(\varphi,m)$, then \begin{displaymath} \OPT(\varphi,m)=\hd(s,m) = \hd(s\Restriction_{\vec{x}},m\Restriction_{\vec{x}}) +\hd(s\Restriction_{\vec{y}},m\Restriction_{\vec{y}}) \geq \OPT(\psi,m\Restriction_{\vec{x}}) +\OPT(\chi,m\Restriction_{\vec{y}}). \end{displaymath} Conversely, if $s_\psi\in[\psi]$ and $s_\chi\in[\chi]$ are optimal solutions for their respective problems, then $s := s_\psi\cup s_\chi$ satisfies \begin{displaymath} \OPT(\varphi,m)\leq\hd(s,m) = \hd(s\Restriction_{\vec{x}},m\Restriction_{\vec{x}}) +\hd(s\Restriction_{\vec{y}},m\Restriction_{\vec{y}}) = \OPT(\psi,m\Restriction_{\vec{x}}) +\OPT(\chi,m\Restriction_{\vec{y}}).\qedhere \end{displaymath} \end{proof} We can also show that introducing explicit equality constraints does not change the complexity of our problem. We need two introductory lemmas. The first one deals with equalities that do not interfere with the other atoms of the given formula. \begin{lemma}\label{lem:pre-processing-equality} For constraint languages $\Gamma$, the problems $\NSOL(\Gamma\cup\set{\eq})$ and $\dNSOL(\Gamma\cup\set{\eq})$ reduce to particular cases of the respective problem, where for each constraint $x\eq y$ in the given formula~$\varphi$ at least one of $x,y$ occurs also in some $\Gamma$-atom of~$\varphi$. \end{lemma} \begin{proof} Let $(\varphi,m)$ be an instance of $\NSOL(\Gamma\cup\set{\eq})$. Without loss of generality we assume $\varphi$ to be of the form $\psi\land\varepsilon$, where $\psi$ is a $\Gamma$-formula and $\varepsilon$ is a $\set\eq$-formula. Let $(V_i)_{i\in I}$ be the unique finest partition of the variables in~$\varepsilon$ satisfying that variables $x,y$ are in the same partition class if $x\eq y$ occurs in~$\varepsilon$. For each index $i\in I$ we designate a specific variable~$x_i\in V_i$. Let $\psi'$ be the formula obtained from~$\psi$ by substituting all occurrences of variables $y\in V_i$ by $x_i$. Moreover, let $I'$ be the set of indices $i\in I$ such that $x_i$ actually occurs in~$\psi'$, and let $I'':=I\smallsetminus I'$ be the set of indices without this property. We set $\varepsilon':=\bigwedge_{i\in I'}\varepsilon_i$ and $\varepsilon'':=\bigwedge_{i\in I''}\varepsilon_i$, where the formula $\varepsilon_i:=\bigwedge_{y\in V_i}(x_i \eq y)$ expresses the equivalence of the variables in~$V_i$. Note that the formulas $\psi\land\varepsilon$ and $\chi:= \psi'\land\varepsilon'\land\varepsilon''$ contain the same variables and have identical sets of models. Now consider the formula $\varphi':=\psi'\land\varepsilon'$ and the assignment $m' := m\Restriction_{V'}$, where $V'$ is the set of variables occurring in~$\varphi'$. The pair $(\varphi', m')$ is an $\NSOL(\Gamma\cup\set{\eq})$ instance with the additional properties stated in the lemma. 
By construction we have $\chi = \varphi' \land \varepsilon''$, where the set~$V'$ of variables in $\varphi'$ and the set~$V''$ of variables in~$\varepsilon''$ are disjoint. By Lemma~\ref{lem:split} we obtain $\OPT(\varphi,m) = \OPT(\chi,m) =\OPT(\varphi',m') + \OPT(\varepsilon'',m\Restriction_{V''})$. An optimal solution $s_{\varepsilon''}$ of~$\varepsilon''$ and the optimal value $d:=\OPT(\varepsilon'',m\Restriction_{V''})$ can obviously be computed in polynomial time. Therefore the instance $(\varphi,m,k)$ of $\dNSOL(\Gamma\cup\set{\eq})$ corresponds to the instance $(\varphi',m',k-d)$ of the restricted decision problem in the polynomial-time many-one reduction. Moreover, if~$s'$ is an $r$-approximate solution of $(\varphi',m')$ for some $r\geq 1$, then $s:=s'\cup s_{\varepsilon''}$ is a solution of $\varphi$, and we have \begin{equation*} \hd(s,m) = \hd(s',m') + d \leq r\OPT(\varphi',m') + d \leq r\OPT(\varphi',m') + rd = r\OPT(\varphi,m), \end{equation*} so the constructed solution~$s$ of $\varphi$ is also $r$-approximate. This concludes the proof of the $\AP$-reduction with factor $\alpha=1$. \end{proof} When dealing with $\NSOL(\Gamma\cup\set{\eq})$, the previous lemma enables us to concentrate on instances where the formula~$\varphi$ has the form $\psi(z_1,\dotsc,z_n,x_1,\dotsc,x_t) \land \bigwedge_{i=1}^t\bigwedge_{x\in V_i} (x_i \eq x)$, where $V_1,\dotsc,V_t$ are disjoint sets of variables, being also disjoint from the variables of the $\Gamma$-formula $\psi$. For each $1\leq i\leq t$ the given assignment $m$ can have equal distance to the zero vector and the all-ones vector on the variables in $V_i\cup\set{x_i}$, or it can be closer to one of the constant vectors. It is convenient to group the equality constraints according to these three cases. The following lemma discusses how to remove those equality constraints, on whose variables $m$ is not equidistant from~$\vec{0}$ and~$\vec{1}$. \begin{lemma}\label{lem:Heindl} Let~$\Gamma$ be a constraint language and~$\psi(z_1,\dotsc,z_{n},x_1,\dotsc,x_\alpha,v_1,\dotsc,v_\beta,w_1,\dotsc,w_\gamma)$ be any $\Gamma$-formula containing precisely the distinct variables $z_1,\dotsc,z_{n}$, $x_1,\dotsc,x_\alpha$, $v_1,\dotsc,v_\beta$ and $w_1,\dotsc,w_\gamma$. Consider a formula \[\varphi := \psi \land \bigwedge_{a=1}^\alpha\bigwedge_{x\in I_a'} (x_a \eq x) \land \bigwedge_{b=1}^\beta \bigwedge_{x\in J_b'} (v_b \eq x) \land \bigwedge_{c=1}^\gamma\bigwedge_{x\in K_c'} (w_c \eq x) \] where $I_1',\dotsc,I_\alpha'$, $J_1',\dotsc,J_\beta'$ and $K_1',\dotsc,K_\gamma'$ are non-empty sets of variables that are pairwise disjoint and disjoint from the variables in~$\psi$. For $1\leq a\leq \alpha$, $1\leq b\leq \beta$ and $1\leq c\leq\gamma$ we put $I_a:= I_a' \cup\set{x_a}$, $J_b:= J_b' \cup\set{v_b}$ and $K_c := K_c'\cup\set{w_c}$. 
Moreover, let $m$ be an assignment for $\varphi$, such that for $1\leq a\leq \alpha$, $1\leq b\leq \beta$ and $1\leq c\leq\gamma$ \begin{align*} d_{0,I_a}&:= \hd(m\Restriction_{I_a},\vec{0}),& d_{1,I_a}&:= \hd(m\Restriction_{I_a},\vec{1}),& &&& d_{1,I_a} - d_{0,I_a} &&= 0\\ d_{0,J_b}&:= \hd(m\Restriction_{J_b},\vec{0}),& d_{1,J_b}&:= \hd(m\Restriction_{J_b},\vec{1}),& &\text{satisfy}& e_b :={}& d_{1,J_b} - d_{0,J_b} &&> 0\\ d_{0,K_c}&:= \hd(m\Restriction_{K_c},\vec{0}),& d_{1,K_c}&:= \hd(m\Restriction_{K_c},\vec{1}),& &&f_c :={}& d_{0,K_c} - d_{1,K_c} &&> 0 \end{align*} It is possible to construct a formula~$\psi'$, whose size is polynomial in the size of~$\varphi$, and an assignment~$M$ for $\varphi':= \psi'\land\bigwedge_{a=1}^\alpha\bigwedge_{x\in I_a'} (x_a\eq x)$ such that the following holds \begin{compactitem} \item $\psi$, $\varphi$, $\varphi'$ and $\psi'$ are equisatisfiable; \item if $\psi$ is satisfiable, then $\OPT(\varphi,m) = \OPT(\varphi',M) + d$ where $d = \sum_{b=1}^\beta d_{0,J_b} +\sum_{c=1}^\gamma d_{1,K_c}$; \item for every $r\in[1,\infty)$, one can produce an ($r$-approximate) solution of $(\varphi,m)$ from any ($r$-\hskip0pt{}approximate) solution of $(\varphi',M)$ in polynomial time. \end{compactitem} \end{lemma} \begin{proof} First, we describe how to construct the formula~$\psi'$. We abbreviate $Z:=\set{z_1,\dotsc,z_n,x_1,\dotsc,x_\alpha}$, $Z':=Z\cup \bigcup_{a=1}^\alpha I_a$, $V:=\set{v_1,\dotsc,v_\beta}$ and $W:=\set{w_1,\dotsc,w_\gamma}$. For every variable $u\in Z\cup V\cup W$ define a set $B(u)$ of variables as follows: \[ B(u) = \begin{cases} \set{u} & \text{if } u\in Z\\ \set{u^1,\dotsc,u^{e_b}} & \text{if } u=v_b\in V\\ \set{u^1,\dotsc,u^{f_c}} & \text{if } u=w_c\in W. \end{cases} \] For each atom~$R(u_1,\dotsc,u_q)$ of~$\psi$ define a set of atoms $\Set{R(u_1',\dotsc,u_q')}{(u_1',\dotsc,u_q')\in\prod_{i=1}^q B(u_i)}$, take the union over all these sets and define $\psi'$ as the conjunction of all its members, giving a formula over $Z\cup V'\cup W'$ where $V' =\bigcup_{u\in V} B(u)$ and $W' = \bigcup_{u\in W} B(u)$. Adding again the equality constraints, where~$m$ has equal distance from~$\vec{0}$ and~$\vec{1}$ we get $\varphi' = \psi'\land\bigwedge_{a=1}^\alpha\bigwedge_{x\in I_a'} (x_a\eq x)$ over~$Z'\cup V'\cup W'$. This is a polynomial time construction since the arities of relations in~$\Gamma$ are bounded. Moreover, we define an assignment~$M$ to the variables~$u$ of~$\varphi'$ as follows: \begin{align*} M(u)=\begin{cases} m(u)&\text{if } u\in Z'\\ 0 &\text{if } u\in V'\\ 1 &\text{if } u\in W'. \end{cases} \end{align*} Let~$S'$ be a solution of~$(\varphi',M)$. If $S'$ is constant on~$B(u)$, for each $u\in V\cup W$, then put $S'':= S'$. Otherwise, by letting $S''(u):=S'(u)$ for $u\in Z'$ and for $u\in B(u')$ where $u'\in V\cup W$ is such that $S'$ is constant on $B(u')$, and by defining $S''(u):= M(u) = 0$ for the remaining variables $u\in V'$ and $S''(u):= M(u) = 1$ for the remaining variables $u\in W'$, we obtain a model~$S''$ of~$\varphi'$ satisfying $\hd(S'',M)\leq \hd(S',M)$ and being constant on $B(u)$ for each $u\in V\cup W$. From~$S''$ we construct an assignment~$S$ of~$\varphi$ by defining $S(u):=S''(u)$ for~$u\in Z'$, $S(u):=S''(v_b^1)$ for $u\in J_b$ and $1\leq b\leq \beta$, and $S(u):=S''(w_c^1)$ for $u\in K_c$ and $1\leq c\leq \gamma$. It satisfies~$\varphi$ as $e_b,f_c>0$ for $1\leq b\leq \beta$ and $1\leq c\leq \gamma$. 
From these definitions, it follows \begin{alignat*}{5} \hd(S'',M) &= \hd(S''\Restriction_{Z'},M\Restriction_{Z'}) &&+ \sum_{b=1}^\beta \hd(S''\Restriction_{B(v_b)},M\Restriction_{B(v_b)}) &&+ \sum_{c=1}^\gamma \hd(S''\Restriction_{B(w_c)},M\Restriction_{B(w_c)})\\ &= \hd(S''\Restriction_{Z'},m\Restriction_{Z'}) &&+ \sum_{b=1}^\beta S''(v_b^1)\cdot e_b &&+ \sum_{c=1}^\gamma (1-S''(w_c^1))\cdot f_c,\\ \intertext{because $S''$ is constant on $B(u)$ for $u\in V\cup W$ and $\lvert B(v_b)\rvert = e_b$, $\lvert B(w_c)\rvert = f_c$ for $1\leq b\leq \beta$ and $1\leq c\leq \gamma$; and} \hd(S,m) &= \hd(S\Restriction_{Z'},m\Restriction_{Z'}) &&+ \sum_{b=1}^\beta \hd(S\Restriction_{J_b},m\Restriction_{J_b}) &&+ \sum_{c=1}^\gamma \hd(S\Restriction_{K_c},m\Restriction_{K_c})\\ &= \hd(S''\Restriction_{Z'},m\Restriction_{Z'}) &&+ \sum_{b=1}^\beta \left(S''(v_b^1)\cdot e_b + d_{0,J_b}\right) &&+ \sum_{c=1}^\gamma \left((1-S''(w_c^1))\cdot f_c + d_{1,K_c}\right). \end{alignat*} Consequently, $\hd(S,m) = \hd(S'',M) + d$, where $d=\sum_{b=1}^\beta d_{0,J_b} + \sum_{c=1}^\gamma d_{1,K_c}$. \par Using this, we shall prove below that $\OPT(\varphi',M) + d = \OPT(\varphi,m)$. Thus, if~$S'$ now takes the role of an $r$-approximate solution of~$(\varphi',M)$ for some $r\geq 1$, then it follows that \begin{align*} \hd(S,m) = \hd(S'',M) + d \leq \hd(S',M) + d &\leq r\OPT(\varphi',M) + d\\ &\leq r\OPT(\varphi',M) + rd = r\OPT(\varphi,m). \end{align*} \par Let subsequently $S'$ be such that $\OPT(\varphi',M) = \hd(S',M)$, and let $s$ be a model of~$\varphi$. Construct a model~$s'$ of~$\varphi'$ by putting $s'(u):=s(u)$ for~$u\in Z'$ and $s'(u):=s(u')$ for $u\in B(u')$ and $u'\in V\cup W$. As above we get $\hd(s,m) = \hd(s',M) + d$ because the definitions imply \begin{alignat*}{5} \hd(s',M) &= \hd(s'\Restriction_{Z'},M\Restriction_{Z'}) &&+ \sum_{b=1}^\beta \hd(s'\Restriction_{B(v_b)},M\Restriction_{B(v_b)}) &&+ \sum_{c=1}^\gamma \hd(s'\Restriction_{B(w_c)},M\Restriction_{B(w_c)})\\ &= \hd(s\Restriction_{Z'},m\Restriction_{Z'}) &&+ \sum_{b=1}^\beta s(v_b)\cdot e_b &&+ \sum_{c=1}^\gamma (1-s(w_c))\cdot f_c\enspace;\\ \hd(s,m) &= \hd(s\Restriction_{Z'},m\Restriction_{Z'}) &&+ \sum_{b=1}^\beta \hd(s\Restriction_{J_b},m\Restriction_{J_b}) &&+ \sum_{c=1}^\gamma \hd(s\Restriction_{K_c},m\Restriction_{K_c})\\ &= \hd(s\Restriction_{Z'},m\Restriction_{Z'}) &&+ \sum_{b=1}^\beta \left(s(v_b)\cdot e_b + d_{0,J_b}\right) &&+ \sum_{c=1}^\gamma \left((1-s(w_c))\cdot f_c + d_{1,K_c}\right). \end{alignat*} By minimality of $S'$, we obtain $\hd(S'',M)\leq \hd(S',M)\leq \hd(s',M)$. If we additionally require that~$s$ be an optimal solution of~$(\varphi,m)$, then $\hd(s',M) = \hd(s,m) -d \leq \hd(S,m) -d = \hd(S'',M)$. Thus, the distances $\hd(S'',M)$, $\hd(S',M)$ and $\hd(s',M)$ coincide, which implies the desired equality $\OPT(\varphi,m)=\hd(s,m) = \hd(s',M)+d = \hd(S',M)+d=\OPT(\varphi',M)+d$. \end{proof} The previous lemma, in fact, describes an $\AP$-reduction from the specialized version of the problem $\NSOL(\Gamma\cup\set{\eq})$ discussed in Lemma~\ref{lem:pre-processing-equality} to an even more specialized variant (the analogous statement is true for the decision version---instances $(\varphi,m,k)$ can be decided by considering $(\varphi',M,k-d)$ instead): namely all equality constraints touch variables in $\Gamma$-atoms and the given assignment has equal distance from the constant tuples on each variable block connected by equalities. In the next result we show how to remove also these equality constraints. 
\begin{proposition}\label{prop:equality} For constraint languages~$\Gamma$ we have $\dNSOL(\Gamma)\meq\dNSOL(\Gamma\cup\set{\eq})$ and $\NSOL(\Gamma)\apeq\NSOL(\Gamma\cup\set{\eq})$. \end{proposition} \begin{proof} The reduction from left to right is trivial. For the other direction, consider first an instance of $\dNSOL(\Gamma\cup\set{\eq})$ with formula~$\varphi$, assignment~$m$, and bound~$k$. Applying the reductions indicated in Lemmas~\ref{lem:pre-processing-equality} and~\ref{lem:Heindl}, we can assume (also for $\NSOL(\Gamma\cup\set{\eq})$) that $\varphi$ is of the form $\psi \land \bigwedge_{a=1}^\alpha \bigwedge_{x\in I'_{a}} (x_a \eq x)$ with a $\Gamma$-formula~$\psi$ containing the distinct variables $z_1,\dotsc,z_n,x_1,\dotsc,x_\alpha$ ($n\geq 0$, $\alpha\geq 1$) and non-empty disjoint (from each other and from $\psi$) variable sets $I'_a$ for $1\leq a\leq \alpha$. Moreover, we can suppose that $\hd(m\Restriction_{I_a},\vec{0}) = \hd(m\Restriction_{I_a},\vec{1}) =:c_a$ for all $1\leq a\leq \alpha$, where $I_a$ denotes the set $I'_a\cup\set{x_a}$. We define $c := \sum_{a=1}^\alpha c_a$, and we choose some $\ell$-element index set $I$ such that $\alpha/\ell <1$, that is, $\ell\geq \alpha+1$ (we shall place another condition on $\ell$ at the end). We construct a formula $\varphi'$ as follows: For each atom $R(u_1,\dotsc,u_q)$ of~$\psi$ we introduce the set $\Set{R(u_1^{i_1},\dotsc,u_q^{i_q})}{(i_1,\dotsc,i_q)\in I^q}$ of atoms, where for $1\leq \nu\le q$ and $i\in I$ we let $u_{\nu}^{i} := z_{j,i}$ if $u_{\nu} = z_j$ for some $1\leq j\leq n$, and $u_{\nu}^{i} := u_{\nu}$ otherwise, i.e.\ if $u_{\nu}\in \set{x_1,\dotsc,x_\alpha}$. Take the union over all these sets and let~$\varphi'$ be the conjunction of all atoms in this union. This construction can be carried out in polynomial time since there is a bound on the arities of relations in~$\Gamma$. Define an assignment~$M$ by $M(x_a):= m(x_a)$ for $1\leq a\leq \alpha$ and $M(z_{j,i}):=m(z_j)$ for $1\leq j\leq n$ and $i\in I$. We claim that the existence of solutions for $(\varphi,m,k)$ can be decided by checking for solutions of $(\varphi',M,\ell(k-c)+\alpha)$. The argument is similar to that of Proposition~\ref{prop:quantifiers}: $\psi$ is (un)satisfiable if and only if $\varphi$ and $\varphi'$ are, so we have a correct answer in the unsatisfiable case. Otherwise, consider a solution $s$ to $(\varphi,m,k)$. Letting $Z:=\set{z_1,\dotsc,z_n}$, we have \begin{equation*} \hd(s,m) = \hd(s\Restriction_Z,m\Restriction_Z) + \sum_{a=1}^\alpha \hd(s\Restriction_{I_a},m\Restriction_{I_a}) =\hd(s\Restriction_Z,m\Restriction_Z) + \sum_{a=1}^\alpha c_a =\hd(s\Restriction_Z,m\Restriction_Z) + c, \end{equation*} i.e.\ $\hd(s\Restriction_Z,m\Restriction_Z)\leq k-c$. Putting $s'(x_a):=s(x_a)$ for $1\leq a\leq \alpha$ and $s'(z_{j,i}):=s(z_j)$ for $1\leq j\leq n$ and $i\in I$ we get a model of~$\varphi'$, and it follows that $\hd(s'\Restriction_{Z'},M\Restriction_{Z'}) =\ell\cdot\hd(s\Restriction_{Z},m\Restriction_{Z})\leq \ell\cdot(k-c)$, where $Z':=\set{z_{j,i}\mid 1\leq j\leq n, i\in I}$. Therefore, abbreviating $X := \set{x_1,\dotsc,x_\alpha}$, we obtain $\hd(s',M) = \hd(s'\Restriction_{Z'},M\Restriction_{Z'}) + \hd(s'\Restriction_X,M\Restriction_X) \leq \ell\cdot(k-c)+\alpha$. Conversely, let $S'$ be a solution of $(\varphi',M,\ell(k-c)+\alpha)$. As in Proposition~\ref{prop:quantifiers} we can construct a solution $S''$ being constant on $\set{z_{j,i}\mid i\in I}$ for each $1\leq j\leq n$.
Letting $S(x):=S''(x_a)$ for $x\in I_a$ and $1\leq a\leq \alpha$ and $S(z_j):= S''(z_{j,i})$ for some fixed index $i\in I$ and all $1\leq j\leq n$, one obtains a model of~$\varphi$. If $S(z_j)\neq m(z_j)$ for some $1\leq j\leq n$, then we have $S''(z_{j,i}) = S(z_j) \neq m(z_j) = M(z_{j,i})$ for all $i\in I$. Hence, we have $ \ell\cdot\hd(S\Restriction_{Z},m\Restriction_{Z}) \leq\hd(S''\Restriction_{Z'},M\Restriction_{Z'}) \leq \hd(S'',M) \leq \hd(S',M) $. Division by~$\ell$ implies $\hd(S\Restriction_{Z},m\Restriction_{Z})\leq \hd(S',M)/\ell \leq k-c + \alpha/\ell < k-c+1$, i.e.\ $\hd(S\Restriction_{Z},m\Restriction_{Z})\leq k-c$. From this we finally infer that $\hd(S,m) = \hd(S\Restriction_{Z},m\Restriction_{Z}) + c \leq k$. Suppose now that~$S'$ is an $r$-approximate solution for $(\varphi',M)$ for some $r\geq 1$, i.e.\ we have $\hd(S',M)\leq r\OPT(\varphi',M)$. Constructing a model $S$ of~$\varphi$ as before, we obtain $\ell\hd(S\Restriction_{Z},m\Restriction_{Z})\leq \hd(S',M)\leq r\OPT(\varphi',M)$. Further from an optimal solution of~$\varphi$, we get a model~$s'$ of~$\varphi'$ satisfying \begin{align*} \OPT(\varphi',M)\leq \hd(s',M) &= \hd(s'\Restriction_{Z'},M\Restriction_{Z'}) + \hd(s'\Restriction_X,M\Restriction_X)\\ &= \ell(\OPT(\varphi,m) - c) + \hd(s'\Restriction_X,M\Restriction_X) \leq \ell(\OPT(\varphi,m)-c) + \alpha. \end{align*} Multiplying this inequality by~$r$, combining it with previous inequalities and dividing by~$\ell$ we thus have $\hd(S\Restriction_{Z},m\Restriction_{Z})\leq r\OPT(\varphi,m)-rc +r\alpha/\ell$. Note that $\OPT(\varphi,m)>0$, because if $\OPT(\varphi,m) =0$, then we would have a unique optimal model of~$\varphi$, namely~$m$. Then $m\Restriction_{I_1}$ would have to be constant, implying $\hd(m\Restriction_{I_1},\vec{0}) \neq \hd(m\Restriction_{I_1},\vec{1})$, as one distance would be zero and the other one $\lvert I_1\rvert >0$. Therefore, for $\ell\in \LOmega(\lvert\varphi\rvert^2)$ we have $\hd(S,m) = \hd(S\Restriction_{Z},m\Restriction_{Z}) + c \leq \hd(S\Restriction_{Z},m\Restriction_{Z}) + rc \leq r\OPT(\varphi,m)+r\alpha/\ell \leq \OPT(\varphi,m)(r + r\alpha/\ell) = \OPT(\varphi,m)(r + \Lo(1))$. This demonstrates an $\AP$-reduction with factor~$1$. \end{proof} Propositions~\ref{prop:quantifiers} and~\ref{prop:equality} allow us to switch freely between formulas with quantifiers and equality and those without. Hence we may derive upper bounds in the setting without quantifiers and equality while using the latter in hardness reductions. In particular, we can use pp-definability when implementing a constraint language~$\Gamma$ by another constraint language~$\Gamma'$. Hence it suffices to consider Post's lattice of co-clones to characterize the complexity of $\NSOL(\Gamma)$ for every finite constraint language~$\Gamma$. \begin{corollary}\label{cor:coclones} For constraint languages~$\Gamma$ and $\Gamma'$, for which the inclusion $\Gamma'\subseteq \cc \Gamma$ holds, we have the reductions $\dNSOL(\Gamma')\mle\dNSOL(\Gamma)$ and $\NSOL(\Gamma')\aple\NSOL(\Gamma)$. Thus, if $\cc{\Gamma'} = \cc\Gamma$ is satisfied, then the equivalences $\dNSOL(\Gamma)\meq\dNSOL(\Gamma')$ and $\NSOL(\Gamma)\apeq\NSOL(\Gamma')$ hold. \end{corollary} Next we prove that, in certain cases, unit clauses in the formula do not change the complexity of $\NSOL$. \begin{proposition}\label{prop:unary} Let $\Gamma$ be a constraint language such that feasible solutions of\/ $\NSOL(\Gamma)$ can be found in polynomial time. 
Then we have $\NSOL(\Gamma) \apeq \NSOL(\Gamma\cup \set{[x], [\neg x]})$. \end{proposition} \begin{proof} The direction from left to right is obvious. For the other direction, we give an $\AP$-reduction from $\NSOL(\Gamma\cup \set{[x], [\neg x]})$ to $\NSOL(\Gamma\cup \set{\eq})$. The latter is $\AP$-equivalent to $\NSOL(\Gamma)$ by Proposition~\ref{prop:equality}. The idea of the construction is to introduce two sets of variables $y_1, \ldots, y_{n^2}$ and $z_1, \ldots, z_{n^2}$ such that in any feasible solution all $y_i$ take the same value and all $z_i$ take the same value. By setting $m(y_i)=1$ and $m(z_i)=0$ for each $i$, any feasible solution $m'$ of small Hamming distance to $m$ will have $m'(y_i)=1$ and $m'(z_i)=0$ for all $i$ as well, because deviating from this would be prohibitively expensive. Finally, we simulate the unary relations $[x]$ and $[\neg x]$ by $x\eq y_1$ and $x\eq z_1$, respectively. We now describe the reduction formally. Consider a formula~$\varphi$ over $\Gamma \cup \set{[x], [\neg x]}$ with the variables $x_1, \ldots, x_n$ and an assignment~$m$. If~$(\varphi,m)$ fails to have feasible solutions, i.e., if~$\varphi$ is unsatisfiable, we can detect this in polynomial time by the assumption of the proposition and return the generic unsatisfiable instance~$\bot$. Otherwise, we construct a $(\Gamma \cup \set{\eq})$-formula~$\varphi'$ over the variables $x_1, \ldots, x_n$, $y_1, \ldots, y_{n^2}$, $z_1, \ldots, z_{n^2}$ and an assignment~$m'$. We obtain~$\varphi'$ from~$\varphi$ by replacing every occurrence of a constraint~$[x]$ by $x \eq y_1$ and every occurrence of~$[\neg x]$ by $x \eq z_1$. Finally, we add the atoms $y_i \eq y_1$ and $z_i \eq z_1$ for all $i \in \set{2, \ldots, n^2}$. Let~$m'$ be the assignment of the variables of~$\varphi'$ given by $m'(x_i) = m(x_i)$ for each $i \in \set{1, \ldots, n}$, and $m'(y_i)=1$ and $m'(z_i)=0$ for all $i \in \set{1, \ldots, n^2}$. To any feasible solution~$m''$ of~$\varphi'$ we assign $g(\varphi, m, m'')$ as follows. \begin{compactenum} \item \label{case1} If $\varphi$ is satisfied by~$m$, we define $g(\varphi, m, m'')$ to be equal to~$m$. \item \label{case2} Else if $m''(y_i)=0$ holds for all $i \in \set{1, \ldots, n^2}$ or $m''(z_i)=1$ for all $i \in \set{1, \ldots, n^2}$, we define $g(\varphi, m, m'')$ to be any satisfying assignment of~$\varphi$. \item \label{case3} Otherwise, we have $m''(y_i)=1$ and $m''(z_i)=0$ for all $i \in \set{1, \ldots, n^2}$. In this case we define $g(\varphi, m, m'')$ to be the restriction of~$m''$ onto $x_1, \ldots, x_n$. \end{compactenum} Observe that all variables~$y_i$ and all~$z_i$ are forced to take the same value in any feasible solution, respectively, so $g(\varphi, m, m'')$ is always well-defined. The construction is an $\AP$-reduction. Assume that~$m''$ is an $r$-approximate solution. We will show that $g(\varphi, m, m'')$ is also an $r$-approximate solution. \paragraph{Case~\ref{case1}:} $g(\varphi, m, m'')$ is an optimal solution, so there is nothing to show. \paragraph{Case~\ref{case2}:} Observe first that~$\varphi$ has a solution because otherwise it would have been mapped to~$\bot$ and~$m''$ would not exist. Thus, $g(\varphi, m, m'')$ is well-defined and feasible by construction. Observe that~$m'$ and~$m''$ disagree on all~$y_i$ or on all~$z_i$, so we have $\hd(m', m'')\ge n^2$. Moreover, since~$\varphi$ has a feasible solution, it follows that $\OPT(\varphi', m') \leq n$. Since~$m''$ is an $r$-approximate solution, we have that $n\OPT(\varphi',m') \leq n^2\leq \hd(m',m'')\leq r\OPT(\varphi',m')$.
If $\OPT(\varphi',m')=0$, then $m'$ would have to be a model of $\varphi'$, and its restriction to the~$x_i$, namely~$m$, would be a model of~$\varphi$. This is handled in the first case, which is disjoint from the current one; hence, we infer $n\leq r$. Consequently, the distance $\hd(m, g(\varphi, m, m''))$ is bounded above by $n \leq r \leq r \cdot \OPT(\varphi,m)$, where the last inequality holds because~$\varphi$ is not satisfied by~$m$ and thus the distance of any optimal solution from~$m$ is at least~$1$. \paragraph{Case~\ref{case3}:} The variables~$x_i$, for which the relation~$[x_i]$ is a constraint, all satisfy $g(\varphi,m,m'')(x_i)=1$ by construction. Moreover, we have $g(\varphi, m, m'')(x_i)=0$ for all~$x_i$ for which $[\neg x_i]$ is a constraint of~$\varphi$. Consequently, $g(\varphi, m, m'')$ is feasible. Again, $\OPT(\varphi', m') \leq n$, so any optimal solution to $(\varphi', m')$ must set all variables~$y_i$ to~$1$ and all~$z_i$ to~$0$. It follows that $\OPT(\varphi,m) = \OPT(\varphi', m')$. Thus we get \begin{displaymath} \hd(m, g(\varphi, m, m'')) = \hd(m', m'') \leq r \cdot \OPT(\varphi',m') = r \cdot \OPT(\varphi,m), \end{displaymath} which completes the proof. \end{proof} \subsection{Inapplicability of Clone Closure} \label{sec:inap} Proposition~\ref{prop:quantifiers} and Corollary~\ref{cor:coclones} show that the complexity of~$\NSOL$ is not affected by existential quantification, by giving an explicit reduction from~$\NSOLpp$ to~$\NSOL$. It does not seem possible to prove the same for $\XSOL$ and $\MSD$. However, similar results hold for the conjunctive closure; thus we resort to minimal or irredundant weak bases of co-clones instead of usual bases. \begin{proposition}\label{prop:coclonesmincd}\label{prop:coclonesminhd} Let~$\Gamma$ and~$\Gamma'$ be constraint languages. If $\Gamma'\subseteq \ccc \Gamma$ holds, then we have the reductions $\dXSOL(\Gamma')\mle\dXSOL(\Gamma)$ and $\XSOL(\Gamma')\aple\XSOL(\Gamma)$, as well as $\dMSD(\Gamma')\mle\dMSD(\Gamma)$ and $\MSD(\Gamma')\aple\MSD(\Gamma)$. \end{proposition} \begin{proof} We prove only the part that $\Gamma'\subseteq \ccc \Gamma$ implies $\XSOL(\Gamma')\aple\XSOL(\Gamma)$. The other results will be clear from that reduction since the proof is generic and therefore holds for both $\XSOL$ and $\MSD$, as well as for their decision variants. Let a $\Gamma'$-formula~$\varphi$ be an instance of $\XSOL(\Gamma')$. Since $\Gamma' \subseteq \ccc \Gamma$, every constraint $R(x_1, \ldots, x_k)$ of~$\varphi$ can be written as a conjunction of constraints over relations from~$\Gamma$. Substitute the latter into~$\varphi$, obtaining $\varphi'$. Now $\varphi'$ is an instance of $\XSOL(\Gamma)$, where~$\varphi'$ is only polynomially larger than~$\varphi$. As~$\varphi$ and~$\varphi'$ have the same variables and hence the same models, the closest distinct models of~$\varphi$ and~$\varphi'$ are also the same. \end{proof} \subsection{Duality} \label{sec:duality} Given a relation $R \subseteq \set{0,1}^n$, its \emph{dual} relation is $\dual(R) = \Set{\cmpl m}{m \in R}$, i.e., the relation containing the complements of tuples from~$R$. Duality naturally extends to sets of relations and co-clones. We define $\dual(\Gamma) = \Set{\dual(R)}{R \in \Gamma}$ as the set of dual relations to~$\Gamma$. Since taking complements is involutive, duality is a symmetric relation. If a relation~$R'$ (a set of relations~$\Gamma'$) is dual to a relation~$R$ (to a set of relations~$\Gamma$), then~$R$ (respectively~$\Gamma$) is also dual to~$R'$ (respectively~$\Gamma'$).
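As a small illustration of this definition (the relation is chosen merely as an example), consider $[x\lor y] = \set{(0,1),(1,0),(1,1)}$; taking the pointwise complements of its tuples yields
\begin{displaymath}
  \dual([x\lor y]) = \set{(1,0),(0,1),(0,0)} = [\neg x\lor\neg y],
\end{displaymath}
so $[x\lor y]$ and $[\neg x\lor \neg y]$ are dual to each other.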
By a simple inspection of the bases of co-clones in Table~\ref{tab:clones}, we can easily see that many co-clones are dual to each other. For instance, $\iE_2$ is dual to~$\iV_2$. The following proposition shows that it is sufficient to consider only one half of Post's lattice of co-clones. \begin{proposition}\label{prop:dual} For any constraint language~$\Gamma$ we have $\dNSOL(\Gamma)\meq\dNSOL(\dual(\Gamma))$ and $\NSOL(\Gamma)\apeq\NSOL(\dual(\Gamma))$, $\dXSOL(\Gamma)\meq\dXSOL(\dual(\Gamma))$ and $\XSOL(\Gamma)\apeq\XSOL(\dual(\Gamma))$, as well as $\dMSD(\Gamma)\meq\dMSD(\dual(\Gamma))$ and $\MSD(\Gamma)\apeq\MSD(\dual(\Gamma))$. \end{proposition} \begin{proof} Let~$\varphi$ be a $\Gamma$-formula and~$m$ an assignment to~$\varphi$. We construct a $\dual(\Gamma)$-formula $\varphi'$ by substituting every atom $R(\vec x)$ by $\dual(R)(\vec x)$. The assignment~$m$ satisfies~$\varphi$ if and only if~$\cmpl m$ satisfies~$\varphi'$, where~$\cmpl m$ is the pointwise complement of~$m$. Moreover, $\hd(m, m') = \hd(\cmpl m, \cmpl m')$. \end{proof} \section{Finding the Nearest Solution} \label{sec:proofsNSOL} This section contains the proof of Theorem~\ref{thm:NSol}. We first consider the polynomial-time cases, followed by the cases of higher complexity. \subsection{Polynomial-Time Cases} \label{ssec:proofsNSOL-PO} \begin{proposition}\label{prop:NSOL-iD1} If a constraint language~$\Gamma$ is both bijunctive and affine $(\Gamma\subseteq\iD_1)$, then $\NSOL(\Gamma)$ can be solved in polynomial time. \end{proposition} \begin{proof} Since~$\Gamma\subseteq \iD_1= \cc{\Gamma'}$ with $\Gamma':=\set{[x\oplus y], [x]}$, we have the reduction $\NSOL(\Gamma)\aple \NSOL(\Gamma')$ by Corollary~\ref{cor:coclones}. Every $\Gamma'$-formula~$\varphi$ is equivalent to a linear system of equations over the Boolean ring~$\ZZ_2$, consisting of equations of the form $x\oplus y = 1$ and $x=1$. Substitute the fixed values $x=1$ into the equations of the form $x\oplus y = 1$ and propagate. If a contradiction is found thereby, reject the input. After an exhaustive application of this rule only equations of the form $x\oplus y = 1$ remain. For each of them put an edge $\set{x,y}$ into~$E$, defining an undirected graph $G=(V,E)$, whose vertices $V$ are the unassigned variables. If $G$ is not bipartite, then $\varphi$ has no solutions, so we can reject the input. Otherwise, compute a bipartition $V = L \dot\cup R$. We assume that~$G$ is connected; if not, perform the following algorithm for each connected component (cf.\ Lemma~\ref{lem:split}). Assign the value~$0$ to each variable in~$L$ and the value~$1$ to each variable in~$R$, giving the satisfying assignment~$m_1$. Swapping the roles of $0$ and $1$ w.r.t.\ $L$ and $R$ we get a model~$m_2$. Return whichever of $m_1$ and $m_2$ is closer to~$m$, i.e.\ the one attaining $\min\set{\hd(m,m_1),\hd(m,m_2)}$. \end{proof} \begin{proposition}\label{prop:NSOL-iM2} If a constraint language~$\Gamma$ is monotone $(\Gamma\subseteq\iM_2)$, then $\NSOL(\Gamma)$ can be solved in polynomial time. \end{proposition} \begin{proof} We have $\iM_2 = \cc{\Gamma'}$ where $\Gamma':=\set{[x\to y], [\neg x], [x]}$. Thus Corollary~\ref{cor:coclones} and $\Gamma\subseteq\cc{\Gamma'}$ imply $\NSOL(\Gamma)\aple\NSOL(\Gamma')$. The relations $[\neg x]$ and~$[x]$ determine a unique value for the respective variable; therefore we can eliminate unit clauses and propagate the values. If a contradiction occurs, we reject the input. It thus remains to consider formulas~$\varphi$ containing only binary implicative clauses of type $x \to y$.
Let~$V$ be the set of variables in~$\varphi$, and for $i\in\set{0,1}$ let $V_i = \Set{x \in V}{m(x) = i}$ be the variables mapped to value~$i$ by assignment~$m$. We transform the formula~$\varphi$ to a linear programming problem as follows. For each clause $x \to y$ we add the inequality $y \geq x$, and for each variable $x \in V$ we add the constraints $x \geq 0$ and $x \leq 1$. As linear objective function we use $f(\vec x) = \sum_{x \in V_0} x + \sum_{x \in V_1} (1 - x)$. For an arbitrary solution~$m'$, it returns the number of variables that change their parity between~$m$ and~$m'$, i.e., $f(m')=\hd(m,m')$. This way we obtain the (integer) linear programming problem $(f, A \vec x \geq \vec b)$, where~$A$ is a totally unimodular matrix and~$\vec b$ is an integral column vector. The rows of~$A$ consist of the left-hand sides of inequalities $y - x \geq 0$, $x \geq 0$, and $-x \geq -1$, which constitute the system $A \vec x \geq \vec b$. Every entry in~$A$ is~$0$, $+1$, or~$-1$. Every row of~$A$ has at most two non-zero entries. For the rows with two entries, one entry is~$+1$, the other is~$-1$. According to Condition~(iv) in Theorem~19.3 in~\cite{Schrijver-86}, this is a sufficient condition for~$A$ being totally unimodular. As~$A$ is totally unimodular and~$\vec b$ is an integral vector, $f$ has integral minimum points, and one of them can be computed in polynomial time (see~e.g.~\cite[Chapter~19]{Schrijver-86}). \end{proof} \subsection{Hard Cases} \label{ssec:proofsNSOL-hard} We start off with an easy corollary of Schaefer's dichotomy. \begin{proposition}\label{prop:NSOL-iN_2-BR} Let $\Gamma$ be a finite set of Boolean relations. If\/ $\iN_2\subseteq \cc \Gamma$, then it is $\NP$-complete to decide whether a feasible solution exists for $\NSOL(\Gamma)$; otherwise, $\NSOL(\Gamma)\in \pAPX$. \end{proposition} \begin{proof} If $\iN_2\subseteq \cc \Gamma$ holds, checking the existence of feasible solutions for $\NSOL(\Gamma)$-instances is $\NP$-hard by Schaefer's theorem~\cite{Schaefer-78}. Let $(\varphi,m)$ be an instance of $\NSOL(\Gamma)$. We give an $n$-approximate algorithm for the other cases, where $n$ denotes the number of variables in~$\varphi$. If~$m$ satisfies~$\varphi$, return~$m$. Otherwise compute an arbitrary solution~$m'$ of~$\varphi$, which can be done in polynomial time by Schaefer's theorem. This algorithm is $n$-approximate: If~$m$ satisfies~$\varphi$, the algorithm returns the optimal solution; otherwise we have $\OPT(\varphi, m) \geq 1$ and $\hd(m,m') \leq n$, hence the answer~$m'$ of the algorithm is $n$-approximate. \end{proof} \subsubsection{APX-Complete Cases} We start with reductions from the optimization version of vertex cover. Since the relation $[x \lor y]$ is a straightforward Boolean encoding of vertex cover, we immediately get the following result. \begin{proposition}\label{prop:NSOL-iS0^2-iS1^2} $\NSOL(\Gamma)$ is $\APX$-hard for every constraint language~$\Gamma$ satisfying $\iS_0^2 \subseteq \cc \Gamma$ or $\iS_1^2 \subseteq \cc \Gamma$. \end{proposition} \begin{proof} We have $\iS_0^2 = \cc{\set{[x\lor y]}}$ and $\iS_1^2 = \cc{\set{[\neg x \lor \neg y]}}$. We discuss the former case, the latter one being symmetric and provable from the first one by Proposition~\ref{prop:dual}. We encode $\prbname{VertexCover}$ into $\NSOL(\set{[x\lor y]})$. For each edge $\set{x,y} \in E$ of a graph $G=(V,E)$ we add the clause $(x \lor y)$ to the formula~$\varphi_G$. 
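As a small illustration (the graph serves only as an example), the triangle $K_3$ on the vertices $a$, $b$, $c$ yields the formula
\begin{displaymath}
  \varphi_{K_3} = (a\lor b)\land(a\lor c)\land(b\lor c),
\end{displaymath}
whose models closest to the all-zero assignment are $(1,1,0)$, $(1,0,1)$, and $(0,1,1)$, each at Hamming distance~$2$ from~$\vec{0}$ and each corresponding to a vertex cover of~$K_3$ of minimum size.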
Every model $m'$ of~$\varphi_G$ yields a vertex cover $\set{v\in V\mid m'(v) = 1}$, and conversely, the characteristic function of any vertex cover satisfies~$\varphi_G$. Moreover, we choose $m = \vec{0}$. Then $\hd(\vec{0}, m')$ is minimized if and only if the number of~$1$s in~$m'$ is minimized, i.e., if and only if~$m'$ represents a minimum vertex cover of~$G$. Since $\prbname{VertexCover}$ is $\APX$-complete (see e.g.~\cite{AusielloCGKMSP-99}) and $\NSOL(\set{[x\lor y]})\aple\NSOL(\Gamma)$ (see Corollary~\ref{cor:coclones}), the result follows. \end{proof} \begin{proposition}\label{prop:NSOL-D_2} We have $\NSOL(\Gamma) \in \APX$ for constraint languages $\Gamma\subseteq \iD_2$. \end{proposition} \begin{proof} $\Gamma':=\set{[x \oplus y], [x\to y]}$ is a base of $\iD_2$. By Corollary~\ref{cor:coclones} it suffices to show that $\NSOL(\Gamma')$ is in $\APX$. Let $(\varphi, m)$ be an instance of this problem. Feasibility for $\varphi$ can be encoded as an integer program as follows: Every constraint $x \oplus y$ induces an equation $x + y=1$, and every constraint $x \to y$ an inequality $x\le y$. If we restrict all variables to $\set{0,1}$ by the appropriate inequalities, it is clear that an assignment~$m'$ satisfies~$\varphi$ if and only if it satisfies the linear system with inequality side conditions. As objective function we use $f(\vec x):=\sum_{x\in V_0} x + \sum_{x\in V_1} (1-x)$, where $V_i$ is the set of variables mapped to~$i$ by~$m$. Clearly, for every solution~$m'$ we have $f(m') = \hd(m, m')$. The $2$-approximation algorithm from~\cite{HochbaumMNT-93} for integer linear programs, where every inequality contains at most two variables, completes the proof. \end{proof} \begin{proposition}\label{prop:NSOL-S_00} We have $\NSOL(\Gamma) \in \APX$ for constraint languages $\Gamma\subseteq\iS_{00}^\ell$ with $\ell\geq2$. \end{proposition} \begin{proof} $\Gamma':=\set{[x_1\lor \cdots \lor x_\ell], [x\to y], [\neg x], [x]}$ is a base of $\iS_{00}^\ell$. By Corollary~\ref{cor:coclones} it suffices to show that $\NSOL(\Gamma')$ is in $\APX$. Let $(\varphi,m)$ be an instance of this problem. We use an approach similar to the one for the corresponding case in~\cite{KhannaSTW-01}, again writing~$\varphi$ as an integer program. We write constraints $x_1 \lor \cdots \lor x_\ell$ as inequalities $x_1+\cdots + x_\ell \ge 1$, constraints $x \to y$ as $x\le y$, $\neg x$ as $x=0$, and $x$ as $x=1$. Moreover, we add $x\geq 0$ and $x\leq 1$ for each variable~$x$. It is easy to check that the feasible Boolean solutions of $\varphi$ and of the linear system coincide. As objective function we use $f(\vec x):=\sum_{x\in V_0} x + \sum_{x\in V_1} (1-x)$, where $V_i$ is the set of variables mapped to~$i$ by~$m$. Clearly, for every solution~$m'$ we have $f(m') = \hd(m, m')$. Therefore it suffices to approximate the optimal solution for the integer linear program. To this end, let~$m''$ be a (generally non-integral) optimal solution to the relaxation of the linear program, which can be computed in polynomial time. We construct~$m'$ by setting $m'(x) = 0$ if $m''(x)< 1 / \ell$ and $m'(x) = 1$ if $m''(x) \geq 1 / \ell$. As $\ell\geq 2$, we get $\hd(m,m')=f(m') \leq \ell f(m'') \leq \ell \cdot \OPT(\varphi, m)$. It is easy to check that~$m'$ is a feasible solution, which completes the proof. \end{proof} \subsubsection{NearestCodeword-Complete Cases} This section essentially uses the facts that $\OptMinOnes$ is $\NCW$-complete for the co-clone~$\iL_2$ and that it is a special case of $\NSOL$.
The following result was stated by Khanna et al. for completeness via $\A$-reductions~\cite[Theorem 2.14]{KhannaSTW-01}. A closer look at the proof reveals that it also holds for the stricter notion of completeness via $\AP$-reductions that we use. \begin{proposition} \label{prop:MinOnes-NCW-complete} The problem $\OptMinOnes(\Gamma)$ is $\NCW$-complete via $\AP$-reductions for constraint languages~$\Gamma$ satisfying $\cc{\Gamma} = \iL_2$. \end{proposition} \begin{proof} According to~\cite[Lemma 8.13]{KhannaSTW-01}, $\OptMinOnes(\Gamma)$ is $\NCW$-hard for $\iL \subseteq \cc \Gamma$. The proof uses $\AP$-reductions, i.e., we have $\NCW\aple\OptMinOnes(\Gamma)$. Regarding the other direction, $\OptMinOnes(\Gamma)\aple\NCW$, we first observe that $\odd^3=\SET{(a_1,a_2,a_3)\in\set{0,1}^3}{$\sum_i a_i$ odd}$ and $\even^3=\SET{(a_1,a_2,a_3)\in\set{0,1}^3}{$\sum_i a_i$ even}$ perfectly implement every constraint in $\iL_2$, i.e., $\cc{\set{\odd^3,\even^3}}=\iL_2$~\cite[Lemma 7.6]{KhannaSTW-01}. Therefore, for $\Gamma\subseteq\iL_2$, the problem $\OptWMinOnes(\Gamma)$ $\AP$-reduces to $\OptWMinOnes(\set{\odd^3,\even^3})$~\cite[Lemma 3.9]{KhannaSTW-01}. The latter problem $\AP$-reduces to $\WMinCSP(\set{\odd^3,\even^3,[\lnot x]})$~\cite[Lemma 8.1]{KhannaSTW-01}, which further $\AP$-reduces to $\WMinCSP(\set{\odd^3,\even^3})$ because of $[\lnot x]=[\even^3(x,x,x)]$. In total we thus have that $\OptWMinOnes(\Gamma)$ $\AP$-reduces to $\WMinCSP(\set{\odd^3,\even^3})$. We conclude by observing that $\OptMinOnes$ is a particular case of~$\OptWMinOnes$ and that $\NCW$ is the same as $\WMinCSP(\set{\odd^3,\even^3})$, yielding $\OptMinOnes(\Gamma)\aple\NCW$. \end{proof} \begin{lemma}\label{lem:MinOnes-aple-NSOL} We have $\OptMinOnes(\Gamma)\aple \NSOL(\Gamma)$ for any constraint language~$\Gamma$. \end{lemma} \begin{proof} $\OptMinOnes(\Gamma)$ is a special case of $\NSOL(\Gamma)$ where~$m$ is the constant $\vec 0$-assignment. \end{proof} \begin{corollary}\label{cor:NSOL-NCW-hard-L} $\NSOL(\Gamma)$ is $\NCW$-hard for constraint languages~$\Gamma$ satisfying $\iL \subseteq \cc \Gamma$. \end{corollary} \begin{proof} $\Gamma':=\set{\even^4,[x],[\neg x]}$ is a base of $\iL_2$. By Proposition~\ref{prop:MinOnes-NCW-complete}, $\OptMinOnes(\Gamma')$ is $\NCW$-complete. By Lemma~\ref{lem:MinOnes-aple-NSOL}, $\OptMinOnes(\Gamma')$ reduces to $\NSOL(\Gamma')$. By Proposition~\ref{prop:unary}, $\NSOL(\Gamma')$ is $\AP$-equivalent to $\NSOL(\set{\even^4})$. Finally, because of $\even^4\in\iL\subseteq\cc{\Gamma}$ and Corollary~\ref{cor:coclones}, $\NSOL(\set{\even^4})$ reduces to $\NSOL(\Gamma)$. \end{proof} \begin{proposition}\label{prop:NSOL-MinOnes-to-weak_base-L_2} We have $\NSOL(\Gamma)\aple\OptMinOnes(\set{\even^4, [\neg x], [x]})$ for constraint languages~$\Gamma\subseteq\iL_2$. \end{proposition} \begin{proof} $\Gamma':=\set{\even^4, [\neg x], [x]}$ is a base of $\iL_2$. By Corollary~\ref{cor:coclones} it suffices to show $\NSOL(\Gamma') \aple \OptMinOnes(\Gamma')$. We proceed by reducing $\NSOL(\Gamma')$ to a subproblem of $\NSOLpp(\Gamma')$, where only instances $(\varphi,\vec{0})$ are considered. Then, using Proposition~\ref{prop:quantifiers} and Remark~\ref{rem:zero}, this reduces to a subproblem of $\NSOL(\Gamma')$ with the same restriction on the assignments, which is exactly $\OptMinOnes(\Gamma')$. Note that $[x \oplus y]$ is equal to $\left[\exists z \exists z' (\even^4(x,y, z, z') \land \neg z \land z')\right]$ so we can freely use $[x \oplus y]$ in any $\Gamma'$-formula. 
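To verify this identity (a one-line check, reading $\even^4$ as the quaternary even-parity relation, analogously to the definition of $\even^3$ above), note that the conjuncts $\neg z$ and $z'$ force $z=0$ and $z'=1$, and
\begin{displaymath}
  \even^4(x,y,0,1) \quad\Longleftrightarrow\quad x+y+0+1 \equiv 0 \pmod{2} \quad\Longleftrightarrow\quad x\oplus y = 1 .
\end{displaymath}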
Let formula~$\varphi$ and assignment~$m$ be an instance of $\NSOL(\Gamma')$. We copy all clauses of~$\varphi$ to~$\varphi'$. For each variable~$x$ of~$\varphi$ for which $m(x)=1$, we take a new variable~$x'$ and add the constraint $x \oplus x'$ to~$\varphi'$. Moreover, we existentially quantify~$x$. Clearly, there is a bijection~$I$ between the satisfying assignments of~$\varphi$ and those of~$\varphi'$: For every solution~$s$ of~$\varphi$ we get a solution~$I(s)$ of~$\varphi'$ by setting for each~$x'$ introduced in the construction of~$\varphi'$ the value $I(s)(x')$ to the complement of $s(x)$. Moreover, we have that $\hd(m, s) = \hd(\vec 0, I(s))$. This yields a trivial $\AP$-reduction with $\alpha=1$. \end{proof} \subsubsection{MinHornDeletion-Complete Cases} \begin{proposition}[Khanna et al.~\cite{KhannaSTW-01}]\label{prop:kstw-minhd} The optimization problems $\OptMinOnes(\set{x\lor y\lor \neg z, x,\neg x})$ and $\OptWMinOnes(\set{x\lor y\lor \neg z, x\lor y})$ are $\MinHD$-complete via $\AP$-reductions. \end{proposition} \begin{proof} These results are stated in~\cite[Theorem 2.14]{KhannaSTW-01} for completeness via $\A$-reductions. The actual proof in~\cite[Lemma 8.7 and Lemma 8.14]{KhannaSTW-01}, however, uses $\AP$-reductions, hence the results also hold for our stricter notion of completeness. \end{proof} \begin{lemma}\label{lem:horncontain} $\NSOL(\set{x\lor y\lor \neg z}) \aple \OptWMinOnes(\set{x\lor y\lor \neg z, x\lor y})$. \end{lemma} \begin{proof} Let formula~$\varphi$ and assignment~$m$ be an instance of $\NSOL(\set{x\lor y\lor \neg z})$ over the variables $x_1, \ldots,x_n$. Let $V_1$ be the set of variables $x_i$ with $m(x_i)=1$. We construct a $\set{x\lor y\lor \neg z, x\lor y}$-formula $\varphi'$ by adding to~$\varphi$ for each $x_i\in V_1$ the constraint $x_i\lor x_i'$ where $x_i'$ is a new variable. We set the weights of the variables of~$\varphi'$ as follows. For $x_i\in V_1$ we set $w(x_i) = 0$, all other variables get weight~$1$. To each satisfying assignment $m'$ of $\varphi'$ we construct the assignment $m''$ which is the restriction of $m'$ to the variables of $\varphi$. This construction is an $\AP$-reduction. Note that~$m''$ is feasible if $m'$ is. Let~$m'$ be an $r$-approximation of $\OPT(\varphi')$. Note that whenever for $x_i\in V_1$ we have $m'(x_i)=0$ then $m'(x_i')= 1$. The other way round, we may assume that whenever $m'(x_i)=1$ for $x_i\in V_1$ then $m'(x_i') = 0$. If this is not the case, then we can change~$m'$ accordingly, decreasing the weight that way. It follows that $w(m') = n_0 + n_1$ where we have \begin{align*} n_0 &= \card{\Set{i}{x_i \in V_1, m'(x_i)=0}} = \card{\Set{i}{x_i \in V_1, m'(x_i)\neq m(x_i)}}\\ n_1 &= \card{\Set{i}{x_i \notin V_1, m'(x_i)=1}} = \card{\Set{i}{x_i \notin V_1, m'(x_i)\neq m(x_i)}}, \end{align*} which means that $w(m')$ equals $\hd(m,m'')$. Analogously, any model $s\in[\varphi]$ can be extended to a model $m'\in[\varphi']$ by putting $m'(x_i') =1$ if $x_i\in V_1$ and $s(x_i)=0$, and $m'(x_i') =0$ for the remaining $x_i\in V_1$; thereby $w(m')=\hd(m,s)$. Consequently, the optima in both problems correspond, that is, we get $\OPT(\varphi') = \OPT(\varphi, m)$. Hence we deduce $\hd(m, m'') = w(m') \leq r \OPT(\varphi')= r\OPT(\varphi,m)$. \end{proof} \begin{proposition}\label{prop:iV_2_to_MinOnes} For every dual Horn constraint language $\Gamma \subseteq \iV_2$ we have the reduction $\NSOL(\Gamma)\aple\OptWMinOnes(\set{x\lor y\lor \neg z, x\lor y})$. 
\end{proposition} \begin{proof} Since $\set{x\lor y\lor \neg z, x, \neg x}$ is a base of~$\iV_2$, by Corollary~\ref{cor:coclones} it suffices to prove the reduction $\NSOL(\set{x\lor y\lor \neg z, x, \neg x}) \aple \OptWMinOnes(\set{x\lor y\lor \neg z, x\lor y})$. To this end, first reduce $\NSOL(\set{x\lor y\lor \neg z, x, \neg x})$ to $\NSOL(\set{x\lor y\lor \neg z})$ by Proposition~\ref{prop:unary} and then use Lemma~\ref{lem:horncontain}. \end{proof} \begin{proposition}\label{prop:MinOnes_to_iV_2} $\NSOL(\Gamma)$ is $\MinHD$-hard for finite~$\Gamma$ with $\iV_2\subseteq \cc \Gamma$. \end{proposition} \begin{proof} For $\Gamma':=\set{x\lor y\lor\neg z, x,\neg x}$ we have $\MinHD\apeq \OptMinOnes(\Gamma')$ by Proposition~\ref{prop:kstw-minhd}. Now $\OptMinOnes(\Gamma')\aple \NSOL(\Gamma')\aple \NSOL(\Gamma)$ follows from Lemma~\ref{lem:MinOnes-aple-NSOL} and from Corollary~\ref{cor:coclones} together with the assumption $\Gamma'\subseteq\iV_2\subseteq \cc{\Gamma}$. \end{proof} \subsubsection{Poly-APX-Hardness} \begin{proposition}\label{prop:duphardnc} The problem $\NSOL(\Gamma)$ is $\pAPX$-hard for constraint languages~$\Gamma$ satisfying $\iN\subseteq \cc{\Gamma} \subseteq \iI_0$ or $\iN\subseteq \cc{\Gamma}\subseteq \iI_1$. \end{proposition} \begin{proof} The constraint language $\Gamma_1:=\set{\even^4, [x\to y], [x]}$ is a base of~$\iI_1$. $\OptMinOnes(\Gamma_1)$ is $\pAPX$-hard by Theorem~2.14 of~\cite{KhannaSTW-01} and reduces to $\NSOL(\Gamma_1)$ by Lemma~\ref{lem:MinOnes-aple-NSOL}. Since $[x\to y] = [\dup^3(x,y,1)] = [\exists z (\dup^3(x, y, z) \land z)]$, as well as $\cc{\set{\even^4}} = \iL$, $\cc{\set{\dup^3}} = \iN$, and $\iL \subseteq \iN$, we have the reductions \begin{displaymath} \NSOL(\Gamma_1)\aple \NSOL(\Gamma_1\cup\set{\dup^3}) \aple \NSOL(\set{\even^4,\dup^3,x}) \apeq \NSOL(\set{\dup^3,x}) \end{displaymath} by Corollary~\ref{cor:coclones}. Feasible solutions of $\NSOL(\Gamma)$, where $\iN\subseteq \cc{\Gamma} \subseteq \iI_0$ or $\iN\subseteq \cc{\Gamma} \subseteq \iI_1$, can be found in polynomial time. Indeed, such a~$\Gamma$ is $0$-valid or $1$-valid; therefore the all-zero or the all-one assignment is always a feasible solution. Hence Proposition~\ref{prop:unary} implies $\NSOL(\set{\dup^3,x})\apeq \NSOL(\set{\dup^3})$; the latter problem reduces to $\NSOL(\Gamma)$ by Corollary~\ref{cor:coclones}, because $\dup^3 \in \iN\subseteq \cc{\Gamma}$. \end{proof} \section{Finding Another Solution Closest to the Given One} \label{sec:proofsXSOL} In this section we study the optimization problem $\NextSol$. We first consider the polynomial-time cases and then the cases of higher complexity. \subsection{Polynomial-Time Cases} \label{ssec:proofsXSOL-PO} Since we cannot take advantage of clone closure, we must proceed differently. We use the following result based on a theorem by Baker and Pixley~\cite{BakerP-75}. \begin{proposition}[Jeavons et al.~\cite{JeavonsCG-97}]\label{prop:BakerPixley} Every bijunctive constraint $R(x_1, \ldots, x_n)$ is equivalent to the conjunction $\bigwedge_{1 \leq i \leq j \leq n} R_{ij}(x_i,x_j)$, where~$R_{ij}$ is the projection of~$R$ to the coordinates~$i$ and~$j$. \end{proposition} \begin{proposition}\label{prop:XSOL-iD2} If\/ $\Gamma$ is bijunctive $(\Gamma\subseteq\iD_2)$ then $\XSOL(\Gamma)$ is in $\PO$. \end{proposition} \begin{proof} According to Proposition~\ref{prop:BakerPixley} we may assume that the formula~$\varphi$ is a conjunction of atoms, each of which is a binary constraint $R(x,y)$ or a unary constraint $R(x,x)$ of the form $[x]$ or $[\neg x]$.
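As a small illustration of this decomposition (the relation is chosen only as an example), consider the bijunctive ternary relation $T = [x\land(y\lor z)] = \set{(1,0,1),(1,1,0),(1,1,1)}$. Its binary projections satisfy $T_{12} = T_{13} = \set{(1,0),(1,1)}$, which is equivalent to the constraint $[x]$, and $T_{23} = [y\lor z]$, so
\begin{displaymath}
  T(x,y,z) \;\equiv\; T_{12}(x,y)\land T_{13}(x,z)\land T_{23}(y,z) \;\equiv\; [x]\land[y\lor z],
\end{displaymath}
and the conjunction of the projections indeed recovers~$T$.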
Unary constraints fix the value of the constrained variable and can be eliminated by propagating the value to the other clauses. For each remaining variable~$x$ we attempt to construct a model~$m_x$ of~$\varphi$ with $m_x(x)\neq m(x)$ such that $\hd(m_x,m)$ is minimal among all models with this property. This can be done in polynomial time as described below. If the construction of~$m_x$ fails for every variable~$x$, then~$m$ is the sole model of~$\varphi$ and the problem is not solvable. Otherwise choose one of the variables~$x$ for which $\hd(m_x,m)$ is minimal and return~$m_x$ as second solution~$m'$. It remains to describe the computation of~$m_x$. Initially we set $m_x(x)$ to $1-m(x)$ and $m_x(y):=m(y)$ for all variables~$y\neq x$, and mark~$x$ as flipped. If~$m_x$ satisfies all atoms we are done. Otherwise let $R(u,v)$ be an atom falsified by~$m_x$. If both $u$ and~$v$ are marked as flipped, the construction fails: a model~$m_x$ with the property $m_x(x)\neq m(x)$ does not exist. Otherwise exactly one of the variables of $R(u,v)$ is not marked as flipped, say~$v$. Set $m_x(v) := 1-m(v)$, mark~$v$ as flipped, and repeat this step. This process terminates after flipping every variable at most once. \end{proof} \begin{proposition}\label{prop:XSOL-iI00m} If\/ $\Gamma \subseteq \iS_{00}^k$ or $\Gamma \subseteq \iS_{10}^k$ for some $k \geq 2$ then $\XSOL(\Gamma)$ is in $\PO$. \end{proposition} \begin{proof} We give the proof only for~$\iS_{00}^k$; Proposition~\ref{prop:dual} implies the same result for~$\iS_{10}^k$. The co-clone $\iS_{00}^k$ is generated by $\Gamma':=\set{\bor^k, [x\rightarrow y], [x], [\neg x]}$. In fact, $\Gamma'$ is even a \emph{plain base} of $\iS_{00}^k$~\cite{CreignouKZ-08}, meaning that every relation in~$\iS_{00}^k$, and in particular every relation in~$\Gamma$, can be expressed as a conjunctive formula over relations in~$\Gamma'$, without existential quantification or explicit equalities. Hence we may assume that $\varphi$ is given as a conjunction of $\Gamma'$-atoms. Note that $x \lor y$ is a polymorphism of $\Gamma'$, i.e., for any two solutions $m_1$, $m_2$ of $\varphi$ their disjunction $m_1 \lor m_2$~-- defined by $(m_1\lor m_2)(x) = m_1(x)\lor m_2(x)$ for all~$x$~-- is also a solution of~$\varphi$. Therefore we get the optimal solution~$m'$ of an instance $(\varphi,m)$ by flipping in~$m$ either some ones to zeros or some zeros to ones, but not both. To see this, assume the optimal solution~$m'$ flips both ones and zeros. Then $m' \lor m$ is a solution of~$\varphi$ that is closer to $m$ than $m'$, which contradicts the optimality of~$m'$. Unary constraints fix the value of the constrained variable and can be eliminated by propagating the value to the other clauses (including removal of disjunctions containing implied positive literals and shortening of disjunctions containing implied negative literals). This propagation does not lead to contradictions since~$m$ is a model of~$\varphi$. For each remaining variable~$x$ we attempt to construct a model~$m_x$ of~$\varphi$ with $m_x(x)\neq m(x)$ such that $\hd(m_x,m)$ is minimal among all models with this property. This can be done in polynomial time as described below. If the construction of~$m_x$ fails for every variable~$x$, then~$m$ is the sole model of~$\varphi$ and the problem is not solvable. Otherwise choose one of the variables~$x$ for which $\hd(m_x,m)$ is minimal and return~$m_x$ as second solution~$m'$. It remains to describe the computation of~$m_x$.
If $m(x)=0$, we flip~$x$ to~$1$ and propagate this change iteratively along the implications, i.e., if $x \to y$ is a constraint of~$\varphi$ and $m(y)=0$, we flip~$y$ to~$1$ and iterate. This kind of flip never invalidates any disjunctions; it can only lead to contradictions with conditions imposed by negative unit clauses (and since their values were already propagated, such a contradiction would be immediate). For $m(x)=1$ we proceed dually, flipping~$x$ to~$0$, removing~$x$ from disjunctions if applicable, and propagating this change backward along implications $y \to x$ where $m(y)=1$. This can possibly lead to immediate inconsistencies with already inferred unit clauses, or it can produce contradictions through empty disjunctions, or it can make further flips from~$0$ to~$1$ necessary in order to obtain a solution (because in a disjunctive atom all variables with value~$1$ have been flipped, and thus removed). In all three cases the resulting assignment does not satisfy~$\varphi$, and there is no model that differs from~$m$ in~$x$ and that can be obtained by flipping in one way only. Otherwise, the resulting assignment satisfies~$\varphi$, and this is the desired~$m_x$. Our process terminates after flipping every variable at most once, since we flip only in one way (from zeros to ones or from ones to zeros). Thus, $m_x$ is computable in polynomial time. \end{proof} \subsection{Hard Cases} \label{ssec:proofsXSOL-hard} \begin{proposition}\label{prop:XSOL-iI_0-iI_1} Let $\Gamma$ be a constraint language. If\/ $\iI_1\subseteq \cc \Gamma$ or $\iI_0 \subseteq \cc \Gamma$ holds then it is $\NP$-complete to decide whether a feasible solution for $\XSOL(\Gamma)$ exists. Otherwise, $\XSOL(\Gamma)\in \pAPX$. \end{proposition} \begin{proof} Finding a feasible solution to $\XSOL(\Gamma)$ is exactly the problem $\AnotherSat(\Gamma)$, which is $\NP$-hard if and only if $\iI_1\subseteq \cc \Gamma$ or $\iI_0 \subseteq \cc \Gamma$ according to Juban~\cite{Juban-99}. If $\AnotherSat(\Gamma)$ is polynomial-time decidable, we can always find a feasible solution for $\XSOL(\Gamma)$ if it exists. Obviously, every feasible solution is an $n$-approximation of the optimal solution, where $n$ is the number of variables in the input. \end{proof} \subsubsection{Tightness results}\label{sssec:xsoltight} It will be convenient to consider the following decision problem asking for another solution that is not the complement, i.e., that does not have maximal distance from the given one. \decproblem{$\AnotherSatNC(\Gamma)$} {A conjunctive formula~$\varphi$ over relations from~$\Gamma$ and an assignment~$m$ satisfying~$\varphi$.} {Is there another satisfying assignment~$m'$ of~$\varphi$, different from~$m$, such that $\hd(m,m') < n$, where~$n$ is the number of variables in~$\varphi$?} \begin{remark} $\AnotherSatNC(\Gamma)$ is $\NP$-complete for $\iI_0 \subseteq \cc \Gamma$ or $\iI_1 \subseteq \cc \Gamma$, since already $\AnotherSat(\Gamma)$ is $\NP$-complete for these cases, as shown in~\cite{Juban-99}. Moreover, $\AnotherSatNC(\Gamma)$ is polynomial-time decidable if~$\Gamma$ is Horn $(\Gamma \subseteq \iE_2)$, dual Horn $(\Gamma \subseteq \iV_2)$, bijunctive $(\Gamma \subseteq \iD_2)$, or affine $(\Gamma \subseteq \iL_2)$, for the same reason as for $\AnotherSat(\Gamma)$: For each variable~$x_i$ we flip the value $m(x_i)$, substitute $\cmpl m(x_i)$ for~$x_i$, and construct another satisfying assignment if it exists. Consider now the solutions which we get for every variable~$x_i$.
Either there is no solution for any variable, in which case $\AnotherSatNC(\Gamma)$ has no solution; or the only solutions obtained are the complement of~$m$, in which case $\AnotherSatNC(\Gamma)$ has no solution either; or else we get a solution~$m'$ with $\hd(m,m') < n$, which is also a solution for $\AnotherSatNC(\Gamma)$. Hence, taking into account Proposition~\ref{prop:AScomplhard} below, we obtain a dichotomy result also for $\AnotherSatNC(\Gamma)$. \end{remark} Note that $\AnotherSatNC(\Gamma)$ is not compatible with existential quantification. Let $\varphi(y, x_1, \ldots, x_n)$ with model~$m$ be an instance of $\AnotherSatNC(\Gamma)$ and let $m'$ be a solution satisfying $\hd(m,m') < n+1$. Now consider the formula $\varphi_1(x_1, \ldots, x_n) = \exists y \, \varphi(y, x_1, \ldots, x_n)$, obtained by existentially quantifying the variable~$y$, and the tuples $m_1$ and~$m'_1$ obtained from $m$ and~$m'$ by omitting the first component. Both $m_1$ and~$m'_1$ are still solutions of~$\varphi_1$, but we cannot guarantee $\hd(m_1, m'_1) < n$. Hence we need the following counterpart of Proposition~\ref{prop:coclonesmincd} for this problem, whose proof is analogous. \begin{proposition}\label{prop:coclonesanothersat} $\AnotherSatNC(\Gamma') \mle \AnotherSatNC(\Gamma)$ for constraint languages $\Gamma$ and $\Gamma'$ satisfying $\Gamma'\subseteq \ccc \Gamma$. \end{proposition} \begin{proposition}\label{prop:AScomplhard} If a constraint language~$\Gamma$ satisfies $\cc \Gamma = \iI$ or $\iN \subseteq \cc \Gamma \subseteq \iN_2$, then $\AnotherSatNC(\Gamma)$ is $\NP$-complete. \end{proposition} \begin{proof} Containment in $\NP$ is clear; it remains to show hardness. Since $\AnotherSatNC$ is not compatible with existential quantification, we cannot use clone theory, but have to consider the three co-clones $\iN_2$, $\iN$, and $\iI$ separately and make use of minimal weak bases. \paragraph{Case $\cc \Gamma = \iN$:} Putting $R:=\set{000, 101, 110}$, we give a reduction from $\AnotherSat(\set{R})$, which is $\NP$-hard~\cite{Juban-99} as $\cc{\set{R}} = \iI_0$. The problem remains $\NP$-complete if we restrict it to instances $(\varphi, \vec 0)$, since $R$ is $0$-valid and any given model~$m$ other than the constant $0$-assignment admits the trivial solution $m'=\vec 0$. Thus we can perform a reduction from this restricted problem. Consider the relation $R_{\iN}=\set{0000,1010,1100,1111,0101,0011}$. Given a formula~$\varphi$ over~$R$, we construct a formula~$\psi$ over $R_{\iN}$ by replacing every constraint $R(x, y, z)$ with $R_{\iN}(x, y, z, w)$, where~$w$ is a new global variable. Moreover, we set~$m$ to the constant $0$-assignment. This construction is a many-one reduction from the restricted version of $\AnotherSat(\set R)$ to $\AnotherSatNC(\set{R_{\iN}})$. To see this, observe that the tuples in~$R_{\iN}$ that have a~$0$ in the last coordinate are exactly those in $R\times \set{0}$. Thus any solution of~$\varphi$ can be extended to a solution of~$\psi$ by assigning~$0$ to~$w$. Conversely we observe that any solution~$m'$ of the $\AnotherSatNC(\set{R_{\iN}})$ instance $(\psi,\vec0)$ is different from $\vec 0$ and~$\vec 1$. As $R_{\iN}$ is complementive, we may assume $m'(w)=0$. Then~$m'$ restricted to the variables of~$\varphi$ solves the $\AnotherSat(\set{R})$ instance $(\varphi,\vec0)$. Finally, observe that $R_{\iN}$ is a minimal weak base and $\Gamma$ is a base of the co-clone~$\iN$; therefore we have $R_{\iN}\in\ccc\Gamma$ by Theorem~\ref{thm:weakbases}.
Now the $\NP$-hardness of $\AnotherSatNC(\Gamma)$ follows from the one of $\AnotherSatNC(\set{R_{\iN}})$ by Proposition~\ref{prop:coclonesanothersat}. \paragraph{Case $\cc \Gamma = \iN_2$:} We give a reduction from $\AnotherSatNC(\set{R_{\iN}})$, which is $\NP$-hard by the previous case. By Theorem~\ref{thm:weakbases}, $\ccc\Gamma$ contains the relation $R_{\iN_2} = \Set{ m\cmpl m}{m \in R_{\iN}}$. For an $R_{\iN}$-formula~$\varphi(x_1, \ldots, x_n)$, we construct an $R_{\iN_2}$-formula $\psi(x_1, \ldots, x_n, x_1', \ldots, x_n')$ by replacing every constraint $R_{\iN}(x, y, z, w)$ with $R_{\iN_2}(x, y, z, w, x', y', z', w')$. Assignments~$m$ for $\varphi$ extend to assignments $M$ for~$\psi$ by setting $M(x'):= \cmpl{m}(x)$. Conversely, assignments for~$\psi$ yield assignments for~$\varphi$ by restricting them to the variables in~$\varphi$. Because every variable $x_1,\dotsc,x_n$ assigned by models of $\varphi$ actually occurs in some $R_{\iN}$-atom in $\varphi$ and hence in some $R_{\iN_{2}}$-atom of $\psi$, and because of the structure of $R_{\iN_{2}}$, any model of $\psi$ distinct from $M$ and $\cmpl{M}$ restricts to a model of $\varphi$ other than $m$ or $\cmpl{m}$. Consequently, this construction is again a reduction from $\AnotherSatNC(\set{R_{\iN}})$ to $\AnotherSatNC(\set{R_{\iN_2}})$, reducing itself to $\AnotherSatNC(\Gamma)$ by Proposition~\ref{prop:coclonesanothersat}. \paragraph{Case $\cc \Gamma = \iI$:} We proceed as in Case $\cc \Gamma = \iN$, but use $R_{\iI}=\set{0000,0011,0101,1111}$ instead of~$R_{\iN}$, and $\set{000, 011, 101}$ for~$R$. Note that the $R_{\iI}$-tuples with first coordinate~$0$ are exactly those in $\set{0}\times R$. The relation $R_{\iI}$ is not complementive, but (as every variable assigned by any model of $\psi$ occurs in some atomic $R_{\iI}$-constraint) the only solution $m'$ such that $m'(w)=1$ is the constant $1$-assignment, which is ruled out by the requirement $\hd(m,m') < n$. Hence we may again assume $m'(w)=0$. \end{proof} \begin{proposition}\label{prop:tightxsol} For a constraint language~$\Gamma$ satisfying $\cc \Gamma = \iI$ or $\iN \subseteq \cc \Gamma \subseteq \iN_2$ and any $\varepsilon>0$ there is no polynomial-time $n^{1-\varepsilon}$-approximation algorithm for $\XSOL(\Gamma)$, unless $\P = \NP$. \end{proposition} \begin{proof} Assume that there is a constant $\varepsilon>0$ with a polynomial-time $n^{1-\varepsilon}$-approximation algorithm for $\XSOL(\Gamma)$. We show how to use this algorithm to solve $\AnotherSatNC(\Gamma)$ in polynomial time. Proposition~\ref{prop:AScomplhard} completes the proof. Let $(\varphi, m)$ be an instance of $\AnotherSatNC(\Gamma)$ with~$n$ variables. If $n=1$, then we reject the instance. Otherwise, we construct a new formula~$\varphi'$ and a new assignment~$m'$ as follows. Let~$k$ be the smallest integer greater than $1/\varepsilon$. Choose a variable~$x$ of~$\varphi$ and introduce $n^k-n$ new variables~$x^i$ for $i = 1, \ldots, n^k-n$. For every $i \in \set{1, \ldots, n^k-n}$ and every constraint $R(y_1, \ldots , y_\ell)$ in~$\varphi$, such that $x \in \set{y_1, \ldots, y_\ell}$, construct a new constraint $R(z_1^i, \ldots, z_\ell^i)$ by $z_j^i = x^i$ if $y_j = x$ and $z_j^i = y_j$ otherwise; add all the newly constructed constraints to~$\varphi$ in order to get~$\varphi'$. Moreover, we extend~$m$ to a model of~$\varphi'$ by setting $m'(x^i)=m(x)$. Now run the $n^{1-\varepsilon}$-approximation algorithm for $\XSOL(\Gamma)$ on $(\varphi', m')$. 
If the answer is $\cmpl{m'}$, then reject; otherwise accept. We claim that the algorithm described above is a correct polynomial-time algorithm for the decision problem $\AnotherSatNC(\Gamma)$ when~$\Gamma$ is complementive. Polynomial runtime is clear. It remains to show its correctness. If the only solutions to~$\varphi$ are~$m$ and~$\cmpl m$, then, as $n>1$, the only models of~$\varphi'$ are $m'$ and $\cmpl{m'}$. Hence the approximation algorithm must answer $\cmpl{m'}$ and the output is correct. Now assume that there is a satisfying assignment~$m_s$ different from~$m$ and~$\cmpl m$. The relation~$[\varphi]$ is complementive; hence we may assume that $m_s(x)=m(x)$. It follows that~$\varphi'$ has a satisfying assignment~$m_s'$ for which $0<\hd(m_s', m')<n$ holds. But then the approximation algorithm must find a satisfying assignment~$m''$ for~$\varphi'$ with $\hd(m', m'') < n \cdot (n^k)^{1-\varepsilon} = n^{k(1-\varepsilon)+1}$. Since the inequality $k > 1/\varepsilon$ holds, it follows that $\hd(m', m'')< n^k$. Consequently, $m''$ is not the complement of~$m'$ and the output of our algorithm is again correct. When~$\Gamma$ is not complementive but both $0$-valid and $1$-valid $(\cc \Gamma = \iI)$, we perform the expansion algorithm described above for each variable of the formula~$\varphi$ and reject if the result is the complement for each run. The runtime remains polynomial. If $[\varphi] = \set{m,\cmpl{m}}$, then indeed every run results in the corresponding $\cmpl{m'}$, and we correctly reject. Otherwise, we have a model $m_s\in [\varphi]\smallsetminus\set{m,\cmpl{m}}$, so there is a variable~$x$ of~$\varphi$ with $m_s(x)\neq \cmpl{m}(x)$, i.e.\ $m_s(x) = m(x)$. For the instance $(\varphi',m')$ obtained by expanding at this variable~$x$, the approximation algorithm does not return~$\cmpl{m'}$, and hence we correctly accept. \end{proof} \subsubsection{MinDistance-Equivalent Cases}\label{sssec:XSOLaffine} In this section we show that affine co-clones give rise to problems equivalent to $\MinDist$. \begin{lemma}\label{lem:affine-MinDist} For affine constraint languages $\Gamma$ $(\Gamma\subseteq \iL_2)$ we have $\XSOL(\Gamma)\aple\MinDist$. \end{lemma} \begin{proof} Let the formula~$\varphi$ and the satisfying assignment~$m$ be an instance of $\XSOL(\Gamma)$ over the variables $x_1, \ldots, x_n$. The input $\varphi$ can be written as $A \vec x = \vec b$, with~$m$ being a solution of this affine system. A tuple~$m'$ is a solution of $A\vec x = \vec b$ if and only if it can be written as $m' = m+m_0$ where~$m_0$ is a solution of $A \vec x = \vec 0$. The Hamming distance is invariant with respect to affine translations: namely we have $\hd(m',m) = \hd(m'+m'', m+m'')$ for any tuple $m''$; in particular, for $m'' = -m$ we obtain $\hd(m',m)=\hd(m'-m,\vec{0})$. Therefore $m'\neq m$ is a solution of $A \vec x = \vec b$ with minimal Hamming distance to~$m$ if and only if $m_0 = m'-m$ is a non-zero solution of the homogeneous system $A \vec x = \vec 0$ with minimum Hamming weight. Hence, the problem $\XSOL(\Gamma)$ for affine languages~$\Gamma$ is equivalent to computing a non-trivial solution of minimal Hamming weight for a homogeneous system, which is exactly the $\MinDist$ problem. \end{proof} We need to express an affine sum of an even number of variables by means of the minimal weak base for each of the affine co-clones.
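Before turning to these representations, the correspondence established in Lemma~\ref{lem:affine-MinDist} can be made concrete by a small illustration. The following Python sketch is purely illustrative (it performs an exhaustive search and is not part of any reduction); it assumes the affine system is given as a $0/1$ matrix \texttt{A} over $\ZZ_2$ together with a solution \texttt{m}, and returns a solution closest to \texttt{m} by looking for a non-zero vector of minimum Hamming weight in the kernel of \texttt{A}.
\begin{verbatim}
# Illustrative brute-force sketch (exponential search; for intuition only).
# A is a 0/1 matrix over Z_2 given as a list of rows, m a solution of the
# affine system A x = b.  By the lemma above, the nearest other solution
# differs from m by a minimum-weight non-zero solution of A x = 0.
from itertools import product

def nearest_other_solution(A, m):
    best = None                               # (weight, kernel vector)
    for d in product((0, 1), repeat=len(m)):
        if not any(d):
            continue                          # skip the zero vector
        if all(sum(a * x for a, x in zip(row, d)) % 2 == 0 for row in A):
            w = sum(d)                        # Hamming weight of d
            if best is None or w < best[0]:
                best = (w, d)
    if best is None:
        return None                           # m is the unique solution
    return tuple((mi + di) % 2 for mi, di in zip(m, best[1]))
\end{verbatim}
The point of Lemma~\ref{lem:affine-MinDist} is, of course, that this search for a minimum-weight kernel vector is exactly an instance of $\MinDist$; the sketch merely spells out the affine translation by~$m$.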
In the following lemma, the existentially quantified variables are uniquely determined, therefore the existential quantifiers serve only to hide superfluous variables and do not pose any problems as they were mentioned before. \begin{lemma}\label{lem:represent-sums} For every $n\in\NN$, $n\geq 1$, the constraint $x_1 \oplus x_2 \oplus\dotsm \oplus x_{2n} = 0$ can be equivalently expressed by each of the following formulas: \begin{enumerate} \item\label{item:sumiL} $\exists y_0,\dotsc,y_n( y_0 = 0\land y_{n} = 0\land\ \begin{alignedat}[t]{2} &R_{\iL}(y_0,x_1,x_2,y_1) &\land\ & R_{\iL}(y_1,x_3,x_4,y_2) \land \dotsm\land{}\\ &R_{\iL}(y_{n-1},x_{2n-1},x_{2n},y_{n})), \end{alignedat}$ \item $\exists y_0,\dotsc,y_{2n}( \begin{alignedat}[t]{2} &R_{\iL_0}(y_0,x_1,y_1,y_0) &\land\ &R_{\iL_0}(y_1,x_2,y_2,y_0) \land \dotsm\land\\ &R_{\iL_0}(y_{2n-1},x_{2n},y_{2n},y_{2n})), \end{alignedat}$ \item $\exists y_0,\dotsc,y_{2n}( \begin{alignedat}[t]{2} &R_{\iL_1}(y_0,x_1,y_1,y_0) &\land\ &R_{\iL_1}(y_1,x_2,y_2,y_0) \land \dotsm\land\\ &R_{\iL_1}(y_{2n-1},x_{2n},y_{2n},y_{2n})), \end{alignedat}$ \item $\exists y_0,\dotsc,y_n,z_0,\dotsc,z_n,w_1,\dotsc,w_{2n}( y_0 = 0\land y_n = 0\land{}$\\ \strut $\begin{alignedat}[t]{1} &R_{\iL_3}(y_0,x_1,x_2,y_1,z_0,w_1,w_2,z_1) \land\ R_{\iL_3}(y_1,x_3,x_4,y_2,z_1,w_3,w_4,z_2) \land \dotsm\land\\ &R_{\iL_3}(y_{n-1},x_{2n-1},x_{2n},y_{n}, z_{n-1},w_{2n-1},w_{2n},z_{n})), \end{alignedat}$ \item\label{item:sumiL2} $\exists y_0,\dotsc,y_{2n},z_0,\dotsc,z_{2n},w_1,\dotsc,w_{2n}( \begin{alignedat}[t]{1} &R_{\iL_2}(y_0,x_1,y_1,z_0,w_1,z_1,y_0,z_0) \land{}\\ &R_{\iL_2}(y_1,x_2,y_2,z_1,w_2,z_2,y_0,z_0) \land \dotsm\land{}\\ &R_{\iL_2}(y_{2n-1},x_{2n},y_{2n},z_{2n-1},w_{2n},z_{2n}, y_{2n},z_{2n})), \end{alignedat}$ \end{enumerate} where the number of existentially quantified variables is linearly bounded in the length of the constraint. Note moreover that in each case any model of $x_1 \oplus x_2 \oplus\dotsm \oplus x_{2n} = 0$ uniquely determines the values of the existentially quantified variables. \end{lemma} \begin{proof} Write out the constraint relations following the existential quantifiers as (conjunctions of) equalities. From this uniqueness of valuations for the existentially quantified variables is easy to see, and likewise that any model of $\bigoplus_{i=1}^{2n} x_i = 0$ also satisfies each of the formulas~\ref{item:sumiL}. up to~\ref{item:sumiL2}. Adding up the equalities behind the existential quantifiers shows the converse direction. \end{proof} The following lemma shows that $\MinDist$ is $\AP$-equivalent to a restricted version, containing only constraints generating the minimal weak base, for each co-clone in the affine case. \begin{lemma}\label{lem:MD-red-to-weak-base-and-zero} For each co-clone $\mathcal{B}\in\set{\iL,\iL_0,\iL_1,\iL_2,\iL_3}$ we have the reduction $\MinDist\aple \XSOL(\set{R_{\mathcal{B}},[\neg x]})$. \end{lemma} \begin{proof} Consider a co-clone $\mathcal{B}\in\set{\iL,\iL_0,\iL_1,\iL_2,\iL_3}$ and a $\OptMinDist$-instance represented by a matrix $A\in\ZZ_2^{k\times l}$. If one of the columns of~$A$, say the $i$-th, is zero, then the $i$-th unit vector is an optimal solution to this instance with optimal value~$1$. Hence, we assume from now on that none of the columns equals a zero vector. \par Every row of $A$ expresses the fact that a sum of $n\leq l$ variables equals zero. 
If $n$ is odd, we extend this sum to one with $n+1$ summands, thereby introducing a new variable~$v$, which we existentially quantify and confine to zero using a unary $[\neg x]$-constraint. Then we replace the expanded sum by the existential formula from Lemma~\ref{lem:represent-sums} corresponding to the co-clone~$\mathcal{B}$ under consideration. This way we have introduced, for every row, only linearly many (in~$l$) new variables, and for any feasible solution for the $\OptMinDist$-problem the values of the existential variables needed to encode it are uniquely determined. Thus, taking the conjunction over all these formulas we only have a linear growth in the size of the instance. \par Next, we show how to deal with the existential quantifiers: First we transform the expression to prenex normal form getting a formula~$\psi$ of the form~$\exists y_1,\dotsc,y_p(\varphi(y_1,\dotsc,y_p,x_1,\dotsc,x_l))$, which holds if and only if $A\vec{x} = \vec{0}$ for $\vec{x} = (x_1,\dotsc,x_l)$. We use the same blow-up construction regarding $x_1,\dotsc,x_l$ as in Proposition~\ref{prop:quantifiers} and Lemma~\ref{lem:Heindl} to make the influence of $y_1,\dotsc,y_p$ on the Hamming distance negligible. For this we put $J:=\set{1,\dotsc,t}$ and introduce new variables $x_i^j$ where $1\leq i\leq l$ and $j\in J$. If $u = x_i$ for some $1\leq i\leq l$, we define its blow-up set to be $B(u) = \Set{x_i^j}{j\in J}$; otherwise, for $u\in\set{y_1,\dotsc,y_p}$, we set $B(u) = \set{u}$. Now for each atom $R(u_1,\dotsc,u_q)$ of $\varphi$ we form the set of atoms $\Set{R(u_1',\dotsc,u_q')}{(u_1',\dotsc,u_q')\in\prod_{i=1}^q B(u_i)}$, and define the quantifier-free formula $\varphi'$ to be the conjunction of all atoms in the union of these sets. Note that this construction takes time polynomial in the size of~$\psi$ and hence in the size of the input $\OptMinDist$-instance whenever~$t$ is polynomial in the input size, because the atomic relations in~$\psi$ are at most octonary. If $s$ is an assignment of values to $\vec{x}$ making $A\vec{x} =\vec{0}$ true, we define $s'(x_i^j):= s(x_i)$ and extend this to a model of~$\varphi'$ assigning the uniquely determined values to $y_1,\dotsc,y_p$. Let~$m'$ be the model arising in this way from the zero assignment $m$. If~$s'$ is any model of~$\varphi'$, then for every $1\leq i\leq l$, all $j\in J$ and each atom $R(u_1,\dotsc,u_q)$ of~$\varphi$, $s'$ satisfies, in particular, the conjunction $R(u_1',\dotsc,u_q')\land R(u_1'',\dotsc,u_q'')$ where for $u\in\set{u_1,\dotsc,u_q}$ we have $u' = u'' = u$ if $u \in\set{y_1,\dotsc,y_p}$, $u'=x_i^1$, $u'' = x_i^j$ if $u=x_i$, and $u'=u''=x_k^1$ if $u=x_k$ for some $k\in\set{1,\dotsc,l}\smallsetminus\set{i}$. Hence, the vectors $(s'(x_1^1),\dotsc,s'(x_l^1))$ and $(s'(x_1^1),\dotsc,s'(x_{i-1}^1),s'(x_i^j),s'(x_{i+1}^1),\dotsc, s'(x_l^1))$ both belong to the kernel of~$A$ and so does their difference, which is $s'(x_i^j) - s'(x_i^1)$ times the $i$-th unit vector. As the $i$-th column of~$A$ is non-zero, we must have $s'(x_i^j) = s'(x_i^1)$. This also implies that if $s'$ is zero on $x_1^1,\dotsc,x_l^1$, then it must be zero on all $x_i^j$ ($1\leq i\leq l$, $j\in J$) and thus it must coincide with $m'$. Therefore, every feasible solution to the~$\XSOL$-instance $(\varphi',m')$ yields a non-zero vector $(s'(x_1^1),\dotsc,s'(x_l^1))$ in the kernel of~$A$.
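To illustrate the purely combinatorial part of this blow-up, the following Python sketch (illustrative only; the representation of atoms as pairs of a relation name and a tuple of variable names is an assumption made here for concreteness) forms the duplicated atom set used above.
\begin{verbatim}
# Illustrative sketch of the blow-up of atoms (not part of the proof).
# 'atoms' is a list of pairs (relation_name, argument_tuple); 'xs' contains
# the visible variables x_1, ..., x_l; every x_i is replaced by its t copies
# x_i^1, ..., x_i^t, while the existential variables y_1, ..., y_p are kept.
from itertools import product

def blow_up(atoms, xs, t):
    def B(u):                                  # blow-up set of a variable
        return [f"{u}^{j}" for j in range(1, t + 1)] if u in xs else [u]
    return [(rel, combo)
            for rel, args in atoms
            for combo in product(*(B(u) for u in args))]
\end{verbatim}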
\par Further, if $s'$ is an $r$-approximation to an optimal solution, i.e.\ we have $\hd(s',m')\leq r\OPT(\varphi',m')$, then, as $s'(x_i^1)=s'(x_i^j)$ holds for all $j\in J$ and all $1\leq i\leq l$, we obtain a solution to the $\MinDist$ problem with Hamming weight~$w$ such that $t\cdot w \leq \hd(s',m')$. Also, any optimal solution to the $\MinDist$-instance can be extended to a not-necessarily optimal solution $s''$ of~$(\varphi',m')$, for which one can bound the distance to~$m'$ as follows: $\OPT(\varphi',m')\leq \hd(s'',m') \leq t\cdot\OPT(A) + p$. Combining these inequalities, we can infer $t\cdot w \leq r\cdot t\cdot \OPT(A) + r\cdot p$, or $w\leq \OPT(A)\cdot(r+ r/\OPT(A)\cdot p/t)$. We noted above that $p$ is linearly bounded in the size of the input, thus choosing $t$ quadratic in the size of the input bounds $w$ by $\OPT(A)(r+ \Lo(1))$, whence we have an AP-reduction with $\alpha=1$. \end{proof} \begin{lemma}\label{lem:XSOL-reduce-constants} For constraint languages $\Gamma$, where one can decide the existence of (and then find) a feasible solution of\/~$\XSOL(\Gamma)$ in polynomial time, we have $\XSOL(\Gamma) \aple \XSOL((\Gamma\smallsetminus\set{[x],[\neg x]})\cup\set{\eq})$. \end{lemma} \begin{proof} If an instance $(\varphi,m)$ does not have feasible solutions, then it does not have nearest other solutions either. So we map it to the generic unsolvable instance~$\bot$. Consider now formulas $\varphi$ over variables $x_1,\dotsc,x_n$ with models~$m$ where some feasible solution $s_0\neq m$ exists (and has been computed). We can assume $\varphi$ to be of the form $\psi(x_1,\dotsc,x_n) \land \bigwedge_{i\in I_1} [x_i] \land \bigwedge_{i\in I_0} [\neg x_i]$, where $\psi$ is a $(\Gamma\smallsetminus\set{[x],[\neg x]})$-formula and $I_1,I_0\subseteq \set{1,\dotsc,n}$. We transform $\varphi$ to $\varphi':= \psi(x_1,\dotsc,x_n) \land \bigwedge_{i\in I_1} x_i \eq y_1 \land \bigwedge_{i\in I_0} x_i \eq z_1 \land \bigwedge_{i=1}^{1+n^2} (y_i \eq y_1 \land z_i \eq z_1)$ and extend models of~$\varphi$ to models of~$\varphi'$ in the natural way. Conversely, if~$s'$ is a model of~$\varphi'$ and $s'(y_i) = 1$ and $s'(z_i) = 0$ hold for all $1\leq i\leq 1+n^2$, then we can restrict it to a model of~$\varphi$. Other models of~$\varphi'$ are not optimal and are mapped to $s_0$. It is not hard to see that this provides an AP-reduction with $\alpha=1$. \end{proof} \begin{proposition}\label{prop:MinDist-hardness-XSOL} For every constraint language $\Gamma$ satisfying $\iL\subseteq \cc \Gamma \subseteq \iL_2$ we have $\MinDist \apeq \XSOL(\Gamma)$. \end{proposition} \begin{proof} Since we lack compatibility with existential quantification, we shall deal with each co-clone $\mathcal{B} = \cc\Gamma$ in the interval $\set{\iL,\iL_0,\iL_1,\iL_2,\iL_3}$ separately. First we perform the reduction from Lemma~\ref{lem:MD-red-to-weak-base-and-zero} to $\XSOL(\set{R_{\mathcal{B}}, [\neg x]})$. We need to find a reduction to $\XSOL(\set{R_{\mathcal{B}}})$ as this reduces to~$\XSOL(\Gamma)$ by Proposition~\ref{prop:coclonesmincd} and Theorem~\ref{thm:weakbases}. This is simple in the case of $\iL_0$ and $\iL_2$ since $[\neg x] = \set{x\mid R_{\iL_0}(x,x,x,x)}\in\ccc{\set{R_{\iL_0}}}$ (see Proposition~\ref{prop:coclonesmincd}) and $[\neg x] = \set{x\mid \exists y(R_{\iL_2}(x,x,x,y,y,y,x,y))}$, where the existential quantifier can be handled by an AP-reduction with $\alpha=1$ which drops the quantifier and extends every model by assigning $1$ to all previously existentially quantified variables. 
Thereby (optimal) distances between models do not change at all. \par In the remaining cases, we use the reduction $\XSOL(\set{R_{\mathcal{B}},[\neg x]})\aple \XSOL(\set{R_{\mathcal{B}},[x],[\neg x]})$ and reduce the latter to $\XSOL(\set{R_{\mathcal{B}}, \eq})$ by Lemma~\ref{lem:XSOL-reduce-constants}, which now has to be reduced to $\XSOL(\set{R_{\mathcal{B}}})$. This is obvious for $\mathcal{B} = \iL$ where equality constraints $x\eq y$ can be expressed as $R_{\iL}(x,x,x,y) \in \ccc{\set{R_{\iL}}}$ (cf.\ Proposition~\ref{prop:coclonesmincd}). For $\iL_1$ the same can be done using the formula $\exists z(R_{\iL_1}(x,y,z,z))$, where the existential quantifier can be removed by the same sort of simple AP-reduction with $\alpha=1$ as employed for $\iL_2$. Finally, for $\iL_3$ we want to express equality as $\exists u\exists v(R_{\iL_3}(x,x,x,y,u,u,u,v))$. Here, in an AP-reduction, the quantifiers cannot simply be disregarded, as the values of the existentially quantified variables are not constant for all models. They are uniquely determined by the values of $x$ and $y$ for each particular model, though, which allows us to perform a similar blow-up construction as in the proof of Lemma~\ref{lem:MD-red-to-weak-base-and-zero}. \par In more detail, given a $\set{R_{\iL_3}, \eq}$-formula $\psi$ containing variables $x_1,\dotsc,x_l$, first note that each atomic $R_{\iL_3}$-constraint $R_{\iL_3}(z_1,\dotsc,z_8)$ can be represented as a linear system of equations, namely $\bigoplus_{i=1}^4 z_i = 0$ and $z_i \oplus z_{i+4} = 1$ for $1\leq i\leq 4$. Since equalities $x_i \eq x_j$ can be written as $x_i\oplus x_j = 0$, the formula $\psi$ is equivalent to an expression of the form $A\vec{x} = \vec{b}$ where $\vec{x} = (x_1,\dotsc,x_l)$. Replacing each equality constraint by the existential formula above and bringing the result into prenex normal form, we get a formula $\exists y_1,\dotsc,y_p(\varphi(y_1,\dotsc,y_p,x_1,\dotsc,x_l))$, which is equivalent to $\psi$ and where $\varphi$ is a conjunctive $\set{R_{\iL_3}}$-formula. By construction any two models of $\varphi$ that agree on $x_1,\dotsc,x_l$ must coincide. Thus, introducing variables $x_i^j$ for $1\leq i\leq l$ and $j\in J:=\set{1,\dotsc,t}$ and defining $\varphi'$ in literally the same way as in the proof of Lemma~\ref{lem:MD-red-to-weak-base-and-zero}, any model~$s$ of~$\psi$ yields a model~$s'$ of $\varphi'$ by putting $s'(x_i^j):=s(x_i)$ for $1\leq i\leq l$ and $j\in J$ and extending this with the unique values for $y_1,\dotsc,y_p$ satisfying $\varphi(y_1,\dotsc,y_p,x_1,\dotsc,x_l)$. In this way we obtain a model~$m'$ of~$\varphi'$ from a given solution~$m$ of~$\psi$. Besides, if $s'$ is any model of~$\varphi'$, then as in Lemma~\ref{lem:MD-red-to-weak-base-and-zero}, the vectors $(s'(x_1^1),\dotsc,s'(x_l^1))$ and $(s'(x_1^1),\dotsc,s'(x_{i-1}^1),s'(x_i^j),s'(x_{i+1}^1),\dotsc, s'(x_l^1))$ both satisfy~$\psi$, and thus their difference is in the kernel of~$A$. Since the variable $x_i$ occurs in at least one of the atoms of~$\psi$, the $i$-th column of~$A$ is non-zero, implying that $s'(x_i^j) = s'(x_i^1)$ for $j\in J$ and all $1\leq i\leq l$. Thus, any model $s'\neq m'$ of~$\varphi'$ gives a model~$s\neq m$ of~$\psi$ by defining $s(x_i):= s'(x_i^1)$ for all $1\leq i\leq l$. The presented construction is an AP-reduction with $\alpha=1$, which can be proven completely analogously to the last paragraph of the proof of Lemma~\ref{lem:MD-red-to-weak-base-and-zero}, choosing~$t$ quadratic in the size of~$\psi$.
\end{proof} \subsubsection{MinHornDeletion-Equivalent Cases} As in Proposition~\ref{prop:AScomplhard}, the need to use the conjunctive closure instead of $\cc{\ }$ causes a case distinction in the proof of the following result. \begin{lemma}\label{lem:constructimpl} If\/~$\Gamma$ is exactly dual Horn $(\iV \subseteq \cc \Gamma \subseteq \iV_2)$ then one of the following relations is in $\ccc \Gamma$: $[x\to y]$, $[x\to y]\times \set{0}$, $[x\to y]\times \set{1}$, or $[x\to y]\times \set{01}$. \end{lemma} \begin{proof} The co-clone $\cc\Gamma$ is equal to $\iV$, $\iV_0$, $\iV_1$, or $\iV_2$. In the case $\cc \Gamma= \iV$ the relation $R_\iV$ belongs to~$\ccc \Gamma$ by Theorem~\ref{thm:weakbases}; because of $R_\iV(y, y, y, x) = [x \to y]$ we have $[x\to y] \in \ccc {R_\iV}\subseteq \ccc \Gamma$. The case $\cc \Gamma = \iV_1$ leads to $[x\to y]\times \set{1} \in \ccc \Gamma$ in an analogous manner. The cases $\cc \Gamma = \iV_0$ and $\cc \Gamma = \iV_2$ lead to $[x\to y]\times \set{0}\in \ccc \Gamma$ and $[x\to y]\times \set{01} \in \ccc \Gamma$, respectively, by observing that $[S_1(y,y,x)]=[S_0(\lnot y,\lnot y, \lnot x,\lnot y)] =[(\lnot y\land\lnot y)\eq(\lnot y\land\lnot x)] = [x\to y]$. \end{proof} \begin{lemma}\label{lem:Horn_MinHD-hard} If\/~$\Gamma$ is exactly dual Horn $(\iV \subseteq \cc \Gamma \subseteq \iV_2)$, then $\XSOL(\Gamma)$ is $\MinHD$-hard. \end{lemma} \begin{proof} There are four cases to consider, namely $\cc{\Gamma}\in\set{\iV,\iV_0,\iV_1,\iV_2}$. For simplicity we only present the situation where $\cc{\Gamma} = \iV_1$; the case $\cc{\Gamma}=\iV_2$ is very similar, and the other possibilities are even less complicated. At the end we shall give a few hints on how to adapt the proof in these cases. \par The basic structure of the proof is as follows: we choose a suitable weak base of~$\iV_1$ consisting of an irredundant relation~$R_1$, and identify a relation~$H_1\in\ccc{\set{R_1}}$ which allows us to encode a sufficiently complicated variant of the $\OptMinOnes$-problem into $\XSOL(\set{H_1})$. Thus by Theorem~\ref{thm:weakbases} and Lemma~\ref{lem:constructimpl} we have $H_1\in\ccc{\set{R_1}}\subseteq\ccc{\Gamma}$ and $[x\to y]\times\set{1} \in\ccc{\Gamma}$, wherefore Proposition~\ref{prop:coclonesminhd} implies $\XSOL(\Gamma')\aple \XSOL(\Gamma)$ where $\Gamma'=\set{H_1,[x\to y]\times\set{1}}$. According to~\cite[Theorem~2.14(4)]{KhannaSTW-01}, $\MinHD$ is equivalent to $\OptMinOnes(\Delta)$ for constraint languages~$\Delta$ that are dual Horn, not $0$-valid, and not implicative hitting set bounded$+$ with any finite bound, that is, if $\cc{\Delta}\in\set{\iV_1,\iV_2}$. The key point of the construction is to choose~$R_1$ and~$H_1$ in such a way that we can find a relation~$G_1$ satisfying $\iV_1\subseteq\cc{\set{G_1}}\subseteq\iV_2$ and $((G_1\times\set{1})\cup\set{\vec 0})\times\set{1}=H_1$. The latter property will allow us to prove an $\AP$-reduction $\MinHD\apeq\OptMinOnes(\set{G_1})\aple\XSOL(\Gamma')$, completing the chain. \par We first check that $R_1 = \GammaF{\fV_1}{\graphic[4]}$ satisfies $\cc{\set{R_1}} = \iV_1$: namely, by construction, this relation is preserved by the disjunction and by the constant operation with value~$1$, i.e., $\cc{R_1}\subseteq \iV_1$.
This inclusion cannot be proper, since $\vec{0}\notin R_1$ ($\cc{R_1}\not\subseteq\iI_0$) and $x \lor (y \land z) \notin R_1$ while $x = (e_1\circ\beta)\lor (e_4\circ\beta)$, $y = (e_1\circ\beta)\lor (e_2\circ\beta)$ and $z = (e_1\circ\beta)\lor (e_3\circ\beta)$ belong to $\GammaF{\fV_1}{\graphic[4]}$ (cf.\ before Theorem~\ref{thm:weakbases-from-graphics} for the notation), i.e.\ the generating function $(x,y,z)\mapsto x\lor (y\land z)$ of the clone $\fS_{00}$~\cite[Figure~2, p.~8]{CreignouV-08} fails to be a polymorphism of $R_1$. For later we note that when~$\beta$ is chosen such that the coordinates of~$\graphic[4]$ are ordered lexicographically (and we are going to assume this from now on), then this failure can already be observed within the first seven coordinates of~$R_1$. Now according to Theorem~\ref{thm:weakbases-from-graphics}, the sedenary relation $R_1:=\GammaF{\fV_1}{\graphic[4]}$ is a weak base relation for~$\iV_1$ without duplicate coordinates, and a brief moment of inspection shows that none of them is fictitious either. Therefore, $R_1$ is an irredundant weak base relation for $\iV_1$. We define $H_1$ to be $\Set{(x_0,\dotsc,x_8)}{(x_0,\dotsc,x_7,x_8,\dotsc,x_8)\in R_1}$, then clearly $H_1 \in\ccc{\set{R_1}}$. Now we put $G_1 := G_1'\smallsetminus\set{\vec{0}}$ where $G_1' := \Set{(x_0,\dotsc,x_6)}{(x_0,\dotsc,x_8)\in H_1}$, and one quickly verifies that $((G_1\times\set{1})\cup\set{\vec 0})\times\set{1}=H_1$. Since $G_1'\in \cc{H_1}\subseteq \cc{R_1} = \iV_1$ and removing the bottom-element $\vec{0}$ of a non-trivial join-semilattice with top-element still yields a join-semilattice with top-element, we have $G_1\in\iV_1$. With the analogous counterexample as for the relation~$R_1$ above, we can show that $(x,y,z)\mapsto x\lor(y\land z)$ is not a polymorphism of~$G_1$ (because the non-membership is witnessed among the first seven coordinates). Thus, $\cc{\set{G_1}} = \iV_1$; in particular $G_1$, and any relation conjunctively definable from it, is not $0$-valid. \par For the reduction let now $\varphi(\vec x) = G_1(\vec{x_1}) \land\cdots\land G_1(\vec{x_k})$ be an instance of $\OptMinOnes(\set{G_1})$. We construct a corresponding $\Gamma'$-formula~$\varphi'$ as follows. \begin{align*} \varphi''(\vec x,y,z) &:= H_1(\vec{x_1},y,z) \land \cdots \land H_1(\vec{x_k},y,z) \\ \varphi'''(\vec x, \vec{x^{(2)}}, \cdots, \vec{x^{(\ell)}},z) &:= \bigwedge_{i=1}^\ell \left( (x_i \toequals{z} x_i^{(2)}) \land \bigwedge_{j=2}^{\ell-1} (x_i^{(j)} \toequals{z} x_i^{(j+1)}) \land (x_i^\ell \toequals{z} x_i) \right)\\ \varphi'(\vec x, \vec{x^{(2)}}, \cdots, \vec{x^{(\ell)}},y,z) &:= \varphi''(\vec x,y,z) \land \varphi'''(\vec x, \vec{x^{(2)}}, \cdots,\vec{x^{(\ell)}},z) \end{align*} where $\ell = \card{\vec x}$ is the number of variables of~$\varphi$, $y$ and~$z$ are new global variables, and where we have written $(u\toequals{w}v)$ to denote $([x\to y]\times\set{1})(u,v,w)$. Let $m_0$ be the assignment to the $\ell^2+2$ variables of~$\varphi'$ given by $m_0(z) = 1$ and $m_0(x)=0$ elsewhere. It is clear that $(\varphi',m_0)$ is an instance of $\XSOL(\Gamma')$, since~$m_0$ satisfies~$\varphi'$. The formula~$\varphi'''$ only multiplies each variable~$x$ from~$\varphi$ $\ell$-times and forces $x \eq x^{(2)} \eq \cdots \eq x^{(\ell)}$, which is just a technicality for establishing an $\AP$-reduction. The main idea of this proof is the correspondence between the solutions of~$\varphi$ and~$\varphi''$. 
For each solution~$s$ of $\varphi(\vec x)$ there exists a solution~$s'$ of $\varphi''(\vec x, y)$ with $s'(y)=1$ (and $s'(z)=1$). Each solution~$s'$ of~$\varphi''$ has always $s'(z)=1$ and either $s'(y)=0$ or $s'(y)=1$. Because every variable from~$\vec{x}$ is part of one of the $\vec{x_i}$, the assignment~$m_0$ restricted to $(\vec{x}, y,z)$ is the only solution~$s'$ of~$\varphi''$ satisfying $s'(y)=0$. If otherwise $s'(y)$ equals~$1$, then~$s'$ restricted to the variables~$\vec x$ satisfies $\varphi(\vec x)$, following the correspondence between the relations~$G_1$ and~$H_1$. For~$r\in[1,\infty)$ let~$s'$ be an $r$-approximate solution of the $\XSOL(\Gamma')$-instance $(\varphi',m_0)$. Let $s := s'\Restriction_{\vec x}$ be the restriction of~$s'$ to the variables of~$\varphi$. Since $s'\neq m_0$, by what we showed before, $s'(y)=1$ and $s$~is a solution of $\varphi(\vec x)$. We have $\OPT(\varphi', m_0) \geq 2$ and $\OPT(\varphi) \geq 1$, since solutions of the $\XSOL(\Gamma')$-instance $(\varphi',m_0)$ must be different from~$m_0$, whereby~$y$ is forced to have value~$1$, and $[\varphi]\in\ccc{\set{G_1}}$ is not $0$-valid. Moreover, $\hw(s) = \hd(\vec 0, s)$, $\hd(s',m_0) = \ell \hw(s)+1$, $\OPT(\varphi',m_0) = \ell \OPT(\varphi) + 1$, and $\hd(s',m_0)\leq r\OPT(\varphi',m_0)$. From this and $\OPT(\varphi)\geq 1$ it follows that \begin{align*} \ell\hw(s) < \ell\hw(s)+ 1 = \hd(s',m_0)&\leq r\OPT(\varphi',m_0) = r\ell\OPT(\varphi) + r\\ &\leq r\ell\OPT(\varphi) + r\OPT(\varphi)\\ &\leq r\ell\OPT(\varphi) + r\OPT(\varphi) + (r-1)\ell\OPT(\varphi)\\ &= (2r-1+r/\ell)\ell\OPT(\varphi) = (1+2(r-1) +r/\ell)\ell\OPT(\varphi). \end{align*} Hence~$s$ is an $(1+\alpha(r-1)+\Lo(1))$-approximate solution of the instance~$\varphi$ of $\OptMinOnes(\set{G_1})$ where $\alpha=2$. \par In the case when $\cc{\Gamma} = \iV_2$, the proof goes through with minor changes: $R_2 = \GammaF{\fV_2}{\graphic[4]} = R_1\smallsetminus\set{\vec{1}}$, so we define~$H_2$ and~$G_2$ like~$H_1$ and~$G_1$ just using~$R_2$ and~$H_2$ in place of~$R_1$ and~$H_1$. Then we have $H_2 = H_1\smallsetminus\set{\vec{1}}$, $G_2 = G_1\smallsetminus\set{\vec{1}}$ and $\cc{\set{G_2}}= \iV_2$. Moreover, for the reduction we shall need an additional global variable~$w$ for~$\varphi'''$ (and $\varphi'$) since the encoding of the implication from Lemma~\ref{lem:constructimpl} requires it (and forces it to zero in every model). \par For $\cc{\Gamma}= \iV_0$ we can use $R_0=\GammaF{\fV_0}{\graphic[4]} = R_2\cup\set{\vec{0}}$; then, letting $H_0=\Set{(x_0,\dotsc,x_7)}{(x_0,\dotsc,x_7,x_7,\dotsc,x_7)\in R_0} \in\ccc{\set{R_0}}$, we have $H_0 = (G_2\times\set{1})\cup\set{\vec{0}}$. On a side note, we observe that $H_0 = \GammaF{\fV_0}{\graphic[3]}$, which we can use alternatively without detouring via~$R_0$. Given the relationship between $G_2$ and $H_0$, we do not need the global variable~$z$ in the definition of~$\varphi''$, but we need to have it in the definition of $\varphi'''$, where the relation given by Lemma~\ref{lem:constructimpl} necessitates atoms of the form $(u\xrightarrow{z=0}v)$ forcing $z$ to zero in every model. \par The case where $\cc{\Gamma} = \iV$ is similar to the previous: we can use the irredundant weak base relation $H = \GammaF{\fV}{\graphic[3]} = H_0 \cup\set{\vec{1}} = (G_1\times\set{1})\cup\set{\vec{0}}$. Except for~$y$ in the definition of~$\varphi''$ no additional global variables are needed in the definition of~$\varphi'$, because $[u\to v]$ atoms are directly available for~$\varphi'''$. 
\end{proof} \begin{corollary}\label{cor:XSOL-Horn-dual_Horn} If\/~$\Gamma$ is exactly Horn $(\iE \subseteq \cc \Gamma \subseteq \iE_2)$ or exactly dual Horn $(\iV \subseteq \cc \Gamma \subseteq \iV_2)$ then $\XSOL(\Gamma)$ is $\MinHD$-complete under $\AP$-Turing-reductions. \end{corollary} \begin{proof} Hardness follows from Lemma~\ref{lem:Horn_MinHD-hard} and duality. Moreover, $\XSOL(\Gamma)$ can be $\AP$-Turing-reduced to $\NSOL(\Gamma\cup \set{[x], [\neg x]})$ as follows: Given a $\Gamma$-formula~$\varphi$ and a model~$m$, we construct for every variable~$x$ of~$\varphi$ a formula $\varphi_x= \varphi \land (x\eq \cmpl{m}(x))$. Then for every~$x$ where $[\varphi_x]\neq\emptyset$ we run an oracle algorithm for $\NSOL(\Gamma\cup \set{[x], [\neg x]})$ on $(\varphi_x, m)$ and output one result of these oracle calls that is closest to~$m$. We claim that this algorithm indeed provides an $\AP$-Turing reduction. To see this, observe first that the instance $(\varphi,m)$ has feasible solutions if and only if $(\varphi_x,m)$ has feasible solutions for at least one variable~$x$. Moreover, we have $\OPT(\varphi,m) = \min_{x,[\varphi_x]\neq\emptyset}(\OPT(\varphi_x, m))$. Let $A(\varphi,m)$ be the answer of the algorithm on $(\varphi, m)$ and let $B(\varphi_x, m)$ be the answers to the oracle calls. Consider a variable~$x^*$ such that $\OPT(\varphi,m) = \min_{x,[\varphi_x]\neq\emptyset}(\OPT(\varphi_x,m)) = \OPT(\varphi_{x^*},m)$, and assume that $B(\varphi_{x^*}, m)$ is an $r$-approximate solution of $(\varphi_{x^*},m)$. Then we get \begin{displaymath} \frac{\hd(m, A(\varphi, m))}{\OPT(\varphi, m)} = \frac{\min_{y,[\varphi_y]\neq\emptyset}(\hd(m, B(\varphi_y, m)))}{\OPT(\varphi_{x^*}, m)} \leq \frac{\hd(m, B(\varphi_{x^*}, m))}{\OPT(\varphi_{x^*}, m)} \leq r . \end{displaymath} Thus the algorithm is indeed an $\AP$-Turing-reduction from $\XSOL(\Gamma)$ to $\NSOL(\Gamma\cup \set{[x], [\neg x]})$. Note that for $\Gamma\subseteq \iV_2$ the problem $\NSOL(\Gamma\cup \set{[x], [\neg x]})$ reduces to $\MinHD$ according to Propositions~\ref{prop:iV_2_to_MinOnes} and~\ref{prop:kstw-minhd}. Duality completes the proof. \end{proof} \section{Finding the Minimal Distance Between Solutions} \label{sec:proofsMSD} In this section we study the optimization problem $\MinSolDistance$. We first consider the polynomial-time cases and then the cases of higher complexity. \subsection{Polynomial-Time Cases} \label{ssec:proofsMSD-PO} We show that for bijunctive constraints the problem $\MinSolDistance$ can be solved in polynomial time. After stating the result we present an algorithm and analyze its complexity and correctness. \begin{proposition}\label{prop:MSD-iD2} If\/ $\Gamma$ is a bijunctive constraint language $(\Gamma\subseteq\iD_2)$ then $\MSD(\Gamma)$ is in $\PO$. \end{proposition} By Proposition~\ref{prop:BakerPixley}, an algorithm for bijunctive constraint languages~$\Gamma$ can be restricted to at most binary clauses. Alternatively, one can use the plain base $\set{[x],[\neg x],[x\lor y], [\neg x\lor y], [\neg x\lor\neg y]}$ of~$\iD_2$ exhibited in~\cite{CreignouKZ-08} to see that every relation in $\Gamma$ can be written as a conjunction of disjunctions of two not necessarily distinct literals. We shall treat these disjunctions as one- or two-element sets of literals when extending the algorithm of Aspvall, Plass, and Tarjan~\cite{AspvallPT-79} to compute the minimum distance between distinct models of a bijunctive constraint formula.
\algorithm{\textsc{bijunctive} $\MSD$} {An $\iD_2$-formula~$\varphi$ given as a collection of one- or two-element sets of literals (bijunctive clauses).} {``$\leq 1$ model'' or the minimal Hamming distance of any two distinct models of~$\varphi$.} {\mbox{}\\ Let $\Var$ be the set of variables occurring in~$\varphi$.\\ Let $\Lit := \set{v,\neg v \mid v\in \Var}$ be the set of literals.\\ Let~$\bar{u}$ denote the literal complementary to~$u\in\Lit$. \par\noindent Construct the relation $R:=\Set{(\bar u,v),(\bar v,u)}{\set{u,v}\in\varphi\land u\neq v}\cup\Set{(\bar u,u)}{\set{u}\in\varphi}$.\\ Let $\leq$ be the reflexive and transitive closure of~$R$, i.e.\ the least preorder on~$\Lit$ extending~$R$.\\ Construct the sets \begin{alignat*}{2} \Var_0 &:= \SET{v\in\Var}{$v\leq x\leq\neg x\phantom{\mbox{}\leq v}$\quad or\quad $\phantom{\neg v\leq\mbox{}}\neg x\leq x\leq\neg v$\quad for some $x\in\Var$}\\ \Var_1 &:= \SET{v\in\Var}{$\phantom{v\leq\mbox{}}x\leq\neg x\leq v$\quad or\quad $\neg v\leq\neg x\leq x\phantom{\mbox{}\leq\neg v}$\quad for some $x\in\Var$} \end{alignat*} If $\Var_0\cap\Var_1 \neq\emptyset$ or $\Var_0\cup\Var_1 = \Var$ holds, then return ``$\leq 1$ model''. \par\noindent Let $\Lit' := \Lit\smallsetminus\Set{v,\neg v}{v \in \Var_0\cup \Var_1}$.\\ Let ${\sim}:= \set{(u,v)\in\Lit' \times \Lit' \mid u\leq v \land v\leq u}$.\\ Return $\min\Set{\card L}{L \in \Lit'/{\sim}}$ as minimal Hamming distance. \par\noindent } \paragraph{Complexity:} The size of $\Lit$ is linear in the number of variables, the reflexive closure can be computed in time linear in~$\card{\Lit}$, the transitive closure in time cubic in~$\card{\Lit}$, see~\cite{WarshallTransClosure1962}. The equivalence relation~$\sim$ is the intersection of~$\leq$ restricted to $\Lit'$ and its inverse (quadratic in $\card{\Lit'}$); from it we can obtain the partition~$\Lit'/{\sim}$ in linear time in~$\card{\Lit'}\leq \card{\Lit}$, including the cardinalities of the equivalence classes and their minimization. Similarly, the remaining sets from the proof ($\Var_0$, $\Var_1$, their intersection and union, and thus also $\Lit'$) can be computed with polynomial time complexity. \paragraph{Correctness:} The pairs in~$R$ arise from interpreting the atomic constraints in~$\varphi$ as implications. By transitivity of implication, the inequality $u\leq v$ for literals $u,v$ means that every model~$m$ of~$\varphi$ satisfies the implication $u\to v$ or, equivalently, $m(u)\leq m(v)$. In particular, $x\leq\neg x$ implies $m(x)=0$ and $\neg x\leq x$ implies $m(x)=1$. Therefore $\Var_0$ can be seen to be the set of variables that have to be false in every model of~$\varphi$, and $\Var_1$ the set of variables true in every model. If $\Var_0 \cap \Var_1 \neq \emptyset$ holds then the formula~$\varphi$ is inconsistent and has no solution. If $\Var_0 \cup \Var_1 = \Var$ holds, then every variable has a unique fixed value, hence $\varphi$ has only one solution. Otherwise the formula is consistent and not all variables are fixed, hence there are at least two models. To determine the minimal number of variables, whose values can be flipped between any two models of~$\varphi$, it suffices to consider the literals without fixed value, $\Lit'$. If we have $u\leq v$ and $v\leq u$, the literals are equivalent, $u\sim v$, and must have the same value in every model. This means that any two distinct models have to differ on all literals of at least one equivalence class in $\Lit'/{\sim}$. 
Therefore, the return value of the algorithm is a lower bound for the minimal distance. To prove that the return value can indeed be attained, we exhibit two models $m_0 \neq m_1$ of~$\varphi$ whose Hamming distance equals the least cardinality of an equivalence class in~$\Lit'/{\sim}$. Let $L \in \Lit'/{\sim}$ be a class of minimum cardinality. Define $m_0(u):= 0$ and $m_1(u):= 1$ for all literals $u \in L$. We extend this by setting $m_0(w) := m_1(w):= 0$ for all $w\in\Lit$ such that $w\leq u$ for some $u\in L$, and by $m_0(w):= m_1(w):= 1$ for all $w\in\Lit$ such that $u\leq w$ for some $u\in L$. For variables $v\in\Var$ satisfying $v\leq \neg v$ or $\neg v\leq v$ we have $v\in \Var_0\cup\Var_1$, and thus neither $v$ nor $\neg v$ belongs to~$\Lit'$; in other words, for $[v]_{\sim} \in \Lit'/{\sim}$ the classes $[v]_{\sim}$ and $[\neg v]_{\sim}$ are incomparable. Thus, so far, we have not defined $m_0$ and $m_1$ on a variable $v\in \Var$ and on its negation $\neg v$ at the same time. Of course, fixing a value for a negative literal $\neg v$ implicitly means that we bind the assignment for $v\in \Var$ to the opposite value. It remains to fix the value of literals in $\Lit'$ that are neither related to the literals in~$L$ nor have fixed values in all models. Suppose $(\bar u, v) \in R$ is a constraint such that the value of at least one literal has not yet been defined. There are three cases: either both literals have not yet received a value, or $\bar u$ is undefined and $v$ has been assigned the value~$1$ (either as a fixed value in all models or because of being greater than a literal in~$L$ or greater than the complement of a literal in~$L$), or $v$ is undefined and $\bar u$ has been assigned the value~$0$ (either as a fixed value in all models or because of being smaller than a literal in~$L$ or smaller than the complement of a literal in~$L$). All three cases can be handled by defining both models, $m_0$ and $m_1$, on the remaining variables identically: starting with a minimal literal $u$ on which $m_0$ and $m_1$ are not yet defined, we assign $m_0(u) := m_1(u) := 0$ and $m_0(\cmpl{u}) := m_1(\cmpl{u}) := 1$. This way none of the constraints is violated, and $m_0$ and $m_1$ are distinct only on variables corresponding to literals in $L$. Iterate this procedure until all variables (and their complements) have been assigned values. If $\Lit''\subseteq\Lit'$ denotes the literals remaining after propagating the values of~$m_0$ and~$m_1$ on~$L$, then the presented method can be implemented by partitioning $\Lit''$ into two classes $L_0$ and $L_1$ such that $L_0\cap\set{u,\cmpl{u}}$ is a singleton for every $u\in \Lit''$ and each weakly connected component of the quasiordered set $(\Lit'',\leq)$ is either a subset of $L_0$ or $L_1$. Then set $m_0$ and $m_1$ to $k$ on the literals belonging to $L_k$ for $k\in\set{0,1}$. By construction, $m_0$ differs from~$m_1$ only in the variables corresponding to the literals in $L$, so their Hamming distance is $\card{L}$ as desired. Moreover, both assignments respect the order constraints in $(\Lit,\leq)$. As these faithfully reflect all original atomic constraints, $m_0$ and $m_1$ are indeed models of~$\varphi$. \begin{proposition}\label{prop:MSD-iE2-iV2} If\/~$\Gamma$ is a Horn $(\Gamma\subseteq\iE_2)$ or a dual Horn $(\Gamma\subseteq\iV_2)$ constraint language then $\MSD(\Gamma)$ is in $\PO$. \end{proposition} We only discuss the Horn case ($\Gamma\subseteq\iE_2$), the dual Horn case ($\Gamma\subseteq\iV_2$) being symmetric.
\algorithm{\textsc{Horn} $\MSD$} {A Horn formula~$\varphi$ given as a set of Horn clauses (cf.\ the plain base of $\iE_2$ given in~\cite{CreignouKZ-08}).} {``$\leq 1$ model'' or the minimal Hamming distance of any two distinct models of~$\varphi$.} { \par\noindent For each variable~$x$ in~$\varphi$, add the clause $(\neg x \lor x)$.\\ Let $\U := \emptyset$.\\ Apply the following rules to~$\varphi$ until no more clauses and literals can be removed and no new clauses can be added. \emph{Unit resolution and unit subsumption:} Let $\bar{u}$ denote the complement of a literal~$u$. If the clause set contains a unit clause $u$, remove all clauses containing the literal $u$ and remove all literals $\bar u$ from the remaining clauses. Add $u$ to the set $\U$. \emph{Hyper-resolution with binary implications:} Resolve all negative literals of a clause simultaneously with binary implications possessing identical premises. \begin{displaymath} \infer{(\neg x \lor z)} {(\neg x\lor y_1) \cdots (\neg x\lor y_k)&(\neg y_1\lor \dots \lor \neg y_k\lor z)} \qquad\quad \infer{(\neg x)} {(\neg x\lor y_1) \cdots (\neg x\lor y_k)&(\neg y_1\lor\dots\lor\neg y_k)} \end{displaymath} Let $\D$ be the set of clauses after applying the two rules exhaustively.\\ If $\D$ contains the empty clause, return ``$\leq 1$ model''.\\ If $\U$ contains a literal for every variable in~$\varphi$, return ``$\leq 1$ model''.\\ If $\varphi$ contains a variable that appears neither in~$\D$ nor in~$\U$, return $1$ as minimal Hamming distance. \par\noindent Otherwise, let $\Var$ be the set of variables occurring in~$\D$, and let ${\sim}\subseteq \Var^2$ be the relation defined by $x\sim y$ if $\set{\neg x\lor y,\neg y\lor x}\subseteq\D$. Note that $\sim$ is an equivalence, since the tautological clauses ensure reflexivity and resolution of implications computes their transitive closure. We say that a variable~$z$ depends on variables $y_1,\dots,y_k$, if $\D$ contains the clauses $ \neg y_1\lor \dots\lor \neg y_k\lor z$, $\neg z\lor y_1$, \ldots, $\neg z\lor y_k$ and $z\not\sim y_i$ holds for all $i=1,\dots,k$.\\ Return $\min\SET{\card X}{$X\in\Veq$, $X$ does not contain dependent variables}$ as minimal Hamming distance. \par\noindent } \paragraph{Complexity:} The run-time of the algorithm is polynomial in the number of clauses in~$\varphi$: Unit resolution/subsumption can be applied at most once for each variable, and hyper-resolution has to be applied at most once for each variable~$x$ and each clause $\neg y_1\lor \dots \lor \neg y_k \lor z$ and $\neg y_1\lor \dots\lor \neg y_k$. \paragraph{Correctness:} Adding resolvents and removing subsumed clauses maintains logical equivalence, therefore $\D\cup\U$ is logically equivalent to~$\varphi$, i.e., both clause sets have the same models. We note that the sets of variables of~$\U$ and of~$\D$ are disjoint. The unit clauses in~$\U$ are always (uniquely) satisfiable, thus~$\D$ and~$\varphi$ are equisatisfiable. Therefore, if~$\D$ contains the empty clause, $\varphi$ is also unsatisfiable; otherwise~$\D$ is satisfiable, e.g., by assigning~$0$ to every $x\in\Var$. In this case, if~$\U$ contains a literal for every variable of~$\varphi$, the unit clauses in~$\U$ define a unique model of~$\varphi$. Otherwise $\varphi$ has at least two models $m_1\neq m_2$. 
In the simplest case some variable~$x$ in~$\varphi$ has been left unconstrained by $\D$ and $\U$; in this case we can pick any model of $\D$ and $\U$ and extend it to two different models of~$\varphi$ with Hamming distance~$1$ by setting $m_1(x)=0$ and $m_2(x)=1$ and setting $m_1(y) = m_2(y) = 0$ for any other variable~$y$ outside $\D$ and $\U$. For the remaining situations it is sufficient to consider the models of~$\D$ only, as each model~$m$ of~$\D$ uniquely extends to a model of~$\varphi$ by defining $m(x)=1$ for $(x)\in\U$ and $m(x)=0$ for $(\neg x)\in\U$; hence the minimal Hamming distances of the models of $\varphi$ and~$\D$ will be the same. We are thus looking for models $m_1,m_2$ of~$\D$ such that the size of the difference set $\Delta(m_1, m_2)=\Set{x}{m_1(x)\neq m_2(x)}$ is minimal. In fact, since the models of Horn formulas are closed under minimum, we may assume $m_1<m_2$, i.e., we have $m_1(x)=0$ and $m_2(x)=1$ for all variables $x\in \Delta(m_1, m_2)$. Indeed, given two models $m_2$ and $m_2'$ of~$\D$ where neither $m_2\leq m_2'$ nor $m_2'\leq m_2$, $m_1= m_2 \land m_2'$ is also a model, and it is distinct from $m_2$. Since $\hd(m_1,m_2) \leq \hd(m_2,m_2')$, the minimal Hamming distance will occur between models~$m_1$ and~$m_2$ satisfying $m_1 < m_2$. \noindent Note the following facts regarding the equivalence relation~$\sim$ and dependent variables. \begin{asparaitem} \item If $x\sim y$ then the two variables must have the same value in every model of~$\D$ in order to satisfy the implications $\neg x\lor y$ and $\neg y\lor x$. This means that for all models~$m$ of~$\D$ and all $X\in\Veq$, we have either $m(x)=0$ for all $x\in X$ or $m(x)=1$ for all $x\in X$. \item The dependence of variables is acyclic: If, for some $l\geq 2$, for every $1\leq i<l$ we have that $z_i$ depends on variables including one, say $y_i$, which is equivalent to~$z_{i+1}$, and $z_l = z_1$, then there is a cycle of binary implications between the variables and thus $z_i\sim y_i \sim z_j$ for all $i,j$, contradicting the definition of dependence. \item If a variable~$z$ depending on $y_1$, \dots, $y_k$ belongs to a difference set~$\Delta(m_1, m_2)$, then at least one of the $y_i$s also has to belong to~$\Delta(m_1, m_2)$: $m_2(z)=1$ implies $m_2(y_j)=1$ for all $j=1,\dots,k$ (because of the clauses $\neg z\lor y_i$), and $m_1(z)=0$ implies $m_1(y_i)=0$ for at least one~$i$ (because of the clause $\neg y_1\lor \dots\lor \neg y_k\lor z$). Therefore $\Delta(m_1, m_2)$ is the union of at least two sets in~$\Veq$, namely the equivalence class of~$z$ and the one of~$y_i$. \item If some $z_1\in \Delta(m_1,m_2)$ is equivalent to a variable $z_1'$ that depends on some other variables, then we have a variable $z_2$ among them, which also belongs to $\Delta(m_1,m_2)$. If the equivalence class of $z_2$ still contains a variable $z_2'$ depending on other variables, we can iterate this procedure. In this way we obtain a sequence $z_1\sim z_1', z_2\sim z_2', z_3\sim z_3', \dotsc$ where $z_i'$ depends on variables including $z_{i+1}$, which is equivalent to $z_{i+1}'$. Because there are only finitely many variables and because of acyclicity, after a linear number of steps we must reach a variable $z_n\in\Delta(m_1,m_2)$ such that its equivalence class (being a subset of the difference set) does not contain any dependent variables. \end{asparaitem} Hence the difference between any two models cannot be smaller than the cardinality of the smallest set in~$\Veq$ without dependent variables. 
It remains to show that we can indeed find two such models. Let~$X$ be a set in~$\Veq$ which has minimal cardinality among the sets without dependent variables, and let $m_0,m_1$ be interpretations defined as follows:
\begin{inparaenum}[(1)]
\item $m_0(y) = 0$ and $m_1(y) = 1$ if $y\in X$;
\item $m_0(y) = 1$ and $m_1(y) = 1$ if $y\notin X$ and $(\neg x\lor y)\in\D$ for some $x\in X$;
\item $m_0(y) = 0$ and $m_1(y) = 0$ otherwise.
\end{inparaenum}
We have to show that~$m_0$ and~$m_1$ satisfy all clauses in~$\D$. Let $m$ be either of these two interpretations. $\D$ contains two types of clauses.
\begin{asparaenum}[Type~1:]
\item Horn clauses with a positive literal $\neg y_1\lor \dots\lor \neg y_k\lor z$. If $m(y_i)=0$ for any~$i$, we are done. So suppose $m(y_i)=1$ for all $i=1,\dots,k$; we have to show $m(z)=1$. The condition $m(y_i)=1$ means that either $y_i\in X$ (for $m=m_1$) or that there is a clause $(\neg x_i \lor y_i)\in\D$ for some $x_i\in X$. We distinguish the two cases $z\in X$ and $z\notin X$.

Let $z\in X$. If $z\sim y_i$ for any $i$, we are done for we have $m(z)=m(y_i)=1$. So suppose $z\not\sim y_i$ for all~$i$. As the elements in~$X$, in particular $z$ and the $x_i$s, are equivalent and the binary clauses are closed under resolution, $\D$ contains the clause $\neg z\lor y_i$ for all~$i$. But this would mean that $z$ is a variable depending on the $y_i$s, contradicting the choice of~$X$ as a set without dependent variables.

Let $z\notin X$, and let $x\in X$. As the elements in~$X$ are equivalent and the binary clauses are closed under resolution, $\D$ contains $\neg x\lor y_i$ for all~$i$. Closure under hyper-resolution with the clause $\neg y_1 \lor \dots\lor \neg y_k\lor z$ means that $\D$ also contains $\neg x\lor z$, whence $m(z)=1$.
\item Horn clauses with only negative literals $\neg y_1\lor \dots\lor \neg y_k$. If $m(y_i)=0$ for any~$i$, we are done. It remains to show that the assumption $m(y_i)=1$ for all $i=1,\dots,k$ leads to a contradiction. The condition $m(y_i)=1$ means that either $y_i\in X$ (for $m=m_1$) or that there is a clause $(\neg x_i\lor y_i)\in\D$ for some $x_i\in X$. Let $x$ be some particular element of~$X$. Since the elements in~$X$ are equivalent and the binary clauses are closed under resolution, $\D$ contains the clause $\neg x\lor y_i$ for all~$i$. But then a hyper-resolution step with the clause $\neg y_1 \lor \dots\lor \neg y_k$ would yield the unit clause $\neg x$, which does not occur in~$\D$, since all unit clauses have been eliminated by unit resolution. Therefore at least one $y_i$ is neither in~$X$ nor part of a clause $\neg x\lor y_i$ with $x\in X$, i.e., $m(y_i)=0$.
\end{asparaenum}
Thus $m_0$ and~$m_1$ are models of~$\D$ that differ exactly on the variables in~$X$, so two models at Hamming distance $\card X$ exist. Together with the lower bound above, this shows that the value returned by the algorithm is indeed the minimal Hamming distance.

\subsection{Hard Cases}

\subsubsection{Two Solution Satisfiability}
In this section we study the feasibility problem of $\MSD(\Gamma)$, which is, given a $\Gamma$-formula~$\varphi$, to decide whether~$\varphi$ has two distinct solutions.

\decproblem{$\TSSAT(\Gamma)$}
{A conjunctive formula~$\varphi$ over the relations from the constraint language~$\Gamma$.}
{Are there two satisfying assignments~$m \neq m'$ of~$\varphi$?}

A priori it is not clear that the tractability of $\TSSAT$ is fully characterized by co-clones. The problem is that the implementation of relations of some language~$\Gamma$ by another language~$\Gamma'$ might not be parsimonious, that is, one solution to a constraint might be blown up into several solutions in the implementation. Fortunately we can still determine the tractability frontier for $\TSSAT$ by combining the corresponding results for $\SAT$ and $\AnotherSat$.
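Whenever both $\SAT(\Gamma)$ and $\AnotherSat(\Gamma)$ are tractable, the resulting decision procedure for $\TSSAT(\Gamma)$ is the obvious one (see Lemma~\ref{lem:twoeasy} below). As a purely illustrative aside, the following Python sketch spells it out; the helpers \texttt{sat\_solve} and \texttt{another\_sat} are assumed, hypothetical black boxes for $\SAT(\Gamma)$ and $\AnotherSat(\Gamma)$ that return a model or \texttt{None}.
\begin{verbatim}
def two_solution_sat(formula, sat_solve, another_sat):
    """Return True iff `formula` has at least two distinct models.

    sat_solve(formula)      -- constructive solver: some model of formula, or None
    another_sat(formula, m) -- a model of formula different from m, or None
    """
    m = sat_solve(formula)
    if m is None:                # unsatisfiable: not even one model
        return False
    # a second, distinct model exists iff AnotherSAT succeeds on (formula, m)
    return another_sat(formula, m) is not None
\end{verbatim}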
\begin{lemma}\label{lem:sattotwo}
Let $\Gamma$ be a constraint language for which $\SAT(\Gamma)$ is $\NP$-hard. Then $\TSSAT(\Gamma)$ is $\NP$-hard.
\end{lemma}
\begin{proof}
Since $\SAT(\Gamma)$ is $\NP$-hard, there must be a relation~$R$ in~$\Gamma$ having more than one tuple, because every relation containing only one tuple is at the same time Horn, dual Horn, bijunctive, and affine. Given an instance $\varphi$ for $\SAT(\Gamma)$, construct $\varphi'$ as $\varphi \land R(y_1, \ldots, y_\ell)$ where $\ell$ is the arity of $R$ and $y_1, \ldots, y_\ell$ are new variables not appearing in $\varphi$. Since $R$ contains at least two tuples and the variables $y_1, \ldots, y_\ell$ do not occur in~$\varphi$, the formula $\varphi$ has a solution if and only if $\varphi'$ has at least two solutions. Hence, we have proved $\SAT(\Gamma)\mle \TSSAT(\Gamma)$.
\end{proof}

\begin{lemma}\label{lem:asattotwo}
Let $\Gamma$ be a constraint language for which $\AnotherSat(\Gamma)$ is $\NP$-hard. Then the problem $\TSSAT(\Gamma)$ is $\NP$-hard.
\end{lemma}
\begin{proof}
Let a $\Gamma$-formula~$\varphi$ and a satisfying assignment~$m$ be an instance of $\AnotherSat(\Gamma)$. Then~$\varphi$ has a solution other than~$m$ if and only if it has two distinct solutions.
\end{proof}

\begin{lemma}\label{lem:twoeasy}
Let $\Gamma$ be a constraint language for which both $\SAT(\Gamma)$ and $\AnotherSat(\Gamma)$ are in~$\P$. Then $\TSSAT(\Gamma)$ is also in~$\P$.
\end{lemma}
\begin{proof}
Let $\varphi$ be an instance of $\TSSAT(\Gamma)$. All polynomial-time decidable cases of $\SAT(\Gamma)$ are constructive, i.e., whenever that problem is polynomial-time decidable, there exists a polynomial-time algorithm computing a satisfying assignment provided it exists. If $\varphi$ is not satisfiable, we reject the instance. Otherwise, we can compute in polynomial time a satisfying assignment~$m$ of~$\varphi$. Now use the algorithm for $\AnotherSat(\Gamma)$ on the instance $(\varphi, m)$ to decide if there is a second solution to~$\varphi$.
\end{proof}

\begin{corollary}\label{cor:sat-tssat-complexity}
For any constraint language~$\Gamma$, the problem $\TSSAT(\Gamma)$ is in~$\P$ if both $\SAT(\Gamma)$ and $\AnotherSat(\Gamma)$ are in~$\P$. Otherwise, $\TSSAT(\Gamma)$ is $\NP$-hard.
\end{corollary}

\begin{proposition}\label{prop:linearapprox}
Let~$\Gamma$ be a constraint language for which $\TSSAT(\Gamma)$ is in~$\P$. Then there is a polynomial-time $n$-approximation algorithm for $\MSD(\Gamma)$, where~$n$ is the number of variables of the $\Gamma$-formula on input.
\end{proposition}
\begin{proof}
Since~$\TSSAT(\Gamma)$ is in~$\P$, both $\SAT(\Gamma)$ and $\AnotherSat(\Gamma)$ must be in~$\P$ by Corollary~\ref{cor:sat-tssat-complexity}. Since $\SAT(\Gamma)$ is in~$\P$, we can compute a model~$m$ of the input~$\varphi$ in polynomial time if it exists. Now we check the $\AnotherSat(\Gamma)$-instance $(\varphi,m)$. If it has a solution~$m'\neq m$, it is also polynomial-time computable, and we return $(m,m')$. If we fail somewhere in this process, then the $\MSD(\Gamma)$-instance~$\varphi$ does not have feasible solutions; otherwise, $\hd(m,m')\leq n \leq n\cdot\OPT(\varphi)$.
\end{proof}

\subsubsection{MinDistance-Equivalent Cases}
In this section we show that, as for the $\NextSol$ problem, the affine cases of $\MSD$ are $\MinDist$-complete.

\begin{proposition}\label{prop:msdaffine}
$\MSD(\Gamma)$ is $\MinDist$-complete if the constraint language~$\Gamma$ satisfies the inclusions $\iL\subseteq \cc{\Gamma} \subseteq \iL_2$.
\end{proposition}
\begin{proof}
We prove $\MSD(\Gamma)\apeq \NextSol(\Gamma)$, which is $\MinDist$-complete for each constraint language~$\Gamma$ satisfying the inclusions $\iL\subseteq \cc{\Gamma} \subseteq \iL_2$, according to Proposition~\ref{prop:MinDist-hardness-XSOL}. As the inclusion~$\Gamma\subseteq \iL_2 = \cc{\set{\even^4,[x],[\neg x]}}$ holds, any $\Gamma$-formula~$\psi$ is expressible as $\exists y\,(A_1 x + A_2 y = c)$. The projection of the affine solution space is again an affine space, so it can be understood as the solution set of a system~$Ax = b$. If $(\psi,m_0)$ is an instance of $\XSOL(\Gamma)$, then $\psi$ is a $\MSD(\Gamma)$-instance, and a feasible solution $m_1\neq m_2$ satisfying~$\psi$ gives a feasible solution $m_3:=m_0+(m_2 - m_1)$ for $(\psi,m_0)$, where $\hd(m_0,m_3) = \hd(m_2,m_1)$. Conversely, a solution $m_3\neq m_0$ to $(\psi,m_0)$ yields a feasible answer to the $\MSD$-instance~$\psi$. Thus, $\OPT(\psi) = \OPT(\psi,m_0)$ and so $\XSOL(\Gamma)\aple\MSD(\Gamma)$.

The other way round, if~$\psi$ is an $\MSD$-instance, then attempt to solve the system $Ax=b$ defined by it; if it has no solution or a unique one, then the instance does not have feasible solutions. Otherwise, we have at least two distinct models of~$\psi$; let $m_0$ be one of these. As above we conclude $\OPT(\psi) = \OPT(\psi,m_0)$, and therefore, $\MSD(\Gamma)\aple\XSOL(\Gamma)$.
\end{proof}

\subsubsection{Tightness Results}
We prove that Proposition~\ref{prop:linearapprox} is essentially tight for some constraint languages. This result builds heavily on the previous results from Section~\ref{sssec:xsoltight}.

\begin{proposition}\label{prop:tightnessOfLinearapprox}
For a constraint language~$\Gamma$ satisfying the inclusions $\iN \subseteq \cc \Gamma \subseteq \iI$ and any $\varepsilon>0$ there is no polynomial-time $n^{1-\varepsilon}$-approximation algorithm for $\MSD(\Gamma)$, unless $\P = \NP$.
\end{proposition}
\begin{proof}
We show that any polynomial-time $n^{1-\varepsilon}$-approximation algorithm for $\MSD(\Gamma)$ would also allow us to decide in polynomial time the problem $\AnotherSatNC(\Gamma)$, which is $\NP$-complete by Proposition~\ref{prop:AScomplhard}. The algorithm works as follows. Given an instance $(\varphi, m)$ for $\AnotherSatNC(\Gamma)$, the algorithm accepts if~$m$ is not a constant assignment. Since~$\Gamma$ is $0$-valid (and $1$-valid), this output is correct. If~$\varphi$ has only one variable, reject because~$\varphi$ has only two models; otherwise, proceed as follows.

For each variable~$x$ of~$\varphi$, we construct a new formula~$\varphi'_{x}$ as follows. Let~$k$ be the smallest integer greater than $1/\varepsilon$. Introduce $n^k-n$ new variables~$x^i$ for $i = 1, \ldots, n^k-n$. For every $i \in \set{1, \ldots, n^k-n}$ and every constraint $R(y_1, \ldots , y_\ell)$ in~$\varphi$ such that $x \in \set{y_1, \ldots, y_\ell}$, construct a new constraint $R(z_1^i, \ldots, z_\ell^i)$ by $z_j^i = x^i$ if $y_j = x$ and $z_j^i = y_j$ otherwise; add all the newly constructed constraints to~$\varphi$ in order to get~$\varphi'_{x}$. Note that we can extend models~$s$ of~$\varphi$ to models~$s'$ of~$\varphi'_{x}$ by setting $s'(x^i)=s(x)$. In particular, this can be done for~$m$, yielding $m'\in[\varphi'_{x}]$. As $\Gamma\subseteq \iI = \iI_0\cap \iI_1$, the $\MSD(\Gamma)$-instance $\varphi'_{x}$ has feasible solutions; thus run the $n^{1-\varepsilon}$-approximation algorithm for $\MSD(\Gamma)$ on $\varphi'_{x}$.
If for every $x$ the answer is a pair $(m_1,m_2)$ with $m_2 = \cmpl{m_1}$, then reject, otherwise accept.

This procedure is a correct polynomial-time algorithm for $\AnotherSatNC(\Gamma)$. The polynomial runtime is clear; it remains to show correctness. If~$\varphi$ has only constant models, then the same is true for every~$\varphi'_{x}$, since~$\varphi$ contains a variable distinct from~$x$: reading off the value of a copy~$x^i$ in place of~$x$ would otherwise yield a non-constant model of~$\varphi$. Thus each approximation must result in a pair of complementary constant assignments, and the output is correct. Assume now that there is a model~$s$ of~$\varphi$ different from~$\vec{0}$ and~$\vec{1}$. Since~$s$ is not constant while~$m$ is, there exists a variable $x$ such that $s(x)=m(x)$. It follows that~$\varphi'_{x}$ has a model~$s'$ fulfilling $\OPT(\varphi'_{x})\leq \hd(s', m')<n$, where~$n$ is the number of variables of~$\varphi$. But then the approximation algorithm must find two distinct models~$m_1\neq m_2$ of~$\varphi'_{x}$ satisfying $\hd(m_1, m_2) < n \cdot (n^k)^{1-\varepsilon} = n^{k(1-\varepsilon)+1}$. Since we stipulated $k > 1/\varepsilon$, it follows that $\hd(m_1,m_2) < n^k$. Consequently, we have $m_2\neq \cmpl{m_1}$ and the output of our algorithm is again correct.
\end{proof}

\section{Concluding Remarks}
\label{sec:conclusion}

The problems investigated in this paper are quite natural. In the space of bit-vectors we search for a solution of a formula that is closest to a given point, or for a solution nearest to a given solution, or for two solutions witnessing the smallest Hamming distance between any two solutions. Our results describe the complexity of exploring the solution space for arbitrary families of Boolean relations. Moreover, our problems generalize problems familiar from the literature: $\OptMinOnes$, $\NCW$, and $\DistanceSAT$ are instances of our $\NearestSol$, while $\MinDist$ is the same as our problem $\MinSolDistance$ when restricting the latter to affine relations.

To prove the results, we first had to extend the notion of $\AP$-reduction. The optimization problems considered in the literature have the property that each instance has at least one feasible solution. This is not the case when looking for solutions nearest to a given solution or to a prescribed Boolean tuple, as a formula may have just a single solution or no solution at all. Therefore we had to refine the notion of $\AP$-reductions such that it correctly handles instances without feasible solutions.

The complexity of $\NearestSol$ can be classified by the usual approach: We first show that for each constraint language the complexity of the problem does not change when admitting existential quantifiers and equality, and then check all finitely related clones according to Post's lattice. This approach does not work for the problems $\NextSol$ and $\MinSolDistance$: It does not seem to be possible to show a priori that the complexity remains unaffected under such language extensions. In principle the complexity of a problem might well differ for two constraint languages $\Gamma_1$ and $\Gamma_2$ that generate the same co-clone ($\cc{\Gamma_1} = \cc{\Gamma_2}$) but that differ with respect to partial polymorphisms ($\ccc{\Gamma_1\cup\set{\eq}} \neq \ccc{\Gamma_2\cup\set{\eq}}$). Theorems~\ref{thm:NOSol} and~\ref{thm:MSD} finally show that this is not the case, but we learn this only a posteriori.
Our method of proof fundamentally relies on irredundant weak bases, which seem to be a perfect fit for such a situation: compatibility with existential quantification is not required a priori, but it follows once the proofs succeed using weak bases alone.

\begin{figure}
\caption{Comparing the complexities: The hard cases (colored blue and black) are the same, whereas the polynomial cases (green) increase from left to right.}
\label{fig:comparison}
\end{figure}

Figure~\ref{fig:comparison} compares the complexity classifications of the three problems. Regarding $\NearestSol$ and $\NextSol$, knowing that an assignment is a solution apparently helps in finding a solution nearby. For expressive constraint languages it is $\NP$-complete to decide whether a feasible solution exists at all; for $\NearestSol$ this requires the existence of at least one satisfying assignment, while the other two problems need even two.

Kann proved in~\cite{Kann-94} that $\OptMinOnes(\Gamma)$ is $\NPOPB$-complete for $\cc \Gamma = \BR$, where $\NPOPB$ is the class of $\NPO$ problems with a polynomially bounded objective function. This result implies that $\NearestSol(\Gamma)$ is $\NPOPB$-complete for $\cc \Gamma = \BR$ as well. It is unclear whether this result also holds for $\cc \Gamma = \iN_2$. It may be possible to find a suitable constraint language~$\Gamma'$ satisfying $\cc{\Gamma'}=\BR$ such that $\OptMinOnes(\Gamma')$ reduces to $\NextSol(\Gamma)$ for $\iI_0 \subseteq \cc \Gamma$ or $\iI_1 \subseteq \cc \Gamma$, thus proving that $\XSOL(\Gamma)$ is $\NPOPB$-complete for these cases. Likewise, the $\NPOPB$-hardness of $\MSD(\Gamma)$ for $\iN_2\subseteq\cc \Gamma$ or $\iI_0\subseteq\cc \Gamma$ or $\iI_1\subseteq\cc \Gamma$ remains open for the time being.

\end{document}
\begin{document}
\title{An Inexact Conditional Gradient Method for Constrained Bilevel Optimization}
\author{Nazanin Abolfazli\thanks{Department of Systems and Industrial Engineering, The University of Arizona, Tucson, AZ, USA \qquad\{[email protected], [email protected]\}} \quad Ruichen Jiang\thanks{Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX, USA \qquad\{[email protected], [email protected]\}} \quad Aryan Mokhtari$^\dagger$ \quad Erfan Yazdandoost Hamedani$^*$}
\maketitle

\begin{abstract}
Bilevel optimization is an important class of optimization problems where one optimization problem is nested within another. This framework is widely used in machine learning problems, including meta-learning, data hyper-cleaning, and matrix completion with denoising. In this paper, we focus on a bilevel optimization problem with a strongly convex lower-level problem and a smooth upper-level objective function over a compact and convex constraint set. Several methods have been developed for tackling unconstrained bilevel optimization problems, but there is limited work on methods for the constrained setting. In fact, for those methods that can handle constrained problems, either the convergence rate is slow or the computational cost per iteration is expensive. To address this issue, in this paper, we introduce a novel single-loop projection-free method using a nested approximation technique. Our proposed method has an improved per-iteration complexity, surpassing existing methods, and achieves optimal convergence rate guarantees matching the best-known complexity of projection-free algorithms for solving convex constrained single-level optimization problems. In particular, when the upper-level objective function is convex, our method requires $\tilde{\mathcal{O}}(\epsilon^{-1})$ iterations to find an $\epsilon$-optimal solution. Moreover, when the upper-level objective function is non-convex, the complexity of our method is $\mathcal{O}(\epsilon^{-2})$ to find an $\epsilon$-stationary point. We also present numerical experiments to showcase the superior performance of our method compared with state-of-the-art methods.
\end{abstract}

\newpage

\section{Introduction}\label{sec:intro}
Many learning and inference problems take a \textit{hierarchical} form, where one optimization problem is nested within another. Bilevel optimization is often used to model problems of this kind with two levels of hierarchy.
In this paper, we consider the bilevel optimization problem of the following form
\begin{equation}\label{eq:bilevel}
\min_{\mathbf{x} \in \mathcal{X}}~ \ell(\mathbf{x}) := f(\mathbf{x},\mathbf{y}^*(\mathbf{x}))\qquad \hbox{s.t.}\quad \mathbf{y}^*(\mathbf{x}) \in \argmin_{\mathbf{y}\in \mathbb{R}^{m}}~g(\mathbf{x},\mathbf{y}),
\end{equation}
where $n, m \geq 1$ are integers; $\mathcal{X}\subset \mathbb{R}^n$ is a compact and convex set, and $f : \mathcal{X} \times \mathbb{R}^{m} \to \mathbb{R}$ and $g : \mathcal{X} \times \mathbb{R}^{m} \to \mathbb{R}$ are continuously differentiable functions with respect to (w.r.t.) $\mathbf{x}$ and $\mathbf{y}$ on an open set containing $\mathcal{X}$ and $\mathbb{R}^m$, respectively. Problem~\ref{eq:bilevel} involves two optimization problems following a two-level structure. The outer objective $f(\mathbf{x},\mathbf{y}^*(\mathbf{x}))$ depends on $\mathbf{x}$ both directly and indirectly through $\mathbf{y}^*(\mathbf{x})$, which is a solution of the lower-level problem of minimizing another function $g$ parameterized by $\mathbf{x}$. Throughout the paper, we assume that $g(\mathbf{x}, \mathbf{y})$ is strongly convex in $\mathbf{y}$; hence, $\mathbf{y}^*(\mathbf{x})$ is uniquely well-defined for all $\mathbf{x} \in \mathcal{X}$. The application of \eqref{eq:bilevel} arises in a number of machine learning problems, such as meta-learning \cite{rajeswaran2019meta}, continual learning \cite{borsos2020coresets}, reinforcement learning \cite{konda1999actor}, hyper-parameter optimization \cite{franceschi2018bilevel,pedregosa2016hyperparameter}, and data hyper-cleaning \cite{shaban2019truncated}. Several methods have been proposed to solve the general form of the bilevel optimization problem in \eqref{eq:bilevel}. For instance, using the optimality conditions of the lower-level problem, the works in \cite{hansen1992new,shi2005extended,moore2010bilevel} transformed the bilevel problem into a single-level constrained problem.
However, such an approach includes two major challenges: $(i)$ the reduced problem will have too many constraints when the inner problem is large-scale; $(ii)$ unless the lower-level function $g$ has a specific structure, such as a quadratic form, the optimality condition of the lower-level problem introduces nonconvexity into the feasible set of the reduced problem. Recently, more efficient gradient-based bilevel optimization algorithms have been proposed, which can be broadly divided into the approximate implicit differentiation (AID) based approach \cite{pedregosa2016hyperparameter,gould2016differentiating,domke2012generic,liao2018reviving,ghadimi2018approximation,lorraine2020optimizing} and the iterative differentiation (ITD) based approach \cite{shaban2019truncated,maclaurin2015gradient,Franceschi_ICML18,grazzi2020iteration}. Nevertheless, with the exception of a few recent attempts, most of the existing studies have primarily focused on analyzing the asymptotic convergence, leaving room for the development of novel algorithms that come with guaranteed convergence rates. Moreover, in most prior studies the constraint set $\mathcal{X}$ is assumed to be $\mathcal{X} = \mathbb{R}^n$ to create a simpler unconstrained upper-level optimization problem. Nonetheless, in several applications, such as meta-learning \cite{franceschi2018bilevel}, personalized federated learning \cite{fallah2020personalized}, and coreset selection \cite{borsos2020coresets}, $\mathcal{X}$ is a strict subset of $\mathbb{R}^n$. Using projected gradient methods is a common approach when dealing with such constraint sets. Despite their widespread use, projected gradient techniques may not always be applicable. The limitations of projection-based techniques led to the development of projection-free algorithms such as Frank-Wolfe-based methods \cite{frank1956algorithm}. Instead of tackling a non-linear projection problem, as in the case of $\ell_1$-norm or nuclear-norm ball constraints, these Frank-Wolfe-based techniques only need to solve a linear minimization problem over $\mathcal{X}$, which has lower computational complexity.

\begin{table}[t!]\scriptsize
\renewcommand{\arraystretch}{1.0}
\centering
\caption{Summary of algorithms for solving general bilevel optimization with a strongly convex lower-level objective function. The abbreviations ``C'', ``NC'', ``PO'', and ``LMO'' stand for ``convex'', ``non-convex'', ``projection oracle'', and ``linear minimization oracle'', respectively, and $\kappa_g\triangleq L_g/\mu_g$. For the algorithms with $\mathcal{O}(m^2 \log(K))$ per-iteration dependence, the Hessian inverse can be computed via multiple rounds of matrix-vector products. $^*$ Note that these works focused on convergence rates in the stochastic setting, without addressing the rates in the deterministic setting. As the results did not explicitly provide rates for the deterministic setting, we present the results for the stochastic setting.
}
\label{tab:bilevel}
\resizebox{\textwidth}{!}{
\begin{tabular}{cccclclcc}
\toprule
& \multirow{2}{*}{\textbf{Reference}} & \multirow{2}{*}{\textbf{Oracle}} & \textbf{function $f$} & \multirow{2}{*}{\textbf{$\#$ loops}} & \multicolumn{2}{c}{\multirow{2}{*}{\textbf{Iteration complexity}}} & \multicolumn{2}{c}{\textbf{Convergence rate}} \\
\cline{4-4} \cline{8-9}
& & & NC/C & & \multicolumn{2}{c}{} & Upper level & Lower level \\ \hline
\multicolumn{1}{c|}{\multirow{5}{*}{\textbf{Unconstrained}}} & SUSTAIN \cite{khanduri2021near} & ----- & NC & single loop & \multicolumn{2}{c}{$\mathcal{O}( \kappa_g m^2 \log (K))$} & \multicolumn{2}{c}{$\mathcal{O}(\epsilon^{-1})$} \\ \cline{2-9}
\multicolumn{1}{c|}{} & FSLA \cite{li2022fully} & ----- & NC & single loop & \multicolumn{2}{c}{$\mathcal{O}(m^2)$} & \multicolumn{2}{c}{$\mathcal{O}(\epsilon^{-1})$} \\ \cline{2-9}
\multicolumn{1}{c|}{} & F$^{3}$SA \cite{kwon2023fully} & ----- & NC & single loop & \multicolumn{2}{c}{$\mathcal{O}(m)$} & \multicolumn{2}{c}{$\tilde{\mathcal{O}}(\epsilon^{-\frac{3}{2}})$} \\ \cline{2-9}
\multicolumn{1}{c|}{} & AID-BiO \cite{ji2020provably} & ----- & NC & double loop & \multicolumn{2}{c}{$\mathcal{O}(\sqrt{\kappa_g}m^2)$} & \multicolumn{2}{c}{$\mathcal{O}(\epsilon^{-1})$} \\ \hline
\multicolumn{1}{c|}{\multirow{8}{*}{\textbf{Constrained}}} & \multirow{2}{*}{ABA \cite{ghadimi2018approximation}} & \multirow{2}{*}{PO} & NC & \multirow{2}{*}{double loop} & \multicolumn{2}{c}{\multirow{2}{*}{$\mathcal{O}(m^3)$}} & $\mathcal{O}(\epsilon^{-1})$ & $\mathcal{O}(\epsilon^{-\frac{5}{4}})$ \\
\multicolumn{1}{c|}{} & & & C & \multicolumn{2}{c}{} & & $\mathcal{O}(\epsilon^{-\frac{1}{2}})$ & $\mathcal{O}(\epsilon^{-\frac{3}{4}})$ \\ \cline{2-9}
\multicolumn{1}{c|}{} &
\multirow{2}{*}{TTSA$^*$ \cite{hong2020two}} & \multirow{2}{*}{PO} & NC & \multirow{2}{*}{single loop} & \multicolumn{2}{c}{\multirow{2}{*}{$\mathcal{O}(\kappa_g m^2 \log (K))$}} & \multicolumn{2}{c}{$\mathcal{O}(\epsilon^{-\frac{5}{2}})$} \\
\multicolumn{1}{c|}{} & & & C & \multicolumn{2}{c}{} & & $\mathcal{O}(\epsilon^{-4})$ & $\mathcal{O}(\epsilon^{-2})$ \\ \cline{2-9}
\multicolumn{1}{c|}{} & \multirow{2}{*}{SBFW$^*$ \cite{akhtar2022projection}} & \multirow{2}{*}{LMO} & NC & \multirow{2}{*}{single loop} & \multicolumn{2}{c}{\multirow{2}{*}{$\mathcal{O}( \kappa_g m^2 \log (K))$}} & $\mathcal{O}(\epsilon^{-4})$ & $\mathcal{O}(\epsilon^{-\frac{5}{2}})$ \\
\multicolumn{1}{c|}{} & & & C & \multicolumn{2}{c}{} & & $\mathcal{O}(\epsilon^{-3})$ & $\mathcal{O}(\epsilon^{-\frac{3}{2}})$ \\ \cline{2-9}
\multicolumn{1}{c|}{} & \multirow{2}{*}{\textbf{Ours}} & \multirow{2}{*}{LMO} & NC & \multirow{2}{*}{single loop} & \multicolumn{2}{c}{\multirow{2}{*}{$\mathcal{O}(m^{2})$}} & \multicolumn{2}{c}{$\mathcal{O}(\epsilon^{-2})$} \\
\multicolumn{1}{c|}{} & & & C & \multicolumn{2}{c}{} & & \multicolumn{2}{c}{$\tilde{\mathcal{O}}(\epsilon^{-1})$} \\ \hline
\end{tabular}
}
\end{table}

In the context of bilevel optimization problems, there have been several studies considering constrained settings. However, most existing methods focus on projection-based algorithms, with limited exploration of projection-free alternatives. Moreover, these methods usually suffer from a slow convergence rate or involve a high computational cost per iteration. In particular, the fast convergence guarantees of methods such as \cite{ghadimi2018approximation} are achieved by utilizing the Hessian inverse of the lower-level function, which comes at a steep price: a worst-case computational cost of $\mathcal{O}(m^3)$, limiting its applicability. To resolve this issue, a Hessian inverse approximation technique was introduced in \cite{ghadimi2018approximation} and used in other studies such as \cite{hong2020two,akhtar2022projection}.
This approximation technique introduces a bias that vanishes as the number of inner steps (matrix-vector multiplications) increases, and its computational complexity scales with the lower-level problem's condition number. In particular, it has a per-iteration complexity of $\mathcal{O}(\kappa_g m^2\log(K))$, where $\kappa_g$ denotes the condition number of the function $g$. To overcome these issues, we develop a new inexact projection-free method that achieves optimal convergence rate guarantees for the considered settings while requiring only two matrix-vector products per iteration, leading to a complexity of $\mathcal{O}(m^2)$ per iteration. Next, we state our contributions.

\textbf{Contributions}. In this paper, we consider a class of bilevel optimization problems with a strongly convex lower-level problem and a smooth upper-level objective function over a compact and convex constraint set. This extends the literature, which has primarily focused on unconstrained optimization problems. This paper introduces a novel single-loop projection-free method that overcomes the limitations of existing approaches by offering improved per-iteration complexity and convergence guarantees. Our main idea is to simultaneously keep track of the trajectory of the lower-level optimal solution as well as that of a parametric quadratic program containing Hessian inverse information, using a nested approximation. These estimates are computed via a single gradient-type step each and are used to estimate the hyper-gradient for a Frank-Wolfe-type update. Notably, using this approach our proposed method requires only two matrix-vector products at each iteration, which presents a substantial improvement compared to the existing methods. Moreover, our theoretical guarantees for the proposed Inexact Bilevel Conditional Gradient method, denoted by IBCG, are as follows:
\begin{itemize}
\item When the upper-level function $f$ is convex, we show that our IBCG method requires $\tilde{\mathcal{O}}(\epsilon^{-1})$ iterations to find an $\epsilon$-optimal solution.
\item When $f$ is non-convex, IBCG requires $\mathcal{O}(\epsilon^{-2})$ iterations to find an $\epsilon$-stationary point.
\end{itemize}
These results match the best-known complexity of projection-free algorithms for solving convex constrained single-level optimization problems.

\textbf{Related work.} In this section, we review some of the related works in the context of general bilevel optimization in the deterministic/stochastic setting and projection-free algorithms.\\
The concept of bilevel optimization was initially introduced by Bracken and McGill in 1973 \cite{bracken1973mathematical}. Since then, researchers have proposed several algorithms for solving bilevel optimization problems, with various approaches including but not limited to constraint-based methods \cite{shi2005extended,moore2010bilevel} and gradient-based methods. In recent years, gradient-based methods for dealing with problem~\eqref{eq:bilevel} have become increasingly popular, including implicit differentiation~\citep{domke2012generic,pedregosa2016hyperparameter,gould2016differentiating,ghadimi2018approximation,ji2021bilevel} and iterative differentiation~\citep{Franceschi_ICML18,maclaurin2015gradient}.
There are several single- and double-loop methods developed for tackling unconstrained bilevel optimization problems in the literature \cite{chen2022single,li2022fully,kwon2023fully,khanduri2021near,ji2020provably}; however, the number of works focusing on constrained bilevel optimization problems is limited. On this basis, the seminal work in \cite{ghadimi2018approximation} presented an Accelerated Bilevel Approximation ({ABA}) method which consists of two iterative loops, and it is shown that when the upper-level function is non-convex, ABA obtains convergence rates of $\mathcal{O}(\epsilon^{-1})$ and $\mathcal{O}(\epsilon^{-\frac{5}{4}})$ in terms of the upper-level and lower-level objective values, respectively. Although their convergence guarantees in both convex and non-convex settings are superior to other similar works, their computational cost is expensive, as they need to compute the Hessian inverse matrix at each iteration. Building upon the work of \cite{ghadimi2018approximation}, the authors in \cite{hong2020two} proposed a Two-Timescale Stochastic Approximation (TTSA) algorithm which is shown to achieve a complexity of $\mathcal{O}(\epsilon^{-\frac{5}{2}})$ when the upper-level function is non-convex. Results for the convex case are also included in Table~\ref{tab:bilevel}. It should be noted that these methods require a projection onto the set $\mathcal{X}$ at every iteration. In contrast, as our proposed method is projection-free, it requires access to a linear minimization solver instead of a projection at each iteration, which is suitable for settings where the projection is computationally costly, e.g., when $\mathcal{X}$ is a nuclear-norm ball. In \cite{akhtar2022projection}, which is closely related to our work, the authors developed a projection-free stochastic optimization algorithm ({SBFW}) for bilevel optimization problems, and it is shown that their method achieves complexities of $\mathcal{O}(\epsilon^{-4})$ and $\mathcal{O}(\epsilon^{-\frac{5}{2}})$ with respect to the upper- and lower-level problems, respectively. We note that most of these works studied bilevel optimization problems in the stochastic setting and did not provide any results for the deterministic setting, motivating us to focus on this category of bilevel problems to improve the convergence and computational complexity over existing methods. Some concurrent papers consider the case where the lower-level problem can have multiple minima \cite{liu2020generic,sow2022constrained,chen2023bilevel}. As they consider a more general setting that brings more challenges, their theoretical results are also weaker, providing only asymptotic convergence guarantees or slower convergence rates.

\section{Preliminaries}\label{sec:pre}

\subsection{Motivating examples}\label{subsec:examples}
The applications of the bilevel optimization formulation in \eqref{eq:bilevel} include many machine learning problems such as matrix completion \cite{yokota2017simultaneous}, meta-learning \cite{rajeswaran2019meta}, data hyper-cleaning \cite{shaban2019truncated}, hyper-parameter optimization \cite{franceschi2018bilevel}, etc. Next, we describe two examples in detail.
\textbf{Matrix Completion with Denoising}: Consider the matrix completion problem where the goal is to recover missing entries from imperfect and noisy observations of a random subset of the matrix's entries. Generally, with noiseless data, the data matrix can be represented as a low-rank matrix, which justifies the use of the nuclear norm constraint. In numerous applications, such as image processing and collaborative filtering, the observations are noisy, and solving the matrix completion problem using only the nuclear norm restriction can result in sub-optimal performance \cite{mcrae2021low,yokota2017simultaneous}. One approach to include the denoising step in the matrix completion formulation is to formulate the problem as a bilevel optimization problem \cite{akhtar2022projection}. This problem can be expressed as follows:
\begin{align}\label{ex:Matrix_comp}
&\min_{\|\mathbf{X} \|_{*}\leq \alpha}~\frac{1}{|\Omega_1|}\sum_{(i,j) \in \Omega_{1}}(\mathbf{X}_{i,j} - \mathbf{Y}_{i,j})^2 \nonumber\\
&\;\,\hbox{s.t.}\quad \mathbf{Y}\in\argmin_{\mathbf{V}}~\Big\{ \frac{1}{|\Omega_2|}\sum_{(i,j) \in \Omega_{2}}(\mathbf{V}_{i,j} - \mathbf{M}_{i,j})^2 + \lambda_1 \mathcal{R}(\mathbf{V}) +\lambda_2\|\mathbf{X} - \mathbf{V}\|_F^2 \Big\},
\end{align}
where $\mathbf{M} \in \mathbb{R}^{n\times m}$ is the given incomplete noisy matrix, $\Omega_1$ and $\Omega_2$ are sets of observed entries, $\mathcal{R}(\mathbf{V})$ is a regularization term to induce sparsity, e.g., the $\ell_1$-norm or pseudo-Huber loss, and $\lambda_1$ and $\lambda_2$ are regularization parameters. The presence of the nuclear norm constraint poses a significant challenge in \eqref{ex:Matrix_comp}. This constraint renders the problem computationally demanding, often making projection-based algorithms impractical. Consequently, there is a compelling need to develop and employ projection-free methods to overcome these computational limitations.

\textbf{Model-Agnostic Meta-Learning}: In meta-learning, our goal is to develop models that can effectively adapt to multiple training sets in order to optimize performance for individual tasks. One widely used formulation in this context is known as model-agnostic meta-learning (MAML) \cite{finn2017model}. MAML aims to minimize the empirical risk across all training sets through an outer objective while employing one step of (implicit) projected gradient as the inner objective \cite{rajeswaran2019meta}. This framework allows for effective adaptation of the model across different tasks, enabling enhanced performance and flexibility. Let $\{\mathcal{D}_i^{tr}\}_{i=1}^N$ and $\{\mathcal{D}_i^{test}\}_{i=1}^N$ be collections of training and test datasets for $N$ tasks, respectively.
Implicit MAML can be formulated as a bilevel optimization problem \cite{rajeswaran2019meta}:
\begin{equation}\label{eq:MAML}
\min_{\boldsymbol{\theta} \in \Theta} \sum_{i=1}^N \ell \left(\mathbf{y}_i^*(\boldsymbol{\theta}),\mathcal{D}_i^{test}\right)\quad \hbox{s.t.} \quad \mathbf{y}^*_i(\boldsymbol{\theta}) \in \argmin_{\boldsymbol{\phi}} \Big\{\ell(\boldsymbol{\phi},\mathcal{D}_i^{tr}) + \frac{\lambda}{2}\|\boldsymbol{\phi} -\boldsymbol{\theta}\|^2\Big\}, \quad i=1,\dots,N.
\end{equation}
Here $\boldsymbol{\theta}$ is the shared model parameter, $\boldsymbol{\phi}$ is the adaptation of $\boldsymbol{\theta}$ to the $i$th training set, and $\ell(\cdot)$ is the loss function. The set $\Theta$ imposes constraints on the model parameter, e.g., $\Theta=\{\boldsymbol{\theta}\mid \|\boldsymbol{\theta}\|_1\leq r\}$ for some $r> 0$ to induce sparsity. It can be verified that for a sufficiently large value of $\lambda$ the lower-level problem is strongly convex, and problem \eqref{eq:MAML} can be viewed as a special case of \eqref{eq:bilevel}.

\subsection{Assumptions and Definitions}
In this subsection, we discuss the definitions and assumptions required throughout the paper. We begin by discussing the assumptions on the upper-level and lower-level objective functions, respectively.
\begin{assumption}\label{assum:upper}
$\nabla_x f(\mathbf{x},\mathbf{y})$ and $\nabla_y f(\mathbf{x},\mathbf{y})$ are Lipschitz continuous w.r.t.\ $(\mathbf{x},\mathbf{y}) \in \mathcal{X}\times \mathbb{R}^{m}$ such that for any $\mathbf{x},\bar{\mathbf{x}}\in\mathcal{X}$ and $\mathbf{y},\bar{\mathbf{y}}\in\mathbb{R}^m$
\begin{enumerate}
\item $\|\nabla_x f( \mathbf{x}, \mathbf{y}) - \nabla_x f(\bar{\mathbf{x}}, \bar{\mathbf{y}}) \| \leq L_{xx}^f\| \mathbf{x} - \bar{\mathbf{x}}\| + L_{xy}^f\|\mathbf{y}-\bar{\mathbf{y}}\|$,
\item $\|\nabla_y f( \mathbf{x}, \mathbf{y}) - \nabla_y f(\bar{\mathbf{x}}, \bar{\mathbf{y}}) \| \leq L_{yx}^f\| \mathbf{x} - \bar{\mathbf{x}}\| + L_{yy}^f\|\mathbf{y}-\bar{\mathbf{y}}\|$.
\end{enumerate}
\end{assumption}

\begin{assumption}\label{assum:lower}
$g(\mathbf{x},\mathbf{y})$ satisfies the following conditions:
\begin{enumerate}[(i)]
\item For any given $\mathbf{x} \in \mathcal{X}$, $g(\mathbf{x},\cdot)$ is twice continuously differentiable. Moreover, $\nabla_y g(\cdot,\cdot)$ is continuously differentiable.
\item For any $\mathbf{x}\in \mathcal{X}$, $\nabla_y g(\mathbf{x},\cdot)$ is Lipschitz continuous with constant $L_g \geq 0$. Moreover, for any $\mathbf{y}\in \mathbb{R}^m$, $\nabla_y g(\cdot,\mathbf{y})$ is Lipschitz continuous with constant $C^g_{yx} \geq 0$.
\item For any $\mathbf{x} \in \mathcal{X}$, $g(\mathbf{x},\cdot)$ is strongly convex with modulus $\mu_g > 0$.
\item For any given $\mathbf{x} \in \mathcal{X}$, $\nabla_{yx} g(\mathbf{x},\mathbf{y})\in\mathbb{R}^{n\times m}$ and $\nabla_{yy} g(\mathbf{x},\mathbf{y})$ are Lipschitz continuous w.r.t.\ $(\mathbf{x},\mathbf{y}) \in \mathcal{X} \times \mathbb{R}^{m}$ with constants $L_{yx}^{g} \geq 0$ and $L_{yy}^{g} \geq 0$, respectively.
\end{enumerate}
\end{assumption}

\begin{remark}\label{re:bounded-norm}
Considering Assumption~\ref{assum:lower}-(ii), we can conclude that $\| \nabla_{yx} g(\mathbf{x},\mathbf{y})\|$ is bounded by some constant $C_{yx}^{g}\geq 0$ for any $(\mathbf{x},\mathbf{y}) \in \mathcal{X}\times \mathbb{R}^{m}$.
\end{remark}

To measure the quality of the solution at each iteration, we use the standard Frank-Wolfe gap function associated with the single-level variant of problem \ref{eq:bilevel}, formally stated in the next definition.

\begin{definition}[Convergence Criteria]\label{def:FW_gap}
When the upper-level function $f(\mathbf{x}, \mathbf{y})$ is non-convex, the Frank-Wolfe gap is defined as
\begin{equation}\label{eq:FW_gap}
\mathcal{G}(\mathbf{x})\triangleq \max_{\mathbf{s}\in \mathcal{X}}\{\langle \nabla \ell(\mathbf{x}),\mathbf{x}-\mathbf{s}\rangle \},
\end{equation}
which is a standard performance metric for constrained non-convex settings. Moreover, in the convex setting, we use the suboptimality gap function, i.e., $\ell(\mathbf{x})-\ell(\mathbf{x}^*)$.
\end{definition}

Before proposing our method, we state some important properties related to problem \ref{eq:bilevel} based on the assumptions above:\\
(I) A standard analysis reveals that, given Assumption \ref{assum:lower}, the optimal solution trajectory of the lower-level problem, i.e., $\mathbf{y}^*(\mathbf{x})$, is Lipschitz continuous \cite{ghadimi2018approximation}. \\
(II) One of the required properties to develop a method with a convergence guarantee is the Lipschitz continuity of the gradient of the single-level objective function. In the bilevel optimization literature, to show this result it is often required to assume boundedness of $\nabla_y f(\mathbf{x},\mathbf{y})$ for any $\mathbf{x}\in \mathcal{X}$ and $\mathbf{y}\in\mathbb{R}^m$; see, e.g., \cite{ghadimi2018approximation,ji2020provably,hong2020two,akhtar2022projection}. In contrast, in this paper, we show that this condition is only required for the gradient map $\nabla_y f(\mathbf{x},\mathbf{y})$ when restricted to the optimal trajectory of the lower-level problem. In particular, we demonstrate that it is sufficient to show the boundedness of $\nabla_y f(\mathbf{x},\mathbf{y}^*(\mathbf{x}))$ for any $\mathbf{x}\in \mathcal{X}$, which can be proved using the boundedness of the constraint set $\mathcal{X}$. \\
(III) Using the above results we can show that the gradient of the single-level objective function, i.e., $\nabla\ell(\mathbf{x})$, is Lipschitz continuous. This result is one of the main building blocks of the convergence analysis of our proposed method in the next section. The aforementioned results are formally stated in the following lemma.

\begin{lemma}\label{lem:v_b}
Suppose Assumptions \ref{assum:upper} and \ref{assum:lower} hold.
Then for any $\mathbf{x},\bar{\mathbf{x}} \in \mathcal{X}$, the following results hold.\\
(I) There exists $\mathbf{L}_{\mathbf{y}}\geq 0$ such that $\|\mathbf{y}^*(\mathbf{x}) - \mathbf{y}^*(\bar{\mathbf{x}})\| \leq \mathbf{L}_{\mathbf{y}}\| \mathbf{x} - \bar{\mathbf{x}}\|$.\\
(II) There exists $C_y^f\geq 0$ such that $\|\nabla_y f(\mathbf{x},\mathbf{y}^*(\mathbf{x}))\|\leq C_y^f$. \\
(III) There exists $\mathbf{L}_{\ell}\geq 0$ such that $\|\nabla \ell(\mathbf{x}) - \nabla \ell(\bar{\mathbf{x}})\| \leq \mathbf{L}_{\ell}\|\mathbf{x} -\bar{\mathbf{x}}\|$.
\end{lemma}

\section{Proposed Method}\label{sec:proposed-method}
As we discussed in Section~\ref{sec:intro}, problem \ref{eq:bilevel} can be viewed as a single minimization problem $\min_{\mathbf{x}\in\mathcal{X}}\ell(\mathbf{x})$; however, solving such a problem is a challenging task due to the need to compute the lower-level problem's exact solution, which is required for evaluating the objective function and/or its gradient.
In particular, by utilizing Assumptions \ref{assum:upper} and \ref{assum:lower}, it has been shown in \cite{ghadimi2018approximation} that the gradient of the function $\ell(\cdot)$ can be expressed as
\begin{align}\label{eq:v_def}
\nabla \ell(\mathbf{x}) &= \nabla_{x}f(\mathbf{x},\mathbf{y}^*(\mathbf{x})) - \nabla_{yx}g(\mathbf{x},\mathbf{y}^*(\mathbf{x})) [\nabla_{yy}g(\mathbf{x},\mathbf{y}^*(\mathbf{x}))]^{-1} \nabla_{y}f(\mathbf{x},\mathbf{y}^*(\mathbf{x})).
\end{align}
To implement an iterative method for solving this problem using first-order information, at each iteration $k\geq 0$, one can replace $\mathbf{y}^*(\mathbf{x}_k)$ with an estimated solution $\mathbf{y}_k$ to track the optimal trajectory of the lower-level problem. Such an estimate can be obtained by taking a single gradient descent step with respect to the lower-level objective function.
Therefore, the inexact Frank-Wolfe method for the bilevel optimization problem \ref{eq:bilevel} takes the following main steps:
\begin{subequations}\label{alg:FW-direct}
\begin{align}
&G_k\gets \nabla_{x}f(\mathbf{x}_k,\mathbf{y}_k) - \nabla_{yx}g(\mathbf{x}_k,\mathbf{y}_k) [\nabla_{yy}g(\mathbf{x}_k,\mathbf{y}_k)]^{-1} \nabla_{y}f(\mathbf{x}_k,\mathbf{y}_k) \nonumber \\
&\mathbf{s}_k\gets \argmin_{\mathbf{s}\in \mathcal{X}}~\langle{G_k,\mathbf{s}}\rangle \label{eq:lmo-Gk} \\
&\mathbf{x}_{k+1}\gets (1-\gamma_k)\mathbf{x}_k+\gamma_k\mathbf{s}_k\\
&\mathbf{y}_{k+1}\gets \mathbf{y}_k-\alpha_k\nabla_y g(\mathbf{x}_{k+1},\mathbf{y}_k).
\end{align}
\end{subequations}
Calculating $G_k$ involves a Hessian matrix inversion, which is computationally costly and requires $\mathcal{O}(m^3)$ operations. To avoid this cost, one can reformulate the linear minimization subproblem \eqref{eq:lmo-Gk} as
\begin{align*}
\mathbf{s}_k\gets &\argmin_{\mathbf{s}\in\mathcal{X},\mathbf{d}\in \mathbb{R}^m}~\langle \nabla_x f(\mathbf{x}_k,\mathbf{y}_k),\mathbf{s} \rangle + \langle \nabla_y f(\mathbf{x}_k,\mathbf{y}_k),\mathbf{d} \rangle \\
&\hbox{s.t.}\quad \nabla_{yx}g(\mathbf{x}_k,\mathbf{y}_k)^\top\mathbf{s}+\nabla_{yy}g(\mathbf{x}_k,\mathbf{y}_k)\mathbf{d}=0.
\end{align*}
When the constraint set $\mathcal{X}$ is a polyhedron, the reformulated subproblem remains a linear program (LP), with $m$ additional constraints and $m$ additional variables.
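For instance, when $\mathcal{X}=\{\mathbf{s}: A_{\mathcal{X}}\mathbf{s}\leq b_{\mathcal{X}}\}$ admits an explicit (hypothetical) polyhedral description, the augmented subproblem can be handed to an off-the-shelf LP solver; the sketch below uses SciPy's \texttt{linprog} and is meant only to illustrate the size of the reformulation, not as part of our method.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def lmo_reformulated(gfx, gfy, J_yx, H_yy, A_X, b_X):
    """min <gfx,s> + <gfy,d>  s.t.  J_yx^T s + H_yy d = 0,  A_X s <= b_X.
    gfx: grad_x f (n,), gfy: grad_y f (m,), J_yx: grad_yx g (n,m),
    H_yy: grad_yy g (m,m); returns the s-part of the minimizer."""
    n, m = gfx.size, gfy.size
    c = np.concatenate([gfx, gfy])                        # objective over z = [s; d]
    A_eq = np.hstack([J_yx.T, H_yy])                      # m added equality constraints
    A_ub = np.hstack([A_X, np.zeros((A_X.shape[0], m))])  # X constrains s only
    res = linprog(c, A_ub=A_ub, b_ub=b_X, A_eq=A_eq, b_eq=np.zeros(m),
                  bounds=[(None, None)] * (n + m), method="highs")
    return res.x[:n]
\end{verbatim}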
The resulting LP can be solved using existing algorithms such as interior-point (IP) methods \cite{karmarkar1984new}. However, there are two primary concerns with this approach. First, if the LMO over $\mathcal{X}$ admits a closed-form solution, the reformulated subproblem may not preserve this structure. Second, when $n=\Omega(m)$, IP methods require at most $\mathcal{O}(m^\omega\log(m/\delta))$ steps at each iteration, where $\delta>0$ is the desired accuracy and $\mathcal{O}(m^\omega)$ is the time required to multiply two $m\times m$ matrices \cite{cohen2021solving,van2020deterministic}. Hence, the computational cost would be prohibitive for many practical settings, highlighting the pressing need for a more efficient algorithm.
\subsection{Main Algorithm}
\begin{algorithm}[t!]
\caption{Inexact Bilevel Conditional Gradient (IBCG) Method}\label{alg:In-BiCoG}
\begin{algorithmic}[1]
\STATE \textbf{Input}: $\{\gamma_k, \eta_k\}_k\subseteq\mathbb{R}_+$, $\alpha>0$, $\mathbf{x}_0\in \mathcal{X}$, $\mathbf{y}_0 \in \mathbb{R}^{m}$
\STATE \textbf{Initialization}: $\mathbf{w}_0\gets \mathbf{y}_0$
\FOR{$k = 0,\dots,K-1$}
\STATE $\mathbf{w}_{k+1} \gets (I - \eta_k \nabla_{yy}g(\mathbf{x}_k, \mathbf{y}_k) ) \mathbf{w}_{k} + \eta_k \nabla_{y}f(\mathbf{x}_k, \mathbf{y}_k)$
\STATE $F_k\gets \nabla_{x}f(\mathbf{x}_k, \mathbf{y}_k)- \nabla_{yx}g(\mathbf{x}_k, \mathbf{y}_k)\mathbf{w}_{k+1}$
\STATE Compute $\mathbf{s}_k \gets \argmin_{\mathbf{s} \in \mathcal{X}}~\langle{F_k,\mathbf{s}}\rangle$
\STATE $\mathbf{x}_{k+1} \gets (1-\gamma_k)\mathbf{x}_k+\gamma_k \mathbf{s}_k$
\STATE $\mathbf{y}_{k+1}\gets\mathbf{y}_k-\alpha\nabla_y g(\mathbf{x}_{k+1},\mathbf{y}_k)$
\ENDFOR
\end{algorithmic}
\end{algorithm}
As discussed above, a naive implementation of the FW framework for solving \eqref{eq:bilevel} has major limitations that make the method in \eqref{alg:FW-direct} impractical. To propose a practical conditional gradient-based method, we revisit the problem's structure. In particular, the gradient of the single-level problem in \eqref{eq:v_def} can be rewritten as follows:
\begin{subequations}
\begin{align}
&\nabla\ell(\mathbf{x})=\nabla_{x}f(\mathbf{x},\mathbf{y}^*(\mathbf{x})) - \nabla_{yx}g(\mathbf{x},\mathbf{y}^*(\mathbf{x})) \mathbf{v}(\mathbf{x}),\label{eq:ell-v}\\
& \text{where} \quad \mathbf{v}(\mathbf{x}) \triangleq [\nabla_{yy}g(\mathbf{x},\mathbf{y}^*(\mathbf{x}))]^{-1} \nabla_{y}f(\mathbf{x},\mathbf{y}^*(\mathbf{x})).
\end{align}
\end{subequations}
In this formulation, the effect of the Hessian inversion is isolated in the separate term $\mathbf{v}(\mathbf{x})$, which can be viewed as the solution of the following \emph{parametric} quadratic program:
\begin{equation}\label{eq:obj-v}
\mathbf{v}(\mathbf{x}) = \argmin_{\mathbf{v}}\ \frac{1}{2}\mathbf{v}^\top \nabla_{yy}g(\mathbf{x},\mathbf{y}^*(\mathbf{x}))\mathbf{v} - \nabla_{y}f(\mathbf{x},\mathbf{y}^*(\mathbf{x}))^\top \mathbf{v}.
\end{equation}
Our main idea is to construct \emph{nested} approximations of the true gradient in \eqref{eq:v_def} by estimating the trajectories of $\mathbf{y}^*(\mathbf{x})$ and $\mathbf{v}(\mathbf{x})$.
To ensure convergence, we carefully control the algorithm's progress in terms of the variable $\mathbf{x}$ and limit the error introduced by these approximations. More specifically, at each iteration $k\geq 0$, given an iterate $\mathbf{x}_k$ and an approximate solution $\mathbf{y}_k$ of the lower-level problem, we first consider an approximate solution $\tilde{\mathbf{v}}(\mathbf{x}_k)$ of \eqref{eq:obj-v} obtained by replacing $\mathbf{y}^*(\mathbf{x}_k)$ with its currently available approximation $\mathbf{y}_k$, which leads to the following quadratic program:
\begin{equation*}
\tilde{\mathbf{v}}(\mathbf{x}_k) \triangleq \argmin_{\mathbf{v}}\ \frac{1}{2}\mathbf{v}^\top \nabla_{yy}g(\mathbf{x}_k,\mathbf{y}_k)\mathbf{v} - \nabla_{y}f(\mathbf{x}_k,\mathbf{y}_k)^\top \mathbf{v}.
\end{equation*}
Then $\tilde{\mathbf{v}}(\mathbf{x}_k)$ is approximated by an iterate $\mathbf{w}_{k+1}$ obtained by taking one gradient descent step on this objective, i.e.,
\begin{equation*}
\mathbf{w}_{k+1}\gets \mathbf{w}_k-\eta_k\left(\nabla_{yy}g(\mathbf{x}_k,\mathbf{y}_k)\mathbf{w}_k-\nabla_y f(\mathbf{x}_k,\mathbf{y}_k)\right),
\end{equation*}
for some step-size $\eta_k\geq 0$. This generates an increasingly accurate sequence $\{\mathbf{w}_k\}_{k\geq 0}$ that tracks the sequence $\{\mathbf{v}(\mathbf{x}_k)\}_{k\geq 0}$.
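As a minimal numerical illustration of this inner update (with $\mathbf{x}_k$ and $\mathbf{y}_k$ frozen, a simplification of the actual algorithm in which they change every iteration), the sketch below repeats the step on a synthetic strongly convex quadratic and checks that $\mathbf{w}$ approaches $\mathbf{v}=H^{-1}b$; the matrix $H$ and vector $b$ are stand-ins for $\nabla_{yy}g(\mathbf{x}_k,\mathbf{y}_k)$ and $\nabla_y f(\mathbf{x}_k,\mathbf{y}_k)$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
m = 5
A = rng.standard_normal((m, m))
H = A @ A.T + np.eye(m)         # stand-in for grad_yy g: symmetric positive definite
b = rng.standard_normal(m)      # stand-in for grad_y f
eta = 1.0 / np.linalg.norm(H, 2)

w = np.zeros(m)
for _ in range(200):
    w = w - eta * (H @ w - b)   # gradient of 0.5*w@H@w - b@w is H@w - b

print(np.linalg.norm(w - np.linalg.solve(H, b)))  # small: w tracks v = H^{-1} b
\end{verbatim}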
Next, given the approximate solutions $\mathbf{y}_k$ and $\mathbf{w}_{k+1}$ for $\mathbf{y}^*(\mathbf{x}_k)$ and $\mathbf{v}(\mathbf{x}_k)$, respectively, we can construct a direction that estimates the hyper-gradient $\nabla\ell(\mathbf{x}_k)$ in \eqref{eq:ell-v}. To this end, we form the direction $F_k=\nabla_{x}f(\mathbf{x}_k, \mathbf{y}_k)- \nabla_{yx}g(\mathbf{x}_k, \mathbf{y}_k)\mathbf{w}_{k+1}$, which determines the next iterate $\mathbf{x}_{k+1}$ through a Frank-Wolfe type update, i.e.,
\begin{align*}
&\mathbf{s}_k\gets \argmin_{\mathbf{s}\in\mathcal{X}}\langle{F_k,\mathbf{s}}\rangle,\quad \mathbf{x}_{k+1}\gets (1-\gamma_k)\mathbf{x}_k+\gamma_k\mathbf{s}_k,
\end{align*}
for some step-size $\gamma_k\in [0,1]$. Finally, having the updated decision variable $\mathbf{x}_{k+1}$, we estimate the lower-level optimal solution $\mathbf{y}^*(\mathbf{x}_{k+1})$ by performing another gradient descent step with respect to the lower-level function $g(\mathbf{x}_{k+1},\mathbf{y}_k)$ with step-size $\alpha>0$, generating the new iterate
\begin{equation*}
\mathbf{y}_{k+1}\gets \mathbf{y}_k-\alpha\nabla_y g(\mathbf{x}_{k+1},\mathbf{y}_k).
\end{equation*}
Our proposed inexact bilevel conditional gradient (IBCG) method is summarized in Algorithm~\ref{alg:In-BiCoG}. To ensure that IBCG has a guaranteed convergence rate, we introduce the following lemma, which quantifies the deviation of the approximated direction $F_k$ from the true gradient $\nabla \ell(\mathbf{x}_k)$ at each iteration.
This involves providing upper bounds on the errors induced by our nested approximation technique discussed above, namely $\|\mathbf{w}_{k+1}-\mathbf{v}(\mathbf{x}_k)\|$ and $\|\mathbf{y}_{k+1}-\mathbf{y}^*(\mathbf{x}_{k+1})\|$, together with Lemma \ref{lem:v_b}.
\begin{lemma}\label{lem:ell-es}
Suppose Assumptions~\ref{assum:upper}-\ref{assum:lower} hold and let $\beta\triangleq (L_g-\mu_g)/(L_g+\mu_g)$ and $\mathbf{C}_{\mathbf{v}}\triangleq \frac{L_{yx}^{f} + L_{yy}^{f}\mathbf{L}_{\mathbf{y}}}{\mu_g} + \frac{C_{y}^{f}L_{yy}^{g}}{\mu_g^2}(1+\mathbf{L}_{\mathbf{y}})$. Moreover, let $\{\mathbf{x}_k,\mathbf{y}_k,\mathbf{w}_k\}_{k\geq 0}$ be the sequence generated by Algorithm \ref{alg:In-BiCoG} with step-sizes $\gamma_k=\gamma>0$, $\eta_k=\eta< \frac{1-\beta}{\mu_g}$, and $\alpha=2/(\mu_g+L_g)$.
Then, for any $k\geq 0$,
\begin{align}
\| \nabla \ell (\mathbf{x}_k) - F_k \| &\leq \mathbf{C}_2 \big( \beta^k D_0^y + \frac{\gamma \beta \mathbf{L}_{\mathbf{y}}}{1-\beta}D_{\mathcal{X}} \big)+ C_{yx}^{g} \Big(\rho^{k+1} \|\mathbf{w}_0 - \mathbf{v}(\mathbf{x}_0) \| + \frac{\gamma \rho \mathbf{C}_{\mathbf{v}}}{1-\rho} D_{\mathcal{X}} \nonumber \\
&\quad + \frac{\eta \mathbf{C}_1 }{\rho - \beta}\rho^{k+2}D_0^y +\frac{\gamma \beta \mathbf{C}_1 \mathbf{L}_{\mathbf{y}}}{(1-\rho)\mu_g}D_{\mathcal{X}}\Big)
\end{align}
where $\rho\triangleq 1-\eta\mu_g$, $\mathbf{C}_1 \triangleq L_{yy}^{g} \frac{ C_{y}^{f}}{\mu_g} + L_{yy}^{f}$, $\mathbf{C}_2 \triangleq L_{xy}^{f} + L_{yx}^{g} \frac{C_{y}^{f}}{\mu_g}$, and $D_0^y\triangleq \|\mathbf{y}_0-\mathbf{y}^*(\mathbf{x}_0)\|$.
\end{lemma}
\begin{proof}
The proof is relegated to section \ref{proof:lemma-grad-est} of the Appendix.
\end{proof}
Lemma \ref{lem:ell-es} provides an upper bound on the error of the approximated gradient direction $F_k$. This bound comprises two types of terms: those that decrease linearly and those that are controlled by the parameter $\gamma$. Selecting the parameter $\gamma$ is a crucial task, as larger values can introduce significant errors in the direction taken by the algorithm, while smaller values can impede proper progress of the iterates. Therefore, it is essential to choose $\gamma$ appropriately based on the overall progress of the algorithm. By utilizing Lemma \ref{lem:ell-es}, we establish a bound on the gap function and ensure a convergence rate guarantee by selecting an appropriate $\gamma$.
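Before turning to the analysis, we include a compact NumPy sketch of the main loop of Algorithm~\ref{alg:In-BiCoG} for the special case in which $\mathcal{X}$ is the standard simplex, so that the LMO in line 6 has a closed-form (vertex) solution. The derivative handles \texttt{grad\_f\_x}, \texttt{grad\_f\_y}, \texttt{jac\_g\_yx}, \texttt{hess\_g\_yy}, and \texttt{grad\_g\_y}, as well as the particular step-size values, are illustrative assumptions rather than prescriptions of the method.
\begin{verbatim}
import numpy as np

def lmo_simplex(F):
    """argmin_{s in simplex} <F, s> is the vertex at the smallest coordinate of F."""
    s = np.zeros_like(F)
    s[np.argmin(F)] = 1.0
    return s

def ibcg(x0, y0, K, alpha, eta, grad_f_x, grad_f_y, jac_g_yx, hess_g_yy, grad_g_y):
    x, y, w = x0.copy(), y0.copy(), y0.copy()
    gamma = np.log(K) / K   # convex-case choice used later in the analysis (illustrative)
    for k in range(K):
        w = w - eta * (hess_g_yy(x, y) @ w - grad_f_y(x, y))  # line 4: track v(x_k)
        F = grad_f_x(x, y) - jac_g_yx(x, y) @ w               # line 5: inexact hyper-gradient
        s = lmo_simplex(F)                                    # line 6: linear minimization
        x = (1.0 - gamma) * x + gamma * s                     # line 7: Frank-Wolfe update
        y = y - alpha * grad_g_y(x, y)                        # line 8: track y*(x_{k+1})
    return x, y
\end{verbatim}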
\section{Convergence analysis}
In this section, we analyze the iteration complexity of our IBCG method. We first consider the case where the objective function of the single-level problem $\ell(\cdot)$ is convex.
\begin{theorem}[Convex bilevel]\label{thm:convex-upper-bound}
Suppose that Assumptions~\ref{assum:upper} and \ref{assum:lower} hold and that $\ell(\mathbf{x})$ is convex. Let $\{\mathbf{x}_k\}_{k=0}^{K-1}$ be the sequence generated by Algorithm \ref{alg:In-BiCoG} with step-sizes specified as in Lemma \ref{lem:ell-es}. Then, we have that for all $K\geq 1$,
\begin{align}\label{eq:subopt-convex}
\ell(\mathbf{x}_K) - \ell(\mathbf{x}^*) &\leq (1- \gamma )^K( \ell(\mathbf{x}_0) - \ell(\mathbf{x}^* )) + \sum_{k=0}^{K-1}(1-\gamma)^{K-k}\mathcal{R}_k(\gamma)
\end{align}
where
\begin{align}\label{eq:Rk}
\mathcal{R}_k(\gamma)&\triangleq \gamma \mathbf{C}_2 \beta^k D_0^y D_{\mathcal{X}} + \frac{\gamma^2 D_{\mathcal{X}}^2 \mathbf{L}_{\mathbf{y}} \beta}{1-\beta} + C_{yx}^{g}\Big[ \gamma D_{\mathcal{X}} \rho^{k+1} \|\mathbf{w}_0 - \mathbf{v}(\mathbf{x}_0) \| \nonumber \\
& \quad + \frac{\gamma^2 D_{\mathcal{X}}^2 \rho \mathbf{C}_{\mathbf{v}}}{1-\rho} + \frac{\gamma D_{\mathcal{X}} D_0^y \mathbf{C}_1\eta \rho^{k+2}}{\rho - \beta} + \frac{\gamma^2 D_{\mathcal{X}}^2 \mathbf{L}_{\mathbf{y}} \mathbf{C}_1 \beta \eta}{(1-\beta)(1-\rho)}\Big] + \frac{1}{2} \mathbf{L}_{\ell} \gamma^2 D_{\mathcal{X}}^2.
\end{align}
\end{theorem}
Theorem \ref{thm:convex-upper-bound} shows that the suboptimality is controlled by the upper bound in \eqref{eq:subopt-convex}, which consists of two components. The first component decreases linearly, while the second arises from the errors of the nested approximations and can be mitigated by reducing the step-size $\gamma$. Thus, by carefully selecting the step-size $\gamma$, we can achieve a guaranteed convergence rate, as outlined in the following corollary. In particular, we establish that setting $\gamma=\log(K)/K$ yields a convergence rate of $\mathcal{O}(\log(K)/K)$.
\begin{corollary}\label{cr:convex_upper-bound}
Let $\{\mathbf{x}_k\}_{k=0}^{K-1}$ be the sequence generated by Algorithm \ref{alg:In-BiCoG} with step-size $\gamma_k = \gamma = \frac{\log(K)}{K}$. Under the premises of Theorem~\ref{thm:convex-upper-bound}, we have that $\ell(\mathbf{x}_K) - \ell(\mathbf{x}^*) \leq \epsilon$ after $\mathcal{O}(\epsilon^{-1}\log(\epsilon^{-1}))$ iterations.
\end{corollary}
Now we turn to the case where the objective function of the single-level problem $\ell(\cdot)$ is non-convex.
\begin{theorem}[Non-convex bilevel]\label{thm:nonconvex-upper-bound}
Suppose that Assumptions~\ref{assum:upper} and \ref{assum:lower} hold. Let $\{\mathbf{x}_k\}_{k=0}^{K-1}$ be the sequence generated by Algorithm \ref{alg:In-BiCoG} with step-sizes specified as in Lemma \ref{lem:ell-es}.
Then,
\begin{align}
\mathcal{G}_{k^*} &\leq \frac{\ell(\mathbf{x}_0) -\ell(\mathbf{x}^*)}{K \gamma}+ \frac{\gamma D_{\mathcal{X}} \mathbf{L}_{\mathbf{y}} \beta}{1-\beta}+ \frac{\gamma D_{\mathcal{X}}^2 \rho^2 \mathbf{C}_{\mathbf{v}} C_{yx}^{g}}{1-\rho} +\frac{\gamma D_{\mathcal{X}}^2 C_{yx}^{g} \mathbf{L}_{\mathbf{y}} \mathbf{C}_1 \beta \eta}{(1-\beta)(1-\rho)} + \frac{1}{2} \mathbf{L}_{\ell}\gamma D_{\mathcal{X}}^2 \nonumber \\
&\quad + \frac{\mathbf{C}_2 D_0^y D_{\mathcal{X}} \beta}{K(1- \beta)}+ \frac{D_{\mathcal{X}} C_{yx}^{g} \rho \|\mathbf{w}_0 - \mathbf{v}(\mathbf{x}_0) \|}{K(1- \rho)}+ \frac{ D_{\mathcal{X}} D_0^y C_{yx}^{g} \mathbf{C}_1 \eta \rho^2}{K(1-\beta)(1-\rho)}
\end{align}
\end{theorem}
Theorem~\ref{thm:nonconvex-upper-bound} establishes an upper bound on the Frank-Wolfe gap for the iterates generated by IBCG. This result shows that the Frank-Wolfe gap vanishes when the step-size $\gamma$ is properly selected. In particular, selecting $\gamma = 1/\sqrt{K}$, as outlined in the following corollary, results in a convergence rate of $\mathcal{O}(1/\sqrt{K})$.
\begin{corollary}\label{cr:nonconvex_upper-bound}
Let $\{\mathbf{x}_k\}_{k=0}^{K-1}$ be the sequence generated by Algorithm \ref{alg:In-BiCoG} with step-size $\gamma_k = \gamma = \frac{1}{\sqrt{K}}$; then there exists $k^*\in\{0,1,\dots,K-1\}$ such that $\mathcal{G}_{k^*} \leq \epsilon$ after $\mathcal{O}(\epsilon^{-2})$ iterations.
\end{corollary}
\begin{remark}
It is worth emphasizing that our proposed method requires only two matrix-vector multiplications per iteration, which significantly contributes to its efficiency. Furthermore, to the best of our knowledge, our results represent the state-of-the-art bounds for the considered setting. In the convex setting, our complexity result is near-optimal among projection-free methods for single-level optimization problems; this is noteworthy as the worst-case complexity of such methods is known to be $\Theta(1/\epsilon)$ \cite{jaggi2013revisiting, lan2013complexity}. In the non-convex setting, our complexity result matches the best-known bound of $\Theta(1/\epsilon^2)$ within the family of projection-free methods for single-level optimization problems \cite{jaggi2013revisiting, lan2013complexity}. This underscores the efficiency and effectiveness of our approach in this particular context.
\end{remark}
\section{Numerical Experiments}\label{sec:numeric}
In this section, we test our method on different bilevel optimization problems. First, we consider a small-scale example to demonstrate the performance of our method compared with the most closely related method in \cite{akhtar2022projection}. Next, we consider the matrix completion with denoising example described in Section~\ref{sec:pre} with synthetic datasets and compare our method with other existing methods in the literature \cite{hong2020two,akhtar2022projection}. All the experiments are performed in MATLAB R2022a on an Intel(R) Core(TM) i5-10210U CPU @ 1.60GHz.
\subsection{Toy example}\label{subsec:toy}
Here we consider a variation of the coreset problem in a two-dimensional space to illustrate the numerical stability of our proposed method. Given a point $x_0\in\mathbb{R}^2$, the goal is to find the point closest to $x_0$ whose image under a linear map lies within the convex hull of the given points $\{x_1,x_2,x_3,x_4\}\subset \mathbb{R}^2$. Let $A\in\mathbb{R}^{2\times 2}$ represent the linear map, $X \triangleq [x_1, x_2, x_3, x_4] \in \mathbb{R}^{2 \times 4}$, and let $\Delta_{4} \triangleq \{ \lambda \in \mathbb{R}^4 \mid \langle \lambda, \mathbf{1}\rangle = 1 , \lambda \geq 0 \}$ be the standard simplex. This problem can be formulated as the following bilevel optimization problem:
\begin{align}\label{ex:toy}
\min_{\lambda \in \Delta_{4}} \frac{1}{2}\| \theta(\lambda) - x_0\|^2 \quad \hbox{s.t.}\quad \theta(\lambda) \in \argmin_{\theta \in \mathbb{R}^{2}} \frac{1}{2}\| A \theta - X\lambda \|^2.
\end{align}
We set the target $x_0 = (2,2)$ and choose the starting points $\theta_0 = (0,0)$ and $\lambda_0 = \mathbf{1}_4/4$. We implemented our proposed method and compared it with SBFW \cite{akhtar2022projection}. It should be noted that the SBFW method uses a biased estimate of $(\nabla_{yy}g(\lambda,\theta))^{-1}=(A^\top A)^{-1}$ whose bias is upper bounded by $\frac{2}{\mu_g}$ (see \cite[Lemma 3.2]{ghadimi2018approximation}). Figure~\ref{fig:toy1} illustrates the iteration trajectories of both methods for $\mu_g = 1$ and $K=10^2$. The step-sizes for both methods are selected as suggested by their theoretical analyses. We observe that our method converges to the optimal solution while SBFW fails to converge. This behavior of SBFW worsens for smaller values of $\mu_g$, while our method shows a consistent and robust behavior (see Appendix~\ref{sec:additional_example} for further details and experiments).
\begin{figure}
\centering
\includegraphics[scale=0.3]{plot_n.png} \quad \includegraphics[scale=0.3]{upper_n.png}
\caption{The performance of IBCG (blue) vs SBFW (red) on Problem \ref{ex:toy} when $\mu_g =1$. Plots from left to right are trajectories of $\theta_k$ and $f(\lambda_k,\theta_k)- f^*$.}
\label{fig:toy1}
\end{figure}
\subsection{Matrix Completion with Denoising}\label{num: matrix_comp}
In this section, we study the performance of our proposed IBCG algorithm for solving the matrix completion with denoising problem in \eqref{ex:Matrix_comp}. The experimental setup we adopt is aligned with the methodology used in \cite{mokhtari2020stochastic}. In particular, we create an observation matrix $M = \hat{X}+E$. In this setting, $\hat{X} = WW^{T}$, where $W \in \mathbb{R}^{n \times r}$ contains normally distributed independent entries, and $E = \hat{n}(L+L^{T})$ is a noise matrix, where $L \in \mathbb{R}^{n \times n}$ contains normally distributed independent entries and $\hat{n} \in (0,1)$ is the noise factor. During the simulation process, we set $n= 250$, $r = 10$, and $\alpha = \| \hat{X}\|_{*}$. Additionally, we form the set of observed entries $\Omega$ by randomly sampling entries of $M$, each with probability 0.8. Initially, we set $\hat{n}$ to 0.5 and employ the IBCG algorithm to solve the problem described in \eqref{ex:Matrix_comp}. To evaluate the performance of our proposed method, we compare it with state-of-the-art methods in the literature for constrained bilevel optimization problems. In particular, we compare the performance of IBCG with the single-loop projection-based and projection-free bilevel algorithms TTSA~\cite{hong2020two} and SBFW~\cite{akhtar2022projection}, respectively. We set $\lambda_1 = \lambda_2 = 0.05$ and set the maximum number of iterations to $10^4$.
It should be noted that we use the pseudo-Huber loss, defined by $\mathcal{R}_{\delta}(\mathbf{V}) = \sum_{i,j}\delta^2 (\sqrt{1+(\mathbf{V}_{ij}/ \delta)^2}-1)$, as a regularization term to induce sparsity, and we set $\delta = 0.9$. The performance is analyzed based on the normalized error, defined as $\bar{e} = \frac{\sum _{(i,j) \in \Omega} (X_{i,j} - \hat{X}_{i,j})^2} {\sum _{(i,j) \in \Omega} (\hat{X}_{i,j})^2}$, where $X$ is the matrix generated by the algorithm. Figure~\ref{fig:matrixcom250} illustrates the progression of the normalized error over time, as well as $\|\nabla_y g(x_k,y_k)\|$ and the value of the upper-level objective function. We note that the SBFW algorithm has a slower theoretical convergence rate than projection-based schemes; nonetheless, our proposed method outperforms the other algorithms by achieving lower values of $\|\nabla_y g(x_k,y_k)\|$ and slightly better normalized error values. This gain comes from the projection-free nature of the proposed algorithm and its fast convergence, since we are no longer required to perform a complicated projection at each iteration.
\begin{figure}
\centering
\includegraphics[scale=0.32]{error_250.png} \quad \includegraphics[scale=0.32]{lower_250.png} \quad \includegraphics[scale=0.32]{upper_250.png}
\caption{The performance of IBCG (blue) vs SBFW (red) and TTSA (yellow) on Problem \ref{ex:Matrix_comp}. Plots from left to right are trajectories of the normalized error $(\bar{e})$, $\|\nabla_{y} g(x_k,y_k)\|$, and $f(x_k,y_k)$ over time.}
\label{fig:matrixcom250}
\end{figure}
\section{Conclusion}
In this paper, we focused on the constrained bilevel optimization problem, which has a wide range of applications in learning problems. We proposed a novel single-loop projection-free method based on nested approximation techniques, which surpasses current approaches by achieving improved per-iteration complexity. Additionally, it offers optimal convergence rate guarantees that match the best-known complexity of projection-free algorithms for solving convex constrained single-level optimization problems. In particular, we proved that our proposed method requires approximately $\tilde{\mathcal{O}}(\epsilon^{-1})$ iterations to find an $\epsilon$-optimal solution when the upper-level objective function $f$ is convex, and approximately $\mathcal{O}(\epsilon^{-2})$ iterations to find an $\epsilon$-stationary point when $f$ is non-convex. Our numerical results also showed the superior performance of our IBCG algorithm compared to existing algorithms.
\section*{Appendix}
In section \ref{sec:supporting-lemmas}, we establish technical lemmas based on the assumptions considered in the paper; these lemmas characterize important properties of problem \ref{eq:bilevel}, and Lemma \ref{lem:v_b} in particular is instrumental in this regard. Moving on to section \ref{sec:required-lemma}, we present a series of lemmas essential for deriving the rate results of the proposed algorithm.
Among them, Lemma \ref{lem:ell-es} quantifies the error between the approximated direction $F_k$ and $\nabla \ell(\mathbf{x}_k)$. This quantification plays a crucial role in establishing the one-step improvement lemma (see Lemma \ref{lem:one-step}). Next, we provide the proofs of Theorem \ref{thm:convex-upper-bound} and Corollary \ref{cr:convex_upper-bound} in sections \ref{appen:convex} and \ref{appen:convex_coro}, respectively, which support the results presented in the paper for the convex scenario. Finally, in sections \ref{appen: nonconvex} and \ref{appen:nonconvex_coro}, we provide the proofs of Theorem \ref{thm:nonconvex-upper-bound} and Corollary \ref{cr:nonconvex_upper-bound} for the non-convex scenario.
\section{Supporting Lemmas}\label{sec:supporting-lemmas}
In this section, we provide detailed explanations and proofs for the lemmas supporting the main results of the paper.
\subsection{Proof of Lemma~\ref{lem:v_b}}
(I) Recall that $\mathbf{y}^*(\mathbf{x})$ is the minimizer of the lower-level problem, whose objective function is strongly convex; therefore,
\begin{align*}
\mu_g\|\mathbf{y}^*(\mathbf{x})-\mathbf{y}^*(\bar{\mathbf{x}})\|^2&\leq \langle{\nabla_y g(\mathbf{x},\mathbf{y}^*(\mathbf{x}))-\nabla_y g(\mathbf{x},\mathbf{y}^*(\bar{\mathbf{x}})),\mathbf{y}^*(\mathbf{x})-\mathbf{y}^*(\bar{\mathbf{x}})} \rangle \\
&=\langle{\nabla_y g(\bar{\mathbf{x}},\mathbf{y}^*(\bar{\mathbf{x}}))-\nabla_y g(\mathbf{x},\mathbf{y}^*(\bar{\mathbf{x}})),\mathbf{y}^*(\mathbf{x})-\mathbf{y}^*(\bar{\mathbf{x}})} \rangle.
\end{align*}
Note that the equality holds since $\nabla_y g(\mathbf{x},\mathbf{y}^*(\mathbf{x}))= \nabla_y g(\bar{\mathbf{x}},\mathbf{y}^*(\bar{\mathbf{x}}))=0$. Using the Cauchy--Schwarz inequality we have
\begin{align*}
\mu_g\|\mathbf{y}^*(\mathbf{x})-\mathbf{y}^*(\bar{\mathbf{x}})\|^2&\leq\| \nabla_y g(\bar{\mathbf{x}},\mathbf{y}^*(\bar{\mathbf{x}}))-\nabla_y g(\mathbf{x},\mathbf{y}^*(\bar{\mathbf{x}}))\| \|\mathbf{y}^*(\mathbf{x})-\mathbf{y}^*(\bar{\mathbf{x}})\| \\
& \leq C_{yx}^{g} \| \mathbf{x} - \bar{\mathbf{x}}\| \|\mathbf{y}^*(\mathbf{x})-\mathbf{y}^*(\bar{\mathbf{x}})\|,
\end{align*}
where the last inequality follows from Assumption~\ref{assum:lower}. Therefore, we conclude that $\mu_g\|\mathbf{y}^*(\mathbf{x})-\mathbf{y}^*(\bar{\mathbf{x}})\| \leq C_{yx}^{g} \| \mathbf{x} - \bar{\mathbf{x}}\|$, which leads to the desired result in part (I).

(II) We first show that the function $\mathbf{x} \mapsto \nabla_y f(\mathbf{x},\mathbf{y}^*(\mathbf{x}))$ is Lipschitz continuous.
To see this, note that for any $\mathbf{x},\bar{\mathbf{x}} \in \mathcal{X}$, we have
\begin{align*}
\|\nabla_y f(\mathbf{x},\mathbf{y}^*(\mathbf{x}))-\nabla_y f(\bar{\mathbf{x}},\mathbf{y}^*(\bar{\mathbf{x}}))\| &\leq L_{yx}^f\|\mathbf{x}-\bar{\mathbf{x}}\|+L_{yy}^f\|\mathbf{y}^*(\mathbf{x})-\mathbf{y}^*(\bar{\mathbf{x}})\| \\
&\leq \Bigl(L_{yx}^f+\frac{L_{yy}^f C_{yx}^{g}}{\mu_g}\Bigr)\|\mathbf{x}-\bar{\mathbf{x}}\|,
\end{align*}
where in the last inequality we used Lemma~\ref{lem:v_b}-(I). Since $\mathcal{X}$ is bounded, we also have $\|\mathbf{x}-\bar{\mathbf{x}}\|\leq D_\mathcal{X}$. Therefore, letting $\bar{\mathbf{x}} = \mathbf{x}^*$ in the above inequality and using the triangle inequality, we have
\begin{equation*}
\|\nabla_y f(\mathbf{x},\mathbf{y}^*(\mathbf{x}))\| \leq \Bigl(L_{yx}^f+\frac{L_{yy}^f C_{yx}^{g}}{\mu_g}\Bigr)D_\mathcal{X}+ \|\nabla_y f(\mathbf{x}^*,\mathbf{y}^*(\mathbf{x}^*))\|.
\end{equation*}
Thus, we complete the proof by letting $C_y^f=\Bigl(L_{yx}^f+\frac{L_{yy}^f C_{yx}^{g}}{\mu_g}\Bigr)D_\mathcal{X}+ \|\nabla_y f(\mathbf{x}^*,\mathbf{y}^*(\mathbf{x}^*))\|$. Before proceeding to show the result of part (III) of Lemma~\ref{lem:v_b}, we first establish an auxiliary lemma stated next.
\begin{lemma}\label{lem:v(x)}
Under the premises of Lemma \ref{lem:v_b}, we have that for any $\mathbf{x},\bar{\mathbf{x}}\in\mathcal{X}$, $\|\mathbf{v}(\mathbf{x})-\mathbf{v}(\bar{\mathbf{x}})\|\leq \mathbf{C}_{\mathbf{v}}\|\mathbf{x}-\bar{\mathbf{x}}\|$ for some $\mathbf{C}_{\mathbf{v}}\geq 0$.
\end{lemma}
\begin{proof}
We start the proof by recalling that $\mathbf{v}(\mathbf{x})=[\nabla_{yy}g(\mathbf{x},\mathbf{y}^*(\mathbf{x}))]^{-1} \nabla_{y}f(\mathbf{x},\mathbf{y}^*(\mathbf{x}))$. Next, adding and subtracting $[\nabla_{yy} g(\mathbf{x}, \mathbf{y}^*(\mathbf{x}))]^{-1}\nabla_y f(\bar{\mathbf{x}},\mathbf{y}^*(\bar{\mathbf{x}}))$ followed by the triangle inequality leads to
\begin{align}\label{eq:bound-v}
&\| \mathbf{v}(\mathbf{x}) - \mathbf{v}(\bar{\mathbf{x}})\| \nonumber \\
&= \|[\nabla_{yy}g(\mathbf{x},\mathbf{y}^*(\mathbf{x}))]^{-1} \nabla_{y}f(\mathbf{x},\mathbf{y}^*(\mathbf{x})) - [\nabla_{yy}g(\bar{\mathbf{x}},\mathbf{y}^*(\bar{\mathbf{x}}))]^{-1} \nabla_{y}f(\bar{\mathbf{x}},\mathbf{y}^*(\bar{\mathbf{x}}))\| \nonumber \\
&\leq \|[\nabla_{yy}g(\mathbf{x},\mathbf{y}^*(\mathbf{x}))]^{-1}\big( \nabla_{y}f(\mathbf{x},\mathbf{y}^*(\mathbf{x})) - \nabla_{y}f(\bar{\mathbf{x}},\mathbf{y}^*(\bar{\mathbf{x}})) \big)\| \nonumber \\
&\quad + \|\big([\nabla_{yy}g(\mathbf{x},\mathbf{y}^*(\mathbf{x}))]^{-1} - [\nabla_{yy}g(\bar{\mathbf{x}},\mathbf{y}^*(\bar{\mathbf{x}}))]^{-1} \big)\nabla_{y}f(\bar{\mathbf{x}},\mathbf{y}^*(\bar{\mathbf{x}}))\| \nonumber \\
& \leq \frac{1}{\mu_g}\big( L_{yx}^f\| \mathbf{x} - \bar{\mathbf{x}}\| + L_{yy}^f\|\mathbf{y}^*(\mathbf{x}) - \mathbf{y}^*(\bar{\mathbf{x}})\|\big) + C_{y}^{f}\| [\nabla_{yy}g(\mathbf{x},\mathbf{y}^*(\mathbf{x}))]^{-1} - [\nabla_{yy}g(\bar{\mathbf{x}},\mathbf{y}^*(\bar{\mathbf{x}}))]^{-1} \|,
\end{align}
where in the last inequality we used Assumptions~\ref{assum:upper} and \ref{assum:lower}-(iii) along with Lemma~\ref{lem:v_b}-(II). Moreover, for any invertible matrices $H_1$ and $H_2$, we have that
\begin{equation}\label{eq:invert}
\| H_2^{-1} -H_1^{-1}\| = \|H_1^{-1} \big( H_1 - H_2\big) H_2^{-1}\| \leq \| H_1^{-1}\| \|H_2^{-1}\| \|H_1 -H_2\|.
\end{equation}
Therefore, using the result of Lemma~\ref{lem:v_b}-(I) and \eqref{eq:invert}, we can further bound inequality \eqref{eq:bound-v} as follows:
\begin{align*}
&\|\mathbf{v}(\mathbf{x})-\mathbf{v}(\bar{\mathbf{x}})\| \\
&\leq \frac{1}{\mu_g}\big( L_{yx}^f \| \mathbf{x} - \bar{\mathbf{x}}\| + L_{yy}^f \mathbf{L}_{\mathbf{y}}\| \mathbf{x} - \bar{\mathbf{x}}\|\big) + C_{y}^{f}\| [\nabla_{yy}g(\mathbf{x},\mathbf{y}^*(\mathbf{x}))]^{-1} - [\nabla_{yy}g(\bar{\mathbf{x}},\mathbf{y}^*(\bar{\mathbf{x}}))]^{-1} \| \\
&\leq \frac{1}{\mu_g} \big( L_{yx}^{f} + L_{yy}^{f}\mathbf{L}_{\mathbf{y}} \big)\|\mathbf{x} -\bar{\mathbf{x}}\| + \frac{C_y^f}{\mu_g^2} L_{yy}^g\bigl( \|\mathbf{x}-\bar{\mathbf{x}}\|+\|\mathbf{y}^*(\mathbf{x})-\mathbf{y}^*(\bar{\mathbf{x}})\|\bigr) \\
&\leq \bigl( \frac{L_{yx}^{f} + L_{yy}^{f}\mathbf{L}_{\mathbf{y}}}{\mu_g} + \frac{C_{y}^{f}L_{yy}^{g}}{\mu_g^2}(1+\mathbf{L}_{\mathbf{y}})\bigr) \|\mathbf{x} -\bar{\mathbf{x}}\|.
\end{align*}
The result follows by letting $\mathbf{C}_{\mathbf{v}}=\frac{L_{yx}^{f} + L_{yy}^{f}\mathbf{L}_{\mathbf{y}}}{\mu_g} + \frac{C_{y}^{f}L_{yy}^{g}}{\mu_g^2}(1+\mathbf{L}_{\mathbf{y}})$.
\end{proof}

(III) We start proving part (III) using the definition of $\nabla \ell (\mathbf{x})$ stated in \eqref{eq:ell-v}. Using the triangle inequality, we obtain
\begin{align}
&\|\nabla \ell (\mathbf{x}) - \nabla \ell(\bar{\mathbf{x}})\| \nonumber\\
&\quad = \| \nabla_{x}f(\mathbf{x},\mathbf{y}^*(\mathbf{x})) - \nabla_{yx}g(\mathbf{x},\mathbf{y}^*(\mathbf{x})) \mathbf{v}(\mathbf{x}) - \big( \nabla_{x}f(\bar{\mathbf{x}},\mathbf{y}^*(\bar{\mathbf{x}})) - \nabla_{yx}g(\bar{\mathbf{x}},\mathbf{y}^*(\bar{\mathbf{x}})) \mathbf{v}(\bar{\mathbf{x}})\big)\| \nonumber \\
&\quad \leq \| \nabla_{x}f(\mathbf{x},\mathbf{y}^*(\mathbf{x})) - \nabla_{x}f(\bar{\mathbf{x}},\mathbf{y}^*(\bar{\mathbf{x}}))\| + \big\| \big[\nabla_{yx}g(\bar{\mathbf{x}},\mathbf{y}^*(\bar{\mathbf{x}})) \mathbf{v}(\bar{\mathbf{x}}) - \nabla_{yx}g(\bar{\mathbf{x}},\mathbf{y}^*(\bar{\mathbf{x}})) \mathbf{v}(\mathbf{x})\big] \nonumber \\
&\qquad + \big[\nabla_{yx}g(\bar{\mathbf{x}},\mathbf{y}^*(\bar{\mathbf{x}})) \mathbf{v}(\mathbf{x})-\nabla_{yx}g(\mathbf{x},\mathbf{y}^*(\mathbf{x})) \mathbf{v}(\mathbf{x})\big]\big\|,
\end{align}
where the second term on the RHS follows from adding and subtracting the term $\nabla_{yx}g(\bar{\mathbf{x}}, \mathbf{y}^*(\bar{\mathbf{x}}))\mathbf{v}(\mathbf{x})$. Next, from Assumptions~\ref{assum:upper}-(i) and \ref{assum:lower}-(v) together with the triangle inequality, we conclude that
\begin{align}
\|\nabla \ell (\mathbf{x}) - \nabla \ell(\bar{\mathbf{x}})\| &\leq L_{xx}^f \| \mathbf{x} - \bar{\mathbf{x}}\| + L_{xy}^f \| \mathbf{y}^*(\mathbf{x}) - \mathbf{y}^*(\bar{\mathbf{x}})\| + C_{yx}^g \|\mathbf{v}(\bar{\mathbf{x}}) - \mathbf{v}(\mathbf{x})\| \nonumber \\
& \quad + \frac{C_{y}^{f}}{\mu_g} \|\nabla_{yx}g(\bar{\mathbf{x}},\mathbf{y}^*(\bar{\mathbf{x}})) -\nabla_{yx}g(\mathbf{x},\mathbf{y}^*(\mathbf{x}))\|.
\end{align}
Note that in the last inequality we used the fact that $\|\mathbf{v}(\mathbf{x})\| = \| [\nabla_{yy}g(\mathbf{x},\mathbf{y}^*(\mathbf{x}))]^{-1} \nabla_{y}f(\mathbf{x},\mathbf{y}^*(\mathbf{x}))\| \leq \frac{ C_{y}^{f}}{\mu_g}$.
Combining the result of Lemma~\ref{lem:v_b} parts (I) and (II) with Assumption~\ref{assum:lower}-(iv) leads to
\begin{align}
\|\nabla \ell (\mathbf{x}) - \nabla \ell(\bar{\mathbf{x}})\| &\leq L_{xx}^f \| \mathbf{x} - \bar{\mathbf{x}}\| + L_{xy}^f \mathbf{L}_{\mathbf{y}}\|\mathbf{x} - \bar{\mathbf{x}}\| + C_{yx}^g \mathbf{C}_{\mathbf{v}}\|\mathbf{x} - \bar{\mathbf{x}}\| + \frac{C_{y}^{f}}{\mu_g} L_{yx}^g \bigl( \|\mathbf{x}-\bar{\mathbf{x}}\|+\|\mathbf{y}^*(\mathbf{x})-\mathbf{y}^*(\bar{\mathbf{x}})\|\bigr)\nonumber \\
&\leq L_{xx}^f \| \mathbf{x} - \bar{\mathbf{x}}\| + L_{xy}^f \mathbf{L}_{\mathbf{y}}\|\mathbf{x} - \bar{\mathbf{x}}\| + C_{yx}^g \mathbf{C}_{\mathbf{v}}\|\mathbf{x} - \bar{\mathbf{x}}\| + \frac{C_{y}^{f}}{\mu_g} L_{yx}^g \bigl( \|\mathbf{x}-\bar{\mathbf{x}}\|+\mathbf{L}_{\mathbf{y}}\|\mathbf{x}-\bar{\mathbf{x}}\| \bigr) \nonumber \\
&\leq \big(L_{xx}^f +L_{xy}^f \mathbf{L}_{\mathbf{y}} +C_{yx}^g \mathbf{C}_{\mathbf{v}} +\frac{C_{y}^{f}}{\mu_g} L_{yx}^g(1+\mathbf{L}_{\mathbf{y}})\big) \|\mathbf{x}-\bar{\mathbf{x}}\|.
\end{align}
The desired result can be obtained by letting $\mathbf{L}_{\ell} = L_{xx}^f +L_{xy}^f \mathbf{L}_{\mathbf{y}} +C_{yx}^g \mathbf{C}_{\mathbf{v}} +\frac{C_{y}^{f}}{\mu_g} L_{yx}^g(1+\mathbf{L}_{\mathbf{y}})$. \qed

\section{Required Lemmas for Theorems \ref{thm:convex-upper-bound} and \ref{thm:nonconvex-upper-bound}}\label{sec:required-lemma}

Before we proceed to the proofs of Theorems~\ref{thm:convex-upper-bound} and \ref{thm:nonconvex-upper-bound}, we present the following technical lemmas, which quantify the error between the approximate solution $\mathbf{y}_k$ and $\mathbf{y}^*(\mathbf{x}_k)$, as well as between $\mathbf{w}_{k+1}$ and $\mathbf{v}(\mathbf{x}_{k})$.

\begin{lemma}\label{lem:lower_GD_C}
Suppose Assumption \ref{assum:lower} holds. Let $\{(\mathbf{x}_k,\mathbf{y}_k)\}_{k\geq 0}$ be the sequence generated by Algorithm \ref{alg:In-BiCoG} with $\alpha=2/(\mu_g+L_g)$. Then, for any $k\geq 0$,
\begin{equation}\label{eq:lower_GD}
\|\mathbf{y}_{k}- \mathbf{y}^*(\mathbf{x}_{k})\|\leq \beta^k \|\mathbf{y}_0 - \mathbf{y}^*(\mathbf{x}_0)\| + \mathbf{L}_{\mathbf{y}}D_{\mathcal{X}}\sum_{i=0}^{k-1} \gamma_i \beta^{k-i},
\end{equation}
where $\beta\triangleq (L_g-\mu_g)/(L_g+\mu_g)$.
\end{lemma}
\begin{proof}
We begin the proof by characterizing the one-step progress of the lower-level iterate sequence $\{\mathbf{y}_{k}\}_k$. At iteration $k$ we aim to approximate $\mathbf{y}^*(\mathbf{x}_{k+1})=\argmin_{\mathbf{y}} g(\mathbf{x}_{k+1},\mathbf{y})$.
According to the update of $\mathbf{y}_{k+1}$ we observe that
\begin{align}\label{eq:lower_iter}
\| \mathbf{y}_{k+1}- \mathbf{y}^*(\mathbf{x}_{k+1}) \|^2 &= \| \mathbf{y}_k- \mathbf{y}^*(\mathbf{x}_{k+1}) - \alpha\nabla_y g(\mathbf{x}_{k+1},\mathbf{y}_k) \|^2 \nonumber \\
&= \| \mathbf{y}_k- \mathbf{y}^*(\mathbf{x}_{k+1})\|^2 - 2\alpha \langle \nabla_y g(\mathbf{x}_{k+1},\mathbf{y}_k), \mathbf{y}_k - \mathbf{y}^*(\mathbf{x}_{k+1}) \rangle \nonumber\\
&\quad + \alpha^2 \|\nabla_y g(\mathbf{x}_{k+1},\mathbf{y}_k)\|^2.
\end{align}
Moreover, from Assumption \ref{assum:lower} we have that
\begin{align}\label{eq:g-sc-lip}
\langle \nabla_y g(\mathbf{x}_{k+1},\mathbf{y}_k), \mathbf{y}_k- \mathbf{y}^*(\mathbf{x}_{k+1})\rangle \geq \frac{\mu_g L_g}{\mu_g + L_g}\|\mathbf{y}_k- \mathbf{y}^*(\mathbf{x}_{k+1})\|^2 + \frac{1}{\mu_g + L_g}\|\nabla_y g(\mathbf{x}_{k+1},\mathbf{y}_k)\|^2.
\end{align}
The identity in \eqref{eq:lower_iter} together with \eqref{eq:g-sc-lip} implies that
\begin{align}\label{eq:lower_convrg}
\| \mathbf{y}_{k+1}- \mathbf{y}^*(\mathbf{x}_{k+1}) \|^2 &\leq \| \mathbf{y}_k- \mathbf{y}^*(\mathbf{x}_{k+1})\|^2 - \frac{2\alpha \mu_g L_g}{\mu_g + L_g}\| \mathbf{y}_k- \mathbf{y}^*(\mathbf{x}_{k+1})\|^2 \nonumber\\
&\quad +\Big( \alpha^2 - \frac{2\alpha}{\mu_g + L_g} \Big) \|\nabla_y g(\mathbf{x}_{k+1},\mathbf{y}_k)\|^2.
\end{align}
Setting the step-size $\alpha = \frac{2}{\mu_g + L_g}$ in \eqref{eq:lower_convrg} leads to
\begin{align}\label{eq:lower_convrg2}
\|\mathbf{y}_{k+1}- \mathbf{y}^*(\mathbf{x}_{k+1})\|^2 &\leq \Big(\frac{\mu_g - L_g}{\mu_g + L_g}\Big)^2\|\mathbf{y}_{k}- \mathbf{y}^*(\mathbf{x}_{k+1})\|^2.
\end{align}
Next, recall that $\beta=(L_g-\mu_g)/(L_g+\mu_g)$. Using the triangle inequality and part (I) of Lemma \ref{lem:v_b}, we conclude that
\begin{align}\label{eq:one-step-triangle}
\|\mathbf{y}_{k+1}- \mathbf{y}^*(\mathbf{x}_{k+1})\|&\leq \beta\|\mathbf{y}_{k}- \mathbf{y}^*(\mathbf{x}_{k+1})\| \nonumber \\
&\leq \beta\Big[ \|\mathbf{y}_{k}- \mathbf{y}^*(\mathbf{x}_{k})\| +\|\mathbf{y}^*(\mathbf{x}_k)- \mathbf{y}^*(\mathbf{x}_{k+1})\|\Big] \nonumber \\
&\leq \beta\Big[ \|\mathbf{y}_{k}- \mathbf{y}^*(\mathbf{x}_{k})\| + \mathbf{L}_{\mathbf{y}}\|\mathbf{x}_k - \mathbf{x}_{k+1}\|\Big].
\end{align}
Moreover, from the update of $\mathbf{x}_{k+1}$ in Algorithm \ref{alg:In-BiCoG} and the boundedness of $\mathcal{X}$ we have that $\norm{\mathbf{x}_{k+1}-\mathbf{x}_k}\leq \gamma_k D_{\mathcal{X}}$. Therefore, using this inequality within \eqref{eq:one-step-triangle} leads to
\begin{equation*}
\norm{\mathbf{y}_{k+1}-\mathbf{y}^*(\mathbf{x}_{k+1})}\leq \beta\norm{\mathbf{y}_k-\mathbf{y}^*(\mathbf{x}_k)}+\beta\gamma_k\mathbf{L}_{\mathbf{y}} D_{\mathcal{X}}.
\end{equation*}
Finally, the desired result can be deduced by applying the above inequality recursively.
\end{proof}

Previously, in Lemma~\ref{lem:lower_GD_C} we quantified how close the approximation $\mathbf{y}_k$ is to the optimal solution $\mathbf{y}^*(\mathbf{x}_k)$ of the inner problem.
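To make the contraction just established concrete, the following minimal numerical sketch performs one inner gradient step with $\alpha=2/(\mu_g+L_g)$ on a toy strongly convex quadratic and checks the factor $\beta=(L_g-\mu_g)/(L_g+\mu_g)$ of \eqref{eq:lower_convrg2}. The quadratic lower-level objective and all constants are our own assumptions for the illustration; this is not Algorithm~\ref{alg:In-BiCoG} itself.
\begin{verbatim}
# Minimal sketch (assumed toy problem, not the paper's setting): one inner
# gradient step on a strongly convex quadratic lower-level objective
#   g(x, y) = 0.5 * y^T A y - (B x)^T y,  so  y*(x) = A^{-1} B x,
# with alpha = 2/(mu_g + L_g), illustrating the contraction factor
# beta = (L_g - mu_g)/(L_g + mu_g) of eq. (lower_convrg2).
import numpy as np

rng = np.random.default_rng(0)
n = 5
eigs = np.linspace(1.0, 10.0, n)               # mu_g = 1, L_g = 10
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(eigs) @ Q.T                    # Hessian of g in y
B = rng.standard_normal((n, n))
mu_g, L_g = eigs[0], eigs[-1]
alpha = 2.0 / (mu_g + L_g)
beta = (L_g - mu_g) / (L_g + mu_g)

y_star = lambda x: np.linalg.solve(A, B @ x)   # exact lower-level solution
x_next = rng.standard_normal(n)                # plays the role of x_{k+1}
y = rng.standard_normal(n)                     # plays the role of y_k
y_next = y - alpha * (A @ y - B @ x_next)      # inner gradient step

lhs = np.linalg.norm(y_next - y_star(x_next))
rhs = beta * np.linalg.norm(y - y_star(x_next))
print(lhs <= rhs + 1e-12)                      # prints True
\end{verbatim}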
Now, in the following lemma, we derive an upper bound on the error of approximating $\mathbf{v}(\mathbf{x}_k)$ by $\mathbf{w}_{k+1}$.
\begin{lemma}\label{lem:nu_est}
Let $\{(\mathbf{x}_k,\mathbf{w}_k)\}_{k\geq 0}$ be the sequence generated by Algorithm \ref{alg:In-BiCoG} with $\gamma_k = \gamma$. Define $\rho_k \triangleq (1- \eta_k \mu_g)$ and $\mathbf{C}_1 \triangleq L_{yy}^{g} \frac{ C_{y}^{f}}{\mu_g} + L_{yy}^{f}$. Under Assumptions \ref{assum:upper} and \ref{assum:lower} we have that for any $k\geq 0$,
\begin{equation}
\| \mathbf{w}_{k+1} - \mathbf{v} (\mathbf{x}_{k})\| \leq \rho_k \|\mathbf{w}_{k} - \mathbf{v} (\mathbf{x}_{k-1})\| + \rho_k \mathbf{C}_{\mathbf{v}}\gamma D_{\mathcal{X}} + \eta_k \mathbf{C}_1\big( \beta^k D_0^y + \mathbf{L}_{\mathbf{y}} \gamma \frac{\beta}{1-\beta} D_{\mathcal{X}}\big).
\end{equation}
\end{lemma}
\begin{proof}
From the optimality condition of \eqref{eq:obj-v}, one can easily verify that $\mathbf{v}(\mathbf{x}_k) = \mathbf{v}(\mathbf{x}_k) - \eta_k \bigl(\nabla_{yy}g(\mathbf{x}_k,\mathbf{y}^*(\mathbf{x}_k))\mathbf{v}(\mathbf{x}_k) - \nabla_{y}f(\mathbf{x}_k,\mathbf{y}^*(\mathbf{x}_k))\bigr)$. Now, using the definition of $\mathbf{w}_{k+1}$, we can write
\begin{align}
\| \mathbf{w}_{k+1} - \mathbf{v} (\mathbf{x}_{k})\| &= \Big\| \Big(\mathbf{w}_k -\eta_k \big(\nabla_{yy}g(\mathbf{x}_k,\mathbf{y}_k)\mathbf{w}_{k} - \nabla_{y}f(\mathbf{x}_k,\mathbf{y}_k)\big)\Big) - \Big(\mathbf{v}(\mathbf{x}_k) \nonumber \\
&\quad - \eta_k \big(\nabla_{yy}g(\mathbf{x}_k,\mathbf{y}^*(\mathbf{x}_k))\mathbf{v}(\mathbf{x}_k)- \nabla_{y}f(\mathbf{x}_k,\mathbf{y}^*(\mathbf{x}_k))\big)\Big)\Big\| \nonumber \\
&= \Big\| \Big(I - \eta_k \nabla_{yy}g(\mathbf{x}_k,\mathbf{y}_k)\Big)\big(\mathbf{w}_{k} - \mathbf{v} (\mathbf{x}_{k})\big) - \eta_k \Big(\nabla_{yy}g(\mathbf{x}_k,\mathbf{y}_k) - \nabla_{yy}g(\mathbf{x}_k,\mathbf{y}^*(\mathbf{x}_k))\Big)\mathbf{v} (\mathbf{x}_{k}) \nonumber \\
&\quad + \eta_k \Big( \nabla_{y}f(\mathbf{x}_k,\mathbf{y}^*(\mathbf{x}_k)) - \nabla_{y}f(\mathbf{x}_k,\mathbf{y}_k)\Big)\Big\|, \label{pr:w-v-1}
\end{align}
where the last equality is obtained by adding and subtracting the term $\big(I - \eta_k \nabla_{yy}g(\mathbf{x}_k, \mathbf{y}_k)\big) \mathbf{v}(\mathbf{x}_k)$. Next, using Assumptions~\ref{assum:upper} and \ref{assum:lower} along with the triangle inequality, we obtain
\begin{align}
\| \mathbf{w}_{k+1} - \mathbf{v} (\mathbf{x}_{k})\| &\leq (1 - \eta_k \mu_g)\|\mathbf{w}_{k} - \mathbf{v} (\mathbf{x}_{k})\| + \eta_k L_{yy}^{g} \| \mathbf{y}_k - \mathbf{y}^*(\mathbf{x}_k)\|\,\|\mathbf{v}(\mathbf{x}_k)\| + \eta_k L_{yy}^{f} \| \mathbf{y}_k - \mathbf{y}^*(\mathbf{x}_k)\|.
\end{align}
Note that $\|\mathbf{v}(\mathbf{x}_k)\| = \| [\nabla_{yy}g(\mathbf{x}_k,\mathbf{y}^*(\mathbf{x}_k))]^{-1} \nabla_{y}f(\mathbf{x}_k,\mathbf{y}^*(\mathbf{x}_k))\| \leq \frac{ C_{y}^{f}}{\mu_g}$.
Now, adding and subtracting $\mathbf{v}(\mathbf{x}_{k-1})$ in the term $\| \mathbf{w}_{k} - \mathbf{v} (\mathbf{x}_{k})\|$ and applying the triangle inequality, we conclude that
\begin{align}
\| \mathbf{w}_{k+1} - \mathbf{v} (\mathbf{x}_{k})\| &\leq (1 - \eta_k \mu_g)\|\mathbf{w}_{k} - \mathbf{v} (\mathbf{x}_{k-1})\|+(1 - \eta_k \mu_g)\|\mathbf{v} (\mathbf{x}_{k-1}) - \mathbf{v} (\mathbf{x}_{k})\| \nonumber\\
& \quad+ \eta_k \Big( L_{yy}^{g} \frac{ C_{y}^{f}}{\mu_g} + L_{yy}^{f}\Big) \| \mathbf{y}_k - \mathbf{y}^*(\mathbf{x}_k)\|. \label{pr:w-v-3}
\end{align}
Therefore, using the results of Lemmas~\ref{lem:v(x)} and \ref{lem:lower_GD_C}, we can further bound inequality~\eqref{pr:w-v-3} as follows:
\begin{align}
\| \mathbf{w}_{k+1} - \mathbf{v} (\mathbf{x}_{k})\| &\leq (1 - \eta_k \mu_g)\|\mathbf{w}_{k} - \mathbf{v} (\mathbf{x}_{k-1})\|+(1 - \eta_k \mu_g)\mathbf{C}_{\mathbf{v}}\| \mathbf{x}_{k-1} - \mathbf{x}_k\| + \eta_k \mathbf{C}_1\| \mathbf{y}_k - \mathbf{y}^*(\mathbf{x}_k)\| \nonumber\\
&\leq \rho_k \|\mathbf{w}_{k} - \mathbf{v} (\mathbf{x}_{k-1})\| + \rho_k \mathbf{C}_{\mathbf{v}}\gamma D_{\mathcal{X}} + \eta_k \mathbf{C}_1\big( \beta^k D_0^y + \mathbf{L}_{\mathbf{y}} \gamma \frac{\beta}{1-\beta}D_{\mathcal{X}}\big),
\end{align}
where the last inequality follows from the boundedness of the set $\mathcal{X}$, recalling that $D_0^y=\| \mathbf{y}_0 - \mathbf{y}^*(\mathbf{x}_0)\|$, and the fact that $\sum_{i=0}^{k-1} \beta^{k-i} \leq \frac{\beta}{1-\beta}$.
\end{proof}

\begin{lemma}\label{lem:nu_est_K}
Let $\{(\mathbf{x}_k,\mathbf{w}_k)\}_{k\geq 0}$ be the sequence generated by Algorithm \ref{alg:In-BiCoG} with step-size $\eta_k=\eta< \frac{1-\beta}{\mu_g}$, where $\beta$ is defined in Lemma \ref{lem:lower_GD_C}. Suppose that Assumption~\ref{assum:lower} holds and $\mathbf{v}(\mathbf{x}_{-1}) = \mathbf{v}(\mathbf{x}_0)$. Then, for any $K\geq 1$,
\begin{align}
\| \mathbf{w}_{K} - \mathbf{v}(\mathbf{x}_{K-1})\| \leq \rho^K \|\mathbf{w}_0 - \mathbf{v}(\mathbf{x}_0) \| + \frac{\gamma \rho \mathbf{C}_{\mathbf{v}} D_{\mathcal{X}}}{1-\rho} + \frac{\eta \mathbf{C}_1 D_0^y \rho^{K+1}}{\rho - \beta} +\frac{\gamma \eta \beta \mathbf{C}_1 \mathbf{L}_{\mathbf{y}} D_{\mathcal{X}} }{(1-\rho) (1-\beta)},
\end{align}
where $\rho\triangleq 1-\eta\mu_g$.
\end{lemma}
\begin{proof}
Applying the result of Lemma~\ref{lem:nu_est} recursively for $k=0$ to $K-1$, one can conclude that
\begin{align}
\| \mathbf{w}_{K} - \mathbf{v}(\mathbf{x}_{K-1})\| &\leq \rho^K \|\mathbf{w}_0 - \mathbf{v}(\mathbf{x}_0) \| + \mathbf{C}_{\mathbf{v}} \gamma D_{\mathcal{X}} \sum_{i=1}^{K} \rho^i+ \eta \mathbf{C}_1 \sum_{i=0}^{K} \big( \beta^i D_0^y + \gamma \mathbf{L}_{\mathbf{y}} D_{\mathcal{X}} \frac{\beta}{1-\beta}\big) \rho^{K-i} \nonumber \\
&\leq \rho^K \|\mathbf{w}_0 - \mathbf{v}(\mathbf{x}_0) \| + \frac{\rho}{1- \rho}\mathbf{C}_{\mathbf{v}} \gamma D_{\mathcal{X}} + \eta \mathbf{C}_1 D_0^y \Big(\sum_{i=0}^{K} \beta^i\rho^{K-i}\Big) + \frac{\gamma \eta \beta \mathbf{C}_1 \mathbf{L}_{\mathbf{y}} D_{\mathcal{X}} }{1-\beta}\sum_{i=0}^{K}\rho^{K-i}, \label{pr:nu-es-K-1}
\end{align}
where the last inequality is obtained by noting that $\sum_{i=1}^{K} \rho^{i} \leq \frac{\rho}{1-\rho}$. Finally, the choice $\eta<\frac{1-\beta}{\mu_g}$ implies that $\beta < \rho$; hence $\sum_{i=0}^{K} (\frac{\beta}{\rho})^i \leq \frac{\rho}{\rho-\beta}$, which leads to the desired result.
\end{proof}
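As an aside, the linear-rate behaviour captured by Lemma~\ref{lem:nu_est} can be checked numerically. The following minimal sketch (our own illustration with assumed toy data; it is not Algorithm~\ref{alg:In-BiCoG}) runs the update $\mathbf{w}_{k+1} = \mathbf{w}_k - \eta\bigl(\nabla_{yy}g(\mathbf{x}_k,\mathbf{y}_k)\mathbf{w}_k - \nabla_y f(\mathbf{x}_k,\mathbf{y}_k)\bigr)$ with $\mathbf{y}_k=\mathbf{y}^*(\mathbf{x}_k)$ held fixed, in which case the error to $\mathbf{v}(\mathbf{x}_k)$ contracts by at least $\rho = 1-\eta\mu_g$ per step.
\begin{verbatim}
# Minimal sketch (assumed toy data, not the paper's Algorithm): the
# Hessian-vector update analysed in Lemma lem:nu_est, viewed as a gradient
# step on the quadratic whose minimizer is v(x_k) = [nabla_yy g]^{-1} nabla_y f.
import numpy as np

rng = np.random.default_rng(1)
n = 5
eigs = np.linspace(1.0, 10.0, n)       # spectrum of nabla_yy g
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
H = Q @ np.diag(eigs) @ Q.T            # stands in for nabla_yy g(x_k, y_k)
b = rng.standard_normal(n)             # stands in for nabla_y f(x_k, y_k)
mu_g, L_g = eigs[0], eigs[-1]
eta = 1.0 / L_g                        # any step-size in (0, 1/L_g]
rho = 1.0 - eta * mu_g

v = np.linalg.solve(H, b)              # v(x_k)
w = np.zeros(n)                        # w_k
for _ in range(3):
    err_before = np.linalg.norm(w - v)
    w = w - eta * (H @ w - b)          # the update of Lemma lem:nu_est
    print(np.linalg.norm(w - v) <= rho * err_before + 1e-12)   # prints True
\end{verbatim}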
\subsection{Proof of Lemma~\ref{lem:ell-es}}\label{proof:lemma-grad-est}
We begin the proof by considering the definitions of $\nabla \ell (\mathbf{x}_k)$ and $F_k$, followed by the triangle inequality, to obtain
\begin{align}
\| \nabla \ell (\mathbf{x}_k) - F_k\| & \leq \|\nabla_{x}f(\mathbf{x}_k,\mathbf{y}^*(\mathbf{x}_k)) - \nabla_{x}f(\mathbf{x}_k, \mathbf{y}_k)\| \nonumber\\
&\quad + \| \nabla_{yx}g(\mathbf{x}_k, \mathbf{y}_k)\mathbf{w}_{k+1} - \nabla_{yx}g(\mathbf{x}_k,\mathbf{y}^*(\mathbf{x}_k)) \mathbf{v}(\mathbf{x}_k)\|. \label{pr:ell_est_1}
\end{align}
Combining Assumption~\ref{assum:upper}-(i) with adding and subtracting $\nabla_{yx}g(\mathbf{x}_k,\mathbf{y}_k)\mathbf{v}(\mathbf{x}_k)$ in the second term of the RHS leads to
\begin{align}\label{eq:ell-F-first-bound}
\| \nabla \ell (\mathbf{x}_k) - F_k\| &\leq L_{xy}^{f} \| \mathbf{y}_k - \mathbf{y}^*(\mathbf{x}_k)\| + \big\| \nabla_{yx}g(\mathbf{x}_k, \mathbf{y}_k)\big( \mathbf{w}_{k+1} - \mathbf{v}(\mathbf{x}_k)\big) \nonumber \\
&\quad + \big( \nabla_{yx}g(\mathbf{x}_k, \mathbf{y}_k) - \nabla_{yx}g(\mathbf{x}_k,\mathbf{y}^*(\mathbf{x}_k)) \big) \mathbf{v}(\mathbf{x}_k)\big\| \nonumber \\
&\leq L_{xy}^{f} \| \mathbf{y}_k - \mathbf{y}^*(\mathbf{x}_k)\| + C_{yx}^{g}\|\mathbf{w}_{k+1} - \mathbf{v}(\mathbf{x}_k)\| + L_{yx}^{g} \frac{C_{y}^{f}}{\mu_g} \| \mathbf{y}_k - \mathbf{y}^*(\mathbf{x}_k)\|,
\end{align}
where the last inequality is obtained using Assumption~\ref{assum:lower}, the triangle inequality, and the bound $\|\mathbf{v}(\mathbf{x}_k)\|\leq C_{y}^{f}/\mu_g$. Next, utilizing Lemmas~\ref{lem:lower_GD_C} and \ref{lem:nu_est_K}, we can further upper-bound the terms on the RHS of \eqref{eq:ell-F-first-bound} as follows:
\begin{align*}
\| \nabla \ell (\mathbf{x}_k) - F_k\| &\leq \mathbf{C}_2 \big( \beta^k D_0^y + \frac{\gamma \beta \mathbf{L}_{\mathbf{y}} D_{\mathcal{X}} }{1-\beta} \big)+ C_{yx}^{g} \Big(\rho^{k+1} \|\mathbf{w}_0 - \mathbf{v}(\mathbf{x}_0) \| + \frac{\gamma \rho \mathbf{C}_{\mathbf{v}} D_{\mathcal{X}}}{1-\rho} \nonumber \\
&\quad + \frac{\eta \mathbf{C}_1 D_0^y \rho^{k+2}}{\rho - \beta}+\frac{\gamma \eta \beta \mathbf{C}_1 \mathbf{L}_{\mathbf{y}} D_{\mathcal{X}} }{(1-\rho)(1-\beta)}\Big).
\end{align*}
\qed

\subsection{Improvement in one step}
In the following, we characterize the improvement of the objective function $\ell(\mathbf{x})$ after taking one step of Algorithm~\ref{alg:In-BiCoG}.
\begin{lemma}\label{lem:one-step}
Let $\{\mathbf{x}_k\}_{k=0}^{K}$ be the sequence generated by Algorithm~\ref{alg:In-BiCoG}.
Suppose Assumptions \ref{assum:upper} and \ref{assum:lower} hold and $\gamma_k = \gamma$. Then for any $k\geq 0$ we have
\begin{align}\label{eq:one-step-f}
\ell(\mathbf{x}_{k+1}) &\leq \ell(\mathbf{x}_{k}) - \gamma \mathcal{G} (\mathbf{x}_k) + \gamma \mathbf{C}_2 \beta^k D_0^y D_{\mathcal{X}} + \frac{\gamma^2 D_{\mathcal{X}}^2 \mathbf{L}_{\mathbf{y}} \beta}{1-\beta} + C_{yx}^{g}\Big[ \gamma D_{\mathcal{X}} \rho^{k+1} \|\mathbf{w}_0 - \mathbf{v}(\mathbf{x}_0) \| \nonumber \\
&\quad + \frac{\gamma^2 D_{\mathcal{X}}^2 \rho \mathbf{C}_{\mathbf{v}}}{1-\rho}+ \frac{\gamma D_{\mathcal{X}} D_0^y \mathbf{C}_1\eta \rho^{k+2}}{\rho - \beta} + \frac{\gamma^2 D_{\mathcal{X}}^2 \mathbf{L}_{\mathbf{y}} \mathbf{C}_1 \beta \eta}{(1-\beta)(1-\rho)}\Big] + \frac{1}{2} \mathbf{L}_{\ell} \gamma^2 D_{\mathcal{X}}^2.
\end{align}
\end{lemma}
\begin{proof}
Note that according to Lemma \ref{lem:v_b}-(III), $\ell(\cdot)$ has a Lipschitz continuous gradient, which implies that
\begin{align}
\ell(\mathbf{x}_{k+1}) &\leq \ell(\mathbf{x}_{k}) + \gamma \langle\nabla \ell(\mathbf{x}_{k}), \mathbf{s}_{k} - \mathbf{x}_k\rangle +\frac{1}{2}\mathbf{L}_{\ell}\gamma^2\|\mathbf{s}_{k} - \mathbf{x}_k\|^2\nonumber \\
&= \ell(\mathbf{x}_{k}) + \gamma \langle F_k, \mathbf{s}_{k} - \mathbf{x}_k\rangle + \gamma \langle\nabla \ell (\mathbf{x}_k) - F_k, \mathbf{s}_{k} - \mathbf{x}_k\rangle +\frac{1}{2}\mathbf{L}_{\ell}\gamma^2\|\mathbf{s}_{k} - \mathbf{x}_k\|^2, \label{pr:step-fw-1}
\end{align}
where the last equality follows from adding and subtracting the term $\gamma \langle F_k, \mathbf{s}_{k} - \mathbf{x}_k\rangle$ on the RHS. Using the definitions of $\mathbf{s}_k$ and $\mathcal{G}(\mathbf{x})$, we immediately observe that
\begin{align}
\langle F_k, \mathbf{s}_{k} - \mathbf{x}_k\rangle &= \min_{\mathbf{s}' \in \mathcal{X}}~\langle F_k, \mathbf{s}' - \mathbf{x}_k\rangle \nonumber \\
&\leq \langle F_k, \mathbf{s}- \mathbf{x}_k\rangle \nonumber \\
& = \langle\nabla \ell(\mathbf{x}_k), \mathbf{s}- \mathbf{x}_k\rangle +\langle F_k -\nabla \ell(\mathbf{x}_k), \mathbf{s}- \mathbf{x}_k\rangle \nonumber \\
&\leq -\mathcal{G}(\mathbf{x}_k) +\langle F_k -\nabla \ell(\mathbf{x}_k), \mathbf{s}- \mathbf{x}_k\rangle, \label{pr:fw_upper_bound}
\end{align}
where $\mathbf{s}\in\mathcal{X}$ is chosen as a maximizer defining $\mathcal{G}(\mathbf{x}_k)$, so that $\langle\nabla \ell(\mathbf{x}_k), \mathbf{s}- \mathbf{x}_k\rangle = -\mathcal{G}(\mathbf{x}_k)$. Next, combining \eqref{pr:step-fw-1} with \eqref{pr:fw_upper_bound} followed by the Cauchy--Schwarz inequality leads to
\begin{align}
\ell(\mathbf{x}_{k+1}) &\leq \ell(\mathbf{x}_{k}) - \gamma \mathcal{G}(\mathbf{x}_k)+\gamma \|\nabla \ell(\mathbf{x}_{k}) -F_k\| \,\|\mathbf{s}_{k} - \mathbf{s}\|+\frac{1}{2} \mathbf{L}_{\ell}\gamma^2\|\mathbf{s}_{k} - \mathbf{x}_k\|^2.\label{pr:step-fw-3}
\end{align}
Finally, using the result of Lemma~\ref{lem:ell-es} together with the boundedness of the set $\mathcal{X}$, we conclude the desired result.
\end{proof}

\section{Proof of Theorem~\ref{thm:convex-upper-bound}}\label{appen:convex}
Since $\ell$ is convex, from the definition of $\mathcal{G}(\mathbf{x}_k)$ in \eqref{eq:FW_gap} we have
\begin{align}\label{eq:conv-G}
\mathcal{G}(\mathbf{x}_k) = \max_{\mathbf{s}\in \mathcal{X}}\langle\nabla \ell(\mathbf{x}_k),\mathbf{x}_k-\mathbf{s}\rangle \geq \langle\nabla \ell(\mathbf{x}_k),\mathbf{x}_k-\mathbf{x}^*\rangle \geq \ell(\mathbf{x}_k)- \ell(\mathbf{x}^*),
\end{align}
where the first inequality follows by taking $\mathbf{s}=\mathbf{x}^*\in\mathcal{X}$ and the second from the convexity of $\ell$. We assume a fixed step-size in Theorem~\ref{thm:convex-upper-bound} and set $\gamma_k=\gamma$.
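Before combining these bounds, the inequality \eqref{eq:conv-G} can be sanity-checked numerically. The following minimal sketch is our own illustration: the convex quadratic objective and the probability-simplex constraint set are assumptions made only for the example and are not the setting of Theorem~\ref{thm:convex-upper-bound}.
\begin{verbatim}
# Minimal sketch (assumed toy problem): the Frank-Wolfe gap of eq. (conv-G),
#   G(x) = max_{s in X} <grad ell(x), x - s>,
# for a convex quadratic over the probability simplex, checking numerically
# that G(x) >= ell(x) - ell(x*).
import numpy as np

rng = np.random.default_rng(2)
n = 4
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)                   # ell(x) = 0.5 x^T A x is convex
ell = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x

def fw_gap(x):
    g = grad(x)
    # A linear function over the simplex attains its extremes at vertices e_i.
    return max(g @ x - g[i] for i in range(n))

# Crude (conservative) estimate of the minimum of ell over the simplex.
S = rng.dirichlet(np.ones(n), size=20000)
ell_star = min(ell(s) for s in S)

x = np.ones(n) / n                        # a feasible point
print(fw_gap(x) >= ell(x) - ell_star)     # prints True
\end{verbatim}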
Combining the result of Lemma~\ref{lem:one-step} with \eqref{eq:conv-G} leads to
\begin{align}\label{eq:convex-one-step1}
\ell(\mathbf{x}_{k+1}) &\leq \ell(\mathbf{x}_{k}) - \gamma (\ell(\mathbf{x}_k) - \ell(\mathbf{x}^*)) + \gamma \mathbf{C}_2 \beta^k D_0^y D_{\mathcal{X}} + \frac{\gamma^2 D_{\mathcal{X}}^2 \mathbf{L}_{\mathbf{y}} \beta}{1-\beta} + C_{yx}^{g}\Big[ \gamma D_{\mathcal{X}} \rho^{k+1} \|\mathbf{w}_0 - \mathbf{v}(\mathbf{x}_0) \| \nonumber \\
&\quad + \frac{\gamma^2 D_{\mathcal{X}}^2 \rho \mathbf{C}_{\mathbf{v}}}{1-\rho} + \frac{\gamma D_{\mathcal{X}} D_0^y \mathbf{C}_1 \eta \rho^{k+2}}{\rho - \beta} + \frac{\gamma^2 D_{\mathcal{X}}^2 \mathbf{L}_{\mathbf{y}} \mathbf{C}_1 \beta \eta}{(1-\beta)(1-\rho)}\Big] + \frac{1}{2} \mathbf{L}_{\ell} \gamma^2 D_{\mathcal{X}}^2.
\end{align}
Subtracting $\ell(\mathbf{x}^*)$ from both sides, we get
\begin{align}\label{eq:convex-one-step2}
\ell(\mathbf{x}_{k+1}) - \ell(\mathbf{x}^*) &\leq (1- \gamma )( \ell(\mathbf{x}_k) - \ell(\mathbf{x}^* )) + \mathcal{R}_k(\gamma),
\end{align}
where
\begin{align*}
\mathcal{R}_k(\gamma)&\triangleq \gamma \mathbf{C}_2 \beta^k D_0^y D_{\mathcal{X}} + \frac{\gamma^2 D_{\mathcal{X}}^2 \mathbf{L}_{\mathbf{y}} \beta}{1-\beta} + C_{yx}^{g}\Big[ \gamma D_{\mathcal{X}} \rho^{k+1} \|\mathbf{w}_0 - \mathbf{v}(\mathbf{x}_0) \| \nonumber \\
& \quad + \frac{\gamma^2 D_{\mathcal{X}}^2 \rho \mathbf{C}_{\mathbf{v}}}{1-\rho} + \frac{\gamma D_{\mathcal{X}} D_0^y \mathbf{C}_1 \eta \rho^{k+2}}{\rho - \beta} + \frac{\gamma^2 D_{\mathcal{X}}^2 \mathbf{L}_{\mathbf{y}} \mathbf{C}_1 \beta \eta}{(1-\beta)(1-\rho)}\Big] + \frac{1}{2} \mathbf{L}_{\ell} \gamma^2 D_{\mathcal{X}}^2.
\end{align*}
Applying \eqref{eq:convex-one-step2} recursively leads to the desired result.
\qed

\section{Proof of Corollary~\ref{cr:convex_upper-bound}}\label{appen:convex_coro}
We start the proof from the result of Theorem~\ref{thm:convex-upper-bound}, i.e.,
\begin{align}\label{eq:thm-conv-result}
\ell(\mathbf{x}_K) - \ell(\mathbf{x}^*) &\leq (1- \gamma )^K( \ell(\mathbf{x}_0) - \ell(\mathbf{x}^* )) + \sum_{k=0}^{K-1}(1-\gamma)^{K-k}\mathcal{R}_k(\gamma).
\end{align}
Note that
\begin{align*}
&\sum_{k=0}^{K-1}(1-\gamma)^{K-k}\mathcal{R}_k(\gamma)\nonumber\\
&= \mathbf{C}_2 D_0^y D_{\mathcal{X}} \Big[\sum_{k=0}^{K-1}(1-\gamma)^{K-k} \gamma \beta^k \Big] + \frac{D_{\mathcal{X}}^2 \mathbf{L}_{\mathbf{y}} \beta}{1-\beta} \Big[\sum_{k=0}^{K-1}(1-\gamma)^{K-k} \gamma^2 \Big] \nonumber \\
& \quad + C_{yx}^{g}\Big( \rho D_{\mathcal{X}} \|\mathbf{w}_0 - \mathbf{v}(\mathbf{x}_0) \| \Big[\sum_{k=0}^{K-1}(1-\gamma)^{K-k} \gamma \rho^{k} \Big] + \frac{ D_{\mathcal{X}}^2 \rho \mathbf{C}_{\mathbf{v}}}{1-\rho}\Big[\sum_{k=0}^{K-1}(1-\gamma)^{K-k} \gamma^2 \Big] \nonumber \\
& \quad + \frac{ D_{\mathcal{X}} D_0^y \mathbf{C}_1 \eta \rho^{2}}{\rho - \beta} \Big[\sum_{k=0}^{K-1}(1-\gamma)^{K-k} \gamma \rho^k \Big]+ \frac{ D_{\mathcal{X}}^2 \mathbf{L}_{\mathbf{y}} \mathbf{C}_1 \beta \eta}{(1-\beta)(1-\rho)} \Big[\sum_{k=0}^{K-1}(1-\gamma)^{K-k} \gamma^2 \Big]\Big) \nonumber \\
& \quad + \frac{1}{2} \mathbf{L}_{\ell} D_{\mathcal{X}}^2
\Big[\sum_{k=0}^{K-1}(1-\gamma)^{K-k} \gamma^2 \Big].
\end{align*}
Moreover, $\mathcal{R}_k(\gamma)=\mathcal{O}(\gamma\rho^k+\gamma^2)$. One can easily verify that $\sum_{k=0}^{K-1}(1-\gamma)^{K-k}\gamma^2\leq \gamma(1-\gamma)$ and $\sum_{k=0}^{K-1}(1-\gamma)^{K-k}\gamma\rho^k\leq \frac{\gamma(1-\gamma)}{\abs{1-\gamma-\rho}}$, and similarly with $\beta$ in place of $\rho$. Combining these bounds with the above identity, we conclude that
\begin{align}
&\sum_{k=0}^{K-1}(1-\gamma)^{K-k}\mathcal{R}_k(\gamma)\nonumber\\
&\leq \frac{ \mathbf{C}_2 D_0^y D_{\mathcal{X}}\gamma(1-\gamma)}{\abs{1-\gamma-\beta}} + \frac{D_{\mathcal{X}}^2 \mathbf{L}_{\mathbf{y}} \beta \gamma(1-\gamma)}{1-\beta} + C_{yx}^{g}\Big( \frac{ D_{\mathcal{X}} \rho \gamma(1-\gamma)}{\abs{1-\gamma-\rho}} \|\mathbf{w}_0 - \mathbf{v}(\mathbf{x}_0) \| \nonumber \\
& \quad + \frac{ D_{\mathcal{X}}^2 \mathbf{C}_{\mathbf{v}} \rho \gamma(1-\gamma) }{1-\rho} + \frac{ D_{\mathcal{X}} D_0^y \mathbf{C}_1 \eta \rho^{2} \gamma(1-\gamma)}{(\rho - \beta) \abs{1-\gamma-\rho}} + \frac{ D_{\mathcal{X}}^2 \mathbf{L}_{\mathbf{y}} \mathbf{C}_1 \eta \beta \gamma(1-\gamma)}{(1-\beta)(1-\rho)} \Big) \nonumber \\
& \quad + \frac{1}{2}
\mathbf{L}_{\ell} D_{\mathcal{X}}^2\gamma(1-\gamma) \nonumber \\
&= \mathcal{O}(\gamma).
\end{align}
Therefore, using the above inequality within \eqref{eq:thm-conv-result} we conclude that $\ell(\mathbf{x}_K) - \ell(\mathbf{x}^*) \leq (1- \gamma )^K( \ell(\mathbf{x}_0) - \ell(\mathbf{x}^* )) + \mathcal{O}(\gamma)$. Next, we show that by selecting $\gamma=\log(K)/K$ we have $(1-\gamma)^K\leq 1/K$. In fact, for any $x>0$, $\log(x)\geq 1-\frac{1}{x}$, which implies that $\log(\frac{1}{1-\gamma})\geq \gamma=\log(K)/K$, hence $(\frac{1}{1-\gamma})^K\geq K$. Putting the pieces together, we conclude that $\ell(\mathbf{x}_K) - \ell(\mathbf{x}^*) \leq \mathcal{O}(\log(K)/K)$. Therefore, to achieve $\ell(\mathbf{x}_K) - \ell(\mathbf{x}^*) \leq \epsilon$, Algorithm~\ref{alg:In-BiCoG} requires at most $\mathcal{O}(\epsilon^{-1}\log(\epsilon^{-1}))$ iterations.
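For completeness, the two elementary sum bounds invoked above follow from the geometric series (here $0<\gamma<1$, $0<\rho<1$ and $\rho\neq 1-\gamma$):
\begin{align*}
\sum_{k=0}^{K-1}(1-\gamma)^{K-k}\gamma^2 &= \gamma^2\sum_{j=1}^{K}(1-\gamma)^{j} \leq \gamma^2\,\frac{1-\gamma}{\gamma}=\gamma(1-\gamma),\\
\sum_{k=0}^{K-1}(1-\gamma)^{K-k}\gamma\rho^k &= \gamma(1-\gamma)\,\frac{(1-\gamma)^{K}-\rho^{K}}{(1-\gamma)-\rho}\leq \frac{\gamma(1-\gamma)}{\abs{1-\gamma-\rho}}.
\end{align*}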
\qed

\section{Proof of Theorem~\ref{thm:nonconvex-upper-bound}}\label{appen: nonconvex}
Recall that from Lemma~\ref{lem:one-step} we have
\begin{align*}
\mathcal{G}(\mathbf{x}_{k}) &\leq \frac{\ell(\mathbf{x}_{k}) - \ell(\mathbf{x}_{k+1})}{\gamma}+\mathbf{C}_2 \beta^k D_0^y D_{\mathcal{X}}+ \frac{\gamma D_{\mathcal{X}}^2 \mathbf{L}_{\mathbf{y}} \beta}{1-\beta} + C_{yx}^{g}\Big[ D_{\mathcal{X}} \rho^{k+1} \|\mathbf{w}_0 - \mathbf{v}(\mathbf{x}_0) \| + \frac{\gamma D_{\mathcal{X}}^2 \rho \mathbf{C}_{\mathbf{v}}}{1-\rho} \nonumber \\
&\quad + \frac{ D_{\mathcal{X}} D_0^y \mathbf{C}_1 \eta \rho^{k+2}}{\rho - \beta} + \frac{\gamma D_{\mathcal{X}}^2 \mathbf{L}_{\mathbf{y}} \mathbf{C}_1 \beta \eta}{(1-\beta)(1-\rho)}\Big] + \frac{1}{2} \mathbf{L}_{\ell} \gamma D_{\mathcal{X}}^2.
\end{align*}
Summing both sides of the above inequality from $k=0$ to $K-1$, we get
\begin{align*}
\sum_{k=0}^{K-1}\mathcal{G}(\mathbf{x}_k) &\leq \frac{\ell(\mathbf{x}_0) - \ell(\mathbf{x}_K)}{\gamma}+ \frac{\mathbf{C}_2 D_0^y D_{\mathcal{X}}}{1- \beta}+ K \frac{\gamma D_{\mathcal{X}}^2\mathbf{L}_{\mathbf{y}} \beta}{1-\beta} + C_{yx}^{g}\Big[ \frac{\rho D_{\mathcal{X}} \|\mathbf{w}_0 - \mathbf{v}(\mathbf{x}_0) \|}{ 1-\rho} \nonumber \\
&\quad + K \frac{\gamma D_{\mathcal{X}}^2 \rho \mathbf{C}_{\mathbf{v}}}{1-\rho} + \frac{ D_{\mathcal{X}} D_0^y \mathbf{C}_1 \eta \rho^{2}}{(1-\rho)(\rho - \beta)} + K \frac{\gamma D_{\mathcal{X}}^2 \mathbf{L}_{\mathbf{y}} \mathbf{C}_1 \beta \eta}{(1-\beta)(1-\rho)}\Big] + \frac{K}{2} \mathbf{L}_{\ell} \gamma D_{\mathcal{X}}^2,
\end{align*}
where in the above inequality we use the fact that $\sum_{i=0}^{K} \beta^{i} \leq \frac{1}{1-\beta}$.
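Similarly, the $\rho$-dependent terms are summed via
\begin{equation*}
\sum_{k=0}^{K-1}\rho^{k+1}\leq \frac{\rho}{1-\rho}
\qquad\text{and}\qquad
\sum_{k=0}^{K-1}\rho^{k+2}\leq \frac{\rho^{2}}{1-\rho},
\end{equation*}
which produce the factors $\rho/(1-\rho)$ and $\rho^{2}/\big((1-\rho)(\rho-\beta)\big)$ in the display above.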
Next, dividing both sides of the above inequality by $K$ and letting $\mathcal{G}_{k^*}$ denote the smallest gap value over the iterations $k=0$ to $K-1$, i.e.,
\begin{equation*}
\mathcal{G}_{k^*} \triangleq \min_{0\leq k \leq K-1}~\mathcal{G}(\mathbf{x}_k) \leq \frac{1}{K} \sum_{k=0}^{K-1} \mathcal{G}(\mathbf{x}_k),
\end{equation*}
we obtain
\begin{align}\label{eq:gap-bound-general}
\mathcal{G}_{k^*} &\leq \frac{\ell(\mathbf{x}_0) -\ell(\mathbf{x}_K)}{K \gamma}+ \frac{\gamma D_{\mathcal{X}} \mathbf{L}_{\mathbf{y}} \beta}{1-\beta}+ \frac{\gamma D_{\mathcal{X}}^2 \rho \mathbf{C}_{\mathbf{v}} C_{yx}^{g} \rho}{1-\rho} +\frac{\gamma D_{\mathcal{X}}^2 C_{yx}^{g} \mathbf{L}_{\mathbf{y}} \mathbf{C}_1 \beta \eta}{(1-\beta)(1-\rho)} + \frac{1}{2} \mathbf{L}_{\ell}\gamma D_{\mathcal{X}}^2 \nonumber \\
&\quad + \frac{\mathbf{C}_2 D_0^y D_{\mathcal{X}} \beta}{K(1- \beta)}+ \frac{D_{\mathcal{X}} C_{yx}^{g} \rho \|\mathbf{w}_0 - \mathbf{v}(\mathbf{x}_0) \|}{K(1- \rho)}+ \frac{ D_{\mathcal{X}} D_0^y C_{yx}^{g} \mathbf{C}_1 \eta \rho^2}{K(1-\beta)(1-\rho)}.
\end{align}
The desired result follows immediately from \eqref{eq:gap-bound-general} and the fact that $\ell(\mathbf{x}^*)\leq \ell(\mathbf{x}_K)$. \qed

\section{Proof of Corollary~\ref{cr:nonconvex_upper-bound}}\label{appen:nonconvex_coro}
We begin the proof by using the result of Theorem~\ref{thm:nonconvex-upper-bound} and substituting $\gamma = \frac{1}{\sqrt{K}}$:
\begin{align*}
\mathcal{G}_{k^*} &\leq \frac{\ell(\mathbf{x}_0) -\ell(\mathbf{x}^*)}{\sqrt{K}}+ \frac{ D_{\mathcal{X}} \mathbf{L}_{\mathbf{y}} \beta}{\sqrt{K}(1-\beta)}+ \frac{ D_{\mathcal{X}}^2 \rho \mathbf{C}_{\mathbf{v}} C_{yx}^{g} \rho}{\sqrt{K}(1-\rho)} +\frac{ D_{\mathcal{X}}^2 C_{yx}^{g} \mathbf{L}_{\mathbf{y}} \mathbf{C}_1 \beta \eta}{\sqrt{K}(1-\beta)(1-\rho)} + \frac{1}{2\sqrt{K}} \mathbf{L}_{\ell} D_{\mathcal{X}}^2 \nonumber \\
&\quad + \frac{\mathbf{C}_2 D_0^y D_{\mathcal{X}} \beta}{K(1- \beta)}+ \frac{D_{\mathcal{X}} C_{yx}^{g} \rho \|\mathbf{w}_0 - \mathbf{v}(\mathbf{x}_0) \|}{K(1- \rho)}+ \frac{ D_{\mathcal{X}} D_0^y C_{yx}^{g} \mathbf{C}_1 \eta \rho^2}{K(1-\beta)(1-\rho)}\nonumber\\
&=\mathcal{O}(1/\sqrt{K}).
\end{align*}
Finally, note that to achieve $\mathcal{G}_{k^*}\leq \epsilon$, Algorithm~\ref{alg:In-BiCoG} requires at most $\mathcal{O}(\epsilon^{-2})$ iterations. \qed

\section{Additional Experiments}\label{sec:additional_example}
In this section, we provide more details about the experiments conducted in Section~\ref{sec:numeric} as well as some additional experiments.

\subsection{Experiment Details}\label{appen:ex_detail}
We first give further details of the numerical experiments in Section~\ref{sec:numeric}. The MATLAB code is also included in the supplementary material. For completeness, we briefly review the update rules of SBFW~\citep{akhtar2022projection} and TTSA~\citep{hong2020two} for the setting considered in problem~\eqref{eq:bilevel}. In the following, we use $\mathcal{P}_{\mathcal{X}}(\cdot)$ to denote the Euclidean projection onto the set $\mathcal{X}$.
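We also recall a standard fact (not specific to our analysis) that motivates the comparison: while TTSA requires the Euclidean projection $\mathcal{P}_{\mathcal{X}}$, conditional-gradient-type methods such as SBFW and IBCG only call a linear minimization oracle over $\mathcal{X}$. For instance, when $\mathcal{X}$ is a nuclear-norm ball of radius $\alpha$ (such as the constraint used in the matrix completion experiment below), the oracle admits the closed form
\begin{equation*}
\argmin_{\|S\|_{*}\leq \alpha}\ \langle S, D \rangle = -\alpha\, u_1 v_1^{\top},
\end{equation*}
where $u_1, v_1$ are the leading singular vectors of $D$; the oracle therefore only needs the top singular pair, whereas the Euclidean projection onto the same set requires a full SVD.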
Each iteration of SBFW performs the following updates:
\begin{align*}
& \mathbf{y}_k = \mathbf{y}_{k-1} - \delta_k \nabla_y g (\mathbf{x}_{k-1}, \mathbf{y}_{k-1}),\\
&\mathbf{d}_k = (1- \rho_k)(\mathbf{d}_{k-1} - h(\mathbf{x}_{k-1}, \mathbf{y}_{k-1})) + h(\mathbf{x}_k, \mathbf{y}_k),\\
&\mathbf{s}_k = \argmin_{\mathbf{s} \in \mathcal{X}} \langle \mathbf{s}, \mathbf{d}_k \rangle,\\
&\mathbf{x}_{k+1} = (1 - \eta_k) \mathbf{x}_k + \eta_k \mathbf{s}_k.
\end{align*}
Based on the theoretical analysis in \cite{akhtar2022projection}, $\rho_k = \frac{2}{k^{1/2}}$, $\eta_k = \frac{2}{{(k+1)}^{3/4}}$, and $\delta_k = \frac{a_0}{k^{1/2}}$, where $a_0 = \min \Big\{ \frac{2}{3 \mu_g} , \frac{\mu_g}{2 L_g^2}\Big\}$. Moreover, $h(\mathbf{x}_k, \mathbf{y}_k)$ is a biased estimator of the gradient $\nabla\ell(\mathbf{x}_k)$ of the surrogate, computed as follows:
\begin{align*}
h(\mathbf{x}_k, \mathbf{y}_k) = \nabla_{x}f(\mathbf{x}_k,\mathbf{y}_k) - M(\mathbf{x}_k, \mathbf{y}_k) \nabla_{y}f(\mathbf{x}_k,\mathbf{y}_k),
\end{align*}
where the term $M(\mathbf{x}_k, \mathbf{y}_k)$ is a biased estimator of $\nabla_{yx}g(\mathbf{x}_k,\mathbf{y}_k)[\nabla_{yy}g(\mathbf{x}_k,\mathbf{y}_k)]^{-1}$ with bounded variance, whose explicit form is
\begin{align*}
M(\mathbf{x}_k, \mathbf{y}_k) = \nabla_{yx}g(\mathbf{x}_k,\mathbf{y}_k) \times \Big[ \frac{k}{L_g} \prod_{i=1}^{l}\Big( I - \frac{1}{L_g}\nabla_{yy} g (\mathbf{x}_k, \mathbf{y}_k)\Big) \Big],
\end{align*}
and $l\in \{1,\hdots, k\}$ is an integer selected uniformly at random.
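The bracketed factor in $M(\mathbf{x}_k, \mathbf{y}_k)$ is a randomly truncated Neumann-series approximation of the inverse Hessian: assuming, as is standard in this setting, that $\mu_g I \preceq \nabla_{yy}g(\mathbf{x}_k,\mathbf{y}_k) \preceq L_g I$, one has
\begin{equation*}
[\nabla_{yy} g(\mathbf{x}_k,\mathbf{y}_k)]^{-1} = \frac{1}{L_g}\sum_{i=0}^{\infty}\Big( I - \frac{1}{L_g}\nabla_{yy} g(\mathbf{x}_k,\mathbf{y}_k)\Big)^{i},
\end{equation*}
and the random index $l$ truncates this series, while the factor $k$ compensates for the uniform averaging over $l$.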
The steps of the TTSA algorithm are given by
\begin{align*}
&\mathbf{y}_{k+1} = \mathbf{y}_k - \beta h_k ^ g,\\
&\mathbf{x}_{k+1} = \mathcal{P}_{\mathcal{X}} ( \mathbf{x}_k - \alpha h_k ^f),\\
&h_k^g =\nabla_y g (\mathbf{x}_k, \mathbf{y}_k),\\
&h_k^f =\nabla_{x}f(\mathbf{x}_k,\mathbf{y}_k) - \nabla_{yx}g(\mathbf{x}_k,\mathbf{y}_k) \times \Big[ \frac{t_{\max}(k) c_h}{L_g} \prod_{i=1}^{p}\Big( I - \frac{c_h}{L_g}\nabla_{yy} g (\mathbf{x}_k, \mathbf{y}_k)\Big) \Big] \nabla_{y}f(\mathbf{x}_k,\mathbf{y}_k),
\end{align*}
where, following their theory, we define $L = L_x^f + \frac{L_y^f C_{yx}^g}{\mu_g} + C_y^f \Big( \frac{L_{yx}^g}{\mu_g} + \frac{L_{yy}^g C_{yx}^g}{\mu_g^2}\Big)$ and $L_y = \frac{C_{yx}^{g}}{\mu_g}$, and then set $\alpha = \min\Big\{ \frac{\mu_g^2}{8 L_y L L_g^2}, \frac{1}{4 L_y L} K^{-3/5} \Big\}$, $\beta = \min\Big\{ \frac{\mu_g}{ L_g^2}, \frac{2}{\mu_g} K^{-2/5} \Big\}$, $t_{\max}(k) = \frac{L_g}{\mu_g}\log(k+1)$, $p \in \{0, \hdots, t_{\max}(k)-1\}$, and $c_h \in (0,1]$.

\subsection{Toy example}
Recall the toy example problem stated in \eqref{ex:toy}. We implemented our proposed method and compared it with SBFW \cite{akhtar2022projection}. It should be noted that the SBFW method uses a biased estimator of $(\nabla_{yy}g(\lambda,\theta))^{-1}=(A^\top A)^{-1}$ whose bias is upper bounded by $\frac{2}{\mu_g}$ (see Lemma 3.2 in \cite{ghadimi2018approximation}). Figure~\ref{fig:toy2} illustrates the iteration trajectories of both methods for $\mu_g = 0.1$ and $K=10^3$, where we also include a variant of SBFW in which the Hessian inverse matrix is provided explicitly to the algorithm. The step-sizes for both methods are selected as suggested by their theoretical analysis. In Figure~\ref{fig:toy1}, we observed that our method converges to the optimal solution while SBFW fails to converge. This behavior of SBFW worsens for smaller values of $\mu_g$.
Moreover, it can be observed in Figure~\ref{fig:toy2} that, despite incorporating the Hessian inverse matrix in the SBFW method, the algorithm's effectiveness is compromised by the excessively conservative step-sizes dictated by its theoretical analysis. Consequently, the algorithm fails to converge to the optimal point effectively. To address this issue, we tune their step-sizes, i.e., we scale the parameters $\delta$ and $\eta$ in their method by factors of 5 and 0.1, respectively. With this tuning, Figure~\ref{fig:toy3} shows that SBFW with the Hessian inverse matrix performs better and converges to the optimal solution. In other words, only by using the Hessian inverse together with tuned step-sizes does their method converge to the optimal solution, while our method consistently shows robust behavior.
\begin{figure}
\centering
\includegraphics[scale=0.35]{plot3_new.png} \quad \includegraphics[scale=0.35]{upper3_new.png}
\caption{The performance of IBCG (blue) vs SBFW (red) and SBFW with Hessian inverse (green) on Problem \ref{ex:toy} when $\mu_g = 0.1$. Plots from left to right are trajectories of $\theta_k$ and $f(\lambda_k,\theta_k)- f^*$.}
\label{fig:toy2}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.35]{plot4_new.png} \quad \includegraphics[scale=0.35]{upper4_new.png}
\caption{The performance of IBCG (blue) vs SBFW (red) and SBFW with Hessian inverse (green) on Problem \ref{ex:toy} when $\mu_g = 0.1$ and the SBFW parameters are tuned. Plots from left to right are trajectories of $\theta_k$ and $f(\lambda_k,\theta_k)- f^*$.}
\label{fig:toy3}
\end{figure}

\subsection{Matrix completion with denoising}\label{appen: matrix_comp}
\noindent\textbf{Dataset Generation.} We create an observation matrix $M = \hat{X}+E$. In this setting, $\hat{X} = WW^{\top}$, where $W \in \mathbb{R}^{n \times r}$ has independent normally distributed entries, and $E = \hat{n}(L+L^{\top})$ is a noise matrix, where $L \in \mathbb{R}^{n \times n}$ has independent normally distributed entries and $\hat{n} \in (0,1)$ is the noise factor. During the simulation process, we set $n= 250$, $r = 10$, and $\alpha = \| \hat{X}\|_{*}$.

\noindent\textbf{Initialization.} All the methods start from the same initial points $\mathbf{x}_0$ and $\mathbf{y}_0$, which are generated randomly. We terminate the algorithms when either the maximum number of iterations $K_{\mathrm{max}}=10^4$ or the maximum time limit $T_{\mathrm{max}} = 2 \times 10^2$ seconds is reached.
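A minimal MATLAB sketch of the data generation described above (the noise-factor value $\hat{n}=0.1$ below is an assumed example, not taken from the paper):
\begin{verbatim}
% Synthetic data for the matrix completion with denoising experiment.
n = 250; r = 10;
nhat = 0.1;              % assumed example value of the noise factor in (0,1)
W = randn(n, r);         % factor with independent standard normal entries
Xhat = W * W';           % ground-truth low-rank matrix
L = randn(n, n);
E = nhat * (L + L');     % symmetric noise matrix
M = Xhat + E;            % observed matrix
alpha = sum(svd(Xhat));  % nuclear norm of Xhat, used as the constraint radius
\end{verbatim}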
\noindent\textbf{Implementation Details.} For our method IBCG, we choose the step-size $\gamma = \frac{1}{4\sqrt{K}}$ to avoid instability due to large initial step-sizes. We tuned the step-size $\eta_k$ in the SBFW method by multiplying it by a factor of 0.8, and for the TTSA method, we tuned the step-size $\beta$ by multiplying it by a factor of 0.25.
\end{document}
\begin{document} \markboth{Qi Zhang}{Global well-posedness for the nonlinear PAM equation} \catchline{}{}{}{}{} \title{Global well-posedness for the nonlinear generalized parabolic Anderson model equation } \author{Qi Zhang} \address{Yanqi Lake Beijing Institute of Mathematical Sciences and Applications,\\ Beijing, 101408, China\\ Yau Mathematical Sciences Center, Tsinghua University,\\ Beijing 100084, China\\ [email protected]} \maketitle \begin{history} \received{(Day Month Year)} \revised{(Day Month Year)} \end{history} \begin{abstract} We study the global existence of solutions to the singular nonlinear parabolic Anderson model equation on the $2$-dimensional torus $\mathbb{T}^2$. The method is based on paracontrolled distributions and renormalization. After splitting the original nonlinear parabolic Anderson model equation into two simpler equations, we prove the global well-posedness by a priori estimates and smooth approximations. Furthermore, we prove the uniqueness of the solution by using classical energy estimates. \end{abstract} \keywords{singular SPDEs, paracontrolled distribution, parabolic Anderson model} \ccode{AMS Subject Classification: 35A01, 35A02, 60H17}
\section{Introduction}
We study the following $2$-dimensional nonlinear parabolic Anderson model (PAM) equation
\begin{equation}\label{GPAM}
\partial_t u + (- \Delta+\mu) u = f(u) + u \diamond \xi, \quad u(0)=u_0,
\end{equation}
where $\mu>0$, $u:\mathbb{R}^{+}\times \mathbb{T}^2 \rightarrow \mathbb{R}$, the nonlinear function $f(s) = \sum_{1\leq i \leq k-1} a_i s^i$ is a polynomial from $\mathbb{R}$ to $\mathbb{R}$, and $\xi$ is a spatial white noise on the $2$-dimensional torus $\mathbb{T}^2=(\mathbb{R}/\mathbb{Z})^2$. The Anderson model was originally introduced by Anderson [\refcite{A1958}] as a mathematical description of electron motion in a disordered medium, such as a random potential. In this famous work, Anderson showed that the electron is trapped and remains localized in a random medium. This phenomenon is called Anderson localization in condensed matter physics.

When the spatial dimension $n\geq 2$, the parabolic Anderson model equation is a typical singular stochastic partial differential equation. Even though the classical theory of stochastic partial differential equations has seen great achievements in recent decades, many stochastic partial differential equations from physics are singular and hard to handle by classical methods, such as the parabolic Anderson model equation, the Kardar–Parisi–Zhang (KPZ) equation, and the $\Phi^4_d$ equation. The difference between singular stochastic partial differential equations and classical ones is that the noise in singular stochastic partial differential equations is very rough. Thus the rigorous interpretation of singular stochastic partial differential equations had long been an open problem. In order to study singular stochastic partial differential equations, new mathematical theories, such as regularity structures by Hairer [\refcite{H2014}] and paracontrolled distributions by Gubinelli, Imkeller and Perkowski [\refcite{GIP2015}], have been developed in recent years. Paracontrolled distributions and regularity structures allow a pathwise description of singular stochastic partial differential equations. In this paper, we study the 2-dimensional nonlinear parabolic Anderson model equation in the paracontrolled distribution framework.
Compared with regularity structures, the paracontrolled distribution approach relies on classical PDE techniques, including the Littlewood-Paley decomposition, Besov spaces and paraproduct calculus, and builds on ideas from the theory of controlled rough paths. It is therefore natural to use classical PDE tools to study the parabolic Anderson model equation in the paracontrolled distribution framework. The discrete parabolic Anderson model has been well understood during the past decades, as seen in the surveys [\refcite{CM1994},\refcite{K2015}] and references therein. The well-posedness of the continuous parabolic Anderson model equation was also obtained in [\refcite{GIP2015}, \refcite{H2014}, \refcite{HL2015}] by different methods, including regularity structures, paracontrolled distributions, and a transformation method combined with an elaborate renormalisation procedure. Parabolic equations with other types of purely spatial noise potentials were studied in [\refcite{H2002}, \refcite{HW2021}, \refcite{LS2009}] by Wiener chaos decomposition. We also refer to [\refcite{GH2017}, \refcite{KL2017}] for some solution properties of the parabolic Anderson model equation. In [\refcite{CGP2015}], Chouk, Gairing and Perkowski showed that the solution of the continuous parabolic Anderson model is the universal continuum limit of the 2-dimensional lattice discrete Anderson model.

The parabolic Anderson model equation can also be viewed as a heat equation with a spatial white noise potential $\xi$. Thus the parabolic Anderson model equation is also a linear parabolic equation with the Anderson Hamiltonian $\mathscr{H}$, defined as $\mathscr{H}u:=\Delta u + u\diamond\xi$. The construction and spectrum of the Anderson Hamiltonian on $\mathbb{T}^2$ and $\mathbb{T}^3$ were studied by Allez and Chouk [\refcite{AC2015}] and Labb\'e [\refcite{L2019}]. The semilinear Schr\"{o}dinger equations and wave equations for the Anderson Hamiltonian in two and three dimensions on $\mathbb{T}^2$ and $\mathbb{T}^3$ have been considered in [\refcite{GUZ2020}]. In [\refcite{ZD2021}], we also considered the variational problem associated with the Anderson Hamiltonian in the paracontrolled distribution framework.

Even though local well-posedness results for paracontrolled solutions of the generalized parabolic Anderson model equation were given in [\refcite{BDH2015}, \refcite{GIP2015}] by a fixed point argument, there remain difficulties in obtaining global well-posedness within the paracontrolled approach. In recent years, the global well-posedness of the $\Phi^4_3$ equation was proved in [\refcite{AK17}, \refcite{GH2019}, \refcite{MW2017}]. In these works, the norm of the solution was estimated by using the dissipative property of the nonlinear term. In the present paper, we study the global well-posedness of the nonlinear parabolic Anderson model equation in the paracontrolled distribution framework. We assume that the nonlinear term $f(u)$ satisfies the following dissipative assumption: for every $s\in\mathbb{R}$,
\begin{align}\label{assumf}
-C_0- C_1|s|^k \leq & f(s)s \leq C_0 - C_2|s|^k, \quad k\geq 3, \nonumber \\
f'(s) \leq & l,
\end{align}
where $C_0, C_1, C_2, l>0$ are positive constants. In order to define the singular term $u\diamond\xi$, we carry out a renormalization procedure and use paracontrolled distributions.
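As a concrete illustration of assumption \eqref{assumf} (an example for the reader, not used elsewhere), the cubic nonlinearity $f(s)=-s^3$ satisfies \eqref{assumf} with $k=4$, $C_0=0$, $C_1=C_2=1$ and any $l>0$, since
\begin{equation*}
f(s)s=-s^4=-|s|^4 \qquad\text{and}\qquad f'(s)=-3s^2\leq 0.
\end{equation*}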
Then we decompose the solution into two parts, $u = \phi + \psi$, and we use a localization technique developed from [\refcite{GH2019}] to split the original singular stochastic partial differential equation into two simpler equations:
\begin{equation}\label{decGPAM1}
\left\{
\begin{aligned}
& \partial_t \phi + (- \Delta+\mu)\phi = \Phi, \quad \phi(0) = \phi_0=u_0,\\
& \partial_t \psi + (- \Delta+\mu)\psi = f(\psi)+ \Psi, \quad \psi(0)=0,
\end{aligned}
\right.
\end{equation}
where $\Phi$ contains all the irregular but linear terms, and $\Psi$ contains all the regular terms and the nonlinear terms. In this way, we can handle the irregular part $\phi$ by paracontrolled distribution arguments, and analyze the regular part $\psi$ by classical PDE methods. Since the regularity of the initial value $u_0$ is low, we also introduce a time weight $\tau(t):=1-e^{-t}$ to control the singularity when $t$ is small. Combining these with the dissipative assumption (\ref{assumf}) on the nonlinear term $f$, we establish parabolic Schauder estimates and parabolic coercive estimates, and obtain a priori estimates under suitable time weights. Then we prove the global existence of the solution by a smooth approximation and an Aubin-Lions argument. We also show the uniqueness of the solution by direct energy estimates.

We now state our global well-posedness result. We refer to Section 3.2, Theorem \ref{Gexistence}, for the details of global existence, and Section 3.3, Theorem \ref{Uniq}, for the uniqueness result.
\begin{thm}\label{WP}
Let $u_0 \in \mathscr{C}^{-1}$, and $\alpha \in (2/3,1)$. We denote $\rho =\tau^{1+1/(k-2)+(3\alpha-2)/2}$ for a time weight. Let $\vartheta = (-\Delta + \mu)^{-1}\xi$. Then there exists a solution $(\phi, \psi)$ to system (\ref{decGPAM1}) with
\begin{equation*}
(\phi,\psi) \in [C_{\tau^{1/(k-2)+\alpha/2}}\mathscr{C}^{\alpha} \cap C^{\alpha/2}_{\tau^{1/(k-2)+\alpha/2}}L^{\infty}] \times [C_{\rho}\mathscr{C}^{3\alpha} \cap C^1_{\rho}L^{\infty} \cap C_{\tau^{1/(k-2)}}L^{\infty}],
\end{equation*}
such that $u=\phi+\psi$ is a unique global paracontrolled solution to the nonlinear parabolic Anderson model equation (\ref{GPAM}).
\end{thm}
Throughout the paper, we use the notation $a\lesssim b$ if there exists a constant $C>0$, independent of the variables under consideration, such that $a \leq C\cdot b$, and we write $a\simeq b$ if $a\lesssim b$ and $b \lesssim a$. We also use the notation $C_{x}$ to emphasize that the constant $C$ depends on the quantities $x$. The Fourier transform on the torus $\mathbb{T}^d$ is defined by $\hat{u}(k):=\mathscr{F}_{\mathbb{T}^d}u(k)= \int_{\mathbb{T}^d}e^{-2\pi i k \cdot x}u(x)\,dx$, so that the inverse Fourier transform on the torus $\mathbb{T}^d$ is given by $\mathscr{F}^{-1}_{\mathbb{T}^d}\hat{u}(x) = \sum_{k \in \mathbb{Z}^d}\hat{u}(k)e^{2\pi i k \cdot x}$. The space of Schwartz functions on $\mathbb{T}^d$ is denoted by $\mathcal{S}(\mathbb{T}^d)$ or $\mathcal{S}$. The space of tempered distributions on $\mathbb{T}^d$ is denoted by $\mathcal{S}'(\mathbb{T}^d)$ or $\mathcal{S}'$. We denote $\mathscr{L}:= -\Delta +\mu$ and $\rho =\tau^{1+1/(k-2)+(3\alpha-2)/2}$.

This paper is organized as follows: In Section 2, we revisit some basic notation and estimates for singular SPDEs. In Section 3, we obtain some a priori estimates; we then prove the global well-posedness by a smooth approximation and, using energy estimates, we further prove the uniqueness of the solution. The paper ends with a summary and discussion in Section 4.
\section{Preliminaries}
\subsection{Besov spaces and Bony's paraproduct}
In this subsection, we introduce some basic notation and useful estimates concerning the Littlewood-Paley decomposition, Besov spaces and Bony's paraproduct. For more details, we refer to [\refcite{BCD2011},\refcite{GIP2015}, \refcite{GH2019}]. The Littlewood-Paley decomposition describes the regularity of (generalized) functions via the decomposition of a function into a series of smooth functions with different frequencies. In order to do this, we introduce the following dyadic partition. Let $\varphi: \mathbb{R}^d \rightarrow [0,1]$ be a smooth radial cut-off function such that
\begin{equation*}
\varphi(x)=
\left\{
\begin{aligned}
& 1, \quad \quad \quad |x| \leq 1 \\
& \text{smooth} , 1<|x|<2 \\
& 0, \quad \quad \quad |x|\geq 2.
\end{aligned}
\right.
\end{equation*}
Denote $\varrho(x) = \varphi(x)-\varphi(2^{-1}x)$ and $\chi(x) = 1- \sum_{j\geq 0}\varrho(2^{-j}x)$. Then $\chi, \varrho \in C^{\infty}_c (\mathbb{R}^d)$ are nonnegative radial functions such that
\begin{enumerate}
\item $supp(\chi) \subset B_1(0)$ and $supp(\varrho)\subset \{x \in \mathbb{R}^d: \frac{1}{2}\leq |x| \leq 2 \}$;
\item $\chi(x)+\sum_{j\geq 0}\varrho(2^{-j}x)=1, \quad x\in \mathbb{R}^d$;
\item $supp(\chi)\cap supp(\varrho(2^{-j}x)) = \emptyset $ for $j\geq 1$ and $supp(\varrho(2^{-i}x)) \cap supp(\varrho(2^{-j}x)) = \emptyset $ for $|i-j|\geq 2$.
\end{enumerate}
\begin{defn}
For $u \in \mathcal{S}'(\mathbb{T}^d)$ and $j\geq -1$, the Littlewood-Paley blocks of $u$ are defined as
\begin{equation*}
\Delta_j u = \mathscr{F}^{-1}_{\mathbb{T}^d}(\varrho_j \mathscr{F}_{\mathbb{T}^d}u),
\end{equation*}
where $\varrho_{-1}=\chi$ and $\varrho_j=\varrho(2^{-j}\cdot)$ for $j\geq 0$.
\end{defn}
\begin{defn}
For $\alpha\in\mathbb{R}$, $p,q \in [1,\infty]$, we define
\begin{equation*}
B^{\alpha}_{p,q}(\mathbb{T}^d) = \left\{ u\in \mathcal{S}'(\mathbb{T}^d): \|u\|_{B^{\alpha}_{p,q}(\mathbb{T}^d)}= \left( \sum_{j \geq -1}(2^{j\alpha}\|\Delta_j u\|_{L^p(\mathbb{T}^d)})^q \right)^{1/q} < \infty \right\}.
\end{equation*}
\end{defn}
For $\alpha\in\mathbb{R}$, the H\"{o}lder-Besov space on $\mathbb{T}^d$ is denoted by $\mathscr{C}^{\alpha}=B^{\alpha}_{\infty,\infty}(\mathbb{T}^d)$. We remark that if $\alpha \in (0,\infty)\backslash\mathbb{N}$, then the H\"{o}lder-Besov space $\mathscr{C}^{\alpha}$ coincides with the H\"{o}lder space $C^{\alpha}(\mathbb{T}^d)$. The Sobolev space $H^{\alpha}$ is the same as the Besov space $B^{\alpha}_{2,2}(\mathbb{T}^d)$. For a time weight $\eta$, we write $C_{\eta}\mathscr{C}^{\alpha}$ for the space of continuous maps $\mathbb{R}^{+}\rightarrow \mathscr{C}^{\alpha}$ with norm $\|f\|_{C_{\eta}\mathscr{C}^{\alpha}} = \sup_{t\geq 0}\|\eta(t)f(t)\|_{\mathscr{C}^{\alpha}}$. For $\beta\in (0,1)$, we also denote
\begin{equation*}
C^{\beta}_{\eta}\mathscr{C}^{\alpha} = \{ f\in C_{\eta}\mathscr{C}^{\alpha} : \|f\|_{C^{\beta}_{\eta}\mathscr{C}^{\alpha}}= \|f\|_{C_{\eta}\mathscr{C}^{\alpha}}+\sup_{t>s\geq0}\frac{\|\eta(t)f(t)-\eta(s) f(s)\|_{\mathscr{C}^{\alpha}}}{|t-s|^{\beta}}< \infty \}.
\end{equation*}
The following Bernstein inequality is useful in our estimates, in particular for $L^2$ estimates.
\begin{lem}\label{BerI}
Let $\mathscr{B}$ be a ball, $n \in \mathbb{N}_0$, and $1\leq p \leq q \leq \infty$. Then for every $\lambda >0$ and $u \in L^p$ with $supp(\mathscr{F}u) \subset \lambda \mathscr{B}$, we have
\begin{equation*}
\max_{\mu \in \mathbb{N}^d: |\mu|=n}\|\partial_{\mu} u\|_{L^q} \lesssim C_{n,p,q,\mathscr{B}}\lambda^{n+d(\frac{1}{p}-\frac{1}{q})}\|u\|_{L^p}.
\end{equation*}
\end{lem}
The Besov embedding theorem is useful in regularity estimates.
\begin{lem}\label{Besovem}
Let $1 \leq p_1 \leq p_2 \leq \infty$, $1 \leq q_1 \leq q_2 \leq \infty $, and $\alpha \in\mathbb{R}$. Then we have
\begin{equation*}
B^{\alpha}_{p_1, q_1}(\mathbb{T}^d) \hookrightarrow B^{\alpha -d(1/p_1 -1/p_2)}_{p_2, q_2}(\mathbb{T}^d).
\end{equation*}
\end{lem}
Now we define the localization operators $\mathscr{U}^{N,\gamma}_{\leq} $ and $\mathscr{U}^{N,\gamma}_{>}$ for the high-low frequency decomposition. For every $f \in \mathcal{S}^{\prime}(\mathbb{T}^d)$, we set
\begin{equation}
\mathscr{U}^{N,\gamma}_{\leq} f= \sum_{-1 \leq j \leq N}\Delta_{j} f + \sum_{j > N} 2^{- j \gamma} \Delta_{j} f, \quad \mathscr{U}^{N,\gamma}_{>} f= \sum_{j > N}(1-2^{- j \gamma})\Delta_{j} f.
\end{equation}
\begin{lem}\label{Localization}
Let $N, \gamma>0$ and $f \in \mathcal{S}^{\prime}(\mathbb{T}^d)$. Then for every $\alpha, \delta>0$ and $\beta \in [0,\gamma]$, we have
\begin{equation*}
\left\|\mathscr{U}^{N,\gamma}_{>} f\right\|_{\mathscr{C}^{-\alpha-\delta}} \lesssim 2^{-\delta N}\|f\|_{\mathscr{C}^{-\alpha} }, \quad \|\mathscr{U}^{N,\gamma}_{\leq} f\|_{\mathscr{C}^{-\alpha+\beta}} \lesssim 2^{\beta N}\|f\|_{\mathscr{C}^{-\alpha} }.
\end{equation*}
\end{lem}
\begin{proof}
We estimate
\begin{align}
\|\mathscr{U}^{N,\gamma}_{>} f \|_{\mathscr{C}^{-\alpha-\delta}} = & \sup_{l \geq -1} \left[ 2^{l(-\alpha - \delta)} \|\Delta_{l} (\sum_{j > N}(1-2^{- j \gamma})\Delta_{j} f) \|_{L^{\infty}}\right] \nonumber\\
\leq & 2^{-\delta N}\sup_{l \geq -1} \left[ 2^{-\alpha l} \|\Delta_{l} f \|_{L^{\infty}}\right] \nonumber\\
\leq & 2^{-\delta N} \|f\|_{\mathscr{C}^{-\alpha} }.
\end{align}
Using the same argument, we also have
\begin{align}
\|\mathscr{U}^{N,\gamma}_{\leq} f\|_{\mathscr{C}^{-\alpha+\beta}} = & \sup_{l \geq -1} \left[ 2^{l(-\alpha + \beta)} \|\Delta_{l}(\sum_{-1 \leq j \leq N}\Delta_{j} f + \sum_{j > N} 2^{- j \gamma} \Delta_{j} f ) \|_{L^{\infty}}\right] \nonumber\\
\leq & 2^{\beta N} \|f\|_{\mathscr{C}^{-\alpha} }.
\end{align}
\end{proof}
Now we introduce Bony's paraproduct. Let $u$ and $v$ be tempered distributions in $\mathcal{S}^{\prime}(\mathbb{T}^d)$. Using Littlewood-Paley blocks, the product $uv$ can be (formally) decomposed as
\begin{equation*}
uv = \sum_{j\geq -1}\sum_{i\geq -1} \Delta_{i} u \Delta_{j} v = u\prec v +u\circ v+ u\succ v,
\end{equation*}
where
\begin{equation*}
u\prec v = v\succ u= \sum_{j\geq -1}\sum_{i =-1}^{j-2} \Delta_{i} u \Delta_{j} v \quad \text{and} \quad u\circ v = \sum_{|i-j|\leq 1} \Delta_{i} u\Delta_{j} v.
\end{equation*}
We have the following paraproduct estimates for Bony's paraproduct (see Lemma 2.1 in [\refcite{GIP2015}] or Proposition A.1 in [\refcite{GUZ2020}]).
\begin{lem}\label{Bparaproduct}
For every $\beta \in \mathbb{R}$, we have
\begin{equation*}
\|u\prec v\|_{\mathscr{C}^{\beta}} \lesssim \|u\|_{L^{\infty}}\|v\|_{\mathscr{C}^{\beta}},
\end{equation*}
\begin{equation*}
\|u \prec v\|_{H^{\beta}} \lesssim \|u\|_{L^2} \|v\|_{\mathscr{C}^{\beta + \kappa }} \wedge \|u\|_{L^{\infty}}\|v\|_{H^{\beta}} \quad \text{for all }\kappa >0.
\end{equation*}
If $\beta \in \mathbb{R}$ and $\alpha <0$, we have
\begin{equation*}
\|u\prec v\|_{\mathscr{C}^{\alpha+\beta}} \lesssim \|u\|_{\mathscr{C}^{\alpha}}\|v\|_{\mathscr{C}^{\beta}},
\end{equation*}
\begin{equation*}
\|u \prec v\|_{H^{\alpha + \beta }} \lesssim \|u\|_{H^{\alpha}}\|v\|_{\mathscr{C}^{\beta + \kappa }} \wedge \|u\|_{\mathscr{C}^{\alpha}}\|v\|_{H^{\beta}} \quad \text{for all }\kappa >0.
\end{equation*}
Moreover, if $\alpha+\beta>0$, then
\begin{equation*}
\|u\circ v\|_{\mathscr{C}^{\alpha +\beta}} \lesssim \|u\|_{\mathscr{C}^{\alpha}}\|v\|_{\mathscr{C}^{\beta}},
\end{equation*}
\begin{equation*}
\|u\circ v\|_{H^{\alpha + \beta}} \lesssim \|u\|_{\mathscr{C}^{\alpha}}\|v\|_{H^{\beta}}.
\end{equation*}
\end{lem}
The following commutator estimate is also crucial in the paracontrolled distribution approach (see Lemma 2.4 in [\refcite{GIP2015}] and Proposition A.2 in [\refcite{GUZ2020}]).
\begin{lem}\label{commutatorE}
Assume that $\alpha\in (0,1)$ and $\beta, \gamma \in \mathbb{R}$ are such that $\alpha+\beta +\gamma>0$ and $\beta +\gamma <0$. Then for $u,v,h \in C^{\infty}(\mathbb{T}^d)$, the trilinear operator
\begin{equation*}
C(u,v,h)=(u\prec v)\circ h-u(v\circ h)
\end{equation*}
satisfies the estimate
\begin{equation*}
\|C(u,v,h)\|_{\mathscr{C}^{\alpha+\beta +\gamma}} \lesssim \|u\|_{\mathscr{C}^{\alpha}}\|v\|_{\mathscr{C}^{\beta}}\|h\|_{\mathscr{C}^{\gamma}}.
\end{equation*}
Thus $C$ can be uniquely extended to a bounded trilinear operator from $\mathscr{C}^{\alpha}\times \mathscr{C}^{\beta} \times \mathscr{C}^{\gamma}$ to $\mathscr{C}^{\alpha+\beta +\gamma}$. For the $H^{\alpha}$ spaces, we also have
\begin{equation*}
\|C(u,v,h)\|_{H^{\alpha + \beta +\gamma}} \lesssim \|u\|_{ H^{\alpha}}\|v\|_{H^{\beta}}\|h\|_{\mathscr{C}^{\gamma }}.
\end{equation*}
This implies that $C$ can be uniquely extended to a bounded trilinear operator from $H^{\alpha}\times H^{\beta}\times \mathscr{C}^{\gamma}$ to $H^{\alpha+\beta +\gamma}$.
\end{lem}
For every $u,v,h \in C^{\infty}(\mathbb{T}^d)$, we define the trilinear operator
\begin{equation}
D(u,v,h) = \langle u, h\circ v \rangle - \langle u\prec v, h\rangle.
\end{equation}
We have the following estimate from [\refcite{GUZ2020}].
\begin{lem}\label{Dm}
Let $\alpha\in (0,1)$, $\beta, \gamma \in \mathbb{R}$ be such that $\alpha + \beta + \gamma > 0$ and $\beta + \gamma < 0$. Then we have
\begin{equation*}
|D(u,v,h)| \lesssim \|u\|_{ H^{\alpha}}\|v\|_{H^{\beta}}\|h\|_{\mathscr{C}^{\gamma}}.
\end{equation*}
Thus $D$ can be uniquely extended to a bounded trilinear operator from $H^{\alpha}\times H^{\beta}\times \mathscr{C}^{\gamma}$ to $\mathbb{R}$.
\end{lem}
The following estimate from [\refcite{AC2015}] is useful in this paper.
\begin{lem}\label{CE}
Let $u\in H^{\alpha}$ and $v \in \mathscr{C}^{\beta}$ with $\alpha \in (0,1)$, $\beta \in \mathbb{R}$. Then
\begin{equation*}
\|\mathscr{L}(u\prec v)- u\prec (\mathscr{L}v)\|_{H^{\alpha+\beta+2}} \lesssim \|u\|_{H^{\alpha}}\|v\|_{\mathscr{C}^{\beta}}.
\end{equation*}
\end{lem}
In order to obtain some estimates uniformly in time, we also need the following time-mollified paraproducts from [\refcite{GIP2015}].
\begin{defn} Let $\phi: \mathbb{R}\rightarrow \mathbb{R}^{+}$ be a smooth function with compact support $supp \phi \subset [-1,1]$, and $\int_{\mathbb{R}}\phi(s)ds =1$. Let $\eta$ be a time weight. For all $i \geq -1$, we define the operator $Q_i: C_{\eta}\mathscr{C}^{\alpha} \rightarrow C_{\eta}\mathscr{C}^{\alpha} $ by \begin{equation*} Q_i u(t):= \int_{\mathbb{R}}2^{2i}\phi(2^{2i}(t-s))u(s\vee 0)\eta(s)ds. \end{equation*} And we define the modified paraproduct of $u,v \in C_{\eta}\mathscr{C}^{\alpha}$ by \begin{equation}\label{modifiedpara} u \prec\mkern-10mu\prec v=\sum_{i}\left(\sum_{j=-1}^{i-1}\Delta_{j}(Q_{i} u)\right) \Delta_{i} v. \end{equation} \end{defn} The following two estimates are the useful properties of $\prec\mkern-10mu\prec$ from Lemma 2.17 in [\refcite{GH2019}]. \begin{lem}\label{pprec} Let $\alpha\in (0,1)$, $\beta \in \mathbb{R}$, and let $u\in C\mathscr{C}^{\alpha}\cap C^{\alpha/2} L^{\infty}$ and $v\in C \mathscr{C}^{\beta}$. Then \begin{equation*} \|\mathscr{L}(u\prec\mkern-10mu\prec v)-u\prec\mkern-10mu\prec (\mathscr{L}v)\|_{C\mathscr{C}^{\alpha+\beta-2}} \lesssim (\|u\|_{ C\mathscr{C}^{\alpha}}+ \|u\|_{C^{\alpha} L^{\infty}})\|v\|_{ C \mathscr{C}^{\beta}}, \end{equation*} and \begin{equation*} \|u\prec v -u\prec\mkern-10mu\prec v\|_{C\mathscr{C}^{\alpha+\beta}} \lesssim \|u\|_{C^{\alpha/2} L^{\infty}}\|v\|_{ C\mathscr{C}^{\beta}}. \end{equation*} \end{lem} We will need the following interpolations result for Besov space. \begin{lem}\label{interpolation} Let $\eta$ be time weights, $\gamma>0$, $\theta \geq 0$, and $\psi \in C_{\eta}\mathscr{C}^{\gamma}$. Then for any $\alpha \in [0, \gamma]$, we have \begin{equation} \|\psi\|_{C_{\eta^{1+\theta}}\mathscr{C}^{\alpha}} \lesssim \| \psi\|_{C_{\tau^{1/(k-2)}}L^{\infty}}^{\alpha/\gamma} \| \psi\|_{C_{\eta^{1+\theta\gamma/\alpha}}\mathscr{C}^{\gamma}}^{1-\alpha/\gamma}. \end{equation} Moreover, if $\alpha \in (0,1)$ then \begin{equation} \|\psi\|_{C^{\alpha/2}_{\eta}L^{\infty}} \lesssim \|\psi\|^{1/2}_{C^{\alpha}_{\eta}L^{\infty}}\|\psi\|^{1/2}_{C_{\tau^{1/(k-2)}}L^{\infty}}. \end{equation} \end{lem} \begin{proof} For spatial regularity, it holds \begin{align*} \eta(t)^{1+\theta}\|\Delta_k \psi(t)\|_{L^{\infty}} \lesssim & \left[ \eta(t)\|\Delta_k \psi\|_{L^{\infty}}\right]^{1-\alpha/\gamma}\left[ \eta(t)^{1+\theta\gamma/\alpha}\|\Delta_k \psi\|_{L^{\infty}}\right]^{\alpha/\gamma} \\ \lesssim & 2^{-\alpha k} \left[ \eta(t)\|\Delta_k \psi\|_{L^{\infty}}\right]^{1-\alpha/\gamma} \left[ \eta(t)^{1+\theta\gamma/\alpha}\| \psi\|_{\mathscr{C}^{\gamma}}\right]^{\alpha/\gamma}. \end{align*} Thus for each $t>0$, we have \begin{equation*} \eta(t)^{1+\theta}\|\psi(t)\|_{\mathscr{C}^{\alpha}} \lesssim \left[ \eta(t)\| \psi\|_{L^{\infty}}\right]^{\alpha/\gamma} \left[ \eta(t)^{1+\theta\gamma/\alpha}\| \psi\|_{\mathscr{C}^{\gamma}}\right]^{1-\alpha/\gamma}. \end{equation*} Taking supremum in time, we obtain \begin{equation*} \|\psi\|_{C_{\eta^{1+\theta}}\mathscr{C}^{\alpha}} \lesssim \| \psi\|_{C_{\tau}L^{\infty}}^{\alpha/\gamma} \| \psi\|_{C_{\eta^{1+\theta\gamma/\alpha}}\mathscr{C}^{\gamma}}^{1-\alpha/\gamma}. 
\end{equation*} For time regularity, we have \begin{align*} \|\psi\|_{C^{\alpha/2}_{\eta}L^{\infty}} = & \|\psi\|_{C_{\eta}L^{\infty}} + \sup_{t>s\geq0}\frac{\|\eta(t)\psi(t)-\eta(s) \psi(s)\|_{L^{\infty}}}{|t-s|^{\alpha/2}} \\ \leq & \|\psi\|_{C_{\eta}L^{\infty}} + \sup_{t>s\geq0}\frac{\|\eta(t)\psi(t)-\eta(s) \psi(s)\|^{1/2}_{L^{\infty}}}{|t-s|^{\alpha/2}} \|\psi\|^{1/2}_{C_{\eta}L^{\infty}} \\ \leq & \|\psi\|^{1/2}_{C^{\alpha}_{\eta}L^{\infty}}\|\psi\|^{1/2}_{C_{\eta}L^{\infty}}. \end{align*} This completes the proof. \end{proof} \begin{lem}\label{interpolationH} Let $\beta \in (0,1)$ and $\psi \in H^{\beta}$. Then for arbitrary $\delta>0$, we have \begin{equation} \|\psi\|^2_{H^{\beta}} \lesssim \delta \| \nabla \psi \|_{L^2}^2 + C_{\delta} \|\psi \|_{L^2}^2. \end{equation} \end{lem} \begin{proof} Since $\|\psi\|_{H^{\beta}} \simeq \|\psi\|_{B_{2,2}^{\beta}}$, by Bernstein inequality (Lemma \ref{BerI}), H\"{o}lder inequality and weighted Young inequality, we have \begin{align} \|\psi\|^2_{H^{\beta}} & = \sum_{i\geq -1} 2^{2 \beta k } \|\Delta_{i} \psi\|^2_{L^2} \nonumber \\ & = \sum_{i\geq -1} 2^{2 \beta k } \|\Delta_{i} \psi\|^{2\beta}_{L^2} \|\Delta_{i} \psi\|^{2(1-\beta)}_{L^2} \nonumber \\ & \leq \left[ \sum_{i\geq -1} 2^{2 k }\|\Delta_{i}\psi\|^{2}_{L^2} \right]^{2\beta} \left[ \sum_{i\geq -1} \|\Delta_{i}\psi\|^{2}_{L^2} \right]^{2(1-\beta)} \nonumber \\ & \lesssim\| \nabla \psi \|^{\beta} \|\psi\|^{1-\beta}_{L^2} \nonumber \\ & \lesssim \delta \| \nabla \psi \|_{L^2}^2 + C_{\delta} \|\psi \|_{L^2}^2. \end{align} This completes the proof. \end{proof} \subsection{Renormalization and paracontrolled distributions} The spatial white noise $\xi$ on $\mathbb{T}^2$ is a centered Gaussian process with value in $\mathcal{S}'(\mathbb{T}^2)$ such that for all $f,g\in \mathcal{S}(\mathbb{T}^2)$, we have $\mathbb{E}[\xi(f)\xi(g)]=\langle f,g\rangle_{L^2 (\mathbb{T}^2)}$. Let $(\hat{\xi}(k))_{k\in \mathbb{Z}^2}$ be a sequence of i.i.d. centered complex Gaussian random variables with covariance \begin{equation*} \mathbb{E}(\hat{\xi}(k)\Bar{\hat{\xi}}(l)) = \delta(k-l), \end{equation*} and $\hat{\xi}(k) = \Bar{\hat{\xi}}(-k)$. Then the spatial white noise $\xi$ on $\mathbb{T}^2$ can be defined as follows \begin{equation*} \xi(x) =\sum_{k \in \mathbb{Z}^2} \hat{\xi}(k)e^{2\pi ik \cdot x}. \end{equation*} Moreover, the spatial white noise $\xi$ take value in $\mathscr{C}^{-1-\kappa}$ for all $\kappa>0$. Since $\xi$ is only a distribution, $u\xi$ is ill-defined in classic sense. How to let singular term $u\xi$ make sense is a main challenge in studying the parabolic Anderson mode equation. It is natural to replace $\xi$ by a smooth approximation $\xi_{\epsilon}$ which is given by the convolution of $\xi$ with a rescaled mollifier $\varphi$. More precisely, we let $\varphi: \mathbb{T}^2 \rightarrow \mathbb{R}^{+}$ be a smooth function with $\int_{\mathbb{T}^2}\varphi dt =1$, and define $\xi^{\epsilon} = \epsilon^{-2}\varphi(\epsilon\cdot)\ast\xi$ for $\epsilon >0$ as the mollification of $\xi$. For the PAM equation (\ref{GPAM}), we take \begin{equation*} \vartheta = (-\Delta + \mu)^{-1}\xi = \int_0^{\infty} e^{t(\Delta- \mu)} \xi dt, \end{equation*} where $(e^{t(\Delta - \mu)})_{t\geq 0}$ denotes the semigroup generated by $\Delta - \mu$. Then $\vartheta \in \mathscr{C}^{1-\kappa}$, and $\|\vartheta\|_{\mathscr{C}^{1-\kappa}} \lesssim \|\xi\|_{\mathscr{C}^{-1-\kappa}}$. 
In order to obtain a well-defined area $\vartheta \diamond \xi$, we have to renormalize the product by “subtracting an infinite constant” as following arguments (see Lemma 5.8 in \refcite{GIP2015}). \begin{lem}\label{renormalize} If $\vartheta_{\epsilon} = (-\Delta + \mu)^{-1}\xi_{\epsilon}$, then the wick product $\vartheta \diamond \xi$ can be approximated as \begin{equation*} \lim_{\epsilon\rightarrow 0}\mathbb{E}[\|\vartheta\diamond\xi-(\vartheta_{\epsilon}\circ\xi_{\epsilon}-C_{\epsilon})\|^p_{\mathscr{C}^{-2\kappa}}]=0 \end{equation*} for all $p\geq 1$ and $\kappa >0$ with the renormalization constant \begin{equation*} C_{\epsilon}= \mathbb{E}(\vartheta_{\epsilon}\circ\xi_{\epsilon}) = \sum_{k\in \mathbb{Z}^2} \frac{|\mathscr{F}_{\mathbb{T}^2}\varphi(\epsilon k)|^2}{|k|^2+ \mu} . \end{equation*} \end{lem} Using the modified paraproduct $\prec\mkern-10mu\prec$, we introduce paracontrolled distributions as follows. \begin{defn} Let $\alpha\in (2/3,1)$ and $\beta \in (0,\alpha]$ be such that $2\alpha+\beta >2$. Let $\rho'$ be a time weight. We say a pair $(u, u')\in { C_{\rho'}\mathscr{C}^{\alpha}}\times{ C_{\rho'}\mathscr{C}^{\beta}}$ is called paracontrolled by $\vartheta$ if \begin{equation*} u^{\sharp}:= u-u'\prec\mkern-10mu\prec \vartheta \in { C_{\rho'}\mathscr{C}^{\alpha+\beta}}. \end{equation*} \end{defn} Now we define $u\diamond\xi$ by above the renormalization argument of singular term $\vartheta\diamond\xi$ and paracontrolled distributions. If $u\in C_{\rho}\mathscr{C}^{\alpha}$ is paracontrolled by $\vartheta$: $u^{\sharp}:= u- u\prec\mkern-10mu\prec\vartheta \in C_{\rho}\mathscr{C}^{2\alpha} $, then we define $u\diamond\xi$ as following \begin{align*} u\diamond\xi= & u\prec \xi + u\succ \xi+ u\circ\xi \\ = & u\prec \xi + u\succ \xi +(u\prec\mkern-10mu\prec\vartheta)\circ\xi + u^{\sharp}\circ\xi \\ = & u\prec \xi + u\succ \xi + (u\prec\mkern-10mu\prec\vartheta - u\prec \vartheta)\circ\xi + C(u,\vartheta,\xi) +u (\vartheta \diamond \xi) +u^{\sharp}\circ\xi \\ = & \lim_{\epsilon\rightarrow 0}(u\prec \xi_{\epsilon} + u\succ \xi_{\epsilon} + (u\prec\mkern-10mu\prec\vartheta_{\epsilon} - u\prec \vartheta_{\epsilon})\circ\xi_{\epsilon} + C(u,\vartheta_{\epsilon},\xi_{\epsilon}) + u (\vartheta_{\epsilon} \circ \xi_{\epsilon}-C_{\epsilon}) +u^{\sharp}\circ\xi_{\epsilon}). \end{align*} Thus the singular term $u\diamond\xi$ can be formally written as \begin{equation*} u\diamond\xi =\lim_{\epsilon\rightarrow 0}u\xi_{\epsilon}-C_{\epsilon}u= u\xi - \infty\cdot u. \end{equation*} \subsection{Parabolic Schauder estimates} We recall the following Schauder estimate for the heat semigroup $P_t:=e^{t(\Delta-\mu)}$ from [\refcite{GIP2015}]. \begin{lem}\label{Lsch} Let $\alpha\in \mathbb{R}$, $\beta \in [0,2]$, and let $P_t$ be the semigroup generated by $\Delta-\mu$ with $\mu>0$. Then for every $t\geq 0$, $u_0 \in \mathscr{C}^{\alpha-\beta}$, we have \begin{equation*} \|P_t u_0\|_{\mathscr{C^{\alpha}}} \lesssim e^{-\mu t}t^{-\beta/2}\|u_0\|_{\mathscr{C}^{\alpha-\beta}}. \end{equation*} \end{lem} For time weight $\tau(t):=1-e^{-t}$, we have the following Schauder estimates from [\refcite{GH2019}]. \begin{lem}\label{tauSch} Define $ \tau(t):=1-e^{-t}$ be a time weight. Let $\alpha\in \mathbb{R}$ and $\beta,\beta_i \in [0,2)$. Assume that $v \in C_{[0,\infty)}\mathscr{C}^{\alpha}$ with $v(0)=0$ be a solution of \begin{equation*} \mathscr{L}v = \sum_{i}f_i. 
\end{equation*} Then we have \begin{equation*} \|v\|_{C_{[0,\infty)}\mathscr{C}^{\alpha}} \lesssim \|\tau^{\beta/2}v\|_{C_{[0,\infty)}\mathscr{C}^{\alpha+\beta-2}} + \sum_{i}\|\tau^{\beta_1/2}f_i \|_{C_{[0,\infty)}\mathscr{C}^{\alpha+\beta_i -2}}. \end{equation*} Moreover, for every $\alpha \in (0,2)$ and $\beta_i \in [0,2)$ such that $\alpha + \beta_i -2 <0$, we have \begin{equation*} \|v\|_{C_{[0,\infty)}^{\alpha/2}L^{\infty}} \lesssim \|v\|_{C_{[0,\infty)}\mathscr{C}^{\alpha}} + \sum_{i}\|\tau^{\beta_1/2}f_i \|_{C_{[0,\infty)}\mathscr{C}^{\alpha+\beta_i -2}}. \end{equation*} \end{lem} We also need the following Schauder estimate for parabolic equations with polynomial nonlinear term. \begin{lem}\label{nlSch} Define $ \tau(t):=1-e^{-t}$ be a time weight. Let $\mu>0$, $\beta \in [0,1)$, and $\Psi \in C_{\tau^{1+1/(k-1)+\beta/2}}\mathscr{C}^{\beta}$. Assume that \begin{equation*} \psi \in C_{\tau^{1+1/(k-1)+\beta/2}}\mathscr{C}^{2+\beta}\cap C_{\tau^{1+1/(k-1)+\beta/2}}\mathscr{C}^{\beta} \cap C_{\tau^{1/(k-2)}}L^{\infty} \end{equation*} be a classical solution to \begin{equation*} \mathscr{L}\psi = f(\psi) + \Psi, \quad \psi(0)= 0. \end{equation*} If $f$ satisfies the dissipative assumption (\ref{assumf}), then \begin{align} &\| \psi\|_{C_{\tau^{1+1/(k-1)+\beta/2}}\mathscr{C}^{2+\beta}} + \| \psi\|_{C^1_{\tau^{1+1/(k-1)+\beta/2}}L^{\infty}} \nonumber \\ \lesssim & \|\psi\|_{C_{\tau^{1+1/(k-1)+\beta/2}}\mathscr{C}^{\beta}} + \|\Psi\|_{C_{\tau^{1+1/(k-1)+\beta/2}}\mathscr{C}^{\beta}} + 1+ \left( \|\psi\|_{C_{\tau^{1/(k-2)}}L^{\infty}}^{k-2+ 2/(2+\beta)} \right)^{\frac{2+\beta}{2}}. \end{align} \end{lem} \begin{proof} By Lemma \ref{tauSch}, we have \begin{align*} & \| \psi\|_{C_{\tau^{1+1/(k-1)+\beta/2}}\mathscr{C}^{2+\beta}} \\ \lesssim & \|\psi\|_{C_{\tau^{1+1/(k-1)+\beta/2}}\mathscr{C}^{\beta}} + \|\Psi\|_{C_{\tau^{1+1/(k-1)+\beta/2}}\mathscr{C}^{\beta}} + \|f(\psi)\|_{C_{\tau^{1+1/(k-1)+\beta/2}}\mathscr{C}^{\beta}}. \end{align*} The interpolation result in Lemma \ref{interpolation} and the weighted Young inequality lead to \begin{align*} &\|f(\psi)\|_{C_{\tau^{1+1/(k-1)+\beta/2}}\mathscr{C}^{\beta}} \\ \lesssim & 1 + \|\psi\|_{C_{\tau^{1/(k-2)}}L^{\infty}}^{k-2} \|\psi\|_{C_{\rho^{1+(k-2)\beta/2}}\mathscr{C}^{\beta}} \\ \lesssim & 1 + \|\psi\|_{C_{\tau^{1/(k-2)}}L^{\infty}}^{k-2} \|\psi\|_{C_{\tau^{1/(k-2)}}L^{\infty}}^{2/(2+\beta)} \|\psi\|_{C_{\tau^{1+1/(k-1)+\beta/2}}\mathscr{C}^{2+\beta}}^{\beta/(2+\beta)} \\ \lesssim & 1+ C_{\lambda}\left( \|\psi\|_{C_{\tau^{1/(k-2)}}L^{\infty}}^{k-2+ 2/(2+\beta)} \right)^{\frac{2+\beta}{2}} + \lambda \|\psi\|_{C_{\tau^{1+1/(k-1)+\beta/2}}\mathscr{C}^{2+\beta}} \end{align*} for arbitrary $\lambda>0$. Choosing $\lambda$ small enough, we have \begin{align*} \| \psi\|_{C_{\tau^{1+1/(k-1)+\beta/2}}\mathscr{C}^{2+\beta}} \lesssim & \|\psi\|_{C_{\tau^{1+1/(k-1)+\beta/2}}\mathscr{C}^{\beta}} + \|\Psi\|_{C_{\tau^{1+1/(k-1)+\beta/2}}\mathscr{C}^{\beta}} \\ & + 1+ \left( \|\psi\|_{C_{\tau^{1/(k-2)}}L^{\infty}}^{k-2+ 2/(2+\beta)} \right)^{\frac{2+\beta}{2}}. \end{align*} Using Lemma \ref{tauSch}, we obtain the time regularity. The proof is completed. \end{proof} We also need the following parabolic coercive estimates. \begin{lem}\label{coercive} Define $ \tau(t):=1-e^{-t}$ be a time weight. Let $\mu>0$, $\beta \in [0,1)$, and $\Psi \in C_{\tau^{1+1/(k-2)}}L^{\infty}$. 
Assume that \begin{equation*} \psi \in C_{\tau^{1+1/(k-1)+\beta/2}}\mathscr{C}^{2+\beta}\cap C_{\tau^{1+1/(k-1)+\beta/2}}\mathscr{C}^{\beta} \cap C_{\tau^{1/(k-2)}}L^{\infty} \end{equation*} be a classical solution to \begin{equation*} \mathscr{L}\psi = f(\psi) + \Psi, \quad \psi(0)= 0. \end{equation*} If $f$ satisfies the dissipative assumption (\ref{assumf}), then we have a priori estimates \begin{equation*} \|\psi\|_{C_{\tau^{1/(k-2)}}L^{\infty}} \lesssim 1+ \|\Psi\|_{C_{\rho^{1+1/(k-2)}}L^{\infty}}^{1/(k-1)}. \end{equation*} \end{lem} \begin{proof} Let $\hat{\psi}(t,x)=\psi(t,x)\tau(t)^{\frac{1}{k-2}}$. Suppose $\hat{\psi}$ attains its global maximum $M$ at $(t^{\ast},x^{\ast}) \in [0,\infty)\times\mathbb{T}^2 $. We first assume that $M>0$. Since $\hat{\psi}(0)=\psi(0)\tau(0)^{\frac{1}{k-2}}=0$, $t^{\ast}>0$, and we have \begin{equation*} \partial_t \hat{\psi}(t^{\ast},x^{\ast})=0, \quad -\Delta\hat{\psi}(t^{\ast},x^{\ast})\geq 0. \end{equation*} Furthermore, $\hat{\psi}$ satisfies \begin{align*} & \partial_t \hat{\psi}(t^{\ast},x^{\ast}) + (-\Delta+\mu)\hat{\psi}(t^{\ast},x^{\ast}) \\ = & F(\psi(t^{\ast},x^{\ast}))\tau(t^{\ast})^{\frac{1}{k-2}} + \Psi(t^{\ast},x^{\ast})\tau(t^{\ast})^{\frac{1}{k-2}} + \psi(t^{\ast},x^{\ast})\partial_t\tau(t^{\ast})^{\frac{1}{k-2}}, \end{align*} Then by assumption (\ref{assumf}), we have \begin{align*} \mu M\tau(t^{\ast}) + M^{k-1} & \leq \Psi(t^{\ast},x^{\ast})\tau^{1+1/(k-2)}(t^{\ast}) + \frac{1}{k-2}(\tau\partial_t \tau)(t^{\ast})M \\ & \leq \|\Psi\|_{C_{\tau^{1+1/(k-2)}}L^{\infty}}^{1/(k-1)} + \frac{1}{k-2}(\tau\partial_t \tau)(t^{\ast})M. \end{align*} Since $k\geq 3$, the weighted term $\frac{1}{k-2}(\tau\partial_t \tau)$ is bounded. Then we conclude that \begin{equation*} \hat{\psi}(t^{\ast},x^{\ast}) \lesssim \|\psi\|^{1/(k-1)}_{C_{\tau^{1/(k-2)}}L^{\infty}(\mathbb{T}^2)}+\|\Psi\|_{C_{\tau^{1+1/(k-2)}}L^{\infty}}^{1/(k-1)}. \end{equation*} Applying same argument to $-\hat{\psi}$, we also have \begin{equation*} -\hat{\psi}(t^{\ast},x^{\ast}) \lesssim \|\psi\|^{1/(k-1)}_{C_{\tau^{1/(k-2)}}L^{\infty}(\mathbb{T}^2)}+\|\Psi\|_{C_{\tau^{1+1/(k-2)}}L^{\infty}}^{1/(k-1)}. \end{equation*} Then by weighted Young inequality, we get \begin{equation*} \|\psi\|_{C_{\tau^{1/(k-2)}}L^{\infty}} \lesssim 1 +\|\Psi\|_{C_{\tau^{1+1/(k-2)}}L^{\infty}}^{1/(k-1)}. \end{equation*} If $\hat{\psi}=\psi\tau^{1/(k-2)}$ does not attain its global maximum at finite time, then for all $t_0 >0$ it holds that $\hat{\psi}(t_0) < \lim_{t\rightarrow \infty}\hat{\psi}(t)$. Since $\hat{\psi}$ is bounded and continuous on $[0,\infty)\times \mathbb{T}^2$, then for every $\delta >0$ we have \begin{equation*} \lim_{t \rightarrow \infty}\psi(t)\tau(t)^{1/(k-2)}(1+|t|^2)^{-\delta} = 0 \end{equation*} Thus $\psi(t)\tau(t)^{1/(k-2)}(1+|t|^2)^{-\delta}$ attain its global maximum at finite time. Now we can use same argument in above proof to $\psi(t)\tau(t)^{1/(k-2)}(1+|t|^2)^{-\delta}$, and the conclusion follows by letting $\delta \rightarrow 0$. \end{proof} Similar with Propostion A.2 in [\refcite{GH2019}], we have the following existence result. \begin{lem}\label{Exapp} Let $T>0$, $\mu>0$, $u_0 \in C^{\infty}(\mathbb{T}^2)$, $\xi_{\epsilon}\in C^{\infty}(\mathbb{T}^2)$ is the mollification of spatial white noise $\xi$. Then there exists a unique classical solution $u\in C^{\infty}(\mathbb{R}^{+}\times\mathbb{T}^2)$ to \begin{equation}\label{appPAM} \mathscr{L}u = F(u)+u\diamond\xi_{\epsilon}, \quad u(0)=u_0. 
\end{equation} \end{lem} \begin{proof} Note that $u_0 \in L^2(\mathbb{T}^2)$ and $\xi^{\epsilon}\in C^{\infty}(\mathbb{T}^2)$ is bounded in $[0,T]\times\mathbb{T}^2$. By monotonicity arguments with the Gelfand triplet(see e.g. Theorem 5.8 in [\refcite{J2001}]) \begin{equation*} [H^1(\mathbb{T}^2)\cap L^k(\mathbb{T}^2)] \hookrightarrow L^2(\mathbb{T}^2) \hookrightarrow [H^1(\mathbb{T}^2)\cap L^k(\mathbb{T}^2)]^{\ast}, \end{equation*} equation (\ref{appPAM}) has a unique solution $u \in C_T L^2(\mathbb{T}^2)\cap L^2_T H^1(\mathbb{T}^2) \cap L^p_T L^k({\mathbb{T}^2})$. By Sobolev embedding $H^1(\mathbb{T}^2)\hookrightarrow L^p(\mathbb{T}^2)$ for every $p\in [2,\infty)$, we have $u(t)\in L^p(\mathbb{T}^2)$ for every $p\in [2,\infty)$ and $t\in [0,T]$ almost surely. After multiply $u^{p-1}$, $p\in [2,\infty)$ on both sides of the equation (\ref{appPAM}), we obtain \begin{align*} & \frac{1}{p}\partial_t \int_{\mathbb{T}^2} |u|^{p}dx + (p-1)\int_{\mathbb{T}^2} |u|^{p-2}|\nabla u|^2dx - \int_{\mathbb{T}^2} (c_0|u|^{p-2}-c_2|u|^{p+k-2})dx \\ \leq & |\xi_{\epsilon}-C_{\epsilon}|_{ L^{\infty}(\mathbb{T}^2)}\int_{\mathbb{T}^2} |u|^{p}dx- \mu\int_{\mathbb{T}^2} |u|^{p}dx. \end{align*} Then the Gronwall Lemma implies that $u\in L^{\infty}_{T}L^p(\mathbb{T}^2)$ for every $p\in [2,\infty)$. Thus by assumption (\ref{assumf}) and interpolation, we have that $F(u)+ u\diamond\xi_{\epsilon}\in L^{\infty}_{T}L^p(\mathbb{T}^2)$ for every $p\in [1,\infty)$. Applying a classical regularity result (see e.g. Theorem 3.2 in [\refcite{DdMH2015}]), we obtain that there exists $\alpha \in (0,1)$ and $p\in [1,\infty)$ such that \begin{equation*} \|u\|_{C^{\alpha/2,\alpha}([0,T]\times\mathbb{T}^2)} \lesssim \|u_0\|_{\mathscr{C}^{\alpha}} + (\mu+|\xi_{\epsilon}|_{L^{\infty}(\mathbb{T}^2)})\|u\|_{L^{\infty}_{T}L^p(\mathbb{T}^2)}+ \|F(u)\|_{L^{p}_{T}L^p(\mathbb{T}^2)}. \end{equation*} Moreover, since $F(u)\in C^{\alpha/2,\alpha}([0,T]\times\mathbb{T}^2)$, by Schauder estimates (see e.g. Theorem 3.4 in [\refcite{DdMH2015}]), we have \begin{equation*} \|u\|_{C^{(\alpha+2)/2,\alpha+2}([0,T]\times\mathbb{T}^2)} \lesssim \|u_0\|_{\mathscr{C}^{\alpha+2}} + (\mu+|\xi_{\epsilon}|_{L^{\infty}(\mathbb{T}^2)})\|u\|_{L^{\infty}_{T}L^p(\mathbb{T}^2)}+ \|F(u)\|_{C^{\alpha/2,\alpha}([0,T]\times\mathbb{T}^2)}. \end{equation*} Since $\xi^{\epsilon}\in C^{\infty}([0,T]\times\mathbb{T}^2)$ and $u_0 \in C^{\infty}(\mathbb{T}^2)$, we apply the regularity result from Theorem 3.4 in [\refcite{DdMH2015}] repeatedly, and conclude that $u\in C^{\infty}([0,T]\times\mathbb{T}^2)$. Now we applied same argument as in parabolic coercive estimates Lemma \ref{coercive}, and then sending $T\rightarrow \infty$. This completes the proof. \end{proof} \section{Global well-posedness} In this section, we consider the global existence and uniqueness of the following nonlinear parabolic Anderson model equation \begin{equation*} \partial_t u + \mathscr{L}u = f(u)+ u \diamond \xi, \quad u(0)=u_0, \end{equation*} where $f$ is a continuous function from $\mathbb{R}$ to $\mathbb{R}$, and $\xi$ is a spatial white noise on the $2$-dimension torus $\mathbb{T}^2=(\mathbb{R}/\mathbb{Z})^2$. Now we define $u\diamond\xi$ by above the renormalization argument of singular term $\vartheta\diamond\xi$ and paracontrolled distributions. 
If $u\in C_{\rho'}\mathscr{C}^{\alpha}$ is paracontrolled by $\vartheta$: $u^{\sharp}:= u- u\prec\mkern-10mu\prec\vartheta \in C_{\rho'}\mathscr{C}^{2\alpha} $ and define $u\diamond\xi$ as following \begin{align*} & u\diamond\xi \\ = & u\prec \xi + u\succ \xi+ u\circ\xi \\ = & u\prec \xi + u\succ \xi +(u\prec\mkern-10mu\prec\vartheta)\circ\xi + u^{\sharp}\circ\xi \\ = & u\prec \xi + u\succ \xi + (u\prec\mkern-10mu\prec\vartheta - u\prec \vartheta)\circ\xi + C(u,\vartheta,\xi) +u (\vartheta \diamond \xi) +u^{\sharp}\circ\xi \\ = & \lim_{\epsilon\rightarrow 0}(u\prec \xi_{\epsilon} + u\succ \xi_{\epsilon} + (u\prec\mkern-10mu\prec\vartheta_{\epsilon} - u\prec \vartheta_{\epsilon})\circ\xi_{\epsilon} + C(u,\vartheta_{\epsilon},\xi_{\epsilon}) + u (\vartheta_{\epsilon} \circ \xi_{\epsilon}-C_{\epsilon}) +u^{\sharp}\circ\xi_{\epsilon}). \end{align*} Thus the singular term $u\diamond\xi$ can be formally written as $u\diamond\xi =\lim_{\epsilon\rightarrow 0}u\xi_{\epsilon}-C_{\epsilon}u= u\xi - \infty\cdot u$. We introduction the ansatz $u = \psi+\phi$. Then the original equation $\ref{GPAM}$ can be decomposed into a simple system \begin{equation}\label{decGPAM} \left\{ \begin{aligned} & \partial_t\phi + \mathscr{L}\phi = \Phi, \quad \phi(0) = \phi_0=u_0,\\ & \partial_t\psi + \mathscr{L}\psi = f(\psi)+ \Psi, \quad \psi(0)=0, \end{aligned} \right. \end{equation} where $\Phi$ is the collection of all terms of negative regularity, and $\Psi$ the collection of all the others regular term (belonging to $L^{\infty}$). Recall that the stochastic terms $\xi$ and $\vartheta\diamond\xi$ can be constructed such that \begin{equation*} \|\xi\|_{\mathscr{C}^{-1-\kappa}} \lesssim 1, \quad \|\vartheta\diamond\xi\|_{\mathscr{C}^{-2\kappa}} \lesssim 1. \end{equation*} We choose small parameters $\kappa \in (0, 1- \alpha)$, and employ the Localization operators $\mathscr{U}_{\leq}$ and $\mathscr{U}_{>}$ to decompose \begin{equation*} \xi = \mathscr{U}_{\leq} \xi + \mathscr{U}_{>} \xi, \quad \vartheta\diamond\xi = \mathscr{U}_{\leq} (\vartheta\diamond\xi) + \mathscr{U}_{>} (\vartheta\diamond\xi). \end{equation*} Here $\mathscr{U}_{\leq} \xi$, $\mathscr{U}_{\leq} (\vartheta\diamond\xi)$ are regular, and $\mathscr{U}_{>} \xi$, $\mathscr{U}_{>} (\vartheta\diamond\xi)$ are irregular. Then the singular term $u\diamond\xi := (\psi+\phi) \diamond \xi$ can be decomposed as \begin{align*} & (\psi+\phi) \diamond \xi \\ = & (\psi+\phi)\succ \xi + (\psi+\phi)\prec \xi + (\psi+\phi) \circ \xi\\ = & (\psi+\phi) \succ \mathscr{U}_{\leq} \xi + (\psi+\phi) \succ \mathscr{U}_{>} \xi + (\psi+\phi) \prec \mathscr{U}_{\leq} \xi + (\psi+\phi) \prec \mathscr{U}_{>} \xi+ (\psi+\phi) \circ \xi. \end{align*} In order to define the resonant term $(\psi+\phi) \circ \xi$, we also need the modified paraproduct ansatz \begin{equation*} \tau^{\gamma}\phi^{\sharp} = \tau^{\gamma}\phi-[\tau^{\gamma}(\psi+ \phi)\prec\mkern-10mu\prec \vartheta], \end{equation*} where $\phi^{\sharp}(t) \in \mathscr{C}^{2\alpha}$, and the modified paraproduct $\prec\mkern-10mu\prec$ is defined as (\ref{modifiedpara}). Then the resonant term can be defined as \begin{align*} (\psi+\phi) \circ \xi = & \psi\circ\xi +((\psi+\phi)\prec\mkern-10mu\prec\vartheta)\circ \xi + \phi^{\sharp} \circ\xi \\ = & \psi \circ \xi+ ((\psi+\phi)\prec\mkern-10mu\prec\vartheta - (\psi+\phi)\prec \vartheta)\circ\xi + C(\psi+\phi,\vartheta,\xi) +\phi^{\sharp}\circ\xi \\ & +(\psi+\phi) \succ (\vartheta\diamond\xi) +(\psi+\phi)\circ (\vartheta\diamond\xi)+ (\psi+\phi) \prec (\vartheta\diamond\xi). 
\end{align*} Now we define \begin{align*} \Phi := & (\psi+\phi) \prec \mathscr{U}_{>} \xi + (\psi+\phi) \succ \mathscr{U}_{>} \xi + (\psi+\phi) \succ \mathscr{U}_{>}(\vartheta\diamond\xi) - (\psi+\phi) \prec \mathscr{U}_{>}(\vartheta\diamond\xi), \\ \Psi := & f(\psi+\phi) - f(\psi) + ((\psi+\phi)\prec\mkern-10mu\prec\vartheta - (\psi+\phi)\prec \vartheta)\circ\xi + \phi^{\sharp}\circ\xi \\ & + C(\psi+\phi,\vartheta,\xi)+ (\vartheta\diamond\xi) \circ (\psi+\phi) \\ & + (\psi+\phi) \prec \mathscr{U}_{\leq} \xi + (\psi+\phi) \succ \mathscr{U}_{\leq}\xi +(\psi+\phi) \succ \mathscr{U}_{\leq}(\vartheta\diamond\xi) - (\psi+\phi) \prec \mathscr{U}_{\leq}(\vartheta\diamond\xi) . \end{align*} \subsection{A priori estimates} {\bf Step 1. Bound for $\phi$ in $C_{\tau^{1/(k-2)+\kappa/2}}\mathscr{C}^{\kappa} \cap C_{\tau^{1/(k-2)}}L^{\infty}$ }\\ First, we estimate $\Phi$ in $C_{\tau^{1/(k-2)}}\mathscr{C}^{-2+\kappa}$, and derive a bound for $\phi$ in $C_{\tau^{1/(k-2)+\kappa/2}}\mathscr{C}^{\kappa} \cap C_{\tau^{1/(k-2)}}L^{\infty}$ by Schauder estimates. By Lemma \ref{Localization}, we employ the Localization operators $\mathscr{U}_{\leq}$ and $\mathscr{U}_{>}$ with the parameter $L$ such that \begin{equation*} \|\mathscr{U}_{>} \xi\|_{\mathscr{C}^{ -2+\kappa }} \lesssim 2^{-(1-2\kappa)L}\|\xi\|_{\mathscr{C}^{-1-\kappa}}, \end{equation*} Then by Bony's paraproduct estimate, we have \begin{align} & \|(\psi+\phi) \prec \mathscr{U}_{>} \xi\|_{C_{\tau^{1/(k-2)}}\mathscr{C}^{-2+\kappa}} + \|(\psi+\phi) \succ \mathscr{U}_{>} \xi\|_{C_{\tau^{1/(k-2)}}\mathscr{C}^{-2+\kappa}}\nonumber \\ \lesssim & \|\mathscr{U}_{>} \xi\|_{\mathscr{C}^{-2+\kappa }} \|\psi+\phi\|_{C_{\tau^{1/(k-2)}}L^{\infty}} \nonumber \\ \lesssim & 2^{-(1-2\kappa)L}\|\xi\|_{\mathscr{C}^{-1-\kappa}} \|\psi+\phi\|_{C_{\tau^{1/(k-2)}}L^{\infty}}. \end{align} Similarly, we employ the Localization operators $\mathscr{U}_{\leq}$ and $\mathscr{U}_{>}$ with the parameter $K$ such that \begin{equation*} \|\mathscr{U}_{>} (\vartheta\diamond\xi)\|_{\mathscr{C}^{ -2+\kappa }} \lesssim 2^{-(2-3\kappa)K}\|\vartheta\diamond\xi\|_{\mathscr{C}^{-2\kappa}}, \end{equation*} Then \begin{align} & \|(\psi+\phi) \prec \mathscr{U}_{>} (\vartheta\diamond\xi)\|_{C_{\tau^{1/(k-2)}}\mathscr{C}^{-2+\kappa}} + \|(\psi+\phi) \succ \mathscr{U}_{>} (\vartheta\diamond\xi)\|_{C_{\tau^{1/(k-2)}}\mathscr{C}^{-2+\kappa}}\nonumber \\ \lesssim & \|\mathscr{U}_{>} (\vartheta\diamond\xi)\|_{\mathscr{C}^{-2\kappa }} \|\psi+\phi\|_{C_{\tau^{1/(k-2)}}L^{\infty}} \nonumber \\ \lesssim & 2^{-(2-3\kappa)K}\|\vartheta\diamond\xi\|_{\mathscr{C}^{-2\kappa}} \|\psi+\phi\|_{C_{\tau^{1/(k-2)}}L^{\infty}}. \end{align} Note that the stochastic terms $\xi$ and $\vartheta\diamond\xi$ can be constructed such that \begin{equation*} \|\xi\|_{\mathscr{C}^{-1-\kappa}} \lesssim 1, \quad \|\vartheta\diamond\xi\|_{\mathscr{C}^{-2\kappa}} \lesssim 1. \end{equation*} Now we choose $L,K>1$, such that \begin{equation*} 1+ \|\psi + \phi\|_{C_{\tau^{1/(k-2)}}L^{\infty}} = 2^{(1 -\kappa)L} = 2^{(2-3\kappa)K}. \end{equation*} Then we have \begin{equation}\label{EPhi0} \|\Phi\|_{C_{\tau^{1/(k-2)}}\mathscr{C}^{-2+\kappa}} \lesssim (2^{-(1 -\kappa)L}+2^{-(2-3\kappa)K})\| \psi + \phi \|_{C_{\tau^{1/(k-2)}}L^{\infty}} \lesssim 1. 
\end{equation} Since \begin{equation*} \mathscr{L}(\tau^{1/(k-2)+\kappa/2}\phi) = (\partial_t )\tau^{1/(k-2)+\kappa/2} \phi + \tau^{1/(k-2)-1+\kappa/2}(\partial_t \tau)\phi + \tau^{1/(k-2)+\kappa/2} \Phi, \end{equation*} by the Schauder estimates, we have the bound for $\phi$, \begin{align*} \|\tau^{\kappa/2}\phi\|_{C_{\tau^{1/(k-2)}} \mathscr{C}^{\kappa}} \lesssim & \|\Phi\|_{C_{\tau^{1/(k-2)}}\mathscr{C}^{-2+\kappa}} + \|\tau^{1/(k-2)-\kappa/2}(\tau^{1+\kappa/2} \phi + \tau^{\kappa/2}(\partial_t \tau)\phi)\|_{CL^{\infty}} \\ \lesssim & \|\Phi\|_{C_{\tau^{1/(k-2)}}\mathscr{C}^{-2+\kappa}} + \|\phi \|_{C_{\tau^{1/(k-2)}}L^{\infty}}, \end{align*} Now we estimate $\phi$ in $C_{\tau^{1/(k-2)}}L^{\infty}$. Since $\tau^{1/(k-2)+\kappa/2}< \tau^{1/(k-2)}$, we can not control $\|\phi \|_{C_{\tau^{1/(k-2)}}L^{\infty}}$ by $\|\phi\|_{C_{\tau^{1/(k-2)+\kappa/2} }\mathscr{C}^{\kappa}}$ directly. By Littlewood-Paley decomposition and Duhamel's formula, for some $t\in (0,1)$ and $i \in \mathbb{N}$ we have \begin{align*} & \|\tau(t)^{1/(k-2)}\phi(t) \|_{L^{\infty}} \nonumber \\ \lesssim & \tau(t)^{1/(k-2)}\|\Delta_{\leq i}\phi \|_{L^{\infty}} + \tau(t)^{1/(k-2)}\|\Delta_{>i}\phi(t) \|_{L^{\infty}} \nonumber \\ \lesssim & \tau(t)^{1/(k-2)}\|P_t \Delta_{\leq i}\phi(0)\|_{L^{\infty}} + \tau(t)^{1/(k-2)}\int^t_0 \|P_{t-s}\Delta_{\leq i}\Phi(s)\|_{L^{\infty}}ds + \tau(t)^{1/(k-2)}\|\Delta_{>i}\phi(t) \|_{L^{\infty}} \nonumber \\ \lesssim & \tau(t)^{1/(k-2)}2^{i}\|\phi(0)\|_{\mathscr{C}^{-1}} + \tau(t)^{1/(k-2)} 2^{(2-\kappa)i}\|\Phi\|_{C\mathscr{C}^{-2+\kappa}} + 2^{-\kappa i}\tau(t)^{1/(k-2)}\|\phi(t)\|_{\mathscr{C}^{\kappa}}. \end{align*} We fix $t\in(0,1)$ and choose $i \in \mathbb{N}$ be such that $2^{-\kappa i} = \lambda\tau(t)^{\kappa/2}$ for any $\lambda>0$ which is independent on time. Then we have \begin{equation*} \|\tau(t)^{1/(k-2)}\phi(t) \|_{L^{\infty}} \lesssim \tau(t)^{1/2}\tau(t)^{1/(k-2)}\|\phi(0)\|_{\mathscr{C}^{-1}} + \tau(t)^{1/(k-2)-\kappa/2}\|\Phi\|_{C\mathscr{C}^{-2+\kappa}} + \lambda \tau(t)^{1/(k-2)}\|\phi(t)\|_{\mathscr{C}^{\kappa}}. \end{equation*} Taking supremum in time, we obtain \begin{equation} \|\phi\|_{C_{\tau^{1/(k-2)}}L^{\infty}} \lesssim \|\phi(0)\|_{\mathscr{C}^{-1}} + \|\Phi\|_{C_{\tau^{1/(k-2)}}\mathscr{C}^{-2+\kappa}} + \lambda \|\phi\|_{C_{\tau^{1/(k-2)+\kappa/2}}\mathscr{C}^{\kappa}}. \end{equation} Choosing $\lambda$ is small enough, we can absorb $\lambda \|\phi\|_{C_{\tau^{1/(k-2)+\kappa/2} }\mathscr{C}^{\kappa}}$ into the left hand side and obtain \begin{equation}\label{Ephi1} \|\phi\|_{C_{\tau^{1/(k-2)+\kappa/2}}\mathscr{C}^{\kappa}} + \|\phi\|_{C_{\tau^{1/(k-2)}}L^{\infty}} \lesssim 1, \end{equation} where the right hand side is uniform in the initial condition $\|u_0\|_{\mathscr{C}^{-1}}$. We fix the parameters $L$ and $K$ in the remain part. We also have \begin{equation}\label{Lep} 2^{(1 -\kappa)L}= 2^{(2 - 3\kappa)K} = 1+\| (\psi + \phi)\|_{C_{\tau^{1/(k-2)}} L^{\infty}} \lesssim 1+\|\psi\|_{C_{\tau^{1/(k-2)}} L^{\infty}}. \end{equation} {\bf Step 2. Bound for $\phi$ in $C_{\tau^{1/(k-2)+\alpha/2}}\mathscr{C}^{\alpha} \cap C^{\alpha/2}_{\tau^{1/(k-2)+\alpha/2}}L^{\infty}$ }\\ First, we estimate $\Phi$ in $C_{\tau^{1/(k-2)}}\mathscr{C}^{-2+\alpha}$, and derive a bound for $\phi$ in $C_{\tau^{1/(k-2)+\alpha/2}}\mathscr{C}^{\alpha} \cap C^{\alpha/2}_{\tau^{1/(k-2)+\alpha/2}}L^{\infty}$ by Schauder estimates. 
By Bony's paraproduct estimate, we have \begin{align} & \|(\psi+\phi) \prec \mathscr{U}_{>} \xi + (\psi+\phi) \succ \mathscr{U}_{>} \xi\|_{C_{\tau^{1/(k-2)}}\mathscr{C}^{\alpha-2}} \nonumber \\ \lesssim & \|\mathscr{U}_{>}\xi\|_{\mathscr{C}^{\alpha-2}} \|\psi+\phi\|_{C_{\tau^{1/(k-2)}}L^{\infty}} \nonumber \\ \lesssim & 2^{-(1-\kappa-\alpha)L}\|\mathscr{U}_{>}\xi\|_{\mathscr{C}^{-1-\kappa}}\|\psi+\phi\|_{C_{\tau^{1/(k-2)}}L^{\infty}} \nonumber \\ \lesssim & 1+\|\psi\|_{C_{\tau^{1/(k-2)}} L^{\infty}}, \end{align} and \begin{align} & \|(\psi+\phi) \succ \mathscr{U}_{>}(\vartheta\circ\xi) + (\psi+\phi) \prec \mathscr{U}_{>}(\vartheta\circ\xi)\|_{C_{\tau^{1/(k-2)}}\mathscr{C}^{\alpha-2}} \nonumber \\ \lesssim & \| \mathscr{U}_{>}(\vartheta\diamond\xi)\|_{\mathscr{C}^{\alpha-2 }} \|\psi+\phi\|_{C_{\tau^{1/(k-2)}}L^{\infty}} \nonumber \\ \lesssim & 2^{-(2-2\kappa -\alpha)K}\|\mathscr{U}_{>}(\vartheta\circ\xi)\|_{\mathscr{C}^{-2\kappa}}\|\psi+\phi\|_{C_{\tau^{1/(k-2)}}L^{\infty}} \nonumber \\ \lesssim & 1+\|\psi\|_{C_{\tau^{1/(k-2)}} L^{\infty}}. \end{align} Then we have \begin{equation}\label{EPhi1} \|\Phi\|_{C_{\tau^{1/(k-2)}}\mathscr{C}^{\alpha-2}} \lesssim 1 + \|\psi\|_{C_{\tau^{1/(k-2)}} L^{\infty}}. \end{equation} Now we estimate $\phi$ in $C_{\tau^{1/(k-2)+\alpha/2}}\mathscr{C}^{\alpha} \cap C^{\alpha/2}_{\tau^{1/(k-2)+\alpha/2}}L^{\infty}$ by Schauder estimates. Since $\rho=\tau^{1/(k-2)}\eta$, we have \begin{equation*} (\partial_t+\mathscr{L})(\tau^{1/(k-2)+\alpha/2} \phi) = (\partial_t \eta)\tau^{1/(k-2)+\alpha/2} \phi + \rho\tau^{-(2-\alpha)/2}(\partial_t \tau)\phi + \tau^{1/(k-2)+\alpha/2} \Phi, \end{equation*} Then by the Schauder estimates, we obtain the $C_{\tau^{1/(k-2)+\alpha/2}}\mathscr{C}^{\alpha}$ bound for $\phi$, \begin{align*} &\|\phi\|_{C_{\tau^{1/(k-2)+\alpha/2} }\mathscr{C}^{\alpha}} \\ \lesssim & \|\Phi\|_{C_{\tau^{1/(k-2)+\alpha/2}}\mathscr{C}^{\alpha-2}} + \|\tau^{(2-\alpha)/2} (\tau^{1/(k-2)+\alpha/2} \phi + \tau^{1/(k-2)-(2-\alpha)/2} \phi)\|_{C_{\tau^{1/(k-2)}}L^{\infty}} \\ \lesssim & \|\Phi\|_{C_{\rho}\mathscr{C}^{\alpha-2}} + \|\phi \|_{C_{\tau^{1/(k-2)}}L^{\infty}} \\ \lesssim & 1 + \|\psi\|_{C_{\tau^{1/(k-2)}}L^{\infty}}. \end{align*} Using Lemma \ref{tauSch}, we have the time regularity \begin{align*} & \|\phi\|_{C^{\alpha/2}_{\tau^{1/(k-2)+\alpha/2}}L^{\infty}} \\ \lesssim & \|\phi\|_{C_{\tau^{1/(k-2)+\alpha/2}}\mathscr{C}^{\alpha}} + \|\Phi\|_{C_{\rho}\mathscr{C}^{\alpha-2}} + \|\tau^{1/2} (\tau^{1/(k-2)+1/2} \phi + \tau^{1/(k-2)-1/2} \phi)\|_{C_{\eta}C^{\alpha-1}} \\ \lesssim & 1 + \|\psi\|_{C_{\tau^{1/(k-2)}}L^{\infty}}. \end{align*} Thus we have a bound for $\phi$ in $C_{\tau^{1/(k-2)+\alpha/2}}\mathscr{C}^{\alpha} \cap C^{\alpha/2}_{\tau^{1/(k-2)+\alpha/2}}L^{\infty}$ \begin{equation}\label{phicalpha} \|\phi\|_{C_{\tau^{1/(k-2)+\alpha/2}}\mathscr{C}^{\alpha}} + \|\phi\|_{C^{\alpha/2}_{\tau^{1/(k-2)+\alpha/2}}L^{\infty}} \lesssim 1 + \|\psi\|_{C_{\tau^{1/(k-2)}}L^{\infty}}. \end{equation} {\bf Step 3. Bound for $\phi^{\sharp}$ in $C_{\rho}\mathscr{C}^{2\alpha}\cap C^{\alpha}_{\rho}L^{\infty}$}\\ Now we derive a bound for $\phi^{\sharp}$ in $C_{\rho}\mathscr{C}^{2\alpha}\cap C^{\alpha}_{\rho}L^{\infty}$. Recall that we denote $\rho =\tau^{1+1/(k-2)+(3\alpha-2)/2}$. 
Since $\phi^{\sharp}$ is given by \begin{equation*} \phi^{\sharp} = \phi-\rho^{-1}\left([\rho(\psi+ \phi)]\prec\mkern-10mu\prec \vartheta \right), \end{equation*} the remainder $\phi^{\sharp}$ satisfies \begin{align}\label{usharpg} (\partial_t+\mathscr{L})\phi^{\sharp} & = :- (\partial_t+\mathscr{L})\left(\rho^{-1}[\rho(\psi+ \phi)\prec\mkern-10mu\prec \vartheta]\right) + \Phi \nonumber \\ = & - [(\partial_t+\mathscr{L})(\rho^{-1}[\rho(\psi+ \phi)]\prec\mkern-10mu\prec\vartheta)- \rho^{-1}[\rho(\psi+ \phi)]\prec\mkern-10mu\prec (\partial_t+\mathscr{L})\vartheta] \nonumber \\ & + [ (\psi+\phi)\prec \xi-\rho^{-1}[\rho(\psi+ \phi)]\prec\mkern-10mu\prec (\partial_t+\mathscr{L})\vartheta ] - (\psi+\phi)\prec\xi + \Phi \nonumber \\ = & \left(\frac{k-1}{k-2}+\frac{3\alpha-2}{2}\right)(\tau^{-(\frac{k-1}{k-2}+\frac{3\alpha-2}{2})-1}(1-\tau)[\rho(\psi+ \phi)]\prec\mkern-10mu\prec\vartheta) \nonumber \\ & - \rho^{-1} \left( (\partial_t+\mathscr{L})[\rho(\psi+ \phi)]\prec\mkern-10mu\prec\vartheta)- [\rho(\psi+ \phi)]\prec\mkern-10mu\prec (\partial_t+\mathscr{L})\vartheta \right) \nonumber \\ & + \rho^{-1}\left([\rho(\psi+\phi)]\prec \xi-[\rho(\psi+ \phi)]\prec\mkern-10mu\prec \xi \right) \nonumber \\ & - (\psi+\phi)\prec\xi + \Phi. \end{align} Since $\vartheta = (-\Delta- \mu)^{-1}\xi$, the Schauder estimates yields that $\|\vartheta\|_{\mathscr{C}^{\alpha}} \lesssim \|\xi\|_{\mathscr{C}^{\alpha-2}} \lesssim 1$. Thus by Lemma \ref{pprec} we have \begin{align}\label{Es1} & \|\tau^{(2-\alpha)/2}(\tau^{-(\frac{k-1}{k-2}+\frac{3\alpha-2}{2})-1}(1-\tau)[\rho(\psi+ \phi)]\prec\mkern-10mu\prec\vartheta) \|_{C_{\rho}\mathscr{C}^{\alpha}} \nonumber \\ \lesssim & \|\tau^{-1/2}[\rho(\psi+ \phi)]\prec\mkern-10mu\prec\vartheta \|_{C\mathscr{C}^{\alpha}} \nonumber \\ \lesssim & \|(\psi+ \phi) \|_{C_{\tau^{1/(k-2)}}L^{\infty}}\|\vartheta\|_{\mathscr{C}^{\alpha}} \nonumber \\ \lesssim & 1+ \|\psi\|_{C_{\tau^{1/(k-2)}}L^{\infty}}. \end{align} Lemma \ref{pprec} implies that \begin{align}\label{Es2} & \|\rho^{-1} \left( (\partial_t+\mathscr{L})[\rho(\psi+ \phi)]\prec\mkern-10mu\prec\vartheta - [\rho(\psi+ \phi)]\prec\mkern-10mu\prec (\partial_t+\mathscr{L})\vartheta \right)\|_{C_{\rho}\mathscr{C}^{2\alpha-2}} \nonumber \\ \lesssim & \|(\psi+\phi)\|_{C_{\rho}\mathscr{C}^{\alpha} }+\|(\psi+\phi)\|_{C^{\alpha/2}_{\rho}L^{\infty}}, \end{align} and \begin{align}\label{Es3} & \|\rho^{-1}\left([\rho(\psi+\phi)]\prec \xi-[\rho(\psi+ \phi)]\prec\mkern-10mu\prec \xi \right)\|_{C_{\rho}\mathscr{C}^{2\alpha}} \nonumber \\ \lesssim & \|\left([\rho(\psi+\phi)]\prec \xi-[\rho(\psi+ \phi)]\prec\mkern-10mu\prec \xi \right)\|_{C\mathscr{C}^{2\alpha-2}} \nonumber \\ \lesssim & \|\psi+\phi \|_{C_{\rho}\mathscr{C}^{\alpha} }+\| \psi+\phi \|_{C^{\alpha/2}_{\rho}L^{\infty}}. 
\end{align} Then by paraproduct estimates, we have \begin{align}\label{Es4} & \|-(\psi+\phi)\prec\xi+\Phi\|_{C_{\rho}\mathscr{C}^{2\alpha -2} } \nonumber \\ \lesssim & \|(\psi+\phi) \succ \mathscr{U}_{>} \xi + (\psi+\phi) \succ \mathscr{U}_{>}(\vartheta\circ\xi)\|_{C_{\rho}\mathscr{C}^{2\alpha -2} } \nonumber \\ & +\|(\psi+\phi) \prec \mathscr{U}_{\leq} \xi - (\psi+\phi) \prec \mathscr{U}_{>}(\vartheta\circ\xi)\|_{C_{\rho}\mathscr{C}^{2\alpha -2} } \nonumber \\ \lesssim & (\| \mathscr{U}_{>} \xi\|_{\mathscr{C}^{\alpha-2}} + \| \mathscr{U}_{>} (\vartheta\circ\xi)\|_{\mathscr{C}^{\alpha-2}})\|(\psi+\phi)\|_{C_{\rho}\mathscr{C}^{\alpha} } \nonumber \\ & +(\| \mathscr{U}_{\leq} \xi\|_{\mathscr{C}^{2\alpha-2}}+ \| \mathscr{U}_{>} (\vartheta\circ\xi)\|_{\mathscr{C}^{2\alpha-2}})\|(\psi+ \phi)\|_{C_{\tau^{1/(k-2)}}L^{\infty}} \nonumber \\ \lesssim & (2^{-(1-\kappa-\alpha)L}+2^{-(2-2\kappa-\alpha)K})\|(\psi+\phi)\|_{C_{\rho}\mathscr{C}^{\alpha} } \nonumber \\ & + (2^{(2\alpha-1+\kappa )L}+ 2^{-(2-2\kappa-2\alpha)K})\|(\psi+ \phi)\|_{C_{\tau^{1/(k-2)}}L^{\infty}} \nonumber \\ \lesssim & \|(\psi+\phi)\|_{C_{\rho}\mathscr{C}^{\alpha} } + 1 + \|\psi \|^{1+\alpha}_{C_{\tau^{1/(k-2)}}L^{\infty}}. \end{align} Combining with above estimates (\ref{Es1})-(\ref{Es4}), and using the Schauder estimates, we have \begin{align}\label{phisharp} & \|\phi^{\sharp}\|_{C_{\rho}\mathscr{C}^{2\alpha}} + \|\phi^{\sharp}\|_{C^{\alpha}_{\rho}L^{\infty}} \nonumber \\ \lesssim & \|\tau^{(2-\alpha)/2}(\tau^{-(\frac{k-1}{k-2}+\frac{3\alpha-2}{2})-1}(1-\tau)[\rho(\psi+ \phi)]\prec\mkern-10mu\prec\vartheta) \|_{C_{\rho}\mathscr{C}^{\alpha}} \nonumber \\ & + \|\rho^{-1} \left( (\partial_t+\mathscr{L})[\rho(\psi+ \phi)]\prec\mkern-10mu\prec\vartheta - [\rho(\psi+ \phi)]\prec\mkern-10mu\prec (\partial_t+\mathscr{L})\vartheta \right)\|_{C_{\rho}\mathscr{C}^{2\alpha-2}} \nonumber \\ & + \|\rho^{-1}\left([\rho(\psi+\phi)]\prec \xi-[\rho(\psi+ \phi)]\prec\mkern-10mu\prec \xi \right)\|_{C_{\rho}\mathscr{C}^{2\alpha-2}} \nonumber \\ & + \|-(\psi+\phi)\prec\xi+\Phi\|_{C_{\rho}\mathscr{C}^{2\alpha -2} } \nonumber \\ \lesssim & \|\psi+\phi \|_{C_{\rho}\mathscr{C}^{\alpha} } + \|(\psi+ \phi)\|_{C_{\tau\eta}L^{\infty}} +\| \psi+\phi \|_{C^{\alpha/2}_{\rho}L^{\infty}} \nonumber \\ \lesssim & 1+\|\psi\|^{1+\alpha}_{C_{\tau^{1/(k-2)}}L^{\infty}} + \|\psi \|_{C_{\rho}\mathscr{C}^{\alpha} } +\|\psi\|_{C^{\alpha/2}_{\rho}L^{\infty}}. \end{align} {\bf Step 4. Bound for $\psi$ in $C_{\rho}\mathscr{C}^{3\alpha} \cap C^1_{\rho}L^{\infty}$} Now we derive a bound for $\psi$ in $C_{\rho}\mathscr{C}^{3\alpha} \cap C^1_{\rho}L^{\infty}$. Recall that we denote $\rho =\tau^{1+1/(k-2)+(3\alpha-2)/2}$. By paraproduct estimates and a priori estimates (\ref{phicalpha}), (\ref{phisharp}), we have \begin{align}\label{E41} \|\phi^{\sharp} \circ \xi\|_{C_{\rho}\mathscr{C}^{3\alpha -2}} \lesssim & \|\xi\|_{\mathscr{C}^{-1-\kappa}}\|\phi^{\sharp}\|_{C_{\rho}\mathscr{C}^{2\alpha}} \nonumber \\ \lesssim & 1+\|\psi\|_{C_{\rho}\mathscr{C}^{\alpha}} +\|\psi\|_{C^{\alpha/2}_{\rho}L^{\infty}}+ \|\psi\|_{C_{\tau^{1/(k-2)}}L^{\infty}}. 
\end{align} \begin{equation}\label{E42} \|\psi\circ\xi\|_{C_{\rho}\mathscr{C}^{3\alpha -2}} \lesssim \|\psi\|_{C_{\rho}\mathscr{C}^{2\alpha }}, \end{equation} \begin{align}\label{E43} \|\mathscr{U}_{\leq}(\vartheta\diamond\xi) \prec (\psi+\phi)\|_{{\mathscr{C}_{\rho} \mathscr{C}^{3\alpha -2}}} \lesssim & \|\vartheta\diamond\xi\|_{\mathscr{C}^{2\alpha -2}}\|\psi+\phi\|_{C_{\rho}\mathscr{C}^{\alpha}} \nonumber \\ \lesssim & 1+\|\psi\|_{{C_{\rho} \mathscr{C}^{\alpha}}}+\|\psi\|_{C_{\tau^{1/(k-2)}}L^{\infty}} \end{align} \begin{align}\label{E44} \|(\vartheta\diamond\xi) \circ (\psi+\phi)\|_{{C_{\rho}\mathscr{C}^{3\alpha -2}}} \lesssim & \|\vartheta\diamond\xi\|_{\mathscr{C}^{2\alpha -2}}\|\psi+\phi\|_{C_{\rho}\mathscr{C}^{\alpha}} \nonumber \\ \lesssim & 1+\|\psi\|_{{C_{\rho}\mathscr{C}^{\alpha}}}+\|\psi\|_{C_{\tau^{1/(k-2)}}L^{\infty}}, \end{align} The commutator estimate Lemma \ref{commutatorE} implies that \begin{align} \|C(\psi + \phi,\vartheta,\xi)\|_{{C_{\rho} \mathscr{C}^{3\alpha-2}}} \lesssim & \|\psi + \phi\|_{{C_{\rho} \mathscr{C}^{\alpha}}} \|\xi\|_{\alpha -2}\|\vartheta\|_{\alpha} \nonumber \\ \lesssim & 1+\|\psi\|_{{C_{\rho} \mathscr{C}^{\alpha}}}+\|\psi\|_{C_{\tau^{1/(k-2)}}L^{\infty}}. \end{align} According to Lemma \ref{Localization} and the choosing of $L$ and $K$, we have \begin{align} & \|(\psi+\phi)\prec \mathscr{U}_{\leq}(\vartheta\circ\xi) \|_{C_{\rho}\mathscr{C}^{3\alpha-2} }+ \|(\psi+\phi)\prec \mathscr{U}_{\leq} \xi \|_{C_{\rho}\mathscr{C}^{3\alpha-2} } \nonumber \\ \lesssim & \|\psi + \phi\|_{C_{\tau^{1/(k-2)}}L^{\infty}}(\|\mathscr{U}_{\leq} (\vartheta\circ\xi)\|_{\mathscr{C}^{3\alpha-2}}+ \|\mathscr{U}_{\leq} \xi\|_{\mathscr{C}^{3\alpha-2}}) \nonumber \\ \lesssim & 2^{(3\alpha-1+\kappa)L}\|\xi\|_{\mathscr{C}^{-1-\kappa}}\|\psi + \phi\|_{C_{\tau^{1/(k-2)}}L^{\infty}} + 2^{(3\alpha-2+2\kappa)K}\|\vartheta\circ\xi\|_{\mathscr{C}^{-2\kappa}}\|\psi + \phi\|_{C_{\tau^{1/(k-2)}}L^{\infty}} \nonumber \\ \lesssim & 1+\|\psi\|^{3\alpha/(1-\kappa)}_{C_{\tau^{1/(k-2)}}L^{\infty}}, \end{align} and \begin{align} & \|(\psi+\phi) \succ \mathscr{U}_{\leq}(\vartheta\circ\xi)\|_{C_{\rho}\mathscr{C}^{3\alpha-2 }}+ \|(\psi+\phi) \succ \mathscr{U}_{\leq}\xi\|_{C_{\rho}\mathscr{C}^{3\alpha-2 }} \nonumber \\ \lesssim & (\|\mathscr{U}_{\leq}(\vartheta\circ\xi)\|_{\mathscr{C}^{2\alpha-2}} + \|\mathscr{U}_{\leq}\xi\|_{\mathscr{C}^{2\alpha-2}}) \|\psi+ \phi\|_{{C_{\rho} \mathscr{C}^{\alpha}}} \nonumber\\ \lesssim & (2^{(2\alpha-1+\kappa)L}+1)(1+\|\psi\|_{C_{\tau^{1/(k-2)}}L^{\infty}} +\|\psi\|_{{C_{\rho} \mathscr{C}^{\alpha}}}) \nonumber \\ \lesssim & (1+\|\psi\|^{-1+2\alpha/(1-\kappa)}_{C_{\tau^{1/(k-2)}}L^{\infty}})(1+\|\psi\|_{C_{\tau^{1/(k-2)}}L^{\infty}} +\|\psi\|_{{C_{\rho} \mathscr{C}^{\alpha}}}). \end{align} By (\ref{Ephi1}) and the dissipative assumption (\ref{assumf}) of $f$, we have \begin{align}\label{E48} \|f(\psi+\phi)-f(\psi)\|_{C_{\rho}\mathscr{C}^{3\alpha-2}} & \lesssim \|f'(\psi+\phi)\|_{C_{\rho^{k-2}}L^{\infty}}\|\psi\|_{C_{\rho^{1+(k-2)(3\alpha-2)/2}}\mathscr{C}^{3\alpha-2}} \nonumber \\ & \lesssim (1+\|\psi\|^{k-2}_{C_{\tau^{1/(k-2)}}L^{\infty}})\|\psi\|_{C_{\rho^{1+(k-2)(3\alpha-2)/2}}\mathscr{C}^{3\alpha-2}}. 
\end{align} Combining with above estimates, and using the interpolation result in Lemma \ref{interpolation} and weighted Young inequality, for every $\lambda>0$ we have \begin{align}\label{Psi} & \| \Psi \|_{C_{\rho}\mathscr{C}^{3\alpha -2}} \nonumber \\ \lesssim & 1 +\|\psi\|_{C_{\rho}\mathscr{C}^{\alpha} }+ \|\psi\|_{C^{\alpha/2}_{\rho}L^{\infty} }+ \|\psi\|_{C_{\rho}\mathscr{C}^{2\alpha} } +\|\psi\|^{\alpha-1+2\alpha/(1-\kappa)}_{C_{\tau^{1/(k-2)}}L^{\infty}} \nonumber\\ & + \|\psi\|^{-1+2\alpha/(1-\kappa)}_{C_{\tau^{1/(k-2)}}L^{\infty}}\|\psi\|_{{C_{\rho} \mathscr{C}^{\alpha}}} + \|\psi\|^{k-2}_{C_{\tau^{1/(k-2)}}L^{\infty}}\|\psi\|_{C_{\rho^{1+(k-2)(3\alpha-2)/2}}\mathscr{C}^{3\alpha-2}} \nonumber\\ \lesssim & 1 + \|\psi\|^{2/3}_{C_{\tau^{1/(k-2)}}L^{\infty}}\|\psi\|^{1/3}_{C_{\rho}\mathscr{C}^{3\alpha}}+ \|\psi\|^{1/2}_{C_{\tau^{1/(k-2)}}L^{\infty}}\|\psi\|^{1/2}_{C^{\alpha}_{\rho}L^{\infty}} + \|\psi\|^{1/3}_{C_{\tau^{1/(k-2)}}L^{\infty}}\|\psi\|^{2/3}_{C_{\rho}\mathscr{C}^{3\alpha}} \nonumber\\ & +\|\psi\|^{\alpha-1+2\alpha/(1-\kappa)}_{C_{\tau^{1/(k-2)}}L^{\infty}} + \|\psi\|^{2/3-1+2\alpha/(1-\kappa)}_{C_{\tau^{1/(k-2)}}L^{\infty}}\|\psi\|^{ 1/3}_{C_{\rho}\mathscr{C}^{3\alpha}} + \|\psi\|^{k-2+2/(3\alpha)}_{C_{\tau^{1/(k-2)}}L^{\infty}} \|\psi\|^{(3\alpha-2)/(3\alpha)}_{C_{\rho}\mathscr{C}^{3\alpha}} \nonumber \\ \lesssim & 1+ \lambda \|\psi\|_{C_{\eta}\mathscr{C}^{3\alpha}} + \lambda\|\psi\|_{C^{\alpha}_{\eta}L^{\infty} } +\|\psi\|^{\alpha-1+2\alpha/(1-\kappa)}_{C_{\tau^{1/(k-2)}}L^{\infty}} \nonumber \\ & + \|\psi\|^{-1/2+3\alpha/(1-\kappa)}_{C_{\tau^{1/(k-2)}}L^{\infty}} + \|\psi\|^{3\alpha(k-2)/2+1}_{C_{\tau^{1/(k-2)}}L^{\infty}} \end{align} Then by Schauder estimate Lemma \ref{nlSch} and choosing $\lambda$ small enough, we obtain \begin{align}\label{psi} & \|\psi\|_{C_{\rho}\mathscr{C}^{3\alpha}} + \|\psi\|_{C^1_{\rho}L^{\infty}} \nonumber \\ \lesssim & 1+ \| \Psi \|_{C_{\rho}\mathscr{C}^{3\alpha -2}} +\|\psi\|^{k+1}_{C_{\tau^{1/(k-2)}}L^{\infty}} \nonumber \\ \lesssim & 1 +\|\psi\|^{\alpha-1+2\alpha/(1-\kappa)}_{C_{\tau^{1/(k-2)}}L^{\infty}} + \|\psi\|^{-1/2+3\alpha/(1-\kappa)}_{C_{\tau^{1/(k-2)}}L^{\infty}} + \|\psi\|^{3\alpha(k-2)/2+1}_{C_{\tau^{1/(k-2)}}L^{\infty}} + \|\psi\|^{k-1}_{C_{\tau^{1/(k-2)}}L^{\infty}} \end{align} {\bf Step 5. Bound for $\psi$ in $C_{\tau^{1/(k-2)}}L^{\infty}$} \\ We estimate $\Psi$ in $C_{\tau^{1+1/(k-2)}}L^{\infty}$. Similar with estimates (\ref{E41})-(\ref{E44}), we have \begin{align}\label{E51} \|\phi^{\sharp} \circ \xi\|_{C_{\tau^{1+1/(k-2)}}L^{\infty}} \lesssim & \|\xi\|_{\mathscr{C}^{-1-\kappa}}\|\phi^{\sharp}\|_{C_{\rho}\mathscr{C}^{2\alpha}} \nonumber \\ \lesssim & 1+\|\psi\|_{C_{\rho}\mathscr{C}^{\alpha}} +\|\psi\|_{C^{\alpha/2}_{\rho}L^{\infty}}+ \|\psi\|_{C_{\tau^{1/(k-2)}}L^{\infty}}. 
\end{align} \begin{equation}\label{E52} \|\psi\circ\xi\|_{C_{\tau^{1+1/(k-2)}}L^{\infty}} \lesssim \|\psi\|_{C_{\rho}\mathscr{C}^{2\alpha }}, \end{equation} \begin{align} \|\mathscr{U}_{\leq}(\vartheta\diamond\xi) \prec (\psi+\phi)\|_{{\mathscr{C}_{\tau^{1+1/(k-2)}}L^{\infty}}} \lesssim & \|\vartheta\diamond\xi\|_{\mathscr{C}^{-2\kappa}}\|\psi+\phi\|_{C_{\rho}\mathscr{C}^{\alpha}} \nonumber \\ \lesssim & 1+\|\psi\|_{{C_{\rho} \mathscr{C}^{\alpha}}}+\|\psi\|_{C_{\tau^{1/(k-2)}}L^{\infty}} \end{align} \begin{align} \|(\vartheta\diamond\xi) \circ (\psi+\phi)\|_{{C_{\tau^{1+1/(k-2)}}L^{\infty}}} \lesssim & \|\vartheta\diamond\xi\|_{\mathscr{C}^{-2\kappa}}\|\psi+\phi\|_{C_{\rho}\mathscr{C}^{\alpha}} \nonumber \\ \lesssim & 1+\|\psi\|_{{C_{\rho}\mathscr{C}^{\alpha}}}+\|\psi\|_{C_{\tau^{1/(k-2)}}L^{\infty}}, \end{align} The commutator estimate Lemma \ref{commutatorE} implies that \begin{align} \|C(\psi + \phi,\vartheta,\xi)\|_{{C_{\tau^{1+1/(k-2)}}L^{\infty}}} \lesssim & \|\psi + \phi\|_{{C_{\rho} \mathscr{C}^{\alpha}}} \|\xi\|_{-1-\kappa}\|\vartheta\|_{1-\kappa} \nonumber \\ \lesssim & 1+\|\psi\|_{{C_{\rho} \mathscr{C}^{\alpha}}}+\|\psi\|_{C_{\tau^{1/(k-2)}}L^{\infty}}. \end{align} According to Lemma \ref{Localization} and the choosing of $L$ and $K$, we have \begin{align} & \|(\psi+\phi)\prec \mathscr{U}_{\leq}(\vartheta\circ\xi) \|_{C_{\tau^{1+1/(k-2)}}L^{\infty} }+ \|(\psi+\phi)\prec \mathscr{U}_{\leq} \xi \|_{C_{\tau^{1+1/(k-2)}}L^{\infty} } \nonumber \\ \lesssim & \|\psi + \phi\|_{C_{\tau^{1/(k-2)}}L^{\infty}}(\|\mathscr{U}_{\leq} (\vartheta\circ\xi)\|_{\mathscr{C}^{3\alpha-2}}+ \|\mathscr{U}_{\leq} \xi\|_{\mathscr{C}^{3\alpha-2}}) \nonumber \\ \lesssim & 2^{(3\alpha-1+\kappa)L}\|\xi\|_{\mathscr{C}^{-1-\kappa}}\|\psi + \phi\|_{C_{\tau^{1/(k-2)}}L^{\infty}} + 2^{(3\alpha-2+2\kappa)K}\|\vartheta\circ\xi\|_{\mathscr{C}^{-2\kappa}}\|\psi + \phi\|_{C_{\tau^{1/(k-2)}}L^{\infty}} \nonumber \\ \lesssim & 1+\|\psi\|^{3\alpha/(1-\kappa)}_{C_{\tau^{1/(k-2)}}L^{\infty}}, \end{align} and \begin{align} & \|(\psi+\phi) \succ \mathscr{U}_{\leq}(\vartheta\circ\xi)\|_{C_{\tau^{1+1/(k-2)}}L^{\infty}}+ \|(\psi+\phi) \succ \mathscr{U}_{\leq}\xi\|_{C_{\tau^{1+1/(k-2)}}L^{\infty}} \nonumber \\ \lesssim & (\|\mathscr{U}_{\leq}(\vartheta\circ\xi)\|_{\mathscr{C}^{2\alpha-2}} + \|\mathscr{U}_{\leq}\xi\|_{\mathscr{C}^{2\alpha-2}}) \|\psi+ \phi\|_{{C_{\rho} \mathscr{C}^{\alpha}}} \nonumber\\ \lesssim & (2^{(2\alpha-1+\kappa)L}+1)(1+\|\psi\|_{C_{\tau^{1/(k-2)}}L^{\infty}} +\|\psi\|_{{C_{\rho} \mathscr{C}^{\alpha}}}) \nonumber \\ \lesssim & (1+\|\psi\|^{-1+2\alpha/(1-\kappa)}_{C_{\tau^{1/(k-2)}}L^{\infty}})(1+\|\psi\|_{C_{\tau^{1/(k-2)}}L^{\infty}} +\|\psi\|_{{C_{\rho} \mathscr{C}^{\alpha}}}). \end{align} By (\ref{Ephi1}) and the dissipative assumption (\ref{assumf}) of $f$, we have \begin{align}\label{E58} \|f(\psi+\phi)-f(\psi)\|_{C_{\tau^{1+1/(k-2)}}L^{\infty}} & \lesssim \|f'(\psi+\phi)\|_{C_{\rho^{k-2}}L^{\infty}}\|\psi\|_{C_{\tau^{1/(k-2)}}L^{\infty}} \nonumber \\ & \lesssim (1+\|\psi\|^{k-2}_{C_{\tau^{1/(k-2)}}L^{\infty}})\|\psi\|_{C_{\tau^{1/(k-2)}}L^{\infty}}. \end{align} Combining with above estimates and (\ref{psi}), we obtain \begin{equation*} \| \Psi \|_{C_{\tau^{1+1/(k-2)}}L^{\infty}} \lesssim 1+ \lambda \|\psi\|^{k-1}_{C_{\tau^{1/(k-2)}}L^{\infty}} +\|\psi\|^{-1/3+4/3(1-\kappa)}_{C_{\tau^{1/(k-2)}}L^{\infty}}+ \|\psi\|^{-1/2+2/(1-\kappa)}_{C_{\tau^{1/(k-2)}}L^{\infty}} + \|\psi\|^{2(k-2)/2+1}_{C_{\tau^{1/(k-2)}}L^{\infty}} \end{equation*} for every $\lambda \in [0,1)$. 
Then by parabolic coercive estimates from Lemma \ref{nlSch} and weighted Young inequality, we obtain \begin{equation} \|\psi\|_{C_{\tau^{1/(k-2)}}L^{\infty}} \lesssim 1+\|\Psi\|^{1/(k-1)}_{C_{\tau^{1/(k-2)}}L^{\infty}} \lesssim 1. \end{equation} \subsection{Existence} In this subsection, we prove the following existence result by a smooth approximation and compactness. Let $u_{\epsilon}$ be a solution to the approximation equation \begin{equation} \partial_t u_{\epsilon} + \mathscr{L}u_{\epsilon} = f(u_{\epsilon})+ u_{\epsilon}\diamond\xi_{\epsilon}, \quad u_{\epsilon}(0)=u_{0,\epsilon}. \end{equation} where $\xi_{\epsilon}\in C^{\infty}(\mathbb{T}^2)$ is the mollification of the spatial white noise $\xi$, $u_{\epsilon}\diamond \xi_{\epsilon}$ is the approximation of $ u\diamond \xi$, and $u_{0,\epsilon}$ is a smooth approximation of the initial value $u_0$. By Lemma \ref{Exapp}, for every $\epsilon\in (0,1)$ and $T>0$, there exists a unique classical solution $u_{\epsilon} \in C^{\infty}([0,T]\times\mathbb{T}^2)$ to the approximation equation. \begin{thm}\label{Gexistence} Let $ u_0\in \mathscr{C}^{-1}$, $\alpha \in [2/3,1)$. Then there exists a solution $(\phi, \psi, \phi^{\sharp})$ to system (\ref{decGPAM}) with \begin{align*} \phi \in & [C_{\tau^{1/(k-2)+\alpha/2}}\mathscr{C}^{\alpha} \cap C^{\alpha/2}_{\tau^{1/(k-2)+\alpha/2}}L^{\infty}] \\ \psi \in & [C_{\rho}\mathscr{C}^{3\alpha} \cap C^1_{\rho}L^{\infty} \cap C_{\tau^{1/(k-2)}}L^{\infty}] \\ \phi^{\sharp} \in & [C_{\rho}\mathscr{C}^{2\alpha}\cap C^{\alpha}_{\rho}L^{\infty}], \end{align*} such that $u=\phi+\psi$ is a paracontrolled solution to the nonlinear parabolic Anderson model equation. \end{thm} \begin{proof} Let $\xi_{\epsilon}$ be a smooth approximation of the spatial white noise $\xi$, and let $u_{0,\epsilon}$ be a smooth approximation of the initial value $u_0$. Then by Lemma \ref{Exapp}, for every $\epsilon\in (0,1)$ and $T>0$, there exists a unique classical solution $u_{\epsilon} \in C^{\infty}([0,T]\times\mathbb{T}^2)$ to \begin{equation} \partial_t u_{\epsilon}+\mathscr{L}u_{\epsilon} = f(u_{\epsilon})+ u_{\epsilon}\xi_{\epsilon}-C_{\epsilon}u_{\epsilon}, \quad u_{\epsilon}(0)=u_{0,\epsilon}. \end{equation} Where $c_{\epsilon}>0$ is the renormalization constant. We decompose $u_{\epsilon}=\psi_{\epsilon}+\phi_{\epsilon}$ as same as above, such that the pair $(\psi_{\epsilon}, \phi_{\epsilon})$ satisfies the system, \begin{equation}\label{appdecGPAM1} \left\{ \begin{aligned} & \partial_{t}\phi_{\epsilon}+\mathscr{L}\phi_{\epsilon} = \Phi_{\epsilon}, \quad \phi_{\epsilon}(0) = \phi_{0,\epsilon}=u_{0,\epsilon}\\ & \partial_{t}\psi_{\epsilon}+\mathscr{L}\psi_{\epsilon} = f(\psi_{\epsilon})+ \Psi_{\epsilon}, \quad \psi(0)=0, \end{aligned} \right. \end{equation} where the definitions of $\Phi_{\epsilon}$ and $\Psi_{\epsilon}$ are same as $\Phi$ and $\Psi$. Same as $\phi^{\sharp}$, we also define $\phi^{\sharp}_{\epsilon} = \phi- (\psi_{\epsilon} + \phi_{\epsilon})\prec\mkern-10mu\prec \vartheta_{\epsilon}$. 
From a priori estimates, for any $T>0$ the approximation $(\psi_{\epsilon}, \phi_{\epsilon}, \phi^{\sharp}_{\epsilon})$ have the following uniformly bounds (uniformly in $\epsilon \in (0,1)$) \begin{align} & \|\tau^{1/(k-2)+\alpha/2}\phi_{\epsilon}\|_{C_{T}\mathscr{C}^{\alpha}}+\|\tau^{1/(k-2)+\alpha/2}\phi_{\epsilon}\|_{C^{\alpha/2}_{T}L^{\infty}} \lesssim 1, \nonumber \\ & \|\rho\psi_{\epsilon}\|_{C_{T}\mathscr{C}^{3\alpha} } + \|\rho\psi_{\epsilon}\|_{C^1_{T}L^{\infty}} \lesssim 1 \nonumber \\ & \|\rho\phi_{\epsilon}^{\sharp}\|_{C_{T}\mathscr{C}^{2\alpha}} + \|\rho\phi_{\epsilon}^{\sharp}\|_{C^{\alpha}_{T}L^{\infty}} \lesssim 1 \nonumber \end{align} Due to the Besov embedding, Arzela-Ascoli theorem and Aubin-Lions argument, the space \begin{equation*} [C_{T}\mathscr{C}^{\alpha} \cap C^{\alpha/2}_{T}L^{\infty}] \times [C_{T}\mathscr{C}^{3\alpha} \cap C^1_{T}L^{\infty} ]\times[C_{T}\mathscr{C}^{2\alpha} \cap C^{\alpha}_{T}L^{\infty}] \end{equation*} is compactly embedded into \begin{equation*} [C_{T}\mathscr{C}^{\alpha-\delta} \cap C_{T}^{(\alpha-\delta)/2}\mathscr{C}^{-\gamma}] \times [C_{T}\mathscr{C}^{3\alpha-\delta} \cap C^{1-\delta}_{T}\mathscr{C}^{-\gamma} ]\times[C_{T}\mathscr{C}^{2\alpha-\delta} \cap C^{\alpha-\delta}_{T}\mathscr{C}^{-\gamma}] \end{equation*} provided $\delta \in (0,\alpha)$ and $\gamma \in (0,1)$ are chosen small. We refer Lemma 1 and Theorem 5 in [\refcite{S1987}] for more details. Thus there exists a convergent subsequence (still denoted $(\psi_{\epsilon}, \phi_{\epsilon}, \phi^{\sharp}_{\epsilon})$) which converge to some $(\psi, \phi, \phi^{\sharp})$ in above space. Moreover, for any $T>0$, by linearity of the localizers $\mathscr{U}_{>}$, $\mathscr{U}_{\leq}$, and using same estimates in Section 3.2 we have \begin{equation*} \rho\Phi_{\epsilon} \rightarrow \rho\Phi \quad \text{in} \quad C_T\mathscr{C}^{\alpha -2 - \delta} \end{equation*} and \begin{equation*} \rho\Psi_{\epsilon} \rightarrow \rho\Psi \quad \text{in} \quad C_T\mathscr{C}^{3\alpha-2-\delta}. \end{equation*} Passing to the limit in (\ref{appdecGPAM1}). Thus limit $(\phi, \psi, \psi^{\sharp})$ solves the system (\ref{decGPAM}) in distributional sense. Now we turn to show that \begin{align*} \phi \in & [C_{\tau^{1/(k-2)+\alpha/2}}\mathscr{C}^{\alpha} \cap C^{\alpha/2}_{\tau^{1/(k-2)+\alpha/2}}L^{\infty}] \\ \psi \in & [C_{\rho}\mathscr{C}^{3\alpha} \cap C^1_{\rho}L^{\infty} \cap C_{\tau^{1/(k-2)}}L^{\infty}] \\ \phi^{\sharp} \in & [C_{\rho}\mathscr{C}^{2\alpha}\cap C^{\alpha}_{\rho}L^{\infty}], \end{align*} By a priori estimates for $(\phi_{\epsilon}, \psi_{\epsilon}, \phi_{\epsilon}^{\sharp})$, the Littlewood-Paley blocks $\Delta_i\phi_{\epsilon}$, $\Delta_i\psi_{\epsilon}$, $\Delta_i\phi_{\epsilon}^{\sharp}$ have uniform bounds \begin{align*} & \|\tau(t)^{1/(k-2)+\alpha/2}\Delta_i\psi_{\epsilon}(t)\|_{L^{\infty}} \lesssim 1, \\ & \|\rho(t)\Delta_i\phi_{\epsilon}(t)\|_{L^{\infty}}\lesssim 1, \\ & \|\rho(t)\Delta_i\phi^{\sharp}_{\epsilon}(t)\|_{L^{\infty}} \lesssim 1 \end{align*} uniform in $\epsilon$, $t$, and $i$. 
From weak $\ast$ lower semicontinuous of $L^{\infty}$ norm, we deduce that \begin{align*} \|\tau(t)^{1/(k-2)+\alpha/2}\Delta_i\phi(t)\|_{L^{\infty}} \leq & \liminf_{\epsilon\rightarrow 0}\|\rho(t)\tau(t)^{\alpha/2}\Delta_i\phi_{\epsilon}(t)\|_{L^{\infty}} \\ \leq & \liminf_{\epsilon\rightarrow 0}\|\phi_{\epsilon}\|_{C_{\tau^{1/(k-2)+\alpha/2}} \mathscr{C}^{\alpha}}2^{-i\alpha} \\ \lesssim & 2^{-i\alpha} , \end{align*} \begin{align*} \|\rho(t)\Delta_i\psi(t)\|_{L^{\infty}} \leq & \liminf_{\epsilon\rightarrow 0}\|\rho(t)\Delta_i\psi_{\epsilon}(t)\|_{L^{\infty}} \\ \leq & \liminf_{\epsilon\rightarrow 0}\|\psi\|_{C_{\rho}\mathscr{C}^{3\alpha}}2^{-i3\alpha} \\ \lesssim & 2^{-i3\alpha}, \end{align*} \begin{align*} \|\rho(t)\Delta_i\psi(t)\|_{L^{\infty}} \leq & \liminf_{\epsilon\rightarrow 0}\|\rho(t)\Delta_i\psi_{\epsilon}(t)\|_{L^{\infty}} \\ \leq & \liminf_{\epsilon\rightarrow 0}\|\psi_{\epsilon}\|_{C_{\rho} L^{\infty}} \\ \lesssim & 1, \end{align*} \begin{align*} \|\rho(t)\Delta_i\phi^{\sharp}(t)\|_{L^{\infty}} \leq & \liminf_{\epsilon\rightarrow 0}\|\rho(t)\Delta_i\phi^{\sharp}_{\epsilon}(t)\|_{L^{\infty}} \\ \leq & \liminf_{\epsilon\rightarrow 0}\|\phi^{\sharp}_{\epsilon}\|_{C_{\rho} \mathscr{C}^{2\alpha}}2^{-i2\alpha} \\ \lesssim & 2^{-i2\alpha}. \end{align*} Above estimates imply that \begin{equation*} (\phi, \psi, \phi^{\sharp}) \in L^{\infty}_{\tau^{1/(k-2)+\alpha/2}}\mathscr{C}^{\alpha} \times [L^{\infty}_{\rho}L^{\infty} \cap L^{\infty}_{\rho}\mathscr{C}^{3\alpha}]\times L^{\infty}_{\rho}\mathscr{C}^{2\alpha}. \end{equation*} For the time regularity, we have \begin{align*} \|\tau(t)^{1/(k-2)+\alpha/2}\phi(t)-\tau(s)^{1/(k-2)+\alpha/2}\phi(s)\|_{L^{\infty}} \lesssim & \liminf_{\epsilon\rightarrow 0}\|\tau(t)^{1/(k-2)+\alpha/2}\phi_{\epsilon}(t)-\tau(s)^{1/(k-2)+\alpha/2}\phi_{\epsilon}(s)\|_{L^{2}} \nonumber \\ \lesssim & \|\phi_{\epsilon}\|_{C_{\tau^{1/(k-2)+\alpha/2}}^{\alpha/2}L^{2}}|t-s|^{\alpha/2} \nonumber \\ \lesssim & |t-s|^{\alpha/2}, \end{align*} \begin{align*} \|\rho(t)\psi(t)-\rho(s)\psi(s)\|_{L^{\infty}} \lesssim & \liminf_{\epsilon\rightarrow 0}\|\rho(t)\psi_{\epsilon}(t)-\rho(s)\phi_{\epsilon}(s)\|_{L^{2}} \nonumber \\ \lesssim & \|\psi_{\epsilon}\|_{C^{1}_{\rho}L^{2}}|t-s| \nonumber \\ \lesssim & |t-s|, \end{align*} and \begin{align*} \|\rho(t)\phi^{\sharp}(t)-\rho(s)\phi^{\sharp}(s)\|_{L^{\infty}} \lesssim & \liminf_{\epsilon\rightarrow 0}\|\rho(t)\phi^{\sharp}_{\epsilon}(t)-\rho(s)\phi^{\sharp}_{\epsilon}(s)\|_{L^{2}} \nonumber \\ \lesssim & \|\phi^{\sharp}_{\epsilon}\|_{C^{\alpha}_{\rho}L^{\infty}}|t-s|^{\alpha} \nonumber \\ \lesssim & |t-s|^{\alpha}. \end{align*} Then we obtain time regularity. The proof is complete. \end{proof} \subsection{Uniqueness} In this subsection, we consider the uniqueness of the nonlinear parabolic Anderson model equation (\ref{GPAM}) via the classical energy estimate. \begin{thm}\label{Uniq} The solution of (\ref{GPAM}) in the sense of Theorem \ref{Gexistence} is unique. \end{thm} \begin{proof} Suppose $(\phi_1, \psi_1, \phi_1^{\sharp})$ and $(\phi_2, \psi_2, \phi_2^{\sharp})$ are two solutions of (\ref{GPAM}) which given in Theorem \ref{Gexistence}. Let $\zeta:= u_1 -u_2 = \psi_1 +\phi_1 -\psi_2 -\phi_2$, then $\zeta$ satisfies \begin{equation}\label{zeta} \partial_{t}\zeta + \mathscr{L}\zeta-\zeta\diamond \xi= f(u_1) - f(u_2) , \quad \zeta(0)=0. \end{equation} Here, we use the simple paracontrolled $\zeta = \zeta\prec\vartheta + \zeta^{\sharp}$ to define $\zeta\diamond \xi$. 
Since $u =\phi+\psi = u \prec\mkern-10mu\prec\vartheta + \phi^{\sharp} + \psi $, the remainder $\zeta^{\sharp}$ is given by
\begin{align*}
\zeta^{\sharp}:= & \zeta - \zeta\prec\vartheta \\
= & (\psi_1 - \psi_2) + (\phi_1-\phi_2)-\zeta\prec\vartheta \\
= & (\psi_1 - \psi_2) + ((\phi_1-\phi_2)\prec\mkern-10mu\prec \vartheta - (\phi_1-\phi_2)\prec \vartheta) - (\phi_1^{\sharp}-\phi_2^{\sharp}).
\end{align*}
The a priori estimates for $(\phi,\psi,\phi^{\sharp})$ yield that $\zeta^{\sharp}(t) \in \mathscr{C}^{2\alpha} \hookrightarrow H^{2\alpha}$. Thus $\zeta\diamond\xi$ is given as follows:
\begin{equation}\label{Wickp}
\zeta\diamond\xi = \zeta \prec \xi + \zeta \succ \xi + \zeta^{\sharp}\circ\xi + C(\zeta,\vartheta, \xi)+ \zeta (\vartheta\diamond\xi).
\end{equation}
Now we multiply equation (\ref{zeta}) by $\zeta$ and take the $H^{\alpha-1}(\mathbb{T}^2)$ inner product to obtain
\begin{equation}\label{testeq}
\frac{1}{2}\partial_t\|\zeta\|^2_{H^{\alpha-1}} + \|\nabla\zeta\|^2_{H^{\alpha-1}}+ \mu\|\zeta\|^2_{H^{\alpha-1}} = \langle \zeta, f(u_1) - f(u_2)\rangle_{H^{\alpha-1}} + \langle \zeta, \zeta \diamond \xi\rangle_{H^{\alpha-1}}.
\end{equation}
We begin by estimating $\langle \zeta, \zeta \diamond \xi\rangle_{H^{\alpha-1}}$. By (\ref{Wickp}), this term can be decomposed as
\begin{align*}
& \langle \zeta , \zeta \diamond \xi\rangle_{H^{\alpha-1}} \\
= & \langle \zeta , \zeta \prec \xi\rangle_{H^{\alpha-1}} + \langle\zeta, \zeta \succ \xi\rangle_{H^{\alpha-1}} + \langle\zeta, \zeta^{\sharp}\circ\xi\rangle_{H^{\alpha-1}} + \langle \zeta , C(\zeta,\vartheta, \xi) \rangle_{H^{\alpha-1}} + \langle\zeta , \zeta (\vartheta\diamond\xi)\rangle_{H^{\alpha-1}}.
\end{align*}
By Lemma \ref{interpolationH} and the weighted Young inequality, we have
\begin{align}\label{UE1}
\langle \zeta, \zeta \prec \xi \rangle_{H^{\alpha-1}} + \langle \zeta, \zeta \succ \xi \rangle_{H^{\alpha-1}}\leq & \|\zeta\|_{H^{2\alpha-1+\kappa}}(\|\zeta \prec \xi \|_{H^{-1-\kappa}}+\|\zeta \succ \xi \|_{H^{-1-\kappa}}) \nonumber \\
\lesssim & \|\zeta\|_{H^{2\alpha-1+\kappa}}\|\xi\|_{\mathscr{C}^{-1-\kappa}}\|\zeta\|_{H^{\kappa}} \nonumber \\
\lesssim & \delta\|\nabla\zeta\|^2_{H^{\alpha-1}} + C_{\delta}\|\zeta\|^2_{H^{\alpha-1}}.
\end{align}
By the paraproduct estimates and Lemma \ref{Dm}, we have
\begin{align}\label{UE2}
\langle\zeta , \zeta^{\sharp}\circ\xi\rangle_{H^{\alpha-1}} \leq & \|\zeta\|_{H^{2\alpha-1+\kappa}}\|\zeta^{\sharp} \circ \xi \|_{H^{-1-\kappa}} \nonumber \\
\lesssim & \|\zeta\|_{H^{2\alpha-1+\kappa}}\|\xi\|_{\mathscr{C}^{-1-\kappa}}\|\zeta^{\sharp}\|_{H^{\kappa}} \nonumber \\
\lesssim & \|\zeta\|_{H^{2\alpha-1+\kappa}}\|\xi\|_{\mathscr{C}^{-1-\kappa}}\|\zeta-\zeta\prec\vartheta\|_{H^{\kappa}} \nonumber \\
\lesssim & \delta\|\nabla\zeta\|^2_{H^{\alpha-1}} + C_{\delta}\|\zeta\|^2_{H^{\alpha-1}}.
\end{align}
By the paraproduct estimates, the commutator estimates, and the weighted Young inequality, we have
\begin{align}\label{UE4}
\langle \zeta, C(\zeta,\vartheta, \xi) \rangle_{H^{\alpha-1}} \leq & \|\zeta\|^2_{H^{\alpha-1}}+\|C(\zeta,\vartheta,\xi)\|^2_{H^{\alpha-1}}\nonumber \\
\lesssim & \|\zeta\|^2_{H^{\alpha-1}} + \|\zeta\|^2_{H^{1/2}}\|\vartheta\|_{\mathscr{C}^{1-\kappa}}\|\xi\|_{\mathscr{C}^{-1-\kappa}} \nonumber \\
\lesssim & \delta\|\nabla\zeta\|^2_{H^{\alpha-1}} + C_{\delta}\|\zeta\|^2_{H^{\alpha-1}},
\end{align}
and
\begin{align}\label{UE5}
\langle\zeta(t), \zeta(t) (\vartheta\diamond\xi)\rangle_{H^{\alpha-1}} \leq & \|\zeta(t)\|_{H^{\alpha-1}} \|\zeta (\vartheta\diamond\xi)\|_{H^{\alpha-1}} \nonumber\\
\lesssim & \|\zeta(t)\|^2_{H^{\alpha-1}} + \|\zeta\|^2_{H^{1/2}}\|\vartheta\diamond\xi\|^2_{\mathscr{C}^{-2\kappa}} \nonumber \\
\lesssim & \delta\|\nabla\zeta\|^2_{H^{\alpha-1}} + C_{\delta}\|\zeta(t)\|^2_{H^{\alpha-1}}.
\end{align}
From the above estimates (\ref{UE1})--(\ref{UE5}), we have
\begin{equation}\label{UEE1}
\langle \zeta, \zeta \diamond \xi\rangle_{H^{\alpha-1}} \lesssim \delta\|\nabla\zeta(t)\|^2_{H^{\alpha-1}} + C_{\delta}\|\zeta(t)\|^2_{H^{\alpha-1}}.
\end{equation}
Moreover, the assumption on $f$ implies that
\begin{align}\label{UEE6}
\langle \zeta, (f(u_1)-f(u_2)) \rangle_{H^{\alpha-1}} \leq l\| \zeta(t)\|^2_{H^{\alpha-1}}.
\end{align}
Plugging estimates (\ref{UEE1}) and (\ref{UEE6}) into (\ref{testeq}), and choosing $\delta$ small enough that the term $\delta\|\nabla\zeta(t)\|^2_{H^{\alpha-1}}$ is absorbed into the left-hand side, we finally obtain
\begin{equation}\label{testeq8}
\frac{1}{2}\partial_t \|\zeta(t)\|^2_{H^{\alpha-1}} \leq C_{\delta}\|\zeta(t)\|^2_{H^{\alpha-1}}.
\end{equation}
Since $\zeta(0) = \zeta^{\sharp}(0)=0$, Gr\"{o}nwall's inequality yields $\zeta(t)=0$, and hence $\zeta^{\sharp}(t)=0$, for every $t>0$. Since $\phi_1 - \phi_2$ satisfies the linear equation
\begin{equation*}
\mathscr{L}(\phi_1 - \phi_2) = \zeta \prec \mathscr{U}_{>} \xi + \zeta \succ \mathscr{U}_{>} \xi + \zeta \succ \mathscr{U}_{>}(\vartheta\circ\xi) - \zeta \prec \mathscr{U}_{>}(\vartheta\circ\xi), \quad (\phi_1- \phi_2)(0)=0,
\end{equation*}
if $\zeta(t)=0$ for every $t>0$, then $\phi_1 = \phi_2$ and $\psi_1=\psi_2$. Furthermore, note that $\zeta^{\sharp}$ can be written as
\begin{align*}
\zeta^{\sharp}:& =(\phi_1-\phi_2) - \zeta\prec\vartheta \\
& =(\phi_1^{\sharp}-\phi_2^{\sharp})-\rho^{-1}\left((\rho\zeta)\prec \vartheta- (\rho\zeta)\prec\mkern-10mu\prec \vartheta \right).
\end{align*}
If $\zeta = \zeta^{\sharp}=0$, then $\phi_1^{\sharp}=\phi_2^{\sharp}$. Thus the solution of (\ref{GPAM}) is unique.
\end{proof}

\section{Conclusion}
We have established a global well-posedness result for the nonlinear parabolic Anderson model equation in the paracontrolled distribution framework, using parabolic Schauder and coercive estimates. Furthermore, we have proved uniqueness by a direct energy estimate. We point out that another possible approach to the nonlinear parabolic Anderson model equation is to use properties of the Anderson Hamiltonian $\mathscr{H}$ and to employ $L^2$ energy estimates directly. In [\refcite{GUZ2020}], the authors use this method to study semilinear Schr\"{o}dinger and wave equations with the Anderson Hamiltonian $\mathscr{H}$. However, this method yields lower regularity of the solution than our results, and it requires further regularity estimates for the equation. There are still some possible extensions of our results.
In fact, the noise term $u\diamond\xi$ can be replaced by a more general term, such as $g(u)\diamond\xi$. We could also extend the domain $\mathbb{T}^2$ to the whole space; to study the parabolic Anderson model equation on $\mathbb{R}^{+}\times\mathbb{R}^2$ one has to work with spatial weights. Another direction is to consider the equation in higher dimension ($d=3$) and with more singular noise, such as space-time white noise. The dynamical properties of the parabolic Anderson model equation are also interesting to investigate in future work.

\section*{Acknowledgments}
We are very grateful to the reviewer for valuable comments and to the editor for their help.

\end{document}
\begin{document} \title{Implementation vulnerabilities in general quantum cryptography} \author{Anqi~Huang$^{1,2,3}$} \email{[email protected]} \author{Stefanie~Barz$^{4,5}$} \author{Erika~Andersson$^{6}$} \author{Vadim~Makarov$^{7,8}$} \affiliation{ $^1$Institute for Quantum Information \& State Key Laboratory of High Performance Computing, College of Computer, National University of Defense Technology, Changsha 410073, People's Republic of China\\ $^2$Institute for Quantum Computing, University of Waterloo, Waterloo, ON, N2L~3G1 Canada\\ \mbox{$^3$Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, ON, N2L~3G1 Canada}\\ \mbox{$^4$Institute for Functional Matter and Quantum Technologies, University of Stuttgart, 70569 Stuttgart, Germany}\\ \mbox{$^5$Center for Integrated Quantum Science and Technology, University of Stuttgart, IQST, 70569 Stuttgart, Germany}\\ $^6$Institute of Photonics and Quantum Sciences, Heriot-Watt University, Edinburgh, Scotland, EH14~4AS UK\\ $^7$Russian Quantum Center and MISIS University, Moscow, Russia\\ $^8$Department of Physics and Astronomy, University of Waterloo, Waterloo, ON, N2L~3G1 Canada } \date{\today} \begin{abstract} Quantum cryptography is information-theoretically secure owing to its solid basis in quantum mechanics. However, generally, initial implementations with practical imperfections might open loopholes, allowing an eavesdropper to compromise the security of a quantum cryptographic system. This has been shown to happen for quantum key distribution~(QKD). Here we apply experience from implementation security of QKD to several other quantum cryptographic primitives. We survey quantum digital signatures, quantum secret sharing, source-independent quantum random number generation, quantum secure direct communication, and blind quantum computing. We propose how the eavesdropper could in principle exploit the loopholes to violate assumptions in these protocols, breaking their security properties. Applicable countermeasures are also discussed. It is important to consider potential implementation security issues early in protocol design, to shorten the path to future applications. \end{abstract} \maketitle \section{General introduction} Common aims in cryptography are to guarantee confidentiality, integrity, and authentication of information. Some of the conventional cryptography based on computational complexity might be broken by a powerful quantum computer~\cite{ETSIwhitepaper}. However, quantum cryptography, where security rests on the laws of quantum mechanics, is one way to achieve information-theoretic security. Among the quantum cryptographic protocols, quantum key distribution (QKD) has become theoretically mature and technically practical. Inspired by the idea of QKD and taking advantage of QKD implementations, other quantum cryptographic primitives have gradually been developed, such as quantum coin tossing, quantum secret sharing, and quantum digital signatures~\cite{bennett1984,cleve1999,gottesman2001}. For each primitive, different protocols have been proposed, and even realized by current technology~\cite{bogdanski2008,collins2016,yin2017b}. However, there is a non-negligible gap between theory and practice in QKD: imperfections in devices create various loopholes that compromise the protocol's security~\cite{zhao2008,lydersen2010a,weier2011,sun2011,jain2014,sajeed2015a,makarov2016}. Practical security issues might also occur in the realization of other quantum cryptographic protocols. 
In theory, the protocols are unconditionally secure, but the security might not be guaranteed in practice due to imperfections of devices. Investigating device imperfections and system loopholes in QKD has taken more than a decade, and is still in progress. The experience gained from QKD will be helpful in finding possible loopholes in other implementations of quantum cryptographic protocols, because they use similar optical components. This enhances the practical security of quantum cryptography. \begin{table*} \caption{Summary of potential attacks in implementations of quantum cryptographic protocols. The table lists broken security properties for five primitives: two different protocols for quantum digital signatures (QDS), two different protocols for quantum secret sharing (QSS), source-independent quantum random number generation (SI~QRNG), quantum secure direct communication (QSDC), and blind quantum computing (BQC). ``--" means the attack is not applicable. See text for details.\\} \label{tbl:summary} \begin{tabular}{l|cccc} \diagbox{\bf{Protocol}~~~~}{\bf{Attack}~} & \makecell{Source\\ side channel} & \makecell{Wavelength-dependent\\ attack} & \makecell{Detector control\\ attack} & \makecell{Trojan-horse\\ attack} \\ \hline \\[-1.9ex] QDS& & & & \\ Identical-state-sharing~\cite{yin2016c} & Unforgeability & Unforgeability& Unforgeability& --\\ Different-state-sharing~\cite{collins2016} & -- & --& Unforgeability& Unforgeability\\ \\[-1.4ex] QSS& & & & \\ Entanglement-based~\cite{chen2005} & -- &--&Confidentiality&--\\ Single-qubit~\cite{bogdanski2008} & -- &--&-- &Confidentiality\\ \\[-1.4ex] SI QRNG~\cite{cao2016} & -- & Randomness & Randomness & -- \\ \\[-1.4ex] QSDC~\cite{hu2016}& --&--&Confidentiality&Confidentiality\\ \\[-1.4ex] BQC~\cite{barz2012}& Confidentiality&--&--&Confidentiality \end{tabular} \end{table*} The vulnerability of quantum coin-tossing and non-loophole-free Bell testing has previously been demonstrated \cite{sajeed2015,gerhardt2011a}, using imperfections in their specific experimental implementations to remove the protocol's quantum advantages. In this Article, we survey five quantum cryptographic primitives as examples to investigate practical security threats in their implementation. The primitives are quantum digital signatures (QDS), quantum secret sharing (QSS), source-independent quantum random number generation (SI QRNG), quantum secure direct communication (QSDC), and blind quantum computing (BQC). Based on attacks known to exist for QKD, we propose potential attacks on these primitives. The attacks may compromise the security of practical quantum cryptographic systems, without making the legitimate participants abort the cryptographic protocols. We summarize potential imperfections and broken security properties for all five primitives in~\cref{tbl:summary}. Details for each primitive are explained in~\cref{sec:QDS,sec:QSS,sec:QRNG,sec:QSDC,sec:BQC}. Each of these sections contains two parts: in subsection~A we recap the protocol, and in subsection~B we propose the attacks on its implementation. Countermeasures are discussed in~\cref{Countermeasure}. We conclude in~\cref{conclusion}. Please note that this study is merely a starting point presenting a broad overview. Detailed analysis of each implementation imperfection should be done in the future, as technological implementations of the protocols mature. In this Article, we focus on the implementation security of the demonstrations. 
We also remark that while most of these quantum cryptographic schemes have advantages over ``classical'' schemes, for some of these protocols, their practical usefulness is less clear, and strict security proofs may still be under development. For example, it is not always clear what practical advantage all protocols for QSS offer, over protocols based on secret shared keys followed by a ``classical'' protocol for secret sharing. Similarly, for QSDC, one would need to motivate the usefulness of direct communication, as opposed to establishing secret shared keys using standard QKD, followed by encryption using these keys. Discussing these aspects is however outside the scope of our study. \section{Quantum digital signatures} \label{sec:QDS} Digital signatures are an important primitive in cryptography. Specifically, three security properties are required for signatures: unforgeability, nonrepudiation, and transferability~\cite{swanson2011}. Unforgeability guarantees a unique message signer, so no one else is able to forge a valid signature. Nonrepudiation requires that once a message is signed, the signer cannot deny the signature. Transferability means that a recipient who accepts a message can be sure that if the message is forwarded, another recipient will also accept the message, except with a probability that can be made arbitrarily low. QDS based on laws of quantum physics is able to satisfy these requirements, and achieve information-theoretic security~\cite{gottesman2001}. Unconditionally secure signatures are also possible based on shared secret keys~\cite{chaum1990,hanaoka2000, swanson2011,amiri2016a}, and the scaling of secret key length with respect to message length can be more favorable than for quantum signatures. The secret shared key could be generated by quantum key distribution, but otherwise these schemes remain entirely ``classical''. On the other hand, the error rate threshold can be less strict for quantum digital signatures than for quantum key distribution to distill shared secret keys~\cite{amiri2016}. References~\onlinecite{yin2016b, amiri2016} propose QDS protocols via insecure quantum channels, which later have been implemented~\cite{yin2016c, collins2016}. A significant difference between these two protocols is the stage of quantum state distribution. In Ref.~\onlinecite{yin2016b}, Alice sends the same quantum states to Bob and Charlie, while in Ref.~\onlinecite{amiri2016}, Bob and Charlie individually send different quantum states to Alice. Both protocols are briefly introduced in the next subsection. A reader familiar with protocol implementation can, of course, skip to subsection~B, where we discuss vulnerabilities. \subsection{Protocol and implementation} \subsubsection{Identical-state-sharing} \label{ISS_QDS} Reference~\onlinecite{yin2016b} proposes a QDS protocol with a quantum-state sender, Alice, and two quantum-state receivers, Bob and Charlie. This protocol has been implemented over a distance of 102~km~\cite{yin2016c} as shown in~\cref{fig:Yin_imp}. The protocol consists of two stages, a quantum stage, and a signing stage. In the quantum stage, for each future 1-bit message $m = 0$ or $m = 1$, Alice employs weak coherent states to randomly prepare two identical sequences of qubit states, and every individual state is in one of the Bennett-Brassard 1984 (BB84) polarization states $\ket{H}$, $\ket{V}$, $\ket{+}$ and $\ket{-}$~\cite{bennett1984}. 
In addition, the decoy-state protocol~\cite{ma2005} is used to randomly modulate the mean photon numbers of the weak coherent states, protecting the system from photon-number-splitting~(PNS) attacks~\cite{brassard2000}. Then one copy of the sequence is sent to Bob, and one copy to Charlie. A beam splitter is used to randomly and independently select the $X$ or $Z$ basis to measure the received states. In a sifting phase, Bob and Charlie announce in which slots they obtain detections. For each detection slot, Alice then announces two nonorthogonal states from different bases, for example, $\ket{H}$ and $\ket{+}$. One of them is the real state she sent. If Bob (Charlie) obtains a measurement result corresponding to a state that is orthogonal to one of the states Alice announced, such as $\ket{V}$, then Bob (Charlie) conclusively knows that it is the other announced state, $\ket{+}$. In the next stage, the signing stage, only classical processing takes place. It starts by announcing some of the states shared between Alice and Bob (Charlie) during the quantum stage to calculate an authentication threshold $T_{a}$ ($T_{v}$) for Bob (Charlie). The unannounced states form strings denoted as $S_{Am}$ for Alice, $S_{Bm}$ for Bob and $S_{Cm}$ for Charlie, and will be used for the digital signature. To send a signed 1-bit message $m$, Alice sends the message and the corresponding data string, ($m$, $S_{Am}$), to one of the recipients, say, Bob. Bob will accept this message if the mismatch rate of sifted bits between $S_{Am}$ and $S_{Bm}$ is less than $T_{a}$. If Bob wishes to forward the message to Charlie, he forwards ($m$, $S_{Am}$) to Charlie. Charlie will accept this message as well if the mismatch rate of the sifted bits between $S_{Am}$ and $S_{Cm}$ is less than $T_{v}$. \begin{figure} \caption{Experimental setup for QDS implemented by H.-L. Yin and his coworkers (reprinted from Ref.~\onlinecite{yin2016c} \label{fig:Yin_imp} \end{figure} \subsubsection{Different-state-sharing} \label{DSS_QDS} \begin{figure} \caption{Implementation of QDS by R. J. Collins and his coworkers, employing a DPS QKD system (reprinted with permission from Ref.~\onlinecite{collins2016} \label{fig:Collins_imp} \end{figure} Reference~\onlinecite{amiri2016} proposes another quantum digital signature protocol that sends different quantum states from Bob and Charlie to Alice. This protocol has subsequently been implemented based on an installed differential-phase-shift (DPS) QKD system, as shown in~\cref{fig:Collins_imp}~\cite{collins2016}. This protocol is also divided into two stages, a distribution stage and a messaging stage. In the distribution stage, Bob and Charlie randomly and independently select two different \textit{n}-bit strings. Then, they encode the bits into quantum states according to the DPS QKD protocol~\cite{inoue2002}. For each future message $m = 0$ or $m = 1$, Bob (Charlie) applies a key-generating protocol (KGP) to share the bit string with Alice. The KGP can be treated as a partial QKD procedure without error correction and privacy amplification. Alice and Bob (Charlie) estimate the quantum bit error rate~(QBER) by announcing a small part of the shared bits. The remaining $L$-bit key is denoted by $K_{m}^{B}$ ($K_{m}^{C}$) at Bob's (Charlie's) side. At Alice's side, she obtains a signature $\textrm{Sig}_{m} = (A_{m}^{B}, A_{m}^{C})$ for a future message $m$. Then, to guarantee transferability, Bob and Charlie randomly forward half of their keys, $K_{m}^{B}$ and $K_{m}^{C}$, to each other. 
This classical bit exchange is encrypted by Bob and Charlie using a separate BB84 QKD system. This way, Alice receives no information on which bits have been forwarded and which bits have been kept. From her point of view, a bit she originally shared with Bob (Charlie) is now equally likely to be retained by Bob as by Charlie. Bob (Charlie) combine the non-exchanged part of $K_{m}^{B}$ ($K_{m}^{C}$) and the received part of $K_{m}^{C}$ ($K_{m}^{B}$) as a symmetric key, $S_{m}^{B}$ ($S_{m}^{C}$). In the messaging stage, Alice signs a message $m$ by $\textrm{Sig}_{m}$, and then sends $(m, \textrm{Sig}_{m})$ to Bob. Bob checks the mismatch rate between $\textrm{Sig}_m$ and $S_m^B$. If the mismatch rate is lower than the threshold $s_a$, Bob accepts the message. If Bob wishes to forward the message to Charlie, he forwards $(m, \textrm{Sig}_{m})$ to Charlie. Charlie also checks the mismatch rate between $\textrm{Sig}_m$ and $S_m^C$, and accepts the message if the mismatch rate is lower than the threshold $s_v$. From Alice’s point of view, the situation is symmetric with respect to Bob and Charlie, so that if Bob accepts a signature, Charlie must accept it with high probability, provided acceptance thresholds are chosen correctly and differently for Bob and Charlie. \subsection{Hacking} Both protocols have been proven to be information-theoretically secure, based on different assumptions~\cite{yin2016b, amiri2016}. In this section, we analyse the security assumptions for both protocols and illustrate how these assumptions might be broken. Since QDS realizations are based on QKD schemes with similar optical components, similar vulnerabilities exist. That is, some known attacks on QKD systems are applicable also to the realisations of quantum digital signatures. In our analysis, we assume an external attacker Eve who is not a legitimate participant (Alice, Bob or Charlie) in the QDS protocol. \subsubsection{Identical-state-sharing protocol} \label{hack_ISS_QDS} The unforgeability of this protocol is based on the assumption that given two copies of quantum states, Eve cannot distinguish between all four states Alice might send without error before Alice's declaration~\cite{chefles2001}. However, in practice, if Eve were able to discriminate the states via a side channel, messages could be forged. Several side channels exist in the implementation~\cite{yin2016c}, which could be exploited by Eve to hack it. Source side channels are useful for Eve to learn the quantum state prepared by Alice. When quantum states are prepared by different laser diodes, side channels could exist both in time and frequency domains~\cite{nauerth2009,huang2018}. In the implementation presented in Ref.~\onlinecite{yin2016c}, each laser diode prepares a specific state, and different laser diodes are used in a random order. To avoid the spectral side channel, the implementation controls the difference of the central wavelengths for all of these laser diodes in a narrow range (0.02 nm). Additionally, a dense wavelength division multiplexer~(DWDM) with 100~GHz bandwidth is used as a filter before the states are sent out. However, a side channel might exist in another degree of freedom. For example, pulse emission time, pulse width and pulse shape may vary for different laser diodes. These mismatches give Eve a chance to distinguish different states~\cite{huang2018}. If Eve is able to perfectly distinguish the quantum states, she could forge a copy of Alice's signature and send it to Bob and Charlie. 
However, usually, Eve can only partially distinguish the states. She may choose to perform different types of quantum measurements to maximize her distinguishability. For example, if Eve makes a measurement that sometimes gives her higher confidence in the result, such as an unambiguous quantum measurement, then she could forward a state only when her measurement has succeeded. Thus, in this case, if losses are high enough, this strategy may not be noticed by the legitimate parties. Measurements are usually more vulnerable than state preparation. One potential flaw hides in the beam splitter situated at the input of Bob's/Charlie's subsystem. The output ratio of the beam splitter might depend on the wavelength of the incoming light~\cite{li2011a}, which helps Eve during the intercept-resend attack. Eve first measures a state sent by Alice. According to the measurement result, Eve resends the measured state with a wavelength that makes the output ratio of the beam splitters become highly unbalanced, for example, 99:1 or 1:99. Then the resent state passes through Bob's/Charlie's beam splitter via one output with high probability, likely reaching the same measurement basis as Eve's. Thus, Eve, Bob and Charlie share almost the same detection results. At the sifting phase, Eve can wiretap the public announcement and follow the sifting rule described in the protocol, obtaining her signature string. After that, if the mismatch rate between Eve's and Bob's (Charlie's) strings is lower than $T_a$ ($T_v$), Eve would be able to pretend to be Alice and send a signature to Bob (Charlie). To force Bob and Charlie to obtain the same detection results as Eve during the intercept-resend attack, another possible tool is a detector blinding attack~\cite{lydersen2010a, lydersen2011c, tanner2014}. By applying this attack, Eve might be able to control all Bob's measurement results~\cite{lydersen2010a, lydersen2011c, tanner2014}. In this attack, Eve sends a strong laser to blind Bob's and Charlie's detectors such that they are no longer sensitive to single photons, but act as classical optical detectors. Then, during intercept-resend, Eve resends the measured states by energy-tailored pulses. The resent pulses trigger Bob's detections in the same basis and state as Eve's. If the detector blinding attack is possible in this QDS implementation, Eve could obtain a copy of the bits shared by Alice and Bob/Charlie after sifting. Thus, Eve could pretend to be Alice to sign a message. The detector blinding attack can maintain the normal detection statistics~\cite{gerhardt2011}. Furthermore, in a receiver that uses a beam splitter to passively choose bases and is vulnerable to the detector blinding attack, Eve can force a click with $100\%$ probability~\cite{gerhardt2011}. If the digital signature scheme is built using detectors other than superconducting nanowire single-photon detectors used in the implementation shown in~\cref{fig:Yin_imp}, other types of detector-control attacks may also apply, such as efficiency mismatch~\cite{makarov2006}, after-gate~\cite{wiechers2011}, superlinearity~\cite{lydersen2011b}, and deadtime~\cite{weier2011}. \subsubsection{Different-state-sharing protocol} \label{hack_DSS_QDS} In this protocol, unforgeability is based on the security of the KGP that guarantees $d(A_{i}^{B}, K_{i}^{B}) < d(E_\textrm{guess}, K_{i}^{B})$ (with high probability), where $d$ is the Hamming distance and $E_\textrm{guess}$ is Eve's attempt at guessing $K_{i}^{B}$~\cite{amiri2016}. 
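To make the role of the Hamming distance concrete, the following minimal sketch (our illustration only; the string length and the thresholds are placeholders, not values from the implementations discussed here) shows the mismatch-rate test that a recipient applies to a claimed signature, which is the same test a successful forger would have to pass.
\begin{verbatim}
# Minimal sketch of the mismatch-rate test used by a QDS recipient.
# Placeholder values; real thresholds come from the security analysis.

def mismatch_rate(sig, key):
    """Fraction of positions where the claimed signature disagrees with
    the recipient's locally held string (Hamming distance / length)."""
    assert len(sig) == len(key)
    return sum(a != b for a, b in zip(sig, key)) / len(sig)

def accept(sig, key, threshold):
    """Recipient accepts the signed message iff the mismatch rate is
    below his threshold (authentication or verification threshold)."""
    return mismatch_rate(sig, key) < threshold

# Toy example: a 20-bit string with two disagreements (10% mismatch).
alice_sig = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
bob_key   = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1]

print(accept(alice_sig, bob_key, threshold=0.15))  # True  (placeholder T_a)
print(accept(alice_sig, bob_key, threshold=0.05))  # False (stricter placeholder)
\end{verbatim}
A forgery succeeds exactly when Eve's guessed string also clears the recipient's threshold, which is why the protocol's security rests on $d(A_{i}^{B}, K_{i}^{B}) < d(E_\textrm{guess}, K_{i}^{B})$ holding with high probability.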
However, this property could be broken as well, if Eve can learn the states sent by Bob (Charlie) or forces Alice to detect the same result as hers. Similar to the previous protocol, the implementation might also contain several loopholes. Alice's SNSPDs might be vulnerable to the detector blinding attack~\cite{lydersen2010a, lydersen2011c, tanner2014}. Similarly to the previous implementation, the SNSPDs might be blinded by a strong laser. Eve then does intercept-resend and sends Alice faked states whose power and phase are tailored~\cite{lydersen2011}. Thus, Eve, Alice and Bob (Charlie) share the same bit string, which means $d(A_{i}^{B}, K_{i}^{B}) = d(E_\textrm{guess}, K_{i}^{B})$. At the source in Bob (Charlie), all the states are modulated by a phase modulator, which might open another loophole. The modulation information from the PM could be eavesdropped by a Trojan-horse attack~\cite{vakhitov2001,gisin2006, jain2014, sajeed2015}. In this attack, Eve sends strong light to Bob (Charlie). The reflected light carries the modulation information, which could be measured from the phase difference between injected light and reflected light. It has been shown that around four reflected photons are sufficient to read out most of the information~\cite{jain2014}. If the Trojan-horse attack is successful in the QDS system, Eve could get all Alice's information: $d(E_\textrm{guess}, K_{i}^{B})$ could become equal to $d(A_{i}^{B}, K_{i}^{B})$. \section{Quantum secret sharing} \label{sec:QSS} In secret sharing protocols, information is shared among many parties. The information can be reconstructed only if groups of parties collaborate. Information-theoretically secure secret sharing is possible not only using classical means (e.g., by pairwise shared keys), but also using quantum methods. Here, we focus on quantum secret sharing~\cite{cleve1999}. Two types of quantum secret sharing schemes, entanglement-based schemes~\cite{chen2005} and single-qubit schemes~\cite{bogdanski2008}, have been proposed for the sharing of classical messages. We survey both schemes. \subsection{Protocol and implementation} \subsubsection{Entanglement-based protocol} In one scheme for entanglement-based QSS~\cite{chen2005}, Alice, Bob and Charlie first hold one photon each in a Greenberger-Horne-Zeilinger (GHZ) triplet, which is the state \begin{equation} \label{eq:state} \ket{\psi}_{ABC}= \frac{1}{\sqrt{2}}(\ket{H}_{A}\ket{H}_{B}\ket{H}_{C} + \ket{V}_{A}\ket{V}_{B}\ket{V}_{C}). \end{equation} Then, a projective measurement is performed on each photon randomly either in the $X$ or $Y$ basis, where the basis states are given by \begin{equation} \ket{X_{\pm}}= \frac{1}{\sqrt{2}}(\ket{H}\pm\ket{V}), ~\ket{Y_{\pm}}= \frac{1}{\sqrt{2}}(\ket{H}\pm i\ket{V}). \end{equation} The GHZ states can be written as \begin{eqnarray} \label{eq:measured state} \ket{\psi}_{ABC}&=\frac{1}{2}[(\ket{X_+}_{A}\ket{X_+}_{B} + \ket{X_-}_{A}\ket{X_-}_{B})\ket{X_+}_{C}\nonumber \\ &+ (\ket{X_+}_{A}\ket{X_-}_{B} + \ket{X_-}_{A}\ket{X_+}_{B})\ket{X_-}_{C}]. \end{eqnarray} Thus, if each party measures in the $X$ basis, the measurement results would show perfect correlations. Once any two measurement results are known, the third measurement result can be predicted with certainty. Similar correlation would be obtained for three other measurement combinations, $X_A Y_B Y_C$, $Y_A X_B Y_C$, and $Y_A Y_B X_C$. 
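The correlations described above can be checked directly. The following short numerical sketch (our illustration, independent of any experimental implementation) evaluates the expectation value of the product of the three measurement outcomes, using the Pauli operators corresponding to the $X$ and $Y$ bases, for all basis combinations.
\begin{verbatim}
import numpy as np
from itertools import product

# Pauli operators in the {|H>, |V>} basis.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

# GHZ state (|HHH> + |VVV>)/sqrt(2).
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

# Expectation value of the product of the three measurement outcomes.
for labels in product("XY", repeat=3):
    ops = [X if l == "X" else Y for l in labels]
    val = np.real(ghz.conj() @ kron3(*ops) @ ghz)
    print("".join(labels), round(val, 6))

# Output: XXX -> +1, XYY/YXY/YYX -> -1 (perfectly correlated outcomes),
# all other combinations -> 0 (uncorrelated).
\end{verbatim}
The four combinations named above give $\pm 1$, so any two outcomes determine the third with certainty.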
However, the remaining four basis combinations, $X_A X_B Y_C$, $X_A Y_B X_C$, $Y_A X_B X_C$, and $Y_A Y_B Y_C$, result in uncorrelated measurement results among the three parties. Thus, they could announce their basis choices to sift the basis combinations with perfect correlation. After that, Alice and Bob share their measurement results with each other to establish Charlie's key. Thus, a message encrypted by Charlie can be decrypted if Alice and Bob cooperate. The protocol implementation is shown in~\cref{fig:entangle_setup}. \begin{figure} \caption{QSS based on entangled states (reprinted from Ref.~\onlinecite{chen2005} \label{fig:entangle_setup} \end{figure} \subsubsection{Single-qubit protocol} \label{QSS_single} Instead of using entangled states, reference~\onlinecite{Schmid2005} proposed an \textit{N}-party QSS protocol that uses a single qubit, which is easily realizable and scalable compared to the entanglement-based protocol. On the other hand, this protocol completely removes the possibility to share quantum information in terms of an entangled state. The information shared is necessarily classical. Reference~\onlinecite{bogdanski2008} demonstrated this protocol. An initial qubit $\ket{x}= (\ket{0}+\ket{1})/\sqrt{2}$ is prepared by party $R_1$, and sent from $R_2$ to $R_N$ sequentially. Each party $R_i$ (\textit{i} = 1, ..., $N-1$) encodes information by applying a phase randomly chosen from two sets, \{0, $\pi$\} and \{$\pi/2$, $3\pi/2$\}, to the $\ket{1}$ component in the qubit $\ket{x}$. The party $R_N$ randomly applies phase 0 or $\pi/2$ to the $\ket{1}$ component before measuring the state $\ket{\pm x} = (\ket{0}\pm \ket{1})/ \sqrt{2}$. Thus, the detection probability of each detector is \begin{equation} \begin{split} P_{D1}= \frac{1}{2}[1+\cos(\sum_{i = 1}^N\phi_i)],\\ P_{D2}= \frac{1}{2}[1-\cos(\sum_{i = 1}^N\phi_i)]. \end{split} \end{equation} Half of the time, there is destructive or constructive interference, when $\cos(\phi_1 + \cdots + \phi_N) = \pm 1$. If all parties announce which set of phase values their choice belonged to, then every party knows which detection results are deterministic. Using the knowledge of which measurement results are deterministic, multiple parties can then share a secret as follows. If any $N-1$ parties collaborate and share their modulating phases, they would be certain about the phase applied by the $N$th party for one slot of the deterministic measurement. To maintain stability in the experiment, a bidirectional scheme is applied to implement a 5-party protocol~\cite{bogdanski2008} as shown in~\cref{fig:single_setup}. Alice prepares the initial pulse without phase encoding and acts as $R_5$ to measure the final reflected state. The rest of parties encode their information on the way back from the Faraday mirror~(FM), after the pulse is attenuated to the single-photon level by the amplitude modulator~(AM). This idea is similar to the plug~\&~play QKD system~\cite{muller1997}. \begin{figure} \caption{Single-qubit QSS (reprinted from Ref.~\onlinecite{bogdanski2008} \label{fig:single_setup} \end{figure} \subsection{Hacking} We discuss one type of known attack that may work for the implementation of each aforementioned QSS protocol. An external Eve is assumed to be the attacker. 
If an external Eve can compromise the security, any inside attacker (a protocol participant) could also compromise security and obtain the secret without the cooperation of the other participants, because inside attackers have at least as much information as an outside attacker. \subsubsection{Blinding attack on entanglement-based implementation} In the entanglement-based QSS scheme mentioned above, three parties securely share a secret string using a GHZ state that has inherent correlations among the three photons. If Eve would like to perform an intercept-resend attack via a quantum channel, she would break the initial correlation between the three entangled photons, and thus introduce errors~\cite{hillery1999}. However, the detector blinding attack~(see~\cref{hack_ISS_QDS}) could help Eve steal the shared secret while introducing no error. Eve performs two independent detector blinding attacks on Alice's and Bob's detectors. The blinded detectors only click when Alice/Bob chooses the same measurement bases as Eve during an intercept-resend attack. Thus, Alice's and Bob's secret strings could be obtained by Eve to let her learn Charlie's key. Alternatively, instead of hacking Alice and Bob, Eve can directly blind Charlie's detectors to control the secret key he obtains. \subsubsection{Trojan-horse attack on single-qubit implementation} \label{two_way_trojan} The security of single-qubit QSS follows the proven BB84 QKD protocol~\cite{Schmid2005}. Similarly to BB84 protocol, an intercept-resend attack on the QSS introduces $25 \%$ error in the final detection results. However, the implementation might have side channels that leak information about state preparation, allowing Eve to learn the shared secret without disturbing the normal QSS protocol. In the implementation scheme of single-qubit QSS shown in~\cref{fig:single_setup}, similar to QKD systems, the phase modulation is implemented by a phase modulator which may be vulnerable. Thus, the Trojan-horse attack~(see \cref{hack_DSS_QDS}) appears to be a high risk, owing to the pass-through nature of every party except for Alice. Eve could send strong light to each party, excluding Alice, and then at the other side of each party receive the light modulated by the PM. By measuring the phase difference between Eve's original coherent light and the modulated light, she could read the phase modulation. In this way, Eve could know the secret shared among the four parties. In general, this hacking strategy works for $N$ parties. An attack on Alice may also be attempted, however, it is more difficult owing to the presence of SPDs in Alice~\cite{sajeed2017}. \section{Source-independent quantum random number generation} \label{sec:QRNG} Quantum random number generation~(QRNG) based on the uncertainty principle in quantum mechanics can be used to provide pure random numbers, a crucial resource in cryptography~\cite{herrero2017}. Similarly to QKD, a QRNG system also consists of quantum state preparation and measurement, however, the states are measured locally without long-distance transmission. Source-independent~(SI) QRNG protocols~\cite{vallone2014, cao2016, marangon2017} assume that the state preparation setup is untrusted, while the measurement setup is trusted. We survey the protocol in Ref.~\onlinecite{cao2016} and its experimental demonstration. \subsection{Protocol and implementation} In the SI~QRNG protocol, an untrusted party Eve prepares single-qubit states $\ket{+}$ and sends them to Alice's measurement station~\cite{cao2016}. 
Alice first projects the quantum states into qubits $\ket{+}$ and vacuum states, but it is unclear how to implement this operation in practice. Assume that $n$ squashed qubits are obtained during the operation of the protocol. The resulting single qubits are then randomly measured either in the $X$ basis, \{$\ket{+}$, $\ket{-}$\}, or the $Z$ basis, \{$\ket{H}$, $\ket{V}$\}. If $n_x$ out of the $n$ squashed qubits are measured in the $X$ basis, they should be detected as $\ket{+}$ ideally. The detection rate for $\ket{-}$ is treated as the estimated error rate $e_{bx}$. The remaining $n_z=n - n_x$ qubits are measured in the $Z$ basis to generate $n_z$ random bits. Alice then extracts the final secure random bits from $n_z$, which is equivalent to privacy amplification in QKD. The experimental demonstration is shown in~\cref{fig:SIQRNG}. Weak coherent pulses are prepared with $\ket{+}$ polarization by a linear polarizer (LP) and a polarization controller (PC1). At Alice's side, a beam splitter (BS1) with splitting ratio $2$:$98$ is used to passively choose the $X$ or $Z$ basis. In~\cref{fig:SIQRNG}, the upper and lower paths correspond to the $X$ basis and $Z$ basis respectively. A single-photon detector is time-division-multiplexed by using four time delays TD1--TD4. For each coherent state Eve sends, a click in the first detection slot indicates that Alice chooses the $X$ basis and correctly detects the incoming pulse as $\ket{+}$, while a click in the second slot indicates a wrong detection, $\ket{-}$, which is used for the error estimation. Moreover, a click in the third slot indicates that Alice selects the $Z$ basis and obtain the result $\ket{H}$, while a click in the fourth slot indicates the result $\ket{V}$. \begin{figure} \caption{Experimental scheme for SI~QRNG (reprinted from Ref.~\onlinecite{cao2016} \label{fig:SIQRNG} \end{figure} \subsection{Hacking} This SI~QRNG protocol assumes that the source can be untrusted, but the measurement station is trusted and characterized~\cite{cao2016}. However, it is not clear how to guarantee the latter requirement in practice. Therefore, Eve might be able to prepare fake states to generate nonrandom numbers. The detector blinding attack~(see~\cref{hack_ISS_QDS}) could force the SPD to work as a classical detector. Then Eve could send a strong bright pulse to trigger a detection in the first slot. Then she sends another bright pulse with either the state $\ket{H}$ or $\ket{V}$ to control the detection in the third or fourth slot. The attack can result in equal detection rates for $\ket{H}$ and $\ket{V}$, which looks like random clicks to Alice, while being precisely controlled by Eve. Eve actually thus controls the bit string. Another potential issue is the wavelength-dependent attack, because the splitting ratio of a beam splitter might be sensitive to the wavelength of the incoming light~(see~\cref{hack_ISS_QDS}). All four beam splitters in the measurement station might be affected. By controlling the splitting ratio of BS1 and/or BS4, Eve can bias whether error checking or random bit generation happens. For BS2 and BS3, by manipulating the splitting ratio, Eve is able to partially control the results of error checking and bit generation. Please note that a wavelength filter alone will not protect the system from this attack, because Eve could send bright states to overcome the finite extinction ratio in the filter's stopband. 
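To illustrate why balanced-looking count statistics alone do not certify randomness here, the following toy simulation (our sketch; the $2$:$98$ basis ratio is taken from the beam splitter described above, while the attack model and all other parameters are idealized placeholders) compares honest operation with a blinding-style attack in which Eve steers every click into the third or fourth time slot according to a pattern she chooses: the error slot stays empty and the $\ket{H}$/$\ket{V}$ counts remain balanced, yet the output bits are fully known to Eve.
\begin{verbatim}
import random

def honest_run(n, p_x=0.02):
    """Idealized honest rounds: basis chosen passively (2:98 splitter),
    X-basis rounds give |+> (slot 1), Z-basis rounds give H/V at random."""
    slots = []
    for _ in range(n):
        if random.random() < p_x:
            slots.append(1)                      # |+> detection, used for e_bx
        else:
            slots.append(3 if random.random() < 0.5 else 4)  # H or V
    return slots

def blinded_run(n, eve_bits, p_x=0.02):
    """Toy blinding attack: Eve forces the slot of every click. She mimics
    the basis ratio and keeps the error slot empty, but the 'random'
    Z-basis outcomes follow her predetermined bit pattern."""
    slots, i = [], 0
    for _ in range(n):
        if random.random() < p_x:
            slots.append(1)                      # never slot 2 -> zero error
        else:
            slots.append(3 if eve_bits[i % len(eve_bits)] == 0 else 4)
            i += 1
    return slots

def summary(slots):
    return {s: slots.count(s) for s in (1, 2, 3, 4)}

print(summary(honest_run(100000)))
print(summary(blinded_run(100000, eve_bits=[0, 1] * 32)))  # balanced, not random
\end{verbatim}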
\section{Quantum secure direct communication} \label{sec:QSDC} QSDC transmits secret information directly through a quantum channel, instead of establishing a secret key first~\cite{deng2003}. The initial QSDC protocol is based on entangled pairs~\cite{wang2005c, wang2005d, wang2011a}. However, entanglement is not a necessary condition for QSDC. The first single-photon QSDC protocol, Deng-Long 2004 (DL04), was proposed in Ref.~\onlinecite{deng2004}. Recently, researchers started studying the strict security proof of this DL04 protocol~\cite{hu2017}. However, regarding the practical security, the implementation of this protocol also needs to be investigated. Also, more attention may need to be paid to the motivation for secure direct communication. \subsection{Protocol and implementation} The DL04 protocol contains two phases of channel estimation and a phase of secret transmission. The first channel estimation checks the security of the channel from Alice to Bob. Alice prepares a sequence of photons randomly chosen from the set of states $\ket{H}$, $\ket{V}$, $\ket{+}$, and $\ket{-}$, and sends them to Bob. He randomly selects a portion of the received photons, and randomly measures them in the $X$ or the $Z$ basis. Then Bob announces the measurement results and compares them to Alice's prepared states to calculate an error rate. Only when the error rate is lower than a threshold, Alice and Bob trust the channel and continue to the next step. Bob randomly selects another small portion of the received photons, and applies one of two unitary operations to each of them: $U = \ket{0}\bra{1}-\ket{1}\bra{0}$ or $I = \ket{0}\bra{0}+\ket{1}\bra{1}$, i.e., flipping a state or not. These photons are employed to check the security of the channel from Bob to Alice. The rest of the photons received by Bob are used to encode secret information by randomly applying the operator $U$ or $I$ to each photon. All these photons are then sent to Alice, who measures these photons in the preparation bases. Regarding the photons used for the security check, Alice checks if her measurement result is compatible with Bob's operation to estimate the error rate. Once the error rate is lower than a threshold, they trust the channel from Bob to Alice. The remaining photons measured in their preparation bases allow Alice to deterministically know Bob's operation, obtaining the secret information. The protocol is implemented by the setup shown in~\cref{fig:QSDC}~\cite{hu2016}. Alice prepares the initial photon string and measures the photons encoded by Bob. She first prepares $\ket{H}$ and $\ket{V}$ using two laser diodes. The preparation and measurement bases are selected by $\rm{PC_1}$ and $\rm{PC_2}$ respectively. Bob encodes his information by $\rm{PC_3}$. All the basis choices are controlled by field programmable gate arrays~(FPGAs). The channel from Alice to Bob is denoted forward channel, and the channel from Bob to Alice is denoted backward channel. A beamsplitter at Bob's side selects a small portion of the received photons, and then a control module is used to check the security of the forward channel. The control module's scheme is the same as the passive measurement station in BB84 QKD system. A delay line is used to store the photons during the forward-channel check. To tolerate photon loss during secret transmission, a special method named single-photon frequency encoding is used. Instead of encoding information on individual photons, this method encodes information on the spectrum of a sequence of photons. 
After Alice detects a sequence of photons and converts them to a binary bit string, the spectrum can be known by applying the Fourier transform to the bit string. During the detection, Alice might miss some photons due to channel loss and imperfect detection efficiency. Fortunately, because the information does not only rely on an individual photon, but is determined by the spectrum of the entire sequence, missing some photons just reduces the signal-to-noise ratio, but the feature of the spectrum still exists~\cite{hu2016}. The calculated spectrum corresponds to the bit string that is the initial information Bob sent. \begin{figure} \caption{QSDC implementation (reprinted from Ref.~\onlinecite{hu2016} \label{fig:QSDC} \end{figure} \subsection{Hacking} The first phase of the DL04 QSDC protocol, the security check of the forward channel, is similar to the raw key exchange, sifting and error estimation phases in the BB84 QKD protocol~\cite{bennett1984}. The security check of the backward channel and secret direct transmission are quantum versions of the one-time pad, which randomly flips the bit information~\cite{deng2004}. Just as for QKD, the implementation~\cite{hu2016} may contain side channels. The first potential side channel is that detectors may be attacked by the detector blinding attack~(see~\cref{hack_ISS_QDS}). During the check of the forward channel, Eve blinds the detectors in the control module and conducts an attack with fake states~\cite{makarov2005} to control Bob's detection results. Since this attack introduces no errors, the security check is passed. During the second check of the backward channel and information transmission, Eve uses classical optical detectors to measure her bright pulses modulated by Bob. Since these are states resent by Eve during the previous phase, Eve could apply the same basis as in the previous step to know with certainty what operation Bob performed. Then, she sends the same states with proper brightness to Alice's blinded detectors, such that only when Alice selects the same bases as Eve, Alice obtains detections. This attack results in full control of Alice’s measurement outcomes. Again, no extra errors are introduced. Furthermore, Eve learns the secret information between Alice and Bob. This breaks the security of QSDC. Please note that because this implementation uses an active basis choice (the basis is actively selected by the polarization controller), Eve's measurement basis can only match Alice's/Bob's measurement basis half the time. However, when the basis matches the click probability in Bob under attack can be unity, while his single-photon detection efficiency is typically much lower than unity~\cite{lydersen2010a}. This may compensate for the extra loss introduced by the attack. The second possible side channel exists in the polarization controllers that might be vulnerable to the Trojan-horse attack~(see~\cref{hack_DSS_QDS}). In this QSDC implementation~\cite{hu2016}, Eve can conduct the Trojan-horse attack on $\rm{PC_1}$ or $\rm{PC_3}$. From an attack on $\rm{PC_1}$, Eve would know Alice's basis choice in the state preparation and measurement, as $\rm{PC_2}$ applies the same basis as $\rm{PC_1}$. The difference between the prepared and the measured state is Bob's secret information~(flip or not). On the other hand, Bob's encoded information could be directly known by hacking $\rm{PC_3}$~(similarly to~\cref{two_way_trojan}). 
Once Eve knows the original states prepared by Alice or what Bob's modulation was, she could obtain the secret information.

\section{Blind quantum computing}
\label{sec:BQC}

In the future, a quantum computer could be used as a server that provides quantum computation capability to remote users, who themselves do not have a quantum computer and only use simple technology. A key task is to keep the client's data and program secret from the server. Classical blind computing protocols exist~\cite{rivest1978a}, but they can only guarantee computational security~\cite{barz2012}. However, taking advantage of quantum mechanics, BQC is able to provide unconditional security for the client's data and computation on the quantum computer server~\cite{broadbent2009}.

\subsection{Protocol and implementation}

BQC is based on entangled multiparticle cluster states~\cite{barz2012}. In the BQC protocol, qubits are first prepared as $\ket{\theta_j} = (\ket{0} + e^{i\theta_j}\ket{1})/\sqrt{2}$ by a client, where $\theta_j$ is randomly selected from \{$0, \pi/4, ..., 7\pi/4$\}. Then the single-photon qubits are sent to a quantum server that entangles them with each other by applying controlled-phase gates, so that the qubits form a cluster state. Then the cluster state is measured by the quantum server, which performs single-qubit measurements in the basis $\ket{\pm_{\delta_j}} = (\ket{0} + e^{i\delta_j}\ket{1})/\sqrt{2}$. The measurement basis is instructed by the client: $\delta_j = \phi_j + \theta_j + \pi r_j$, where $\phi_j$ is the desired rotation and $r_j$ is randomly chosen from \{0, 1\}. Since $\theta_j$ is the initial phase hidden from the quantum server, the server is not able to calculate the desired rotation $\phi_j$ from the measurement result. Note that the shape of the cluster state, for example its dimensions, may also leak information about the gate operations. Thus, the shape of the cluster state also needs to be hidden, which can be accomplished by choosing, for example, brickwork states~\cite{barz2012}. The BQC protocol then completes a quantum computation while preserving the client's privacy.
\begin{figure}
\caption{Proof-of-principle implementation of BQC (reprinted from Ref.~\onlinecite{barz2012}).}
\label{fig:BQC}
\end{figure}
Theoretically, the client only needs to have a single-photon source to generate a state $\ket{\theta_j}$ and send it to the server. However, implementing a single-photon source is challenging so far, as standard parametric down-conversion sources always also have higher-order emissions, meaning that instead of one pair, two or more pairs are emitted at the same time. An initial demonstration of the BQC protocol with current technology is shown in~\cref{fig:BQC}~\cite{barz2012}. Note that in the current implementation, entangled pairs are first prepared on the client's side, and the cluster state is generated on the server's side. The laser beam passes a BBO crystal to first generate the entangled pair traveling forwards. Then the beam is reflected and passes the BBO crystal again to generate the entangled pair traveling backwards. The initial phase $\theta_j$ is applied by rotating the angles of half-wave plates and quarter-wave plates, serving as modulators. Then the entangled states are sent to the quantum server's side, where a cluster state is generated. The states are measured in different bases, as instructed by the client.
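The blindness of the measurement instructions can be made explicit with a short numerical check (our sketch, independent of the experimental setup): for any fixed desired rotation $\phi_j$, the announced angle $\delta_j = \phi_j + \theta_j + \pi r_j \pmod{2\pi}$ is uniformly distributed over the eight allowed values when $\theta_j$ and $r_j$ are uniformly random, so the distribution of what the server sees does not depend on $\phi_j$.
\begin{verbatim}
import random
from collections import Counter

# Work in units of pi/4: the eight allowed phases are k*pi/4, k = 0..7.

def announced_angle(phi_k):
    """delta = phi + theta + pi*r (mod 2*pi), all in units of pi/4.
    theta is uniform over the eight phases, r is a uniform random bit."""
    theta_k = random.randrange(8)
    r = random.randint(0, 1)
    return (phi_k + theta_k + 4 * r) % 8

# For any fixed desired rotation phi, the distribution of delta is flat,
# so the announced measurement angles reveal nothing about the computation.
for phi_k in (0, 2):          # phi = 0 and phi = pi/2 as examples
    counts = Counter(announced_angle(phi_k) for _ in range(80000))
    print(phi_k, sorted(counts.values()))   # eight roughly equal counts
\end{verbatim}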
In this BQC protocol, the setup of the client is relatively simple, but the setup of the quantum server would have more capabilities once a real quantum computer is available. Here we take one type of BQC protocol as an example. There also exist other versions of BQC where the server generates entangled cluster states and the measurements are done by the client~\cite{morimae2013,greganti2016}. \subsection{Hacking} In the above subsection, a proof-of-principle implementation of BQC was introduced. Although in the future the technology available to implement the quantum server for BQC will be more mature and comprehensive, the client setup is already relatively clear. It is foreseeable that future client station will likely still consist of a photon source and modulators. Unfortunately, in practice, any kind of modulator is susceptible to the Trojan-horse attack~\cite{vakhitov2001,gisin2006, jain2014, sajeed2015}. This vulnerability breaks an important assumption in BQC: the initial phase $\theta_j$ should be unknown to the untrusted quantum server. Specifically, regarding the implementation shown in~\cref{fig:BQC}, the phase modulation is done by the wave plates. The reflected light from the wave plates may leak information about $\theta_j$. Instead of wave plates, an advanced setup in the future could be using phase modulators to randomly modulate the phase $\theta_j$, which is a technique widely used in quantum cryptography~\cite{takesue2005,bogdanski2008,collins2016}. Unfortunately, the Trojan-horse attack might still be applied to phase modulators, as we have discussed in~\cref{hack_DSS_QDS}. Except for imperfect phase modulation, another possible issue is the photon source itself. For the current version of implementation, the entanglement source sometimes might simultaneously emit multiple pairs of entangled states. In this case, Eve could split off a copy of entangled states from the source. Then measuring her copy would give Eve information about the state itself. Even in future implementations, when ideal single-photon sources are available, one still needs to pay attention to state generation. For instance, the BQC protocol needs indistinguishable multiple photons~\cite{barz2012}. Thus, careful source design is crucial to avoid any distinguishability in the generated photons (this can, in principle, occur in any degree of freedom, for example wavelength). For other variations of BQC protocols, where the measurements are done on the client's side~\cite{morimae2013,greganti2016}, attacks that leak information about the measurement settings are applicable. So, in a setting where the client uses wave plates to choose a measurement basis~\cite{greganti2016}, the Trojan-horse attack could be applied as well. \section{Countermeasures} \label{Countermeasure} An imperfect implementation compromises the security promised in theory, as we have argued in~\cref{sec:QDS,sec:QSS,sec:QRNG,sec:QSDC,sec:BQC}. To patch the practical loopholes, we should consider feasible countermeasures in implementations of quantum cryptographic protocols. Existing countermeasures for QKD and countermeasures under development may be adaptable to implementations of other cryptographic protocols. However, integrating these considerations into the relevant security proofs is an open challenge. We now recap countermeasures proposed in the literature for both the source and measurement parts of a quantum cryptographic system. We also discuss how they may be applied to the protocols surveyed in this Article. 
\subsection{Countermeasures against source imperfection} Properly implementing the quantum-state source in the above protocols requires that any other degrees of freedom are uncorrelated with the degree of freedom where information is encoded. However, for the states prepared by different laser diodes~(see~\cref{ISS_QDS}), the laser diodes may show the inherent difference in the spectrum and emission time. These types of difference hint which laser diode is on, i.e.,\ which state is prepared. The mismatch in a certain degree of freedom could be a side channel for Eve who tries to distinguish different quantum states~\cite{nauerth2009,huang2018}. To avoid this inherent mismatch among different laser diodes, quantum state preparation could use only one laser diode followed by optical modulators~(\cref{fig:source}), as shown in many QKD implementations~\cite{stucki2002,stucki2009,tang2014}. The laser diode generates identical pulses. Then different states are modulated by a phase modulator~\cite{stucki2002}, intensity modulator~\cite{stucki2009}, or polarization modulator~\cite{tang2014}. The external modulation method could be applied to the implementation of double-receiver QDS~(see~\cref{ISS_QDS}). However, this modification might open another loophole: the Trojan-horse attack on the modulators. Once a system uses a modulator, countermeasures against Trojan-horse attack are required. For a unidirectional system that only sends states from one party to another but never back, a possible countermeasure is adding enough isolation between the modulator and the output port connected to the quantum channel, as shown in~\cref{fig:source}. The amount of isolation is defined by the combination of bidirectional attenuation from attenuators, the unidirectional attenuation from isolators and total reflection probability from lasers and modulators. For example, in a BB84 QKD system, the isolation has been quantified as the following~\cite{lucamarini2015}. Suppose Eve injects pulsed light into the party preparing the state. The injected power is limited by the maximum power transmitted safely through standard single-mode fiber (assumed to be 12.8~W in Ref.~\onlinecite{lucamarini2015}). The amount of reflection then is obtained after the injected light is attenuated by the system isolation. Taking this amount of reflection into account in the calculation of the key rate, one could obtain the final secure key rate. To obtain a key rate under the Trojan-horse attack that is close to the rate without an attack, 170 dB isolation is required~\cite{lucamarini2015}. \begin{figure} \caption{The scheme of countermeasures against source imperfections. To eliminate the mismatch among different laser diodes, a single laser diode LD followed by a modulator~M can be used. The following attenuator ATT and isolator ISO provide sufficient isolation to prevent the Trojan-horse attack in a unidirectional system. Here, G denotes a modulation signal generator.} \label{fig:source} \end{figure} A similar methodology could be applied to single-receiver QDS, QSDC, and BQC, which may be vulnerable to the Trojan-horse attack. In each implementation, attenuators and isolators could be added between the modulators and system output, and the reflectivity of modulator and laser diodes should be quantified. Then the required amount of isolation should be calculated according to the security models of the corresponding protocol as has already been done for QKD~\cite{lucamarini2015}. 
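As a rough back-of-the-envelope illustration of such an isolation budget (our sketch; the 12.8~W injection limit and the 170~dB figure are the QKD numbers quoted above, while the 1~GHz clock rate and 1550~nm wavelength are assumptions made only for this example, and the estimate ignores where exactly the reflection occurs), one can estimate the mean number of back-reflected photons per pulse available to Eve:
\begin{verbatim}
# Back-of-the-envelope estimate of Trojan-horse leakage after isolation.
# 12.8 W injection limit and 170 dB isolation follow the cited QKD analysis;
# the 1 GHz clock rate and 1550 nm wavelength are assumptions for illustration.

H = 6.626e-34        # Planck constant, J*s
C = 3.0e8            # speed of light, m/s

def reflected_photons_per_pulse(p_in_w, isolation_db, clock_hz, wavelength_m):
    photon_energy = H * C / wavelength_m                 # J per photon
    p_out = p_in_w * 10 ** (-isolation_db / 10)          # W after isolation
    return p_out / photon_energy / clock_hz              # photons per pulse

mu = reflected_photons_per_pulse(p_in_w=12.8, isolation_db=170,
                                 clock_hz=1e9, wavelength_m=1550e-9)
print(f"{mu:.1e} back-reflected photons per pulse")      # ~1e-6 for these numbers
\end{verbatim}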
The chosen amount of isolation should maintain the system's security properties. We notice that in the implementation of single-receiver QDS in Ref.~\onlinecite{amiri2016}, an attenuator is already included to weaken the output power to the single-photon level. However, this amount of attenuation is probably not sufficient to counter the Trojan-horse attack.

For a bidirectional plug \& play QSS system (see~\cref{QSS_single}) and pass-through QSDC (see~\cref{sec:QSDC}), the isolation countermeasure above is not applicable, because it would block the transmission of the states. In a bidirectional system, single-photon monitors would be needed to observe the incoming light~\cite{vakhitov2001}. It is not clear whether implementing such a countermeasure securely is realistic. Nevertheless, a patent by Trifonov and Vig~\cite{trifonov2007} proposes a scheme against the Trojan-horse attack that uses a single-photon watchdog detector. This countermeasure could be adapted for the single-qubit QSS implementation: Alice could employ a watchdog detector for the received light, while the remaining parties in the scheme could use two watchdog detectors to observe the two fiber connection ports on each side of the PM. Any alarm would abort the protocol. Note that the single-photon watchdog detector itself might be vulnerable to the detector blinding attack; a corresponding countermeasure against detector control attacks, discussed in the next subsection, is therefore necessary.

\subsection{Countermeasures against measurement imperfection}

In a party that performs measurements, the characteristics of passive optical components, such as beam splitters, might be sensitive to wavelength. That is, the component's behavior at unexpected wavelengths may deviate from what is assumed. To provide practical security, this wavelength dependence should be eliminated. A possible method is to use a wavelength filter that blocks unexpected wavelengths and passes only a narrow range around the working wavelength~\cite{lucamarini2015}. In the implementation of double-receiver QDS (see~\cref{ISS_QDS}), this filter could be added before the beam splitter in Bob's and Charlie's setups, i.e.,\ right at their input ports. The filter's transmission should be verified over a wide range of wavelengths. However, this approach has a limitation: since the stopband attenuation is finite, Eve can simply increase her light power so that enough of it still passes through. Therefore, as a more robust countermeasure, we suggest utilizing an active basis choice in the measurement station.

Another major vulnerability in measurement setups is imperfections of single-photon detectors (see~\cref{hack_ISS_QDS}). A proposed countermeasure for QKD systems is calibrating the characteristics of the detectors in real time to avoid Eve's manipulation~\cite{maroy2016}. In this receiver design, a calibrated light source is included locally in the measurement unit, in combination with several other countermeasures. By randomly activating this local source to send photons to the detectors, the corresponding detection efficiency can be calibrated during system operation. The characterized detection efficiency can then be used in the security proof to calculate the secure key rate. A similar design might be applicable to the measurement stations of the other quantum cryptographic protocols; however, incorporating the calibration procedure into their security models should be studied in each case.
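As an illustration of how such a calibration could feed into a security analysis, the sketch below estimates the detection efficiency from randomly interleaved calibration slots, assuming a Poissonian local source of known mean photon number and a simple threshold-detector model. The function name, the chosen mean photon number, and the click counts are hypothetical; the actual procedure and statistical treatment of Ref.~\onlinecite{maroy2016} may differ.

\begin{verbatim}
# Illustrative real-time estimate of detection efficiency from randomly
# interleaved calibration pulses; a sketch under stated assumptions, not
# the exact procedure of any published receiver design.

from math import log, sqrt

def estimate_efficiency(clicks, calib_slots, mu, dark_prob=0.0):
    """Estimate the efficiency eta of a threshold detector.

    Assumes a Poissonian calibration source of known mean photon number mu,
    so the click probability is p = 1 - (1 - dark_prob) * exp(-eta * mu).
    The observed click fraction is inverted for eta, and a one-sigma
    binomial uncertainty on p is propagated through the same relation.
    """
    p_hat = clicks / calib_slots
    sigma_p = sqrt(max(p_hat * (1.0 - p_hat), 1e-12) / calib_slots)

    def eta_of(p):
        x = (1.0 - p) / (1.0 - dark_prob)   # probability of no click at all
        return -log(x) / mu if x > 0 else float("inf")

    return eta_of(p_hat), (eta_of(max(p_hat - sigma_p, 0.0)),
                           eta_of(min(p_hat + sigma_p, 1.0)))

if __name__ == "__main__":
    # Hypothetical run: 10^6 calibration slots, mu = 0.1, 9600 clicks observed.
    eta, (lo, hi) = estimate_efficiency(9600, 1_000_000, mu=0.1, dark_prob=1e-5)
    print(f"estimated eta ~ {eta:.3f} (one-sigma range {lo:.3f}..{hi:.3f})")
\end{verbatim}

The efficiency estimated in this way, together with its statistical uncertainty, would then enter the key-rate or signature-rate calculation of the particular protocol, instead of a value assumed from a one-off factory calibration.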
Another approach, which entirely avoids the effect of imperfect detectors and other measurement imperfections, is to use measurement-device-independent (MDI) quantum cryptographic protocols~\cite{fu2015}, such as MDI QKD~\cite{lo2012}, MDI QSS~\cite{yang2016}, and MDI QDS~\cite{puthoor2016, yin2017}. In the MDI protocols, the party making the measurements is untrusted: there are no security assumptions regarding the measurements. Even if Eve makes the measurements, the secret information (provided the protocol produces it) can still be distributed among the rest of the authenticated parties. This is a promising way to avoid security loopholes related to the measurements. However, the state preparation remains trusted and still needs to be carefully designed to avoid loopholes.

\section{Conclusion}
\label{conclusion}

We have surveyed implementations of five types of quantum cryptographic primitives. As our analysis shows, these quantum cryptographic systems might have security loopholes similar to those of QKD systems, because they use similar optical components. These imperfections would compromise the security properties of each quantum cryptographic protocol (see the summary in~\cref{tbl:summary}). We have discussed implementations of these protocols, showing that practical insecurity is a common issue in the implementation of quantum cryptography in general, not only in QKD. In other words, a gap between perfect theory and imperfect practice generally exists in quantum cryptography. Our analysis of imperfections in this survey has been intended to reveal a broad picture; a detailed analysis of the imperfections of each specific implementation should be done in the future. Once the existence of practical loopholes has been recognized, it becomes essential to bridge the gap between theory and practice. One should consider countermeasures when implementing existing protocols, or design new quantum cryptographic protocols that tolerate practical imperfections. Fortunately, these approaches appear to be feasible. However, integrating the imperfections into security proofs~\cite{ma2005,lucamarini2015,tamaki2016} is a significant challenge, which should be addressed in future studies.

\begin{acknowledgments}
We thank G.~Brassard for discussions. We acknowledge financial support from NSERC of Canada (programs Discovery and CryptoWorks21) and Ontario MRIS. A.H.\ acknowledges support from the China Scholarship Council. S.B.\ acknowledges support from the Carl Zeiss Foundation. E.A.\ acknowledges support from the \nohyphens{EPSRC} UK Quantum Communication Hub (grant number \nohyphens{EP/M013472/1}).
\end{acknowledgments} \def \begin{center}\rule{0.5\columnwidth}{.8pt}\end{center} { \begin{center}\rule{0.5\columnwidth}{.8pt}\end{center} } \begin{thebibliography}{74} \makeatletter \providecommand \@ifxundefined [1]{ \@ifx{#1\undefined} } \providecommand \@ifnum [1]{ \ifnum #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \@ifx [1]{ \ifx #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \natexlab [1]{#1} \providecommand \enquote [1]{``#1''} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand \href@noop [0]{\@secondoftwo} \providecommand \href [0]{\begingroup \@sanitize@url \@href} \providecommand \@href[1]{\@@startlink{#1}\@@href} \providecommand \@@href[1]{\endgroup#1\@@endlink} \providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode `\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax} \providecommand \@@startlink[1]{} \providecommand \@@endlink[0]{} \providecommand \url [0]{\begingroup\@sanitize@url \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }} \providecommand \urlprefix [0]{URL } \providecommand \Eprint [0]{\href } \providecommand \doibase [0]{http://dx.doi.org/} \providecommand \selectlanguage [0]{\@gobble} \providecommand \bibinfo [0]{\@secondoftwo} \providecommand \bibfield [0]{\@secondoftwo} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen [0]{} \providecommand \bibitemStop [0]{} \providecommand \bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname bibitem#1\endcsname} \let\auto@bib@innerbib\@empty \bibitem [{ETS()}]{ETSIwhitepaper} \BibitemOpen \href@noop {} {}\bibinfo {note} {Quantum safe cryptography and security, ETSI white paper no.\ 8 (2015), \url{http://www.etsi.org/images/files/ETSIWhitePapers/QuantumSafeWhitepaper.pdf}}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bennett}\ and\ \citenamefont {Brassard}(1984)}]{bennett1984} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~H.}\ \bibnamefont {Bennett}}\ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Brassard}},\ }in\ \href {\doibase 10.1016/j.tcs.2014.05.025} {\emph {\bibinfo {booktitle} {Proc. IEEE International Conference on Computers, Systems, and Signal Processing (Bangalore, India)}}}\ (\bibinfo {publisher} {IEEE Press},\ \bibinfo {address} {New York},\ \bibinfo {year} {1984})\ pp.\ \bibinfo {pages} {175--179}\BibitemShut {NoStop} \bibitem [{\citenamefont {Cleve}\ \emph {et~al.}(1999)\citenamefont {Cleve}, \citenamefont {Gottesman},\ and\ \citenamefont {Lo}}]{cleve1999} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Cleve}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Gottesman}}, \ and\ \bibinfo {author} {\bibfnamefont {H.-K.}\ \bibnamefont {Lo}},\ }\href {\doibase 10.1103/PhysRevLett.83.648} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {83}},\ \bibinfo {pages} {648} (\bibinfo {year} {1999})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gottesman}\ and\ \citenamefont {Chuang}()}]{gottesman2001} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Gottesman}}\ and\ \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Chuang}},\ }\href@noop {} {\ }\Eprint {http://arxiv.org/abs/quant-ph/0105032} {arXiv:quant-ph/0105032} \BibitemShut {NoStop} \bibitem [{\citenamefont {Bogdanski}\ \emph {et~al.}(2008)\citenamefont {Bogdanski}, \citenamefont {Rafiei},\ and\ \citenamefont {Bourennane}}]{bogdanski2008} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Bogdanski}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Rafiei}}, \ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Bourennane}},\ }\href {\doibase 10.1103/PhysRevA.78.062307} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {78}},\ \bibinfo {pages} {062307} (\bibinfo {year} {2008})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Collins}\ \emph {et~al.}(2016)\citenamefont {Collins}, \citenamefont {Amiri}, \citenamefont {Fujiwara}, \citenamefont {Honjo}, \citenamefont {Shimizu}, \citenamefont {Tamaki}, \citenamefont {Takeoka}, \citenamefont {Andersson}, \citenamefont {Buller},\ and\ \citenamefont {Sasaki}}]{collins2016} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.~J.}\ \bibnamefont {Collins}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Amiri}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Fujiwara}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Honjo}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Shimizu}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Tamaki}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Takeoka}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Andersson}}, \bibinfo {author} {\bibfnamefont {G.~S.}\ \bibnamefont {Buller}}, \ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Sasaki}},\ }\href {\doibase 10.1364/OL.41.004883} {\bibfield {journal} {\bibinfo {journal} {Opt. 
Lett.}\ }\textbf {\bibinfo {volume} {41}},\ \bibinfo {pages} {4883} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Yin}\ \emph {et~al.}(2017{\natexlab{a}})\citenamefont {Yin}, \citenamefont {Fu}, \citenamefont {Liu}, \citenamefont {Tang}, \citenamefont {Wang}, \citenamefont {You}, \citenamefont {Zhang}, \citenamefont {Chen}, \citenamefont {Wang}, \citenamefont {Zhang}, \citenamefont {Chen}, \citenamefont {Chen},\ and\ \citenamefont {Pan}}]{yin2017b} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.-L.}\ \bibnamefont {Yin}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Fu}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {Q.-J.}\ \bibnamefont {Tang}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {L.-X.}\ \bibnamefont {You}}, \bibinfo {author} {\bibfnamefont {W.-J.}\ \bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {S.-J.}\ \bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {Q.}~\bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {T.-Y.}\ \bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {Z.-B.}\ \bibnamefont {Chen}}, \ and\ \bibinfo {author} {\bibfnamefont {J.-W.}\ \bibnamefont {Pan}},\ }\href {\doibase 10.1103/PhysRevA.95.032334} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {95}},\ \bibinfo {pages} {032334} (\bibinfo {year} {2017}{\natexlab{a}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Zhao}\ \emph {et~al.}(2008)\citenamefont {Zhao}, \citenamefont {Fung}, \citenamefont {Qi}, \citenamefont {Chen},\ and\ \citenamefont {Lo}}]{zhao2008} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Zhao}}, \bibinfo {author} {\bibfnamefont {C.-H.~F.}\ \bibnamefont {Fung}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Qi}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Chen}}, \ and\ \bibinfo {author} {\bibfnamefont {H.-K.}\ \bibnamefont {Lo}},\ }\href {\doibase 10.1103/PhysRevA.78.042333} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {78}},\ \bibinfo {eid} {042333} (\bibinfo {year} {2008})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lydersen}\ \emph {et~al.}(2010)\citenamefont {Lydersen}, \citenamefont {Wiechers}, \citenamefont {Wittmann}, \citenamefont {Elser}, \citenamefont {Skaar},\ and\ \citenamefont {Makarov}}]{lydersen2010a} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Lydersen}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Wiechers}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Wittmann}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Elser}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Skaar}}, \ and\ \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Makarov}},\ }\href {\doibase 10.1038/nphoton.2010.214} {\bibfield {journal} {\bibinfo {journal} {Nat. 
Photonics}\ }\textbf {\bibinfo {volume} {4}},\ \bibinfo {pages} {686} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Weier}\ \emph {et~al.}(2011)\citenamefont {Weier}, \citenamefont {Krauss}, \citenamefont {Rau}, \citenamefont {F{\"u}rst}, \citenamefont {Nauerth},\ and\ \citenamefont {Weinfurter}}]{weier2011} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Weier}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Krauss}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Rau}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {F{\"u}rst}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Nauerth}}, \ and\ \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Weinfurter}},\ }\href {\doibase 10.1088/1367-2630/13/7/073024} {\bibfield {journal} {\bibinfo {journal} {New J. Phys.}\ }\textbf {\bibinfo {volume} {13}},\ \bibinfo {pages} {073024} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Sun}\ \emph {et~al.}(2011)\citenamefont {Sun}, \citenamefont {Jiang},\ and\ \citenamefont {Liang}}]{sun2011} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.-H.}\ \bibnamefont {Sun}}, \bibinfo {author} {\bibfnamefont {M.-S.}\ \bibnamefont {Jiang}}, \ and\ \bibinfo {author} {\bibfnamefont {L.-M.}\ \bibnamefont {Liang}},\ }\href {\doibase 10.1103/PhysRevA.83.062331} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {83}},\ \bibinfo {pages} {062331} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Jain}\ \emph {et~al.}(2014)\citenamefont {Jain}, \citenamefont {Anisimova}, \citenamefont {Khan}, \citenamefont {Makarov}, \citenamefont {Marquardt},\ and\ \citenamefont {Leuchs}}]{jain2014} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Jain}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Anisimova}}, \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Khan}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Makarov}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Marquardt}}, \ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Leuchs}},\ }\href {\doibase 10.1088/1367-2630/16/12/123030} {\bibfield {journal} {\bibinfo {journal} {New J. Phys.}\ }\textbf {\bibinfo {volume} {16}},\ \bibinfo {pages} {123030} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Sajeed}\ \emph {et~al.}(2015{\natexlab{a}})\citenamefont {Sajeed}, \citenamefont {Chaiwongkhot}, \citenamefont {Bourgoin}, \citenamefont {Jennewein}, \citenamefont {L{\" u}tkenhaus},\ and\ \citenamefont {Makarov}}]{sajeed2015a} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Sajeed}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Chaiwongkhot}}, \bibinfo {author} {\bibfnamefont {J.-P.}\ \bibnamefont {Bourgoin}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Jennewein}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {L{\" u}tkenhaus}}, \ and\ \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Makarov}},\ }\href {\doibase 10.1103/PhysRevA.91.062301} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {91}},\ \bibinfo {pages} {062301} (\bibinfo {year} {2015}{\natexlab{a}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Makarov}\ \emph {et~al.}(2016)\citenamefont {Makarov}, \citenamefont {Bourgoin}, \citenamefont {Chaiwongkhot}, \citenamefont {Gagn{\'e}}, \citenamefont {Jennewein}, \citenamefont {Kaiser}, \citenamefont {Kashyap}, \citenamefont {Legr{\'e}}, \citenamefont {Minshull},\ and\ \citenamefont {Sajeed}}]{makarov2016} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Makarov}}, \bibinfo {author} {\bibfnamefont {J.-P.}\ \bibnamefont {Bourgoin}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Chaiwongkhot}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Gagn{\'e}}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Jennewein}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Kaiser}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Kashyap}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Legr{\'e}}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Minshull}}, \ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Sajeed}},\ }\href {\doibase 10.1103/PhysRevA.94.030302} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {94}},\ \bibinfo {pages} {030302} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Yin}\ \emph {et~al.}(2017{\natexlab{b}})\citenamefont {Yin}, \citenamefont {Fu}, \citenamefont {Liu}, \citenamefont {Tang}, \citenamefont {Wang}, \citenamefont {You}, \citenamefont {Zhang}, \citenamefont {Chen}, \citenamefont {Wang}, \citenamefont {Zhang}, \citenamefont {Chen}, \citenamefont {Chen},\ and\ \citenamefont {Pan}}]{yin2016c} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.-L.}\ \bibnamefont {Yin}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Fu}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {Q.-J.}\ \bibnamefont {Tang}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {L.-X.}\ \bibnamefont {You}}, \bibinfo {author} {\bibfnamefont {W.-J.}\ \bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {S.-J.}\ \bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {Q.}~\bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {T.-Y.}\ \bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {Z.-B.}\ \bibnamefont {Chen}}, \ and\ \bibinfo {author} {\bibfnamefont {J.-W.}\ \bibnamefont {Pan}},\ }\href {\doibase 10.1103/PhysRevA.95.032334} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {95}},\ \bibinfo {pages} {032334} (\bibinfo {year} {2017}{\natexlab{b}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Chen}\ \emph {et~al.}(2005)\citenamefont {Chen}, \citenamefont {Zhang}, \citenamefont {Zhao}, \citenamefont {Zhou}, \citenamefont {Lu}, \citenamefont {Peng}, \citenamefont {Yang},\ and\ \citenamefont {Pan}}]{chen2005} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.-A.}\ \bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {A.-N.}\ \bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Zhao}}, \bibinfo {author} {\bibfnamefont {X.-Q.}\ \bibnamefont {Zhou}}, \bibinfo {author} {\bibfnamefont {C.-Y.}\ \bibnamefont {Lu}}, \bibinfo {author} {\bibfnamefont {C.-Z.}\ \bibnamefont {Peng}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Yang}}, \ and\ \bibinfo {author} {\bibfnamefont {J.-W.}\ \bibnamefont {Pan}},\ }\href {\doibase 10.1103/PhysRevLett.95.200502} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {95}},\ \bibinfo {pages} {200502} (\bibinfo {year} {2005})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Cao}\ \emph {et~al.}(2016)\citenamefont {Cao}, \citenamefont {Zhou}, \citenamefont {Yuan},\ and\ \citenamefont {Ma}}]{cao2016} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Cao}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Zhou}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Yuan}}, \ and\ \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Ma}},\ }\href {\doibase 10.1103/PhysRevX.6.011020} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. X}\ }\textbf {\bibinfo {volume} {6}},\ \bibinfo {pages} {011020} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hu}\ \emph {et~al.}(2016)\citenamefont {Hu}, \citenamefont {Yu}, \citenamefont {Jing}, \citenamefont {Xiao}, \citenamefont {Jia}, \citenamefont {Qin},\ and\ \citenamefont {Long}}]{hu2016} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.-Y.}\ \bibnamefont {Hu}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Yu}}, \bibinfo {author} {\bibfnamefont {M.-Y.}\ \bibnamefont {Jing}}, \bibinfo {author} {\bibfnamefont {L.-T.}\ \bibnamefont {Xiao}}, \bibinfo {author} {\bibfnamefont {S.-T.}\ \bibnamefont {Jia}}, \bibinfo {author} {\bibfnamefont {G.-Q.}\ \bibnamefont {Qin}}, \ and\ \bibinfo {author} {\bibfnamefont {G.-L.}\ \bibnamefont {Long}},\ }\href {\doibase 10.1038/lsa.2016.144} {\bibfield {journal} {\bibinfo {journal} {Light Sci. 
Appl.}\ }\textbf {\bibinfo {volume} {5}},\ \bibinfo {pages} {e16144} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Barz}\ \emph {et~al.}(2012)\citenamefont {Barz}, \citenamefont {Kashefi}, \citenamefont {Broadbent}, \citenamefont {Fitzsimons}, \citenamefont {Zeilinger},\ and\ \citenamefont {Walther}}]{barz2012} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Barz}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Kashefi}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Broadbent}}, \bibinfo {author} {\bibfnamefont {J.~F.}\ \bibnamefont {Fitzsimons}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Zeilinger}}, \ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Walther}},\ }\href {\doibase 10.1126/science.1214707} {\bibfield {journal} {\bibinfo {journal} {Science}\ }\textbf {\bibinfo {volume} {335}},\ \bibinfo {pages} {303} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Sajeed}\ \emph {et~al.}(2015{\natexlab{b}})\citenamefont {Sajeed}, \citenamefont {Radchenko}, \citenamefont {Kaiser}, \citenamefont {Bourgoin}, \citenamefont {Pappa}, \citenamefont {Monat}, \citenamefont {Legr\'e},\ and\ \citenamefont {Makarov}}]{sajeed2015} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Sajeed}}, \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Radchenko}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Kaiser}}, \bibinfo {author} {\bibfnamefont {J.-P.}\ \bibnamefont {Bourgoin}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Pappa}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Monat}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Legr\'e}}, \ and\ \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Makarov}},\ }\href {\doibase 10.1103/PhysRevA.91.032326} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {91}},\ \bibinfo {pages} {032326} (\bibinfo {year} {2015}{\natexlab{b}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gerhardt}\ \emph {et~al.}(2011{\natexlab{a}})\citenamefont {Gerhardt}, \citenamefont {Liu}, \citenamefont {Lamas-Linares}, \citenamefont {Skaar}, \citenamefont {Scarani}, \citenamefont {Makarov},\ and\ \citenamefont {Kurtsiefer}}]{gerhardt2011a} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Gerhardt}}, \bibinfo {author} {\bibfnamefont {Q.}~\bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Lamas-Linares}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Skaar}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Scarani}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Makarov}}, \ and\ \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Kurtsiefer}},\ }\href {\doibase 10.1103/PhysRevLett.107.170404} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {107}},\ \bibinfo {pages} {170404} (\bibinfo {year} {2011}{\natexlab{a}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Swanson}\ and\ \citenamefont {Stinson}(2011)}]{swanson2011} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~M.}\ \bibnamefont {Swanson}}\ and\ \bibinfo {author} {\bibfnamefont {D.~R.}\ \bibnamefont {Stinson}},\ }in\ \href {\doibase 10.1007/978-3-642-20728-0_10} {\emph {\bibinfo {booktitle} {International Conference on Information Theoretic Security}}},\ Vol.\ \bibinfo {volume} {6673}\ (\bibinfo {organization} {Springer, Berlin, Heidelberg},\ \bibinfo {year} {2011})\ pp.\ \bibinfo {pages} {100--116}\BibitemShut {NoStop} \bibitem [{\citenamefont {Chaum}\ and\ \citenamefont {Roijakkers}(1990)}]{chaum1990} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Chaum}}\ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Roijakkers}},\ }in\ \href {\doibase 10.1007/3-540-38424-3_15} {\emph {\bibinfo {booktitle} {Conference on the Theory and Application of Cryptography}}}\ (\bibinfo {organization} {Springer},\ \bibinfo {year} {1990})\ pp.\ \bibinfo {pages} {206--214}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hanaoka}\ \emph {et~al.}(2000)\citenamefont {Hanaoka}, \citenamefont {Shikata}, \citenamefont {Zheng},\ and\ \citenamefont {Imai}}]{hanaoka2000} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Hanaoka}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Shikata}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Zheng}}, \ and\ \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Imai}},\ }in\ \href {\doibase 10.1007/3-540-44448-3_11} {\emph {\bibinfo {booktitle} {International Conference on the Theory and Application of Cryptology and Information Security}}}\ (\bibinfo {organization} {Springer},\ \bibinfo {year} {2000})\ pp.\ \bibinfo {pages} {130--142}\BibitemShut {NoStop} \bibitem [{\citenamefont {Amiri}\ \emph {et~al.}(2016{\natexlab{a}})\citenamefont {Amiri}, \citenamefont {Abidin}, \citenamefont {Wallden},\ and\ \citenamefont {Andersson}}]{amiri2016a} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Amiri}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Abidin}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Wallden}}, \ and\ \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Andersson}},\ }\href {https://pdfs.semanticscholar.org/d19e/9a165c62dda413acdeaaf72d924c96f0ce03.pdf} {\bibfield {journal} {\bibinfo {journal} {IACR Cryptology ePrint Archive}\ }\textbf {\bibinfo {volume} {2016}},\ \bibinfo {pages} {739} (\bibinfo {year} {2016}{\natexlab{a}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Amiri}\ \emph {et~al.}(2016{\natexlab{b}})\citenamefont {Amiri}, \citenamefont {Wallden}, \citenamefont {Kent},\ and\ \citenamefont {Andersson}}]{amiri2016} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Amiri}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Wallden}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Kent}}, \ and\ \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Andersson}},\ }\href {\doibase 10.1103/PhysRevA.93.032325} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {93}},\ \bibinfo {pages} {032325} (\bibinfo {year} {2016}{\natexlab{b}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Yin}\ \emph {et~al.}(2016)\citenamefont {Yin}, \citenamefont {Fu},\ and\ \citenamefont {Chen}}]{yin2016b} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.-L.}\ \bibnamefont {Yin}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Fu}}, \ and\ \bibinfo {author} {\bibfnamefont {Z.-B.}\ \bibnamefont {Chen}},\ }\href {\doibase 10.1103/PhysRevA.93.032316} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {93}},\ \bibinfo {pages} {032316} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ma}\ \emph {et~al.}(2005)\citenamefont {Ma}, \citenamefont {Qi}, \citenamefont {Zhao},\ and\ \citenamefont {Lo}}]{ma2005} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Ma}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Qi}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Zhao}}, \ and\ \bibinfo {author} {\bibfnamefont {H.-K.}\ \bibnamefont {Lo}},\ }\href {\doibase 10.1103/PhysRevA.72.012326} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {72}},\ \bibinfo {pages} {012326} (\bibinfo {year} {2005})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Brassard}\ \emph {et~al.}(2000)\citenamefont {Brassard}, \citenamefont {L\"utkenhaus}, \citenamefont {Mor},\ and\ \citenamefont {Sanders}}]{brassard2000} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Brassard}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {L\"utkenhaus}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Mor}}, \ and\ \bibinfo {author} {\bibfnamefont {B.~C.}\ \bibnamefont {Sanders}},\ }\href {\doibase 10.1103/PhysRevLett.85.1330} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {85}},\ \bibinfo {pages} {1330} (\bibinfo {year} {2000})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Inoue}\ \emph {et~al.}(2002)\citenamefont {Inoue}, \citenamefont {Waks},\ and\ \citenamefont {Yamamoto}}]{inoue2002} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Inoue}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Waks}}, \ and\ \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Yamamoto}},\ }\href {\doibase 10.1103/PhysRevLett.89.037902} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {89}},\ \bibinfo {pages} {037902} (\bibinfo {year} {2002})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Chefles}(2001)}]{chefles2001} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Chefles}},\ }\href {\doibase 10.1103/PhysRevA.64.062305} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {64}},\ \bibinfo {pages} {062305} (\bibinfo {year} {2001})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Nauerth}\ \emph {et~al.}(2009)\citenamefont {Nauerth}, \citenamefont {F\"{u}rst}, \citenamefont {Schmitt-Manderbach}, \citenamefont {Weier},\ and\ \citenamefont {Weinfurter}}]{nauerth2009} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Nauerth}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {F\"{u}rst}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Schmitt-Manderbach}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Weier}}, \ and\ \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Weinfurter}},\ }\href {\doibase 10.1088/1367-2630/11/6/065001} {\bibfield {journal} {\bibinfo {journal} {New J. Phys.}\ }\textbf {\bibinfo {volume} {11}},\ \bibinfo {pages} {065001} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Huang}\ \emph {et~al.}(2018)\citenamefont {Huang}, \citenamefont {Sun}, \citenamefont {Liu},\ and\ \citenamefont {Makarov}}]{huang2018} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Huang}}, \bibinfo {author} {\bibfnamefont {S.-H.}\ \bibnamefont {Sun}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Liu}}, \ and\ \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Makarov}},\ }\href {\doibase 10.1103/PhysRevA.98.012330} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {98}},\ \bibinfo {pages} {012330} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Li}\ \emph {et~al.}(2011)\citenamefont {Li}, \citenamefont {Wang}, \citenamefont {Huang}, \citenamefont {Chen}, \citenamefont {Yin}, \citenamefont {Li}, \citenamefont {Zhou}, \citenamefont {Liu}, \citenamefont {Zhang}, \citenamefont {Guo}, \citenamefont {Bao},\ and\ \citenamefont {Han}}]{li2011a} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.-W.}\ \bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {J.-Z.}\ \bibnamefont {Huang}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {Z.-Q.}\ \bibnamefont {Yin}}, \bibinfo {author} {\bibfnamefont {F.-Y.}\ \bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Zhou}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {G.-C.}\ \bibnamefont {Guo}}, \bibinfo {author} {\bibfnamefont {W.-S.}\ \bibnamefont {Bao}}, \ and\ \bibinfo {author} {\bibfnamefont {Z.-F.}\ \bibnamefont {Han}},\ }\href {\doibase 10.1103/PhysRevA.84.062308} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {84}},\ \bibinfo {pages} {062308} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lydersen}\ \emph {et~al.}(2011{\natexlab{a}})\citenamefont {Lydersen}, \citenamefont {Akhlaghi}, \citenamefont {Majedi}, \citenamefont {Skaar},\ and\ \citenamefont {Makarov}}]{lydersen2011c} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Lydersen}}, \bibinfo {author} {\bibfnamefont {M.~K.}\ \bibnamefont {Akhlaghi}}, \bibinfo {author} {\bibfnamefont {A.~H.}\ \bibnamefont {Majedi}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Skaar}}, \ and\ \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Makarov}},\ }\href {\doibase 10.1088/1367-2630/13/11/113042} {\bibfield {journal} {\bibinfo {journal} {New J. Phys.}\ }\textbf {\bibinfo {volume} {13}},\ \bibinfo {pages} {113042} (\bibinfo {year} {2011}{\natexlab{a}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Tanner}\ \emph {et~al.}(2014)\citenamefont {Tanner}, \citenamefont {Makarov},\ and\ \citenamefont {Hadfield}}]{tanner2014} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~G.}\ \bibnamefont {Tanner}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Makarov}}, \ and\ \bibinfo {author} {\bibfnamefont {R.~H.}\ \bibnamefont {Hadfield}},\ }\href {\doibase 10.1364/OE.22.006734} {\bibfield {journal} {\bibinfo {journal} {Opt. Express}\ }\textbf {\bibinfo {volume} {22}},\ \bibinfo {pages} {6734} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gerhardt}\ \emph {et~al.}(2011{\natexlab{b}})\citenamefont {Gerhardt}, \citenamefont {Liu}, \citenamefont {Lamas-Linares}, \citenamefont {Skaar}, \citenamefont {Kurtsiefer},\ and\ \citenamefont {Makarov}}]{gerhardt2011} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Gerhardt}}, \bibinfo {author} {\bibfnamefont {Q.}~\bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Lamas-Linares}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Skaar}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Kurtsiefer}}, \ and\ \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Makarov}},\ }\href {\doibase 10.1038/ncomms1348} {\bibfield {journal} {\bibinfo {journal} {Nat. Commun.}\ }\textbf {\bibinfo {volume} {2}},\ \bibinfo {pages} {349} (\bibinfo {year} {2011}{\natexlab{b}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Makarov}\ \emph {et~al.}(2006)\citenamefont {Makarov}, \citenamefont {Anisimov},\ and\ \citenamefont {Skaar}}]{makarov2006} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Makarov}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Anisimov}}, \ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Skaar}},\ }\href {\doibase 10.1103/PhysRevA.74.022313} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {74}},\ \bibinfo {pages} {022313} (\bibinfo {year} {2006})},\ \bibinfo {note} {erratum ibid. 
\textbf{78}, 019905 (2008)}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wiechers}\ \emph {et~al.}(2011)\citenamefont {Wiechers}, \citenamefont {Lydersen}, \citenamefont {Wittmann}, \citenamefont {Elser}, \citenamefont {Skaar}, \citenamefont {Marquardt}, \citenamefont {Makarov},\ and\ \citenamefont {Leuchs}}]{wiechers2011} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Wiechers}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Lydersen}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Wittmann}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Elser}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Skaar}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Marquardt}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Makarov}}, \ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Leuchs}},\ }\href {\doibase 10.1088/1367-2630/13/1/013043} {\bibfield {journal} {\bibinfo {journal} {New J. Phys.}\ }\textbf {\bibinfo {volume} {13}},\ \bibinfo {pages} {013043} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lydersen}\ \emph {et~al.}(2011{\natexlab{b}})\citenamefont {Lydersen}, \citenamefont {Jain}, \citenamefont {Wittmann}, \citenamefont {Mar{\o}y}, \citenamefont {Skaar}, \citenamefont {Marquardt}, \citenamefont {Makarov},\ and\ \citenamefont {Leuchs}}]{lydersen2011b} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Lydersen}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Jain}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Wittmann}}, \bibinfo {author} {\bibfnamefont {{\O}.}~\bibnamefont {Mar{\o}y}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Skaar}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Marquardt}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Makarov}}, \ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Leuchs}},\ }\href {\doibase 10.1103/PhysRevA.84.032320} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {84}},\ \bibinfo {pages} {032320} (\bibinfo {year} {2011}{\natexlab{b}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lydersen}\ \emph {et~al.}(2011{\natexlab{c}})\citenamefont {Lydersen}, \citenamefont {Skaar},\ and\ \citenamefont {Makarov}}]{lydersen2011} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Lydersen}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Skaar}}, \ and\ \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Makarov}},\ }\href {\doibase 10.1080/09500340.2011.565889} {\bibfield {journal} {\bibinfo {journal} {J. Mod. Opt.}\ }\textbf {\bibinfo {volume} {58}},\ \bibinfo {pages} {680} (\bibinfo {year} {2011}{\natexlab{c}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Vakhitov}\ \emph {et~al.}(2001)\citenamefont {Vakhitov}, \citenamefont {Makarov},\ and\ \citenamefont {Hjelme}}]{vakhitov2001} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Vakhitov}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Makarov}}, \ and\ \bibinfo {author} {\bibfnamefont {D.~R.}\ \bibnamefont {Hjelme}},\ }\href {\doibase 10.1080/09500340108240904} {\bibfield {journal} {\bibinfo {journal} {J. Mod. 
Opt.}\ }\textbf {\bibinfo {volume} {48}},\ \bibinfo {pages} {2023} (\bibinfo {year} {2001})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gisin}\ \emph {et~al.}(2006)\citenamefont {Gisin}, \citenamefont {Fasel}, \citenamefont {Kraus}, \citenamefont {Zbinden},\ and\ \citenamefont {Ribordy}}]{gisin2006} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Gisin}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Fasel}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Kraus}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Zbinden}}, \ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Ribordy}},\ }\href {\doibase 10.1103/PhysRevA.73.022320} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {73}},\ \bibinfo {pages} {022320} (\bibinfo {year} {2006})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Schmid}\ \emph {et~al.}(2005)\citenamefont {Schmid}, \citenamefont {Trojek}, \citenamefont {Bourennane}, \citenamefont {Kurtsiefer}, \citenamefont {\ifmmode~\dot{Z}\else \.{Z}\fi{}ukowski},\ and\ \citenamefont {Weinfurter}}]{Schmid2005} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Schmid}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Trojek}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Bourennane}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Kurtsiefer}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {\ifmmode~\dot{Z}\else \.{Z}\fi{}ukowski}}, \ and\ \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Weinfurter}},\ }\href {\doibase 10.1103/PhysRevLett.95.230505} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {95}},\ \bibinfo {pages} {230505} (\bibinfo {year} {2005})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Muller}\ \emph {et~al.}(1997)\citenamefont {Muller}, \citenamefont {Herzog}, \citenamefont {Huttner}, \citenamefont {Tittel}, \citenamefont {Zbinden},\ and\ \citenamefont {Gisin}}]{muller1997} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Muller}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Herzog}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Huttner}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Tittel}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Zbinden}}, \ and\ \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Gisin}},\ }\href {\doibase 10.1063/1.118224} {\bibfield {journal} {\bibinfo {journal} {Appl. Phys. Lett.}\ }\textbf {\bibinfo {volume} {70}},\ \bibinfo {pages} {793} (\bibinfo {year} {1997})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hillery}\ \emph {et~al.}(1999)\citenamefont {Hillery}, \citenamefont {Bu{\v{z}}ek},\ and\ \citenamefont {Berthiaume}}]{hillery1999} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Hillery}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Bu{\v{z}}ek}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Berthiaume}},\ }\href {\doibase 10.1103/PhysRevA.59.1829} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {59}},\ \bibinfo {pages} {1829} (\bibinfo {year} {1999})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Sajeed}\ \emph {et~al.}(2017)\citenamefont {Sajeed}, \citenamefont {Minshull}, \citenamefont {Jain},\ and\ \citenamefont {Makarov}}]{sajeed2017} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Sajeed}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Minshull}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Jain}}, \ and\ \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Makarov}},\ }\href {\doibase 110.1038/s41598-017-08279-1} {\bibfield {journal} {\bibinfo {journal} {Sci. Rep.}\ }\textbf {\bibinfo {volume} {7}},\ \bibinfo {pages} {8403} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Herrero-Collantes}\ and\ \citenamefont {Garcia-Escartin}(2017)}]{herrero2017} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Herrero-Collantes}}\ and\ \bibinfo {author} {\bibfnamefont {J.~C.}\ \bibnamefont {Garcia-Escartin}},\ }\href {\doibase 10.1103/RevModPhys.89.015004} {\bibfield {journal} {\bibinfo {journal} {Rev. Mod. Phys.}\ }\textbf {\bibinfo {volume} {89}},\ \bibinfo {pages} {015004} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Vallone}\ \emph {et~al.}(2014)\citenamefont {Vallone}, \citenamefont {Marangon}, \citenamefont {Tomasin},\ and\ \citenamefont {Villoresi}}]{vallone2014} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Vallone}}, \bibinfo {author} {\bibfnamefont {D.~G.}\ \bibnamefont {Marangon}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Tomasin}}, \ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Villoresi}},\ }\href {\doibase 10.1103/PhysRevA.90.052327} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {90}},\ \bibinfo {pages} {052327} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Marangon}\ \emph {et~al.}(2017)\citenamefont {Marangon}, \citenamefont {Vallone},\ and\ \citenamefont {Villoresi}}]{marangon2017} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~G.}\ \bibnamefont {Marangon}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Vallone}}, \ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Villoresi}},\ }\href {\doibase 10.1103/PhysRevLett.118.060503} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {118}},\ \bibinfo {pages} {060503} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Deng}\ \emph {et~al.}(2003)\citenamefont {Deng}, \citenamefont {Long},\ and\ \citenamefont {Liu}}]{deng2003} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {F.-G.}\ \bibnamefont {Deng}}, \bibinfo {author} {\bibfnamefont {G.-L.}\ \bibnamefont {Long}}, \ and\ \bibinfo {author} {\bibfnamefont {X.-S.}\ \bibnamefont {Liu}},\ }\href {\doibase 10.1103/PhysRevA.68.042317} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {68}},\ \bibinfo {pages} {042317} (\bibinfo {year} {2003})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wang}\ \emph {et~al.}(2005{\natexlab{a}})\citenamefont {Wang}, \citenamefont {Deng}, \citenamefont {Li}, \citenamefont {Liu},\ and\ \citenamefont {Long}}]{wang2005c} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {F.-G.}\ \bibnamefont {Deng}}, \bibinfo {author} {\bibfnamefont {Y.-S.}\ \bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {X.-S.}\ \bibnamefont {Liu}}, \ and\ \bibinfo {author} {\bibfnamefont {G.-L.}\ \bibnamefont {Long}},\ }\href {\doibase 10.1103/PhysRevA.71.044305} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {71}},\ \bibinfo {pages} {044305} (\bibinfo {year} {2005}{\natexlab{a}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wang}\ \emph {et~al.}(2005{\natexlab{b}})\citenamefont {Wang}, \citenamefont {Deng},\ and\ \citenamefont {Long}}]{wang2005d} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {F.-G.}\ \bibnamefont {Deng}}, \ and\ \bibinfo {author} {\bibfnamefont {G.-L.}\ \bibnamefont {Long}},\ }\href {\doibase 10.1016/j.optcom.2005.04.048} {\bibfield {journal} {\bibinfo {journal} {Opt. Commun.}\ }\textbf {\bibinfo {volume} {253}},\ \bibinfo {pages} {15} (\bibinfo {year} {2005}{\natexlab{b}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wang}\ \emph {et~al.}(2011)\citenamefont {Wang}, \citenamefont {Li}, \citenamefont {Du},\ and\ \citenamefont {Deng}}]{wang2011a} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {T.-J.}\ \bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {F.-F.}\ \bibnamefont {Du}}, \ and\ \bibinfo {author} {\bibfnamefont {F.-G.}\ \bibnamefont {Deng}},\ }\href {\doibase 10.1088/0256-307X/28/4/040305} {\bibfield {journal} {\bibinfo {journal} {Chin. Phys. Lett.}\ }\textbf {\bibinfo {volume} {28}},\ \bibinfo {pages} {040305} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Deng}\ and\ \citenamefont {Long}(2004)}]{deng2004} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {F.-G.}\ \bibnamefont {Deng}}\ and\ \bibinfo {author} {\bibfnamefont {G.-L.}\ \bibnamefont {Long}},\ }\href {\doibase 10.1103/PhysRevA.69.052319} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {69}},\ \bibinfo {pages} {052319} (\bibinfo {year} {2004})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hu}\ \emph {et~al.}()\citenamefont {Hu}, \citenamefont {Jing}, \citenamefont {Zhang}, \citenamefont {Zhang}, \citenamefont {Hou}, \citenamefont {Xiao},\ and\ \citenamefont {Jia}}]{hu2017} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Hu}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Jing}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {Q.}~\bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Hou}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Xiao}}, \ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Jia}},\ }\href@noop {} {\ }\Eprint {http://arxiv.org/abs/1706.03234} {arXiv:1706.03234 [quant-ph]} \BibitemShut {NoStop} \bibitem [{\citenamefont {Makarov}\ and\ \citenamefont {Hjelme}(2005)}]{makarov2005} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Makarov}}\ and\ \bibinfo {author} {\bibfnamefont {D.~R.}\ \bibnamefont {Hjelme}},\ }\href {\doibase 10.1080/09500340410001730986} {\bibfield {journal} {\bibinfo {journal} {J. Mod. Opt.}\ }\textbf {\bibinfo {volume} {52}},\ \bibinfo {pages} {691} (\bibinfo {year} {2005})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Rivest}\ \emph {et~al.}(1978)\citenamefont {Rivest}, \citenamefont {Adleman},\ and\ \citenamefont {Dertouzos}}]{rivest1978a} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.~L.}\ \bibnamefont {Rivest}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Adleman}}, \ and\ \bibinfo {author} {\bibfnamefont {M.~L.}\ \bibnamefont {Dertouzos}},\ }in\ \href {https://pdfs.semanticscholar.org/3c87/22737ef9f37b7a1da6ab81b54224a3c64f72.pdf} {\emph {\bibinfo {booktitle} {Foundations of secure computation}}}\ (\bibinfo {year} {1978})\ pp.\ \bibinfo {pages} {169--180}\BibitemShut {NoStop} \bibitem [{\citenamefont {Broadbent}\ \emph {et~al.}(2009)\citenamefont {Broadbent}, \citenamefont {Fitzsimons},\ and\ \citenamefont {Kashefi}}]{broadbent2009} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Broadbent}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Fitzsimons}}, \ and\ \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Kashefi}},\ }in\ \href {\doibase 10.1109/FOCS.2009.36} {\emph {\bibinfo {booktitle} {Proc. 50th Annual IEEE Symposium on Foundations of Computer Science}}}\ (\bibinfo {organization} {IEEE Computer Society, Los Alamitos, CA},\ \bibinfo {year} {2009})\ pp.\ \bibinfo {pages} {517--526}\BibitemShut {NoStop} \bibitem [{\citenamefont {Morimae}\ and\ \citenamefont {Fujii}(2013)}]{morimae2013} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Morimae}}\ and\ \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Fujii}},\ }\href {\doibase 10.1103/PhysRevA.87.050301} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {87}},\ \bibinfo {pages} {050301} (\bibinfo {year} {2013})}\BibitemShut {NoStop}
\end{thebibliography}
\end{document}
\begin{document}
\title{Algebraic treatment of a simple model for the electromagnetic self-force}
\author{Francisco M. Fern\'{a}ndez}
\institute{INIFTA (UNLP,CCT La Plata-CONICET), Blvd. 113 y 64 S/N, Sucursal 4, Casilla de Correo 16, 1900 La Plata, Argentina \email{[email protected]}}
\date{Received: date / Accepted: date}
\maketitle

\begin{abstract}
The problem of the electromagnetic self-force can be studied in terms of a quadratic PT-symmetric Hamiltonian. Here, we apply a straightforward algebraic method to determine the regions of model-parameter space where the quantum-mechanical operator exhibits a real spectrum. An alternative point of view consists of finding the values of the model parameters for which a symmetric operator supports bound states.
\PACS{03.65.Ge, 11.30.Er, 03.65.-w}
\end{abstract}

\section{Introduction}
\label{sec:intro}

In a recent paper Bender and Gianfreda\cite{BG15} discussed a model for the electromagnetic self-force on an oscillating charged particle. They showed that the problems reported by Englert\cite{E80} were due to the fact that the PT-symmetric Hamiltonian that describes the system exhibits broken PT symmetry. The addition of two interaction terms to that Hamiltonian enabled Bender and Gianfreda to analyse the regions of broken and unbroken PT symmetry. To this end the authors resorted to the approach of Rossignoli and Kowalski\cite{RK05} in order to rewrite the quadratic Hamiltonian in diagonal form. The authors also mentioned the possibility of using, for the same purpose, an alternative approach\cite{F14} that has been known for a long time\cite{FC96}.

The purpose of this paper is to stress the fact that the application of the latter algebraic method to quadratic Hamiltonians is extremely simple and straightforward. In section~\ref{sec:model} we derive the regular or adjoint matrix representation of the modified Hamiltonian operator for the electromagnetic self-force\cite{BG15} and obtain the spectral frequencies in terms of the model parameters. These frequencies, which are roots of the characteristic polynomial for the matrix already mentioned, are suitable for determining the regions in parameter space where the PT symmetry is broken. Finally, in section~\ref{sec:conclusions} we summarize the main results and draw conclusions.

\section{The quantum-mechanical model}
\label{sec:model}

From the pair of classical equations of motion proposed by Englert\cite{E80}, Bender and Gianfreda\cite{BG15} derived the Hamiltonian function
\begin{equation}
H_{c}=\frac{p_{x}p_{w}-p_{y}p_{z}}{m\tau }+\frac{2p_{z}p_{w}}{m\tau ^{2}}+
\frac{wp_{y}+zp_{x}}{2}-\frac{mzw}{2}+kxy, \label{eq:H_c}
\end{equation}
where $q=x,y,z,w$ are suitable coordinates and $p_{q}$ the corresponding conjugate momenta. The quantum-mechanical version of this operator is PT-symmetric but its eigenvalues are not real because the PT symmetry is broken for all $m,\tau ,k$. To overcome this difficulty Bender and Gianfreda added two terms and obtained the modified Hamiltonian operator
\begin{eqnarray}
H &=&\frac{B\left( wp_{z}-zp_{w}\right) }{m\tau }+\frac{2p_{z}p_{w}}{m\tau ^{2}}+\frac{p_{x}p_{w}-p_{y}p_{z}}{m\tau }-\frac{mzw}{2}+\frac{wp_{y}+zp_{x}}{2}+kxy  \nonumber \\
&&+\frac{A\left( x^{2}+y^{2}\right) }{2},  \label{eq:H}
\end{eqnarray}
where the coordinates and momenta satisfy the quantum-mechanical commutation relations $[q,p_{q}]=i$. In this way those authors were able to show that the resulting PT-symmetric Hamiltonian exhibits a real spectrum for suitable values of $A$ and $B$.
In particular, the PT symmetry is broken for the case $A=B=0$ that leads to $H_{c}$. The simplest way of determining the regions of broken PT symmetry is to convert $H$ into a suitable diagonal form. Bender and Gianfreda resorted to the approach proposed by Rossignoli and Kowalski\cite{RK05}; in what follows we apply a well-known algebraic approach\cite{FC96} that was recently used by Fern\'{a}ndez\cite{F14} for the treatment of a simple model for optical resonators\cite{BGOPY13}.

The algebraic approach is based on the construction of the adjoint or regular matrix representation of $H$ in the operator basis $\left\{ O_{1},O_{2},\ldots ,O_{8}\right\} =\left\{ x,y,z,w,p_{x},p_{y},p_{z},p_{w}\right\} $. The matrix elements $H_{ij}$ are given by the coefficients of the commutator relations
\begin{equation}
\lbrack H,O_{i}]=\sum_{j=1}^{8}H_{ji}O_{j},\;i=1,2,\ldots ,8.  \label{eq:[H,O_i]}
\end{equation}
A straightforward calculation leads to
\begin{equation}
\mathbf{H}=\left(
\begin{array}{llllllll}
0 & 0 & 0 & 0 & iA & ik & 0 & 0 \\
0 & 0 & 0 & 0 & ik & iA & 0 & 0 \\
-\frac{i}{2} & 0 & 0 & \frac{iB}{m\tau } & 0 & 0 & 0 & -\frac{im}{2} \\
0 & -\frac{i}{2} & -\frac{iB}{m\tau } & 0 & 0 & 0 & -\frac{im}{2} & 0 \\
0 & 0 & 0 & -\frac{i}{m\tau } & 0 & 0 & \frac{i}{2} & 0 \\
0 & 0 & \frac{i}{m\tau } & 0 & 0 & 0 & 0 & \frac{i}{2} \\
0 & \frac{i}{m\tau } & 0 & -\frac{2i}{m\tau ^{2}} & 0 & 0 & 0 & \frac{iB}{m\tau } \\
-\frac{i}{m\tau } & 0 & -\frac{2i}{m\tau ^{2}} & 0 & 0 & 0 & -\frac{iB}{m\tau } & 0
\end{array}
\right) .  \label{eq:H_mat}
\end{equation}
The spectral frequencies of the problem are the eigenvalues of this matrix. The secular equation $|\mathbf{H}-\lambda \mathbf{I}|=0$ (where $\mathbf{I}$ is the $8\times 8$ identity matrix) yields the characteristic equation
\begin{equation}
\left( m^{2}\tau ^{2}\xi -B^{2}+m^{2}\right) \left( m^{2}\tau ^{2}\xi ^{3}+\xi ^{2}\left( m^{2}-B^{2}\right) +\xi \left( 2AB-2km\right) -A^{2}+k^{2}\right) =0,  \label{eq:charpoly_xi}
\end{equation}
where $\xi =\lambda ^{2}$. One of the roots is
\begin{equation}
\xi =\frac{B^{2}-m^{2}}{m^{2}\tau ^{2}},  \label{eq:xi_1}
\end{equation}
and the remaining three are the roots of the cubic equation
\begin{equation}
m^{2}\tau ^{2}\xi ^{3}+\left( m^{2}-B^{2}\right) \xi ^{2}+2\left( AB-km\right) \xi -A^{2}+k^{2}=0.  \label{eq:charpoly_xi_2}
\end{equation}
The spectrum is real when all the frequencies $\lambda $ are real; since $\xi =\lambda ^{2}$, this requires the four roots $\xi _{j}$ to be positive. The root (\ref{eq:xi_1}) is positive when $B^{2}>m^{2}$, and for the cubic (\ref{eq:charpoly_xi_2}) Vieta's formulas show that positive roots require a positive sum, a positive sum of pairwise products and a positive product of the roots, that is, $B^{2}>m^{2}$, $AB>km$ and $A^{2}>k^{2}$. Thus the region in parameter space where the spectrum is real is determined by the set of inequalities $\left\{ B^{2}>m^{2},\;A^{2}>k^{2},\;AB>km\right\} $.

Bender and Gianfreda\cite{BG15} obtained exactly the same equations (\ref{eq:xi_1}) and (\ref{eq:charpoly_xi_2}) by means of an alternative approach based on the creation and annihilation operators\cite{RK05}. In our opinion the adjoint or regular matrix representation of the Hamiltonian operator provides a more straightforward route. In addition, the algebraic approach is somewhat more general because it applies to any Hamiltonian that satisfies the commutator relations (\ref{eq:[H,O_i]})\cite{FC96}.
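The whole procedure is also easy to automate. The following minimal sketch (Python with the sympy library; it is only an illustrative check and not part of the derivation above) rebuilds the matrix representation (\ref{eq:H_mat}) and factors its characteristic polynomial so that the result can be compared with (\ref{eq:charpoly_xi}):
\begin{verbatim}
import sympy as sp

m, tau, k, A, B = sp.symbols('m tau k A B', real=True)
lam, xi = sp.symbols('lambda xi')
I = sp.I

# Adjoint (regular) matrix representation of H in the ordered basis
# {x, y, z, w, p_x, p_y, p_z, p_w}.
H = sp.Matrix([
    [0, 0, 0, 0, I*A, I*k, 0, 0],
    [0, 0, 0, 0, I*k, I*A, 0, 0],
    [-I/2, 0, 0, I*B/(m*tau), 0, 0, 0, -I*m/2],
    [0, -I/2, -I*B/(m*tau), 0, 0, 0, -I*m/2, 0],
    [0, 0, 0, -I/(m*tau), 0, 0, I/2, 0],
    [0, 0, I/(m*tau), 0, 0, 0, 0, I/2],
    [0, I/(m*tau), 0, -2*I/(m*tau**2), 0, 0, 0, I*B/(m*tau)],
    [-I/(m*tau), 0, -2*I/(m*tau**2), 0, 0, 0, -I*B/(m*tau), 0]])

# Characteristic polynomial in lambda, rewritten in xi = lambda**2 and
# factored; the printed result can be compared with the factorization
# given in the text (they should agree up to an overall constant factor,
# because the characteristic polynomial is monic in lambda).
p = sp.expand(H.charpoly(lam).as_expr())
print(sp.factor(p.subs(lam**2, xi)))
\end{verbatim}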
\section{Conclusions}
\label{sec:conclusions}

Any Hamiltonian that is a quadratic function of the coordinates and momenta or of the creation and annihilation operators is exactly solvable. One way of obtaining its spectrum is to convert it into a diagonal form by means of a linear combination of the relevant dynamical variables\cite{RK05}. The algebraic method\cite{F14,FC96} is a particularly simple approach because the construction of the adjoint or regular matrix representation is straightforward. It only requires the calculation of simple commutation relations like (\ref{eq:[H,O_i]}).

It is also worth mentioning that the quantum-mechanical versions of $H_{c}$ and $H$ are symmetric operators. They satisfy $\left\langle f\right| H\left| g\right\rangle =\left\langle g\right| H\left| f\right\rangle ^{*}$ for any pair of square integrable functions $f(\mathbf{q})$ and $g(\mathbf{q})$. If $\psi $ is a normalizable eigenfunction of $H$ ($H\psi =E\psi $, $\left\langle \psi \right| \left. \psi \right\rangle <\infty $) then $E$ is real. Therefore, in the present case, looking for the regions in parameter space of unbroken PT symmetry is equivalent to looking for the regions where $H$ supports bound states (as Bender and Gianfreda\cite{BG15} did explicitly for the ground state). The same situation emerged in the case of the optical resonators mentioned above\cite{F14,BGOPY13}.

\end{document}
\begin{document}
\title{Fourier-Pad{\'e} approximants for Angelesco systems}
\author{M. Bello-Hern{\'a}ndez}
\address{Universidad de La Rioja \\ Dpto. Matem{\'a}ticas y Computaci{\'o}n}
\author{G. L{\'o}pez-Lagomasino}
\address{Universidad Carlos III de Madrid\\ Dpto. de Matem{\'a}ticas Aplicada}
\author{J. M{\'\i}nguez-Ceniceros}
\address{Universidad de La Rioja\\ Dpto. Matem{\'a}ticas y Computaci{\'o}n}
\maketitle

{\it Keywords and phrases:} Rational approximation, multipoint Pad\'e approximation, Fourier-Pad\'e approximation, potential theory.\\

\section{Introduction}

In this paper we study linear and non-linear Fourier-Pad\'e approximation for Angelesco systems of functions. This construction is similar to that of Hermite-Pad\'e approximation. Instead of considering power series expansions of the functions in the system, we take their expansion in a series of orthogonal polynomials. In \cite{sue} and \cite{sue2}, S. P. Suetin obtained convergence results for rows of Fourier-Pad\'e approximation, extending to this setting classical results of the theory of Pad\'e approximation. Diagonal sequences of Fourier-Pad\'e approximation were studied by A. A. Gonchar, E. A. Rakhmanov, and S. P. Suetin in \cite{Gon-Rak-Sue} when the function to be approximated is of Markov type; that is, the Cauchy transform of a measure supported on the real line. They give the rate of convergence of diagonal sequences of linear and non-linear Pad\'e approximants in terms of the equilibrium measures of a related potential theoretic problem. We generalize those results to the case of a system of Markov functions defined by measures whose supports do not intersect.

Let $\mathcal{M}(\Delta)$ denote the class of all finite, Borel measures with compact support consisting of an infinite set of points contained in an interval $\Delta$ of the real line $\mathbb{R}$. Given $\sigma \in \mathcal{M}(\Delta)$, let
\[
\widehat{\sigma}(z)=\int\frac{d\sigma(x)}{z-x}
\]
be the associated \emph{Markov} function. Let $\Delta_k, k=1,\ldots,m,$ be intervals of the real line such that
\[
\Delta_k \cap \Delta_j = \emptyset\,, \qquad k \neq j \,,
\]
and $\sigma_k \in \mathcal{M}(\Delta_k), k=1,\ldots,m.$ We say that $\sigma = (\sigma_1,\ldots,\sigma_m)$ forms an Angelesco system of measures and $(\widehat{\sigma}_1,\ldots,\widehat{\sigma}_m)$ is the associated Angelesco system of functions. Let $\sigma_0 \in \mathcal{M}(\Delta_0).$ Likewise, we will assume that
\[
\Delta_0 \cap \Delta_k = \emptyset\,, \qquad k=1,\ldots,m\,.
\]
Consider the sequence $\{\ell_n\}, n \in \mathbb{Z}_+ = \{0,1,2,\ldots\},$ of orthonormal polynomials with respect to $\sigma_0$ with positive leading coefficient. Take a multi-index ${\bf n} = (n_1,\ldots,n_m) \in \mathbb{Z}_+^m$. Set
\[
|{\bf n}| = n_1 + \cdots + n_m \,.
\]
Let $Q_{\bf n},P_{{\bf n},1},\ldots,P_{{\bf n},m},$ be polynomials such that:
\begin{itemize}
\item[i)] $ \deg Q_{\bf n} \leq |{\bf n}|, Q_{\bf n} \not\equiv 0\,, \deg P_{{\bf n},j} \leq |{\bf n}|-1, j=1,\ldots,m\,.$
\item[ii)] For each $j = 1,\ldots,m\,,$ and $k = 0,\ldots, |{\bf n}| +n_j -1$
\[
c_k(Q_{\bf n} \widehat{\sigma}_j - P_{{\bf n},j}) = \int (Q_{\bf n} \widehat{\sigma}_j - P_{{\bf n},j})(x) \ell_k(x) d \sigma_0(x) = 0\,.
\]
\end{itemize}
Notice that
\begin{equation} \label{losP}
P_{{\bf n},j}(z) = \sum_{i=0}^{|{\bf n}|-1}c_i(Q_{\bf n} \widehat{\sigma}_j)\ell_i(z)\,.
\end{equation}
The $|{\bf n}| +1$ coefficients of $Q_{\bf n}$ satisfy a homogeneous linear system of $|{\bf n}|$ equations given by
\[
c_k(Q_{\bf n}\widehat{\sigma}_j) = 0\,, \qquad j = 1,\ldots,m\,,\qquad k = |{\bf n}|,\ldots,|{\bf n}| + n_j -1\,.
\]
Therefore, a non-trivial solution is guaranteed. In Section \ref{teo-pot} we will prove that every solution to i)-ii) has $\deg Q_{\bf n} = |{\bf n}|$. This being the case, $(Q_{\bf n},P_{{\bf n},1},\ldots,P_{{\bf n},m})$ is uniquely determined up to a constant factor. In fact, let us assume that $(Q_{\bf n},P_{{\bf n},1},\ldots,P_{{\bf n},m})$ and $(\widetilde{Q}_{\bf n},\widetilde{P}_{{\bf n},1},\ldots,\widetilde{P}_{{\bf n},m})$ are solutions of i)-ii). Without loss of generality, we can assume that $Q_{\bf n}$ and $\widetilde{Q}_{\bf n}$ are monic (with leading coefficient equal to one). Obviously, if $Q_{\bf n} - \widetilde{Q}_{\bf n} \not\equiv 0$ then $(Q_{\bf n}- \widetilde{Q}_{\bf n},{P}_{{\bf n},1}-\widetilde{P}_{{\bf n},1},\ldots,{P}_{{\bf n},m} - \widetilde{P}_{{\bf n},m})$ is also a solution with $\deg (Q_{\bf n}- \widetilde{Q}_{\bf n}) < |{\bf n}|$, which contradicts our assumption. Hence $Q_{\bf n} \equiv \widetilde{Q}_{\bf n}$ and by (\ref{losP}) it follows that ${P}_{{\bf n},j} \equiv \widetilde{P}_{{\bf n},j}, j = 1,\ldots,m\,.$

The rational vector function $\left(\frac{P_{{\bf n},1}}{Q_{\bf n}},\ldots,\frac{P_{{\bf n},m}}{Q_{\bf n}}\right)$ constructed from any solution of i)-ii) is called the {\bf n}-th linear Fourier-Pad\'e approximant for the Angelesco system $(\widehat{\sigma}_1,\ldots,\widehat{\sigma}_m)$ with respect to $\sigma_0$. We shall see that for all ${\bf n} \in \mathbb{Z}_+^m$ the linear Fourier-Pad\'e approximant of an Angelesco system is unique.

Non-linear Fourier-Pad\'e approximants are determined as follows. Given ${\bf n} \in \mathbb{Z}_+^m,$ we must find polynomials $T_{{\bf n}},\, S_{{\bf n},1},\ldots,S_{{\bf n},m}$ such that
\begin{itemize}
\item[i')] $\deg T_{\bf n} \le |{\bf n}|, T_{\bf n} \not\equiv 0\,, \deg(S_{{\bf n},j})\le |{\bf n}|-1,\, j=1,\ldots, m\,.$
\item[ii')] For each $j = 1,\ldots,m\,,$ and $k = 0,\ldots, |{\bf n}| +n_j -1$
\[
c_k\left( \widehat{\sigma}_j - \frac{S_{{\bf n},j}}{T_{\bf n}} \right) = \int \left( \widehat{\sigma}_j - \frac{S_{{\bf n},j}}{T_{\bf n}}\right)(x) \ell_k(x) d \sigma_0(x) = 0\,.
\]
\end{itemize}
This system of equations is non-linear in the coefficients of the polynomials. We shall prove that for each ${\bf n} \in \mathbb{Z}_+^m,$ the system has a solution, but we have not been able to show that it is unique. For any solution of i')-ii'), the vector rational function $\left(\frac{S_{{\bf n},1}}{T_{\bf n}},\ldots,\frac{S_{{\bf n},m}}{T_{\bf n}}\right)$ is called an {\bf n}-th non-linear Fourier-Pad\'e approximant for the Angelesco system $(\widehat{\sigma}_1,\ldots,\widehat{\sigma}_m)$ with respect to $\sigma_0$.
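For fixed ${\bf n}$, conditions i)-ii) amount to a finite homogeneous linear system and are easy to set up numerically. The following minimal sketch (Python with NumPy; the concrete choice $m=1$ with $\sigma_0$ and $\sigma_1$ equal to the Lebesgue measure on $\Delta_0=[-1,1]$ and $\Delta_1=[2,3]$, respectively, is purely illustrative and is not taken from the text) computes the denominator $Q_{\bf n}$ from the homogeneous system above and prints its zeros, which are expected to lie in the interior of $\Delta_1$ (cf. Lemma \ref{ortogonalidad} below):
\begin{verbatim}
import numpy as np
from numpy.polynomial import legendre

# Illustrative data (m = 1): sigma_0 = Lebesgue measure on Delta_0 = [-1, 1],
# so ell_k are the orthonormal Legendre polynomials, and sigma_1 = Lebesgue
# measure on Delta_1 = [2, 3], whose Markov function is
#   sigma1_hat(z) = log((z - 2)/(z - 3)).
n = 4                                            # multi-index (n), |n| = n
sigma1_hat = lambda x: np.log((x - 2.0) / (x - 3.0))

xg, wg = legendre.leggauss(200)                  # Gauss-Legendre rule on [-1, 1]
def ell(k, x):                                   # orthonormal Legendre polynomial
    return np.sqrt((2 * k + 1) / 2.0) * legendre.legval(x, np.eye(k + 1)[k])

# Homogeneous system c_k(Q sigma1_hat) = 0, k = n, ..., 2n - 1, for the
# monomial coefficients of Q (n equations, n + 1 unknowns).
M = np.array([[np.sum(wg * ell(k, xg) * xg**i * sigma1_hat(xg))
               for i in range(n + 1)] for k in range(n, 2 * n)])
q = np.linalg.svd(M)[2][-1]                      # null vector of M
print(np.sort(np.roots(q[::-1])))                # zeros of Q_n, expected in (2, 3)
\end{verbatim}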
In this paper, we obtain the rate of convergence (divergence) of linear and non-linear Fourier-Pad\'e approximants for Angelesco systems such that the measures $\sigma_0,\ldots, \sigma_m$ are in the class $\mbox{\bf {Reg}}$ of regular measures. For different equivalent forms of defining regular measures see sections 3.1 to 3.3 in \cite{stto}. In particular, $\sigma_0 \in \mbox{\bf {Reg}}$ if and only if
\[
\lim_{n} |\ell_n(z)|^{1/n} = \exp\{g_{\Omega_0}(z;\infty)\}\,,
\]
uniformly on compact subsets of the complement of the smallest interval containing the support, $\text{supp}(\sigma_0),$ of $\sigma_0$, where $g_{\Omega_0}(\cdot;\infty)$ denotes Green's function for the region $\Omega_0 = \mathbb{C} \setminus \text{supp}(\sigma_0)$ with singularity at $\infty.$ Analogously, one defines regularity for the other measures $\sigma_1,\ldots,\sigma_m\,.$ In the sequel, we write $(\sigma_0;\sigma_1,\ldots,\sigma_m) \in \mbox{\bf Reg}$ to mean that $\sigma_k \in \mbox{\bf Reg}, k=0,\ldots,m\,.$

The system $(\sigma_1,\ldots,\sigma_m)$ will be used to construct the Angelesco system of functions, whereas $\sigma_0$ will determine the system of orthogonal polynomials with respect to which the Fourier expansions will be taken. Therefore, for all $0 \leq j,k \leq m\,,$ we assume that
\[
\Delta_j \cap \Delta_k = \emptyset\,, \qquad j \neq k\,.
\]
In Theorems 1 and 2 below, we find the rate of convergence of the $|{\bf n}|$th root of the error of approximation of the functions $\widehat{\sigma}_k$ by linear and non-linear Fourier-Pad\'e approximants, respectively. The answers are given in terms of extremal solutions of certain vector valued equilibrium problems for the logarithmic potential. Before stating Theorems 1 and 2, we need to introduce some notation and results from potential theory.

Let $F_k, k=1,\ldots,N,$ be (not necessarily distinct) closed bounded intervals of the real line and $\mathcal{C} = (c_{j,k})$ be a real, positive definite, symmetric matrix of order $N$. $\mathcal{C}$ will be called the interaction matrix. By $\mathcal{M}_1(F_k), k =1,\ldots,N,$ we denote the subclass of probability measures of $\mathcal{M}(F_k)$ and
\[
\mathcal{M}_1= \mathcal{M}_1(F_1) \times \cdots \times \mathcal{M}_1(F_{N}) \,.
\]
Given a vector measure $\mu \in \mathcal{M}_1$ and $j=1,\ldots,N,$ we define the combined potential
\begin{equation} \label{combpot}
W^{\mu}_j(x) = \sum_{k=1}^{N} c_{j,k} V^{\mu_k}(x) \,,\qquad x \in F_j\,,
\end{equation}
where
\[
V^{\mu_k}(x) = \int \log \frac{1}{|x-t|} \,d\mu_k(t)\,
\]
denotes the standard logarithmic potential of $\mu_k$. We denote
\[
w_j^{\mu} = \inf \{W_j^{\mu}(x): x \in F_j\} \,.
\]
In Chapter 5 of \cite{niso} (see Propositions 4.5, 4.6, and Theorem 4.1) the authors prove the following result (we state it in a form convenient for our purpose).
\begin{lem} \label{niksor}
Let $\mathcal{C}$ be a real, positive definite, symmetric matrix of order $N$. If there exists $\overline{\mu} = (\overline{\mu}_1,\ldots,\overline{\mu}_{N})\in \mathcal{M}_1$ such that for each $j=1,\ldots,N$
\[
W_j^{\overline{\mu}} (x) = w_j^{\overline{\mu}}\,, \qquad x \in \text{supp} (\overline{\mu}_j)\,,
\]
then $\overline{\mu}$ is unique. Moreover, if $c_{j,k} \geq 0$ when $F_j \cap F_k \neq \emptyset$ then $\overline{\mu}$ exists.
\end{lem}
The vector measure $\overline{\mu} \in \mathcal{M}_1$ is called the equilibrium solution for the vector potential problem determined by $\mathcal{C}$ on the system of intervals $F_j\,, j = 1,\ldots,N\,.$

In the sequel $\Lambda = \Lambda(p_1,\ldots,p_m) \subset \mathbb{Z}_+^m$ is an infinite system of distinct multi-indices such that
\[
\lim_{ {\bf n} \in \Lambda} \frac{n_j}{|{\bf n}|} = p_j \in (0,1)\,, \qquad j=1,\ldots,m\,.
\]
Let us define the block matrix
\begin{displaymath}
\mathcal{C}_1= \left( \begin{array}{cc} \mathcal{C}_{1,1} & \mathcal{C}_{1,2} \\ \mathcal{C}_{2,1} & \mathcal{C}_{2,2} \end{array}\right)\,,
\end{displaymath}
where
\begin{displaymath}
\mathcal{C}_{1,1}= \left( \begin{array}{cccc} 2p_1^2 & p_1p_2 & \cdots & p_1p_m \\ p_2p_1 & 2p_2^2 & \cdots & p_2p_m \\ \vdots & \vdots & \ddots & \vdots \\ p_mp_1 & p_mp_2 & \cdots & 2p_m^2 \end{array}\right)\,,
\end{displaymath}
and $\mathcal{C}_{1,2}, \mathcal{C}_{2,1}, \mathcal{C}_{2,2} $ are diagonal matrices given by
\begin{displaymath}
\mathcal{C}_{1,2}= \mathcal{C}_{2,1} = \mbox{diag}\{-p_1(1+p_1),-p_2(1+p_2),\cdots,-p_m(1+p_m)\} \,,
\end{displaymath}
and
\begin{displaymath}
\mathcal{C}_{2,2} = \mbox{diag}\{2(1+p_1)^2,2(1+p_2)^2,\cdots,2(1+p_m)^2\} \,.
\end{displaymath}
$\mathcal{C}_1$ satisfies all the assumptions of Lemma \ref{niksor} on the system of intervals $F_j = \Delta_j, j=1,\ldots,m, F_j = \Delta_0, j = m+1,\ldots,2m,$ including $c_{j,k} \geq 0$ when $F_j \cap F_k \neq \emptyset.$ The only non-trivial property is its positive definiteness, and we shall prove this in Section 2. Let $\overline{\mu}=\overline{\mu}(\mathcal{C}_1) $ be the equilibrium solution for the corresponding vector potential problem. We have
\begin{thm}\label{teoprin}
Let $({\sigma}_0;\sigma_1,\ldots,{\sigma}_m) \in \mbox{\bf Reg}$ and consider the sequence of multi-indices $\Lambda = \Lambda(p_1,\ldots,p_m)$. Let $\left(\frac{P_{{\bf n},1}}{Q_{\bf n}},\ldots,\frac{P_{{\bf n},m}}{Q_{\bf n}}\right), {\bf n} \in \Lambda, $ be the associated sequence of linear Fourier-Pad\'e approximants for the Angelesco system of functions $(\widehat{\sigma}_1,\ldots,\widehat{\sigma}_m)$ with respect to $\sigma_0$. Then,
\begin{equation}\label{conver1}
\lim_{{\bf n} \in \Lambda}\left|\widehat{\sigma}_j(z)-\frac{P_{{\bf n},j}(z)}{Q_{\bf n}(z)}\right|^{1/|{\bf n}|}=G_j(z)\,, \qquad j=1,\ldots,m\,,
\end{equation}
uniformly on each compact subset of $\overline{\mathbb{C}}\setminus (\cup_{k=0}^m \Delta_k)$, where
\[
G_j(z)=\exp\left((W_j^{\overline{\mu}}(z)- \omega_j^{\overline{\mu}})/p_j\right)\,,
\]
$\overline{\mu} = \overline{\mu}(\mathcal{C}_1),$ and the combined potentials $W_j^{\overline{\mu}}$ are defined by $(\ref{combpot})$ using $\mathcal{C}_1$.
\end{thm}
Set
\[
G_j^{\pm}=\{x\in\overline{\mathbb{C}}\setminus(\cup_{k=0}^{m} \Delta_k):\,\pm\left( \omega_j^{\overline{\mu}}- W_j^{\overline{\mu}}(x)\right)>0\}.
\]
An immediate consequence of Theorem \ref{teoprin} is
\begin{cor}\label{convergencia-lineal}
Under the assumptions of Theorem \ref{teoprin},
\[
\lim_{{\bf n} \in \Lambda} \frac{P_{{\bf n},j}}{Q_{\bf n}}=\widehat{\sigma}_j \,, \qquad j=1,\ldots,m\,,
\]
uniformly on compact subsets of $G_j^+$, while it diverges to infinity at each point of $G_j^-$.
\end{cor}
Non-linear Fourier-Pad{\'e} approximants require the solution of a different vector potential equilibrium problem. Let
\begin{displaymath}
\mathcal{C}_2= \left( \begin{array}{cc} \mathcal{C}_{1,1} & \mathcal{C}_{1,2} \\ \mathcal{C}_{2,1} & \mathcal{C}_{2,2}^2 \end{array}\right)\,,
\end{displaymath}
where $\mathcal{C}_{1,1}, \mathcal{C}_{1,2},\mathcal{C}_{2,1}$ are as before and
\[
\mathcal{C}_{2,2}^2 = \left( \begin{array}{cccc} \frac{2m(1+p_1)^2}{m+1} & \frac{-2(1+p_1)(1+p_2)}{m+1} & \cdots & \frac{-2(1+p_1)(1+p_m)}{m+1} \\ \frac{-2(1+p_2)(1+p_1)}{m+1} & \frac{2m(1+p_2)^2}{m+1} & \cdots & \frac{-2(1+p_2)(1+p_m)}{m+1} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{-2(1+p_m)(1+p_1)}{m+1} & \frac{-2(1+p_m)(1+p_2)}{m+1} & \cdots & \frac{2m(1+p_m)^2}{m+1} \end{array} \right)\,.
\]
$\mathcal{C}_2$ is a real, positive definite, symmetric matrix of order $2m$. We take the system of intervals $F_j = \Delta_j, j=1,\ldots,m, F_j = \Delta_0, j = m+1,\ldots,2m.$ $\mathcal{C}_2$ does not satisfy that $c_{j,k} \geq 0$ when $F_j \cap F_k \neq \emptyset.$ In Theorem \ref{asint2} of Section 3, we prove that the corresponding equilibrium problem has at least one solution and that $\mathcal{C}_2$ is positive definite. Therefore, according to Lemma \ref{niksor} the solution is unique. Let $\overline{\mu}=\overline{\mu}(\mathcal{C}_2) $ be the equilibrium solution for the corresponding vector potential problem. In Lemma \ref{existe} we show that for each ${\bf n} \in \mathbb{Z}_+^m$ there exists at least one non-linear Fourier-Pad\'e approximant, but we have not been able to prove that it is unique. We have
\begin{thm}\label{teoprin2}
Let $({\sigma}_0;\sigma_1,\ldots,{\sigma}_m) \in \mbox{\bf Reg}$ and consider the sequence of multi-indices $\Lambda = \Lambda(p_1,\ldots,p_m)$. Let $\left(\frac{S_{{\bf n},1}}{T_{\bf n}},\ldots,\frac{S_{{\bf n},m}}{T_{\bf n}}\right), {\bf n} \in \Lambda, $ be an associated sequence of non-linear Fourier-Pad\'e approximants for the Angelesco system of functions $(\widehat{\sigma}_1,\ldots,\widehat{\sigma}_m)$ with respect to $\sigma_0$. Then,
\begin{equation}\label{conver2}
\lim_{ {\bf n} \in \Lambda}\left|\widehat{\sigma}_j(z)-\frac{S_{{\bf n},j}(z)}{T_{\bf n}(z)}\right|^{1/|{\bf n}|}=H_j(z)\,, \qquad j=1,\ldots,m\,,
\end{equation}
uniformly on each compact subset of $\overline{\mathbb{C}}\setminus (\cup_{k=0}^m \Delta_k)$, where
\[
H_j(z) = \exp\left((W_j^{\overline{\mu}}(z)- \omega_j^{\overline{\mu}})/p_j\right)\,,
\]
$\overline{\mu} = \overline{\mu}(\mathcal{C}_2)$, and the combined potentials $W_j^{\overline{\mu}}$ are defined by $(\ref{combpot})$ using $\mathcal{C}_2$.
\end{thm}
Notice that the limit only depends on $\Lambda$ and not on the non-linear Fourier-Pad\'e approximants selected (in case they are not uniquely determined).
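Both interaction matrices are completely explicit, so their structure is easy to explore numerically. The following minimal sketch (Python with NumPy; a hypothetical helper written only for illustration and not used anywhere in the proofs) assembles $\mathcal{C}_1$ and $\mathcal{C}_2$ for a given vector $(p_1,\ldots,p_m)$ and checks their positive definiteness, a property that is established rigorously in Sections 2 and 3:
\begin{verbatim}
import numpy as np

def interaction_matrices(p):
    """Assemble the 2m x 2m matrices C_1 and C_2 for p = (p_1, ..., p_m)."""
    p = np.asarray(p, dtype=float)
    q = 1.0 + p
    m = p.size
    C11 = np.outer(p, p)                    # p_j p_k off the diagonal ...
    np.fill_diagonal(C11, 2.0 * p**2)       # ... and 2 p_j^2 on it
    C12 = -np.diag(p * q)                   # C_{1,2} = C_{2,1}
    D1 = np.diag(2.0 * q**2)                # lower-right block of C_1
    D2 = -2.0 * np.outer(q, q) / (m + 1)    # lower-right block of C_2 ...
    np.fill_diagonal(D2, 2.0 * m * q**2 / (m + 1))
    C1 = np.block([[C11, C12], [C12, D1]])
    C2 = np.block([[C11, C12], [C12, D2]])
    return C1, C2

# Example: three measures with equal proportions p_j = 1/3.
C1, C2 = interaction_matrices([1/3, 1/3, 1/3])
print(np.linalg.eigvalsh(C1).min(), np.linalg.eigvalsh(C2).min())  # both positive
\end{verbatim}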
Set
\[
H_j^{\pm}=\{x\in\overline{\mathbb{C}}\setminus(\cup_{k=0}^{m} \Delta_k):\,\pm\left( \omega_j^{\overline{\mu}}- W_j^{\overline{\mu}}(x)\right)>0\}.
\]
As a consequence of Theorem \ref{teoprin2}, we obtain
\begin{cor}\label{convergencia-no-lineal}
Under the assumptions of Theorem \ref{teoprin2},
\[
\lim_{{\bf n} \in \Lambda} \frac{S_{{\bf n},j}}{T_{\bf n}} = \widehat{\sigma}_j\,, \qquad j =1,\ldots,m\,,
\]
uniformly on compact subsets of $H_j^+$, while it diverges to infinity at each point of $H_j^-$.
\end{cor}
Section \ref{teo-pot} is dedicated to the proof of Theorem \ref{teoprin} and Section \ref{demostracion} to that of Theorem \ref{teoprin2}. Section \ref{justi} is dedicated to the justification of Lemma \ref{niksor} as stated here, since in \cite{niso} it is assumed in general that $c_{j,k} \geq 0$ if $F_j \cap F_k \neq \emptyset$. In the sequel, we maintain the notation introduced above.

\section{Proof of Theorem \ref{teoprin}}\label{teo-pot}

From the definition of the linear Fourier-Pad\'e approximant it follows immediately that for each $j=1,\ldots,m$
\begin{equation}\label{four-pade-lineal}
\int x^k(Q_{\bf n}(x)\widehat{\sigma}_j(x)-P_{{\bf n},j}(x))d\sigma_0(x)=0,\quad k=0,\ldots, |{\bf n}|+n_j-1\,.
\end{equation}
Since the function $Q_{\bf n}(z)\widehat{\sigma}_j(z)-P_{{\bf n},j}(z)$ is continuous on $\Delta_0$, from (\ref{four-pade-lineal}) we have that $Q_{\bf n}(z)\widehat{\sigma}_j(z)-P_{{\bf n},j}(z)$ has at least $|{\bf n}|+n_j$ sign changes on $\Delta_0$. Let $W_{{\bf n},j}$ be the monic polynomial whose zeros are the points where $Q_{\bf n}(z)\widehat{\sigma}_j(z)-P_{{\bf n},j}(z)$ changes sign on the interval $\Delta_0$. Obviously, $\deg W_{{\bf n},j} \geq |{\bf n}|+n_j$ and
\[
\frac{Q_{\bf n}(z)\widehat{\sigma}_j(z)-P_{{\bf n},j}(z)}{W_{{\bf n},j}(z)}\in \mathcal{H}(\overline{\mathbb{C}}\setminus \text{supp}(\sigma_j)), \quad j=1,\ldots,m\,,
\]
is analytic on the indicated region. Thus, linear Fourier-Pad\'e approximants satisfy interpolation conditions on $\Delta_0$. A similar statement holds for the non-linear Fourier-Pad\'e approximants. In our proofs, we will use certain orthogonality relations satisfied by vector rational interpolants.
\begin{lem}\label{ortogonalidad}
Let $(\widehat{\sigma}_1,\ldots,\widehat{\sigma}_m)$ be an Angelesco system, ${\bf n} = (n_1,\ldots,n_m) \in \mathbb{Z}_+^m,$ and $(w_{{\bf n},1},\ldots,w_{{\bf n},m})$ a system of polynomials such that $\deg w_{{\bf n},j} \geq |{\bf n}|+n_j, j=1,\ldots,m,$ whose zeros lie on an interval $\Delta_0, \Delta_0 \cap \Delta_j = \emptyset, j= 1,\ldots,m $. Let $(\frac{p_{{\bf n},1}}{q_{\bf n}},\ldots,\frac{p_{{\bf n},m}}{q_{\bf n}})$ be a vector rational function such that $\deg p_{{\bf n},j} \leq |{\bf n}|-1, j=1,\ldots,m, \deg q_{\bf n} \leq |{\bf n}|, q_{\bf n} \not\equiv 0,$ and
\begin{equation}\label{analiticidad}
\frac{q_{\bf n}(z)\widehat{\sigma}_j(z)-p_{{\bf n},j}(z)}{w_{{\bf n},j}(z)}\in \mathcal{H}(\overline{\mathbb{C}}\setminus \text{supp}(\sigma_j)), \quad j=1,\ldots,m\,.
\end{equation}
Then
\begin{equation}\label{orto-total}
\int x^k\frac{q_{\bf n}(x)}{w_{{\bf n},j}(x)}d\sigma_j(x)=0\,, \quad k=0,\,1,\ldots,n_j-1\,, \quad j = 1,\ldots, m\,.
\end{equation}
Consequently, $\deg q_{\bf n} = |{\bf n}|$ with exactly $n_j$ simple zeros in the interior of $\Delta_j$ (in connection with intervals of the real line, the interior refers to the Euclidean topology of the real line) and $\deg w_{{\bf n},j} = |{\bf n}|+n_j, j=1,\ldots,m$. Let $q_{\bf n}= q_{{\bf n},j}\widetilde{q}_{{\bf n},j},$ where $q_{{\bf n},j}$ is the monic polynomial whose zeros are those of $q_{\bf n}$ lying in the interior of $\Delta_j.$ Then
\begin{equation}\label{resto1}
\widehat{\sigma}_j(z)-\frac{p_{{\bf n},j}(z)}{q_{\bf n}(z)}=\frac{w_{{\bf n},j}(z)}{q_{{\bf n},j}^2(z)\widetilde{q}_{{\bf n},j}(z)} \int \frac{q_{{\bf n},j}^2(x)}{z-x}\frac{\widetilde{q}_{{\bf n},j}(x)}{w_{{\bf n},j}(x)}d\sigma_j(x)\,.
\end{equation}
\end{lem}
{\bf Proof.} Notice that (\ref{analiticidad}) and the assumption on the degrees of the polynomials $q_{\bf n}, p_{{\bf n},j},$ and $w_{{\bf n},j}$ imply that for $j=1,\ldots,m,$ and $k=0,\ldots,n_j-1,$
\[
z^k\frac{q_{\bf n}(z)\widehat{\sigma}_j(z)-p_{{\bf n},j}(z)}{w_{{\bf n},j}(z)}=\mathcal{O}\left(\frac{1}{z^2}\right),\quad z\to\infty\,.
\]
Let $\Gamma_j$ be a closed, smooth, Jordan curve that surrounds $\Delta_j$ such that all the intervals $\Delta_i, i\ne j, i = 0,\ldots,m$, lie in the unbounded connected component of the complement of $\Gamma_j$. By Cauchy's Theorem, Cauchy's Integral Formula and Fubini's Theorem, it follows that
\[
0=\frac{1}{2\pi i}\int_{\Gamma_j}z^k\frac{q_{\bf n}(z)\widehat{\sigma}_j(z)-p_{{\bf n},j}(z)}{w_{{\bf n},j}(z)}d z =
\]
\[
\frac{1}{2\pi i}\int_{\Gamma_j}z^k\frac{q_{\bf n}(z)\widehat{\sigma}_j(z)}{w_{{\bf n},j}(z)}d z-\frac{1}{2\pi i}\int_{\Gamma_j}z^k\frac{p_{{\bf n},j}(z)}{w_{{\bf n},j}(z)}d z =
\]
\[
\frac{1}{2\pi i}\int_{\Gamma_j}z^k\frac{q_{\bf n}(z)}{w_{{\bf n},j}(z)}\int \frac{d\sigma_j(x)}{z-x}\,d z=\int x^k\frac{q_{\bf n}(x)}{w_{{\bf n},j}(x)}d\sigma_j(x),
\]
for $k=0,\,1,\ldots,n_j-1$ and $j=1,\ldots,m$. Therefore, (\ref{orto-total}) follows.

Using standard arguments of orthogonality, from (\ref{orto-total}) we obtain that, for each $j$, $q_{\bf n}$ must have at least $n_j$ sign changes in the interior of $\Delta_j$ and, consequently, at least $n_j$ zeros of odd multiplicity there. Adding over $j=1,\ldots,m,$ this accounts for at least $|{\bf n}|$ zeros. Since $\deg q_{\bf n} \leq |{\bf n}|,$ we have that $\deg q_{\bf n} = |{\bf n}|$, that all its zeros are simple and that they are distributed in such a way that exactly $n_j$ lie in the interior of $\Delta_j$. Assume that $\deg w_{{\bf n},j} > |{\bf n}| + n_j$ for some $j$. Then
\[
z^k\frac{q_{\bf n}(z)\widehat{\sigma}_j(z)-p_{{\bf n},j}(z)}{w_{{\bf n},j}(z)}=\mathcal{O}\left(\frac{1}{z^2}\right),\quad z\to\infty\,, \qquad k = 0,\ldots,n_j\,.
\]
This implies that (\ref{orto-total}) holds for all $k = 0,\ldots,n_j$. In turn, this means that $q_{\bf n}$ has at least $n_j +1$ zeros in the interior of $\Delta_j$, contradicting what was just proved. Therefore, $\deg w_{{\bf n},j} = |{\bf n}| + n_j.$ Set $q_{\bf n}= q_{{\bf n},j}\widetilde{q}_{{\bf n},j},$ where $q_{{\bf n},j}$ is the monic polynomial whose zeros are those of $q_{\bf n}$ lying in the interior of $\Delta_j$. Notice that $\widetilde{q}_{{\bf n},j}d\sigma_j/w_{{\bf n},j}$ is a real measure with constant sign on $\Delta_j$.
For future reference, notice that with this notation the orthogonality relations (\ref{orto-total}) may be expressed as
\begin{equation} \label{relortvar}
\int x^kq_{{\bf n},j}(x)|\widetilde{q}_{{\bf n},j}(x)|\frac{d\sigma_j(x)}{|w_{{\bf n},j}(x)|}=0,\quad k=0,\,1,\ldots,n_j-1\,.
\end{equation}
Hence, for each $j=1,\ldots,m,$ $q_{{\bf n},j}$ is the monic orthogonal polynomial of degree $n_j$ with respect to the varying measure $\frac{|\widetilde{q}_{{\bf n},j}|}{|w_{{\bf n},j}|}d\sigma_j\,.$

Notice that
\[
\frac{[q_{{\bf n},j}(q_{\bf n}\widehat{\sigma}_j-p_{{\bf n},j})](z)}{w_{{\bf n},j}(z)}=\mathcal{O}\left(\frac{1}{z}\right),\quad z\to\infty\,.
\]
Choose $\Gamma_j$ as before. Using Cauchy's integral formula, Cauchy's Theorem, and Fubini's Theorem, we obtain that for each $j=1,\ldots,m$
\[
\frac{[q_{{\bf n},j}(q_{\bf n}\widehat{\sigma}_j-p_{{\bf n},j})](z)}{w_{{\bf n},j}(z)} = \frac{1}{2\pi i}\int_{\Gamma_j} \frac{[q_{{\bf n},j}(q_{\bf n} \widehat{\sigma}_j -p_{{\bf n},j})](\zeta)}{w_{{\bf n},j}(\zeta)(z - \zeta)} d \zeta =
\]
\[
\int \frac{1}{2\pi i}\int_{\Gamma_j} \frac{(q_{{\bf n},j}q_{\bf n})(\zeta)}{w_{{\bf n},j}(\zeta)(z - \zeta)(\zeta -x)} d \zeta d \sigma_j(x)= \int \frac{q_{{\bf n},j}^2(x)}{z-x}\frac{\widetilde{q}_{{\bf n},j}(x)}{w_{{\bf n},j}(x)}d\sigma_j(x)\,,
\]
which is equivalent to (\ref{resto1}). $\Box$

The vector rational function $(\frac{p_{{\bf n},1}}{q_{\bf n}},\ldots,\frac{p_{{\bf n},m}}{q_{\bf n}})$ is called a multipoint vector Pad\'e approximant of the Angelesco system $(\widehat{\sigma}_1,\ldots,\widehat{\sigma}_m)$. According to Lemma \ref{ortogonalidad} a necessary condition for their existence is that $\deg w_{{\bf n},j} \leq |{\bf n}|+n_j, j=1,\ldots,m$. Solving a homogeneous linear system of equations one sees that this condition is also sufficient. When $\deg w_{{\bf n},j} = |{\bf n}|+n_j, j=1,\ldots,m,$ uniqueness follows because then $\deg q_{\bf n} = |{\bf n}|$ as we have seen.
\begin{rem}
Applying this Lemma to linear Fourier-Pad\'e approximants, we have that $\deg(Q_{\bf n})=|{\bf n}|$. Thus, for each ${\bf n} \in \mathbb{Z}_+^m$, they are uniquely determined as claimed.
\end{rem}
Let us return to linear Fourier-Pad\'e approximants. In this case, $Q_{{\bf n}} = q_{{\bf n}}, Q_{{\bf n},j} = q_{{\bf n},j}, \widetilde{Q}_{{\bf n},j} = \widetilde{q}_{{\bf n},j}$ and $W_{{\bf n},j} = w_{{\bf n},j}$.
\begin{lem}\label{prop-wn}
For each $ j=1, \ldots,m,$ and $k=0,\ldots,|{\bf n}|+n_j-1$
\begin{equation}\label{ortoW1}
\int t^k\frac{W_{{\bf n},j}(t)}{|Q_{{\bf n},j}(t)|}\left(\int \frac{Q_{{\bf n},j}^2(x)}{|t-x|}\frac{|\widetilde{Q}_{{\bf n},j}(x)|}{|W_{{\bf n},j}(x)|}d\sigma_j(x)\right)d\sigma_0(t)=0\,.
\end{equation}
Moreover, $\deg W_{{\bf n},j} = |{\bf n}|+n_j, j=1, \ldots,m;$ that is, $Q_{\bf n}(z)\widehat{\sigma}_j(z)-P_{{\bf n},j}(z)$ has exactly $|{\bf n}|+n_j$ sign changes in the interior of $\Delta_0$.
\end{lem}
{\bf Proof.} From (\ref{resto1}) and the definition of the linear Fourier-Pad\'e approximant, (\ref{ortoW1}) follows directly. The assertion concerning the degree of $W_{{\bf n},j}$ is also contained in Lemma \ref{ortogonalidad}. $\Box$

Let $\{\mu_l\} \subset \mathcal{M}(\mathcal{K})$ be a sequence of measures, where $\mathcal{K}$ is a compact subset of the complex plane and $\mu \in \mathcal{M}(\mathcal{K})$.
We write
\[
*\lim_l \mu_l = \mu\,, \qquad \mu \in \mathcal{M}(\mathcal{K})\,,
\]
if for every continuous function $f \in \mathcal{C}( \mathcal{K})$
\[
\lim_l \int f d \mu_l = \int f d\mu\,;
\]
that is, when the sequence of measures converges to $\mu$ in the weak star topology. Given a polynomial $q_l$ of degree $l \geq 1$, we denote the associated normalized zero counting measure by
\[
\nu_{q_l} = \frac{1}{l} \sum_{q_l(x) = 0} \delta_x \,,
\]
where $\delta_x$ is the Dirac measure with mass $1$ at $x$ (in the sum the zeros are repeated according to their multiplicity).

In order to prove our main results we need Theorem 3.3.3 of \cite{stto}. We present it in the form stated in \cite{gora}, which is better suited to our purpose. In \cite{gora}, it was proved under stronger assumptions on the measure.
\begin{lem}\label{gonchar-rakhmanov}
Let $\{\phi_l\}, l \in \Gamma \subset \mathbb{Z}_+,$ be a sequence of positive continuous functions on a bounded closed interval $\Delta \subset\mathbb{R},$ $\sigma \in {\bf Reg} \cap \mathcal{M}(\Delta),$ and let $\{q_l\}, l \in \Gamma,$ be a sequence of monic polynomials such that $\deg q_l = l$ and
\[
\int q_l(t)t^k\phi_l(t)d\sigma(t)=0,\quad k=0,\ldots, l-1.
\]
Assume that
\[
\lim_{l\in \Gamma}\frac{1}{2l}\log\frac{1}{|\phi_l(x)|}= v(x),
\]
uniformly on $\Delta$. Then
\[
*\lim_{l \in \Gamma}\nu_{q_l} = \overline{\nu},
\]
and
\[
\lim_{l\in\Gamma}\left(\int |q_l|^2\phi_l d\sigma\right)^{1/{2l}}= e^{-\omega},
\]
where $\overline{\nu} \in \mathcal{M}_1(\Delta)$ is the equilibrium measure for the extremal problem
\[
V^{\overline{\nu}}(x)+v(x) \left\{ \begin{array}{l} = \omega,\quad x \in \text{supp}(\overline{\nu}) \,, \\ \geq \omega, \quad x \in \Delta \,, \end{array} \right.
\]
in the presence of the external field $v$.
\end{lem}
Using this result, we can obtain the asymptotic limit distribution of the zeros of the polynomials $Q_{{\bf n},j}$ and $W_{{\bf n},j}$.
\begin{thm} \label{asint1}
Let $({\sigma}_0;\sigma_1,\ldots,{\sigma}_m) \in \mbox{\bf Reg}$ and consider the sequence of multi-indices $\Lambda = \Lambda(p_1,\ldots,p_m)$. Then, for each $j=1,\ldots,m$
\[
*\lim_{{\bf n} \in \Lambda }{\nu}_{Q_{{\bf n},j}} = \overline{\mu}_j\,, \quad\quad *\lim_{{\bf n} \in \Lambda }\nu_{W_{{\bf n},j}} = \overline{\mu}_{m+j}\,,
\]
where $\overline{\mu} = \overline{\mu}(\mathcal{C}_1) \in \mathcal{M}_1$ is the vector equilibrium measure determined by the matrix $ \mathcal{C}_1$ on the system of intervals $F_j = \Delta_j, j= 1,\ldots,m\,,\,\, F_j = \Delta_0, j= m+1,\ldots,2m\,.$
\end{thm}
{\bf Proof.} The unit ball in the cone of positive Borel measures is weakly compact; therefore, it is sufficient to show that the sequences of measures $\{\nu_{Q_{{\bf n},j}}\}$ and $\{\nu_{W_{{\bf n},j}}\}, {\bf n} \in \Lambda,$ have only one accumulation point, which coincides with the corresponding component of the vector measure $\overline{\mu}(\mathcal{C}_1)$. Let $\Lambda^{\prime} \subset \Lambda$ be a subsequence of multi-indices such that for each $j=1,\ldots,m$
\[
*\lim_{{\bf n} \in \Lambda^{\prime}}\nu_{Q_{{\bf n},j}} = \nu_j\,, \quad\quad *\lim_{{\bf n} \in \Lambda^{\prime}}\nu_{W_{{\bf n},j}} = \nu_{m+j}\,.
\]
(Notice that $\nu_j \in \mathcal{M}_1(\Delta_j), j=1,\ldots,m,$ and $\nu_j \in \mathcal{M}_1(\Delta_0), j=m+1,\ldots,2m.$) Therefore,
\begin{equation} \label{pol-pot1}
\lim_{{\bf n} \in \Lambda^{\prime}}|Q_{{\bf n},j}(z)|^{\frac{1}{n_j}}=\exp(-V^{\nu_j}(z)),
\end{equation}
uniformly on compact subsets of $\mathbb{C}\setminus \Delta_j $, and
\begin{equation} \label{pol-pot2}
\lim_{{\bf n} \in \Lambda^{\prime}} |W_{{\bf n},j}(z)|^{\frac{1}{|{\bf n}|+n_j}}=\exp(-V^{\nu_{m+j}}(z)),
\end{equation}
uniformly on compact subsets of $\mathbb{C}\setminus \Delta_0$.

For each fixed $j=1,\ldots,m,$ the polynomials $Q_{{\bf n},j}$ satisfy the orthogonality relations (\ref{relortvar}). Using (\ref{pol-pot1}) and (\ref{pol-pot2}) it follows that
\[
\lim_{{\bf n}\in \Lambda^{\prime}}\frac{1}{2n_j}\log\frac{|W_{{\bf n},j}(x)|}{|\widetilde{Q}_{{\bf n},j}(x)|}= -\frac{1+p_j}{2p_j}V^{\nu_{m+j}}(x)+ \sum_{k \neq j} \frac{p_k}{2p_j}V^{\nu_k}(x),
\]
uniformly on $\Delta_j$. By Lemma \ref{gonchar-rakhmanov}, $\nu_j$ is the unique equilibrium measure for the extremal problem
\begin{equation}\label{prob-extre-11}
V^{\nu_j}(x)+ \sum_{k \neq j} \frac{p_k}{2p_j}V^{\nu_k}(x) -\frac{1+p_j}{2p_j}V^{\nu_{m+j}}(x) \geq \theta_{j},\quad x\in \Delta_j \,,
\end{equation}
with equality for all $x \in \text{supp}(\nu_j)$. Additionally,
\begin{equation}\label{asin-1}
\lim_{{\bf n}\in \Lambda^{\prime}} \left(\int |Q_{{\bf n},j}(x)|^2\frac{|\widetilde{Q}_{{\bf n},j}(x)|d\sigma_j(x)}{|W_{{\bf n},j}(x)|}\right)^{\frac{1}{2n_j}}=e^{-\theta_j}.
\end{equation}
On the other hand, for each fixed $j=1,\ldots,m,$ the polynomials $W_{{\bf n},j}$ satisfy the orthogonality relations (\ref{ortoW1}) and we can apply once more Lemma \ref{gonchar-rakhmanov}. Notice that for all $t \in \Delta_0$
\begin{equation}\label{desrest1}
\int \frac{|Q_{{\bf n},j}^2(x)|}{|t-x|}\frac{|\widetilde{Q}_{{\bf n},j}(x)|d\sigma_j(x)}{|W_{{\bf n},j}(x)|} \le\frac{1}{\delta_j} \int |Q_{{\bf n},j}(x)|^2\frac{|\widetilde{Q}_{{\bf n},j}(x)|d\sigma_j(x)}{|W_{{\bf n},j}(x)|},
\end{equation}
where $\delta_j= \inf\{|t-x|: t \in \Delta_0,\, x \in \Delta_j\}$ and
\begin{equation}\label{desrest2}
\int \frac{|Q_{{\bf n},j}^2(x)|}{|t-x|}\frac{|\widetilde{Q}_{{\bf n},j}(x)|d\sigma_j(x)}{|W_{{\bf n},j}(x)|}\ge \frac{1}{\delta_j^*} \int |Q_{{\bf n},j}(x)|^2\frac{|\widetilde{Q}_{{\bf n},j}(x)|d\sigma_j(x)}{|W_{{\bf n},j}(x)|},
\end{equation}
with $\delta_j^*=\max\{|t-x|:t \in \Delta_0,\,x\in \Delta_j\}$. From (\ref{pol-pot1}), (\ref{pol-pot2}), (\ref{asin-1}), (\ref{desrest1}), and (\ref{desrest2}), we obtain
\begin{multline*}
\lim_{{\bf n} \in \Lambda^{\prime}}\frac{1}{2(|{\bf n}|+n_j)}\log\frac{|Q_{{\bf n},j}(x)|} {\int\frac{|Q_{{\bf n},j}(t)|^2}{|x-t|}\frac{|\widetilde{Q}_{{\bf n},j}(t)|d\sigma_j(t)}{|W_{{\bf n},j}(t)|}} \\ =-\frac{p_j}{2(1+p_j)}V^{\nu_j}(x)+\frac{p_j}{1+p_j}\theta_j,
\end{multline*}
uniformly on $\Delta_0$.
Using Lemma \ref{gonchar-rakhmanov}, $\nu_{m+j}$ is the unique extremal solution for the equilibrium problem
\begin{equation}\label{prob-extre-31}
V^{\nu_{m+j}}(x)-\frac{p_j}{2(1+p_j)}V^{\nu_j}(x)+\frac{p_j}{1+p_j}\theta_j \geq \theta_{m+j},\quad x\in \Delta_0\,,
\end{equation}
with equality for all $x \in \text{supp}{(\nu_{m+j})}.$

Rewriting (\ref{prob-extre-11}) and (\ref{prob-extre-31}) conveniently, we see that the vector measure $(\nu_1,\ldots,\nu_{2m}) \in \mathcal{M}_1$ is the unique solution for the vector equilibrium problem determined by the system of extremal problems
\begin{equation}\label{prob-extre-11*}
2p_j^2V^{\nu_j}(x)+ \sum_{k \neq j} p_jp_k V^{\nu_k}(x) -p_j(1+p_j) V^{\nu_{m+j}}(x) \geq \omega_j,\quad x\in \Delta_j \,,
\end{equation}
($2p_j^2\theta_j = \omega_j$) with equality for all $x \in \text{supp}(\nu_j)$, and
\begin{equation}\label{prob-extre-31*}
2(1 + p_j)^2 V^{\nu_{m+j}}(x)- p_j(1+p_j) V^{\nu_j}(x) \geq \omega_{m+j}\,,\quad x\in \Delta_0\,,
\end{equation}
with equality for all $x \in \text{supp}{(\nu_{m+j})}.$ That is, it is the equilibrium measure $\overline{\mu} \in \mathcal{M}_1$ for the vector potential problem determined by $\mathcal{C}_1$ on the system of intervals $F_j = \Delta_j, j = 1,\ldots,m,\, F_j = \Delta_0, j=m+1,\ldots,2m.$ The condition $c_{j,k} \geq 0$ if $F_j \cap F_k \neq \emptyset$ is fulfilled. According to Lemma \ref{niksor}, this equilibrium vector measure is uniquely determined if $\mathcal{C}_1$ is positive definite. Let us prove this.

For $j \in \{1,\ldots,m\}$ the principal minor $\mathcal{C}_1^{(j)}$ of order $j$ of $ \mathcal{C}_1$ is
\[
\mbox{det}(\mathcal{C}_1^{(j)}) = (p_1\cdots p_j)^2 \left| \begin{array}{cccc} 2 & 1 & \cdots & 1 \\ 1 & 2 & \cdots & 1 \\ \vdots & \vdots & \ddots & \vdots \\ 1 & 1 & \cdots & 2 \end{array} \right|_{j\times j} = (p_1\cdots p_j)^2(j+1) > 0 \,.
\]
For $j \in \{m+1,\ldots,2m\}$ the principal minor $\mathcal{C}_1^{(j)}$ of order $j$ of $ \mathcal{C}_1$ can be calculated as follows. For each $k=1,\ldots,m,$ factor out $p_k$ from the $k$th row and $k$th column of $\mathcal{C}_1^{(j)}.$ From the row and column $m+k, k=1,\ldots, j-m,$ factor out $1 + p_k$. In the resulting determinant, for each $k=1,\ldots,j-m,$ add the $k$th row to the $(m+k)$th row and then to the resulting determinant add the $k$th column to the $(m+k)$th column. We obtain
\[
\mbox{det}(\mathcal{C}_1^{(j)}) = [p_1\cdots p_m(1+p_1)\cdots(1 + p_{j-m})]^2 \left| \begin{array}{cccc} 2 & 1 & \cdots & 1 \\ 1 & 2 & \cdots & 1 \\ \vdots & \vdots & \ddots & \vdots \\ 1 & 1 & \cdots & 2 \end{array} \right|_{m\times m} =
\]
\[
[p_1\cdots p_m(1+p_1)\cdots(1 + p_{j-m})]^2(m+1) > 0 \,.
\]
With this we conclude the proof.
$\Box$

{\bf Proof of Theorem \ref{teoprin}.} From (\ref{resto1}), (\ref{desrest1}), and (\ref{desrest2}), the asymptotic behavior of the function $\widehat{\sigma}_j -\frac{P_{{\bf n},j}}{Q_{\bf n}}$ depends on the behavior of $W_{{\bf n},j}$, $Q_{{\bf n},j}$, $\widetilde{Q}_{{\bf n},j}$, and $\gamma_{{\bf n},j}$, where
\begin{align*}
\frac{1}{\gamma_{{\bf n},j}^2}&=\min_Q \left\{\int |Q(x)|^2\frac{|\widetilde{Q}_{{\bf n},j}(x)|d\sigma_j(x)}{|W_{{\bf n},j}(x)|} :\,Q(x)=x^{n_j}+\cdots\right\} \\ &=\int |Q_{{\bf n},j}(x)|^2\frac{|\widetilde{Q}_{{\bf n},j}(x)|d\sigma_j(x)}{|W_{{\bf n},j}(x)|}.
\end{align*}
From Theorem \ref{asint1}, for each $j=1,\ldots,m,$ we have
\begin{equation}\label{vel-w1}
\lim_{{\bf n} \in \Lambda} |W_{{\bf n},j}(x)|^{1/{|{\bf n}|}}=\exp\{-(1+p_j)V^{\overline{\mu}_{m+j}}(x)\},
\end{equation}
uniformly on compact subsets of $\mathbb{C}\setminus\Delta_0$, and
\begin{equation}\label{vel-q1}
\lim_{{\bf n} \in \Lambda} |Q_{{\bf n},j}(x)|^{1/{|{\bf n}|}}=\exp\{-p_jV^{\overline{\mu}_j}(x)\},
\end{equation}
uniformly on compact subsets of $\mathbb{C}\setminus\Delta_j$, where $\overline{\mu} = \overline{\mu}(\mathcal{C}_1)$. Using (\ref{asin-1}) (see also the parenthesis after (\ref{prob-extre-11*})), it follows that
\begin{equation}\label{vel-res}
\lim_{{\bf n}\in\Lambda} \left(\frac{1}{\gamma_{{\bf n},j}^2}\right)^{1/{|{\bf n}|}}=\exp \{ -2p_j\theta_j\} = \exp\{-\omega_j/p_j\}\,.
\end{equation}
Combining (\ref{resto1}), (\ref{vel-w1}), (\ref{vel-q1}), and (\ref{vel-res}), we conclude that (\ref{conver1}) holds true uniformly on compact subsets of the indicated region. $\Box$

\section{Proof of Theorem \ref{teoprin2}}\label{demostracion}

We begin by proving the existence of non-linear Fourier-Pad\'e approximants.
\begin{lem} \label{existe}
Given $(\sigma_0;\sigma_1,\ldots,\sigma_m),$ for each ${\bf n} \in \mathbb{Z}_+^m$ there exists an {\bf n}-th non-linear Fourier-Pad\'e approximant of $(\widehat{\sigma}_1,\ldots,\widehat{\sigma}_m)$ with respect to $\sigma_0$.
\end{lem}
{\bf Proof.} In the proof we make use of multipoint Hermite-Pad\'e approximation. Fix ${\bf n} \in \mathbb{Z}_+^m.$ For each $j \in \{1,\ldots,m\}$, choose an arbitrary set of $|{\bf n}|+n_j$ points contained in $\Delta_0$
\[
X_{{\bf n},j}=(x_{{\bf n},j,1},\ldots,x_{{\bf n},j,|{\bf n}|+n_j})\in \Delta_{{\bf n},j}\,,
\]
where
\[
\Delta_{{\bf n},j} = \{(x_{1},\ldots,x_{|{\bf n}|+n_j}) \in \Delta_0^{|{\bf n}| + n_j}: x_1 \leq \cdots \leq x_{|{\bf n}|+n_j} \}\,.
\]
Let
\[
w_{{\bf n},j}(x)=(x-x_{{\bf n},j,1})\cdots(x-x_{{\bf n},j,|{\bf n}|+n_j})\,,
\]
and consider the simultaneous multipoint Pad\'e approximant which interpolates the functions $\widehat{\sigma}_j, j=1,\ldots,m,$ at the zeros of $w_{{\bf n},j}$, respectively. That is, $(p_{{\bf n},1}/q_{\bf n},\ldots, p_{{\bf n},m}/q_{\bf n})$ is a vector rational function such that $\deg (p_{{\bf n},j})\le |{\bf n}|-1$, $j=1,\ldots,m$, $\deg(q_{\bf n})\le |{\bf n}|$, $q_{\bf n}\not\equiv 0,$ and
\begin{equation}\label{def-mult}
\frac{q_{\bf n}\widehat{\sigma}_j-p_{{\bf n},j}}{w_{{\bf n},j}}\in \mathcal{H}(\mathbb{C}\setminus\text{supp}(\sigma_j)).
\end{equation}
From Lemma \ref{ortogonalidad} we have (\ref{orto-total}) and (\ref{resto1}).
Once we have determined $q_{\bf n}\,,$ for each $j=1,\ldots,m,$ we define the monic polynomial $\Omega_{{\bf n},j}\,, \deg(\Omega_{{\bf n},j}) = |{\bf n}|+n_j,$ by the orthogonality relations
\begin{equation}\label{orto-aux1}
\int y^k\Omega_{{\bf n},j}(y)\left(\frac{1}{q_{{\bf n},j}^2(y)\widetilde{q}_{{\bf n},j}(y)} \int \frac{q_{{\bf n},j}^2(x)}{y-x}\frac{\widetilde{q}_{{\bf n},j}(x)d\sigma_j(x)}{w_{{\bf n},j}(x)}\right)d\sigma_0(y)=0,
\end{equation}
$k=0,\ldots, |{\bf n}|+n_j-1$. For each $j=1,\ldots,m,$ these relations determine a unique $\Omega_{{\bf n},j}$ since the (varying) measures involved have constant sign on $\Delta_0\,.$ The polynomial $\Omega_{{\bf n},j}$ has exactly $|{\bf n}|+n_j$ simple zeros in the interior of $\Delta_0\,.$ Set
\[
Y_{{\bf n},j}=(y_{{\bf n},j,1},\ldots,y_{{\bf n},j,|{\bf n}|+n_j})\in \Delta_{{\bf n},j}\,,
\]
where $y_{{\bf n},j,1}<\cdots < y_{{\bf n},j,|{\bf n}|+n_j}$ are the zeros of $\Omega_{{\bf n},j}$. Since for each $j=1,\ldots,m,$ the distance between $\Delta_j$ and $\Delta_0$ is greater than zero, the correspondence
\[
(X_{{\bf n},1},\ldots,X_{{\bf n},m}) \longrightarrow (Y_{{\bf n},1},\ldots,Y_{{\bf n},m}),
\]
defines a continuous function from $\Delta_{{\bf n},1} \times \cdots \times \Delta_{{\bf n},m}$ into itself with the Euclidean norm. The continuity of this function is an easy consequence of the fact that $\Delta_0 \cap \Delta_j = \emptyset, j=1,\ldots,m.$ By Brouwer's fixed point Theorem (see page 364 of \cite{RS}) this function has at least one fixed point. Choose a fixed point. Then, $w_{{\bf n},j}=\Omega_{{\bf n},j}$, $j=1,\ldots,m\,.$ Consequently (\ref{orto-aux1}) can be rewritten as
\begin{equation}\label{orto-aux2}
\int y^k w_{{\bf n},j}(y)\left(\frac{1}{q_{{\bf n},j}^2(y)\widetilde{q}_{{\bf n},j}(y)} \int\frac{q_{{\bf n},j}^2(x)}{y-x}\frac{\widetilde{q}_{{\bf n},j}(x)d\sigma_j(x)}{w_{{\bf n},j}(x)}\right)d\sigma_0(y)=0\,,
\end{equation}
$k=0,\ldots, |{\bf n}|+n_j-1$, and taking into consideration (\ref{resto1}) we obtain that for each $j=1,\ldots,m,$
\[
\int \left(\widehat{\sigma}_j(x)-\frac{p_{{\bf n},j}(x)}{q_{\bf n}(x)}\right)x^kd\sigma_0(x)=0\,,\qquad k=0,\ldots,|{\bf n}|+n_j-1 \,.
\]
From the definition, it follows that $(p_{{\bf n},1}/q_{\bf n},\ldots,p_{{\bf n},m}/q_{\bf n})$ is an ${\bf n}$th non-linear Fourier-Pad\'e approximant for the Angelesco system, taking $S_{{\bf n},j}=p_{{\bf n},j}$, $j=1,\ldots,m$, and $T_{\bf n}=q_{\bf n}$. $\Box$

Let $\left(\frac{S_{{\bf n},1}}{T_{\bf n}},\ldots,\frac{S_{{\bf n},m}}{T_{\bf n}}\right)$ be any non-linear Fourier-Pad\'e approximant for the Angelesco system $(\widehat{\sigma}_1,\ldots,\widehat{\sigma}_m)$. From ii') it follows that $ \widehat{\sigma}_j(z)-\frac{S_{{\bf n},j}(z)}{T_{\bf n}(z)}$ has at least $|{\bf n}|+n_j$ sign changes on $\Delta_0$. Let $W_{{\bf n},j}$ be the monic polynomial whose zeros are the points where this function changes sign on $\Delta_0$. Obviously, $\deg W_{{\bf n},j} \geq |{\bf n}|+n_j$ and
\begin{equation}\label{analiticidad1}
\frac{T_{\bf n}(z)\widehat{\sigma}_j(z)-S_{{\bf n},j}(z)}{W_{{\bf n},j}(z)}\in \mathcal{H}(\overline{\mathbb{C}}\setminus \text{supp}(\sigma_j)), \quad j=1,\ldots,m\,,
\end{equation}
is analytic on the indicated region.
(These polynomials $W_{{\bf n},j}$ do not coincide with those of the linear case.) Using Lemma \ref{ortogonalidad} it follows that
\begin{equation}\label{x1}
\int x^k\frac{|T_{\bf n}(x)|}{|W_{{\bf n},j}(x)|}d\sigma_j(x)=0\,, \quad k=0,\,1,\ldots,n_j-1\,, \quad j = 1,\ldots, m\,,
\end{equation}
and
\begin{equation}\label{x2}
\widehat{\sigma}_j(z)-\frac{S_{{\bf n},j}(z)}{T_{\bf n}(z)}=\frac{W_{{\bf n},j}(z)}{T_{{\bf n},j}^2(z)\widetilde{T}_{{\bf n},j}(z)} \int \frac{T_{{\bf n},j}^2(x)}{z-x}\frac{\widetilde{T}_{{\bf n},j}(x)}{W_{{\bf n},j}(x)}d\sigma_j(x)\,,
\end{equation}
where $T_{{\bf n},j}$ is the monic polynomial whose zeros are the $n_j$ zeros of $T_{\bf n}$ lying in the interior of $\Delta_j$. Combining (\ref{x2}) with ii') we obtain
\begin{equation}\label{x3}
\int y^k W_{{\bf n},j}(y)\left(\frac{1}{T_{{\bf n},j}^2(y)|\widetilde{T}_{{\bf n},j}(y)|} \int\frac{T_{{\bf n},j}^2(x)}{|y-x|}\frac{|\widetilde{T}_{{\bf n},j}(x)|d\sigma_j(x)}{|W_{{\bf n},j}(x)|}\right)d\sigma_0(y)=0\,.
\end{equation}
The proof of Theorem \ref{teoprin2} is similar to that of Theorem \ref{teoprin}. First, we study the asymptotic zero distribution of the polynomials $T_{{\bf n},j}$ and $W_{{\bf n},j}$. Then, we use this result to obtain the asymptotic behavior of the remainder in the approximation.

\begin{thm} \label{asint2}
Let $({\sigma}_0;\sigma_1,\ldots,{\sigma}_m) \in \mbox{\bf Reg}$ and consider the sequence of multi-indices $\Lambda = \Lambda(p_1,\ldots,p_m)$. Then, there exists a vector measure $\overline{\mu} = (\overline{\mu}_1,\ldots,\overline{\mu}_{2m}) \in \mathcal{M}_1$ such that for each $j=1,\ldots,m$
\[
*\lim_{{\bf n} \in \Lambda }{\nu}_{T_{{\bf n},j}} = \overline{\mu}_j\,, \quad\quad *\lim_{{\bf n} \in \Lambda }\nu_{W_{{\bf n},j}} = \overline{\mu}_{m+j}\,.
\]
Moreover, $\overline{\mu} = \overline{\mu}(\mathcal{C}_2)$ is the vector equilibrium measure determined by the matrix $\mathcal{C}_2$ on the system of intervals $F_j = \Delta_j, j= 1,\ldots,m\,,\,\, F_j = \Delta_0, j= m+1,\ldots,2m\,.$
\end{thm}

{\bf Proof.} Let us show that the sequences of measures $\{\nu_{T_{{\bf n},j}}\}$ and $\{\nu_{W_{{\bf n},j}}\}, {\bf n} \in \Lambda,$ have only one accumulation point. Let $\Lambda^{\prime} \subset \Lambda$ be a subsequence of indices such that for each $j=1,\ldots,m$
\[
*\lim_{{\bf n} \in \Lambda^{\prime}}\nu_{T_{{\bf n},j}} = \nu_j\,, \quad\quad *\lim_{{\bf n} \in \Lambda^{\prime}}\nu_{W_{{\bf n},j}} = \nu_{m+j}\,.
\]
(Notice that $\nu_j \in \mathcal{M}_1(\Delta_j), j=1,\ldots,m,$ and $\nu_j \in \mathcal{M}_1(\Delta_0), j=m+1,\ldots,2m.$) Therefore,
\begin{equation} \label{pol-pot-no1}
\lim_{{\bf n} \in \Lambda^{\prime}}|T_{{\bf n},j}(z)|^{\frac{1}{n_j}}=\exp(-V^{\nu_j}(z))\,,
\end{equation}
uniformly on compact subsets of $\mathbb{C}\setminus \Delta_j,$ and
\begin{equation} \label{pol-pot-no2}
\lim_{{\bf n} \in \Lambda^{\prime}}|W_{{\bf n},j}(z)|^{\frac{1}{|{\bf n}|+n_j}}=\exp(-V^{\nu_{m+j}}(z)),
\end{equation}
uniformly on compact subsets of $\mathbb{C}\setminus \Delta_0$. As we have seen, $T_{{\bf n},j}$ is orthogonal with respect to the varying measure $\frac{|\widetilde{T}_{{\bf n},j}|}{|W_{{\bf n},j}|}d\sigma_j$.
Using (\ref{pol-pot-no1}) and (\ref{pol-pot-no2}), we obtain
\[
\lim_{{\bf n} \in\Lambda^{\prime}}\frac{1}{2n_j}\log\frac{|W_{{\bf n},j}(x)|}{|\widetilde{T}_{{\bf n},j}(x)|}= -\frac{1+p_j}{2p_j}V^{\nu_{m+j}}(x) + \sum_{k \neq j} \frac{p_k}{2p_j}V^{\nu_k}(x)\,,
\]
uniformly on $\Delta_j$. By (\ref{x1}) and Lemma \ref{gonchar-rakhmanov}, $\nu_j$ is the unique equilibrium measure for the extremal problem
\begin{equation}\label{prob-11-no}
V^{\nu_j}(x)+\sum_{k \neq j} \frac{p_k}{2p_j}V^{\nu_k}(x)-\frac{1+p_j}{2p_j}V^{\nu_{m+j}}(x) \geq \eta_j\,,\qquad x\in \Delta_j \,,
\end{equation}
with equality for all $x \in \text{supp}(\nu_j),$ and
\begin{equation}\label{asin-1-no}
\lim_{{\bf n} \in\Lambda^{\prime}} \left(\int |T_{{\bf n},j}^2(x)| \frac{|\widetilde{T}_{{\bf n},j}(x)|d\sigma_j(x)}{|W_{{\bf n},j}(x)|}\right)^{\frac{1}{2n_j}}=e^{-\eta_j}.
\end{equation}
These relations are completely similar to those obtained for the linear case (see (\ref{prob-extre-11}) and (\ref{asin-1})).

On the other hand, $W_{{\bf n},j}$ satisfies the orthogonality relations (\ref{x3}). We can apply Lemma \ref{gonchar-rakhmanov} once more, obtaining that, for each $j=1,\ldots,m,$ $\nu_{m+j}$ is the unique equilibrium measure for the extremal problem
\begin{equation}\label{prob13-no}
V^{\nu_{m+j}}(x)- \frac{p_j}{1+p_j}V^{\nu_j}(x)- \sum_{k\neq j} \frac{p_k}{2(1+p_j)}V^{\nu_k}(x) \geq \eta_{m+j}\,,\qquad x\in \Delta_{0} \,,
\end{equation}
with equality for all $x \in \text{supp}(\nu_{m+j}).$ These relations differ from those obtained for the linear case (see (\ref{prob-extre-31})). If we look at the matrix corresponding to this system of equations, we see that it is not symmetric. Let us rewrite the system as follows. Multiplying equations (\ref{prob-11-no}) by $2p_j^2$, we obtain for each $j=1,\ldots,m,$
\begin{equation} \label{c}
2p_j^2V^{\nu_j}(x)+\sum_{k \neq j} p_jp_k V^{\nu_k}(x)-p_j(1+p_j) V^{\nu_{m+j}}(x) \geq 2\eta_jp_j^2 = w_j \,,\qquad x\in \Delta_j \,,
\end{equation}
with equality for all $x \in \text{supp} (\nu_j)$. With equations (\ref{prob13-no}) we have to work harder. First, multiply them by $2(1+p_j)$, thus obtaining for each $j=1,\ldots,m,$
\begin{equation} \label{d}
- 2p_j V^{\nu_j}(x)- \sum_{k\neq j} p_k V^{\nu_k}(x) + 2(1+p_j) V^{\nu_{m+j}}(x)\geq 2\eta_{m+j}(1+p_j) = \eta_{m+j}^{\prime}\,,\quad x\in \Delta_{0} \,,
\end{equation}
with equality for all $x \in \text{supp} (\nu_{m+j})$. Let us show that in this second group of equations we have equality for all $x\in \Delta_{0}$. In fact, notice that $2(1+p_j)\nu_{m+j}$ is a measure on $\Delta_0$ of total mass equal to $2(1+p_j)$. On the other hand,
\[
2p_j \nu_j + \sum_{k \neq j} p_k \nu_k
\]
is a measure of total mass $p_j + \sum_{k=1}^m p_k = 1 + p_j < 2(1+p_j)$ supported on the set $\cup_{k=1}^m \Delta_k$, which is disjoint from $\Delta_0 .$ Therefore,
\[
2(1+p_j)\nu_{m+j} = ( 2p_j \nu_j + \sum_{k \neq j} p_k \nu_k )^{\prime}+ (1+p_j) \omega_{\Delta_0}\,,
\]
where $(\cdot)^{\prime}$ denotes the balayage onto $\Delta_0$ of the indicated measure and $\omega_{\Delta_0}$ is the equilibrium measure on $\Delta_0$ (without external field). Since these two measures are supported on all of $\Delta_0$, so is their sum.
Thus, $\text{supp} (\nu_{m+j}) = \Delta_0\,.$

The idea now is to perform row transformations on the system of equations (\ref{d}) in order to bring it to a convenient form. The matrix of this system of equations is
\[\left( \begin{array}{cccccccc}
-2p_1 & -p_2 & \cdots & -p_m & 2(1+p_1) & 0 & \cdots & 0 \\
-p_1 & -2p_2 & \cdots & -p_m & 0 & 2(1+p_2) & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots& \ddots & \vdots \\
-p_1 & -p_2 & \cdots & -2p_m & 0 & 0 & \cdots & 2(1+p_m)
\end{array} \right)\,.
\]
Since each column has a common factor, we will carry out the operations without the common factors and afterwards put them back. Thus, in columns $k=1,\ldots,m$ we factor out $-p_k$ and in columns $k=m+1,\ldots,2m$ we factor out $2(1+p_{k-m})$, respectively. The resulting matrix is
\[\left( \begin{array}{cccccccc}
2 & 1 & \cdots & 1 & 1 & 0 & \cdots & 0 \\
1 & 2 & \cdots & 1 & 0 & 1 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots& \ddots & \vdots \\
1 & 1 & \cdots & 2 & 0 & 0 & \cdots & 1
\end{array} \right) = \left( \begin{array}{cc} \mathcal{B} & \mathcal{I} \end{array} \right)\,,
\]
where $\mathcal{I}$ denotes the identity matrix of order $m$. We know that the submatrix $\mathcal{B}$ is positive definite and through row operations can be reduced to the identity. This is the same as multiplying $\left( \begin{array}{cc} \mathcal{B} & \mathcal{I} \end{array} \right)$ on the left by $\mathcal{B}^{-1}$. Doing this we obtain the block matrix
\[
\left( \begin{array}{cc} \mathcal{I} & \mathcal{B}^{-1} \end{array} \right)\,.
\]
It is easy to check that
\[
\mathcal{B}^{-1} = \frac{1}{m+1}\left( \begin{array}{cccc}
m & -1 & \ldots & -1 \\
-1 & m & \ldots & -1 \\
\vdots & \vdots & \ddots & \vdots \\
-1 & -1 & \cdots & m
\end{array} \right)\,.
\]
Multiplying back the factors we extracted, we obtain the matrix
\[\left( \begin{array}{cccccccc}
-p_1 & 0 & \cdots & 0 & \frac{2m(1+p_1)}{m+1} & \frac{-2(1+p_2)}{m+1} & \cdots & \frac{-2(1+p_m)}{m+1} \\
0 & -p_2 & \cdots & 0 & \frac{-2(1+p_1)}{m+1} & \frac{2m(1+p_2)}{m+1} & \cdots & \frac{-2(1+p_m)}{m+1} \\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots& \ddots & \vdots \\
0 & 0 & \cdots & -p_m & \frac{-2(1+p_1)}{m+1} & \frac{-2(1+p_2)}{m+1} & \cdots & \frac{2m(1+p_m)}{m+1}
\end{array} \right)\,.
\]
Therefore, the system of equations (\ref{d}) is equivalent to
\begin{equation} \label{e}
-p_j V^{\nu_j}(x) + \frac{2m(1+p_j)}{m+1} V^{\nu_{m+j}}(x) - \sum_{k \neq j} \frac{2(1+p_k)}{m+1} V^{\nu_{m+k}}(x) = \eta_{m+j}^{''}\,, \quad x \in \Delta_0\,,
\end{equation}
where
\[
(\eta_{m+1}^{''},\ldots,\eta_{2m}^{''})^t= \mathcal{B}^{-1}(\eta_{m+1}^{\prime},\ldots,\eta_{2m}^{\prime})^t.
\]
Finally, multiply the $j$-th equation in (\ref{e}) by $(1+p_j)$ to obtain
\begin{equation} \label{f}
-p_j(1+p_j) V^{\nu_j}(x) + \frac{2m(1+p_j)^2}{m+1} V^{\nu_{m+j}}(x) - \sum_{k \neq j} \frac{2(1+p_k)(1+p_j)}{m+1} V^{\nu_{m+k}}(x) = \eta_{m+j}^{''}(1+p_j) = w_{m+j}\,, \quad x \in \Delta_0\,.
\end{equation}
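The formula for $\mathcal{B}^{-1}$ used above can be verified directly. Writing $\mathcal{B}=\mathcal{I}+\mathcal{J}$, where $\mathcal{J}$ denotes the $m\times m$ matrix all of whose entries equal $1$, and using $\mathcal{J}^2=m\,\mathcal{J}$, we have
\[
(\mathcal{I}+\mathcal{J})\left(\mathcal{I}-\tfrac{1}{m+1}\,\mathcal{J}\right)
= \mathcal{I}+\mathcal{J}-\tfrac{1}{m+1}\,\mathcal{J}-\tfrac{m}{m+1}\,\mathcal{J}
= \mathcal{I}\,,
\]
so that $\mathcal{B}^{-1}=\mathcal{I}-\frac{1}{m+1}\,\mathcal{J}=\frac{1}{m+1}\bigl((m+1)\mathcal{I}-\mathcal{J}\bigr)$, which is precisely the matrix displayed above.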
The system of equilibrium problems defined by (\ref{c}) and (\ref{f}) has the interaction matrix
\[
\mathcal{C}_2 = \left( \begin{array}{cc} \mathcal{C}_{1,1} & \mathcal{C}_{1,2} \\ \mathcal{C}_{2,1} & \mathcal{C}_{2,2}^2 \end{array} \right)
\]
defined in Section 1. Thus, the corresponding equilibrium problem has at least one solution, given by $(\nu_1,\ldots,\nu_{2m})$. According to Lemma 1, $(\nu_1,\ldots,\nu_{2m})$ is uniquely determined if we prove that $\mathcal{C}_2$ is positive definite.

Let us show that $\mathcal{C}_2$ is positive definite. The first $m$ principal minors of $\mathcal{C}_1$ and $\mathcal{C}_2$ coincide and we already know that they are positive. Let $\mathcal{C}_2^{(j)}$ denote the principal minor of $\mathcal{C}_2$ of order $j$, where $j \in \{ m+1,\ldots,2m\}$. For each $k=1,\ldots,m,$ factor out $p_k$ from the $k$-th row and $k$-th column of $\mathcal{C}_2^{(j)}.$ From the row and column $m+k, k=1,\ldots, j-m,$ factor out $1 + p_k$. In the resulting determinant, for each $k=1,\ldots,j-m,$ add the $k$-th row to the $(m+k)$-th row and then, in the resulting determinant, add the $k$-th column to the $(m+k)$-th column. We obtain
\[
\mbox{det}(\mathcal{C}_2^{(j)}) = [p_1\cdots p_m(1+p_1)\cdots(1 + p_{j-m})]^2 \times
\]
\[
\left| \begin{array}{cccccccc}
2 & 1 & \cdots & 1 & 1 & 1 & \cdots & 1 \\
1 & 2 & \cdots & 1 & 1 & 1 & \cdots & 1 \\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\
1 & 1 & \cdots & 2 & 1 & 1 & \cdots & 1 \\
1 & 1 & \cdots & 1 & \frac{2m}{m+1} & \frac{m-1}{m+1} & \cdots & \frac{m-1}{m+1}\\
1 & 1 & \cdots & 1 & \frac{m-1}{m+1} & \frac{2m}{m+1} & \cdots & \frac{m-1}{m+1}\\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\
1 & 1 & \cdots & 1 & \frac{m-1}{m+1} & \frac{m-1}{m+1} & \cdots & \frac{2m}{m+1}
\end{array} \right|\,.
\]
In the determinant above, subtract row $m+1$ from each of the rows below it, and in the resulting determinant add to column $m+1$ all the columns after it; we get
\[
\left| \begin{array}{cccccccc}
2 & 1 & \cdots & 1 & j-m & 1 & \cdots & 1 \\
1 & 2 & \cdots & 1 & j-m & 1 & \cdots & 1 \\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\
1 & 1 & \cdots & 2 & j-m & 1 & \cdots & 1 \\
1 & 1 & \cdots & 1 & \frac{(m+1)+(j-m)(m-1)}{m+1} & \frac{m-1}{m+1} & \cdots & \frac{m-1}{m+1}\\
0 & 0 & \cdots & 0 & 0 & 1 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & 0 & 0 & 0 & \cdots & 1
\end{array} \right| =
\]
\[
\left| \begin{array}{ccccc}
2 & 1 & \cdots & 1 & j-m \\
1 & 2 & \cdots & 1 & j-m \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
1 & 1 & \cdots & 2 & j-m \\
1 & 1 & \cdots & 1 & j-m
\end{array} \right| + \left| \begin{array}{ccccc}
2 & 1 & \cdots & 1 & 0 \\
1 & 2 & \cdots & 1 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
1 & 1 & \cdots & 2 & 0 \\
1 & 1 & \cdots & 1 & \frac{(m+1)-2(j-m)}{m+1}
\end{array} \right| =
\]
\[
(j-m) + (m+1) - 2(j-m) = 2m+1 -j > 0\,.
\]
With this we conclude the proof. $\Box$
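The determinant identity obtained in the last step can also be checked numerically for small values of the parameters. The following short script is purely illustrative (the helper \texttt{normalized\_minor} is our own naming); it builds the normalized $j\times j$ determinant displayed above and verifies that it equals $2m+1-j$:
\begin{verbatim}
# Sanity check: the normalized j x j determinant above equals 2m + 1 - j.
import numpy as np

def normalized_minor(m, j):
    # First m rows/columns: 2 on the diagonal, 1 elsewhere.
    # Remaining j - m rows/columns: 2m/(m+1) on the diagonal,
    # (m-1)/(m+1) among themselves, and 1 against the first block.
    A = np.ones((j, j))
    for k in range(m):
        A[k, k] = 2.0
    for k in range(m, j):
        A[k, k] = 2.0 * m / (m + 1)
        for l in range(m, j):
            if l != k:
                A[k, l] = (m - 1.0) / (m + 1)
    return A

for m in range(1, 6):
    for j in range(m + 1, 2 * m + 1):
        det = np.linalg.det(normalized_minor(m, j))
        assert abs(det - (2 * m + 1 - j)) < 1e-8, (m, j, det)
print("det = 2m + 1 - j confirmed for m = 1,...,5")
\end{verbatim}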
We are ready to prove Theorem \ref{teoprin2}.

{\bf Proof of Theorem \ref{teoprin2}.} From (\ref{x2}), the asymptotic behavior of $\widehat{\sigma}_j(z)-\frac{S_{{\bf n},j}(z)}{T_{\bf n}(z)}$ can be expressed in terms of that of the sequences $W_{{\bf n},j}$, $T_{{\bf n},j}$, and $\zeta_{{\bf n},j},$ where
\[
\frac{1}{\zeta_{{\bf n},j}^2}=\min\left\{\int |Q(x)|^2\frac{|\widetilde{T}_{{\bf n},j}(x)|d\sigma_j(x)}{|W_{{\bf n},j}(x)|} :\,Q(x)=x^{n_j}+\cdots\right\}
=\int |T_{{\bf n},j}(x)|^2\frac{|\widetilde{T}_{{\bf n},j}(x)|d\sigma_j(x)}{|W_{{\bf n},j}(x)|}\,.
\]
On account of Theorem \ref{asint2}, we have
\begin{equation}\label{vel-w1no}
\lim_{{\bf n} \in\Lambda} |W_{{\bf n},j}(z)|^{1/{|{\bf n}|}}=\exp\{-(1+p_j)V^{\overline{\mu}_{m+j}}(z)\}\,,
\end{equation}
uniformly on compact subsets of $\mathbb{C}\setminus \Delta_0$, and
\begin{equation}\label{vel-t1no}
\lim_{{\bf n} \in\Lambda} |T_{{\bf n},j}^2(z)|^{1/{|{\bf n}|}}=\exp\{-2p_jV^{\overline{\mu}_j}(z)\}\,,
\end{equation}
uniformly on compact subsets of $\mathbb{C}\setminus \Delta_j\,,$ where $\overline{\mu} = \overline{\mu}(\mathcal{C}_2)$. Using (\ref{asin-1-no}), we have
\begin{equation}\label{vel-res-no1}
\lim_{{\bf n} \in\Lambda} \left(\frac{1}{\zeta_{{\bf n},j}^2}\right)^{1/{|{\bf n}|}}=\exp\{-2p_j\eta_j\} = \exp\{- w_j/p_j\} \,.
\end{equation}
Combining (\ref{x2}), (\ref{vel-w1no}), (\ref{vel-t1no}), and (\ref{vel-res-no1}), we obtain that (\ref{conver2}) holds true uniformly on compact subsets of the indicated region. $\Box$

\section{Comments on Lemma \ref{niksor}} \label{justi}

Let $\mathcal{M}(F_k), k =1,\ldots,N,$ be the class of all measures supported on $F_k$ and
\[
\mathcal{M}= \mathcal{M}(F_1) \times \cdots \times \mathcal{M}(F_{N}) \,.
\]
Define the mutual energy of two vector measures $\mu^1,\mu^2 \in {\mathcal{M}}$ by
\begin{equation} \label{mutualenergy}
J(\mu^1, \mu^2)= \sum_{j,k=1}^N \int \int c_{j,k} \ln \frac{1}{|z-x|} d \mu^1_{j}(z) d \mu^2_{k}(x).
\end{equation}
The energy of the vector measure $\mu \in {\mathcal{M}}$ is
\begin{equation} \label{energy}
J(\mu)= \sum_{j,k=1}^N c_{j,k}I(\mu_{j},\mu_{k}) \,,
\end{equation}
where
\begin{equation} \label{mutualenergy*}
I(\mu_{j},\mu_{k})=\int \int \ln \frac{1}{|z-x|} d \mu_{j}(z) d \mu_{k}(x)\,.
\end{equation}
Given a real symmetric positive definite matrix $\mathcal{C}$ and $\mu \in \mathcal{M}$, define the combined potentials $W^{\mu}_j,j=1,\ldots,N,$ as in the introduction, and the vector potential $W^{\mu} = (W^{\mu}_1,\ldots,W^{\mu}_N).$ These formulas may be rewritten as
\begin{equation} \label{energiamutuapotencial}
J(\mu^1, \mu^2)=\int W^{\mu^2}(z) d \mu^1(z) \,,
\end{equation}
where
\[
\int W^{\mu^2}(z) d\mu^1(z) = \sum_{i=1}^N \int W_i^{\mu^2}(z) d\mu_i^1(z)\,,
\]
and
\begin{equation} \label{energiapotencial}
J(\mu)=\int W^{\mu}(z) d \mu(z)\,.
\end{equation}
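For later reference, observe that, since the matrix $\mathcal{C}$ is symmetric, $J(\cdot,\cdot)$ is a symmetric bilinear form; together with (\ref{energiamutuapotencial}) this gives, for $\widetilde{\mu}=\epsilon\mu^2+(1-\epsilon)\mu^1$,
\[
J(\widetilde{\mu}) = J\bigl(\mu^1+\epsilon(\mu^2-\mu^1)\bigr)
= J(\mu^1) + 2\epsilon \int W^{\mu^1}(x)\, d(\mu^2-\mu^1)(x) + \epsilon^2 J(\mu^2-\mu^1)\,,
\]
which is the algebra behind the identity (\ref{epsilonal2}) used below.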
If $\mu, \mu^1, \mu^2 \in \mathcal{E}$ are vector charges whose components have finite energy, the energy $J(\mu)$ of a charge and the mutual energy $J(\mu^1,\mu^2)$ of two charges can be defined analogously by formulas (\ref{energy}) and (\ref{mutualenergy}), respectively. If $c_{j,k} \geq 0$ whenever $F_j \cap F_k \neq \emptyset$, $j,k \in \{1,\ldots,N\}$, the functionals $J(\mu^1, \mu^2)$ and $J(\mu)$ are lower semicontinuous in the weak topology of $\mathcal{M}$ (see Proposition 5.4.1 in \cite{niso}). Consequently, the functional $J(\mu)$ attains its minimum in $\mathcal{M}_1$. In Proposition 5.4.2 of \cite{niso}, using a unitary decomposition of $\mathcal{C}$, the authors prove that $J(\mu)$ is a nonsingular positive definite quadratic form on the linear space $\mathcal{E}$; for that proposition the extra condition on the coefficients of $\mathcal{C}$ is not needed. Therefore, we can say that if there is a minimizing vector measure then it is unique.

Let $0 \leq \epsilon \leq 1$ and $\mu^1,\mu^2 \in {\mathcal{M}_1}.$ Assume that the components of $\mu^1,\mu^2$ have finite energy. Set $\widetilde{\mu}= \epsilon \mu^2 + (1- \epsilon) {\mu}^1 \in {\mathcal{M}}_1$. It is algebraically straightforward to verify that
\begin{equation} \label{epsilonal2}
J(\widetilde{\mu}) - J(\mu^1) = \epsilon^2 J(\mu^2-\mu^1)+2\epsilon \int W^{ {\mu}^1}(x) d(\mu^2 - {\mu}^1)(x) \,.
\end{equation}
Assume that $\mu^1$ minimizes the energy functional. Dividing by $\epsilon$ and letting $\epsilon$ tend to zero, it follows that
\begin{equation} \label{minimo}
\int W^{ \mu^1}(x) d(\mu^2 - \mu^1)(x) \geq 0
\end{equation}
for all $\mu^2 \in \mathcal{M}_1$. Conversely, assume that (\ref{minimo}) takes place for all $\mu^2 \in \mathcal{M}_1$; then, using (\ref{epsilonal2}), it follows that $\mu^1$ minimizes the energy functional, since $J(\mu^2-\mu^1) \geq 0$ for all such $\mu^1,\mu^2$.

Now, let $\overline{\mu} \in \mathcal{M}_1$ be a solution of the equilibrium potential problem determined by $\mathcal{C}$ on the system of intervals $F_j\,, j = 1,\ldots,N\,.$ That is,
\[
W_j^{\overline{\mu}} (x) = w_j^{\overline{\mu}}\,, \qquad x \in \text{supp} (\overline{\mu}_j)\,,
\]
where $w_j^{\overline{\mu}} = \inf\{W_j^{\overline{\mu}} (x): x \in F_j\}$. Hence, for all $\mu \in \mathcal{M}_1,$
\[
\int W^{\overline{\mu}}(x) d(\mu - \overline{\mu})(x) = \sum_{j=1}^N \int W_j^{\overline{\mu}}(x) d(\mu_j - \overline{\mu}_j)(x) \geq \sum_{j=1}^N \left( w_j^{\overline{\mu}} - w_j^{\overline{\mu}} \right) = 0,
\]
and it follows that $\overline{\mu}$ minimizes the energy functional. With this we conclude the comments on Lemma 1.

\begin{thebibliography}{99}

\bibitem{gora} {\sc A. A. Gonchar, E. A. Rakhmanov,} The equilibrium measure and distribution of zeros of extremal polynomials. (Russian) {\sl Mat. Sb. N.
S.} {\bf 125(167)} (1984), 117--127; English translation in {\sl Math. USSR-Sb.} {\bf 53} (1986), 119--130.

\bibitem{Gon-Rak-Sue} {\sc A. A. Gonchar, E. A. Rakhmanov, S. P. Suetin}, On the rate of convergence of Pad{\'e} approximants of orthogonal expansions. {\sl Progress in approximation theory} (Tampa, FL, 1990), 169--190, {\sl Springer Ser. Comput. Math.,} {\bf 19}, Springer, New York, 1992.

\bibitem{niso} {\sc E. M. Nikishin, V. N. Sorokin,} ``Rational approximations and orthogonality,'' Transl. of Math. Monographs Vol. {\bf 92}, Amer. Math. Soc., Providence, Rhode Island, 1991.

\bibitem{RS} {\sc M. Reed, B. Simon,} ``Functional Analysis, I,'' Academic Press, New York, 1980.

\bibitem{stto} {\sc H. Stahl, V. Totik,} ``General orthogonal polynomials,'' Enc. Math. Vol. {\bf 43}, Cambridge University Press, Cambridge, 1992.

\bibitem{sue} {\sc S. P. Suetin,} The convergence of the rational approximations of polynomial expansions in the domains of meromorphy of a given function. (Russian) \textit{Mat. Sb. (N.S.)} {\bf 105(147)} (1978), 413--430; English translation in \textit{Math. USSR-Sb.} {\bf 34} (1978), 367--381.

\bibitem{sue2} {\sc S. P. Suetin,} On Montessus de Ballore's theorem for rational approximants of orthogonal expansions. (Russian) \textit{Mat. Sb. (N.S.)} {\bf 114(156)} (1981), 451--464, 480; English translation in \textit{Math. USSR-Sb.} {\bf 42} (1982), 399--411.

\end{thebibliography}

\end{document}
\begin{document} \title{Generation of Total Angular Momentum Eigenstates in Remote Qubits} \author{A. Maser} \affiliation{Institut f\"ur Optik, Information und Photonik, Max-Planck Forschungsgruppe,\\Universit\"at Erlangen-N\"urnberg, 91058 Erlangen, Germany} \author{U. Schilling} \affiliation{Institut f\"ur Optik, Information und Photonik, Max-Planck Forschungsgruppe,\\Universit\"at Erlangen-N\"urnberg, 91058 Erlangen, Germany} \author{T. Bastin} \affiliation{Institut de Physique Nucl\'eaire, Atomique et de Spectroscopie, Universit\'e de Li\`ege, 4000 Li\`ege, Belgium} \author{E. Solano} \affiliation{Departamento de Qu\'imica F\'isica, Universidad del Pa\'is Vasco - Euskal Herriko Unibertsitatea, Apartado 644, 48080 Bilbao, Spain} \author{C. Thiel} \email{[email protected]} \homepage{http://www.ioip.mpg.de/jvz/} \affiliation{Institut f\"ur Optik, Information und Photonik, Max-Planck Forschungsgruppe,\\Universit\"at Erlangen-N\"urnberg, 91058 Erlangen, Germany} \author{J. von Zanthier} \affiliation{Institut f\"ur Optik, Information und Photonik, Max-Planck Forschungsgruppe,\\Universit\"at Erlangen-N\"urnberg, 91058 Erlangen, Germany} \date{\today} \begin{abstract} We propose a scheme enabling the universal coupling of angular momentum of $N$ remote noninteracting qubits using linear optical tools only. Our system consists of $N$ single-photon emitters in a $\Lambda$-configuration that are entangled among their long-lived ground-state qubits through suitably designed measurements of the emitted photons. In this manner, we present an experimentally feasible algorithm that is able to generate any of the $2^N$ symmetric and nonsymmetric total angular momentum eigenstates spanning the Hilbert space of the $N$-qubit compound. \end{abstract} \pacs{42.50.Dv,42.50.Tx,37.10.-x,03.67.-a} \maketitle \section{Introduction} Since the celebrated article by Einstein, Podolsky, and Rosen in 1935~\cite{Einstein:1935:a}, it is commonly assumed that the phenomenon of entanglement between different systems occurs if the systems {\em had previously interacted with each other}. Indeed, for most experiments generating entangled quantum states interactions such as non-linear effects~\cite{Kwiat:1995:a}, atomic collisions~\cite{Osnaghi:2001:a}, Coulomb coupling~\cite{Leibfried:2005:a,Haeffner:2005:a}, or atom-photon interfaces~\cite{Wilk:2007:a}, are a prerequisite. Recent proposals considered that entanglement between systems that never interacted before can be created as a consequence of measuring photons propagating along multiple quantum paths, leaving the emitters in particular entangled states~\cite{Cabrillo:1999:a,Bose:1999:a,Skornia:2001:a,Duan:2001:a,Feng:2003:a,Duan:2003:a,Simon:2003:a,Thiel:2007:a,Bastin:2007:a}. Since then, several experiments generating {\em entanglement at a distance via projection} have been realized, first between disordered clouds of atoms~\cite{Julsgaard:2001:a,Chou:2005:a,Matsukevich:2006:a} and very recently even between single trapped atoms~\cite{Moehring:2007:a}. On the other hand, the coupling of angular momentum is commonly utilized to account for the interaction between particles in order to retrieve the corresponding energy eigenstates and eigenvalues of the total system. This coupling of angular momentum has been fruitfully employed in as disparate fields as solid state, atomic or high-energy physics, to account for the interaction between electric or magnetic multipoles or spins of quarks, respectively~\cite{Wigner:1959}. 
Here again, it seems counter-intuitive that noninteracting particles, such as remotely placed spin-1/2 particles, can couple to form arbitrary total angular momentum eigenstates as if an interaction were present, including highly and weakly entangled quantum states. In this article, we propose a method to mimic the universal coupling of angular momentum of $N$ remote noninteracting spin-1/2 particles (qubits) in an experimentally operational manner. Hereby, an arbitrary number of distant particles can be entangled in their two-level ground states, providing long-lived $N$-qubit states via the use of suitably designed projective measurements. Following the algorithm that describes the coupling of angular momentum of individual spin-1/2 particles, our method successively couples remote qubit states into a multi-qubit compound system. Thereby, it offers access to the entire coupled basis of an $N$-qubit compound system of dimension $2^N$, i.e., to any of the $2^N$ symmetric and nonsymmetric total angular momentum eigenstates.

\section{Description of the physical system}

For $N$ spin-1/2 particles, the total angular momentum eigenstates, defined as simultaneous eigenstates of the square of the total spin operator $\hat{\bf S}^2$ and its $z$-component $\hat{S}_z$, are commonly denoted by $|S_N;\!m_N\rangle$, with the corresponding eigenvalues $S_N(S_N+1)\hbar^2$ and $m_N\hbar$~\cite{Dicke:1954:a,Mandel:1995:a}. However, since the label $|S_N;\!m_N\rangle$ generally characterizes more than one quantum state, we will extend the notation of an $N$-qubit state by its coupling history, i.e.~by adding the values of $S_1, S_2, ..., S_{N-1}$ to those of $S_N$ and $m_N$. A single qubit state has $S_1=\frac{1}{2}$, a two-qubit system can either have $S_2=0$ or $S_2=1$, a three-qubit system $S_3=\frac{1}{2}$ or $S_3=\frac{3}{2}$, and so on. Including the coupling history, we thus arrive at the notation $|S_1,\!S_2,...,\!S_N;\!m_N\rangle$, which identifies a particular angular momentum eigenstate unambiguously.

\begin{figure}
\caption{\label{fig1}}
\end{figure}

In the following, we consider a system consisting of $N$ indistinguishable single-photon emitters, e.g.~atoms, with a $\Lambda$-configuration, see Fig.~\ref{fig1}. We denote the two ground levels of the $\Lambda$-configured atoms as $|+\rangle$ and $|-\rangle$ or, using the notation introduced before, $|\frac{1}{2};\!+\frac{1}{2}\rangle\equiv|+\rangle$ and $|\frac{1}{2};\!-\frac{1}{2}\rangle\equiv|-\rangle$. Initially, all atoms are excited by a laser $\pi$ pulse towards the excited state $|e\rangle$ and subsequently decay by spontaneously emitting $N$ photons that are collected by single-mode optical fibers~\cite{Moehring:2007:a,Volz:2006:a} and transmitted to $N$ different detectors. Since each atom is connected via optical fibers to several detectors, a single photon can travel along several alternative, yet equally probable, paths before eventually being recorded by one detector. After a successful measurement, in which all $N$ photons have been recorded at the $N$ detectors so that each detector registers exactly one photon, it is thus impossible to determine {\em which way} each of the $N$ photons propagated. This gives rise to quantum interferences of $N$th order, which can be fruitfully employed to engineer particular quantum states of the emitters, e.g., to generate families of entangled states symmetric under permutation of their qubits~\cite{Thiel:2007:a,Bastin:2007:a}.
Here, we will consider the generation of a more general class of quantum states, including symmetric {\em and} nonsymmetric states. By mimicking the process of spin-spin coupling, we will demonstrate how to generate any quantum state belonging to the coupled basis of an $N$-qubit compound system. \section{Measurement based preparation of total angular momentum eigenstates} Let us start by looking at the most basic process of our system. If one single excited atom with a $\Lambda$-configuration emits a photon, the atomic ground state and the photonic polarization states cannot be described independently. The excited state $|e\rangle$ can decay along two possible channels, $|e\rangle\rightarrow|+\rangle$ and $|e\rangle\rightarrow|-\rangle$, accompanied by the spontaneous emission of a $\sigma^-$ or a $\sigma^+$-polarized photon, respectively (consider e.g.~Zeeman sub-levels). A single decaying atom thus forms an entangled state between the polarization state of the emitted photon and the corresponding ground state of the de-excited atom~\cite{Volz:2006:a,Blinov:2004:a}. This correlation implies that the state of the atom is projected onto $|+\rangle$ ($|-\rangle$) if the emitted photon is registered by a detector with a $\sigma^-$ ($\sigma^+$) polarized filter in front. \subsection{Preparation of $2$-qubit states} In a next step, we consider the system shown in Fig.~\ref{fig1} where two atoms with a $\Lambda$-configuration are initially excited and subsequent measurements on the spontaneously emitted photons are performed at two different detectors. Again, if a polarization sensitive measurement is performed on the two emitted photons using two different polarization filters in front of the detectors, the state of the two atoms is projected due to the measurement. However, if the polarization of both photons is measured along orthogonal directions, the state of the atoms will be projected onto a superposition of both ground states, since it is impossible to determine which atom emitted the photon travelling to the first or the second detector by the information obtained in the measurement process. With each qubit having a total spin of $\frac{1}{2}$, a two-qubit system can have a total spin of either $1$ or $0$ and thus defines four angular momentum eigenstates given by: \begin{tabular*}{\textwidth}{cccccc}\\ spin-1 triplet & $|S_1,\!S_2;\!m\rangle$ && spin-0 singlet & $|S_1,\!S_2;\!m\rangle$ & \\ $|\!\!+\!+\!\rangle$ & $|\frac{1}{2},\!1;\!+1\rangle$ &&&& \\ $\frac{1}{\sqrt{2}}(|\!\!+\!-\!\rangle\!+\!|\!\!-\!+\!\rangle)$ & $|\frac{1}{2},\!1;\!0\rangle$ && $\frac{1}{\sqrt{2}}(|\!\!+\!-\!\rangle\!-\!|\!\!-\!+\!\rangle)$ & $|\frac{1}{2},\!0;\!0\rangle$ \\ $|\!\!-\!-\!\rangle$ & $|\frac{1}{2},\!1;\!-1\rangle$ &&&& \end{tabular*} The spin-1 triplet can be easily generated with the setup shown in Fig.~\ref{fig1} by choosing the polarization filters accordingly: For example, if both filters are oriented in such a way that only $\sigma^-$ ($\sigma^+$) polarized photons are transmitted, the emitters are projected onto the state $|\!\!+\!+\rangle$ ($|\!\!-\!-\rangle$); if the filters are orthogonal, i.e.~one is transmitting $\sigma^-$ and one $\sigma^+$ polarized photons, the system is projected onto the state $|\frac{1}{2},\!1;\!0\rangle$, since any information along {\em which way} the photons propagated is erased by the system. 
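The projection described above can be made explicit in a schematic way (assuming, for illustration, equal decay amplitudes into the two ground states and a balanced, lossless fiber network). After the decay, each atom-photon pair is in the state $\frac{1}{\sqrt{2}}(|+\rangle|\sigma^-\rangle+|-\rangle|\sigma^+\rangle)$. Registering one $\sigma^-$ and one $\sigma^+$ photon at two different detectors, without any which-way information, then projects the two atoms onto a state of the form
\[
\frac{1}{\sqrt{2}}\left(|\!+\!-\rangle + e^{i\phi}\,|\!-\!+\rangle\right),
\]
where $\phi$ is the relative phase between the two indistinguishable detection paths; for $\phi=0$ this is precisely the triplet state $|\frac{1}{2},\!1;\!0\rangle$.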
Finally, in order to generate the singlet state $|\frac{1}{2},\!0;\!0\rangle$, we may introduce an optical phase shift of $\pi$ in one of the optical paths shown in Fig.~\ref{fig1}, e.g., by extending or shortening the length of the optical path by $\frac{\lambda}{2}$. The generation of the four two-particle total angular momentum eigenstates with the system shown in Fig.~\ref{fig1} thus requires only varying the two polarizer orientations and, in the case of the singlet state, introducing an optical phase shift of $\pi$.

\subsection{Preparation of $3$-qubit states}

With the two-qubit angular momentum eigenstates at hand, we can next couple an additional qubit in order to access the eight possible three-qubit total angular momentum eigenstates. In the following, we will exemplify our method for the three-qubit state $|\!\frac{1}{2},\!1,\!\frac{1}{2};\!+\frac{1}{2}\rangle$ given by
\begin{eqnarray}\label{3Dicke}\textstyle
|\frac{1}{2},\!1,\!\frac{1}{2};\!+\frac{1}{2}\rangle&=&\frac{1}{\sqrt{6}}\,(2|\!+\!+-\rangle-|\!+\!-+\rangle-|\!-\!++\rangle)\\
&=&\frac{\sqrt{2}}{\sqrt{3}}\,|\frac{1}{2},\!1;+\!1\rangle\otimes|-\rangle-\frac{1}{\sqrt{3}}\,|\frac{1}{2},\!1;\!0\rangle\otimes|+\rangle\nonumber,
\end{eqnarray}
where the last line in Eq.~(\ref{3Dicke}) exhibits the coupling history: In order to generate the three-qubit state $|\frac{1}{2},\!1,\!\frac{1}{2};\!+\frac{1}{2}\rangle$, the two-qubit spin-1 states $|\frac{1}{2},\!1;\!+1\rangle$ and $|\frac{1}{2},\!1;\!0\rangle$ are coupled with $|-\rangle$ and $|+\rangle$, respectively. Thereby, the prefactors $\frac{\sqrt{2}}{\sqrt{3}}$ and $-\frac{1}{\sqrt{3}}$ represent the corresponding Clebsch-Gordan coefficients as a result of changing the basis~\cite{Clebsche:2002:a}. In the following, we will make use of our knowledge of how to generate the states $|\frac{1}{2},\!1;\!+1\rangle$ and $|\frac{1}{2},\!1;\!0\rangle$ in order to generate the desired state $|\frac{1}{2},\!1,\!\frac{1}{2};\!+\frac{1}{2}\rangle$. To this end, we add a third qubit and combine, in a single setup, the two systems generating the two individual states.

\begin{figure}
\caption{\label{fig2}}
\end{figure}

The two setups individually capable of generating the three-qubit states $|\frac{1}{2},\!1;\!+1\rangle\otimes|-\rangle$ and $|\frac{1}{2},\!1;\!0\rangle\otimes|+\rangle$ are shown in Fig.~\ref{fig2}. The additional qubit is not yet coupled to the two-qubit system, i.e.~it is simply projected either onto the state $|+\rangle$ (Fig.~\ref{fig2}, left) or $|-\rangle$ (Fig.~\ref{fig2}, right), where the two-qubit systems are projected in the same way as explained in Fig.~\ref{fig1}. In order to generate the three-qubit state $|\frac{1}{2},\!1,\!\frac{1}{2};\!\frac{1}{2}\rangle$, we now have to superpose these two possibilities. The combined system is shown in Fig.~\ref{fig3}. We will explain the underlying physics by considering the possible scenarios for the detection of the photon emitted by the additional third atom.

\begin{figure}
\caption{(Color online) Setup for the generation of the state $2|{+\!+\!-}\rangle-|{+\!-\!+}\rangle-|{-\!+\!+}\rangle$ of Eq.~(\ref{3Dicke}) (up to normalization).\label{fig3}}
\end{figure}

In a successful measurement cycle, the three emitted photons are detected at three different detectors. Thus, there are only two possible outcomes for the measurement of a photon emitted by the third atom:
\begin{itemize}
\item[I.] (red solid lines) The emitted photon is registered at detector~$D_3$, which has a $\sigma^-$ polarizing filter in front.
In this case, emitter 3 is projected onto the state $|+\rangle$ and emitters 1 and 2 are left in the setup generating the state $|\frac{1}{2},\!1;\!0\rangle\equiv\frac{1}{\sqrt{2}}(|\!+\!-\rangle+|\!-\!+\rangle)$, as discussed for Fig.~\ref{fig2} (left).
\item[II.] (blue dashed lines) The emitted photon is registered at detector~$D_2$, which has a $\sigma^+$ polarizing filter in front. In this case, emitter 3 is projected onto the state $|-\rangle$ and emitters 1 and 2 are left in the setup generating the state $|\frac{1}{2},\!1;\!1\rangle\equiv|\!+\!+\rangle$, as discussed for Fig.~\ref{fig2} (right).
\end{itemize}
In other words, the third emitter acts as a switch between the two possible quantum paths: with equal probabilities, the system is either projected onto the state $2|++-\rangle$ or onto the state $|+-+\rangle+|-++\rangle$. Note that the relative factor of two results from using an equal number of pathways (optical fibers) in both cases. In addition, we can modify the path where a photon emitted by the third atom is registered at detector~$D_3$ by implementing a relative optical phase shift of $\pi$ (cf.~Fig.~\ref{fig3}) to obtain a minus sign for scenario~II relative to scenario~I. In this case, the final state projected by the setup shown in Fig.~\ref{fig3} corresponds to the three-qubit state $|\frac{1}{2},\!1,\!\frac{1}{2};\!\frac{1}{2}\rangle$ of Eq.~(\ref{3Dicke}).

Reconsidering the state $|\frac{1}{2},\!1,\!\frac{1}{2};\!\frac{1}{2}\rangle$ in terms of our extended notation, we coupled two spin-1/2 particles to form a spin-1 compound state that was coupled again with a spin-1/2 particle to form a three-particle spin-1/2 compound state. Similarly, we could have coupled the spin-1 compound state with an additional qubit in such a way that we obtain the symmetric state $|\frac{1}{2},\!1,\!\frac{3}{2};\!\frac{1}{2}\rangle$, also known as the W state~\cite{Duer:2000:a}. For this, we have to change the setup shown in Fig.~\ref{fig3} slightly: we remove the optical phase shift of $\pi$ and also connect the third emitter to detector~$D_1$. The resulting totally symmetric setup generates a W state (cf.~\cite{Thiel:2007:a}).

\begin{figure}
\caption{\label{fig4}}
\end{figure}

\subsection{Preparation of $N$-qubit states}

Finally, let us outline how to engineer the coupling of angular momentum of $N$ remote qubits to form an arbitrary $N$-qubit total angular momentum eigenstate. In order to generate the $N$-qubit state $|{S_1,S_2,S_3,...S_N;m_s}\rangle$ we have to
\begin{itemize}
\item[1.] set up $\frac{N}{2}+m_s$ ($\frac{N}{2}-m_s$) detectors with $\sigma^-$ ($\sigma^+$) polarized filters in front. Hereby, we connect the first emitter with optical fibers to all $N$ detectors.
\item[2.] check for each particle $i$, beginning with $i=2$, whether $S_i>S_{i-1}$ or $S_i<S_{i-1}$. If
\begin{itemize}
\item[a.] $S_i>S_{i-1}$: we connect the particle with optical fibers to all detectors except those mentioned in case~b.~below.
\item[b.] $S_i<S_{i-1}$: we connect the particle with optical fibers to one detector with a $\sigma^-$ polarizer and to one with a $\sigma^+$ polarizer. The optical fiber leading to the $\sigma^-$ polarizer should induce a relative optical phase shift of $\pi$, and those two detectors should not be linked to any other subsequent particle.
\end{itemize}
\end{itemize}
If one wants to create a particular total angular momentum eigenstate $|{S_1,S_2,S_3,...S_N;m_s}\rangle$, the setup is determined by the total spins $S_1,S_2,S_3,...S_N$ obtained by successively coupling $N$ spin-1/2 particles. Hereby, the spin number $m_s$ determines the fraction of $\sigma^-$ and $\sigma^+$ polarized filters used in the setup (see Fig.~\ref{fig4}).

As examples, let us apply this algorithm to the two three-qubit total angular momentum eigenstates $|\frac{1}{2},\!1,\!\frac{1}{2};\!\frac{1}{2}\rangle$ and $|\frac{1}{2},\!1,\!\frac{3}{2};\!\frac{1}{2}\rangle$ discussed above. Since $m_s=\frac{1}{2}$ for both states, we use two detectors with $\sigma^-$ polarized filters and one with a $\sigma^+$ polarized filter. Further, in both cases we have $S_2>S_1$, which implies that the first and the second emitter are connected to all three detectors. For the state $|\frac{1}{2},\!1,\!\frac{1}{2};\!\frac{1}{2}\rangle$, we find $S_3<S_2$. Therefore, we connect the third emitter only to two detectors with $\sigma^-$ and $\sigma^+$ polarized filters in front, respectively, e.g.~detectors~$D_2$ and~$D_3$, and we introduce an optical phase shift of $\pi$ for the path leading from the third emitter to detector~$D_3$. Summarizing, we obtain the setup shown in Fig.~\ref{fig3}, as postulated. For the state $|\frac{1}{2},\!1,\!\frac{3}{2};\!\frac{1}{2}\rangle$, we find $S_3>S_2$. Here, we connect the third emitter to all three detectors. In this case, as mentioned above, the setup will generate the symmetric W state~\cite{Thiel:2007:a}.

The method proposed here relies on the probabilistic scattering of photons; consequently, the efficiency of generating a particular $N$-qubit total angular momentum eigenstate decreases with an increasing number of qubits $N$. If the probability of finding a single photon in an angular detection window $\Delta\Omega$ is given by $P(\Delta\Omega)$, including fiber-coupling and detection efficiencies, the corresponding $N$-fold counting rate scales as $P^N(\Delta\Omega)$. This might limit the scalability of our scheme (see the discussion in~\cite{Thiel:2007:a}), as is indeed the case for other experiments observing entangled atoms~\cite{Blinov:2004:a,Volz:2006:a,Moehring:2007:a}.

\section{Conclusions}
In conclusion, we considered a system of $N$ remote noninteracting single-photon emitters with a $\Lambda$-configuration. By mimicking the coupling of angular momentum, we showed that it is possible to engineer any of the $2^N$ total angular momentum eigenstates in the long-lived ground-state qubits. Using linear optical tools only, our method employs the detection of all $N$ photons scattered from the $N$ emitters at $N$ polarization-sensitive detectors. Thereby, it offers access to any of the $2^N$ states of the coupled basis of an $N$-qubit compound system. In this way, projective measurements alone form highly and weakly entangled quantum states even though no interaction between the qubits is present.

\section{Acknowledgments}
U.S. acknowledges financial support from the Elite Network of Bavaria. E.S. acknowledges financial support from the Ikerbasque Foundation, the EU EuroSQIP project, and UPV-EHU Grant GIU07/40. C.T.~and J.v.Z.~gratefully acknowledge financial support from the Staedtler foundation.
\end{document}
\begin{document} \title{On lax limits in $\infty$-categories} \author{John D. Berman} \maketitle \begin{abstract}\blfootnote{This work was supported by a National Science Foundation Postdoctoral Fellowship under grant 1803089.} We introduce partially lax limits of $\infty$-categories, which interpolate between ordinary limits and lax limits. Most naturally occurring examples of lax limits are only partially lax; we give examples arising from enriched $\infty$-categories and $\infty$-operads. Our main result is a formula for partially lax limits and colimits in terms of the Grothendieck construction. This generalizes a formula of Lurie for ordinary limits and of Gepner-Haugseng-Nikolaus for fully lax limits. \end{abstract} \section{Introduction} \noindent Many notions in ordinary category theory can be described in terms of lax limits in 2-categories: Grothendieck constructions, comma categories, Kleisli categories, and so on. In theory, this means that a great deal of category theory reduces to the study of lax limits. Gepner-Haugseng-Nikolaus \cite{GHN} have defined lax limits in $(\infty,2)$-categories, and they have proven that the Grothendieck construction is an example: \begin{theorem}[\cite{GHN} 1.1]\label{ThmGHN} Let $F:I\to\text{Cat}_\infty$ be a functor with associated cocartesian fibration $\begingroup\textstyle \int\endgroup F\to I$ and associated cartesian fibration $\intsmall F\to I$. Then $\begingroup\textstyle \int\endgroup F$ is equivalent to the lax colimit of $F$, and $\intsmall F$ is equivalent to the oplax colimit of $F$. \end{theorem} \noindent In this short paper, we will generalize the notion of lax limit to encompass many \emph{other} constructions in higher category theory. If $\mathcal{C}$ is an $(\infty,2)$-category, $I$ is an $\infty$-category with some morphisms marked, and $F:I\to\mathcal{C}$ is a functor, we will define a \emph{partially lax colimit} $\text{colim}^\text{lax}(F)$, which satisfies the following universal property: \begin{itemize} \item there are morphisms $\alpha_i:F(i)\to\text{colim}^\text{lax}(F)$ for each $i\in I$; \item there are 2-morphisms $\alpha_\phi:\alpha_i\Rightarrow\alpha_j F(\phi)$ for each $\phi:i\to j$ in $I$; \item $\alpha_\phi$ is an equivalence whenever $\phi$ is marked. \end{itemize} \noindent In particular, any time we write $\text{colim}^\text{lax}(F)$, we are implicitly referencing a marking on the domain of $F$. There are also some variants on this idea. If we change the direction of the 2-morphisms $\alpha_\phi$, we obtain an \emph{oplax} colimit. If we change the direction of the morphisms $\alpha_i$, we obtain lax and oplax \emph{limits}. If \emph{all} morphisms of $I$ are marked, then all the 2-morphisms are required to be equivalences, and we recover the ordinary colimit. If \emph{no} morphisms of $I$ are marked (or if only the equivalences are marked), then none of the 2-morphisms are required to be equivalences, and we recover the fully lax colimit. The author hopes that anyone interested in lax limits will also be interested in partially lax limits, for the following simple reason: \begin{center} Most lax limits appearing `in nature' are only partially lax.\end{center} \noindent In this paper, we offer the following two examples as partial justification, with the expectation that more will follow: \begin{itemize} \item (Enriched $\infty$-categories) If $\mathcal{V}$ is a monoidal $\infty$-category, there is an $\infty$-category $\text{Cat}^\mathcal{V}$ of $\mathcal{V}$-enriched categories. 
This is constructed (following Gepner-Haugseng \cite{GH}) in two steps: First, for each set $S$, we construct $\infty$-categories $\text{Cat}^\mathcal{V}_S$ of $\mathcal{V}$-enriched categories with a fixed set $S$ of objects. These assemble into a functor $\text{Cat}^\mathcal{V}_{-}:\text{Set}^\text{op}\to\text{Cat}_\infty$. Then $$\text{Cat}^\mathcal{V}\cong\text{colim}^\text{oplax}(\text{Cat}^\mathcal{V}_{-}),$$ where $\text{Set}^\text{op}$ is marked by surjections.
\item ($\infty$-operads) Any symmetric monoidal $\infty$-category $\mathcal{C}^\otimes$ induces a functor from the category of finite pointed sets $$\mathcal{C}:\text{Fin}_\ast\to\text{Cat}_\infty,$$ where $\mathcal{C}\left\langle n\right\rangle=\mathcal{C}^{\times n}$, and the maps in $\text{Fin}_\ast$ describe the monoidal operation. If $\mathcal{O}$ is an $\infty$-operad, so that it comes with a functor to $\text{Fin}_\ast$, then there is an equivalence of $\infty$-categories $$\text{Alg}_\mathcal{O}(\mathcal{C})\cong\text{lim}^\text{lax}(\mathcal{O}\to\text{Fin}_\ast\xrightarrow{\mathcal{C}}\text{Cat}_\infty),$$ where $\mathcal{O}$ is marked by its inert morphisms.
\end{itemize}
\noindent To understand these examples, we need a way to compute lax limits in $\text{Cat}_\infty$. This is the subject of our main result:
\begin{theorem*}[Theorem \ref{ThmLaxCat}] If $I$ is a marked $\infty$-category and $F:I\to\text{Cat}_\infty$, let $p:\begingroup\textstyle \int\endgroup F\to I$ (respectively $q:\intsmall F\to I$) be the associated cocartesian (or cartesian) fibration. We say that a morphism of $\begingroup\textstyle \int\endgroup F$ (or $\intsmall F$) is marked if it is $p$-cocartesian (or $q$-cartesian) and lies over a marked morphism in $I$. Then:
\begin{itemize}
\item the lax colimit is the localization of $\begingroup\textstyle \int\endgroup F$ at marked morphisms;
\item the oplax colimit is the localization of $\intsmall F$ at marked morphisms;
\item the lax limit is the $\infty$-category of marked sections of $p:\begingroup\textstyle \int\endgroup F\to I$;
\item the oplax limit is the $\infty$-category of marked sections of $q:\intsmall F\to I$.
\end{itemize}
\end{theorem*}
\noindent In the event that no morphisms in $I$ are marked, this theorem reduces to Theorem \ref{ThmGHN}. Descotte, Dubuc, and Szyld \cite{DDS} have recently studied partially lax limits (which they call $\sigma$-limits) in 2-categories. They suggest that the theory will be relevant to the development of 2-topoi. As far as this author knows, their 2018 paper was the first appearance of partially lax limits in print.

We will begin with a discussion of marked $\infty$-categories in Section 2, then define lax limits in Section 3. We prove our main theorem in Section 4, and finally discuss the two main examples in Section 5.

\section{Marked $\infty$-categories}
\begin{definition} A \emph{marked category} is a category $\mathcal{C}$ along with a specified collection of morphisms, such that all isomorphisms are marked, and any composition of marked morphisms is marked. If $\mathcal{C},\mathcal{D}$ are marked categories, a functor $F:\mathcal{C}\rightarrow\mathcal{D}$ is \emph{marked} if it sends marked morphisms to marked morphisms. \end{definition}
\noindent There is a 2-category $\text{Cat}^\dag$ of marked categories, marked functors, and natural isomorphisms.
\begin{definition} A \emph{marked $\infty$-category} is an $\infty$-category $\mathcal{C}$ along with a marking on the homotopy category $h\mathcal{C}$.
The $\infty$-category of marked $\infty$-categories is $\text{Cat}_\infty^\dag=\text{Cat}_\infty\times_\text{Cat}\text{Cat}^\dag$. \end{definition} \noindent That is, a marked $\infty$-category has some marked morphisms such that: \begin{itemize} \item all equivalences are marked; \item given two equivalent morphisms $f\cong g$, $f$ is marked if and only if $g$ is; \item marked morphisms are closed under composition; \end{itemize} \noindent and marked functors are functors sending marked morphisms to marked morphisms. \begin{remark} There are many ways to construct $\text{Cat}^\dag_\infty$. See \cite{BarwickK} 1.14 and \cite{HA} 4.1.7.1. \end{remark} \noindent We will be interested in multiple markings on the same $\infty$-category, so we use notation like $\mathcal{C}^\dag$ to denote a marking on $\mathcal{C}$. The basic examples are: \begin{itemize} \item For any $\infty$-category $\mathcal{C}$, there is a \emph{sharp} marking $\mathcal{C}^\sharp$, in which all morphisms are marked, \item and a \emph{flat} marking $\mathcal{C}^\flat$, in which only equivalences are marked; \item If $\mathcal{C}\to\mathcal{D}$ is a cocartesian (respectively cartesian) fibration, there is a \emph{natural} marking $\mathcal{C}^\natural$, in which the cocartesian morphisms (respectively cartesian morphisms) are marked; \item If $\mathcal{O}$ is an $\infty$-operad (Lurie \cite{HA} writes $\mathcal{O}^\otimes$), there is an \emph{inert} marking $\mathcal{O}^\mathsection$, in which the inert morphisms are marked. \end{itemize} \noindent If $\mathcal{C}^\dagger,\mathcal{D}^\dagger\in\text{Cat}^\dagger_\infty$, we will write $\text{Fun}^\dagger(\mathcal{C}^\dagger,\mathcal{D}^\dagger)$ for the full subcategory of $\text{Fun}(\mathcal{C},\mathcal{D})$ spanned by marked functors. The sharp and flat markings each promote to functors $$(-)^\sharp,(-)^\flat:\text{Cat}_\infty\to\text{Cat}^\dag_\infty.$$ There is also a forgetful functor $U:\text{Cat}_\infty^\dag\to\text{Cat}_\infty$ which forgets the marking, and a chain of adjunctions $(-)^\flat\vdash U\vdash (-)^\sharp$. Moreover, $(-)^\flat$ also has a left adjoint $|-|:\text{Cat}^\dag_\infty\to\text{Cat}_\infty$ which preserves finite products (\cite{HA} 4.1.7.2). We regard $|\mathcal{C}^\dag|$ informally as the $\infty$-category obtained from $\mathcal{C}$ by adjoining formal inverses to all the marked morphisms. \begin{example} If $\mathcal{C}$ is any $\infty$-category, then $|\mathcal{C}^\flat|\cong\mathcal{C}$, and $|\mathcal{C}^\sharp|$ is the \emph{geometric realization}, or the $\infty$-groupoid built by adding inverses to all morphisms. \end{example} \noindent We can compute limits and colimits in $\text{Cat}^\dagger_\infty$ as so: \begin{proposition}\label{PropMarkLim} Let $I$ be a small $\infty$-category, $F^\dagger:I\to\text{Cat}^\dagger_\infty$ a functor, and $F:I\to\text{Cat}^\dagger_\infty\rightarrow\text{Cat}_\infty$ the composite with the forgetful functor $U$. Then $\text{colim}(F^\dagger)\cong\text{colim}(F)$ and $\text{lim}(F^\dagger)\cong\text{lim}(F)$ as underlying $\infty$-categories, and the markings may be recovered as follows: \begin{enumerate} \item A morphism $\phi$ of $\text{colim}(F)$ is marked if there is some $i\in I$ such that $F(i)\to\text{colim}(F)$ sends a marked morphism to one equivalent to $\phi$; \item A morphism $\phi$ of $\text{lim}(F)$ is marked if for every $i\in I$, $\phi$ is sent by $\text{lim}(F)\to F(i)$ to a marked morphism. 
\end{enumerate} \end{proposition} \begin{proof} Let $\text{colim}(F)^\dagger$ be marked as in (1). By construction, the functors $F^\dagger(i)\to\text{colim}(F)^\dagger$ are marked, so they induce a marked functor $e:\text{colim}(F^\dagger)\to\text{colim}(F)^\dagger$. Since the forgetful functor $U:\text{Cat}^\dagger_\infty\to\text{Cat}_\infty$ has a right adjoint, it preserves colimits, so $e$ is an equivalence of $\infty$-categories. We need only show the following: If a morphism $\phi$ of $\text{colim}(F)$ is marked, then $\phi$ is marked in $\text{colim}(F^\dagger)$. However, if $\phi$ is marked in $\text{colim}(F)$, then by definition it arises from a marked morphism of $F^\dagger(i)$ for some $i\in I$, and since $F^\dagger(i)\to\text{colim}(F^\dagger)$ is a marked functor, therefore $\phi$ is marked in $\text{colim}(F^\dagger)$. The proof for limits is exactly the same. \end{proof} \begin{corollary} The functor $(-)^\sharp:\text{Cat}_\infty\to\text{Cat}_\infty^\dagger$ preserves small colimits. Therefore it has a right adjoint which we denote $U^\dagger:\text{Cat}_\infty^\dagger\to\text{Cat}_\infty$. \end{corollary} \noindent Explicitly, $U^\dagger(\mathcal{C}^\dagger)$ is the subcategory of $\mathcal{C}^\dagger$ spanned by the marked morphisms (and all objects). In conclusion, we have a chain of adjunctions $$|-|\vdash(-)^\flat\vdash U\vdash(-)^\sharp\vdash U^\dagger.$$ \section{Lax limits} \noindent Lax limits are limits indexed by a twisted arrow category, which we will review first. Twisted arrow categories are classical, but the analogue for $\infty$-categories is due to Barwick \cite{TwAr}. \begin{definition} If $I$ is an $\infty$-category, let $\text{Tw}(I)\to I\times I^\text{op}$ be the right fibration associated to the functor $\text{Map}(-,-):I^\text{op}\times I\rightarrow\text{Top}$. We call $\text{Tw}(I)$ the \emph{twisted arrow $\infty$-category} of $I$. \end{definition} \noindent We may regard $\text{Tw}(I)$ as follows: objects are morphisms $i\xrightarrow{f} j$ in $I$, and morphisms $f\to f^\prime$ are \emph{twisted} commutative squares $$\xymatrix{ i\ar[r]^f\ar[d] &j \\ i^\prime\ar[r]_{f^\prime} &j^\prime.\ar[u] }$$ \begin{definition} If $I^\dag$ is marked and $i\in I$, there is an induced marking on the undercategory $I_{i/}^\dag$: a morphism is marked if the forgetful functor $I_{i/}\to I$ sends it to a marked morphism of $I$. In the same way, $I_{/i}^\dag$ is also marked. \end{definition} \noindent Notice that precomposition with any morphism $X\rightarrow Y$ induces a marked functor $I_{Y/}^\dag\rightarrow I_{X/}^\dag$, so that the undercategory construction (and similarly the overcategory construction) is functorial $$I_{-/}^\dag:I^\text{op}\rightarrow\text{Cat}_\infty^\dag,$$ $$I_{/-}^\dag:I\rightarrow\text{Cat}_\infty^\dag.$$ We say an $\infty$-category $\mathcal{C}$ is \emph{tensored} (respectively \emph{cotensored}) over $\text{Cat}_\infty$ if there are functors $-\otimes-:\text{Cat}_\infty\times\mathcal{C}\to\mathcal{C}$, respectively $[-,-]:\text{Cat}_\infty^\text{op}\times\mathcal{C}\to\mathcal{C}$. \begin{definition}\label{DefLax} Suppose $I^\dag$ is a marked small $\infty$-category, $\mathcal{C}$ is an $\infty$-category, and $F:I\rightarrow\mathcal{C}$ is a functor (of $\infty$-categories). 
If $\mathcal{C}$ is tensored over $\text{Cat}_\infty$, we define $$\text{colim}^\text{lax}(F)=\text{colim}\left(\text{Tw}(I)\rightarrow I^\text{op}\times I\xrightarrow{|I_{-/}^\dag|\times F}\text{Cat}_\infty\times\mathcal{C}\xrightarrow{-\otimes -}\mathcal{C}\right),$$ $$\text{colim}^\text{oplax}(F)=\text{colim}\left(\text{Tw}(I)\to I^\text{op}\times I\xrightarrow{|I_{-/}^{\text{op}\dag}|\times F}\text{Cat}_\infty\times\mathcal{C}\xrightarrow{-\otimes -}\mathcal{C}\right).$$ If $\mathcal{C}$ is cotensored over $\text{Cat}_\infty$, we define $$\text{lim}^\text{lax}(F)=\text{lim}\left(\text{Tw}(I)\rightarrow I^\text{op}\times I\xrightarrow{|I_{/-}^\dag|\times F}\text{Cat}_\infty^\text{op}\times\mathcal{C}\xrightarrow{[-,-]}\mathcal{C}\right),$$ $$\text{lim}^\text{oplax}(F)=\text{lim}\left(\text{Tw}(I)\rightarrow I^\text{op}\times I\xrightarrow{|I_{/-}^{\text{op}\dag}|\times F}\text{Cat}^\text{op}_\infty\times\mathcal{C}\xrightarrow{[-,-]}\mathcal{C}\right).$$ \end{definition} \begin{remark} Such a colimit (respectively limit) over the twisted arrow $\infty$-category is a \emph{coend} (respectively \emph{end}), so we may write for example: $$\text{colim}^\text{lax}(F)=\text{coend}(|I_{-/}^\dag|\otimes F(-));$$ $$\text{lim}^\text{lax}(F)=\text{end}([|I_{/-}^\dag|,F(-)]).$$ However, we won't say anything about ends and coends; see \cite{GHN} for more. \end{remark} \begin{example} If $I^\flat$ has the flat marking, then $|I_{/-}^\flat|=I_{/-}$ and $|I_{-/}^\flat|=I_{-/}$, so the formulas reduce to the formulas for `fully' lax limits in \cite{GHN}. \end{example} \begin{proposition}\label{PropLim} If $I^\sharp$ has the sharp marking, then $$\text{colim}^\text{lax}(F)\cong\text{colim}^\text{oplax}(F)\cong\text{colim}(F),$$ $$\text{lim}^\text{lax}(F)\cong\text{lim}^\text{oplax}(F)\cong\text{lim}(F).$$ \end{proposition} \noindent The proposition reduces to a lemma about the twisted arrow category. We say that a functor $i:\mathcal{A}\to\mathcal{B}$ is \emph{left cofinal} if for any $F:\mathcal{B}\to\mathcal{C}$, $\text{colim}(F)\cong\text{colim}(Fi)$, and \emph{right cofinal} if the same is true for limits. See \cite{HTT} 4.1, but note that Lurie uses `cofinal' where we use `left cofinal'. A functor $\mathcal{A}\to\mathcal{B}$ is right cofinal if and only if $\mathcal{A}^\text{op}\to\mathcal{B}^\text{op}$ is left cofinal, so that the two notions are dual. \begin{lemma} For any $\infty$-category $I$, projection onto the first coordinate $\pi:\text{Tw}(I)\to I$ is both left and right cofinal. \end{lemma} \begin{proof} Note that $\text{Tw}(I)$ is highly asymmetric; for example, it often has an initial object, but almost never has a terminal object. Therefore, the proofs of left and right cofinality will not be similar. Left cofinality was proven by Glasman in \cite{Glasman} 2.5; we will sketch the proof. By Quillen's Theorem A (\cite{HTT} 4.1.3.1), it suffices to prove that $\pi_{i/}=\text{Tw}(I)\times_I I_{i/}$ is weakly contractible for each $i\in I$. An object of $\pi_{i/}$ is a diagram $i\to j\to k$. Projection onto $k$ describes a functor $\pi_{i/}\to(I_{i/})^\text{op}$, and this functor has a right adjoint which sends $i\to k$ to $i\to i\to k$ in $\pi_{i/}$. Since $(I_{i/})^\text{op}$ is weakly contractible (as it has a terminal object), so is $\pi_{i/}$. For right cofinality, it suffices to prove that $\pi_{/i}=\text{Tw}(I)\times_I I_{/i}$ is weakly contractible for each $i$. An object of $\pi_{/i}$ is a diagram $i\leftarrow j\to k$. 
We define functors $F_1,F_2,F_3:\pi_{/i}\to\pi_{/i}$ as follows: $$F_1(i\leftarrow j\to k)=(i\leftarrow j\to j)$$ $$F_2(i\leftarrow j\to k)=(i\leftarrow j\to i)$$ $$F_3(i\leftarrow j\to k)=(i\leftarrow i\to i),$$ where all maps $i\to i$ or $j\to j$ are the identity. There are natural transformations $\text{id}\Rightarrow F_1\Leftarrow F_2\Rightarrow F_3$ given by the vertical maps in the diagram $$\xymatrix{ &j\ar[ld]\ar[r]\ar[d] &k \\ i &j\ar[l]\ar[r] &j\ar[u]\ar[d] \\ &j\ar[lu]\ar[r]\ar[u]\ar[d] &i \\ &i\ar[luu]\ar[r] &i.\ar[u] }$$ This exhibits a homotopy between the identity functor and a constant functor, so $\pi_{/i}$ is weakly contractible and $\pi$ is right cofinal. \end{proof} \begin{proof}[Proof of Proposition \ref{PropLim}] By definition, $|I_{i/}^\sharp|$ is the geometric realization of $I_{i/}$, which is contractible since $I_{i/}$ has an initial object. Therefore, $\text{colim}^\text{lax}(F)$ is the colimit of $\text{Tw}(I)\rightarrow I\xrightarrow{F}\mathcal{C}$, which is just the colimit of $F$ because the projection $\text{Tw}(I)\to I$ is left cofinal. The proof for oplax colimits is identical. Since $\text{Tw}(I)\to I$ is also right cofinal, the proof for lax and oplax limits is also identical. \end{proof} \section{Lax limits of $\infty$-categories} \noindent The $\infty$-category $\text{Cat}_\infty$ is tensored and cotensored over itself via the functors $-\times -:\text{Cat}_\infty\times\text{Cat}_\infty\to\text{Cat}_\infty$ and $\text{Fun}(-,-):\text{Cat}_\infty^\text{op}\times\text{Cat}_\infty\to\text{Cat}_\infty$. We are nearly ready to prove our main result, that lax limits and colimits in $\text{Cat}_\infty$ can be computed explicitly via Grothendieck constructions. \begin{definition}\label{DefIM} Suppose that $I^\dag$ is a marked $\infty$-category and $F:I\to\text{Cat}_\infty$ is a functor. We denote by $p:\begingroup\textstyle \int\endgroup F\to I$ the associated cocartesian fibration (given by the Grothendieck construction). Then the \emph{induced marking} $\begingroup\textstyle \int\endgroup F^\dag$ is given as follows: A morphism $\phi$ of $\begingroup\textstyle \int\endgroup F$ is marked if and only if $p(\phi)$ is marked and $\phi$ is $p$-cocartesian. In exactly the same way there is an induced marking on the associated cartesian fibration $\intsmall F^\dag\to I^{\text{op}\dag}$. \end{definition} \begin{remark} Given two functors $t:I^\dagger\to J^\dagger$ and $F:J\to\text{Cat}_\infty$, Definition \ref{DefIM} is chosen so that we have a pullback square of marked $\infty$-categories $$\xymatrix{ \begingroup\textstyle \int\endgroup Ft^\dagger\ar[r]\ar[d] &\begingroup\textstyle \int\endgroup F^\dagger\ar[d] \\ I^\dagger\ar[r] &J^\dagger. }$$ \end{remark} \begin{example} If $I^\flat$ has the flat marking, $\begingroup\textstyle \int\endgroup F^\flat$ also has the flat marking. If $I^\sharp$ has the sharp marking, $\begingroup\textstyle \int\endgroup F^\natural$ has the natural marking for a cocartesian fibration (the marking by cocartesian edges). Thus, the marking $\begingroup\textstyle \int\endgroup F^\dag$ interpolates between the flat marking and the natural marking. \end{example} \begin{theorem}\label{ThmLaxCat} Suppose $I^\dag$ is a small marked $\infty$-category and $F:I\to\text{Cat}$ is a functor. 
Then $$\text{colim}^\text{lax}(F)\cong|\begingroup\textstyle \int\endgroup F^\dag|,$$ $$\text{colim}^\text{oplax}(F)\cong|\intsmall F^\dag|,$$ $$\text{lim}^\text{lax}(F)\cong\text{Fun}_{/I}^\dag(I^\dag,\begingroup\textstyle \int\endgroup F^\dag),$$ $$\text{lim}^\text{oplax}(F)\cong\text{Fun}^\dag_{/I^\text{op}}(I^{\text{op}\dag},\intsmall F^\dag).$$ \end{theorem} \begin{remark}\label{RmkFully} If $I^\sharp$ has the sharp marking, then the theorem describes how to compute ordinary limits and colimits in $\text{Cat}_\infty$. This is \cite{HTT} 3.3.3.2 (for limits) and \cite{HTT} 3.3.4.3 (for colimits). If $I^\flat$ has the flat marking, then the theorem simplifies considerably and describes how to compute fully lax limits and colimits: $\text{colim}^\text{lax}(F)\cong\begingroup\textstyle \int\endgroup F$ and $\text{lim}^\text{lax}(F)\cong\text{Fun}_{/I}(I,\begingroup\textstyle \int\endgroup F)$. These are the main results of \cite{GHN}. \end{remark} \begin{lemma} Suppose $I$ is a small $\infty$-category and $\eta:F_0\to F_1$ is a natural transformation of functors $I\to\text{Cat}$. If each functor $\eta_i:F_0(i)\to F_1(i)$ is fully faithful, then so is $\text{lim}(\eta):\text{lim}(F_0)\to\text{lim}(F_1)$. Moreover, an object $X\in\text{lim}(F_1)$ is in the essential image of $\text{lim}(\eta)$ if and only if $\text{lim}(F_1)\to F_1(i)$ sends $X$ to an object in the essential image of $\eta_i$ for each $i\in I$. \end{lemma} \begin{proof} Consider the associated diagram $$\xymatrix{ \begingroup\textstyle \int\endgroup F_0^\natural\ar[rr]^{\eta_\ast}\ar[rd]_{p_0} &&\begingroup\textstyle \int\endgroup F_1^\natural\ar[ld]^{p_1} \\ &I, & }$$ where $p_i$ are cocartesian fibrations. Since $\eta_i$ is fully faithful for each $i$, so is $\eta_\ast$. By \cite{HTT} 3.3.3.2, $\text{lim}(F_i)\cong\text{Fun}_{/I}^\dagger(I^\sharp,\begingroup\textstyle \int\endgroup F_i^\natural)$. Therefore, $\text{lim}(\eta)$ is equivalent to $$\text{Fun}_{/I}^\dagger(I^\sharp,\begingroup\textstyle \int\endgroup F_0^\natural)\to\text{Fun}_{/I}^\dagger(I^\sharp,\begingroup\textstyle \int\endgroup F_1^\natural),$$ which is fully faithful as desired. The maps $\text{lim}(F_1)\to F_1(i)$ are given by evaluating a section at $i\in I$. Therefore, a section $s:I\to\begingroup\textstyle \int\endgroup F_1$ lifts to $\begingroup\textstyle \int\endgroup F_0$ if and only if $s(i)\in F_1(i)$ is in $F_0(i)\subseteq F_1(i)$ for all $i$, which proves the second part of the lemma. \end{proof} \begin{proof}[Proof of Theorem \ref{ThmLaxCat}] First we prove the theorem for lax colimits. By definition, $\text{colim}^\text{lax}(F)$ is the colimit of $$F^\text{lax}_c:\text{Tw}(I)\to I^\text{op}\times I\xrightarrow{p}\text{Cat},$$ where $p(i,j)=|I_{i/}^\dagger|\times F(j)$. Since $|-|:\text{Cat}^\dagger\to\text{Cat}$ preserves finite products, $p(i,j)\cong|I_{i/}^\dagger\times F(j)^\flat|$, and therefore $F^\text{lax}_c(-)\cong|\bar{F}^\text{lax}_c(-)|$, where $$\bar{F}^\text{lax}_c:\text{Tw}(I)\to I^\text{op}\times I\xrightarrow{\bar{p}}\text{Cat}^\dag$$ and $\bar{p}(i,j)=I_{i/}^\dagger\times F(j)^\flat$.
By Proposition \ref{PropMarkLim}, $\text{colim}(\bar{F}^\text{lax}_c)$ is the \emph{fully} lax colimit of $F$, which is the Grothendieck construction $\begingroup\textstyle \int\endgroup F$ by Remark \ref{RmkFully}, and a morphism of $\begingroup\textstyle \int\endgroup F$ is marked if and only if it is in the image of a marked morphism under $I^\dagger_{i/}\times F(j)^\flat\to\begingroup\textstyle \int\endgroup F$ for some $j\to i$ in $\text{Tw}(I)$. These are the morphisms which are cocartesian over marked morphisms of $I$. Therefore, $\text{colim}(\bar{F}^\text{lax}_c)\cong\begingroup\textstyle \int\endgroup F^\dagger$. Since $|-|:\text{Cat}^\dagger\to\text{Cat}$ has a right adjoint, it preserves colimits, so $$\text{colim}^\text{lax}(F)=\text{colim}(F^\text{lax}_c)\cong|\text{colim}(\bar{F}^\text{lax}_c)|\cong|\begingroup\textstyle \int\endgroup F^\dagger|.$$ The proof for oplax colimits is exactly the same. Now we turn to lax limits. By definition, $\text{lim}^\text{lax}(F)$ is the limit of the composite $$F^\text{lax}_\ell:\text{Tw}(I)\to I^\text{op}\times I\xrightarrow{q}\text{Cat},$$ where $q(i,j)=\text{Fun}(|I_{/i}^\dagger|,F(j))\cong\text{Fun}^\dagger(I_{/i}^\dagger,F(j)^\flat)$. In particular, consider the composite $$\bar{F}^\text{lax}_\ell:\text{Tw}(I)\to I^\text{op}\times I\xrightarrow{\bar{q}}\text{Cat}$$ where $\bar{q}(i,j)=\text{Fun}(I_{/i},F(j))$. Then there is a natural transformation $F^\text{lax}_\ell\to\bar{F}^\text{lax}_\ell$ which is a full subcategory inclusion at each $j\to i$ in $\text{Tw}(I)$. By the lemma, $\text{lim}^\text{lax}(F)=\text{lim}(F^\text{lax}_\ell)$ is therefore a full subcategory of $\text{lim}(\bar{F}^\text{lax}_\ell)$, which is the \emph{fully} lax limit. By Remark \ref{RmkFully}, the fully lax limit is the $\infty$-category of sections of $t:\begingroup\textstyle \int\endgroup F\to I$, so we conclude by the lemma that $$\text{lim}^\text{lax}(F)\subseteq\text{Fun}_{/I}(I,\begingroup\textstyle \int\endgroup F),$$ and a section is in $\text{lim}^\text{lax}(F)$ if and only if it sends marked morphisms to $t$-cocartesian morphisms. In other words, $\text{lim}^\text{lax}(F)\cong\text{Fun}^\dag_{/I}(I^\dag,\begingroup\textstyle \int\endgroup F^\dag)$, as claimed; the case of oplax limits is identical. \end{proof} \section{Examples} \noindent We will end with two examples of partially lax limits. \subsection{Example: enriched $\infty$-categories} \noindent Let $\mathcal{V}$ be a monoidal $\infty$-category. For each set $S$, Gepner-Haugseng \cite{GH} construct an $\infty$-operad $\mathcal{O}_S$ and define: A $\mathcal{V}$-enriched category with set $S$ of objects is an $\mathcal{O}_S$-algebra in $\mathcal{V}$. Then we may write $\text{Cat}^\mathcal{V}_S=\text{Alg}_{\mathcal{O}_S}(\mathcal{V})$ for the $\infty$-category of $\mathcal{V}$-enriched categories with set $S$ of objects. This construction is functorial in $S$, $$\text{Cat}^\mathcal{V}_{-}:\text{Set}^\text{op}\to\widehat{\text{Cat}},$$ and the $\infty$-category of \emph{all} $\mathcal{V}$-enriched categories is defined to be a localization of the associated cartesian fibration (\cite{GH} 5.4.3). By Theorem \ref{ThmLaxCat}, $\text{Cat}^\mathcal{V}$ can be described as an oplax colimit: \begin{definition} The $\infty$-category of $\mathcal{V}$-enriched categories is the oplax colimit $$\text{Cat}^\mathcal{V}\cong\text{colim}^\text{oplax}(\text{Cat}^\mathcal{V}_{-}),$$ where $\text{Set}^\text{op}$ is marked by the surjections. \end{definition} \noindent We will briefly motivate this definition.
If $\mathcal{C}$ is an enriched category with set $T$ of objects, and $f:S\to T$ is a function, there is an enriched category $f^\ast\mathcal{C}$, which is determined essentially uniquely by the property: $f^\ast\mathcal{C}$ has set $S$ of objects, and there is a fully faithful functor $\alpha_\mathcal{C}:f^\ast\mathcal{C}\to\mathcal{C}$ which acts as $f:S\to T$ on the sets of objects. That is, there are triangles $$\xymatrix{ \text{Cat}^\mathcal{V}_T\ar[dd]_{f^\ast}\ar[rd] &\\ &\text{Cat}^\mathcal{V}, \\ \text{Cat}^\mathcal{V}_S\ar[ru] & }$$ with natural transformations $\alpha$ going \emph{up} the triangle. Moreover, if $f$ is surjective, then each $\alpha_\mathcal{C}$ is fully faithful and essentially surjective, so we expect $\alpha$ to be a natural equivalence in that case. In other words, $\text{Cat}^\mathcal{V}$ should satisfy the same universal property as an oplax colimit with respect to the surjective marking on Set. This will be described in greater detail in our upcoming work on enriched $\infty$-categories \cite{Berman1}. \subsection{Example: algebras over $\infty$-operads} \noindent Let $\text{Fin}_\ast$ denote the category of finite pointed sets. We may think of a symmetric monoidal $\infty$-category $\mathcal{C}^\otimes$ as a functor $\mathcal{C}:\text{Fin}_\ast\to\text{Cat}_\infty$ such that the maps $\mathcal{C}\left\langle n\right\rangle\to\mathcal{C}=\mathcal{C}\left\langle 1\right\rangle$ exhibit $\mathcal{C}\left\langle n\right\rangle\cong\mathcal{C}^{\times n}$. This is the same condition that appears for Segal's $\Gamma$-spaces; see \cite{HA} 2.4.2.2. \begin{proposition}\label{PropOp} If $\mathcal{O}$ is an $\infty$-operad and $\mathcal{C}$ is a symmetric monoidal $\infty$-category, then the $\infty$-category of $\mathcal{O}$-algebras in $\mathcal{C}$ is the lax limit of the composite $$\text{Alg}_\mathcal{O}(\mathcal{C})\cong\text{lim}^\text{lax}(\mathcal{O}\to\text{Fin}_\ast\xrightarrow{\mathcal{C}}\text{Cat}_\infty),$$ where $\mathcal{O}$ is marked by the \emph{inert morphisms} (defined in \cite{HA} 2.1.2.3). \end{proposition} \begin{proof} By definition (\cite{HA} 2.1.2.7), $\text{Alg}_\mathcal{O}(\mathcal{C})\cong\text{Fun}^\dagger_{/\text{Fin}_\ast}(\mathcal{O}^\mathsection,\begingroup\textstyle \int\endgroup\mathcal{C})$. Here $^\mathsection$ denotes the marking by inert morphisms, and $\begingroup\textstyle \int\endgroup\mathcal{C}\to\text{Fin}_\ast$ is the cocartesian fibration associated to $\text{Fin}_\ast\xrightarrow{\mathcal{C}}\text{Cat}_\infty$. (Lurie writes $\mathcal{C}^\otimes$ instead of $\begingroup\textstyle \int\endgroup\mathcal{C}$.) By Theorem \ref{ThmLaxCat}, this is equivalent to the lax limit described. \end{proof} \end{document}
\begin{document} \title{Quiver varieties and fusion products for $\mathfrak{sl}_2$} \author{Alistair Savage and Olivier Schiffmann} \date{} \maketitle \noindent \paragraph{Introduction.} In a remarkable series of works starting in \cite{N0}, Nakajima gives a geometric realization of integrable highest weight representations $V_\lambda$ of a Kac-Moody algebra $\mathfrak{g}$ in the homology of a certain Lagrangian subvariety $\mathcal{L}(\lambda)$ of a symplectic variety $\mathcal{M}(\lambda)$ constructed from the Dynkin diagram of $\mathfrak{g}$ (the \textit{quiver variety}). In particular, in \cite{N2}, he realizes the tensor product $V_\lambda \otimes V_{\mu}$ as the homology of a ``tensor product variety'' $\mathcal{L}(\lambda, \mu) \subset \mathcal{M}(\lambda + \mu)$ (the same construction also appears independently in \cite{M}). When $\mathfrak{g}$ is simple, one might ask if a similar construction can produce the \textit{fusion} tensor products $V_{\lambda} \otimes_l V_{\mu}$, certain truncations of $V_\lambda \otimes V_\mu$. \paragraph{}In this short note, we answer this question affirmatively when $\mathfrak{g}=\mathfrak{sl}_2$. In this case, $V_{\lambda} \otimes_l V_{\mu}$ is realized as the homology of the most natural subvarieties $\mathcal{L}_l(\lambda,\mu) \subset \mathcal{L}(\lambda,\mu)$ (see Section 3). We also consider the case of a tensor product of arbitrarily many $\mathfrak{sl}_2$-modules $V_{\lambda_1}, \cdots ,V_{\lambda_r}$. Finally, we give a combinatorial description of the irreducible components of $\mathcal{L}_l(\lambda,\mu)$ (and $\mathcal{L}_l(\lambda_1, \ldots, \lambda_r)$) using the notions of graphical calculus and crossingless matches for $\mathfrak{sl}_2$ (see \cite{FK97} and \cite{Savage}). We do not expect these constructions to generalize to Lie algebras of higher rank. \centerline{\textbf{Acknowledgments.}} We would like to thank I. Frenkel for raising the question of the relation between quiver varieties and fusion products. The research of the first author was supported in part by the Natural Sciences and Engineering Research Council of Canada. \section{Fusion products for $U(\mathfrak{sl}_2)$.} \paragraph{1.1.} Let $\mathcal{R}$ denote the category of finite-dimensional $\mathfrak{sl}_2$-modules, and for $i \geq 0$ let $V_i$ denote the simple module of highest weight $i$. Let $\mathbb{C}[\mathcal{R}]$ be the Grothendieck ring of $\mathcal{R}$ and let $[V]$ denote the class of a module $V$. We have $$V_i \otimes V_j \simeq \bigoplus_{k=j-i}^{i+j} V_k,\qquad [V_i] \cdot [V_j] =\sum_{k=j-i}^{i+j} [V_k],\qquad \mathrm{for\;} i \leq j$$ where in the sums $k$ increases by twos. \paragraph{1.2.} Now let us fix some positive integer $l \in \mathbb{N}$. Consider the quotient $$\mathbb{C}_l[\mathcal{R}]=\mathbb{C}[\mathcal{R}]/[V_{l+1}]\mathbb{C}[\mathcal{R}].$$ Denoting by $[V]_l$ the image of $[V]$ in $\mathbb{C}_l[\mathcal{R}]$, we have $\mathbb{C}_l[\mathcal{R}]=\mathbb{C}[V_0]_l \oplus \cdots \oplus \mathbb{C}[V_l]_l$, and $$[V_i \otimes V_j]_l=\sum_{k=j-i}^{\min(i+j,2l -i-j)} [V_k]_l,\qquad \mathrm{for\;} i \leq j \leq l.$$ We also set $$V_i \otimes_l V_j=\bigoplus_{k=j-i}^{\min(i+j,2l-i-j)} V_k,\qquad \mathrm{for\;} i \leq j \leq l.$$ Again, in the above sums, $k$ increases by twos.
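\paragraph{} For instance, with $l=2$ the above formulas give $$V_1 \otimes_2 V_1 = V_0 \oplus V_2 = V_1 \otimes V_1, \qquad V_2 \otimes_2 V_2 = V_0,$$ the summands $V_2$ and $V_4$ of $V_2 \otimes V_2$ being discarded since their highest weights exceed $2l-2-2=0$.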
The ring $\mathbb{C}_l[\mathcal{R}]$ appears in conformal field theory (as the Grothendieck ring of the modular category of integrable $\widehat{\mathfrak{sl}}_2$-modules of level $l$) and in quantum group theory (as the Grothendieck ring of a suitable quotient of the category of tilting modules over $U_\epsilon(\mathfrak{sl}_2)$ when $\epsilon$ is an $l$th root of unity). \section{Lagrangian construction of $U(\mathfrak{sl}_2)$.} We briefly recall Ginzburg's construction of irreducible representations of $\mathfrak{sl}_2$ in the homology of certain varieties associated to partial flag varieties (cf. \cite{Ginz}). We use the (in this case equivalent) language of quiver varieties (cf. \cite{Nak}). \paragraph{2.1.} Let $v,w \in \mathbb{N}$ and let $V$ and $W$ be $\mathbb{C}$-vector spaces of dimensions $v$ and $w$ respectively. Consider the space $$M(v,w)=\{(i,j)\;|\;ij=0; \;\ker j=\{0\}\} \subset \mathrm{Hom}\;(W,V) \oplus \mathrm{Hom}\;(V,W).$$ We let $GL(V)$ act on $M(v,w)$ via $g\cdot(i,j)=(gi,jg^{-1})$. This action is free and we set $\mathcal{M}(v,w)=M(v,w)/GL(V)$. The assignment $(i,j) \mapsto (ji, \mathrm{Im}\;j)$ defines an isomorphism between $\mathcal{M}(v,w)$ and the variety $$\mathcal{F}_{v,w}=\{(t,V_0)\;| V_0 \subset W, \mathrm{dim}\;V_0=v, \mathrm{Im}\;t \subset V_0 \subset \ker t \} \subset \mathcal{N}_W \times \mathrm{Gr}(v,w),$$ where $\mathcal{N}_W$ is the nullcone of $\mathfrak{gl} (W)$ and $\mathrm{Gr}(v,w)$ is the Grassmannian of $v$-dimensional subspaces in $W$. We will denote by $\pi: \mathcal{M}(v,w) \to \mathcal{N}_W$, the projection $(i,j) \mapsto ji$. For any $t \in \mathcal{N}_W$ such that $t^2=0$ we set $\mathcal{M}(v,w)_t=\pi^{-1}(t)$ and $\mathcal{M}(w)_t = \sqcup_v \mathcal{M}(v,w)_t$. In particular, we set $\mathcal{L}(v,w)= \pi^{-1}(0)$. Observe that $\mathcal{L}(v,w)$ is just $\mathrm{Gr}(v,w)$ and that $\mathcal{M}(v,w)$ is isomorphic to the cotangent bundle of $\mathcal{L}(v,w)$. We have $\mathrm{dim}\;\mathcal{M}(v,w)=2 \mathrm{dim}\;\mathcal{L}(v,w)= 2v(w-v)$. For $v_1,v_2,w \in \mathbb{N}$ we also consider the variety of triples $$Z(v_1,v_2,w)=\{((i_1,j_1),(i_2,j_2))\;|\;j_1i_1=j_2i_2\}\subset \mathcal{M}(v_1,w) \times \mathcal{M}(v_2,w). $$ Then $\mathrm{dim}\;Z(v_1,v_2,w)=v_1(w-v_1)+v_2(w-v_2)$. \paragraph{}The form $\omega((i,j),(i',j'))=\mathrm{Tr}_V(ij'-i'j)$ defines a symplectic structure on $\mathcal{M}(v,w)$, for which the variety $\mathcal{L}(v,w)$ is Lagrangian. Equip $\mathcal{M}(v_1,w) \times \mathcal{M}(v_2,w)$ with the symplectic form $\omega \times (-\omega)$. Then $Z(v_1,v_2,w)$ is also Lagrangian. Let $Z(w) = \sqcup_{v_1,v_2} Z(v_1,v_2,w)$. \paragraph{2.2.} For any complex algebraic variety $X$ we let $H_*(X)$ be the Borel-Moore homology with coefficients in $\mathbb{C}$, and set $H_{top}(X)=H_{2d}(X)$ where $d=\mathrm{dim}\;X$. \paragraph{} Let $p_{ij}: \mathcal{M}(v_1,w) \times \mathcal{M}(v_2,w) \times \mathcal{M}(v_3,w) \to \mathcal{M}(v_i,w) \times \mathcal{M}(v_j,w)$ be the obvious projections. The map $$p_{13}:\;p_{12}^{-1}(Z(v_1,v_2,w)) \cap p_{23}^{-1}(Z(v_2,v_3,w)) \to Z(v_1,v_3,w)$$ is proper and we can define the convolution product \begin{align*} H_i(Z(v_1,v_2,w)) \otimes H_j(Z(v_2,v_3,w)) &\to H_{i+j-d_2}(Z(v_1,v_3,w))\\ c \otimes c' &\mapsto p_{13*}(p_{12}^*(c) \cap p_{23}^*(c')) \end{align*} where $d_2=4v_2(w-v_2)$. In particular, this gives rise to an algebra structure on $H_{top}(Z(w))= \bigoplus_{v_1,v_2} H_{top}(Z(v_1,v_2,w))$. \paragraph{}Now let $t \in \mathcal{N}_W$ such that $t^2=0$. 
The projection $$p_1:\;Z(v_1,v_2,w) \cap p_2^{-1}(\mathcal{M}(v_2,w)_t) \to \mathcal{M}(v_1,w)_t$$ (where $p_1$ and $p_2$ are the obvious projections) is proper and the convolution action \begin{align*} H_{top}(Z(v_1,v_2,w)) \otimes H_{top}(\mathcal{M}(v_2,w)_t) &\to H_{top} (\mathcal{M}(v_1,w)_t)\\ c \otimes c' &\mapsto p_{1*}(c \cap p_{2}^*(c')) \end{align*} makes $H_{top}(\mathcal{M}(w)_t)=\bigoplus_v H_{top}(\mathcal{M}(v,w)_t)$ into a $H_{top}(Z(w))$-module. \begin{theo}[\cite{Ginz}] There is a natural surjective homomorphism $\Phi:\;U(\mathfrak{sl}_2) \to H_{top}(Z(w))$. Under $\Phi$, the module $H_{top}(\mathcal{M}(w)_t)$ is isomorphic to $V_{w-2u}$ where $u=\rank t$.\end{theo} \paragraph{2.3.} We now give the realization of tensor products of $U(\mathfrak{sl}_2)$-modules. Let $w=w_1+\cdots + w_r$ and fix $W=W_1 \oplus \cdots \oplus W_r$ with $\mathrm{dim}\;W_i=w_i$. Let $W_0 = 0$. The group $GL(W)$ acts on $\mathcal{M}(v,w)$ by $g\cdot (i,j)=(ig^{-1},gj)$. Consider the embedding \begin{align*} \sigma:\;(\mathbb{C}^*)^{r-1} &\to \prod_{i=1}^r GL(W_i) \subset GL(W)\\ (t_2,t_3,\ldots,t_r) &\mapsto (Id, t_2^{-1}, t_2^{-1}t_3^{-1},\ldots, t_2^{-1} \cdots t_r^{-1}) \end{align*} Then, for each $v$, we have (see e.g \cite[Lemma 3.2]{N2}) $$\mathcal{M}(v,w)^\sigma =\underset{v_1+\cdots + v_r=v}{\bigsqcup} \mathcal{M}(v_1,w_1) \times \cdots \times \mathcal{M}(v_r,w_r).$$ Consider the subvarieties \[ \mathcal{M}(v,w_1,\ldots,w_r)=\{x \in \mathcal{M}(v,w)\;|\; \underset{t_i \to 0}{\mathrm{lim}} \sigma(t_2,\ldots,t_r)\cdot x \;\mathrm{exists}\} \] $$\mathcal{N}_W(w_1,\ldots,w_r)=\{t \in \mathcal{N}_W\;|\; \underset{t_i \to 0}{\mathrm{lim}} \sigma(t_2,\ldots,t_r)\cdot t \;\mathrm{exists}\}. $$ For $x \in \mathcal{M}(v,w_1,\ldots,w_r)$, let us set $\tau(x)= \underset{t_i \to 0}{\mathrm{lim}} \sigma(t_2,\ldots,t_r)\cdot x$. We define $\tau(t)$ similarly for $t \in \mathcal{N}_W(w_1,\ldots,w_r)$. Now consider $$\mathcal{L}(v,w_1,\ldots,w_r)=\{x \in \mathcal{M}(v,w_1,\ldots,w_r)\;|\; \tau(x) \in \prod_i \mathcal{L}(v_i,w_i)\;\mathrm{for\;some\;}(v_i)\}. $$ Set $\mathcal{L}(w_1,\ldots,w_r)=\sqcup_v \mathcal{L}(v,w_1,\ldots,w_r)$. Note that $\mathcal{L}(w_1,\ldots,w_r)=\pi^{-1} (\tau^{-1}(0))$ so that we have an action of $H_{top}(Z(w))$ on $H_{top}(\mathcal{L}(w_1,\ldots,w_r))$. Moreover, it is easy to check that $\mathcal{L}(w_1,\ldots,w_r)$ is Lagrangian. Note that $\mathcal{L}(w_1,\ldots,w_r)$ is isomorphic to the variety \[ \{(t,V_0) \;|\; V_0 \subset W,\, \im t \subset V_0 \subset \ker t,\, t(W_j) \subset W_0 \oplus \dots \oplus W_{j-1},\, 1\le j \le r\}. \] \begin{theo}[\cite{GRV}, \cite{N2}, \cite{M}] $H_{top}(\mathcal{L}(w_1,\ldots,w_r))$ is isomorphic to $V_{w_1} \otimes \cdots \otimes V_{w_r}$ as a $U(\mathfrak{sl}_2)$-module. \end{theo} \section{Lagrangian construction of the fusion product} \paragraph{}Let us fix some positive integer $l$. We will now describe an open subvariety of $\mathcal{L}(w_1,\ldots,w_r)$ whose homology realizes the fusion product $V_{w_1} \otimes_l \cdots \otimes_l V_{w_r}$. \paragraph{3.1.} We keep the notation of 2.3. For all $k \in \mathbb{N}$ and $t \in \mathcal{N}_{W_1 \oplus \cdots \oplus W_k} (w_1,\ldots, w_k)$ we set $\tau_k(t)=\mathrm{lim}_{t_k\to 0} \sigma (1,\ldots,1,t_k) (t)$. 
Let us consider the open subvariety $\mathcal{N}^l(w_1,w_2)= \{t \in \mathcal{N}_{W_1 \oplus W_2}\;| \dim \ker t \leq l \}$ of $\mathcal{N}_{W_1 \oplus W_2}$ and define inductively \begin{equation} \label{Nl_def} \begin{split} \mathcal{N}^l(w_1,\ldots,w_k)=\{t \in \mathcal{N}_{W_1 \oplus \cdots \oplus W_k}\;|\dim \ker t \leq l + \rank \tau_k(t), \\ t|_{W_1 \oplus \dots \oplus W_{k-1}} \in \mathcal{N}^l(w_1,\dots,w_{k-1}) \} \end{split} \end{equation} for $k \ge 3$. Finally, set $\mathcal{L}_l(w_1,\ldots,w_r)=\mathcal{L}(w_1,\ldots,w_r) \cap \pi^{-1} (\mathcal{N}^l(w_1,\ldots,w_r))$. By definition $\mathcal{L}_l(w_1,\ldots,w_r)$ is an open subvariety of $\mathcal{L}(w_1,\ldots,w_r)$ and therefore $H_{top}(\mathcal{L}_l(w_1,\ldots,w_r))$ is a $H_{top}(Z(w))$-module. \begin{theo} $H_{top}(\mathcal{L}_l(w_1,\ldots,w_r))$ is isomorphic to $V_{w_1} \otimes_l \cdots \otimes_l V_{w_r}$ as a $U(\mathfrak{sl}_2)$-module. \end{theo} \noindent \begin{proof} We proceed by induction. Suppose $r=2$. It is enough to describe the irreducible components of $\mathcal{L}_l(w_1,w_2)$ corresponding to highest weight vectors in the $U(\mathfrak{sl}_2)$-module $H_{top}(\mathcal{L}_l(w_1,w_2))$. The irreducible components of $\mathcal{L}(w_1,w_2)$ corresponding to highest-weight vectors are the $$I_v=\{(i,j)\;|\;j(V) \subset W_1,\;i(W_2)=V,\;i(W_1)=0\}, \qquad\mathrm{for\;} 0 \leq v \leq w_1,w_2$$ and the associated highest weight is $w_1+w_2-2v$. Note that the condition $\dim \ker ji \leq l$ is equivalent to the condition $w_1+ w_2-2v \leq 2l - w_1-w_2$. Now suppose that the theorem is proved for tensor products of $r-1$ modules, and let us set $W'=W_1 \oplus\cdots\oplus W_{r-1}$. For each $u \in \mathbb{N}$ let us set $\mathcal{N}_{W'}(u)=\{t \in \mathcal{N}_{W'}\; | \rank t=u\}$. Recall that $\mathcal{L}_l(w_1,\ldots,w_{r-1})$ is Lagrangian and that $\pi$ is semi-small with all strata being relevant (cf. \cite[$\S10$]{Nak}). Thus $\pi(\mathcal{L}_l(w_1,\ldots ,w_{r-1})) \cap \mathcal{N}_{W'}(u)$ is a subvariety of $\mathcal{N}_{W'}(u)$ of dimension $\frac{1}{2}\mathrm{dim\;}\mathcal{N}_{W'}(u)$. Let $\mathcal{C}_1^u,\ldots , \mathcal{C}_{s(u)}^u$ be its irreducible components. By the induction hypothesis, \begin{equation}\label{E:1} s(u)=\mathrm{dim\;Hom}_{\mathfrak{sl}_2}(V_{w'-2u},V_{w_1} \otimes_l \cdots \otimes_l V_{w_{r-1}}). \end{equation} The irreducible components of $\mathcal{L}_l(v,w_1,\ldots,w_r)$ corresponding to highest weight vectors of $H_{top}(\mathcal{L}_l(w_1,\ldots, w_r))$ are of the form $\overline{I_\chi}$ with $$I_{\chi}=\{(i,j)\;|i(W)=V,\;j(V) \subset W',\;(i_{W'},j) \in \chi\}$$ where $\chi$ is an irreducible component of $\mathcal{L}_l (v,w_1,\ldots,w_{r-1})$, and the associated highest weight is $w-2v$ (note that $I_\chi$ may be empty). Let us fix $u \in \mathbb{N}$ and $\mathcal{C}_k^u$ for some $k \leq s(u)$. Let $\chi \subset \overline{\pi^{-1}(\mathcal{C}_k^u)} \cap \mathcal{L}_l(v,w_1,\ldots,w_{r-1})$ be an irreducible component. Then $I_\chi \subset \overline{\mathcal{L}_l(w_1,\ldots, w_r)}$ if for all $(i,j)$ in (an open dense subset of) $I_{\chi}$ we have $\mathrm{dim\;ker\;}ji \leq l +u.$ This is equivalent to the condition that the corresponding highest weight $w-2v$ satisfies \begin{equation}\label{E:2} w-2v \leq 2l-w_r-(w'-2u).
\end{equation} Equations (\ref{E:1}) and (\ref{E:2}) together imply that $$H_{top}(\mathcal{L}_l(w_1,\ldots,w_r))\simeq (V_{w_1} \otimes_l \cdots \otimes_l V_{w_{r-1}})\otimes_l V_{w_r}$$ as a $U(\mathfrak{sl}_2)$-module, as desired. \end{proof} \paragraph{Remarks.} i) The above construction is not canonical in the sense that it was made using a choice of a bracketing of the tensor product, namely $$(\cdots (( V_{w_1} \otimes_l V_{w_2}) \otimes_l V_{w_3} )\cdots \otimes_l V_{w_r}).$$ Different bracketings give rise to different (possibly non-isomorphic) open subvarieties of $\mathcal{L}_l(w_1,\ldots,w_r)$ realizing the same fusion tensor product.\\ ii) One might be tempted to define in an analogous fashion a truncated tensor product for finite-dimensional representations of $U_q(\widehat{\mathfrak{sl}}_2)$ by considering equivariant K-theory of $\mathcal{L}_l(w_1,w_2)$ rather than Borel-Moore homology. However, it is easy to check that (because of Remark i)) the resulting product is not associative. \section{A graphical calculus for the fusion product} \paragraph{4.1.} We first recall some results on the graphical calculus of tensor products and intertwiners. For a more complete treatment, see \cite{FK97} and \cite{Savage}. In the graphical calculus, $V_d$ is depicted by a box marked $d$ with $d$ vertices. To depict the set $CM_{w_1, \dots, w_r}^\mu$ of crossingless matches, we place the boxes representing the $V_{w_i}$ on a horizontal line and the box representing $V_\mu$ on another horizontal line lying above the first one. $CM_{w_1, \dots, w_r}^\mu$ is then the set of non-intersecting curves (up to isotopy) connecting the vertices of the boxes such that the following conditions are satisfied: \begin{enumerate} \item Each curve connects exactly two vertices. \item Each vertex is the end point of exactly one curve. \item No curve joins a box to itself. \item The curves lie inside the box bounded by the two horizontal lines and the vertical lines through the extreme right and left points. \end{enumerate} We call the curves joining two lower boxes \emph{lower curves} and those joining a lower and an upper box \emph{middle curves}. We define the set of oriented crossingless matches $OCM_{w_1,\dots, w_r}^\mu$ to be the set of elements of $CM_{w_1, \dots, w_r}^\mu$ along with an orientation of the curves such that all lower curves are oriented to the left and all middle curves are oriented so that those oriented down are to the right of those oriented up. As shown in \cite{FK97}, the set of crossingless matches $CM_{w_1, \dots, w_r}^\mu$ is in one-to-one correspondence with a basis of the set of intertwiners \[ H_{w_1, \dots, w_r}^\mu \stackrel{\text{def}}{=} \Hom (V_{w_1} \otimes \dots \otimes V_{w_r}, V_\mu). \] The matrix coefficients of the intertwiner associated to a particular crossingless match are given by Theorem~2.1 of \cite{FK97}. We will also need to define the set of \emph{lower crossingless matches} $LCM_{w_1, \dots, w_r}^\mu$ and \emph{oriented lower crossingless matches} $OLCM_{w_1, \dots, w_r}^\mu$. Elements of $LCM_{w_1, \dots, w_r}^\mu$ and $OLCM_{w_1, \dots, w_r}^\mu$ are obtained from elements of $CM_{w_1, \dots, w_r}^\mu$ and $OCM_{w_1, \dots, w_r}^\mu$ (respectively) by removing the upper box (thus converting lower end points of middle curves to unmatched vertices). For the case of $OLCM_{w_1, \dots, w_r}^\mu$, unmatched vertices will still have an orientation (indicated by an arrow attached to the vertex). 
As for middle curves in the case of $OCM_{w_1,\dots, w_r}^\mu$, the unmatched vertices in an element of $OLCM_{w_1, \dots, w_r}^\mu$ must be arranged so that those oriented down are to the right of those oriented up. Note that the set of lower crossingless matches $LCM=LCM_{w_1, \dots, w_r}$ is in one-to-one correspondence with the set $\bigcup_{\mu} CM_{w_1, \dots, w_r}^\mu$. From now on, we will identify these two sets. \paragraph{4.2.} Let $s$ be a bracketing of the tensor product $V_{w_1} \otimes \dots \otimes V_{w_r}$. Pick an ordering of the tensor operations compatible with this bracketing. For each $n$ such that $1 \le n \le r-1$, let $S_n$ be the set of the $V_{w_i}$ separated from the $n^{th}$ tensor product operation only by operations ranked lower than or equal to $n$. Then let $ ^l_sCM_{w_1, \dots, w_r}^\mu$ be the set of elements of $CM_{w_1, \dots, w_r}^\mu$ satisfying the following condition: for each $n$, the number of curves connecting $V_{w_i}$'s in $S_n$ to either $V_{w_i}$'s in $S_n$ on the other side of the $n^{th}$ tensor product symbol or $V_{w_i}$'s not in $S_n$ is less than or equal to $l$. Note that this condition does not depend on the particular ordering so long as it is compatible with the bracketing $s$. Let $ ^l_sLCM = {^l_sLCM}_{w_1, \dots, w_r}$ be the set of lower crossingless matches satisfying the same condition (where unmatched vertices are always counted as curves with the other end point outside of any $S_n$) and identify this set with the set $\bigcup_{\mu} {^l_sCM}_{w_1, \dots, w_r}^\mu$. We define $ ^l_sOCM_{w_1, \dots, w_r}^\mu$ and $ ^l_sOLCM = {^l_sOLCM}_{w_1, \dots, w_r}$ similarly (and the corresponding identification is made). Note that in the case $r=2$ the condition in the definition simplifies to the requirement that the total number of curves (including middle curves) is less than or equal to $l$. In fact, the given definition simply arises from applying this condition to each tensor product operation (in the given ordering), neglecting curves with both end points in $V_{w_i}$'s which have already been tensored together. \begin{prop} The set $ ^l_sCM_{w_1, \dots, w_r}^\mu$ is in one-to-one correspondence with a basis of the space of intertwiners $ ^lH_{w_1, \dots, w_r}^\mu \stackrel{\text{def}}{=} \Hom (V_{w_1} \otimes_l \dots \otimes_l V_{w_r}, V_\mu)$. \end{prop} \begin{proof} We first consider the case $r=2$. For any $b \in CM_{w_1, w_2}^\mu$, the total number of curves is equal to $(w_1 + w_2 + \mu)/2$ (since each vertex is an end point of exactly one curve). Thus the condition that the total number of curves is less than or equal to $l$ reduces to $w_1 + w_2 + \mu \le 2l$ or $\mu \le 2l - w_1 - w_2$ as desired. Now assume the result holds for the product of fewer than $r$ irreducible modules and that for the product of $V_{w_1}$ through $V_{w_r}$, the last (i.e.\ the $(r-1)^{th}$) tensor product operation is the one occurring between $V_{w_k}$ and $V_{w_{k+1}}$ ($k<r$). Note that \[ \bigoplus_\nu {^lH}_{w_1, \dots, w_{k}}^\nu \otimes {^lH}_{\nu, w_{k+1}, \dots, w_{r}}^\mu \cong {^lH}_{w_1, \dots, w_r}^\mu \] via the map $f \otimes g \mapsto g(f \otimes \text{id}_{V_{w_{k+1}} \otimes \dots \otimes V_{w_r}})$. Now, if $s_1$ is the bracketing of the first $k$ modules and $s_2$ is the bracketing of the last $r-k$ modules, it is easy to see that \[ \sum_\nu {^l_{s_1}CM}_{w_1, \dots, w_k}^\nu \times {^l_{s_2}CM}_{\nu, w_{k+1}, \dots, w_r}^\mu \cong {^l_sCM}_{w_1, \dots, w_r}^\mu \; \text{(as sets)}.
\] The result now follows by induction. \end{proof} \paragraph{4.3.} From the associativity of the fusion tensor product it follows immediately that the order of the set ${^l_sCM}_{w_1, \dots, w_r}^\mu$ is independent of the bracketing~$s$. However, we will present here a direct proof. \begin{prop} The order of the set ${^l_sCM}_{w_1, \dots, w_r}^\mu$ is independent of the bracketing~$s$. \end{prop} \begin{proof} It suffices to prove the statement for three factors. Let $s_1$ be the bracketing $(V_{w_1} \otimes V_{w_2}) \otimes V_{w_3}$ and $s_2$ be the bracketing $V_{w_1} \otimes (V_{w_2} \otimes V_{w_3})$. We will set up a one-to-one correspondence between ${^l_{s_1}CM}_{w_1, \dots, w_r}^\mu$ and ${^l_{s_2}CM}_{w_1, \dots, w_r}^\mu$. We will first establish a one-to-one correspondence between the subsets consisting of those crossingless matches with no curves connecting $V_{w_1}$ and $V_{w_3}$ and a fixed number $n$ of lower curves. Let $a$ (resp. $b$) denote the number of curves connecting $V_{w_1}$ (resp. $V_{w_3}$) to $V_{w_2}$. Thus $a+b = n$. Now, the number of curves with at least one end point in $V_{w_1}$ or $V_{w_2}$ is $w_1 + w_2 - a$ and the total number of curves minus the curves connecting $V_{w_1}$ to $V_{w_2}$ is $w_1 + w_2 + w_3 - n - a$. Thus a crossingless match lies in ${^l_{s_1}CM}_{w_1, \dots, w_r}^\mu$ if and only if \[ w_1 + w_2 - a \le l,\ w_1 + w_2 + w_3 - n - a \le l. \] Similarly, a crossingless match lies in ${^l_{s_2}CM}_{w_1, \dots, w_r}^\mu$ if and only if \[ w_2 + w_3 - b \le l,\ w_1 + w_2 + w_3 - n - b \le l. \] Now, the largest possible value of $a$ is $\min(w_1,n)$ and the largest possible value of $b$ is $\min(w_3,n)$. Therefore, by counting the possible values of $a$, the number of crossingless matches in ${^l_{s_1}CM}_{w_1, \dots, w_r}^\mu$ with no curves connecting $V_{w_1}$ and $V_{w_3}$ and with $n$ lower curves is equal to \[ r_a = \min(w_1,n) - \max(w_1 + w_2 - l, w_1 + w_2 + w_3 - n - l) + 1 \] if this number is positive and zero otherwise. Similarly, the number of crossingless matches in ${^l_{s_2}CM}_{w_1, \dots, w_r}^\mu$ with no curves connecting $V_{w_1}$ and $V_{w_3}$ and with $n$ lower curves is equal to \[ r_b = \min(w_3,n) - \max(w_2 + w_3 - l, w_1 + w_2 + w_3 - n - l) + 1 \] if this number is positive and zero otherwise. Considering the four cases $n \le w_1, w_3$; $n \ge w_1, w_3$; $w_1 \le n \le w_3$ and $w_3 \le n \le w_1$ we easily see that $r_a = r_b$ in all cases. It remains to establish a one-to-one correspondence between the elements of ${^l_{s_1}CM}_{w_1, \dots, w_r}^\mu$ and ${^l_{s_2}CM}_{w_1, \dots, w_r}^\mu$ with $c \ge 1$ curves joining $V_{w_1}$ and $V_{w_3}$. Fix the number of lower curves with one end point in $V_{w_2}$ to be $n$. Since $V_{w_1}$ and $V_{w_3}$ are connected, there can be no middle curves with end points in $V_{w_2}$. Thus $n=w_2$. Define $a$ and $b$ as above. By an argument analogous to that given in the earlier case, the number of crossingless matches in ${^l_{s_1}CM}_{w_1, \dots, w_r}^\mu$ with $c \ge 1$ curves connecting $V_{w_1}$ to $V_{w_3}$ and with $n$ lower curves with one end point in $V_{w_2}$ is equal to \[ r_a = \min(w_1-c,w_2) - \max(w_1 + w_2 - l, w_1 + w_3 - l - c) + 1 \] if this number is positive and zero otherwise.
Similarly, the number of crossingless matches in ${^l_{s_2}CM}_{w_1, \dots, w_r}^\mu$ with $c \ge 1$ curves connecting $V_{w_1}$ to $V_{w_3}$ and with $n$ lower curves with one end point in $V_{w_2}$ is equal to \[ r_b = \min(w_3-c,w_2) - \max(w_2 + w_3 - l, w_1 + w_3 - l - c) + 1 \] if this number is positive and zero otherwise. Considering the four cases $w_2 \le w_1-c, w_3-c$; $w_2 \ge w_1-c, w_3-c$; $w_1-c \le w_2 \le w_3-c$ and $w_3-c \le w_2 \le w_1-c$ we easily see that $r_a = r_b$ in all cases. This concludes the proof. \end{proof} From now on, we will use the bracketing $( \cdots ((V_{w_1} \otimes V_{w_2}) \otimes V_{w_3}) \cdots \otimes V_{w_r})$ unless explicitly stated otherwise. Thus, if we omit a subscript $s$, we take $s$ to be this bracketing. \section{The fusion product via constructible functions} \paragraph{5.1.} Fix a $w=w_1 + \dots + w_r$ dimensional $\mathbb{C}$-vector space $W$ and let \begin{equation*} \begin{split} \mathfrak{T}(w_1, \dots, w_r) = \{(\mathbf{D}=\{D_i\}_{i=0}^r,V_0,t) \;|\; 0 = D_0 \subset D_1 \subset \dots \subset D_r = W,\, V_0 \subset W, \\ t \in \End W,\, t(D_i) \subset D_{i-1}, \dim (D_i/D_{i-1}) = w_i, \im t \subset V_0 \subset \ker t\}. \end{split} \end{equation*} Consider the projection \begin{equation*} \begin{split} \mathfrak{T}(w_1,\dots,w_r) \to \{\mathbf{D}=\{D_i\}_{i=0}^r \;|\; 0=D_0 \subset D_1 \subset \dots \subset D_r = W, \\ \dim(D_i/D_{i-1}) = w_i\} \end{split} \end{equation*} given by $(\mathbf{D},V_0,t) \mapsto \mathbf{D}$. It is easy to see that the fibers of this map are all isomorphic and that in \cite{Savage} one could replace the tensor product variety $\mathfrak{T}(w_1,\dots,w_r)$ by this fiber, restrict the constructible functions to this fiber and the theory would remain unchanged. Let $\mathfrak{T}_{\mathbf{D}}(w_1,\dots,w_r)$ denote the fiber over a flag $\mathbf{D}$. If we define \[ D_i = W_0 \oplus \dots \oplus W_i,\; 0\le i \le r, \] then obviously \[ \mathfrak{T}_{\mathbf{D}}(w_1,\dots,w_r) \cong \mathcal{L}(w_1,\ldots,w_r) \] and in the sequel we will identify these two varieties. \paragraph{5.2.} If $b \in CM_{w_1, \dots, w_r}^\mu$ is an unoriented crossingless match, let \[ Y_b = \{(\mathbf{D},V_0,t) \in \mathfrak{T}(w_1, \dots, w_r)\; |\; \dim (\ker t \cap D_i)/(\ker t \cap D_{i-1}) = b_i\} \] where $b_i$ is the number of left end points (of lower curves) and lower end points (of middle curves) contained in the box representing $V_{w_i}$. It is shown in \cite{Savage} (Proposition~3.2.1) that $\sqcup_b Y_b = \mathfrak{T}(w_1, \dots, w_r)$ and that the closures of the $Y_b$ are precisely the irreducible components of $\mathfrak{T}(w_1, \dots, w_r)$. Let $X_b = Y_b \cap \mathcal{L}(w_1,\dots,w_r)$. Then obviously $\mathcal{L}(w_1, \dots, w_r) = \sqcup_{b \in LCM} X_b$. \begin{prop} $\mathcal{L}_l(w_1,\dots,w_r) = \sqcup_{b \in {^lLCM}} X_b$. \end{prop} \begin{proof} We see from equation~\eqref{Nl_def} that $\mathcal{L}_l(w_1, \dots, w_r)$ is the set of all $(t,V_0) \in \mathcal{L}(w_1, \dots, w_r)$ such that \[ \dim \ker t|_{W_1 \oplus \dots \oplus W_i} \le l + \rank t|_{W_1 \oplus \dots \oplus W_{i-1}} \; \forall \; 1 \le i \le r. \] Now, by the definition of the $X_b$, if $(t,V_0) \in X_b$ for some $b \in LCM$ then $\dim \ker t|_{W_1 \oplus \dots \oplus W_i}$ is equal to $\sum_{j=1}^i w_j$ minus the number of lower curves with both end points among the lower $i$ boxes.
Also, $\rank t|_{W_1 \oplus \dots \oplus W_{i-1}}$ is equal to the number of lower curves with both end points among the lower $i-1$ boxes. Let $c_i$ denote the number of curves with both end points among the lower $i$ boxes. Then \begin{gather*} \dim \ker t|_{W_1 \oplus \dots \oplus W_i} \le l + \rank t|_{W_1 \oplus \dots \oplus W_{i-1}} \\ \Leftrightarrow \sum_{j=1}^i w_j - c_i \le l + c_{i-1} \\ \Leftrightarrow \sum_{j=1}^i w_j - 2c_{i-1} - \#\{\text{curves with right end point in $i^{th}$ box}\} \le l \\ \Leftrightarrow \begin{split} \sum_{j=1}^i w_j - \#\{\text{end points in first $i-1$ boxes of lower curves with both} \\ \text{end points in first $i$ boxes}\} \le l \end{split} \end{gather*} and this is easily seen to be equivalent to the condition that $b \in { ^lLCM}$ (with the default bracketing). \end{proof} \paragraph{5.3.} We will now define a $U(\mathfrak{sl}_2)$-module structure on a certain space of constructible functions on $\mathcal{L}_l(w_1, \ldots, w_r)$. For $\mathbf{a} \in OLCM_{w_1, \dots, w_r}$, let ${\bar{\mathbf{a}}}$ be the associated element of $LCM_{w_1, \dots, w_r}$ obtained by forgetting the orientation. Define \[ Y_{\mathbf{a}} = \{(\mathbf{D}, V_0, t) \in Y_{\bar{\mathbf{a}}}\; |\; \dim W/V_0 = \#\{\text{up-oriented vertices of $\mathbf{a}$}\}\} \] where the right end points of lower curves are oriented up (as well as the up-oriented unmatched vertices). Let $X_{\mathbf{a}} = Y_{\mathbf{a}} \cap \mathcal{L}(w_1, \dots, w_r)$. Then it follows from equation~(33) of \cite{Savage} that \[ X_b = \bigcup_{\mathbf{a} : \bar{\mathbf{a}} = b} X_{\mathbf a}. \] Now let \[ \mathcal{B}^l_s = \{\ensuremath{\mathbf {1}}_{Y_{\mathbf{a}}}\; |\; \mathbf{a} \in {^lOLCM}\} \] where $\ensuremath{\mathbf {1}}_A$ is the function that is equal to one on the set $A$ and zero elsewhere. Let \[ \mathcal{T}^l = \mathcal{T}^l_s(w_1, \dots, w_r) = \operatorname{Span} \mathcal{B}^l_s. \] We endow $\mathcal{T}^l$ with the structure of a $U(\mathfrak{sl}_2)$-module as in \cite{Savage}. \begin{theo} $\mathcal{T}^l_s(w_1, \dots, w_r)$ is isomorphic as a $U(\mathfrak{sl}_2)$-module to $V_{w_1} \otimes_l \dots \otimes_l V_{w_r}$ and $\mathcal{B}^l_s$ is a basis for $\mathcal{T}^l_s(w_1, \dots, w_r)$ adapted to its decomposition into a direct sum of irreducible representations. That is, for a given $b \in {^lCM_{w_1, \dots, w_r}^\mu}$, the space $\operatorname{Span} \{\ensuremath{\mathbf {1}}_{Y_{\mathbf{a}}} \; |\; \bar{\mathbf{a}} = b \}$ is isomorphic to the irreducible representation $V_\mu$ via the map \[ \ensuremath{\mathbf {1}}_{Y_{\mathbf{a}}} \mapsto {^\mu v_{\mu - 2\#\{\text{unmatched down-oriented vertices of $\mathbf{a}$}\}}}. \] \end{theo} \begin{proof} The second part of the theorem follows from Theorem~3.3.1 of \cite{Savage}. Then \begin{align*} \mathcal{T}^l &= \bigoplus_{\mu} \bigoplus_{b \in {^lCM}_{w_1, \dots, w_r}^\mu} \operatorname{Span} \{\ensuremath{\mathbf {1}}_{Y_{\mathbf{a}}} \; |\; \bar{\mathbf{a}} = b\} \\ &\cong \bigoplus_{\mu} \bigoplus_{b \in {^lCM}_{w_1, \dots, w_r}^\mu} V_\mu \\ &\cong \bigoplus_{\mu} {^lH}_{w_1, \dots, w_r}^\mu \otimes V_\mu \\ &\cong V_{w_1} \otimes_l \dots \otimes_l V_{w_r} \end{align*} where ${^lH}_{w_1, \dots, w_r}^\mu$ is given the trivial module structure. \end{proof} \paragraph{Remarks.} We have used here the standard bracketing $(\cdots((V_{w_1} \otimes_l V_{w_2}) \otimes_l V_{w_3}) \cdots \otimes_l V_{w_r})$. However, one could easily modify the definitions to use any other bracketing. The proofs would need only slight changes.
Of course, as noted above, while we would still recover the structure of the fusion product, the varieties involved would be non-isomorphic in general. \noindent Olivier Schiffmann,\\ DMA ENS Ulm, 45 rue d'Ulm, 75005 Paris FRANCE; email:\;\texttt{[email protected]}\\ Alistair Savage,\\ Department of Mathematics, Yale University, P.O. Box 208283, New Haven, CT 06520-8283, USA; email:\;\texttt{[email protected]} \end{document}
\begin{document} \begin{abstracts} \abstractin{english} We use mixed Hodge theory to show that the functor of singular chains with rational coefficients is formal as a lax symmetric monoidal functor, when restricted to complex varieties whose weight filtration in cohomology satisfies a certain purity property. This has direct applications to the formality of operads or, more generally, of algebraic structures encoded by a colored operad. We also prove a dual statement, with applications to formality in the context of rational homotopy theory. In the general case of complex varieties with non-pure weight filtration, we relate the singular chains functor to a functor defined via the first term of the weight spectral sequence. \abstractin{french} Nous utilisons la théorie de Hodge mixte pour montrer que le foncteur des chaînes singulières à coefficients rationnels est formel, comme foncteur symétrique monoïdal lax, lorsqu’on le restreint aux variétés complexes dont la filtration par le poids en cohomologie satisfait une certaine propriété de pureté. Ce résultat a des applications directes à la formalité d’opérades ou plus généralement à des structures algébriques encodées par une opérade colorée. Nous prouvons aussi le résultat dual, avec des applications à la formalité dans le contexte de la théorie de l’homotopie rationnelle. Dans le cas général d’une variété dont la filtration par le poids n'est pas pure, nous relions le foncteur des chaînes singulières à un foncteur défini par la première page de la suite spectrale des poids. \end{abstracts} \selectlanguage{english} \title{Mixed Hodge structures and formality of symmetric monoidal functors} \section{Introduction} There is a long tradition of using Hodge theory as a tool for proving formality results. The first instance of this idea can be found in \cite{DGMS} where the authors prove that compact K\"{a}hler manifolds are formal (i.e. the commutative differential graded algebra of differential forms is quasi-isomorphic to its cohomology). In the introduction of that paper, the authors explain that their intuition came from the theory of étale cohomology and the fact that the degree $n$ étale cohomology group of a smooth projective variety over a finite field is pure of weight $n$. This purity is what heuristically prevents the existence of non-trivial Massey products. In the setting of complex algebraic geometry, Deligne introduced in \cite{DeHII,DeHIII} a filtration on the rational cohomology of every complex algebraic variety $X$, called the \textit{weight filtration}, with the property that each of the successive quotients of this filtration behaves as the cohomology of a smooth projective variety, in the sense that it has a Hodge-type decomposition. Deligne's mixed Hodge theory was subsequently promoted to the rational homotopy of complex algebraic varieties (see \cite{Mo}, \cite{Ha}, \cite{Na}). This can then be used to make the intuition of the introduction of \cite{DGMS} precise. In \cite{Dupont} and \cite{ChCi1}, it is shown that purity of the weight filtration in cohomology implies formality, in the sense of rational homotopy, of the underlying topological space. However, the treatment of the theory in these references lacks functoriality and is restricted to smooth varieties in the first paper and to projective varieties in the second. In another direction, in the paper \cite{santosmoduli}, the authors elaborate on the method of \cite{DGMS} and prove that operads (as well as cyclic operads, modular operads, etc.) 
internal to the category of compact K\"{a}hler manifolds are formal. Their strategy is to introduce the functor of de Rham currents, a lax symmetric monoidal functor from compact Kähler manifolds to chain complexes which is quasi-isomorphic to the singular chains functor as a lax symmetric monoidal functor. Then they show that this functor is formal as a lax symmetric monoidal functor. Recall that, if $\mathcal{C}$ is a symmetric monoidal category and $\mathcal{A}$ is an abelian symmetric monoidal category, a lax symmetric monoidal functor $F:\mathcal{C}\longrightarrow \cat{Ch}_*(\mathcal{A})$ is said to be formal if it is weakly equivalent to $H\circ F$ in the category of lax symmetric monoidal functors. It is then straightforward to see that such functors send operads in $\mathcal{C}$ to formal operads in $\cat{Ch}_*(\mathcal{A})$. The functoriality also immediately gives us that a map of operads in $\mathcal{C}$ is sent to a formal map of operads or that an operad with an action of a group $G$ is sent to a formal operad with a $G$-action. Of course, there is nothing specific about operads in these statements and they would be equally true for monoids, cyclic operads, modular operads, or more generally any algebraic structure that can be encoded by a colored operad. The purpose of this paper is to push this idea of formality of symmetric monoidal functors from complex algebraic varieties in several directions in order to prove the most general possible theorem of the form ``purity implies formality''. Before explaining our results more precisely, we need to introduce a bit of terminology. Let $X$ be a complex algebraic variety. Deligne's weight filtration on the rational $n$-th cohomology vector space of $X$ is bounded by \[0=W_{-1}H^n(X,\mathbb{Q})\subseteq W_{0}H^n(X,\mathbb{Q})\subseteq \cdots \subseteq W_{2n}H^n(X,\mathbb{Q})=H^n(X,\mathbb{Q}).\] If $X$ is smooth then $W_{n-1}H^n(X,\mathbb{Q})=0$, while if $X$ is projective $W_{n}H^n(X,\mathbb{Q})=H^n(X,\mathbb{Q})$. In particular, if $X$ is smooth and projective then we have \[0=W_{n-1}H^n(X,\mathbb{Q})\subseteq W_{n}H^n(X,\mathbb{Q})=H^n(X,\mathbb{Q}).\] In this case, the weight filtration on $H^n(X,\mathbb{Q})$ is said to be \textit{pure of weight $n$}. More generally, for $\alpha$ a rational number and $X$ a complex algebraic variety, we say that the weight filtration on $H^*(X,\mathbb{Q})$ is \textit{$\alpha$-pure} if, for all $n\geq 0$, we have \[Gr^W_pH^n(X,\mathbb{Q}):={{W_pH^n(X,\mathbb{Q})}\over{W_{p-1}H^n(X,\mathbb{Q})}}=0\text{ for all }p\neq \alpha n.\] The bounds on the weight filtration tell us that this makes sense only when $0\leq \alpha\leq 2$. Note as well that if we write $\alpha=a/b$ with $(a,b)=1$, $\alpha$-purity implies that the cohomology is concentrated in degrees that are divisible by $b$, and that $H^{bn}(X,\mathbb{Q})$ is pure of weight $an$. Aside from smooth projective varieties, some well-known examples of varieties with $1$-pure weight filtration are: projective varieties whose underlying topological space is a $\mathbb{Q}$-homology manifold (\cite[Theorem 8.2.4]{DeHIII}) and the moduli spaces $\mathcal{M}_{Dol}$ and $\mathcal{M}_{dR}$ appearing in the non-abelian Hodge correspondence (\cite{Hausel}). Some examples of varieties with $2$-pure weight filtration are: complements of hyperplane arrangements (\cite{kimweights}), which include the moduli spaces $\mathcal{M}_{0,n}$ of smooth projective curves of genus 0 with $n$ marked points, and complements of toric arrangements (\cite{Dupont}).
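For the simplest illustration of $2$-purity, take $X=\mathbb{C}^*$, the complement of a single hyperplane in the affine line: $H^0(X,\mathbb{Q})=\mathbb{Q}$ has weight $0$ and $H^1(X,\mathbb{Q})\cong\mathbb{Q}(-1)$ has weight $2$, so that $Gr^W_pH^n(X,\mathbb{Q})=0$ whenever $p\neq 2n$.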
As we shall see in Section $\ref{Section_formalschemes}$, complements of codimension $d$ subspace arrangements are examples of smooth varieties whose weight filtration in cohomology is $2d/(2d-1)$-pure. For instance, this includes configuration spaces of points in $\mathbb{C}^d$. Our main result is Theorem $\ref{theo: main covariant}$. We show that, for a non-zero rational number $\alpha$, the singular chains functor \[S_*(-,\mathbb{Q}):\mathrm{Var}_\mathbb{C}\longrightarrow \cat{Ch}_*(\mathbb{Q})\] is formal as a lax symmetric monoidal functor when restricted to complex varieties whose weight filtration in cohomology is $\alpha$-pure. Here $\mathrm{Var}_\mathbb{C}$ denotes the category of complex algebraic varieties (i.e the category of schemes over $\mathbb{C}$ that are reduced, separated and of finite type). This generalizes the main result of \cite{santosmoduli} on the formality of $S_*(X,\mathbb{Q})$ for any operad $X$ in smooth projective varieties, to the case of operads in possibly singular and/or non-compact varieties with pure weight filtration in cohomology. As direct applications of the above result, we prove formality of the operad of singular chains of some operads in complex varieties, such as the noncommutative analog of the (framed) little 2-discs operad introduced in \cite{dotsenkononcommutative} and the monoid of self-maps of the complex projective line studied by Cazanave in \cite{cazanavealgebraic} (see Theorems $\ref{theo: little}$ and $\ref{theo: monoidformal}$). We also reinterpret in the language of mixed Hodge theory the proofs of the formality of the little disks operad and Getzler's gravity operad appearing in \cite{Petersen} and \cite{dupontgravity}. These last two results do not fit directly in our framework, since the little disks operad and the gravity operad do not quite come from operads in algebraic varieties. However, the action of the Grothendieck-Teichmüller group provides a bridge to mixed Hodge theory. In Theorem \ref{theo: main contravariant} we prove a dual statement of our main result, showing that Sullivan's functor of piecewise linear forms \[\mathcal{A}_{PL}^*:\mathrm{Var}_\mathbb{C}^{\mathrm{op}}\longrightarrow \cat{Ch}_*(\mathbb{Q})\] is formal as a lax symmetric monoidal functor when restricted to varieties whose weight filtration in cohomology is $\alpha$-pure, where $\alpha$ is a non-zero rational number. This gives functorial formality in the sense of rational homotopy for such varieties, generalizing both ``purity implies formality'' statements appearing in \cite{Dupont} for smooth varieties and in \cite{ChCi1} for singular projective varieties. Our generalization is threefold: we allow $\alpha$-purity (instead of just $1$- and $2$-purity), we obtain functoriality and we study possibly singular and open varieties simultaneously. Theorems $\ref{theo: main covariant}$ and $\ref{theo: main contravariant}$ deal with situations in which the weight filtration is pure. In the general context with mixed weights, it was shown by Morgan \cite{Mo} for smooth varieties and in \cite{CG1} for possibly singular varieties, that the first term of the multiplicative weight spectral sequence carries all the rational homotopy information of the variety. In Theorem $\ref{E1formality}$ we provide the analogous statement for the lax symmetric monoidal functor of singular chains. 
A dual statement for Sullivan's functor of piecewise linear forms is proven in Theorem $\ref{E1formality_contravariant}$, enhancing the results of \cite{Mo} and \cite{CG1} with functoriality. \\ We now explain the structure of this paper. The first four sections are purely algebraic. In Section $\ref{Section_formal_functors}$ we collect the main properties of formal lax symmetric monoidal functors that we use. In particular, in Theorem \ref{theo: Hinich rigidification} we recall a recent theorem of rigidification due to Hinich that says that, over a field of characteristic zero, formality of functors can be checked at the level of $\infty$-functors. We also introduce the notion of $\alpha$-purity for complexes of bigraded objects in a symmetric monoidal abelian category and show that, when restricted to $\alpha$-pure complexes, the functor defined by forgetting the degree is formal. The connection of this result with mixed Hodge structures is done in Section $\ref{Section_MHS}$, where we prove a symmetric monoidal version of Deligne's weak splitting of mixed Hodge structures over $\mathbb{C}$. Such splitting is a key tool towards formality. In Section $\ref{Section_descent}$ we study lax symmetric monoidal functors to vector spaces over a field of characteristic zero equipped with a compatible filtration. We show, in Theorem $\ref{descens_rsplittings}$, that the existence of a lax symmetric monoidal splitting for such functors can be verified after extending the scalars to a larger field. As a consequence, we obtain splittings for the weight filtration over $\mathbb{Q}$. This result enables us to bypass the theory of descent of formality for operads of \cite{santosmoduli}, which assumes the existence of minimal models. Putting the above results together we are able to show that the forgetful functor \[\cat{Ch}_*(\mathrm{MHS}_\mathbb{Q})\longrightarrow \cat{Ch}_*(\mathbb{Q})\] induced by sending a rational mixed Hodge structure to its underlying vector space is formal when restricted to those complexes whose mixed Hodge structure in homology is $\alpha$-pure. In order to obtain a symmetric monoidal functor from the category of complex varieties to an algebraic category encoding mixed Hodge structures, we have to consider more flexible objects than complexes of mixed Hodge structures. This is the content of Section $\ref{Section_MHC}$, where we study the category $\mathrm{MHC}_\Bbbk$ of mixed Hodge complexes. In Theorem $\ref{theo: equivinfty}$ we construct an equivalence of symmetric monoidal $\infty$-categories between mixed Hodge complexes and complexes of mixed Hodge structures. This result is a lift of Beilinson's equivalence of triangulated categories $D^b(\mathrm{MHS}_\Bbbk)\longrightarrow \mathrm{ho}(\mathrm{MHC}_\Bbbk)$ (see also \cite{Drew}, \cite{BNT}). The geometric character of this paper comes in Section $\ref{Section_logaritmic}$, where we construct a symmetric monoidal functor from complex varieties to mixed Hodge complexes. This is done in two steps. First, for smooth varieties, we dualize Navarro's construction \cite{Na} of functorial mixed Hodge complexes to obtain a symmetric monoidal $\infty$-functor \[\mathscr{D}_*:\icat{N}(\mathrm{Sm}_{\mathbb{C}})\longrightarrow \mathbf{MHC}_\mathbb{Q}\] such that its composite with the forgetful functor $\mathbf{MHC}_\mathbb{Q}\longrightarrow \icat{Ch}_*(\mathbb{Q})$ is naturally weakly equivalent to $S_*(-,\mathbb{Q})$ as a symmetric monoidal $\infty$-functor (see Theorem $\ref{theo: equivalence cochains-sheaf}$). 
Note that in order to obtain monoidality, we move to the world of $\infty$-categories, denoted in boldface letters. In the second step, we extend this functor from smooth to singular varieties by standard cohomological descent arguments. The main results of this paper are stated and proven in Section $\ref{Section_main}$, where we also explain several applications to operad formality. Lastly, Section $\ref{Section_formalschemes}$ contains applications to the rational homotopy theory of complex varieties. \subsection*{Acknowledgments} This project was started during a visit of the first author at the Hausdorff Institute for Mathematics as part of the Junior Trimester Program in Topology. We would like to thank the HIM for its support. We would also like to thank Alexander Berglund, Brad Drew, Clément Dupont, Vicen\c{c} Navarro, Thomas Nikolaus and Bruno Vallette for helpful conversations. Finally we thank the anonymous referees for many useful comments. \subsection*{Notations} As a rule, we use boldface letters to denote $\infty$-categories and normal letters to denote $1$-categories. For $\mathcal{C}$ a $1$-category, we denote by $\icat{N}(\mathcal{C})$ its nerve seen as an $\infty$-category. For $\mathcal{A}$ an additive category, we will denote by $\cat{Ch}_*^{?}(\mathcal{A})$ the category of (homologically graded) chain complexes in $\mathcal{A}$, where $?$ denotes the boundedness condition: nothing for unbounded, $b$ for bounded below and above, and $\geq 0$ (resp. $\leq 0$) for non-negatively (resp. non-positively) graded complexes. We denote by $\icat{Ch}_*^?(\mathcal{A})$ the $\infty$-category obtained from $\cat{Ch}_*^?(\mathcal{A})$ by inverting the quasi-isomorphisms. \section{Formal symmetric monoidal functors}\label{Section_formal_functors} The main result of this section is a ``purity implies formality'' statement in the setting of symmetric monoidal functors. Let $(\mathcal{A},\otimes,\mathbf{1})$ be an abelian symmetric monoidal category with infinite direct sums. The homology functor $H:\cat{Ch}_*(\mathcal{A})\longrightarrow \prod_{n \in\mathbb{Z}}\mathcal{A}$ is a lax symmetric monoidal functor, via the usual K\"{u}nneth morphism. In the cases that will interest us, all the objects of $\mathcal{A}$ will be flat and the homology functor is in fact strong symmetric monoidal. We will also make the small abuse of identifying the category $\prod_{n \in\mathbb{Z}}\mathcal{A}$ with the full subcategory of $\cat{Ch}_*(\mathcal{A})$ spanned by the chain complexes with zero differential. We recall the following definition from \cite{santosmoduli}. \begin{defi}\label{defFormalFunctor} Let $\mathcal{C}$ be a symmetric monoidal category and $F:\mathcal{C}\longrightarrow \cat{Ch}_*(\mathcal{A})$ a lax symmetric monoidal functor. Then $F$ is said to be a \textit{formal lax symmetric monoidal} functor if $F$ and $H\circ F$ are \textit{weakly equivalent} in the category of lax symmetric monoidal functors: there is a string of natural transformations of lax symmetric monoidal functors \[F\xleftarrow{\Phi_1}F_1\longrightarrow \cdots \longleftarrow F_n\xrightarrow{\Phi_n}H\circ F\] such that for every object $X$ of $\mathcal{C}$, the morphisms $\Phi_i(X)$ are quasi-isomorphisms. \end{defi} \begin{defi} Let $\mathcal{C}$ be a symmetric monoidal category and $F:\icat{N}(\mathcal{C})\to\icat{Ch}_*(\mathcal{A})$ a lax symmetric monoidal functor (in the $\infty$-categorical sense). 
We say that $F$ is a \textit{formal lax symmetric monoidal $\infty$-functor} if $F$ and $H\circ F$ are equivalent in the $\infty$-category of lax symmetric monoidal functors from $\icat{N}(\mathcal{C})$ to $\icat{Ch}_*(\mathcal{A})$. \end{defi} Clearly a formal lax symmetric monoidal functor $\mathcal{C}\to \cat{Ch}_*(\mathcal{A})$ induces a formal lax symmetric monoidal $\infty$-functor $\icat{N}(\mathcal{C})\to \icat{Ch}_*(\mathcal{A})$. The following theorem and its corollary give a partial converse. \begin{theo}[Hinich]\label{theo: Hinich rigidification} Let $\Bbbk$ be a field of characteristic $0$. Let $\mathcal{C}$ be a small symmetric monoidal category. Let $F$ and $G$ be two lax symmetric monoidal functors $\mathcal{C}\to\cat{Ch}_*(\Bbbk)$. If $F$ and $G$ are equivalent as lax symmetric monoidal $\infty$-functors $\icat{N}(\mathcal{C})\longrightarrow \icat{Ch}_*(\Bbbk)$, then $F$ and $G$ are weakly equivalent as lax symmetric monoidal functors. \end{theo} \begin{proof} This theorem is true more generally if $\mathcal{C}$ is a colored operad. Indeed, recall that any symmetric monoidal category has an underlying colored operad whose category of algebras is equivalent to the category of lax symmetric monoidal functors out of the original category. Now since we are working in characteristic zero, the operad underlying $\mathcal{C}$ is homotopically sound (following the terminology of \cite{hinichrectification}). Therefore, \cite[Theorem 4.1.1]{hinichrectification} gives us an equivalence of $\infty$-categories \[\icat{N}(\mathrm{Alg}_\mathcal{C}(\cat{Ch}_*(\Bbbk)))\xrightarrow{\sim} \mathbf{Alg}_\mathcal{C}(\icat{Ch}_*(\Bbbk))\] where we denote by $\mathrm{Alg}_\mathcal{C}$ (resp. $\mathbf{Alg}_\mathcal{C}$) the category of lax symmetric monoidal functors (resp. the $\infty$-category of lax symmetric monoidal functors) out of $\mathcal{C}$. Now, the two functors $F$ and $G$ are two objects in the source of the above map that become weakly equivalent in the target. Hence, they are already equivalent in the source, which is precisely saying that they are connected by a zig-zag of weak equivalences of lax symmetric monoidal functors. \end{proof} \begin{coro}\label{coro: Hinich rigidification} Let $\Bbbk$ be a field of characteristic $0$. Let $\mathcal{C}$ be a small symmetric monoidal category. Let $F:\mathcal{C}\to\cat{Ch}_*(\Bbbk)$ be a lax symmetric monoidal functor. If $F$ is formal as a lax symmetric monoidal $\infty$-functor $\icat{N}(\mathcal{C})\longrightarrow \icat{Ch}_*(\Bbbk)$, then $F$ is formal as a lax symmetric monoidal functor. \end{coro} \begin{proof} It suffices to apply Theorem \ref{theo: Hinich rigidification} to $F$ and $H\circ F$. \end{proof} The following proposition, whose proof is trivial, is the reason we are interested in formal lax symmetric monoidal functors. \begin{prop}[\cite{santosmoduli}, Proposition 2.5.5] If $F:\mathcal{C}\longrightarrow \cat{Ch}_*(\mathcal{A})$ is a formal lax symmetric monoidal functor then $F$ sends operads in $\mathcal{C}$ to formal operads in $\cat{Ch}_*(\mathcal{A})$. \end{prop} In rational homotopy, there is a criterion of formality in terms of weight decompositions which proves to be useful in certain situations (see for example \cite{BMSS} and \cite{Body-Douglas}). We next provide an analogous criterion in the setting of symmetric monoidal functors. Denote by $gr\mathcal{A}$ the category of graded objects of $\mathcal{A}$. 
It inherits a symmetric monoidal structure from that of $\mathcal{A}$, with the tensor product defined by \[(A\otimes B)^n:=\bigoplus_p A^p\otimes B^{n-p}.\] The unit in $gr\mathcal{A}$ is given by $\mathbf{1}$ concentrated in degree zero. The functor $U:gr\mathcal{A}\longrightarrow \mathcal{A}$ obtained by forgetting the degree is strong symmetric monoidal. The category of graded complexes $\cat{Ch}_*(gr\mathcal{A})$ inherits a symmetric monoidal structure via a graded K\"{u}nneth morphism. \begin{defi}\label{defpure} Given a rational number $\alpha$, denote by $\cat{Ch}_*(gr\mathcal{A})^{\alpha\text{-}pure}$ the full subcategory of $\cat{Ch}_*(gr\mathcal{A})$ given by those graded complexes $A=\bigoplus A_n^p$ with \textit{$\alpha$-pure homology}: \[H_n(A)^p=0\text{ for all }p\neq \alpha n.\] \end{defi} Note that if $\alpha=a/b$, with $a$ and $b$ coprime, then the above condition implies that $H_*(A)$ is concentrated in degrees that are divisible by $b$, and in degree $kb$, it is pure of weight $ka$: \[H_{kb}(A)^p=0\text{ for all }p\neq ka.\] \begin{prop}\label{decomposition_formal} Let $\mathcal{A}$ be an abelian category and $\alpha$ a non-zero rational number. The functor $U:\cat{Ch}_*(gr\mathcal{A})^{\alpha\text{-}pure}\longrightarrow \cat{Ch}_*(\mathcal{A})$ defined by forgetting the degree is formal as a lax symmetric monoidal functor. \end{prop} \begin{proof} We will define a functor $\tau:\cat{Ch}_*(gr\mathcal{A})\longrightarrow \cat{Ch}_*(gr\mathcal{A})$ together with natural transformations \[\Phi:U\circ \tau\Rightarrow U\text{ and }\Psi:U\circ\tau\Rightarrow H\circ U\] giving rise to weak equivalences when restricted to chain complexes with $\alpha$-pure homology. Consider the truncation functor $\tau:\cat{Ch}_*(gr\mathcal{A})\longrightarrow \cat{Ch}_*(gr\mathcal{A})$ defined by sending a graded chain complex $A=\bigoplus A_n^p$ to the graded complex given by: \[(\tau A)_n^p:=\left\{ \begin{array}{ll} A_n^p& n>\ceil{p/\alpha}\\ \mathrm{Ker}(d:A_n^p\to A_{n-1}^p)& n=\ceil{p/\alpha}\\ 0& n<\ceil{p/\alpha} \end{array}\right., \] where $\ceil{p/\alpha}$ denotes the smallest integer greater than or equal to $p/\alpha$. Note that for each $p$, $\tau(A)^p_*$ is the chain complex given by the canonical truncation of $A_*^p$ at $\ceil{p/\alpha}$, which satisfies \[H_n(\tau(A)^p_*)\cong H_n(A^p_*)\text{ for all }n\geq \ceil{p/\alpha}.\] To prove that $\tau$ is a lax symmetric monoidal functor it suffices to see that \[\tau(A)^p_n\otimes \tau(B)^q_m\subseteq \tau(A\otimes B)^{p+q}_{n+m}\] for all $A,B\in\cat{Ch}_*(gr\mathcal{A})$. By symmetry in $A$ and $B$, it suffices to consider the following three cases: \begin{enumerate} \item If $n>\ceil{p/\alpha}$ and $m\geq \ceil{q/\alpha}$ then $n+m>\ceil{p/\alpha}+\ceil{q/\alpha}\geq \ceil{(p+q)/\alpha}$. Therefore we have $\tau(A\otimes B)^{p+q}_{n+m}=(A\otimes B)^{p+q}_{n+m}$ and the above inclusion is trivially satisfied. \item If $n=\ceil{p/\alpha}$ and $m=\ceil{q/\alpha}$ then $n+m=\ceil{p/\alpha}+\ceil{q/\alpha}\geq \ceil{(p+q)/\alpha}$. Now, if $n+m>\ceil{(p+q)/\alpha}$ then again we have $\tau(A\otimes B)^{p+q}_{n+m}=(A\otimes B)^{p+q}_{n+m}$. If $n+m=\ceil{(p+q)/\alpha}$ then the above inclusion reads \[\mathrm{Ker}(d:A_n^p\to A_{n-1}^p)\otimes \mathrm{Ker}(d:B_m^q\to B_{m-1}^q)\subseteq \mathrm{Ker}(d:(A\otimes B)_{n+m}^{p+q}\to (A\otimes B)_{n+m-1}^{p+q}).\] This is verified by the Leibniz rule. \item Lastly, if $n<\ceil{p/\alpha}$ then $\tau(A)^p_n=0$ and there is nothing to verify. 
\end{enumerate} The projection $\mathrm{Ker}(d:A_n^p\to A_{n-1}^p)\twoheadrightarrow H_n(A)^p$ defines a morphism $\tau A\to H(A)$ by \[(\tau A)_n^p\mapsto \left\{ \begin{array}{ll} 0& n\neq \ceil{p/\alpha}\\ H_n(A)^p& n=\ceil{p/\alpha}\\ \end{array} \right.. \] This gives a symmetric monoidal natural transformation $\Psi:U\circ\tau\Rightarrow H\circ U=U\circ H$. Likewise, the inclusion $\tau A\hookrightarrow A$ defines a symmetric monoidal natural transformation $\Phi:U\circ\tau\Rightarrow U$. Let $A$ be a complex of $\cat{Ch}_*(gr\mathcal{A})^{\alpha\text{-}pure}$. Then both morphisms \[\Psi(A):U\circ\tau(A)\to H\circ U(A)\text{ and }\Phi(A):U\circ\tau(A)\to U(A)\] are clearly quasi-isomorphisms. \end{proof} For graded chain complexes whose homology is pure up to a certain degree, we obtain a result of partial formality as follows. \begin{defi} Let $q\geq 0$ be an integer. A morphism of chain complexes $f:A\to B$ is called a \textit{$q$-quasi-isomorphism} if the induced morphism in homology $H_i(f):H_i(A)\to H_i(B)$ is an isomorphism for all $i\leq q$. \end{defi} \begin{rem} There is a notion of $q$-quasi-isomorphism in rational homotopy which asks in addition that the map induced in degree $(q+1)$-cohomology is a monomorphism. Dually, for chain complexes one could ask to have an epimorphism in degree $(q+1)$-homology. Note that we do not consider this extra condition here, since we work with possibly negatively and positively graded complexes and such a condition would break the symmetry. In addition, in our subsequent work on formality with torsion coefficients \cite{CiHo2}, the notion of partial formality as defined below plays a fundamental role. \end{rem} \begin{defi} Let $q\geq 0$ be an integer. A functor $F:\mathcal{C}\longrightarrow \cat{Ch}_*(\mathcal{A})$ is a \textit{$q$-formal lax symmetric monoidal} functor if there are natural transformations $\Phi_i$ as in Definition $\ref{defFormalFunctor}$ such that $\Phi_i(X)$ are $q$-quasi-isomorphisms for all $X\in\mathcal{C}$ and all $1\leq i\leq n$. \end{defi} \begin{prop}\label{decomposition_qformal} Let $\mathcal{A}$ be an abelian category. Given a non-zero rational number $\alpha$ and an integer $q\geq 0$, denote by $\cat{Ch}_*(gr\mathcal{A})^{\alpha\text{-}pure}_q$ the full subcategory of $\cat{Ch}_*(gr\mathcal{A})$ given by those graded complexes $A=\bigoplus A_n^p$ whose homology in degrees $\leq q$ is $\alpha$-pure: for all $n\leq q$, \[H_n(A)^p=0\text{ for all }p\neq \alpha n.\] Then the functor $U:\cat{Ch}_*(gr\mathcal{A})^{\alpha\text{-}pure}_q\longrightarrow \cat{Ch}_*(\mathcal{A})$ defined by forgetting the degree is $q$-formal. \end{prop} \begin{proof} The proof is parallel to that of Proposition \ref{decomposition_formal} by noting that, if $H_n(A)$ is $\alpha$-pure for $n\leq q+1$, then the morphisms \[\Psi(A):U\circ\tau(A)\to H\circ U(A)\text{ and }\Phi(A):U\circ\tau(A)\to U(A)\] are $q$-quasi-isomorphisms. \end{proof} \section{Mixed Hodge structures}\label{Section_MHS} We next collect the main definitions and properties of mixed Hodge structures and prove a symmetric monoidal version of Deligne's splitting for the weight filtration. Denote by $\mathcal{F}\mathcal{A}$ the category of filtered objects of an abelian symmetric monoidal category $(\mathcal{A},\otimes,\mathbf{1})$. All filtrations will be assumed to be of finite length and exhaustive. 
With the tensor product \[W_p(A\otimes B):=\sum_{i+j=p} \mathrm{Im}(W_iA\otimes W_jB\longrightarrow A\otimes B), \] and the unit given by $\mathbf{1}$ concentrated in weight zero, $\mathcal{F}\mathcal{A}$ is a symmetric monoidal category. The functor $U^{fil}:gr\mathcal{A}\longrightarrow \mathcal{F}\mathcal{A}$ defined by $A=\bigoplus A^p\mapsto W_mA:=\bigoplus_{q\leq m}A^q$ is strong symmetric monoidal. The category of filtered complexes $\cat{Ch}_*(\mathcal{F}\mathcal{A})$ inherits a symmetric monoidal structure via a filtered K\"{u}nneth morphism and we have a strong symmetric monoidal functor \[U^{fil}:\cat{Ch}_*(gr\mathcal{A})\longrightarrow \cat{Ch}_*(\mathcal{F}\mathcal{A}).\] Let $\Bbbk\subset \mathbb{R}$ be a subfield of the real numbers. \begin{defi}\label{defMHS} A \textit{mixed Hodge structure} on a finite dimensional $\Bbbk$-vector space $V$ is given by an increasing filtration $W$ of $V$, called the \textit{weight filtration}, together with a decreasing filtration $F$ on $V_\mathbb{C}:=V\otimes\mathbb{C}$, called the \textit{Hodge filtration}, such that for all $m\geq 0$, each $\Bbbk$-vector space $Gr_m^WV:=W_mV/W_{m-1}V$ carries a pure Hodge structure of weight $m$ given by the filtration induced by $F$ on $Gr_m^WV\otimes\mathbb{C}$, that is, there is a direct sum decomposition \[Gr^m_WV\otimes\mathbb{C}=\bigoplus_{p+q=m}V^{p,q}\text{ where }V^{p,q}=F^p(Gr_m^WV\otimes\mathbb{C})\cap \overline{F}^q(Gr_m^WV\otimes\mathbb{C})=\overline{V}^{q,p}.\] \end{defi} Morphisms of mixed Hodge structures are given by morphisms $f:V\to V'$ of $\Bbbk$-vector spaces compatible with filtrations: $f(W_mV)\subset W_mV'$ and $f(F^pV_\mathbb{C})\subset F^pV_\mathbb{C}'$. Denote by $\mathrm{MHS}_\Bbbk$ the category of mixed Hodge structures over $\Bbbk$. It is an abelian category by \cite[Theorem 2.3.5]{DeHII}. \begin{rem}\label{Homandtensor} Given mixed Hodge structures $V$ and $V'$, then $V\otimes V'$ carries a mixed Hodge structure with the filtered tensor product. This makes $\mathrm{MHS}_\Bbbk$ into a symmetric monoidal category. Also, $\mathsf{Hom}(V,V')$ carries a mixed Hodge structure with the weight filtration given by \[W_p\mathsf{Hom}(V,V'):=\{f:V\to V';f(W_qV)\subset W_{q+p}V',\,\forall q\}\] and the Hodge filtration defined in the same way. In particular, the dual of a mixed Hodge structure is again a mixed Hodge structure. \end{rem} Let $\Bbbk\subset\mathbb{K}$ be a field extension. The functors \[\Pi_\mathbb{K}:\mathrm{MHS}_\Bbbk\longrightarrow \mathrm{Vect}_\mathbb{K}\text{ and }\Pi_\mathbb{K}^{W}:\mathrm{MHS}_\Bbbk\longrightarrow \mathcal{F}\mathrm{Vect}_\mathbb{K}\] defined by sending a mixed Hodge structure $(V,W,F)$ to $V_\mathbb{K}:=V_\Bbbk\otimes \mathbb{K}$ and $(V_\mathbb{K},W)$ respectively, are strong symmetric monoidal functors. Deligne introduced a global decomposition of $V_\mathbb{C}:=V\otimes\mathbb{C}$ into subspaces $I^{p,q}$, for any mixed Hodge structure $(V,W,F)$ which generalizes the decomposition of pure Hodge structures of a given weight. In this case, one has a congruence $I^{p,q}\equiv\overline{I}^{q,p}$ modulo $W_{p+q-2}$. From this decomposition, Deligne deduced that morphisms of mixed Hodge structures are strictly compatible with filtrations and that the category of mixed Hodge structures is abelian (see \cite[Section 1]{DeHII}, see also \cite[Section 3.1]{PS}). We next study this decomposition in the context of symmetric monoidal functors. 
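Before stating the splitting lemma, we record an elementary example, which is standard and included here only for orientation. The Tate structure $\mathbb{Q}(-1):=H^2(\mathbb{P}^1_\mathbb{C},\mathbb{Q})$ is the one-dimensional mixed Hodge structure which is pure of weight $2$ and of type $(1,1)$:
\[
W_1\mathbb{Q}(-1)=0,\qquad W_2\mathbb{Q}(-1)=\mathbb{Q}(-1),\qquad \mathbb{Q}(-1)\otimes\mathbb{C}=V^{1,1}.
\]
Its tensor powers $\mathbb{Q}(-n):=\mathbb{Q}(-1)^{\otimes n}$ are pure of weight $2n$. For structures which are pure of a single weight, the weight filtration splits for trivial reasons; the point of the next lemma is that, after extension of scalars to $\mathbb{C}$, a splitting compatible with tensor products exists functorially for arbitrary mixed Hodge structures.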
\begin{lemm}[Deligne's splitting]\label{Deligne_splitting} The functor $\Pi_\mathbb{C}^{W}$ admits a factorization \[ \xymatrix@R=4pc@C=4pc{ \mathrm{MHS}_\Bbbk\ar[r]^{G}\ar[dr]_{\Pi_\mathbb{C}^{W}}&gr\mathrm{Vect}_\mathbb{C}\ar[d]^{U^{fil}}\\ &\mathcal{F}\mathrm{Vect}_\mathbb{C} } \] into strong symmetric monoidal functors. In particular, there is an isomorphism of functors \[U^{fil}\circ gr\circ \Pi_\mathbb{C}^{W}\cong \Pi_\mathbb{C}^{W},\] where $gr:\mathcal{F}\mathrm{Vect}_\mathbb{C}\longrightarrow gr\mathrm{Vect}_\mathbb{C}$ is the graded functor given by $gr(V_\mathbb{C},W)^p=Gr_p^WV_\mathbb{C}$. \end{lemm} \begin{proof} Let $(V,W,F)$ be a mixed Hodge structure. By \cite[1.2.11]{DeHII} (see also \cite[Lemma 1.12]{GS}), there is a direct sum decomposition $V_\mathbb{C}=\bigoplus I^{p,q}(V)$ where \[I^{p,q}(V)=(F^p W_{p+q}V_\mathbb{C})\cap \left(\overline{F}^q W_{p+q}V_\mathbb{C}+\sum_{i>0}\overline{F}^{q-i} W_{p+q-1-i}V_\mathbb{C}\right).\] This decomposition is functorial for morphisms of mixed Hodge structures and satisfies \[W_mV_\mathbb{C}=\bigoplus_{p+q\leq m} I^{p,q}(V).\] Define $G$ by letting $G(V,W,F)^n:=\bigoplus_{p+q=n} I^{p,q}(V)$ for any mixed Hodge structure. Since $f(I^{p,q}(V))\subset I^{p,q}(V')$ for every morphism $f:(V,W,F)\to (V',W,F)$ of mixed Hodge structures, $G$ is functorial. To see that $G$ is strong symmetric monoidal it suffices to use the definition of $I^{p,q}$ together with the tensor product mixed Hodge structure defined via the filtered tensor product, to obtain isomorphisms \[\sum_{\substack{p+q=n\\p'+q'=n'}} I^{p,q}(V)\otimes I^{p',q'}(V')\cong \sum_{i+j=n+n'}I^{i,j}(V\otimes V') \] showing that the splittings $I^{*,*}$ are compatible with tensor products (see also \cite[Proposition 1.9]{Mo}). The functor $U^{fil}:gr\mathrm{Vect}\longrightarrow \mathcal{F}\mathrm{Vect}$ is the strong symmetric monoidal functor given by $$\bigoplus_n V^n\mapsto (V,W),\text{ with }W_mV:=\bigoplus_{n\leq m} V^n.$$ Therefore we have $U^{fil}\circ G=\Pi_\mathbb{C}^{W}$. In order to prove the isomorphism $U^{fil}\circ gr\circ \Pi_\mathbb{C}^{W}\cong \Pi_\mathbb{C}^{W}$ it suffices to note that there is an isomorphism of functors $gr\circ U^{fil}\cong \mathrm{Id}.$ \end{proof} \section{Descent of splittings of lax symmetric monoidal functors}\label{Section_descent} In this section, we study lax symmetric monoidal functors to vector spaces over a field of characteristic zero $\Bbbk$ equipped with a compatible filtration. More precisely, we are interested in lax symmetric monoidal maps $\mathcal{C}\longrightarrow\mathcal{F}\mathrm{Vect}_\Bbbk$. Our goal is to prove that the existence of a lax symmetric monoidal splitting for such a functor (i.e. of a lift of this map to $\mathcal{C}\longrightarrow gr\mathrm{Vect}_\Bbbk$) can be checked after extending the scalars to a larger field. Our proof follows similar arguments to those appearing in \cite[Section 2.4]{CG1}, see also \cite{santosmoduli} and \cite{Su}. A main advantage of our approach with respect to these references is that, in proving descent at the level of functors, we avoid the use of minimal models (and thus restrictions to, for instance, operads with trivial arity 0). It will be a bit more convenient to study a more general situation where $\mathcal{C}$ is allowed to be a colored operad instead of a symmetric monoidal category. 
Indeed recall that any symmetric monoidal category can be seen as an operad whose colors are the objects of $\mathcal{C}$ and where a multimorphism from $(c_1,\ldots,c_n)$ to $d$ is just a morphism in $\mathcal{C}$ from $c_1\otimes\ldots\otimes c_n$ to $d$. Then, given another symmetric monoidal category $\mathcal{D}$, there is an equivalence of categories between the category of lax symmetric monoidal functors from $\mathcal{C}$ to $\mathcal{D}$ and the category of $\mathcal{C}$-algebras in the symmetric monoidal category $\mathcal{D}$. We fix $(V,W)$ a map of colored operads $\mathcal{C}\longrightarrow \mathcal{F}\mathrm{Vect}_\Bbbk$ such that for each color $c$ of $\mathcal{C}$, the vector space $V(c)$ is finite dimensional. We denote by $\mathsf{Aut}_W(V)$ the set of its automorphisms in the category of $\mathcal{C}$-algebras in $\mathcal{F}\mathrm{Vect}_\Bbbk$ and by $\mathsf{Aut}(Gr^W V)$ the set of automorphisms of $Gr^WV$ in the category of $\mathcal{C}$-algebras in $gr\mathrm{Vect}_\Bbbk$. We have a morphism $gr:\mathsf{Aut}_W(V)\to \mathsf{Aut}(Gr^W V)$. Let $\Bbbk\to R$ be a commutative $\Bbbk$-algebra. The correspondence $$R \mapsto \underline{\mathsf{Aut}}_W(V)(R) := \mathsf{Aut}_W(V \otimes_{\Bbbk}R)$$ defines a functor $\underline{\mathsf{Aut}}_W(V):\mathrm{Alg}_\Bbbk \longrightarrow \mathrm{Gps}$ from the category $\mathrm{Alg}_\Bbbk$ of commutative $\Bbbk$-algebras to the category $\mathrm{Gps}$ of groups. Clearly, we have $\underline{\mathsf{Aut}}_W(V)(\Bbbk) = \mathsf{Aut}_W(V)$. We define in a similar fashion a functor $\underline{\mathsf{Aut}}(Gr^WV)$ from $\mathrm{Alg}_\Bbbk$ to $\mathrm{Gps}$. We recall the following properties: \begin{prop}\label{algebraicgroups} Let $(V,W)$ be as above. \begin{enumerate} \item $\underline{\mathsf{Aut}}_W(V)$ is a group scheme whose group of $\Bbbk$-points is $\mathsf{Aut}_W(V)$. \item The functor $Gr^W$ induces a morphism $\underline{gr} :\underline{\mathsf{Aut}}_W(V) \to \underline{\mathsf{Aut}}(Gr^W V)$ of group schemes. \item The kernel $U := \mathrm{Ker} \left( \underline{gr} :\underline{\mathsf{Aut}}_W(V) \to \underline{\mathsf{Aut}}(Gr^W V) \right)$ is a unipotent group scheme over $\Bbbk$. \end{enumerate} \end{prop} \begin{proof} We first observe that there is an isomorphism \[\underline{\mathsf{Aut}}_W(V)\cong\operatorname{lim}_S\underline{\mathsf{Aut}}_W(V_S)\] in which the limit is taken over the poset of finite sets $S$ of objects of $\mathcal{C}$ and $V_S$ denotes the restriction of $V$ to those objects. We can write the groups $\mathsf{Aut}_W(V)$ and $\underline{\mathsf{Aut}}(Gr^WV)$ as similar limits. Therefore we may restrict to the case when $\mathcal{C}$ has finitely many objects and prove that in this case, the above objects live in the category of algebraic groups. Let $N$ be such that the vector space $\bigoplus_{c\in\mathcal{C}} V(c)$ can be linearly embedded in $\Bbbk^N$. Then $\mathsf{Aut}_W(V)$ is the closed subgroup of $\mathrm{GL}_N(\Bbbk)$ defined by the polynomial equations that express the data of a filtration preserving $\mathcal{C}$-algebra automorphism. Similarly, inside the functor of linear automorphisms $\bigoplus_{c\in\mathcal{C}} V(c)\otimes_\Bbbk R\longrightarrow \bigoplus_{c\in\mathcal{C}} V(c)\otimes_\Bbbk R$, let $F(R)$ be those preserving the structure of $V$ as a $\mathcal{C}$-algebra in filtered vector spaces. 
The condition of preserving the filtration and the algebra structure is given by polynomial equations in the matrix entries and so $F$ is representable (this is also explained in Section 7.6 of \cite{Waterhouse}). It follows that $\underline{\mathsf{Aut}}_W(V)$ is an algebraic group and its group of $\Bbbk$-points is $\mathsf{Aut}_W(V)$. Hence (1) is satisfied. For every commutative $\Bbbk$-algebra $R$, the map \[ \underline{\mathsf{Aut}}_W(V)(R) = \mathsf{Aut}_W(V\otimes_{\Bbbk}R) \longrightarrow \mathsf{Aut}(Gr^W (V\otimes_{\Bbbk}R)) = \underline{\mathsf{Aut}}(Gr^W V)(R) \] is a morphism of groups which is natural in $R$. Thus (2) follows and hence the kernel $U$ is an algebraic group. It now suffices to take a basis of $\bigoplus_{c\in\mathcal{C}} V(c)$ compatible with $W$. Then we may view $U$ as a subgroup of the group of upper-triangular matrices with 1's on the diagonal. Hence (3) is satisfied. \end{proof} \begin{lemm} \label{split_autos} Let $(V,W)$ be as above. The following assertions are equivalent: \begin{enumerate} \item The pair $(V,W)$ admits a lax symmetric monoidal splitting: $W_pV\cong\bigoplus_{q\leq p} Gr^W_qV$. \item The morphism $gr:\mathsf{Aut}_W(V)\to \mathsf{Aut}(Gr^W V)$ is surjective. \item There exists $\alpha\in\Bbbk^*$ which is not a root of unity, together with an automorphism $\Phi\in \mathsf{Aut}_W(V)$ such that $gr(\Phi)=\psi_\alpha$ is the grading automorphism of $Gr^W V$ associated with $\alpha$, defined by $$\psi_\alpha(a)=\alpha^{p}a,\text{ for }a\in Gr_p^WV.$$ \end{enumerate} \end{lemm} \begin{proof} The implications $(1)\Rightarrow (2)\Rightarrow (3)$ are trivial. We show that (3) implies (1). Let $\Phi\in \mathsf{Aut}_W(V)$ be such that $gr \Phi=\psi_\alpha$. We will first produce a decomposition $\Phi=\Phi_s\cdot\Phi_u$ such that, for any object $c$ of $\mathcal{C}$, the pair $(\Phi_s(c),\Phi_u(c))$ is a Jordan decomposition of $\Phi(c)$. In order to do that, recall that we have an isomorphism \[\mathsf{Aut}_W(V)=\operatorname{lim}_{S}\mathsf{Aut}_W(V_S)\] where the limit is taken over the poset of finite subsets $S$ of objects of $\mathcal{C}$ and $V_S$ denotes the restriction of $V$ to the subset $S$. For each of the groups $\mathsf{Aut}_W(V_S)$ we can find a Jordan decomposition of the image of $\Phi$. The transition maps between those groups preserve this decomposition and it follows that this decomposition induces a decomposition of $\Phi$ with the desired property. By \cite[Theorem 4.4]{Borel}, there is a decomposition of the form $V(c)=V'(c)\oplus V''(c)$, where $$V'(c)=\bigoplus V_{p}(c)\text{ with }V_{p}(c):=\mathrm{Ker}(\Phi_s(c)-\alpha^{p}I)$$ and $V''(c)$ is the complementary subspace corresponding to the remaining factors of the characteristic polynomial of $\Phi_s(c)$. By assumption, $Gr^WV(c)$ contains nothing but eigenspaces for eigenvalues of the form $\alpha^p$. Therefore we have $Gr^WV''(c)=0$ and one concludes that $V''(c)=0$. In order to show that $W_pV=\bigoplus_{i\leq p}V_i$ it suffices to prove it objectwise. Let $c$ be an object of $\mathcal{C}$. For $x\in V_p(c)$, let $q$ be the smallest integer such that $x\in W_qV(c)$. Then $x$ defines a class $x+W_{q-1}V(c)\in gr V(c)$, and $$\psi_\alpha(x+W_{q-1}V(c))=\alpha^{q}x+W_{q-1}V(c)=\Phi(x)+W_{q-1}V(c)=\alpha^{p}x+W_{q-1}V(c).$$ Then $(\alpha^{q}-\alpha^{p})x\in W_{q-1}V(c)$. Since $x\notin W_{q-1}V(c)$ we must have $\alpha^q=\alpha^p$, and since $\alpha$ is not a root of unity this forces $q=p$; hence $x\in W_pV$. \end{proof} We may now state and prove the main theorem of this section. 
\begin{theo}\label{descens_rsplittings} Let $(V,W)$ be a map of colored operads $\mathcal{C}\longrightarrow \mathcal{F}\mathrm{Vect}_\Bbbk$ such that for each color $c$ of $\mathcal{C}$, the vector space $V(c)$ is finite dimensional. Let $\Bbbk\subset\mathbb{K}$ be a field extension. Then $V$ admits a lax symmetric monoidal splitting if and only if $V_{\mathbb{K}}:=V\otimes_\Bbbk\mathbb{K}:\mathcal{C}\longrightarrow \mathcal{F}\mathrm{Vect}_\mathbb{K}$ admits a lax symmetric monoidal splitting. \end{theo} \begin{proof} We may assume without loss of generality that $\mathbb{K}$ is algebraically closed. If $V_{\mathbb{K}}$ admits a splitting, the map $$ \underline{\mathsf{Aut}}_W(V)(\mathbb{K}) \longrightarrow \underline{\mathsf{Aut}}(Gr^WV)(\mathbb{K}) $$ is surjective by Lemma $\ref{split_autos}$. Our goal is to prove surjectivity of $$ \underline{\mathsf{Aut}}_W(V)(\Bbbk) \longrightarrow \underline{\mathsf{Aut}}(Gr^WV)(\Bbbk). $$ As in Proposition \ref{algebraicgroups}, we can write those groups as filtered limits. Since an inverse limit of surjections is a surjection, it is enough to prove the result when $\mathcal{C}$ has finitely many objects. By \cite[Section 18.1]{Waterhouse}, there is an exact sequence of groups $$ 1 \longrightarrow U(\Bbbk) \longrightarrow \underline{\mathsf{Aut}}_W(V)(\Bbbk) \longrightarrow \underline{\mathsf{Aut}}(Gr^WV) (\Bbbk) \longrightarrow H^1(\mathbb{K}/\Bbbk, U) \longrightarrow \dots $$ where $U$ is a unipotent algebraic group by Proposition $\ref{algebraicgroups}$ and our assumption that $\mathcal{C}$ has finitely many objects. Since $\Bbbk$ has characteristic zero, the group $H^1(\mathbb{K}/\Bbbk, U)$ is trivial (see \cite[Example 18.2.e]{Waterhouse}) and we deduce the desired surjectivity. \end{proof} From this theorem we deduce that Deligne's splitting holds over $\mathbb{Q}$. We record this fact in the following lemma. \begin{lemm}[Deligne's splitting over $\mathbb{Q}$]\label{Deligne_splitting_over_Q} The forgetful functor $\Pi_\mathbb{Q}^{W}:\mathrm{MHS}_\mathbb{Q}\longrightarrow \mathcal{F}\mathrm{Vect}_\mathbb{Q}$ given by $(V,W,F)\mapsto (V,W)$ admits a factorization $$ \xymatrix@R=4pc@C=4pc{ \mathrm{MHS}_\mathbb{Q}\ar[r]^{G}\ar[dr]_{\Pi_\mathbb{Q}^{W}}&gr\mathrm{Vect}_\mathbb{Q}\ar[d]^{U^{fil}}\\ &\mathcal{F}\mathrm{Vect}_\mathbb{Q} } $$ into lax symmetric monoidal functors. In particular, there is an isomorphism of functors $$U^{fil}\circ gr\circ \Pi_\mathbb{Q}^{W}\cong \Pi_\mathbb{Q}^{W},$$ where $gr:\mathcal{F}\mathrm{Vect}_\mathbb{Q}\longrightarrow gr\mathrm{Vect}_\mathbb{Q}$ is the graded functor given by $gr(V_\mathbb{Q},W)^p=Gr_p^WV_\mathbb{Q}$. \end{lemm} \begin{proof} We apply Theorem \ref{descens_rsplittings} to the lax symmetric monoidal functor $\Pi_\mathbb{Q}^{W}$, using the fact that $\Pi_\mathbb{Q}^{W}\otimes_\mathbb{Q}\mathbb{C}$ admits a splitting by Lemma \ref{Deligne_splitting}. \end{proof} \begin{rem} We want to emphasize that Theorem \ref{descens_rsplittings} does not say that the splitting of the previous lemma recovers the splitting of Lemma \ref{Deligne_splitting} after tensoring with $\mathbb{C}$. In fact, it can probably be shown that such a splitting cannot exist. Nevertheless, the existence of Deligne's splitting over $\mathbb{C}$ abstractly forces the existence of a similar splitting over $\mathbb{Q}$, which is all this lemma says. Note as well that these are not splittings of mixed Hodge structures, but only of the weight filtration. 
They are also referred to as \textit{weak splittings} of mixed Hodge structures (see for example \cite[Section 3.1]{PS}). As is well-known, mixed Hodge structures do not split in general. \end{rem} The above splitting over $\mathbb{Q}$ yields the following ``purity implies formality'' statement in the abstract setting of functors defined from the category of complexes of mixed Hodge structures. Given a rational number $\alpha$, denote by $\cat{Ch}_*(\mathrm{MHS}_\mathbb{Q})^{\alpha\text{-}pure}$ the full subcategory of $\cat{Ch}_*(\mathrm{MHS}_\mathbb{Q})$ of complexes with pure weight $\alpha$ homology: an object $(K,W,F)$ in $\cat{Ch}_*(\mathrm{MHS}_\mathbb{Q})^{\alpha\text{-}pure}$ is such that $Gr^p_WH_n(K)=0$ for all $p\neq \alpha n.$ \begin{coro}\label{coro-purity formality Q} The restriction of the functor $\Pi_\mathbb{Q}:\cat{Ch}_*(\mathrm{MHS}_\mathbb{Q})\longrightarrow \cat{Ch}_*(\mathbb{Q})$ to the category $\cat{Ch}_*(\mathrm{MHS}_\mathbb{Q})^{\alpha\text{-}pure}$ is formal for any non-zero rational number $\alpha$. \end{coro} \begin{proof} This follows from Proposition \ref{decomposition_formal} together with Lemma \ref{Deligne_splitting_over_Q}. \end{proof} \section{Mixed Hodge complexes}\label{Section_MHC} In this section, we construct an equivalence of symmetric monoidal $\infty$-categories between mixed Hodge complexes and complexes of mixed Hodge structures, lifting Beilinson's equivalence of triangulated categories. We first recall the notion of mixed Hodge complex introduced by Deligne in \cite{DeHIII} in its chain complex version (with homological degree). Note that, in contrast with the classical setting of mixed Hodge theory, in the homological version of a mixed Hodge complex, the weight filtration $W$ will be decreasing while the Hodge filtration $F$ will be increasing. Let $\Bbbk\subset \mathbb{R}$ be a subfield of the real numbers. \begin{defi}\label{defMHC} A \textit{mixed Hodge complex over $\Bbbk$} is given by a filtered chain complex $(K_{\Bbbk},W)$ over $\Bbbk$, a bifiltered chain complex $(K_{\mathbb{C}},W,F)$ over $\mathbb{C}$, together with a finite string of filtered quasi-isomorphisms of filtered complexes of $\mathbb{C}$-vector spaces $$(K_\Bbbk,W)\otimes\mathbb{C}\xrightarrow{\alpha_1}(K_1,W)\xleftarrow{\alpha_2}\cdots\xrightarrow{\alpha_{l-1}} (K_{l-1},W)\xrightarrow{\alpha_l} (K_\mathbb{C},W).$$ We call $l$ the \textit{length} of the mixed Hodge complex. The following axioms must be satisfied: \begin{enumerate} \item[($\mathrm{MH}_0$)] The homology $H_*(K_\Bbbk)$ is bounded and finite-dimensional. \item[($\mathrm{MH}_1$)] The differential of $Gr^p_WK_\mathbb{C}$ is strictly compatible with $F$. \item[($\mathrm{MH}_2$)] The filtration on $H_n(Gr_W^pK_{\mathbb{C}})$ induced by $F$ makes $H_n(Gr_W^pK_\Bbbk)$ into a pure Hodge structure of weight $p+n$. \end{enumerate} Such a mixed Hodge complex will be denoted by $\mathcal{K}=\{(K_\Bbbk,W),(K_\mathbb{C},W,F)\}$, omitting the data of the comparison morphisms $\alpha_i$. \end{defi} The above axioms imply that for all $n\geq 0$ the triple $(H_n(K_\Bbbk),W[n],F)$ is a mixed Hodge structure over $\Bbbk$, where $W[n]$ is the shifted weight filtration given by \[W[n]^p H_n(K_\Bbbk):=W^{p-n}H_n(K_\Bbbk).\] Morphisms of mixed Hodge complexes are given by levelwise bifiltered morphisms of complexes making the corresponding diagrams commute. Denote by $\mathrm{MHC}_\Bbbk$ the category of mixed Hodge complexes of a certain fixed length, which we omit in the notation. 
The tensor product of mixed Hodge complexes is again a mixed Hodge complex (see \cite[Lemma 3.20]{PS}). This makes $\mathrm{MHC}_\Bbbk$ into a symmetric monoidal category, with a filtered variant of the K\"{u}nneth formula. \begin{defi} A morphism $f:K\to L$ in $\mathrm{MHC}_\Bbbk$ is said to be a \textit{weak equivalence} if $H_*(f_\Bbbk)$ is an isomorphism of $\Bbbk$-vector spaces. \end{defi} Since the category of mixed Hodge structures is abelian, the homology of every complex of mixed Hodge structures is a graded mixed Hodge structure. We have a functor $$\mathcal{T}:\cat{Ch}_*^b(\mathrm{MHS}_\Bbbk)\longrightarrow \mathrm{MHC}_\Bbbk$$ given on objects by $(K,W,F)\mapsto \{(K,TW),(K\otimes\mathbb{C},TW,F)\}$, where $TW$ is the shifted filtration $(TW)^pK_n:=W^{p+n}K_n$. The comparison morphisms $\alpha_i$ are the identity. Also, $\mathcal{T}$ is the identity on morphisms. This functor clearly preserves weak equivalences. \begin{lemm} The shift functor $\mathcal{T}:\cat{Ch}_*^b(\mathrm{MHS}_\Bbbk)\longrightarrow \mathrm{MHC}_\Bbbk$ is strong symmetric monoidal. \end{lemm} \begin{proof} It suffices to note that given filtered complexes $(K,W)$ and $(L,W)$, we have: \[T(W\otimes W)^p(K\otimes L)_n=(TW\otimes TW)^p(K\otimes L)_n.\qedhere\] \end{proof} Beilinson \cite{Be} gave an equivalence of categories between the derived category of mixed Hodge structures and the homotopy category of a shifted version of mixed Hodge complexes. We will require a finer version of Beilinson's equivalence, in terms of symmetric monoidal $\infty$-categories. Denote by $\mathbf{MHC}_\Bbbk$ the $\infty$-category obtained by inverting weak equivalences of mixed Hodge complexes, omitting the length in the notation. As explained in \cite[Theorem 2.7]{Drew}, this object is canonically a symmetric monoidal stable $\infty$-category. Note that in loc. cit., mixed Hodge complexes have fixed length 2 and are polarized. The results of \cite{Drew} as well as Beilinson's equivalence, are equally valid for the category of mixed Hodge complexes of an arbitrary fixed length. \begin{theo}\label{theo: equivinfty} The shift functor induces an equivalence $\icat{Ch}_*^b(\mathrm{MHS}_\Bbbk)\longrightarrow \mathbf{MHC}_\Bbbk$ of symmetric monoidal $\infty$-categories. \end{theo} \begin{proof} A proof in the polarizable setting appears in \cite{Drew}. Also, in \cite{BNT}, a similar statement is proven for a shifted version of mixed Hodge complexes. We sketch a proof in our setting. We first observe as in Lemma 2.6 of \cite{BNT} that both $\infty$-categories are stable and that the shift functor is exact. The stability of $\mathbf{MHC}_\Bbbk$ follows from the observation that this $\infty$-category is the Verdier quotient at the acyclic complexes of the $\infty$-category of mixed Hodge complexes with the homotopy equivalences inverted. This last $\infty$-category underlies a dg-category that can easily be seen to be stable. The stability of $\icat{Ch}_*^b(\mathrm{MHS}_\Bbbk)$ follows in a similar way. Since a complex of mixed Hodge structures is acyclic if and only if the underlying complex of $\Bbbk$-vector spaces is acyclic, and $\mathcal{T}$ is the identity on the underlying complexes of $\Bbbk$-vector spaces, it follows that $\mathcal{T}$ is exact. 
Therefore, in order to prove that $\mathcal{T}$ is an equivalence of $\infty$-categories, it suffices to show that it induces an equivalence of homotopy categories \[D^{b}(\mathrm{MHS}_\Bbbk)\longrightarrow \mathrm{ho}(\mathrm{MHC}_\Bbbk).\] In \cite[Lemma 3.11]{Be}, it is proven that the shift functor $\mathcal{T}:\cat{Ch}_*^b(\mathrm{MHS}_\Bbbk^{p})\longrightarrow \mathrm{MHC}_\Bbbk^p$ induces an equivalence at the level of homotopy categories. Here the superindex $p$ indicates that the mixed Hodge objects are polarized. One may verify that Beilinson's proof is equally valid if we remove the polarization (see also \cite[Theorem 4.10]{CG2} where Beilinson's equivalence is proven in the non-polarized version via other methods). The fact that $\mathcal{T}$ can be given the structure of a strong symmetric monoidal $\infty$-functor follows from the work of Drew in \cite{Drew}. \end{proof} \section{Logarithmic de Rham currents}\label{Section_logaritmic} The goal of this section is to construct a strong symmetric monoidal $\infty$-functor from algebraic varieties over $\mathbb{C}$ to mixed Hodge complexes which computes the correct mixed Hodge structure after passing to homology. The construction for smooth varieties is relatively straightforward. It suffices to take a functorial mixed Hodge complex model for the cochains, as constructed for instance in \cite{Na}, and dualize it. The monoidality of that functor is slightly tricky, as one has to move to the world of $\infty$-categories for it to exist. Once one has constructed this functor for smooth varieties, it can be extended to more general varieties by standard descent arguments. We denote by $\mathrm{Var}_{\mathbb{C}}$ the category of complex schemes that are reduced, separated and of finite type. We use the word variety for an object of this category. We denote by $\mathrm{Sm}_{\mathbb{C}}$ the subcategory of smooth schemes. Both of these categories are essentially small (i.e. there is a set of isomorphism classes of objects) and symmetric monoidal under the cartesian product. We will make use of the following very simple observation. \begin{prop}\label{prop: cartesian product oplax} Let $\mathcal{C}$ and $\mathcal{D}$ be two categories with finite products, seen as symmetric monoidal categories with respect to the product. Then any functor $F:\mathcal{C}\longrightarrow \mathcal{D}$ has a preferred oplax symmetric monoidal structure. \end{prop} \begin{proof} We need to construct comparison morphisms $F(c\times c')\longrightarrow F(c)\times F(c')$. By definition of the product, there is a unique such morphism whose composition with the first projection is the map $F(c\times c')\longrightarrow F(c)$ induced by the first projection $c\times c'\longrightarrow c$ and whose composition with the second projection is the map $F(c\times c')\longrightarrow F(c')$ induced by the second projection $c\times c'\longrightarrow c'$. Similarly, one has a unique map $F(\ast)\longrightarrow \ast$. One checks easily that these two maps give $F$ the structure of an oplax symmetric monoidal functor. \end{proof} \subsection{For smooth varieties} In this subsection, we construct a lax symmetric monoidal functor \[\mathscr{D}_*:\icat{N}(\mathrm{Sm}_{\mathbb{C}})\longrightarrow \mathbf{MHC}_\mathbb{Q}\] such that its composition with the forgetful functor $\mathbf{MHC}_\mathbb{Q}\longrightarrow \icat{Ch}_*(\mathbb{Q})$ is naturally weakly equivalent to $S_*(-,\mathbb{Q})$ as a lax symmetric monoidal functor. 
We will adapt Navarro-Aznar's construction of mixed Hodge diagrams \cite{Na}. Let $X$ be a smooth projective complex variety and $j:U\hookrightarrow X$ an open subvariety such that $D:=X-U$ is a normal crossing divisor. Denote by $\mathcal{A}^*_X$ the analytic de Rham complex of the underlying real analytic variety of $X$ and let $\mathcal{A}^*_X(\log D)$ denote the subsheaf of $j_*\mathcal{A}^*_U$ of forms with logarithmic poles along $D$. Note that in Deligne's approach to mixed Hodge theory, the sheaf $\Omega_X^*(\log D)$ of holomorphic forms on $X$ with logarithmic poles along $D$ is used instead. As explained in 8.5 of \cite{Na}, the main advantage of considering analytic forms is the natural real structure one obtains, together with a decomposition of the form \[\mathcal{A}^n_X(\log D)\otimes \mathbb{C}=\bigoplus_{p+q=n} \mathcal{A}_X^{p,q}(\log D).\] Also, there is an inclusion $\Omega_X^*(\log D)\hookrightarrow \mathcal{A}^*_X(\log D)\otimes\mathbb{C}$ which is a quasi-isomorphism, and $\mathcal{A}^*_X(\log D)$ may be naturally endowed with a multiplicative weight filtration $W$ (see 8.3 of \cite{Na}). Proposition 8.4 of loc. cit. gives a string of quasi-isomorphisms of sheaves of filtered cdga's over $\mathbb{R}$: \[ (\mathbf{R}_{\text{\tiny{$\mathrm{TW}$}}} j_*\underline{\mathbb{Q}}_U,\tau)\otimes\mathbb{R} \xrightarrow{\sim} (\mathbf{R}_{\text{\tiny{$\mathrm{TW}$}}} j_*\mathcal{A}^*_U,\tau) \xleftarrow{\sim} (\mathcal{A}^*_X(\log D),\tau) \xrightarrow{\sim}(\mathcal{A}^*_X(\log D),W), \] where $\tau$ is the canonical filtration. In this diagram, \[\mathbf{R}_{\text{\tiny{$\mathrm{TW}$}}}j_*:\mathrm{Sh}(U,\cat{Ch}_*^{\leq 0}(\Bbbk))\longrightarrow \mathrm{Sh}(X,\cat{Ch}_*^{\leq 0}(\Bbbk))\] is the functor defined by \[\mathbf{R}_{\text{\tiny{$\mathrm{TW}$}}}j_*:=\mathbf{s}_{\text{\tiny{$\mathrm{TW}$}}}\circ j_*\circ G^+\] where $$G^+:\mathrm{Sh}(U,\cat{Ch}_*^{\leq 0}(\Bbbk))\longrightarrow \Delta \mathrm{Sh}(U,\cat{Ch}_*^{\leq 0}(\Bbbk))$$ is the Godement canonical cosimplicial resolution functor and \[\mathbf{s}_{\text{\tiny{$\mathrm{TW}$}}}:\Delta \mathrm{Sh}(X,\cat{Ch}_*^{\leq 0}(\Bbbk))\longrightarrow \mathrm{Sh}(X,\cat{Ch}_*^{\leq 0}(\Bbbk))\] is the Thom-Whitney simple functor introduced by Navarro in Section 2 of loc. cit. Both functors are lax symmetric monoidal and hence $\mathbf{R}_{\text{\tiny{$\mathrm{TW}$}}}j_*$ is a lax symmetric monoidal functor (see \cite[Section 3.2]{RR}). The complex $\mathcal{A}^*_X(\log D)\otimes\mathbb{C}$ carries a natural multiplicative Hodge filtration $F$ (see 8.6 of \cite{Na}). The above string of quasi-isomorphisms gives a commutative algebra object in (cohomological) mixed Hodge complexes after taking global sections. Specifically, the composition \[\mathbf{R}_{\text{\tiny{$\mathrm{TW}$}}}\Gamma(X,-):=\mathbf{s}_{\text{\tiny{$\mathrm{TW}$}}}\circ \Gamma(X,-)\circ G^+\] gives a derived global sections functor \[\mathbf{R}_{\text{\tiny{$\mathrm{TW}$}}}\Gamma(X,-):\mathrm{Sh}(X,\cat{Ch}_*^{\leq 0}(\mathbb{Q}))\longrightarrow \cat{Ch}_*^{\leq 0}(\mathbb{Q})\] which again is lax symmetric monoidal. There is also a filtered version of this functor defined via the filtered Thom-Whitney simple (see Section 6 of \cite{Na}). Theorem 8.15 of loc. cit. 
asserts that, by applying the (bi)filtered versions of $\mathbf{R}_{\text{\tiny{$\mathrm{TW}$}}}\Gamma(X,-)$ to each of the pieces of the above string of quasi-isomorphisms, one obtains a commutative algebra object in mixed Hodge complexes $\mathcal{H}dg(X,U)$ whose cohomology gives Deligne's mixed Hodge structure on $H^*(U,\mathbb{Q})$ and such that $$\mathcal{H}dg(X,U)_\mathbb{Q}=\mathbf{R}_{\text{\tiny{$\mathrm{TW}$}}}\Gamma(X,\mathbf{R}_{\text{\tiny{$\mathrm{TW}$}}}j_*\underline{\mathbb{Q}}_U)$$ is naturally quasi-isomorphic to $S^*(U,\mathbb{Q})$ (as a cochain complex). This construction is functorial for morphisms of pairs $f:(X,U)\to (X',U')$. The definition of $\mathcal{H}dg(f)$ follows as in the additive setting (see \cite[Lemma 6.1.2]{Huber} for details), by replacing the classical additive total simple functor with the Thom-Whitney simple functor. Now we wish to get rid of the dependence on the compactification. For this purpose, we define, for $U$ a smooth variety over $\mathbb{C}$, a category $R_U$ whose objects are pairs $(X,U)$ where $X$ is a smooth and proper variety containing $U$ as an open subvariety such that $X-U$ is a normal crossing divisor. Morphisms in $R_U$ are morphisms of pairs. We then define $\mathscr{D}^*(U)$ by the formula \[\mathscr{D}^*(U):=\operatorname{colim}_{R_U^{\mathrm{op}}}\mathcal{H}dg(X,U).\] By theorems of Hironaka and Nagata, the category $R_U^{\mathrm{op}}$ is a non-empty filtered category. Note that we have to be slightly careful here, as the category of mixed Hodge complexes does not have all filtered colimits. However, we can form this colimit in the category of pairs $\{(K_\mathbb{Q},W),(K_\mathbb{C},W,F)\}$ having the structure required in Definition \ref{defMHC} but not necessarily satisfying the axioms $\mathrm{MH}_0$, $\mathrm{MH}_1$ and $\mathrm{MH}_2$. Since taking filtered colimits is an exact functor, we deduce from the classical isomorphism between sheaf cohomology and singular cohomology that there is a quasi-isomorphism \[\mathscr{D}^*(U)_\mathbb{Q}\to S^*(U,\mathbb{Q}).\] This shows that the cohomology of $\mathscr{D}^*(U)$ is of finite type and hence that $\mathscr{D}^*(U)$ satisfies axiom $\mathrm{MH}_0$. The other axioms are easily seen to be satisfied as well. Moreover, filtered colimits preserve commutative algebra structures, so $\mathscr{D}^*$ is a functor from $\mathrm{Sm}_\mathbb{C}^{\mathrm{op}}$ to commutative algebras in $\mathrm{MHC}_\mathbb{Q}$. Since the coproduct in commutative algebras is the tensor product, we deduce from the dual of Proposition \ref{prop: cartesian product oplax} that $\mathscr{D}^*$ is canonically a lax symmetric monoidal functor from $\mathrm{Sm}_\mathbb{C}^{\mathrm{op}}$ to $\mathrm{MHC}_\mathbb{Q}$. But since the comparison map \[\mathscr{D}^*(U)_\mathbb{Q}\otimes_\mathbb{Q} \mathscr{D}^*(V)_\mathbb{Q}\longrightarrow \mathscr{D}^*(U\times V)_\mathbb{Q}\] is a quasi-isomorphism, this functor extends to a strong symmetric monoidal $\infty$-functor \[\mathscr{D}^*:\icat{N}(\mathrm{Sm}_\mathbb{C})^{\mathrm{op}}\longrightarrow \mathbf{MHC}_\mathbb{Q}.\] \begin{rem} A similar construction for real mixed Hodge complexes is done in \cite[Section 3.1]{bunkeregulators}. There is also a similar construction in \cite{Drew} that includes polarizations. \end{rem} Now, the category $\mathrm{MHC}_\mathbb{Q}$ is equipped with a duality functor. 
It sends a mixed Hodge complex $\{(K_\mathbb{Q},W),(K_\mathbb{C},W,F)\}$ to the linear duals $\{(K_\mathbb{Q}^{\vee},W^{\vee}),(K_\mathbb{C}^{\vee},W^{\vee},F^{\vee})\}$ where the dual of a filtered complex is defined as in \ref{Homandtensor}. One checks easily that this dual object satisfies the axioms of a mixed Hodge complex. Moreover, the duality functor $\mathrm{MHC}_\mathbb{Q}^{\mathrm{op}}\longrightarrow \mathrm{MHC}_\mathbb{Q}$ is lax symmetric monoidal and preserves weak equivalences of mixed Hodge complexes, therefore it induces a lax symmetric monoidal $\infty$-functor \[\mathbf{MHC}_\mathbb{Q}^{\mathrm{op}}\longrightarrow\mathbf{MHC}_\mathbb{Q}\] but in fact, we have the following proposition. \begin{prop}\label{prop : duality is monoidal} The dualization $\infty$-functor \[\mathbf{MHC}_\mathbb{Q}^{\mathrm{op}}\longrightarrow\mathbf{MHC}_\mathbb{Q}\] is strong symmetric monoidal. \end{prop} \begin{proof} It suffices to observe that the canonical map \[K^{\vee}\otimes L^{\vee}\longrightarrow (K\otimes L)^{\vee}\] is a weak equivalence. This follows from the fact that mixed Hodge complexes are assumed to have finite type cohomology. \end{proof} Composing the duality functor with $\mathscr{D}^*$, we get a strong symmetric monoidal $\infty$-functor \[\mathscr{D}_*:\icat{N}(\mathrm{Sm}_\mathbb{C})\longrightarrow \mathbf{MHC}_\mathbb{Q} \] \begin{rem} One should note that $\mathscr{D}^*$ comes from a lax symmetric monoidal functor from $\mathrm{Sm}_\mathbb{C}^{\mathrm{op}}$ to $\mathrm{MHC}_\mathbb{Q}$. On the other hand, $\mathscr{D}_*$ is induced by a strict functor which is neither lax nor oplax. Indeed, it is obtained as the composition of $(\mathscr{D}^*)^{\mathrm{op}}$ which is an oplax symmetric monoidal functor $\mathrm{Sm}_\mathbb{C}\longrightarrow(\mathrm{MHC}_\mathbb{Q})^{\mathrm{op}}$ and the duality functor which is a lax symmetric monoidal functor. Thus, the symmetric monoidal structure on $\mathscr{D}_*$ only exists at the $\infty$-categorical level. \end{rem} To conclude this construction, it remains to compare the functor $\mathscr{D}_*(-)_\mathbb{Q}$ with the singular chains functor. These two functors are naturally quasi-isomorphic as shown in \cite{Na} but we will need to know that they are quasi-isomorphic as symmetric monoidal $\infty$-functors. We denote by $S_*(-,R)$ the singular chain complex functor from the category of topological spaces to the category of chain complexes over a commutative ring $R$. The functor $S_*(-,R)$ is lax symmetric monoidal. Moreover, the natural map \[S_*(X,R)\otimes S_*(Y,R)\to S_*(X\times Y,R)\] is a quasi-isomorphism. This implies that $S_*(-,R)$ induces a strong symmetric monoidal $\infty$-functor from the category of topological spaces to the $\infty$-category $\icat{Ch}_*(R)$ of chain complexes over $R$. We still use the symbol $S_*(-,R)$ to denote this $\infty$-functor. \begin{theo}\label{theo: equivalence cochains-sheaf} The functors $\mathscr{D}_*(-)_\mathbb{Q}$ and $S_*(-,\mathbb{Q})$ are weakly equivalent as strong symmetric monoidal $\infty$-functors from $\icat{N}(\mathrm{Sm}_\mathbb{C})$ to $\icat{Ch}_*(\mathbb{Q})$. \end{theo} \begin{proof} We introduce the category $\mathrm{Man}$ of smooth real manifolds. We consider the $\infty$-category $\mathbf{PSh}(\mathrm{Man})$ of presheaves of spaces on the $\infty$-category $\icat{N}(\mathrm{Man})$. This is a symmetric monoidal $\infty$-category under the product. 
We can consider the reflective subcategory $\mathbf{T}$ spanned by presheaves $\mathcal{G}$ satisfying the following two conditions: \begin{enumerate} \item Given a hypercover $U_\bullet \to M$ of a manifold $M$, the induced map \[\mathcal{G}(M)\to\operatorname{lim}_{\Delta}\mathcal{G}(U_\bullet)\] is an equivalence. \item For any manifold $M$, the map $\mathcal{G}(M)\to \mathcal{G}(M\times\mathbb{R})$ induced by the projection $M\times\mathbb{R}\to M$ is an equivalence. \end{enumerate} The presheaves satisfying these conditions are stable under product, hence the $\infty$-category $\mathbf{T}$ inherits the structure of a symmetric monoidal locally presentable $\infty$-category. It has a universal property that we now describe. Given another symmetric monoidal locally presentable $\infty$-category $\mathbf{D}$, we denote by $\on{Fun}^{L,\otimes}(\mathbf{T},\mathbf{D})$ the $\infty$-category of colimit-preserving strong symmetric monoidal functors $\mathbf{T}\to\mathbf{D}$. Then, we can consider the composition \[\on{Fun}^{L,\otimes}(\mathbf{T},\mathbf{D})\to\on{Fun}^{L,\otimes}(\mathbf{PSh}(\mathrm{Man}),\mathbf{D})\to\on{Fun}^{\otimes}(\icat{N}\mathrm{Man},\mathbf{D})\] where the first map is induced by precomposition with the left adjoint to the inclusion $\mathbf{T}\to \mathbf{PSh}(\mathrm{Man})$ and the second map is induced by precomposition with the Yoneda embedding. We claim that the above composition is fully faithful and that its essential image is the full subcategory of $\on{Fun}^{\otimes}(\icat{N}\mathrm{Man},\mathbf{D})$ spanned by the functors $F$ that satisfy the following two properties: \begin{enumerate} \item Given a hypercover $U_\bullet\to M$ of a manifold $M$, the map \[\operatorname{colim}_{\Delta^{\mathrm{op}}}F(U_\bullet)\to F(M)\] is an equivalence. \item For any manifold $M$, the map $F(M\times\mathbb{R})\to F(M)$ induced by the projection $M\times\mathbb{R}\to M$ is an equivalence. \end{enumerate} This statement can be deduced from the theory of localizations of symmetric monoidal $\infty$-categories (see \cite[Section 3]{hinichdwyer}). In particular, there exists an essentially unique strong symmetric monoidal and colimit-preserving functor from $\mathbf{T}$ to $\mathbf{S}$ (the $\infty$-category of spaces) that is determined by the fact that it sends a manifold $M$ to the simplicial set $\operatorname{Sing}(M)$. This functor is an equivalence of $\infty$-categories. This is a folklore result. A proof of a model category version of this fact can be found in \cite[Proposition 8.3]{duggeruniversal}. The $\infty$-category $\mathbf{S}$ is the unit of the symmetric monoidal $\infty$-category of presentable $\infty$-categories. It follows that it has a commutative algebra structure (which corresponds to the symmetric monoidal structure coming from the cartesian product) and that it is the initial symmetric monoidal presentable $\infty$-category. Since $\mathbf{T}$ is equivalent to $\mathbf{S}$ as a symmetric monoidal presentable $\infty$-category, we deduce that, up to equivalence, there is a unique functor $\mathbf{T}\longrightarrow\icat{Ch}_*(\mathbb{Q})$ that is strong symmetric monoidal and colimit-preserving. But, using the universal property of $\mathbf{T}$, we easily see that $S_*(-,\mathbb{Q})$ and $\mathscr{D}_*(-)_\mathbb{Q}$ can be extended to strong symmetric monoidal and colimit-preserving functors from $\mathbf{T}$ to $\icat{Ch}_*(\mathbb{Q})$. It follows that they must be equivalent. 
\end{proof} \subsection{For varieties} In this subsection, we extend the construction of the previous subsection to the category of varieties. We have the site $(\mathrm{Var}_{\mathbb{C}})_{pro}$ of varieties over $\mathbb{C}$ with the proper topology and the site $(\mathrm{Sm}_\mathbb{C})_{pro}$ which is the restriction of this site to the category of smooth varieties (see \cite[Section 3.5]{blanctopological} for the definition of the proper topology). \begin{prop}[Blanc]\label{prop:Blanc} Let $\mathbf{C}$ be a symmetric monoidal presentable $\infty$-category. We denote by $\on{Fun}^{\otimes}_{pro,}(\mathrm{Var}_\mathbb{C},\mathbf{C})$ the $\infty$-category of strong symmetric monoidal functors from $\mathrm{Var}_\mathbb{C}$ to $\mathbf{C}$ whose underlying functor satisfies descent with respect to proper hypercovers. Similarly, we denote by $\on{Fun}^{\otimes}_{pro}(\mathrm{Sm}_\mathbb{C},\mathbf{C})$ the $\infty$-category of strong symmetric monoidal functors from $\mathrm{Sm}_\mathbb{C}$ to $\mathbf{C}$ whose underlying functor satisfies descent with respect to proper hypercovers. The restriction functor \[\on{Fun}^{\otimes}_{pro}(\mathrm{Var}_\mathbb{C},\mathbf{C})\longrightarrow \on{Fun}^{\otimes}_{pro}(\mathrm{Sm}_\mathbb{C},\mathbf{C}) \] is an equivalence. \end{prop} \begin{proof} We have the categories $\on{Fun}(\mathrm{Var}_\mathbb{C}^{\mathrm{op}},\mathrm{sSet})$ and $\on{Fun}(\mathrm{Sm}_\mathbb{C}^{\mathrm{op}},\mathrm{sSet})$ of presheaves of simplicial sets over $\mathrm{Var}_{\mathbb{C}}$ and $\mathrm{Sm}_\mathbb{C}$ respectively. These categories are related by an adjunction \[\pi^*:\on{Fun}(\mathrm{Sm}_\mathbb{C}^{\mathrm{op}},\mathrm{sSet})\leftrightarrows \on{Fun}(\mathrm{Var}_\mathbb{C}^{\mathrm{op}},\mathrm{sSet}):\pi_*\] where the right adjoint $\pi_*$ is just the restriction of a presheaf to smooth varieties. Both sides of this adjunction have a symmetric monoidal structure by taking objectwise product. The functor $\pi_*$ is obviously strong symmetric monoidal. We can equip both sides with the local model structure with respect to the proper topology. We obtain a Quillen adjunction \[\pi^*:\on{Fun}_{pro}(\mathrm{Sm}_\mathbb{C}^{\mathrm{op}},\mathrm{sSet})\leftrightarrows \on{Fun}_{pro}(\mathrm{Var}_\mathbb{C}^{\mathrm{op}},\mathrm{sSet}):\pi_*\] between symmetric monoidal model categories in which the right adjoint is a strong symmetric monoidal functor. In \cite[Proposition 3.22]{blanctopological}, it is proved that this is a Quillen equivalence. The model category $\on{Fun}_{pro}(\mathrm{Sm}_\mathbb{C}^{\mathrm{op}},\mathrm{sSet})$ presents the $\infty$-topos of hypercomplete sheaves over the proper site on $\mathrm{Sm}_\mathbb{C}$ and similarly for model category $\on{Fun}_{pro}(\mathrm{Var}_\mathbb{C}^{\mathrm{op}},\mathrm{sSet})$. Therefore, this Quillen equivalence implies that these two $\infty$-topoi are equivalent. Moreover, as in the proof of \ref{theo: equivalence cochains-sheaf}, these topoi, seen as symmetric monoidal presentable $\infty$-categories under the cartesian product, represent the functor $\mathbf{C}\mapsto \on{Fun}^{\otimes}_{pro}(\mathrm{Sm}_\mathbb{C},\mathbf{C})$ (resp. $\mathbf{C}\mapsto \on{Fun}^{\otimes}_{pro}(\mathrm{Var}_\mathbb{C},\mathbf{C})$). The result immediately follows. 
\end{proof} \begin{theo} Up to weak equivalences, there is a unique strong symmetric monoidal functor \[\mathscr{D}_*:\icat{N}(\mathrm{Var}_\mathbb{C})\longrightarrow \mathbf{MHC}_\mathbb{Q}\] which satisfies descent with respect to proper hypercovers and whose restriction to $\mathrm{Sm}_\mathbb{C}$ is equivalent to the functor $\mathscr{D}_*$ constructed in the previous subsection. There is also a unique strong symmetric monoidal functor \[\mathscr{D}^*:\icat{N}(\mathrm{Var}_\mathbb{C})^{\mathrm{op}}\longrightarrow \mathbf{MHC}_\mathbb{Q}\] which satisfies descent with respect to proper hypercovers and whose restriction to $\mathrm{Sm}_\mathbb{C}$ is equivalent to the functor $\mathscr{D}^*$ constructed in the previous subsection. \end{theo} \begin{proof} Let $\operatorname{Ind}(\mathbf{MHC}_\mathbb{Q})$ be the Ind-category of the $\infty$-category of mixed Hodge complexes. This is a stable presentable $\infty$-category. We first prove that the composite \[\mathscr{D}_*:\icat{N}(\mathrm{Sm}_\mathbb{C})\longrightarrow \mathbf{MHC}_\mathbb{Q}\longrightarrow \operatorname{Ind}(\mathbf{MHC}_\mathbb{Q})\] satisfies descent with respect to proper hypercovers. Let $Y$ be a smooth variety and $X_\bullet\to Y$ be a hypercover for the proper topology. We wish to prove that the map \[\alpha:\operatorname{colim}_{\Delta^{\mathrm{op}}} \mathscr{D}_*(X_\bullet)\longrightarrow \mathscr{D}_*(Y)\] is an equivalence in $\operatorname{Ind}(\mathbf{MHC}_\mathbb{Q})$. By \cite[Proposition 3.24]{blanctopological} and the fact that taking singular chains commutes with homotopy colimits in spaces, we see that the map \[\beta:\operatorname{colim}_{\Delta^{\mathrm{op}}}S_*(X_\bullet,\mathbb{Q})\longrightarrow S_*(Y,\mathbb{Q})\] is an equivalence. On the other hand, writing $\icat{Ch}_*(\mathbb{Q})^{\omega}$ for the $\infty$-category of chain complexes whose homology is finite dimensional, the forgetful functor \[U:\operatorname{Ind}(\mathbf{MHC}_\mathbb{Q})\longrightarrow \operatorname{Ind}(\icat{Ch}_*(\mathbb{Q})^{\omega})\simeq \icat{Ch}_*(\mathbb{Q})\] preserves colimits and, by Theorem \ref{theo: equivalence cochains-sheaf}, the composite $U\circ\mathscr{D}_*$ is weakly equivalent to $S_*(-,\mathbb{Q})$. Therefore, the map $\beta$ is weakly equivalent to the map $U(\alpha)$; in particular, we deduce that the source of $\alpha$ is in $\mathbf{MHC}_\mathbb{Q}$ (as opposed to $\operatorname{Ind}(\mathbf{MHC}_\mathbb{Q})$). And since the functor $U:\mathbf{MHC}_\mathbb{Q}\to \icat{Ch}_*(\mathbb{Q})$ is conservative, it follows that $\alpha$ is an equivalence as desired. Hence, by Proposition \ref{prop:Blanc}, there is a unique extension of $\mathscr{D}_*$ to a strong symmetric monoidal functor $\icat{N}(\mathrm{Var}_\mathbb{C})\longrightarrow\operatorname{Ind}(\mathbf{MHC}_\mathbb{Q})$ that has proper descent. Moreover, by the first paragraph of this proof, if $Y$ is an object of $\mathrm{Var}_\mathbb{C}$ and $X_\bullet\longrightarrow Y$ is a proper hypercover by smooth varieties, then $\operatorname{colim}_{\Delta^{\mathrm{op}}}\mathscr{D}_*(X_\bullet)$ has finite dimensional homology. It follows that this unique extension of $\mathscr{D}_*$ to $\mathrm{Var}_\mathbb{C}$ lands in $\mathbf{MHC}_\mathbb{Q}\subset \operatorname{Ind}(\mathbf{MHC}_\mathbb{Q})$.
For the case of $\mathscr{D}^*$, we know from Proposition \ref{prop : duality is monoidal} that dualization induces a strong symmetric monoidal equivalence of $\infty$-categories $\mathbf{MHC}_\mathbb{Q}^{\mathrm{op}}\simeq \mathbf{MHC}_\mathbb{Q}$ (we emphasize that, as a functor, dualization is only lax symmetric monoidal, but as an $\infty$-functor it is strong symmetric monoidal). Thus, we see that we have no other choice but to define $\mathscr{D}^*$ as the composite \[\icat{N}(\mathrm{Var}_\mathbb{C})^{\mathrm{op}} \xrightarrow{(\mathscr{D}_*)^{\mathrm{op}}}\mathbf{MHC}_\mathbb{Q}^{\mathrm{op}}\xrightarrow{(-)^{\vee}}\mathbf{MHC}_\mathbb{Q}\] and this is the unique strong symmetric monoidal functor \[\mathscr{D}^*:\icat{N}(\mathrm{Var}_\mathbb{C})^{\mathrm{op}}\longrightarrow \mathbf{MHC}_\mathbb{Q}\] which satisfies descent with respect to proper hypercovers and whose restriction to $\mathrm{Sm}_\mathbb{C}$ is equivalent to the functor $\mathscr{D}^*$ constructed in the previous subsection. \end{proof} \begin{prop}\label{prop:equivalence D S} \begin{enumerate} \item There is a weak equivalence $\mathscr{D}_*(-)_\mathbb{Q}\simeq S_*(-,\mathbb{Q})$ in the $\infty$-category of strong symmetric monoidal $\infty$-functors $\icat{N}(\mathrm{Var}_\mathbb{C})\longrightarrow \icat{Ch}_*(\mathbb{Q})$. \item There is a weak equivalence $\mathcal{A}_{PL}^*(-)\simeq\mathscr{D}^*(-)_\mathbb{Q}\simeq S^*(-,\mathbb{Q})$ in the $\infty$-category of strong symmetric monoidal $\infty$-functors $\icat{N}(\mathrm{Var}_\mathbb{C})^{\mathrm{op}}\longrightarrow \icat{Ch}_*(\mathbb{Q})$. \end{enumerate} \end{prop} \begin{proof} We prove the first claim. By construction, $\mathscr{D}_*(-)_\mathbb{Q}$ is a symmetric monoidal functor that satisfies proper descent. By \cite[Proposition 3.24]{blanctopological}, the same is true for $S_*(-,\mathbb{Q})$. Since these two functors are moreover weakly equivalent when restricted to $\mathrm{Sm}_\mathbb{C}$, they are equivalent by Proposition \ref{prop:Blanc}. The linear dual functor is strong symmetric monoidal when restricted to chain complexes whose homology is of finite type. Moreover, both $S_*(-,\mathbb{Q})$ and $\mathscr{D}_*(-)_\mathbb{Q}$ land in the $\infty$-category of such chain complexes. Therefore, the equivalence $S^*(-,\mathbb{Q})\simeq \mathscr{D}^*(-)_{\mathbb{Q}}$ follows from the first part. The equivalence $\mathcal{A}_{PL}^*(-)\simeq S^*(-,\mathbb{Q})$ is classical. \end{proof} \section{Formality of the singular chains functor}\label{Section_main} In this section, we prove the main results of the paper on the formality of the singular chains functor. We also explain some applications to operad formality. \begin{defi} Let $X$ be a complex variety and let $\alpha$ be a rational number. We say that the weight filtration on $H^*(X,\mathbb{Q})$ is \textit{$\alpha$-pure} if for all $n\geq 0$ we have $$Gr^W_pH^n(X,\mathbb{Q})=0\text{ for all }p\neq \alpha n.$$ \end{defi} \begin{rem} Note that since the weight filtration on $H^n(-,\mathbb{Q})$ has weights in the interval $[0,2n]\cap \mathbb{Z}$, the above definition makes sense only for $\alpha\in[0, 2]\cap \mathbb{Q}$. For $\alpha=1$ we recover the purity property shared by the cohomology of smooth projective varieties. A very simple example of a variety whose weight filtration is $\alpha$-pure, with $\alpha$ not an integer, is given by $\mathbb{C}^2\setminus\{0\}$. Its reduced cohomology is concentrated in degree $3$ and weight $4$, so its weight filtration is $4/3$-pure.
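The same computation generalizes readily (a routine check, for instance via the Gysin sequence for the inclusion of the origin in $\mathbb{C}^n$): the reduced cohomology
\[\widetilde{H}^k(\mathbb{C}^n\setminus\{0\},\mathbb{Q})\]
vanishes except for $k=2n-1$, where it is one-dimensional of weight $2n$, so the weight filtration of $\mathbb{C}^n\setminus\{0\}$ is $2n/(2n-1)$-pure.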
We refer to Proposition $\ref{prop: purity for subspace arrangements}$ in the following section for more elaborate examples. \end{rem} Here is our main theorem. \begin{theo}\label{theo: main covariant} Let $\alpha$ be a non-zero rational number. The singular chains functor \[S_*(-,\mathbb{Q}):\mathrm{Var}_\mathbb{C}\longrightarrow \cat{Ch}_*(\mathbb{Q})\] is formal as a lax symmetric monoidal functor when restricted to varieties whose weight filtration in cohomology is $\alpha$-pure. \end{theo} \begin{proof} By Corollary \ref{coro: Hinich rigidification}, it suffices to prove that this functor is formal as an $\infty$-lax symmetric monoidal functor. By Proposition \ref{prop:equivalence D S}, it is equivalent to prove that $\mathscr{D}_*(-)_{\mathbb{Q}}$ is formal. We denote by $\bar{\mathscr{D}}_*$ the composite of $\mathscr{D}_*$ with a strong symmetric monoidal inverse of the equivalence of Theorem \ref{theo: equivinfty}. Because of that theorem, $\mathscr{D}_*(-)_\mathbb{Q}$ is weakly equivalent to $\Pi_\mathbb{Q}\circ \bar{\mathscr{D}}_*$. The restriction of $\bar{\mathscr{D}}_*$ to $\mathrm{Var}_\mathbb{C}^{\alpha\text{-}pure}$ lands in $\icat{Ch}_*(\mathrm{MHS}_\mathbb{Q})^{\alpha\text{-}pure}$, the full subcategory of $\icat{Ch}_*(\mathrm{MHS}_\mathbb{Q})$ spanned by chain complexes whose homology is $\alpha$-pure. By Corollary \ref{coro-purity formality Q}, the $\infty$-functor $\Pi_\mathbb{Q}$ from $\icat{Ch}_*(\mathrm{MHS}_\mathbb{Q})^{\alpha\text{-}pure}$ to $\icat{Ch}_*(\mathbb{Q})$ is formal and hence so is $\Pi_\mathbb{Q}\circ \bar{\mathscr{D}}_*$. \end{proof} We now list a few applications of this result. \subsection{Noncommutative little disks operad}\label{subsection: ncld} The authors of \cite{dotsenkononcommutative} introduce two nonsymmetric topological operads $\mathcal{A}s_{S^1}$ and $\mathcal{A}s_{S^1}\rtimes S^1$. In each arity, these operads are given by a product of copies of $\mathbb{C}\setminus\{0\}$, and the operad maps can be checked to be algebraic maps. It follows that the operads $\mathcal{A}s_{S^1}$ and $\mathcal{A}s_{S^1}\rtimes S^1$ are operads in the category $\mathrm{Sm}_\mathbb{C}$ and that the weight filtration on their cohomology is $2$-pure. Therefore, by Theorem \ref{theo: main covariant} we have the following result. \begin{theo}\label{theo: little} The operads $S_*(\mathcal{A}s_{S^1},\mathbb{Q})$ and $S_*(\mathcal{A}s_{S^1}\rtimes S^1,\mathbb{Q})$ are formal. \end{theo} \begin{rem} The fact that the operad $S_*(\mathcal{A}s_{S^1},\mathbb{Q})$ is formal is proved in \cite[Proposition 7]{dotsenkononcommutative} by a more elementary method, and it is true even with integral coefficients. The other formality result was, however, unknown to the authors of \cite{dotsenkononcommutative}. \end{rem} \subsection{Self-maps of the projective line}\label{subsection : self maps} We denote by $F_d$ the algebraic variety of degree $d$ algebraic maps from $\mathbf{P}^1_{\mathbb{C}}$ to itself that send the point $\infty$ to the point $1$. Explicitly, a point in $F_d$ is a pair $(f,g)$ of degree $d$ monic polynomials without any common roots. Sending a monic polynomial to its set of coefficients, we may see the variety $F_d$ as a Zariski open subset of $\mathbf{A}^{2d}_{\mathbb{C}}$. See \cite[Section 5]{horelmotivic} for more details. \begin{prop}The weight filtration on $H^*(F_d,\mathbb{Q})$ is $2$-pure. \end{prop} \begin{proof} The variety $F_d$ is denoted $\operatorname{Poly}_1^{d,2}$ in \cite[Definition 1.1]{farbtopology}.
It is explained in Step 4 of the proof of Theorem 1.2 in that paper that the variety $F_d$ is the quotient of the complement of a hyperplane arrangement $H$ in $\mathbf{A}^{2d}_{\mathbb{C}}$ by the group $\Sigma_d\times\Sigma_d$ acting by permuting the coordinates. The quotient map \[\pi:\mathbf{A}^{2d}_{\mathbb{C}}-H\to F_d\] is algebraic and thus induces a morphism of mixed Hodge structures $\pi^*:H^*(F_d,\mathbb{Q})\to H^*(\mathbf{A}^{2d}_{\mathbb{C}}-H,\mathbb{Q})$. Moreover, it is classical that $\pi^*$ is injective (see e.g. \cite[Theorem III.2.4]{Bredon}). Since the mixed Hodge structure on $H^k(\mathbf{A}^{2d}_{\mathbb{C}}-H,\mathbb{Q})$ is pure of weight $2k$ (by Proposition \ref{prop: purity for subspace arrangements} or by \cite{kimweights}), the desired result follows. \end{proof} In \cite[Proposition 3.1]{cazanavealgebraic}, Cazanave shows that the variety $\bigsqcup_dF_d$ has the structure of a graded monoid in $\mathrm{Sm}_\mathbb{C}$. The structure of a graded monoid can be encoded by a colored operad. Thus the following result follows from Theorem \ref{theo: main covariant}. \begin{theo}\label{theo: monoidformal} The graded monoid in chain complexes $\bigoplus_dS_*(F_d,\mathbb{Q})$ is formal. \end{theo} \subsection{The little disks operad}\label{subsect: little disks} In \cite{Petersen}, Petersen shows that the operad of little disks $\mathcal{D}$ is formal. The method of proof is to use the action of a certain group $\operatorname{GT}(\mathbb{Q})$ on $S_*(\mathcal{PAB}_\mathbb{Q},\mathbb{Q})$ which follows from work of Drinfeld. Here the operad $\mathcal{PAB}_\mathbb{Q}$ is rationally equivalent to $\mathcal{D}$ and $\operatorname{GT}(\mathbb{Q})$ is the group of $\mathbb{Q}$-points of the pro-algebraic Grothendieck-Teichmüller group. We can reinterpret this proof using the language of mixed Hodge structures. Indeed, the group $\operatorname{GT}$ receives a map from the group $\operatorname{Gal}(\mathrm{MT}(\mathbb{Z}))$, the Galois group of the Tannakian category of mixed Tate motives over $\mathbb{Z}$ (see \cite[25.9.2.2]{andrebook}). Moreover, there is a map $\operatorname{Gal}(\mathrm{MHTS}_\mathbb{Q})\to\operatorname{Gal}(\mathrm{MT}(\mathbb{Z}))$ from the Tannakian Galois group of the abelian category of mixed Hodge Tate structures (the full subcategory of $\mathrm{MHS}_\mathbb{Q}$ generated under extensions by the Tate twists $\mathbb{Q}(n)$ for all $n$) which is Tannaka dual to the tensor functor \[\mathrm{MT}(\mathbb{Z})\longrightarrow \mathrm{MHTS}_\mathbb{Q}\] sending a mixed Tate motive to its Hodge realization. This map of Galois groups allows us to view $S_*(\mathcal{PAB}_\mathbb{Q},\mathbb{Q})$ as an operad in $\cat{Ch}_*(\mathrm{MHS}_\mathbb{Q})$ which moreover has a $2$-pure weight filtration (as follows from the computation in \cite{Petersen}). Therefore, by Corollary \ref{coro-purity formality Q}, the operad $S_*(\mathcal{PAB}_\mathbb{Q},\mathbb{Q})$ is formal and hence so is $S_*(\mathcal{D},\mathbb{Q})$. \subsection{The gravity operad} In \cite{dupontgravity}, Dupont and the second author prove the formality of the gravity operad of Getzler. It is an operad structure on the collection of graded vector spaces $\{H_{*-1}(\mathcal{M}_{0,n+1}), n\in \mathbb{N}\}$. It can be defined as the homotopy fixed points of the circle action on $S_*(\mathcal{D},\mathbb{Q})$. The method of proof in \cite{dupontgravity} can also be interpreted in terms of mixed Hodge structures.
Indeed, a model $\mathcal{G}rav^{W'}$ of gravity is constructed in Section 2.7 of loc. cit. This model comes with an action of $\operatorname{GT}(\mathbb{Q})$ and a $\operatorname{GT}(\mathbb{Q})$-equivariant map $\iota:\mathcal{G}rav^{W'}\longrightarrow S_*(\mathcal{PAB}_\mathbb{Q},\mathbb{Q})$ which is injective on homology. As in the previous subsection, this action of $\operatorname{GT}(\mathbb{Q})$ allows us to interpret $\mathcal{G}rav^{W'}$ as an operad in $\cat{Ch}_*(\mathrm{MHS}_\mathbb{Q})$. Moreover, the injectivity of $\iota$ implies that $\mathcal{G}rav^{W'}$ also has a $2$-pure weight filtration. Therefore, by Corollary \ref{coro-purity formality Q}, we deduce the formality of $\mathcal{G}rav^{W'}$. In fact, we obtain the stronger result that the map \[\iota:\mathcal{G}rav^{W'}\longrightarrow S_*(\mathcal{PAB}_\mathbb{Q},\mathbb{Q})\] is formal as a map of operads (i.e. it is connected to the induced map in homology by a zig-zag of maps of operads). \subsection{$E^1$-formality} The above results deal with objects whose weight filtration is pure. In general, for mixed weights, the singular chains functor is not formal, but it is $E^1$-formal as we now explain. The $r$-th stage of the spectral sequence associated with a filtered complex is a bigraded complex with differential of bidegree $(-r,r-1)$. By taking its total degree and considering the column filtration we obtain a filtered complex. Denote by $$E^r:\icat{Ch}_*(\mathcal{F}\mathbb{Q})\longrightarrow \icat{Ch}_*(\mathcal{F}\mathbb{Q})$$ the resulting strong symmetric monoidal $\infty$-functor. Denote by $$\tilde \Pi^{W}_\mathbb{Q}:\mathbf{MHC}_\mathbb{Q}\longrightarrow \icat{Ch}_*(\mathcal{F}\mathbb{Q})$$ the forgetful functor defined by sending a mixed Hodge complex to its rational component together with the weight filtration. Note that, since the weight spectral sequence of a mixed Hodge complex degenerates at the second stage, the homology of $E^1\circ \tilde \Pi^{W}_\mathbb{Q}$ gives the weight filtration on the homology of mixed Hodge complexes. We have: \begin{theo}\label{E1formality} Denote by $S^{fil}_*:\icat{N}(\mathrm{Var}_\mathbb{C})\longrightarrow \icat{Ch}_*(\mathcal{F}\mathbb{Q})$ the composite functor $$\icat{N}(\mathrm{Var}_\mathbb{C})\xrightarrow{\mathscr{D}_*}\mathbf{MHC}_\mathbb{Q}\xrightarrow{\tilde \Pi^{W}_\mathbb{Q}}\icat{Ch}_*(\mathcal{F}\mathbb{Q}).$$ There is an equivalence of strong symmetric monoidal $\infty$-functors $E^1\circ S_*^{fil}\simeq S^{fil}_*.$ \end{theo} \begin{proof} It suffices to prove an equivalence $\tilde \Pi^{W}_\mathbb{Q}\simeq E^1\circ \tilde \Pi^{W}_\mathbb{Q}$. We have a commutative diagram of strong symmetric monoidal $\infty$-functors. $$ \xymatrix{ \icat{Ch}_*(\mathrm{MHS}_\mathbb{Q})\ar[d]_{\Pi^{W}_\mathbb{Q}}\ar[r]^{\mathcal{T}}&\mathbf{MHC}_\mathbb{Q}\ar[d]^{\tilde \Pi^{W}_\mathbb{Q}}\\ \icat{Ch}_*(\mathcal{F}\mathbb{Q})\ar[r]^{T}\ar[d]_{E^0}&\icat{Ch}_*(\mathcal{F}\mathbb{Q})\ar[d]^{E^1}\\ \icat{Ch}_*(\mathcal{F}\mathbb{Q})\ar[r]^{T}&\icat{Ch}_*(gr\mathbb{Q}) } $$ The commutativity of the top square follows from the definition of $\mathcal{T}$. We prove that the bottom square commutes. Recall that $T(K,W)$ is the filtered complex $(K,TW)$ defined by $TW^pK_n:=W^{p+n}K_n$. It satisfies $d(TW^pK_n)\subset TW^{p+1}K_{n-1}$. In particular, the induced differential on $Gr_{TW}K$ is trivial.
Therefore we have: $$E^1_{-p,q}(K,TW)\cong H_{q-p}(Gr^{p}_{TW}K)\cong Gr^{p}_{TW}K_{q-p}=Gr^q_WK_{q-p}=E^0_{-q,2q-p}(K,W).$$ This proves that the above diagram commutes. Since $\mathcal{T}$ is an equivalence of $\infty$-categories, it is enough to prove that $E^1\circ \tilde \Pi^{W}_\mathbb{Q}\circ \mathcal{T}$ is equivalent to $\tilde \Pi^{W}_\mathbb{Q}\circ \mathcal{T}$. By the commutativity of the above diagram, it suffices to prove that there is an equivalence $E^0\circ \Pi^{W}_\mathbb{Q}\cong \Pi^{W}_\mathbb{Q}$. This follows from Lemma \ref{Deligne_splitting_over_Q}, since $E^0=U^{fil}\circ gr$. \end{proof} \section{Rational homotopy of varieties and formality}\label{Section_formalschemes} For a space $X$, we denote by $\mathcal{A}_{PL}^*(X)$ Sullivan's algebra of piecewise linear differential forms. This is a commutative dg-algebra over $\mathbb{Q}$ that captures the rational homotopy type of $X$. A contravariant version of Theorem $\ref{theo: main covariant}$ gives: \begin{theo}\label{theo: main contravariant} Let $\alpha$ be a non-zero rational number. The functor \[\mathcal{A}_{PL}^*:\mathrm{Var}_\mathbb{C}^{\mathrm{op}}\longrightarrow \cat{Ch}_*(\mathbb{Q})\] is formal as a lax symmetric monoidal functor when restricted to varieties whose weight filtration in cohomology is $\alpha$-pure. \end{theo} \begin{proof} The proof is the same as the proof of Theorem $\ref{theo: main covariant}$, using $\mathscr{D}^*$ instead of $\mathscr{D}_*$ and using the fact that $\mathscr{D}^*(-)_\mathbb{Q}$ is quasi-isomorphic to $\mathcal{A}_{PL}^*$ as a lax symmetric monoidal functor (see \cite[Th\'{e}or\`{e}me 5.5]{Na}). \end{proof} Recall that a topological space $X$ is said to be \textit{formal} if there is a string of quasi-isomorphisms of commutative dg-algebras from $\mathcal{A}_{PL}^*(X)$ to $H^*(X,\mathbb{Q})$, where $H^*(X,\mathbb{Q})$ is considered as a commutative dg-algebra with trivial differential. Likewise, a continuous map of topological spaces $f:X\longrightarrow Y$ is \textit{formal} if there is a string of homotopy commutative diagrams of morphisms $$ \xymatrix{ \mathcal{A}_{PL}^*(Y)\ar[d]^{f^*}&\ar[l]\ast\ar[d]&\ar[l]\cdots\ar[r]&\ast\ar[d]\ar[r]&H^*(Y,\mathbb{Q})\ar[d]^{H^*(f)}\\ \mathcal{A}_{PL}^*(X)&\ar[l]\ast&\cdots\ar[l]\ar[r]&\ast\ar[r]&H^*(X,\mathbb{Q}) } $$ where the horizontal arrows are quasi-isomorphisms. Note that if $f:X\to Y$ is a map of topological spaces and $X$ and $Y$ are both formal spaces, then it is not always true that $f$ is a formal map. Also, in general, the composition of formal morphisms is not formal. Theorem $\ref{theo: main contravariant}$ gives functorial formality for varieties with pure weight filtration in cohomology, generalizing both ``purity implies formality'' statements appearing in \cite{Dupont} for smooth varieties and in \cite{ChCi1} for singular projective varieties. We also get a result of partial formality as done in these references, via Proposition \ref{decomposition_qformal}. Our generalization is threefold, as explained in the following three subsections. \subsection{Rational purity} To our knowledge, in the existing references where $\alpha$-purity of the weight filtration is discussed, only the cases $\alpha=1$ and $\alpha=2$ are considered, whereas we obtain formality for varieties with $\alpha$-pure cohomology, for $\alpha$ an arbitrary non-zero rational number. We now show that certain complements of subspace arrangements give examples of such varieties. \begin{defi} Let $V$ be a finite dimensional $\mathbb{C}$-vector space.
We say that a finite set $\{H_i\}_{i\in I}$ of subspaces of $V$ is a \textit{good arrangement of codimension $d$ subspaces} if \begin{enumerate} \item[(i)] For each $i \in I$, the subspace $H_i$ is of codimension $d$. \item[(ii)] For each $i\in I$, the set of subspaces $\{H_i\cap H_j\}_{j\neq i}$ of $H_i$ forms a good arrangement of codimension $d$ subspaces. \end{enumerate} \end{defi} \begin{rem} In particular, the empty set of subspaces is a good arrangement of codimension $d$ subspaces. By induction on the size of $I$, we see that this condition is well-defined. \end{rem} \begin{example} Recall that a set of subspaces of codimension $d$ of an $n$-dimensional space is said to be in general position if the intersection of $k$ of those subspaces is of codimension $\min(n,dk)$. One easily checks that a set of codimension $d$ subspaces in general position is a good arrangement. However, the converse does not hold, as shown in the following example. \end{example} \begin{example} Take $V=(\mathbb{C}^d)^m$ and define, for $(i,j)$ an unordered pair of distinct elements in $\{1,\ldots,m\}$, the subspace \[W_{(i,j)}=\{(x_1,\ldots,x_m)\in (\mathbb{C}^d)^m, x_i=x_j\}.\] This collection of codimension $d$ subspaces of $V$ is a good arrangement. However, these subspaces are not in general position if $m$ is at least $3$. Indeed, the codimension of $W_{(1,2)}\cap W_{(1,3)}\cap W_{(2,3)}$ is $2d$. The complement $V-\bigcup_{(i,j)}W_{(i,j)}$ is exactly $F_m(\mathbb{C}^d)$, the space of configurations of $m$ points in $\mathbb{C}^d$. \end{example} \begin{prop}\label{prop: purity for subspace arrangements} Let $H=\{H_1,\ldots,H_k\}$ be a good arrangement of codimension $d$ subspaces of $\mathbb{C}^n$. Then the weight filtration on $H^*(\mathbb{C}^n-\bigcup_i H_i,\mathbb{Q})$ is $2d/(2d-1)$-pure. \end{prop} \begin{proof} We proceed by induction on $k$. This is obvious for $k=0$. Now, we consider the variety $X=\mathbb{C}^n-\bigcup_{i=1}^{k-1} H_i$. It contains the open subvariety $U=\mathbb{C}^n-\bigcup_{i=1}^{k} H_i$, whose closed complement $Z=H_k-\bigcup_{i=1}^{k-1} (H_i\cap H_k)$ has codimension $d$. Therefore the purity long exact sequence on cohomology groups has the form \[\ldots \longrightarrow H^{r-2d}(Z)(-d)\longrightarrow H^r(X)\longrightarrow H^r(U)\longrightarrow H^{r+1-2d}(Z)(-d)\longrightarrow \ldots\] By the induction hypothesis, the Hodge structures on $H^{r+1-2d}(Z)(-d)$ and on $H^r(X)$ are pure of weight $2dr/(2d-1)$ (for the former, the induction hypothesis gives weight $2d(r+1-2d)/(2d-1)$ before the Tate twist, and the twist adds $2d$), and hence the same holds for $H^r(U)$, as desired. \end{proof} \begin{rem} This proposition is well-known for $d=1$ and is proved for instance in \cite{kimweights}. Note that the space $F_m(\mathbb{C}^d)$ of configurations of $m$ points in $\mathbb{C}^d$ fits in the above proposition, so we recover formality of these spaces by purely Hodge-theoretic arguments. \end{rem} \subsection{Functoriality} Every morphism of smooth complex projective varieties is formal. However, if $f:X\to Y$ is an algebraic morphism of complex varieties (possibly singular and/or non-projective), and both $X$ and $Y$ are formal, the morphism $f$ need not be formal. \begin{example} Consider the algebraic Hopf fibration $f:\mathbb{C}^2\setminus\{0\}\longrightarrow \mathbf{P}^1_{\mathbb{C}}$ defined by $(x_0,x_1)\mapsto [x_0:x_1]$. Both spaces $\mathbb{C}^2\setminus\{0\}\simeq S^3$ and $\mathbf{P}^1_{\mathbb{C}}\simeq S^2$ are formal. The morphism induced by $f$ in cohomology is trivial in all positive degrees. Therefore, if $f$ were formal, it would be rationally nullhomotopic.
However, it is well-known that $f$ generates the one-dimensional vector space $\pi_3(S^2)\otimes\mathbb{Q}$. Note, in fact, that $\mathbf{P}^1_{\mathbb{C}}$ has a $1$-pure weight filtration while $\mathbb{C}^2\setminus\{0\}$ has a $4/3$-pure weight filtration. \end{example} Theorem $\ref{theo: main contravariant}$ tells us that if $f:X\longrightarrow Y$ is a morphism of algebraic varieties and both $X$ and $Y$ have $\alpha$-pure cohomology, with $\alpha$ a non-zero rational number (the same $\alpha$ for $X$ and $Y$), then $f$ is a formal morphism. This generalizes the formality of holomorphic morphisms between compact K\"{a}hler manifolds of \cite{DGMS} and enhances the results of \cite{Dupont} and \cite{ChCi1} by providing them with functoriality. In fact, we have: \begin{prop} Let $f:X\longrightarrow Y$ be a morphism between connected complex varieties. Assume that the weight filtration on the cohomology of $X$ (resp. $Y$) is $\alpha$-pure (resp. $\beta$-pure). Then: \begin{enumerate} \item If $\alpha=\beta$, then $f$ is formal. \item If $\alpha\neq\beta$, then $f$ is formal only if it is rationally nullhomotopic. \end{enumerate} \end{prop} \begin{proof} Let us first give the precise definition of a rationally nullhomotopic map that we will use. We say that a map $g:U\longrightarrow V$ between topological spaces is rationally nullhomotopic if the induced map \[\mathcal{A}_{PL}^*(g):\mathcal{A}_{PL}^*(V)\longrightarrow \mathcal{A}_{PL}^*(U)\] is equal in the homotopy category of cdga's to a map that factors through the initial cdga $\mathbb{Q}$. When $\alpha=\beta$, Theorem $\ref{theo: main contravariant}$ ensures that $f$ is formal. If $\alpha\neq \beta$, then we claim that $H^*(f)$ is zero in positive degrees. Indeed, since $H^*(f)$ is strictly compatible with the weight filtration, it suffices to show that the morphism $$Gr_{p}^WH^n(Y,\mathbb{Q})\longrightarrow Gr_{p}^WH^n(X,\mathbb{Q})$$ is trivial for all $p\in\mathbb{Z}$ and all $n>0$, which follows from the purity conditions. Therefore, if $f$ is formal, the map $\mathcal{A}_{PL}^*(f)$ coincides with $H^*(f)$ in the homotopy category of cdga's, and the latter map factors through $\mathbb{Q}$. \end{proof} \subsection{Non-projective singular varieties} The following formality result for non-projective singular complex varieties with pure Hodge structure seems to be new. \begin{example} Let $X$ be an irreducible singular projective variety of dimension $n>0$ with $1$-pure weight filtration in cohomology. Let $p\in X$ be a smooth point of $X$. Then, we claim that the complement $X-p$ has a $1$-pure weight filtration in cohomology. Indeed, we can consider the long exact sequence of cohomology groups for the pair $(X,X-p)$: \[\cdots\to H^{i-1}(X-p)\to H^i(X,X-p)\to H^i(X)\to H^i(X-p)\to H^{i+1}(X,X-p)\to\cdots \] Since $p$ is a smooth point, there exists a neighborhood $U$ of $p$ that is homeomorphic to $\mathbb{R}^{2n}$; therefore excision gives us an isomorphism \[H^k(X,X-p)\cong H^k(U,U-p).\] Since $H^k(U,U-p)$ is non-zero only when $k=2n$, we deduce that the map $H^k(X)\to H^k(X-p)$ is an isomorphism for all $k<2n-1$. Moreover, since $X$ is irreducible, we have $H^{2n}(X)=\mathbb{Q}$ and this vector space has a generator, the fundamental class, which is in the image of $H^{2n}(X,X-q)\to H^{2n}(X)$ for any smooth point $q$. Together with the above long exact sequence, this implies that $H^{2n-1}(X-p)\cong H^{2n-1}(X)$ and $H^{2n}(X-p)=0$.
To summarize, we have proved that the inclusion $X-p\to X$ induces an isomorphism on all cohomology groups except the top one, where $H^{2n}(X)=\mathbb{Q}$ while $H^{2n}(X-p)=0$. This proves that the weight filtration of $X-p$ is $1$-pure. As a consequence, the space $X-p$ is formal and the inclusion $X-p\hookrightarrow X$ is formal. \end{example} \subsection{$E^1$-formality} We also have a contravariant version of Theorem $\ref{E1formality}$. \begin{theo}\label{E1formality_contravariant} Denote by $\mathcal{A}_{fil}^*:\icat{N}(\mathrm{Var}_\mathbb{C})^{\mathrm{op}}\longrightarrow \icat{Ch}_*(\mathcal{F}\mathbb{Q})$ the composite functor $$\icat{N}(\mathrm{Var}_\mathbb{C})^{\mathrm{op}}\xrightarrow{\mathscr{D}^*}\mathbf{MHC}_\mathbb{Q}\xrightarrow{\tilde \Pi^{W}_\mathbb{Q}}\icat{Ch}_*(\mathcal{F}\mathbb{Q}).$$ Then \begin{enumerate} \item The lax symmetric monoidal $\infty$-functors $\mathcal{A}_{fil}^*$ and $E^1\circ \mathcal{A}_{fil}^*$ are weakly equivalent. \item Let $U:\cat{Ch}_*(\mathcal{F}\mathbb{Q})\longrightarrow \cat{Ch}_*(\mathbb{Q})$ denote the forgetful functor. The lax symmetric monoidal $\infty$-functor $U\circ E^1\circ \mathcal{A}_{fil}^*:\icat{N}(\mathrm{Var}_\mathbb{C})^{\mathrm{op}}\to \icat{Ch}_*(\mathbb{Q})$ is weakly equivalent to Sullivan's functor $\mathcal{A}_{PL}^*$ of piecewise linear forms. \item The lax symmetric monoidal functor $U\circ E^1\circ \mathcal{A}_{fil}^*:\mathrm{Sm}_\mathbb{C}^{\mathrm{op}}\to \cat{Ch}_*(\mathbb{Q})$ is weakly equivalent to Sullivan's functor $\mathcal{A}_{PL}^*$ of piecewise linear forms. \end{enumerate} \end{theo} \begin{proof} The first part is proven as Theorem \ref{E1formality}, replacing $\mathscr{D}_*$ by $\mathscr{D}^*$. The second part follows from the first part and the fact that $\mathcal{A}_{PL}^*(-)$ is naturally weakly equivalent to $\mathscr{D}^*(-)_{\mathbb{Q}}\simeq U\circ \mathcal{A}^*_{fil}$ (Proposition \ref{prop:equivalence D S}). The third part follows from the second part and Theorem \ref{theo: Hinich rigidification}, using the fact that both functors are ordinary lax symmetric monoidal functors when restricted to smooth varieties. \end{proof} \begin{rem} In \cite{Mo} it is proven that the complex homotopy type of every smooth complex variety is $E_1$-formal. This is extended to possibly singular varieties and their morphisms in \cite{CG1}. Then, a descent argument is used to prove that for nilpotent spaces (with finite type minimal models), this result descends to the rational homotopy type. Theorem $\ref{E1formality_contravariant}$ enhances the contents of \cite{CG1} in two ways: first, since descent is done at the level of functors, we obtain $E_1$-formality over $\mathbb{Q}$ for any complex variety, without nilpotency conditions (the only property needed is finite type cohomology). Second, the functorial nature of our statement makes $E_1$-formality at the rational level compatible with composition of morphisms. \end{rem} \subsection{Formality of Hopf cooperads} Our main theorem takes two dual forms, one covariant and one contravariant. The covariant theorem yields formality for algebraic structures (like monoids, operads, etc.), the contravariant theorem yields formality for coalgebraic structures (like the comonoid structure coming from the diagonal $X\to X\times X$ for any variety $X$). One might wonder if there is a way to do both at the same time.
For example, if $M$ is a topological monoid, then $H^*(M,\mathbb{Q})$ is a Hopf algebra where the multiplication comes from the diagonal of $M$ and the comultiplication comes from the multiplication of $M$. One may ask whether $S^*(M,\mathbb{Q})$ is formal as a Hopf algebra. This question is not well-posed because $S^*(M,\mathbb{Q})$ is not a Hopf algebra on the nose. The problem is that there does not seem to exist a model for singular chains or cochains that is strong symmetric monoidal: the standard singular chains functor $S_*(-,\mathbb{Q})$ is lax symmetric monoidal and Sullivan's functor $\mathcal{A}_{PL}^*$ is an oplax symmetric monoidal functor from $\mathrm{Top}$ to $\cat{Ch}_*(\mathbb{Q})^{\mathrm{op}}$. Nevertheless, the functor $\mathcal{A}_{PL}^*$ is strong symmetric monoidal ``up to homotopy''. It follows that, if $M$ is a topological monoid, $\mathcal{A}_{PL}^*(M)$ has the structure of a cdga with a comultiplication up to homotopy, and it makes sense to ask whether it is formal as such an object. In order to formulate this more precisely, we introduce the notion of an algebraic theory. The following is inspired by Section 3 of \cite{LaVo}. \begin{defi} An algebraic theory is a small category $T$ with finite products. For $\mathcal{C}$ a category with finite products, a $T$-algebra in $\mathcal{C}$ is a finite product preserving functor $T\longrightarrow\mathcal{C}$. \end{defi} There exist algebraic theories for which the $T$-algebras are monoids, groups, rings, operads, cyclic operads, modular operads, etc. \begin{rem} Definitions of algebraic theories in the literature are usually more restrictive. This definition will be sufficient for our purposes. \end{rem} \begin{defi} Let $T$ be an algebraic theory. Let $\Bbbk$ be a field. Then a dg Hopf $T$-coalgebra over $\Bbbk$ is a finite coproduct preserving functor from $T^{\mathrm{op}}$ to the category of cdga's over $\Bbbk$. \end{defi} \begin{rem} Recall that the coproduct in the category of cdga's is the tensor product. It follows that a dg Hopf $T$-coalgebra, for $T$ the algebraic theory of monoids, is a dg Hopf algebra whose multiplication is commutative. A dg Hopf $T$-coalgebra for $T$ the theory of operads is what is usually called a dg Hopf cooperad in the literature. \end{rem} \begin{defi} Let $T$ be an algebraic theory and $\mathcal{C}$ a category with products and with a notion of weak equivalences. A weak $T$-algebra in $\mathcal{C}$ is a functor $F:T\longrightarrow \mathcal{C}$ such that for each pair $(s,t)$ of objects of $T$, the canonical map \[F(t\times s)\longrightarrow F(t)\times F(s)\] is a weak equivalence. A weak $T$-algebra in the opposite category of $\mathrm{CDGA}_\Bbbk$ is called a weak dg Hopf $T$-coalgebra. \end{defi} Observe that if $X:T\longrightarrow \mathrm{Top}$ is a $T$-algebra in topological spaces (or even a weak $T$-algebra), then $\mathcal{A}^*_{PL}(X)$ is a weak dg Hopf $T$-coalgebra. Our main theorem for Hopf $T$-coalgebras is the following. \begin{theo} Let $\alpha$ be a rational number different from zero. Let $X:T\longrightarrow \mathrm{Var}_\mathbb{C}$ be a $T$-algebra such that for all $t\in T$, the weight filtration on the cohomology of $X(t)$ is $\alpha$-pure. Then $\mathcal{A}_{PL}^*(X)$ is formal as a weak dg Hopf $T$-coalgebra. \end{theo} \begin{proof} Being a weak dg Hopf $T$-coalgebra is a property of a functor $T^{\mathrm{op}}\to \mathrm{CDGA}_\Bbbk$ that is invariant under quasi-isomorphism. Thus the result follows immediately from Theorem \ref{theo: main contravariant}.
\end{proof} It should be noted that knowing that $\mathcal{A}_{PL}^*(X)$ is formal as a dg Hopf $T$-coalgebra implies that the data of $H^*(X,\mathbb{Q})$ is enough to reconstruct $X$ as a $T$-algebra up to rational equivalence. Indeed, recall the Sullivan spatial realization functor \[\langle-\rangle : \mathrm{CDGA}_\Bbbk \longrightarrow \mathrm{Top}.\] Applying this functor to a weak dg Hopf $T$-coalgebra yields a weak $T$-algebra in rational spaces. Specializing to $\mathcal{A}^*_{PL}(X)$ where $X$ is a $T$-algebra in spaces, we get a rational model for $X$ in the sense that the map \[X\longrightarrow \langle \mathcal{A}^*_{PL}(X)\rangle\] is a rational weak equivalence of weak $T$-algebras whose target is objectwise rational. It should also be noted that for reasonable algebraic theories $T$ (including in particular the theories of monoids, commutative monoids, operads, and cyclic operads), the homotopy theory of $T$-algebras in spaces is equivalent to that of weak $T$-algebras by the main theorem of \cite{bergnerrigidification}. In particular, our weak $T$-algebra $\langle \mathcal{A}^*_{PL}(X)\rangle$ can be strictified to a strict $T$-algebra that models the rationalization of $X$. If $\mathcal{A}^*_{PL}(X)$ is formal, one also gets a rational model for $X$ by applying the spatial realization to the strict Hopf $T$-coalgebra $H^*(X,\mathbb{Q})$. Thus the rational homotopy type of $X$ as a $T$-algebra is a formal consequence of $H^*(X,\mathbb{Q})$ as a Hopf $T$-coalgebra. \begin{example} Applying this theorem to the non-commutative little disks operad and framed little disks operad of subsection \ref{subsection: ncld}, we deduce that $\mathcal{A}^*_{PL}(\mathcal{A}s_{S^1})$ and $\mathcal{A}^*_{PL}(\mathcal{A}s_{S^1}\rtimes S^1)$ are formal as weak Hopf non-symmetric cooperads. Similarly, applying this to the monoid of self-maps of the projective line of subsection \ref{subsection : self maps}, we deduce that $\mathcal{A}^*_{PL}(\bigsqcup_d F_d)$ is formal as a weak Hopf graded comonoid. \end{example} \end{document}
\begin{document} \title{On the mathematical apparatus of many-body quantum dynamics} \author{Yu.I.Ozhigov\thanks{The work is supported by the Fund of NIX Computer Company (grant \# F793/8-05), INTAS grant 04-77-7289 and Russian Foundation for Basic Research grant 06-01-00494-a. e-mail: [email protected]} \\[7mm] Moscow State University,\\ Institute of Physics and Technology of RAS } \maketitle \begin{abstract} We discuss the possibility of modifying the many-body Hilbert-space formalism of quantum theory, a modification that is needed for the representation of the dynamics of quantum systems. The notions of an effective classical algorithm and of the visualization of quantum dynamics play the key role. \end{abstract} \newpage \section{Introduction} This work touches on an unusual theme: the possibility of modifying the mathematical apparatus of quantum theory. Its aim is to show why this is desirable and what future awaits quantum theory if this modification comes true. A work of this kind is necessarily of a general character, and in this case its main idea is also not conventional. Nevertheless, the theme under discussion is immediately relevant to many experiments on new technologies based on quantum effects. Moreover, the hypothesis formulated in this work can be rejected by real experiments (which are in progress), and it thus has a completely physical character. The proposed mathematical apparatus is based on effective (polynomial complexity) classical algorithms, which give a dynamical picture of many-particle quantum processes. Granting algorithms the status of first principles makes it possible to give exact formulations of things that have no such formulations in standard quantum theory, for example, the mechanism of agreement between unitary evolutions and measurements. It opens the door to a systematic treatment of quantum effects in many-body dynamics, which means the theoretical possibility of a complete mastery of chemistry. The price of the proposed modification is the recognition of a video film as a main form of representing results in theoretical physics, and the agreement to use analytically found values (for example, energies) only as checkpoints for such a video film. This idea is not completely new; for example, many physicists actively use program packages for numerical calculations and supplement the first principles of quantum theory with their own models of various processes. I present the algorithmic viewpoint in the radical form of a mathematical apparatus in order to show what remarkable consequences it has for quantum mechanics and its applications to complex systems, and how well it agrees with the spirit and traditions of physics. Quantum mechanics gives the most fundamental representation of the structure of matter; it forms the basic terms of our comprehension of chemical and biological processes. Hence the question of its effective application to many-body systems (separate elementary particles, atoms, molecules) is of principal importance. But the very structure of quantum theory contains elements which seriously complicate such an application\footnote{Many call this the counter-intuitiveness of quantum physics.}. These elements are: the “squeezing” of wave packets, connected with the uncertainty principle; the interference of amplitudes; non-locality; and the existence of two fundamentally different forms of evolution: unitary evolution and measurements.
The discussion of these peculiarities has the long history, beginning from the famous discussion between Einstein and Bohr about the nature of quantum randomness. The contra intuitiveness of quantum description forced many body either to look for more “realistic” version of quantum theory or to lead these peculiarities to its limit(\cite{Bom}, \cite{Bel}, \cite{v.Neu}, \cite{Ev}, and also \cite{Me} and others.). In the past years with the appearance and active development of the idea of quantum computer (\cite{Fe}, and also \cite{Ben}, \cite{De}, etc.) we obtain the possibility of the different approach to this question that may be more fruitful. The point is that the peculiarities of quantum theory follow from its mathematical apparatus, which is based on mathematical analysis and Hilbert spaces. The tremendous superfluity of this mathematical apparatus remained unnoticed during all the time when quantum mechanics develops, because practically only analytical methods were applied (the solution of differential equations, matrices diagonalization) for one particle, whereas the many particle area of Hilbert formalism connected with the forming of new spaces by tensor product of existing remained without applications\footnote{The only exclusion are states which can be written as Schmidt expansion (see, for example \cite{VBB}), that is from the algebraic viewpoint the simple generalization of EPR pairs; we discuss them below.}. This superfluity has not interfered the development of quantum physics, because there was no experimental scheme verifying the adequateness of this many particle formalism. The situation radically changed with the appearance of quantum computer (QC) scheme. The quantum computer is the first principle prediction of quantum theory, e.g., it follows immediately from the formalism of Hilbert spaces. Consequently, if this formalism is adequate for many body quantum systems then QC must exist and will be built despite of serious difficulties lying on this way (see \cite{Va}). But it may happen that Hilbert formalism is not adequate in the area of many particles problems. The procedure of taking tensor product of one particle spaces of states may have the physical sense: it can reflect the real mechanism when two “samples” of these particles glue together and form the “sample” of new quantum particle. In this case the entanglement turns to be the physical resource and the realization of arbitrary complex states predicted by Hilbert formalism becomes impossible because it presumes the unacceptably huge (exponential) expenditure of this resource. This is what I call (physical) non adequateness of Hilbert many particle formalism\footnote{Sometimes it is expressed in more fuzzy terms: the productiveness of the device producing the more complicated entangled state must be multiplied to small factor (proportional to $e/c$) with every complication.}. This situation requires the elaboration of new mathematical apparatus for the description of quantum many particle phenomena. The nature of difficulties of QC technologies lies in the basis of quantum theory: in the existence of spontaneous decoherence leading to the reduction of quantum many particle states. The mechanism of such reduction lies beyond the standard quantum theory\footnote{There are many opinions about its possible construction: from the influence of gravitation (see \cite{Pe}), mental efforts of an observer (\cite{Me}, \cite{She} etc.), to the refusal of reduction at all (\cite{Ev}). We do not touch this theme here.}. 
It is important that unitary evolutions (the Schrödinger equation) without the reduction procedure cannot give the complete picture of quantum dynamics\footnote{In contrast with classical physics, where such a picture is possible.}. The Hilbert formalism for many particles (even with a QC) then leaves us no hope of ever obtaining a description of Nature possessing predictive power. By predictive power we, of course, mean not the fate of a single photon determined by quantum randomness, but quantum mechanical probability predictions extended to the level of large ensembles of atoms and molecules (for example, to $10^{16}$ atoms, which is sufficient for the representation of the simplest bacteria). If we hope to obtain such a description one day\footnote{Those who do not hope should probably change their science to some other area of activity.}, we should agree that something is out of order with the Hilbert formalism itself, i.e., that the difficulties nest in the mathematical apparatus of quantum theory. This paper is devoted to the discussion of what to do with this. \section{Why there is a problem with the mathematical apparatus of quantum theory and what can be done} The systematic character of the difficulties met by quantum theory witnesses that, for its further development, its mathematical apparatus must be changed\footnote{Some may feel horror at this idea, but these are only emotional reactions. The educational system and traditions force theorists to rely without doubt on the robustness of the conventional analytical and algebraic technique, but one must understand that mathematics does not remain unchanged. For example, in the nineteenth century not many physicists could suppose what role the theory of representations of continuous groups would play in physics.}. The limits of application of mathematical analysis to processes connected with super-large ($10^{20}$ and more) ensembles of small particles are well known. The value of the differentials must be bigger than the size of the elementary objects which compose the ensembles, because otherwise the analytical method will give a systematic error. For example, the exact solution of the heat transfer equation gives an instantaneous speed of heat transfer, which is impossible in practice. The analogous statement is true for the Schrödinger equation, which then cannot be transformed into its relativistic counterpart. The divergence from the exact solutions can serve as an indicator of the grain of the considered system, and this is applicable to the quantum mechanical amplitudes as well (the consequence of the supposition of a grained amplitude is considered below). The other example of this kind is represented by quantum electrodynamics (QED). We consider the interaction between one electron and two photons: 1 and 2 (see \cite{Fe2}). Let at first the electron with momentum $p_1$ emit photon 1 with momentum $k$, then let this electron emit photon 2 with momentum $q$, and at last let the electron absorb photon 1, obtaining momentum $p_2$. According to the rules of QED the amplitude of such a process is proportional to the integral \begin{equation} \int\frac{1}{(p_2-k-m)(p_1-k-m)q^2k^2}d^4k, \end{equation} which diverges logarithmically: for large $k$ the integrand falls off as $1/|k|^4$, so integration over $d^4k$ produces a logarithm of the cutoff. This difficulty is called the ultraviolet divergence, and it cannot be eliminated by means of analytical methods\footnote{In physics it is resolved by the trick called renormalization (see \cite{BS}).
Such tricks “correct” the undesirable presentations of analytical formalism features, but by no means can eliminate the fundamental defect of it.}. At last, the principle division of quantum evolutions to the unitary operators and measurements (reductions of the wave package) makes impossible to create the dynamical picture of quantum processes, because this description is made dependent from the free will of those who are authorized to observe.\footnote{The practical question: beginning from what quantity of particles the system must be considered as classical – has no answer in quantum theory. But this question is important for computer programs simulating chemistry because the picture of reactions depends on the exact answer to it. The standard criterion consisting of comparison of the action with Planck constant does not give the solution; because it merely transforms the problem to the question of the choice of the value of elementary time segment.}. The possible solution of these problems is the modification of the mathematical apparatus of quantum theory. This modification can be based on the notion of algorithm. An algorithm is an instruction for the operation on a finite object expressed in the exact terms. The step-to step fulfillment of this instruction is called the computation. Formulas which are applied to the finite data types are examples of algorithms. But algorithms are much flexible than formulas, because the instructions can have not algebraic character. It is important what type of physical processes is required for the realization of the instructions. If there are classical processes, the algorithms are classical, if it needs the quantum processes, then we have deal with QC. Since in sense of computability QC is equivalent to classical computers, for the algorithmic approach considered in general form there is no difference to what extend QC can be made scalable. But if we want to use algorithms as the basis of mathematical apparatus, the way of physical realization of the algorithms is important. The main aim what QC has been proposed for is the simulation of many particle quantum physics. This is why the question about physical realization of algorithms is not the technical but the principal. This question can be reformulated as follows: is it possible to build the dynamical model of real quantum processes be means of classical algorithms, or the scalable QC is necessary for it ?\footnote{Of course, we do not intend to build classical algorithms operating with vectors in the space of exponential dimensionality. The point is that this exponential can be redundant for the description of Nature.} Factually, the only what is required from such QC is to realize quantum Fourier transform for any dimension of any particle from arbitrary big ensemble in quantum state (\cite{Za} and \cite{Wi}). This question is equivalent to the question: does a scalable QC exist at all. The existence of a scalable QC means the reality of such local physical processes which cannot be simulated by means of classical algorithms in the real time mode. There are the fast quantum algorithms (the most known of them – factoring of integers \cite{Sh}, the algorithm of fast quantum search \cite{Gr}; the more special algorithm can be found in \cite{Am}). These algorithms represent the test for checking for a given device: is it a QC or not.\footnote{This is the single universal test on QC and no other can serve as a complete criterion of it. 
The physical picture of the world thus depends on the complexity of the algorithm!}. The existence of a scalable QC is then equivalent to the adequacy of the many-particle Hilbert formalism. We note that all corollaries of quantum theory which have been verified in experiments up to now can be derived by effective (requiring memory polynomial in the length of the input) classical algorithms. At the same time, the Hilbert formalism asserts that for a system with $n$ particles all states from the space ${\cal H} = \bigotimes\limits_{j=1}^n{\cal H}_j$ are physically realizable. This space is the tensor product of the one-particle spaces, and its dimensionality $\dim({\cal H})=\dim({\cal H}_1)\dim({\cal H}_2)\cdots\dim({\cal H}_n)$ grows exponentially with the growth of $n$. Hence, if the Hilbert formalism is adequate, then classical computers are not appropriate for the simulation of physics. There is as yet no clear experimental evidence that a scalable QC exists, despite the fact that work goes on in different technological directions\footnote{Solid state quantum dots, Josephson junctions, ion traps; the last approach now looks the most promising (see \cite{SB}). Obtaining entangled states with more than 10 particles is no longer a big problem, nor is the creation of 2-qubit gates. The problem consists in checking the real parameters of such devices, which reduces to the realization of fast quantum algorithms, or, as an intermediate solution, to checking that the probability distributions of the qubits' states correspond to the quantum distributions.}. We must then take into account the possibility that a scalable QC will not be built, which means that we should create program solutions for simulation problems based on classical algorithms\footnote{And simulators of limited QC as well. Classical simulators of quantum algorithms must be included in them as an autonomous element in a system of distributed computations and be realized on supercomputers. This scheme of simulation can serve as an indicator of success in the creation of a (limited) QC.}. The next difficulty in the traditional mathematical apparatus is the impossibility of creating dynamical models of quantum systems. The dynamics of a quantum system, even of one particle, substantially depends on the reduction of the wave function (see \cite{Pe}), and quantum theory contains neither the conditions of its realization, nor a criterion for deciding whether a given system should be considered classical or quantum. The result is that dynamical quantum models are limited to scattering problems\footnote{Their main application is to processes in colliders.} that are solved by the $S$ matrix; this is absolutely insufficient for the building of complex dynamical pictures, like chemical reactions. Here, precisely, algorithms could serve as the main mathematical tool. The dynamical picture of evolution is the main aim of the proposed modification of the mathematical apparatus of quantum theory. Quantum physics in its traditional form cannot, by its nature, give a dynamical picture; it is designed for the solution of stationary problems only. At the same time, in my opinion, the future of physics is connected with dynamical pictures. The example of biology says that there must be real mechanisms resulting in the visible dynamical picture.
I do not doubt that a mechanism of this kind (but, of course, of an absolutely different nature) exists on the physical level of description of matter\footnote{Without such mechanisms the realization of the great plans for the simulation of living cells is impossible. The simulation of molecular movements in a cell based on classical “balls and springs” can give movies which may be fine for ordering experimental facts, but which have no predictive power as fundamental theories, in particular biological ones.}. The discovery of such mechanisms in physics seems to be the principal problem, and it cannot be solved (or even correctly formulated) by means of the conventional Hilbert formalism for many quantum particles. This is why I think that the building of a new mathematical apparatus for quantum theory based on effective classical algorithms is very important\footnote{Ideas of this kind were put forward by many theorists, for example, \cite{Ak}, \cite{Wo}, \cite{Oz}.}, whereas the conventional algebraic and analytical technique will acquire the status of an excellent heuristic, despite its very precise correspondence with experimental data in the area of one-particle dynamics. \section{What does it mean that algorithms acquire the status of mathematical apparatus} Mathematical formalism plays a special role. Physical intuition is impossible without it\footnote{To be more precise, intuition and the mathematical apparatus are two sides of one thing.}. In particular, it means that the property of physical theories which we call counter-intuitiveness is in fact the symptom of a deeply lying defect of the mathematical apparatus. The modification of the mathematical apparatus is an absolutely non-trivial procedure which cannot be compared, for example, with renormalization. The passage to the algorithmic formalism in quantum theory leads to some principal consequences which we consider now. These corollaries are connected with the gradual computerization of physics, and we will therefore use computational terminology.\footnote{This question can cause perplexity among the part of specialists who could take it as an encroachment on their individuality. In fact, this touches human individuality to the same degree as the mathematical apparatus of physics.} The main consequence: the form of representation of theoretical results and its interrelations with experiment will change. The main form of algorithmic description of quantum evolutions is the “film” whose pictures are prepared by the simulating algorithm. This form of representation follows from the nature of algorithms. If we are given an algorithm $A$ and some initial state of the object $O$ on which this algorithm must be executed (input data), then in the general case there is no way to obtain a distant result of the work of $A$ on this state except by sequential applications of the instruction corresponding to $A$\footnote{This fact remains true even if we have a QC (see \cite{Oz2}).}. The single universal form in which the results of algorithms can be represented is their protocols, i.e., the sequential results of applications of the corresponding instructions. If we (roughly) associate the state of the object $O$ with the states of the simulated system, we conclude that the single universal form of representation of the result of a simulation is the corresponding “video film” reflecting the dynamics of our system in time. This viewpoint is, of course, a simplification.
The state of the object $O$ on which our algorithm works can (and must) contain, besides the main part, some additional elements called ancillary elements (the ancilla). Physically, the ancilla is the administrative part of the model, which is invisible to the user who watches the video film. The administrative part of the model plays a technical role, but its existence is necessary for the preparation of the film. There is a physical interpretation of this property of simulating algorithms. It is connected with quantum non-locality and relativism\footnote{The poor agreement between the spirit of relativism and quantum non-locality may follow from the fact that the mechanisms of both phenomena belong to the ancilla.}. The non-locality of EPR photon states means that the video film cannot be prepared in real-time mode. For the simulation (on a classical computer) of the results of measurements of the two parts of a system in the state $|00\rangle +|11\rangle$ we need a transfer of some information between the first and the second qubits, and the time of this transfer must not be real, physical time\footnote{One could say that from the user's viewpoint this information is transferred instantaneously. But this information belongs to the ancilla, and the user Alice cannot use this channel to transfer information created by her to the other user, Bob. Relativism is therefore not violated.}. The visualization of quantum dynamics\footnote{At present it is considered mainly as a question of teaching physics (see, for example, \cite{Th}).} then becomes a necessary part of the theory and the main tool for checking hypotheses. In a sense visualization acquires a higher status than the direct application of algebraic or analytic methods for finding probability distributions in the elementary steps of evolution (as in scattering problems), because those methods are then regarded as necessary checkpoints for building the right visual picture, not as independent tasks. It is important that the simulating algorithm be based on a limited set of first principles stated beforehand, and that it contain no artifacts fitting the algorithm to the conditions of a particular problem. This formulation depends on what we consider to be the first principles for algorithms. We cannot simply take the basic principles of QED and declare them first principles because, first, their reduction to algorithms is itself a substantial task and, second, if we turn them into instructions of an algorithm, the formal implementation of such instructions\footnote{A mathematical apparatus presumes only formal implementation.} gives an infinite procedure, because it meets the divergences. In fact the reduction of QED to algorithms is a fairly realistic task. But it is possible only if we add some computational tricks to the list of first principles\footnote{This may also touch the procedure of renormalization.}. These tricks amount to a cut-off of the Hilbert formalism, in the sense that its computer realization will certainly give us some approximation to the real picture, whereas standard QED cannot give such an approximation in principle. We consider some of these tricks below and discuss what tricks must be added to create a fuller picture. We have to sacrifice some advantages of the Hilbert formalism, in particular the easy passage to other bases in the quantum space of states. For example, the grain (quantum) of amplitude will depend on the basis (coordinate or momentum) in which we consider our system.
The coordinate basis will thus be singled out for us. But we will show below how the consideration of a state in the momentum basis can be reduced to the coordinate basis (for photons). Some inequality of bases in the representation of quantum systems is unavoidable in the reduction of the standard formalism to algorithms. For example, in QED it is convenient to represent the states of charged particles in the coordinate basis and the states of photons in the momentum basis, and this is reflected in the algorithmic reduction. Accepting algorithms as the new mathematical apparatus means agreeing that purely computational limitations acquire the status of fundamental physical laws. For example, the impossibility of reserving too large a memory for the storage of complex states of a many-body system means that there are no such states at all\footnote{This gives the main method of approximate computation: to cut off the complexity of quantum states.}. In particular, decoherence will arise not as the result of environmental influence\footnote{The influence of the environment is the conventional source of decoherence.}, but as the result of the limitation on the computer memory that can be reserved for the storage of complex quantum states of many particles. We can make this statement more precise if we require that any real state of a quantum $n$-particle system can be stored in a memory of size $M(n)$, which depends on the number $n$ but not on the complexity of the state. This requirement is connected with the question of which objects we regard as particles, and it is treated below. The main and universal form of description will be the representation of objects by visual images, not by formulas. This viewpoint is not conventional. We will show that standard quantum mechanics with entanglement of Schmidt type agrees completely with this representation. The problem arises with a QC. If we orient ourselves towards visualization, it means that we can use only effective classical algorithms, i.e., algorithms whose running time is bounded from above by some polynomial of the length of the input word\footnote{Better still if there are algorithms with linear complexity. Just such algorithms are considered in this work.}. This requires a radical reduction of the Hilbert formalism. The visualization of a QC in its scalable form may be impossible precisely because of the existence of fast quantum algorithms\footnote{One could separate the “visual” part of the model, where only Schmidt-type entanglement can occur, from the “invisible” part, where a scalable QC could live (see below). This is a good solution for indicating success in the ongoing experiments on QC. But it is not appropriate for the role of a mathematical apparatus of quantum theory, because it violates integrity.}, whose work cannot be visualized because the essential intermediate steps cannot be adequately represented in visual terms. In any case, the final arbiter here is only one: the practical building of a scalable QC; this is the only way to reject the algorithmic approach.

\section{Amplitude quantum and Born rule}

Many equations of mathematical physics (heat transfer, diffusion, oscillations) describing the dynamics of classical systems result from a passage to the limit in dynamical problems for huge quantities of small bodies (quanta of matter). Correspondingly, the area of application of such equations is limited by the finite sizes of these bodies.
These equations result from more fundamental laws or mechanisms of interaction (for example, the equation of oscillations follows from Hooke's law). In QED such a mechanism is described by the Feynman diagrams of fundamental processes (\cite{Fe2}). The fundamental-process diagrams facilitate the algorithmic reduction of QED, but they are not yet the final result of this reduction, because they allow operations with infinitesimals. Algorithms require a complete transition to operations with finite objects\footnote{The widespread belief that this would limit the possibilities of the theory has the same origin as the fear of algorithms. Nobody can actually operate with infinite objects, just as nobody can carry out non-algorithmic procedures. The question is only that some elements of algorithms can be inaccessible to us (like the administrative part of the quantum model). As for internal beauty, there is no less of it in the world of algorithms than in its part, the world of formulas. But algorithms have one principal advantage: the possibility of visualization. Mathematics that does not permit visualization can rely on deduction only, but that is a precarious basis in view of Goedel's incompleteness theorem. The reckless use of such mathematics is like walking on thin ice.}. The method of collective behavior described below gives a possible form of this algorithmization\footnote{See also \cite{Oz}, \cite{SO}.}. We now consider one aspect of this form of algorithmization: the quantization of amplitudes. We give an explanation of the Born rule based on the conception of the amplitude quantum\footnote{Similar reasoning is contained in \cite{Zu}, but here the Born rule follows from the general algorithmic concept without additional suppositions.}. The consideration of quantum evolutions from the viewpoint of the many-particle Hilbert formalism gives states of the form
\begin{equation}
|\Psi\rangle=\sum\limits_j\lambda_j|e_j\rangle,
\end{equation}
where the sum extends over an unlimited set of basic states $|e_j\rangle$ of the system. The algorithmic approach requires cutting this series down to a finite sum by eliminating all summands with coefficients $\lambda_j$ whose moduli are less than some constant threshold $\varepsilon$. We call this procedure the reduction, and we agree to apply the reduction to every state we meet in the evolution. Such a reduced sum contains no more than $1/\varepsilon^2$ summands. Let $N$ be the number of basic states for one particle. We can then agree that $\varepsilon=\frac{1}{\sqrt{N}}$. The state thus takes the form
\begin{equation}
|\Psi\rangle=\sum\limits_{j=1}^N\lambda_j|e_j\rangle,
\label{limstate}
\end{equation}
where some summands can be zero. We call the constant $\varepsilon$ the amplitude quantum. We now show how the reduction leads to the Born rule for quantum probabilities. For this we reduce the finding of the probability of obtaining some particular basic state $A$ in the measurement of the state $\Psi$ to the application of the classical rule
$$
p(A)=\frac{N_{suc}}{N_{tot}},
$$
where $N_{suc}$ is the total number of successful events (i.e., elementary events for which the event $A$ is realized) and $N_{tot}$ is the total number of elementary events. We define the set of elementary events and establish the correspondence between elementary events and the basic states of the measured system.
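Before turning to the correspondence just announced, we give a minimal sketch of the reduction itself (our own illustration, not part of the method's implementation; all names are ours, and the renormalization after the cut-off is our assumption): every component whose modulus falls below the amplitude quantum $\varepsilon=1/\sqrt{N}$ is dropped.
\begin{verbatim}
import numpy as np

def reduce_state(amplitudes, eps=None):
    """Amplitude-quantum reduction: drop every component whose modulus is
    below eps (default 1/sqrt(N)) and renormalize (the latter is our assumption)."""
    psi = np.asarray(amplitudes, dtype=complex)
    if eps is None:
        eps = 1.0 / np.sqrt(psi.size)          # amplitude quantum for N basic states
    reduced = np.where(np.abs(psi) < eps, 0.0, psi)
    norm = np.linalg.norm(reduced)
    return reduced / norm if norm > 0 else reduced

# Example: a random 16-dimensional state keeps at most 1/eps**2 = 16 components.
rng = np.random.default_rng(0)
psi = rng.normal(size=16) + 1j * rng.normal(size=16)
psi /= np.linalg.norm(psi)
print(np.count_nonzero(reduce_state(psi)))
\end{verbatim}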
We call an elementary event a state of the extended system (measured system + measuring apparatus) whose amplitude modulus in the given quantum state equals $\varepsilon$. The set of elementary events thus depends on the quantum state of the extended system. Letting $|\Psi_j\rangle$ denote the basic states of the measured system and $|\Phi_j\rangle$ the basic states of the measuring device (which can be, for example, the eye of the observer), at the instant of contact between these two subsystems we obtain a state of the form
\begin{equation}
\sum\limits_j\lambda_j|\Psi_j\rangle\bigotimes |\Phi_j\rangle .
\label{meas}
\end{equation}
Now, given that the measuring device is very massive in comparison with the measured object, in trying to describe its states we must split each term in (\ref{meas}) into a sum of $l_j$ basic states (we must account for the states of all nuclei and electrons contained in the measuring device, all photons emitted and absorbed by it, etc.). Even if at the instant of contact we have the state $|\Phi_j\rangle$, the evolution transforms it very quickly to the state $|\Phi'_j\rangle =\sum\limits_{k=1}^{l_j}\mu_{j,k}|\phi_{j,k}\rangle$, where the numbers $l_j$ increase rapidly in time until the amplitude moduli reach $\varepsilon$, beyond which they would be set to zero. Consequently, all the moduli of the amplitudes $\mu_{j,k}$ must be considered approximately equal. If we substitute the expression for $|\Phi'_j\rangle$ in place of $|\Phi_j\rangle$ in (\ref{meas}), the amplitudes of the states $\phi_{j,k}$ become equal to $\frac{\lambda_j}{\sqrt{l_j}}$, because the quantum evolution is unitary. We have agreed to perform the reduction, that is, the elimination of the summands $\phi_{j,k}$ whose amplitude modulus is too small. Since the time in which the splitting into a huge number of summands occurs is very short, in the computations this means that we divide each summand in (\ref{meas}) into $l_j$ new summands so that all the newly arisen amplitudes have moduli close to the amplitude quantum and approximately equal; this makes all the states equivalent before the reduction and makes it possible to apply the classical urn scheme. The quantity $l_j$ of summands with the first factor $|\Psi_j\rangle$, that is, the total number of successful elementary events, is proportional to $|\lambda_j|^2$, and if exactly one summand survives the reduction, we obtain the Born rule for the quantum probability. The probability space thus depends on the choice of the wave function $|\Psi\rangle$. In fact we consider the conditional probabilities of obtaining the results of measurements given that the system is in the state $|\Psi\rangle$. The explanation of the Born rule we give is based only on our definition of the wave-function reduction as the cancellation of small amplitudes. This reduction is performed at each step of the simulation of the unitary evolution, because otherwise the simulation would be impossible at all. Here the specificity of measurement in comparison with unitary evolution is only quantitative: measurement is the moment when our system comes into contact with a massive object, which can be called the “environment”, and this causes the splitting of the summands in (\ref{meas}) into a large number of new summands. Beyond this natural supposition we used only the stability of the wave-function norm, which follows from the Schroedinger equation.
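The urn-scheme argument above can be mimicked numerically. In the toy sketch below (our own construction, with illustrative names), each component $\lambda_j$ is split into $l_j\propto|\lambda_j|^2$ branches of equal amplitude, one branch is drawn uniformly, and the empirical frequencies reproduce $|\lambda_j|^2$.
\begin{verbatim}
import numpy as np

def born_by_branch_counting(amplitudes, branches=10_000, trials=200_000, seed=1):
    """Split component j into l_j ~ |lambda_j|^2 * branches equal branches,
    draw one branch uniformly, and return the empirical outcome frequencies."""
    rng = np.random.default_rng(seed)
    p = np.abs(np.asarray(amplitudes)) ** 2
    p = p / p.sum()
    l = (p * branches).round().astype(int)              # branch counts l_j
    outcomes = np.repeat(np.arange(len(p)), l)           # the "urn" of elementary events
    draws = rng.choice(outcomes, size=trials)
    return np.bincount(draws, minlength=len(p)) / trials

freqs = born_by_branch_counting([0.6, 0.8j])   # |0.6|^2 = 0.36, |0.8|^2 = 0.64
print(freqs)                                   # close to [0.36, 0.64]
\end{verbatim}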
Hence, in our explanation of the Born rule we used nothing from outside of standard quantum mechanics except the reduction of the wave function, consisting in the cancellation of small amplitudes. It is just this reduction procedure that transforms the set of Feynman paths into the classical trajectory for an object with large action (see \cite{FH}). Decoherence is treated as the forming of entangled states of the form (\ref{meas}) with the environment, which means that we do not separate it from the measurement of the considered system. The Born rule and the irreversible corruption of states called decoherence are thus consequences of the existence of the amplitude quantum. In fact, decoherence and measurement follow from the severe limitation of the classical memory of the simulating computer. The algorithmic first principles thus acquire a purely physical form\footnote{A quantitative estimate of the speed of decoherence can be obtained from the simulation with sequential reductions. The only difficulty is how to account for all states of our system, depending on all the degrees of freedom, which we often do not know exactly. Here the hierarchical description of a complex system (see below) can help.}. We note that in the derivation of the Born rule we used only entanglement of the (generalized) Schmidt type of the form (\ref{meas}), which generalizes the EPR type of entanglement. The simple treatment of the Born rule in terms of the amplitude quantum is not arbitrary. The concretization of the amplitude quantum, the method of collective behavior introduced below, makes it possible to build the dynamics of many-particle evolution in practice.

\section{Method of collective behavior}

The method of collective behavior (the swarm method) is the simplest and most evident way to algorithmize quantum theory. The passage from a single particle to a swarm of its samples seems to be the easiest way to overcome the counter-intuitiveness peculiar to quantum theory. With some additional suppositions the swarm method gives an algorithm simulating the dynamics with complexity linear in the number of particles. These additional suppositions lie within the framework of the basic idea of the algorithmic approach, the limitation of memory and time for the simulation\footnote{The question is only how severe these limitations are. Here we discuss the case of the most severe limitations: linear growth of computational resources. The constant is such that in practical simulation we can operate with all entangled states of no more than 50 qubits.}. The supposition is that we should treat as realizable only states of the system $S=S_1\cup S_2,\ S_1\cap S_2=\emptyset$ of the form $|\Psi_S\rangle=\sum\limits_j\lambda_j|\Psi^j_{S_1}\rangle\bigotimes|\Psi^j_{S_2}\rangle$, where $|\Psi^1_{S_1}\rangle, |\Psi^2_{S_1}\rangle,\ldots$ and $|\Psi^1_{S_2}\rangle, |\Psi^2_{S_2}\rangle,\ldots$ are orthonormal bases in the spaces of states of the subsystems $S_1$ and $S_2$ respectively, and each of these basic states has the same form, with some depth of nesting. In fact these are one-particle states in which the entanglement is distributed among particles sequentially nested in each other (see below). Moreover, we can make our approach completely scalable only if the sets $\{|\Psi^j_{S_1}\rangle\}_j$ and $\{|\Psi^j_{S_2}\rangle\}_j$ belong to the coordinate (for charged particles) or momentum (for photons) basis of the space of states.
\subsection{Swarm representation of particles and fields}

We represent any quantum particle in the form of a swarm (set) of classical particles
\begin{equation}
\label{swarm}
s=\{ s_1,s_2,\ldots,s_k\},
\end{equation}
where each element $s_j$ is called a sample of the considered quantum particle and has definite space-time coordinates $t(s_j),\bar x(s_j)$ in the configuration space-time corresponding to this particle, as well as some auxiliary parameters $\alpha (s_j),\beta (s_j),\gamma (s_j),\ldots$. A swarm can be stored in the form of a field. We divide the configuration space into cells $C_i$, and in each cell we count the total number of samples $s_j$ for which $\alpha (s_j)=\alpha_0$. This gives a natural number expressing the intensity of the scalar field $F_{\alpha=\alpha_0}$ corresponding to the chosen value $\alpha_0$ of the parameter $\alpha$. If $\alpha_1,\alpha_2,\ldots,\alpha_l$ are some values of $\alpha$, the set of corresponding fields $F_{\alpha=\alpha_i},\ i=1,2,\ldots,l$ is called a vector field, and its intensity at each point is the vector of the corresponding natural numbers\footnote{An appropriate normalization gives a vector of real numbers.}. The storage of intensities gives an exponential economy compared with the storage of the array of samples. But for the representation of the dynamics we need precisely the notion of samples. If the parameters of a quantum particle (or some set of particles) are of less interest to us than its intensity, we call the representation of this particle (or particles) in the form of the array of its intensity values over the whole configuration space a field\footnote{That is, the difference between fields and particles is only methodical. For example, scalar photons are more conveniently represented as a single field, and the vector potential in the form of separate photons, because the polarization of scalar photons is parallel to their momenta and thus bears no additional information beyond their total number, whereas the polarization of vector photons is orthogonal to the momentum vector and bears the information determining the electric and magnetic fields induced by these photons.}. In the case when some quantum particle is concentrated in one cell of our division of the configuration space, we call it a classical particle. The classicality of particles is thus determined by the division of space-time. The swarm representation of a system of $n$ particles and $l$ fields is the set of vectors of the form
\begin{equation}
\label{gen_swarm}
\bar s_{par} =\{ s^1,s^2,\ldots,s^n\},\ \ \bar \gamma=\{\gamma_1,\gamma_2,\ldots,\gamma_l\} ,
\end{equation}
where $\bar s_{par}$ are the swarms of samples of particles and $\bar \gamma$ are the swarms of samples of fields. The evolution of the representation (\ref{gen_swarm}) in time is determined by a set ${\cal T}$ of rules for particle transformations, each of which has the form
\begin{equation}
v_{j_1},v_{j_2},\ldots,v_{j_p }\longrightarrow v_{k_1},v_{k_2},\ldots,v_{k_q},
\end{equation}
where $ v_{j_1},v_{j_2},\ldots,v_{j_p }$ and $ v_{k_1},v_{k_2},\ldots,v_{k_q}$ are the sets of initial and resulting samples of particles and fields, which can belong to different swarms but must satisfy the following requirement of locality. The coordinates of the samples from the two groups can be obtained one from the other by a simple rule which we call the rule of local correspondence.
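As a small illustration of storing a swarm as a field, the sketch below (our own toy code; the grid size and parameter names are arbitrary assumptions) bins the samples of one swarm into cells of the configuration space and counts, in each cell, the samples carrying a given value of the parameter $\alpha$, i.e., the intensity of $F_{\alpha=\alpha_0}$.
\begin{verbatim}
import numpy as np

def field_intensity(positions, alphas, alpha0, box=1.0, cells_per_axis=8):
    """Count, in each cell of a cubic grid, the samples with alpha == alpha0.
    positions: (k, 3) array of sample coordinates in [0, box)^3.
    Returns an integer array of shape (cells_per_axis,)*3: the field F_{alpha=alpha0}."""
    sel = positions[np.asarray(alphas) == alpha0]
    idx = np.floor(sel / box * cells_per_axis).astype(int)
    idx = np.clip(idx, 0, cells_per_axis - 1)
    field = np.zeros((cells_per_axis,) * 3, dtype=int)
    np.add.at(field, tuple(idx.T), 1)
    return field

# Example: 1000 samples of one particle, half with alpha = 0 and half with alpha = 1.
rng = np.random.default_rng(0)
pos = rng.random((1000, 3))
alph = rng.integers(0, 2, size=1000)
print(field_intensity(pos, alph, alpha0=1).sum())   # number of samples with alpha = 1
\end{verbatim}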
In the simplest case this rule says that the coordinates must coincide, i.e., the interaction must be point-wise (the action of a scalar potential). A more complex case is that these coordinates may differ only by some small value $\varepsilon>0$ (free flight of particles). The most complex form of the local correspondence rule is for the passage from the coordinate representation to the momentum one. Here the rule says that the samples of one group must lie at approximately equal distances from one another with interval $d$, close to some fixed line, where $d$ and the position of the line are determined by the coordinates and parameters of the initial samples (energy conservation in the emission and absorption of a photon by a charged particle). The main forms of the rules for particle transformations are the following.
\begin{equation}
\begin{array}{lll}
1).\ &v_p,v'_p&\longleftrightarrow \ ,\ \ v,v'\in\{ s,g\},\\
2).\ &g_p&\longrightarrow g_q,\ \\
3).\ &s_p&\longrightarrow s_p,g_q,\ \\
4).\ &g_p&\longrightarrow (g_p,)\ s_q,\ \\
5).\ &s_{j_1},s_{j_2},\ldots,s_{j_p }&\longrightarrow g_{k_1},g_{k_2},\ldots,g_{k_p },\ \\
6).\ &g_{j_1},g_{j_2},\ldots,g_{j_p }&\longrightarrow s_{k_1},s_{k_2},\ldots,s_{k_p },\\
7).\ &s_p,s_{p'}&\longleftrightarrow s_q.
\end{array}
\end{equation}
These transformations determine, correspondingly: the normalization of particles and fields, the diffusion of fields, the emission of virtual photon samples by the samples of charged particles, the emission of charged-particle samples by the samples of virtual scalar photons, the emission of vector photon samples by the samples of charged particles, the emission of charged-particle samples by the samples of virtual vector photons (photon absorption), and the forming and decay of entangled states of Schmidt type. The free flight of charged quantum particles as well as the action of the Coulomb potential is determined by the transformations of types 3) and 4), the spreading of the Coulomb field and of the field of vector photons by 2), and the interaction between charged particles and vector photons by 5) and 6). The normalization 1) is auxiliary and is applied at every instant. Every rule of particle transformation is applied with a corresponding intensity (the probability of applying the rule to an appropriate set of samples). The intensities of the application of the rules are constants taken from experiment\footnote{One could try to derive them, but then more complex matters must be taken into account, such as relativistic delays.}.

\subsection{Swarm representation of the Schroedinger equation}

The Schroedinger equation for the wave function $\Psi(x,t)$ has the form
\begin{equation}
i\dot\Psi=-\Delta\Psi+V\Psi .
\label{shr}
\end{equation}
We represent the wave function in the form
$$
\Psi(x,t)=\Psi^r(x,t)+i\Psi^i(x,t),
$$
where $\Psi^r(x,t),\ \Psi^i(x,t)$ are its real and imaginary parts. The equation (\ref{shr}) can then be written in the form of a system of two equations
\begin{equation}
\begin{array}{lll}
&\dot\Psi^r&=-\Delta\Psi^i+V\Psi^i,\\
&\dot\Psi^i&=\Delta\Psi^r-V\Psi^r.
\end{array}
\end{equation}
We now establish the necessary connection between the Schroedinger equation and the equation of diffusion.
The equation of diffusion has the form
\begin{equation}
\rho_1\dot u=div(p\ grad\ u)-qu+F,
\label{diff}
\end{equation}
where $u(x,t)$ is the concentration (density) of particles and $\rho_1,p,q,F$ are parameters depending on $x,t$ that determine the density of the environment, the diffusion rate, the absorption rate and the intensity of the particle source, respectively. A positive absorption rate means that the particles are eliminated at this point with intensity $q$, and a negative one means that they are created at this point with intensity $|q|$. We agree that $F=0$ and that $\rho_1, p$ have unit values, so that only the absorption rate $q$ has a substantial meaning; it is proportional to the potential energy in the Schroedinger equation. The equation (\ref{diff}) follows immediately from Nernst's law (see \cite{Vl}) for the flow of particles through a surface element $dS$:
\begin{equation}
dQ=-p\frac{\partial u}{\partial\bar n}dS.
\label{ner}
\end{equation}
To reduce the equation (\ref{shr}) to some version of the diffusion equation we consider the swarm of samples of our particle. We divide these samples into two types, real ($r$) and imaginary ($i$), and each of these types into two subtypes, positive (+) and negative (-). We thus obtain a subdivision of all samples of the same particle into four types, whose members we denote by $ \alpha^{+,r}_j,\ \alpha^{+,i}_j, \ \alpha^{-,r}_j,\ \alpha^{-,i}_j,$ where $j$ is the number of the sample. For the description of stationary states only one type of samples would suffice, because stationary states are determined by the density of the wave function. For the description of the dynamics we could manage with only two types of samples, real and imaginary, but it is convenient to have four. We divide the configuration space into cubes so that $D(x,t)$ denotes the cube containing the point $(x,t)$. The total number of samples of the swarm $\bar s$ of a given type occurring in the cube $D(x,t)$ is denoted by $s^{\sigma ,\eta}(x,t)$, where $\sigma\in\{ +,-\},\ \eta\in\{ r,i\}$. We agree that the speeds of all samples are distributed uniformly and independently of the type. Since we are going to represent the evolution of the wave function as a chain of sequential diffusions, we have to get rid of the signs in the equations, and for this the introduced types of samples are used; the swarm approximation of the wave function is always found by the formula
\begin{equation}
\Psi(x,t)_{s}=s^{+,r}-s^{-,r}+i(s^{+,i}-s^{-,i}).
\label{sign}
\end{equation}
This equation does not determine the division into the positive and negative parts uniquely, but only up to the addition of the same constant to both parts. We then have, up to normalization, the approximate equalities
\begin{equation}
\begin{array}{lll}
&\Psi^r(x,t)&\approx s^{+,r}(x,t)-s^{-,r}(x,t),\\
&\Psi^i(x,t)&\approx s^{+,i}(x,t)-s^{-,i}(x,t).
\end{array}
\end{equation}
This gives the system equivalent to the Schroedinger equation:
\begin{equation}
\begin{array}{lll}
&\dot s^{+,r}(x,t)&=\Delta s^{-,i}(x,t)+V(x,t)s^{+,i}(x,t),\\
&\dot s^{-,r}(x,t)&=\Delta s^{+,i}(x,t)+V(x,t) s^{-,i}(x,t),\\
&\dot s^{+,i}(x,t)&=\Delta s^{+,r}(x,t)+V(x,t) s^{-,r}(x,t),\\
&\dot s^{-,i}(x,t)&=\Delta s^{-,r}(x,t)+V(x,t) s^{+,r}(x,t).
\end{array}
\label{swa}
\end{equation}
We enumerate the types $(+,r),(+,i),(-,r),(-,i)$ by the natural numbers $1,2,3,4$ respectively, everywhere including indices, and apply to them arithmetic operations in $Z/4Z$. We now have not a single density $u$ but the column vector $\bar u$, whose components $u_j,\ j=1,2,3,4$, are the densities of the samples of type $j$. For our purpose the rule of type transformation must correspond to going around the origin in a fixed direction, i.e., the rule must be cyclic: $1\longrightarrow 2\longrightarrow 3\longrightarrow 4\longrightarrow 1$. The system (\ref{swa}) is thus equivalent to the equation
\begin{equation}
\dot{\bar u}=\Gamma (\Delta \bar u-qg\bar u),
\label{diff_trans}
\end{equation}
where the matrix $\Gamma$, expressing the law of type transformation, and the matrix $g$, inverting the sign of the type, have the form
$$
\Gamma=
\left(
\begin{array}{llll}
0&0&0&1\\
1&0&0&0\\
0&1&0&0\\
0&0&1&0
\end{array}
\right) ,\ \ \
g=
\left(
\begin{array}{llll}
0&0&1&0\\
0&0&0&1\\
1&0&0&0\\
0&1&0&0
\end{array}
\right) .
$$
We now have to determine the swarm behavior that gives the solution of equation (\ref{diff_trans}). Here it is impossible to manage with the reactions of type transformation according to the cyclic rule alone, because in this case the matrix of the resulting evolution would contain equal and nonzero diagonal elements, whereas all diagonal elements of $\Gamma$ are zero. This is why we need to introduce the samples of connected photons into the swarm behavior. Suppose we are given 4 types of samples of a particle, 1, 2, 3, 4, possessing the same dynamical properties, and also the same types of samples of connected photons. We denote the samples of the particle and of the connected photons by $\alpha^j$ and $\gamma^j$ respectively. Let every sample of the charged particle of type $j=1,2,3,4$ emit, at a steady rate, samples of connected photons of the same type. These samples of connected photons move by diffusion, and after a time $\delta t$ they convert into samples of the same particle but of type $j+1$. The type transformation thus goes not directly but through the samples of connected photons. The main thing we need from the samples of connected photons is that their diffusion rate must be much larger than that of the samples of the particle: $p_{phot}\gg p_{part}$. We can even agree that the samples of the particle do not move by themselves at all. Just this requirement allows us to suppress the diagonal elements in the matrix of the generalized diffusion and thus to obtain the needed matrix $\Gamma$. We now consider two areas $D_1$ and $D_2$ with a common border through which the diffusion goes. Here, due to our agreement, the samples of photons diffuse much faster than the samples of the particle. Two kinds of samples of a given type contribute to the change of the density of this type in a small area during a time frame $\delta t$: a) newly formed samples resulting from the photon samples emitted in this area or penetrating from the neighboring areas, and b) samples penetrating by direct diffusion. Due to our agreement about the values of the diffusion rates the contribution b) is much smaller than a), and we will neglect the direct diffusion of the samples of the particle.
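Before continuing the derivation, we give a minimal numerical sketch of equation (\ref{diff_trans}) (our own illustration, assuming the variant with the sign matrix $g$ written above, the identification $q=-V$, and a crude forward-Euler step; it is not a careful integrator). The function reconstructed from the four densities, $\Psi=(u_1-u_3)+i(u_2-u_4)$, should approximately follow the Schroedinger equation (\ref{shr}).
\begin{verbatim}
import numpy as np

# Toy 1D discretization of du/dt = Gamma (Lap u - q g u) with q = -V (our assumption),
# so that psi = (u1 - u3) + i (u2 - u4) approximately obeys i dpsi/dt = -Lap psi + V psi.
Gamma = np.array([[0,0,0,1],[1,0,0,0],[0,1,0,0],[0,0,1,0]], dtype=float)
g     = np.array([[0,0,1,0],[0,0,0,1],[1,0,0,0],[0,1,0,0]], dtype=float)

n, dx, dt, steps = 200, 0.1, 1e-4, 500
x = np.arange(n) * dx
V = 0.5 * (x - x.mean())**2                    # harmonic potential (arbitrary choice)
psi0 = np.exp(-(x - x.mean())**2).astype(complex)

u = np.zeros((4, n))                           # types 1..4 = (+,r), (+,i), (-,r), (-,i)
u[0], u[1] = psi0.real, psi0.imag              # negative-type densities start at zero

def lap(f):                                    # 1D Laplacian, open ends
    out = np.zeros_like(f)
    out[:, 1:-1] = (f[:, 2:] - 2*f[:, 1:-1] + f[:, :-2]) / dx**2
    return out

for _ in range(steps):                         # forward Euler, illustration only
    u = u + dt * (Gamma @ (lap(u) - (-V) * (g @ u)))

psi = (u[0] - u[2]) + 1j * (u[1] - u[3])
print(np.abs(psi).max())                       # the reconstructed wave function
\end{verbatim}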
Then, applying the formula (\ref{ner}) to our situation, we obtain that the flow of samples of the type $j$ per unit time through the border is
\begin{equation}
dQ=-p_{phot}\frac{\partial u^{j-1}}{\partial\bar n}dS,
\label{ner1}
\end{equation}
where $u^{j-1}$ is the density of the samples of the particle belonging to the parent type. The potential $V$ is treated as the rate of creation or annihilation of samples of any type. Applying the reasoning from the derivation of the diffusion equation, we come to the equation (\ref{diff_trans}). The method of description of quantum evolution introduced here includes the classical dynamics of the samples of particles. It is sufficient to take the quantity of connected photon samples to be $A(|s^{r,+}-s^{r,-}|^2+|s^{i,+}-s^{i,-}|^2)^{1/2}$, where $A$ is some constant independent of the point of the configuration space; this prevents the useless storage in memory of mutually cancelling parts of the wave function. Let the support of the wave function be concentrated in the area $D_0$. A phase distribution of the form $\phi(x)=C \bar x\bar p$ then gives the same effect as the movement of the swarm with speed proportional to $\bar p$ along the vector $\bar p$. Correspondingly, we can change the rule for the transformation of connected photon samples into samples of particles, using the mean speed $v_{av}(x,t)$ of the particle samples emitting the connected photon samples at the point $x,t$. The rule defined earlier says that the connected photon samples of type $j$ transform into the samples of particles of type $j+1$ at the instant $t+\delta t$, with $\delta t$ a constant. The new rule will be: the connected photon samples transform into the samples of particle of the type corresponding to the amplitude distribution $exp(\delta tip^2/m)$, where $\delta t$ is an arbitrary positive number in the segment $(0,\delta t_0)$, and the momentum $p=v_{av}m$ is found from the mean speed of the particle samples which emit the connected photon samples. It means that for a fixed $\delta t$ in this segment the fraction of the type $j+k, \ k=0,1,2,3$ equals the corresponding amplitude in the exponential $exp(\delta tip^2/m)$. This new rule can be realized by the transformation of type 5), and the reverse operation by 6). The new rule does not affect the description of the swarm dynamics if the initial amplitude distribution has the form $exp(ipx)$ and $V=0$. Here the movement will be uniform and rectilinear with momentum $p$. In other cases the influence of this rule is determined by the value of $\delta t_0$: the smaller it is, the smaller the influence. If we fix $\delta t_0$, we can find the influence of the potential $V$. Let $\Delta \Psi (\bar\Delta x)$ be the divergence between the values of the wave function found by the old and the new rules for the transformation of photon samples into particle samples at the point $x+\bar\Delta x$. The value $\Delta \Psi (\bar\Delta x)$ is then proportional to $\sin (\bar{grad}\ V,\bar\Delta x)$. The divergence is thus maximal if the direction of the photon samples' motion is orthogonal to $\bar{grad}\ V$. This substantiates the following agreement.
We assume that a sample of a connected photon transforms into a free photon sample if it spreads to a distance greater than some limit $\Delta x_0$ from the point of emission along $\bar\Delta x$ such that $|\sin (\bar{grad}V,\Delta x)|>\varepsilon_0$, i.e., along a direction close to the normal to the gradient of the potential in which the considered particle moves. This agreement is not completely precise, because we have not determined the constants $\varepsilon_0,\ \Delta x_0$, but it can be reformulated in terms of QED if we introduce the polarization vector of the photon, which is always orthogonal to its momentum. Our agreement then expresses, in terms of the swarm, the rule for finding the amplitude of the fundamental process of emission of a photon by a charged particle (the absorption amplitude is the complex conjugate; see \cite{Fe2}). This amplitude is imaginary and is proportional to $i(p_1+p_2)(\bar \varepsilon , p_1+p_2)$, where $ p_1, p_2$ are the momenta of the particle before and after the emission (absorption), $p_1=p_2+q$, where $q$ is the photon momentum and $\bar \varepsilon$ its polarization. This rule is equivalent to the relativistic wave equation for a spinless particle. We can realize it by our rules of type transformation of types 5) and 6). If we further introduce the photon helicity, which determines its magnetic field, we can show that the Maxwell equations can be derived from it\footnote{We consider the non-relativistic theory, but in the swarm form the relativistic equations can also be represented regularly, for example the Dirac equation for a free electron. For this we must not ignore the own diffusion of the particle samples, as we did above. Moreover, we could introduce the spins of particles as the limits of angular momenta of the corresponding field when the size of the spatial cell reaches its minimal value. We could then represent the influence of the spin of a particle at a given point on the helicity of the emitted (absorbed) photon sample and obtain, for example, the spin-spin interaction of electrons and nuclei and the spin-orbit interaction.}.

\subsection{Representation of the Coulomb field in the method of collective behavior}

The spreading of a scalar field in the method of collective behavior proceeds by the diffusion mechanism, i.e., as follows. Each sample of the field $g$ remains where it was with probability $p$, and with probability $\frac{1}{6}(1-p)$ shifts to each of the neighboring cells. This is realized by the rule 2) for the transformation of particles. This mechanism is analogous to the spreading of the wave packet of a free particle. We suppose that the samples of the Coulomb field of a charged particle are emitted by all samples of this particle with a constant intensity proportional to its charge $e$. Let the diffusion coefficient for the field $p$ be much larger than this coefficient for the virtual photon samples which determine the spreading of the wave packet of the charged particle (the field induced by the particle must exist in areas where there are no samples of this particle). Then within a short time frame, while the density of the samples of the particle has not changed substantially, the field reaches its stationary state. The intensity of the stationary state of the field $\phi$ created at the point $x_0$ is the solution of the equation $\Delta\phi (x)=\delta(x-x_0)$, i.e., it is the Green function of the Laplace operator $\Delta$ determining the diffusion.
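The following toy sketch (our own construction, with arbitrary grid and emission parameters) imitates this mechanism: field samples are emitted steadily from one cell and diffuse by rule 2), with absorption at the boundary of the box, and the shell-averaged density relaxes towards a profile falling off roughly as $1/r$, i.e., towards the Green function just mentioned.
\begin{verbatim}
import numpy as np

# Steady emission of field samples from a point source plus lattice diffusion.
rng = np.random.default_rng(0)
L, steps, per_step = 41, 1500, 100
center = np.array([L // 2] * 3)
moves = np.array([[1,0,0],[-1,0,0],[0,1,0],[0,-1,0],[0,0,1],[0,0,-1]])

samples = np.zeros((0, 3), dtype=int)
for _ in range(steps):
    samples = np.vstack([samples, np.tile(center, (per_step, 1))])     # emission
    samples = samples + moves[rng.integers(0, 6, size=len(samples))]   # one diffusion step
    inside = np.all((samples >= 0) & (samples < L), axis=1)
    samples = samples[inside]                                          # absorb at the border

density = np.zeros((L, L, L))
np.add.at(density, tuple(samples.T), 1)

coords = np.indices((L, L, L)).reshape(3, -1).T
dist = np.linalg.norm(coords - center, axis=1)
dens = density.reshape(-1)
for r in (2, 4, 8):
    shell = np.abs(dist - r) < 0.5
    print(r, dens[shell].mean())   # shell-averaged density, decreasing roughly as 1/r
\end{verbatim}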
This function in the space $R^3$ is proportional to $\frac{e}{r}$, $r=|x-x_0|$. The Coulomb potential can thus be obtained by the diffusion method. Analogously we can build in the swarm representation any other potential determined by a rule of cellular-automaton form, i.e., by a rule of local type. We can also use non-local rules for the construction of a field in the case when there is a simple algorithm for the selection of the spatial cells participating in this rule. For example, we can obtain a local interaction in momentum space by selecting a fixed quantity of points located on the same straight line with equal intervals. Here the total number of the required steps will always be proportional to the total number of elements in the division of the configuration space. The advantages of the swarm representation of the Coulomb field are the following. Finding the state of a quantum system with $n$ particles at one step of the evolution requires $O(nN)$ operations, where $N$ is the number of elements of the division of the configuration space. This follows immediately from the locality of all considered interactions. If we supposed an instantaneous spreading of the Coulomb potential, we would have to consider the interactions of all pairs of particles ($O(n^2)$), and for each such pair the interaction of all pairs of their samples ($O(N^2)$), which gives a total number of operations $O((nN)^2)$. The finite speed of spreading of the field determining the interaction thus gives the lowest possible complexity of the simulation, which is needed for scalability. We note that this upper estimate of the complexity is valid for the case when we do not use the transformation of particles (rule 7)) and all the particles remain unchanged. In this case our method of simulation can represent only non-entangled states. The possibility of representing entangled states is discussed below.

\section{Scalability of the collective behavior method}

From the formal viewpoint the main drawback of the Hilbert formalism is the lack of scalability. We treat scalability as the possibility of adding new particles to the considered system without revising all the computational work which has already been done for the previous system. The best condition of scalability is the situation when the complexity of the simulation of one step of evolution of a system with $n$ particles is $O(n)$\footnote{This, in particular, means that the precision of the one-particle description must not depend on $n$, i.e., the variety of one-particle states must not suffer from the existence of entanglement. Just this is the most difficult point in the creation of a quantum computer, independently of its technology.}. Since the main expenses of time and memory in the standard Hilbert formalism are due to the work with entangled states, the advantage of the collective behavior method in the sense of scalability is connected mainly with the representation of entangled states. We now formulate the conditions under which the collective behavior method would be completely scalable and show how they can be checked in experiments.
These conditions can also serve as indicators of intermediate success in the building of quantum processors with a big total number of qubits, because their experimental checking is much easier than the realization of fast quantum algorithms\footnote{The absolute indicator of success in the building of a QC is a reliably working QC with 60 logical qubits (and the same quantity of ancillae) on which one could solve, by Grover's method, a search problem inaccessible to classical computers. Taking quantum error correction into account, the number of logical qubits must be multiplied by $10^5$. Our criterion is applicable already to 3 qubits.}.

\subsection{Entangled states as the form of particles}

The single means for the description of entanglement in the collective behavior method is the forming of new particles from the existing ones by the operation 7). For preserving the advantages of the swarm method it is undesirable to extend its signature, hence in the description of entangled states we will use only the operation 7). The dynamics of a quantum system can lead to a situation when two particles glue together and begin to behave as a single particle. An example is the joining of one electron and one proton into a hydrogen atom (with the emission of a photon), or (with the emission of a neutrino) into a neutron. If we ignore the internal excited levels of such a composite system and account for the states of its components by the rules applicable to two independent particles, we can roughly represent the states of such a composite particle as $|00\rangle+|11\rangle$, where the first qubit denotes the spatial position of the electron and the second that of the proton. This state can be easily distinguished from the mixture of $|00\rangle$ and $|11\rangle$ with equal probabilities, for example by measuring the momenta of both parts of the composite particle. We thus have a real entangled state of two quantum particles. This state can be represented in terms of collective behavior using the rule 7) of the particle transformations: $e+p\longrightarrow H$. To ensure our requirement that the total number of values of the parameters of any particle must be much less than the total number of elements of the configuration space division, we must consider the electron in this system, in a stationary state (for example, $1s$), as occupying only one position in the configuration space of the joint particle. Only then could we consider the hydrogen atom as one quantum particle whose samples can interfere. In the opposite case (for example, if depending on the position of the atom the electron occupies excited levels) there will be no interference, i.e., the particle ($H$) will not be one, but many ($H_{1s}, H_{2s}, H_{2p}$, etc.). Complex molecules, Cooper pairs of electrons in superconductors, pairs of entangled photons or pairs “charged particle + photon”, and entangled ions in a trap can be described analogously. Here, in the case of photons, the configuration space for them is always the momentum space. The internal states of the constituent parts can be quantum stationary (or non-stationary) states or even classical states; what is important is that these states do not depend on the position of the joint particle in its configuration space (i.e., the joint particle must interfere with itself). We consider, for example, within the Born-Oppenheimer scheme, a rotating hydrogen molecule $H_2$.
If it rotates independently of the position of its center of mass, we can apply the rule 7) to it, $H+H\longrightarrow H_2$, and consider it as a joint particle in the framework of the collective behavior method. We now consider the Hilbert representation of a system of particles which in turn consist of smaller particles, and so on. We agree that every particle of level $k-1$ is located at the center of mass of the particles of level $k$ forming it, and that in this group of particles the determination of the coordinates of all but one determines the coordinates of the last one (in the center-of-mass system). Let $r_1,r_2,\ldots,r_n$ be the coordinates of all particles, partially ordered by nesting so that the particles of deeper nesting levels have larger numbers, and the particles of the same nesting level follow in arbitrary order. We apply the qubit representation of the wave function $|\Psi(\bar r)\rangle$ in the form
\begin{equation}
\sum\limits_{\bar r}\lambda_{\bar r}|\bar r\rangle ,
\label{wave_function}
\end{equation}
where $\bar r$ is the binary arithmetic notation of the values of the coordinates of all particles contained in the considered system. Let $n$ be the length of this list. Here the value of the wave function $\Psi(\bar r)$ is proportional to $\lambda_{\bar r}$. We consider the natural lexicographic order on the list $\bar r$, corresponding to the one-particle case, but our consideration will be general\footnote{The representation of wave functions in the form (\ref{wave_function}) is more convenient than the traditional form $\Psi(\bar r)$, because the latter can be ambiguous: two different objects can be denoted by it, the wave function and its value at the concrete point $\bar r$ (to distinguish these two senses physicists often write integrals with delta functions).}. We denote by $\bar r_k$ the initial segment of the sequence $\bar r$ of length $k$, where $r_k$ is the $k$-th element of this sequence. If we do not use the upper indices, we can agree that $r_k$ represents only one qubit; this simplifies our notation. Any wave function of the form (\ref{wave_function}) can be represented as
\begin{equation}
|\Psi\rangle=\sum\limits_{r_1}\left(\lambda_{\bar r_1}|r_1\rangle\bigotimes\sum\limits_{r_2}\left(\lambda_{\bar r_2}|r_2\rangle\bigotimes\ldots\bigotimes\sum\limits_{r_n}\lambda_{\bar r_n}|r_n\rangle\right)\ldots\right)
\label{hie}
\end{equation}
For this we can simply take all $\lambda_{\bar r_j}$ equal to 1 for $j=1,2,\ldots,n-1$, and for $j=n$ take them equal to $\lambda_{\bar r}$ from the formula (\ref{wave_function}). For a fixed value of $j$, the amplitude distribution $\lambda_{\bar r_j}$ can be treated as some wave function; we suppose that it is normalized. Suppose that all the functions $\lambda_{\bar r_j}$ really depend not on the whole list $\bar r_{j}$ but only on the coordinates $r_{j-p},r_{j-p+1},\ldots,r_{j}$. Then we call the set of such states the class of depth $p$. The class ${\cal P}_0$ of states of zero depth is the simple generalization of Schmidt states to the case of a hierarchical system of nested particles. It is formed by the states for which the distributions of the samples of any particle in the coordinate system of the embracing sample $s$ of the particle of the lesser level of nesting do not depend on the coordinates of $s$. The class of states of depth $p$ is denoted by ${\cal P}_p$.
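As an illustration of the hierarchical form (\ref{hie}), the toy sketch below (our own construction) expands a state of the class ${\cal P}_0$, under our reading that each coordinate $r_j$ is taken relative to its embracing particle, so that the amplitude distribution of each level is independent of the outer coordinates and the state factorizes over the levels; one qubit per level is assumed.
\begin{verbatim}
import numpy as np

def expand_depth0(level_amplitudes):
    """level_amplitudes: list of length-2 arrays (amplitudes for r_j = 0, 1),
    one per nesting level, each independent of the outer coordinates (class P_0).
    Returns the full 2**n amplitude vector of the nested form of eq. (hie)."""
    psi = np.array([1.0 + 0j])
    for amp in level_amplitudes:
        a = np.asarray(amp, dtype=complex)
        a = a / np.linalg.norm(a)          # keep each level normalized
        psi = np.kron(psi, a)              # nest the next level inside the current one
    return psi

levels = [np.array([1, 1]), np.array([1, 1j]), np.array([3, 4])]
print(expand_depth0(levels).shape)          # (8,): a 3-level state of class P_0
\end{verbatim}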
The class of states ${\cal P}_0$ admits a natural swarm representation. We include each amplitude distribution $\lambda_{\bar r_j}$ in the parameters of the corresponding particle of level $j-1$. For example, the wave function of the system “electron + proton” with a motionless center of mass is a joint parameter of the particle “hydrogen atom”. The state (\ref{hie}) can then be obtained by sequential operations of the type 7) of the form
\begin{equation}
s_j,s_k\longrightarrow s_q,
\label{reaction}
\end{equation}
where $s_j,\ s_k$ are the particles constituting the particle $s_q$. The states of the class ${\cal P}_k$ can be reduced to the states of the class ${\cal P}_{k-1}$ by the introduction of new particles which differ from the existing ones in their amplitude distribution in the coordinate system connected with their center of mass. Every state can then be reduced to ${\cal P}_0$\footnote{This may not be optimal, because the quantity of the types of particles can grow too fast. In any case it is necessary to preserve the linear growth of complexity, whereas the criteria by which we distinguish different particles from different states of the same particle can depend on the method of visualization.}. As an example we consider the lithium atom Li in its ground state. It consists of the nucleus, one electron with spin up and one electron with spin down in the state $1s$, and one electron with spin up in the state $2s$ (we ignore the deformation of the states resulting from the Coulomb interaction between the electrons). This atom is a joint particle whose samples can interfere with each other. The full state of such a particle $Li_0$ belongs to the class ${\cal P}_0$ and is an example of an entangled state. Suppose that this atom interacts with a vector photon and can pass to the excited state $Li_1$. Such an excited atom will be considered as a different particle, with different constituents: for example, the third electron will be not the particle $2s$ but the particle $3p$ (the other electrons will also be particles of other types, but we neglect this). The long and simultaneous existence of the samples of the two different types of particles $Li_0$ and $Li_1$ is impossible, because it results in an over-expenditure of memory in comparison with the initial situation, when $Li_0$ and the photon were independent\footnote{These different samples of $Li_0$ and $Li_1$ cannot interfere. For example, if the initial $Li_0$ interferes on two slits and the photon is directed to one of these slits, then after the absorption the interference picture disappears.}. The fast choice between the two alternatives, $Li_0$ plus a photon that has flown away, or $Li_1$ without photons, is unavoidable in the method of collective behavior. The reason is that otherwise we would not have enough memory for storing the more and more complex states connected with the further behavior of the virtual photon and the virtual atom in intermediate states. This choice is made by the probabilistic mechanism described in the section on the Born rule. In terms of the transformations of particles of the form (\ref{reaction}) it means that this reaction either goes for all pairs of the form $ s_j,s_k$ (and the types of the samples $s_j,s_k$ are contained as parameters in the type of the particle $s_q$), or does not go at all.
Hence we accept the following principle of swarm stability: the samples of a swarm corresponding to the same particle behave identically in transformations of the type 7). It is impossible to split the samples of one swarm into two different swarms. Two swarms can result from one swarm only if every sample divides into two different samples of different particles in the transformation of the type 7) reverse to (\ref{reaction}). Inside the same swarm all the samples must be treated as identical, because they must interfere with each other. Hence, if we detect a divergence from identity leading to the impossibility of interference (as in the example with $Li_0$ and $Li_1$), we must introduce a new particle\footnote{If we allowed the possibility of dividing a swarm into two and forming two real particles from one, then with an unlimited size of memory we could write any state of the form (\ref{hie}) through particle transformations in the form of such states of zero depth of influence. I claim that this form of notation of entangled states is optimal in the expenditure of classical memory. In other words, the most valuable part of quantum interference lies inside one swarm. If the Hilbert form of representation of the dynamics leads to an increase of the depth of influence, it always leads to the forming of new physical particles (i.e., ensembles which are much more effectively treated as whole particles). For example, when an electron flies at a proton, the spreading of the electron wave function $\Psi_e$ means physically the transformation of the far-located branches of $\Psi_e$ into photons. A certain physical sense expressed by a particle transformation stands behind every “over-expenditure of the classical memory”.}. The description of entangled states in the method of collective behavior is thus a principled constriction of the Hilbert formalism for many-body quantum systems. The check of whether a given many-body state belongs to the class ${\cal P}_k$ is reduced to the comparison of the statistics for this class with the experimental statistics of such states, which can be regularly obtained in big quantities and with high accuracy. This gives, in principle, an indicator of success in the experiments on quantum processors, as well as a check of our basic supposition about the validity of the collective behavior method. A rougher indicator checks only the entanglement of states and its degree\footnote{In principle the brief existence of intermediate states, for example of depth 1, 2 or more, is possible. Such states do not belong to the class ${\cal P}_0$, but can be represented in the swarm form if we increase the quantity of the types of particles and over-expend memory in the process of the reaction. But if we suppose that the swarm approach is scalable, then this over-expenditure means simply a redistribution of the computational resource and a rougher description of some other process. In any case the lifetime of these more complex states must be much smaller than that of the states from ${\cal P}_0$.}. Entangled states of the type ${\cal P}_0$ arise if the movements of a pair (or a bigger number) of particles are somehow synchronized. This can take place if there is some attracting potential between the particles which is induced by the particles themselves (as for atoms in vacuum) or by their interaction with other particles (ions in traps, Cooper pairs).
Outside influence can also be essential, for example in phase transitions, where new particles arise from molecular clusters whose interference properties can influence the picture of the dynamics. The existence of entangled states is connected not only with electrodynamic interactions. The transformation of particles in nuclear interactions and the effects resulting from entanglement of the form ${\cal P}_k$ between nucleons and atomic structures can play an important role in the dynamics observed in experiments. The role of such effects cannot be determined using the standard quantum formalism for many particles, but we hope to go further by the method of collective behavior\footnote{One more sign of the promise of the collective behavior method is the accuracy of the computation of stationary states of electron systems in atoms and molecules by the diffusion Monte Carlo method, which is the stationary analog of the collective behavior method. Such computations give the most exact agreement with the experimental data.}.

\subsection{Interpretation of the identity of particles in the collective behavior method}

The spaces of occupation numbers corresponding to the boson or fermion symmetry of an ensemble of identical particles represent an important part of the standard Hilbert formalism. It means that in the algebraic operations with the state vector $|\Psi\rangle$ of a system of $n$ identical particles whose separate states have the form $|\psi_j\rangle=\sum_{k_j}\lambda_j^{k_j}|k_j\rangle ,\ j=1,2,\ldots,n$, in the case of independent states, instead of $|\Psi\rangle=\bigotimes_j |\psi_j\rangle $ we should always write
\begin{equation}
|\Psi\rangle=A\sum\limits_{\pi_1,\pi_2,\ldots,\pi_N}D_{j,r}(\lambda_j^{k_{r}})|\pi_1,\pi_2,\ldots,\pi_N \rangle ,
\label{fock}
\end{equation}
where $|\pi_1,\pi_2,\ldots,\pi_N \rangle$ are the states of the system in which the $l$-th element of the division of the one-particle configuration space, $l=1,2,\ldots,N$, contains $\pi_l$ particles, and the sum extends over those summands for which, for every $s=1,2,\ldots,N$ and every $k$, $\sum_{r:\ k_{r}=s}1=\pi_s$. Here, for a fixed $k$, $D_{j,r}$ is the determinant for fermions, and the permanent for bosons, of the matrix $(\lambda_j^{k_{r}})_{j,r=1,2,\ldots,n}$. Such states are treated as non-entangled in the Fock space of occupation numbers ${\cal F}_n$ for $n$ particles, and by linear combinations of them we can obtain arbitrary states in this space. The states $|\pi_1,\pi_2,\ldots,\pi_N \rangle$ form an orthonormal basis of ${\cal F}_n$. The coefficient $A$ is determined by normalization; in the case of orthonormal $|\psi_j\rangle$ it equals $\frac{1}{\sqrt{n!}}$. We consider the case of identical fermions. In the collective behavior method any state from the Fock space ${\cal F}_n$ is represented as a set of $n$ swarms $S_1,S_2,\ldots,S_n$ which fill mutually non-overlapping areas $D_1,D_2,\ldots, D_n$ of the one-particle configuration space. The wave function (\ref{fock}) will then be the sum of functions which differ only by the renaming of their arguments and by signs, and each of them has a swarm representation. This agreement does not affect the computation of measured quantities; for example, the computation of the energies of stationary states with such functions must give the same result as in the standard Fock space, etc.
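As an illustration of the coefficients $D_{j,r}$ in (\ref{fock}), the following toy sketch (our own code; the brute-force permanent is suitable only for very small $n$) computes the unnormalized amplitude of a fixed occupation configuration as a determinant for fermions and as a permanent for bosons.
\begin{verbatim}
import numpy as np
from itertools import permutations

def amplitude_of_configuration(lmbda, occupied, fermions=True):
    """Unnormalized amplitude of the occupation configuration `occupied`
    (a list of n one-particle cell indices, one per particle) for n identical
    particles with one-particle amplitudes lmbda[j, k] = lambda_j^k."""
    M = np.asarray(lmbda)[:, list(occupied)]        # matrix (lambda_j^{k_r})_{j,r}
    if fermions:
        return np.linalg.det(M)                      # Slater determinant
    n = M.shape[0]                                   # permanent, brute force (small n only)
    return sum(np.prod(M[range(n), p]) for p in permutations(range(n)))

# Two identical particles, two cells; amplitude of the configuration (cell 0, cell 1).
lam = np.array([[0.6, 0.8],
                [0.8, 0.6]])
print(amplitude_of_configuration(lam, [0, 1], fermions=True))   # det  = -0.28
print(amplitude_of_configuration(lam, [0, 1], fermions=False))  # perm =  1.00
\end{verbatim}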
\section{Spatio-temporal aspect of the collective behavior method} We have already noted the possibility of representing a relativistic wave equation (of second order in time), which we could try to obtain in the swarm approach if we do not ignore the diffusion of the samples of a charged particle. Here we discuss another aspect of the collective behavior method related to relativity. This aspect is connected with a known feature of relativistic physics: the impossibility of an objective (observer-independent) separation of the past from the future. The collective behavior method has this feature as well. The dynamical picture of evolution sometimes cannot be built by a one-directional algorithm running from the past to the future: to build the next frame of the ``film'' it may be necessary to specify some details of the previous frames. This necessity does not lead to logical paradoxes, for the same reason as the analogous feature of relativistic physics: it does not violate the cause-and-effect chain of events in space-time, but only makes the joint consideration of space and time indispensable (in contrast to the non-relativistic theory). We consider a wave function in the relativistic (spatio-temporal) qubit representation \begin{equation} \Psi=\sum\limits_{\bar r,\ t}\lambda_{\bar r,\ t}|\bar r,\ t\rangle \end{equation} where $\bar r$ and $t$ take values over all points of space and time, respectively. The normalization of the wave function is a dynamical process (see above), and hence the function is normalized only over values of $t$ filling some small but nonzero segment $[t,t+\Delta t]$ for each value of the time $t$\footnote{This is the reason for the difficulty of normalizing wave functions of relativistic objects such as photons. For this, the value $\Delta t$ must be fairly large so that the photon energy can be determined sufficiently exactly: the uncertainty relation gives $|\Delta t\ \Delta E|=h$.}. The norm of the wave function thus cannot be preserved exactly over very small time frames $\Delta t$. It is precisely on such time frames that relativistic effects, connected with the relativity of the temporal order of events, play a role. These effects consist in reactions of the type $photon\ \longrightarrow \ photon \ +\ particle\ + \ antiparticle$, in which antiparticles participate. Antiparticles are particles that move in the reverse direction in time. In QED, antiparticles arise in the attempt to reduce QED to the fundamental processes (see \cite{Fe}) due to the symmetry of the Dirac equation. We show how antiparticles arise in the collective behavior method. The non-relativistic description of evolution that we dealt with above is based on treating the cause-and-effect connection of events and the physical time as the same thing. In particular, we ignored the self-diffusion of the samples of charged particles and considered only their transformations through the samples of the associated photons, which gave the Schr\"odinger equation. If the speed of particle samples is of the same order as that of photon samples, this supposition loses its force. We consider a process in which an electron $e_0$ flies into a cavity $H$ through the slit $E_0$ and can leave the cavity through one of the exit slits $E_1,E_2,\ldots, E_k$.
Let the speed of the electron be such that within the time of one fundamental process (the emission or absorption of a photon) the electron can cross the whole cavity and leave it through one of the exit slits. If we view this process as a sequence of scatterings ordered by physical time, we deal with reactions transforming electrons and photons that fill a tree with initial vertex $E_0$. Our aim is to determine the amplitudes for the electron to come out through the slits $E_1,E_2,\ldots ,E_k$. We carry out computations in this tree until its upper branches reach all exit vertices corresponding to $E_1,E_2,\ldots ,E_k$. This requires time of the order of the exponential of the (conditional) height of the tree, $\exp(h)$, where $h$ is the conditional distance from the root of the tree to the farthest exit vertex. Now consider another method of computation, in which we suppose that the electron $e_0$ flies into the cavity through the slit $E_0$, and simultaneously another electron $e_1$, born together with a positron $e^1$ from a photon $p_1$, comes out of the cavity through $E_1$. This positron then passes through a chain of sequential transformations and annihilates with the initial electron $e_0$, turning into the virtual photon $p_1$. Here we must also suppose that the photon can move backwards in time, i.e., that it is its own antiparticle. In this method the cause-and-effect connection does not coincide with the physical time. But now we have two trees growing towards each other, and the computation ends when their upper branches meet\footnote{We suppose that the vertices of the trees are located at the points of the partition of the configuration space corresponding to the cavity.}. The complexity of the computation is then of the order $\exp(h/2)$, because the height of the trees in their final position is half that of the first case; that is, the complexity of the second method is of the order of the square root of the complexity of the first. This is the heuristic argument for introducing antiparticles into the swarm method. The relativistic effects then have a natural representation in terms of collective behavior, which points to the possibility of building an algorithmic quantum formalism on the basis of collective behavior. \section{Conclusion} We have given some arguments for the necessity of modifying the mathematical apparatus of quantum mechanics for many-body systems. The main argument is the principal impossibility of building a dynamical model for many particles within the standard Hilbert formalism. In practical terms this devalues the great advantage of quantum theory, namely the stability of quantum trajectories under unitary evolution. It is desirable to replace the Hilbert many-particle formalism by a more convenient formal tool for representing many-body dynamics, and we propose effective classical algorithms as such a formalism. Its unavoidable consequences are: the fundamental character of the visual representation of quantum dynamics, the treatment of limitations on time and memory as physical laws, the inclusion of decoherence in the general description of the dynamics, and the removal of the observer from quantum theory. The Born rule follows from this representation of quantum physics. The price of this modernization is the abandonment of the idea of a scalable quantum computer.
We have also proposed a concrete form of algorithmization for QED: the collective behavior method. It has been described in detail for the example of the standard Schr\"odinger equation, and the idea of its generalization to the interaction with photons has been given. We have also proposed a simple description of entangled states based on joining several particles into one new particle, and shown how to indicate success in experiments with quantum processors using this representation of entangled states. The collective behavior method is completely scalable: it requires time depending linearly on the number of particles in the considered system. Its regular application makes it possible to build a good approximation of quantum dynamics for systems of many particles (thousands), which is impossible within the standard formalism. \begin{thebibliography}{99} \bibitem{Ak} V. Akulin, Description of quantum entanglement by nilpotent polynomials, lanl e-print quant-ph/0508234. \bibitem{Am} A. Ambainis, Quantum walk algorithm for element distinctness, lanl e-print quant-ph/0311001. \bibitem{Bel} J. S. Bell, Speakable and unspeakable in quantum mechanics, Springer, 1989. \bibitem{Ben} P. Benioff, Quantum mechanical Hamiltonian models of Turing machines, J. Stat. Phys., 1982, 29, 515-546. \bibitem{BS} N. N. Bogolubov, D. V. Shirkov, Introduction to the theory of quantized fields (Russian), Moscow, Nauka, 1984. \bibitem{Bom} D. Bohm, Quantum mechanics (Russian translation), Moscow, Mir, 1965. \bibitem{De} D. Deutsch, Quantum theory, the Church-Turing principle and the universal quantum computer, Proc. R. Soc. London A 400, 97 (1985). \bibitem{Ev} H. Everett III, The Theory of the Universal Wave Function, in: The Many-Worlds Interpretation of Quantum Mechanics (eds. B. DeWitt, N. Graham), Princeton, N.J.: Princeton University Press, 1973. \bibitem{FH} R. Feynman, A. Hibbs, Quantum mechanics and path integrals (Russian translation), Moscow, Mir, 1968. \bibitem{Fe} R. Feynman, Quantum mechanical computers, Foundations of Physics, 1986, Vol. 16, pp. 507-531. \bibitem{Fe2} R. Feynman, Theory of fundamental processes (Russian translation: Moscow, Nauka, 1978). \bibitem{Gr} L. Grover, A fast quantum mechanical algorithm for database search, Proceedings, 28th Annual ACM Symposium on the Theory of Computing (STOC), May 1996, pp. 212-219. \bibitem{KB} P. Kurakin, H. Bloom, G. Malinetskii, Conversational (dialogue) model of quantum transitions, Proceedings of SPIE, 2006, 6264, lanl e-print quant-ph/0504088. \bibitem{Me} M. B. Menski, Quantum measurement: decoherence and consciousness, UFN 171 (4), 2001, p. 459. \bibitem{v.Neu} J. von Neumann, Mathematical Foundations of Quantum Mechanics, Princeton University Press, Princeton, NJ, 1955. \bibitem{Pe} R. Penrose, The Emperor's New Mind, Oxford University Press, 1989. \bibitem{Oz} Y. I. Ozhigov, Algorithmic approach to quantum physics, Microelectronica, 2006, vol. 36, 1, pp. 61-77, lanl e-print quant-ph/0412196. \bibitem{Oz2} Y. I. Ozhigov, Quantum Computers Speed Up Classical with Probability Zero, Chaos Solitons Fractals, 1999, 10, pp. 1707-1714, lanl e-print quant-ph/9803064. \bibitem{SB} M. Sasura, V. Buzek, Cold Trapped Ions as Quantum Information Processors, Journal of Modern Optics, 2002, 49, 1593-1647.
\bibitem{SO} I. Semenihin, Y. Ozhigov, Algorithmic approach to quantum theory 2: Amplitude quanta in diffusion Monte Carlo method, Proceedings of SPIE, 2006, 6264. \bibitem{She} A. V. Sheverev, Private communication, 2005. \bibitem{Sh} P. Shor, Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer, SIAM J. Comput., 1997, 26, 1484. \bibitem{Th} B. Thaller, Advanced visual quantum mechanics, Springer-Verlag, 2004. \bibitem{Va} K. A. Valiev, Quantum computers and quantum computations, UFN, 2005, vol. 175, 1, pp. 1-39. \bibitem{VBB} A. Yu. Bogdanov, Yu. I. Bogdanov, K. A. Valiev, Schmidt information and entanglement in quantum systems, lanl e-print quant-ph/0512062. \bibitem{Vl} V. S. Vladimirov, Equations of mathematical physics (Russian), Moscow, Nauka, 1974. \bibitem{Wi} S. Wiesner, Simulations of Many-Body Quantum Systems by a Quantum Computer, lanl e-print quant-ph/9603028. \bibitem{Wo} S. Wolfram, A new kind of science, Wolfram Media, 2002. \bibitem{Za} C. Zalka, Efficient Simulation of Quantum Systems by Quantum Computers, Proc. Roy. Soc. Lond., 1998, A454, 313-322. \bibitem{Zu} W. H. Zurek, Probabilities from Entanglement, Born's Rule from Envariance, Phys. Rev. A, 2005, 71, 052105. \end{thebibliography} \end{document}
\begin{document} \title{Quantum Key Distribution based on Single Photon Bi-partite Correlation} \author{Kim Fook Lee} \email[]{[email protected]} \author{Yong Meng Sua} \affiliation{Department of Physics,\\ Michigan Technological University,\\ Houghton, Michigan 49931} \author{Harith B. Ahmad} \affiliation{Department of Physics, \\ University of Malaya,\\ 50603 Kuala Lumpur, Malaysia} \date{July 20, 2012} \begin{abstract} We present a scheme for key distribution based on bi-partite correlation of single photons. Alice keeps an ancilla photon and sends a signal photon to Bob, where the intrinsic bi-partite correlation of these photons is obtained through first-order intensity correlation at their detectors. The key bits are distributed by sharing four bi-partite correlation functions and by photon counting. The scheme consists of two parts: first, Alice prepares deterministic photon states and Bob measures the photon states based on his random choice of correlation functions; second, Alice guesses Bob's choice of correlation function and sets the key bits by sending out photon states. Bob verifies the key bits through the photon states regardless of whether Alice's guess was right or wrong. We call this key distribution scheme the prepare-measure-guess-verify (PMGV) protocol. We discuss the protocol using a highly attenuated laser light, and then point out the advantages of using a fiber-based correlated photon-pair to achieve better performance in security, communication distance, and success rate of key distribution. \end{abstract} \pacs{03.67.Hk, 03.67.Dd, 42.50.Ex, 03.65.Ud} \maketitle Superposition and entanglement are essential in developing quantum information technologies for real-world applications, especially for secure communication with quantum key distribution (QKD)~\cite{Gisin02,Scarani09}. QKD has been securely implemented in an optical free-space link (144 km) with a polarization-entangled photon-pair~\cite{zeilingerFS} and in an optical fiber network (45 km) with six different protocols~\cite{zeilingerOE}. The BB84 and B92 protocols~\cite{BB92a, BB92b} are secure against the photon number splitting (PNS) attack. This attack is a threat only when Alice and Bob share an imperfect single-photon source, such as a highly attenuated laser light. To overcome the PNS attack~\cite{Brassard00,Lutkenhaus00} caused by the use of highly attenuated laser light in a lossy long channel, decoy-state protocols~\cite{Hwang03,Lo05,Wang05} and the SARG04 protocol~\cite{Scarani04} have been proposed and implemented. Measurement-device-independent QKD~\cite{LoPRL12} was recently proposed to enhance secure communication against all detector side-channel attacks and to double the communication distance using highly attenuated laser light. That approach requires coincidence detection of a signal pulse from Alice and a reference pulse from Bob. In this work, we propose a QKD scheme based on single photon bi-partite correlation. Correlation functions, or expectation values of two observables, are classical information as a consequence of the collapse of wave functions in a measurement process. Correlation functions of two Einstein-Podolsky-Rosen (EPR) entangled photons are obtained through nonlocal interferences of their probability amplitudes at two observers. The quantumness of these interferences is the exhibition of particle-wave duality.
Quantum mechanics without probability amplitudes was proposed in Ref.~\cite{wootters86}, suggesting the possibility of quantum information processing without probability amplitudes, that is, quantum information processing with correlation functions. Bi-partite correlation of coherent light states has been observed by wave-mechanical interference of electromagnetic light fields with different modulated frequencies~\cite{kflee02,kflee04}. By post-selecting a pair or multiple pairs of beat frequencies from the detectors, the bi-partite or multi-partite correlation functions were obtained for simulating the violation of Bell's inequalities~\cite{aspect81} and the nonlocality of Greenberger-Horne-Zeilinger (GHZ) entanglement~\cite{JWPan00,Dik99}. Recently, a coherent light field and a random phase-modulated noise field were used to generate optical bi-partite correlation without any post-selection technique~\cite{kflee09}. Instead, the correlation was obtained through mean-value measurement of the multiplied random beat signals of two observers. The experiment showed that the phase information shared between the two observers was not diminished in the presence of random noise. We further investigated the generation of bi-partite correlation using two weak coherent states in balanced homodyne detection~\cite{sua11}, where quantum noise, shot noise, and electronic noise were included in the measurement process. These noises were used to protect the phase information shared between the two observers. Bit correlations were then extracted from the correlation functions. In this paper, we present a new protocol for key distribution using single photon bi-partite correlation. The security of the protocol is protected by principles of quantum mechanics such as the no-cloning theorem, superposition, and entanglement. We use four bi-partite correlation functions to distribute the key between Alice and Bob. Our protocol relies on the interference of single photons, i.e., the first-order intensity correlation of an ancilla photon at Alice and a signal photon at Bob. These single photons have to be intrinsically correlated through a highly attenuated laser light or a photon-pair source. We first discuss the protocol using single photons (ancilla and signal) prepared from a highly attenuated laser light. Our protocol requires coincidence detection of the ancilla photon and the signal photon. We then discuss the use of a fiber-based correlated photon-pair, or of two time-synchronized deterministic single-photon sources, for improving the success rate of key distribution and for increasing the communication distance by a factor of two compared with a highly attenuated laser light. The correlated photon-pair is easier to use and more tolerant to decoherence than an entangled photon-pair. The essence of this paper is to propose a new scheme for key distribution based on single photon bi-partite correlation. Secure communication between Alice and Bob is established through four types of bi-partite correlation functions (C1, C2, C3, C4), which can be divided into two groups $(\Psi, \Phi)$. Alice first prepares a sequence of deterministic states; Bob measures each photon state based on his random choice of C1, C2, C3, or C4 and then tells Alice, through a classical channel, the sequence of groups $(\Psi, \Phi)$ that he has randomly chosen. Alice guesses the correlation based on the group information and sets the key bit by sending another signal photon to Bob.
Bob verifies the key bit by the outcome of his measurement regardless of whether Alice made a right or wrong guess. We call this scheme the prepare-measure-guess-verify (PMGV) protocol. \begin{figure} \caption{The experimental scheme for implementing key distribution using a signal photon and an ancilla photon. Also shown is the use of a fiber-based correlated photon-pair as a photon source to replace the highly attenuated laser light. The dotted box contains the wave plates used for preparing the bi-partite correlation between Alice and Bob.} \end{figure} The proposed experimental setup is shown in Fig.~1. A pulsed, $45^{\circ}$-polarized laser light is used to provide a coherent state with a large mean photon number per pulse. The coherent state is split by a polarizing beam splitter (PBS) into a coherent state $|\alpha\rangle_{H}=||\alpha|e^{i\phi_{\alpha}}\rangle$ with horizontal polarization and a coherent state $|\beta\rangle_{V}=||\beta|e^{i\phi_{\beta}}\rangle$ with vertical polarization. These coherent states are combined through a beam splitter (BS1), producing two spatially separated beams, i.e., beam 1 and beam 2, at the outputs of BS1. Beam 1 is sent to Bob and beam 2 is kept by Alice. To create a single photon quantum channel between Alice and Bob, beam 1 is attenuated to the single photon level, i.e., a mean photon number per pulse less than 1. The single photon sent to Bob in the quantum channel is called the signal photon. The signal photon is inherited from the superposition of $|\alpha\rangle_{H}+|\beta\rangle_{V}$, or of the two paths before BS1, where the relative phase $(\phi_{\beta}-\phi_{\alpha})=-90^{\circ}$ is locked through beam 2 at Alice. Similarly, beam 2 is further attenuated to the single photon level. The single photon kept by Alice is called the ancilla photon. The ancilla photon at Alice and the signal photon at Bob are intrinsically correlated in the laser and phase-locked through the phase-locking circuit at $(\phi_{\beta}-\phi_{\alpha})=-90^{\circ}$. Then, the bi-partite correlation between these photons is obtained by using the wave plates at Alice and Bob shown in the dotted boxes in Fig.~1. In our previous work~\cite{sua11}, we verified four types of bi-partite correlation functions, $\textrm{C1}=- \textrm{cos}\, 2(\theta_{1}-\theta_{2})$, $\textrm{C2}=+\textrm{cos}\, 2(\theta_{1}+\theta_{2})$, $\textrm{C3}=-\textrm{cos}\, 2(\theta_{1}+\theta_{2})$, and $\textrm{C4}=+\textrm{cos}\, 2(\theta_{1}-\theta_{2})$, through the combination of wave plates shown in the inset of Fig.~1. The half-wave plates (HWP3 and HWP5) before the polarizing beam splitters (PBS1 and PBS2) at Alice and Bob are used for projecting the polarization angles $\theta_{1}$ and $\theta_{2}$ so that the maximum correlation, i.e., $\textrm{C}_{1,2,3,4}=\pm 1$, is obtained. In the following discussion, let us assume that an ancilla photon and a signal photon are available in the same time slot. We denote by $'+'$ and $'-'$ the photons transmitted through and reflected from their PBSs, respectively. If the single photon detector (SPD) $'+'$ or $'-'$ detects a photon, we encode the valid detection as bit $'1'$ or bit $'0'$, respectively. For each correlation function, Alice and Bob can control whether their photons exit through the $'+'$ or $'-'$ port of their PBSs by using their HWPs. We need to choose the best settings for the HWPs so that the maximum correlations of the photons are obtained and the key is distributed effectively.
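As a quick numerical check of the settings discussed below, the following sketch (illustrative only; ideal, lossless projection is assumed and the function name is ours) evaluates the four correlation functions at the quoted HWP angles:
\begin{verbatim}
import numpy as np

def correlations(theta1_deg, theta2_deg):
    """Evaluate C1..C4 of the text for HWP projection angles given in degrees."""
    t1, t2 = np.radians(theta1_deg), np.radians(theta2_deg)
    return {"C1": -np.cos(2 * (t1 - t2)),
            "C2": +np.cos(2 * (t1 + t2)),
            "C3": -np.cos(2 * (t1 + t2)),
            "C4": +np.cos(2 * (t1 - t2))}

# Settings discussed below: theta1 = +45 deg with theta2 = +45 deg or -45 deg.
print(correlations(+45, +45))  # C1 -> -1 (maximum anti-correlation), C4 -> +1
print(correlations(+45, -45))  # C2 -> +1, C3 -> -1
\end{verbatim}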
\begin{figure} \caption{The definition of the groups ($\Psi, \Phi$), the settings of $\theta_{1}$ and $\theta_{2}$, and the expected bits for each correlation function.} \end{figure} We illustrate each bi-partite correlation function and the optimum settings of $\theta_{1}$ and $\theta_{2}$ in Fig.~2. For the correlation function $\textrm{C1}=-\textrm{cos}\, 2(\theta_{1}-\theta_{2})$, Alice and Bob receive their photons with interference terms given by $+\textrm{Cos}(2\theta_{1}+(\phi_{\beta}-\phi_{\alpha}))$ and $-\textrm{Cos}(2\theta_{2}+(\phi_{\beta}-\phi_{\alpha}))$, respectively. By projecting the HWP5 at Alice to $\theta_{1}=+45^{\circ}$ and the HWP3 at Bob to $\theta_{2}=+45^{\circ}$, we obtain C1=-1, indicating maximum anti-correlation between Alice and Bob. This can be easily seen with the help of the phase-locking $(\phi_{\beta}-\phi_{\alpha})=-90^{\circ}$: the interference term at Alice, $+\textrm{Cos}(2(+45^{\circ})+(\phi_{\beta}-\phi_{\alpha}))\rightarrow $ $'+'$, and the interference term at Bob, $-\textrm{Cos}(2(+45^{\circ})+(\phi_{\beta}-\phi_{\alpha}))\rightarrow $ $'-'$. This implies that the ancilla photon at Alice passes through the PBS2 and is detected by the $'+'$ SPD3, while the signal photon at Bob is reflected from the PBS1 and detected by the $'-'$ SPD2. For the correlation function $\textrm{C2}=+\textrm{cos}\, 2(\theta_{1}+\theta_{2})$, the maximum correlation C2=+1 is obtained by projecting $\theta_{1}=+45^{\circ}$ and $\theta_{2}=-45^{\circ}$. Alice has the same interference term, but Bob has the interference term $+\textrm{Cos}(2\theta_{2}-(\phi_{\beta}-\phi_{\alpha}))\rightarrow +\textrm{Cos}(2(-45^{\circ})-(-90^{\circ}))\rightarrow$ $'+'$. As a result, both the $'+'$ SPD3 at Alice and the $'+'$ SPD1 at Bob detect a photon. For the correlation function $\textrm{C3}=-\textrm{cos}\, 2(\theta_{1}+\theta_{2})$, we still project $\theta_{1}=+45^{\circ}$ and $\theta_{2}=-45^{\circ}$ for the maximum correlation C3=-1. Alice's $'+'$ SPD3 still sees her ancilla photon, while Bob has the interference term $-\textrm{Cos}(2\theta_{2}-(\phi_{\beta}-\phi_{\alpha}))\rightarrow -\textrm{Cos}(2(-45^{\circ})-(-90^{\circ}))\rightarrow$ $'-'$, so the $'-'$ SPD2 sees his signal photon. For the correlation function $\textrm{C4}=+\textrm{cos}\, 2(\theta_{1}-\theta_{2})$, the settings $\theta_{1}=+45^{\circ}$ and $\theta_{2}=+45^{\circ}$ are used for the maximum correlation C4=+1. Similarly, Alice sees her ancilla photon in the '+' SPD3. Bob has the interference term $+\textrm{Cos}(2\theta_{2}+(\phi_{\beta}-\phi_{\alpha}))\rightarrow +\textrm{Cos}(2(45^{\circ})+(-90^{\circ}))\rightarrow '+'$, so the '+' SPD1 at Bob detects a photon. For all four bi-partite correlations, Alice has her ancilla photon in the '+' SPD3. Alice uses this valid detection for preparing the signal photon that is sent to Bob. We divide the four bi-partite correlations into two groups, C1,C2 $\rightarrow {\Psi}$ and C3,C4 $\rightarrow {\Phi}$. \begin{figure} \caption{Conceptual prepare-measure-guess-verify protocol for key distribution based on single photon bi-partite correlation.} \end{figure} Now, let us discuss the protocol of key distribution between Alice and Bob using the shared correlation functions, as shown in Fig.~3. The scheme can be divided into two parts: the first part is the Prepare-Measure (PM) part and the second part is the Guess-Verify (GV) part. In the PM part, Alice prepares an ancilla photon for herself and a signal photon to be sent to Bob.
First, Alice has to verify that her ancilla photon is always bit $'1'$, i.e., detected by the $'+'$ SPD3 as discussed for Fig.~2, so that the signal photon sent to Bob is phase-locked, i.e., the relative phase between the horizontal and vertical components of the signal photon is kept constant. From here on, we assume that Alice only prepares bit $'1'$ and sends the bit information to Bob through the signal photon. Bob can randomly choose one of the four correlation functions C1, C2, C3, and C4 by randomly projecting the HWP1 and QWP2 shown in the inset of Fig.~1. If Bob chooses C1, his $'-'$ SPD2 will click as shown in Fig.~2, and he encodes the detection as bit $'0'$. Similarly, if Bob chooses C2, C3, or C4, he will obtain bit $'1'$, $'0'$, or $'1'$, respectively. Bob can thus randomly generate the key by his choice of correlation functions; however, the key is not yet shared with Alice. Fig.~2 shows the expected bit correlation between Alice and Bob for each correlation function in the PM part. In the guess-verify (GV) part, Bob tells Alice, over a classical channel, which group $(\Psi, \Phi)$ he chose. Since each group of $(\Psi, \Phi)$ contains two correlation functions, Alice has to guess one of the two correlations within the group. Whether Alice's guess of Bob's choice of correlation is right or wrong, Alice uses her guess of the correlation to generate the key bit. For example, if Bob tells Alice that he used the group ${\Psi}$, Alice can choose C1 or C2. If Alice chooses C1 (C2), she will generate bit $'0'$ ($'1'$) as her key bit. In order to do that, she has to use the HWP1 and the QWP2 shown in the dotted box in Bob's setup in Fig.~1. In the GV part, Alice is mimicking Bob's apparatus and generating the key bit based on her guess. She sends the signal photon to Bob. Bob uses the QWP4 at $+45^{\circ}$ to replace the HWP1 and the QWP2 in his setup. However, Bob must keep the setting of the HWP3 ($+45^{\circ}$ or $-45^{\circ}$) for the correlation function that he chose in the PM part. The sequence of HWP3 angles will be used to verify the key bits sent by Alice. The essence of the verify part is that only Bob (not Alice or Eve) knows his HWP3 angles. Bob can find out the key generated by Alice's guess by detecting the signal photon in the $'+'$ SPD1 ('Yes'/right guess) or the $'-'$ SPD2 ('No'/wrong guess). To implement the GV part, Alice and Bob must keep their HWP angles ($\theta_{1}=+45^{\circ}, +45^{\circ}, +45^{\circ}, +45^{\circ}$; $\theta_{2}=+45^{\circ}, -45^{\circ}, -45^{\circ}, +45^{\circ}$) for the correlation functions C1, C2, C3, and C4, respectively. Since Alice and Bob have swapped their correlation preparation ($\textrm{HWP1+QWP2} \Leftrightarrow \textrm{QWP4}$) and kept their projection angles (HWP3 and HWP5), Alice has to shift the phase-locked mode to $\phi_{\beta}-\phi_{\alpha}=+90^{\circ}$ for her guess on the correlation functions C2 and C3. The reason is that for the correlation functions C2 and C3, the interference terms in Alice's cosine function change from $+(\phi_{\beta}-\phi_{\alpha})$ to $-(\phi_{\beta}-\phi_{\alpha})$. \begin{figure} \caption{The guess-verify part, showing how Bob knows whether Alice's guess is right or wrong for his choice of correlation in the prepare-measure part.} \end{figure} We illustrate the guess-verify part in more detail in Fig.~4, showing how Bob knows whether Alice's guess is right or wrong.
For the correlation function C1, Alice and Bob keep the projection angles $\theta_{1}=+45^{\circ}$ and $\theta_{2}=+45^{\circ}$ that they used in the prepare-measure part. If Alice's guess of C1 is correct, based on the group information $\Psi$ sent by Bob, who indeed chose C1, the interference term at Alice is $-\textrm{cos}(2\theta_{1}+(\phi_{\beta}-\phi_{\alpha}))\rightarrow -1$, i.e., bit $'0'$, and the $'-'$ SPD4 detects the ancilla photon. The interference term at Bob is $+\textrm{cos}(2\theta_{2}+(\phi_{\beta}-\phi_{\alpha}))\rightarrow +1$, i.e., the $'+'$ SPD1 detects the signal photon, which means 'Yes'; Bob therefore knows that Alice guessed the right correlation function C1 and hence bit $'0'$ for the key bit. Now, if Alice guessed C2 instead of C1, her guess is wrong. The interference term at Alice is $+\textrm{cos}(2\theta_{1}-(\phi_{\beta}-\phi_{\alpha}))\rightarrow +1$, i.e., bit $'1'$. Note that Alice has to apply the phase shift $\phi_{\beta}-\phi_{\alpha}=+90^{\circ}$ for her guess on C2 and C3, as discussed above. The interference term at Bob is $+\textrm{cos}(2\theta_{2}+(\phi_{\beta}-\phi_{\alpha}))\rightarrow -1$, i.e., the $'-'$ SPD2 clicks, which means 'No'; Bob therefore knows that Alice guessed the wrong correlation, and hence which bit she set. From here, Bob knows the key bit set by Alice regardless of whether Alice's guess is right or wrong. Similarly, Bob knows Alice's guess for the other correlation functions C2, C3, and C4 in the guess-verify part, as illustrated in Fig.~4. To illustrate the PMGV protocol more systematically, we discuss an example of the key distribution as shown in Fig.~5. Steps 1-4 form the PM part and Steps 5-9 form the GV part. Step 1: Alice sends a phase-locked signal photon to Bob, making sure the ancilla photon is detected in her $'+'$ SPD3, i.e., bit $'1'$. Alice keeps the projection angle $\theta_{1}=+45^{\circ}$. Step 2: Bob measures the signal photon based on his random choice of correlation function. He chooses C3, C1, C4, and C2 and keeps the projection angle $\theta_{2}$ for each correlation function. Step 3: He obtains bits $'0'$, $'0'$, $'1'$, and $'1'$, respectively, according to Fig.~2. Step 4: Bob tells Alice through the classical channel the group of his choice $(\Phi, \Psi)$, without revealing his choice of correlation function. Step 5: Alice makes a guess based on the group information. Alice guesses C4, C1, C3, and C2. Step 6: Alice uses the projection angle she kept in Step 1. She measures the ancilla photon and obtains bits $'1'$, $'0'$, $'0'$, and $'1'$ as her key bits. Step 7: Bob measures the signal photon prepared according to Alice's guess, using the sequence of projection angles $\theta_{2}$ he kept in Step 2. Bob knows whether Alice's guess is right ('Yes') or wrong ('No'), as illustrated in Fig.~4. Step 8a: Bob knows the key bits that Alice set based on her guess, even when Alice's guess was wrong. Step 8b and Step 9 are the alternative to Step 8a in case Bob did not receive the signal photon sent by Alice. Step 8b: Bob tells Alice about his valid detections. The 'x' means no valid detection and the '$\surd$' means a valid detection. Step 9: Bob and Alice share the remaining bits as their raw secret key. \begin{figure} \caption{The scheme for key distribution between Alice and Bob.} \end{figure} This protocol is based on the bi-partite correlation functions generated through the interference of the ancilla photon at Alice and the interference of the signal photon at Bob.
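A minimal end-to-end simulation of one PM/GV round, following the detector logic of Figs.~2 and 4, may help fix the bookkeeping (an idealized sketch only: lossless and noise-free, with illustrative names; the bit assignments are those quoted above):
\begin{verbatim}
import random

# Bit assigned to each correlation function (Fig. 2 / Fig. 4) and the group map.
BIT     = {"C1": 0, "C2": 1, "C3": 0, "C4": 1}
GROUP   = {"C1": "Psi", "C2": "Psi", "C3": "Phi", "C4": "Phi"}
MEMBERS = {"Psi": ("C1", "C2"), "Phi": ("C3", "C4")}

def pmgv_round(rng):
    # PM part: Bob measures with a randomly chosen correlation function.
    bob_choice = rng.choice(list(BIT))
    group = GROUP[bob_choice]            # only the group is announced classically
    # GV part: Alice guesses one function within the group; her guess sets the key bit.
    alice_guess = rng.choice(MEMBERS[group])
    key_bit = BIT[alice_guess]
    # Bob's verification: '+' SPD1 ('Yes') if the guess matches his choice,
    # '-' SPD2 ('No') otherwise.
    verify = "Yes" if alice_guess == bob_choice else "No"
    # Either way Bob recovers Alice's key bit from his choice, the group, and Yes/No.
    other = [c for c in MEMBERS[group] if c != bob_choice][0]
    inferred = bob_choice if verify == "Yes" else other
    assert BIT[inferred] == key_bit
    return bob_choice, alice_guess, verify, key_bit

rng = random.Random(1)
for _ in range(4):
    print(pmgv_round(rng))
\end{verbatim}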
These interferences are spatially separated, but their phases are intrinsically correlated in the laser. Since the signal photon is prepared from a highly attenuated laser light, the protocol is still vulnerable to the PNS attack. The protocol is secure against the PNS attack if a correlated photon-pair is used as the photon source in place of the highly attenuated laser light. The correlated photon-pair is much easier to generate and less sensitive to decoherence than an entangled photon-pair. High-purity correlated photon-pairs at telecom wavelengths can be generated through a four-wave mixing process in a 300 m dispersion-shifted fiber (DSF). A coincidence to accidental-coincidence ratio (CAR) $>$ 100 has been achieved by suppressing the spontaneous Raman scattering process in a DSF cooled to liquid nitrogen temperature (77 K)~\cite{kflee06}. In the four-wave mixing process, two pump photons are annihilated to create an energy-time correlated signal-idler photon pair. The signal and idler photons are separated from the pump photons by using cascaded wavelength division multiplexers (WDM). The signal photon is projected to left circular polarization and sent to Bob. Similarly, the idler photon is projected to right circular polarization and sent to Alice. The right and left circular polarizations of the idler/signal photons are analogous to the phase-locked ancilla and signal photons of the highly attenuated laser light. In the guess-verify part, the polarizations of the idler/signal photons are exchanged, right $\rightleftharpoons$ left, for the correlation functions C2 and C3. Since our protocol requires coincidence detection of the ancilla photon and the signal photon, the highly attenuated laser light provides a success rate of key distribution given by $n_{a}n_{s}$, where $n_{a}$ and $n_{s}$ are the mean photon numbers per pulse of the ancilla and signal photons. With a fiber-based correlated photon-pair, the protocol is completely secure against the PNS attack, and the success rate of key distribution is given by the production rate of the photon-pair per pulse, $n_{pr}$. A dispersion-shifted fiber cooled to liquid nitrogen temperature (77 K) can provide photon-pair purity with CAR $>$ 100 at a photon-pair production rate of 0.01 per pulse. For example, if we use the commercially available $(\textrm{u}^{2}\textrm{t})$ fiber mode-locked laser at a repetition rate of 10 GHz~\cite{Liang07} and a high-speed, low-dark-count superconducting single photon detector, we will obtain about $0.01 \times 10^{9}\times 0.01\ \textrm{(total detection efficiency)}\sim 10^{5}$ raw secret key bits. After applying privacy amplification and information reconciliation protocols, we expect to obtain about 25000 secret key bits. If a highly attenuated laser light is used to prepare $n_{a}n_{s} = 0.01$ per pulse for the ancilla and signal, we obtain the same performance as discussed above. In the ideal case, the best performance of this protocol is achieved by replacing the highly attenuated laser light or the photon-pair source with two time-synchronized deterministic single photon sources, where $n_{a}n_{s} = n_{pr}= 1.0$ per pulse. It is worth noting that the photon-pair source and the two single-photon sources can increase the communication distance between the two parties by a factor of two. In conclusion, we have proposed a prepare-measure-guess-verify (PMGV) protocol for key distribution using four types of single photon bi-partite correlation functions between two parties.
We show that the PMGV protocol can be implemented with a highly attenuated laser light source, which is often used as an alternative single photon source. Since the protocol requires coincidence detection of an ancilla photon and a signal photon, any photon-pair source or pair of single-photon sources can improve the success rate of key distribution and the security against the PNS attack, and can double the communication distance compared with the use of a highly attenuated laser light. \begin{acknowledgments} K.F.L. would like to acknowledge financial support through a visiting professorship from the University of Malaya. H.B.A. gratefully acknowledges support from the University of Malaya High Impact Research Grant (UM.C/HIR/MOHE/SC/01) for this work. \end{acknowledgments} \newcommand{\noopsort}[1]{} \newcommand{\printfirst}[2]{#1} \newcommand{\singleletter}[1]{#1} \newcommand{\switchargs}[2]{#2#1} \end{document}
\begin{document} \title{Iterated greedy algorithms for a complex parallel machine scheduling problem} \begin{abstract} This paper addresses a complex parallel machine scheduling problem with jobs divided into operations and operations grouped in families. Non-anticipatory family setup times are incurred at the beginning of each batch, which is defined by the combination of one setup time and a sequence of operations from a single family. Other aspects are also considered in the problem, such as release dates for operations and machines, operation sizes, and machine eligibility and capacity. We consider item availability to define the completion times of the operations within the batches, and the objective is to minimize the total weighted completion time of jobs. We developed Iterated Greedy~(IG) algorithms combining destroy and repair operators with a Random Variable Neighborhood Descent~(RVND) local search procedure, using four neighborhood structures to solve the problem. The best algorithm variant outperforms the current literature methods for the problem, in terms of average deviation from the best solutions and computational times, on a known benchmark set of 72 instances. New upper bounds are also provided for some instances within this set. In addition, computational experiments are conducted to evaluate the proposed methods' performance on a more challenging set of instances introduced in this work. Two IG variants using a greedy repair operator showed superior performance, with more than 70\% of the best solutions found uniquely by these variants. Despite its simplicity, the method using the most common destruction and repair operators presented the best results on the different evaluated criteria, highlighting its potential and applicability in solving a complex machine scheduling problem. \end{abstract} \keywords{Metaheuristics \and Machine Scheduling \and Family Scheduling \and Batching \and Setup Times} \section{Introduction} \label{sec:introduction} Parallel machine scheduling problems have been extensively studied since the seminal work of~\cite{mcnaughton1959scheduling}, with many applications in theoretical and real-life problems~\citep{wang2020identical,wang-2009}. The application that motivated this work is a real-life ship scheduling problem in the offshore oil and gas industry, with the machines representing an available fleet of vessels~\citep{AbuMArrul2020, abu2020matheuristics,cunhails}. The problem considers several aspects, such as non-anticipatory family setup times, eligibility constraints, split jobs, machine capacity, and others, making it more complex and challenging. Its characteristics relate to other parallel machine scheduling problems in the literature, such as serial batch scheduling problems, family scheduling problems, concurrent open shop scheduling problems, order scheduling problems, and some variants of the classical identical parallel machine scheduling problem. More formally, there are a set of $n$ jobs, $\cal{J}$~=~\{$J_1$,...,~$J_n$\}, a set of $o$ operations, $\cal{O}$~=~\{$O_1$,...,~$O_{o}$\}, and a set of $m$ identical machines in parallel, $\cal{M}$~=~\{$M_1$,...,~$M_m$\}. Each job $J_j \in \mathcal{J}$ is composed of a subset $\mathcal{O}_j \subseteq \cal{O}$ of operations, which can be processed by different machines simultaneously. All component operations must be finished to complete a job. In this problem, unlike in job shops, flow shops, or open shops, one operation may be associated with more than one job.
Thus, we use $\mathcal{J}_i \subseteq \mathcal{J}$ to identify the subset of jobs with which an operation $O_i$ is associated. Operations are grouped by similarity, with each one belonging to exactly one family $g \in \mathcal{F}$~=~\{$F_1$,...,~$F_{f}$\}. For each operation $O_i \in \mathcal{O}$, its processing time $p_i$, release date $r_i$, and size $l_i$ are given. Moreover, a subset $\mathcal{M}_i \subseteq \mathcal{M}$ of machines eligible to execute operation $O_i$, without preemption, is known in advance. For each machine $M_k \in \mathcal{M}$, a capacity $q_k$ and a release date $r_k$ are given. When assigning operations to machines, a family setup time $s_g$ must be incurred whenever a machine switches between operations of different families, and before the first operation on each machine. The composition of a family setup time with a sequence of operations is called a \textit{batch}. The total size of all operations in a batch cannot exceed the capacity of the assigned machine. Finally, setup times are non-anticipatory, since the starting time of a batch is restricted by the release dates of the operations that compose it. The completion time $C_j$ of a job~$J_j$ is given by the maximum completion time among its component operations, that is, $C_j = \max\{C_i:O_i \in \mathcal{O}_j\}$, where $C_i$ is the completion time of operation~$O_i$. The objective is to minimize the Total Weighted Completion Time~(TWCT) of jobs, given by $\sum_{j \in \mathcal{J}} w_jC_j$, where $w_j$ is the weight of job~$J_j$. In this problem, decision-makers need to decide not only the \textit{allocation} and \textit{sequencing} of operations on the machines but also how to \textit{compose batches}. The last part is critical, since a sequence of operations of the same family on a machine can form batches in different ways due to capacity constraints. Furthermore, in some cases, splitting a sequence of operations of the same family into more than one batch may be a better choice, even if it does not violate the machine's capacity. This observation reflects the non-anticipatory setup aspect, which may generate idleness in a given schedule, delaying the processing of some operations. \begin{figure} \caption{Scheduling example with 5 operations, 3 jobs, and 2 machines.} \label{fig:scheduling-example-one} \end{figure} In Figure~\ref{fig:scheduling-example-one}, a small example with 5 operations, 3 jobs, and 2 machines is depicted to help the reader understand the problem. At the top of each scheduled operation, we show its release date and size, and, below, we indicate its associated jobs. Setup times define the beginning of a batch, as shown at the top of the first batch on machine $M_1$. Each machine's capacity is shown below its name, and other characteristics are displayed inside the box below the schedule. Moreover, we highlight the completion times of the jobs in the schedule. In the example given, two batches are created on each machine, respecting their capacities. Note that a single batch on machine $M_1$ would lead to a higher-cost solution, since the single batch would have to start processing at $ t = 30 $ (the largest release date among operations $O_1$, $O_2$, and $O_3$), respecting the non-anticipatory setup. One can note that the non-anticipatory setup generated idleness on both machines and that the first batch on machine $M_2$ respects the machine's release date.
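The objective computation itself is simple to state in code; the sketch below evaluates $C_j = \max\{C_i : O_i \in \mathcal{O}_j\}$ and the weighted sum for job-level data matching the example (the operation-to-job mapping and the individual operation completion times are illustrative placeholders):
\begin{verbatim}
# Weighted completion-time objective for the example of Figure 1.
# The per-operation completion times below are illustrative placeholders;
# only their maxima (the job completion times) and the weights follow the example.
weights = {"J1": 3, "J2": 2, "J3": 1}
op_completion = {"J1": [40, 45],      # C_i of the operations composing each job
                 "J2": [45, 65],
                 "J3": [65, 70]}

job_completion = {j: max(cs) for j, cs in op_completion.items()}  # C_j = max_i C_i
twct = sum(weights[j] * job_completion[j] for j in weights)       # sum_j w_j C_j
print(job_completion)  # {'J1': 45, 'J2': 65, 'J3': 70}
print(twct)            # 335
\end{verbatim}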
The Total Weighted Completion Time~(TWCT) of the example schedule, following the job index order $(J_1,J_2,J_{3})$, is given by: TWCT = (3 $\times$ 45) + (2 $\times$ 65) + (1 $\times$ 70) = 335. In this paper, we extend the works of \citet{AbuMArrul2020, abu2020matheuristics} by proposing four variants of Iterated Greedy~(IG) algorithms, combined with a Randomized Variable Neighborhood Descent~(RVND) local search, to solve the described problem, testing them on two sets of benchmark instances. The paper's main contributions are: (1)~the development of IG approaches to solve a complex parallel machine scheduling problem, useful for similar machine scheduling problems; (2)~the improvement of the state-of-the-art algorithm for the addressed problem; (3)~the development of a new set of instances, combining several concepts from distinct machine scheduling problems, making the problem more interesting to address; (4)~the combination of diversification and intensification strategies to improve the algorithms' performance; (5)~the solution of a problem with a realistic ship scheduling background, highlighting the applicability of IG algorithms to real-life problems. The remainder of this paper is structured as follows. The problem's motivation with respect to machine scheduling theory is discussed, and some related works are presented, in Section~\ref{sec:motivation}. Section~\ref{sec:ig_approach} details the structure of the developed algorithms. Computational experiments are presented and discussed in Section~\ref{sec:experiments}. Finally, Section~\ref{sec:conclusion} concludes the paper and gives some perspectives for future research on the problem. \section{Motivation and Related Works} \label{sec:motivation} As stated in Section~\ref{sec:introduction}, the paper's initial motivation comes from a ship scheduling problem in the offshore oil and gas industry. In this problem, a set of vessels, analogous to the machines, must perform operations in subsea oil wells so as to bring forward the processing of the most productive ones. Regarding this problem, referred to in the literature as the \textit{Pipe Laying Support Vessel Scheduling Problem}~(PLSVSP), \citet{AbuMArrul2020} developed three mathematical formulations, testing them on a set of instances, also generated by the authors, based on real data provided by a Brazilian company. \citet{abu2020matheuristics} developed six matheuristics, combining a constructive heuristic with two batch scheduling formulations. The authors provided new best solutions for several instances in the benchmark of \citet{AbuMArrul2020}, reducing the computational time compared with the pure mathematical formulations. To the best of our knowledge, no other work in the scheduling literature deals simultaneously with all the aspects of the studied problem. However, some works have certain similarities with it. \citet{Gerodimos-1999} approached a single machine scheduling problem with jobs divided into operations from different families, including sequence-independent setup times. The authors provided a complexity analysis of the problem for three objectives to minimize: maximum lateness, weighted number of late jobs, and total completion time. \citet{mosheiov-2008} proposed a linear time algorithm to minimize makespan and flow-time in a problem in which jobs are split into operations with identical processing times. Non-anticipatory setup times are incurred between batches, with their sizes limited by the machines' capacities.
In \citet{wang-2009}, jobs are divided into orders that are organized into families. Sequence-dependent family setup times are considered, and jobs are delivered in batches with limited capacity. The authors proposed heuristic algorithms to minimize the weighted sum of the last arrival times of jobs. \citet{tahar-2009} proposed a linear programming approach to minimize the makespan in a parallel machine scheduling problem with job sections and sequence-dependent setup times between sections of different jobs. In \citet{shen-2012}, a sequence-dependent family setup time marks the start of a batch. They developed a tabu search algorithm to minimize the makespan in a problem with jobs organized into families and composed of operations. In \citet{kim-2018}, jobs are processed in parts, organized in families, on a set of parallel machines to minimize the makespan. Setup times are incurred between the processing of parts from different families. \citet{Rocholl-2019} considered a problem with jobs divided into orders from the same family, with equal processing times. The authors proposed a mixed-integer linear programming model to minimize both earliness and tardiness, respecting the machines' capacities. \citet{lin2011} addressed a concurrent open shop problem, where jobs are partitioned into operations that are each eligible for only a specific machine. The authors provided polynomial-time algorithms to minimize the maximum lateness, the weighted number of tardy jobs, and the total weighted completion time. Moreover, owing to its complexity, the addressed problem can be seen as a generalization of other well-known machine scheduling problems. This relation highlights the relevance of the problem to the scheduling literature, and it indicates that the algorithms presented in this work may be used to solve all simplified variants of the problem. In the following subsections, we list four machine scheduling problems that can be seen as simplifications of the addressed problem. \subsection{Identical Parallel Machine Scheduling Problem} \label{sec:identical_p_machine} The first related problem is the classical \textit{Identical Parallel Machine Scheduling Problem}~(IPMSP), which can be obtained by simplifying four aspects: (1) every operation is associated with a single job; (2) every job is composed of a single operation; (3) every operation belongs to the same family; (4) the setup time duration is set to zero for all families. Note that, with these modifications, capacity constraints are disregarded, since no setup times are considered. Furthermore, each operation completion time represents the completion time of its unique associated job. Observe that release dates may or may not be considered, since a release date limits the start of a batch containing a single operation and a zero-duration setup, and thus effectively limits the operation itself. Formally, there is a set of $n$ jobs, $\cal{J}$~=~\{$J_1$,...,~$J_n$\}, to be processed by a set of $m$ identical machines in parallel, $\cal{M}$~=~\{$M_1$,...,~$M_m$\}. Each job~$J_j$ has a processing time $p_j$, might have a release date~$r_j$, and might be limited to be processed by a subset~$\mathcal{M}_j \subseteq \mathcal{M}$ of eligible machines. The Identical Parallel Machine Scheduling Problem has been proven to be $\mathscr{N\mspace{-8mu}P}$-hard \citep{mokotoff-2004}. Thus, the addressed problem is also $\mathscr{N\mspace{-8mu}P}$-hard, since it generalizes the IPMSP through the described simplification.
\citet{nessah2008} and \citet{ahmed2013} addressed the IPMSP with release dates. In the former, a branch-and-bound algorithm is developed to minimize the total weighted completion time; in the latter, a modified forward heuristic to minimize the makespan is presented. \citet{xu2014} proposed a column generation approach to minimize both the makespan and the total weighted completion time. \citet{dellamico2008}, \citet{alharkan2018}, and \citet{dellacroce2018} developed heuristic algorithms for the IPMSP to minimize the makespan, while the solution approaches of \citet{dellamico2005} are based on exact methods. \citet{haouari2006} developed both heuristics and exact methods considering the same objective of minimizing the makespan. \subsection{Family Scheduling Problem} \label{sec:family_p_machine} The second related problem is the \textit{Family Scheduling Problem}~(FSP), obtained with the following simplifications: (1)~every operation is associated with a unique job; (2)~every job is composed of a single operation; (3)~release dates are set to zero; (4)~the sizes of operations are set to zero. Note that, with these modifications, family setup times are only incurred when a machine switches between operations of different families. Furthermore, the completion time of one operation represents the completion time of its single related job. If release dates are considered, setup times become non-anticipatory, and additional setups may be considered during the optimization to avoid delays in starting the processing of some operations. Formally, there is a set of $n$ jobs, $\cal{J}$~=~\{$J_1$,...,~$J_n$\}, to be processed by a set of $m$ identical machines in parallel, $\cal{M}$~=~\{$M_1$,...,~$M_m$\}. Each job $J_j \in \mathcal{J}$ belongs to exactly one family $g \in \mathcal{F}$~=~\{$\mathcal{F}_1$,...,~$\mathcal{F}_{f}$\}. Moreover, each job $J_j$ has a processing time $p_j$, might have a release date $r_j$, and might be limited to be processed by a subset $\mathcal{M}_j \subseteq \mathcal{M}$ of eligible machines. A family setup time $s_g$ is incurred before the first job on each machine or when a machine switches between jobs of different families. \citet{mehdizadeh2015}, \citet{liao2012}, \citet{bettayeb2008}, and \citet{dunstall2005} addressed the FSP with parallel machines. The first work proposed a vibration-damping metaheuristic, the second and third developed constructive heuristics, and the last introduced a branch-and-bound algorithm to minimize the total weighted completion time of jobs. \citet{lin2017} and \citet{Nazif2009} addressed the FSP on a single machine. The former considered two variants of the problem, with and without due dates, introducing a mixed integer programming model to minimize the total weighted completion time and the maximum lateness. The latter developed a genetic algorithm to minimize the total weighted completion time. \subsection{Concurrent Open Shop Scheduling Problem} \label{sec:open_shop} The \textit{Concurrent Open Shop Scheduling Problem}~(COSSP), also referred to as the \textit{Order Scheduling Problem}~(OSP), is the third related problem. Three simplifications are needed to obtain it: (1) every operation is associated with a single job; (2) setup times are set to zero; (3) each operation has a unique eligible machine to execute it. In this problem, a set of operations must be executed, each one on its pre-defined machine, to complete a job.
Note that the variants with and without release dates can be solved, since no setup times are considered in this problem. Formally, there are a set of $n$ independent jobs, $\cal{J}$~=~\{$J_1$,...,~$J_n$\}, and a set of $m$ identical machines in parallel, $\cal{M}$~=~\{$M_1$,...,~$M_m$\}. Jobs are composed of operations defined in the set $\cal{O}$~=~\{$O_1$,...,~$O_o$\}. The subset of operations that composes a job $J_j \in \mathcal{J}$ is given by $\mathcal{O}_j \subseteq \cal{O}$, in such a way that $\bigcup_{j=1}^{n} \mathcal{O}_{j} = \mathcal{O}$. Each operation $O_i$ is eligible to be processed by a unique machine, has a processing time $p_i$, and might have a release date $r_i$. All component operations must be executed to complete a job. \citet{Roemer2006} addressed the COSSP, providing a complexity analysis for the minimization of the makespan. \citet{mastrolilli2010} proposed a combinatorial approximation algorithm to minimize the total weighted completion time, while \citet{Cheng2011} introduced a polynomial-time algorithm with the same objective. \citet{khuller2019} considered release dates, introducing an online scheduling framework and minimizing the total weighted completion time. \subsection{Serial Batch Scheduling Problem with Job Availability} \label{sec:serial_batch} The last analogy concerns a \textit{Serial Batch Scheduling Problem with Job Availability}, which can be obtained with four simplifications: (1) every operation is associated with one job; (2) every job is composed of a single operation; (3) release dates are set to zero; (4) all operations belong to the same family. In this problem, a setup time with a fixed duration is incurred at the beginning of each batch, and a batch must respect the machine capacity. If we consider release dates, the setup times become non-anticipatory. Formally, there are a set of $n$ independent jobs, $\cal{J}$~=~\{$J_1$,...,~$J_n$\}, to be processed by a set of $m$ identical machines in parallel, $\cal{M}$~=~\{$M_1$,...,~$M_m$\}. Each job $J_j$ has a processing time $p_j$ and a size $l_j$. A fixed setup time $s$ must be incurred before the first job on each machine or when the machine capacity $q_k$ is reached. Serial batch scheduling problems with job availability are rarely found in the scheduling literature. \citet{gerodimos2001scheduling} addressed the problem with a single machine and job due dates. They introduced a pseudo-polynomial time algorithm to minimize three objective functions: the total weighted completion time, the maximum lateness, and the number of tardy jobs. \section{Iterated Greedy Algorithm} \label{sec:ig_approach} In this section, we describe the structure of the proposed IG algorithms. The IG approach~\citep{ruiz2007simple} combines an intensification step, given by a local search procedure to reach locally optimal solutions, with destroy and repair phases to diversify solutions and prevent the method from getting stuck. Our IG algorithm uses a Random Variable Neighborhood Descent~(RVND) in the local search phase. The RVND strategy \citep{subramanian2017} picks neighborhoods randomly from a pool instead of running them in a deterministic pre-defined sequence. We also include a simulated annealing acceptance criterion, a strategy that accepts infeasible solutions to add more diversification during the execution, and a solution restore step to intensify the search around good feasible solutions. The general pseudo-code of the IG approach is shown in Algorithm~\ref{alg:ils}.
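As a complement to the pseudo-code, a deliberately simplified, runnable sketch of the destroy--repair--accept loop is shown below (toy placeholder operators, a plain greedy acceptance, and no RVND, infeasibility handling, or restore step; all names are ours):
\begin{verbatim}
import random

# Deliberately simplified iterated-greedy skeleton (not the authors' implementation):
# a "solution" is a permutation of job indices and the cost is a stand-in objective.

def cost(solution):
    """Placeholder objective; in the real method this is the TWCT of the schedule."""
    return sum(pos * job for pos, job in enumerate(solution, start=1))

def destroy(solution, k=2):
    """Remove k random elements; return the partial solution and the removed ones."""
    removed = random.sample(solution, k)
    return [j for j in solution if j not in removed], removed

def repair(partial, removed):
    """Greedily reinsert each removed element at its best position."""
    for job in removed:
        candidates = [partial[:p] + [job] + partial[p:] for p in range(len(partial) + 1)]
        partial = min(candidates, key=cost)
    return partial

def iterated_greedy(initial, iterations=200):
    current = best = initial
    for _ in range(iterations):
        candidate = repair(*destroy(current))
        if cost(candidate) <= cost(current):   # plain greedy acceptance, for brevity
            current = candidate
            if cost(current) < cost(best):
                best = current
    return best

random.seed(0)
print(iterated_greedy(list(range(1, 9))))      # converges towards decreasing order
\end{verbatim}
The full method, with the RVND local search, the acceptance criterion, the infeasibility strategy, and the restore step, is formalized in Algorithm~\ref{alg:ils}.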
The method considers $\mathtt{s}=(\mathtt{s}_1,\ldots,\mathtt{s}_m)$ as a solution composed of a list of schedules for the $m$~machines. Each schedule is a permutation of families (each indicating a setup time) and operations. $\mathtt{s}$ indicates the current solution at each iteration and $\mathtt{s^*}$ the best solution reached so far in the execution. Function $f(\cdot)$ returns the total weighted completion time of an input solution. In Figure~\ref{fig:sol_rep}, the solution representation is presented, considering the schedule example depicted in Figure~\ref{fig:scheduling-example-one}.
\begin{figure}
\caption{Solution representation regarding the schedule example depicted in Figure~\ref{fig:scheduling-example-one}.}
\label{fig:sol_rep}
\end{figure}
\begin{algorithm}[!htb]
\footnotesize
\SetAlgoLined
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\Input{Number of iterations ($\eta$); Restore parameter ($\lambda$).}
\Output{The best solution found $\mathtt{s^*}$ as a list of schedules for each machine.}
\BlankLine
$\Omega \leftarrow \lceil \lambda \eta \rceil$;
$\mathtt{s} \leftarrow \texttt{Constructive()}$;\Comment*[f]{Construct the initial solution}
$\mathtt{s} \leftarrow$ \texttt{RVND($\mathtt{s}$)};\Comment*[f]{Local Search}
$\mathtt{s^*} \leftarrow \mathtt{s}$;\Comment*[f]{Initialize the best overall solution}
$\omega \leftarrow 0$;
\For{$\eta$ iterations}{
$\mathtt{s'} \leftarrow$ \texttt{Destroy($\mathtt{s}$)};
$\mathtt{s'} \leftarrow$ \texttt{Repair($\mathtt{s'}$)};
$\mathtt{s'} \leftarrow$ \texttt{RVND($\mathtt{s'}$)};
$\omega \leftarrow \omega + 1$;
$is\_feasible \leftarrow$ \texttt{Feasible($\mathtt{s'}$)};\Comment*[f]{Infeasibility strategy}
\If(\Comment*[f]{Acceptance Criterion}){$f(\mathtt{s'}) < f(\mathtt{s})$ {\normalfont \textbf{or} \texttt{Accept($\mathtt{s'},\mathtt{s}$)}}}{
$\mathtt{s} \leftarrow \mathtt{s'}$;
\If(\Comment*[f]{Improvement check}){$f(\mathtt{s}) < f(\mathtt{s^*})$ {\normalfont \textbf{and}} $is\_feasible$}{
$\mathtt{s^{*}} \leftarrow \mathtt{s}$;
$\omega \leftarrow 0$;
}
}
\If(\Comment*[f]{Solution restore}){$\omega$ = $\Omega$}{
$\mathtt{s} \leftarrow \mathtt{s^{*}}$;
$\omega \leftarrow 0$;
}
}
\textbf{return} $\mathtt{s^*}$;
\caption{Iterated Greedy Algorithm}
\label{alg:ils}
\end{algorithm}
Algorithm~\ref{alg:ils} starts by computing $\Omega$, the number of consecutive iterations without improvement after which the current solution is restored~(Line~1). Then, a solution $\mathtt{s}$ is built by employing the \texttt{WMCT-WAVGA} constructive heuristic~(Section~\ref{sec:constructive})~\citep{abu2020matheuristics}~(Line~2). After running the local search~(Line~3), the best overall solution is initialized~(Line~4), as well as the counter $\omega$ of consecutive iterations without improvement~(Line~5). The main loop~(Lines~6--23) runs for $\eta$ iterations and consists of destroying and repairing the current solution $\mathtt{s}$, generating solution $\mathtt{s'}$~(Lines~7--8), and applying a local search to $\mathtt{s'}$ to reach a locally optimal solution~(Line~9). Then, the feasibility of this solution is verified~(Line~11). The algorithm updates the current solution $\mathtt{s}$ if the cost of $\mathtt{s}'$ is lower than the cost of $\mathtt{s}$ or if the acceptance criterion is met~(Lines~12--13). If the updated current solution $\mathtt{s}$ is feasible and its cost improves the cost of solution $\mathtt{s^*}$, $\mathtt{s^*}$ is replaced by $\mathtt{s}$ and the counter $\omega$ of iterations without improvement is reinitialized~(Lines~14--17).
The algorithm replaces $\mathtt{s}$ by $\mathtt{s^*}$ after $\Omega$ consecutive iterations without improvement~(Lines~19--22). The method returns the best overall solution $\mathtt{s^*}$~(Line~24). Next sections detail the constructive heuristic, the RVND local search with the considered neighborhoods, the destroy and repair operators, the acceptance criterion, and the infeasibility strategy. \subsection{Constructive Heuristic} \label{sec:constructive} The \texttt{Constructive} procedure (Line 2 of Algorithm~\ref{alg:ils}) builds solutions iteratively by using the \texttt{WMCT-WAVGA} heuristic~\citep{abu2020matheuristics}, shown in Algorithm~\ref{alg.constructive}, assigning operations sequentially to the machines. The heuristic combines an adaptive rule to estimate the operations' weights with a weighted minimum completion time dispatching rule to select the operations to schedule. The selected operation is scheduled on the eligible machine, which minimizes the partial schedule's total weighted completion time. \renewcommand{\leftarrow}{\leftarrow} \begin{algorithm}[htb] \renewcommand{\leftarrow}{\leftarrow} \footnotesize \SetAlgoLined \SetAlgoLined \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{Problem data.} \Output{A solution $\mathtt{s}$.} \BlankLine $C_k \leftarrow r_k$, $S_k \leftarrow r_k$, $L_k \leftarrow 0$, $F_k \leftarrow 0$, $\mathcal{A}_k \leftarrow \emptyset$, $\mathtt{s}_{k} \leftarrow \emptyset$, \:\: $\forall M_k \in \mathcal{M}$; $C_i \leftarrow \infty, \:\: \forall O_i \in \mathcal{O}$; $\mathcal{U} \leftarrow \mathcal{O}$; \While{$\mathcal{U} \neq \varnothing$}{ $w_i \leftarrow \sum\limits_{j \in \mathcal{N}_i}\frac{w_j}{|\mathcal{U}_j|}$, $T_i \leftarrow \min\limits_{k \in \mathcal{M}_i}C_k$, $\pi_i = \frac{w_i}{\max(T_i, r_i) + p_i + s_{g_i}}$, \:\:$\forall O_i \in \mathcal{U}$; Select operation $O_{i^*} \in \mathcal{U}$ that maximizes $\pi_i$; $\Delta_{i^* k} \leftarrow \max(0, r_{i^*} - S_k), \:\: \forall M_k \in \mathcal{M}_{i^*}$; $C^{CB}_{i^* k} \leftarrow C_k + \Delta_{i^* k} + p_{i^*}, \:\: \forall M_k \in \mathcal{M}_{i^*}$; $C^{NB}_{i^* k} \leftarrow \max(r_{i^*}, C_k)+s_{g_{i^*}} + p_{i^*}, \:\: \forall M_k \in \mathcal{M}_{i^*}$; $ \mathcal{CB} \leftarrow \bigg\{ cb_{i^* k} = w_{i^*} C^{CB}_{i^* k} + \sum\limits_{{\hat{\imath}} \in \mathcal{B}_k }{w_{\hat{\imath}}} \Delta_{i^* k} \:\: \big| \:\: k \in \mathcal{M}_{i^*}, \: F_k=g_{i^*}, \: L_k+l_{i^*} \leq q_k \bigg\}$; $ \mathcal{NB} \leftarrow \bigg\{nb_{i^* k} =w_{i^*} C^{NB}_{i^* k}\:\: \big| \:\: M_k \in \mathcal{M}_{i^*}\bigg\}$; $b_{min} \leftarrow \min\{b : b \in (\mathcal{CB} \cup \mathcal{NB})\}$; Select $M_{k^*}$ corresponding to $b_{min}$; \eIf{$b_{min} \in \mathcal{CB}$}{ $S_{k^*} \leftarrow \max(r_{i^*}, S_{k^*})$; $C_{k^*} \leftarrow C_{i^* k^*}^{CB}$; $L_{k^*} \leftarrow L_{k^*} + l_{i^*}$; $\mathcal{A}_{k^*} \leftarrow \mathcal{A}_{k^*} \cup \{O_{i^*}\}$; }{ $S_{k^*} \leftarrow \max(r_{i^*}, C_{k^*})$; $C_{k^*} \leftarrow C_{i^* k^*}^{NB}$; $L_{k^*} \leftarrow l_{i^*}$; $\mathcal{A}_{k^*} \leftarrow \{O_{i^*}\}$; $\mathtt{s}_{k^*} \leftarrow \mathtt{s}_{k^*} \cup \{g_{i^*}\}$; } $\mathtt{s}_{k^*} \leftarrow \mathtt{s}_{k^*} \cup \{O_{i^*}\}$; $F_{k^*} \leftarrow g_{i^*}$; $C_{i^*} \leftarrow C_{k^*}$; $\mathcal{U} \leftarrow \mathcal{U} \setminus \{O_{i^*}\}$; } \textbf{return} $\mathtt{s}$; \caption{\texttt{WMCT-WAVGA \citep{abu2020matheuristics}}} \label{alg.constructive} \end{algorithm} The algorithm starts by initializing $S_k$, $L_k$, $F_k$, and $\mathcal{A}_k$, used to 
store the starting time, cumulative load, family, and set of assigned operations regarding the current batch on machine~$M_k$, respectively~(Line~1). Completion times of operations are initialized~(Line~2), and the set of unscheduled operations is created~(Line~3). The algorithm's main loop~(Lines~4--23) stops when all operations are assigned. Within the loop, operations are selected~(Line~6) according to a priority index computed in line~5. We use $g_i$ to indicate the family of operation~$O_i$. The delay in starting the current batch~($\Delta_{ik}$) on each machine~$M_k$, considering the insertion of the selected operation~$O_{i^{*}}$ is computed in line~7. The value of $\Delta_{ik}$ is used to compute the values of $C_{i^{*}k}^{CB}$ and $C_{i^{*}k}^{NB}$. They represent the completion time of the selected operation~$O_{i^*}$, if inserted in the current batch or in a new batch on each machine~$M_k$, respectively. Then, the algorithm identifies the sets $\mathcal{CB}$ and $\mathcal{NB}$ with the feasible assignments of the selected operation into the current batch or into a new batch, on each eligible machine~$M_k$, with the respective costs~(Lines~10--11). The best element among the sets is chosen~(Line~12), and the corresponding machine is identified~(Line~13). Finally, the selected operation is assigned to the selected machine, and the algorithm's variables and the schedule are updated~(Lines~14--22). The algorithm returns the final schedule $\mathtt{s}$~(Line~24). \subsection{RVND Local Search} \label{secsim:rvnd} The \texttt{RVND} procedure~(Lines 3 and 9 of Algorithm~\ref{alg:ils}) takes a solution and returns a local optimum solution, considering four neighborhood structures, described as follows: \begin{enumerate} \item \texttt{Swap}: Select any two operations, assigned to the same or different machines, and exchange them on the schedule, considering the eligibility constraint. Figure~\ref{fig:swap} exemplifies two swap movements. The first between operations from the same family (Figure~\ref{fig:swap_same}), and the second between operations from distinct families (Figure~\ref{fig:swap_distinct}). Note that, when the movement generates batches with mixed families, new setup times are included and extra setup times are removed to make the solution feasible regarding the family constraint. \item \texttt{Relocate}: Select an operation assigned to any machine and insert it in a new position on the same or another machine, considering the eligibility constraint. Figure~\ref{fig:relocate} exemplifies two relocate movements. The first inserts an operation in a batch of the same family (Figure~\ref{fig:relocate_same}), and the second inserts an operation in a batch of a distinct family (Figure~\ref{fig:relocate_distinct}). Again, one can note that setup times are included to ensure the feasibility of the solution regarding the family constraint. Also, if a movement creates a solution with two consecutive setup times, the extra setup time is removed. \item \texttt{SplitBatches}: Select a batch and insert a setup time between two consecutive operations of the same family. The idea is to split a batch into two. Figure~\ref{fig:setup_insert} depicts an example of a setup insertion movement. Note that, this movement is only possible in batches with more than one operation inside. \item \texttt{MergeBatches}: Select a setup time on any machine and remove it, if possible. A batch from the same family must precede the batch with the chosen setup time for the movement to be feasible. 
The idea is to combine batches from the same family scheduled in sequence. Figure~\ref{fig:setup_removal} depicts an example of a setup removal movement. Note that, this movement is only possible when the resulting solution is feasible regarding the family constraint. \end{enumerate} \begin{figure} \caption{Swap movement examples.} \label{fig:swap_same} \label{fig:swap_distinct} \label{fig:swap} \end{figure} \begin{figure} \caption{Relocate movement examples.} \label{fig:relocate_same} \label{fig:relocate_distinct} \label{fig:relocate} \end{figure} \begin{figure} \caption{Setup movement examples.} \label{fig:setup_insert} \label{fig:setup_removal} \label{fig:setup_movements} \end{figure} The \texttt{RVND}, shown in Algorithm~\ref{alg.rvnd}, starts by initializing the set~$\cal{L}$ with the indices of the considered neighborhoods in the local search procedure~(Line~1). Considering the proposed neighborhoods (1--\texttt{Swap}, 2--\texttt{Relocate}, 3--\texttt{SplitBatches}, 4--\texttt{MergeBatches}), the set of indices is $\cal{L}=\{$1, 2, 3, 4$\}$. The main loop is repeated until set $\cal{L}$ is empty~(Lines~2--17). The algorithm selects a neighborhood index $\ell$ at random~(Line 3) and tests each solution in the neighborhood $N_{\ell}$ until it finds the first improvement in the solution cost~(Lines~5--11). If an improved solution is found, the current solution is updated, and the algorithm resets the set $\cal{L}$ of neighborhood indices. If no improvement is found in a given neighborhood, its index is removed from set $\cal{L}$~(Lines~12--16). The algorithm returns the local optimal solution~(Line 18). \begin{algorithm}[htb!] \renewcommand{\leftarrow}{\leftarrow} \footnotesize \SetAlgoLined \SetAlgoLined \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{A given solution $\mathtt{s}$.} \Output{A local optimal solution $\mathtt{s}$.} \BlankLine Initialize set $\cal{L}$ of neighborhood indices; \While{$\cal{L} \neq \varnothing$} { $\ell \leftarrow$\texttt{random}($\cal{L}$); $improved \leftarrow$ \texttt{false}; \For{$\mathtt{s'} \in N_{\ell}(\mathtt{s})$} { \If{$f(\mathtt{s'}) < f(\mathtt{s})$} { $\mathtt{s} \leftarrow \mathtt{s'}$; $improved \leftarrow$ \texttt{true}; \texttt{break}; } } \uIf{improved}{ Initialize set $\cal{L}$ of neighborhood indices; } \Else{ $\cal{L} \leftarrow \cal{L} \setminus \{\ell\}$; } } \textbf{return} $\mathtt{s}$; \caption{\texttt{RVND}} \label{alg.rvnd} \end{algorithm} \subsection{Destroy and Repair Operators} \label{sec:destroy_repair} The \texttt{Destroy} procedure~(Line 7 of Algorithm~\ref{alg:ils}) removes $d = \lceil \varepsilon o \rceil$ operations from a given solution $\mathtt{s}$, generating a partial solution with $o - d$ scheduled operations, where $\varepsilon \in [0, 1]$ is the destruction parameter. Extra setup times are also removed from the partial solution when necessary. The total weighted completion time of a partial solution only considers the scheduled operations, equivalent to defining all unscheduled operations' completion times to zero. It is worth mentioning that the batches' starting times are anticipated after removing the operations, whenever possible, to calculate the partial solution's objective function. The removed operations are stored in a list of unscheduled operations, following the order in which the destroy operator extracted them from $\mathtt{s}$. 
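To make this bookkeeping concrete, the following minimal C\texttt{++} sketch removes $d = \lceil \varepsilon o \rceil$ operations from a solution using a random selection rule, corresponding to the first of the two destruction operators described next. The representation is simplified here to per-machine lists of operation indices, and the identifiers are illustrative only; setup markers, batch start times, and the cost recomputation are omitted.
\begin{verbatim}
// Minimal, self-contained sketch of the destroy step (illustrative only).
#include <algorithm>
#include <cmath>
#include <random>
#include <vector>

using FlatSolution = std::vector<std::vector<int>>;  // operation indices per machine

// Removes d = ceil(eps * o) randomly chosen operations from `s` and returns
// them in removal order, to be reinserted one by one by a repair operator.
std::vector<int> destroy(FlatSolution& s, double eps, std::mt19937& rng) {
    std::vector<int> all;                             // all scheduled operations
    for (const auto& machine : s)
        all.insert(all.end(), machine.begin(), machine.end());
    const int d = static_cast<int>(std::ceil(eps * all.size()));
    std::shuffle(all.begin(), all.end(), rng);        // random selection rule
    std::vector<int> unscheduled(all.begin(), all.begin() + d);
    for (int op : unscheduled)                        // erase from the schedules
        for (auto& machine : s)
            machine.erase(std::remove(machine.begin(), machine.end(), op),
                          machine.end());
    return unscheduled;
}
\end{verbatim}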
We consider two destruction operators, described in the following: \begin{enumerate} \item \texttt{RandomDestroy}: Extracts operations randomly, as proposed by \citet{ruiz2007simple}. \item \texttt{PseudoRandomDestroy}: Splits the operations into two groups: the first contains the operations with a higher impact on a given solution, and the second contains the remaining ones. The impact of each operation $O_i \in \mathcal{O}$ is computed as $f(\mathtt{s})-f'(\mathtt{s})$, where $f(\mathtt{s})$ and $f'(\mathtt{s})$ are the objective function values of solution~$\mathtt{s}$ before and after removing operation~$O_i$. The operations' impacts are updated iteratively each time an operation is removed. In this approach, $d/2$ operations are randomly selected from each group. This operator is inspired by the work of \citet{ruiz2018iteratedgreedy}. \end{enumerate} We also consider two operators for reconstructing solutions, executed within the \texttt{Repair} procedure~(Line 8 of Algorithm~\ref{alg:ils}). Both operators pick the unscheduled operations one by one, following the list order defined during the destruction phase, and reinsert them into the solution until a new complete solution is created. The repair operators are described as follows: \begin{enumerate} \item \texttt{GreedyRepair}: Uses a greedy rule, computing the objective function value for the insertion of the picked operation in all possible positions across all eligible machines, and assigning it to the best position. \item \texttt{PseudoGreedyRepair}: Reinserts the first $d/2$ operations of the unscheduled list following the first operator's greedy rule. Then, the remaining operations are reinserted at random in any position within the subset of eligible machines of each operation. This operator is inspired by the work of \citet{ruiz2018iteratedgreedy}. \end{enumerate} In Figure~\ref{fig:destroy_repair-example}, an example using the \texttt{RandomDestroy} and \texttt{GreedyRepair} operators is shown, considering the example depicted in Figure~\ref{fig:scheduling-example-one}, with $d=2$. Note that operations $O_5$ and $O_2$ are removed from the solution, generating a partial schedule with a TWCT of 230. Then, operation $O_5$ is reinserted in its best position, which, in this case, is its original position. One can note that a setup time is included before operation $O_5$. Finally, operation $O_2$ is inserted in the first batch on machine $M_1$ before operation $O_1$, generating a new complete solution with a TWCT of 305, better than the initial solution.
\begin{figure}
\caption{Destroy and repair example with $d=2$.}
\label{fig:destroy_repair-example}
\end{figure}
\subsection{Acceptance Criterion} \label{sec:accep}
The \texttt{Accept} procedure~(Line 12 of Algorithm~\ref{alg:ils}) receives two solutions, the current solution~$\mathtt{s}$ and a candidate solution~$\mathtt{s'}$, to decide whether $\mathtt{s'}$ should replace $\mathtt{s}$ as the current solution. For this, a simulated annealing criterion is employed, accepting $\mathtt{s'}$ with probability $e^{- \Delta / \tau}$, where $\Delta=f(\mathtt{s'})-f(\mathtt{s})$, and $\tau$ is the current temperature. The temperature~$\tau$ starts with a value $\tau_0$ and decreases at each iteration as $\tau = \tau \kappa$, where $\kappa \in [0,1)$ is the cooling rate~\citep{kirkpatrick-1983}.
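A minimal C\texttt{++} sketch of this acceptance test is given below; it is illustrative only and does not reproduce the original implementation. How $\tau_0$, $\tau_F$, and $\kappa$ are set is detailed next.
\begin{verbatim}
#include <cmath>
#include <random>

// Simulated annealing acceptance (sketch). A candidate with cost increase
// delta = f(s') - f(s) > 0 is accepted with probability exp(-delta / tau);
// improving candidates are already accepted by the cost test in Algorithm 1.
bool accept(double fCandidate, double fCurrent, double tau, std::mt19937& rng) {
    const double delta = fCandidate - fCurrent;
    if (delta <= 0.0) return true;  // never rejects an improving candidate
    std::uniform_real_distribution<double> unif(0.0, 1.0);
    return unif(rng) < std::exp(-delta / tau);
}

// After each iteration the temperature is cooled geometrically: tau *= kappa.
\end{verbatim}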
The initial and final temperatures~($\tau_0$ and $\tau_F$) are instance-dependent, as proposed by~\citet{Pisinger2007}, and are computed as $\tau_0= - (\delta_1 f_0) / \ln(0.5)$ and $\tau_F= - (\delta_2 f_0) / \ln(0.5)$, respectively, where $f_0$ is the cost of the initial solution, and $\delta_1$ and $\delta_2$ are adjustable parameters. The initial and final temperatures are thus set so that solutions whose cost exceeds that of the initial solution by fractions $\delta_1$ and $\delta_2$ of $f_0$, respectively, are accepted with 50\% probability. Based on these temperatures, the cooling rate~$\kappa$ is set by considering the number of iterations $\eta$ to execute, computed as $\kappa=(\tau_F/\tau_0)^{1/\eta}$.
\subsection{Infeasibility Strategy} \label{secmet:deas}
As mentioned earlier, we use an infeasibility strategy to enlarge the problem's search space, increase diversification, and help the algorithm escape from locally optimal solutions. In this strategy, the capacity constraint is relaxed, and violating solutions are penalized. Thus, the cost of a solution $\mathtt{s}$ with capacity violations is updated using a penalty factor $\rho \geq 1$, as $f(\mathtt{s}) \leftarrow f(\mathtt{s}) + \rho V$, where $V$ is the total capacity violation among all batches from all machines, and $f(\mathtt{s})$ is the total weighted completion time of solution $\mathtt{s}$. We use two parameters $\rho^{+} \in [0, 1)$ and $\rho^{-} \in [0, 1)$ to update the value of $\rho$ at each iteration. Thus, every time an infeasible solution is accepted, the penalty factor is increased as $\rho \leftarrow \rho (1 + \rho^{+})$. Conversely, if the accepted solution is feasible, the factor is decreased as $\rho \leftarrow \rho (1 - \rho^{-})$. The idea is to increase the penalty factor when infeasible solutions are accepted, forcing the algorithm to prioritize feasible solutions. The \texttt{Feasible} procedure~(Line 11 of Algorithm~\ref{alg:ils}) updates the parameter~$\rho$ after checking whether a given solution is feasible or not, returning \texttt{true} when no capacity violations exist, and \texttt{false} otherwise.
\section{Computational Experiments} \label{sec:experiments}
In this section, we present the computational experiments conducted to evaluate the proposed IG algorithm's performance, organized into two parts. First, we present the experiments performed on a set of 72 benchmark instances, developed by \citet{abumarrul2019}. The authors generated these instances based on real data from a ship scheduling problem, with up to 8 machines and 50 operations. In this part, only the most commonly used destroy and repair operators are considered (\texttt{RandomDestroy} and \texttt{GreedyRepair}). The algorithm's input parameters are calibrated, and the method is compared with the state-of-the-art algorithms for solving the studied problem. In the second part, four IG variants are tested on a new set of instances that combines concepts from different machine scheduling works in the literature, with up to 100 operations and 10 machines. We call the instance sets \textit{Small-Sized Benchmark Set} and \textit{Large-Sized Benchmark Set}, respectively. We use the C\texttt{++} language to code the IG algorithm, and ten independent runs are performed for each variant within the experiments. All experiments are performed on a computer with an Intel i7-8700K CPU at 3.70GHz and 64 GB of RAM, running Linux, using a single thread.
In all analyses, we evaluate solutions in terms of the Relative Percentage Deviation~(RPD) concerning the best solutions found for each instance, computed according to Equation~\eqref{eq:RPD}. $TWCT^{Sol}$ denotes the total weighted completion time for a given solution of a specific instance, and $TWCT^{Best}$ designates the total weighted completion time regarding the best solution found for the same instance in a given experiment.
\begin{equation}\label{eq:RPD}
\mathit{RPD} = \frac{TWCT^{Sol} - TWCT^{Best}}{TWCT^{Best}} \times 100.
\end{equation}
\subsection{Experiments on the Small-Sized Benchmark Set} \label{sec:small_bench}
\subsubsection{Instances Description} \label{sec: instances}
As mentioned before, the first benchmark set is composed of 72 instances, proposed by~\citet{AbuMArrul2020}, and available online~\citep{abumarrul2019}. The number of machines is defined as $m \in \{4, 8\}$, and the number of operations as $o \in \{15, 25, 50\}$. The number of jobs is defined as $n=\lfloor o/3\rfloor$ and the number of families as $f=3$. The authors consider three input parameters to generate instances with different ranges for the release dates, eligibility levels, and probabilities of associating jobs to operations. The operations' processing times were drawn from a discrete uniform distribution $U(1,30)$ and the operations' sizes from another discrete uniform distribution $U(0,100)$ with a step size of 10. Note that, in this set, processing times are low. Moreover, the number of operations within a batch may be small, since operation sizes can be close to the machines' capacities, which range from 80 to 100. We refer the reader to the work of \citet{abumarrul2019} for a complete description of the instance generation process.
\subsubsection{Results and Calibration} \label{sec:results_small}
Before comparing the IG algorithm with other methods for solving the problem, we set the algorithm's input parameters following a two-phase tuning strategy introduced by \citet{ropke2006adaptive}. The first is a trial-and-error phase, in which the parameters are defined during the algorithm development. The second is the improvement phase, in which fine-tuning is performed on each parameter individually, within pre-defined possible values, while the remaining parameter values are kept fixed. We considered the following parameter order for the improvement phase: (1)~Initial temperature parameter for the simulated annealing~($\delta_1$), with values ranging from 0.3 to 0.7 and a step size of 0.1; (2)~Final temperature parameter for the simulated annealing~($\delta_2$), with possible values of $10^{-3}$, $10^{-4}$, $10^{-5}$, and $10^{-6}$; (3)~Destruction parameter~($\varepsilon$), with values ranging from 0.05 to 0.20 and a step size of 0.05; (4)~Solution restore parameter~($\lambda$), within the same range of values tested for $\varepsilon$; (5)~Penalty update factor when infeasible solutions are accepted~($\rho^{+}$), defined within the range 0.10 to 0.25, with a step size of 0.05; (6)~Penalty update factor when feasible solutions are accepted~($\rho^{-}$), defined within the range 0.05 to 0.20, with a step size of 0.05. All parameters were tuned using the two most commonly used destroy and repair operators in the IG literature, the \texttt{RandomDestroy} operator and the \texttt{GreedyRepair} operator. We name this variant of the method \texttt{IG-RG}.
Ten independent runs on each instance were performed with $\eta$ = 2500 iterations, which is the value defined in the preliminary tests during the trial-and-error phase. The final parameter values, shown in Table~\ref{tab:parametersdef}, are the ones that presented the best average relative percentage deviation ($\overline{RPD}$) values concerning the best solutions achieved at each step of the parameterization. \begin{table}[htbp] \centering \footnotesize \caption{Final parameter values for the proposed algorithm.} \begin{tabularx}{\textwidth}{cXcc} \hline Parameter & Description & Domain & Value \\ \hline $\delta_1$ & Initial temperature definition parameter for the simulated annealing criterion & [0, 1] & 0.6 \\ $\delta_2$ & Final temperature definition parameter for the simulated annealing criterion & [0, 1] & $10^{-5}$ \\ $\varepsilon$ & Proportion of the total number of operations to destroy at each iteration & [0, 1] & 0.15 \\ $\lambda$ & Restore solution parameter & [0, 1] & 0.1 \\ $\rho^{+}$ & Parameter to update the penalty factor when infeasible solutions are accepted & [0, 1) & 0.20 \\ $\rho^{-}$ & Parameter to update the penalty factor when feasible solutions are accepted & [0, 1) & 0.05 \\ \hline \end{tabularx} \label{tab:parametersdef} \end{table} In Table \ref{tab:components}, we evaluate the impact in the solution quality, in terms of the average relative percentage deviation ($\overline{RPD}$), standard deviation of the RPDs (SD), and the average computational time ($\overline{Time}$), in seconds, with the removal of each feature of the algorithm individually. Each configuration corresponds to the disabling of one of the following features: (LS)~RVND local search; (SA)~Simulated annealing acceptance criterion; (DR)~Destroy and Repair steps; (Inf.)~Infeasibility strategy; (Rest.)~Solution restore strategy. When the destroy and repair steps are turned off, we replace it with a regular perturbation step in which we select one neighborhood at random, and $d = \lceil \varepsilon o \rceil$ random moves are performed. Note that the algorithm using the perturbation instead of the destruction and repair operators follows an Iterated Local Search (ILS) metaheuristic structure. As one can notice, the local search is the most relevant component of the method and the most time-consuming feature. The algorithm's average computational time reduces when it disregards the diversification strategies (simulated annealing and infeasibility strategy). However, the small addition of time is worth it due to the deviation reduction these strategies bring to the method. We can see that the configuration with all features combined generates the smallest $\overline{RPD}$ and SD, indicating that they all contribute to the algorithm's performance. \begin{table}[htbp] \centering \caption{Average RPD and computational time with the removal of each algorithm's feature.} \begin{tabular}{lrcccccccccc} \toprule \multicolumn{1}{c}{Config.} & & LS & SA & DR & Inf. & Rest. & & $\overline{RPD}$ & SD & & $\overline{Time}$ \\ \midrule No LS & & & \tiny $\bullet$ & \tiny $\bullet$ & \tiny $\bullet$ & \tiny $\bullet$ & & 2.68 & 2.38 & & 0.15 \\ No SA & & \tiny $\bullet$ & & \tiny $\bullet$ & \tiny $\bullet$ & \tiny $\bullet$ & & 0.22 & 0.42 & & 7.70 \\ No DR & & \tiny $\bullet$ & \tiny $\bullet$ & & \tiny $\bullet$ & \tiny $\bullet$ & & 0.37 & 0.90 & & 12.86 \\ No Inf. & & \tiny $\bullet$ & \tiny $\bullet$ & \tiny $\bullet$ & & \tiny $\bullet$ & & 0.19 & 0.31 & & 7.69 \\ No Rest. 
& & \tiny $\bullet$ & \tiny $\bullet$ & \tiny $\bullet$ & \tiny $\bullet$ & & & 0.18 & 0.32 & & 8.20 \\
Complete & & \tiny $\bullet$ & \tiny $\bullet$ & \tiny $\bullet$ & \tiny $\bullet$ & \tiny $\bullet$ & & 0.15 & 0.26 & & 8.18 \\
\bottomrule
\end{tabular}
\label{tab:components}
\end{table}
In Figure \ref{fig:scheduling-example}, we evaluate the trade-off between the solution quality, in terms of the $\overline{RPD}$, and the average computational time, with respect to the number of iterations considered. We ran the experiments with the total number of iterations~($\eta$) ranging from 500 to 10000 with a step size of 500. Note that the average time grows linearly as the number of iterations increases. The $\overline{RPD}$ is presented with a 95\% confidence interval. One can note that the average RPD decreases as the number of iterations grows, falling faster up to 4000 iterations. From 4000 to 4500 iterations, the method is stable, and it then resumes a slower, continuous reduction of the average RPD from 4500 to 7000 iterations. From 7000 iterations onwards, the method stabilizes, maintaining the same solution quality until reaching the maximum number of iterations tested~(10000), indicating that it converges to a $\overline{RPD}$ of around 0.08\%.
\begin{figure}
\caption{Average RPD and average computational time with different numbers of iterations.}
\label{fig:scheduling-example}
\end{figure}
In the next analysis, we compare the \texttt{IG-RG} with the best methods in the literature for solving the problem. As mentioned before, in \citet{abu2020matheuristics}, the authors tackled the problem with matheuristic algorithms, comparing six variants. In their analysis, two of the matheuristics presented the best results, the \texttt{GRASP-Math}$_3$ and the \texttt{ILS-Math}$_3$. Both methods combine a metaheuristic approach with MIP-based local searches using a batch scheduling formulation. The former considers a Greedy Randomized Adaptive Search Procedure~(GRASP) framework, and the latter an Iterated Local Search~(ILS). We compare them in terms of the RPD, the computational time, and the ability to reach the best-found solution. Based on the previous analysis, we ran the algorithm for 2500, 4500, and 7000 iterations. The results are depicted in Table~\ref{tab:bench1_math_comparison} with the average~(Avg), maximum~(Max), and standard deviation~(SD) for the RPDs and computational times, the percentage of instances~(\%Inst) and runs~(\%Runs) in which each algorithm achieved the best solution, and the percentage of instances in which the best solution is uniquely found by a given approach (\%Unique). Each \texttt{IG-RG} variant includes the number of iterations in its name. Our experiments were performed on a machine with the same characteristics as the one used by \citet{abu2020matheuristics} for a fair comparison. Note that the \texttt{IG-RG} has a superior performance in all criteria, even when running for only 2500 iterations (\texttt{IG-RG-2500}). Its worst-case RPD (Max) remains around 2\%, with a low standard deviation (0.28), showing consistency across the different runs. Even when the number of iterations grows, as in the case of the \texttt{IG-RG-7000} (\texttt{IG-RG} running for 7000 iterations), the average computational time is at least 94\% lower than that of the matheuristics. The \texttt{IG-RG} provided new best solutions (upper bounds) for this set of instances.
The algorithm reached the best solution in 94.44\% of the instances and 70\% of the runs when executed for 7000 iterations. Complete detailed results are available as supplementary material.
\begin{table}[htbp]
\centering
\caption{Comparison between the iterated greedy algorithm and the best methods in the literature for this set of instances.}
\resizebox{\textwidth}{!}{
\begin{tabular}{lrccccccccccc}
\toprule
\multicolumn{1}{c}{\multirow{2}[4]{*}{Algorithm}} & & \multicolumn{3}{c}{RPD (\%)} & & \multicolumn{3}{c}{Time (seconds)} & & \multicolumn{3}{c}{Achieved Best Solution} \\
\cmidrule{3-5}\cmidrule{7-9}\cmidrule{11-13} & & Avg & Max & SD & & Avg & Max & SD & & \%Inst. & \%Run & \%Unique \\
\midrule
\texttt{GRASP-Math}$_3$ & & 0.79 & 20.57 & 1.28 & & 464.68 & 2677.43 & 529.83 & & 62.50 & 40.56 & 0.00 \\
\texttt{ILS-Math}$_3$ & & 0.74 & 6.62 & 0.96 & & 411.67 & 2735.17 & 503.52 & & 58.33 & 37.64 & 1.39 \\
\texttt{IG-RG-2500} & & 0.15 & 2.03 & 0.28 & & 8.18 & 26.78 & 9.71 & & 73.61 & 61.25 & 0.00 \\
\texttt{IG-RG-4500} & & 0.11 & 1.47 & 0.23 & & 14.69 & 47.28 & 17.48 & & 83.33 & 64.86 & 4.17 \\
\texttt{IG-RG-7000} & & 0.07 & 1.23 & 0.16 & & 22.80 & 75.26 & 27.14 & & 94.44 & 70.00 & 19.44 \\
\bottomrule
\end{tabular}
}
\label{tab:bench1_math_comparison}
\end{table}
\subsection{Experiments on the Large-Sized Benchmark Set} \label{sec:large_bench}
\subsubsection{Instances Description} \label{sec:large_instances}
The second benchmark combines concepts from different instance generation processes in the scheduling literature. We generated 3 instances for each combination of the number of machines $m \in \{5, 10\}$, the number of operations $o \in \{50, 75, 100\}$, the number of jobs $n=\lceil o/q \rceil$, where $q \in \{3,5\}$, and the number of families $f \in \{3,5\}$. Therefore, the set is composed of 72 instances in total. The generation of the remaining aspects is described in the following. Processing times follow a discrete uniform distribution $U(1,100)$. This is a well-known interval considered in the scheduling literature for drawing processing times (see, for instance, the works of \citet{sheikhalishahi2019multi}, \citet{fanjul2010iterated}, \citet{lin2013multiple}, and \citet{akturk2001new}). Family setup times follow a discrete uniform distribution $U(10,30)$, as in the work of \citet{sheikhalishahi2019multi}. Jobs' weights are drawn from a discrete uniform distribution $U(1,10)$, as established by \citet{lin2013multiple}. Operations' families are defined according to the scheme proposed by \citet{dunstall2005comparison}. First, a random number between 0 and 1 is generated for each family. Then, these numbers are normalized to define the proportion of operations in each family. Following the defined proportions, operations are assigned to families at random. Eligibility subsets are defined according to the scheme proposed by \citet{bitar2016memetic}. First, the number of eligible machines for each operation is drawn from a discrete uniform distribution $U(1,m)$. Then, the eligible machines are selected at random, respecting the defined number. Release dates for machines and operations are defined based on the works of \citet{pei2020new}, \citet{AbuMArrul2020}, and \citet{akturk2001new}, following a discrete uniform distribution $U(0,MR)$, where $MR$ is the maximum release date, defined as $MR = \bigl\lceil \alpha \bigl( \sum_{O_i \in \mathcal{O}} p_i + \sum_{g \in \mathcal{F}} s_{g} |\mathcal{F}_g| \bigr) / m \bigr\rceil$.
We use $\alpha=0.5$ for defining the operations' release dates and $\alpha=0.1$ for defining the machines' release dates. The association between jobs and operations follows a modified version of the scheme proposed by \citet{AbuMArrul2020}, which we organized into two steps. Let $\mathcal{U} \subseteq \mathcal{O}$ be the subset of unassigned operations, initialized as $\mathcal{U} = \mathcal{O}$. In the first step, jobs are selected one by one at random, and $\min\{|\mathcal{U}|, \lceil o/n \rceil\}$ operations, selected from the subset $\mathcal{U}$, are randomly associated with the selected job. Selected operations are removed from $\mathcal{U}$ before moving to the next job. After the first step, each operation is associated with a unique job. Then, in the second step, for each operation/job pair, we decide, with a probability of 5\%, whether they are also associated. Capacities of machines and operations' sizes are defined based on the works of \citet{pei2015serial} and \citet{AbuMArrul2020}. The machines' capacities follow a discrete uniform distribution $U(10,15)$, while the operations' sizes are drawn from a discrete uniform distribution $U(1,5)$.
\subsubsection{Results and Discussion} \label{sec: results}
After highlighting the superior performance of the proposed IG algorithm against the current methods in the literature, in this section, we conduct experiments on the new set of instances considering four variants of the method, with different combinations of the destroy and repair operators (Section~\ref{sec:destroy_repair}): (1)~\texttt{IG-RG}: IG with \texttt{RandomDestroy} and \texttt{GreedyRepair}; (2)~\texttt{IG-RP}: IG with \texttt{RandomDestroy} and \texttt{PseudoGreedyRepair}; (3)~\texttt{IG-PG}: IG with \texttt{PseudoRandomDestroy} and \texttt{GreedyRepair}; (4)~\texttt{IG-PP}: IG with \texttt{PseudoRandomDestroy} and \texttt{PseudoGreedyRepair}. The results are shown in Table~\ref{tab:bench2_methods}, with instances grouped by the number of operations~($o$) and machines~($m$). In this experiment, we vary the total number of iterations~($\eta$), evaluating the quality of the solutions in terms of the average relative percentage deviation~($\overline{RPD}$) and the average computational time~($\overline{Time}$) for 2500, 4500, and 7000 iterations. The best values for the $\overline{RPD}$ are highlighted in bold.
\begin{table}[htb!]
\centering \caption{Results by group of instances for each algorithm within different number of iterations.} \resizebox{\textwidth}{!}{ \begin{tabular}{cccccccccccccc} \toprule \multirow{2}[4]{*}{$\eta$} & Group & & \multicolumn{2}{c}{\texttt{IG-RG}} & & \multicolumn{2}{c}{\texttt{IG-RP}} & & \multicolumn{2}{c}{\texttt{IG-PG}} & & \multicolumn{2}{c}{\texttt{IG-PP}} \\ \cmidrule{4-5}\cmidrule{7-8}\cmidrule{10-11}\cmidrule{13-14} & $o$-$m$ & & $\overline{RPD}$ & $\overline{Time}$ & & $\overline{RPD}$ & $\overline{Time}$ & & $\overline{RPD}$ & $\overline{Time}$ & & $\overline{RPD}$ & $\overline{Time}$ \\ \midrule \multirow{7}[4]{*}{2500} & 50-5 & & 0.35 & 16.79 & & 0.52 & 38.61 & & \textbf{0.34} & 17.37 & & 0.54 & 40.14 \\ & 50-10 & & \textbf{0.46} & 17.62 & & 0.60 & 31.93 & & 0.48 & 18.19 & & 0.61 & 33.19 \\ & 75-5 & & \textbf{0.69} & 74.81 & & 1.32 & 182.71 & & 0.74 & 75.80 & & 1.41 & 195.91 \\ & 75-10 & & \textbf{0.73} & 68.35 & & 1.46 & 144.41 & & 0.76 & 70.07 & & 1.56 & 150.12 \\ & 100-5 & & \textbf{0.72} & 216.27 & & 1.70 & 583.66 & & 0.81 & 216.28 & & 1.88 & 629.27 \\ & 100-10 & & \textbf{0.75} & 189.12 & & 1.85 & 441.93 & & 0.84 & 194.83 & & 1.98 & 468.39 \\ \cmidrule{2-14} & All & & \textbf{0.62} & 97.16 & & 1.24 & 237.21 & & 0.66 & 98.76 & & 1.33 & 252.83 \\ \midrule \multirow{7}[4]{*}{4500} & 50-5 & & \textbf{0.26} & 30.15 & & 0.38 & 69.47 & & 0.28 & 31.28 & & 0.41 & 72.43 \\ & 50-10 & & \textbf{0.38} & 31.69 & & 0.44 & 57.32 & & 0.41 & 32.56 & & 0.50 & 59.77 \\ & 75-5 & & \textbf{0.49} & 134.17 & & 1.20 & 329.93 & & 0.59 & 134.84 & & 1.31 & 351.52 \\ & 75-10 & & \textbf{0.55} & 122.83 & & 1.33 & 259.57 & & 0.56 & 126.27 & & 1.45 & 270.02 \\ & 100-5 & & \textbf{0.52} & 388.95 & & 1.57 & 1050.79 & & 0.60 & 390.18 & & 1.69 & 1138.61 \\ & 100-10 & & \textbf{0.61} & 341.16 & & 1.72 & 796.96 & & \textbf{0.61} & 350.29 & & 1.86 & 841.75 \\ \cmidrule{2-14} & All & & \textbf{0.47} & 174.82 & & 1.11 & 427.34 & & 0.51 & 177.57 & & 1.20 & 455.69 \\ \midrule \multirow{7}[4]{*}{7000} & 50-5 & & \textbf{0.19} & 47.33 & & 0.31 & 109.10 & & 0.23 & 49.20 & & 0.33 & 113.87 \\ & 50-10 & & \textbf{0.30} & 49.99 & & 0.39 & 90.57 & & 0.33 & 51.30 & & 0.40 & 94.11 \\ & 75-5 & & \textbf{0.45} & 209.51 & & 1.10 & 515.05 & & 0.47 & 211.97 & & 1.21 & 550.21 \\ & 75-10 & & 0.48 & 192.35 & & 1.24 & 406.24 & & \textbf{0.47} & 199.05 & & 1.31 & 423.76 \\ & 100-5 & & \textbf{0.42} & 610.52 & & 1.49 & 1640.53 & & 0.45 & 608.87 & & 1.64 & 1778.70 \\ & 100-10 & & \textbf{0.48} & 536.75 & & 1.62 & 1249.20 & & 0.50 & 550.54 & & 1.75 & 1322.61 \\ \cmidrule{2-14} & All & & \textbf{0.39} & 274.41 & & 1.02 & 668.45 & & 0.41 & 278.49 & & 1.11 & 713.88 \\ \bottomrule \end{tabular} } \label{tab:bench2_methods} \end{table} One can see that methods with the \texttt{GreedyRepair} operator (\texttt{IG-RG} and \texttt{IG-PG}) outperformed the ones with the \texttt{PseudoGreedyRepair} operator (\texttt{IG-RP} and \texttt{IG-PP}), in terms of the $\overline{RPD}$ and $\overline{Time}$, independently of the number of iterations executed. Moreover, the discrepancy between these algorithms grows when more iterations are performed. For example, when running for 7000 iterations and considering the complete set of instances, \texttt{IG-RG} reached an average RPD of 0.39\%. At the same time, algorithms with the \texttt{PseudoGreedyRepair} operator remained with average RPDs above 1\%. Not surprisingly, the $\overline{RPD}$ values decrease when the number of iterations increases for all algorithm variants. 
One can also note that the algorithms with the \texttt{GreedyRepair} operator consume less than half of the others' computational time. Overall, the \texttt{IG-RG} dominates the remaining variants, with the best values for the $\overline{RPD}$ and $\overline{Time}$ on almost all groups, independently of the number of iterations performed. An interesting aspect of the results is that the average computational time is usually higher for groups with five machines. This is mainly because solving these instances is more challenging due to the higher number of operations performed by each machine, which increases the local search's impact on the algorithm's performance. The studied problem is characterized by the fact that few operations define the jobs' completion times and, consequently, the objective function's value. Thus, any random insertion of operations can cause a significant disruption in a given solution's cost. The local search may struggle to restore reasonable solutions from a perturbed solution, given that numerous movements lead to solutions of equivalent cost. This corroborates the weak performance of the \texttt{PseudoGreedyRepair} operator in the addressed problem. Figure~\ref{fig:boxplot_operations} shows the RPD distributions of each algorithm for the different numbers of iterations, considering the complete set of instances, to help visualize the contrast between the methods' solutions. One can note some relevant points: (1)~The distributions become less spread out, with smaller interquartile ranges, as the number of iterations grows. (2)~The weakness of the algorithms that use the \texttt{PseudoGreedyRepair} operator is more evident, getting worse when the operator is combined with the \texttt{PseudoRandomDestroy} operator. (3)~The \texttt{IG-RG} algorithm improves faster, reaching an average RPD below 0.5\% when running for 4500 iterations. (4)~The \texttt{IG-PG} variant converges more slowly than \texttt{IG-RG} but is competitive when performed for 7000 iterations.
\begin{figure}
\caption{Boxplots of the RPD distributions for each algorithm with different numbers of iterations, considering the complete set of instances.}
\label{fig:boxplot_operations}
\end{figure}
To improve the discussion and provide more statistical information about the RPD distributions, we conducted the pairwise Wilcoxon rank-sum test with Hommel's \textit{p}-value adjustment at a significance level of 0.05. Before performing the test, we executed the Shapiro-Wilk test on the distributions, which indicated that the RPDs are not normally distributed. The \textit{p}-values obtained from the pairwise Wilcoxon test were all below 0.05, except when comparing the \texttt{IG-RG} against the \texttt{IG-PG}, running for 7000 iterations, where the \textit{p}-value is 0.16. This result confirms what we highlighted and shows that statistically significant differences exist between the algorithms using the \texttt{GreedyRepair} (\texttt{IG-RG} and \texttt{IG-PG}) and \texttt{PseudoGreedyRepair} operators (\texttt{IG-RP} and \texttt{IG-PP}). Also, although \texttt{IG-RG} achieves a better value for the $\overline{RPD}$ regardless of the number of iterations performed, the statistical test does not indicate a significant difference between it and the \texttt{IG-PG} variant when the algorithms are executed for 7000 iterations, showing that both methods converge to solutions of similar quality as the number of iterations grows.
In the next analysis, we evaluate the methods' performance in terms of the proportion of best solutions achieved by each one. Thus, Table~\ref{tab:percentage_instances_bench2} depicts the percentage of instances (\%Inst.) and runs (\%Run) in which each method achieved the best solution and the percentage of instances in which the best solution was uniquely achieved by a given algorithm (\%Unique). The table includes the average RPD ($\overline{RPD}$) and the standard deviation (SD) of the RPDs regarding the runs in which the best solution was not achieved. Algorithm names include the number of iterations executed, and the best results are highlighted in bold. Note that variant \texttt{IG-PG} surpasses the others, regardless of the number of iterations, finding unique best solutions in 38.89\% of the instances (29.17\% + 6.94\% + 2.78\%). Moreover, it reaches the best solutions in 44.44\% of the instances when running for 7000 iterations. These results indicate that the \texttt{PseudoRandomDestroy} operator can lead to better solutions when combined with the \texttt{GreedyRepair} operator. However, variant \texttt{IG-PG} needs a higher number of iterations to achieve an overall competitive performance, which may indicate that the method is more likely to get stuck in locally optimal solutions. Still, \texttt{IG-PG} seems to be a good option when the decision-maker has more time to solve the problem. One can note that variant \texttt{IG-PP} had the worst performance in these criteria, not providing any of the best solutions. The \texttt{IG-RP} variant found some of the best solutions, contributing reference values for future studies related to this set of instances. Another important aspect concerns the low number of runs in which the methods found the best solution. Note that \texttt{IG-RG-7000} found the best solution in 7.92\% of the runs. In contrast, for the \textit{Small-Sized Benchmark Set}, this number was 70.00\% (see Section \ref{sec:results_small}), showing that the new benchmark is a more challenging set to solve. Despite this behavior, the \textit{Not Achieved} columns show that the $\overline{RPD}$ and SD are low when the method does not reach the best solutions, highlighting its stable performance.
\begin{table}[htb!]
\centering
\caption{Analysis of the best-solution achievement among the IG algorithm variants, considering the complete set of instances.}
\begin{tabular}{lrrrrrrr}
\toprule
\multicolumn{1}{c}{\multirow{2}[4]{*}{Algorithm}} & & \multicolumn{1}{c}{\multirow{2}[4]{*}{\%Inst.}} & \multicolumn{1}{c}{\multirow{2}[4]{*}{\%Run}} & \multicolumn{1}{c}{\multirow{2}[4]{*}{\%Unique}} & & \multicolumn{2}{c}{Not Achieved} \\
\cmidrule{7-8} & & & & & & \multicolumn{1}{c}{$\overline{RPD}$} & \multicolumn{1}{c}{SD} \\
\midrule
\texttt{IG-RG-2500} & & 13.89 & 2.64 & 5.56 & & 0.63 & 0.36 \\
\texttt{IG-RG-4500} & & 20.83 & 4.58 & 8.33 & & 0.49 & 0.30 \\
\texttt{IG-RG-7000} & & 40.28 & \textbf{7.92} & 18.06 & & \textbf{0.42} & \textbf{0.26} \\
\texttt{IG-RP-2500} & & 8.33 & 1.11 & 0.00 & & 1.25 & 0.61 \\
\texttt{IG-RP-4500} & & 11.11 & 2.36 & 1.39 & & 1.13 & 0.60 \\
\texttt{IG-RP-7000} & & 13.89 & 3.61 & 1.39 & & 1.06 & 0.57 \\
\texttt{IG-PG-2500} & & 18.06 & 2.50 & 2.78 & & 0.68 & 0.39 \\
\texttt{IG-PG-4500} & & 23.61 & 4.17 & 6.94 & & 0.53 & 0.31 \\
\texttt{IG-PG-7000} & & \textbf{44.44} & \textbf{7.92} & \textbf{29.17} & & 0.44 & 0.27 \\
\texttt{IG-PP-2500} & & 2.78 & 0.42 & 0.00 & & 1.34 & 0.67 \\
\texttt{IG-PP-4500} & & 6.94 & 1.53 & 0.00 & & 1.22 & 0.65 \\
\texttt{IG-PP-7000} & & 11.11 & 2.08 & 0.00 & & 1.13 & 0.63 \\
\bottomrule
\end{tabular}
\label{tab:percentage_instances_bench2}
\end{table}
Figure \ref{fig:specific_parameter_analysis} shows the RPD distributions for two different grouping schemes to assess the performance of the \texttt{IG-RG-7000} concerning relevant aspects of the instances. In the first, we grouped the instances by the number of families~($f$) and operations~($o$). In the second, the groups indicate the job proportion parameter~($q$), defined in the instance generation process, and the number of operations~($o$) to schedule. One can see an improvement in the solutions when the number of families increases. This behavior indicates that the family constraint, being a hard constraint, reduces the possible combinations of operations within the batches, enhancing the local search performance. The same effect can be observed for the number of jobs. When the job proportion is equal to three~($q=3$), meaning that approximately three operations compose a job, more jobs are considered. For example, when 100 operations must be scheduled, the number of jobs is equal to 34. In these cases, the RPD distributions are less dispersed, with lower average and interquartile values. If the number of jobs decreases, fewer operations' completion times define the objective function's value. Thus, the proportion of solutions with equivalent cost increases, limiting the efficiency of the local search.
\begin{figure}
\caption{Boxplots of the RPD distributions for the \texttt{IG-RG-7000} under the two grouping schemes.}
\label{fig:specific_parameter_analysis}
\end{figure}
\section{Conclusions} \label{sec:conclusion}
In this paper, we addressed a complex parallel machine scheduling problem, where jobs are composed of operations that are arranged into families. Non-anticipatory setup times are incurred at the beginning of each batch, formed by a sequence of operations considering their sizes and families. The problem also considers release dates for operations and machines, and machine eligibility and capacity constraints.
To solve it, we developed four Iterated Greedy (IG) algorithm variants, combining two destroy operators and two repair operators with an RVND local search procedure and other diversification and intensification strategies. We tested the algorithms on two instance sets, named \textit{Small-Sized Benchmark Set} and \textit{Large-Sized Benchmark Set}. The former is a known benchmark set from the literature, introduced by \citet{AbuMArrul2020}. The latter is a new set proposed in this work, based on instance generation definitions from several papers in the machine scheduling literature. The results showed that the two variants using a greedy repair operator performed better than the remaining ones. The best variant outperformed the current literature approaches for solving the addressed problem in terms of the average deviation from the best solutions and the average computational time. Moreover, the IG algorithm provided new best solutions (upper bounds) for the \textit{Small-Sized Benchmark Set}. Experiments showed that the solution quality improves when the number of iterations of the algorithm grows. Nevertheless, the algorithms are still efficient when running for the smallest number of iterations tested (2500 iterations). Using the most common destruction and repair operators in the IG literature, one of the method's variants achieved the best results in the different analyses presented, emphasizing that these operators remain relevant and efficient despite their simplicity. The present study reinforces the applicability of iterated greedy algorithms to combinatorial optimization problems, even when a complicated problem inspired by a real context is considered. The idea of destroying and repairing the solution is a simple yet powerful and efficient technique, as we confirmed in our experiments. Since we are dealing with a complex parallel machine scheduling problem, it is worth emphasizing that the developed IG algorithms can be applied to simplified variants of the problem. This aspect enhances their relevance to the machine scheduling literature, supporting researchers studying similar problems or simplified variants of it. In this work, we consider that all the problem data are deterministic and known in advance. However, considering uncertainties in some aspects of the problem would make it attractive to researchers working with realistic problems. Many works on stochastic scheduling problems consider uncertainty in task processing times. Nevertheless, due to the complexity of the problem addressed, uncertainties can be defined for several other aspects. For instance, uncertainty in the size of operations could lead to infeasible solutions concerning the machines' capacities. Moreover, uncertainty in the release dates would impact solutions' costs due to the non-anticipatory setup times. Therefore, investigating these aspects and developing tools to deal with the problem's stochastic variants is an exciting theme for future work.
\section*{Acknowledgments}
This research was partially supported by PUC-Rio, by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior – Brasil~(CAPES) – Finance Code 001, by the Conselho Nacional de Desenvolvimento Cient{\'i}fico e Tecnol{\'o}gico~(CNPq), under grant numbers \mbox{313521/2017-4} and \mbox{315361/2020-4}, and by the Norwegian Agency for International Cooperation and Quality Enhancement in Higher Education~(Diku) – Project number UTF-2017-four-year/10075.
\end{document}
\begin{document} \subjclass[2000]{03E40} \keywords{c.c.c.\ partitions, proper forcing, forcing axiom} \begin{abstract} We outline a portfolio of novel iterable properties of c.c.c.\ and proper forcing notions and study its most important instantiations, Y-c.c.\ and Y-properness. These properties have interesting consequences for partition-type forcings and anticliques in open graphs. Using Neeman's side condition method, it is possible to obtain PFA variations and prove consistency results for them. \end{abstract} \title{Why Y-c.c.} \section{Introduction} A recent work of Yorioka \cite{yorioka:random} implicitly contains the following definition. \begin{definition} \label{yccdefinition} A poset $P$ satisfies \emph{Y-c.c.} if for every countable elementary submodel $M\prec H_\theta$ containing $P$ and every condition $q\in P$ there is a filter $F\in M$ on the completion $RO(P)$ such that $\{p\in RO(P)\cap M: p\geq q\}\subset F$. \end{definition} \noindent We will show that this is a property intermediate between $\sigma$-centered and c.c.c.\ whose verification follows typical $\Delta$-system arguments. Y-c.c.\ holds for many natural examples, such as the Aronszajn tree specialization forcing (Corollary~\ref{aronszajncorollary}), the gap specialization forcing (Corollary~\ref{gapcorollary}), Todorcevic's posets used for the resolution of the Horn--Tarski problem (Corollary~\ref{todorceviccorollary}), and other partition-style c.c.c.\ posets. Y-c.c.\ has a number of pleasing consequences, such as not adding random reals (Corollary~\ref{covercorollary}) or branches into $\omega_1$-trees (Corollary~\ref{branchcorollary}), preserving the uncountable chromatic number of open graphs, and not adding uncountable anticliques for them (Corollary~\ref{cliquecorollary}). It is preserved under the finite support iteration (Theorem~\ref{fsitheorem}), which means that the forcing axiom for Y-c.c.\ posets can be forced via a Y-c.c.\ poset (Corollary~\ref{ymatheorem}). There is also a non-c.c.c.\ variant: \begin{definition} A poset $P$ is \emph{Y-proper} if for every countable elementary submodel $M\prec H_\theta$ containing $P$ and every condition $p\in P\cap M$ there is $q\leq p$ (a \emph{Y-master condition}) which is master for $M$ and such that for every $r\leq q$ there is a filter $F\in M$ on the completion $RO(P)$ such that $\{s\in RO(P)\cap M: s\geq r\}\subset F$. \end{definition} Again, a number of natural posets are Y-proper, such as the Laver forcing (Theorem~\ref{firsttheorem}), the ideal-based forcings (Theorem~\ref{idealbasedtheorem}), or the PID forcings (Theorem~\ref{pidtheorem}). Implications of Y-properness are similar to those of Y-c.c. Preservation of Y-properness under the countable support iteration is unclear, but the Neeman method still allows one to produce a Y-proper forcing which forces the forcing axiom YPFA for Y-proper posets (Theorem~\ref{ypfaforcingtheorem}). YPFA does not imply OCA. Both Y-c.c.\ and Y-properness are instances of a wide-ranging portfolio of iterable forcing properties quite distinct from those considered so far in the literature. The general scheme for these properties results from replacing the requirement that the sets $F\subset RO(P)$ be filters with some other regularity demand on $F$. Section~\ref{generalsection} provides the fairly involved general iteration theorems for the resulting concepts. The present paper owes a great deal to previous work of Yorioka.
In particular, most of the examples were known to Yorioka, with proofs that inspired our proofs. Our contribution consists of isolating abstract, axiomatically useful classes of partial orders, and proving general preservation and forcing axiom theorems about them. We use the set-theoretic notational standard of \cite{jech:newset}. The forcing notation follows the western convention: $p\geq q$ means that $q$ is stronger or more informative than $p$. If $P$ is a (separative) partial order then $RO(P)$ denotes the completion of $P$, the unique complete Boolean algebra in which $P$ is dense. If $B$ is a complete Boolean algebra and $\phi$ is a statement of its forcing language, $\|\phi\|$ denotes the Boolean value of $\phi$ in $B$; that is, $\|\phi\|$ is the supremum of all $b\in B$ such that $b\Vdash\phi$. In all arguments, $\theta$ denotes a large enough regular cardinal, and $H_\theta$ the collection of all sets whose transitive closure has size less than $\theta$. OCA denotes the Open Coloring Axiom \cite[Section 8]{todorcevic:partitions}, the statement that every open graph on a second countable space is either countably chromatic or else contains an uncountable clique. \section{Y-c.c.: consequences} \label{asection} With a novel property such as Y-c.c., it appears necessary to explore its basic consequences. \begin{theorem} \label{ccctheorem} $\sigma$-centeredness implies Y-c.c., which in turn implies c.c.c. \end{theorem} \begin{proof} Suppose first that a poset $P$ is $\sigma$-centered, and fix a covering $\{F_n:n\in\omega\}$ of $RO(P)$ by countably many filters. Let $M\prec H_\theta$ be a countable elementary submodel containing $P, F_n$ for every $n\in\omega$, and let $q\in P$ be an arbitrary condition. There must be $n\in\omega$ such that $q\in F_n$; clearly, the filter $F_n\in M$ has the properties required in Y-c.c. The Y-c.c.\ of $P$ has been verified. The other implication is more involved. Suppose for contradiction that $P$ is a Y-c.c.\ poset and $A\subset P$ is an antichain of size $\aleph_1$. Let $M\prec H_\theta$ be a countable elementary submodel containing $P, A$. Let $q\in A\setminus M$ be any element and let $F\in M$ be the filter guaranteed by Y-c.c.\ Let $G=\{B\subset A: \sum B\in F\}$; so $G\in M$ is a collection of subsets of $A$. \begin{claim} For every set $B\subset A$ in $M$, $B\in G\leftrightarrow q\in B$. \end{claim} \begin{proof} Suppose that $q\in B$. Then $\sum B\geq q$ is an element of $M$ which must belong to $F$ by the choice of $F$, and so $B\in G$. Suppose that $q\notin B$ and for contradiction assume that $B\in G$. By the previous paragraph, $A\setminus B\in G$, and so both $\sum B$ and $\sum (A\setminus B)$ belong to the filter $F$. This is a contradiction: the conjunction of the two is zero since $A$ is an antichain. \end{proof} We can now argue that $G$ is a nonprincipal $\sigma$-complete ultrafilter on $A$. By the elementarity of $M$, it is enough to show that $G$ is closed under countable intersections in $M$, and that for every set $B\subset A$ in $M$, exactly one of $B\in G$, $A\setminus B\in G$ holds. Both of these statements follow immediately from the claim. However, in ZFC there are no nonprincipal countably complete ultrafilters on sets of size $\aleph_1$, the final contradiction. \end{proof} An important feature of Y-c.c.\ posets is their interaction with open graphs. This is encapsulated in the following theorem.
\begin{theorem} \label{cliquetheorem} Suppose that $X$ is a second countable topological space and $H\subset X^\omega$ is an open set. If $P$ is Y-c.c.\ then every $H$-anticlique in the extension is covered by a ground model countable set of $H$-anticliques. \end{theorem} \noindent Here, an $H$-anticlique is just a set $A\subset X$ such that $A^\omega\cap H=0$. \begin{proof} Let $X$ be the space and $H\subset X^\omega$ be an open set. Suppose that $P\Vdash \dot A\subset X$ is an anticlique. For every filter $F\subset RO(P)$, let $B(\dot A, F)=\{x\in X:$ for every open neighborhood $O\subset X$ of $x$, the Boolean value $\|\check O\cap\dot A\neq 0\|$ is in the filter $F\}$. \begin{claim} The set $B(\dot A, F)$ is an $H$-anticlique. \end{claim} \begin{proof} For contradiction assume that this fails and let $\langle x_n:n\in\omega\rangle\in H$ be a sequence of points in $B(\dot A, F)$. Use the fact that $H$ is open to find a number $l\in\omega$ and open sets $O_k\subset X$ for $k\in l$ such that $x_k\in O_k$ and $\prod_{k\in l}O_k\times X^\omega\subset H$. By the definition of the set $B(\dot A, F)$, the Boolean values $\|\check O_k\cap\dot A\neq 0\|$ for $k\in l$ are all in the filter $F$ and have a lower bound $p\in P$. But then, $p\Vdash (\prod_{k\in l}O_k\times X^\omega)\cap \dot A^\omega\neq 0$, so $p$ forces $\dot A$ not to be an $H$-anticlique. This is a contradiction. \end{proof} Now, let $M\prec H_\theta$ be a countable elementary submodel containing $P, X, H, \dot A$; we claim that $\dot A$ is forced to be covered by the anticliques in the model $M$. Suppose that this fails, and let $q\in P$ be a condition and $x\in X$ a point such that $x$ belongs to no anticliques in the model $M$, and $q\Vdash\check x\in\dot A$. Let $F\subset RO(P)$ be a filter in the model $M$ containing all elements of $RO(P)\cap M$ weaker than $q$. Since the space $X$ is second countable, it has a basis all of whose elements belong to the model $M$. For every such basic open set $O\subset X$ containing the point $x$, the Boolean value $\|\check O\cap \dot A\neq 0\|$ is weaker than $q$, and it belongs to the model $M$. Therefore, $x\in B(\dot A, F)$, which is an anticlique in the model $M$ by the claim, contradicting the choice of the point $x$. \end{proof} Theorem~\ref{cliquetheorem} has a number of prominent corollaries. The following is immediate: \begin{corollary} \label{cliquecorollary} Let $P$ be a Y-c.c.\ poset. Let $X$ be a second countable space and let $H\subset X^\omega$ be an open set. \begin{enumerate} \item If in the extension $X$ is covered by countably many $H$-anticliques, then already in the ground model it is covered by countably many $H$-anticliques; \item if in the extension $H$ has an uncountable anticlique, then $H$ has an uncountable anticlique in the ground model. \end{enumerate} \end{corollary} \noindent Thus, Y-c.c.\ posets cannot be used to force an instance of OCA for clopen graphs. \begin{corollary} \label{covercorollary} Let $P$ be a Y-c.c.\ poset. If $X$ is a compact Polish space and $C$ is an $\omega_1$-cover consisting of $G_\delta$-sets, then in the extension $C$ remains an $\omega_1$-cover. \end{corollary} \noindent Here, a set $C\subset\mathcal{P}(X)$ is an $\omega_1$-cover if every countable subset of $X$ is a subset of one element of $C$.
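\noindent Let us also record, as a routine observation, how the $X^\omega$ formulation of Theorem~\ref{cliquetheorem} relates to the binary open graphs $H_0\subset [X]^2$ appearing later in the paper. A graph $H_0$ can be identified with the open set
$$H=\{\langle x_n:n\in\omega\rangle\in X^\omega: \{x_0, x_1\}\in H_0\},$$
and a set $A\subset X$ is an $H$-anticlique exactly when it contains no edge of $H_0$, i.e.\ when it is an $H_0$-anticlique in the usual graph-theoretic sense.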
Corollary~\ref{covercorollary} needs to be understood in the context of interpretations of descriptive set theoretic notions in generic extensions: the space $X$ as well as the $G_\delta$ elements of the cover $C$ are naturally interpreted in the $P$-extension as a compact Polish space and its $G_\delta$-subsets again. \begin{proof} Let $p\in P$ be a condition and $\dot x_n$ for $n\in\omega$ be names for elements of $X$; we must find a set $B\in C$ and a condition $q\leq p$ such that $q\Vdash\{\dot x_n:n\in\omega\}\subset\dot B$. \begin{claim} There is a condition $q\leq p$ and a countable set $\{y_n:n\in\omega\}$ such that for every compact set $K\subset X$, $$q\Vdash \dot K\cap \{\dot x_n:n\in\omega\}\neq 0\text{ implies }K\cap \{y_n:n\in\omega\}\neq 0.$$ \end{claim} \begin{proof} Consider the set $H\subset (K(X))^\omega$ consisting of all sequences $\langle K_n:n\in\omega\rangle$ such that $\bigcap_nK_n=0$. A compactness argument shows that if the hyperspace $K(X)$ of compact subsets of $X$ is equipped with the Polish Vietoris topology, the set $H$ is open. For each $n\in\omega$ let $\dot A_n$ be the $P$-name for the collection of compact ground model subsets of $X$ whose canonical interpretations contain the point $\dot x_n$. Clearly, $\dot A_n\subset K(X)^V$ is forced to be an $H$-anticlique. By Theorem~\ref{cliquetheorem}, there is a condition $q\leq p$ and a countable set $\{D_n: n\in\omega\}$ of $H$-anticliques such that $q\Vdash\bigcup_n\dot A_n\subset\bigcup_n D_n$. A compactness argument shows that the intersection of each $H$-anticlique is nonempty, and for each $n\in\omega$ there is a point $y_n\in\bigcap D_n$. It is immediate that the set $\{y_n:n\in\omega\}$ works. \end{proof} Pick a condition $q\leq p$ and a set $\{y_n:n\in\omega\}$ as in the claim. Let $B\in C$ be a $G_\delta$-set such that $\{y_n:n\in\omega\}\subset B$. It will be enough to prove that $q\Vdash\{\dot x_n:n\in\omega\}\subset\dot B$. Suppose that this fails. As $B$ is $G_\delta$, there must be a ground model open superset of $B$ not containing the set $\{\dot x_n:n\in\omega\}$. Let $K\subset X$ be the compact complement of this open set. Then $K\cap\{ \dot x_n:n\in\omega\}\neq 0$ while $K\cap \{y_n:n\in\omega\}=0$, a contradiction. \end{proof} The corollary has numerous consequences: Y-c.c.\ posets do not add random reals since the $G_\delta$ Lebesgue null sets form an $\omega_1$-cover, and so every real of the extension belongs to a ground model coded null $G_\delta$ set. Y-c.c.\ posets do not separate gaps of uncountable cofinality, since each such gap induces a natural $\omega_1$-cover of $G_\delta$-sets. Theorem~\ref{cliquetheorem} did not use the fact that the filters $F$ of Definition~\ref{yccdefinition} come from the model $M$; it was enough to assume that they come from some fixed countable set of filters on $RO(P)\cap M$. The assumption that the filters come from the model $M$ is used in the preservation of Y-c.c.\ under the finite support iteration, as well as in the following two features. \begin{theorem} Suppose that $P$ has Y-c.c.\ and $\kappa$ is a cardinal. For every function $f\in\kappa^\kappa$ in the $P$-extension, if $f\restriction a$ is in the ground model for every ground model countable set $a$, then $f$ is in the ground model. \end{theorem} \begin{proof} Let $p\in P$ be a condition and $\dot f$ a name such that $p\Vdash\dot f\in\kappa^\kappa$ is a function such that $\dot f\restriction\check a\in V$ for every countable set $a\in V$. We must find a condition $q\leq p$ and a function $g\in\kappa^\kappa$ such that $q\Vdash\check g=\dot f$.
Let $M$ be a countable elementary submodel of $H_\theta$ containing $P, p, \dot f, \kappa$. Let $q\leq p$ be a condition deciding all values of $\dot f(\alpha)$ for $\alpha\in\kappa\cap M$; such a condition exists since $p$ forces $\dot f\restriction(\kappa\cap M)$ to belong to the ground model. We will show that there is a function $g$ in the model $M$ such that $q\Vdash \dot f=\check g$. Let $F\subset RO(P)$ be a filter in the model $M$ obtained by an application of Y-c.c.\ to $M,q$. By the c.c.c.\ of $P$, for every ordinal $\alpha\in\kappa\cap M$, the ordinal $\beta$ such that $q\Vdash\dot f(\check\alpha)=\check \beta$ must be in the model $M$. Therefore, the Boolean value $\|\dot f(\check\alpha)=\check\beta\|$ is in the model $M$, it is weaker than $q$, and therefore belongs to the filter $F$. Let $g=\{\langle\alpha, \beta\rangle\in\kappa\times\kappa: \|\dot f(\check \alpha)=\check\beta\|\in F\}$. Since $F$ is a filter, this is a partial function from $\kappa$ to $\kappa$. By the elementarity of the model $M$, $g\in M$. We just argued that $g$ is defined for every ordinal $\alpha\in\kappa\cap M$, and so by the elementarity of the model $M$, $g$ is a total function from $\kappa$ to $\kappa$. We have also argued that $q\Vdash\dot f\restriction M=\check g\restriction M$, and by the elementarity of $M$ and the c.c.c.\ of $P$, $q\Vdash\dot f=\check g$ as desired. \end{proof} \begin{corollary} \label{branchcorollary} If $P$ has Y-c.c., then $P$ does not add any new cofinal branches into $\omega_1$-trees. \end{corollary} It is well known that an atomless $\sigma$-centered poset adds an unbounded real, and the proof translates with the obvious changes to Y-c.c.\ posets. \begin{theorem} \label{unboundedtheorem} If $P$ is an atomless poset satisfying Y-c.c., then $P$ adds an unbounded real. \end{theorem} Together with the preservation of Y-c.c.\ under complete subalgebras, this reproves the fact that Y-c.c.\ posets add no random reals. If a Y-c.c.\ poset did add a random real, then the measure algebra would be Y-c.c., which contradicts Theorem~\ref{unboundedtheorem}. \begin{proof} Let $M\prec H_\theta$ be a countable elementary submodel. Let $\langle F_i: i\in\omega\rangle$ be an enumeration of all ultrafilters on $RO(P)$ that belong to the model $M$. Each of them is nowhere dense in $RO(P)$ and so one can find a maximal antichain $A_i\subset RO(P)\setminus F_i$ in the model $M$ for every $i\in\omega$. Each antichain $A_i$ is infinite, and countable by the c.c.c.\ of $P$. Let $\{a_i^j: j\in\omega\}$ be an enumeration of $A_i$ for each $i\in\omega$ and define the name $\tau$ for an element of $\gw^\gw$ by $\tau(i)=j$ if $a_i^j$ belongs to the generic filter. We claim that this is a name for an unbounded real. Suppose not, and find a condition $q\in P$ such that for every $i\in \omega$, $q$ is compatible with only finitely many elements of the antichain $A_i$. Let $F\subset RO(P)$ be a filter in $M$ granted by the application of Y-c.c.\ to $M, q$. Use the axiom of choice in $M$ to find $i\in\omega$ such that $F\subset F_i$. Let $B\subset A_i$ be the finite set of all elements of $A_i$ compatible with $q$. Thus, $B\in M$, $\sum B\in M$, and necessarily $\sum B\geq q$ since no elements of $A_i\setminus B$ are compatible with $q$. Now, $B\cap F_i=0$ and so $\sum B\notin F_i$ as $F_i$ is an ultrafilter. On the other hand, $\sum B\geq q$ and so $\sum B\in F\subset F_i$ by the choice of $i$. This is a contradiction.
\end{proof} The existence of unbounded reals in Y-c.c.\ extensions can be derived also abstractly from Theorem~\ref{cliquetheorem} and the following argument. \begin{theorem} Let $P$ be a bounding poset adding a new point $\dot x\in2^\gw$. Then \begin{enumerate} \item either some condition forces $\dot x$ to be c.c.c.\ over the ground model and then some $\omega_1$-cover of $G_\delta$ sets on a compact Polish space is not preserved; \item or $\dot x$ is forced not to be c.c.c.\ over the ground model, and then there is a compact Polish space $X$, an open graph $H\subset [X]^2$ and an $H$-anticlique in the extension which is not covered by countably many ground model $H$-anticliques. \end{enumerate} \end{theorem} \noindent Here, a point $x\in2^\gw$ is c.c.c.\ over the ground model if there is a $\sigma$-ideal $I$ on $2^\gw$ in the ground model which is c.c.c.\ (i.e.\ there is no uncountable collection of Borel pairwise disjoint $I$-positive sets) and $x$ belongs to no Borel set in $I$ coded in the ground model. \begin{proof} Suppose that $p\in P$ is a condition. Let $I_p$ be the $\sigma$-ideal on $2^\gw$ consisting of all analytic sets $A\subset2^\gw$ such that $p\Vdash\dot x\notin\dot A$. \begin{claim} Every $I_p$-positive analytic set has an $I_p$-positive compact subset. \end{claim} \begin{proof} Let $A\notin I_p$ be an analytic set, and let $T\subset (2\times\omega)^{<\omega}$ be a tree such that $A=\mathrm{proj}([T])$. Let $q\leq p$ be a condition forcing $\dot x\in\dot A$, and let $\dot y$ be a name for a function in $\gw^\gw$ such that $q\Vdash\langle \dot x, \dot y\rangle\in [\check T]$. Use the bounding assumption to find a condition $r\leq q$ and a function $z\in\gw^\gw$ such that $r\Vdash\dot y$ is dominated by $\check z$. Let $S$ be the tree obtained from $T$ by erasing all nodes which exceed the function $z$ at some point in their domain. Then $S$ is a finitely branching tree, $\mathrm{proj}([S])\subset A$ is compact, $r\Vdash \dot x\in \mathrm{proj}([S])$ and so $\mathrm{proj}([S])$ is $I_p$-positive. The claim follows. \end{proof} Now, suppose that there is a c.c.c.\ $\sigma$-ideal $J$ and a condition $p\in P$ which forces that $x$ belongs to no Borel set in $J$. Since each $I_p$-positive Borel set is $J$-positive, the $\sigma$-ideal $I_p$ is c.c.c. Every Borel set $B\in I_p$ is covered by a $G_\delta$ set $C\in I_p$, namely $C=2^\gw\setminus\bigcup A$ where $A$ is some (countable) maximal antichain of compact $I_p$-positive subsets of $2^\gw$ disjoint from $B$. It follows that the collection of all $G_\delta$ sets in the ideal $I_p$ is a $\omega_1$-cover, and the condition $p$ forces it not to be an $\omega_1$-cover in the extension--none of its elements contain the point $\dot x$. Suppose on the other hand that $x$ is forced not to be c.c.c., and the ideal $I_p$ is not c.c.c.\ for any condition $p\in P$. Let $X=K(2^\gw)$ and consider the open graph $H\subset X^2$ consisting of all pairs $\langle K, L\rangle$ such that $K\cap L=0$. Let $\dot A$ be a name for the $H$-anticlique consisting of all compact sets in the ground model containing the point $\dot x$. We claim that it is forced not to be covered by countably many anticliques in the ground model. Suppose that $p\in P$ is a condition and $B_n$ for $n\in\omega$ are $H$-anticliques; we will find a condition $q\leq p$ and a compact set $K\subset2^\gw$ such that $K\notin\bigcup_nB_n$ and $q\Vdash\dot x\in\dot K$. 
Just observe that the $\sigma$-ideal $I_p$ is not c.c.c.\ and use the claim to produce an uncountable collection $C$ of pairwise disjoint $I_p$-positive compact sets. Note that this collection is an $H$-clique, and therefore for each $n\in\omega$ the intersection $B_n\cap C$ can contain at most one set. As $C$ is uncountable, there must be $K\in C\setminus\bigcup_nB_n$ and then any condition $q\leq p$ forcing $\dot x\in\dot K$ is as required. \end{proof} \section{Y-c.c.: examples} Y-c.c.\ in all of our examples is verified using the same general theorem. \begin{theorem} \label{ctheorem} Let $P$ be a poset. Suppose that there is a function $w$ defined on $P$ such that \begin{enumerate} \item for every $p\in P$, $w(p)$ is a finite set; \item if $p, q\in P$ are compatible, then they have a lower bound $r\leq p,q$ such that $w(r)=w(p)\cup w(q)$; \item whenever $\{p_\alpha: \alpha\in\omega_1\}$ and $\{q_\alpha:\alpha\in\omega_1\}$ are subsets of $P$ such that $\{w(p_\alpha):\alpha\in\omega_1\}$ and $\{w(q_\alpha):\alpha\in\omega_1\}$ are $\Delta$-systems with the same root, then there are ordinals $\alpha, \beta\in\omega_1$ such that $p_\alpha$ and $q_\beta$ are compatible. \end{enumerate} \noindent Then $P$ is Y-c.c. \end{theorem} \begin{proof} Let $a$ be a finite set. Say that a set $A\subset P$ is $a$-large if for every countable set $b\supset a$ there is a condition $p\in A$ such that $w(p)\cap b=a$. \begin{claim} \label{demclaim} The set $\{\sum A: A$ is $a$-large$\}\subset RO(P)$ is centered. \end{claim} \noindent The trivial case where there are no $a$-large sets at all is included in the statement of the claim. \begin{proof} Let $\{A_i: i\in n\}$ be finitely many $a$-large sets; we must produce a condition $q\in P$ which for each $i$ has an element of $A_i$ above it. To this end, use transfinite induction and the largeness assumption to find conditions $p_\alpha^i\in A_i$ for each $\alpha\in\omega_1$ and $i\in n$ so that $\{w(p_\alpha^i):\alpha\in\omega_1\}$ is a $\Delta$-system with root $a$ for each $i\in n$. By induction on $i\in n$ find sets $\{q_\alpha^i: \alpha\in\omega_1\}$ so that $\{w(q_\alpha^i): \alpha\in\omega_1\}$ forms a $\Delta$-system with root $a$, and for every $\alpha\in\omega_1$ and every $j\in i+1$ there is $\beta\in\omega_1$ such that $q_\alpha^i\leq p_\beta^j$. The step $i=0$ is trivially satisfied with $p_\alpha^i=q_\alpha^i$. To perform the induction step, by transfinite recursion on $\gamma\in\omega_1$ use item (3) of the assumptions repeatedly to find countable ordinals $\alpha_\gamma$ and $\beta_\gamma$ such that $q_{\alpha_\gamma}^i$ and $p_{\beta_\gamma}^i$ are compatible and whenever $\delta\neq\gamma$ then $(w(p_{\beta_\gamma}^i)\cup w(q_{\alpha_\gamma}^i))\cap (w(p_{\beta_\delta}^i)\cup w(q_{\alpha_\delta}^i))=a$. Then, use (2) to find conditions $q_\gamma^{i+1}\leq p_{\beta_\gamma}^i, q_{\alpha_\gamma}^i$ such that $w(q_{\gamma}^{i+1})=w(p_{\beta_\gamma}^i)\cup w(q_{\alpha_\gamma}^i)$; this concludes the induction step. In the end the condition $q=q_0^{n-1}$ works as required. \end{proof} Now suppose that $M\prec H_\theta$ is a countable elementary submodel containing $w, P$, and suppose that $q\in P$ is any condition.
Let $a=M\cap w(q)\in M$, and find a filter $F\in M$ on $RO(P)$ extending the centered system $\{\sum A: A$ is $a$-large$\}$. We will show that for every $p\in M\cap RO(P)$, if $p\geq q$ then $p\in F$. This will complete the proof of Y-c.c.\ for $P$. Let $A=\{r\in P: r\leq p\}\in M$. It will be enough to show that $A$ is $a$-large, since $P\subset RO(P)$ is dense, and so $\sum A=p$ and $p\in F$. Suppose for contradiction that $A$ is not $a$-large. A counterexample, a countable set $b$, can be found in the model $M$ by elementarity. But then, the condition $q\in A$ satisfies $w(q)\cap b=a$, contradicting the assumption that $b$ is a counterexample. Thus, the poset $P$ is Y-c.c. \end{proof} The first specific example of a Y-c.c.\ forcing is the specialization forcing for a tree without branches of length $\omega_1$. Let $T$ be such a tree and consider the specialization poset $P(T)$ consisting of all finite functions $p:T\to\omega$ such that for all $s<t$ in $\mathrm{dom}(p)$ the values $p(s), p(t)$ are distinct. The ordering is that of reverse inclusion. \begin{corollary} \label{aronszajncorollary} If $T$ is a tree without branches of length $\omega_1$, then the specialization forcing $P(T)$ satisfies Y-c.c. \end{corollary} \begin{proof} For every condition $p\in P(T)$ let $w(p)=\mathrm{dom}(p)$. It will be enough to show that item (3) of Theorem~\ref{ctheorem} is satisfied. Suppose that $\{p_\alpha:\alpha\in \omega_1\}$ and $\{q_\alpha:\alpha\in\omega_1\}\subset P(T)$ are sets such that $\{w(p_\alpha):\alpha\in \omega_1\}$ and $\{w(q_\alpha):\alpha\in\omega_1\}$ are $\Delta$-systems with the same root $a$. By transfinite recursion on $\gamma\in\omega_1$ find ordinals $\alpha_\gamma$ and $\beta_\gamma$ such that the sets $b_\gamma=(\mathrm{dom}(p_{\alpha_\gamma})\cup \mathrm{dom}(q_{\beta_\gamma}))\setminus a$ for $\gamma\in\omega_1$ are pairwise disjoint and contain no element of $T$ which is below some element of $a$. For each $\gamma\in\omega_1$ consider a condition $r_\gamma\in P(T)$ whose domain is the set of minimal elements of $b_\gamma$ and which assigns the value $0$ to every element of its domain. Since the poset $P(T)$ is c.c.c.\ (see e.g.~\cite{baumgartner:thesis}) there must be countable ordinals $\delta\neq\gamma$ for which $r_\gamma$ and $r_\delta$ are compatible, i.e.\ no element of $b_\gamma$ is compatible with any element of $b_\delta$. It is immediate that the conditions $p_{\alpha_\gamma}$ and $q_{\beta_\delta}\in P(T)$ are compatible as well. \end{proof} The usual gap specialization forcing satisfies Y-c.c.\ as well. To this end, recall basic definitions. An $(\omega_1, \omega_1)$-pregap is a sequence $\langle x_\alpha, y_\alpha: \alpha\in\omega_1\rangle$ of subsets of $\omega$ such that $x_\alpha\cap y_\alpha=0$ and $\beta\in\alpha$ implies that $x_\beta\subset^* x_\alpha$ and $y_\beta\subset^* y_\alpha$, each time up to finitely many exceptional natural numbers. A set $c\subset\omega$ separates the pregap if for every ordinal $\alpha\in\omega_1$, $x_\alpha\subset^* c$ and $y_\alpha\cap c=^*0$ holds. A gap is a pregap that cannot be separated. A pregap is a gap if and only if for every uncountable set $D\subset\omega_1$ there are distinct ordinals $\alpha, \beta\in D$ such that $(x_\alpha\cap y_\beta)\cup (x_\beta\cap y_\alpha)\neq 0$.
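To see the nontrivial direction of this standard equivalence, note that if some uncountable set $D\subset\omega_1$ contains no such pair of ordinals, then the set
$$c=\bigcup_{\alpha\in D}x_\alpha$$
separates the pregap: for every $\alpha\in\omega_1$ there is $\beta\in D$ above $\alpha$, and then $x_\alpha\subset^* x_\beta\subset c$, while $y_\alpha\subset^* y_\beta$ and $y_\beta\cap c=0$.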
A gap is special if there is an uncountable set $D\subset\omega_1$ such that for all distinct ordinals $\alpha, \beta\in D$ the requirement $(x_\alpha\cap y_\beta)\cup (x_\beta\cap y_\alpha)\neq 0$ holds. For a special gap it is impossible to introduce a separating set without collapsing $\omega_1$. There is a natural specializing forcing for gaps \cite[Lemma 3.8]{bekkali:set}. Suppose that $H=\langle x_\alpha, y_\alpha:\alpha\in\omega_1\rangle$ is a gap. Let $P(H)$ be the poset of all finite sets $p\subset\omega_1$ such that for distinct ordinals $\alpha, \beta\in p$ the requirement $(x_\alpha\cap y_\beta)\cup (x_\beta\cap y_\alpha)\neq 0$ holds. It turns out that $P(H)$ is c.c.c. It follows that there is a condition $p\in P(H)$ which forces that the union of the generic filter is uncountable; this is the specializing set. \begin{corollary} \label{gapcorollary} If $H$ is an $(\omega_1, \omega_1)$-gap then the specialization forcing $P(H)$ satisfies Y-c.c. \end{corollary} \begin{proof} For every condition $p\in P(H)$ let $w(p)=p$. It will be enough to show that item (3) of Theorem~\ref{ctheorem} is satisfied. Suppose that $\{p_\alpha:\alpha\in \omega_1\}$ and $\{q_\alpha:\alpha\in\omega_1\}\subset P(H)$ are sets such that $\{w(p_\alpha):\alpha\in \omega_1\}$ and $\{w(q_\alpha):\alpha\in\omega_1\}$ are $\Delta$-systems with the same root $a$. By transfinite recursion on $\gamma\in\omega_1$ find ordinals $\alpha_\gamma$ and $\beta_\gamma$ such that the sets $b_\gamma=(p_{\alpha_\gamma}\cup q_{\beta_\gamma})\setminus a$ for $\gamma\in\omega_1$ are pairwise disjoint and contain no ordinals less than or equal to $\max(a)$. Use a counting argument to find an uncountable set $D\subset\omega_1$ and a number $k\in\omega$ such that for every ordinal $\gamma\in D$, the sets $\{x_\delta\setminus k:\delta\in b_\gamma\}$ and the sets $\{y_\delta\setminus k:\delta\in b_\gamma\}$ are linearly ordered by inclusion. For each $\gamma\in D$, write $\delta_\gamma=\min(b_\gamma)$ and let $c_\gamma=x_{\delta_\gamma}\setminus k$ and $d_\gamma=y_{\delta_\gamma}\setminus k$. The object $\langle c_\gamma, d_\gamma:\gamma\in D\rangle$ is a gap since any set separating it would also separate the original gap. Therefore, there must be ordinals $\gamma\neq\gamma'\in D$ such that $(c_\gamma\cap d_{\gamma'})\cup (c_{\gamma'}\cap d_\gamma)\neq 0$. It is easy to verify that the conditions $p_{\alpha_\gamma}, q_{\beta_{\gamma'}}\in P(H)$ are compatible as required. \end{proof} Todorcevic \cite[Theorem 7.8]{todorcevic:partitions} introduced a partition-type forcing associated with unbounded sequences of functions of length $\omega_1$; this poset satisfies Y-c.c.\ as well. Let $\vec f=\langle f_\alpha:\alpha\in\omega_1\rangle$ be a modulo finite increasing, unbounded sequence of increasing functions in $\gw^\gw$. Let $P(\vec f)$ be the poset of all finite sets $p\subset\omega_1$ such that for all ordinals $\alpha\in\beta$ in the set $p$, there is $n$ such that $f_\alpha(n)>f_\beta(n)$. The ordering of $P(\vec f)$ is that of reverse inclusion. \begin{corollary} \label{unboundedcorollary} If the sequence $\vec f$ is unbounded then the poset $P(\vec f)$ satisfies Y-c.c. \end{corollary} \begin{proof} For every condition $p\in P(\vec f)$ let $w(p)=p$. It will be enough to show that item (3) of Theorem~\ref{ctheorem} is satisfied.
Suppose that $\{p_\alpha:\alpha\in \omega_1\}$ and $\{q_\alpha:\alpha\in\omega_1\}\subset P(\vec f)$ are sets such that $\{w(p_\alpha):\alpha\in \omega_1\}$ and $\{w(q_\alpha):\alpha\in\omega_1\}$ are $\Delta$-systems with the same root $a$. By transfinite recursion on $\gamma\in\omega_1$ find ordinals $\alpha_\gamma$ and $\beta_\gamma$ such that the sets $b_\gamma=(p_{\alpha_\gamma}\cup q_{\beta_\gamma})\setminus a$ for $\gamma\in\omega_1$ are pairwise disjoint and contain no ordinals less than or equal to $\max(a)$. Use a counting argument to find an uncountable set $D\subset\omega_1$ and a number $k\in\omega$ such that for every ordinal $\gamma\in D$ the functions $\{f_\delta:\delta\in b_\gamma\}$ are linearly ordered by domination everywhere above $k$. Let $\delta_\gamma=\min(b_\gamma)$. The collection $\langle f_{\delta_\gamma}:\gamma\in D\rangle$ is unbounded, and therefore there is a number $n>k$ such that for every $m$ there is an ordinal $\gamma(m)\in D$ such that $f_{\delta_{\gamma(m)}}(n)>m$. Let $\gamma'\in D$ be an ordinal larger than all $\gamma(m)$ for $m\in\omega$, let $m=\max\{f_\delta(n): \delta\in b_{\gamma'}\}$ and observe that the conditions $p_{\alpha_{\gamma(m)}}, q_{\beta_{\gamma'}}$ are compatible as desired. \end{proof} Balcar, Paz{\' a}k, and Th{\" u}mmel \cite{bpt:todorcevic}, following Todorcevic \cite{todorcevic:forcing}, defined an ordering $T(Y)$ for every topological space $Y$. Th{\" u}mmel used these orderings to settle an old problem of Horn and Tarski \cite{thuemmel:horntarski}. There are several closely related definitions of $T(Y)$; we will use the following. $T(Y)$ consists of all sets $p\subset Y$ such that $p$ is a union of finitely many converging sequences together with their limits. For $p\in T(Y)$, write $w(p)$ for the set of its accumulation points. The ordering is defined by $q\leq p$ if $p\subset q$ and $w(q)\cap p=w(p)$. The poset $T(Y)$ may or may not be c.c.c., $\sigma$-centered etc., depending on the topological space $Y$. However, if it is c.c.c., then it automatically satisfies Y-c.c. This improves a result of Yorioka \cite{yorioka:random}. \begin{corollary} \label{todorceviccorollary} Let $Y$ be a topological space. If $T(Y)$ is c.c.c., then $T(Y)$ has Y-c.c. \end{corollary} \begin{proof} We will show that the function $p\mapsto w(p)$ has the required properties. It will be enough to show that item (3) of Theorem~\ref{ctheorem} is satisfied. Suppose that $\{p_\alpha:\alpha\in \omega_1\}$ and $\{q_\alpha:\alpha\in\omega_1\}\subset T(Y)$ are sets such that $\{w(p_\alpha):\alpha\in \omega_1\}$ and $\{w(q_\alpha):\alpha\in\omega_1\}$ are $\Delta$-systems with the same root $a$. By transfinite recursion on $\gamma\in\omega_1$ find ordinals $\alpha_\gamma$ and $\beta_\gamma$ such that the sets $b_\gamma=(w(p_{\alpha_\gamma})\cup w(q_{\beta_\gamma}))\setminus a$ for $\gamma\in\omega_1$ are pairwise disjoint. For each $\gamma\in\omega_1$ consider a condition $r_\gamma=p_{\alpha_\gamma}\cup q_{\beta_\gamma}\in T(Y)$. Since the poset $T(Y)$ is c.c.c., there must be countable ordinals $\delta\neq\gamma$ for which $r_\gamma$ and $r_\delta$ are compatible, i.e.\ no element of $w(r_\gamma)$ is an isolated point of $r_\delta$ and vice versa.
It is immediate that the conditions $p_{\alpha_\gamma}$ and $q_{\beta_\delta}\in T(Y)$ are compatible as well. \end{proof} As a final remark in this section, note that the OCA partition posets in general do not have Y-c.c.\ by Theorem~\ref{cliquetheorem}. \begin{question} Suppose that $I$ is a suitably definable ideal on a Polish space $X$ and let $P_I$ be the quotient Boolean algebra of Borel subsets of $X$ modulo $I$. Are the following equivalent? \begin{enumerate} \item $P_I$ is Y-c.c.; \item $P_I$ is $\sigma$-centered. \end{enumerate} \end{question} Note that if a c.c.c.\ poset $P$ is a finite support product of posets, and each factor satisfies the assumptions of Theorem~\ref{ctheorem}, then $P$ also satisfies the assumptions, and is Y-c.c. \begin{question} Suppose that $P, Q$ are Y-c.c.\ posets such that $P\times Q$ is c.c.c. Must $P\times Q$ be Y-c.c.? \end{question} \section{Y-properness} The proper variation of Y-c.c.\ yields a much greater variety of posets. The basic consequences of Y-properness remain mostly the same as for Y-c.c., and the proofs of Section~\ref{asection} immediately adapt to give the following: \begin{theorem} \label{yproperpreservationtheorem} Suppose that $P$ is a Y-proper poset. \begin{enumerate} \item Whenever $\kappa$ is a cardinal and $f\in \kappa^\kappa$ is a function in the $P$-extension which is not in the ground model, there is a ground model countable set $a\subset\kappa$ such that $f\restriction a$ is not in the ground model; \item $P$ preserves $\omega_1$-covers consisting of $G_\delta$ sets on compact Polish spaces; \item if $X$ is a second countable topological space and $H\subset X^\omega$ is an open set, then every $H$-anticlique in the extension is covered by countably many ground model anticliques; \item if $P$ is atomless, then it adds an unbounded real. \end{enumerate} \end{theorem} The notion of Y-properness is particularly suitable for side condition type proper forcings. We discuss two classes of examples. \cite{z:keeping} introduced the notion of ideal based forcings. Yorioka showed that ideal based forcings do not add random reals. We will now show that ideal based forcings are Y-proper. This class of forcings includes posets used for destroying S-spaces, forcing a five-element classification of directed partial orders of size $\aleph_1$, and others. First, the rather involved definitions must be carefully stated. \begin{definition} \label{tripledefinition} An \emph{ideal based triple} is a triple $\langle U, \sqsubseteq, I\rangle$ such that the following are satisfied for $\sqsubseteq$: \begin{enumerate} \item[(1)] $U$ is a collection of finite subsets of $\omega_1$ and $\sqsubseteq$ is an ordering on it refining inclusion; \item[(2)] whenever $a\in U$ and $\beta\in\omega_1$ then $a\cap\beta\in U$ and $a\cap\beta\sqsubseteq a$; \end{enumerate} \noindent and the following are satisfied about $I$: \begin{enumerate} \item[(3)] $I$ is an ideal on $\omega_1$ including all singletons; \item[(4)] every $I$-positive set has a countable $I$-positive subset; \item[(5)] for every $a\in U$ the set $\{\beta\in \omega_1: a\sqsubseteq a\cup \{\beta\}\}$ is not covered by countably many elements of $I$; \item[(6)] for every $a\in U$ the set $\{\beta\in\omega_1: a\cap\beta\sqsubseteq (a\cap \beta)\cup\{\beta\}$ and $a\not\sqsubseteq a\cup\{\beta\}\}$ is in $I$.
\end{enumerate} \end{definition} \begin{definition} Given an ideal based triple $\langle U, \sqsubseteq, I\rangle$, the associated ideal based forcing $P$ is defined as follows. A condition $p\in P$ is a finite set of ordered pairs $\langle M, \alpha\rangle$ such that $M\prec H_\lambda$ is a countable elementary submodel for some fixed $\lambda$ such that $I\in H_\lambda$, $\alpha$ is a countable ordinal which does not belong to $\bigcup (I\cap M)$, and \begin{itemize} \item $w(p)=\{\alpha:\exists M\ \langle M, \alpha\rangle\in p\}\in U$; \item whenever $\langle M, \alpha\rangle$ and $\langle N, \beta\rangle$ are distinct elements of $p$ then either $M\in N$ and $\alpha\in N$, or $N\in M$ and $\beta\in M$. \end{itemize} \noindent The ordering on the poset $P$ is defined by $p\geq q$ if $p\subset q$ and $w(p)\sqsubseteq w(q)$. \end{definition} \begin{theorem} \label{idealbasedtheorem} If $P$ is an ideal based forcing then $P$ is Y-proper. \end{theorem} \begin{proof} Fix the ideal based triple $\langle U, \sqsubseteq, I\rangle$ generating the poset $P$. Let $p\in P$ be a condition. Say that a set $A\subset P$ is $p$-large if Player II has a winning strategy in the following game $G(A,p)$. Player I starts out with a countable set $z\in H_{\lambda}$. Then, Players I and II alternate for $\omega$ many rounds: Player I starts round $k$ with a set $b_k\in I$ and Player II answers with a countable ordinal $\alpha_k\notin b_k$ such that $\alpha_0\in \alpha_1\in\dots$. Player II wins if there is a number $l$ and a condition $q\leq p$ such that $w(q)=w(p)\cup\{\alpha_k:k\in l\}$, the $\in$-least model $M$ on $q\setminus p$ contains $z$ as an element and $w(q)\cap M=w(p)$, and there is $r\in A$ such that $r\geq q$. Note that the game is open for Player II and therefore determined. \begin{claim} \label{aclaim} The collection $\{\sum A:A$ is $p$-large$\}\subset RO(P)$ is centered. \end{claim} \begin{proof} Let $\{A_i: i\in n\}$ be a collection of $p$-large sets; we must find conditions $p_i\in A_i$ for each $i\in n$ with a common lower bound. Let $\langle M_i: i\in n+1\rangle$ be an $\in$-chain of countable elementary submodels of some large $H_\theta$ with $P, p, I, A_i$ for $i\in n$ all elements of $M_0$. By induction on $i\in n$, we will construct conditions $p_i\in A_i, q_i$ such that \begin{itemize} \item $p, p_i$ are both weaker than $q_i$, $q_i\in M_{n-i}$, and the $\in$-least model on $q_i\setminus p$ contains $M_{n-i-1}\cap H_\lambda$ as an element; \item for each $i$, $r_i=\bigcup_{j\in i}q_j$ is a condition in $P$ which is a lower bound of all conditions $q_j$ for $j\in i$. Moreover, $w(r_i)\cap M_{n-i}=w(p)$. \end{itemize} Suppose that conditions $p_j, q_j$ have been constructed for $j\in i$. Write $a_i=w(r_i)$. Work in the model $M_{n-i}$. Let $\sigma_i$ be a winning strategy for Player II in the game $G(A_i, p)$. We will produce an infinite play of the game such that \begin{itemize} \item the initial move of Player I is $M_{n-i-1}\cap H_\lambda$; \item all moves are in the model $M_{n-i}$; \item writing $e_l=\{\alpha_k:k\in l\}$, for every $l$ we have $a_i\sqsubseteq a_i\cup e_l$. \end{itemize} The play is easy to construct by induction on $l\in\omega$. Suppose the first $l$ moves have been constructed, producing a play $t_l$. Write $c=\{\alpha\in\omega_1:$ for some set $b\in I$ the strategy $\sigma_i$ answers the play $t_l^\smallfrown b$ with $\alpha\}$. Thus, $c\in M_{n-i}$.
Note that for every ordinal $\alpha\in c$, $w(p)\cup e_l\sqsubseteq w(p)\cup e_l\cup\{\alpha\}$: since there is a play in which Player II wins, producing a condition $r$ such that $w(r)$ contains $w(p)\cup e_l\cup\{\alpha\}$ as an initial segment, this follows from (2) of Definition~\ref{tripledefinition}. Also, $c$ is an $I$-positive set: if it were an element of $I$, then Player I could play a set containing $c$, forcing the strategy $\sigma_i$ to answer with an ordinal out of $c$, contradicting the definition of $c$. By (4) of Definition~\ref{tripledefinition}, the set $c$ contains an $I$-positive countable subset $d\subset c$, and this set $d$ can be found in the model $M_{n-i}$. By (6), there is an ordinal $\alpha_l\in d$ such that $a_i\cup\{\alpha_k:k\in l\}\sqsubseteq a_i\cup\{\alpha_k:k\in l+1\}$. Find a move $b_l\in M_{n-i}$ provoking the strategy $\sigma_i$ to answer $\alpha_l$ and let $t_{l+1}=t_l^\smallfrown b_l^\smallfrown \alpha_l$. This concludes the induction step and the construction of the play. Now, since $\sigma_i$ is a winning strategy for Player II, there is a natural number $l$ and conditions $p_i\in A_i$ and $q_i$ such that $q_i\leq p_i, p$, $q_i\in M_{n-i}$, and $w(q_i)=w(p)\cup e_l$. Consider the set $r_{i+1}=\bigcup_{j\leq i}q_j$. The set $r_{i+1}$ is a condition in the poset $P$ smaller than $r_i$ by the third item of the construction of the infinite play. Also, $r_{i+1}\leq q_i$ by (2) of Definition~\ref{tripledefinition}. This concludes the induction step of the induction on $i$ and the proof of the claim. \end{proof} Now we are ready to verify Y-properness for the poset $P$. Let $M\prec H_\theta$ be a countable elementary submodel containing $U, \sqsubseteq, I$, let $p\in P\cap M$. Find an ordinal $\alpha\in\omega_1\setminus \bigcup (I\cap M)$ such that $w(p)\sqsubseteq w(p)\cup \{\alpha\}$; such ordinal exists by Definition~\ref{tripledefinition}(5). Let $q=p\cup \{\langle M\cap H_{\aleph_2}, \alpha\rangle\}$. It is not difficult to see that $q\leq p$. \cite{z:keeping} shows that $q$ is a master condition for $M$. We shall show that $q$ is a Y-master condition for the model $M$. Suppose that $r\leq q$ is an arbitrary condition. Observe that $r\cap M\in P$ is a condition weaker than $r$. Let $F\in M$ be a filter on $RO(P)$ extending the centered system $\{\sum A:A$ is $r\cap M$-large$\}$. We claim that for every condition $s\in RO(P)\cap M$ such that $s\geq r$, $s\in F$ holds. This will conclude the proof. Indeed, let $A=\{t\in P:t\leq s\}$. Since $P$ is dense in $RO(P)$, it is clear that $\sum A=s$. To conclude the proof, it will be enough to show that $A$ is $r\cap M$-large. Suppose that it is not. The game $G(A, r\cap M)$ is determined, Player II has no winning strategy, therefore Player I has a winning strategy, and such a strategy $\sigma$ has to exist in the model $M$ as $r\cap M, s\in M$. Note that the strategy $\sigma$ is in $H_{\aleph_2}$, and so it belongs to all models on $r\setminus M$. The definition of the poset $P$ shows that Player II can defeat the strategy by playing the ordinals in $w(r)\setminus M$ in increasing order, since then the condition $r\leq s$ will witness the defeat of Player I at the appropriate finite stage. This is the final contradiction. \end{proof} Another class of Y-proper posets comes from the usual way of forcing the P-ideal dichotomy, PID \cite{todorcevic:pid, todorcevic:bsl}. Let $X$ be a set and $I\subset [X]^{\leq \aleph_0}$ be a P-ideal containing all singletons. 
This means that for every countable set $J\subset I$ there is a set $a\in I$ such that for every $b\in J$, $b\subset^* a$. Suppose that $X$ is not a countable union of sets $Y_n$ for $n\in\omega$ such that $\mathcal{P}(Y_n)\cap I=[Y_n]^{<\aleph_0}$. Then there is a proper poset $P$ adding an uncountable set $Z\subset X$ such that $[Z]^{\aleph_0}\subset I$, which we now proceed to define. For simplicity assume that the underlying set $X$ is a cardinal $\kappa$. Let $K$ be the $\sigma$-ideal on $X$ generated by those sets $Y\subset X$ such that $I\cap\mathcal{P}(Y)= [Y]^{<\aleph_0}$. Thus, the assumptions imply that $X\notin K$. The poset $P$ consists of conditions $p$, which are finite sets of triples $\langle M, x, a\rangle$ such that $M\prec H_{\kappa^+}$ is a countable elementary submodel, $x\in X$ is a point which does not belong to $\bigcup (K\cap M)$, and $a\in I$ is a set which modulo finite contains all sets in $I\cap M$. Moreover, if $\langle M, x,a\rangle$ and $\langle N, y, b\rangle$ are distinct elements of $p$, then either $M, x, a\in N$ or $N, y, b\in M$. The ordering is defined by $q\leq p$ if $p\subseteq q$ and whenever $\langle M, x, a\rangle\in q\setminus p$ and $\langle N, y, b\rangle\in p$ are such that $M\in N$ then $x\in b$. As in the ideal-based case, for a condition $p\in P$ we write $w(p)=\{x\in X:\exists M, a\ \langle M, x, a\rangle\in p\}$. \begin{theorem} \label{pidtheorem} The PID poset $P$ is Y-proper. \end{theorem} \begin{proof} Suppose that $p\in P$ is a condition and $A\subset P$ is a set. Say that $A$ is $p$-large if Player II has a winning strategy in the following game $G(A, p)$. In the game, Player I starts with a set $z\in H_{\kappa^+}$, and then Players I and II alternate for $\omega$ many rounds. At round $k$, Player I plays a set $Y_k\in K$ and Player II answers with a point $x_k\in X\setminus Y_k$. Player II wins if at some round $l\in\omega$ there are conditions $q\in P$ and $r\in A$ such that $q$ is a lower bound of $p,r$, $w(q)=w(p)\cup\{x_k: k\in l\}$, and the $\in$-first model $M$ on $q\setminus p$ contains the set $z$, and $w(q)\cap M=w(p)$. Note that the game is open for Player II and therefore determined. \begin{claim} The set $\{\sum A: A$ is $p$-large$\}\subset RO(P)$ is centered. \end{claim} \begin{proof} Let $\{A_i: i\in n\}$ be a collection of $p$-large sets; we must find conditions $p_i\in A_i$ for each $i\in n$ with a common lower bound. Let $\langle M_i: i\in n+1\rangle$ be an $\in$-chain of countable elementary submodels of some large $H_\theta$ with $M_0$ containing $X, I, P, p, A_i$ for $i\in n$ as elements. By induction on $i\in n$, we will construct conditions $p_i\in A_i, q_i$ such that \begin{itemize} \item $p, p_i$ are both weaker than $q_i$, $q_i\in M_{n-i}$, and the $\in$-least model on $q_i\setminus p$ contains $M_{n-i-1}\cap H_{\kappa^+}$ as an element; \item for each $i$, $r_i=\bigcup_{j\in i}q_j$ is a condition in $P$ which is a lower bound of all conditions $q_j$ for $j\in i$, and $w(r_i)\cap M_{n-i}=w(p)$. \end{itemize} Suppose that conditions $p_j, q_j$ have been constructed for $j\in i$. Write $a_i\subset X$ for the intersection of all sets in the P-ideal $I$ which occur on $r_i\setminus p$. Observe that every set in $I\cap M_{n-i}$ is contained in $a_i$ up to finitely many exceptions. Work in the model $M_{n-i}$. Let $\sigma_i$ be a winning strategy for Player II in the game $G(A_i, p)$.
We will produce an infinite play of the game such that \begin{itemize} \item the initial move of Player I is $M_{n-i-1}\cap H_{\kappa^+}$; \item all moves are in the model $M_{n-i}$; \item all moves of Player II belong to the set $a_i$. \end{itemize} The play is easy to construct by induction on $l\in\omega$. Suppose the first $l$ moves have been constructed, producing a play $t_l$. Write $c=\{x\in X:$ for some set $b\in K$ the strategy $\sigma_i$ answers the play $t_l^\smallfrown b$ with $x\}$. Thus, $c\in M_{n-i}$. Observe that $c\notin K$: if $c\in K$, then Player I could play the set $c$, forcing the strategy $\sigma_i$ to answer with a point out of $c$, contradicting the definition of $c$. By the definition of the ideal $K$, the set $c$ contains an infinite countable set $d\subset c$ in the P-ideal $I$, and this set $d$ can be found in the model $M_{n-i}$. Thus, the intersection $a_i\cap d$ is nonempty, containing some element $x\in M_{n-i}$. Find a move $b_l\in M_{n-i}$ provoking the strategy $\sigma_i$ to answer with $x$ and let $t_{l+1}=t_l^\smallfrown b_l^\smallfrown x$. This concludes the induction step and the construction of the play. Now, since $\sigma_i$ is a winning strategy for Player II, there is a natural number $l$ and conditions $p_i\in A_i$ and $q_i$ such that $q_i\leq p_i, p$, $q_i\in M_{n-i}$, and all points in $X$ appearing on $q_i\setminus p$ belong to the set $a_i$. It is immediate to verify that $r_{i+1}=\bigcup_{j\leq i}q_i$ is a lower bound of $r_i$ and $q_i$. This concludes the induction step of the induction on $i$ and the proof of the claim. \end{proof} Now suppose that $M\prec H_\theta$ is a countable elementary submodel containing $X,I$, and let $p\in P\cap M$ be any condition. We must produce a Y-master condition $q\leq p$ for the model $M$. Let $x\in X$ be some point not in $\bigcup(K\cap M)$, and let $a\in I$ be some set which modulo finite contains all sets in $I\cap M$; these objects exist by initial assumptions on the ideal $I$. Let $q=p\cup\{\langle M\cap H_{\kappa^+}, x, a\rangle\}$. \cite{todorcevic:pid} shows that $q$ is a master condition for $M$. We will show that $q$ is a Y-master condition for the model $M$. Let $r\leq q$ be a condition. Note that $r\cap M\in P$ is a condition weaker than $r$. Let $F\in M$ be any filter on $RO(P)$ extending the centered system $\{\sum A:A\subset P$ is $r\cap M$-large$\}$. We will show that for every condition $s\in RO(P)\cap M$, if $s\geq r$ then $s\in F$. This will conclude the proof. Indeed, suppose that $s\in RO(P)\cap M$ is a condition weaker than $r$. Let $A=\{t\in P:t\leq s\}\in M$ and argue that $A$ is $r\cap M$-large. This will conclude the proof since $P$ is dense in $RO(P)$ and so $\sum A=s$ and $s\in F$. Suppose for contradiction that $A$ is not $r\cap M$-large. Since the game $G(A, r\cap M)$ is determined, there must be a winning strategy $\sigma\in M$ for Player I in it. Now, let $\langle M_k, x_k, a_k\rangle:k\in l$ enumerate $r\setminus M$ in $\in$-increasing order and consider the counterplay of Player II against the strategy $\sigma$ in which Player II's moves are $x_k$ for $k\in l$ in this order. Note that the strategy $\sigma$ belongs to all models $M_k$ for $k\in l$ and so this is a legal counterplay. At the end of it, Player II is in a winning position, as witnessed by the condition $r\in A$. This contradicts the assumption that $\sigma$ was a winning strategy for Player I. \end{proof} It is natural to ask which traditional fusion-type forcings are Y-proper. 
We do not have a comprehensive answer to this question. Instead, we prove a rather limited characterization theorem which nevertheless illustrates the complexity of the question well. For an ideal $I$ on $\omega$ let $P(I)$ be the poset of all trees $T\subset\omegatree$ which have a trunk $t$ and for every node $s\in T$ extending the trunk, the set $\{n\in\omega:s^\smallfrown n\in T\}$ does not belong to $I$. The ordering is that of inclusion. Rather standard fusion arguments show that posets of this form are all proper, and they preserve $\omega_1$-covers on compact Polish spaces consisting of $G_\delta$ sets. \begin{theorem} \label{firsttheorem} Let $I$ be an ideal on $\omega$. If $I$ is the intersection of $F_\sigma$-ideals, then $P(I)$ is Y-proper. \end{theorem} In particular, Laver forcing (the case where $I$ is the ideal of finite subsets of $\omega$) is Y-proper. The implication in Theorem~\ref{firsttheorem} cannot be reversed, already for $\mathbf{\Sigma}^0_4$ ideals. The ideal $I$ on $\omega\times\omega$ generated by vertical sections and sets with all vertical sections finite is not the intersection of $F_\sigma$-ideals, but the poset $P(I)$ is Y-proper. The simplest example in which we do not know how to check the status of Y-properness is $I=$the ideal of nowhere dense subsets of $2^{<\gw}$. \begin{theorem} \label{secondtheorem} Let $I$ be an analytic P-ideal on $\omega$. The following are equivalent: \begin{enumerate} \item $I$ is the intersection of $F_\sigma$-ideals; \item $P(I)$ is Y-proper; \item for every compact Polish space $X$ and every open set $H\subset X^\omega$, every $H$-anticlique in the $P(I)$-extension is covered by countably many $H$-anticliques in the ground model. \end{enumerate} \end{theorem} An analytic P-ideal is an intersection of $F_\sigma$-ideals if and only if it is the intersection of countably many $F_\sigma$-ideals. A good example of an analytic P-ideal which can be written as such an intersection and yet is not $F_\sigma$ is $I=\{a\subset\omega: \forall \varepsilon>0\ \sum_{n\in a} n^{-\varepsilon}<\infty\}$. An example of an analytic P-ideal which is not an intersection of $F_\sigma$-ideals is the ideal of sets of asymptotic density zero; in fact, the only $F_\sigma$ ideal containing the density ideal is trivial, containing $\omega$ as an element. \begin{proof}[Proof of Theorem~\ref{firsttheorem}] We will need a bit of notation. Write $P=P(I)$. Let $T\in P$ be a tree with trunk $t$. For a function $f:\omegatree\to I$ write $T_f=\{s\in T: \forall i\in\mathrm{dom}(s\setminus t)\ s(i)\notin f(s\restriction i)\}$, observe that the trees $\{T_f: f\in I^{\omegatree}\}$ form a centered system in $P$, and pick an ultrafilter $F(T)\subset RO(P)$ extending this centered system. \begin{claim} \label{directclaim} For every element $p\in F(T)$ there is a tree $S\subset T$ with trunk $t$ such that $S\leq p$. \end{claim} \begin{proof} Suppose this fails for some $p$. Let $U$ be the set of all nodes $s\in T$ such that $t\subseteq s$ and there is no tree $S\subset T$ with trunk $s$ such that $S\leq p$. Observe that $t\in U$, and if $s\in U$ then the set $\{i\in\omega: s^\smallfrown i\in T$ and $s^\smallfrown i\notin U\}$ is in $I$. If this failed, then there would be an $I$-positive set $a\subset\omega$ such that for each $i\in a$, $s^\smallfrown i\in T$ and there is a tree $S_i\subset T\restriction s^\smallfrown i$ with trunk $s^\smallfrown i$ which is below $p$.
Now, $S=\bigcup_{i\in a}S_i=\sum_{i\in a}S_i\leq p$, contradicting the assumption that $s\in U$. Now, let $V=\{s\in T:$ if $\mathrm{dom}(t)\leq i\leq\mathrm{dom}(s)$ then $s\restriction i\in U\}$ and use the previous paragraph to see that $V=T_f$ for some $f:\omegatree\to I$. Since $p\in F(T)$, $p$ is compatible with $V$ and there is some tree $W\subset V$ such that $W\leq p$. Let $s$ be the trunk of $W$ and obtain a contradiction with the fact that $s\in U$. \end{proof} The (ultra)filters on $RO(P)$ critical for Y-properness of $P$ will be obtained in the following way. If $T\in P$ is a tree with trunk $t$, write $a=\{i\in\omega: t^\smallfrown i\in T\}\notin I$, use the assumption on the ideal $I$ to find an $F_\sigma$-ideal $I(T)$ such that $I\subset I(T)$ and $a\notin I(T)$, and an ultrafilter $U(T)$ on $\omega$ such that $a\in U(T)$ and $I(T)\cap U(T)=0$. Finally, let $G(T)=\{p\in RO(P):\{i\in a:p\in F(T\restriction t^\smallfrown i)\}\in U(T)\}$. It is not difficult to see that $G(T)$ is an ultrafilter. Now we are ready for the fusion argument. Let $M\prec H_\theta$ be a countable elementary submodel containing $U, G, F$. \begin{claim} \label{ceclaim} If $T\in M\cap P$ is a tree with trunk $t$, then there is a tree $S\subset T$ with the same trunk such that \begin{enumerate} \item for all $i\in\omega$, if $t^\smallfrown i\in S$ then $S\restriction t^\smallfrown i\in M$; \item for every $p\in G(T)\cap M$, for all but finitely many $i$, if $t^\smallfrown i\in S$ then $S\restriction t^\smallfrown i\leq p$. \end{enumerate} \end{claim} \begin{proof} Let $\{p_i: i\in\omega\}$ be an enumeration of $G(T)\cap M$. Let $\mu$ be a lower semicontinuous submeasure on $\omega$ such that $I(T)=\{a: \mu(a)<\infty\}$. By induction on $j\in\omega$ find finite pairwise disjoint sets $a_j\subset\omega$ and trees $S_k\subset T$ for each $k\in a_j$ so that \begin{itemize} \item $\mu(a_j)\geq j$; \item $S_k\in M$ is a tree with trunk $t^\smallfrown k$, below $\bigwedge_{i<j}p_i$. \end{itemize} \noindent This is easy to do using Claim~\ref{directclaim} and elementarity of the model $M$ repeatedly. In the end, let $S=\bigcup_j\bigcup_{k\in a_j}S_k$. \end{proof} Now, an obvious fusion argument using Claim~\ref{ceclaim} repeatedly gives the following. For every tree $T\in M\cap P$ there is a tree $S\subset T$ in $P$ with the same trunk such that for every node $s\in S$ there is an ultrafilter $F(s)\in M$ on $RO(P)$ such that for every $p\in F(s)\cap M$, for all but finitely many $i\in\omega$, either $s^\smallfrown i\notin S$ or $S\restriction s^\smallfrown i\leq p$. We will verify that the condition $S\leq T$ is Y-master for the model $M$. Indeed, let $U\leq S$ be any tree, and let $s$ be its trunk. We claim that for every $p\in RO(P)\cap M$, if $p\geq U$ then $p\in F(s)$. Indeed, if this failed then $1-p\in F(s)$; by the properties of the tree $S$ one can erase finitely many immediate successors of $s$ in the tree $U$ to get some $V\subset U$ such that $V\leq 1-p$, and then $V$ would be a common lower bound of $p$ and $1-p$, a contradiction. \end{proof} \begin{proof}[Proof of Theorem~\ref{secondtheorem}] It is enough to show that (3) implies (1). Suppose that $I$ is an analytic P-ideal which is not an intersection of $F_\sigma$-ideals; we must show that there is a condition in $P=P(I)$ forcing an anticlique which is not covered by countably many ground model anticliques.
Let $a\subset\omega$ be a set which belongs to every $F_\sigma$-ideal containing $I$, yet $a\notin I$. To simplify the notation, assume that $a=\omega$; otherwise, work under the condition $a^{<\omega}\in P$. Let $M\prec H_\theta$ be a countable elementary submodel containing $I$. Let $Y$ be the compact Polish space of all ultrafilters on $RO(P)\cap M$. Let $X=K(Y)$ and consider the open set $H\subset X^\omega$ consisting of all sequences $\langle K_n:n\in\omega\rangle\in X^\omega$ such that $\bigcap_nK_n=0$. By compactness, the set $H$ is open. For every condition $T\in P$, let $K_T=\{F\in Y: \{p\in RO(P)\cap M: p\geq T\}\subset F\}$. This is a compact subset of $Y$, therefore an element of $X=K(Y)$. Let $\dot A=\{K_T: T$ is a tree in the $P(I)$-generic filter$\}$. Clearly, this is a $P$-name for an $H$-anticlique. We will show that $\dot A$ is forced not to be covered by countably many $H$-anticliques in the ground model. Suppose that $\{B_i: i\in\omega\}$ is a countable collection of $H$-anticliques and $T\in P$ is a condition. We will find a condition $S\leq T$ such that $K_S\notin \bigcup_iB_i$; this will complete the proof. By compactness, for every $i\in\omega$ there is an ultrafilter $F_i$ on $RO(P)\cap M$ such that $F_i\in\bigcap B_i$. We will find the condition $S\leq T$ so that for every $i\in\omega$ there is $p_i\in F_i$ such that $1-p_i\geq S$. Then, for every $i\in\omega$, $F_i\notin K_S$ and therefore $K_S\notin B_i$ as required. The construction of the condition $S$ starts with a small claim: \begin{claim} \label{littleclaim} For every tree $U\in P$ with trunk $t$ and every $j\in\omega$ there is a tree $V\leq U$ with the same trunk such that \begin{enumerate} \item there is a set $c\subset\omega$ such that $V=\{s\in U: s$ is compatible with $t^\smallfrown n$ for some $n\in c\}$; \item for every $i\in j$ there is a condition $q_i\in F_i$ such that $V\leq 1-q_i$; \item for every $i\in\omega$ there is a finite set $u$ of immediate successors of $t$ in the tree $V$ and a condition $q_i\in F_i$ such that the tree $V$ with the nodes in $u$ erased is below $1-q_i$. \end{enumerate} \end{claim} \begin{proof} Finally, we will use the assumptions on the ideal $I$. By a result of Solecki~\cite{solecki:ideals}, since $I$ is an analytic P-ideal it is possible to find a lower semicontinuous submeasure $\mu$ on $\omega$ such that $I=\{b\subset\omega:\limsup_n\mu(b\setminus n)=0\}$. Observe that for every $k\in\omega$ there is a partition of $\omega$ into finitely many singletons and finitely many pieces of $\mu$-mass $<2^{-k}$. If this failed, then the singletons together with sets of $\mu$-mass $<2^{-k}$ generate an $F_\sigma$-ideal which contains $I$ as a subset and does not contain $\omega$ as an element, contradicting our assumptions on $I$. By the elementarity of the model $M$, such partitions exist in the model $M$ as well. Let $a=\{n\in\omega: t^\smallfrown n\in U\}$. Let $\varepsilon=\limsup_n\mu(a\setminus n)>0$. Let $\langle k_i: i\in\omega\rangle$ be a sequence of numbers such that $\sum_i{2^{-k_i}}<\varepsilon/2$.
The previous paragraph shows that there are sets $b_i\subset\omega$ in the model $M$ such that each $b_i$ is either a singleton or a set of $\mu$-mass $<2^{-k_i}$, and such that either the Boolean value $q_i=\|$the generic element of $\gw^\gw$ does not start with $t\|$ is in $F_i$, or the Boolean value $q_i=\|$the generic element of $\gw^\gw$ starts with $t^\smallfrown n$ for some $n\in b_i\|$ is in the ultrafilter $F_i$. It is now easy to find a set $c\subset a$ such that $\limsup_n\mu(c\setminus n)>\varepsilon/2$, for all $i\in j$, $b_i\cap c=0$, and for every $i\in\omega$ the set $b_i\cap c$ is finite. The tree $V=\{s\in U:s$ is compatible with $t^\smallfrown n$ for some $n\in c\}$ clearly works as desired. \end{proof} Assume for simplicity that the trunk of the tree $T$ is empty. A standard fusion argument using Claim~\ref{littleclaim} repeatedly yields a tree $S\leq T$ with empty trunk such that for every $i\in\omega$, there is a nonempty finite tree $u_i\subset S$ such that for every node $t\in u_i$ there is an element $q_i^t\in F_i$ such that the tree $S_t$ obtained from $S$ by restricting to $t$ and erasing all immediate successors of $t$ which are in $u_i$, is stronger than $1-q_i^t$. Let $p_i=\prod_{t\in u_i}q_i^t$ and observe that $S, p_i$ work as desired. \end{proof} \begin{question} Does the conjunction of Y-properness and c.c.c.\ imply Y-c.c.? \end{question} \begin{question} Suppose that $I$ is a suitably definable $\sigma$-ideal on a Polish space $X$. Suppose that the quotient poset $P_I$ of Borel $I$-positive sets ordered by inclusion is proper. Are the following equivalent? \begin{enumerate} \item $P_I$ is Y-proper; \item for every Polish compact space $Y$ and every open set $H\subset Y^\omega$, every $H$-anticlique in the $P_I$ extension is covered by countably many $H$-anticliques in the ground model. \end{enumerate} \end{question} \section{General treatment} \label{generalsection} Y-c.c.\ and Y-properness are preserved under a suitable notion of iteration, and there are suitable forcing axioms associated with them. The treatment is complicated enough to warrant a more general approach of which Y-c.c.\ and Y-properness are the most important instances. \begin{definition} \label{regularitydefinition} A property $\Phi(F, B)$ of subsets $F$ of complete Boolean algebras $B$ is a \emph{regularity property} if the following is provable in ZFC: \begin{enumerate} \item (nontriviality) $\Phi(\{1\}, B)$ for every complete Boolean algebra $B$; \item (closure up) $\Phi(F, B)\to\Phi(F', B)$ whenever $F'=\{p\in B:\exists q\in F\ q\leq p\}$; \item (restriction) whenever $p\in B$ then $\Phi(F, B)$ implies $\Phi(F\cap (B\restriction p), B\restriction p)$, and $\Phi(F, B\restriction p)$ implies $\Phi(F, B)$. Here, $B\restriction p$ is the Boolean algebra $\{q\in B:q\leq p\}$ with the usual operations; \item (complete subalgebras) if $B_0$ is a complete subalgebra of $B_1$: for every $F\subset B_1$ $\Phi(F, B_1)\to \Phi(F\cap B_0, B_0)$ holds, and for every $F\subset B_0$ $\Phi(F, B_0)\to\Phi(F, B_1)$ holds; \item (iteration) if $\dot B_1$ is a $B_0$-name for a complete Boolean algebra, $F_0\subset B_0$, $\dot F_1$ a name for a subset of $\dot B_1$, $\Phi(F_0, B_0)$ and $1\Vdash\Phi(\dot F_1, \dot B_1)$, then $\Phi(F_0*\dot F_1, B_0*\dot B_1)$ where $$F_0*\dot F_1=\{\langle p_0, \dot p_1\rangle\in B_0*\dot B_1: p_0\land \|\dot p_1\in\dot F_1\|\in F_0\}.$$ \end{enumerate} \noindent If the Boolean algebra $B$ is clear from the context, we write $\Phi(F)$ for $\Phi(F, B)$.
\end{definition} In the last item, we use the Boolean presentation of the two-step iteration. Let $B_0$ be a complete Boolean algebra and $\dot B_1$ a $B_0$-name for a complete Boolean algebra. Consider the poset of all pairs $\langle p_0, \dot p_1\rangle$ such that $p_0\in B_0$, $\dot p_1$ is a $B_0$-name for an element of $\dot B_1$, $p_0\neq 0$ and $p_0\Vdash\dot p_1\neq 0$. The ordering is defined by $\langle q_0, q_1\rangle\leq \langle p_0, \dot p_1\rangle$ if $q_0\leq p_0$ and $q_0\Vdash\dot q_1\leq\dot p_1$. It is not difficult to check that the separative quotient of this partial ordering (together with a zero element) is complete (admits arbitrary suprema and infima) and therefore forms a complete Boolean algebra which we will denote by $B_0*\dot B_1$. The central example of a regularity property studied in this paper is $\Phi(F)=$``$F$ is a centered set.'' Other possibilities include $\Phi(F)=$``any two elements of $F$ are compatible'' or $\Phi(F)=$``for every collection $\{p_n:n\in\omega\}\subset F$ the Boolean value $\liminf_n p_n$ is nonzero.'' There are many other sensible possibilities. Items (1--3) of the definition imply that the strongest conceivable regularity property is $\Phi(F)=$``$F$ has a lower bound.'' The class of regularity properties is closed under countable conjunctions. The disjunctions are more slippery but also more rewarding. To treat them, we introduce an additional notion. \begin{definition} Let $G$ be a set with a binary operation $*$. A property $\Phi(g, F, B)$ of subsets $F$ of complete Boolean algebras $B$ and elements $g\in G$ is a $G$-\emph{regularity property} if for each $g\in G$, $\Phi(g, \cdot, \cdot)$ satisfies the demands (1--4) of Definition~\ref{regularitydefinition} and (5) is replaced with \begin{enumerate} \item[(5)] if $\dot B_1$ is a $B_0$-name for a complete Boolean algebra, $F_0\subset B_0$, $\dot F_1$ a name for a subset of $B_1$, $\Phi(g_0, F_0, B_0)$ and $1\Vdash\Phi(\check g_1, \dot F_1, \dot B_1)$, then $\Phi(g_0*g_1, F_0*\dot F_1, B_0*\dot B_1)$. \end{enumerate} \end{definition} \noindent A typical case appears when $G$ is a countable semigroup. If $G$ is clear from context, we omit it from the notation. It is clear that every regularity property is a $G$-regularity property for $G=\{1\}$ with the multiplication operation. Good nontrivial examples include $G=$the rationals in the interval $(0,1]$ with multiplication, and $\Phi(\varepsilon, F, B)=$``there is a finitely additive probability measure $\mu$ on $B$ such that $\mu(p)\geq\varepsilon$ for all $p\in F$." Another example studied by Steprans \cite[Definition~3]{steprans:strong-Q} is obtained when $G\subset\gw^\gw$ is a set closed under composition, with the composition operation, and $\Phi(g, F, B)=$``for every $n\in\omega$ and every collection of $g(n)$ many elements of $F$, there are $n$ many elements in the collection with a common lower bound." \begin{definition} Suppose that $\langle G, *\rangle$ is a set with a binary operation. Suppose that $\Phi$ is a $G$-regularity property of subsets of complete Boolean algebras. \begin{enumerate} \item A poset $P$ is $\Phi$\emph{-c.c.} if for every condition $q\in P$ and every countable elementary submodel $M\prec H_\theta$ containing $P, G$ there is an element $g\in G\cap M$ and a set $F\in M$ such that $\Phi(g, F)$ holds and $F$ contains all elements of $RO(P)\cap M$ weaker than $q$. 
\item $P$ is $\Phi$\emph{-proper} if for every countable elementary submodel $M\prec H_\theta$ containing $P, G$ and every condition $p\in P\cap M$ there is a $\Phi$-\emph{master condition} $q\leq p$: this is a condition which is master for $M$ and for every $r\leq q$, there is an element $g\in G\cap M$ and a set $F\in M$ such that $\Phi(g, F)$ holds and $F$ contains all elements of $RO(P)\cap M$ weaker than $r$. \end{enumerate} \end{definition} Clearly, Y-c.c.\ and Y-properness are special cases of $\Phi$-c.c.\ and $\Phi$-properness where $\Phi(F)=$``$F$ is a centered set.'' Certain natural posets may satisfy other variations of $\Phi$-c.c.\ For example, the random poset satisfies $\Phi$-c.c. for $\Phi(F)=$``any two elements of $F$ are compatible'' or $\Phi(F)=$``for every collection $\{p_n:n\in\omega\}\subset F$ the Boolean value $\liminf_n p_n$ is nonzero.'' The notion of strong properness (as defined in~\cite{mitchell:addclub}) corresponds to $\Phi$-properness for $\Phi(F)=$``$F$ has a lower bound.'' Every strongly proper poset is thus $\Phi$-proper for every choice of the regularity property~$\Phi$. There are many attractive arguments drawing abstract consequences from $\Phi$-c.c.\ and $\Phi$-properness for various regularity properties $\Phi$. We will limit ourselves to several striking consequences of this kind. \begin{theorem} Suppose that $\Phi$ is a regularity property such that $\Phi(F)$ implies that $F$ contains no uncountable antichain. Then $\Phi$-c.c.\ implies c.c.c.\ \end{theorem} \begin{proof} For contradiction, assume that $P$ is a $\Phi$-c.c.\ poset with an antichain $A$ of size $\aleph_1$. Let $M\prec H_\theta$ be a countable elementary submodel containing $P, A$, and let $q\in A\setminus M$ be any element. Let $F\in M$ be a subset of $RO(P)$ such that $\Phi(F)$ holds and $F$ contains all elements of $RO(P)\cap M$ weaker than $q$. Let $I$ be the $\sigma$-ideal on $A$ $\sigma$-generated by the sets $B\subset A$ such that $\sum B\notin F$. \begin{claim} $I$ is a nontrivial c.c.c.\ $\sigma$-ideal containing all singletons. \end{claim} \begin{proof} For the nontriviality, use the elementarity of the model $M$. If $B_n\subset A$ for $n\in\omega$ are generating elements of the $\sigma$-ideal $I$ in the model $M$, then $q\notin B_n$ for each $n$ by the definitions, and so $q\notin \bigcup_nB_n$ and $\bigcup_nB_n\neq A$. Thus, no countable union of generating sets in the model $M$ of the $\sigma$-ideal $I$ covers all of $A$, and by the elementarity of the model $M$ this is true even for generating sets in $V$. If the $\sigma$-ideal $I$ failed to be c.c.c.\ then there would be an uncountable collection $C$ of pairwise disjoint $I$-positive sets. As the sets in $C$ are pairwise disjoint and $A$ is an antichain, the Boolean sums $\sum B$ for $B\in C$ are pairwise incompatible. They all must be elements of $F$ by the definition of $I$. However, this contradicts the assumption on the regularity property $\Phi$. \end{proof} However, by a classical theorem of Ulam \cite{ulam:matrix}, in ZFC there are no nontrivial c.c.c.\ $\sigma$-ideals on sets of size $\aleph_1$ which contain all singletons. This is a contradiction. \end{proof} \begin{theorem} \label{litheorem} Let $\Phi$ be a regularity property such that $\Phi(F)$ implies that $F$ contains no infinite antichain. For every $\Phi$-proper poset $P$, if $H\subset [X]^2$ is an open graph on a second countable space $X$, then every $H$-anticlique in the $P$-extension is covered by countably many anticliques in the ground model.
\end{theorem} \noindent Note that the statement ``$F$ contains no infinite antichains'' in itself is not a regularity property as it does not satisfy the iteration clause of regularity. \begin{proof} Suppose that $P$ is a $\Phi$-proper poset and $H\subset [X]^2$ is an open graph on a second countable space. Let $\dot A$ be a $P$-name for an anticlique and let $F\subset RO(P)$ be a set satisfying $\Phi$. \begin{claim} The set $B(\dot A, F)=\{x\in X:$ for every open neighborhood $O\subset X$ of $x$, the Boolean value $\|\check O\cap\dot A\neq 0\|$ is in $F\}$ is a union of countably many $H$-anticliques. \end{claim} \begin{proof} Remove all basic open neighborhoods $O$ from the set $B(\dot A, F)$ such that $O\cap B(\dot A, F)$ is a union of countably many $H$-anticliques; it will be enough to show that the remainder $B$ is empty. Suppose not; then for every basic open set $O\subset X$, the set $B\cap O$, if nonempty, is not an $H$-anticlique. This allows us to build by induction on $n\in\omega$ basic open sets $O_n, U_n\subset X$ such that \begin{itemize} \item $O_n\times U_n\subset H$; \item $O_{n+1}, U_{n+1}\subset U_n$; \item the sets $B\cap O_n$ and $B\cap U_n$ are both nonempty. \end{itemize} For each $n\in\omega$, let $p_n\in F$ be the Boolean value of $\|\check O_n\cap\dot A\neq 0\|$. By the assumption on $\Phi$, there must be numbers $n\neq m$ such that the conditions $p_n, p_m$ are compatible. Denote their lower bound by $q$. Then $q\Vdash\dot A\cap\check O_n\neq 0$ and $\dot A\cap\check O_m\neq 0$, which together with the fact that $O_n\times O_m\subset H$ contradicts the assumption that $\dot A$ is forced to be an $H$-anticlique. \end{proof} Now, let $p\in P$ be a condition, let $M\prec H_\theta$ be a countable elementary submodel containing $P, p,\dot A, H, X$. Let $q\leq p$ be a $\Phi$-master condition for the model $M$. We claim that $q$ forces $\dot A$ to be covered by the $H$-anticliques in the model $M$; this will complete the proof. Suppose that this fails and let $r\leq q$ and $x\in X$ be a point which is not in any anticlique in the model $M$ and yet $r\Vdash\check x\in\dot A$. Let $F\in M$ be a set satisfying $\Phi$ and containing all conditions $s\in RO(P)\cap M$ such that $s\geq r$. Then, for every basic open set $O\subset X$ containing $x$ it is the case that $\|\check O\cap\dot A\neq 0\|\geq r$, and since the Boolean value is an element of the model $M$, it is the case that $\|\check O\cap\dot A\neq 0\|\in F$ and so $x\in B(F, \dot A)$. The latter set is a union of $H$-anticliques in the model $M$ as per the claim. This is a contradiction. \end{proof} Steprans \cite{steprans:cellularity}and Todorcevic \cite[Theorem 7]{todorcevic:cellularity} produced for every number $k\geq 2$ a poset $P_k$ which is $\sigma$-$k$-linked and yet adds an anticlique for an open hypergraph in dimension $k+1$ which is not covered by countably many anticliques in the ground model. Thus, the various finite dimensions of open hypergraphs do have significance. Once finitely additive measures enter the picture, all finite dimensions are well-behaved: \begin{theorem} Suppose that $\Phi$ is a regularity property such that $\Phi(F)$ implies that there is a finitely additive probability measure $\mu$ on $B$ and a real number $\varepsilon>0$ such that $\forall p\in F\ \mu(p)>\varepsilon$. Then, for every $n\in\omega$, every second countable space $X$, and every open set $H\subset X^n$, every $H$-anticlique in $\Phi$-proper extension is covered by countably many ground model $H$-anticliques. 
\end{theorem} \begin{proof} Let $P$ be a $\Phi$-proper poset and $\dot A$ a $P$-name for an $H$-anticlique. Let $F\subset RO(P)$ be a set with $\Phi(F)$. Let $B(\dot A, F)=\{x\in X:$ for every open neighborhood $O\subset X$ with $x\in O$, $\|\check O\cap\dot A\neq 0\|\in F\}$. \begin{claim} $B(\dot A, F)\subset X$ is a union of countably many $H$-anticliques. \end{claim} \begin{proof} First, remove from the set $B(\dot A, F)$ all open neighborhoods in which the set is the union of countably many anticliques. We claim that the remainder $B\subset X$ is empty; this will complete the proof of the claim. Suppose for contradiction that $B\neq 0$. Note that for every open neighborhood $O\subset X$, if $O\cap B\neq 0$ then $O\cap B$ is not an $H$-anticlique. Let $\mu$ be a finitely additive probability measure on $RO(P)$ such that for some fixed $\varepsilon>0$, $\mu(p)\geq\varepsilon$ for every condition $p\in F$. For every open set $O\subset X$, write $q(O)=\|\check O\cap\dot A\neq 0\|$. By induction on $m\in\omega$ build basic open sets $O_m^i:i\in n$ and numbers $0\neq i_m\in n$ so that \begin{itemize} \item for every $i\in n$ it is the case that $B\cap O_m^i\neq 0$; \item $\prod_iO_m^i\subset H$; \item $O_{m+1}^i\subset O_m^{i_m}$; \item writing $q_m=q(O_m^0)-q(O_m^{i_m})$, it is the case that $\mu(q_m)\geq\varepsilon/n$. \end{itemize} \noindent This is easy to do: at stage $m$, the set $B\cap O_m^{i_m}$ is nonempty and therefore not an anticlique, which makes it possible to find sets $O_{m+1}^i$ for $i\in n$ satisfying the first three items. Now, since $\mu(q(O_m^0))>\varepsilon$, if for every number $0\neq i\in n$ it were the case that $\mu(q(O_m^0)-q(O_m^i))<\varepsilon/n$, then the conjunction $\bigwedge_iq(O_m^i)$ would have positive $\mu$-mass by the finite additivity of $\mu$. This conjunction would force $\dot A$ to contain an $H$-edge, contradicting the initial assumptions. In the end, the conditions $q_m$ for $m\neq 0$ form an antichain and each of them has $\mu$-mass at least $\varepsilon/n$, a contradiction with the finite additivity of the probability measure $\mu$. \end{proof} The rest of the argument follows word by word the conclusion of the proof of Theorem~\ref{litheorem}. \end{proof} \begin{theorem} Suppose that $\Phi$ is a regularity property such that $\Phi(F)$ implies that $F$ contains no infinite antichains. Suppose that $P$ is a $\Phi$-proper poset and $\kappa$ is a cardinal. For every function $f\in\kappa^\kappa$ in the $P$-extension, if $f\restriction a$ is in the ground model for every countable ground model set $a\subset\kappa$, then $f$ is in the ground model. \end{theorem} \begin{proof} We will start with an abstract claim. Let $\kappa$ be an uncountable cardinal. A \emph{coherent system} on $\kappa$ is a collection $S$ of partial countable functions on $\kappa$, closed under subsets, such that for every countable set $a\subset\kappa$ there is $g\in S$ with $\mathrm{dom}(g)=a$, and there is no infinite collection of pairwise incompatible functions in $S$. \begin{claim} For every coherent system $S$ on $\kappa$, the set $H=\{f\in\kappa^\kappa$: for every countable set $a\subset \kappa$, $f\restriction a\in S\}$ is nonempty and finite. \end{claim} \begin{proof} To see that the set $H$ is nonempty, consider the sets $H_a=\{f\in\kappa^\kappa: f\restriction a\in S\}$ for every countable set $a\subset\kappa$. The intersection of any countable collection of such sets is nonempty by the assumptions on $S$.
Let $U$ be an ultrafilter on $\kappa^\kappa$ containing all sets $H_a$ for $a\subset\kappa$ countable. For each such set $a\subset\kappa$, there are only finitely many functions in $S$ with domain $a$, and so one of them, denoted by $g_a$, satisfies $\{f\in\kappa^\kappa: g_a\subset f\}\in U$. It is immediate that $\bigcup_ag_a\in H$. To prove the finiteness of $H$, suppose for contradiction that $f_n$ for $n\in\omega$ are pairwise distinct functions in $H$. Then, there is a countable set $a\subset\kappa$ such that the functions $f_n\restriction a$ for $n\in\omega$ are pairwise distinct. They all belong to the set $S$, contradicting the coherence assumption on $S$. \end{proof} Now suppose that $P$ is a $\Phi$-proper poset and $\dot f$ is a $P$-name for a function from $\kappa$ to $\kappa$. Let $p\in P$ be a condition forcing $\dot f\restriction a\in V$ for every countable set $a\subset\kappa$; we must produce a function $e\in\kappa^\kappa$ and a stronger condition forcing $\check e=\dot f$. Let $M\prec H_\theta$ be a countable elementary submodel containing $P, p, \dot f$, and let $q\leq p$ be a $\Phi$-master condition for $M$. Find a condition $r\leq q$ deciding all values of $\dot f\restriction M$, yielding a function $h: M\to\kappa$. We will find a function $e\in M\cap \kappa^\kappa$ such that $h\subset e$. Then, since $r$ is a master condition for $M$ and $r\Vdash\check e\restriction M=\dot f\restriction M(=\check h)$, it must be the case that $r\Vdash \check e=\dot f$. This will complete the proof. Towards the construction of the function $e$, let $F\subset RO(P)$ be an upwards closed set in the model $M$ such that $\Phi(F)$ holds and $F$ contains all elements of $RO(P)\cap M$ weaker than $r$. Let $S=\{g: g$ is a partial function from $\kappa$ to $\kappa$ with countable domain and the Boolean value $\|\check g\subset\dot f\|$ belongs to $F\}$. We claim that $S\in M$ is a coherent system. Closure of $S$ under subsets is clear from the definitions. $S$ contains no infinite set of pairwise incompatible functions since $F$ contains no infinite antichain. For every countable set $a\in M$ the function $h\restriction a$ is in $M\cap S$ since $r\Vdash \dot f\restriction a=h\restriction a\in V$ and $r$ is a master condition for the model $M$. By the elementarity of the model $M$, the coherence of the system $S$ follows. Now, let $H\in M$ be the finite set of functions from $\kappa$ to $\kappa$ obtained by the application of the claim to the coherent system $S$. We claim that the function $h$ is a subset of one element of $H$. Indeed, if this were not the case, then there would be a finite set $c\subset\kappa\cap M$ such that $h\restriction c$ is not a subset of any function in the finite set $H$. Let $T=\{g\in S: g$ is a function compatible with $h\restriction c\}$. Just as in the previous paragraph, $T\in M$ is a coherent system, and there is a function $e\in\kappa^\kappa$ such that every restriction of $e$ to a countable set is in $T$. This function must appear on the finite list $H$ while $e\restriction c=h\restriction c$. Contradiction! \end{proof} \section{Iteration theorems} As with most forcing properties, the point of the properties introduced in the previous section is that they are preserved under suitable iterations and their associated forcing axioms can be forced with a poset in the same category.
\begin{definition} Suppose that $\Phi$ is a $G$-regularity property of subsets of complete Boolean algebras. \begin{enumerate} \item if $\kappa$ is a cardinal then $\Phi$-$\mathrm{MA}_\kappa$ is the statement that for every c.c.c.\ $\Phi$-c.c.\ poset $P$ and every list of open dense subsets of $P$ of size $\kappa$ there is a filter on $P$ meeting them all; \item $\Phi$-PFA is the statement that for every $\Phi$-proper poset $P$ and every list of $\aleph_1$ many open dense subsets of $P$ there is a filter on $P$ meeting them all. \end{enumerate} \end{definition} In the important special case of $\Phi(F)=$``$F$ is centered'', we will write $\mathrm{YMA}_\kappa$ and YPFA for $\Phi$-$\mathrm{MA}_\kappa$ and $\Phi$-PFA. \begin{theorem} \label{fsitheorem} Let $\Phi$ be a $G$-regularity property. Then the conjunction of c.c.c.\ and $\Phi$-c.c.\ is preserved under \begin{enumerate} \item restriction to a condition; \item complete subalgebras; \item the finite support iteration. \end{enumerate} \end{theorem} \begin{proof} The first two items follow easily from the subalgebra and restriction clauses of regularity. The two-step iteration part of (3) follows just as easily from the iteration clause of regularity. If $P_0$ has $\Phi$-c.c.\ and $\dot P_1$ is a $P_0$-name such that $P_0\Vdash\dot P_1$ has $\Phi$-c.c., we must show that $P_0*\dot P_1$ has $\Phi$-c.c. Let $M\prec H_\theta$ be a countable elementary submodel containing $P_0, \dot P_1$ and let $\langle q_0, \dot q_1\rangle$ be an arbitrary condition in the iteration. We must find a set $F\in M$ with $F\subset RO(P_0)*RO(\dot P_1)$ and $g\in G\cap M$ such that $\Phi(g, F)$ holds, and for every condition $\langle p_0, \dot p_1\rangle\in RO(P_0)*RO(\dot P_1)$ in the model $M$, if $\langle p_0, \dot p_1\rangle\geq \langle q_0, \dot q_1\rangle$ then $\langle p_0, \dot p_1\rangle\in F$. To this end, write $\dot G_0$ for the canonical $P_0$-name for its generic filter and $M[\dot G_0]$ for the $P_0$-name for the set $\{\tau/\dot G_0:\tau\in M$ is a $P_0$-name$\}$. It is well known that $M[\dot G_0]$ is forced to be a countable elementary submodel of $H_\theta$ of the generic extension ${V[\dot G_0]}$ and its intersection with the ground model is equal to $M$. Strengthening $q_0$ if necessary and using $\Phi$-c.c.\ of the poset $\dot P_1$ in the extension, we may find a name $\dot F_1\in M$ for a subset of $RO(\dot P_1)$ and $g_1\in G\cap M$ such that $1\Vdash\Phi(\check g_1, \dot F_1)$, and $q_0\Vdash \{p\in RO(\dot P_1)\cap M[\dot G_0]: p\geq \dot q_1\}\subset\dot F_1$. Use the $\Phi$-c.c.\ of $P_0$ to find some $g_0\in G\cap M$ and $F_0\in M$ such that $F_0\subset RO(P_0)$, $\Phi(g_0, F_0)$ and $\{p\in RO(P_0)\cap M:p\geq q_0\}\subset F_0$. By the iteration clause of regularity, $\Phi(g_0*g_1, F_0*\dot F_1)$ holds. We claim that $F=F_0*\dot F_1$ witnesses $\Phi$-c.c.\ for the iteration. Indeed, suppose that $\langle p_0, \dot p_1\rangle\in M$ is a condition in the iteration weaker than $\langle q_0, \dot q_1\rangle$. Thus, $q_0\Vdash\dot p_1\geq\dot q_1$ and $\dot p_1\in M[\dot G_0]$, and so $q_0\Vdash\dot p_1\in\dot F_1$. The Boolean value $\|\dot p_1\in\dot F_1\|$ is in the model $M$ and it is weaker than $q_0$, so the conjunction $p_0\land \|\dot p_1\in\dot F_1\|\in M$ is still weaker than $q_0$ and so belongs to the set $F_0$. Thus, $\langle p_0, \dot p_1\rangle\in F_0*\dot F_1$ as desired. The general proof proceeds by induction on $\beta=$the length of the iteration. The case $\beta$ successor is handled by the two-step iteration case.
Suppose that $\beta$ is limit, $M$ is a countable elementary submodel of $H_\theta$, and $q$ is any condition in the iteration. The domain of $q$ is a finite subset of $\beta$; let $\alpha=\max(M\cap\mathrm{dom}(q))$. Write $P$ for the whole iteration, $P_0$ for the initial segment of the iteration up to $\alpha$ inclusive, and $\dot P_1$ for the remainder of the iteration; thus, $\dot P_1$ is a $P_0$-name. The condition $q$ can be viewed as a pair $\langle q_0, \dot q_1\rangle$ where $q_0\in P_0$ and $q_0\Vdash\dot q_1\in\dot P_1$. Since $\alpha\in\beta$, the induction hypothesis guarantees the existence of a subset $F_0\in M$ of $RO(P_0)$ and an element $g\in M\cap G$ such that $\Phi(g, F_0)$ holds and for every condition $p\in RO(P_0)$ in the model $M$, weaker than $q_0$, belongs to the set $F_0$. Let $F\in M$ be the subset of $RO(P)$ consisting of pairs $\langle p_0, \dot p_1\rangle\in RO(P_0)*RO(\dot P_1)$ where $p_0\land \|\dot p_1=1\|\in F_0$. By the nontriviality and the finite iteration clauses of regularity, $\Phi(g*h, F, RO(P))$ holds for every $h\in G$; we claim that the set $F$ works as desired. Suppose that $p\geq q$ is a condition in the model $M$ in $RO(P)$; we must show that $p\in F$. The condition $p$ can be viewed as a pair $\langle p_0, p_1\rangle$ such that $p_0\in RO(P_0)$ and $p_0\Vdash\dot p_1\in RO(\dot P_1)$. Since $p\geq q$, it is the case that $p_0\geq q_0$ and $q_0\Vdash\dot p_1\geq \dot q_1$. The important point is that the latter formula means that $q_0\Vdash\dot p_1=1$. If this were not the case, by the c.c.c.\ of $P_1$ there would be a strengthening $r_0\leq q_0$ and a condition $\dot r_1\in M\cap P_1$ such that $r_0\Vdash \dot r_1$ is incompatible with $\dot p_1$. Now, by c.c.c.\ of $P_0$, $P_0$ forces the domain of $\dot r_1$ to be a subset of $M$ and therefore disjoint from $\mathrm{dom}(\dot q_1)$. Thus, $r_0\Vdash\dot r_1, \dot q_1$ are compatible, contradicting the assumption that $q_0\Vdash\dot p_1\geq\dot q_1$. Now, the Boolean value $\|\dot p_1=1\|\in RO(P_0)$ is an element of $M$ and it is weaker than $q_0$. The same is true of $p_0$. Therefore, the conjunction $p_0\land \|\dot p_1=1\|$ must belong to the set $F_0$, and so $\langle p_0, \dot p_1\rangle\in F$ as desired. \end{proof} As an abstract consequence of Theorem~\ref{fsitheorem}, it is possible to force Martin's Axiom for c.c.c.\ $\Phi$-c.c.\ posets with a $\Phi$-c.c.\ poset. The following simple general theorem does not seem to appear in the literature: \begin{theorem} Suppose that $\Psi$ is a property of complete Boolean algebras which provably in ZFC implies c.c.c.\ and is preserved under complete subalgebras and the finite support iteration. Let $\kappa$ be an uncountable regular cardinal and suppose that $\Diamond_{\mathtt{cof}(\kappa)\cap\kappa^+}$ holds. There is a complete Boolean algebra satisfying $\Psi$ forcing $\mathrm{MA}_\kappa$ for posets satisfying $\Psi$. \end{theorem} \begin{corollary} \label{ymatheorem} Let $\Phi$ be a $G$-regularity property of sets of Boolean algebras. Let $\kappa$ be an uncountable regular cardinal and suppose that $\Diamond_{\mathtt{cof}(\kappa)\cap\kappa^+}$ holds. There is a c.c.c.\ $\Phi$-c.c.\ poset forcing $\Phi$-$\mathrm{MA}_\kappa$. \end{corollary} \begin{proof} Let $\langle E_\alpha:\alpha\in\kappa^+\rangle$ be a diamond sequence for $\mathtt{cof}(\kappa)\cap\kappa^+$. This specifically means the following. Fix a wellordering $\prec$ of the set $H_{\kappa^+}$ of ordertype $\kappa^+$. 
Each set $E_\alpha$ is of hereditary cardinality $\kappa$ and whenever $A\subset H_{\kappa^+}$ is a set, then the set $$\{\alpha\in\mathtt{cof}(\kappa)\cap\kappa^+: E_\alpha=\{x\in H_{\kappa^+}: x\in A\text{ and the rank of }x\text{ in }\prec\text{ is less than }\alpha\}\}$$ is stationary. In the following, for a poset $P$ we will write $\Psi(P)$ for the statement $\Psi(RO(P))$. Consider the finite support iteration $R=\langle R_\alpha, \dot Q_\alpha: \alpha\in\kappa^+\rangle$ obtained by the following rule: if $\alpha$ is an ordinal such that $E_\alpha$ codes an $R_\alpha$-name for a poset, and in the $R_\alpha$-extension $\Psi(E_\alpha)$ holds, then $\dot Q_\alpha=E_\alpha$. Otherwise, let $\dot Q_\alpha=$the $R_\alpha$-name for the trivial poset. We claim that the iteration works as required. Suppose that in the $R$-extension, $P$ is a poset, $\Psi(P)$ holds, and $\langle D_\beta: \beta\in\kappa\rangle$ are open dense subsets of it. We must produce a filter meeting them all. First of all, without loss of generality, we may assume that $\|P\|\leq\mathfrak{c}\leq\kappa^+$. If this were not the case, let $N\prec H_\theta$ be an elementary submodel of size $\mathfrak{c}$ containing $P, \langle D_\beta:\beta\in\kappa\rangle$ as elements, $\kappa$ as a subset, and such that $N^\omega\subset N$. Then, $N\cap P$ is a regular subposet of $P$, therefore $\Psi(N\cap P)$ holds by the closure of $\Psi$ under complete subalgebras, it has size $\leq\mathfrak{c}$ and all sets $D_\beta\cap N$ for $\beta\in\kappa$ are dense in it. If there is a filter $G\subset N\cap P$ meeting all the sets $D_\beta\cap N$ for $\beta\in\kappa$, then we are done. Thus, without loss of generality assume that $\|P\|=\kappa^+$, $P\subset H_{\kappa^+}$ and use the c.c.c.\ of $R$ to find an $R$-name $\tau\subset H_{\kappa^+}$ for it so that $R\Vdash\Psi(\tau)$. Back in the ground model, find an elementary submodel $N\prec H_\theta$ of size $\kappa$ containing $P,\tau, \langle D_\beta:\beta\in\kappa\rangle$ as elements, $\kappa$ as a subset, such that $N^\omega\subset N$ and, writing $\alpha= N\cap\kappa^+$, it is the case that $\tau\cap N=E_\alpha$. We claim that $R_\alpha\Vdash E_\alpha$ is a poset satisfying $\Psi$, thus $\dot Q_\alpha=E_\alpha$, and the generic filter added by the $\alpha$-th stage of the iteration generates a filter on $P$ meeting all the dense subsets as required. Let $G_\alpha\subset R_\alpha$ be a generic filter and for the remainder of the proof work in $V[G_\alpha]$. Let $R^\alpha$ be the remainder of the iteration, so $\Psi(R^\alpha)$ holds. Write $P_\alpha=E_\alpha/G_\alpha$; thus $R^\alpha\Vdash P_\alpha\subset P$. The elementarity of the model $N$ has an important consequence: \begin{claim} The map $\pi: P_\alpha\to R^\alpha*\dot P$ given by $\pi(p)=\langle 1, \check p\rangle$ is a regular embedding. \end{claim} \begin{proof} We must verify that if $A\subset P_\alpha$ is a maximal antichain, then $\pi''A\subset R^\alpha*\dot P$ is a maximal antichain as well, or equivalently $R^\alpha\Vdash\check A\subset\dot P$ is maximal. To prove this, suppose that $G^\alpha\subset R^\alpha$ is a filter generic over the model $V[G_\alpha]$, and let $G\subset R$ be the concatenation of $G_\alpha$ and $G^\alpha$.
Then in $V[G]$ the following holds: \begin{itemize} \item $N[G]$ is an elementary submodel of $H_\theta[G]$; \item $P\cap N[G]=P_\alpha$; \item $(P_\alpha)^\omega\cap V[G_\alpha]\subset N[G]$. \end{itemize} \noindent For the last item, return to the ground model for a moment and observe that every $R_\alpha$-name $\sigma$ for an element of $(\dot P_\alpha)^\omega$ is at the same time an $R$-name for an element of $(\dot P)^\omega$. At the same time, $N\cap R=R_\alpha$ and $N$ is closed under countable sequences, therefore $N$ contains $\sigma$ as an element. It follows that $A\in N[G]$. Since $A\subset P_\alpha$ is a maximal antichain and $P\cap N[G]=P_\alpha$, $N[G]\models A\subset P$ is a maximal antichain. Since $N[G]$ is elementary in $H_\theta[G]$, $A\subset P$ must be a maximal antichain as desired. \end{proof} Now, still arguing in the model $V[G_\alpha]$, both steps in the iteration $R^\alpha*\dot P$ satisfy $\Psi$ and so does the iteration. $P_\alpha$ is a regular subposet of this iteration and therefore satisfies $\Psi$ as well. Therefore, at stage $\alpha$ of the iteration the poset $P_\alpha$ is forced with, and the resulting filter on $P_\alpha\subset P$ meets all the open dense subsets on the list $\langle D_\beta:\beta\in\kappa\rangle$. \end{proof} Now, let us move to the proper variations. $\Phi$-properness is not preserved under the countable support iteration. To provide a trivial example, consider the countable support iteration of an atomic poset with two atoms, of length $\omega_1$. Clearly, each poset in the iteration is Y-proper, and the iteration is isomorphic to adding a subset of $\omega_1$ with countable approximations. This poset is not Y-proper by Theorem~\ref{yproperpreservationtheorem}(1). Even so, it is possible to force the forcing axiom for Y-proper posets with an Y-proper poset using the technology of \cite{neeman:twotypes}. This is the contents of the following theorem. \begin{theorem} \label{ypfaforcingtheorem} Suppose that $\Phi$ is a $G$-regularity property and there is a supercompact cardinal. Then there is a $\Phi$-proper forcing $P$ forcing $\Phi$-PFA. \end{theorem} \begin{proof} We first verify the preservation of $\Phi$-properness under two-step iteration. This follows immediately from the iteration clause of regularity: \begin{claim} \label{itclaim} If $P$ is $\Phi$-proper and $P\Vdash\dot Q$ is $\Phi$-proper, then $P*\dot Q$ is $\Phi$-proper. If $M\prec H_\theta$ is a countable elementary submodel containing $P$, $\dot Q$, and $p\in P$ is a $\Phi$-master condition for $M$ in $P$ and $p\Vdash\dot q$ is a $\Phi$-master condition for $M[G]$ in $\dot Q$, then $\langle p, \dot q\rangle$ is a $\Phi$-master condition for $M$ in $P*\dot Q$. \end{claim} Let $\kappa$ be an inaccessible cardinal and $f:\kappa\to V_\kappa$ be a function. Let $I\subset\kappa+1$ be the set of all inaccessible cardinals $\beta$ such that $\langle V_\beta, f\restriction\beta\rangle\prec \langle V_\kappa, f\rangle$; in particular, $\kappa\in I$. For every ordinal $\beta\in I$ define the orders $Q_\beta$ by $m\in Q_\beta$ if $m$ is a finite $\in$-chain whose elements are either countable elementary submodels of $V_\beta$ (the \emph{countable nodes}) or sets $V_\delta$ for some $\delta\in I\cap\beta$ (the \emph{transitive nodes}). Moreover, the chain $m$ must be closed under intersections. The ordering is that of reverse inclusion. Observe that if $\delta\in\beta$ are elements of $I$ then $Q_\delta\subset Q_\beta$. 
It is proved in~\cite{neeman:twotypes} that $Q_\beta$ is strongly proper and hence also $\Phi$-proper. By transfinite recursion on $\beta\in I$ we will define partial orders $\langle P_\beta, \leq_\beta\rangle$. The canonical names for their respective generic filters will be denoted by $\dot G_\beta$. The elements of $P_\beta$ will be certain pairs $p=\langle m(p), w(p)\rangle$, where $m(p)\in Q_\beta$ and $w(p)$ is a function on $m(p)$. For such a pair $p$, if $\delta\in I$ is such that $V_\delta\in m(p)$, write $p\restriction\delta$ for the pair $\langle m(p)\cap V_\delta, w(p)\restriction V_\delta\rangle$. The posets $P_\beta$ are defined by the following recursive formula. A set $p$ is an element of $P_\beta$ if $p=\langle m(p), w(p)\rangle$ where $m(p)\in Q_\beta$ and $w(p)$ is a function with $\mathrm{dom}(w(p))=m(p)$ such that for every transitive node $V_\delta\in m(p)$, $p\restriction\delta\in P_\delta$. Moreover, $w(p)(M)$ is equal to $\mathrm{trash}$ for all nodes $M\in m(p)$ except possibly some transitive nodes $M=V_\delta$ such that $f(\delta)$ is a $P_\delta$-name, $P_\delta\Vdash f(\delta)$ is a $\Phi$-proper forcing, $w(p)(V_\delta)$ is a $P_\delta$-name for an element of $f(\delta)$, and for every countable node $N\in m(p)$ such that $\{ P_\delta, f(\delta)\}\in N$, $p\restriction\delta\Vdash_{P_\delta}$ the condition $w(p)(V_\delta)$ is $\Phi$-master for the model $N[\dot G_\delta]$. Note that due to the closure of $m(p)$ under intersections and to the fact that the set $I$ consists of inaccessible cardinals, it is sufficient to verify the last condition for all countable nodes $N$ which are between $V_\delta$ and the next transitive node on $m(p)$. The ordering is defined by $q\leq_\beta p$ if $m(q)\leq m(p)$ and for every transitive node $V_\delta\in m(p)$, if $w(p)(V_\delta)=\mathrm{trash}$ then $w(q)(V_\delta)=\mathrm{trash}$, and if $w(q)(V_\delta)\neq\mathrm{trash}$ then $q\restriction\delta\leq_\beta p\restriction\delta$ and $q\restriction\delta\Vdash m(q)(V_\delta)\leq m(p)(V_\delta)$. \begin{claim} $\leq_\beta$ is a transitive relation. \end{claim} \begin{proof} This is an elementary argument by transfinite induction on $\beta$. \end{proof} Suppose that $\delta,\beta$ are elements of $I$ such that $\delta\in\beta$. Define $p_\delta^0$ to be the condition in $P_\beta$ which is $\langle \{V_\delta\}, \{\langle V_\delta, \mathrm{trash}\rangle\}\rangle$. In the event that $f(\delta)$ happens to be a $P_\delta$-name and $P_\delta\Vdash f(\delta)$ is $\Phi$-proper, then also define $p_\delta^1$ to be the condition in $P_\beta$ which is $\langle \{V_\delta\}, \{\langle V_\delta, 1_{f(\delta)}\rangle\}\rangle$. \begin{claim} Suppose that $\delta, \beta\in I$ are ordinals such that $\delta\in\beta$, and $p\in P_\beta$ is a condition below $p_\delta^0$ or $p_\delta^1$. Suppose that $q\in P_\delta$ and $q\leq_\delta p\restriction\delta$. Then $r=\langle m_q\cup m_p, w(q)\cup (w(p)\setminus V_\delta)\rangle$ is a condition in $P_\beta$ and $r\leq_\beta p$. \end{claim} \begin{proof} This is another elementary argument by transfinite induction on $\beta$. \end{proof} If $\delta\in I$ is an ordinal less than $\kappa$ such that $f(\delta)$ is an $P_\delta$-name for an $\Phi$-proper forcing, we will write $P_{\delta+1}$ for the two-step iteration $P_\delta*f(\delta)$. 
For an ordinal $\beta>\delta$ and a condition $p\in P_\beta$ such that $V_\delta\in m(p)$ and $w(p)(V_\delta)\neq\mathrm{trash}$, we will write $p\restriction\delta+1$ for the condition in $P_{\delta+1}$ which is the pair $\langle p\restriction\delta, w(p)(V_\delta)\rangle$. The following claim is now easy to show: \begin{claim} \label{ittclaim} Let $\delta, \beta$ be ordinals in $I$ such that $\delta\in\beta$. \begin{enumerate} \item The conditions $p_\delta^0$ and $p_\delta^1$ both force the filter $G_\beta\cap P_\delta$ to be $P_\delta$-generic; \item if $\delta$ is such that $f(\delta)$ is a $P_\delta$-name for a $\Phi$-proper forcing, then $p_\delta^1\Vdash G_\beta\cap P_{\delta+1}$ is $P_{\delta+1}$-generic. \end{enumerate} \end{claim} Let $p,q\in P_\beta$ be conditions. Say that $p, q$ are in $\Delta$\emph{-position} if there is a countable node $M\in m(p)$ such that the model $M$ contains $q$ as well as $P_\gamma$ and $f(\gamma)$ for all $V_\gamma\in m(q)$ as elements, and writing $V_\delta$ for the largest transitive node on $m(p)$ below $M$, it is the case that $V_\delta\in m(q)$, $q\restriction\delta+1$ is compatible with $p\restriction\delta+1$ and all countable nodes of $m(p)$ between $V_\delta$ and $M$ belong to $m(q)$. If there are no transitive nodes in $m(p)$ below $M$, then we require just that all countable nodes of $m(p)$ below $M$ belong to $m(q)$. \begin{claim} \label{deltaclaim} If $p, q$ are in $\Delta$-position then they are compatible in $P_\beta$. \end{claim} \begin{proof} Let $M\in m(p)$ be the model witnessing the $\Delta$-position of $p,q$. Let us treat the case that there is a largest transitive node on $m(p)$ below $M$, denote it by $V_\delta$, and assume that $w(p)(V_\delta)\neq\mathrm{trash}$. Since $p\restriction\delta+1$ and $q\restriction\delta+1$ are compatible, there is a condition $r\in P_\delta$ below both $p\restriction\delta$ and $q\restriction \delta$, and a $P_\delta$-name $\tau$ such that $r\Vdash_\delta\tau\leq w(p)(V_\delta), w(q)(V_\delta)$ in the poset $f(\delta)$. To construct the lower bound $s$ of $p,q$, we must define $m(s)$ and $w(s)$. Let $m(s)=m(r)\cup m(q)\cup m(p)\cup n$, where $n$ is the set of all intersections of the form $N\cap V_\gamma$, where $V_\gamma$ is a transitive node on $m(q)$ and $N=M$ or else $N$ is one of the countable nodes on $m(p)$ above $M$ such that there is no transitive node between $M$ and $N$. First, we must verify that $m(s)$ is a condition in $Q_\beta$. This is a mechanical checking of the clauses of the definition of $Q_\beta$; we will outline only the nontrivial points in the argument. For the closure of $m(s)$ under intersection, the only nontrivial point to check is that if $N_0\in m(q)\setminus V_\delta$ and $N_1\in m(p)\setminus V_\delta$ are countable nodes, then $N_0\cap N_1\in m(s)$. To see this, note that $N_0\in M$ and so $N_0\subset M$, $N_0\cap N_1=N_0\cap (N_1\cap M)$, the countable node $N_1\cap M$ is in $m(p)$ by the closure of $m(p)$ under intersections, and there are two cases. \textbf{Case 1.} Either $N_1\cap M\in V_\delta$. Then $N_0\cap N_1= (N_0\cap V_\delta)\cap (N_1\cap M\cap V_\delta)$. Now, the model $N_0\cap V_\delta\in m(q)$ by the closure of $m(q)$ under intersections, $N_1\cap M\cap V_\delta\in m(p)$ by the closure of $m(p)$ under intersections, so both of them are in $m(r)$ since $r\leq p\restriction\delta, q\restriction\delta$, and so $N_0\cap N_1\in m(r)$ by the closure of $m(r)$ under intersections. \textbf{Case 2.} Or $N_1\cap M\notin V_\delta$.
Then $N_1\cap M$ must be one of the models on $m(p)$ between $V_\delta$ and $M$, or it may be equal to $M$ itself. In the former case $N_1\cap M\in m(q)$ as $p,q$ are in $\Delta$-position, and so $N_0\cap N_1=N_0\cap N_1\cap M\in m(q)$ by the closure of $m(q)$ under intersections. In the latter case, $N_0\cap N_1=N_0\cap M=N_0\in m(q)$, and we are finished. To verify that $m(s)$ forms an $\in$-chain, first observe that $m(r)\cup m(q)\cup m(p)$ is a concatenation of three $\in$-chains ($m(r)$, $m(q)\setminus V_\delta$, and $m_p\setminus M$, since $m(r)\in V_\delta$ and $m(q)\in M$) and so an $\in$-chain. Now inspect the models in the set $n$. Let $V_\gamma$ be a transitive node in $m(q)$ and $K$ its predecessor in $m(q)$. Then $K, V_\gamma\in M$. Since the countable nodes between $M$ and the next transitive node in $m(p)$ above $M$ form an $\in$-chain and all contain $M$ as an element, they also contain $K, V_\gamma$ and so their intersections with $V_\gamma$ form an $\in$-chain whose nodes all contain $K$ and are contained in $V_\gamma$. This immediately implies that $m(s)$ forms an $\in$-chain. The definition of $w(s)$ breaks into several cases, all of which except for one are trivial. \noindent \textbf{Case 1.} For $V_\gamma\in m(r)$, let $w(s)(V_\gamma)=w(r)(V_\gamma)$. \noindent \textbf{Case 2.} The value $w(s)(V_\delta)$ will be equal to $\tau$. This condition is forced to be $\Phi$-master for all the relevant models on $m(s)$ above $V_\delta$: these models come either from $m(p)$ or $m(q)$ or from intersections with transitive nodes, and $\tau$ is stronger than both $w(p)(V_\delta)$ and $w(q)(V_\delta)$. \noindent \textbf{Case 3.} Now suppose that $\gamma$ is such that $\delta\in \gamma$ and $V_\gamma\in m(q)$, $f(\gamma)$ is a $P_\gamma$-name for an $\Phi$-proper poset, and $w(q)(V_\gamma)\neq\mathrm{trash}$. To define $w(s)(V_\gamma)$, consider the set $n_\gamma$ of all countable nodes in $m(s)$ below the next transitive node $V_{\gamma^*}$ above $V_\gamma$ (if such a node does not exist, just take all countable nodes on $m(s)$) which contain $P_\gamma, f(\gamma)$. Observe that this set is linearly ordered by $\in$, starts with (perhaps) some models on $m(q)$, after which comes $M\cap V_{\gamma^*}$ and then (perhaps) some other models. By the definition of $P_\beta$, the condition $q\restriction\gamma$ forces in $P_\gamma$ that $w(q)(V_\gamma)$ is a $\Phi$-master condition for $K[G_\gamma]$ for all models $K\in n_\gamma\cap m(q)$ and the poset $f(\gamma)$. Moreover, $P_\gamma, f(\gamma)$ and $w(q)(V_\gamma)$ all belong to the next model $M\cap V_{\gamma^*}$ on the set $n_\gamma$ beyond the models from $m(q)$. Therefore, using the definition of $\Phi$-master condition repeatedly, gradually strengthening the $P_\gamma$-name for the condition $w(q)(V_\gamma)$, it is possible to find a name $w(s)(V_\gamma)$ forced by $q\restriction\gamma$ to be $\Phi$-master for all models $N[G_\gamma]$ where $N\in n_\gamma$. \noindent\textbf{Case 4.} To define $w(s)(V_\gamma)$ for the transitive nodes $V_\gamma$ on $m(s)$ above the model $M$, just let $w(s)(V_\gamma)=w(p)(V_\gamma)$. It is not difficult to verify now that $s=\langle m(s), w(s)\rangle$ is a condition, and it is a lower bound of $p, q$ in $P_\delta$. \end{proof} For every countable elementary submodel $M\prec H_\theta$ containing $\kappa, f$ and an ordinal $\beta\in I$ let $p_M\in P_\beta$ be the unique condition with $m(p)=\{M\cap V_\beta\}$. \begin{claim} Let $\beta\in I$ be an ordinal. 
The poset $P_\beta$ is $\Phi$-proper, and for every countable elementary submodel $M\prec H_\theta$ containing the ordinal $\beta$ as well as $\kappa, f$, the condition $p_M$ is a $\Phi$-master condition for $M$ in the poset $P_\beta$. \end{claim} \begin{proof} This is proved by induction on $\beta\in I$. Suppose that $\beta\in I$ is an ordinal below which the statement has been verified, and let $M\prec H_\theta$ be a countable elementary submodel containing $\beta$. To verify that $p_M$ is a master condition $P_\beta$, suppose that $p\leq p_M$ is an arbitrary condition and $D\in M$ is an open dense subset of $P_\beta$; we must produce a condition $q\in D\cap M$ compatible with $p$. Strengthening $p$ if necessary, we may assume that $p\in D$. For definiteness assume that there are some transitive nodes in $m(p)$ below $M\cap V_\beta$, and let $V_\delta$ denote the largest one of them. For definiteness assume that $w(p)(V_\delta)\neq\mathrm{trash}$, the other cases are simpler. By the closure of $m(p)$ under intersections, $V_\delta\cap M\in m(p)$ holds, and by the induction hypothesis, $p\restriction\delta$ is a master condition for the model $M$ in the poset $P_\delta$. By the definition of the poset $P_\beta$, $p\restriction\delta\Vdash w(p)(V_\delta)$ is a master condition for $M[\dot G_\delta]$. Therefore, $p\restriction\delta+1$ is a master condition for $M$ in the poset $P_{\delta+1}$. Now, $p\restriction\delta+1$ forces in $P_{\delta+1}$ that there is a condition $q\in D$ such that $q\restriction\delta+1$ is in the generic filter $G_{\delta+1}$ and $m(q)$ contains all countable nodes of $m(p)$ between $V_\delta$ and $M$. This is clear since $q=p$ will work. Since $p\restriction\delta+1$ is $M$-master, it forces that there must be such a condition $q$ in the model $M$. In other words, there must be a condition $q\in M\cap D$ such that $p\restriction\delta+1$ and $q\restriction\delta+1$ are compatible and $m(q)$ contains all countable nodes of $m(p)$ between $V_\delta$ and $M$. But then, $p,q$ are in $\Delta$-position and therefore compatible. The proof that $p_M$ is a master condition for the model $M$ is complete. To verify that $p_M$ is a $\Phi$-master condition, suppose that $p\in P_\beta$ is a condition below $p_M$; we must find an element $g\in G\cap M$ and a set $F\in M$ on $RO(P_\beta)$ such that $\Phi(g, F, RO(P_\beta))$ holds and such that for every condition $q\in M\cap RO(P)$, if $q\geq p$ then $q\in F$. For definiteness assume that there are some transitive nodes in $m(p)$ below $M\cap V_\beta$, and let $V_\delta$ denote the largest one of them. For definiteness, also assume that $w(p)(V_\delta)\neq\mathrm{trash}$, the other cases are simpler. Let $\bar p=\langle m(\bar p), w(\bar p)\rangle$ be the condition defined in the following way: $m(\bar p)$ contains $V_\delta$, all the countable nodes of $m(p)$ between $V_\delta$ and $M$, and the intersections of these nodes with $V_\delta$. The map $w(\bar p)$ returns only one nontrivial value, at $V_\delta$, where it indicates the sum of all conditions in $f(\delta)$ which are $\Phi$-master for all models on $m(\bar p)$ containing $P_\delta$ and $f(\delta)$. It is clear that $\bar p\in M$ is a condition weaker than $p$. By the restriction clause of regularity, it will be enough to find the requested set $F$ in $RO(P_\beta\restriction\bar p)$. By Claim~\ref{ittclaim}, the algebra $A=RO(P_{\delta+1}\restriction \bar p)$ can be naturally viewed as a complete subalgebra of $B=RO(P_\beta\restriction\bar p)$. 
By the induction hypothesis applied at $\delta$ and the two-step iteration Claim~\ref{itclaim}, the condition $p\restriction\delta+1$ is $\Phi$-master for $M$ and $P_{\delta+1}$. Thus, there are a set $F_0\subset A$ and an element $g\in G$ in the model $M$ such that $\Phi(g, F_0, A)$ holds and $F_0$ contains all elements of $A\cap M$ weaker than $p\restriction\delta+1$. We will show that the set $F$, obtained as the upwards closure of $F_0$ in the algebra $B$, has the requested properties. Certainly $\Phi(g, F, B)$ holds by the subalgebra and closure clauses of regularity, and $F\in M$. We must verify that if $b\in B\cap M$ is weaker than $p$ then $b\in F$. To this end, consider the lower projection function $\mathrm{proj}: B\to A$ defined by $\mathrm{proj}(b)=\sum\{a\in A: a\leq b\}\leq b$. We claim that if $b\in B\cap M$ is weaker than $p$, then $\mathrm{proj}(b)\in A\cap M$ is weaker than $p\restriction\delta+1$. This will complete the proof as then $\mathrm{proj}(b)\leq b$ must be an element of $F_0$ and so $b\in F$. Suppose for contradiction that $p\restriction\delta+1$ is not stronger than $\mathrm{proj}(b)$. Then $p\restriction\delta+1$ must be compatible in $B$ with $1-b$ by the definition of projection. Since $p\restriction\delta+1$ is a master condition for $M$ by the first part of the proof of the claim, this means that there must be a condition $q\leq \bar p$ in the poset $P_\beta$ and the model $M$ such that $q\restriction\delta+1$ is compatible with $p\restriction\delta+1$, and $q$ is below $1-b$. But then, $p$ and $q$ are in $\Delta$-position, as $m(q)$ contains all nodes on $m(\bar p)$ and so all countable nodes on $m(p)$ between $V_\delta$ and $M$. The conditions $p,q$ are therefore compatible by Claim~\ref{deltaclaim}. Their common lower bound will be below both $p$ and $1-b$, contradicting the assumption that $p\leq b$. \end{proof} Now, suppose that $\kappa$ is a supercompact cardinal and $f:\kappa\to V_\kappa$ is the Laver prediction function. Let $P=P_\kappa$ be the $\Phi$-proper forcing obtained from the function $f$ using the scheme above. A routine argument now shows that $P$ forces $\Phi$-PFA to hold. \end{proof} The iteration theorem allows us to finally prove some interesting consistency results. \begin{theorem} YPFA implies \begin{enumerate} \item PID; \item there are only five cofinal types of directed posets of size $\aleph_1$; \item all Aronszajn trees are special; \item $\mathfrak{c}=\aleph_2$. \end{enumerate} \noindent YPFA does not imply \begin{enumerate} \item[(5)] OCA; \item[(6)] c.c.c.\ is productive. \end{enumerate} \end{theorem} \noindent Items (4) and (6) are due to Todorcevic. \begin{proof} YPFA implies PID since the PID posets are Y-proper by Theorem~\ref{pidtheorem}. YPFA implies the five cofinal types statement since the posets used for it are ideal-based \cite{z:keeping} and the ideal-based posets are Y-proper by Theorem~\ref{idealbasedtheorem}. Todorcevic remarked that the classification of cofinal types of size $\aleph_1$ is a consequence of the conjunction of PID and $\mathfrak{p}>\aleph_1$, which are both consequences of YPFA. The proof of $\mathfrak{c}=\aleph_2$ follows the PFA argument with a small change. PID implies that $\mathfrak{b}\leq\aleph_2$ \cite{todorcevic:bsl}, and so YPFA implies $\mathfrak{b}=\aleph_2$. Similarly to the oldest proof that PFA implies $\mathfrak{c}=\aleph_2$, the argument is concluded by showing that $\mathfrak{c}=\mathfrak{b}$.
This is quite involved and we only outline the main points. Fix a modulo-finite increasing, unbounded sequence $\vec y=\langle y_\alpha:\alpha\in\omega_2\rangle$ of functions in the Baire space. For every $x\in2^\gw$, consider a poset $P_x$ which is the iteration $P_x^0*\dot P_x^1$. The first step of the iteration is the $\in$-collapse of $\aleph_2$ to $\aleph_1$; the main point is that it is Y-proper and preserves unbounded sequences. The second step of the iteration uses $\vec y$ and the fact that it remains unbounded to code the point $x\in2^\gw$ into a closed unbounded subset of $\omega_2^V$ via a c.c.c.\ poset; the main point is that it is in fact even Y-c.c. Thus, the iteration $P_x$ is Y-proper. An application of YPFA to the poset $P_x$ yields coding of the point $x$ by an ordinal $<\omega_2$, proving that $\mathfrak{c}=\aleph_2$. Details (except for showing that $\dot P_x^1$ is forced to be Y-c.c.) can be found in \cite[Theorem 3.16]{bekkali:set}. For (5) and (6), Todorcevic supplied an argument using entangled linear orders. Suppose that there is a supercompact cardinal and the continuum hypothesis holds. Then, there is a set $A$ of reals of size $\aleph_1$ which forms an entangled linear ordering \cite[Theorem 1]{todorcevic:entangled}. Note that entangledness is a statement about the nonexistence of uncountable anticliques in a certain graph on $A^n$, and in the case that $A$ is a set of reals this graph is open. Force YPFA with a Y-proper ordering. In the resulting model, $A$ is still entangled, thus OCA fails, and also by \cite[Theorem 6]{todorcevic:entangled} c.c.c.\ is not productive. \end{proof} \end{document}
\begin{document} \title{Local Search Algorithms for Rank-Constrained Convex Optimization} \begin{abstract} We propose greedy and local search algorithms for rank-constrained convex optimization, namely solving $\underset{\mathrm{rank}(A)\leq r^*}{\min}\, R(A)$ given a convex function $R:\mathbb{R}^{m\times n}\rightarrow \mathbb{R}$ and a parameter $r^*$. These algorithms consist of repeating two steps: (a) adding a new rank-1 matrix to $A$ and (b) enforcing the rank constraint on $A$. We refine and improve the theoretical analysis of~\cite{shalev2011large}, and show that if the \emph{rank-restricted} condition number of $R$ is $\kappa$, a solution $A$ with rank $O(r^*\cdot \min\{\kappa \log \frac{R(\mathbf{0}) - R(A^*)}{\epsilon}, \kappa^2\})$ and $R(A) \leq R(A^*) + \epsilon$ can be recovered, where $A^*$ is the optimal solution. This significantly generalizes associated results on sparse convex optimization, as well as rank-constrained convex optimization for smooth functions. We then introduce new practical variants of these algorithms that have superior runtime and recover better solutions in practice. We demonstrate the versatility of these methods on a wide range of applications involving matrix completion and robust principal component analysis. \end{abstract} \section{Introduction} Given a real-valued convex function $R:\mathbb{R}^{m\times n}\rightarrow \mathbb{R}$ on real matrices and a parameter $r^*\in\mathbb{N}$, the \emph{rank-constrained convex optimization} problem consists of finding a matrix $A\in\mathbb{R}^{m\times n}$ that minimizes $R(A)$ among all matrices of rank at most $r^*$: \begin{align} \underset{\mathrm{rank}(A)\leq r^*}{\min}\, R(A) \label{eq:problem} \end{align} Even though $R$ is convex, the rank constraint makes this problem non-convex. Furthermore, it is known that this problem is NP-hard and even hard to approximate (\cite{Natarajan95,FKT15}). In this work, we propose efficient greedy and local search algorithms for this problem. Our contribution is twofold: \begin{enumerate} \item {We provide theoretical analyses that bound the rank and objective value of the solutions returned by the two algorithms in terms of the \emph{rank-restricted condition number}, which is the natural generalization of the condition number for low-rank subspaces. The results are significantly stronger than previously known bounds for this problem.} \item{We experimentally demonstrate that, after careful performance adjustments, the proposed general-purpose greedy and local search algorithms have superior performance to other methods, even for some of those that are tailored to a particular problem. Thus, these algorithms can be considered a general tool for rank-constrained convex optimization and a viable alternative to methods that use convex relaxations or alternating minimization.} \end{enumerate} \paragraph{The rank-restricted condition number} Similarly to the work in sparse convex optimization, a restricted condition number quantity has been introduced as a reasonable assumption on $R$. If we let $\rho_r^+$ be the maximum smoothness bound and $\rho_r^-$ be the minimum strong convexity bound only along rank-$r$ directions of $R$ (these are called rank-restricted smoothness and strong convexity respectively), the rank-restricted condition number is defined as $\kappa_r = \frac{\rho_r^+}{\rho_r^-}$.
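For concreteness, the restricted bounds just mentioned can be written in the form standard in the sparse-optimization literature cited above; the display below is our own paraphrase, and the exact normalization (e.g.\ the factor $\frac12$) is an assumption rather than a quotation from the analysis. Here $\rho_r^+$ and $\rho_r^-$ are the tightest constants such that, for every matrix $A$ and every perturbation $D$ with $\mathrm{rank}(D)\leq r$,
\[
\frac{\rho_r^-}{2}\,\|D\|_F^2
\;\leq\;
R(A+D)-R(A)-\langle \nabla R(A),\, D\rangle
\;\leq\;
\frac{\rho_r^+}{2}\,\|D\|_F^2,
\qquad\text{and}\qquad
\kappa_r=\frac{\rho_r^+}{\rho_r^-}.
\]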
If this quantity is bounded, one can efficiently find a solution $A$ with $R(A) \leq R(A^*) + \epsilon$ and rank $r = O(r^* \cdot \kappa_{r+r^*} \frac{R(\mathbf{0})}{\epsilon})$ using a greedy algorithm (\cite{shalev2011large}). However, this is not an ideal bound since the rank scales linearly with $\frac{R(\mathbf{0})}{\epsilon}$, which can be particularly high in practice. Inspired by the analogous literature on sparse convex optimization by \cite{Natarajan95, SSZ10, zhang2011sparse, JTK14} and more recently \cite{axiotis2020sparse}, one would hope to achieve a logarithmic dependence or no dependence at all on $\frac{R(\mathbf{0})}{\epsilon}$. In this paper we achieve this goal by providing an improved analysis showing that the greedy algorithm of~\cite{shalev2011large} in fact returns a matrix of rank $r = O(r^* \cdot \kappa_{r+r^*} \log \frac{R(\mathbf{0})}{\epsilon})$. We also provide a new local search algorithm together with an analysis guaranteeing a rank of $r = O(r^* \cdot \kappa_{r+r^*}^2)$. Apart from significantly improving upon previous work on rank-constrained convex optimization, these results directly generalize a lot of work in sparse convex optimization, e.g.~\cite{Natarajan95,SSZ10,JTK14}. Our algorithms and theorem statements can be found in Section~\ref{sec:algos}. \paragraph{Runtime improvements} Even though the rank bound guaranteed by our theoretical analyses is adequate, the algorithm runtimes leave much to be desired. In particular, both the greedy algorithm of~\cite{shalev2011large} and our local search algorithm have to solve an optimization problem in each iteration in order to find the best possible linear combination of features added so far. Even for the case that $R(A) = \frac{1}{2} \sum\limits_{(i,j)\in \Omega} (M - A)_{ij}^2$, this requires solving a least squares problem on $|\Omega|$ examples and $r^2$ variables. For practical implementations of these algorithms, we circumvent this issue by solving a related optimization problem that is usually much smaller. This instead requires solving $n$ least squares problems with \emph{total} number of examples $|\Omega|$, each on $r$ variables. This not only reduces the size of the problem by a factor of $r$, but also allows for a straightforward distributed implementation (a minimal sketch of this decomposition is given at the end of this introduction). We propose an additional heuristic that reduces the runtime even more drastically, which is to only run a few (fewer than 10) iterations of the algorithm used for solving the inner optimization problem. Experimental results show that this modification not only does not significantly worsen results, but for machine learning applications also acts as a regularization method that can dramatically improve generalization. These matters, as well as additional improvements for making the local search algorithm more practical, are addressed in Section~\ref{sec:runtime_adjustments}. \paragraph{Roadmap} In Section~\ref{sec:algos}, we provide the descriptions and theoretical results for the algorithms used, along with several modifications to boost performance. In Section~\ref{sec:optimization}, we evaluate the proposed greedy and local search algorithms on optimization problems like robust PCA. Then, in Section~\ref{sec:ML} we evaluate their generalization performance in machine learning problems like matrix completion.
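As promised in the runtime discussion above, here is a minimal sketch of the per-column decomposition for the matrix-completion objective $R(A)=\frac{1}{2}\sum_{(i,j)\in\Omega}(M-A)_{ij}^2$ with $A=UV^\top$: the left factor $U$ is held fixed and each row of $V$ is refit by an independent least-squares problem over the entries observed in the corresponding column of $M$. This is our own illustration rather than the authors' released code; the function name, data layout, and use of NumPy are assumptions.
\begin{verbatim}
import numpy as np

def refit_right_factor(U, M, mask):
    # U    : (m, r) fixed left factor
    # M    : (m, n) matrix holding the observed values (ignored outside mask)
    # mask : (m, n) boolean array, True on the observed set Omega
    # Returns V of shape (n, r) minimizing the squared error over observed
    # entries of M - U V^T, via one small least-squares problem per column j.
    m, r = U.shape
    n = M.shape[1]
    V = np.zeros((n, r))
    for j in range(n):              # n independent problems, easy to distribute
        rows = mask[:, j]           # observed rows in column j
        if rows.any():
            A = U[rows, :]          # |Omega_j| x r design matrix
            b = M[rows, j]
            # each solve has only r variables; lstsq handles rank deficiency
            V[j], *_ = np.linalg.lstsq(A, b, rcond=None)
    return V
\end{verbatim}
The loop over $j$ can be parallelized without coordination, since the subproblems share only the fixed factor $U$; together they touch each observed entry exactly once, matching the size reduction claimed above.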
\section{Algorithms \& Theoretical Guarantees}
\label{sec:algos}
In Sections~\ref{sec:greedy} and~\ref{sec:local_search} we state and provide theoretical performance guarantees for the basic greedy and local search algorithms respectively. Then, in Section~\ref{sec:runtime_adjustments} we state the algorithmic adjustments that we propose in order to make the algorithms efficient in terms of runtime and generalization performance. A discussion regarding the tightness of the theoretical analysis is deferred to Appendix~\ref{sec:appendix_tightness}.
When the dimension is clear from context, we will denote the all-ones vector by $\mathbf{1}$, and the vector that is $0$ everywhere and $1$ at position $i$ by $\mathbf{1}_i$. Given a matrix $A$, we denote by $\mathrm{im}(A)$ its column span. One notion that we will find useful is that of \emph{singular value thresholding}. More specifically, given a rank-$k$ matrix $A\in\mathbb{R}^{m\times n}$ with SVD $\sum\limits_{i=1}^k \sigma_i u^i v^{i\top}$ such that $\sigma_1 \geq \dots\geq \sigma_k$, as well as an integer parameter $r \geq 1$, we define $H_r(A) = \sum\limits_{i=1}^r \sigma_i u^i v^{i\top}$ to be the operator that truncates to the $r$ highest singular values of $A$.
\algdef{SE}[DOWHILE]{Do}{doWhile}{\algorithmicdo}[1]{\algorithmicwhile\ #1}
\subsection{Greedy}
\label{sec:greedy}
Algorithm~\ref{alg:greedy} (Greedy) was first introduced in~\cite{shalev2011large} as the GECO algorithm. It works by iteratively adding a rank-1 matrix to the current solution. This matrix is chosen as the rank-1 matrix that best approximates the gradient, i.e. the pair of singular vectors corresponding to the maximum singular value of the gradient. In each iteration, an additional procedure is run to optimize the combination of previously chosen singular vectors. In~\cite{shalev2011large}, the guarantee on the rank of the solution returned by the algorithm is $r^* \kappa_{r+r^*} \frac{R(\mathbf{0})}{\epsilon}$. The main bottleneck in improving on the $\frac{R(\mathbf{0})}{\epsilon}$ factor is the fact that the analysis is done in terms of the squared nuclear norm of the optimal solution. As the worst-case discrepancy between the squared nuclear norm and the rank is $R(\mathbf{0})/\epsilon$, their bounds inherit this factor. Our analysis works directly with the rank, in the spirit of sparse optimization results (e.g. \cite{shalev2011large,JTK14,axiotis2020sparse}). A challenge compared to these works is the need for a suitable notion of ``intersection'' between two sets of vectors. The main technical contribution of this work is to show that the orthogonal projection of one set of vectors onto the span of the other is such a notion, and, based on this, to define a decomposition of the optimal solution that is used in the analysis.
\begin{algorithm}
\caption{Greedy}\label{alg:greedy}
\begin{algorithmic}[1]
\Procedure{Greedy}{$r\in\mathbb{N}$ : target rank}
\State function to be minimized $R:\mathbb{R}^{m\times n}\rightarrow\mathbb{R}$
\State $U \in \mathbb{R}^{m\times 0}$ \Comment{Initially rank is zero}
\State $V \in \mathbb{R}^{n\times 0}$
\For {$t=0\dots r-1$}
\State $\sigma u v^\top \leftarrow H_1(\nabla R(UV^\top))$ \Comment{Max singular value $\sigma$ and corresp.
singular vectors $u,v$}
\State $U \leftarrow \begin{pmatrix}U & u\end{pmatrix}$ \Comment{Append new vectors as columns}
\State $V \leftarrow \begin{pmatrix}V & v\end{pmatrix}$
\State $U, V \leftarrow \textsc{Optimize}(U, V)$
\EndFor
\State \Return\, $UV^\top$
\EndProcedure
\Procedure{Optimize}{$U\in\mathbb{R}^{m\times r}, V\in\mathbb{R}^{n\times r}$}
\State $X \leftarrow \underset{X\in\mathbb{R}^{r\times r}}{\argmin}\, R(UXV^\top)$
\State \Return\, $UX, V$
\EndProcedure
\end{algorithmic}
\end{algorithm}
\begin{theorem}[Algorithm~\ref{alg:greedy} (greedy) analysis]
\label{thm:greedy}
Let $A^*$ be any fixed optimal solution of (\ref{eq:problem}) for some function $R$ and rank bound $r^*$, and let $\epsilon > 0$ be an error parameter. For any integer $r \geq 2r^* \cdot \kappa_{r+r^*} \log \frac{R(\mathbf{0}) - R(A^*)}{\epsilon}$, if we let $A = \textsc{Greedy}(r)$ be the solution returned by Algorithm~\ref{alg:greedy}, then $R(A) \leq R(A^*) + \epsilon$. The number of iterations is $r$.
\end{theorem}
The proof of Theorem~\ref{thm:greedy} can be found in Appendix~\ref{proof_thm:greedy}.
\subsection{Local Search}
\label{sec:local_search}
One drawback of Algorithm~\ref{alg:greedy} is that it increases the rank in each iteration. Algorithm~\ref{alg:local_search} is a modification of Algorithm~\ref{alg:greedy} in which the rank is truncated in each iteration. The advantage of Algorithm~\ref{alg:local_search} over Algorithm~\ref{alg:greedy} is that it is able to make progress without increasing the rank of $A$, while Algorithm~\ref{alg:greedy} necessarily increases the rank in each iteration. More specifically, because of the greedy nature of Algorithm~\ref{alg:greedy}, some rank-1 components that have been added to $A$ might become obsolete or have reduced benefit after a number of iterations. Algorithm~\ref{alg:local_search} is able to identify such candidates and remove them, thus allowing it to continue making progress.
\begin{algorithm}
\caption{Local Search}\label{alg:local_search}
\begin{algorithmic}[1]
\Procedure{Local\_Search}{$r\in\mathbb{N}$ : target rank}
\State function to be minimized $R:\mathbb{R}^{m\times n}\rightarrow\mathbb{R}$
\State $U \leftarrow \mathbf{0}_{m\times r}$ \Comment{Initialize with all-zero solution}
\State $V \leftarrow \mathbf{0}_{n\times r}$
\For {$t=0\dots L-1$} \Comment{Run for $L$ iterations}
\State $\sigma u v^\top \leftarrow H_1(\nabla R(UV^\top))$ \Comment{Max singular value $\sigma$ and corresp. singular vectors $u,v$}
\State $U, V \leftarrow \textsc{Truncate}(U, V)$ \Comment{Reduce rank of $UV^\top$ by one}
\State $U \leftarrow \begin{pmatrix}U & u\end{pmatrix}$ \Comment{Append new vectors as columns}
\State $V \leftarrow \begin{pmatrix}V & v\end{pmatrix}$
\State $U, V \leftarrow \textsc{Optimize}(U, V)$
\EndFor
\State \Return\, $UV^\top$
\EndProcedure
\Procedure{Truncate}{$U\in\mathbb{R}^{m\times r}, V\in\mathbb{R}^{n\times r}$}
\State $U\Sigma V^\top \leftarrow \mathrm{SVD}(H_{r-1}(UV^\top))$ \Comment{Keep all but the minimum singular value}
\State \Return $U\Sigma, V$
\EndProcedure
\end{algorithmic}
\end{algorithm}
\begin{theorem}[Algorithm~\ref{alg:local_search} (local search) analysis]
Let $A^*$ be any fixed optimal solution of (\ref{eq:problem}) for some function $R$ and rank bound $r^*$, and let $\epsilon > 0$ be an error parameter.
For any integer $r \geq r^* \cdot (1 + 8 \kappa_{r+r^*}^2)$, if we let $A = \textsc{Local\_Search}(r)$ be the solution returned by Algorithm~\ref{alg:local_search}, then $R(A) \leq R(A^*) + \epsilon$.
\label{thm:local_search}
The number of iterations is $O\left(r^* \kappa_{r+r^*} \log \frac{R(\mathbf{0}) - R(A^*)}{\epsilon} \right)$.
\end{theorem}
The proof of Theorem~\ref{thm:local_search} can be found in Appendix~\ref{proof_thm:local_search}.
\subsection{Algorithmic adjustments}
\label{sec:runtime_adjustments}
\paragraph{Inner optimization problem} The inner optimization problem used in both greedy and local search is:
\begin{align}
\underset{X\in\mathbb{R}^{r\times r}}{\min}\, R(UXV^\top)\,.
\label{eq:inner_optimization}
\end{align}
It essentially finds the choice of matrices $U'$ and $V'$, with columns in the column span of $U$ and $V$ respectively, that minimizes $R(U'V^{'\top})$. We, however, consider the following problem instead:
\begin{align}
\underset{V\in\mathbb{R}^{n\times r}}{\min}\, R(UV^\top)\,.
\label{eq:inner_optimization2}
\end{align}
Note that the solution recovered from (\ref{eq:inner_optimization2}) will never have a worse objective value than the one recovered from (\ref{eq:inner_optimization}), and that nothing in the analysis of the algorithms breaks. Importantly, (\ref{eq:inner_optimization2}) can usually be solved much more efficiently than (\ref{eq:inner_optimization}). As an example, consider the following objective that appears in matrix completion: $R(A) = \frac{1}{2} \sum\limits_{(i,j)\in\Omega} (M - A)_{ij}^2$ for some $\Omega\subseteq [m]\times [n]$. If we let $\Pi_\Omega(\cdot)$ be the operator that zeroes out all positions of a matrix that are not in $\Omega$, we have $\nabla R(A) = -\Pi_{\Omega}(M-A)$. The optimality condition of (\ref{eq:inner_optimization}) is now $U^\top \Pi_{\Omega}(M-UXV^\top) V = \mathbf{0}$ and that of (\ref{eq:inner_optimization2}) is $U^\top \Pi_{\Omega}(M-UV^\top) = \mathbf{0}$. The former corresponds to a least squares linear regression problem with $|\Omega|$ examples and $r^2$ variables, while the latter can be decomposed into $n$ independent systems $U^\top \left(\sum\limits_{i: (i,j)\in \Omega} \mathbf{1}_i\mathbf{1}_i^\top \right) U V^j = U^\top \Pi_{\Omega}\left(M\right)\mathbf{1}_j$, where the variable $V^j$ is the $j$-th column of $V$. The $j$-th of these systems corresponds to a least squares linear regression problem with $\left|\{i : (i,j)\in\Omega\}\right|$ examples and $r$ variables. Note that the total number of examples over all systems is $\sum\limits_{j\in[n]} \left|\{i : (i,j)\in\Omega\}\right| = |\Omega|$.
The choice of $V$ as the variable to be optimized here is arbitrary. In particular, as can be seen in Algorithm~\ref{alg:inner_optimization_fast}, in practice we alternate between optimizing $U$ and $V$ in each iteration. It is worth mentioning that the \textsc{Optimize\_Fast} procedure is essentially one step of the popular alternating minimization procedure for solving low-rank problems. In fact, viewed through this lens, our proposed algorithms can be seen as alternating minimization interleaved with rank-1 insertion and/or removal steps.
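To make this concrete, the following is a minimal NumPy sketch (illustrative only; our implementation uses LSQR, and the function name and the dense boolean encoding of $\Omega$ are ours) of solving (\ref{eq:inner_optimization2}) for a fixed $U$ under the matrix-completion objective, by decomposing it into $n$ independent least squares problems, one per column of $V$:
\begin{verbatim}
import numpy as np

def optimize_V_given_U(U, M, observed):
    """Solve min_V 0.5 * ||Pi_Omega(M - U V^T)||_F^2 column by column.

    U        : (m, r) fixed left factor
    M        : (m, n) partially observed target matrix
    observed : (m, n) boolean mask, True on the entries in Omega
    Returns V of shape (n, r).
    """
    m, r = U.shape
    n = M.shape[1]
    V = np.zeros((n, r))
    for j in range(n):
        rows = np.nonzero(observed[:, j])[0]   # {i : (i, j) in Omega}
        if rows.size == 0:
            continue                           # column j is unconstrained
        Uj = U[rows, :]                        # |rows| x r design matrix
        sol, *_ = np.linalg.lstsq(Uj, M[rows, j], rcond=None)
        V[j] = sol
    return V
\end{verbatim}
Each of the $n$ solves touches only the observed entries of its column, so the total work matches the $|\Omega|$-example count discussed above, and the solves can be carried out in parallel.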
\begin{algorithm}
\caption{Fast inner Optimization}\label{alg:inner_optimization_fast}
\begin{algorithmic}[1]
\Procedure{Optimize\_Fast}{$U\in\mathbb{R}^{m\times r}, V\in\mathbb{R}^{n\times r}, t\in\mathbb{N} : \text{iteration index of algorithm}$}
\If {$t \; \mathrm{mod}\; 2 = 0$}
\State $X \leftarrow \underset{X\in\mathbb{R}^{m\times r}}{\argmin}\, R(XV^\top)$
\State \Return\, $X, V$
\Else
\State $X \leftarrow \underset{X\in\mathbb{R}^{n\times r}}{\argmin}\, R(UX^\top)$
\State \Return\, $U, X$
\EndIf
\EndProcedure
\end{algorithmic}
\end{algorithm}
\paragraph{Singular value decomposition} As modern methods for computing the top entries of a singular value decomposition scale very well even for large sparse matrices~(\cite{martinsson2011randomized,szlam2014implementation,fbpca}), the ``insertion'' step of greedy and local search, in which the top entry of the SVD of the gradient is determined, is quite fast in practice. However, these methods are not suited for computing the \emph{smallest} singular values and corresponding singular vectors, a step required by the local search algorithm that we propose. Therefore, in our practical implementations we opt to perform the alternative step of directly removing one pair of vectors from the representation $UV^\top$. A simple approach is to go over all $r$ possible removals and pick the one that increases the objective by the least amount. A variation of this approach has been used by~\cite{shalev2011large}. However, a much faster approach is to just pick the pair of vectors $U\mathbf{1}_i$, $V\mathbf{1}_i$ that minimizes $\|U\mathbf{1}_i\|_2\|V\mathbf{1}_i\|_2$. This is the approach that we use, as can be seen in Algorithm~\ref{alg:rank_reduction_fast}.
\begin{algorithm}
\caption{Fast rank reduction}\label{alg:rank_reduction_fast}
\begin{algorithmic}[1]
\Procedure{Truncate\_Fast}{$U\in\mathbb{R}^{m\times r}, V\in\mathbb{R}^{n\times r}$}
\State $i \leftarrow \underset{i\in[r]}{\argmin}\, \|U\mathbf{1}_i\|_2 \|V\mathbf{1}_i\|_2$
\State \Return\, $\begin{pmatrix}U_{[m],[1,i-1]} & U_{[m],[i+1,r]}\end{pmatrix}, \begin{pmatrix}V_{[n],[1,i-1]} & V_{[n],[i+1,r]}\end{pmatrix}$ \Comment{Remove column $i$}
\EndProcedure
\end{algorithmic}
\end{algorithm}
\ifx 0
\subsubsection*{Local search initialization}
As we have seen in Algorithm~\ref{alg:local_search}, the local search algorithm can be initialized by an arbitrary solution. In practice, however, we can do much better. We choose to initialize by running Algorithm~\ref{alg:greedy} and using the returned value as the initial solution for Algorithm~\ref{alg:local_search}. This combines the best of both algorithms and has good performance in practice.
\fi
After the previous discussion, we are ready to state the fast versions of Algorithm~\ref{alg:greedy} and Algorithm~\ref{alg:local_search} that we use in our experiments. These are Algorithm~\ref{alg:fast_greedy} and Algorithm~\ref{alg:fast_local_search}. Notice that we initialize Algorithm~\ref{alg:fast_local_search} with the solution of Algorithm~\ref{alg:fast_greedy}, and we run it until the value of $R(\cdot)$ stops decreasing rather than for a fixed number of iterations.
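To illustrate the insertion step discussed in the ``Singular value decomposition'' paragraph above, the following is a minimal Python sketch (assuming SciPy; the helper name and argument layout are ours and are not part of our released implementation) that extracts the top singular triplet of a gradient supported on an observed set $\Omega$, as in matrix completion, using a sparse truncated SVD:
\begin{verbatim}
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import svds

def top_singular_pair(grad_vals, rows, cols, shape):
    # For the matrix-completion objective the gradient is -Pi_Omega(M - U V^T),
    # i.e. nonzero only on Omega, so a sparse representation suffices.
    G = csr_matrix((grad_vals, (rows, cols)), shape=shape)
    u, s, vt = svds(G, k=1)   # truncated SVD: top singular triplet only
    return s[0], u[:, 0], vt[0, :]
\end{verbatim}
The returned pair $(u, v)$ is what gets appended as a new column of $U$ and $V$ in Algorithms~\ref{alg:fast_greedy} and~\ref{alg:fast_local_search}.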
\begin{alg}[Fast Greedy]
The Fast Greedy algorithm is defined identically to Algorithm~\ref{alg:greedy}, with the only difference that it uses the \textsc{Optimize\_Fast} routine instead of the \textsc{Optimize} routine.
\label{alg:fast_greedy}
\end{alg}
\ifx 0
\begin{algorithm}
\caption{Fast Greedy}\label{alg:fast_greedy}
\begin{algorithmic}[1]
\Procedure{Fast\_Greedy}{$r\in\mathbb{N}$ : target rank}
\State function to be minimized $R:\mathbb{R}^{m\times n}\rightarrow\mathbb{R}$
\State $U \in \mathbb{R}^{m\times 0}$ \Comment{Initially rank is zero}
\State $V \in \mathbb{R}^{n\times 0}$
\For {$t=0\dots r-1$}
\State $\sigma u v^\top \leftarrow H_1(\nabla R(UV^\top))$ \label{step:gradient_greedy} \Comment{Max singular value $\sigma$ and corresp. singular vectors $u,v$}
\State $U \leftarrow \begin{pmatrix}U & u\end{pmatrix}$ \Comment{Append new vectors as columns}
\State $V \leftarrow \begin{pmatrix}V & v\end{pmatrix}$
\State $U, V \leftarrow \textsc{Optimize\_Fast}(U, V, t)$
\EndFor
\State \Return\, $UV^\top$
\EndProcedure
\end{algorithmic}
\end{algorithm}
\fi
\begin{algorithm}
\caption{Fast Local Search}\label{alg:fast_local_search}
\begin{algorithmic}[1]
\Procedure{Fast\_Local\_Search}{$r\in\mathbb{N}$ : target rank}
\State function to be minimized $R:\mathbb{R}^{m\times n}\rightarrow\mathbb{R}$
\State $U, V \leftarrow \text{solution returned by \textsc{Fast\_Greedy}($r$)}$
\Do
\State $U_{\mathrm{prev}}, V_{\mathrm{prev}} \leftarrow U, V$
\State $\sigma u v^\top \leftarrow H_1(\nabla R(UV^\top))$ \label{step:gradient_local_search} \Comment{Max singular value $\sigma$ and corresp. singular vectors $u,v$}
\State $U, V \leftarrow \textsc{Truncate\_Fast}(U, V)$ \Comment{Reduce rank of $UV^\top$ by one}
\State $U \leftarrow \begin{pmatrix}U & u\end{pmatrix}$ \Comment{Append new vectors as columns}
\State $V \leftarrow \begin{pmatrix}V & v\end{pmatrix}$
\State $U, V \leftarrow \textsc{Optimize\_Fast}(U, V, t)$
\doWhile {$R(UV^\top) < R(U_{\mathrm{prev}} V_{\mathrm{prev}}^\top)$}
\State \Return\, $U_{\mathrm{prev}}V_{\mathrm{prev}}^\top$
\EndProcedure
\end{algorithmic}
\end{algorithm}
\section{Optimization Applications}
\label{sec:optimization}
An immediate application of the above algorithms is the problem of \emph{low-rank matrix recovery}. Given any convex distance measure between matrices $d:\mathbb{R}^{m\times n}\times \mathbb{R}^{m\times n}\rightarrow \mathbb{R}_{\geq 0}$, the goal is to find a low-rank matrix $A$ that matches a target matrix $M$ as well as possible in terms of $d$:
\[ \underset{\mathrm{rank}(A)\leq r^*}{\min}\, d(M, A)\,. \]
This problem captures many different applications, some of which we go over in the following sections.
\subsection{Low-rank approximation on an observed set}
\label{sec:svd_missing_data}
A particular case of interest is when $d(M, A)$ measures the squared Frobenius norm of $M-A$ restricted to the entries belonging to some set $\Omega$, i.e. $d(M,A) = \frac{1}{2} \|\Pi_{\Omega}(M-A)\|_F^2$. We have compared our Fast Greedy and Fast Local Search algorithms with the SoftImpute algorithm of~\cite{mazumder2010spectral}, as implemented by~\cite{fancyimpute}, on the same experiments as in~\cite{mazumder2010spectral}. We solve the inner optimization problem required by our algorithms with the LSQR algorithm~\cite{lsqr}. More specifically, $M=UV^\top+\eta\in\mathbb{R}^{100\times 100}$, where $\eta$ is a noise matrix.
We let every entry of $U,V,\eta$ be i.i.d. normal with mean $0$, and the entries of $\Omega$ are chosen i.i.d. uniformly at random from the set $[100]\times[100]$. The experiments have three parameters: the true rank $r^*$ (of $UV^\top$), the fraction of observed entries $p=|\Omega|/10^4$, and the signal-to-noise ratio SNR. We measure the normalized MSE, i.e. $\|\Pi_{\Omega}(M-A)\|_F^2/\|\Pi_{\Omega}(M)\|_F^2$. The results can be seen in Figure~\ref{fig:matrix_completion_train}, which illustrates that Fast Local Search sometimes returns significantly more accurate and lower-rank solutions than Fast Greedy, and Fast Greedy generally returns significantly more accurate and lower-rank solutions than SoftImpute.
\begin{figure}
\caption{$r^*=5$, $p=0.2$, $SNR=10$}
\caption{$r^*=6$, $p=0.5$, $SNR=1$}
\caption{Objective value error vs rank in the problem of Section~\ref{sec:svd_missing_data}.}
\label{fig:matrix_completion_train}
\end{figure}
\subsection{Robust principal component analysis (RPCA)}
\label{sec:rpca}
The robust PCA paradigm asks one to decompose a given matrix $M$ as $L+S$, where $L$ is a low-rank matrix and $S$ is a sparse matrix. This is useful for applications with outliers, where directly computing the principal components of $M$ would be significantly affected by them. For a comprehensive survey on robust PCA, see~\cite{bouwmans2018applications}. The following optimization problem encodes the above requirements:
\begin{equation}
\begin{aligned}
\underset{\mathrm{rank}(L) \leq r^*}{\min}\, \|M - L\|_0
\end{aligned}
\label{eq:rpca}
\end{equation}
where $\|X\|_0$ is the sparsity (i.e. the number of non-zeros) of $X$. As neither the rank constraint nor the $\ell_0$ function is convex, \cite{candes2011robust} replaced them by their usual convex relaxations, i.e. the nuclear norm $\|\cdot\|_*$ and the $\ell_1$ norm respectively. However, we opt to relax only the $\ell_0$ function and not the rank constraint, leaving us with the problem:
\begin{equation}
\begin{aligned}
\underset{\mathrm{rank}(L) \leq r^*}{\min}\, \|M - L\|_1
\end{aligned}
\label{eq:rpca_ours}
\end{equation}
In order to make the objective differentiable and thus better behaved, we further replace the $\ell_1$ norm by the Huber loss
$H_\delta(x) = \begin{cases} x^2/2 & \text{if $|x| \leq \delta$}\\ \delta |x| - \delta^2 / 2 & \text{otherwise} \end{cases}$,
thus obtaining $\underset{\mathrm{rank}(L) \leq r^*}{\min}\, \sum\limits_{ij} H_{\delta}\left((M - L)_{ij}\right)$. This is a problem on which we can directly apply our algorithms. We solve the inner optimization problem by applying 10 L-BFGS iterations. In Figure~\ref{fig:rpca} one can see an example of foreground-background separation from video using robust PCA. The video is from the BMC 2012 dataset~\cite{vacavant2012benchmark}. In this problem, the low-rank part corresponds to the background and the sparse part to the foreground. We compare three algorithms: our Fast Greedy algorithm, standard PCA with 1 component (the choice of 1 component gave the best outcome), and the standard Principal Component Pursuit (PCP) algorithm~(\cite{candes2011robust}), as implemented in~\cite{lin2010augmented}, where we tuned the regularization parameter $\lambda$ to achieve the best result. We find that Fast Greedy has the best performance of the three algorithms on this sample task.
\begin{figure}
\caption{Foreground-background separation from video. From left to right: Fast Greedy with rank $3$ and Huber loss with $\delta=20$. Standard PCA with rank $1$.
Principal Component Pursuit (PCP) with $\lambda=0.002$. Both PCA and PCP have visible ``shadows'' in the foreground that appear as ``smudges'' in the background. These are less obvious in a still frame but more apparent in a video.}
\label{fig:rpca}
\end{figure}
\section{Machine Learning Applications}
\label{sec:ML}
\subsection{Regularization techniques}
In the previous section we showed that our proposed algorithms drive down various optimization objectives aggressively. However, in applications where the goal is a low generalization error, regularization is needed. We considered two different kinds of regularization. The first is to run the inner optimization algorithm for fewer iterations, usually 2--3. This is usually straightforward, since an iterative method is used. For example, in the case $R(A) = \frac{1}{2} \|\Pi_{\Omega}(M - A)\|_F^2$ the inner optimization is a least squares linear regression problem that we solve using the LSQR algorithm. The second is to add an $\ell_2$ regularizer to the objective function. However, this option did not provide a substantial performance boost in our experiments, and so we did not include it in our implementation.
\subsection{Matrix Completion with Random Noise}
\label{sec:matrix_completion_test}
In this section we evaluate our algorithms on the task of recovering a low-rank matrix $UV^\top$ after observing $\Pi_{\Omega}(UV^\top + \eta)$, i.e. a fraction of its entries with added noise. As in Section~\ref{sec:svd_missing_data}, we use the setting of~\cite{mazumder2010spectral} and compare with the SoftImpute method. The evaluation metric is the normalized MSE, defined as $(\sum\limits_{(i,j)\notin \Omega} (UV^\top - A)_{ij}^2)/(\sum\limits_{(i,j)\notin \Omega} (UV^\top)_{ij}^2)$, where $A$ is the predicted matrix and $UV^\top$ the true low-rank matrix. A few example plots can be seen in Figure~\ref{fig:matrix_completion_test_1} and a table of results in Table~\ref{table:matrix_completion_test}. We have implemented the Fast Greedy and Fast Local Search algorithms with 3 inner optimization iterations. In the first few iterations there is a spike in the relative MSE of the algorithms that use the \textsc{Optimize\_Fast} routine. We attribute this to the aggressive alternating minimization steps of this procedure and conjecture that adding a regularization term to the objective might smooth out the spike. However, the Fast Local Search algorithm still gives the best overall performance in terms of how well it approximates the true low-rank matrix $UV^\top$, and in particular it does so with a very small rank---practically the same as the true underlying rank.
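As an illustration of the first regularization technique (capping the inner solver's iterations), the following short sketch solves one per-column least squares system with LSQR stopped after a few iterations; it assumes SciPy, the iteration cap of 3 mirrors the setting used in our experiments, and the helper name is ours:
\begin{verbatim}
from scipy.sparse.linalg import lsqr

def regularized_column_solve(U_obs, b_obs, inner_iters=3):
    # Early-stopped LSQR on the system U_obs x ~= b_obs; deliberately not
    # solving it to optimality acts as implicit regularization.
    x = lsqr(U_obs, b_obs, iter_lim=inner_iters)[0]
    return x
\end{verbatim}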
\begin{figure}
\caption{$k=5$, $p=0.2$, $SNR=10$}
\label{fig:sub1}
\caption{$k=6$, $p=0.5$, $SNR=1$}
\label{fig:sub2}
\caption{Test error vs rank in the matrix completion problem of Section~\ref{sec:matrix_completion_test}.}
\label{fig:matrix_completion_test_1}
\end{figure}
\begin{table}
\centering
\begin{tabular}{||c c c c||}
\hline
Algorithm & random\_error\_13 & random\_error\_1 & random\_error\_2 \\ [0.5ex]
\hline\hline
SoftImpute (\cite{mazumder2010spectral}) & $0.1759/10$ & \makecell{$0.2465/28$} & \makecell{$0.2284/30$} \\
\hline
Fast Greedy (Algorithm~\ref{alg:fast_greedy}) & \makecell{$0.0673/30$} & \makecell{$0.1948/13$} & \makecell{$0.1826/21$} \\
\hline
Fast Local Search (Algorithm~\ref{alg:fast_local_search}) & \makecell{$0.0613/14$} & \makecell{$0.1952/15$} & \makecell{$0.1811/15$} \\ [1ex]
\hline
\end{tabular}
\caption{Lowest test error for any rank in the matrix completion problem of Section~\ref{sec:matrix_completion_test}, together with the rank at which each algorithm attains it, in the form error/rank.}
\label{table:matrix_completion_test}
\end{table}
\subsection{Recommender Systems}
In this section we compare our algorithms on the task of movie recommendation on the Movielens datasets~\cite{movielens}. In order to evaluate the algorithms, we perform random 80\%--20\% train-test splits that are the same for all algorithms, and measure the mean RMSE on the test set. If we let $\Omega\subseteq [m]\times [n]$ be the set of user-movie pairs in the training set, we assume that the true user-movie matrix is low rank, and thus pose~(\ref{eq:problem}) with $R(A) = \frac{1}{2} \|\Pi_{\Omega}(M - A)\|_F^2$. We make the following slight modification in order to take into account the range $[1,5]$ of the ratings: we clip the entries of $A$ between $1$ and $5$ when computing $\nabla R(A)$ in Algorithm~\ref{alg:fast_greedy} and Algorithm~\ref{alg:fast_local_search}. In other words, instead of $\Pi_{\Omega}(A - M)$ we compute the gradient as $\Pi_{\Omega}(\mathrm{clip}(A,1,5)-M)$. This is similar to replacing our objective by a Huber loss, with the difference that we only do so in the steps mentioned above and not in the inner optimization step, mainly for runtime efficiency reasons. The results can be seen in Table~\ref{table:movie_results}. We do not compare with Fast Local Search, as we found that it only provides an advantage for small ranks ($<30$), and otherwise matches Fast Greedy. For the inner optimization steps we used the LSQR algorithm with $2$ iterations on the 100K and 1M datasets, and with $3$ iterations on the 10M dataset. Note that even though the SVD algorithm of~\cite{koren2009matrix}, as implemented by~\cite{surprise} (with no user/movie bias terms), is a highly tuned algorithm for recommender systems that was one of the top solutions in the famous Netflix prize, it has performance comparable to our general-purpose Algorithm~\ref{alg:fast_greedy}.
\begin{table}[h!]
\centering
\begin{tabular}{||c c c c||}
\hline
Algorithm & MovieLens 100K & MovieLens 1M & MovieLens 10M \\ [0.5ex]
\hline\hline
NMF (\cite{lee2001algorithms}) & $0.9659$ & $0.9166$ & $0.8960$ \\
\hline
SoftImpute & $1.0106$ & $0.9599$ & $0.957$ \\
\hline
Alternating Minimization & $\mathbf{0.9355}$ & $0.8732$ & $0.8410$ \\
\hline
SVD (\cite{koren2009matrix}) & $0.9533$ & $0.8743$ & $\mathbf{0.8315}$ \\
\hline
Fast Greedy (Algorithm~\ref{alg:fast_greedy}) & $0.9451$ & $\mathbf{0.8714}$ & $0.8330$ \\
\hline
\end{tabular}
\caption{Mean RMSE over 5 random splits for 100K and 1M (standard errors $<0.01$) and over 3 random splits for 10M (standard errors $<0.001$). The rank of the prediction is set to $100$, except for NMF where it is $15$ and for Fast Greedy on the 10M dataset where it is chosen to be $35$ by cross-validation. Alternating Minimization is a well-known algorithm (e.g. \cite{srebro2004maximum}) that alternately minimizes over the left and right subspaces, and also uses Frobenius norm regularization. For SoftImpute and Alternating Minimization we found the best choice of parameters by performing a grid search over the rank and the multiplier of the regularization term. We ran 20 iterations of Alternating Minimization in each case.}
\label{table:movie_results}
\end{table}
Finally, Table~\ref{table:time} demonstrates the speedup achieved by our algorithms over the basic greedy implementation. It should be noted that the speedup compared to the basic greedy of~\cite{shalev2011large} (Algorithm~\ref{alg:greedy}) grows as the rank increases, since the fast algorithms scale linearly with the rank while the basic greedy scales quadratically.
\begin{table}[h!]
\centering
\begin{tabular}{||c c c c||}
\hline
Algorithm & Figure~\ref{fig:matrix_completion_test_1} (a) & Movielens 100K & Movielens 1M \\ [0.5ex]
\hline\hline
SoftImpute & $10.6$ & $9.4$ & $40.6$ \\
\hline
Alternating Minimization & $18.9$ & $252.0$ & $1141.4$ \\
\hline
Greedy~(\cite{shalev2011large}) & $18.8$ & $418.4$ & $4087.3$ \\
\hline
Fast Greedy & $10.2$ & $43.4$ & $244.2$ \\
\hline
Fast Local Search & $10.8$ & $46.1$ & $263.0$ \\
\hline
\end{tabular}
\caption{Runtimes (in seconds) of different algorithms for fitting a rank-$30$ solution in various experiments. Code written in Python and run on an Intel Skylake CPU with 16 vCPUs.}
\label{table:time}
\end{table}
It is important to note that our goal here is not to be competitive with the best known algorithms for matrix completion, but rather to propose a general yet practically applicable method for rank-constrained convex optimization. For a recent survey of the best performing algorithms on the Movielens datasets see~\cite{rendle2019difficulty}. It should be noted that many of these algorithms have a significant performance advantage over our methods because they use additional features (meta-information about each user, movie, timestamp of a rating, etc.) or stronger models (user/movie biases, ``implicit'' ratings). A runtime comparison with these recent approaches is an interesting avenue for future work. As a rule of thumb, however, Fast Greedy has roughly the same per-iteration runtime as SVD (\cite{koren2009matrix}), i.e. $O(|\Omega| r)$, where $\Omega$ is the set of observable elements and $r$ is the rank.
As some better performing approaches have been reported to be much slower than SVD (e.g. SVD++ is reported to be 50--100x slower than SVD on the Movielens 100K and 1M datasets~\cite{surprise}), this also suggests a possible runtime advantage of our approach compared to some better performing methods.
\FloatBarrier
\section{Conclusions}
We presented simple algorithms with strong theoretical guarantees for optimizing a convex function under a rank constraint. Although the basic versions of these algorithms have appeared before, through a series of careful runtime, optimization, and generalization performance improvements we have managed to reshape the performance of these algorithms on all fronts. Via experimental validation on a host of practical problems, such as low-rank matrix recovery with missing data, robust principal component analysis, and recommender systems, we have shown that the solution quality matches or exceeds that of other widely used and even specialized solutions, making the case that our Fast Greedy and Fast Local Search routines can be regarded as strong and practical general-purpose tools for rank-constrained convex optimization. Interesting directions for further research include the exploration of different kinds of regularization and tuning for machine learning applications, as well as a competitive implementation and extensive runtime comparison of our algorithms.
\appendix
\section{Appendix}
\subsection{Preliminaries and Notation}
Given a positive integer $k$, we denote $[k] = \{1,2,\dots, k\}$. Given a matrix $A$, we denote by $\|A\|_F$ its Frobenius norm, i.e. the $\ell_2$ norm of the entries of $A$ (or, equivalently, of the singular values of $A$). The following lemma is a simple corollary of the definition of the Frobenius norm:
\begin{lemma}
Given two matrices $A\in\mathbb{R}^{m\times n}$, $B\in\mathbb{R}^{m\times n}$, we have $\|A + B\|_F^2 \leq 2 \left(\|A\|_F^2 + \|B\|_F^2\right)$.
\label{lem:frobenius_sum2}
\end{lemma}
\begin{proof}
\[ \|A + B\|_F^2 = \sum\limits_{ij} (A+B)_{ij}^2 \leq 2\sum\limits_{ij} (A_{ij}^2+B_{ij}^2) = 2(\|A\|_F^2 + \|B\|_F^2) \]
\end{proof}
\begin{definition}[Rank-restricted smoothness, strong convexity, condition number]
Given a convex function $R:\mathbb{R}^{m\times n}\rightarrow \mathbb{R}$ and an integer parameter $r$, the \emph{rank-restricted smoothness of $R$ at rank $r$} is the minimum constant $\rho_r^+ \geq 0$ such that for any two matrices $A\in\mathbb{R}^{m\times n}, B\in\mathbb{R}^{m\times n}$ with $\mathrm{rank}(A-B) \leq r$, we have
\[ R(B) \leq R(A) + \langle \nabla R(A), B-A\rangle + \frac{\rho_r^+}{2} \|B-A\|_F^2 \,. \]
Similarly, the \emph{rank-restricted strong convexity of $R$ at rank $r$} is the maximum constant $\rho_r^- \geq 0$ such that for any two matrices $A\in\mathbb{R}^{m\times n}, B\in\mathbb{R}^{m\times n}$ with $\mathrm{rank}(A-B) \leq r$, we have
\[ R(B) \geq R(A) + \langle \nabla R(A), B-A\rangle + \frac{\rho_r^-}{2} \|B-A\|_F^2 \,. \]
Given that $\rho_r^+, \rho_r^-$ exist and are nonzero, the \emph{rank-restricted condition number of $R$ at rank $r$} is then defined as
\[ \kappa_r = \frac{\rho_r^+}{\rho_r^-} \]
\end{definition}
Note that $\rho_r^+$ is increasing and $\rho_r^-$ is decreasing in $r$.
Therefore, even though our bounds are proven in terms of the constants $\frac{\rho_1^+}{\rho_r^-}$ and $\frac{\rho_2^+}{\rho_r^-}$, these quantities are always at most $\frac{\rho_r^+}{\rho_r^-} = \kappa_r$ as long as $r \geq 2$, and so they directly imply the same bounds in terms of the constant $\kappa_r$.
\begin{definition}[Spectral norm]
Given a matrix $A\in\mathbb{R}^{m\times n}$, we denote its spectral norm by $\|A\|_2$, defined as
\[ \|A\|_2=\underset{x\in\mathbb{R}^n}{\max}\, \frac{\|A x\|_2}{\|x\|_2} \,. \]
\end{definition}
\begin{definition}[Singular value thresholding operator]
Given a matrix $A\in\mathbb{R}^{m\times n}$ of rank $k$, a singular value decomposition $A = U\Sigma V^\top$ such that $\Sigma_{11} \geq \Sigma_{22} \geq \dots \geq \Sigma_{kk}$, and an integer $1 \leq r \leq k$, we define $H_r(A) = U \Sigma' V^\top$, where $\Sigma'$ is a diagonal matrix with
\begin{align*}
\Sigma_{ii}' = \begin{cases} \Sigma_{ii} & \text{if $i \leq r$}\\ 0 & \text{otherwise} \end{cases}
\end{align*}
In other words, $H_r(\cdot)$ is the operator that eliminates all but the top $r$ singular values of a matrix.
\end{definition}
\begin{lemma}[Weyl's inequality]
For any matrix $A$ and integer $i\geq 1$, let $\sigma_i(A)$ be the $i$-th largest singular value of $A$, or $0$ if $i > \mathrm{rank}(A)$. Then, for any two matrices $A,B$ and integers $i\geq 1, j\geq 1$:
\[ \sigma_{i+j-1}(A+B) \leq \sigma_i(A) + \sigma_j(B) \]
\label{lem:weyl}
\end{lemma}
A proof of this fact can be found, e.g., in~\cite{fisk1996note}.
\begin{lemma}[$H_r(\cdot)$ optimization problem]
Let $A\in\mathbb{R}^{m\times n}$ be a rank-$k$ matrix and $r\in[k]$ be an integer parameter. Then $M=\frac{1}{\lambda} H_r(A)$ is an optimal solution to the following optimization problem:
\begin{align}
\underset{\mathrm{rank}(M) \leq r}{\max}\,\left\{\langle A, M\rangle - \frac{\lambda}{2} \|M\|_F^2\right\}
\label{eq:frobenius_rank_optimization}
\end{align}
\label{lem:frobenius_rank_optimization}
\end{lemma}
\begin{proof}
Let $U\Sigma V^\top = \sum\limits_i \Sigma_{ii} U_i V_i^\top$ be a singular value decomposition of $A$. We note that (\ref{eq:frobenius_rank_optimization}) is equivalent to
\begin{align}
\underset{\mathrm{rank}(M) \leq r}{\min}\, \|A - \lambda M\|_F^2 =: f(M)
\label{eq:frobenius_rank_optimization2}
\end{align}
Now, note that $f(\frac{1}{\lambda} H_r(A)) = \|A - H_r(A)\|_F^2 = \sum\limits_{i=r+1}^k \Sigma_{ii}^2$. On the other hand, by applying Weyl's inequality (Lemma~\ref{lem:weyl}) with $j=r+1$,
\[ f(M) = \|A - \lambda M\|_F^2 = \sum\limits_{i=1}^{k+r} \sigma_i^2(A - \lambda M) \geq \sum\limits_{i=1}^{k+r} (\sigma_{i+r}(A) - \sigma_{r+1}(\lambda M))^2 = \sum\limits_{i=r+1}^{k} \Sigma_{ii}^2\,, \]
where the last equality follows from the fact that $\mathrm{rank}(A) = k$ and $\mathrm{rank}(M) \leq r$. Therefore, $M=\frac{1}{\lambda} H_r(A)$ minimizes (\ref{eq:frobenius_rank_optimization2}) and thus maximizes (\ref{eq:frobenius_rank_optimization}).
\end{proof}
\subsection{Proof of Theorem~\ref{thm:greedy} (greedy)}
\label{proof_thm:greedy}
We will start with the following simple lemma about the Frobenius norm of a sum of matrices with orthogonal columns or rows:
\begin{lemma}
Let $U\in\mathbb{R}^{m\times r}, V\in\mathbb{R}^{n\times r}, X\in\mathbb{R}^{m\times r}, Y\in\mathbb{R}^{n\times r}$ be such that the columns of $U$ are orthogonal to the columns of $X$ \emph{or} the columns of $V$ are orthogonal to the columns of $Y$. Then $\|UV^\top + XY^\top \|_F^2 = \|UV^\top\|_F^2 + \|XY^\top\|_F^2$.
\label{lem:frobenius_sum}
\end{lemma}
\begin{proof}
If the columns of $U$ are orthogonal to those of $X$, then $U^\top X = \mathbf{0}$, and if the columns of $V$ are orthogonal to those of $Y$, then $Y^\top V = \mathbf{0}$. Therefore, in either case $\langle UV^\top, XY^\top\rangle = \mathrm{Tr}(VU^\top XY^\top) = \mathrm{Tr}(U^\top X Y^\top V) = 0$, implying
\begin{align*}
\|UV^\top + XY^\top\|_F^2 = \|UV^\top\|_F^2 + \|XY^\top\|_F^2 + 2\langle UV^\top, XY^\top \rangle = \|UV^\top\|_F^2 + \|XY^\top\|_F^2
\end{align*}
\end{proof}
Additionally, we have the following lemma regarding the optimality conditions of (\ref{eq:inner_optimization}):
\begin{lemma}
Let $A = U X V^\top$ where $U\in\mathbb{R}^{m\times r}$, $X\in\mathbb{R}^{r\times r}$, and $V\in\mathbb{R}^{n\times r}$, such that $X$ is an optimal solution to (\ref{eq:inner_optimization}). Then for any $u\in\mathrm{im}(U)$ and $v\in\mathrm{im}(V)$ we have $\langle \nabla R(A), uv^\top\rangle = 0$.
\label{lem:gradient_zero}
\end{lemma}
\begin{proof}
By the optimality condition of (\ref{eq:inner_optimization}), we have
\[ U^\top \nabla R(A) V = \mathbf{0} \]
Now, for any $u = Ux$ and $v = Vy$ we have
\[ \langle \nabla R(A), uv^\top\rangle = u^\top \nabla R(A) v = x^\top U^\top \nabla R(A) V y = 0 \]
\end{proof}
We are now ready for the proof of Theorem~\ref{thm:greedy}.
\begin{proof}
Let $A_{t-1}$ be the current solution $UV^\top$ before iteration $t$, where $t\geq 1$. Let $u\in \mathbb{R}^m$ and $v\in \mathbb{R}^n$ be left and right singular vectors of the matrix $\nabla R(A_{t-1})$ corresponding to its maximum singular value, i.e. unit vectors maximizing $|\langle \nabla R(A_{t-1}), uv^\top\rangle|$. Let
$${\cal B}_t=\{B \mid B=A_{t-1}+\eta u v^\top, \eta\in \mathbb{R}\}.$$
By smoothness we have
\begin{align*}
R(A_{t-1}) - R(A_t) & \geq \max_{B\in {\cal B}_t} \left\{R(A_{t-1}) - R(B) \right\}\\
& \geq \max_{B\in {\cal B}_t} \left\{ - \langle \nabla R(A_{t-1}), B - A_{t-1}\rangle - \frac{\rho_1^+}{2} \|B - A_{t-1}\|_F^2 \right\}\\
& \geq \max_{\eta} \left\{\eta \langle \nabla R(A_{t-1}), uv^\top \rangle - \eta^2 \frac{\rho_1^+}{2}\right\} \\
& = \max_{\eta} \left\{\eta \|\nabla R(A_{t-1})\|_{2} - \eta^2 \frac{\rho_1^+}{2} \right\} \\
& = \frac{\|\nabla R(A_{t-1})\|_{2}^2 }{ 2 \rho_1^+ }
\end{align*}
where $\|\cdot\|_{2}$ is the spectral norm (i.e. the maximum singular value). On the other hand, by strong convexity and noting that
\[ \mathrm{rank}(A^* - A_{t-1}) \leq \mathrm{rank}(A^*) + \mathrm{rank}(A_{t-1}) \leq r^* + r\,, \]
\begin{equation}
\begin{aligned}
R(A^*) - R(A_{t-1}) & \geq \langle \nabla R(A_{t-1}), A^* - A_{t-1}\rangle + \frac{\rho_{r+r^*}^-}{2} \|A^* - A_{t-1}\|_F^2\,.
\\ \label{eq:greedy_strconv}
\end{aligned}
\end{equation}
Let $A_{t-1} = UV^\top$ and $A^* = U^*V^{*\top}$. We let $\Pi_{\mathrm{im}(U)} = U(U^\top U)^+U^\top$ and $\Pi_{\mathrm{im}(V)} = V(V^\top V)^+V^\top$ denote the orthogonal projections onto the images of $U$ and $V$ respectively. We now write
\[ A^* = U^*V^{*\top} = (U^1 + U^2)(V^1 + V^2)^\top = U^1 V^{1\top} + U^1 V^{2\top} + U^2 V^{*\top} \]
where $U^1 = \Pi_{\mathrm{im}(U)} U^*$ is the matrix in which every column of $U^*$ is replaced by its projection onto $\mathrm{im}(U)$ and $U^2 = U^* - U^1$, and similarly $V^1 = \Pi_{\mathrm{im}(V)} V^*$ is the matrix in which every column of $V^*$ is replaced by its projection onto $\mathrm{im}(V)$ and $V^2 = V^* - V^1$. By setting $U' = (-U\ |\ U^1)$ and $V' = (V\ |\ V^1)$ we can write
\[ A^* - A_{t-1} = U' V{'^\top} + U^1 V^{2\top} + U^2 V^{*\top} \]
where $\mathrm{im}(U') = \mathrm{im}(U)$ and $\mathrm{im}(V') = \mathrm{im}(V)$. Also, note that
\begin{align*}
\mathrm{rank}(U^1V^{2\top}) \leq \mathrm{rank}(V^2) \leq \mathrm{rank}(V^*) = \mathrm{rank}(A^*) \leq r^*
\end{align*}
and similarly $\mathrm{rank}(U^2 V^{*\top}) \leq r^*$. So the right hand side of (\ref{eq:greedy_strconv}) can now be rewritten as
\begin{align*}
& \langle \nabla R(A_{t-1}), A^* - A_{t-1}\rangle + \frac{\rho_{r+r^*}^-}{2} \|A^* - A_{t-1}\|_F^2 \\
& = \langle \nabla R(A_{t-1}), U' V{'^\top} + U^1 V^{2\top} + U^2 V^{*\top} \rangle + \frac{\rho_{r+r^*}^-}{2} \|U' V{'^\top} + U^1 V^{2\top} + U^2 V^{*\top} \|_F^2
\end{align*}
Now, note that since by definition the columns of $U'$ are in $\mathrm{im}(U)$ and the columns of $V'$ are in $\mathrm{im}(V)$, Lemma~\ref{lem:gradient_zero} implies that $\langle \nabla R(A_{t-1}), U' V{'^\top}\rangle = 0$. Therefore the above is equal to
\begin{align*}
& \langle \nabla R(A_{t-1}), U^1 V^{2\top} + U^2 V^{*\top} \rangle + \frac{\rho_{r+r^*}^-}{2} \|U' V{'^\top} + U^1 V^{2\top} + U^2 V^{*\top} \|_F^2 \\
& \geq \langle \nabla R(A_{t-1}), U^1 V^{2\top}\rangle + \langle \nabla R(A_{t-1}), U^2 V^{*\top} \rangle + \frac{\rho_{r+r^*}^-}{2} \left(\|U^1 V^{2\top}\|_F^2 + \|U^2 V^{*\top} \|_F^2\right) \\
& \geq 2 \underset{\mathrm{rank}(M) \leq r^*}{\min}\, \left\{\langle \nabla R(A_{t-1}), M\rangle + \frac{\rho_{r+r^*}^-}{2} \|M\|_F^2\right\}\\
& = -2 \frac{\|H_{r^*}(\nabla R(A_{t-1}))\|_{F}^2}{2\rho_{r+r^*}^-}\\
& \geq -r^*\frac{\|\nabla R(A_{t-1})\|_{2}^2}{\rho_{r+r^*}^-}
\end{align*}
where the first inequality follows by noticing that the columns of $V'$ and $V^1$ are orthogonal to those of $V^2$ and the columns of $U'$ and $U^1$ are orthogonal to those of $U^2$, and applying Lemma~\ref{lem:frobenius_sum}; the equality is a direct application of Lemma~\ref{lem:frobenius_rank_optimization}; and the last inequality holds because the largest squared singular value is not smaller than the average of the top $r^*$ squared singular values. Therefore we have concluded that
\[ \|\nabla R(A_{t-1})\|_{2}^2\geq \frac{\rho_{r+r^*}^-}{r^*}\left(R(A_{t-1}) - R(A^*)\right) \]
Plugging this back into the smoothness inequality, we get
\[ R(A_{t-1}) - R(A_t) \geq \frac{1}{2 r^* \kappa} (R(A_{t-1}) - R(A^*)) \]
or equivalently
\[ R(A_{t}) - R(A^*) \leq \left(1 - \frac{1}{2 r^* \kappa}\right) (R(A_{t-1}) - R(A^*))\,.
\]
Therefore, after $L=2 r^* \kappa \log \frac{R(A_0) - R(A^*)}{\epsilon}$ iterations we have
\begin{align*}
R(A_{L}) - R(A^*) & \leq \left(1 - \frac{1}{2 r^* \kappa}\right)^L (R(A_{0}) - R(A^*))\\
& \leq e^{-\frac{L}{2 r^* \kappa}} (R(A_{0}) - R(A^*))\\
& \leq \epsilon
\end{align*}
Since $A_0 = \mathbf{0}$, the result follows.
\end{proof}
\subsection{Proof of Theorem~\ref{thm:local_search} (local search)}
\label{proof_thm:local_search}
\begin{proof}
As in Appendix~\ref{proof_thm:greedy}, let $A_{t-1}$ be the current solution before iteration $t$, where $t\geq 1$. Let $u\in \mathbb{R}^m$ and $v\in \mathbb{R}^n$ be left and right singular vectors of the matrix $\nabla R(A_{t-1})$ corresponding to its maximum singular value, i.e. unit vectors maximizing $|\langle \nabla R(A_{t-1}), uv^\top\rangle|$, and let
$${\cal B}_t=\{B \mid B=A_{t-1}+\eta u v^\top - \sigma_{\min} xy^\top, \eta\in \mathbb{R} \},$$
where $\sigma_{\min} xy^\top = A_{t-1} - H_{r-1}(A_{t-1})$ is the rank-1 term corresponding to the minimum singular value of $A_{t-1}$. By smoothness we have
\begin{align*}
& R(A_{t-1}) - R(A_t) \\
& \geq \max_{B\in {\cal B}_t} \left\{R(A_{t-1}) - R(B) \right\}\\
& \geq \max_{B\in {\cal B}_t} \left\{ - \langle \nabla R(A_{t-1}), B - A_{t-1}\rangle - \frac{\rho_2^+}{2} \|B - A_{t-1}\|_F^2 \right\}\\
& = \max_{\eta\in\mathbb{R}}\left\{ -\langle \nabla R(A_{t-1}), \eta uv^\top - \sigma_{\min} xy^\top \rangle - \frac{\rho_2^+}{2} \|\eta uv^\top - \sigma_{\min} xy^\top \|_F^2 \right\} \\
& \geq \max_{\eta\in\mathbb{R}}\left\{ -\langle \nabla R(A_{t-1}), \eta uv^\top \rangle - \eta^2 \rho_2^+ - \sigma_{\min}^2 \rho_2^+ \right\} \\
& = \max_{\eta\in \mathbb{R}} \left\{ \eta \|\nabla R(A_{t-1})\|_{2} - \eta^2 \rho_2^+ - \sigma_{\min}^2 \rho_2^+ \right\}\\
& = \frac{\|\nabla R(A_{t-1})\|_{2}^2 }{ 4 \rho_2^+ } - \sigma_{\min}^2 \rho_2^+\,,
\end{align*}
where in the last inequality we used the fact that $\langle \nabla R(A_{t-1}), xy^\top \rangle = 0$, which follows from Lemma~\ref{lem:gradient_zero}, as well as Lemma~\ref{lem:frobenius_sum2}. On the other hand, by strong convexity,
\begin{align*}
R(A^*) - R(A_{t-1}) & \geq \langle \nabla R(A_{t-1}), A^* - A_{t-1}\rangle + \frac{\rho_{r+r^*}^-}{2} \|A^* - A_{t-1}\|_F^2\,.
\end{align*}
Let $A_{t-1} = UV^\top$ and $A^* = U^*V^{*\top}$. We write
\[ A^* = U^*V^{*\top} = (U^1 + U^2)(V^1 + V^2)^\top = U^1 V^{1\top} + U^1 V^{2\top} + U^2 V^{*\top} \]
where $U^1$ is the matrix in which every column of $U^*$ is replaced by its projection onto $\mathrm{im}(U)$ and $U^2 = U^* - U^1$, and similarly $V^1$ is the matrix in which every column of $V^*$ is replaced by its projection onto $\mathrm{im}(V)$ and $V^2 = V^* - V^1$. By setting $U' = (-U\ |\ U^1)$ and $V' = (V\ |\ V^1)$ we can write
\[ A^* - A_{t-1} = U' V{'^\top} + U^1 V^{2\top} + U^2 V^{*\top} \]
where $\mathrm{im}(U') = \mathrm{im}(U)$ and $\mathrm{im}(V') = \mathrm{im}(V)$. Also, note that
\begin{align*}
\mathrm{rank}(U^1V^{2\top}) \leq \mathrm{rank}(V^2) \leq \mathrm{rank}(V^*) = \mathrm{rank}(A^*) \leq r^*
\end{align*}
and similarly $\mathrm{rank}(U^2 V^{*\top}) \leq r^*$.
So we now have
\begin{align*}
& \langle \nabla R(A_{t-1}), A^* - A_{t-1}\rangle + \frac{\rho_{r+r^*}^-}{2} \|A^* - A_{t-1}\|_F^2 \\
& = \langle \nabla R(A_{t-1}), U' V{'^\top} + U^1 V^{2\top} + U^2 V^{*\top} \rangle + \frac{\rho_{r+r^*}^-}{2} \|U' V{'^\top} + U^1 V^{2\top} + U^2 V^{*\top} \|_F^2 \\
& = \langle \nabla R(A_{t-1}), U^1 V^{2\top} + U^2 V^{*\top} \rangle + \frac{\rho_{r+r^*}^-}{2} \|U' V{'^\top} + U^1 V^{2\top} + U^2 V^{*\top} \|_F^2 \\
& = \langle \nabla R(A_{t-1}), U^1 V^{2\top} + U^2 V^{*\top} \rangle + \frac{\rho_{r+r^*}^-}{2} \left(\|U' V^{'\top}\|_F^2 + \|U^1 V^{2\top}\|_F^2 + \|U^2 V^{*\top} \|_F^2\right) \\
& \geq \langle \nabla R(A_{t-1}), U^1 V^{2\top}\rangle + \langle \nabla R(A_{t-1}), U^2 V^{*\top} \rangle + \frac{\rho_{r+r^*}^-}{2} \left(\|U^1 V^{2\top}\|_F^2 + \|U^2 V^{*\top} \|_F^2\right) \\
& + \frac{\rho_{r+r^*}^-}{2} \|U' V^{'\top}\|_F^2\\
& \geq 2 \underset{\mathrm{rank}(M) \leq r^*}{\min} \left\{ \langle \nabla R(A_{t-1}), M\rangle + \frac{\rho_{r+r^*}^-}{2} \|M\|_F^2 \right\} + \frac{\rho_{r+r^*}^-}{2} \|U' V^{'\top}\|_F^2\\
& = -2 \frac{\|H_{r^*}(\nabla R(A_{t-1}))\|_{F}^2}{2\rho_{r+r^*}^-} + \frac{\rho_{r+r^*}^-}{2} \|U' V^{'\top}\|_F^2\\
& \geq -r^*\frac{\|\nabla R(A_{t-1})\|_{2}^2}{\rho_{r+r^*}^-} + \frac{\rho_{r+r^*}^-}{2} \|U' V^{'\top}\|_F^2
\end{align*}
where the second equality follows from the fact that $\langle \nabla R(A_{t-1}), uv^\top\rangle = 0$ for any $u\in \mathrm{im}(U), v\in \mathrm{im}(V)$, the third equality from the fact that $\mathrm{im}(U^2)\perp \mathrm{im}(U') \cup \mathrm{im}(U^1)$ and $\mathrm{im}(V^2)\perp \mathrm{im}(V')$ together with Lemma~\ref{lem:frobenius_sum}, and the last inequality from the fact that the largest squared singular value is not smaller than the average of the top $r^*$ squared singular values.
Now, note that since $\mathrm{rank}(U^1 V^{1\top}) \leq r^* < r = \mathrm{rank}(UV^\top)$,
\begin{align*}
\|U'V{'^\top}\|_F^2 & = \|U^1 V^{1\top} - UV^\top\|_F^2 \\
& = \sum\limits_{i=1}^r \sigma_i^2 (U^1 V^{1\top} - UV^\top) \\
& \geq \sum\limits_{i=1}^r (\sigma_{i+r^*}(UV^\top) -\sigma_{r^*+1} (U^1 V^{1\top}))^2 \\
& = \sum\limits_{i=r^*+1}^r \sigma_i^2 (UV^\top) \\
& \geq (r-r^*) \sigma_{\min}^2 (UV^\top) \\
& = (r-r^*) \sigma_{\min}^2(A_{t-1})\,,
\end{align*}
where we used the fact that $\mathrm{rank}(U^1V^{1\top}) \leq r^*$ together with Lemma~\ref{lem:weyl}.
Therefore we have concluded that
\[ \|\nabla R(A_{t-1})\|_{2}^2\geq \frac{\rho_{r+r^*}^-}{r^*} \left(R(A_{t-1}) - R(A^*)\right) + \frac{(\rho_{r+r^*}^-)^2 (r-r^*)}{2r^*} \sigma_{\min}^2 \]
Plugging this back into the smoothness inequality and setting $\widetilde{\kappa} = \frac{\rho_2^+}{\rho_{r+r^*}^-}$, we get
\begin{align*}
R(A_{t-1}) - R(A_t) & \geq \frac{1}{4r^*\widetilde{\kappa}} (R(A_{t-1}) - R(A^*)) + \left(\frac{\rho_{r+r^*}^-(r-r^*)}{8r^*\widetilde{\kappa}} - \rho_2^+ \right)\sigma_{\min}^2(A_{t-1})\\
& \geq \frac{1}{4r^*\widetilde{\kappa}} (R(A_{t-1}) - R(A^*))
\end{align*}
as long as $r \geq r^*(1 + 8\widetilde{\kappa}^2)$, or equivalently,
\[ R(A_{t}) - R(A^*) \leq \left(1 - \frac{1}{4 r^* \widetilde{\kappa}}\right) (R(A_{t-1}) - R(A^*))\,. \]
Therefore, after $L=4 r^* \widetilde{\kappa} \log \frac{R(A_0) - R(A^*)}{\epsilon}$ iterations we have
\begin{align*}
R(A_{L}) - R(A^*) & \leq \left(1 - \frac{1}{4 r^* \widetilde{\kappa}}\right)^L (R(A_{0}) - R(A^*))\\
& \leq e^{-\frac{L}{4 r^* \widetilde{\kappa}}} (R(A_{0}) - R(A^*))\\
& \leq \epsilon
\end{align*}
Since $A_0 = \mathbf{0}$ and $\widetilde{\kappa} \leq \kappa_{r+r^*}$, the result follows.
\end{proof}
\subsection{Tightness of the analysis}
\label{sec:appendix_tightness}
It is important to note that the $\kappa_{r+r^*}$ factor that appears in the rank bounds of both Theorems~\ref{thm:greedy} and \ref{thm:local_search} is inherent to these algorithms and not an artifact of our analysis. In particular, such lower bounds based on the restricted condition number have previously been shown for the problem of sparse linear regression. More specifically, \cite{FKT15} showed that there is a family of instances on which the analogues of Greedy and Local Search for sparse optimization require the sparsity to be $\Omega(s^* \kappa')$ for constant error $\epsilon > 0$, where $s^*$ is the optimal sparsity and $\kappa'$ is the \emph{sparsity}-restricted condition number. These instances can easily be adjusted to give a \emph{rank} lower bound of $\Omega(r^* \kappa_{r+r^*})$ for constant error $\epsilon > 0$, implying that the $\kappa$ dependence in Theorem~\ref{thm:greedy} is tight for Greedy. Furthermore, specifically for Local Search, \cite{axiotis2020sparse} additionally showed that there is a family of instances on which the analogue of Local Search for sparse optimization requires a sparsity of $\Omega(s^* (\kappa')^2)$. Adapting these instances to the setting of rank-constrained convex optimization is less trivial, but we conjecture that it is possible, which would lead to a rank lower bound of $\Omega(r^* \kappa_{r+r^*}^2)$ for Local Search. We present the following lemma, which essentially states that sparse optimization lower bounds for Orthogonal Matching Pursuit (OMP, \cite{pati1993orthogonal}) (resp. Orthogonal Matching Pursuit with Replacement (OMPR, \cite{JTD11})) in which the optimal sparse solution is also a global optimum immediately carry over (up to constants) to rank-constrained convex optimization lower bounds for Greedy (resp. Local Search).
\begin{lemma}
Let $f:\mathbb{R}^n\rightarrow\mathbb{R}$ and let $x^*\in\mathbb{R}^n$ be an $s^*$-sparse vector that is also a global minimizer of $f$.
Also, let $f$ have restricted smoothness parameter $\beta$ at sparsity level $s+s^*$ for some $s \geq s^*$, and restricted strong convexity parameter $\alpha$ at sparsity level $s+s^*$. Then we can define the rank-constrained problem, with $R:\mathbb{R}^{n\times n}\rightarrow \mathbb{R}$,
\begin{align}
\underset{\mathrm{rank}(A)\leq s^*}{\min}\, R(A) := f(\mathrm{diag}(A)) + \frac{\beta}{2} \|A - \mathrm{diag}(A)\|_F^2\,,
\label{rcopt}
\end{align}
where $\mathrm{diag}(A)$ is a vector containing the diagonal of $A$ and, with a slight abuse of notation, $A - \mathrm{diag}(A)$ denotes $A$ with its diagonal entries zeroed out. $R$ has rank-restricted smoothness at rank $s+s^*$ at most $2\beta$ and rank-restricted strong convexity at rank $s+s^*$ at least $\alpha$. Suppose that we run $t$ iterations of OMP (resp. OMPR) starting from a solution $x$ to get a solution $x'$, and similarly run $t$ iterations of Greedy (resp. Local Search) starting from the solution $A = \mathrm{diag}(x)$ (where $\mathrm{diag}(x)$ is the diagonal matrix with $x$ on the diagonal) to get a solution $A'$. Then $A'$ is diagonal and $\mathrm{diag}(A') = x'$. In other words, in this scenario OMP and Greedy (resp. OMPR and Local Search) are equivalent.
\end{lemma}
\begin{proof}
Note that for any solution $\widehat{A}$ of $R$ we have $R(\widehat{A}) \geq f(\mathrm{diag}(\widehat{A})) \geq f(x^*)$, with equality only if $\widehat{A}$ is diagonal. Furthermore, $\mathrm{rank}(\mathrm{diag}(x^*)) \leq s^*$, meaning that $\mathrm{diag}(x^*)$ is an optimal solution of (\ref{rcopt}). Now, given any diagonal solution $A = \mathrm{diag}(x)$ of (\ref{rcopt}), we claim that one step of either Greedy or Local Search keeps it diagonal. This is because
\begin{align*}
\nabla R(A) = \mathrm{diag}(\nabla f(x)) + \beta (A - \mathrm{diag}(A)) = \mathrm{diag}(\nabla f(x)) \,.
\end{align*}
Therefore the largest singular value of $\nabla R(A)$ has corresponding singular vector $\mathbf{1}_i$ for some $i$, which implies that the rank-1 component that will be added is a multiple of $\mathbf{1}_i\mathbf{1}_i^\top$. For the same reason, the rank-1 component removed by Local Search will be a multiple of $\mathbf{1}_j\mathbf{1}_j^\top$ for some $j$. Therefore, running Greedy (resp. Local Search) on such an instance is identical to running OMP (resp. OMPR) on the diagonal.
\end{proof}
Together with the lower bound instances of \cite{FKT15} (in which the global minimum property holds), this immediately implies a rank lower bound of $\Omega(r^* \kappa_{r+r^*})$ for obtaining a solution with constant error for rank-constrained convex optimization. On the other hand, the lower bound instances of \cite{axiotis2020sparse} give a \emph{quadratic} lower bound in $\kappa$ for OMPR. The above lemma cannot be applied directly, since those sparse solutions are not global minima, but we conjecture that a similar proof would give a rank lower bound of $\Omega(r^* \kappa_{r+r^*}^2)$ for rank-constrained convex optimization with Local Search.
\subsection{Addendum to Section~\ref{sec:ML}}
\begin{figure}
\caption{One of the splits of the Movielens 100K dataset.
We can see that for small ranks the Fast Local Search solution is better and more stable, but for larger ranks it does not provide any improvement over the Fast Greedy algorithm.}
\label{fig:sub3}
\end{figure}
\begin{figure}
\caption{$k = 10$, $p = 0.5$, $SNR = 1$}
\label{fig:sub4}
\caption{$k = 10$, $p = 0.3$, $SNR = 3$}
\label{fig:sub5}
\caption{Test error vs rank in the matrix completion problem of Section~\ref{sec:matrix_completion_test}.}
\label{fig:matrix_completion_test_2}
\end{figure}
\begin{figure}
\caption{Train error vs rank}
\caption{Test error vs rank}
\caption{Performance of greedy when fully solving the inner optimization problem (left) and when applying 3 iterations of the LSQR algorithm (right) in the matrix completion problem of Section~\ref{sec:matrix_completion_test}.}
\end{figure}
\end{document}
\begin{document} \title{ On the conjecture of bijection between perfect matching and sub-hypercube in folded hypercubes\thanks{This research was partially supported by the National Natural Science Foundation of China (Nos. 11801061 and 11761056), the Chunhui Project of Ministry of Education (No. Z2017047) and the Fundamental Research Funds for the Central Universities (No. ZYGX2018J083)}} \begin{abstract} Dong and Wang in [Theor. Comput. Sci. 771 (2019) 93--98] conjectured that the graph obtained from the $n$-dimensional folded hypercube $FQ_n$ by deleting any perfect matching is isomorphic to the hypercube $Q_n$. In this paper, we show that the conjecture holds when $n=2,3$, and that it is not true for $n\geq4$. \vskip 0.1 in \noindent \textbf{Key words:} Hypercube; Folded hypercube; Perfect matching; Sub-hypercube; Isomorphic \noindent \textbf{Mathematics Subject Classification:} 05C60, 68R10 \end{abstract} \section{Introduction} Let $G=(V(G),E(G))$ be a graph, where $V(G)$ is the vertex-set of $G$ and $E(G)$ is the edge-set of $G$. A {\em matching} of $G$ is a set of pairwise nonadjacent edges. A {\em perfect matching} of $G$ is a matching of size $|V(G)|/2$. A {\em $k$-factor} of $G$ is a $k$-regular spanning subgraph of $G$. Clearly, a perfect matching, together with the end vertices of its edges, forms a 1-factor of $G$. $G$ is {\em $k$-factorable} if it admits a decomposition into $k$-factors. The distance between two vertices $u$ and $v$, denoted by $d(u, v)$, is the number of edges in a shortest path joining $u$ and $v$ in $G$. For any two edges $uv$ and $xy$, the distance between $uv$ and $xy$, denoted by $d(uv, xy)$, is $\min\{d(u, x), d(u, y), d(v, x), d(v, y)\}$. For other standard graph notation not defined here we refer to \cite{Bondy}. \vskip 0.0 in The well-known $n$-dimensional hypercube is the graph $Q_n$ with $2^n$ vertices and $n2^{n-1}$ edges. Each vertex is labelled by an $n$-bit binary string, and two vertices are adjacent if their binary strings differ in exactly one bit position. The folded hypercube, denoted by $FQ_n$, was first introduced by El-Amawy and Latifi \cite{El-Amawy} as a variant of the hypercube. $FQ_n$ is obtained from the hypercube $Q_n$ by adding $2^{n-1}$ independent edges, called {\em complementary edges}, each of which joins $x_1x_2\cdots x_n$ and $\overline{x}_1\overline{x}_2\cdots \overline{x}_n$, where $\overline{x}_i=1-x_i$, $i=1,\cdots,n$. For convenience, the set of complementary edges of $FQ_n$ is denoted by $E_c$ and the set of $i$-dimensional edges in $Q_n$ is denoted by $E^i$ for each $1\leq i\leq n$, where an edge $uv$ is $i$-dimensional in $Q_n$ if $u$ and $v$ differ only in the $i$-th position. We illustrate $FQ_{3}$ in Fig. \ref{FQ3}. \begin{figure} \caption{The 3-dimensional folded hypercube $FQ_3$.} \label{FQ3} \end{figure} Some attractive properties of the folded hypercube have been widely studied in the literature, such as pancyclicity \cite{Xu}, conditional connectivity \cite{Zhao}, stochastic edge-fault-tolerant routing algorithms \cite{Thuan}, conditional diagnosability \cite{Liu} and conditional cycle embedding \cite{Kuo}. Recently, Dong and Wang \cite{Dong} conjectured the following: \begin{conjecture}{\bf.} A subset $E^m$ of $2^{n-1}$ edges of $FQ_n$ is a perfect matching if and only if $FQ_n-E^m$ is isomorphic to $Q_n$. \end{conjecture} \vskip 0.05 in We settle this conjecture in Section 2. Conclusions are given in Section 3. \section{Main results} The affirmative answer to Dong's conjecture for $n=2,3$ is given in Theorem \ref{iso} below; an exhaustive computational check of the $n=3$ case (and a counterexample for $n=4$) is sketched first.
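Both claims can be checked computationally for small $n$; the following is a minimal brute-force sketch (assuming Python with the \texttt{networkx} package; the matching $M_4$ constructed below is one illustrative choice of a perfect matching that mixes two dimensions, not the only counterexample):
\begin{verbatim}
import itertools
import networkx as nx

def hypercube(n):
    # Q_n: n-bit tuples, adjacent iff they differ in exactly one position
    G = nx.Graph()
    for v in itertools.product((0, 1), repeat=n):
        for i in range(n):
            G.add_edge(v, v[:i] + (1 - v[i],) + v[i + 1:])
    return G

def folded_hypercube(n):
    # FQ_n: Q_n plus the 2^(n-1) complementary edges
    G = hypercube(n)
    for v in list(G.nodes):
        G.add_edge(v, tuple(1 - b for b in v))
    return G

def perfect_matchings(G):
    # brute-force enumeration; fine for the 8-vertex FQ_3
    nodes = sorted(G.nodes)
    def extend(remaining, chosen):
        if not remaining:
            yield list(chosen)
            return
        u = remaining[0]
        for v in G.neighbors(u):
            if v in remaining[1:]:
                rest = [w for w in remaining if w not in (u, v)]
                yield from extend(rest, chosen + [(u, v)])
    yield from extend(nodes, [])

# n = 3: deleting any perfect matching of FQ_3 leaves a copy of Q_3
Q3, FQ3 = hypercube(3), folded_hypercube(3)
for M in perfect_matchings(FQ3):
    H = FQ3.copy()
    H.remove_edges_from(M)
    assert nx.is_isomorphic(H, Q3)

# n = 4: a perfect matching of Q_4 that mixes dimensions 0 and 2
M4 = []
for v in itertools.product((0, 1), repeat=4):
    i = 0 if v[1] == 0 else 2
    u = v[:i] + (1 - v[i],) + v[i + 1:]
    if v < u:
        M4.append((v, u))
H = folded_hypercube(4)
H.remove_edges_from(M4)
print(nx.is_isomorphic(H, hypercube(4)))   # False: FQ_4 - M4 is not Q_4
\end{verbatim}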
\begin{theorem}\label{iso}{\bf.} For any perfect matching $M$ of $FQ_n$, $n=2,3$, $FQ_n-M$ is isomorphic to $Q_n$. \end{theorem} \noindent{\bf Proof.} Clearly, $FQ_2$ is the complete graph $K_4$ and $Q_2$ is a 4-cycle, so the statement holds when $n=2$. Let $M$ be a perfect matching of $FQ_3$. If $M=E_c$, the statement is obviously true. Therefore, we assume that $M\neq E_c$. For convenience, we label the vertices of $FQ_3$ by $u_i$, $i\in\{0,\cdots,7\}$ (see Fig. \ref{FQ3}). We distinguish the following cases. \noindent{\bf Case 1.} $M$ contains no complementary edges. By symmetry of $FQ_3$, we may assume that $u_0u_2\in M$. Then the subgraph induced by the hypercube edges of $FQ_3-\{u_0,u_2\}$, say $H$, is a $2\times 1$-grid with six vertices. Clearly, $H$ has three perfect matchings $M_1=\{u_1u_5,u_3u_7,u_4u_6\}$, $M_2=\{u_1u_3,u_4u_5,u_6u_7\}$ and $M_3=\{u_1u_3,u_4u_6,u_5u_7\}$. Thus, $M=\{u_0u_2\}\cup M_j$ for some $j\in\{1,2,3\}$. By direct checking, $FQ_3-M$ is isomorphic to $Q_3$. \noindent{\bf Case 2.} $M$ contains exactly one complementary edge. Suppose w.l.o.g. that $u_0u_7\in M$. Then the subgraph induced by the hypercube edges of $FQ_3-\{u_0,u_7\}$, say $C$, is a 6-cycle. Clearly, $C$ has two perfect matchings $M_1=\{u_1u_5,u_2u_3,u_4u_6\}$ and $M_2=\{u_1u_3,u_2u_6,u_4u_5\}$. Thus, $M=\{u_0u_7\}\cup M_1$ or $M=\{u_0u_7\}\cup M_2$. By direct checking, $FQ_3-M$ is isomorphic to $Q_3$. \noindent{\bf Case 3.} $M$ contains exactly two complementary edges. By symmetry of $FQ_3$, we may assume that $A\subset M$ or $B\subset M$, where $A=\{u_0u_7,u_2u_5\}$ and $B=\{u_0u_7,u_3u_4\}$. Then $A$ (resp. $B$) can be uniquely extended to a perfect matching of $FQ_3$ by adding two hypercube edges. Thus, $M=\{u_0u_7,u_2u_5,u_1u_3,u_4u_6\}$ or $M=\{u_0u_7,u_3u_4,u_1u_5,u_2u_6\}$. By direct checking, $FQ_3-M$ is isomorphic to $Q_3$. \noindent{\bf Case 4.} $M$ contains exactly three complementary edges. In this case, exactly six vertices of $FQ_3$ are saturated by the complementary edges of $M$, and the remaining two vertices form a complementary pair, which cannot be matched by a hypercube edge. So there exists no perfect matching containing exactly three complementary edges. This completes the proof.\qed \vskip 0.1 in The following lemma is useful. \begin{lemma}\cite{Zhu}\label{common-neighbors}{\bf.} For $n\geq 4$, any two vertices in $V(FQ_n)$ have exactly two common neighbors if they have any. \end{lemma} For $n\geq4$, in fact, we prove the following theorem, which characterizes the relationship between a perfect matching and the sub-hypercube of $FQ_n$. \begin{theorem}{\bf.}\label{non-iso} Let $n\geq4$ be an integer and let $M$ be a perfect matching of $FQ_{n}$. Then $FQ_{n}-M$ is isomorphic to $Q_n$ if and only if $M=E_c$ or $M=E^i$ for some $i\in\{1,2,\cdots,n\}$. \end{theorem} \noindent{\bf Proof.} \textit{Sufficiency.} By the definition of $FQ_n$, if $M=E_c$, the statement is obviously true. Therefore, let $M=E^i$ for some $i\in\{1,2,\cdots,n\}$. We shall show that $FQ_n-E^i$ is isomorphic to $Q_n$. One may first consider the graph $FQ_n-(E^i\cup E_c)$, which consists of two disjoint copies of $Q_{n-1}$. For convenience, let $G=FQ_n-E^i$. The vertices of $G$ are still labelled by $n$-bit binary strings. We define a bijection $\varphi: V(G)\rightarrow V(Q_n)$ as follows: (1) $\varphi(u)=u$ if the $i$-th bit of $u$ is 0; (2) $\varphi(u)=\overline{u}_1\cdots\overline{u}_{i-1}u_i\overline{u}_{i+1}\cdots \overline{u}_n$ if the $i$-th bit of $u$ is 1, where $u=u_1\cdots u_{i-1}u_iu_{i+1}\cdots u_n$.
Let $uv\in E(G)$ be an arbitrary edge. We shall verify that $\varphi$ is an isomorphism. \noindent{\bf Case 1.} $uv$ is a $j$-dimensional edge of $G$, $j\in\{1,\cdots,n\}\setminus\{i\}$. Then $u$ and $v$ differ only in the $j$-th position. We may assume that $u=u_1\cdots u_i\cdots u_j\cdots u_n$ and $v=u_1\cdots u_i\cdots \overline{u}_j\cdots u_n$. If $u_i=0$, then $\varphi(u)=u$ and $\varphi(v)=v$, yielding that $\varphi(u)\varphi(v)\in E(Q_n)$. If $u_i=1$, then $\varphi(u)=\overline{u}_1\cdots\overline{u}_{i-1} u_i\overline{u}_{i+1}\cdots \overline{u}_j\cdots \overline{u}_n$ and $\varphi(v)=\overline{u}_1\cdots\overline{u}_{i-1} u_i\overline{u}_{i+1}\cdots u_j\cdots \overline{u}_n$. Again, $\varphi(u)\varphi(v)\in E(Q_n)$. \noindent{\bf Case 2.} $uv\in E_c$. For convenience, let $u=u_1\cdots u_i\cdots u_n$ and $v=\overline{u}_1\cdots \overline{u}_i$ $\cdots \overline{u}_n$. We may assume that $u_i=0$. Thus, $\varphi(u)=u$ and $\varphi(v)=u_1\cdots \overline{u}_i\cdots u_n$. Therefore, $\varphi(u)\varphi(v)\in E(Q_n)$. By above, it follows that $FQ_{n}-M$ is isomorphic to $Q_n$. \textit{Necessity.} Suppose on the contrary that $M\neq E_c$ and $M\neq E^i$ for each $i\in\{1,2,\cdots,n\}$. We consider the following two cases. \noindent{\bf Case 1.} $M\cap E_c\neq\emptyset$. We claim that there exists a vertex $u$ such that the complementary edge $uv\in E_c$ and one of its neighbors, say $v_1$, is saturated by a hypercube edge $v_1u_1$ in $M$. Suppose not. If all the neighbors of any vertex $u$ in $FQ_n$ are saturated by complementary edges, then $M=E_c$. So the claim holds. Thus, there exists a 4-cycle $uv_1u_1v_2u$ in $FQ_n$, where $v_1$ and $v_2$ are two neighbors of $u$. Obviously, $uv_1,uv_2\not\in M$. Note that $u_1v_1\in M$, then $u_1v_2\not\in M$. This implies that $u$ and $u_1$ have exactly one common neighbor in $FQ_n-M$, contradicting the well-known fact that every two vertices in $Q_n$ have zero or exactly two common neighbors. \noindent{\bf Case 2.} $M\cap E_c=\emptyset$. Our objective is to show that there exists a 4-cycle $C$ of $FQ_n$ containing exactly one edge of $M$. Accordingly, two diagonal vertices of $C$ have exactly one common neighbor, which contradicts the fact that every two vertices in $Q_n$ have zero or exactly two common neighbors. Note that $M\cap E_c=\emptyset$ and $M\neq E^i$ for each $i\in\{1,\cdots,n\}$, then there exists two edges $e,f\in M$ with $e\in E^i$ and $f\in E^j$ such that $d_{FQ_n}(e,f)=1$, where $1\leq i<j\leq n$. For clarity, let $C=uxvyu$ and $e=ux$. Suppose w.l.o.g. that all edges of $C$ are hypercube edges. So there exists an edge $xw$ connecting $e$ and $f$, where $w$ is an end vertex of $f$. By Lemma \ref{common-neighbors}, $u$ and $v$ have exactly two common neighbors $x$ and $y$ in $FQ_n$, and vice versa. Note that $e\not\in E(FQ_n-M)$, $u$ and $v$ have at most one common neighbor in $FQ_n-M$. If $u$ and $v$ have exactly one common neighbor, say $y$, then we are done. So we assume that $u$ and $v$ have no common neighbors in $FQ_n-M$, namely $vy\in M$. \begin{figure} \caption{Illustration for Theorem \ref{non-iso} \label{FQn-M} \end{figure} If $xv,uy\in E^j$, then there exists a 4-cycle $C'=xwzvx$ such that $f=wz$. Clearly, $xw,zv,vx\not\in M$ and $wz\in M$, then we have a 4-cycle that contains exactly one edge in $M$, yielding that $x$ and $z$ have exactly one common neighbor. So we assume that $xv,uy\not\in E^j$ and $f=wa\in M$. Accordingly, $xw\in E^k$, where $k\neq i,j$. Thus, there exists a cycle $C''=xwabx$ in $FQ_n$. 
Recall that $e=ux$ and $e\in M$; thus, $bx\not\in M$. Similarly, $ab,xw\not\in M$. So $a$ and $x$ have exactly one common neighbor in $FQ_n-M$, a contradiction (see Fig. \ref{FQn-M}). Hence, the theorem holds. \qed By the above theorem, we have the following corollary, which disproves Dong's conjecture for $n\geq4$. \begin{corollary}{\bf.} There exists a perfect matching $M$ of $FQ_n$ with $n\geq4$ such that $FQ_n-M$ is not isomorphic to $Q_n$. \end{corollary} \noindent{\bf Proof.} Obviously, there exists a perfect matching $M$ of $Q_n$ such that $M\neq E^i$ for each $1\leq i\leq n$. Since $M$ contains only hypercube edges and $Q_n$ is a spanning subgraph of $FQ_n$, $M$ is also a perfect matching of $FQ_n$ and $M\neq E_c$. By Theorem \ref{non-iso}, the statement follows immediately.\qed \section{Conclusions} In this paper, we characterize when the graph obtained from $FQ_n$ by deleting a perfect matching is isomorphic to the hypercube $Q_n$. It would be interesting to study similar properties in other hypercube variants that contain the hypercube as a spanning subgraph. \end{document}
\begin{document} \title{Parameterized process characterization with reduced resource requirements} \thanks{This manuscript has been authored by UT-Battelle, LLC, under Contract No. DE-AC0500OR22725 with the U.S. Department of Energy. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for the United States Government purposes. The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan.} \author{ Vicente Leyton-Ortega} \email[Corresponding author: ]{[email protected]} \affiliation{Computational Sciences and Engineering Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831, USA} \author{ Tyler Kharazi} \affiliation{Computational Sciences and Engineering Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831, USA} \author{ Raphael C. Pooser} \affiliation{Computational Sciences and Engineering Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831, USA} \date{\today} \begin{abstract} Quantum Process Tomography (QPT) is a powerful tool to characterize quantum operations, but it requires considerable resources, making it impractical for systems of more than two qubits. This work proposes an alternative approach that requires significantly fewer resources for the characterization of {unitary} processes {without prior knowledge of the process} and provides a built-in method for state preparation and measurement (SPAM) error mitigation. By measuring the quantum process as rotated through the $X$ and $Y$ axes on the Bloch sphere, we can acquire enough information to reconstruct the quantum process matrix $\chi$ and measure its fidelity. We test the algorithm's performance against standard QPT using simulated and physical experiments on several IBM quantum processors and compare the resulting process matrices. We demonstrate in numerical experiments that the method can improve gate fidelity via a noise reduction in the imaginary part of the process matrix, along with a stark decrease in the number of experiments needed to perform the characterization. \end{abstract} \keywords{Quantum computation, gate characterization} \maketitle \section{Introduction} Modern quantum computers are marred by noise that limits the computational reach of these devices. The sources of this noise are myriad, including initial state preparation errors, noise introduced during the computation via decoherence and gate noise, and imprecise state readout at measurement \cite{Magesan2012}. There has been extensive work to improve quantum processor units (QPUs) at the hardware level with methods that are generally not accessible at the end-user level. Given the current proliferation of noisy intermediate-scale quantum resources, we seek to develop strategies that end-users can use to calibrate a set of qubits on a physical QPU in a cost-effective (less resource-intensive) manner. Furthermore, new algorithms to isolate and characterize noise are critical for quantifying where QPUs need improvement and benchmarking algorithm performance \cite{Eisert2020}.
One such characterization method, quantum tomography, provides a set of tools to characterize the behavior of quantum dynamical processes through a series of measurements on a complete basis, typically the Pauli basis. Standard quantum process tomography (QPT) reconstructs the underlying quantum process $\rm {\bf m}athcal{E}$ by performing state tomography on a set of identical quantum states after applying certain quantum operations, i.e., a quantum circuit \cite{NielsenBook, Poyatos1997, Chuang1997}. Through this tomographic reconstruction in state space, one can infer the region where each generated state lies by applying maximum likelihood estimation~\cite{Baumgratz2013}, Bayesian credibility~\cite{BlumeKohout2010, Ferrie2014}, or confidence regions~\cite{BlumeKohout2012,Christandl2012,Wang2019}. Process tomography is also a key component in noise characterization and noise mitigation for quantum algorithms~\cite{Zhang2020}. Through a large number of circuit evaluations, or shots, one can deploy statistical and numerical methods to recover the underlying { process matrix representation} of $\rm {\bf m}athcal{E}$ \cite{Alexander2020, Korotkov2013, Mohseni2008}. However, the resource requirements inhibit the scalability of QPT-based methods on NISQ hardware. For a complete determination of an n-qubit quantum process, one needs to prepare $4^n \times 3^n$ independent circuit executions to specify a quantum process completely (see Appendix \ref{app:qpt} for more details). This resource overhead makes QPT impractical for characterizing processes involving more than a few qubits. For example, the complete characterization of a 3-qubit quantum process requires $12^3 = 1728$ independent experiments, with each experiment repeated many times to gather sufficient statistics. On the publicly available IBM QPUs, i.e., \texttt{IBMQ Bogota}, a user can send at most 900 independent experiments, falling far short of the 1728 required to characterize a 3-qubit process fully. One may send the complete set of experiments in two separate batches of circuits, but this leaves open the possibility that the device may have changed significantly between experimental runs. Without dedicated access to a QPU, process tomography is practically challenging for $n>3$ qubit systems. {To dodge this problem, ancilla-\cite{Altepeter2003, Leung2003, Ariano2003, Ariano2001} and error-correction-based\cite{Omkar2015a, Omkar2015b, Mohseni2006, Mohseni2007} QPT schemes were introduced. These methods require sophisticated state and measurement preparations. The number of experiments can be reduced even more if some prior information about the process is known using compressed sensing techniques\cite{Flammia2010, Flammia2012}. However, the success of compressed sensing depends on the accuracy of the rank knowledge given for the quantum process\cite{Rodionov2014, Shabani2011}. An adaptive measurement technique introduces a way to characterize any unitary process that does not require any prior assumption about the process\cite{Kim2020}, but it requires an optimization routine to adapt the initial states and measurement operators. On the other hand, the resource requirements can be reduced if one does not require a complete characterization of the quantum process; for instance, randomized benchmarking is commonly use to compute gate fidelities on superconducting QPUs \cite{Emerson2005,Knill2008, Magesan2011}. 
In summary, these methods assume a specific structure: low-rank restrictions \cite{Shabani2011}, two-qubit processes \cite{Govia_2020}, and a unitary structure that only requires measurements of the diagonal elements of the rotated process matrix. } {We consider the results from standard QPT as a reference to measure the performance of our method.} In addition to the experimental overhead, QPT assumes perfect readout measurement, yet it is highly sensitive to state preparation and measurement (SPAM) errors \cite{Eisert2020, Korotkov2013}. This assumption can lead to underestimating the process fidelity by rolling SPAM errors into the same process as the gates one is trying to characterize. Here, we reduce the complexity of QPT without sparsity assumptions, while still offering an exponential improvement in resource cost over standard QPT, by assuming a very simple noise model consisting of SPAM error and rotation error (see Fig. \ref{fig:budget}). The characterization uses a series of rotations and measurements tailored to limited-access quantum chips such as cloud-based IBM quantum devices, which we dub parametrized process characterization (PPC). We further provide a method to unravel SPAM errors from process characterization by fitting the projective measurement of key quantum states generated by the quantum process to a statistical model influenced by SPAM-type errors. The resulting fit parameters then allow us to reconstruct the underlying quantum process. \section{Methods} To illustrate the general idea { of PPC}, we first give a one-qubit example that can be extended to the more general case. We wish to characterize some quantum process ${\cal U}: \rho \rightarrow U \rho\, U^\dagger$, with $\rho$ and $U$ a one-qubit quantum state and unitary operator, respectively. Without loss of generality, we shall consider the rotation $Y_\theta = \exp [-i \theta \sigma_y/2]$ and assume that this rotation is a noiseless unitary operation in the experimental setup. The projective measurement, along the $z$-axis of the Bloch sphere, of the state $Y_\theta U \ket{s}$, for $\ket{s}\in \lbrace \ket{0}, \ket{1} \rbrace$, reads \begin{equation}\label{eq:prob0} P^{\cal U} _s (\theta) = \left( \begin{array}{c} s_{\theta/2}^2 + |{U}_{s,0}|^2 c_\theta - ({ U}_{s,0} {U}_{s,1}^* + { U}_{s,1} { U}_{s,0}^*) s_\theta \\[2mm] s_{\theta/2}^2 + |{U}_{s,1}|^2 c_\theta + ({U}_{s,0} {U}_{s,1}^* + {U}_{s,1} {U}_{s,0}^*) s_\theta \end{array} \right) \ , \end{equation} with ${U}_{kl} = \bra{l} {U} \ket{k}$, $c_\theta = \cos \theta$, and $s_\theta = \sin \theta$. To account for readout error, we introduce a classical assignment error modeled by the transition matrix \begin{equation} T = \left( \begin{array}{cc} t_{00} & 1- t_{11} \\[2mm] 1-t_{00} & t_{11} \end{array} \right), \end{equation} with $t_{00}$ and $t_{11}$ the probabilities of correctly measuring the states $\ket{0}$ and $\ket{1}$, respectively. $T$ is obtained experimentally via calibration measurements. This matrix represents a binary asymmetric channel, i.e., $t_{00} \neq t_{11}$, that maps the original probability distribution to the experimental observation $Q^{\cal U} _s (\theta) = T \cdot P^{\cal U} _s (\theta)$, which can be rewritten as \begin{equation} \label{eq:qdist} Q^{\cal U}_{s} (\theta) = \left( \begin{array}{c} 1 - t_{11} \\ 1 - t_{00} \end{array} \right) + |T| P^{\cal U}_s (\theta - \theta_0).
\end{equation} This establishes a way to determine the quantum process by fitting $U_{s,0}$ and $U_{s,1}$ to the experimental data $T$ and $Q^{\cal U}_s(\theta)$. Notably, with the assumption that a transition matrix can describe the readout error, and that errors occur only along the direction of rotation, a single parameter in Eq.~\ref{eq:qdist} can be used to fit the data. Here $\theta_0$ is an initial phase representing either a state preparation error or a compilation error in the rotation operator $Y_\theta$. In summary, we evaluate the action of the unitary operator $U$ on a set of rotated states $Y_\theta \ket{0}$, and by fitting the model for the measurement output $T\cdot P_s^{\cal U}(\theta)$ to the experimental output $Q_s^{\cal U}(\theta)$ (see Eq.~\ref{eq:qdist}), we can estimate the components of $U$ in the computational basis. This procedure can be extended to an $n$-qubit system, considering $Y^s_\theta$ as the main rotation; the superscript $s$ stands for the physical qubit where the rotation is applied. As before, it is enough to consider the subset of the $n$-qubit computational basis where the $s$-th qubit is in the ground state, i.e., the set $\lbrace \ket{\varphi_{k,s}} = \ket{k_0, \cdots , k_{n-1}} \in \lbrace \ket{0}, \ket{1} \rbrace^n : k_s=0 \rbrace$, and determine the action of the unitary operator $U$ on different rotations applied to this set, \begin{equation} \ket{\psi_{k,s}(\theta)} = U \cdot Y^s_\theta \ket{\varphi_{k,s}} \end{equation} for different values of $\theta$ in $[-\pi, \pi]$. The projective measurement of these states, $T\cdot P_{k,s}^{\cal U}(\theta)$, can be seen as a function of $\theta$ with parameters given by the components of $U$, i.e., a function $F_{k,s}(\theta; U_{00}, \cdots , U_{(2^n, 2^n)})$. Thus, fitting this function to the experimental measurement data gives the estimate for $U$. Before applying the algorithm, we use a calibration procedure, based on a similar angular sweep, to determine the transition matrix $T$ and the phase correction $\theta_0$ (see Section \ref{sec:calibration}). \begin{figure} \centering \includegraphics[width=0.7\linewidth]{sphere.pdf} \caption{Diagrammatic sketch of PPC in the Bloch sphere for one-qubit systems. With the information obtained from the rotation of the states ${\cal U} \ket{0}$ and ${\cal U} \ket{1}$ we get information about the quantum process represented by $\cal U$. The dotted lines represent the rotations of the states ${\cal U} \ket{0}$ and ${\cal U} \ket{1}$ around the $y$-axis.} \label{fig:sketch} \end{figure} \section{Single Qubit Quantum Process Characterization}\label{sec:QPC} To characterize an $n$-qubit quantum process ${\cal U}$, it is necessary to execute $N_{PPC} = 2^{n-1}(n+2) N_\theta$ quantum circuits, with $N_\theta$ the number of rotation angles the interval $[-\pi,\pi]$ is divided into: $2^{n} N_\theta$ quantum circuits for calibration and $2^{n-1}n N_\theta$ quantum circuits to obtain data for fitting an estimate of $U$. Both the calibration and the estimation quality depend on $N_\theta$: a large enough value ensures small deviations $(\sigma \sim N_\theta^{-1/2})$ of the model from the experimental data. Note that $N_\theta$ does not depend on the number of qubits.
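For orientation, these circuit counts can be tabulated directly; the following is a minimal sketch in Python (the formulas are taken verbatim from the text, and $N_\theta = 51$ is the value used later in our experiments):
\begin{verbatim}
# N_PPC = 2^(n-1) (n+2) N_theta   versus   N_QPT = 12^n
N_theta = 51
for n in range(1, 6):
    n_ppc = 2 ** (n - 1) * (n + 2) * N_theta
    n_qpt = 12 ** n
    print(f"n={n}:  N_PPC={n_ppc:>7}  N_QPT={n_qpt:>7}  ratio={n_qpt / n_ppc:.2f}")
\end{verbatim}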
Since QPT scales differently with the number of qubits, $N_{QPT} = 12^n$, PPC with a moderate number of angles becomes a favorable method in cases where $n > 2$, { since $\log (N_{QPT} / N_{PPC} ) \sim (n -1) \log 6 - \log (N_{\theta}/2)$}. Figure~\ref{fig:budget} shows the resource scaling of each protocol. \begin{figure} \centering \includegraphics[width=0.8\linewidth]{budget.pdf} \caption{\emph{Number of quantum circuit executions for quantum process characterization:} in this figure, we compare the resources used by QPT against the resources used by PPC. For different numbers of rotations ($N_\theta = 31$, $51$, and $71$ lines), QPT requires more resources than PPC for $n > 2$. } \label{fig:budget} \end{figure} We performed simulations and on-hardware experiments for one- and two-qubit systems, using native gates as the target operations to characterize. For these experiments, we set $N_\theta = 51$ rotations and $N = 5000$ experimental repetitions of the PPC circuits and employ the Python Symfit \cite{symfit} library to fit the model to the experimental data. Symfit is a Python module that uses symbolic methods to determine the Jacobian of the fitting problem analytically. We test our procedure by characterizing the X and H gates for one-qubit systems and the CX gate for two-qubit systems, applied on different plaquettes of several IBM QPUs. We use $Y_\theta$ for the calibration and characterization since this rotation has lower fidelity than $X_\theta$ (see Appendix \ref{app:exp}), and thus constitutes a good benchmark for the method. For the QPT experiments, we use the tomography module of Qiskit Ignis~\cite{Qiskit}. { A common way to describe the quantum process $\cal U$ is through the process matrix $\chi$ defined as \begin{equation} \label{eq:defchi} {\cal U} ( \rho) = \sum_{kl} \chi_{kl} P_{k} \rho P_{l} , \end{equation} with $\rho$ a one-qubit state, and $P_{k} \in \lbrace \mathbb{I} , \sigma_{x}, \sigma_{y}, \sigma_{z} \rbrace$ (Pauli basis). It is simple to find the process matrix $\chi$: we express $U$ in the Pauli basis and compare $U \rho\, U^{\dagger}$ with Eq.~\eqref{eq:defchi}, which gives $\chi_{kl} = u_{k}u^{*}_{l}$ with $u_{k} = {\rm tr} \lbrace U P_{k}\rbrace /2$. } When comparing QPT- and PPC-generated process matrices, we observe qualitatively similar results for the real part, $\chi_{re}$, while observing differences in the imaginary part, $\chi_{im}$. In Figures \ref{fig:h_char} and \ref{fig:cx_char}, we present heat plots of the process matrix from the numerical and \emph{on-hardware} experiments comparing both methods, using the gates H and CX as targets, respectively. We further observe that $\chi^{PPC}_{im}$ differs from $\chi^{QPT}_{im}$, even in the noiseless numerical experiments. The imaginary part, which carries information about the quantum error \cite{Korotkov2013}, is slightly different between the two procedures, owing to the assignment error mitigation implemented in PPC. This isolation of SPAM errors is not possible under standard QPT. A common way to compare the empirical and the expected operation is through the process fidelity $F_{\chi } = {\rm tr}[\chi \chi_0] / 4^n$, with $\chi_0$ the noiseless process matrix. In Table \ref{tab:Fidelity}, we show the process fidelity values for every experiment.
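As a small numerical illustration of Eq.~\eqref{eq:defchi}, the ideal $\chi$-matrix of the Hadamard gate can be built in a few lines (a sketch assuming only \texttt{numpy}; note that with the normalization $u_k = {\rm tr}\{U P_k\}/2$ used here, $\sum_k \chi_{kk} = 1$ and the overlap ${\rm tr}[\chi\chi_0]$ equals $1$ for a perfect gate):
\begin{verbatim}
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0])
paulis = [I2, X, Y, Z]

H = (X + Z) / np.sqrt(2)                              # ideal Hadamard
u = np.array([np.trace(P @ H) / 2 for P in paulis])   # Pauli coefficients u_k
chi = np.outer(u, u.conj())                           # chi_kl = u_k u_l^*
print(np.round(chi.real, 3))                          # support only on the X,Z block
print(np.trace(chi @ chi).real)                       # overlap with chi_0 = chi: 1.0
\end{verbatim}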
The PPC results show closer values to $1$ than the QPT results in the numerical experiments, where we used a noiseless simulator as the initial test of the method. In the physical experiments, we still observe that $\chi_{PPC}$ has a more minor contribution from the imaginary part than the result from $\chi_{QPT}$. { The numerical results for $\chi^{PPC}_{im}$ and $\chi^{QPT}_{im}$ are irrelevant since the statistical error $N^{-1/2}/2 \rm {\bf s}im 7 \times 10^{-3}$\cite{Knips2015} in the projective measurement. For the physical experiment, we observed similar values for one- and two-qubit in the standard deviation ($\rm {\bf s}igma_{PPA} \rm {\bf s}im 2\times10^{-3}$ and $\rm {\bf s}igma_{QPT} \rm {\bf s}im 6\times10^{-3}$) after bootstrapping results from 21 experiments using each method in $10^{4}$ resamples. These deviations determine what values are statistically relevant in the experiment. Therefore, the results for one-qubit experiments are closer to an ideal behavior than the results from two-qubit gates. Additionally, we determine the difference between the process matrices $\chi^{PPC}$ and $\chi^{QPT}$ applying the distance $d_{\infty}(\cdot, \cdot)$} \begin{figure}[h] \centering \includegraphics[width=\linewidth]{charh.pdf} \caption{ \hbox{\rm e}mph{Comparison between tomography results for the one-qubit system}. Here we present the $\chi$-matrix representation in the Pauli matrices for the Hadamard gate $\rm H$ using QPT and PPC. In a) and b), we depict the $\chi$-matrix using QPT and PPC, respectively, by running the experiment in a noiseless simulator. In c) and d), we show the resulting $\chi$ matrices, by QPT and PPC, respectively, by running the experiment on the 0th qubit on the IBMQ-Bogota backend. In both cases, we considered $N=5000$ shots and least square estimation for QPT, and $N = 5000$ shots and $N_\theta = 51$ for PPC. There is a good agreement in the $\chi$-matrix's real part, but a difference in the imaginary part, even when using the noiseless simulator. {In the noiseless simulator, the imaginary part is statistically irrelevant since the statistical error is in the order of $N^{-1/2}/2 \rm {\bf s}im 7\times 10^{-3} $\cite{Knips2015}. On the other hand, in the QPT experiments, the imaginary part is statistically relevant due to SPAM errors. At the bottom, we present the $d_{\infty}(\cdot)$ distance between the resulting process matrices.}} \label{fig:h_char} \hbox{\rm e}nd{figure} \renewcommand{1.1}{1.1} \begin{table}[h!] \centering \caption{Process fidelity $F_\chi$ of local and non-local gates. 
} \begin{ruledtabular} \begin{tabular}{lccc} configuration & gate & $F_{\chi}$ PPC& $F_{\chi}$ QPT \\ \hline Numerical & & 1.0 & 0.99 \\ IBMQ-Bogota, qubit 0 & & 0.99 & 0.92 \\ IBMQ-Bogota, qubit 2 & & 0.99 & 0.95 \\ IBMQ-Santiago, qubit 1 & X & 0.99 & 0.97 \\ IBMQ-Santiago, qubit 3 & & 0.99 & 0.99 \\ IBMQ-Quito, qubit 0 & & 0.92 & 0.93 \\ IBMQ-Quito, qubit 1 & & 0.99 & 0.99 \\ IBMQ-Boeblingen, qubit 0 & & 0.99 & 0.96 \\ IBMQ-Boeblingen, qubit 4 & & 0.99 & 0.92 \\ \hline Numerical & & 1.0 & 0.99\\ IBMQ-Bogota, qubit 0 & & 0.96 & 0.90 \\ IBMQ-Bogota, qubit 2 & & 0.99 & 0.95 \\ IBMQ-Santiago, qubit 1 & H & 0.99 & 0.97 \\ IBMQ-Santiago, qubit 3 & & 0.99 & 0.99 \\ IBMQ-Quito, qubit 0 & & 0.93 & 0.94 \\ IBMQ-Quito, qubit 1 & & 0.99 & 0.99 \\ IBMQ-Boeblingen, qubit 0 & & 0.99 & 0.95 \\ IBMQ-Boeblingen, qubit 4 & & 0.96 & 0.90 \\ \hline Numerical & & 1.0 & 0.98 \\ IBMQ-Bogota, qubits [1,2] & & 0.97 & 0.75 \\ IBMQ-Bogota, qubits [0,1] & CX & 0.96 & 0.73 \\ IBMQ-manhattan, qubits [0,1] & & 0.99 & 0.88 \\ IBMQ-manhattan, qubits [11,17] & & 0.99 & 0.98 \\ IBMQ-Boeblingen, qubit 0 & & 0.99 & 0.86 \\ IBMQ-Boeblingen, qubit 4 & & 0.99 & 0.81 \hbox{\rm e}nd{tabular} \hbox{\rm e}nd{ruledtabular} \label{tab:Fidelity} \hbox{\rm e}nd{table} \rm {\bf s}ection{multi-qubit quantum process characterization \label{sec:characterization}} Consider the action of a $n$-qubit quantum operator ${\cal U}$ on the state \begin{eqnarray} \rm {\bf Kg}et{\psi_{k,s}(\theta)} &=& Y_\theta^s \rm {\bf Kg}et{k_0, \cdots ,k_{s-1}, 0,k_{s+1}, \cdots ,k_{n}} \nonumber \\ &=& Y_\theta^s \rm {\bf Kg}et{k,0_s} . \hbox{\rm e}nd{eqnarray} Above, we used a short notation to identify where the rotation is being applied, in this case on the ground state of the $s$-th qubit while the rest are at $\rm {\bf Kg}et{k_0, \cdots ,k_{s-1},k_{s+1}, \cdots ,k_{n}}$. The probability distribution for ${\cal U} \rm {\bf Kg}et{\psi_{k,s}(\theta)}$ reads as \begin{equation}\label{eq:Umodel} P^{\, \cal U}_{k,s} (\theta) = \frac{1}{2} \left( A_{k,s} + B_{k,s} \, c_{\theta} + C_{k,s} \, s_\theta \right) , \hbox{\rm e}nd{equation} where $A_{k,s}$, $B_{k,s}$, and $C_{k,s}$ are vectors in $\rm {\bf m}athbb{R}^{2^n}$, with components \begin{eqnarray} [A_{k,s}]_j &=& |\bra{j}{\cal U} \rm {\bf Kg}et{k,0_s}|^2 + |\bra{j}{\cal U} \rm {\bf Kg}et{k,1_s}|^2 \ , \\ \, [B_{k,s}]_j &=& |\bra{j}{\cal U} \rm {\bf Kg}et{k,0_s}|^2 - |\bra{j}{\cal U} \rm {\bf Kg}et{k,1_s}|^2 \ , \\ \, [C_{k,s}]_j &=& \bra{j}{\cal U} \rm {\bf Kg}et{k,0_s} \bra{j}{\cal U} \rm {\bf Kg}et{k,1_s}^* + \nonumber \\ && \quad \quad \bra{j}{\cal U} \rm {\bf Kg}et{k,0_s}^* \bra{j}{\cal U} \rm {\bf Kg}et{k,1_s} , \ j \in \{0,1\}^n \ . \nonumber \\ \hbox{\rm e}nd{eqnarray} By measuring $N$ systems identically prepared in the state ${\cal U} \rm {\bf Kg}et{\psi_{k,s}(\theta)}$ for every $\theta$ in ${\cal S}_{N_\theta}= \{(j/N_\theta - 1)\pi : j\in \rm {\bf m}athbb{N}, 0 \leq j \leq N_\theta \}$; we estimate the probability distributions $Q_{k,s}^j = ({\rm C}^{k,j}_0/N, \cdots, {\rm C}^{k,j}_{2^n}/N)$ for $j = 0, \cdots, N_\theta$, with ${\rm C}^{k,j}_m$ as the number of outcomes `$m$' $\in \{0, 1 \}^n$ for the $j$-th angle. The original distributions $P^j_{k,s}$, after mitigating the assignment error (see next section for details), are obtained by minimizing the functions $f_j(P) = ||Q^j_{k,s} - T\cdot P||_F$, where $||\cdot ||_F$ is the Frobenius norm. 
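A minimal sketch of this mitigation step for a single qubit is shown below (assuming \texttt{numpy} and \texttt{scipy}; non-negative least squares followed by renormalization is used as a simple stand-in for the constrained Frobenius-norm minimization, and the numerical values are illustrative):
\begin{verbatim}
import numpy as np
from scipy.optimize import nnls

def mitigate_readout(Q, T):
    """Approximate P >= 0 minimizing ||Q - T P||, renormalized to a distribution."""
    P, _ = nnls(T, Q)
    return P / P.sum()

T = np.array([[0.97, 0.07],      # columns: response to |0> and |1>
              [0.03, 0.93]])
Q = np.array([0.52, 0.48])       # measured frequencies
print(mitigate_readout(Q, T))    # estimate of the error-free distribution
\end{verbatim}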
With that information, i.e.~$\{ P^0_{k,s}, \cdots, P^{N_\theta}_{k,s} \}$ and the model \hbox{\rm e}qref{eq:Umodel}, we can compute the matrix elements for $\cal U$. Therefore, we can determine the process matrix $\chi$. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{cx_char.pdf} \caption{\hbox{\rm e}mph{$\chi$-matrix for the CX-gate:} Here we depict the $\chi$-matrix representation in Pauli matrices of the CNOT-gate, obtained by the QPT and PPC methods. In a) and b) we used a noiseless simulator to run QPT and PPC, respectively. In c) and d), we used the qubits 0 and 1 on IBMQ-Bogota backend to run QPT and PPC respectively. In the experiments we used $N = 5000$ shots and least squared estimation for the QPT configuration, and $N = 5000$ shots and $N_\theta = 51$ for the PPC configuration. We obtained similar results to the one-qubit case, there is good agreement with the real part, but a slight difference in the imaginary ones. Again, part of the imaginary values stem from an inherent error introduced by the estimation procedure in the QPT method. {In this case, the imaginary part produced by PPC and QPT in the numerical experiments are still statistical no relevant since we can consider the same level of statistical error used in the one-qubit case \cite{Knips2015}, i.e. $\rm {\bf s}im 7\times 10^{-3}$. For the physical experiments, the imaginary part overpass the standard deviations ($\rm {\bf s}igma_{PPA} \rm {\bf s}im 2\times10^{-3}$ and $\rm {\bf s}igma_{QPT} \rm {\bf s}im 6\times10^{-3}$) reveling imperfections in the quantum gate. In this case, the distance $d_{\infty}(\cdot,\cdot)$ in the simulation is similar with the experimental result. }} \label{fig:cx_char} \hbox{\rm e}nd{figure} \rm {\bf s}ection{calibration \label{sec:calibration}} We assume the assignment error effects are represented by a transition matrix $T_{kl} = P(l|k)$, where $P(l|k)$ is the conditional probability of observing $\rm {\bf Kg}et{j}$ when the system has been prepared in the state $\rm {\bf Kg}et{i}$. This is typically determined by measuring the number of outcomes `$j$', $c_j$, of $N$ identically-prepared states of $\rm {\bf Kg}et{i}$; $T_{ij} = c_j/N$. However, this procedure is limited in its explanatory power, as we cannot determine which part of the readout error comes from the state preparation. Additionally, this method scales exponentially with the number of qubits considered since we must evaluate the conditional probabilities for all $2^n$ states of the system. We follow a modified calibration routine that tracks assignment error as a function of the rotation about the $y$-axis to discriminate the assignment error from every possible state preparation error in each experiment. We rotate the states $\rm {\bf Kg}et{0,k_0, \cdots, k_{n-1}}$ around the $y$-axis of the first qubit, for $k \in \lbrace 0,1 \rbrace^{n-1}$, yielding the output $c_{\theta/2} \rm {\bf Kg}et{0, k_0, \cdots, k_{n-1}} + s_{\theta/2} \rm {\bf Kg}et{1, k_0, \cdots, k_{n-1}}$. Thus, the expected probability distribution will have two components, $c_{\theta/2}^2$ and $s_{\theta/2}^2$, for instance \begin{eqnarray} P_{k = 0\cdots 0}(\theta) &=& (c_{\theta/2}^2,\underbrace{0,\cdots,0}_{2^{n-1}-1},s_{\theta/2}^2, 0,\cdots,0)^T , \nonumber \\ P_{k = 10\cdots 0}(\theta) &=& (0, c_{\theta/2}^2,\underbrace{0,\cdots,0}_{2^{n-1}-2},0,s_{\theta/2}^2,0,\cdots,0)^T , \nonumber \\ & \vdots & \nonumber \\ P_{k = 0\cdots 01}(\theta) &=& (\underbrace{0,\cdots,0}_{2^{n-1}-1}, c_{\theta/2}^2,0,\cdots,0,s_{\theta/2}^2)^T . 
\nonumber \\ \hbox{\rm e}nd{eqnarray} Under the bit-flip noise model, an empirical probability distribution $Q_{k}(\theta)$ is related to $P_{k}(\theta)$ by $Q_{k}(\theta) = T \cdot P_{k}(\theta)$, with \begin{equation}\label{eq:Tmodel} T = \left( \begin{array}{ccc} t_{00} & \cdots & t_{0,2^n} \\ \vdots & \ddots & \vdots \\ t_{2^n,0} & \cdots & t_{2^n,2^n} \hbox{\rm e}nd{array} \right) \ . \hbox{\rm e}nd{equation} Taking into account the structure of $P_{k}(\theta)$, we can determine two columns of $T$ per state $Y_\theta \rm {\bf Kg}et{0,k_0, \cdots, k_{n-1}}$. We measure the states $Y_{\theta_j} |0, k_0, \cdots, k_{n-1} \rangle $, $\theta_j \in {\cal S}_\theta$, and estimate the probability distribution by the sampling distribution of the outcomes, i.e.~we obtain the quantities $Q_k^j = ({\rm C}^{k,j}_0/N, \cdots, {\rm C}^{k,j}_{2^n}/N)^T$. We fit the acquired data with the model $T \cdot P_k (\theta - \theta_0)$. In the model, we have considered an initial phase $\theta_0$ to describe preparation error. \rm {\bf s}ubsection{Two-qubit transition matrix and initial preparation} For the two-qubit system, we can consider the following model for the expected distributions of the states $Y^0_\theta \rm {\bf Kg}et{00}$ and $Y^0_\theta \rm {\bf Kg}et{01}$, \begin{eqnarray} P_{0}(\theta) &=& (c_{\theta/2}^2,0,s_{\theta/2}^2, 0)^T , \nonumber \\ P_{1}(\theta) &=& (0, c_{\theta/2}^2,0,s_{\theta/2}^2)^T , \hbox{\rm e}nd{eqnarray} and the emperical distributions $Q_k(\theta) = T \cdot P_k(\theta)$, \begin{eqnarray}\label{eq:twoqubitmodel} Q_{0}(\theta) &=& \left( \begin{array}{c} t_{00} \\ t_{10} \\ t_{20} \\ t_{30} \hbox{\rm e}nd{array} \right) c_{\theta/2}^2 + \left( \begin{array}{c} t_{02} \\ t_{12} \\ t_{22} \\ t_{32} \hbox{\rm e}nd{array} \right) s_{\theta/2}^2 , \nonumber \\ Q_{1}(\theta) &=& \left( \begin{array}{c} t_{01} \\ t_{11} \\ t_{21} \\ t_{31} \hbox{\rm e}nd{array} \right) c_{\theta/2}^2 + \left( \begin{array}{c} t_{03} \\ t_{13} \\ t_{23} \\ t_{33} \hbox{\rm e}nd{array} \right) s_{\theta/2}^2 . \nonumber \\ \hbox{\rm e}nd{eqnarray} Now, we execute the quantum circuits \begin{eqnarray} \Qcircuit @C=1.4em @R=1.8em { \lstick{\rm {\bf Kg}et{0}} & \gate{Y_{\theta_j}} & \rm {\bf m}eter \\ \lstick{\rm {\bf Kg}et{0}} & \gate{\rm {\bf m}athbb{I}} & \rm {\bf m}eter } \quad {\rm for}\ k=0 , \\[4mm] \Qcircuit @C=1.4em @R=1.8em { \lstick{\rm {\bf Kg}et{0}} & \gate{Y_{\theta_j}} & \rm {\bf m}eter \\ \lstick{\rm {\bf Kg}et{0}} & \gate{X} & \rm {\bf m}eter } \quad {\rm for}\ k=1 , \hbox{\rm e}nd{eqnarray} and for $\theta_j \in {\cal S}_\theta$. We determine the transition matrix elements $t_{ij}$ by fitting the model \hbox{\rm e}qref{eq:twoqubitmodel} to the acquired data $Q_k^j = ({\rm C}_{00}^{k,j}/N, {\rm C}_{01}^{k,j}/N,{\rm C}_{10}^{k,j}/N,{\rm C}_{11}^{k,j}/N)^T$ from the above quantum circuit executions. \rm {\bf s}ection{Discussion} To summarize, we introduced an alternative method to characterize a {unitary} quantum process based on a rotational sweeping procedure {without prior information about the process}. Instead of using a complete set of measurement operators, we measure the unknown quantum state rotated around different angles. Additionally, we proposed a pre-characterization process to define the assignment errors enclosed in the transition matrix and then mitigated those effects in the quantum process characterization. 
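As a concrete illustration of the calibration fit summarized above, the single-qubit case can be emulated in a few lines (a sketch assuming \texttt{numpy} and \texttt{scipy}; the values of $t_{00}$, $t_{11}$, $\theta_0$ and the shot count are illustrative, not measured):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def q0_model(theta, t00, t11, theta0):
    # P(outcome 0) for Y_(theta - theta0)|0> read out through
    # T = [[t00, 1 - t11], [1 - t00, t11]]
    c2 = np.cos((theta - theta0) / 2) ** 2
    s2 = np.sin((theta - theta0) / 2) ** 2
    return t00 * c2 + (1 - t11) * s2

rng = np.random.default_rng(0)
thetas = np.linspace(-np.pi, np.pi, 51)
p0_true = q0_model(thetas, 0.97, 0.93, 0.02)        # "true" device parameters
q0_data = rng.binomial(5000, p0_true) / 5000        # finite-shot sampling
popt, _ = curve_fit(q0_model, thetas, q0_data, p0=[0.95, 0.95, 0.0])
print(dict(zip(["t00", "t11", "theta0"], np.round(popt, 3))))
\end{verbatim}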
In the scheme presented here, we perform $2^{n-1}(n+2) N_\theta$ experiments to characterize an $n$-qubit process, in which the number of angles $N_\theta$ does not scale with the number of qubits. Therefore, we can fix this parameter to get enough data to fit the model with minor deviations. We considered $N_\theta = 51$ for a suitable fit with relatively small deviations ($\sim 10^{-4}$) from ideal behavior; this value can be used for any $n$-qubit system. Additionally, QPT is affected by errors in preparing the initial states and in the tomography measurements. By contrast, in our method, we control the initial preparation error by including an initial phase in the model, and the readout error by mitigating the assignment error. The PPC algorithm is affected by the fitting error, which depends on the number of rotational divisions, $N_\theta$. This procedure assumes a high-fidelity rotation and trusted quantum states $\ket{0}$ and $\ket{1}$, in the same way that QPT relies on high-fidelity measurement. One way to mitigate a possible imperfection in the rotation process is to use a different compilation for the rotations $Y_\theta$ and $X_\theta$. Instead of decomposing any rotation in terms of $X_{\pm \pi /2}$ and $Z_\theta$, we can use the compilation procedure introduced in \cite{Gokhale2020}: a decomposition in terms of $X_\theta$. The alternative decomposition brings a shorter pulse structure and commensurately higher fidelity. To validate the performance of the PPC procedure, we compared the process matrix obtained by our approach with the outcome of QPT. We compared the process fidelity $F_\chi$ of the results, from PPC and QPT, for different one- and two-qubit gates {\it on-hardware} and {\it in-silico}. As expected, we found minor differences between ${\rm Re}\lbrace \chi_{PPC} \rbrace$ and ${\rm Re}\lbrace \chi_{QPT} \rbrace$; there was, however, a remarkable difference in their imaginary parts, even in the numerical experiments. {In the numerical experiments, the matrix elements $(\chi^{QPT/PPC}_{im})_{ij}$ can be attributed to numerical errors. They are in the range of the statistical error present in the simulation and therefore do not contribute to the characterization analysis. We observed statistically relevant values for $\chi^{QPT}_{im}$ in the physical experiments and negligible ones in $\chi^{PPC}_{im}$, another consequence of the SPAM mitigation in the PPC procedure for one-qubit quantum processes. Since PPC mitigates preparation errors in the local rotations, the imaginary part $\chi^{QPT}_{im}$ still contains such SPAM-related contributions. We quantify the difference between the process matrices via the distance $d_{\infty}(\cdot, \cdot)$, and use the process fidelity as a performance metric, which gives a perfect score for PPC in the noiseless simulation (see Table \ref{tab:Fidelity}). } Additionally, the PPC protocol allows the calculation of the error process matrix directly from the quantum gate's characterization, without appealing to the QPT-based procedure introduced by Korotkov \cite{Korotkov2013}. The imaginary part of the error process matrix provides information about the process fidelity and imperfections. A natural extension of this work is studying the error process matrix for low-depth quantum circuits. The method may also prove useful in providing tighter measures of crosstalk effects in quantum processes. For example, tomography on one qubit, while its neighbors undergo local unitaries, can reveal correlated noise~\cite{idle1,idle2}.
The improved scalability of PPC over QPT allows us to extend this tomographic method to more significant numbers of qubits, a valuable feature for crosstalk identification and characterization. \appendix \rm {\bf s}ection{Quantum process tomography implementation}\label{app:qpt} The QPT algorithm finds the process matrix $\chi_\rm {\bf m}athcal{E}$ of a quantum map $\rm {\bf m}athcal{E}: \rm {\bf Kg}ket{\rho} \rightarrow \rm {\bf Kg}ket{E \rho E^\dagger}$, by measuring the resulting state $\rm {\bf m}athcal{E}\rm {\bf Kg}ket{\rho_k}$, from a basis of initial states $\rm {\bf m}athcal{P}: \lbrace \rm {\bf Kg}ket{\rho_1}, \cdots, \rm {\bf Kg}ket{\rho_{4^n}}\rbrace$, onto different directions $\rm {\bf m}athcal{M}:\lbrace \rm {\bf Kg}ket{E_1}, \cdots , \rm {\bf Kg}ket{E_M} \rbrace$ in the Hilbert space, with $n$ and $M$ as the number of qubits and number of measurement operators respectively. Here, we have introduced the superoperator notation for the statistical operator, where operators become superkets and quantum maps becomes superoperators, i.e, $\rm {\bf m}athcal{E}(\rho) \rightarrow \rm {\bf m}athcal{E}\rm {\bf Kg}ket{\rho}$ (more details in \cite{Greenbaum2015}). In the superoperator notation, the goal is to determine the matrix representation $[ \chi_\rm {\bf m}athcal{E} ]_{ij} = \bbra{j}| \rm {\bf m}athcal{E} | \rm {\bf Kg}ket{i}$ in the Pauli basis $\lbrace \rm {\bf Kg}ket{i} \rbrace$, by the measurements \begin{equation}\label{eq:lambda} \lambda_{ij} = \bbra{E_j} \rm {\bf m}athcal{E} \rm {\bf Kg}ket{\rho_i}. \hbox{\rm e}nd{equation} We can establish the relation between $\lambda$ and $\chi_\rm {\bf m}athcal{E}$ by inserting the completeness identity $\rm {\bf s}um_i \rm {\bf Kg}ket{i}\bbra{i} = \rm {\bf m}athbb{I}$ in Eq.~\ref{eq:lambda}, \begin{equation}\label{eq:chi1} \lambda_{ij} = \rm {\bf s}um_{k,l = 1}^{4^n} [\chi_\rm {\bf m}athcal{E}]_{lk} \ \bbra{E_j} k \rangle \rangle \langle \langle l \rm {\bf Kg}ket{\rho_i} . \hbox{\rm e}nd{equation} We can arrange the terms as \begin{eqnarray} y_{j + (i-1)\times M} &=& \lambda_{ij}, \\ x_{k + (l-1)\times 4^n} &=& [\chi_\rm {\bf m}athcal{E}]_{ij} \\ B_{j + (i-1)\times M, \, k + (l-1)\times 4^n} &=& \bbra{E_j} k \rangle \rangle \langle \langle l \rm {\bf Kg}ket{\rho_i}, \hbox{\rm e}nd{eqnarray} and transform \ref{eq:chi1} into a more convenient expression, $B \, \vec{x} - \vec{y} = 0 $, for a numerical solution. {\it B.1 One-qubit process matrix:} Without loss of generality, consider the characterization of a one-qubit quantum map $\rm {\bf m}athcal{E}$ using the following intitial states and measurement operators \begin{equation}\label{eq:prepmeas} \begin{aligned}[c] \rm {\bf m}athcal{P} : \left\lbrace \right. \, & \rm {\bf Kg}ket{Z_p}, \rm {\bf Kg}ket{Z_m} , \\ & \rm {\bf Kg}ket{X_p}, \rm {\bf Kg}ket{Y_p} \, \left. \right\rbrace \\ & \hbox{\rm e}nd{aligned} \quad , \quad \begin{aligned}[c] \rm {\bf m}athcal{M} : \left\lbrace \right. \, & \rm {\bf Kg}ket{Z_p}, \rm {\bf Kg}ket{Z_m} , \\ & \rm {\bf Kg}ket{X_p}, \rm {\bf Kg}ket{X_m}, \\ & \rm {\bf Kg}ket{Y_p}, \rm {\bf Kg}ket{Y_m} \, \left. \right\rbrace \ \ , \hbox{\rm e}nd{aligned} \hbox{\rm e}nd{equation} where we have introduced the projectors $Z_p = (\rm {\bf m}athbb{I} + \rm {\bf s}igma_z)/2$, $Z_m =(\rm {\bf m}athbb{I} - \rm {\bf s}igma_z)/2$, $X_{p/m} = Y_{-\pi/2} Z_{p/m} Y_{\pi/2}$, and $Y_{p/m} = X_{-\pi/2} Z_{p/m} X_{\pi/2}$. 
Since $Z_p = \rm {\bf Kg}et{0}\bra{0}$, $Z_m = \rm {\bf Kg}et{1}\bra{1}$, $X_p = \rm {\bf Kg}et{+}\bra{+}$, and $Y_p = \rm {\bf Kg}et{i}\bra{i}$, the set $\rm {\bf m}athcal{P}$ is generated by the preparation of the states $\lbrace \rm {\bf Kg}et{0}, \rm {\bf Kg}et{1}, \rm {\bf Kg}et{+} , \rm {\bf Kg}et{i} \rbrace$. The quantum chip detector measures $\rm {\bf s}igma_z$ by default. The possible outcomes of a measurement on an arbitrary state $\rho$ are $i = 0, 1$, where $i=0$ corresponds to $\langle \rm {\bf s}igma_z \rangle = +1$ and $i = 1$ to $\langle \rm {\bf s}igma_z \rangle = -1$, with probabilities \begin{equation} p_0 = {\rm tr} \lbrace \rho Z_p \rbrace, \quad p_1 = {\rm tr} \lbrace \rho Z_m \rbrace. \hbox{\rm e}nd{equation} Now, to measure $\rm {\bf s}igma_x$ and $\rm {\bf s}igma_y$ we need to consider the projectors $X_p$, $X_m$, and $Y_p$, $Y_m$, respectively, by transforming the $x$- and $y$-axis into the $z$-axis in the Bloch sphere and measuring $\rm {\bf s}igma_z$ (see the definitions below Eq.~\ref{eq:prepmeas}). Therefore, the set $\rm {\bf m}athcal{M}$ is generated by the gates $\lbrace \rm {\bf m}athbb{I}, Y_{-\pi/2}, X_{-\pi/2} \rbrace$. In Figure \ref{fig:onequbitqpt} there is a sketch of the required circuits to gate characterization. Thus, the number of quantum circuits for a complete characterization is less than the number of independent terms in the matrix process $\chi_{\rm {\bf m}athcal{E}}$. On the other hand, one can observe that the number of measurement operators, size of $\rm {\bf m}athcal{M}$, is enough for the solution of Eq.~\ref{eq:lambda}, since ${\rm dim} \lbrace \rm {\bf m}athcal{M}\rbrace\times {\rm dim} \lbrace \rm {\bf m}athcal{P}\rbrace > {\rm dim} \lbrace \vec{y} \rbrace$. { {\it B.2 $n$-qubit process matrix:} The natural extension of the initial states and the measurement operators in the $n$-qubit quantum process characterization follows: \begin{equation}\label{eq:prepmeas} \begin{aligned}[c] \rm {\bf m}athcal{P}_{n} : \left\lbrace \right. \, & \rm {\bf Kg}ket{Z_p}, \rm {\bf Kg}ket{Z_m} , \\ & \rm {\bf Kg}ket{X_p}, \rm {\bf Kg}ket{Y_p} \, \left. \right\rbrace^{\otimes n} \\ & \hbox{\rm e}nd{aligned} \quad , \quad \begin{aligned}[c] \rm {\bf m}athcal{M}_{n} : \left\lbrace \right. \, & \rm {\bf Kg}ket{Z_p}, \rm {\bf Kg}ket{Z_m} , \\ & \rm {\bf Kg}ket{X_p}, \rm {\bf Kg}ket{X_m}, \\ & \rm {\bf Kg}ket{Y_p}, \rm {\bf Kg}ket{Y_m} \, \left. \right\rbrace^{\otimes n} \ \ , \hbox{\rm e}nd{aligned} \hbox{\rm e}nd{equation} where $\rm {\bf m}athcal{P}_{n}$ is generated by the preparation preparation of the states $\lbrace \rm {\bf Kg}et{0}, \rm {\bf Kg}et{1}, \rm {\bf Kg}et{+} , \rm {\bf Kg}et{i} \rbrace^{\otimes n}$. Now, the possible outcomes of the $\rm {\bf s}igma_{z}^{\otimes n}$ measurement on an arbitrary state are $s = \lbrace 0,1 \rbrace^{n}$, with probabilities \begin{eqnarray} p_{0\cdots0} &=& {\rm tr} \lbrace Z_{p}\otimes \cdots \otimes Z_{p} \rbrace \nonumber \\ p_{010\cdots0} &=& {\rm tr} \lbrace Z_{p}\otimes Z_{m} \otimes Z_{p} \otimes \cdots \otimes Z_{p} \rbrace \nonumber \\ &\vdots & \nonumber \\ p_{1\cdots1} &=& {\rm tr} \lbrace Z_{m}\otimes \cdots \otimes Z_{m} \rbrace . \hbox{\rm e}nd{eqnarray} For the $\lbrace \rm {\bf s}igma_{z}, \rm {\bf s}igma_{y}, \rm {\bf s}igma_{z} \rbrace^{\otimes n}$ measurements we need to apply the projectors $\lbrace X_{p} , X_{n}\rbrace^{\otimes n}$ and $\lbrace Y_{p} , Y_{n}\rbrace^{\otimes n}$, respectively. 
Thus, the set $\rm {\bf m}athcal{M}_{n}$ is generated by $\lbrace \rm {\bf m}athbb{I}, Y_{-\pi/2}, X_{-\pi/2} \rbrace^{\otimes n}$. Again, as in the one-qubit case, we get a redundant amount of measurements, ${\rm dim} \lbrace {\cal M}_{n}\rbrace \times {\rm dim} \lbrace {\cal P}_{n}\rbrace = 4^{n} \times 6^{n}$ for the number $16^{n}$ of independent elements in $\chi$. } \begin{figure} \centering \includegraphics[width=\linewidth]{qpt.pdf} \caption{Preparation and Measurement set of quantum circuits for one-qubit quantum process tomography.} \label{fig:onequbitqpt} \hbox{\rm e}nd{figure} \rm {\bf s}ection{Fitting parameters} \label{app:exp} In this section we shall discuss the fitting details used in the post-processing step in the PPC protocol. The number of rotations and shots plays an essential role in the quantum process characterization. We consider a one-qubit experiment to benchmark the calibration process, which follows the same principle as the characterization. In this experiment, we consider the qubit 0 in the IBMQ-Bogota quantum chip. For the calibration we use rotations about the $x$- and $y$-axis, with a different number of rotations and shots. Figure \ref{fig:cal} shows the conditional probabilities and the initial phase for each experimental setup as a function of the number of rotations and shots, with error bars indicating the standard deviation in each measurement. We choose a suitable setup where the conditional probabilities do not vary significantly concerning the result using the highest values $N_\theta = 71$ and $N = 5000$. For $Y_\theta$ and $X_\theta$ we found the optimal points $N_\theta = 51$ and $N_\theta = 41$, respectively (see shadow regions in Figure \ref{fig:cal}). One important feature is the dependence of the parameters' standard deviation on $N_\theta$ and $N$, which slightly improves the experimental setup's refinement. \begin{widetext} \begin{figure}[h] \centering \includegraphics[width=0.6\linewidth]{calibration.pdf} \caption{ \hbox{\rm e}mph{Fitting parameters:} Conditional probabilities and initial phase using $Y_\theta$ rotation in a), b), and c), and using the $X_\theta$ rotation in d), e), and f), respectively. The error bars represent the standard deviation of each parameter in the fitting process. The shadow regions indicate the optimal values for $N_\theta$ and $N$.} \label{fig:cal} \hbox{\rm e}nd{figure} \hbox{\rm e}nd{widetext} \begin{acknowledgments} We thank Ryan Bennink for valuable discussions. This work was supported as part of the ASCR Quantum Testbed Pathfinder Program at Oak Ridge National Laboratory under FWP \# ERKJ332. This research used resources of the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725. \hbox{\rm e}nd{acknowledgments} \rm {\bf s}ection*{Author contributions} V.L.O. introduced the PPC method. V.L.O. and T.K. developed the theoretical formalism, performed the analytic calculations and performed the experiments. R.C.P.~analyzed and interpreted experimental data and suggested experiments to run. V.L.O, T.K., and R.C.P. cowrote the manuscript. \rm {\bf s}ection*{Competing interests} The authors declare that there are no competing interests. \begin{thebibliography}{24} \bibitem{Magesan2012} E. Magesan, J. M. Gambetta, B. R. Johnson, C. A. Ryan, J. M. Chow, S. T. Merkel, M. P. da Silva, G. A. Keefe, M. B. Rothwell, T. A. Ohki, M. B. Ketchen, and M. 
Steffen, ``Efficient measurement of quantum gate error by interleaved randomized benchmarking," Phys. Rev. Lett. \textbf{109}, 080505 (2012). \bibitem{Eisert2020} J. Eisert, D. Hangleiter, N. Walk, I. Roth, D. Markham, R. Parekh, U. Chabaud, and E. Kashefi, ``Quantum certification and benchmarking," Nat. Rev. Phys.\ \textbf {2},\ 382 (2020). \bibitem{NielsenBook} M. A. Nielsen and I. L Chuang, \hbox{\rm e}mph{Quantum Computation and Quantum Information: 10th Anniversary Edition}, 10th\ ed.\ (Cambridge University Press, USA,\ 2011). \bibitem{Poyatos1997} J. F. Poyatos, J. I. Cirac, and P. Zoller, `` Complete Characterization of a Quantum Process: The Two-Bit Quantum Gate," Phys. Rev. Lett. \textbf{78}, 390 (1997). \bibitem{Chuang1997} I. L. Chuang and M. A. Nielsen, ``Prescription for experimental determination of the dynamics of a quantum black box ," Journal of Modern Optics \textbf{44}, 2455 (1997). \bibitem{Baumgratz2013} T. Baumgratz, D. Gross, M. Cramer, and M. B. Plenio, ``Scalable reconstruction of density matrices," Phys. Rev. Lett. \textbf{111} (2013). \bibitem{BlumeKohout2010} R. Blume-Kohout, ``Optimal, reliable estimation of quantum states," New J. Phys. \textbf{12}, 043034 (2010). \bibitem{Ferrie2014} C. Ferrie, ``Quantum model averaging," New J. Phys. \textbf{16}, 093035 (2014). \bibitem{BlumeKohout2012} R. Blume-Kohout, ``Robust Error Bars for Quantum Tomography," arXiv: 1202.5270. \bibitem{Christandl2012} M. Christandl and R. Renner, ``Reliable quantum state tomography," Phys. Rev. Lett. \textbf{109}, 120403 (2012). \bibitem{Wang2019} J. Wang, V. B. Scholz, and R. Renner, ``Confidence polytopes in quantum state tomography," Phys. Rev. Lett. \textbf{122} (2019). \bibitem{Zhang2020} S. Zhang, Y. Lu, K. Zhang, W. Chen, Y. Li, J.-N. Zhang, and K. Kim, ``Error-mitigated quantum gates exceeding physical fidelities in a trapped-ion system," Nat. Comm. \textbf{11}, 587 (2020). \bibitem{Alexander2020} T. Alexander, N. Kanazawa, D. J. Egger, L. Capelluto, C. J Wood, A. Javadi-Abhari, and D. C. McKay, ``Qiskit pulse: programming quantum computers through the cloud with pulses," Quantum Science and Technology \textbf{5}, 044006 (2020). \bibitem{Korotkov2013} A. N. Korotkov, ``Error matrices in quantum process tomography," arXiv:1309.6405. \bibitem{Mohseni2008} M. Mohseni, A.T. Rezakhani, and D.A. Lidar, ``Quantum-process tomography: Resource analysis of different strategies," Phys. Rev. A \textbf{77}, 032322 (2008). \bibitem{Altepeter2003} , J. B. Altepeter, D. Branning, E. Jeffrey, T. C. Wei, P. G. Kwiat, R. T. Thew, J. L. O’Brien, M. A. Nielsen, and A. G. White, ``Ancilla-Assisted Quantum Process Tomography," Phys. Rev. Lett. \textbf{90}, 193601 (2003). \bibitem{Leung2003} , D. W. Leung, ``Choi’s proof as a recipe for quantum process tomography," J. Math. Physical. (N.Y.) \textbf{44}, 528 (2003). \bibitem{Ariano2003} , G.M. D’Ariano and P. Lo Presti, ``Imprinting Complete Information About a Quantum Channel on its Output State," Phys. Rev. Lett. \textbf{91}, 047902 (2003). \bibitem{Ariano2001} , G.M. D’Ariano and P. Lo Presti, ``Quantum Tomography for Measuring Experimentally the Matrix Elements of an Arbitrary Quantum Operation," Phys. Rev. Lett. \textbf{86}, 4195 (2001). \bibitem{Omkar2015a} , S. Omkar, R. Srikanth, and S. Banerjee, ``Characterization of quantum dynamics using quantum error correction," Phys. Rev. A \textbf{91}, 012324 (2015). \bibitem{Omkar2015b} , S. Omkar, R. Srikanth, and S. Banerjee, ``Quantum code for quantum error characterization," Phys. Rev. 
A \textbf{91}, 052309 (2015). \bibitem{Mohseni2007} , M. Mohseni and D. A. Lidar, ``Direct characterization of quantum dynamics: General theory," Phys. Rev. A \textbf{75}, 062331 (2007). \bibitem{Mohseni2006} , M. Mohseni and D. A. Lidar, ``Direct Characterization of Quantum Dynamics," Phys. Rev. Lett. \textbf{97}, 170501 (2006). \bibitem{Flammia2010} D. Gross, Y.-K. Liu, S. T. Flammia, S. Becker, and J. Eisert, ``Quantum state tomography via compressed sensing," Phys. Rev. Lett. \textbf{105}, 150501 (2010). \bibitem{Flammia2012} S.T. Flammia, D. Gross, Y.-K. Liu, and J. Eisert, ``Quantum tomography via compressed sensing: Error bounds, sample complexity and efficient estimators," New J. Phys. \textbf{14}, 095022 (2012). \bibitem{Rodionov2014} A. V. Rodionov, A. Veitia, R. Barends, J. Kelly, D. Sank, J. Wenner, J. M. Martinis, R. L. Kosut, and A. N. Korotkov, ``Compressed sensing quantum process tomography for superconducting quantum gates," Phys. Rev. B \textbf{90}, 144504 (2014). \bibitem{Shabani2011} A. Shabani, R. L. Kosut, M. Mohseni, H. Rabitz, M. A. Broome, M.P. Almeida, A. Fedrizzi, and A.G. White, ``Efficient Measurement of Quantum Dynamics Via Com- pressive Sensing," Phys. Rev. Lett. \textbf{106}, 100401 (2011). \bibitem{Kim2020} Y. Kim, Y. S. Teo, D. Ahn, D.-G. Im, Y.-W. Cho, G. Leuchs, L. L. Sánchez-Soto, H. Jeong, and Y.-H. Kim, ``Universal compressive characterization of quantum dynamics," Phys. Rev. Lett. \textbf{124}, 210401 (2020). \bibitem{Emerson2005} J. Emerson, R. Alicki, and K. Zyczkowski, ``Scalable noise estimation with random unitary operators," Journal of Optics B: Quantum and Semiclassical Optics \textbf{7}, S347 (2005). \bibitem{Knill2008} E. Knill, D. Leibfried, R. Reichle, J. Britton, R.B. Blakestad, J.D. Jost, C. Langer, R. Ozeri, S. Seidelin, and D.J. Wineland, ``Randomized benchmarking of quantum gates," Phys. Rev. A \textbf{77}, 012307 (2008). \bibitem{Magesan2011} E. Magesan, J.M. Gambetta, and J. Emerson, ``Scalable and robust randomized benchmarking of quantum processes," Phys. Rev. Lett. \textbf{106}, 180504 (2011). \bibitem{Shabani2011} A. Shabani, R.L. Kosut, M. Mohseni, H. Rabitz, M.A. Broome, M.P. Almeida, A. Fedrizzi, and A.G. White, ``Efficient Measurement of Quantum Dynamics via Compressive Sensing," Phys. Rev. Lett. \textbf{106}, 100401 (2011). \bibitem{Govia_2020} L.C.G. Govia, G.J. Ribeill, D. Rist{\`e}, M. Ware, and H. Krovi, ``Bootstrapping quantum process tomography via a perturbative ansatz," Nat. Comm. \textbf{11} (2020). \bibitem{symfit} M. Roelfs and P. Kroon, ``symfit," http://doi.org/10.5281/zenodo.1133336. \bibitem{Qiskit} G. Aleksandrowicz \hbox{\rm e}mph{et al.}, ``An open-source framework for quantum computing," http://doi.org/10.5281/zenodo.2562111. \bibitem{Knips2015} L. Knips, C. Schwemmer, N. Klein, J. Reuter, G. T\'oth, and H. Weinfurteri, ``How long does it take to obtain a physical density matrix?," arXiv:1512.06866. \bibitem{Gokhale2020} P. Gokhale, A. Javadi-Abhari, N. Earnest, and Y. Shi, ``Optimized Quantum Compilation for Near-Term Algorithms with OpenPulse," arXiv:2004.11205. \bibitem{idle1} R. Blume-Kohout, E. Nielsen, K. Rudinger, K. Young, M. Sarovar, and T. Proctor, ``Idle tomography: Efficient gate characterization for N-qubit processors,'' in APS March Meeting Abstracts, {\bf 2019}, P35-006 (2019). \bibitem{idle2} A. Ash-Saki, M. Alam, and S. Ghosh, ``Experimental Characterization, Modeling, and Analysis of Crosstalk in a Quantum Computer,'' IEEE Transactions on Quantum Engineering, \textbf{1}, 1-6 (2020). 
\bibitem{Greenbaum2015} D. Greenbaum, ``Introduction to Quantum Gate Set Tomography," arXiv:1509.02921. \hbox{\rm e}nd{thebibliography} \hbox{\rm e}nd{document}
\betaegin{document} \tauitle{Gain-Loss Hedging and Cumulative Prospect Theory} \alphauthor{Lorenzo Bastianello, Alain Chateauneuf, Bernard Cornet\footnote{Bastianello (corresponding author): Università Ca' Foscari Venezia, (email: [email protected]); Chateauneuf: IPAG Business School, Universit\'{e} Paris 1 Panth\'{e}on-Sorbonne and Paris School of Economics, (email: [email protected] ); Cornet: Universit\'{e} Paris 1 Panth\'{e}on-Sorbonne and Kansas University (email: [email protected]). First version: February 2020 }} \muaketitle \betaegin{abstract} Two acts are comonotonic if they yield high payoffs in the same states of nature. The main purpose of this paper is to derive a new characterization of Cumulative Prospect Theory (CPT) through simple properties involving comonotonicity. The main novelty is a concept dubbed gain-loss hedging: mixing positive and negative acts creates hedging possibilities even when acts are comonotonic. This allows us to clarify in which sense CPT differs from Choquet expected utility. Our analysis is performed under the simpler case of (piece-wise) constant marginal utility which allows us to clearly separate the perception of uncertainty from the evaluation of outcomes. \muedskip p_{\alphalpha}r\nuoindent {\sigmac Keywords:\/} Cumulative Prospect Theory, Comonotonicity, Gain-loss hedging, \v Sipo\v s integral, Choquet integral. \muedskipp_{\alphalpha}r\nuoindent {\sigmac JEL Classification Number:\/} D81. \varepsilonnd{abstract} \sigmaection{Introduction}\lambdaabel{sec:intro} When making everyday decisions, economic agents are often confronted with uncertainty. For instance, one can think of a decision maker (DM) who needs to choose how to allocate her wealth between two different portfolios of assets, or a firm that has to decide whether to invest in an innovative technology or in a traditional one. The most popular model used under risk and uncertainty is the expected utility model. This model, proposed first by Bernoulli at the beginning of the XVIII century, has been axiomatized by de Finetti {\bf I}te{deFinetti}, Savage {\bf I}te{Savage} and Von Neumann and Morgenstern {\bf I}te{vNM}. However, empirical evidence has shown that expected utility does not provide a good description of DMs' actual choices. Early examples are the famous paradoxes of Allais {\bf I}te{Allais} and Ellsberg {\bf I}te{Ellsberg}. One of the most prominent and most successful alternative to expected utility theory is cumulative prospect theory (CPT) of Tversky and Kahneman {\bf I}te{CPT}. The aim of this paper is twofold: $(i)$ we provide a new mathematical characterization of the CPT functional under the simplifying assumption of (piece-wise) constant marginal utility \tauextit{à la} Yaari~{\bf I}te{Yaari}; $(ii)$ we use the characterization of the previous point to obtain a novel preference axiomatization of CPT. Consider acts as functions from a state space $S$ to the set of real numbers. Thus, given an act $f:S{\rm ri\hspace{.5pt}}ghtarrow\muathbb{R}$, $f(s)$ can be interpreted as the amount of money or consumption good that a DM obtains if the state turns out to be $s$. A central role is played by comonotonic acts. Loosely speaking, two acts are comonotonic if they are positively correlated. Mixing two comonotonic acts does not provide a possible hedge against uncertainty. This idea was exploited in the seminal papers of Schmeidler {\bf I}te{Schmeidler86}, {\bf I}te{Schmeidler89} to extend expected utility to Choquet expected utility. 
One advantage of CPT over the Choquet model is that it allows to disentangle the behavior of DMs in the domain of gains from the one in the domain of losses, i.e. when outcomes are respectively above or below a certain reference point (in our case the reference point is naturally taken equal to 0). This difference in behavior can be decomposed into two components. The first one is called loss-aversion and says that ``losses loom larger than gains'' (Tversky and Kahneman {\bf I}te{CPT}). Mathematically, it means that losses are multiplied by a constant $\lambdaambda>1$. The second one is usually called sign dependence and says that the attitude toward uncertainty (mathematically represented by a capacity) is different for gains and for losses. We take this behavior as a starting point for both the mathematical characterization of CPT and its axiomatization. The intuition behind our properties is that adding comonotonic acts can still provide some hedge if those acts are of opposite signs and have non-disjoint supports. We call this property gain-loss hedging. We describe here the two main properties that we use in Section \ref{sec:CPT_math} to characterize mathematically the CPT functional. The first property is well-known and postulates comonotonic independence (separately) for gains and for losses. Comonotonic acts do not provide a possible hedge against uncertainty and therefore adding them should not change the preferences of the DM. Take three acts $f,g,$ and $h$, all in the domain of gains or all in the domain of losses, such that $h$ is comonotonic with $f$ and $g$. Then our condition require that if $f$ and $g$ are indifferent, then adding $h$ to both of them does not change a DM's preferences since in both situations $h$ does not increase nor reduce uncertainty. The second property, that we call gain-loss hedging, represents the main behavioral novelty of the paper. The key idea is that adding an act above the reference point to an act below the reference point may provide an hedge against uncertainty unless these acts have disjoint supports. To exemplify suppose that there are two states of the world $S=\{s_1,s_2\}$ and that a DM with a linear utility function over outcomes is indifferent between the assets $f=(20,0)$ ($f$ is the act that pays 20 if $s_1$ is realized and 0 otherwise) and $g=(10,10)$. Consider now the act $h=(0,-10)$ which has disjoint support with $f$ but not with $g$. When the DM evaluates $f+h=(20,-10)$ and $g+h=(10,0)$, she may feel $f+h$ more uncertain than $g+h$ and therefore she may prefer $g+h$. Note that indifference between $f$ and $g$ and then a strict preference for $g+h$ is precluded by the expected utility model (with the utility function being the identity). More interestingly, this preference pattern would be a paradox even for the more general Choquet expected utility model of Schmeidler {\bf I}te{Schmeidler89} (with the utility function being the identity). The Choquet model excludes any possible hedging through mixing of comonotonic acts. In this example however act $h$ is comonotonic with both acts $f$ and $g$ and therefore no possible hedging would be envisioned by the Choquet model. Therefore $h$ is a possible hedge to uncertainty when added to $g$ because gains and losses balance out one another, and not because of comonotonicity. We elaborate more on this idea in Example \ref{ex:gain-loss-hedge}. In Section \ref{sec:CPT_behave}, we give a preference axiomatization of the CPT model with piece-wise linear utility. 
We do not assume the Anscombe and Aumann {\bf I}te{AA} framework, and our axioms only appeals to simple properties related to comonotonicity. Moreover, we propose a new and simple axiom that can be used to elicit the coefficient of loss-aversion $\lambdaambda$. In order to derive a CPT representation of preferences, we use the mathematical characterization of Section \ref{sec:CPT_math}. In a sense, our paper parallels, in the context of prospect theory, the work of Schmeidler {\bf I}te{Schmeidler86}, {\bf I}te{Schmeidler89} on the Choquet integral. Empirical evidence not only supports sign-dependence, but it suggests further that agents are uncertainty averse for gains and uncertainty seeking for losses, see for instance Wakker~{\bf I}te{WakkerPT}, Section 12.7 for a review. Section \ref{sec:CPT_ambig_att} provides testable axioms that characterize those opposite behaviors. Finally we investigate when uncertainty aversion for gains is symmetric to uncertainty seeking for losses. Behaviorally, this happens if a DM who is indifferent between an act $f$ and a monetary outcome $\alphalpha$ is also indifferent between $-f$ and $-\alphalpha$. In this case we prove that weights for gains and losses are dual with respect to each other and that CPT reduces to a \v Sipo\v s integral, see \v Sipo\v s {\bf I}te{Sipos}. This result clarifies the relation of CPT with the \v Sipo\v s integral that was first noticed by Starmer and Sudgen {\bf I}te{Starmer89} (see also Wakker {\bf I}te{WakkerPT} and Kothiyal \tauextit{et al.} {\bf I}te{KSW}). Of course, there are several axiomatizations of CPT available in the literature. The concept of comonotonicity and the fact that acts are rank-ordered are crucial, see Diecidue and Wakker {\bf I}te{10-2Wakker}. The very first axiomatization is provided in the seminal paper of Tversky and Kahneman {\bf I}te{CPT} and relies on comonotonic independence and a property called double matching. See also Trautmann and Wakker {\bf I}te{TW} for a recent characterization using these axioms in a (reduced) Anscombe and Aumann {\bf I}te{AA} framework. Wakker and Tversky {\bf I}te{WT} pair comonotonicity with trade-off consistency (see also Chateauneuf and Wakker {\bf I}te{ChatoWakker99} for the case of risk). The conomotonic sure thing principle approach (or a weakening of it called tail independence) is developed in Chew and Wakker {\bf I}te{ChewWakker96}, Zank {\bf I}te{Zank} and Wakker and Zank {\bf I}te{WZ}. The paper closest to our is the one of Schmidt and Zank {\bf I}te{SchmidtZank09}. The authors characterize CPT through an axiom called independence of common increments for comonotonic acts. Interestingly, they obtain a piecewise linear utility function (with a kink about the reference point), as in our axiomatization. We refer the reader to the introductory section of Schmidt and Zank {\bf I}te{SchmidtZank09} for a detailed discussion about the advantages of adopting piece-wise linear utility. The rest of the paper is organized as follows. Section \ref{sec:framework} introduces the framework, the mathematical notations and the behavioral models that we will consider. Section \ref{sec:main} is divided in three subsections and it contains our main results. Section \ref{sec:CPT_math} presents the mathematical characterization of the CPT and \v Sipo\v s functionals, Section \ref{sec:CPT_behave} provides a behavioral characterization of CPT and Section \ref{sec:CPT_ambig_att} discusses DM's attitude towards uncertainty. Section \ref{sec:conclusion} concludes. 
All proofs are gathered in the Appendix. \sigmaection{Framework}\lambdaabel{sec:framework} Let $S$ be a set of states of the world endowed with a $\sigmaigma$-algebra ${{\cal A}l A}$. Elements of ${{\cal A}l A}$ are called \tauextit{events}. We denote $\muathcal{F}$ the set of all bounded, real-valued, ${{\cal A}l A}$-measurable functions over $S$, i.e. $\muathcal{F}~=~\{f:S{\rm ri\hspace{.5pt}}ghtarrow \muathbb{R}| f \tauext{ is bounded and }{{\cal A}l A}\tauext{-mesurable}\}$. A function $f\imathn \muathcal{F}$ is called \tauextit{act}. An act can be interpreted as an asset that pays a monetary outcome in $\muathbb{R}$ that depends on the realization of the state of the world. We denote the \tauextit{positive part} of an act $f\imathn\muathcal{F}$ by $f^+=f\vee 0$ and the \tauextit{negative part} by $f^-=(-f)\vee 0$. Note that both positive and negative parts are greater than 0.\footnote{Note that several papers studying prospect theory use the symbol $f^-$ to denote $f\wedge 0$.} The set $\muathcal{F}^+=\{f\imathn \muathcal{F}| f(s)\geq 0,\, \forall s\imathn S\}$ is the set of positive acts, the set of negative acts $\muathcal{F}^-$ is defined analogously. Two acts $f,g\imathn \muathcal{F}$ have the \tauextit{same sign} if either $f,g\imathn \muathcal{F}^+$ or $f,g\imathn \muathcal{F}^-$. We say that two acts are of \tauextit{opposite sign} if one of them is positive and the other is negative. The \tauextit{support} of an act of $f\imathn \muathcal{F}$ is the set $supp(f)=\{s\imathn S | f(s)\nueq 0\}$. Two acts $f,g\imathn \muathcal{F}$ are \tauextit{comonotonic} if for all $s,t\imathn S$, $(f(s)-f(t))(g(s)-g(t))\geq 0$. Let $A\sigmaubseteq S$, $1_A$ is the \tauextit{indicator function} of the set $A$, i.e. $1_A(s):= \betaegin{cases} 1 & \tauext{ if } s\imathn A\\ 0 & \tauext{ if } s\imathn A^c \varepsilonnd{cases}$. If $\alphalpha\imathn \muathbb{R}$, then $\alphalpha 1_A$ denotes the constant act which pays $\alphalpha$ in every state $s\imathn A$. A \tauextit{(normalized) capacity} $v$ on the measurable space $(S,{{\cal A}l A})$ is a set function $v:{{\cal A}l A} \muapsto \muathbb{R}$ such that $v(\varepsilonmptyset)=0,\,v(S)=1$ and for all $A,B \imathn {{\cal A}l A},\, A\sigmaubseteq B \muathbb{R}ightarrow v(A)\lambdaeq v(B)$. If $v$ is a capacity, we define its \tauextit{conjugate} by $\hat{v}(A)=1-v(A^c)$ for all $A\imathn {{\cal A}l A}$. A capacity $v: {{\cal A}l A} \muapsto \muathbb{R}$ is \tauextit{convex (concave)} if, for all $A,B\imathn {{\cal A}l A}$, $v(A{\cal U}p B)+v(A{\cal A}p B)\geq(\lambdaeq) v(A)+v(B)$. Given a capacity $v$ on $(S,{{\cal A}l A})$, the \tauextit{Choquet integral} of $f\imathn \muathcal{F}$ with respect to $v$ is a functional $C:\muathcal{F}{\rm ri\hspace{.5pt}}ghtarrow \muathbb{R}$ defined by $$ C(f)=\imathnt_{S}{f\,dv}:=\imathnt_{-\imathnfty}^0{(v(\{f\geq t\})-1)\,dt} + \imathnt_0^{+\imathnfty}{v(\{f\geq t\})\,dt}, $$ In the following we will remove the subscript $S$ from the integral sign whenever the domain of integration is clear. Given a capacity $v$ on $(S,{{\cal A}l A})$, the \tauextit{\v Sipo\v s integral} (see Sipo\v s~{\bf I}te{Sipos}) of $f\imathn \muathcal{F}$ with respect to $v$ is a functional ${\cal H}eck{S}:\muathcal{F}{\rm ri\hspace{.5pt}}ghtarrow \muathbb{R}$ defined as $$ {\cal H}eck{S}(f)=\imathnt f^+ dv-\imathnt f^- dv $$ where the two integrals are Choquet integrals. The following Lemma gives an alternative formulation of the \v Sipo\v s integral when the conjugate capacity is used when evaluating the negative part of a function. 
Moreover it clarifies the relation between the Choquet integral and the \v Sipo\v s integral. \betaegin{lemma}\lambdaabel{lemma:sipos_ceu} Let $v$ be a capacity and $\hat{v}$ its conjugate. Then the following holds: \betaegin{itemize} \imathtem ${\cal H}eck{S}(f)= \imathnt f^+ dv+\imathnt- f^- d\hat{v}$; \imathtem $C(f)=\imathnt f^+ dv+\imathnt- f^-dv=\imathnt f^+dv -\imathnt f^- d\hat{v}$. \varepsilonnd{itemize} \varepsilonnd{lemma} The main object of this paper is the \tauextit{(piecewise linear) Cumulative Prospect Theory (CPT)} functional $CPT:\muathcal{F}{\rm ri\hspace{.5pt}}ghtarrow \muathbb{R}$. It is a generalization of both Choquet and \v Sipo\v s integrals. Consider two capacities $v^+$, $v^-$ and a real number $\lambdaambda>0$, then the \tauextit{(piecewise linear) CPT} functional $CPT:\muathcal{F}{\rm ri\hspace{.5pt}}ghtarrow \muathbb{R}$ is defined by $$ CPT(f)=\imathnt f^+ dv^+-\imathnt \lambdaambda f^- dv^-. $$ A preference relation $\sigmauccsim$ over $\muathcal{F}$ is a complete and transitive binary relation with non-empty strict part. As usual, $f\sigmauccsim g$ means ``$f$ is preferred to $g$''. We denote $\sigmaucc$ and $\sigmaim$ the strict and weak part of $\sigmauccsim$. A functional $I:\muathcal{F}{\rm ri\hspace{.5pt}}ghtarrow\muathbb{R}$ \tauextit{represents} $\sigmauccsim$ if for all $f,g\imathn \muathcal{F}$, $f\sigmauccsim g$ if and only if $I(f)\geq I(g)$. \sigmaection{Main results}\lambdaabel{sec:main} This section contains our two main results. The first one, Theorem \ref{th:CPT}, characterizes mathematically the CPT functional. The second result, Theorem \ref{th:axiom_CPT}, studies which behavioral axioms a preference relation should satisfy in order to be represented by a CPT functional. \sigmaubsection{The CPT functional}\lambdaabel{sec:CPT_math} We start with a seminal theorem of Schmeidler {\bf I}te{Schmeidler86} who provided a characterization of the Choquet functional. Before presenting the result we recall that a functional $I:\muathcal{F}{\rm ri\hspace{.5pt}}ghtarrow \muathbb{R}$ is \tauextit{monotonic} if $f\geq g\muathbb{R}ightarrow I(f)\geq I(g)$, where $f\geq g$ means $f(s)\geq g(s)$ for all $s\imathn S$. Moreover $I$ satisfies \tauextit{comonotonic additivity} if, whenever $f$ and $g$ are comonotonic, then $I(f+g)=I(f)+I(g)$. \betaegin{theorem}\lambdaabel{th:Schmeidelr86}{\sigmac (Schmeidler {\bf I}te{Schmeidler86})} Let $I:\muathcal{F}{\rm ri\hspace{.5pt}}ghtarrow \muathbb{R}$ be a given functional with $I(1_S)=1$. Then the following are equivalent. \betaegin{itemize} \imathtem[(i)] $(a)$ $I$ is monotonic; $(b)$ $I$ satisfies comonotonic additivity. \imathtem[(ii)] $I$ is a Choquet integral. \varepsilonnd{itemize} \varepsilonnd{theorem} The CPT functional generalizes the Choquet functional by relaxing comonotonic additivity. More specifically, comonotonic additivity will be retained only for comonotonic acts of the same sign and for (comonotonic) acts of opposite sign with disjoint supports. The following is our first main result. \betaegin{theorem}\lambdaabel{th:CPT} Let $I:\muathcal{F}{\rm ri\hspace{.5pt}}ghtarrow \muathbb{R}$ be a given functional satisfying $I(1_S)=1$. Then the following are equivalent. \betaegin{itemize} \imathtem[(i)] $(a)$ $I$ is monotonic; $(b)$ $I$ satisfies comonotonic additivity on $\muathcal{F}^+$ and $\muathcal{F}^-$ and for acts $f,g$ of opposite sign such that $supp(f){\cal A}p supp(g)=\varepsilonmptyset$. \imathtem[(ii)] $I$ is a CPT functional. 
\varepsilonnd{itemize} \varepsilonnd{theorem} Consider item $(i)$ of both Theorem \ref{th:Schmeidelr86} and Theorem \ref{th:CPT}. Note that part (b) of Theorem \ref{th:Schmeidelr86} implies (b) of Theorem \ref{th:CPT}, as acts with opposite sign and disjoint supports are comonotonic. This relaxation not only characterize a functional that is more general than the Choquet integral, but it also gives some important insights from a behavioral point of view. Recall that comonotonic additivity is a weakening of full-fledged additivity, a property that would force the functional $I$ to be linear, and hence an expectation. The behavioral intuition behind comonotonic additivity is that adding two comonotonic acts does not permit possible hedging against choices of nature. Relaxing comonotonic additivity allows us to uncover more sophisticated attitudes towards uncertainty and more subtle forms of hedging. The fist remarkable property of the CPT functional is that it differentiates agents' behavior in the domain of gains (i.e. $\muathcal{F}^+$) from the one in the domain of losses (i.e. $\muathcal{F}^-$). The outcome for which behavior changes, namely the monetary outcome 0, is called the \tauextit{reference point}.\footnote{In this paper, the reference point is exogenously given and it is normalized to 0 for convenience (we could have chosen any other reference point $r\imathn \muathbb{R}$). Schmidt and Zank {\bf I}te{SchmidtZank12} provide axioms to make the reference point endogenous.} Comonotonic additivity is preserved whenever acts under considerations are both above or both below the reference point. Comonotonic additivity over $\muathcal{F}^+$ and $\muathcal{F}^-$ weakens a condition already well known in the literature called cosigned independence. Two acts $f,g\imathn \muathcal{F}$ are \tauextit{sign-comonotonic} or simply \tauextit{cosigned} if they are comonotonic and there exists no $s\imathn S$ such that $f(s)>0$ and $g(s)<0$, see Wakker and Tversky {\bf I}te{WT} and Trautmann and Wakker~{\bf I}te{TW}. One of the main contribution of the present paper lies in the second comonotonic additivity requirement that characterizes the $CPT$ functional, namely $f,g$ of opposite sign such that $supp(f){\cal A}p supp(g)=\varepsilonmptyset$ implies $CPT(f+g)=CPT(f)+CPT(g)$. This means that comonotonic additivity can fail if we have $f,g$ of opposite sign and $supp(f){\cal A}p supp(g)\nueq\varepsilonmptyset$ (we underline again that such acts are comonotonic). The behavioral intuition behind this requirement is that adding the positive and negative parts of two acts can provide a hedge against possible choices of nature even when acts under consideration are comonotonic. We call this property \tauextit{gain-loss hedging}.\footnote{We thank Peter Wakker for suggesting this terminology.} This hedging possibility it is not considered for instance in the Choquet model, where the only way to hedge is to add two non-comonotonic acts. The following example provides more details for the particular case in which CPT reduces to a \v Sipo\v s integral, i.e. $\lambdaambda=1$ and $v^+=v^-$. \betaegin{example}\lambdaabel{ex:gain-loss-hedge} Let $S=\{s_1,s_2,s_3\}$ and consider a CPT functional with $\lambdaambda=1$ and $v=v^+=v^-$ (i.e. a \v Sipo\v s integral). 
Let $v$ be defined as \betaegin{center} \betaegin{tabular}{c|c|c|c|c|c|c|c|c} $A$ & $S$ & $\varepsilonmptyset$ & $s_1$ & $s_2$ & $s_3$ & $s_1{\cal U}p s_2$ & $s_2{\cal U}p s_3$ & $s_1{\cal U}p s_3$ \\ \hline $v$ & 1 & 0 & $\frac{2}{3}$ & $\frac{1}{3}$ & 0 & $\frac{2}{3}$ & $\frac{2}{3}$ & 1 \\ \varepsilonnd{tabular} \varepsilonnd{center} Consider now the following acts on $S$. \betaegin{center} \betaegin{tabular}{c|c|c|c} & $s_1$ & $s_2$ & $s_3$ \\ \hline $f$ & 3 & 4 & 4 \\ $g$ & 0 & 11 & 0 \\ $h$ & -3 & 0 & -1 \\ $-h$ & 3 & 0 & 1 \\ $f+h$ & 0 & 4 & 3 \\ $g+h$ & -3 & 11 & -1 \\ \varepsilonnd{tabular} \varepsilonnd{center} Acts $f,g,h$ are comonotonic, but $supp(g){\cal A}p supp(h)=\varepsilonmptyset$ while $supp(f){\cal A}p supp(h)\nueq\varepsilonmptyset$. Let $\sigmauccsim_{{\cal H}eck{S}}$ be the preference relation induced by the ${\cal H}eck{S}$ functional, i.e. $f\sigmauccsim_{{\cal H}eck{S}} g \Lambdaeftrightarrow {\cal H}eck{S}(f)\geq {\cal H}eck{S}(g)$ and $\sigmauccsim_C$ the one induced by the $C$ functional (both functionals ${\cal H}eck{S}$ and $C$ are defined in Section \ref{sec:framework}). We have \betaegin{align*} {\cal H}eck{S}(f)=C(f)= & 3+(4-3)\frac{2}{3}=\frac{11}{3} \\ {\cal H}eck{S}(g)=C(g)= & 0+(11-0)\frac{1}{3}=\frac{11}{3} \varepsilonnd{align*} and therefore $f\sigmaim_{{\cal H}eck{S}}g$ and $f\sigmaim_C g$. Moreover since $h$ is comonotonic with $f$ and $g$, by comonotonic additivity $f+h\sigmaim_C g+h$ (one can actually verify that $C(f+h)=C(f)+C(h)=\frac{7}{3}=C(g)+C(h)=C(g+h)$). However we can notice that the act $f+h$ looks much ``smoother'' than $g+h$ and moreover $f+h\geq0$ since gains balance losses. This intuition is captured by the preference relation induced by the \v Sipo\v s integral as \betaegin{align*} {\cal H}eck{S}(f+h)= & 0+(3-0)\frac{2}{3}+(4-3)\frac{1}{3}=\frac{7}{3} \\ {\cal H}eck{S}(g+h)= & C(g)- C(-h)= \frac{11}{3}- (0+(1-0)1+(3-1)\frac{2}{3})=\frac{4}{3} \varepsilonnd{align*} and therefore $f+h\sigmaucc_{{\cal H}eck{S}} g+h$. \varepsilonnd{example} Example \ref{ex:gain-loss-hedge} shows that gain-loss hedging is an interesting behavioral feature of CPT and of \v Sipo\v s integrals. Adding positive and negative acts with supports that are not disjoint, can provide an hedge even when the acts involved are comonotonic. This happens because gains compensate losses. In the next section we provide a new behavioral foundation of CPT taking this observation as a starting point. Example \ref{ex:gain-loss-hedge} shows that preferences represented by \v Sipo\v s integrals are rich enough to entail gain-loss hedging behaviors. It is therefore interesting to mathematically characterize Sipo\v s integrals. Theorem \ref{th:Sipos} shows that a symmetric condition pins down a CPT functional as a \v Sipo\v s integral. \betaegin{theorem}\lambdaabel{th:Sipos} A CPT functional is a \v Sipo\v s integral if and only if $CPT(-f)=-CPT(f)$ for all $f\imathn \muathcal{F}$. \varepsilonnd{theorem} Theorem \ref{th:Sipos} says that CPT reduces to a \v Sipo\v s integral if and only if the condition $CPT(-f)=-CPT(f)$ for all $f\imathn \muathcal{F}$ is satisfied. This is an interesting result as such condition is a strong one. As an example, if $C$ is a Choquet functional then $C(-f)=-C(f)$ for all $f\imathn \muathcal{F}$ if and only if the capacity $v$ equals its conjugate $\hat{v}$, and therefore it is additive on events $\{A,A^c\}$. 
\sigmaubsection{A behavioral characterization of CPT}\lambdaabel{sec:CPT_behave} In this section we provide a preference axiomatization of CPT. We recall that a preference relation $\sigmauccsim$ over $\muathcal{F}$ is a complete and transitive binary relation with non-empty strict part. The first axiom is a continuity axiom. \muedskip \nuoindent \tauextsc{A.1 Continuity.} The sets $\{\alphalpha\imathn\muathbb{R}| \alphalpha1_S\sigmauccsim f\}$ and $\{\alphalpha\imathn\muathbb{R}| f\sigmauccsim \alphalpha1_S\}$ are closed for all $f\imathn \muathcal{F}$. \muedskip Note that the axiom requires only to compare acts with constants. This dispenses us to formulate topological assumptions on the set of acts $\muathcal{F}$. The second axiom is a monotonicity property. \muedskip \nuoindent \tauextsc{A.2 Monotonicity.} Let $f,g\imathn \muathcal{F}$ be such that $f \geq g$. Then $f\sigmauccsim g$. \muedskip Consider now the well known comonotonic independence axiom (Chateauneuf {\bf I}te{Chato94}, Schmeidler {\bf I}te{Schmeidler89}). It says that if two acts $f$ and $g$ are indifferent to each other, then adding a comonotonic act $h$ to both of them does not change the DM's preferences. The idea behind this condition is that adding comonotonic acts does not provide any possible hedge against uncertainty. \muedskip \nuoindent \tauextsc{A.C Comonotonic Independence.} Let $f,g,h\imathn \muathcal{F}$ such that $h$ is comonotonic with $f$ and with $g$. Then $f\sigmaim g$ implies $f+h\sigmaim g+h$ \muedskip Preferences satisfying A.1, A.2 and A.C are represented by a Choquet integral. We present this result in the next proposition \betaegin{theorem}{\sigmac (Chateauneuf {\bf I}te{Chato94}, Schmeidler {\bf I}te{Schmeidler89})} Let $\sigmauccsim$ be a preference relation over $\muathcal{F}$. Then the following are equivalent. \betaegin{itemize} \imathtem[(i)] $\sigmauccsim$ satisfies A.1, A.2 and A.C. \imathtem[(ii)] There exists a (unique) capacity $v$ such that $\sigmauccsim$ is represented by a Choquet functional. \varepsilonnd{itemize} \varepsilonnd{theorem} However, as Example \ref{ex:gain-loss-hedge} shows, Comonotonic Independence may be too strong as it doesn't take into account (gain-loss) hedging possibilities that arise adding positive and negative acts with non-disjoint supports. The following two axioms, axiom A.3 and A.4, are both implied by Comonotonic Independence. They are at the heart of our behavioral characterization. They generalizes A.C in two directions. First, axiom A.3 allows for different attitudes towards uncertainty in the domain of gain and in the domain of losses. Second, axiom A.4 takes into account possible gain-loss hedging opportunities that arises in situations like the one of Example \ref{ex:gain-loss-hedge}. \muedskip \nuoindent \tauextsc{A.3 Comonotonic Independence for Gain and Losses.} Let $f,g,h\imathn \muathcal{F}^{+(-)}$ be such that $h$ is comonotonic with $f$ and $g$. Then $f\sigmaim g$ implies $f+h\sigmaim g+h$. 
\muedskip \nuoindent \tauextsc{A.4 $\lambdaambda$-Disjoint Independence.} There exists $\lambdaambda>0$ such that for all $f\imathn \muathcal{F}^{+}$ and $g\imathn \muathcal{F}^{-}$ such that $supp(f){\cal A}p supp(g)=\varepsilonmptyset$ and such that $f\sigmaim \alphalpha1_S$ and $g\sigmaim \betaeta 1_S$ \betaegin{enumerate} \imathtem if $\alphalpha+\lambdaambda \betaeta\geq 0$ then $f+g\sigmaim (\alphalpha+\lambdaambda \betaeta)1_S$; \imathtem if $\alphalpha+\lambdaambda \betaeta<0$ then $f+g\sigmaim \lambdaeft(\frac{\alphalpha+ \lambdaambda\betaeta }{\lambdaambda} {\rm ri\hspace{.5pt}}ght) 1_S$. \varepsilonnd{enumerate} Axiom A.4 represents the main behavioral novelty. To better understand it, note that it is implied by the following (stronger) axiom. \muedskip \nuoindent \tauextsc{A.4$^*$ Disjoint Independence.} For all $f\imathn \muathcal{F}^{+}$ and $g\imathn \muathcal{F}^{-}$ such that $supp(f){\cal A}p supp(g)=\varepsilonmptyset$ and such that $f\sigmaim \alphalpha1_S$ and $g\sigmaim \betaeta 1_S$, one has $f+g\sigmaim (\alphalpha+\betaeta)1_S$. \muedskip It is easy to see that A.4$^*$ follows from A.4 imposing $\lambdaambda=1$. A.4$^*$ requires that the act $f+g$ is evaluated as the sum of its constant equivalent. In the general case, we can have $\lambdaambda\nueq 1$ and in this case A.4 says that the constant equivalent of $f+g$ depends on the sign of $\alphalpha+\lambdaambda \betaeta$. The interpretation for the case of loss-aversion, $\lambdaambda>1$, is the following. The DM outweighs losses by a factor of $\lambdaambda$ and considers $\lambdaambda\betaeta$ instead of $\betaeta$ \tauextit{tout-court}. If $\alphalpha+\lambdaambda \betaeta>0$ then the DM feels ``overall in the domain of gains'' and the certainty equivalent of $f+g$ is positive and such that $\alphalpha>0$ is balanced by $\lambdaambda \betaeta<\betaeta<0$, i.e. the certainty equivalent $\betaeta$ of losses is outweighed by a factor of $\lambdaambda$. If $\alphalpha+\lambdaambda \betaeta<0$ then the DM feels ``overall in the domain of losses'' and in this case the certainty equivalent of $f+g$ is negative and equal to $\betaeta<0$ plus $\frac{\alphalpha}{\lambdaambda}>0$, i.e. the certainty equivalent $\alphalpha$ of the positive part decreased by a factor of $\lambdaambda$ (since $\frac{\alphalpha}{\lambdaambda}<\alphalpha$). Importantly, $\lambdaambda$ can be determined in the lab: take $f\imathn\muathcal{F}^+$ and $g\imathn\muathcal{F}^-$ such that $supp(f){\cal A}p supp(g)=\varepsilonmptyset$, ask the certainty equivalents $\alphalpha$, $\betaeta$ and $\gamma$ of $f$, $g$ and $f+g$ respectively. If $\gamma=\alphalpha+\betaeta$, there is no loss-aversion or seeking. If $\gamma\nueq\alphalpha+\betaeta$, then if $\gamma>0$ we have $\lambdaambda=\frac{\gamma-\alphalpha}{\betaeta}$ and if $\gamma<0$ we have $\lambdaambda=\frac{\alphalpha}{\gamma-\betaeta}$. There is lively debate on whether loss-aversion is a real phenomena or not, with results on both sides. See for instance Gal and Rucker {\bf I}te{Gal} and Gächter, Johnson and Herrmann {\bf I}te{Gat}. We hope therefore that A.4 could be helpful to elicit loss-aversion in a setting in which individuals' preferences are represented by the CPT functional with piece-wise constant marginal utility. When A.C is replaced by A.3 and A.4, we obtain a characterization of the CPT functional. The following is our second main result. \betaegin{theorem}\lambdaabel{th:axiom_CPT} Let $\sigmauccsim$ be a preference relation over $\muathcal{F}$. 
Then the following are equivalent. \betaegin{itemize} \imathtem[(i)] $\sigmauccsim$ satisfies A.1, A.2, A.3 and A.4. \imathtem[(ii)] There exist two (unique) capacities $v^+$, $v^-$ and a real number $\lambdaambda>0$ such that $\sigmauccsim$ is represented by a CPT functional. \varepsilonnd{itemize} \varepsilonnd{theorem} Note that if we replace A.4 with A.4$^*$ in Theorem \ref{th:axiom_CPT}, we obtain a CPT functional with $\lambdaambda=1$, i.e. loss-neutrality. \sigmaubsection{Attitude towards uncertainty}\lambdaabel{sec:CPT_ambig_att} As we already said above, a remarkable property of CPT is that (unlike the Choquet functional) it allows to disentangle DMs' attitude towards uncertainty in the domain of gains from the one in the domain of losses. This is made possible since an act is evaluated through the sum of two Choquet integrals with respect to a capacity $v^+$ for gains, and a different one $v^-$ for losses. Experimental evidence shows that DMs are uncertainty averse for gains and uncertainty seeking for losses. Loosely speaking, uncertainty aversion (seeking) means that agents prefer situations in which objective probabilities of events are (not) available. In our framework, objective probabilities are not there at all. Therefore an act is not uncertain only if it is a constant act. Intuitively, in our purely subjective setting, an uncertainty averse (seeking) DM would prefer acts that are ``as close (far) as possible'' to constant acts. We capture this idea with the two following axioms. \muedskip \nuoindent A.3' Let $f,g,h\imathn \muathcal{F}^+$ such that $h$ is comonotonic with $g$. Then $f\sigmaim g \muathbb{R}ightarrow f+h\sigmauccsim g+h$. \muedskip \muedskip \nuoindent A.3'' Let $f,g,h\imathn \muathcal{F}^-$ such that $h$ is comonotonic with $f$. Then $f\sigmaim g \muathbb{R}ightarrow f+h\sigmauccsim g+h$. \muedskip Axiom A.3' captures the intuition that DMs are uncertainty averse in the domain of gains $\muathcal{F}^+$. Consider three acts $f,g,h\imathn \muathcal{F}^+$ such that $f\sigmaim g$ and $h$ is comonotonic with $f$ and $g$. Then adding (the potentially non-comonotone act) $h$ to $f$ increases the appreciation of $f$ since $h$ may by an hedge against $f$, while at the same time it decreases the appreciation of $g$ since uncertainty may be higher. To exemplify, let $A\imathn {{\cal A}l A}$ and consider $f=10 {\cal D}ot 1_A+5{\cal D}ot 1_{A^c}$, $g=5 {\cal D}ot 1_A+10 {\cal D}ot 1_{A^c}$ and $h=0 {\cal D}ot 1_A+5{\cal D}ot 1_{A^c}$. Then $f+h=10 {\cal D}ot 1_S$ is a constant act while $g+h=5 {\cal D}ot 1_A+15 {\cal D}ot 1_{A^c}$ is even more uncertain than $g$. A DM who dislikes uncertainty would clearly prefer $f+h$ to $g+h$. Axiom A.3'' can be interpreted similarly, but in this case the DM is willing to increase the perceived uncertainty. Notice that similar conditions were proposed by Chateuneuf {\bf I}te{Chato94}, see also Wakker {\bf I}te{Wakker90}. The following theorem shows that if a DM is uncertainty averse for gains and uncertainty seeking for losses then the capacities appearing in the CPT functional are both convex. \betaegin{theorem}\lambdaabel{th:axiom_CPT_convex} Let $\sigmauccsim$ be a preference relation over $\muathcal{F}$. Then the following are equivalent. \betaegin{itemize} \imathtem[(i)] $\sigmauccsim$ satisfies A.1, A.2, A.3', A.3'', and A.4. \imathtem[(ii)] There exist two (unique) convex capacities $v^+$, $v^-$ and $\lambdaambda>0$, such that $\sigmauccsim$ is represented by a CPT functional. 
\varepsilonnd{itemize} \varepsilonnd{theorem} Note that the CPT functional can be rewritten (using Lemma \ref{lemma:basic} in the Appendix) as \betaegin{equation}\lambdaabel{eq:CPT_formula2} CPT(f)=\imathnt f^+ dv^++\imathnt -\lambdaambda f^- d\hat{v}^-. \varepsilonnd{equation} If one is using this formulation then Theorem \ref{th:axiom_CPT_convex} implies that the conjugate capacity $\hat{v}^-$ is concave. We conclude this section providing a testable axiom that characterizes symmetric attitudes around the reference point with respect to uncertainty. An interesting question is in fact to understand when one has $v^-=v^+$ in the CPT functional.\footnote{Or, equivalently if one is using formulation (\ref{eq:CPT_formula2}), when one gets $v^-=\hat{v}^+$.} Note that if $\lambdaambda=1$ Theorem \ref{th:Sipos} applies and one gets a \v Sipo\v s integral. Consider the following axiom. \muedskip \nuoindent \tauextsc{A.5 Gain-Loss Symmetry.} Let $f\imathn \muathcal{F}$ and $\alphalpha\imathn \muathbb{R}$. Then $f\sigmaim \alphalpha1_S$ if and only if $-f\sigmaim -\alphalpha1_S$. \muedskip Axiom A.5 says that if a DM is indifferent between an (uncertain) act $f$ and a sure amount $\alphalpha$, then she should stay indifferent between $-f$ and $-\alphalpha$. The intuition is that the DM sees $f$ and $-f$ as symmetric with respect to the reference point 0, and therefore evaluates them through the symmetric sure amounts $\alphalpha$ and $-\alphalpha$. The following theorem offers a behavioral characterization of the \v Sipo\v s integral. \betaegin{theorem}\lambdaabel{th:axiom_Sipos} Let $\sigmauccsim$ be a preference relation over $\muathcal{F}$. Then the following are equivalent. \betaegin{itemize} \imathtem[(i)] $\sigmauccsim$ satisfies A.1, A.2, A.3, A.4 and A.5. \imathtem[(ii)] There exists a (unique) capacity $v$ such that $\sigmauccsim$ is represented by the \v Sipo\v s integral. \varepsilonnd{itemize} \varepsilonnd{theorem} \sigmaection{Conclusion}\lambdaabel{sec:conclusion} We provided an axiomatic analysis of CPT with piece-wise linear utility. This allowed us to focus on (sign-dependent) attitudes towards uncertainty. First, we mathematically characterized the CPT functional by weakening the comonotonic additivity property of the Choquet integral. We also gave conditions to reduce CPT to a \v Sipos integral. Then we gave an axiomatic characterization of CPT. The main novelty is given by a gain-loss hedging property: gains and losses balance each other out and provide an hedge against uncertainty. Moreover, we introduced an axiom that offers a way to easily elicit the coefficient of loss-aversion, in case of piece-wise linear utility. Finally, we characterized uncertainty aversion for losses and uncertainty loving for gains. Moreover we showed that these attitudes are symmetric with respect to the reference point if and only if CPT is a \v Sipos integral. \nuewpage \alphappendix \sigmaection{Appendix} \sigmamall We begin with an elementary Lemma. The proof if given for sake of completeness. \betaegin{lemma}\lambdaabel{lemma:basic} Let $\hat{v}(A)=1-v(A^c)$ and $f\imathn \muathcal{F}^+$ or $f\imathn \muathcal{F}-$. Then $-\imathnt f dv = \imathnt - fd \hat{v}$. 
\varepsilonnd{lemma} \betaegin{proof} Let $f\imathn \muathcal{F}^+$, then one has \betaegin{align*} -\imathnt f dv &= -\imathnt_{0}^{\imathnfty}v(s\imathn S|f(s)\geq t) dt \\ &= - \imathnt_{0}^{\imathnfty}[1-\hat{v}(s\imathn S|f(s)> t) ]dt \\ &= - \imathnt_{0}^{-\imathnfty}[1-\hat{v}(s\imathn S|f(s)> u)](-du) \\ &= \imathnt^{0}_{-\imathnfty}[\hat{v}(s\imathn S|f(s)> u)-1]du \\ &= \imathnt - fd \hat{v} \varepsilonnd{align*} The case $f\imathn \muathcal{F}^-$ can be treated similarly. \varepsilonnd{proof} \muedskip \muedskip \betaegin{proof}[\tauextbf{Proof of Lemma \ref{lemma:sipos_ceu}}] To prove the first point we just need to apply Lemma \ref{lemma:basic}. In fact noticing that $f^-\imathn \muathcal{F}^+$ we have ${\cal H}eck{S}(f)=\imathnt f^+ dv-\imathnt f^- dv=\imathnt f^+ dv+\imathnt - f^-d \hat{v}.$\\ The second point, note that $f=f^++(-f^-)$ and that $f^+$ and $-f^-$ are comonotonic. Then by the comonotonic additivity of the Choquet integral proved in Theorem \ref{th:Schmeidelr86}, we have $$ \imathnt f dv=\imathnt f^++(-f^-) dv=\imathnt f^+dv +\imathnt -f^- dv. $$ Note that by Lemma \ref{lemma:basic} one can also write $\imathnt f dv=\imathnt f^+dv -\imathnt f^- d\hat{v}$. \varepsilonnd{proof} \muedskip \muedskip \betaegin{proof}[\tauextbf{Proof of Theorem \ref{th:CPT}}] $(i) \muathbb{R}ightarrow (ii)$. We start with an auxiliary Lemma. \betaegin{lemma}\lambdaabel{lem:useful} For all $\alphalpha>0$, for all $f\imathn \muathcal{F}^+{\cal U}p \muathcal{F}^-$, $I(\alphalpha f)=\alphalpha I(f)$. Moreover for $\alphalpha>0$ and $f\imathn \muathcal{F}^+$, $I(f+\alphalpha1_S)=I(f)+\alphalpha$. \varepsilonnd{lemma} \betaegin{proof}[Proof of Lemma \ref{lem:useful}] The proof is standard. \varepsilonnd{proof} Let $v^+(A)=I(1_A)$, then doing the same proof as Schmeidler {\bf I}te{Schmeidler86} one can show that for all $f\imathn \muathcal{F}^+$, $I(f)=\imathnt f dv^+$. Now let $\lambdaambda:=-I(-1_S)$. By comonotonic additivity of $I$, $I(0)=0$. By monotonicity of $I$, $I(-1_S)\lambdaeq I(0)=0$. Then $\lambdaambda\geq 0$. Define for all $A\imathn {{\cal A}l A}$, $$ v(A)=-\frac{I(-1_A)}{\lambdaambda}. $$ We have $v(\varepsilonmptyset)=0$ and $v(S)=1$. Take $A\sigmaubseteq B$ so that $-1_A\geq-1_B$. Since $I$ is monotonic, $I(-1_A)\geq I(-1_B)$ and therefore $v(A)\lambdaeq v(B)$. This show that $v$ is a capacity. Define $v^-$ as the conjugate capacity of $v$, meaning that for all $A\imathn {{\cal A}l A}$, $$ v^-(A)=1-v(A^c). $$ We will show that for all $f\imathn \muathcal{F}^-$, $f$ simple, $I(f)=\imathnt \lambdaambda f dv^-$. Let $f\imathn \muathcal{F}^-$ be defined as $$ f=x_11_{A_1}+\dots +x_n1_{A_n} $$ where $\{A_1,\dots,A_n\}$ is a partition of $S$ and $x_1\lambdaeq \dots \lambdaeq x_n \lambdaeq 0$. Note that we can rewrite $f$ as $$ f=(0-x_n)(-1_S)+(x_n-x_{n-1})(-1_{A_{n-1}{\cal U}p\dots{\cal U}p A_1})+\dots+(x_3-x_2)(-1_{A_2{\cal U}p A_1})+(x_2-x_1)(-1_{A_1}). $$ Define $$ h_i=(x_{i+1}-x_{i})(-1_{A_{i}{\cal U}p\dots{\cal U}p A_1}) $$ with the convention that $x_{n+1}=0$. We have that $$ f=\sigmaum_{i=0}^n h_i $$ We show now that $h_i$ is comonotone with $\sigmaum_{k=i+1}^n h_k$. Consider $s,t\imathn S$ be such that $s\imathn A_{i}{\cal U}p\dots{\cal U}p A_1$ and $t\imathn (A_{i}{\cal U}p\dots{\cal U}p A_1)^c$, suppose $t\imathn A_l$ for $l>i$. Then $h_i(s)-h_i(t)=x_i-x_{i+1}\lambdaeq 0$ and $\sigmaum_{k=i+1}^n h_k(s)-\sigmaum_{k=i+1}^n h_k(t)=x_i-x_l\lambdaeq 0$, hence $(h_i(s)-h_i(t))\lambdaeft(\sigmaum_{k=i+1}^n h_k(s)-\sigmaum_{k=i+1}^n h_k(t){\rm ri\hspace{.5pt}}ght)\geq 0$. 
If $s,t\imathn A_{i}{\cal U}p\dots{\cal U}p A_1$ or $s,t\imathn (A_{i}{\cal U}p\dots{\cal U}p A_1)^c$ the previous product is 0. This shows that the functions $h_i$ and $\sigmaum_{k=i+1}^n h_k$ are conomotone. Since $h_i$ and $\sigmaum_{k=i+1}^n h_k$ are negative, by comonotonic additivity on $\muathcal{F}^-$ we have $$ I(f)=I(h_1+\sigmaum_{i=2}^n h_i)=I(h_1)+I(h_2+\sigmaum_{i=3}^n h_i)=\dots= \sigmaum_{i=1}^n I(h_i). $$ Note that by Lemma \ref{lem:useful} and by definition of $v^-$ we have $$ I(h_i)=(x_{i+1}-x_{i})\lambdaambda(v^-(A_{i+1}{\cal U}p\dots{\cal U}p A_n)-1). $$ Therefore \betaegin{align*} I(f)&= \sigmaum_{i=1}^{n-1} I(h_i) +I(h_n) \\ &= \lambdaambda \lambdaeft[\sigmaum_{i=1}^{n-1}(x_{i+1}-x_{i})v^-(A_{i+1}{\cal U}p\dots{\cal U}p A_n)+\sigmaum_{i=1}^{n-1}(x_{i+1}-x_{i}){\rm ri\hspace{.5pt}}ght]+(x_{n+1}-x_n)(-\lambdaambda)\\ &= \lambdaambda \lambdaeft[\sigmaum_{i=1}^{n-1}(x_{i+1}-x_{i})v^-(A_{i+1}{\cal U}p\dots{\cal U}p A_n)-x_n+x_1{\rm ri\hspace{.5pt}}ght]+\lambdaambda x_n \\ &=\lambdaambda \lambdaeft[x_1+\sigmaum_{i=1}^{n-1}(x_{i+1}-x_{i})v^-(A_{i+1}{\cal U}p\dots{\cal U}p A_n){\rm ri\hspace{.5pt}}ght]\\ &=\lambdaambda\imathnt f dv^- \\ &=\imathnt \lambdaambda f dv^- \varepsilonnd{align*} Notice that every bounded function can be approximated by a sequence of step functions as in Schmeidler {\bf I}te{Schmeidler86}. This shows that for all $f\imathn \muathcal{F}^-$, $I(f)=\imathnt \lambdaambda f dv^+$. Let now $f\imathn \muathcal{F}$ and notice that $f=f^+ +(-f^-)$ and moreover $supp(f^+){\cal A}p supp(f^-)=\varepsilonmptyset$. Hence $$ I(f)=I(f^++(-f^-))=I(f^+)+I(-f^-)=\imathnt f^+dv^++\imathnt \lambdaambda(-f^-)d\mubox{\betaoldmath$A$}r{v}^- $$ Let $\hat{v}^-$ be the conjugate capacity of $\mubox{\betaoldmath$A$}r{v}^-$, i.e. $\hat{v}^-(A)=1-\mubox{\betaoldmath$A$}r{v}^-(A^c)$ for all $A\imathn {{\cal A}l A}$. Then by Lemma \ref{lemma:basic} one has $$ \imathnt -\lambdaambda f^-d\mubox{\betaoldmath$A$}r{v}^-=-\imathnt \lambdaambda f^-d\hat{v}^- $$ Defining $v^-=\hat{v}^-$ concludes the ``$\muathbb{R}ightarrow$'' part of the proof. $(ii)\muathbb{R}ightarrow (i)$. We prove (a). Suppose $f\geq g$. Then $f^+\geq g^+$ and $g^-\geq f^-$. It is well known that the Choquet integral is monotonic. Hence $I(f)=\imathnt f^+dv^+ - \imathnt\lambdaambda f^-dv^-\geq g^+dv^+ - \imathnt\lambdaambda g^-dv^-=I(g)$ . \\ We prove (b). Let $f,g$ comonotonic and such that $f,g\geq 0$ (the case $f,g\lambdaeq 0$ is similar). Then $(f+g)^+=f+g=f^++g^+$ and $(f+g)^-=0=f^-=g^-$. Therefore \betaegin{multline*} I(f+g)=\imathnt(f+g)^+dv^+=\imathnt f^+dv^+ + \imathnt g^+dv^+= \\ \imathnt f^+dv^+ - \imathnt\lambdaambda f^-dv^- + \imathnt g^+dv^+ - \imathnt\lambdaambda g^-dv^-=I(f)+I(g). \varepsilonnd{multline*} We prove part (b) of $(ii)$. Let $f$, $g$ be of opposite sign (for instance $f\geq0$ and $g\lambdaeq 0$) and such that $supp(f){\cal A}p supp(g)=\varepsilonmptyset$. Notice that $(f+g)^+=f=f^+$, $(f+g)^-=-g=g^-$, and $f^-=0$, $g^+=0$. Therefore \betaegin{multline*} I(f+g)=\imathnt(f+g)^+dv^+-\imathnt\lambdaambda(f+g)^-dv^-=\imathnt f^+dv^+ - \imathnt\lambdaambda g^-dv^-= \\ \imathnt f^+dv^+ - \imathnt\lambdaambda f^-dv^- + \imathnt g^+dv^+ - \imathnt \lambdaambda g^-dv^-=I(f)+I(g) \varepsilonnd{multline*} which complete the proof of part (b). \varepsilonnd{proof} \muedskip \muedskip \betaegin{proof}[\tauextbf{Proof of Theorem \ref{th:Sipos}}] $(i) \muathbb{R}ightarrow (ii)$ Let $CPT$ be a \v Sipo\v s integral. 
Then $\lambdaambda=1$ and $v^+=v^-$ and hence $$ CPT(-f)=\imathnt(-f)^+ dv-\imathnt (-f)^- dv=\imathnt f^- dv-\imathnt f^+ dv=-CPT(f) $$ $(ii) \muathbb{R}ightarrow (i)$ Note that $\lambdaambda=1$ since $-\lambdaambda=CPT(-1_S)=-CPT(1_S)=-1$. Let $A\imathn {{\cal A}l A}$ and consider $f=1_A$. Then $$ CPT(-f)=0-\imathnt 1_A dv^-=-v^-(A) \tauext{ and } -CPT(f)=-\imathnt 1_A dv^+=-v^+(A) $$ Therefore $$ CPT(-f)=-CPT(f)\Lambdaeftrightarrow v^-(A)=v^+(A). $$ Since this must be true for all $A\imathn {{\cal A}l A}$, $v^-=v^+$ and the CPT functional is a \v Sipo\v s integral. \varepsilonnd{proof} \muedskip \muedskip \betaegin{proof}[\tauextbf{Proof of Theorem \ref{th:axiom_CPT}}] $(ii)\Lambdaeftarrow (i)$ We only prove A.4. Take $\lambdaambda>0$ of the CPT functional. Fix $f$ and $g$ s.t. $f\imathn \muathcal{F}^{+}$ and $g\imathn \muathcal{F}^{-}$ and s.t. $supp(f){\cal A}p supp(g)=\varepsilonmptyset$. Suppose $f\sigmaim \alphalpha1_S$ and $g\sigmaim \betaeta 1_S$. Note that $\alphalpha\geq 0$ and $\betaeta\lambdaeq 0$. Therefore \betaegin{align*} CPT(f)=CPT(\alphalpha1_S) &\Lambdaeftrightarrow CPT(f)=\alphalpha \\ CPT(g)=CPT(\betaeta1_S) &\Lambdaeftrightarrow CPT(g)=-\lambdaambda\imathnt \betaeta^-dv^-=-\lambdaambda(-\betaeta)=\lambdaambda\betaeta. \varepsilonnd{align*} Moreover since $f$ and $g$ have opposite signs and have disjoints supports we have $$ CPT(f+g)=CPT(f)+CPT(g)=\alphalpha+\lambdaambda\betaeta. $$ Now, if $\alphalpha+\lambdaambda\betaeta>0$, $CPT((\alphalpha+\lambdaambda\betaeta)1_S)=\alphalpha+\lambdaambda\betaeta$, and since CPT represents $\sigmauccsim$, $f+g\sigmaim (\alphalpha+\lambdaambda \betaeta)1_S$. If $\alphalpha+\lambdaambda\betaeta<0$, $CPT\lambdaeft(\frac{\alphalpha+\lambdaambda \betaeta}{\lambdaambda}1_S{\rm ri\hspace{.5pt}}ght)=-\lambdaambda\imathnt \lambdaeft(\frac{\alphalpha+\lambdaambda \betaeta}{\lambdaambda}{\rm ri\hspace{.5pt}}ght)^-dv^-=-\lambdaambda\frac{-\alphalpha-\lambdaambda \betaeta}{\lambdaambda}=\alphalpha+\lambdaambda\betaeta$. Therefore $f+g\sigmaim \frac{\alphalpha+\lambdaambda \betaeta}{\lambdaambda}1_S$. $(i)\muathbb{R}ightarrow(ii)$ First, note that for all $f\imathn \muathcal{F}^+$, $f=f^+$ and for all $f\imathn \muathcal{F}^-$, $f=-f^-$. (One can prove that) By A.1 and A.2 for all $f\imathn \muathcal{F}^+$ there exists a unique $\alphalpha_{f^+}\geq 0$ s.t. $$ f^+\sigmaim \alphalpha_{f^+}1_S. $$ Let $\lambdaambda>0$ be the one of Axiom A.4. Then again by A.1 and A.2 for all $f\imathn \muathcal{F}^-$ there exists a unique $\alphalpha_{-f^-}\lambdaeq 0$ s.t. $$ -f^-\sigmaim \lambdaeft(\frac{\alphalpha_{-f^-}}{\lambdaambda}{\rm ri\hspace{.5pt}}ght)1_S. $$ Define $I:\muathcal{F}{\rm ri\hspace{.5pt}}ghtarrow\muathbb{R}$ as $$ I(f)=I(f^+)+I(-f^-) $$ where $I(f^+)=\alphalpha_{f^+}$ and $I(-f^-)=\alphalpha_{-f^-}$. Note that $f^+\sigmaim I(f^+)1_S$ and $-f^-\sigmaim \lambdaeft(\frac{I(-f^-)}{\lambdaambda}{\rm ri\hspace{.5pt}}ght)1_S$. Moreover $I(1_S)=1$ by Monotonicity. We will prove that $I$ satisfies the conditions of Theorem \ref{th:CPT} and it is therefore a CPT functional. \betaegin{step}\lambdaabel{step:CE} Fix $f\imathn\muathcal{F}$, then $I(f)\geq0$ implies $f\sigmaim I(f)1_S$ and $I(f)<0$ implies $f\sigmaim \frac{I(f)}{\lambdaambda} 1_S$. \varepsilonnd{step} \betaegin{proof} Let $f\imathn\muathcal{F}$. \betaegin{itemize} \imathtem Case 1: $I(f)\geq0$. Note that $f=f^++(-f^-)$ and by definition $f^+\sigmaim I(f^+)1_S$ and $-f^-\sigmaim \frac{I(-f^-)}{\lambdaambda}1_S$. 
Moreover $I(f^+)+\lambdaambda\frac{I(-f^-)}{\lambdaambda}=I(f)\geq0$, hence by A.4 and by the definition of $I(f)$ $$ f=f^++(-f^-)\sigmaim \lambdaeft(I(f^+)+\lambdaambda\frac{I(-f^-)}{\lambdaambda}{\rm ri\hspace{.5pt}}ght)1_S=I(f)1_S. $$ \imathtem Case 2: $I(f)<0$. Then reasoning as before and applying A.4 we get $$ f=f^++(-f^-)\sigmaim \lambdaeft(\frac{I(f^+)+\lambdaambda\frac{I(-f^-)}{\lambdaambda}}{\lambdaambda} {\rm ri\hspace{.5pt}}ght)1_S=\lambdaeft(\frac{I(f^+)+I(-f^-)}{\lambdaambda}{\rm ri\hspace{.5pt}}ght)1_S=\frac{I(f)}{\lambdaambda} 1_S. $$ \varepsilonnd{itemize} \varepsilonnd{proof} \betaegin{step}\lambdaabel{step:mon} $I$ is monotone. \varepsilonnd{step} \betaegin{proof} Let $f,g\imathn\muathcal{F}$ be such that $f\geq g$. Then $f^+\geq g^+$ and $-f^-\geq -g^-$. Then by Monotonicity $f^+\sigmauccsim g^+$ and $-f^-\sigmauccsim -g^-$. Then by Step \ref{step:CE} $I(f^+)1_S\sigmaim f^+\sigmauccsim g^+\sigmaim I(g^+)1_S$ and $\frac{I(-f^-)}{\lambdaambda}1_S\sigmaim -f^-\sigmauccsim -g^-\sigmaim \frac{I(-g^-)}{\lambdaambda}1_S$. Monotonicity implies $I(f^+)\geq I(g^+)$ and $I(-f^-)\geq I(-g^-)$. Summing up we obtain $I(f)\geq I(g)$. \varepsilonnd{proof} \betaegin{step}\lambdaabel{step:como_add} $I$ satisfies comonotonic additivity over $\muathcal{F}^+$ and $\muathcal{F}^-$. \varepsilonnd{step} \betaegin{proof} We prove comonotonic additivity over $\muathcal{F}^-$, the proof for $\muathcal{F}^+$ can be done in a similar way. \\ Take $f,g\imathn\muathcal{F}^-$ s.t. $f$ and $g$ are comonotone. By Step $\ref{step:CE}$, $f\sigmaim \frac{I(f)}{\lambdaambda}1_S$ and $g\sigmaim \frac{I(g)}{\lambdaambda}1_S$. Since constant acts are comonotone with all other acts and $\frac{I(f)}{\lambdaambda},\frac{I(g)}{\lambdaambda}\lambdaeq 0$, by A.3 one gets $f+g\sigmaim \frac{I(f)}{\lambdaambda}1_S+g$ and $g+\frac{I(f)}{\lambdaambda}1_S\sigmaim \frac{I(g)}{\lambdaambda}1_S+\frac{I(f)}{\lambdaambda}1_S$. Since $f+g\imathn\muathcal{F}^-$, by Step \ref{step:CE} $f+g\sigmaim \frac{I(f+g)}{\lambdaambda}1_S$. Therefore $\frac{I(f+g)}{\lambdaambda}1_S\sigmaim \lambdaeft(\frac{I(f)}{\lambdaambda}+ \frac{I(g)}{\lambdaambda}{\rm ri\hspace{.5pt}}ght)1_S$, and Monotonicity implies $I(f+g)=I(f)+I(g)$. \varepsilonnd{proof} \betaegin{step}\lambdaabel{step:disj_add} For all $f\imathn \muathcal{F}^{+(-)}$ and $g\imathn \muathcal{F}^{-(+)}$ s.t. $supp(f){\cal A}p supp(g)=\varepsilonmptyset$, $I(f+g)=I(f)+I(g)$. \varepsilonnd{step} \betaegin{proof} Fix $f\imathn \muathcal{F}^{+}$ and $g\imathn \muathcal{F}^{-}$ s.t. $supp(f){\cal A}p supp(g)=\varepsilonmptyset$. Define $h=f+g$ and note that $h^+=f$ and $-h^-=g$. Therefore by definition of $I$, $I(f+g)=I(h)=I(h^+)+I(-h^-)=I(f)+I(g)$. \varepsilonnd{proof} \betaegin{step}\lambdaabel{step:represent} $I$ represents $\sigmauccsim$ over $\muathcal{F}$ (i.e. $f\sigmauccsim g \Lambdaeftrightarrow I(f)\geq I(g)$). \varepsilonnd{step} \betaegin{proof} Fix $f,g\imathn\muathcal{F}$. Then we have to consider 4 cases. \betaegin{itemize} \imathtem Case 1: $I(f),I(g)\geq0$. Using Step \ref{step:CE} and Monotonicity $I(f)1_S\sigmaim f\sigmauccsim g\sigmaim I(g)1_S \Lambdaeftrightarrow I(f)\geq I(g)$. \imathtem Case 2: $I(f),I(g)\lambdaeq0$. Using Step \ref{step:CE} and Monotonicity $\frac{ I(f)}{\lambdaambda}1_S\sigmaim f\sigmauccsim g\sigmaim \frac{ I(g)}{\lambdaambda}1_S\Lambdaeftrightarrow I(f)\geq I(g)$, since $\lambdaambda>0$. \imathtem Case 3: $I(f)\geq 0>I(g)$. 
Using Step \ref{step:CE} and Monotonicity $I(f)1_S\sigmaim f\sigmauccsim g\sigmaim \frac{ I(g)}{\lambdaambda}1_S\Lambdaeftrightarrow I(f)\geq I(g)$. Note that in this case we cannot have $g\sigmauccsim f$. \imathtem Case 4: $I(g)\geq 0>I(f)$. This is the same as Case 3. \varepsilonnd{itemize} \varepsilonnd{proof} Since $I(1_S)=1$, Steps \ref{step:mon}, \ref{step:como_add} and \ref{step:disj_add} prove that $I$ satisfies condition $(i)$ of Theorem \ref{th:CPT} and therefore $I$ is a CPT functional. Moreover Step \ref{step:represent} shows that $I$ represents $\sigmauccsim$. Therefore the proof is complete. \varepsilonnd{proof} \muedskip \muedskip \betaegin{proof}[\tauextbf{Proof of Theorem \ref{th:axiom_CPT_convex}}] $(i)\muathbb{R}ightarrow (ii)$ Note that A.3' and A.3'' imply A.3. Hence Theorem~\ref{th:axiom_CPT} applies and $I$ is represented by a CPT functional. It is left to show that $v^+$ and $v^-$ are convex. We only show convexity of $v^-$. Fix $A,B\imathn{{\cal A}l A}$ and note that $CPT(-1_A)=-\lambdaambda v^-(A)=CPT(-v^-(A)1_S)$ and a similar statement holds for $B\imathn {{\cal A}l A}$. Therefore $-1_A\sigmaim -v^-(A)1_S$ and $-1_B\sigmaim -v^-(B)1_S$. Since $-1_B$ is comonotonic with $-v^-(A)1_S$, by A.3'' $-v^-(A)1_S-1_B\sigmauccsim -1_A-1_B$. Moreover since $-v^-(A)1_S$ is comonotonic with both $-1_B$ and $-v^-(B)1_S$ by A.3'' we get $-1_B-v^-(A)1_S\sigmaim -v^-(B)1_S-v^-(A)1_S$. Therefore $$ -v^-(B)1_S-v^-(A)1_S\sigmaim -1_B-v^-(A)1_S\sigmauccsim -1_A-1_B $$ Note that $-1_A-1_B=-1_{A{\cal U}p B}-1_{A{\cal A}p B}$ and since $1_{A{\cal U}p B}$ and $1_{A{\cal A}p B}$ are comonotonic, \betaegin{align*} CPT(-1_{A{\cal U}p B}-1_{A{\cal A}p B})&=-\imathnt \lambdaambda (-1_{A{\cal U}p B}-1_{A{\cal A}p B})^-dv^-\\ &= -\lambdaambda \lambdaeft( \imathnt1_{A{\cal U}p B} dv^-+ \imathnt1_{A{\cal A}p B} dv^-{\rm ri\hspace{.5pt}}ght) \\ &= -\lambdaambda [v^-(A{\cal U}p B)+v^-(A{\cal A}p B)]. \varepsilonnd{align*} Therefore $-\lambdaambda [v^-(A)+v^-(B)]= CPT(-v^-(A)1_S-v^-(B)1_S)\geq CPT(-1_{A{\cal U}p B}-1_{A{\cal A}p B})=-\lambdaambda [v^-(A{\cal U}p B)+v^-(A{\cal A}p B)]$ which implies $v^-(A)+v^-(B)\lambdaeq v^-(A{\cal U}p B)+v^-(A{\cal A}p B)$, i.e. $v^-$ is convex. $(ii)\muathbb{R}ightarrow (i)$ Left to the reader. \varepsilonnd{proof} \muedskip \muedskip \betaegin{proof}[\tauextbf{Proof of Theorem \ref{th:axiom_Sipos}}] $(i)\muathbb{R}ightarrow (ii)$ Since $\sigmauccsim$ satisfies A.1, A.2, A.3 and A.4, it can be represented by a CPT functional $I$ by Theorem \ref{th:axiom_CPT}. Hence for all $f\imathn \muathcal{F}$, $f\sigmaim I(f)1_S$ and $-f\sigmaim I(-f)1_S$. Notice that by A.5 one has also $-f\sigmaim -I(f)1_S$ and hence A.2 implies $I(-f)=-I(f)$. By Theorem \ref{th:Sipos}, $I$ is a \v Sipo\v s integral. $(ii)\muathbb{R}ightarrow (i)$ Left to the reader. \varepsilonnd{proof} \nuormalsize \betaegin{thebibliography}{99} \betaibitem{Allais} Allais, M. Le comportement de l'homme rationnel devant le risque: critique des postulats et axiomes de l'école américaine. \tauextit{Econometrica}, pp.503-546, 1953. \betaibitem{AA} Anscombe. F.J. and R. Aumann. A definition of subjective probability. \tauextit{Annals of Mathematical Statistics}, 34, 199-205, 1963. \betaibitem{Chato94} Chateauneuf, A. Modeling attitudes towards uncertainty and risk through the use of choquet integral. \tauextit{Annals of Operations Research}, 52, 1–20, 1994. \betaibitem{ChatoWakker99} Chateauneuf, A. and Wakker, P. An axiomatization of cumulative prospect theory for decision under risk. 
\tauextit{Journal of Risk and Uncertainty}, 18(2), pp.137-145, 1999. \betaibitem{ChewWakker96} Chew SH. and Wakker, P. The comonotonic sure-thing principle. \tauextit{Journal of Risk and Uncertainty}, 12(1), pp.5-27, 1996. \betaibitem{deFinetti} De Finetti, B. Sul significato soggettivo della probabilit\'a. \tauextit{Fundamenta Mathematicae}, 17(1), pp.298-329, 1931. \betaibitem{Denneberg} Denneberg D. \tauextit{Non-Additive Measure and Integral}. Kluwer, Dodrecht, 1994. \betaibitem{10-2Wakker} Diecidue, E. and Wakker, PP. On the intuition of rank-dependent utility. \tauextit{Journal of Risk and Uncertainty}, 23(3), pp.281-298, 2001. \betaibitem{Ellsberg} D. Ellsberg. Risk, ambiguity, and the Savage axioms. \tauextit{The Quarterly Journal of Economics}, 643-669, 1961. \betaibitem{Gal} Gal, D. and Rucker, D.D. The loss of loss aversion: Will it loom larger than its gain? \tauextit{Journal of Consumer Psychology}, 28(3), pp.497-516 , 2018. \betaibitem{Gat} Gächter, S., Johnson, E.J. and Herrmann, A. Individual-level loss aversion in riskless and risky choices. \tauextit{Theory and Decision}, pp.1-26, 2021. \betaibitem{KSW} Kothiyal, A., Spinu, V. and Wakker, PP. An experimental test of prospect theory for predicting choice under ambiguity. \tauextit{Journal of Risk and Uncertainty}, 48(1), pp.1-17, 2014. \betaibitem{Savage} Savage, L.J. \tauextit{The foundations of statistics}. Courier Corporation, 1972. \betaibitem{Schmeidler86} Schmeidler, D. Integral representation without additivity. \tauextit{Proceedings of the American Mathematical Society}, 97:255-261, 1986. \betaibitem{Schmeidler89} Schmeidler, D. Subjective probability and expected utility theory without additivity. \tauextit{Econometrica}, 57, 571-587, 1989. \betaibitem{SchmidtZank09} Schmidt, U. and Zank, H. A simple model of cumulative prospect theory. \tauextit{Journal of Mathematical Economics}, 45(3-4), pp.308-319, 2009. \betaibitem{SchmidtZank12} Schmidt, U. and Zank, H., 2012. A genuine foundation for prospect theory. \tauextit{Journal of Risk and Uncertainty}, 45(2), pp.97-113. \betaibitem{Sipos} \v Sipo\v s, J. Integral with respect to a pre-measure. \tauextit{Mathematica Slovaca}, 29(2), pp.141-155, 1979. \betaibitem{Starmer89} Starmer, C. and Sugden, R. Violations of the independence axion in common ratio problems: An experimental test of some competing hypotheses. \tauextit{Annals of Operations Research}, 19(1), pp.79-102, 1989. \betaibitem{TW} Trautmann, S. and Wakker, P. Making the Anscombe-Aumann approach to ambiguity suitable for descriptive applications. \tauextit{Journal of Risk and Uncertainty}, 56(1), pp.83-116, 2018. \betaibitem{CPT} Tversky, A. and Kahneman, D. Advances in prospect theory: Cumulative representation of uncertainty. \tauextit{Journal of Risk and Uncertainty}, 5(4), pp.297-323, 1992. \betaibitem{Wakker90} P. Wakker. Characterizing optimism and pessimism directly through comonotonicity. \tauextit{Journal of Economic Theory}, 52:453-463, 1990. \betaibitem{WakkerPT} Wakker, PP. \tauextit{Prospect theory: For risk and ambiguity}. Cambridge university press, 2010. \betaibitem{WT} Wakker, P. and Tversky, A. An axiomatization of cumulative prospect theory. \tauextit{Journal of Risk and Uncertainty}, 7(2), pp.147-175, 1993. \betaibitem{WZ} Wakker, PP. and Zank, H. A simple preference foundation of cumulative prospect theory with power utility. \tauextit{European Economic Review,} 46(7), pp.1253-1271, 2002. \betaibitem{Yaari} Yaari, ME. The dual theory of choice under risk. 
\tauextit{Econometrica}, pp.95-115, 1987. \betaibitem{vNM} Von Neumann, J. and Morgenstern, O. \tauextit{Theory of games and economic behavior}. Princeton university press, 1944. \betaibitem{Zank} Zank, H. Cumulative prospect theory for parametric and multiattribute utilities. \tauextit{Mathematics of Operations Research}, 26(1), pp.67-81, 2001. \varepsilonnd{thebibliography} \varepsilonnd{document}
\begin{document} \title{An answer to the Whitehead’s asphericity question} \author{Elton Pasku \\ Universiteti i Tiran\"es \\ Fakulteti i Shkencave Natyrore \\ Departamenti i Matematik\"es\\ Tiran\"e, Albania \\ [email protected]} \date{} \maketitle \begin{abstract} The Whitehead asphericity problem, regarded as a problem of combinatorial group theory, asks whether any subpresentation of an aspherical group presentation is also aspherical. We give a positive answer to this question by proving that if $\mathcal P=(\mathbf{x}, \mathbf{r})$ is an aspherical presentation of the trivial group, and $r_{0} \in \mathbf{r}$ a fixed relation, then $\mathcal P_{1}=(\mathbf{x}, \mathbf{r}_{1})$ is aspherical where $\mathbf{r}_{1}=\mathbf{r} \setminus \{r_{0}\}$. \end{abstract} \maketitle \section{Introduction} A 2-dimensional CW-complex $K$ is called {\it aspherical} if $\pi_2(K)=0$. The Whitehead asphericity problem (WAP for short), raised as a question in \cite{JHCW}, asks whether any subcomplex of an aspherical 2-complex is also aspherical. The question can be formulated in group theoretic terms since every group presentation $\mathcal{P}$ has a geometric realisation as a 2-dimensional CW-complex $K(\mathcal{P})$ and so $\mathcal{P}$ is called aspherical if $K(\mathcal{P})$ is aspherical. A useful review of this question is in \cite{Rosebrock}. The purpose of the present paper is to prove that if $\mathcal{P}=( \mathbf{x},\mathbf{r} )$ is an aspherical presentation of the trivial group and $r_{0} \in \mathbf{r}$ is a fixed relation, then the subpresentation $\mathcal{P}_{1}=( \mathbf{x},\mathbf{r}_{1} )$ where $\mathbf{r}_{1}=\mathbf{r} \setminus \{r_{0}\}$ is again aspherical. This in fact implies that WAP has always a positive answer since in Theorem 1 of \cite{I IJM} Ivanov proves that if the WAP is false, then there is an aspherical presentation $\mathcal P=(\mathcal{A}, \mathcal{R}\cup \{z\})$ of the trivial group where the alphabet $\mathcal{A}$ is countable and $z \in \mathcal{A}$ such that $\mathcal P_{1}=(\mathcal{A}, \mathcal{R})$ is not aspherical. An immediate implication of our result and that of Bestvina-Brady \cite{BB} is that the conjecture of Eilenberg and Ganea \cite{EG} is false. This conjecture states that if a discrete group $G$ has cohomological dimension 2, then it has a 2-dimensional Eilenberg-MacLane space $K(G, 1)$. There is a large corpus of results which are related to ours and is mostly contained in \cite{BP}, \cite{BH}, \cite{AGP}, \cite{CH82}, \cite{GR}, \cite{How79}, \cite{How81a}, \cite{How81b}, \cite{How82}, \cite{How83}, \cite{How84}, \cite{How85}, \cite{How98}, \cite{Hue82}, \cite{I IJM}, \cite{IL} and \cite{Stefan}. In the first part of our paper we will make use of the review paper \cite{BH} of Brown and Huebschmann which contains several key results about aspherical group presentations one of which is proposition 14 that gives sufficient and necessary conditions under which a group presentation $\mathcal{P}=( \mathbf{x},\mathbf{r} )$ is aspherical. It turns out that the asphericity of $\mathcal{P}$ is encoded in the structure of the free crossed module $(H/P,\hat{F},\delta)$ that is associated to $\mathcal{P}$. To be precise we state below proposition 14. \begin{proposition} (Proposition 14 of \cite{BH}) Let $K(\mathcal{P})$ be the geometric realisation of a group presentation $\mathcal{P}=( \mathbf{x},\mathbf{r})$ and let $G$ be the group given by $\mathcal{P}$. The following are equivalent. 
\begin{description} \item [(i)] The 2-complex $K(\mathcal{P})$ is aspherical. \item [(ii)] The module $\pi$ of identities for $\mathcal{P}$ is zero. \item [(iii)] The relation module $\mathcal{N(P)}$ of $\mathcal{P}$ is a free left $\mathbb{Z}G$ module on the images of the relators $r \in \mathbf{r}$. \item [(iv)] Any identity $Y$-sequence for $\mathcal{P}$ is Peiffer equivalent to the empty sequence. \end{description} \end{proposition} The last condition is of a particular interest to us. By definition, a $Y$-sequence for $\mathcal{P}$ is a finite (possibly empty) sequence of the form $((^{u_{1}}r_{1})^{\varepsilon_{1}},...,(^{u_{n}}r_{n})^{\varepsilon_{n}})$ where $r \in \mathbf{r}$, $u$ is a word from the free group $F$ over $\mathbf{x}$ and $\varepsilon =\pm 1$. A $Y$-sequence $((^{u_{1}}r_{1})^{\varepsilon_{1}},...,(^{u_{n}}r_{n})^{\varepsilon_{n}})$ is called an identity $Y$-sequence if it is either empty or if $\prod_{i=1,n}u_{i}r_{i}^{\varepsilon_{i}}u_{i}^{-1}=1$ in $F$. The definition of Peiffer equivalence is based on Peiffer operations on $Y$-sequences and reads as follows. \begin{itemize} \item [(i)] An \textit{elementary Peiffer exchange} replaces an adjacent pair $((^{u}r)^{\varepsilon},((^{v}s)^{\delta})$ in a $Y$-sequence by either $((^{ur^{\varepsilon}u^{-1}v}s)^{\delta},(^{u}r)^{\varepsilon})$, or by $((^{v}s)^{\delta}, ((^{vs^{-\delta}v^{-1}u}r)^{\varepsilon})$. \item [(ii)] A \textit{Peiffer deletion} deletes an adjacent pair $((^{u}r)^{\varepsilon},(^{u}r)^{-\varepsilon})$ in a $Y$-sequence. \item [(iii)] A \textit{Peiffer insertion} is the inverse of the Peiffer deletion. \end{itemize} The equivalence relation on the set of $Y$-sequences generated by the above operations is called \textit{Peiffer equivalence}. We recall from \cite{BH} what does it mean for an identity $Y$-sequence $((^{u_{1}}r_{1})^{\varepsilon_{1}},...,(^{u_{n}}r_{n})^{\varepsilon_{n}})$ to have the primary identity property. This means that the indices $1,2,...,n$ are grouped into pairs $(i,j)$ such that $r_{i}=r_{j}$, $\varepsilon_{i}=-\varepsilon_{j}$ and $u_{i}=u_{j}$ modulo $N$ where $N$ is the normal subgroup of $\hat{F}$ generated by $\mathbf{r}$. Proposition 16 of \cite{BH} shows that every such sequence is Peiffer equivalent to the empty sequence. Given an identity $Y$-sequence $d$ which is equivalent to the empty sequence 1, we would be interested to know what kind of insertions $((^{u}r)^{\varepsilon},(^{u}r)^{-\varepsilon})$ are used along the way of transforming $d$ to 1. It is obvious that keeping track of that information is vital to tackle the Whitehead problem. The aim of Section \ref{poma} of the present paper is to offer an alternative way in dealing with the asphericity of a group presentation $\mathcal{P}=( \mathbf{x},\mathbf{r})$ by considering a new crossed module $(\mathcal{G}(\Upsilon),\hat{F},\tilde{\theta})$ over the free group $\hat{F}$ on $\mathbf{x}$ where $\mathcal{G}(\Upsilon)$ is the group generated by the symbols $(^{u}r)^{\varepsilon}$ subject to relations $(^{u}r)^{\varepsilon} (^{v}s)^{\delta}=(^{{u{r^{\varepsilon}}u^{-1}}v}s)^{\delta}(^{u}r)^{\varepsilon}$, the action of $\hat{F}$ on $\mathcal{G}(\Upsilon)$ and the map $\tilde{\theta}$ are defined in the obvious fashion. The advantage of working with $\mathcal{G}(\Upsilon)$ is that unlike to $H/P$, in $\mathcal{G}(\Upsilon)$ the images of insertions $((^{u}r)^{\varepsilon},(^{u}r)^{-\varepsilon})$ do not cancel out and this enables us to express the asphericity in terms of such insertions. 
This is realized by considering the kernel $\tilde{\Pi}$ of $\tilde{\theta}$ which is the analogue of the module $\pi$ of identities for $\mathcal{P}$ in the standard theory and is not trivial when $\mathcal{P}$ is aspherical. We call $\tilde{\Pi}$ the generalized module of identities for $\mathcal{P}$. To prove our results we apply techniques from the theory of semigroup actions and to this end we use concepts like the universal enveloping group $\mathcal{G}(S)$ of a given semigroup $S$, the dominion of a subsemigroup $U$ of a semigroup $S$ and the tensor product of semigroup actions. These concepts are explained, with references, in Section \ref{ma}. \section{Monoid actions} \label{ma} For the benefit of the reader not familiar with monoid actions we will list below some basic notions and results that are used in the paper. For further results on the subject the reader may consult the monograph \cite{Howie}. Given $S$ a monoid with identity element 1 and $X$ a nonempty set, we say that $X$ is a \textit{left S-system} if there is an action $(s,x) \mapsto sx$ from $S \times X$ into $X$ with the properties \begin{align*} (st)x&=s(tx) \text{ for all } s,t \in S \text{ and } x \in X,\\ 1x&=x \text{ for all } x \in X. \end{align*} Right $S$-systems are defined analogously in the obvious way. Given $S$ and $T$ (not necessarily different) monoids, we say that $X$ is an \textit{(S,T)-bisystem} if it is a left $S$-system, a right $T$-system, and if \begin{equation*} (sx)t=s(xt) \text{ for all } s \in S, t \in T \text{ and } x \in X. \end{equation*} If $X$ and $Y$ are both left $S$-systems, then an \textit{S-morphism} or \textit{S-map} is a map $\phi: X \rightarrow Y$ such that \begin{equation*} \phi(sx)=s\phi(x) \text{ for all } s \in S \text{ and } x \in X. \end{equation*} Morphisms of right $S$-systems and of $(S,T)$-bisystems are defined in an analogue way. If we are given a left $T$-system $X$ and a right $S$-system $Y$, then we can give the cartesian product $X \times Y$ the structure of an $(T,S)$-bisystem by setting \begin{equation*} t(x,y)=(tx,y) \text{ and } (x,y)s=(x,ys). \end{equation*} Let now $A$ be an $(T,U)$-bisystem, $B$ an $(U,S)$-bisystem and $C$ an $(T,S)$-bisystem. As explained above, we can give to $A \times B$ the structure of an $(T,S)$-bisystem. With this in mind we say that a $(T,S)$-map $\beta: A \times B \rightarrow C$ is a \textit{bimap} if \begin{equation*} \beta(au,b)=\beta(a,ub) \text{ for all } a\in A, b \in B \text{ and } u \in U. \end{equation*} A pair $(A\otimes_{U}B,\psi)$ consisting of a $(T,S)$-bisystem $A\otimes_{U}B$ and a bimap $\psi: A \times B \rightarrow A \otimes_{U}B$ will be called a \textit{tensor product of A and B over U} if for every $(T,S)$-bisystem $C$ and every bimap $\beta: A \times B \rightarrow C$, there exists a unique $(T,S)$-map $\bar{\beta}: A\otimes_{U}B \rightarrow C$ such that the diagram \begin{equation*} \xymatrix{A\times B \ar[d]_{\beta} \ar[r]^{\psi} & A \otimes_{U} B \ar[ld]^{\bar{\beta}}\\C } \end{equation*} commutes. It is proved in \cite{Howie} that $A\otimes_{U}B$ exists and is unique up to isomorphism. The existence theorem reveals that $A\otimes_{U}B=(A \times B)/\tau$ where $\tau$ is the equivalence on $A \times B$ generated by the relation \begin{equation*} T=\{ ((au,b),(a,ub)): a \in A, b \in B, u \in U\}. \end{equation*} The equivalence class of a pair $(a,b)$ is usually denoted by $a \otimes_{U}b$. To us is of interest the situation when $A=S=B$ where $S$ is a monoid and $U$ is a submonoid of $S$. 
Here $A$ is clearly regarded as an $(S,U)$-bisystem with $U$ acting on the right on $A$ by multiplication, and $B$ as an $(U,S)$-bisystem where $U$ acts on the left on $B$ by multiplication. Another concept that is important to our approach is that of the dominion which is defined in \cite{Isbell} from Isbell. By definition, if $U$ is a submonoid of a monoid $S$, then the dominion $\text{Dom}_{S}(U)$ consists of all the elements $d \in S$ having the property that for every monoid $T$ and every pair of monoid homomorphisms $f,g: S \rightarrow T$ that coincide in $U$, it follows that $f(d)=g(d)$. Related to dominions there is the well known zigzag theorem of Isbell. We will present here the Stenstrom version of it (theorem 8.3.3 of \cite{Howie}) which reads. \textit{Let $U$ be a submonoid of a monoid $S$ and let $d \in S$. Then, $d \in \text{Dom}_{S}(U)$ if and only if $d \otimes_{U}1=1\otimes_{U}d$ in the tensor product $A=S \otimes_{U}S$}. We mention here that this result holds true if $S$ turns out to be a group and $U$ a subgroup, both regarded as monoids. A key result (theorem 8.3.6 of \cite{Howie}) that is used in the next section is the fact that any inverse semigroup $U$ is absolutely closed in the sense that for every semigroup $S$ containing $U$ as a subsemigroup, $\text{Dom}_{S}(U)=U$. It is obvious that groups are absolutely closed as special cases of inverse monoids (see \cite{epidom2}). \section{Peiffer operations and monoid actions} \label{poma} Before we explain how monoid actions are used to deal with the Peiffer operations on $Y$-sequences, we will introduce several monoids. The first one is the monoid $\Upsilon$ defined by the monoid presentation $\mathcal{M}=\langle Y \cup Y^{-1}, P \rangle $ where $Y^{-1}$ is the set of group inverses of the elements of $Y$ and $P$ consists of all pairs $(ab,{^{\theta (a)}}b a)$ where $a,b \in Y\cup Y^{-1}$. The second one is the group $\mathcal{G}(\Upsilon)$ given by the group presentation $ (Y \cup Y^{-1}, \hat{P} )$ where $\hat{P}$ is the set of all words $ab\iota(a)\iota(^{\theta(a)}b)$ where by $\iota(c)$ we denote the inverse of $c$ in the free group over $Y\cup Y^{-1}$. Before we introduce the next two monoids and the respective monoid actions, we stop to explain that $\Upsilon$ and $\mathcal{G}(\Upsilon)$ are special cases of a more general situation. If a monoid $S$ is given by the monoid presentation $\mathcal{M}=\langle X, R\rangle$, then its \textit{universal enveloping group} $\mathcal{G}(S)$ (see \cite{Bergman} and \cite{Cohn}) is defined to be the group given by the group presentation $( X, \hat{R} )$ where $\hat{R}$ consists of all words $u\iota(v)$ whenever $(u,v) \in R$ where $\iota(v)$ is the inverse of $v$ in the free group over $X$. We let for future use $\sigma: FM(X) \rightarrow S$ the respective canonical homomorphism where $FM(X)$ is the free monoid on $X$. It is easy to see that there is a monoid homomorphism $\mu_{S}: S \rightarrow \mathcal{G}(S)$ which satisfies the following universal property. For every group $G$ and monoid homomorphism $f: S \rightarrow G$, there is a unique group homomorphism $\hat{f}: \mathcal{G}(S) \rightarrow G$ such that $\hat{f} \mu_{S}=f$. This universal property is an indication of an adjoint situation. Specifically, the functor $\mathcal{G}:\mathbf{Mon} \rightarrow \mathbf{Grp}$ which maps every monoid to its universal group, is a left adjoint to the forgetful functor $U: \mathbf{Grp} \rightarrow \mathbf{Mon}$. 
This ensures that $\mathcal{G}(S)$ is an invariant of the presentation of $S$. The third monoid we consider is the submonoid $\mathfrak{U}$ of $\Upsilon$, having the same unit as $\Upsilon$, and is generated from all the elements of the form $\sigma(a)\sigma(a^{-1})$ with $a \in Y \cup Y^{-1}$. This monoid, acts on the left and on the right on $\Upsilon$ by the multiplication in $\Upsilon$. The last monoid considered is the subgroup $\hat{\mathfrak{U}}$ of $\mathcal{G}(\Upsilon)$ generated by $\mu(\mathfrak{U})$. Similarly to above, $\hat{\mathfrak{U}}$ acts on $\mathcal{G}(\Upsilon)$ by multiplication. Given $\alpha=(a_{1},...,a_{n})$ an $Y$-sequence over the group presentation $\mathcal{P}=( \mathbf{x},\mathbf{r})$, then performing an elementary Peiffer operation on $\alpha$ can be interpreted in a simple way in terms of the monoids $\Upsilon$ and $\mathfrak{U}$. In what follows we will denote by $\sigma(\alpha)$ the element $\sigma(a_{1})\cdot \cdot \cdot \sigma(a_{n}) \in \Upsilon$. If $\beta=(b_{1},...,b_{n})$ is obtained from $\alpha=(a_{1},...,a_{n})$ by performing an elementary Peiffer exchange, then from the definition of $\Upsilon$, $\sigma(\alpha)=\sigma(\beta)$, therefore an elementary Peiffer exchange or a finite sequence of such has no effect on the element $\sigma(a_{1})\cdot \cdot \cdot \sigma(a_{n}) \in \Upsilon$. Before we see the effect that a Peiffer insertion in $\alpha$ has on $\sigma(\alpha)$ we need the first claim of the following. \begin{lemma} \label{central} The elements of $\mathfrak{U}$ are central in $\Upsilon$ and those of $\hat{\mathfrak{U}}$ are central in $\mathcal{G}(\Upsilon)$. \end{lemma} \begin{proof} We see that for every $a \text{ and } b \in Y \cup Y^{-1}$, $\sigma(a)\sigma(a^{-1})\sigma(b)=\sigma(b)\sigma(a)\sigma(a^{-1})$. Indeed, \begin{align*} \sigma(a)\sigma(a^{-1})\sigma(b)&=~^{\theta(a)\theta(a^{-1})}{\sigma(b)}(\sigma(a)\sigma(a^{-1}))\\ &=\sigma(b)\sigma(a)\sigma(a^{-1}). \end{align*} Since elements $\sigma(b)$ and $\sigma(a)\sigma(a^{-1})$ are generators of $\Upsilon$ and $\mathfrak{U}$ respectively, then the first claim holds true. The second claim follows easily. \end{proof} If we insert $(a,a^{-1})$ at some point in $\alpha=(a_{1},...,a_{n})$ to obtain $\alpha'=(a_{1},...,a,a^{-1},...,a_{n})$, then from lemma \ref{central}, \begin{equation*} \sigma(\alpha')=\sigma(\alpha) \cdot (\sigma(a)\sigma(a^{-1})), \end{equation*} which means that inserting $(a,a^{-1})$ inside a $Y$-sequence $\alpha$ has the same effect as multiplying the corresponding $\sigma(\alpha)$ in $\Upsilon$ by the element $\sigma(a)\sigma(a^{-1})$ of $\mathfrak{U}$. For the converse, it is obvious that any word $\beta \in FM(Y \cup Y^{-1})$ representing $\sigma(\alpha)\cdot (\sigma(a)\sigma(a^{-1}))$ is Peiffer equivalent to $\alpha$. Of course the deletion has the obvious interpretation in our semigroup theoretic terms as the inverse of the above process. We retain the same names for our semigroup operations, that is insertion for multiplication by $\sigma(a)\sigma(a^{-1})$ and deletion for its inverse. Related to these operations on the elements of $\Upsilon$ we make the following definition. \begin{definition} We denote by $\sim_{\mathfrak{U}}$ the equivalence relation in $\Upsilon$ generated by all pairs $(\sigma(\alpha),\sigma(\alpha)\cdot \sigma(a)\sigma(a^{-1}))$ where $\alpha \in \text{FM}(Y\cup Y^{-1})$ and $a \in Y \cup Y^{-1}$. 
We say that two elements $\sigma(a_{1})\cdot \cdot \cdot \sigma(a_{n})$ and $\sigma(b_{1})\cdot \cdot \cdot \sigma(b_{m})$ where $m,n \geq 0$ are \textit{Peiffer equivalent in $\Upsilon$} if they fall in the same $\sim_{\mathfrak{U}}$-class. \end{definition} From what we said before it is obvious that two $Y$-sequences $\alpha$ and $\beta$ are Peiffer equivalent in the usual sense if and only if $\sigma(\alpha)\sim_{\mathfrak{U}} \sigma(\beta)$. For this reason we decided to make the following convention. If $\alpha=(a_{1},...,a_{n})$ is a $Y$-sequence (resp. an identity $Y$-sequence), then its image in $\Upsilon$, $\sigma(\alpha)$ will again be called a $Y$-sequence (resp. an identity $Y$-sequence). In the future instead of working directly with an $Y$-sequence $\alpha$, we will work with its image $\sigma(\alpha)$. We note that it should be mentioned that the study of $\sim_{\mathfrak{U}}$ might be as hard as the study of Peiffer operations on $Y$-sequences, and at this point it seems we have not made any progress at all. In fact this definition will become useful later in this section and yet we have to prove a few more things before we utilize it. The process of inserting and deleting generators of $\mathfrak{U}$ in an element of $\Upsilon$ is related to the following new concept. Given $U$ a submonoid of a monoid $S$ and $d \in S$, then we say that $d$ belongs to the \textit{weak dominion of} $U$, shortly written as $d \in \text{WDom}_{S}(U)$, if for every group $G$ and every monoid homomorphisms $f,g:S \rightarrow G$ such that $f(u)=g(u)$ for every $u \in U$, then $f(d)=g(d)$. An analogue of the Stenstr\"{o}m version of Isbell's theorem for weak dominion holds true. The proof of the if part of its analogue is similar to that of Isbell theorem apart from some minor differences that reflect the fact that we are working with $WDom$ rather than $Dom$ and that will become clear along the proof, while the converse relies on the universal property of $\mu: S \rightarrow \mathcal{G}(S)$. \begin{proposition} \label{wd prop} Let $S$ be a monoid, $U$ a submonoid and let $\hat{U}$ be the subgroup of $\mathcal{G}(S)$ generated by elements $\mu(u)$ with $u \in U$. Then $d \in \text{WDom}_{S}(U)$ if and only if $\mu(d) \in \hat{U}$. \end{proposition} \begin{proof} The set $\hat{A}=\mathcal{G}(S)\otimes_{\hat{U}} \mathcal{G}(S)$ has an obvious $(\mathcal{G}(S),\mathcal{G}(S))$-bisystem structure. The free abelian group $\mathbb{Z}\hat{A}$ on $\hat{A}$ inherits a $(\mathcal{G}(S),\mathcal{G}(S))$-bisystem structure if we define \begin{equation*} g \cdot \sum z_{i}(g_{i} \otimes_{\hat{U}}h_{i})=\sum z_{i}(gg_{i}\otimes_{\hat{U}}h_{i}) \text{ and } \left(\sum z_{i} (g_{i} \otimes_{\hat{U}}h_{i})\right)\cdot g=\sum z_{i}(g_{i} \otimes_{\hat{U}}h_{i}g). \end{equation*} The set $\mathcal{G}(S) \times \mathbb{Z}\hat{A}$ becomes a group by defining \begin{equation*} (g,\sum z_{i} g_{i} \otimes_{\hat{U}} h_{i})\cdot (g',\sum z'_{i} g'_{i} \otimes_{\hat{U}}h'_{i})=(gg', \sum z_{i} g_{i} \otimes_{\hat{U}} h_{i}g'+\sum z'_{i} gg'_{i} \otimes_{\hat{U}}h'_{i}). \end{equation*} The associativity is proved easily. The unit element is $(1,0)$ and for every $(g,\sum z_{i} g_{i} \otimes_{\hat{U}} h_{i})$ its inverse is the element $(g^{-1},-\sum z_{i} g^{-1}g_{i} \otimes_{\hat{U}} h_{i} g^{-1})$. 
Let us now define \begin{equation*} \beta: S \rightarrow \mathcal{G}(S) \times \mathbb{Z}\hat{A} \text{ by } s\mapsto (\mu(s),0), \end{equation*} which is clearly a monoid homomorphism, and \begin{equation*} \gamma: S \rightarrow \mathcal{G}(S) \times \mathbb{Z}\hat{A} \text{ by } s \mapsto (\mu(s), \mu(s) \otimes_{\hat{U}}1-1\otimes_{\hat{U}} \mu(s)), \end{equation*} which is again seen to be a monoid homomorphism. These two coincide on $U$ since for every $u \in U$ \begin{equation*} \gamma(u)=(\mu(u),\mu(u) \otimes_{\hat{U}}1-1\otimes_{\hat{U}}\mu(u))=(\mu(u),0)=\beta(u). \end{equation*} The last equality and the assumption that $d \in \text{WDom}_{S}(U)$ imply that $\beta(d)=\gamma(d)$, therefore \begin{equation*} (\mu(d),0)=(\mu(d),\mu(d)\otimes_{\hat{U}} 1-1 \otimes_{\hat{U}} \mu(d)), \end{equation*} which shows that $\mu(d)\otimes_{\hat{U}} 1=1 \otimes_{\hat{U}} \mu(d)$ in the tensor product $\mathcal{G}(S)\otimes_{\hat{U}} \mathcal{G}(S)$ and therefore theorem 8.3.3, \cite{Howie}, applied for monoids $\mathcal{G}(S)$ and $\hat{U}$, implies that $\mu(d) \in \text{Dom}_{\mathcal{G}(S)}(\hat{U})$. But $\text{Dom}_{\mathcal{G}(S)}(\hat{U})=\hat{U}$ as from theorem 8.3.6, \cite{Howie} every inverse semigroup is absolutely closed, whence $\mu(d) \in \hat{U}$. Conversely, suppose that $\mu(d) \in \hat{U}$ and we want to show that $d \in \text{WDom}_{S}(U)$. Let $G$ be a group and $f,g: S \rightarrow G$ two monoid homomorphisms that coincide in $U$, therefore the group homomorphisms $\hat{f}, \hat{g}: \mathcal{G}(S) \rightarrow G$ of the universal property of $\mu$ coincide in $\hat{U}$ which, from our assumption, implies that $\hat{f}(\mu(d))=\hat{g}(\mu(d))$, and then $f(d)=g(d)$ proving that $d \in \text{WDom}_{S}(U)$. \end{proof} Given a presentation $\mathcal{P}=( \mathbf{x},\mathbf{r})$ for a group $G$, we consider the following crossed module. If $\mathcal{G}(\Upsilon)$ is the universal group associated with $\mathcal{P}$ and $\hat{F}$ is the free group on $\mathbf{x}$, then we define \begin{equation*} \tilde{\theta}: \mathcal{G}(\Upsilon) \rightarrow \hat{F} \text{ by } \mu\sigma(^{u}{r})^{\varepsilon} \mapsto ur^{\varepsilon}u^{-1}. \end{equation*} An action of $\hat{F}$ on $\mathcal{G}(\Upsilon)$ is given by $^{v}(\mu \sigma (^{u}r)^{\varepsilon})=\mu \sigma(^{vu}r)^{\varepsilon}$ for every $v \in \hat{F}$ and every generator $\mu \sigma((^{u}r)^{\varepsilon})$ of $\mathcal{G}(\Upsilon)$. It is easy to check that the triple $(\mathcal{G}(\Upsilon),\hat{F},\tilde{\theta})$ is a crossed module over $\hat{F}$. The elements of $\text{Ker}(\tilde{\theta})$ are central, therefore $\text{Ker}(\tilde{\theta})$ is an abelian subgroup of $\mathcal{G}(\Upsilon)$ on which $G$ acts on the left by the rule $$^{g} (\mu \sigma(a_{1},...,a_{n})\iota\mu \sigma (b_{1},...,b_{m}))=\mu \sigma (^{w}{a_{1}},...,^{w}{a_{n}})\iota \mu \sigma(^{w}{b_{1}},...,^{w}{b_{m}}),$$ where $w$ is a word in $\hat{F}$ representing $g$. With this action $\text{Ker}(\tilde{\theta})$ becomes a left $G$-module which we call \textit{the generalized module of identities for} $\mathcal{P}$ and is denoted by $\tilde{\Pi}$. Also we note that $\hat{\mathfrak{U}}$ is a sub $G$-module of $\tilde{\Pi}$. The module of identities $\pi$ for $\mathcal{P}$ is obtained from $\tilde{\Pi}$ by factoring out $\hat{\mathfrak{U}}$. In terms of $\tilde{\Pi}$ and $\hat{\mathfrak{U}}$ we prove the following analogue of theorem 3.1 of \cite{papa-attach}. \begin{theorem} \label{wdom} The following assertions are equivalent. 
\begin{itemize} \item [(i)] The presentation $\mathcal{P}=( \mathbf{x},\mathbf{r})$ is aspherical. \item [(ii)] For every identity $Y$-sequence $d$, $d \in \text{WDom}_{\Upsilon}(\mathfrak{U})$. \item [(iii)] $\tilde{\Pi}=\hat{\mathfrak{U}}$. \end{itemize} \end{theorem} \begin{proof} $(i) \Rightarrow (ii)$ Let $d=\sigma(a_{1})\cdot \cdot \cdot \sigma(a_{n}) \in \Upsilon$ be any identity $Y$-sequence and as such it has to be Peiffer equivalent to 1. We proceed by showing that $d \in \text{WDom}_{\Upsilon}(\mathfrak{U})$. Let $G$ be any group and $f,g:\Upsilon \rightarrow G$ two monoid homomorphisms that coincide in $\mathfrak{U}$ and we want to show that $f(d)=g(d)$. The proof will be done by induction on the minimal number $h(d)$ of insertions and deletions needed to transform $d=\sigma(a_{1})\cdot \cdot \cdot \sigma(a_{n})$ to $1$. If $h(d)=1$, then $d \in \mathfrak{U}$ and $f(d)=g(d)$. Suppose that $h(d)=n>1$ and let $\tau$ be the first operation performed on $d$ in a series of operations of minimal length. After $\tau$ is performed on $d$, it is obtained an element $d'$ with $h(d')=n-1$. By induction hypothesis, $f(d')=g(d')$ and we want to prove that $f(d)=g(d)$. There are two possible cases for $\tau$. First, $\tau$ is an insertion and let $u=\sigma(a)\sigma(a^{-1}) \in \mathfrak{U}$ be the element inserted. It follows that $f(d')=f(d)f(u)$ and $g(d')=g(d)g(u)$, but $f(u)=g(u)$, therefore from cancellation law in the group $G$ we get $f(d)=g(d)$. Second, $\tau$ is a deletion and let $u=\sigma(a)\sigma(a^{-1}) \in \mathfrak{U}$ be the element deleted, that is $d=d'u$. It follows immediately from the assumptions that $f(d)=g(d)$ proving that $d \in \text{WDom}_{\Upsilon}(\mathfrak{U})$. $(ii) \Rightarrow (iii)$ Let $\tilde{d} \in \tilde{\Pi}$. We may assume without loss of generality that no $\iota(\mu\sigma(^{u}{r})^{\varepsilon})$ is represented in $\tilde{d}$ for if there is any such occurrence, we can multiply $\tilde{d}$ by $\mu\sigma((^{u}{r})^{\varepsilon}(^{u}{r})^{-\varepsilon})$ to obtain in return $\tilde{d}'$ where $\iota(\mu\sigma(^{u}{r})^{\varepsilon})$ is now replaced by $\mu\sigma((^{u}{r})^{-\varepsilon})$. It is obvious that if $\tilde{d}' \in \hat{\mathfrak{U}}$, then $\tilde{d} \in \hat{\mathfrak{U}}$ and conversely. Let now $d$ be any preimage of $\tilde{d}$ under $\mu$. It is clear that $d$ is an identity $Y$-sequence and as such $d \in \text{WDom}_{\Upsilon}(\mathfrak{U})$. Then proposition \ref{wd prop} implies that $\tilde{d}=\mu(d) \in \hat{\mathfrak{U}}$. $(iii) \Rightarrow (i)$ Assume that $\tilde{\Pi}=\hat{\mathfrak{U}}$ and we want to show that any identity $Y$-sequence $d$ is Peiffer equivalent to 1. From the assumption for $d$ we have that $\mu(d) \in \hat{\mathfrak{U}}$ and then proposition \ref{wd prop} implies that $d \in \text{WDom}_{\Upsilon}(\mathfrak{U})$. Consider the group $H/P$ as a quotient of $\mathcal{G}(\Upsilon)$ obtained by identifying $\iota(\mu\sigma({^{u}{r}}))$ with $\mu\sigma((^{u}{r})^{-1})$ and let $\nu:\mathcal{G}(\Upsilon) \rightarrow H/P$ be the respective quotient morphism. Writing $\tau$ for the zero morphism from $\Upsilon$ to $H/P$, we see that $\tau$ and the composition $\nu\mu$ coincide in $\mathfrak{U}$, therefore since $d \in \text{WDom}_{\Upsilon}(\mathfrak{U})$, it follows that $\nu\mu(d)=1$ in $H/P$. The asphericity of $\mathcal{P}$ now follows from theorem 2.7, p.71 of \cite{cha}. \end{proof} Before we prove our next result we recall the definition of the relation module $\mathcal{N}(\mathcal{P})$. 
Given $\mathcal{P}=( \mathbf{x},\mathbf{r} )$ a presentation for a group $G$, we let $\alpha: \hat{F} \rightarrow G$ and $\beta: N \rightarrow N/[N,N]$ be the canonical homomorphisms where $N$ is the normal closure of $\mathbf{r}$ in $\hat{F}$ and $[N,N]$ its commutator subgroup. There is a well defined $G$-action on $\mathcal{N}(\mathcal{P})=N/[N,N]$ given by \begin{equation*} w^{\alpha}\cdot s^{\beta}=(w^{-1}sw)^{\beta} \end{equation*} for every $w\in \hat{F}$ and $s \in N$. This action extends to an action of $\mathbb{Z}G$ over $\mathcal{N}(\mathcal{P})$ by setting \begin{equation*} (w_{1}^{\alpha} \pm w_{2}^{\alpha})\cdot s^{\beta}=(w_{1}^{-1}sw_{1}w_{2}^{-1}s^{\pm1}w_{2})^{\beta}. \end{equation*} When $\mathcal P$ is aspherical, the basis of $\mathcal{N}(\mathcal{P})$ as a free $\mathbb{Z}G$ module is the set of elements $r^{\beta}$ with $r \in \mathbf{r}$. \begin{proposition} \label{free} If $\mathcal{P}$ is aspherical, then $\hat{\mathfrak{U}}$ is a free $G$-module with bases equipotent to the set $\mathbf{r}$. \end{proposition} \begin{proof} The result follows if we show that $\hat{\mathfrak{U}} \cong \mathcal{N}(\mathcal{P})$ as $G$-modules. For this we define $$\Omega: \mathcal{N}(\mathcal{P}) \rightarrow \hat{\mathfrak{U}}$$ on free generators by $r^{\beta} \mapsto \mu \sigma (rr^{-1})$ which is clearly well defined and a surjective morphism of $G$-modules. Now we prove that $\Omega$ is injective. Let $$\xi= \sum_{i=1}^{n} u_{i}^{\alpha}\cdot r_{i}^{\beta} - \sum_{j=n+1}^{m} v_{j}^{\alpha} \cdot r_{j}^{\beta} \in \text{Ker}(\Omega),$$ which means that \begin{equation} \label{ker} \prod_{i=1}^{n} \mu \sigma (^{u_{i}}r_{i}(^{u_{i}}r_{i})^{-1}) \iota \left( \prod_{j=n+1}^{m} \mu \sigma (^{v_{j}}r_{j}(^{v_{j}}r_{j})^{-1}) \right)=1. \end{equation} To prove that $\xi=0$ we will proceed as follows. Define \begin{equation*} \gamma: FM(Y \cup Y^{-1}) \rightarrow \mathcal{N}(\mathcal{P}) \end{equation*} on free generators as follows \begin{equation*} (^{u}r)^{\varepsilon} \mapsto u^{\alpha}\cdot r^{\beta}. \end{equation*} It is easy to see that $\gamma$ is compatible with the defining relations of $\Upsilon$, hence there is $g:\Upsilon \rightarrow \mathcal{N}(\mathcal{P})$ and then the universal property of $\mu$ implies the existence of $\hat{g}: \mathcal{G}(\Upsilon) \rightarrow \mathcal{N}(\mathcal{P})$ such that $\hat{g}\mu=g$. If we apply now $\hat{g}$ on both sides of (\ref{ker}) obtain $$2 \cdot \sum_{i=1}^{n} u_{i}^{\alpha}\cdot r_{i}^{\beta} - 2 \cdot \sum_{j=n+1}^{m} v_{j}^{\alpha} \cdot r_{j}^{\beta}=0,$$ proving that $\xi=0$. \end{proof} \section{Proof of the main theorem} The proof of our main theorem is heavily based on two papers. The first one is \cite{SES} where McGlashan et al extended the Squier complex of a monoid presentation to a 3-complex and obtained a short exact sequence involving data from this complex. This sequence will be crucial in the proof of our theorem. The second one is \cite{LDHM2} where Pride realizes the second homotopy group associated with a group presentation as the first homotopy group of a certain extension of the Squire complex arising from that presentation. For the sake of completeness we have added below a number of sections which tend to explain the material that is used in our proofs. Section \ref{rws} gives some basic material about rewriting systems since they are used in the construction of our complexes and in our proofs. 
In Section \ref{sqc} we explain in some details how the Squier complex of a monoid presentation is defined and the cellular chain complex associated with it. Further in section \ref{exsqc} we give the definition of the extended Squier complex as it appears in \cite{SES} and some of the homological consequences that will be used in our proofs. Section \ref{osqc} shows how the 0 and the 1-skeleton of the Squier complex is well ordered, and in the case when the rewriting system is complete, it shows how these well orders induce another well order in the set of all 2-cells of the extended 3-complex. This new well order will be used further in section \ref{shp}. Section \ref{kb} is about the Knuth-Bendix completion procedure since it is used to give a new and shorter proof of the key result of \cite{SES} regarding the short exact sequence we mentioned above. This proof is given in section \ref{shp}. Section \ref{pc} is devoted to introducing the Pride complex associated with a group presentation and to explain ideas and results from \cite{LDHM2} since we make extensive use of them in our proofs. Finally, it is important to mention that theorem 6.6 of \cite{OK2002} is vital in the proof of key lemma \ref{c-rel-ext}. \subsection{Some basic concepts from rewriting systems} \label{rws} A rewriting system is a pair $\mathcal P=(\mathbf{x}, \mathbf{r})$ where $\mathbf{x}$ is a non empty set and $\mathbf{r}$ is a set of rules $r=(r_{+1},r_{-1}) \in F \times F$ where $F$ is the free monoid on $\mathbf{x}$. Related with $\mathbf{r}$ there is the so called the one single step reduction of words $$\rightarrow_{\mathbf{r}}=\{(ur_{+1}v,ur_{-1}v)|r \in \mathbf{r} \text{ and } u, v \in F\}.$$ The reflexive and transitive closure of $\rightarrow_{\mathbf{r}}$ is denoted by $\rightarrow_{\mathbf{r}}^{\ast}$, and the reflexive, transitive and symmetric closure is denoted by $\leftrightarrow_{\mathbf{r}}^{\ast}$ and is also known as the Thue congruence generated by $\mathbf{r}$. The quotient $F/\leftrightarrow_{\mathbf{r}}^{\ast}$ forms a monoid $S$ whose elements are the congruence classes $\bar{u}$ of words $u \in F$, and the multiplication is given by $\bar{u}\cdot \bar{v}= \overline{uv}$. We say that the monoid $S$ is given by $\mathcal P$, or that $\mathcal P$ is a presentation for $S$. A rewriting system $\mathcal P=(\mathbf{x}, \mathbf{r})$ is noetherian if there is no infinite chain $$w \rightarrow_{\mathbf{r}} w' \rightarrow_{\mathbf{r}} \dots$$ and is confluent if whenever we have $w \rightarrow_{\mathbf{r}}^{\ast} w_{1}$ and $w \rightarrow_{\mathbf{r}}^{\ast} w_{2}$, then there is $z \in F$ such that $w_{1} \rightarrow_{\mathbf{r}}^{\ast} z$ and $w_{2} \rightarrow_{\mathbf{r}}^{\ast} z$. A rewriting system $\mathcal P=(\mathbf{x}, \mathbf{r})$ is complete if it is both noetherin and confluent. Let $\mathcal P=(\mathbf{x}, \mathbf{r})$ be a presentation for a monoid $S$. The natural epimorphism $$F \rightarrow S \text{ such that } w \mapsto \bar{w},$$ where $F$ is the free monoid on $\mathbf{x}$, extends linearly to a ring epimorphism $$\mathbb{Z}F \rightarrow \mathbb{Z}S$$ of the corresponding integral monoid rings. The kernel of this epimorphism is denoted by $J$ which as an abelian group is generated by all $$u(r_{+1}-r_{-1})v \text{ where } u, v \in F \text{ and } r \in \mathbf{r}.$$ As a $(\mathbb{Z}F, \mathbb{Z}F)$-bimodule $J$ is generated by all $r_{+1}-r_{-1}$. 
\subsection{The Squier complex of a monoid presentation} \label{sqc} The material included in this section is taken from \cite{SES} (see also \cite{Sth}). At the end of the section we give shortly the respective terminology used in \cite{OK2002} which differs slightly from ours. The reason we explain this terminology is the use of theorem 6.6 of \cite{OK2002} in the proof of our key lemma \ref{c-rel-ext}. For every rewriting system $\mathcal P=(\mathbf{x}, \mathbf{r})$ we can define its graph of derivations $\Gamma(\mathcal P)$ whose vertices are the elements of $F$, and the edges are all quadruples $$e=(w,r, \varepsilon, w') \text{ where } w,w' \in F, \varepsilon=\pm 1, r \in \mathbf{r},$$ with initial, terminal and inverse functions $$\iota e= wr_{\varepsilon}w', \tau e= wr_{-\varepsilon}w' \text{ and } e^{-1}=(w,r, -\varepsilon, w').$$ The edge $e$ is called positive if $\varepsilon=1$. We can think of $\Gamma(\mathcal P)$ as a one dimensional cw-complex with 0-cells all the elements of $F$ and with 1-cells all positive edges. We note here that $e^{-1}=(w,r, -1, w')$ is not a new edge attached to the complex, but is defined to mean the topological inverse of the attaching map of $e=(w,r, 1, w')$. A path $p$ of length $n$ in $\Gamma(\mathcal P)$ is a sequence of edges $p=e_{1} \dots e_{i}e_{i+1} \dots e_{n}$ where $\tau e_{i}=\iota e_{i+1}$ for $1 \leq i \leq n-1$. It is called positive if the edges are positive, and is called closed if $\iota e_{1}=\tau e_{n}$. There is a natural two-sided action of $F$ on $\Gamma(\mathcal P)$. The action on vertices is given by the multiplication of $F$, and the action of $z,z' \in F$ on edges $e=(w,r, \varepsilon, w')$ is given by $$z.e.z'=(zw,r, \varepsilon, w'z'),$$ and sometimes is called translation. This action extends to paths in the obvious way. Note that there is a 1-1 correspondence between the elements of $S$ given by $\mathcal P$ and the connected components of $\Gamma(\mathcal P)$ since $u \leftrightarrow_{\mathbf{r}}^{\ast} v$ if and only if there is a path in $\Gamma(\mathcal P)$ connecting $u$ with $v$. Also note that the generators of $J$ as an abelian group are the elements $\iota e- \tau e$ where $e$ is a positive edge. We say that two positive edges $e_{1}$ and $e_{2}$ are disjoint if they can be written in the form $$e_{1}=f_{1}. \iota f_{2}, f_{2}=\iota f_{1} f_{2}$$ where $f_{1}, f_{2}$ are positive edges. We say that an edge $e$ is left reduced (resp. right reduced) if it cannot be written in the form $u.f$ (resp. $f. u$) for some non empty word $u \in F$ and an edge $f$. A pair of positive edges with the same initial forms a critical pair it either \begin{itemize} \item [(1)] One of the pair is both left and right reduced (a critical pair of inclusion type), or \item[(2)] One of the pair is left reduced but not right reduced, and the other is right reduced but not left reduced (a critical pair of overlapping type). \end{itemize} We say that a critical pair $(e_{1},e_{2})$ is resolvable if there are positive paths (a resolution of the critical pair) from $\tau e_{1}$ and $\tau e_{2}$ to a common vertex. It is well known \cite{Newman} that, when the system $\mathcal P=(\mathbf{x}, \mathbf{r})$ is noetherian and if all the critical pairs are resolvable, then the system is confluent. 
The Squier complex $\mathcal{D}(\mathcal P)$ associated with $\mathcal P$ is a combinatorial 2-complex with 1-skeleton $\Gamma(\mathcal P)$, to which, for each pair of positive edges $e,f$ a 2-cell $[e,f]$ is attached along the closed path $$\partial[e,f]=(e. \iota f)(\tau e. f) (e.\tau f)^{-1}(\iota e. f)^{-1}.$$ Sometimes we refer the 2-cell $[e,f]$ as a square 2-cell. The two-sided action of $F$ on $\Gamma(\mathcal P)$ extends to the 2-cells by $$w.[e,f].w'=[w.e,f.w'] \text{ where } w,w' \in F, e, f \text{ are positive edges}.$$ We have the chain complex $$\xymatrix{\mathbf{C}(\mathcal{D}): C_{2} \ar[r]^-{\partial_{2}} & C_{1} \ar[r]^{\partial_{1}} & C_{0} \ar[r] & 0},$$ where $C_{0}$, $C_{1}$, $C_{2}$ and $C_{3}$ are the free abelian groups generated by all 0-cells, positive edges, 2-cells, and 3-cells respectively. The boundary maps are given by $$\partial_{1} e=\iota e- \tau e \text{ where } e \text{ is a positive edge},$$ $$\partial_{2}[e,f]=e.(\iota f - \tau f)-(\iota e- \tau e).f \text{ where } e,f \text{ are positive edges}.$$ In the paper \cite{OK2002} of Otto and Kobayashi, a monoid presentation is denoted by $(\Sigma, R)$ and the rewriting rules of $R$ are denoted by $r \rightarrow \ell$. The edges of the graph of derivations in \cite{OK2002} are denoted by $(x,u,v,y)$ where $x,y \in \Sigma^{\ast}$ and $(u \rightarrow v) \in E=R \cup R^{-1}$. In \cite{OK2002} it is considered the set of closed paths $$D=\{e_{1}xu_{2} \circ v_{1}xe_{2} \circ e_{1}^{-1}xv_{2} \circ u_{1}xe_{2}^{-1}| e_{1}=(u_{1},v_{1})\in R, e_{2}=(u_{2},v_{2})\in R, x \in \Sigma^{\ast}\}.$$ It is important to observe that each circuit of $D$ is in fact the boundary of a square 2-cell as the following shows $$e_{1}xu_{2} \circ v_{1}xe_{2} \circ e_{1}^{-1}xv_{2} \circ u_{1}xe_{2}^{-1}=\partial[(1,r_{1},1,1),(x,r_{2},1,1)],$$ where $r_{1}=e_{1}$ and $r_{2}=e_{2}$. The free $\mathbb{Z} \Sigma^{\ast}$ bi-module $\mathbb{Z} \Sigma^{\ast} \cdot R \cdot \mathbb{Z} \Sigma^{\ast}$ considered in \cite{OK2002} is the abelian group $C_{1}$ of our complex $\mathbf{C}(\mathcal{D})$, and the maps $\partial_{1}$ are the same in both papers. On the other hand, the free $\mathbb{Z} \Sigma^{\ast}$ bi-module $\mathbb{Z} \Sigma^{\ast} \cdot D \cdot \mathbb{Z} \Sigma^{\ast}$ of \cite{OK2002} is the abelian group $C_{2}$ of $\mathbf{C}(\mathcal{D})$, and the maps $\partial_{2}$ are the same in both papers. Finally, the exact sequence of theorem 6.6 of \cite{OK2002} in our notations will be $$\xymatrix{C_{2} \ar[r]^-{\partial_{2}} & J.R.\mathbb{Z} \Sigma^{\ast}+\mathbb{Z} \Sigma^{\ast}. R. J \ar[r]^-{\partial_{1}} & J^{2} \ar[r] & 0 .}$$ The interpretation of the exactness in the middle of the above sequence is that $Ker \partial_{1} \cap (J.R.\mathbb{Z} \Sigma^{\ast}+\mathbb{Z} \Sigma^{\ast}. R. J)=Im \partial_{2}$. \subsection{The extended Squier complex} \label{exsqc} Assume now that $\mathbf{p}$ is a set closed paths in $\mathcal{D}(\mathcal P)$. In \cite{SES} the complex $\mathcal{D}(\mathcal P)$ has been extended to a 3-complex $(\mathcal{D},\mathbf{p})$ in the following way. We add to $\mathcal{D}(\mathcal P)$ additional 2-cells $[u,p,v]$ attached along the closed path $$\partial[u,p,v]=u.p.v \text{ where } u, v \in F, \text{ and } p \in \mathbf{p}.$$ The construction is then completed by adding 3-cells as follows. 
For each positive edge $f$ and each 2-cell $\sigma$ with $\partial \sigma= e_{1}^{\varepsilon_{1}}\dots e_{n}^{\varepsilon_{n}}$, 3-cells $[f,\sigma]$ and $[\sigma, f]$ are attached to the 2-skeleton by mapping their boundaries to respectively: \begin{itemize} \item [(1)] the 2-cells $\iota f. \sigma$, $\tau f. \sigma$ together with 2-cells $[f,e_{i}]$ for $1 \leq i \leq n$, \item[(2)] the 2-cells $\sigma. \iota f$, $\sigma. \tau f$ together with 2-cells $[e_{i},f]$ for $1 \leq i \leq n$. \end{itemize} The 2-sided action of $F$ on the 2-skeleton extends naturally to the 3-cells. For $[f,\sigma]$, $[\sigma, f]$ and $u, v \in F$, $$u. [f, \sigma].v= [u.f, \sigma. v] \text{ and } u. [\sigma, f].v=[u.\sigma, f.v].$$ The complex $\mathbf{C}(\mathcal{D})$ now extends to $$\xymatrix{\mathbf{C}(\mathcal{D}, \mathbf{p}): 0 \ar[r] & C_{3}^{\mathbf{p}} \ar[r]^-{\tilde{\partial}_{3}} & C_{2}^{\mathbf{p}} \oplus C_{2} \ar[r]^-{\tilde{\partial}_{2}} & C_{1} \ar[r]^{\partial_{1}} & C_{0} \ar[r] & 0}$$ where $C_{3}^{\mathbf{p}}$ is the free abelian group generated by the set of all 3-cells, and $C_{2}^{\mathbf{p}}$ is the free abelian group generated by the set of all newly added 2-cells $\sigma=[u,p,v]$. The boundary map $\tilde{\partial}_{2}$ restricted to $C_{2}$ is $\partial_{2}$, and for every $[u, p,v]$ where $p \in \mathbf{p}$ with $\partial p= f_{1}^{\delta_{1}}\dots f_{n}^{\delta_{n}}$, it is defined $$\tilde{\partial}_{2}[u,p,v]=\sum_{i=1}^{n}\delta_{i}u.f_{i}.v.$$ Finally, the definition of $\tilde{\partial}_{3}$ is done in the following way. For every positive edge $f$ and every 2-cell $\sigma$ with $\tilde{\partial}_{2}\sigma= \sum_{i=1}^{n}\varepsilon_{i} e_{i}$ we have \begin{equation} \label{d3a} \tilde{\partial}_{3}[f,\sigma]= (\iota f - \tau f).\sigma+\sum_{i=1}^{n}\varepsilon_{i}[f,e_{i}], \end{equation} and \begin{equation} \label{d3b} \tilde{\partial}_{3}[\sigma, f]= \sigma.(\iota f- \tau f)- \sum_{i=1}^{n}\varepsilon_{i}[e_{i},f]. \end{equation} The definition of the 2-cells $[u,p,v]$ where $u,v \in F, p \in\mathbf{p}$ suggests that $C_{2}^{\mathbf{p}}$ can be regarded as a free $(\mathbb{Z}F, \mathbb{Z}F)$-bimodule with basis $$\hat{\mathbf{p}}=\{[1,p,1]| p \in\mathbf{p}\}.$$ This enables us to define a $(\mathbb{Z}F, \mathbb{Z}F)$-homomorphism $$\varphi: C_{2} \oplus C_{2}^{\mathbf{p}} \rightarrow \mathbb{Z}S\mathbf{p} \mathbb{Z}S$$ by mapping $C_{2}$ to 0, and every 2-cell $[u,p,v]$ to $\bar{u}.p. \bar{v}$. The kernel of $\varphi$ is denoted by $K^{\mathbf{p}}$. It is shown in \cite{SES} that $$K^{\mathbf{p}}=C_{2} + J.\hat{\mathbf{p}}. \mathbb{Z}F+ \mathbb{Z}F. \hat{\mathbf{p}}. J.$$ Also it is shown that $B_{2}(\mathcal{D},\mathbf{p}) \subseteq K^{\mathbf{p}}$ and that the restriction of $\tilde{\partial}_{2}$ on $K^{\mathbf{p}}$ sends $K^{\mathbf{p}}$ onto $B_{1}(\mathcal{D})$, therefore we have the complex \begin{equation} \label{cxkp} \xymatrix{0 \ar[r] & B_{2}(\mathcal{D},\mathbf{p}) \ar[r]^-{incl.} & K^{\mathbf{p}} \ar[r]^-{\tilde{\partial}_{2}} & B_{1}(\mathcal{D}) \ar[r] & 0} \end{equation} It is proved in Proposition 14 of \cite{SES} that when $\mathbf{p}$ is a homology trivializer, then the sequence (\ref{cxkp}) is exact. We will give a new proof in section \ref{shp} for the exactness of (\ref{cxkp}). Since the proof uses the so called Knuth-Bendix completion procedure, we will explain this procedure in some details in section \ref{kb}. Before doing that we will introduce in the next section some useful orders in the skeleta of $\mathcal{D}(\mathcal P)$. 
\subsection{Ordering the Squier complex} \label{osqc} As before $\mathcal P=(\mathbf{x}, \mathbf{r})$ is a rewriting system and $\mathcal{D}(\mathcal P)$ its Squier complex. Assume in addition that for every $(r_{+1},r_{-1}) \in \mathbf{r}$, $r_{+1} \neq r_{-1}$. Let $\vartriangleright$ be a well ordering on $\mathbf{x}$. The corresponding length-lexicographical ordering on $F$ is defined as follows. For $u,v \in F$, we write $u >_{llex} v$ if and only if $|u|>|v|$, or $|u|=|v|$, $u=au'$, $v=bv'$ where $a,b \in \mathbf{x}$, $u',v' \in F$, and one of the following holds: \begin{itemize} \item [(i)] $a \vartriangleright b$, \item[(ii)] $a=b$, $u' >_{llex} v'$. \end{itemize} It turns out that $>_{llex}$ is a well ordering on $F$ (see \cite{Book+Otto}). We can always assume that $>_{llex}$ is compatible with $\mathbf{r}$ in the sense that $r_{+1} >_{llex} r_{-1}$, for if there are rules $(r_{+1},r_{-1})$ satisfying the opposite, we can exchange $r_{+1}$ with $r_{-1}$. Well orderings in $F$ that are compatible with $\mathbf{r}$ are usually called reduction well ordering and are the basis to start the Knuth Bendix completion procedure. So far we have defined a reduction well order on the 0-skeleton of $\mathcal{D}(\mathcal P)$ which will be denoted by $\prec_{0}$. This order induces a noetherian (well founded) partial order in the 1-skeleton of $\mathcal{D}(\mathcal P)$ in the following way. For $e=(u,r,+1,v)$ and $f=(u',r',+1,v')$ positive edges in by $\mathcal{D}(\mathcal P)$, we define $e \prec_{1} f$ if and only if $\iota e = \iota f$, and one of the following occurs: \begin{itemize} \item[(i)] $v'$ is a proper suffix of $v$, or \item[(ii)] $v=v'$ and $|r_{+1}|< |r'_{+1}|$, or \item[(iii)] $v=v'$, $r_{+1} = r'_{+1}$ and $r_{-1} \prec_{0} r'_{-1}$. \end{itemize} It turns out that $\prec_{1}$ is a partial order and that it is well founded. Further, assume that $\mathcal P=(\mathbf{x}, \mathbf{r})$ is confluent, so that all the critical pairs of positive edges resolve. In that case, we attach to $\mathcal{D}(\mathcal P)$ 2-cells $\mathbf{p}$ by choosing resolutions for every critical pair of positive edges $(e,f)$ in the following way. If $p_{e}$, $p_{f}$ are positive paths from $\tau e$ and $\tau f$ respectively to a common vertex, then the boundary of the 2-cell $\sigma$ corresponding to $(e,f)$ is $$\partial \sigma= e p_{e} p_{f}^{-1}f^{-1}.$$ Also we attach 2-cells $u.\sigma.v$ for every $u,v \in F$ along the loop $u. \partial \sigma. v$. As it is explained in the section \ref{exsqc}, this new 2-complex extends to a 3-complex denoted there by $(\mathcal{D}, \mathbf{p})$. It is important to mention that every 2-cell of $(\mathcal{D}, \mathbf{p})$, including the square 2-cells, is uniquely determined by the pair $(e,f)$ of edges meeting its maximal vertex $w=\iota_{e}=\iota_{f}$ (according to $\prec_{0}$). For this reason, we write the 2-cell as $[w;(e,f)]$. Now we extend the orders $\prec_{0}$ and $\prec_{1}$ to the 2-skeleton of the 3-complex $(\mathcal{D}, \mathbf{p})$ as follows. For every two 2-cells $[w;(e,f)]$ and $[w';(e',f')]$ we say that $[w;(e,f)] \prec_{2} [w';(e',f')]$ if and only if: \begin{itemize} \item [(i)] $w \prec_{0} w'$; or \item[(ii)] $w=w'$ and $f\prec_{1} f'$; or \item[(iii)] $f=f'$ and $e\prec_{1} e'$. \end{itemize} This is a well founded total order in the set of all 2-cells of $(\mathcal{D}, \mathbf{p})$. 
Under the current assumptions, similarly to 2-cells, every 3-cell is uniquely determined by three positive edges $e_{1} \prec_{1} e_{2} \prec_{1} e_{3}$ with initial the maximal vertex $w$ of the 3-cell, where either $e_{1}$ is disjoint from $e_{2}$ and $e_{3}$, or $e_{3}$ is disjoint from $e_{1}$ and $e_{2}$. For this reason we write the 3-cell by $[w;(e_{1},e_{2},e_{3})]$. By (\ref{d3a}) and (\ref{d3b}) we see that \begin{equation} \label{d3p} \tilde{\partial}_{3}[w;(e_{1},e_{2},e_{3})]=[w;(e_{2},e_{3})]-[w;(e_{1},e_{3})]+[w;(e_{1},e_{2})]+\varsigma \end{equation} where $\varsigma$ is a 2-chain made up of 2-cells all of which have maximal vertices less than $w$. Also note that the maximal 2-cell represented in $\tilde{\partial}_{3}[w;(e_{1},e_{2},e_{3})]$ is $[w;(e_{2},e_{3})]$. \subsection{The Knuth-Bendix completion procedure} \label{kb} The Knuth-Bendix procedure \cite{Book+Otto}, produces a complete system out of any given system and equivalent to it. Given a rewriting system $\mathcal P=(\mathbf{x}, \mathbf{r})$ and a reduction well order $\succ$ on $F$ that is compatible with $\mathbf{r}$ (there is always such one as explained in section \ref{osqc}), one can produce a complete rewriting system $\mathcal P^{\infty}$ that is equivalent to $\mathcal P$ in the following way. Put $\mathbf{r}_{0}=\mathbf{r}$. For each non-resolvable pair of edges $(e,f)$ in $\mathcal{D}(\mathcal P)$ we chose positive path $p_{e}$, $p_{f}$ from $\tau e$ and $\tau f$ respectively to distinct irreducibles. Let $\mathbf{r}_{1}$ be the set of rules obtained from $\mathbf{r}$ by adding for each such critical pair $(e,f)$ the rule $(\tau p_{e}, \tau p_{f})$ if $\tau p_{e} \succ \tau p_{f}$, otherwise adding the rule $\tau p_{f} \succ \tau p_{e}$. It is clear that $\mathcal P_{1}=(\mathbf{x}, \mathbf{r}_{1})$ is equivalent to $\mathcal P=(\mathbf{x}, \mathbf{r})$ and that $\mathbf{r} \subseteq \mathbf{r}_{1}$ where the inclusion is strict if $\mathcal P$ is not complete. Assume by induction that we have defined a sequence of equivalent rewriting systems $$\mathcal P=(\mathbf{x}, \mathbf{r}_{0}),...,\mathcal P_{n-1}=(\mathbf{x}, \mathbf{r}_{n-1}),\mathcal P_{n}=(\mathbf{x}, \mathbf{r}_{n}),$$ and consequently, an increasing sequence of complexes $$\mathcal{D}(\mathcal P) \subseteq \dots \subseteq \mathcal{D}(\mathcal P_{n-1}) \subseteq \mathcal{D}(\mathcal P_{n}),$$ where $\mathcal P_{n}=(\mathbf{x}, \mathbf{r}_{n})$ is obtained from $\mathcal P_{n-1}=(\mathbf{x}, \mathbf{r}_{n-1})$ by resolving all the non-resolvable critical pairs of $\mathcal{D}(\mathcal P_{n-1})$. Put $\mathbf{r}_{\infty}=\underset{n \geq 0}{\cup}\mathbf{r}_{n}$ and let $\mathcal P_{\infty}=(\mathbf{x}, \mathbf{r}_{\infty})$ be the resulting rewriting system. The corresponding complex $\mathcal{D}(\mathcal P_{\infty})$ will be latter denoted by $\mathcal{D}^{\infty}$. The rewriting system $\mathcal P_{\infty}=(\mathbf{x}, \mathbf{r}_{\infty})$ is obviously equivalent to $\mathcal P$ and it is complete since it is compatible with the order $\succ$ on $F$ and for every non-resolvable pair $(e,f)$ of edges found in some $\mathcal{D}(\mathcal P_{n})$, there is an edge $g$ in $\mathcal{D}(\mathcal P_{n+1})$ connecting the endpoints of the positive paths $p_{e}$ and $p_{f}$ of $\mathcal{D}(\mathcal P_{n})$. \subsection{A shorter proof for the exactness of (\ref{cxkp})} \label{shp} The proof that is provided below is valid in the special case when each 2-cell from $\mathbf{p}$ arises from the resolution of a critical pair. 
The proof goes through the following stages. The first stage is the same as that of \cite{SES} and for this reason is not presented here in full. In this stage it is proved that (\ref{cxkp}) is exact in the special case when the monoid presentation $\mathcal{M}=\langle \mathbf{x}, \mathbf{r}\rangle $ from which $\mathcal{D}$ is defined, is complete, and the set $\mathbf{p}$ of homology trivializers is obtained by choosing resolutions of critical pairs of $\mathbf{r}$. The proof is roughly as follows. Using (\ref{d3p}), it is shown that every 2-cycle $\xi \in K^{\mathbf{p}}$ is homologous to a 2-cycle $\bar{\xi} \in K^{\mathbf{p}}$ that is obtained from $\xi$ by replacing the maximal 2-cell $\sigma$ represented in $\xi$ by a 2-chain made up of lesser 2-cells than $\sigma$. Then we proceed by Noetherian induction. In the second stage, differently from the general case that is considered in \cite{SES}, we assume that we have a monoid presentation $\mathcal{M}=\langle \mathbf{x}, \mathbf{r}\rangle $ (not necessarily complete) and that $H_{1}(\mathcal{D})$ of the corresponding Squier complex $\mathcal{D}$ is trivialized by adding 2-cells $\mathbf{p}$ arising from the resolution of certain critical pairs. Also, the same as in \cite{SES}, we assume that $\mathbf{r}$ is compatible with a length-lexicographic order in the free monoid $F$ on $\mathbf{x}$. Using the Knuth-Bendix procedure, we obtain a new presentation $\mathcal{M}^{\infty}=\langle \mathbf{x}, \mathbf{r}^{\infty}\rangle $ with $\mathbf{r} \subseteq \mathbf{r}^{\infty}$ and where $\mathbf{r}^{\infty}$ is compatible with the order on $F$. The Squier complex $\mathcal{D}^{\infty}$ has trivializer $\mathbf{p}^{\infty}$ obtained by choosing resolution of all critical pairs of $\mathbf{r}^{\infty}$ and as a consequence $\mathbf{p} \subseteq \mathbf{p}^{\infty}$. From the special case of the first stage, we have the exactness of $$\xymatrix{0 \ar[r] & B_{2}(\mathcal{D}^{\infty},\mathbf{p}^{\infty}) \ar[r]^-{incl.} & K^{\mathbf{p}^{\infty}} \ar[r]^-{\tilde{\partial}^{\infty}_{2}} & B_{1}(\mathcal{D}^{\infty}) \ar[r] & 0,}$$ where $K^{\mathbf{p}^{\infty}}=C_{2}(\mathcal{D}^{\infty}) + J. \mathbf{p}^{\infty}. \mathbb{Z}F+ \mathbb{Z}F.\mathbf{p}^{\infty}. J$. We will use this and the fact that $\mathbf{p} \subseteq \mathbf{p}^{\infty}$ to prove in a shorter way the exactness of (\ref{cxkp}). We begin by pointing out that $(\mathcal{D}, \mathbf{p})$ is a subcomplex of $(\mathcal{D}^{\infty}, \mathbf{p}^{\infty})$, therefore for $i=1,2,3$, we have that $C_{i}(\mathcal{D}, \mathbf{p}) \leq C_{i}(\mathcal{D}^{\infty}, \mathbf{p}^{\infty})$. We will define for $i=1,2,3$, retractions $\hat{\rho_{i}}: C_{i}(\mathcal{D}^{\infty}, \mathbf{p}^{\infty}) \rightarrow C_{i}(\mathcal{D}, \mathbf{p})$. First, for every positive edge $e$ from $(\mathcal{D}^{\infty}, \mathbf{p}^{\infty})$ not belonging to $(\mathcal{D}, \mathbf{p})$, we chose a path $\rho(e)=e_{1}^{\varepsilon_{1}} \dots e_{n}^{\varepsilon_{n}}$ in $(\mathcal{D}, \mathbf{p})$ connecting $\iota e$ with $\tau e$ where every $\varepsilon_{i}=\pm 1$. Relative to this choice we define $$\hat{\rho_{1}}: C_{1}(\mathcal{D}^{\infty}, \mathbf{p}^{\infty}) \rightarrow C_{1}(\mathcal{D}, \mathbf{p})$$ by $$e \mapsto \sum_{i}\varepsilon_{i} e_{i},$$ whenever $e$ is from $(\mathcal{D}^{\infty}, \mathbf{p}^{\infty})$ not belonging to $(\mathcal{D}, \mathbf{p})$, and for positive edges $e$ from $(\mathcal{D}, \mathbf{p})$ we define $$\hat{\rho_{1}}(e)=e.$$ Thus $\hat{\rho_{1}}$ is a retraction. 
Before we define the second retraction $\hat{\rho_{2}}:C_{2}(\mathcal{D}^{\infty}, \mathbf{p}^{\infty}) \rightarrow C_{2}(\mathcal{D}, \mathbf{p})$, we prove the following. \begin{lemma} \label{rho} For every path $\rho=f_{1}^{\beta_{1}} \dots f_{n}^{\beta_{n}}$ in $(\mathcal{D}, \mathbf{p})$ where every $\beta_{j}=\pm 1$, we have that $$\partial_{1}(\beta_{1}f_{1} +\dots + \beta_{n}f_{n})= \iota (\rho)- \tau (\rho).$$ \end{lemma} \begin{proof} The proof will be done by induction on $n$. For $n=1$, $$\partial_{1}(\beta_{1}f_{1})=\beta_{1}(\iota f_{1}- \tau f_{1}),$$ therefore, depending on the sign of $\beta_{1}$, we have that $\partial_{1}(\beta_{1}f_{1})=\iota(f_{1}^{\beta_{1}})-\tau(f_{1}^{\beta_{1}})$. For the inductive step, we write $$\rho=f_{1}^{\beta_{1}} \dots f_{n}^{\beta_{n}}\cdot f_{n+1}^{\beta_{n+1}}=\rho_{1} \cdot f_{n+1}^{\beta_{n+1}}.$$ From the induction hypothesis applied to $\rho_{1}$ we have $$\partial_{1}(\beta_{1}f_{1} +\dots + \beta_{n}f_{n})= \iota (\rho_{1})- \tau (\rho_{1})=\iota(\rho)- \iota(f_{n+1}^{\beta_{n+1}}),$$ and then \begin{align*} \partial_{1}(\beta_{1}f_{1} +\dots + \beta_{n}f_{n} +\beta_{n+1}f_{n+1})&= \partial_{1}(\beta_{1}f_{1} +\dots + \beta_{n}f_{n})+ \partial_{1}(\beta_{n+1}f_{n+1})\\ &= \iota(\rho)- \iota(f_{n+1}^{\beta_{n+1}})+ (\iota(f_{n+1}^{\beta_{n+1}})-\tau(f_{n+1}^{\beta_{n+1}}))\\ &=\iota(\rho)-\tau(f_{n+1}^{\beta_{n+1}})\\ &=\iota(\rho)-\tau(\rho). \end{align*} \end{proof} Now we define $\hat{\rho_{2}}$ in the following way. If $z= \sum_{j} \delta_{j} f_{j} \in Z_{1}(\mathcal{D}^{\infty}, \mathbf{p}^{\infty})$ is a 1-cycle where at least one of the $f_{j}$ is from $(\mathcal{D}^{\infty}, \mathbf{p}^{\infty})$ not belonging to $(\mathcal{D}, \mathbf{p})$, we have the 1-chain $$\hat{\rho_{1}}\left(\sum_{j} \delta_{j} f_{j}\right)$$ in $C_{1}(\mathcal{D}, \mathbf{p})$. Let us show that $\hat{\rho_{1}}\left(\sum_{j} \delta_{j} f_{j}\right)$ is in fact a 1-cycle in $Z_{1}(\mathcal{D}, \mathbf{p})$. Indeed, \begin{align*} \partial_{1} \hat{\rho_{1}}\left(\sum_{j} \delta_{j} f_{j}\right)&= \sum_{j} \delta_{j} \partial_{1}\hat{\rho_{1}}(f_{j})\\ &=\sum_{j}\delta_{j}(\iota f_{j}-\tau f_{j}) && \text{(by lemma \ref{rho})}\\ &=\partial^{\infty}_{1}\left(\sum_{j} \delta_{j} f_{j}\right)\\ &=\partial^{\infty}_{1}(z)\\ &=0. \end{align*} Since $\mathbf{p}$ is a homology trivializer, for the 1-cycle $\hat{\rho_{1}}\left(\sum_{j} \delta_{j} f_{j}\right)$ there is a 2-chain $\varsigma_{z} \in C_{2}(\mathcal{D}, \mathbf{p})$ such that \begin{equation} \label{lnk1} \tilde{\partial}_{2}(\varsigma_{z} )= \hat{\rho_{1}}\left(\sum_{j} \delta_{j} f_{j}\right). \end{equation} We can apply the above for every 2-cell $\sigma \in (\mathcal{D}^{\infty}, \mathbf{p}^{\infty})$ not in $(\mathcal{D}, \mathbf{p})$ by taking $z=\tilde{\partial}^{\infty}_{2}(\sigma)$ and writing $\varsigma_{\sigma}$ instead of $\varsigma_{z}$. With these notations (\ref{lnk1}) takes the form \begin{equation} \label{lnk} \tilde{\partial}_{2}(\varsigma_{\sigma} )=\hat{\rho_{1}}(\tilde{\partial}^{\infty}_{2}(\sigma)).
\end{equation} We define $$\hat{\rho_{2}}: C_{2}(\mathcal{D}^{\infty}, \mathbf{p}^{\infty}) \rightarrow C_{2}(\mathcal{D}, \mathbf{p})$$ by $$\hat{\rho_{2}}(\sigma)=\sigma$$ for every 2-cell $\sigma$ in $(\mathcal{D}, \mathbf{p})$, and for every other 2-cell $\sigma$ we define $$\hat{\rho_{2}}(\sigma)=\varsigma_{\sigma}.$$ We will explain how this works for 2-cells $[e,f]$ with $\hat{\rho_{1}}(e)=\sum_{i} \alpha_{i} e_{i}$ and $\hat{\rho_{1}}(f)=\sum_{j} \beta_{j} f_{j}$ where at least one of the sums has more than one term (that is, the corresponding edge is not in $(\mathcal{D}, \mathbf{p})$). In this case we have \begin{align*} \hat{\rho}_{1}\tilde{\partial}^{\infty}_{2}([e,f])&= \hat{\rho}_{1}(e.(\iota f- \tau f)-(\iota e-\tau e).f)\\ &=\sum_{i}\alpha_{i}e_{i}\cdot \sum_{j}\beta_{j}(\iota f_{j}-\tau f_{j})-\sum_{i}\alpha_{i}(\iota e_{i}-\tau e_{i})\cdot \sum_{j} \beta_{j} f_{j} && \text{(lemma \ref{rho})}\\ &= \sum_{i,j} \alpha_{i} \beta_{j} \tilde{\partial}_{2}[e_{i},f_{j}]\\ &= \tilde{\partial}_{2}\left(\sum_{i,j} \alpha_{i} \beta_{j} [e_{i},f_{j}]\right), \end{align*} therefore the 2-chain $\varsigma_{[e,f]}$ in this case can be chosen to be $\sum_{i,j} \alpha_{i} \beta_{j} [e_{i},f_{j}]$. Again $\hat{\rho_{2}}$ as defined above is a retraction. Finally, we define $$\hat{\rho_{3}}: C_{3}(\mathcal{D}^{\infty}, \mathbf{p}^{\infty}) \rightarrow C_{3}(\mathcal{D}, \mathbf{p})$$ as follows. If $e$ is any edge with $\hat{\rho_{1}}(e)=\sum_{i}\varepsilon_{i}e_{i}$ and $\sigma$ a 2-cell in $(\mathcal{D}^{\infty}, \mathbf{p}^{\infty})$ such that $\hat{\rho_{2}}(\sigma)=\sum_{j} \mu_{j} \sigma_{j}$, then we define $$\hat{\rho_{3}}([e,\sigma])=\sum_{i,j} \varepsilon_{i} \mu_{j} [e_{i}, \sigma_{j}] \text{ and } \hat{\rho_{3}}([\sigma,e])=\sum_{i,j} \varepsilon_{i} \mu_{j} [\sigma_{j},e_{i}].$$ It is obvious that $\hat{\rho_{3}}$ is a retraction. \begin{lemma} \label{chm} The following hold true: \begin{itemize} \item [(i)] $\hat{\rho_{1}}\partial^{\infty}_{2}=\tilde{\partial}_{2}\hat{\rho_{2}}$. \item[(ii)]$\hat{\rho_{2}}\partial^{\infty}_{3}=\tilde{\partial}_{3}\hat{\rho_{3}}$. \end{itemize} \end{lemma} \begin{proof} (i) If $\sigma \in C_{2}(\mathcal{D}, \mathbf{p})$, then \begin{align*} \hat{\rho_{1}}\tilde{\partial}^{\infty}_{2}(\sigma)&=\hat{\rho_{1}}\tilde{\partial}_{2}(\sigma)=\tilde{\partial}_{2}(\sigma)=\tilde{\partial}_{2}\hat{\rho_{2}}(\sigma). \end{align*} Assume now that $\sigma$ is a 2-cell not in $C_{2}(\mathcal{D}, \mathbf{p})$; then from the definition of $\hat{\rho_{2}}$ and from (\ref{lnk}) we have \begin{align*} \tilde{\partial}_{2}\hat{\rho_{2}}(\sigma)=\tilde{\partial}_{2}(\varsigma_{\sigma})=\hat{\rho_{1}} (\tilde{\partial}^{\infty}_{2}(\sigma)). \end{align*} (ii) Let $[e,\sigma]$ be a 3-cell where $\sigma$ is any 2-cell in $(\mathcal{D}^{\infty}, \mathbf{p}^{\infty})$ with $\hat{\rho_{2}}(\sigma)=\sum_{j} \mu_{j} \sigma_{j}$, and $e$ is an edge with $\hat{\rho_{1}}(e)=\sum_{i}\varepsilon_{i}e_{i}$. Assume also that $\tilde{\partial}^{\infty}_{2}(\sigma)= \sum_{k} \delta_{k} f_{k}$ where for each $k$, $\hat{\rho_{1}}(f_{k})= \sum_{s}\beta_{ks}g_{ks}$. It follows from (i) that \begin{equation} \label{brz} \tilde{\partial}_{2}\left(\sum_{j} \mu_{j} \sigma_{j}\right)=\tilde{\partial}_{2}(\hat{\rho_{2}}(\sigma)) =\hat{\rho_{1}}\tilde{\partial}^{\infty}_{2}(\sigma)=\sum_{k}\delta_{k}\sum_{s}\beta_{ks} g_{ks}.
\end{equation} If for each $j$ we let $\mathcal{T}_{\sigma_{j}}$ be the set of terms of $\tilde{\partial}_{2}(\sigma_{j})$, we see from (\ref{brz}) that for each $k$ and each $s$, there is some $j$ and some $\alpha_{j}x_{j} \in \mathcal{T}_{\sigma_{j}}$, such that $\mu_{j}\alpha_{j}x_{j}=\delta_{k}\beta_{ks}g_{ks}$. This implies that for each edge $e_{i}$, we have that $\delta_{k}\beta_{ks}[e_{i},g_{ks}]=\mu_{j} \alpha_{j}[e_{i},x_{j}]$. Further we see that \begin{align*} \hat{\rho_{2}}\tilde{\partial}^{\infty}_{3}([e,\sigma])&=\hat{\rho_{2}}\left((\iota e- \tau e).\sigma+ \sum_{k} \delta_{k}[e,f_{k}]\right)\\ &=\sum_{j}\mu_{j}(\iota e - \tau e).\sigma_{j}+\sum_{k} \delta_{k}\left(\sum_{i}\varepsilon_{i}\sum_{s} \beta_{ks}[e_{i},g_{ks}]\right)\\ &=\sum_{j}\mu_{j}\left(\sum_{i}\varepsilon_{i}(\iota e_{i} - \tau e_{i})\right).\sigma_{j}+\sum_{k} \delta_{k}\left(\sum_{i}\varepsilon_{i}\sum_{s} \beta_{ks}[e_{i},g_{ks}]\right) && \text{(lemma \ref{rho})}\\ &=\sum_{i}\varepsilon_{i}\left(\sum_{j}\mu_{j}(\iota e_{i} - \tau e_{i}).\sigma_{j}\right)+\sum_{i} \varepsilon_{i}\left(\sum_{k}\delta_{k}\sum_{s} \beta_{ks}[e_{i},g_{ks}]\right)\\ &=\sum_{i}\varepsilon_{i} \left(\sum_{j}\mu_{j}(\iota e_{i} - \tau e_{i}).\sigma_{j}+ \sum_{k}\delta_{k}\sum_{s} \beta_{ks}[e_{i},g_{ks}]\right)\\ &=\sum_{i}\varepsilon_{i} \left(\sum_{j}\mu_{j}(\iota e_{i} - \tau e_{i}).\sigma_{j}+ \sum_{j} \mu_{j}\sum_{\alpha_{j} x_{j} \in \mathcal{T}_{\sigma_{j}}}\alpha_{j}[e_{i},x_{j}]\right)\\ &=\sum_{i,j}\varepsilon_{i} \mu_{j}\left((\iota e_{i} - \tau e_{i}).\sigma_{j}+\sum_{\alpha_{j} x_{j} \in \mathcal{T}_{\sigma_{j}}}\alpha_{j}[e_{i},x_{j}]\right)=\tilde{\partial}_{3}\left(\sum_{i,j}\varepsilon_{i} \mu_{j}[e_{i}, \sigma_{j}]\right) && \text{ (by (\ref{d3a}))}\\ &=\tilde{\partial}_{3}\hat{\rho_{3}}([e,\sigma]). \end{align*} The proof for the 3-cell $[\sigma, e]$ is similar to the above and is omitted here. \end{proof} \begin{proposition} \label{p14} (Proposition 14, \cite{SES}) If $\mathbf{p}$ is a homology trivializer obtained by choosing resolutions of certain critical pairs, then the sequence (\ref{cxkp}) is exact. \end{proposition} \begin{proof} From the special case established in the first stage we have the short exact sequence \begin{equation} \label{cxkpinft} \xymatrix{0 \ar[r] & B_{2}(\mathcal{D}^{\infty},\mathbf{p}^{\infty}) \ar[r]^-{incl.} & K^{\mathbf{p}^{\infty}} \ar[r]^-{\tilde{\partial}^{\infty}_{2}} & B_{1}(\mathcal{D}^{\infty}) \ar[r] & 0} \end{equation} Let $\xi \in Ker \tilde{\partial}_{2}$. Since $K^{\mathbf{p}} \subseteq K^{\mathbf{p}^{\infty}}$ and the restriction of $\tilde{\partial}^{\infty}_{2}$ to $K^{\mathbf{p}}$ is $\tilde{\partial}_{2}$, we have $\xi \in Ker \tilde{\partial}^{\infty}_{2}$, and the exactness of (\ref{cxkpinft}) implies the existence of a 3-chain $w \in C_{3}(\mathcal{D}^{\infty},\mathbf{p}^{\infty})$ such that $\tilde{\partial}^{\infty}_{3}(w)=\xi$. It follows from lemma \ref{chm} that $$\xi= \hat{\rho_{2}}(\xi)=\hat{\rho_{2}}(\tilde{\partial}^{\infty}_{3}(w))=\tilde{\partial}_{3}\hat{\rho_{3}}(w),$$ which shows that $\xi \in B_{2}(\mathcal{D}, \mathbf{p})$ and hence the exactness of (\ref{cxkp}). \end{proof} \subsection{The Pride complex $\mathcal{D}(\mathcal{P})^{\ast}$ associated with a group presentation $\mathcal P$} \label{pc} In this section we will explain several results of Pride in \cite{LDHM2} where it is proved that the homotopical property FDT for groups is equivalent to the homological property $FP_{3}$.
In order to achieve this, associated to any group presentation $\hat{\mathcal{P}}=(\mathbf{x},\mathbf{r})$, Pride considers two crossed modules. The first one is the free crossed module $(\Sigma, \hat{F}, \partial)$ associated to $\hat{\mathcal{P}}=(\mathbf{x},\mathbf{r})$. To define the second, he first constructs a complex $\mathcal{D}(\mathcal{M})^{\ast}$ arising from the monoid presentation $$\mathcal{M}=\langle\mathbf{x}, \mathbf{x}^{-1}: R=1 (R \in \mathbf{r}), x^{\varepsilon}x^{-\varepsilon}=1(x \in \mathbf{x}, \varepsilon=\pm1) \rangle$$ of the same group given by $\hat{\mathcal{P}}=(\mathbf{x},\mathbf{r})$. In other words, the group is now realized as the quotient of the free monoid $F$ on $\mathbf{x} \cup \mathbf{x}^{-1}$ by the smallest congruence generated by the defining relations of $\mathcal{M}$. We will give below some necessary details on this complex. We first mention that $\mathcal{D}(\mathcal{M})^{\ast}$ is an extension of the usual Squier complex $\mathcal{D}(\mathcal{M})$ arising from $\mathcal{M}$. This complex is called in \cite{Gilbert} the Pride complex. We emphasize here that the definition of $\mathcal{D}(\mathcal{M})$ in \cite{LDHM2} requires the attachment of 2-cells $[e^{\varepsilon}, f^{\delta}]$ where $e,f$ are positive edges and $\varepsilon, \delta=\pm 1$. But as is observed in \cite{SES} (see Remark 8 there), since we are interested in the homotopy and homology of the complex, the attachment of 2-cells $[e^{\varepsilon}, f^{\delta}]$ is unnecessary in the presence of $[e,f]$ because the boundary of $[e^{\varepsilon}, f^{\delta}]$ is a cyclic permutation of $(\partial[e,f])^{\pm 1}$, hence it is null homotopic. So we assume in what follows that $\mathcal{D}(\mathcal{M})$ is the one described in section \ref{sqc}. To complete the construction of $\mathcal{D}(\mathcal{M})^{\ast}$ we need to add to $\mathcal{D}(\mathcal{M})$ certain extra 2-cells along the closed paths $$(1,x^{\varepsilon}x^{-\varepsilon}, 1, x^{\varepsilon}) \circ (x^{\varepsilon}, x^{-\varepsilon}x^{\varepsilon}, 1, 1)^{-1} ,$$ where $x \in \mathbf{x}$ and $\varepsilon=\pm1$; the set of these closed paths will be denoted by $\mathbf{t}$. The attachment of these 2-cells is done for every overlap of two trivial edges, as depicted below $$\xymatrix{x^{\varepsilon}x^{-\varepsilon}x^{\varepsilon} \ar@/^/[d]^{(x^{\varepsilon}, x^{-\varepsilon}x^{\varepsilon}, 1, 1)}\ar@/_/[d]_{(1,x^{\varepsilon}x^{-\varepsilon}, 1, x^{\varepsilon})} & \\x^{\varepsilon}}$$ The attached 2-cell has boundary made up of the following edges $$\mathbb{A}=(1,x^{\varepsilon}x^{-\varepsilon}, 1, x^{\varepsilon}) \text{ and } \mathbb{B}=(x^{\varepsilon}, x^{-\varepsilon}x^{\varepsilon}, 1, 1).$$ Together with such 2-cells, there are added to the complex all their ``translates'' $$\xymatrix{ux^{\varepsilon}x^{-\varepsilon}x^{\varepsilon}v \ar@/^/[d]^{(ux^{\varepsilon}, x^{-\varepsilon}x^{\varepsilon}, 1, v)}\ar@/_/[d]_{(u,x^{\varepsilon}x^{-\varepsilon}, 1, x^{\varepsilon}v)} & \\ux^{\varepsilon}v}$$ In our paper the Pride complex $\mathcal{D}(\mathcal{M})^{\ast}$ is denoted throughout by $(\mathcal{D}, \mathbf{t})$. If $P$ is a path in $(\mathcal{D}, \mathbf{t})$ with $\iota(P)=W$ and $\tau(P)=Z$, one defines in \cite{LDHM2} $T_{W}$ and $T_{Z}$ to be arbitrary trivial paths from $W^{\ast}$ to $W$ and from $Z^{\ast}$ to $Z$, where $W^{\ast}$ and $Z^{\ast}$ are the unique reduced words freely equivalent to $W$ and $Z$ respectively. Then the path $T_{W} P T_{Z}^{-1}$ is denoted by $P^{\ast}$.
The notation is not ambiguous since any two parallel trivial paths in $(\mathcal{D}, \mathbf{t})$ are homotopic. Pride has defined in \cite{LDHM2} an $\hat{F}$-crossed module $\Sigma^{\ast}$ out of $(\mathcal{D}, \mathbf{t})$ in the following way. The elements of $\Sigma^{\ast}$ are the homotopy classes $\langle P \rangle$ where $P$ is a path in the 1-skeleton of $(\mathcal{D}, \mathbf{t})$ such that $\tau(P)$ is the empty word and $\iota(P)$ is a freely reduced word from $\hat{F}$. He then defines a (non-commutative) operation $+$ on $\Sigma^{\ast}$ by $$\langle P_{1} \rangle + \langle P_{2} \rangle= \langle (P_{1}+P_{2})^{\ast} \rangle$$ and an action on $\Sigma^{\ast}$ of the free group $\hat{F}$ on $\mathbf{x} \cup \mathbf{x}^{-1}$ by $$^{[W]} \langle P \rangle= \langle (WPW^{-1})^{\ast} \rangle.$$ Also he defines $$\partial^{\ast}: \Sigma^{\ast} \rightarrow \hat{F}$$ by $$\partial^{\ast}(\langle P \rangle)= [\iota(P)].$$ It is proved in \cite{LDHM2} that the triple $(\Sigma^{\ast}, \hat{F}, \partial^{\ast})$ is a crossed module. Further, using the fact that $\Sigma$ is the free crossed module over $\mathbf{r}$, it is proved that $$\eta: \Sigma \rightarrow \Sigma^{\ast}$$ defined by $r \mapsto \langle (1, r,1,1)\rangle $ is an isomorphism of crossed modules. The inverse $\psi: \Sigma^{\ast} \rightarrow \Sigma$ of $\eta$ is the map defined in the following way. First, a map $\psi_{0}$ is defined from the set of edges of $(\mathcal{D}, \mathbf{t})$ to $\Sigma$ as follows. Every trivial edge is mapped to $0$, and every edge $(u, r, \varepsilon, v)$ is mapped to $(^{[u]}r)^{\varepsilon}$ where $[u]$ is the element of $\hat{F}$ represented by $u$. It is proved that this map extends to paths of $(\mathcal{D}, \mathbf{t})$ and it sends the boundaries of the defining 2-cells of $(\mathcal{D}, \mathbf{t})$ to 0. We thus have a morphism $\psi: \Sigma^{\ast} \rightarrow \Sigma$ given by $$\langle P\rangle \mapsto \psi_{0}(P)$$ which is proved to be the inverse of $\eta$. By restriction one obtains an isomorphism between $\text{Ker}\partial$ and $\text{Ker}\partial^{\ast}$. But $\text{Ker}\partial$ is itself isomorphic to $\pi_{2}(\hat{\mathcal{P}})$, the second homotopy module of the standard complex associated with $\hat{\mathcal{P}}$, and $\text{Ker}\partial^{\ast}$, on the other hand, is isomorphic to $\pi_{1}(\mathcal{D}, \mathbf{t}, 1)$, the first homotopy group of the connected component of $(\mathcal{D}, \mathbf{t})$ at 1. Collecting the above, we have the following isomorphisms \begin{equation} \label{ki} \xymatrix{\pi_{2}(\hat{\mathcal{P}}) =\text{Ker}\partial \ar@<2pt>[r]^-(0.40){\eta} & \text{Ker}\partial^{\ast} \ar@<2pt>[l]^-(0.60){\psi} {=\pi_{1}(\mathcal{D}, \mathbf{t}, 1)}.} \end{equation} The fundamental group $\pi_{1}(\mathcal{D}, \mathbf{t},1)$ is abelian, being isomorphic to $\pi_{2}(\hat{\mathcal{P}})$, and is therefore isomorphic to its abelianization $H_{1}(\mathcal{D}, \mathbf{t},1)$. The role of the isomorphism between the two groups will be played by the well-known Hurewicz homomorphism $h: \pi_{1}(\mathcal{D}, \mathbf{t}, 1) \rightarrow H_{1}(\mathcal{D}, \mathbf{t},1)$ which sends the homotopy class of a loop to the homology class of the corresponding 1-cycle. In our proofs in the following sections, we will identify the homotopy class of any loop $\xi$ with $h(\xi)$ without further comment. \subsection{A characterization of asphericity in terms of the Pride complex} Assume now we are given a presentation $\mathcal{P}=(\mathbf{x},\mathbf{r})$ of a group $G$.
The new presentation $\hat{\mathcal{P}}=(\mathbf{x},\mathbf{r} \cup \mathbf{r}^{-1})$, where $\mathbf{r}^{-1}=\{r^{-1}| r \in \mathbf{r}\}$, still presents $G$. The free crossed module $(\Sigma, \hat{F}, \partial)$ of \cite{LDHM2} arising from $\hat{\mathcal{P}}$ is in fact isomorphic to our crossed module $(\mathcal{G}(\Upsilon), \hat{F}, \tilde{\theta})$. Indeed, there is a morphism of crossed modules $\alpha: \Sigma \rightarrow \mathcal{G}(\Upsilon)$ induced by the map $r^{\varepsilon} \mapsto \mu \sigma (r^{\varepsilon})$, whose inverse is $\beta: \mathcal{G}(\Upsilon) \rightarrow \Sigma$ defined by $\mu \sigma ((^{u}r)^{\varepsilon}) \mapsto {^{u}}(r^{\varepsilon})$. So there is no loss of generality if we identify ${^{u}}(r^{\varepsilon}) \in \Sigma$ with $\mu \sigma ((^{u}r)^{\varepsilon}) \in \mathcal{G}(\Upsilon)$. The isomorphism $\Sigma \cong \mathcal{G}(\Upsilon)$ means in particular that $\text{Ker}\partial \cong \text{Ker}\tilde{\theta}=\tilde{\Pi}$. We have on the other hand the monoid presentation of $G$ $$\mathcal{M}=\langle\mathbf{x}, \mathbf{x}^{-1}: \mathbf{s} \rangle$$ where $$\mathbf{s}=\{(r^{\varepsilon},1): r \in \mathbf{r}, \varepsilon=\pm 1\} \cup \left\{ (x^{\varepsilon}x^{-\varepsilon},1): x \in \mathbf{x}, \varepsilon=\pm 1\right\}.$$ Related to $\mathcal{M}$ we have the Pride complex $(\mathcal{D}, \mathbf{t})$. Asphericity of $\mathcal P$ means, by virtue of theorem \ref{wdom} and of the isomorphisms in (\ref{ki}), that $H_{1}(\mathcal{D}, \mathbf{t},1)$ is trivialized as an abelian group by the homology classes of all 1-cycles corresponding to $\eta(\hat{\mathfrak{U}})$. This section is devoted to proving that the asphericity of $\mathcal P$ is equivalent to $H_{1}(\mathcal{D}, \mathbf{p})=0$ where $\mathbf{p}=\mathbf{q} \cup \mathbf{t}$ and $\mathbf{q}$ is the set of 1-cycles corresponding to $\eta(\mu \sigma (rr^{-1}))$ with $r \in \mathbf{r}$. For every two paths of positive length $A=e^{\varepsilon_{1}}_{1} \circ \dots \circ e^{\varepsilon_{n}}_{n}$ and $B=f^{\delta_{1}}_{1} \circ \dots \circ f^{\delta_{m}}_{m}$ in $(\mathcal{D}, \mathbf{t})$ we have two parallel paths: $$A. \iota B \circ \tau A. B \text{ and } \iota A. B \circ A. \tau B.$$ In what follows we use the notation $C \sim D$ to mean that two parallel paths $C$ and $D$ are homotopic to each other. \begin{lemma} \label{[A,B]} For every two paths $A=e^{\varepsilon_{1}}_{1} \circ \dots \circ e^{\varepsilon_{n}}_{n}$ and $B=f^{\delta_{1}}_{1} \circ \dots \circ f^{\delta_{m}}_{m}$ as above, $(A. \iota B \circ \tau A. B) \sim (\iota A. B \circ A. \tau B)$. \end{lemma} \begin{proof} The proof is done by induction on the maximum of $n$ and $m$. If $m=n=1$, then it follows that $$A. \iota B \circ \tau A. B= e^{\varepsilon_{1}}_{1}.\iota f^{\delta_{1}}_{1} \circ \tau e^{\varepsilon_{1}}_{1}. f^{\delta_{1}}_{1} \sim \iota e^{\varepsilon_{1}}_{1}. f^{\delta_{1}}_{1} \circ e^{\varepsilon_{1}}_{1}. \tau f^{\delta_{1}}_{1}= \iota A. B \circ A. \tau B.$$ Indeed, if $\varepsilon_{1}=\delta_{1}=1$, then this is an immediate consequence of the 2-cell $[e_{1},f_{1}]$. If $\varepsilon_{1}=\delta_{1}=-1$, then, since \begin{equation} \label{hsc} e_{1}.\iota f_{1} \circ \tau e_{1}. f_{1} \sim \iota e_{1}. f_{1} \circ e_{1}. \tau f_{1}, \end{equation} it follows by taking inverses that \begin{align*} \iota e_{1}^{\varepsilon_{1}}.f_{1}^{\delta_{1}} \circ e_{1}^{\varepsilon_{1}}. \tau f_{1}^{\delta_{1}}&= \tau e_{1}. f_{1}^{-1} \circ e_{1}^{-1}.\iota f_{1}\\ & \sim e_{1}^{-1}.\tau f_{1} \circ \iota e_{1}.
f_{1}^{-1}\\ &= e^{\varepsilon_{1}}_{1}.\iota f^{\delta_{1}}_{1} \circ \tau e^{\varepsilon_{1}}_{1}. f^{\delta_{1}}_{1}. \end{align*} In the case when $\varepsilon_{1}=1$ and $\delta_{1}=-1$, after composing on the left of (\ref{hsc}) by $\iota e_{1}.f_{1}^{-1}$ we obtain $$\iota e_{1}.f_{1}^{-1} \circ e_{1}.\iota f_{1} \circ \tau e_{1}. f_{1} \sim \iota e_{1}.f_{1}^{-1} \circ\iota e_{1}. f_{1} \circ e_{1}. \tau f_{1}= e_{1}. \tau f_{1},$$ and then after composing the above on the right by $\tau e_{1}. f_{1}^{-1}$ we get $$\iota e_{1}.f_{1}^{-1} \circ e_{1}.\iota f_{1} \sim e_{1}. \tau f_{1} \circ \tau e_{1}. f_{1}^{-1},$$ which is the same as $$\iota e_{1}^{\varepsilon_{1}}.f_{1}^{\delta_{1}} \circ e_{1}^{\varepsilon_{1}}.\tau f_{1}^{\delta_{1}} \sim e_{1}^{\varepsilon_{1}}. \iota f_{1}^{\delta_{1}} \circ \tau e_{1}^{\varepsilon_{1}}. f_{1}^{\delta_{1}}.$$ The proof for the case when $\varepsilon_{1}=-1$ and $\delta_{1}=1$ is symmetric to the above and is omitted. For the inductive step, let (for instance) $B$ be the path of maximal length $m>1$. For the path $B'=f_{1}^{\delta_{1}} \circ \dots \circ f_{m-1}^{\delta_{m-1}}$ we know by induction that $$A. \iota B' \circ \tau A. B' \sim \iota A. B' \circ A. \tau B'.$$ Again, by induction for $A$ and $f_{m}^{\delta_{m}}$ we have that $$A. \iota f_{m}^{\delta_{m}} \circ \tau A. f_{m}^{\delta_{m}} \sim \iota A. f_{m}^{\delta_{m}} \circ A. \tau f_{m}^{\delta_{m}}.$$ It follows that \begin{align*} A. \iota B \circ \tau A. B&= A. \iota B \circ \tau A. B' \circ \tau A. f_{m}^{\delta_{m}}\\ &= A. \iota B' \circ \tau A. B' \circ \tau A. f_{m}^{\delta_{m}}\\ & \sim \iota A. B' \circ A. \tau B' \circ \tau A. f_{m}^{\delta_{m}}\\ &= \iota A. B' \circ A.\iota f_{m}^{\delta_{m}} \circ \tau A. f_{m}^{\delta_{m}}\\ & \sim \iota A. B' \circ \iota A. f_{m}^{\delta_{m}} \circ A. \tau f_{m}^{\delta_{m}} \\ &= \iota A. B \circ A. \tau B. \end{align*} There is a similar proof when $A$ is of maximal length. \end{proof} For every $u \in \hat{F}$, regarded as an element of the free monoid $F$ on $\mathbf{x} \cup \mathbf{x}^{-1}$, and for every $r \in \mathbf{r}$, we see that \begin{align*} \eta(\mu \sigma (^{u}r))={^{u}}\eta(\mu \sigma(r))&={^{u}}\langle(1,r,1,1) \rangle\\ &=\langle (u,r,1,u^{-1})^{\ast}\rangle\\ &= \langle T_{uru^{-1}}\circ (u,r,1,u^{-1}) \circ T^{-1}_{uu^{-1}}\rangle . \end{align*} The path $T_{uru^{-1}}\circ (u,r,1,u^{-1}) \circ T^{-1}_{uu^{-1}}$ is a composition of $T_{uru^{-1}}$, which is a trivial path from the freely reduced word $(uru^{-1})^{\ast}$ to $uru^{-1}$, followed by the edge $(u,r,1,u^{-1})$ and then by the inverse of the trivial path $T_{uu^{-1}}$ from 1 to $uu^{-1}$. Similarly to the above we have that \begin{align*} \eta(\mu \sigma((^{u}r)^{-1}))={^{u}}\eta(\mu \sigma (r^{-1}))&={^{u}}\langle(1,r^{-1},1,1) \rangle\\ &=\langle (u,r^{-1},1,u^{-1})^{\ast}\rangle\\ &=\langle T_{ur^{-1}u^{-1}} \circ (u,r^{-1},1,u^{-1})\circ T^{-1}_{uu^{-1}}\rangle. \end{align*} Then we have \begin{align*} \eta(\mu \sigma(^{u}r({^{u}}r)^{-1}))&= \eta(\mu \sigma (^{u}r)) + \eta(\mu \sigma ((^{u}r)^{-1}))\\ &=\langle ((T_{uru^{-1}}\circ (u,r,1,u^{-1}) \circ T^{-1}_{uu^{-1}})+(T_{ur^{-1}u^{-1}} \circ (u,r^{-1},1,u^{-1})\circ T^{-1}_{uu^{-1}}))^{\ast} \rangle\\ &=\langle T_{(uru^{-1})^{\ast}(ur^{-1}u^{-1})^{\ast}} \circ T_{uru^{-1}}\cdot (ur^{-1}u^{-1})^{\ast} \circ (u,r,1,u^{-1})\cdot (ur^{-1}u^{-1})^{\ast} \\ &\circ T^{-1}_{uu^{-1}} \cdot (ur^{-1}u^{-1})^{\ast} \circ T_{ur^{-1}u^{-1}} \circ (u,r^{-1}, 1, u^{-1}) \circ T^{-1}_{uu^{-1}} \rangle.
\end{align*} Now we define two closed paths in $(\mathcal{D}, \mathbf{t})$. First we let \begin{align*} P(r,u)= T_{(uru^{-1})(ur^{-1}u^{-1})} & \circ (u, r, 1, u^{-1}ur^{-1}u^{-1}) \\ &\circ(T^{-1}_{uu^{-1}}\cdot ur^{-1}u^{-1} ) \circ (u, r^{-1},1, u^{-1}) \circ T^{-1}_{uu^{-1}}, \end{align*} and second $$Q(r,u)= T_{urr^{-1}u^{-1}} \circ (u, r,1, r^{-1}u^{-1}) \circ (u, r^{-1}, 1, u^{-1}) \circ T^{-1}_{uu^{-1}}.$$ \begin{proposition} \label{th} The presentation $\mathcal P$ is aspherical if and only if $\pi_{1}(\mathcal{D},\mathbf{t},1)$ is generated as an abelian group by the homotopy classes of loops $Q(r,u)$ with $r \in \mathbf{r}$ and $u \in F$. \end{proposition} \begin{proof} First note that the presentation $\mathcal P$ is aspherical if and only if the set of all $\eta(\mu \sigma(^{u}r({^{u}}r)^{-1}))$ generates $\pi_{1}(\mathcal{D},\mathbf{t},1)$. The claim follows directly if we prove that for each $r \in \mathbf{r}$ and $u \in F$, $\eta(\mu \sigma(^{u}r({^{u}}r)^{-1}))=\langle P(r,u) \rangle $ and that $P(r,u) \sim Q(r,u)$. Let us prove first that $\eta(\mu \sigma(^{u}r({^{u}}r)^{-1}))=\langle P(r,u) \rangle $. Consider the following paths in $(\mathcal{D}, \mathbf{t})$. First, we let \begin{align*} a&=(uru^{-1})^{\ast}\cdot T_{ur^{-1}u^{-1}},\\ b&=T_{uru^{-1}} \cdot (ur^{-1}u^{-1}),\\ d&=T_{uru^{-1}} \cdot (ur^{-1}u^{-1})^{\ast},\\ c&=(uru^{-1})\cdot T_{ur^{-1}u^{-1}}, \end{align*} and observe from lemma \ref{[A,B]} that \begin{equation} \label{abcd} a \circ b \sim d \circ c \end{equation} Second, we let \begin{align*} e&=(u,r,1,u^{-1}(ur^{-1}u^{-1})^{\ast}),\\ c&=(uru^{-1})\cdot T_{ur^{-1}u^{-1}},\\ g&=(u,r,1,u^{-1}(ur^{-1}u^{-1})),\\ f&=(uu^{-1})\cdot T_{ur^{-1}u^{-1}}, \end{align*} where again from lemma \ref{[A,B]} we have that $c \circ g \sim e \circ f$. This implies that \begin{equation} \label{c-1} c^{-1} \sim g \circ f^{-1} \circ e^{-1}. \end{equation} And finally, let \begin{align*} y&= T^{-1}_{uu^{-1}}\cdot (ur^{-1}u^{-1})^{\ast} \\ f&= (uu^{-1})\cdot T_{ur^{-1}u^{-1}}\\ x&=T^{-1}_{uu^{-1}}\cdot (ur^{-1}u^{-1})\\ z&=1 \cdot T_{ur^{-1}u^{-1}}, \end{align*} where as before $f \circ x \sim y \circ z$. This on the other hand implies that \begin{equation} \label{f-1} f^{-1} \sim x \circ z^{-1} \circ y^{-1}. \end{equation} Further we write $\ell= T_{(uru^{-1})^{\ast}(ur^{-1}u^{-1})^{\ast}}$, $\ell_{1}= T_{(uru^{-1})(ur^{-1}u^{-1})}$, $k=(u,r^{-1},1,u^{-1})$ and $h=T^{-1}_{uu^{-1}}$. With the above abbreviations we have \begin{align*} \ell &\sim \ell_{1} \circ b^{-1} \circ a^{-1}&& \text{($\parallel$ trivial parallel paths are $\sim$)}\\ & \sim \ell_{1} \circ c^{-1} \circ d^{-1} && \text{(from (\ref{abcd}))}\\ & \sim \ell_{1} \circ (g \circ f^{-1} \circ e^{-1}) \circ d^{-1} && \text{(from (\ref{c-1}))}\\ &\sim \ell_{1} \circ g \circ (x \circ z^{-1} \circ y^{-1}) \circ e^{-1} \circ d^{-1} && \text{(from (\ref{f-1})).} \end{align*} It follows that \begin{align*} \eta(\mu \sigma (^{u}r({^{u}}r)^{-1}))&=\langle \ell \circ (d \circ e \circ y \circ z \circ k \circ h) \rangle \\ &=\langle (\ell_{1} \circ g \circ (x \circ z^{-1} \circ y^{-1}) \circ e^{-1} \circ d^{-1}) \circ (d \circ e \circ y \circ z \circ k \circ h)\rangle \\ & = \langle \ell_{1} \circ g \circ x \circ k \circ h \rangle \\ &= \langle P(r,u) \rangle. \end{align*} Secondly, we prove that $P(r,u) \sim Q(r,u) $. 
Indeed, if we consider paths \begin{align*} A&=(u, r, 1, u^{-1}ur^{-1}u^{-1})\\ B&= (ur) \cdot T^{-1}_{u^{-1}u} \cdot (r^{-1}u^{-1})\\ C&= u \cdot T^{-1}_{u^{-1}u} \cdot (r^{-1}u^{-1})\\ D&= (u, r, 1, r^{-1}u^{-1}), \end{align*} which from lemma \ref{[A,B]} satisfy $A \circ C \sim B \circ D$, then we have \begin{align*} P(r,u) & = T_{(uru^{-1})(ur^{-1}u^{-1})} \circ (u, r, 1, u^{-1}ur^{-1}u^{-1}) \circ(T^{-1}_{uu^{-1}}\cdot ur^{-1}u^{-1} ) \circ (u, r^{-1},1, u^{-1}) \circ T^{-1}_{uu^{-1}}\\ &\sim T_{(uru^{-1})(ur^{-1}u^{-1})} \circ A \circ (u\cdot T^{-1}_{u^{-1}u}\cdot r^{-1}u^{-1}) \circ (u,r^{-1},1,u^{-1})\circ T^{-1}_{uu^{-1}}\\ &= T_{(uru^{-1})(ur^{-1}u^{-1})} \circ A \circ C \circ (u,r^{-1},1,u^{-1})\circ T^{-1}_{uu^{-1}}\\ &\sim T_{(uru^{-1})(ur^{-1}u^{-1})} \circ B \circ D \circ (u,r^{-1},1,u^{-1})\circ T^{-1}_{uu^{-1}}\\ &\sim T_{urr^{-1}u^{-1}} \circ B^{-1} \circ B \circ D \circ (u,r^{-1},1,u^{-1})\circ T^{-1}_{uu^{-1}}\\ &= T_{urr^{-1}u^{-1}} \circ D \circ (u,r^{-1},1,u^{-1})\circ T^{-1}_{uu^{-1}}\\ &= Q(r,u). \end{align*} This concludes the proof. \end{proof} Passing to homology we have the following \begin{proposition} \label{thc} The presentation $\mathcal P$ is aspherical if and only if $H_{1}(\mathcal{D},\mathbf{t},1)$ is generated as an abelian group by the homology classes of 1-cycles corresponding to loops $Q(r,u)$ with $r \in \mathbf{r}$ and $u \in F$. \end{proposition} \begin{definition} In $(\mathcal{D},\mathbf{t})$ we let $\mathbf{q}$ be the set of closed paths $Q(r,1)$ with $r \in \mathbf{r}$. \end{definition} If we attach to $(\mathcal{D},\mathbf{t})$ 2-cells $\sigma$ along the closed paths $u. Q(r,1). v$ with $u,v \in F$ and 3-cells $[e,\sigma]$ and $[\sigma, e]$ for every 2-cell $\sigma$ and each positive edge $e$, then we obtain a new 3-complex $(\mathcal{D},\mathbf{q} \cup \mathbf{t})$. The asphericity of $\mathcal P$ is encoded in the homology of $(\mathcal{D},\mathbf{q} \cup \mathbf{t})$ as the following shows. \begin{theorem} \label{asphh} The presentation $\mathcal P$ is aspherical if and only if $H_{1}(\mathcal{D},\mathbf{q} \cup \mathbf{t})=0$. \end{theorem} To prove the theorem we first note the following two lemmas. \begin{lemma} \label{Jm} For every $\varsigma \in Z_{1}(\mathcal{D}, \mathbf{t})$ and every $u,v \in F$ such that $\bar{u}=\bar{v}$, $\varsigma \cdot u +B_{1}(\mathcal{D}, \mathbf{t})= \varsigma \cdot v +B_{1}(\mathcal{D}, \mathbf{t})$. \end{lemma} \begin{proof} It is enough to prove that for every positive edge $f$, we have $\varsigma \cdot \iota f +B_{1}(\mathcal{D}, \mathbf{t})= \varsigma \cdot \tau f +B_{1}(\mathcal{D}, \mathbf{t})$. From Lemma 4.1 of \cite{LDHM1} it follows that $\varsigma\cdot (\iota f - \tau f) \in B_{1}(\mathcal{D})$. But $B_{1}(\mathcal{D}) \subseteq B_{1}(\mathcal{D}, \mathbf{t})$ and we are done. 
\end{proof} If $u=x_{1}^{\varepsilon_{1}}\dots x_{n}^{\varepsilon_{n}} \in F$ is any word of length $|u|=n \in \mathbb{N}$, then a trivial path from 1 to $uu^{-1}$ is the following $$T_{uu^{-1}}=(1,x_{1}^{\varepsilon_{1}}x_{1}^{-\varepsilon_{1}},1,1)^{-1} \circ \dots \circ (x_{1}^{\varepsilon_{1}}\dots x_{|u|-1}^{\varepsilon_{|u|-1}}, x_{|u|}^{\varepsilon_{|u|}}x_{|u|}^{-\varepsilon_{|u|}},1, x_{|u|-1}^{-\varepsilon_{|u|-1}}\dots x_{1}^{-\varepsilon_{1}})^{-1}.$$ We write for short \begin{align*} t_{u}^{(1)}&=(1,x_{1}^{\varepsilon_{1}}x_{1}^{-\varepsilon_{1}},1,1)\\ &\;\;\vdots\\ t_{u}^{(|u|)}&=(x_{1}^{\varepsilon_{1}}\dots x_{|u|-1}^{\varepsilon_{|u|-1}}, x_{|u|}^{\varepsilon_{|u|}}x_{|u|}^{-\varepsilon_{|u|}},1, x_{|u|-1}^{-\varepsilon_{|u|-1}}\dots x_{1}^{-\varepsilon_{1}}), \end{align*} and let $$\tau_{uu^{-1}}=t_{u}^{(1)}+\dots + t_{u}^{(|u|)}.$$ \begin{definition} For every $r \in \mathbf{r}$ and $u \in F^{\ast}$, we let $$q(r,u)=(u,r,1,r^{-1}u^{-1})+(u,r^{-1},1,u^{-1})+\tau_{uu^{-1}}-\tau_{urr^{-1}u^{-1}}$$ be the 1-cycle that corresponds to the closed path $Q(r,u)$. When $u=1$, we let $$q(r,1)=(1,r,1,r^{-1})+(1,r^{-1},1,1)-\tau_{rr^{-1}}$$ be the 1-cycle corresponding to $Q(r,1)$. \end{definition} \begin{lemma} \label{qr} For every $r \in \mathbf{r}$ and $u \in F$, $u. q(r,1).u^{-1}+B_{1}(\mathcal{D},\mathbf{t})=q(r,u)+B_{1}(\mathcal{D},\mathbf{t})$. \end{lemma} \begin{proof} First note that $T^{-1}_{uu^{-1}} \circ T_{urr^{-1}u^{-1}} \sim u. T_{rr^{-1}}.u^{-1}$ since any two trivial paths with the same end points are homotopic to each other. For the corresponding 1-chains we have that $$\tau_{uu^{-1}}-\tau_{urr^{-1}u^{-1}}+B_{1}(\mathcal{D},\mathbf{t})=-u.\tau_{rr^{-1}}.u^{-1}+B_{1}(\mathcal{D},\mathbf{t}).$$ It follows now that \begin{align*} u. q(r,1).u^{-1}+B_{1}(\mathcal{D},\mathbf{t})&= (u,r,1,r^{-1}u^{-1})+ (u,r^{-1},1,u^{-1})- u. \tau_{rr^{-1}}.u^{-1}+B_{1}(\mathcal{D},\mathbf{t})\\ &=(u,r,1,r^{-1}u^{-1})+ (u,r^{-1},1,u^{-1})+\tau_{uu^{-1}}-\tau_{urr^{-1}u^{-1}}+B_{1}(\mathcal{D},\mathbf{t})\\ &=q(r,u)+B_{1}(\mathcal{D},\mathbf{t}), \end{align*} proving the claim. \end{proof} \begin{proof} (of theorem \ref{asphh}) If $H_{1}(\mathcal{D},\mathbf{q} \cup \mathbf{t})=0$, then $H_{1}(\mathcal{D},\mathbf{q} \cup \mathbf{t},1)=0$ which means that the homology classes of the loops $u.Q(r,1).v$ trivialize $H_{1}(\mathcal{D},\mathbf{t},1)$. We claim that every 1-cycle corresponding to a loop $u.Q(r,1).v$ which sits inside $(\mathcal{D},\mathbf{t},1)$ is in fact homologous to the 1-cycle corresponding to the loop $Q(r,u)$. Indeed, since $u. Q(r,1). v$ is a loop in $(\mathcal{D},\mathbf{t},1)$, we have $\bar{v}=\bar{u}^{-1}$. It follows from lemma \ref{Jm} and lemma \ref{qr} that \begin{align*} u. q(r,1).v + B_{1}(\mathcal{D},\mathbf{t})&= u. q(r,1).u^{-1} + B_{1}(\mathcal{D},\mathbf{t})\\ &=q(r,u)+B_{1}(\mathcal{D},\mathbf{t}), \end{align*} which proves our claim. As a consequence of this we have that the homology classes of 1-cycles $q(r,u)$ trivialize $H_{1}(\mathcal{D},\mathbf{t},1)$, and then from proposition \ref{thc} we get the asphericity of $\mathcal P$. Conversely, if $\mathcal P$ is aspherical, then from proposition \ref{thc} and lemma \ref{qr} $H_{1}(\mathcal{D},\mathbf{t},1)$ is generated as an abelian group by the homology classes of 1-cycles $u. q(r,1).u^{-1}$.
But from \cite{LDHM2} the homology group $H_{1}(\mathcal{D}, \mathbf{t}, w)$ of the connected component $(\mathcal{D}, \mathbf{t}, w)$ is isomorphic to $H_{1}(\mathcal{D}, \mathbf{t}, 1)$, where the isomorphism $\phi_{w}:H_{1}(\mathcal{D}, \mathbf{t}, 1) \rightarrow H_{1}(\mathcal{D}, \mathbf{t}, w)$ maps each homology class of some 1-cycle $\varsigma$ to the homology class of $\varsigma \cdot w$. This shows that the set of the homology classes of 1-cycles $u.q(r,1).u^{-1}w$ trivializes $H_{1}(\mathcal{D}, \mathbf{t})$. We prove that this set equals the set of homology classes of 1-cycles $u.q(r,1).v$ where $u,v \in F$. Indeed, for every $u,v \in F$ and every $q(r,1)$, if we take $w=uv$, we get that $u.q(r,1).u^{-1}uv+B_{1}(\mathcal{D}, \mathbf{t})$ is a generator of $H_{1}(\mathcal{D}, \mathbf{t})$. But from lemma \ref{Jm}, $u.q(r,1).u^{-1}uv+B_{1}(\mathcal{D}, \mathbf{t})=u.q(r,1).v+B_{1}(\mathcal{D}, \mathbf{t})$, hence $u.q(r,1).v+B_{1}(\mathcal{D}, \mathbf{t})$ is a generator of $H_{1}(\mathcal{D}, \mathbf{t})$. For the converse, it is obvious that any generator $u.q(r,1).u^{-1}w + B_{1}(\mathcal{D}, \mathbf{t})$ is of the form $u.q(r,1).v + B_{1}(\mathcal{D}, \mathbf{t})$ with $v=u^{-1}w$. \end{proof} \begin{remark} \label{IR} The Squier complex $\mathcal{D}$ of the monoid presentation $\mathcal{M}=\langle\mathbf{x}, \mathbf{x}^{-1}: \mathbf{s} \rangle$ of $G$ has an important property. As theorem \ref{asphh} shows, the homology trivializers of $H_{1}(\mathcal{D})$ are classes of 1-cycles corresponding to loops from $\mathbf{q} \cup \mathbf{t}$, and each one of them arises from the resolution of a critical pair. Indeed, if $r \in \mathbf{r}$ has the reduced word form $r=x_{1}^{\varepsilon_{1}}\dots x_{n}^{\varepsilon_{n}}$ in $\hat{F}$, then considering $x_{1}^{\varepsilon_{1}}\dots x_{n}^{\varepsilon_{n}}$ as a word from $F$, we see that the loop $Q(r,1)$ is obtained by resolving the following overlapping pair of edges $$((1, r,1, r^{-1}), (x_{1}^{\varepsilon_{1}}\dots x_{n-1}^{\varepsilon_{n-1}}, x_{n}^{\varepsilon_{n}}x_{n}^{-\varepsilon_{n}}, 1, x_{n-1}^{-\varepsilon_{n-1}}\dots x_{1}^{-\varepsilon_{1}})).$$ On the other hand, if $t=(1,x^{\varepsilon}x^{-\varepsilon}, 1, x^{\varepsilon}) \circ (x^{\varepsilon}, x^{-\varepsilon}x^{\varepsilon}, 1, 1)^{-1} $ is a loop of $\mathbf{t}$, then it arises from the resolution of the overlapping pair $$((1,x^{\varepsilon}x^{-\varepsilon}, 1, x^{\varepsilon}), (x^{\varepsilon}, x^{-\varepsilon}x^{\varepsilon}, 1, 1) ).$$ The importance of this remark lies in the fact that, when the given presentation $\mathcal P$ is aspherical, the sequence (\ref{cxkp}) associated with the complex $(\mathcal{D},\mathbf{q} \cup \mathbf{t})$ is exact. \end{remark} \subsection{A preliminary result} \label{pr} Let $\mathcal{P}=(\mathbf{x},\mathbf{r})$ be an aspherical group presentation and $\mathcal{P}_{1}=(\mathbf{x},\mathbf{r}_{1})$ a subpresentation of it, where $\mathbf{r}_{1}=\mathbf{r}\setminus \{r_{0}\}$ and $r_{0}\in \mathbf{r}$ is a fixed relation. We denote by $\Upsilon_{1}$ and $\mathfrak{U}_{1}$ the monoids associated with $\mathcal{P}_{1}$, by $\mathcal{G}(\Upsilon_{1})$ and $\hat{\mathfrak{U}}_{1}$ their respective groups, and let $\tilde{\theta}_{1}$ be the morphism of the crossed module $\mathcal{G}(\Upsilon_{1})$ whose kernel is denoted by $\tilde{\Pi}_{1}$. Also we consider the subgroup $\hat{\mathfrak{A}}_{1}$ of $\hat{\mathfrak{U}}$ generated by all $\mu\sigma(bb^{-1})$ where $b \in Y_{1} \cup Y_{1}^{-1}$.
Finally note that the monomorphism $\varphi: \Upsilon_{1} \rightarrow \Upsilon$ induced by the map $\sigma_{1}(a) \mapsto \sigma(a)$ induces a homomorphism $\hat{\varphi}: \mathcal{G}(\Upsilon_{1} )\rightarrow \mathcal{G}(\Upsilon)$. These data fit into a commutative diagram as depicted below. \begin{equation*} \xymatrix{ \mathcal{G}(\Upsilon_{1}) \ar[rr]^{\hat{\varphi}} \ar[rd]_{\tilde{\theta}_{1}} && \mathcal{G}(\Upsilon) \ar[ld]^-{\tilde{\theta}} \\ & F} \end{equation*} The following will be useful in the proof of our main theorem. \begin{proposition} \label{quasi} If $\mathcal{P}$ is aspherical, then $\hat{\varphi}(\tilde{\Pi}_{1})=\hat{\mathfrak{A}}_{1}$. \end{proposition} \begin{proof} Let $\tilde{d}=\mu_{1}\sigma_{1}(a_{1}\cdots a_{n}) \in \tilde{\Pi}_{1}$ where as before no $a_{i}$ is equal to any $\iota(\mu_{1}\sigma_{1}(^{u}{r})^{\varepsilon})$ and assume that \begin{align*} \hat{\varphi}(\tilde{d})=&(\mu\sigma(b_{1}b_{1}^{-1})\cdots \mu\sigma(b_{s}b_{s}^{-1}))\cdot (\iota(\mu\sigma(b_{s+1}b_{s+1}^{-1}))\cdots \iota(\mu\sigma(b_{r}b_{r}^{-1})))\\ &(\mu\sigma(c_{1}c_{1}^{-1})\cdots \mu\sigma(c_{t}c_{t}^{-1}))\cdot (\iota(\mu\sigma(d_{1}d_{1}^{-1}))\cdots \iota(\mu\sigma(d_{k}d_{k}^{-1}))), \end{align*} where the first half involves elements from $Y_{1}\cup Y_{1}^{-1}$ and the second one is $$\mu\sigma(C)\iota(\mu\sigma(D))$$ with $$C=c_{1}c_{1}^{-1} \cdots c_{t}c_{t}^{-1} \text{ and } D=d_{1}d_{1}^{-1} \cdots d_{k}d_{k}^{-1},$$ where $C$ and $D$ involve only elements of the form $(^{u}{r_{0}})^{\varepsilon}$ with $\varepsilon=\pm1$. Recalling from above that in $\mathcal{G}(\Upsilon)$ we have \begin{align*} \mu \sigma((a_{1} \cdots a_{n}) \cdot ((b_{s+1}b_{s+1}^{-1}) \cdots (b_{r}b_{r}^{-1})) \cdot ((d_{1}d_{1}^{-1}) \cdots (d_{k}d_{k}^{-1})))\\ = \mu \sigma(((b_{1}b_{1}^{-1}) \cdots (b_{s}b_{s}^{-1})) \cdot ((c_{1}c_{1}^{-1}) \cdots (c_{t}c_{t}^{-1}))), \end{align*} we can apply $\hat{g}$ defined in proposition \ref{free} on both sides and get \begin{align*} g\sigma((a_{1} \cdots a_{n}) \cdot ((b_{s+1}b_{s+1}^{-1}) \cdots (b_{r}b_{r}^{-1})) \cdot ((d_{1}d_{1}^{-1}) \cdots (d_{k}d_{k}^{-1})))\\ =g\sigma(((b_{1}b_{1}^{-1}) \cdots (b_{s}b_{s}^{-1})) \cdot ((c_{1}c_{1}^{-1}) \cdots (c_{t}c_{t}^{-1}))).
\end{align*} If we now write each $c_{i}=(^{u_{i}}r_{0})^{\varepsilon_{i}}$ and each $d_{j}=(^{v_{j}}r_{0})^{\delta_{j}}$ where $\varepsilon_{i}$ and $\delta_{j}=\pm1$, while we write each $a_{\ell}=(^{w_{\ell}}r_{\ell})^{\gamma_{\ell}}$ and each $b_{p}=(^{\eta_{p}}\rho_{p})^{\epsilon_{p}}$ where all $r_{\ell}$ and $\rho_{p}$ belong to $\mathbf{r}_{1}$ and $\gamma_{\ell}, \epsilon_{p}=\pm 1$, then the definition of $g$ yields \begin{align*} (w_{1}^{\alpha}\cdot r_{1}^{\beta}+\cdots + w_{n}^{\alpha}\cdot r_{n}^{\beta})+(2\eta_{s+1}^{\alpha} \cdot \rho_{s+1}^{\beta} + \cdots +2\eta_{r}^{\alpha} \cdot \rho_{r}^{\beta})+(2v_{1}^{\alpha}+\cdots + 2v_{k}^{\alpha})\cdot r_{0}^{\beta}\\ =(2\eta_{1}^{\alpha} \cdot \rho_{1}^{\beta} + \cdots +2\eta_{s}^{\alpha} \cdot \rho_{s}^{\beta})+(2u_{1}^{\alpha}+\cdots + 2u_{t}^{\alpha})\cdot r_{0}^{\beta}. \end{align*} The freeness of $\mathcal{N}(\mathcal{P})$ on the set of elements $r^{\beta}$ implies in particular that \begin{equation*} (2v_{1}^{\alpha}+\cdots + 2v_{k}^{\alpha})\cdot r_{0}^{\beta}=(2u_{1}^{\alpha}+\cdots + 2u_{t}^{\alpha})\cdot r_{0}^{\beta}, \end{equation*} from which we see that $k=t$, and after a rearrangement of terms $u^{\alpha}_{i}=v^{\alpha}_{i}$ for $i=1,\dots,k$. The easily verified fact that in $\mathcal{G}(\Upsilon)$, $\mu\sigma(aa^{-1})=\mu\sigma(a^{-1}a)$, and the fact that if $u^{\alpha}=v^{\alpha}$, then for every $s \in \mathbf{r}$, $\mu\sigma((^{v}s)^{\delta}(^{v}s)^{-\delta})=\mu\sigma((^{u}s)^{\delta}(^{u}s)^{-\delta})$, imply that \begin{equation*} \mu\sigma((^{v}r_{0})^{\delta}(^{v}r_{0})^{-\delta})=\mu\sigma((^{u}r_{0})^{\varepsilon}(^{u}r_{0})^{-\varepsilon}). \end{equation*} If we apply the latter to pairs $(c_{i}, d_{i})$ for which $u^{\alpha}_{i}=v^{\alpha}_{i}$, we get that $\mu\sigma(C)\iota(\mu\sigma(D))=1$ which shows that $\hat{\varphi}(\tilde{d}) \in \hat{\mathfrak{A}}_{1}$. \end{proof} \subsection{The proof} Throughout this section we assume that $\mathcal P=(\mathbf{x}, \mathbf{r})$ is an aspherical presentation of the trivial group. Consider now a subpresentation $\mathcal P_{1}=(\mathbf{x}, \mathbf{r}_{1})$ of $\mathcal P$ where $\mathbf{r}_{1}= \mathbf{r} \setminus \{r_{0}\}$. For each of the above group presentations, we have a monoid presentation of the same group, namely $$\mathcal{M}=\langle\mathbf{x}, \mathbf{x}^{-1}: \mathbf{s} \rangle$$ is a monoid presentation of the trivial group, where $$\mathbf{s}=\{(r^{\varepsilon},1): r \in \mathbf{r}, \varepsilon=\pm 1\} \cup \left\{ (x^{\varepsilon}x^{-\varepsilon},1): x \in \mathbf{x}, \varepsilon=\pm 1\right\},$$ and $$\mathcal{M}_{1}=\langle\mathbf{x}, \mathbf{x}^{-1}: \mathbf{s}_{1} \rangle$$ is a monoid presentation of the group given by $\mathcal P_{1}$, where $$\mathbf{s}_{1}=\{(r_{1}^{\varepsilon},1): r_{1} \in \mathbf{r}_{1}, \varepsilon=\pm 1\} \cup \left\{ (x^{\varepsilon}x^{-\varepsilon},1): x \in \mathbf{x}, \varepsilon=\pm 1\right\}.$$ Related to $\mathcal{M}$ we have defined two 2-complexes: the first one is the usual Squier complex $\mathcal{D}$, and the second one is its extension $(\mathcal{D},\mathbf{t})$. Similarly we have two 2-complexes arising from $\mathcal{M}_{1}$, namely $\mathcal{D}_{1}$ and its extension $(\mathcal{D}_{1},\mathbf{t})$.
Further, $(\mathcal{D},\mathbf{t})$ has been extended to a 3-complex $(\mathcal{D}, \mathbf{q} \cup \mathbf{t})$ by first adding 2-cells arising from $Q(r,1)$ and their translates, and then adding all the 3-cells $[e, \sigma]$ or $[\sigma, e]$ for every 2-cell $\sigma$ and every positive edge $e$. We write for short $(\mathcal{D}, \mathbf{p})$ for $(\mathcal{D}, \mathbf{q} \cup \mathbf{t})$, where $\mathbf{p}=\mathbf{q} \cup \mathbf{t}$. Likewise, $(\mathcal{D}_{1},\mathbf{t})$ extends to a 3-complex $(\mathcal{D}_{1}, \mathbf{p}_{1})$ where $\mathbf{p}_{1}=\mathbf{q}_{1} \cup \mathbf{t}$ and $\mathbf{q}_{1}$ is the set of 2-cells arising from $Q(r_{1},1)$ with $r_{1} \in \mathbf{r}_{1}$. But $(\mathcal{D}_{1}, \mathbf{p}_{1})$ is a subcomplex of $(\mathcal{D}, \mathbf{p} )$, therefore we have the following exact sequence of abelian groups $$\xymatrix@C=30pt{ \dots \ar[r] & H_{2}(\mathcal{D}, \mathbf{p} ) \ar[r] & H_{2}((\mathcal{D}, \mathbf{p}), (\mathcal{D}_{1}, \mathbf{p}_{1})) \ar[r] & H_{1}(\mathcal{D}_{1}, \mathbf{p}_{1}) \ar[r] & H_{1}(\mathcal{D}, \mathbf{p} ) \ar[r] & \dots }.$$ We know from theorem \ref{asphh} that $H_{1}(\mathcal{D}, \mathbf{p})=0$, so if we prove that $H_{2}((\mathcal{D}, \mathbf{p}), (\mathcal{D}_{1}, \mathbf{p}_{1}))=0$, then the exactness of the above sequence will imply that $H_{1}(\mathcal{D}_{1}, \mathbf{p}_{1})=0$ and we are done. Before we proceed with the proof, we explain how the boundary maps for the corresponding quotient complex are defined. For this we consider the commutative diagram \begin{equation} \label{def rel} \xymatrix{C_{3}(\mathcal{D}, \mathbf{p}) \ar[d]_{\mu_{3}} \ar[r]^{\tilde{\partial}_{3}} & C_{2}(\mathcal{D}, \mathbf{p}) \ar[d]_{\mu_{2}} \ar[r]^{\tilde{\partial}_{2}} & C_{1}(\mathcal{D}, \mathbf{p}) \ar[d]_{\mu_{1}}\\ C_{3}(\mathcal{D}, \mathbf{p})/C_{3} (\mathcal{D}_{1}, \mathbf{p}_{1}) \ar[r]^{\hat{\partial}_{3}} & C_{2}(\mathcal{D}, \mathbf{p})/C_{2} (\mathcal{D}_{1}, \mathbf{p}_{1}) \ar[r]^{\hat{\partial}_{2}} & C_{1}(\mathcal{D}, \mathbf{p})/C_{1} (\mathcal{D}_{1}, \mathbf{p}_{1})} \end{equation} where $\mu_{i}$ for $i=1,2,3$ are the canonical epimorphisms. Then, for $i=2,3$ and for every $\sigma \in C_{i}(\mathcal{D}, \mathbf{p})$ we have $$\hat{\partial}_{i}(\mu_{i}\sigma)= \mu_{i-1}\tilde{\partial}_{i}(\sigma).$$ We write $Im (\hat{\partial}_{3})=B_{2}((\mathcal{D}, \mathbf{p}),(\mathcal{D}_{1}, \mathbf{p}_{1}))$, and similarly $Im (\hat{\partial}_{2})=B_{1}((\mathcal{D}, \mathbf{p}),(\mathcal{D}_{1}, \mathbf{p}_{1}))$. Also we let $Z_{2}((\mathcal{D}, \mathbf{p}),(\mathcal{D}_{1}, \mathbf{p}_{1}))=Ker(\hat{\partial}_{2})$ and then $$H_{2}((\mathcal{D}, \mathbf{p}),(\mathcal{D}_{1}, \mathbf{p}_{1}))=Z_{2}((\mathcal{D}, \mathbf{p}),(\mathcal{D}_{1}, \mathbf{p}_{1}))/B_{2}((\mathcal{D}, \mathbf{p}),(\mathcal{D}_{1}, \mathbf{p}_{1})).$$ We note that $$C_{2}(\mathcal{D}, \mathbf{p})/C_{2} (\mathcal{D}_{1}, \mathbf{p}_{1}) \cong \mu_{2}(C_{2}(\mathcal{D})) \oplus C_{2}^{\mathbf{q}_{0}},$$ where $\mu_{2}(C_{2}(\mathcal{D}))$ is the free abelian group generated by all 2-cells $[e,f]$ where at least one of the edges $e$ or $f$ arises from $r_{0}$, and $C_{2}^{\mathbf{q}_{0}}$ is the free abelian group on 2-cells $u. \mathbf{q}_{0}.v$ where $\mathbf{q}_{0}$ is the 2-cell attached along $Q(r_{0},1)$. We can thus regard $Z_{2}((\mathcal{D}, \mathbf{p}),(\mathcal{D}_{1}, \mathbf{p}_{1}))$ as a subgroup of $ \mu_{2}(C_{2}(\mathcal{D})) \oplus C_{2}^{\mathbf{q}_{0}}$.
Now we let $$\varphi_{rel}: \mu_{2}(C_{2}(\mathcal{D})) \oplus C_{2}^{\mathbf{q}_{0}} \rightarrow \mathbb{Z}G. \mathbf{q}_{0}. \mathbb{Z}G$$ be the $(\mathbb{Z}F,\mathbb{Z}F)$-homomorphism defined by mapping $\mu_{2}(C_{2}(\mathcal{D}))$ to 0, and every 2-cell $u. \mathbf{q}_{0}. v$ to $\bar{u}.\mathbf{q}_{0}. \bar{v}$. Denote the kernel of $\varphi_{rel}$ by $K_{rel}^{\mathbf{q}_{0}}$. By the same argument as that of \cite{SES} we see that $$K_{rel}^{\mathbf{q}_{0}}= \mu_{2}(C_{2}(\mathcal{D}))+ J. \mathbf{q}_{0}.\mathbb{Z}F+\mathbb{Z}F.\mathbf{q}_{0}.J.$$ Later we will make use of the fact that $K_{rel}^{\mathbf{q}_{0}}$ can be regarded as a subgroup of $K^{\mathbf{p}}$. Next we show that $B_{2}((\mathcal{D}, \mathbf{p}), (\mathcal{D}_{1}, \mathbf{p}_{1})) \subseteq K_{rel}^{\mathbf{q}_{0}}$ and that the restriction of $\hat{\partial}_{2}$ to $K_{rel}^{\mathbf{q}_{0}}$ sends $K_{rel}^{\mathbf{q}_{0}}$ onto the subgroup $B_{1}(\mathcal{D}, \mathcal{D}_{1})$ of $ C_{1}(\mathcal{D}, \mathbf{p})/ C_{1}(\mathcal{D}_{1}, \mathbf{p}_{1})$ defined by $$B_{1}(\mathcal{D}, \mathcal{D}_{1})=\left\{\beta+ C_{1}(\mathcal{D}_{1})| \beta \in \tilde{\partial}_{2}(C_{2}(\mathcal{D})) \right\}.$$ To see that $B_{2}((\mathcal{D}, \mathbf{p}), (\mathcal{D}_{1}, \mathbf{p}_{1})) \subseteq K_{rel}^{\mathbf{q}_{0}}$, we must prove that for every 3-cell $[f,\sigma]$ or $[\sigma, f]$, $$\hat{\partial}_{3}\left([f, \sigma]+C_{3}(\mathcal{D}_{1}, \mathbf{p}_{1})\right) \in K_{rel}^{\mathbf{q}_{0}}$$ and similarly, $$\hat{\partial}_{3}\left([\sigma,f]+C_{3}(\mathcal{D}_{1}, \mathbf{p}_{1})\right) \in K_{rel}^{\mathbf{q}_{0}}.$$ We prove the second claim; the first is similar. Let $[\sigma, f] \notin C_{3}(\mathcal{D}_{1}, \mathbf{p}_{1})$. If $\sigma \in F.\mathbf{q}_{0}.F$ or $f$ arises from $r_{0}$, then clearly \begin{align*} \hat{\partial}_{3}\left([\sigma,f]+C_{3}(\mathcal{D}_{1}, \mathbf{p}_{1})\right)&=\left(\sigma. (\iota f- \tau f)-\sum_{i} \varepsilon_{i}[e_{i},f]\right)+C_{2}(\mathcal{D}_{1}, \mathbf{p}_{1})\in K_{rel}^{\mathbf{q}_{0}}. \end{align*} Otherwise, if $\sigma \notin F.\mathbf{q}_{0}.F$ and $f$ arises from $\mathbf{r}_{1}$, then $\sigma=[g,h]$ where at least $g$ or $h$ arises from $r_{0}$. Again we see that $\hat{\partial}_{3}\left([\sigma,f]+C_{3}(\mathcal{D}_{1}, \mathbf{p}_{1})\right) \in K_{rel}^{\mathbf{q}_{0}}$. Next we prove that the restriction of $\hat{\partial}_{2}$ to $K_{rel}^{\mathbf{q}_{0}}$ sends $K_{rel}^{\mathbf{q}_{0}}$ onto $B_{1}(\mathcal{D}, \mathcal{D}_{1})$. Indeed, since for every $(\iota f - \tau f). \sigma_{0} \in J. \mathbf{q}_{0}.\mathbb{Z}F$ \begin{equation*} (\iota f - \tau f). \sigma_{0}= \tilde{\partial}_{3}[f, \sigma_{0}]-\sum_{i}\varepsilon_{i}[f, e_{i}], \end{equation*} we can derive that \begin{align*} \hat{\partial}_{2}\left((\iota f - \tau f). \sigma_{0}\right)&=-\hat{\partial}_{2}\left(\sum_{i}\varepsilon_{i}[f, e_{i}]\right)\\ &=-\sum_{i}\varepsilon_{i}\tilde{\partial}_{2}[f, e_{i}]+C_{1}(\mathcal{D}_{1}) \in B_{1}(\mathcal{D}, \mathcal{D}_{1}). \end{align*} In a symmetric way one can show that $\hat{\partial}_{2}\left(\sigma_{0}.(\iota f - \tau f)\right) \in B_{1}(\mathcal{D}, \mathcal{D}_{1})$. Finally, if $[e,f] \in \mu_{2}(C_{2}(\mathcal{D}))$, then $$\hat{\partial}_{2}[e,f]=\tilde{\partial_{2}}[e,f]+C_{1}(\mathcal{D}_{1})\in B_{1}(\mathcal{D}, \mathcal{D}_{1}).$$ This also shows that $\hat{\partial}_{2}$ is onto.
Therefore we have the complex \begin{equation} \label{c-rel} \xymatrix{0 \ar[r] & B_{2}((\mathcal{D}, \mathbf{p}), (\mathcal{D}_{1}, \mathbf{p}_{1})) \ar[r]^-{incl.} & K_{rel}^{\mathbf{q}_{0}} \ar[r]^-{\hat{\partial}_{2}} & B_{1}(\mathcal{D}, \mathcal{D}_{1}) \ar[r] & 0, } \end{equation} which is exact on the left and on the right. \begin{lemma} \label{c-rel-ext} The complex (\ref{c-rel}) is exact. \end{lemma} \begin{proof} For this we consider the commutative diagram $$\xymatrix@C=35pt{0 \ar[r] & B_{2}(\mathcal{D}, \mathbf{p}) \ar[d]_{\mu_{2}} \ar[r]^{incl.} & K^{\mathbf{p}} \ar[d]_{\mu_{2}} \ar[r]^{\tilde{\partial}_{2}} & B_{1}(\mathcal{D}) \ar[d]_{\mu_{1}} \ar[r] & 0 \\ 0 \ar[r] & B_{2}((\mathcal{D}, \mathbf{p}), (\mathcal{D}_{1}, \mathbf{p}_{1})) \ar[r]^-{incl.} & K_{rel}^{\mathbf{q}_{0}} \ar[r]^-{\hat{\partial}_{2}} & B_{1}(\mathcal{D}, \mathcal{D}_{1}) \ar[r] & 0}$$ The top row is exact from proposition \ref{p14} and from remark \ref{IR}, and $\mu_{1},\mu_{2}$ are the restrictions of the epimorphisms of (\ref{def rel}). Let $\xi =\sum_{i}z_{i}\mu_{2}(\sigma_{i})\in Ker \hat{\partial}_{2}$. We recall that $\xi$ can be regarded as an element of $K^{\mathbf{p}}$ with no terms arising from $\mathbf{t}$ or square 2-cells $[e,f]$ with both $e$ and $f$ in $\mathcal{D}_{1}$. Further we have that \begin{align*} 0&=\hat{\partial}_{2}\left(\sum_{i}z_{i}\mu_{2}(\sigma_{i})\right)=\sum_{i}z_{i}\hat{\partial}_{2} \mu_{2}(\sigma_{i})\\ &=\sum_{i}z_{i}\mu_{1}\tilde{\partial}_{2}(\sigma_{i})=\mu_{1}\tilde{\partial}_{2}\left(\sum_{i}z_{i}\sigma_{i}\right), \end{align*} which implies that $\tilde{\partial}_{2}\left(\sum_{i}z_{i}\sigma_{i}\right) \in C_{1}(\mathcal{D}_{1})$, and so $\tilde{\partial}_{2}\left(\sum_{i}z_{i}\sigma_{i}\right)$ is a 1-cycle in $Z_{1}(\mathcal{D}_{1})$. It follows that $$\tilde{\partial}_{2}\left(\sum_{i}z_{i}\sigma_{i}\right) \in Ker \tilde{\partial}_{1} \cap (J. \mathbf{s}_{1}. \mathbb{Z}F+ \mathbb{Z}F. \mathbf{s}_{1}. J).$$ We note that each term from $J. \mathbf{s}_{1}. \mathbb{Z}F+ \mathbb{Z}F. \mathbf{s}_{1}. J$ that is represented in $\tilde{\partial}_{2}\left(\sum_{i}z_{i}\sigma_{i}\right)$ arises either from a 2-cell $[e,f]$ where at least one of $e$ or $f$ is a positive edge that belongs to $\mathcal{D}_{1}$, or arises from an element of the form $j.\mathbf{q}_{0}. v$ or $u. \mathbf{q}_{0}. j$ with $u, v \in F$ and $j \in J$. Theorem 6.6 of \cite{OK2002} implies that there is a 2-chain $\sum_{j}k_{j} \beta_{j} \in C_{2}(\mathcal{D}_{1})$ such that $\tilde{\partial}_{2}\left(\sum_{i}z_{i}\sigma_{i}\right)=\tilde{\partial}_{2}\left(\sum_{j}k_{j} \beta_{j}\right)$ and then we have the 2-cycle $\tilde{\xi}=\sum_{i}z_{i}\sigma_{i}-\sum_{j}k_{j} \beta_{j}$ in $K^{\mathbf{p}}$. It follows that $\tilde{\xi}$ is a 2-boundary since the top row is exact, and has the property that \begin{align*} \mu_{2}(\tilde{\xi})&=\mu_{2}\left(\sum_{i}z_{i}\sigma_{i}\right)-\mu_{2}\left(\sum_{j}k_{j} \beta_{j}\right)\\ &=\mu_{2}\left(\sum_{i}z_{i}\sigma_{i}\right) && \text{(since each $\beta_{j} \in C_{2}(\mathcal{D}_{1})$)}\\ &=\sum_{i}z_{i}\mu_{2}(\sigma_{i})\\ &=\xi, \end{align*} hence for some $w \in C_{3}(\mathcal{D}, \mathbf{p})$ we have that $$\xi=\mu_{2}(\tilde{\xi})=\mu_{2}(\tilde{\partial}_{3}(w))=\hat{\partial}_{3}\mu_{3}(w).$$ This proves that $\xi$ is a relative 2-boundary and as a consequence the exactness of the bottom row. \end{proof} Further we note that $B_{1}(\mathcal{D}, \mathcal{D}_{1})$ embeds in $Im (\hat{\partial}_{2})$.
Indeed, any element $\tilde{\partial}_{2}(\xi)+ C_{1}(\mathcal{D}_{1})$ of $B_{1}(\mathcal{D}, \mathcal{D}_{1})$, where $\xi$ is a 2-chain from $C_{2}(\mathcal{D})$, is in $Im (\hat{\partial}_{2})$ since $C_{2}(\mathcal{D}) \leq C_{2}(\mathcal{D}, \mathbf{p})$ and $C_{1}(\mathcal{D}_{1}, \mathbf{p}_{1})=C_{1}(\mathcal{D}_{1})$. Finally, consider the commutative diagram $$\xymatrix{0 \ar[r] & B_{2}((\mathcal{D}, \mathbf{p}), (\mathcal{D}_{1}, \mathbf{p}_{1})) \ar[d]_{\iota} \ar[r]^-{incl.} & K_{rel}^{\mathbf{q}_{0}} \ar[d]_{\iota} \ar[r]^-{\hat{\partial}_{2}} & B_{1}(\mathcal{D}, \mathcal{D}_{1}) \ar[d]_{\iota} \ar[r] & 0 \\ 0 \ar[r] & Z_{2}((\mathcal{D}, \mathbf{p}), (\mathcal{D}_{1}, \mathbf{p}_{1})) \ar[r]^{\iota} & \mu_{2}(C_{2}(\mathcal{D})) \oplus C_{2}^{\mathbf{q}_{0}} \ar[r]^-{\hat{\partial}_{2}} & Im (\hat{\partial}_{2}) \ar[r] & 0 }$$ where the top row is exact from lemma \ref{c-rel-ext}, and the bottom one is also exact, where $Im(\hat{\partial}_{2}) \leq C_{1}(\mathcal{D}, \mathbf{p})/C_{1}(\mathcal{D}_{1}, \mathbf{p}_{1})$. From the Snake Lemma we get the exact sequence \begin{equation} \label{mainses} \xymatrix{0 \ar[r] & H_{2}((\mathcal{D}, \mathbf{p}), (\mathcal{D}_{1}, \mathbf{p}_{1})) \ar[r] & \mathbb{Z}G.\mathbf{q}_{0}. \mathbb{Z}G \ar[r]^-{\nu} & Im(\hat{\partial}_{2})/ B_{1}(\mathcal{D}, \mathcal{D}_{1}) \ar[r] & 0,} \end{equation} where $\nu(\mathbf{q}_{0})=\hat{\partial}_{2}([1,\mathbf{q}_{0},1])+B_{1}(\mathcal{D}, \mathcal{D}_{1})$. Since $G$ is the trivial group, we have that for every $[u, \mathbf{q}_{0}, v]$, \begin{equation} \label{fgen} \hat{\partial}_{2}([u, \mathbf{q}_{0}, v])+ B_{1}(\mathcal{D}, \mathcal{D}_{1})=\hat{\partial}_{2}([1, \mathbf{q}_{0}, 1])+ B_{1}(\mathcal{D}, \mathcal{D}_{1}). \end{equation} This follows easily if we prove that for every positive edge $e$ and $v \in F$ we have that $$\hat{\partial}_{2}([\iota e, \mathbf{q}_{0}, v])+ B_{1}(\mathcal{D}, \mathcal{D}_{1})=\hat{\partial}_{2}([\tau e, \mathbf{q}_{0}, v])+ B_{1}(\mathcal{D}, \mathcal{D}_{1}),$$ and similarly, for every positive edge $f$ and $u \in F$, $$\hat{\partial}_{2}([u, \mathbf{q}_{0}, \iota f])+ B_{1}(\mathcal{D}, \mathcal{D}_{1})=\hat{\partial}_{2}([u, \mathbf{q}_{0}, \tau f])+ B_{1}(\mathcal{D}, \mathcal{D}_{1}).$$ We prove the first claim; the second is similar. Since $$(\iota e- \tau e). \mathbf{q}_{0}.v+ C_{2}(\mathcal{D}_{1}, \mathbf{p}_{1})= \left(\tilde{\partial}_{3}[e,\mathbf{q}_{0}]-\sum_{i}\varepsilon_{i}[e,e_{i}]\right)+C_{2}(\mathcal{D}_{1}, \mathbf{p}_{1}),$$ where $\partial \mathbf{q}_{0}= e_{1}^{\varepsilon_{1}}\dots e_{n}^{\varepsilon_{n}}$, we have $$\tilde{\partial}_{2}((\iota e- \tau e). \mathbf{q}_{0}.v)+C_{1}(\mathcal{D}_{1})=-\tilde{\partial}_{2}\left(\sum_{i}\varepsilon_{i}[e,e_{i}]\right)+C_{1}(\mathcal{D}_{1}).$$ But $$\tilde{\partial}_{2}\left(\sum_{i}\varepsilon_{i}[e,e_{i}]\right)+C_{1}(\mathcal{D}_{1}) \in B_{1}(\mathcal{D}, \mathcal{D}_{1}),$$ consequently \begin{align*} \left(\hat{\partial}_{2}([\iota e, \mathbf{q}_{0}, v])-\hat{\partial}_{2}([\tau e, \mathbf{q}_{0}, v])\right)+ B_{1}(\mathcal{D}, \mathcal{D}_{1})&=\left(\tilde{\partial}_{2}((\iota e- \tau e). \mathbf{q}_{0}.v)+C_{1}(\mathcal{D}_{1})\right) + B_{1}(\mathcal{D}, \mathcal{D}_{1})\\ &=-\left(\tilde{\partial}_{2}\left(\sum_{i}\varepsilon_{i}[e,e_{i}]\right)+C_{1}(\mathcal{D}_{1})\right)+B_{1}(\mathcal{D}, \mathcal{D}_{1})\\ &=B_{1}(\mathcal{D}, \mathcal{D}_{1}), \end{align*} which proves the first claim.
An obvious consequence of (\ref{fgen}) is that $Im(\hat{\partial}_{2})/ B_{1}(\mathcal{D}, \mathcal{D}_{1})$ is a cyclic group with generator $\hat{\partial}_{2}([1, \mathbf{q}_{0}, 1])+ B_{1}(\mathcal{D}, \mathcal{D}_{1})$. The key to proving our main theorem is that $Im(\hat{\partial}_{2})/ B_{1}(\mathcal{D}, \mathcal{D}_{1})$ is infinite cyclic. Before that, we need to do some preparatory work. If we let $G_{1}$ be the group given by $\mathcal P_{1}$, then for every $g \in G_{1}$, we let $(\mathcal{D}_{1}, \mathbf{t}, g)$ be the connected component of $(\mathcal{D}_{1}, \mathbf{t})$ corresponding to $g$, and let $H_{1}(\mathcal{D}_{1}, \mathbf{t}, g)$ be the corresponding homology group. The homology group $H_{1}(\mathcal{D}_{1}, \mathbf{t})$ decomposes as a direct sum $$H_{1}(\mathcal{D}_{1}, \mathbf{t})=\oplus_{g \in G_{1}}H_{1}(\mathcal{D}_{1}, \mathbf{t}, g).$$ Any 1-cycle $\varsigma$ now decomposes uniquely as $$\varsigma= \varsigma_{g_{1}}+\dots + \varsigma_{g_{n}}$$ where $\varsigma_{g_{i}} \in Z_{1}(\mathcal{D}_{1}, \mathbf{t}, g_{i})$, and $\varsigma+B_{1}(\mathcal{D}_{1}, \mathbf{t})$ can be written uniquely as $$\varsigma+B_{1}(\mathcal{D}_{1}, \mathbf{t})=(\varsigma_{g_{1}}+B_{1}(\mathcal{D}_{1}, \mathbf{t}, g_{1}))+ \dots + (\varsigma_{g_{n}}+B_{1}(\mathcal{D}_{1}, \mathbf{t}, g_{n})).$$ From \cite{LDHM2} we know that each $H_{1}(\mathcal{D}_{1}, \mathbf{t}, g_{i})$ is isomorphic to $H_{1}(\mathcal{D}_{1}, \mathbf{t}, 1)$ where the isomorphism $$\theta_{u_{i}}: H_{1}(\mathcal{D}_{1}, \mathbf{t}, g_{i}) \rightarrow H_{1}(\mathcal{D}_{1}, \mathbf{t}, 1)$$ is defined by $$\varsigma_{g_{i}}+B_{1}(\mathcal{D}_{1}, \mathbf{t}, g_{i}) \mapsto \varsigma_{g_{i}}\cdot u_{i}^{-1}+B_{1}(\mathcal{D}_{1}, \mathbf{t},1)$$ where $u_{i}$ is any vertex in $(\mathcal{D}_{1}, \mathbf{t}, g_{i})$. We let $\psi_{1}: H_{1}(\mathcal{D}_{1}, \mathbf{t},1) \rightarrow \tilde{\Pi}_{1}$ and $\eta: \tilde{\Pi} \rightarrow H_{1}(\mathcal{D}, \mathbf{t})$ be the isomorphisms of \cite{LDHM2}. With these notations the following holds true. \begin{lemma} \label{inv} For every $\varsigma\in Z_{1}(\mathcal{D}_{1}, \mathbf{t}, 1)$, $\hat{\varphi}\psi_{1}(\varsigma+ B_{1}(\mathcal{D}_{1}, \mathbf{t}, 1))=\psi(\varsigma+ B_{1}(\mathcal{D}, \mathbf{t}))$. \end{lemma} \begin{proof} This follows easily from the definitions of $\psi, \psi_{1}$ and $\hat{\varphi}$. Indeed, assume that $$\varsigma=\sum_{i \in I}z_{i}(u_{i},s_{i},1,v_{i}),$$ and let $$J=\{i \in I: s_{i}=r^{\varepsilon_{i}}_{i} \text{ where } r^{\varepsilon_{i}}_{i} \in \mathbf{r}_{1}^{\pm 1}\}.$$ Then from the definitions of $\psi_{1}$ and $\psi$ we have that \begin{equation} \label{psi10} \psi_{1,0}(\varsigma)= \prod_{j \in J}\mu_{1}\sigma_{1}(^{u_{j}}s_{j})^{z_{j}} \text{ and } \psi_{0}(\varsigma)= \prod_{j \in J}\mu\sigma(^{u_{j}}s_{j})^{z_{j}} \end{equation} where the exponential notations of the right hand sides mean that if $z_{j}<0$, then $\mu_{1}\sigma_{1}(^{u_{j}}s_{j})^{z_{j}}=\iota \left(\mu_{1}\sigma_{1}(^{u_{j}}s_{j})\right)^{|z_{j}|}$ and likewise, $\mu\sigma(^{u_{j}}s_{j})^{z_{j}}=\iota \left(\mu\sigma(^{u_{j}}s_{j})\right)^{|z_{j}|}$. We used the definitions of $\psi_{1,0}$ and $\psi_{0}$ by regarding $\varsigma$ as a sum of 1-cycles arising from loops in $(\mathcal{D}_{1}, \mathbf{t}, 1)$. This is always possible due to lemma 5.1 of \cite{OK2002}.
Further we have that \begin{align*} \hat{\varphi}\psi_{1}(\varsigma+ B_{1}(\mathcal{D}_{1}, \mathbf{t}, 1))&= \hat{\varphi}(\psi_{1,0}(\varsigma)) && \text{(from the definition of $\psi_{1}$)}\\ &= \hat{\varphi} \left(\prod_{j \in J}\mu_{1}\sigma_{1}(^{u_{j}}s_{j})^{z_{j}}\right) && \text{(from (\ref{psi10}))}\\ &=\prod_{j \in J}\mu\sigma(^{u_{j}}s_{j})^{z_{j}} && \text{(from the definition of $\hat{\varphi}$)}\\ &=\psi_{0}(\varsigma) && \text{(from (\ref{psi10}))}\\ &=\psi(\varsigma+ B_{1}(\mathcal{D}, \mathbf{t})) && \text{(from the definition of $\psi$)}, \end{align*} proving the lemma. \end{proof} With the decomposition $H_{1}(\mathcal{D}_{1}, \mathbf{t})=\oplus_{g_{i} \in G_{1}}H_{1}(\mathcal{D}_{1}, \mathbf{t}, g_{i})$, consider the following sequence of homomorphisms $$\xymatrix{\oplus_{g_{i} \in G_{1}}H_{1}(\mathcal{D}_{1}, \mathbf{t}, g_{i}) \ar[r]^{\oplus \theta_{u_{i}}} & \oplus_{g_{i} \in G_{1}}H_{1}(\mathcal{D}_{1}, \mathbf{t}, 1) \ar[r]^-{\oplus \psi_{1}} & \oplus_{g_{i} \in G_{1}} \tilde{\Pi}_{1} \ar[r]^{\oplus \hat{\varphi}}& \oplus_{g_{i} \in G_{1}}\tilde{\Pi} \ar[r]^{\gamma} & H_{1}(\mathcal{D}, \mathbf{t})},$$ where $$\gamma \left(\sum_{g_{i}} d_{g_{i}}\right)= \sum_{g_{i}}\eta(d_{g_{i}}),$$ and write for short $\chi= \gamma (\oplus \hat{\varphi}) (\oplus \psi_{1}) (\oplus \theta_{u_{i}})$. \begin{lemma} \label{st} For any element $\varsigma+B_{1}(\mathcal{D}_{1}, \mathbf{t}) \in H_{1}(\mathcal{D}_{1}, \mathbf{t})$, we have $\chi(\varsigma+B_{1}(\mathcal{D}_{1}, \mathbf{t}))=\varsigma +B_{1}(\mathcal{D}, \mathbf{t})$. \end{lemma} \begin{proof} If $\varsigma+B_{1}(\mathcal{D}_{1}, \mathbf{t})$ is expressed as $$\varsigma+B_{1}(\mathcal{D}_{1}, \mathbf{t})=\sum_{g_{i}}(\varsigma_{g_{i}}+B_{1}(\mathcal{D}_{1}, \mathbf{t}, g_{i})),$$ then we have \begin{align*} \chi(\varsigma+B_{1}(\mathcal{D}_{1}, \mathbf{t}))&=(\gamma (\oplus \hat{\varphi}) (\oplus \psi_{1}) (\oplus \theta_{u_{i}}))\left(\sum_{g_{i}}(\varsigma_{g_{i}}+B_{1}(\mathcal{D}_{1}, \mathbf{t}, g_{i}))\right)\\ &=(\gamma (\oplus \hat{\varphi}) (\oplus \psi_{1}) )\left(\sum_{g_{i}}(\varsigma_{g_{i}}\cdot u_{i}^{-1}+B_{1}(\mathcal{D}_{1}, \mathbf{t}, 1))\right)\\ &=(\gamma (\oplus \hat{\varphi}))\left(\sum_{g_{i}}\psi_{1}(\varsigma_{g_{i}}\cdot u_{i}^{-1}+B_{1}(\mathcal{D}_{1}, \mathbf{t}, 1))\right)\\ &=\gamma \left(\sum_{g_{i}}\hat{\varphi}\psi_{1}(\varsigma_{g_{i}}\cdot u_{i}^{-1}+B_{1}(\mathcal{D}_{1}, \mathbf{t}, 1))\right)\\ &=\gamma \left(\sum_{g_{i}}\psi(\varsigma_{g_{i}}\cdot u_{i}^{-1}+B_{1}(\mathcal{D}, \mathbf{t}))\right) && \text{(by lemma \ref{inv})} \\ &=\sum_{g_{i}}\eta \psi(\varsigma_{g_{i}}\cdot u_{i}^{-1}+B_{1}(\mathcal{D}, \mathbf{t}))\\ &=\sum_{g_{i}} \varsigma_{g_{i}}\cdot u_{i}^{-1}+B_{1}(\mathcal{D}, \mathbf{t}) \\ &=\sum_{g_{i}} \varsigma_{g_{i}} +B_{1}(\mathcal{D}, \mathbf{t}) && \text{(by lemma \ref{Jm})}\\ &=\varsigma +B_{1}(\mathcal{D}, \mathbf{t}). \end{align*} \end{proof} \begin{theorem} The subpresentation $\mathcal P_{1}$ is aspherical. \end{theorem} \begin{proof} We prove first that $Im(\hat{\partial}_{2})/ B_{1}(\mathcal{D}, \mathcal{D}_{1})$ is infinite cyclic. 
If we assume the contrary, then there is a nonzero $z \in \mathbb{Z}$ and a 2-chain $\xi \in C_{2}(\mathcal{D})$ such that $$z \tilde{\partial}_{2}([1, \mathbf{q}_{0}, 1])+C_{1}(\mathcal{D}_{1})= \tilde{\partial}_{2} (\xi)+C_{1}(\mathcal{D}_{1}).$$ It follows that $\varsigma=z \tilde{\partial}_{2}([1, \mathbf{q}_{0}, 1])-\tilde{\partial}_{2} (\xi)$ is a 1-cycle in $C_{1}(\mathcal{D}_{1})$ and therefore $\varsigma \in Z_{1}(\mathcal{D}_{1},\mathbf{t})$. Now we see that \begin{align*} \chi (\varsigma+ B_{1}(\mathcal{D}_{1}, \mathbf{t}))&= \varsigma+B_{1}(\mathcal{D}, \mathbf{t}) && \text{(from lemma \ref{st})}\\ &=(z \tilde{\partial}_{2}([1, \mathbf{q}_{0}, 1])-\tilde{\partial}_{2} (\xi))+B_{1}(\mathcal{D}, \mathbf{t})\\ &=z \tilde{\partial}_{2}([1, \mathbf{q}_{0}, 1])+ B_{1}(\mathcal{D}, \mathbf{t})\\ &=z q(r_{0},1)+B_{1}(\mathcal{D}, \mathbf{t}). \end{align*} But from proposition \ref{quasi} it follows that $$((\oplus \hat{\varphi}) (\oplus \psi_{1}) (\oplus \theta_{u_{i}}))(\varsigma+ B_{1}(\mathcal{D}_{1}, \mathbf{t}))=\sum_{g_{i}}v_{g_{i}},$$ where $v_{g_{i}}\in \hat{\mathfrak{A}}_{1}$, say $v_{g_{i}}=\mu \sigma (^{u_{i}}r_{i}({^{u_{i}}}r_{i})^{-1})^{z_{i}}$ where $r_{i} \in \mathbf{r}_{1}$, $u_{i} \in F$ and $z_{i} \in \mathbb{Z}$. Now we have \begin{align*} \chi (\varsigma+ B_{1}(\mathcal{D}_{1}, \mathbf{t}))&=\gamma \left(\sum_{g_{i}}v_{g_{i}}\right)\\ &= \gamma \left(\sum_{g_{i}}\mu \sigma (^{u_{i}}r_{i}({^{u_{i}}}r_{i})^{-1})^{z_{i}}\right)\\ &= \sum_{g_{i}} \eta (\mu \sigma (^{u_{i}}r_{i}({^{u_{i}}}r_{i})^{-1})^{z_{i}})\\ &= \sum_{g_{i}} z_{i}(q(r_{i},u_{i}) + B_{1}(\mathcal{D},\mathbf{t}) ) && \text{(Proposition \ref{th})}. \end{align*} Collecting the above, we have in $H_{1}(\mathcal{D}, \mathbf{t})$ the equality $$z q(r_{0},1)+ B_{1}(\mathcal{D}, \mathbf{t})=\sum_{g_{i}} z_{i} q(r_{i},u_{i}) + B_{1}(\mathcal{D},\mathbf{t}) .$$ In $\tilde{\Pi}=\hat{\mathfrak{U}}$ this equality translates to $$\mu \sigma (r_{0}r_{0}^{-1})^{z}= \prod_{g_{i}}\mu \sigma (^{u_{i}}r_{i}({^{u_{i}}}r_{i})^{-1})^{z_{i}}$$ which from proposition \ref{free} is impossible since each $r_{i} \neq r_{0}$. Hence $Im(\hat{\partial}_{2})/ B_{1}(\mathcal{D}, \mathcal{D}_{1})$ is indeed infinite cyclic, and as a result it is isomorphic to $\mathbb{Z}G.\mathbf{q}_{0}. \mathbb{Z}G$ where the isomorphism sends $\mathbf{q}_{0}$ to $\hat{\partial}_{2}([1,\mathbf{q}_{0},1])+B_{1}(\mathcal{D}, \mathcal{D}_{1})$ which is the free generator of $Im(\hat{\partial}_{2})/ B_{1}(\mathcal{D}, \mathcal{D}_{1})$. But this map is the map $\nu$ of (\ref{mainses}), therefore $H_{2}((\mathcal{D}, \mathbf{p}), (\mathcal{D}_{1}, \mathbf{p}_{1}))=0$ as desired. \end{proof} \end{document}
\begin{document} \title{Comment on \emph{``The Stochastic Nonlinear Schrödinger Equation in $H^{1}$''}} \author{Torquil Macdonald Sørensen} \maketitle \lyxaddress{\begin{center} Centre of Mathematics for Applications\\ University of Oslo\\ NO-0316 Oslo, Norway\\ Email: [email protected], [email protected] \par\end{center}} \begin{abstract} The paper \emph{``The Stochastic Nonlinear Schrödinger Equation in $H^{1}$''} \cite{debouard2003} gives an existence proof for a stochastic nonlinear Schrödinger equation with multiplicative noise. We point out two mistakes that draw the validity of the proof into question.\\ \\ Keywords (MSC2010): 35Q41 Time-dependent Schrödinger equations, Dirac equations; 35R60 Partial differential equations with randomness, stochastic partial differential equations; 35G20 Nonlinear higher-order equations. \end{abstract} \section{Regarding the proof of \cite[Theorem 4.1]{debouard2003}\label{sec:theorem_4_1}} Consider \cite[Theorem 4.1]{debouard2003}, concerning solution existence for the following stochastic nonlinear Schrödinger equation with multiplicative noise, \begin{equation} idu-(\Delta u+\lambda|u|^{2\sigma}u)dt=udW-\frac{i}{2}uF_{\phi}dt\,,\label{eq:spde} \end{equation} describing a stochastic process $u$ on $\mathbb{R}^{n}$. We refer to \cite{debouard2003} for additional information about the mathematical details. Here we only describe the bare minimum of details that are necessary to describe our objection to the proof that is provided. In the theorem, the following $n$-dependent parameter ranges for $\sigma$ are assumed, \begin{equation} \begin{cases} 0<\sigma & ,n=1,2\\ 0<\sigma<2 & ,n=3\\ \frac{1}{2}\leq\sigma<\frac{2}{n-2}\quad\mbox{or}\quad\sigma<\frac{1}{n-1} & ,n\geq4\,. \end{cases}\label{eq:sigma_range} \end{equation} The theorem essentially states that an \emph{admissible pair} of Lebesgue space exponents $(r,p)$ exists such that \eqref{eq:spde} has a unique solution in a certain function space characterised by $(r,p)$. Admissibility for $(r,p)$ is defined as \begin{equation} r\geq2,\quad\frac{2}{r}=n\left(\frac{1}{2}-\frac{1}{p}\right)\,.\label{eq:admissible_pair} \end{equation} Dual Lebesgue space exponents are denoted by primed quantities, and are defined by the equation \[ \frac{1}{p}+\frac{1}{p'}=1\,. \] The proof given is for the special case $\sigma\geq1/2$. In the proof, a second admissible pair $(\gamma,s)$ is introduced after \cite[Equation (4.17)]{debouard2003}. The parameters $s'$ and $p$ are related through another parameter $q$ which arises in the proof, \begin{equation} \frac{1}{s'}=\frac{2\sigma}{q}+\frac{1}{p}\,,\label{eq:s'_q_p} \end{equation} which is described prior to \cite[Equation (4.18)]{debouard2003}. The parameter $q$ arises due to the use of the Sobolev embedding $H^{1}(\mathbb{R}^{n})\subset L^{q}(\mathbb{R}^{n})$. It is claimed that the embedding holds because ``$q<2n/(n-3)<2n/(n-2)$''. However, the second part of this inequality is incorrect, and therefore the Sobolev embedding is used without proper justification. \section{Regarding the proof of \cite[Lemma 4.3]{debouard2003}\label{sec:lemma_4_3}} In the proof of \cite[Lemma 4.3]{debouard2003}, in the second estimate on page 121, the interpolation inequality for $L^{p}$-spaces, followed by the Sobolev embeddings $H^{1}(\mathbb{R}^{n}),W^{1,p}(\mathbb{R}^{n})\subset L^{q}(\mathbb{R}^{n})$ were used, where the parameter $q$ was ``as above''. Therefore, for the same reason, the Sobolev embeddings used here also lack a proper justification. 
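To make the arithmetic behind this objection explicit: for every $n\geq4$ one has $n-3<n-2$, and therefore
\[
\frac{2n}{n-3}>\frac{2n}{n-2}\,,
\]
so the claimed chain ``$q<2n/(n-3)<2n/(n-2)$'' cannot hold; for instance, for $n=4$ it would read $q<8<4$.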
\section{Conclusions} We believe that the errors described here put into question the validity of the existence proof provided in \cite{debouard2003} for the SPDE in the multiplicative case. \end{document}
\begin{document} \title{Induced topological pressure for topological dynamical systems \footnotetext {Mathematics Subject Classification: 37D25, 37D35 }} \author{Zhitao Xing$^{1,2},$ Ercai Chen$^{1,3}.$ \\ \small 1 School of Mathematical Science, Nanjing Normal University,\\ \small Nanjing 210023, Jiangsu, P.R. China\\ \small e-mail: [email protected] \\ \small e-mail: [email protected] \\ \small 2 School of Mathematics and Statistics, Zhaoqing University,\\ \small Zhaoqing 526061, Guangdong, P.R. China\\ \small 3 Center of Nonlinear Science, Nanjing University,\\ \small Nanjing 210093, Jiangsu, P.R. China\\} \date{} \maketitle \begin{center} \begin{minipage}{120mm} {\small {\bf Abstract.} In this paper, inspired by the article [5], we introduce the induced topological pressure for a topological dynamical system. In particular, we prove a variational principle for the induced topological pressure. } \end{minipage} \end{center} \vskip0.5cm {\small{\bf Keywords and phrases:} Induced pressure, dynamical system, variational principle.}\vskip0.5cm \section{INTRODUCTION AND MAIN RESULT } \quad \quad The present paper is devoted to the study of the induced topological pressure for topological dynamical systems. Before stating our main result, we first give some notation and background about the induced topological pressure. By a topological dynamical system (TDS) $(X,f)$, we mean a compact metric space $(X,d)$ together with a continuous map $f:X\rightarrow X.$ Recall that $C(X,\mathbb{R})$ is the Banach algebra of real-valued continuous functions on $X$ equipped with the supremum norm. For $\varphi \in C(X,\mathbb{R}), n\geq 1$, let $(S_{n}\varphi)(x):=\sum \limits_{i=0}^{n-1}\varphi(f^{i}x)$ and for $\psi \in C(X,\mathbb{R})$ with $\psi >0$, let $m:=\min\{\psi(x): x\in X\}$. We denote by $ M(X,f)$ the set of all $f$-invariant Borel probability measures on $X$ endowed with the weak-star topology. Topological pressure is a basic notion of the thermodynamic formalism. It was first introduced by Ruelle [11] for expansive topological dynamical systems, and later extended by Walters [1,9,10] to the general case. The variational principle established by Walters can be stated as follows: Let $(X,f)$ be a TDS, let $\varphi \in C(X,\mathbb{R})$, and let $ P(\varphi)$ denote the topological pressure of $\varphi.$ Then \begin{equation}\label{tag-1} P(\varphi)=\sup\{h_{\mu}(f)+\int \varphi d\mu: \mu \in M(X,f)\}, \end{equation} where $h_{\mu}(f)$ denotes the measure-theoretical entropy of $\mu.$ The theory of topological pressure and its variational principle plays a fundamental role in statistical mechanics, ergodic theory, and the theory of dynamical systems [3,9,13]. Since the works of Bowen [4] and Ruelle [12], the topological pressure has become a basic tool in the dimension theory of dynamical systems [8,14]. Recently Jaerish, Kesseb\"{o}hmer and Lamei [5] introduced the notion of the induced topological pressure of a countable Markov shift, and established a variational principle for it. One important feature of this pressure is the freedom in choosing a scaling function, and this has applications to large deviation theory and fractal geometry. In this paper we present the induced topological pressure for a topological dynamical system and consider its relation to the topological pressure. We establish a variational principle for the induced topological pressure. As an application, we will point out that the BS dimension is a special case of the induced topological pressure. Let $(X,f)$ be a TDS.
For $n\in \mathbb{N}$, the $n$th Bowen metric $d_{n}$ on $X$ is defined by $$d_{n}(x,y)=\max \{d(f^{i}(x),f^{i}(y)): i=0,1,\ldots, n-1 \}.$$ For every $\epsilon >0$, we denote by $B_{n}(x,\epsilon),\overline{B}_{n}(x,\epsilon) $ the open (resp. closed) ball of radius $\epsilon$ and order $n$ in the metric $d_{n}$ around $x$, i.e., $$B_{n}(x,\epsilon)= \{y\in X : d_{n}(x,y)<\epsilon\} \text{ and } \overline{B}_{n}(x,\epsilon)= \{y\in X : d_{n}(x,y)\leq \epsilon\}. $$ Let $Z\subseteq X$ be a non-empty set. A subset $F_{n}\subset X$ is called an $(n, \epsilon)$-spanning set of $Z$ if for any $y\in Z$, there exists $x \in F_{n} $ with $d_{n}(x,y)\leq \epsilon$.\ A subset $E_{n}\subset Z$ is called an $(n,\epsilon)$-separated set of $Z$ if $x,y\in E_{n}, x\neq y$ implies $d_{n}(x,y)>\epsilon$. Now we define a new notion, the \textit{induced topological pressure} which extends the definition in [5] for topological Markov shifts if the Markov shift is compact, as follows. \begin{defn} Let $(X,f)$ be a TDS and $ \varphi,\psi \in C(X,\mathbb{R})$ with $\psi>0$. For $ T>0$, define $$S_{T}=\{n\in \mathbb{N}: \exists x\in X \text { such that } S_{n}\psi(x)\leq T \text{ and }S_{n+1}\psi(x)>T\}.$$ For $n\in S_{T}$, define $$X_{n}=\{x\in X: S_{n}\psi(x)\leq T \text{ and }S_{n+1}\psi(x)>T \}.$$ Let $$Q_{\psi ,T}(f,\varphi, \epsilon)= \inf\left\{\sum\limits_{n\in S_{T}}\sum \limits_{x\in F_{n}}\exp (S_{n}\varphi)(x): F_{n} \ is \ an \ (n,\epsilon)\text{-spanning set of } X_{n},n\in S_{T} \right\}.$$ We define the $\psi$-induced topological pressure of $\varphi $ (with respect to $f$) by \begin{equation}\label{tag-1} P_{\psi}(\varphi)=\lim \limits_{\epsilon\rightarrow 0}\limsup \limits_{T \rightarrow \infty}\frac{1}{T}\log Q_{\psi ,T}(f,\varphi, \epsilon) \end{equation} \end {defn} \noindent {\bf Remarks.}\\ ($\romannumeral1$) Let $[\frac{T}{m}]$ denote the integer part of $\frac{T}{m}$. Then for $n\in S_{T}$, $n\leq [\frac{T}{m}]+1$, i.e., $S_T$ is a finite set.\\($\romannumeral2$) If $0<\epsilon_{1}<\epsilon_{2}$, then $Q_{\psi ,T}(f,\varphi, \epsilon_{1})\geq Q_{\psi ,T}(f,\varphi, \epsilon_{2})$, which implies the existence of the limit in (1.2) and $P_{\psi}(\varphi)> -\infty$.\\ ($\romannumeral3$) $P_1(\varphi)=P(\varphi)$. The variational principle for induced topological pressure is stated as follows. \begin{thm} Let $(X,f)$ be a TDS and $\varphi, \psi \in C(X,\mathbb{R})$ with $\psi> 0$. Then \begin{equation}\label{tag-1} P_{\psi}(\varphi)=\sup\left\{\frac{h_{\nu}(f)}{\int \psi d\nu}+\frac{\int \varphi d\nu}{\int \psi d\nu}: \nu \in M(X ,f )\right\}. \end{equation} \end{thm} This paper is organized as follows. In Section 2, we provide an equivalent definition of induced topological pressure. We prove Theorem 1.1 in Section 3. We point out that the BS dimension is a special case of the induced topological pressure in Section 4. In Section 5, we study the equilibrium measures for the induced topological pressure. \section{AN EQUIVALENT DEFINITION } \quad\quad In this section, we obtain an equivalent definition of the induced topological pressure by using separated sets (from now on, we omit the word `topological' if no confusion can arise). \begin{prop} Let $(X,f)$ be a TDS and $\varphi, \psi \in C(X,\mathbb{R})$ with $\psi>0$. 
For $T>0$, define $$P_{\psi ,T}(f,\varphi, \epsilon)= \sup\left\{\sum\limits_{n\in S_{T}}\sum \limits_{x\in E_{n}}\exp (S_{n}\varphi)(x): E_{n} \ is \ an \ (n,\epsilon)\text{-separated set of } X_n, n\in S_T \right\}.$$ Then \begin{equation}\label{tag-1}P_{\psi}(\varphi)=\lim \limits_{\epsilon\rightarrow 0}\limsup \limits_{T \rightarrow \infty}\frac{1}{T}\log P_{\psi ,T}(f,\varphi, \epsilon) \end{equation} \end{prop} \noindent\textbf{Proof.} We note that since the map $\epsilon\mapsto \limsup \limits_{T \rightarrow \infty}\frac{1}{T}\log P_{\psi ,T}(f,\varphi, \epsilon)$ is nonincreasing, the limit in (2.4) is well defined when $\epsilon\rightarrow 0$. For $n\in S_{T}$, let $E_{n}$ be an $(n,\epsilon)$-separated set of $X_{n}$ which fails to be $(n,\epsilon)$-separated when any point of $X_{n}$ is added. Then $E_{n}$ is an $(n,\epsilon)$-spanning set of $X_{n}$. Therefore $$Q_{\psi ,T}(f,\varphi, \epsilon)\leq P_{\psi ,T}(f,\varphi, \epsilon)$$ and $$P_{\psi}(\varphi)\leq \lim \limits_{\epsilon\rightarrow 0}\limsup \limits_{T \rightarrow \infty}\frac{1}{T}\log P_{\psi ,T}(f,\varphi, \epsilon).$$ To show the reverse inequality, for any $\epsilon>0$, we choose $\delta>0$ small enough so that \begin{flalign} d(x,y)\leq \frac{\delta}{2} \Rightarrow |\varphi(x)-\varphi(y)|<\epsilon. \end{flalign} For $n\in S_{T}$, let $E_{n}$ be an $(n,\delta)$-separated set of $X_{n}$ and $F_{n}$ an $(n, \frac{\delta}{2})$-spanning set of $X_{n}$. Define $\phi:E_{n}\rightarrow F_{n}$ by choosing, for each $x\in E_{n}$, some point $\phi(x)\in F_{n}$ with $d_{n}(\phi(x),x)\leq \frac{\delta}{2}$. Then $\phi$ is injective: if $\phi(x)=\phi(y)$ for $x,y\in E_{n}$, then $d_{n}(x,y)\leq \delta$, which forces $x=y$ since $E_{n}$ is $(n,\delta)$-separated. \\ Therefore, \begin{flalign*} &\sum\limits_{n\in S_{T}}\sum\limits_{y\in F_{n}}\exp(S_{n}\varphi)(y)\\ \geq&\sum\limits_{n\in S_{T}}\sum\limits_{y\in\phi E_{n}}\exp(S_{n}\varphi)(y)\\ \geq&\sum\limits_{n\in S_{T}}(\min\limits_{x\in E_{n}}\exp((S_{n}\varphi)(\phi x)-(S_{n}\varphi)(x)))\sum\limits_{x\in E_{n}}\exp(S_{n}\varphi)(x) \\\geq& \exp(-(\frac{T}{m}+1)\epsilon)\sum\limits_{n\in S_{T}}\sum\limits_{x\in E_{n}}\exp(S_{n}\varphi)(x). \end{flalign*} We conclude that $$ \lim \limits_{\delta\rightarrow 0}\limsup \limits_{T \rightarrow \infty}\frac{1}{T}\log Q_{\psi ,T}(f,\varphi, \frac{\delta}{2})\geq -\frac{1}{m}\epsilon+\lim \limits_{\delta\rightarrow 0}\limsup \limits_{T \rightarrow \infty}\frac{1}{T}\log P_{\psi ,T}(f,\varphi, \delta).$$ As $\epsilon\rightarrow 0 $, we have $$P_{\psi}(\varphi)\geq \lim \limits_{\delta\rightarrow 0}\limsup \limits_{T \rightarrow \infty}\frac{1}{T}\log P_{\psi ,T}(f,\varphi, \delta).$$ \section{ THE PROOF OF THEOREM 1.1} \quad\quad In this section, we give the proof of Theorem 1.1. Firstly, we study the relation between $P_{\psi}(\varphi)$ and $P(\varphi)$, which will be needed for the proof of Theorem 1.1. The following Theorem 3.1 is very similar to Theorem 2.1 of [5], and it generalizes that theorem in the case of a compact topological Markov shift. \begin{thm} Let $(X,f)$ be a TDS and $\varphi, \psi \in C(X,\mathbb{R})$ with $\psi> 0$.
For $T>0$, define $$G_{T}=\{n\in \mathbb{N}: \exists x\in X \text { such that } S_{n}\psi(x)> T\}.$$ For $n\in G_{T}$, define $$Y_{n}=\{x\in X: S_{n}\psi(x)>T\}.$$ Let $$R_{\psi ,T}(f,\varphi, \epsilon)= \sup\left\{\sum\limits_{n\in G_{T}}\sum \limits_{x\in E^{'}_{n}}\exp (S_{n}\varphi)(x): E^{'}_{n} \text { is \ an }(n,\epsilon)\text{-separated set of } Y_{n}, n\in G_T \right \}.$$ We have \begin{flalign} P_{\psi}(\varphi)=\inf\{\beta \in \mathbb{R}: \lim \limits_{\epsilon\rightarrow 0}\limsup \limits_{T \rightarrow \infty}R_{\psi ,T}(f,\varphi-\beta\psi, \epsilon)<\infty\}. \end{flalign} Here we make the convention that $\inf \emptyset =\infty$. \end{thm} \noindent\textbf{Proof.} For $n\in \mathbb{N},x\in X$, we define $m_n(x)$ to be the unique positive integer such that $$(m_n(x)-1)\|\psi\|<S_{n}\psi(x)\leq m_n(x)\|\psi\|.$$ Observing that $$ \exp(-\beta \|\psi\| m_n(x))\exp(-|\beta|\|\psi\|)\leq \exp(-\beta S_{n}\psi(x))\leq \exp(-\beta \|\psi\| m_n(x))\exp(|\beta|\|\psi\|)$$ for all $ x\in X$. For $\xi_T=\{\xi_n: X\to\mathbb{R}\}_{n\in G_T}$, we define \begin{flalign*} &R_{\psi ,T}(f,\varphi, \xi_T,\epsilon)\\=& \sup\left\{\sum\limits_{n\in G_{T}}\sum \limits_{x\in E^{'}_{n}}\exp ((S_{n}\varphi)(x)-\xi_n(x)): E^{'}_{n} \text { is \ an } (n,\epsilon)\text{-separated set of } Y_{n}, n\in G_T \right\}.\end{flalign*}We conclude that $$\lim \limits_{\epsilon\rightarrow 0}\limsup \limits_{T \rightarrow \infty}R_{\psi ,T}(f,\varphi-\beta \psi, \epsilon)<\infty $$ if and only if $$\lim \limits_{\epsilon\rightarrow 0}\limsup \limits_{T \rightarrow \infty}R_{\psi ,T}(f,\varphi,\{-\beta \|\psi\| m_n\}_{n\in G_T}, \epsilon)<\infty.$$ Hence, it will be sufficient to verify that $$P_{\psi}(\varphi)=\inf\{\beta \in \mathbb{R}: \lim \limits_{\epsilon\rightarrow 0}\limsup \limits_{T \rightarrow \infty}R_{\psi ,T}(f,\varphi,\{-\beta \|\psi\| m_n\}_{n\in G_{T}}, \epsilon)<\infty\}.$$ By the equivalent definition of $P_{\psi}(\varphi),$ for every $ \delta>0 , \beta\in \mathbb{R}$ with $\beta<P_{\psi}(\varphi)-\delta$, there exists an $\epsilon_{0}>0$ with $$\beta+\delta<\limsup \limits_{T \rightarrow \infty}\frac{1}{T}\log P_{\psi ,T}(f,\varphi, \epsilon)\leq P_{\psi}(\varphi), \ \ \forall \epsilon \in (0, \epsilon_{0}),$$ and we can find a sequence $\{T_{j}\}_{j\in \mathbb{N}}$ such that for every $j\in \mathbb{N},$ $T_{j+1}-T_{j}>2\|\psi\|$ and for each $j\in \mathbb{N}$, there exists an $E_{T_{j}}=\bigcup \limits_{n\in S_{T_{j}}}E_{n}$ with $$\sum\limits_{n\in S_{T_{j}}}\sum \limits_{x\in E_{n}}\exp (S_{n}\varphi)(x)\geq \exp(T_{j}(\beta+\frac{\delta}{2})).$$ Since for $j\in \mathbb{N}, n\in S_{T_{j}}, x\in E_{n}$, $T_j-\|\psi\|<S_{n}\psi(x)\leq T_j$, we have $$S_{T_i}\cap S_{T_j}=\emptyset,i\neq j$$ and $$|\|\psi\| m_n(x)-T_{j}|<2\|\psi\|.$$ It follows that \begin{flalign*} &R_{\psi ,T}(f,\varphi,\{-\beta \|\psi\| m_n\}_{n\in G_T},\epsilon)\\ \geq& \sum\limits_{j\in \mathbb{ N },\ T_{j}-\|\psi\|>T}\sum \limits_{n\in S_{T_{j}}}\sum \limits_{x\in E_{n}}\exp ((S_{n}\varphi)(x)-\beta \|\psi\| m_n(x))\\ \geq& \exp(-2|\beta|\|\psi\|)\sum\limits_{j\in \mathbb{N},\ T_{j}-\|\psi\|> T}\sum\limits_{n\in S_{T_{j}}}\sum \limits_{x\in E_{n}}\exp ((S_{n}\varphi)(x)-\beta T_{j})\\ \geq& \exp(-2|\beta|\|\psi\|)\sum\limits_{j\in \mathbb{N},\ T_{j}-\|\psi\| >T}\exp ((\beta+\frac{\delta}{2})T_{j}-\beta T_{j})\\=&\infty. 
\end{flalign*} Therefore, for all $\beta<P_{\psi}(\varphi)-\delta$, \begin{equation}\label{tag-1} \lim \limits_{\epsilon\rightarrow 0}\limsup \limits_{T \rightarrow \infty}R_{\psi ,T}(f,\varphi,\{-\beta \|\psi\| m_n\}_{n\in G_T}, \epsilon)=\infty. \end{equation} This argument is not only valid for $P_{\psi}(\varphi)\in \mathbb{R}$, but also for $P_{\psi}(\varphi)=\infty$, in which case (3.7) holds for every $\beta\in \mathbb{R}$. Then \begin{equation}\label{tag-1} P_{\psi}(\varphi)\leq \inf\{\beta \in \mathbb{R}: \lim \limits_{\epsilon\rightarrow 0}\limsup \limits_{T \rightarrow \infty}R_{\psi ,T}(f,\varphi,\{-\beta \|\psi\| m_n\}_{n\in G_T}, \epsilon)<\infty\}. \end{equation} Next, we establish the reverse inequality. We consider the case $P_{\psi}(\varphi)\in \mathbb{R}$ and show that for any $\delta>0,$ $$\lim \limits_{\epsilon\rightarrow 0}\limsup \limits_{T \rightarrow \infty}R_{\psi ,T}(f,\varphi,\{-(P_{\psi}(\varphi)+\delta) \|\psi\| m_n\}_{n\in G_T}, \epsilon)<\infty.$$ Again, by the equivalent definition of $P_{\psi}(\varphi)$, we have, for any $\epsilon>0$, $$\limsup \limits_{T \rightarrow \infty}\frac{1}{T}\log P_{\psi ,T}(f,\varphi, \epsilon)< P_{\psi}(\varphi)+\frac{\delta}{2},$$ and we can find an $l_{0}\in \mathbb{N}$ such that for all $l \in \mathbb{N}$ with $l \geq l_{0}$, $$P_{\psi,lm}(f,\varphi,\epsilon)\leq \exp(lm(P_{\psi}(\varphi)+\frac{2\delta}{3})).$$ Note that for $n\in S_{lm}, x\in E_{n}$, we have $$|\|\psi\| m_n(x)-lm|<2\|\psi\|$$ and $$-(P_{\psi}(\varphi)+\delta)\|\psi\| m_n(x)\leq -lm(P_{\psi}(\varphi)+\delta)+2|P_{\psi}(\varphi)+\delta||\psi\|.$$ Moreover, for sufficiently large $T>0, n\in G_{T}, x\in E^{'}_{n} \subset Y_n$, there exists a unique $l\in \mathbb{N}$ such that $(l-1)m<S_{n}\psi(x)\leq lm$. Obviously $S_{n+1}\psi(x)>lm$. Hence, we obtain \begin{flalign*} &R_{\psi,T}(f,\varphi,\{-(P_{\psi}(\varphi)+\delta)\|\psi\| m_n\}_{n\in G_T},\epsilon)\\ \leq& \sum\limits_{l\geq l_{0}}\sup\Bigg\{\sum\limits_{n\in S_{lm}}\sum \limits_{x\in E_{n}}\exp ((S_{n}\varphi)(x)- (P_{\psi}(\varphi)+\delta)\|\psi\| m_n(x)):\\ &\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad E_n \text{ is an } (n,\epsilon)\text{-separated set of } X_n, n\in S_{lm}\Bigg\}\\ \leq& \exp(2\|\psi\||P_{\psi}(\varphi)+\delta|)\sum\limits_{l\geq l_{0}}\exp(- (P_{\psi}(\varphi)+\delta)lm)P_{\psi,lm}(f,\varphi,\epsilon)\\ \leq& \exp(2\|\psi\||P_{\psi}(\varphi)+\delta|)\sum\limits_{l\geq l_{0}}\exp(-\frac{\delta}{3}lm)\\<&\exp(2\|\psi\||P_{\psi}(\varphi)+\delta|)\frac{1}{1-\exp (-\frac{\delta m}{3})}. \end{flalign*} This implies $$\lim \limits_{\epsilon\rightarrow 0}\limsup \limits_{T \rightarrow \infty}R_{\psi ,T}(f,\varphi,\{-(P_{\psi}(\varphi))+\delta) \|\psi\| m_n\}_{n\in G_T}, \epsilon)<\infty,$$ and hence, \begin{equation}\label{tag-1} P_{\psi}(\varphi)\geq \inf\{\beta \in \mathbb{R}: \lim \limits_{\epsilon\rightarrow 0}\limsup \limits_{T \rightarrow \infty}R_{\psi ,T}(f,\varphi,\{-\beta \|\psi\| m_n\}_{n\in G_T}, \epsilon)<\infty\}. \end{equation} Combining (3.8) and (3.9) we obtain (3.6). \begin {cor} Let $(X,f)$ be a TDS, and $\varphi, \psi \in C(X,\mathbb{R})$ with $\psi>0$. We have \begin{equation}\label{tag-1} P_{\psi}(\varphi)\geq \inf \{\beta \in \mathbb{R}: P(\varphi-\beta \psi)\leq 0\}. 
\end{equation} \end {cor} \noindent\textbf{Proof.} Let $\beta\in \{\beta \in \mathbb{R}:\lim \limits_{\epsilon\rightarrow 0}\limsup \limits_{T \rightarrow \infty}R_{\psi ,T}(f,\varphi-\beta \psi, \epsilon)<\infty\}$ and $$\lim \limits_{\epsilon\rightarrow 0}\limsup \limits_{T \rightarrow \infty}R_{\psi ,T}(f,\varphi-\beta \psi, \epsilon)=a.$$ Then for any $\epsilon>0$, $$\limsup \limits_{T \rightarrow \infty}R_{\psi ,T}(f,\varphi-\beta \psi, \epsilon)<a+1 .$$ We can find a $T_{0}>0$ such that for all $T>T_{0}$, $$R_{\psi ,T}(f,\varphi-\beta\psi, \epsilon)<a+2.$$ Now, for sufficiently large $n\in\mathbb{N}$, $$S_{n}\psi(x)>T, \ \ \forall x\in X,$$ and hence, for such $n\in G_{T}$, $E_{n}$ is an $(n, \epsilon)$-separated set of $X$ and $$\sum\limits_{x\in E_{n}}\exp (S_{n}(\varphi-\beta \psi))(x)<a+2.$$ It follows from this that $$P(\varphi-\beta \psi)\leq 0.$$ Since \begin{flalign*}&\inf \{\beta \in \mathbb{R}:\lim \limits_{\epsilon\rightarrow 0}\limsup \limits_{T \rightarrow \infty}R_{\psi ,T}(f,\varphi-\beta \psi, \epsilon)<\infty\} \\ \geq& \inf\{\beta\in\mathbb{R}:P(\varphi-\beta \psi)\leq 0\},\end{flalign*} the inequality (3.10) follows by Theorem 3.1. \begin {cor} Let $(X,f)$ be a TDS, and $\varphi, \psi \in C(X,\mathbb{R})$ with $\psi>0$. We have $$P_{\psi}(\varphi)=\inf \{\beta \in \mathbb{R}: P(\varphi-\beta\psi)\leq 0\}=\sup\{\beta \in \mathbb{R}: P(\varphi-\beta\psi)\geq 0\}.$$ \end {cor} \noindent\textbf{Proof.} If there exists a $\beta \in \mathbb{R}$ such that $P(\varphi-\beta \psi)=\infty$, then $P(\varphi-\beta \psi)=\infty$ for all $\beta \in \mathbb{R}$. By Corollary 3.1, we have $$P_{\psi}(\varphi)=\inf \{\beta \in \mathbb{R}: P(\varphi-\beta\psi)\leq 0\}=\sup\{\beta \in \mathbb{R}: P(\varphi-\beta\psi)\geq 0\}.$$ Suppose for any $\beta \in \mathbb{R}$, $P(\varphi-\beta \psi)<\infty$. By (1.1) we have $$P(\varphi-\beta \psi)=\sup \{h_{\nu}(f)+\int \varphi d\nu- \beta\int\psi d\nu: \nu\in M(X,f)\}.$$ Then for each $\beta_{1},\beta_{2} \in \mathbb{R}, \beta_{1}<\beta_{2}$ and $0<\epsilon<\frac{m(\beta_{2}-\beta_{1})}{2}$, there exists a $\mu \in M(X,f)$ such that \begin{flalign*} &\sup \{h_{\nu}(f)+\int \varphi d\nu- \beta_{2}\int\psi d\nu: \nu\in M(X,f)\} \\<& h_{\mu}(f)+\int \varphi d\mu- \beta_{2}\int\psi d \mu +\epsilon\\=&h_{\mu}(f)+\int \varphi d\mu- \beta_{1}\int\psi d \mu +\epsilon-(\beta_{2}-\beta_{1})\int\psi d\mu\\<&h_{\mu}(f)+\int \varphi d\mu- \beta_{1}\int\psi d \mu-(\beta_{2}-\beta_{1})(\int\psi d\mu-\frac{m}{2})\\ \leq &\sup \{h_{\nu}(f)+\int \varphi d\nu- \beta_{1}\int\psi d\nu: \nu\in M(X,f)\}-(\beta_{2}-\beta_{1})(\int\psi d\mu-\frac{m}{2}). \end{flalign*} Thus, the map $\beta \mapsto P(\varphi-\beta \psi)$ is strictly decreasing. Next, we prove that $$P(\varphi-\beta\psi)< 0\Longrightarrow R_{\psi ,T}(f,\varphi-\beta \psi, \epsilon)<\infty.$$ Let $P(\varphi-\beta\psi)=2a< 0$. For any $\epsilon>0$, we can find $N\in \mathbb{N}$ such that for all $n\in \mathbb{N}$ with $n\geq N$, $$\sup\limits_{E_{n}}\sum \limits_{x\in E_{n}}\exp (S_{n}(\varphi-\beta\psi))(x)\leq \exp (na),$$ where the supremum is taken over all $(n,\epsilon)$-separated sets of $X$. Consequently, for sufficiently large $T>0$, we have $$R_{\psi ,T}(f,\varphi-\beta \psi, \epsilon)\leq \sum\limits_{n\geq N}\sup\limits_{E_{n}}\sum\limits_{x\in E_{n}}\exp (S_{n}(\varphi-\beta\psi))(x)\leq \frac{1}{1-\exp(a)}<\infty,$$ and the conclusion holds. 
\\ Since $$\inf \{\beta \in \mathbb{R}: P(\varphi-\beta\psi)< 0\}\geq \inf\{\beta \in \mathbb{R}: \lim \limits_{\epsilon\rightarrow 0}\limsup \limits_{T \rightarrow \infty}R_{\psi ,T}(f,\varphi-\beta \psi, \epsilon)<\infty\},$$ by Theorem 3.1 and Corollary 3.1, we conclude that \begin{flalign*}\inf \{\beta \in \mathbb{R}: P(\varphi-\beta\psi)\leq 0\}&= \inf \{\beta \in \mathbb{R}: P(\varphi-\beta\psi)< 0\}\\ &=\sup\{\beta \in \mathbb{R}: P(\varphi-\beta\psi)\geq 0\}.\end{flalign*} \begin{cor} Let $(X,f)$ be a TDS and $\varphi, \psi \in C(X,\mathbb{R})$ with $\psi>0$. Suppose that for each $\beta\in \mathbb{R}$ we have $P(\varphi-\beta\psi)\in \mathbb{R}$. Then $P(\varphi-P_{\psi}(\varphi)\psi)=0$. \end{cor} \noindent\textbf{Proof.} By the proof of Corollary 3.2 the map $\beta \mapsto P(\varphi-\beta \psi)$ is a strictly decreasing, continuous map on $ \mathbb{R}$. Hence $P(\varphi-P_{\psi}(\varphi)\psi)=0$. We are now ready to prove Theorem 1.1.\\ \emph{Proof of Theorem 1.1}. Firstly, we show \begin{equation} P_{\psi}(\varphi)\geq \sup \{\frac{h_{\nu}(f)}{\int \psi d\nu}+\frac{\int \varphi d\nu}{\int \psi d\nu}: \nu \in M(X,f)\}. \end{equation} By Corollary 3.1 we have $0\geq P(\varphi-\beta\psi)$ for $\beta> P_{\psi}(\varphi)$. It follows from (1.1) that \begin{flalign*} 0&\geq P(\varphi-\beta\psi)\\&= \sup \{h_{\nu}(f)+\int \varphi d\nu- \beta \int\psi d\nu: \nu\in M(X,f)\} \\&=\sup \{\int \psi d\nu(\frac{h_{\nu}(f)}{\int \psi d\nu}+\frac{\int \varphi d\nu}{\int \psi d\nu}-\beta): \nu \in M(X,f)\}, \end{flalign*} and hence (3.11) holds. Next, we establish the reverse inequality. Similarly by Corollary 3.2 we have $ P(\varphi-\beta\psi)\geq 0$ for $ \beta<P_{\psi}(\varphi)$. Then \begin{flalign*} &P(\varphi-\beta\psi)\\=& \sup \{h_{\nu}(f)+\int \varphi d\nu- \beta \int\psi d\nu: \nu\in M(X,f)\} \\=&\sup \{\int \psi d\nu(\frac{h_{\nu}(f)}{\int \psi d\nu}+\frac{\int \varphi d\nu}{\int \psi d\nu}-\beta): \nu \in M(X,f)\}\\ \geq& 0. \end{flalign*} It is easy to see that \begin{equation} P_{\psi}(\varphi)\leq\sup \{\frac{h_{\nu}(f)}{\int \psi d\nu}+\frac{\int \varphi d\nu}{\int \psi d\nu}: \nu \in M(X,f)\}. \end{equation} Combining (3.11) and (3.12), we obtain (1.3). \section{ A SPECIAL CASE (BS-DIMENSION)} \quad\quad In this section we will show that the BS dimension with Carath\'{e}odory structure is a special case of the induced pressure. The BS dimension was first defined by Barreira and Schmeling [2] as follows. For $n\geq1, \epsilon>0$, we put $$\mathcal{W}_{n}(\epsilon)=\{B_{n}(x,\epsilon):x\in X\}.$$ For any $B_{n}(x,\epsilon)\in\mathcal{W}_{n}(\epsilon), \psi\in C(X,\mathbb{R})$ with $\psi>0$, the function $\psi$ induces a set function by $$\psi(B)=\sup\limits_{x\in B}(S_{n}\psi)(x).$$ We say that $\mathcal{G}\subset \cup_{j\geq N}\mathcal{W}_{j}(\epsilon)$ covers $X$ if $\bigcup\limits_{B\in\mathcal{G}}B=X$. \begin{defn} Let $(X,f)$ be a TDS. For any $\alpha>0, N\in\mathbb{N}$ and $\epsilon>0$, we define $$M(\alpha,\epsilon,N)=\inf \limits_{\mathcal{G}}\{\sum\limits_{B\in\mathcal{G}}\exp(-\alpha \psi(B))\},$$ where the infimum is taken over all finite $\mathcal{G}\subset \cup_{j\geq N}\mathcal{W}_{j}(\epsilon)$ that cover $X.$ Obviously $M(\alpha,\epsilon,N)$ is a finite outer measure on $X$ and increases as $N$ increases.
Define $$m(\alpha,\epsilon)=\lim\limits_{N \rightarrow \infty}M(\alpha,\epsilon,N)$$ and $$\dim_{BS}(X,\epsilon)=\inf \{\alpha:m(\alpha,\epsilon)=0\}=\sup\{\alpha:m(\alpha,\epsilon)=\infty\}.$$ The BS dimension is $\dim_{BS}X=\lim \limits_{\epsilon\rightarrow 0}\dim_{BS}(X,\epsilon):$ this limit exists because given $\epsilon_{1}<\epsilon_{2},$ we have $m(\alpha,\epsilon_{1})\geq m(\alpha,\epsilon_{2})$, so $\dim_{BS}(X,\epsilon_{1})\geq\dim_{BS}(X,\epsilon_{2})$. \end {defn} \begin{prop} For a TDS, we have $P_{\psi}(0)=\dim_{BS}X$. \end{prop} \noindent\textbf{Proof.} By [2, Proposition 6.4], we have $P(-\psi\dim_{BS}X)=0$. Now it follows from Corollary 3.3 that $P_{\psi}(0)=\dim_{BS}X$. \section{EQUILIBRIUM MEASURES AND GIBBS MEASURES } \quad\quad In this section we consider the problem of the existence of equilibrium measures for the induced pressure. We also study the relation between Gibbs measures and equilibrium measures for the induced pressure in the particular case of symbolic dynamics. \begin{defn} Let $(X,f)$ be a TDS and $\varphi,\psi\in C(X,\mathbb{R})$ with $\psi>0$. A member $\mu$ of $M(X,f)$ is called an equilibrium measure for $\psi$ and $\varphi$ if $P_{\psi}(\varphi)=\frac{h_{\mu}(f)+\int \varphi d\mu}{\int \psi d\mu}.$ We will write $M_{\psi,\varphi}(X,f)$ for the collection of all equilibrium measures for $\psi$ and $\varphi$. \end{defn} \begin{defn} Let $(X,f)$ be a TDS. Then $f$ is said to be positively expansive if there exists $\epsilon>0$ such that $x=y$ whenever $d(f^{n}(x),f^{n}(y))<\epsilon$ for every $n\in\mathbb{N}\cup \{0\}$. \end{defn} The entropy map of a TDS is the map $\mu\mapsto h_{\mu}(f)$, which is defined on $M(X,f)$ and has values in $[0,\infty]$. The entropy map $\mu\mapsto h_{\mu}(f)$ is called upper semi-continuous if given a measure $\mu \in M(X,f)$ and $\delta>0$, we have $h_{\nu}(f)<h_{\mu}(f)+\delta$ for any measure $\nu\in M(X,f)$ in some open neighborhood of $\mu$. Now we show that any positively expansive map has equilibrium measures. \begin{prop} Let $(X,f)$ be a TDS and $\varphi,\psi\in C(X,\mathbb{R})$ with $\psi>0$. Then {\em ($\romannumeral1$)} If $f$ is a positively expansive map, then $M_{\psi,\varphi}(X,f)$ is compact and non-empty. {\em ($\romannumeral2$)} If $\varphi,\phi,\psi\in C(X,\mathbb{R})$ with $\psi>0$ and if there exists a $c\in \mathbb{R}$ such that $$\varphi-\phi-c\int\psi d\mu\in \overline{\{\tau\circ f -\tau:\tau\in C(X,\mathbb{R})\}}$$ for each $\mu\in M(X,f)$, then $M_{\psi,\varphi}(X,f)=M_{\psi,\phi}(X,f)$. \end{prop} \noindent\textbf{Proof.} \noindent($\romannumeral1$) For a positively expansive map $f$, it follows from the proof in [1,9] that the map $\mu \mapsto h_{\mu}(f)$ is upper semi-continuous. Then $\mu\mapsto \frac{h_{\mu}(f)}{\int \psi d\mu}$ is upper semi-continuous. Since the map $$\mu\mapsto \int \frac{\varphi}{\int\psi d\mu}d\mu$$ is continuous for each $\varphi\in C(X,\mathbb{R})$, the map $$\mu\mapsto \frac{h_{\mu}(f)+\int\varphi d\mu}{\int\psi d\mu}$$ is upper semi-continuous. Since an upper semi-continuous map has a maximum on any compact set, it follows from Theorem 1.1 that $M_{\psi,\varphi}(X,f)\neq \emptyset$. The upper semi-continuity also implies $M_{\psi,\varphi}(X,f)$ is compact because if $\mu_{n}\in M_{\psi,\varphi}(X,f)$ and $\mu_{n}\rightarrow \mu \in M(X,f)$, then $$\frac{h_{\mu}(f)+\int \varphi d\mu}{\int \psi d\mu}\geq \limsup\limits_{n\rightarrow\infty}\frac{h_{\mu_{n}}(f)+\int \varphi d\mu_{n}}{\int \psi d\mu_{n}}=P_{\psi}(\varphi),$$ so $\mu\in M_{\psi,\varphi}(X,f)$.
\\ ($\romannumeral2$) Note that for each $\mu\in M(X,f)$ $$\frac{h_{\mu}(f)+\int\varphi d\mu}{\int\psi d\mu}=\frac{h_{\mu}(f)+\int\phi d\mu}{\int\psi d\mu}+c,$$ therefore $P_{\psi}(\varphi)=P_{\psi}(\phi)+c$, hence $M_{\psi,\varphi}(X,f)=M_{\psi,\phi}(X,f)$. Next, we consider symbolic dynamics. Let $(\Sigma_{A} ,\sigma)$ be a one-sided \textit{topological Markov shift}\ (TMS, for short) over a finite set $S=\{1,2,\ldots, k\}$. This means that there exists a matrix $A=(t_{ij})_{k\times k}$ of zeros and ones (with no row or column made entirely of zeros) such that $$\Sigma_{A} =\{{\omega =(i_{1},i_{2},\ldots)\in S^{\mathbb{N}}:t_{i_{j}i_{j+1}}=1 \text{ for every } j\in \mathbb{N}}\}. $$ The \textit{shift map } $\sigma :\Sigma_{A} \rightarrow\Sigma_{A}$ is defined by $(i_{1},i_{2},i_{3}\ldots)\mapsto (i_{2},i_{3},\ldots)$. We call $C_{i_{1}\ldots i_{n}}=\{(j_{1}j_{2}\ldots) \in \Sigma_{A} :j_{l}=i_{l} \text{ for }l=1,\ldots,n\}$ the \textit{cylinder set} of length $n$. We equip $\Sigma_{A}$ with the topology generated by the cylinder sets. The topology of a TMS is metrizable and may be given by the metric $d_{\alpha}(\omega ,\omega')=e^{-\alpha|\omega\wedge\omega'|}, \alpha>0$, where $|\omega\wedge\omega'|$ denotes the length of the longest common initial block of $\omega ,\omega'\in \Sigma_{A}$. The shift map $\sigma$ is continuous with respect to this metric. A TMS $(\Sigma_{A},\sigma)$ is called topologically mixing if for every $a, b \in S$, there exists an $N_{ab}\in\mathbb{N}$ such that for every $n>N_{ab}$, we have $C_{a}\cap \sigma^{-n}C_{b}\neq\emptyset$. \begin{defn} Let $(\Sigma_{A} ,\sigma)$ be a TMS and $\varphi,\psi\in C(\Sigma_{A},\mathbb{R})$ with $\psi>0$. We say that a probability measure $\mu$ in $\Sigma_{A}$ is a Gibbs measure for $\psi$ and $\varphi$ if there exists a $K>1$ such that $$K^{-1}\leq \frac{\mu(C_{i_{1}\ldots i_{n}})}{\exp [-(S_{n}\psi)(\omega) P_{\psi}(\varphi)+(S_{n}\varphi)(\omega)]}\leq K$$ for each $(i_{1},i_{2},\ldots)\in \Sigma_{A}, n\in \mathbb{N}$ and $\omega\in C_{i_{1}\ldots i_{n}}$. \end{defn} We show that $\sigma$-invariant Gibbs measures are equilibrium measures. Arguing as in the proof of [1, Theorem 3.4.2], we obtain the following statement: \begin{prop} If a probability measure $\mu$ in $(\Sigma_{A},\sigma)$ is a $\sigma$-invariant Gibbs measure for $\varphi$ and $\psi$, then it is also an equilibrium measure for $\psi$ and $\varphi$. \end{prop} Now we establish the existence of Gibbs measures. \begin{prop} Let $(\Sigma_{A},\sigma)$ be a topologically mixing TMS. Suppose that $\varphi$ and $\psi$ are H$\ddot{\text{o}}$lder continuous functions and $\psi>0$. Then there exists at least one $\sigma$-invariant Gibbs measure for $\psi$ and $\varphi$. \end{prop} \noindent\textbf{Proof.} By Corollary 3.3 we have $$P(\varphi-P_{\psi}(\varphi)\psi)=0.$$ As $\varphi-P_{\psi}(\varphi)\psi$ is H$\ddot{\text{o}}$lder continuous, it follows from [1, Theorem 3.4.4] that there exist a $\sigma$-invariant probability measure $\mu$ and a $K>1$ such that $$K^{-1}\leq \frac{\mu(C_{i_{1}\ldots i_{n}})}{\exp [-n P(\varphi-P_{\psi}(\varphi)\psi)-(S_{n}\psi)(\omega) P_{\psi}(\varphi)+(S_{n}\varphi)(\omega)]}\leq K.$$ Since $P(\varphi-P_{\psi}(\varphi)\psi)=0$, this is precisely the Gibbs property for $\psi$ and $\varphi$, so $\mu$ is a $\sigma$-invariant Gibbs measure for $\psi$ and $\varphi$. \noindent {\bf ACKNOWLEDGEMENTS.} This research was supported by the National Natural Science Foundation of China (Grant No. 11271191) and the National Basic Research Program of China (Grant No. 2013CB834100). We would like to thank the referee for very useful comments and helpful suggestions. The first author would like to thank Dr. Zheng Yin for useful discussions. \end{document}
\begin{document} \begin{abstract} We obtain a cohesive fracture model as a $\Gamma$-limit of scalar damage models in which the elastic coefficient is computed from the damage variable $v$ through a function $f_k$ of the form $f_k(v)=\mathrm{min}\{1,\varepsilon_k^{1/2} f(v)\}$, with $f$ diverging for $v$ close to the value describing undamaged material. The resulting fracture energy can be determined by solving a one-dimensional vectorial optimal profile problem. It is linear in the opening $s$ at small values of $s$ and has a finite limit as $s\to\infty$. If the function $f$ is allowed to depend on the index $k$, for specific choices we recover in the limit Dugdale's and Griffith's fracture models, and models with surface energy density having a power-law growth at small openings. \end{abstract} \maketitle \section{Introduction}\label{introd} The modeling of fracture in materials leads naturally to function spaces allowing for discontinuities, in particular functions of bounded variation ($BV$) and of bounded deformation ($BD$). In variational models, the key ingredients are a volume term, corresponding to the stored energy and depending on the diffuse part of the deformation gradient, and a surface term, modeling the fracture energy and depending on the jump part of the deformation gradient \cite{fra-mar,BFM,barenblatt,dug}. For antiplane shear models one can consider a scalar displacement $u\in BV(\Omega)$; typical models take the form \begin{equation}\label{eqfrattura} \int_\Omega h(|\nabla u|) dx + \kappa\,|D^cu|(\Omega) + \int_{\Omega\cap J_u} g(|[u]|) d{\mathcal H}^{n-1}\,. \end{equation} Here $h$ represents the strain energy density, quadratic near the origin; $g$ is the surface energy density depending on the opening $[u]$ of the crack, and $\kappa\in[0,+\infty]$ is a constant related to the slope of $g$ at 0 and the slope of $h$ at $\infty$. In models of brittle fracture one usually considers $g$ to be a constant, given by twice the energy required to generate a free surface \cite{fra-mar,BFM}. Correspondingly $\kappa=\infty$ and the Cantor part $D^cu$ disappears, so that one can assume $u\in SBV$. Physically this represents a situation in which already for the smallest opening there is no interaction between the two sides of the fracture, and surface reconstruction is purely local. Analytically, the resulting functional coincides with the Mumford-Shah functional from image segmentation. In ductile materials fracture proceeds through the opening of a series of voids, separated by thin filaments which produce a weak bond between the surfaces at moderate openings \cite{barenblatt,dug,FokouaContiOrtiz2014}. The function $g$ then grows continuously from $g(0)=0$ to some finite value $g(\infty)$, representing the energetic cost of total fracture. The constant $\kappa$ is its slope at 0, and $D^cu$ represents the distribution of microcracks. For the same reason the volume energy density $h$ becomes linear at $\infty$. A large literature has been devoted to the derivation of models like (\ref{eqfrattura}) from more regular models, like damage or phase field models, mainly within the framework of $\Gamma$-convergence. These regularizations can be interpreted as microscopic physical models, so that the $\Gamma$-limit justifies the macroscopic model (\ref{eqfrattura}), or as regularizations used for example to approximate (\ref{eqfrattura}) numerically.
Ambrosio and Tortorelli \cite{amb-tort1,amb-tort2} have shown that \begin{equation}\label{eqAT} \int_\Omega \left( (v^2+o(\varepsilon)) |\nabla u|^2 + \frac{(1-v)^2}{4 \varepsilon} + \varepsilon|\nabla v|^2 \right) dx \end{equation} $\Gamma$-converges to the Mumford-Shah functional, which coincides with (\ref{eqfrattura}) with $h(t)=t^2$, $\kappa=\infty$, $g(t)=1$. This result was extended in many directions, for example to vector-valued functions \cite{focardi,focardi_tesi}, to linearized elasticity \cite{chambolle,addendum,iur12}, to second-order problems \cite{AFM}, to vectorial problems \cite{Sh}, and to models with nonlinear injectivity constraints \cite{HenaoMoracorralXu}; for numerical simulations we refer to \cite{bel-cos,bou-fra-mar2,bou,bur-ort-sul,bur-ort-sul2}. There is also a large numerical literature on the application to computer vision, see for example \cite{Fusco2003,BarSochenKiryati2006} and references therein. Models like (\ref{eqfrattura}) with a linear $h$ were obtained in \cite{AlicBrShah, AlFoc}. The case with a quadratic $h$ and an affine $g$, i.e. $h(t)=t^2$, $g(t)=t+c$, and $\kappa=+\infty$, was used in \cite{amb-lem-roy} to describe a plastic process with strain localization. From the mathematical point of view the functional was obtained as the limit of models like (\ref{eqAT}) with an additional term linear in $|\nabla u|$. In \cite{dm-iur,iur} the asymptotic behavior of a generalization of \eqref{eqAT} with different scalings of the three terms was analyzed. In one of the several regimes identified, the limiting model again exhibited an affine $g$. The result was then extended to the vectorial case in \cite{foc-iur}. In \cite{iur} a different scaling of the parameters led to Hencky's diffuse plasticity, i.e. to a model like (\ref{eqfrattura}) with a linear $g$. This functional can be used to describe ductile fracture only at small openings. Discrete models for fracture were studied for example in \cite{bra-gar-dm,BraidesGelli2006}. Up to now, we are unaware of any result in which a ductile fracture model with $g$ continuous and bounded, as described above, has been derived. In this work we study a damage model as proposed by Pham and Marigo \cite{PM1,PM2} (cp. Remark~\ref{r:gen}), namely, \begin{equation}\label{eqPM} F_\varepsilon(u,v):=\int_\Omega \left( f_\varepsilon^2(v) |\nabla u|^2 + \frac{(1-v)^2}{4 \varepsilon} + \varepsilon|\nabla v|^2 \right) dx, \end{equation} with $u,v\in H^1(\Omega)$, $0\leq v\leq 1$ $\mathcal{L}^n\text{-a.e.\ in } \Omega$, and $F_\varepsilon(u,v):=\infty$ otherwise, and show that it $\Gamma$-converges to a cohesive fracture model like (\ref{eqfrattura}), where $g$ is a continuous bounded function with $g(0)=0$, which is linear close to the origin. The potential $f_\varepsilon:[0,1)\to[0,+\infty]$ in \eqref{eqPM} is defined by \begin{equation}\label{e:feintro} f_\varepsilon(s):=1\wedge \varepsilon^{1/2}f(s), \end{equation} where $f\in C^0([0,1),[0,+\infty))$ is nondecreasing, $f^{-1}(0)=\{0\}$, and it satisfies \begin{equation}\label{eqdivergencefintro} \lim_{s\to1}(1-s)f(s)=\ell, \quad \ell\in(0,+\infty). \end{equation} Our main result describes the asymptotic behavior of $(F_\varepsilon)$ as follows. \begin{theorem}\label{t:mainintro} Let $\Omega\subset{\mathbb R}^n$ be a bounded Lipschitz set.
Then, the functionals $F_\varepsilon$ $\Gamma$-converge in $L^1(\Omega){\times}L^1(\Omega)$ to the functional $F$ defined by \begin{equation*} F(u,v):=\begin{cases} \displaystyle\int_\Omega h(|\nabla u|)dx+\int_{J_{u}}g(|[u]|)d{\mathcal H}^{n-1}+\ell|D^cu|(\Omega) & \textrm{if $v=1$ $\mathcal{L}^n\text{-a.e.\ in } \Omega$, $u\in GBV(\Omega)$}\cr + \infty & \text{otherwise}. \end{cases} \end{equation*} Here the volume energy density $h$ is set as $h(s):=s^2$ if $s\leq\ell/2$ and as $h(s):=\ell s-\ell^2/4$ otherwise, while the surface energy density $g$ is given by \begin{alignat}1\nonumber g(s):=\inf & \left\{ \int_0^1|1-\beta|\sqrt{f^2(\beta)|\alpha'|^2+|\beta'|^2}\,dt:\,(\alpha,\beta) \in H^1\big((0,1)\big),\right. \\&\left. \phantom{\int}\hskip25mm \alpha(0)=0,\ \alpha(1)=s,\ \beta(0)=\beta(1)=1 \right\}.\label{eqintrodefg} \end{alignat} \end{theorem} The key difference with the previously discussed work is that in our case the optimal profiles for the damage variable $v$ and the elastic displacement $u$ cannot be determined separately. They instead arise from a joint vectorial minimization problem which defines the cohesive energy $g$, specified in \eqref{eqintrodefg}. Figures \ref{fig1} and \ref{fig2} show the behavior of $f_\varepsilon$ and $g$ in the case $f(s)=s/(1-s)$, Figure \ref{fig3} shows the profiles $\alpha$, $\beta$ entering (\ref{eqintrodefg}). Theorem~\ref{t:mainintro}, in the equivalent formulation given in Theorem \ref{t:gamma-lim} below, is proved first in the one-dimensional case in Section~\ref{s:onedim}, relying on elementary arguments in which we estimate separately the diffuse and jump contributions, and then extended to the general $n$-dimensional setting in Section~\ref{s:ndim}. This extension is obtained by means of several tools. A slicing technique and the above mentioned one-dimensional result are the key for the lower bound inequality. Instead, the upper bound inequality is proved through the direct methods of $\Gamma$-convergence on $SBV$, i.e. abstract compactness results and integral representation of the corresponding $\Gamma$-limits. The latter methods are complemented with an ad-hoc one-dimensional construction to match the lower bound on $SBV$ and a relaxation procedure to prove the result on $BV$. Finally, the extension to $GBV$ is obtained via a simple truncation argument. The issues of equi-coercivity of $F_\varepsilon$ and the convergence of the related minima are dealt with in Theorem~\ref{t:comp} and Corollary~\ref{c:min} below, respectively. Qualitative properties of the surface energy density $g$ defined in \eqref{eqintrodefg} are analyzed in Section~\ref{s:gprop}. Its monotonicity, sublinearity, boundedness and linear behavior in the origin are established in Proposition~\ref{11}. Proposition~\ref{p2} characterizes $g$ by means of an asymptotic cell formula particularly convenient in the proof of the $\Gamma$-limsup inequality. Furthermore, the dependence of $g$ on $f$ is analyzed in detail in Proposition~\ref{p:gl}. The latter results on one hand show the variety of such a class of functions, and on the other hand are instrumental to handle the approximation of other models. In Section~\ref{s:further} we discuss how the phase field approximation scheme can be used to approximate different fracture models. 
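To get a first, heuristic idea of the optimal profile problem (\ref{eqintrodefg}), it is useful to test it on two simple competitors; the following computation is only formal, and the precise statements are contained in Proposition~\ref{11} below. Keeping $\beta\equiv1$ and taking $\alpha$ affine, the integrand is understood as $\ell|\alpha'|$, in agreement with (\ref{eqdivergencefintro}), and one obtains
\[
g(s)\le \int_0^1 \ell|\alpha'|\,dt=\ell s\,.
\]
Alternatively, one can let $\beta$ decrease monotonically from $1$ to $0$ while $\alpha$ is constant, then move $\alpha$ from $0$ to $s$ while $\beta=0$, which costs nothing since $f(0)=0$, and finally bring $\beta$ back to $1$; each monotone transition of $\beta$ costs $\int_0^1(1-v)\,dv=1/2$, so that
\[
g(s)\le 2\int_0^1(1-v)\,dv=1\,.
\]
Together, these competitors explain the bound $g(s)\le 1\wedge\ell s$ and the qualitative behavior shown in Figure~\ref{fig2}. We now describe the generalizations of Section~\ref{s:further} in more detail.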
We first consider damage functions of the form \begin{equation*} f_k(s):=\min\{1, \varepsilon_k^{1/2} \max\{f(s), a_ks\}\} \end{equation*} and show that if $a_k\to\infty$ and $a_k \varepsilon_k^{1/2}\to0$ then a similar result holds with the limiting surface energy $g(s)=1\wedge (\ell s)$, so that (\ref{eqfrattura}) reduces to Dugdale's fracture model (Theorem~\ref{t:Dugdale} in Section \ref{subsecdugdale}). Secondly we consider a situation in which $f$ diverges with exponent $p>1$ close to $s=1$, so that (\ref{eqdivergencefintro}) is replaced by \begin{equation*} \lim_{s\to1}(1-s)^p f(s)=\ell\,. \end{equation*} Also in this case the functionals $\Gamma$-converge to a problem of the form (\ref{eqfrattura}); in this case, however, the fracture energy $g$ turns out to be proportional to the opening $s$ to the power $2/(p+1)$ at small $s$. Correspondingly the coefficient $\kappa$ of the diffuse part is infinite, so that the limiting problem is framed in the space GSBV, see Theorem \ref{t:sublin} in Section \ref{subsectpowerlaw}. Finally we show that if $f_k(s)$ diverges as $\ell_k/(1-s)$, with $\ell_k\to\infty$, then Griffith's fracture model is recovered in the limit, see Theorem \ref{t:MS} in Section \ref{subsecgriffith} below. We finally summarize the structure of the paper. In Section~\ref{s:nota} we introduce some notation, some preliminaries, and the functional setting of the problem. The main result of the paper is stated in Section~\ref{s:stat}, where we also discuss the convergence of related minimum problems and minimizers. Our $\Gamma$-convergence result relies on several properties of the surface energy density $g$ that are established in Section~\ref{s:gprop}. The proof is then given first in the one-dimensional case in Section~\ref{s:onedim} and then in $n$ dimensions in Section \ref{s:ndim}. The three generalizations are discussed and proven in Section~\ref{s:further}. \begin{figure} \caption{Sketch of the function $f_\varepsilon(s)$ for the prototypical case $f(s)=s/(1-s)$.} \label{fig1} \end{figure} \begin{figure} \caption{Sketch of the function $g(s)$ defined in (\ref{eqintrodefg}).} \label{fig2} \end{figure} \begin{figure} \caption{Optimal profiles $(\alpha/s,\beta)$ obtained numerically from the minimization in the definition of $g(s)$, see (\ref{eqintrodefg}).} \label{fig3} \end{figure} \section{Notation and preliminaries}\label{s:nota} Let $n\geq1$ be a fixed integer. We denote the Lebesgue measure and the $k$-dimensional Hausdorff measure in $\mathbb{R}^n$ by $\mathcal{L}^n$ and $\mathcal{H}^k$, respectively. Given an open bounded set $\Omega\subset{\mathbb R}^n$ with Lipschitz boundary, we define $\mathcal{A}(\Omega)$ as the set of all open subsets of $\Omega$. Throughout the paper $c$ denotes a generic positive constant that can vary from line to line. \subsection{\texorpdfstring{\boldmath{$\Gamma$}}{Gamma}- and {\texorpdfstring{\boldmath{$\overline{\Gamma}$}}{Gamma bar}-convergence}} Given an open set $\Omega\subset{\mathbb R}^n$ and a sequence of functionals $\mathscr{F}_k:X\times\mathcal{A}(\Omega)\to [0,+\infty]$, $(X,d)$ a separable metric space, such that the set function $\mathscr{F}_k(u;\cdot)$ is nondecreasing on the family $\mathcal{A}(\Omega)$ of open subsets of $\Omega$, set \[ \mathscr{F}'(\cdot; A):=\Gamma\hbox{-}\liminf_{k\to+\infty} \mathscr{F}_k(\cdot; A),\quad \mathscr{F}''(\cdot; A):=\Gamma\hbox{-}\limsup_{k\to+\infty} \mathscr{F}_k(\cdot; A) \] for every $A\in\mathcal{A}(\Omega)$.
If $A=\Omega$ we drop the set dependence in the above notation. Moreover we recall that if $\mathscr{F}=\mathscr{F}'=\mathscr{F}''$ we say that $\mathscr{F}_k$ $\Gamma$-converges to $\mathscr{F}$ (with respect to the metric $d$). Next we recall the notion of \emph{$\overline\Gamma$-convergence}, useful in particular to deal with the integral representation of $\Gamma$-limits of families of integral functionals. We say that $(\mathscr{F}_k)$ $\overline\Gamma$ -\emph{converges} to $\mathscr{F}:X\times \mathcal{A}(\Omega)\to [0,+\infty]$ if $\mathscr{F}$ is the inner regular envelope of both functionals $\mathscr{F}'$ and $\mathscr{F}''$, i.e., \[ \mathscr{F}(u;A)=\sup\{\mathscr{F}'(u;A^\prime):\, A^\prime\in \mathcal{A}(\Omega),\, A^\prime \subset\subset A\} =\sup \{ \mathscr{F}''(u;A^\prime): A^\prime\in \mathcal{A}(\Omega),\,A^\prime\subset\subset A\}, \] for every $(u,A)\in X\times \mathcal{A}(\Omega)$. \subsection{Functional setting of the problem} All the results we shall prove in what follows will be set in the spaces $BV$ and $SBV$ and in suitable generalizations. For the definitions, the notations and the main properties of such spaces we refer to the book \cite{ambrosio}. Below we just recall the definition of $SBV^2(\Omega)$ that we shall often use in the sequel: \[ SBV^2(\Omega):= \big\{u\in SBV(\Omega):\nabla u\in L^2(\Omega)\text{ and }\mathcal{H}^{n-1}(J_u)<+\infty\big\}. \] Moreover, a function $u:\Omega\to{\mathbb R}$ belongs to $GBV(\Omega)$ (respectively to $GSBV(\Omega)$) if the truncations $u^M:=(-M)\vee(u\wedge M)$ belong to $BV_{\textsl{loc}}(\Omega)$ (respectively to $SBV_{\textsl{loc}}(\Omega)$), for every $M>0$. For fine properties of $GBV$ and $GSBV$ again we refer to \cite{ambrosio}. The prototype of the asymptotic result we shall prove in Sections~\ref{s:onedim}, \ref{s:ndim}, and \ref{s:further} concerns the Mumford-Shah functional of image segmentation \be{\label{e:MS} {M\!S}(u):=\begin{cases} \displaystyle{\int_{\Omega}|\nabla u|^2dx+{\mathcal H}^{n-1}(J_u)} & \textrm{if $u\in GSBV(\Omega)$,}\cr +\infty & \textrm{otherwise in $L^1(\Omega)$}. \end{cases} } Let $\psi:[0,1]\to[0,1]$ be any nondecreasing lower-semicontinuous function such that $\psi^{-1}(0)=\{0\}$ and $\psi(1)=1$. Then the classical approximation by Ambrosio and Tortorelli (cp. \cite{amb-tort1, amb-tort2}, and also \cite{focardi}) establishes that the two-field functionals $AT_k^\psi:L^1(\Omega)\times L^1(\Omega)\to[0,+\infty]$ \begin{equation}\label{e:ATk} AT_k^\psi(u,v):=\begin{cases} \displaystyle{\int_\Omega{\Big(\psi^2(v)|\nabla u|^2+\frac{(1-v)^2}{4\varepsilon_k} +\varepsilon_k|\nabla v|^2\Big) dx}} & \textrm{if $(u,v)\in H^1(\Omega){\times}H^1(\Omega)$}\cr & \text{and $0\leq v\leq1$ $\mathcal{L}^n\text{-a.e.\ in } \Omega$,}\\ +\infty &\text{otherwise} \end{cases} \end{equation} $\Gamma$-converge in $L^1(\Omega){\times}L^1(\Omega)$ to \[ \widetilde{{M\!S}}(u,v):=\begin{cases} {M\!S}(u) & \textrm{if $v=1$ $\mathcal{L}^n\text{-a.e.\ in } \Omega$,}\cr +\infty & \text{otherwise}, \end{cases} \] which is equivalent to the Mumford-Shah functional ${M\!S}$ for minimization purposes. We finally introduce the notation related to slicing. Given $\xi\in \mathbb{S}^{n-1}:=\{\xi\in\mathbb{R}^n:|\xi|=1\}$, let $\Pi^\xi:=\big\{y\in\mathbb{R}^n:y\cdot\xi=0\big\}$, and for every subset $A\subset {\mathbb R}^n$ set \bes{ & A_y^\xi:=\big\{t\in\mathbb{R}:y+t\xi\in A\big\}\quad \text{ for $y\in \Pi^\xi$},\\ & A^\xi:=\{y\in\Pi^\xi:A^\xi_y\neq\varnothing\}.
} For $u:\Omega\to{\mathbb R}$ we define the slices $u^\xi_y:\Omega_y^\xi\to\mathbb{R}$ by $ u^\xi_y(t):=u(y+t\xi). $ Observe that if $u_k,u\in L^1(\Omega)$ and $u_k\to u$ in $L^1(\Omega)$, then for every $\xi\in \mathbb{S}^{n-1}$ there exists a subsequence $(u_{k_j})$ such that \[ (u_{k_j})^\xi_y\to u^\xi_y\text{ in }L^1(\Omega^\xi_y)\quad \textrm{for $\mathcal{H}^{n-1}$-a.e.\ $y\in\Omega^\xi$}. \] \section{The main results: approximation, compactness and convergence of minimizers} \label{s:stat} Given a bounded open set $\Omega\subset{\mathbb R}^n$ with Lipschitz boundary and an infinitesimal sequence $\varepsilon_k>0$, we consider the sequence of functionals $F_k\colon L^1(\Omega){\times}L^1(\Omega)\to [0,+\infty]$ \begin{equation}\label{Fk} F_k(u,v):=\left\{ \begin{array}{ll} \ {\displaystyle\int_\Omega{\Big(f_k^2(v)|\nabla u|^2+\frac{(1-v)^2}{4\varepsilon_k} +\varepsilon_k|\nabla v|^2\Big) dx}} & \textrm{if $(u,v)\in H^1(\Omega){\times}H^1(\Omega)$}\\ & \text{and $0\leq v\leq1$ $\mathcal{L}^n\text{-a.e.\ in } \Omega$},\\ \\ + \infty & \text{otherwise}, \end{array} \right. \end{equation} where \begin{equation}\label{fk} f_k(s):=1\wedge\varepsilon_k^{1/2} f(s) \,,\hskip1cm f_k(1)=1\,, \end{equation} and \begin{equation}\label{f0} \text{$f\in C^0([0,1),[0,+\infty))$ is a nondecreasing function satisfying $f^{-1}(0)=\{0\}$} \end{equation} with \be{\label{f1} \displaystyle\lim_{s\to1}(1-s)f(s)=\ell, \quad \ell\in(0,+\infty). } In particular, the function $s\mapsto(1-s)f(s)$, defined on $[0,1)$, can be continuously extended to $s=1$ with value $\ell$. One can consider $f(s):=\frac{s}{1-s}$ as a prototype. It is also useful to introduce a localized version $F_k(\cdot;A)$ of $F_k$ simply obtained by substituting the domain of integration $\Omega$ with any measurable subset $A$ of $\Omega$ itself. In particular, to be consistent with \eqref{Fk}, for $A=\Omega$ we shall not indicate the dependence on the domain of integration. Let now $\Phi\colon L^1(\Omega)\to [0,+\infty]$ be defined by \begin{equation}\label{Phi} \Phi(u):=\left\{ \begin{array}{ll} \ {\displaystyle\int_\Omega h(|\nabla u|)dx+\int_{J_{u}}g(|[u]|)d{\mathcal H}^{n-1}+\ell|D^cu|(\Omega)} & \textrm{if $u\in GBV(\Omega)$,}\\ + \infty & \text{otherwise}, \end{array} \right. \end{equation} where we recall that $h,g\colon[0,+\infty)\to[0,+\infty)$ are given by \be{\label{h} h(s):=\left\{ \begin{array}{ll} s^2 & \textrm{if $s\leq \ell/2$,}\\ \ell s-{\ell}^2/4 & \textrm{if $s\geq \ell/2$}, \end{array} \right.} and \be{\label{g} g(s):=\inf_{(\alpha,\beta)\in\mathcal{U}_s} \int_0^1|1-\beta|\sqrt{f^2(\beta)|\alpha'|^2+|\beta'|^2}\,dt, } where $\mathcal{U}_s:=\mathcal{U}_s(0,1)$ and for all $T>0$ \begin{equation}\label{e:admfnctns} \mathcal{U}_s(0,T):=\{\alpha,\beta\in H^1\big((0,T)\big):\, 0\le \beta\le 1, \, \alpha(0)=0,\, \alpha(T)=s,\, \beta(0)=\beta(T)=1\}. \end{equation} At the points $t$ with $\beta(t)=1$ the integrand in (\ref{g}) reduces to $\ell |\alpha'|(t)$, in agreement with (\ref{f1}). Our main result is the following. \begin{theorem}\label{t:gamma-lim} Under the assumptions above, the functionals $F_k$ $\Gamma$-converge in $L^1(\Omega){\times}L^1(\Omega)$ to the functional $F$ defined by \begin{equation}\label{F} F(u,v):=\begin{cases} \Phi(u) & \textrm{if $v=1$ $\mathcal{L}^n\text{-a.e.\ in } \Omega$,}\cr + \infty & \text{otherwise}. \end{cases} \end{equation} \end{theorem} \begin{remark}\label{r:gen} The assumption that $f^{-1}(0)=\{0\}$ is not restrictive and changes only the detailed properties of $g$.
Indeed, keeping all the other hypotheses and defining $\lambda:=\sup\{s\in[0,1):\,f(s)=0\}\in[0,1)$, we would get that $g(s)\leq (1-\lambda)^2\wedge\ell s$ (cp.~Proposition~\ref{11} below). In addition, the function $(1-v)^2$ in \eqref{Fk} can be replaced by any continuous, decreasing function $d(v)$ with $d(1)=0$. In this case $d^{1/2}(s)$ and $d^{1/2}(\beta)$ appear in formulas \eqref{f1} and \eqref{g} in place of $1-s$ and $1-\beta$ respectively, and we obtain $g(s)\leq 2\int_0^1d^{1/2}(t)dt\wedge\ell s$ (see Proposition~\ref{11}). Finally the definition of $f_k$ in \eqref{fk} can be given in the following more general form $f_k:=\psi_k\wedge\varepsilon_k^{1/2}f$. Here the truncation of $f$ is performed with any continuous nondecreasing function $\psi_k:[0,1]\to[0,1]$ satisfying $\psi_k\geq c>0$, $\lim_k\psi_k(1)=1$, and converging uniformly in a neighborhood of $1$. \end{remark} We next address the issue of equi-coercivity for the $F_k$'s. \begin{theorem}\label{t:comp} Under the assumptions above, if $(u_k,v_k)\in H^1(\Omega){\times}H^1(\Omega)$ is such that $$\sup_k \left(F_k(u_k,v_k)+||u_k||_{L^1(\Omega)}\right)<+\infty,$$ then there exists a subsequence $(u_j,v_j)$ of $(u_k,v_k)$ and a function $u\in GBV\cap L^1(\Omega)$ such that $u_j\to u$ $\mathcal{L}^n\text{-a.e.\ in } \Omega$ and $v_j\to 1$ in $L^1(\Omega)$. \end{theorem} We shall prove Theorem~\ref{t:gamma-lim} in Sections~\ref{s:onedim} and \ref{s:ndim}, while Theorem~\ref{t:comp} shall be established in Section~\ref{s:ndim}. In the rest of this section instead we address the issue of convergence of minimum problems. Minimum problems related to the functional $F_k$ could have no solution due to a lack of coercivity. Therefore we slightly perturb the $f_k$'s to guarantee the existence of a minimum point for each $F_k$. This together with Theorems~\ref{t:gamma-lim} and \ref{t:comp} shall in turn imply the convergence of minima and minimizers as $k\uparrow\infty$. Let $\eta_k,\varepsilon_k$ be positive infinitesimal sequences such that $\eta_k=o(\varepsilon_k)$ and let $\zeta\in L^q(\Omega)$, with $q>1$. Let us consider the sequence of functionals $G_k\colon L^1(\Omega){\times}L^1(\Omega)\to [0,+\infty]$ defined by $$ G_k(u,v):=\left\{ \begin{array}{ll} \ {\displaystyle\int_\Omega{\Big(\big(f_k^2(v)+\eta_k\big)|\nabla u|^2+\frac{(1-v)^2}{4\varepsilon_k} +\varepsilon_k|\nabla v|^2+|u-\zeta|^q\Big)dx}} & \textrm{if $(u,v)\in H^1(\Omega){\times}H^1(\Omega)$}\\ & \text{and $0\leq v\leq1$ $\mathcal{L}^n\text{-a.e.\ in } \Omega$},\\ \\ + \infty & \text{otherwise}, \end{array} \right. $$ where $f_k$ is as in \eqref{fk}. Let now $\mathscr{G}\colon L^1(\Omega)\to [0,+\infty]$ be defined by $$ \mathscr{G}(u):=\left\{ \begin{array}{ll} \ {\displaystyle\int_\Omega h(|\nabla u|)dx+\int_{J_{u}}g(|[u]|)d{\mathcal H}^{n-1}+\ell|D^cu|(\Omega)+\int_{\Omega}|u-\zeta|^qdx} & \textrm{if $u\in GBV(\Omega)$,}\\ + \infty & \text{otherwise}, \end{array} \right. $$ where $h,g,$ and $\ell$ are as in \eqref{h}, \eqref{g} and \eqref{f1} respectively. Then the following corollary holds true. \begin{corollary}\label{c:min} For every $k$, let $(u_k,v_k)\in H^1(\Omega){\times} H^1(\Omega)$ be a minimizer of the problem \begin{equation}\label{intro43} \min_{(u,v)\in H^1(\Omega){\times} H^1(\Omega)}G_k(u,v). \end{equation} Then $v_k\to 1$ in $L^1(\Omega)$ and a subsequence of $u_k$ converges in $L^q(\Omega)$ to a minimizer $u$ of the problem $$ \min_{u\in GBV(\Omega)}\mathscr{G}(u).
$$ Moreover the minimum values of (\ref{intro43}) tend to the minimum value of the limit problem. \end{corollary} \begin{proof} We shall only sketch the main steps to establish the conclusion, as the arguments are quite standard. One first proves that in fact the functionals $F_k$ $\Gamma$-converge to $F$ in $L^q(\Omega){\times}L^1(\Omega)$, where $F_k$ and $F$ are the functionals in Theorem~\ref{t:gamma-lim}. Indeed, when $q>1$ the $\Gamma$-limsup inequality works exactly as in the case $q=1$, whereas the $\Gamma$-liminf inequality is an immediate consequence of the comparison with the case $q=1$ and of \cite[Proposition 6.3]{dalmaso}. Let us observe that the presence of $\eta_k$ in the functional $F_k$ does not modify the $\Gamma$-convergence result and that the proofs still hold analogously. As a consequence $G_k$ $\Gamma$-converges to $G$ in $L^1(\Omega){\times}L^1(\Omega)$ for every $q\geq 1$, where $G(u,v):=\mathscr{G}(u)$ if $v=1$ $\mathcal{L}^n\text{-a.e.\ in } \Omega$ and $\infty$ otherwise. Indeed, \cite[Proposition 6.3]{dalmaso} yields that the $\Gamma$-limsup of $G_k$ in $L^1(\Omega){\times}L^1(\Omega)$ is less than or equal to the one in $L^q(\Omega){\times}L^1(\Omega)$. In addition, $\int_{\Omega}|\cdot-\zeta|^qdx$ is continuous in $L^q(\Omega){\times}L^1(\Omega)$, so that the conclusion follows from \cite[Propositions 6.17 and 6.21]{dalmaso}. The previous result, combined with a general property of $\Gamma$-convergence \cite[Corollary 7.20]{dalmaso} and with the compactness result of Theorem \ref{t:comp}, concludes the proof of Corollary \ref{c:min}. \end{proof} \section{Properties of the surface energy density}\label{s:gprop} In this section we shall establish several properties enjoyed by the surface energy density $g$ defined in \eqref{g}. To this aim we shall often exploit that, in computing $g(s)$, $s\geq0$, we may assume that the admissible functions $\alpha$ satisfy $0\leq\alpha\leq s$ by a truncation argument (whereas $0\leq\beta\leq1$ by definition). Further, given a curve $(\alpha,\beta)\in \mathcal{U}_s(0,T)$, note that the integral appearing in the definition of $g$ is invariant under reparametrizations of $(\alpha,\beta)$. \begin{proposition}\label{11} The function $g$ defined in (\ref{g}) enjoys the following properties: \begin{enumerate}[label=(\roman*)] \item\label{e21} $g(0)=0$, and $g$ is subadditive, \textsl{i.e.}, $g(s_1+s_2)\leq g(s_1)+g(s_2)$, for every $s_1,s_2\in{\mathbb R}^+$; \item\label{e19} $g$ is nondecreasing, $0\leq g(s)\leq 1\wedge\ell s$ for all $s\in{\mathbb R}^+$, and $g$ is Lipschitz continuous with Lipschitz constant $\ell$; \item\label{e20} \be{\label{e23}\lim_{s\uparrow\infty}g(s)=1;} \item\label{e22} \be{\label{e24}\lim_{s\downarrow 0}\frac{g(s)}{s}=\ell.} \end{enumerate} \end{proposition} \begin{proof} Proof of \ref{e21}. The couple $(\alpha,\beta)=(0,1)$ is admissible for the minimum problem defining $g(0)$, so that $g(0)=0$. In order to prove that $g$ is subadditive we fix $s_1,s_2\in{\mathbb R}^+$ and we consider the minimum problems for $g(s_1)$ and $g(s_2)$, respectively. Let $\eta>0$ and let $(\alpha_1,\beta_1),(\alpha_2,\beta_2)$ be admissible couples respectively for $g(s_1)$ and $g(s_2)$ such that for $i=1,2$ \be{\label{e27} \int_0^1|1-\beta_i|\sqrt{f^2(\beta_i)|\alpha_i'|^2+|\beta_i'|^2}dt<g(s_i)+\eta. } Next define $\alpha:=\alpha_1$ in $[0,1]$, $\alpha:=\alpha_2(\cdot-1)+s_1$ in $[1,2]$, $\beta:=\beta_1$ in $[0,1]$, and $\beta:=\beta_2(\cdot-1)$ in $[1,2]$.
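In this way $(\alpha,\beta)\in H^1\big((0,2)\big){\times}H^1\big((0,2)\big)$, with $\alpha(0)=0$, $\alpha(2)=s_1+s_2$, $\beta(0)=\beta(2)=1$, and $0\leq\beta\leq1$; moreover, by a change of variables on $[1,2]$ and by \eqref{e27}, the integral of the concatenated couple splits as
\[
\int_0^2|1-\beta|\sqrt{f^2(\beta)|\alpha'|^2+|\beta'|^2}\,dt
=\sum_{i=1}^2\int_0^1|1-\beta_i|\sqrt{f^2(\beta_i)|\alpha_i'|^2+|\beta_i'|^2}\,dt
<g(s_1)+g(s_2)+2\eta.
\]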
An immediate computation and the reparametrization property mentioned above entail the subadditivity of $g$ since $\eta$ is arbitrary. Proof of \ref{e19}. In order to prove that $g$ is nondecreasing we fix $s_1,s_2$ with $s_1< s_2$ and $\eta>0$, and we consider $(\alpha,\beta)$ satisfying a condition analogous to \eqref{e27} for $g(s_2)$. Then $(\frac{s_1}{s_2}\alpha,\beta)$ is admissible for $g(s_1)$, thus we infer $$g(s_1)\leq \int_0^1|1-\beta|\sqrt{\Big(\frac{s_1}{s_2}\Big)^2f^2(\beta)|\alpha'|^2+|\beta'|^2}dt< g(s_2)+\eta,$$ since $s_1/{s_2}< 1$. As $\eta\to 0$ we find $g(s_1)\leq g(s_2)$. Next we prove that $g(s)\leq 1\wedge \ell s$. Indeed, inequality $g\leq 1$ straightforwardly comes from the fact that for every $s\geq 0$ the following couple $(\alpha,\beta)$ is admissible for $g(s)$: $\alpha:=0$ in $(0,1/3)$, $\alpha:=s$ in $(2/3,1)$, and linearly linked in $(1/3,2/3)$, and $\beta:=0$ in $(1/3,2/3)$ and linearly linked to $1$ in $(0,1/3)$ and in $(2/3,1)$. Moreover, $g(s)\leq \ell s$ for every $s\geq0$ since the couple $(st,1)$ is admissible for $g(s)$. The Lipschitz continuity of $g$ is an obvious consequence of the facts that $g$ is nondecreasing, subadditive and $g(s)\leq \ell s$ for $s\geq0$. Proof of \ref{e20}. Let $s_k$, $k\in{\mathbb N}$, be a diverging sequence and let $(\alpha_k,\beta_k)$ be an admissible couple for $g(s_k)$ such that \be{\label{e25}\int_0^1|1-\beta_k|\sqrt{f^2(\beta_k)|\alpha'_k|^2+|\beta'_k|^2}dt<g(s_k)+\frac{1}{k}.} If $\inf_{(0,1)}\beta_k\geq\delta$ for some $\delta>0$ and for every $k$, then there exists a constant $c(\delta)>0$ such that $f(\beta_k)(1-\beta_k)>c(\delta)$, since $f(s)(1-s)\to 0$ if and only if $s\to0$. Therefore by \eqref{e25} one finds $$c(\delta)s_k\leq g(s_k)+\frac{1}{k},$$ so that $g(s_k)\to+\infty$ as $k\to+\infty$ and this contradicts the fact that $g\leq 1$. Therefore there exists a sequence $x_k\in(0,1)$ such that $\beta_k(x_k)\to0$ up to subsequences. Since we have already shown that $g\leq 1$, we conclude the proof of \eqref{e23} noticing that \eqref{e25} yields \be{\label{e26}(1-\beta_k(x_k))^2 \leq\int_0^{x_k}|1-\beta_k||\beta'_k|dt+\int_{x_k}^1|1-\beta_k||\beta'_k|dt\leq g(s_k)+\frac{1}{k}.} Proof of \ref{e22}. Let $s_k$, $k\in{\mathbb N}$, be an infinitesimal sequence and let $(\alpha_k,\beta_k)$ be an admissible couple for $g(s_k)$ satisfying \eqref{e25} with $s_k/k$ in place of $1/k$. If there exists $\delta>0$, a not relabeled subsequence of $k$, and a sequence $x_k\in[0,1]$ such that $\beta_k(x_k)<1-\delta$, then the same computation as in \eqref{e26} leads to \[\delta^2\leq g(s_k)+\frac{s_k}{k}. \] As $k\to+\infty$ this contradicts the fact that $g(s)\le \ell s$. Therefore, $\beta_k$ converges uniformly to $1$ and fixing $\delta>0$ $$(\ell-\delta)s_k\leq\int_0^1(1-\beta_k)f(\beta_k)|\alpha'_k|dt\leq g(s_k)+\frac{s_k}{k}$$ holds for $k$ large by \eqref{f1}. Formula \eqref{e24} immediately follows dividing both sides of the last inequality by $s_k$, taking first $k\to+\infty$ and then $\delta\to0$, and using the fact that $g(s)\leq \ell\,s$, for $s\geq0$. \end{proof} \begin{remark}\label{r:g} We can actually show that $g$ does not coincide with the function $1\wedge\ell\,s$ at least in the model case $f(s)=\frac{\ell s}{1-s}$ by slightly refining the construction used in \ref{e19} above. 
With fixed $s>0$, let $\lambda\in[0,1]$ and set $\alpha:=0$ on $[0,1/3]$, $\alpha:=s$ on $[2/3,1]$, and the linear interpolation of such values on $[1/3,2/3]$; moreover, set $\beta_\lambda:=\lambda$ on $[1/3,2/3]$ and the linear interpolation of the values $1$ and $\lambda$ on each interval $[0,1/3]$ and $[2/3,1]$ in order to match the boundary conditions. Straightforward calculations lead to \[ g(s)\leq(1-\lambda)^2+(1-\lambda)f(\lambda)\,s. \] Thus, minimizing over $\lambda\in[0,1]$ (the minimum being attained at $\lambda=1-\ell s/2$) yields in turn \[ g(s)\leq \ell s-\frac{(\ell s)^2}{4}<1\wedge\ell s\quad\text{ for all $s\in(0,2/\ell)$.} \] \end{remark} In what follows it will be convenient to provide an alternative representation of $g$ by means of a cell formula more closely related to the one-dimensional version of the energies $F_k$'s. To this aim we introduce the function $\hat{g}\colon [0,+\infty)\to[0,+\infty)$ defined by \begin{equation}\label{hatg} \hat{g}(s):=\lim_{T\uparrow\infty}\inf_{(\alpha,\beta)\in\mathcal{U}_s(0,T)} \int_0^T \left(f^2(\beta)|\alpha'|^2+\frac{|1-\beta|^2}{4}+|\beta'|^2\right)dt, \end{equation} where the class $\mathcal{U}_s(0,T)$ has been introduced in \eqref{e:admfnctns}. We note that $\hat{g}$ is well-defined as the infima appearing in its definition are nonincreasing with respect to $T$. \begin{proposition}\label{p2} For all $s\in [0,+\infty)$ it holds $g(s)=\hat{g}(s)$. \end{proposition} \begin{proof} Let $\alpha,\beta \in H^1\big((0,T)\big)$, $T>0$, be admissible functions in the definition of $\hat g(s)$. By Cauchy's inequality we obtain \begin{equation*} \sqrt{f^2(\beta)|\alpha'|^2 + |\beta'|^2}\, |1-\beta|\le f^2(\beta)|\alpha'|^2 + |\beta'|^2+\frac{(1-\beta)^2}{4} \end{equation*} and integrating \begin{equation*} \int_0^T |1-\beta| \, \sqrt{f^2(\beta)|\alpha'|^2 + |\beta'|^2}\, dt\le \int_0^T \left(f^2(\beta)|\alpha'|^2+\frac{|1-\beta|^2}{4}+|\beta'|^2\right)dt\,. \end{equation*} The first integral is one-homogeneous in the derivatives, therefore we can reparametrize from $(0,T)$ to $(0,1)$. Taking the infimum over all such $\alpha$, $\beta$, and $T$ we obtain $g(s)\le \hat g(s)$. To prove the converse inequality, we first show that $\alpha$ and $\beta$ in the infimum problem defining $g$ can be taken in $W^{1,\infty}\big((0,1)\big)$. Let $\eta>0$ be small and let $\alpha,\beta\in H^1\big((0,1)\big)$ be competitors for $g(s)$ such that \be{\label{e:quasimin}\int_{0}^{1} |1-\beta| \, \sqrt{f^2(\beta)|\alpha'|^2 + |\beta'|^2}\, dt< g(s)+\eta.} By density we find two sequences $\alpha_j,\beta_j\in W^{1,\infty}\big((0,1)\big)$ (actually in $C^\infty([0,1])$) such that $\alpha_j(0)=0$, $\alpha_j(1)=s$, $\beta_j(0)=\beta_j(1)=1$, $0\leq\beta_j\leq 1$, and converging respectively to $\alpha$ and $\beta$ in $H^1\big((0,1)\big)$. Since the function $(1-s)f(s)$ is uniformly continuous and $\beta_j\to\beta$ also uniformly, we deduce that $$\int_{0}^{1} |1-\beta_j| \, \sqrt{f^2(\beta_j)|\alpha_j'|^2 + |\beta_j'|^2}\, dt< g(s)+\eta$$ for $j$ large, and this concludes the proof of the claim. Let us prove now that $\hat{g}\leq g$. We fix a small parameter $\eta>0$ and consider competitors $\alpha,\beta\in W^{1,\infty}\big((0,1)\big)$ for $g(s)$ satisfying \eqref{e:quasimin}. We define, for $t\in [0,1]$, \begin{equation*} \beta^\eta(t):=\beta(t)\wedge (1-\eta)\quad \text{and}\quad \psi_\eta(t):=\int_{0}^t \frac{2}{1-\beta^\eta} \sqrt{\eta+f^2(\beta^\eta)\, |\alpha'|^2 + |(\beta^\eta)'|^2} dt' \,.
\end{equation*} The function $\psi_\eta:[0,1]\to[0,M_\eta:=\psi_\eta(1)]$ is bilipschitz and in particular invertible. We define $\bar\alpha^\eta, \bar\beta^\eta\in W^{1,\infty}\big((0,M_\eta)\big)$ by \begin{equation*} \bar\alpha^\eta := \alpha \circ \psi_\eta^{-1} \text{ and } \bar\beta^\eta := \beta^\eta \circ \psi_\eta^{-1}\,. \end{equation*} We compute, using the definition and the change of variables $x=\psi_\eta(t)$, \begin{alignat*}1 \int_0^{M_\eta} \frac{(1-\bar\beta^\eta)^2}4 dx &= \int_0^{M_\eta} \frac{(1-\beta^\eta(\psi_\eta^{-1}(x)))^2}4 dx = \int_{0}^1 \frac{(1-\beta^\eta(t))^2}4 \psi_\eta'(t) dt \\ &=\int_{0}^1 \frac{1-\beta^\eta}2 \sqrt{\eta+ f^2(\beta^\eta)\, |\alpha'|^2 + |(\beta^\eta)'|^2} dt \\ &\le \sqrt\eta+ \int_{0}^1 \frac{1-\beta^\eta}2 \sqrt{f^2(\beta^\eta)\, |\alpha'|^2 + |(\beta^\eta)'|^2} dt\,, \end{alignat*} where we inserted $\psi_\eta'$ from the definition of $\psi_\eta$ and used $\sqrt{\eta+A}\le \sqrt \eta + \sqrt A$. Analogously, \begin{alignat*}1 \int_0^{M_\eta} \left( f^2(\bar\beta^\eta)|(\bar\alpha^\eta)'|^2+|(\bar\beta^\eta)'|^2\right) dx &= \int_{0}^1 \left( f^2(\beta^\eta)|\alpha'|^2+|(\beta^\eta)'|^2\right) \frac{1}{\psi'_\eta} dt \\ &= \int_{0}^1 \left( f^2(\beta^\eta)|\alpha'|^2+|(\beta^\eta)'|^2\right) \frac{1-\beta^\eta}{2 \sqrt{\eta+ f^2(\beta^\eta)\, |\alpha'|^2 + |(\beta^\eta)'|^2}}dt \\ &\le \int_{0}^1 \frac{1-\beta^\eta}{2} \sqrt{f^2(\beta^\eta)\, |\alpha'|^2 + |(\beta^\eta)'|^2}dt \,. \end{alignat*} We extend $\bar\alpha^\eta$ and $\bar\beta^\eta$ to $(-1, M_\eta+1)$ setting $\bar\alpha^\eta:=0$ in $(-1,0)$, $\bar\alpha^\eta:=s$ in $(M_\eta,M_\eta+1)$, and $\bar\beta^\eta$ the linear interpolation between $1-\eta$ and 1 in each of the two intervals, so that they obey the required boundary conditions for $\hat g$ in the larger interval. Collecting terms, we obtain \begin{align} \hat g(s)&\le \int_{-1}^{M_\eta+1} \left(\frac{(1-\bar\beta^\eta)^2}4 + f^2(\bar\beta^\eta)|(\bar\alpha^\eta)'|^2+|(\bar\beta^\eta)'|^2\right) dx\nonumber\\ &\le \sqrt \eta + 3\eta^2 + \int_{0}^1 (1-\beta^\eta) \sqrt{f^2(\beta^\eta)\, |\alpha'|^2 + |(\beta^\eta)'|^2}dt \,,\label{al:p1} \end{align} where the $3\eta^2$ term comes from an explicit computation on the two boundary intervals. It remains to replace $\beta^\eta$ by $\beta$ in the last integral. We observe that $(\beta^\eta)'=0$ almost everywhere on the set where $\beta\ne \beta^\eta$ (which coincides with the set $\{\beta>1-\eta\}$). Therefore \begin{alignat*}1 \int_{\{\beta\ne \beta^\eta\}} (1-\beta^\eta) \sqrt{f^2(\beta^\eta)\, |\alpha'|^2 + |(\beta^\eta)'|^2}dt &= \int_{\{\beta\ne \beta^\eta\}} (1-\beta^\eta) f(\beta^\eta)\, |\alpha'| dt \\ &\le \int_{\{\beta\ne \beta^\eta\}} (1-\beta) f(\beta)\, |\alpha'| dt + \omega(\eta) \int_0^1 |\alpha'| dt \end{alignat*} where $\omega(\eta)$ is the continuity modulus of $(1-s)f(s)$ near $s=1$, and therefore \begin{alignat*}1 \hat g(s) &\le \sqrt \eta + 3\eta^2 +\omega(\eta)\int_0^1|\alpha'| dt+ \int_{0}^1 (1-\beta) \sqrt{f^2(\beta)\, |\alpha'|^2 + |\beta'|^2}dt \,. \end{alignat*} Since the last integral is less than $g(s)+\eta$ and $\eta$ can be made arbitrarily small, this concludes the proof.
\end{proof} For the proof of the lower bound we also need to introduce the auxiliary functions $g^{(\eta)}\colon [0,+\infty)\to[0,+\infty)$, for $\eta>0$, defined by \be{\label{geta} g^{(\eta)}(s):=\inf_{(\alpha,\beta)\in\mathcal{U}^{(\eta)}_s} \int_0^1|1-\beta|\sqrt{f^2(\beta)|\alpha'|^2+|\beta'|^2}\,dt, } where $$ \mathcal{U}^{(\eta)}_s:=\{\alpha,\beta\in H^1\big((0,1)\big):\, \alpha(0)=0,\, \alpha(1)=s,\, \beta(0)=\beta(1)=1-\eta\}. $$ \begin{proposition}\label{p1} For all $s\in [0,+\infty)$ it holds \[ |g(s)-g^{(\eta)}(s)|\leq\eta^2. \] \end{proposition} \begin{proof} We consider the minimum problems for $g$ and $g^{(\eta)}$ respectively in the intervals $(-1,2)$ and $(0,1)$. Let $(\alpha_{\eta},\beta_{\eta})$ be an admissible couple for $g^{(\eta)}(s)$ and let $\alpha:=0$ in $(-1,0)$, $\alpha:=\alpha_{\eta}$ in $(0,1)$, and $\alpha:=s$ in $(1,2)$; we also set $\beta:=\beta_{\eta}$ in $(0,1)$ and linearly linked to $1$ in $(-1,0)$ and in $(1,2)$. Then an easy computation shows that $$g(s)\leq \int_0^1|1-\beta_{\eta}|\sqrt{f^2(\beta_{\eta})|\alpha'_{\eta}|^2+|\beta'_{\eta}|^2}dt+\eta^2.$$ By taking the infimum on $(\alpha_{\eta},\beta_{\eta})$ we infer that $$g(s)\leq g^{(\eta)}(s)+\eta^2.$$ Reversing the roles of $g$ and $g^{(\eta)}$ we conclude. \end{proof} Finally, we study the dependence of $g$ on the function $f$ in detail. The results in the next proposition provide a first insight on the class of functions $g$ that arise as surface energy densities in our analysis. Moreover, they will be instrumental to get in the limit different energies by slightly changing the functionals $F_k$'s in \eqref{Fk} (cp. Theorems~\ref{t:Dugdale}, \ref{t:sublin}, and \ref{t:MS} below). \begin{proposition}\label{p:gl} Let $(\f{j})$ be a sequence of functions satisfying \eqref{f0} and \eqref{f1}. Denote by $\ell_j$, $g_j$ the value of the limit in \eqref{f1} and the function in \eqref{g} corresponding to $\f{j}$, respectively. Then, \begin{itemize} \item[(i)] if $\ell_j=\ell$ for all $j$, $\f{j}\geq \f{j+1}$, and $\f{j}(s)\downarrow 0$ for all $s\in[0,1)$, then $g_j\geq g_{j+1}$ and $g_j(s)\downarrow 0$ for all $s\in[0,+\infty)$; \item[(ii)] if $\ell_j=\ell$ for all $j$, $\f{j}\leq \f{j+1}$, and $\f{j}(s)\uparrow \infty$ for all $s\in(0,1)$, then $g_j\leq g_{j+1}$ and $g_j(s)\uparrow 1\wedge\ell s$ for all $s\in[0,+\infty)$; \item[(iii)] if $\ell_j\uparrow\infty$, $\f{j}\leq \f{j+1}$, and $\f{j}(s)\uparrow \infty$ for all $s\in(0,1)$, then $g_j\leq g_{j+1}$ and $g_j(s)\to \chi_{(0,+\infty)}(s)$ for all $s\in[0,+\infty)$. \end{itemize} \end{proposition} \begin{proof} To prove item (i) we note that the monotonicity of the sequence $(\f{j})$ and the pointwise convergence to a continuous function on $[0,1)$ yield that the sequence $(\f{j})$ actually converges uniformly on compact subsets of $[0,1)$ to $0$. Therefore, for all $\delta\in(0,1)$ we have for some $j_\delta$ \[ \max_{[0,1-\delta]}\f{j}\leq\delta\qquad\text{for all $j\geq j_\delta$.} \] Then, consider $\alpha_j,\beta_j$ defined as follows: $\alpha_j(t):=3s(t-1/3)$ on $[1/3,2/3]$, $\alpha_j:=0$ on $[0,1/3]$, and $\alpha_j:=s$ on $[2/3,1]$; $\beta_j:=1-\delta$ on $[1/3,2/3]$ and a linear interpolation between the values $1$ and $1-\delta$ on each interval $[0,1/3]$ and $[2/3,1]$. Straightforward calculations give \[ g_j(s)\leq\delta^2\,s+\delta^2 \qquad\text{for all $j\geq j_\delta$,} \] from which the conclusion follows by passing to the limit first in $j\uparrow\infty$ and finally letting $\delta\downarrow 0$. We now turn to item (ii). 
We first note that $(g_j)$ is nondecreasing and that \begin{equation}\label{e:geasy} \lim_jg_j(s)\leq \ell s\wedge 1 \end{equation} in view of item \ref{e19} in Proposition~\ref{11}. Next we show the following: for all $\delta>0$ \begin{equation}\label{e:dmlim} \lim_j\min_{t\in[\delta,1]}(1-t)\f{j}(t)=\ell. \end{equation} Let $s_j\in\textrm{argmin}_{[\delta,1]}(1-t)\f{j}(t)$, and denote by $j_k$ a subsequence such that \[ \lim_k\min_{t\in[\delta,1]}(1-t)\f{j_k}(t)=\liminf_j\min_{t\in[\delta,1]}(1-t)\f{j}(t). \] Either $\limsup_ks_{j_k}<1$ or $\limsup_ks_{j_k}=1$. We exclude the former possibility: suppose that, up to further subsequences not relabeled, $\lim_ks_{j_k}=s_\infty\in[\delta,1)$, then for all $i\in{\mathbb N}$ \[ \liminf_k(1-s_{j_k})\f{j_k}(s_{j_k})=(1-s_\infty)\liminf_k\f{j_k}(s_{j_k}) \geq(1-s_\infty)f^{(i)}(s_\infty), \] which gives a contradiction by letting $i\uparrow\infty$ since by minimality of $s_j$ \be{\label{e:boundell} (1-s_j)\f{j}(s_j)\leq\ell\qquad\text{for all $j$}. } Therefore, $\limsup_ks_{j_k}=1$, and thus we get \[ \liminf_j(1-s_j)\f{j}(s_j)\geq \liminf_k(1-s_{j_k})\f{1}(s_{j_k})=\ell. \] Formula \eqref{e:dmlim} follows straightforwardly by this and \eqref{e:boundell}. If $s=0$, clearly we conclude as $g_j(0)=0$ for all $j$. Let then $s\in(0,+\infty)$ and $\alpha_j$, $\beta_j\in H^1\big((0,1)\big)$ be such that $\alpha_j(0)=0$, $\alpha_j(1)=s$, $\beta_j(0)=\beta_j(1)=1$ and \[ g_j(s)+\frac1j\geq \int_0^1|1-\beta_j|\sqrt{(\f{j})^2(\beta_j)|\alpha'_j|^2+|\beta'_j|^2}dt. \] There are now two possibilities: either there exists $\delta>0$ and a subsequence $j_k$ such that $\inf_{[0,1]}\beta_{j_k}\geq\delta$, or $\inf_{[0,1]}\beta_j\to0$. In the former case the subsequence satisfies \begin{equation*} g_{j_k}(s)+\frac1{j_k}\geq\big(\min_{t\in[\delta,1]}(1-t)\f{j_k}(t)\big)\,s. \end{equation*} Taking the $\limsup_{k}$ and using (\ref{e:dmlim}) we obtain \begin{equation}\label{eqcaso1} \limsup_j g_j(s) \ge \ell s\,. \end{equation} In the other case, for every $\delta>0$ it eventually holds \begin{equation}\label{e:altb} g_{j}(s)+\frac1{j}\geq\int_0^1(1-\beta_{j})|\beta_{j}^\prime|dt\geq (1-\delta)^2. \end{equation} Taking again the $\limsup$ we obtain \begin{equation}\label{eqcaso2} \limsup_j g_j(s) \ge (1-\delta)^2\,. \end{equation} Since $\delta$ was arbitrary, from (\ref{eqcaso1}) and (\ref{eqcaso2}) we obtain $\limsup_j g_j(s)\ge 1\wedge \ell s$ and, recalling (\ref{e:geasy}), conclude the proof of (ii). Let us now prove item (iii). First we observe that $g_j(s)\leq 1$ for all $j$. To prove the lower bound, we notice that arguing similarly as in the proof of \eqref{e:dmlim} one obtains \begin{equation}\label{e:dmlim2} \lim_j\min_{t\in[\delta,1]}(1-t)\f{j}(t)=\infty \text{ for all $\delta>0$}. \end{equation} For any $s\in(0,+\infty)$ we choose $(\alpha_j, \beta_j)\in \mathcal{U}_s$ such that \[ g_j(s)+\frac1j\geq\int_0^1|1-\beta_j|\sqrt{(\f{j})^2(\beta_j)|\alpha'_j|^2+|\beta'_j|^2}dt.
\] If there is $\delta>0$ such that $\inf \beta_j\ge \delta$ for infinitely many $j$ then for the same indices \begin{equation*} g_j(s)+\frac1j\ge \min_{t\in[\delta,1]} (1-t) f^{(j)}(t) s \,, \end{equation*} which in view of (\ref{e:dmlim2}) and the bound $g_j(s)\le 1$ is impossible. Therefore $\inf_{[0,1]}\beta_j\to0$, which in view of (\ref{e:altb}) proves the assertion. \end{proof} \begin{remark} The monotonicity assumption $\f{j}\leq \f{j+1}$ in items (ii) and (iii) above leads to simple proofs but it is actually not needed. The same convergence results for $(g_j)$ would follow by using the uniform convergence on compact subsets of $[0,1)$ of $(\f{j})$. The latter property is a consequence of the fact that each $\f{j}$ is nondecreasing and that $f\in C^0([0,1))$. \end{remark} \section{Proof in the one-dimensional case}\label{s:onedim} Let us study first the one-dimensional case $n=1$. As usual, we will prove a $\Gamma$-liminf inequality and a $\Gamma$-limsup inequality. The following proposition gives the lower estimate. \begin{proposition}[Lower bound]\label{p:liminfunidim} For every $(u,v)\in L^1(\Omega){\times}L^1(\Omega)$ it holds $$F(u,v)\leq F'(u,v).$$ \end{proposition} \begin{proof} The conclusion is equivalent to the following fact: let $(u_k,v_k)$ be a sequence such that \begin{equation}\label{ukvk} (u_k,v_k)\to (u,v) \text{ in } L^1(\Omega){\times} L^1(\Omega), \end{equation} \begin{equation}\label{bounded} \sup_k F_{k}(u_k,v_k)<+\infty, \end{equation} then $u\in BV(\Omega)$, $v=1$ $\mathcal{L}^1$-a.e.\ in $\Omega$, and \be{\label{1}\Phi(u)\leq \liminf_{k\to \infty}F_k(u_k,v_k).} Since the left-hand side of \eqref{1} is $\sigma$-additive and the right-hand side is $\sigma$-superadditive with respect to $\Omega$, it is enough to prove the result when $\Omega$ is an interval. For the sake of convenience in what follows we assume $\Omega=(0,1)$. By \eqref{bounded} one deduces that $v=1$ $\mathcal{L}^1$-a.e.\ in $\Omega$. Up to subsequences one can assume that the lower limit in \eqref{1} is in fact a limit and that the convergences in \eqref{ukvk} are also $\mathcal{L}^1$-a.e.\ in $\Omega$. For the first part of the proof we will use a discretization argument, following the lines of \cite{AlicBrShah}. We fix $\delta\in(0,1)$ and for any $N\in{\mathbb N}$ divide $\Omega$ into $N$ intervals $$I^j_N:=\Big(\frac{j-1}{N},\frac{j}{N}\Big),\qquad j=1,\dots,N.$$ Up to subsequences we can assume that $\displaystyle\lim_{k\to+\infty} \inf_{I^j_N} v_k$ exists for every $j=1,\dots,N$. We define $$J_N:=\Big\{j\in\{1,\dots,N\}:\lim_{k\to+\infty} \inf_{I^j_N} v_k\le 1-\delta\Big\}.$$ With fixed $j\in J_N$, we denote by $x_k$ and $y$ two points in $I_N^j$ such that $v_k(x_k)<1-\delta/2$ and $v_k(y)\to 1$. Then by Cauchy's inequality we deduce for $k$ large (assuming for instance $x_k\leq y$) \be{\label{2}\int_{x_k}^{y}\Big(\frac{(1-v_k)^2}{4\varepsilon_k}+\varepsilon_k| v'_k|^2\Big) dx\geq \frac{1}{2}((1-v_k(x_k))^2-(1-v_k(y))^2)\geq \frac{\delta^2}{16}.} The previous computation entails $$\sup_N{\mathcal H}^0(J_N)<+\infty,$$ so that up to subsequences we can assume $J_N=\{j^N_1,\dots,j^N_L\}$, with $L$ independent of $N$, and that all sequences $j^N_i/N$ converge. We denote by $S$ the set of limits of these sequences, $$ S=\{t_1,\dots,t_{L'}\}=\bigl\{\lim_{N\to+\infty}\frac{j^N_i}{N}\,,\hskip2mm i=1,\dots,L\bigr\} \subset\Omega\,.
$$ We claim now that there exists a modulus of continuity $\omega$, i.e., $\omega(\delta)\to0$ as $\delta\to0$, depending only on $f$, such that for all $\eta$ sufficiently small and $k$ sufficiently large (depending on $\eta$) one has \be{\label{3} (1-\omega(\delta))\int_{\Omega\setminus S_{\eta}}h(|u'_k|)dx\leq F_k(u_k,v_k;\Omega\setminus S_{\eta}), } where $S_{\eta}:=\bigcup_{i=1}^{L'}(t_i-\eta,t_i+\eta)$. It suffices to prove (\ref{3}) in the case that $\eta$ is so small that the intervals $(t_i-\eta,t_i+\eta)$ are pairwise disjoint. In order to prove \eqref{3}, we observe that by definition of $f_k$ in \eqref{fk} and by Cauchy's inequality we obtain \ba{\label{4} F_k(u_k,v_k;\Omega\setminus S_{\eta})&\geq& \int_{\Omega\setminus S_{\eta}}\Big(f_k^2(v_k)|u'_k|^2+\frac{(1-v_k)^2}{4\varepsilon_k}\Big) dx\nonumber\\ &\geq& \int_{\Omega\setminus S_{\eta}}\Big(|u'_k|^2\wedge \bigl(\varepsilon_kf^2(v_k)|u'_k|^2+\frac{(1-v_k)^2}{4\varepsilon_k}\bigr)\Big)dx\nonumber\\ &\geq& \int_{\Omega\setminus S_{\eta}}|u'_k|^2\wedge\bigl((1-v_k)f(v_k)|u'_k|\bigr)dx. } Let us note that $v_k> 1-\delta$ in $\Omega\setminus S_{\eta}$ for $k$ large. By \eqref{f1} there exists a modulus of continuity $\omega$ such that \be{\label{5}|(1-s)f(s)-\ell|\leq \ell\omega(\delta),\qquad \textrm{for $s\geq1-\delta$}.} Therefore by \eqref{4} and \eqref{5} we obtain \be{\label{6} F_k(u_k,v_k;\Omega\setminus S_{\eta})\geq(1-\omega(\delta))\int_{\Omega\setminus S_{\eta}}|u'_k|^2\wedge\ell|u'_k|dx \geq(1-\omega(\delta))\int_{\Omega\setminus S_{\eta}}h(|u'_k|)dx. } The last inequality holds true as $h$ is the convex envelope of $t^2\wedge\ell t$. Formula (\ref{6}) proves the claim in \eqref{3}. Notice that the boundedness assumption in \eqref{bounded} and formula \eqref{3} imply that $$\sup_{k}\int_{\Omega\setminus S_{\eta}}|u'_k|dx<+\infty.$$ Therefore $u\in BV(\Omega\setminus S_{\eta})$, and actually the finiteness of $S$ ensures that $u\in BV(\Omega)$. In addition, the $L^1$-lower semicontinuity of the functional $\Phi$ defined in \eqref{Phi} yields \be{\label{e16}(1-\omega(\delta))\Phi(u;\Omega\setminus S_{\eta})\leq\liminf_k F_k(u_k,v_k;\Omega\setminus S_{\eta}).} We now estimate the energy contribution on $S_{\eta}$. To this aim it is not restrictive to assume that $S\subseteq J_u$. Let us fix $i\in\{1,\dots,L'\}$ and consider $I^i_\eta:=(t_i-\eta,t_i+\eta)$. We claim that \be{\label{8}(1-\omega(\delta))g(\esssup_{I^i_{\eta}} u-\essinf_{I^i_{\eta}} u)\leq\liminf_{k\to+\infty} F_k(u_k,v_k;I^i_{\eta})+O(\eta).} Let us introduce a small parameter $\mu>0$ and $x_1,x_2\in I^i_{\eta}$ such that \ba{ &\displaystyle v_k(x_1)\to 1, \qquad v_k(x_2)\to 1, \nonumber\\ \label{13} &\displaystyle u_k(x_1)\to u(x_1), \qquad u_k(x_2)\to u(x_2), \\ \label{7} &\displaystyle u(x_1)>\esssup_{I^i_{\eta}} u-\mu,\qquad u(x_2)<\essinf_{I^i_{\eta}} u+\mu.} Assuming without loss of generality that $x_1< x_2$, we define $I:=(x_1,x_2)$. There are just finitely many connected components of the set $$\{x\in I: v_k(x)< 1-\eta\}$$ where $v_k$ achieves the value $1-\delta$, as a computation analogous to \eqref{2} easily shows (recall that $\eta\ll\delta$). More precisely, one finds up to subsequences that the number $N$ of these components is $$N\leq\frac{c}{\delta^2-\eta^2},$$ for some constant $c>0$ independent of $k$. Let us now estimate the functional $F_k$ over each component $C_k^j$ of this type, $j=1,\dots,N$.
Since $v_k<1-\eta$ in $C_k^j$ one finds for $k$ large that $f_k(v_k)=\varepsilon^{1/2}_kf(v_k)$, so that for $j=1,\dots,N$ it follows \begin{multline}\label{e15} F_k(u_k,v_k;C_k^j)\geq \int_{C_k^j}{\Big(\varepsilon_kf^2(v_k)|u'_k|^2+\frac{(1-v_k)^2}{4\varepsilon_k}+\varepsilon_k|v'_k|^2\Big) dx}\\ \geq g^{(\eta)}\left(\left|\int_{C_k^j}u'_k dx\right|\right)\geq g\left(\left|\int_{C_k^j}u'_k dx\right|\right)-\eta^2, \end{multline} by Cauchy's inequality and Proposition \ref{p1}. Outside the selected components $C_k^j$, $j=1,\dots,N$, one has $v_k\geq 1-\delta$, so that estimate \eqref{6} also holds with $I\setminus \bigcup_{j=1}^N C_k^j$ replacing $\Omega\setminus S_{\eta}$. Therefore \ba{\label{9} F_k\left(u_k,v_k;I\setminus \bigcup_{j=1}^N C_k^j\right)&\geq&(1-\omega(\delta))\int_{I\setminus \bigcup_{j=1}^N C_k^j}h(|u'_k|)dx\nonumber\\ &\geq& (1-\omega(\delta))\ell\int_{I\setminus \bigcup_{j=1}^N C_k^j}|u'_k|dx-(1-\omega(\delta))\frac{\ell^2}{4}{\mathcal L}^1(I\setminus \bigcup_{j=1}^N C_k^j)\nonumber\\ &\geq& (1-\omega(\delta))g\Big(\Big|\int_{I\setminus \bigcup_{j=1}^N C_k^j}u'_k dx\Big|\Big)-\frac{\ell^2}{2}\eta,} where we have used the definition of $h$ and Proposition \ref{11} \ref{e19}. By \eqref{e15}, \eqref{9}, and the subadditivity of $g$ one finds $$F_k(u_k,v_k;I)+\frac{\ell^2}{2}\eta+\frac{c\,\eta^2}{\delta^2-\eta^2}\geq (1-\omega(\delta))g\left(\left|\int_I u'_k dx\right|\right) =(1-\omega(\delta))g(|u_k(x_1)-u_k(x_2)|).$$ By property \eqref{13} and by the continuity of $g$, as $k\to+\infty$ one deduces $$\liminf_{k\to+\infty} F_k(u_k,v_k;I^i_{\eta})+\frac{\ell^2}{2}\eta+\frac{c\eta^2}{\delta^2-\eta^2}\geq (1-\omega(\delta))g(|u(x_1)-u(x_2)|).$$ Finally property \eqref{7} concludes the proof of \eqref{8} as $\mu\to0$. The conclusion follows by summing \eqref{e16} and \eqref{8} for $i=1,\dots,L'$ and taking first $\eta\to0$ and finally $\delta\to0$. \end{proof} \begin{proposition}[Upper bound]\label{p:limsupunidim} For all $u\in BV(\Omega)$ there exists $(u_k,v_k)\to(u,1)$ in $L^1(\Omega){\times}L^1(\Omega)$ such that $$\limsup_{k\to+\infty}F_k(u_k,v_k)\leq \Phi(u).$$ \end{proposition} \begin{proof} Let us consider first the case when $u\in SBV^2(\Omega)$. By a localization argument it is not restrictive to assume that $J_{u}=\{x_0\}$ and to take $x_0=0$. We also assume for the moment that $u$ is constant in a neighborhood on each side of $0$. With fixed $\eta>0$, we consider $T_\eta>0$ and $\alpha_\eta,\beta_\eta\in H^1\big((0,T_{\eta})\big)$ such that $\alpha_\eta(0)=u^-(0),$ $\alpha_\eta(T_{\eta})=u^+(0)$, $0\leq\beta_\eta\leq 1$, $\beta_\eta(0)=\beta_\eta(T_\eta)=1$, and \be{\label{e17}g\big(|[u](0)|\big)+\eta>\int_0^{T_{\eta}} \left(f^2(\beta_\eta)|\alpha_\eta'|^2+\frac{|1-\beta_\eta|^2}{4}+|\beta_\eta'|^2\right)dt.} This choice is possible in view of Proposition \ref{p2}, up to a translation of the variable $\alpha_\eta$. Let us define $A_k:=(-\frac{\varepsilon_kT_\eta}{2},\frac{\varepsilon_kT_\eta}{2})$ and \ba {u_k(x):=\left\{ \begin{array}{ll} \ {\displaystyle \alpha_\eta\left(\frac{x}{\varepsilon_k}+\frac{T_\eta}{2}\right)} & \textrm{if $x\in A_k$,}\\ u & \text{otherwise}, \end{array} \right.\nonumber\\ v_k(x):=\left\{ \begin{array}{ll} \ {\displaystyle \beta_\eta\left(\frac{x}{\varepsilon_k}+\frac{T_\eta}{2}\right)} & \textrm{if $x\in A_k$,}\\ 1 & \text{otherwise}. \end{array} \right.
} An easy computation shows that $(u_k,v_k)\to(u,1)$ in $L^1(\Omega){\times}L^1(\Omega)$, that $u_k,v_k\in H^1(\Omega)$ for $k$ large, and that for the same $k$ $$F_k(u_k,v_k;\Omega\setminus A_k)\leq\int_{\Omega}|u'|^2dx,$$ since $f_k\leq 1$. Moreover, using that $f_k\leq \varepsilon^{1/2}_kf$ and changing the variable $x$ with $y=\frac{x}{\varepsilon_k}+\frac{T_\eta}{2}$ one has $$F_k(u_k,v_k;A_k)\leq g\big(|[u](0)|\big)+\eta,$$ where we have used \eqref{e17}. Therefore we find $$F''(u,1)\leq \int_{\Omega}|u'|^2dx+\int_{J_u}(g(|[u]|) +\eta)d{\mathcal H}^0,$$ and then \be{\label{e18} F''(u,1)\leq \Phi(u),} since $\eta$ is arbitrary. Let us remove now the hypothesis that $u$ is constant near $0$. For a function $u\in SBV^2(\Omega)$ with $J_u=\{0\}$, one can consider the sequence $u_j:=u$ in $\Omega\setminus (-1/j,1/j)$, with $u_j:=u(-1/j)$ in $(-1/j,0)$ and $u_j=u(1/j)$ in $(0,1/j)$. Then $u_j\to u$ in $L^1(\Omega)$ and $|u'_j|\leq |u'|$ $\mathcal{L}^1$-a.e.\ in $\Omega$, so that by the lower semicontinuity of $F''$ and by the absolute continuity of $u$ on both sides of $0$ we conclude as $j\to+\infty$ that $u$ still satisfies \eqref{e18}. The extension of \eqref{e18} to each $u\in SBV^2(\Omega)$ with ${\mathcal H}^0(J_u)<+\infty$ is immediate and finally \cite[Propositions 3.3-3.5]{bou-bra-but} conclude the proof. \end{proof} \section[Proof in the \texorpdfstring{$n$}{n}-dimensional case]{Proof in the \texorpdfstring{$n$}{n}-dimensional case}\label{s:ndim} In this section we establish the $\Gamma$-convergence result in the $n$-dimensional setting. We recover the lower bound estimate by using a slicing technique, thus reducing ourselves to the one-dimensional setting of Proposition~\ref{p:liminfunidim}. Instead, the upper bound inequality follows by an abstract approach based on integral representation results (cp. Proposition~\ref{p:limsupndim} below). \begin{proposition}\label{p:liminfndim} For every $(u,v)\in L^1(\Omega){\times}L^1(\Omega)$ it holds $$F(u,v)\leq F'(u,v).$$ \end{proposition} \begin{proof} Let us assume first that $u\in L^\infty(\Omega)$. We set $M:=||u||_{L^\infty(\Omega)}$. Let $(u_k,v_k)$ be a sequence such that $(u_k,v_k)\to(u,v)$ in $L^1(\Omega){\times}L^1(\Omega)$ and $\sup F_k(u_k,v_k)<+\infty$. Then it is straightforward that $v=1$ $\mathcal{L}^n\text{-a.e.\ in } \Omega$. We are going to show that $u\in BV(\Omega)$ and that \be{\label{e34}\Phi(u)\leq \liminf_{k\to+\infty} F_k(u_k,v_k),} which proves the claim under the boundedness assumption on $u$. Given $\xi\in\mathbb{S}^{n-1}$, we consider a subsequence $(u_r,v_r)$ of $(u_k,v_k)$ satisfying $$((u_r)_y^\xi,(v_r)_y^\xi)\to (u_y^\xi,1) \text{ in } L^1(\Omega_y^\xi){\times} L^1(\Omega_y^\xi) \text{ for } \mathcal{H}^{n-1}\text{-a.e.\ } y\in\Pi^\xi$$ and realizing the lower limit in \eqref{e34} as a limit. By Fubini's theorem and Fatou's lemma one deduces that \be{\label{e30} \liminf_{r\to \infty}\int_{\Omega_y^\xi}\bigg(f_r^2((v_r)_y^\xi)\left|\nabla ((u_r)_y^\xi)\right|^2+ \frac{(1-(v_r)_y^\xi)^2}{4\varepsilon_r}+\varepsilon_r|\nabla ((v_r)^\xi_y)|^2\bigg)dt <+\infty} holds for $\mathcal{H}^{n-1}$-a.e.\ $y\in \Omega^{\xi}$.
The one-dimensional result of Proposition \ref{p:liminfunidim} now yields that $u^\xi_y\in BV(\Omega^\xi_y)$ and that \bm{\label{e29}\int_{\Omega^\xi_y} h(|\nabla (u^\xi_y)|)dt+\int_{J_{u^\xi_y}}g(|[u^\xi_y]|)d{\mathcal H}^0+\ell|D^cu^\xi_y|(\Omega^\xi_y)\leq\\ \leq\liminf_{r\to \infty}\int_{\Omega_y^\xi}\bigg(f_r^2((v_r)_y^\xi)\left|\nabla ((u_r)_y^\xi)\right|^2+ \frac{(1-(v_r)_y^\xi)^2}{4\varepsilon_r}+\varepsilon_r|\nabla ((v_r)^\xi_y)|^2\bigg)dt.} Let us check that \eqref{e29} implies $u\in BV(\Omega)$ by estimating $\int_{\Omega^\xi}|D(u^\xi_y)|(\Omega^\xi_y)d{\mathcal H}^{n-1}$. We first notice that \be{\label{e32}\int_{\Omega^\xi_y} |\nabla (u^\xi_y)|dt\leq \frac{1}{\ell}\int_{\Omega^\xi_y} h(|\nabla (u^\xi_y)|)dt+\frac{\ell}{4}{\mathcal L}^1(\Omega^\xi_y),} since $h(s)\geq \ell s-{\ell}^2/4$. Since $g(s)/s\to\ell$ as $s\to0$, with fixed $\eta>0$ one has \be{\label{e33}g(s)>(\ell-\eta)s\qquad \textrm{for }s<\delta,} for some $\delta$ sufficiently small. Therefore \eqref{e32}, \eqref{e33}, and the boundedness of $u$ entail \bm{|D(u^\xi_y)|(\Omega^\xi_y)\leq\frac{1}{\ell}\int_{\Omega^\xi_y} h(|\nabla (u^\xi_y)|)dt+\frac{\ell}{4}\diam{\Omega}+\frac{1}{\ell-\eta}\int_{\{t\in J_{u^\xi_y}:|[u^\xi_y]|<\delta\}}g(|[u^\xi_y]|)d{\mathcal H}^0\\ +\frac{2M}{g(\delta)}\int_{\{t\in J_{u^\xi_y}:|[u^\xi_y]|\geq\delta\}}g(|[u^\xi_y]|)d{\mathcal H}^0 +|D^cu^\xi_y|(\Omega^\xi_y)\nonumber\\ \leq c+c\left(\int_{\Omega^\xi_y} h(|\nabla (u^\xi_y)|)dt+\int_{J_{u^\xi_y}}g(|[u^\xi_y]|)d{\mathcal H}^0+\ell|D^cu^\xi_y|(\Omega^\xi_y)\right), } where $\diam\Omega$ denotes the diameter of $\Omega$ and $c:=\max\{\frac{1}{\ell},\frac{\ell}{4}\diam{\Omega},\frac{1}{\ell-\eta},\frac{2M}{g(\delta)}\}$. Integrating the last inequality on $\Omega^\xi$ one deduces by \eqref{e29} $$\int_{\Omega^\xi}|D(u^\xi_y)|(\Omega^\xi_y)d{\mathcal H}^{n-1}\leq c{\mathcal H}^{n-1}(\Omega^\xi)+c\sup_k F_k(u_k,v_k).$$ Taking $\xi=e_1,\dots,e_n$ one obtains $u\in BV(\Omega)$. Let us prove now formula \eqref{e34} using localization. The integration on $\Omega^\xi$ of the one-dimensional estimate in \eqref{e29} gives \be{\label{e35}\int_\Omega h(|\nabla u\cdot\xi|)dx+\int_{J_{u}}|\nu_u\cdot\xi|g(|[u]|)d{\mathcal H}^{n-1}+\ell\int_\Omega |\gamma_u\cdot\xi|d|D^cu|\leq \liminf_{k\to+\infty} F_k(u_k,v_k;\Omega), } where $\gamma_u:=\frac{d D^c u}{d|D^c u|}$ denotes the density of $D^cu$ with respect to $|D^c u|$. Let $E\subset\Omega$ be a Borel set such that $D^a u(E)=0$ and $D^s u(\Omega\setminus E)=0$, and let $$\lambda:={\mathcal L}^n\lfloor{\Omega\setminus E}+{\mathcal H}^{n-1}\lfloor{J_u}+|D^cu|\lfloor{E\setminus J_u}.$$ Let us consider a countable dense set $D\subset\mathbb{S}^{n-1}$ and the functions $$\psi_\xi:=h(|\nabla u\cdot\xi|)\chi_{\Omega\setminus E}+|\nu_u\cdot\xi|g(|[u]|)\chi_{J_u} +\ell|\gamma_u\cdot\xi|\chi_{E\setminus J_u},\qquad \xi\in D.$$ Then (\ref{e35}) gives $(\psi_\xi\lambda)(A)\le F'(u,1;A)$ for all open sets $A\subset\Omega$. Since $F'(u,1;\cdot)$ is superadditive, this implies $((\sup_\xi\psi_\xi)\lambda)(A)\le F'(u,1;A)$ (see \cite[Lemma 15.2]{braides}) and therefore the conclusion. In the general case, if $u\in L^1\setminus L^\infty(\Omega)$ one considers $(u_k^M,v_k)$ and $(u^M,v)$, where $u^M:=(-M\vee u)\wedge M$ denotes the truncation at level $M\in(0,+\infty)$.
Since the functional $F_k$ decreases by truncation and $u_k^M\to u^M$ in $L^1(\Omega)$, we deduce that $u^M\in BV(\Omega)$ and \be{\label{e37}\Phi(u^M)\leq\liminf_{k\to+\infty} F_k(u_k^M,v_k)\leq \liminf_{k\to+\infty} F_k(u_k,v_k).} Therefore $u\in GBV(\Omega)$ and \eqref{e34} follows easily from \eqref{e37} as $M\to+\infty$. \end{proof} To prove the limsup inequality we follow an abstract approach. We first show that the $\overline{\Gamma}$-limit is a Borel measure. The only relevant property to be checked is the weak subadditivity of the $\Gamma$-limsup. This is a consequence of De Giorgi's slicing and averaging argument as shown in the following lemma. \begin{lemma}\label{l:wsub} Let $(u,v)\in L^1(\Omega){\times}L^1(\Omega)$ and let $A',A,B\in\mathcal{A}(\Omega)$ with $A'\subset\subset A$. Then \be{\label{e:subadd} F''(u,1;A'\cup B)\leq F''(u,1;A)+F''(u,1;B). } \end{lemma} \begin{proof} We assume that the right-hand side of \eqref{e:subadd} is finite, so that $u\in GBV(A\cup B)$ and $v=1$ ${\mathcal L}^n$-a.e.\ in $A\cup B$. We can reduce the problem to the case of functions $u\in BV\cap L^\infty(A\cup B)$. This is a straightforward consequence of the fact that the energies $F_k$, and thus the $\Gamma$-limsup $F''$, are decreasing by truncation. Actually, thanks to $L^1$ lower semicontinuity, they are continuous under such an operation. Under this assumption, let $(u_k^A,v_k^A)$, $(u_k^B,v_k^B)$ be recovery sequences for $(u,1)$ on $A$ and $B$ respectively, that is: \begin{equation}\label{e:rec1} (u_k^A,v_k^A), (u_k^B,v_k^B)\to (u,1) \text{ in } L^1(\Omega){\times} L^1(\Omega), \end{equation} and \begin{equation}\label{e:rec2} \limsup_{k\to+\infty} F_k(u_k^A,v_k^A;A)=F''(u,1;A),\quad \limsup_{k\to+\infty} F_k(u_k^B,v_k^B;B)=F''(u,1;B). \end{equation} Note that, again up to truncations, we may assume that \begin{equation}\label{e:rec3} (u_k^A,v_k^A),\, (u_k^B,v_k^B)\quad\text{are bounded in } L^\infty(\Omega). \end{equation} To simplify the calculations below we introduce the functionals $G_k:L^1(\Omega)\times\mathcal{A}(\Omega)\to[0,+\infty]$ given by \[ G_k(v;O):=\int_O\left(\frac{(1-v)^2}{4\varepsilon_k}+\varepsilon_k|\nabla v|^2\right)dx,\quad \text{if $v\in H^1(\Omega)$,} \] $+\infty$ otherwise. Notice that \[ F_k(u,v;O)=\int_O f_k^2(v)|\nabla u|^2dx+G_k(v;O). \] Let $\delta:=\mathrm{dist}(A^\prime,\partial A)>0$, and with fixed $M\in{\mathbb N}$, we set for all $i\in\{1,\ldots,M\}$ \[ A_i:=\left\{x\in \Omega:\,\mathrm{dist}(x,A^\prime)<\frac{\delta}{M}i\right\}, \] and $A_0:=A^\prime$. Clearly, we have $A_{i-1}\subset\subset A_i\subset A$. Denote by $\varphi_i\in C_c^1(\Omega)$ a cut-off function between $A_{i-1}$ and $A_i$, i.e., $\varphi_i|_{A_{i-1}}=1$, $\varphi_i|_{A_{i}^c}=0$, and $\|\nabla \varphi_i\|_{L^\infty(\Omega)}\leq \frac{2M}{\delta}$. Then, set \begin{equation}\label{e:uki} u_k^i:=\varphi_i\,u_k^A+(1-\varphi_i)u_k^B, \end{equation} and \begin{equation}\label{e:vki} v_k^i:=\begin{cases} \varphi_{i-1}\,v_k^A+(1-\varphi_{i-1})(v_k^A\wedge v_k^B) & \text{ on } A_{i-1}\cr v_k^A\wedge v_k^B & \text{ on } A_i\setminus {A}_{i-1} \cr \varphi_{i+1}(v_k^A\wedge v_k^B)+(1-\varphi_{i+1})\,v_k^B & \text{ on } \Omega\setminus {A}_i.
\end{cases} \end{equation} With fixed $i\in\{2,\ldots,M-1\}$, by the very definitions in \eqref{e:uki} and \eqref{e:vki} above $(u_k^i,v_k^i)\in H^1(\Omega){\times}H^1(\Omega)$ and the related energy $F_k$ on $A'\cup B$ can be estimated as follows \begin{equation}\label{e:split} F_k(u_k^i,v_k^i;A'\cup B)\leq F_k(u_k^A,v_k^A;A_{i-2})+F_k(u_k^B,v_k^B;B\setminus {A}_{i+1})+ F_k(u_k^i,v_k^i;B\cap(A_{i+1}\setminus {A}_{i-2})). \end{equation} Therefore, we need to bound only the last term. To this aim we further split the contributions in each layer; in estimating each of such terms we shall repeatedly use the monotonicity of $f_k$ and the fact that it is bounded by $1$. In addition, a positive constant, which may vary from line to line, will appear in the formulas below. Elementary computations and the very definitions in \eqref{e:uki} and \eqref{e:vki} give, using $v^i_k\le v^A_k$, \begin{multline}\label{e:i-1i-2} F_k(u_k^i,v_k^i;B\cap(A_{i-1}\setminus {A}_{i-2}))\leq \int_{B\cap(A_{i-1}\setminus {A}_{i-2})}f_k^2(v_k^A)|\nabla u_k^A|^2\,dx +G_k(v_k^i;B\cap(A_{i-1}\setminus {A}_{i-2}))\\ \leq c\,\Big(F_k(u_k^A,v_k^A;B\cap(A_{i-1}\setminus {A}_{i-2})) +F_k(u_k^B,v_k^B;B\cap(A_{i-1}\setminus {A}_{i-2}))\Big)\\ +\frac{c\,M^2\varepsilon_k}{\delta^2} \int_{B\cap(A_{i-1}\setminus {A}_{i-2})}|v_k^A-v_k^B|^2\,dx, \end{multline} \begin{multline}\label{e:ii-1} F_k(u_k^i,v_k^i;B\cap(A_{i}\setminus {A}_{i-1}))\\ \leq c\int_{B\cap(A_{i}\setminus {A}_{i-1})}f_k^2(v_k^A\wedge v_k^B) \left(|\nabla u_k^A|^2+|\nabla u_k^B|^2+\frac{4M^2}{\delta^2}|u_k^A-u_k^B|^2\right)\,dx +G_k(v_k^A\wedge v_k^B;B\cap(A_{i}\setminus {A}_{i-1}))\\ \leq c\Big(F_k(u_k^A,v_k^A;B\cap(A_{i}\setminus {A}_{i-1})) +F_k(u_k^B,v_k^B;B\cap(A_{i}\setminus {A}_{i-1}))\Big) +\frac{c\,M^2}{\delta^2}\int_{B\cap(A_{i}\setminus {A}_{i-1})}|u_k^A-u_k^B|^2\,dx, \end{multline} and \begin{multline}\label{e:i+1i} F_k(u_k^i,v_k^i;B\cap(A_{i+1}\setminus {A}_{i}))\leq \int_{B\cap(A_{i+1}\setminus {A}_{i})}f_k^2(v_k^B)|\nabla u_k^B|^2\,dx +G_k(v_k^i;B\cap(A_{i+1}\setminus {A}_{i}))\\ \leq c\,\Big(F_k(u_k^A,v_k^A;B\cap(A_{i+1}\setminus {A}_{i})) +F_k(u_k^B,v_k^B;B\cap(A_{i+1}\setminus {A}_{i}))\Big)+\frac{c\,M^2\varepsilon_k}{\delta^2} \int_{B\cap(A_{i+1}\setminus {A}_{i})}|v_k^A-v_k^B|^2\,dx. \end{multline} By adding \eqref{e:split}-\eqref{e:i+1i}, we deduce that \begin{multline*} F_k(u_k^i,v_k^i;A'\cup B)\leq F_k(u_k^A,v_k^A;A)+F_k(u_k^B,v_k^B;B)\\ +c\,\Big(F_k(u_k^A,v_k^A;B\cap(A_{i+1}\setminus {A}_{i-2})) +F_k(u_k^B,v_k^B;B\cap(A_{i+1}\setminus {A}_{i-2}))\Big)\\ +\frac{c\,M^2}{\delta^2}\int_{B\cap(A_{i+1}\setminus {A}_{i-2})}|u_k^A-u_k^B|^2\,dx +\frac{c\,M^2\varepsilon_k}{\delta^2}\int_{B\cap(A_{i+1}\setminus {A}_{i-2})}|v_k^A-v_k^B|^2\,dx. \end{multline*} Hence, by summing up on $i\in\{2,\ldots,M-1\}$ and taking the average, for each $k$ we may find an index $i_k$ in that range such that \begin{multline*} F_k(u_k^{i_k},v_k^{i_k};A'\cup B)\leq F_k(u_k^A,v_k^A;A)+F_k(u_k^B,v_k^B;B)\\ +\frac{c}{M}\,\Big(F_k(u_k^A,v_k^A;B\cap(A\setminus {A^\prime})) +F_k(u_k^B,v_k^B;B\cap(A\setminus {A^\prime}))\Big)\\ +\frac{c\,M}{\delta^2}\int_{B\cap(A\setminus {A^\prime})}|u_k^A-u_k^B|^2\,dx +\frac{c\,M\varepsilon_k}{\delta^2}\int_{B\cap(A\setminus {A^\prime})}|v_k^A-v_k^B|^2\,dx. \end{multline*} By \eqref{e:rec1} we deduce that $(u_k^{i_k},v_k^{i_k})\to (u,1)$ in $L^1(\Omega){\times} L^1(\Omega)$, and actually in $L^q(\Omega){\times} L^q(\Omega)$ for all $q\in[1,+\infty)$ thanks to the uniform boundedness assumption in \eqref{e:rec3}. 
Therefore, in view of \eqref{e:rec2} and the definition of $\Gamma$-limsup we infer that \[ F''(u,1;A'\cup B)\leq \left(1+\frac{c}{M}\right)\Big(F''(u,1;A)+F''(u,1;B)\Big). \] The conclusion then follows by passing to the limit on $M\uparrow\infty$. \end{proof} We next prove that $F''(u,1;\cdot)$ is controlled in terms of the Mumford-Shah functional ${M\!S}$, whose definition is given in \eqref{e:MS}. This result gives a first rough estimate for the upper bound inequality. We shall improve on the jump part in Proposition~\ref{p:limsupndim} below and finally we shall conclude the proof of the $\Gamma$-limsup inequality using a relaxation argument. \begin{lemma}\label{l:boundglimsup} For all $u\in L^1(\Omega)$ and $A\in\mathcal{A}(\Omega)$ it holds \begin{equation}\label{e:FH} F''(u,1;A)\leq {M\!S}(u;A). \end{equation} \end{lemma} \begin{proof} Denote by $\psi:[0,1]\to[0,1]$ any nondecreasing lower-semicontinuous function such that $\psi^{-1}(0)=0$, $\psi(1)=1$ and \[ \sup_{k}f_k(s)\leq\psi(s)\qquad\text{for all $s\in[0,1]$}, \] for instance $\psi=\chi_{(0,1]}$ satisfies all the conditions written above. Consider the corresponding functionals $AT_k^\psi:L^1(\Omega)\times L^1(\Omega)\to[0,+\infty]$ defined in \eqref{e:ATk}, and note that $F_k\leq AT_k^\psi$ for every $k$. The upper bound inequality for $(F_k)$ then follows at once from the classical results by Ambrosio and Tortorelli (cp. \cite{amb-tort2}, and see also \cite{focardi}). \end{proof} We are now ready to prove the upper bound inequality. \begin{proposition}\label{p:limsupndim} For every $(u,v)\in L^1(\Omega){\times}L^1(\Omega)$ it holds \[ F''(u,v)\leq F(u,v). \] \end{proposition} \begin{proof} Since $L^1$ is separable, given any subsequence $(F_{k_j})$ of $(F_k)$ we may extract a further subsequence, not relabeled for convenience, $\overline{\Gamma}$-converging to some $\widehat{F}$ (see \cite[Theorem~16.9]{dalmaso}). The functional $\widehat{F}(u,v;\cdot)$ is by definition increasing and inner regular. Since $F_k(u,v;\cdot)$ is additive, one easily deduces that $F'$ is superadditive and from this that its inner regular envelope $\widehat F=(F')_-$ is superadditive (see \cite[Proposition~14.18 or Proposition~16.12]{dalmaso}). Using Lemma~\ref{l:wsub} one can show that $\widehat F=(F'')_-$ is subadditive (see \cite[Lemma~14.20 and the proof of Proposition~18.4]{dalmaso}). Therefore $\widehat F$ is the restriction to open sets of the Borel measure \begin{equation*} F_*(u,v;E)=\inf \{ \widehat F(u,v;A): A\in \mathcal{A}(\Omega); E\subset A\}\,, \end{equation*} see \cite[Theorem~14.23]{dalmaso}, in the following we identify $\widehat F$ and $F_*$. If $u\in L^1(\Omega)$ is such that ${M\!S}(u;\Omega)<+\infty$, then by Lemma~\ref{l:boundglimsup} we obtain $F''(u,1;\cdot)\le {M\!S}(u;\cdot)<+\infty$ on all open sets, and by the regularity properties of Radon measures $F''$ coincides with its inner envelope. Indeed, for a given open set $A$ and $\varepsilon>0$, choose open sets $A'$, $A''$ and $C$ with $A'\subset\subset A''\subset\subset A$ and $A\setminus A'\subset C$ such that ${M\!S}(u;C)\le\varepsilon$. Then use Lemmas~\ref{l:wsub} and \ref{l:boundglimsup} to estimate $F''(u,1;A)\le F''(u,1;A'\cup C)\le F''(u,1;A'')+{M\!S}(u;C)\leq F''(u,1;A'')+\varepsilon$. In other words, $\widehat{F}(u,1)$ is the $\Gamma$-limit of $F_{k_j}$ for all $u$ such that ${M\!S}(u)<+\infty$. 
In particular, for all $u\in SBV^2(\Omega)$ the estimate in Lemma~\ref{l:boundglimsup} implies that \begin{equation}\label{e:upbvolume} \widehat{F}(u,1;\Omega\setminus J_u)\leq\int_\Omega |\nabla u|^2dx. \end{equation} We provide below for the same $u$ the estimate \begin{equation}\label{e:upbvsalto} \widehat{F}(u,1;J_u)\leq\int_{J_u}g(|[u]|)\,d{\mathcal H}^{n-1}. \end{equation} Taking this for granted, we conclude as follows: we consider the functional $F_\infty:BV(\Omega)\to[0,+\infty]$ \[ F_\infty(u):=\begin{cases} \displaystyle{ \int_\Omega |\nabla u|^2dx+\int_{J_u}g(|[u]|)\,d{\mathcal H}^{n-1}} & \textrm{if $u\in SBV^2(\Omega)$} \cr +\infty & \textrm{otherwise on $BV(\Omega)$.} \end{cases} \] Further, note that by \cite[Theorem~3.1 and Propositions~3.3-3.5]{bou-bra-but} its relaxation with respect to the $w\ast\hbox{-}BV$ topology is given on $BV(\Omega)$ by $F(\cdot,1)$. Since, by \eqref{e:upbvolume} and \eqref{e:upbvsalto}, we have $\widehat{F}\leq F_\infty$, and $\widehat{F}(\cdot,1)$ is $L^1$-lower semicontinuous, we infer that \[ \widehat{F}(u,1)\leq F(u,1)\quad\textrm{for all $u\in BV(\Omega)$.} \] We conclude that the same inequality is true for all $u\in GBV\cap L^1(\Omega)$ by the usual truncation argument. Finally, combining the latter estimate with the lower estimate of Proposition~\ref{p:liminfndim} allows us to deduce that the $\Gamma\hbox{-}$limit does not depend on the chosen subsequence and is equal to $F$. Hence, by Urysohn's property the whole family $(F_k)$ $\Gamma\hbox{-}$converges to $F$ (cp. \cite[Proposition~8.3]{dalmaso}). Let us now prove formula \eqref{e:upbvsalto}. To this aim, with fixed $\lambda>0$ we introduce the perturbed functional \[ \widehat{F_{\lambda}}(u,1):=\widehat{F}(u,1)+\lambda\Big(\int_{\Omega}|\nabla u|^2dx +\int_{J_u}(1+|[u]|)d{\mathcal H}^{n-1}\Big) \] for all $u\in SBV^2(\Omega)$. We may apply to $\widehat{F_{\lambda}}$ the integral representation result \cite[Theorem~1]{bou-fon-leo-masc} to infer that for ${\mathcal H}^{n-1}$-a.e. $x\in J_u$ \bm{\label{e:flambda} \frac{d\widehat{F_{\lambda}}(u,1;\cdot)}{d({\mathcal H}^{n-1}\res J_u)}(x)=\limsup_{\delta\downarrow 0}\frac{1}{\delta^{n-1}} \inf\left\{\widehat{F_{\lambda}}(w,1;x+\delta\,Q_{\nu_u(x)}):\,w\in SBV^2\big(x+\delta\,Q_{\nu_u(x)}\big),\right. \\ \left.w=u_x \text{ on a neighborhood of }x+\delta\,\partial Q_{\nu_u(x)}\right\}, } where \[ u_x(y):= \begin{cases} u^+(x) & \text { if }\langle y-x,\nu_u(x)\rangle>0\cr u^-(x) & \text { if }\langle y-x,\nu_u(x)\rangle<0 \end{cases} \] and $Q_{\nu_u(x)}$ denotes any cube of side $1$ centered at the origin and with a face orthogonal to $\nu_u(x)$. Hence, it is enough to show that for ${\mathcal H}^{n-1}$-a.e. $x\in J_u$ \begin{equation}\label{e:minest} \limsup_{\delta\downarrow 0}\frac 1{\delta^{n-1}}\widehat{F}(u_x,1;x+\delta\, Q_{\nu_u(x)})\leq g(|[u](x)|), \end{equation} since by taking $u_x$ itself as test function in \eqref{e:flambda} we get $$ \frac{d\widehat{F_{\lambda}}(u,1;\cdot)}{d({\mathcal H}^{n-1}\res J_u)}(x)\leq \limsup_{\delta\downarrow 0}\frac{1}{\delta^{n-1}} \widehat{F}(u_x,1;x+\delta\, Q_{\nu_u(x)})+\lambda(1+|[u](x)|), $$ in turn implying \[ \widehat{F}(u,1;J_u)\leq\widehat{F_{\lambda}}(u,1;J_u)\leq \int_{J_u}\big(g(|[u](x)|)+\lambda+\lambda|[u](x)|\big)\,d{\mathcal H}^{n-1}. \] Finally, \eqref{e:upbvsalto} follows at once by letting $\lambda\downarrow 0$.
Formula \eqref{e:minest} easily follows by repeating the one-dimensional construction of Proposition~\ref{p:limsupunidim}. More precisely, assume $x=0$ and $\nu_u(x)=e_n$ for simplicity. With fixed $\eta>0$, let $T_{\eta}>0$ and $\alpha_{\eta},\beta_{\eta}\in H^1\big((0,T_{\eta})\big)$ be such that $\alpha_{\eta}(0)=u^-(0),$ $\alpha_{\eta}(T_{\eta})=u^+(0)$, $\beta_{\eta}(0)=\beta_{\eta}(T_\eta)=1$, $u^-(0)\leq\alpha_{\eta}\leq u^+(0)$, $0\leq\beta_{\eta}\leq 1$, and \[ \int_0^{T_{\eta}} \left(f^2(\beta_{\eta})|\alpha_{\eta}'|^2+ \frac{|1-\beta_{\eta}|^2}{4}+|\beta_{\eta}'|^2\right)dt\leq g\big(|[u](0)|\big)+\eta. \] Let $A_j:=(-\frac{\varepsilon_{k_j}T_\eta}{2},\frac{\varepsilon_{k_j}T_\eta}{2})$, and set \ba {u_j(y):=\left\{ \begin{array}{ll} \ {\displaystyle \alpha_{\eta}\left(\frac{y_n}{\varepsilon_{k_j}}+\frac{T_\eta}{2}\right)} & \textrm{if $y_n\in A_j$}\\ u_0 & \text{otherwise}, \end{array} \right.\nonumber\\ v_j(y):=\left\{ \begin{array}{ll} \ {\displaystyle \beta_{\eta}\left(\frac{y_n}{\varepsilon_{k_j}}+\frac{T_\eta}{2}\right)} & \textrm{if $y_n\in A_j$}\\ 1 & \text{otherwise}. \end{array} \right. } Clearly, $(u_j,v_j)\to(u_0,1)$ in $L^1(Q_{e_n})\times L^1(Q_{e_n})$, and if $Q'_{e_n}=Q_{e_n}\cap({\mathbb R}^{n-1}\times\{0\})$, a change of variable yields \begin{multline*} F_{k_j}(u_j,v_j;\delta\,Q_{e_n})=F_{k_j}\big(u_j,v_j;\delta\,Q'_{e_n}\times A_j\big)\\ \leq\delta^{n-1}\int_0^{T_{\eta}} \left(f^2(\beta_{\eta})|\alpha_{\eta}'|^2+ \frac{|1-\beta_{\eta}|^2}{4}+|\beta_{\eta}'|^2\right)dt\leq \delta^{n-1}(g\big(|[u](0)|\big)+\eta). \end{multline*} Therefore, by the very definition of $\widehat{F}$ we infer that \[ \widehat{F}(u_0,1;\delta\,Q_{e_n})\leq \delta^{n-1}(g\big(|[u](0)|\big)+\eta), \] and estimate \eqref{e:minest} follows at once dividing by $\delta^{n-1}$ and taking the superior limit as $\delta\downarrow 0$, and finally by letting $\eta\downarrow 0$ in the formula above. \end{proof} The proof of the compactness result Theorem~\ref{t:comp} follows the lines of \cite[Theorem 7.4]{dm-iur}, so we just sketch the relevant arguments and refer to \cite{dm-iur} for more details. \begin{proof}[Proof of Theorem~\ref{t:comp}] One first proves the result in the one-dimensional case under the hypothesis that $u_k$ is bounded in $L^\infty(\Omega)$. Then one extends the proof to the $n$-dimensional case and finally removes the boundedness assumption. Let us start assuming that $n=1$ and that $\sup_k ||u_k||_{L^\infty(\Omega)}<+\infty$. Up to a diagonalization argument, one reduces to the case $\Omega=(0,1)$. Repeating the proof of Proposition~\ref{p:liminfunidim} one finds that $v_k\to1$ in $L^1(\Omega)$ and that for every $\delta>0$ there exists a finite subset $S\subset\Omega$ for which $$(1-\omega(\delta))\int_{\Omega\setminus S_{\eta}}h(|u'_k|)dx\leq F_k(u_k,v_k;\Omega\setminus S_{\eta})$$ holds for $\eta>0$ small (depending on $\delta$) and for $k$ large (depending on $\eta$), where $\omega$ is a modulus of continuity depending only on $f$ and $S_{\eta}:=\bigcup_{i=1}^L(t_i-\eta,t_i+\eta)$. This implies by assumption that $u_k$ is bounded in $BV(\Omega\setminus S_{\eta})$ uniformly with respect to $k$ and $\eta$. Hence up to subsequences $u_k$ converges to a function $u\in BV(\Omega\setminus S_{\eta})$ ${\mathcal L}^1$-a.e.\ in $\Omega\setminus S_{\eta}$. The boundedness hypothesis and a diagonalization argument yield that $u$ in fact belongs to $BV(\Omega)$ and that $u_k\to u$ in $L^1(\Omega)$.
In order to generalize the previous result to the case $n>1$, one applies a compactness result by Alberti, Bouchitt\'{e}, and Seppecher \cite[Theorem 6.6]{alberti}. Indeed, with fixed $\xi\in\mathbb{S}^{n-1}$ and $\delta>0$, one can introduce the sequence $w_k$ whose slices satisfy \bes{ & (w_k)^\xi_y:=\begin{cases}(u_k)^{\xi}_y & \textrm{if $y\in A_k$,}\\ 0 & \textrm{otherwise,} \end{cases}\\ & A_k:=\{y\in \Omega^{\xi}:F^1_k((u_k)^{\xi}_y,(v_k)^\xi_y)\leq L\},} where $F_k^1$ denotes the one-dimensional counterpart of the functional $F_k$ and $L$ is chosen properly and depends on $\delta$. An easy computation shows that $w_k$ is bounded in $L^\infty(\Omega)$, that $u_k$ is in a $\delta$-neighborhood of $w_k$ in $L^1(\Omega)$, and that $(w_k)^\xi_y$ is pre-compact in $L^1(\Omega)$ (the last property follows from the first part of the proof). Then the pre-compactness of $u_k$ in $L^1(\Omega)$ is ensured by \cite[Theorem 6.6]{alberti} as $\xi$ varies in a basis of ${\mathbb R}^n$. If $u_k$ is not bounded in $L^\infty(\Omega)$ the argument above applies to the truncations, so that up to subsequences $u_k^M\to u_M$ in $L^1(\Omega)$ and $\mathcal{L}^n\text{-a.e.\ in } \Omega$, with $u_M\in BV(\Omega)$, for every $M\in {\mathbb N}$. One can prove that the function $$u:=\lim_{M\to+\infty}u_M$$ is well-defined, finite $\mathcal{L}^n\text{-a.e.\ in } \Omega$, and its truncation $u^M$ coincides with $u_M$ $\mathcal{L}^n\text{-a.e.\ in } \Omega$. This straightforwardly implies that $u_k\to u$ $\mathcal{L}^n\text{-a.e.\ in } \Omega$ and that $u\in GBV\cap L^1(\Omega)$. \end{proof} \section{Further results}\label{s:further} In this section we build upon the results in Sections~\ref{s:stat}-\ref{s:ndim} to obtain in the limit different models by slightly changing the approximating energies $F_k$'s. More precisely, we shall approximate a cohesive model with the Dugdale's surface density, a cohesive model with power-law growth at small openings, and a model of Griffith's brittle fracture. This task will be accomplished by letting the function $f$ vary as in item (ii) of Proposition~\ref{p:gl} in the first instance, as in item (iii) in the third, and suitably in the second (cp. (iii) of Proposition~\ref{p:tp} below), respectively. More precisely, we consider a sequence of functions $(\f{j})$ satisfying \eqref{f0} and \eqref{f1} and for all $j,k\in{\mathbb N}$ introduce the energies \begin{equation}\label{e:Fj} \Fk jk(u,v):=\begin{cases} \displaystyle{\int_\Omega{\Big((\fk jk)^2(v)|\nabla u|^2+\frac{(1-v)^2}{4\varepsilon_k} +\varepsilon_k|\nabla v|^2\Big) dx}} & \textrm{if $(u,v)\in H^1(\Omega){\times}H^1(\Omega)$}\cr & \text{and $0\leq v\leq1$ $\mathcal{L}^n\text{-a.e.\ in } \Omega$},\cr + \infty & \text{otherwise}, \end{cases} \end{equation} where $\fk jk(s):=1\wedge\varepsilon_k^{1/2} \f{j}(s)$. In each of Theorems~\ref{t:Dugdale}, \ref{t:sublin}, and \ref{t:MS} below we shall further specify the nature of the sequence $(\f{j})$.
\subsection{Dugdale's cohesive model} \label{subsecdugdale} In order to approximate the Dugdale's model, i.e., to get in the limit $\mathscr{D}:L^1(\Omega)\to[0,+\infty]$ \begin{equation}\label{e:Phit} \mathscr{D}(u):= \begin{cases} \displaystyle{\int_\Omega h(|\nabla u|)dx+\int_{J_{u}}\big(\gD{|[u]|}\big)d{\mathcal H}^{n-1}+\ell|D^cu|(\Omega)} & \textrm{if $u\in GBV(\Omega)$,}\cr\cr +\infty & \textrm{otherwise}, \end{cases} \end{equation} with $h$ as in \eqref{h}, we shall consider the specific choice \begin{equation}\label{e:fjPhit} \f j(s):=(a_j\,s)\vee f(s) \end{equation} with $f$ satisfying \eqref{f0} and \eqref{f1}, and \begin{equation}\label{e:alphaj} \text{$(a_j)$ nondecreasing, $a_j\uparrow\infty$ and such that $a_j\,\varepsilon_j^{1/2}\downarrow 0$.} \end{equation} \begin{theorem}\label{t:Dugdale} Suppose that $(\f j)$ is as in \eqref{e:fjPhit} and \eqref{e:alphaj} above. Then, the functionals $\Fk kk$ $\Gamma$-converge in $L^1(\Omega){\times}L^1(\Omega)$ to the functional $\widetilde{\mathscr{D}}$ defined as follows \begin{equation}\label{e:F1} \widetilde{\mathscr{D}}(u,v):=\begin{cases} \mathscr{D}(u) & \textrm{if $v=1$ $\mathcal{L}^n\text{-a.e.\ in } \Omega$,}\cr +\infty & \text{otherwise}. \end{cases} \end{equation} \end{theorem} \begin{proof}[Proof of Theorem~\ref{t:Dugdale}] The very definitions in \eqref{Fk} and \eqref{e:Fj} give $\Fk jk\leq \Fk kk$ for $j\leq k$, since $(\f j)$ is nondecreasing by assumption. Hence, by Theorem~\ref{t:gamma-lim} we deduce \begin{equation}\label{e:liminfFj0} \Gamma\hbox{-}\liminf_k \Fk kk(u,v)\geq \F j(u,v), \end{equation} where $\F j$ is defined as $F$ in \eqref{F} with $f$ substituted by $\f j$ in formulas \eqref{f1} and \eqref{h} defining the volume density, and \eqref{g} defining the surface density. In particular, being $\ell_j=\ell$ for all $j$, the corresponding volume density $h_j$ equals the function $h$ in \eqref{h}. Moreover, the surface energy densities $g_j$ are dominated by the constant $1$, and by item (ii) in Proposition~\ref{p:gl} we have $\lim_jg_j(s)=\gD{s}$ for all $s\in[0,+\infty)$. In conclusion, if $\Gamma\hbox{-}\liminf_k \Fk kk(u,v)<+\infty$, we infer that $v=1$ $\mathcal{L}^n\text{-a.e.\ in } \Omega$, $u\in GBV(\Omega)$, and \[ \Gamma\hbox{-}\liminf_k\Fk kk(u,1)\geq \mathscr{D}(u), \] by the Dominated Convergence theorem, as $j\uparrow+\infty$ in \eqref{e:liminfFj0}. The upper bound inequality follows by arguing as in Proposition~\ref{p:limsupndim}. Indeed, we first note that by a careful inspection of the proofs, Lemmas~\ref{l:wsub} and \ref{l:boundglimsup} are still valid in this generalized framework. More precisely, Lemma~\ref{l:wsub} continues to hold true as there we have only used that each function $f_k=1\wedge\varepsilon_k^{1/2} f$ in \eqref{fk} is nondecreasing and bounded by $\chi_{(0,1]}$, properties enjoyed by $\fk kk$ as well. In conclusion, as a first step we establish the estimate \begin{equation}\label{e:minest2} \limsup_{\delta\downarrow 0}\frac 1{\delta^{n-1}}\widehat{F}(u_x,1;x+\delta\, Q_{\nu_u(x)})\leq \gD{|[u](x)|}, \end{equation} for $u\in SBV^2(\Omega)$ and for ${\mathcal H}^{n-1}$-a.e. $x\in J_u$, where $\widehat{F}$ is the $\overline{\Gamma}$-limit of a properly chosen subsequence $(\Fk{k_j}{k_j})$ of $(\Fk kk)$ (cp. Proposition~\ref{p:limsupndim}). Given \eqref{e:minest2}, the derivation of the upper bound inequality in general follows exactly as in Proposition~\ref{p:limsupndim}.
Let us now prove \eqref{e:minest2} by means of a one-dimensional construction. For the sake of simplicity we assume $x=0$ and $\nu_u(x)=e_n$. Actually, in view of the estimate in Lemma~\ref{l:boundglimsup} we need only to discuss the case $|[u](0)|<\ell^{-1}$. To this aim, set \[ s_j:=\sup\{s\in[0,1):\,a_{k_j}s=f(s)\}; \] it is easy to check that $s_j$ is actually a maximum, i.e., $a_{k_j}s_j=f(s_j)$, and that $s_j\leq s_{j+1}<1$ with $s_j\uparrow 1$. Let now \[ T_{j}:=|[u](0)|\frac{f(s_j)}{1-s_j}, \] then $T_j\uparrow\infty$. Define $\alpha_j(t):=u^-(0)$ on $[-T_j-1,-T_j]$, $\alpha_j(t):=[u](0)\cdot(\frac{t}{2T_j}+\frac12)+u^-(0)$ on $[-T_j,T_j]$, $\alpha_j(t):=u^+(0)$ on $[T_j,T_j+1]$, and $\beta_j(t):=s_j$ on $[-T_j,T_j]$, $\beta_j(t):=(1-s_j)(|t|-T_j)+s_j$ otherwise in $[-T_j-1,T_j+1]$. Setting $A_j:=(-{\varepsilon_{k_j}(T_j+1)},{\varepsilon_{k_j}(T_j}+1))$, we have that ${\mathcal L}^1(A_j)\to0$ as $j\uparrow\infty$ by \eqref{e:alphaj}. Indeed, in view of \eqref{f1} and the definition of $s_j$ it is easy to deduce that $(1-s_j)a_{k_j}\to\ell$ as $j\uparrow\infty$, so that $\varepsilon_{k_j}T_j\sim\varepsilon_{k_j}a_{k_j}^2\to 0$ as $j\uparrow\infty$ thanks to \eqref{e:alphaj}. Therefore, if \ba {u_j(y):=\left\{ \begin{array}{ll} {\displaystyle \alpha_j\left(\frac{y_n}{\varepsilon_{k_j}}\right)} & \textrm{if $y_n\in A_j$}\\ u_0 & \text{otherwise}, \end{array} \right.\nonumber\\ v_j(y):=\left\{ \begin{array}{ll} {\displaystyle \beta_j\left(\frac{y_n}{\varepsilon_{k_j}}\right)} & \textrm{if $y_n\in A_j$}\\ 1 & \text{otherwise}, \end{array} \right. } then $(u_j,v_j)\to (u_0,1)$ in $L^1(Q_{e_n})\times L^1(Q_{e_n})$, where $u_0=u^-(0)\chi_{\{y_n\leq 0\}}+u^+(0)\chi_{\{y_n> 0\}}$. Moreover, if $Q'_{e_n}=Q_{e_n}\cap({\mathbb R}^{n-1}\times\{0\})$, then a change of variable yields \begin{multline*} \Fk{k_j}{k_j}(u_j,v_j;\delta\,Q_{e_n})=\Fk {k_j}{k_j}\big(u_j,v_j;\delta\,Q'_{e_n}\times A_j\big)\\ \leq\delta^{n-1}\left(\int_{-T_j}^{T_j}\bigg(f^2(\beta_j)|\nabla\alpha_j|^2+\frac{(1-\beta_j)^2}4\bigg)dt +2\int_{T_j}^{T_j+1} \left(\frac{|1-\beta_j|^2}{4}+|\beta'_j|^2\right)dt\right)\\ =\delta^{n-1}\left( \bigg(f^2(s_j)\frac{|[u](0)|^2}{2T_j}+2T_j\frac{(1-s_j)^2}4\bigg) +2(1-s_j)^2\int_{T_j}^{T_j+1}\frac{(t-(T_j+1))^2}{4}dt+2(1-s_j)^2\right)\\ =\delta^{n-1}\left((1-s_j)f(s_j)|[u](0)|+\frac{13}6(1-s_j)^2\right) =\delta^{n-1}\big(\ell|[u](0)|+o(1)\big)\qquad\textrm{as $j\uparrow\infty$}. \end{multline*} Therefore, being $|[u](0)|<\ell^{-1}$, by the very definition of $\widehat{F}$ we infer that \[ \widehat{F}(u_0,1;\delta\,Q_{e_n})\leq \delta^{n-1}(\gD{|[u](0)|}), \] and estimate \eqref{e:minest2} follows at once dividing by $\delta^{n-1}$ and taking the superior limit as $\delta\downarrow 0$ in the formula above. \end{proof} \begin{remark}\label{r:regimes} The analysis in the general case of a diverging sequence $f^{(k)}$ is much more intricate because of the combination of several effects: the speed of divergence of the $f^{(k)}$'s compared with the scaling $\varepsilon_k^{1/2}$ in the definition of $f^{(k)}_k$, and even more the behavior of each $f^{(k)}$ close to $1$. In this remark we limit ourselves to consider those families of functions $f^{(k)}$ satisfying item (ii) in Proposition~\ref{p:gl}; another instance shall be discussed in Remark~\ref{r:regimes2} below. Therefore, assume for example that $f(s)=\frac{\ell s}{1-s}$, and that $\f k$ is defined as in \eqref{e:fjPhit} above but with $a_k=\varepsilon_k^{-1/2}$, thus violating the last condition in \eqref{e:alphaj}.
Then, one can show that the $\Gamma$-limit is given by the Mumford-Shah energy introduced in \eqref{e:MS}. This claim follows easily by noting that with this choice \[ \fk kk(s)=\begin{cases} s & 0\leq s\leq 1-\ell\,\varepsilon_k^{1/2} \cr \displaystyle{\varepsilon_k^{1/2}\frac{\ell s}{1-s}} & 1-\ell\,\varepsilon_k^{1/2}\leq s\leq (1+\ell\,\varepsilon_k^{1/2})^{-1} \cr 1 & (1+\ell\,\varepsilon_k^{1/2})^{-1}\leq s\leq 1, \end{cases} \] so that $\fk kk(s)\geq s$ for all $s\in[0,1]$, and actually $(\fk kk)$ converges uniformly to the identity on $[0,1]$. Therefore, $AT_k^{Id}\leq\Fk kk\leq AT_k^{\psi}$, with $\psi(s)=\chi_{(0,1]}(s)$ (cp. with \eqref{e:ATk} for the definition of $AT_k^\psi$), and the result follows at once from Ambrosio and Tortorelli's classical results (cp. \cite{amb-tort2}, see also \cite{focardi}). A similar argument works also in the regime $a_k\varepsilon_k^{1/2}\uparrow\infty$, in which \[ \fk kk(s)=\begin{cases} a_k\varepsilon_k^{1/2}s & 0\leq s\leq a_k^{-1}\varepsilon_k^{-1/2} \cr 1 & a_k^{-1}\varepsilon_k^{-1/2}\leq s\leq 1, \end{cases} \] for $k$ sufficiently large, so that $\fk kk(s)\to\chi_{(0,1]}(s)$ for all $s\in[0,1]$, and again we get the Mumford-Shah energy in the $\Gamma$-limit arguing as above. Finally, note that for $a_k$ as in \eqref{e:alphaj}, we have \[ \fk kk(s)=\begin{cases} a_k\varepsilon_k^{1/2}s & 0\leq s\leq 1-\ell\,a_k^{-1} \cr \varepsilon_k^{1/2}\frac{\ell s}{1-s} & 1-\ell\,a_k^{-1}\leq s\leq (1+\ell\varepsilon_k^{1/2})^{-1}\cr 1 & (1+\ell\varepsilon_k^{1/2})^{-1}\leq s\leq 1 \end{cases} \] so that $\fk kk(s)\to\chi_{\{1\}}(s)$ in $[0,1]$. \end{remark} \subsection{A model with power-law growth at small openings} \label{subsectpowerlaw} In Theorem~\ref{t:sublin} below we approximate a model with sublinear surface density at the origin and quadratic growth for the volume term. To this aim, let $p>1$ and consider a function $\psi_p$ satisfying condition \eqref{f0} and \be{\label{e:f1p} \displaystyle\lim_{s\to1}(1-s)^p\psi_p(s)=\kappa, \quad \kappa\in(0,+\infty). } Clearly, one can take $\psi_p(s):=\frac{s}{(1-s)^p}$ as prototype. The surface energy density $\vartheta_p:[0,+\infty)\to[0,+\infty)$ is defined as $g$ in \eqref{g} by \be{\label{e:tp} \vartheta_p(s):=\inf_{(\alpha,\beta)\in\mathcal{U}_s} \int_0^1|1-\beta|\sqrt{\psi_p^2(\beta)|\alpha'|^2+|\beta'|^2}\,dt, } where $\mathcal{U}_s$ has been introduced in \eqref{e:admfnctns}. In this case the integral is finite only if $\beta<1$ almost everywhere on the set $\{\alpha'\ne0\}$. We next prove some properties of $\vartheta_p$ in analogy to Propositions~\ref{11}, \ref{p2} and \ref{p:gl}. In what follows, we keep the same notations introduced there. We also note that given any curve $(\alpha,\beta)$, the integral to be minimized in the definition of $\vartheta_p$ is invariant under reparametrizations of $(\alpha,\beta)$. \begin{proposition}\label{p:tp} Let $\psi_p$ satisfy \eqref{f0} and \eqref{e:f1p}, let $\vartheta_p:[0,+\infty)\to[0,+\infty)$ be the corresponding surface energy in \eqref{e:tp}. Then, \begin{itemize} \item[(i)] $\vartheta_p(0)=0$, $\vartheta_p$ is nondecreasing, subadditive, and \be{\label{e:tp4} 0\leq\vartheta_p(s)\leq 1\wedge c\, s^{\frac2{p+1}},\quad\textrm{for all $s\geq 0$}, } where $c=c(\psi_p)>0$.
Moreover, $\vartheta_p\in C^{0,\frac 2{p+1}}\big([0,+\infty)\big)$ and \begin{equation}\label{e:tp3} \kappa^{\frac 2{p+1}}\le \lim_{s\downarrow 0}\frac{\vartheta_p(s)}{s^{\frac2{p+1}}}\le \frac{p+1}{2^{\frac 2{p+1}}(p-1)^{\frac{p-1}{p+1}}}\,{\kappa}^{\frac{2}{p+1}}; \end{equation} \item[(ii)] $\vartheta_p=\hat{\vartheta}_p$, where \be{\label{e:tp2} \hat{\vartheta}_p(s):=\lim_{T\uparrow\infty}\inf_{(\alpha,\beta)\in\mathcal{U}_s(0,T)} \int_0^T\left(\psi_p^2(\beta)|\alpha^\prime|^2+\frac{(1-\beta)^2}{4}+|\beta^\prime|^2\right)dt; } \item[(iii)] the functions \begin{equation}\label{e:fjPhitt} f^{(j)}(s):=\frac{j\,s}{1-s}\wedge \psi_p(s), \end{equation} satisfy \eqref{f0} and \eqref{f1}. If $g_j$ denotes the corresponding surface energy in \eqref{g}, then $g_j\leq g_{j+1}$ and \be{\label{e:gjtp} \lim_{j\to\infty}g_j(s)=\vartheta_p(s)\quad\textrm{for all $s\geq 0$}. } \end{itemize} \end{proposition} \begin{proof} We prove (i). The facts that $\vartheta_p(0)=0$ and that $\vartheta_p$ is nondecreasing follow easily from the definition. The subadditivity follows as in Proposition~\ref{11}\ref{e21}. Moreover, $0\leq\vartheta_p\leq 1$ arguing as in \ref{e19} of Proposition~\ref{11}. To show \eqref{e:tp4} and the upper bound in \eqref{e:tp3}, let $s,\,\lambda>0$ and consider $\alpha:=0$ in $[0,1/3]$, $\alpha:=s$ in $[2/3,1]$ and set $\alpha$ to be the linear interpolation of the values $0$ and $s$ on $[1/3,2/3]$; $\beta_\lambda:=1-(\lambda\,s)^{\frac 1{p+1}}$ in $[1/3,2/3]$ and set $\beta_\lambda$ to be the linear interpolation of that value to $1$ on $[0,1/3]\cup[2/3,1]$. Then, clearly $(\alpha,\beta_\lambda)\in\mathcal{U}_s$ and a simple computation shows that \begin{equation}\label{e:thetapub} \vartheta_p(s)\leq\int_0^1|1-\beta_\lambda|\sqrt{\psi_p^2(\beta_\lambda)|\alpha'|^2 +|\beta_\lambda^{\prime}|^2}\,dt =(\lambda\,s)^{\frac{1}{p+1}}\,\psi_p\big(1-(\lambda\,s)^{\frac 1{p+1}}\big)\,s +(\lambda\,s)^{\frac 2{p+1}}. \end{equation} By taking $\lambda=1$, since $(1-t)^p\psi_p(t)\leq c$ for some constant $c=c(\psi_p)>0$ and for all $t\in[0,1]$, we deduce that \[ \vartheta_p(s)\leq (c+1)\,s^{\frac 2{p+1}}, \] from which inequality \eqref{e:tp4} follows as $0\leq\vartheta_p\leq 1$. Note that the H\"older continuity of $\vartheta_p$ then follows easily from \eqref{e:tp4} and its subadditivity and monotonicity. Further, by \eqref{e:thetapub} we infer \[ \limsup_{s\downarrow 0}\frac{\vartheta_p(s)}{s^{\frac2{p+1}}}\le \kappa\,\lambda^{-\frac{p-1}{p+1}}+\lambda^{\frac{2}{p+1}}, \] and minimizing the right-hand side over $\lambda\in(0,\infty)$, the minimum being attained at $\lambda=\kappa(p-1)/2$, yields the upper bound in (\ref{e:tp3}). We now prove the lower bound in \eqref{e:tp3}. Let $s_k\to 0$, $s_k>0$, and up to subsequences let the inferior limit of the quotient in \eqref{e:tp3} be a limit along $(s_k)$.
Let $\alpha_k,\beta_k$ be competitors for $\vartheta_p(s_k)$ such that $$\int_{0}^{1} |1-\beta_k| \, \sqrt{\psi_p^2(\beta_k)|\alpha_k'|^2 + |\beta_k'|^2}\, dt\le \vartheta_p(s_k)+s_k.$$ If, after taking a subsequence (not relabeled), there are points $x_k\in[0,1]$ such that $$\frac{1-\beta_k(x_k)}{s_k^{\frac{1}{p+1}}}\ge\kappa^{1/(p+1)} \text{ for all $k$},$$ then \be{\label{e:MM}\vartheta_p(s_k)+s_k\ge (1-\beta_k(x_k))^2\ge\kappa^{2/(p+1)}\,s_k^{\frac{2}{p+1}}.} Otherwise, for all $k$ large enough $$\frac{1-\beta_k}{s_k^{\frac{1}{p+1}}}\leq \kappa^{1/(p+1)}$$ must hold uniformly, so that $\beta_k\to1$ uniformly and by (\ref{e:f1p}) for any $\varepsilon>0$ \begin{equation*} (1-\beta_k)^p \psi_p(\beta_k) \ge \kappa-\varepsilon \text{ uniformly, for $k$ large enough.} \end{equation*} Therefore \begin{equation}\label{e:the} \vartheta_p(s_k)+s_k>\int_0^1\psi_p(\beta_k)(1-\beta_k)|\alpha'_k|dt\geq \int_0^1 \frac{\psi_p( \beta_k)(1-\beta_k)^p}{(1-\beta_k)^{p-1}}|\alpha'_k|dt \ge \frac{\kappa-\varepsilon}{\kappa^{(p-1)/(p+1)}} s_k^{2/(p+1)}\,. \end{equation} Since $\varepsilon$ was arbitrary this and (\ref{e:MM}) give the lower bound in (\ref{e:tp3}). Finally we prove that the limit in (\ref{e:tp3}) exists. We fix a sequence $s_j\downarrow 0$ and choose $\alpha_j,\beta_j \in\mathcal{U}_{s_j}$ such that \begin{equation*} \int_0^1|1-\beta_j|\sqrt{\psi_p^2(\beta_j)|\alpha'_j|^2 +|\beta_j^{\prime}|^2}\,dt \le \vartheta_p(s_j) + \frac1j s_j^{2/(p+1)}\,. \end{equation*} By the computation above we obtain $\beta_j\to1$ uniformly. For $k\ge j$ we define $\overline\alpha_k,\overline\beta_k\in\mathcal{U}_{s_k}$ by \begin{equation*} \overline \alpha_k =\frac{s_k}{s_j}\alpha_j \text{ and } \overline \beta_k =1- \Bigl(\frac{s_k}{s_j}\Bigr)^{1/(p+1)}(1-\beta_j)\,. \end{equation*} After a straightforward computation, using these test functions in the definition of $\vartheta_p(s_k)$ leads to \begin{equation*} \vartheta_p(s_k)\le \Bigl(\frac{s_k}{s_j}\Bigr)^{2/(p+1)} \left[ \int_0^1|1-\beta_j|\sqrt{\psi_p^2(\beta_j)|\alpha'_j|^2 +|\beta_j^{\prime}|^2}\,dt\right] \sup\bigl\{\frac{\psi_p(t)(1-t)^p}{\psi_p(t')(1-t')^p}: \min \beta_j\le t,t'<1\bigr\}\,. \end{equation*} Since $\beta_j\to1$ uniformly as $j\to\infty$, and $\psi_p(t)(1-t)^p$ has a finite limit as $t\to1$, the $\sup$ converges to $1$ as $j\to\infty$. Therefore we obtain that for every $\varepsilon>0$ if $j$ is sufficiently large, then \begin{equation*} \frac{\vartheta_p(s_k)}{s_k^{2/(p+1)}} \le (1+\varepsilon) \frac{\vartheta_p(s_j)}{s_j^{2/(p+1)}} +\frac1j\hskip5mm \text{ for all } k\ge j\,. \end{equation*} This implies that the sequence converges. Since the decreasing sequence $s_j$ was arbitrary, the limit in (\ref{e:tp3}) exists. To establish (ii), we note first that by the Cauchy inequality $\vartheta_p\leq\hat{\vartheta}_p$. For the sake of proving the converse inequality, we first claim that $\alpha$ and $\beta$ in the infimum problem defining $\vartheta_p$ can be taken in $W^{1,\infty}\big((0,1)\big)$. Let $\eta>0$ be small and let $\alpha,\beta\in H^1\big((1/3,2/3)\big)$ be competitors for $\vartheta_p(s)$ such that \be{\label{e:quasimin2}\int_{1/3}^{2/3} |1-\beta| \, \sqrt{\psi_p^2(\beta)|\alpha'|^2 + |\beta'|^2}\, dt\le \vartheta_p(s)+\eta.} We define $\beta^\eta(t):=\beta(t)\wedge (1-\eta)$ in $[1/3,2/3]$. Since $(1-s)^p\psi_p(s)$ has a finite nonzero limit at $1$, there is a function $\omega$, with $\omega(\eta)\to0$ as $\eta\to0$, such that \begin{alignat}1\label{al:com} (1-s')^p \psi_p(s') &\le (1+ \omega(\eta)) (1-s)^p \psi_p(s) \text{ for all } s,s'\in [1-\eta,1)\,.
\end{alignat} In particular, if $1-\eta< \beta(t)< 1$, then \begin{equation}\label{e:cont} \eta \psi_p(1-\eta) \le \eta^{1-p} (1+\omega(\eta)) (1-\beta(t))^p \psi_p(\beta(t)) \le (1+\omega(\eta)) (1-\beta(t)) \psi_p(\beta(t)) \,. \end{equation} We observe that $\beta^\eta=1-\eta$ and $(\beta^\eta)'=0$ almost everywhere on the set $\{\beta\ne \beta^\eta\}$ and compute \begin{alignat}1 \int_{\{\beta\ne \beta^\eta\}} (1-\beta^\eta) \sqrt{\psi_p^2(\beta^\eta)\, |\alpha'|^2 + |(\beta^\eta)'|^2}dt &= \int_{\{\beta\ne \beta^\eta\}}\eta \psi_p(1-\eta)\, |\alpha'| dt \nonumber\\ &\le (1+\omega(\eta))\int_{\{\beta\ne \beta^\eta\}}(1-\beta) \psi_p(\beta) |\alpha'|\,dt,\label{e:mini} \end{alignat} so that by \eqref{e:quasimin2} it follows \be{\label{e:betaeta}\int_{1/3}^{2/3} |1-\beta^\eta| \, \sqrt{\psi_p^2(\beta^\eta)|\alpha'|^2 + |(\beta^\eta)'|^2}\, dt\le \vartheta_p(s)+\eta+\omega(\eta)+\eta\omega(\eta).} By density we are able to find two sequences $\alpha_j,\beta_j^\eta\in W^{1,\infty}\big((1/3,2/3)\big)$ (actually in $C^\infty([1/3,2/3])$) such that $\alpha_j(1/3)=0$, $\alpha_j(2/3)=s$, $\beta_j^\eta(1/3)=\beta_j^\eta(2/3)=1-\eta$, $0\leq\beta\leq 1-\eta$, and converging respectively to $\alpha$ and $\beta^\eta$ in $H^1\big((1/3,2/3)\big)$. Since the function $(1-s)^p\psi_p(s)$ is uniformly continuous in $[0,1-\eta]$ and since $\beta^\eta_j\to\beta^\eta$ also uniformly, we deduce that for $j$ large it holds $$\int_{1/3}^{2/3} |1-\beta_j^\eta| \, \sqrt{\psi_p^2(\beta_j^\eta)|\alpha_j'|^2 + |(\beta_j^\eta)'|^2}\, dt\le \vartheta_p(s)+2\eta+\omega(\eta)+\eta\omega(\eta).$$ Finally we extend $\alpha_j$ and $\beta_j^\eta$ in $[0,1]$ defining $\alpha_j:=0$ in $[0,1/3]$, $\alpha_j:=s$ in $[2/3,1]$, and $\beta_j^\eta$ as a linear interpolation of the values $1-\eta$ and $1$. Now $\alpha_j$ and $\beta_j^\eta$ are competitors for $\vartheta_p(s)$ and for $j$ large they satisfy $$\int_{0}^{1} |1-\beta_j^\eta| \, \sqrt{\psi_p^2(\beta_j^\eta)|\alpha_j'|^2 + |(\beta_j^\eta)'|^2}\, dt\le \vartheta_p(s)+2\eta+\omega(\eta)+\eta\omega(\eta)+\eta^2$$ and this concludes the proof of the claim. Let us prove now that $\hat{\vartheta}_p(s)\leq\vartheta_p(s)$. We argue exactly as in Proposition~\ref{p2} until estimate \eqref{al:p1}. In doing this we point out that $f$, $g$ and $\hat g$ have to be substituted by $\psi_p$, $\vartheta_p$ and $\hat\vartheta_p$, respectively. By keeping the same notation introduced there, we repeat the computations in \eqref{al:com}-\eqref{e:mini} and we conclude that \begin{alignat*}1 \hat \vartheta_p(s) &\le \sqrt \eta + 3\eta^2 +(1+\omega(\eta)) \int_{0}^1 (1-\beta) \sqrt{\psi_p^2(\beta)\, |\alpha'|^2 + |\beta'|^2}dt \,. \end{alignat*} Since the last integral is less than $\vartheta_p(s)+\eta$ and $\eta$ can be made arbitrarily small the inequality $\hat{\vartheta}_p\leq\vartheta_p$ follows at once. We finally prove (iii). It is easy to check that $f^{(j)}\leq f^{(j+1)}$, and that $f^{(j)}(s)\to \psi_p(s)$ for all $s\in[0,1)$. Hence, the sequence $(g_j)$ is nondecreasing and $g_j(s)\leq \vartheta_p(s)$ for all $s\geq 0$. 
To prove \eqref{e:gjtp}, with fixed $s\in(0,+\infty)$, consider the functionals $\mathscr{G}_j,\,\mathscr{G}_\infty:L^1\big((0,1)\big)\times L^1\big((0,1)\big)\to[0,+\infty]$ defined for $(\alpha,\beta)\in\mathcal{U}_s$ by \[ \mathscr{G}_j(\alpha,\beta):= \int_0^1|1-\beta|\sqrt{(f^{(j)})^2(\beta)|\alpha^\prime|^2+|\beta^\prime|^2}\,dt \] and \[ \mathscr{G}_\infty(\alpha,\beta):= \int_0^1|1-\beta|\sqrt{\psi_p^2(\beta)|\alpha^\prime|^2+|\beta^\prime|^2}\,dt \] respectively, and set equal to $+\infty$ otherwise on $L^1\big((0,1)\big)\times L^1\big((0,1)\big)$. Note that $\mathscr{G}_j\leq\mathscr{G}_{j+1}$ and that $(\mathscr{G}_j)$ converges pointwise to $\mathscr{G}_\infty$ by Beppo-Levi's theorem. Therefore, $(\mathscr{G}_j)$ $\Gamma$-converges to $\overline{\mathscr{G}_\infty}$, the relaxation of $\mathscr{G}_\infty$ w.r.to the $L^1\times L^1$ topology. Since $g_j(s)=\inf\mathscr{G}_j$ and $\vartheta_p(s)=\inf\mathscr{G}_\infty$, to conclude we only need to discuss the compactness properties of the minimizing sequences of the $\mathscr{G}_j$'s. To this aim let $\alpha_j$, $\beta_j\in W^{1,\infty}\big((0,1)\big)$ be such that $\alpha_j(0)=0$, $\alpha_j(1)=s$, $\beta_j(0)=\beta_j(1)=1$, and \[ \mathscr{G}_j(\alpha_j,\beta_j)\leq g_j(s)+\frac1j. \] Hence, either there exists $\delta>0$ and a subsequence $j_k$ such that $\inf_{[0,1]}\beta_{j_k}\geq\delta$, or $\inf_{[0,1]}\beta_j\to0$. In the former case, we note that \[ \big(\min_{t\in[\delta,1)}(1-t)f^{(j_k)}(t)\big)\|\alpha_{j_k}^\prime\|_{L^1}\leq g_{j_k}(s)+\frac1{j_k}, \] and since for $k$ sufficiently large (depending on $\delta$) \[ \min_{t\in[\delta,1)}(1-t)f^{(j_k)}(t)=\min_{t\in[\delta,1)}(1-t)\psi_p(t)>0\,, \] we conclude that $\sup_k\|\alpha_{j_k}^\prime\|_{L^1}<+\infty$. Moreover, as \[ \frac12\sup_j\|\big((1-\beta_j)^2\big)^\prime\|_{L^1}\leq \sup_j g_j(s)\leq\vartheta_p(s), \] the sequence $(\alpha_{j_k},\beta_{j_k})$ is pre-compact in $L^1\times L^1$, so that by standard properties of $\Gamma$-convergence we may conclude that \begin{equation}\label{e:tpcasoa} \lim_j g_j(s)=\lim_j\inf\mathscr{G}_j=\min\overline{\mathscr{G}_\infty} =\inf\mathscr{G}_\infty=\vartheta_p(s). \end{equation} Instead, in case $\inf_{[0,1]}\beta_j\to0$, set $s_j=\mathrm{argmin}_{[0,1]}\beta_j$ and deduce that \begin{equation}\label{e:tpcasob} \lim_jg_j(s)\geq \liminf_j\left(\int_0^{s_j}|1-\beta_j||\beta_j^\prime|\,dt +\int_{s_j}^1|1-\beta_j||\beta_j^\prime|\,dt\right)\geq1. \end{equation} Together with inequality $\vartheta_p(s)\leq 1$, the latter formula provides the conclusion. \end{proof} The functionals $F_k^{(k)}$ corresponding to the sequence $(f^{(j)})$ in \eqref{e:fjPhitt} of Proposition~\ref{p:tp} provide an approximation of $\Phi_p:L^1(\Omega)\to[0,+\infty]$ defined by \begin{equation}\label{e:Phitt} \Phi_p(u):= \begin{cases} \displaystyle{\int_\Omega |\nabla u|^2dx+\int_{J_{u}}\vartheta_p(|[u]|)d{\mathcal H}^{n-1}} & \textrm{if $u\in GSBV(\Omega)$,}\cr\cr +\infty & \textrm{otherwise}, \end{cases} \end{equation} where $\vartheta_p$ is defined in formula \eqref{e:tp}. \begin{theorem}\label{t:sublin} Suppose that $(\f j)$ is as in \eqref{e:fjPhitt} above. Then, the functionals $\Fk kk$ $\Gamma$-converge in $L^1(\Omega){\times}L^1(\Omega)$ to $\widetilde{\Phi}_p$, where \begin{equation}\label{e:p} \widetilde{\Phi}_p(u,v):=\begin{cases} \Phi_p(u) & \textrm{if $v=1$ $\mathcal{L}^n\text{-a.e.\ in } \Omega$,}\cr +\infty & \text{otherwise}.
\end{cases} \end{equation} \end{theorem} \begin{proof} By monotonicity of the sequence $(f^{(j)}_k)$ we have that $F_k^{(k)}\geq F_k^{(j)}$ for $k\geq j$, so that by Theorem~\ref{t:gamma-lim} if $\Gamma\hbox{-}\liminf_kF_k^{(k)}(u,v)<+\infty$ then $u\in GBV(\Omega)$, $v=1$ $\mathcal{L}^n\text{-a.e.\ in } \Omega$ and for all $j\in{\mathbb N}$ \[ \Gamma\hbox{-}\liminf_kF_k^{(k)}(u,1)\geq \Gamma\hbox{-}\lim_kF_k^{(j)}(u,1) =\int_\Omega h_j(|\nabla u|)dx+\int_{J_u}g_j(|[u]|)d{\mathcal H}^{n-1}+j|D^cu|(\Omega), \] where $h_j$ and $g_j$ are defined, respectively, by \eqref{h} and \eqref{g} with $f^{(j)}$ in place of $f$. By letting $j\uparrow\infty$, we get that \[ h_j(s)\uparrow s^2,\quad\text{ and}\quad g_j(s)\uparrow\vartheta_p(s)\quad \textrm{for all $s\geq 0$.} \] Indeed, the former convergence follows from the explicit formula $h_j(s)=s^2$ for $s\in[0,j/2]$ and $h_j(s)=js-j^2/4$ for $s\in[j/2,+\infty)$, while the latter holds in view of (iii) in Proposition~\ref{p:tp}. Therefore, by Beppo-Levi's theorem we conclude that $u\in GSBV(\Omega)$ with \[ \Gamma\hbox{-}\liminf_kF_k^{(k)}(u,1)\geq\widetilde{\Phi}_p(u,1). \] To prove the upper bound inequality we note that Lemmas~\ref{l:wsub} and \ref{l:boundglimsup} still hold true in this setting as there we have only used that each function $f_k=1\wedge\varepsilon_k^{1/2} f$ in \eqref{fk} is nondecreasing and bounded by $1$ from above, properties enjoyed by $\fk kk$ as well (cp. also Theorem~\ref{t:Dugdale}). Hence, we may argue again as in Proposition~\ref{p:limsupndim} and reduce ourselves to prove the estimate \begin{equation}\label{e:minestp} \limsup_{\delta\downarrow 0}\frac 1{\delta^{n-1}}\widehat{F}(u_x,1;x+\delta\, Q_{\nu_u(x)})\leq \vartheta_p(|[u](x)|), \end{equation} for $u\in SBV^2(\Omega)$ and for ${\mathcal H}^{n-1}$-a.e. $x\in J_u$, where $\widehat{F}$ is the $\overline{\Gamma}$-limit of a properly chosen subsequence $(\Fk{k_j}{k_j})$ of $(\Fk kk)$. Given \eqref{e:minestp}, we deduce the upper bound estimate as follows: we employ first \cite[Propositions~3.3-3.5]{bou-bra-but} to get the estimate $\widehat{F}(\cdot,1)\leq \widetilde{\Phi}_p(\cdot,1)$ on the full $SBV$ space, by relaxing the functional $\Phi_\infty:BV(\Omega)\to[0,+\infty]$ \[ \Phi_\infty(u):=\begin{cases} \displaystyle{ \int_\Omega |\nabla u|^2dx+\int_{J_u}\vartheta_p(|[u](x)|)\,d{\mathcal H}^{n-1}} & \textrm{if $u\in SBV^2(\Omega)$} \cr +\infty & \textrm{otherwise on $BV(\Omega)$,} \end{cases} \] w.r.to the $w\ast\hbox{-}BV$ topology on $BV(\Omega)$. This implies $\widehat{F}(\cdot,1)\leq \Phi_p$ on $BV(\Omega)$. We get the required estimate on the whole $GSBV\cap L^1(\Omega)$ by the usual truncation argument. We then argue as in Proposition~\ref{p:limsupndim} to show that the whole family $(F_k^{(k)})$ $\Gamma\hbox{-}$converges to $\widetilde{\Phi}_p$. The proof of \eqref{e:minestp} is identical to the proof of \eqref{e:minest} in Proposition~\ref{p:limsupndim} and therefore not repeated. \end{proof} \subsection{Griffith's brittle fracture} \label{subsecgriffith} Finally, we show how to approximate the Mumford-Shah functional by means of any sequence $\big(f^{(j)}\big)$ satisfying item (iii) in Proposition~\ref{p:gl}. Thus, we recover the original approximation scheme of Ambrosio and Tortorelli \cite{amb-tort1}, \cite{amb-tort2} (see also \cite{focardi}).
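Before stating the result, we note for orientation that a prototypical admissible choice is
\[
\f j(s):=\frac{j\,s}{1-s},\qquad s\in[0,1),
\]
for which $\f j\leq\f{j+1}$, $\ell_j=\lim_{s\to 1}(1-s)\f j(s)=j\uparrow\infty$, and $\f j(s)\uparrow\infty$ pointwise in $(0,1)$; the same family, written as $f^{(k)}(s)=a_k\frac{s}{1-s}$, is discussed again in Remark~\ref{r:regimes2} below.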
\begin{theorem}\label{t:MS} Suppose that $(\f j)$ satisfies $\f{j}\leq \f{j+1}$, $\ell_j\uparrow\infty$ and $\f{j}(s)\uparrow \infty$ pointwise in $(0,1).$ Then, the functionals $\Fk kk$ $\Gamma$-converge in $L^1(\Omega){\times}L^1(\Omega)$ to the functional $\widetilde{{M\!S}}$ defined as follows \begin{equation}\label{e:Ft} \widetilde{{M\!S}}(u,v):=\begin{cases} {M\!S}(u) & \textrm{if $v=1$ $\mathcal{L}^n\text{-a.e.\ in } \Omega$,}\cr +\infty & \text{otherwise}. \end{cases} \end{equation} \end{theorem} \begin{proof}[Proof of Theorem~\ref{t:MS}] We start off as in the proof of Theorem~\ref{t:Dugdale} and note that by the very definitions $\Fk jk\leq \Fk kk$ for $j\leq k$, since $(\f j)$ is nondecreasing by assumption. Thus, by Theorem~\ref{t:gamma-lim} we deduce \begin{equation}\label{e:liminfFj} \Gamma\hbox{-}\liminf_k \Fk kk(u,v)\geq \F j(u,v), \end{equation} where $\F j$ is defined as $F$ in \eqref{F} with $f$ substituted by $\f j$ in formulas \eqref{h} defining $h_j$, and \eqref{g} defining $g_j$. In particular, the corresponding volume density is given by \[ h_j(t)=\begin{cases} t^2 & t\leq\frac{\ell_j}{2} \cr \ell_j\,t-\frac{\ell_j^2}{4} & t\geq\frac{\ell_j}{2}, \end{cases} \] where $\ell_j$ is the value of the limit in \eqref{f1} and it satisfies $\ell_j\uparrow\infty$. Thus $h_j(s)\leq s^2$ and $\lim_jh_j(s)=s^2$ for all $s\in[0,+\infty)$. Moreover, the surface energy densities $g_j$ are dominated by the constant $1$, and by item (iii) in Proposition~\ref{p:gl} we have $\lim_jg_j(s)=\chi_{(0,+\infty)}(s)$ for all $s\in[0,+\infty)$. In conclusion, if $\Gamma\hbox{-}\liminf_k\Fk kk(u,v)<+\infty$, by letting $j\uparrow\infty$ in \eqref{e:liminfFj} we infer that $v=1$ $\mathcal{L}^n\text{-a.e.\ in } \Omega$, $u\in GSBV(\Omega)$ and by the Beppo-Levi's theorem we get \[ \Gamma\hbox{-}\liminf_k\Fk kk(u,v)\geq \widetilde{{M\!S}}(u,v). \] Eventually, we establish the limsup inequality. Setting $\psi:=\chi_{(0,1]}$, we observe once more that $\Fk kk\leq AT_k^\psi$ for every $k$, where $AT_k^\psi$ has been defined in \eqref{e:ATk}. Therefore the conclusion follows by the Ambrosio and Tortorelli result \cite{amb-tort2} (see also \cite{focardi}). \end{proof} \begin{remark}\label{r:regimes2} In Remark~\ref{r:regimes} we have shown that both the divergence of the $f_k$'s and the scaling with $\varepsilon_k^{1/2}$ in the definition of $f^{(k)}_k$ influence the asymptotic behavior of the related sequence $(F_k^{(k)})$. Here we show that the values of the limits at $1$ of the functions $(1-s)f^{(k)}(s)$ also play a role. In particular, we highlight that the pointwise limit of $(f_k^{(k)})$ does not determine the asymptotics of $(F^{(k)}_k)$. Indeed, suppose that $f^{(k)}(s):=a_k\frac{s}{1-s}$, where $a_k\uparrow\infty$, then \[ f^{(k)}_k(s)=\begin{cases} a_k\varepsilon_k^{1/2}\frac{s}{1-s} & 0\leq s\leq(1+a_k\varepsilon_k^{1/2})^{-1} \cr 1 & (1+a_k\varepsilon_k^{1/2})^{-1}\leq s\leq 1, \end{cases} \] and by letting $k\uparrow\infty$ we infer that \[ f^{(k)}_k(s)\to \begin{cases} \chi_{\{1\}}(s) & \textrm{if $a_k\varepsilon_k^{1/2}\downarrow 0$} \cr \gamma\,\frac{s}{1-s} \wedge 1 & \textrm{if $a_k\varepsilon_k^{1/2}\to\gamma\in(0,+\infty)$}\cr \chi_{(0,1]}(s) & \textrm{if $a_k\varepsilon_k^{1/2}\uparrow \infty$}.
\end{cases} \] Hence, by taking also into account the examples in Remark~\ref{r:regimes}, we have built two sequences of functions $f_k^{(k)}$ both converging to $\chi_{\{1\}}$ but giving rise in the $\Gamma$-limit on one hand to the Dugdale's cohesive energy and on the other hand to a Griffith's type energy. \end{remark} \section*{Acknowledgments} Part of this work was conceived when M.~Focardi was visiting the University of Bonn in winter 2014. He would like to thank the Institute for Applied Mathematics for the hospitality and the stimulating scientific atmosphere provided during his stay. M.~Focardi and F.~Iurlano are grateful to Gianni Dal Maso for stimulating discussions and for many insightful remarks. They are members of the Gruppo Nazionale per l'Analisi Matematica, la Probabilit\`a e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM). F.~Iurlano was funded under a postdoctoral fellowship by the Hausdorff Center for Mathematics. \end{document}
\begin{document} \title{The multiproximal linearization method for convex composite problems\thanks{This work is sponsored by the Air Force Office of Scientific Research under grant FA9550-14-1-0500.}} \begin{abstract} Composite minimization involves a collection of smooth functions which are aggregated in a nonsmooth manner. In the convex setting, we design an algorithm by linearizing each smooth component in accordance with its main curvature. The resulting method, called the {\rm Multiprox}\ method, consists in solving successively simple problems (e.g., constrained quadratic problems) which can also feature some proximal operators. To study the complexity and the convergence of this method, we are led to study quantitative qualification conditions to understand the impact of multipliers on the complexity bounds. We obtain explicit complexity results of the form $O(\frac{1}{k})$ involving new types of constant terms. A distinctive feature of our approach is to be able to cope with oracles involving moving constraints. Our method is flexible enough to include the moving balls method, the proximal Gauss-Newton's method, or the forward-backward splitting, for which we recover known complexity results or establish new ones. We show through several numerical experiments how the use of multiple proximal terms can be decisive for problems with complex geometries. \noindent {\bf Keyword}: Composite optimization; convex optimization; complexity; first order me\-thods; pro\-ximal Gauss-Newton's method; prox-linear method. \end{abstract} \section{Introduction} Proximal methods are at the heart of optimization. The idea has its roots within the infimal convolution of Moreau \cite{Moreau:1965} with early algorithmic applications to variational inequalities \cite{Martinet:1970}, constrained minimization \cite{rock}, and mechanics \cite{Moreau}. The principle is elementary but far reaching: it simply consists in generating algorithms by considering successive strongly convex approximations of a given objective function. Many methods can be seen through these lenses, like for instance, the classical gradient method, the gradient projection method, or mirror descent methods \cite{rosen1960gradient,rosen1961gradient,Levitin:66,Nemirovskii:1983,auslender2006interior}. At this day, the most famous example is probably the forward-backward splitting algorithm \cite{passty1979ergodic,lions1979splitting,Tseng:1991,Combettes:2005} and its accelerated variant FISTA \cite{Beck:2009}. Many generalizations in many settings followed, see for instance \cite{eckstein1993nonlinear,Ortega:2000,Combettes:2011,leroux2012stochastic,salzo2012convergence,villa2013accelerated,plc13,plc16,schmidt2017minimizing,BC17}. { In order to deal with problems with more complex structure, we are led to consider models of the form \begin{eqnarray} \big(\mathcal{P}\big):\ \ \ {\mathrm{min}}\left\{ g\left(F(x)\right):{x\in\mathbb{R}^n}\right\},\nonumber \end{eqnarray} where $F=(f_1,\ldots,f_m)$ is a collection of convex differentiable functions with $L_i$ Lipschitz continuous gradient and $g:\mathbb{R}^m\rightarrow \mathbb{R}\cup\{+\infty\}$ is a proper convex lower semicontinuous function.
{The function $g$ is allowed to take infinite values, offering a great flexibility in the modeling of constraints, while it is often assumed to be finite in the literature.} {When $g$ is restricted to be Lipschitz, a} natural proximal approach to this problem consists in linearizing the smooth part, leaving the nonsmooth term unchanged and adding an adequate quadratic form. Given $x$ in $\mathbb{R}^n$, one obtains the proximal Gauss-Newton method or the prox-linear method $$\mbox{\rm(PGNM)}\quad x^+=\underset{y\in \mathbb{R}^n}\mathrm{argmin}\, \left(g\left(F(x)+\nabla F(x)(y-x)\right)+\frac{\lambda}{2}\|y-x\|^2\right),$$ where \begin{equation}\label{ineq1}\lambda\geq {L_g}\displaystyle\max_{i=1,\ldots,m} L_i \end{equation} {with $L_g$ being the Lipschitz constant of $g$.} The method\footnote{Also known as the proximal Gauss-Newton's method} progressively emerged from quadratic programming, see \cite{pshe87} and references therein, but also from ideas \`a la Gauss-Newton \cite{flet80,Burke:1985,Burke:1995}. It was eventually formulated under a proximal form in \cite{lewis2015proximal}. It allows one to deal with general nonlinear programming pro\-blems and unifies within a simple framework many different classes of methods met in practice \cite{Li:2002,Li:2007,Cartis:2001,lewis2015proximal,BP,Pauwels:2016,Drusvyatskiy:2016}. It is {one of the rare primal methods} for composite problems without linesearch\footnote{Indeed, PGNM is somehow a ``constant step size" method} (see also \cite{Auslender:10}), and as such assessing its complexity is a natural question. Even though the complexity analysis of convex first order methods has now become classical (see e.g., \cite{Nesterov:2004}), considerable difficulties remain for com\-po\-si\-te problems with such generality. One of the {reasons} is that constraints, embodied in $g$, generate multipliers whose role is not yet understood. To our knowledge, there are very few works in this line. In \cite{Drusvyatskiy:2016} the authors study this method under error bounds conditions and establish linear convergence results, see also \cite[Section 2.3]{Nesterov:2004} for related results. In a recent article \cite{Drusvyatskiy:20162}, the authors propose an acceleration of the same method and they obtain faster convergence guarantees under mild assumptions. The global complexity with general assumptions on a convex $g$ seems to be an open question. We work here along a different line and we propose a new flexible method with {\em inner quadratic/proximal approximations}. Given a starting point $x$, we consider $$({\rm Multiprox})\quad x^+=\underset{y\in\mathbb{R}^n}\mathrm{argmin}\, \,g\left(F(x)+\nabla F(x)\left(y-x\right)+\|y-x\|^2\left(\begin{array}{c}\frac{\lambda_1}{2}\\ \vdots \\ \frac{\lambda_m}{2}\end{array}\right)\right),$$ or more explicitly $$ x^+=\underset{y\in\mathbb{R}^n}{\mathrm{argmin}\,} \,g\left(\begin{array}{c}f_1(x)+ {\nabla f_1(x)^T (y-x)} +\lambda_1\frac{\|y-x\|^2}{2}\\ \vdots \\ f_m(x)+ {\nabla f_m(x)^T (y-x)} +\lambda_m\frac{\|y-x\|^2}{2}\end{array}\right),$$ where \begin{equation}\label{ineq2}\lambda_i\geq L_i\mbox{ for }i=1,\ldots,m,\end{equation} {and $g$ is allowed to be extended valued.} In order to preserve the convexity properties of the local model, the function $g$ is assumed to be componentwise nondecreasing. {In spite of the monotonicity restriction on $g$,} this model is quite versatile and includes as special cases general inequality constrained convex programs, important min-max problems or additive composite models.
Observe that the local approximation used in {\rm Multiprox}\, is sharper than the one in the proximal Gauss-Newton's method since it relies on the vector $(L_1,\ldots,L_m)$ rather than on the mere constant $\max\{L_i:i=1,\ldots,m\}$. {Due to the presence of multiple proximal/gradient terms we called our method {\rm Multiprox}.} The key idea behind {\rm Multiprox}, already present in \cite{Auslender:10}, is to design local approximations through upper quadratic models specifically tailored for each of the components. This makes the method well adapted to the geometry of the original problem and allows in general to take much larger and smarter steps, as illustrated in the numerical experiments in the last section. } Studying this method presents several serious difficulties. First $g$ {may not have} full domain (contrary to what is assumed in \cite{Burke:1995,Li:2002,Drusvyatskiy:2016,Drusvyatskiy:20162}), which reflects the fact that subproblems may feature ``moving constraint sets". Even though moving constraints are very common in sequential convex programming, we did not find any genuine results on complexity in the literature. Secondly the nature of our algorithm raises new issues concerning qualification conditions (and subsequently the role of Lagrange multipliers in the complexity analysis). The qualification condition we consider is surprisingly simple to state, yet nontrivial to study: $$F^{-1}(\mbox{int}\,\mathrm{dom}\, g)\neq \emptyset.$$ This Slater-like condition is specific to situations when $g$ is monotone and was already used in \cite[Theo\-rem~2]{hiriart2006note} to provide formulas for computing the Legendre transform of com\-po\-si\-te functions and in \cite{BGW} to study stable duality and chain rule. Under this condition, we establish a complexity result of the form $O(\frac{1}{k})$ whose constant term depends on the geometry of $\big(\mathcal{P}\big)$ through a quantity combining various curvature constants of the components of $F$ with the multipliers attached to the subproblems. When $g$ is finite and Lipschitz continuous, the complexity boils down to $$\frac{L_g\sqrt{\sum_{i=1}^mL_i^2}}{2k}\|x_0-x_*\|^2$$ where $L_g$ is the Lipschitz constant of $g$. The exact same analysis leads to improved bounds if the outer function $g$ has a favorable structure, such as the {coordinatewise} maximum, in which case, the numerator reduces to $\max_{i=1,\ldots,m}\{L_i\}$. To our knowledge, these results were missing from the {literature}. We study further the boundedness properties of the sequences generated by {{\rm Multiprox}.} We derive in particular quantitative bounds on the multipliers for hard constrained\footnote{Here, hard constraints means that only feasible points can be considered, contrasting with infeasible methods (e.g., \cite{auslender2013extended,toint14}).} problems. This allows us in turn to derive complexity estimates for sequential convex methods in convex nonlinear programming such as the moving balls method \cite{Auslender:10} and its nonsmooth objective variant \cite{shefi2016dual}. To put this into perspective, the only feasible methods for nonlinear programming which come with such explicit estimates are, to the best of our knowledge, interior point methods (see \cite{nesterov1994interior,ye} and references therein). We also analyze in depth the important cases when Lipschitz constants are not known\footnote{We refer to constants relative to the gradients.} or when they only exist locally (e.g., the $C^2$ case).
In this setting, the ``step sizes" (the various $\lambda_i$) cannot be tuned a priori and thus complexity results are much more difficult to establish due to the use of linesearch routines. Yet we obtain some useful rates and we are able to establish convergence of the sequence. Once more we insist on the fact that convergence in this setting is not an easy matter and very few results are known \cite{Auslender:10,auslender2013extended,solodov2009global,Cartis:2001,toint14}. Finally, we illustrate the efficiency of {\rm Multiprox}\ on synthetic data. We consider a composite function consisting of the maximum of convex quadratic functions with different smoothness moduli. We compare our method with the proximal Gauss-Newton algorithm and its accelerated variant described in \cite{Drusvyatskiy:20162}. These experiments illustrate that, although the complexity estimates of the {\rm Multiprox}\ algorithm are not better than existing estimates for the {competing} methods, its adaptivity to different smoothness moduli gives it a crucial advantage in practice. \paragraph{Outline.} In Section \ref{SE:problem} we describe the composite optimization problem and study qualification conditions. Section \ref{SE:complexity} provides first general complexity and convergence results and presents consequences for specific models. Section \ref{s:search} on linesearch describes cases when Lipschitz constants are unknown or merely locally bounded. In Section \ref{SE:numerical} we provide numerical experiments illustrating the efficiency of our method. \paragraph{Notations} $\mathbb{R}^n$ is the $n$-dimensional Euclidean space equipped with the Euclidean norm $\|\cdot\|$. For $x \in \mathbb{R}^n$ and $r \geq 0$, $B(x,r)$ denotes the closed Euclidean ball of radius $r$ centered at $x$. By $\mathbb{R}^n_+$, we denote the $n$-dimensional nonnegative orthant ($n$-dimensional vectors with nonnegative entries). The notations of $<$, $\leq$, $>$, and $\geq$ between vectors indicate that the corresponding inequalities are met coordinatewise. Our notations for convex analysis are taken from \cite{Rockafellar:1998}. We recall the most important ones. Given an extended-valued convex function $h:\mathbb{R}^m \rightarrow \mathbb{R}\cup \{+\infty\}$, we set $$\mathrm{dom}\, h := \{z\in\mathbb{R}^m: h(z) < +\infty\}.$$ The subdifferential of $h$ at any $\bar{z} \in \mathrm{dom}\, h$ is defined as usual by $$\partial h(\bar{z}) := \{\lambda \in\mathbb{R}^m: {(z - \bar{z})^T \lambda} + h(\bar{z}) \leq h(z), \forall z\in\mathbb{R}^m\},$$ and is the empty set if $\bar{z} \not \in \mathrm{dom}\, h$. {For any convex subset $D\subset \mathbb{R}^m$, we denote by $i_D$ its indicator function -- recall that $i_D(x)=0$ if $x$ is in $D$, $i_D(x)=+\infty$ otherwise.} \section{Minimization problem and algorithm}\label{SE:problem} \subsection{Composite model and assumptions} We consider a composite minimization problem of the type: \begin{eqnarray} \big(\mathcal{P}\big):\ \ \ {\mathrm{min}}\left\{ g(F(x)):{x\in\mathbb{R}^n}\right\},\nonumber \end{eqnarray} where $g:\mathbb{R}^m\rightarrow \mathbb{R}\cup\{+\infty\}$ and $F:\mathbb{R}^n\rightarrow \mathbb{R}^m$. We set $F=(f_1,\ldots, f_m)$ and we make the standing assumptions:\\ \begin{assumption} \begin{enumerate} \item[(a)] Each $f_i$ is continuously differentiable, convex, with $L_i$ Lipschitz continuous gradient {\rm($L_i\geq 0$)}.
\item[(b)] The function $g:\mathbb{R}^m\rightarrow \mathbb{R}\cup\{+\infty\}$ is convex, proper, lower semicontinuous and $L_g$ Lipschitz continuous on its domain. That is $$|g(x)-g(y)|\leq L_g\|x-y\| \text{ for all $x,y$ in $\mathrm{dom}\, g$.}$$ For each $i=1,\ldots,m$ with $L_i > 0$, the function $g$ is nondecreasing in its $i$-th argument\footnote{{For any such $i$ and any $m$ real numbers $z_1, \ldots z_m$, the function $z \mapsto g(z_1,\ldots, z_{i-1}, z,z_{i+1}, \ldots, z_m)$ is nondecreasing. In particular, its domain is either the whole of $\mathbb{R}$ or a closed half line $(-\infty, a]$ for some $a\in \mathbb{R}$, or empty. }}. In other words $g$ is nondecreasing in its $i$-th argument whenever $f_i$ is not affine. \end{enumerate} \label{AS:convex} \end{assumption} {\begin{remark} {\rm (a) Note that the monotonicity restriction on $g$ implies some restrictions. For example, ignoring the affine components of $F$, for any $z \in \mathrm{dom}\, g$, we also have $z - \mathbb{R}_+^m \subset \mathrm{dom}\, g$, so that $\mathrm{dom}\, g$ is not compact. Prominent examples for $g$ include the max function, the indicator of $\mathbb{R}_-^m$ (which allows one to handle nonlinear constraints of the type $f_i\leq 0$), support functions of a subset of positive {numbers}. Depending on the structure of $F$ other examples are possible (see further sections). \\ (b) Contrary to $L_1,\ldots, L_m$, the value of $L_g$ is never required to design/run the algorithm (see in particular Theorems \ref{TH:nonincreasing} and \ref{TH:nonincreasingBack}).}\end{remark}} \begin{assumption} The function $g\circ F$ {is proper and }has a minimizer. \label{AS:effectiveness} \end{assumption} Observe that the monotonicity of $g$ and the convexity of $F$ in Assump\-tion~\ref{AS:convex} ensure that the problem $\big(\mathcal{P}\big)$ is a convex optimization problem, in other words: \begin{equation} \text{ $g\circ F$ is convex.} \label{LE:convex} \end{equation} \subsection{The Multiproximal linearization algorithm} \noindent Let us introduce the last fundamental ingredient necessary to the description of our method: $$\bm{L} := (L_1,\ldots,L_m)^T\in \mathbb{R}_+^m.$$ Observe that the monotonicity of $g$ in Assumption \ref{AS:convex} implies that for any $z\in \mathrm{dom}\, g$, one has \begin{equation}\bm{L}^T\lambda \geq 0, \text{ for any }\lambda \in \partial g(z).\label{LE:Lnv_positive} \end{equation} The central idea is to use quadratic upper approximations componentwise on the smooth term~$F$. We thus introduce the following mapping \begin{align*} H(x,y) := F(x) + \nabla F(x) (y - x) + \frac{\bm{L}}{2}\|y - x\|^2,\nonumber \quad (x,y)\in\mathbb{R}^n\times\mathbb{R}^n, \end{align*} where $\nabla F$ denotes the Jacobian matrix of $F$. This leads to the following family of subproblems \begin{eqnarray} \big(\mathcal{P}_x\big): \quad \min\{ \,g (H(x,y)):y\in \mathbb{R}^n\},\nonumber \end{eqnarray} where $x$ ranges in $F^{-1}(\mathrm{dom}\, g)$. As shall be discussed in further sections this problem is well-posed for broad classes of examples. We make the following additional standing assumption: \begin{assumption}\label{AS:existence_sub} For any $x$ in $F^{-1}(\mathrm{dom}\, g)$, the function $g\circ H(x,\cdot)$ has a minimizer. \end{assumption} \noindent Elementary but important properties of problem $\big(\mathcal{P}_x\big)$ are given in the following lemma.
\begin{lemma} \label{LE:sub_property} For any $x\in F^{-1}(\mathrm{dom}\, g)$, the following statements hold: \begin{description} \item (1) $\mathrm{dom}\, g(H(x,\cdot)) \subset \mathrm{dom}\, g\circ F$ and $$g(F(y)) \leq g(H(x,y)), \: \forall y \in \mathrm{dom}\, g(H(x,\cdot)).$$ \item (2) $g(F(x)) = g(H(x,x))$. \item (3) $g\circ H(x,\cdot)$ is proper and convex. \end{description} \end{lemma} \begin{proof} According to Assumption \ref{AS:convex} and the descent Lemma, \cite[Lemma 1.2.3]{Nesterov:2004}, for every $i$ in $\{1,\ldots,m\}$ one has $$f_i(y) \begin{cases} \leq f_i(x) + { \nabla f_i(x)^T(y - x)} + \frac{L_i}{2}\|y - x\|^2,\ L_i>0,\\ = f_i(x) + { \nabla f_i(x)^T(y - x)} + \frac{L_i}{2}\|y - x\|^2,\ L_i=0, \end{cases}\ \ \forall (x,y)\in\mathbb{R}^n\times\mathbb{R}^n. $$ By the monotonicity properties of $g$ we obtain that $g(F(y)) \leq g(H(x,y))$ for all $y\in \mathbb{R}^n$ and (1) is proved. Items (2) and (3) follow from simple verifications. \end{proof} \begin{mdframed}[style=MyFrame] \begin{center} {\bf Multiproximal method ({\rm Multiprox})} \end{center} \noindent \ \ \ \ \ Choose $\ x_0 \in F^{-1}(\mathrm{dom}\, g)$ and iterate for $k\in \mathbb{N}$: \begin{equation} x_{k+1}\in \underset{y\in \mathbb{R}^n}{\mathrm{argmin}\,}\ \ g\left(F(x_k) + \nabla F(x_k) (y - x_k) + \frac{\bm{L}}{2}\|y - x_k\|^2\right) \label{Numerical_Scheme} \end{equation} \mbox{with the choice $x_{k+1}=x_k$ whenever $x_k$ is a minimizer of {$g(H(x_k,\cdot))$}.} \end{mdframed} \noindent If we set $$p(x):= \underset{ y\in\mathbb{R}^n}{\text{argmin}}\ g (H(x,y))$$ for any $x$ in $F^{-1}(\mathrm{dom}\, g)$ the algorithm simply {reads as} $x_{k+1}\in p(x_k)$ with $x_{k+1}=x_k$ whenever $x_k\in p(x_k)$. \begin{remark}{\rm (a) Item (1) of Lemma~\ref{LE:sub_property}, {along with Assumption \ref{AS:existence_sub},} implies that the algorithm is well defined. Observe that Lemma~\ref{LE:sub_property} actually shows that our algorithm is based on the classical idea of minimizing successively majorant functions coinciding at order 1 with the original function.\\ (b) As already mentioned, the algorithm does not require the knowledge of the Lipschitz constant of $g$ on its domain.} \end{remark} { \subsection{Examples and implementation issues}\label{examples} We give here two important examples for which the subproblems are simple quadratic problems. \subsubsection{Convex nonlinear programming}\label{movi} Consider the classical convex nonlinear programming problem \begin{equation}\label{exnlp}\min \left\{f(x):f_i(x)\leq 0,\: i=1,\ldots,m, \, x \in \mathbb{R}^n \right\},\end{equation} where $f \colon \mathbb{R}^n \to \mathbb{R}$ is a convex function with $L_f$ Lipschitz continuous gradient and each $f_i$ is defined as in Assumption \ref{AS:convex}. Using the reformulation: $ \min \{ f(x)+i_{\mathbb{R}_-^m}(f_1(x),\ldots, f_m(x)): x\in \mathbb{R}^n\}$ and setting $g(s,y_1,\ldots,y_m)=s+i_{\mathbb{R}_-^m}(y_1,\ldots,y_m)$ and $F(x)=(f(x),f_1(x),\ldots,f_m(x))$, the problem\footnote{There is a slight shift in the indices of $F$} \eqref{exnlp} can be seen as an instance of $\big(\mathcal{P}\big)$.
{\rm Multiprox}\ writes \begin{align} x_{k+1} \in \underset{y\in\mathbb{R}^n}{\mathrm{argmin}\,}\quad & f(x_k) + \nabla f(x_k)^T (y- x_k) + \frac{L_f}{2}\|y - x_k\|^2 \nonumber \\ \mathrm{s.t.}\quad&f_i(x_k) + \nabla f_i(x_k)^T (y- x_k) + \frac{L_i}{2}\|y - x_k\|^2 \leq 0,\, i=1,\ldots,m, \label{EQ:MBalgo} \end{align} which is a generalization of the moving balls method \cite{Auslender:10,shefi2016dual} in the sense that our algorithm offers the additional flexibility that affine constraints can be left unchanged in the subproblem (by setting the corresponding $L_i$ to $0$). Assume for simplicity that $L_f>0$. Computing $x_{k+1}$ leads to solving very specific quadratic problems. Indeed, if $q$ is a quadratic form appearing within the above subproblem, its Hessian is given by $\nabla^2q=cI_n$ (with $c>0$) or $\nabla^2q=0_n$, where $I_n$ (resp. $0_n$) denotes the identity (resp. null) matrix in $\mathbb{R}^{n\times n}$. Computing $x_{k+1}$ amounts to computing the Euclidean projection of a point onto an intersection of Euclidean balls/hyperplanes. Both types of sets have extremely simple projection operators and one can thus apply Dykstra's projection algorithm (see e.g., \cite{BC17}) or a fast quadratic solver (see e.g., \cite{nocedal}). Let us also mention that this type of problems can be treated very efficiently by specific methods based on activity detection described in \cite{Auslender:10, shefi2016dual}. \subsubsection{ Min-max problems}\label{minmax} We consider the problem $$\min\left\{\max_{i=1,\ldots,m} f_i (x):x\in \mathbb{R}^n\right\}.$$ This type of problems is very classical in optimization but also in game theory (see e.g., \cite{Nesterov:2004}). Observing that the function $g(z_1,\ldots,z_m)=\max_{i=1,\ldots,m} z_i$ satisfies our assumptions, we see that the problem is already under the form $\big(\mathcal{P}\big)$. The substeps thus assume the form \begin{align} x_{k+1} \in \underset{y\in\mathbb{R}^n}{\mathrm{argmin}\,}\quad & \left(\max_{i=1,\ldots,m}(f_i(x_k) + \nabla f_i(x_k)^T(y - x_k) + \frac{L_i}{2}\|y - x_k\|^2)\right) .\nonumber \end{align} As previously explained, this subproblem can be rewritten as a simple quadratic problem and it can thus be solved through the same means. In the last section, we illustrate the numerical efficiency of {\rm Multiprox}\ on this type of problems. \begin{remark} \rm Other cases can be treated by {\rm Multiprox}. Consider, for example, the following problem \begin{align*} \underset{x\in\mathbb{R}^n}{\mathrm{min}}\quad\quad& \mathrm{max}\{f_1(x),f_2(x)\} + \|x\|_1,\\ \mathrm{s.t.}\quad\quad&\mathrm{max}\{f_3(x),f_4(x)\} + \|x\|_{\infty} \leq 0. \end{align*} where $f_i$ are smooth convex functions for $i = 1,\ldots,4$. Then, for any $x \in \mathbb{R}^n$, the solution of $\big(\mathcal{P}_x\big)$ can be computed as follows: \begin{align*} \min_{y \in \mathbb{R}^n, \,s \in \mathbb{R}, \,t\in \mathbb{R}^n, \,u \in \mathbb{R}, \,v \in \mathbb{R}} \quad\quad &s + \sum_{i=1}^n t_i\\ \mathrm{s.t.}\quad\quad &f_1(x) + \nabla f_1(x)^T (y - x) + \frac{L_1}{2} \|y - x\|^2 \leq s\\ &f_2(x) + \nabla f_2(x)^T (y - x) + \frac{L_2}{2} \|y - x\|^2 \leq s\\ &y_i\leq t_i, \, i=1,\ldots, n\\ &-y_i\leq t_i, \, i=1,\ldots, n\\ &{t_i\leq v}, \, i=1,\ldots, n\\ &f_3(x) + \nabla f_3(x)^T (y - x) + \frac{L_3}{2} \|y - x\|^2 \leq u\\ &f_4(x) + \nabla f_4(x)^T (y - x) + \frac{L_4}{2} \|y - x\|^2 \leq u\\ &u + v \leq 0 \end{align*} which is a quadratically constrained linear program.
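As a purely illustrative sketch of how such substeps can be prototyped, the following Python fragment solves one {\rm Multiprox}\ substep for the min-max model of Section~\ref{minmax}; it assumes the \texttt{cvxpy} modeling library, and the names \texttt{f\_vals}, \texttt{grads} and \texttt{L} are hypothetical placeholders for the values $f_i(x_k)$, the gradients $\nabla f_i(x_k)$ and the moduli $L_i$ at the current iterate.
\begin{verbatim}
import numpy as np
import cvxpy as cp

def multiprox_step(x, f_vals, grads, L):
    # x: current iterate (n,); f_vals: (m,); grads: (m, n); L: (m,)
    y = cp.Variable(x.shape[0])
    # componentwise quadratic upper models H_i(x, .)
    models = [f_vals[i] + grads[i] @ (y - x)
              + 0.5 * L[i] * cp.sum_squares(y - x)
              for i in range(len(f_vals))]
    # outer function g = max applied to the local models
    cp.Problem(cp.Minimize(cp.maximum(*models))).solve()
    return y.value
\end{verbatim}
Such a generic modeling layer is of course only meant for experimentation; in practice a dedicated quadratic solver exploiting the common Hessian structure of the models (cp. Section~\ref{movi}) is preferable.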
\end{remark} } \subsection{Qualification, optimality conditions and a condition number}\label{s:cr} The first issue met in the study of {\rm Multiprox}\ is the one of qualification conditions both for $\big(\mathcal{P}\big)$ and $\big(\mathcal{P}\big)x$. Classical qualification conditions take the form \begin{align} \label{EQ:qualifTradi} N_{\mathrm{dom}\, g}(F(x)) \cap \mathrm{ker}(\nabla F (x)^T) = \{0\}, \quad \forall x \in F^{-1}(\mathrm{dom}\, g), \end{align} where $N_{\mathrm{dom}\, g}(F(x))$ denotes the normal cone to $\mathrm{dom}\, g$ (see e.g., \cite[Example 10.8]{Rockafellar:1998}). In this section, we first describe a different qualification condition which takes advantage of the specific monotonicity properties of $g$ as described in Assumption \ref{AS:convex}. This condition was already used in \cite{hiriart2006note} to provide a formula for the Legendre conjugate of the composite function $g(F(\cdot))$. In this setting, we show that this condition allows to use the chain rule and provides optimality conditions which will be crucial to study {\rm Multiprox}\ algorithm. We emphasize that this qualification condition is much more practical than conditions of the form (\ref{EQ:qualifTradi}). Besides it is also naturally amenable to quantitative estimation which appears to be fundamental for the computation of complexity estimates. \paragraph{Qualification condition and chain rule} The following Slater's like qualification condition is specific to the ``monotone composite model" we consider here (see \cite[Theorem 2]{hiriart2006note} and \cite[ Section 3.5.2]{BGW}). {We make the following standing assumption:}\begin{assumption}[Qualification]\label{AS:Qualification} There exists $\bar{x}$ in $\displaystyle F^{-1}(\text{\rm int}\,\mathrm{dom}\, g).$ \end{assumption} \noindent The following result illustrates the main interest of Assumption~\ref{AS:Qualification}: \begin{proposition}[Chain rule]\label{PR:chainRule} For all $x$ in $F^{-1}(\mathrm{dom}\, g)$: $$\partial \left(g\circ F\right)(x)= {\nabla F(x)^T} \partial g(F(x)).$$ \end{proposition} Proposition \ref{PR:chainRule} follows from \cite[Theorem 3.5.2]{BGW}, and we provide a self contained proof in Appendix \ref{SE:appendixProofChainRule}. Another interesting and useful consequence of Assumption \ref{AS:Qualification} is that it automatically ensures a similar qualification condition for all subproblems~$\big(\mathcal{P}\big)x$. \begin{proposition}[Qualification for subproblems]\label{PR:qualifSub} For all $x$ in $F^{-1}(\mathrm{dom}\, g)$ there exists $w(x) \in \mathbb{R}^n$ such that \begin{align*} H(x, w(x)) \in \mathrm{int}\ \mathrm{dom}\, g. \end{align*} \end{proposition} \begin{proof} An explicit construction of $w(x)$ is provided in Lemma \ref{LE:lower_bound_Hxw} (Appendix \ref{SE:appendixExplicitBound}). \end{proof} These results provide necessary and sufficient optimality conditions for $\big(\mathcal{P}\big)$ and for $\big(\mathcal{P}\big)x$. 
\begin{corollary}[Fermat's rule] \label{TH:optimality_condition} A point $x_* \in F^{-1}(\mathrm{dom}\, g)$ is a minimizer of $g\circ F$ if and only if \begin{eqnarray} \exists \lambda_*\in \partial g(F(x_*))\ \text{s.t.}\ {\nabla F(x_*)^T} \lambda_* = 0.\nonumber \end{eqnarray} For any $x$ in $F^{-1}(\mathrm{dom}\, g)$, and for any $y$ in $\mathbb{R}^n$, we have $y \in p(x)$ if and only if $$\exists \nu\in\partial g(H(x,y))\ \text{s.t.}\ {\nabla_y H(x,y)^T} \nu = 0.$$ \end{corollary} \noindent The first part of the corollary is of course an immediate consequence of the chain rule in Proposition~\ref{PR:chainRule}. The second part holds true for the same reasons, replacing Assumption \ref{AS:Qualification} by Proposition~\ref{PR:qualifSub}. \paragraph{Lagrange multipliers and a condition number} \begin{definition}[Lagrange multipliers for $\big(\mathcal{P}\big)x$] \label{DE:set} For any $x\in F^{-1}(\mathrm{dom}\, g)$ and any $y\in p(x)$, we set \begin{eqnarray*} \mathcal{V}(x,y) := \big\{\nu\in\partial g(H(x,y)): {\nabla_y H(x,y)^T} \nu = 0\big\}. \end{eqnarray*} \end{definition} The following quantity, which can be seen as a kind of condition number, captures the boundedness properties of the multipliers for the subproblems. It will play a crucial role in our complexity studies. \begin{lemma}[A condition number] \label{LE:bounded1} Given any nonempty compact set $K\subseteq F^{-1}(\mathrm{dom}\, g)$ and any $\gamma \geq \min g\circ F$, the following quantity is finite: \begin{eqnarray*} \mathscr{C}_\gamma(K):= \sup\left\{\inf \{\bm{L}^T \nu : \nu\in \mathcal{V}(x,y)\}:\:x\in K,\, g(F(x)) \leq \gamma,\, y\in p(x) \right\} . \end{eqnarray*} \end{lemma} \begin{proof} We shall see that Lemma \ref{TH:explicitBound} provides an explicit bound on this condition number and as a consequence $\mathscr{C}_\gamma(K)$ is finite. \end{proof} \begin{remark}[$\mathscr{C}_\gamma(K)$ as a condition number] {\rm {In numerical analysis, the term condition number usually refers to a measure of the magnitude of variation of the output as a function of the variation of the input. For instance, when one studies the usual gradient method for minimizing a convex function $f:\mathbb{R}^n\to\mathbb{R}$ with Lipschitz gradient $L_f$, one is led to algorithms of the form $x_{k+1}=x_k-\nabla f(x_k)/L_f$ and the complexity takes the form $\displaystyle L_f\|x_0-x^*\|^2/(2k)$ where $x^*$ is a minimizer of $f$. The bigger $L_f$ is, the worse the estimate is. The quantity $\mathscr{C}_\gamma(K)$ plays an analogous role in the complexity estimates below: the larger it is, the worse the bound. It captures in particular the compositional structure of the model by combining the smoothness modulus of $F$ with some regularity for $g$ captured through KKT multipliers.}}\end{remark} \section{Complexity and convergence}\label{SE:complexity} This section is devoted to the exposition of the complexity results obtained for {\rm Multiprox}. In the first {subsection}, we describe our main results: an abstract convergence result and {explicit} complexity estimates. We then describe the consequences for known algorithms. \subsection{Complexity results for {\rm Multiprox}} \paragraph{General complexity and convergence results} We begin by establishing that {\rm Multiprox}\ is a descent method: \begin{lemma}[Descent property] \label{LE:nonincreasing} For any $x\in F^{-1}(\mathrm{dom}\, g)$, if $y\in p(x)$, one has \begin{equation} g(F(y)) - g(F(x)) \leq -\frac{\|y - x\|^2}{2} \bm{L}^T {\nu},\nonumber \end{equation} for all $\nu$ in $\mathcal{V}(x,y)$.
\end{lemma} \begin{proof} By Definition \ref{DE:set}, for every $x\in F^{-1}(\mathrm{dom}\, g)$ and $y\in p(x)$ we have $ {\nabla_y H(x,y)^T} {\nu} = 0$, for any $\nu\in\mathcal{V}(x,y)$. In other words \begin{eqnarray} {\nabla F(x)^T} {\nu} + (y - x)(\bm{L}^T {\nu}) = 0,\ \forall \nu\in\mathcal{V}(x,y). \label{EQ:A_suboptimality} \end{eqnarray} By convexity of $g$, one has \begin{eqnarray} g(H(x,y)) - g(F(x))&\leq& [H(x,y) - F(x)]^T {\nu}\nonumber\\ &=& {[\nabla F(x)^T {\nu}]^T(y - x)} + \frac{\|y - x\|^2}{2} \bm{L}^T {\nu},\ \forall \nu \in\mathcal{V}(x,y).\nonumber \end{eqnarray} Substituting this inequality into equation (\ref{EQ:A_suboptimality}) yields \begin{eqnarray} g(H(x,y)) - g(F(x)) \leq - \frac{\|y - x\|^2}{2} \bm{L}^T {\nu},\ \forall \nu\in\mathcal{V}(x,y).\nonumber \end{eqnarray} Combining Lemma \ref{LE:sub_property} with this inequality completes the proof. \end{proof} \begin{remark}[{\rm Multiprox}\ is a descent method]{\rm For any sequence $(x_k)_{k\in\mathbb{N}}$ generated by {\rm Multiprox}, the corresponding sequence of objective values $(g\circ F(x_k))_{k\in\mathbb{N}}$ is nonincreasing.} \label{RE:nonincreasing} \end{remark} \noindent Let us set \begin{eqnarray*} S & := &\mathrm{argmin}\, g\circ F\\ &= & \{x\in F^{-1}(\mathrm{dom}\, g): \exists \lambda \in \partial g(F(x))\ \text{s.t.}\ {\nabla F(x)^T} \lambda = 0\}.\nonumber \end{eqnarray*} The following theorem is our first main result under the assumption that the smoothness moduli of the components of $F$ are known and available to the user. The first item is a complexity result while the second one is a convergence result. Discussion regarding the impact of our results {on} other existing algorithms is held in Section \ref{SE:conseqComplexity}. \begin{theorem}[Complexity and convergence for {\rm Multiprox}] Let $(x_k)_{k\in\mathbb{N}}$ be a sequence generated by {\rm Multiprox}. Then, the following statements hold: \begin{description} \item (i) For any $x_*\in S$ set $B_{x_0,x_*}= B(x_*,\|x_0-x_*\|)$. Then, for all $k \geq 1$, \begin{equation} g (F(x_k)) - g(F(x_*)) \leq\frac{\mathscr{C}_{g(F(x_0))}\left(B_{x_0,x_*}\right)}{2k} \|x_0 - x_*\|^2; \label{EQ:complexity_f} \end{equation} \item (ii) The sequence $(x_k)_{k\in\mathbb{N}}$ converges to a point in the solution set $S$.\end{description} \label{TH:nonincreasing} \end{theorem} \begin{proof} (i) {Let $n$ be a positive integer.} The following elementary observation appears to be very useful: given any function of the form $$f:\mathbb{R}^n\rightarrow \mathbb{R},\ x\mapsto a \|x\|^2 + {b^T x} + c,$$ where $a\in \mathbb{R}_+$, $b\in\mathbb{R}^n$, and $c\in\mathbb{R}$, if there exists $\hat{x}$ in $\mathbb{R}^n$ such that $\nabla f(\hat{x}) = 0$, one has \begin{eqnarray} f(x) = f(\hat{x}) + a\|x - \hat{x}\|^2\ \forall x\in\mathbb{R}^n. \label{EQ:strong_a} \end{eqnarray} By Definition \ref{DE:set}, {for any integer $k>0$ and} for every $\nu_k\in\mathcal{V}(x_{k-1},x_k)$ the gradient of $H(x_{k-1},\cdot)^T {\nu}_{k}$ at $x_k$ is zero, i.e., $\nabla_y [H(x_{k-1},x_k)^T {\nu}_{k}] =0$. 
Combining the explicit expression of $H(x_{k-1},\cdot)^T {\nu}_{k}$ with equation (\ref{EQ:strong_a}) and considering that $\bm{L}^T\nu_k \geq 0$ (see \eqref{LE:Lnv_positive}), one has, for any $\nu_k$ in $\mathcal{V}(x_{k-1},x_k)$ and any $x_* $ in $S$, \begin{eqnarray} H(x_{k-1},x_{k})^T {\nu}_{k} & = & F(x_{k-1}){^T\nu_k}+[\nabla F(x_{k-1})(x_k-x_{k-1}){]^T\nu_k}+\frac{\|x_k-x_{k-1}\|^2}{2}\bm{L}{^T}\nu_k\\ & = &H(x_{k-1},x_{*})^T {\nu}_{k} - \frac{\bm{L}^T {\nu}_{k}}{2}{\|x_k - x_*\|^2}. \label{EQ:strong_equation} \end{eqnarray} \noindent {According to} the convexity of $g$, for any $\nu_k$ in $\mathcal{V}(x_{k-1},x_k)$ and any $x_*$ in $S$, \begin{eqnarray} g\circ H(x_{k-1},x_{k}) - g\circ F(x_{*})\leq [H(x_{k-1},x_{k}) - F(x_{*}) ]^T {\nu}_{k}. \label{EQ:complexity_1} \end{eqnarray} As a consequence, for any $\nu_k$ in $\mathcal{V}(x_{k-1},x_k)$ and any $x_*$ in $S$, one has \begin{align} \label{EQ:bound_decreasing} \begin{split} &\ g\circ F(x_{k}) - g\circ F(x_{*})\\ \overset{(a)}{\leq} &\ [H(x_{k-1},x_k) - F(x_*)]^T {\nu}_k\\ \overset{(b)}{=} &\ H(x_{k-1},x_{*})^T {\nu}_{k} - \frac{\bm{L}^T {\nu}_{k}}{2}{\| x_k - x_*\|^2} - F(x_{*})^T {\nu}_{k} \\ \overset{(c)}{=} &\ [ F(x_{k-1}) + \nabla F(x_{k-1})( x_{*} - x_{k-1}) ]^T {\nu}_{k} + \frac{\bm{L}^T {\nu}_{k} }{2} (\|x_{k-1} - x_*\|^2 - \|x_k - x_*\|^2) - F(x_{*})^T {\nu}_{k} \\ \overset{(d)}{\leq} &\ F(x_{*})^T {\nu}_{k} + \frac{\bm{L}^T {\nu}_{k} }{2} (\|x_{k-1} -x_*\|^2 - \|x_k - x_*\|^2) - F(x_{*})^T {\nu}_{k} \\ = &\ \frac{\bm{L}^T {\nu}_{k} }{2} (\|x_{k-1} -x_*\|^2 - \|x_k - x_*\|^2), \end{split} \end{align} where $(a)$ is obtained by combining Lemma \ref{LE:sub_property} with equation (\ref{EQ:complexity_1}), for $(b)$ we use equation (\ref{EQ:strong_equation}), for $(c)$ we expand $H(x_{k-1},x_*)^T {\nu}_k$ explicitly, and for $(d)$ we use the property that the $i$-th coordinate of $\nu_k$ is nonnegative if $L_i>0$ (c.f. Assumption \ref{AS:convex}) and the coordinatewise convexity of $F$. Let us consider beforehand the stationary case. If there {exists} a positive integer $k_0$ and a subgradient $ \nu\in \mathcal{V}(x_{k_0-1},x_{k_0})$ such that $\bm{L}^T\nu =0$, one deduces from equation~(\ref{EQ:bound_decreasing}) that $g(F(x_{k_0})) = \mathrm{inf}\{g\circ F\}$. Recalling that the sequence $\big(g(F(x_k))\big)_{k\in\mathbb{N}}$ is nonincreasing (cf. Remark \ref{RE:nonincreasing}), one thus has $x_{k_0+j}\in S$ for any $j\in\mathbb{N}$. Using Lemma \ref{LE:sub_property}, it follows that for any $j \in \mathbb{N}$, $x_{k_0+j} \in p(x_{k_0+j})$. Hence, for all $j \in \mathbb{N}$, $x_{k_0+j} = x_{k_0} \in S$ and the algorithm actually stops at a global minimizer. This ensures that if there exists $k_0$ such that $1\leq k_0 \leq k$ and a subgradient $ \nu\in \mathcal{V}(x_{k_0-1},x_{k_0})$ with $\bm{L}^T\nu =0$, then equation (\ref{EQ:complexity_f}) holds since in this case, $x_k =x_{k_0} \in S$. To proceed, we now suppose that $\bm{L}^T \nu >0$ for every $\nu\in \mathcal{V}(x_{j-1},x_j)$ and for every $j$ in $\{1,\ldots,{k}\}$. Observe first that by (\ref{EQ:bound_decreasing}) the sequence $\|x_k-x_*\|$ is nonincreasing and, since $x_k$ is a descent sequence for $g\circ F$, it evolves within $B(x_*,\|x_0-x_*\|)$ and satisfies $g(F(x_k)) \leq g(F(x_0))$ for all $k \in \mathbb{N}$. 
Recalling Lemma \ref{LE:bounded1}, we have the following {boundedness} result \begin{equation}\label{maj} \mathscr{C}_{g(F(x_0))}(B_{x_0,x_*})\geq \underset{i=1,\ldots,k}{\mathrm{max}} \big\{ \underset{\nu\in\mathcal{V}(x_{i-1},x_i)}{\mathrm{min}}\{\bm{L}^T {\nu}\} \big\}, \forall k\geq 1. \end{equation} The rest of the proof is quite standard. Combining inequalities of the form (\ref{EQ:bound_decreasing}) with the above inequality \eqref{maj} one obtains \begin{eqnarray} g(F(x_j)) - g (F(x_*)) \leq \frac{\mathscr{C}_{g(F(x_0))}(B_{x_0,x_*}) }{2}(\|x_{j-1} - x_*\|^2 - \|x_j - x_*\|^2), \label{EQ:unifBound} \end{eqnarray} for all {$j\in\{1,\ldots,k\}$} and for any $x_*\in S$. Fix $k\geq 1$. Summing up inequality (\ref{EQ:unifBound}) for $j \in \{1, \ldots,k\}$ ensures that for any $x_* \in S$, we have \begin{eqnarray} \sum_{j=1}^{k} [g \circ F(x_{j}) - g\circ F(x_{*})] &\leq&\frac{\mathscr{C}_{g(F(x_0))}(B_{x_0,x_*})}{2}(\|x_0 - x_*\|^2 - \|x_k - x_*\|^2)\nonumber\\ & \leq & \frac{\mathscr{C}_{g(F(x_0))}(B_{x_0,x_*}) }{2}\|x_0 - x_*\|^2. \label{EQ:complexity_9} \end{eqnarray} Since the sequence $(g\circ F(x_j))_{j\in\mathbb{N}}$ is nonincreasing, it follows that, for any $x_* \in S$, \begin{align} k[g \circ F(x_{k}) - g\circ F(x_{*}) ] \leq \sum_{j=1}^k [g \circ F(x_{j}) - g\circ F(x_{*}) ]. \label{EQ:boundSum} \end{align} Substituting inequality (\ref{EQ:boundSum}) into inequality (\ref{EQ:complexity_9}), we obtain \begin{eqnarray} k[g \circ F(x_{k}) - g\circ F(x_{*}) ] \leq \frac{\mathscr{C}_{g(F(x_0))}(B_{x_0,x_*})}{2} \|x_0 - x_*\|^2,\ \forall x_*\in S.\nonumber \end{eqnarray} Dividing both sides of this inequality by $k$ indicates that (\ref{EQ:complexity_f}) holds. Since $k$ was an arbitrary positive integer, this completes the proof of (i). \noindent (ii) The proof relies on Opial's lemma/monotonicity techniques \`a la F\'ejer (see \cite{BC17}). Using \eqref{EQ:complexity_f} and the lower semicontinuity of $g\circ F$ one immediately proves that cluster points of $(x_k)_{k\in \mathbb{N}}$ are in $S$. Combining this with the fact that $\|x_k-x_*\|$ is nonincreasing for all $x_* \in S$ concludes the proof.\end{proof} \begin{remark}\label{r}{\rm (a) If the function $g$ is globally $L_g$ Lipschitz continuous, one has $\mathscr{C}_\gamma(K) \leq L_g \|\bm{L}\|$ for any compact set $K\subset\mathbb{R}^n$ and any $\gamma \geq \min g \circ F$, see Subsection \ref{SE:LIP}.\\ (b) Consider a minimization problem $\big(\mathcal{P}\big)$ which has several formulations in the sense that there exist $g_1,F_1$ and $g_2,F_2$ such that the objective is given by $g_1\circ F_1=g_2\circ F_2$. Then the complexity results for the two formulations may differ considerably, see Subsection~\ref{SE:FB}. \\ (c) Note that the above proof actually yields a more subtle ``online" {estimate}: \begin{equation}\label{EQ:complexity_f2} g (F(x_k)) - g(F(x_*)) \leq \frac{\underset{i=1,\ldots,k} {\mathrm{max}} \big\{ \min\left\{\bm{L}^T {\nu}: \nu\in\mathcal{V}(x_{i-1},x_i) \big\}\right\}}{2k} \|x_0 - x_*\|^2. \end{equation} This shows that the specific history of a sequence plays an important role in its actual complexity. This is of course not captured by global constants of the form $\mathscr{C}_{g(F(x_0))}(B_{x_0,x_*})$ which are worst case estimates.\\ (d) The complexity estimate of Theorem \ref{TH:nonincreasing} does not directly involve the constant $L_g$, {but} only multipliers. 
This will be useful to recover existing complexity results for {algorithms} such as the forward-backward splitting algorithm in Section \ref{SE:FB}. } \end{remark} \paragraph{Explicit complexity bounds} We now provide an explicit bound for the condition number which will in turn provide explicit complexity bounds for {\rm Multiprox}. Our approach relies on a thorough study of the multipliers and on a measure of the Slater's like assumption through the term $$\mathrm{dist}[F(\bar{x}),\mathrm{bd}\ \mathrm{dom}\, g]>0,$$ whose positivity follows from Assumption~\ref{AS:Qualification}. Our results on multipliers are recorded in the following fundamental lemma. Its proof is quite delicate and it is postponed in the appendix. Given a matrix $A\in \mathbb{R}^{m\times n}$, its operator norm is denoted by $\|A\|_{\rm op}$. \begin{lemma}[Bounds for the multipliers of {$\big(\mathcal{P}\big)x$}\,] For any $x\in F^{-1}(\mathrm{dom}\, g )$, $y\in p(x)$ and $\nu\in\mathcal{V}(x,y)$, the following statements hold:\\ (i) if $H(x,y)\in \mathrm{int}\ \mathrm{dom}\, g$, then $\bm{L}^T\nu \leq L_g \|\bm{L}\|$;\\ (ii) if $H(x,y)\in \mathrm{bd}\ \mathrm{dom}\, g$, then \\ $\displaystyle \bm{L}^T\nu \leq 8L_g \left(\|\nabla F(x)\|_{\rm op} +(3\|x - y\| + \|\bar{x} - x\|) \frac{\|\bm{L}\|}{2} \right)^2 \frac{ \mathrm{dist}[F(\bar{x}),\mathrm{bd}\ \mathrm{dom}\, g ] + \|\bm{L}\| \|\bar{x} - x \|^2/2 } {\mathrm{dist}[F(\bar{x}),\mathrm{bd}\ \mathrm{dom}\, g ]^2 }$ $\quad$ where $\bar{x}$ is as in Assumption \ref{AS:Qualification}. \label{TH:explicitBound} \end{lemma} {The full proof of this Lemma is postponed to Appendix \ref{SE:appendixExplicitBound}.} A pretty direct consequence of {Lemma \ref{TH:explicitBound}} is a complexity result with explicit constants. \begin{theorem}[Explicit complexity bound for {\rm Multiprox}] \label{CO:explicitLipDomain} Let $(x_k)_{k\in\mathbb{N}}$ be a sequence generated by {\rm Multiprox} . Then, for any $x_*\in S$ {and} for all $k \geq 1$, \begin{equation} {g (F(x_k)) - g(F(x_*)) \leq \max\Big\{\|\bm{L}\|,\gamma \Big\}\, \frac{L_g\|x_0 - x_*\|^2}{2k},} \label{EQ:complexity_f_explicit} \end{equation} where $$ \gamma=8 \left(\|\nabla F(x_0)\|_{\rm op} + \frac{\|\bm{L}\|}{2}\left( 11\|x_0 - x_*\| + \|\bar{x} - x_*\| \right) \right)^2 \frac{ d_{\bar{x}} + \frac{\|\bm{L}\|}{2}\left( \| x_* - \bar{x}\|+ \| x_* - x_0\| \right)^2 } {d_{\bar{x}}^2 } $$ {with} $d_{\bar{x}} = \mathrm{dist}[F(\bar{x}),\mathrm{bd}\ \mathrm{dom}\, g ]$. \end{theorem} \begin{proof} Fix $k>0$ in $\mathbb{N}$. {Recall that $\|x_k - x_*\| \leq \|x_0 - x_*\|$ (see the proof of Theorem \ref{TH:nonincreasing})}. The bound on the operator norm of the Jacobian is then computed as follows: \begin{align*} \|\nabla F(x_k)\|_{\rm op} &= \|\nabla F(x_0) + \nabla F(x_k) - \nabla F(x_0) \|_{\rm op} \\ &\leq \|\nabla F(x_0)\|_{\rm op}+ \|\nabla F(x_k) - \nabla F(x_0)\|_{\rm op} \\ &\leq \|\nabla F(x_0)\|_{\rm op}+ \|\bm{L}\|\|x_k - x_0\| \\ &\leq \|\nabla F(x_0)\|_{\rm op}+ 2\|\bm{L}\|\|x_* - x_0\|, \end{align*} where we have used the fact that row $i$ of $\nabla F$ is $L_i$ Lipschitz, for $i = 1, \ldots, m$, implying that $\nabla F$ is $\|\bm{L}\|$ Lipschitz with respect to the Frobenius norm. 
One concludes by using Theorem \ref{TH:nonincreasing} and Lemma \ref{TH:explicitBound}.\end{proof} \subsection{Consequences of the main result} \label{SE:conseqComplexity} \subsubsection{Complexity for Lipschitz continuous models} \label{SE:LIP} It is very useful to make the following elementary observation: \begin{align} \label{bound} \mathscr{C}_\gamma(K)&\leq \sup \{\bm{L}^T \nu:\: x\in K,\, g(F(x)) \leq \gamma,\, y\in p(x),\nu\in \mathcal{V}(x,y) \}\nonumber\\ & { \leq \sup \{\bm{L}^T \lambda:\: x\in K, \, g(F(x)) \leq \gamma, \, y\in p(x),\lambda\in \partial g({H}(x,y)) \}} \nonumber\\ &{\leq \sup \{\bm{L}^T \lambda:\: z\in \mathrm{dom}\ g,\lambda\in \partial g(z) \} ,} \end{align} {where the first inequality is because we just removed an infimum from the definition of $\mathscr{C}_\gamma(K)$ in {Lemma \ref{LE:bounded1}}, the second follows because for any $x \in F^{-1}(\mathrm{dom}\, g)$ and $y \in p(x)$, $\mathcal{V}(x,y) \subset \partial g\left( H(x,y) \right)$, and the third follows because for any $x\in F^{-1}(\mathrm{dom}\, g)$ and any $y\in p(x)$, we have $H(x,y) \in \mathrm{dom}\, g$. The above upper bound is finite whenever $g$ has full domain and is globally Lipschitz continuous.} Indeed, in that case $\sup_{z,\lambda}\{ \|\lambda\|:\lambda \in \partial g(z)\}\leq L_g$ (see e.g., \cite[Theorem 9.13]{Rockafellar:1998}). Assumptions~\ref{AS:existence_sub} and \ref{AS:Qualification} are automatically satisfied. An immediate application of the Cauchy-Schwartz inequality leads to the following bound for the condition number \begin{equation} \mathscr{C}_{g(F(x_0))}(B_{x_0,x_*})\leq L_g\|\bm{L}\|, \quad \text{for all feasible data\, $x_0,x_*$}. \label{cond} \end{equation} Thus we have the general result for composite Lipschitz continuous problems: \begin{corollary}[Global complexity for Lipschitz continuous model] \label{CO:bounded} In addition to Assumptions~\ref{AS:convex} and \ref{AS:effectiveness}, suppose that $g$ is finite valued, and thus globally Lipschitz continuous with Lipschitz constant $L_g$. Let $(x_k)_{k\in\mathbb{N}}$ be a sequence generated by {\rm Multiprox}, then it converges to a minimizer and for any $x_* \in S$: \begin{align} g (F(x_k)) - g(F(x_*)) &\leq \frac{L_g\|\bm{L}\|}{2k} \|x_0 - x_*\|^2,\quad \forall k\geq1. \label{EQ:complexLipG} \end{align} \end{corollary} \begin{remark}{\rm (a) Note that instead of using the Cauchy-Schwartz inequality, one could use H\"older's inequality if $g$ is Lipschitz with respect to a different norm. For example, suppose that each coordinate of $g$ is $L_g$ Lipschitz continuous (the others being fixed), or in other words that the supremum norm of the subgradients of $g$ is bounded by $L_g$. In this case, a result similar to (\ref{EQ:complexLipG}) holds with $\|\bm{L}\|_1$ in place of the Euclidean norm. \\ (b) The bound given above is sharper than the general bound provided in Theorem~\ref{CO:explicitLipDomain}.} \end{remark} \subsubsection{Proximal Gauss-Newton's method for {min-max} problems} \label{SE:MINMAX} Let us illustrate how Theorem \ref{TH:nonincreasing} can give new insights into the proximal Gauss-Newton method (PGNM) when $g$ is a componentwise maximum. As in Subsection~\ref{minmax} consider $g(z) =\max(z_1,\ldots,z_m)\label{EQ:maxG}$ for $z$ in $\mathbb{R}^m$. Take $F$ as in Assumption~\ref{AS:convex}, set $\displaystyle L = \max_{i \in \{1, \ldots, m\}} L_i $ and \begin{align} \bm{L} = (L,\ldots,L)^T. 
\label{EQ:GNmodelL} \end{align} {\rm Multiprox}\ writes: \begin{align} x_{k+1} =\underset{y \in \mathbb{R}^n}{ \mathrm{argmin}\,} \max_{i \in \left\{ 1,\ldots,m \right\}} \left\{f_i(x_k) + \nabla f_i(x_k)^T( y - x_k) \right\} + \frac{L}{2}\|y - x_k\|^2, \label{EQ:GNAlgo} \end{align} which is nothing else than PGNM applied to the problem $\big(\mathcal{P}\big)$. The kernel $g$ is $1$ Lipschitz continuous with respect to the $L^1$ norm. As in Corollary~\ref{CO:bounded}, a straightforward application of H\"older's inequality leads to the following complexity result: \begin{corollary}[Complexity for PGNM] \label{CO:GNMAX} In addition to Assumptions \ref{AS:convex}, \ref{AS:effectiveness}, suppose that $g$ is the componentwise maximum. Let $(x_k)_{k\in\mathbb{N}}$ be a sequence generated by PGNM, then for any $k\geq 1$ and any $x_*$ in $S$, we have \begin{align} g (F(x_k)) - g(F(x_*)) &\leq \frac{L}{2k} \|x_0 - x_*\|^2, \label{EQ:complexLipG} \end{align} furthermore, the sequence $(x_k)_{k \in \mathbb{N}}$ converges to a solution of $\big(\mathcal{P}\big)$. \end{corollary} \begin{remark}\label{RE:GNMAX} {\rm (a) As far as we know, this complexity result for the classical PGNM is new. We suspect that similar results could be derived for much more general kernels $g$, this is a matter for future research.\\ (b) Note that the accelerated algorithm for PGNM described in \cite{Drusvyatskiy:20162} would achieve a convergence rate of the form $\displaystyle 2\sqrt{m}L\|x_0-x_*\|^2/k^2$. Indeed, the multiplicative constant appearing in the convergence rate of \cite[Theorem 8.5]{Drusvyatskiy:20162} involves the Lipschitz constant of $\nabla F^T$ measured in term of operator norm. We do not know if the constant $\sqrt{m}$ (which can be big for some problems) could be avoided, and thus, at this stage of our understanding, we cannot draw any comparative conclusion between these two complexity results.\\% In the setting of this paragraph, each line of $\nabla F$ being $L$-Lipschitz, the worst case estimate for this quantity is of the order of $\sqrt{m} L$. Yet, at this stage of our understanding we cannot draw any comparative conclusion of these two complexity results. Indeed it is not clear whether the factor $\sqrt{m}$, which can be big for some problems, could be avoided in the estimate of the fast method.\\ (c) Although the worst-case complexity estimate of {\rm Multiprox}\ is the same as the one for PGNM, we have observed a dramatic difference in practice, see Section~\ref{SE:numerical}. The intuitive reason is quite obvious since {\rm Multiprox}\ is much more adapted to the geometry of the problem. Better performances might also be connected to the estimate \eqref{EQ:complexity_f2} given in Remark~\ref{r}. } \end{remark} \subsubsection{Complexity of the Moving balls method.} \label{SE:MB} We provide here {an} enhanced nonsmooth version of the moving balls method, introduced in Subsection~\ref{movi}, which allows to handle sparsity constraints. Consider the following nonlinear convex programming problem: \begin{align} \min_{x \in \mathbb{R}^n} \,&f(x)+h(x)\nonumber\\ \mathrm{s.t.}\,&f_i(x) \leq 0, \, i= 1, \ldots, m. 
\label{EQ:modelNLP2} \end{align} where {$f:\mathbb{R}^n\rightarrow \mathbb{R}$ is convex, differentiable with $L_f$ Lipschitz continuous gradient}, each $f_i{:\mathbb{R}^n\rightarrow \mathbb{R}}$ is defined as in Assumption \ref{AS:convex}, and $h \colon \mathbb{R}^n \to \mathbb{R}$ is a convex lower semicon\-ti\-nuous function, for instance $h=\|\cdot\|_1.$ Choosing $g$ and $F$ adequately (details can be found in the proof of Corollary \ref{CO:MovingBall}), {\rm Multiprox}\ gives an algorithm combining/improving ideas presented in \cite{Auslender:10,shefi2016dual}\footnote{Observe that the subproblems are simple convex quadratic problems.}: \begin{align} x_{k+1} \in {\underset{y\in\mathbb{R}^n}{\mathrm{argmin}}}\quad & f(x_k) + {\nabla f(x_k)^T( y - x_k)} + \frac{L_f}{2}\|y - x_k\|^2 + h(y) \nonumber \\ \mathrm{s.t.}\quad&f_i(x_k) + {\nabla f_i(x_k)^T( y - x_k)} + \frac{L_i}{2}\|y - x_k\|^2 \leq 0,\, i=1,\ldots,m. \label{EQ:MBalgo} \end{align} Our main convergence result in Theorem \ref{TH:nonincreasing} can be combined with Lemma \ref{LE:bounded1} to recover and extend the convergence results of \cite{Auslender:10,shefi2016dual}. More importantly we derive explicit complexity bounds of the form $O(\frac{1}{k})$. We are not aware of any such quantitative result for general nonlinear programming problems. \begin{corollary}[Complexity of the moving balls method] \label{CO:MovingBall} Assume that $h$ is $L_h$ Lipschitz continuous and that there exists $\bar{x}$ in $\mathbb{R}^n$ such that $f_i(\bar{x}) < 0$, $i = 1, \ldots, m$. Assume that (\ref{EQ:modelNLP2}) has a solution $x^*$ and that there exists $i$ in $\{1,\ldots,m\}$ such that $L_i > 0$. Let $(x_k)_{k \in \mathbb{N}}$ be a sequence generated by {\rm Multiprox}. Then for any $k \geq 1$, $x_k$ is feasible\footnote{For the original problem (\ref{EQ:modelNLP2})} and \begin{equation} (f+h)(x_k) - (f+h)(x_*) \leq \max\Big\{1,\zeta\Big\} \frac{\|\bm{L}\|(1+ L_h)\|x_0 - x_*\|^2}{2k}, \label{EQ:complexity_MB_explicit} \end{equation} where {$\bm{L} = (L_f,L_1,\ldots,L_m,0,\ldots,0)^T\in\mathbb{R}^{n+m+1}$ and } $$\zeta=8\|\bm{L}\| \left(\frac{\|\nabla F(x_0)\|_{\rm op}}{\|\bm{L}\|} + \frac{1}{2}\left( 11\|x_0 - x_*\| + \|\bar{x} - x_*\| \right) \right)^2 \\ \frac{ \bar{f} + \frac{\|\bm{L}\|}{2} \left( \| x_* - \bar{x}\|+ \| x_* - x_0\| \right)^2 } {\bar{f}^2 }$$ with $\bar{f} = |\max_{i=1, \ldots, m}\left\{ f_i(\bar{x}) \right\}|$. \end{corollary} \begin{proof} We set $F \colon x \mapsto (f(x),f_1(x), \ldots, f_m(x),x)$ and $g\colon (z_0,z_1, \ldots, z_m, z) \mapsto z_0 + h(z) + \sum_{i=1}^m i_{\mathbb{R}_-}(z_i)$. With this choice, we obtain problem (\ref{EQ:modelNLP2}) and algorithm (\ref{EQ:MBalgo}) (we set the smoothness modulus of the identity part in $F$ to $0$, whence the value of ${\bm L}$). Assumptions \ref{AS:convex}, \ref{AS:effectiveness} and \ref{AS:Qualification} are clearly satisfied. Assumption \ref{AS:existence_sub} is satisfied since one of the $L_i$ is positive, {indicating} that the subproblems in (\ref{EQ:MBalgo}) are strongly convex {and} have bounded constraint sets. Finally $g$ is $(1+L_h)$ Lipschitz continuous on its domain. Hence {Theorem} \ref{CO:explicitLipDomain} can be applied. It remains to notice that $\bar{f} = \mathrm{dist}[F(\bar{x}), \mathrm{bd}\ \mathrm{dom}\ g]$ to conclude the proof. 
\end{proof} \subsubsection{Forward-backward splitting algorithm} \label{SE:FB} To illustrate further the flexibility of our method, we explain how our approach allows to recover the classical complexity results of the classical forward-backward splitting algorithm {within the convex setting}. Let $f\colon \mathbb{R}^n \to \mathbb{R}$ be a continuously differentiable convex function with $L$ Lipschitz gradient and $h\colon \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ be a proper lower semicontinuous convex function. Consider the following problem \begin{align} \inf_{x \in \mathbb{R}^n} f(x) + h(x). \label{EQ:smoothPlusNonSmooth} \end{align} This problem is a special case of the optimization objective of problem $\big(\mathcal{P}\big)$, by choosing \begin{equation}F:\left\{\begin{array}{lcl} \mathbb{R}^n &\rightarrow & \mathbb{R}^{n+1} \nonumber \\ x & \rightarrow &(f(x),x) \label{EQ:FBmodelF} \end{array}\right. \end{equation} and \begin{equation} \label{EQ:FBmodelg} g: \left\{ \begin{array}{lcl} \mathbb{R}\times \mathbb{R}^{n} &\rightarrow & {\mathbb{R}\cup \{+\infty\}} \\ (a,z) & \rightarrow &a+h(z) \end{array}\right. \end{equation} Finally, setting \begin{align} \bm{L} = \left( \begin{array}{c} L\\ 0\\ \vdots\\ 0 \end{array} \right) \in \mathbb{R}^{n+1}, \label{EQ:FBmodelL} \end{align} {\rm Multiprox}\ eventually writes: \begin{align} x_{k+1} = {\underset{{y \in \mathbb{R}^n}}{\mathrm{argmin}\,}}\quad f(x_k) + {\nabla f(x_k)^T( y - x_k)} + h(y) + \frac{L}{2}\|y - x_k\|^2, \label{EQ:FBAlgo} \end{align} which is exactly the forward-backward splitting algorithm. It is immediate to check that Assumptions \ref{AS:convex}, \ref{AS:effectiveness}, \ref{AS:existence_sub} and \ref{AS:Qualification} hold true as long as the minimum is achieved in (\ref{EQ:smoothPlusNonSmooth}) and that $\mathrm{dom}\, h$ has nonempty interior with $h$ being Lipschitz continuous on $\mathrm{dom}\, h$\footnote{Lipschitz continuity is actually superfluous for Theorem \ref{TH:nonincreasing} to hold. }. Given the form of $g$ in equation (\ref{EQ:FBmodelg}) and $\bm{L}$ in (\ref{EQ:FBmodelL}), Theorem~\ref{TH:nonincreasing} yields the classical convergence and complexity results for the forward-backward algorithm (see e.g., \cite{Combettes:2005} for convergence and \cite{Beck:2009} for complexity): $x_k$ converges to a minimizer and \begin{align} (f+h)(x_k) -(f+h)(x_*) \leq \frac{L \|x_0 - x_*\|^2}{2k}, \quad \forall k \geq 1. \label{EQ:complexFB} \end{align} \begin{remark}[Complexity estimates depend on the formulation] {\rm Assume that $h$ is the indicator function of a ball $B(a,r)$ so that the above method is the gradient projection method on this ball and its complexity is recovered by equation (\ref{EQ:complexFB}). Another way of modeling the problem is to consider minimizing $g_1\circ F_1$ taking the following forms \begin{align} & F_1\, \colon \mathbb{R}^n\rightarrow \mathbb{R}^2\nonumber\\ & \ \ \ \ \ x\rightarrow \left( \begin{array}{c} f(x)\\ f_1(x) \end{array}\right),\nonumber \end{align} where $f_1(x) = \|x-a\|^2 - r^2$, and \begin{align} & g_1\, \colon \mathbb{R}^2 \rightarrow \mathbb{R}{\ \cup\ \{+\infty\}}\nonumber\\ &\ \ \ \ \ \left(\begin{array}{c} z_1\\ z_2 \end{array}\right)\rightarrow z_1 + h_1(z_2),\nonumber \end{align} where $h_1$ is the indicator function of $\mathbb{R_-}$, so that for every $x\in\mathbb{R}^n$ it holds $g\circ F(x) = g_1\circ F_1(x)$ and {\rm Multiprox}\ for $g_1\circ F_1$ is equivalent to the moving balls method. 
Since $$f_1(x) + {\nabla f_1(x)^T(y-x)} + \frac{L_1}{2}\|y - x\|^2 = f_1(y),\ \forall x,y\in\mathbb{R}^n,$$ where $L_1=2$ is the Lipschitz constant of the gradient of $f_1$, it follows that for any $x\in\mathbb{R}^n$ it holds $$\underset{\text{gradient projection method}}{\underbrace {\underset{y\in\mathbb{R}^n}{{\mathrm{argmin}}} \bigg(\frac{1}{2} \left\| y - x + \frac{\nabla f(x)}{L}\right\|^2 + h(y)\bigg ) }}= \underset{\text{moving balls method}}{\underbrace { \underset{y\in\mathbb{R}^n}{{\mathrm{argmin}}} \bigg(\frac{1}{2} \left\| y - x + \frac{\nabla f(x)}{L}\right\|^2 + h_1(f_1(y))\bigg) }}.$$ Thus, if the initial point is the same, the sequence generated by the moving balls method is the same as that of the gradient projection method. However, considering the third item of Remark \ref{r}, the best estimate our analysis can provide for the moving balls method is \begin{eqnarray} \label{EQ:complexMBBis} & & (f+h_1\circ f_1)(x_k) - (f+h_1\circ f_1)(x_*) \\ & \leq & {\frac{\underset{i=1,\ldots,k}{\mathrm{max}} \big\{L + 2 \mathrm{min}\{ \nu_1\in\mathbb{R}: (1,\nu_1) \in \mathcal{V}(x_{i-1},x_i)\}\big\}}{2 k} \|x_0 - x_*\|^2,\ \forall k \geq 1.}\nonumber \end{eqnarray} This is different from the complexity of the gradient projection method in (\ref{EQ:complexFB}). Indeed the {infimum over} the variable $\nu_1$ appearing in the numerator is nonnegative. {Furthermore it is nonzero in many situations because otherwise the constraints would never be binding. As an example, one can consider a linear objective function for which the infimum in the numerator in (\ref{EQ:complexMBBis}) is strictly positive}. Observe that the numerator in (\ref{EQ:complexMBBis}) is then strictly greater than $L$, which is the classical constant attached to the projected gradient method, even though both algorithms are actually the same. This highlights the second item of Remark \ref{r} on the dependence of the estimate on the choice of equivalent composite models. } \end{remark} \section{Backtracking and linesearch}\label{s:search} In practice, the collection $\bm{L}$ of Lipschitz constants may not be known or efficiently computable. Lipschitz continuity might not even be global. To handle these fundamental cases, we now equip the algorithmic scheme (\ref{Numerical_Scheme}) with a linesearch procedure (see e.g., \cite{nocedal,Beck:2009}). First let us define a search space for our steps\footnote{Actually the inverse of our steps.} \begin{eqnarray} \Gamma:=\Big\{ ({\alpha}_1,\ldots,{\alpha}_m)\in\mathbb{R}^m_+ :\: {\alpha}_i = 0\ \text{if} \ L_i = 0,\ {\alpha_i > 0\ \text{if}\ L_i > 0},\ i=1,\ldots,m\Big\}, \nonumber \end{eqnarray} and for every $\bm{\alpha}\in\Gamma$ we set $$H_{{\bm{\alpha}}}(x,y):= F(x) + \nabla F(x)(y-x) + \frac{{\bm{\alpha}}}{2}\|y - x\|^2,\ \forall (x,y)\in\mathbb{R}^n\times\mathbb{R}^n.$$ In order to design an algorithm with this larger family of surrogates, we need a stronger version of Assumption \ref{AS:existence_sub}. \begin{assumption} \label{AS:existence_sub3} For any $x\in F^{-1}(\mathrm{dom}\, g)$ and every ${\bm{\alpha}}\in\Gamma$, the function $g\circ H_{{\bm{\alpha}}}(x,\cdot)$ has a minimizer.
\end{assumption} \noindent For every $x\in F^{-1}(\mathrm{dom}\, g)$, the basic subproblem we shall use is defined for any ${\bm{\alpha}}\in\Gamma$, \begin{eqnarray} \big(\mathcal{P}_{{\bm{\alpha}},x}\big):\ \ \ \ p_{{\bm{\alpha}}}(x):= \underset{y\in\mathbb{R}^n}{\mathrm{argmin}}\ \ g\circ H_{{\bm{\alpha}}}(x,y).\nonumber \end{eqnarray} The {\rm Multiprox}\ algorithm with backtracking {step sizes} is defined as: \begin{mdframed}[style=MyFrame] \begin{center} {\bf Multiproximal method with backtracking {step sizes} } \end{center} \noindent Take $\ x_0 \in F^{-1}(\mathrm{dom}\, g),\ \bm{\alpha}_0\in \Gamma\ \text{and}\ \eta>1$. Then iterate for $k \in \mathbb{N}$: \begin{align} \label{Numerical_Scheme_Back} \left. \begin{array}{ll} step\ 1.&\text{set } \tilde{\bm{\alpha}}=\bm{\alpha}_k,\, \tilde{x} \in p_{\bm{\tilde{\alpha}}}(x_k).\\ step\ 2.& \text{while } {\text{the inequality}\ F(\tilde{x}) \leq H_{\tilde{\bm{\alpha}}}(x_k, \tilde{x})\ \text{is not satisfied}}:\\ & \begin{array}{lll} &\text{for}&i=1,\ldots,m:\\ &&\text{if } f_i(\tilde{x}) > f_i(x_k) + {\nabla f_i(x_k)^T(\tilde{x} - x_k)} + \frac{ {\tilde{\alpha}_i}}{2}\|\tilde{x} - x_k\|^2:\\ &&\quad\text{set } {{\tilde{\alpha}}_i \leftarrow \eta {\tilde{\alpha}}_i}\\ &\text{set }&\tilde{x} \in p_{\bm{\tilde{\alpha}}}(x_k). \end{array}\\ &\text{set } \bm{\alpha}_{k+1}=\tilde{\bm{\alpha}}.\\ step \ 3.&\text{if}\ x_{k}\in p_{\bm{\alpha}_{k+1}}(x_{k}):\\ &\quad x_{k+1} = x_{k}\\ &\text{else :}\\ &\quad x_{k+1}=\tilde{x} \end{array} \right\rbrace \end{align} \end{mdframed} \begin{remark}[A finite while-loop]\label{RE:backtracking_Lk}{\rm (a) One needs to make sure that the scheme is well defined and that each while-loop stops after finitely many tries. Under Assumption \ref{AS:convex}, it is naturally the case. To see this, let $(\bm{\alpha}_k)_{k\in\mathbb{N}}$ be a sequence generated by the scheme in (\ref{Numerical_Scheme_Back}), and let $\bm{L}\in\mathbb{R}^m_+$ be the collection of Lipschitz constants associated to $\nabla F$. Then, for every integer $k$, the following bound obviously holds for each $i=1,\ldots,m$: $$(\bm{\alpha}_{k})_i\leq \max\left\{\eta L_i, (\bm{\alpha}_{0})_i \right\}.$$ (b) More importantly we shall also see that local Lipschitz continuity and coercivity also ensure that the while-loop is finite, see Theorem~\ref{t:gen} below.} \end{remark} \noindent Arguments similar to those of Subsection~\ref{s:cr} allow us to derive chain rules and to eventually consider the following sets (note that {a proposition equivalent to Proposition \ref{PR:qualifSub}} holds for the problem $\big(\mathcal{P}_{{\bm{\alpha}},x}\big)$). \begin{definition}[Lagrange multipliers of the subproblems] Given any point $x\in F^{-1}(\mathrm{dom}\, g)$, any ${\bm{\alpha}}\in\Gamma$ and any $y\in p_{{\bm{\alpha}}}(x)$, we set $$\mathcal{V}_{{\bm{\alpha}}}(x,y): = \{\nu\in\partial g(H_{{\bm{\alpha}}}(x,y)): {\nabla _y H_{{\bm{\alpha}}}(x,y)^T} \nu = 0\}.$$ \end{definition} \noindent We are now able to extend Theorem \ref{TH:nonincreasing} to a larger setting: Lipschitz constants do exist but they are unknown to the user. \begin{theorem}[{\rm Multiprox}\ with backtracking] \label{TH:nonincreasingBack} Suppose that {Assumptions} \ref{AS:convex}, \ref{AS:effectiveness}, \ref{AS:Qualification} and \ref{AS:existence_sub3} hold. Let $(x_k)_{k\in\mathbb{N}}$ and $(\bm{\alpha}_k)_{k\in\mathbb{N}}$ be any sequences generated by the algorithmic scheme in (\ref{Numerical_Scheme_Back}).
Then, for every $x_*\in S$ and any sequence $(\nu_k)_{k\in\mathbb{N}}$ such that $\nu_j\in \mathcal{V}_{\bm{\alpha}_j}(x_{j-1},x_j)$ for all $j\geq 1$, one has, \begin{eqnarray} g\circ F(x_k) - g\circ F(x_*) \leq \frac{ \underset{j=1,\ldots,k}{\mathrm{max}} \{ \hat{\bm{\alpha}}^T \nu_j\}}{2k}\|x_0 - x_*\|^2,\ \forall k\geq 1, \label{EQ:complexity_back} \end{eqnarray} where $\hat{\bm{\alpha}} \in \Gamma$ is the vector {whose} entries are given by the upper bound in Remark \ref{RE:backtracking_Lk}. Furthermore, the sequence $(x_k)_{k\in \mathbb{N}}$ converges to a point in $S$. \end{theorem} \begin{proof} The proof is in line with that of Theorem \ref{TH:nonincreasing}. We observe first that we have a descent method. Let $(x_k)_{k\in\mathbb{N}}$ and $(\bm{\alpha}_k)_{k\in\mathbb{N}}$ be sequences generated by the algorithmic scheme in (\ref{Numerical_Scheme_Back}). Fix $k \geq 1$. One has $F(x_k) \leq H_{\bm{\alpha}_k}(x_{k-1},x_k)$ as the while-loop stops after finitely many steps (see Remark \ref{RE:backtracking_Lk}). Using the monotonicity properties of $g$, one deduces that $g(F(x_k)) \leq g(H_{\bm{\alpha}_k}(x_{k-1},x_k))$. Considering that $x_k$ is a minimizer of $g(H_{\bm{\alpha}_k}(x_{k-1},\cdot))$, it follows further that $g(H_{\bm{\alpha}_k}(x_{k-1},x_k)) \leq g( H_{\bm{\alpha}_k}(x_{k-1},x_{k-1}) )= g( F(x_{k-1}))$, indicating that the algorithm is a descent method. For any $\nu_k$ in $\mathcal{V}_{\bm{\alpha}_k}(x_{k-1},x_k)$ and any $x_*$ in $S$, one has \begin{eqnarray} && g\circ F(x_k) - g\circ F(x_*)\nonumber\\ &\overset{(a)}{\leq}& g\circ H_{\bm{\alpha}_k}(x_{k-1},x_k) - g\circ F(x_*)\nonumber\\ &\overset{(b)}{\leq} &[H_{\bm{\alpha}_k}(x_{k-1},x_k) - F(x_*) ]^T \nu_k\nonumber\\ &\overset{(c)}{=}& H_{\bm{\alpha}_k}(x_{k-1}, {x_*})^T \nu_k - F(x_*)^T \nu_k - \frac{\bm{\alpha}_k^T \nu_k}{2} \|x_k - x_*\|^2\nonumber\\ &\overset{(d)}{=}& [F(x_{k-1}) + \nabla F(x_{k-1})(x_* - x_{k-1})]^T \nu_k - F(x_*)^T \nu_k + \frac{\bm{\alpha}_k^T \nu_k}{2} (\|x_{k-1} - x_*\|^2 - \|x_k - x_*\|^2)\nonumber\\ &\overset{(e)}{\leq}& \frac{\bm{\alpha}_k^T \nu_k}{2} (\|x_{k-1} - x_*\|^2 - \|x_k - x_*\|^2),\nonumber \end{eqnarray} where: (a) is obtained by combining the descent property with $F(x_k) \leq H_{\bm{\alpha}_k}(x_{k-1},x_k)$, (b) follows from the convexity of $g$ and the fact that $\nu_k\in \mathcal{V}_{\bm{\alpha}_k}(x_{k-1},x_k)$, (c) is obtained by combining $[\nabla_y H_{\bm{\alpha}_k}(x_{k-1},x_k)]^T \nu_k = 0$ with equation (\ref{EQ:strong_a}), (d) is an explicit expansion of $H_{\bm{\alpha}_k}(x_{k-1},x_*)^T\nu_k$, and finally, (e) stems from the property that the $i$-th coordinate of $\nu_k$ is nonnegative if $L_i>0$ (cf. Assumption \ref{AS:convex}), the construction of $\Gamma$ and the coordinatewise convexity of $F$. We therefore obtain: \begin{eqnarray} g(F(x_k)) - g(F(x_*)) \leq \frac{C_k}{2k} \|x_0 - x_*\|^2,\ \ k\geq1, \end{eqnarray} where $\displaystyle C_k=\max_{j=1,\ldots,k} \min \{ \bm{\alpha}_j^T \nu: \nu\in \mathcal{V}_{\bm{\alpha}_j}(x_{j-1},x_j) \}$ is a bounded sequence. The convergence of the sequence $(x_k)_{k\in \mathbb{N}}$ follows by similar arguments as in the proof of Theorem \ref{TH:nonincreasing}.\end{proof} We now consider the fundamental and ubiquitous case when global Lipschitz constants do not exist but exist locally. A typical case of such a situation is given by problems involving $C^2$ mappings $F$ (local Lipschitz continuity follows indeed from a direct application of the mean value theorem to the $\nabla f_i$, $i=1,\ldots,m$).
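For instance, take $m=1$, $g=\mathrm{id}$ and $f_1(x)=\frac{1}{4}\|x\|^4$ on $\mathbb{R}^n$: the function $f_1$ is convex and $C^2$, with $$\nabla f_1(x)=\|x\|^2x \quad\text{and}\quad \|\nabla^2 f_1(x)\|_{\rm op}=3\|x\|^2,$$ so that $\nabla f_1$ is Lipschitz continuous on every ball $B(0,r)$ (with constant $3r^2$) but not on the whole space. No finite vector $\bm{L}$ is then admissible in (\ref{Numerical_Scheme}), whereas the backtracking scheme (\ref{Numerical_Scheme_Back}) remains applicable, as Theorem \ref{t:gen} below shows.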
To compensate for this lack of global Lipschitz continuity, we make the following coercivity assumption: \begin{assumption} \label{AS:existence_sub4} The problem $g\circ F$ is coercive and, for any $x\in F^{-1}(\mathrm{dom}\, g)$, the function $g\circ H_{{\bm{\alpha_0}}}(x,\cdot)$ is coercive. \end{assumption} \begin{theorem}[Convergence without any global Lipschitz condition]\label{t:gen} Suppose that Assumptions~\ref{AS:effectiveness}, \ref{AS:Qualification} and \ref{AS:existence_sub4} hold and that Assumption \ref{AS:convex} is weakened in the sense that we assume that each $\nabla f_i$ is only locally Lipschitz continuous. Then any sequence $(x_k)_{k\in \mathbb{N}}$ generated by (\ref{Numerical_Scheme_Back}) converges to a single point in the solution set $S$. \end{theorem} \begin{proof} The first and main point to be observed is that the while-loop is finite. Fix $k\geq 1$. Observe that since $\tilde \alpha\geq \alpha_0$ any of the sublevel sets of the subproblems in the while loop are contained in a common compact set $K:=\{y\in \mathbb{R}^n:g\circ H_{\bm{\alpha}_0}(x_k,y)\leq g\circ F(x_k))\}$. Since each gradient is Lipschitz continuous globally on $K$ the while loop ends in finitely many runs. Using arguments similar to those previously given allows to prove that the sequence $x_k$ is a descent sequence. As a consequence it evolves in set $\{y\in \mathbb{R}^n:g(F(y))\leq g(F(x_0))\}$ which is a compact set by coercivity. Since on this set $\nabla F$ is Lipschitz continuous, usual arguments {apply}. One can thus prove that the sequence converges as previously. \end{proof} \begin{remark} {\rm ``Complexity estimates" could also be derived but the constants appearing in the estimates would be unknown a priori. In a numerical perspective, they could be updated online and used to forecast the efficiency of the method step after step.} \end{remark} \section{Numerical experiments}\label{SE:numerical} In this section, we present numerical experiments which illustrate the performance of the proposed algorithm on some collections of min-max problems. The proposed method is compared to both the Proximal Gauss-Newton method (PGNM) and an accelerated variant \cite[Algorithm 8]{Drusvyatskiy:20162}, which we refer to as APGNM. In this setting the convergence rate for both {\rm Multiprox}\ and PGNM is of the order $O(1/k)$, while it is of the order of $O(1/k^2)$ for APGNM (see \cite{Drusvyatskiy:20162} and the discussion in Subsection \ref{SE:MINMAX}). \subsection{Problem class and algorithms} We consider the problem of minimizing the maximum of finitely many quadratic convex functions: \begin{equation} \begin{array}{rl} \underset{x\in\mathbb{R}^n}{\mathrm{min}}\ &\mathrm{max}\{f_1(x),\ldots,f_m(x)\},\label{EQ:minmax_p}\\ \text{with}\ &f_i\colon x \to x^T Q_i x + b_i^T x + c_i \quad \text{for} \ i=1,\ldots,m, \end{array} \end{equation} where $Q_1, \ldots, Q_m$ are real $n\times n$ positive semidefinite matrices, $b_1$, $\ldots, b_m$ are in $\mathbb{R}^n$, and $c_1,\ldots,c_m$ in $\mathbb{R}$. Choosing $g$ as the coordinatewise maximum and each coordinate of $F$ to be one of the $f_i$, $i=1,\ldots,m$, one sees that this problem is of the form of $\big(\mathcal{P}\big)$. Assumptions~\ref{AS:convex} and~\ref{AS:Qualification} are satisfied and we assume that the data of the problem (\ref{EQ:minmax_p}) are chosen so that Assumption~\ref{AS:effectiveness} holds. It is easily checked that Assumption \ref{AS:existence_sub} holds. 
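For such quadratic components, the relevant constants are explicit: with each $Q_i$ symmetric (as in the data generation process below), one has $$\nabla f_i(x) = 2Q_i x + b_i, \qquad \|\nabla f_i(x)-\nabla f_i(y)\| \leq 2\lambda_{\max}(Q_i)\,\|x-y\|,\quad \forall x,y\in\mathbb{R}^n,$$ so that $L_i = 2\lambda_{\max}(Q_i)$ is an admissible smoothness modulus, with $L_i=0$ exactly when $Q_i=0$, that is, when $f_i$ is affine.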
Under these conditions, all the results established in Section \ref{SE:problem} and Section \ref{SE:complexity} hold for problem (\ref{EQ:minmax_p}). We consider two choices for the vector $\bm L$, $${\bm L}_{_{\rm\small GN}}=\big(\tilde L,\ldots, \tilde L\big)\text{ and }{\bm{L}}_{_{\rm MProx}}=(L_1,\ldots,L_m)$$ where $\tilde{{L}}:= \mathrm{max}\{L_1,\ldots,L_m\}$. {By replacing $\bm{L}$ in {\rm Multiprox}\ with ${\bm L}_{_{\rm\small GN}}$}, we recover the PGNM method. Similarly, by {replacing $\bm{L}$ in {\rm Multiprox}\ with} ${\bm{L}}_{_{\rm MProx}}$, we have the {\rm Multiprox}\ algorithm. In both cases, the subproblem $(\mathcal{P}_x)$ writes \begin{align} x_{k+1}\in \underset{y\in\mathbb{R}^n, s \in \mathbb{R}}{\mathrm{argmin}}\quad&s\ \nonumber\\ \mathrm{s.t.}\quad&s\geq f_i(x_k) + \langle \nabla f_i(x_k),y - x_k\rangle + \frac{L'_i}{2}\|y - x_k\|^2,\ i=1,\ldots,m, \label{EQ:equiv_sub_proposed} \end{align} the choice of $L'_i$, $i = 1, \ldots, m$, being the only difference between the two methods. Problem (\ref{EQ:equiv_sub_proposed}) is a quadratically constrained quadratic program (QCQP) which can be solved by appropriate solvers. As for APGNM, the accelerated variant of PGNM, it requires iteratively solving QCQP subproblems analogous to (\ref{EQ:equiv_sub_proposed}); their solutions can be computed similarly. However, APGNM requires solving two QCQP subproblems at each step (we refer the interested readers to \cite{Drusvyatskiy:20162} for more details on this algorithm). \subsection{Performance comparison} In this section, we provide a numerical comparison of the performance of {\rm Multiprox}, PGNM, and APGNM on problem (\ref{EQ:minmax_p}), with synthetic, randomly generated data. First, let us explain the data generation process. The matrix $Q_m$ is set to zero so that the resulting component $f_m$ is actually an affine function (and $L_m$ is set to $0$). The rest of the problem data is generated as follows. For $i = 1,\ldots,m-1$ we generate positive semidefinite matrices $Q_i = Y_i D_i Y_i$, where $Y_i$ are random Householder orthogonal matrices $$Y_i = I_n - 2\frac{\omega_i \omega_i^T}{\|\omega_i\|^2},$$ where $I_n$ is the $n\times n$ identity matrix and the coordinates of $\omega_i\in\mathbb{R}^n$ are chosen as {independent} realizations of a unit Gaussian. For $i = 1,\ldots,m-1$, $D_i$ is chosen as an $n\times n$ diagonal matrix whose diagonal elements are obtained by randomly shuffling the set $$\{i\times10^{\frac{j}{n-1}}, j = 1,\ldots,n\}.$$ For $i=1,\ldots,m$, the coordinates of $b_i$ are chosen as independent realizations of a Gaussian $N(0,(1/3)^2)$. Finally, we choose $$c_i = 10^{2i/m}.$$ Note that for each $i=1,\ldots,m$, the Lipschitz constant of $\nabla f_i$ is twice the maximum eigenvalue of $Q_i$, i.e., $$L_i = 2 \times {\mathrm{max}}\{\text{eigenvalue}(Q_i)\}.$$ To iteratively solve the surrogate QCQP subproblems, we used the {\it MOSEK} solver \cite{Mosek:2016} interfaced with the \textit{YALMIP} toolbox \cite{Lofberg:2004} in Matlab. We fix the number of variables to $n =100$ and choose the origin as the initial point. For each $m\in\{5,10,15,20,25,30\}$, we repeat the random data generation process 20 times and run the three algorithms to solve each corresponding problem.
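For the reader's convenience, the following short Python/NumPy listing sketches this data-generation process; it is only an illustrative transcription of the description above (the function names are ours), the experiments themselves having been run in Matlab through MOSEK/YALMIP as explained.
\begin{verbatim}
import numpy as np

def generate_minmax_data(n=100, m=5, seed=None):
    # Illustrative sketch of the random data generation described above.
    rng = np.random.default_rng(seed)
    Q, b, c, L = [], [], [], []
    for i in range(1, m + 1):
        if i == m:
            Qi = np.zeros((n, n))               # Q_m = 0: f_m is affine, L_m = 0
        else:
            w = rng.standard_normal(n)
            Y = np.eye(n) - 2.0 * np.outer(w, w) / (w @ w)  # Householder matrix
            d = np.array([i * 10.0 ** (j / (n - 1)) for j in range(1, n + 1)])
            rng.shuffle(d)                      # shuffled diagonal of D_i
            Qi = Y @ np.diag(d) @ Y             # Y symmetric orthogonal => Q_i PSD
        Q.append(Qi)
        b.append(rng.normal(0.0, 1.0 / 3.0, size=n))  # coords i.i.d. N(0, (1/3)^2)
        c.append(10.0 ** (2.0 * i / m))
        L.append(2.0 * np.linalg.eigvalsh(Qi).max())  # Lipschitz constant of grad f_i
    return Q, b, c, L

def objective(x, Q, b, c):
    # Evaluates max_i f_i(x) with f_i(x) = x^T Q_i x + b_i^T x + c_i.
    return max(x @ Qi @ x + bi @ x + ci for Qi, bi, ci in zip(Q, b, c))

if __name__ == "__main__":
    Q, b, c, L = generate_minmax_data(n=100, m=5, seed=0)
    x0 = np.zeros(100)                          # the origin is the initial point
    print(objective(x0, Q, b, c), max(L))
\end{verbatim}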
As a measure of performance, and in order to compare efficiency across different random runs of the data generation process, we use the following normalized suboptimality gap $$\left(\frac{g\circ F(x_{k}) - g\circ F(x_*)} { g\circ F(x_{0}) - g\circ F(x_*)}\right)_{k\in \mathbb{N}}.$$ Statistics for the three algorithms are presented in Table \ref{TAB:comparison}. It is clear from these figures that {\rm Multiprox}\ is dramatically faster than both PGNM and APGNM. Furthermore, {\rm Multiprox}\ seems to suffer less from increasing values of $m$. Finally, {APGNM} is less consistent in terms of performance.
\begin{table}[htp]
\caption{Comparisons of {\rm Multiprox}, PGNM, and APGNM in terms of normalized suboptimality gap in percentage at the 10th and 20th iterations. The means and standard deviations represent the central tendency and dispersion over 20 random runs of the data generation process.}
\begin{center}
\begin{tabular}{c|cc||cc||cc}
\hline
\multicolumn{7}{c}{}\\
\multicolumn{7}{c}{$100\times${\large $\frac{g\circ F(x_{k}) - g\circ F(x_*)}{ g\circ F(x_{0}) - g\circ F(x_*)}$}}\\
\multicolumn{7}{c}{}\\
\cline{1-7}
& \multicolumn{2}{c||}{ } & \multicolumn{2}{c||}{} & \multicolumn{2}{c}{} \\
& \multicolumn{2}{c||}{{\rm Multiprox}\ } & \multicolumn{2}{c||}{PGNM } & \multicolumn{2}{c}{APGNM} \\
& \multicolumn{2}{c||}{ } & \multicolumn{2}{c||}{} & \multicolumn{2}{c}{} \\
\cline{2-7}
& \multicolumn{2}{c||}{$k=10$} & \multicolumn{2}{c||}{$k=10$} & \multicolumn{2}{c}{$k=10$} \\\hline
$m$ &mean & std & mean & std & mean & std \\\hline
5 & 0.48 & 1.12$\times$10$^{-1}$ &94.44&5.01$\times$10$^{-1}$&88.15&1.07 \\
10 & 0.46& 9.81$\times$10$^{-2}$ &95.20&3.34$\times$10$^{-1}$&89.78&7.14$\times$10$^{-1}$\\
15 & 0.47 & 8.49$\times$10$^{-2}$&95.62&2.84$\times$10$^{-1}$&90.66&6.05$\times$10$^{-1}$\\
20 & 0.47& 8.73$\times$10$^{-2}$ &95.72&3.71$\times$10$^{-1}$&90.88&7.92$\times$10$^{-1}$\\
25 & 0.44& 1.07$\times$10$^{-1}$&95.79&4.04$\times$10$^{-1}$&91.03&8.64$\times$10$^{-1}$\\
30 & 0.46 & 8.01$\times$10$^{-2}$&95.82&4.33$\times$10$^{-1}$&91.09&9.26$\times$10$^{-1}$\\
\hline
& \multicolumn{2}{c||}{$k=20$} & \multicolumn{2}{c||}{$k=20$} & \multicolumn{2}{c}{$k=20$} \\\hline
$m$ & mean & std & mean & std & mean &std\\\hline
5 & 2.24$\times$10$^{-2}$ & 5.70$\times$10$^{-3}$&88.88&1.00 &62.34 & 3.40 \\
10& 2.24$\times$10$^{-2}$ & 5.86$\times$10$^{-3}$&90.40&6.68$\times$10$^{-1}$&67.53 & 2.27 \\
15& 2.15$\times$10$^{-2}$ & 5.43$\times$10$^{-3}$&91.24&5.67$\times$10$^{-1}$&70.35 & 1.92 \\
20& 2.13$\times$10$^{-2}$ & 4.67$\times$10$^{-3}$&91.44&7.42$\times$10$^{-1}$&71.03 & 2.51 \\
25& 2.08$\times$10$^{-2}$ & 6.45$\times$10$^{-3}$&91.57&8.10$\times$10$^{-1}$&71.49 & 2.74 \\
30& 2.07$\times$10$^{-2}$ & 5.46$\times$10$^{-3}$&91.64&8.67$\times$10$^{-1}$&71.71 & 2.94 \\
\hline
\end{tabular}
\end{center}
\label{TAB:comparison}
\end{table}
\begin{figure}
\caption{Comparisons of {\rm Multiprox}, PGNM, and APGNM in terms of normalized suboptimality gap.}
\label{Fig:com_gauss_proposed}
\end{figure}
\begin{figure}
\caption{Same as Figure \ref{Fig:com_gauss_proposed}, with the suboptimality gap displayed in logarithmic scale.}
\label{Fig:com_gauss_proposed1}
\end{figure}
A graphical view of the same results is presented in Figure \ref{Fig:com_gauss_proposed} and a log-scale view is given in Figure \ref{Fig:com_gauss_proposed1}. One can see from Figures \ref{Fig:com_gauss_proposed} and \ref{Fig:com_gauss_proposed1} that the sequences for {\rm Multiprox}\ and PGNM are nonincreasing as predicted by our theory.
Note that the sequence generated by APGNM is not necessarily nonincreasing, although all the sequences represented in Figure \ref{Fig:com_gauss_proposed} are strictly decreasing. It is clear that the decreasing slopes for PGNM and APGNM are much smaller than that of {\rm Multiprox}, coinciding with the data in Table \ref{TAB:comparison}. This situation is actually not surprising since the data of the problem were chosen in a way ensuring that the $L_i$'s ($i=1,\ldots,m$) can take very different values as in many ill posed problems. The strength of {\rm Multiprox}\ is that $\bm{L}$ can be chosen appropriately to adapt to this disparity. On the other hand the use of a single parameter (as in PGNM or APGNM) yields smaller steps and thus slower convergence. { \section{Proof of Proposition \ref{PR:chainRule}} \label{SE:appendixProofChainRule} Let us recall a qualification condition from \cite{Rockafellar:1998}. Given any $x\in F^{-1}(\mathrm{dom}\, g)$, let \begin{equation}J(x,\cdot):\left\{ \begin{array}{lll} \mathbb{R}^n & \rightarrow & \mathbb{R}^{m}\\ \omega &\mapsto & F({x}) + \nabla F({x}) \omega, \label{EQ:linearized_mapping} \end{array}\right. \end{equation} be the linearized mapping of $F$ at $x$. Proposition~\ref{PR:chainRule} follows immediately from the classical chain rule given in \cite[Theorem 10.6]{Rockafellar:1998} and the following proposition. \begin{proposition}[Two equivalent qualification conditions]\label{TH:QC} Under Assumptions \ref{AS:convex} on $F$ and $g$, Assumption~\ref{AS:Qualification} holds if and only if \begin{center} {\bf (QC) } $\mathrm{dom}\, g$ cannot be separated from $J(x,\mathbb{R}^n)$ for any $x\in F^{-1}(\mathrm{dom}\, g)$. \end{center} \end{proposition} \begin{proof} We first suppose that {\bf (QC)} is true. We begin with a remark showing that this implies that $\mathrm{dom}\, g$ is not empty. Let $A$ and $B$ be two subsets of $\mathbb{R}^m$. The logical negation of the sentence ``$A$ and $B$ can be separated'' can be written as follows: for all $a$ in $\mathbb{R}^m$ and for all $b \in \mathbb{R}$, there exists $y \in A$ such that \begin{align*} {a^Ty} + b > 0, \end{align*} or, there exists $z \in B$ such that \begin{align*} {a^T z} + b< 0. \end{align*} In particular if $A$ and $B$ cannot be separated, then either $A$ or $B$ is not empty. Note that if $\mathrm{dom}\, g$ is empty, then so is the set $\{J(x, \mathbb{R}^n),\, x \in F^{-1}(\mathrm{dom}\, g)\}$. Hence {\bf (QC)} actually implies that $\mathrm{dom}\, g$ is not empty. Pick a point $\tilde{x}\in F^{-1}(\mathrm{dom}\, g)$. If $F(\tilde{x})\in \text{int dom}(g)$, there is nothing to prove, so we may suppose that $F(\tilde{x})\in \text{bd dom}\,g$. {If we had $[\text{int dom}(g)] \cap J(\tilde{x},\mathbb{R}^n) = \emptyset$, then, $\mathrm{dom}\, g$ and $J(\tilde{x},\mathbb{R}^n)$ could be separated by Hahn-Banach theorem contradicting {\bf (QC)}. } Hence, there exists $\tilde{\omega}\in\mathbb{R}^n$ such that $J(\tilde{x},\tilde{\omega})\in \text{int dom}(g)$. {Note that, since $F(\tilde{x})\in \mathrm{dom}(g)$ and $g$ is nondecreasing with respect to each argument, it follows $F(\tilde{x}) - d\in \mathrm{dom}(g)$ for any $d\in(\mathbb{R}_+^*)^m$, indicating that $\mathrm{int}(\mathrm{dom}(g))\neq \emptyset$}. Since $\mathrm{dom}\, g$ is convex, a classical result yields \begin{eqnarray} J(\tilde{x},\lambda \tilde{\omega}) \in \text{int dom}(g),\ \forall\,\lambda\in(0,1]. 
\label{EQ:QC:int} \end{eqnarray} On the other hand $F$ is differentiable thus \begin{eqnarray} \|F(\tilde{x} + \lambda \tilde{\omega}) - J(\tilde{x},\lambda \tilde{\omega})\| = o(\lambda), \label{EQ:Taylor} \end{eqnarray} where $o(\lambda)/\lambda$ tends to zero as $\lambda$ goes to zero. After these basic observations, let us recall an important property of the signed distance (see \cite[p. 154]{Hiriart-Urruty:1993}). Let $D\subset \mathbb{R}^m$ be a nonempty closed convex set. Then, the function \begin{eqnarray}\label{LE:Concavity} D\rightarrow \mathbb{R}_+,\ z\mapsto \mathrm{dist}(z,\mathrm{bd}(D)),\nonumber \end{eqnarray} is concave. Using this concavity property for $D=\mathrm{dom}\, g$ and the fact that $F(\tilde{x}) = J(\tilde{x},0)$, it holds that \begin{eqnarray} &&\lambda \text{dist}[J(\tilde{x},\tilde{\omega}),\text{bd}(\mathrm{dom}\, g)] + (1-\lambda) \text{dist}[F(\tilde{x}),\text{bd}(\mathrm{dom}\, g)]\nonumber\\ && \leq \text{dist}[J(\tilde{x},\lambda\tilde{\omega}),\text{bd}(\mathrm{dom}\, g)],\ \forall\ \lambda\in[0,1].\nonumber \end{eqnarray} Since $\text{dist}[F(\tilde{x}),\text{bd}(\mathrm{dom}\, g)] = 0$, it follows that \begin{eqnarray} \lambda \text{dist}[J(\tilde{x},\tilde{\omega}),\text{bd}(\mathrm{dom}\, g)] \leq \text{dist}[J(\tilde{x},\lambda\tilde{\omega}),\text{bd}(\mathrm{dom}\, g)],\ \lambda\in[0,1]. \label{EQ:Taylor1} \end{eqnarray} Note that $ \text{dist}[J(\tilde{x},\tilde{\omega}),\text{bd}(\mathrm{dom}\, g)] > 0$ since $J(\tilde{x},\tilde{\omega})\in \text{int dom}(g)$. Hence, equation (\ref{EQ:Taylor}) indicates that there exists $\epsilon > 0$ such that for any $0 < \lambda \leq \epsilon$, we have $$\| F(\tilde{x} + \lambda \tilde{\omega}) - J(\tilde{x},\lambda\tilde{\omega}) \| < \lambda \text{dist}[J(\tilde{x},\tilde{\omega}),\text{bd}(\mathrm{dom}\, g)] .$$ Substituting this inequality into equation (\ref{EQ:Taylor1}) indicates that for any $0 < \lambda \leq \epsilon$, we have $$\|F(\tilde{x} + \lambda \tilde{\omega}) - J(\tilde{x},\lambda\tilde{\omega}) \| < \text{dist}[ J(\tilde{x},\lambda\tilde{\omega}),\text{bd}(\mathrm{dom}\, g)].$$ Using equation (\ref{EQ:QC:int}), for any $0 <\lambda \leq \epsilon$, we have $F(\tilde{x} + \lambda \tilde{\omega})\in \text{int dom}(g)$. This shows the first implication of the equivalence. {Let us prove the reverse implication by contraposition and }assume that {\bf (QC)} does not hold, that is, there exists a point $\tilde{x}\in F^{-1}(\mathrm{dom}\, g)$ such that $\mathrm{dom}\, g$ can be separated from $J(\tilde{x},\mathbb{R}^n)$. In this case, there exists $a \neq 0 \in \mathbb{R}^{m}$ and $b\in\mathbb{R}$ such that \begin{eqnarray} \begin{cases} a^T z + b \leq 0,\ \forall z\in \mathrm{dom}\, g,\\ a^T J(\tilde{x},\omega) + b \geq 0,\ \forall \omega \in \mathbb{R}^n. \end{cases} \label{EQ:separating} \end{eqnarray} Since $J(\tilde{x},0) = F(\tilde{x}) \in \mathrm{dom}\, g$, it follows \begin{eqnarray} a^TJ(\tilde{x},0) + b = 0. \label{EQ:boundary_point} \end{eqnarray} By the coordinatewise convexity of $F$, for every $i\in\{1,\ldots,m\}$ one has $$ f_i(y) \begin{cases} \geq f_i(x) + {\nabla f_i(x)^T (y-x)},\ L_i >0,\\ = f_i(x) + {\nabla f_i(x)^T(y-x)},\ L_i =0, \end{cases}\ \ \forall (x,y)\in\mathbb{R}^n\times\mathbb{R}^n. 
$$ We thus have the componentwise inequality $$J(\tilde{x},0) - [F(\tilde{x}+\omega) - J(\tilde{x},\omega) ] \leq F(\tilde x).$$ The monotonicity properties of $g$ implies thus that $$J(\tilde{x},0) - [F(\tilde{x}+\omega) - J(\tilde{x},\omega) ] \in \mathrm{dom}\, g,\ \forall \omega\in\mathbb{R}^n.$$ As a result, combining equation (\ref{EQ:separating}) with equation (\ref{EQ:boundary_point}), one has $$a^{T} \big\{ J(\tilde{x},0) - [F(\tilde{x}+\omega) - J(\tilde{x},\omega) ]\} + b \leq a^T J(\tilde{x},0) + b,\ \forall \omega\in\mathbb{R}^n,$$ which reduces to $a^T [F(\tilde{x}+\omega) - J(\tilde{x},\omega) ] \geq 0,$ $\forall \omega\in\mathbb{R}^n$. Hence, for any $\omega\in\mathbb{R}^n$ one has \begin{eqnarray} a^T F(\tilde{x}+\omega ) + b &=& a^T J(\tilde{x},\omega) + b + a^T(F(\tilde{x}+\omega) - J(\tilde{x},\omega) )\nonumber\\ &\geq& a^T J(\tilde{x},\omega) + b\;\geq\; 0,\nonumber \end{eqnarray} where for the last inequality, equation (\ref{EQ:separating}) is used. {{This inequality combined with the fact $a^T z + b < 0,\, \forall z\in \mathrm{int\ dom}\ g$ obtained according to the first item of equation (\ref{EQ:separating}),} shows that $F(\tilde{x}+\omega)\not\in \text{int\,dom}\ g$, for all $\omega\in\mathbb{R}^n$, and thus $F^{-1}(\text{int\,dom}\,g)= \emptyset$, that is Assumption \ref{AS:Qualification} does not hold.} This provides the reverse implication and the proof is complete. \end{proof} \section{Proof of Lemma \ref{TH:explicitBound}} \label{SE:appendixExplicitBound} In this section, we present an explicit estimate of the condition number appearing in our complexity result. Let us first introduce a notation. For any $D\subset\mathbb{R}^m$, nonempty closed set, we define a signed distance function as \begin{eqnarray} D\rightarrow \mathbb{R},\ z\mapsto \mathrm{sdist}= \begin{cases} \mathrm{dist}(z,\mathrm{bd}(D)),\ \text{if}\ z\in \mathrm{int}(D),\\ -\mathrm{dist}(z,\mathrm{bd}(D)),\ \text{otherwise}. \end{cases} \label{EQ:signed_distance} \end{eqnarray} It is worth recalling that the signed distance function is concave (see \cite[p. 154]{Hiriart-Urruty:1993}). We begin with a lemma which describes a monotonicity property of the signed distance function. \begin{lemma}\label{LE:g_bd} Given any $z\in \mathrm{dom}\, g $ and any $d=(d_1,\ldots,d_m)\in \mathbb{R}_+^m$ with $d_i =0$ if $L_i = 0$, {if $\mathrm{bd\ dom}g \neq \emptyset$,} one has \begin{eqnarray} \mathrm{sdist}(z,\mathrm{bd}\ \mathrm{dom}\, g ) \geq \mathrm{sdist}(z+ d, \mathrm{bd}\ \mathrm{dom}\, g ). \label{EQ:ineq_sdist} \end{eqnarray} \end{lemma} \begin{proof} Fix an arbitrary $z\in \mathrm{dom}\, g $ and an {arbitrary} $d=(d_1,\ldots,d_m)\in\mathbb{R}_+^m$ such that $d_i = 0$ whenever $L_i = 0$ in the sequel of the proof. If $z+d \not\in \mathrm{dom}\ g$, equation (\ref{EQ:ineq_sdist}) holds true by the definition in equation (\ref{EQ:signed_distance}). From now on, we suppose $z+d \in \mathrm{dom}\ g$. Let $\bar{z} \in \mathrm{bd}\ \mathrm{dom}\, g $ be a point such that $$\bar{z} \in \underset{\hat{z}\in \mathrm{bd}\ \mathrm{dom}\, g }{\mathrm{argmin}}\ \ \|z - \hat{z}\|.$$ Then, one has \begin{eqnarray} \|z - \bar{z}\| = \mathrm{sdist}(z,\mathrm{bd}\ \mathrm{dom}\, g ). \label{EQ:z_barz} \end{eqnarray} Since $\bar{z}$ lies on the boundary of $\mathrm{dom}\, g $, it follows that $\bar{z}+d \not\in \mathrm{int}\ \mathrm{dom}\, g $ because of the monotonicity property of $g$ in Assumption \ref{AS:convex}. 
{Hence, by the definition of $\mathrm{sdist}$ in equation (\ref{EQ:signed_distance}), we have} $$\mathrm{sdist}(z+d,\mathrm{bd}\ \mathrm{dom}\, g ) = \mathrm{dist}(z+d,\mathbb{R}^m \setminus\mathrm{int}\ \mathrm{dom}\, g ) \leq \|(z + d) - (\bar{z} + d)\| = \|z - \bar{z}\|.$$ Combining this inequality with equation (\ref{EQ:z_barz}) completes the proof. \end{proof} The following lemma shows that it is possible to construct a convex combination between the current $x$ and the Slater point $\bar{x}$ given in Assumption \ref{AS:Qualification} which will be a Slater point for the current sub-problem with a uniform control over the ``degree'' of qualification. \begin{lemma} \label{LE:lower_bound_Hxw} Let $\bar{x}$ be given as in Assumption \ref{AS:Qualification} and $x \in F^{-1}(\mathrm{dom}\ g)$ {and assume that $\mathrm{bd\ dom}g \neq \emptyset$}. Set $$\gamma(\bar{x},x):= \mathrm{min}\left\{1,\frac{\mathrm{sdist}[F(\bar{x}),\mathrm{bd}\ \mathrm{dom}\, g ]}{2\{\mathrm{sdist}[F(\bar{x}),\mathrm{bd}\ \mathrm{dom}\, g ]- \mathrm{sdist}[F(\bar{x})+\bm{L} \|x - \bar{x}\|^2/2,\mathrm{bd}\ \mathrm{dom}\, g ]\}}\right\}.$$ Then, \begin{align} \mathrm{sdist}[H(x,x+\gamma(\bar{x},x)(\bar{x} - x)),\mathrm{bd}\ \mathrm{dom}\, g ]&\geq\frac{\mathrm{sdist}[F(\bar{x}),\mathrm{bd}\ \mathrm{dom}\, g ]^2 }{4 \big\{\mathrm{sdist}[F(\bar{x}),\mathrm{bd}\ \mathrm{dom}\, g ] + \|\bm{L}\| \|x - \bar{x}\|^2/2\big\}}. \label{EQ:lemma_low_bound_h} \end{align} \end{lemma} \begin{proof} Fix an arbitary $x\in K$. Then, for any $t\in (0,1]$ one has \begin{eqnarray} H(x,x+t(\bar{x} - x)) &=& F(x) + t\nabla F(x)(\bar{x} - x) + \frac{\bm{L}}{2}\|\bar{x} - x\|^2 t^2\nonumber\\ &\leq& (1-t) F(x) + t F(\bar{x}) + \frac{\bm{L}}{2}\|x - \bar{x}\|^2 t^2, \label{EQ:ineq1_t} \end{eqnarray} where the last inequality is obtained by applying the coordinatewise convexity of $F$. Therefore, for any $t\in(0,1]$, we have \begin{eqnarray} && \mathrm{sdist}[H(x,x+t(\bar{x} - x)),\mathrm{bd}\ \mathrm{dom}\, g ]\nonumber\\ &\overset{(a)}{\geq}& \mathrm{sdist}[(1-t) F(x) + t F(\bar{x}) + \frac{\bm{L}}{2} \|x - \bar{x}\|^2 t^2,\mathrm{bd}\ \mathrm{dom}\, g ]\nonumber\\ &\overset{(b)}{\geq}& \mathrm{sdist} [F(x),\mathrm{bd}\ \mathrm{dom}\, g ](1-t) + \mathrm{sdist} [F(\bar{x}) + t \frac{\bm{L}}{2}\|x - \bar{x}\|^2 ,\mathrm{bd}\ \mathrm{dom}\, g ] t \nonumber\\ &\overset{(c)}{\geq} & \mathrm{sdist} [(1-t) F(\bar{x}) + t ( F(\bar{x}) + \frac{\bm{L}}{2} \|x - \bar{x}\|^2 ),\mathrm{bd}\ \mathrm{dom}\, g ] t \nonumber\\ &\overset{(d)}{\geq}& \mathrm{sdist}[F(\bar{x}),\mathrm{bd}\ \mathrm{dom}\, g ] t(1-t) + \mathrm{sdist}[ F(\bar{x}) + \frac{\bm{L}}{2} \|x - \bar{x}\|^2,\mathrm{bd}\ \mathrm{dom}\, g ] t^2 \nonumber\\ &=:& \delta(t), \label{EQ:ineq_delta} \end{eqnarray} where for (a) we combine equation (\ref{EQ:ineq1_t}) with Lemma \ref{LE:g_bd}, for (b) we use the concavity of the signed distance function (see \cite[p. 154]{Hiriart-Urruty:1993}), for (c) we use the fact that $(1-t) \mathrm{sdist} [F(x),\mathrm{bd}\ \mathrm{dom}\, g ] \geq 0$, and for (d) we use the concavity of the signed distance function again. It is easy to verify that $\gamma(\bar{x},x)\in(0,1]$ is the maximizer of $\delta(t)$ over the interval $(0,1]$. We now consider the following inequality. 
\begin{eqnarray} \label{EQ:ineqDistance} \mathrm{sdist}[F(\bar{x})+\bm{L} \|x - \bar{x}\|^2/2,\mathrm{bd}\ \mathrm{dom}\, g ] \geq - \frac{\|\bm{L}\|}{2}\|x - \bar{x}\|^2, \end{eqnarray} Inequality (\ref{EQ:ineqDistance}) holds true: indeed, either $F(\bar{x})+\bm{L} \|x - \bar{x}\|^2/2 \in \mathrm{dom}\ g$ and the result is trivial or otherwise, the result holds by the definition of the distance as an infimum. If $\gamma(\bar{x},x) = 1$, by its definition, one immediately has \begin{eqnarray} \frac{\mathrm{sdist}[F(\bar{x}),\mathrm{bd}\ \mathrm{dom}\, g ]}{2\{\mathrm{sdist}[F(\bar{x}),\mathrm{bd}\ \mathrm{dom}\, g ]- \mathrm{sdist}[F(\bar{x})+\bm{L} \|x - \bar{x}\|^2/2,\mathrm{bd}\ \mathrm{dom}\, g ]\}} \geq 1,\nonumber \end{eqnarray} which implies \begin{eqnarray} \mathrm{sdist}[F(\bar{x})+\bm{L} \|x - \bar{x}\|^2/2,\mathrm{bd}\ \mathrm{dom}\, g ] \geq \frac{\mathrm{sdist}[F(\bar{x}),\mathrm{bd}\ \mathrm{dom}\, g ]}{2}. \label{EQ:beta_alpha2} \end{eqnarray} Substituting $\gamma(\bar{x},x) = 1$ into equation (\ref{EQ:ineq_delta}) yields \begin{eqnarray} \delta(\gamma(\bar{x},x)) &=& \mathrm{sdist}[ F(\bar{x}) + \frac{\bm{L}}{2} \|x - \bar{x}\|^2,\mathrm{bd}\ \mathrm{dom}\, g ] \nonumber\\ &\geq & \frac{\mathrm{sdist}[F(\bar{x}),\mathrm{bd}\ \mathrm{dom}\, g ]}{2}\nonumber\\ &\geq& \frac{\mathrm{sdist}[F(\bar{x}),\mathrm{bd}\ \mathrm{dom}\, g ]}{2} \times \frac{1}{2}\frac{\mathrm{sdist}[F(\bar{x}),\mathrm{bd}\ \mathrm{dom}\, g ]}{\mathrm{sdist}[F(\bar{x}),\mathrm{bd}\ \mathrm{dom}\, g ] + \|\bm{L}\|\|x - \bar{x}\|^2/2}\nonumber\\ &=& \frac{\mathrm{sdist}[F(\bar{x}),\mathrm{bd}\ \mathrm{dom}\, g ]^2 }{4 \big\{\mathrm{sdist}[F(\bar{x}),\mathrm{bd}\ \mathrm{dom}\, g ] + \|\bm{L}\| \|x - \bar{x}\|^2/2\big\}}, \label{EQ:gamma=1} \end{eqnarray} where the {first} inequality is obtained by considering equation (\ref{EQ:beta_alpha2}). As a result, equation (\ref{EQ:lemma_low_bound_h}) holds true if $\gamma(\bar{x},x) = 1$. From now on, let us consider $\gamma(\bar{x},x)<1$. In this case, one has \begin{eqnarray} \gamma(\bar{x},x) = \frac{\mathrm{sdist}[F(\bar{x}),\mathrm{bd}\ \mathrm{dom}\, g ]}{2\{\mathrm{sdist}[F(\bar{x}),\mathrm{bd}\ \mathrm{dom}\, g ]- \mathrm{sdist}[F(\bar{x})+\bm{L} \|x - \bar{x}\|^2/2,\mathrm{bd}\ \mathrm{dom}\, g ]\}}.\nonumber \end{eqnarray} Substituting into equation (\ref{EQ:ineq_delta}) and using (\ref{EQ:ineqDistance}) leads to \begin{eqnarray} \delta(\gamma(\bar{x},x)) &=& \frac{\mathrm{sdist}[F(\bar{x}),\mathrm{bd}\ \mathrm{dom}\, g ]^2 }{4 \big\{\mathrm{sdist}[F(\bar{x}),\mathrm{bd}\ \mathrm{dom}\, g ] -\mathrm{sdist}[F(\bar{x})+\bm{L} \|x - \bar{x}\|^2/2,\mathrm{bd}\ \mathrm{dom}\, g ]\big\}}\nonumber\\ &\geq& \frac{\mathrm{sdist}[F(\bar{x}),\mathrm{bd}\ \mathrm{dom}\, g ]^2 }{4 \big\{\mathrm{sdist}[F(\bar{x}),\mathrm{bd}\ \mathrm{dom}\, g ] + \|\bm{L}\| \|x - \bar{x}\|^2/2\big\}}.\nonumber \end{eqnarray} Eventually, combining this equation with (\ref{EQ:ineq_delta}) and (\ref{EQ:gamma=1}) completes the proof. \end{proof} We are now ready to describe the proof of Lemma \ref{TH:explicitBound} \begin{proof}[Proof of Lemma \ref{TH:explicitBound}] (i) As the function $g$ is $L_g$ Lipschitz continuous on its domain, an immediate application of the Cauchy-Schwartz inequality leads to $\bm{L}^T\nu \leq L_g \|\bm{L}\|$ (see also Section \ref{SE:LIP}). 
(ii) {The claim is trivial if $\mathrm{bd\ dom}(g) = \emptyset$, hence we will assume that it is not so that we can use Lemmas 5 and 6.} Set $w = x + \gamma(\bar{x},x)(\bar{x} - x)$ with $\gamma(\bar{x},x)$ given as in Lemma \ref{LE:lower_bound_Hxw}. By Lemma \ref{LE:lower_bound_Hxw}, one has $w \in \mathrm{dom}(g\circ H(x,\cdot))$. Then, one obtains \begin{eqnarray} \frac{\bm{L}^T\nu}{2}\|w - y\|^2 &=& [H(x,w) - H(x,y)]^T\nu\nonumber\\ &\leq& g\circ H(x,w) - g\circ H(x,y)\nonumber\\ &\leq& L_g \|H(x,w) - H(x,y)\|, \label{EQ:Lipschitz_g_Lg} \end{eqnarray} where the equality follows from equation (\ref{EQ:strong_a}), the first inequality is obtained by the convexity of $g$, and the last inequality is due to the assumption that $g$ is $L_g$ Lipschitz continuous on its domain. On the other hand, a direct calculation yields \begin{align} \|H(x,w) - H(x,y)\| =&\ \|\nabla F(x) (w - y) + \frac{\bm{L}}{2}\|w - x\|^2 - \frac{\bm{L}}{2}\|y - x\|^2 \|\nonumber\\ =&\ \|\nabla F(x) (w - y) + \frac{\bm{L}}{2} ( \|w - y\|^2 + 2(w-y)^T(y-x) ) \|\nonumber\\ \leq&\ \|\nabla F(x)\|_{\rm op}\|w-y\| + \frac{\|\bm{L}\|}{2} \|w - y\|^2 + \|\bm{L}\|\|w-y\|\|y - x\|\nonumber\\ =&\ \|w - y\| \left[ \|\nabla F(x)\|_{\rm op} + \frac{\|\bm{L}\|}{2}\|w - y\| + \|\bm{L}\|\| y - x\|\right]\nonumber\\ \leq&\ \|w - y\| \left[ \|\nabla F(x)\|_{\rm op} + \frac{\|\bm{L}\|}{2}\|x - y\| + \gamma(\bar{x},x)\frac{\|\bm{L}\|}{2}\|\bar{x} - x\| + \|\bm{L}\|\| y - x\|\right]\nonumber\\ \leq&\ \|w - y\| \left[ \|\nabla F(x)\|_{\rm op} + \frac{3\|\bm{L}\|}{2}\|x - y\| + \frac{\|\bm{L}\|}{2}\|\bar{x} - x\| \right]\nonumber \end{align} Substituting this inequality into equation (\ref{EQ:Lipschitz_g_Lg}) leads to \begin{eqnarray} \frac{\bm{L}^T \nu}{2}\|H(x,w) - H(x,y)\| \leq L_g \left[\|\nabla F(x)\|_{\rm op} + \frac{3\|\bm{L}\|}{2}\|x - y\| + \frac{\|\bm{L}\|}{2}\|\bar{x} - x\| \right]^2. \end{eqnarray} As $H(x,y)$ is on the boundary of $\mathrm{dom}\, g $, it follows that \begin{eqnarray} \|H(x,w) - H(x,y)\| \geq \mathrm{sdist}[H(x,w),\mathrm{bd}\ \mathrm{dom}\, g ]. \end{eqnarray} Combining this inequality with Lemma \ref{LE:lower_bound_Hxw} eventually completes the proof. \end{proof} \end{document}
\begin{document} \title {\bf Conservative solution of the Camassa-Holm Equation on the real line} \author {Massimo Fonte \\ \small{S.I.S.S.A., Via Beirut, 2/4} \\ \small{34014 Trieste, Italy} \\ \small{e-mail: \tt {[email protected]}} } \date{} \maketitle \begin{abstract} In this paper we construct a global, continuous flow of solutions to the Camassa-Holm equation on the space $H^1(\R)$. In a previous paper \cite{BF2}, A. Bressan and the author constructed spatially periodic solutions, whereas in this paper the solutions are defined on the whole real line. We introduce a distance functional, defined in terms of an optimal transportation problem, which allows us to study the continuous dependence w.r.t.\ the initial data with a certain decay at infinity. \end{abstract} \section{Introduction} In \cite{CH} the authors present a nonlinear partial differential equation which describes the behaviour of shallow water waves as a completely integrable Hamiltonian system \begin{equation} \label{kdiv0} u_t+2\kappa u_x - u_{xxt}+3u u_x = 2 u_x u_{xx}+ u u_{xxx}, \end{equation} where $u$ is the fluid velocity in the $x$ direction and $\kappa$ is a constant related to the critical shallow-water wave speed. For the physical description of this equation, we refer to \cite{CH, Jo, CM1, CM2} and to the bibliographic references of \cite{BF2}. In the present paper we study the limiting case $\kappa= 0$, in which, starting from a smooth initial datum, the solution can develop into a peaked profile with one or more cusps, the so-called multi-peakon. As an example, a simulation in \cite{CHH} shows that, starting from a parabolic initial datum in a periodic domain, the system evolves into a train of positive peakons. The equation can be written in nonlocal form as a scalar conservation law with an integro-differential source term: \begin{equation} u_t+\left(\frac{u^2}{2}\right)_x+P^u_x=0 \label{prblCH} \end{equation} where $P^u$ is defined in terms of a convolution: $$ P^u\doteq \frac 12 e^{-|x|}*\left(u^2+\frac{u_x^2}2\right)\,. $$ Observe that the function $\frac 12 e^{-|x|}$ is the distributional solution of the equation $$ \left(Id-\partial_{xx}\right)f=\delta_0 $$ where $\delta_0$ is the Dirac measure centered at the origin. A \emph{multi-peakon} is a function of the form $$ u(x)=\sum_{i=1}^N p_i e^{-|x-q_i|}\,. $$ It is well known (see \cite{CE1,CHL,HR}) that if such a function evolves according to (\ref{prblCH}), it remains of the same shape and, as long as they are well defined, the coefficients $p_i$, $q_i$ solve the system of ODEs $$ \left\{ \begin{array}{l} \displaystyle \dot q_i=\sum_{j=1}^N p_j e^{-|q_i-q_j|}\,, \\ \displaystyle \dot p_i=p_i\sum_{j=1}^N p_j \,{\rm sign}\,(q_i-q_j)e^{-|q_i-q_j|}, \end{array} \right.\qquad i=1,\dots, N. $$ In the smooth case, differentiating (\ref{prblCH}) w.r.t.\ $x$ and multiplying by $u_x$ we obtain the conservation law with source term \begin{equation} \label{derevolution} \left(\frac{u_x^2}{2}\right)_t+\left(\frac{u u_x^2}{2}-\frac {u^3}{3}\right)_x=-u_x\,P^u. \end{equation} The previous equation, together with the \emph{Camassa-Holm} equation (\ref{prblCH}), shows that the total energy $$ E\doteq\int_{\R}[u^2(t,x)+u_x^2(t,x)]dx $$ is a conserved quantity as long as the solution remains regular. Constantin and Escher \cite[Theorem 4.1]{CE1} showed that, even if the initial data is sufficiently regular, blow-up of the gradient $u_x$ can occur in finite time whenever the initial data has a negative slope.
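As an elementary illustration of the system above (a standard check, not needed in the sequel), consider a single peakon, $N=1$ and $u(t,x)=p_1(t)\, e^{-|x-q_1(t)|}$: with the convention ${\rm sign}(0)=0$, the system reduces to $\dot q_1=p_1$, $\dot p_1=0$, so the peakon travels with constant speed $c=p_1$ and one recovers the traveling wave $u(t,x)=c\,e^{-|x-ct|}$. Moreover, a direct integration gives $$ E=\int_{\R}\left[c^2 e^{-2|x-ct|}+c^2 e^{-2|x-ct|}\right]dx=2c^2, $$ which is manifestly constant in time; in particular, no blow-up can occur in this case.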
In Section \ref{ODEsystem} we implement a technique based on appropriate rescaled variables in order to resolve the singularities which occur at the times when the gradient $u_x$ blows up. The new system of ODEs can be solved in a unique way in a neighborhood of the blow-up time and the solution turns out to preserve the energy $E$ also after this time. Motivated by the existence of the multi-peakon solutions, whose decay at infinity is like $e^{-|x|}$, we introduce the space $X_\alpha$ of $H^1$ functions with exponential decay: for $0<\alpha<1$ we define \begin{equation} X_\alpha= \left\{u\in H^1(\R) \,:\, C^{\alpha,u}\doteq \int_\R \left[u^2(x)+u_x^2(x)\right]e^{\alpha|x|}\,dx<\infty\right\}\,. \label{decayHP} \end{equation} In this space we define a distance that is related to an optimal transportation problem (see \cite{V}). Following the theory developed by Bressan and Constantin \cite{BC1} for the \emph{Hunter-Saxton} equation, and by Bressan and Fonte \cite{BF2} for the periodic solutions of the Camassa-Holm equation, the topology induced by the functional $J$ constructed in Section \ref{metricsection} turns out to be weaker than the $H^1$ topology, but useful because with this metric we can prove the stability of the multi-peakon solutions w.r.t.\ the initial conditions, as we will show in Section \ref{stabcor}. \section{Multi-peakon solutions}\label{ODEsystem} In this section we shall construct a solution of the Camassa-Holm equation starting from an initial condition $u_0^\varepsilon\in X_\alpha$ of the form $$ u_0^\varepsilon(x)=\sum_{j=1}^{N_\varepsilon} p_j e^{-|x-q_j|}\,. $$ The motivation for this choice comes from the form of the traveling wave solutions (see \cite{CH} and \cite[Example 5.2]{CE1}). Looking for solutions of equation (\ref{kdiv0}) in the traveling wave form $u(t,x)=U(x-ct)$, with a function $U$ that vanishes at infinity, the limit $\kappa\to 0$ leads to $U(x-ct)= c\, e^{-|x-ct|}$. The evolution of an initial datum like $u_0^\varepsilon$ then remains of the same shape $$ u^\varepsilon(t,x)=\sum_{j=1}^{N_\varepsilon} p_j(t) e^{-|x-q_j(t)|}\,. $$ As long as the classical solution of the problem \begin{equation} \label{HSYS} \left\{ \begin{array}{l} \displaystyle \dot q_i=\sum_{j=1}^N p_j e^{-|q_i-q_j|}\,, \\ \displaystyle \dot p_i=p_i\sum_{j=1}^N p_j \,{\rm sign}\,(q_i-q_j)e^{-|q_i-q_j|} \end{array} \right. \end{equation} exists, the solution of this system gives the coefficients ${\bf p }(t)=(p_1,\dots, p_{N_\varepsilon})$ and ${\bf q}(t)=(q_1,\dots, q_{N_\varepsilon})$ for the solution $u^\varepsilon(t,x)$ to the Camassa-Holm equation. Observe that the previous system can be viewed as a Hamiltonian system with Hamiltonian function $H({\bf q},{\bf p })=\frac 12~\sum_{i,j}p_i p_je^{-|q_i-q_j|}$. In \cite{HR} the authors prove the existence of a global multi-peakon solution when the strengths $p_i$ are positive for all $i=1,\dots, N_\varepsilon$, and the convergence of the sequence of multi-peakon solutions. If $u_0$ is an initial datum such that the distribution $u_0-{u_0}_{xx}$ is a \emph{positive} Radon measure, there exists a sequence of multi-peakons that converges in $L^\infty(\R,H^1_{loc}(\R))$. In this case the crucial fact is that no interaction between the peakons occurs, so the gradient remains bounded.
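For a concrete instance of the last condition, note that, by the identity $\left(Id-\partial_{xx}\right)\frac 12 e^{-|x|}=\delta_0$ recalled in the Introduction, a multi-peakon $u=\sum_{i} p_i e^{-|x-q_i|}$ satisfies, in the sense of distributions, $$ u-u_{xx}=2\sum_{i} p_i\,\delta_{q_i}\,, $$ so that $u-u_{xx}$ is a positive Radon measure precisely when all the strengths $p_i$ are nonnegative.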
However, a general initial data contains both positive and negative peakons, as in the so called peakon-antipeakon interaction: one positive peakon with strength $p$, centered in $-q$, moves forward and one negative anti-peakon in $q$, with strength $p$ moves backward. The evolution of the system produces the overlapping of the two peakons at finite time $t=\tau$, so that $q\to 0$. The conservation of the energy $E=H({\bf q}(t),{\bf p }(t))$ yields \begin{equation} \label{zetalimit} E=\lim_{t\to \tau^-} p^2(1-e^{-2|q|})\,. \end{equation} and then the quantity $p$ diverges in finite time. At the point $(\tau,0)$ occurs thus a singularity for the solution $u$. To extend the solution also after the interaction time with a solution which conserves the energy $E$ we can think that at the interaction point emerge an antipeakon/peakon couple, the first, negative, moving backward and the second, positive, moving forward with coefficients $(-q, -p)$ and $(q,p)$. According to the conservation of the energy, the choice of $q$ and $p$ must satisfy (\ref{zetalimit}) as $t\to \tau^+$. It yields a change of variables which resolves the singularity at $(\tau,0)$ $$ \zeta\doteq p^2 q\qquad \omega\doteq\arctan (p) $$ with this choice, the Hamiltonian system leads to the ODE $$ \frac d{dt} \left( \begin{array}{c} \zeta \\ \omega \end{array} \right) =f(\zeta,\omega)\,, \qquad \left( \begin{array}{c} \zeta \\ \omega \end{array} \right)(\tau) = \left( \begin{array}{c} \frac E 2 \\ \frac \pi 2 \end{array} \right) $$ with $$ f(\zeta,\omega)= \left( \begin{array}{c} \left[1- e^{-\zeta \cot^2(\omega)}-\zeta\cot^2(\omega)e^{-\zeta \cot^2(\omega)}\right]\tan^3(\omega) \\ \sin^2(\omega)e^{-\zeta \cot^2(\omega)} \end{array} \right) $$ and $f$ is a Lipschitz vector field in a neighborhood of the point $( \frac E 2,\frac \pi 2)$. The solution $(\zeta(t),\omega(t))$ of this problem provides then the unique couple $(q(t),p(t))$ which coincides with the classical solution of the Hamiltonian system for $t<\tau$ and extends it for $t\geq \tau$. This example suggests the way to construct the multi-peakon solution whenever an interaction between peakons occurs (see also \cite{BF2} for an ``energetic'' motivation). Suppose that two or more peakons with strengths $p_1,\dots,p_k$ annihilate at the position $\bar q$ at time $\tau$ and produce a blow up of the gradient $u_x$. The conservation of the energy yields that there exists and is positive the limit $$ e_{\tau}\doteq \lim_{t\to \tau^-} \int_{\xi^-(t)}^{\xi^+(t)} u_x^2(t,x)\,dx $$ where $\xi^-$ and $\xi^+$ are the smallest and the largest characteristic curve passing through the point $(\tau,\bar q)$. Assume that after the interaction appear two peakons with strengths $p_1,\,p_2$ and placed at the position $q_1,\,q_2$. Let consider the change of variables $$ z=p_2+p_1\quad w=2\arctan (p_2-p_1)\quad \eta = q_2+q_1 \quad \zeta=(p_2-p_1)^2(q_2-q_1), $$ then the system (\ref{HSYS}) turns out to be $$ \begin{array}{rl} \dot w=&\!\! -\left[\sin (w) \cosh \Big(\frac {\zeta}{2\tan^2 (w/2)}\Big) +2z \sinh\Big(\frac {\zeta}{2\tan^2 (w/2)}\Big) \right]\cdot\sum\limits_{j\geq k+1} p_j e^{-q_j+ \eta /2} \\ &+[z^2 \cos^2(w/2) - \sin^2 (w/2)]e^{-\frac{\zeta}{\tan^2 (w/2)}} \\ \dot z= &\!\!-\left[\frac 12\sin(w) \sinh \Big(\frac {\zeta}{2\tan^2 (w/2)}\Big)+ z\cosh \Big(\frac {\zeta}{2\tan^2 (w/2)}\Big) \right]\cdot \sum\limits_{j\geq k+1} p_j e^{-q_j} \\ \dot \eta= &\!\! 
z[1+ e^{-\frac{\zeta}{\tan^2 (w/2)}}]+ 2 \cosh \Big(\frac {\zeta}{2\tan^2 (w/2)}\Big)\cdot \sum\limits_{j\geq k+1} p_j e^{-q_j} \\ \dot \zeta = &\!\!\frac{z^2\zeta}{\tan (w/2)}e^{-\frac{\zeta}{\tan^2 (w/2)}}-\tan^3(w/2) \left(1-e^{-\frac{\zeta}{\tan^2 (w/2)}}- \frac{\zeta}{\tan^2 (w/2)}\right) + \\ & + 2\zeta\left[ \sinh \Big(\frac{\zeta}{2\tan^2 (w/2)}\Big)\cdot \left(\frac {\tan^2 (w/2)}{\zeta}- \frac{z}{\tan (w/2)} \right)\right.+ \\ &\qquad\qquad\qquad\qquad\qquad\qquad \left. -\cosh\Big( \frac {\zeta}{2\tan^2 (w/2)}\Big)\right]\cdot \sum\limits_{j\geq k+1}p_j e^{-q_j+\eta/2} \end{array} $$ $$ \begin{array}{rl} \dot p_i = & p_i e^{-q_i+\eta/2}\left[ z \cosh\Big( \frac {\zeta}{2\tan^2 (w/2)}\Big)+ \tan(w/2) \sinh\Big(\frac {\zeta}{2\tan^2 (w/2)}\Big)\right] + \\ &+\sum\limits_{j\geq k+1} p_i p_j \,{\rm sign}\,(q_i-q_j) e^{-|q_i-q_j|} \\ \dot q_i= & e^{-q_i+\eta/2}\left[ z \cosh\Big( \frac {\zeta}{2\tan^2 (w/2)}\Big)+ \tan(w/2) \sinh\Big(\frac {\zeta}{2\tan^2 (w/2)}\Big)\right]+ \\&+\sum\limits_{j\geq k+1} p_j e^{-|q_i-q_j|} \end{array} $$ which is a system of ODEs with locally Lipschitz continuous right-hand side that can be extended smoothly also at the value $w=\pi$. The initial data becomes $$ z(\tau)=\lim_{t\to \tau^-} \sum_{i=1}^k p_i(t)\qquad w(\tau)=\pi \qquad \eta(\tau)=2\bar q\qquad \zeta(\tau)= e_\tau $$ $$ p_i(\tau)=\lim_{t\to \tau^-}p_i(t)\qquad q_i(\tau)=\lim_{t\to \tau^-} q_i(t) \qquad i=k+1,\dots, N $$ Thus there exists a unique solution of such a system, which provides a multi-peakon solution defined on some interval $[\tau,\tau'[$, up to the next interaction time. As in the Corollary of \cite[Section 7]{BF2}, once we prove that the Camassa-Holm equation is time reversible and that the solutions of a Cauchy problem are unique, we obtain that the maximal number of interacting peakons is actually $k=2$, one with positive strength and the other with negative strength. \section{A priori bounds} This section is devoted to the study of some useful properties of the functions $u\in X_\alpha$. We start by recalling an estimate for the $L^\infty-$norm of the $\H$ functions. We have \begin{equation} \label{sincichinequality} \|f^2\|_{L^\infty}\leq \|f\|^2_{\H} \end{equation} This estimate gives us a bound on the $L^\infty-$norm of the conservative solution $u$ of (\ref{prblCH}); in fact, the conservation of the energy yields \begin{equation} \label{normainfty} \|u(t)\|_{L^\infty}\leq \|u(t)\|_\H=\sqrt{E^{\bar u}} \qquad \mbox{for every $t\geq 0$.} \end{equation} Another useful fact concerns the behaviour of the functions $u\in X_\alpha$ at infinity. If we denote by $C^{\alpha,u}$ the constant $\int_\R (u^2+ u_x^2)e^{\alpha|x|} \,dx$, then \begin{equation} \label{uinfty} \sup\limits_{x\in \R}\,u^2(x)e^{ \alpha |x|}\leq 2 C^{\alpha,u} \,. \end{equation} Indeed, the function $$ f(x)\doteq u(x)e^{\frac\alpha 2|x| } $$ belongs to $H^1(\R)$, moreover $$ f_x=u_x e^{\frac\alpha 2|x| }+\frac \alpha 2 \,{\rm sign}\,(x)ue^{\frac\alpha 2|x| } $$ and then, by using (\ref{sincichinequality}), we have $$ |f(x)|^2\leq\|f\|_\H^2\leq \int_\R [2u_x^2+ (1+\alpha)u^2]e^{\alpha |y|}\,dy \leq 2 C^{\alpha,u}\,. $$ Now we study the behaviour at infinity of the multi-peakon solutions of the Camassa-Holm equation. \begin{lemma}({\rm A-priori} bounds) Let $u$ be a multi-peakon solution to (\ref{prblCH}), with initial data $\bar u$ that satisfies (\ref{decayHP}).
Then there exists a continuous function $C(t)$, which depends on $C^{\alpha, \bar u}$ and on the energy $E^{\bar u}$, such that for every $t\in \R$ \begin{eqnarray} \displaystyle &&\int_{\R}[u^2(t,x)+u_x^2(t,x)]e^{\alpha|x|}\,dx\leq C(t)\,, \label{weightedenergy} \\&& \displaystyle \label{Pinfty} \sup\limits_{x\in \R}\, \big|P_x^u(t,x)\big| e^{ \alpha |x|}\leq C(t)\,, \\&& \|u_x\|_{L^1(\R)}\leq C(t) \end{eqnarray} \end{lemma} \begin{proof} Since $|P_x^u|\leq P^u$, it is sufficient to prove the second inequality with $P^u$. Setting $$ I(t)\doteq\int_{\R}[u^2(t,x)+u_x^2(t,x)]e^{\alpha|x|}\,dx\,, $$ we want to achieve a differential inequality of the kind $$ \frac d{dt}I(t)\leq A+B\cdot I(t)\,, $$ for some constants $A$ and $B$ which depend on the initial data $\bar u$. We start by proving a preliminary estimate for the function $P^u$. By definition \begin{equation} \label{stimaP} \int_{\R}P^u(t,x)e^{\alpha|x|}\,dx= \frac 12 \int_\R e^{\alpha |x|}\,dx \int_\R e^{-|x-y|}\left[u^2(t,y)+\frac{u_x^2(t,y)}{2}\right]dy \end{equation} From this identity we can use Fubini's theorem to switch the order of the two integrals. Hence we compute the following integral \begin{equation} \int_\R e^{\alpha|x|}e^{-|x-y|}\,dx=\frac {2\alpha}{1-\alpha^2}e^{-|y|}+\frac 2{1-\alpha^2}e^{\alpha |y|}\qquad \mbox{for every $y\in\R$}\,. \label{expint} \end{equation} For future use, we observe that the equality (\ref{expint}) holds for $\alpha \in (-1,1)$. Substituting (\ref{expint}) in (\ref{stimaP}) and using the definition of the energy $E^{\bar u}$ we have $$ \int_{\R}P^u(t,x)e^{\alpha|x|}dx\leq \frac{E^{\bar u}}{(1-\alpha^2)} + \frac {1}{1-\alpha^2}\,I(t)\,. $$ Having in mind the previous inequality, we are able to estimate the time derivative of the function $I$. From the equations (\ref{prblCH}) and (\ref{derevolution}) we have $$ \begin{array}{rl} \displaystyle \frac d{dt}I(t)& \displaystyle \!\!=\!\! \int_{\R}\left[2u u_t+ \left(u_x^2\right)_t\right]\,e^{\alpha |x|}\,dx = \int_{\R}\left[ -2u(uu_x+P^u_x)+\frac 23 (u^3)_x- (uu_x^2)_x- 2 u_x P^u\right]\,e^{\alpha |x|}\,dx \\ &\displaystyle \!\!\leq -2\int_\R (u P^u +u u_x^2)_x\,e^{\alpha|x|}\,dx \leq \left. -2u (P^u + u_x^2)\,e^{\alpha|x|}\right|_{-\infty}^{\infty} + 2\alpha \int_\R |u| (P^u+ u_x^2)\,e^{\alpha|x|}\,dx \\ &\displaystyle\!\!\leq \frac{2\alpha\|u\|_{L^\infty}}{1-\alpha^2}\left(E^{\bar u}+ 2 I\right) \leq \frac{2\sqrt{E^{\bar u}}}{1-\alpha^2}\left[E^{\bar u}+ 2 I(t)\right] \end{array} $$ The previous inequality then gives a bound on the function $I$, that is $$ I(t)\leq (C^{\alpha,\bar u}+E^{\bar u}/2)\exp\Big(\frac{4\sqrt{E^{\bar u}}}{1-\alpha^2}\, t \Big)\,. $$ To achieve the estimate (\ref{Pinfty}), set $$ K(t)\doteq \left\|P^u(t,\cdot)e^{\alpha|\cdot|}\right\|_{L^\infty}\,. $$ Proceeding as before, for fixed $x\in \R$ we compute the derivative w.r.t.\ the time $t$ of the function $e^{-|x|}*{u^2_x}$.
$$ \begin{array}{rl} \displaystyle\frac \partial{\partial t} \left(e^{-|x|}*\frac{u^2_x}4\right) &\displaystyle =\frac 14 \frac \partial{\partial t}\int_\R e^{-|x-y|}u_x^2(t,y)\,dy \\ &\displaystyle =\frac 12\int_\R e^{-|x-y|}\left[\Big( \frac {u^3}3- \frac{u u_x^2}2 - u P^u\Big)_x+ u P_x^u\right] dy \\ &\displaystyle\leq \|u\|_{L^\infty} P^u+\frac {\|u\|_{L^\infty}}2 \int_\R e^{-|x-y|}P^u(t,y)\,dy \\ &\displaystyle\leq \|u\|_{L^\infty}P^u(t,x)+\frac{\|u\|_{L^\infty}}2 K(t)\int_\R e^{-\alpha|x|}e^{-|x-y|}\,dy \\ &\leq \|u\|_{L^\infty} P^u(t,x)+\frac 1{1-\alpha^2}e^{-\alpha|x|}\|u\|_{L^{\infty}}K(t) \end{array} $$ In the same way, the derivative of $e^{-|x|}*u^2$ satisfies $$ \begin{array}{rl} \frac \partial{\partial t}\left(e^{-|x|}*\frac{u^2}2\right) &\displaystyle \leq \int_\R e^{-|x-y|}u \left(|P^u_x|+|u u_x|\right)\,dy \\ &\displaystyle \leq\|u\|_{L^\infty} \left( 2 P^u(t,x)+\int_\R e^{-|x-y|}P^u(t,y)\,dy\right) \\ &\displaystyle \leq \|u\|_{L^\infty}\left( 2P^u(t,x)+\frac 1{1-\alpha^2} e^{-\alpha|x|}K(t)\right)\,. \end{array} $$ Multiplying the previous two inequalities by $e^{\alpha|x|}$ and summing, we get $$ \frac d{dt}K(t) \leq \left(3+\frac{ 2}{1-\alpha^2} \right) \sqrt{E^{\bar u}} K(t) $$ which yields (\ref{Pinfty}). To achieve the last inequality, we write \begin{eqnarray*} \int_\R|u_x(y)|\,dy &=&\!\!\!\!\int\limits_{ \{y :|u_x(y)|e^{\alpha|y|}<1 \} } \!\!\!\!|u_x(y)|\,dy +\!\!\!\!\int \limits_{\{y:|u_x(y)|e^{\alpha|y|}\geq 1\}} \!\!\!\!|u_x(y)|\,dy \\ &\leq& \int_\R e^{-\alpha|y|}\,dy+\int_\R u_x^2(y)e^{\alpha|y|}\,dy\leq \frac 2\alpha + I(t) \end{eqnarray*} where the last estimate is given by (\ref{weightedenergy}). \end{proof} \section{Approximation of the initial data} In this section we shall construct an approximation of the initial data by multi-peakon functions. Our aim is to approximate it with a sequence $u_\varepsilon$ which has an exponential decay at infinity, uniformly w.r.t.\ $\varepsilon$. \begin{lemma} Let $f\in X_\alpha$. Then for every $\varepsilon>0$ there exists a multi-peakon function $g$ of the form $$ g(x)=\sum_{i=1}^N p_i e^{-|x-q_i|} $$ such that \begin{eqnarray} &&\|f-g\|_{\H}<\varepsilon \\ &&\int_\R [g^2(x)+{g}_x^2(x)]e^{\alpha|x|}\,dx\leq C_0 \end{eqnarray} for some constant $C_0>0$ which does not depend on $\varepsilon$. \end{lemma} \begin{proof} Let $\rho(x)\in \mathcal C^\infty_0$ be a cut-off function such that \begin{itemize} \item $\rho(x)\geq 0$ \item $\rho(x)=1$ for every $|x|\leq 1$, $\rho(x)=0$ for every $|x|> 2$ \item $\int_\R \rho(x)\,dx =1$ \end{itemize} and $\rho_\varepsilon(x)\doteq \frac 1\varepsilon \rho(\frac x\varepsilon) $ be the associated family of mollifiers. Observe that for every $\varepsilon>0$, $\tilde f(x)\doteq \rho_\varepsilon*f(x)$ is a smooth function which approximates the function $f$ in the $H^1-$norm \begin{equation} \|f-\tilde f\|_{\H}<C\varepsilon \label{apprcinf} \end{equation} moreover it belongs to $X_\alpha$, indeed \begin{eqnarray*} \int_\R [\tilde f^2(x)+{\tilde f}_x^2(x)]e^{\alpha|x|}\,dx &\leq& \int_\R \left[\int_\R (f^2(y)+f_x^2(y)) \rho_\varepsilon(x-y) \,dy\right]e^{\alpha|x|}\,dx \\ &\leq& \int_\R [f^2(y)+f_x^2(y)]\int_\R \rho_\varepsilon(x-y) e^{\alpha|x|}\,dx \, dy \\ &\leq& \int_\R [f^2(y)+f_x^2(y)] C_0 e^{\alpha|y|}\,dy =C_0 C^{\alpha,f}<\infty \end{eqnarray*} and $C_0$ is a constant which does not depend on $\varepsilon$.
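A possible explicit choice of $C_0$, for illustration: since $\rho_\varepsilon$ is supported in $[-2\varepsilon,2\varepsilon]$ and $\int_\R\rho_\varepsilon=1$, on the support of $\rho_\varepsilon(x-y)$ one has $|x|\leq|y|+2\varepsilon$, hence $$ \int_\R \rho_\varepsilon(x-y)\, e^{\alpha|x|}\,dx\;\leq\; e^{2\alpha\varepsilon}\,e^{\alpha|y|}\;\leq\; e^{2\alpha}\,e^{\alpha|y|} \qquad\mbox{for every $\varepsilon\leq 1$,} $$ so that one may take $C_0=e^{2\alpha}$.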
From the weighted estimate above we can assert that for every $R>0$ one has $\|\tilde f\|_{H^1(\R\setminus [-R,R])}\leq C_0 C_\alpha e^{-\alpha R}$ uniformly in $\varepsilon>0$. We can thus choose $R_\varepsilon$ large enough to have \begin{equation} \|\tilde f\|_{H^1(\R\setminus [-R_\varepsilon,R_\varepsilon])}<\varepsilon/2. \label{fuori} \end{equation} In the space $H^1([-R_\varepsilon,R_\varepsilon])$ we can approximate $\tilde f$ with a multi-peakon function. By using the identity $$ \frac 12\left(I-\frac{\partial^2}{{\partial x}^2}\right)e^{-|x|}=\delta_0 $$ the function $\tilde f$ can be rewritten in convolution form $$ \tilde f= e^{-|x|}*\left(\frac{\tilde f - \tilde f_{xx}}2 \right)=\int_\R e^{-|x-y|}\cdot \frac{\tilde f(y) - \tilde f_{xx}(y)}2 \,dy\,. $$ In the interval $[-R_\varepsilon, R_\varepsilon]$ the previous integral can now be approximated by a Riemann sum $$ g(x)= \sum_{i=-N}^N p_i e^{-|x-q_i|},\qquad \left\{ \begin{array}l \displaystyle q_i= \frac i N R_\varepsilon \\ \displaystyle p_i=\int_{q_{i-1}}^{q_i}\frac{\tilde f(y)-\tilde f_{xx}(y)}{2}\,dy\,. \end{array} \right. $$ Choosing $N$ sufficiently large we obtain $\|\tilde f- g\|_{H^1([-R_\varepsilon,R_\varepsilon])}<\varepsilon$. Together with (\ref{apprcinf}) and (\ref{fuori}) this last estimate yields the result. \end{proof} \section{Definition of the distance}\label{metricsection} In this section we define a metric in order to control the distance between two solutions of the equation (\ref{prblCH}). It is constructed as a Kantorovich-Wasserstein distance. Let $\T=[0,2\pi]$ be the unit circle with the end points $0$ and $2\pi$ identified. Consider the metric space $(\R^2\times \T,d^\diamondsuit)$, with distance $$ d^\diamondsuit((x,u,\omega),(x',u',\omega'))\doteq \min\{|x-x'|+|u-u'|+|\omega-\omega'|_*,1\} $$ and, for every function $u\in X_\alpha$, define the Radon measure on $\R^2\times \T$ $$ \sigma^u(A)\doteq \int_{\{x\in \R: (x,u(x),2\arctan u_x(x))\in A\}} [1+u_x^2(x)]\,dx \qquad \mbox{for every Borel set $A$ of $\R^2\times \T$.} $$ The set $\mathcal F$ of transportation plans consists of the functions $\psi$ with the following properties: \begin{enumerate} \item \label{conduno} $\psi$ is absolutely continuous, is increasing and has an absolutely continuous inverse; \item \label{condue} $\sup\limits_{x\in\R}|x-\psi(x)|e^{\alpha/2|x|}<\infty$; \item \label{contre} $\int_{\R} |1-\psi'(x)|\,dx<\infty$. \end{enumerate} \begin{figure} \caption{Transportation plan.} \label{ftmass} \end{figure} The conditions \ref{condue} and \ref{contre} are not restrictive. Indeed, thanks to the exponential decay of the functions $u,v \in X_\alpha$, the measures $\sigma^u$ and $\sigma^v$, located on the graphs of $u$ and $v$ respectively, have small mass at infinity, and therefore a transportation plan which transports mass from one to the other can be almost the identity $\psi(x) \approx x$ (see Fig.~\ref{ftmass}). In order to define a distance in the space $X_\alpha$, we consider an optimization problem over all possible transportation plans.
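Observe, for instance, that the class $\mathcal F$ is never empty: the identity map $\psi(x)=x$ is absolutely continuous and increasing with absolutely continuous inverse, and it satisfies $\sup_{x\in\R}|x-\psi(x)|e^{\alpha/2|x|}=0$ and $\int_\R|1-\psi'(x)|\,dx=0$, so that conditions \ref{conduno}--\ref{contre} hold trivially.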
Given two functions $u,\,v$ in $X_\alpha$, we introduce two further measurable functions, related to a transportation plan $\psi$: \begin{eqnarray} && \label{non-phi1}\phi_1(x)\doteq \sup \big\{ \theta \in [0,1] {\rm \ \ s.t.\ \ } \theta \cdot(1+u_x^2(x))\leq \left(1+v_x^2(\psi(x))\right)\psi'(x)\big\}, \\ &&\label{non-phi2}\phi_2(x)\doteq \sup \big\{ \theta \in [0,1] {\rm \ \ s.t.\ \ } 1+u_x^2(x)\leq \theta\cdot \left(1+v_x^2(\psi(x))\right)\psi'(x)\big\}. \end{eqnarray} The functions $\phi_1, \phi_2$ can be seen as weights that take into account the difference of the masses of the measure $\sigma^u$ and $\sigma^v$. In fact, from the definitions (\ref{non-phi1})-(\ref{non-phi2}) one has $$ \phi_1(x) (1+u_x^2(x))=\phi_2(\psi(x)) (1+v_x^2(\psi(x)))\psi'(x)\qquad \mbox{for a.e. $x\in\R$}. $$ According to the definitions, the identity $\max\{\phi_1(x),\phi_2(x)\}\equiv 1$ holds. Altough the two measures $\phi_1\sigma^u$ and $\phi_2\sigma^v$ have not finite mass, they satisfy $\phi_1\sigma^u(A)=\phi_2\sigma^v(A)$ for every bounded Borel set $A\subset \R^2\times \mathbb T$. Thus, the functions $\phi_1$ and $\phi_2$ represent the percentage of mass actually transported from one measure to the other. A distance between the two functions $u,\,v$ in $X_\alpha$ can be characterized in the following way. For every $\psi\in \mathcal F$, let ${\bf X}^u=(x,u(x),2\arctan u_x(x))$ and ${\bf X}^v=(\psi(x),v(\psi(x)),2\arctan v_x(\psi(x)))$ and consider the functional $$ J^\psi(u,v)=\int_\R d^\diamondsuit ({\bf X}^u,{\bf X}^v)\phi_1(x)(1+u_x^2(x))\,dx+ \int_\R \left|1+u_x^2(x)-(1+v_x^2(\psi(x)))\psi'(x) \right|\,dx\,. $$ Since the previous function is well defined for every $\psi\in \mathcal F$, we can define $$ J(u,v)\doteq \inf_{\psi \in \mathcal F} J^\psi (u,v). $$ The function $J$ here defined is thus a metric on the space $X_\alpha$ (see \cite{BF2}). \section{Comparison with other topologies} \begin{lemma}\label{L1J} For every $u,\,v\in X_\alpha$ one has \begin{equation} \label{h1el1} \frac 1C\cdot \|u-v\|_{L^1(\R)}\leq J(u,v)\leq C\cdot \|u-v\|_{\H}. \end{equation} Let $(u_n)$ be a Cauchy sequence for the distance $J$ such that $C^{\alpha, u_n}\leq C_0$ for every $n\in \N$. Then \begin{enumerate} \item[i] There exists a limit function $u\in X_\alpha$ such that $u_n\to u$ in $L^\infty$ and the sequence of derivatives ${u_n}_x$ converges to $u_x$ in $L^p(\R)$ for $p \in [1,2[$. \item[ii] Let $\mu_n$ the absolutely continuous measure having density ${u_n}_x^2$ w.r.t. Lebesgue measure. then one has the weak convergence $\mu_n\rightharpoonup \mu$ for some measure $\mu$ whose absolutely continuous part has density $u_x^2$. \end{enumerate} \end{lemma} \begin{proof} The first inequality of (\ref{h1el1}) can be achieved by estimating the area between the two functions $u$ and $v$. For every $\psi\in \F$ we can write $$ \int_\R |u-v|\, dx =\left( \int_{S_1}+\int_{S_2}\right) |u-v|\,dx $$ where the two subsets $S_1$ and $S_2$ are \begin{itemize} \item $S_1= \{x: |x-\psi(x)|\leq~1~\}=\cup_j [x_{2j-1},x_{2j}]$, where in this union we have to take into account that these intervals may be either finite or infinite, possibly having $x_j=\pm \infty$ for some $j$, \item $S_2=\{x: |x-\psi(x)|>1\}$. \end{itemize} The integral over $S_2$ can be estimate in the following way: \begin{equation} \label{psigrosso} \int_{S_2}|u(x)-v(x)|\,dx\leq (\|u\|_{L^\infty}+\|v\|_{L^\infty}) \int_\R|x-\psi(x)|\,dx\leq (E^{\bar u}+ E^{\bar v})J(u,v). \end{equation} The last estimate is given by the definition of the functional $J$. 
\begin{figure} \caption{$L^1-$distance between two functions.} \label{L1norma} \end{figure} As far as the integral over $S_1$ is concerned, it can be viewed as a sum of the areas of the regions $A_j$ in the plane $\R^2$, bounded by the graphs of $u$ and $v$ and by the segments with slope $\pm 1$ that join the points $Q_u(x_{2j-1})=(x_{2j-1},u(x_{2j-1}))$ and $Q_v(x_{2j})=(\psi(x_{2j}),v(\psi(x_{2j})))$, where $\{x_i\}=\partial S_1$. We have $$ \int_{S_1}|u(x)-v(x)|\,dx\leq \sum_j {\rm \,meas\,}(A_j)\,. $$ The measure of the subset $A_j$ is the area swept by the segment $\overline {Q_u(x)\,Q_v(x)}$. Recalling that in every set $A_j$ the function $\psi$ satisfies $ |x|-1\leq |\psi(x)|\leq |x|+1$, a bound on this area is given by $$ {\rm \,meas\,} (A_j) \leq \displaystyle \int_{x_{2j-1}}^{x_{2j}} (|x-\psi(x)|+|u(x)-v(\psi)|)[1+u_x^2+(1+v_x^2(\psi ))\psi']\,dx $$ and then $$ \begin{array}{rl} \displaystyle \int_{S_1}|u(x)-v(x)|\,dx &\displaystyle \leq \int_{S_1}(|x-\psi(x)|+|u(x)-v(\psi)|)[1+u_x^2+(1+v_x^2(\psi ))\psi']\,dx \\ &\displaystyle\leq J^\psi(u,v) + J^{\psi^{-1}} (u,v) \end{array} $$ This inequality, together with (\ref{psigrosso}), yields $$ \|u-v\|_{L^1(\R)}\leq C(\bar u,\bar v)\, J(u,v). $$ The proof of the second part of the lemma is perfectly similar to the one of the periodic case, once we take into account the exponential decay of the sequence $u_n$. \end{proof} \section{Stability of solutions w.r.t.\ the initial data}\label{stabcor} Let $u_0$ and $v_0$ be two multi-peakon initial data. The technique developed in Section \ref{ODEsystem} ensures the existence of two multi-peakon solutions $u(t),v(t)$ of (\ref{prblCH}) which conserve the energy unless an interaction between peakons occurs. Suppose then that within a given interval $[0,T]$ no interaction occurs, either for $u(t)$ or for $v(t)$. The aim of this section is to prove the continuity of the functional $J$ w.r.t.\ the initial data; more precisely, we prove that there exists a continuous, positive function $C(t)$ such that for $t\in [0,T]$ one has $$ J(u(t),v(t))\leq C(t) J(u_0,v_0). $$ \begin{lemma} If $u(t)$ and $v(t)$ are two multi-peakon solutions defined in the interval $[0,T]$ in which no interaction occurs, then there exists a positive, continuous function $c(t)$ which depends only on the energies $E^u$, $E^v$ of the two solutions, such that \begin{equation} \frac d{dt} J(u(t),v(t))\leq c(t) J(u(t),v(t)) \qquad \mbox{for all $t\in[0,T]$}. \end{equation} \end{lemma} \begin{proof} We compute the time derivative of the function $J^\psi(u(t),v(t))$ with a particular choice of the transportation plan $\psi=\psi_{(t)}$. Given any $\psi_0\in \mathcal F$, at every time $t\in [0,T]$ we construct $\psi_{(t)}$ by transporting the function $\psi_0$ along the characteristic curves.
More precisely, since no interaction between peakon occurs in the interval $[0,T]$, the functions $u(t,\cdot),\,v(t,\cdot)$ are Lipschitz continuous, then the flows $\varphi^t_u$, $\varphi^t_v$ solutions of the Cauchy problems \begin{eqnarray*} &&\frac d{dt} \varphi^t_u(x)=u(t,\varphi_u^t(x))\qquad \varphi^0_u(x)=x, \\ &&\frac d{dt}\varphi^t_v(y)=v(t,\varphi_v^t(y))\qquad \varphi^0_v(y)=y, \end{eqnarray*} which are the characteristics curves associated to the equation (\ref{decayHP}), are well defined. Now, let $x\in \R$. $\psi_{(t)}$ is defined as the composition \begin{equation} \psi_{(t)}(x)\doteq \varphi_v^t\circ \psi_0\circ \left(\varphi^t_u \right)^{-1}(x), \end{equation} that is $$ \psi_{(t)}(\varphi_u^t(y))=\varphi_v^t(\psi_0(y)). $$ The function $\psi_{(t)}$ belongs to $\mathcal F$, and hence $J^{\psi_{(t)}}$ is well defined, in fact \begin{enumerate} \item By the property \ref{conduno} of the function $\psi_0$ and uniqueness of solution of ODE, the function $\psi_{(t)}$ is an increasing function. \item Let $x\in \R$ and $\varphi_u^t(y)$ be the characteristic curve passing through $x$ at time $t$. Evaluating $|x-\psi_ {(t)}(x)|e^{\alpha/2|x|}$ along this characteristic curve, and computing the derivative w.r.t. $t$ we obtain $$ \frac d{dt}|\varphi_u^t(y)-\varphi_v^t(\psi_0(y))|e^{\alpha/2|\varphi_u^t(y)|}\leq \left[|u(t,x)-v(t,\psi_{(t)}(x))|+\frac \alpha 2 |u(t,x)|\cdot |x-\psi_{(t)}(x)|\right]e^{\alpha/2|x|} $$ by properties (\ref{uinfty}), (\ref{weightedenergy}), and since $u,\,v$ are Lipschitz continuous in $[0,T]$, there exists two $L^\infty$ functions $c_1(t)$, $c_2(t)$ such that $$ \frac d{dt}|x-\psi_{(t)}(x)|e^{\alpha/2|x|}\leq c_1(t)|x-\psi_{(t)}(x)|e^{\alpha/2|x|}+c_2(t) $$ by Gronwall Lemma and the hypothesis $|x-\psi_0(x)|e^{\alpha/2|x|}\leq C_0$, the previous inequality gives the property \ref{condue} for $\psi_{(t)}$ \begin{equation} \label{stpsi} |x-\psi_{(t)}(x)|e^{\alpha/2|x|}\leq C_1(t)\doteq \left(C_0+ \int_0^t c_2(s)\,ds\right)e^{\int_0^t c_1(s)\,ds} \end{equation} \item The last property can be proved by changing the integration variable $x= \varphi_u^t(y)$ $$ \begin{array}{rl} \displaystyle \int_\R |1-\psi_{(t)}(x)|\,dx & \displaystyle =\int_\R |1-\psi_{(t)}(x)|(\varphi_u^t)'(y)\,dy = \int_\R |(\varphi_u^t)'(y)-(\varphi_v^t)'(\psi_0(y))\psi_0'(y)|\,dy \\ &\leq \displaystyle \int_\R |(\varphi_u^t)'(y)-1|\,dy +\int_\R |(\varphi_v^t)'(y)-1|\,dy+\int_\R|1-\psi_0'(y)|\,dy. \end{array} $$ Since $$ |(\varphi_u^t)'(y)-1|\leq \int_0^t |u_x(s,x)|\cdot |(\varphi_u^s)'(y)-1|\,ds+\int_0^t |u_x(s,x)|\,ds $$ (and a similar estimate for $\varphi_v^t$) and $u_x,v_x\in L^\infty$, by the Gronwall lemma the first two integrals of the previous formula are bounded by an absolutely continuous function $C(t)$ in the interval $[0,T]$ and then also property \ref{contre} holds. \end{enumerate} At the transportation plan $\psi_{(t)}$ we associate the functions $\phi_1^{(t)},\,\phi_2^{(t)}$ defined according to (\ref{non-phi1}), (\ref{non-phi2}), the functional $J^{\psi_{(t)}}$ is thus $$ J^{\psi_{(t)}}(u(t),v(t))=\! \int_\R d^\diamondsuit(\mathbf X^u(t),\mathbf X^v(t))\phi_1^{(t)}(x)(1+u_x^2(x))dx+ \int_\R \left|1+u_x^2(x) -(1+v_x^2(\psi_{(t)}(x)))\psi_{(t)}'(x)\right|dx. $$ By deriving $J^{\psi_{(t)}}(u(t),v(t))$ w.r.t. 
$t$ and computing the change of variables along the characteristics, this derivative can be estimated by the sum of the following terms (we omit the dependence on the integration variable when it is not essential): \begin{itemize} \item $\displaystyle I_1=\int_\R |u(t,x)-v(t,\psi_{(t)}(x))|\phi_1^{(t)}(x)(1+u_x^2(t,x))\,dx\leq (1+\|u(t)\|_{L^\infty}+\|v(t)\|_{L^\infty}) J^{\psi_{(t)}}(u(t),v(t))\,, $ \item $ \displaystyle I_2=\int_\R|P^u_x(t,x)-P^v_x(t,\psi_{(t)}(x))|\phi_1^{(t)}(x)(1+u_x^2(t,x))\,dx\,, $ \item $I_3= \displaystyle \int_\R \left|\frac {2u^2(t)-u_x^2(t)-2P^u(t)}{1+u_x^2(t)}\right. \left.-\frac{2v^2(t,\psi_{(t)})-v_x^2(t,\psi_{(t)})-2P^v(t,\psi_{(t)})} {1+v_x^2(t,\psi_{(t)})}\right|\cdot \phi_1^{(t)} (1+u_x^2(t))\,dx\,, $ \item the term due to the variation of the base measure $$ \begin{array}{rl} \displaystyle I_4=2\int_\R d^\diamondsuit(\mathbf X^u(t),\mathbf X^v(t))\cdot u_x(t)(u^2(t)- P^u(t))\,dx\,, \end{array} $$ \item and the terms due to the variation of the excess mass $$ I_5=\frac d{dt}\int_\R\left|1+u_x^2(t)-(1+v_x^2(t,\psi_{(t)}))\psi_{(t)}'\right|\,dx\,. $$ \end{itemize} Let us start by estimating the term $I_2$. By definition, the difference $P^u_x(t,x)-P^v_x(t,\psi_{(t)}(x))$ can be written, up to a constant factor, in the convolution form $$ \begin{array}{rl} &\displaystyle\left| \int_\R\left \{ e^{-|x-y|}\,{\rm sign}\,(x-y)\left[u^2(t,y)+\frac{u^2_x(t,y)}{2}\right] \right.\right. \\ &\left.\left.\displaystyle \qquad\qquad -e^{-|\psi_{(t)}(x)-\psi_{(t)}(y)|}\,{\rm sign}\,(\psi_{(t)}(x)-\psi_{(t)}(y))\left[v^2(t,\psi_{(t)}(y))+\frac{v^2_x(t,\psi_{(t)}(y))}{2}\right]\psi_{(t)}'(y)\right\}\,dy\right| \end{array} $$ Then in the estimate of $I_2$ the following integrals appear: \begin{eqnarray*} \displaystyle &&A=\int_\R(1+u_x^2(t,x))\int_\R e^{-|x-y|}|u^2(t,y)-v^2(t,y)|\,dy\,dx \\ &&B=\int_\R(1+u_x^2(t,x))\left|\int_\R e^{-|x-y|}\,{\rm sign}\,(x-y)[v^2(t,y)-v^2(t,\psi_{(t)}(y))\psi_{(t)}'(y)]\,dy \,\right|\,dx \\ &&C=\int_\R(1+u_x^2(t,x))\int_\R\left|e^{-|x-y|}\,{\rm sign}\,(x-y)-e^{-|\psi_{(t)}(x)-\psi_{(t)}(y)|}\,{\rm sign}\,(\psi_{(t)}(x)-\psi_{(t)}(y))\right| \cdot \\ &&\qquad\qquad\qquad\qquad\qquad\qquad \cdot\left[v^2(t,\psi_{(t)} (y))+\frac {v_x^2(t,\psi_{(t)} (y))}2\right]\psi_{(t)}'(y)\,dy\,dx \\ &&D=\frac 12\int_\R(1+u_x^2(t,x)) \left|\int_\R e^{-|x-y|}\,{\rm sign}\,(x-y)[u_x^2(t,y)-v_x^2(t,\psi_{(t)}(y))\psi_{(t)}'(y)]\,dy\,\right|\,dx \end{eqnarray*} \noindent{\bf A.} Switching the order of the two integrals, the term $A$ is bounded by the $L^1$-norm of the difference between $u$ and $v$: $$ \begin{array}{rl} A & \displaystyle \leq(\|u\|_{L^\infty}+\|v\|_{L^\infty})\int_\R |u(t,y)-v(t,y)|\int_\R e^{-|x-y|}(1+u_x^2(t,x))\,dx\,dy \\ &\displaystyle \leq (2+E^u)(\|u\|_{L^\infty}+\|v\|_{L^\infty})\|u(t)-v(t)\|_{L^1} \end{array} $$ where we used $\int_\R e^{-|x-y|}\,dx=2$, $e^{-|x-y|}\leq 1$ and $\int_\R u_x^2(t,x)\,dx\leq E^u$; then, by Lemma \ref{L1J}, $A\leq C(\bar u,\bar v)J(u(t),v(t))$.
\noindent{\bf B.} Define $$ F(y)\doteq\int_{-\infty}^y (v^2(z)-v^2(\psi_{(t)}(z))\psi_{(t)}'(z))\,dz=\int_{\psi_{(t)}(y)}^{y}v^2(z)\,dz $$ we have, integrating by parts $$ \begin{array}{rl} \displaystyle \left|\int_\R e^{-|x-y|}\,{\rm sign}\,(x-y)F'(y)\,dy\,\right| & \displaystyle \leq 2|F(x)|+\int_R e^{-|x-y|}|F(y)|\,dy \\ &\displaystyle \leq \|v\|^2_{L^\infty}|x-\psi(x)|+\int_\R e^{-|x-y|}|y-\psi(y)|\,dy \end{array} $$ moreover, substituting the previous expression into the term $B$ we obtain \begin{eqnarray*} B&\!\!\!\!\!\!\!\!\!\!&\leq 2E^{\bar v}J^{\psi_{(t)}}(u(t),v(t))+\int_\R (1+u_x^2(t,x)) \int_\R e^{-|x-y|} |y-\psi_{(t)}(y)|\,dy\,dx \\ &\!\!\!\!\!\!\!\!\!\!&=2E^{\bar v}\left\{J^{\psi_{(t)}}(u(t),v(t))+\int_\R |y-\psi_{(t)}(y)|\int_\R(1+u_x^2(t,x))e^{-|x-y|} \,dx\,dy\right\} \\ &\!\!\!\!\!\!\!\!\!\!&\leq 2E^{\bar v}(3+E^{\bar u})\cdot J^{\psi{(t)}}(u(t),v(t)). \end{eqnarray*} \noindent {\bf C.} Observe that since the function $y\mapsto \psi_{(t)}(y)$ is non decreasing, the quantities $x-y$ and $\psi_{(t)}(x)-\psi_{(t)}(y)$ have the same sign, and since the function $t \mapsto e^{-|t|}$ is Lipschitz continuous either in $(-\infty,0)$ or in $(0,+\infty)$ we have $$ \begin{array}{rl} \displaystyle \left|e^{-|x-y|}-e^{-|\psi_{(t)}(x)-\psi_{(t)}(y)|}\right| &\leq \displaystyle e^{-\min\{|x-y|,|\psi_{(t)}(x)-\psi_{(t)}(y)|\}}\left||x-y|-|\psi_{(t)}(x)-\psi_{(t)}(y)|\right| \\ & \displaystyle \leq e^{-\min\{|x-y|,|\psi_{(t)}(x)-\psi_{(t)}(y)|\}}\left(|x-\psi_{(t)}(x)|+|y-\psi_{(t)}(y)|\right) \end{array} $$ now $$ -\min\{|x-y|,|\psi_{(t)}(x)-\psi_{(t)}(y)|\}\leq -|x-y|+2C_1(t), $$ where $C_1(t)$ is the function (\ref{stpsi}), related to the property \ref{condue} of $\psi_{(t)}$, then $$ \begin{array}{rl} C \leq &\!\!\displaystyle e^{C_1(t)}\!\!\int_\R |y-\psi(y)| \left[v^2(t,\psi_{(t)}(y)) + \frac {v_x^2(t,\psi_{(t)}(y))}2\right]\psi_{(t)}'(y)\cdot\int_\R(1+u_x^2(t,x))e^{-|x-y|}\,dx\,dy+ \\ &+\displaystyle 2 E^{\bar v}\int_\R(1+u_x^2(t,x))|x-\psi_{(t)}(x)|\,dx \\ \leq & \!\!\displaystyle \left[(2+E^{u})e^{C_1(t)}(1+\|u\|_{L^\infty})+ 2 E^{\bar v}\right] J^{\psi_{(t)}}(u(t),v(t)) \end{array} $$ \noindent{\bf D.} Here we can use the estimate given by the change in base measure. Since $$ \int_\R \Big|1+u_x^2(t,x)- \big(1+v_x^2(t,\psi_{(t)}(x))\big)\psi_{(t)}'(x)\Big|\, dx\leq J^{\psi_{(t)}}(u(t),v(t))\,, $$ we obtain $$ \begin{array}{rl} D \leq& \displaystyle \frac 12 \int_\R(1+u_x^2(t,x)) \int_\R e^{-|x-y|} \left|(1+u_x^2(t,y))-(1+v_x^2(t,\psi_{(t)}(y)))\psi_{(t)}'(y)\right| \,dy\,\,dx \\ \quad& \displaystyle + \frac 12 \int_\R(1+u_x^2(t,x))\cdot \left|\int_\R e^{-|x-y|}\,{\rm sign}\,(x-y) \left[\psi_{(t)}'(y)-1\right]dy\, \right|\,dx \\ \leq& \displaystyle (1+ E^{\bar u})J^{\psi_{(t)}}(u(t),v(t))+ \int_\R(1+u_x^2(t,x))\left|(\psi_{(t)}(x)-x)-\frac{e^{-|x-y|}}2\int_\R (\psi_{(t)}(y)-y)\,dy\right|\,dx \\ \leq & 2(2+ E^{\bar u})J^{\psi_{(t)}}(u(t),v(t)) \displaystyle \end{array} $$ where in the last estimate we integrated by part as in the term $B$. \noindent The control for the terms $I_3$, $I_4$ and $I_5$ can be obtained exactly as the ones in \cite{BF2}, whom we refer the reader to. 
The previous estimates imply that there exists a smooth function $C=C^{\bar u,\bar v}(t)$, which depends only on the variable $t$ and on the initial data $\bar u,\ \bar v$, such that $$ \frac{d}{dt} J^\psi(u,v)\leq C^{\bar u,\bar v}(t) J^{\psi}(u,v) $$ which yields $$ J(u(t),v(t))\leq J(u(s), v(s)) e^{\left|\int_s^t C^{\bar u,\bar v}(\sigma)\,d\sigma\right|}\qquad {\mbox {for every $s,t\in\R$.}} $$ \end{proof} \end{document}
\begin{document} \title{Balanced partitions of $3$-colored geometric sets in the plane\thanks{This paper was published in Discrete Applied Mathematics, 181:21--32, 2015~\cite{journal} \begin{abstract} Let $S$ be a finite set of geometric objects partitioned into classes or \emph{colors}. A subset $S'\subseteq S$ is said to be \emph{balanced} if $S'$ contains the same amount of elements of $S$ from each of the colors. We study several problems on partitioning $3$-colored sets of points and lines in the plane into two balanced subsets: (a) We prove that for every 3-colored arrangement of lines there exists a segment that intersects exactly one line of each color, and that when there are $2m$ lines of each color, there is a segment intercepting $m$ lines of each color. (b) Given $n$ red points, $n$ blue points and $n$ green points on any closed Jordan curve $\gamma$, we show that for every integer $k$ with $0 \leq k \leq n$ there is a pair of disjoint intervals on $\gamma$ whose union contains exactly $k$ points of each color. (c) Given a set $S$ of $n$ red points, $n$ blue points and $n$ green points in the integer lattice satisfying certain constraints, there exist two rays with common apex, one vertical and one horizontal, whose union splits the plane into two regions, each one containing a balanced subset of $S$. \end{abstract} \section{Introduction}\label{section:introduction} Let $S$ be a finite set of geometric objects distributed into classes or \emph{colors}. A subset $S_1\subseteq S$ is said to be \emph{balanced} if $S_1$ contains the same amount of elements of $S$ from each of the colors. Naturally, if $S$ is balanced, its complement is also balanced, hence we talk of a {\em balanced bipartition} of $S$. When the point set $S$ is in the plane, and the balanced partition is defined by a geometric object $\zeta$ splitting the plane into two regions, we say that $\zeta$ is \emph{balanced} (and {\em nontrivial} if both regions contain points of $S$). A famous example of such a partition is the discrete version of the \emph{ham-sandwich theorem}: given a set of $2n$ red points and $2m$ blue points in general position in the plane, there always exists a line $\ell$ such that each halfplane bounded by $\ell$ contains exactly $n$ red points and $m$ blue points. It is well known that this theorem can be generalized to higher dimensions and can be formulated in terms of splitting continuous measures. There are also plenty of variations of the ham-sandwich theorem. For example, it has been proved that given $gn$ red points and $gm$ blue points in the plane in general position, there exists a subdivision of the plane into $g$ disjoint convex polygons, each of which contains $n$ red points and $m$ blue points \cite{Sergei2000}. Also, it was shown in \cite{Imre2001} (among other results) that for any two measures in the plane there are $4$ rays with common apex such that each of the sectors they define contains $\frac{1}{4}$ of both measures. For many more extensions and detailed results we refer the interested reader to~\cite{jorge1,jorge2}, the survey \cite{Kaneko2003} of Kaneko and Kano and to the book \cite{Matousek} by Matou{\v s}ek. Notice that if we have a $3$-colored set of points $S$ in the plane, it is possible that no line produces any non-trivial balanced partition of $S$. 
Consider for example an equilateral triangle $p_1p_2p_3$ and replace every vertex $p_i$ by a very small disk $D_i$ (so that no line can intersect the three disks), and place $n$ red points, $n$ green points, and $n$ blue points, inside the disks $D_1$, $D_2$ and $D_3$, respectively. It is clear for this configuration that no line determines a halfplane containing exactly $k$ points of each color, for any value of $k$ with $0<k<n$. However, it is easy to show that for every $3$-colored set of points $S$ in the plane there is a conic that simultaneously bisects the three colors: take the plane to be $z=0$ in $\mathbb{R}^3$, lift the points vertically to the unit paraboloid $P$, use the 3-dimensional ham-sandwich theorem for splitting evenly the lifted point set with a plane $\Pi$, and use the projection of $P\cap\Pi$ as halving conic in $z=0$. On the other hand, instead of changing the partitioning object, one may impose some additional constraints on the point set. For example, Bereg and Kano have recently proved that if all vertices of the convex hull of $S$ have the same color, then there exists a nontrivial balanced line~\cite{Bereg2012}. This result was recently extended to sets of points in a space of higher dimension by Akopyan and Karasev \cite{Karasev2012}, where the constraint imposed on the set was also generalized. \noindent\textbf{Our contribution}. In this work we study several problems on balanced bipartitions of $3$-colored sets of points and lines in the plane. In Section \ref{section:lines} we prove that for every 3-colored arrangement of lines, possibly unbalanced, there always exists a segment intersecting exactly one line of each color. If the number of lines of each color is exactly $2n$, we show that there is always a segment intersecting exactly $n$ lines of each color. The existence of balanced segments in 3-colored line arrangements is equivalent, by duality, to the existence of balanced double wedges in $3$-colored point sets. In Section \ref{jordan} we consider balanced partitions on closed Jordan curves. Given $n$ red points, $n$ blue points and $n$ green points on any closed Jordan curve $\gamma$, we show that for every integer $k$ with $0 \leq k \leq n$ there is a pair of disjoint intervals on $\gamma$ whose union contains exactly $k$ points of each color. In Section \ref{section:lattice} we focus on point sets in the integer plane lattice $\mathbb{Z}^2$; for simplicity, we will refer to $\mathbb{Z}^2$ as \emph{the lattice}. We define an \emph{$L$-line} \emph{with }{\em corner} $q$ as the union of two different rays with common apex $q$, each of them being either vertical or horizontal. This {\em $L$-line} partitions the plane into two regions (Figure~\ref{fig:L-line}). If one of the rays is vertical and the other ray is horizontal, the regions are a quadrant with origin at $q$ and its complement. Note, however, that we allow an $L$-line to consist of two horizontal or two vertical rays with opposite direction, in which case the $L$-line is simply a horizontal or vertical line that splits the plane into two halfplanes. An $L$-line segment can be analogously defined using line segments instead of rays. $L$-lines in the lattice play somehow a role comparable to the role of ordinary lines in the real plane. 
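For completeness, we spell out the lifting argument sketched above (a standard computation). A point $(x,y)$ of $S$ is lifted to $(x,y,x^2+y^2)$ on the paraboloid $P$. A plane $\Pi$ with equation $ax+by+cz=d$ meets $P$ in a curve whose vertical projection onto $z=0$ is the conic $ax+by+c(x^2+y^2)=d$ (a circle when $c\neq 0$, a line when $c=0$), and a lifted point lies on a prescribed side of $\Pi$ exactly when the original point lies on the corresponding side of this conic. Hence a plane that simultaneously bisects the three lifted color classes, which exists by the ham-sandwich theorem in $\mathbb{R}^3$, projects to a conic that simultaneously bisects the three colors in the plane.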
An example of this is the result due to Uno et al.~\cite{Kano2009}, which extends the ham-sandwich theorem to the following scenario: Given $n$ red points and $m$ blue points in general position in $\mathbb{Z}^2$, there always exists an $L$-line that bisects both sets of points. This result was also generalized by Bereg \cite{Bereg2009}; specifically he proved that for any integer $k\ge 2$ and for any $kn$ red points and $km$ blue points in general position in the plane, there exists a subdivision of the plane into $k$ regions using at most $k$ horizontal segments and at most $k-1$ vertical segments such that every region contains $n$ red points and $m$ blue points. Several results on sets of points in $\mathbb{Z}^2$, using $L$-lines or $L$-line segments are described in \cite{Kano2012}. A set $S\subset \mathbb{R}^2$ is said to be {\em orthoconvex} if the intersection of $S$ with every horizontal or vertical line is connected. The {\em orthogonal convex hull} of a set $S$ is the intersection of all connected orthogonally convex supersets of S. Our main result in Section \ref{section:lattice} is in correspondence with the result of Bereg and Kano \cite{Bereg2012} mentioned above that if the convex hull of a $3$-colored point set is monochromatic, then it admits some balanced line. Specifically, we prove here that given a set $S \subset \mathbb{Z}^2$ of $n$ red points, $n$ blue points and $n$ green points in general position (i.e., no two points are horizontally or vertically aligned), whose orthogonal convex hull is monochromatic, then there is always an $L$-line that separates a region of the plane containing exactly $k$ red points, $k$ blue points, and $k$ green points from $S$, for some integer $k$ in the range $1\le k \le n-1$. We conclude in Section \ref{section:conclusion} with some open problems and final remarks. \section{3-colored line arrangements}\label{section:lines} Let $\ensuremath{L} = R \cup G \cup B$ be a set of lines in the plane, such that $R, G$ and $B$ are pairwise disjoint. We refer to the elements of $R$, $G$, and $B$ as red, green, and blue, respectively. Let $\mathcal{A}(\ensuremath{L})$ be the arrangement induced by the set $\ensuremath{L}$. We assume that $\mathcal{A}(\ensuremath{L})$ is \emph{simple}, i.e., there are no parallel lines and no more than two lines intersect at one point. In Section~\ref{subsection:cells} we first prove that there always exists a face in $\mathcal{A}(\ensuremath{L})$ that contains all three colors. We also extend this result to higher dimensions. We say that a segment is \emph{balanced} with respect to $\ensuremath{L}$ if it intersects the same number of red, green and blue lines of $\ensuremath{L}$. In Section~\ref{subsection:doubleWedges} we prove that (i) there always exists a segment intersecting exactly one line of each color; and (ii) if the size of each set $R, G$ and $B$ is $2n$, there always exists a balanced segment intersecting $n$ lines of each color. As there are standard duality transformations between points and lines in which segments correspond to double wedges, the results in this section can be rephrased in terms of the existence of balanced double wedges for $3$-colored point sets. \subsection{Cells in colored arrangements}\label{subsection:cells} In this section we prove that there always exists a $3$-colored face in $\mathcal{A}(\ensuremath{L})$, that is, a face that has at least one side of each color. 
In fact, we can show that a $d$-dimensional arrangement of $(d-1)$-dimensional hyperplanes, where each hyperplane is colored by one of $d+1$ colors (at least one of each color), must contain a $(d+1)$-colored cell. This result is tight with respect to the number of colors. If we have only $d$ colors, then every cell containing the intersection point of $d$ hyperplanes with different colors is $d$-colored. On the other hand, it is not difficult to construct examples of arrangements of hyperplanes with $d+2$ colors where no $(d+2)$-colored cell exists. \begin{figure} \caption{ Construction of a line arrangement with no $4$-colored cell.} \label{fig:counterexample} \end{figure} An example in the plane is shown in Figure~\ref{fig:counterexample}. Start with a triangle in which each side has a different color (red, green, or blue), and extend the sides to the colored lines $r$, $g$ and $b$ that support the sides (Figure~\ref{fig:counterexample}, left). Then shield each of these lines by two black parallel lines, one on each side (Figure~\ref{fig:counterexample}, right). Finally perturb the black lines in such a way that the arrangement becomes simple, yet the intersection points between former parallel lines are very far away. Now it is easy to see that no cell can contain all four colors. For example, depending on the specific intersections of the black lines with $g$ and $b$, the region $R$ may contain the colors green and blue, but cannot contain color red. This example can be generalized to $d$-dimensional space. Start with a $d$-simplex in which each of the $d+1$ hyperplanes supporting a facet has a different color, $c_1,\dots,c_{d+1}$. Then shield each facet with two parallel hyperplanes having color $c_{d+2}$, one on each side, and perturb as above to obtain a simple arrangement in which no cell is $(d+2)$-colored. For intuition's sake, before generalizing the result to higher dimensions, we first prove the result for $d=2$. Consider the $2$-dimensional arrangement $\mathcal{A}(\ensuremath{L})$ as a graph. The dual of a face $f$ of $\mathcal{A}(\ensuremath{L})$ is the graph $\hat{f}$ that contains a vertex for every bounding line of $f$, and contains an edge between two vertices of $\hat{f}$ if and only if the intersection of the corresponding lines is part of the boundary of $f$ (see Figure~\ref{fig:ColorArrangement}(a)). \begin{figure} \caption{(a) A (complete) face $f$ and its dual $\hat{f}$. (b) The face $f$ is split into $f_1$ and $f_2$ by a newly inserted line. (c) The corresponding split of $\hat{f}$ into $\hat{f}_1$ and $\hat{f}_2$.} \label{fig:ColorArrangement} \end{figure} Let $C$ be a simple cycle of vertices where each vertex is colored either red, green, or blue. Let $n_r(C), n_g(C), $ and $n_b(C)$ be the number of red, green, and blue vertices of $C$, respectively. We simply write $n_r$, $n_g$, and $n_b$ if $C$ is clear from the context. The \emph{type} of an edge of $C$ is the multiset of the colors of its vertices. Let $n_{rr}$, $n_{gg}$, $n_{bb}$, $n_{rg}$, $n_{rb}$, and $n_{gb}$ be the number of edges of the corresponding type. Note that, if $f$ is bounded, then $\hat{f}$ is a simple cycle, where each vertex is colored either red, green, or blue. We say a bounded face $f$ is \emph{complete} if $n_{rg} \equiv n_{rb} \equiv n_{gb} \equiv 1 \pmod{2}$ holds for $\hat{f}$. \begin{lemma} \label{lem:3colorcycle} Consider a simple cycle in which each vertex is colored either red, green, or blue. Then $n_{rg} \equiv n_{rb} \equiv n_{gb} \pmod{2}$. \end{lemma} \begin{proof} The result follows from double counting.
We can obtain an expression for (twice) the number of vertices of a certain color by summing up over all edges that have vertices of that color. For instance, for $n_r$ we get the equation $2 n_r = 2 n_{rr} + n_{rg} + n_{rb}$. This directly implies that $n_{rg} \equiv n_{rb} \pmod{2}$. By repeating the same process for the other colors we obtain the claimed result. \end{proof} \begin{theorem} \label{thm:3ColoredFace} Let $\ensuremath{L}$ be a set of 3-colored lines in the plane inducing a simple arrangement $\mathcal{A}(\ensuremath{L})$, such that each color appears at least once. Then there exists a complete face in $\mathcal{A}(\ensuremath{L})$. \end{theorem} \begin{proof} The result clearly holds if $|R|=|G|=|B|=1$. For the general case, we start with one line of each color, and then incrementally add the remaining lines, maintaining a complete face $f$ at all times. Without loss of generality, assume that a red line $\ell$ is inserted into $\mathcal{A}(\ensuremath{L})$. If $\ell$ does not cross $f$, we keep $f$. Otherwise, $f$ is split into two faces $f_1$ and $f_2$ (see Figure~\ref{fig:ColorArrangement}(b)). Similarly, $\hat{f}$ is split into $\hat{f}_1$ and $\hat{f}_2$ (with the addition of one red vertex, see Figure~\ref{fig:ColorArrangement}(c)). Because $\ell$ is red, the number $n_{gb}$ of green-blue edges does not change, that is, $n_{gb}(\hat{f}) = n_{gb}(\hat{f}_1) + n_{gb}(\hat{f}_2)$. This implies that either $n_{gb}(\hat{f}_1)$ or $n_{gb}(\hat{f}_2)$ is odd. By Lemma~\ref{lem:3colorcycle} it follows that either $f_1$ or $f_2$ is complete. \end{proof} We now extend the result to higher dimensions. For convenience we assume that every hyperplane is colored with a ``color'' in $[d] = \{0, 1, \ldots, d\}$. Consider a triangulation $T$ of the surface of the $(d-1)$-dimensional sphere $\mathbb{S}^{d-1}$, where every vertex is colored with a color in $[d]$. Note that a triangulation of $\mathbb{S}^1$ is exactly a simple cycle. As before, we define the \emph{type} of a simplex (or face) of $T$ as the multiset $S$ of the colors of its vertices. Furthermore, let $n_S$ be the number of simplices (faces) with type $S$. We say a type $S$ is \emph{good} if $S$ does not contain duplicates and $|S| = d$. The following analogue of Lemma~\ref{lem:3colorcycle} is similar to Sperner's lemma~\cite{Sperner96}. \begin{lemma} \label{lem:dColorTriang} Consider a triangulation $T$ of $\mathbb{S}^{d-1}$, where each vertex is colored with a color in $[d]$. Then either $n_S \equiv 0 \pmod{2}$ for all good types $S$, or $n_S \equiv 1 \pmod{2}$ for all good types $S$. \end{lemma} \begin{proof} We again use double counting, following the ideas in the proof of Lemma~\ref{lem:3colorcycle}. Consider a subset $S \subset [d]$ with $|S| = d-1$ (thus $S$ includes all but two colors from $[d]$). Recall that good types consist of $d$ different colors, so there are exactly two good types $S_1$ and $S_2$ that contain $S$: one with each of the two colors of $[d]$ that are not already present in $S$. Since every $(d-2)$-dimensional face of $T$ is present in exactly two $(d-1)$-dimensional simplices, we can obtain an expression for (twice) $n_S$ by summing over all face types that contain $S$: \[ 2 n_S = n_{S_1} + n_{S_2} + 2 \sum_{x \in S} n_{S \uplus \{x\}}, \] where the summation on the right is done over the types that contain $S$ but are not good (the symbol $\uplus$ denotes the \emph{disjoint union}, to allow duplicated colors in the face type). 
From the above equation we obtain that $n_{S_1} \equiv n_{S_2} \pmod{2}$. By repeating this procedure for every set $S$, we obtain the claimed result. \end{proof} Consider a cell $f$ of a $d$-dimensional arrangement of $(d-1)$-dimensional hyperplanes. The dual $\hat{f}$ of $f$ contains a vertex for every bounding hyperplane of $f$, and contains a simplex on a set of vertices of $\hat{f}$ if and only if the intersection of the corresponding hyperplanes is part of the boundary of $f$. Note that, if $f$ is bounded, then $\hat{f}$ is a triangulation of $\mathbb{S}^{d-1}$. We say a bounded cell $f$ is \emph{complete} if $n_S(\hat{f}) \equiv 1 \pmod{2}$ for all good types $S$. Note that a complete cell is $(d+1)$-colored. \begin{theorem} \label{thm:dColoredCell} Let $\ensuremath{L}$ be a set of $(d+1)$-colored hyperplanes in $\mathbb{R}^d$ inducing a simple arrangement $\mathcal{A}(\ensuremath{L})$, such that each color appears at least once. Then there exists a complete face in $\mathcal{A}(\ensuremath{L})$. \end{theorem} \begin{proof} The proof is analogous to the two-dimensional case: if there is exactly one hyperplane of each color, then the arrangement has exactly one bounded face $f$, which must be complete (this can be easily shown by induction on $d$). We maintain a complete face $f$ during successive insertions of hyperplanes. Assume we add a hyperplane $H$ with color $x$. If $H$ does not cross $f$, we can simply keep $f$. Otherwise, $f$ is split into two faces $f_1$ and $f_2$, and $\hat{f}$ is split into $\hat{f}_1$ and $\hat{f}_2$ (with the addition of one vertex of color $x$). Let $S = [d] - \{x\}$. Because $H$ has color $x$, the number of simplices with type $S$ does not change, that is, $n_S(\hat{f}) = n_S(\hat{f}_1) + n_S(\hat{f}_2)$. This implies that either $n_S(\hat{f}_1)$ or $n_S(\hat{f}_2)$ is odd. By Lemma~\ref{lem:dColorTriang} it follows that either $f_1$ or $f_2$ is complete. \end{proof} An immediate consequence of Theorem~\ref{thm:dColoredCell} is the following result: \begin{corollary} \label{cor:segment111} Let $\ensuremath{L}$ be a $(d+1)$-colored set of hyperplanes in $\mathbb{R}^d$ inducing a simple arrangement $\mathcal{A}(\ensuremath{L})$, such that each color appears at least once. Then there exists a segment intersecting exactly one hyperplane of each color. \end{corollary} \begin{proof} Consider a $(d+1)$-colored cell. By Theorem~\ref{thm:dColoredCell} such a cell must exist and it must also contain an intersection of $d$ hyperplanes with different colors. Now we can take the segment from this intersection to a face of the remaining color (in the same cell). By perturbing and slightly extending this segment, we obtain a segment properly intersecting exactly one hyperplane of each color. \end{proof} \subsection{$3$-colored point sets and balanced double wedges}\label{subsection:doubleWedges} We now return to the plane and consider $3$-colored point sets. By using the point-plane duality, Corollary~\ref{cor:segment111} implies the following result. \begin{theorem}\label{thm:doubleWedge111} Let $S$ be a 3-colored set of points in $\mathbb{R}^2$ in general position, such that each color appears at least once. Then there exists a double wedge that contains exactly one point of each color from $S$. \end{theorem} \begin{proof} We apply the standard duality transformation between points and non-vertical lines where a point $p=(a,b)$ is mapped to a line $p'$ with equation $y=ax-b$, and vice versa~\cite{deberg}. 
By Corollary~\ref{cor:segment111}, there exists a segment $w$ that intersects exactly one line of each color. By standard point-line duality properties, the dual of $w$ is a double wedge $w'$ that contains the dual points of the intersected lines. \end{proof} Since the dual result extends to higher dimensions, so does the primal one. The equivalent statement says that given a set of points colored with $d+1$ colors in $\mathbb{R}^d$, there exists a {\em pencil} (i.e., a collection of hyperplanes sharing an affine subspace of dimension $d-2$) containing exactly one point of each color. Next we turn our attention to balanced $3$-colored point sets, and prove a ham-sandwich-like theorem for double wedges. \begin{theorem}\label{th:bisectingdoubleWedge} Let $S$ be a 3-colored balanced set of $6n$ points in $\mathbb{R}^2$ in general position. Then there exists a double wedge that contains exactly $n$ points of each color from $S$. \end{theorem} \begin{proof} We call a double wedge satisfying the theorem {\em bisecting}. Without loss of generality we assume that the points of $S$ have distinct $x$-coordinates and distinct $y$-coordinates. For two distinct points $a$ and $b$ in the plane, let $\ell(a,b)$ denote the line passing through them. Consider the arrangement $\mathcal{A}$ of all the lines passing through two points from $S$, i.e. $$\mathcal{A}=\{ \ell(p_i,p_j)~|~p_i,p_j\in S, i\ne j\} .$$ \begin{figure} \caption{$\sigma_p$: ordering of $S$ based on the slopes of lines through $p$.} \label{ordering} \end{figure} Consider a vertical line $\ell$ that does not contain any point from $S$. We continuously walk on $\ell$ from $y=+\infty$ to $y=-\infty$. For any point $p\in \ell$ we define an ordering $\sigma_p$ of $S$ as follows: consider the lines $\ell(p,q)$, $q\in S$, and sort them by increasing slope. Let $(p_1,\ldots, p_{6n})$ be the obtained ordering (see Figure~\ref{ordering}). By construction, any consecutive interval $\{p_i,p_{i+1},\dots,p_j\}$ of the ordering $\sigma_p$ corresponds to a set of points of $S$ that can be covered by a double wedge with apex at $p$ (even if the indices are taken modulo $6n$). Likewise, for any $p\in\mathbb{R}^2$, any double wedge with apex at $p$ will appear as an interval in the ordering $\sigma_p$. Given an ordering $\sigma_p=(p_1, \ldots, p_{6n})$ of $S$, we construct a polygonal curve as follows: for every $k \in \{1, 2,\ldots, 6n\}$ let $b_k$ and $g_k$ be the number of blue and green points in the set $S(p,k)=\{p_k,p_{k+1},\ldots,p_{k+3n-1}\}$ of $3n$ points, respectively. We define the corresponding lattice point $q_k := (b_k -n, g_k-n)$, and the polygonal curve $\phi(\sigma)=(q_1,\ldots, q_{6n},-q_1,\ldots,-q_{6n},q_1)$. \begin{figure} \caption{The seven types of segments $q_{k-1}q_k$ of $\phi(\sigma)$.} \label{vector2} \end{figure} Intuitively speaking, the point $q_k$ indicates how balanced the interval that starts at $p_k$ and contains $3n$ points is. By construction, if $q_k = (0,0)$ for some $p \in \ell$ and some $k\leq 6n$, then the associated double wedge is balanced (and {\em vice versa}). In the following we show that this property must hold for some $k \in \{1, \ldots, 6n\}$ and $p\in\mathbb{R}^2$. We observe several important properties of $\phi(\sigma)$: \begin{enumerate} \item Path $\phi(\sigma)$ is centrally symmetric (w.r.t. the origin). This follows from the definition of $\phi$. \item\label{noedge} Path $\phi(\sigma)$ is a closed curve.
Moreover, the interior of any edge $e_i=q_iq_{i+1}$ of $\phi(\sigma)$ cannot contain the origin: consider the segment between two consecutive vertices $q_{k-1}$ and $q_k$ of $\phi$. Observe that the double wedges associated to $q_{k-1}$ and $q_k$ share $3n-1$ points. Thus, the orientation and length of the segment $q_{k-1}q_k$ only depend on the color of the two points that are not shared. In particular, there are only $7$ types of such segments in $\phi(\sigma)$, see Figure~\ref{vector2}. Since none of these segments passes through a grid point in its interior, the origin cannot appear in the interior of a segment. \item\label{nointerior} If the orderings of two points $p$ and $p'$ are equal, then their paths $\phi(\sigma_p)$ and $\phi(\sigma_{p'})$ are equal. If the orderings $\sigma_{p}$ and $\sigma_{p'}$ are not equal then either $(i)$ there is a line of $\mathcal{A}$ separating $p$ and $p'$ or $(ii)$ there is a point $p_i\in S$ such that the vertical line passing through $p_i$ separates $p$ and $p'$. In our proof we will move $p$ along a vertical line, so case $(ii)$ will never occur. Consider a continuous vertical movement of $p$, and consider the two orderings $\pi_1$ and $\pi_2$ before and after a line of $\mathcal{A}$ is crossed. Observe that the only difference between the two orderings is that two consecutive points (say, $p_i$ and $p_{i+1}$) of $\pi_1$ are reversed in $\pi_2$. Thus, the only difference between the two associated $\phi$ curves will be in vertices whose associated interval contains one of the two points (and not the other). Since these two points are consecutive, this situation can only occur at vertices $q_{i+1-3n}$ and $q_{i+1}$ (recall that, for simplicity, indices are taken modulo $6n$). Since the predecessor and successor of these vertices are equal in both curves, the difference between both curves will be two quadrilaterals (and two more in the second half of the curve). We claim that the interior of any such quadrilateral can never contain the origin. Barring symmetries, there are two possible ways in which the quadrilateral is formed, depending on the color of $p_i$ and $p_{i+1}$ (see Figure \ref{crossing1}). Regardless of the case, the interior of any such quadrilateral cannot contain any lattice point, and in particular cannot contain the origin. \item\label{nonzero} Path $\phi(\sigma)$ has a vertex $q_i$ such that $q_i=(0,0)$, or the curve $\phi(\sigma)$ has nonzero {\em winding} with respect to the origin. Intuitively speaking, the winding number of a closed curve $\mathcal{C}$ with respect to a point measures the net number of clockwise revolutions that a point traveling on $\mathcal{C}$ makes around the given point (see a formal definition in~\cite{winding}). By Properties~\ref{noedge} and~\ref{nointerior}, the only way in which $\phi(\sigma)$ passes through the origin is through a vertex $q_i$. If this does not happen, then no point of the curve passes through the origin. Recall that $\phi(\sigma)$ is a closed continuous curve that avoids the origin and is centrally symmetric with respect to it. In topological terms, this is called an {\em odd} function. It is well known that these functions have odd winding (see for example~\cite{topology}, Lemma 25). In particular, we conclude that the winding of $\phi(\sigma)$ cannot be zero.
\end{enumerate} \begin{figure} \caption{The change of $\phi(\sigma)$ when $p_i$ and $p_{i+1}$ are swapped.} \label{crossing1} \end{figure} Thus, imagine a point $p\in \ell$ moving from $y=+\infty$ to $y=-\infty$: when $p$ is located sufficiently low along $\ell$, the $y$ coordinates of the points of $S$ can be ignored, and the resulting order will give first the points to the left of $\ell$ (sorted in decreasing value of the $x$ coordinates) and then the points to the right of $\ell$ (also in decreasing value of the $x$ coordinates). Similarly, the order we obtain when $p$ is sufficiently high will be the exact reverse (see Figure~\ref{fig_extremes}). \begin{figure} \caption{When $p$ is placed sufficiently low (or high), the sorting $\sigma_p$ gives the points ordered according to their $x$ coordinate. The orderings corresponding to points $p_N$ and $p_S$ are depicted with an arrow. In particular, notice that the two orderings are reversed.} \label{fig_extremes} \end{figure} Now consider the translation of $p$ from both extremes and the changes that may happen to the ordering along the translation. By Property~\ref{nointerior}, the curve $\phi(\sigma_p)$ will change when we cross a line of $\mathcal{A}$, but the differences between two consecutive curves will be very small. In particular, the space between the two curves cannot contain the origin. Consider now the instants of time in which point $p$ is at $y=+\infty$ and $y=-\infty$: if either curve contains the origin as a vertex, we are done (since such a vertex is associated with a balanced double wedge). Otherwise, we observe that the orderings must be the reverse of each other, which in particular implies that the associated curves describe exactly the same path, but in reverse direction. By Property~\ref{nonzero} both curves have nonzero winding, which in particular implies that they have opposite winding numbers (i.e., the winding number gets multiplied by $-1$). Since the winding must change sign, we conclude that at some point in the translation the curve $\phi(\sigma_p)$ passed through the origin (otherwise, by Hopf's degree theorem~\cite{needham} they would have the same winding). By Properties~\ref{noedge} and~\ref{nointerior} this can only happen at a vertex of $\phi(\sigma_p)$, implying the existence of a balanced double wedge. \end{proof} Using the point-line duality again, we obtain the equivalent result for balanced sets of lines. \begin{corollary} \label{cor:halvingSegment} Let $\ensuremath{L}$ be a $3$-colored balanced set of $6n$ lines in $\mathbb{R}^2$ inducing a simple arrangement. Then, there always exists a segment intersecting exactly $n$ lines of each color. \end{corollary} \section{Balanced partitions on closed Jordan curves} \label{jordan} In this section we consider balanced $3$-colored point sets on closed Jordan curves. Our aim is to find a bipartition of the set that is balanced and that can be realized by at most two disjoint intervals of the curve. To prove the claim we use the following arithmetic lemma: \begin{lemma} \label{lem:ArithmeticLemma} For a fixed integer $n \geq 2$, any integer $k \in \{1, 2, \dots , n \}$ can be obtained from $n$ by applying functions $f(x) = \lfloor x/2 \rfloor$ and $g(x) = n-x$ at most $2 \log n + O(1)$ times. \end{lemma} \begin{proof} Consider a fixed value $k\leq n$. In the following we show that $k=h_t(h_{t-1}(\cdots h_1(n)\cdots ))$ for some $t\le 2\log n+O(1)$ where each $h_i$ is either $f$ or $g$.
For the purpose, we use the concept of {\em starting points}: we say that an integer $m$ is an $i$-starting point (with respect to $k$) if number $k$ can be obtained from $m$ by applying functions from $\{f,g\}$ at most $i$ times. Note that any number is always a $0$-starting point with respect to itself, and our claim essentially says that $n$ is a $(2 \log n + O(1))$-starting point with respect to any $k\leq n$. Instead of explicitly computing all $i$-starting points, we compute a consecutive interval $\mathcal{V}_i\subseteq \{1,\ldots, n\}$ of starting points. For any $i\geq 0$, let $\ell_i$ and $r_i$ be the left and right endpoints of the interval $\mathcal{V}_i$, respectively. This interval is defined as follows: initially, we set $\mathcal{V}_0=\{k\}$. For larger values of $i$, we use an inductive definition: If $r_i<\lfloor n/2\rfloor$ we apply $f$ as the first operation. That is, any number that, after we apply $f$, falls within $\mathcal{V}_i$ should be in $\mathcal{V}_{i+1}$. Observe that this implies $\ell_{i+1}=2\ell_i$ and $r_{i+1}=2r_i+1$. If $\ell_i>\lfloor n/2\rfloor$ we apply $g$ as the first operation. In this case, we have $\ell_{i+1}=n-r_i$ and $r_{i+1}=n-\ell_i$. By construction, the fact that all elements of $\mathcal{V}_i$ are $i$-starting points implies that elements of $\mathcal{V}_{i+1}$ are $(i+1)$-starting points. This sequence will finish at some index $j$ such that $\ell_j<\lfloor n/2\rfloor<r_j$ (that is, $\lfloor n/2\rfloor$ is a $j$-starting point). In particular, we have that $n$ is a $(j+1)$-starting point since $f(n)=\lfloor n/2\rfloor$. Thus, to conclude the proof it remains to show that $j\leq 2 \log n + O(1)$. Each time we use operator $f$ as the first operation, the length of the interval is doubled. On the other hand, each time we use operator $g$, the length of the interval does not change. Moreover, function $g$ is never applied twice in a row. Thus, after at most $2(\lceil\log n\rceil-1)$ steps, the size of interval $\mathcal{V}_i$ will be at least $2^{\lceil\log n\rceil-1}\geq \lceil n /2 \rceil$, and therefore must contain $\lfloor n/2\rfloor$. \end{proof} Now we can prove the main result of this section. As we explain below, it is enough to prove the result for the case in which the Jordan curve $\gamma$ is the unit circle. Let $S^1$ be the unit circle in $\mathbb{R}^2$. Let $P$ be a $3$-colored balanced set of $3n$ points on $S^1$, and let $R$, $G$, and $B$ be the partition of $P$ into the three color classes. Given a closed curve $\gamma$ with an injective continuous map $f:S^1\to\gamma$ and an integer $c>0$, we say that a set $Q\subseteq \gamma$ is a {\em c-arc set} if $Q=f(Q_S)$ where $Q_S$ is the union of at most $c$ closed arcs of $S^1$. Intuitively speaking, if $\gamma$ has no crossings, $c$ denotes the number of components of $Q$. However, $c$ can be larger than the number of components if $\gamma$ has one or more crossings. \begin{theorem}\label{thm:curveIntervals} Let $\gamma$ be a closed Jordan curve in the plane, and let $P$ be a 3-colored balanced set of $3n$ points on $\gamma$. Then for every positive integer $k\leq n$ there exists a $2$-arc set $P_k \subseteq \gamma$ containing exactly $k$ points of each color. \end{theorem} \begin{proof} Using $f^{-1}$ we can map the points on $\gamma$ to the unit circle, thus a solution on $S^1$ directly maps to a solution on $\gamma$. Hence, it suffices to prove the statement for the case in which $\gamma=S^1$. 
Let $I$ be the set of numbers $k$ such that a subset $P_k$ as in the theorem exists. We prove that $I = \{1, \ldots, n\}$ using Lemma~\ref{lem:ArithmeticLemma}. To apply the lemma it suffices to show that $I$ fulfills the following properties: (i) $n \in I$, (ii) If $k \in I$ then $n-k \in I$, and (iii) If $k \in I$ then $\lfloor k/2 \rfloor \in I$. Once we show that these properties hold, Lemma~\ref{lem:ArithmeticLemma} implies that any integer $k$ between $1$ and $n$ must be in $I$, hence $I = \{1, \ldots, n\}$. Property (i) holds because the whole $S^1$ can be taken as a 2-arc set, containing $n$ points of each color, thus $n \in I$. Property (ii) follows from the fact that the complement of any 2-arc set containing exactly $k$ points of each color is a 2-arc set containing exactly $n-k$ points of each color. Thus if there is a 2-arc set guaranteeing that $k \in I$, its complement guarantees that $n-k \in I$. Proving Property (iii) requires a more elaborate argument. Let $\mathcal{A}_k$ be a $2$-arc set containing exactly $k$ points of each color (such a set must exist by hypothesis, since $k \in I$). We assume $S^1$ is parameterized as $(\cos(t), \sin(t))$, for $t \in [0, 2\pi)$. Without loss of generality, we assume that $f(0)\not\in \mathcal{A}_k$ (if necessary we can change the parametrization of $S^1$ by moving the location of the point corresponding to $t=0$ to ensure this). We lift all points of $P$ to $\mathbb{R}^3$ using the \emph{moment curve}, as explained next. Abusing slightly the notation, we identify each point $(\cos(t), \sin(t))$, $t \in [0, 2\pi)$ on $S^1$ with its corresponding parameter $t$. Then, for $t \in S^1$ we define $\gamma(t) = (t, t^2, t^3)$. Also, for any subset $\mathcal{C}$ of $S^1$, we define $\gamma(\mathcal{C}) = \{\gamma(p) | p \in \mathcal{C}\}$. Recall that we assumed that $f(0) \notin \mathcal{A}_k$, thus, $\gamma(\mathcal{A}_k)$ forms two disjoint arc-connected intervals in $\gamma(S^1)$. Next we apply the ham-sandwich theorem to the points in $\gamma(\mathcal{A}_k)$ (disregarding other lifted points of $P$): we obtain a plane $H$ that cuts the three color classes in $\gamma(\mathcal{A}_k)$ in half. That is, if $k$ is even, each one of the open halfspaces defined by $H$ contains exactly $k/2$ points of each color. If $k$ is odd, then we can force $H$ to pass through one point of each color and leave $(k-1)/2$ points of each color in each open halfspace (\cite{Matousek}, Cor. 3.1.3). We denote by $H^+$ and $H^-$ the open halfspaces above and below $H$, respectively. Let $M_1 = H^+ \cap \gamma(\mathcal{A}_k)$ and $M_2 = H^- \cap \gamma(\mathcal{A}_k)$. Note that both $M_1$ and $M_2$ contain exactly $\lfloor k/2 \rfloor$ points of each color class, as desired. To finish the proof it is enough to show that either $\gamma^{-1}(M_1)$ or $\gamma^{-1}(M_2)$ is a $2$-arc set. Since $\gamma(\mathcal{A}_k)$ has two connected components and lies on the moment curve, we conclude that any hyperplane (in particular $H$) can intersect $\gamma(\mathcal{A}_k)$ in at most $3$ points. Thus, the total number of components of $M_1 \cup M_2$ is at most 5. This is also true for the preimages $\gamma^{-1}(M_1) \cup \gamma^{-1}(M_2)$. Then, either $\gamma^{-1}(M_1)$ or $\gamma^{-1}(M_2)$ must form a $2$-arc set containing exactly $\lfloor k/2 \rfloor$ points of each color. Thus, $\lfloor k/2 \rfloor\in I$ as desired.
\end{proof} Our approach generalizes to $c$ colors: if $P$ contains $n$ points of each color on $S^1$, then for each $k\in \{1, \ldots, n\}$ there exists a $(c-1)$-arc set $P_k \subseteq S^1$ such that $P_k$ contains exactly $k$ points of each color. In our approach we lift the points to $\mathbb{R}^3$ because we have three colors, but in the general case we would lift to $\mathbb{R}^c$. We also note that the bound on the number of intervals is tight. Consider a set of points in $S^1$ in which the points of the first $c-1$ colors are contained in $c-1$ disjoint arcs (one for each color), and each two neighboring disjoint arcs are separated by $n/(c-1)$ points of the $c^{th}$ color. Then, if $k < n/(c-1)$, it is easy to see that we need at least $c-1$ arcs to get exactly $k$ points of each color. We note that there exist several results in the literature that are similar to Theorem~\ref{thm:curveIntervals}. For example, in~\cite{Stromquist1985} they show that given $k$ probability measures on $S^1$, we can find a $c$-arc set whose measure is exactly $1/2$ in the $c$ measures. The methods used to prove their result are topological, while our approach is combinatorial. Our result can also be seen as a generalization of the well-known \emph{necklace theorem} for closed curves~\cite{Matousek}. \section{$L$-lines in the plane lattice}\label{section:lattice} We now consider a balanced partition problem for $3$-colored point sets in the integer plane lattice $\mathbb{Z}^2$. For simplicity, we will refer to $\mathbb{R}^2$ and $\mathbb{Z}^2$ as the plane and the lattice, respectively. Recall that a set of points in the plane is said to be in \emph{general position} if no three of them are collinear. When the points lie in the lattice, the expression is used differently: we say instead that a set of points $S$ in the lattice is in \emph{general position} when every vertical line and horizontal line contains at most one point from $S$. An \emph{$L$-line with corner} $q\in\mathbb{R}^2$ is the union of two different rays with common apex $q$, each of them being either vertical or horizontal. An $L$-line partitions the plane into two regions (Figure~\ref{fig:L-line} shows a balanced $L$-line with apex $q$). Since we look for balanced $L$-lines, we will only consider $L$-lines that do not contain any point of $S$. Note that an $L$-line can always be slightly translated so that its apex is not in the lattice, thus its rays do not go through any lattice point. \begin{figure} \caption{A balanced set of $18$ points in the integer lattice with a nontrivial balanced $L$-line.} \label{fig:L-line} \end{figure} $L$-lines in the lattice often play the role of regular lines in the Euclidean plane. For example, the classic ham-sandwich theorem (in its discrete version) bisects a 2-colored finite point set by a line in the plane. Uno et al.~\cite{Kano2009} proved that, when points are located in the integer lattice, there exists a bisecting $L$-line as well. Recently, the following result has been proved for bisecting lines in the plane. \begin{theorem}[Bereg and Kano, \cite{Bereg2012}]\label{thm:balancedLine} Let $S$ be a 3-colored balanced set of $3n$ points in general position in the plane. If the convex hull of $S$ is monochromatic, then there exists a nontrivial balanced line. \end{theorem} As a means to further show the relationship between lines in the plane, and $L$-lines in the lattice, the objective of this section is to extend Theorem~\ref{thm:balancedLine} to the lattice. 
Replacing the term {\em line} by {\em $L$-line} in the above result does not suffice (see a counterexample in Figure~\ref{fig:diagonal}). In addition we must also use the orthogonal convex hull. \begin{figure} \caption{A balanced set of 3-colored points in the plane lattice. Any region determined by an $L$-line that contains points of all three colors fully contains a color class, hence this problem instance does not admit a nontrivial balanced $L$-line.} \label{fig:diagonal} \end{figure} \begin{theorem}\label{th:2lattice} Let $S$ be a 3-colored balanced set of $3n$ points in general position in the integer lattice. If the orthogonal convex hull of $S$ is monochromatic, then there exists a nontrivial balanced $L$-line. \end{theorem} \begin{proof} Recall that a set $S\subset \mathbb{R}^2$ is said to be {\em orthoconvex} if the intersection of $S$ with every horizontal or vertical line is connected. The {\em orthogonal convex hull} of a set $S$ is the intersection of all connected orthogonally convex supersets of $S$. Without loss of generality, we assume that the points on the orthogonal convex hull are red. We use a technique similar to that described in the proof of Theorem~\ref{th:bisectingdoubleWedge}; that is, we will create a sequence of orderings, and associate a polygonal curve to each such ordering. As in the previous case, we show that a curve can pass through the origin only at a vertex, which will correspond to the desired balanced $L$-line. Finally, we find two orderings that are reversed, which implies that some intermediate curve must pass through the origin. The difficulty in the adaptation of the proof lies in the construction of the orderings. This is, to the best of our knowledge, the first time that such an ordering is created for the lattice. Given a point $p\in S$ we define the $0$-ordering of $p$ as follows. Consider the points above $p$ (including $p$) and sort them by decreasing $y$-coordinate, i.e., from top to bottom. Let $(p_1,\ldots, p_{j})$ be the sorting obtained (notice that $p_j=p$). The remaining points (i.e., those strictly below $p$) are sorted by increasing $x$-coordinate, i.e., from left to right. Let $\sigma_{p,0}$ denote the sorting obtained. Similarly, we define the $\pi/2$, $\pi$ and $\frac{3\pi}{2}$-sided orderings $\sigma_{p,{\frac{\pi}{2}}}$, $\sigma_{p,\pi}$ and $\sigma_{p,{\frac{3\pi}{2}}}$, respectively. Each ordering can be obtained by computing $\sigma_{p,0}$ after having rotated clockwise the point set $S$ by $\pi/2$, $\pi$ or $3\pi/2$ radians, respectively. As an example, Figure~\ref{projection} shows $\sigma_{p,{\frac{3\pi}{2}}}$. Let $\mathcal{O}=\{\sigma_{p,i}\,|\, p\in S, i\in \{0,\frac{\pi}{2},\pi,\frac{3\pi}{2}\}\}$ be the collection of all such orderings of $S$. \begin{figure} \caption{The $\frac{3\pi}{2}$-sided ordering $\sigma_{p,\frac{3\pi}{2}}$.} \label{projection} \end{figure} By construction, any prefix of an ordering of $\mathcal{O}$ corresponds to a set of points that can be separated with an $L$-line. Likewise, any $L$-line will appear as a prefix of some sorting $\sigma\in \mathcal{O}$ (for example, one of the sortings associated with the apex of the $L$-line). Given a sorting $\sigma=(p_1, \ldots, p_{3n})\in\mathcal{O}$, we associate it with a polygonal curve in the lattice as follows: for every $k \in \{1, \ldots, 3n-1\}$ let $b_k$ and $g_k$ be the number of blue and green points in $\{p_1,\ldots, p_k\}$, respectively. Further, define the point $q_k := (3b_k -k, 3g_k-k)$. Based on these points we define a polygonal curve $\phi(\sigma)=(q_1,\ldots, q_{3n-1}, -q_1, \ldots, -q_{3n-1})$.
Similarly to the construction of Theorem~\ref{th:bisectingdoubleWedge}, the fact that $q_k=(0,0)$ for some $1 \leq k\leq 3n-1$ is equivalent to the fact that the corresponding $L$-line is balanced. Therefore the goal of the proof is to show that there is always some ordering in $\mathcal{O}$ for which some $k$ has $q_k=(0,0)$. We observe several important properties of $\phi(\sigma)$: \begin{enumerate} \item $\phi(\sigma)$ is centrally symmetric (w.r.t. the origin). This follows from the definition of $\phi$. \item \label{prop:int_segment} The interior of any segment $q_iq_{i+1}$ of $\phi(\sigma)$ cannot contain the origin. The segment connecting two consecutive vertices in $\phi(\sigma)$ only depends on the color of the added element. Thus, there are only $3$ possible types of segments, see Figure~\ref{segments}. Since none of these segments passes through a grid point in its interior, the origin cannot appear in the interior of a segment. \item \label{prop:winding} For any $\sigma \in \mathcal{O}$, we have $q_1=(-1,-1)$, and $q_{3n-1}=(1,1)$. Notice that $q_1$ corresponds to an $L$-line having exactly one point on its upper/left side, while $q_{3n-1}$ corresponds to an $L$-line leaving exactly one point on its lower/right side. In particular, these points must belong to the orthogonal convex hull, and thus must be red. That is, $\phi(\sigma)$ is a continuous polygonal curve that starts at $(-1,-1)$ and travels to $(1,1)$; being centrally symmetric, it then returns to $(-1,-1)$. Thus, as in the proof of Theorem~\ref{th:bisectingdoubleWedge} we conclude that either $\phi(\sigma)$ passes through the origin or it has nonzero winding. \end{enumerate} \begin{figure} \caption{The three possible types of segments $q_{k-1}q_k$ of $\phi(\sigma)$.} \label{segments} \end{figure} Let $\sigma_x =(x_1, x_2, \ldots, x_{3n})$ be the points of $S$ sorted from left to right (analogously, $\sigma_y=(y_1, y_2, \ldots, y_{3n})$ for the points sorted from bottom to top). Observe that $\sigma_x = \sigma_{x_{3n},\pi/2}$ and $\sigma_y= \sigma_{y_{3n},{\pi}}$ and their reverses are $\sigma^{-1}_x=\sigma_{x_{1}, \frac{3\pi}{2}}$ and $\sigma^{-1}_y=\sigma_{y_1, 0}$, respectively. Analogously to the proof of Theorem~\ref{th:bisectingdoubleWedge}, our aim is to transform $\phi(\sigma_y)$ into the curve of the reverse ordering through a series of small transformations of polygonal curves, in such a way that the first and last curves have winding numbers of different sign. If we imagine the succession of curves as a continuous transformation, a change in the sign of the winding number can only occur if at some point of the transformation the origin is contained in some curve. We will argue below that the only way in which the transformation passes through the origin is by having it as a vertex of one of the intermediate curves, which immediately leads to a balanced $L$-line. Recall from Property~\ref{prop:int_segment} that the origin cannot be in the interior of an edge of any intermediate curve. Thus, the only way the origin can be swept during the transformation is (i) if it is a vertex of one of the curves, or (ii) if it is contained in the space between two consecutive curves (``swept by the curves''). In the remainder of the proof we show that the latter case cannot occur, that is, the origin is never swept by the local changes between two consecutive curves. This implies that the origin must be a vertex of some curve $\phi(\sigma)$ for some intermediate ordering $\sigma$, in turn implying that the associated $L$-line would be balanced.
The transformation we use is the following: \begin{equation} \begin{split} \phi(\sigma_y)=\phi(\sigma_{y_{3n},\pi}) \rightarrow \phi(\sigma_{y_{3n-1},\pi}) \rightarrow \cdots \rightarrow \phi(\sigma_{y_{1},\pi}) \rightarrow \phi(\sigma^{-1}_x)\\ = \phi(\sigma_{x_{1},\frac{3\pi}{2}}) \rightarrow \cdots \rightarrow \phi(\sigma_{x_{3n},\frac{3\pi}{2}}) \rightarrow \phi(\sigma_{y_1,0})=\phi(\sigma_y^{-1}) \end{split} \end{equation} First we give a geometric interpretation of this sequence. Imagine sweeping the lattice with a horizontal line (from top to bottom). At any point of the sweep, we sort the points below the line from bottom to top, and the remaining points are sorted from right to left. By doing so, we would obtain the orderings $\sigma_y=\sigma_{y_{3n},\pi}, \ldots, \sigma_{y_{1},\pi}$, and (once we reach $y=-\infty$) $\sigma_x^{-1}$ (i.e., the reverse of $\sigma_x$). Afterwards, we rotate the line clockwise by $\pi/2$ radians, keeping all points of $S$ to the right of the line, and sweep from left to right. During this second sweep, we sort the points to the right of the line from right to left, and those to the left from top to bottom. By doing this we would obtain the orderings $\sigma^{-1}_x = \sigma_{x_{1},\frac{3\pi}{2}}, \ldots, \sigma_{x_{3n},\frac{3\pi}{2}}$. Once all points have been swept, this process will finish with the ordering $\sigma_{y_1,0}$, which is the reverse of $\sigma_y$. Thus, to complete the proof it remains to show that the difference between any two consecutive orderings $\sigma$ and $\sigma'$ in the above sequence cannot contain the origin. Observe that two consecutive orderings differ in at most the position of one point (the one that has just been swept by the line). Thus, there exist two indices $s$ and $t$ such that $s<t$ and $\sigma=(p_1, \ldots, p_s, \ldots, p_t, \ldots, p_{3n})$, and $\sigma'=(p_1, \ldots, p_s, p_t, p_{s+1}, \ldots p_{t-1}, p_{t+1}, \ldots, p_{3n})$ (that is, point $p_t$ moved immediately after $p_s$). Abusing slightly the notation, we denote by $(q_1, \ldots, q_{3n-1},-q_1, \ldots, -q_{3n-1})$ the vertices of $\phi(\sigma)$ (respectively, $(q'_1, \ldots, q'_{3n-1},-q'_1, \ldots, -q'_{3n-1})$ denotes the vertices of $\phi(\sigma')$). Since only one point has changed its position in the ordering, we can explicitly obtain the differences between the two orderings. Given an index $i\leq 3n-1$, we define $c_i=(-1,-1)$ if $p_i$ is red, $c_i=(2,-1)$ if $p_i$ is blue, or $c_i=(-1,2)$ if $p_i$ is green. Then, we have the following relationship between the vertices of $\phi(\sigma)$ and $\phi(\sigma')$. \begin{eqnarray}\label{eqntrans} q'_i= \begin{cases} q_i &\mbox{if } i\in \{1, \ldots, s\} \cup \{t , \ldots, 3n-1\} \\ q_{i-1} +c_t &\mbox{if } i\in \{s+1, \ldots, t-1\} \end{cases} \end{eqnarray} Observe that the ordering of the first $s+1$ points and the last $t-1$ points is equal in both permutations. In particular, the points $q_1$ to $q_s$ and $q_{t}$ to $q_{3n-1}$ do not change between consecutive polygonal curves. For the intermediate indices, the transformation only depends on the color of $p_t$; it consists of a translation by the vector $c_t$. We now show that the origin cannot be contained in the interior of the quadrilateral $Q_i$ of vertices $q_{i-1},q_i,q'_{i}$, and $q'_{i+1}$, for any $i\in\{s,\ldots, t\}$. Consider the case in which $i\in \{s+2, \ldots , t-1\}$ (that is, neither $i-1$ nor $i+1$ satisfy the first line of Equation~\ref{eqntrans}). 
Observe that the shape of the quadrilateral only depends on the colors of $p_i$ and $p_t$. It is easy to see that when $p_i$ has the same color as $p_t$, $Q_i$ is degenerate and cannot contain the origin. Thus, there are six possible color combinations for $p_i$ and $p_t$ that yield three non-degenerate different quadrilaterals (see Figure~\ref{fig_square}). \begin{figure} \caption{Three options for quadrilateral $Q_i$ depending on the colors of $p_i$ and $p_t$. From left to right $p_i$ is green, blue, and red whereas $p_t$ is red, green, and blue, respectively. The case in which $p_i$ and $p_t$ have reversed colors results in the same quadrilaterals. In all cases, the points in the lattice that are included in the quadrilateral are marked with a cross, thus those are the only possible locations for the origin to be contained in $Q_i$.} \label{fig_square} \end{figure} Assume that for some index $i$ we have that $p_t$ is red, $p_i$ is green (as shown in Figure~\ref{fig_square}, left) and that the quadrilateral $Q_i$ contains the origin. Note that in this case, $q_i$ must be either $(0,1)$ or $(0,2)$. From the definition of the $x$-coordinate of $q_i$, we have $3b_i-i= 0$, and thus we conclude that $i \equiv 0 \pmod 3$. Consider now the $y$-coordinate of $q_i$; recall that this coordinate is equal to $3g_i - i$, which cannot be $1$ or $2$ whenever $i \equiv 0 \pmod 3$. The proof for the other quadrilaterals is identical; in all cases, we show that either the $x$- or the $y$-coordinate of a vertex of $Q_i$ would have to simultaneously satisfy $(i)$ it is congruent to zero modulo 3, and $(ii)$ it is either $1$ or $2$, resulting in a similar contradiction. Thus, in order to complete the proof it remains to consider the cases in which $i=s+1$ or $i=t-1$. In the former case we have $q_{i-1}=q'_{i}$, and $q_{i}=q'_{i+1}$ in the latter. In either case, quadrilateral $Q_i$ collapses to a triangle, and we have three non-trivial possible color combinations (see Figure~\ref{fig_triangle}). The same methodology shows that none of them can sweep through the origin. \begin{figure} \caption{When $i=s+1$ or $i=t-1$ the corresponding quadrilateral $Q_i$ collapses to a triangle. As in the previous case, this results in three different non-degenerate cases, depending on the colors of $p_i$ and $p_t$.} \label{fig_triangle} \end{figure} That is, we have transformed the curve from $\phi(\sigma_{y_{3n},\pi})$ to its reverse $\phi(\sigma_{y_1, 0})$ in a way that the origin cannot be contained between two successive curves. However, since these curves have winding numbers of different sign, at some point in our transformation one of the curves must have passed through the origin. The previous arguments show that this cannot have happened in the interior of an edge or in the interior of a quadrilateral between edges of two consecutive curves. Thus it must have happened at a vertex of $\phi(\sigma)$, for some $\sigma\in\mathcal{O}$. In particular, the corresponding $L$-line must be balanced. \end{proof} \section{Concluding remarks}\label{section:conclusion} In this paper we have studied several problems about balanced partitions of $3$-colored sets of points and lines in the plane. As a final remark we observe that our results on double wedges can be viewed as partial answers to the following interesting open problem: Find all $k$ such that, for any $3$-colored balanced set of $3n$ points in general position in the plane, there exists a double wedge containing exactly $k$ points of each color.
We have given here an affirmative answer for $k=1,n/2$ and $n-1$ (Theorems \ref{thm:doubleWedge111} and \ref{th:bisectingdoubleWedge}). Theorem \ref{thm:curveIntervals} gives the affirmative answer for all values of $k$ under the constraint that points are in convex position. \noindent{\bf Acknowledgments} Most of this research took place during the \emph{5th International Workshop on Combinatorial Geometry} held at UPC-Barcelona from May 30 to June 22, 2012. We thank In\^{e}s Matos, Pablo P\'{e}rez-Lantero, Vera Sacrist\'{a}n and all other participants for useful discussions. F.H., M.K., C.S., and R.S. were partially supported by projects MINECO MTM2012-30951, Gen. Cat. DGR2009SGR1040 and DGR2014SGR46, and ESF EUROCORES programme EuroGIGA, CRP ComPoSe: MICINN Project EUI-EURC-2011-4306. F.H. and C.S. were partially supported by project MICINN MTM2009-07242. M.K. also acknowledges the support of the Secretary for Universities and Research of the Ministry of Economy and Knowledge of the Government of Catalonia and the European Union. R.~I.~S. was funded by Portuguese funds through CIDMA and FCT, within project PEst-OE/MAT/UI4106/2014, and by FCT grant SFRH/BPD/88455/2012. \section*{References} \end{document}
\begin{document} \title{Some remarks on Hall algebra of bound quiver} \author{Kostiantyn Iusenko} \address[Kostiantyn Iusenko]{Departamento de Matem\'{a}tica Univ. de S\~ao Paulo, Caixa Postal 66281, S\~ao Paulo, SP 05315-970 -- Brazil} \email{[email protected]} \author{Evan Wilson} \address[Evan Wilson]{Departamento de Matem\'{a}tica Univ. de S\~ao Paulo, Caixa Postal 66281, S\~ao Paulo, SP 05315-970 -- Brazil} \email{[email protected]} \begin{abstract} In this paper we describe the twisted Hall algebra of a bound quiver with small homological dimension. The description is given in terms of the quadratic form associated with the corresponding bound quiver. \end{abstract} \maketitle \section*{Introduction} Let $Q$ be a quiver, $\mathfrak g$ be a symmetric Kac-Moody algebra associated with $Q$, and $\Rep{q}$ be the category of finite-dimensional representations of $Q$ over $\mathbb F_q$, the field with $q=p^n$ elements. In his remarkable papers \cite{rin,rin2} Ringel proved that if $Q$ is a Dynkin quiver then there exists an isomorphism between the (twisted) Hall algebra associated with $\Rep{q}$ and the positive part $U_t^+(\mathfrak{g})$ of the quantized universal enveloping algebra with $t^2=q$. In this paper we consider the case of a quiver $Q$ bound by an admissible ideal $I$. To each such quiver we associate an associative algebra $U_t^+(Q)$ given by generators and relations. In the case when $\mathbb F_q Q/I$ is a representation directed algebra of global dimension at most 2 we show that there exists a homomorphism $\rho$ between $U_t^+(Q)$ and the corresponding twisted Hall algebra $\Ht{\Rep{q}}$, in which $\Rep{q}$ is a category of finite-dimensional bound representations of $Q$. In the case when $q\neq 2$ we show that $\rho$ is an isomorphism (see Section 2 for details). In the final section we state some concrete examples of bound quivers and corresponding twisted Hall algebras and compare these examples with the ones obtained in \cite{ChDe}. \section{Preliminaries} \subsection{Hall algebras} Let $\mathcal A$ be an $\mathbb F_q$-linear finitary category. We define the (Ringel) Hall algebra $\Hu{\mathcal A}$ to be the $\mathbb C$-vector space spanned by elements $[M]$ with $M$ in $\textrm{Iso}(\mathcal A)$ (the set of isomorphism classes of the objects in $\mathcal A$) with a product defined by: \begin{equation*} [M] \ast [N]=\sum_{R\in \textrm{Iso}(\mathcal{A})} F_{M,N}^R [R] \end{equation*} where $F_{M,N}^R$ denotes the number of subobjects $X \subset R$ such that $X\simeq N$ and $R/X \simeq M$. Let $K_0(\mathcal A)$ be the Grothendieck group of the category $\mathcal A$. By $\langle -,- \rangle: K_0(\mathcal A) \times K_0(\mathcal A) \rightarrow \mathbb Z$ we denote the Euler form of $\mathcal A$: $$ \langle M,N\rangle = \sum_{k=0}^{\infty}(-1)^k \dim \mathrm{Ext}^k(M,N). $$ The twisted version of the product is defined by \begin{equation*}\label{hallp} [M]\cdot [N]=q^{\frac{1}{2}\Sc{M}{N}}\sum_{R\in \textrm{Iso}(\mathcal{A})} F_{M,N}^R [R] \end{equation*} Both products give $\Hu{\mathcal{A}}$ an associative algebra structure (see \cite{rin}). The twisted version of the Hall algebra will be denoted by $\Ht{\mathcal A}$. We mention that both $\Hu{\mathcal{A}}$ and $\Ht{\mathcal{A}}$ possess a grading by the Grothendieck group $K_0(\mathcal A)$ (see \cite{rin}). For $\alpha\in K_0(\mathcal A)$ we denote the degree-$\alpha$ pieces by $\Hu{\mathcal{A}}[\alpha]$ and $\Ht{\mathcal{A}}[\alpha]$, respectively. For more information on Hall algebras the interested reader can see, for example, \cite{OS} and references therein.
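As a simple illustration of these two products, consider for instance the category of finite-dimensional representations over $\mathbb F_q$ of the quiver $1\to 2$ (representations of quivers are recalled in Section 2). Its indecomposable objects are the simple representations $S_1$, $S_2$ and the indecomposable representation $P$ with dimension vector $(1,1)$, and counting subobjects directly from the definition of $F_{M,N}^R$ gives $$[S_1]\ast[S_2]=[P]+[S_1\oplus S_2], \qquad [S_2]\ast[S_1]=[S_1\oplus S_2].$$ Since $\langle S_1,S_2\rangle=-\dim \mathrm{Ext}^1(S_1,S_2)=-1$ and $\langle S_2,S_1\rangle=0$, the corresponding twisted products are $$[S_1]\cdot[S_2]=q^{-\frac{1}{2}}\left([P]+[S_1\oplus S_2]\right), \qquad [S_2]\cdot[S_1]=[S_1\oplus S_2].$$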
\subsection{Lie Algebras Associated with Unit Forms} Recall the class of Lie algebras associated with unit forms considered by Kosakowska \cite{kos}. A unit form is a mapping $T:\Zn^{I}\to \Zn$ for some finite index set $I$ which is of the following form: \begin{equation*} T(\beta)=\sum_{i\in I} \beta_i^2+\sum_{(i,j)\in I\times I}a_{ij}\beta_i\beta_j \end{equation*} where $a_{ij}\in \Zn$. An example is given by the form associated to a quiver defined in Section \ref{quiver}. A root of $T$ is a vector $\beta \in \Zn^{I}$ satisfying $T(\beta)=1$. A root $\beta$ is called positive if $\beta_i\geq 0$ for all $i \in I$. We denote the set of roots of $T$ by $\Delta_T$ and the set of positive roots of $T$ by $\Delta^+_T$. Define the Lie algebra $L(T)$ to be the free Lie algebra with generators $\{e_i: \ i\in I\}$ modulo the ideal generated by the set of all elements of the form \begin{equation} \label{multicomm} [e_{i_1},[e_{i_2},[\cdots [e_{i_{k-1}},e_{i_k}]\cdots]]] \end{equation} such that $$\sum_{j=2}^k\alpha_{i_j}\in \Delta_T,\quad \mbox{but}\quad \sum_{j=1}^k\alpha_{i_j}\notin \Delta_T,$$ where $\{\alpha_i;\ i\in I\}$ denotes the standard basis of $\Zn^I$. The expression \eqref{multicomm} appearing above is called a standard multicommutator. This algebra receives an $\Nn^{I}$-grading where each generator $e_i$ has degree $\alpha_i$. \begin{rem} The Lie algebra $L(T)$ was shown \textup{(\cite[Proposition 4.4]{kos})} to be isomorphic to $G^+(T)$--the positive part of the Lie algebra studied by Barot, Kussin, and Lenzing in \cite{bkl}--in the case that $T$ is both weakly positive and positive semidefinite. \end{rem} \begin{rem} Assuming that the form $T$ is positive definite, the minimal set of defining relations in the algebra $L(T)$ was constructed in \textup{\cite[Theorem 1.1]{kos}}. \end{rem} \subsection{Quantized Lie Algebras associated with Unit forms} Let $T$ be a unit form and $\Delta^+_{T}$ its set of positive roots. Let $\langle \_,\_ \rangle_T : \Zn^{I}\times \Zn^{I}\to \Zn$ be the bilinear form associated with $T$: $$ \langle \beta,\beta' \rangle_T:=\sum_{i\in I}\beta_i\beta'_i+\sum_{(i,j)\in I\times I}a_{ij}\beta_i\beta'_j. $$ Set $$ \langle \beta,\beta' \rangle_T^{0}=\sum_{i\in I}\beta_i\beta'_i+\sum_{(i,j)\in I\times I}(a_{ij})_-\beta_i\beta'_j, $$ where $(a_{ij})_-=\min\{a_{ij},0\}$. We consider the function $\nu:\Zn^{I}\times \Zn^{I}\to \Zn$ defined by \begin{equation} \label{nuDef} \nu(\beta,\beta')=\delta\Big (\sum_{(i,j)\in I\times I}(a_{ij})_-\beta_i\beta'_j\Big ) \langle\beta, \beta'\rangle_T^0, \end{equation} in which $\delta:\mathbb Z\rightarrow \{0,1\}$ is defined as $\delta(0)=1$ and $\delta(x)=0$ otherwise. \noindent The free associative algebra $\mathbb C\langle e_i; i\in I\rangle$ is $\Nn^{I}$-graded by giving $e_i$ degree $\alpha_i$. Fixing $t\in \Cn$ we define the map \begin{equation} \label{adDef} \text{ad}^t_{x}(y)=xy-t^{\langle \beta,\beta'\rangle_T-\langle \beta',\beta\rangle_T +2\nu(\beta',\beta)-2\nu(\beta,\beta')}yx \end{equation} for homogeneous elements $x$ and $y$ of degrees $\beta$ and $\beta'$, respectively. This map depends on the form $T$, but in what follows we use the notation $\text{ad}^t_{x}(y)$, assuming that it is always clear which form is meant.
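To illustrate the definition of $\nu$ and the exponent appearing in \eqref{adDef}, consider for instance the unit form $T(\beta)=\beta_1^2+\beta_2^2-\beta_1\beta_2$ on $\Zn^{\{1,2\}}$, so that $a_{12}=-1$ and $a_{21}=0$. Then $\nu(\alpha_1,\alpha_2)=\nu(\alpha_1,\alpha_1+\alpha_2)=0$, since in both cases the argument of $\delta$ equals $(a_{12})_-=-1$, whereas $\nu(\alpha_1+\alpha_2,\alpha_1)=\delta(0)\cdot 1=1$; the factor $\delta(\cdot)$ thus switches $\nu$ off precisely when the two arguments interact through a negative coefficient $a_{ij}$. Consequently, for homogeneous $x$ of degree $\alpha_1$ and $y$ of degree $\alpha_1+\alpha_2$ the exponent in \eqref{adDef} equals $\langle\alpha_1,\alpha_1+\alpha_2\rangle_T-\langle\alpha_1+\alpha_2,\alpha_1\rangle_T+2\nu(\alpha_1+\alpha_2,\alpha_1)-2\nu(\alpha_1,\alpha_1+\alpha_2)=0-1+2-0=1$, so that $\text{ad}^t_{x}(y)=xy-t\,yx$.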
Note that if $\langle \_,\_\rangle_T=\langle \_,\_\rangle_T^0$ then \eqref{adDef} gives $$\text{ad}_x^t(y)=xy-t^{\pm \langle \beta,\beta'\rangle_T^0\pm\langle \beta',\beta\rangle_T^0}yx.$$ This is slightly different from the quantized adjoint action: $xy-t^{\langle\beta,\beta'\rangle_T^0+\langle\beta',\beta\rangle_T^0}yx$. Our version is useful for studying the Hall algebra associated to a quiver, as we will see in the next section. Consider the following family of associative algebras $$ U_t^+(T)=\mathbb C\langle e_i; i\in I\rangle / \mathcal R^t_T, $$ where $\mathcal R^t_T$ is the two-sided ideal generated by the set of all elements \begin{equation}\label{adRel} \text{ad}^t_{e_{i_1}}(\text{ad}^t_{e_{i_2}}(\cdots \text{ad}^t_{e_{i_{k-1}}}(e_{i_k})\cdots)), \end{equation} such that $\sum_{j=2}^k\alpha_{i_j}\in \Delta^+_T$ but $\sum_{j=1}^k\alpha_{i_j}\notin \Delta^+_T$. \begin{rem} \label{Rem:spec} The specialization of $U_t^+(T)$ at $t=1$ is isomorphic to the universal enveloping algebra of $L(T)$. \end{rem} \begin{rem} Using Remark 1 one can show that in the case where $T$ is both weakly positive and positive semidefinite, $U_t^+(T)$ gives a quantization of a positive part of $G^+(T)$ studied in \cite{bkl}. \end{rem} Here and further, for an $\mathbb N^I$ graded algebra $A$, we denote the $\alpha$-homogeneous piece by $A[\alpha]$ for $\alpha\in \mathbb N^I$. \begin{prop} \label{equalDim} Let $\alpha \in \Nn^I$ and $t$ be any complex number then $$\dim_{\mathbb C} U_t^+(T)[\alpha]=\dim_{\mathbb C}U_1^+(T)[\alpha]=\dim_{\mathbb C} U(L(T))[\alpha].$$ \end{prop} \begin{proof} One can use standard arguments coming from \cite{rein} or \cite{rin2} for instance. Denote by $A$ the Laurent polynomial ring $\mathbb C[u,u^{-1}]$. Then $A/(u-1) \cong \mathbb C$ under the isomorphism of evaluation at $u=1$. Let $$U_A^+(T):=A \langle e_i; i\in I\rangle / \mathcal R^u_T.$$ We have: $$ U_1^+(T)= A/(u-1) \otimes_A U_A^+(T) \cong \mathbb C \otimes_A U_A^+(T). $$ But $U_1^+(T)=U(L(T))$ and hence has a PBW basis with finite dimensional $\mathbb{N}^{I}$ homogeneous subspaces, which follows directly from Remark \ref{Rem:spec} since $U_1^+(T)$ is exactly the specialization of $U_t^+(T)$ at $t=1$. Also, $U_A^+(T)[\alpha]$ is finitely generated as an $A$-module, $\alpha \in \mathbb N^I$. It remains to be shown that $U_A^+(T)[\alpha]$ is free as an $A$-module, since in that case we have: $$ U_1^+(T)[\alpha] \cong \mathbb C\otimes_A U_A^+(T)[\alpha] \cong (\mathbb C\otimes_A A^s) \cong \mathbb C^s,\quad \mbox{for some}\ \ s\in \mathbb N_0. $$ Let $U^*=\mathbb{C}(u)\otimes_A U_A^+(T)$. Then $U_A^+(T)[\alpha]$ is a submodule of $U^*[\alpha]$ considered as $A$-modules, hence is torsion free. Therefore, since $A$ is a principal ideal domain we see that $U_A^+(T)[\alpha]$ is free, which completes the proof. \end{proof} \section{Category $\Rep{q}$ and its twisted Hall algebra} \subsection{Category of bound representations of a quiver} Let $Q$ be a quiver given by a set of vertices $Q_0$ and a set of arrows $Q_1$ denoted by $\rho:i\rightarrow j$ for $i,j\in Q_0$. We only consider finite quivers without oriented cycles, loops, or multiple arrows. A finite-dimensional $\mathbb F_q$-representation of $Q$ is given by a tuple \[V=((V_i)_{i\in Q_0},(V_{\rho})_{\rho \in Q_1}:V_i\rightarrow V_j)\] of finite-dimensional $\mathbb F_q$-vector spaces and $\mathbb F_q$-linear maps between them. 
A morphism of representations $f:V\rightarrow W$ is a tuple $f=(f_i:V_i\rightarrow W_i)_{i\in Q_0}$ of $\mathbb F_q$-linear maps such that $W_{\rho}f_i=f_{j}V_{\rho}$ for all $\rho:i\rightarrow j$. Let $\mathbb F_q Q$ be the path algebra of $Q$ and $I$ be an admissible ideal in $\mathbb F_q Q$ (see \cite[Section II.1]{ass} for precise definitions). Supposing that $I$ is generated by a minimal set of relations, we denote by $r(i,j)$ the number of relations with starting vertex $i$ and terminating vertex $j$. We say that the representation $V$ of $Q$ is bound by $I$ if $V_r=0$ for any $r\in I$ (that is, all the relations in $I$ are satisfied). To simplify the notation we denote by $\Rep{q}$ the abelian category of representations of $Q$ over $\mathbb F_q$ bound by an ideal $I$, assuming that it is always clear which ideal we have. One can show that $\Rep{q}$ is equivalent to the category of left modules over $\mathbb F_qQ/I$, analogously to the case of unbound quivers (see \cite[Section III.1]{ass}). Note that, in general, $\Rep{q}$ is not hereditary, i.e. we can have $\mathrm{Ext}^{k}(V,W)\neq 0$ for some $k>1$ and some representations $V$ and $W$. The Grothendieck group $K_0(\Rep{q})$ can be naturally identified with $\mathbb{Z}^{Q_0}$. The dimension $\underline{\dim} V\in K_0(\Rep{q})$ of $V$ is defined by $\underline{\dim}V=(\dim V_i)_{i\in Q_0}$. \subsection{Representation directed quivers} \label{quiver} In the following we assume that the algebra $\mathbb F_q Q/I$ is of global dimension at most two. In this case, for two representations $V$ and $W$ with $\underline{\dim} V=\beta$ and $\underline{\dim} W=\beta'$, the Euler form of $\Rep{q}$ can be expressed in terms of dimension vectors (see \cite[Proposition 2.2]{bon}) \begin{equation*} \begin{split} \Sc{V}{W}=&\dim\mathrm{Hom}(V, W)-\dim\mathrm{Ext}^1(V,W)+\dim\mathrm{Ext}^2(V,W)\\ =&\sum_{i\in Q_0}\beta_i\beta_i'-\sum_{\rho:i\rightarrow j\in Q_1} \beta_i\beta_j'+\sum_{(i,j)\in Q_0\times Q_0}r(i,j)\beta_i \beta'_j, \end{split} \end{equation*} hence the Euler form gives rise to the following unit form $$ T_Q(\beta):=\Sc{\beta}{\beta}=\sum_{i\in Q_0}\beta_i^2-\sum_{\rho:i\rightarrow j\in Q_1} \beta_i\beta_j+\sum_{(i,j)\in Q_0\times Q_0}r(i,j)\beta_i \beta_j. $$ In what follows we only consider the case when the algebra $\mathbb F_q Q/I$ is representation directed, which, in particular, means that the bound quiver $(Q,I)$ has just a finite number (up to equivalence) of indecomposable representations. In this case we can enumerate the indecomposable representations $V^{(1)},\dots,V^{(n)}$ in such a way that $\mathrm{Ext}^1(V^{(k)}, V^{(l)})=0$ if $k \leq l$, and $\mathrm{Hom}(V^{(k)},V^{(l)})=0$ if $k<l$. \begin{rem} \label{GabAn} Due to \cite{bon}, if $\mathbb F_q Q/I$ is representation directed then the form ${T_Q}$ is weakly positive and the set of its positive roots $\Delta_{T_Q}^+$ is finite. Moreover, by \cite[Theorem 1.3]{dr} we have that if $q\neq 2$ then the assignment $V \mapsto \underline{\dim} V$ gives rise to a bijection between the set of indecomposable representations of $Q$ and $\Delta_{T_Q}^+$ (see also \cite[Theorem 3.3]{bon} for the original statement for algebraically closed fields). \end{rem} We mention also that Ringel showed that for any representation directed algebra there exist Hall polynomials which count the Hall numbers $F_{M,N}^{R}$ (see \cite[Theorem 1]{rin2} for details). Let $S_i$ denote the simple representation of $Q$ at vertex $i\in Q_0$, i.e.
$(S_i)_i=\mathbb F_q$, $(S_i)_j=0$ for $j\neq i$ and $(S_i)_\rho=0$ for all $\rho \in Q_1$. We will need the following \begin{prop} \label{HallCon} Let $M$ be an indecomposable bound representation of $Q$ over $\mathbb F_q$. The following identity holds: $$F^{M\oplus S_i}_{S_i,M}=q^{\nu(m,\alpha_i)-\nu(\alpha_i,m)}F^{M\oplus S_i}_{M,S_i},$$ in which $i \in Q_0$, $m=\underline{\dim}M$, and $\nu$ is the function given by \eqref{nuDef}. \end{prop} \begin{proof} First we mention that for any $V, W, R \in \Rep{q}$ we have $$ F^R_{V,W}=\frac{|\mathrm{Ext}(V,W)_R|}{|\mathrm{Hom}(V,W)|}\cdot \frac{|\mathrm{Aut}(R)|}{|\mathrm{Aut}(V)| \cdot |\mathrm{Aut}(W)|}, $$ in which $\mathrm{Ext}(V,W)_R$ is the subset of $\mathrm{Ext}(V,W)$ parameterising extensions with middle term isomorphic to $R$. \noindent It is straightforward to see that $$|\mathrm{Hom}(M,S_i)|=q^{\nu(m,\alpha_i)}, \quad |\mathrm{Hom}(S_i,M)|=q^{\nu(\alpha_i,m)}.$$ Also we have that $$|\mathrm{Ext}(M,S_i)_{M\oplus S_i}|=|\mathrm{Ext}(S_i,M)_{M\oplus S_i}|=1.$$ Therefore $$ \frac{F^{M\oplus S_i}_{S_i,M}}{F^{M\oplus S_i}_{M,S_i}}=\frac{|\mathrm{Hom}(M,S_i)|}{|\mathrm{Hom}(S_i,M)|}=\frac{q^{\nu(m,\alpha_i)}}{q^{\nu(\alpha_i,m)}}=q^{\nu(m,\alpha_i)-\nu(\alpha_i,m)}. $$ \end{proof} \subsection{Twisted Hall algebra of $\Rep{q}$} Let $Q$ be a bound quiver such that $\mathbb F_q Q/I$ has global dimension at most two. Fixing some $t \in \mathbb C$, consider the quantized Lie algebra $U_t^+(T_Q)$, where $T_Q$ is the unit form associated with $Q$. \begin{rem} Suppose that $i\to j \in Q_1$. Then we have $T_Q(\alpha_i+\alpha_j)=1$, hence $\alpha_i+\alpha_j\in \Delta^+_{T_Q}$, but $2\alpha_i+\alpha_j\notin \Delta^+_{T_Q}$. We also have $\langle \alpha_i,\alpha_j\rangle -\langle \alpha_j,\alpha_i\rangle=-1$ and $\langle \alpha_i,\alpha_i+\alpha_j\rangle -\langle \alpha_i+\alpha_j,\alpha_i\rangle+2=1$. Then we have the following relation in $U_t^+(T_Q)$: \begin{align*} \textup{ad}^t_{e_i}(\textup{ad}^t_{e_i}(e_j))&=\textup{ad}^t_{e_i}(e_ie_j-t^{-1}e_je_i)\\ &=e_i^2e_j-(t+t^{-1})e_ie_je_i+e_je_i^2=0 \end{align*} which is the usual quantum Serre relation for simple roots connected by a single arrow. A similar statement holds for $\textup{ad}^t_{e_j}(\textup{ad}^t_{e_j}(e_i))$. \end{rem} One can also see that if $Q$ is unbound then $U_t^+(T_Q)$ is a quantized universal enveloping algebra of the positive part of the Lie algebra $\mathfrak g$ associated with the quiver $Q$. As proved by Ringel (see \cite{rin}), in this case the assignment $e_i \mapsto [S_i]$ (where $S_i$ is a simple representation at vertex $i\in Q_0$) gives rise to a homomorphism $\rho$ between $U_t^+(T_Q)$ (which equals $U_t^+(\mathfrak g)$ in this case) and $\Ht{\Rep{q}}$ with $t=+\sqrt{q}$. Moreover, if $Q$ has finite type then $\rho$ is an isomorphism. We now prove a similar theorem for the bound case. \begin{thm} Let $Q$ be a quiver and $I$ an admissible ideal in $\mathbb F_q Q$ such that $\mathbb F_qQ/I$ is of global dimension at most 2 and representation directed. Then the assignment $e_i \mapsto [S_i]$, $i\in Q_0$, gives rise to a homomorphism of associative algebras {\normalfont $$\rho:U_t^+(T_Q)\to \Ht{\Rep{q}},$$} with $t=+\sqrt{q}$. Moreover, if $q\neq 2$ then $\rho$ is an isomorphism. \end{thm} \begin{proof} We show that $\rho$ is a homomorphism. We need to show that the relations \eqref{adRel} are satisfied. Let $M$ be an indecomposable representation with dimension $\underline{\dim}M=m$. Suppose that $m+\alpha_i$ is not a root.
Therefore \begin{align*} [M]\cdot[S_i]&=q^{\frac{1}{2}\Sc{m}{\alpha_i}} F_{M,S_i}^{M\oplus S_i} [M \oplus S_i],\\ [S_i]\cdot[M]&=q^{\frac{1}{2}\Sc{\alpha_i}{m}} F_{S_i,M}^{M\oplus S_i} [M \oplus S_i]. \end{align*} By Proposition \ref{HallCon} we have that \begin{align*} [S_i]\cdot[M]&=q^{\frac{1}{2}\Sc{\alpha_i}{m}} q^{\nu(m,\alpha_i)-\nu(\alpha_i,m)} F_{M,S_i}^{M\oplus S_i} [M \oplus S_i]\\ &=q^{\frac{1}{2}\Sc{\alpha_i}{m}+\nu(m,\alpha_i)-\nu(\alpha_i,m)}F_{M,S_i}^{M\oplus S_i} [M \oplus S_i]. \end{align*} Thus we have that $\text{ad}^t_{[S_i]}([M])=0.$ Suppose that $m+\alpha_i$ is a root. Then (since $\mathbb F_qQ/I$ is representation directed) there exists exactly one indecomposable representation with dimension $m+\alpha_i$. Denote this representation by $M_1$. Also, we have that either $\mathrm{Ext}(S_i,M)=0$ or $\mathrm{Ext}(M,S_i)=0$. Assume for definiteness that $\mathrm{Ext}(S_i,M)=0$. Then \begin{align*} [M]\cdot[S_i]&=q^{\frac{1}{2} \Sc{m}{\alpha_i}}\left(F_{M,S_i}^{M\oplus S_i} [M \oplus S_i]+F_{M,S_i}^{M_1}[M_1]\right),\\ [S_i]\cdot[M]&=q^{\frac{1}{2} \Sc{\alpha_i}{m}} F_{S_i,M}^{M\oplus S_i} [ M \oplus S_i]\\ &=q^{\frac{1}{2} \Sc{\alpha_i}{m}+\nu(m,\alpha_i)-\nu(\alpha_i,m)} F_{M,S_i}^{M\oplus S_i} [ M \oplus S_i]. \end{align*} Now we consider $$\text{ad}^t_{[S_i]}([M])=[S_i]\cdot [M]-t^{\Sc{\alpha_i}{m}-\Sc{m}{\alpha_i}+2\nu(m,\alpha_i)-2\nu(\alpha_i,m)}[M]\cdot[S_i].$$ The coefficient of $[M\oplus S_i]$ is 0 and hence we are left with $\text{ad}^t_{[S_i]}([M])=c[M_1],$ where $0 \neq c \in \mathbb C$. Now suppose that $\sum_{j=2}^k\alpha_{i_j}\in \Delta^+_{T_Q}$. Then, inductively using $\text{ad}^t_{[S_{i_{j-1}}]}([M_{j}])=c_{i_{j-1}}[M_{j-1}]$, where $M_{j}$, $j\in \{2,3,\dots, k\}$, is the unique indecomposable of degree $\sum_{\ell=j}^k\alpha_{i_\ell}$ and $c_{i_j}\in \mathbb C$, we conclude that \begin{equation*} \text{ad}^t_{[S_{i_2}]}(\text{ad}^t_{[S_{i_3}]}(\cdots \text{ad}^t_{[S_{i_{k-1}}]}([S_{i_k}])\cdots))=c[M] \end{equation*} where $M=M_2$ and $c\in \Cn$ (possibly 0). But then $\text{ad}^t_{[S_{i_1}]}([M])=0$, as $\sum_{j=1}^k\alpha_{i_j}\notin \Delta^+_{T_Q}$, which proves that $\rho$ is a homomorphism. Assume now that $q\neq 2$. Since $\mathbb F_q Q/I$ is representation directed, by Remark \ref{GabAn} we have that $\Ht{\Rep{q}}$ is generated by the roots $\alpha \in \Delta^+_{T_Q}$ and $$\dim \Ht{\Rep{q}}[\beta]=\#\Big \{ (\gamma_\alpha) \in \mathbb N^{\Delta^+_{T_Q}} \ | \ \sum_\alpha \gamma_\alpha \alpha =\beta \Big\}.$$ Hence $\rho$ is an epimorphism. Also we have that $\dim_{\mathbb C} L(T_Q) \leq |\Delta^+_{T_Q}|$ (\cite[Corollary 4.3]{kos}). Therefore, by Proposition \ref{equalDim}, $\rho$ is a monomorphism, and hence an isomorphism. \end{proof} \section{Examples and final remarks} \subsection{Some examples} \begin{ex} \normalfont Let $Q$ be the following quiver $$ \xymatrix { Q:1 \ar@{->}[r]^{a} & 2 \ar@{->}[r]^{b} & 3 } $$ bound by the ideal $I=\langle b a \rangle$ of $\mathbb F_q Q$ generated by $ba$. Obviously $\mathbb F_q Q/I$ is a representation directed algebra. The corresponding quadratic form is: \begin{equation}\label{exForm} T_Q(\beta)=\beta_1^2+\beta_2^2+\beta_3^2-\beta_1 \beta_2 - \beta_2 \beta_3 + \beta_1 \beta_3. \end{equation} It is not hard to see that the quantum Serre relations of $U_t^+(\mathfrak{sl}_4)$ are satisfied in $\Ht{\Rep{q}}$ where $t=+\sqrt{q}$. Moreover we have the following additional relations: \begin{equation*} \textrm{ad}^t_{e_1}(e_3)=0, \quad \textrm{ad}^t_{e_1}(\textrm{ad}^t_{e_2}(e_3))=0.
\end{equation*} These relations are due to the fact that $\alpha_3$, $\alpha_2+\alpha_3$ are roots but $\alpha_1+\alpha_3$ and $\alpha_1+\alpha_2+\alpha_3$ are not roots of the form \eqref{exForm}. Denoting by $[x,y]_t=txy-yx$ the twisted commutator, we have that $$ \mathbf{H}_{\Rep{q}}^{tw} \simeq \mathbb{C}\langle e_i\ |\ i=1,2,3\rangle / \mathcal R, $$ where $\mathcal R$ is the two-sided ideal generated by the usual quantum Serre relations for $U_t^+(\mathfrak{sl}_4)$ but with the relation $[e_3,e_1]=0$ replaced by $[e_3,e_1]_t=0$ and one extra relation $$[e_1,[e_2,e_3]_t]=0.$$ This computation follows from the fact that the positive root system of $U_t^+(T_Q)$ is $\Delta^+(\mathfrak{sl}_4)\backslash \{\alpha_1+\alpha_2+\alpha_3\}.$ This description of $\Ht{\Rep{q}}$ is somewhat different from the one obtained in \cite[Example 5.9]{ChDe} for the same quiver $Q$ and ideal $I$, but with a slightly different twisting. \end{ex} \begin{ex} \normalfont More generally, let $Q$ be the following quiver $$ \xymatrix { Q:1 \ar@{->}[r]^{a_1} & 2 \ar@{->}[r]^{a_2} & \dots \ar@{->}[r]^{a_n} & n } $$ bound by the ideal $I=\langle a_n\dots a_1 \rangle$ of the algebra $\mathbb F_q Q$. By similar observations we have that $$ \Ht{\Rep{q}} \simeq \mathbb{C}\langle e_i\ |\ i=1,\dots,n\rangle / \mathcal R, $$ where $\mathcal R$ is the two-sided ideal generated by the usual quantum Serre relations for $U_t^+(\mathfrak{sl}_{n+1})$ but with the relation $[e_n,e_1]=0$ replaced by $[e_n,e_1]_t=0$ and one extra relation $$[e_1,[e_2,\dots,[e_{n-1},e_n]_t \dots]_t]=0.$$ As in the previous case, this follows since the positive root system of $U_t^+(T_Q)$ is $\Delta^+(\mathfrak{sl}_{n+1})\backslash \{\alpha_1+\alpha_2+\cdots +\alpha_n\}.$ \end{ex} \begin{ex} \label{exRomb} \normalfont Let $Q$ be the following quiver $$ \xymatrix { & 4 &\\ Q:2 \ar@{->}[ru]^{a_2} & & 3 \ar@{->}[lu]_{b_2}\\ &\ar@{->}[lu]^{a_1} 1 \ar@{->}[ru]_{b_1} } $$ bound by the ideal $I=\langle a_2a_1- b_2b_1 \rangle$ in $\mathbb F_qQ$. We have $$T_Q(\beta)=\beta_1^2+\beta_2^2+\beta_3^2+\beta_4^2-\beta_1\beta_2-\beta_1\beta_3-\beta_2\beta_4-\beta_3\beta_4+\beta_1\beta_4.$$ In this case, apart from the standard relations in $\Ht{\Rep{q}}$, which come from considering $Q$ as an unbound quiver, we have the following extra relations: $$ [e_4,e_1]_t=0, \quad [e_1,[e_2,e_4]_t]=0, \quad [e_1,[e_3,e_4]_t]=0. $$ This follows from a similar computation of the positive roots of the form $T_Q$. \end{ex} \begin{ex} \normalfont Let $Q$ be the same quiver as in Example \ref{exRomb} bound by the ideal $I=\langle a_2a_1, b_2b_1 \rangle$. In this case we have $$T_Q(\beta)=\beta_1^2+\beta_2^2+\beta_3^2+\beta_4^2-\beta_1\beta_2-\beta_1\beta_3-\beta_2\beta_4-\beta_3\beta_4+2\beta_1\beta_4.$$ Hence the extra relations take the following form \begin{align*} [e_1,[e_2,e_4]_t]_{t^{-1}}&=0, \quad [e_1,[e_3,e_4]_t]_{t^{-1}}=0, \\ [e_4,e_1]_{t^2}&=0, \quad [e_1,[e_2,[e_3,e_4]_t]_t]=0. \end{align*} As before, this follows from computing the positive roots of the form $T_Q$. \end{ex} \subsection{Commutative representations of quivers over $\mathbb F_1$ and Hall algebra} As a limiting case of our construction one can also study representations of quivers with commutativity conditions over the so-called field with one element: $\mathbb F_1$.
Such a field is not defined per se, but there is agreement on what should be the definition and basic properties of the category of vector spaces over $\mathbb{F}_1$ as a limiting case of the categories of vector spaces over $\mathbb{F}_q$ (see for example \cite{matt}). Suppose that $Q$ is without oriented cycles and loops. The category $\Rep{1}$ of finite-dimensional representations of $Q$ over $\mathbb F_1$ is defined similarly to $\textrm{Rep}_Q(\mathbb F_q)$ (see \cite[Section 4]{matt}). It is straightforward to define representations of a quiver over $\mathbb F_1$ which satisfy given relations. In the case when $Q$ has only commutativity relations it is straightforward to show that any indecomposable object in $\Rep{1}$ is one-dimensional (a statement similar to \cite[Theorem 5.1]{matt}). M.~Szczesny in \cite{matt} defined the Hall algebra, $\mathbf H_{\Rep{1}}$, of $\Rep{1}$ for the case when $Q$ is unbound, following Ringel's definition. A slightly modified construction can be applied in the bound case as well, and we use the same notation as in the unbound case. Moreover, let $\mathbb K=\{0,1\}$ be the one-dimensional space over $\mathbb F_1$. Denote by $S_i$ the simple representation of $Q$ at vertex $i\in Q_0$, i.e. $(S_i)_i=\mathbb K$, $(S_i)_j={0}$ for $j\neq i$ and $(S_i)_\rho=0$ for all $\rho\in Q_1$. Then one shows the following \begin{rem} Let $Q$ be a quiver without oriented cycles, loops or multiple arrows, bound by all possible commutativity relations. The assignment $e_i \mapsto [S_i]$ gives rise to an epimorphism of associative algebras from $U^+_1(T_Q)$ onto \normalfont $\mathbf H_{\Rep{1}}$. \end{rem} \end{document}
\begin{document} \title{An Optomechanical Platform for Quantum Hypothesis Testing for Collapse Models} \author{Marta Maria Marchese} \affiliation{Centre for Theoretical Atomic, Molecular, and Optical Physics, School of Mathematics and Physics, Queen's University, Belfast BT7 1NN, United Kingdom} \author{Alessio Belenchia} \affiliation{Centre for Theoretical Atomic, Molecular, and Optical Physics, School of Mathematics and Physics, Queen's University, Belfast BT7 1NN, United Kingdom} \author{Stefano Pirandola} \affiliation{Computer Science and York Centre for Quantum Technologies, University of York, York YO10 5GH, United Kingdom} \author{Mauro Paternostro} \affiliation{Centre for Theoretical Atomic, Molecular, and Optical Physics, School of Mathematics and Physics, Queen's University, Belfast BT7 1NN, United Kingdom} \begin{abstract} Quantum hypothesis testing has shown the advantages that quantum resources can offer in the discrimination of competing hypotheses. Here, we apply this framework to optomechanical systems and fundamental physics questions. In particular, we focus on an optomechanical system composed of two cavities employed to perform quantum channel discrimination. We show that input squeezed optical noise, together with feasible measurement schemes on the output cavity modes, allows one to obtain an advantage with respect to any comparable classical scheme. We apply these results to the discrimination of models of spontaneous collapse of the wavefunction, highlighting the possibilities offered by this scheme for fundamental physics searches. \end{abstract} \maketitle \section{Introduction} Hypothesis testing (HT) is a hallmark of any statistical inference toolkit, allowing one to discern between the outcomes resulting from the occurrence (or lack thereof) of unknown stochastic processes whose events occur with a set of \textit{a priori} probabilities. Quantum hypothesis testing (QHT), initially introduced for state discrimination tasks~\cite{helstrom1976quantum,chefles2000quantum,barnett2009quantum,weedbrook2012gaussian}, has since been applied to the discrimination of channels and dynamics, and its technological potential in fields such as quantum sensing and data read-out is under active investigation~\cite{2020arXiv200410211O,2020arXiv200613250S,2020arXiv201010855H,2020arXiv201003594B,2020SciA....6B.451B}. The characteristic feature of QHT protocols is that, by exploiting quantum resources (such as entangled squeezed light), they achieve an advantage, in terms of lower error probabilities within certain parameter ranges, over \textit{any} classical HT strategy. With the advent of the \textit{second quantum revolution}, quantum technologies manipulating individual quantum systems and employing exquisitely quantum resources to perform tasks are becoming a reality. Crucially, this has also renewed the interest in fundamental investigations of some of the foundational puzzles of quantum theory. Among them, the quantum-to-classical (QtC) transition -- i.e., the process through which the classical world we experience in our daily life emerges from quantum mechanical building blocks~\cite{zurek1991quantum} -- plays a prominent role.
Indeed, getting a grasp of the mechanisms governing the QtC could potentially settle some of the interpretational hurdles of quantum mechanics and possibly determine the ultimate limits of validity (if any) of quantum theory itself. Collapse models~\cite{RevModPhys.85.471} are one of the most prominent attempts at modifying quantum theory by promoting the collapse of the wavefunction to a physical process embedded in the laws of dynamics by a stochastic modification of the Schr\"{o}dinger equation. In these models, microscopic systems evolves essentially undisturbed by the stochastic collapse, recovering all the predictions of quantum mechanics, while macroscopic objects are subject to a strong localization in position space essentially ruling out Schr\"{o}dinger's cat--like superpositions. One of the most studied collapse models is the so called Continuous Spontaneous Localization model (CSL)~\cite{BASSI2003257}, whose phenomenology has received considerable attention in the last few years~\cite{RevModPhys.85.471,bassi2014collapse,PhysRevLett.125.100404,2020arXiv200109788C,carlesso2019collapse,carlesso2019opto}. In light of this fact, and due to the simplicity of the model, in this work we will focus on CSL. It is important to note that, while CSL postulates a stochastic modification of the Schr\"{o}dinger equation, at the phenomenological level the effect of the model is captured entirely by a dissipative term appearing in the master equation describing close quantum systems dynamics. It is thus clear that, in realistic situations the omnipresence of the environment, and thus the open character of the dynamics, requires sophisticated estimation and inference techniques to discriminate the presence or lack thereof of the CSL mechanism. Since collapse models recover all the predictions of quantum mechanics for miscroscopic systems, it is clear that tests able to constrain the parameter space of such models should employ mesoscopic quantum systems. However, creating large spatial superposition of mesoscopic objects is inherently challenging and the subject of intense investigation~\cite{hornberger2012colloquium,arndt2014testing,kaltenbaek2016macroscopic,romero2011large}. Fortunately, collapse models can also be probed via non-interferometric techniques~\cite{Carlesso2018, PhysRevResearch.2.013057,Piscicchia_2017, PhysRevLett.116.090402,vinante2017improved} which do not require the creation and verification of large superpositions. This is exactly the case we explore in this work where the CSL affect the mesoscopic mechanical oscillator in an optomechanical set-up. It is well known that in this situation, the effect of the CSL can be interpreted as an extra mechanical dumping source or, alternatively, as an increased equilibrium temperature of the oscillator. {Owing to this, several non-interferometric tests of collapse models have been proposed, and experiments have been carried out constraining the parameter space of the CSL model [cf. Ref.~\cite{RevModPhys.85.471} and references therein]. Inference techniques translated from quantum metrology and estimation theory have been widely employed in such endeavours. Recently, hypothesis testing has been embedded in theoretical schemes for the assessment of macrorealism and collapse mechanisms~\cite{PhysRevA.100.032111,PhysRevResearch.2.033034}. 
In this work, we consider hypothesis testing for channel discrimination to probe the dynamical effects of collapse models on macroscopic mechanical oscillators.} We thus face the challenge of discriminating between two quantum channels, encoding the presence or lack of CSL, characterizing the dynamics of the mechanical mode. QHT is particularly apt to this task and we set to show that quantum resources can be used to overcome any comparable classical strategy. Furthermore, we propose in the following a specific measurement strategy to preform the hypothesis testing based on realistic parameters for the optomechanical systems. In this way, we do not aim to establish the ultimate advantages that a QHT strategy can allow -- which could result in a hardly feasible measurement scheme -- but we explicitly spell out one strategy that is both feasible within current technology and that presents the aforementioned quantum advantage. The remainder of this work is organized as follow. In Sec.~\ref{system} we introduce the optomechanical set-up of interest and we spell out the effect of the CSL on the dynamics of the mechanical mode. In Sec.~\ref{measur} we lay down the measurement schemes that we are going to consider in our analysis. Sec.~\ref{analysis} summarises the main concepts of hypothesis testing and introduces the classical bound we are going to compare the quantum case with. Sec.~\ref{results} presents the main results of our work. Finally, we concluded in Sec.~\ref{conclusions} with a discussion of our findings. \text{\tiny{$\mathcal{SE}$}}ction{The System}\label{system} Let us consider a system composed by two optical cavities of lenght $L$, as shown in Figure~\ref{Fig:system}. We follow here the discussion in Ref.~\cite{Mazzola2011} where a similar optomechanical system was investigated for entanglement distribution. {Note however that here we make use of a single mechanical oscillator, as a second one would not result in a better performance of the scheme proposed in this work.} . The cavity modes with frequency $\omega_C$ are described by creation and annihilation operators $\{\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{a}^{\text{\tiny{$\mathcal{E}$}}nsuremath{\delta a}gger}_i,\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{a}_i\}$, with $i=1,2$. The first cavity is equipped with a movable mirror characterised by position and momentum operators $\{\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{q},\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{p}\}$ and damping rate $\gamma_m$. The second cavity is a simple Fabry-Perot cavity with energy decay rate $\kappa$, identical to the first one. They are initially pumped with coherent light with frequency $\omega_L$ and power $P$. The Hamiltonian describing the system reads as \begin{figure} \centering \includegraphics[width=0.5\text{\tiny{$\mathcal{E}$}}nsuremath{\tau_\mathrm{ex}}twidth]{immagini/Sistema.pdf} \caption{Two cavities, one embedded with a movable mirror, are pumped with a classical laser field (red-shaded region) and with an extra source made of two modes of light (green line). Input modes go through polarizing beam splitters (PBS), enter the cavities, interacting with them and, when they come out, they pass through a quarter-wave plates (QWP), which will change the polarization and will allow us to collect the output modes. Before performing the measurement, we might recombine the outputs using a beam splitter. 
{The two-cavity set-up is crucial for harnessing the quantum advantage entailed by the use of two-mode squeezed light in a ``quantum reading'' like scheme.}} \label{Fig:system} \text{\tiny{$\mathcal{E}$}}nd{figure} \begin{equation} \begin{split} \text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{H}=&\text{\tiny{$\mathcal{S}$}}um_{i=1,2}\left [\delta \hslash \text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{a}_i^{\text{\tiny{$\mathcal{E}$}}nsuremath{\delta a}gger}\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{a}_i +i \text{\tiny{$\mathcal{E}$}}psilon \hslash (\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{a}_i^{\text{\tiny{$\mathcal{E}$}}nsuremath{\delta a}gger}-\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{a}_i)\right]\\ &+\left(\dfrac{\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{p}^2}{2m}+\dfrac{m \omega_m^2}{2}\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{q}^2\right)-\hslash \chi \text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{a}^{\text{\tiny{$\mathcal{E}$}}nsuremath{\delta a}gger}_1\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{a}_1\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{q} \text{\tiny{$\mathcal{E}$}}nd{split} \text{\tiny{$\mathcal{E}$}}nd{equation} \noindent where $\delta=\omega_C-\omega_L$ is the cavity-pump detuning, $\omega_m$ is the frequency of the mechanical oscillator, $\chi={\omega_C}/{L}$ is the radiation-pressure coupling constant and $\text{\tiny{$\mathcal{E}$}}psilon=\text{\tiny{$\mathcal{S}$}}qrt{{2kP}/{\hslash\omega_C}}$ is the amplitude of the laser field which we treat as classical from now on. When the pumping field is intense enough, as we assume in the following, the description of the dynamics simplifies enormously since we can linearise both the cavity and mechanical modes around their respective steady-state. We thus consider the dynamics of the sole zero-mean quadratures fluctuations that we order in the vector \begin{equation}\label{ordering} \text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{\mathbf{r}}=\left(\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{Q},\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{P},\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{X}_1,\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{Y}_1,\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{X}_2,\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{Y}_2\right)^{\intercal}. \text{\tiny{$\mathcal{E}$}}nd{equation} Here, the first two elements are the dimensionless quadratures for the mechanical mode \begin{align} \text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{Q}=\text{\tiny{$\mathcal{S}$}}qrt{\dfrac{m\omega_m}{\hslash}} \text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{q} \qquad \text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{P}=\dfrac{1}{\text{\tiny{$\mathcal{S}$}}qrt{\hslash m \omega_m}}\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{p} \text{\tiny{$\mathcal{E}$}}nd{align} with $\big[\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{Q},\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{P}\big]=i$. 
The remaining components of the vector represent the optical quadratures \begin{align} \text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{X}_j=\dfrac{\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{a}_j^{\text{\tiny{$\mathcal{E}$}}nsuremath{\delta a}gger}+\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{a}_j}{\text{\tiny{$\mathcal{S}$}}qrt{2}}, \qquad \text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{Y}_j=i\dfrac{\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{a}_j^{\text{\tiny{$\mathcal{E}$}}nsuremath{\delta a}gger}-\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{a}_j}{\text{\tiny{$\mathcal{S}$}}qrt{2}} \qquad (\text{\tiny{$\mathcal{E}$}}nsuremath{\tau_\mathrm{ex}}t{with}\, i=1,2), \text{\tiny{$\mathcal{E}$}}nd{align} for the two intra-cavity field modes. The quadrature vector evolves in time according to the Langevin equations in the input-output formalism \begin{equation} \dot{\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{\mathbf{r}}}=A\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{\mathbf{r}}+\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{\mathbf{n}}. \text{\tiny{$\mathcal{E}$}}nd{equation} where the $6\times 6$ drift matrix $A$ is given by \begin{equation} A=\begin{pmatrix} 0 & \omega_m & 0 & 0 & 0 & 0\\ -\omega_m & -\gamma_m & \text{\tiny{$\mathcal{S}$}}qrt{2}\alpha g & 0 & 0 & 0\\ 0 & 0 & -\kappa & \delta & 0 & 0\\ \text{\tiny{$\mathcal{S}$}}qrt{2}\alpha g & 0 & -\delta & -\kappa & 0 & 0\\ 0 & 0 & 0 & 0 & -\kappa & \delta\\ 0 & 0 & 0 & 0 & -\delta & -\kappa\\ \text{\tiny{$\mathcal{E}$}}nd{pmatrix}\label{driftmatrix} \text{\tiny{$\mathcal{E}$}}nd{equation} \noindent with $\alpha= \text{\tiny{R}}e[\langle a\rangle]$ the square root of the number of photons in the cavity and $g=\chi\text{\tiny{$\mathcal{S}$}}qrt{{\hslash}/{m\omega_m}}$ the effective coupling rate. The vector $\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{\mathbf{n}}$ collects the zero-mean quantum noise operators and it is given by \begin{equation} \text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{\mathbf{n}}=\left(0, \text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{\xi}, \text{\tiny{$\mathcal{S}$}}qrt{2\kappa}\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{X}_{in_1},\text{\tiny{$\mathcal{S}$}}qrt{2\kappa}\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{Y}_{in_1}, \text{\tiny{$\mathcal{S}$}}qrt{2\kappa}\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{X}_{in_2},\text{\tiny{$\mathcal{S}$}}qrt{2\kappa}\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{Y}_{in_2}\right)^{\intercal}.\label{Eq:noisevector} \text{\tiny{$\mathcal{E}$}}nd{equation} Here, $\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{\xi}$ is a Langevin force operator encoding the interaction of the mechanical mode with a phononic thermal bath at temperature $T$ and producing the Brownian motion of the mechanical oscillator. This noise is characterised by its two-point correlator which can be written as \begin{equation} \langle \xi(t)\xi(t')\rangle=\dfrac{2\gamma_m k_B T}{\omega_m\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat b}ar}\delta(t-t'), \text{\tiny{$\mathcal{E}$}}nd{equation} in the high temperature limit $k_B T\gg\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat b}ar\omega$~\cite{PhysRevA.63.023812}. \noindent Then $\left\{\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{X}_{in_k},\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{Y}_{in_k}\right\}$, with $k=\{1,2\}$, are the quadratures of the input noises impinging on the two cavities. The covariance matrix of these two input modes encodes the information on the light state we feed to the cavities on top of the coherent pumping. 
In the linearized picture that we are considering, the total Hamiltonian of the system is at most quadratic in the quadratures $\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{\mathbf{r}}$ {while the Lindblad operators, describing the interaction with the phononic thermal bath, are at most linear in them.} Thus, if both the initial state of the modes $\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{\mathbf{r}}$ and the state of the input noises are Gaussian, the dynamics will preserve this Gaussianity~\cite{Ferraro2005a,genoni2016conditional}. This observation enormously simplify the dynamics of the system since it is enough to consider the evolution of the first and second statistical moments of the quadratures of the system. Furthermore, as we consider zero-mean quantum fluctuations {and the dynamics of the mean values is decoupled from the evolution of the variances}, it is sufficient to work with the time evolution of the covariance matrix {$\bm{\text{\tiny{$\mathcal{S}$}}igma}$} , which is ruled by the Lyapunov-like equation \begin{equation} \dot{\bm{\text{\tiny{$\mathcal{S}$}}igma}}= A \bm{\text{\tiny{$\mathcal{S}$}}igma} +\bm{\text{\tiny{$\mathcal{S}$}}igma} A^{T} +D,\label{Eq:equazionemoto} \text{\tiny{$\mathcal{E}$}}nd{equation} where $D$ is the so-called diffusion matrix. The elements of $D$ depend on the two-point correlations of the noise vector as~\cite{Mari2009} \begin{equation} D_{ij}=\dfrac{1}{2}\left[ \langle n'_i(t)n'_j(t)\rangle+\langle n'_j(t)n'_i(t)\rangle\right]. \text{\tiny{$\mathcal{E}$}}nd{equation} Considering the aforementioned sources of noise, we can express the $6\times 6$ diffusion matrix $D$ in block-diagonal form as \begin{equation}\label{Dmatrix} D=\begin{pmatrix} \bm{\text{\tiny{$\mathcal{S}$}}igma}_m & 0\\ 0 & \bm{\text{\tiny{$\mathcal{S}$}}igma}_{IN} \text{\tiny{$\mathcal{E}$}}nd{pmatrix} \text{\tiny{$\mathcal{E}$}}nd{equation} where $\bm{\text{\tiny{$\mathcal{S}$}}igma}_{IN}$ is the 4$\times$4 dimensionless covariance matrix associated to the input modes times $2\kappa$, while \begin{equation} \bm{\text{\tiny{$\mathcal{S}$}}igma}_m= \begin{pmatrix} 0 & 0\\ 0 & 2 \dfrac{\gamma_m k_B T}{\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat b}ar \omega_m} + \Delta \text{\tiny{$\mathcal{E}$}}nd{pmatrix}\label{Eq:sigmam} \text{\tiny{$\mathcal{E}$}}nd{equation} is the 2$\times$2 matrix describing the thermal dissipation. Note that, in the last term entering the diffusion matrix we have introduced an extra {heating rate} parameter $\Delta$. This parameter \text{\tiny{$\mathcal{E}$}}nsuremath{\tau_\mathrm{ex}}tit{de facto} modifies the equilibrium temperature of the mechanical oscillator and correspond to an extra dissipation channel for the open quantum system composed by the two cavity modes and the mechanical one. Before moving on, let us briefly remark in the following that the stochastic effect of the CSL model on the mechanical mode in cavity one is encoded exactly in the extra parameter $\Delta$ appearing in the diffusion matrix. \text{\tiny{$\mathcal{S}$}}ubsection{Continuous Spontaneous Localisation model} Collapse models (CMs)~\cite{RevModPhys.85.471} introduce stochastic modifications to the Schr\"odinger equation of quantum mechanics in the attempt to promote the collapse of the wavefunction to a dynamical process providing a dynamical picture of how the classical world emerges from the quantum microscopic one. For our purposes here we do not need to go into the details of collapse models. 
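Before specifying the collapse mechanism, we note that the covariance dynamics of Eq.~\eqref{Eq:equazionemoto} is straightforward to explore numerically. The following Python sketch is purely illustrative and not the code used for the results of this work: the values of the effective coupling \texttt{alpha\_g} ($\sqrt{2}\alpha g$) and of the occupation \texttt{n\_th} are placeholders, and vacuum input noise is assumed. It assembles the drift and diffusion matrices and obtains the steady-state covariance matrix by solving $A\bm{\sigma}+\bm{\sigma}A^{T}+D=0$ with SciPy.

\begin{verbatim}
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Placeholder parameters (illustrative only).
omega_m = 2*np.pi*2.75e5        # mechanical frequency [rad/s]
gamma_m = 2*np.pi*omega_m/1e5   # mechanical damping [rad/s]
kappa   = 5e7                   # cavity linewidth [rad/s]
delta   = 4*kappa               # cavity-pump detuning [rad/s]
alpha_g = 1e4                   # sqrt(2)*alpha*g, effective coupling [rad/s] (assumed)
n_th    = 75.0                  # thermal phonon occupation (assumed)
Delta   = 1e6                   # extra diffusion rate [Hz]; set to 0 for hypothesis H0

# Drift matrix, Eq. (driftmatrix), ordering (Q, P, X1, Y1, X2, Y2).
A = np.array([
    [0.0,      omega_m,  0.0,     0.0,    0.0,    0.0],
    [-omega_m, -gamma_m, alpha_g, 0.0,    0.0,    0.0],
    [0.0,      0.0,     -kappa,   delta,  0.0,    0.0],
    [alpha_g,  0.0,     -delta,  -kappa,  0.0,    0.0],
    [0.0,      0.0,      0.0,     0.0,   -kappa,  delta],
    [0.0,      0.0,      0.0,     0.0,   -delta, -kappa],
])

# Diffusion matrix, Eq. (Dmatrix): Brownian mechanical noise (+ Delta) and vacuum inputs.
D = np.zeros((6, 6))
D[1, 1] = 2*gamma_m*n_th + Delta       # ~ 2 gamma_m k_B T/(hbar omega_m) + Delta
D[2:, 2:] = 2*kappa*0.5*np.eye(4)      # Sigma_IN = 2*kappa*(1/2)*Identity for vacuum noise

# Steady state of A S + S A^T + D = 0.
S_ss = solve_continuous_lyapunov(A, -D)
print("steady-state Var(X_1) =", S_ss[2, 2])
\end{verbatim}

Setting \texttt{Delta = 0} in the sketch gives the reference dynamics of the null hypothesis discussed below.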
It is enough to say that we will make use of the arguably better studied among CMs, the so called Continuous Spontaneous Localization model (CSL). The CSL, with white noise, {describes the collapse as a continuous process in time. This introduces in the master equation of the system an extra spatial decoherence term whose phenomenology} is completely characterized by two parameters $\{r_C,\gamma\}$. The parameter $r_C$, is {the localization length of the model, i.e., }the characteristic length-scale above which the collapse mechanism is relevant. {The collapse rate $\gamma$ sets the strength of the CSL mechanism~\cite{RevModPhys.85.471}.} In our setting, the CSL mechanism affects in a significant way only the mechanical mode due to its ``mesoscopic nature'' in view of the fact that CMs are formulated in such a way that their predictions deviate from standard quantum mechanics only for meso-/macroscopic systems. Our mechanical mode is in contact with a thermal phonon bath at temperature $T$, whose effect is described by the operator $\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{\xi}$ in equation \text{\tiny{$\mathcal{E}$}}qref{Eq:noisevector}. On top of that, we consider the decoherence induced by the CSL. Formally, we can treat the effect of the CSL by defining a modified equilibrium temperature of the oscillator via \begin{equation} \gamma_m (2\text{\tiny{$\mathcal{E}$}}nsuremath{\bar a}r{n}_{th}+1)+\Delta=\Delta(2n_{CSL}+1)\rightarrow n_{CSL}=\text{\tiny{$\mathcal{E}$}}nsuremath{\bar a}r{n}_{th}+\frac{\Delta}{2\gamma_m}.\label{CLStemperature} \text{\tiny{$\mathcal{E}$}}nd{equation} where $\text{\tiny{$\mathcal{E}$}}nsuremath{\bar a}r{n}_{th}$ is the thermal number of phonons at temperature $T$. The parameter $\Delta$ entering this expression and the diffusion matrix in~\text{\tiny{$\mathcal{E}$}}qref{Eq:sigmam} is a function of both $r_c$ and $\gamma$ as well as the mass distribution of the system of interest. As reported in~\cite{PhysRevLett.112.210404} it can be written explicitly as \begin{equation}\label{csl_rate} \Delta=\frac{\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat b}ar\gamma}{3m\omega_m m_0^2}\text{\tiny{$\mathcal{S}$}}um_{k=1}^3\int \frac{e^{-\frac{|\mathbf{r}-\mathbf{r}'|^2}{4r_C^2}}}{(2\text{\tiny{$\mathcal{S}$}}qrt{\pi}r_C)^3}\partial_{r_k}\rho(\mathbf{r})\partial_{r'_k}\rho(\mathbf{r}'){\rm d}\mathbf{r}{\rm d}\mathbf{r}', \text{\tiny{$\mathcal{E}$}}nd{equation} where $m_0=1$~amu (atomic mass unit) and $\rho(\mathbf{r})$ is the mass density of the system subject to the CSL. \text{\tiny{$\mathcal{SE}$}}ction{Measurement schemes}\label{measur} The main objective of this work is to investigate the potential of optomechanical systems and quantum reading protocols~\cite{Pirandola2011,PhysRevLett.106.090504} for the discrimination of the additional dissipative channel described by the parameter $\Delta$ via the methods of quantum hypothesis testing. In particular, we want to determine whether quantum resources, in the form of non-classical input noise states, can lead to a quantum advantage with respect to classical resources for hypothesis testing. In order to accomplish this, we will examine two cases for the state of the input noise modes: (i) a two-mode squeezed state as an entangled, and thus quantum, resource and (ii) two independent thermal states as classical ones. Furthermore, in order to probe the system the output modes emerging from the two cavities needs to be measured. 
Also at this stage we can consider different measurement strategies with different ``degrees of quantumness'' at play, see Fig.~\ref{Fig:measurements}. The output modes can be directed towards photodetectors by using a combination of quarter waveplates and polarised beamsplitters (see Fig.~\ref{Fig:system}). We then consider two different measurement schemes: (i) a local measurement, consisting in measuring directly the quadratures of the output modes $\{\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{x}_{out_i},\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{y}_{out_i}\}$ with $i=1,2$; (ii) in the spirit of the original quantum reading protocol~\cite{Pirandola2011}, the output modes can be further recombined through another beamsplitter to perform a measurements of EPR-like quadratures $\{\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{q}_{\mp},\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{p}_{\pm}\}$ of the emerging modes $\{+,-\}$. These are defined as \begin{equation} \begin{cases} \text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{q}_{\mp}=\dfrac{\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{x}_{out_1}\mp\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{x}_{out_2}}{\text{\tiny{$\mathcal{S}$}}qrt{2}}\\ \text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{p}_{\pm}=\dfrac{\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{y}_{out_1}\pm\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{y}_{out_2}}{\text{\tiny{$\mathcal{S}$}}qrt{2}}, \text{\tiny{$\mathcal{E}$}}nd{cases} \text{\tiny{$\mathcal{E}$}}nd{equation} in term of the output modes. Finally note that, using the input-output formalism~\cite{WallsMilburn}, the output modes can be easily expressed in terms of the input ones via \begin{equation} \begin{cases} \text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{x}_{out_{1,2}}=&\text{\tiny{$\mathcal{S}$}}qrt{\kappa}\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{X}_{1,2}-\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{X}_{in_{1,2}} \\ \text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{y}_{out_{1,2}}=&\text{\tiny{$\mathcal{S}$}}qrt{\kappa}\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{Y}_{1,2}-\text{\tiny{$\mathcal{E}$}}nsuremath{\mhat a}t{Y}_{in_{1,2}}. \text{\tiny{$\mathcal{E}$}}nd{cases} \text{\tiny{$\mathcal{E}$}}nd{equation} All the output modes will depend on the parameter $\Delta$, which vanishes in the case where no CSL is present. For simplicity of notation we omit the dependence on this parameter. {It should also be noticed that, such dependence arises owing to the dynamics of the mechanical system. By itself, the CSL mechanism does not influence light and this guarantee the read-out of the CSL effect on the mechanical oscillator in our set-up}. \begin{figure} \centering \begin{minipage}[t]{0.45\linewidth} \includegraphics[width=1\text{\tiny{$\mathcal{E}$}}nsuremath{\tau_\mathrm{ex}}twidth]{immagini/EPR_Measurement.pdf} \text{\tiny{$\mathcal{E}$}}nd{minipage} \vline \begin{minipage}[t]{0.45\linewidth} \includegraphics[width=1\text{\tiny{$\mathcal{E}$}}nsuremath{\tau_\mathrm{ex}}twidth]{immagini/Local_Measurement.pdf} \text{\tiny{$\mathcal{E}$}}nd{minipage} \caption{On the left a scheme for the measurement of EPR quadratures, while on the right for a local measurement. The scheme in Fig.~\ref{Fig:system}, captures the case on the left here, i.e., EPR quadratures measurement. 
In order to implement the classical scheme on the right, it would be sufficient to remove the beam-splitter in Fig.~\ref{Fig:system} combining the output cavity modes.} \label{Fig:measurements} \text{\tiny{$\mathcal{E}$}}nd{figure} \text{\tiny{$\mathcal{S}$}}ubsection{Initial state}\label{SubSec:initialstate} While not strictly part of the measurement strategy, we comment here on the initial state of the system (cavities+mechanical mode) that we will assume in the rest of the work unless otherwise stated. Indeed, the choice of a particular initial state is undoubtedly part of any protocol and one on which the feasibility of the protocol hinge. In view of these considerations, we consider as initial state the product state of the cavities' steady-states when subject to vacuum input noises (i.e., when only the coherent pumping is present). The result is a state in which the mechanical mode and the first cavity reach their joint steady-state {, fully characterized by the covariance matrix} $\bm{\text{\tiny{$\mathcal{S}$}}igma}_{ss}^{\rm{mc}}$, while the optical mode of the second cavity remains in its ground state \begin{equation} \bm{\text{\tiny{$\mathcal{S}$}}igma}_{ss}=\left(\begin{array}{c|c} \bm{\text{\tiny{$\mathcal{S}$}}igma}_{ss}^{\rm{mc}} & \mathbb{O}_{4\times 2} \\ \hline \mathbb{O}_{2\times 4} & \mathbb{I}_{2\times 2}/2 \text{\tiny{$\mathcal{E}$}}nd{array}\right), \text{\tiny{$\mathcal{E}$}}nd{equation} where $\mathbb{I}_{n\times m}$ and $\mathbb{O}_{n\times m}$ are $n\times m$ identity and zeros matrices, respectively. This choice of initial state corresponds in practice to commencing the experiment without additional light sources and wait long enough for the system to reach its steady-state. After this, we can start to probe the cavities with extra modes of light (the input noises) and measure the output cavity fields as described above. \text{\tiny{$\mathcal{S}$}}ubsection{Protocol description} Before proceeding further and introducing the hypothesis testing, let us summarize the steps of our channel discrimination protocol: \begin{enumerate} \item \text{\tiny{$\mathcal{E}$}}nsuremath{\tau_\mathrm{ex}}tbf{Preparation:} the optomechanical system, subject to only the coherent pumping of the cavities (i.e., vacuum input noise) reaches its steady-state; \item \text{\tiny{$\mathcal{E}$}}nsuremath{\tau_\mathrm{ex}}tbf{Classical/quantum resources:} two additional light mode impinge on the cavity and are prepared in either a two-mode squeezed state (TMS) or in the tensor product of two local thermal states. The system starts to evolve away from the initial state; \item \text{\tiny{$\mathcal{E}$}}nsuremath{\tau_\mathrm{ex}}tbf{Measurements:} at the same time as the input noises are fed into the cavity, we can monitor the output modes via photodetectors. This can be perform either via local measurement or measuring EPR-like quadratures after recombining the output modes via a beamsplitter; \item \text{\tiny{$\mathcal{E}$}}nsuremath{\tau_\mathrm{ex}}tbf{Post-processing:} the obtained measurement outputs are post-processed via a $\chi^2$-test to discriminate the possible channels, i.e. if the parameter $\Delta$ vanishes or not. \text{\tiny{$\mathcal{E}$}}nd{enumerate} While we wil discuss in detail the hypothesis testing post-processing in the next section, it should be noted that the $\chi^2$-test is suitable since the output of the quadratures measurements follow a Gaussian distribution with zero-mean and variance depending on $\Delta$. 
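To make step 4 of the protocol concrete, the following sketch (illustrative only; the variances are placeholder numbers standing in for the $\Delta$-dependent output variances obtained from the dynamics) simulates a standard one-sided $\chi^2$ variance test, of the kind spelled out in the next section, on synthetic Gaussian measurement records.

\begin{verbatim}
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(seed=1)

N      = 100      # sample size (repetitions of the experiment)
alpha  = 0.05     # significance level
V0     = 1.00     # output variance under H0 (placeholder)
V1     = 1.25     # output variance under H1 (placeholder, V1 > V0)
Q_crit = chi2.ppf(1 - alpha, df=N - 1)

def reject_H0(true_variance):
    """Draw N zero-mean Gaussian outcomes and apply the one-tail variance test."""
    sample = rng.normal(0.0, np.sqrt(true_variance), size=N)
    s2 = np.var(sample, ddof=1)            # unbiased sample variance
    return (N - 1)*s2/V0 >= Q_crit         # True -> reject H0

# Estimate Type I and Type II error rates over many synthetic experiments.
trials  = 5000
type_I  = np.mean([reject_H0(V0) for _ in range(trials)])       # reject H0 although true
type_II = np.mean([not reject_H0(V1) for _ in range(trials)])   # accept H0 although false
print("Type I  error ~", type_I, "(close to alpha)")
print("Type II error ~", type_II)
print("mean error probability ~", 0.5*(type_I + type_II))
\end{verbatim}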
\text{\tiny{$\mathcal{SE}$}}ction{Quantum Hypothesis Testing}\label{analysis} In this section, we summarize the main elements of hypothesis testing and specify the classical bound to the error probability in our specific set-up. This lays the basis for comparison between classical and quantum protocols and to show an advantage in using quantum resources. In a typical binary hypothesis testing, two exclusive hypotheses are formulated. Hypothesis $H_0$ is called null hypothesis and it is the starting point: we assume this to be true and we will conduct a test to determine whether this is likely to hold or not. The alternative hypothesis $H_1$ contradicts the previous one and expresses what we think is wrong in the null hypothesis. In our set-up, we aim at testing whether the dissipative channel associated to $\Delta$ is present. This also means testing if the effect coming from CSL model is present or not. In this context, $H_0$ corresponds to no new physics, i.e. an open dynamics with no CSL, while $H_1$ to the presence of the extra dissipative mechanism. The hypothesis testing is performed by post-processing the measurement outcomes. As highlighted in the previous section, these outcomes follow zero-mean Gaussian distributions. This implies that the hypothesis testing, and so the channel discrimination, corresponding to discriminating two Gaussians with different variances {($V_0,\, V_1$)} depending on $\Delta$. Thus we can formulate the two hypotheses as follow \begin{equation} \begin{cases} H_0:\Delta=0 \iff V=V_0\\ H_1:\Delta>0 \iff V=V_1 \neq V_0. \text{\tiny{$\mathcal{E}$}}nd{cases} \text{\tiny{$\mathcal{E}$}}nd{equation} Moreover, it is easy to verify that, for $\Delta>0$ the condition $V_1 > V_0$ holds for both the outcomes of local measurement of the output variables and EPR-like variables. This implies that we can conduct a one-tail test. {It is important to note that, in general, $\Delta\geq 0$, with $\Delta=0$ corresponding to the absence of the CSL. Therefore, statistical inference methods can only rule out, with a certain likelihood, some parts of the CSL parameter space casting upper bounds on $r_C$ and $\gamma$.} Given the nature of the problem, we use a $\chi^2$-test in the following by defining the test statistic $T=(N-1)s^2/{V_0}$, where $s^2=\text{\tiny{$\mathcal{S}$}}um_{i=1}^{N}{(r_i-\text{\tiny{$\mathcal{E}$}}nsuremath{\bar a}r{r}_i)^2}/{N-1}$ is the sample variance for a sample-size $N$, and $r_i$ is the variable we decide to use for the test among $\{q_\pm ,p_\mp\}$, for EPR-like measurements, or $\{x_{out_{1,2}}, y_{out_{1,2}}\}$ for the classical ones. The test statistic follows a $\chi^2$-distribution with $N-1$ degrees of freedom. Note that, contrary to the quantum reading protocol in~\cite{Pirandola2011}, in our case for each measurement schemes different quadratures have different variances. This means that they should be subjected to separate tests not allowing to double the number of outcomes as in~\cite{Pirandola2011}. In an experiment, the hypothesis testing proceed by comparing the likelihood for the particular realization of the test statistic $T=t^*$ with the so called \text{\tiny{$\mathcal{E}$}}nsuremath{\tau_\mathrm{ex}}tit{significance level} $\alpha$ of the test, i.e., the maximum error that we allow ourselves to commit by rejecting $H_0$ when true. 
In particular, \begin{equation} \begin{cases} \text{If }\; t^{*}\geq Q_{1-\alpha}^{N-1} \Leftrightarrow P|_{H_0}(T\geq t^{*})\leq \alpha \; \Rightarrow \;\text{Reject $H_0$}\\\\ \text{If }\; t^{*}<Q_{1-\alpha}^{N-1} \Leftrightarrow P|_{H_0}(T\geq t^{*})> \alpha \; \Rightarrow \;\text{Accept $H_0$}. \end{cases} \end{equation} Here we indicate with $P|_{H_0}( T\geq t^{*})$ the probability of obtaining a value of the random variable $T$ larger than $t^*$, conditioned on $H_0$ being true. Note that, as usual, the conditions on the probability are mirrored by conditions on the realization of the statistic in terms of the quantiles $Q_{1-\alpha}^{N-1}$ of the $\chi^2$-distribution. Crucial quantities in hypothesis testing are the error probabilities, i.e. the probability of rejecting $H_0$ when true and the probability of accepting $H_0$ when false. The former is known as a Type I error and is quantified by $P(H_1|H_0)$; the latter is a Type II error, quantified by $P(H_0|H_1)$. Assuming equal priors for the two hypotheses, the mean error probability is $P_{err}=\left(P(H_1|H_0)+P(H_0|H_1)\right)/{2}$. It is a simple exercise to find the expression for the total error probability, given by \begin{equation} P_{err}= \frac{1}{2} \left[1-\frac{\Gamma \left(\frac{N-1}{2},\frac{Q_{1-\alpha}^{N-1} V_0}{2 V_1}\right)}{\Gamma \left(\frac{N-1}{2}\right)}+\frac{\Gamma \left(\frac{N-1}{2},\frac{Q_{1-\alpha}^{N-1}}{2}\right)}{\Gamma \left(\frac{N-1}{2}\right)}\right], \end{equation} where $\Gamma(z,x)$ and $\Gamma(z)$ are the upper incomplete and complete Gamma functions, respectively. Finally, while the values of $V_{0,1}$ entering the error probability expression depend on both the initial noise state and the measurement scheme, their functional form depends only on the latter. In particular, using the input-output relations and the ordering of the system degrees of freedom for the elements of the covariance matrix of the system $\bm{\sigma}(t)$ as in Eq.~\eqref{ordering}, it is easy to obtain the expressions for the variances of the output results. For example we have \begin{align} & {\rm Var}(x_{out,1})=2\kappa\sigma_{33},\\ & {\rm Var}(q_\pm)=\kappa\left(\sigma_{33}+\sigma_{55}\pm 2\sigma_{35}\right), \end{align} {where $\sigma_{ij}$ are the elements of the covariance matrix solving Eq.~\eqref{Eq:equazionemoto},} and analogously for the rest of the measured quadratures. The dependence of these expressions on the initial noises, as well as on the unknown parameter $\Delta$, is hidden in the elements of the covariance matrix $\bm{\sigma}(t)$ coming from solving the dynamics. We identify $V_{0,1}$ in the hypothesis testing with the values of the relevant variances -- depending on the measurement scheme and initial noise chosen -- for $\Delta=0$ or $\Delta>0$, respectively.
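The error probability just derived is easy to evaluate numerically. A minimal sketch follows; the values of $V_{0,1}$ and $N$ are placeholders, whereas in the actual protocol they come from the solved covariance dynamics and the chosen measurement scheme.

\begin{verbatim}
import numpy as np
from scipy.stats import chi2
from scipy.special import gammaincc   # regularized upper incomplete gamma Gamma(a,x)/Gamma(a)

def p_err(V0, V1, N, alpha=0.05):
    """Mean error probability of the one-tail chi^2 test (expression for P_err above)."""
    Q = chi2.ppf(1 - alpha, df=N - 1)            # quantile Q^{N-1}_{1-alpha}
    a = (N - 1)/2.0
    type_I  = gammaincc(a, Q/2.0)                # P(T >= Q | H0), equal to alpha by construction
    type_II = 1.0 - gammaincc(a, Q*V0/(2.0*V1))  # P(T <  Q | H1)
    return 0.5*(type_I + type_II)

# Placeholder variances: V1 > V0 encodes the extra diffusion Delta > 0.
for N in (10, 100, 1000):
    print(N, p_err(V0=1.0, V1=1.25, N=N))
\end{verbatim}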
\subsection{Classical bound}\label{SubSec:classicalbound} We now show that, at intermediate times, quantum resources allow one to attain a total error probability lower than the one achievable by any comparable classical strategy. In order to claim this, we need a measure of the minimum error probability attainable. Following the results in \textit{Theorem 4} of~\cite{Pirandola2011} for the discrimination of two Gaussian channels via a classical protocol, the error probability is lower-bounded by \begin{equation} {\cal C}(n_1,n_2,t):=\dfrac{1- \sqrt{1- (F(n_1,n_2,t))^N}}{2} \label{C_bound} \end{equation} where $F(n_1,n_2,t)$ is the fidelity between the two-mode output Gaussian states corresponding to the evolution of the system up to time $t$ when $\Delta=0$ or $\Delta>0$, with classical input-noise thermal states characterized by mean photon numbers $n_1$ and $n_2$~\cite{Banchi2015}\footnote{It should be noted that what we call fidelity here ($F$) corresponds to the fidelity squared ($\mathcal{F}^2$) in~\cite{Banchi2015}.}. $N$ is again the number of measurement outcomes collected at time $t$. {Eq.~\eqref{C_bound} is the most stringent lower bound on the error probability for discriminating the two Gaussian states $\rho_{\Delta=0}$ and $\rho_{\Delta\neq 0}$~\cite{Banchi2015}. It is obtained from the expression for the minimum error probability derived by Helstrom~\cite{helstrom1976quantum}, $P_{err}(\rho_0,\rho_1)=\left[1-D(\rho_0,\rho_1)\right]/2$, where $D(\rho_0,\rho_1)$ is the trace distance, by using the inequality $D(\rho_0,\rho_1)\leq\sqrt{1-F(\rho_0,\rho_1)}$ and the factorization properties of the quantum fidelity for product states, $\rho(t)=\bigotimes_{k=1}^N \rho_k(t)$, with $\rho_k$ the two-mode Gaussian state in each run of the experiment at time $t$.} \section{Results}\label{results} We aim to show that the total error probability for the channel discrimination in our set-up, when the input noises are quantum correlated, can be lower than the one that can be achieved by \textit{any} comparable classical strategy. It should be stressed that we do not aim to find the ultimate quantum bound to the error probability. Indeed, our scope is more practical: we want to show that such an advantage exists for the specific protocol we consider.
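Relatedly, once a fidelity value between the two candidate output states is available, the classical bound of Eq.~\eqref{C_bound} introduced above is trivial to tabulate. A short sketch (the fidelity values below are placeholders, not computed from the actual Gaussian states):

\begin{verbatim}
import numpy as np

def classical_bound(F, N):
    """Lower bound C on the error probability of any classical strategy, Eq. (C_bound)."""
    return 0.5*(1.0 - np.sqrt(1.0 - F**N))

# The closer F is to 1 (nearly indistinguishable channels), the weaker the discrimination.
for F in (0.999, 0.9999):
    print(F, [classical_bound(F, N) for N in (10, 100, 1000)])
\end{verbatim}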
\begin{table}[b] \centering \begin{tabular}{llll} \hline \text{\tiny{$\mathcal{E}$}}nsuremath{\tau_\mathrm{ex}}tbf{Symbol} & & \text{\tiny{$\mathcal{E}$}}nsuremath{\tau_\mathrm{ex}}tbf{Name} & \text{\tiny{$\mathcal{E}$}}nsuremath{\tau_\mathrm{ex}}tbf{Value} \\ &&&{\bf or Expression}\\ \hline\hline $\gamma_m$ & & Mechanical dumping & $2\pi \omega_m/10^5$ \\ $\omega_m/2\pi$ & & Mechanical frequency & $2.75\times 10^5$~Hz \\ $T$ & & Phononic bath's temperature & $10^{-3}$~K \\ $\omega_c/2\pi$ & & Cavity mode's frequency & $9.4\times 10^5 c$ \\ $m$ & & Mass & $150$~ng \\ $L$ & & Cavity length & 25~mm \\ $\kappa$ & & Cavities linewidth & $5\times 10^7$~Hz \\ $\delta$ & & Cavity-pump detuning & $4\kappa$ \\ $P$ & & Pump laser power & $4\times 10^{-3}$~W \\ $R$ & & Mechanical system linear dimension & 1~$\mu$m \\ \hline \text{\tiny{$\mathcal{E}$}}nd{tabular} \caption{Specifics of all the parameters for the set-up of two cavities and a mechanical mode entering the simulations in this work.} \label{tab:parameters} \text{\tiny{$\mathcal{E}$}}nd{table} In what follows we fix the significance level to be $\alpha=5\%$ unless otherwise stated. All the values of the system parameters used in the simulations are reported in table~\ref{tab:parameters}. These values are within reach of current technology which is in favour of the feasibility of the protocol. Finally, we assume $\Delta=10^6$~Hz unless otherwise specified. This value of the parameter, characterizing the unknown extra-channel whose presence we want to discriminate, is such that the extra diffusion associated with it is greater than the thermal diffusion characterized by $2\gamma_m k_B T/(\hslash \omega_m)$ and it is motivated by the CSL model. As we discussed previously, the CSL model with white-noise is completely characterised by the two parameters $r_C$, and $\gamma$. The first, $r_C$ can be fixed at $100$~nm~\cite{Carlesso2018} while for the second one we consider the value proposed by Adler~\cite{Adler_2007} $\gamma_{A}=10^{-28}$~m$^3$Hz. Indeed, assuming the mechanical mode to describe the center of mass of a system with linear dimension $R$, that we approximate as spherical for simplicity, and using Eq.~\text{\tiny{$\mathcal{E}$}}qref{csl_rate}, this choice of CSL parameters results in $\Delta\approx 10^6$~Hz. We perform a dynamical analysis, starting from the steady-state of our three-partite system when vacuum input noises are present, and focus on the transient before the system reaches a new steady-state. In doing this, we compare two protocols: a classical one using input thermal noises and local measurements of the output modes, and a quantum one with two-mode squeezed input noises and EPR-like measurements. In order to show the advantage coming from using quantum resources in our context, we need to compare the quantum scheme with the classical lower bound to the error probability. A fair comparison can be achieved by fixing the photon number in the input noises to the two cavities $\{n_1,n_2\}$ and comparing situations with the same sample size $N$, i.e., repetitions of the experiment. We thus compare the error probability coming from the quantum protocol using a TMS input noise with the lower bound that can be achieved starting from uncorrelated thermal noises with the same mean photon number per input mode as the TMS. The lower bound, as already discussed, is the minimum error probability that can be achieved by any classical measurement procedure ~\cite{Pirandola2008}. 
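As a quick consistency check on the numbers in Table~\ref{tab:parameters}, the following sketch converts them into the thermal occupation of the mechanical bath and the effective occupation of Eq.~\eqref{CLStemperature}; it is an illustration only, using solely quantities quoted in the text.

\begin{verbatim}
import numpy as np

hbar, kB = 1.054571817e-34, 1.380649e-23

# Values from Table I.
omega_m = 2*np.pi*2.75e5        # mechanical frequency [rad/s]
gamma_m = 2*np.pi*omega_m/1e5   # mechanical damping [rad/s] (as quoted in the table)
T       = 1e-3                  # phonon-bath temperature [K]
Delta   = 1e6                   # CSL-induced diffusion rate [Hz] (Adler-like parameters)

n_th  = kB*T/(hbar*omega_m)         # thermal occupation in the high-temperature limit
n_csl = n_th + Delta/(2*gamma_m)    # effective occupation, Eq. (CLStemperature)

print("thermal occupation   n_th  ~", round(n_th))
print("effective occupation n_CSL ~", round(n_csl))
print("ratio of CSL to thermal diffusion:", Delta/(2*gamma_m*n_th))
\end{verbatim}

The last printed ratio being larger than one reflects the statement above that, for these parameters, the extra diffusion associated with $\Delta$ exceeds the thermal one.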
Thus, a comparison with this bound can reveal possible quantum advantages. We start by showing the discrepancy between the local measurement strategy with classical input noises -- i.e., the classical protocol -- and the lower bound on the error probability $\mathcal{C}$. The classical input noise is characterized by its thermal covariance matrix \begin{equation} \bm{\Sigma}_{IN}=2\kappa\left(\begin{array}{c|c} (n_1+1/2)\mathbb{I}_{2\times 2} & \mathbb{O}_{2\times 2} \\ \hline \mathbb{O}_{2\times 2} & (n_2+1/2)\mathbb{I}_{2\times 2} \end{array}\right) \end{equation} entering Eq.~\eqref{Dmatrix}. Fig.~\ref{fig:Perr_C} shows the classical error probability and the corresponding bound for two values of $n_1$. The value of $n_2$ does not have any bearing on these probabilities. We can see that the classical error probability is always greater than the classical bound, as expected, and that the larger the number of photons injected as noise into the first cavity, the worse our ability to discriminate the two hypotheses becomes. \begin{figure} \centering {\includegraphics[width=1.\columnwidth]{immagini/Perr_C_n1.pdf}} \caption{Comparison between the classical error probabilities (solid lines) and the respective bounds $\mathcal{C}$ (dashed lines) for different values of the mean number of photons in the input noise modes, $n_1=n_2$. The red curves correspond to $n_1=10$ and the blue ones to $n_1=100$, and we consider the statistics of measurements of the $x_{out_1}$ output quadrature. As discussed in the main text, $\Delta=10^6$~Hz, which corresponds to CSL with Adler parameters~\cite{Adler_2007}. The level of significance and the number of experiments are $\alpha=5\%$ and $N=100$, respectively.} \label{fig:Perr_C} \end{figure} \begin{figure} \centering {\includegraphics[width=1.\columnwidth]{immagini/C_bound_phi.pdf}} \caption{Quantum error probabilities (solid lines) for different squeezing angles $\phi=\{\pi/2,5\pi/6,\pi\}$. The parameters are $n_1=n_2=100$, $N=100$, $\Delta=10^6$~Hz, and $\alpha=5\%$, and we consider the statistics of measurements of the $q_{+}$ output quadrature. The dashed curve represents the corresponding classical bound $\mathcal{C}(n_1,n_2,t)$; the blue curve is the quantum error probability for squeezing angle $\phi=\pi/2$, the green one for $\phi=5\pi/6$, and the red one for $\phi=\pi$. We observe that, as the squeezing angle approaches $\phi=\pi$, violations of the classical bound become possible at intermediate times.} \label{fig:angolo} \end{figure}
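As an illustrative aside, the thermal input covariance matrix written above can be assembled numerically as in the following short sketch (a numpy snippet under an assumed mode ordering $(x_1,p_1,x_2,p_2)$; it is not part of the protocol and simply produces the matrix to be inserted into Eq.~\eqref{Dmatrix}).
\begin{verbatim}
import numpy as np

def thermal_input_covariance(n1, n2, kappa):
    # Sigma_IN = 2*kappa * diag((n1 + 1/2) I_2, (n2 + 1/2) I_2):
    # two uncorrelated thermal input modes, ordering (x1, p1, x2, p2).
    return 2.0 * kappa * np.diag([n1 + 0.5, n1 + 0.5,
                                  n2 + 0.5, n2 + 0.5])

# Example with n1 = n2 = 10 and kappa as in table 'tab:parameters':
print(thermal_input_covariance(10, 10, kappa=5e7))
\end{verbatim}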
In Fig.~\ref{fig:angolo}, we show a first comparison of the quantum error probability against the classical bound. In this case, the input noise state is a TMS state with the same mean photon number in the two modes, characterised by the squeezing parameter $r\geq 0$ and the squeezing angle $\phi$. The input covariance matrix entering Eq.~\eqref{Dmatrix} is given by \begin{equation} \bm{\Sigma}_{IN}=\kappa\left( \begin{array}{c|c} \cosh 2r\, \mathbb{I}_{2\times 2} & \sinh 2r\, R_{\phi}\\ \hline \sinh 2r\, R_{\phi}& \cosh 2r\, \mathbb{I}_{2\times 2} \end{array} \right), \end{equation} where \begin{equation} R_\phi=\begin{pmatrix} \cos\phi & \sin\phi\\ \sin\phi & -\cos\phi \end{pmatrix}. \end{equation} Fixing the mean photon numbers in the two input modes corresponds to fixing the value of the squeezing parameter $r$ through the relation $\cosh 2r=2 n_{1,2}+1$. We are thus left with a single free parameter, the squeezing angle $\phi$. From Fig.~\ref{fig:angolo} we see that, for non-vanishing squeezing angles, and looking at the statistics of the $q_+$ EPR quadrature, a quantum advantage appears, since the error probability curve can be lower than the corresponding classical bound. This is the main result of this work. In particular, we see that the advantage is maximized for a squeezing angle $\phi=\pi$ and can be shown to be monotonically increasing for $\phi\in (0,\pi]$.
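For completeness, a minimal numerical sketch of the TMS input covariance just introduced is given below (again an illustrative numpy snippet under the same assumed mode ordering as before; the final lines simply verify that fixing $r$ through $\cosh 2r = 2n+1$ gives each input mode the same mean photon number $n$ as in the thermal benchmark, which is the fair-comparison condition discussed above).
\begin{verbatim}
import numpy as np

def tms_input_covariance(n, phi, kappa):
    # Sigma_IN = kappa * [[cosh(2r) I_2,   sinh(2r) R_phi],
    #                     [sinh(2r) R_phi, cosh(2r) I_2  ]],
    # with the squeezing fixed by cosh(2r) = 2n + 1.
    r = 0.5 * np.arccosh(2.0 * n + 1.0)
    I2 = np.eye(2)
    R = np.array([[np.cos(phi),  np.sin(phi)],
                  [np.sin(phi), -np.cos(phi)]])
    return kappa * np.block([[np.cosh(2 * r) * I2, np.sinh(2 * r) * R],
                             [np.sinh(2 * r) * R,  np.cosh(2 * r) * I2]])

# Check: mean photon number per mode, (cosh(2r) - 1)/2, equals n = 100,
# i.e. the same value used for the thermal benchmark.
S = tms_input_covariance(n=100, phi=np.pi, kappa=5e7)
print(S[0, 0] / (2 * 5e7) - 0.5)   # ~100.0
\end{verbatim}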
Fig.~\ref{fig:C_bound_n1} shows that no advantage is obtained when we set $\phi=0$, for measurements of the $q_+$ quadrature, and it also shows the dependence of the quantum error probability on the mean number of photons in the input noise. As might be expected, increasing the mean number of photons in the input noise increases the quantum error probability. Similarly, the error probability increases when the number of repetitions of the experiment $N$ decreases, as clearly shown in Fig.~\ref{fig:C_bound_N}. For the sake of completeness, we also examined two additional cases: one is the combination of TMS light and local measurements, and the other is the opposite case of classical thermal input noises and EPR measurements. Figs.~\ref{fig:TMS_C} and \ref{fig:Thermal_EPR} show these two cases, respectively. It is apparent that, when compared to their respective classical bounds, the error probabilities do not show any advantage, even when fixing $\phi=\pi$ in the first case. This tells us that the quantum advantage shown above relies on the combination of a quantum input and a quantum measurement strategy. Fig.~\ref{fig:confronto_schemi} shows the comparison between these last two protocols and the fully quantum one. It should be noted that, in all the previous figures, neither the classical bounds nor the error probabilities vanish at the initial time. When considering the error probabilities arising from the measurements of the EPR output quadratures, in all the reported figures we have shown the statistics of the EPR quadrature $q_+$. This is sufficient to demonstrate the quantum advantage by comparing the error probability with the classical bound. A similar performance would have been obtained by considering other output quadratures. For instance, when considering the statistics related to $q_-$, the quantum advantage is maximized for $\phi=0$. This should not come as a surprise: the occurrence of an advantage when combining TMS input noise and EPR output measurements can be intuitively traced back to the fact that the two-mode squeezed input light establishes quantum correlations between the output fields of the two cavities, which can be exploited in an EPR measurement. In line with the quantum reading protocol~\cite{Pirandola2011}, such correlations appear to be a quantum resource for HT inference. In this context, the dependence of the advantage on the squeezing angle can be qualitatively expected on the basis of the fact that -- depending on the parameters of the set-up -- the non-classical correlations between the output cavity modes that enable the advantage can be accessed by measuring suitably rotated phase-space quadratures. \\ In the case of TMS input noise and local measurements, it is intuitive that the quantum correlations established between the cavities' output modes cannot be exploited by a scheme based on local measurements. Indeed, in Figs.~\ref{fig:TMS_C} and \ref{fig:confronto_schemi} no advantage is shown. Analogously, in the case of Fig.~\ref{fig:Thermal_EPR}, where classical noise is teamed with EPR measurements, no advantage is expected. Indeed, as no quantum correlations between the output cavity modes can be present, the output mode of the second cavity is completely oblivious to the CSL mechanism affecting the mechanical mode in the first cavity. The mixing of the output cavity modes, entailed by the EPR measurement, can thus only further spoil the discrimination process, acting as a noise source.\\ Finally, the quantum advantages we have found appear at short times, in a dynamical phase away from the steady-state. At long times, we see from the previous figures that the advantage is no longer present. This is analogous to what happens in certain quantum metrology schemes for open quantum systems~\cite{Gambetta} where, at long times, the effect of dissipation is such that quantum properties are lost and so is the advantage. \begin{figure} \centering {\includegraphics[width=1.\columnwidth]{immagini/C_bound_Qprob_n1.pdf}} \caption{Comparison between quantum error probabilities (solid lines) and the corresponding classical bounds (dashed lines) for vanishing squeezing angle, $\phi=0$. The blue curves are computed for $n_1=n_2=10$, the red ones for $n_1=n_2=100$. In both cases $N=100$, $\Delta=10^6$~Hz, and $\alpha=5\%$, and we consider the statistics of measurements of the $q_{+}$ output quadrature.} \label{fig:C_bound_n1} \end{figure} \begin{figure} \centering {\includegraphics[width=1.\columnwidth]{immagini/C_bound_N.pdf}} \caption{Quantum error probabilities for $\phi=\pi$ (solid lines) and corresponding classical bounds (dashed lines) for varying number of repetitions of the experiment, $N=\{10,100\}$. The blue curve corresponds to $N=10$, the red one to $N=100$. We fix $\alpha=5\%$, $\Delta=10^6$~Hz, and $n_1=n_2=100$, and we consider the statistics of measurements of the $q_{+}$ output quadrature.} \label{fig:C_bound_N} \end{figure} \begin{figure} \centering {\includegraphics[width=1.\columnwidth]{immagini/TMS_Classical_N.pdf}} \caption{Error probabilities (solid lines) for a protocol with input modes in a TMS state but classical (local) measurements of the output modes. Here we consider local measurements of $x_{out_1}$ to perform the QHT. The dashed lines correspond to the classical bounds.
Parameter values are $\phi=\pi$, $n_1=n_2=100$, and $\Delta=10^6$~Hz. The blue curves are obtained for $N=10$ and the red ones for $N=100$. This measurement scheme does not show any advantage in the form of a violation of the classical bound. } \label{fig:TMS_C} \end{figure} \begin{figure} \centering {\includegraphics[width=0.5\textwidth]{immagini/Thermal_EPR_N.pdf}} \caption{Error probabilities (solid lines) for a protocol with the input modes in a product of thermal states and EPR quadrature measurements of the output modes. Here we consider measurements of the quadrature $q_+$ to perform the QHT. The dashed lines correspond to the classical bounds. Parameter values are $\phi=\pi$, $n_1=n_2=100$, and $\Delta=10^6$~Hz. This measurement scheme does not show any advantage in the form of a violation of the classical bound.} \label{fig:Thermal_EPR} \end{figure} \begin{figure} \centering {\includegraphics[width=1.\columnwidth]{immagini/confronto_schemi.pdf}} \caption{Comparison between different protocols: 1. Red curve: TMS light as input and EPR measurements of the output modes; 2. Blue curve: TMS light as input and classical (local) measurements of the output modes; 3. Green curve: thermal input noises and EPR measurements. Parameters are $\phi=\pi$, $n_1=n_2=100$, $N=100$, $\alpha=5\%$, and $\Delta=10^6$~Hz. The dashed line represents the corresponding classical bound. We consider the statistics of $x_{out_1}$ and $q_{+}$ from the local and EPR measurements, respectively. As already observed, the only protocol able to offer an advantage over the classical bound is the first, fully quantum one.} \label{fig:confronto_schemi} \end{figure} To conclude this section, and in view of the application of the QHT inference scheme presented here to collapse models, it is interesting to show that the quantum advantage persists if we vary the parameter $\Delta$. This can be seen in Fig.~\ref{fig:deltavariabile}, which shows the relative difference between the quantum error probability and the classical bound \begin{equation}\label{violations} \mathcal{Q}(\Delta)=100\,\frac{\mathcal{C}(n_1,n_2,t)-P_{err}(\phi,r,t)}{\mathcal{C}(n_1,n_2,t)+P_{err}(\phi,r,t)}, \end{equation} for $\phi=\pi$, $n_1=n_2=100$, and $N=10$. This figure, and its inset, shows an advantage (regions of $\mathcal{Q}>0$) that is present at early times and extends over several orders of magnitude in $\Delta$. Remarkably, an advantage can still be found even when the unknown channel effect is sub-leading with respect to the thermal diffusion rate. An in-depth analysis of the possibilities offered by QHT for constraining CSL and related models is outside the scope of the present work and will be part of a future investigation.
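As a side note, the figure of merit in Eq.~\eqref{violations} is simply a symmetrised relative difference between the classical bound and the quantum error probability; an illustrative sketch, with hypothetical numerical values, reads:
\begin{verbatim}
def quantum_advantage(C, P_err):
    # Eq. (violations): Q = 100 (C - P_err) / (C + P_err);
    # Q > 0 flags a violation of the classical bound C by the
    # error probability P_err of the quantum protocol.
    return 100.0 * (C - P_err) / (C + P_err)

# Hypothetical values, for illustration only:
print(quantum_advantage(C=0.10, P_err=0.07))   # ~17.6 (per cent)
\end{verbatim}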
\begin{figure} \centering {\includegraphics[width=1.\columnwidth]{immagini/deltavariabile_n2_0V2.pdf}} \caption{Quantum advantage for different values of the $\Delta$ parameter. The main figure shows a density plot of $\mathcal{Q}(\Delta,t)$ defined in Eq.~\eqref{violations}. Parameters are $n_1=n_2=100$, $N=10$, $\alpha=5\%$ and $\phi=\pi$, and we consider the statistics of measurements of the $q_{+}$ output quadrature. The black dashed contour separates the regions of positive and negative $\mathcal{Q}$. The blue dashed contours are level lines in the region $\mathcal{Q}>0$, i.e., in the region in which a quantum advantage can be found. The inset shows three sample curves for $\Delta=\{10^4,10^6,10^7\}~$Hz in detail. It should be noted that the case $\Delta=10^4$~Hz corresponds to the situation in which the CSL rate (or that of the unknown heating mechanism of the mechanical mode) is smaller than the thermal diffusion rate. We also observe that, for $\Delta=10^4$~Hz, despite the small quantum advantage, the quantum protocol considered delivers an error probability close to the classical bound at all times.} \label{fig:deltavariabile} \end{figure} \section{Conclusions}\label{conclusions} The combination of QHT and optomechanical architectures opens the way to tests of fundamental physics and offers the possibility of quantum advantages over analogous classical strategies. Following an approach inspired by the quantum reading framework, we have applied QHT to discriminate between two dynamical channels applied to an optomechanical system. Two hypotheses were formulated to describe the absence ($H_0$) or presence ($H_1$) of an additional dissipative mechanism, potentially due to the spontaneous localisation of the wavefunction of the mechanical resonator as predicted by the CSL model. We compared two measurement strategies and studied the associated error probabilities, finding that there is an advantage when non-classical input noise states are used instead of classical resources. The classical scheme uses two independent thermal states as input source and is combined with a direct measurement of the output modes. The quantum scheme employs a two-mode squeezed state and an EPR-like measurement. We have compared the error probabilities obtained from such schemes with the classical lower bound that can be obtained from the fidelity of the two-mode output state with classical thermal input noises. While the error probabilities coming from the classical protocol are always greater than the classical bound, the same is not true for the quantum protocol error probability, which shows an advantage at finite times for some values of the squeezing angle. Recently, in~\cite{2020arXiv200813580S} it was shown how squeezing and entanglement offer an advantage for testing the CSL model in cold-atom interferometric experiments. Moreover, we explored a large part of the parameter space of the CSL mechanism, showing that the advantage is widespread. In the framework of collapse models, this study offers a starting point for future analyses aimed at restricting the range of still untested parameters characterizing CMs. More generally, we have proposed a versatile scheme that could be implemented and applied to different systems in view of exploring other fundamental physics mechanisms. \acknowledgments MMM and MP acknowledge support from the EU H2020 FET Project TEQ (Grant No. 766900). AB acknowledges the MSCA project pERFEcTO (Grant No. 795782). SP acknowledges funding from the European Union’s Horizon 2020 Research and Innovation Action under grant agreement No. 862644 (FET-OPEN project: Quantum readout techniques and technologies, QUARTET).
MP is supported by the DfE-SFI Investigator Programme (grant 15/IA/2864), the Royal Society Wolfson Research Fellowship (RSWF\textbackslash R3\textbackslash 183013), the Royal Society International Exchanges Programme (IEC\textbackslash R2\textbackslash 192220), the Leverhulme Trust Research Project Grant (grant nr.~RGP-2018-266), and the UK EPSRC (grant nr.~EP/T028106/1). This research was partially supported by COST Action CA15220 ``Quantum Technologies in Space''. \end{document}
\begin{document} \author[Robert Laterveer] {Robert Laterveer} \address{Institut de Recherche Math\'ematique Avanc\'ee, CNRS -- Universit\'e de Strasbourg,\ 7 Rue Ren\'e Des\-car\-tes, 67084 Strasbourg CEDEX, FRANCE.} \email{[email protected]} \title{On the Chow groups of certain cubic fourfolds} \begin{abstract} This note is about the Chow groups of a certain family of smooth cubic fourfolds. This family is characterized by the property that each cubic fourfold $X$ in the family has an involution such that the induced involution on the Fano variety $F$ of lines in $X$ is symplectic and has a $K3$ surface $S$ in the fixed locus. The main result establishes a relation between $X$ and $S$ on the level of Chow motives. As a consequence, we can prove finite--dimensionality of the motive of certain members of the family. \end{abstract} \keywords{Algebraic cycles, Chow groups, motives, cubic fourfolds, hyperk\"ahler varieties, K3 surfaces, finite--dimensional motive} \subjclass[2010]{Primary 14C15, 14C25, 14C30.} \maketitle \section{Introduction} For a smooth projective variety $X$ over $\mathbb{C}$, let $A^i(X):=CH^i(X)_{\mathbb{Q}}$ denote the Chow groups (i.e. the groups of codimension $i$ algebraic cycles on $X$ with $\mathbb{Q}$--coefficients, modulo rational equivalence). Let $A^i_{hom}(X)$ denote the subgroup of homologically trivial cycles. When $X\subset\mathbb{P}^5(\mathbb{C})$ is a smooth cubic fourfold, we have $A^i_{hom}(X)=0$ for $i\not=3$, but $A^3_{hom}(X)\not=0$ (this is related to the fact that $H^{3,1}(X)\not=0$). The main result of this note shows that for a certain family of cubic fourfolds, the group $A^3_{hom}(X)$ is not larger than the Chow group of $0$--cycles on a $K3$ surface: \begin{nonumbering}[=theorem \ref{main}] Let $X\subset\mathbb{P}^5(\mathbb{C})$ be a smooth cubic fourfold defined by an equation \[ f(x_0,x_1,x_2,x_3) + (x_4)^2\ell_1(x_0,\ldots,x_3) + (x_5)^2\ell_2(x_0,\ldots,x_3)+x_4x_5\ell_3(x_0,\ldots,x_3)=0\ \] (here $f$ has degree $3$ and $\ell_1, \ell_2,\ell_3$ are linear forms). There exists a $K3$ surface $S$ and a correspondence $\Gamma\in A^{3}(X\times S)$ inducing a split injection \[ \Gamma_\ast\colon\ \ \ A^3_{hom}(X)\ \hookrightarrow\ A^2_{hom}(S)\ .\] \end{nonumbering} In a nutshell, the argument proving theorem \ref{main} is as follows: cubics $X$ as in theorem \ref{main} have an involution inducing a symplectic involution $\iota_F$ of the Fano variety of lines $F=F(X)$. The fixed locus of $\iota_F$ contains a $K3$ surface $S$. The inclusion $S\subset F$ being symplectic, there is a (correspondence--induced) isomorphism \[ \Gamma_\ast\colon\ \ \ H^{3,1}(X)\ \xrightarrow{\cong}\ H^{2,0}(S)\ .\] Because the cubics $X$ as in theorem \ref{main} form a large family, and the correspondence $\Gamma$ exists for the whole family, one can apply Voisin's method of ``spread'' \cite{V0}, \cite{V1}, \cite{Vo}, \cite{Vo2} to this isomorphism, and obtain a statement on the level of rational equivalence which proves theorem \ref{main}. As an application of theorem \ref{main}, we obtain some new examples of cubics with finite--dimensional motive (in the sense of Kimura/O'Sullivan \cite{Kim}, \cite{An}, \cite{J4}): \begin{nonumberingc}[=corollary \ref{findim}] Let $X$ be as in theorem \ref{main}, and assume \[ \dim H^4(X)\cap H^{2,2}(X,\mathbb{C})\ge 20 \ .\] Then $X$ has finite--dimensional motive. \end{nonumberingc} For $X$ as in corollary \ref{findim}, one can also prove finite--dimensionality for the Fano varieties of lines on $X$ (remark \ref{fanotoo}). 
This gives new examples of hyperk\"ahler fourfolds with finite--dimensional motive. \vskip0.6cm \begin{convention} In this article, the word {\sl variety\/} will refer to a reduced irreducible scheme of finite type over $\mathbb{C}$. A {\sl subvariety\/} is a (possibly reducible) reduced subscheme which is equidimensional. {\bf All Chow groups will be with rational coefficients}: we will denote by $A_j(X)$ the Chow group of $j$--dimensional cycles on $X$ with $\mathbb{Q}$--coefficients; for $X$ smooth of dimension $n$ the notations $A_j(X)$ and $A^{n-j}(X)$ are used interchangeably. The notations $A^j_{hom}(X)$, $A^j_{AJ}(X)$ will be used to indicate the subgroups of homologically trivial, resp. Abel--Jacobi trivial cycles. For a morphism $f\colon X\to Y$, we will write $\Gamma_f\in A_\ast(X\times Y)$ for the graph of $f$. The contravariant category of Chow motives (i.e., pure motives with respect to rational equivalence as in \cite{Sc}, \cite{MNP}) will be denoted $\mathcal M_{\rm rat}$. We will write $H^j(X)$ to indicate singular cohomology $H^j(X,\mathbb{Q})$. Given a group $G\subset\aut(X)$ of automorphisms of $X$, we will write $A^j(X)^G$ (and $H^j(X)^G$) for the subgroup of $A^j(X)$ (resp. $H^j(X)$) invariant under $G$. \end{convention} \section{Preliminaries} \subsection{Refined K\"unneth decomposition} \begin{definition} Let $X$ be a smooth projective variety, and $h\in\hbox{Pic}(X)$ an ample class. The hard Lefschetz theorem asserts that the map \[ L^{n-i}\colon H^i(X)\ \to\ H^{2n-i}(X)\] obtained by cupping with $h^{n-i}$ is an isomorphism, for any $i< n$. One of the standard conjectures, often denoted $B(X)$, asserts that the inverse isomorphism is algebraic: we say that $B(X)$ holds if for any $i<n$, there exists a correspondence $C_i\in A^{i}(X\times X)$ such that \[ (C_i)_\ast\colon\ \ H^{2n-i}(X)\ \to\ H^i(X) \] is an inverse to $L^{n-i}$. \end{definition} \begin{remark} For more on the standard conjectures, cf. \cite{K0}, \cite{K1}. In this note, we will be using the following two facts: Any smooth hypersurface $X\subset\mathbb{P}^m(\mathbb{C})$ verifies $B(X)$ \cite{K0}, \cite{K1}. For any smooth cubic fourfold $X\subset\mathbb{P}^5(\mathbb{C})$, the Fano variety of lines $F:=F(X)$ verifies $B(F)$ (this follows from \cite[Theorem 1.1]{ChMa}, or alternatively from \cite[Corollary 6]{moi2}). \end{remark} \begin{remark} Let $N^\ast H^\ast$ denote the coniveau filtration on cohomology \cite{BO}. Vial \cite{V4} has introduced a variant filtration $\widetilde{N}^\ast H^\ast$, called the {\em niveau filtration\/}. There is an inclusion \[ \widetilde{N}^j H^i(X)\ \subset\ N^j H^i(X) \] for any $X$ and all $i,j$. Conjecturally, this is always an equality (this would follow from the standard conjecture $B$). If $B(X)$ holds and $j\ge {i-1\over 2}$, this inclusion is an equality \cite{V4}. \end{remark} \begin{theorem}[Vial \cite{V4}]\label{vial} Let $X$ be a smooth projective variety of dimension $n\le 5$. Assume $B(X)$ holds. There exists a decomposition of the diagonal \[ \Delta_X={\displaystyle \sum_{i,j} } \pi^X_{i,j}\ \ \ \hbox{in}\ H^{2n}(X\times X)\ ,\] where the $\pi_{i,j}$'s are mutually orthogonal idempotents. The correspondence $\pi_{i,j}$ acts on $H^\ast(X)$ as a projector on $\hbox{Gr}^j_{\widetilde{N}} H^i(X)$.
Moreover, $\pi_{i,j}$ can be chosen to factor over a variety of dimension $i-2j$ (i.e., for each $\pi_{i,j}$ there exists a smooth projective variety $Z_{i,j}$ of dimension $i-2j$, and correspondences $\Gamma_{i,j}\in A^{n-j}(Z_{i,j}\times X), \Psi_{i,j}\in A^{i-j}(X\times Z_{i,j})$ such that $\pi_{i,j}=\Gamma_{i,j}\circ \Psi_{i,j}$ in $H^{2n}(X\times X)$). \end{theorem} \begin{proof} This is a special case of \cite[Theorem 1]{V4}. Indeed, as mentioned in loc. cit., varieties $X$ of dimension $\le 5$ such that $B(X)$ holds verify condition (*) of loc. cit. \end{proof} \begin{remark} If $X$ is a surface, $\pi^X_{2,0}$ is the homological realization of the projector $\pi^X_{2,tr}$ constructed on the level of Chow motives in \cite{KMP}. \end{remark} \subsection{Spread} \begin{lemma}[Voisin \cite{V0}, \cite{V1}]\label{projbundle} Let $M$ be a smooth projective variety of dimension $n+1$, and $L$ a very ample line bundle on $M$. Let \[ \pi\colon \mathcal X\to B\] denote a family of hypersurfaces, where $B\subset\vert L\vert$ is a Zariski open. Let \[ p\colon \widetilde{\mathcal X\times_B \mathcal X}\ \to\ \mathcal X\times_B \mathcal X\] denote the blow--up of the relative diagonal. Then $\widetilde{\mathcal X\times_B \mathcal X}$ is Zariski open in $V$, where $V$ is a projective bundle over $\widetilde{M\times M}$, the blow--up of $M\times M$ along the diagonal. \end{lemma} \begin{proof} This is \cite[Proof of Proposition 3.13]{V0} or \cite[Lemma 1.3]{V1}. The idea is to define $V$ as \[ V:=\Bigl\{ \bigl((x,y,z),\sigma\bigr) \ \vert\ \sigma\vert_z=0\Bigr\}\ \ \subset\ \widetilde{M\times M}\times \vert L\vert\ .\] The very ampleness assumption ensures $V\to\widetilde{M\times M}$ is a projective bundle. \end{proof} This is used in the following key proposition: \begin{proposition}[Voisin \cite{V1}]\label{voisin1} Assumptions as in lemma \ref{projbundle}. Assume moreover $M$ has trivial Chow groups. Let $R\in A^n(V)_{}$. Suppose that for all $b\in B$ one has \[ H^n(X_b)_{prim}\not=0\ \ \ \ \hbox{and}\ \ \ \ R\vert_{\widetilde{X_b\times X_b}}=0\ \ \in H^{2n}(\widetilde{X_b\times X_b})\ .\] Then there exists $\gamma\in A^n(M\times M)_{}$ such that \[ (p_b)_\ast \bigl(R\vert_{\widetilde{X_b\times X_b}}\bigr)= \gamma\vert_{X_b\times X_b} \ \ \in A^{n}({X_b\times X_b})_{}\] for all $b\in B$. (Here $p_b$ denotes the restriction of $p$ to $\widetilde{X_b\times X_b}$, which is the blow--up of $X_b\times X_b$ along the diagonal.) \end{proposition} \begin{proof} This is \cite[Proposition 1.6]{V1}. \end{proof} The following is an equivariant version of proposition \ref{voisin1}: \begin{proposition}[Voisin \cite{V1}]\label{voisin2} Let $M$ and $L$ be as in proposition \ref{voisin1}. Let $G\subset\aut(M)$ be a finite group. Assume the following: \noindent (\romannumeral1) The linear system $\vert L\vert^G:=\mathbb{P}\bigl( H^0(M,L)^G\bigr)$ has no base--points, and the locus of points in $\widetilde{M\times M}$ parametrizing triples $(x,y,z)$ such that the length $2$ subscheme $z$ imposes only one condition on $\vert L\vert^G$ is contained in the union of (proper transforms of) graphs of non--trivial elements of $G$, plus some loci of codimension $>n+1$. \noindent (\romannumeral2) Let $B\subset\vert L\vert^G$ be the open parametrizing smooth hypersurfaces, and let $X_b\subset M$ be a hypersurface for $b\in B$ general. 
There is no non--trivial relation \[ {\displaystyle\sum_{g\in G}} c_g \Gamma_g +\gamma=0\ \ \ \hbox{in}\ H^{2n}(X_b\times X_b)\ ,\] where $\gamma$ is a cycle in $\operatorname{i}a\bigl( A^n(M\times M)\to A^n(X_b\times X_b)\bigr)$. Let $R\in A^n(\mathcal X\times_B \mathcal X)$ be such that \[ R\vert_{{X_b\times X_b}}=0\ \ \in H^{2n}({X_b\times X_b})\ \ \ \forall b\in B\ .\] Then there exists $\gamma\in A^n(M\times M)_{}$ such that \[ R\vert_{{X_b\times X_b}}= \gamma\vert_{X_b\times X_b} \ \ \in A^{n}({X_b\times X_b})\ \ \ \forall b\in B\ .\] \end{proposition} \begin{proof} This is not stated verbatim in \cite{V1}, but it is contained in the proof of \cite[Proposition 3.1 and Theorem 3.3]{V1}. We briefly review the argument. One considers \[ V:=\Bigl\{ \bigl((x,y,z),\sigma\bigr) \ \vert\ \sigma\vert_z=0\Bigr\}\ \ \subset\ \widetilde{M\times M}\times \vert L\vert^G\ .\] The problem is that this is no longer a projective bundle over $\widetilde{M\times M}$. However, as explained in the proof of \cite[Theorem 3.3]{V1}, hypothesis (\romannumeral1) ensures that one can obtain a projective bundle after blowing up the graphs $\Gamma_g, g\in G$ plus some loci of codimension $>n+1$. Let $M^\prime\to\widetilde{M\times M}$ denote the result of these blow--ups, and let $V^\prime\to M^\prime$ denote the projective bundle obtained by base--changing. Analyzing the situation as in \cite[Proof of Theorem 3.3]{V1}, one obtains \[ R\vert_{X_b\times X_b} =R_0\vert_{X_b\times X_b}+ {\displaystyle\sum_{g\in G}} \lambda_g \Gamma_g\ \ \ \hbox{in}\ A^n(X_b\times X_b) \ ,\] where $R_0\in A^n(M\times M)$ and $\lambda_g\in\mathbb{Q}$ (this is \cite[Equation (15)]{V1}). By assumption, $R\vert_{X_b\times X_b}$ is homologically trivial. Using hypothesis (\romannumeral2), this implies that all $\lambda_g$ have to be $0$. \end{proof} \section{Main result} \begin{theorem}\label{main} Let $X\subset\mathbb{P}^5(\mathbb{C})$ be a smooth cubic fourfold defined by an equation \[ f(x_0,x_1,x_2,x_3) + (x_4)^2\ell_1(x_0,\ldots,x_3) + (x_5)^2\ell_2(x_0,\ldots,x_3)+x_4x_5\ell_3(x_0,\ldots,x_3)=0\ \] (here $f$ has degree $3$ and $\ell_1, \ell_2,\ell_3$ are linear forms). There exists a $K3$ surface $S$ and a correspondence $\Gamma\in A^{3}(X\times S)$ inducing a split injection \[ \Gamma_\ast\colon\ \ \ A^3_{hom}(X)\ \hookrightarrow\ A^2_{hom}(S)\ .\] \end{theorem} \begin{proof} Let us consider the involution \[ \begin{split} \iota\colon \mathbb{P}^5 \ &\to\ \mathbb{P}^5\ ,\\ [x_0:x_1:x_2:x_3:x_4:x_5]\ &\mapsto\ [x_0:x_1:x_2:x_3: -x_4: -x_5]\ .\\ \end{split}\] The family of cubic fourfolds $X$ as in theorem \ref{main} is exactly the family of smooth cubic fourfolds invariant under $\iota$ (this was observed in \cite[Section 7]{Cam}, and also in \cite{LFu3}, where this family appears as ``Family V-(1)'' in the classification table of \cite[Theorem 0.1]{LFu3}). Let us denote by \[ \iota_X\colon\ \ X\ \to\ X \] the involution of $X$ induced by $\iota$. Let $F:=F(X)$ denote the Fano variety parametrizing lines contained in $X$. The variety $F$ is a hyperk\"ahler variety \cite{BD}. The involution \[ \iota_F\colon\ \ F\ \to\ F \] induced by $\iota_X$ is symplectic \cite[Section 7]{Cam}, \cite[Theorem 0.1]{LFu3}. The fixed locus of $\iota_F$ consists of $28$ isolated points and a $K3$ surface $S\subset F$ \cite[Section 7]{Cam}, \cite[Section 4]{LFu3}. The involution $\iota_F$ being symplectic, the surface $S\subset F$ is a {\em symplectic subvariety\/}, i.e. 
the inclusion $\tau\colon S\to F$ induces an isomorphism \[ \tau^\ast\colon\ \ H^{2,0}(F)\ \xrightarrow{\cong}\ H^{2,0}(S)\ .\] As is readily seen, this implies there is also an isomorphism \begin{equation}\label{triso} \tau^\ast\colon\ \ H^{2}_{tr}(F)\ \xrightarrow{\cong}\ H^{2}_{tr}(S)\ ,\end{equation} where $H^2_{tr}()\subset H^2()$ denotes the smallest Hodge--substructure containing $H^{2,0}()$. Let $\Gamma_{BD}$ be the correspondence inducing the Beauville--Donagi isomorphism \begin{equation}\label{bdiso} (\Gamma_{BD})_\ast\colon\ \ H^4(X)\ \xrightarrow{\cong}\ H^2(F)\ \end{equation} \cite{BD}. (That is, let $P\subset X\times F$ denote the incidence variety, with morphisms $p\colon P\to F$, $q\colon P\to X$. Then $\Gamma_{BD}:= \Gamma_p\circ {}^t \Gamma_q\in A^{5}(X\times F)$.) Let us define a correspondence \[ \Gamma:= {}^t \Gamma_\tau\circ \Gamma_{BD}\ \ \ \in A^{3}(X\times S)\ .\] Combining isomorphisms (\ref{triso}) and (\ref{bdiso}), we obtain an isomorphism \[ \Gamma_\ast\colon\ \ H^4(X)/N^2\ \xrightarrow{\cong}\ H^2_{tr}(F)\ \xrightarrow{\cong}\ H^2_{tr}(S)\ .\] A bit more formally, this implies there is an isomorphism of homological motives \begin{equation}\label{motiso} \Gamma\colon\ \ (X,\pi^X_{4,1},0)\ \xrightarrow{\cong}\ (S,\pi^S_{2,tr},0)\ \ \ \hbox{in}\ \mathcal M_{\rm hom}\ .\end{equation} Here, $\pi^X_{4,1}=\pi^X_4-\pi^X_{4,2}$ is a projector on $H^4(X)/N^2$; this exists thanks to theorem \ref{vial}. The projector $\pi^S_{2,tr}$ is the projector on $H^2_{tr}(S)$ constructed in \cite{KMP}. Let $\Psi\in A^{3}(S\times X)$ be a correspondence inducing an inverse to the isomorphism (\ref{motiso}). This means that we have \[ (\Psi\circ \Gamma)_\ast=\ide\colon\ \ \ H^4(X)/N^2\ \to\ H^4(X)/N^2\ ,\] which means that there is a homological equivalence of cycles \begin{equation}\label{homeq} \Psi\circ\Gamma\circ \pi_4^X =\pi_4^X+\gamma_1\ \ \ \hbox{in}\ H^8(X\times X)\ ,\end{equation} where $\gamma_1\in A^4(X\times X)$ is some cycle supported on $V\times V\subset X\times X$, where $V\subset X$ is a codimension $2$ closed subvariety (this is because $\gamma_1$ is supported on the support of $\pi^X_{4,2}$, which is supported on $V\times V$ as indicated, by theorem \ref{vial}). As $X\subset\mathbb{P}^5$ is a hypersurface, the only interesting K\"unneth component is $\pi^X_4$. That is, we can write \[ \Delta_X =\pi^X_4 +\gamma_2 \ \ \ \hbox{in}\ H^8(X\times X)\ ,\] where $\gamma_2$ is a ``completely decomposed'' cycle, i.e. a cycle with support on $\cup_i V_i\times W_i\subset X\times X$, where $\dim V_i +\dim W_i=4$. Plugging this in equation (\ref{homeq}), we obtain a homological equivalence of cycles \begin{equation}\label{homeq2} \Psi\circ\Gamma =\Delta_X +\gamma \ \ \ \hbox{in}\ H^8(X\times X)\ ,\end{equation} where $\gamma$ is a ``completely decomposed'' cycle in the above sense. We now proceed to upgrade the homological equivalence (\ref{homeq2}) to a rational equivalence. This can be done thanks to the work of Voisin on the Bloch/Hodge equivalence \cite{V0}, \cite{V1}, using the technique of ``spread'' of algebraic cycles in good families. Following the approach of \cite{V0}, \cite{V1}, we put the above construction in family. We define \[ \pi\colon\ \ \mathcal X\ \to\ B \] to be the family of all smooth cubic fourfolds given by an equation as in theorem \ref{main}.
(That is, we let $G\subset\aut(\mathbb{P}^5)$ be the order $2$ group generated by the involution $u$, and we define \[ B\ \subset\ \Bigl(\mathbb{P} H^0\bigl(\mathbb{P}^5,\mathcal O_{\mathbb{P}^5}(3)\bigr)\Bigr)^G \] as the open subset parametrizing smooth $G$--invariant cubics.) We will write $X_b:=\pi^{-1}(b)$ for the fibre over $b\in B$. We also define families \[ \mathcal F\ \to\ B\ ,\ \ \ \mathcal S\ \to\ B \] of Fano varieties of lines, resp. of $K3$ surfaces. (That is, $\mathcal S\subset\mathcal F$ is the fixed locus of the involution of $\mathcal F$ induced by $\iota$.) We will write $F_b$ and $S_b$ for the fibre over $b\in B$. The correspondence $\Gamma$ constructed above readily extends to this relative setting: \begin{lemma}\label{relgamma} There exists a relative correspondence $\Gamma\in A^3(\mathcal X\times_B \mathcal S)$, such that for all $b\in B$, the restriction \[ \Gamma_b:= \Gamma\vert_{X_b\times S_b}\ \ \ \in A^3(X_b\times S_b) \] induces the isomorphism \[ \Gamma_b\colon\ \ (X_b,\pi^{X_b}_{4,1},0)\ \xrightarrow{\cong}\ (S_b,\pi^{S_b}_{2,tr},0)\ \ \ \hbox{in}\ \mathcal M_{\rm hom} \] as in (\ref{motiso}). \end{lemma} \begin{proof} Let $\mathcal P\subset \mathcal X\times_B \mathcal F$ denote the incidence variety, with projections $p\colon \mathcal P\to \mathcal F$, $q\colon \mathcal P\to \mathcal X$. Let $\tau$ denote the inclusion morphism $\mathcal S\to \mathcal F$. We define \[ \Gamma:= {}^t \Gamma_\tau\circ \Gamma_p\circ {}^t \Gamma_q\in A^{3}(\mathcal X\times_B \mathcal S)\ . \] (For composition of relative correspondences in the setting of smooth quasi--projective families that are smooth over a base $B$, cf. \cite{CH}, \cite{GHM}, \cite{NS}, \cite{DM}, \cite[8.1.2]{MNP}.) \end{proof} The correspondences $\Psi$ and $\gamma$ also extend to the relative setting: \begin{lemma}\label{allrel} There exist subvarieties $\mathcal V_i, \mathcal W_i\subset \mathcal X$ with $\hbox{codim}(\mathcal V_i)+\hbox{codim}(\mathcal W_i)=4$, and relative correspondences \[ \Psi\ \ \in A^3(\mathcal S\times_B \mathcal X)\ , \ \ \ \gamma\ \ \in A^4(\mathcal X\times_B \mathcal X)\ ,\] where $\gamma$ is supported on $\cup_i \mathcal V_i\times_B \mathcal W_i$, and such that for all $b\in B$, the restrictions \[ \Psi_b:= \Psi\vert_{S_b\times X_b}\ \ \in A^3(S_b\times X_b)\ ,\ \ \ \gamma_b:= \gamma\vert_{X_b\times X_b}\ \ \in A^4(X_b\times X_b) \] verify the equality \[ \Psi_b\circ\Gamma_b =\mathbb{D}elta_{X_b} +\gamma_b \ \ \ \hbox{in}\ H^8(X_b\times X_b) \] as in (\ref{homeq2}). \end{lemma} \begin{proof} The statement is different, but this is really the same Hilbert schemes argument as \cite[Proposition 3.7]{V0}, \cite[Proposition 4.25]{Vo}. Let $\Gamma\in A^3(\mathcal X\times_B \mathcal S)$ be the relative correspondence of lemma \ref{relgamma}, and let $\mathbb{D}elta_\mathcal X\in A^4(\mathcal X\times_B \mathcal X)$ be the relative diagonal. 
By what we have said above, for each $b\in B$ there exist subvarieties $V_{b,i}, W_{b,i}\subset X_b$ (with $\dim (V_{b,i})+\dim(W_{b,i})=4$), and a cycle $\gamma_b$ supported on \[\cup_i V_{b,i}\times W_{B,i}\subset X_b\times X_b\ ,\] and a cycle $\Psi_b\in A^3(S_b\times X_b)$, such that there is equality \begin{equation}\label{split} \Psi_b\circ \Gamma_b = \mathbb{D}elta_\mathcal X\vert_{X_b\times X_b} +\gamma_b\ \ \ \hbox{in}\ H^8(X_b\times X_b)\ .\end{equation} The point is that the data of all the $(b,V_{b,i}, W_{b,i}, \gamma_b,\Psi_b)$ that are solutions of the equality (\ref{split}) can be encoded by a countable number of algebraic varieties $p_j\colon M_j\to B$, with universal objects \[ \mathcal V_{i,j}\to M_j\ , \ \ \ \mathcal W_{i,j}\to M_j\ ,\ \ \ \gamma_j\to M_j\ ,\ \ \ \Psi_j\to M_j\ \] (where $\mathcal V_{i,j}, \mathcal W_{i,j}\subset \mathcal X_{M_j}$, and $\gamma_j$ is a cycle supported on $\cup_i \mathcal V_{i,j}\times_{M_j}\mathcal W_{i,j}$, and $\Psi_j\in A^3(\mathcal S\times_{M_j} \mathcal X)$), with the property that for $m\in M_j$ and $b=p_j(m)\in B$, we have \[ \begin{split} &(\gamma_j)\vert_{X_b\times X_b}= \gamma_b\ \ \ \hbox{in}\ H^8(X_b\times X_b)\ ,\\ & (\Psi_j)\vert_{S_b\times X_b}= \Psi_b\ \ \ \hbox{in}\ H^6(S_b\times X_b)\ .\\ \end{split}\] By what we have said above, the union of the $M_j$ dominate $B$. Since there is a countable number of $M_j$, one of the $M_j$ (say $M_0$) must dominate $B$. Taking hyperplane sections, we may assume $M_0\to B$ is generically finite (say of degree $d$). Projecting the cycles $\gamma_0$ and $\Psi_0$ to $\mathcal X\times_B \mathcal X$, resp. to $\mathcal S\times_B \mathcal X$, and then dividing by $d$, we have obtained cycles $\gamma$ and $\Psi$ as requested. \end{proof} Lemma \ref{allrel} can be succinctly restated as follows: the relative correspondence \[ R:= \Psi\circ \Gamma - \mathbb{D}elta_{\mathcal X} - \gamma\ \ \ \in A^4(\mathcal X\times_B \mathcal X) \] has the property that for all $b\in B$, the restriction is homologically trivial: \[ R\vert_{X_b\times X_b}\ \ \ \in A^4_{hom}(X_b\times X_b)\ \ \ \forall b\in B\ .\] Applying theorem \ref{voisin2} to $R$ (this is possible in view of proposition \ref{OK} below), we find that \begin{equation}\label{rateq} (R +\delta)\vert_{X_b\times X_b}=0\ \ \ \hbox{in}\ A^4(X_b\times X_b)\ \ \ \forall b\in B\ ,\end{equation} where $\delta$ is some cycle \[ \delta\in \operatorname{i}a \Bigl( A^4(\mathbb{P}^5\times \mathbb{P}^5\times B)\ \to\ A^4(\mathcal X\times_B \mathcal X)\Bigr)\ .\] Since $A^\ast_{hom}(\mathbb{P}^5\times\mathbb{P}^5)=0$, we have \[ (\delta\vert_{X_b\times X_b})_\ast A^\ast_{hom}(X_b)=0\ .\] For $b\in B$ general, the fibre $X_b\times X_b$ will be in general position with respect to the $\mathcal V_i$ and $\mathcal W_i$ and so \[ \dim(\mathcal V_i\cap X_b)+\dim(\mathcal W_i\cap X_b) =4\ \ \forall i\ ,\] which ensures that \[ (\gamma\vert_{X_b\times X_b})_\ast A^\ast_{hom}(X_b)=0\ .\] Plugging in the definition of $R$ into the rational equivalence (\ref{rateq}), this means that \[ (\Psi\vert_{X_b\times X_b})_\ast (\Gamma\vert_{X_b\times X_b})_\ast =\ide\colon\ \ A^\ast_{hom}(X_b)\ \to\ A^\ast_{hom}(X_b)\ \ \ \hbox{for}\ b\in B\ \hbox{general}\ ,\] which proves theorem \ref{main} for $b\in B$ general. 
To prove theorem \ref{main} for any given $b_0\in B$, we note that the above construction can also be made locally around the point $b_0$: in the construction of lemma \ref{allrel}, we throw away all the data $M_j$ for which the subvarieties $\mathcal V_{i,j}, \mathcal W_{i,j}$ are {\em not\/} all in general position with respect to $X_{b_0}\times X_{b_0}$. The union of the remaining $M_j$ will dominate an open $B^\prime\subset B$ containing $b_0$, and so the above proof works for the cubic $X_{b_0}$. To end the proof, it remains to verify the hypotheses of theorem \ref{voisin2} (which we applied above) are met with. This is the content of the following: \begin{proposition}\label{OK} Let $\mathcal X\to B$ be the family of smooth cubic fourfolds as in theorem \ref{main}, i.e. \[ B\ \subset\ \Bigl(\mathbb{P} H^0\bigl(\mathbb{P}^5,\mathcal O_{\mathbb{P}^5}(3)\bigr)\Bigr)^G \] is the open subset parametrizing smooth $G$--invariant cubics, and $G=\{id,\iota\}\subset\aut(\mathbb{P}^5)$ as above. This set--up verifies the hypotheses of proposition \ref{voisin2}. \end{proposition} \begin{proof} Let us first prove hypothesis (\romannumeral1) of proposition \ref{voisin2} is satisfied. To this end, we consider the tower of morphisms \[ p\colon\ \ \mathbb{P}^5\ \xrightarrow{p_1}\ P^\prime:= \mathbb{P}^5/G\ \xrightarrow{p_2}\ P:=\mathbb{P}(1^4,2^2)\ ,\] where $\mathbb{P}(1^4,2^2)=\mathbb{P}^5/(\mathbb{Z}/2\mathbb{Z}\times \mathbb{Z}/2\mathbb{Z})$ denotes a weighted projective space. Let us write $\iota_4, \iota_5$ for the involutions of $\mathbb{P}^5$ \[ \begin{split} &\iota_4 [x_0:\ldots:x_5] = [x_0:x_1:x_2:x_3:-x_4:x_5]\ ,\\ &\iota_5 [x_0:\ldots:x_5] = [x_0:x_1:x_2:x_3:x_4:-x_5]\ .\\ \end{split} \] (We note that $\iota=\iota_4\circ\iota_5$, and $P=\mathbb{P}^5/ <\iota_4,\iota_5>$.) The sections in $\bigl(\mathbb{P} H^0\bigl(\mathbb{P}^5,\mathcal O_{\mathbb{P}^5}(3)\bigr)\bigr)^G$ are in bijection with $ \mathbb{P} H^0\bigl(P^\prime,\mathcal O_{P^\prime}(3)\bigr) $, and so there is an inclusion \[ \Bigl(\mathbb{P} H^0\bigl(\mathbb{P}^5,\mathcal O_{\mathbb{P}^5}(3)\bigr)\Bigr)^G \ \subset\ p^\ast \mathbb{P} H^0\bigl( P,\mathcal O_P(3)\bigr)\ .\] Let us now assume $x,y\in\mathbb{P}^5$ are two points such that \[ (x,y)\not\in \mathbb{D}elta_{\mathbb{P}^5}\cup \Gamma_{\iota_4}\cup \Gamma_{\iota_5}\cup \Gamma_\iota\ .\] Then \[ p(x)\not=p(y)\ \ \ \hbox{in}\ P\ ,\] and so (using lemma \ref{delorme} below) there exists $\sigma\in\mathbb{P} H^0\bigl(P^\prime,\mathcal O_{P^\prime}(3)\bigr)$ containing $p(x)$ but not $p(y)$. The pullback $p^\ast(\sigma)$ contains $x$ but not $y$, and so these points $(x,y)$ impose $2$ independent conditions on $ \bigl(\mathbb{P} H^0\bigl(\mathbb{P}^5,\mathcal O_{\mathbb{P}^5}(3)\bigr)\bigr)^G$. It only remains to check that a generic element $(x,y)\in\Gamma_{\iota_4}\cup\Gamma_{\iota_5}$ also imposes $2$ independent conditions. Let us assume $(x,y)$ is generic on $\Gamma_4$ (the argument for $\Gamma_5$ is only notationally different). Let us write $x=[a_0:a_1:\ldots:a_5]$. By genericity, we may assume all $a_i$ are $\not=0$ (intersections of $\Gamma_4$ with a coordinate hyperplane have codimension $>n+1$ and so need not be considered for hypothesis (\romannumeral1) of proposition \ref{voisin2}). We can thus write \[ x=[1:a_1:a_2:a_3:a_4:a_5]\ ,\ \ \ y= [1:a_1:a_2:a_3:-a_4:a_5] \ ,\ \ \ a_i\not=0 \ .\] The cubic \[ a_5 x_0(x_4)^2 - a_4x_0x_4x_5=0 \] is $G$--invariant and contains $x$ while avoiding $y$. This proves hypothesis (\romannumeral1) is satisfied. 
To establish hypothesis (\romannumeral2) of proposition \ref{voisin2}, we proceed by contradiction. Let us suppose hypothesis (\romannumeral2) is not met with, i.e. there exists a smooth cubic $X_b$ as in theorem \ref{main}, and a non--trivial relation \[ c\,\mathbb{D}elta_{X_b} +d\, \Gamma_{\iota_{X_b}} +\delta =0\ \ \ \hbox{in}\ H^8(X_b\times X_b)\ ,\] where $c,d\in \mathbb{Q}^\ast$ and $\delta\in\operatorname{i}a\bigl( A^4(\mathbb{P}^5\times\mathbb{P}^5)\to A^4(X_b\times X_b)\bigr)$. Looking at the action on $H^{3,1}(X_b)$, we find that necessarily $c=-d$ (indeed, $\delta$ does not act on $H^{3,1}(X_b)$, and $\iota$ acts as the identity on $H^{3,1}(X_b)$). That is, we would have a relation \[ \mathbb{D}elta_{X_b} -\Gamma_{\iota_{X_b}} +{1\over c}\ \delta =0\ \ \ \hbox{in}\ H^8(X_b\times X_b)\ .\] Looking at the action on $H^{2,2}(X_b)$, we find that \[ (\iota_{X_b})^\ast =\ide\colon\ \ \ \hbox{Gr}^2_F H^4(X_b,\mathbb{C})_{\rm prim}\ \to\ \hbox{Gr}^2_F H^4(X_b,\mathbb{C})_{\rm prim}\ .\] Since there is a codimension $2$ linear subspace in $\mathbb{P}^5$ fixed by $\iota$, it follows that actually \[ (\iota_{X_b})^\ast =\ide\colon\ \ \ \hbox{Gr}^2_F H^4(X_b,\mathbb{C})_{}\ \to\ \hbox{Gr}^2_F H^4(X_b,\mathbb{C})_{}\ .\] Consider now the Fano variety of lines $F=F(X_b)$ with the involution $\iota_F$. Using the Beauville--Donagi isomorphism \cite{BD}, one obtains that also \[ (\iota_{F})^\ast =\ide\colon\ \ \ \hbox{Gr}^1_F H^2(F,\mathbb{C})_{}\ \to\ \hbox{Gr}^1_F H^2(F,\mathbb{C})_{}\ .\] As $\dim \hbox{Gr}^1_F H^2(F,\mathbb{C})_{}=21$, this would imply that the trace of $(\iota_{F})^\ast$ on $ \hbox{Gr}^1_F H^2(F,\mathbb{C})_{}$ is $21$. However, this contradicts proposition \ref{chiara} below, and so hypothesis (\romannumeral2) must be satisfied. \begin{lemma}\label{delorme} Let $P=\mathbb{P}(1^4,2^2)$. Let $r,s\in P$ and $r\not=s$. Then there exists $\sigma\in\mathbb{P} H^0\bigl(P,\mathcal O_P(3)\bigr)$ containing $r$ but avoiding $s$. \end{lemma} \begin{proof} It follows from Delorme's work \cite[Proposition 2.3(\romannumeral3)]{Del} that the locally free sheaf $\mathcal O_P(2)$ is very ample. This means there exists $\sigma^\prime\in\mathbb{P} H^0\bigl(P,\mathcal O_P(2)\bigr)$ containing $r$ but avoiding $s$. Taking the union of $\sigma^\prime$ with a hyperplane avoiding $s$, one obtains $\sigma$ as required. \end{proof} \begin{proposition}[Camere \cite{Cam}]\label{chiara} Let $X_b\subset\mathbb{P}^5$ be a cubic as in theorem \ref{main}, and let $\iota_{X_b}$ be the involution as above. Let $F=F(X_b)$ be the Fano variety of lines, and let $\iota_F$ be the involution of $F$ induced by $\iota_{X_b}$. The trace of $(\iota_{F})^\ast$ on the $21$--dimensional vector space $\hbox{Gr}^1_F H^2(F,\mathbb{C})$ is $5$. \end{proposition} \begin{proof} This follows from \cite[Theorem 5]{Cam}. \end{proof} \end{proof} \end{proof} \begin{remark} Let $X$ and $S$ be as in theorem \ref{main}. One expects there is actually an isomorphism \[ \Gamma_\ast\colon\ \ \ A^3_{hom}(X)\ \xrightarrow{\cong}\ A^2_{hom}(S)\ .\] I am unsure whether the argument of theorem \ref{main} can also be used to prove surjectivity. \end{remark} \begin{remark} To find the $K3$ surface $S$ of theorem \ref{main}, we have used the existence of the symplectic involution $\iota_F$ on the Fano variety $F=F(X)$ of lines on the cubic fourfold $X$, for which $S\subset F$ is in the fixed locus. 
One could ask if there exist cubic fourfolds $X$ other than those of theorem \ref{main}, such that the Fano variety $F(X)$ has a symplectic automorphism with a $2$--dimensional component in the fixed locus. However, if one restricts to {\em polarized\/} symplectic automorphisms of $F(X)$, there are only $2$ families with a surface in the fixed locus: the family of theorem \ref{main}, and a family with an abelian surface in the fixed locus. This follows from the classification obtained by L. Fu in \cite[Theorem 0.1]{LFu3} (the first family is labelled ``Family V-(1)'', and the second family is labelled ``Family IV-(2)'' in loc. cit.). The second family (with an abelian surface in the fixed locus) is studied from the point of view of algebraic cycles in \cite{voisinHK}. \end{remark} \begin{remark} Let $X$ and $F$ be as in theorem \ref{main}. We mention in passing that the automorphisms $\iota$ and $\iota_F$ of $X$ resp. of $F$ act as the identity on $A^3(X)$, resp. on $A^4(F)$ (for $X$, this follows immediately from theorem \ref{main}). This is proven more generally for any polarized symplectic automorphism of the Fano variety of lines of a cubic fourfold \cite[Theorems 0.5 and 0.6]{LFu2} (for a slightly different take on this, cf. \cite[Theorem 5.3]{SV}). The argument of \cite{LFu2} is (just like the argument proving theorem \ref{main}) based on the idea of spread of algebraic cycles in a family, inspired by \cite{V0}, \cite{V1}. \end{remark} \section{Finite--dimensionality} \begin{corollary}\label{findim} Let $X\subset\mathbb{P}^5(\mathbb{C})$ be a smooth cubic fourfold defined by an equation \[ f(x_0,x_1,x_2,x_3) + (x_4)^2\ell_1(x_0,\ldots,x_3) + (x_5)^2\ell_2(x_0,\ldots,x_3)+x_4x_5\ell_3(x_0,\ldots,x_3)=0\ .\] Assume \[ \dim H^4(X)\cap H^{2,2}(X,\mathbb{C})\ge 20 \ .\] Then $X$ has finite--dimensional motive. \end{corollary} \begin{proof} It follows from (the proof of) theorem \ref{main} there is an inclusion as direct summand \begin{equation}\label{inclu} h(X)\ \subset\ h(S)(1)\oplus\bigoplus_j \mathbb{L}(m_j)\ \ \ \hbox{in}\ \mathcal M_{\rm rat}\ ,\end{equation} where $S$ is a $K3$ surface. We have also seen (in the proof of theorem \ref{main}) there is an isomorphism \[ \Gamma_\ast\colon\ \ \ H^4(X)/N^2\ \xrightarrow{\cong}\ H^2_{tr}(S)\ .\] Since the Hodge conjecture is known for $X$ (because $X$ is Fano), there is equality \[ N^2 H^4(X)=H^4(X)\cap H^{2,2}(X,\mathbb{C})\ .\] Thus, the hypothesis on the dimension of the space of Hodge classes implies that \[ \dim N^2 H^4(X)\ge 20\ ,\] and so \[ \dim H^2_{tr}(S) = \dim ( H^4(X)/N^2 ) = 23-\dim N^2\le 3\ .\] This implies the Picard number $\rho(S)$ is at least $19$, and so $S$ has finite--dimensional motive \cite{Ped}. In view of inclusion (\ref{inclu}), this concludes the proof. \end{proof} \begin{remark}\label{fanotoo} Let $X$ be a cubic as in corollary \ref{findim}. Applying \cite{fano}, it follows that the Fano variety of lines $F:=F(X)$ also has finite--dimensional motive. \end{remark} \vskip1cm \begin{nonumberingt} Thanks to all participants of the Strasbourg 2014/2015 ``groupe de travail'' based on the monograph \cite{Vo} for a very pleasant atmosphere. Many thanks to Kai and Len and Yoyo for stimulating discussions not related to this work. \end{nonumberingt} \vskip1cm \end{document}
\begin{document} \title[Quasi Anosov Diffeomorphisms]{On Manifolds Supporting Quasi Anosov Diffeomorphisms} \author{Jana Rodriguez Hertz} \address{Jana Rodriguez Hertz} \email{[email protected]} \thanks{The first author was partially supported by a grant from PEDECIBA} \author{Ra\'{u}l Ures} \address{Ra\'{u}l Ures} \email{[email protected]} \thanks{The second author was partially supported by CONICYT, Fondo Clemente Estable} \author{Jos\'{e} L. Vieitez} \address{Jos\'{e} L. Vieitez} \email{[email protected]} \address{CC 30, IMERL - Facultad de Ingenier\'{\i}a \\Universidad de la Rep\'{u}blica\\Montevideo, Uruguay} \begin{abstract} Let $M$ be an $n$-dimensional manifold supporting a quasi Anosov diffeomorphism. If $n=3$ then either $M={\mathbb T}^3$, in which case the diffeomorphisms is Anosov, or else its fundamental group contains a copy of ${\mathbb Z} ^6$. If $n=4$ then $\Pi_1(M)$ contains a copy of ${\mathbb Z} ^4$, provided that the diffeomorphism is not Anosov. \end{abstract} \maketitle \thispagestyle{empty} \section{Introduction} In this work we obtain some restrictions on a manifold $M$ in order to support {\em quasi Anosov diffeomorphisms} (QAD), meaning diffeomorphisms $f:M\rightarrow M$ such that $\|(f^n)'(x)v\|\rightarrow+\infty$ for all non zero vectors $v\in T_xM$, either for $n\rightarrow\infty$ or for $n\rightarrow-\infty$. \par These maps were introduced in \cite{m1} by Ma\~{n}\'{e}, who showed they satisfy Axiom A and the no-cycle condition. Besides, they are the $C^1$ interior of the class of {\em expansive diffeomorphisms} $g$ \cite{m2}, those satisfying for some $\alpha >0$, that $\sup_{n\in {\mathbb Z }}d(f^n(x),f^n(y))>\alpha$, if $x\not=y$. We expect on one hand, that studying the relation between QAD's (seen as simplified examples of expansive diffeomorphisms) and the manifolds supporting them, will bring some ideas about the restrictions that expansive homeomorphisms impose on their ambient manifolds. Indeed, let us recall, for instance, that no expansive homeomorphism may be found on the 2-sphere, all expansive homeomorphisms are conjugated to Anosov diffeomorphisms on the 2-torus, and to pseudo-Anosov maps if they live in a surface with genus greater than 1 \cite{h,l}. In the case $M$ is a 3-manifold, an expansive $C^{1+\varepsilon}$ diffeomorphism is conjugated to an Anosov diffeomorphism if its non-wandering set is $M$; in which case $M$ must be $\mathbb {T}^3$ \cite{v}. On the other hand, Ma\~{n}\'{e} \cite{m3} and Hiraide \cite{h2} have posed the question of whether a QAD on $\mathbb {T}^n$ is necessarily Anosov. This work provides a positive answer for $n=3$, though we must point out that solving the problem for higher dimensions seems to involve more sophisticated tools. However, there are also examples of QAD's on 3-dimensional manifolds having wandering points \cite{fr}. We present here necessary conditions on a 3-manifold to support this kind of examples. Partial results on 4-manifolds are also obtained. The main result in this work is the following: \begin{teor} \label{teo1} Let $f:M^n\rightarrow M^n$ be a quasi Anosov diffeomorphism which is not Anosov. If $n=3$ then $\pi_1 (M)$ contains a subgroup which is isomorphic to ${\mathbb{Z}}^6$. If $n=4$, $\pi_1(M)$ contains a subgroup which is isomorphic to ${\mathbb{Z}}^4$. \end{teor} This result is strongly based on a theorem due to R. Plykin, which is stated below, and arguments concerning Axiom A diffeomorphisms with the no-cycle condition. 
\begin{teo}[\cite{ply1,ply2}]\label{teo2} Let $f:M\rightarrow M$ be a diffeomorphism on an $n$-manifold , where $n\geq 3$. If $f$ has $k$ expanding attractors or shrinking repellers of codimension $1$ then $\pi_1(M)$ contains a subgroup isomorphic to $\bigoplus_k{\mathbb{Z}}^n$. \end{teo} We recall that a hyperbolic attractor is said to have {\em dimension $u$} or equivalently {\em codimension $n-u$} if the dimension of its unstable fibre bundle is $u$. Analogously we shall say that a hyperbolic repeller is of {\em dimension $s$} or {\em codimension $n-s$} if the dimension of its stable fiber bundle is $s$.\par Our strategy is to see that a QAD on a 3-manifold must have at least two codimension one attractors or repellers, while on a 4-manifold, it must have at least one. This fact will allow us to prove our main theorem, getting the following as an immediate corollary: \begin{cor}\label{conclusion.2} Every quasi Anosov diffeomorphism on ${\mathbb T}^3$ is Anosov. \end{cor} Corollary \ref{conclusion.2} provides an easier way of recognizing Anosov diffeomorphisms on ${\mathbb T}^3$: it suffices to check that each non zero vector in the tangent bundle goes to infinity under the action of $Df^n$. \section{Relation among dimension 1 attractors and codimension 1 repellers} From now on, we shall assume $M$ is a compact smooth $n$-manifold without boundary, and $f:M\rightarrow M$ is a QAD. The non- wandering set of $f$ will be denoted by $\Omega(f)$, and $\Lambda_i$ will stand for the basic sets arising from the Spectral Decomposition Theorem \cite{sm}. We shall also maintain the standard notation $W^s(x)$ for the stable manifold of $x$, i.e. the set of points $y\in M$ such that $d(f^n(x),f^n(y))\to 0$ as $n\to\infty$. \begin{defi} Letting $\Omega(f)=\Lambda_1\cup\ldots\cup \Lambda_k$, we shall say that $\Lambda_i$ is of type $(s,u)$ if the stable dimension of $\Lambda_i$ is $s$, and the unstable dimension of $\Lambda_i$ is $u$. The expression $W^\sigma(\Lambda_i)$ will be used to denote the set $\bigcup_{x\in\Lambda_i}W^\sigma(x)$, for $\sigma=s,u$, in which case the relation $\Lambda_i\prec\Lambda_j$ will mean that $W^u(\Lambda_i)\cap W^s(\Lambda_j)\neq\emptyset$. \end{defi} We recall from \cite{m1} that a QAD is Anosov, if it satisfies the strong transversality condition. Notice that this condition is satisfied if all basic sets are of the same type. We state this conclusion as a lemma \begin{lema} If all basic sets of a QAD $f$ are of the same type then $f$ is Anosov. \end{lema} Therefore we get the following \begin{propo}\label{prop} Let $f$ be a QAD such that $\Omega(f)\not=M$. If some basic set of $f$ is of type $(1,n-1)$ (or $(n-1,1)$), then $f$ must have a codimension one attractor (repeller). \end{propo} \begin{proof}\,\, Let $\Lambda_1$ be a basic set of $f$, of type $(1,n-1)$ then $W^u(\Lambda_1)$ must meet $W^s(\Lambda_2)$ where $\Lambda_2$ is another basic set of $f$ (the no cycle condition yields $\Lambda_1\not=\Lambda_2$). Hence $\Lambda_1\prec \Lambda_2$, implying that the stable dimension of $\Lambda_2$ is one. Indeed, let $x\in \Lambda_1$ and $y\in \Lambda_2$ such that $z\in W^s(x)\cap W^u(y)$. If the stable dimension of $\Lambda_2$ were greater than one, then we would find a non zero vector $v\in T_z M$ which would be tangent both to $W^u(z)$ and $W^s(z)$, what would yield $\|f^n(z)v\|\to 0$ as $|n|\to \infty$, contradicting the fact that $f$ is quasi Anosov. 
\par This leaves us only two possibilities: either $W^u(\Lambda_2)$ meets $W^s(\Lambda_3)$, for some other basic set $\Lambda_3$, or $\Lambda_2$ contains the whole set $W^u(\Lambda_2)$, whence it would be a codimension one attractor. In this way we inductively obtain a chain $$\Lambda_1\prec\Lambda_2\prec\ldots\prec\Lambda_r$$ which must end after a finite number of steps, due to the no-cycle condition. Besides, the previous argument shows that $\Lambda_i$ has stable dimension one, for each $i=1,\ldots, r$. \par If we take the previous chain to be maximal, then $\Lambda_r$ must be a codimension one attractor. \end{proof} \section{Proof of the Theorem} We shall see that each QAD on a 3-manifold must have an attractor {\em and} a repeller of codimension one, provided that it is not Anosov. This, together with Theorem \ref{teo2}, finishes the case $\dim M=3$. Let $f:M^3\rightarrow M^3$ be a QAD which is not Anosov. Since quasi Anosov maps do not have attracting or repelling periodic points, the basic sets of $f$ must all be of type $(1,2)$ or $(2,1)$. Note also that $\Omega(f)\neq M$, since an Axiom A diffeomorphism whose non-wandering set is the whole manifold is Anosov; hence Proposition \ref{prop} applies. Since $f$ is not Anosov, the Lemma implies that its basic sets cannot all be of the same type, so $f$ has a basic set of type $(1,2)$ and a basic set of type $(2,1)$. Proposition \ref{prop} then implies the existence of a codimension one attractor and of a codimension one repeller. Observe that the previous argument also shows that if $f:M^4\rightarrow M^4$ is a QAD which is not Anosov, then it must have (at least) one basic set of type $(1,3)$ or $(3,1)$; otherwise, all basic sets would be of type $(2,2)$, which by the Lemma would imply that $f$ is Anosov. Proposition \ref{prop} again implies the existence of an attractor or a repeller of codimension one. \end{document}
\begin{document} \title{Weak Approximation for $0$-cycles on a product of elliptic curves} \begin{abstract} In the 1980's Colliot-Th\'{e}l\`{e}ne, Sansuc, Kato and S. Saito proposed conjectures related to local-to-global principles for $0$-cycles on arbitrary smooth projective varieties over a number field. We give some evidence for these conjectures for a product $X=E_1\times E_2$ of two elliptic curves. In the special case when $X=E\times E$ is the self-product of an elliptic curve $E$ over $\mathbb{Q}\xspace$ with potential complex multiplication, we show that the places of good ordinary reduction are often involved in a Brauer-Manin obstruction for $0$-cycles over a finite base change. We give many examples when these $0$-cycles can be lifted to global ones. \end{abstract} \section{Introduction} Let $X$ be a smooth projective geometrically connected variety over a number field $F$. We denote by $\Omega$ the set of all places $v$ of $F$ and by $F_v$ the completion of $F$ at a place $v$. The classical local-to-global principles for $X$ refer to the image of the diagonal embedding \[X(F)\hookrightarrow X(\mathbf{A}_F):=\prod_{v\in\Omega}X(F_v)\] to the set $X(\mathbf{A}_F)$ of adelic points of $X$. The cohomological Brauer group $\Br(X):=H^2(X_{\text{\'{e}t}},\mathbb{G}\xspace_m)$ is known to often obstruct either the existence of an $F$-rational point or the density of $X(F)$ in $X(\mathbf{A}_F)$. Namely, by the foundational work of Y. Manin (\cite{Manin1971}), the Brauer group gives rise to an intermediate closed subset $X(F)\subseteq X(\mathbf{A}_F)^{\Br(X)}\subseteq X(\mathbf{A}_F)$, which is often empty or properly contained in $X(\mathbf{A}_F)$. Unfortunately, the Brauer-Manin obstruction cannot always explain the failure of the Hasse principle and weak approximation for points (cf. \cite{Skorobogatov1999, Poonen2010}). However, this happens to be the case for abelian varieties, assuming finiteness of their Tate-Shafarevich groups, and it is conjectured to be the case for geometrically rationally connected varieties (\cite[p.~174]{Colliot-Thelene2003}). The answer is likely to be yes also for $K3$-surfaces (cf. \cite[p.~4]{Skorobogatov/Zharin2008}). In this article we are interested in an analog of this study for $0$-cycles, namely for the group $\mathbb{C}\xspaceH_0(X)$. Denote by $\Omega_f(F)$ (resp. $\Omega_\infty(F)$) the set of all finite (resp. infinite) places of $F$. For a place $v\in\Omega$ denote by $X_v$ the base change to $F_v$. We have the following conjecture. \begin{conj}[{\cite[Section 4]{Colliot-Thelene/Sansuc1981}, \cite[Section 7]{Kato/Saito1986}, see also \cite[Conjecture~1.5 (c)]{Colliot-Thelene1993}}] \label{locatoglobalconj} Let $X$ be a smooth projective geometrically connected variety over a number field $F$. The following complex is exact, \[\hspace{10pt}\varprojlim_{n} \mathbb{C}\xspaceH_0(X)/n\rightarrow \varprojlim_{n}\mathbb{C}\xspaceH_{0,\mathbf{A}}(X)/n\rightarrow\Hom(\Br(X),\mathbb{Q}\xspace/\mathbb{Z}\xspace).\] \end{conj} The above formulation is due to van Hamel (\cite{vanHamel2003}), see also \cite[Conjecture~($E_0$)]{Wittenberg2012}. The adelic Chow group $\mathbb{C}\xspaceH_{0,\mathbf{A}}(X)$ is equal to $\displaystyle\prod_{v\in\Omega_f(F)}\mathbb{C}\xspaceH_0(X_v)$ when $F$ is totally imaginary, and it has a small contribution from the infinite real places otherwise. 
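To orient the reader (the following unwinding is meant only as a guide; the maps in question are recalled in detail in the body of the article), the first arrow in the complex of \autoref{locatoglobalconj} is induced by the diagonal embedding of $\mathbb{C}\xspaceH_0(X)$ into $\mathbb{C}\xspaceH_{0,\mathbf{A}}(X)$ obtained by base change, and the second by the local Brauer-Manin pairings $\langle\cdot,\cdot\rangle_v$. Exactness at the middle term thus asserts that an adelic family of $0$-cycles which is orthogonal to the whole Brauer group lifts, after passing to the completions, to a global class: for $(z_v)_v\in\varprojlim_{n}\mathbb{C}\xspaceH_{0,\mathbf{A}}(X)/n$, \[\sum_{v}\inv_v\big(\langle z_v,\alpha\rangle_v\big)=0\ \text{ for all }\alpha\in\Br(X)\quad\Longrightarrow\quad (z_v)_v\in\img\Big(\varprojlim_{n}\mathbb{C}\xspaceH_0(X)/n\rightarrow\varprojlim_{n}\mathbb{C}\xspaceH_{0,\mathbf{A}}(X)/n\Big).\]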
When $X$ is a smooth projective curve, \autoref{locatoglobalconj} has been proved by Colliot-Th\'{e}l\`{e}ne (\cite[paragraph 3]{CT1997}) assuming the finiteness of the Tate-Shafarevich group of its Jacobian. In higher dimensions, the main evidence for \autoref{locatoglobalconj} is for rationally connected varieties starting with the work of Colliot-Th\'{e}l\`{e}ne, Sansuc and Swinnerton-Dyer (\cite{CT/Sansuc/SwinnertonDyerI, CT/Sansuc/SwinnertonDyerII}) and continued by the work of Liang (\cite[Theorem B]{Liang2013}). The latter proved this conjecture assuming that the Brauer-Manin obstruction is the only obstruction to Weak Approximation for points on $X_L$ where $L$ is any finite extension of $F$. There is some recent partial evidence of Ieronymou of similar flavor for $K3$-surfaces (cf. \cite[Theorem 1.2]{Ieronymou2021}). Moreover, Harpaz and Wittenberg (\cite[Theorem 1.3]{Wittenberg/Harpaz2016}) proved that the conjecture is compatible with fibrations over curves. The purpose of this article is to give some evidence for this conjecture for a product $X=E_1\times E_2$ of elliptic curves. We focus on the following weaker question. \begin{que} Let $p$ be a prime number and $\Br(X)\{p\}$ be the $p$-primary torsion subgroup of $\Br(X)$. Is the following complex exact \begin{equation}\label{complexintro}\varprojlim_{n} \mathbb{C}\xspaceH_0(X)/p^n\rightarrow \varprojlim_{n}\mathbb{C}\xspaceH_{0,\mathbf{A}}(X)/p^n\rightarrow\Hom(\Br(X)\{p\},\mathbb{Q}\xspace/\mathbb{Z}\xspace)?\end{equation} \end{que} Assuming the finiteness of the Tate-Shafarevich group of $X$, and that the action of the absolute Galois group $G_F$ on the N\'{e}ron-Severi group $\NS(X\otimes_F\overline{F})$ is trivial, the exactness of \ref{complexintro} can be reduced (cf. \autoref{compatibility}, \autoref{reduction}) to the exactness of the complex, \begin{equation}\label{complex2intro}\varprojlim_{n} F^2(X)/p^n\rightarrow \varprojlim_{n} F^2_\mathbf{A}(X)/p^n\rightarrow\Hom\left(\frac{\Br(X)\{p\}}{\Br_1(X)\{p\}},\mathbb{Q}\xspace/\mathbb{Z}\xspace\right),\end{equation} where $F^2(X)$ is the kernel of the Albanese map of $X$ and $\Br_1(X)$ is the algebraic Brauer group of $X$. From now make the above assumptions. Our first result is the following theorem. \begin{theo}\label{main0} (\autoref{mainmain0}) Let $X=E_1\times E_2$ be a product of elliptic curves over a number field $F$. There is an infinite set $T$ of primes $p$ for which the middle term in \ref{complex2intro} vanishes, making the complex exact. If we further assume that at least one of the curves does not have potential complex multiplication, then the complement $S$ of $T$ is a set of primes of density zero (in the sense of \cite{Serre1981}). \end{theo} When both $E_1, E_2$ have potentially good reduction, this result was already obtained in \cite[Corollary 1.8]{Gazaki/Hiranouchi2021}. It follows by \cite[Theorem 3.5]{Raskind/Spiess2000} that the only places that might contribute a nontrivial factor in $\varprojlim_{n} F^2_\mathbf{A}(X)/p^n$ are the places of bad reduction and the places above $p$. The key to prove \autoref{main0} is that under some assumptions on the prime $p$, the group $F^2(X_v)$ is $p$-divisible for every unramified place $v$ above $p$ of good reduction (cf.~\cite[Theorem 1.1]{Gazaki/Hiranouchi2021}). One can think of this result as a weaker analog of the vanishing $F^2(X_v)=0$ for $X$ a rationally connected variety and $v$ a place of good reduction (\cite[Theorem 5]{Kollar/Szabo2003}). 
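Let us also record the elementary mechanism behind the vanishing statement in \autoref{main0} (this remark is included only for the reader's convenience): if an abelian group $A$ satisfies $pA=A$, then $p^nA=A$ for every $n\geq 1$, so that \[A/p^n=0\ \text{ for all }n\geq 1,\qquad\text{and hence}\qquad \varprojlim_{n}A/p^n=0.\] Consequently, every place $v$ at which the group $F^2(X_v)$ is known to be $p$-divisible contributes nothing to the middle term of \ref{complex2intro}.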
A natural follow-up question is whether the places of good reduction are ever involved in a Brauer-Manin obstruction for $0$-cycles. Our second theorem gives an affirmative answer for places of good ordinary reduction that are ramified enough. \begin{theo}\label{main1} (cf. \autoref{mainmain1}) Let $X=E\times E$ be the self product of an elliptic curve $E$ over $\mathbb{Q}\xspace$. Suppose that $E\otimes_\mathbb{Q}\xspace\overline{\mathbb{Q}\xspace}$ has complex multiplication by the full ring of integers of a quadratic imaginary field $K$. Let $p\geq 5$ be a prime which is coprime to the conductor $\mathfrak{n}$ of $E$ and suppose that $p$ splits completely in $K$, $p=\pi\overline{\pi}$ for some prime element $\pi$ of $\mathcal{O}_K$. Let $\overline{E_p}$ be the reduction of $E$ modulo $p$. Suppose there exists a finite Galois extension $F_0/K$ of degree $n<p-1$ such that there is a unique unramified place $w$ of $F_0$ above $\pi$ with the property that $p$ divides $|\overline{E_p}(\mathbb{F}\xspace_w)|$, where $\mathbb{F}\xspace_w$ is the residue field of $F_0$ at $w$. Then there exists a finite Galois extension $L/F_0$ of degree $p-1$, totally ramified at $w$ such that the following are true for the group $\displaystyle\varprojlim\limits_n F^2_{\mathbf{A}}(X_L)/p^n$. \begin{enumerate} \item It is equal to $\displaystyle\prod_{v|p}\varprojlim\limits_n F^2(X_{L_v})/p^n$, and isomorphic to $\mathbb{Z}\xspace/p\oplus\mathbb{Z}\xspace/p$. \item It is orthogonal to the transcendental Brauer group $\Br(X_L)/\Br_1(X_L)$. \end{enumerate} The same result holds if the elliptic curve $E$ is defined over $K$. \end{theo} The primes $p$ of good reduction that split completely in $K$ are precisely the primes of good \textbf{ordinary reduction}. The involvement of the ordinary reduction places in a Brauer-Manin obstruction after suitable base change is not surprising. An analogous result for rational points was recently obtained in \cite{BrightNewton2020}, which in particular applies to weak approximation for points on abelian varieties and $K3$ surfaces. See also \cite{Pagano2021} for an explicit example. \subsection*{Global Approximation} We next pass to the main question of this article namely, whether in the situation considered in \autoref{main1}, one can lift the group $\varprojlim\limits_n F^2_{\mathbf{A}}(X_{L})/p^n$ to global elements. An important observation is that $\varprojlim\limits_n F^2_{\mathbf{A}}(X_{L})/p^n$ coincides with $\displaystyle\prod_{v|p}F^2(X_{L_v})/p$ (cf. \autoref{localAlb}). Thus, one is looking for genuine $0$-cycles and the question is reduced to showing that the diagonal map $F^2(X_L)/p\xrightarrow{\Delta}\prod_{v|p}F^2(X_{L_v})/p$ is surjective. We focus mainly on the simplest case considered in \autoref{main1}, namely when $F_0=\mathbb{Q}\xspace$; that is, when the reduction of the elliptic curve $E$ satisfies $|\overline{E_p}(\mathbb{F}\xspace_p)|=p$. One can find infinite families with this property. For example, consider the family of elliptic curves with Weierstrass equation $\{y^2=x^3+c;\;c\in\mathbb{Z}\xspace, c\neq 0\}$ having potential CM by the ring of integers of $\mathbb{Q}\xspace(\zeta_3)$. Let $p$ be a prime of the form $4p=1+3v^2$. Then, there exist exactly $\frac{p-1}{6}$ congruence classes $c\mod p$ that give elliptic curves that satisfy $|\overline{E_p}(\mathbb{F}\xspace_p)|=p$. In \autoref{computations_section} we give various sufficient conditions that guarantee the ability to lift. 
The conditions are supported by explicit examples, giving therefore some evidence for \autoref{locatoglobalconj}. The following \autoref{main2} is indicative of the computations we do in \autoref{CMsection}. Before stating the result, we need to refer to some notation. Suppose the pair $E,p$ satisfies the assumptions of \autoref{main1}. Let $P\in E(\mathbb{Q}\xspace)$. We will denote by $P_{\mathbb{Q}\xspace_p}$ the image of $P$ under the restriction map $E(\mathbb{Q}\xspace)\hookrightarrow E(\mathbb{Q}\xspace_p)$. We abuse notation and denote by $P_{\mathbb{Q}\xspace_p}$ also its image modulo $pE(\mathbb{Q}\xspace_p)$. In \autoref{decompose} we construct a decomposition $P_{\mathbb{Q}\xspace_p}=\widehat{P}_{\mathbb{Q}\xspace_p}\oplus \overline{P}_{\mathbb{Q}\xspace_p}$, with $\widehat{P}_{\mathbb{Q}\xspace_p}\in\widehat{E}(p\mathbb{Z}\xspace_p)/[p]$, and $\overline{P}_{\mathbb{Q}\xspace_p}\in \overline{E_p}(\mathbb{F}\xspace_p)/p$ (cf. notation \eqref{localpointsplit}). We can now state our next theorem. \begin{theo}\label{main2} Let $p\geq 5$ be a prime and $E$ an elliptic curve over $\mathbb{Q}\xspace$ satisfying the assumptions of \autoref{main1}. Assume further that $|\overline{E_p}(\mathbb{F}\xspace_p)|=p$. Let $L/K$ be the extension constructed in \autoref{main1}. Suppose that the Mordell-Weil group $E(\mathbb{Q}\xspace)$ has positive rank and there exists a point $P\in E(\mathbb{Q}\xspace)$ of infinite order such that the image $P_{\mathbb{Q}\xspace_p}$ of $P$ under the restriction map $\displaystyle\res_{\mathbb{Q}\xspace_p/\mathbb{Q}\xspace}:E(\mathbb{Q}\xspace)/p\rightarrow E(\mathbb{Q}\xspace_p)/p$, has the property that $\displaystyle\widehat{P}_{\mathbb{Q}\xspace_p}\neq 0\in \frac{\widehat{E}(p\mathbb{Z}\xspace_p)}{[p]\widehat{E}(p\mathbb{Z}\xspace_p)}$. Then we have a surjection \[F^2(X_L)/p\to\prod_{v|p}F^2(X_{L_v})/p\to 0.\] In particular, the complex \eqref{complex2intro} is exact. \end{theo} We expect that \autoref{main2} is satisfied by infinite families of elliptic curves. The criterion given in \autoref{main2} can be checked computationally in SAGE for small values of the prime $p$. In the appendix \ref{appendix} we study the family of elliptic curves $\{y^2=x^3-2+7n,n\in\mathbb{Z}\xspace\}$ and the prime $p=7$. When $n$ lies in the interval $[-5000, 5000]$, we show that about ~86.68\% of elliptic curves with rank one over $\mathbb{Q}\xspace$ have such a ``good point" $P\in E(\mathbb{Q}\xspace)$. The method used to prove \autoref{main2} can be generalized to more situations (cf. \autoref{quadraticcase} for when $F_0/K$ is a quadratic extension). In \autoref{nogoodpoint} we give criteria of similar flavor for elliptic curves that do not possess a suitable rational point of infinite order. For the purposes of the introduction, we chose to state our theorem in its simplest form. \begin{rem} In a recent article Liang (\cite{Liang2020arxiv}) showed some compatibility of weak approximation for $0$-cycles with certain products using the fibration method. The same method has been used in most known results (\cite{Liang2013, Wittenberg/Harpaz2016, Ieronymou2021}). Unfortunately, this breakthrough method does not seem to generalize to abelian varieties. For, the key cohomological properties $H^i(X,\mathcal{O}_X)=0$, for $i=1,2$, and finiteness of $\Br(X)/\Br_0(X)$ (used for rationally connected varieties and $K3$ surfaces respectively) are no longer true. 
Our approach is different in the sense that it does not use the assumption that the weak approximation is the only one for rational points. The downside is that we cannot find a uniform treatment that guarantees lifting, but we instead present many different sufficient conditions. The advantages on the other hand are first that our results are unconditional and second that we construct global $0$-cycles on the nose. In fact, if one only focuses on proving exactness of \ref{complex2intro}, then the finiteness of the Tate-Shafarevich group is no longer needed. \end{rem} \subsection{Notation} Throughout this note unless otherwise mentioned we will be using the following notation. \begin{itemize} \item For a number field $F$, $\mathcal{O}_F$, $\Omega(F), \Omega_f(F), \Omega_\infty(F)$ will be respectively its ring of integers, the set of all places, all finite and all infinite places of $F$. \item For a finite extension $k/\mathbb{Q}\xspace_p$, $\mathcal{O}_k$ will be its ring of integers, $\mathfrak{m}_k$ its maximal ideal and $\mathbb{F}\xspace_k$ its residue field. \item For a place $v\in\Omega(F)$, $F_v$ will be the completion of $F$ at $v$. \item For an abelian group $M$ and a positive integer $n$, $M_n$ and $M/n$ will be the $n$-torsion and $n$-cotorsion of $M$ respectively. Moreover, $\widehat{M}$ will be the completion, $\widehat{M}=\varprojlim\limits_{n}M/n$. \item For an elliptic curve $E$ and an integer $n$ we will instead write $E[n]$ for the $n$-torsion. \item For a field extension $L/K$, and a variety $X$ over $K$, $X_L$ will be the base change to $L$. \item For a field $k$ and a continuous $\mathbb{G}\xspaceal(\overline{k}/k)$-module $M$ we will denote by $\{H^i(k,M)\}_{i\geq 0}$ the Galois cohomology groups of $M$. \end{itemize} \section{Background}\label{sec:background} \subsection{Elliptic Curves with Complex Multiplication}\label{CM background} In this subsection we recall some necessary facts about elliptic curves with complex multiplication. Let $E$ be an elliptic curve over $\mathbb{Q}\xspace$ with potential complex multiplication by the full ring of integers $\mathcal{O}_K$ of $K$. It follows by \cite[Corollary 5.12]{Rubin1999} that $K$ has class number one. Note that there exactly $9$ such fields $K=\mathbb{Q}\xspace(\sqrt{D})$. Namely, $D\in\{-1,-2,-3,-7,-11,-19,-43,-67,-163\}$. The same result holds if $E$ is defined over $K$. \begin{notn} For an element $a\in\mathcal{O}_K$ we will denote by $E[a]$ the kernel of the isogeny $a:E_K\rightarrow E_K$. Similarly, if $\mathfrak{a}$ is an ideal of $\mathcal{O}_K$, then $\mathfrak{a}=(a)$ for some element $a\in\mathcal{O}_K$ which is uniquely defined up to units. We will also write $E[\mathfrak{a}]$ for the kernel of $a:E_K\rightarrow E_K$ (which is independent of the choice of generator). \end{notn} Let $\mathfrak{a}$ be a nonzero proper ideal of $\mathcal{O}_K$ which is prime to the conductor $\mathfrak{n}_K$ of $E_K$. It follows by \cite[Cor. 5.20 (ii)]{Rubin1999} that there is an isomorphism: \[\mathbb{G}\xspaceal(K(E[\mathfrak{a}])/K)\xrightarrow{\simeq}(\mathcal{O}_K/\mathfrak{a})^\times.\] In particular, if $\mathfrak{p}$ is a prime of $\mathcal{O}_K$ not dividing the conductor, then the Galois group $\mathbb{G}\xspaceal(K(E[\mathfrak{p}])/K)$ is cyclic of order $N_{K/\mathbb{Q}\xspace}(\mathfrak{p})-1$, where $N_{K/\mathbb{Q}\xspace}:K^\times\rightarrow\mathbb{Q}\xspace^\times$ is the norm map. Next suppose that $p$ is a rational prime coprime to $\mathfrak{n}$. 
We distinguish the following two cases: \begin{itemize} \item $p$ is a prime element of $\mathcal{O}_K$, so $k=K_{(p)}$ is a quadratic unramified extension of $\mathbb{Q}\xspace_p$. Then $\mathbb{G}\xspaceal(K(E[p])/K)\xrightarrow{\simeq}\mathbb{F}\xspace_{p^2}^\times$. In this case the elliptic curve $E_k$ has good supersingular reduction. \item $p$ splits completely in $K$, that is, $p=\pi\overline{\pi}$, where $\pi$ is a prime element of $\mathcal{O}_K$. Then we have an isomorphism, $\mathbb{G}\xspaceal(K(E[p])/K)\xrightarrow{\simeq}\mathbb{F}\xspace_{p}^\times\oplus\mathbb{F}\xspace_{p}^\times,$ and hence $K(E[p])/K$ is an extension of degree $(p-1)^2$. In this case the elliptic curve $E_{\mathbb{Q}\xspace_p}$ has good ordinary reduction. \end{itemize} The case of interest to us in this note is the latter. From now on we will refer to such primes as \textit{ordinary primes}. We fix such an ordinary prime $p$ and a factorization $p=\pi\overline{\pi}$ into prime elements of $\mathcal{O}_K$. We will denote by $\mathfrak{p}, \overline{\mathfrak{p}}$ the prime ideals $(\pi), (\overline{\pi})$ of $\mathcal{O}_K$ respectively. Notice that the two completions $K_\mathfrak{p}, K_{\overline{\mathfrak{p}}}$ of $K$ are both equal to $\mathbb{Q}\xspace_p$. Let $\overline{E_p}$ be the reduction of $E$ modulo $p$ and $r:E(\mathbb{Q}\xspace_p)\rightarrow\overline{E_p}(\mathbb{F}\xspace_p)$ be the reduction map. We will the use same notation for the reduction $E(k)\xrightarrow{r}\overline{E_p}(\mathbb{F}\xspace_k)$ where $k/\mathbb{Q}\xspace_p$ is any finite extension. Moreover, we will denote by $\widehat{E_{\mathbb{Q}\xspace_p}}$ the formal group of $E_{\mathbb{Q}\xspace_p}$. It follows by \cite{Deuring1941} (see also \cite[13.4, Theorem 12]{Lang1987}) that we can choose $\pi$ so that the endomorphism $\pi:E_{\mathbb{Q}\xspace_p}\rightarrow E_{\mathbb{Q}\xspace_p}$ when reduced modulo $p$ coincides with the Frobenius endomorphism $\phi_p:\overline{E_p}\rightarrow\overline{E_p}$. In particular, the reduction of $\pi$ is an automorphism of $\overline{E_p}$, and hence has trivial kernel. This implies that for every $n\geq 1$ the subgroup $E[\pi^n]$ of $E[p^n]$ coincides with the $p^n$-torsion of the formal group, $\widehat{E_{\mathbb{Q}\xspace_p}}[p^n]$. Moreover, the relation $p=\pi\overline{\pi}$ implies that $\overline{\pi}$ induces a height zero isogeny $[\overline{\pi}]:\widehat{E_{\mathbb{Q}\xspace_p}}\to\widehat{E_{\mathbb{Q}\xspace_p}}$, and hence for every $n\geq 1$ the reduction induces an isomorphism of abelian groups $E[\overline{\pi}^n]\xrightarrow{\simeq}\overline{E_p}[p^n]$. Let $k/\mathbb{Q}\xspace_p$ be a finite extension. Since $E_{\mathbb{Q}\xspace_p}$ has good ordinary reduction, for every $n\geq 1$ there is a short exact sequence of $\mathbb{G}\xspaceal(\overline{k}/k)$-modules, \begin{equation}\label{ses2} 0\rightarrow \widehat{E_{\mathbb{Q}\xspace_p}}[p^n]\rightarrow E[p^n]\rightarrow\overline{E_p}[p^n]\rightarrow 0. \end{equation} The above discussion shows that \ref{ses2} splits, since the subgroup $E[\overline{\pi}^n]$ of $E[p^n]$ maps isomorphically to $\overline{E_p}[p^n]$ (see also \cite[A.2.4]{Serre89} for more general results). \subsection*{Special Fiber} We next recall formulas for $|\overline{E_p}(\mathbb{F}\xspace_p)|$ for the various choices of the quadratic imaginary field $K$. We are particularly interested in finding examples of curves that satisfy $|\overline{E_p}(\mathbb{F}\xspace_p)|=p$. 
That is, curves for which the extension $F_0$ of \autoref{main1} can be taken to be $\mathbb{Q}\xspace$. In all cases we have a formula, \begin{equation} |\overline{E_p}(\mathbb{F}\xspace_p)|=p+1-(\pi+\overline{\pi}), \end{equation} where $\pi\overline{\pi}=p$ and $\pi$ reduces mod $p$ to the Frobenius (cf. \cite[p. 1]{Joux/Morain1995}). For the various choices of quadratic imaginary field $K$, the correct choice of prime element $\pi$ can be computed. We list a few examples. \begin{exmp}\label{ex1} Suppose $D=-3$, that is, $K=\mathbb{Q}\xspace(\zeta_3)$, and $E$ is given by the Weierstrass equation $y^2=x^3+c$ with $c\in\mathbb{Z}\xspace$. It follows by \cite[Theorem 1]{Rajwade1969} that \begin{equation}\label{formula1} |\overline{E_p}(\mathbb{F}\xspace_p)|=p+1-\left(\frac{4c}{\pi_0}\right)_6\cdot\overline{\pi}_0-\left(\frac{4c}{\overline{\pi}_0}\right)_6\cdot\pi_0, \end{equation} where $\pi_0$ is a prime element of $K$ such that $\pi_0\overline{\pi}_0=p$ and $\pi_0,\overline{\pi}_0$ are normalized, that is, they are congruent to $1\mod 3$. Here $\displaystyle\left(\frac{a}{\pi_0}\right)_6=a^{\frac{p-1}{6}}(\text{mod}\;\pi_0)$ is the sixth power residue symbol. The symbol $\displaystyle\left(\frac{4c}{\pi_0}\right)_6$ can take any value within the set of units $\{\pm 1, \pm\zeta_3,\pm\zeta_3^2\}$ of $\mathbb{Z}\xspace[\zeta_3]$. In particular, when $p$ is of the form $4p=1+3v^2$ for some $v\in\mathbb{Z}\xspace$, there are exactly $\frac{p-1}{6}$ different reductions $\overline{E_p}$ which satisfy $|\overline{E_p}(\mathbb{F}\xspace_p)|=p$. Primes of this form include $p=7, 37, 61,\ldots$. For example, when $p=7$, the family of elliptic curves $\{E_n: y^2=x^3-2+7n,n\in\mathbb{Z}\xspace\}$ satisfies the desired equality. Later in this article we will make this family a case study to construct examples that satisfy \autoref{main2}. A second example is given for $p=61$ and the family $\{E_t:y^2=x^3+2+61t,t\in\mathbb{Z}\xspace\}$. \end{exmp} \begin{exmp} Suppose $D=-11$. A sage computation shows that for $p=223$ the family $\{E_s: y^2=x^3-1056x+13552+223s,s\in\mathbb{Z}\xspace\}$ satisfies the desired equality. Another class of examples is given for $D=-19$, $p=43$ and $\{E_l:y^2=x^3-152x+722+43l,l\in\mathbb{Z}\xspace\}$. \end{exmp} \begin{exmp} When $D=-43, -67, -163$, and $E$ is given by a CM Weierstrass equation with parameter $c$ (cf. \cite[Tableau 1]{Joux/Morain1995}) it follows by \cite[Th\'{e}or\`{e}me 1]{Joux/Morain1995} that \begin{equation}\label{formula2} |\overline{E_p}(\mathbb{F}\xspace_p)|=p+1-\left(\frac{2}{p}\right)\left(\frac{u}{p}\right)\left(\frac{c}{p}\right)u, \end{equation} where $\left(\frac{a}{b}\right)$ is the Legendre symbol, and the integer $u$ is such that $4p=u^2-Dv^2$. In this case it is easy to see that if $u=1$, then every integer $c$ such that $\displaystyle\left(\frac{2}{p}\right)\left(\frac{c}{p}\right)=-1$ gives $|\overline{E_p}(\mathbb{F}\xspace_p)|=p$. For example for $D=-43$ the following primes are of the form $1+43v^2$: $p=11, 97, 269, 1301,\ldots$. \end{exmp} \begin{exmp}\label{Zi} When $D=-1$ or $-2$, it follows that $\pi+\overline{\pi}$ is always an even integer, and hence the equality $|\overline{E_p}(\mathbb{F}\xspace_p)|=p$ never happens. That is, any extension $F_0$ satisfying the assumptions of \autoref{main1} has positive degree. For example, consider the family of elliptic curves given by the Weierstrass equation $y^2=x^3+(3+5n)x$, with $n\in\mathbb{Z}\xspace$, which has potential complex multiplication by $\mathbb{Z}\xspace[i]$. 
Consider the prime $p=5$, which splits completely in $\mathbb{Q}\xspace(i)$. A SAGE computation shows that if $E$ is any elliptic curve in the family, then $p||\overline{E_p}(\mathbb{F}\xspace_{p^2})|$. \end{exmp} \subsection{$0$-cycles and Somekawa $K$-groups}\label{Somekawa} Let $X$ be a smooth projective variety over a perfect field $k$. We consider the Chow group of $0$-cycles, $\mathbb{C}\xspaceH_0(X)$. We recall that this group has a filtration \[\mathbb{C}\xspaceH_0(X)\supset F^1(X)\supset F^2(X)\supset 0,\] where $F^1(X):=\ker(\deg: \mathbb{C}\xspaceH_0(X) \to \mathbb{Z}\xspace)$ is the kernel of the degree map, and $F^2(X):=\ker(\alb_X:F^1(X)\to \mathrm{Alb}_X(k))$ is the kernel of the Albanese map. When $X=C_1\times C_2$ is a product of two smooth projective, geometrically connected curves over $k$ such that $X(k)\neq\emptyset$, Raskind and Spiess (\cite[Theorem 2.2, Corollary 2.4.1]{Raskind/Spiess2000}) showed an isomorphism \begin{equation}\label{Kiso} F^2(X)\simeq K(k;J_1,J_2), \end{equation} where $K(k;J_1,J_2)$ is the Somekawa $K$-group attached to the Jacobian varieties $J_1,J_2$ of $C_1, C_2$. The group $K(k;J_1,J_2)$, defined by K. Kato and Somekawa in \cite{Somekawa1990}, is a quotient of $\displaystyle\bigoplus_{L/k\text{ finite}}J_1(L)\otimes J_2(L)$ by two relations. The first relation is known as \textit{projection formula} and the second as \textit{Weil reciprocity}. In this article we won't make explicit use of these relations, and hence we omit the precise definition (see \cite[Definition 2.1.1]{Raskind/Spiess2000} and the footnote on p. 10 for a correction to Somekawa's original definition). \begin{exmp}\label{selfproduct} Suppose that $X=E\times E$ is the self-product of an elliptic curve $E$ over $k$. We denote by $K_2(k;E)$ the group $K(k;E,E)$. Moreover we will denote by $O$ the zero element of $E$. The Albanese kernel $F^2(X)$ is generated by $0$-cycles of the form \[w_{P,Q}:=f_{L/k\star}([P,Q]-[P,O]-[Q,O]+[O,O]),\] where $L/k$ runs through all finite extensions of $k$, $P,Q\in E(L)$, and $f_{L/k}$ is the proper push-forward $\mathbb{C}\xspaceH_0(X_L)\xrightarrow{f_{L/k}}\mathbb{C}\xspaceH_0(X)$. The isomorphism \eqref{Kiso} sends the $0$-cycle $w_{P,Q}$ to the symbol $\{P,Q\}_{L/k}$. From now on we identify these two objects. \end{exmp} \subsection{Weak Approximation for $0$-cycles} Let $X$ be a smooth projective geometrically connected variety over a number field $F$. Let $v\in\Omega_f(F)$. There is a local pairing, \[\langle\cdot,\cdot\rangle_v:\mathbb{C}\xspaceH_0(X_v)\times\Br(X_v)\rightarrow\Br(F_v)\simeq\mathbb{Q}\xspace/\mathbb{Z}\xspace\] called the \textit{Brauer-Manin pairing} defined as follows. For a closed point $P\in X_v$ and a Brauer class $\alpha \in \Br(X_v)$, the pull-back of $\alpha$ along $P$ is denoted by $\alpha(P) \in \Br(F_v(P))$, where $F_v(P)$ is the residue field of $P$. The pairing $\langle\cdot,\cdot\rangle_v$ is defined on points by $\langle P,\alpha\rangle_v := \mathrm{Cor}_{F_v(P)/F_v}(\alpha(P))$ and it factors through rational equivalence (cf. \cite[p.~4]{Colliot-Thelene1993}). Here $\mathbb{C}\xspaceor_{F_v(P)/F_v}$ is the Corestriction map of Galois cohomology (cf. \cite[VIII.2]{Serre1979local}). \begin{defn} Suppose that $F$ is totally imaginary. The adelic Chow group $\mathbb{C}\xspaceH_{0,\mathbf{A}}(X)$ is defined to be the product $\displaystyle \mathbb{C}\xspaceH_{0,\mathbf{A}}(X):=\prod_{v\in\Omega_f(F)}\mathbb{C}\xspaceH_0(X_v)$. 
Similarly, we define $\displaystyle F^1_\mathbf{A}(X)=\prod_{v\in\Omega_f(F)}F^1(X_v)$ and $\displaystyle F^2_\mathbf{A}(X)=\prod_{v\in\Omega_f(F)}F^2(X_v)$. \end{defn} When $F$ is not totally imaginary, it follows by \cite[Th\'{e}or\`{e}me 1.3]{Colliot-Thelene1993} that the group $\mathbb{C}\xspaceH_{0,\mathbf{A}}(X)$ has a small ($2$-torsion) contribution from the infinite real places. In this article we will mainly be working over totally imaginary number fields, and hence we omit the more general definition. For more details see \cite[Def. 5.2]{Gazaki/Hiranouchi2021}. The local pairings induce a global pairing, \[\langle\cdot,\cdot\rangle:\mathbb{C}\xspaceH_{0,\mathbf{A}}(X)\times\Br(X)\rightarrow\mathbb{Q}\xspace/\mathbb{Z}\xspace,\] defined by $\langle(z_v)_v,\alpha\rangle=\sum_v\inv_v(\langle z_v,\iota_v^\star(\alpha)\rangle_v)$, where $\iota_v^\star$ is the pullback of $\iota_v: X_v\to X$. The short exact sequence of global class field theory, \[0\rightarrow\Br(F)\rightarrow\bigoplus_{v\in\Omega}\Br(F_v)\xrightarrow{\sum\inv_v}\mathbb{Q}\xspace/\mathbb{Z}\xspace\rightarrow 0,\] implies that the group $\mathbb{C}\xspaceH_0(X)$ lies in the left kernel of $\langle\cdot,\cdot\rangle$. Thus, we obtain a complex, \begin{equation}\label{complex1}\mathbb{C}\xspaceH_0(X)\stackrel{\Delta}{\longrightarrow} \mathbb{C}\xspaceH_{0,\mathbf{A}}(X)\rightarrow\Hom(\Br(X),\mathbb{Q}\xspace/\mathbb{Z}\xspace), \end{equation} where $\Delta$ is the diagonal map. \autoref{locatoglobalconj} then predicts that the complex \eqref{complex1} becomes exact after passing to the completions; namely that the induced complex \begin{equation}\label{complex2}\widehat{\mathbb{C}\xspaceH_0(X)}\stackrel{\Delta}{\longrightarrow} \widehat{\mathbb{C}\xspaceH_{0,\mathbf{A}}}(X)\rightarrow\Hom(\Br(X),\mathbb{Q}\xspace/\mathbb{Z}\xspace) \end{equation} is exact. Next we consider the filtration $\Br(X)\supset\Br_1(X)\supset\Br_0(X)$ induced by the Hochschild-Serre spectral sequence, where $\Br_1(X):=\ker(\Br(X)\rightarrow\Br(X_{\overline{F}}))$ is the \textit{algebraic Brauer group} of $X$, and $\Br_0(X)=\img(\Br(F)\to\Br(X))$ are the constants. When $X$ has a $F$-rational point, the exactness of \eqref{complex2} is reduced to the exactness of the induced complex, \begin{equation}\label{complex3}\widehat{F^1(X)}\stackrel{\Delta}{\longrightarrow} \widehat{F^1_\mathbf{A}(X)}\xrightarrow{\varepsilon}\Hom(\Br(X)/\Br_0(X),\mathbb{Q}\xspace/\mathbb{Z}\xspace). \end{equation} When $X$ is a product of elliptic curves more can be said. \begin{prop}\label{compatibility} Let $X=E_1\times\cdots\times E_d$ be a product of elliptic curves over a number field $F$. Let $G_F$ be the absolute Galois group of $F$. Suppose that the N\'{e}ron-Severi group $\NS(X_{\overline{F}})$ of the base change to the algebraic closure of $F$ has trivial $G_F$-action. Then the restriction of the map $\varepsilon:F^1(X)\to\Hom(\Br(X)/\Br_0(X),\mathbb{Q}\xspace/\mathbb{Z}\xspace)$ to the Albanese kernel $F^2(X)$ factors through the group $\Hom(\Br(X)/\Br_1(X),\mathbb{Q}\xspace/\mathbb{Z}\xspace)$. \end{prop} \begin{proof} We need to show that for every $z\in F^2(X)$ and every $\alpha\in \Br_1(X)/\Br_0(X)$, it follows $\langle z,\alpha\rangle=0$. A direct computation of the Hochshild-Serre spectral sequence gives an isomorphism $\Br_1(X)/\Br_0(X)\simeq H^1(F,\Pic(X_{\overline{F}}))$. 
The $G_F$-module $\Pic(X_{\overline{F}})$ fits into a (split) short exact sequence \[0\to \Pic^0(X_{\overline{F}})\to \Pic(X_{\overline{F}})\to\NS(X_{\overline{F}})\to 0.\] Since $X$ is an abelian variety, the group $\NS(X_{\overline{F}})$ is torsion free, and hence a finitely generated free abelian group. Since we assumed that $G_F$ acts trivially on it, it follows that $H^1(F,\NS(X_{\overline{F}}))=0$. Thus, we obtain an isomorphism \[H^1(F,\Pic(X_{\overline{F}}))\simeq H^1(F,\Pic^0(X_{\overline{F}}))\simeq H^1(F,X(\overline{F}))\simeq \bigoplus_{i=1}^d H^1(F,E_i).\] For $i=1,\ldots,d$, let $\pr_i:X\to E_i$ be the projection. The map $\pr_i$ is proper and it induces a pushforward $\pr_{i\star}: \mathbb{C}\xspaceH_0(X)\to\mathbb{C}\xspaceH_0(E_i)$ on Chow groups and a pullback $\pr_i^\star:\Br(E_i)\to\Br(X)$ on Brauer groups. Since the Brauer-Manin pairing is defined just by evaluation, we have a commutative diagram \[\begin{tikzcd} \mathbb{C}\xspaceH_{0,\mathbf{A}}(X)\ar{d}{\pr_{i\star}}\ar{r}& \Hom(\Br(X),\mathbb{Q}\xspace/\mathbb{Z}\xspace)\ar{d}{\Hom(\pr_i^\star,\mathbb{Q}\xspace/\mathbb{Z}\xspace)}\\ \mathbb{C}\xspaceH_{0,\mathbf{A}}(E_i)\ar{r}& \Hom(\Br(E_i),\mathbb{Q}\xspace/\mathbb{Z}\xspace). \end{tikzcd}\] That is, for $z\in\mathbb{C}\xspaceH_0(X)$ and $\beta\in \Br(E_i)$ we have an equality $\langle z,\pr_i^\star(\beta)\rangle=\langle\pr_{i\star}(z),\beta\rangle.$ Moreover, both homomorphisms preserve the two-step filtrations of $\mathbb{C}\xspaceH_0$ and $\Br$. The pullbacks $\pr_i^\star$ induce a homomorphism \[\bigoplus_{i=1}^d \Br_1(E_i)/\Br_0(E_i)\xrightarrow{\oplus\pr_i^\star}\Br_1(X)/\Br_0(X).\] Our previous analysis shows that this is in fact an isomorphism. Thus, every class in $\Br_1(X)/\Br_0(X)$ is a sum of classes of the form $\pr_i^\star(\beta_i)$ with $\beta_i\in\Br_1(E_i)$, and for $z\in F^2(X)$ we compute \[\langle z,\pr_i^\star(\beta_i)\rangle=\langle\pr_{i\star}(z),\beta_i\rangle=0.\] The vanishing follows, because $\pr_{i\star}(z)\in F^2(E_i)=0$, since $E_i$ is a curve. \end{proof} \begin{cor} Let $X=E\times E$ be the self-product of an elliptic curve over a number field $F$. Suppose that $E$ has complex multiplication defined over $F$ by the full ring of integers of a quadratic imaginary field $K$. Then the assumption of \autoref{compatibility} is satisfied. \end{cor} \begin{proof} Because we assumed that $F$ contains the quadratic imaginary field $K$, it follows by \cite[p. (120), equation (10)]{Skorobogatov/Zharin2012} that $\NS(X_{\overline{F}})$ is a trivial $G_F$-module. \end{proof} \autoref{compatibility} shows that the complex \eqref{complex3} induces a complex \begin{equation}\label{complex4}\widehat{F^2(X)}\stackrel{\Delta}{\longrightarrow} \widehat{F^2_\mathbf{A}(X)}\xrightarrow{\varepsilon}\Hom(\Br(X)/\Br_1(X),\mathbb{Q}\xspace/\mathbb{Z}\xspace). \end{equation} Combining \autoref{compatibility} with \cite[Prop. 5.6]{Gazaki/Hiranouchi2021} yields the following corollary. \begin{cor}\label{reduction} The exactness of \eqref{complex3} can be reduced to the exactness of \eqref{complex4}. \end{cor} We note that this corollary was claimed by the main author and T. Hiranouchi in \cite[Prop. 5.6]{Gazaki/Hiranouchi2021}. It was brought to our attention, however, that the argument given there was not sufficient; \autoref{compatibility} was a key missing ingredient. We note that such a reduction cannot be made in general for other classes of varieties, for example del Pezzo surfaces or $K3$ surfaces. \section{Local Results}\label{Brauercomputations} In this section we are going to prove \autoref{main0} and also obtain some necessary local information in order to prove \autoref{main1} and \autoref{main2} in the next section.
\subsection{Proof of \autoref{main0}}\label{localprelims} In this subsection $k$ will denote a finite extension of $\mathbb{Q}\xspace_p$. We will often assume that $p$ is odd. Let $X$ be a smooth projective geometrically connected variety over $k$. We will denote by $F^2(X)_{\dv}$ the maximal divisible subgroup of the Albanese kernel $F^2(X)$ and by $F^2(X)_{\nd}$ the quotient $F^2(X)/F^2(X)_{\dv}$. We have a decomposition, \[ F^2(X)=F^2(X)_{\dv}\oplus F^2(X)_{\nd}. \] The subgroup $F^2(X)_{\nd}$ is expected to be finite (cf. \cite[Conjecture 3.5.4]{Raskind/Spiess2000}, see also \cite[1.4(g)]{Colliot-Thelene1993}). When $X=E_1\times E_2$ is the product of two elliptic curves, this conjecture has been established in a large number of cases. In particular, the following are true. \begin{enumerate} \item When $X$ has good reduction, the group $F^2(X)$ is $m$-divisible for every integer $m$ coprime to $p$ (\cite[Theorem 3.5]{Raskind/Spiess2000}). \item Suppose $p\geq 3$, $X$ has split semistable reduction and at most one of the curves has good supersingular reduction. Then the group $F^2(X)_{\nd}$ is finite (\cite[Theorem 1.1]{Raskind/Spiess2000}, \cite[Theorem 1.2]{Gazaki/Leal2018}). \item Suppose $p\geq 3$, $k$ is unramified over $\mathbb{Q}\xspace_p$, $X$ has good reduction and at most one of the curves has good supersingular reduction. Then $F^2(X)_{\nd}=0$ (\cite[Theorem 1.4]{Gazaki/Hiranouchi2021}). \end{enumerate} The finiteness of $F^2(X)_{\nd}$ yields an equality $F^2(X)_{\nd}=\widehat{F^2(X)}$. In the special case when $X$ has good reduction, it follows that the group $F^2(X)_{\nd}$ has $p$-power order. Suppose $|F^2(X)_{\nd}|=p^N$ for some $N\geq 0$. Then $F^2(X)_{\nd}\simeq F^2(X)/p^N$. We are now ready to prove \autoref{main0}, which we restate here. \begin{theo}\label{mainmain0} Let $X=E_1\times E_2$ be a product of elliptic curves over a number field $F$. Suppose that the action of the absolute Galois group $G_F$ on the N\'{e}ron-Severi group $\NS(X_{\overline{F}})$ is trivial. Then there is an infinite set $T$ of rational primes $p$ for which the group $\varprojlim\limits_{n}F^2_{\mathbf{A}}(X)/p^n$ vanishes. In particular, the complex \ref{complex2intro} is exact for every $p\in T$. If we further assume that at least one of the curves does not have potential complex multiplication, then the complement $S$ of $T$ is a set of primes of density zero. \end{theo} \begin{proof} The proof will be along the lines of \cite[Lemma 5.9]{Gazaki/Hiranouchi2021}. Let $p$ be an odd prime and suppose that for every place $v$ above $p$ the surface $X_v$ has good reduction. It follows by \autoref{localprelims} (1) that the only factors of $\displaystyle F^2_{\mathbf{A}}(X)$ that might not be $p$-divisible correspond to the places $v$ above $p$ and to the places of bad reduction. Let $\Lambda=\{v\text{ place of bad reduction}\}$. Then for every $n\geq 1$ we have, \[F^2_{\mathbf{A}}(X)/p^n=\prod_{v|p}F^2(X_v)/p^n\times\prod_{v\in\Lambda}F^2(X_v)/p^n.\] Let $v\in\Lambda$ be a place of bad reduction. Then there exists a finite extension $L_v/F_v$ such that the base change $X_{L_v}$ has split semistable reduction. This means that either both elliptic curves $E_{iL_{v}}$ have good reduction, or if any of them has bad reduction, then it has split multiplicative reduction. In the former case, since $v\nmid p$, it follows by \cite[Theorem 3.5]{Raskind/Spiess2000} that the group $F^2(X_{L_v})$ is $p$-divisible.
In the latter case, it follows by \cite[Theorem 1.2]{Gazaki/Leal2018}) that the group $F^2(X_{L_v})_{\nd}$ is finite. Write $N_v$ for its order. To unify notation, we set $N_v=1$ for the case of potentially good reduction. Consider the following positive integer, \[M=\prod_{v\in\Lambda}[L_v:F_v]\prod_{v\in\Lambda}N_v.\] We claim that for every prime $p$ which is coprime to $M$ and for every $v\in\Lambda$ the group $F^2(X_v)$ is $p$-divisible. Consider the projection $X_{L_{v}}\xrightarrow{\pi_{L_{v}/F_{v}}} X_{v}$, and let $\mathbb{C}\xspaceH_0(X_{L_{v}})\xrightarrow{\pi_{L_{v}/F_{v}\star}} \mathbb{C}\xspaceH_0(X_{v})$, and $\mathbb{C}\xspaceH_0(X_{v})\xrightarrow{\pi_{L_{v}/F_{v}}^{\star}} \mathbb{C}\xspaceH_0(X_{L_{v}})$ be the induced push-forward and pull back maps respectively. Then we have an equality $\pi_{L_{v}/F_{v}\star}\circ\pi_{L_{v}/F_{v}}^{\star}=[L_v:F_v]$. Since $p$ is coprime to $[L_v:F_v]$, it follows that the pull back map induces an injective map of $\mathbb{F}\xspace_p$-vector spaces \[F^2(X_{v})/p\stackrel{\pi_{L_{v}/F_{v}}^{\star}}{\hookrightarrow} F^2(X_{L_{v}})/p.\] By our choice of $p$, it follows that the group $F^2(X_{L_{v}})$ is $p$-divisible, and hence so is $F^2(X_{v})$. As a conclusion, for every prime $p\nmid M$ such that for every place $v$ above $p$ the surface $X_v$ has good reduction, we have an isomorphism \[\varprojlim\limits_{n}F^2_{\mathbf{A}}(X)/p^n\simeq\prod_{v|p}\varprojlim\limits_{n}F^2(X_v)/p^n.\] Let $T_0$ be the set of odd primes $p$ such that for every place $v$ above $p$ the abelian surface $X_v$ has good reduction, at most one of the curves $E_{1v}, E_{2v}$ has good supersingular reduction and $p\nmid M$. Then it follows by \autoref{localprelims} (2) that for every $p\in T_0$ and for every place $v|p$ the group $F^2(X)_{\nd}$ is finite of $p$-power order. Thus, we have an isomorphism \begin{equation}\label{piso} \varprojlim\limits_{n}F^2_{\mathbf{A}}(X)/p^n=\prod_{v|p}F^2(X_v)_{\nd}.\end{equation} If we further assume that for each place $v|p$ the extension $F_v/\mathbb{Q}\xspace_p$ is unramified, then it follows by \cite[Theorem 1.4]{Gazaki/Hiranouchi2021} that the group $F^2(X_v)_{\nd}$ vanishes. Consider the set \[T=T_0\setminus\{p:\text{ ramified prime}\}.\] Then for every $p\in T$, $\varprojlim\limits_{n}F^2_{\mathbf{A}}(X)/p^n=0$ and the complex \ref{complex2intro} is exact. Note that the set $T_0$ is infinite, since it contains all primes $p$ such that for every place $v$ above $p$ both elliptic curves have good ordinary reduction. Since the set of ramified primes is finite, it follows that $T$ is an infinite set of primes. Next suppose that one of the curves does not have potential complex multiplication; without loss of generality, assume $\End(E_{1\overline{\mathbb{Q}\xspace}})=\mathbb{Z}\xspace$. Then the set of primes $p$ which are such that the curve $E_{1v}$ has good supersingular reduction for every place $v$ above $p$ is a set of primes of density zero (cf.~\cite{Serre1981, Lang/Trotter1976}). We conclude that the complement $S$ of $T$ is a set of primes of density zero. \end{proof} \subsection{Computing the local Albanese kernel over ramified base fields}\label{localAlb} Throughout this subsection $E$ will be an elliptic curve defined over a finite extension $k$ of $\mathbb{Q}\xspace_p$. We assume that $E$ has \textbf{good ordinary reduction}. We will denote by $\widehat{E}$ the formal group of $E$ and by $\overline{E}$ the reduction of $E$, which is an ordinary elliptic curve over the finite field $\mathbb{F}\xspace_k$. 
Moreover, we will denote by $e$ the absolute ramification index of $k$. The purpose of this subsection is to obtain some explicit information on the group $F^2(X)_{\nd}=\widehat{F^2(X)}$. \subsubsection{Decomposition of local points}\label{decompose} We first obtain some information on points $P\in E(k)$, which will be of use in \autoref{computations_section}. Since the elliptic curve $E$ has good ordinary reduction, for every integer $n\geq 1$ we have a short exact sequence of $\mathbb{G}\xspaceal(\overline{k}/k)$-modules. \begin{equation}\label{ses7} 0\rightarrow \widehat{E}[p^n]\rightarrow E[p^n]\rightarrow\overline{E}[p^n]\rightarrow 0, \end{equation} where as $\mathbb{Z}\xspace$-modules both $\widehat{E}[p^n]$ and $\overline{E}[p^n]$ are isomorphic to $\mathbb{Z}\xspace/p^n$. The sequence \ref{ses7} is also known as the \textit{connected-\'{e}tale} exact sequence. In general this sequence does not split, but it does when $E[p^n]\subset E(k)$, and unconditionally on $k$ if $E$ has complex multiplication (\cite[A.2.4]{Serre89}). In particular it splits when $E$ satisfies the assumptions of \autoref{CM background}. We assume we are in the CM situation. We additionally suppose that $k$ is an unramified extension of $\mathbb{Q}\xspace_p$ and $\overline{E}[p]\subset\overline{E}(\mathbb{F}\xspace_k)$. We consider the exact sequence \begin{equation} \label{ses3} 0\rightarrow \widehat{E}(\mathfrak{m}_k)\rightarrow E(k)\xrightarrow{r}\overline{E}(\mathbb{F}\xspace_k)\rightarrow 0, \end{equation} where $E(k)\xrightarrow{r}\overline{E}(\mathbb{F}\xspace_k)$ is the reduction map. The group $\overline{E}(\mathbb{F}\xspace_k)$ is finite, thus it decomposes as $\overline{E}(\mathbb{F}\xspace_k)\simeq \overline{E}(\mathbb{F}\xspace_k)\{p\}\oplus \overline{E}(\mathbb{F}\xspace_k)\{m\}$, for some large enough integer $m\geq 1$ coprime to $p$. Because $E$ has good reduction, it follows by the criterion of N\'{e}ron-Ogg-Shafarevich (\cite[Theorem 7.1]{Silverman2009} that the coprime-to-$p$ torsion subgroup of $E(k)$ is isomorphic to $\overline{E}(\mathbb{F}\xspace_k)\{m\}$. Moreover, since $\overline{E}$ is an ordinary elliptic curve and we assumed $\overline{E}[p]\subset\overline{E}(\mathbb{F}\xspace_k)$, it follows that $\overline{E}(\mathbb{F}\xspace_k)\{p\}\simeq\overline{E}[p^{N_0}]$ for some integer $N_0\geq 1$. In particular, the group $\overline{E}(\mathbb{F}\xspace_k)\{p\}$ is cyclic of order $p^{N_0}$. The splitting of \ref{ses7} implies that the map $E[p^{N_0}](k)\xrightarrow{r}\overline{E}[p^{N_0}](\mathbb{F}\xspace_k)$ is surjective. Since we assumed that $k/\mathbb{Q}\xspace_p$ is unramified, it follows by \cite[Theorem 6.1]{Silverman2009} that $\widehat{E}[p]=0$, and hence this map is an isomorphism. We conclude that the reduction map $r$ induces an isomorphism on torsion subgroups \[E(k)_{\tor}\simeq\overline{E}(\mathbb{F}\xspace_k)_{\tor},\] and hence the short exact sequence \eqref{ses3} has a natural splitting. Moreover, if $L/k$ is any finite extension, this splitting commutes with the restriction map $E(k)\xrightarrow{\res_{L/k}}E(L)$. It also induces a split short exact sequence \[0\rightarrow \widehat{E}(\mathfrak{m}_k)/p\rightarrow E(k)/p\xrightarrow{r}\overline{E}(\mathbb{F}\xspace_k)/p\rightarrow 0.\] This splitting can be made more explicit as follows. 
We have an isomorphism, \[\overline{E}(\mathbb{F}\xspace_k)/p\simeq \overline{E}[p^{N_0}]/\overline{E}[p^{N_0-1}]\simeq \overline{E}[p].\] We fix a generator $\overline{P}_0$ of $\overline{E}[p^{N_0}](\mathbb{F}\xspace_k)$ and its unique lift $P_0\in E[p^{N_0}](k)$. Let $P\in E(k)$. There exists an integer $c\in\{0,1,\ldots, p^{N_0}-1\}$ such that $r(P \mod p)-r(cP_0\mod p)=0$. This implies that $P$ has an expression as $P=\widehat{P}+cP_0+pQ$ for some points $Q\in E(k), \widehat{P}\in \widehat{E}(\mathfrak{m}_k)$. \begin{notn}\label{localpointsplit} From now we will abuse notation and for a point $P\in E(k)$ we will denote also by $P$ its image in $E(k)/p$. The above discussion shows that $P$ has a unique decomposition as $P=(\widehat{P},\overline{P})$ where $\widehat{P}\in\widehat{E}(\mathfrak{m}_k)/p$ and $\overline{P}:=r(cP_0\mod p)\in \overline{E}(\mathbb{F}\xspace_k)/p$. It is clear by the construction that this decomposition is compatible with the restriction map $\res_{L/k}$ for finite extensions $L/k$. Namely, $\res_{L/k}(P)=(\res_{L/k}(\widehat{P}),\res_{\mathbb{F}\xspace_L/\mathbb{F}\xspace_k}(\overline{P}))$. \end{notn} \subsubsection{Explicit isomorphism}\label{formalgroup} In this subsection we consider a finite extension $L/k$ such that $E[p^n]\subset E(L)$ for some $n\geq 1$. It follows by \cite[Section 4]{Raskind/Spiess2000} (see also \cite[Theorem 4.2]{Hiranouchi2014}, \cite[Theorem 3.4]{hiranouchi/hirayama}) that we have an isomorphism \begin{equation}\label{modpiso} F^2(X_L)_{\nd}/p^n=F^2(X_L)/p^n\simeq K_2(L;E)/p^n\simeq \mathbb{Z}\xspace/p^n\mathbb{Z}\xspace. \end{equation} Let $N=\max\{n\geq 1:E[p^n]\subset E(L)\}$. If we further assume that the extension $L(E[p^{N+1}])/L$ has wild ramification, it follows by \cite[Theorem 3.14]{Gazaki/Leal2018} that we have an isomorphism \begin{equation}\label{GazLealiso} \widehat{F^2(X_L)}=F^2(X_L)_{\nd}=F^2(X_L)/p^N\simeq\mathbb{Z}\xspace/p^N. \end{equation} The case of interest to us is when $N=1$. To prove \autoref{main1}, the above information is enough. However, in order to construct global $0$-cycles and prove \autoref{main2}, we need to recall the explicit construction of the isomorphism $K_2(L;E_L)/p\simeq \mathbb{Z}\xspace/p\mathbb{Z}\xspace$. We are particularly interested in describing necessary and sufficient conditions for a symbol $\{P,Q\}_{L/L}\in K_2(L;E_L)/p$ to be nontrivial. The group $K_2(L;E_L)$ is defined to be the quotient of the Mackey product $(E_L\otimes^M E_L)(L)$ (cf. \cite[Section 2.1]{Gazaki/Hiranouchi2021}) modulo Weil reciprocity. It follows by \cite{Raskind/Spiess2000} (see also \cite[Theorem 4.2]{Hiranouchi2014}) that when $E[p]\subset E(L)$ we have an isomorphism \[K_2(L;E_L)\simeq (E_L\otimes^M E_L)(L)/p.\] The Mackey functor $E_L/p$ has a direct decomposition $E_L/p\simeq \widehat{E_L}/p\oplus [E_L/\widehat{E_L}]/p,$ induced by the splitting short exact sequences \[0\to \widehat{E_L}(m_{L'})/p\to E_L(L')/p\to E_L(\mathbb{F}\xspace_{L'}/p)\to 0,\] where $L'/L$ is any finite extension. For the definition of the Mackey functor $[E_L/\widehat{E_L}]$ we refer to \cite[Proof of Theorem 3.14]{Gazaki/Leal2018}. It follows by \cite[Lemma 3.4.2]{Raskind/Spiess2000} and \cite[Proof of Theorem 3.1.4]{Gazaki/Leal2018} respectively that the Mackey products $([E_L/\widehat{E_L}]\otimes^M [E_L/\widehat{E_L}])(k)$, $(\widehat{E_L}\otimes^M [E_L/\widehat{E_L}])(L)/p,$ and $ ([E_L/\widehat{E_L}]\otimes^M E_L)(k)/p$ all vanish. 
Thus we have an isomorphism \[K_2(L;E_L)/p\simeq (\widehat{E_L}\otimes^M \widehat{E_L})(k)/p=(\widehat{E_L}/p\otimes^M \widehat{E_L}/p)(k).\] The upshot is the following, which will be used in \autoref{computations_section}. Let $k/\mathbb{Q}\xspace_p$ be a finite unramified extension like in the previous subsection and let $P\in E(k)/p$. Consider the restriction of $P$ in $E(L)/p$, which for simplicity we will denote again by $P$. Consider the decomposition $P=(\widehat{P},\overline{P})$ as in notation \eqref{localpointsplit}. Let $\widehat{Q}\in\widehat{E_L}(\mathfrak{m}_L)/p$. Then we have, \begin{equation}\label{splitformal}\{P,\widehat{Q}\}_{L/L}=\{\widehat{P},\widehat{Q}\}_{L/L}\in K_2(L;E_L)/p.\end{equation} Next we analyze the Mackey functor $\widehat{E_L}/p$. \color{black} The inclusion $E[p]\subset E(L)$ implies $\mu_p\subset L^\times$, and $\widehat{E}[p]\subset \widehat{E_L}(\mathfrak{m}_L)$. We fix a primitive $p$-th root of unity $\zeta_p\in L^\times$ and a non-canonical isomorphism of $\mathbb{G}\xspaceal(\overline{L}/L)$-modules $\widehat{E}[p]\simeq\mathbb{Z}\xspace/p\simeq\mu_{p}.$ This induces an isomorphism \[H^1(L,\widehat{E_L}[p])\simeq H^1(L,\mu_p)=L^\times/L^{\times p}.\] We consider the connecting homomorphism of the Kummer sequence for $\widehat{E_L}$ (cf. \cite[Lemma 3.5]{Gazaki/Hiranouchi2021}) \[\delta_L:\widehat{E_L}(L)/p\hookrightarrow H^1(L,\widehat{E_L}[p]).\] Let $U_{L}^1$ be the group of $1$-units of $L^\times$ with its natural filtration $\{U_{L}^i=1+\mathfrak{m}_{L}^i\}_{i\geq 1}$ and define \[\overline{U}_{L}^i=\img(U_{L}^i\rightarrow L^\times/L^{\times p}),\;\;i\geq 1.\] It then follows by \cite[Theorem 2.1.6]{kawachi2002}, (see also \cite[Prop. 3.13]{Gazaki/Hiranouchi2021}) that the homomorphism $\delta_L$ induces an isomorphism \begin{equation}\label{kawachi}\delta_L:\widehat{E_L}(L)/p\xrightarrow{\simeq}\overline{U}_{L}^1, \end{equation} and the same holds over any finite extension $L'/L$. In fact we have an isomorphism as Mackey functors $\widehat{E_L}/p\simeq \overline{U}_{L}^1$. Additionally, $\delta_L$ is a filtered isomorphism in the following sense. The formal group $\widehat{E_L}(\mathfrak{m}_L)$ has a natural filtration $\widehat{E_L}^i(L):=\widehat{E_L}(\mathfrak{m}_{L}^i)$, for $i\geq 1$. This induces a filtration on the group $\widehat{E_L}(\mathfrak{m}_{L})/[p]=\widehat{E_L}^1(L)/p$ by defining (cf. \cite[Definition 3.10]{Gazaki/Hiranouchi2021}), \[\mathcal{D}^i:=\frac{\widehat{E_L}^i(L)}{[p]\widehat{E_L}(L)\cap \widehat{E_L}^i(L)},\;\;i\geq 1.\] Then the isomorphism \ref{kawachi} has the property that $\delta_L(\mathcal{D}^i)\subset\overline{U}_L^i$, for all $i\geq 1$. Moreover, it is functorial with respect to norm maps associated with finite extensions $L'/L$. Since $\mu_p\subset L^\times$, we have isomorphisms $H^2(L,\mu_p\otimes\mu_p)\simeq H^2(L,\mu_p)\simeq\Br(L)_p\simeq\mathbb{Z}\xspace/p$. For $a,b\in L^\times/L^{\times p}$ write $(a,b)_p\in\Br(L)_p$ for the cyclic symbol algebra generated by $a,b$. Then we have an isomorphism \begin{eqnarray}\label{sp} && s_p: (\widehat{E_L}/p\otimes^M \widehat{E_L}/p)(k)\rightarrow H^2(L,\mu_p)=\Br(L)_p\nonumber\\ && \{\widehat{P},\widehat{Q}\}_{L/L}\mapsto (\delta(\widehat{P}),\delta(\widehat{Q}))_p, \end{eqnarray} known as \textit{the generalized Galois symbol}. For a proof of the injectivity of $s_p$ see for example \cite[Theorem 4.2]{Hiranouchi2014}. 
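For the reader's convenience we recall the classical description of the symbol classes appearing above; this is standard and involves nothing specific to the present setting. For $a,b\in L^\times$, the class $(a,b)_p\in\Br(L)_p$ is represented by the cyclic algebra generated over $L$ by two elements $x,y$ subject to the relations \[x^p=a,\qquad y^p=b,\qquad yx=\zeta_p\,xy,\] with the convention fixed by the chosen primitive root of unity $\zeta_p$. Moreover, $(a,b)_p=0$ if and only if $b$ is a norm from the extension $L(a^{1/p})/L$, or equivalently if and only if $a$ is a norm from $L(b^{1/p})/L$. This is the fact invoked in the criterion below.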
\subsubsection{Criterion for nontriviality}\label{iff} It follows that a symbol $\{\widehat{P},\widehat{Q}\}_{L/L}$ is nonzero in $K_2(L;E_L)/p$ if and only if $\delta_L(\widehat{P})$ is not in the image of the norm map \[N:L\left(\delta_L(\widehat{Q})^{1/p}\right)^\times\rightarrow L^\times,\] or equivalently if and only if $\widehat{P}$ is not in the image of the norm map \[N:\widehat{E_L}\left(L\left(\frac{1}{p}\widehat{Q}\right)\right)/p\rightarrow \widehat{E_L}(L)/p.\] Here we denoted by $L\left(\frac{1}{p}\widehat{Q}\right)$ the smallest Galois extension of $L$ over which there exists a point $\widehat{Q}'$ such that $p\widehat{Q}'=\widehat{Q}$. The same equivalence holds with the roles of $\widehat{P},\widehat{Q}$ interchanged. \section{Self-product of an elliptic curve with complex multiplication}\label{CMsection} In this subsection we are going to prove theorems \eqref{main1} and \eqref{main2}. Throughout this section we are making the following assumption. \begin{ass}\label{Epassume} $K$ will be a quadratic imaginary field of class number one. $E$ will be an elliptic curve over $\mathbb{Q}\xspace$ or $K$ with CM by $\mathcal{O}_K$ and $p\geq 5$ will be an ordinary prime for $E$. Moreover, we fix a factoring $p=\pi\overline{\pi}$ into prime elements of $\mathcal{O}_K$ such that the endomorphism $\pi:E_K\rightarrow E_K$ reduces to the Frobenius automorphism $\phi_p:\overline{E_p}\rightarrow\overline{E_p}$. \end{ass} \subsection{Local $0$-cycles orthogonal to the Brauer group}\label{Brauercomplement} We are now ready to prove \autoref{main1}, which we restate here. \begin{theo}\label{mainmain1} Let $X=E\times E$. Suppose there exists a finite Galois extension $F_0/K$ of degree $n<p-1$ such that there is a unique inert place $w$ of $F_0$ above $\pi$ with the property that $p$ divides $|\overline{E_p}(\mathbb{F}\xspace_w)|$, where $\mathbb{F}\xspace_w$ is the residue field of $F_0$ at $w$. Consider the extension $L:=F_0\cdot K(E[\pi])$. Then the group $\displaystyle\varprojlim\limits_n F^2_{\mathbf{A}}(X_L)/p^n$ has the following properties. \begin{enumerate} \item It is equal to $\displaystyle\prod_{v|p} F^2(X_{L_v})_{\nd}$ and isomorphic to $\mathbb{Z}\xspace/p\oplus\mathbb{Z}\xspace/p$. \item It is orthogonal to the transcendental Brauer group $\displaystyle\frac{\Br(X_L)\{p\}}{\Br_1(X_L)\{p\}}$. \end{enumerate} \end{theo} \begin{proof} The proof involves three main steps. \textbf{Claim 1:} There is an isomorphism $\displaystyle\prod_{v|p}F^2(X_{L_v})_{\nd}\simeq\mathbb{Z}\xspace/p\oplus\mathbb{Z}\xspace/p$. It follows by \cite[Corollary 5.20 (ii), (iii)]{Rubin1999} that $K(E[\pi])/K$ is a Galois extension of degree $p-1$ and it is totally ramified above $\mathfrak{p}=(\pi)$. Since $F_0/K$ is unramified above $\pi$, the extensions $F_0, K(E[\pi])$ are totally disjoint and hence $\mathbb{G}\xspaceal(L/K)\simeq\mathbb{G}\xspaceal(F_0/K)\times\mathbb{G}\xspaceal(K(E[\pi])/K)$. In particular, $L/K$ is a Galois extension of degree $n(p-1)<(p-1)^2$, and there exists a unique place $v$ of $L$ above $\pi$ with ramification index $p-1$ and residue field $\mathbb{F}\xspace_{w_0}$ of degree $n$ over $\mathbb{F}\xspace_p$. Moreover, the extension $L/\mathbb{Q}\xspace$ is Galois, and hence there exists a unique place $\overline{v}$ above $\overline{\pi}$ with the same properties and these are precisely the places of $L$ that lie above $p$. The extensions $L_v$ and $L_{\overline{v}}$ are the same. 
It is enough therefore to prove an isomorphism $\widehat{F^2(X_{L_v})}\simeq\mathbb{Z}\xspace/p\mathbb{Z}\xspace$. We will simply denote by $E_v$ (resp. $E_{\overline{v}}$) the base change $E_{L_v}$ (resp. $E_{L_{\overline{v}}}$) which is an elliptic curve with good ordinary reduction over the $p$-adic field $L_v/\mathbb{Q}\xspace_p$. We will denote by $\mathfrak{m}_v$ the maximal ideal of $\mathcal{O}_{L_v}$, $\widehat{E_v}$ will be the formal group of $E_v$ and $\overline{E_v}$ the reduction. We claim that $E_v[p]\subset E_v(L_v)$. As noted in \autoref{CM background}, the subgroup $E_{\mathbb{Q}\xspace_p}[\pi]$ of $E_{\mathbb{Q}\xspace_p}[p]$ coincides with the $p$-torsion of the formal group $\widehat{E_{\mathbb{Q}\xspace_p}}[p]$. It follows that $E_v[\pi]$ coincides with $\widehat{E_v}[p]$, and hence it is $L_v$-rational. Since the residue field of $L_v/\mathbb{Q}\xspace_p$ is $\mathbb{F}\xspace_{w_0}$, the assumption that $p$ divides $|\overline{E_p}(\mathbb{F}\xspace_{w_0})|$ implies that $\overline{E_p}[p]\subseteq\overline{E_p}(\mathbb{F}\xspace_{w_0})$. The inclusion $E_v[p]\subset E_v(L_v)$ then follows by the splitting short exact sequence \[0\rightarrow \widehat{E_v}[p]\rightarrow E_v[p]\rightarrow\overline{E_p}[p]\rightarrow 0.\] The desired isomorphism $\widehat{F^2(X_{L_v})}\simeq\mathbb{Z}\xspace/p\mathbb{Z}\xspace$ follows by \eqref{GazLealiso}. More precisely, the above argument shows that we have an inequality $N\geq 1$. We claim that $N=1$. It is enough to show that the extension $L_v(E[p^2])/L_v(E[p])$ has wild ramification. But this is clear, since $L_v(E[p^2])$ contains $L_v(\mu_{p^2})$, which has absolute ramification index dividing $p(p-1)$, while $L_v=L_v(E[p])$ has absolute ramification index $p-1$. \textbf{Claim 2:} There is an isomorphism $\displaystyle\varprojlim_{n} F^2_\mathbf{A}(X_L)/p^n\simeq\prod_{v|p}F^2(X_{L_v})_{\nd}$. We use the notation introduced in the proof of \autoref{mainmain0}. Namely, let \[M=\prod_{v_0\in\Lambda}n_{v_0}\prod_{v_0\in\Lambda}N_{v_0},\] where $\Lambda$ is the set of all bad reduction places of $E_L$, $n_{v_0}$ is the degree of an extension $L'_{v_0}/L_{v_0}$ over which the elliptic curve $E_{L'_{v_0}}$ attains split semistable reduction and $N_{v_0}=1$ in the potentially good reduction case, $N_{v_0}=|F^2(X_{L'_{v_0}})_{\nd}|$ otherwise. To prove the claim, it is enough to show that $p$ is coprime to $M$, because then the proof of \autoref{mainmain0} gives us the desired isomorphism (cf. \ref{piso}). Since the elliptic curve $E$ has complex multiplication, it follows by \cite[Theorem 7]{Serre/Tate1968} that the set $\Lambda$ only contains places of potentially good reduction. Thus, $N_{v_0}=1$ for every $v_0\in\Lambda$. It remains to show that for each $v_0\in\Lambda$ there exists a finite extension $L'_{v_0}/L_{v_0}$ of degree coprime to $p$ over which $E$ attains good reduction. Let $v_0\in\Lambda$ and $l$ be the rational prime lying below $v_0$. It follows by \cite[Corollary 5.22]{Rubin1999} that there exists an elliptic curve $E'$ defined over $\mathbb{Q}\xspace$ such that $E'$ has good reduction at $l$ and $E_{\overline{\mathbb{Q}\xspace}}\simeq E'_{\overline{\mathbb{Q}\xspace}}$. But any two elliptic curves over $\mathbb{Q}\xspace$ become isomorphic over a degree $6$ extension. Since we assumed that $p\geq 5$, the claim follows. \textbf{Claim 3:} The group $\displaystyle\frac{\Br(X_L)\{p\}}{\Br_1(X_L)\{p\}}$ is trivial. The group $\Br(X_L)/\Br_1(X_L)$ is finite by \cite[Theorem 1.1]{Skorobogatov/Zharin2008}. 
More precisely, if $G_L=\mathbb{G}\xspaceal(\overline{L}/L)$, then Skorobogatov and Zarhin (\cite[Proposition 3.3]{Skorobogatov/Zharin2012}) showed an isomorphism, \[\frac{\Br(X_L)_p}{\Br_1(X_L)_p}\simeq\frac{\Hom_{G_L}(E_L[p],E_L[p])}{(\Hom(E_L,E_L)/p)^{G_L}}.\] Since $E_L$ has CM defined over $L$, $(\Hom(E_L,E_L)/p)^{G_L}\simeq(\mathbb{Z}\xspace/p\mathbb{Z}\xspace)^2$. We claim that this is equal to the numerator $\Hom_{G_L}(E_L[p],E_L[p])$. Since by assumption $n<p-1$, we have proper field extensions $K\subsetneq L\subsetneq K(E[p])$. It follows that $E_L[\pi]$ is the unique nonzero submodule of $E_L[p]$ with a trivial $G_L$-action. Hence, any $G_L$-equivariant homomorphism $f:E_L[p]\rightarrow E_L[p]$ must send $E_L[\pi]$ to itself. This is precisely a subgroup of rank $2$ of $\Hom(E_L[p],E_L[p])\simeq(\mathbb{Z}\xspace/p\mathbb{Z}\xspace)^4$ containing $(\Hom(E_L,E_L)/p)^{G_L}$, and hence it must be equal to it. \end{proof} \begin{defn} Let $F_0/K$ be an extension satisfying the assumptions of \autoref{mainmain1}. Then $\mathbb{F}\xspace_{w_0}\simeq\mathbb{F}\xspace_{p^n}$. We will say that $F_0$ is \textit{minimal} if it has the minimum possible degree. That is, if $p||\overline{E_p}(\mathbb{F}\xspace_{p^n})|$ and $n\geq 1$ is the smallest possible with this property. \end{defn} When we construct global $0$-cycles in the next subsection, we will often start with a minimal totally real extension $F_0/\mathbb{Q}\xspace$ and apply \autoref{mainmain1} for $F_0\cdot K$. When $p||\overline{E_p}(\mathbb{F}\xspace_{p})|$, $F_0=\mathbb{Q}\xspace$. In all other cases a minimal $F_0$ is not uniquely determined. The following proposition shows that the extension $L=F_0\cdot K(E[\pi])$ constructed in \autoref{main1} is in some sense minimal over which interesting things happen at the places above the ordinary prime $p$. \begin{prop}\label{minimality} Let $(E,p)$ be a pair satisfying the assumptions of \autoref{mainmain1}. Let $F_0/\mathbb{Q}\xspace$ be an extension as in \autoref{mainmain1} of minimal degree. Let $\mathbb{Q}\xspace\subset F\subsetneq L=F_0\cdot K(E[\pi])$ be any intermediate extension. Then $\displaystyle\prod_{w\in\Omega_f(F),w|p}F^2(X_{F_{w}})_{\nd}=0$. \end{prop} \begin{proof} Since $F\subsetneq L=F_0\cdot K(E[\pi])$, any such field $F$ has at most two places $w$ above $p$ of ramification index dividing $p-1$. Moreover, since $F$ is properly contained in $L$, and the fields $F_0$ and $K(E[\pi])$ are minimal with respect to their defining properties, it follows that for every place $w$ of $F$ above $p$, $E[p]\not\subset E(F_w)$. This observation reduces the proposition to the following local claim. \textbf{Claim:} Let $k/\mathbb{Q}\xspace_p$ be a finite extension such that $E[p]\not\subset E(k)$. Then $K_2(k;E_k)/p=0$. Consider the finite extension $k_1=k(E[p])$ and notice that $k_1/k$ is a nontrivial extension of degree coprime to $p$. We have a commutative diagram \[\begin{tikzcd} K_2(k;E_k)/p\ar{r}{s_p}\ar{d}{\res_{k_1/k}} & H^2(k,E_k[p]^{\otimes 2})\ar{d}{\res_{k_1/k}}\\ K_2(k_1;E_{k_1})/p \ar{r}{s_p} & H^2(k_1,E_{k_1}[p]^{\otimes 2}) \end{tikzcd},\] where $s_p$ is the \textit{Galois symbol} map (cf. \cite[Definition 2.13]{Gazaki/Hiranouchi2021}) and $\res_{k_1/k}$ are the restriction maps (cf. \cite[p. 7]{Gazaki/Hiranouchi2021}). The bottom horizontal map is injective (cf. \cite[Theorem 4.2]{Hiranouchi2014}). Moreover, $N_{k_1/k}\circ\res_{k_1/k}=[k_1:k]$, and hence the restriction maps are also injective. 
This forces the top horizontal map to be injective, and hence it is enough to show that the image of the Galois symbol $K_2(k;E_k)/p\xrightarrow{s_p}H^2(k,E_k[p]^{\otimes 2})$ is zero. The image of $s_p$ can be computed by \cite[Theorem 1.1]{Gazaki2017}. Namely, under the Tate duality perfect pairing \[H^2(k,E_k[p]^{\otimes 2})\times\Hom_{\mathbb{G}\xspaceal(\overline{k}/k)}(E_k[p],E_k[p])\to\mathbb{Z}\xspace/p,\] the orthogonal complement of $\img(s_p)$ consists precisely of those $\mathbb{G}\xspaceal(\overline{k}/k)$- homomorphisms $E_k[p]\xrightarrow{f} E_k[p]$ that lift to a homomorphism of finite flat group schemes $\mathcal{E}_k[p]\xrightarrow{\tilde{f}}\mathcal{E}_k[p]$, where $\mathcal{E}_k$ is the N\'{e}ron model of $E_k$ over $\Spec(\mathcal{O}_k)$. Since $E_k$ has good ordinary reduction, it follows by \cite[Proposition 8.8]{Gazaki2017} that these are exactly the homomorphisms $E_k[p]\xrightarrow{f} E_k[p]$ that satisfy $f(\widehat{E_k}[p])\subset \widehat{E_k}[p]$. Thus, to show $\img(s_p)=0$, it is enough to show that every $\mathbb{G}\xspaceal(\overline{k}/k)$-equivariant homomorphism $E_k[p]\xrightarrow{f} E_k[p]$ satisfies this property. Since $E_k$ has complex multiplication defined over $k$, it follows that $E_k[p]\simeq\widehat{E_k}[p]\oplus\overline{E_k}[p]$ as $\mathbb{G}\xspaceal(\overline{k}/k)$-modules. Since at least one of the cyclic submodules is nontrivial, the claim follows. \end{proof} In \autoref{CM background} we saw many examples when $F_0$ can be taken to be $\mathbb{Q}\xspace$. We will focus more on this case in the next subsection. \color{blue} \color{black} \subsection{Constructing global $0$-cycles}\label{computations_section} In this last subsection we investigate the exactness of the complex \eqref{complex2intro} in the situation considered in \autoref{mainmain1}. That is, we are looking for criteria that allow us to lift the group $\displaystyle\prod_{v|p}F^2(X_{L_v})_{\nd}\simeq(\mathbb{Z}\xspace/p)^2$ to global $0$-cycles in $F^2(X)$. \begin{lem}\label{independence} Let $K=\mathbb{Q}\xspace(\sqrt{D})$ with $D\in\{-1,-2,-3,-7,-11,-19,-43,-67,-163\}$. Let $E$ be an elliptic curve over $\mathbb{Q}\xspace$ such that $E_K$ has complex multiplication by $\mathcal{O}_K$. Let $F/\mathbb{Q}\xspace$ be a totally real extension and suppose that the Mordell-Weil group $E(F)$ has positive rank. Let $P\in E(F)$ be a point of infinite order. Write $\mathcal{O}_K=\mathbb{Z}\xspace[\omega_D]$. Then the points $P,\omega_D(P)\in E(K\cdot F)$ are $\mathbb{Z}\xspace$-linearly independent. \end{lem} \begin{proof} We will simply write $\omega$ instead of $\omega_D$. Note that $\omega(P)\in E(K\cdot F)\setminus E(F)$. Let $\mu(x)=x^2-bx+c\in\mathbb{Z}\xspace[x]$ be the minimal polynomial of $\omega$. Let $\overline{\omega}$ be the conjugate of $\omega$, so the equalities $\omega+\overline{\omega}=b$ and $\omega\overline{\omega}=c$ hold. Suppose for contradiction that $P,\omega(P)$ are $\mathbb{Z}\xspace$-linearly dependent. Then we can find $n,m\in\mathbb{Z}\xspace$ such that $nP+m\omega(P)=0$. We may assume that $n,m$ are relatively prime, and hence we can find $x,y\in\mathbb{Z}\xspace$ such that $nx+my=1$. The relation $nP+m\omega(P)=0$ implies that $nxP+mx\omega(P)=0=nyP+my\omega(P)$. Applying the endomorphism $\overline{\omega}$ to the first equality yields $nx\overline{\omega}(P)+mxcP=0$, and hence $nx\omega(P)-(mxc+nxb)P=0$. Adding the latter to the equation $nyP+my\omega(P)=0$ gives \[\omega(P)=(nxb-ny+mxc)P\in E(F),\] which is a contradiction. 
\end{proof} \begin{rem}\label{congruence} Suppose $D\in\{-3,-7,-11,-19,-43,-67,-163\}$. Then $\mathcal{O}_K=\mathbb{Z}\xspace[\omega_D]$, where $\displaystyle\omega_D=-\frac{1}{2}+\frac{\sqrt{D}}{2}$. The minimal polynomial of $\omega_D$ is $x^2+x+\displaystyle\frac{1-D}{4}$, so that in the notation of \autoref{independence} we have $b=-1$ and $c=\displaystyle\frac{1-D}{4}$. Let $p$ be an odd prime with $p\neq -D$. Then the system of equations $\left\{\begin{array}{c} x^2=c\\ 2x=-1 \end{array}\right.$ has no solution over $\mathbb{F}\xspace_p$. Indeed, if $x$ were a solution, then necessarily $x=-1/2\in\mathbb{F}\xspace_p$, giving $4c\equiv 1\mod p\Leftrightarrow D\equiv 0\mod p$, which is excluded by the assumption $p\neq -D$. This simple observation will be used in the proof of the following \autoref{mainmain2}. \end{rem} \subsection*{Proof of lifting when a ``good'' rational point exists} In this subsection we give a first criterion to lift the local $0$-cycles to global ones. Our criterion relies only on the existence of a ``good'' rational point $P$. Before proving our main results, we briefly describe our methodology. From now on we assume we are in the set-up of \autoref{mainmain1} with the extension $F_0$ minimal. The base change $X_L$, where $L=F_0\cdot K(E[\pi])$, contains a nontrivial $L$-rational $\pi$-torsion point $A\in E[\pi](L)$. Our strategy is to examine when this torsion point can be used to construct global $0$-cycles. We say that a point $P\in E(F_0)$ is good if the $0$-cycle $z_1=[A,P]-[A,O]-[O,P]+[O,O]\in F^2(X_L)$ induces a nontrivial element $(z_{1v},z_{1\overline{v}})\in\displaystyle F^2(X_{L_v})_{\nd}\oplus F^2(X_{L_{\overline{v}}})_{\nd}$. When this happens, we show that the $0$-cycle $z_2=[A,\omega(P)]-[A,O]-[O,\omega(P)]+[O,O]$ induces a second $\mathbb{F}\xspace_p$-linearly independent element of $\displaystyle F^2(X_{L_v})_{\nd}\oplus F^2(X_{L_{\overline{v}}})_{\nd}$. To find an explicit condition that guarantees $(z_{1v},z_{1\overline{v}})\neq(0,0)$, we will use the nonvanishing criterion from \autoref{iff}. From now on we identify the $0$-cycle $z_1$ with the symbol $\{A,P\}_{L/L}\in K_2(L;E)$ and we denote by $\{A_v,P_v\}_{L_v/L_v}$ the corresponding element of $K_2(L_v;E_v)$. We will abuse notation and write $\{A_v,P_v\}_{L_v/L_v}$ for the class of this symbol modulo $pK_2(L_v;E_v)$. We recall from \eqref{splitformal} that we have an equality \[\{A_v,P_v\}_{L_v/L_v}=\{\widehat{A_v},\widehat{P}_v\}_{L_v/L_v}=\{A_v,\widehat{P}_v\}_{L_v/L_v},\] where $P_v=(\widehat{P}_v,\overline{P_v})$ is the decomposition of $P$ modulo $pE(L_v)$ as in \eqref{localpointsplit}. Here we used the fact that $A_v\in\widehat{E_v}[p]$. The following \autoref{mainmain2} develops the above idea when $F_0=\mathbb{Q}\xspace$. In this case we are looking for a point $P\in E(\mathbb{Q}\xspace)$ inducing the desired symbol $\{A,P\}_{L/L}$. For such a point we will denote by $P_{\mathbb{Q}\xspace_p}=(\widehat{P}_{\mathbb{Q}\xspace_p},\overline{P}_{\mathbb{Q}\xspace_p})\in \widehat{E}(\mathbb{Q}\xspace_p)/p\oplus\overline{E_p}(\mathbb{F}\xspace_p)$ the corresponding local point. \begin{theo}\label{mainmain2} Suppose $p$ is a prime and $E$ an elliptic curve over $\mathbb{Q}\xspace$ satisfying the assumptions of \autoref{mainmain1}. Assume further that $|\overline{E}_p(\mathbb{F}\xspace_p)|=p$, and consider the extension $L=K(E[\pi])$ constructed in \autoref{mainmain1}.
Suppose that the Mordell-Weil group $E(\mathbb{Q}\xspace)$ has positive rank and there exists a global point $P\in E(\mathbb{Q}\xspace)$ of infinite order such that the induced local point $P_{\mathbb{Q}\xspace_p}\in E_{\mathbb{Q}\xspace_p}(\mathbb{Q}\xspace_p)$ has the property that $\widehat{P}_{\mathbb{Q}\xspace_p}\in\widehat{E_{\mathbb{Q}\xspace_p}}(\mathbb{Q}\xspace_p)/p$ is nontrivial. Then the global $p$-torsion $0$-cycles $z_1,z_2\in F^2(X_L)$ induced by $A, P$ and $A,\omega(P)$ respectively lift the group $\displaystyle\prod_{v|p}F^2(X_{L_v})_{\nd}\simeq\mathbb{Z}\xspace/p\oplus\mathbb{Z}\xspace/p$. \end{theo} \begin{proof} The assumption $|\overline{E}_p(\mathbb{F}\xspace_p)|=p$ implies that $K=\mathbb{Q}\xspace(\sqrt{D})$ with $D\equiv 1\mod 4$. Let $\mu(x)=x^2+x+c\in\mathbb{Z}\xspace[x]$ be the minimal polynomial of $\omega=\omega_D$ over $\mathbb{Q}\xspace$. \textbf{Claim 1:} The symbol $\{A_v,\widehat{P}_v\}_{L_v/L_v}\in K_2(L_v;E_v)/p$ is nontrivial. To prove the claim, we consider the Kummer extension $k_0=L_v\left(\delta(A_v)^{1/p}\right)=L_v\left(\frac{1}{p}A_v\right)$. Recall from \autoref{iff} that the symbol $\{A_v,\widehat{P}_v\}_{L_v/L_v}$ is nontrivial if and only if $\delta(\widehat{P}_v)$ is not in the image of the norm map \[N_{k_0/L_v}:k_0^\times\rightarrow L_v^\times.\] We consider the filtration $\{\widehat{E_v}^i(L_v)\}_{i\geq 1}$ of $\widehat{E_v}(L_v)$ and the induced filtration $\{\mathcal{D}^i\}_{i\geq 1}$ of $\widehat{E_v}(L_v)/p$ (cf. \autoref{formalgroup}). We claim that $A_v\in\widehat{E_v}^1\setminus\widehat{E_v}^2$ and it therefore induces a nontrivial element of $\mathcal{D}^1/\mathcal{D}^2$. For, let $v(A_v)$ be the valuation of $A_v$. Since $A_v$ is an element of $\widehat{E}(L_v)$ of exact order $p$, it follows by \cite[IV.6, Theorem 6.1]{Silverman2009} that \[1\leq v(A_v)\leq\frac{v(p)}{p-1}.\] The extension $L_v/\mathbb{Q}\xspace_p$ is totally ramified of degree $p-1$, and hence it follows that $\frac{v(p)}{p-1}=1$ yielding the desired equality $v(A_v)=1$. We note that the integer $\frac{v(p)}{p-1}=1$ is precisely the integer $t_0(\phi)$ defined in \cite[page 11, Lemma 3.5]{Gazaki/Hiranouchi2021} where $\phi=[p]$ is the $p$-isogeny on $\widehat{E_v}$, which is of height one. Using the filtered isomorphism \eqref{kawachi}, we deduce that $\delta(A_v)\in\overline{U}_{L_v}^1\setminus\overline{U}_{L_v}^2$. It follows by \cite[Lemma 2.1.5]{kawachi2002} (see also \cite[Lemma 3.5]{Gazaki/Hiranouchi2021}) that the extension $k_0/L_v$ is totally ramified of degree $p$ with the jump in the ramification filtration of $\mathbb{G}\xspaceal(k_0/L_v)$ happening at $s=p-1$. It then follows by \cite[V.3, Corollary 7]{Serre1979local} that there exists a unit $u\in\overline{U}_{L_v}^{p-1}$ such that the symbol algebra $(\delta(A_v),u)_p$ is nontrivial in $\Br(L_v)_p$. In fact, we can conclude that $u\in\overline{U}_{L_v}^{p-1}\setminus\overline{U}_{L_v}^p$. For, it follows by \cite[V.3, Corollaries 2 \& 3]{Serre1979local} that $(\delta(A_v),u')_p=0$ for every $u'\in\overline{U}_{L_v}^p$. The next key observation is that \[(\delta(A_v),u)_p\neq 0, \text{ \textbf{for every unit} } u\in\overline{U}_{L_v}^{p-1}\setminus\overline{U}_{L_v}^p.\] This is because the residue field of $L_v$ is $\mathbb{F}\xspace_p$, and therefore we have an isomorphism (cf. \cite[Lemma 3.4]{Gazaki/Hiranouchi2021}) $\overline{U}_{L_v}^{p-1}/\overline{U}_{L_v}^{p}\simeq\mathbb{F}\xspace_p$. 
Thus, in order to prove the claim, it is enough to show $\delta(\widehat{P}_v)\in \overline{U}_{L_v}^{p-1}\setminus\overline{U}_{L_v}^p$, or equivalently that $v(\widehat{P}_v)=p-1$. Consider the inclusion \[\res_{L_v/\mathbb{Q}\xspace_p}:\widehat{E_{\mathbb{Q}\xspace_p}}^1(\mathbb{Q}\xspace_p)/p\hookrightarrow\widehat{E_v}^{p-1}(L_v)/p,\] sending $\widehat{P}_{\mathbb{Q}\xspace_p}$ to $\widehat{P}_{v}$. This map is injective, because we have an equality $N_{L_v/\mathbb{Q}\xspace_p}\circ\res_{L_v/\mathbb{Q}\xspace_p}=[L_v:\mathbb{Q}\xspace_p]=p-1$, which is coprime to $p$. Our last claim will then follow once we show that $\widehat{P}_{\mathbb{Q}\xspace_p}\in \widehat{E_{\mathbb{Q}\xspace_p}}^1(\mathbb{Q}\xspace_p)\setminus\widehat{E_{\mathbb{Q}\xspace_p}}^2(\mathbb{Q}\xspace_p)$. Let us assume otherwise that $\widehat{P}_{\mathbb{Q}\xspace_p}\in\widehat{E_{\mathbb{Q}\xspace_p}}^2(\mathbb{Q}\xspace_p)$, and hence $\widehat{P}_{v}\in\widehat{E_v}^{2(p-1)}$. Because $2(p-1)>p$, it follows by the filtered isomorphism \ref{kawachi} and \cite[Lemma 3.4]{Gazaki/Hiranouchi2021} that $\widehat{P}_{v}$ (and hence also $\widehat{P}_{\mathbb{Q}\xspace_p}$) is a multiple of $p$, which contradicts our assumption that $\widehat{P}_{\mathbb{Q}\xspace_p}\in \widehat{E_{\mathbb{Q}\xspace_p}}^1(\mathbb{Q}\xspace_p)/p$ is nontrivial. This completes the proof of the Claim. The above computation shows that the global $0$-cycle $z_1=[A,P]-[A,O]-[O,P]+[O,O]\in F^2(X_L)$ has a nontrivial image $(z_{1v}, z_{1\overline{v}})$ under the diagonal map $F^2(X_L)/p\xrightarrow{\Delta}\prod_{v|p}F^2(X_{L_v})_{\nd}$. A byproduct of the proof of Claim 1 is that the following two maps are isomorphisms, \begin{eqnarray*} f:\mathcal{D}^1/\mathcal{D}^2\rightarrow\mathbb{Z}\xspace/p, && g:\mathcal{D}^{p-1}/\mathcal{D}^p\rightarrow\mathbb{Z}\xspace/p\\ \;\;\;\; x\mapsto\{x,\widehat{P}_v\}_{L_v/L_v}, && \;\;\;\;y\mapsto\{A_v,y\}_{L_v/L_v}. \end{eqnarray*} The extensions $L_v$ and $L_{\overline{v}}$ are isomorphic as abstract fields. Therefore Claim 1 holds true also for the symbol $\{A_{\overline{v}},P_{\overline{v}}\}_{L_{\overline{v}}/L_{\overline{v}}}\in K_2(L_{\overline{v}}, E_{\overline{v}})/p$ corresponding to the $0$-cycle $z_{1\overline{v}}$. Since $P$ is defined over $\mathbb{Q}\xspace$, it follows that $P_{\overline{v}}=P_v$. Moreover, the map $f$ being an isomorphism implies that there exists some $a\in\mathbb{F}\xspace_p^\times$ such that \[f(A_{\overline{v}})=\{A_{\overline{v}},\widehat{P}_v\}_{L_v/L_v}=a\{A_v,\widehat{P}_v\}_{L_{\overline{v}}/L_{\overline{v}}}.\] In other words, if we fix an isomorphism $ F^2(X_{L_v})_{\nd}\xrightarrow{\simeq}\mathbb{Z}\xspace/p$ sending $z_{1v}$ to $1 $, then the diagonal map $\Delta:F^2(X_L)/p\rightarrow F^2(X_{L_v})_{\nd}\oplus F^2(X_{L_{\overline{v}}})_{\nd}\xrightarrow{\simeq}\mathbb{Z}\xspace/p\oplus\mathbb{Z}\xspace/p$, $z\mapsto(z_{v},z_{\overline{v}})$ maps $z_1$ to the tuple $(1,a)$. \textbf{Claim 2:} The element $\Delta(z_2)$ is $\mathbb{F}\xspace_p$-linearly independent from $\Delta(z_1)=(1,a)$, where $z_2$ is the $0$-cycle $z_2=[A,\omega(P)]-[A,O]-[O,\omega(P)]+[O,O]\in F^2(X_L)$. The $0$-cycles $z_{2v},z_{2\overline{v}}$ correspond to the symbols $\{A_v,\widehat{\omega(P)}_v\}_{L_v/L_v}$, $\{A_{\overline{v}},\widehat{\omega(P)}_{\overline{v}}\}_{L_{\overline{v}}/L_{\overline{v}}}$. 
Since $P\in E(K)\subset E(L)$, the point $\omega(P)_{\overline{v}}$ is the restriction of $\omega(P)_{\overline{\mathfrak{p}}}$, and hence it is none other than the complex conjugate, $\overline{\omega}(P)_v$. The points $\omega(P), \overline{\omega}(P)$ induce elements $\widehat{\omega(P)}_v, \widehat{\overline{\omega}(P)}_v\in \mathcal{D}^{p-1}/\mathcal{D}^p$. Since $\omega(P)+\overline{\omega}(P)=-P$, and $\widehat{P}_v\in\mathcal{D}^{p-1}/\mathcal{D}^p$ is nontrivial, at least one of these points induces a nontrivial element of $\mathcal{D}^{p-1}/\mathcal{D}^p$. Without loss of generality, assume there exists $m\in\mathbb{F}\xspace_p^{\times}$ such that $\widehat{\omega(P)}_v\equiv m\widehat{P}_v\mod\mathcal{D}^p$. The isomorphism $g$ then yields an equality \[g(\omega(P)_v)=\{A_v,\omega(P)_v\}_{L_v/L_v}=m\{A_v,P_v\}_{L_v/L_v}.\] To prove linear independence, it is enough to show \[\{A_{\overline{v}},\overline{\omega}(P)_v\}_{L_{\overline{v}}/L_{\overline{v}}}\neq ma\{A_v,P_v\}_{L_v/L_v}.\] Notice that we have $\{A_{\overline{v}},\overline{\omega}(P)_v\}_{L_v/L_v}=a\{A_v,\overline{\omega}(P)_v\}_{L_v/L_v}.$ Thus, it suffices to prove that the elements $\widehat{\omega(P)}_v,\widehat{\overline{\omega}(P)}_v$ induce distinct elements of $\mathcal{D}^{p-1}/\mathcal{D}^p$. Suppose for contradiction that they are the same. That is, $\widehat{\omega(P)}_v\equiv\widehat{\overline{\omega}(P)}_v\equiv m\widehat{P}_v\mod\mathcal{D}^p$. We have $\widehat{\omega(P)}_v+\widehat{\overline{\omega}(P)}_v=-\widehat{P}_v$, giving $2m=-1$ in $\mathbb{F}\xspace_p$. Moreover, $\widehat{\overline{\omega}(\omega(P))}_v=m^2\widehat{P}_v=c\widehat{P}_v$, where $c=(1-D)/4$. That is, $m\in\mathbb{F}\xspace_p^\times$ is a solution to the system \[\left\{\begin{array}{c} x^2=c\\ 2x=-1 \end{array}\right.,\] which is a contradiction by \autoref{congruence}. \end{proof} \begin{rem}\label{computations} We believe that the condition of having a ``good'' rational point $P\in E(\mathbb{Q}\xspace)$ is satisfied by a positive proportion of elliptic curves with positive rank over $\mathbb{Q}\xspace$. As we saw in the proof of \autoref{mainmain2}, all we need is a point $P$ such that $\widehat{P}_{\mathbb{Q}\xspace_p}\in\widehat{E}^1(\mathbb{Q}\xspace_p)\setminus\widehat{E}^2(\mathbb{Q}\xspace_p)$. The latter can be checked computationally as follows. \begin{enumerate} \item Compute the coordinates of a point $P\in E(\mathbb{Q}\xspace)$ of infinite order. \item Compute the coordinates of the induced local point $P_{\mathbb{Q}\xspace_p}\in E(\mathbb{Q}\xspace_p)$. \item Compute the coordinates of a local nonzero $p$-torsion point $P_{0}\in E[p](\mathbb{Q}\xspace_p)$. \item Find a scalar $\lambda\in \mathbb{F}\xspace_p$ such that the $x$-coordinate of $P_{\mathbb{Q}\xspace_p}-\lambda P_{0}$ has negative valuation. \item If the above valuation is exactly $-2$, then $P$ is a point that satisfies the condition of \autoref{mainmain2}. \end{enumerate} When the ordinary prime $p$ is small enough, this is a computation that can be carried out in SAGE. As a case study, in Appendix \ref{appendix} we carry out this computation for the family of elliptic curves $\{y^2=x^3-2+7n:n\in\mathbb{Z}\xspace\}$ and the prime $p=7$. \end{rem} \begin{rem} An interesting case to consider next is the following. Start with a pair $(E,p)$ that satisfies the assumptions of \autoref{mainmain1} with $F_0=\mathbb{Q}\xspace$ and consider extensions $M/\mathbb{Q}\xspace$ of degree $g<p-1$ such that $p$ splits completely in $M$.
Then we still have a vanishing $\Br(X_{L\cdot M})\{p\}/\Br_1(X_{L\cdot M})\{p\}=0$, where $L=K(E[\pi])$. An analog of \autoref{mainmain1} gives \[\prod_{v|p}F^2(X_{(LM)_v})_{\nd}\simeq(\mathbb{Z}\xspace/p)^{2g}.\] We see that as $g$ gets larger, it becomes increasingly harder to find concrete criteria that guarantee the lifting of this group to global $0$-cycles. For example, consider the curve $E$ given by the Weierstrass equation $y^2=x^3+2$ and the prime $p=61$. Then $p$ splits completely over the degree six extension $\mathbb{Q}\xspace(\alpha)$, where $\alpha$ is a root of the polynomial $x^6-x^4+x^2-3x+3$. \end{rem} The key ingredient in the proof of \autoref{mainmain2} was that the two homomorphisms \begin{eqnarray*} f:\mathcal{D}^1/\mathcal{D}^2\rightarrow\mathbb{Z}\xspace/p, && g:\mathcal{D}^{p-1}/\mathcal{D}^p\rightarrow\mathbb{Z}\xspace/p\\ \;\;\;\; x\mapsto\{x,\widehat{P}_v\}_{L_v/L_v}, && \;\;\;\;y\mapsto\{A_v,y\}_{L_v/L_v} \end{eqnarray*} are isomorphisms and this was because the residue field of $K(E[\pi])$ is only $\mathbb{F}\xspace_p$. A similar method can be extended to many more situations. For example the following corollary describes a generalization to when $L=F_0(E[\pi])$ with $F_0/\mathbb{Q}\xspace$ a quadratic extension. \begin{cor}\label{quadraticcase} Let $(E,p)$ be a pair that satisfies the assumptions of \autoref{mainmain1}. Suppose that $p\nmid|\overline{E_p}(\mathbb{F}\xspace_p)|$, but $p||\overline{E_p}(\mathbb{F}\xspace_{p^2})|$. Let $F_0/\mathbb{Q}\xspace$ be a quadratic extension having a unique inert place $w_0$ above $p$. Set $k=F_0\cdot\mathbb{Q}\xspace_p$. Suppose there exists a point $P\in E(F_0)\setminus E(\mathbb{Q}\xspace)$ of infinite order such that the induced local point $P_k\in E(k)$ has the property $\widehat{P}_k\in\widehat{E_k}(k)/p$ is nontrivial. Then the global $p$-torsion $0$-cycles $z_1,z_2\in F^2(X_L)$ induced by $A, P$ and $A,\omega(P)$ respectively lift the group $\displaystyle\prod_{v|p}F^2(X_{L_v})_{\nd}\simeq\mathbb{Z}\xspace/p\oplus\mathbb{Z}\xspace/p$. \end{cor} \begin{proof} The proof is essentially the same as in \autoref{mainmain2}. In this case we have an isomoprhism of $\mathbb{F}\xspace_p$-vector spaces (cf. \cite[Lemma 2.1.4]{kawachi2002}) \[\overline{U}_{L_v}^{p-1}/\overline{U}_{L_v}^{p}\simeq\mathcal{D}_{L_v}^{p-1}/\mathcal{D}_{L_v}^p\xrightarrow{\simeq}\mathbb{F}\xspace_{p^2}\simeq\mathbb{Z}\xspace/p\oplus\mathbb{Z}\xspace/p.\] It follows by \cite[V.3, Corollary 7]{Serre1979local} that there exists some $B\in \mathcal{D}_{L_v}^{p-1}/\mathcal{D}_{L_v}^p$ such that $\displaystyle(\delta(A_v),\delta(B))_p\neq 0$. We will show that we can take $B=(P_k)_v$. Consider the extension $F=K(E[\pi])$ and denote by $v_0$ the unique place of $F$ below $v$. Since $p\nmid|\overline{E_p}(\mathbb{F}\xspace_p)|$, we have a proper inclusion $F\subsetneq L$. It follows by \autoref{minimality} that $K_2(F\cdot\mathbb{Q}\xspace_p;E_{F\mathbb{Q}\xspace_p})/p=0$. This implies that for every $C\in \mathcal{D}_{L_v}^{p-1}/\mathcal{D}_{L_v}^p$ which is the restriction of some element defined over $F_{v_0}$, it follows $(\delta(A_v),\delta(C))_p=0$. Since $\overline{U}_{L_v}^{p-1}/\overline{U}_{L_v}^{p}$ is two dimensional over $\mathbb{F}\xspace_p$, every element $B$ that is not in the image of the restriction map $\res_{L_v/F_{v_0}}$ gives rise to a nontrivial symbol. 
The assumption $P\in E(F_0)\setminus E(\mathbb{Q}\xspace)$ implies that we can take $B=(P_k)_v$, which yields the desired nonvanishing $\displaystyle(\delta(A_v),(P_k)_v)_p\neq 0$. The rest of the proof of \autoref{mainmain2} carries through to show that the $0$-cycles $z_1=[A,P_k]-[A,O]-[O,P_k]+[O,O]$ and $z_2=[A,\omega(P_k)]-[A,O]-[O,\omega(P_k)]+[O,O]$ are $\mathbb{F}\xspace_p$-linearly independent. The only difference is that we have to consider also the case when $K=\mathbb{Q}\xspace(\sqrt{D})$ with $D\not\equiv 1\mod 4$ (see \autoref{Zi} for examples of elliptic curves with potential CM by $\mathbb{Z}\xspace[i]$ which satisfy the assumptions of the corollary). The computation of linear independence is in fact simpler in this case. The minimal polynomial of $\omega$ is $\mu(x)=x^2-D$, giving $\omega(P)=-\overline{\omega}(P)$. Since $p>2$, it is clear that the elements $\widehat{\omega(P)}_v,\widehat{\overline{\omega}(P)}_v$ induce distinct elements of $\mathcal{D}^{p-1}/\mathcal{D}^p$. \end{proof} \subsection*{Extensions}\label{nogoodpoint} We close this article by discussing alternative criteria of lifting when a ``good" rational point does not exist. We focus on the simplest case considered in \autoref{mainmain2}, namely when $|\overline{E_p}(\mathbb{F}\xspace_p)|=p$. In this case the point $A\in E[\pi]$ can no longer be used to produce global $0$-cycles. One needs to find instead two ``good" points $P,Q\in E(L)$ such that the induced symbol $\{\widehat{P}_v,\widehat{Q}_v\}_{L_v/L_v}\neq 0$. To look for such points we need to start with points of infinite order living in intermediate extensions $\mathbb{Q}\xspace\subsetneq F\subseteq L=K(E[\pi])$. In the following paragraphs we develop this idea. Suppose we have a local point $B_v\in \widehat{E}(L_v)$ such that $B_v\in \widehat{E}^l(L_v)\setminus\widehat{E}^{l+1}(L_v)$ for some integer $0<l<p$. Then it follows by \cite[V.3, Corollaries 2 \& 3]{Serre1979local} that the map \[\mathcal{D}^{p-l}/\mathcal{D}^{p-l+1}\rightarrow\mathbb{Z}\xspace/p,\;\;\; x\mapsto\{x,B_v\}_{L_v/L_v}\] is an isomorphism. Thus, if we can find intermediate extensions $K\subset M, F\subset L$ and points $P\in E(M), Q\in E(F)$ of infinite order such that the induced local formal points $\widehat{P}_v, \widehat{Q}_v$ lie in $\widehat{E}^{l}(L_v)\setminus\widehat{E}^{l+1}(L_v),$ and $ \widehat{E}^{p-l}(L_v)\setminus\widehat{E}^{p-l+1}(L_v)$ respectively, then the symbol $\{\widehat{P}_v,\widehat{Q}_v\}_{L_v/L_v}$ is nonvanishing in $ K_2(L_v;E_v)/p$. Then we can produce a second $\mathbb{F}\xspace_p$-linearly independent symbol using the complex multiplication, as we did in the proof of \autoref{mainmain2}. We next describe a potential algorithm to compute such ``good points" $P,Q$ in a particular example. A similar method can be used more generally, but the algorithm will require more steps. \subsection*{An extended example} To illustrate our method, we consider the case when the quadratic imaginary field $K=\mathbb{Q}\xspace(\zeta_3)$, and $p=7$. We consider the family of elliptic curves \[\{E_n:y^2=x^3-2+7n,\;n\in\mathbb{Z}\xspace\},\] so that the pair $(7, E_n)$ satisfies the assumptions of \autoref{mainmain1}, for all $n\in\mathbb{Z}\xspace$. Suppose that for a certain value of $n$ the condition of \autoref{mainmain2} is not satisfied. That is, there is no good point $P\in E_n(\mathbb{Q}\xspace)$. For simplicity we will write $E=E_n$. 
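Before analysing this example further, we note that the verification described in Remark \ref{computations} can be prototyped in Sage. The sketch below, written for the family $E_n: y^2=x^3-2+7n$ and $p=7$, is purely illustrative: the function name, the working precision and the way the local $7$-torsion point is obtained are assumptions made here for illustration, and the sketch does not describe the script used in Appendix \ref{appendix}.
\begin{verbatim}
# Illustrative Sage sketch of steps (1)-(5) of Remark [computations] for p = 7
# and the family E_n : y^2 = x^3 - 2 + 7n.  Function name, precision and the
# construction of the local 7-torsion point are assumptions of this sketch.
def has_good_point(n, prec=30):
    p = 7
    a = -2 + 7*n
    E = EllipticCurve(QQ, [0, a])          # y^2 = x^3 + a
    if E.rank() != 1:
        return None
    P = E.gens()[0]                        # step (1): point of infinite order
    K = Qp(p, prec)
    EK = E.change_ring(K)
    PK = EK(P[0], P[1])                    # step (2): induced local point
    # step (3): a nonzero 7-torsion point over Q_7, via the quadratic factor
    # 7x^2 - 4ax + 16a^2 of the 7-division polynomial (cf. the appendix)
    R.<x> = K[]
    theta = [r for r, _ in (7*x^2 - 4*a*x + 16*a^2).roots()
             if r.valuation() == 0 and r.residue() == 6][0]
    x0 = (x^3 - theta).roots()[0][0]
    T0 = EK(x0, (x0^3 + a).sqrt())
    # steps (4)-(5): look for lambda with v_7(x(PK - lambda*T0)) equal to -2
    return any((PK - lam*T0)[0].valuation() == -2 for lam in range(p))
\end{verbatim}
The intention of the sketch is to return \texttt{True} when a generator of the free part of $E_n(\mathbb{Q}\xspace)$ is a ``good'' point in the sense of \autoref{mainmain2}.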
The Galois group $G=\mathrm{Gal}(L/K)$ is cyclic of order $6$, and hence there exist unique intermediate subfields $K\subset M, F\subset L$ such that $[M:K]=2$ and $[F:K]=3$. Moreover, the extension $L/K$ is totally ramified at $\mathfrak{p},\overline{\mathfrak{p}}$. This implies that there exist unique places $s, \overline{s}$ of $M$ lying over $\mathfrak{p}, \overline{\mathfrak{p}}$ respectively, and the extension $M_s/\mathbb{Q}\xspace_p$ is totally ramified of degree $2$. Similarly there exist unique places $t, \overline{t}$ of $F$ over $\mathfrak{p}, \overline{\mathfrak{p}}$ with $F_t/\mathbb{Q}\xspace_p$ totally ramified of degree $3$. Suppose we can find global points $B\in E(M)$, $C\in E(F)$ such that $\widehat{B}_{s}\in\widehat{E}^1(M_s)\setminus \widehat{E}^2(M_s)$, and $\widehat{C}_t\in\widehat{E}^1(F_t)\setminus \widehat{E}^2(F_t)$. Their restrictions $\widehat{B}_{v}, \widehat{C}_{v}$ in $\widehat{E}(L_v)$ lie in $\widehat{E}^3(L_v)\setminus \widehat{E}^4(L_v)$ and $\widehat{E}^2(L_v)\setminus \widehat{E}^3(L_v)$ respectively. Next consider the points $\omega(C), \omega^2(C)\in E(F)$. Similarly to the proof of \autoref{mainmain2}, at least one of these points induces a formal point in $\widehat{E}^2(L_v)\setminus \widehat{E}^3(L_v)$. Call this point $C'$. Then there exists some scalar $m\in\mathbb{F}\xspace_7^\times$ such that $\widehat{C}'_v\equiv m\widehat{C}_v\mod \mathcal{D}^3$. This is equivalent to saying that the $x$-coordinate of the formal point $\widehat{C}'_t-m\widehat{C}_t$ has strictly smaller valuation than that of $\widehat{C}_t$. If it happens that $\widehat{C}'_t-m\widehat{C}_t\in \widehat{E}^2(F_t)\setminus \widehat{E}^3(F_t)$, then the point $Q=C'-mC$ is a ``good'' point. That is, $\widehat{Q}_v\in \widehat{E}^4(L_v)\setminus \widehat{E}^5(L_v)$, and since $3+4=7$, our earlier discussion shows that $\{B_v,Q_v\}_{L_v/L_v}\neq 0\in K_2(L_v;E)/p$. \appendix \section{Computations with local points, by Angelos Koutsianas}\label{appendix} In this appendix we give more details about the computations we have done in order to verify the existence of a ``good'' rational point $P$, as required by Theorem \ref{mainmain2}, for the family of elliptic curves $$ \{E_n:y^2 = x^3 - 2 + 7n,~n\in\mathbb{Z}\xspace\}. $$ Our computations are based on Remark \ref{computations}. We consider the cases $n\in[-5000, 5000]$ with $p=7$ and $\rk(E_n(\mathbb{Q}\xspace))=1$. For a fixed value of $n$ we define $a=-2+7n$. We recall that $E_n$ has complex multiplication by the ring of integers $\mathcal{O}_K$ of $K=\mathbb{Q}\xspace(\sqrt{-3})$. It follows that $p=7$ splits in $\mathcal{O}_K$ as $7=\pi\overline{\pi}$, where $\pi=\frac{1+3\sqrt{-3}}{2}$ and $\overline{\pi}$ is the complex conjugate of $\pi$. For the computations we need an explicit description of $E_n[p](\mathbb{Q}\xspace_p)$. \begin{lem}\label{lem:torsion_group} With the above notation, we have $$ E_n[7](\mathbb{Q}\xspace_7)=\{O\}\cup\{(\sqrt[3]{\theta}\zeta_3^i, \pm\sqrt{\theta + a}):~i=0,1,2\}, $$ where $\theta$ is the root of $f(x) = 7x^2 - 4ax + 16a^2$ such that $\theta\equiv 6\pmod{7}$. \end{lem} \begin{proof} A symbolic computation shows that the $7$-division polynomial of $E_n$ is given by $$ f_7(x) = (7x^6 - 4ax^3 + 16a^2)(x^{18} + 564ax^{15} - 5808a^2x^{12} - 123136a^3x^9 - 189696a^4x^6 - 49152a^5x^3 + 4096a^6). $$ It follows by Example \ref{ex1} that $|\overline{E_{n,p}}(\mathbb{F}\xspace_7)|=7$.
Moreover, the leading coefficient of $f_7$ is divisible by $7$ and thus by \cite[Theorem VII.3.4]{Silverman2009} we conclude that $\#E_n[7](\mathbb{Q}\xspace_7)=7$. Therefore, it is enough to show that the above points have order $7$ and coordinates in $\mathbb{Q}\xspace_7$. Let $g(x)=7x^2 - 4ax + 16a^2$. Then one checks easily that the roots of $g$ are $$ \theta_{1,2}=2a\frac{1\pm 3\sqrt{-3}}{7}. $$ Suppose we embed $\sqrt{-3}$ into $\mathbb{Q}\xspace_7$ such that $\sqrt{-3}\equiv 2 + 5\cdot 7 + 6\cdot 7^3 + O(7^4)$. As a result, we get $\theta_{1}=2a(2 + 2\cdot 7 + 4\cdot 7^2 + O(7^3))$ and $\theta_2 = 2a(\frac{2}{7} + 5 + 4\cdot 7 + 2\cdot 7^2 + O(7^3))$. Because $v_7(\theta_2)=-1$, we see that $\sqrt[3]{\theta_2}\not\in\mathbb{Q}\xspace_7$. Let $\theta=\theta_1$. Because $\theta\equiv 6\pmod{7}$ is a cube in $\mathbb{F}\xspace_7$, Hensel's lemma implies that the polynomial $x^3 - \theta$ has a root in $\mathbb{Q}\xspace_7$. Furthermore, because $\zeta_3\in\mathbb{Q}\xspace_7$ we conclude that $\sqrt[3]{\theta}\zeta_3^i\in\mathbb{Q}\xspace_7$ for all $i=0,1,2$. Let $x_i=\sqrt[3]{\theta}\zeta_3^i$ for $i=0,1,2$. To finish the proof, it suffices to show that the polynomial $$ x^2 - (x_i^3 + a)=x^2 - (\theta + a) $$ has both of its roots in $\mathbb{Q}\xspace_7$. Because $\theta + a\equiv 4\pmod{7}$ is a square in $\mathbb{F}\xspace_7$, Hensel's lemma yields the conclusion. \end{proof} Having determined the set $E_n[7](\mathbb{Q}\xspace_7)$, we follow the steps in Remark \ref{computations}. We focus on the curves $E_n$ with rank $1$. The most computationally expensive part is the determination of the generator $P$ of the free part of $E_n(\mathbb{Q}\xspace)$. In order to speed up the computations we assume the finiteness of the Tate--Shafarevich group. \begin{theo}\label{computationsthm} Let $n\in [-5000, 5000]$. Under the assumption of the finiteness of the Tate--Shafarevich group of $E_n$, approximately $86.68\%$ of the rank-one elliptic curves $E_n$ satisfy the hypothesis of \autoref{mainmain2}; in other words, there exists a ``good'' rational point $P$. \end{theo} \begin{proof} We have written a Sage \cite{sagemath} script\footnote{See the function \textit{rank\_one\_elliptic\_curves}.} that does all the computations we have described in Remark \ref{computations} and Lemma \ref{lem:torsion_group}. It is important to mention that there are $176$ values of $n$ for which we are not able to compute the generator of the free part of $E_n(\mathbb{Q}\xspace)$. For the calculation of the above percentage we do not consider these $176$ curves. The total amount of time for the computations was approximately $31$ minutes on a regular personal computer. \end{proof} The code can be found at \begin{center} \texttt{https://github.com/akoutsianas/local\_global\_0\_cycles} \end{center} \end{document}
\begin{document} \begin{frontmatter} \title{Change-point detection in panel data via double CUSUM statistic} \runtitle{Change-point detection in panel data} \author{\fnms{Haeran} \snm{Cho}\corref{}\ead[label=e1]{[email protected]}} \address{School of Mathematics, University of Bristol, UK.} \runauthor{H. Cho} \begin{abstract} In this paper, we consider the problem of (multiple) change-point detection in panel data. We propose the double CUSUM statistic which utilises the cross-sectional change-point structure by examining the cumulative sums of ordered CUSUMs at each point. The efficiency of the proposed change-point test is studied, which is reflected in the rate at which the cross-sectional size of a change is permitted to converge to zero while it is still detectable. Also, the consistency of the proposed change-point detection procedure, which is based on the binary segmentation algorithm, is established in terms of both the total number and locations (in time) of the estimated change-points. Motivated by the representation properties of the Generalised Dynamic Factor Model, we propose a bootstrap procedure for test criterion selection, which accounts for both cross-sectional and within-series correlations in high-dimensional data. The empirical performance of the double CUSUM statistics, equipped with the proposed bootstrap scheme, is investigated in a comparative simulation study with the state-of-the-art. As an application, we analyse the log returns of S\&P 100 component stock prices over a period of one year. \end{abstract} \begin{keyword} \kwd{change-point analysis} \kwd{high-dimensional data analysis} \kwd{CUSUM statistics} \kwd{binary segmentation} \end{keyword} \received{\smonth{11} \syear{2015}} \end{frontmatter} \section{Introduction} \label{sec:intro} Multivariate, possibly high-dimensional observations over time have emerged in many fields, such as economics, finance, natural science, engineering and humanities, thanks to the advances of computing technologies \citep{fan2011}. Multivariate data observed in practical problems often appear nonstationary in the sense that it is natural to let some quantities or parameters involved in the model be time-varying. Arguably, the simplest departure from assuming stationarity is to operate under the assumption of piecewise stationarity, which allows more flexibility as well as providing interesting insights into the data with regards to the structural change-points. Besides, in the case of time series analysis, it enables (short-term) prediction of the future process values, by treating the last estimated segment as being stationary. Throughout the paper, the term ``multiple change-point detection'' is used interchangeably with ``segmentation''. Panel data models are frequently adopted to analyse high-dimensional data involving measurements over time. In this paper, we focus on the problem of detecting (possibly) multiple change-points in the mean of panel data, where $n$, the dimensionality of the data, may increase with the number of observations $T$. The panel model is presented as \begin{eqnarray*} x_{j, t} = f_{j, t} + \varepsilon_{j, t}, \qquad t=1, \ldots, T; \ j=1, \ldots, n, \end{eqnarray*} where $\{f_{j, t}\}_{t=1}^T, \, j=1, \ldots, n$ are piecewise constant signals which share an unknown number of change-points at unknown locations. CUSUM statistics have been widely adopted for segmenting both univariate and multivariate data.
For univariate data segmentation, CUSUM statistics are computed over time, and this series of CUSUMs is examined to locate a change-point, often as the location where its maximum in absolute value is attained. Combined with a binary segmentation (BS) algorithm, the CUSUM statistics can consistently detect multiple change-points in a recursive manner (see e.g., \cite{vostrikova1981}, \cite{venkatraman1992} and \cite{cho2012}). For segmenting $n$-dimensional panel data, we may apply the above procedure to each univariate component series separately, and then prune down the estimated change-points by identifying those detected for the identical change-point across the panel. However, such pruning may be difficult to accomplish even in moderately large dimensions, due to the estimation bias present in each change-point estimate. Besides, this approach does not take into account, and thus cannot benefit from, the cross-sectional nature of change-points (that they are shared across the panel), which may lead to loss of power in change-point detection. Instead, we propose to segment the $n$-dimensional data simultaneously by searching for change-points from the {\em aggregation} of $n$ series of CUSUM statistics, rather than from individual CUSUM series separately. \subsection{Literature review} \label{sec:lit} Let $\mathcal{C}_b$ denote a CUSUM operator which takes $x_{j, t}$ over a generic interval $t \in [s, e]$ with $1 \le s < e \le T$ as an input and returns \begin{eqnarray} \mathcal{X}^j_{s, b, e} &=& \mathcal{C}_b(\{\sigma_j^{-1} x_{j, t}\}_{t=s}^e) \nonumber \\ &=& \sqrt{\frac{e-b}{(e-s+1)(b-s+1)}}\sum_{t=s}^b \frac{x_{j, t}}{\sigma_j} - \sqrt{\frac{b-s+1}{(e-s+1)(e-b)}}\sum_{t=b+1}^e \frac{x_{j, t}}{\sigma_j} \nonumber \\ &=& \frac{1}{\sigma_j}\sqrt{\frac{(b-s+1)(e-b)}{e-s+1}}\left( \frac{1}{b-s+1}\sum_{t=s}^b x_{j, t} - \frac{1}{e-b}\sum_{t=b+1}^e x_{j, t} \right) \label{eq:cusum:two} \end{eqnarray} for $b=s, \ldots, e-1$, with a suitably chosen scaling constant $\sigma_j$. Assuming the presence of at most one change-point, some change-point tests for panel data have been proposed, based on the principle of high-dimensional CUSUM series aggregation. Note that for single change-point detection, $s=1$ and $e=T$. \cite{zhang2010} considered a change-point test with the test statistic $\mathcal{T}^{\mbox{\tiny{ZSJL}}}_{1, T} = \max_{b \in [1, T)}\sum_{j=1}^n(\mathcal{X}^j_{1, b, T})^2$ and the test criterion derived under an i.i.d. Gaussian setting. A similar change-point statistic was considered by \cite{horvath2012}, \begin{eqnarray*} \mathcal{T}^{\mbox{\tiny{H}}}_{1, T} = \max_{b \in [1, T)} \frac{1}{\sqrt{n}}\frac{b(T-b)}{T^2}\sum_{j=1}^n\{(\mathcal{X}^j_{1, b, T})^2 - 1\}, \end{eqnarray*} and the limit distribution of the test statistic was derived for independent panel data. \cite{enikeeva2014} proposed a change-point test for panel data with i.i.d. Gaussian noise that combines the {\em linear} statistic, which is constructed similarly to $\mathcal{T}^{\mbox{\tiny{H}}}_{1, T}$, and the {\em scan} statistic, which is aimed at detecting a cross-sectionally sparse change.
More specifically, the test statistics are \begin{eqnarray} \mathcal{T}^{\mbox{\tiny{lin}}}_{1, T} &=& \max_{b \in [1, T)} \frac{1}{H\sqrt{2n}} \sum_{j=1}^n\{(\mathcal{X}^j_{1, b, T})^2 - 1\}, \mbox{ and } \nonumber \\ \mathcal{T}^{\mbox{\tiny{scan}}}_{1, T} &=& \max_{b \in [1, T)} \max_{1 \le m \le n} \frac{1}{T_m\sqrt{2m}} \sum_{j=1}^m\{(\mathcal{X}^{(j)}_{1, b, T})^2 - 1\}, \nonumber \end{eqnarray} where $|\mathcal{X}^{(1)}_{1, b, T}| \ge \ldots \ge |\mathcal{X}^{(n)}_{1, b, T}|$ denote the ordered CUSUM statistics at each $b \in [1, T)$. With $H$ and $T_m$ acting as critical values (chosen as approximate quantiles of $\chi^2$-distributions), their proposed change-point test is $\mathcal{T}^{\mbox{\tiny{EH}}} = (\mathcal{T}^{\mbox{\tiny{lin}}}_{1, T} > 1) \vee (\mathcal{T}^{\mbox{\tiny{scan}}}_{1, T} > 1)$ (where $a \vee b = \max(a, b)$). Allowing for both temporal and cross-sectional dependence, \cite{jirak2014} proposed a test statistic obtained from taking the pointwise maximum of the multiple CUSUM series: \begin{eqnarray} \mathcal{T}^{\mbox{\tiny{J}}}_{1, T} = \max_{b \in [1, T)}\max_{1 \le j \le n}\sqrt{\frac{b(T-b)}{T}}|\mathcal{X}^j_{1, b, T}|, \label{eq:jirak} \end{eqnarray} which is compared against a threshold drawn from an extreme value distribution of Gumbel type or from a bootstrap procedure. Note that with the exception of $\mathcal{T}^{\mbox{\tiny{scan}}}_{1, T}$, the CUSUM aggregation methods above are not adaptive to the underlying structure of CUSUM statistic values at each $b$, in the sense that they take either the pointwise maximum or the sum of (squared) CUSUMs. Empirical studies conducted in \cite{cho2015} showed that such approaches may lead to inferior performance in detecting and locating change-points in high-dimensional settings. Instead, they proposed the Sparsified Binary Segmentation (SBS) where the change-point test $\mathcal{T}^{\mbox{\tiny{S}}}_{1, T}(\pi_T) > 0$ was based on the following ``sparsified'' or ``thresholded'' test statistic \begin{eqnarray*} \mathcal{T}^{\mbox{\tiny{S}}}_{1, T}(\pi_T) = \max_{b \in [1, T)} \sum_{j=1}^n |\mathcal{X}^j_{1, b, T}| \cdot \mathbb{I}(|\mathcal{X}^j_{1, b, T}| > \pi_T) \end{eqnarray*} ($\mathbb{I}(\mathcal{E}) = 1$ if and only if the event $\mathcal{E}$ is true), with an appropriately bounded threshold $\pi_T$ chosen to guarantee that $|\mathcal{X}^j_{s, b, e}| < \pi_T$ uniformly over $j \in \{1, \ldots, n\}$ and $\{(s, b, e); \, 1 \le s \le b < e \le T\}$ with probability converging to one under the null hypothesis of no change-point. The intuition behind the construction of $\mathcal{T}^{\mbox{\tiny{S}}}_{1, T}(\pi_T)$ is that irrelevant contributions from those components without any change are reflected as small values of $|\mathcal{X}^j_{1, b, T}|$ and hence are disregarded through the thresholding step, while large CUSUMs formed in the vicinity of the true change-points are summed up to strengthen the meaningful contribution from the corresponding components. Combining the BS procedure with the thresholded test statistic, the consistency of the SBS algorithm for multiple change-point detection was established. However, it is empirically observed that the higher the autocorrelations in $\varepsilon_{j, t}$ are, the larger $\pi_T$ is required to guarantee consistency in change-point detection, which in effect amounts to selecting $n$ thresholds for segmenting $n$-dimensional panel data.
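For concreteness, all of the statistics reviewed above operate on the component-wise CUSUM series $\mathcal{X}^j_{1, b, T}$ of (\ref{eq:cusum:two}), which can be computed in a vectorised manner. The following Python/NumPy sketch is included for illustration only; in particular, the MAD-based choice of the scaling constants $\sigma_j$ is an assumption of the sketch rather than a prescription of any of the cited proposals.
\begin{verbatim}
import numpy as np

def cusum_series(x, sigma=None):
    """CUSUM statistics X^j_{1,b,T} over the full interval, following the
    third expression of the CUSUM definition; x is an (n, T) array and the
    output is an (n, T-1) array whose (j, b-1) entry equals X^j_{1,b,T}.
    The MAD-of-differences scale estimate below is illustrative only."""
    n, T = x.shape
    if sigma is None:
        sigma = np.median(np.abs(np.diff(x, axis=1)), axis=1) / (0.6745 * np.sqrt(2))
    z = x / sigma[:, None]
    b = np.arange(1, T)                   # candidate change-points b = 1, ..., T-1
    left = np.cumsum(z, axis=1)[:, :-1]   # partial sums over t = 1, ..., b
    left_mean = left / b
    right_mean = (z.sum(axis=1)[:, None] - left) / (T - b)
    return np.sqrt(b * (T - b) / T) * (left_mean - right_mean)
\end{verbatim}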
\subsection{Outline of the paper} \label{sec:rest} In this paper, we propose the {\em double} CUSUM statistic which accomplishes the high-dimensional CUSUM series aggregation through data-driven partitioning of the panel data at each point, while avoiding the difficulties involved in selecting (possibly $n$) thresholds that apply directly to individual CUSUMs for establishing change-point detection consistency. The rest of the paper is organised as follows. We describe the double CUSUM statistic in detail in Section \ref{sec:method}. In Section \ref{sec:theor}, we establish the consistency of the change-point test based on the double CUSUM statistics, as well as investigating its efficiency in comparison with the tests discussed in Section \ref{sec:lit}. Also, the double CUSUM Binary Segmentation algorithm is formulated and its consistency in multiple change-point detection is studied. Section \ref{sec:choice} discusses the choice of important quantities including the test criterion, for which a bootstrap procedure is introduced. We illustrate its performance on simulated datasets in Section \ref{sec:sim} and on log returns of S\&P 100 component stock prices in Section \ref{sec:real}. Section \ref{sec:conc} concludes the paper, and the proofs of theoretical results are provided in Section \ref{sec:pf}. Finally, some auxiliary results and simulation results are reported in the supplementary document \citep{cho2016}. \section{Double CUSUM statistic} \label{sec:method} Recall the panel data model from the Introduction \begin{eqnarray} \label{eq:panel} x_{j, t} = f_{j, t} + \varepsilon_{j, t}, \quad t=1, \ldots, T; \ j=1, \ldots, n. \end{eqnarray} The noise $\{\varepsilon_{j, t}\}_{t=1}^T$ satisfies $\mathbb{E}(\varepsilon_{j, t}) = 0$ for all $j$ and $t$, and is allowed to be correlated both within-series and cross-sectionally as specified later in Section \ref{sec:theor}. The piecewise constant signals $\{f_{j, t}\}_{t=1}^T, \, j=1, \ldots, n$ share $N$ change-points $1<\eta_1 < \ldots < \eta_N<T$ (possibly with unknown $N$). That is, at each change-point $\eta_r$, there exists an index set $\Pi_r = \{j:\, \delta_{j, r} = f_{j, \eta_r+1} - f_{j, \eta_r} \ne 0\} \subset \{1, \ldots, n\}$ with $m_r = |\Pi_r| = \sum_{j=1}^n \mathbb{I}(|\delta_{j, r}|>0) \ge 1$ (where $|\mathcal{S}|$ denotes the cardinality of a set $\mathcal{S}$). Recall the definition of $\mathcal{X}^j_{s, b, e}$ in (\ref{eq:cusum:two}), which denotes the CUSUM statistic computed on $x_{j, t}$ over a generic interval $t \in [s, e]$ with $1 \le s < e \le T$, for $b = s, \ldots, e-1$. For change-point detection in panel data, we propose to employ the double CUSUM (DC) statistics \begin{eqnarray} && \mathcal{D}_m^\varphi(\{|\mathcal{X}^{(j)}_{s, b, e}|\}_{j=1}^n) \nonumber \\ &=& \left\{\frac{m(2n-m)}{2n}\right\}^\varphi \left(\frac{1}{m} \sum_{j=1}^m |\mathcal{X}^{(j)}_{s, b, e}| - \frac{1}{2n-m} \sum_{j=m+1}^n |\mathcal{X}^{(j)}_{s, b, e}|\right) \label{double:cusum:one} \\ &=& \left\{\frac{m(2n-m)}{2n}\right\}^\varphi \cdot \frac{1}{m} \sum_{j=1}^m \left(|\mathcal{X}^{(j)}_{s, b, e}| - \frac{1}{2n-m}\sum_{j=m+1}^n |\mathcal{X}^{(j)}_{s, b, e}|\right) \label{double:cusum:two} \end{eqnarray} for $b \in [s, e)$ and $m \in \{1, \ldots, n\}$, where the DC operator $\mathcal{D}^\varphi_m$ takes the {\em ordered} CUSUM values $|\mathcal{X}^{(1)}_{s, b, e}| \ge |\mathcal{X}^{(2)}_{s, b, e}| \ge \ldots \ge |\mathcal{X}^{(n)}_{s, b, e}|$ at each $b$, as its input for some $\varphi\in[0, 1]$.
Then, the test statistic for detecting the presence of a change-point over a given interval $[s, e]$ is derived as \begin{eqnarray*} \mathcal{T}^\varphi_{s, e} = \max_{b\in[s, e)}\max_{1 \le m \le n} \mathcal{D}_m^\varphi(\{|\mathcal{X}^{(j)}_{s, b, e}|\}_{j=1}^n), \end{eqnarray*} which is compared against a test criterion $\pi^\varphi_{n, T}$. Once $\mathcal{T}^\varphi_{s, e} > \pi^\varphi_{n, T}$, the location of the change-point is identified as where the pointwise maximum (over $m$) of the DC statistics is maximised (over $b$), i.e., $$ \widehat{\eta} = \arg\max_{b\in[s, e)}\max_{1 \le m \le n} \mathcal{D}_m^\varphi(\{|\mathcal{X}^{(j)}_{s, b, e}|\}_{j=1}^n). $$ To understand the properties of the DC statistic, first consider the case where the noise $\varepsilon_{j, t}$ is not present in the panel data. Then the series of DC statistics at each fixed $m$ is always maximised at one of the true change-points within the interval $[s, e)$ and consequently, the maximum over both $b$ and $m$ is guaranteed to be attained at a true change-point. The formal statement and its proof can be found in Appendix B of the supplementary document. The key feature of the DC statistic is the ordering of the input series to $\mathcal{D}^\varphi_m$. In panel data segmentation, in addition to detecting and locating the change-points in time, it is also of interest to locate the change in coordinates, which is relatively unexplored in the relevant literature with the exception of \cite{jirak2014}. Not only is such information useful in the interpretation of detected change-points, but it can also play an important role in aggregating the high-dimensional CUSUM series efficiently, as detailed below. At a given time point $b$, one way of partitioning the components into those with changes and those without is to arrange the moduli of the CUSUMs in decreasing order, and then to label the components which correspond to the first $m_b$ ($\in \{1, \ldots, n\}$) largest values of $|\mathcal{X}^{j}_{s, b, e}|$ as being likely to have a change-point around $b$. Note that aggregating the (squared) CUSUMs via pointwise averaging or maximising implicitly takes $m_b = n$ or $m_b = 1$, respectively. In constructing $\mathcal{T}^{\mbox{\tiny{S}}}(\pi_T)$, the choice of $m_b$ is associated with the choice of $\pi_T$, i.e., $m_b(\pi_T) = \sum_{j=1}^n\mathbb{I}(|\mathcal{X}^j_{s, b, e}| > \pi_T)$. The DC statistic provides a data-driven alternative for selecting $m_b$, namely \begin{eqnarray} \widehat{m}^\varphi_b = \arg\max_{1 \le m \le n} \mathcal{D}^\varphi_m(\{|\mathcal{X}^{(j)}_{s, b, e}|\}_{j=1}^n). \label{eq:wh:m} \end{eqnarray} While we do not claim that the thus-obtained partitioning consistently identifies $\Pi_r$ at each detected change-point, we see in the simulation studies reported in Section \ref{sec:sim} that the DC statistic performs well in change-point detection, by identifying those components that {\em contribute} to change-point detection according to the above $\widehat{m}^\varphi_b$. The RHS of (\ref{double:cusum:two}) indicates that the term $(2n-m)^{-1} \sum_{j=m+1}^n |\mathcal{X}^{(j)}_{s, b, e}|$ acts as a threshold on $|\mathcal{X}^{(j)}_{s, b, e}|, \ j=1, \ldots, m$ at each $b$, and therefore $\mathcal{D}_m^\varphi(\{|\mathcal{X}^{(j)}_{s, b, e}|\}_{j=1}^n)$ is essentially a scaled average of the $m$ largest CUSUMs after soft-thresholding, from which we draw a resemblance between $\mathcal{T}^\varphi_{s, e}$ and $\mathcal{T}^{\mbox{\tiny{S}}}_{s, e}(\pi_T)$.
However, since $\mathcal{T}^\varphi_{s, e}$ involves maximisation over both $b$ and $m$, we avoid explicitly selecting a threshold $\pi_T$ that applies directly to individual CUSUMs, while still enjoying the ``sparsifying'' effect of the thresholding step. Comparing (\ref{eq:cusum:two}) and (\ref{double:cusum:one}), we can see the connection between the two operators $\mathcal{D}_m^\varphi$ and $\mathcal{C}_b$ especially when $\varphi= 1/2$, as both return the scaled difference between partial averages over two disjoint intervals. However, $\mathcal{D}_m^\varphi$ involves $\{m(2n-m)/(2n)\}^\varphi$ instead of $\{m(n-m)/n\}^\varphi$, although the input series is of length $n$. This difference comes from the observation that the latter scaling factor may not favour a change-point that is shared by more than $[n/2]$ rows, and may even act as a penalty when $m_r$ is close to $n$, which is against our intuition on the detectability of a change-point. By adopting $\{m(2n-m)/(2n)\}^\varphi$, the DC statistic can be regarded as being computed on the panel data of dimension $2n$, where there are additional $n$ ``null'' components which are known to have no change-point. Since $\{m(2n-m)/(2n)\}^\varphi$ is non-decreasing in $m$, when considering the asymptotic efficiency of the DC statistic-based change-point test, the cross-sectional size of change accounts for both $|\delta_{j, r}|$, the magnitude of jumps at the change-point, and $m_r$, its ``density'' (as opposed to the sparsity) across the panel (see Remark \ref{rem:aston:two} for further discussion). For illustration, we computed the DC statistics from panel data generated with a single change-point of different configurations, with $\varphi$ chosen as detailed in Section \ref{sec:varphi}. Fixing $n = 250$ and $T = 100$, $\varepsilon_{j, t}$ was simulated as in (N1) of Section \ref{sec:sim:single:model} with $\varrho = 0.2$ (which controls the degree of cross-correlations in $\varepsilon_{j, t}$). The piecewise constant signals were generated with a change-point at $\eta_1 = T/2$, where $m_1$ out of $n$ components contained a shift of magnitude randomly drawn from a uniform distribution $\mathcal{U}(0.75\delta_1, 1.25\delta_1)$ with $(m_1, \delta_1) = ([\log\,n], 0.24)$ and $([0.5n], 0.05)$. We chose $m_1$ and $\delta_1$ in order to set $\sum_{j \in \Pi_1} \delta_{j, 1}^2$ at approximately the same level. As shown in Figure \ref{fig:comparison}, the location of the true change-point in time was accurately identified as where the pointwise maximum of the DC statistics was maximised in both settings, i.e., $\widehat{\eta}_1 = \arg\max_{b\in[1, T)}\max_{1 \le m \le n}\mathcal{D}^\varphi_m(\{|\mathcal{X}^{(j)}_{1, b, T}|\}_{j=1}^n)$. Comparing the two heat maps, the index at which $\mathcal{D}^\varphi_m(\{|\mathcal{X}^{(j)}_{1, \widehat{\eta}_1, T}|\}_{j=1}^n)$ was maximised over $m$, namely $\widehat{m}_1 = \arg\max_{1 \le m \le n} \mathcal{D}^\varphi_m(\{|\mathcal{X}^{(j)}_{1, \widehat{\eta}_1, T}|\}_{j=1}^n)$, was closer to $m_1$ for the larger $\delta_1$. This implies that not all components with the changes contribute to the detection of a change-point in the presence of noise, due to the small magnitude of the changes, and using only a subset of $\Pi_1$ may serve the purpose better for change-point detection.
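The quantities displayed in Figure \ref{fig:comparison} can be prototyped along the following lines; the Python/NumPy sketch below, which takes a precomputed array of absolute CUSUM values $|\mathcal{X}^j_{1, b, T}|$ as its input, is purely illustrative and is not the implementation used for the simulation study of Section \ref{sec:sim}.
\begin{verbatim}
import numpy as np

def double_cusum(absX, phi=0.5):
    """Double CUSUM statistics D^phi_m computed from an (n, T-1) array absX
    with absX[j, b-1] = |X^j_{1,b,T}|; returns the (n, T-1) array D with
    D[m-1, b-1] = D^phi_m({|X^(j)_{1,b,T}|}) and the maximisers (eta_hat, m_hat)."""
    n = absX.shape[0]
    Xs = -np.sort(-absX, axis=0)                 # ordered |X^(1)| >= ... >= |X^(n)|
    m = np.arange(1, n + 1)[:, None]             # m = 1, ..., n
    cs = np.cumsum(Xs, axis=0)                   # cs[m-1, :] = sum of the m largest
    top_mean = cs / m                            # m^{-1} sum_{j <= m} |X^(j)|
    rest_mean = (cs[-1:, :] - cs) / (2 * n - m)  # (2n-m)^{-1} sum_{j > m} |X^(j)|
    D = (m * (2 * n - m) / (2 * n)) ** phi * (top_mean - rest_mean)
    m_idx, b_idx = np.unravel_index(np.argmax(D), D.shape)
    return D, b_idx + 1, m_idx + 1               # eta_hat = b_idx + 1, m_hat = m_idx + 1
\end{verbatim}
Plotting \texttt{D} over $(b, m)$ produces heat maps of the kind shown in Figure \ref{fig:comparison}.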
\begin{figure}
\centering
\includegraphics[scale=.25]{comparison1.pdf}
\caption{$(m_1, \delta_1) = ([\log\,n], 0.24)$ (top) and $(m_1, \delta_1) = ([0.5n], 0.05)$ (bottom); the heat map of $\mathcal{D}^\varphi_m(\{|\mathcal{X}^{(j)}_{1, b, T}|\}_{j=1}^n)$ over $b$ ($x$-axis) and $m$ ($y$-axis) (left), the pointwise maximum of $\mathcal{D}^\varphi_m(\{|\mathcal{X}^{(j)}_{1, b, T}|\}_{j=1}^n)$ over $m$ for $b \in [1, T)$, with broken lines indicating $\eta_1$ and dotted ones $\widehat{\eta}_1$ (middle), and $\mathcal{D}^\varphi_m(\{|\mathcal{X}^{(j)}_{1, \boldsymbol{\widehat{\eta}_1}, T}|\}_{j=1}^n), \, 1 \le m \le n$, with broken lines indicating $m_1$ and dotted ones $\widehat{m}_1$ (right).}
\label{fig:comparison}
\end{figure}
\begin{rem}
\label{rem:aston:one}
For high-dimensional change-point analysis, \cite{aston2014} defined the {\em high-dimensional efficiency}, a concept closely related to asymptotic relative efficiency. Let $\boldsymbol{\delta} = (\delta_{1, 1}, \ldots, \delta_{n, 1})^\top$. Then, the high-dimensional efficiency is determined by the rate at which the cross-sectional size of change is allowed to converge to zero ($\Vert \boldsymbol{\delta} \Vert_2 \to 0$) as $T$ and, with it, $n$ increase, such that the power of the change-point test is strictly between the size and one. They further investigated the problem of single change-point detection using a class of change-point statistics obtained by (i) first projecting the panel data with respect to a projection vector $\mathbf{p} \in \mathbb{R}^n$, and (ii) computing the series of CUSUM statistics from the univariate series $\{\langle \mathbf{x}_t, \mathbf{p} \rangle\}_{t=1}^T$, with $\mathbf{x}_t = (x_{1, t}, \ldots, x_{n, t})^\top$. Assuming that $\boldsymbol{\varepsilon}_t = (\varepsilon_{1, t}, \ldots, \varepsilon_{n, t})^\top$ is independent over time and denoting $\Sigma_{\varepsilon} = \mbox{var}(\boldsymbol{\varepsilon}_t)$, the oracle projection vector with the optimal high-dimensional efficiency was obtained by maximising $(\mathbf{p}^\top\Sigma_{\varepsilon}\mathbf{p})^{-1}|\langle \boldsymbol{\delta}, \mathbf{p} \rangle|^2$ with respect to $\mathbf{p}$, as $\mathbf{o} := \Sigma_{\varepsilon}^{-1}\boldsymbol{\delta}$.
The DC statistic at fixed $b$ and $m$ coincides with the CUSUM of a pointwise projection of the panel data. More specifically, $\mathcal{D}^\varphi_m(\{|\mathcal{X}^{(j)}_{1, b, T}|\}_{j=1}^n)$ is associated with the $n$-vector $\mathbf{p}^\varphi_{b, m}$ whose elements are
\begin{eqnarray*}
[\mathbf{p}^\varphi_{b, m}]_j = \left\{\begin{array}{ll}
\mbox{sign}(\mathcal{X}^j_{1, b, T}) \cdot \sigma_j^{-1}\left\{\frac{m(2n-m)}{2n}\right\}^\varphi\frac{1}{m} & \mbox{if } |\mathcal{X}^j_{1, b, T}| \ge |\mathcal{X}^{(m)}_{1, b, T}|, \\
-\mbox{sign}(\mathcal{X}^j_{1, b, T}) \cdot \sigma_j^{-1}\left\{\frac{m(2n-m)}{2n}\right\}^\varphi\frac{1}{2n-m} & \mbox{if } |\mathcal{X}^j_{1, b, T}| < |\mathcal{X}^{(m)}_{1, b, T}|,
\end{array}\right.
\end{eqnarray*}
such that $\mathcal{D}^\varphi_m(\{|\mathcal{X}^{(j)}_{1, b, T}|\}_{j=1}^n) = \mathcal{C}_b(\{\langle \mathbf{x}_t, \mathbf{p}^\varphi_{b, m} \rangle\}_{t=1}^T)$. In light of the previous discussion on the pointwise ordering and partitioning of the panel data in computing the DC statistics, our approach may be viewed as an attempt at mimicking the performance of the oracle projection without prior knowledge of either $\boldsymbol{\delta}$ or $\Sigma_\varepsilon$.
We further investigate the high-dimensional efficiency of the DC statistic in comparison with other competitors and the oracle projection in Section \ref{sec:single}.
\end{rem}
\begin{rem}
\label{rem:ef}
The scan statistic \citep{enikeeva2014} shares with the DC statistic the pointwise maximisation of {\em cumulative sums of the ordered CUSUMs} over $1 \le m \le n$. One key difference is that the former has $(\mathcal{X}^{(j)}_{1, b, T})^2$ in place of $|\mathcal{X}^{(j)}_{1, b, T}|$, with the interpretation of being the marginal log-likelihood ratio at given $b$ and $m$ under the assumption of i.i.d. Gaussian noise. On the other hand, the latter can be seen as a projected change-point statistic, as noted in Remark \ref{rem:aston:one}. Another difference comes from the fact that the thresholds $T_m$ applied to $\mathcal{T}scan_{s, e}$ depend on $m$, while the test criterion compared against $\mathcal{T}^\varphi_{s, e}$ does not depend on the choice of $m$. \cite{enikeeva2014} proposed two choices for $T_m$: a theoretical one derived from a $\chi^2_m$-distribution, and an empirical one, $T_m = 2(2n)^{-1/2}\{m\log(ne/m)+\log(Tn/\alpha)\}$, at a given significance level $\alpha\in(0, 1)$. In conducting the simulation studies reported in Section \ref{sec:sim}, both choices of $T_m$ were observed to be too sensitive to cross-sectional correlations in the panel data. $T_m$ may be chosen numerically from, e.g., a bootstrap scheme as indicated by the authors, but this requires selecting $n$ such thresholds and, moreover, it is unclear how the cross-correlations should be treated.
\end{rem}
\section{Consistency of the double CUSUM statistic}
\label{sec:theor}
\subsection{Single change-point detection}
\label{sec:single}
In this section, we show the consistency of the DC statistic in the single change-point scenario ($N = 1$). More specifically, we consider the null hypothesis
\begin{eqnarray*}
\mathcal{H}_0: \, f_{j, 1} = \cdots = f_{j, T} \quad \mbox{for all} \quad j = 1, \ldots, n,
\end{eqnarray*}
which indicates structural stability in the mean over time. As the alternative hypothesis, we specify the scenario where the piecewise constant signals $\{f_{j, t}\}_{t=1}^T, \, j=1, \ldots, n$ contain a single change-point at an unknown location $\eta_1 \in (1, T)$, such that
\begin{eqnarray*}
\mathcal{H}_A: \, f_{j, 1} = \cdots = f_{j, \eta_1} \ne f_{j, \eta_1+1} = \cdots = f_{j, T} \, \mbox{for some} \, j \in \Pi_1 \, \mbox{with} \, m_1 = |\Pi_1| \ge 1.
\end{eqnarray*}
Throughout the paper, $a \sim b$ is used to denote that $a$ is of the order of $b$, and $a \wedge b = \min(a, b)$. Then, the consistency of the proposed test is established under the following conditions.
\begin{itemize}
\item[(A1)] For each $j$, $\varepsilon_{j, t}$ denotes a stationary, zero-mean process with the mixing coefficients
\begin{eqnarray*}
\alpha_j(k) = \sup_{\substack{G \in \sigma(\varepsilon_{j, t+k}, \varepsilon_{j, t+k+1}, \ldots) \\ H \in \sigma(\varepsilon_{j, t}, \varepsilon_{j, t-1}, \ldots)}} |\mathbb{P}(H \cap G) - \mathbb{P}(H)\mathbb{P}(G)|.
\end{eqnarray*}
Then, there exist fixed $C_\varepsilon, C_\alpha > 0$, $\mu\in (0, 1)$ and $\sigma^* \ge \sigma_* > 0$ with $\bar{\sigma} = \sigma^*/\sigma_* \in [1, \infty)$, such that the following hold.
\begin{itemize}
\item[(A1.i)] $\mathbb{E}(\varepsilon_{j, t}^k) \le C_{\varepsilon}^{k-2} k! \mathbb{E}(\varepsilon_{j, t}^2)$ uniformly in $j$ and $k=3, 4, \ldots$.
\item[(A1.ii)] $\max_{1 \le j \le n}|\alpha_j(k)| \le C_\alpha \mu^k$ for all $k = 1, 2, \ldots$.
\item[(A1.iii)] $\sigma_j^2$, the long-run variance of $\varepsilon_{j, t}$, satisfies $\sigma_*^2 \le \sigma_j^2 \le \sigma^{*2}$.
\end{itemize}
\item[(A2)] The dimensionality $n$ satisfies $n \sim T^\omega$ for some fixed $\omega\in[0, \infty)$.
\item[(A3)] There exists a fixed constant $\bar{f}>0$ such that $\max_{1 \le j \le n}\max_{1 \le t \le T} |f_{j, t}| \le \bar{f}$.
\item[(A4)] There exists a fixed constant $c>0$ such that $\eta_1 \wedge (T-\eta_1) > cT^\beta$ for some $\beta\in(0, 1]$.
\item[(A5)] Let $\widetilde{\delta}_1 = m_1^{-1}\sum_{j \in \Pi_1} |\delta_{j, 1}|$, i.e., the average magnitude of the non-zero changes at $t = \eta_1$. Then for any $\varphi\in[0, 1]$, we have $(n^\varphi\log\,T)^{-1}m_1^\varphi\widetilde{\delta}_1 T^{\beta/2} \to \infty$ as $T \to \infty$.
\end{itemize}
In (A1), each $\varepsilon_{j, t}$ is assumed to be $\alpha$-mixing (strong-mixing) at a geometric rate, with bounded moments and long-run variance. The condition (A1.i) is met by many distributions such as the exponential, gamma and inverse Gaussian, besides the Gaussian distribution. Note that the cross-sectional correlations of the panel data are not explicitly controlled by any of the conditions imposed in (A1). In (A2), the dimensionality can either be fixed or increase with $T$ at a polynomial rate. (A4) imposes a condition on the unbalancedness of the change-point location, permitting $T^{-1}\{\eta_1 \wedge (T-\eta_1)\} \to 0$ as $T \to \infty$ when $\beta<1$. (A5) places a lower bound on the rate of $m_1^\varphi\widetilde{\delta}_1$, which dictates the minimum requirement on the cross-sectional size of the change for the change-point to be detected as well as located with accuracy. (A3) rules out the trivial case where some $|\delta_{j, 1}| \to \infty$ as $T \to \infty$.
\begin{rem}[High-dimensional efficiency]
\label{rem:aston:two}
Recall the high-dimensional efficiency discussed in Remark \ref{rem:aston:one}, which was introduced in \cite{aston2014} as a tool that allows us to quantify and compare the power of different change-point tests. Table \ref{table:high} summarises the high-dimensional efficiency of $\mathcal{T}^\varphi_{1, T}$ and the other change-point tests discussed in Section \ref{sec:lit}, when the change-point is maximally distanced from the extreme ends of $[1, T]$ (i.e., $\beta = 1$). The double vertical line divides the tests into (i) those proposed under the assumption of cross-sectional independence (left) and dependence (right), (ii) those detecting the presence of a change-point only (left) and those identifying its location as well (right), and (iii) those with an interpretation as projected change-point tests (right) and those without (left). The oracle projection-based change-point test (see Remark \ref{rem:aston:one}) achieves high-dimensional efficiency of $T^{1/2}\Vert \Sigma_{\boldsymbol{\varepsilon}}^{-1/2}\boldsymbol{\delta} \Vert_2 \to \infty$ and thus, for diagonal $\Sigma_{\boldsymbol{\varepsilon}}$ (independent panel), attains better efficiency than $\mathcal{T}h_{1, T}$ and $\mathcal{T}lin_{1, T}$ by a factor of $n^{1/4}$, and than $\mathcal{T}scan_{1, T}$ by $m_1^{1/2}$. When the change-point is sparse ($m_1 \sim 1$), the high-dimensional efficiency of $\mathcal{T}^0_{1, T}$ is comparable to that of the oracle up to a logarithmic factor.
When the change-point is dense ($m_1 \sim n$), the high-dimensional efficiency of $\mathcal{T}^\varphi_{1, T}, \, \varphi>0$ is comparable to that of $\mathcal{T}^0_{1, T}$. Comparing $\mathcal{T}j_{1, T}$, $\mathcal{T}s_{1, T}(\pi_T)$ and $\mathcal{T}^0_{1, T}$, the latter attains better high-dimensional efficiency when $m_1^{-1}\sum_{j\in\Pi_1}|\delta_{j, 1}| \gg \min_{j\in\Pi_1}|\delta_{j, 1}|$. However, the former two achieve partitioning consistency (i.e., consistent estimation of $\Pi_1$), which is not granted by the latter.
\begin{table}[htbp]
\caption{High-dimensional efficiency of change-point tests when $\beta = 1$.}
\label{table:high}
\centering
\begin{tabular}{c|l||c|l}
\hline
$\mathcal{T}h_{1, T}$ & $\displaystyle{\frac{(\sum_{j=1}^n\delta_{j, 1}^2)^{1/2}T^{1/2}}{n^{1/4}} \to \infty}$ &
$\mathcal{T}j_{1, T}$ & $\displaystyle{\frac{\min_{j\in\Pi_1}|\delta_{j, 1}| T^{1/2}}{\sqrt{\log\,T}} \to \infty}$ \\ \hline
$\mathcal{T}lin_{1, T}$ & $\displaystyle{\frac{(\sum_{j=1}^n\delta_{j, 1}^2)^{1/2}T^{1/2}}{n^{1/4}} \to \infty}$ &
$\mathcal{T}s_{1, T}(\pi_T)$ & $\displaystyle{\frac{\min_{j\in\Pi_1}|\delta_{j, 1}| T^{1/2}}{\log\,T} \to \infty}$ \\ \hline
$\mathcal{T}scan_{1, T}$ & $\displaystyle{\frac{(\sum_{j=1}^n\delta_{j, 1}^2)^{1/2}T^{1/2}}{\sqrt{m_1\log(n/m_1)}} > \sqrt{6.6}}$ &
$\mathcal{T}^\varphi_{1, T}$ & $\displaystyle{\frac{m_1^\varphi\widetilde\delta_1 T^{1/2}}{n^\varphi\log\,T} \to \infty}$ \\ \hline
\end{tabular}
\end{table}
\end{rem}
\begin{rem}
\label{rem:scaling}
As briefly noted below (\ref{eq:cusum:two}), some studentisation is required for panel data analysis in practice. We estimate $\sigma_j$ using the flat-top kernel estimator with the automatically chosen bandwidth as discussed in \cite{politis2011}. Let $\widehat{\eta}_{j, 1} = \arg\max_{b\in[1, T)}|\mathcal{C}_b(\{x_{j, t}\}_{t=1}^T)|$,
\begin{eqnarray}
\bar\varepsilon_{j, t} &=& x_{j, t} - \frac{1}{\widehat{\eta}_{j, 1}}\sum_{v=1}^{\widehat{\eta}_{j, 1}}x_{j, v} \cdot \mathbb{I}(t \le \widehat{\eta}_{j, 1}) - \frac{1}{T-\widehat{\eta}_{j, 1}}\sum_{v=\widehat{\eta}_{j, 1}+1}^T x_{j, v} \cdot \mathbb{I}(t > \widehat{\eta}_{j, 1}), \label{eq:vep:est} \\
w(t) &=& \left\{\begin{array}{ll}
1 & \mbox{for } |t| \le 1/2, \\
2(1-|t|) & \mbox{for } 1/2 < |t| < 1, \\
0 & \mbox{for } |t| \ge 1,
\end{array}\right. \nonumber
\end{eqnarray}
and $c_j(k) = \frac{1}{T} \sum_{t=1}^{T-k} \bar\varepsilon_{j, t} \bar\varepsilon_{j, t+k}$. Note that $\bar\varepsilon_{j, t}$ estimates the unobservable $\varepsilon_{j, t}$ by identifying a change-point candidate $\widehat{\eta}_{j, 1}$ associated with each $\{x_{j, t}\}_{t=1}^T$. Let $\tau_j$ be the smallest positive integer such that $|c_j(\tau_j+k)/c_j(0)| < 1.4\sqrt{T^{-1}\log_{10}\,T}$ for $k=1, 2, 3$ (where the constants are chosen as per \cite{huvskova2010}). Then, the estimator is given by
\begin{eqnarray}
\widehat\sigma_j^2 = \max\left\{c_j(0) + 2\sum_{k=1}^{2\tau_j}w\left(\frac{k}{2\tau_j}\right)c_j(k), \, \frac{c_j(0)}{2}\right\},
\label{eq:sig:est}
\end{eqnarray}
where the second term on the RHS of (\ref{eq:sig:est}) is present to prevent spuriously large CUSUMs resulting from too small estimates of $\sigma_j$ (the flat-top kernel estimator is known to occasionally return negative estimates). \cite{huvskova2010} showed the consistency of the above estimator in the {\em single} change-point detection problem, under conditions similar to those imposed on $\varepsilon_{j, t}$ in (A1).
Noting that (i) the theoretical results derived in this paper hold without the consistency of the flat-top kernel estimator, provided that $\widehat\sigma_j^2$ is uniformly bounded away from zero, and (ii) we ultimately consider the problem of {\em multiple} change-point detection (with an adjustment to the estimator of $\sigma_j^2$ in order to accommodate the presence of multiple change-points, see Remark \ref{rem:scaling:multi}), we assume (A6) below on $\widehat\sigma_j^2$ without extending the consistency result of \cite{huvskova2010}.
\end{rem}
\begin{itemize}
\item[(A6)] Define $\mathcal{E}_\sigma = \{\max_{1 \le j \le n} |\widehat\sigma_j^2 - \sigma_j^2| \le \sigma_*^2/2\}$ where $\sigma_*$ is the same as that in (A1). Then we assume $\mathbb{P}(\mathcal{E}_\sigma) \to 1$ as $T \to \infty$.
\end{itemize}
Note that under (A4), short intervals near the extreme points $\{1, T\}$ do not contain the change-point. Hence we search for a change-point within $[1, T]\setminus\mathcal{I}_{1, T}$ only, where $\mathcal{I}_{1, T} = [1, d_T] \cup [T-d_T, T]$ with $d_T := [C\log^2\,T]$ for some $C > 0$. That is, $\mathcal{T}^\varphi_{1, T} = \max_{b\in[1, T]\setminus\mathcal{I}_{1, T}}\max_{1 \le m \le n} \mathcal{D}^{\varphi}_m(\{|\mathcal{X}^{(j)}_{1, b, T}|\}_{j=1}^n)$ and $\widehat{\eta}_1 = \arg\max_{b\in[1, T]\setminus\mathcal{I}_{1, T}}\max_{1 \le m \le n} \mathcal{D}^{\varphi}_m(\{|\mathcal{X}^{(j)}_{1, b, T}|\}_{j=1}^n)$. Under these conditions, we present the following theorems on the consistency of the DC statistic-based test equipped with a test criterion $\pi^\varphi_{n, T}$ which satisfies $C'n^\varphi\log\,T < \pi^\varphi_{n, T} < C''m_1^\varphi\widetilde{\delta}_1 T^{\beta/2}$ for some $C', C'' > 0$.
\begin{thm}
\label{thm:zero}
Assume that (A1)--(A3) and (A6) hold and that there exists no change-point ($N=0$) in the panel data in (\ref{eq:panel}). Then $\mathbb{P}\left\{\mathcal{T}^\varphi_{1, T} > \pi^\varphi_{n, T}\right\} \to 0$ as $T \to \infty$.
\end{thm}
Theorem \ref{thm:zero} guarantees that, when all signals remain constant, the probability of the test spuriously detecting a change-point converges to zero.
\begin{thm}
\label{thm:one}
Assume that (A1)--(A6) hold. Then there exists $c_0>0$ such that
$$
\mathbb{P}\left\{\mathcal{T}^\varphi_{1, T} > \pi^\varphi_{n, T} \mbox{ and } |\widehat{\eta}_1-\eta_1| < c_0\epsilon_T\right\} \to 1
$$
as $T \to \infty$, where $\epsilon_T = n^{2\varphi}(m_1^\varphi\widetilde\delta_1)^{-2}\log^2\,T$.
\end{thm}
Theorem \ref{thm:one} states that, in the presence of a single change-point, the proposed test detects it and accurately identifies its location. From (A5), it is easily seen that $\epsilon_T/T^\beta \to 0$ as $T \to \infty$, i.e., in the rescaled time, the estimated change-point location is consistent since $T^{-1}|\widehat{\eta}_1-\eta_1| \le T^{-\beta}|\widehat{\eta}_1-\eta_1| \to 0$. We may define optimality in change-point detection as the case when the true change-point and its estimate are within $O_p(1)$ distance of each other; see \cite{korostelev1987}. Then, near-optimality in change-point estimation is achieved up to a logarithmic factor ($\log^2\,T$) with the choice of (i) $\varphi = 0$ when $\widetilde\delta_1 \sim 1$, or (ii) $\varphi>0$ when $m_1 \sim n$ and $\widetilde\delta_1 \sim 1$.
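Before turning to multiple change-point detection, we sketch the studentisation step of Remark \ref{rem:scaling} in \texttt{R}. This is a minimal illustration rather than the implementation used for the simulation studies: it assumes the standard studentised CUSUM form for $\mathcal{C}_b$ introduced earlier, uses zero-padding and a safety truncation not specified in the text, and the function name \texttt{lrv.flattop} is illustrative only.
\begin{verbatim}
# Sketch: flat-top kernel long-run variance estimator for one component series x
lrv.flattop <- function(x) {
  nT <- length(x)                      # nT corresponds to T in the text
  # single change-point candidate via the CUSUM of x (standard form assumed)
  b <- 1:(nT - 1); S <- cumsum(x)
  cusum <- sqrt((nT - b) / (nT * b)) * S[b] -
           sqrt(b / (nT * (nT - b))) * (S[nT] - S[b])
  eta <- which.max(abs(cusum))
  # residuals: subtract pre- and post-break sample means
  eps <- x - c(rep(mean(x[1:eta]), eta), rep(mean(x[(eta + 1):nT]), nT - eta))
  # sample autocovariances c(k) of the residuals
  acv <- function(k) sum(eps[1:(nT - k)] * eps[(k + 1):nT]) / nT
  # bandwidth: smallest tau with |c(tau+k)/c(0)| < 1.4*sqrt(log10(nT)/nT), k = 1, 2, 3
  thr <- 1.4 * sqrt(log10(nT) / nT); tau <- 1
  while (tau + 3 < nT && any(abs(sapply(tau + 1:3, acv) / acv(0)) >= thr)) tau <- tau + 1
  # flat-top kernel weights and the estimator, floored at c(0)/2
  w <- function(t) ifelse(abs(t) <= 0.5, 1, ifelse(abs(t) < 1, 2 * (1 - abs(t)), 0))
  k <- 1:min(2 * tau, nT - 1)          # truncated at nT - 1 for safety
  max(acv(0) + 2 * sum(w(k / (2 * tau)) * sapply(k, acv)), acv(0) / 2)
}
\end{verbatim}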
\subsection{Binary segmentation for multiple change-point detection}
\label{sec:bs}
In this section, we show the consistency of the DC statistic for multiple change-point detection when applied jointly with a BS algorithm. We first formulate the double CUSUM Binary Segmentation (DCBS) algorithm for panel data segmentation, which is equipped with a test criterion $\pi^\varphi_{n, T}$. As in Section \ref{sec:single}, let $\mathcal{I}_{s, e} = [s, s+d_T] \cup [e-d_T, e]$ denote the fraction of the interval $[s, e]$ on which we do not search for change-points, in order to account for possible bias in the previously detected change-points. This does not affect the asymptotic consistency of the estimated change-point locations under the assumption (B1) below on the dispersion of the change-points. The index $p$ is used to denote the level (indicating the progression of the segmentation procedure) and $q$ is used to denote the location of the node at each level.
\begin{description}
\item[The double CUSUM Binary Segmentation algorithm]
\item[Step 0] Set $(p, q)=(1, 1)$, $s_{p, q}=1$ and $e_{p, q}=T$.
\item[Step 1] At the current level $p$, repeat the following for all $q$.
\begin{description}
\item[Step 1.1] Letting $s=s_{p, q}$ and $e=e_{p, q}$, obtain the series of CUSUMs $\{\mathcal{X}^{j}_{s, b, e}\}$ for $b\in[s, e)$ and $j=1, \ldots, n$, on which $\mathcal{D}^{\varphi}_m(\{|\mathcal{X}^{(j)}_{s, b, e}|\}_{j=1}^n)$ is computed over all $b$ and $m$.
\item[Step 1.2] Obtain the test statistic $\mathcal{T}^\varphi_{s, e} = \max_{b\in[s, e]\setminus\mathcal{I}_{s, e}}\max_{1 \le m \le n} \mathcal{D}^{\varphi}_m(\{|\mathcal{X}^{(j)}_{s, b, e}|\}_{j=1}^n)$.
\item[Step 1.3] If $\mathcal{T}^\varphi_{s, e} \le \pi^\varphi_{n, T}$, quit searching for change-points on the interval $[s, e]$. On the other hand, if $\mathcal{T}^\varphi_{s, e} > \pi^\varphi_{n, T}$, locate
$$\widehat{\eta} = \arg\max_{b\in[s, e]\setminus\mathcal{I}_{s, e}}\max_{1 \le m \le n} \mathcal{D}^{\varphi}_m(\{|\mathcal{X}^{(j)}_{s, b, e}|\}_{j=1}^n)$$
and proceed to Step 1.4.
\item[Step 1.4] Add $\widehat{\eta}$ to the set of estimated change-points and divide the interval $[s_{p, q}, e_{p, q}]$ into the two sub-intervals $[s_{p+1, 2q-1}, e_{p+1, 2q-1}]$ and $[s_{p+1, 2q}, e_{p+1, 2q}]$, where $s_{p+1, 2q-1} = s_{p, q}$, $e_{p+1, 2q-1} = \widehat{\eta}$, $s_{p+1, 2q} = \widehat{\eta}+1$ and $e_{p+1, 2q} = e_{p, q}$.
\end{description}
\item[Step 2] Once $[s_{p, q}, e_{p, q}]$ for all $q$ have been examined at level $p$, set $p \leftarrow p+1$ and go to Step 1.
\end{description}
Step 1.3 furnishes a stopping rule for the DCBS algorithm: the search for further change-points is terminated once $\mathcal{T}^\varphi_{s, e} \le \pi^\varphi_{n, T}$ on every $[s, e]$ defined by two adjacent estimated change-points.
\begin{rem}
\label{rem:scaling:multi}
An adjustment is required to extend the scaling estimation procedure in Remark \ref{rem:scaling}, originally designed for single change-point detection, to the multiple change-point scenario. More specifically, $\bar\varepsilon_{j, t}$ in (\ref{eq:vep:est}) can no longer be regarded as well-estimating $\varepsilon_{j, t}$ when $[1, \widehat{\eta}_{j, 1}]$ or $(\widehat{\eta}_{j, 1}, T]$ may contain further change-points.
Seeing that $\widehat{\eta}_{j, 1}$ is the top node of a binary tree that results from applying a BS algorithm to $\{x_{j, t}\}_{t=1}^T$, we remedy this by growing a binary tree of {\em depth} $L_T$ ($= O(\log\,T)$) from each $\{x_{j, t}\}_{t=1}^T$, $j=1, \ldots, n$. Such a tree represents a maximal segmentation of $x_{j, t}$ and therefore, regarding each segment as stationary, $\bar\varepsilon_{j, t}$ is derived by subtracting the sample means within those intervals. Then, the scaling estimation procedure is applied to the thus-obtained $\bar\varepsilon_{j, t}$. Note that this approach requires an arbitrary choice of $L_T$ when there is no prior information on the upper bound on the total number of change-points, $N$. For multiple change-point detection, such a requirement (or its equivalent) is commonly found; see e.g., \cite{soh2014} and \cite{kirch2015}, where the minimum distance between two consecutive change-points plays an essential role in guaranteeing the consistency of the proposed procedures.
\end{rem}
The consistency of the DCBS algorithm is established under (A1)--(A3), (A6) and the following conditions, which extend (A4)--(A5) in order to allow for the presence of multiple change-points.
\begin{itemize}
\item[(B1)] There exists a fixed constant $c>0$ such that $\min_{0 \le r \le N} (\eta_{r+1} - \eta_r) \ge cT^\beta$ for some $\beta\in(6/7, 1]$, using the notational convention that $\eta_0 = 0$ and $\eta_{N+1}=T$.
\item[(B2)] At each $\eta_r$, let $\widetilde{\delta}_r = m_r^{-1}\sum_{j \in \Pi_r} |\delta_{j, r}|$. Then for any $\varphi\in[0, 1]$, $\underline{\Delta}_\varphi = \min_{1 \le r \le N} m_r^\varphi\widetilde{\delta}_r$ satisfies $(n^\varphi\log\,T)^{-1} \underline{\Delta}_\varphi T^{7\beta/4-3/2} \to \infty$ as $T \to \infty$.
\end{itemize}
Since the unbalancedness of a change-point location is closely related to the distance between two adjacent change-points, (B1) is formulated in terms of the latter. Note that we do not impose any upper bound on the total number of change-points $N$ beyond the implication of (B1), namely that $N$ may grow with $T$ provided that any two adjacent change-points are sufficiently distanced. Comparing (B2) to (A5), the high-dimensional efficiency is worsened by $T^{3/2-5\beta/4}$ when detecting multiple change-points, which is also the case when applying the BS algorithm to univariate data. This rate can be improved when the DC statistic is combined with, e.g., the Wild Binary Segmentation (WBS) \citep{piotr2014}, which applies the CUSUM principle to randomly drawn intervals; we leave the exploration of this direction for future research.
\begin{thm}
\label{thm:two}
Let $\widehat{\eta}_r, \ r=1, \ldots, \widehat{N}$ ($1 < \widehat{\eta}_1 < \ldots < \widehat{\eta}_{\widehat{N}} < T$) denote the change-points detected by the DCBS algorithm with a test criterion $\pi^\varphi_{n, T}$ satisfying $C'n^{2\varphi}\underline{\Delta}_\varphi^{-1}T^{5(1-\beta)/2}\log\,T < \pi^\varphi_{n, T} < C''\underline{\Delta}_\varphi T^{\beta-1/2}$. Assuming (A1)--(A3), (A6) and (B1)--(B2), there exists $c_0>0$ such that
$$\mathbb{P}\left\{\widehat{N}=N; \,|\widehat{\eta}_r-\eta_r| < c_0\epsilon_T \mbox{ for } r=1, \ldots, N\right\} \to 1$$
as $T \to \infty$, where $\epsilon_T = n^{2\varphi}\underline{\Delta}_\varphi^{-2} T^{5(1-\beta)}\log^2\,T$.
\end{thm}
Unlike in the single change-point scenario, $\epsilon_T$ now depends on the dispersion of the change-points through $\beta$. From (B1)--(B2), it is easily seen that $\epsilon_T/T^\beta \to 0$ as $T \to \infty$, and hence the multiple change-points are consistently located for all $r=1, \ldots, N$.
\begin{rem}[Post-processing of estimated change-points]
\label{rem:post:proc}
In this paper, we focus on establishing the good performance of the DC statistic in high-dimensional CUSUM series aggregation when combined with a simple segmentation method, rather than considering more sophisticated algorithms such as the WBS \citep{piotr2014} or MOSUM \citep{kirch2015} procedures, to which the DC principle is straightforwardly applicable. The BS algorithm is known to perform sub-optimally in certain unfavourable settings since, at each iteration, it fits a step function with a single break to the data over a segment that possibly contains multiple change-points. Hence, we equip the DCBS algorithm with an extra step aimed at removing spuriously detected change-points, which is in line with the post-processing steps proposed in \cite{cho2012} and \cite{cho2015}. It checks whether $\mathcal{T}^{\varphi}_{\widehat{\eta}_r-d_r, \widehat{\eta}_r+d_r} > \pi^\varphi_{n, T}$ for $r=1, \ldots, \widehat{N}$ with $d_r = \min(\widehat{\eta}_r-\widehat{\eta}_{r-1}, \widehat{\eta}_{r+1}-\widehat{\eta}_r)/2$. In other words, we compute the DC statistics within each segment containing a single estimated change-point, and retain only those $\widehat{\eta}_r$ that survive the thresholding. More details on the post-processing step can be found in Section 3.2.1 of \cite{cho2012}.
\end{rem}
\section{Choice of $\varphi$ and test criterion}
\label{sec:choice}
\subsection{Choice of $\varphi$}
\label{sec:varphi}
Remark \ref{rem:aston:two} indicates that $\varphi = 0$ is preferable in terms of the high-dimensional efficiency for detecting a change-point that is known to be sparse across the panel, while $\mathcal{T}^\varphi_{1, T}$ with $\varphi > 0$ achieves the same high-dimensional efficiency as $\mathcal{T}^0_{1, T}$ when the change is dense across the panel. In practice, such information is often unavailable a priori and therefore it is of interest to find a way of combining the information embedded in an {\em array} of DC statistics indexed by $\varphi \in [0, 1]$, which works universally well over different ranges of change-point configurations determined by $\eta_r$, $m_r$ and $\widetilde{\delta}_r$.
Recalling the data-driven partitioning of the panel achieved by the DC statistic, the pointwise partitioning of the ordered CUSUM values may be regarded as analogous to fitting a step function with a single break to $|\mathcal{X}^{(j)}_{s, b, e}|, \ j=1, \ldots, n$ at each $b$. Suppose that $\{|\mathcal{X}^{(j)}_{s, b, e}|\}_{j=1}^n$ arises from an additive model with a signal that contains a single break at $j=m_r$ for some $r \in \{1, \ldots, N\}$ and is constant otherwise. This setting is unlikely to be satisfied by the ordered CUSUM values in general, but it provides a framework for linking the optimality in panel partitioning to that in locating a single change-point.
Then, \cite{brodsky1993} showed that the choice of $\varphi = 1/2$ leads to the best rate of convergence of the bias $|\widehat{m}^\varphi_b - m_r|$ (recall (\ref{eq:wh:m})) among $\varphi \in [0, 1]$, i.e., asymptotically, the optimal partitioning is attained by the choice of $\varphi = 1/2$.
Taking these observations into account, we propose a new DC statistic
\begin{eqnarray*}
\mathcal{D}zh_m(\{|\mathcal{X}^{(j)}_{s, b, e}|\}_{j=1}^n) = \gamma_n\mathcal{D}^0_m(\{|\mathcal{X}^{(j)}_{s, b, e}|\}_{j=1}^n) + \mathcal{D}^{1/2}_m(\{|\mathcal{X}^{(j)}_{s, b, e}|\}_{j=1}^n),
\end{eqnarray*}
where $\gamma_n$ acts as a scaling factor that enables treating $\mathcal{D}^0_m$ and $\mathcal{D}^{1/2}_m$ on an equal footing. Heuristically, the discrepancy between $\mathcal{D}^0_m$ and $\mathcal{D}^{1/2}_m$ increases at the rate $m^{1/2}$, but is not so pronounced for small values of $m$. Therefore, $\mathcal{D}zh_m$ can be viewed as an attempt at combining the different ranges of change-point configurations over which either $\mathcal{D}^0_m$ or $\mathcal{D}^{1/2}_m$ achieves consistency, and the following conditions modifying (A5) and (B2) reflect this point.
\begin{itemize}
\item[(A5')] $(n^{1/2}\log\,T)^{-1}(\gamma_n \vee m_1^{1/2})\widetilde{\delta}_1 T^{\beta/2} \to \infty$ as $T \to \infty$.
\item[(B2')] $\underline{\Delta} = \min_{1 \le r \le N} (\gamma_n \vee m_r^{1/2})\widetilde{\delta}_r$ satisfies $(n^{1/2}\log\,T)^{-1} \underline{\Delta} T^{7\beta/4-3/2} \to \infty$ as $T \to \infty$.
\end{itemize}
Then, the consistency of the DC statistics carries over to the newly defined DC statistic $\mathcal{D}zh_m$ as below.
\begin{thm}
\label{thm:four}
Assume $\omega > 0$, such that $n \to \infty$ as $T \to \infty$, and $n^{-1/2}\gamma_n \to 0$ as $n \to \infty$. Let $\mathcal{D}zh_m$ replace $\mathcal{D}^\varphi_m$ in deriving the test statistics where applicable.
\begin{itemize}
\item[(a)] Assume that (A1)--(A4), (A5') and (A6) hold. Equipped with a test criterion $\widetilde{\pi}_{n, T}$ satisfying $C'n^{1/2}\log\,T <$ $\widetilde{\pi}_{n, T}<$ $C''(\gamma_n \vee m_1^{1/2})\widetilde{\delta}_1 T^{\beta/2}$ for some $C', C'' > 0$, Theorems \ref{thm:zero}--\ref{thm:one} hold with $\epsilon_T = n\{(\gamma_n \vee m_1^{1/2})\widetilde\delta_1\}^{-2}\log^2\,T$ in the latter.
\item[(b)] Assume that (A1)--(A3), (A6), (B1) and (B2') hold. Equipped with a test criterion satisfying $C'n\underline{\Delta}^{-1}T^{5(1-\beta)/2}\log\,T <$ $\widetilde{\pi}_{n, T} <$ $C''\underline{\Delta} T^{\beta-1/2}$ for some $C', C'' > 0$, Theorem \ref{thm:two} holds with $\epsilon_T = n\underline{\Delta}^{-2} T^{5(1-\beta)}\log^2\,T$.
\end{itemize}
\end{thm}
For the purpose of testing only, it is reasonable to combine the two tests based on $\mathcal{D}^0_m$ and $\mathcal{D}^{1/2}_m$ as in \cite{enikeeva2014}, namely, to reject the null if $(\mathcal{T}^0_{1, T} > \pi^0_{n, T}) \vee (\mathcal{T}^{1/2}_{1, T} > \pi^{1/2}_{n, T})$. Adopting such an approach, however, it is not clear how to identify the location of a change-point once its presence is detected, since we cannot exclude the possibility that it is detected by both $\mathcal{T}^0_{1, T}$ and $\mathcal{T}^{1/2}_{1, T}$, each of which estimates its location with bias. This becomes further complicated in the presence of multiple change-points of different configurations.
On the other hand, while $\gamma_n$ needs to be selected additionally, we can readily identify the change-point locations using $\mathcal{D}zh_m$. Theorem \ref{thm:four} assumes only that $n^{-1/2}\gamma_n \to 0$. However, as the formulation of (A5') and (B2') suggests, a guaranteed improvement of the high-dimensional efficiency of $\mathcal{D}zh_m$ over that of both $\mathcal{D}^0_m$ and $\mathcal{D}^{1/2}_m$ still requires knowledge of the unknown sparsity or density of the change-point for the choice of $\gamma_n$. In Section \ref{sec:sim}, $\gamma_n = \log\,n$ is considered in investigating the finite sample performance of $\mathcal{D}zh_m$ along with that of $\mathcal{D}^0_m$ and $\mathcal{D}^{1/2}_m$.
\subsection{Bootstrap for test criterion selection}
\label{sec:threshold}
Theorems \ref{thm:zero}--\ref{thm:four} provide ranges for the rate of $\pi^\varphi_{n, T}$ which grant size and power consistency for both single and multiple change-point detection problems. However, the theoretical rates involve typically unattainable knowledge of quantities such as $\beta$ and $\underline{\Delta}_\varphi$ and, even with such knowledge available, the finite sample performance may be heavily influenced by the choice of the multiplicative constant applied to the given rate. Therefore, we propose a resampling procedure that enables us to approximate the quantiles of $\mathcal{T}^\varphi_{s, e}$ under the null hypothesis of no change-points.
Originally proposed as a data-based simulation method for statistical inference with i.i.d. random samples, the bootstrap principle has been extended to a variety of non-i.i.d. situations; see \cite{hardle2003} for a comprehensive survey. Although some heuristic attempts have been made \citep{fiecas2014}, applying bootstrap methods developed for time series of fixed dimensions to high-dimensional settings is challenging. Some recent efforts in this direction include \cite{jentsch2015}, who proposed the Linear Process Bootstrap for multivariate time series and established its asymptotic validity for the sample mean when $n$ is allowed to increase with $T$. The procedure involves the estimation of the $(nT)\times(nT)$-dimensional covariance matrix of the vectorised version of $[\boldsymbol{\varepsilon}_t, \, t=1, \ldots, T]$ and the computation of its Cholesky decomposition, where the latter task alone is of computational complexity $O(n^3T^3)$, which makes it difficult to apply the method even to panel data of moderately large dimensions.
Instead, we propose a bootstrap procedure motivated by the representation theory developed for the Generalised Dynamic Factor Model (GDFM). Factor analysis is a popular dimension reduction technique used in many disciplines, such as econometrics, statistics, signal processing, psychometrics, and chemometrics. The key idea is that pervasive cross-correlations in $\boldsymbol{\varepsilon}_t$ are modelled by the {\em common} component $\boldsymbol{\chi}_t$, while $\boldsymbol{\xi}_t = \boldsymbol{\varepsilon}_t - \boldsymbol{\chi}_t$, with a moderate degree of cross-correlations, denotes the {\em idiosyncratic} component.
The GDFM introduced in \cite{forni2000} goes a step further and admits the following representation theorem \citep{forni2001}: any $\boldsymbol{\varepsilon}_t$ with a finite number ($q < \infty$) of diverging dynamic eigenvalues can be decomposed into $\boldsymbol{\chi}_t$, driven by a $q$-tuple of white noises $\mathbf{u}_t$ ({\em common shocks}) as $\boldsymbol{\chi}_t = \mathbf{b}(L)\mathbf{u}_t$ ($L$ denotes the lag operator and $\mathbf{b}(L)$ is an $n \times q$ matrix of one-sided, square-summable filters $b_{ik}(L)$), and $\boldsymbol{\xi}_t$ with bounded dynamic eigenvalues. Referred to as the GDFM-boot, the proposed bootstrap method utilises this representation property of the GDFM in order to effectively handle the cross-correlations, as well as the within-series correlations, present in $\boldsymbol{\varepsilon}_t$ of large dimensions.
\begin{description}
\item[The GDFM-boot algorithm]
\item[Step 1] Obtain $\mathbf{E} = [\widehat{\boldsymbol{\varepsilon}}_t, \, t=1, \ldots, T]$ where $\widehat\varepsilon_{j, t} = \widehat{\sigma}_j^{-1}\bar\varepsilon_{j, t}$ (refer to (\ref{eq:vep:est}) and Remark \ref{rem:scaling:multi} for the definition of $\bar\varepsilon_{j, t}$).
\item[Step 2] Let $\mathbf{e}_l$ denote the eigenvector of the covariance matrix $T^{-1}\mathbf{E}\mathbf{E}^\top$ corresponding to its $l$-th largest eigenvalue. Then, estimate $q$, the number of common shocks driving the cross-correlations of $\mathbf{E}$, using the information criterion proposed by \cite{bai2002}: $IC(k) = \log(T^{-1}\sum_{t=1}^T \Vert\widehat{\boldsymbol{\varepsilon}}_t - \sum_{l=1}^k\mathbf{e}_l\mathbf{e}_l^\top\widehat{\boldsymbol{\varepsilon}}_t\Vert_2^2) + kC_{n, T}^{-1}\log(C_{n, T})$, as $q = \arg\min_{0 \le k \le Q} IC(k)$ with $Q = \lfloor C_{n, T}/\log(C_{n, T}) \rfloor$ and $C_{n, T} = n \wedge T$.
\item[Step 3] Estimate the $q$-dimensional common shocks ($\widehat{\mathbf{u}}_t$) and the associated filter ($\widehat{\mathbf{b}}(L)$), such that $\widehat{\boldsymbol{\varepsilon}}_t$ is decomposed into $\widehat{\boldsymbol{\chi}}_t = \widehat{\mathbf{b}}(L)\widehat{\mathbf{u}}_t$ and $\widehat{\boldsymbol{\xi}}_t = \widehat{\boldsymbol{\varepsilon}}_t - \widehat{\boldsymbol{\chi}}_t$.
\item[Step 4] For $l=1, \ldots, B$, repeat the following steps.
\begin{description}
\item[Step 4.1] For each $k=1, \ldots, q$, draw $\{u^l_{k, t}\}_{t=1}^T$ independently from the empirical distribution of $\{\widehat{u}_{k, t}\}_{t=1}^T$, from which $\boldsymbol{\chi}^l_t = \widehat{\mathbf{b}}(L)\mathbf{u}^l_t$ is obtained.
\item[Step 4.2] Generate a bootstrap sequence of the Fourier coefficients of $\widehat{\boldsymbol{\xi}}_t$ using the Time Frequency Toggle (TFT)-Bootstrap \citep{kirch2011}, to which the inverse fast Fourier transform algorithm is applied to produce $\boldsymbol{\xi}_t^l$.
\item[Step 4.3] On the bootstrap series $\boldsymbol{\varepsilon}^l_t = \boldsymbol{\chi}^l_t + \boldsymbol{\xi}^l_t$, compute $\mathcal{E}^{j, l}_{1, b, T} = \mathcal{C}_b(\{\varepsilon^l_{j, t}\}_{t=1}^T)$, from which $\mathcal{T}^{\varphi, l}_{1, T} = \max_{b\in[1, T), \, 1 \le m \le n}\mathcal{D}^\varphi_m(\{|\mathcal{E}^{(j), l}_{1, b, T}|\}_{j=1}^n)$ is obtained, where $|\mathcal{E}^{(1), l}_{1, b, T}| \ge \ldots \ge |\mathcal{E}^{(n), l}_{1, b, T}|$.
\end{description}
\item[Step 5] Select the $(1-\alpha)$-quantile of $\mathcal{T}^{\varphi, l}_{1, T}, \, l=1, \ldots, B$ as $\pi^\varphi_{n, T}$.
\end{description}
In Step 1, we adopt the same approach as that taken in estimating $\sigma_j$, namely first to estimate the locations of the change-points coordinate-wise and then to subtract the estimated piecewise constant signal from each $x_{j, t}$, in order to estimate the unobservable $\varepsilon_{j, t}$. Note that the practice of studentising the input data is often adopted prior to factor modelling. Step 2 is justified by the observations that (i) the number of factors in the static representation of factor models is typically greater than the number of common shocks in the GDFM, and (ii) an over-estimated $q$ still returns mean-square consistent common components \citep[Corollary 2]{forni2000}. Section 4 of \cite{forni2000} provides an estimator of $\mathbf{b}(L)$ (and hence of $\mathbf{u}_t$) based on the windowed estimator of the spectral density matrix of $\mathbf{E}$, which is adopted for Step 3. Step 4.1 produces a bootstrap sample of $\widehat{\boldsymbol{\chi}}_t$ by treating the white noise estimates $\widehat{u}_{k, t}$ as being i.i.d. over time. In Step 4.2, the Local Bootstrap, originally proposed in \cite{kirch2011} as a part of the TFT-Bootstrap for resampling univariate time series, is applied to the $n$-dimensional $\widehat{\boldsymbol{\xi}}_t$ of bounded cross-sectional correlations. It does not require an initial estimate of the spectral density matrix of $\widehat{\boldsymbol{\xi}}_t$. Instead, the Fourier coefficients of $\widehat{\boldsymbol{\xi}}_t$ are resampled according to a kernel-based probability vector, under the observation that, in a neighbourhood of each frequency, the distribution of different coefficients is almost identical (provided the spectral density is smooth). A detailed description of the Local Bootstrap is provided in Appendix A of the supplementary document. In the simulation studies, we used the Daniell kernel with window size $[0.05T]$.
Since the common and idiosyncratic components are handled independently, generating a smaller bootstrap sample (e.g., of size $\lceil \sqrt{B}\rceil$) for each component may be sufficient to generate the bootstrap sample of size $B$ for $\boldsymbol{\varepsilon}_t$. Once a bootstrap sample is generated at level $p=1$ of the DCBS algorithm, the same sample can be re-used for critical value selection at levels $p \ge 2$. That is, in Step 4.3 above, the test statistics are computed as $\mathcal{T}^{\varphi, l}_{s', e'} = \max_{b\in[s', e'), \, 1 \le m \le n}\mathcal{D}^\varphi_m(\{|\mathcal{E}^{(j), l}_{s', b, e'}|\}_{j=1}^n)$ over a moving window of size $e-s+1$ for $1 \le s' \le T-e+s$, $e' = s'-s+e$ and $l=1, \ldots, B$, from which the test criterion is drawn.
Establishing the validity of the GDFM-boot algorithm for change-point analysis is beyond the scope of the current paper and hence is left for future investigation. However, the simulation studies in Section \ref{sec:sim} confirm its good performance in various settings together with the proposed change-point statistic. Computationally, the GDFM-boot algorithm benefits from the dimension reduction achieved via factor modelling. When applied to generate a bootstrap sample of size $B = 100$ with $n = 25$ and $T = 100$ (executed on a $3.10$GHz quad-core machine with $8$GB of RAM running Windows $7$), the \texttt{R} code implementing the algorithm took $0.38$ seconds, compared to $62$ seconds taken by the Linear Process Bootstrap. Further, applying the latter was not computationally feasible for any data of the size and dimensionality considered in our simulation study.
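To fix ideas, the factor-number selection in Step 2 can be sketched in \texttt{R} as follows. This is a minimal illustration of the stated information criterion only (the estimation of $\mathbf{b}(L)$ in Step 3 and the TFT/Local Bootstrap in Step 4.2 are not sketched); the matrix \texttt{E} of studentised residuals from Step 1 is assumed given, and the function name \texttt{select.q} is illustrative rather than part of our implementation.
\begin{verbatim}
# Sketch: choose the number of common shocks q from an n x T matrix E
select.q <- function(E) {
  n <- nrow(E); TT <- ncol(E)
  C.nT <- min(n, TT)
  Q <- floor(C.nT / log(C.nT))
  # eigenvectors e_1, e_2, ... of the covariance matrix E E'/T, as columns
  evec <- eigen(E %*% t(E) / TT, symmetric = TRUE)$vectors
  ic <- sapply(0:Q, function(k) {
    if (k == 0) resid <- E
    else {
      P <- evec[, 1:k, drop = FALSE]
      resid <- E - P %*% (t(P) %*% E)    # remove the first k principal components
    }
    log(sum(resid^2) / TT) + k * log(C.nT) / C.nT   # IC(k) as stated in Step 2
  })
  (0:Q)[which.min(ic)]                   # estimated number of common shocks q
}
\end{verbatim}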
\section{Simulation study}
\label{sec:sim}
\subsection{Single change-point detection}
\label{sec:sim:single}
In this section, we evaluate the empirical performance of the DC statistic on simulated panel data with (at most) a single change-point. For comparison, change-point tests with the two choices $\varphi=0, 1/2$ (referred to as $\mathcal{T}^0$ and $\mathcal{T}^{1/2}$) as well as the combined DC statistic ($\mathcal{T}zh$) from Section \ref{sec:varphi} are considered, with $\pi^\varphi_{n, T}$ computed from the GDFM-boot algorithm. Also included are
\begin{itemize}
\item $\mathcal{T}eh_{1, T} = (\mathcal{T}lin_{1, T} > 1) \vee (\mathcal{T}scan_{1, T} > 1)$, equipped with the thresholds $H$ and $T_m$, where $H$ is chosen as the $(1-\alpha/2)$-quantile of the $\chi^2_n$-distribution and $T_m = \frac{2}{\sqrt{2m}}\{m\log(ne/m)+\log(nT/\alpha)\}$, as per the recommendation made in Section 5 of \cite{enikeeva2014} (referred to as $\mathcal{T}eh$);
\item $\mathcal{T}j_{1, T}$ with the test criterion chosen according to the bootstrap algorithm (Algorithm 4.6) proposed in \cite{jirak2014} with block size $K = 4$ (referred to as $\mathcal{T}j$).
\end{itemize}
Since the scaling of the component series is not discussed in \cite{enikeeva2014}, we choose to apply $\widehat{\sigma}_j$ estimated as in Remark \ref{rem:scaling} in deriving $\mathcal{T}eh_{1, T}$. \cite{jirak2014} showed the consistency of the proposed test when applied with a set of long-run variance estimators for $\sigma_j$. As discussed in Section \ref{sec:sim:single:res}, $\mathcal{T}j$ is highly sensitive to scaling terms that are estimated spuriously small, and therefore we use the most conservative estimator among those included in Proposition 3.5 of \cite{jirak2014}. $\mathcal{T}h_{1, T}$ is constructed similarly to $\mathcal{T}lin_{1, T}$ with the same high-dimensional efficiency, and $\mathcal{T}eh$ is expected to perform better than either of its two component tests; hence $\mathcal{T}h_{1, T}$ is omitted from our study.
As described in Section \ref{sec:sim:single:model}, each coordinate of $\boldsymbol{\varepsilon}_t$ is generated from the same model, which enables us to select a single threshold $\pi_T$ applicable to all $n$ component series in deriving $\mathcal{T}s_{1, T}(\pi_T)$. Hence we include $\mathcal{T}s_{1, T}(\pi_T)$ equipped with the ``oracle'' threshold in the comparative study (referred to as $\mathcal{T}s$), where $\pi_T$ is chosen from the GDFM-boot algorithm with the (unobservable) $\boldsymbol{\varepsilon}_t$ replacing the estimated $\bar{\boldsymbol{\varepsilon}}_t$. Then, $\pi_T$ is the $(1-\alpha)$-quantile of $\max_{b\in[1, T), \, 1 \le j \le n}|\mathcal{E}^{j, l}_{1, b, T}|, \, l=1, \ldots, B$. Whenever a bootstrap procedure is involved in test criterion selection, the bootstrap sample size is fixed at $B = 100$.
We report the Type I error and the size-corrected power at the significance level $\alpha = 0.05$, as well as the location accuracy ($|\widehat{\eta}_1 - \eta_1| < \log\,T$, in $\%$) for $\mathcal{T}^0$, $\mathcal{T}^{1/2}$, $\mathcal{T}zh$, $\mathcal{T}j$ and $\mathcal{T}s$, over $100$ simulated sample paths for each setting; for $\mathcal{T}eh$, it is not suggested how the change-point location is to be estimated. For all tests, $d_T = 5$ is used to trim off the extreme ends of the interval $[1, T]$.
Also, we investigate the partitioning accuracy of $\mathcal{T}^0$, $\mathcal{T}^{1/2}$, $\mathcal{T}zh$, $\mathcal{T}j$ and $\mathcal{T}s$ by reporting the Rand index, i.e., the sum of true positives and true negatives divided by the total ($n$), where a Rand index close to $1$ indicates more accurate partitioning (binary classification).
\subsubsection{Data generating models}
\label{sec:sim:single:model}
We consider piecewise constant signals $\{f_{j, t}\}_{t=1}^{T}$ with varying sparsity/density ($m_1$), size of jumps ($|\delta_{j, 1}| \sim \mathcal{U}(0.75\delta_1, 1.25\delta_1)$ for $j\in\Pi_1$, with randomly assigned signs) and locations $t = \eta_1$. Motivated by \cite{jirak2014}, $\boldsymbol{\varepsilon}_t$ is generated from the following two models:
\begin{description}
\item[(N1) ARMA($2, 2$) model.] With $\varrho_i = \varrho(i+1)^{-1}$ and $\sigma_v = 0.1\varrho^{-1}$,
\begin{eqnarray}
u_{j, t} &=& \sum_{i=0}^{99} \varrho_i v_{j-i, t}, \qquad v_{j, t} \sim_{\mbox{\scriptsize{i.i.d}}} \mathcal{N}(0, \sigma_v^2), \label{eq:sim:u} \\
\varepsilon_{j, t} &=& 0.2\varepsilon_{j, t-1}-0.3\varepsilon_{j, t-2}+u_{j, t}+0.2u_{j, t-1}. \nonumber
\end{eqnarray}
\item[(N2) Factor model.] $u_{j, t}$ is generated as in (\ref{eq:sim:u}) with $\varrho = 0.2$ and $\sigma_v = 0.5\sqrt{1-\varrho_h^2}$, and
\begin{eqnarray*}
\varepsilon_{j, t} &=& \varrho_h h_t + 0.2\varepsilon_{j, t-1}-0.3\varepsilon_{j, t-2}+u_{j, t}+0.2u_{j, t-1}, \quad h_t \sim_{\mbox{\scriptsize{i.i.d}}} \mathcal{N}(0, 0.1^2).
\end{eqnarray*}
\end{description}
It is easily seen that the degree of cross-sectional correlations is controlled by the choice of $\varrho \in \{0.2, 0.5\}$ in (N1) and $\varrho_h \in \{0.5, 0.9\}$ in (N2).
\subsubsection{Results}
\label{sec:sim:single:res}
Table \ref{table:single:null} compares the Type I errors of the different tests under consideration. Combined with the GDFM-boot procedure, the DC statistic-based tests generally manage to keep the Type I error below the nominal level ($\alpha = 0.05$), or only slightly above it, with the exception of the case when $n=T=250$ and the ARMA model (N1) with $\varrho = 0.5$ is used to generate the noise. The oracle threshold for $\mathcal{T}s$ leads to a very conservative test, which is also evident in the power performance of $\mathcal{T}s$. $\mathcal{T}j$ appears to be highly sensitive, and therefore vulnerable, to the choice of $\widehat{\sigma}_j$, particularly when the critical value is selected by the parametric bootstrap (not reported here), since the test statistic directly depends on the largest CUSUM value attained by a single component series. When the noise is generated as in (N2) with $\varrho_h = 0.9$, the size of $\mathcal{T}j$ is the closest to the nominal level, which is attributed to the presence of a strong factor, as it leads to the scaling terms being estimated homogeneously across the panel. The presence of a strong factor has an adverse effect on the size of $\mathcal{T}eh$, which was originally proposed for independent panel data (see also Remark \ref{rem:ef}). To observe the power behaviour, we present the results for $n=250$ and $T=100$ as a representative example in Tables \ref{table:single:one}--\ref{table:single:two}, and report the rest in Appendix C. Increasing the sample size generally leads to improved power and accuracy in the estimated change-point location.
As for the dimensionality, its increase has different effects depending on the error generating model: while there is no strong discernible trend with regard to increasing $n$ in the case of (N1), it brings a dramatic improvement in the performance of $\mathcal{T}^0$ for the noise generated from (N2), especially with increasing influence of the factor (when $\varrho_h = 0.9$). The tests tend to lose power when the change-point is sparse, when the jumps are of smaller magnitude and when its location is distanced away from the centre, and similar remarks apply to the location and partitioning accuracy. Note that the Rand index occasionally decreases with increasing $m_1$, as it measures both true positives and true negatives, where the former may decrease with the growing density of the change-point. This indicates that, over the range of $\delta_1$ considered here, some $x_{j, t}, \, j\in\Pi_1$ may not contribute to change-point detection due to the small jump size $|\delta_{j, 1}|$.
When $\boldsymbol{\varepsilon}_t$ is generated from (N1) (where the cross-sectional correlations are relatively small), $\mathcal{T}eh$ and $\mathcal{T}zh$ consistently achieve high (size-corrected) power over the entire range of change-point configurations, and the latter also achieves high location accuracy and Rand index. In the case of (N2), $\mathcal{T}^0$ outperforms all the other tests considered in our study over all change-point configurations and criteria for evaluation. Although not reported here, the performance of $\mathcal{T}zh$ in this particular setting can be improved by using a greater value for $\gamma_n$, which prompts the development of a data-driven way of exploiting the information embedded in an array of DC statistics over $\varphi \in [0, 1]$. When the size of the data increases ($T=250$), the performance of $\mathcal{T}zh$ catches up with that of $\mathcal{T}^0$ in this setting. Comparing $\mathcal{T}^0$ and $\mathcal{T}^{1/2}$, the former outperforms the latter except when the change-point is highly dense and the noise is generated from (N1). $\mathcal{T}j$ performs as well as $\mathcal{T}^0$ and $\mathcal{T}zh$ or, occasionally, even better when the change-point is centrally located, but its performance deteriorates greatly when $\eta_1 = 0.1T$. This can be explained by the presence of a multiplicative factor involving $b$ in (\ref{eq:jirak}): $\sqrt{b(T-b)/T}|\mathcal{X}^j_{1, b, T}|$ amounts to a CUSUM-based change-point statistic that attains the slowest rate at which its Type II error converges to zero among the set of CUSUM statistics studied in Section 3 of \cite{brodsky1993}, with the rate depending on $T^{\beta-1}$ instead of $T^{\beta/2-1/2}$, as is the case with $|\mathcal{X}^j_{1, b, T}|$; see Theorem 3.5.2 of \cite{brodsky1993} for further details. Due to the conservative behaviour of the oracle threshold observed in Table \ref{table:single:null}, $\mathcal{T}s$ does not outperform the other tests. Interestingly, when $\mathcal{T}j$, $\mathcal{T}^0$, $\mathcal{T}^{1/2}$ and $\mathcal{T}zh$ attain similar levels of power, the latter two achieve higher accuracy in locating the change-point. This is attributed to the fact that the latter usually select a greater $\widehat{m}^\varphi_b$ in (\ref{eq:wh:m}), which is evidenced by the higher Rand index values.
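For completeness, a minimal \texttt{R} sketch of the data generating process (N1) with a single change-point, as described in Section \ref{sec:sim:single:model}, is given below. The function name \texttt{simulate.N1} and its default arguments are illustrative only (the defaults correspond to one of the configurations reported in Table \ref{table:single:one}), and no burn-in is applied to the ARMA recursion, which is a simplification not specified in the text.
\begin{verbatim}
# Sketch: generate an n x T panel from model (N1) with one change-point at eta1
simulate.N1 <- function(n = 250, T = 100, rho = 0.2, m1 = floor(sqrt(n)),
                        delta1 = 0.075, eta1 = floor(0.5 * T)) {
  # cross-sectionally correlated innovations u_{j,t} = sum_{i=0}^{99} rho_i v_{j-i,t}
  rho.i <- rho / (1:100)                       # rho_i = rho * (i+1)^{-1}
  sigma.v <- 0.1 / rho
  v <- matrix(rnorm((n + 99) * T, sd = sigma.v), n + 99, T)
  u <- sapply(1:T, function(t)
    as.vector(stats::filter(v[, t], rho.i, sides = 1))[100:(n + 99)])
  # ARMA(2,2): eps_{j,t} = 0.2 eps_{j,t-1} - 0.3 eps_{j,t-2} + u_{j,t} + 0.2 u_{j,t-1}
  eps <- t(apply(u, 1, function(uj)
    as.numeric(stats::filter(uj + 0.2 * c(0, uj[-T]), c(0.2, -0.3), "recursive"))))
  # piecewise constant signal: m1 components jump at eta1 by U(0.75*delta1, 1.25*delta1)
  # with randomly assigned signs
  f <- matrix(0, n, T)
  jump <- sample(c(-1, 1), m1, replace = TRUE) * runif(m1, 0.75 * delta1, 1.25 * delta1)
  f[sample(n, m1), (eta1 + 1):T] <- matrix(jump, m1, T - eta1)
  f + eps                                      # observed panel x_{j,t}
}
\end{verbatim}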
\mathbf{e}gin{table}[htbp] \caption{Type I error when $\alpha=0.05$; $n=100$ (top) and $n=250$ (bottom).} \label{table:single:null} \centering \scriptsize{ \mathbf{e}gin{tabular}{@{}c@{}@{}c@{}|cccccc|cccccc} \hline \hline & & \multicolumn{6}{c}{$T=100$} & \multicolumn{6}{c}{$T=250$} \\ & $\mbox{var}rho/\mbox{var}rho_h$ & $\mathcal{T}^0$ & $\mathcal{T}^{1/2}$ & $\mathcal{T}zh$ & $\mathcal{T}j$ & $\mathcal{T}eh$ & $\mathcal{T}s$ & $\mathcal{T}^0$ & $\mathcal{T}^{1/2}$ & $\mathcal{T}zh$ & $\mathcal{T}j$ & $\mathcal{T}eh$ & $\mathcal{T}s$ \\ \hline \multirow{2}{*}{(N1)} & $0.2$ & 0.06 & 0.05 & 0.06 & 0.15 & 0.08 & 0 & 0.01 & 0.07 & 0.01 & 0.11 & 0.1 & 0 \\ & $0.5$ & 0.04 & 0.03 & 0.04 & 0.19 & 0.08 & 0 & 0.02 & 0.06 & 0.02 & 0.14 & 0.11 & 0 \\ \multirow{2}{*}{(N2)} & $0.5$ & 0.08 & 0.04 & 0.07 & 0.13 & 0.16 & 0 & 0.01 & 0.08 & 0.01 & 0.18 & 0.23 & 0 \\ & $0.9$ & 0.04 & 0.04 & 0.04 & 0.1 & 0.57 & 0 & 0.04 & 0.05 & 0.05 & 0.05 & 0.64 & 0 \\ \hline \multirow{2}{*}{(N1)} & $0.2$ & 0.06 & 0 & 0.06 & 0.15 & 0.07 & 0 & 0.07 & 0.03 & 0.07 & 0.18 & 0.09 & 0 \\ & $0.5$ & 0.06 & 0.01 & 0.04 & 0.21 & 0.09 & 0 & 0.12 & 0.01 & 0.1 & 0.22 & 0.05 & 0 \\ \multirow{2}{*}{(N2)} & $0.5$ & 0.07 & 0.04 & 0.05 & 0.04 & 0.35 & 0 & 0.02 & 0.05 & 0.05 & 0.08 & 0.34 & 0 \\ & $0.9$ & 0.04 & 0.07 & 0.07 & 0.1 & 0.61 & 0 & 0.05 & 0.05 & 0.05 & 0.06 & 0.75 & 0 \\ \hline \end{tabular}} \end{table} \mathbf{e}gin{sidewaystable} \caption{$n = 250$, $T = 100$ and $\alpha=0.05$: (N1) with $\mbox{var}rho = 0.2$ (top) and $\mbox{var}rho = 0.5$ (bottom).} \label{table:single:one} \centering \scriptsize{ \mathbf{e}gin{tabular}{ccc|cccccc|ccccc|ccccc} \hline\hline & & & \multicolumn{6}{c}{size-corrected power} & \multicolumn{5}{c}{location accuracy ($\%$)} & \multicolumn{5}{c}{Rand index} \\ $\delta_1$ & $m_1$ & $\eta_1$ & $\mathcal{T}^0$ & $\mathcal{T}^{1/2}$ & $\mathcal{T}zh$ & $\mathcal{T}j$ & $\mathcal{T}eh$ & $\mathcal{T}s$ & $\mathcal{T}^0$ & $\mathcal{T}^{1/2}$ & $\mathcal{T}zh$ & $\mathcal{T}j$ & $\mathcal{T}s$ & $\mathcal{T}^0$ & $\mathcal{T}^{1/2}$ & $\mathcal{T}zh$ & $\mathcal{T}j$ & $\mathcal{T}s$ \\ \hline \multirow{6}{*}{0.05} & \multirow{2}{*}{$\log\,n$} & $0.1T$ & 0.08 & 0.02 & 0.08 & 0.03 & 0.06 & 0.08 & 2 & 0 & 2 & 0 & 2 & 0.08 & 0.02 & 0.08 & 0.03 & 0.08 \\ & & $0.5T$ & 0.1 & 0.03 & 0.09 & 0.11 & 0.06 & 0.08 & 0 & 0 & 0 & 6 & 0 & 0.10 & 0.03 & 0.09 & 0.11 & 0.08 \\ & \multirow{2}{*}{$\sqrt{n}$} & $0.1T$ & 0.11 & 0.03 & 0.09 & 0.07 & 0.02 & 0.07 & 1 & 0 & 1 & 0 & 1 & 0.10 & 0.03 & 0.08 & 0.06 & 0.06 \\ & & $0.5T$ & 0.24 & 0.1 & 0.25 & 0.3 & 0.19 & 0.21 & 10 & 2 & 10 & 18 & 9 & 0.21 & 0.10 & 0.22 & 0.27 & 0.19 \\ & \multirow{2}{*}{$0.4n$} & $0.1T$ & 0.14 & 0.17 & 0.14 & 0.06 & 0.15 & 0.11 & 2 & 13 & 2 & 0 & 1 & 0.03 & 0.11 & 0.03 & 0.01 & 0.02 \\ & & $0.5T$ & 0.56 & 1 & 0.65 & 0.77 & 0.97 & 0.6 & 23 & 96 & 31 & 53 & 26 & 0.12 & 0.77 & 0.16 & 0.18 & 0.13 \\ \hline \multirow{6}{*}{0.075} & \multirow{2}{*}{$\log\,n$} & $0.1T$ & 0.09 & 0.01 & 0.09 & 0.05 & 0.04 & 0.07 & 0 & 0 & 0 & 0 & 1 & 0.09 & 0.01 & 0.09 & 0.05 & 0.07 \\ & & $0.5T$ & 0.31 & 0.05 & 0.31 & 0.41 & 0.16 & 0.25 & 16 & 0 & 17 & 31 & 14 & 0.30 & 0.05 & 0.30 & 0.40 & 0.24 \\ & \multirow{2}{*}{$\sqrt{n}$} & $0.1T$ & 0.1 & 0.03 & 0.1 & 0.06 & 0.03 & 0.1 & 1 & 0 & 1 & 0 & 3 & 0.09 & 0.03 & 0.09 & 0.05 & 0.09 \\ & & $0.5T$ & 0.51 & 0.15 & 0.53 & 0.69 & 0.53 & 0.48 & 22 & 5 & 23 & 47 & 22 & 0.45 & 0.15 & 0.47 & 0.62 & 0.43 \\ & \multirow{2}{*}{$0.4n$} & $0.1T$ & 0.23 & 0.87 & 0.24 & 0.11 & 0.75 & 0.24 & 12 & 86 & 12 & 0 & 16 & 0.05 & 0.66 & 0.05 & 0.02 & 0.05 \\ 
& & $0.5T$ & 0.89 & 1 & 1 & 1 & 0.97 & 0.98 & 55 & 100 & 90 & 75 & 75 & 0.19 & 0.87 & 0.53 & 0.30 & 0.23 \\ \hline \multirow{6}{*}{0.1} & \multirow{2}{*}{$\log\,n$} & $0.1T$ & 0.12 & 0.02 & 0.14 & 0.04 & 0.04 & 0.11 & 5 & 0 & 5 & 0 & 5 & 0.12 & 0.02 & 0.14 & 0.04 & 0.11 \\ & & $0.5T$ & 0.55 & 0.08 & 0.55 & 0.74 & 0.34 & 0.49 & 31 & 0 & 33 & 58 & 31 & 0.53 & 0.08 & 0.53 & 0.72 & 0.48 \\ & \multirow{2}{*}{$\sqrt{n}$} & $0.1T$ & 0.15 & 0.05 & 0.15 & 0.09 & 0.09 & 0.1 & 7 & 1 & 7 & 0 & 4 & 0.13 & 0.05 & 0.13 & 0.08 & 0.09 \\ & & $0.5T$ & 0.76 & 0.47 & 0.82 & 0.94 & 0.96 & 0.74 & 52 & 33 & 58 & 74 & 57 & 0.68 & 0.47 & 0.74 & 0.86 & 0.66 \\ & \multirow{2}{*}{$0.4n$} & $0.1T$ & 0.42 & 1 & 0.68 & 0.11 & 0.97 & 0.51 & 28 & 100 & 55 & 0 & 34 & 0.09 & 0.84 & 0.26 & 0.02 & 0.11 \\ & & $0.5T$ & 1 & 1 & 1 & 1 & 0.97 & 1 & 76 & 100 & 99 & 85 & 97 & 0.21 & 0.93 & 0.69 & 0.47 & 0.29 \\ \hline \hline \multirow{6}{*}{0.05} & \multirow{2}{*}{$\log\,n$} & $0.1T$ & 0.08 & 0.02 & 0.08 & 0.03 & 0.06 & 0.08 & 2 & 0 & 2 & 0 & 2 & 0.08 & 0.02 & 0.08 & 0.03 & 0.08 \\ & & $0.5T$ & 0.1 & 0.03 & 0.09 & 0.11 & 0.06 & 0.08 & 0 & 0 & 0 & 6 & 0 & 0.10 & 0.03 & 0.09 & 0.11 & 0.08 \\ & \multirow{2}{*}{$\sqrt{n}$} & $0.1T$ & 0.11 & 0.03 & 0.09 & 0.07 & 0.02 & 0.07 & 1 & 0 & 1 & 0 & 1 & 0.10 & 0.03 & 0.08 & 0.06 & 0.06 \\ & & $0.5T$ & 0.24 & 0.1 & 0.25 & 0.3 & 0.19 & 0.21 & 10 & 2 & 10 & 18 & 9 & 0.21 & 0.10 & 0.22 & 0.27 & 0.19 \\ & \multirow{2}{*}{$0.4n$} & $0.1T$ & 0.14 & 0.17 & 0.14 & 0.06 & 0.15 & 0.11 & 2 & 13 & 2 & 0 & 1 & 0.03 & 0.11 & 0.03 & 0.01 & 0.02 \\ & & $0.5T$ & 0.56 & 1 & 0.65 & 0.77 & 0.97 & 0.6 & 23 & 96 & 31 & 53 & 26 & 0.12 & 0.77 & 0.16 & 0.18 & 0.13 \\ \hline \multirow{6}{*}{0.075} & \multirow{2}{*}{$\log\,n$} & $0.1T$ & 0.09 & 0.01 & 0.09 & 0.05 & 0.04 & 0.07 & 0 & 0 & 0 & 0 & 1 & 0.09 & 0.01 & 0.09 & 0.05 & 0.07 \\ & & $0.5T$ & 0.31 & 0.05 & 0.31 & 0.41 & 0.16 & 0.25 & 16 & 0 & 17 & 31 & 14 & 0.30 & 0.05 & 0.30 & 0.40 & 0.24 \\ & \multirow{2}{*}{$\sqrt{n}$} & $0.1T$ & 0.1 & 0.03 & 0.1 & 0.06 & 0.03 & 0.1 & 1 & 0 & 1 & 0 & 3 & 0.09 & 0.03 & 0.09 & 0.05 & 0.09 \\ & & $0.5T$ & 0.51 & 0.15 & 0.53 & 0.69 & 0.53 & 0.48 & 22 & 5 & 23 & 47 & 22 & 0.45 & 0.15 & 0.47 & 0.62 & 0.43 \\ & \multirow{2}{*}{$0.4n$} & $0.1T$ & 0.23 & 0.87 & 0.24 & 0.11 & 0.75 & 0.24 & 12 & 86 & 12 & 0 & 16 & 0.05 & 0.66 & 0.05 & 0.02 & 0.05 \\ & & $0.5T$ & 0.89 & 1 & 1 & 1 & 0.97 & 0.98 & 55 & 100 & 90 & 75 & 75 & 0.19 & 0.87 & 0.53 & 0.30 & 0.23 \\ \hline \multirow{6}{*}{0.1} & \multirow{2}{*}{$\log\,n$} & $0.1T$ & 0.12 & 0.02 & 0.14 & 0.04 & 0.04 & 0.11 & 5 & 0 & 5 & 0 & 5 & 0.12 & 0.02 & 0.14 & 0.04 & 0.11 \\ & & $0.5T$ & 0.55 & 0.08 & 0.55 & 0.74 & 0.34 & 0.49 & 31 & 0 & 33 & 58 & 31 & 0.53 & 0.08 & 0.53 & 0.72 & 0.48 \\ & \multirow{2}{*}{$\sqrt{n}$} & $0.1T$ & 0.15 & 0.05 & 0.15 & 0.09 & 0.09 & 0.1 & 7 & 1 & 7 & 0 & 4 & 0.13 & 0.05 & 0.13 & 0.08 & 0.09 \\ & & $0.5T$ & 0.76 & 0.47 & 0.82 & 0.94 & 0.96 & 0.74 & 52 & 33 & 58 & 74 & 57 & 0.68 & 0.47 & 0.74 & 0.86 & 0.66 \\ & \multirow{2}{*}{$0.4n$} & $0.1T$ & 0.42 & 1 & 0.68 & 0.11 & 0.97 & 0.51 & 28 & 100 & 55 & 0 & 34 & 0.09 & 0.84 & 0.26 & 0.02 & 0.11 \\ & & $0.5T$ & 1 & 1 & 1 & 1 & 0.97 & 1 & 76 & 100 & 99 & 85 & 97 & 0.21 & 0.93 & 0.69 & 0.47 & 0.29 \\ \hline & & average & 0.35 & 0.34 & 0.38 & 0.37 & 0.41 & 0.35 & 19.06 & 29.78 & 24.78 & 24.83 & 22.11 & 0.20 & 0.29 & 0.26 & 0.24 & 0.19 \\ \hline \end{tabular}} \end{sidewaystable} \mathbf{e}gin{sidewaystable} \caption{$n = 250$, $T = 100$ and $\alpha=0.05$: (N2) with $\mbox{var}rho_h = 0.5$ (top) and $\mbox{var}rho_h = 0.9$ 
(bottom).} \label{table:single:two} \centering \scriptsize{ \mathbf{e}gin{tabular}{ccc|cccccc|ccccc|ccccc} \hline\hline & & & \multicolumn{6}{c}{size-corrected power} & \multicolumn{5}{c}{location accuracy ($\%$)} & \multicolumn{5}{c}{Rand index} \\ $\delta_1$ & $m_1$ & $\eta_1$ & $\mathcal{T}^0$ & $\mathcal{T}^{1/2}$ & $\mathcal{T}zh$ & $\mathcal{T}j$ & $\mathcal{T}eh$ & $\mathcal{T}s$ & $\mathcal{T}^0$ & $\mathcal{T}^{1/2}$ & $\mathcal{T}zh$ & $\mathcal{T}j$ & $\mathcal{T}s$ & $\mathcal{T}^0$ & $\mathcal{T}^{1/2}$ & $\mathcal{T}zh$ & $\mathcal{T}j$ & $\mathcal{T}s$ \\ \hline \multirow{6}{*}{0.05} & \multirow{2}{*}{$\log\,n$} & $0.1T$ & 0.22 & 0.06 & 0.14 & 0.04 & 0 & 0.02 & 4 & 1 & 1 & 0 & 0 & 0.21 & 0.06 & 0.14 & 0.04 & 0.02 \\ & & $0.5T$ & 0.3 & 0.07 & 0.19 & 0.08 & 0 & 0.05 & 12 & 0 & 6 & 6 & 1 & 0.29 & 0.07 & 0.18 & 0.08 & 0.05 \\ & \multirow{2}{*}{$\sqrt{n}$} & $0.1T$ & 0.2 & 0.1 & 0.16 & 0.02 & 0 & 0.02 & 5 & 1 & 4 & 0 & 0 & 0.18 & 0.10 & 0.15 & 0.02 & 0.02 \\ & & $0.5T$ & 0.45 & 0.1 & 0.21 & 0.09 & 0 & 0.03 & 18 & 1 & 6 & 4 & 0 & 0.40 & 0.10 & 0.19 & 0.08 & 0.03 \\ & \multirow{2}{*}{$0.4n$} & $0.1T$ & 0.4 & 0.08 & 0.25 & 0.07 & 0 & 0.02 & 20 & 2 & 11 & 0 & 0 & 0.08 & 0.05 & 0.10 & 0.01 & 0.00 \\ & & $0.5T$ & 0.84 & 0.49 & 0.88 & 0.55 & 0.62 & 0.1 & 40 & 39 & 67 & 39 & 6 & 0.18 & 0.38 & 0.45 & 0.14 & 0.02 \\ \hline \multirow{6}{*}{0.075} & \multirow{2}{*}{$\log\,n$} & $0.1T$ & 0.24 & 0.04 & 0.16 & 0.02 & 0 & 0.03 & 7 & 0 & 6 & 0 & 0 & 0.23 & 0.04 & 0.16 & 0.02 & 0.03 \\ & & $0.5T$ & 0.68 & 0.08 & 0.48 & 0.26 & 0 & 0.11 & 40 & 1 & 27 & 16 & 8 & 0.66 & 0.08 & 0.47 & 0.25 & 0.11 \\ & \multirow{2}{*}{$\sqrt{n}$} & $0.1T$ & 0.32 & 0.09 & 0.21 & 0.07 & 0 & 0.03 & 20 & 1 & 8 & 0 & 1 & 0.28 & 0.09 & 0.19 & 0.06 & 0.03 \\ & & $0.5T$ & 0.91 & 0.13 & 0.76 & 0.55 & 0.05 & 0.17 & 53 & 2 & 54 & 41 & 10 & 0.81 & 0.13 & 0.69 & 0.50 & 0.15 \\ & \multirow{2}{*}{$0.4n$} & $0.1T$ & 0.7 & 0.26 & 0.61 & 0.07 & 0.33 & 0.03 & 44 & 19 & 48 & 0 & 2 & 0.15 & 0.19 & 0.28 & 0.02 & 0.01 \\ & & $0.5T$ & 1 & 1 & 1 & 0.9 & 0.7 & 0.7 & 54 & 100 & 100 & 65 & 50 & 0.21 & 0.90 & 0.72 & 0.31 & 0.15 \\ \hline \multirow{6}{*}{0.1} & \multirow{2}{*}{$\log\,n$} & $0.1T$ & 0.37 & 0.05 & 0.21 & 0.06 & 0 & 0.03 & 18 & 1 & 11 & 0 & 0 & 0.36 & 0.05 & 0.20 & 0.06 & 0.03 \\ & & $0.5T$ & 0.95 & 0.08 & 0.85 & 0.68 & 0 & 0.49 & 71 & 0 & 64 & 55 & 41 & 0.92 & 0.08 & 0.83 & 0.67 & 0.48 \\ & \multirow{2}{*}{$\sqrt{n}$} & $0.1T$ & 0.66 & 0.07 & 0.35 & 0.03 & 0 & 0.04 & 46 & 2 & 23 & 0 & 2 & 0.59 & 0.07 & 0.32 & 0.03 & 0.04 \\ & & $0.5T$ & 0.99 & 0.15 & 0.98 & 0.88 & 0.69 & 0.7 & 80 & 3 & 87 & 74 & 60 & 0.88 & 0.15 & 0.90 & 0.82 & 0.63 \\ & \multirow{2}{*}{$0.4n$} & $0.1T$ & 0.92 & 0.86 & 0.99 & 0.04 & 0.7 & 0.2 & 77 & 84 & 96 & 1 & 17 & 0.19 & 0.73 & 0.63 & 0.01 & 0.04 \\ & & $0.5T$ & 1 & 1 & 1 & 0.97 & 0.7 & 0.99 & 77 & 100 & 100 & 86 & 96 & 0.21 & 0.94 & 0.84 & 0.52 & 0.27 \\ \hline \multirow{6}{*}{0.05} & \multirow{2}{*}{$\log\,n$} & $0.1T$ & 0.18 & 0.05 & 0.05 & 0.05 & 0 & 0.02 & 12 & 2 & 2 & 0 & 0 & 0.17 & 0.05 & 0.05 & 0.05 & 0.02 \\ & & $0.5T$ & 0.75 & 0.05 & 0.05 & 0.24 & 0 & 0.03 & 43 & 0 & 0 & 12 & 2 & 0.73 & 0.05 & 0.05 & 0.23 & 0.03 \\ & \multirow{2}{*}{$\sqrt{n}$} & $0.1T$ & 0.33 & 0.06 & 0.06 & 0.05 & 0 & 0.03 & 21 & 1 & 1 & 0 & 1 & 0.29 & 0.06 & 0.06 & 0.04 & 0.03 \\ & & $0.5T$ & 0.96 & 0.07 & 0.07 & 0.39 & 0 & 0.1 & 69 & 0 & 0 & 28 & 7 & 0.85 & 0.07 & 0.07 & 0.35 & 0.09 \\ & \multirow{2}{*}{$0.4n$} & $0.1T$ & 0.69 & 0.07 & 0.07 & 0.05 & 0 & 0.03 & 47 & 2 & 2 & 0 & 1 & 0.14 & 0.06 & 0.06 & 0.01 & 0.01 \\ & & $0.5T$ & 1 & 
0.12 & 0.17 & 0.74 & 0.39 & 0.21 & 60 & 2 & 7 & 47 & 13 & 0.21 & 0.08 & 0.11 & 0.24 & 0.05 \\ \hline \multirow{6}{*}{0.075} & \multirow{2}{*}{$\log\,n$} & $0.1T$ & 0.6 & 0.05 & 0.05 & 0.04 & 0 & 0.04 & 43 & 2 & 2 & 0 & 2 & 0.58 & 0.05 & 0.05 & 0.04 & 0.04 \\ & & $0.5T$ & 1 & 0.07 & 0.09 & 0.76 & 0 & 0.27 & 83 & 0 & 2 & 64 & 22 & 0.97 & 0.07 & 0.09 & 0.74 & 0.26 \\ & \multirow{2}{*}{$\sqrt{n}$} & $0.1T$ & 0.89 & 0.09 & 0.09 & 0.06 & 0 & 0.05 & 76 & 2 & 2 & 0 & 2 & 0.79 & 0.09 & 0.09 & 0.05 & 0.04 \\ & & $0.5T$ & 1 & 0.09 & 0.11 & 0.85 & 0.28 & 0.43 & 88 & 0 & 5 & 70 & 37 & 0.89 & 0.09 & 0.10 & 0.79 & 0.39 \\ & \multirow{2}{*}{$0.4n$} & $0.1T$ & 0.94 & 0.09 & 0.09 & 0.12 & 0.25 & 0.12 & 70 & 2 & 2 & 1 & 6 & 0.20 & 0.07 & 0.07 & 0.03 & 0.03 \\ & & $0.5T$ & 1 & 0.73 & 0.92 & 0.99 & 0.44 & 0.77 & 81 & 64 & 87 & 79 & 59 & 0.21 & 0.65 & 0.75 & 0.51 & 0.22 \\ \hline \multirow{6}{*}{0.1} & \multirow{2}{*}{$\log\,n$} & $0.1T$ & 0.9 & 0.08 & 0.08 & 0.05 & 0 & 0.06 & 86 & 2 & 2 & 1 & 4 & 0.87 & 0.08 & 0.08 & 0.05 & 0.06 \\ & & $0.5T$ & 1 & 0.06 & 0.25 & 0.97 & 0.14 & 0.8 & 92 & 0 & 21 & 85 & 73 & 0.97 & 0.06 & 0.24 & 0.96 & 0.78 \\ & \multirow{2}{*}{$\sqrt{n}$} & $0.1T$ & 1 & 0.06 & 0.06 & 0.06 & 0 & 0.14 & 95 & 2 & 2 & 2 & 9 & 0.89 & 0.06 & 0.06 & 0.05 & 0.13 \\ & & $0.5T$ & 1 & 0.07 & 0.59 & 1 & 0.44 & 0.96 & 90 & 0 & 57 & 90 & 89 & 0.89 & 0.07 & 0.55 & 0.97 & 0.88 \\ & \multirow{2}{*}{$0.4n$} & $0.1T$ & 0.99 & 0.1 & 0.29 & 0.16 & 0.44 & 0.23 & 85 & 6 & 25 & 3 & 17 & 0.21 & 0.08 & 0.21 & 0.04 & 0.06 \\ & & $0.5T$ & 1 & 1 & 1 & 1 & 0.44 & 0.99 & 86 & 99 & 99 & 93 & 92 & 0.21 & 0.96 & 0.90 & 0.80 & 0.45 \\ \hline & & average & 0.73 & 0.21 & 0.38 & 0.36 & 0.18 & 0.25 & 53.14 & 15.08 & 28.81 & 26.72 & 20.28 & 0.47 & 0.19 & 0.31 & 0.27 & 0.16 \\ \hline \end{tabular}} \end{sidewaystable} \subsection{Multiple change-point detection} \label{sec:sim:multiple} In this section, we evaluate the empirical behaviour of the DCBS algorithm with $\mathcal{T}^0$ and $\mathcal{T}zh$, based on their good performance observed in Section \ref{sec:sim:single:res}. For comparison, we also investigate the performance of the SBS algorithm furnished with the oracle threshold. Fixing the dimensionality and the size of data at $n = 250$ and $T = 250$, we consider piecewise constant signal $\{f_{j, t}\}_{t=1}^{T}$ containing three change-points as follows: at $t = \eta_r$, an index set $\Pi_r$ of cardinality $m_r$ is randomly drawn from $\{1, \ldots, n\}$, where $|\delta_{j, r}| \sim_{\mbox{\scriptsize{i.i.d}}} \mathcal{U}(0.75\delta_r, 1.25\delta_r)$ for $j \in \Pi_r$. We set $(\eta_1, m_1, \delta_1) = ([0.3T], [0.75n], 0.050)$, $(\eta_2, m_2, \delta_2) = ([0.6T], [0.25n], 0.087)$, and $(\eta_3, m_3, \delta_3) = ([0.8T], [0.1n], 0.140)$ such that $m_r\delta_r^2$ remains identical over all $r=1, 2, 3$. The noise $\boldsymbol{\varepsilon}_t$ is generated as in (N1) and (N2) of Section \ref{sec:sim:single:model} with varying $\mbox{var}rho$ and $\mbox{var}rho_h$. We set $B = 100$, $d_T = 5$ and $L_T = [\log_2(\log\,T+1)]$ where the latter is chosen to permit a growing number ($\log\,T$) of change-points in the data. To account for multiple testing, the Bonferroni's correction is adopted by setting $\alpha = \alpha^*/(2^{L_T}-1)$ with $\alpha^* = 0.05$. 
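For concreteness, the sketch below (in Python, for illustration only) generates one sample path from this design. The exact specifications of the noise models (N1)--(N2) are those of Section \ref{sec:sim:single:model} and are replaced here by a simple AR(1) stand-in, and the random signs attached to the jump sizes $|\delta_{j, r}|$ are an assumption of the sketch rather than part of the original design.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

n, T = 250, 250
# (eta_r, m_r, delta_r) as in the text; m_r * delta_r^2 is approximately constant over r
cps = [(int(0.3 * T), int(0.75 * n), 0.050),
       (int(0.6 * T), int(0.25 * n), 0.087),
       (int(0.8 * T), int(0.10 * n), 0.140)]

# piecewise constant signal f_{j,t} with three change-points
f = np.zeros((n, T))
for eta, m, delta in cps:
    Pi = rng.choice(n, size=m, replace=False)          # affected coordinates
    jump = rng.uniform(0.75 * delta, 1.25 * delta, m)  # |delta_{j,r}| ~ U(0.75 d_r, 1.25 d_r)
    sign = rng.choice([-1.0, 1.0], size=m)             # sign convention: assumed random here
    f[Pi, eta:] += (sign * jump)[:, None]              # mean shifts from t = eta_r + 1 onwards

# stand-in for (N1): AR(1) noise with parameter rho (the exact noise models are given in the text)
rho = 0.2
eps = np.zeros((n, T))
eps[:, 0] = rng.standard_normal(n)
for t in range(1, T):
    eps[:, t] = rho * eps[:, t - 1] + rng.standard_normal(n)

x = f + eps                                            # observed panel data

# number of layers and Bonferroni-corrected level used by the DCBS algorithm
L_T = int(np.log2(np.log(T) + 1))
alpha = 0.05 / (2 ** L_T - 1)
print(L_T, round(alpha, 4))
\end{verbatim}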
We report the total number of estimated change-points ($\widehat{N}$, in $\%$) and their location accuracy ($|\widehat{\eta}_r - \eta_r| < \log\,T$, in $\%$), over $100$ simulated sample paths for each setting in Table \ref{table:multi} and Figures \ref{fig:multi:one}--\ref{fig:multi:two}. Overall, the BS algorithm applied with $\mathcal{T}zh$ performs the best, detecting all three change-points as well as identifying their locations in over $80\%$ of the simulated data. In comparison, $\mathcal{T}^0$ and $\mathcal{T}s$ tend to miss $\eta_1$, which is associated with the smallest jump size, while all three methods detect $\eta_3$, associated with the largest jump size, most reliably. The behaviour of $\mathcal{T}^0$ and $\mathcal{T}zh$ changes dramatically when $\boldsymbol{\varepsilon}_t$ is generated from (N2) with a strong factor ($\varrho_h = 0.9$): the performance of $\mathcal{T}^0$ improves (identifying all three change-points in over $70\%$ of the simulated data), whereas that of $\mathcal{T}zh$ deteriorates greatly. As observed in Section \ref{sec:sim:single:res}, using a larger scaling term $\gamma_n$ may improve the performance of $\mathcal{T}zh$ in this setting. While the performance of $\mathcal{T}s$ is better when $\boldsymbol{\varepsilon}_t$ is generated from (N2) rather than (N1), the choice of threshold for $\mathcal{T}s$ turns out to be too conservative overall. Although each $\varepsilon_{j, t}$ is generated from the identical model, employing $n$ thresholds for the $n$-dimensional panel data may lead to better small sample performance of $\mathcal{T}s$.

\begin{table}[htbp]
\caption{Summary of the total number of estimated change-points and their location accuracy: $n = 250$, $T = 250$ and $\alpha^*=0.05$.}
\label{table:multi}
\centering
\scriptsize{
\begin{tabular}{c|c|c|cccccc|ccc}
\hline\hline
& & & \multicolumn{6}{c}{$\widehat{N}$ (\%)} & \multicolumn{3}{c}{accuracy (\%)} \\
& $\varrho/\varrho_h$ & & 0 & 1 & 2 & \textbf{3} & 4 & $\ge$5 & $\eta_1$ & $\eta_2$ & $\eta_3$ \\ \hline
\multirow{6}{*}{(N1)} & \multirow{3}{*}{0.2} & $\mathcal{T}^0$ & 0 & 9 & 37 & 53 & 0 & 1 & 35 & 71 & 89 \\
& & $\mathcal{T}zh$ & 0 & 1 & 12 & 85 & 1 & 1 & 90 & 87 & 93 \\
& & $\mathcal{T}s$ & 8 & 66 & 26 & 0 & 0 & 0 & 0 & 37 & 77 \\ \cline{2-12}
& \multirow{3}{*}{0.5} & $\mathcal{T}^0$ & 0 & 4 & 37 & 56 & 3 & 0 & 46 & 71 & 92 \\
& & $\mathcal{T}zh$ & 0 & 1 & 8 & 88 & 3 & 0 & 95 & 89 & 96 \\
& & $\mathcal{T}s$ & 4 & 67 & 29 & 0 & 0 & 0 & 0 & 46 & 75 \\ \hline
\multirow{6}{*}{(N2)} & \multirow{3}{*}{0.5} & $\mathcal{T}^0$ & 1 & 3 & 31 & 65 & 0 & 0 & 34 & 81 & 97 \\
& & $\mathcal{T}zh$ & 0 & 2 & 7 & 83 & 8 & 0 & 96 & 95 & 97 \\
& & $\mathcal{T}s$ & 2 & 21 & 76 & 1 & 0 & 0 & 2 & 83 & 90 \\ \cline{2-12}
& \multirow{3}{*}{0.9} & $\mathcal{T}^0$ & 0 & 2 & 10 & 87 & 1 & 0 & 74 & 97 & 98 \\
& & $\mathcal{T}zh$ & 3 & 35 & 37 & 24 & 1 & 0 & 44 & 38 & 92 \\
& & $\mathcal{T}s$ & 0 & 22 & 69 & 9 & 0 & 0 & 9 & 76 & 100 \\ \hline
\end{tabular}}
\end{table}

\begin{figure}[htbp]
\centering
\begin{subfigure}{.475\textwidth}
\centering
\includegraphics[width=1\linewidth]{multi_arma_2_loc.pdf}
\end{subfigure}
\begin{subfigure}{.475\textwidth}
\centering
\includegraphics[width=1\linewidth]{multi_arma_5_loc.pdf}
\end{subfigure}
\caption{(N1) with $\varrho = 0.2$ (left) and $\varrho = 0.5$ (right): the locations of the change-points detected by the BS algorithm in combination with $\mathcal{T}^0$ (top), $\mathcal{T}zh$ (middle) and $\mathcal{T}s$
(bottom); vertical lines indicate the locations of the true $\eta_r$, $r=1, 2, 3$.}
\label{fig:multi:one}
\end{figure}

\begin{figure}[htbp]
\centering
\begin{subfigure}{.475\textwidth}
\centering
\includegraphics[width=1\linewidth]{multi_factor_5_loc.pdf}
\end{subfigure}
\begin{subfigure}{.475\textwidth}
\centering
\includegraphics[width=1\linewidth]{multi_factor_9_loc.pdf}
\end{subfigure}
\caption{(N2) with $\varrho_h = 0.5$ (left) and $\varrho_h = 0.9$ (right).}
\label{fig:multi:two}
\end{figure}

\section{Application to financial time series data} \label{sec:real}

We analysed the log returns of the closing prices of all S\&P $100$ component stocks between February $24$, $2015$ and February $23$, $2016$, denoted by $y_{i, t}, \, i=1, \ldots, \widetilde{n}; \, t=1, \ldots, T$ with $\widetilde{n} = 88$ (only those components which remained in the index for a longer period were included) and $T=252$. \cite{jirak2014} analysed a similar financial dataset for a single change in the mean and in the variance, separately. However, considering that (i) log returns are often modelled to have zero mean and time-varying conditional variance using conditionally heteroscedastic models, and (ii) it is difficult to rule out the possible existence of multiple change-points, we chose to perform the change-point analysis on the (unconditional) second-order structure of $y_{i, t}$ using the DCBS algorithm. For this purpose, wavelet-based periodogram and cross-periodogram sequences of $y_{i, t}$ were computed, which also served as the input panel data to the SBS-MVTS algorithm in \cite{cho2015}. Any change-point in the autocovariance and cross-covariance structure of $y_{i, t}$ is detectable by examining the wavelet (cross-)periodogram sequences; for further details, see Section 3.1 of \cite{cho2015}. We used Haar wavelets at the two finest scales to produce the periodogram sequences, which are denoted by $x_{j, t}, \, j=1, \ldots, n = \widetilde{n}(\widetilde{n}+1); \, t=1, \ldots, T$. The DCBS algorithm with $\mathcal{D}zh_m$ detects a single change-point at $t = 220$, which corresponds to January $6$, $2016$. It has been noted that the first week of trading in $2016$ marked the worst five-day start to a year ever, according to S\&P Dow Jones Indices (Financial Times, \url{http://www.ft.com/fastft/2016/01/07/sp-500-logs-worst-annual-kick-off-on-record/}). For example, the S\&P $500$ index dropped by $4.9\%$ during this period, and the Dow Jones Industrial Average by $6.19\%$. Figure \ref{fig:real} shows $y_{i, t}$ (left) and the pointwise maximum of the DC statistics at the first iteration of the DCBS algorithm (right), where this behaviour of the financial market at the beginning of $2016$ is reflected as a large peak in the latter.
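As an illustration of how such an input panel can be formed, the sketch below (in Python, for illustration only) computes Haar wavelet periodogram and cross-periodogram sequences at the two finest scales from an $\widetilde{n} \times T$ matrix of returns; the precise construction and normalisation follow Section 3.1 of \cite{cho2015} and may differ in detail from this simplified version, and the simulated matrix \texttt{y} merely stands in for the actual log return data.
\begin{verbatim}
import numpy as np

def haar_coeffs(y, scale):
    """Haar wavelet coefficients of each row of y (n x T) at the given scale (1 = finest)."""
    h = 2 ** (scale - 1)
    c = np.zeros_like(y, dtype=float)   # first 2h - 1 time points are left at zero (boundary)
    for t in range(2 * h - 1, y.shape[1]):
        c[:, t] = (y[:, t - h + 1:t + 1].sum(axis=1)
                   - y[:, t - 2 * h + 1:t - h + 1].sum(axis=1)) / np.sqrt(2 * h)
    return c

def wavelet_panel(y, scales=(1, 2)):
    """Stack periodogram and cross-periodogram sequences w_i * w_k (i <= k) over the given scales."""
    rows = []
    for s in scales:
        w = haar_coeffs(y, s)
        i, k = np.triu_indices(y.shape[0])
        rows.append(w[i] * w[k])
    return np.vstack(rows)   # two scales give n_tilde * (n_tilde + 1) rows in total

rng = np.random.default_rng(0)
y = 0.01 * rng.standard_normal((88, 252))   # placeholder for the log returns y_{i,t}
x = wavelet_panel(y)
print(x.shape)                              # (88 * 89, 252) = (7832, 252)
\end{verbatim}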
\begin{figure}[htbp]
\centering
\includegraphics[width=1\linewidth]{snp100.pdf}
\caption{Log returns of S\&P $100$ component stock prices ($y_{i, t}$) between February $24$, $2015$ and February $23$, $2016$ (left); pointwise maximum of $\mathcal{D}zh_m(\{|\mathcal{X}^{(j)}_{1, b, T}|\}_{j=1}^{n})$ over $b=1, \ldots, T-1$ (right); the vertical broken line denotes the location of the estimated change-point.}
\label{fig:real}
\end{figure}

\section{Conclusions} \label{sec:conc}

In this paper, we have proposed the DC statistic, a novel way of aggregating high-dimensional CUSUM series across the panel for change-point analysis, and shown its consistency in single and multiple change-point detection both theoretically and empirically. We conclude by listing possible directions for future research.
\begin{itemize}
\item The DC statistic can be applied to detect changes in high-dimensional time series beyond those in the mean of panel data. For example, the DCBS algorithm is easily extended to detect change-points in both the autocovariance and the cross-covariance structure of $n$-dimensional time series, by taking panel data consisting of local periodogram and cross-periodogram sequences as an input; see the real data analysis in Section \ref{sec:real}. Moreover, the DC operator may be regarded as a generic tool for aggregating multiple series of statistics cross-sectionally, the result of which can be utilised for panel data analysis beyond change-point detection.
\item Empirically, it was shown that $\mathcal{T}zh$ generally outperforms $\mathcal{T}^\varphi$ for $\varphi \in \{0, 1/2\}$, while in the presence of strong cross-correlations, $\mathcal{T}^0$ performs better than the rest. This opens up several possibilities for future research, including the investigation of the ``optimal'' way of exploiting the information contained in the infinite array of DC statistics over $\varphi \in [0, 1]$.
\item Proposed here for test criterion selection, the GDFM-boot algorithm showed good empirical performance, but it remains to investigate the validity of its application to change-point analysis by establishing that the bootstrap scheme mimics the correct second-order structure for a large class of time series processes. The GDFM-boot will also be applicable to a wide range of inference and estimation problems involving high-dimensional time series beyond the context of change-point analysis. For example, a key task in factor analysis is to estimate the number of common factors that drive the pervasive cross-sectional correlations, and several information criterion-type estimators have been proposed. However, there has been little attempt at statistical inference on the factor number, e.g., by constructing a confidence interval for it, and the GDFM-boot can be adopted for such tasks.
\end{itemize}

\section{Proofs} \label{sec:pf}

\subsection{Preliminary results} \label{sec:pf:pre}

We first prove a set of lemmas that serve as stepping stones for the proofs of Theorems \ref{thm:zero}--\ref{thm:four}. We assume that (A1)--(A3) and (A6) hold in all the lemmas where applicable, and the notations $C_i, \, c_i, \ i=0, 1, \ldots$ are adopted to denote fixed positive constants throughout the proofs. Further, $\widetilde\varepsilon_{j, t}$ denotes $\varepsilon_{j, t}$ scaled with respect to an estimator $\widehat\sigma_j$ satisfying (A6), and $\widetilde{x}_{j, t}$ and $\widetilde{f}_{j, t}$ are defined similarly. Throughout, we operate within $\mathcal{E}_\sigma$ defined in (A6).
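Since all of the lemmas below are phrased in terms of the CUSUM quantity $\mathcal{C}_b$ and its cross-sectional aggregation, the following minimal sketch (in Python, for illustration only) may serve as a computational companion to the arguments. It follows the representation of $\mathcal{D}^\varphi_m$ through the ordered absolute CUSUM values that is used repeatedly below; the formal definitions given earlier in the paper are authoritative, and the toy data as well as the choice of $\varphi$ are ours.
\begin{verbatim}
import numpy as np

def cusum(x, s, b, e):
    """C_b({x_t}_{t=s}^e): scaled contrast of left and right partial sums (1-based, s <= b < e)."""
    xs = np.asarray(x, dtype=float)[s - 1:e]
    left, right = xs[:b - s + 1], xs[b - s + 1:]
    N, nl, nr = e - s + 1, b - s + 1, e - b
    return np.sqrt(nr / (N * nl)) * left.sum() - np.sqrt(nl / (N * nr)) * right.sum()

def dc_stat(cusums, m, phi):
    """D^phi_m: contrast between the m largest and the remaining ordered absolute CUSUM values."""
    a = np.sort(np.abs(cusums))[::-1]            # |X^{(1)}| >= ... >= |X^{(n)}|
    n = a.size
    scale = (m * (2 * n - m) / (2 * n)) ** phi
    return scale * (a[:m].mean() - a[m:].sum() / (2 * n - m))

# toy panel: a mean shift at t = 50 in the first 20 out of n = 100 coordinates
rng = np.random.default_rng(2)
n, T, b = 100, 100, 50
x = rng.standard_normal((n, T))
x[:20, b:] += 1.0
per_series = np.array([cusum(x[j], 1, b, T) for j in range(n)])
print(max(dc_stat(per_series, m, phi=0.5) for m in range(1, n + 1)))
\end{verbatim}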
\mathbf{e}gin{lem} \label{lem:one} Defining the set $\mathcal{I}_1 = \{(s, e):\, 1 \le s < e \le T, \ e-s+1 > d_T = [C\log^2\,T]\}$ and the event $\mathcal{E}_1 = \{\max_{1 \le j \le n}\max_{(s, e) \in \mathcal{I}_1}(e-s+1)^{-1/2}\vert\sum_{t=s}^e \widetilde\varepsilon_{j, t} \vert \le \log\,T\}$ with some fixed constant $C>0$, we have $\mathbb{P}(\mathcal{E}_1) \to 1$ as $T \to \infty$. \end{lem} \mathbf{e}gin{proof} We first study the following probability \mathbf{e}gin{eqnarray} \label{lem:one:event} \mathbb{P}\left(\frac{1}{\sqrt{e-s+1}}\left\vert\sum_{t=s}^e \widetilde\varepsilon_{j, t} \right\vert > \log\,T\right). \end{eqnarray} Let $d = e-s+1$. Theorem 1.4 of \cite{bosq1998} showed that under the conditions imposed in (A1), the probability in (\ref{lem:one:event}) is bounded from the above by \mathbf{e}gin{align} & \left\{\frac{2d}{q} + 2\left(1+\frac{\log^2\,T}{25d\mathbb{E}(\widetilde\varepsilon_{j, t}^2)+5C_{\varepsilon}\sqrt{d}\log\,T}\right)\right\} \exp\left(-\frac{q\log^2\,T}{25d\mathbb{E}(\widetilde\varepsilon_{j, t}^2)+5C_{\varepsilon}\sqrt{d}\log\,T}\right) \nonumber \\ & + 11C_\alpha d\left[1+\frac{5\{\mathbb{E}(\widetilde\varepsilon_{j, t}^2)\}^{2/5}\sqrt{d}}{\log\,T}\right](\mu^{[d/(q+1)]})^{4/5} \label{lem:one:eq} \end{align} where $q \in \{1, \ldots, [d/2]\}$. With the choice $q = [c_qd/\log\,T]$ for some $c_q > 0$, (\ref{lem:one:eq}) is bounded by \mathbf{e}gin{flalign*} & 2\left(\frac{\log\,T}{c_q} + 2\right)\exp\left\{-\frac{c_q\log\,T}{25\mathbb{E}(\widetilde\varepsilon_{j, t}^2)+5C_{\varepsilon}C^{-1/2}}\right\} + 11C_\alpha d\left[1+\frac{5\{\mathbb{E}(\widetilde\varepsilon_{j, t}^2)\}^{2/5}\sqrt{d}}{\log\,T}\right] \times \\ & \exp\left\{-\frac{4\log\,T}{5c_q\log(1/\mu)}\right\} \le C_0T^{3/2}\log^{-1}T \cdot T^{-C_1} \end{flalign*} for some $C_0, C_1 > 0$, where the latter depends on $\mu$, $c_q, C_{\varepsilon}, C$ and $\sigma^{*2}/\sigma_*^2$. More specifically, we can impose appropriate conditions on the above parameters in order that $C_1 > 7/2+\omega$. Therefore, $\mathbb{P}(\mathcal{E}_1^c)$ is bounded from the above by $nT^2 \cdot C_0T^{3/2}\log^{-1}T \cdot T^{-C_1} \to 0$ as $T \to \infty$. \end{proof} \mathbf{e}gin{lem} \label{lem:two} Define the set $\mathcal{I}_2 = \{(s, b, e):\, 1 \le s < b < e \le T, \ (b-s+1) \wedge (e-b) > d_T\}$ with $d_T$ given in Lemma \ref{lem:one}, and the event $\mathcal{E}_2 = \{\max_{1 \le j \le n}\max_{(s, b, e)\in\mathcal{I}_2} |\mathcal{C}_b(\{\widetilde\varepsilon_{j, t}\}_{t=s}^e)| \le \sqrt{2}\log\,T\}$. Then as $T \to \infty$, we have $\mathbb{P}(\mathcal{E}_2) \to 1$. \end{lem} \mathbf{e}gin{proof} Note that $\mathcal{I}_2 \subset \mathcal{I}_1$, and \mathbf{e}gin{align*} |\mathcal{C}_b(\{\widetilde\varepsilon_{j, t}\}_{t=s}^e)| &\le \sqrt{\frac{e-b}{(e-s+1)(b-s+1)}}\left\vert\sum_{t=s}^b \widetilde\varepsilon_{j, t} \right\vert + \\ & \sqrt{\frac{b-s+1}{(e-s+1)(e-b)}}\left\vert\sum_{t=b+1}^e \widetilde\varepsilon_{j, t} \right\vert = I + II. \end{align*} On the event $\mathcal{E}_1$ defined in Lemma \ref{lem:one}, \mathbf{e}gin{eqnarray*} I = \sqrt{\frac{e-b}{e-s+1}} \cdot \frac{1}{\sqrt{b-s+1}}\left\vert \sum_{t=s}^b \widetilde\varepsilon_{j, t} \right\vert \le \sqrt{\frac{e-b}{e-s+1}}\log\,T, \end{eqnarray*} and similarly $II \le \sqrt{(b-s+1)/(e-s+1)} \log\,T$, uniformly over all $j$ and $(s, b, e)\in\mathcal{I}_2$. 
Hence \mathbf{e}gin{eqnarray*} \max_{1 \le j \le n}\max_{(s, b, e)\in\mathcal{I}_2} |\mathcal{C}_b(\{\widetilde\varepsilon_{j, t}\}_{t=s}^e)| &\le& \max_{(s, b, e)\in\mathcal{I}_2} \left\{ \sqrt{\frac{e-b}{e-s+1}} + \sqrt{\frac{b-s+1}{e-s+1}} \right\}\log\,T \\ &\le& \sqrt{2}\log\,T, \end{eqnarray*} with probability tending to one as $T \to \infty$, and therefore $\mathbb{P}(\mathcal{E}_2^c) \le \mathbb{P}(\mathcal{E}_2^c|\mathcal{E}_1)\mathbb{P}(\mathcal{E}_1) + \mathbb{P}(\mathcal{E}_1^c) \to 0$. \end{proof} Next, we introduce $N$ additive models $y_{r, t} = g_{r, t} + \xi_{r, t}$, $r=1, \ldots, N$, which play a vital role in the following proofs. For each $r$, let $\{k^r_1, \ldots, k^r_n\}$ denote a permutation of $\{1, \ldots, n\}$, and $\{i^r_1, \ldots, i^r_n\}$ a set of signs (taking values from \{-1, 1\} with repetitions), for which the followings hold: \mathbf{e}gin{center} $i^r_1 \cdot \delta_{k^r_1, r} \ge i^r_2 \cdot \delta_{k^r_2, r} \ge \ldots \ge i^r_n \cdot \delta_{k^r_n, r} \ge 0$ \end{center} (since $\delta_{k^r_j, r} = f_{k^r_j, \eta_r+1} - f_{k^r_j, \eta_r} = 0$ for all $j \ge m_r+1$, the ordering and the set of signs are not unique). Then $y_{r, t}$, $g_{r, t}$ and $\xi_{r, t}$ are defined as \mathbf{e}gin{align} y_{r, t} &= \left\{\frac{m_r(2n-m_r)}{2n}\right\}^\mbox{var}phi\left\{\frac{1}{m_r}\sum_{j=1}^{m_r}i^r_j \cdot \widetilde{x}_{k^r_j, t} - \frac{1}{2n-m_r}\sum_{j=m_r+1}^{n}i^r_j \cdot \widetilde{x}_{k^r_j, t}\right\}, \label{def:y:r} \\ g_{r, t} &= \left\{\frac{m_r(2n-m_r)}{2n}\right\}^\mbox{var}phi\left\{\frac{1}{m_r}\sum_{j=1}^{m_r}i^r_j \cdot \widetilde{f}_{k^r_j, t} - \frac{1}{2n-m_r}\sum_{j=m_r+1}^{n}i^r_j \cdot \widetilde{f}_{k^r_j, t}\right\}, \label{def:g:r} \\ \xi_{r, t} &= \left\{\frac{m_r(2n-m_r)}{2n}\right\}^\mbox{var}phi\left\{\frac{1}{m_r}\sum_{j=1}^{m_r}i^r_j \cdot \widetilde\varepsilon_{k^r_j, t} - \frac{1}{2n-m_r}\sum_{j=m_r+1}^{n}i^r_j \cdot \widetilde\varepsilon_{k^r_j, t}\right\}. \label{def:xi:r} \end{align} By its definition, $\{g_{r, t}\}_{t=1}^T$ is a piecewise constant signal with change-points at $t=\eta_1, \ldots, \eta_N$, and its jump at $t=\eta_r$ is of size satisfying the following: \mathbf{e}q \label{min:jump} g_{r, \eta_r+1} - g_{r, \eta_r} = \left\{\frac{m_r(2n-m_r)}{2n}\right\}^\mbox{var}phi\frac{1}{m_r}\sum_{j\in\Pi_r}\frac{|\delta_{j, r}|}{\widetilde\sigma_j} \ge \left\{\frac{m_rn}{2n}\right\}^\mbox{var}phi\frac{2\widetilde{\delta}_r}{3\sigma^*} \ge c_1m_r^\mbox{var}phi \widetilde{\delta}_r. \end{eqnarray} Then Lemma \ref{lem:one} implies that for all $r = 1, \ldots, N$, \mathbf{e}gin{eqnarray} \label{xi:bound:one} \max_{(s, e) \in \mathcal{I}_1} \frac{1}{\sqrt{e-s+1}} \left\vert \sum_{t=s}^e\xi_{r, t} \right\vert \le 2m_r^\mbox{var}phi\log\,T \le 2n^\mbox{var}phi\log\,T \end{eqnarray} with probability converging to one, since within the event $\mathcal{E}_1$ of Lemma \ref{lem:one}, \mathbf{e}gin{align*} \frac{1}{\sqrt{e-s+1}}\left\vert\sum_{t=s}^e\xi_{r, t}\right\vert &\le \left\{\frac{m_r(2n-m_r)}{2n}\right\}^\mbox{var}phi\left\{\frac{1}{m_r}\sum_{j=1}^{m_r}\left\vert \frac{1}{\sqrt{e-s+1}}\sum_{t=s}^e \varepsilon_{k^r_j, t} \right\vert \right. \\ & + \left.\frac{1}{2n-m_r}\sum_{j=m_r+1}^{n}\left\vert \frac{1}{\sqrt{e-s+1}}\sum_{t=s}^e \varepsilon_{k^r_j, t} \right\vert\right\} \le 2m_r^\mbox{var}phi\log\,T. 
\end{align*} Similarly, Lemma \ref{lem:two} implies that for all $r$, \mathbf{e}gin{eqnarray} \label{xi:bound:two} \max_{(s, b, e)\in\mathcal{I}_2} |\mathcal{C}_b(\{\xi_{r, t}\}_{t=s}^e)| \le 2\sqrt{2}m_r^\mbox{var}phi\log\,T \le 2\sqrt{2}n^\mbox{var}phi\log\,T \end{eqnarray} with probability tending to one. Now we consider a generic additive model \mathbf{e}gin{eqnarray} \label{def:add} y_t = g_t + \xi_t, \end{eqnarray} where $y_t$, $g_t$ and $\xi_t$ are obtained from the panel data $\{x_{j, t}\}_{t=1}^T, \ j=1, \ldots, n$ in the same manner as $y_{r, t}$, $g_{r, t}$ and $\xi_{r, t}$ in (\ref{def:y:r})--(\ref{def:xi:r}), with respect to some $m\in\{1, \ldots, n\}$, a permutation of index $\{k_1, \ldots, k_n\}$ and a sign sequence $\{i_1, \ldots, i_n\}$. Then $g_t$ is a piecewise constant signal with change-points at $t=\eta_1, \ldots, \eta_N$ satisfying (B1). Also, it is easily seen that the inequalities (\ref{xi:bound:one})--(\ref{xi:bound:two}) hold with the zero-mean noise series $\xi_t$ in place of $\xi_{r, t}$ with probability converging to one. Recall that $s$ and $e$ denote the start and the end of an interval which is examined at some stage of our search for the change-points. Let $s$ and $e$ satisfy \mathbf{e}gin{eqnarray} \label{lem:cond:zero} \eta_{q_1} \le s < \eta_{q_1+1} < \ldots < \eta_{q_2} < e \le \eta_{q_2+1} \end{eqnarray} for $0 \le q_1 < q_2 \le N$. In some of the following lemmas, we impose at least one of following conditions: \mathbf{e}gin{eqnarray} s< \eta_q-c_2T^\mathbf{e}ta < \eta_q+c_2T^\mathbf{e}ta < e \mbox{ for some } q\in\{q_1+1, \ldots, q_2\}, \label{lem:cond:one} \\ \{(\eta_{q_1+1}-s)\wedge(s-\eta_{q_1})\} \vee \{(\eta_{q_2+1}-e)\wedge(e-\eta_{q_2})\} \le c_3\bar{\epsilon}_T, \label{lem:cond:two} \end{eqnarray} with $\bar{\epsilon}_T$ to be defined later. The condition (\ref{lem:cond:one}) implies that there is at least one change-point to be detected which is sufficiently distanced away from the previously detected change-points $s$ and $e$, and (\ref{lem:cond:two}) indicates that each of $s$ and $e$ is detected for one of the true change-points. Since the CUSUM statistics are not affected by the shift in the overall level of $g_t$, we assume that $\sum_{t=s}^eg_t = 0$ without loss of generality. Then the CUSUM statistics computed on $\{g_t\}_{t=s}^e$ can be re-written as $\mathcal{C}_b(\{g_t\}_{t=s}^e) = \sqrt{\frac{e-s+1}{(b-s+1)(e-b)}}\sum_{t=s}^bg_t$, $b=s, \ldots, e-1$. \mathbf{e}gin{lem} \label{lem:three} For $s$ and $e$ satisfying (\ref{lem:cond:zero}), there exists $q'\in\{q_1+1, \ldots, q_2\}$ which satisfies $\eta_{q'} = \arg\max_{b\in[s, e)}|\mathcal{C}_b(\{g_t\}_{t=s}^e)|$. \end{lem} \mathbf{e}gin{proof} The proof follows directly from Lemmas 2.2--2.3 of \cite{venkatraman1992}. \end{proof} \mathbf{e}gin{lem} \label{lem:four} Let (\ref{lem:cond:one}) and (\ref{lem:cond:two}) hold. 
For $\widehat{\eta} = \arg\max_{b\in[s, e)}|\mathcal{C}_b(\{y_t\}_{t=s}^e)|$, there exists a true change-point $\eta_q \equiv \eta\in(s, e)$ satisfying $|\widehat{\eta} - \eta| < c_0\bar{\epsilon}_T$ with probability converging to one, provided that
\begin{eqnarray*}
&& |\mathcal{C}_{\eta}(\{g_t\}_{t=s}^e)||\mathcal{C}_{\eta}(\{g_t\}_{t=s}^e)-\mathcal{C}_{\widehat{\eta}}(\{g_t\}_{t=s}^e)| > C_1n^\varphi\log\,T \times \\
&& \max\left\{\begin{array}{l} n^\varphi\log\,T, \\ \textstyle{\sqrt{\eta-s+1}\left\vert\frac{1}{\widehat{\eta}-s+1}\sum_{t=s}^{\widehat{\eta}}g_t - \frac{1}{\eta-s+1}\sum_{t=s}^{\eta}g_t\right\vert}, \\ \textstyle{\sqrt{\bar{\epsilon}_T}\left\vert\frac{1}{\widehat{\eta}-s+1}\sum_{t=s}^{\widehat{\eta}}g_t - \frac{1}{e-\eta}\sum_{t=\eta+1}^e g_t \right\vert} \end{array}\right\} \quad \mbox{for some } C_1>0.
\end{eqnarray*}
\end{lem}
\begin{proof}
The following proof is an adaptation of the proof of Theorem \ref{thm:two}.1 in \cite{piotr2014} to a non-i.i.d. case. On the segment $[s, e]$, detecting a change-point is equivalent to fitting the best step function $\widehat{g}_t$ (a piecewise constant function with one change-point) which minimises $\sum_{t=s}^e(y_t-h_t)^2$ among all step functions $h_t$ defined on $[s, e]$. Let $g_t^*$ be the best step function approximation to $g_t$ on $[s, e]$, which may not be unique. From Lemma \ref{lem:three}, $g_t^*$ needs to have its change-point $\eta$ coincide with one of the true change-points $\eta_q, \, q\in\{q_1+1, \ldots, q_2\}$. Let us assume that $|\widehat{\eta}-\eta| \ge c_0\bar{\epsilon}_T$. Since $|\mathcal{C}_b(\{g_t\}_{t=s}^e)|$ is either monotonic, or decreasing and then increasing in $b$ between two adjacent change-points of $g_t$ (Lemma 2.7 of \cite{venkatraman1992}), it is enough to consider the case when $\widehat{\eta}$ satisfies $|\widehat{\eta} - \eta| = c_0\bar{\epsilon}_T$. Then, if it is shown that
\begin{eqnarray}
\label{lem:four:eq:one}
\sum_{t=s}^e(y_t-g_t^*)^2 - \sum_{t=s}^e(y_t-\widehat{g}_t)^2 < 0,
\end{eqnarray}
the claim follows by contradiction. Expanding the LHS of (\ref{lem:four:eq:one}), we obtain
\begin{align*}
& \sum_{t=s}^e (\xi_t+g_t-g_t^*)^2 - \sum_{t=s}^e (\xi_t+g_t-\widehat{g}_t)^2 = 2\sum_{t=s}^e \xi_t(\widehat{g}_t-g_t^*) \\
& + \sum_{t=s}^e \{(g_t-g_t^*)^2 - (g_t-\widehat{g}_t)^2\} = I + II.
\end{align*}
Clearly, $II<0$ from the definition of $g_t^*$. Let $\boldsymbol{\Psi}$ be the set of vectors of length $(e-s+1)$ whose elements are initially positive and constant, then after a break, are negative and constant; moreover, the elements sum to zero and, when squared, sum to one. Since we assume that $\sum_{t=s}^e g_t = 0$, we can find a vector $\boldsymbol{\psi}^*\in\boldsymbol{\Psi}$ satisfying $\mathbf{g}^* = \inner{\mathbf{g}}{\boldsymbol{\psi}^*}\boldsymbol{\psi}^*$, where $\mathbf{g}=(g_s, \ldots, g_e)^\top$ and $\mathbf{g}^*=(g_s^*, \ldots, g_e^*)^\top$. Then we have
\begin{eqnarray*}
\sum_{t=s}^e(g_t-g_t^*)^2 = \sum_{t=s}^e g_t^2 - \inner{\mathbf{g}}{\boldsymbol{\psi}^*}^2.
\end{eqnarray*}
Let a step function $\widetilde{g}_t$ be chosen to minimise $\sum_{t=s}^e(g_t - h_t)^2$ among all the step functions $h_t$ defined on $[s, e]$, under the constraint that $h_t$ has its change-point at $t=\widehat{\eta}$. For such $\widetilde{g}_t$, we have $\sum_{t=s}^e(g_t-\widetilde{g}_t)^2 \le \sum_{t=s}^e(g_t-\widehat{g}_t)^2$.
Again, there exists a vector $\widetilde{\boldsymbol{\psi}}\in\boldsymbol{\Psi}$ satisfying $\widetilde{\mathbf{g}} = \inner{\mathbf{g}}{\widetilde{\boldsymbol{\psi}}}\widetilde{\boldsymbol{\psi}}$ with $\widetilde{\mathbf{g}} = (\widetilde{g}_s, \ldots, \widetilde{g}_e)^\top$. Then \mathbf{e}gin{eqnarray} |II| &\ge& \sum_{t=s}^e\{(g_t-\widetilde{g}_t)^2 - (g_t-g_t^*)^2\} = \inner{\mathbf{g}}{\boldsymbol{\psi}^*}^2 - \inner{\mathbf{g}}{\widetilde{\boldsymbol{\psi}}}^2 \nonumber \\ &=& (\inner{\mathbf{g}}{\boldsymbol{\psi}^*}+\inner{\mathbf{g}}{\widetilde{\boldsymbol{\psi}}})(\inner{\mathbf{g}}{\boldsymbol{\psi}^*}-\inner{\mathbf{g}}{\widetilde{\boldsymbol{\psi}}}) \ge |\inner{\mathbf{g}}{\boldsymbol{\psi}^*}||\inner{\mathbf{g}}{\boldsymbol{\psi}^*} - \inner{\mathbf{g}}{\widetilde{\boldsymbol{\psi}}}|, \label{lem:four:eq:two} \end{eqnarray} since $|\mathcal{C}_\eta(\{g_t\}_{t=s}^e)| = |\inner{\mathbf{g}}{\boldsymbol{\psi}^*}| \ge |\inner{\mathbf{g}}{\widetilde{\boldsymbol{\psi}}}| = |\mathcal{C}_{\widehat{\eta}}(\{g_t\}_{t=s}^e)|$. Turning to $I$, it is decomposed as \mathbf{e}gin{eqnarray*} 2\sum_{t=s}^e \xi_t(\widehat{g}_t-g_t^*) = 2\sum_{t=s}^e \xi_t(\widehat{g}_t-\widetilde{g}_t) + 2\sum_{t=s}^e \xi_t(\widetilde{g}_t-g_t^*), \end{eqnarray*} and each of the two terms are split into sub-sums computed over the intervals where $\widehat{g}_t-\widetilde{g}_t$ and $\widetilde{g}_t-g_t^*$ are constant, respectively. Letting $\widehat{\eta} > \eta$ without loss of generality, we have \mathbf{e}gin{eqnarray*} \sum_{t=s}^e \xi_t(\widetilde{g}_t-g^*_t) = \left(\sum_{t=s}^\eta + \sum_{t=\eta+1}^{\widehat{\eta}} + \sum_{t=\widehat{\eta}+1}^e\right)\xi_t(\widetilde{g}_t-g^*_t) = III + IV + V. \end{eqnarray*} Then $|III| \le 2n^\mbox{var}phi\log\,T\sqrt{\eta-s+1} \vert (\widehat{\eta}-s+1)^{-1}\sum_{t=s}^{\widehat{\eta}}g_t - (\eta-s+1)^{-1}\sum_{t=s}^{\eta}g_t\vert$ with probability tending to one, following (\ref{xi:bound:one}). $|V|$ is of the same order as $|III|$, and similar arguments lead to $|IV| \le 2n^\mbox{var}phi\log\,T \sqrt{\widehat{\eta}-\eta+1} \vert(\widehat{\eta}-s+1)^{-1}\sum_{t=s}^{\widehat{\eta}}g_t - (e-\eta)^{-1}\sum_{t=\eta+1}^e g_t\vert$. As for $\sum_{t=s}^e\xi_t(\widehat{g}_t-\widetilde{g}_t)$, we have \mathbf{e}gin{eqnarray*} \sum_{t=s}^e\xi_t(\widehat{g}_t-\widetilde{g}_t) = \left(\sum_{t=s}^{\widehat{\eta}}+\sum_{t=\widehat{\eta}+1}^{e}\right)\xi_t(\widehat{g}_t-\widetilde{g}_t) = VI+VII. \end{eqnarray*} $|VI|$ and $|VII|$ are of the same order, and with probability converging to one, \mathbf{e}gin{eqnarray*} |VI| = |\sum_{t=s}^{\widehat{\eta}}\xi_t| \cdot \frac{1}{\widehat{\eta}-s+1}\left\vert\sum_{t=s}^{\widehat{\eta}}(y_t-g_t)\right\vert = \frac{1}{\widehat{\eta}-s+1}\left(\sum_{t=s}^{\widehat{\eta}}\xi_t\right)^2 \le 4n^{2\mbox{var}phi}\log^2\,T. \end{eqnarray*} Putting together (\ref{lem:four:eq:two}) and the upper bound on $|III|$--$|VII|$, we have the dominance of $II$ over $I$ under the conditions given in the lemma. \end{proof} \subsection{Proofs of Theorems \ref{thm:zero}--\ref{thm:one}} \label{sec:pf:thm:one} Throughout the section, we assume (A1)--(A3) and (A6), and (A4)--(A5) where applicable. In the problem of detecting (at most) a single change-point, (\ref{lem:cond:one})--(\ref{lem:cond:two}) are met with $s=1$ and $e=T$ under (A4) with $\bar{\epsilon}_T = (n^{\mbox{var}phi}/m_1)^2\widetilde{\delta}_1^2\log^2\,T$. 
For $(\widehat{\eta}_1, \widehat{m}_1) = \arg\max_{b\in[1, T]\setminus\mathcal{I}_{1, T}, \, 1 \le m \le n} \mathcal{D}^\mbox{var}phi_m(\{|\mathcal{X}^{(j)}_{1, b, T}|\}_{j=1}^n)$, let $\{k^0_1, \ldots, k^0_n\}$ denote a permutation of $\{1, 2, \ldots, n\}$ satisfying $|\mathcal{X}^{k^0_1}_{1, \widehat{\eta}_1, T}| \ge |\mathcal{X}^{k^0_2}_{1, \widehat{\eta}_1, T}| \ge \ldots \ge |\mathcal{X}^{k^0_n}_{1, \widehat{\eta}_1, T}|$, and $i^0_j\in\{-1, 1\}$ satisfy $|\mathcal{X}^{k^0_j}_{1, \widehat{\eta}_1, T}| = i^0_j \cdot \mathcal{X}^{k^0_j}_{1, \widehat{\eta}_1, T}$ for all $j=1, \ldots, n$. With $\widehat{m}_1$, $\{k^0_j\}$ and $\{i^0_j\}$ replacing $m_r$, $\{k^r_j\}$ and $\{i^r_j\}$, respectively, we have an additive model $y_{0, t} = g_{0, t} + \xi_{0, t}$ with its components defined in the same manner as $y_{r, t}$, $g_{r, t}$ and $\xi_{r, t}$ in (\ref{def:y:r})--(\ref{def:xi:r}). Then $g_{0, t}$ is piecewise constant with (at most) one change-point at $t=\eta_1$, and $\{\xi_{0, t}\}$ satisfies (\ref{xi:bound:one})--(\ref{xi:bound:two}) in place of $\{\xi_{r, t}\}$. Note that the DC statistic at $m=\widehat{m}_1$ and $b=\widehat{\eta}_1$ can equivalently be represented using $\{y_{0, t}\}$ as \mathbf{e}gin{eqnarray*} \mathcal{D}^\mbox{var}phi_{\widehat{m}_1}(\{|\mathcal{X}^{(j)}_{1, \widehat{\eta}_1, T}|\}_{j=1}^n) = \mathcal{D}^\mbox{var}phi_{\widehat{m}_1}(\{i^0_j \cdot \mathcal{X}^{k^0_j}_{1, \widehat{\eta}_1, T}\}_{j=1}^n) = \mathcal{C}_{\widehat{\eta}_1}(\{y_{0, t}\}_{t=1}^T). \end{eqnarray*} In the case when there exists no change-point, $g_{0, t}$ is constant and therefore $\mathcal{C}_b(\{g_{0, t}\}_{t=1}^T) = 0$ for all $b$. As $(1, \widehat{\eta}_1, T)\in\mathcal{I}_2$, we have $\mathcal{D}^\mbox{var}phi_{\widehat{m}_1}(\{|\mathcal{X}^{(j)}_{1, \widehat{\eta}_1, T}|\}_{j=1}^n) = \mathcal{C}_{\widehat{\eta}_1}(\{\xi_{0, t}\}_{t=1}^T) < C'n^\mbox{var}phi\log\,T$ for some $C'>2\sqrt{2}$, which proves Theorem \ref{thm:zero}. To prove Theorem \ref{thm:one}, we need additional lemmas stated with the generic additive model in (\ref{def:add}). \mathbf{e}gin{lem} \label{lem:one:one} Assume that there exists a single change-point $\eta_1$ in $g_t$ which satisfies (A4) and $|g_{\eta_1+1}-g_{\eta_1}| \ge \delta$. Then for some $C_2>0$, \mathbf{e}gin{eqnarray} \label{lem:one:one:eq} |\mathcal{C}_{\eta_1}(\{g_t\}_{t=1}^T)| = \max_{b\in[1, T)}|\mathcal{C}_b(\{g_t\}_{t=1}^T)| = \sqrt{\frac{\eta_1(T-\eta_1)}{T}}\delta \ge C_2\delta T^{\mathbf{e}ta}. \end{eqnarray} \end{lem} \mathbf{e}gin{proof} The first equality (\ref{lem:one:one:eq}) is a direct result of Lemma \ref{lem:three}. The second equality follows from the definition of $|\mathcal{C}_{\eta_1}(\{g_t\}_{t=1}^T)|$. \end{proof} \mathbf{e}gin{lem} \label{lem:one:two} Assume that the conditions imposed in Lemma \ref{lem:one:one} are met. Then for some $\epsilon_T$, we have \mathbf{e}gin{eqnarray*} |\mathcal{C}_{\eta_1}(\{g_t\}_{t=1}^T) - \mathcal{C}_{b'}(\{g_t\}_{t=1}^T)| \ge C_3\epsilon_T|\mathcal{C}_{\eta_1}(\{g_t\}_{t=1}^T)|\frac{T}{\eta_1(T-\eta_1)} \end{eqnarray*} with any $b'$ satisfying $|b'-\eta_1| \ge c_0\epsilon_T$. \end{lem} \mathbf{e}gin{proof} Without loss of generality, let $g_{\eta_1} = g^*_1 > g^*_2 = g_{\eta_1+1}$ and that $b' > \eta_1$. 
Then \mathbf{e}gin{align*} & \textstyle{\mathcal{C}_{\eta_1}(\{g_t\}_{t=1}^T) - \mathcal{C}_{b'}(\{g_t\}_{t=1}^T) \ge \sqrt{\frac{\eta_1(T-\eta_1)}{T}}\delta - \sqrt{\frac{(\eta_1+c_0\epsilon_T)(T-\eta_1-c_0\epsilon_T)}{T}}\left(\frac{g^*_1\eta_1+g^*_2c_0\epsilon_T}{\eta_1+c_0\epsilon_T}-g^*_2\right)} \\ &= \textstyle{\sqrt{\frac{\eta_1(T-\eta_1)}{T}}\delta\left(1-\sqrt{\frac{\eta_1(T-\eta_1-c_0\epsilon_T)}{(\eta_1+c_0\epsilon_T)(T-\eta_1)}}\right) = \mathcal{C}_{\eta_1}(\{g_t\}_{t=1}^T) \cdot \frac{\sqrt{1+\frac{c_0\epsilon_T}{\eta_1}}-\sqrt{1-\frac{c_0\epsilon_T}{T-\eta_1}}}{\sqrt{1+\frac{c_0\epsilon_T}{\eta_1}}}} \\ &\ge \textstyle{\mathcal{C}_{\eta_1}(\{g_t\}_{t=1}^T) \cdot \frac{c_0\epsilon_T}{2\sqrt{2}}\frac{T}{\eta_1(T-\eta_1)},} \end{align*} where the inequality follows from the Taylor expansion. \end{proof} Lemma \ref{lem:three} implies that $|\mathcal{C}_{\eta_1}(\{g_{0, t}\}_{t=1}^T)| \ge |\mathcal{C}_{\widehat{\eta}_1}(\{g_{0, t}\}_{t=1}^T)|$ while $|\mathcal{C}_{\eta_1}(\{y_{0, t}\}_{t=1}^T)| \le |\mathcal{C}_{\widehat{\eta}_1}(\{y_{0, t}\}_{t=1}^T)|$ by assumption. From (\ref{xi:bound:two}), we have $|\mathcal{C}_b(\{g_{r, t}\}_{t=1}^T)| \ge |\mathcal{C}_b(\{y_{r, t}\}_{t=1}^T)| - c_4n^\mbox{var}phi\log\,T$ at any $b\in\mathcal{I}_{1, T}$ and $r=0, 1$ for some $c_4 > 0$. Note that for all $\mbox{var}phi\in[0, 1]$ and $1 \le m \le n$, on a given interval $[s, e]$, the ordering and the set of signs that are applied to $\{\mathcal{X}^j_{1, b, T}\}_{j=1}^n$ in order to produce $\{|\mathcal{X}^{(j)}_{1, b, T}|\}_{j=1}^n$ (identical to $\{k^0_j\}$ and $\{i^0_j\}$ when $b=\widehat{\eta}_1$), leads to the maximum value of $\mathcal{D}^\mbox{var}phi_m$ at any $b$, among all possible index permutations and the sets of signs. Based on the above observations, the following holds with probability tending to one: \mathbf{e}q && |\mathcal{C}_{\widehat{\eta}_1}(\{y_{0, t}\}_{t=1}^T)| \ge \max_{1 \le m \le n} \mathcal{D}^\mbox{var}phi_m(\{|\mathcal{X}^{(j)}_{1, \eta_1, T}|\}_{j=1}^n) \ge |\mathcal{C}_{\eta_1}(\{y_{1, t}\}_{t=1}^T)|, \mbox{ and} \nonumber \\ && |\mathcal{C}_{\widehat{\eta}_1}(\{g_{0, t}\}_{t=1}^T)| \ge |\mathcal{C}_{\eta_1}(\{g_{1, t}\}_{t=1}^T)| - 2c_4n^\mbox{var}phi \log\,T \ge \frac{c_1m_1^\mbox{var}phi\widetilde{\delta}_1}{2}\sqrt{\frac{\eta_1(T-\eta_1)}{T}}, \label{ineq:delta} \end{eqnarray} where the last inequality of (\ref{ineq:delta}) follows from (\ref{min:jump}) and Lemma \ref{lem:one:one}. Recalling the rates given above Theorem \ref{thm:zero} for $\pi^\mbox{var}phi_{n, T}$, we have $\mathcal{D}^\mbox{var}phi_{\widehat{m}_1}(\{|\mathcal{X}^{(j)}_{1, \widehat{\eta}_1, T}|\}_{j=1}^n) > \pi^\mbox{var}phi_{n, T}$. To prove the consistency in the location of the estimated change-point, let $|\widehat{\eta}_1-\eta_1|=c_0\epsilon_T$ and assume that $\widehat{\eta}_1 > \eta_1$ without loss of generality. 
We can view the problem of deriving the upper bound on the bias $|\widehat{\eta}_1-\eta_1|$ for $\widehat{\eta}_1 = \arg\max_{b\in[1, T]\setminus\mathcal{I}_{1, T}}\max_{1 \le m \le n} \mathcal{D}^\mbox{var}phi_m(\{|\mathcal{X}^{(j)}_{1, b, T}|\}_{j=1}^n)$, as that for $$\widehat{\eta}_1 = \arg\max_{b\in[1, T]\setminus\mathcal{I}_{1, T}} \mathcal{C}_b(\{y_{0, t}\}_{t=1}^T).$$ Adopting the notations in Lemma \ref{lem:one:two} with $g_{0, t}$ replacing $g_t$, \mathbf{e}qs \left\vert\frac{1}{\widehat{\eta}_1}\sum_{t=1}^{\widehat{\eta}_1}g_{0, t} - \frac{1}{\eta_1}\sum_{t=s}^{\eta_1}g_{0, t}\right\vert &=& \left\vert\frac{g^*_1\eta_1+g^*_2c_0\epsilon_T}{\widehat{\eta}_1} - g^*_1\right\vert \le |g^*_1-g^*_2|\frac{c_0\epsilon_T}{\eta_1}, \\ \left\vert\frac{1}{\widehat{\eta}_1}\sum_{t=1}^{\widehat{\eta}_1}g_{0, t} - \frac{1}{T-\eta_1}\sum_{t=\eta_1+1}^e g_{0, t}\right\vert &=& \left\vert\frac{g^*_1\eta_1+g^*_2c_0\epsilon_T}{\widehat{\eta}_1} - g^*_2\right\vert \le |g^*_1-g^*_2|, \end{eqnarray}s and (\ref{ineq:delta}) indicates that $|g^*_1-g^*_2| \ge c_5m_1^\mbox{var}phi\widetilde{\delta}_1$ for some $c_5 > 0$. Combining the above bounds with Lemmas \ref{lem:one:one}--\ref{lem:one:two}, the conditions in Lemma \ref{lem:four} are met for $\eta=\eta_1$, $\widehat{\eta}=\widehat{\eta}_1$, $s=1$, $e=T$ and $\epsilon_T = (n^{\mbox{var}phi}/m_1)^2\widetilde{\delta}_1^2\log^2\,T$, and thus Theorem \ref{thm:one} is proved. $\square$ \subsection{Proof of Theorem \ref{thm:two}} \label{sec:pf:thm:two} Throughout the section, we assume (A1)--(A3), (A6) and (B1)--(B2). Let $s, e$ satisfy (\ref{lem:cond:one})--(\ref{lem:cond:two}) with $\bar{\epsilon}_T = n^{2\mbox{var}phi}\underline{\Delta}_\mbox{var}phi^{-2} T^{5(1-\mathbf{e}ta)}\log^2\,T$. For $$ (\widehat{\eta}, \widehat{m}) = \arg\max_{b\in[s, e]\setminus\mathcal{I}_{s, e}, \, 1 \le m \le n} \mathcal{D}^\mbox{var}phi_m(\{|\mathcal{X}^{(j)}_{s, b, e}|\}_{j=1}^n), $$ let $\{k^0_1, \ldots, k^0_n\}$ denote a permutation of $\{1, 2, \ldots, n\}$ satisfying $|\mathcal{X}^{k^0_1}_{s, \widehat{\eta}, e}| \ge |\mathcal{X}^{k^0_2}_{s, \widehat{\eta}, e}| \ge \ldots \ge |\mathcal{X}^{k^0_n}_{s, \widehat{\eta}, e}|$, and $i^0_j\in\{-1, 1\}$ satisfy $|\mathcal{X}^{k^0_j}_{s, \widehat{\eta}, e}| = i^0_j \cdot \mathcal{X}^{k^0_j}_{s, \widehat{\eta}, e}$ for all $j=1, \ldots, n$. As before, with $\widehat{m}$, $\{k^0_j\}$ and $\{i^0_j\}$ replacing $m_r$, $\{k^r_j\}$ and $\{i^r_j\}$, respectively, we define an additive model $y_{0, t} = g_{0, t} + \xi_{0, t}$ with its components obtained in the same manner as those in (\ref{def:y:r})--(\ref{def:xi:r}). Then $g_{0, t}$ is a piecewise constant signal with change-points at $t=\eta_r, \ r=1, \ldots, N$, and $\{\xi_{0, t}\}$ satisfies (\ref{xi:bound:one})--(\ref{xi:bound:two}) in place of $\{\xi_{r, t}\}$. Also, the DC statistic at $m = \widehat{m}$ and $b = \widehat{\eta}$ can equivalently be represented by $\mathcal{D}^\mbox{var}phi_{\widehat{m}}(\{|\mathcal{X}^{(j)}_{s, \widehat{\eta}, e}|\}_{j=1}^n) = \mathcal{D}^\mbox{var}phi_{\widehat{m}}(\{i^0_j \cdot \mathcal{X}^{k^0_j}_{s, \widehat{\eta}, e}\}_{j=1}^n) = \mathcal{C}_{\widehat{\eta}}(\{y_{0, t}\}_{t=s}^e)$. Below we introduce additional lemmas stated with the generic additive model in (\ref{def:add}). \mathbf{e}gin{lem} \label{lem:two:one} Let $s$ and $e$ satisfy (\ref{lem:cond:one}) and assume that there exists a change-point $\eta_q\in(s, e)$ at which $(\eta_q-s+1) \wedge (e-\eta_q) > c_2T^\mathbf{e}ta$ and $|g_{\eta_q+1}-g_{\eta_q}| \ge \delta$. 
Then there exists $q'\in\{q_1+1, \ldots, q_2\}$ and $C_4>0$ such that $|\mathcal{C}_{\eta_{q'}}(\{g_t\}_{t=s}^e)| = \max_{b\in[s, e)}|\mathcal{C}_b(\{g_t\}_{t=s}^e)| \ge C_4\delta T^{\mathbf{e}ta-1/2}$. \end{lem} \mathbf{e}gin{proof} The equality part is a direct result of Lemma \ref{lem:three}. Since $|g_{\eta_q+1}| \vee |g_{\eta_q}| \ge \delta/2$, we have $\vert\sum_{t=\eta_q-c_2T^\mathbf{e}ta+1}^{\eta_q} g_t \vert \vee \vert\sum_{t=\eta_q+1}^{\eta_q+c_2T^\mathbf{e}ta} g_t \vert \ge c_2\delta T^\mathbf{e}ta/2$. Hence, \\ $\max_{b\in[s, e)}|\sum_{t=s}^b g_t| \ge c_2\delta T^\mathbf{e}ta/4$, from which it is derived that \mathbf{e}gin{eqnarray*} \max_{b\in[s, e)}|\mathcal{C}_b(\{g_t\}_{t=s}^e)| \ge \min_{b\in[s, e)}\sqrt{\frac{e-s+1}{(b-s+1)(e-b)}} \cdot \max_{b\in[s, e)}|\sum_{t=s}^b g_t| \ge C_4\delta T^{\mathbf{e}ta-1/2}. \end{eqnarray*} \end{proof} \mathbf{e}gin{lem} \label{lem:two:two} Assume that $s$ and $e$ meet the conditions (\ref{lem:cond:one})--(\ref{lem:cond:two}) and let $\eta\in(s, e)$ denote a change-point that satisfies \mathbf{e}gin{eqnarray} \label{lem:two:two:cond} |\mathcal{C}_\eta(\{g_t\}_{t=s}^e)| > \max_{b\in[s, e)} |\mathcal{C}_b(\{g_t\}_{t=s}^e)| - C_5n^\mbox{var}phi\log\,T \end{eqnarray} for some positive constant $C_5$. Then for some $\epsilon_T$ and $C_6 > 0$, we have \mathbf{e}gin{eqnarray} \label{lem:two:two:eq} |\mathcal{C}_\eta(\{g_t\}_{t=s}^e) - \mathcal{C}_{b'}(\{g_t\}_{t=s}^e)| > C_6T^{\mathbf{e}ta-2}\epsilon_T |\mathcal{C}_\eta(\{g_t\}_{t=s}^e)| \end{eqnarray} with any $b'$ satisfying $|b'-\eta| \ge \epsilon_T$. \end{lem} \mathbf{e}gin{proof} The result is a modification of Lemma 2.6 in \cite{venkatraman1992} and the arguments therein are directly applicable to show (\ref{lem:two:two:eq}). \end{proof} For any interval $[s, e]$, we define an index set $\mathcal{R}_{s, e} \subset \{1, \ldots, N\}$ as $\mathcal{R}_{s, e} = \{1 \le r \le N: \, \eta_r\in[s, e]\setminus\mathcal{I}_{s, e}\}$. Adopting the same arguments as in Section \ref{sec:pf:thm:one}, \mathbf{e}gin{align} & |\mathcal{C}_{\widehat{\eta}}(\{y_{0, t}\}_{t=s}^e)| \ge \max_{q\in\mathcal{R}_{s, e}}\max_{1 \le m \le n} \mathcal{D}^\mbox{var}phi_m(\{|\mathcal{X}^{(j)}_{s, \eta_q, e}|\}_{j=1}^n) \ge \max_{q\in\mathcal{R}_{s, e}} \max_{1 \le r \le N}|\mathcal{C}_{\eta_q}(\{y_{r, t}\}_{t=s}^e)|, \nonumber \\ & |\mathcal{C}_{\widehat{\eta}}(\{g_{0, t}\}_{t=s}^e)| \ge \max_{q\in\mathcal{R}_{s, e}}\max_{1 \le r \le N} |\mathcal{C}_{\eta_q}(\{g_{r, t}\}_{t=s}^e)| - 2c_4n^\mbox{var}phi \log\,T \ge \frac{C_4}{2} c_1\underline{\Delta}_\mbox{var}phi T^{\mathbf{e}ta-1/2}, \label{ineq:delta:bs} \end{align} where the last inequality of (\ref{ineq:delta:bs}) follows from (\ref{min:jump}), (\ref{lem:cond:one}) and Lemma \ref{lem:two:one}. \mathbf{e}gin{lem} \label{lem:two:three} Let (\ref{lem:cond:one}) and (\ref{lem:cond:two}) hold. For $\widehat{\eta} = \arg\max_{b\in[s, e]\setminus\mathcal{I}_{s, e}}|\mathcal{C}_b(\{y_{0, t}\}_{t=s}^e)|$, there exists a true change-point $\eta_q \equiv \eta\in(s, e)$ satisfying $|\widehat{\eta} - \eta| < c_0\epsilon_T$ with probability converging to one, where $\epsilon_T = n^{2\mbox{var}phi}\underline{\Delta}_\mbox{var}phi^{-2}T^{5(1-\mathbf{e}ta)}\log^2\,T$. \end{lem} \mathbf{e}gin{proof} We adopt the notations from the proof of Lemma \ref{lem:four} with $g_{0, t}$ in place of $g_t$. Recall that from Lemma \ref{lem:three}, $g_t^*$ needs to have its change-point $\eta$ coincide with one of the true change-points $\eta_{q_1+1}, \ldots, \eta_{q_2}$. 
Trivially, such $\eta$ satisfies (\ref{lem:two:two:cond}) since $|\mathcal{C}_\eta(\{g_{0, t}\}_{t=s}^e)| = \max_{b\in[s, e)} |\mathcal{C}_b(\{g_{0, t}\}_{t=s}^e)|$. Under (\ref{lem:cond:one})--(\ref{lem:cond:two}), we have either $(\eta-s+1) \wedge (e-\eta) < c_3\epsilon_T$ or $(\eta-s+1) \wedge (e-\eta) > c_2T^\mathbf{e}ta$. If the former is the case, since $|g_{0, t}| \le 2\bar{f}n^{\mbox{var}phi}$ uniformly in $t$ under (A3), $|\mathcal{C}_\eta(\{g_{0, t}\}_{t=s}^e)| \le 4\bar{f}n^{\mbox{var}phi}(c_3\epsilon_T)^{1/2} < C_4\underline{\Delta}_\mbox{var}phi T^{\mathbf{e}ta-1/2}$, which leads to contradict (\ref{ineq:delta:bs}) and thus $(\eta-s+1) \wedge (e-\eta) > c_2T^\mathbf{e}ta$. Now we turn our attention to bound the terms $|III|$ and $|IV|$ in the presence of multiple change-points. Firstly, $\textstyle{\vert(\widehat{\eta}-s+1)^{-1}\sum_{t=s}^{\widehat{\eta}}g_{0, t} - (\eta-s+1)^{-1}\sum_{t=s}^{\eta}g_{0, t}\vert}$ \mathbf{e}gin{align*} &= \textstyle{\left\vert \frac{1}{\widehat{\eta}-s+1}\sqrt{\frac{(\widehat{\eta}-s+1)(e-\widehat{\eta})}{e-s+1}}\mathcal{C}_{\widehat{\eta}}(\{g_{0, t}\}_{t=s}^e) - \frac{1}{\eta-s+1}\sqrt{\frac{(\eta-s+1)(e-\eta)}{e-s+1}}\mathcal{C}_\eta(\{g_{0, t}\}_{t=s}^e)\right\vert} \\ &= \textstyle{\frac{1}{\sqrt{e-s+1}}\left\vert \sqrt{\frac{e-\widehat{\eta}}{\widehat{\eta}-s+1}}\{\mathcal{C}_{\widehat{\eta}}(\{g_{0, t}\}_{t=s}^e)-\mathcal{C}_{\eta}(\{g_{0, t}\}_{t=s}^e)\}\right.} \\ & \qquad \qquad \textstyle{\left.- \left(\sqrt{\frac{e-\eta}{\eta-s+1}}-\sqrt{\frac{e-\widehat{\eta}}{\widehat{\eta}-s+1}}\right)\mathcal{C}_\eta(\{g_{0, t}\}_{t=s}^e)\right\vert} \\ &\le \textstyle{\sqrt{\frac{e-\eta}{(e-s+1)(\eta-s+1)}} \left\{ \left\vert 1 - \frac{\sqrt{1-\frac{\widehat{\eta}-\eta}{e-\eta}}}{\sqrt{1+\frac{\widehat{\eta}-\eta}{\eta-s+1}}}\right\vert|\mathcal{C}_{\eta}(\{g_{0, t}\}_{t=s}^e)|+ \right.} \\ & \qquad \qquad \qquad \qquad \textstyle{\vert\mathcal{C}_{\widehat{\eta}}(\{g_{0, t}\}_{t=s}^e)-\mathcal{C}_{\eta}(\{g_{0, t}\}_{t=s}^e)\bigg\vert \bigg\}} \\ &\le \textstyle{\sqrt{\frac{e-\eta}{(e-s+1)(\eta-s+1)}}\left\{C_7T^{-\mathbf{e}ta}\epsilon_T|\mathcal{C}_{\eta}(\{g_{0, t}\}_{t=s}^e)|+ \left\vert\mathcal{C}_{\widehat\eta}(\{g_{0, t}\}_{t=s}^e)-\mathcal{C}_{\eta}(\{g_{0, t}\}_{t=s}^e)\right\vert \right\}} \end{align*} for some fixed $C_7>0$, where the last inequality follows from the Taylor expansion. Since $T^{\mathbf{e}ta-2} \le T^{-\mathbf{e}ta}$, Lemma \ref{lem:two:two} leads to $|III| \le 2n^\mbox{var}phi\epsilon_T T^{-\mathbf{e}ta}\log\,T|\mathcal{C}_{\eta}(\{g_{0, t}\}_{t=s}^e)|$ (recall the notation from Lemma \ref{lem:four}) with probability tending to one. 
Similarly, $\textstyle{\vert(\widehat{\eta}-s+1)^{-1}\sum_{t=s}^{\widehat{\eta}}g_{0, t} - (e-\eta)^{-1}\sum_{t=\eta+1}^e g_{0, t}\vert}$ \mathbf{e}gin{eqnarray*} &=& \textstyle{\left\vert \frac{1}{\widehat{\eta}-s+1}\sqrt{\frac{(\widehat{\eta}-s+1)(e-\widehat{\eta})}{e-s+1}}\mathcal{C}_{\widehat{\eta}}(\{g_{0, t}\}_{t=s}^e) + \frac{1}{e-\eta}\sqrt{\frac{(\eta-s+1)(e-\eta)}{e-s+1}}\mathcal{C}_\eta(\{g_{0, t}\}_{t=s}^e)\right\vert} \\ &=& \textstyle{\frac{1}{\sqrt{e-s+1}}\left\vert \left(\sqrt{\frac{e-\widehat{\eta}}{\widehat{\eta}-s+1}}+\sqrt{\frac{\eta-s+1}{e-\eta}}\right)\mathcal{C}_\eta(\{g_{0, t}\}_{t=s}^e)\right.} \\ && \qquad \qquad \textstyle{ + \left.\sqrt{\frac{e-\widehat{\eta}}{\widehat{\eta}-s+1}}\{\mathcal{C}_{\widehat{\eta}}(\{g_{0, t}\}_{t=s}^e)-\mathcal{C}_{\eta}(\{g_{0, t}\}_{t=s}^e)\}\right\vert} \\ &\le& \textstyle{2\sqrt{\frac{e-s+1}{(\eta-s+1)(e-\eta)}}|\mathcal{C}_\eta(\{g_{0, t}\}_{t=s}^e)|}, \end{eqnarray*} and thus $|IV| \le 4n^\mbox{var}phi\sqrt{c_0\epsilon_T}T^{-\mathbf{e}ta/2}\log\,T|\mathcal{C}_\eta(\{g_t\}_{t=s}^e)|$. Plugging in the above bounds to the condition given in Lemma \ref{lem:four}, we obtain \mathbf{e}gin{align*} & |\mathcal{C}_\eta(\{g_{0, t}\}_{t=s}^e)||\mathcal{C}_\eta(\{g_{0, t}\}_{t=s}^e)-\mathcal{C}_{\widehat{\eta}}(\{g_{0, t}\}_{t=s}^e)| > \\ & C_8n^\mbox{var}phi\log\,T \left\{ (\epsilon_T T^{-\mathbf{e}ta}|\mathcal{C}_{\eta}(\{g_{0, t}\}_{t=s}^e)|) \vee (\sqrt{\epsilon_T}T^{-\mathbf{e}ta/2} |\mathcal{C}_\eta(\{g_{0, t}\}_{t=s}^e)|) \vee (n^{\mbox{var}phi}\log\,T) \right\} \end{align*} for some $C_8>0$. Due to (\ref{ineq:delta:bs}), the above is met with $\epsilon_T = n^{2\mbox{var}phi}\underline{\Delta}_\mbox{var}phi^{-2}T^{5(1-\mathbf{e}ta)}\log^2\,T$. \end{proof} With the above lemmas, we are now ready to prove the theorem. At the beginning of the DCBS algorithm, we have $s=1$ and $e=T$ for which both (\ref{lem:cond:one}) and (\ref{lem:cond:two}) hold, and thus $\eta_{r_1}$ is estimated by $\widehat{\eta}_{r_1}$ within the distance of $\epsilon_T$ from a true change-point (Lemma \ref{lem:two:three}). Then both of the two segments defined to the left and to the right of $\widehat{\eta}_{r_1}$ satisfy (\ref{lem:cond:one})--(\ref{lem:cond:two}) and so do all the subsequently defined segments, and therefore the same arguments apply to show the consistency of the estimates $\widehat{\eta}_{r_2}, \widehat{\eta}_{r_3}, \ldots, \widehat{\eta}_{r_N}$. Once all the $N$ change-points are detected, any segment $[s, e]$ determined by two adjacent $\widehat{\eta}_1, \ldots, \widehat{\eta}_N$ (including $\eta_0=1$ and $\eta_{N+1}=T$) satisfy either \mathbf{e}gin{itemize} \item[(i)] $\exists\, 1 \le r \le N$ such that $r=q_1+1=q_2$ in (\ref{lem:cond:zero}) and $(\eta_r-s+1) \wedge (e-\eta_r) \le c_6\epsilon_T$, or \item[(ii)] $\exists\, 1 \le r \le N-1$ such that $r=q_1+1$ and $r+1=q_2$, and $(\eta_r-s+1) \vee (e-\eta_{r+1}) \le c_6\epsilon_T$ \end{itemize} for some positive constant $c_6$. Under (i), let $$(\widehat{\eta}, \widehat{m}) = \arg\max_{b\in[s, e]\setminus\mathcal{I}_{s, e} \, 1 \le m \le n}\mathcal{D}^\mbox{var}phi_m(\{|\mathcal{X}^{(j)}_{s, b, e}|\}_{j=1}^n),$$ and we adopt the notations $y_{0, t}$, $g_{0, y}$ and $\xi_{0, t}$ introduced at the beginning of this section with respect to $\widehat{\eta}$ and $\widehat{m}$. 
Then,
\begin{eqnarray*}
|\mathcal{C}_{\widehat{\eta}}(\{y_{0, t}\}_{t=s}^e)| &\le& |\mathcal{C}_{\widehat{\eta}}(\{g_{0, t}\}_{t=s}^e)| + c_4n^\varphi\log\,T \le |\mathcal{C}_{\eta_r}(\{g_{0, t}\}_{t=s}^e)| + c_4n^\varphi\log\,T \\
&\le& 4\bar{f}n^\varphi(c_6\epsilon_T)^{1/2} + c_4n^\varphi\log\,T < \pi^\varphi_{n, T},
\end{eqnarray*}
from the fact that $|g_{0, t}| \le 2\bar{f}n^\varphi$. A similar conclusion can be drawn in the case of (ii) as well, and thus the proof is completed. $\square$

\subsection{Proof of Theorem \ref{thm:four}} \label{sec:pf:thm:four}

We first prove (a) of Theorem \ref{thm:four}, which states that $\mathcal{D}zh_m$ achieves consistency in identifying and locating the change-point under the single change-point scenario. As in Section \ref{sec:pf:thm:one}, let $(\widehat{\eta}_1, \widehat{m}_1) = \arg\max_{b\in[1, T]\setminus\mathcal{I}_{1, T}, 1 \le m \le n} \mathcal{D}zh_m(\{|\mathcal{X}^{(j)}_{1, b, T}|\}_{j=1}^n)$, and adopt the notations $\{k^0_1, \ldots, k^0_n\}$ and $i^0_j\in\{-1, 1\}$ therein. We define an additive model $y_{0, t} = g_{0, t} + \xi_{0, t}$, where
\begin{eqnarray*}
y_{0, t} = \left\{\gamma_n+\sqrt{\frac{\widehat{m}_1(2n-\widehat{m}_1)}{2n}}\right\} \left\{\frac{1}{\widehat{m}_1}\sum_{j=1}^{\widehat{m}_1}i^0_j \cdot \widetilde{x}_{k^0_j, t} - \frac{1}{2n-\widehat{m}_1}\sum_{j=\widehat{m}_1+1}^{n}i^0_j \cdot \widetilde{x}_{k^0_j, t}\right\},
\end{eqnarray*}
and $g_{0, t}$ and $\xi_{0, t}$ are constructed analogously with $\widetilde{f}_{k^0_j, t}$ and $\widetilde\varepsilon_{k^0_j, t}$ in place of $\widetilde{x}_{k^0_j, t}$, respectively. Provided that $n^{-1/2}\gamma_n \to 0$ as $n \to \infty$, (\ref{xi:bound:one})--(\ref{xi:bound:two}) are satisfied with $\{\xi_{0, t}\}$ replacing $\{\xi_{r, t}\}$, with a bound of the order $n^{1/2}\log\,T$. Also, as in (\ref{ineq:delta}), it is shown that $|\mathcal{C}_{\widehat{\eta}_1}(\{g_{0, t}\}_{t=1}^T)| \ge C_9(\gamma_n \vee m_1^{1/2})\widetilde{\delta}_1\sqrt{\frac{\eta_1(T-\eta_1)}{T}}$ for some constant $C_9>0$, and thus $\mathcal{D}zh_{\widehat{m}_1}(\{|\mathcal{X}^{(j)}_{1, \widehat{\eta}_1, T}|\}_{j=1}^n) > \widetilde{\pi}_{n, T}$ with probability tending to one, which establishes the consistency of the test $\mathcal{T}zh_{1, T} > \widetilde{\pi}_{n, T}$. Further, the same arguments adopted in bounding $|\widehat{\eta}_1-\eta_1|$ for Theorem \ref{thm:one} are applicable with $\epsilon_T = n\{(\gamma_n \vee m_1^{1/2})\widetilde{\delta}_1\}^{-2}\log^2\,T$. Similarly, (\ref{ineq:delta:bs}) can be modified for $\mathcal{D}zh_m$ and thus the proof of (b) follows verbatim. $\square$

\begin{supplement}[id=suppA]
\stitle{Supplement to ``Change-point detection in panel data via double CUSUM statistic''}
\slink[doi]{}
\sdatatype{.pdf}
\sdescription{We provide a detailed description of the Local Bootstrap and proofs of some auxiliary results. In addition, the tables and plots summarising the outcome of the simulation studies conducted in Section 5 are presented.}
\end{supplement}
\end{document}
\begin{document}
\title[Griffith energies as small strain limit of nonlinear models]{Griffith energies as small strain limit of nonlinear models for nonsimple brittle materials}
\keywords{Brittle materials, variational fracture, nonsimple materials, free discontinuity problems, Griffith energies, $\Gamma$-convergence, functions of bounded variation and deformation}
\author{Manuel Friedrich}
\address[Manuel Friedrich]{Applied Mathematics M\"unster, University of M\"unster\\ Einsteinstrasse 62, 48149 M\"unster, Germany.}
\email{[email protected]}
\begin{abstract}
We consider a nonlinear, frame indifferent Griffith model for nonsimple brittle materials where the elastic energy also depends on the second gradient of the deformations. In the framework of free discontinuity and gradient discontinuity problems, we prove existence of minimizers for boundary value problems. We then pass to a small strain limit in terms of suitably rescaled displacement fields and show that the nonlinear energies can be identified with a linear Griffith model in the sense of $\Gamma$-convergence. This complements the study in \cite{Friedrich:15-2} by providing a linearization result in arbitrary space dimensions.
\end{abstract}
\subjclass[2010]{74R10, 49J45, 70G75.}
\maketitle

\section{Introduction}

Mathematical models in solid mechanics typically do not predict the mechanical behavior correctly at every scale, but have a certain limited range of applicability. A central example in this direction is given by models for hyperelastic materials in nonlinear (finite) elasticity and their linear (infinitesimal) counterparts. The last decades have witnessed remarkable progress in providing a clear relationship between different models via $\Gamma$-convergence \cite{DalMaso:93}. In their seminal work \cite{DalMasoNegriPercivale:02}, {\sc Dal Maso, Negri, and Percivale} performed a nonlinear-to-linear analysis in terms of suitably rescaled displacement fields and proved the convergence of minimizers for corresponding boundary value problems. This study has been extended in various directions, including different growth assumptions on the stored energy densities \cite{Agostiniani}, the passage from atomistic-to-continuum models \cite{Braides-Solci-Vitali:07, Schmidt:2009}, multiwell energies \cite{alicandro.dalmaso.lazzaroni.palombaro, Schmidt:08}, plasticity \cite{Ulisse}, and viscoelasticity \cite{MFMK}.

In the present contribution, we are interested in an analogous analysis for materials undergoing fracture. Based on the variational approach to quasistatic crack evolution by {\sc Francfort and Marigo} \cite{Francfort-Marigo:1998}, where the displacements and the (a priori unknown) crack paths are determined from an energy minimization principle, we consider an energy functional of Griffith-type. Such variational models of brittle fracture, which comprise an elastic energy stored in the uncracked region of the body and a surface contribution comparable to the size of the crack of codimension one, have been widely studied both at finite and infinitesimal strains, see \cite{BJ, Chambolle:2003, DalMaso-Francfort-Toader:2005, DalMaso-Lazzaroni:2010, Francfort-Larsen:2003, Solombrino, Giacomini-Ponsiglione:2006} without claim of being exhaustive. We refer the reader to \cite{Bourdin-Francfort-Marigo:2008} for a general overview.
In this context, first results addressing the question of a nonlinear-to-linear analysis have been obtained in \cite{NegriToader:2013, Zanini} in a two-dimensional evolutionary setting for a fixed crack set or a restricted class of admissible cracks, respectively. Subsequently, the problem was studied in \cite{FriedrichSchmidt:2014.2} from a different perspective. Here, a simultaneous discrete-to-continuum and nonlinear-to-linear analysis is performed for general crack geometries, but under the simplifying assumption that all deformations \color{black} are \color{black} close to the identity mapping. Eventually, a result in dimension two without a priori assumptions on the crack paths and the deformations, in the general framework of free discontinuity problems (see \cite{DeGiorgi-Ambrosio:1988}), has been derived in \cite{Friedrich:15-2}. \color{black} This \color{black} analysis relies fundamentally on delicate geometric \color{black} rigidity \color{black} results in the spirit of \cite{FrieseckeJamesMueller:02, Chambolle-Giacomini-Ponsiglione:2007}. At this point, the geometry of crack paths in the plane is crucially exploited and higher dimensional analogs seem to be currently out of reach. In spite of the lack of rigidity estimates, the goal of this contribution is to perform a nonlinear-to-linear analysis for brittle materials in the spirit of \cite{Friedrich:15-2} in higher space dimensions. This will be achieved by starting from a slightly different nonlinear model for so-called \mathbf{e}mph{nonsimple materials}. Whereas the elastic properties of simple materials depend only on the first gradient, the notion of a nonsimple material \color{black} refers to the fact that the elastic energy depends additionally on the second gradient of the deformation. This idea goes back to {\sc Toupin} \cite{Toupin:62,Toupin:64} and has proved to be useful in modern mathematical elasticity, see e.g.~\cite{BCO,Batra, capriz,dunn, MFMK,MR}, since it brings additional compactness and rigidity to the problem. In a similar fashion, we consider here a Griffith model with an additional second gradient in the elastic part of the energy. This leads to a model in the framework of free discontinuity and gradient discontinuity problems. The goal of this contribution is twofold. We first show that the regularization allows to prove existence of minimizers for boundary value problems without convexity properties for the stored elastic energy. In particular, we do not have to assume quasiconvexity \cite{Ambrosio:90-2}. Afterwards, we identify an effective linearized Griffith energy as the $\Gamma$-limit of the nonlinear and frame indifferent models for vanishing strains. In this context, it is important to mention that, in spite of the formulation of the nonlinear model in terms of nonsimple materials, the effective limit is a `standard' Griffith functional in linearized elasticity depending only on the first gradient. A similar justification for the treatment of nonsimple materials has recently been discussed in \cite{MFMK} for a model in nonlinear viscoelasticity. The existence result for boundary value problems at finite strains is formulated in the space $GSBV^2_2(\Omega;\Bbb R^d)$, see \mathbf{e}qref{eq: space} below, consisting of the mappings for which both the function itself and its derivative are in the class of \mathbf{e}mph{generalized special functions of bounded variation} \cite{Ambrosio-Fusco-Pallara:2000}. 
The relevant compactness and lower semicontinuity results stated in Theorem \ref{th: comp} essentially follow from a study on second order variational problems with free discontinuity and gradient discontinuity \cite{Carriero2}. \color{black} Another \color{black} key ingredient is the recent work \cite{Manuel} which extends the classical compactness result due to {\sc Ambrosio} \cite{Ambrosio:90} to problems without a priori bounds on the functions. Concerning the passage to the linearized system, the essential step is to establish a compactness result in terms of suitably rescaled displacement fields \color{black} which measure \color{black} the distance of the deformations from the identity. Whereas in \cite{Friedrich:15-2} this is achieved by means of delicate geometric rigidity estimates, the main idea in our approach is to partition the domain into different regions in which the gradient is `almost constant'. This construction relies on the coarea formula in $BV$ and is the fundamental point where the presence of a second order term \color{black} in the energy \color{black} is used to pass rigorously to a linear theory. The linear limiting model is formulated on the space of \mathbf{e}mph{generalized special functions of bounded deformation} $GSBD^2$, which has been studied extensively over the last years, see e.g.\ \cite{Chambolle-Conti-Francfort:2014, Chambolle-Conti-Francfort:2018, Chambolle-Conti-Iurlano:2018, Conti-Iurlano:15, Conti-Iurlano:15.2, Crismale2, Crismale, Crismale4, Crismale3, DalMaso:13, Friedrich:15-3, Friedrich:15-4, Solombrino, Iurlano:13}. The paper is organized as follows. In Section \ref{rig-sec: main} we first introduce our nonlinear model for nonsimple brittle materials and state our main results: we first address the existence of minimizers for boundary value problems at finite strains. Then, we present a compactness and $\Gamma$-convergence result in the passage from \color{black} the nonlinear to the linearized \color{black} theory. Here, we also discuss the convergence of minima and minimzers \color{black} under given boundary data. \color{black} Section \ref{sec:pre} is devoted to some preliminary results about the function spaces $GSBV$ and $GSBD$. In particular, we present a compactness result in $GSBV^2_2$ involving the second gradient (see Theorem \ref{th: comp}). Finally, Section \ref{sec: proofs} contains the proofs of our results. \section{The model and main results}\label{rig-sec: main} In this section we introduce our model and present the main results. We start with some basic notation. Throughout the paper, $\Omega \subset \Bbb R^d$ is an open and bounded set. The notations $\mathcal{L}^d$ and $\mathcal{H}^{d-1}$ are used for the Lebesgue measure and the $(d-1)$-dimensional Hausdorff measure in $\mathbb{R}^d$, respectively. We set $S^{d-1}=\lbrace x \in \Bbb R^d: \, |x|=1\rbrace$. For an $\mathcal{L}^d$-measurable set $E\subset\mathbb{R}^d$, the symbol $\chi_E$ denotes its indicator function. For two sets $A,B \subset \Bbb R^d$, we \color{black} define \color{black} $A \triangle B = (A\setminus B) \cup (B \setminus A)$. The identity mapping on $\Bbb R^d$ is indicated by $\mathbf{id}$ and its derivative, the identity matrix, by $\mathbf{Id} \in \Bbb R^{d \times d}$. The sets of symmetric and skew symmetric matrices are denoted by $\Bbb R^{d \times d}_{\rm sym}$ and $\Bbb R^{d \times d}_{\rm skew}$, respectively. 
We set ${\rm sym}(F) = \frac{1}{2}(F^T + F)$ for $F \in \Bbb R^{d\times d}$ and define $SO(d) = \lbrace R\in \Bbb R^{d \times d}: R^T R = \mathbf{Id}, \, \det R=1 \rbrace$.

\subsection{A nonlinear model for nonsimple materials and boundary value problems}
In this subsection we introduce our nonlinear model and discuss the existence of minimizers for boundary value problems.

\emph{Function spaces:} To introduce our Griffith-type model for nonsimple materials, we first need to introduce the relevant spaces. We use standard notation for $GSBV$ functions, see \cite[Section 4]{Ambrosio-Fusco-Pallara:2000} and \cite[Section 2]{DalMaso-Francfort-Toader:2005}. In particular, we let
\begin{align}\label{eq: space2}
GSBV^2(\Omega;\Bbb R^d) = \lbrace y \in GSBV(\Omega;\Bbb R^d): \ \nabla y \in L^2(\Omega;\Bbb R^{d\times d}), \ \mathcal{H}^{d-1}(J_y) < + \infty \rbrace,
\end{align}
where $\nabla y(x)$ denotes the approximate differential at $\mathcal{L}^d$-a.e.\ $x \in \Omega$ and $J_y$ the jump set. We define the space
\begin{align}\label{eq: space}
GSBV^2_2(\Omega;\Bbb R^d) := \big\{ y \in GSBV^2(\Omega; \Bbb R^d): \ \nabla y \in GSBV^2(\Omega;\Bbb R^{d\times d})\big\}.
\end{align}
The approximate differential and the jump set of $\nabla y$ will be denoted by $\nabla^2 y$ and $J_{\nabla y}$, respectively. (To avoid confusion, we point out that in the paper \cite{DalMaso-Francfort-Toader:2005} the notation $GSBV^2_2(\Omega;\Bbb R^d)$ was used differently, namely for $GSBV^2(\Omega;\Bbb R^d) \cap L^2(\Omega;\Bbb R^d)$.) A similar space has been considered in \cite{Carriero1, Carriero2} to treat second order free discontinuity functionals, e.g., a weak formulation of the Blake $\&$ Zisserman model \cite{BZ} of image segmentation. We point out that the functions are allowed to exhibit discontinuities. Thus, the analysis is outside the framework of the space of special functions with bounded Hessian $SBH(\Omega)$, considered in problems of second order energies for elastic-perfectly plastic plates, see e.g.\ \cite{Carriero3}.

\emph{Nonlinear Griffith energy for nonsimple materials:} We let $W:\Bbb R^{d \times d} \to [0,+\infty)$ be a single well, frame indifferent stored energy functional. More precisely, we suppose that there exists $c>0$ such that
\begin{align}\label{assumptions-W}
{\rm (i)} & \ \ W \text{ continuous and $C^3$ in a neighborhood of $SO(d)$},\notag\\
{\rm (ii)} & \ \ \text{Frame indifference: } W(RF) = W(F) \text{ for all } F \in \Bbb R^{d \times d}, R \in SO(d),\notag\\
{\rm (iii)} & \ \ W(F) \ge c\operatorname{dist}^2(F,SO(d)) \ \text{ for all $F \in \Bbb R^{d \times d}$}, \ W(F) = 0 \text{ iff } F \in SO(d).
\end{align}
We briefly note that we can also treat inhomogeneous materials where the energy density has the form $W: \Omega \times \Bbb R^{d \times d} \to [0,+\infty)$. Moreover, it suffices to assume $W \in C^{2,\alpha}$, where $C^{2,\alpha}$ is the H\"older space with exponent $\alpha \in (0,1]$, see Remark \ref{rem: Hoelder space} for details. Let $\kappa>0$ and $\beta \in (\frac{2}{3},1)$.
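As a concrete illustration (added here as a standard example; it is not needed in the sequel), one admissible choice in \eqref{assumptions-W} is the squared distance to the rotations,
\begin{align*}
W(F) = \operatorname{dist}^2(F,SO(d)), \qquad F \in \Bbb R^{d \times d},
\end{align*}
which is continuous, smooth in a tubular neighborhood of $SO(d)$, frame indifferent, and satisfies (iii) with $c=1$. A Taylor expansion based on the polar decomposition gives $W(\mathbf{Id} + G) = |{\rm sym}(G)|^2 + {\rm O}(|G|^3)$, so that for this choice the quadratic form $Q(F) = D^2W(\mathbf{Id})F:F$ appearing in the linearized model of Subsection \ref{rig-sec: sub, main gamma} reduces to $Q(F) = 2|{\rm sym}(F)|^2$.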
For $\varepsilon >0$, define the energy $\mathcal{E}_\varepsilon(\cdot,\Omega) : GSBV^2_2(\Omega;\Bbb R^d) \to [0,+\infty]$ by
\begin{align}\label{rig-eq: Griffith en}
\mathcal{E}_\varepsilon(y,\Omega) = \begin{cases} \varepsilon^{-2}\int_{\Omega} W(\nabla y(x)) \,dx +\varepsilon^{-2\beta} \int_\Omega |\nabla^2 y(x)|^2 \, dx + \kappa\mathcal{H}^{d-1}(J_y) & \text{ if $J_{\nabla y} \subset J_y$,} \\ + \infty & \text{ else.} \end{cases}
\end{align}
Here and in the following, the inclusion $J_{\nabla y} \subset J_y$ has to be understood up to an $\mathcal{H}^{d-1}$-negligible set. Since $W$ grows quadratically around $SO(d)$, the parameter $\varepsilon$ corresponds to the typical scaling of strains for configurations with finite energy. Due to the presence of the second term, we deal with a Griffith-type model for \emph{nonsimple materials}. As explained in the introduction, elastic energies which depend additionally on the second gradient of the deformation were introduced by {\sc Toupin} \cite{Toupin:62,Toupin:64} to enhance compactness and rigidity properties. In the present context, we add a second gradient term for a material undergoing fracture. This regularization effect acts on the entire \emph{intact region} $\Omega \setminus J_y$ of the material. This is modeled by the condition $J_{\nabla y} \subset J_y$.

The goal of this contribution is twofold. We first show that the regularization allows to prove existence of minimizers for boundary value problems without convexity properties of $W$. The main result of the present work is then to identify a linearized Griffith energy in the small strain limit $\varepsilon \to 0$ which is related to the nonlinear energies $\mathcal{E}_\varepsilon$ through $\Gamma$-convergence. We point out that the effective limit is a `standard' Griffith model in linearized elasticity depending only on the first gradient, see \eqref{rig-eq: Griffith en-lim} below, although we start with a nonlinear model for nonsimple materials.

We observe that the condition $J_{\nabla y} \subset J_y$ is not closed under convergence in measure on $\Omega$. In fact, consider, e.g., $\Omega = (-1,1)^2$, $\Omega_1 = (-1,0)\times (-1,1)$, $\Omega_2=(0,1)\times (-1,1)$, and for $\delta \ge 0$ the configurations
$$y_\delta(x_1,x_2) = (x_1,x_2) \chi_{\Omega_1} + (2x_1 + \delta,x_2)\chi_{\Omega_2} \ \ \ \ \ \ \text{for } (x_1,x_2) \in \Omega. $$
Then $J_{\nabla y_\delta} = J_{y_\delta} = \lbrace 0\rbrace \times (-1,1)$ for $\delta>0$ and $y_\delta \to y_0$ in measure on $\Omega$ as $\delta \to 0$. However, $J_{y_0} = \emptyset$ while $J_{\nabla y_0} = \lbrace 0\rbrace \times (-1,1)$, so that the inclusion $J_{\nabla y_0} \subset J_{y_0}$ fails in the limit. Therefore, we need to pass to a relaxed formulation.

\begin{proposition}[Relaxation]\label{prop: relaxation}
Let $\Omega \subset \Bbb R^d$ be open and bounded. Suppose that $W$ satisfies \eqref{assumptions-W}.
Then the relaxed functional $\overline{\mathcal{E}}_\varepsilon(\cdot,\Omega): GSBV^2_2(\Omega;\Bbb R^d) \to [0,+\infty] $ defined by
$$\overline{\mathcal{E}}_\varepsilon(y,\Omega) = \inf \big\{ \liminf\nolimits_{n \to \infty} \mathcal{E}_\varepsilon(y_n,\Omega): y_n \to y \ \text{\rm in measure on $\Omega$} \big\}$$
is given by
\begin{align}\label{eq: relaxed energy}
\overline{\mathcal{E}}_\varepsilon(y,\Omega) = \varepsilon^{-2}\int_{\Omega} W(\nabla y(x)) \,dx +\varepsilon^{-2\beta} \int_\Omega |\nabla^2 y(x)|^2 \, dx + \kappa\mathcal{H}^{d-1}(J_y \cup J_{\nabla y}).
\end{align}
\end{proposition}
The result is proved in Subsection \ref{sec: proofs-nonlinear}. Clearly, $\overline{\mathcal{E}}_\varepsilon(\cdot,\Omega)$ is lower semicontinuous with respect to the convergence in measure. We point out that this latter property has essentially been shown in \cite{Carriero2}, cf.\ Theorem \ref{th: GSBV2 comp}.

In the following, our goal is to study boundary value problems. To this end, we fix two bounded Lipschitz domains $\Omega \subset \Omega' \subset \Bbb R^d$. We will impose Dirichlet boundary data on $\partial_D \Omega := \Omega' \cap \partial \Omega$. As usual for the weak formulation in the framework of free discontinuity problems, this will be done by requiring that configurations $y$ satisfy $y = g$ on $\Omega' \setminus \overline{\Omega}$ for some $g \in W^{2,\infty}(\Omega';\Bbb R^d)$. From now on, we write $\mathcal{E}_\varepsilon(\cdot) = \mathcal{E}_\varepsilon(\cdot,\Omega')$ and $\overline{\mathcal{E}}_\varepsilon(\cdot) = \overline{\mathcal{E}}_\varepsilon(\cdot,\Omega')$ for notational convenience. The following result about the existence of minimizers will be proved in Subsection \ref{sec: proofs-nonlinear}.

\begin{theorem}[Existence of minimizers]\label{th: existence}
Let $\Omega \subset \Omega' \subset \Bbb R^d$ be bounded Lipschitz domains. Suppose that $W$ satisfies \eqref{assumptions-W}, and let $g \in W^{2,\infty}(\Omega';\Bbb R^d)$. Then the minimization problem
\begin{align}\label{eq: minimization problem}
\inf_{{y \in GSBV^2_2(\Omega';\Bbb R^d)}} \Big\{ \overline{\mathcal{E}}_\varepsilon(y): \ y = g \text{ on } \Omega' \setminus \overline{\Omega} \Big\}
\end{align}
admits solutions.
\end{theorem}

\subsection{Compactness of rescaled displacement fields}
The main goal of the present work is the identification of an effective linearized Griffith energy in the small strain limit. In this subsection, we formulate the relevant compactness result. Let $\Omega' \supset \Omega$ be bounded Lipschitz domains. The limiting energy is defined on the space of generalized special functions of bounded deformation $GSBD^2(\Omega')$. For basic properties of $GSBD^2(\Omega')$ we refer to \cite{DalMaso:13} and Section \ref{sec: GSBD} below. In particular, for $u\in GSBD^2(\Omega')$, we denote by $e(u) = \frac{1}{2}(\nabla u^T + \nabla u )$ the approximate symmetric differential and by $J_u$ the jump set.
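For orientation, we record a standard class of examples (added here for illustration only; it is not used in the subsequent arguments): piecewise infinitesimal rigid motions. Given a Caccioppoli partition $(P_j)_j$ of $\Omega'$ (see Section \ref{sec: Caccio}), skew symmetric matrices $(A_j)_j \subset \Bbb R^{d \times d}_{\rm skew}$, and vectors $(b_j)_j \subset \Bbb R^d$ with $\sup_j (|A_j| + |b_j|) < + \infty$, the function
\begin{align*}
u(x) = \sum\nolimits_{j=1}^\infty \big(A_j x + b_j\big)\, \chi_{P_j}(x)
\end{align*}
belongs to $GSBD^2(\Omega')$ with $e(u) = 0$ $\mathcal{L}^d$-a.e.\ in $\Omega'$ and $J_u \subset \bigcup\nolimits_j \partial^* P_j$ up to an $\mathcal{H}^{d-1}$-negligible set. Displacements of this form reappear below as natural competitors on the exceptional set $E_u$ in Theorem \ref{thm: compactess}, where the limiting strain vanishes, cf.\ \eqref{eq: the main convergence}(iv).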
\Bbb RRR The general idea in linearization results \color{black} in many different settings (see, e.g., \cite{alicandro.dalmaso.lazzaroni.palombaro, Braides-Solci-Vitali:07, DalMasoNegriPercivale:02, MFMK, FriedrichSchmidt:2014.2, NegriToader:2013, Schmidt:08, Schmidt:2009}) is the following: given a sequence $(y_\varepsilon)_\varepsilon$ with $\sup_\varepsilon \mathcal{E}_\varepsilon(y_\varepsilon) < +\infty$, define displacement fields which measure the distance of the deformations from the identity, rescaled by the \Bbb ZZZ small parameter \color{black} $\varepsilon$, i.e., \begin{align}\label{eq: rescali1} u_\varepsilon = \frac{1}{\varepsilon}(y_\varepsilon - \mathbf{id}). \mathbf{e}nd{align} It turns out, however, that in general no compactness can be expected if the body may undergo fracture. Consider, e.g., the functions $y_\varepsilon = \mathbf{id} \chi_{\Omega' \setminus B} + R\,x \chi_{B}$, for a small ball $B \subset \Omega$ and a rotation $R \in SO(d)$, $R \mathbf{n}eq \mathbf{Id}$. Then $|u_\varepsilon|, |\mathbf{n}abla u_\varepsilon| \to \infty$ on $B$ as $\varepsilon \to 0$. The main idea in our approach is the observation that this phenomenon can be avoided if the deformation is \mathbf{e}mph{rotated back to the identity} on the set $B$. This will be made precise in Theorem \ref{thm: compactess}(a) below where we pass to \mathbf{e}mph{piecewise rotated functions}. For such functions, we can control at least the symmetric part of $\mathbf{n}abla u_\varepsilon$ for the rescaled displacement fields defined \color{black} in \color{black} \mathbf{e}qref{eq: rescali1}. This will allow us to derive a compactness result in the space $GSBD^2(\Omega')$, see Theorem \ref{thm: compactess}(b). Recall the definition of $GSBV_2^2(\Omega';\Bbb R^d)$ in \mathbf{e}qref{eq: space}. To account for boundary data $h \in W^{2,\infty}(\Omega';\Bbb R^d)$, we introduce the spaces \begin{align}\label{eq: boundary-spaces} \mathcal{S}_{\varepsilon,h} & = \lbrace y \in GSBV_2^2(\Omega';\Bbb R^d): \ y = \mathbf{id} + \varepsilon h \text{ on } \Omega' \setminus \overline{\Omega} \rbrace, \mathbf{n}otag\\ GSBD^2_h & = \lbrace u \in GSBD^2(\Omega'): \, u = h \text{ on } \Omega' \setminus \overline{\Omega} \rbrace. \mathbf{e}nd{align} Recall $\beta \in (\frac{2}{3},1)$ and the definition of $\overline{\mathcal{E}}_\varepsilon = \overline{\mathcal{E}}_\varepsilon(\cdot,\Omega')$ in \mathbf{e}qref{eq: relaxed energy}. For definition and basic properties of Caccioppoli partitions we refer to Section \ref{sec: Caccio}. In particular, for a set of finite perimeter $E \subset \Omega'$, we denote by $\partial^* E$ its essential boundary \color{black} and by $(E)^1$ the points where $E$ has density one, see \cite[Definition 3.60]{Ambrosio-Fusco-Pallara:2000}. \color{black} \begin{theorem}[Compactness]\label{thm: compactess} Let $\gamma \in (\frac{2}{3},\beta)$. Assume that $W$ satisfies \mathbf{e}qref{assumptions-W}, \color{black} and let \color{black} $h \in W^{2,\infty}(\Omega';\Bbb R^d)$. Let $(y_\varepsilon)_\varepsilon $ be a sequence satisfying $y_\varepsilon \in \mathcal{S}_{\varepsilon,h}$ and $\sup_\varepsilon \overline{\mathcal{E}}_\varepsilon(y_\varepsilon) <+\infty$. 
\mathbf{n}oindent (a) (Piecewise rotated functions) There exist Caccioppoli partitions $(P_j^\varepsilon)_j$ of $\Omega'$ and corresponding rotations $(R^\varepsilon_j)_j \subset SO(d)$ such that the piecewise rotated functions $y^{\rm rot}_\varepsilon \in GSBV^2_2(\Omega';\Bbb R^d)$ given by \begin{align}\label{eq: piecewise rotated} y^{\rm rot}_\varepsilon := \sum\mathbf{n}olimits_{j=1}^\infty R^\varepsilon_j \, y_\varepsilon \, \chi_{P^\varepsilon_j} \mathbf{e}nd{align} satisfy \begin{align}\label{eq: first conditions} {\rm (i)} & \ \ \text{$y^{\rm rot}_\varepsilon = \mathbf{id} + \varepsilon h$ on $\Omega' \setminus \overline{\Omega}$,}\mathbf{n}otag \\ {\rm (ii)} & \ \ \mathcal{H}^{d-1}\big( \big(J_{y_\varepsilon^{\rm rot}} \cup J_{\mathbf{n}abla y_\varepsilon^{\rm rot}} \big) \setminus \big( J_{y_\varepsilon} \cup J_{\mathbf{n}abla y_\varepsilon}\big)\big) \le \mathcal{H}^{d-1}\Big( \Big( \Omega' \cap \bigcup\mathbf{n}olimits_{j=1}^\infty \partial^* P_j^\varepsilon \Big)\setminus J_{ \mathbf{n}abla y_\varepsilon} \Big) \le C\varepsilon^{\beta-\gamma}, \mathbf{n}otag \\ {\rm (iii)} & \ \ \Vert {\rm sym} (\mathbf{n}abla y_\varepsilon^{\rm rot}) -\mathbf{Id}\Vert_{L^2(\Omega')} \le C\varepsilon,\mathbf{n}otag \\ {\rm (iv)} & \ \ \Vert \mathbf{n}abla y_\varepsilon^{\rm rot} - \mathbf{Id} \Vert_{L^2(\Omega')} \le C\varepsilon^\gamma \mathbf{e}nd{align} for a constant $C>0$ independent of $\varepsilon$. \mathbf{n}oindent (b)(Compactness of rescaled displacement fields) There exists a subsequence (not relabeled) and a function $u \in GSBD^2_h$ such that the rescaled displacement fields $u_\varepsilon \in GSBV^2_2(\Omega';\Bbb R^d)$ defined by \begin{align}\label{eq: rescalidipl} u_\varepsilon: = \frac{1}{\varepsilon} (y^{\rm rot}_\varepsilon - \mathbf{id}) \mathbf{e}nd{align} satisfy \begin{align}\label{eq: the main convergence} {\rm (i)} & \ \ u_\varepsilon \to u \ \ \ \text{a.e.\ in $\Omega' \setminus E_u$},\mathbf{n}otag \\ {\rm (ii)} & \ \ e(u_\varepsilon) \rightharpoonup e(u) \ \ \text{weakly in $L^2(\Omega' \setminus E_u;\Bbb R^{d\times d}_{\rm sym})$},\mathbf{n}otag\\ {\rm (iii)} & \ \ \mathcal{H}^{d-1}(J_{u}) \le \liminf\mathbf{n}olimits_{\varepsilon \to 0} \mathcal{H}^{d-1}(J_{u_\varepsilon}) \le \liminf\mathbf{n}olimits_{\varepsilon \to 0} \mathcal{H}^{d-1}(J_{y_\varepsilon} \cup J_{\mathbf{n}abla y_\varepsilon}),\mathbf{n}otag\\ {\rm (iv)} & \ \ e(u) = 0 \ \ \text{ on } \ E_u, \ \ \ \mathcal{H}^{d-1}\big((\partial^* E_u \color{black} \cap \Omega' \color{black} )\setminus J_u \big) = \mathcal{H}^{d-1}(J_u \cap \color{black} (E_u)^1 \color{black} ) = 0, \mathbf{e}nd{align} where $E_u := \lbrace x\in \Omega: \, |u_\varepsilon(x)| \to \infty \rbrace$ is a set of finite perimeter. \mathbf{e}nd{theorem} Here and in the sequel, we follow the usual convention that convergence of the continuous parameter $\varepsilon \to 0$ stands for convergence of arbitrary sequences $\lbrace \varepsilon_i \rbrace_i$ with $\varepsilon_i \to 0$ as $i \to \infty$, see \cite[Definition 1.45]{Braides:02}. The compactness result will be proved in Subsection \ref{sec: comp-proof}. Note that \mathbf{e}qref{eq: first conditions}(i) implies $y_\varepsilon^{\rm rot} \in \mathcal{S}_{\varepsilon,h}$. 
In view of \mathbf{e}qref{eq: first conditions}(ii), the frame indifference of the elastic energy, and $\gamma <\beta$, \color{black} one can show that \color{black} the Griffith-type energy \mathbf{e}qref{eq: relaxed energy} of $y_\varepsilon^{\rm rot}$ is \color{black} asymptotically not larger than the one of $y_\varepsilon$. \color{black} The control on the symmetric part of the derivative \mathbf{e}qref{eq: first conditions}(iii) is essential to obtain compactness in $GSBD^2(\Omega')$ for the sequence $(u_\varepsilon)_\varepsilon$. Property \mathbf{e}qref{eq: first conditions}(iv) will be needed to control higher order terms in the passage to linearized elastic energies, see Theorem \ref{rig-th: gammaconv} below. The presence of the set $E_u$ is due to the compactness result in $GSBD^2(\Omega')$, see \cite{Crismale} and Theorem \ref{th: crismale-comp}. In principle, the phenomenon that the sequence is unbounded on a set of positive measure can be avoided by generalizing the definition of \mathbf{e}qref{eq: rescalidipl}: in \cite[Theorem 6.1]{Solombrino} and \cite[Theorem 2.2]{Friedrich:15-2} it has been shown that, by subtracting in \mathbf{e}qref{eq: rescalidipl} suitable translations on a Caccioppoli partition of $\Omega'$ related to $y_\varepsilon$, one can achieve $E_u = \mathbf{e}mptyset$. This construction, however, is limited \color{black} so far to dimension two. \color{black} As discussed in \cite{Crismale}, the presence of $E_u$ is not an issue for minimization problems of Griffith energies since a minimizer can be recovered by choosing $u$ affine on $E_u$ with $e(u)=0$, cf.\ \mathbf{e}qref{eq: the main convergence}(iv). We also note that $E_u \subset \Omega$, i.e., $E_u \cap (\Omega'\setminus \overline{\Omega}) = \mathbf{e}mptyset$. \begin{definition}[Asymptotic representation]\label{def:conv} {\mathbf{n}ormalfont We say that a sequence $(y_\varepsilon)_\varepsilon$ with $y_\varepsilon \in \mathcal{S}_{\varepsilon,h}$ is \mathbf{e}mph{asymptotically represented} by a limiting displacement $u \in GSBD^2_h$, and write $y_\varepsilon \rightsquigarrow u$, if there exist sequences of Caccioppoli partitions $(P_j^\varepsilon)_j$ of $\Omega'$ and corresponding rotations $(R^\varepsilon_j)_j \subset SO(d)$ such that \mathbf{e}qref{eq: first conditions} and \mathbf{e}qref{eq: the main convergence} hold for some fixed $\gamma \in (\frac{2}{3},\beta)$, where $y_\varepsilon^{\rm rot}$ and $u_\varepsilon$ are defined in \mathbf{e}qref{eq: piecewise rotated} and \mathbf{e}qref{eq: rescalidipl}, respectively. } \mathbf{e}nd{definition} Theorem \ref{thm: compactess} shows that for each $(y_\varepsilon)_\varepsilon$ with $\sup_\varepsilon \overline{\mathcal{E}}_\varepsilon(y_\varepsilon) < + \infty$ there exists a subsequence $(y_{\varepsilon_k})_k$ and $u \in GSBD^2_h$ such that $y_{\varepsilon_k} \rightsquigarrow u$ as $k \to \infty$. We speak of asymptotic representation instead of convergence, and we use the symbol $ \rightsquigarrow $, in order to emphasize that Definition \ref{def:conv} cannot be understood as a convergence with respect to a certain topology. In particular, the limiting function $u$ for a given (sub-)sequence $(y_\varepsilon)_\varepsilon$ is not determined uniquely, but depends fundamentally on the choice of the sequences $(P_j^\varepsilon)_j$ and $(R^\varepsilon_j)_j$. To illustrate this phenomenon, we consider an example similar to \cite[Example 2.4]{Friedrich:15-2}. 
\begin{example}[Nonuniqueness of limits]\label{ex}
{\normalfont Consider $\Omega'= (0,3) \times (0,1)$, $\Omega = (1,3) \times (0,1)$, $\Omega_1 = (0,2) \times (0,1)$, $\Omega_2 = (2,3) \times (0,1)$, $h \equiv 0$, and
$$y_\varepsilon(x) = x \,\chi_{\Omega_1}(x) + \bar{R}_{\varepsilon}\,x \,\chi_{\Omega_2}(x) \ \ \ \ \text{for} \ x \in \Omega',$$
where $\bar{R}_\varepsilon \in SO(2)$ with $\bar{R}_\varepsilon = \mathbf{Id} + \varepsilon A + {\rm O}(\varepsilon^2) $ for some $A \in \Bbb R^{2 \times 2}_{\rm skew}$. Then two possible alternatives are
\begin{align*}
(1)& \ \ P^{\varepsilon}_1 = \Omega_1, \ \ P^{\varepsilon}_2 = \Omega_2, \ \ R_1^{\varepsilon} = \mathbf{Id}, \ \ R_2^{\varepsilon} = \bar{R}_\varepsilon^{-1},\\
(2)& \ \ \tilde{P}^{\varepsilon}_1 = \Omega', \ \ \tilde{R}_1^{\varepsilon} = \mathbf{Id}.
\end{align*}
Letting $u_{\varepsilon} = \varepsilon^{-1} (\sum_{j=1}^2 R_j^{\varepsilon}y_{\varepsilon} \chi_{P_j^{\varepsilon}} -\mathbf{id})$ and $\tilde{u}_{\varepsilon} = \varepsilon^{-1} (y_{\varepsilon} -\mathbf{id})$, we find the limits $u \equiv 0$ and $\tilde{u}(x) = A\,x \, \chi_{\Omega_2}(x)$, respectively. }
\end{example}

We refer to \cite[Section 2.3]{Friedrich:15-2} for a further discussion of different choices of the involved partitions and rigid motions. Here, we show that it is possible to identify uniquely the relevant notions $e(u)$ and $J_u$ of the limit. This is the content of the following lemma.

\begin{lemma}[Characterization of limiting displacements]\label{lemma: characteri}
Suppose that a sequence $(y_\varepsilon)_\varepsilon$ satisfies $y_\varepsilon \rightsquigarrow u_1$ and $y_\varepsilon \rightsquigarrow u_2$, where $u_1, u_2 \in GSBD^2_h$, $u_1 \neq u_2$. Let $E_{u_1}, E_{u_2} \subset \Omega$ be the sets given in \eqref{eq: the main convergence}. Then
\begin{itemize}
\item[(a)] $e(u_1) = e(u_2)$ $\mathcal{L}^d$-a.e.\ on $\Omega' \setminus (E_{u_1} \cup E_{u_2})$.
\item[(b)] If additionally $(y_\varepsilon)_\varepsilon$ is a minimizing sequence, i.e.,
\begin{align}\label{eq: minimizer}
\overline{\mathcal{E}}_\varepsilon(y_\varepsilon) \le \inf_{\bar{y} \in \mathcal{S}_{\varepsilon,h}} \overline{\mathcal{E}}_\varepsilon(\bar{y}) + \rho_\varepsilon \ \ \ \ \ \text{with } \ \rho_\varepsilon \to 0 \text{ as $\varepsilon \to 0$},
\end{align}
then $e(u_1) = e(u_2)$ $\mathcal{L}^d$-a.e.\ on $\Omega'$, and $J_{u_1} = J_{u_2}$ up to an $\mathcal{H}^{d-1}$-negligible set.
\end{itemize}
\end{lemma}

Note that property (a) is consistent with Example \ref{ex}. Example \ref{ex} also shows that the property $J_{u_1} = J_{u_2}$ is not satisfied in general, but that some extra condition, e.g.\ the one in \eqref{eq: minimizer}, is necessary. We refer to Example \ref{e2} below for an illustration that in case (a) the strains are not necessarily the same inside $E_{u_1} \cup E_{u_2}$. The result will be proved in Subsection \ref{sec: admissible}.

\subsection{Passage from the nonlinear to a linearized Griffith model}\label{rig-sec: sub, main gamma}
We now show that the nonlinear energies of Griffith-type can be related to a linearized Griffith model in the small strain limit by $\Gamma$-convergence. We also discuss the convergence of minimizers for boundary value problems.
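Before defining the limiting functional, we record the formal computation behind it (a heuristic sketch added for the reader's convenience; the rigorous argument is given in Section \ref{sec: gamma}). For a smooth deformation $y = \mathbf{id} + \varepsilon u$ without jumps, the polar decomposition and a Taylor expansion of $W$ around the identity yield, by frame indifference,
\begin{align*}
\varepsilon^{-2} \int_{\Omega'} W(\mathbf{Id} + \varepsilon \nabla u)\, dx \ \to \ \int_{\Omega'} \frac{1}{2}\, Q(e(u))\, dx \ \ \ \text{as } \varepsilon \to 0,
\end{align*}
where $Q(F) := D^2W(\mathbf{Id})F:F$ satisfies $Q(F) = Q({\rm sym}(F))$ for all $F \in \Bbb R^{d \times d}$. The second gradient term scales like $\varepsilon^{-2\beta}\varepsilon^2 \int_{\Omega'} |\nabla^2 u|^2\,dx \to 0$ since $\beta < 1$, and the surface term is not affected by the rescaling. This motivates the following definition.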
Given bounded Lipschitz domains $\Omega \subset \Omega'$, we define the energy $\mathcal{E}: GSBD^2(\Omega') \to [0,+\infty)$ by
\begin{align}\label{rig-eq: Griffith en-lim}
\mathcal{E}(u) = \int_{\Omega'} \frac{1}{2}\, Q(e(u)) \, dx + \kappa\mathcal{H}^{d-1}(J_u),
\end{align}
where $\kappa>0$, and $Q: \Bbb R^{d \times d} \to [0,+\infty)$ is the quadratic form $Q(F) = D^2W(\mathbf{Id})F : F$ for all $F \in \Bbb R^{d \times d}$. In view of \eqref{assumptions-W}, $Q$ is positive definite on $\Bbb R^{d \times d}_{\rm sym}$ and vanishes on $\Bbb R^{d \times d }_{\rm skew}$.

For the $\Gamma$-limsup inequality, more precisely for the application of the density result stated in Theorem \ref{th: crismale-density2}, we make the following geometrical assumption on the Dirichlet boundary $\partial_D \Omega= \Omega' \cap \partial \Omega$: there exists a decomposition $\partial \Omega = \partial_D \Omega \cup \partial_N\Omega \cup N$ with
\begin{align}\label{eq: density-condition2}
\partial_D \Omega, \partial_N\Omega \text{ relatively open}, \ \ \ \mathcal{H}^{d-1}(N) = 0, \ \ \ \partial_D\Omega \cap \partial_N \Omega = \emptyset, \ \ \ \partial (\partial_D \Omega) = \partial (\partial_N \Omega),
\end{align}
and there exist $\bar{\delta}>0$ small and $x_0 \in\Bbb R^d$ such that for all $\delta \in (0,\bar{\delta})$ there holds
\begin{align}\label{eq: density-condition}
O_{\delta,x_0}(\partial_D \Omega ) \subset \Omega,
\end{align}
where $O_{\delta,x_0}(x) := x_0 + (1-\delta)(x-x_0)$.

We now present our main $\Gamma$-convergence result. Recall Definition \ref{def:conv}, as well as the definition of the nonlinear energies in \eqref{rig-eq: Griffith en} and \eqref{eq: relaxed energy}. Moreover, recall the spaces $\mathcal{S}_{\varepsilon,h}$ and $GSBD^2_h$ in \eqref{eq: boundary-spaces} for $h \in W^{2,\infty}(\Omega';\Bbb R^d)$.

\begin{theorem}[Passage to linearized model]\label{rig-th: gammaconv}
Let $\Omega \subset \Omega' \subset \Bbb R^d$ be bounded Lipschitz domains. Suppose that $W$ satisfies \eqref{assumptions-W} and that \eqref{eq: density-condition2}-\eqref{eq: density-condition} hold. Let $h \in W^{2,\infty}(\Omega';\Bbb R^d)$.
\begin{itemize}
\item[(a)] (Compactness) For each sequence $(y_\varepsilon)_\varepsilon$ with $y_\varepsilon \in \mathcal{S}_{\varepsilon,h}$ and $\sup_\varepsilon \mathcal{E}_\varepsilon(y_\varepsilon) < +\infty$, there exists a subsequence (not relabeled) and $u \in GSBD^2_h$ such that $y_\varepsilon \rightsquigarrow u$.
\item[(b)] ($\Gamma$-liminf inequality) For each sequence $(y_\varepsilon)_\varepsilon$, $y_\varepsilon \in \mathcal{S}_{\varepsilon,h}$, with $y_\varepsilon \rightsquigarrow u$ for some $u \in GSBD^2_h$ we have
$$\liminf_{\varepsilon \to 0} \mathcal{E}_{\varepsilon}(y_\varepsilon) \ge \mathcal{E}(u).$$
\item[(c)] ($\Gamma$-limsup inequality) For each $u \in GSBD^2_h$ there exists a sequence $(y_\varepsilon)_\varepsilon$, $y_\varepsilon \in \mathcal{S}_{\varepsilon,h}$, such that $y_\varepsilon \rightsquigarrow u$ and
$$\lim_{\varepsilon \to 0 } \mathcal{E}_{\varepsilon}(y_\varepsilon) = \mathcal{E}(u).$$
\end{itemize}
The same statements hold with $\overline{\mathcal{E}}_\varepsilon$ in place of $\mathcal{E}_\varepsilon$.
\mathbf{e}nd{theorem} We point out that we identify a `standard' Griffith energy in linearized elasticity although we departed from a nonlinear model for nonsimple materials. As a corollary, we obtain the convergence of minimizers for boundary value problems. \begin{corollary}[Minimization problems]\label{cor: main cor} Consider the setting of Theorem \ref{rig-th: gammaconv}. Then \begin{align}\label{eq: eps control2} \inf_{\bar{y} \in \mathcal{S}_{\varepsilon,h}} \mathcal{E}_\varepsilon(\bar{y}) \ \to \ \min_{u \in GSBD^2_h} \mathcal{E}(u) \mathbf{e}nd{align} as $\varepsilon \to 0$. Moreover, for each sequence $(y_\varepsilon)_\varepsilon$ with $y_\varepsilon \in \mathcal{S}_{\varepsilon,h}$ satisfying \begin{align}\label{eq: eps control} \mathcal{E}_\varepsilon(y_\varepsilon) \le \inf_{\bar{y} \in \mathcal{S}_{\varepsilon,h}} \mathcal{E}_\varepsilon(\bar{y}) + \rho_\varepsilon \ \ \ \ \ \text{with } \ \rho_\varepsilon \to 0 \text{ as $\varepsilon \to 0$}, \mathbf{e}nd{align} there exist a subsequence (not relabeled) and $u \in GSBD^2_h$ with $\mathcal{E}(u) = \min_{v \in GSBD^2_h} \mathcal{E}(v)$ such that $y_\varepsilon \rightsquigarrow u$. \mathbf{e}nd{corollary} The results announced in this subsection will be proved in Subsection \ref{sec: gamma}. \section{Preliminaries}\label{sec:pre} In this section we collect some fundamental properties about (generalized) special functions of bounded variation and deformation. In particular, we recall and prove some results for $GSBV^2_2$ and $GSBD^2$ that will be needed for the proofs in Section \ref{sec: proofs}. \subsection{Caccioppoli partitions}\label{sec: Caccio} We say that a partition $(P_j)_j$ of an open set $\Omega\subset \Bbb R^d$ is a \textit{Caccioppoli partition} of $\Omega$ if $\sum\mathbf{n}olimits_j \mathcal{H}^{d-1}(\partial^* P_j) < + \infty$, where $\partial^* P_j$ denotes the \mathbf{e}mph{essential boundary} of $P_j$ (see \cite[Definition 3.60]{Ambrosio-Fusco-Pallara:2000}). The local structure of Caccioppoli partitions can be characterized as follows (see \cite[Theorem 4.17]{Ambrosio-Fusco-Pallara:2000}). \begin{theorem}\label{th: local structure} Let $(P_j)_j$ be a Caccioppoli partition of $\Omega$. Then $$\bigcup\mathbf{n}olimits_j (P_j)^1 \cup \bigcup\mathbf{n}olimits_{i \mathbf{n}eq j} (\partial^* P_i \cap \partial^* P_j)$$ contains $\mathcal{H}^{d-1}$-almost all of $\Omega$. \mathbf{e}nd{theorem} Here, $(P)^1$ denote the points where $P$ has density one (see again \cite[Definition 3.60]{Ambrosio-Fusco-Pallara:2000}). Essentially, the theorem states that $\mathcal{H}^{d-1}$-a.e.\ point of $\Omega$ either belongs to exactly one element of the partition or to the intersection of exactly two sets $\partial^* P_i$, $\partial^* P_j$. \subsection{$GSBV^2$ and $GSBV^2_2$ functions} For the general notions on $SBV$ and $GSBV$ functions and their properties we refer to \cite[Section 4]{Ambrosio-Fusco-Pallara:2000}. For $\Omega \subset \Bbb R^d$ open and $m \in \Bbb N$, we define $GSBV^2(\Omega;\Bbb R^m)$ as in \mathbf{e}qref{eq: space2}, for general $m$. We denote by $\mathbf{n}abla y$ the approximate differential and by $J_y$ the set of approximate jump points of $y$, which is an $\mathcal{H}^{d-1}$-rectifiable set. We recall that $GSBV^2(\Omega;\Bbb R^m)$ is a vector space, see \cite[Proposition 2.3]{DalMaso-Francfort-Toader:2005}. 
\color{black} In a similar fashion, we say $y \in SBV^2(\Omega;\Bbb R^m)$ if $y \in SBV(\Omega;\Bbb R^m)$, $\mathbf{n}abla y \in L^2(\Omega;\Bbb R^{m \times d})$, and $\mathcal{H}^{d-1}(J_y)< + \infty$. \color{black} We define $GSBV^2_2(\Omega;\Bbb R^m)$ as in \mathbf{e}qref{eq: space}, for general $m$. For $m=1$ we write $GSBV^2_2(\Omega)$. \color{black} By definition, \color{black} $\mathbf{n}abla y \in GSBV^2(\Omega;\Bbb R^{m \times d})$, and we use the notation $\mathbf{n}abla^2 y$ and $J_{\mathbf{n}abla y}$ for the approximate differential and the jump set of $\mathbf{n}abla y$, respectively. Applying \cite[Proposition 2.3]{DalMaso-Francfort-Toader:2005} on $y$ and $\mathbf{n}abla y$, we find that $GSBV^2_2(\Omega;\Bbb R^m)$ is a vector space. The following result is the key ingredient for the proof of Proposition \ref{prop: relaxation}. \begin{theorem}[Compactness in $GSBV^2_2$]\label{th: GSBV2 comp} Let $\Omega \subset \Bbb R^d$ be open and bounded, and let $m \in \Bbb N$. Let $(y_n)_n$ be a sequence in $GSBV^2_2(\Omega;\Bbb R^m)$. Suppose that there exists a continuous, increasing function $\psi: [0,\infty) \to [0,\infty)$ with $\lim_{t \to \infty} \psi(t) = + \infty$ such that $$\sup_{n \in \Bbb N} \Big(\int_{\Omega} \psi(|y_n|)\, dx + \int_{\Omega} |\mathbf{n}abla^2 y_n|^2 \, dx + \mathcal{H}^{d-1}(J_{y_n} \cup J_{\mathbf{n}abla y_n}) \Big) < + \infty.$$ Then there exist a subsequence, still denoted by $(y_n)_n$, and a function $y \in [GSBV(\Omega)]^m$ with $\mathbf{n}abla y \in GSBV^2(\Omega;\Bbb R^{m \times d})$ such that for all \color{black} $0 < \gamma_2 \le \gamma_1 \le 2\gamma_2$ \color{black} there holds \begin{align}\label{eq: main compi-resu} {\rm (i)} & \ \ y_n \to y \text{ a.e.\ in $\Omega$}, \mathbf{n}otag\\ {\rm (ii)} & \ \ \mathbf{n}abla y_n \to \mathbf{n}abla y \text{ a.e.\ $\Omega$},\mathbf{n}otag\\ {\rm (iii)} & \ \ \mathbf{n}abla^2 y_n \rightharpoonup \mathbf{n}abla^2 y \text{ weakly in $L^2(\Omega;\Bbb R^{m \times d \times d})$},\mathbf{n}otag\\ {\rm (iv)} & \ \ \gamma_1\mathcal{H}^{d-1}(J_y) + \gamma_2 \mathcal{H}^{d-1}(J_{\mathbf{n}abla y} \setminus J_y) \le \liminf_{n \to \infty} \big( \gamma_1\mathcal{H}^{d-1}(J_{y_n}) + \gamma_2 \mathcal{H}^{d-1}(J_{\mathbf{n}abla y_n} \setminus J_{y_n})\big). \mathbf{e}nd{align} If in addition $\sup_{n \in \Bbb N} \Vert \mathbf{n}abla y_n \Vert_{L^2(\Omega)} < + \infty$, then $y \in GSBV^2_2(\Omega;\Bbb R^m)$. \mathbf{e}nd{theorem} \begin{proof} First, we observe that it suffices to treat the case $m=1$ since otherwise one may argue componentwise, \color{black} see particularly \cite[Lemma 3.1]{Francfort-Larsen:2003} how to deal with property (iv). \color{black} The result has been proved in \cite[Theorem 4.4, Theorem 5.13, Remark 5.14]{Carriero2} with the only difference that we \color{black} just \color{black} assume $\sup_{n \in \Bbb N} \int_{\Omega} \psi(|y_n|)\, dx < + \infty $ here instead of $\sup_{n \in \Bbb N} \Vert y_n \Vert_{L^2(\Omega)} < + \infty$. We briefly indicate the necessary adaptions in the proof of \cite[Theorem 4.4]{Carriero2} for $m=1$. \color{black} To ease comparison with \cite{Carriero2}, we point out that in that paper the notation $GSBV^2(\Omega)$ is used for functions $u$ \color{black} with $u \in GSBV(\Omega)$ and $\mathbf{n}abla u \in [GSBV(\Omega)]^d$. For $k \in \Bbb N$, we define some $\varphi_k \in C^2(\Bbb R)$ by $\varphi_k(t) = t$ for $t \in [-k+1,k-1]$, $|\varphi_k(t)| = k $ for $|t| > k+1$, and $0 \le \varphi_k' \le 1$. 
By $\Vert \varphi_k \circ y_n \Vert_{L^1(\Omega)} \le k \mathcal{L}^d(\Omega) $ and by using an interpolation inequality one can check that $(\varphi_k \circ y_n)_n$ is bounded in $BV_{\rm loc}(\Omega)$, see \cite[(4.8)]{Carriero2}. Therefore, \color{black} by a diagonal argument there exist a subsequence of $(y_n)_n$ and \color{black} functions $w_k \in BV_{\rm loc}(\Omega)$ \color{black} for all $k \in \Bbb N$ \color{black} such that \begin{align}\label{eq: compi1} \varphi_k \circ y_n \to w_k \ \ \ \text{a.e.\ in $\Omega$ \, for all $k \in \Bbb N$}. \mathbf{e}nd{align} Since $\psi$ is continuous and increasing, and $|\varphi_k(t)| \le |t|$ for all $t \in \Bbb R$, we also get by Fatou's lemma \begin{align}\label{eq: compi2} \Vert \psi(|w_k|) \Vert_{L^1(\Omega)} \le \liminf_{n \to \infty} \Vert \psi(|\varphi_k \circ y_n|) \Vert_{L^1(\Omega)} \le \sup_{n \in \Bbb N} \int_{\Omega} \psi(|y_n|)\, dx < + \infty. \mathbf{e}nd{align} Let $E_k = \lbrace |w_k| < k-1 \rbrace$. The properties of $\varphi_k$ along with \mathbf{e}qref{eq: compi1} imply \begin{align}\label{eq: compi3} y_n \to w_k \ \ \ \text{a.e.\ in $E_k$ for all $k \in \Bbb N$}, \ \ \ \ \ w_k = w_l \ \ \ \text{ on $E_k$ for all $k \le l$. } \mathbf{e}nd{align} By using \mathbf{e}qref{eq: compi2} we observe that $\mathcal{L}^d(\Omega \setminus E_k) \to 0$ as $k \to \infty$ since $\lim_{t \to \infty} \psi(t) = + \infty$. This together with \mathbf{e}qref{eq: compi3} shows that the measurable function $y: \Omega \to \Bbb R$ defined by $y := \lim_{k\to \infty} w_k$ satisfies $y = w_k$ on $E_k$ for all $k \in \Bbb N$ and therefore $$y_n \to y \ \ \ \text{a.e.\ in $\Omega$}.$$ The rest of the proof starting with \cite[(4.10)]{Carriero2} remains unchanged. In \cite{Carriero2}, it has been shown that $y \in GSBV(\Omega)$ and $\mathbf{n}abla y \in [GSBV(\Omega)]^d$. Since $ \mathbf{n}abla^2 y\in L^2(\Omega;\Bbb R^{d\times d})$ and $\mathcal{H}^{d-1}(J_{\mathbf{n}abla y}) < +\infty$, we actually get $\mathbf{n}abla y \in GSBV^2(\Omega;\Bbb R^d)$. Finally, given an additional control on $(\mathbf{n}abla y_n)_n$ in $L^2$, we also find $ \mathbf{n}abla y \in L^2(\Omega;\Bbb R^d)$ and $\mathcal{H}^{d-1}(J_y) < +\infty$. This implies $y\in GSBV^2_2(\Omega)$, see \mathbf{e}qref{eq: space}. \mathbf{e}nd{proof} We now proceed with a version of Theorem \ref{th: GSBV2 comp} without a priori bounds on the functions. We also take boundary data into account. The result relies on Theorem \ref{th: GSBV2 comp} and \cite{Manuel}. \begin{theorem}[Compactness in $GSBV^2_2$ without a priori bounds]\label{th: comp} Let $\Omega \subset \Omega' \subset \Bbb R^d$ be bounded Lipschitz domains, and let $m\in\Bbb N$. Let $g \in W^{2,\infty}(\Omega';\Bbb R^m)$. 
Consider $(y_n)_n \subset GSBV^2_2(\Omega';\Bbb R^m)$ with $y_n = g$ on $\Omega' \setminus \overline{\Omega}$ and $$\sup_{n \in \Bbb N} \Big(\int_{\color{black} \Omega' \color{black} } \big( |\mathbf{n}abla y_n|^2 + |\mathbf{n}abla^2 y_n|^2 \big) \, dx + \mathcal{H}^{d-1}(J_{y_n} \cup J_{\mathbf{n}abla y_n}) \Big) < + \infty.$$ \mathbf{n}oindent Then we find a subsequence (not relabeled), modifications $(z_n)_n \subset GSBV^2_2(\Omega';\Bbb R^m)$ satisfying $z_n = g$ on $\Omega' \setminus \overline{\Omega}$ and \begin{align}\label{eq: compi1-new} {\rm (i)} & \ \ z_n = g \text{ on } S_n:= \lbrace \mathbf{n}abla z_n \mathbf{n}eq \mathbf{n}abla y_n \rbrace \cup \lbrace \mathbf{n}abla^2 z_n \mathbf{n}eq \mathbf{n}abla^2 y_n \rbrace, \ \ \ \ \text{where $\mathcal{L}^d(S_n) \to 0$ as $n \to \infty$}, \mathbf{n}otag\\ {\rm (ii)} & \ \ \color{black} \lim_{n \to \infty} \color{black} \mathcal{H}^{d-1}\big( \big( J_{z_n} \cup J_{\mathbf{n}abla z_n} \big)\setminus \big( J_{y_n} \cup J_{\mathbf{n}abla y_n} \big) \big) = 0, \mathbf{e}nd{align} as well as a limiting function $y \in GSBV^2_2(\Omega';\Bbb R^m)$ with $y = g$ on $\Omega' \setminus \overline{\Omega}$ such that \begin{align}\label{eq: compi2-new} {\rm (i)}& \ \ \text{$z_n \to y$ in measure on $\Omega'$,}\mathbf{n}otag\\ {\rm (ii)}& \ \ \mathbf{n}abla z_n \to \mathbf{n}abla y \text{ a.e.\ $\Omega'$ and $\mathbf{n}abla z_n \rightharpoonup \mathbf{n}abla y$ weakly in $L^2(\Omega'; \Bbb R^{m\times d})$}\mathbf{n}otag\\ {\rm (iii)}& \ \ \text{$\mathbf{n}abla^2 z_n \rightharpoonup \mathbf{n}abla^2 y$ weakly in $L^2(\Omega'; \Bbb R^{m\times d \times d})$}\mathbf{n}otag\\ {\rm (iv)} & \ \ \mathcal{H}^{d-1}(J_y \cup J_{\mathbf{n}abla y}) \le \liminf_{n \to \infty} \mathcal{H}^{d-1}(J_{z_n} \cup J_{\mathbf{n}abla z_n} ). \mathbf{e}nd{align} \mathbf{e}nd{theorem} In general, it is indispensable to pass to modifications. Consider, e.g., the sequence $y_n = n \chi_U$ for some set $U \subset \Omega$ of finite perimeter. The idea in \cite[Theorem 3.1]{Manuel}, where this result is proved in the space $GSBV^2(\Omega;\Bbb R^m)$, relies on constructing modifications $(z_n)_n$ by (cf.\ \cite[(37)-(38)]{Manuel}) \begin{align}\label{eq:modifications} z_n = g\chi_{R_n} + \sum\mathbf{n}olimits_{j \ge 1} (y_n - t^{n}_j) \chi_{P_j^{n}} \mathbf{e}nd{align} for Caccioppoli partitions $\Omega' = \bigcup_{j \ge 1} P_j^{n} \cup R_n$, and suitable translations $(t_j^{n})_{j \ge 1} \subset \Bbb R^m$, where \begin{align}\label{eq:modifications2} {\rm (i)} & \ \ \lim_{n\to \infty} \mathcal{L}^d(R_n) =0, \mathbf{n}otag \\ {\rm (ii)} & \ \ \lim_{n\to \infty} \mathcal{H}^{d-1}(J_{z_n} \setminus J_{y_n}) = \lim_{n\to \infty} \mathcal{H}^{d-1} \big((\partial^* R_n \cap \Omega') \setminus J_{y_n}\big) =0. \mathbf{e}nd{align} \begin{proof}[Proof of Theorem \ref{th: comp}] We briefly indicate the necessary adaptions with respect to \cite[Theorem 3.1]{Manuel} to obtain the result in the frame of $GSBV^2_2(\Omega';\Bbb R^m)$ involving second derivatives. First, by \cite[Theorem 3.1]{Manuel} we find modifications $(z_n)_n$ as in \mathbf{e}qref{eq:modifications} satisfying $z_n = g$ on $\Omega' \setminus \overline{\Omega}$ and $y \in GSBV^2(\Omega';\Bbb R^m)$ such that $z_n \to y$ in measure on $\Omega'$, up to passing to a subsequence. By \mathbf{e}qref{eq:modifications2} we get \mathbf{e}qref{eq: compi1-new}. 
As $z_n \to y$ in measure on $\Omega'$, \cite[Remark 2.2]{Solombrino} implies that there exists a continuous, increasing function $\psi: [0,\infty) \to [0,\infty)$ with $\lim_{t \to \infty} \psi(t) = + \infty$ such that up to subsequence (not relabeled) $\sup_{n \in \Bbb N}\int_{\Omega'} \psi(|z_n|)\, dx < + \infty$. \color{black} Moreover, by the assumptions on $y_n$, \mathbf{e}qref{eq: compi1-new}, and the fact that $g \in W^{2,\infty}(\Omega';\Bbb R^m)$ we get that $\mathbf{n}abla z_n$ and $\mathbf{n}abla^2 z_n$ are uniformly controlled in $L^2$, as well as $\sup_{n \in \Bbb N} \mathcal{H}^{d-1}(J_{z_n} \cup J_{\mathbf{n}abla z_n})<+\infty$. \color{black} Then Theorem \ref{th: GSBV2 comp} yields $y \in GSBV^2_2(\Omega';\Bbb R^m)$. Along with \mathbf{e}qref{eq: main compi-resu} for $\gamma_1 = \gamma_2 $ we also get \mathbf{e}qref{eq: compi2-new}, apart from the weak convergence of $(\mathbf{n}abla z_n)_n$. The weak convergence readily follows from $\sup_{n \in \Bbb N} \Vert \mathbf{n}abla z_n\Vert_{L^2(\Omega')} \le \sup_{n \in \Bbb N} \Vert \mathbf{n}abla y_n\Vert_{L^2(\Omega')} + \color{black} \Vert \mathbf{n}abla g\Vert_{L^2(\Omega')} \color{black} < +\infty$. \mathbf{e}nd{proof} \subsection{$GSBD^2$ functions}\label{sec: GSBD} We refer the reader to \cite{ACD} and \cite{DalMaso:13} for the definition, notations, and basic properties of $SBD$ and $GSBD$ functions, respectively. Here, we only recall briefly some relevant notions which can be defined for generalized functions of bounded deformation: let $\Omega \subset \Bbb R^d$ open and bounded. In \cite[Theorem 6.2 and Theorem 9.1]{DalMaso:13} it is shown that for $ u \in GSBD(\Omega)$ the jump set $J_u$ is $\mathcal{H}^{d-1}$-rectifiable and \color{black} that an \color{black} approximate symmetric differential $e(u)(x)$ exists at $\mathcal L^{d}$-a.e.\ $x\in \Omega$. We define the space $GSBD^2(\Omega)$ by $$ GSBD^2(\Omega):= \{u \in GSBD(\Omega): e(u) \in L^2 (\Omega; \mathbb R_{\mathrm{sym}}^{d\times d})\,,\,\mathcal{H}^{d-1}(J_u) < +\infty\}\,. $$ The space $GSBD^2(\Omega)$ is a vector subspace of the vector space of $\mathcal{L}^d$-measurable function, see \cite[Remark 4.6]{DalMaso:13}. Moreover, there holds $GSBV^2(\Omega;\Bbb R^d) \subset GSBD^2(\Omega)$. The following compactness result in $GSBD^2$ has been proved in \cite{Crismale}. \begin{theorem}[$GSBD^2$ compactness]\label{th: crismale-comp} \color{black} Let $\Omega \subset \Bbb R^d$ be open, bounded. \color{black} Let $(u_n)_n \subset GSBD^2(\Omega)$ be a sequence satisfying $$ \sup\mathbf{n}olimits_{n\in \Bbb N} \big( \Vert e(u_n) \Vert_{L^2(\Omega)} + \mathcal{H}^{d-1}(J_{u_n})\big) < + \infty.$$ Then there exists a subsequence (not relabeled) such that the set $A := \lbrace x\in \Omega: \, |u_n(x)| \to \infty \rbrace$ has finite perimeter, and \color{black} there exists \color{black} $u \in GSBD^2(\Omega)$ such that \begin{align}\label{eq: at crsimale comp} {\rm (i)} & \ \ u_n \to u \ \ \ \ \text{ in measure on } \Omega \setminus A, \mathbf{n}otag \\ {\rm (ii)} & \ \ e(u_n) \rightharpoonup e(u) \ \ \ \text{ weakly in } L^2(\Omega \setminus A; \Bbb R^{d \times d}_{\rm sym}),\mathbf{n}otag \\ {\rm (iii)} & \ \ \liminf_{n \to \infty} \mathcal{H}^{d-1}(J_{u_n}) \ge \mathcal{H}^{d-1}(J_u \cup \color{black} (\partial^*A \cap \Omega) \color{black} ). 
\mathbf{e}nd{align} \mathbf{e}nd{theorem} We briefly remark that \Bbb RRR \mathbf{e}qref{eq: at crsimale comp}(i) \color{black} is slightly weaker with respect to \mathbf{e}qref{eq: compi2-new}(i) in Theorem \ref{th: comp} (or the corresponding version in $GSBV$, see \cite{Manuel}) in the sense that there might be a set $A$ where the sequence $(u_n)_n$ is \mathbf{e}mph{unbounded}, cf.\ the example below Theorem \ref{th: comp}. This phenomenon is avoided in Theorem \ref{th: comp} by passing to suitable modifications which consists in subtracting piecewise constant functions, see \mathbf{e}qref{eq:modifications}. We point out that an analogous result in $GSBD^2$ is so far only available in dimension two, see \cite[Theorem 6.1]{Solombrino}. We now state two density results. \begin{theorem}[Density]\label{th: crismale-density} Let $\Omega \subset \Bbb R^d$ be a bounded Lipschitz domain. Let $u \in GSBD^2(\Omega)$. Then there exists a sequence $(u_n)_n \subset SBV^2(\Omega; \Bbb R^d) \cap L^\infty(\Omega;\Bbb R^d)$ such that each $J_{u_n}$ is closed and included in a finite union of closed connected pieces of $C^1$ hypersurfaces, each $u_n$ belongs to $C^{\infty}(\overline{\Omega} \setminus J_{u_n};\Bbb R^d) \cap W^{m,\infty}({\Omega} \setminus J_{u_n};\Bbb R^d) $ for every $m \in \Bbb N$, and the following properties hold: \begin{align*} \begin{split} {\rm (i)} & \ \ u_n \to u \text{ in measure on } \Omega,\\ {\rm (ii)} & \ \ \Vert e(u_n) - e(u) \Vert_{L^2(\Omega)} \to 0,\\ {\rm (iii)} & \ \ \mathcal{H}^{d-1}(J_{u_n} \triangle J_u) \to 0. \mathbf{e}nd{split} \mathbf{e}nd{align*} \mathbf{e}nd{theorem} \begin{proof} The result follows by combining \cite[Theorem 1.1]{Crismale2} and \cite[Theorem 1.1]{Crismale3}. First, \cite[Theorem 1.1]{Crismale2} yields an approximation $u_n$ satisfying $u_n \in \color{black} SBV^2(\Omega;\Bbb R^d) \cap \color{black} W^{1,\infty}({\Omega} \setminus J_{u_n};\Bbb R^d)$, and \color{black} then \color{black} \cite[Theorem 1.1]{Crismale3} gives the higher regularity. \mathbf{e}nd{proof} An adaption of the proof allows to impose boundary conditions on the approximating sequence. Suppose that the Lipschitz domains $\Omega \subset \Omega'$ satisfy the conditions introduced in \mathbf{e}qref{eq: density-condition2}-\mathbf{e}qref{eq: density-condition}. By $\mathcal{W}(\Omega;\Bbb R^d)$ we denote the space of all functions $u \in SBV(\Omega;\Bbb R^d)$ such that $J_u$ is a finite union of disjoint $(d-1)$-simplices and $u \in W^{k,\infty}(\Omega \setminus J_u; \Bbb R^d)$ for every $k \in \Bbb N$. \begin{theorem}[Density with boundary data]\label{th: crismale-density2} Let $\Omega \subset \Omega' \subset \Bbb R^d$ be bounded Lipschitz domains satisfying \mathbf{e}qref{eq: density-condition2}-\mathbf{e}qref{eq: density-condition}. Let $g \in W^{r,\infty}(\Omega')$ for $r \in \Bbb N$. Let $u \in GSBD^2(\Omega')$ with $u = g$ on $\Omega' \setminus \overline{\Omega}$. 
Then there exists a sequence of functions $(u_n)_n \subset SBV^2(\Omega; \Bbb R^d)$, a sequence of neighborhoods $(U_n)_n \subset \Omega'$ of $\Omega' \setminus \Omega$, and a sequence of neighborhoods $(\Omega_n)_n \subset \Omega$ of $\Omega \setminus U_n$ such that $u_n = g$ on $\Omega' \setminus \overline{\Omega}$, $u_n|_{U_n} \in W^{r,\infty}(U_n;\Bbb R^d)$, and $u_n|_{\Omega_n} \in \mathcal{W}(\Omega_n;\Bbb R^d)$, and the following properties hold: \begin{align}\label{eq: dense-boundary} {\rm (i)} & \ \ u_n \to u \text{ in measure on } \Omega',\mathbf{n}otag\\ {\rm(ii)} & \ \ \Vert e(u_n) - e(u) \Vert_{L^2(\Omega')} \to 0,\mathbf{n}otag\\ {\rm (iii)} & \ \ \mathcal{H}^{d-1}(J_{u_n}) \to \mathcal{H}^{d-1}(J_u). \mathbf{e}nd{align} In particular, $u_n \in W^{r,\infty}(\Omega \setminus J_{u_n};\Bbb R^d)$. \mathbf{e}nd{theorem} \begin{proof} The fact that $u$ can be approximated by a sequence $(u_n)_n \subset SBV^2(\Omega';\Bbb R^d) \cap L^\infty(\Omega;\Bbb R^d)$ satisfying \mathbf{e}qref{eq: dense-boundary} and $u_n = g$ in a neighborhood of $\Omega' \setminus {\Omega}$ has been addressed in \cite[Proof of Theorem 5.4]{Crismale2}. Here, also the necessity of the geometric assumptions \mathbf{e}qref{eq: density-condition2}-\mathbf{e}qref{eq: density-condition} is discussed, see \cite[Remark 5.6]{Crismale2}. The fact that the approximating sequence can be chosen as in the statement then follows by applying on each $u_n$ a construction very similar to the one of \cite[Proposition 2.5]{Giacomini:2005} along with a diagonal argument. This construction consists in a suitable cut-off construction and the application of the density result \cite{Cortesani-Toader:1999}. We also refer to \cite[Theorem 3.5]{SchmidtFraternaliOrtiz:2009} for a similar statement. \mathbf{e}nd{proof} \section{Proofs}\label{sec: proofs} This section contains the \color{black} proofs \color{black} of our results. \subsection{Relaxation and existence of minimizers for the nonlinear model}\label{sec: proofs-nonlinear} In this subsection we prove Proposition \ref{prop: relaxation} and Theorem \ref{th: existence}. \begin{proof}[Proof of Proposition \ref{prop: relaxation}] For $y \in GSBV_2^2(\Omega;\Bbb R^d)$ we define \begin{align}\label{eq: 5L} \mathcal{E}'_\varepsilon(y) = \inf \big\{ \liminf\mathbf{n}olimits_{n \to \infty} \mathcal{E}_\varepsilon(y_n,\Omega): y_n \to y \ \text{\rm in measure on $\Omega$}\big\}, \mathbf{e}nd{align} and define $\overline{\mathcal{E}}_\varepsilon(\cdot,\Omega)$ as in \mathbf{e}qref{eq: relaxed energy}. We need to check that $\mathcal{E}'_\varepsilon = \overline{\mathcal{E}}_\varepsilon(\cdot,\Omega)$. In the proof, we write $\tilde{\subset}$ and $\tilde{=}$ for brevity if the inclusion or the identity holds up to an $\mathcal{H}^{d-1}$-negligible set, respectively. \mathbf{e}mph{Step 1: $\mathcal{E}'_\varepsilon \ge \overline{\mathcal{E}}_\varepsilon(\cdot,\Omega)$.} Since by definition $\overline{\mathcal{E}}_\varepsilon(y,\Omega) \le {\mathcal{E}}_\varepsilon(y,\Omega)$ for all $y \in GSBV^2_2(\Omega;\Bbb R^d)$, see \mathbf{e}qref{rig-eq: Griffith en}, it suffices to confirm that $\overline{\mathcal{E}}_\varepsilon(\cdot,\Omega)$ is lower semicontinous with respect to the convergence in measure. To see this, consider $(y_n)_n \subset GSBV^2_2(\Omega;\Bbb R^d)$ with $y_n \to y$ in measure $\Omega$ and $\sup_{n\in \Bbb N}\overline{\mathcal{E}}_\varepsilon(y_n,\Omega) <+ \infty$. 
By using \cite[Remark 2.2]{Solombrino}, there exists a continuous, increasing function $\psi: [0,\infty) \to [0,\infty)$ with $\lim_{t \to \infty} \psi(t) = + \infty$ such that up to subsequence (not relabeled) $\sup_{n \in \Bbb N}\int_{\Omega} \psi(|y_n|)\, dx < + \infty$. Then from Theorem \ref{th: GSBV2 comp} we obtain $$\overline{\mathcal{E}}_\varepsilon(y,\Omega) \le \liminf_{n \to \infty} \overline{\mathcal{E}}_\varepsilon(y_n,\Omega). $$ In fact, for the second and the third term in \mathbf{e}qref{eq: relaxed energy} we use \mathbf{e}qref{eq: main compi-resu}(iii) and (iv) \color{black} for $\gamma_1=\gamma_2$, \color{black} respectively. The first term in \mathbf{e}qref{eq: relaxed energy} is lower semicontinuous by the continuity of $W$, \mathbf{e}qref{eq: main compi-resu}(ii), and Fatou's lemma. This shows that $\overline{\mathcal{E}}_\varepsilon(\cdot,\Omega)$ is lower semicontinous and concludes the proof of $\mathcal{E}'_\varepsilon \ge \overline{\mathcal{E}}_\varepsilon(\cdot,\Omega)$. \mathbf{e}mph{Step 2: $\mathcal{E}'_\varepsilon \le \overline{\mathcal{E}}_\varepsilon(\cdot,\Omega)$.} In the proof, we will use the following argument several times: if $y_1,y_2 \in GSBV^2(\Omega;\Bbb R^d)$, then for a.e.\ $t \in \Bbb R$ there holds that $z:= y_1 + ty_2\in GSBV^2(\Omega;\Bbb R^d)$ satisfies $J_z = J_{y_1} \cup J_{y_2}$, see \cite[Proof of Lemma 3.1]{Francfort-Larsen:2003} or \cite[Proof of Lemma 4.5]{DalMaso-Francfort-Toader:2005} for such an argument. We point out that here we exploit the fact that $GSBV^2(\Omega;\Bbb R^d)$ is a vector space. Observe that for each $y \in GSBV^2_2(\Omega;\Bbb R^d)$ and each $\mathbf{n}u \in S^{d-1}$, the function $v := \mathbf{n}abla y \cdot \mathbf{n}u$ lies in $GSBV^2(\Omega;\Bbb R^d) \subset GSBD^2(\Omega)$. We can choose $\mathbf{n}u \in S^{d-1}$ such that there holds $\mathcal{H}^{d-1}(J_v \triangle J_{\mathbf{n}abla y}) = 0$. We apply Theorem \ref{th: crismale-density} to approximate $v \in GSBD^2(\Omega)$ by a sequence $(v_n)_n \subset SBV^2(\Omega; \Bbb R^d)$ such that $v_n \in W^{2,\infty}({\Omega} \setminus J_{v_n};\Bbb R^d)$ and \begin{align}\label{eq: jumpi1} \mathcal{H}^{d-1}(J_{v_n} \triangle J_{\mathbf{n}abla y} ) = \mathcal{H}^{d-1}(J_{v_n} \triangle J_{v} ) \to 0 \mathbf{e}nd{align} as $n \to \infty$. We point out that $J_{\mathbf{n}abla v_n} \tilde{\subset} J_{v_n}$ since $v_n \in W^{2,\infty}({\Omega} \setminus J_{v_n};\Bbb R^d)$. \color{black} Using \color{black} $v_n \in W^{2,\infty}({\Omega} \setminus J_{v_n};\Bbb R^d)$ we can choose a sequence $(\mathbf{e}ta_n)_n$ with $\mathbf{e}ta_n \to 0$ such that $z_n := y + \mathbf{e}ta_n v_n \in GSBV^2_2(\Omega;\Bbb R^d)$ satisfies $J_{z_n} \tilde{=} J_y \cup J_{v_n}$ \color{black} and there holds \color{black} $z_n \to y$ in measure on $\Omega$. \color{black} By \mathbf{e}qref{eq: jumpi1}, the continuity of $W$, $J_{z_n} \tilde{=} J_y \cup J_{v_n}$, and $J_{\mathbf{n}abla z_n} \tilde{\subset} J_{\mathbf{n}abla y} \cup J_{v_n}$ we get \color{black} \begin{align}\label{eq: jumpi4} \limsup\mathbf{n}olimits_{n\to \infty}\overline{\mathcal{E}}_\varepsilon(z_n,\Omega) \le \overline{\mathcal{E}}_\varepsilon(y,\Omega). 
\end{align}
As $J_{z_n} \tilde{=} J_y \cup J_{v_n}$, $J_{\nabla y} \tilde{=} J_{v}$, and $J_{\nabla v_n} \tilde{\subset} J_{v_n}$, we also get
\begin{align}\label{eq: jumpi3}
J_{\nabla z_n} \setminus J_{z_n} \,\tilde{\subset}\, (J_{\nabla y} \cup J_{\nabla v_n}) \setminus (J_y \cup J_{v_n}) \, \tilde{\subset} \, J_v \setminus J_{v_n}.
\end{align}
In view of \eqref{eq: jumpi1}, by a Besicovitch covering argument we can cover the rectifiable sets $J_v \setminus J_{v_n}$ by sets of finite perimeter $(E_n)_n \subset \subset \Omega$, each of which is a countable union of balls with radii smaller than $\frac{1}{n}$, such that
\begin{align}\label{eq: jumpi2}
\mathcal{L}^d(E_n) + \mathcal{H}^{d-1}(\partial^* E_n) \to 0.
\end{align}
We finally define the sequence $y_n \in GSBV^2_2(\Omega;\Bbb R^d)$ by $y_n = z_n \chi_{\Omega \setminus E_n} + (\mathbf{id} + b_n) \chi_{E_n}$ for suitable constants $(b_n)_n \subset \Bbb R^d$ which are chosen such that $J_{y_n} \tilde{=} (J_{z_n} \setminus E_n) \cup \partial^* E_n$. Now in view of \eqref{eq: jumpi3} and $J_v \setminus J_{v_n} \tilde{\subset} E_n$, we get $J_{\nabla y_n}\tilde{\subset} J_{y_n}$. By \eqref{eq: jumpi2} and $z_n \to y$ in measure on $\Omega$ we get $y_n \to y$ in measure on $\Omega$. By \eqref{assumptions-W}(iii) we obtain $W(\nabla {y}_n) = 0$ and $\nabla^2 {y}_n = 0$ on $E_n$. Then by \eqref{eq: relaxed energy}, \eqref{eq: jumpi4}, \eqref{eq: jumpi2}, and the fact that $J_{\nabla y_n}\tilde{\subset} J_{y_n} \tilde{=} (J_{z_n} \setminus E_n) \cup \partial^* E_n$ we get
$$\limsup_{n\to \infty} \overline{\mathcal{E}}_\varepsilon(y_n,\Omega) \le \limsup_{n\to\infty} \big(\overline{\mathcal{E}}_\varepsilon(z_n,\Omega) + \kappa\mathcal{H}^{d-1}(\partial^* E_n) \big) \le \overline{\mathcal{E}}_\varepsilon(y,\Omega).$$
Since $\overline{\mathcal{E}}_\varepsilon(y_n,\Omega) = \mathcal{E}_\varepsilon(y_n,\Omega)$ for all $n \in \Bbb N$ by $J_{\nabla y_n}\tilde{\subset} J_{y_n}$, \eqref{eq: 5L} implies $\mathcal{E}'_\varepsilon(y) \le \overline{\mathcal{E}}_\varepsilon(y,\Omega)$. This concludes the proof.
\end{proof}

\begin{proof}[Proof of Theorem \ref{th: existence}]
We prove the existence of minimizers via the direct method. Let $(y_n)_n \subset GSBV^2_2(\Omega';\Bbb R^d)$ with $y_n = g$ on $\Omega' \setminus\overline{\Omega}$ be a minimizing sequence for the minimization problem \eqref{eq: minimization problem}. By \eqref{assumptions-W} we find $W(F)\ge c_1|F|^2 - c_2$ for some $c_1,c_2>0$. Thus, $\sup_{n\in\Bbb N} \overline{\mathcal{E}}_\varepsilon(y_n) <+\infty$ also implies $\sup_{n \in \Bbb N} \Vert \nabla y_n \Vert_{L^2(\Omega')}< + \infty$, and we can apply Theorem \ref{th: comp}. We obtain a sequence $(z_n)_n \subset GSBV^2_2(\Omega';\Bbb R^d)$ satisfying $z_n = g$ on $\Omega' \setminus \overline{\Omega}$ and a limiting function $y \in GSBV^2_2(\Omega';\Bbb R^d)$ with $y = g$ on $\Omega' \setminus \overline{\Omega}$ such that $z_n \to y$ in measure on $\Omega'$.
Using \eqref{eq: relaxed energy}, \eqref{eq: compi1-new}, and $g \in W^{2,\infty}(\Omega';\Bbb R^d)$ we calculate
\begin{align*}
\limsup_{n \to \infty} \big(\overline{\mathcal{E}}_\varepsilon(z_n) -\overline{\mathcal{E}}_\varepsilon(y_n) \big)& \le \limsup_{n \to \infty} \Big( \varepsilon^{-2} C_{W,g} \mathcal{L}^d(S_n) + \varepsilon^{-2\beta} \Vert \nabla^2 g\Vert^2_{L^2(S_n)} \\
& \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ + \kappa \big( \mathcal{H}^{d-1}(J_{z_n} \cup J_{\nabla z_n} )- \mathcal{H}^{d-1}(J_{y_n} \cup J_{\nabla y_n} ) \big) \Big) \le 0,
\end{align*}
where the constant $C_{W,g}$ depends on $W$ and $\Vert \nabla g \Vert_{L^\infty(\Omega')}$. In particular, $(z_n)_n$ is also a minimizing sequence. By $z_n \to y$ in measure on $\Omega'$ and the fact that $\overline{\mathcal{E}}_\varepsilon$ is lower semicontinuous with respect to the convergence in measure on $\Omega'$, see Proposition \ref{prop: relaxation}, we get
$$ \overline{\mathcal{E}}_\varepsilon(y) \le \liminf_{n \to \infty} \overline{\mathcal{E}}_\varepsilon(z_n) \le \liminf_{n \to \infty} \overline{\mathcal{E}}_\varepsilon(y_n) = \inf_{{\bar{y} \in GSBV^2_2(\Omega';\Bbb R^d)}} \Big\{ \overline{\mathcal{E}}_\varepsilon(\bar{y}): \ \bar{y} = g \text{ on } \Omega' \setminus \overline{\Omega} \Big\}. $$
This shows that $y$ is a minimizer.
\end{proof}

\subsection{Compactness}\label{sec: comp-proof}

This subsection is devoted to the proof of Theorem \ref{thm: compactess}.

\begin{proof}[Proof of Theorem \ref{thm: compactess}(a)]
Consider a sequence $(y_\varepsilon)_\varepsilon$ with $y_\varepsilon \in \mathcal{S}_{\varepsilon,h}$, i.e., $y_\varepsilon = \mathbf{id} + \varepsilon h$ on $\Omega' \setminus \overline{\Omega}$. Suppose that $M:=\sup\nolimits_\varepsilon \overline{\mathcal{E}}_\varepsilon(y_\varepsilon) <+\infty$. We first construct Caccioppoli partitions (Step 1) and the corresponding rotations (Step 2) in order to define $y^{\rm rot}_\varepsilon$. Then we confirm \eqref{eq: first conditions} (Step 3).

\emph{Step 1: Definition of the Caccioppoli partitions.} First, we apply the $BV$ coarea formula (see \cite[Theorem 3.40 or Theorem 4.34]{Ambrosio-Fusco-Pallara:2000}) to each component $(\nabla y_\varepsilon)_{ij} \in GSBV^2(\Omega')$, $1 \le i,j\le d$, to write
\begin{align*}
\int_{-\infty}^\infty \mathcal{H}^{d-1}\big( (\Omega' \setminus J_{\nabla y_\varepsilon}) \cap \partial^* \lbrace (\nabla y_\varepsilon)_{ij} > t \rbrace \big) \,dt &= |D (\nabla y_\varepsilon)_{ij}|(\Omega' \setminus J_{\nabla y_\varepsilon}) \le \Vert \nabla^2 y_\varepsilon \Vert_{L^1(\Omega')}.
\end{align*}
Using H\"older's inequality and \eqref{eq: relaxed energy} along with $\overline{\mathcal{E}}_\varepsilon(y_\varepsilon) \le M$, we then get
\begin{align}\label{eq: L7XXX}
\int_{-\infty}^\infty \mathcal{H}^{d-1}\big( (\Omega' \setminus J_{\nabla y_\varepsilon}) \cap \partial^* \lbrace (\nabla y_\varepsilon)_{ij} > t \rbrace \big) \,dt \le (\mathcal{L}^d(\Omega'))^{1/2} \Vert \nabla^2 y_\varepsilon \Vert_{L^2(\Omega')} \le (\mathcal{L}^d(\Omega') M)^{1/2} \varepsilon^\beta.
\end{align}
Fix $\gamma \in (\frac{2}{3},\beta)$ and define $T_\varepsilon= \varepsilon^{\gamma}$.
For all $k \in \Bbb Z$ we find $t^{ij}_k \in (kT_\varepsilon, (k+1)T_\varepsilon]$ such that
\begin{align}\label{eq: coarea}
\mathcal{H}^{d-1}\big( (\Omega' \setminus J_{\nabla y_\varepsilon}) \cap \partial^*\lbrace (\nabla y_\varepsilon)_{ij} >t^{ij}_k \rbrace \big) \le \frac{1}{T_\varepsilon} \int_{kT_\varepsilon}^{{(k+1)T_\varepsilon}} \mathcal{H}^{d-1}\big( (\Omega' \setminus J_{\nabla y_\varepsilon}) \cap \partial^*\lbrace (\nabla y_\varepsilon)_{ij} >t \rbrace \big)\, dt.
\end{align}
Let $G_k^{\varepsilon,ij} = \lbrace (\nabla y_\varepsilon)_{ij} > t^{ij}_{k} \rbrace \setminus \lbrace (\nabla y_\varepsilon)_{ij} > t^{ij}_{k+1} \rbrace$ and note that each set has finite perimeter in $\Omega'$ since it is the difference of two sets of finite perimeter. Now \eqref{eq: L7XXX} and \eqref{eq: coarea} imply
\begin{align}\label{eq: nummer geben}
\sum\nolimits_{k\in\Bbb Z} \mathcal{H}^{d-1}\big( (\Omega' \setminus J_{\nabla y_\varepsilon}) \cap \partial^* G_k^{\varepsilon, ij} \big) \le 2T^{-1}_\varepsilon (\mathcal{L}^d(\Omega') M)^{1/2} \varepsilon^\beta \le C \varepsilon^{\beta-\gamma}
\end{align}
for a sufficiently large constant $C>0$ independent of $\varepsilon$. Since $\mathcal{L}^d(\Omega' \setminus \bigcup_{k \in \Bbb Z} G_k^{\varepsilon, ij})=0$, $(G_k^{\varepsilon,ij})_{k\in \Bbb Z}$ is a Caccioppoli partition of $\Omega'$. We let $(P^\varepsilon_j)_{j\in\Bbb N}$ be the Caccioppoli partition of $\Omega'$ consisting of the nonempty sets among
$$\big\{ G_{k_{11}}^{\varepsilon, 11} \cap G_{k_{12}}^{\varepsilon, 12} \cap \ldots \cap G_{k_{dd}}^{\varepsilon, dd}: \ k_{ij} \in \Bbb Z\text{ for } i,j=1,\ldots,d\big\}.$$
Then \eqref{eq: nummer geben} implies
\begin{align}\label{eq: coarea3}
\sum\nolimits_{j =1}^\infty\mathcal{H}^{d-1}\big(\partial^*P_j^\varepsilon \cap (\Omega' \setminus J_{\nabla y_\varepsilon}) \big) \le C\varepsilon^{\beta-\gamma}
\end{align}
for a constant $C>0$ independent of $\varepsilon$.

\emph{Step 2: Definition of the rotations.} We now define corresponding rotations. Recalling $T_\varepsilon = \varepsilon^{\gamma}$ we get $|t_k^{ij} -t_{k+1}^{ij}| \le 2T_\varepsilon=2\varepsilon^\gamma$ for all $k \in \Bbb Z$, $i,j=1,\ldots,d$. Then by the definition of $G_k^{\varepsilon,ij}$, for each component $P^\varepsilon_j$ of the Caccioppoli partition, we find a matrix $F_j^\varepsilon \in \Bbb R^{d\times d}$ such that
\begin{align}\label{eq: coarea4}
\Vert \nabla y_\varepsilon - F_j^\varepsilon \Vert_{L^\infty(P^\varepsilon_j)} \le c\varepsilon^{\gamma},
\end{align}
where $c$ depends only on $d$. For each $j \in \Bbb N$ with $P^\varepsilon_j \subset \Omega$ up to an $\mathcal{L}^d$-negligible set, we denote by $\bar{R}_j^\varepsilon$ the nearest point projection of $F_j^\varepsilon$ onto $SO(d)$. For all other components $P^\varepsilon_j$, i.e., the components intersecting $\Omega' \setminus \overline{\Omega}$, we set $\bar{R}_j^\varepsilon = \mathbf{Id}$.
We now show that for all $j \in \Bbb N$ and for $\mathcal{L}^d$-a.e.\ $x \in P_j^\varepsilon$ there holds
\begin{align}\label{eq: main control}
|\nabla y_\varepsilon(x) - \bar{R}_j^\varepsilon| \le \max\big\{ C\varepsilon^{\gamma}, \ 2\operatorname{dist}(\nabla y_\varepsilon(x), SO(d)) \big\}
\end{align}
for a constant $C>0$ independent of $\varepsilon$. First, we consider components $P^\varepsilon_j$ which are contained in $\Omega$ up to an $\mathcal{L}^d$-negligible set. Recall that $\bar{R}_j^\varepsilon$ is defined as the nearest point projection of $F_j^\varepsilon$ onto $SO(d)$. If $|\bar{R}_j^\varepsilon- F_j^\varepsilon| \le 3c\varepsilon^{\gamma}$, where $c$ is the constant of \eqref{eq: coarea4}, \eqref{eq: main control} follows from \eqref{eq: coarea4} and the triangle inequality. Otherwise, by \eqref{eq: coarea4} we get for $\mathcal{L}^d$-a.e.\ $x \in P_j^\varepsilon$
\begin{align*}
\operatorname{dist}(\nabla y_\varepsilon(x), SO(d)) & \ge \operatorname{dist}(F^\varepsilon_j, SO(d)) - c\varepsilon^{\gamma} = |\bar{R}_j^\varepsilon- F_j^\varepsilon| - c\varepsilon^{\gamma} \\ & \ge \tfrac{1}{2} \big(|\bar{R}_j^\varepsilon- F_j^\varepsilon| + c\varepsilon^{\gamma} \big) \ge \tfrac{1}{2}|\bar{R}_j^\varepsilon- \nabla y_\varepsilon(x)|.
\end{align*}
This implies \eqref{eq: main control}. Now consider a component $P^\varepsilon_j$ which intersects $\Omega' \setminus \overline{\Omega}$. Then by \eqref{eq: coarea4} and the fact that $y_\varepsilon = \mathbf{id} + \varepsilon h$ on $\Omega' \setminus \overline{\Omega}$ there holds
\begin{align*}
\Vert \mathbf{Id} + \varepsilon \nabla h - F_j^\varepsilon \Vert_{L^\infty(P^\varepsilon_j \setminus \Omega)} \le \Vert \nabla y_\varepsilon - F_j^\varepsilon \Vert_{L^\infty(P^\varepsilon_j)} \le c\varepsilon^{\gamma}.
\end{align*}
Since $0 < \gamma < 1$, this yields $|F_j^\varepsilon - \mathbf{Id}| \le C\varepsilon^{\gamma}$ for a constant $C$ depending also on $\Vert \nabla h \Vert_{L^\infty(\Omega')}$. This along with \eqref{eq: coarea4} implies \eqref{eq: main control} (for $\bar{R}_j^\varepsilon = \mathbf{Id}$). We define the rotations in the statement by $R_j^\varepsilon := (\bar{R}^\varepsilon_j)^{-1}$.

\emph{Step 3: Proof of \eqref{eq: first conditions}.} We are now in a position to prove \eqref{eq: first conditions}. We define $y^{\rm rot}_{\varepsilon}$ as in \eqref{eq: piecewise rotated}, i.e., $y^{\rm rot}_\varepsilon = \sum\nolimits_{j=1}^\infty R_j^\varepsilon y_\varepsilon \chi_{P^\varepsilon_j}$. Then \eqref{eq: first conditions}(i) follows from the fact that $y_\varepsilon = \mathbf{id} + \varepsilon h$ on $\Omega' \setminus \overline{\Omega}$ and $y_{\varepsilon}^{\rm rot} = y_\varepsilon$ on $\Omega' \setminus \overline{\Omega}$, where the latter holds due to $R^\varepsilon_j = \mathbf{Id}$ for all $P_j^\varepsilon$ intersecting $\Omega' \setminus \overline{\Omega}$. Property \eqref{eq: first conditions}(ii) is a direct consequence of the definition of $y_\varepsilon^{\rm rot}$ and \eqref{eq: coarea3}.
To see \eqref{eq: first conditions}(iv), we use \eqref{eq: main control} and $R_j^\varepsilon = (\bar{R}^\varepsilon_j)^{-1}$ to get
\begin{align*}
\Vert \nabla y^{\rm rot}_\varepsilon - \mathbf{Id}\Vert^2_{L^2(\Omega')} &= \sum\nolimits_{j=1}^\infty \Vert \nabla y_\varepsilon - \bar{R}^\varepsilon_j\Vert^2_{L^2(P^\varepsilon_j)} \le C\varepsilon^{2\gamma} \mathcal{L}^d(\Omega') + 4 \Vert \operatorname{dist}(\nabla y_\varepsilon, SO(d)) \Vert^2_{L^2(\Omega')} \\& \le C(\varepsilon^{2\gamma} + \varepsilon^2)
\end{align*}
for a constant depending on $M$, where the last step follows from \eqref{assumptions-W}(iii), \eqref{eq: relaxed energy}, and $\overline{\mathcal{E}}_\varepsilon(y_\varepsilon) \le M$. Since $0 < \gamma < 1$, \eqref{eq: first conditions}(iv) is proved.

It remains to show \eqref{eq: first conditions}(iii). We recall the linearization formula (see \cite[(3.20)]{FrieseckeJamesMueller:02})
\begin{align}\label{rig-eq: linearization}
|{\rm sym}(F -\mathbf{Id})| = \operatorname{dist}(F,SO(d)) + {\rm O} (|F- \mathbf{Id}|^2)
\end{align}
for $F \in \Bbb R^{d \times d}$. By Young's inequality and $|{\rm sym}(F -\mathbf{Id})| \le |F -\mathbf{Id}|$ this implies
\begin{align*}
|{\rm sym}(F -\mathbf{Id})|^2& \le \min\big\{ |F- \mathbf{Id}|^2, \ C \operatorname{dist}^2(F,SO(d))+ C|F-\mathbf{Id}|^4 \big\}\\ & \le C \operatorname{dist}^2(F,SO(d)) + C\min\big\{ |F- \mathbf{Id}|^2, \ |F-\mathbf{Id}|^4 \big\}.
\end{align*}
Then we calculate
\begin{align*}
\int_{\Omega'} |{\rm sym}( \nabla y^{\rm rot}_\varepsilon - \mathbf{Id})|^2 &\le C\int_{\Omega'} \Big( \operatorname{dist}^2(\nabla y_\varepsilon^{\rm rot},SO(d)) + \min\big\{ |\nabla y_\varepsilon^{\rm rot}- \mathbf{Id}|^2, \ |\nabla y_\varepsilon^{\rm rot}-\mathbf{Id}|^4 \big\}\Big) \\ &\le C\sum\nolimits_{j=1}^\infty\int_{P^\varepsilon_j} \Big(\operatorname{dist}^2(\nabla y_\varepsilon,SO(d)) + |\nabla y_\varepsilon- \bar{R}^\varepsilon_j|^2 \, \min\big\{ 1, \, |\nabla y_\varepsilon-\bar{R}_j^\varepsilon|^2 \big\}\Big).
\end{align*}
By \eqref{eq: main control} we note that for a.e.\ $x \in P_j^\varepsilon$ there holds
$$|\nabla y_\varepsilon(x)- \bar{R}^\varepsilon_j|^2 \,\min\big\{ 1 , \, |\nabla y_\varepsilon(x)-\bar{R}_j^\varepsilon|^2 \big\} \le C\varepsilon^{4\gamma} + C\operatorname{dist}^2(\nabla y_\varepsilon(x), SO(d)). $$
Here, we used that, if $|\nabla y_\varepsilon(x)-\bar{R}_j^\varepsilon|^2>1$, the maximum in \eqref{eq: main control} is attained for $\operatorname{dist}(\nabla y_\varepsilon(x), SO(d))$, provided that $\varepsilon$ is small enough. Therefore, we get
\begin{align*}
\int_{\Omega'} |{\rm sym}( \nabla y^{\rm rot}_\varepsilon - \mathbf{Id})|^2 &\le C\int_{\Omega'} \operatorname{dist}^2(\nabla y_\varepsilon,SO(d)) + C\mathcal{L}^d(\Omega') \varepsilon^{4\gamma} \le C\varepsilon^2 + C\varepsilon^{4\gamma},
\end{align*}
where in the last step we have again used \eqref{assumptions-W}(iii), \eqref{eq: relaxed energy}, and $\overline{\mathcal{E}}_\varepsilon(y_\varepsilon) \le M$. Since $\gamma >\frac{2}{3} \ge \frac{1}{2}$, we obtain \eqref{eq: first conditions}(iii). This concludes the proof of Theorem \ref{thm: compactess}(a).
\end{proof}

\begin{rem}\label{re: unchanged}
For later purposes, we point out that the construction shows $y_\varepsilon^{\rm rot} = y_{\varepsilon}$ on all $P_j^\varepsilon$ intersecting $\Omega'\setminus \overline{\Omega}$.
\end{rem}

\begin{proof}[Proof of Theorem \ref{thm: compactess}(b)]
We define the rescaled displacement fields $u_\varepsilon := \frac{1}{\varepsilon}(y^{\rm rot}_\varepsilon - \mathbf{id})$ as in \eqref{eq: rescalidipl}. Clearly, there holds $u_\varepsilon \in GSBV^2(\Omega';\Bbb R^d) \subset GSBD^2(\Omega')$. Note that by \eqref{eq: first conditions}(iii) we obtain $\sup_\varepsilon \Vert e(u_\varepsilon) \Vert_{L^2(\Omega')} < + \infty$, where for shorthand we again write $e(u_\varepsilon) = \frac{1}{2}(\nabla u_\varepsilon^T + \nabla u_\varepsilon)$. Moreover, in view of \eqref{eq: first conditions}(ii) and $\beta >\gamma$, we get
\begin{align}\label{eq: should be sep}
\limsup\nolimits_{\varepsilon \to 0} \mathcal{H}^{d-1}(J_{u_\varepsilon}) \le \limsup\nolimits_{\varepsilon \to 0} \mathcal{H}^{d-1}(J_{y_\varepsilon} \cup J_{\nabla y_\varepsilon}) < +\infty.
\end{align}
Therefore, we can apply Theorem \ref{th: crismale-comp} to the sequence $(u_\varepsilon)_\varepsilon$ to obtain $A$ and $u' \in GSBD^2(\Omega')$ such that \eqref{eq: at crsimale comp} holds (up to passing to a subsequence). We first observe that $E_u = A$, where $E_u := \lbrace x\in \Omega: \, |u_\varepsilon(x)| \to \infty \rbrace$ and $A := \lbrace x\in \Omega': \, |u_\varepsilon(x)| \to \infty \rbrace$. To see this, we have to check that $A \subset \Omega$. This follows from the fact that $u_\varepsilon = h$ on $\Omega' \setminus \overline{\Omega}$ for all $\varepsilon$, see \eqref{eq: first conditions}(i) and \eqref{eq: rescalidipl}. We define $u := u' \chi_{\Omega' \setminus E_u} + \lambda \chi_{E_u}$ for some $\lambda \in \Bbb R^d$ such that $\partial^* E_u \cap \Omega' \subset J_u$ up to an $\mathcal{H}^{d-1}$-negligible set. Since $J_u \subset J_{u'} \cup (\partial^*E_u\cap \Omega')$, \eqref{eq: at crsimale comp} then implies \eqref{eq: the main convergence}, where the last inequality in \eqref{eq: the main convergence}(iii) follows from \eqref{eq: should be sep}. Finally, $u \in GSBD^2_h$ follows from $u_\varepsilon = h$ on $\Omega' \setminus \overline{\Omega}$ and \eqref{eq: the main convergence}(i).
\end{proof}

\subsection{Passage to linearized model by $\Gamma$-convergence}\label{sec: gamma}

We now give the proof of Theorem \ref{rig-th: gammaconv}.

\begin{proof}[Proof of Theorem \ref{rig-th: gammaconv}]
Since $\overline{\mathcal{E}}_\varepsilon \le \mathcal{E}_\varepsilon$, see \eqref{rig-eq: Griffith en} and \eqref{eq: relaxed energy}, the compactness result follows immediately from Theorem \ref{thm: compactess}. It suffices to show the $\Gamma$-liminf inequality for $\overline{\mathcal{E}}_\varepsilon$ and the $\Gamma$-limsup inequality for $\mathcal{E}_\varepsilon$.
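To make this reduction explicit, note that the pointwise inequality $\overline{\mathcal{E}}_\varepsilon \le \mathcal{E}_\varepsilon$ gives, for every sequence $(y_\varepsilon)_\varepsilon$,
$$\liminf_{\varepsilon \to 0} \mathcal{E}_\varepsilon(y_\varepsilon) \ge \liminf_{\varepsilon \to 0} \overline{\mathcal{E}}_\varepsilon(y_\varepsilon) \ \ \ \text{ and } \ \ \ \limsup_{\varepsilon \to 0} \overline{\mathcal{E}}_\varepsilon(y_\varepsilon) \le \limsup_{\varepsilon \to 0} \mathcal{E}_\varepsilon(y_\varepsilon),$$
so that a $\Gamma$-liminf inequality for the relaxed functionals $\overline{\mathcal{E}}_\varepsilon$ and a recovery sequence for the original functionals $\mathcal{E}_\varepsilon$ yield the corresponding $\Gamma$-liminf and $\Gamma$-limsup inequalities for both families of functionals.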
\emph{Step 1: $\Gamma$-liminf inequality.} Consider $u \in GSBD^2_h$ and $(y_\varepsilon)_\varepsilon$, $y_\varepsilon \in \mathcal{S}_{\varepsilon,h}$, such that $y_\varepsilon \rightsquigarrow u$, i.e., by Definition \ref{def:conv} there exist $y^{\rm rot}_\varepsilon = \sum\nolimits_{j=1}^\infty R^\varepsilon_j \, y_\varepsilon \, \chi_{P^\varepsilon_j}$ and $u_\varepsilon := \frac{1}{\varepsilon} (y^{\rm rot}_\varepsilon - \mathbf{id})$ such that \eqref{eq: first conditions} and \eqref{eq: the main convergence} hold for some fixed $\gamma \in (\frac{2}{3},\beta)$. The essential step is to prove
\begin{align}\label{eq: essential step}
\liminf_{\varepsilon \to 0} \frac{1}{\varepsilon^2}\int_{\Omega'} W(\nabla y_\varepsilon) \ge \int_{\Omega'} \frac{1}{2} Q(e( u) ).
\end{align}
Once \eqref{eq: essential step} is shown, we conclude by \eqref{eq: relaxed energy} and \eqref{eq: the main convergence}(iii) that
\begin{align*}
\liminf_{\varepsilon \to 0} \overline{\mathcal{E}}_\varepsilon (y_\varepsilon) \ge \liminf_{\varepsilon \to 0} \Big( \frac{1}{\varepsilon^2}\int_{\Omega'} W(\nabla y_\varepsilon) + \kappa\mathcal{H}^{d-1}(J_{y_\varepsilon} \cup J_{\nabla y_\varepsilon}) \Big) \ge \int_{\Omega'} \frac{1}{2} Q(e(u)) + \kappa\mathcal{H}^{d-1}(J_u).
\end{align*}
In view of \eqref{rig-eq: Griffith en-lim}, this shows $\liminf_{\varepsilon \to 0} \overline{\mathcal{E}}_\varepsilon (y_\varepsilon) \ge \mathcal{E}(u)$.

To see \eqref{eq: essential step}, we first note that the frame indifference of $W$ (see \eqref{assumptions-W}(ii)) and the definitions of $y^{\rm rot}_\varepsilon$ and $u_\varepsilon$ (see \eqref{eq: piecewise rotated} and \eqref{eq: rescalidipl}) imply
\begin{align}\label{eq: respresentation}
W(\nabla y_\varepsilon) = W(\nabla y_\varepsilon^{\rm rot}) = W(\mathbf{Id} + \varepsilon \nabla u_\varepsilon).
\end{align}
In view of $\gamma >2/3$, we can choose $\eta_\varepsilon \to +\infty$ such that
\begin{align}\label{eq: eta chocie}
\varepsilon^{1-\gamma} \eta_\varepsilon \to +\infty \ \ \ \text{and} \ \ \ \varepsilon \eta^3_\varepsilon \to 0.
\end{align}
We define $\chi_\varepsilon \in L^\infty(\Omega')$ by $\chi_\varepsilon(x) = \chi_{[0,\eta_\varepsilon)}(|\nabla u_\varepsilon(x)|)$. Note that $\mathcal{L}^d(\lbrace |\nabla u_\varepsilon|> \eta_\varepsilon \rbrace ) \le C(\varepsilon^{\gamma-1}/\eta_\varepsilon)^2$ by \eqref{eq: first conditions}(iv) and the fact that $u_\varepsilon = \frac{1}{\varepsilon} (y^{\rm rot}_\varepsilon - \mathbf{id})$. Thus, \eqref{eq: eta chocie} implies $\chi_\varepsilon \to 1$ boundedly in measure on $\Omega'$. The regularity of $W$ implies $W(\mathbf{Id} + F) = \frac{1}{2}Q( {\rm sym}(F)) + \omega(F)$, where $Q$ is defined in \eqref{rig-eq: Griffith en-lim} and $\omega:\Bbb R^{d \times d}\to \Bbb R$ is a function satisfying $|\omega(F)| \le C|F|^3$ for all $F \in \Bbb R^{d\times d}$ with $|F| \le 1$.
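(For concreteness, one admissible, though by no means unique, choice in \eqref{eq: eta chocie} is $\eta_\varepsilon = \varepsilon^{-\delta}$ for a fixed $\delta \in (1-\gamma, \frac{1}{3})$: then $\varepsilon^{1-\gamma}\eta_\varepsilon = \varepsilon^{1-\gamma-\delta} \to +\infty$ and $\varepsilon \eta_\varepsilon^3 = \varepsilon^{1-3\delta} \to 0$, and such $\delta$ exists precisely because $\gamma > \frac{2}{3}$.)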
Then by \eqref{eq: respresentation} and $W \ge 0$ we get
\begin{align*}
\liminf_{\varepsilon \to 0} \frac{1}{\varepsilon^2}\int_{\Omega'} W(\nabla y_\varepsilon) & \ge \liminf_{\varepsilon\to 0} \frac{1}{\varepsilon^2}\int_{\Omega'} \chi_\varepsilon W(\mathbf{Id} + \varepsilon \nabla u_\varepsilon) \\ &= \liminf_{\varepsilon\to 0} \int_{\Omega'} \chi_\varepsilon \Big( \frac{1}{2}Q(e(u_\varepsilon)) + \frac{1}{\varepsilon^2} \omega(\varepsilon \nabla u_\varepsilon) \Big) \\ &\ge \liminf_{\varepsilon\to 0} \Big(\int_{\Omega'\setminus E_u} \chi_\varepsilon \frac{1}{2}Q(e(u_\varepsilon)) + \int_{\Omega'} \chi_\varepsilon |\nabla u_\varepsilon|^3 \varepsilon \frac{\omega(\varepsilon \nabla u_\varepsilon)}{|\varepsilon \nabla u_\varepsilon|^3} \Big),
\end{align*}
where $E_u = \lbrace x\in \Omega: \, |u_\varepsilon(x)| \to \infty \rbrace$. The second term converges to zero. Indeed, $\chi_\varepsilon\frac{|\omega(\varepsilon \nabla u_\varepsilon)|}{|\varepsilon \nabla u_\varepsilon|^3}$ is uniformly controlled by $C$ and $\chi_\varepsilon |\nabla u_\varepsilon|^3 \varepsilon$ is uniformly controlled by $\eta_\varepsilon^3 \varepsilon$, where $\eta_\varepsilon^3 \varepsilon \to 0$ by \eqref{eq: eta chocie}. As $e( u_\varepsilon) \rightharpoonup e(u)$ weakly in $L^2(\Omega'\setminus E_u;\Bbb R^{d\times d}_{\rm sym})$ by \eqref{eq: the main convergence}(ii), $Q$ is convex, and $\chi_\varepsilon$ converges to $1$ boundedly in measure on $\Omega' \setminus E_u$, we conclude
\begin{align*}
\liminf_{\varepsilon \to 0} \frac{1}{\varepsilon^2}\int_{\Omega'} W(\nabla y_\varepsilon) \ge \int_{\Omega' \setminus E_u} \frac{1}{2}Q(e(u)) = \int_{\Omega'} \frac{1}{2}Q(e(u)),
\end{align*}
where the last step follows from the fact that $e(u) = 0$ on $E_u$, see \eqref{eq: the main convergence}(iv). This shows \eqref{eq: essential step} and concludes the proof of the $\Gamma$-liminf inequality.

\emph{Step 2: $\Gamma$-limsup inequality.} Consider $u \in GSBD^2_h$ with $h \in W^{2,\infty}(\Omega';\Bbb R^d)$. Let $\gamma \in (\frac{2}{3},\beta)$. By Theorem \ref{th: crismale-density2} we can find a sequence $(v_\varepsilon)_\varepsilon \subset GSBV^2_2(\Omega';\Bbb R^d)$ with $v_\varepsilon = h$ on $\Omega' \setminus \overline{\Omega}$, $v_\varepsilon \in W^{2,\infty}(\Omega' \setminus J_{v_\varepsilon};\Bbb R^d)$, and
\begin{align}\label{eq: dense-in-appli}
{\rm (i)} & \ \ v_\varepsilon \to u \text{ in measure on } \Omega',\notag\\
{\rm (ii)} & \ \ \Vert e(v_\varepsilon) - e(u) \Vert_{L^2(\Omega')} \to 0,\notag\\
{\rm (iii)} & \ \ \mathcal{H}^{d-1}(J_{v_\varepsilon}) \to \mathcal{H}^{d-1}(J_{u}),\notag\\
{\rm (iv)} & \ \ \Vert \nabla v_\varepsilon \Vert_{L^\infty(\Omega')} + \Vert \nabla^2 v_\varepsilon \Vert_{L^\infty(\Omega')} \le \varepsilon^{(\beta-1)/2} \le \varepsilon^{\gamma-1}.
\end{align}
Note that property (iv) can be achieved since the approximations satisfy $v_\varepsilon \in W^{2,\infty}(\Omega' \setminus J_{v_\varepsilon};\Bbb R^d)$: as $\beta < 1$, the threshold $\varepsilon^{(\beta-1)/2}$ diverges as $\varepsilon \to 0$, so for each $\varepsilon$ one may select an approximation whose (finite) $W^{2,\infty}$-norm lies below it. (Recall $\gamma < \beta < 1$.) Moreover, $v_\varepsilon \in W^{2,\infty}(\Omega' \setminus J_{v_\varepsilon};\Bbb R^d)$ also implies $J_{\nabla v_\varepsilon} \subset J_{v_\varepsilon}$. We define the sequence $y_\varepsilon = \mathbf{id} + \varepsilon v_\varepsilon$.
As $v_\varepsilon \in GSBV_2^2(\Omega';\Bbb R^d)$ and $v_\varepsilon = h$ on $\Omega' \setminus \overline{\Omega}$, we get $y_\varepsilon \in \mathcal{S}_{\varepsilon,h}$, see \eqref{eq: boundary-spaces}. We now check that $y_\varepsilon \rightsquigarrow u$ in the sense of Definition \ref{def:conv}. We define $y_\varepsilon^{\rm rot} = y_\varepsilon$, i.e., the Caccioppoli partition in \eqref{eq: piecewise rotated} consists of the set $\Omega'$ only with corresponding rotation $\mathbf{Id}$. Then \eqref{eq: first conditions}(i),(ii) are trivially satisfied. As $\nabla y_\varepsilon^{\rm rot} - \mathbf{Id} = \varepsilon \nabla v_\varepsilon$, \eqref{eq: first conditions}(iii),(iv) follow from \eqref{eq: dense-in-appli}(ii),(iv). The rescaled displacement fields $u_\varepsilon$ defined in \eqref{eq: rescalidipl} satisfy $u_\varepsilon = v_\varepsilon$. Then \eqref{eq: the main convergence} for $E_u = \emptyset$ follows from \eqref{eq: dense-in-appli}(i)--(iii) and $J_{y_\varepsilon} = J_{v_\varepsilon}$.

Finally, we confirm $\lim_{\varepsilon \to 0} \mathcal{E}_{\varepsilon}(y_\varepsilon) = \mathcal{E}(u)$. In view of $J_{y_\varepsilon} = J_{v_\varepsilon}$, $J_{\nabla y_\varepsilon} \subset J_{y_\varepsilon}$, \eqref{eq: dense-in-appli}(iii), and the definition of the energies in \eqref{rig-eq: Griffith en}, \eqref{rig-eq: Griffith en-lim}, it suffices to show
$$\lim_{\varepsilon \to 0} \Big( \frac{1}{\varepsilon^2}\int_{\Omega'} W(\nabla y_\varepsilon) + \frac{1}{\varepsilon^{2\beta}} \int_{\Omega'} |\nabla^2 y_\varepsilon|^2 \Big) = \int_{\Omega'} \frac{1}{2} Q(e(u)).$$
The second term vanishes by \eqref{eq: dense-in-appli}(iv), $\beta <1$, and the fact that $\nabla^2 y_\varepsilon = \varepsilon \nabla^2 v_\varepsilon$. For the first term, we again use that $W(\mathbf{Id} + F) = \frac{1}{2}Q( {\rm sym}(F)) + \omega(F)$ with $|\omega(F)|\le C|F|^3$ for $|F| \le 1$, and compute by \eqref{eq: dense-in-appli}(ii),(iv)
\begin{align*}
\lim_{\varepsilon \to 0}\frac{1}{\varepsilon^2} \int_{\Omega'} W(\nabla y_\varepsilon) & = \lim_{\varepsilon \to 0} \frac{1}{\varepsilon^2} \int_{\Omega'} W(\mathbf{Id} + \varepsilon \nabla v_\varepsilon) = \lim_{\varepsilon \to 0} \int_{\Omega'} \Big( \frac{1}{2} Q(e(v_\varepsilon)) + \frac{1}{\varepsilon^2} \omega(\varepsilon \nabla v_\varepsilon) \Big) \\ & = \int_{\Omega'} \frac{1}{2} Q(e(u)) + \lim_{\varepsilon \to 0}\int_{\Omega'} {\rm O}\big( \varepsilon|\nabla v_\varepsilon|^3 \big)= \int_{\Omega'} \frac{1}{2} Q(e(u)),
\end{align*}
where in the last step we have used that $\Vert \nabla v_\varepsilon \Vert_{L^\infty(\Omega')} \le C\varepsilon^{\gamma - 1}$ for some $\gamma >2/3$. This concludes the proof.
\end{proof}

\begin{rem}\label{rem: Hoelder space}
The proof shows that one can readily incorporate a dependence on the material point $x$ in the density $W$, as long as \eqref{assumptions-W} still holds. We also point out that it suffices to suppose that $W$ is $C^{2,\alpha}$ in a neighborhood of $SO(d)$, provided that $1 >\beta > \gamma > \frac{2}{2+\alpha}$.
In fact, in that case, one has $|\omega(F)| \le C|F|^{2+\alpha}$ for all $|F|\le 1$, and all estimates remain true, where in \eqref{eq: eta chocie} one chooses $\eta_\varepsilon$ with $\varepsilon^{1-\gamma} \eta_\varepsilon \to +\infty$ and $\varepsilon^\alpha \eta^{2+\alpha}_\varepsilon \to 0$.
\end{rem}

We close this subsection with the proof of Corollary \ref{cor: main cor}.

\begin{proof}[Proof of Corollary \ref{cor: main cor}]
The statement follows in the spirit of the fundamental theorem of $\Gamma$-convergence, see, e.g., \cite[Theorem 1.21]{Braides:02}. We repeat the argument here for the reader's convenience. We observe that $\inf_{\bar{y} \in \mathcal{S}_{\varepsilon,h}} \mathcal{E}_\varepsilon(\bar{y})$ is uniformly bounded by choosing $\mathbf{id} + \varepsilon h$ as a competitor. Given $(y_\varepsilon)_\varepsilon$, $y_\varepsilon \in \mathcal{S}_{\varepsilon,h}$, satisfying \eqref{eq: eps control}, we apply Theorem \ref{rig-th: gammaconv}(a) to find a subsequence (not relabeled) and $u \in GSBD^2_h$ such that $y_\varepsilon \rightsquigarrow u$ in the sense of Definition \ref{def:conv}. Thus, by Theorem \ref{rig-th: gammaconv}(b) we obtain
\begin{align}\label{eq: last1}
\mathcal{E}(u) \le \liminf_{\varepsilon \to 0} \mathcal{E}_\varepsilon(y_\varepsilon) \le \liminf_{\varepsilon \to 0} \inf_{\bar{y} \in \mathcal{S}_{\varepsilon,h}} \mathcal{E}_\varepsilon(\bar{y}).
\end{align}
By Theorem \ref{rig-th: gammaconv}(c), for each $v \in GSBD^2_h$, there exists a sequence $(w_\varepsilon)_\varepsilon$ with $w_\varepsilon \rightsquigarrow v$ and $\lim_{\varepsilon\to 0} \mathcal{E}_\varepsilon(w_\varepsilon) = \mathcal{E}(v)$. This implies
\begin{align}\label{eq: last2}
\limsup_{\varepsilon \to 0} \inf_{\bar{y} \in \mathcal{S}_{\varepsilon,h}} \mathcal{E}_\varepsilon(\bar{y}) \le \lim_{\varepsilon \to 0} \mathcal{E}_\varepsilon(w_\varepsilon) = \mathcal{E}(v).
\end{align}
By combining \eqref{eq: last1}-\eqref{eq: last2} we find
\begin{align}\label{eq: last3}
\mathcal{E}(u) \le \liminf_{\varepsilon \to 0} \inf_{\bar{y} \in \mathcal{S}_{\varepsilon,h}} \mathcal{E}_\varepsilon(\bar{y}) \le \limsup_{\varepsilon \to 0} \inf_{\bar{y} \in \mathcal{S}_{\varepsilon,h}} \mathcal{E}_\varepsilon(\bar{y}) \le \mathcal{E}(v).
\end{align}
Since $v \in GSBD^2_h$ was arbitrary, we get that $u$ is a minimizer of $\mathcal{E}$. Property \eqref{eq: eps control2} follows from \eqref{eq: last3} with $v=u$. In particular, the limit in \eqref{eq: eps control2} does not depend on the specific choice of the subsequence and thus \eqref{eq: eps control2} holds for the whole sequence.
\end{proof}

\subsection{Characterization of limiting displacements}\label{sec: admissible}

This final subsection is devoted to the proof of Lemma \ref{lemma: characteri}.

\begin{proof}[Proof of Lemma \ref{lemma: characteri}]
\emph{Proof of (a).} As a preparation, we observe that for two given rotations $R_1,R_2 \in SO(d)$ there holds
\begin{align}\label{rig-eq: linearization-appli}
|{\rm sym}(R_2 R_1^T -\mathbf{Id})| \le C |R_1- R_2|^2.
\end{align}
This follows from formula \eqref{rig-eq: linearization} applied for $F = R_2 R_1^T$, since $\operatorname{dist}(R_2 R_1^T, SO(d)) = 0$ and $|R_2 R_1^T - \mathbf{Id}| = |(R_2 - R_1)R_1^T| = |R_2 - R_1|$. Consider a sequence $(y_\varepsilon)_\varepsilon$.
Let
\begin{align}\label{eq: piecewise rotated2}
y^{{\rm rot},i}_\varepsilon = \sum\nolimits_{j=1}^\infty {R_j^{\varepsilon,i}} \, y_\varepsilon \, \chi_{P_j^{\varepsilon,i}}, \ \ \ i=1,2,
\end{align}
be two sequences such that the corresponding rescaled displacement fields $u_\varepsilon^i = \varepsilon^{-1}(y_\varepsilon^{{\rm rot},i} - \mathbf{id})$, $i=1,2$, converge to $u_1$ and $u_2$, respectively, in the sense of \eqref{eq: the main convergence}, where the exceptional sets are denoted by $E_{u_1}$ and $E_{u_2}$, respectively. In particular, pointwise $\mathcal{L}^d$-a.e.\ in $\Omega'$ there holds
\begin{align}\label{eq:symmi}
e(u_\varepsilon^1) - e(u_\varepsilon^2) &= \varepsilon^{-1} {\rm sym} \Big( \sum\nolimits_j {R_j^{\varepsilon,1}} \, \nabla y_\varepsilon \, \chi_{P_j^{\varepsilon,1}} - \sum\nolimits_j {R_j^{\varepsilon,2}} \, \nabla y_\varepsilon \, \chi_{P_j^{\varepsilon,2}} \Big) \notag\\ & = \varepsilon^{-1} {\rm sym} \Big( \sum\nolimits_{j,k} \big( {R_j^{\varepsilon,1}} - {R_k^{\varepsilon,2}} \big) \, \chi_{P_j^{\varepsilon,1} \cap P_k^{\varepsilon,2}} \, \nabla y_\varepsilon \Big) \notag\\ & = \varepsilon^{-1} {\rm sym} \Big( \sum\nolimits_{j,k} \big( \mathbf{Id} - {R_k^{\varepsilon,2} (R_j^{\varepsilon,1})^T } \big) \, \chi_{P_j^{\varepsilon,1} \cap P_k^{\varepsilon,2}} \, \nabla y^{{\rm rot},1}_\varepsilon \Big).
\end{align}
For brevity, we define $Z_\varepsilon \in L^\infty(\Omega';\Bbb R^{d \times d})$ by
\begin{align}\label{eq: Zdef}
Z_\varepsilon := \sum\nolimits_{j,k} \big( \mathbf{Id} - {R_k^{\varepsilon,2} (R_j^{\varepsilon,1})^T } \big) \, \chi_{P_j^{\varepsilon,1} \cap P_k^{\varepsilon,2}}.
\end{align}
By \eqref{eq: first conditions}(iv) and the triangle inequality we get
\begin{align*}
\sum_{j,k} \big\| {R_j^{\varepsilon,1}} - {R_k^{\varepsilon,2} }\big\|^2_{L^2(P_j^{\varepsilon,1} \cap P_k^{\varepsilon,2})} &\le C\sum_{j=1}^\infty \Vert (\nabla y_\varepsilon)^T - R_j^{\varepsilon,1} \Vert^2_{L^2(P_j^{\varepsilon,1})} + C\sum_{k=1}^\infty \Vert (\nabla y_\varepsilon)^T - R_k^{\varepsilon,2} \Vert^2_{L^2(P_k^{\varepsilon,2})} \\ & = C\Vert \nabla y_\varepsilon^{{\rm rot},1}- \mathbf{Id} \Vert_{L^2(\Omega')}^2 + C\Vert \nabla y_\varepsilon^{{\rm rot},2}- \mathbf{Id} \Vert_{L^2(\Omega')}^2 \le C\varepsilon^{2\gamma}
\end{align*}
for some given $\gamma \in (\frac{2}{3},\beta)$, and $C>0$ independent of $\varepsilon$. Equivalently, this means
\begin{align*}
\sum\nolimits_{j,k} \mathcal{L}^d\big( P_j^{\varepsilon,1} \cap P_k^{\varepsilon,2} \big) \big| R_j^{\varepsilon,1} - {R_k^{\varepsilon,2}} \big|^2 \le C\varepsilon^{2\gamma}.
\end{align*}
By recalling \eqref{rig-eq: linearization-appli} and \eqref{eq: Zdef} we then get
\begin{align*}
\Vert {\rm sym} (Z_\varepsilon) \Vert_{L^1(\Omega')} \le C\varepsilon^{2\gamma}, \ \ \ \ \ \ \ \ \ \ \ \Vert Z_\varepsilon \Vert_{L^2(\Omega')} \le C\varepsilon^{\gamma}.
\end{align*}
This along with H\"older's inequality, \eqref{eq: first conditions}(iv) for $y_\varepsilon^{{\rm rot},1}$, and \eqref{eq:symmi} yields
\begin{align}\label{eq: long eq}
\Vert e(u_\varepsilon^1) - e(u_\varepsilon^2) \Vert_{L^1(\Omega')} & = \frac{1}{\varepsilon} \Vert {\rm sym} \big( Z_\varepsilon\, \nabla y_\varepsilon^{{\rm rot},1} \big)\Vert_{L^1(\Omega')} \notag\\& \le \frac{1}{\varepsilon} \Vert {\rm sym} \big( Z_\varepsilon\, \big(\nabla y_\varepsilon^{{\rm rot},1} - \mathbf{Id} \big) \big)\Vert_{L^1(\Omega')} + \frac{1}{\varepsilon} \Vert {\rm sym} \big( Z_\varepsilon \big)\Vert_{L^1(\Omega')}\notag\\ & \le \frac{1}{\varepsilon} \Vert Z_\varepsilon \Vert_{L^2(\Omega')} \Vert \nabla y_\varepsilon^{{\rm rot},1} - \mathbf{Id} \Vert_{L^2(\Omega')} + \frac{1}{\varepsilon} \Vert {\rm sym} \big( Z_\varepsilon \big)\Vert_{L^1(\Omega')} \le C\varepsilon^{2\gamma - 1}.
\end{align}
We have that $e(u_\varepsilon^1) - e(u_\varepsilon^2)$ converges to $e(u_1) - e(u_2)$ weakly in $L^2(\Omega' \setminus (E_{u_1} \cup E_{u_2});\Bbb R^{d\times d}_{\rm sym})$, see \eqref{eq: the main convergence}(ii). Then \eqref{eq: long eq} and the fact that $\gamma > \frac{2}{3} > \frac{1}{2}$ imply that $e(u_1) - e(u_2) = 0$ on $\Omega' \setminus (E_{u_1} \cup E_{u_2})$. This shows part (a) of the statement.

\emph{Proof of (b).} Let $(y_\varepsilon)_\varepsilon$ be a sequence satisfying \eqref{eq: minimizer}. Consider two piecewise rotated functions $y^{{\rm rot},i}_\varepsilon$ as given in \eqref{eq: piecewise rotated2} and let $u_1,u_2$ be the limits identified in \eqref{eq: the main convergence}, where the corresponding exceptional sets are denoted by $E_{u_1},E_{u_2}$. We let $\mathcal{J}^i = \lbrace j \in \Bbb N: \, P_j^{\varepsilon,i} \subset \Omega \text{ up to an $\mathcal{L}^d$-negligible set} \rbrace$ for $i=1,2$, and set $D_\varepsilon :=\bigcup_{i=1,2}\bigcup_{j \in \mathcal{J}^i} P_j^{\varepsilon,i}$. By \eqref{eq: first conditions}(ii) and $\gamma < \beta$ we obtain
\begin{align}\label{eq: newD}
\limsup\nolimits_{\varepsilon \to 0} \mathcal{H}^{d-1}\big( \big(\partial^*D_\varepsilon \cap \Omega' \big) \setminus \big(J_{y_\varepsilon}\cup J_{\nabla y_\varepsilon} \big) \big) = 0.
\end{align}
As also $\sup_\varepsilon\mathcal{H}^{d-1}(J_{y_\varepsilon} \cup J_{\nabla y_{\varepsilon}}) < + \infty$, we get that $\mathcal{H}^{d-1}(\partial^* D_\varepsilon)$ is uniformly controlled. Therefore, we may suppose that $D_\varepsilon \to D$ in measure for a set of finite perimeter $D \subset \Omega$, see \cite[Theorem 3.39]{Ambrosio-Fusco-Pallara:2000}. We observe that $y^{{\rm rot},i}_\varepsilon = y_\varepsilon$ on $\Omega' \setminus D_\varepsilon$ for $i=1,2$ by Remark \ref{re: unchanged}. Therefore, \eqref{eq: rescalidipl} implies that $E_{u_1} \setminus D = E_{u_2} \setminus D$. In the following, we denote this set by $\hat{E}$. Then, \eqref{eq: rescalidipl} and \eqref{eq: the main convergence}(i) also yield
\begin{align}\label{eq: lateli}
u_1 = u_2 \ \ \ \text{ a.e.\ on } \Omega' \setminus (D \cup \hat{E}).
\end{align}
To compare $u_1$ and $u_2$ inside $D \cup \hat{E}$, we introduce modifications: for $i=1,2$ and sequences $(\lambda_\varepsilon)_\varepsilon \subset \Bbb R^d$, let
\begin{align}\label{eq: defffi1}
y^{\lambda_\varepsilon,i}_\varepsilon := y^{{\rm rot},i}_\varepsilon + \lambda_\varepsilon \, \chi_{D_\varepsilon}.
\end{align}
By definition, $D_\varepsilon$ does not intersect $\Omega' \setminus \overline{\Omega}$ and has finite perimeter by \eqref{eq: newD}. Thus, we get $y^{\lambda_\varepsilon,i}_\varepsilon \in \mathcal{S}_{\varepsilon,h}$, see \eqref{eq: boundary-spaces} and \eqref{eq: first conditions}(i). By \eqref{eq: first conditions}(ii), \eqref{eq: newD}, and the fact that the elastic energy is frame indifferent we also observe that $(y^{\lambda_\varepsilon,i}_\varepsilon)_\varepsilon$ is a minimizing sequence for $i=1,2$ and all $(\lambda_\varepsilon)_\varepsilon \subset \Bbb R^d$. We obtain
\begin{align}\label{eq: the same}
y_\varepsilon = y^{{\rm rot},i}_\varepsilon = y^{\lambda_\varepsilon,i}_\varepsilon \ \ \ \ \text{ on $\Omega' \setminus D_\varepsilon $ for all $(\lambda_\varepsilon)_\varepsilon \subset \Bbb R^d$, $i=1,2$}.
\end{align}
This follows from \eqref{eq: defffi1} and $y^{{\rm rot},i}_\varepsilon = y_\varepsilon$ on $\Omega' \setminus D_\varepsilon$ for $i=1,2$, see Remark \ref{re: unchanged}. We now consider two different cases:

(1) Fix $i=1,2$, $\lambda \in \Bbb R^d$, and consider $\lambda_\varepsilon = \lambda \varepsilon$. In view of \eqref{eq: rescalidipl}, \eqref{eq: the main convergence}(i), and \eqref{eq: defffi1}, we get that $\varepsilon^{-1}(y_\varepsilon^{\lambda_\varepsilon,i} - \mathbf{id}) \to u_i + \lambda \chi_D $ in measure on $\Omega' \setminus E_{u_i}$. Thus, one can check that $y^{\lambda_\varepsilon,i}_\varepsilon\rightsquigarrow u^{\lambda}_i$ for some $u^\lambda_i \in GSBD^2_h$ satisfying
\begin{align}\label{eq: representation}
u^\lambda_i = u_i + \lambda \chi_D \text{ on } \Omega'\setminus E_{u_i}.
\end{align}

(2) Recall that $\hat{E}= E_{u_1} \setminus D = E_{u_2} \setminus D = \lbrace x \in \Omega \setminus D: \, |\varepsilon^{-1}(y_\varepsilon^{{\rm rot},i} - \mathbf{id})| \to \infty\rbrace$ for $i=1,2$. In view of \eqref{eq: defffi1}, we can choose a suitable sequence $(\lambda_\varepsilon)_\varepsilon$ such that $|\varepsilon^{-1}(y_\varepsilon^{\lambda_\varepsilon,i} - \mathbf{id})| \to \infty$ on $\hat{E} \cup D$ for $i=1,2$. This along with \eqref{eq: the same} and \eqref{eq: the main convergence}(i),(iv) implies that for $i=1,2$ we have ${y}^{\lambda_\varepsilon,i}_\varepsilon \rightsquigarrow \hat{u}$ for some $\hat{u} \in GSBD_h^2$ satisfying
\begin{align}\label{eq: hatti}
{\rm (i)} & \ \ \hat{u} = u_1 = u_2 \ \ \ \text{ a.e.\ on } \Omega'\setminus (\hat{E} \cup D), \notag \\ {\rm (ii)} & \ \ e(\hat{u}) = 0 \ \ \text{ a.e.\ on } \ \hat{E} \cup D, \ \ \ \mathcal{H}^{d-1}(J_{\hat{u}} \cap (\hat{E} \cup D)^1) = 0,
\end{align}
where $(\cdot)^1$ denotes the set of points with density $1$.
We now combine the cases (1) and (2) to obtain the statement: since $(y_\varepsilon^{\lambda_\varepsilon,i})_\varepsilon$ are minimizing sequences, Corollary \ref{cor: main cor} implies that each $u^\lambda_i$, $\lambda \in \Bbb R^d$, $i=1,2$, and $\hat{u}$ are minimizers of the problem $\min_{v \in GSBD^2_h} \mathcal{E}(v)$. In particular, as $e(u_i^\lambda) = e(u_i)$ for all $\lambda \in \Bbb R^d$ for both $i=1,2$, the jump sets of $u^\lambda_1$, $u^\lambda_2$ have to be independent of $\lambda$, i.e., $\mathcal{H}^{d-1}(J_{u_i} \triangle J_{u_i^\lambda}) = 0$ for all $\lambda \in \Bbb R^d$ and $i=1,2$. In view of \eqref{eq: representation} and \eqref{eq: the main convergence}(iv), this yields $\partial^*E_{u_i} \cap \Omega', \partial^* (D \setminus E_{u_i}) \cap \Omega' \subset J_{u_i} $ up to $\mathcal{H}^{d-1}$-negligible sets. Since $\hat{E} = E_{u_i}\setminus D$, this implies for $i=1,2$ that
\begin{align}\label{eq: jumpii}
\partial^* (\hat{E} \cup D) \cap \Omega' \subset J_{u_i} \ \ \ \ \ \text{ up to $\mathcal{H}^{d-1}$-negligible sets.}
\end{align}
Recall that $u_1,u_2$ are both minimizers, that also $\hat{u}$ is a minimizer, and that there holds $\hat{u} = u_1=u_2$ on $\Omega' \setminus (\hat{E} \cup D)$, see \eqref{eq: hatti}(i). This along with \eqref{eq: hatti}(ii) and \eqref{eq: jumpii} yields $e({u}_i) = 0$ on $\hat{E} \cup D$ and $\mathcal{H}^{d-1}(J_{{u}_i} \cap (\hat{E} \cup D)^1) = 0$ for $i=1,2$. Then \eqref{eq: lateli} and \eqref{eq: jumpii} show that $e(u_1) = e(u_2)$ $\mathcal{L}^d$-a.e.\ on $\Omega'$, and $J_{u_1} = J_{u_2}$ up to an $\mathcal{H}^{d-1}$-negligible set.
\end{proof}

We finally provide an example showing that in case (a) the strains cannot be compared inside $E_{u_1} \cup E_{u_2}$.

\begin{example}\label{e2}
{\normalfont Similarly to Example \ref{ex}, we consider $\Omega'= (0,3) \times (0,1)$, $\Omega = (1,3) \times (0,1)$, $\Omega_1 = (0,2) \times (0,1)$, $\Omega_2 = (2,3) \times (0,1)$, and $h \equiv 0$. Let $z \in W^{2,\infty}(\Omega';\Bbb R^d)$ with $\lbrace z = 0 \rbrace = \emptyset$, and define
$$y_\varepsilon(x) = x \chi_{\Omega_1}(x) + \big(x + \varepsilon z(x)\big) \chi_{\Omega_2}(x) \ \ \ \ \text{for} \ x \in \Omega'.$$
Note that $J_{y_\varepsilon} = \partial\Omega_1 \cap\Omega' = \partial\Omega_2 \cap\Omega'$. Then two possible alternatives are
\begin{align*}
(1)& \ \ P^{\varepsilon}_1 = \Omega_1, \ \ P^{\varepsilon}_2 = \Omega_2, \ \ R_1^{\varepsilon} = \mathbf{Id}, \ \ R_2^{\varepsilon} = \bar{R}_\varepsilon,\\ (2)& \ \ \tilde{P}^{\varepsilon}_1 = \Omega', \ \ \tilde{R}_1^{\varepsilon} = \mathbf{Id},
\end{align*}
where $\bar{R}_\varepsilon \in SO(2)$ satisfies $\bar{R}_\varepsilon = \mathbf{Id} + \varepsilon^\gamma A + {\rm O}(\varepsilon^{2\gamma})$ for some $A \in \Bbb R^{2 \times 2}_{\rm skew}$, $\gamma \in (\frac{2}{3},\beta)$. Let $u_{\varepsilon} = \varepsilon^{-1} (\sum_{j=1}^2 R_j^{\varepsilon}y_{\varepsilon} \chi_{P_j^{\varepsilon}} -\mathbf{id})$ and $\tilde{u}_{\varepsilon} = \varepsilon^{-1} (y_{\varepsilon} -\mathbf{id})$. We observe that $|u_\varepsilon| \to \infty$ on $\Omega_2$. Possible limits identified in \eqref{eq: the main convergence} are $u = \lambda \chi_{\Omega_2}$ for some $\lambda \in \Bbb R^{d}$, $\lambda \neq 0$, with $E_u = \Omega_2$, and $\tilde{u}(x) = z(x) \, \chi_{\Omega_2}(x)$ with $E_{\tilde{u}} = \emptyset$.
This shows that in general there holds $e(u) \neq e(\tilde{u})$ in $E_{u}$.}
\end{example}

\begin{thebibliography}{10}

\bibitem{Agostiniani} {\sc V.~Agostiniani, G.~Dal Maso, A.~DeSimone}.
\newblock {\em Linearized elasticity obtained from finite elasticity by $\Gamma$-convergence under weak coerciveness conditions}.
\newblock Ann.\ Inst.\ H.\ Poincar\'e\ Anal.\ Non Lin\'eaire
\newblock {\bf 29} (2012), 715--735.

\bibitem{alicandro.dalmaso.lazzaroni.palombaro} {\sc R.~Alicandro, G.~Dal Maso, G.~Lazzaroni, M.~Palombaro}.
\newblock {\em Derivation of a linearised elasticity model from singularly perturbed multiwell energy functionals}.
\newblock Arch.\ Ration.\ Mech.\ Anal.\
\newblock {\bf 230} (2018), 1--45.

\bibitem{Ambrosio:90} {\sc L.~Ambrosio}.
\newblock {\em Existence theory for a new class of variational problems}.
\newblock Arch.\ Ration.\ Mech.\ Anal.\
\newblock {\bf 111} (1990), 291--322.

\bibitem{Ambrosio:90-2} {\sc L.~Ambrosio}.
\newblock {\em On the lower semicontinuity of quasi-convex integrals in $SBV(\Omega; \Bbb R^k)$}.
\newblock Nonlinear\ Anal.\
\newblock {\bf 23} (1994), 405--425.

\bibitem{ACD} {\sc L.~Ambrosio, A.~Coscia, G.~Dal Maso}.
\newblock {\em Fine properties of functions with bounded deformation}.
\newblock Arch.\ Ration.\ Mech.\ Anal.\
\newblock {\bf 139} (1997), 201--238.

\bibitem{Ambrosio-Fusco-Pallara:2000} {\sc L.~Ambrosio, N.~Fusco, D.~Pallara}.
\newblock {\em Functions of bounded variation and free discontinuity problems}.
\newblock Oxford University Press, Oxford 2000.

\bibitem{BJ} {\sc J.~F.~Babadjian, A.~Giacomini}.
\newblock {\em Existence of strong solutions for quasi-static evolution in brittle fracture}.
\newblock Ann.\ Sc.\ Norm.\ Super.\ Pisa\ Cl.\ Sci.\
\newblock {\bf 13} (2014), 925--974.

\bibitem{BCO} {\sc J.M.~Ball, J.C.~Currie, P.L.~Olver}.
\newblock {\em Null Lagrangians, weak continuity, and variational problems of arbitrary order}.
\newblock J.\ Funct.\ Anal.\
\newblock {\bf 41} (1981), 135--174.

\bibitem{Batra} {\sc R.C.~Batra}.
\newblock {\em Thermodynamics of non-simple elastic materials}.
\newblock J.\ Elasticity\
\newblock {\bf 6} (1976), 451--456.

\bibitem{BZ} {\sc A.~Blake, A.~Zisserman}.
\newblock {\em Visual Reconstruction}.
\newblock The MIT Press, Cambridge, 1987.

\bibitem{Bourdin-Francfort-Marigo:2008} {\sc B.~Bourdin, G.~A.~Francfort, J.~J.~Marigo}.
\newblock {\em The variational approach to fracture}.
\newblock J.\ Elasticity\
\newblock {\bf 91} (2008), 5--148.

\bibitem{Braides:02} {\sc A.~Braides}.
\newblock {\em $\Gamma$-convergence for Beginners}.
\newblock Oxford University Press, Oxford 2002.

\bibitem{Braides-Solci-Vitali:07} {\sc A.~Braides, M.~Solci, E.~Vitali}.
\newblock {\em A derivation of linear elastic energies from pair-interaction atomistic systems}.
\newblock Netw.\ Heterog.\ Media
\newblock {\bf 2} (2007), 551--567.

\bibitem{capriz} {\sc G.~Capriz}.
\newblock {\em Continua with latent microstructure}.
\newblock Arch.\ Ration.\ Mech.\ Anal.\
\newblock {\bf 90} (1985), 43--56.
\bibitem{Carriero1} {\sc M.~Carriero, A.~Leaci, F.~Tomarelli}.
\newblock {\em A second order model in image segmentation: Blake \& Zisserman functional}, in: Variational Methods for Discontinuous Structures (Como, 1994),
\newblock Progr.\ Nonlinear Differential Equations Appl.\ 25, Birkh\"auser, Basel, (1996), 57--72.

\bibitem{Carriero2} {\sc M.~Carriero, A.~Leaci, F.~Tomarelli}.
\newblock {\em Second Order Variational Problems with Free Discontinuity and Free Gradient Discontinuity}, in: Calculus of Variations: Topics from the Mathematical Heritage of Ennio De Giorgi,
\newblock Quad.\ Mat., 14, 135--186, Dept.\ Math., Seconda Univ.\ Napoli, Caserta, (2004).

\bibitem{Carriero3} {\sc M.~Carriero, A.~Leaci, F.~Tomarelli}.
\newblock {\em Plastic free discontinuities and special bounded hessian}.
\newblock C.\ R.\ Acad.\ Sci.\ Paris
\newblock {\bf 314} (1992), 595--600.

\bibitem{Chambolle:2003} {\sc A.~Chambolle}.
\newblock {\em A density result in two-dimensional linearized elasticity, and applications}.
\newblock Arch.\ Ration.\ Mech.\ Anal.\
\newblock {\bf 167} (2003), 167--211.

\bibitem{Chambolle-Conti-Francfort:2014} {\sc A.~Chambolle, S.~Conti, G.~Francfort}.
\newblock {\em Korn-Poincar\'e inequalities for functions with a small jump set}.
\newblock Indiana Univ.\ Math.\ J.\
\newblock {\bf 65} (2016), 1373--1399.

\bibitem{Chambolle-Conti-Francfort:2018} {\sc A.~Chambolle, S.~Conti, G.~Francfort}.
\newblock {\em Approximation of a brittle fracture energy with a constraint of non-interpenetration}.
\newblock Arch.\ Ration.\ Mech.\ Anal.\
\newblock {\bf 228} (2018), 867--889.

\bibitem{Chambolle-Conti-Iurlano:2018} {\sc A.~Chambolle, S.~Conti, F.~Iurlano}.
\newblock {\em Approximation of functions with small jump sets and existence of strong minimizers of Griffith's energy}.
\newblock J.\ Math.\ Pures Appl., to appear. Available at: http://arxiv.org/abs/1710.01929.

\bibitem{Chambolle-Giacomini-Ponsiglione:2007} {\sc A.~Chambolle, A.~Giacomini, M.~Ponsiglione}.
\newblock {\em Piecewise rigidity}.
\newblock J.\ Funct.\ Anal.\
\newblock {\bf 244} (2007), 134--153.

\bibitem{Conti-Iurlano:15} {\sc S.~Conti, M.~Focardi, F.~Iurlano}.
\newblock {\em Which special functions of bounded deformation have bounded variation?}
\newblock Proc.\ Roy.\ Soc.\ Edinb.\ A
\newblock {\bf 148} (2018), 33--50.

\bibitem{Conti-Iurlano:15.2} {\sc S.~Conti, M.~Focardi, F.~Iurlano}.
\newblock {\em Integral representation for functionals defined on $SBD^p$ in dimension two}.
\newblock Arch.\ Rat.\ Mech.\ Anal.\
\newblock {\bf 223} (2017), 1337--1374.

\bibitem{Crismale2} {\sc A.~Chambolle, V.~Crismale}.
\newblock {\em A density result in $GSBD^p$ with applications to the approximation of brittle fracture energies}.
\newblock Arch.\ Ration.\ Mech.\ Anal.\, to appear. Available at: http://arxiv.org/abs/1708.03281.

\bibitem{Crismale} {\sc A.~Chambolle, V.~Crismale}.
\newblock {\em Compactness and lower semicontinuity in $GSBD$}.
\newblock J.\ Eur.\ Math.\ Soc.\ (JEMS), to appear. Available at: http://arxiv.org/abs/1802.03302.

\bibitem{Crismale4} {\sc A.~Chambolle, V.~Crismale}.
\newblock {\em Existence of strong solutions to the Dirichlet problem for Griffith energy}.
\newblock Preprint,\ 2018. Available at: http://arxiv.org/abs/1811.07147.

\bibitem{Crismale3} {\sc V.~Crismale}.
\newblock {\em On the approximation of $SBD$ functions and some applications}.
\newblock Preprint,\ 2018. Available at: http://arxiv.org/abs/1806.03076.

\bibitem{Cortesani-Toader:1999} {\sc G.~Cortesani, R.~Toader}.
\newblock {\em A density result in SBV with respect to non-isotropic energies}.
\newblock Nonlinear\ Anal.\
\newblock {\bf 38} (1999), 585--604.

\bibitem{DalMaso:93} {\sc G.~Dal Maso}.
\newblock {\em An introduction to $\Gamma$-convergence}.
\newblock Birkh{\"a}user, Boston $\cdot$ Basel $\cdot$ Berlin 1993.

\bibitem{DalMaso:13} {\sc G.~Dal Maso}.
\newblock {\em Generalized functions of bounded deformation}.
\newblock J.\ Eur.\ Math.\ Soc.\ (JEMS)\
\newblock {\bf 15} (2013), 1943--1997.

\bibitem{DalMaso-Francfort-Toader:2005} {\sc G.~Dal Maso, G.~A.~Francfort, R.~Toader}.
\newblock {\em Quasistatic crack growth in nonlinear elasticity}.
\newblock Arch.\ Ration.\ Mech.\ Anal.\
\newblock {\bf 176} (2005), 165--225.

\bibitem{DalMasoNegriPercivale:02} {\sc G.~Dal Maso, M.~Negri, D.~Percivale}.
\newblock {\em Linearized elasticity as $\Gamma$-limit of finite elasticity}.
\newblock Set-valued\ Anal.\
\newblock {\bf 10} (2002), 165--183.

\bibitem{DalMaso-Lazzaroni:2010} {\sc G.~Dal Maso, G.~Lazzaroni}.
\newblock {\em Quasistatic crack growth in finite elasticity with non-interpenetration}.
\newblock Ann.\ Inst.\ H.\ Poincar\'e\ Anal.\ Non Lin\'eaire\
\newblock {\bf 27} (2010), 257--290.

\bibitem{DeGiorgi-Ambrosio:1988} {\sc E.~De Giorgi, L.~Ambrosio}.
\newblock {\em Un nuovo funzionale del calcolo delle variazioni}.
\newblock Acc.\ Naz.\ Lincei, Rend.\ Cl.\ Sci.\ Fis.\ Mat.\ Natur.\
\newblock {\bf 82} (1988), 199--210.

\bibitem{dunn} {\sc J.E.~Dunn, J.~Serrin}.
\newblock {\em On the thermomechanics of interstitial working}.
\newblock Arch.\ Ration.\ Mech.\ Anal.\
\newblock {\bf 88} (1985), 95--133.

\bibitem{Francfort-Marigo:1998} {\sc G.~A.~Francfort, J.~J.~Marigo}.
\newblock {\em Revisiting brittle fracture as an energy minimization problem}.
\newblock J.\ Mech.\ Phys.\ Solids
\newblock {\bf 46} (1998), 1319--1342.

\bibitem{Francfort-Larsen:2003} {\sc G.~A.~Francfort, C.~J.~Larsen}.
\newblock {\em Existence and convergence for quasi-static evolution in brittle fracture}.
\newblock Comm.\ Pure Appl.\ Math.\
\newblock {\bf 56} (2003), 1465--1500.

\bibitem{Friedrich:15-2} {\sc M.~Friedrich}.
\newblock {\em A derivation of linearized Griffith energies from nonlinear models}.
\newblock Arch.\ Ration.\ Mech.\ Anal.\
\newblock {\bf 225} (2017), 425--467.

\bibitem{Friedrich:15-3} {\sc M.~Friedrich}.
\newblock {\em A Korn-type inequality in SBD for functions with small jump sets}.
\newblock Math.\ Models Methods Appl.\ Sci.\
\newblock {\bf 27} (2017), 2461--2484.

\bibitem{Friedrich:15-4} {\sc M.~Friedrich}.
\newblock {\em A piecewise Korn inequality in SBD and applications to embedding and density results}.
\newblock SIAM J.\ Math.\ Anal.\
\newblock {\bf 50} (2018), 3842--3918.

\bibitem{Manuel} {\sc M.~Friedrich}.
\mathbf{n}ewblock {\mathbf{e}m A compactness result in $GSBV^p$ and applications to $\Gamma$-convergence for free discontinuity problems}. \mathbf{n}ewblock Calc.\ Var.\ PDE, to appear. Available at: http://arxiv.org/abs/1807.03647. \bibitem{MFMK} \mathbf{n}ewblock {\sc M.~Friedrich, M.~Kru\v{z}\'{\i}k}. \mathbf{n}ewblock On the passage from nonlinear to linearized viscoelasticity. \mathbf{n}ewblock \mathbf{e}mph{SIAM J.\ Math.\ Anal.} \mathbf{n}ewblock {\bf 50} (2018), 4426--4456. \bibitem{FriedrichSchmidt:2014.2} {\sc M.~Friedrich, B.~Schmidt}. \mathbf{n}ewblock {\mathbf{e}m On a discrete-to-continuum convergence result for a two dimensional brittle material in the small displacement regime}. \mathbf{n}ewblock Netw.\ Heterog.\ Media \mathbf{n}ewblock {\bf 10} (2015), 321--342. \bibitem{Solombrino} {\sc M.~Friedrich, F.~Solombrino}. \mathbf{n}ewblock {\mathbf{e}m Quasistatic crack growth in $2d$-linearized elasticity}. \mathbf{n}ewblock Ann.\ Inst.\ H.\ Poincar\'e\ Anal.\ Non Lin\'eaire \mathbf{n}ewblock {\bf 35} (2018), 27--64. \bibitem{FrieseckeJamesMueller:02} {\sc G.~Friesecke, R.~D.~James, S.~M{\"u}ller}. \mathbf{n}ewblock {\mathbf{e}m A theorem on geometric rigidity and the derivation of nonlinear plate theory from three-dimensional elasticity}. \mathbf{n}ewblock Comm.\ Pure Appl.\ Math.\ \mathbf{n}ewblock {\bf 55} (2002), 1461--1506. \bibitem{Giacomini:2005} {\sc A.~Giacomini}. \mathbf{n}ewblock {\mathbf{e}m Ambrosio-Tortorelli approximation of quasi-static evolution of brittle fractures}. \mathbf{n}ewblock Calc.\ Var.\ PDE. \mathbf{n}ewblock {\bf 22} (2005), 129--172. \bibitem{Giacomini-Ponsiglione:2006} {\sc A.~Giacomini, M.~Ponsiglione}. \mathbf{n}ewblock {\mathbf{e}m A $\Gamma$-convergence approach to stability of unilateral minimality properties}. \mathbf{n}ewblock Arch.\ Ration.\ Mech.\ Anal.\ \mathbf{n}ewblock {\bf 180} (2006), 399--447. \bibitem{Iurlano:13} {\sc F.~Iurlano}. \mathbf{n}ewblock {\mathbf{e}m A density result for GSBD and its application to the approximation of brittle fracture energies}. \mathbf{n}ewblock Calc.\ Var.\ PDE \mathbf{n}ewblock {\bf 51} (2014), 315--342. \bibitem{MR} \mathbf{n}ewblock {\sc A.~Mielke, T.~Roub\'{\i}\v{c}ek}. \mathbf{n}ewblock \mathbf{e}mph{Rate-Independent Systems - Theory and Application}, \mathbf{n}ewblock Springer, New York, 2015. \bibitem{Ulisse} {\sc A.~Mielke, U.~Stefanelli}. \mathbf{n}ewblock {\mathbf{e}m Linearized plasticity is the evolutionary $\Gamma$-limit of finite plasticity}. \mathbf{n}ewblock J.\ Eur.\ Math.\ Soc. (JEMS) \mathbf{n}ewblock {\bf 15} (2013), 923--948. \bibitem{NegriToader:2013} {\sc M.~Negri, R.~Toader}. \mathbf{n}ewblock {\mathbf{e}m Scaling in fracture mechanics by Ba$\check z$ant's law: from finite to linearized elasticity}. \mathbf{n}ewblock Math.\ Models Methods Appl.\ Sci.\ \mathbf{n}ewblock {\bf 25} (2015), 1389--1420. \bibitem{Zanini} {\sc M.~Negri, C.~Zanini}. \mathbf{n}ewblock {\mathbf{e}m From finite to linear elastic fracture mechanics by scaling}. \mathbf{n}ewblock Calc.\ Var.\ PDE \mathbf{n}ewblock {\bf 50} (2014), 525--548. \bibitem{Schmidt:08} {\sc B.~Schmidt}. \mathbf{n}ewblock {\mathbf{e}m Linear $\Gamma$-limits of multiwell energies in nonlinear elasticity theory}. \mathbf{n}ewblock Continuum Mech.\ Thermodyn.\ \mathbf{n}ewblock {\bf 20} (2008), 375--396. \bibitem{Schmidt:2009} {\sc B.~Schmidt}. \mathbf{n}ewblock {\mathbf{e}m On the derivation of linear elasticity from atomistic models}. \mathbf{n}ewblock Netw.\ Heterog.\ Media \mathbf{n}ewblock {\bf 4} (2009), 789--812. 
\bibitem{SchmidtFraternaliOrtiz:2009} {\sc B.~Schmidt, F.~Fraternali, M.~Ortiz}. \mathbf{n}ewblock {\mathbf{e}m Eigenfracture: an eigendeformation approach to variational fracture}. \mathbf{n}ewblock SIAM\ Mult.\ Model.\ Simul.\ \mathbf{n}ewblock {\bf 7} (2009), 1237--1266. \bibitem{Toupin:62} \mathbf{n}ewblock {\sc R.A.~Toupin}. \mathbf{n}ewblock Elastic materials with couple stresses. \mathbf{n}ewblock \mathbf{e}mph{Arch.\ Ration.\ Mech.\ Anal.} \mathbf{n}ewblock {\bf 11} (1962), 385--414. \bibitem{Toupin:64} \mathbf{n}ewblock {\sc R.A.~Toupin}. \mathbf{n}ewblock Theory of elasticity with couple stress. \mathbf{n}ewblock \mathbf{e}mph{Arch.\ Ration.\ Mech.\ Anal.} \mathbf{n}ewblock {\bf 17} (1964), 85--112. \mathbf{e}nd{thebibliography} \mathbf{e}nd{document}
\begin{document}
\begin{center}
{{{\Large\sf\bf Dependent Mixtures of Geometric Weights Priors}}}\\
{\sf Spyridon J. Hatjispyros \footnote{Corresponding author. Tel.: +30 22730 82326\\
\indent E-mail address: [email protected] }$^{,\,*}$, Christos Merkatas$^{*}$, Theodoros Nicoleris$^{**}$, Stephen G. Walker$^{***}$}
\end{center}
\centerline{\sf $^{*}$ Department of Mathematics, University of the Aegean,}
\centerline{\sf Karlovassi, Samos, GR-832 00, Greece.}
\centerline{\sf $^{**}$ Department of Economics, National and Kapodistrian University of Athens,}
\centerline{\sf Athens, GR-105 59, Greece.}
\centerline{\sf $^{***}$ Department of Mathematics, University of Texas at Austin,}
\centerline{\sf Austin, Texas 78712, USA.}
\begin{abstract}
A new approach to the joint estimation of partially exchangeable observations is presented. This is achieved by constructing a model with pairwise dependence between random density functions, each of which is modeled as a mixture of \textit{geometric} stick breaking processes. The claim is that mixture modeling with Pairwise Dependent Geometric Stick Breaking Process (PDGSBP) priors is sufficient for prediction and estimation purposes; that is, making the weights more exotic does not actually enlarge the support of the prior. Moreover, the corresponding Gibbs sampler for estimation is faster and easier to implement than its Dirichlet process counterpart.

\noindent {\sl Keywords:} Bayesian nonparametric inference; Mixture of Dirichlet process; Geometric stick breaking weights; Geometric Stick Breaking Mixtures; Dependent Process.
\end{abstract}

\noindent {\bf 1. Introduction.}
In Bayesian nonparametric methods, the use of priors such as the Dirichlet process (Ferguson, 1973) is justified by the assumption that the observations are exchangeable, which means that the distribution of $(X_{1},\ldots,X_{n})$ coincides with the distribution of $(X_{\pi(1)},\ldots,X_{\pi(n)})$ for all $\pi\in S(n)$, where $S(n)$ is the set of permutations of $\{1,\ldots,n\}$. However, in real life applications, data are often \textit{partially exchangeable}. For example, they may consist of observations sampled from $m$ populations, or may be sampled from an experiment conducted in $m$ different geographical areas. This means that the joint law is invariant under permutations within the $m$ subgroups of observations $(X_{j,i_j})_{1\le i_j\le n_j},\,1\le j\le m$, so for all $\pi_j\in S(n_j)$
{
\begin{equation}
\label{Partially1}
((X_{1,i_1})_{1\le i_1\le n_1},\ldots,(X_{m, i_m})_{1\le i_m\le n_m})\sim
((X_{1,\pi_1(i_1)})_{1\le i_1\le n_1},\ldots,(X_{m, \pi_m(i_m)})_{1\le i_m\le n_m}).
\end{equation}
}
When the exchangeability assumption fails one needs to use non--exchangeable priors. There has been substantial research interest, following the seminal work of MacEachern (1999), in the construction of suitable dependent stochastic processes, which then act as priors in Bayesian nonparametric models. These processes are distributions over a collection of measures indexed by values in some covariate space, such that the marginal distribution is described by a known nonparametric prior. The key idea is to induce dependence between a collection of random probability measures $(\mathbb P_j)_{1\le j\le m}$, where each $\mathbb P_j$ comes from a Dirichlet process (DP) with concentration parameter $c>0$ and base measure $P_0$.
Such random probability measures typically are used in mixture models to generate random density functions $f(x) = \int_{\Theta}K(x|\thetaheta)\mathbb P(d\thetaheta)$; see Lo (1984). There is a variety of ways that a DP can be extended to dependent DP. Most of them use the stick-breaking representation (Sethuraman, 1994), that is $$ \mathbb P(\,\cdot\,) = \sum_{k=1}^{\infty}w_{k}\deltaelta_{\thetaheta_{k}}(\,\cdot\,), $$ where $(\thetaheta_k)_{k\ge 1}$ are independent and identically distributed from $P_{0}$ and $(w_k)_{k\ge 1}$ is a stick breaking process; so if $(v_k)_{k\ge 1}$ are independent and identically distributed from ${\cal B}e(1, c)$, a beta distribution with mean $(1+c)^{-1}$, then $w_{1}=v_{1}$ and for $k>1$, $w_{k}=v_{k}\prod_{l<k}(1-v_l)$. Dependence is introduced through the weights and/or the atoms. A classical example of the use of dependent DP's is the Bayesian nonparametric regression problem where a random probability measure $\mathbb P_{z}$ is constructed for each covariate $z$, $$ \mathbb P_z(\,\cdot\,) = \sum_{k=1}^\infty w_k(z)\deltaelta_{\thetaheta_k(z)}(\,\cdot\,), $$ where $(w_{k}(z),\thetaheta_{k}(z))$ is a collection of processes indexed in $z$--space. Extensions to dependent DP models can be found in De Iorio et al. (2004), Griffin and Steel (2006), and Dunson and Park (2008). Recently there has been growing interest for the use of simpler random probability measures which while simpler are yet sufficient for Bayesian nonparametric density estimation. The geometric stick breaking (GSB) random probability measure (Fuentes--Garc\'ia, et al. 2010) has been used for density estimation and has been shown to provide an efficient alternative to DP mixture models. Some recent papers extend this nonparametric prior to a dependent nonparametric prior. In the direction of covariate dependent processes, GSB processes have been seen to provide an adequate model to the traditional dependent DP model. For example, for Bayesian regression, Fuentes--Garcia et al. (2009) propose a covariate dependent process based on random probability measures drawn from a GSB process. Mena et al. (2011) used GSB random probability measures in order to construct a purely atomic continuous time measure--valued process, useful for the analysis of time series data. In this case, the covariate $z\geq 0$ denotes the time that each observation is (discretely) recorded and conditionally on each observation is drawn from a time--dependent nonparametric mixture model based on GSB processes. However, to the best of our knowledge, random probability measures drawn from a GSB process, for modeling related density functions when samples from each density function are available, has not been developed in the literature. In this paper we will construct pairwise dependent random probability measures based on GSB processes. That is, we are going to model a finite collection of $m$ random distribution functions $(\mathbb G_j)_{1\lambdae j\lambdae m}$, where each $\mathbb G_{j}$ is a GSB random probability measure, such that there is a unique common component for each pair $(\mathbb G_{j},\mathbb G_{j'})$ with $j\neq j'$. We are going to use these measures in the context of GSB mixture models, generating a collection of $m$ GSB pairwise dependent random densities $(f_{j}(x))_{1\lambdae j\lambdae m}$. 
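\noindent Both constructions just described are simple to simulate, and the simplicity of the geometric weights is central to what follows. Purely as an illustration (our own sketch, with an arbitrary truncation level and a standard normal base measure $G_0$, neither of which is prescribed by the model), the GSB and DP weight sequences can be compared as follows.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def gsb_measure(lam, base_sampler, truncation=200):
    """Truncated GSB random measure: geometric weights, i.i.d. atoms from G_0."""
    k = np.arange(1, truncation + 1)
    weights = lam * (1.0 - lam) ** (k - 1)
    weights /= weights.sum()                  # renormalise the truncated tail
    return weights, base_sampler(truncation)

def dp_stick_breaking(c, base_sampler, truncation=200):
    """Truncated DP random measure via Sethuraman's stick-breaking construction."""
    v = rng.beta(1.0, c, size=truncation)
    weights = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    weights /= weights.sum()
    return weights, base_sampler(truncation)

base = lambda n: rng.normal(0.0, 1.0, size=n)     # standard normal G_0 (illustrative)
q, atoms_gsb = gsb_measure(lam=0.25, base_sampler=base)
w, atoms_dp = dp_stick_breaking(c=3.0, base_sampler=base)
print("GSB weights:", np.round(q[:5], 3))
print("DP  weights:", np.round(w[:5], 3))
\end{verbatim}
\noindent Note that the GSB weights are deterministic and decreasing given $\lambda$, whereas the DP stick-breaking weights are random and unordered; this difference is exploited by the sampler of Section 3.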
Hence we obtain a set of random densities $(f_{1},\lambdadots,f_{m})$, where marginally each $f_{j}$ is a random density function $$ f_{j}(x) = \int_{\Theta}K(x|\thetaheta)\,\mathbb G_{j}(d\thetaheta), $$ thus generalizing the GSB priors to a multivariate setting for partially exchangeable observations. In the problem considered here, these random density functions $(f_{j})_{1\lambdae j\lambdae m}$ are thought to be related or similar, e.g. perturbations of each other, and so we aim to share information between groups to improve estimation of each density, especially for those densities $f_{j}$ for which the corresponding sample size $n_{j}$ is small. In this direction, the main references include the work of M\"uller et al. (2014), Bulla et al. (2009), Kolossiatis et al. (2013) and Griffin et al. (2013); more rigorous results can be found in Lijoi et al. (2014A, 2014B). All these models have been proposed for the modeling of an arbitrary but finite number of random distribution functions, via a common part and an index specific idiosyncratic part so that for $0<p_j<1$ we have $ \mathbb P_{j} = p_{j}\mathbb P_0 + (1-p_{j})\mathbb P_{j}^*, $ where $\mathbb P_0$ is the common component to all other distributions and $\{\mathbb P_{j}^*:j=1,\lambdadots,m\}$ are the idiosyncratic parts to each $\mathbb P_{j}$, and $\mathbb P_0,\mathbb P_{j}^*\stackrel{\rm iid}\sim {\cal DP}(c,P_0)$. In Lijoi et al. (2014B) normalized random probability measures based on the $\sigma$--stable process are used for modeling dependent mixtures. Although similar (all models coincide only for the $m = 2$ case), these models are different from our model which is based on pairwise dependence of a sequence of random measures (Hatjispyros et al. 2011, 2016A). We are going to provide evidence through numerical experiments that dependent GSB mixture models are an efficient alternative to pairwise dependent DP (PDDP) priors. First, we will randomize the existing PDDP model of Hatjispyros et al. (2011, 2016A), by imposing gamma priors on the concentration masses (leading to the more efficient rPDDP model). Then, for the objective comparison of execution times, we will conduct a-priori synchronized density estimation comparison studies between the randomized PDDP and the pairwise dependent GSB process (PDGSBP) models using synthetic and real data examples. This paper is organized as follows. In Section 2 we will demonstrate the construction of pairwise dependent random densities, using a dependent model suggested by Hatjispyros et al. (2011). We also demonstrate how specific choices of latent random variables can recover the model of Hatjispyros et al. and the dependent GSB model introduced in this paper. These latent variables will form the basis of a Gibbs sampler for posterior inference, given in Section 3. In Section 4 we resort to simulation. We provide comparison studies between the randomized version of the PDDP model and our newly introduced dependent GSB model, involving five cases of synthetic data and a real data set. Finally, Section 5 concludes with a summary and future work. \noindent{\betaf 2. Preliminaries.} We consider an infinite real valued process $\{X_{ji}:1\lambdae j\lambdae m,\,i\ge 1\}$ defined over a probability space $(\Omega,{\cal F}, {\rm P})$, that is partially exchangeable as in (\ref{Partially1}). 
Let ${\cal P}$ denote the set of probability measures over $\mathbb R$; then de Finetti proved that there exists a probability distribution $\mathbb Pi$ over ${\cal P}^m$, which satisfies { \betaegin{align} \nonumber & {\rm P}\{X_{ji}\in A_{ji}:1\lambdae j\lambdae m,1\lambdae i\lambdae n_j\}\\ \nonumber & = \int_{{\cal P}^m}{\rm P}\{X_{ji}\in A_{ji}:1\lambdae j\lambdae m,1\lambdae i\lambdae n_j\,|\,\mathbb Q_1,\lambdadots,\mathbb Q_m\}\,\mathbb Pi(d\mathbb Q_1,\lambdadots,d\mathbb Q_m)\\ \nonumber & = \int_{{\cal P}^m}\prod_{j=1}^m{\rm P}\{X_{ji}\in A_{ji}:1\lambdae i\lambdae n_j\,|\,\mathbb Q_j\}\,\mathbb Pi(d\mathbb Q_1,\lambdadots,d\mathbb Q_m)\\ \nonumber & = \int_{{\cal P}^m}\prod_{j=1}^m\,\prod_{i=1}^{n_j}\mathbb Q_j(A_{ji})\,\mathbb Pi(d\mathbb Q_1,\lambdadots,d\mathbb Q_m)\,. \end{align} } The de Finetti measure $\mathbb Pi$ represents a prior distribution over partially exchangeable observations. We start off by describing the PDDP model, with no auxiliary variables, using only the de Finetti measure $\mathbb Pi$, marginal measures $\mathbb Q_j$, then, we proceed to the definition of a randomized version of it, and to the specific details for the case of the GSB random measures. \noindent {\betaf A.} In Hatjispyros et al. (2011), the following hierarchical model was introduced. For $m$ subgroups of observations $\{(x_{ji})_{1\lambdae i\lambdae n_j}:1\lambdae j\lambdae m\}$, \betaegin{align} x_{ji}|\thetaheta_{ji} & \stackrel{\rm ind}\sim K(\,\cdot\,|\thetaheta_{ji})\nonumber\\ \thetaheta_{ji}|\mathbb Q_j & \stackrel{\rm iid}\sim\mathbb Q_j(\,\cdot\,)\nonumber\\ \mathbb Q_j = &\sum_{l=1}^mp_{jl}\mathbb P_{jl},\,\,\sum_{l=1}^mp_{jl}=1,\,\,\mathbb P_{jl}=\mathbb P_{lj}\nonumber\\ \mathbb P_{jl} \stackrel{\rm iid}\sim & \,\,{\cal DP}(c, P_0),\,\,1\lambdae j\lambdae l\lambdae m,\nonumber \end{align} for some kernel density $K(\,\cdot\,|\,\cdot\,)$, concentration parameter $c>0$ and parametric central measure $P_0$ for which $\mathbb E(\mathbb P_{jl}(d\thetaheta))=P_0(d\thetaheta)$. So, we have assumed that the random densities $f_j(x)$ are dependent mixtures of the dependent random measures $\mathbb Q_j$ via $f_j(x|\mathbb Q_j)=\int_\Theta K(x|\thetaheta)\mathbb Q_j(d\thetaheta)$, or equivalently, dependent mixtures of the $m$ independent mixtures $g_{jl}(x|\,\mathbb P_{jl})=\int_\Theta K(x|\,\theta)\,\mathbb P_{jl}(d\theta)$, $l=1,\lambdadots,m$. To introduce the rPDDP model, we randomize the PDDP model by sampling the $\mathbb P_{jl}$ measures from the independent Dirichlet processes ${\cal DP}(c_{jl},P_0)$ and then impose gamma priors on the concentration masses, i.e. $ \mathbb P_{jl} \stackrel{\rm ind}\sim \,{\cal DP}(c_{jl}, P_0),\quad c_{jl}\stackrel{\rm ind}\sim{\cal G}(a_{jl},b_{jl}),\,\,1\lambdae j\lambdae l\lambdae m. $ \noindent {\betaf B.} To develop a pairwise dependent geometric stick breaking version, we let the random density functions $f_j(x)$ generated via \betaegin{equation} \lambdaabel{fjofx} f_j(x):=f_j(x|\,\mathbb Q_j)\, = \, \sum_{l=1}^m p_{jl}\,g_{j\,l}(x|\,\mathbb G_{jl}),\quad\mathbb Q_j = \sum_{l=1}^mp_{jl}\mathbb G_{jl},\quad 1\lambdae j\lambdae m. 
\end{equation} The $g_{jl}(x):=g_{jl}(x|\,\mathbb G_{jl})=\int_\Theta K(x|\,\theta)\,\mathbb G_{jl}(d\theta)$ random densities are now independent mixtures of GSB processes, satisfying $g_{jl}=g_{lj}$, under the slightly altered definition \betaegin{equation} \lambdaabel{abuseddef1} \mathbb G_{jl}=\sum_{k=1}^\infty q_{jlk}\deltaelta_{\thetaheta_{jlk}}\quad{\rm with}\quad q_{jlk}=\lambdaambda_{jl}(1-\lambdaambda_{jl})^{k-1},\,\, \lambdaambda_{jl}\sim h(\,\cdot\,|\xi_{jl}),\,\,\,\thetaheta_{jlk}\stackrel{\rm iid}\sim G_0, \end{equation} where $h$ is a parametric density supported over the interval $(0,1)$ depending on some parameter $\xi_{jl}\in\Xi$, and $G_0$ is the associated parametric central measure. The independent GSB processes $\{\mathbb G_{jl}:\,1\lambdae j,\,l\lambdae m\}$ form a matrix $\mathbb G$ of random distributions with $\mathbb G_{jl}=\mathbb G_{lj}$. In matrix notation \betaegin{equation} \lambdaabel{Qmeasurematrix} {\mathbb Q}\,=\,\lambdaeft(p\otimes \mathbb G\right){\betaf 1}, \end{equation} where $p=(p_{jl})$ is the matrix of random selection weights, and $p\otimes\mathbb G$ is the Hadamard product of the two matrices defined as $(p\otimes\mathbb G)_{jl}=p_{jl}\mathbb G_{jl}$. By letting $\betaf 1$ to denote the $m\thetaimes 1$ matrix of ones it is that the $j$th element of vector $\mathbb Q$ is given by equation (\ref{fjofx}). \noindent {\betaf C.} Following a univariate construction of geometric slice sets (Fuentes--Garc\'ia et al. 2010), we define the stochastic variables ${\mathbf N}=(N_{ji})$ for $1\lambdae i\lambdae n_j$ and $1\lambdae j\lambdae m$, where $N_{ji}$ is an almost surely finite random variable of mass $f_{N}$ possibly depending on parameters, associated with the sequential slice set ${\cal S}_{ji}=\{1,\lambdadots,N_{ji}\}$. Following Hatjispyros et al. (2011, 2016a) we introduce: \betaegin{enumerate} \item The GSB mixture selection variables ${\betaoldsymbol\deltaelta}=(\delta_{ji})$; for an observation $x_{ji}$ that comes from $f_j(x)$, $\delta_{ji}$ selects one of the mixtures $\{g_{jl}(x):l=1,\lambdadots,m\}$. Then the observation $x_{ji}$ came from mixture $g_{j \deltaelta_{ji}}(x)$. \item The GSB clustering variables ${\betaoldsymbol d}=(d_{ji})$; for an observation $x_{ji}$ that comes from $f_j(x)$, given $\deltaelta_{ji}$, $d_{ji}$ allocates the component of the GSB mixture $g_{j \deltaelta_{ji}}(x)$ that $x_{ji}$ came from. Then the observation $x_{ji}$ came from component $K(x|\theta_{j\delta_{ji}d_{ji}})$. \end{enumerate} In what follows, unless otherwise specified, the random densities $f_j(x)$ are mixtures of independent GSB mixtures. \noindent {\betaf Proposition 1.~}{\sl Suppose that the clustering variables $(d_{ji})$ conditionally on the slice variables $(N_{ji})$ are having the discrete uniform distribution over the sets $({\cal S}_{ji})$ that is $d_{ji}|N_{ji}\sim{\cal DU}({\cal S}_{ji})$, and ${\rm P}\{N_{ji}=r|\delta_{ji}=l\}=f_N(r|\lambdaambda_{jl})$, then \betaegin{equation} \lambdaabel{proposition11} f_j(x_{ji},N_{ji}=r)=r^{-1}\sum_{l=1}^{m}p_{jl}f_{N}(r|\lambdaambda_{jl})\sum_{k=1}^{r}\,K(x_{ji}|\thetaheta_{jlk}), \end{equation} and \betaegin{equation} \lambdaabel{proposition12} f_{j}(x_{ji},N_{ji}=r,d_{ji}=k|\deltaelta_{ji}=l) = {1\over r}f_N(r|\lambdaambda_{jl})\,{\cal I}(k\lambdae r)\,K(x_{ji}|\thetaheta_{jlk}). \end{equation} } {\sl The proof is given in Appendix A.} The following proposition gives a multivariate analogue of equation ($2$) in Fuentes--Garc\'ia, et al. 
(2010): \noindent {\betaf Proposition 2.~}{\sl Given the random set ${\cal S}_{ji}$, the random functions in (\ref{fjofx}) become finite mixtures of a.s. finite equally weighted mixtures of the $K(\,\cdot\,|\,\cdot\,)$ probability kernels, that is \betaegin{equation} \lambdaabel{proposition21} f_j(x_{ji}|N_{ji}=r)=\sum_{l=1}^m{\cal W}(r|\lambdaambda_{jl})\sum_{k=1}^{r}r^{-1}K(x_{ji}|\thetaheta_{jlk}), \end{equation} where the probability weights $\{ {\cal W}(r|\lambdaambda_{jl}):1\lambdae l\lambdae m\}$ are given by $$ {\cal W}(r|\lambdaambda_{jl})={p_{jl} f_N(r|\lambdaambda_{jl})\over \sum_{l'=1}^m p_{jl'} f_N(r|\lambdaambda_{jl'})}. $$ } {\sl The proof is given in Appendix A.} Note that, the one--dimensional model introduced in Fuentes--Garc\'ia et al. (2010), under our notation attains the representation $$ f_j(x_{ji}|N_{ji}=r,\delta_{ji}=l)=\sum_{k=1}^{r}r^{-1}K(x_{ji}|\thetaheta_{jlk}). $$ \noindent{\betaf 2.1 The model.} Marginalizing (\ref{proposition12}) with respect to the variable $(N_{ji}, d_{ji})$, we obtain \betaegin{equation} \lambdaabel{fjgivendelta} f_j(x_{ji}|\delta_{ji}=l)=\sum_{k=1}^\infty\lambdaeft(\sum_{r=k}^\infty r^{-1}f_N(r|\lambdaambda_{jl})\right)K(x_{ji}|\thetaheta_{jlk}). \end{equation} The quantity inside the parentheses on the right-hand side of the previous equation is $f_j(d_{ji}|\delta_{ji}=l)$. Following Fuentes--Garc\'ia, et al. (2010), we substitute $f_N(r|\lambda_{jl})$ with the negative binomial distribution ${\cal NB}(r|2,\lambda_{jl})$, i.e. \betaegin{equation} \lambdaabel{NB2} f_N(r|\lambda_{jl}) = r \lambda_{jl}^2(1-\lambda_{jl})^{r-1}{\cal I}(r\ge 1), \end{equation} so equation (\ref{fjgivendelta}) becomes $$ f_j(x_{ji}|\delta_{ji}=l)=\sum_{k=1}^\infty q_{jlk} K(x_{ji}|\thetaheta_{jlk})\,\,\,{\rm with}\,\,\,q_{jlk}=\lambda_{jl}(1-\lambda_{jl})^{k-1}, $$ and the $f_j$ random densities take the form of a finite mixture of GSB mixtures $$ f_j(x_{ji})=\sum_{l=1}^m p_{jl}\sum_{k=1}^\infty q_{jlk} K(x_{ji}|\thetaheta_{jlk}). $$ We denote the set of observations along the $m$ groups as ${\betaoldsymbol x}=(x_{ji})$ and with ${\betaoldsymbol x}_j$ the set of observations in the $j$th group. The three sets of latent variables in the $j$th group will be denoted as ${\betaoldsymbol N}_j$ for the slice variables, ${\betaoldsymbol d}_j$ for the clustering variables, and finally ${\betaoldsymbol \delta}_j$ for the set of GSB mixture allocation variables. From now on, we are going to leave the auxiliary variables unspecified; especially for $\delta_{ji}$ we use the notation $ \delta_{ji}=(\delta_{ji}^1,\lambdadots,\delta_{ji}^m)\in\lambdaeft\{\vec{e}_1,\lambdadots,\vec{e}_m\right\}\,\,\,{\rm with}\,\,\,{\rm P}\{\delta_{ji}=\vec{e}_l\}=p_{jl}, $ where $\vec{e}_l$ denotes the usual basis vector having its only nonzero component equal to $1$ at position $l$. 
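\noindent The geometric form of the weights implied by this choice of $f_N$ can also be checked by simulation: drawing $N\sim{\cal NB}(2,\lambda)$ and then $d\,|\,N\sim{\cal DU}\{1,\ldots,N\}$ must return ${\rm P}(d=k)=\lambda(1-\lambda)^{k-1}$. A minimal Monte Carlo check (the value of $\lambda$ is arbitrary and the code is purely illustrative):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
lam = 0.3                      # an illustrative geometric probability lambda_{jl}
n_draws = 200_000

# f_N(r | lam) = r * lam^2 * (1-lam)^(r-1), r >= 1, i.e. 1 + NegBin(2, lam) failures.
N = 1 + rng.negative_binomial(2, lam, size=n_draws)
# d | N ~ discrete uniform on {1, ..., N}.
d = rng.integers(1, N + 1)

ks = np.arange(1, 9)
empirical = np.array([(d == k).mean() for k in ks])
geometric = lam * (1.0 - lam) ** (ks - 1)
print(np.round(empirical, 4))
print(np.round(geometric, 4))  # the two rows agree up to Monte Carlo error
\end{verbatim}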
Hence, for a sample of size $n_1$ from $f_1$, a sample of size $n_2$ from $f_2$, etc., a sample of size $n_m$ from $f_m$ we can write the full likelihood as a multiple product: \betaegin{eqnarray} f({\betaoldsymbol x},{\betaoldsymbol N},{\betaoldsymbol d}\,|\,{\betaoldsymbol \delta}) & = & \prod_{j=1}^m f({\betaoldsymbol x}_j,{\betaoldsymbol N}_j,{\betaoldsymbol d}_j\,|\,{\betaoldsymbol \delta}_j)\nonumber \\ & = & \prod_{j=1}^m\prod_{i=1}^{n_j}{\cal I}(d_{ji}\lambdae N_{ji})\prod_{l=1}^m \lambdaeft\{\lambda_{jl}^2(1-\lambda_{jl})^{N_{ji}-1}K(x_{ji}|\,\thetaheta_{j l d_{ji}})\right\}^{\deltaelta_{ji}^l}.\nonumber \end{eqnarray} In a hierarchical fashion, using the auxiliary variables, we have for $j=1,\lambdadots,m \thetaext{ and } i = 1,\lambdadots, n_{j},$ \betaegin{align} \nonumber & x_{ji}, N_{ji}\,|\, d_{ji}, \delta_{ji}, (\theta_{jr\delta_{ji}})_{1\lambdae r\lambdae m},\lambdaambda_{j\delta_{ji}} \,\stackrel{\rm ind}\sim\, \prod_{r=1}^{m}\lambdaeft\{\lambda_{jr}^2(1-\lambda_{jr})^{N_{ji}-1}K(x_{ji}|\thetaheta_{jr d_{ji}})\right\}^{\deltaelta_{ji}^{r}}{\cal I}(N_{ji}\ge d_{ji})\\ \nonumber & d_{ji}\,|\,N_{ji} \stackrel{\rm ind}\sim {\cal DU}({\cal S}_{ji}),\,\,\,{\rm P}\{\deltaelta_{ji}=\vec{e}_{l}\} = p_{jl}\\ \nonumber & q_{jik}=\lambdaambda_{ji}(1-\lambdaambda_{ji})^{k-1},\,\,\,\theta_{jik}\stackrel{\rm iid}\sim G_0,\,\,\,k\in\mathbb N. \end{align} \noindent{\betaf 2.2 The PDGSBP covariance and correlation.} In this sub--section we find the covariance and the correlation between $f_j(x)$ and $f_i(x)$. First we provide the following lemma. \noindent {\betaf Lemma 1.~}{\sl Let $g_\mathbb G(x)=\int_\Theta K(x|\theta)\mathbb G(d\theta)$ be a random density, with $\mathbb G=\lambda\sum_{j=1}^\infty (1-\lambda)^{j-1}\delta_{\theta_j}$ and $\theta_j\stackrel{\rm iid}\sim G_0$, then $$ \mathbb E[g_\mathbb G(x)^2] = \lambdaeft({1\over 2-\lambda}\right)\lambdaeft\{\lambda\int_\Theta K(x|\thetaheta)^2G_{0}(d\thetaheta) + 2(1-\lambda)\lambdaeft(\int_\Theta K(x|\thetaheta)G_{0}(d\thetaheta)\right)^2\right\}. $$ } {\sl The proof is given in Appendix A.} \noindent {\betaf Proposition 3.~}{\sl It is that \betaegin{equation} \lambdaabel{CovPDSBP} {\rm Cov}(f_j(x),f_i(x))\, =\, p_{ji}\,p_{ij}{\rm Var}\lambdaeft(\int_\Theta K(x|\theta)\mathbb G_{ji}(d\theta)\right), \end{equation} with \betaegin{equation} \lambdaabel{VarPDSBP} {\rm Var}\lambdaeft(\int_\Theta K(x|\theta)\mathbb G_{ji}(d\theta)\right)={\lambda_{ji}\over 2-\lambda_{ji}}{\rm Var}(K(x|\theta)). \end{equation} } {\sl The proof is given in Appendix A.} Suppose now that $(f_j^{\cal D}(x))_{1\lambdae j\lambdae m}$ and $(f_j^{\cal G}(x))_{1\lambdae j\lambdae m}$ are two collections of $m$ DP and $m$ GSB pairwise dependent random densities respectively, i.e. $f_j^{\cal D}(x)=\sum_{l=1}^m p_{jl}g_{jl}^{\cal D}(x)$ with $g_{jl}^{\cal D}(x)=g_{jl}(x|\mathbb P_{jl})$, and $f_j^{\cal G}(x)=\sum_{l=1}^m p_{jl}g_{jl}^{\cal G}(x)$ with $g_{jl}^{\cal G}(x)=g_{jl}(x|\mathbb G_{jl})$. 
Then we have the following proposition: \noindent {\betaf Proposition 4.~}{\sl For given parameters $(\lambda_{ji})$, $(c_{ji})$, and matrix of selection probabilities $(p_{ji})$ it is that \betaegin{enumerate} \item The PDGSBP and rPDDP correlations are given by \betaegin{equation} \lambdaabel{PDGSBP_Corr} {\rm Corr}(f_j^{\cal G}(x),f_i^{\cal G}(x)) = {\lambdaambda_{ji}p_{ji}p_{ij}\over 2-\lambdaambda_{ji}} \lambdaeft(\sum_{l=1}^m\sum_{r=1}^m {p_{jl}^2 p_{ir}^2\lambdaambda_{jl}\lambdaambda_{ir}\over (2-\lambdaambda_{jl})(2-\lambdaambda_{ir})}\right)^{-1/2}, \end{equation} and \betaegin{equation} \lambdaabel{rPDDP_Corr} {\rm Corr}(f_j^{\cal D}(x),f_i^{\cal D}(x)) = {p_{ji}p_{ij}\over 1+c_{ji}} \lambdaeft(\sum_{l=1}^m\sum_{r=1}^m {p_{jl}^2 p_{ir}^2\over (1+c_{jl})(1+c_{ir})}\right)^{-1/2}. \end{equation} \item When $\lambdaambda_{ji}=\lambdaambda$ and $c_{ji}=c$ for all $1\lambdae j\lambdae i\lambdae m$, the expressions for the rPDDP and PDGSBP correlations simplify to $$ {\rm Corr}(f_j^{\cal G}(x),f_i^{\cal G}(x))={\rm Corr}(f_j^{\cal D}(x),f_i^{\cal D}(x)) = p_{ji}p_{ij}\lambdaeft( \sum_{l=1}^m \sum_{r=1}^m p_{jl}^2 p_{ir}^2 \right)^{-1/2}. $$ \end{enumerate} } {\sl The proof is given in Appendix A.} It is clear that, irrespective of the model, the random densities $f_j(x)$ and $f_i(x)$ are positively correlated whenever $p_{ji}=p_{ij}=1$. Similarly, the random densities $f_j(x)$ and $f_i(x)$ are independent (have no common part) whenever $p_{ji}=p_{ij}=0$. Another, less obvious feature, upon synchronization, is the ability of controlling the correlation among the models. For example, suppose that for $m=2$, the random densities $f_1(x)$ and $f_2(x)$ are dependent, and that $\lambda_{ji}=(1+c_{ji})^{-1}$; then consider the expression $$ D_{12}:=\lambda_{12}^2\, p_{12}^2\, p_{21}^2\,\lambdaeft\{ {\rm Corr}(f_1^{\cal G}(x),f_2^{\cal G}(x))^{-2} -{\rm Corr}(f_1^{\cal D}(x),f_2^{\cal D}(x))^{-2}\right\}. $$ Since correlations are positive, $D_{12}\ge 0$ whenever ${\rm Corr}(f_1^{\cal G}(x),f_2^{\cal G}(x))\lambdae {\rm Corr}(f_1^{\cal D}(x),f_2^{\cal D}(x)),$ and that $D_{12}<0$ whenever ${\rm Corr}(f_1^{\cal G}(x),f_2^{\cal G}(x))> {\rm Corr}(f_1^{\cal D}(x),f_2^{\cal D}(x))$. Then, it is not difficult to see that $$ D_{12}=\lambdaeft(p_{12}^2\lambda_{12}+r_1p_{11}^2\lambda_{11} \right) \lambdaeft(p_{21}^2\lambda_{12}+r_2 p_{22}^2\lambda_{22}\right) -\lambdaeft(p_{12}^2\lambda_{12}+p_{11}^2\lambda_{11}\right) \lambdaeft(p_{21}^2\lambda_{12}+p_{22}^2\lambda_{22}\right) $$ with $r_k=(2-\lambda_{12})/(2-\lambda_{kk})$, $k=1,2$. We have the following cases: \betaegin{enumerate} \item $\lambda_{12}>\max\{\lambda_{11},\lambda_{22}\}\,\Leftrightarrow\,r_1<1, r_2<1\,\Leftrightarrow\, {\rm Corr}(f_1^{\cal G}(x),f_2^{\cal G}(x))>{\rm Corr}(f_1^{\cal D}(x),f_2^{\cal D}(x))$. \item $\lambda_{12}<\min\{\lambda_{11},\lambda_{22}\}\,\Leftrightarrow\,r_1>1, r_2>1\,\Leftrightarrow\, {\rm Corr}(f_1^{\cal G}(x),f_2^{\cal G}(x))<{\rm Corr}(f_1^{\cal D}(x),f_2^{\cal D}(x))$. \item $\lambda_{12}=\lambda_{11}=\lambda_{22}\,\Leftrightarrow\,r_1=r_2=1\,\Leftrightarrow\, {\rm Corr}(f_1^{\cal G}(x),f_2^{\cal G}(x))={\rm Corr}(f_1^{\cal D}(x),f_2^{\cal D}(x))$. \end{enumerate} \noindent{\betaf 3. The PDGSBP Gibbs sampler.} In this section we will describe the PDGSBP Gibbs sampler for estimating the model. The details for the sampling algorithm of the PDDP model can be found in Hatjispyros et al. (2011, 2016A). 
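\noindent As a quick numerical companion to Proposition 4, the two correlation formulas can be evaluated directly under the synchronized parameterization $\lambda_{jl}=(1+c_{jl})^{-1}$; the following sketch (with arbitrary parameter values of our own, not taken from the paper's experiments) also confirms that the two correlations coincide when all the parameters are equal.
\begin{verbatim}
import numpy as np

def corr_gsb(j, i, p, lam):
    """Corr(f_j, f_i) under the PDGSBP model (Proposition 4, first expression)."""
    a = lam / (2.0 - lam)                       # elementwise lambda/(2 - lambda)
    num = a[j, i] * p[j, i] * p[i, j]
    den = np.sqrt(np.sum(np.outer(p[j] ** 2 * a[j], p[i] ** 2 * a[i])))
    return num / den

def corr_dp(j, i, p, c):
    """Corr(f_j, f_i) under the rPDDP model (Proposition 4, second expression)."""
    a = 1.0 / (1.0 + c)
    num = a[j, i] * p[j, i] * p[i, j]
    den = np.sqrt(np.sum(np.outer(p[j] ** 2 * a[j], p[i] ** 2 * a[i])))
    return num / den

p = np.array([[0.4, 0.6],
              [0.7, 0.3]])
lam = np.array([[0.2, 0.5],
                [0.5, 0.8]])
c = 1.0 / lam - 1.0                              # synchronized: lambda = (1 + c)^{-1}
print(corr_gsb(0, 1, p, lam), corr_dp(0, 1, p, c))

# Equal parameters: both correlations reduce to the same function of p alone.
lam_eq = np.full((2, 2), 0.5)
print(corr_gsb(0, 1, p, lam_eq), corr_dp(0, 1, p, 1.0 / lam_eq - 1.0))
\end{verbatim}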
At each iteration we will sample the variables,
\begin{align}
\nonumber & \theta_{jlk}, \,\,1\le j \le l \le m,\,1\le k \le N^*,\\
\nonumber & d_{ji},N_{ji},\delta_{ji}, \,\,1\le j \le m,\,1\le i \le n_j,\\
\nonumber & p_{jl}, \,\,1\le j \le m,\, 1\le l \le m,
\end{align}
with $N^*=\max_{j,i}N_{ji}$ being almost surely finite.

\noindent {\bf 1.} For the locations of the random measures, for $k=1,\ldots,d^*$, where $d^*=\max_{j,i}d_{ji}$, it is that
$$
f(\theta_{jlk}|\cdots) \propto f(\theta_{jlk})
\begin{dcases}
\prod_{i=1}^{n_j}K(x_{ji}|\theta_{jlk})^{{\cal I}(\delta_{ji}=\vec{e}_l,\,d_{ji}=k)}
\prod_{i=1}^{n_l}K(x_{li}|\theta_{jlk})^{{\cal I}(\delta_{li}=\vec{e}_j,\,d_{li}=k)} & \,\,\,l>j\,,\\
\prod_{i=1}^{n_j}K(x_{ji}|\theta_{jjk})^{{\cal I}(\delta_{ji}=\vec{e}_j,\,d_{ji}=k)} & \,\,\,l=j\,.
\end{dcases}
$$
If $N^*>d^*$ we sample the additional locations $\theta_{jl,d^*+1},\ldots,\theta_{jl,N^*}$ independently from the prior.

\noindent {\bf 2.} Here we sample the allocation variables $d_{ji}$ and the mixture component indicator variables $\delta_{ji}$ as a block. For $j=1,\ldots,m$ and $i=1,\ldots,n_{j}$, we have
$$
{\rm P}(d_{ji}=k,\delta_{ji}=\vec{e}_l\,|\,N_{ji}=r,\cdots)\,\propto\, p_{jl}\,K(x_{ji}|\theta_{jlk})\,{\cal I}(l\le m)\,{\cal I}(k\le r).
$$

\noindent {\bf 3.} The slice variables $N_{ji}$ have full conditional distributions given by
$$
{\rm P}(N_{ji} = r\,|\,\delta_{ji}=\vec{e}_l,d_{ji}=k,\cdots)\propto(1-\lambda_{jl})^r\,{\cal I}(r\ge k),
$$
which are truncated geometric distributions over the set $\{k, k+1,\ldots\}$.

\noindent {\bf 4.} The full conditional for $j=1,\ldots,m$ for the selection probabilities ${\boldsymbol p}_j=(p_{j1},\ldots,p_{jm})$, under a Dirichlet prior $f({\boldsymbol p}_j\,|\,{\boldsymbol a}_j)\propto\prod_{l=1}^m p_{jl}^{a_{jl}-1}$ with hyperparameter ${\boldsymbol a}_j=(a_{j1},\dots,a_{jm})$, is Dirichlet
$$
f({\boldsymbol p}_j\,|\cdots)\,\propto\,\prod_{l=1}^m p_{jl}^{a_{jl}+\sum_{i=1}^{n_j}{\cal I}(\delta_{ji}\,=\,\vec{e}_l) - 1}.
$$

\noindent {\bf 5.} Here we update the geometric probabilities $(\lambda_{jl})$ of the GSB measures. For $1\le j\le l\le m$, it is that
$$
f(\lambda_{jl}|\cdots) \propto f(\lambda_{jl})
\begin{dcases}
\prod_{i=1}^{n_j}\left\{\lambda_{jl}^2(1-\lambda_{jl})^{N_{ji}-1}\right\}^{{\cal I}(\delta_{ji}=\vec{e}_l)}
\prod_{i=1}^{n_l}\left\{\lambda_{jl}^2(1-\lambda_{jl})^{N_{li}-1}\right\}^{{\cal I}(\delta_{li}=\vec{e}_j)} & \,\,\,l>j\,,\\
\prod_{i=1}^{n_j}\left\{\lambda_{jj}^2(1-\lambda_{jj})^{N_{ji}-1}\right\}^{{\cal I}(\delta_{ji}=\vec{e}_j)} & \,\,\,l=j\,.
\end{dcases}
$$
To complete the model, we assign priors to the geometric probabilities. For a fair comparison of the execution time between the two models, we apply the transformed priors induced by $\lambda_{jl}=(1+c_{jl})^{-1}$. So, by placing gamma priors $c_{jl}\sim{\cal G}(a_{jl},b_{jl})$ over the concentration masses $c_{jl}$ of the PDDP model, we have
\begin{equation}
\label{TGamma}
f(\lambda_{jl})={\cal TG}(\lambda_{jl}\,|\,a_{jl},b_{jl})\propto
\lambda_{jl}^{-(a_{jl}+1)}e^{-b_{jl}/\lambda_{jl}}(1-\lambda_{jl})^{a_{jl}-1}\,{\cal I}(0<\lambda_{jl}<1).
\end{equation}
In the Appendix, we give the full conditionals for the $\lambda_{jl}$'s, their corresponding embedded Gibbs sampling schemes, and the sampling algorithm for the concentration masses.
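\noindent To make steps 2 and 3 concrete, the following is a minimal illustrative sketch, in Python, of how the block $(d_{ji},\delta_{ji})$ and the slice variable $N_{ji}$ can be updated for a single group $j$. It assumes a normal kernel parameterized by mean and precision, and that step 1 has already drawn enough atoms; the function and variable names are ours and do not refer to any released code. Steps 1, 4 and 5 are omitted.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def normal_kernel(x, theta):
    mu, tau = theta                       # theta = (mean, precision)
    return np.sqrt(tau / (2 * np.pi)) * np.exp(-0.5 * tau * (x - mu) ** 2)

def update_group(x_j, N_j, theta_j, p_j, lam_j):
    """Steps 2 and 3 for one group j: block-sample (d_ji, delta_ji), then N_ji.

    theta_j[l][k] is the atom theta_{j,l+1,k+1}; p_j[l] = p_{j,l+1};
    lam_j[l] = lambda_{j,l+1}.  Step 1 is assumed to have topped up the atoms.
    """
    m = len(p_j)
    d_j = np.empty_like(N_j)
    delta_j = np.empty_like(N_j)
    for i, x in enumerate(x_j):
        r = N_j[i]
        # Step 2: P(d=k, delta=e_l | N=r, ...) propto p_{jl} K(x|theta_{jlk}) I(k<=r).
        probs = np.array([[p_j[l] * normal_kernel(x, theta_j[l][k])
                           for k in range(r)] for l in range(m)]).ravel()
        idx = rng.choice(m * r, p=probs / probs.sum())
        l, k = divmod(idx, r)
        delta_j[i], d_j[i] = l + 1, k + 1
        # Step 3: N_ji | d_ji is geometric(lambda_{jl}) truncated to {d_ji, d_ji+1, ...}.
        N_j[i] = d_j[i] + rng.geometric(lam_j[l]) - 1
    return d_j, delta_j, N_j

# Tiny illustration: one group, m = 2 measures, three observations.
x = np.array([0.1, 5.2, 4.8])
N = np.array([3, 3, 3])
theta = [[(0.0, 1.0)] * 5, [(5.0, 1.0)] * 5]     # five pre-drawn atoms per measure
print(update_group(x, N, theta, p_j=[0.5, 0.5], lam_j=[0.4, 0.4]))
\end{verbatim}
\noindent Note that the support of the block in step 2 is known in advance, $\{1,\ldots,N_{ji}\}\times\{1,\ldots,m\}$, which is the point developed in the complexity comparison of the next subsection.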
\noindent{\betaf 3.1 The complexity of the rPDDP and PDGSBP samplers.} The main difference between the two samplers in terms of execution time, comes from the blocked sampling of the clustering and the mixture indicator variables $d_{ji}$ and $\delta_{ji}$. \noindent{\betaf The rPDDP model:} The state space of the variable $(d_{ji},\deltaelta_{ji})$ conditionally on the slice variable $u_{ji}$ is $ (d_{ji},\deltaelta_{ji})(\Omega)= \cup_{l=1}^{m} \lambdaeft(A_{w_{jl}}(u_{ji}) \thetaimes \{\vec{e}_l\}\right), $ where $A_{w_{jl}}(u_{ji})=\{r\in{\mathbb N}:u_{ji}<w_{jlr}\}$ is the a.s. finite slice set corresponding to the observation $x_{ji}$ (Walker, 2007). At each iteration of the Gibbs sampler, we have $m(m+1)/2$ vectors of stick-breaking weights $\vec{w}_{jl}$, each of length $N_{jl}^*$; where $N_{jl}^{*}\sim 1 + {\rm Poisson}(-c_{jl}\lambdaog u_{jl}^{*})$ with $c_{jl}$ being the concentration parameter of the Dirichlet process $\mathbb{P}_{jl}$ and $u_{jl}^{*}$ being the minimum of the slice variables in densities $f_{j}$ and $f_{l}.$ Algorithm $1$ gives the blocked sampling procedure of the clustering and mixture indicator variables. An illustration of the effect of the slice variable $u_{ji}$ is given in Figure 1(a). \betaegin{algorithm}[H] \caption{:~rPDDP} \lambdaabel{djiDPM} \betaegin{algorithmic}[1] \mathbb Procedure{Sample $(d_{ji},\delta_{ji})$}{} \For{random densities $f_{j},\,\,\,j=1$ to $m$ } \For{each data point $x_{ji}\in f_{j}\,\,\,i=1$ to $n_{j}$ } \For{each mixture component $K(x_{ji}|\thetaheta_{jl}),\,\,\,l=1$ to $m$ } \State Construct slice sets $A_{w_{jl}}(u_{ji})$ \mathbb EndFor \State Sample $(d_{ji}=k,\delta_{ji}=r|\cdots)\propto K(x_{ji}|\thetaheta_{jrk})\,{\cal I}\lambdaeft((k,r)\in \cup_{l=1}^{m}\lambdaeft(A_{w_{jl}}(u_{ji})\thetaimes \{\vec{e}_l\}\right)\right)$ \mathbb EndFor \mathbb EndFor \mathbb EndProcedure \end{algorithmic} \end{algorithm} Since the weights forming the stick-breaking representation are not in an ordered form, the construction of the slice sets in step 5 of Algorithm \ref{djiDPM} requires a complete search in the array where the weights are stored. This operation is done in ${\cal O}(N_{jl}^{*})$ time. For the sampling of the $d_{ji}$ and $\delta_{ji}$ variables in step 6, the choice of their value is an element from the union $\cup_{l=1}^{m}\lambdaeft(A_{w_{jl}}(u_{ji})\thetaimes \{\vec{e}_l\}\right).$ This means that the rPDDP algorithm for each $j$, must create $m$ slice sets which require $N_{jl}^{*}$ comparisons each. The worst case scenario is that the sampled $(d_{ji},\delta_{ji})$ is the last element of $\cup_{l=1}^{m}\lambdaeft(A_{w_{jl}}(u_{ji})\thetaimes \{\vec{e}_l\}\right)$. Thus, the DP based procedure of sampling $(d_{ji},\delta_{ji})$ is of order $$ {\cal O}\lambdaeft( m^2 n_j N_{jl}^* \sum_{l=1}^m|A_{w_{jl}}(u_{ji})| \right) = {\cal O}\lambdaeft(N_{jl}^*\sum_{l=1}^m|A_{w_{jl}}(u_{ji})|\right). $$ \noindent{\betaf The PDGSBP model:} The state space of the variable $(d_{ji},\deltaelta_{ji})$ conditionally on the slice variable $N_{ji}$ is $ (d_{ji}, \delta_{ji})(\Omega)=\cup_{l=1}^{m} \lambdaeft({\cal S}_{ji} \thetaimes \{\vec{e}_l\}\right). $ In the GSB case, the slice variable has a different r\^olee. It indicates at which random point the search for the appropriate $d_{ji}$ will stop. In Figure 1(b) we illustrate this argument. In Algorithm \ref{djiGSB} the worst case scenario is that the sampled $(d_{ji},\delta_{ji})$ will be the last element of $\cup_{l=1}^{m} \lambdaeft({\cal S}_{ji} \thetaimes \{\vec{e}_l\}\right)$. 
Thus, the GSB based procedure of sampling $(d_{ji},\delta_{ji})$ is of order $ \mathcal{O}\lambdaeft(m^{2}n_{j}N_{jl}\right) = \mathcal{O}\lambdaeft(N_{jl}\right). $ \betaegin{algorithm}[H] \caption{:~PDGSBP} \lambdaabel{djiGSB} \betaegin{algorithmic}[1] \mathbb Procedure{Sample $(d_{ji},\delta_{ji})$}{} \For{random densities $f_{j},\,\,\,j=1$ to $m$ } \For{each data point $x_{ji} \in f_{j}\,\,\,i=1$ to $n_{j}$ } \For{each mixture component $K(x_{ji}|\thetaheta_{jl}),\,\,\,l=1$ to $m$ } \State Sample $(d_{ji}=k,\delta_{ji}=r|\cdots)\propto K(x_{ji}|\thetaheta_{jrk})\,{\cal I}(k\lambdaeq N_{ji})\,{\cal I}(r\lambdaeq m)$ \mathbb EndFor \mathbb EndFor \mathbb EndFor \mathbb EndProcedure \end{algorithmic} \end{algorithm} \betaegin{figure}[H] \centering \subfloat[Subfigure 1 list of figures text][ Stick-breaking weights for some $N_{jl}^*=20$. The red dashed line represents the slice variable $u_{ji}=0.05$. The algorithm must check all the $N_{jl}^*$ values to accept those that they satisfy $u_{ji}<w_{jlk}$. After a complete search, the slice set is $A_{w_{jl}}(u_{ji}) = \{1,2,3,5,7,8\}$. ]{ \includegraphics[width=0.45\thetaextwidth]{FIGURE1a} \lambdaabel{fig:dpslice}} \qquad \subfloat[Subfigure 2 list of figures text][ Geometric stick-breaking weights for $N_{jl}^*=20$. The red dashed line represents the slice variable $N_{ji}=6$. The slice set is simply ${\cal S}_{ji} = \{1,2,3,4,5,6\}$.]{ \includegraphics[width=0.45\thetaextwidth]{FIGURE1b} \lambdaabel{fig:gsbslice}} \caption{A visualization of the effect of the $u_{ji}$ snd $N_{ji}$ slice variables are given in Figures 1(a) and 1(b) respectively.} \lambdaabel{fig:slicevars} \end{figure} \noindent{\betaf 4. Illustrations.} In this section we illustrate the efficiency of the PDGSBP model. For the choice of a normal kernel (unless otherwise specified) $K(x|\thetaheta) = \mathcal{N}(x|\thetaheta)$ where $\thetaheta=(\mu,\thetaau^{-1})$ and $\thetaau=\sigma^{-2}$ is the precision. The prior over the means and precisions of the PDGSBP ($G_0$) and the rPDDP model ($P_0$) is the independent normal-gamma measure, given by $$ P_0(d\mu,d\thetaau)=G_0(d\mu,d\thetaau)=\,{\cal N}(\mu\,|\,\mu_0,\thetaau_0^{-1}) \,{\cal G}(\thetaau\,|\,\epsilon_1,\epsilon_2)\,d\mu d\thetaau. $$ Attempting a noninformative prior specification (unless otherwise specified), we took $\mu_0=0$ and $\thetaau_0=\epsilon_1=\epsilon_2=10^{-3}$. For the concentration masses of the rPDDP model, a-priori, we set $c_{jl}\sim{\cal G}(a_{jl},b_{jl})$. For an objective evaluation of the execution time, of the two algorithms under different scenarios, we choose a synchronized prior specification, namely, for the geometric probabilities, we set $\lambda_{jl}\sim{\cal TG}(a_{jl},b_{jl})$ -- the transformed gamma density given in equation (\ref{TGamma}). In the appendix B, we show that such prior specifications are valid for $a_{jl}>1$. In all our numerical examples, we took $a_{jl}=b_{jl}=1.1$. For our numerical experiments (unless otherwise specified), the hyperparameters $(\alpha_{jl})$ of the Dirichlet priors over the matrix of the selection probabilities $p=(p_{jl})$ has been set to $\alpha_{jl}=1$. In all cases, we measure the similarity between probability distributions with the Hellinger distance. So for example, ${\cal H}_{\cal G}(f,\hat{f})$ and ${\cal H}_{\cal D}(f,\hat{f})$, will denote the Hellinger distance between the true density $f$ and the predictive density $\hat{f}$ of the PDGSBP and rPDDP algorithms, respectively. 
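\noindent For reference, the Hellinger distance ${\cal H}(f,g)=\{1-\int\sqrt{f(x)g(x)}\,dx\}^{1/2}$ can be approximated on a grid from the true density and a KDE of a predictive sample. The sketch below only illustrates the kind of computation involved; the mixture, the grid and the use of a Gaussian KDE are choices of ours, and in the paper the predictive sample comes from the fitted models rather than from $f$ itself.
\begin{verbatim}
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(3)

def hellinger(f_vals, g_vals, dx):
    """H(f,g) = sqrt(1 - Bhattacharyya coefficient), on an equally spaced grid."""
    bc = np.sum(np.sqrt(f_vals * g_vals)) * dx
    return np.sqrt(max(0.0, 1.0 - bc))

# True density: a two-component normal mixture (values purely illustrative).
f_true = lambda x: 0.5 * norm.pdf(x, -3, 1) + 0.5 * norm.pdf(x, 3, 1)

# Stand-in for a predictive sample (here drawn from f itself for the example).
pred = np.concatenate([rng.normal(-3, 1, 500), rng.normal(3, 1, 500)])
f_hat = gaussian_kde(pred)

grid = np.linspace(-10, 10, 2001)
dx = grid[1] - grid[0]
print("Hellinger distance:", hellinger(f_true(grid), f_hat(grid), dx))
\end{verbatim}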
The Gibbs samplers run for $11\times 10^4$ iterations, discarding the first $10^4$ samples as a burn-in period.

\noindent{\bf 4.1 Execution time efficiency of the PDGSBP model.}

\noindent{\bf Nested normal mixtures with a unimodal common and idiosyncratic part:} Here, we choose to include all pairwise and idiosyncratic dependences in the form of unimodal, equally weighted normal mixture components. The mixture components are well separated and have unit variance. We define each data model ${\cal M}_m=\{f_j^{(m)}:1\le j\le m\}$ of dimension $m\in\{2,3,4\}$, based on a $4\times 10$ matrix $M=(M_{jk})$, with entries in the set $\{0,1\}$, having at most two ones in each column and exactly four ones in each row. When there is exactly one entry of one, the column defines an idiosyncratic part. The appearance of exactly two ones in a column defines a common component. We let the matrix $M$ be given by
\[
M=
\begin{bmatrix}
1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 1 & 0 & 0 & 1 & 1 & 0 \\
0 & 1 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 1 \\
1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 1 \\
\end{bmatrix},
\]
and for $m\in\{2,3,4\}$, we define
$$
{\cal M}_m:\,f_j^{(m)}(x)\propto\sum_{k=5-m}^{2(m+1)}M_{jk}\,{\cal N}(x|10(k-6),1),\,\,1\le j\le m.
$$
We independently draw samples of sizes $n_j^{(2)}=60$ from the $f_j^{(2)}$'s, $n_j^{(3)}=120$ from the $f_j^{(3)}$'s, and $n_j^{(4)}=200$ from the $f_j^{(4)}$'s. In all cases, the PDGSBP and the rPDDP density estimations are of the same quality. In Figures 2(a)--(d) we give the histograms of the data sets for the specific case $m=4$, which are overlaid with the kernel density estimates (KDE's) based on the predictive samples of the $f_j^{(4)}$'s coming from the PDGSBP (solid line) and the rPDDP (dashed line) models. The differences between the two models are nearly indistinguishable. The Hellinger distances between the true and the estimated densities for the case $m=4$ are given in Table 1. In Table 2 we summarize the mean execution times (MET's) per $10^3$ iterations in seconds. The PDGSBP sampler is about three times faster than the rPDDP sampler. The corresponding MET ratios for $m=2,3$ and $4$ are $2.96$, $3.04$ and $3.37$, respectively. We can see that the relative speed advantage of the PDGSBP Gibbs sampler increases slightly with $m$. This will become clearer in our next simulated data example, where the average sample size per mode is kept constant.
\begin{figure}[H]
\includegraphics[width=1\textwidth]{FIGURE2}
\centering
\caption{Histograms of the data sets for the case $m=4$.
The superimposed KDE's are based on the predictive samples obtained from the PDGSBP and the rPDDP models.} \end{figure} \betaegin{table}[H] \betaegin{tabular}{ccc} \hline $i$ & ${\cal H}_{\cal{G}}(f_i^{(4)},\hat{f}_i^{(4)})$ & ${\cal H}_{\cal{D}}(f_i^{(4)},\hat{f}_i^{(4)})$\\ \hline\hline $1$ & $0.17$ & $0.17$ \\ $2$ & $0.19$ & $0.18$ \\ $3$ & $0.22$ & $0.22$ \\ $4$ & $0.20$ & $0.20$ \\ \hline \end{tabular} \caption{Hellinger distances for the case $m=4$.} \end{table} \betaegin{table}[H] \betaegin{tabular}{cllr} \hline $m$ & Model & Sample size & MET \\ \hline\hline 2 & PDGSBP & $n_j^{(2)}=60$ & 0.57 \\ & rPDDP & & 1.68 \\ \hline 3 & PDGSBP & $n_j^{(3)}=120$ & 2.16 \\ & rPDDP & & 6.57 \\ \hline 4 & PDGSBP & $n_j^{(4)}=200$ & 5.30 \\ & rPDDP & & 17.87 \\ \hline \end{tabular} \caption{Mean execution times in seconds per $10^3$ iterations.} \end{table} \noindent{\betaf Sparse $m$--scalable data set models:} In this example, we attempt to create $m$-scalable normal mixture data sets of the lowest possible sample size. To this respect, we sample independently $m$ groups of data sets from the densities $$ f_j^{(m)}(x)\,\propto\,{\cal N}(x|(j-1)\xi,1)\,{\cal I}(1\lambdae j<m)+ \sum_{k=1}^{m-1}{\cal N}(x|(k-1)\,\xi,1)\,{\cal I}(j=m), $$ with sample sizes $ n_j^{(m)}=n\{{\cal I}(1\lambdae j<m)+(m-1)\,{\cal I}(j=m)\}. $ We have chosen $\xi=10$ and an average sample size per mode of $n=20$, for $m\in\{2,\lambdadots,10\}$. In Figure 3 we depict the average execution times as functions of the dimension $m$. We can see how fast the two MET-curves diverge with increasing $m$. In Figure 4(a)--(j), for the specific case $m=10$, we give the histograms of the data sets, overladed with the KDE's based on the predictive samples of the $f_j^{(10)}$'s coming from the PDGSBP (solid line) and the rPDDP (dashed line) models. We can see that the PDGSBP and the rPDDP density estimations are of the same quality. The Hellinger distances between the true and the estimated densities for the specific case $m=10$ are given in Table 3. The large values of the Hellinger distances ${\cal H}_{\cal{G}}(f_{10}^{(10)},\hat{f}_{10}^{(10)})\alphapprox {\cal H}_{\cal{D}}(f_{10}^{(10)},\hat{f}_{10}^{(10)})\alphapprox 0.22$, are caused by the enlargement of the variances of the underrepresented modes due to the small sample size. \betaegin{figure}[H] \includegraphics[width=0.6\thetaextwidth]{FIGURE3} \centering \caption{Mean execution times for the two models, based on the sparse $m$-scalable data sets.} \end{figure} \betaegin{figure}[H] \includegraphics[width=1\thetaextwidth]{FIGURE4} \centering \caption{Histograms of sparse $m$-scalable data sets for the case $m=10$. 
The superimposed KDE's are based on the predictive samples of the PDGSBP and the rPDDP models.} \end{figure} \betaegin{table}[H] \betaegin{tabular}{ccccccccccc} \hline $ i$ & $1$ & $2$ & $3$ & $4$ & $5$ & $6$ & $7$ & $8$ & $9$ & $10$ \\ \hline\hline ${\cal H}_{\cal{G}}(f_i^{(10)},\hat{f}_i^{(10)})$ & $0.08$ & $0.10$ & $0.09$ & $0.14$ & $0.14$ & $0.13$ & $0.14$ & $0.09$ & $0.11$ & $0.22$\\ ${\cal H}_{\cal{D}}(f_i^{(10)},\hat{f}_i^{(10)})$ & $0.09$ & $0.11$ & $0.10$ & $0.15$ & $0.12$ & $0.10$ & $0.14$ & $0.09$ & $0.09$ & $0.22$\\ \hline \end{tabular} \caption{Hellinger distances between true and estimated densities for the case $m=10$ of the sparse scalable data example.} \end{table} \noindent{\betaf 4.2 Normal and gamma mixture models that are not well separated.} \noindent {\betaf The normal mixture example:} We will first consider a normal model for $m=2$, first appeared in Lijoi et. al (2014B). The data models for $f_1$ and $f_2$ are 7-mixtures. Their common part is a 4-mixture that is weighted differently between the two mixtures. More specifically, we sample two data sets of sample size $n_1=n_2=200$, independently from $$ (f_1,f_2)=\lambdaeft({1\over 2}\, g_{11} + {1\over 2}\, g_{12},\,\,{4\over 7}\, g_{21} + {3\over 7}\,g_{22}\right), $$ with \betaegin{align} g_{11} &= \frac{2}{7}{\cal N}(-8,0.25^2) + \frac{3}{7}{\cal N}(1,0.5^2) + \frac{2}{7}{\cal N}(10,1) \nonumber\\ g_{12} &= \frac{1}{7}{\cal N}(-10,0.5^2) + \frac{3}{7}{\cal N}(-3,0.75^2) + \frac{1}{7}{\cal N}(3,0.25^2) + \frac{2}{7}{\cal N}(7, 0.25^2)\nonumber\\ g_{21} &= \frac{2}{8}{\cal N}(-10,0.5^2) + \frac{3}{8}{\cal N}(-3,0.75^2) + \frac{2}{8}{\cal N}(3,0.25^2) + \frac{1}{8}{\cal N}(7, 0.25^2)\nonumber\\ g_{22} &= \frac{1}{3}{\cal N}(-6,0.5^2) + \frac{1}{3}{\cal N}(-1,0.25^2) + \frac{1}{3}{\cal N}(5,0.5^2).\nonumber \end{align} For this case, a-priori we took $(\mu_0,\thetaau_0,\epsilon_1,\epsilon_2)=(0,10^{-3},1,10^{-2})$. In Figure 5(a)--(b) we give the histograms of the data sets, with the predictive densities of the PDGSBP and rPDDP models superimposed in black solid and black dashed curves, respectively. We can see that the PDGSBP and the rPDDP density estimations are of the same quality. In Table 4, we give the Hellinger distance between the true and the estimated densities \betaegin{figure}[H] \centering \includegraphics[width=0.9\thetaextwidth]{FIGURE5} \caption{Density estimations of the 7-mixtures data sets, under the PDGSBP and the rPDDP models. The true densities have been superimposed in red.} \end{figure} \betaegin{table}[H] \betaegin{tabular}{ccc} \hline $ i$ & ${\cal H}_{\cal G}(f_i,\hat{f}_i)$ & ${\cal H}_{\cal D}(f_i,\hat{f}_i)$ \\ \hline\hline $1$ & $0.19$ & $0.18$ \\ $2$ & $0.18$ & $0.15$ \\ \hline \end{tabular} \caption{Hellinger distance between the true and the estimated densities.} \end{table} \noindent {\betaf The gamma mixture example:} In this example we took $m=2$. The data models for $f_1$ and $f_2$ are gamma 4-mixtures. The common part is a gamma 2-mixture, weighted identically among the two mixtures. 
More specifically, we sample two data sets of sample size $n_1=n_2=160$, independently from $$ (f_1,f_2)=\lambdaeft({2\over 5}\, g_{11} + {3\over 5}\,g_{12},\,\, {7\over 10}\, g_{12} + {3\over 10}\,g_{22}\right), $$ with \betaegin{align} g_{11} &= \frac{2}{ 3}{\cal G}( 2,1.1) + \frac{1}{ 3}{\cal G}( 80,2) \nonumber\\ g_{12} &= \frac{8}{14}{\cal G}( 10,0.9) + \frac{6}{14}{\cal G}(200,8.1)\nonumber\\ g_{22} & = \frac{2}{ 3}{\cal G}(105,3) + \frac{1}{ 3}{\cal G}(500,10),\nonumber \end{align} Because we want to estimate the density of non negative observations, we find it more appropriate to take the kernel to be a log-normal distribution (Hatjispyros et al. 2016B). That is $K(x|\thetaheta) = \mathcal{LN}(x|\thetaheta)$ with $\thetaheta=(\mu,\sigma^2)$, is the log-normal density with mean $\exp(\mu+\sigma^{2}/2)$. For this case, a-priori we set $$ (\mu_0,\thetaau_0,\epsilon_1,\epsilon_2)=(\betaar{S},0.5,2,0.01),\quad \betaar{S}={1\over n_1+n_2}\lambdaeft(\sum_{j=1}^{n_1}\lambdaog x_{1j}+\sum_{j=1}^{n_2}\lambdaog x_{2j}\right). $$ In Figure 6(a)-(b), we display the KDE's based on the predictive samples of the two models. We can see that the PDGSBP and the rPDDP density estimations are of the same quality. In Table 5, we give the Hellinger distances. \betaegin{figure}[H] \centering \includegraphics[width=0.9\thetaextwidth]{FIGURE6} \caption{The KDE's are based on the predictive sample of the PDGSBP model (solid curve in black) and the predictive sample of the rPDDP model (dashed curve in black).} \end{figure} \betaegin{table}[H] \betaegin{tabular}{ccc} \hline $ i$ & ${\cal H}_{\cal G}(f_i,\hat{f}_i)$ & ${\cal H}_{\cal D}(f_i,\hat{f}_i)$ \\ \hline\hline $1$ & $0.13$ & $0.11$ \\ $2$ & $0.19$ & $0.18$ \\ \hline \end{tabular} \caption{Hellinger distances for the gamma mixture data model.} \end{table} Because the common part is equally weighted among $f_1$ and $f_2$, it makes sense to display the estimations of the selection probability matrices under the two models $$ \mathbb E_{\cal G}(p\,|\,(x_{ji})) = \betaegin{pmatrix} 0.42 & 0.58\\ 0.64 & 0.36 \end{pmatrix},\quad \mathbb E_{\cal D}(p\,|\,(x_{ji})) = \betaegin{pmatrix} 0.42 & 0.58\\ 0.69 & 0.31 \end{pmatrix},\quad p_{\rm true}=\betaegin{pmatrix} 0.4 & 0.6\\ 0.7 & 0.3 \end{pmatrix}. $$ \noindent{\betaf 4.3 Borrowing of strength of the PDGSBP model.} In this example we consider three populations $\{D_j^{(s)}:j=1,2,3\}$, under three different scenarios $s\in\{1,2,3\}$. The sample sizes are always the same, namely, $n_1=200$, $n_2=50$ and $n_3=200$ -- the second population is sampled only once. The three data sets $D_1^{(s)}$, $D_2^{(s)}$ and $D_3^{(s)}$, are sampled independently from the normal mixtures $$ (f_{1}^{(s)},f_{2}^{(s)},f_{3}^{(s)})= \lambdaeft((1-q^{(s)})f+q^{(s)}g_1,\,\,f,\,\,(1-q^{(s)})f+q^{(s)}g_2\right), $$ where \betaegin{align} f\,\, & =\frac{3}{10}{\cal N}(-10,1) + \frac{2}{10}{\cal N}(-6,1) +\frac{2}{10}{\cal N}(6,1) + \frac{3}{10}{\cal N}(10,1)\nonumber\\ g_1 & =\,\frac{1}{2}{\cal N}(-4,1) + \frac{1}{2}{\cal N}(4,1)\nonumber\\ g_2 & =\,\frac{1}{2}{\cal N}(-12,1) + \frac{1}{2}{\cal N}(12,1).\nonumber \end{align} More specifically, the three scenarios are: \betaegin{enumerate} \item For $s=1$, we set, $q^{(1)}=0$. This is the case where the three populations are coming from the same 4--mixture $f$. We depict the density estimations under the first scenario in Figures 7(a)--(c). This is the case where the small data set, benefits the most in terms of borrowing of strength. \item For $s=2$, we set, $q^{(2)}=1/2$. 
The 2-mixtures $g_1$ and $g_2$ are the the idiosyncratic parts of the 6-mixtures $f_1^{(2)}$ and $f_3^{(2)}$, respectively. The density estimations under the second scenario are given in Figures 7(d)--(f). In this case, the strength of borrowing between the small data set and the two large data sets weakens. \item For $s=3$ we set $q^{(3)}=1$. In this case the three populations have no common parts. The density estimations are given in Figures 7(g)--(i). This is the worst case scenario, where there is no borrowing of strength between the small and the two large data sets. \end{enumerate} The Hellinger distances between the true and the estimated densities, for the three scenarios, are given in table 6. In the second column of the Table we can see how the Hellinger distance of the estimation $\hat{f}_2^{(s)}$ and the true density $f_2^{(s)}$ increases as the borrowing of strength weakens, it is that $ {\cal H}_{\cal G}(f_2^{(1)},\hat{f}_2^{(1)})<{\cal H}_{\cal G}(f_2^{(2)},\hat{f}_2^{(2)}) <{\cal H}_{\cal G}(f_2^{(3)},\hat{f}_2^{(3)}). $ \betaegin{figure}[H] \centering \includegraphics[width=0.9\thetaextwidth]{FIGURE7} \caption{Density estimation with the PDGSBP model (curves in black) under the three different scenarios. The true density has been superimposed in red.} \end{figure} \betaegin{table}[H] \betaegin{tabular}{cccc} \hline $s$ & ${\cal H}_{\cal{G}}(f_1^{(s)},\hat{f}_1^{(s)})$ & ${\cal H}_{\cal{G}}(f_2^{(s)},\hat{f}_2^{(s)})$ & ${\cal H}_{\cal{G}}(f_3^{(s)},\hat{f}_3^{(s)})$ \\ \hline\hline $1$ & $0.14$ & $0.19$ & $0.13$\\ $2$ & $0.15$ & $0.22$ & $0.15$\\ $3$ & $0.12$ & $0.26$ & $0.12$\\ \hline \end{tabular} \caption{Hellinger distances between the true and the estimated densities for the three scenario example.} \end{table} \noindent{\betaf 4.4 Real data example.} The data set is to be found at \thetaexttt{http://lib.stat.cmu.edu/datasets/pbcseq} and involves data from 310 individuals. We take the observation as SGOT (serum glutamic-oxaloacetic transaminase) level, just prior to liver transplant or death or the last observation recorded, under three conditions on the individual \betaegin{enumerate} \item The individual is dead without transplantation. \item The individual had a transplant. \item The individual is alive without transplantation. \end{enumerate} We normalize the means of all three data sets to zero. Since it is reasonable to assume the densities for the observations are similar for the three categories (especially for the last two), we adopt the models proposed in this paper with $m = 3$. The number of transplanted individuals is small (sample size of 28) so it is reasonable to borrow strength for this density from the other two. In this example, we set the hyperparameters of the Dirichlet priors for the selection probabilities to $$ \alpha_{jl}= \betaegin{cases} 10, & \mbox{if } j=l=1 \mbox{ or } j=l=3\\ 1, & \mbox{ otherwise.} \end{cases} $$ \betaegin{enumerate} \item In Figure 8(a)--(c) we provide histograms of the real data sets and superimpose the KDE's based on the predictive samples of the PDDP and PDGSBP samplers. The two models give nearly identical density estimations. \item The estimated a-posteriori selection probabilities are given below $$ \mathbb E_{\cal G}(p\,|\,(x_{ji})) = \betaegin{pmatrix} 0.61 & 0.23 & 0.16\\ 0.34 & 0.10 & 0.56\\ 0.08 & 0.12& 0.80 \end{pmatrix},\quad \mathbb E_{\cal D}(p\,|\,(x_{ji})) = \betaegin{pmatrix} 0.67 & 0.16 & 0.17\\ 0.29 & 0.15 & 0.56 \\ 0.10 & 0.12 & 0.78 \end{pmatrix}. 
$$
\end{enumerate}
By comparing the second rows of the selection matrices, we conclude that the strength of borrowing is slightly larger in the case of the PDGSBP model.
\begin{figure}[H]
\includegraphics[width=1\textwidth]{FIGURE8}
\centering
\caption{Histograms of the real data sets with superimposed KDE curves based on the predictive samples of the PDGSBP and rPDDP models.}
\end{figure}

\noindent{\bf 5. Discussion.}
In this paper we have generalized the GSB process to a multidimensional dependent stochastic process which can be used as a Bayesian nonparametric prior for density estimation in the case of partially exchangeable data sets. The resulting Gibbs sampler is as accurate as its DP based counterpart, yet faster and far less complicated. The main reason for this is that the sampled value of the GSB allocation variable $d_{ji}$ will be an element of the sequential slice set ${\cal S}_{ji}=\{1,\ldots,N_{ji}\}$. Thus, there is no need to search the arrays of the weights; we know the state space of the clustering variables in advance. On the other hand, the sampling of $d_{ji}$ in the DP based algorithm will always have one more step: the creation of the slice sets. For an objective comparison of the execution times of the two models, we have run the two samplers in an a-priori synchronized mode. This involves placing ${\cal G}(a_{jl},b_{jl})$ priors over the DP concentration masses $c_{jl}$, leading to a more efficient version of the PDDP model introduced in Hatjispyros et al. (2011, 2016A). We have shown that when the PDGSBP and PDDP models are synchronized, i.e. their parameters satisfy $\lambda_{ji}=(1+c_{ji})^{-1}$, the correlation between the models can be controlled by imposing further restrictions among the $\lambda_{ji}$ parameters. Finally, an interesting research path would be the generalization of the pairwise dependent $\mathbb Q_j$ measures to include all possible interactions, in the sense that
$$
\mathbb Q_j(\,\cdot\,)=p_j\,\mathbb G_j(\,\cdot\,)+\sum_{l=2}^m\sum_{\eta\,\in\,{\cal C}_{j,l,m}}p_{j,\eta}\,\mathbb G_{\eta_{(j)}}(\,\cdot\,)\quad {\rm with}\quad p_j+\sum_{l=2}^m\sum_{\eta\,\in\,{\cal C}_{j,l,m}}p_{j,\eta}=1,
$$
where the $\mathbb G_j$ and the $\mathbb G_{\eta_{(j)}}$'s are independent GSB processes, ${\cal C}_{j,l,m}=\{(k_1,\ldots,k_{l-1}):1\le k_1<\cdots<k_{l-1}\le m,\, k_r\neq j,\, 1\le r\le l-1\}$, and $\eta_{(j)}$ is the ordered vector of the elements of the vector $\eta$ together with $\{j\}$. Now the $f_j$ densities will be a mixture of $2^{m-1}$ GSB mixtures, and the total number of independent GSB processes needed to model $(f_1,\ldots,f_m)$ will be $2^m-1$.
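\noindent The bookkeeping behind these counts is easy to verify programmatically. The following small sketch (ours, not tied to any implementation discussed in the paper) enumerates, for $m=3$, the index sets of the processes entering each $f_j$ and the distinct processes needed overall.
\begin{verbatim}
from itertools import combinations

def interaction_sets(m):
    """Enumerate the GSB processes needed by the all-interactions construction.

    Each density f_j mixes over all subsets of {1,...,m} that contain j
    (2**(m-1) of them, including the singleton {j}); the distinct processes
    are indexed by the nonempty subsets of {1,...,m} (2**m - 1 in total).
    """
    labels = range(1, m + 1)
    per_density = {j: [tuple(sorted(set(s) | {j}))
                       for size in range(0, m)   # subsets of companions, size 0..m-1
                       for s in combinations([k for k in labels if k != j], size)]
                   for j in labels}
    distinct = {s for sets in per_density.values() for s in sets}
    return per_density, distinct

per_density, distinct = interaction_sets(3)
for j, sets in per_density.items():
    print(f"f_{j}: {len(sets)} GSB mixtures ->", sets)   # 2**(m-1) = 4 each
print("total independent GSB processes:", len(distinct))  # 2**m - 1 = 7
\end{verbatim}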
\noindent{\bf Appendix A} \noindent{\bf Proof of Proposition 1.} Starting from the $N_{ji}$-augmented random densities we have {\footnotesize \begin{eqnarray} f_j(x_{ji},N_{ji}=r) & = & \sum_{l=1}^{m}f_{j}(x_{ji},N_{ji}=r,\delta_{ji}=l) = \sum_{l=1}^{m}p_{jl}\,f_{j}(x_{ji},N_{ji}=r|\delta_{ji} = l) \nonumber\\ & = & \sum_{l=1}^{m}p_{jl}\sum_{k=1}^{\infty}f_{j}(x_{ji},N_{ji}=r,d_{ji}=k|\delta_{ji}=l) \nonumber\\ & = & \sum_{l=1}^{m}p_{jl}f_{j}(N_{ji}=r|\delta_{ji}=l)\sum_{k=1}^{\infty}f_j(d_{ji}=k|N_{ji}=r)f_{j}(x_{ji}|d_{ji}=k,\delta_{ji}=l).\nonumber \end{eqnarray} } Because $f_j(N_{ji}=r|\delta_{ji}=l)=f_N(r|\lambda_{jl})$ and $f_j(x_{ji}|d_{ji}=k,\delta_{ji}=l)=K(x_{ji}|\theta_{jlk})$, the last equation gives \begin{align} \nonumber & f_j(x_{ji},N_{ji}=r) = \sum_{l=1}^m p_{jl} f_N(r|\lambda_{jl})\sum_{k=1}^\infty {1\over r}{\cal I}(k\le r)K(x_{ji}|\theta_{jlk})\\ \nonumber & ~~~~~~~~~~~~~~~~~~~\, = \frac{1}{r} \sum_{l=1}^{m}p_{jl}f_{N}(r|\lambda_{jl})\sum_{k=1}^{r}\,K(x_{ji}|\theta_{jlk}). \end{align} Augmenting further with the variables $d_{ji}$ and $\delta_{ji}$ yields $$ f_j(x_{ji},N_{ji}=r,d_{ji}=k,\delta_{ji}=l)={1\over r}\,p_{jl}\,f_N(r|\lambda_{jl})\,{\cal I}(k\le r)\,K(x_{ji}|\theta_{jlk}). $$ Because ${\rm P}(\delta_{ji}=l)=p_{jl}$, the last equation leads to equation (\ref{proposition12}) and the proposition follows. $\square$ \noindent{\bf Proof of Proposition 2.} Marginalizing the joint of $x_{ji}$ and $N_{ji}$ with respect to $x_{ji}$ we obtain $$ f_j(N_{ji}=r)=\sum_{l=1}^m p_{jl}f_N(r|\lambda_{jl}). $$ Then dividing equation (\ref{proposition11}) by the probability that $N_{ji}$ equals $r$ we obtain equation (\ref{proposition21}). $\square$ \noindent{\bf Proof of Lemma 1.} Because $g_{\mathbb G}(x)=\lambda\sum_{j=1}^\infty(1-\lambda)^{j-1}K(x|\theta_j)$, we have \begin{align} \nonumber & \mathbb E\left\{g_{\mathbb G}(x)^2\right\}=\lambda^2\,\mathbb E\left\{\left(\sum_{j=1}^{\infty}(1-\lambda)^{j-1}K(x|\theta_j)\right)^2\right\}\\ \nonumber & =\lambda^2\left\{\sum_{j=1}^\infty(1-\lambda)^{2j-2}\,\mathbb E\left[K(x|\theta_j)^2\right]+2\sum_{k=2}^{\infty}\sum_{j=1}^{k-1}(1-\lambda)^{j+k-2}\,\mathbb E[K(x|\theta_j)K(x|\theta_k)]\right\}\\ \nonumber & =\lambda^2\left\{\sum_{j=1}^\infty(1-\lambda)^{2j-2}\mathbb E\left[K(x|\theta)^2\right]+2\,\sum_{k=2}^{\infty}\sum_{j=1}^{k-1}(1-\lambda)^{j+k-2}\mathbb E[K(x|\theta)]^2\right\}\\ \nonumber & =\lambda^2\left\{{1\over \lambda(2-\lambda)}\mathbb E\left[K(x|\theta)^2\right]+2\,{1-\lambda\over \lambda^2(2-\lambda)}\mathbb E[K(x|\theta)]^2\right\}, \end{align} which gives the desired result. $\square$ \noindent{\bf Proof of Proposition 3.} The random densities $f_i(x)=\sum_{l=1}^m p_{il}\,g_{il}(x)$ and $f_j(x)=\sum_{l=1}^m p_{jl}\,g_{jl}(x)$ depend on each other through the random measure $\mathbb G_{ji}$; therefore \begin{equation} \label{fifj1} \mathbb E[f_i(x)f_j(x)]= \mathbb E[\,\mathbb E(f_i(x)f_j(x)|\mathbb G_{ji})\,]=\mathbb E\{\,\mathbb E[f_i(x)|\mathbb G_{ji}]\,\mathbb E[f_j(x)|\mathbb G_{ji}]\,\}, \end{equation} and \begin{align} \nonumber & \mathbb E[f_j(x)|\mathbb G_{ji}]=\sum_{l\neq i}p_{jl}\,\mathbb E[g_{jl}(x)]+p_{ji}g_{ji}(x) =(1-p_{ji})\,\mathbb E[K(x|\theta)]+p_{ji}g_{ji}(x)\\ \nonumber & \mathbb E[f_i(x)|\mathbb G_{ji}]=\sum_{l\neq j}p_{il}\,\mathbb E[g_{il}(x)]+p_{ij}g_{ji}(x) =(1-p_{ij})\,\mathbb E[K(x|\theta)]+p_{ij}g_{ji}(x)\,.
\end{align} Substituting back into equation (\ref{fifj1}) one obtains $$ \mathbb E[f_i(x)f_j(x)]=(1-p_{ij}p_{ji})\,\mathbb E[K(x|\theta)]^2+p_{ij}p_{ji}\,\mathbb E\left[g_{ji}(x)^2\right]. $$ Using Lemma 1, the last equation becomes $$ \mathbb E[f_i(x)f_j(x)]={\lambda_{ji}p_{ji}p_{ij}\over 2-\lambda_{ji}}\left\{\mathbb E[K(x|\theta)^2]-\mathbb E[K(x|\theta)]^2\right\}+\mathbb E[K(x|\theta)]^2, $$ or that $$ {\rm Cov}(f_j(x),f_i(x))\, =\, {\lambda_{ji}p_{ji}\,p_{ij}\over 2-\lambda_{ji}}{\rm Var}(K(x|\theta)). $$ The desired result comes from the fact that \begin{align} \nonumber {\rm Var}\left(\int_\Theta K(x|\theta)\mathbb G_{ji}(d\theta)\right) & = \left\{{\lambda_{ji}\over 2-\lambda_{ji}}\mathbb E[K(x|\theta)^2]+{2(1-\lambda_{ji})\over 2-\lambda_{ji}}\mathbb E[K(x|\theta)]^2\right\}-\mathbb E[K(x|\theta)]^2\\ \nonumber & = {\lambda_{ji}\over 2-\lambda_{ji}}\left(\mathbb E[K(x|\theta)^2]-\mathbb E[K(x|\theta)]^2\right). \end{align} $\square$ \noindent{\bf Proof of Proposition 4.}\\ (1.) From equation (\ref{VarPDSBP}) and Proposition 3, we have that $$ {\rm Var}(f_j^{\cal G}(x)) = {\rm Var}\left(\sum_{l=1}^m p_{jl}g_{jl}^{\cal G}(x)\right) =\sum_{l=1}^m {p_{jl}^2\lambda_{jl}\over 2-\lambda_{jl}}{\rm Var}(K(x|\theta)). $$ Normalizing the covariance in equation (\ref{CovPDSBP}) by the associated standard deviations yields \begin{equation} \label{CorrG} {\rm Corr}(f_j^{\cal G}(x),f_i^{\cal G}(x)) = {\lambda_{ji}p_{ji}p_{ij}\over 2-\lambda_{ji}} \left(\sum_{l=1}^m\sum_{r=1}^m {p_{jl}^2 p_{ir}^2\lambda_{jl}\lambda_{ir}\over (2-\lambda_{jl})(2-\lambda_{ir})}\right)^{-1/2}. \end{equation} Similarly, from Proposition 1 in Hatjispyros et al. (2011), we have $$ {\rm Var}(f_j^{\cal D}(x)) =\sum_{l=1}^m {p_{jl}^2\over 1+c_{jl}}{\rm Var}(K(x|\theta)), $$ and \begin{equation} \label{CorrD} {\rm Corr}(f_j^{\cal D}(x),f_i^{\cal D}(x)) = {p_{ji}p_{ij}\over 1+c_{ji}} \left(\sum_{l=1}^m\sum_{r=1}^m {p_{jl}^2 p_{ir}^2\over (1+c_{jl})(1+c_{ir})}\right)^{-1/2}. \end{equation} \noindent (2.) When $\lambda_{ji}=\lambda$ and $c_{ji}=c$ for all $1\le j\le i\le m$, from equations (\ref{CorrG}) and (\ref{CorrD}), it is clear that $$ {\rm Corr}(f_j^{\cal G}(x),f_i^{\cal G}(x))={\rm Corr}(f_j^{\cal D}(x),f_i^{\cal D}(x)) = p_{ji}p_{ij}\left( \sum_{l=1}^m \sum_{r=1}^m p_{jl}^2 p_{ir}^2 \right)^{-1/2}. $$ $\square$ \noindent{\bf Appendix B} \noindent{\bf 1. Sampling of the concentration masses for the rPDDP model.} \noindent In this case, the random densities $(f_j)$ are represented as finite mixtures of the DP mixtures $g_{jl}(x|\mathbb P_{jl})$, where $\mathbb P_{jl}\sim{\cal DP}(c_{jl},P_0)$. We randomize the concentrations by letting $c_{jl}\sim{\cal G}(a_{jl},b_{jl})$. Following West (1992) we have the following two specific cases: \noindent {\bf A.} For $j=l$, the posterior $c_{jj}$'s will be affected only by the size of the data set ${\boldsymbol x}_j$ and the number of unique clusters for which $\delta_{ji}={\bf e}_j$.
Letting $$ \rho_{jj}=\#\{d_{ji}:\delta_{ji}={\bf e}_j,1\le i\le n_j\}, $$ we have \begin{align} \nonumber & \beta\sim{\cal B}e(c_{jj}+1, n_{j})\\ \nonumber & c_{jj}\,|\,\beta,\rho_{jj}\, \sim\, \pi_\beta\,{\cal G}(a_{jj}+\rho_{jj}, b_{jj}-\log\beta) + (1-\pi_\beta)\,{\cal G}(a_{jj}+\rho_{jj}-1, b_{jj}-\log\beta) \end{align} with the weights $\pi_\beta$ satisfying $\frac{\pi_\beta}{1-\pi_\beta}=\frac{a_{jj}+\rho_{jj}-1}{n_{j}(b_{jj}-\log\beta)}$. \noindent {\bf B.} For $j\neq l$, the posterior $c_{jl}$'s will be affected by the size of the data sets ${\boldsymbol x}_j$ and ${\boldsymbol x}_l$ and the cumulative number of unique clusters $d_{ji}$ for which $\delta_{ji}={\bf e}_l$ and the unique clusters $d_{li}$ for which $\delta_{li}={\bf e}_j$. Letting $$ \rho_{jl}=\#\{d_{ji}:\delta_{ji}={\bf e}_l,1\le i\le n_j\}+ \#\{d_{li}:\delta_{li}={\bf e}_j,1\le i\le n_l\}, $$ we have \begin{align} \nonumber & \beta\sim{\cal B}e(c_{jl}+1, n_{j}+n_{l})\\ \nonumber & c_{jl}\,|\,\beta,\rho_{jl}\,\sim\,\pi_\beta\,\mathcal{G}(a_{jl}+\rho_{jl}, b_{jl}-\log\beta) + (1-\pi_\beta)\,\mathcal{G}(a_{jl}+\rho_{jl}-1, b_{jl}-\log\beta), \end{align} with the weights $\pi_\beta$ satisfying $\frac{\pi_\beta}{1-\pi_\beta}=\frac{a_{jl}+\rho_{jl}-1}{(n_{j}+n_{l})(b_{jl}-\log\beta)}$. Bear in mind that $\rho_{jl}=0$ is always a possibility, so we impose $a_{jl}>1$. \noindent{\bf 2. Sampling of the geometric probabilities for the PDGSBP model.} \noindent In this section we provide the full conditionals for the geometric probabilities $\lambda_{jl}$ under beta conjugate and transformed gamma nonconjugate priors. We let $$ S_{jl} =\sum_{i=1}^{n_j}{\cal I}(\delta_{ji}={\bf e}_l)\quad{\rm and}\quad S_{jl}'=\sum_{i=1}^{n_j}{\cal I}(\delta_{ji}={\bf e}_l)(N_{ji}-1). $$ \noindent {\bf A.} For the choice of prior $\lambda_{jl}\sim{\cal B}e(a_{jl},b_{jl})$, for $l=j$ we have $$ f(\lambda_{jj}|\cdots)={\cal B}e(\lambda_{jj}| a_{jj} + 2 S_{jj}, b_{jj} + S_{jj}'), $$ while for $l\neq j$ we have $$ f(\lambda_{jl}|\cdots) = {\cal B}e(\lambda_{jl}| a_{jl} + 2(S_{jl}+S_{lj}), b_{jl} + S_{jl}' + S_{lj}'). $$ \noindent {\bf B.} For the choice of prior $\lambda_{jl}\sim{\cal TG}(a_{jl},b_{jl})$, for $l=j$ we have $$ f(\lambda_{jj}|\cdots) \propto \lambda_{jj}^{2S_{jj} - a_{jj}-1}(1-\lambda_{jj})^{S_{jj}'+ a_{jj}-1}e^{-b_{jj}/\lambda_{jj}}\,{\cal I}(0<\lambda_{jj}<1). $$ To sample from this density, we introduce the positive auxiliary random variables $\nu_1$ and $\nu_2$ such that $$ f(\lambda_{jj},\nu_1,\nu_2|\cdots) \propto \lambda_{jj}^{2S_{jj} - a_{jj}-1}{\cal I}\left(\nu_1<(1-\lambda_{jj})^{S_{jj}'+ a_{jj}-1}\right) {\cal I}\left(\nu_2<e^{-b_{jj}/\lambda_{jj}}\right){\cal I}(0<\lambda_{jj}<1). $$ The full conditionals for $\nu_1,\nu_2$ are uniforms $$ f(\nu_1|\cdots) ={\cal U}\left(\nu_1|0, (1-\lambda_{jj})^{S_{jj}'+ a_{jj}-1}\right)\quad{\rm and}\quad f(\nu_2|\cdots) ={\cal U}\left(\nu_2|0, e^{-b_{jj}/\lambda_{jj}}\right), $$ and the new full conditional for $\lambda_{jj}$ becomes $$ f(\lambda_{jj}|\nu_1,\nu_2,\cdots) \propto \lambda_{jj}^{2S_{jj} - a_{jj}-1} \begin{cases} {\cal I}\left(-{b_{jj}\over\log\nu_2}<\lambda_{jj}<1-\nu_1^{1/L_{jj}}\right) & L_{jj}\ge 0 \\ {\cal I}\left(\max\left\{-{b_{jj}\over\log \nu_2},1-\nu_1^{1/L_{jj}}\right\}<\lambda_{jj}<1\right) & L_{jj}<0, \\ \end{cases} $$ where we have set $L_{jj}=S_{jj}'+ a_{jj}-1$.
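For concreteness, the following minimal Python sketch shows one such auxiliary-variable update for $\lambda_{jj}$. It is an illustration only (hypothetical variable names, and it assumes $L_{jj}\neq 0$ and $2S_{jj}-a_{jj}\neq 0$); it is not the implementation used for the simulations in this paper.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def update_lambda(lam, S, Sp, a, b):
    # S, Sp, a, b stand for S_jj, S'_jj, a_jj, b_jj; lam is the current lambda_jj.
    L = Sp + a - 1.0                                  # L_jj in the text
    nu1 = rng.uniform(0.0, (1.0 - lam) ** L)          # auxiliary uniforms
    nu2 = rng.uniform(0.0, np.exp(-b / lam))
    if L >= 0:                                        # interval implied by the indicators
        lo, hi = -b / np.log(nu2), 1.0 - nu1 ** (1.0 / L)
    else:
        lo, hi = max(-b / np.log(nu2), 1.0 - nu1 ** (1.0 / L)), 1.0
    c = 2.0 * S - a            # on (lo, hi) the density is proportional to lam^(c-1)
    u = rng.uniform()
    return (lo ** c + u * (hi ** c - lo ** c)) ** (1.0 / c)   # inverse-cdf draw

print(update_lambda(lam=0.5, S=12, Sp=30, a=2.0, b=1.0))      # toy sufficient statistics
\end{verbatim}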
We can sample from this density using the inverse cumulative distribution function technique. Also, for $l\neq j$ we apply the same embedded Gibbs sampling technique to the full conditional density $$ f(\lambda_{jl}|\cdots)\propto\lambda_{jl}^{2(S_{jl}+S_{lj})-a_{jl}-1}(1-\lambda_{jl})^{S_{jl}'+S_{lj}'+ a_{jl}-1}e^{-b_{jl}/\lambda_{jl}}\,{\cal I}(0<\lambda_{jl}<1). $$ \noindent{\bf References.} \begin{description} \item {\sc Bulla, P., Muliere, P. and Walker, S.G. (2009)}. A Bayesian nonparametric estimator of a multivariate survival function. {\sl Journal of Statistical Planning and Inference} \textbf{139}, 3639--3648. \item {\sc De Iorio, M., M\"uller, P., Rosner, G.L. and MacEachern, S.N. (2004)}. An ANOVA model for dependent random measures. {\sl Journal of the American Statistical Association} \textbf{99}, 205--215. \item {\sc Dunson, D.B. and Park, J.H. (2008)}. Kernel stick--breaking processes. {\sl Biometrika} \textbf{95}, 307--323. \item {\sc Ferguson, T.S. (1973)}. A Bayesian analysis of some nonparametric problems. {\sl Annals of Statistics} \textbf{1}, 209--230. \item {\sc Fuentes--Garcia, R., Mena, R.H. and Walker, S.G. (2009)}. A nonparametric dependent process for Bayesian regression. {\sl Statistics and Probability Letters} \textbf{79}, 1112--1119. \item {\sc Fuentes--Garcia, R., Mena, R.H. and Walker, S.G. (2010)}. A new Bayesian nonparametric mixture model. {\sl Communications in Statistics -- Simulation and Computation} \textbf{39}, 669--682. \item {\sc Griffin, J.E. and Steel, M.F.J. (2006)}. Order--based dependent Dirichlet processes. {\sl Journal of the American Statistical Association} \textbf{101}, 179--194. \item {\sc Griffin, J.E., Kolossiatis, M. and Steel, M.F.J. (2013)}. Comparing distributions by using dependent normalized random--measure mixtures. {\sl Journal of the Royal Statistical Society, Series B} \textbf{75}, 499--529. \item {\sc Hatjispyros, S.J., Nicoleris, T. and Walker, S.G. (2011)}. Dependent mixtures of Dirichlet processes. {\sl Computational Statistics and Data Analysis} \textbf{55}, 2011--2025. \item {\sc Hatjispyros, S.J., Nicoleris, T. and Walker, S.G. (2016a)}. Dependent random density functions with common atoms and pairwise dependence. {\sl Computational Statistics and Data Analysis} \textbf{101}, 236--249. \item {\sc Hatjispyros, S.J., Nicoleris, T. and Walker, S.G. (2016b)}. Bayesian nonparametric density estimation under length bias. {\sl Communications in Statistics}\\ DOI: 10.1080/03610918.2016.1263735 \item {\sc Kolossiatis, M., Griffin, J.E. and Steel, M.F.J. (2013)}. On Bayesian nonparametric modelling of two correlated distributions. {\sl Statistics and Computing} \textbf{23}, 1--15. \item {\sc Lijoi, A., Nipoti, B. and Pr\"unster, I. (2014a)}. Bayesian inference with dependent normalized completely random measures. {\sl Bernoulli} \textbf{20}, 1260--1291. \item {\sc Lijoi, A., Nipoti, B. and Pr\"unster, I. (2014b)}. Dependent mixture models: clustering and borrowing information. {\sl Computational Statistics and Data Analysis} \textbf{71}, 17--433. \item {\sc Lo, A.Y. (1984)}. On a class of Bayesian nonparametric estimates I. Density estimates. {\sl Annals of Statistics} \textbf{12}, 351--357. \item {\sc MacEachern, S.N. (1999)}. Dependent nonparametric processes. In {\sl ``Proceedings of the Section on Bayesian Statistical Science''} pp. 50--55. American Statistical Association. \item {\sc M\"uller, P., Quintana, F. and Rosner, G. (2004)}.
A method for combining inference across related nonparametric Bayesian models. {\sl Journal of the Royal Statistical Society, Series B} \textbf{66}, 735--749. \item {\sc Mena, R.H., Ruggiero, M. and Walker, S.G. (2011)}. Geometric stick--breaking processes for continuous--time Bayesian nonparametric modeling. {\sl Journal of Statistical Planning and Inference} \textbf{141} (9), 3217--3230. \item {\sc Sethuraman, J. (1994)}. A constructive definition of Dirichlet priors. {\sl Statistica Sinica} \textbf{4}, 639--650. \item {\sc Walker, S.G. (2007)}. Sampling the Dirichlet mixture model with slices. {\sl Communications in Statistics} \textbf{36}, 45--54. \item {\sc West, M. (1992)}. Hyperparameter estimation in Dirichlet process mixture models. {\sl Technical report} \textbf{92-A03}, Duke University, ISDS. \end{description} \end{document}
\begin{document} \author{S. Gov\thanks{Also with the Center for Technological Education Holon, 52 Golomb St., P.O.B 305, Holon 58102, Israel.} and S. Shtrikman\thanks{Also with the Department of Physics, University of California, San Diego, La Jolla, 92093 CA, USA.}\\The Department of Electronics, \\Weizmann Institute of Science,\\Rehovot 76100, Israel \and H. Thomas\\The Department of Physics and Astronomy,\\University of Basel,\\CH-4056 Basel, Switzerland} \title{Magnetic trapping of neutral particles: Classical and Quantum-mechanical study of a Ioffe-Pritchard type trap.} \date{} \maketitle \begin{abstract} Recently, we developed a method for calculating the \emph{lifetime} of a particle inside a magnetic trap with respect to spin flips, as a first step in our efforts to understand the quantum-mechanics of magnetic traps. The 1D toy model that was used in this study was physically \emph{unrealistic} because the magnetic field was \emph{not} curl-free. Here, we study, both classically and quantum-mechanically, the problem of a neutral particle with spin $S$, mass $m$ and magnetic moment $\mu$, moving in 3D in an inhomogeneous magnetic field corresponding to traps of the Ioffe-Pritchard, `clover-leaf' and `baseball' type. Defining by $\omega_{p}$, $\omega_{z}$ and $\omega_{r}$ the precessional, the axial and the lateral vibrational frequencies, respectively, of the particle in the adiabatic potential $V_{eff}=\mu\left| \mathbf{B} \right| $, we find classically the region in the $\left( \omega_{r} /\omega_{p}\right) $-$\left( \omega_{z}/\omega_{p}\right) $ plane where the particle is trapped. Quantum-mechanically, we study the problem of a spin-one particle in the same field. Treating $\omega_{r}/\omega_{p}$ and $\omega_{z}/\omega_{p}$ as small parameters for the perturbation from the adiabatic Hamiltonian, we derive a closed-form expression for the transition rate $1/T_{esc}$ of the particle from its trapped ground-state. In the extreme cases the expression for $1/T_{esc}$ reduces to \[ \dfrac{1}{T_{esc}}\simeq\left\{ \begin{array} [c]{c} 4\pi\omega_{r}\exp\left[ -\dfrac{2\omega_{p}}{\omega_{r}}\right] \text{; for }\omega_{p}\gg\omega_{r}\gg\omega_{z}\\ 8\sqrt{2\pi}\sqrt{\omega_{p}\omega_{i}}\exp\left[ -\dfrac{2\omega_{p}} {\omega_{i}}\right] \text{ ; for }\omega_{p}\gg\omega_{r}=\omega_{z} \equiv\omega_{i}\\ \sqrt{\dfrac{\pi}{2}}\omega_{r}\left( \dfrac{\omega_{z}}{\omega_{p}}\right) ^{3/2}\exp\left[ -\dfrac{2\omega_{p}}{\omega_{z}}\right] \text{; for } \omega_{p}\gg\omega_{z}\gg\omega_{r} \end{array} \right. . \] \end{abstract} \section{Introduction.\label{intro}} \subsection{Magnetic traps for neutral particles.\label{traps}} Recently there has been rapid progress in techniques for trapping samples of neutral atoms at elevated densities and extremely low temperatures. The development of magnetic and optical traps for atoms has proceeded in parallel in recent years, in order to attain higher densities and lower temperatures \cite{t1,t2,t3,t4,t6}. We should note here that traps for neutral particles have been around much longer than their realizations for neutral atoms might suggest, and the seminal papers for neutral particles trapping as applied to neutrons and plasmas date from the sixties and seventies. Many of these papers are referenced by the authors of Refs.\cite{t1,t2,t3}. In this paper we concentrate on the study of \emph{magnetic} traps. Such traps exploit the interaction of the magnetic moment of the atom with the inhomogeneous magnetic field to provide spatial confinement. 
Microscopic particles are not the only candidates for magnetic traps. In fact, a vivid demonstration of trapping large scale objects is the hovering magnetic top \cite{levitron,ucas,harrigan,patent}. This ingenious magnetic device, which hovers in mid-air for about 2 minutes, has been studied in the past few years by several authors \cite{edge,Berry,bounds,simon,dynamic,dynamic2}. \subsection{Qualitative description.\label{desc}} The physical mechanism underlying the operation of magnetic traps is the adiabatic principle. The common way to describe their operation is in terms of \emph{classical} mechanics: As the particle is released into the trap, its magnetic moment points antiparallel to the direction of the magnetic field. Inside the trap, the particle experiences translational oscillations with vibrational frequencies $\omega_{vib}$ which are small compared to its precession frequency $\omega_{prec}$. Under this condition the spin of the particle may be considered as experiencing a \emph{slowly} rotating magnetic field. Thus, the spin precesses around the \emph{local} direction of the magnetic field $\mathbf{B}$ (adiabatic approximation) and, on the average, its magnetic moment $\mathbf{\mu}$ points \emph{antiparallel} to the local magnetic field lines. Hence, the magnetic energy, which is normally given by $-\mathbf{\mu}\cdot\mathbf{B}$, is now given (for small precession angle) by $\mu\left| \mathbf{B}\right| $. Thus, the overall effective potential seen by the particle is \begin{equation} V_{eff}\simeq\mu\left| \mathbf{B}\right| . \label{energy} \end{equation} In the adiabatic approximation, the spin degree of freedom is rigidly coupled to the translational degrees of freedom, and is already incorporated in Eq.(\ref{energy}), such that the particle may be considered as having only translational degrees of freedom. When the strength of the magnetic field possesses a \emph{minimum}, the effective potential becomes attractive near that minimum, and the whole apparatus acts as a trap. As mentioned above, the adiabatic approximation holds whenever $\omega_{prec}\gg\omega_{vib}$. As $\omega_{prec}$ is inversely proportional to the spin, this inequality can be satisfied provided that the spin of the particle is sufficiently small. If, on the other hand, the spin of the particle is too large, it cannot respond fast enough to the changes of the direction of the magnetic field. In this limit $\omega_{prec}\ll\omega_{vib}$, the spin has to be considered as fixed in space and, according to Earnshaw's theorem \cite{earnshaw}, becomes unstable against \emph{translations}. Note also that $\omega_{prec}$ is proportional to the field $\left| \mathbf{B}\right| $. To prevent $\omega_{prec}$ from becoming too small, resulting in spin-flips (Majorana transitions), most magnetostatic traps include a \emph{bias} field, so that the effective potential $V_{eff}$ possesses a \emph{nonvanishing} minimum. \subsection{The purpose and structure of this paper.\label{purp}} The discussion of magnetic traps in the literature is, almost entirely, done in terms of \emph{classical} mechanics. In microscopic systems, however, quantum effects become important, giving rise to a finite lifetime of the particle within the trap. This requires a quantum-mechanical treatment \cite{quant}. An even more interesting issue is the understanding of how the classical and the quantum descriptions of a \emph{given} system are related.
It is important to note here that there are \emph{two} mechanisms by which the particle can escape from the trap: The first one is the usual tunneling of the particle from the trap, without a change of its spin state, to regions where the magnetic field decreases to zero. The time scale for this process can be evaluated by standard methods. The second way the particle can escape from the trap is by flipping its spin state (Majorana transitions). This process, which is different from the first one because there is no potential barrier, is the subject of this paper. As a first step in our efforts to understand the quantum-mechanics of magnetic traps, we recently developed a method for calculating the \emph{lifetime} of a particle inside a magnetic trap with respect to such a spin flip process \cite{life1d}. The toy model that was used in this study consisted of a particle with spin, having only a single translational degree of freedom, in the presence of a 1D inhomogeneous magnetic field. We found that the trapped state of the particle decays with a lifetime given by $\sim1/\left( \sqrt{K}\omega_{vib}\right) \exp\left( 2/K\right) $ where $K=\omega_{vib}/\omega_{prec}$, and where the result is valid for $K\ll1$. Though the field that was used in this model did trap the particle, it was not \emph{realistic} in the sense that it was not curl-free. Our next step was to study the case of a particle with spin, having \emph{two} translational degrees of freedom, in the presence of a physically more realistic trapping field that, in contradistinction to the toy model, \emph{is} curl-free \cite{life2d}. This model is reminiscent of a Ioffe-Pritchard trap \cite{prit,t2,ife}, but without the axial translational degree of freedom. Here we found that the lifetime is given by $\sim1/\omega_{vib}\exp\left( 2/K\right) $, which is similar to the result found in the 1D case. In the present paper we describe an analysis of a Ioffe-Pritchard type trap which \emph{includes} the axial translational degree of freedom. We neglect the effect of interactions between the particles in the trap, and we analyze the dynamics of a \emph{single} particle inside the trap. Unlike in our previous papers, where we studied the case of a spin $1/2$ particle, we treat here the case of a spin $1$ particle, both as an example to show the validity of our approach for higher spins, and also because it is more relevant in view of the recent developments in Bose-Einstein condensation experiments. The structure of this paper is as follows: In Sec.(\ref{def}) we start by defining the system we study, together with useful parameters that will be used throughout this paper. Next, we carry out a \emph{classical} analysis of the problem in Sec.(\ref{class}). Here, we find two stationary solutions for the particle inside the trap. One of them corresponds to a state whose spin is \emph{parallel} to the direction of the magnetic field whereas the other one corresponds to a state whose spin is \emph{antiparallel} to that direction. When considering the dynamical stability of these solutions, we find that only the \emph{antiparallel} stationary solution is stable, as expected from the discussion in Sec.(\ref{desc}) above. In Sec.(\ref{quant}) we reconsider the problem from a \emph{quantum-mechanical} point of view for a spin-one particle.
Here, we find states that refer to \emph{antiparallel} ($M=-1$, where $M$ is the magnetic quantum number) and \emph{orthogonal} ($M=0$) orientations of the spin, the first of these being bounded while the second one is unbounded. We argue that the third possible situation, in which the spin is \emph{parallel} ($M=+1$) to the direction of the field, has negligible coupling to the bound state, and therefore can be neglected. We show that the \emph{antiparallel} and \emph{orthogonal} states are \emph{coupled} due to the inhomogeneity of the field, and we calculate the transition rate from the bound state to the unbounded state. Finally, in Sec.(\ref{dis}) we compare the results of the classical analysis with those of the quantum analysis and comment on their implications for practical magnetic traps. \section{Description of the problem.\label{def}} We consider a particle of mass $m$, magnetic moment $\mu$ and intrinsic spin $S$ (aligned with $\mu$) moving in an inhomogeneous magnetic field $\mathbf{B}$ corresponding to traps of the Ioffe-Pritchard, `clover-leaf' and `baseball' type \cite{t2}, and given by \begin{align} \mathbf{B} & =\left[ B_{0}+\dfrac{1}{2}B^{\prime\prime}z^{2}-\dfrac{1} {4}B^{\prime\prime}\left( x^{2}+y^{2}\right) \right] \mathbf{\hat{z} }\label{d0}\\ & +\left( B^{\prime}-\dfrac{1}{2}B^{\prime\prime}z\right) x\mathbf{\hat {x}+}\left( -B^{\prime}-\dfrac{1}{2}B^{\prime\prime}z\right) y\mathbf{\hat {y}}\text{.}\nonumber \end{align} This field possesses a nonzero minimum of amplitude at the origin, which is the essential part of the trap. The Hamiltonian for this system is \begin{equation} H=\dfrac{\mathbf{P}^{2}}{2m}-\mathbf{\mu\cdot B} \label{d0.1} \end{equation} where $\mathbf{P}$ is the momentum of the particle. We define $\omega_{p}$ as the precessional frequency of the particle when it is at the origin $(x=0,y=0,z=0)$. Since at that point the magnetic field is $\mathbf{B=}B_{0}\mathbf{\hat{z}}$ we find that \begin{equation} \omega_{p}\equiv\dfrac{\mu B_{0}}{S}\text{.} \label{d1} \end{equation} Next, we define $\omega_{z}$ and $\omega_{r}$ as the small-amplitude axial and lateral vibrational frequencies of the particle when it is placed with \emph{antiparallel }spin into the adiabatic potential given by \[ V_{eff}=\mu\left| \mathbf{B}\right| \simeq\mu B_{0}\left( 1+\dfrac {B_{0}B^{\prime\prime}}{2B_{0}^{2}}z^{2}+\left( \dfrac{(B^{\prime})^{2} -\frac{1}{2}B_{0}B^{\prime\prime}}{2B_{0}^{2}}\right) r^{2}\right) +\mathcal{O}\left( x^{4},x^{2}y^{2},y^{4}\right) , \] from which we get \begin{align} \omega_{z} & \equiv\sqrt{\dfrac{\mu B^{\prime\prime}}{m}}\label{d2}\\ \omega_{r} & \equiv\sqrt{\dfrac{\mu\left[ (B^{\prime})^{2}-\frac{1}{2} B_{0}B^{\prime\prime}\right] }{mB_{0}}}\text{.}\nonumber \end{align} In what follows we assume that $(B^{\prime})^{2}-\frac{1}{2}B_{0} B^{\prime\prime}>0$ such that $\omega_{r}$ is real. We also define the ratios, \begin{align} K_{z} & \equiv\dfrac{\omega_{z}}{\omega_{p}}=\sqrt{\dfrac{B^{\prime\prime }S^{2}}{\mu mB_{0}^{2}}}\label{d3}\\ K_{r} & \equiv\dfrac{\omega_{r}}{\omega_{p}}=\sqrt{\dfrac{\left[ (B^{\prime})^{2}-\frac{1}{2}B_{0}B^{\prime\prime}\right] S^{2}}{\mu mB_{0}^{3}}}\nonumber \end{align} These parameters will be our `measure of adiabaticity'. It is clear that as $K_{z}$ and $K_{r}$ become smaller and smaller, the adiabatic approximation becomes more and more accurate. Note that when the bias field $B_{0}$ vanishes, both $K_{z}$ and $K_{r}$ become infinite, and the adiabatic approximation fails. 
We will show below that, under this condition, the system becomes \emph{unstable }against spin flips, which is in agreement with our discussion at the beginning. This shows that the introduction of the bias field $B_{0}$, is \emph{essential }to the operation of the trap with regard to spin-flips. \section{Classical analysis.\label{class}} \subsection{The stationary solutions.\label{stat}} We denote by $\mathbf{\hat{n}}$ a unit vector in the direction of the spin (and the magnetic moment). Thus, the equations of motion for the center of mass of the particle are \begin{align} m\dfrac{d^{2}x}{dt^{2}} & =\mu\dfrac{\partial}{\partial x}\left( \mathbf{\hat{n}\cdot B}\right) \label{c1}\\ m\dfrac{d^{2}y}{dt^{2}} & =\mu\dfrac{\partial}{\partial y}\left( \mathbf{\hat{n}\cdot B}\right) \nonumber\\ m\dfrac{d^{2}z}{dt^{2}} & =\mu\dfrac{\partial}{\partial z}\left( \mathbf{\hat{n}\cdot B}\right) \nonumber \end{align} and the evolution of its spin is determined by \begin{equation} S\dfrac{d\mathbf{\hat{n}}}{dt}=\mu\mathbf{\hat{n}\times B}\text{.} \label{c2} \end{equation} The two equilibrium solutions to Eqs.(\ref{c1}) and (\ref{c2}) are \begin{equation} \mathbf{\hat{n}}(t)=\mp\mathbf{\hat{z}} \label{c3} \end{equation} with \begin{align*} x(t) & =0\\ y(t) & =0\\ z\left( t\right) & =0 \end{align*} representing a motionless particle at the origin with its magnetic moment (and spin) pointing \emph{antiparallel} ($\mathbf{\hat{n}}(t)=-\mathbf{\hat{z}}$) to the direction of the field at that point and a similar solution but with the magnetic moment pointing \emph{parallel} to the direction of the field ($\mathbf{\hat{n}}(t)=+\mathbf{\hat{z}}$). \subsection{Stability of the solutions.} To check the stability of these solutions we now add first-order perturbations. We set \begin{align} \mathbf{\hat{n}(}t\mathbf{)} & =\mathbf{\mp\hat{z}+}\epsilon_{x} (t)\mathbf{\hat{x}+}\epsilon_{y}(t)\mathbf{\hat{y}}\label{c4}\\ x(t) & =0+\delta x(t)\nonumber\\ y(t) & =0+\delta y(t)\nonumber\\ z(t) & =0+\delta z(t)\nonumber \end{align} (note that, to first order, the perturbation $\delta\mathbf{\hat{n}} =\epsilon_{x}(t)\mathbf{\hat{x}+}\epsilon_{y}(t)\mathbf{\hat{y}}$ is \emph{orthogonal} to the vector $\mathbf{\hat{n}}$ for the stationary solution $\mathbf{\hat{n}}_{0}=\mathbf{\mp\hat{z}}$, since $\mathbf{\hat{n}}$ is a unit vector), substitute these into Eqs.(\ref{c1}) and (\ref{c2}), and retain only first-order terms. We find that the resulting equations for $\delta x(t)$, $\delta y(t)$, $\delta z\left( t\right) $, $\epsilon_{x}(t)$ and $\epsilon_{y}(t)$ are \begin{align} m\dfrac{d^{2}\delta x}{dt^{2}} & =\pm\dfrac{1}{2}\mu B^{\prime\prime}\delta x+\mu B^{\prime}\epsilon_{x}\label{c5}\\ m\dfrac{d^{2}\delta y}{dt^{2}} & =\pm\dfrac{1}{2}\mu B^{\prime\prime}\delta y-\mu B^{\prime}\epsilon_{y}\nonumber\\ m\dfrac{d^{2}\delta z}{dt^{2}} & =\mp\mu B^{\prime\prime}\delta z\nonumber\\ S\dfrac{d\epsilon_{x}}{dt} & =\mu B_{0}\epsilon_{y}\mp\mu B^{\prime}\delta y\nonumber\\ S\dfrac{d\epsilon_{y}}{dt} & =-\mu B_{0}\epsilon_{x}\mp\mu B^{\prime}\delta x.\nonumber \end{align} The motion of the $z$-coordinate is decoupled from the others. If $B^{\prime\prime}>0$, it is stable only when the upper sign is taken, corresponding to a spin \emph{antiparallel }to the direction of the field. It can be shown that when $B^{\prime\prime}<0$ then, even if the system is stable under axial vibrations (by choosing the lower sign), it cannot be stable as a whole. 
We therefore disregard the lower sign, and the equation for the $z$-coordinate for the rest of the derivation. The normal modes of the reduced system transform as the irreducible representations of the symmetry group. The 4-dimensional linear space spanned by the deviations $(\delta x,\delta y,\epsilon_{x},\epsilon_{y})$ from the stationary state carries the irreducible representations $\Gamma_{+}$ with characters $e^{-i\gamma}$ and $\Gamma_{-}$ with characters $e^{+i\gamma}$, and may thus be decomposed into the two 2-dimensional invariant subspaces transforming as $\Gamma_{+}$ and $\Gamma_{-}$, respectively. These subspaces are spanned by the circular position coordinates and precessional spin coordinates \begin{align} \Gamma_{+}:\; & (\rho_{+}=\delta x+i\delta y,\epsilon_{-}=\epsilon _{x}-i\epsilon_{y});\label{c5.1}\\ \Gamma_{-}:\; & (\rho_{-}=\delta x-i\delta y,\epsilon_{+}=\epsilon _{x}+i\epsilon_{y}). \label{c5.2} \end{align} Thus, the normal modes consist of a circular motion in the $(x,y)$-plane coupled to a precession of the spin vector in the \emph{opposite} sense. Indeed, after introducing the $(\rho_{\pm},\epsilon_{\mp})$-coordinates into Eqs.(\ref{c5}), this set of four equations decomposes into one pair of equations for $(\rho_{+},\epsilon_{-})$ and another pair for $(\rho _{-},\epsilon_{+})$. We now look for oscillatory (stable) solutions of these equations and set \begin{equation} \rho_{\pm}=\rho_{\pm,0}e^{-i\omega t},\quad\epsilon_{\pm}=\epsilon_{\pm ,0}e^{-i\omega t}. \label{rhoe} \end{equation} This yields the algebraic equations \begin{equation} \Gamma_{+}:\;\left( \begin{array} [c]{cc} \dfrac{1}{2}\mu B^{\prime\prime}+m\omega^{2} & \mu B^{\prime}\\ i\mu B^{\prime} & i\left( \omega S+\mu B_{0}\right) \end{array} \right) \cdot\left( \begin{array} [c]{l} \rho_{+,0}\\ \epsilon_{-,0} \end{array} \right) =\left( \begin{array} [c]{l} 0\\ 0 \end{array} \right) , \label{c8.1} \end{equation} $\allowbreak$ \begin{equation} \Gamma_{-}:\;\left( \begin{array} [c]{cc} \dfrac{1}{2}\mu B^{\prime\prime}+m\omega^{2} & \mu B^{\prime}\\ -i\mu B^{\prime} & i\left( \omega S-\mu B_{0}\right) \end{array} \right) \cdot\left( \begin{array} [c]{l} \rho_{-,0}\\ \epsilon_{+,0} \end{array} \right) =\left( \begin{array} [c]{l} 0\\ 0 \end{array} \right) . \label{c8.2} \end{equation} These equations have non-trivial solutions whenever the determinant of either of the two matrices vanishes. This yields the secular equations \begin{align} \Gamma_{+} & :\left( \frac{\omega}{\omega_{p}}\right) ^{3}+\left( \frac{\omega}{\omega_{p}}\right) ^{2}+\dfrac{1}{2}K_{z}^{2}\left( \frac{\omega}{\omega_{p}}\right) -K_{r}^{2}=0,\label{c11.1}\\ \Gamma_{-} & :\;\left( \frac{\omega}{\omega_{p}}\right) ^{3}-\left( \frac{\omega}{\omega_{p}}\right) ^{2}+\dfrac{1}{2}K_{z}^{2}\left( \frac{\omega}{\omega_{p}}\right) +K_{r}^{2}=0, \label{c11.2} \end{align} which determine the eigenfrequencies $\omega$ of the various modes. Since the reduced system has three degrees of freedom, we expect to have three normal modes. Indeed, when $\omega$ is a solution of the first equation, then $-\omega$ is a solution of the second equation. We define the mode frequencies in Eq.(\ref{c11.1}) to be positive (or, in the case of complex $\omega$, to have positive real part); the negative $\omega$-values are needed to construct real solutions. 
Then, the $\Gamma_{+}$-modes describe vibrational motions turning counter-clockwise coupled to spin precessions turning clockwise, i.e., opposite to the natural spin precession, and the $\Gamma_{-}$-modes describe vibrational motions turning clockwise coupled to spin precessions turning counter-clockwise, i.e., in the same sense as the natural spin precession. Stability requires that all three solutions of, say, Eq.(\ref{c11.2}) be \emph{real}. We note that at the edge of the stability region (and when $K_{r}\neq0$), two out of the three roots of Eq.(\ref{c11.2}) for $\omega$ become identical. In this case, the third order polynomial Eq.(\ref{c11.2}) takes the form $P\left( \omega\right) =\left( \omega-\omega_{1}\right) ^{2}\left( \omega-\omega_{2}\right) $, which satisfies $\left. dP/d\omega\right| _{\omega=\omega_{1}}=0$. The edge of the stability region is then found by simultaneously solving the equations $P\left( \omega\right) =0$ and $dP/d\omega=0$. The result is given in the form of the parametric curve in the $(K_{r}^{2},K_{z}^{2})$-plane \[ \left\{ \begin{array} [c]{c} K_{r}^{2}\left( t\right) =2t^{3}-t^{2}\\ K_{z}^{2}\left( t\right) =4t-6t^{2} \end{array} \right\} \text{ ; with }\dfrac{1}{2}<t<\dfrac{2}{3} \] which is shown in Fig.(\ref{fig1}). Note that eliminating $t$ from the second equation and substituting it into the first gives $K_{r}^{2}$ \emph{explicitly} in terms of $K_{z}^{2}$. \section{Quantum-mechanical analysis.\label{quant}} \subsection{The Hamiltonian and its diagonalized form.\label{ham}} In this section we consider the problem of a neutral particle with spin \emph{one} ($S=\hbar$) in a 3D inhomogeneous magnetic field from a quantum-mechanical point of view. Unlike the classical analysis, in which the derivation was valid for any value of the adiabaticity parameters $K_{r}$ and $K_{z}$, we concentrate here on the behavior of the system when $K_{r}$ and $K_{z}$ are \emph{small}. Note also that, quantum-mechanically, the magnetic moment $\mu$ and the spin $S$ of a particle are related by \[ \mu=\gamma S, \] where $\gamma$ is the gyromagnetic ratio of the particle. Setting $\mu=\gamma S$ and $S=\hbar$ in Eqs.(\ref{d3}) gives \begin{align*} K_{z} & =\dfrac{\omega_{z}}{\omega_{p}}=\sqrt{\dfrac{B^{\prime\prime}\hbar}{\gamma mB_{0}^{2}}}\\ K_{r} & =\dfrac{\omega_{r}}{\omega_{p}}=\sqrt{\dfrac{\left[ (B^{\prime})^{2}-\frac{1}{2}B_{0}B^{\prime\prime}\right] \hbar}{\gamma mB_{0}^{3}}}. \end{align*} Now, it is convenient to transform to cylindrical coordinates $(r,\phi,z)$ by setting $x=r\cos\phi$, $y=r\sin\phi$. We denote by $B$ the \emph{amplitude} of $\mathbf{B}$, by $\theta$ its direction with respect to the $z$-axis and by $\varphi$ the angle between the projection of $\mathbf{B}$ onto the $\left( x,y\right) $-plane and the $x$-axis. Thus, Eq.(\ref{d0}) is rewritten as \begin{equation} \mathbf{B}=B\left[ \sin\theta\cos\varphi\mathbf{\hat{x}+}\sin\theta\sin\varphi\mathbf{\hat{y}}+\cos\theta\mathbf{\hat{z}}\right] .
\label{h1} \end{equation} The approximate expressions for $B$, $\theta$ and $\varphi$ near the origin are given by \begin{align} B\left( r,\phi,z\right) & \simeq B_{0}\left( 1+\dfrac{B_{0} B^{\prime\prime}}{2B_{0}^{2}}z^{2}+\left( \dfrac{(B^{\prime})^{2}-\frac{1} {2}B_{0}B^{\prime\prime}}{2B_{0}^{2}}\right) r^{2}\right) \text{ }+\mathcal{O}\left( r^{4},z^{2}r^{2},z^{4}\right) \text{,}\label{h1.1}\\ \theta\left( r,\phi,z\right) & =\arctan\left( \dfrac{\sqrt{B_{x} ^{2}+B_{y}^{2}}}{B_{z}}\right) \simeq\dfrac{B^{\prime}r}{B_{0}} +\mathcal{O}\left( r^{2},rz,z^{2}\right) \text{,}\nonumber\\ \varphi\left( r,\phi,z\right) & =\arctan\left( \dfrac{B_{y}}{B_{x} }\right) \simeq\arctan\left( -\dfrac{y}{x}\left[ 1+\left( \frac {B^{\prime\prime}}{B^{\prime}}\right) z\right] \right) \simeq -\phi+\mathcal{O}\left( z\sin\left( 2\phi\right) \right) .\nonumber \end{align} Thus approximately, $B$ and $\theta$ depend only on $r$, whereas $\varphi$ depends only linearly on $\phi$. The time-independent Schr\"{o}dinger equation for this system is \begin{equation} \left[ -\frac{\hbar^{2}}{2m}\nabla^{2}-\mu B\left( \sin\theta\cos\varphi \hat{s}_{x}\mathbf{+}\sin\theta\sin\varphi\hat{s}_{y}+\cos\theta\hat{s} _{z}\right) \right] \Phi(r,\phi,z)=E\Phi(r,\phi,z) \label{h5} \end{equation} where $\hat{s}_{x}$, $\hat{s}_{y}$ and $\hat{s}_{z}$ are the spin one matrices, given by \[ \begin{array} [c]{ccc} \hat{s}_{x}=\dfrac{1}{\sqrt{2}}\left( \begin{array} [c]{lll} 0 & 1 & 0\\ 1 & 0 & 1\\ 0 & 1 & 0 \end{array} \right) & \hat{s}_{y}=\dfrac{1}{\sqrt{2}}\left( \begin{array} [c]{lll} 0 & -i & 0\\ i & 0 & -i\\ 0 & i & 0 \end{array} \right) & \hat{s}_{z}=\left( \begin{array} [c]{lll} 1 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & -1 \end{array} \right) , \end{array} \] $E$ is the eigenenergy, and $\Phi$ is the three-components spinor \begin{equation} \Phi=\left( \begin{array} [c]{c} \Phi_{+}(r,\phi,z)\\ \Phi_{0}(r,\phi,z)\\ \Phi_{-}(r,\phi,z) \end{array} \right) . \label{h5.1} \end{equation} In matrix form Eq.(\ref{h5}) becomes \begin{equation} \left( H_{K}+H_{M}\right) \left( \begin{array} [c]{c} \Phi_{+}(r,\phi,z)\\ \Phi_{0}(r,\phi,z)\\ \Phi_{-}(r,\phi,z) \end{array} \right) =E\left( \begin{array} [c]{c} \Phi_{+}(r,\phi,z)\\ \Phi_{0}(r,\phi,z)\\ \Phi_{-}(r,\phi,z) \end{array} \right) , \label{h6.0} \end{equation} where $H_{K}$ and $H_{M}$, given by \begin{align} H_{K} & \equiv-\dfrac{\hbar^{2}}{2m}\nabla^{2}\label{h6.1}\\ H_{M} & \equiv-\mu B\left( \begin{array} [c]{lll} \cos\theta & \dfrac{1}{\sqrt{2}}\sin\theta e^{-i\varphi} & 0\\ \dfrac{1}{\sqrt{2}}\sin\theta e^{i\varphi} & 0 & \dfrac{1}{\sqrt{2}}\sin\theta e^{-i\varphi}\\ 0 & \dfrac{1}{\sqrt{2}}\sin\theta e^{i\varphi} & -\cos\theta \end{array} \right) ,\nonumber \end{align} are the kinetic part and the magnetic part of the Hamiltonian $H$, respectively. In order to diagonalize the magnetic part of the Hamiltonian, we make a local \emph{passive} transformation of coordinates on the wavefunction such that the spinor is expressed in a new coordinate system whose $\mathbf{\hat{z}}$ axis coincides with the direction of the magnetic field at the point $\left( r,\phi,z\right) $. We denote by $R\left( r,\phi,z\right) $ the required transformation and set $\Psi=R\Phi$. Thus, $\Psi$ represent \emph{the same} direction of the spin as before the transformation but using the \emph{new} coordinate system. The Hamiltonian in this newly defined system is given by $RHR^{-1}$. 
We represent the rotation matrix $R$ in terms of the three Euler angles: First, we perform a rotation through an angle $\varphi$ around the $\hat{z}$ axis. Second, we make a rotation through an angle $\theta$ around the \emph{new} position of the $\hat{y}$ axis. At the end of this process the new $\hat{z}$ axis coincide with the direction of the magnetic field. Now the value of the last Euler angle, which is a rotation around the new $\hat{z}$ axis, has no effect on this axis. For simplicity we choose this angle to be $0$. Thus, the representation of the complete transformation for spin-one particle is given by \cite{rot} \[ R=\exp\left[ i\theta\hat{s}_{y}\right] \exp\left[ i\varphi\hat{s} _{z}\right] , \] while its inverse is given by \[ R^{-1}=\exp\left[ -i\varphi\hat{s}_{z}\right] \exp\left[ -i\theta\hat {s}_{y}\right] . \] It is easily verified that the transformation indeed diagonalizes the magnetic part of the Hamiltonian as \[ RH_{M}R^{-1}=-\mu B\hat{s}_{z}. \] As for the kinetic part we show at the Appendix that \begin{equation} RH_{K}R^{-1}=-\dfrac{\hbar^{2}}{2m}\left[ \begin{array} [c]{c} -i\nabla^{2}\varphi\left( \cos\theta\hat{s}_{z}-\sin\theta\hat{s}_{x}\right) -\left| \nabla\varphi\right| ^{2}\left( \cos\theta\hat{s}_{z}-\sin \theta\hat{s}_{x}\right) ^{2}\\ -2i\left( \cos\theta\hat{s}_{z}-\sin\theta\hat{s}_{x}\right) \nabla \varphi\cdot\left( -i\mathbf{\nabla}\theta\hat{s}_{y}+\mathbf{\nabla}\right) \\ -i\nabla^{2}\theta\hat{s}_{y}-\left| \mathbf{\nabla}\theta\right| ^{2} \hat{s}_{y}^{2}-2i\hat{s}_{y}\mathbf{\nabla}\theta\cdot\nabla+\nabla^{2} \end{array} \right] . \label{trans} \end{equation} Since we are interested in the behavior near the origin, we substitute the approximate expressions Eqs.(\ref{h1.1}) into Eq.(\ref{trans}), replace $\cos\theta$ by $1$ and $\sin\theta$ by $0$ (since $\theta$ changes very slowly as compared to the extent over which $\mu B$ changes significantly), and neglect the terms that are proportional to $\nabla^{2}\theta$ and $\left| \mathbf{\nabla}\theta\right| ^{2}$ (being of higher order with respect to $\mathbf{\nabla}\theta\cdot\nabla$ ). This gives \[ RH_{K}R^{-1}\simeq-\dfrac{\hbar^{2}}{2m}\left[ \nabla^{2}-2i\hat{s}_{y} \dfrac{B^{\prime}}{B_{0}}\dfrac{\partial}{\partial r}+2i\dfrac{1}{r^{2}} \hat{s}_{z}\dfrac{\partial}{\partial\phi}-\dfrac{1}{r^{2}}\hat{s}_{z} ^{2}\right] \text{.} \] Thus, the Hamiltonian of the system in the rotated frame may be written approximately as \begin{equation} H\simeq H_{diag}+H_{int}\text{,} \label{h7} \end{equation} where \begin{align} H_{diag} & =-\dfrac{\hbar^{2}}{2m}\left[ \mathbf{\nabla}^{2}+\dfrac {2i}{r^{2}}\hat{s}_{z}\dfrac{\partial}{\partial\phi}-\dfrac{1}{r^{2}}\hat {s}_{z}^{2}\right] -\mu B\hat{s}_{z}\label{h7.01}\\ H_{int} & =i\dfrac{\hbar^{2}B^{\prime}}{mB_{0}}\hat{s}_{y}\dfrac{\partial }{\partial r}.\nonumber \end{align} The first part of the Hamiltonian $H_{diag}$ is diagonal with respect to the spin degrees of freedom. It contains the kinetic part $\sim\mathbf{\nabla} ^{2}$, a term whose form is $-\mu B\hat{s}_{z}$ which is identified as the adiabatic effective potential, and the terms $\sim1/r^{2},ir^{-2}\hat{s} _{z}\partial/\partial\phi$ which appear due to the rotation. The second part of the Hamiltonian $H_{int}$ contains only non-diagonal components. Generally, $H_{int}$ should contain terms which couple a spin state $M$ to the two nearest spin states $M\pm1$ and to the two next-to-nearest spin states $M\pm2$ (see the Appendix). 
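As a consistency check on this transformation, the following short Python sketch (added here for illustration, with arbitrary test values for $\mu$, $B$, $\theta$ and $\varphi$; it is not part of the derivation) verifies numerically that $RH_{M}R^{-1}=-\mu B\hat{s}_{z}$ for the spin-one matrices given above.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

s = 1 / np.sqrt(2)
sx = np.array([[0, s, 0], [s, 0, s], [0, s, 0]], dtype=complex)       # spin-1 s_x
sy = np.array([[0, -1j*s, 0], [1j*s, 0, -1j*s], [0, 1j*s, 0]])        # spin-1 s_y
sz = np.diag([1.0, 0.0, -1.0]).astype(complex)                        # spin-1 s_z

mu, B, theta, phi = 1.3, 0.7, 0.4, 1.1        # arbitrary test values
n_dot_s = (np.sin(theta)*np.cos(phi)*sx + np.sin(theta)*np.sin(phi)*sy
           + np.cos(theta)*sz)
H_M = -mu * B * n_dot_s                       # magnetic part of the Hamiltonian

R = expm(1j*theta*sy) @ expm(1j*phi*sz)       # rotation defined in the text
print(np.allclose(R @ H_M @ np.linalg.inv(R), -mu*B*sz))   # expected: True
\end{verbatim}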
In the limit where $K_{z}$ and $K_{r}$ are small, we see that the coupling of the state with spin projection value $M$ to the states $M\pm2$ is \emph{negligible} compared to its coupling to the $M\pm1$ states. We proceed to find the eigenstates of $H_{diag}$. \subsection{Stationary states of $H_{diag}$.\label{diag}} Since $H_{diag}$ is diagonal, the three spin states of the wavefunction are decoupled. We seek a solution for the spin-down ($M=-1$) state \begin{equation} \Psi_{-}=\left( \begin{array} [c]{c} 0\\ 0\\ \psi_{-}(r,\phi,z) \end{array} \right) \text{ ; }E=E_{-}, \label{h8.01} \end{equation} and for the $M=0$ state \begin{equation} \Psi_{0}=\left( \begin{array} [c]{c} 0\\ \psi_{0}(r,\phi,z)\\ 0 \end{array} \right) \text{ ; }E=E_{0}. \label{h8.02} \end{equation} We do not consider the \emph{spin-up} ($M=+1$) state, since its coupling to the trapped spin-down state is negligible, as explained above. The equation for the non-vanishing component of the spin-down state reads \begin{equation} \left\{ -\dfrac{\hbar^{2}}{2m}\left[ \mathbf{\nabla}^{2}-\dfrac{2i}{r^{2} }\dfrac{\partial}{\partial\phi}-\dfrac{1}{r^{2}}\right] +\mu B\right\} \psi_{-}=E_{-}\psi_{-}, \label{h8.1} \end{equation} whereas the equation for the non-vanishing component of the spin-zero state is \begin{equation} -\dfrac{\hbar^{2}}{2m}\mathbf{\nabla}^{2}\psi_{0}=E_{0}\psi_{0}. \label{h8.2} \end{equation} The solutions of these equations is outlined in the next two subsections. \subsubsection{Stationary spin-down ($M=-1$) states.\label{down}} Eq.(\ref{h8.1}) represents a particle in a cylindrically symmetric \emph{attractive} 3D potential. If the extent of the wave function is small enough we can expand $B$ in Eq.(\ref{h1.1}) to second order in $r$ and $z$ as given by Eq.(\ref{h1.1}), and apply the well-known solution of the harmonic oscillator \cite{sho} in 3D. Under this approximation, Eq.(\ref{h8.1}) becomes \begin{equation} \left\{ \begin{array} [c]{c} -\dfrac{\hbar^{2}}{2m}\left( \dfrac{\partial^{2}}{\partial z^{2}} +\dfrac{\partial^{2}}{\partial r^{2}}+\dfrac{1}{r}\dfrac{\partial}{\partial r}-\dfrac{1}{r^{2}}\left( i\dfrac{\partial}{\partial\phi}+1\right) ^{2}\right) \\ +\left[ \dfrac{m\omega_{z}^{2}}{2}z^{2}+\dfrac{m\omega_{r}^{2}}{2} r^{2}\right] \end{array} \right\} \psi_{-}=\left( E_{-}-\mu B_{0}\right) \psi_{-}.\label{down0.1} \end{equation} The $z$-coordinate decouples and we assume that it is in the ground-state. We thus seek a solution whose form is \begin{equation} \psi_{-}(r,z,\phi)=f(r)e^{i\nu\phi}\left( \dfrac{m\omega_{z}}{\pi\hbar }\right) ^{1/4}\exp\left[ -\dfrac{m\omega_{z}z^{2}}{2\hbar}\right] ,\label{down0.2} \end{equation} with $\nu$ an integer. The equation satisfied by $f\left( r\right) $ is then \begin{equation} -\dfrac{\hbar^{2}}{2m}\left[ \dfrac{d^{2}f}{dr^{2}}+\dfrac{1}{r}\dfrac {df}{dr}-\dfrac{f}{r^{2}}\left( \nu-1\right) ^{2}\right] +\dfrac {m\omega_{r}^{2}r^{2}}{2}f=\left( E_{-}-\mu B_{0}-\dfrac{1}{2}\hbar\omega _{z}\right) f.\label{down0.3} \end{equation} This is an eigenvalue problem for $f$. The smallest eigenvalue for this equation is obtained by setting \[ \nu=1, \] for which the eigenfunction $f$ is \[ f\left( r\right) =De^{i\phi}\exp\left[ -\dfrac{m\omega_{r}}{2\hbar} r^{2}\right] . 
\] Thus, under the harmonic-oscillator approximation, the normalized down-part of the spin-down state is given by \begin{equation} \psi_{-}=\sqrt{\dfrac{m\omega_{r}}{\pi\hbar}}\text{ }\left( \dfrac {m\omega_{z}}{\pi\hbar}\right) ^{1/4}e^{i\phi}\exp\left[ -\dfrac{m\omega _{r}}{2\hbar}r^{2}\right] \exp\left[ -\dfrac{m\omega_{z}}{2\hbar} z^{2}\right] .\label{down1} \end{equation} Note that the extent of this wave function over which it changes appreciably is given by \begin{equation} \Delta z\sim\sqrt{K_{z}}\sqrt{\dfrac{B_{0}}{B^{\prime\prime}}}\text{ ; }\Delta r\sim\sqrt{K_{r}}\sqrt{\dfrac{B_{0}}{B^{\prime\prime}}}\label{down2} \end{equation} whereas the extent over which $\mu B$ changes significantly (see Eq.(\ref{h1.1})) is \begin{equation} \Delta r_{\mu B}\sim\sqrt{\dfrac{B_{0}}{B^{\prime\prime}}}.\label{down3} \end{equation} Thus, the ratio between these two length scales is \begin{equation} \dfrac{\Delta z}{\Delta r_{\mu B}}\sim\sqrt{K_{z}}\text{ ; }\dfrac{\Delta r}{\Delta r_{\mu B}}\sim\sqrt{K_{r}}.\label{down4} \end{equation} We therefore conclude that when $K_{z}$ and $K_{r}$ are small enough, the harmonic approximation is justified. The wave function $\psi_{-}$, given by Eq.(\ref{down1}), then represents the lowest possible \emph{bound} state for this system. This state corresponds to a \emph{trapped} particle. The energy of this state is \begin{equation} E_{-}=\mu B_{0}+\dfrac{1}{2}\hbar\omega_{z}+\hbar\omega_{r}=\mu B_{0}\left( 1+K_{z}+2K_{r}\right) \simeq\mu B_{0}, \label{down5} \end{equation} while its full spinor representation is \begin{equation} \Psi_{-}=\left( \begin{array} [c]{c} 0\\ 0\\ \sqrt{\dfrac{m\omega_{r}}{\pi\hbar}}\text{ }\left( \dfrac{m\omega_{z}} {\pi\hbar}\right) ^{1/4}e^{i\phi}\exp\left[ -\dfrac{m\omega_{r}}{2\hbar }r^{2}\right] \exp\left[ -\dfrac{m\omega_{z}}{2\hbar}z^{2}\right] \end{array} \right) . \label{down6} \end{equation} \subsubsection{Stationary ($M=0$) states.\label{up}} Eq.(\ref{h8.2}) describes a free particle. It corresponds to an unbounded state representing an \emph{untrapped} particle. In this case there is a continuum of states, each with its own energy. As we are interested in non-radiative decay, we focus on finding a solution with an energy which is \emph{equal} to the energy found for the trapped state, that is \begin{equation} E_{0}=E_{-}\simeq\mu B_{0}. \label{up0} \end{equation} We seek a solution in the form \[ \psi_{0}(r,\phi)=g(r)\exp\left[ ik_{z}z+i\beta\phi\right] , \] where $\beta$ is an integer. Substituting this, together with Eq.(\ref{up0}) into Eq.(\ref{h8.2}) gives \[ \left[ \dfrac{d^{2}}{dr^{2}}+\dfrac{1}{r}\dfrac{d}{dr}+\left( k_{r} ^{2}-\dfrac{\beta^{2}}{r^{2}}\right) \right] g=0, \] where \[ k_{r}^{2}+k_{z}^{2}=\dfrac{2\mu mB_{0}}{\hbar^{2}}\text{.} \] The non-singular solution for $g$ is \[ g\left( r\right) =J_{\beta}\left( k_{r}r\right) , \] where $J_{\beta}\left( x\right) $ is the Bessel function of the first kind of order $\beta$. For what follows, it is convenient to introduce an angle $\gamma$ such that \begin{align*} k_{r} & =k_{0}\sin\gamma\\ k_{z} & =k_{0}\cos\gamma.\\ k_{0} & \equiv\sqrt{\dfrac{2\mu mB_{0}}{\hbar^{2}}} \end{align*} with $0<\gamma<\pi$. We note that $H_{int}$ does not operate on the $\phi$ coordinate. Hence, in order to have a non-vanishing matrix element between the zero-state and the down-state, they must have the \emph{same} $\phi$-dependence. 
Thus, $\beta =\nu=1$, and as a result, the state with angle $\gamma$ is given by \begin{equation} \psi_{0}^{\gamma}(r,\phi,z)=C_{\gamma}J_{1}\left( k_{0}r\sin\gamma\right) \exp\left[ i\left( \phi+k_{0}z\cos\gamma\right) \right] \text{.} \label{up0.2} \end{equation} with \begin{equation} \Psi_{0}^{\gamma}=\left( \begin{array} [c]{c} 0\\ C_{\gamma}J_{1}\left( k_{0}r\sin\gamma\right) \exp\left[ i\left( \phi+k_{0}z\cos\gamma\right) \right] \\ 0 \end{array} \right) ,\text{ } \label{up0.3} \end{equation} where $C_{\gamma}$ is the normalization constant which is chosen to be real, and depends on $\gamma$. To evaluate $C_{\gamma}$ we temporarily introduce boundary conditions under which the wavefunction $\Psi_{0}^{\gamma}$ vanishes at $r=R$, and satisfies periodic boundary conditions along $z$ with period $Z$. Thus, normalization of $\Psi_{0}^{\gamma}$ gives \[ \int_{-Z/2}^{Z/2}dz\int_{0}^{2\pi}d\phi\int_{0}^{R}rdr\left| \Psi_{0} ^{\gamma}\right| ^{2}=C_{\gamma}^{2}Z2\pi\frac{1}{2}R^{2}[J_{2}(k_{0} R\sin\gamma)]^{2}=1, \] such that \[ C_{\gamma}=\dfrac{1}{\sqrt{Z\pi}R\left| J_{2}(k_{0}R\sin\gamma)\right| }, \] where we have used \cite{grad} \[ \int_{0}^{R}[J_{1}(kr)]^{2}rdr=\frac{1}{2}R^{2}[J_{2}(kR)]^{2}. \] In the asymptotic region $kR\gtrsim1$, the function $J_{2}(kR)$ takes the values $\pm\sqrt{2/(\pi kR)}$ at the zeros of $J_{1}(kR)$. Thus, \[ C_{\gamma}=\dfrac{1}{\sqrt{Z\pi}R\left| J_{2}(k_{0}R\sin\gamma)\right| }\simeq\dfrac{\sqrt{k_{0}\sin\gamma}}{\sqrt{2ZR}} \] and hence \begin{equation} C_{\gamma}^{2}\simeq\dfrac{k_{0}\sin\gamma}{2ZR}. \label{cg2} \end{equation} \subsection{The transition rate.\label{time}} We calculate the transition rate from the bound state given by Eq.(\ref{down6} ) to the unbounded state Eq.(\ref{up0.3}), according to Fermi's golden rule \cite{fermi}. Thus, the infinitesimal decay time from the trapped state to the untrapped state defined by $\gamma$ is given by \begin{equation} d\left( \dfrac{1}{T_{esc}^{\gamma}}\right) =\dfrac{2\pi}{\hbar}\left| H_{i}^{\gamma}\right| ^{2}\rho_{\gamma}(E_{0})d\gamma,\label{t1} \end{equation} where $\rho_{\gamma}(E)d\gamma$ is the density $dN/dE$ of states $\psi _{0}^{\gamma}$ with an angle between $\gamma$ and $\gamma+d\gamma$ and energy between $E_{0}$ and $E_{0}+dE$, and $H_{i}^{\gamma}$ is the matrix element of $H_{int}$ between the bound state and the unbounded state defined by $\gamma$ and $E_{0}$. To find $\rho_{\gamma}(E_{0})$ we note that the final state is defined by the two quantized $k$-vectors $k_{r}=k_{0}\sin\gamma$ and $k_{z}=k_{0}\cos\gamma$. The possible $k_{z}$ values are equally-spaced with lattice constant $dk_{z}=2\pi/Z$. Since the Bessel function $J_{1}(k_{r}r)$ is very close to its asymptotic behavior at large arguments \[ J_{1}(k_{r}R)\simeq\sqrt{\frac{2}{\pi k_{r}R}}\,\cos\left( k_{r}R-\frac{3\pi }{4}\right) , \] the $k_{r}$ are also very much equally-spaced (even when $k_{r}$ is small, it is still a good approximation) with lattice constant $dk_{r}\simeq\pi/R$. Thus, in the $\left( k_{r},k_{z}\right) $-space, the allowed $k$ vectors form a regular lattice, and the number of states $dN$ in the volume element $k_{0}dk_{0}d\gamma$ is given by \[ dN=\dfrac{k_{0}d\gamma dk_{0}}{\left( \dfrac{\pi}{R}\right) \left( \dfrac{2\pi}{Z}\right) }. \] With $dE=\hbar^{2}k_{0}dk_{0}/m$, this gives the density of states \[ \rho_{\gamma}(E)d\gamma=\dfrac{dN}{dE}\simeq\dfrac{mZR}{2\pi^{2}\hbar^{2} }d\gamma. 
\] This, together with Eq.(\ref{cg2}) yields \begin{equation} C_{\gamma}^{2}\rho_{\gamma}(E)\simeq\dfrac{mk_{0}\sin\gamma}{4\pi^{2}\hbar ^{2}}.\label{c2rho} \end{equation} Evaluation of $H_{i}^{\gamma}$ gives \begin{align*} H_{i}^{\gamma} & =\int_{-\infty}^{\infty}dz {\displaystyle\int\limits_{0}^{\infty}} rdr {\displaystyle\int\limits_{0}^{2\pi}} d\phi\Psi_{0}^{\dagger\gamma}H_{int}\Psi_{-}\\ & =2\pi\sqrt{\dfrac{m\omega_{r}}{\pi\hbar}}\text{ }\left( \dfrac{m\omega _{z}}{\pi\hbar}\right) ^{1/4}\left[ i\dfrac{\hbar^{2}B^{\prime}}{mB_{0} }\dfrac{\left( -i\right) }{\sqrt{2}}\right] \left( -\dfrac{m\omega_{r} }{\hbar}\right) C_{\gamma}\\ \times & {\displaystyle\int\limits_{0}^{\infty}} drr^{2}J_{1}\left( k_{0}r\sin\gamma\right) \exp\left[ -\dfrac{m\omega_{r} }{2\hbar}r^{2}\right] \int_{-\infty}^{\infty}dz\exp\left[ -\dfrac {m\omega_{z}}{2\hbar}z^{2}-ik_{0}z\cos\gamma\right] . \end{align*} and hence \begin{align} \left| H_{i}^{\gamma}\right| ^{2} & =4\pi^{2}\dfrac{m\omega_{r}}{\pi\hbar }\sqrt{\dfrac{m\omega_{z}}{\pi\hbar}}\left( \dfrac{\hbar^{2}B^{\prime}} {\sqrt{2}mB_{0}}\right) ^{2}\left( \dfrac{m\omega_{r}}{\hbar}\right) ^{2}C_{\gamma}^{2}\dfrac{2\pi\hbar}{m\omega_{z}}\left( \dfrac{k_{0}\hbar ^{2}\sin\gamma}{m^{2}\omega_{r}^{2}}\right) ^{2}\label{hi2}\\ & \times\exp\left[ -\frac{\hbar k_{0}^{2}\cos^{2}\gamma}{m\omega_{z}} -\dfrac{k_{0}^{2}\hbar\sin^{2}\gamma}{m\omega_{r}}\right] .\nonumber \end{align} Substituting Eqs.(\ref{hi2}) and (\ref{c2rho}) into Eq.(\ref{t1}) and integrating over $\gamma$ from $0$ to $\pi$ gives \begin{equation} \dfrac{1}{T_{esc}}=2\sqrt{2\pi}\frac{\left( 2\omega_{r}^{2}+\omega_{z} ^{2}\right) \sqrt{\omega_{p}}}{\omega_{r}\sqrt{\omega_{z}}}I\left( \dfrac{2\omega_{p}}{\omega_{r}},\dfrac{2\omega_{p}}{\omega_{z}}\right) \label{1/TT} \end{equation} with \begin{equation} I\left( a,b\right) \equiv\int_{0}^{\pi}d\gamma\sin^{3}\gamma\exp\left[ -a\sin^{2}\gamma-b\cos^{2}\gamma\right] ,\label{I} \end{equation} where we have substituted our previous definitions for $\omega_{p}$, $\omega_{r}$ and $\omega_{z}$. The integral $I\left( a,b\right) $ may be expressed in terms of the simpler integral \begin{align} I_{0}\left( a,b\right) & \equiv\int_{0}^{\pi}d\gamma\sin\gamma\exp\left[ -a\sin^{2}\gamma-b\cos^{2}\gamma\right] \label{I0}\\ & =2\exp\left( -a\right) \int_{0}^{1}\exp\left[ \left( a-b\right) t^{2}\right] dt\nonumber \end{align} by \begin{equation} I\left( a,b\right) =-\dfrac{\partial I_{0}\left( a,b\right) }{\partial a}.\label{I2} \end{equation} In the isotropic case where $\omega_{r}=\omega_{z}\equiv\omega_{i}$, the integral in Eq.(\ref{1/TT}) can be evaluated analytically with the result that \[ \dfrac{1}{T_{esc}}=8\sqrt{2\pi}\sqrt{\omega_{p}\omega_{i}}\exp\left[ -\frac{2\omega_{p}}{\omega_{i}}\right] . \] In the extreme cases $\omega_{r}\gg\omega_{z}$ and $\omega_{z}\gg\omega_{r}$ we obtain from the asymptotic behavior of the error function of real and imaginary argument \cite{abram} \begin{equation} I_{0}\left( a,b\right) \simeq\left\{ \begin{array} [c]{cc} \sqrt{\dfrac{\pi}{b}}\exp\left( -a\right) & \text{; }b\gg a\\ \dfrac{1}{a}\exp\left( -b\right) & \text{; }b\ll a \end{array} \right. ,\label{i0app} \end{equation} and hence \begin{equation} I\left( a,b\right) =-\dfrac{\partial I_{0}\left( a,b\right) }{\partial a}\simeq\left\{ \begin{array} [c]{cc} \sqrt{\dfrac{\pi}{b}}\exp\left( -a\right) & \text{; }b\gg a\\ \dfrac{1}{a^{2}}\exp\left( -b\right) & \text{; }b\ll a \end{array} \right. 
.\label{i2} \end{equation} Substituting Eqs.(\ref{i0app}) and (\ref{i2}) into Eq.(\ref{1/TT}) gives \begin{equation} \dfrac{1}{T_{esc}}\simeq\left\{ \begin{array} [c]{c} 4\pi\omega_{r}\exp\left[ -\dfrac{2\omega_{p}}{\omega_{r}}\right] \text{; for }\omega_{p}\gg\omega_{r}\gg\omega_{z}\\ 8\sqrt{2\pi}\sqrt{\omega_{p}\omega_{i}}\exp\left[ -\dfrac{2\omega_{p}} {\omega_{i}}\right] \text{ ; for }\omega_{p}\gg\omega_{r}=\omega_{z} \equiv\omega_{i}\\ \sqrt{\dfrac{\pi}{2}}\omega_{r}\left( \dfrac{\omega_{z}}{\omega_{p}}\right) ^{3/2}\exp\left[ -\dfrac{2\omega_{p}}{\omega_{z}}\right] \text{; for } \omega_{p}\gg\omega_{z}\gg\omega_{r} \end{array} \right. ,\label{final} \end{equation} with the conclusion that the transition rate is dominated by the \emph{larger} of the two frequencies $\omega_{z}$ and $\omega_{r}$. \section{Discussion.\label{dis}} The problem we have studied has three important time scales. The shortest time scale is $T_{prec}$, which is the time required for \emph{one} precession of the spin around the axis of the local magnetic field. The intermediate time scales are $T_{r,z}=T_{prec}/K_{r,z}$, which are the times required to complete one cycle of the center of mass around the center of the trap in the lateral and axial directions, respectively. These two time scales appear both in the classical and the quantum-mechanical analysis. The longest time scale (provided that $K_{r}$ and $K_{z}$ are small) is $T_{esc}$, which is not present in the classical problem; it is the time it takes for the particle to escape from the trap. Whereas the classical analysis yields an upper bound for $K_{z}$ and $K_{r}$ for trapping to occur, no such sharp bound exists in the quantum-mechanical analysis. Nevertheless it is interesting to compare the classical bound with the values of $K_{z}$ and $K_{r}$ for which the exponent in the expression for the quantum-mechanical lifetime becomes equal to $1$: according to Fig.(\ref{fig1}), we find that $K_{z,\max}=1/\sqrt{2}=0.707$ when $K_{r}=0$, and $K_{r,\max}=\sqrt{4/27}=0.385$ when $K_{z}=0$. From Eq.(\ref{final}), on the other hand, we conclude that $K_{z,cr}=0.5$ when $K_{r}=0$, and $K_{r,cr}=0.5$ when $K_{z}=0$. Thus, the quantum-mechanical condition for trapping to occur is roughly the same as the classical condition. These results, however, should be taken with caution since our quantum-mechanical analysis is valid only for small values of $K_{r}$ and $K_{z}$. Though our derivation was for the case of a spin-one particle, it is clear that it can be extended to particles with higher spin, and also to \emph{half-integer} spin particles. In view of the results obtained by our recent study of spin-half particles in a 1D field \cite{life1d} and a 2D field \cite{life2d}, we believe that the expression for the lifetime in these cases is similar to the result obtained in the present paper. As an example, we apply our results to the case of a spin $1$ atom that is trapped in a field with $B_{0}=100$ Oe and $B_{0}/B^{\prime}\sim\sqrt {B_{0}/B^{\prime\prime}}\sim10$cm. These parameters correspond to typical traps used in Bose-Einstein condensation experiments \cite{bec,bec2,bec3,bec4}. The results, being correct to within an order of magnitude, are outlined in Table \ref{tab1}. We note that in both cases the values of $K_{z}$ and $K_{r}$ are much smaller than $1$. Also, the calculated lifetime of the particle in the trap is extremely large, suggesting that the particle is tightly trapped in this field. 
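As a supplementary numerical check (not needed for any of the results above), the short Python script below compares the limiting forms in Eq.(\ref{i2}) with a direct quadrature of Eq.(\ref{I}), and reproduces the order of magnitude of $T_{esc}$ quoted in Table \ref{tab1} from the first branch of Eq.(\ref{final}). It is a minimal sketch only; the trap frequencies $\omega_{p}\sim10^{9}\,{\rm s}^{-1}$ and $\omega_{r}\sim10\,{\rm s}^{-1}$ are simply the illustrative values read off the table.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def I_direct(a, b):
    # direct quadrature of I(a,b) = int_0^pi sin^3(g) exp(-a sin^2 g - b cos^2 g) dg
    f = lambda g: np.sin(g)**3 * np.exp(-a*np.sin(g)**2 - b*np.cos(g)**2)
    val, _ = quad(f, 0.0, np.pi, epsabs=0.0, epsrel=1e-12)
    return val

# b >> a regime: leading-order form sqrt(pi/b)*exp(-a)
a, b = 8.0, 80.0
print(I_direct(a, b), np.sqrt(np.pi/b)*np.exp(-a))

# b << a regime: leading-order form exp(-b)/a**2
# (agreement improves as the ratio a/b, resp. b/a, grows)
a, b = 80.0, 8.0
print(I_direct(a, b), np.exp(-b)/a**2)

# Order of magnitude of T_esc for omega_p >> omega_r >> omega_z
# (first branch above); frequencies are the illustrative Table-1 values.
omega_p, omega_r = 1.0e9, 1.0e1   # s^-1
log10_Tesc = (2.0*omega_p/omega_r)/np.log(10.0) - np.log10(4.0*np.pi*omega_r)
print("log10(T_esc / sec) ~ %.2e" % log10_Tesc)
# ~ 8.7e7, consistent with the ~10^(10^8) sec entry of Table 1
\end{verbatim}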
In this study we have been interested in the \emph{ground-state }trapped state. In the case of a particle with spin $1/2$ or spin $1$, this is the only \emph{one} trapped spin state. However, when particles with higher spin are considered there are more than one trapped states. A natural question in connection with these is what is the lifetime of these trapped states. Another interesting issue is the lifetime of an \emph{excited} state in a given trapped spin state. Our preliminary results show that some of these excited states may have a \emph{short} lifetime, being \emph{algebraically} dependent on $\omega_{p}/\omega_{r}$ and $\omega_{p}/\omega_{z}$ rather than \emph{exponentially} dependent. This question is still under study. \pagebreak \appendix \section{Transformation of $\nabla^{2}$.} The transformation of $\nabla^{2}$ is given by \begin{equation} R\nabla^{2}R^{-1}=\exp\left[ i\theta\hat{s}_{y}\right] Q\exp\left[ -i\theta\hat{s}_{y}\right] , \label{a1} \end{equation} where \begin{equation} Q\equiv\exp\left[ i\varphi\hat{s}_{z}\right] \nabla^{2}\exp\left[ -i\varphi\hat{s}_{z}\right] . \label{a2} \end{equation} Evaluating $Q$ first gives \begin{align*} \nabla^{2}\left( \exp\left[ -i\varphi\hat{s}_{z}\right] A\right) & =\mathbf{\nabla}\cdot\mathbf{\nabla}\left( \exp\left[ -i\varphi\hat{s} _{z}\right] A\right) \\ & =\mathbf{\nabla}\cdot\left[ \mathbf{\nabla}\left( \exp\left[ -i\varphi\hat{s}_{z}\right] \right) A+\exp\left[ -i\varphi\hat{s} _{z}\right] \mathbf{\nabla}A\right] \\ & =\mathbf{\nabla}\cdot\left[ -i\exp\left[ -i\varphi\hat{s}_{z}\right] \mathbf{\nabla}\varphi\hat{s}_{z}A+\exp\left[ -i\varphi\hat{s}_{z}\right] \mathbf{\nabla}A\right] \\ & =\mathbf{\nabla}\cdot\left[ -i\exp\left[ -i\varphi\hat{s}_{z}\right] \mathbf{\nabla}\varphi\hat{s}_{z}A+\exp\left[ -i\varphi\hat{s}_{z}\right] \mathbf{\nabla}A\right] \end{align*} but \begin{align*} & \mathbf{\nabla}\cdot\left[ -i\exp\left[ -i\varphi\hat{s}_{z}\right] \mathbf{\nabla}\varphi\hat{s}_{z}A\right] \\ & =\mathbf{\nabla}\varphi\cdot\mathbf{\nabla}\left[ -i\exp\left[ -i\varphi\hat{s}_{z}\right] \hat{s}_{z}A\right] -i\exp\left[ -i\varphi \hat{s}_{z}\right] \mathbf{\nabla}^{2}\varphi\hat{s}_{z}A\\ & =\mathbf{\nabla}\varphi\cdot\left[ -i\exp\left[ -i\varphi\hat{s} _{z}\right] \hat{s}_{z}\mathbf{\nabla}A-\exp\left[ -i\varphi\hat{s} _{z}\right] \mathbf{\nabla}\varphi\hat{s}_{z}^{2}A\right] -i\exp\left[ -i\varphi\hat{s}_{z}\right] \mathbf{\nabla}^{2}\varphi\hat{s}_{z}A \end{align*} and \begin{align*} & \mathbf{\nabla}\cdot\left[ \exp\left[ -i\varphi\hat{s}_{z}\right] \mathbf{\nabla}A\right] \\ & =-i\mathbf{\nabla}\varphi\exp\left[ -i\varphi\hat{s}_{z}\right] \hat {s}_{z}\cdot\mathbf{\nabla}A+\exp\left[ -i\varphi\hat{s}_{z}\right] \mathbf{\nabla}^{2}A \end{align*} hence \begin{align*} & \nabla^{2}\left( \exp\left[ -i\varphi\hat{s}_{z}\right] A\right) \\ & =\mathbf{\nabla}\varphi\cdot\left[ -i\exp\left[ -i\varphi\hat{s} _{z}\right] \hat{s}_{z}\mathbf{\nabla}A-\exp\left[ -i\varphi\hat{s} _{z}\right] \mathbf{\nabla}\varphi\hat{s}_{z}^{2}A\right] -i\exp\left[ -i\varphi\hat{s}_{z}\right] \mathbf{\nabla}^{2}\varphi\hat{s}_{z}A\\ & -i\mathbf{\nabla}\varphi\exp\left[ -i\varphi\hat{s}_{z}\right] \hat {s}_{z}\cdot\mathbf{\nabla}A+\exp\left[ -i\varphi\hat{s}_{z}\right] \mathbf{\nabla}^{2}A\\ & =-2i\mathbf{\nabla}\varphi\exp\left[ -i\varphi\hat{s}_{z}\right] \hat {s}_{z}\cdot\mathbf{\nabla}A-\exp\left[ -i\varphi\hat{s}_{z}\right] \left| \mathbf{\nabla}\varphi\right| ^{2}\hat{s}_{z}^{2}A\\ & -i\exp\left[ -i\varphi\hat{s}_{z}\right] 
\mathbf{\nabla}^{2}\varphi \hat{s}_{z}A+\exp\left[ -i\varphi\hat{s}_{z}\right] \mathbf{\nabla}^{2}A \end{align*} thus \begin{align*} & \exp\left[ i\varphi\hat{s}_{z}\right] \nabla^{2}\left( \exp\left[ -i\varphi\hat{s}_{z}\right] A\right) \\ & =-2i\hat{s}_{z}\mathbf{\nabla}\varphi\cdot\mathbf{\nabla}A-\left| \mathbf{\nabla}\varphi\right| ^{2}\hat{s}_{z}^{2}A-i\mathbf{\nabla} ^{2}\varphi\hat{s}_{z}A+\mathbf{\nabla}^{2}A \end{align*} or in an operatorial form \begin{equation} Q=\exp\left[ i\varphi\hat{s}_{z}\right] \nabla^{2}\exp\left[ -i\varphi \hat{s}_{z}\right] =-2i\hat{s}_{z}\mathbf{\nabla}\varphi\cdot\mathbf{\nabla }-\left| \mathbf{\nabla}\varphi\right| ^{2}\hat{s}_{z}^{2}-i\mathbf{\nabla }^{2}\varphi\hat{s}_{z}+\mathbf{\nabla}^{2}. \label{a3} \end{equation} Substituting in Eq.(\ref{a1}) each of the four terms in Eq.(\ref{a3}) we find \begin{equation} \exp\left[ i\theta\hat{s}_{y}\right] \nabla^{2}\exp\left[ -i\theta\hat {s}_{y}\right] =-i\nabla^{2}\theta\hat{s}_{y}-\left| \mathbf{\nabla} \theta\right| ^{2}\hat{s}_{y}^{2}-2i\hat{s}_{y}\mathbf{\nabla}\theta \cdot\nabla+\nabla^{2}\text{,} \label{a3.1} \end{equation} \begin{equation} \exp\left[ i\theta\hat{s}_{y}\right] \left( -i\mathbf{\nabla}^{2} \varphi\right) \hat{s}_{z}\exp\left[ -i\theta\hat{s}_{y}\right] =-i\mathbf{\nabla}^{2}\varphi\left( \cos\theta\hat{s}_{z}-\sin\theta\hat {s}_{x}\right) \text{,} \label{a3.2} \end{equation} \begin{align} \exp\left[ i\theta\hat{s}_{y}\right] \hat{s}_{z}^{2}\exp\left[ -i\theta \hat{s}_{y}\right] & =\left( \cos\theta\hat{s}_{z}-\sin\theta\hat{s} _{x}\right) ^{2}\label{a3.3}\\ & =\cos^{2}\theta\hat{s}_{z}^{2}-\sin\theta\cos\theta\left( \hat{s}_{z} \hat{s}_{x}+\hat{s}_{x}\hat{s}_{z}\right) +\sin^{2}\theta\hat{s}_{x} ^{2}\text{,}\nonumber \end{align} \begin{align} \exp\left[ i\theta\hat{s}_{y}\right] \hat{s}_{z}\mathbf{\nabla}\exp\left[ -i\theta\hat{s}_{y}\right] & =\exp\left[ i\theta\hat{s}_{y}\right] \hat{s}_{z}\exp\left[ -i\theta\hat{s}_{y}\right] \left( -i\mathbf{\nabla }\theta\hat{s}_{y}+\mathbf{\nabla}\right) \label{a3.4}\\ & =\left( \cos\theta\hat{s}_{z}-\sin\theta\hat{s}_{x}\right) \left( -i\mathbf{\nabla}\theta\hat{s}_{y}+\mathbf{\nabla}\right) \nonumber\\ & =-i\cos\theta\mathbf{\nabla}\theta\hat{s}_{z}\hat{s}_{y}+i\sin \theta\mathbf{\nabla}\theta\hat{s}_{x}\hat{s}_{y}+\cos\theta\hat{s} _{z}\mathbf{\nabla}-\sin\theta\hat{s}_{x}\mathbf{\nabla,}\nonumber \end{align} Substituting Eqs.(\ref{a3.1}) to (\ref{a3.4}) into Eq.(\ref{a1}) gives \[ R\nabla^{2}R^{-1}=\left[ \begin{array} [c]{c} -i\nabla^{2}\varphi\left( \cos\theta\hat{s}_{z}-\sin\theta\hat{s}_{x}\right) -\left| \nabla\varphi\right| ^{2}\left( \cos\theta\hat{s}_{z}-\sin \theta\hat{s}_{x}\right) ^{2}\\ -2i\left( \cos\theta\hat{s}_{z}-\sin\theta\hat{s}_{x}\right) \nabla \varphi\cdot\left( -i\mathbf{\nabla}\theta\hat{s}_{y}+\mathbf{\nabla}\right) \\ -i\nabla^{2}\theta\hat{s}_{y}-\left| \mathbf{\nabla}\theta\right| ^{2} \hat{s}_{y}^{2}-2i\hat{s}_{y}\mathbf{\nabla}\theta\cdot\nabla+\nabla^{2} \end{array} \right] . \] Note that the transformed $\nabla^{2}$ is composed of terms containing $\hat{s}_{i}^{n}$ with $n=0,1$ or $2$. This is a consequence of the fact that the original operator $\nabla^{2}$ is a second order differential operator. Thus, a spin state $\Psi_{M}$ for which $\hat{s}_{z}\Psi_{M}=M\Psi_{M}$ is coupled, in first order, only to the states $\Psi_{M\pm1}$ and $\Psi_{M\pm2}$. 
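The rotation formulas (\ref{a3.2})--(\ref{a3.4}) can be checked numerically. The short script below (an illustration only; it assumes the standard spin-$1$ matrices in the $|m\rangle$ basis with $\hbar=1$) verifies $\exp\left[ i\theta\hat{s}_{y}\right] \hat{s}_{z}\exp\left[ -i\theta\hat{s}_{y}\right] =\cos\theta\hat{s}_{z}-\sin\theta\hat{s}_{x}$ and the corresponding relation for $\hat{s}_{z}^{2}$ at an arbitrary test angle.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

r2 = 1.0/np.sqrt(2.0)
Sx = r2*np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Sy = r2*np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex)
Sz = np.diag([1.0, 0.0, -1.0]).astype(complex)

theta = 0.73                      # arbitrary test angle
R = expm(1j*theta*Sy)             # rotation about the y spin axis
lhs = R @ Sz @ R.conj().T         # R.conj().T = exp(-i*theta*Sy), since Sy is Hermitian
rhs = np.cos(theta)*Sz - np.sin(theta)*Sx
print(np.allclose(lhs, rhs))      # True

# the squared operator entering Eq. (a3.3)
print(np.allclose(R @ Sz @ Sz @ R.conj().T, rhs @ rhs))   # True
\end{verbatim}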
\pagebreak \begin{center} \begin{table}[tbp] \centering \caption{Typical time scales for a spin $1$ atom trapped with a field $B_{0} =100$ Oe and $B_{0}/B^{\prime}\sim\sqrt{B_{0}/B^{\prime\prime}}\sim10$cm.} \begin{tabular} [c]{|l|l|}\hline & Spin $1$ atom\\\hline $m$ (g) & $\sim10^{-22}$\\ $\mu$ (emu) & $\sim10^{-20}$\\ $K_{z}$, $K_{r}$ & $\sim10^{-8}$\\ $\omega_{p}^{-1}$ (s) & $\sim10^{-9}$\\ $\omega_{r}^{-1}$, $\omega_{z}^{-1}$ (s) & $\sim10^{-1}$\\ $T_{esc}$ (s) & $\sim10^{\left( 10^{8}\right) }$\\\hline \end{tabular} \label{tab1} \end{table} \end{center} \begin{figure} \caption{Stable region in the $(K_r^2,K_z^2)$-plane, as predicted by the classical analysis.} \label{fig1} \end{figure} \end{document}
\begin{document} \vfuzz2pt \hfuzz2pt \newtheorem{thm}{Theorem}[] \newtheorem{model}{Model} \newtheorem{pro}{Problem}[] \newtheorem{cor}[thm]{Corollary} \newtheorem{lem}[]{Lemma}[section] \newtheorem{prop}[]{Proposition}[section] \theoremstyle{definition} \newtheorem{defn}[thm]{Definition} \theoremstyle{remark} \newtheorem{rem}[]{Remark}[section] \numberwithin{equation}{section} \newtheorem{col}{Conclusion} \baselineskip 16pt \title[]{\bf Global non-isentropic rotational supersonic flows in a semi-infinite divergent duct} \author[]{Geng Lai} \address{} \email{} \subjclass{} \keywords{} \dedicatory{ Department of Mathematics, Shanghai University, Shanghai, 200444, P.R. China\\ \vskip 4pt [email protected]} \begin{abstract} Supersonic flows for the two-dimensional (2D) steady full Euler system are studied. We construct a global non-isentropic rotational supersonic flow in a semi-infinite divergent duct. The flow satisfies the slip condition on the walls of the duct, and the state of the flow is given at the inlet of the duct. The solution is constructed by the method of characteristics. The main difficulty for the global existence is that a uniform a priori $C^1$ norm estimate of the solution is hard to obtain, especially when the solution tends to a vacuum state. We derive a group of characteristic decompositions for the 2D steady full Euler system. Using these decompositions, we obtain uniform a priori estimates for the derivatives of the solution. A sufficient condition for the appearance of vacuum is also given. We show that if there is a vacuum then the vacuum is always adjacent to one of the walls, and the interface between gas and vacuum must be straight. The method used here may also be used to construct other 2D steady non-isentropic rotational supersonic flows. \ \vskip 4pt \noindent {\sc Keywords.} 2D steady full Euler system, characteristic decomposition, supersonic flow, vacuum. \ \vskip 4pt \noindent {\sc 2010 AMS subject classification.} Primary: 35L65; Secondary: 35L60, 35L67. \end{abstract} \maketitle \section{\bf Introduction } We consider the 2D steady compressible Euler system: \begin{equation} \left\{ \begin{array}{ll} (\rho u)_x+(\rho v)_y=0, \\[4pt] (\rho u^2+p)_x+(\rho uv)_y=0, \\[4pt] (\rho uv)_x+(\rho v^2+p)_y=0,\\[4pt] (\rho uE +up)_x+(\rho vE+vp)_y=0, \end{array} \right.\label{PSEU} \end{equation} where $(u, v)$ is the velocity, $\rho$ is the density, $p$ is the pressure, $E=\frac{u^2+v^2}{2}+e$ is the specific total energy, and $e$ is the specific internal energy. For polytropic gases, we have the equations of state $$ p=s\rho^{\gamma}\quad \mbox{and}\quad e=\frac{p\tau}{\gamma-1}, $$ where $s$ is the specific entropy, $\gamma>1$ is the adiabatic constant, and $\tau=1/\rho$ is the specific volume. For smooth flow, system (\ref{PSEU}) can be written as \begin{equation} \left\{ \begin{array}{ll} (\rho u)_x+(\rho v)_y=0, \\[4pt] uu_x+vu_y+\tau p_{x}=0, \\[4pt] uv_x+vv_y+\tau p_{y}=0,\\[4pt] us_x+vs_y=0. \end{array} \right.\label{PsEuler} \end{equation} The eigenvalues of (\ref{PsEuler}) are determined by \begin{equation} \Big(\lambda-\frac{v}{u}\Big)^2\big[(v-\lambda u)^{2}-c^{2}(1+\lambda^{2})\big]=0,\label{characteristice} \end{equation} which yields \begin{equation} \lambda=\lambda_{\pm}(u,v,c)=\frac{uv\pm c\sqrt{u^{2}+v^{2}-c^{2}}}{u^{2}-c^{2}}\quad\mbox{and}\quad\lambda=\lambda_0=\frac{v}{u}. \end{equation} Here, $c=\sqrt{\gamma s \rho^{\gamma-1}}$ is the sound speed. 
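As an elementary numerical illustration (not needed for the analysis that follows), one can check that the explicit roots $\lambda_{\pm}$ above indeed satisfy (\ref{characteristice}), while $\lambda_{0}=v/u$ is the double stream root. The short Python script below does this for an arbitrary supersonic test state.
\begin{verbatim}
import numpy as np

gamma = 1.4
u, v, rho, s = 2.0, 0.3, 1.2, 1.0            # arbitrary test state
c = np.sqrt(gamma*s*rho**(gamma - 1.0))      # sound speed
assert u**2 + v**2 > c**2                    # supersonic

def char_rel(lam):
    # left-hand side of the characteristic relation
    return (lam - v/u)**2*((v - lam*u)**2 - c**2*(1.0 + lam**2))

root = c*np.sqrt(u**2 + v**2 - c**2)
lam_p = (u*v + root)/(u**2 - c**2)
lam_m = (u*v - root)/(u**2 - c**2)

print(char_rel(lam_p), char_rel(lam_m), char_rel(v/u))   # all ~ 0
\end{verbatim}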
So, system (\ref{PsEuler}) is hyperbolic if and only if $u^{2}+v^{2}>c^{2}$ (supersonic); in this case it has two families of wave characteristics defined as the integral curves of $$C_{\pm}:\quad \frac{{\rm d}y}{{\rm d}x}=\lambda_{\pm}.$$ The stream characteristics are defined as the integral curves of $$C_{0}:\quad \frac{{\rm d}y}{{\rm d}x}=\frac{v}{u}.$$ \begin{figure} \caption{ \footnotesize A piecewise smooth supersonic flow in a semi-infinite divergent duct.} \label{Fig2} \end{figure} In this paper, we consider supersonic flows in a two-dimensional semi-infinite divergent duct. Assume that the duct, denoted by $\Sigma$, is symmetric with respect to the $x$-axis and bounded by two walls $W_{+}$ and $W_{-}$, which are represented by $$ W_{+}=\big\{(x, y) \mid y=f(x), ~~x\geq 0\big\}\quad \mbox{and}\quad W_{-}=\big\{(x, y) \mid y=-f(x), ~~x\geq 0\big\}, $$ where $f(x)$ is assumed to satisfy: \begin{equation}\label{72401} f(0)>0,\quad f'(0)= 0,\quad f''(x)> 0~~\mbox{for}~~x\geq 0,\quad f_{\infty}'=\lim\limits_{x\rightarrow +\infty}f'(x)~~ \mbox{exists}. \end{equation} Then $$ \Sigma=\{(x,y)\mid -f(x)<y<f(x), ~x>0\}; $$ see Figure \ref{Fig2}. In order to study supersonic flows in such a duct, we consider the following problem. Assume that at the inlet of the duct, there is a supersonic incoming flow. Then find a global supersonic flow in the duct $\Sigma$. When the incoming supersonic flow is a uniform flow, the global existence of a continuous and piecewise smooth solution in the duct was obtained by Chen and Qu in \cite{CQ2}. \begin{thm} ({ Chen and Qu \cite{CQ2}}) Assume that the state of the incoming uniform supersonic flow is $(u_0, 0, \rho_0, s_0)$ and that the duct is smooth and convex in the sense of (\ref{72401}), where the constants $\rho_0>0$, $s_0>0$, and $u_0>c_0:=\sqrt{\gamma s_0\rho_0^{\gamma-1}}$. Then there exists a global continuous and piecewise smooth solution in the duct. Moreover, if $f_{\infty}'$ is greater than a constant determined by the Mach number $u_0/c_0$ and the adiabatic exponent $\gamma$ of the incoming flow, then a vacuum will appear within a finite portion of the duct. The vacuum region must be adjacent to the walls, and the boundary of the vacuum region is straight; it starts from, and is tangential to, the curved part of the walls. \end{thm} A similar global existence result was obtained by Wang and Xin by using potential-stream coordinates in \cite{WX1}. When the incoming flow is sonic, the system becomes degenerate at the inlet of the duct. In a recent paper \cite{WX2}, Wang and Xin solved this degenerate hyperbolic problem and constructed a smooth transonic flow solution in a De Laval nozzle. The supersonic flow constructed in \cite{CQ2} is isentropic and irrotational. So, a natural question is to determine whether this result can be extended to the 2D steady full Euler system. For this purpose, we consider system (\ref{PSEU}) with the boundary condition: \begin{equation}\label{bd1} \left\{ \begin{array}{ll} (u, v, \rho, s)(0, y)=(u_{in}, v_{in}, \rho_{in}, s_{in})(y), & \hbox{$-f(0)\leq y\leq f(0)$;} \\[6pt] (u, v)\cdot {\bf n_w}=0, & \hbox{$W_{+}\cup W_{-}$,} \end{array} \right. 
\end{equation} where ${\bf n_w}$ denotes the normal vector of $W_{\pm}$, and $(u_{in}, v_{in}, \rho_{in}, s_{in})(y)$ satisfies \begin{description} \item[(A1)] $(u_{in}, v_{in}, \rho_{in}, s_{in})(y)\in C^{1}[-f(0), f(0)]$; \item[(A2)] $v_{in}(y)=0$, $p_{in}(y)=s_{in}(y)\rho_{in}^{\gamma}(y)=\mbox{Const.}$ and $u_{in}(y)>c_{in}(y):=\sqrt{\gamma s_{in}(y)\rho_{in}^{\gamma-1}(y)}$ for $-f(0)\leq y\leq f(0)$; \item[(A3)] $(u_{in}, v_{in}, \rho_{in}, s_{in})(y)=(u_{in}, v_{in}, \rho_{in}, s_{in})(-y)$ for $-f(0)\leq y\leq f(0)$. \end{description} Actually, $(u, v, \rho, s)(x, y)=(u_{in}, v_{in}, \rho_{in}, s_{in})(y)$ is a laminar flow solution of system (\ref{PSEU}) under assumptions (A1)--(A3). Referring to Figure \ref{Fig25}, if the incoming flow is a uniform supersonic flow $(u_0, 0, \rho_0, s_0)$, then by the results of Courant and Friedrichs (\cite{CF}, Chap. IV.B) we know that there is a simple wave $\it S_{+}$ ($\it S_{-}$, resp.) with straight $C_{+}$ ($C_{-}$, resp.) characteristics issuing from the lower wall $W_{-}$ (upper wall $W_{+}$, resp.). These two simple waves start to interact with each other from a point $\bar{P}=\big(f(0)\sqrt{ u_0^2-c_0^2}/c_0 , 0\big)$. Through $\bar{P}$ we draw a forward $C_{-}$ ($C_{+}$, resp.) characteristic curve $C_{-}^{\bar{P}}$ ($C_{+}^{\bar{P}}$, resp.) in $\it S_{+}$ ($\it S_{-}$, resp.). Then there are the following two cases: \begin{description} \item[(i)] The characteristic curve $C_{-}^{\bar{P}}$ ($C_{+}^{\bar{P}}$, resp.) meets the lower wall $W_{-}$ (upper wall $W_{+}$, resp.) at a point $\bar{B}_0$ ($\bar{D}_0$, resp.), as indicated in Figure \ref{Fig25}(right). \item[(ii)] The characteristic curve $C_{-}^{\bar{P}}$ ($C_{+}^{\bar{P}}$, resp.) does not meet the lower wall $W_{-}$ (upper wall $W_{+}$, resp.). \end{description} \begin{figure} \caption{ \footnotesize Simple waves adjacent to a constant state.} \label{Fig25} \end{figure} In this paper, we show that for case (i), if the incoming flow is a small perturbation of the constant state $(u_0, 0, \rho_0, s_0)$ then the problem (\ref{PSEU}), (\ref{bd1}) admits a global continuous and piecewise smooth supersonic solution. Our main results can be stated as follows. \begin{thm}\label{main}({\bf Main theorem}) Let $$\epsilon=\max\Big\{||u_{in}-u_0||_{_{C^1[-f(0), f(0)]}},~~ ||\rho_{in}-\rho_0||_{_{C^1[-f(0), f(0)]}},~~||s_{in}-s_0||_{_{C^1[-f(0), f(0)]}}\Big\}.$$ Assume that $(u_{in}, v_{in}, \rho_{in}, s_{in})(y)$ satisfies (A1)--(A3) and the duct satisfies (\ref{72401}). For case (i), when $\epsilon$ is sufficiently small, the problem (\ref{PSEU}), (\ref{bd1}) admits a global continuous and piecewise smooth supersonic flow solution in the duct. Moreover, if $$\arctan f_{\infty}'>\frac{2}{\gamma-1}\cdot\frac{c_{in}\big(f(0)\big)}{ u_{in}\big(f(0)\big)}$$ then there are two vacuum regions adjacent to the walls, and the interfaces between gas and vacuum are straight lines which start from, and are tangential to, the walls. \end{thm} \begin{rem} Actually, case (i) occurs very easily. In this case, we have the monotonicity conditions: $-\infty<\inf\limits_{\widehat{\bar{P}\bar{B}_0}}\bar{\partial}_{-}c<\sup\limits_{\widehat{\bar{P}\bar{B}_0}}\bar{\partial}_{-}c<0$ and $-\infty<\inf\limits_{\widehat{\bar{P}\bar{D}_0}}\bar{\partial}_{+}c <\sup\limits_{\widehat{\bar{P}\bar{D}_0}}\bar{\partial}_{+}c<0$, where $\bar{\partial}_{\pm}$ are directional derivatives which will be defined later. 
These monotonicity conditions are crucial in constructing a global continuous and piecewise smooth solution to the problem (\ref{PSEU}), (\ref{bd1}) as $(u_{in}, v_{in}, \rho_{in}, s_{in})(y)$ is a small perturbation of the constant state $(u_0, 0, \rho_0, s_0)$. \end{rem} \begin{rem} This theorem is also true if assumptions (A2) and (A3) are replaced by $v_{in}(f(0))=v_{in}(-f(0))=0$. \end{rem} We will use the method of characteristics, so we need the concept of the direction of the wave characteristics. The direction of the wave characteristics is defined as the tangent direction that forms an acute angle $A$ with the direction of the flow velocity $(u, v)$. By simple computation, we see that the $C_+$ characteristic direction forms with the direction of the flow velocity the angle $A$ from $(u, v)$ to $C_{+}$ in the counterclockwise direction, and the $C_-$ characteristic direction forms with the direction of the flow velocity the angle $A$ from $(u, v)$ to $C_{-}$ in the clockwise direction, as illustrated in Figure \ref{Fig1}. By computation, we have \begin{equation} c^{2}=q^{2}\sin^{2} A,\label{210cqo} \end{equation} in which $q^{2}=u^{2}+v^{2}$. The angle $A$ is called the Mach angle. \begin{figure} \caption{ \footnotesize Characteristic curves, characteristic directions, and characteristic angles.} \label{Fig1} \end{figure} Following \cite{CF} and \cite{Li3}, we use the concept of characteristic angle. The $C_{+}$ ($C_{-}$, resp.) characteristic angle is defined as the counterclockwise angle from the positive $x$-axis to the $C_{+}$ ($C_{-}$, resp.) characteristic direction. We denote by $\alpha$ and $\beta$ the $C_{+}$ and $C_{-}$ characteristic angle, respectively, where $0\leq \alpha-\beta\leq \pi$. Let $\sigma$ be the counterclockwise angle from the positive $x$-axis to the direction of the flow velocity. Obviously, we have \begin{equation} \alpha=\sigma+A,\quad \beta=\sigma-A,\quad\sigma=\frac{\alpha+\beta}{2},\quad A=\frac{\alpha-\beta}{2}, \label{tau} \end{equation} \begin{equation} \label{U} u=q\cos\sigma, \quad v=q\sin\sigma,\quad u=c\frac{\cos\sigma}{\sin A},\quad \mbox{and}\quad v=c\frac{\sin\sigma}{\sin A}. \end{equation} By computation, we also have \begin{equation}\label{4402} q^{2}\cos\alpha\cos\beta=u^2-c^2\quad\mbox{and}\quad q^{2}\sin\alpha\sin\beta=v^2-c^2. \end{equation} Now, let us briefly describe the process of constructing a global continuous and piecewise smooth supersonic solution to the boundary value problem (\ref{PSEU}), (\ref{bd1}). Referring to Figure \ref{Fig2}, through point $B=(0, -f(0))$ draw a forward $C_{+}$ characteristic curve; through point $D=(0, f(0))$ draw a forward $C_{-}$ characteristic curve. These two characteristic curves meet at some point $P$ on the $x-$axis. It can be seen by (A2) and (A3) that the flow in a region bounded by $\widehat{BP}$, $\widehat{DP}$, and $x=0$ is $(u, v, \rho, s)(x, y) =(u_{in}, v_{in}, \rho_{in}, s_{in})(y)$. We solve a slip boundary problem for (\ref{PSEU}) in a region $\Sigma_{0}^{-}$ ($\Sigma_{0}^{+}$, resp.) bounded by $\widehat{BP}$ ($\widehat{DP}$, resp.), $W_{-}$ ($W_{+}$, resp.), and $\widehat{PB_0}$ ($\widehat{PD_0}$, resp.), where $\widehat{PB_0}$ ($\widehat{PD_0}$, resp.) is a $C_{-}$ characteristic curve which issues from $P$ and meets $W_{-}$ ($W_{+}$, resp.) at a point $B_0$ ($D_0$, resp.). Meanwhile, in view of $f''>0$ and $\epsilon$ is sufficiently small we will get $$ \sup\limits_{\widehat{PB_0}}\bar\partial_{-}c<0\quad\mbox{and}\quad \sup\limits_{\widehat{PD_0}}\bar\partial_{+}c<0. 
$$ where the directional derivatives are given by \begin{equation} \bar{\partial}_+=\cos\alpha\partial_{x}+\sin\alpha\partial_{y}\quad \mbox{and}\quad \bar{\partial}_{-}=\cos\beta\partial_{x}+\sin\beta\partial_{y}. \end{equation} We will then solve a Goursat problem for (\ref{PSEU}) in a region $\Sigma_1$ bounded by $\widehat{PD_0}$, $\widehat{PB_0}$, a forward $C_{-}$ characteristic curve $C_{-}^{D_0}$ issuing from $D_0$, and a forward $C_{+}$ characteristic curve $C_{+}^{B_0}$ issuing from $B_0$. There are two cases: one is that $C_{+}^{B_0}$ and $C_{-}^{D_0}$ intersect at some point $P_1$, as indicated in Figure \ref{Domain}(1); the other is that $C_{+}^{B_0}$ and $C_{-}^{D_0}$ do not intersect with each other, as indicated in Figure \ref{Domain}(2). \begin{figure} \caption{ \footnotesize Domains $\Sigma_1$ and $\Sigma_1^{-}$.} \label{Domain} \end{figure} Next, we solve a slip boundary problem for (\ref{PSEU}) in a region $\Sigma_1^{-}$ adjacent to $W_{-}$. If $C_{+}^{B_0}$ and $C_{-}^{D_0}$ intersect at some point $P_1$, then there are two possibilities for $\Sigma_1^{-}$. One is that the $C_{-}$ characteristic curve $C_{-}^{P_1}$ issuing from $P_1$ intersects with $W_{-}$ at a point $B_1$, and then $\Sigma_1^{-}$ is a bounded domain enclosed by $\widehat{B_0P_1}$, $\widehat{P_1B_1}$, and $W_{-}$; see Figure \ref{Domain}(3). The other is that $C_{-}^{P_1}$ does not intersect with $W_{-}$, and then $\Sigma_1^{-}$ is an infinite region bounded by $\widehat{BP_1}$, $W_{-}$, and $C_{-}^{P_1}$; see Figure \ref{Domain}(4). If $C_{+}^{B_0}$ and $C_{-}^{D_0}$ do not intersect with each other, then $\Sigma_{1}^{-}$ will be an infinite region between $C_{+}^{B_0}$ and $W_{-}$; see Figure \ref{Domain}(5). By symmetry, one can obtain the flow in a region $\Sigma_1^{+}$ adjacent to $W_{+}$. If $C_{+}^{B_0}$ and $C_{-}^{D_0}$ intersect at some point $P_1$ then we continue to solve a Goursat problem for (\ref{PSEU}) with $C_{-}^{P_1}$ and $C_{+}^{P_1}$ as the characteristic boundaries in a region $\Sigma_2$. By repeatedly solving similar Goursat problems and slip boundary problems, one can get the solution in regions $\Sigma_{0}$, $\Sigma_{0}^{+}$, $\Sigma_{0}^{-}$, $\Sigma_{1}$, $\Sigma_{1}^{+}$, $\Sigma_{1}^{-}$, $\Sigma_{2}$, $\Sigma_{2}^{+}$, $\Sigma_{2}^{-}$, $\cdots$, as illustrated in Figure \ref{Fig2}. Finally, we shall show that one can obtain a global piecewise smooth supersonic flow solution in the domain $\Sigma$ after solving a finite number of Goursat problems and slip boundary problems. One of the main difficulties in constructing the global solution is that a uniform a priori $C^1$ norm estimate of the solution is hard to obtain, especially when the solution tends to a vacuum state. If the flow is isentropic and irrotational then system (\ref{PSEU}) can be reduced to a $2\times 2$ system by using Riemann invariants, and the existence of a global classical solution to the Goursat problem and the slip boundary problem can be obtained under monotonicity conditions on the boundary data; see Li \cite{LiT}. However, if the flow is non-isentropic and rotational then system (\ref{PSEU}) is a $4\times 4$ system and cannot be diagonalized. Thus the methods in \cite{CQ2, LiT, WX1} do not work here. 
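As a quick consistency check of the angle relations (\ref{210cqo}), (\ref{tau}), (\ref{U}), and (\ref{4402}) introduced above, one may verify numerically that with $u=q\cos\sigma$, $v=q\sin\sigma$, and $c=q\sin A$, the slopes $\tan\alpha$ and $\tan\beta$ reproduce $\lambda_{+}$ and $\lambda_{-}$, and the products in (\ref{4402}) hold. The short Python sketch below does this for an arbitrary supersonic test state (the values of $q$, $\sigma$, and $A$ are illustrative only).
\begin{verbatim}
import numpy as np

q, sigma, A = 3.0, 0.4, 0.3                  # arbitrary test values, 0 < A < pi/2
u, v, c = q*np.cos(sigma), q*np.sin(sigma), q*np.sin(A)
alpha, beta = sigma + A, sigma - A           # characteristic angles

root = c*np.sqrt(u**2 + v**2 - c**2)
lam_p = (u*v + root)/(u**2 - c**2)
lam_m = (u*v - root)/(u**2 - c**2)

print(np.isclose(np.tan(alpha), lam_p))                          # True
print(np.isclose(np.tan(beta),  lam_m))                          # True
print(np.isclose(q**2*np.cos(alpha)*np.cos(beta), u**2 - c**2))  # True
print(np.isclose(q**2*np.sin(alpha)*np.sin(beta), v**2 - c**2))  # True
\end{verbatim}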
In this paper, we derive the following two important characteristic equations about the entropy and the vorticity $\omega=u_y-v_x$: $$ \bar{\partial}_{0}\Big(\frac{\bar{\partial}_{\pm} s}{c^{\frac{\gamma+1}{\gamma-1}}}\Big) ~=~0\quad \mbox{and}\quad \bar{\partial}_{0}\Big(\frac{\omega}{\rho}\Big) =-\Big(\frac{\bar{\partial}_{+}s}{c^{\frac{\gamma+1}{\gamma-1}}}\Big)\frac{(s\gamma)^{\frac{1}{\gamma-1}} }{\gamma(\gamma-1)s}\bar{\partial}_{0}c^2, $$ where \begin{equation}\label{72801} \bar{\partial}_0=\cos\sigma\partial_{x}+\sin\sigma\partial_{y}. \end{equation} Using these equations, we can control the bounds of $\frac{\bar{\partial}_{\pm} s}{c^{\frac{\gamma+1}{\gamma-1}}}$ and $\frac{\omega}{\rho}$. (Remark: we shall show that although the solution is piecewise smooth in the duct, $\bar{\partial}_{\pm} s$ and $\omega$ are actually continuous in the duct.) We also derive a group of characteristic decompositions (which can actually be seen as a system of ``ordinary differential equations'') about $R_{+}$, $R_{-}$, $\frac{\bar{\partial}_{+} s}{c^{\frac{\gamma+1}{\gamma-1}}}$, and $\frac{\omega}{\rho}$, where \begin{equation}\label{1973101} R_{\pm}:=\bar{\partial}_{\pm}c-\frac{j\bar{\partial}_{\pm}s}{\kappa}= \bar{\partial}_{\pm}c-\frac{1}{2\gamma s}\Big(\frac{\bar{\partial}_{\pm} s}{c^{\frac{\gamma+1}{\gamma-1}}}\Big)c^{\frac{2\gamma}{\gamma-1}} , \end{equation} $$ \kappa=\frac{2}{\gamma-1},\quad\mbox{and} \quad j=\frac{c}{\gamma (\gamma-1)s}; $$ see (\ref{cd8}) and (\ref{cd10}). We first use the characteristic decompositions (\ref{cd10}) to prove that $c^{-\frac{2\gamma}{\gamma-1}}R_{+}$ and $c^{-\frac{2\gamma}{\gamma-1}}R_{-}$ have an identical uniform negative upper bound when $\epsilon$ is sufficiently small, and then use this negative upper bound and the characteristic decompositions (\ref{cd8}) to prove that $R_{+}$ ($R_{-}$, resp.) is monotone increasing along $C_{-}$ ($C_{+}$, resp.) characteristic curves in the sense of characteristic direction, and consequently get a uniform negative lower bound of $R_{+}$ and $R_{-}$. The estimates of the derivatives of $u$ and $v$ can then be obtained by (\ref{192303})--(\ref{82505}). In order to prove that the global solution can be constructed after solving a finite number of Goursat problems and slip boundary problems, we cannot use the method of hodograph transformation of \cite{CQ2}, since we do not have Riemann invariants for the 2D steady full Euler system. Instead, we use the characteristic angles $\alpha$ and $\beta$. By the uniform negative upper bound of $c^{-\frac{2\gamma}{\gamma-1}}\bar{\partial}_{\pm}c$ and the characteristic equations about $\alpha$ and $\beta$ (see (\ref{192307}) and (\ref{192309})), we will see that the wave characteristic curves are convex when $c$ is sufficiently small. Using the convexity of the wave characteristic curves, we will prove that $\Sigma$ can be covered by a finite number of determinate regions of those Goursat problems and slip boundary value problems; we will also prove that if there is a vacuum then the vacuum is always adjacent to one of the walls and the interface between gas and vacuum must be straight. In our previous studies \cite{Lai5, Lai6}, we constructed several non-isentropic rotational supersonic flow solutions for the 2D (pseudo-)steady Euler equations. However, these solutions were constructed in bounded regions and under the assumption that $c$ has a non-zero lower bound. In the present paper, we overcome the difficulty caused by vacuum. The rest of the paper is organized as follows. 
Section 2 is concerned with characteristic equations of the 2D steady full Euler system. In Section 2.1, we derive a group of first order characteristic equations for the variables $\alpha$, $\beta$, $c$, and $s$. In Section 2.2, we derive a group of characteristic equations for $R_{+}$, $R_{-}$, $c^{-\frac{\gamma+1}{\gamma-1}}\bar{\partial}_{+} s$, and $\frac{\omega}{\rho}$. Section 3 is devoted to constructing a global supersonic flow solution to the boundary value problem (\ref{PSEU}), (\ref{bd1}). \section{\bf Characteristic decompositions of the 2D steady Euler equations} \subsection{Characteristic equations} Set \begin{equation} \widehat{E}=\frac{q^{2}}{2}+\frac{c^2}{\gamma-1}.\label{Ber} \end{equation} Then by the last three equations of (\ref{PsEuler}) we have \begin{equation}\label{4401} u\widehat{E}_{x}+v\widehat{E}_{y}=0\quad \mbox{and}\quad \bar{\partial}_{0}\widehat{E}=0. \end{equation} From (\ref{Ber}) we also have \begin{equation}\label{72301} \rho_{x}=\frac{1}{c^2\tau}\Big(\widehat{E}_{x}-uu_{x}-vv_{x}-\frac{\gamma\rho^{\gamma-1}s_x}{\gamma-1}\Big) \quad\mbox{and}\quad \rho_{y}=\frac{1}{c^2\tau}\Big(\widehat{E}_{y}-uu_{y}-vv_{y}-\frac{\gamma\rho^{\gamma-1}s_y}{\gamma-1}\Big). \end{equation} Inserting this into the first equation of (\ref{PsEuler}) and using (\ref{4401}) and the fourth equation of (\ref{PsEuler}), we get \begin{equation} (c^{2}-u^{2})u_{x}-uv(u_{y}+v_{x})+(c^{2}-v^{2})v_{y}=0.\label{mass} \end{equation} Multiplying the system $$ \begin{array}{rcl} \left( \begin{array}{cc} c^{2}-u^{2} & -uv \\ 0 & -1\\ \end{array} \right)\left( \begin{array}{c} u \\ v \\ \end{array} \right)_{x}+\left( \begin{array}{cc} -uv & c^{2}-v^{2}\\ 1 & 0 \\ \end{array} \right)\left( \begin{array}{c} u \\ v \\ \end{array} \right)_{y}=\left( \begin{array}{c} 0 \\ \omega \\ \end{array} \right) \label{matrix1} \end{array} $$ on the left by $(1,\mp c\sqrt{u^{2}+v^{2}-c^{2}})$ and using (\ref{4402}), we get \begin{equation} \left\{ \begin{array}{ll} \displaystyle \bar{\partial}_{+}u+\lambda_{-}\bar{\partial}_{+}v =\frac{\omega\sin A\cos A}{\cos\beta}, \\[10pt] \displaystyle \bar{\partial}_{-}u+\lambda_{+}\bar{\partial}_{-}v=-\frac{\omega\sin A\cos A}{\cos\alpha}. \end{array} \right.\label{form} \end{equation} From (\ref{tau}) and (\ref{U}) we have \begin{equation}\label{72701} \bar{\partial}_{\pm}u=\frac{\cos\sigma}{\sin A}\bar{\partial}_{\pm}c+\frac{c\cos\alpha\bar{\partial}_{\pm}\beta -c\cos\beta\bar{\partial}_{\pm}\alpha}{2\sin^{2}A} \end{equation} and \begin{equation}\label{72702} \bar{\partial}_{\pm}v=\frac{\sin\sigma}{\sin A}\bar{\partial}_{\pm}c +\frac{c\sin\alpha\bar{\partial}_{\pm}\beta -c\sin\beta\bar{\partial}_{\pm}\alpha}{2\sin^{2}A}. \end{equation} Inserting (\ref{72701}) and (\ref{72702}) into (\ref{form}), we obtain \begin{equation}\label{3} \bar{\partial}_{+}c=\frac{c}{\sin2A} (\bar{\partial}_{+}\alpha-\cos2A\bar{\partial}_{+}\beta)+\omega\sin^{2}A \end{equation} and \begin{equation} \bar{\partial}_{-}c=\frac{c}{\sin2A} (\cos2A\bar{\partial}_{-}\alpha-\bar{\partial}_{-}\beta)-\omega\sin^{2}A.\label{4} \end{equation} From the second and the third equations of (\ref{PsEuler}) we have $$ \widehat{E}_x=-v\omega+\frac{\rho^{\gamma-1}}{\gamma-1}s_x\quad \mbox{and} \quad \widehat{E}_y=u\omega+\frac{\rho^{\gamma-1}}{\gamma-1}s_y. 
$$ Hence, we have \begin{equation}\label{726021} u\bar{\partial}_{+}u+v\bar{\partial}_{+}v+\kappa c\bar{\partial}_{+}c=-v\omega\cos\alpha +u\omega\sin\alpha+\frac{\rho^{\gamma-1}}{\gamma-1}\bar{\partial}_{+}s \end{equation} and \begin{equation}\label{41102} u\bar{\partial}_{-}u+v\bar{\partial}_{-}v+\kappa c\bar{\partial}_{-}c=-v\omega\cos\beta +u\omega\sin\beta+\frac{\rho^{\gamma-1}}{\gamma-1}\bar{\partial}_{-}s. \end{equation} Inserting (\ref{72701})--(\ref{72702}) into (\ref{726021}) and (\ref{41102}), we get \begin{equation} \left(\frac{1}{\sin^{2}A}+\kappa\right)\bar{\partial}_{+}c= \frac{c\cos A}{2\sin^{3}A}(\bar{\partial}_{+}\alpha-\bar{\partial}_{+}\beta) +\omega+j\bar{\partial}_{+}s\label{bB1} \end{equation} and \begin{equation} \left(\frac{1}{\sin^{2}A}+\kappa \right)\bar{\partial}_{-}c= \frac{c\cos A}{2\sin^{3}A}(\bar{\partial}_{-}\alpha-\bar{\partial}_{-}\beta) -\omega+\bar{\partial}_{-}s,\label{bB2} \end{equation} respectively. Inserting (\ref{3}) into (\ref{bB1}), we obtain \begin{equation}\label{6} c\bar{\partial}_{+}\alpha=\Omega\cos^{2}Ac\bar{\partial}_{+}\beta-\Big(\frac{\kappa\sin2A\sin^{2} A}{1+\kappa}\Big)\omega+\frac{j\sin 2A}{1+\kappa}\bar{\partial}_{+}s, \end{equation} where $$ \Omega=\frac{\kappa-1}{\kappa+1}-\tan^2 A. $$ Inserting (\ref{4}) into (\ref{bB2}), we obtain \begin{equation}\label{5} c\bar{\partial}_{-}\beta=\Omega\cos^{2}Ac\bar{\partial}_{-}\alpha-\Big(\frac{\kappa\sin2A\sin^{2} A}{1+\kappa}\Big)\omega-\frac{j\sin 2A}{1+\kappa}\bar{\partial}_{-}s. \end{equation} Combining with (\ref{3}) and (\ref{6}), we have \begin{equation}\label{192306} c\bar{\partial}_{+}\beta=-(1+\kappa)\tan A\bar{\partial}_{+}c+\omega\sin^2A\tan A+j\tan A \bar{\partial}_{+}s \end{equation} and \begin{equation}\label{192307} c\bar{\partial}_{+}\alpha=-\left(\frac{1+\kappa}{2}\right)\Omega\sin2 A\bar{\partial}_{+}c-\omega\sin^2A\tan A +j\tan A \cos 2A\bar{\partial}_{+}s. \end{equation} Combining with (\ref{4}) and (\ref{5}), we have \begin{equation}\label{192308} c\bar{\partial}_{-}\alpha=(1+\kappa)\tan A\bar{\partial}_{-}c+\omega\sin^2A\tan A-j\tan A \bar{\partial}_{-}s \end{equation} and \begin{equation}\label{192309} c\bar{\partial}_{-}\beta=\left(\frac{1+\kappa}{2}\right) \Omega\sin2 A\bar{\partial}_{-}c-\omega\sin^2A\tan A -j\tan A\cos2A \bar{\partial}_{-}s. \end{equation} Inserting (\ref{192306})--(\ref{192309}) into (\ref{72701}) and (\ref{72702}), we get \begin{equation}\label{192303} \bar{\partial}_{+}u=\kappa\sin\beta\bar{\partial}_{+}c+\omega\cos \sigma\sin A-j\sin \beta \bar{\partial}_{+}s, \end{equation} \begin{equation}\label{192304} \bar{\partial}_{-}u=-\kappa\sin\alpha\bar{\partial}_{-}c-\omega\cos \sigma\sin A+j\sin \alpha \bar{\partial}_{-}s, \end{equation} \begin{equation} \bar{\partial}_{+}v=-\kappa\cos\beta\bar{\partial}_{+}c+\omega\sin\sigma\sin A+j\cos \beta \bar{\partial}_{+}s, \end{equation} \begin{equation}\label{82505} \bar{\partial}_{-}v=\kappa\cos\alpha\bar{\partial}_{-}c-\omega\sin\sigma\sin A-j\cos \alpha \bar{\partial}_{-}s. \end{equation} From (\ref{72801}) we have \begin{equation}\label{72802} \partial_{x}=-\frac{\sin\beta\bar{\partial}_{+}-\sin\alpha\bar{\partial}_{-}}{\sin2A},\quad \partial_{y}=\frac{\cos\beta\bar{\partial}_{+}-\cos\alpha\bar{\partial}_{-}}{\sin2A}, \end{equation} and \begin{equation}\label{32403} \bar{\partial}_{+}+\bar{\partial}_{-}=2\cos A \bar{\partial}_{0}. 
\end{equation} Thus, by $ \bar{\partial}_{0}s=0$ we have \begin{equation}\label{32402} c(\bar{\partial}_{+}j+\bar{\partial}_{-}j)=j(\bar{\partial}_{+}c+\bar{\partial}_{-}c) \end{equation} and \begin{equation}\label{42001} \bar{\partial}_{+}s~=~-\bar{\partial}_{-}s. \end{equation} \subsection{Characteristic decompositions} The method of characteristic decomposition was introduced by Zheng {\it et al.} \cite{Chen1,Li1,Li-Zhang-Zheng,Li2,Li3,Li4} in inverting 2D pseudosteady rarefaction wave interactions. Chen and Qu \cite{CQ1} also used the method of characteristic decomposition to construct global solutions of 2D steady rarefaction wave interactions. These results were obtained under the assumption that the flow is isentropic and irrotational. In this paper, we are concerned with non-isentropic rotational flows. So, we need to derive some characteristic decompositions for the 2D steady full Euler system (\ref{PsEuler}). These decompositions can be seen as systems of ``ordinary differential equations'' for some first order derivatives of the solution and will be extensively used to control the bounds of the derivatives of the solution. \begin{prop} We have the commutator relations \begin{equation} \begin{array}{rcl} \bar{\partial}_{0} \bar{\partial}_{+}- \bar{\partial}_{+} \bar{\partial}_{0}= \displaystyle\frac{1}{\sin A}\Big[\big(\cos A \bar{\partial}_{+}\sigma- \bar{\partial}_{0}\alpha\big) \bar{\partial}_{0}- \big( \bar{\partial}_{+}\sigma-\cos A \bar{\partial}_{0}\alpha\big) \bar{\partial}_{+}\Big] \end{array} \label{comm1} \end{equation} and \begin{equation} \begin{array}{rcl} \bar{\partial}_{-} \bar{\partial}_{+}- \bar{\partial}_{+} \bar{\partial}_{-}= \displaystyle\frac{1}{\sin2 A}\Big[\big(\cos2 A \bar{\partial}_{+}\beta- \bar{\partial}_{-}\alpha\big) \bar{\partial}_{-}- \big( \bar{\partial}_{+}\beta-\cos2 A \bar{\partial}_{-}\alpha\big) \bar{\partial}_{+}\Big]. \end{array} \label{comm} \end{equation} \end{prop} \begin{proof} The commutator relation (\ref{comm1}) was first proved by the author in \cite{Lai5}. The commutator relation (\ref{comm}) was first given by Li and Zhang and Zheng in \cite{Li-Zhang-Zheng}. For the sake of completeness, we sketch the proof. From (\ref{tau}) and (\ref{72801}) we have \begin{equation}\label{41907} \partial_{x}=\frac{\sin\alpha\bar{\partial}_{0}-\sin\sigma\bar{\partial}_{+}}{\sin A}\quad \mbox{and}\quad \partial_{y}=-\frac{\cos\alpha\bar{\partial}_{0}-\cos\sigma\bar{\partial}_{+}}{\sin A}. \end{equation} By computation, we have $$ \begin{aligned} &\bar{\partial}_{0} \bar{\partial}_{+}- \bar{\partial}_{+} \bar{\partial}_{0}\\~=~&(\cos\sigma\partial_{x}+\sin\sigma\partial_{y})(\cos\alpha\partial_{x}+\sin\alpha\partial_{y}) -(\cos\alpha\partial_{x}+\sin\alpha\partial_{y})(\cos\sigma\partial_{x}+\sin\sigma\partial_{y}) \\~=~&(\bar{\partial}_{0}\cos\alpha)\partial_{x}+(\bar{\partial}_{0}\sin\alpha)\partial_{y} -(\bar{\partial}_{+}\cos\sigma)\partial_{x}-(\bar{\partial}_{+}\sin\sigma)\partial_{y}. \end{aligned} $$ Inserting (\ref{41907}) into this we can get (\ref{comm1}). The proof for (\ref{comm}) is similar. \end{proof} \begin{prop}For the derivative of the entropy, we have \begin{equation}\label{4105} \bar{\partial}_{0}\Big(\frac{\bar{\partial}_{+} s}{c^{\frac{\gamma+1}{\gamma-1}}}\Big) ~=~0. 
\end{equation} \end{prop} \begin{proof} From (\ref{192306}), (\ref{192308}), (\ref{32403}), (\ref{42001}), and (\ref{comm1}) we have $$ \begin{aligned} \bar{\partial}_{0}\bar{\partial}_{+} s&=\frac{1}{\sin A}\big(\cos A\bar{\partial}_{0}\alpha- \bar{\partial}_{+}\sigma\big)\bar{\partial}_{+} s \\&=\frac{1}{\sin A}\Big[\frac{1}{2}(\bar{\partial}_{+}\alpha+\bar{\partial}_{-}\alpha) -\bar{\partial}_{+}\sigma\Big]\bar{\partial}_{+} s\\&=\frac{1}{2\sin A}\Big[\bar{\partial}_{-}\alpha -\bar{\partial}_{+}\beta\Big]\bar{\partial}_{+} s\\&=\frac{1}{2c\sin A}(1+\kappa)\tan A\Big[\bar{\partial}_{-}c +\bar{\partial}_{+}c\Big]\bar{\partial}_{+} s\\&=\Big(\frac{\gamma+1}{\gamma-1}\Big)\frac{\bar{\partial}_{0}c\bar{\partial}_{+} s}{c}. \end{aligned} $$ Thus, we have $$ \bar{\partial}_{0}\Big(\frac{\bar{\partial}_{+} s}{c^{\frac{\gamma+1}{\gamma-1}}}\Big) ~=~c^{-\frac{\gamma+1}{\gamma-1}}\bar{\partial}_{0}\bar{\partial}_{+} s-\frac{\gamma+1}{\gamma-1}c^{\frac{-2\gamma}{\gamma-1}}\bar{\partial}_{0}c\bar{\partial}_{+} s~=~0. $$ We then have this proposition. \end{proof} \begin{prop}For the vorticity, we have \begin{equation}\label{72803} \begin{aligned} \bar{\partial}_{0}\Big(\frac{\omega}{\rho}\Big) =-\Big(\frac{\bar{\partial}_{+}s}{c^{\frac{\gamma+1}{\gamma-1}}}\Big)\frac{(s\gamma)^{\frac{1}{\gamma-1}} }{\gamma(\gamma-1)s}\bar{\partial}_{0}c^2. \end{aligned} \end{equation} \end{prop} \begin{proof} From the second and the third equations of (\ref{PsEuler}), we have \begin{equation}\label{192301} u\omega_x+v\omega_y+(u_x+v_y)\omega+\tau^{-\gamma}\big(\tau_y s_x-\tau_x s_y\big)=0. \end{equation} From (\ref{tau}), (\ref{42001}), and (\ref{192303})--(\ref{72802}) we have \begin{equation}\label{41105} \begin{aligned} u_x+v_y&=\frac{1}{\sin 2A}\Big(\sin\alpha\bar{\partial}_{-}u-\sin\beta\bar{\partial}_{+}u+ \cos\beta\bar{\partial}_{+}v-\cos\alpha\bar{\partial}_{-}v\Big)\\&= -\frac{\kappa}{\sin 2A}(\bar{\partial}_{+}c+\bar{\partial}_{-}c)\\&=- \frac{\kappa}{\sin A}\bar{\partial}_{0}c \end{aligned} \end{equation} and \begin{equation}\label{41106} \begin{aligned} &\tau_y s_x-\tau_x s_y\\[4pt] =&\Bigg\{\Big(\frac{\cos\alpha+\cos\beta}{\sin2A}\Big)\frac{\sin\beta\bar{\partial}_{+}\tau -\sin\alpha\bar{\partial}_{-}\tau}{\sin2A}- \Big(\frac{\sin\alpha+\sin\beta}{\sin2A}\Big)\frac{\cos\beta\bar{\partial}_{+}\tau -\cos\alpha\bar{\partial}_{-}\tau}{\sin2A}\Bigg\}\bar{\partial}_{+}s \\=&-\frac{\bar{\partial}_{+}s}{\sin 2A}(\bar{\partial}_{+}\tau+\bar{\partial}_{-}\tau) \\=&- \frac{\bar{\partial}_{+}s}{\sin A}\bar{\partial}_{0}\tau. \end{aligned} \end{equation} Inserting (\ref{41105}) and (\ref{41106}) into (\ref{192301}), we get \begin{equation}\label{4801} \begin{aligned} \bar{\partial}_{0}\omega&~=~\frac{\kappa\omega}{c}\bar{\partial}_{0}c+ \tau^{-\gamma}\frac{\bar{\partial}_{+}s}{c}\bar{\partial}_{0}\tau~=~ \frac{\kappa\omega}{c}\bar{\partial}_{0}c-2j\bar{\partial}_{+}s\frac{\bar{\partial}_{0}c}{c}. \end{aligned} \end{equation} Consequently, we get $$ \begin{aligned} \bar{\partial}_{0}\Big(\frac{\omega}{\rho}\Big)&=\frac{1}{\rho}\bar{\partial}_{0}\omega -\frac{\omega}{\rho^2}\bar{\partial}_{0}\rho\\&= \frac{\kappa\omega}{\rho c}\bar{\partial}_{0}c-2j\bar{\partial}_{+}s\frac{\bar{\partial}_{0}c}{\rho c}-\frac{\omega}{\rho^2}\bar{\partial}_{0}\rho \\&=-2j\bar{\partial}_{+}s\frac{\bar{\partial}_{0}c}{\rho c}~=~-\Big(\frac{\bar{\partial}_{+}s}{c^{\frac{\gamma+1}{\gamma-1}}}\Big)\frac{(s\gamma)^{\frac{1}{\gamma-1}} }{\gamma(\gamma-1)s}\bar{\partial}_{0}c^2. \end{aligned} $$ We then have this proposition. 
\end{proof} \begin{prop}\label{10604} We have the characteristic decompositions: \begin{equation}\label{cd} \left\{ \begin{array}{ll} \begin{array}{rcl} \displaystyle c\bar{\partial}_{-}\Big(\bar{\partial}_{+}c-\frac{j\bar{\partial}_{+}s}{\kappa}\Big)&=&\displaystyle \frac{(\gamma+1)\bar{\partial}_{+}c\bar{\partial}_{+}c}{2(\gamma-1)\cos^2A}+\frac{(\gamma+1)-2\sin^2 2A}{2(\gamma-1)\cos^2 A} \bar{\partial}_{-}c\bar{\partial}_{+}c\\[14pt]&&\displaystyle +(H_{11}\bar{\partial}_{-}c+H_{12}\bar{\partial}_{+}c)\omega\sin^2A+(H_{13}\bar{\partial}_{-}c+H_{14}\bar{\partial}_{+}c) j\bar{\partial}_{+}s\\ [8pt]&&\displaystyle +H_{15}(j\bar{\partial}_{+}s)^2+ H_{16}\omega\sin^2Aj\bar{\partial}_{+}s, \end{array} \\[38pt] \begin{array}{rcl} \displaystyle c\bar{\partial}_{+}\Big(\bar{\partial}_{-}c-\frac{j\bar{\partial}_{-}s}{\kappa}\Big)&=&\displaystyle \frac{(\gamma+1)\bar{\partial}_{-}c \bar{\partial}_{-}c}{2(\gamma-1)\cos^2A}+\frac{(\gamma+1)-2\sin^2 2A}{2(\gamma-1)\cos^2 A} \bar{\partial}_{+}c \bar{\partial}_{-}c\\[14pt]&&\displaystyle +(H_{21}\bar{\partial}_{-}c+H_{22}\bar{\partial}_{+}c)\omega\sin^2A+(H_{23}\bar{\partial}_{-}c+H_{24}\bar{\partial}_{+}c) j\bar{\partial}_{+}s\\ [8pt]&&\displaystyle +H_{25}(j\bar{\partial}_{+}s)^2+ H_{26}\omega\sin^2Aj\bar{\partial}_{+}s, \end{array} \end{array} \right. \end{equation} where $H_{ij}=\frac{{f_{ij}}}{\cos ^2 A}$, and ${f_{ij}}$ are polynomial forms in terms of $\sin A$. \end{prop} \begin{proof} From (\ref{comm}) we have $$ \begin{array}{rcl} \bar{\partial}_{-} \bar{\partial}_{+}u- \bar{\partial}_{+} \bar{\partial}_{-}u= \displaystyle\frac{1}{\sin2 A}\Big[\big(\cos2 A \bar{\partial}_{+}\beta- \bar{\partial}_{-}\alpha\big) \bar{\partial}_{-}u- \big( \bar{\partial}_{+}\beta-\cos2 A \bar{\partial}_{-}\alpha\big) \bar{\partial}_{+}u\Big]. \end{array} $$ Inserting (\ref{192303}) and (\ref{192304}) into this, we have $$ \begin{array}{rcl} &&\bar{\partial}_{-}\big[\kappa\sin\beta\bar{\partial}_{+}c+\omega\cos \sigma\sin A-j\sin \beta \bar{\partial}_{+}s]\\[6pt]&&\quad+ \bar{\partial}_{+}\big[\kappa\sin\alpha\bar{\partial}_{-}c+\omega\cos \sigma\sin A-j\sin \alpha \bar{\partial}_{-}s\big] \\[6pt]&=&\displaystyle\frac{1}{\sin2 A}\Big[( \bar{\partial}_{-}\alpha- \cos2 A\bar{\partial}_{+}\beta)(\kappa\sin\alpha\bar{\partial}_{-}c+\omega\cos \sigma\sin A-j\sin \alpha \bar{\partial}_{-}s)\\[8pt]&&\displaystyle\qquad\quad-(\bar{\partial}_{+}\beta-\cos2 A \bar{\partial}_{-}\alpha)(\kappa\sin\beta\bar{\partial}_{+}c+\omega\cos \sigma\sin A-j\sin \beta \bar{\partial}_{+}s)\Big]. 
\end{array} $$ Thus, we have \begin{equation}\label{32401} \begin{aligned} &\kappa\sin\beta\bar{\partial}_{-}\bar{\partial}_{+}c+\kappa\sin\alpha\bar{\partial}_{+}\bar{\partial}_{-}c +\kappa\cos\beta\bar{\partial}_{-}\beta\bar{\partial}_{+}c+\kappa\cos\alpha\bar{\partial}_{+}\alpha\bar{\partial}_{-}c \\&+\bar{\partial}_{-}(\omega\cos \sigma\sin A)+ \bar{\partial}_{+}(\omega\cos \sigma\sin A)-\bar{\partial}_{-}\big(j\sin \beta \bar{\partial}_{+}s\big) -\bar{\partial}_{+}\big(j\sin \alpha \bar{\partial}_{-}s\big) \\~=~&\frac{\kappa}{\sin2 A}\Big[(\sin\alpha \bar{\partial}_{-}\alpha-\sin\alpha \cos2 A\bar{\partial}_{+}\beta)\bar{\partial}_{-}c-(\sin\beta\bar{\partial}_{+}\beta-\sin\beta\cos2 A \bar{\partial}_{-}\alpha)\bar{\partial}_{+}c\Big]\\& -\frac{\omega\cos\sigma\sin A}{\sin2A}\Big[ \cos2 A\bar{\partial}_{+}\beta- \bar{\partial}_{-}\alpha+\bar{\partial}_{+}\beta-\cos2 A \bar{\partial}_{-}\alpha\Big]\\&+ \frac{1}{\sin2 A}\Big[(\cos2 A\bar{\partial}_{+}\beta- \bar{\partial}_{-}\alpha)j\sin \alpha \bar{\partial}_{-}s+(\bar{\partial}_{+}\beta-\cos2 A \bar{\partial}_{-}\alpha)j\sin \beta \bar{\partial}_{+}s\Big], \end{aligned} \end{equation} Inserting $$ \begin{array}{rcl} \bar{\partial}_{+} \bar{\partial}_{-}c~=~\bar{\partial}_{-} \bar{\partial}_{+}c- \displaystyle\frac{1}{\sin2 A}\Big[\big(\cos2 A \bar{\partial}_{+}\beta- \bar{\partial}_{-}\alpha\big) \bar{\partial}_{-}c- \big( \bar{\partial}_{+}\beta-\cos2 A \bar{\partial}_{-}\alpha\big) \bar{\partial}_{+}c\Big] \end{array} $$ into (\ref{32401}), we get \begin{equation}\label{192305} \begin{aligned} &(\sin\alpha+\sin\beta)c\bar{\partial}_{-}\bar{\partial}_{+}c\\~=~& -\frac{c}{\sin2 A}\Big[(\sin\beta+\sin\alpha)\bar{\partial}_{+}\beta-(\sin\alpha+\sin\beta)\cos2 A \bar{\partial}_{-}\alpha+\cos\beta\sin2A\bar{\partial}_{-}\beta\Big]\bar{\partial}_{+}c\\& -c\cos\alpha\bar{\partial}_{+}\alpha\bar{\partial}_{-}c -\frac{c\omega\cos\sigma\sin A}{\kappa\sin2A}\Big[ \cos2 A\bar{\partial}_{+}\beta- \bar{\partial}_{-}\alpha+\bar{\partial}_{+}\beta-\cos2 A \bar{\partial}_{-}\alpha\Big]\\&-\frac{c}{\kappa}\Big[\bar{\partial}_{-}(\omega\cos \sigma\sin A)+ \bar{\partial}_{+}(\omega\cos \sigma\sin A)\Big]+\frac{c}{\kappa}\Big[\bar{\partial}_{-}\big(j\sin \beta \bar{\partial}_{+}s\big) +\bar{\partial}_{+}\big(j\sin \alpha \bar{\partial}_{-}s\big)\Big]\\&+ \frac{c}{\kappa\sin2 A}\Big[(\cos2 A\bar{\partial}_{+}\beta- \bar{\partial}_{-}\alpha)j\sin \alpha \bar{\partial}_{-}s+(\bar{\partial}_{+}\beta-\cos2 A \bar{\partial}_{-}\alpha)j\sin \beta \bar{\partial}_{+}s\Big]. \end{aligned} \end{equation} In what follows, we are going to compute the right part of (\ref{192305}) item by item. 
\vskip 6pt {\it Part 1.} Using (\ref{192306})--(\ref{192309}), we have \begin{equation}\label{4803} \begin{aligned} &-c\cos\alpha\bar{\partial}_{+}\alpha\bar{\partial}_{-}c\\ &-\frac{c}{\sin2 A}\Big[(\sin\beta+\sin\alpha)\bar{\partial}_{+}\beta-(\sin\alpha+\sin\beta)\cos2 A \bar{\partial}_{-}\alpha+\cos\beta\sin2A\bar{\partial}_{-}\beta\Big]\bar{\partial}_{+}c \\~=~&\Big(\frac{1+\kappa}{2}\Big)\Omega\cos\alpha\sin 2A\bar{\partial}_{+}c\bar{\partial}_{-}c+\frac{1}{\sin2 A}\Big[(1+\kappa)(\sin\beta+\sin\alpha)\tan A\bar{\partial}_{+}c\\&+(1+\kappa)(\sin\alpha+\sin\beta)\tan A\cos2 A \bar{\partial}_{-}c-\Big(\frac{1+\kappa}{2}\Big)\Omega\cos\beta\sin^2 2A\bar{\partial}_{-}c\Big]\bar{\partial}_{+}c\\&+\omega\cos\alpha\sin^2A\tan A\bar{\partial}_{-}c -\cos\alpha\tan A\cos 2A j\bar{\partial}_{+}s\bar{\partial}_{-}c \\~&-\frac{\omega\sin^2A\tan A}{\sin2 A}\Big[(\sin\beta+\sin\alpha)-(\sin\alpha+\sin\beta)\cos2 A -\cos\beta\sin2A\Big]\bar{\partial}_{+}c \\~&-\frac{j\tan A\bar{\partial}_{+}s}{\sin2 A}\Big[(\sin\beta+\sin\alpha)-(\sin\alpha+\sin\beta)\cos2 A +\cos\beta\sin2A\cos 2A\Big]\bar{\partial}_{+}c. \end{aligned} \end{equation} By (\ref{tau}) and $\sin\alpha+\sin\beta=2\sin\sigma\cos A$, we have \begin{equation} \begin{aligned} &\frac{1}{\sin\alpha+\sin\beta}\Bigg\{\Big(\frac{1+\kappa}{2}\Big)\Omega\cos\alpha\sin 2A\bar{\partial}_{+}c\bar{\partial}_{-}c+\frac{1}{\sin2 A}\Big[(1+\kappa)(\sin\beta+\sin\alpha)\tan A\bar{\partial}_{+}c\\&\qquad\qquad+(1+\kappa)(\sin\alpha+\sin\beta)\tan A\cos2 A \bar{\partial}_{-}c-\Big(\frac{1+\kappa}{2}\Big)\Omega\cos\beta\sin^2 2A\bar{\partial}_{-}c\Big]\bar{\partial}_{+}c\Bigg\}\\~=~&\frac{(\gamma+1)\bar{\partial}_{+}c\bar{\partial}_{+}c}{2(\gamma-1)\cos^2A}+ \Bigg(\frac{(1+\kappa)\cos 2A }{2\cos^2 A}-(1+\kappa)\Omega \sin^2 A\Bigg)\bar{\partial}_{+}c\bar{\partial}_{-}c \\~=~&\frac{(\gamma+1)\bar{\partial}_{+}c\bar{\partial}_{+}c}{2(\gamma-1)\cos^2A} +\Bigg(\frac{\gamma-3}{\gamma-1}\sin^2 A+\frac{(\gamma+1)\sin^4 A}{(\gamma-1)\cos^2 A}+\frac{(\gamma+1)\cos 2 A}{2(\gamma-1)\cos^2 A}\Bigg)\bar{\partial}_{+}c\bar{\partial}_{-}c \\ ~=~&\frac{(\gamma+1)\bar{\partial}_{+}c\bar{\partial}_{+}c}{2(\gamma-1)\cos^2A}+\frac{(\gamma+1)-2\sin^2 2A}{2(\gamma-1)\cos^2 A} \bar{\partial}_{-}c\bar{\partial}_{+}c. \end{aligned} \end{equation} From (\ref{tau}), we also have \begin{equation} \begin{aligned} &(\sin\beta+\sin\alpha)-(\sin\alpha+\sin\beta)\cos2 A -\cos\beta\sin2A\\ =&\sin\beta-\sin\alpha\cos2 A=-\cos \alpha\sin 2A \end{aligned} \end{equation} and \begin{equation} \begin{aligned} &(\sin\beta+\sin\alpha)-(\sin\alpha+\sin\beta)\cos2 A +\cos\beta\sin2A\cos 2A\\~=~& (\sin\beta+\sin\alpha)+\cos 2A(\cos\beta\sin2A-\sin\alpha-\sin\beta) \\~=~& (\sin\beta+\sin\alpha)+\cos 2A(-\sin\beta\cos2A-\sin\beta) \\~=~& \sin\beta\sin^2 2A+ \sin 2A \cos \beta ~=~(\sin\beta+\sin\alpha)\sin^2 2A +\sin 2A \cos \alpha\cos 2A. 
\end{aligned} \end{equation} Thus, we have \begin{equation} \begin{aligned} &\omega\cos\alpha\sin^2A\tan A\bar{\partial}_{-}c \\&-\frac{\omega\sin^2A\tan A}{\sin2 A}\Big[(\sin\beta+\sin\alpha)-(\sin\alpha+\sin\beta)\cos2 A -\cos\beta\sin2A\Big]\bar{\partial}_{+}c\\~=~&\omega\cos\alpha\sin^2A\tan A(\bar{\partial}_{-}c+\bar{\partial}_{+}c) \\~=~&\omega\cos\sigma\cos A\sin^2A\tan A(\bar{\partial}_{-}c+\bar{\partial}_{+}c)-\omega\sin\sigma\sin A\sin^2A\tan A(\bar{\partial}_{-}c+\bar{\partial}_{+}c) \end{aligned} \end{equation} and \begin{equation} \begin{aligned} &-\cos\alpha\tan A\cos 2A j\bar{\partial}_{+}s\bar{\partial}_{-}c \\~&-\frac{j\tan A\bar{\partial}_{+}s}{\sin2 A}\Big[(\sin\beta+\sin\alpha)-(\sin\alpha+\sin\beta)\cos2 A +\cos\beta\sin2A\cos 2A\Big]\bar{\partial}_{+}c\\~=~&-\cos\alpha\tan A\cos 2A j\bar{\partial}_{+}s(\bar{\partial}_{-}c+\bar{\partial}_{+}c) -\tan A\sin2 A(\sin\beta+\sin\alpha)j\bar{\partial}_{+}s\bar{\partial}_{+}c\\~=~& -\cos\sigma\sin A\cos 2A j\bar{\partial}_{+}s(\bar{\partial}_{-}c+\bar{\partial}_{+}c)+\sin\sigma\sin A\tan A\cos 2A j\bar{\partial}_{+}s(\bar{\partial}_{-}c+\bar{\partial}_{+}c)\\&-\tan A\sin2 A(\sin\beta+\sin\alpha)j\bar{\partial}_{+}s\bar{\partial}_{+}c. \end{aligned} \end{equation} {\it Part 2.} Using (\ref{192306}) and (\ref{192308}), we have \begin{equation} \begin{aligned} &-\frac{c\omega\cos\sigma\sin A}{\kappa\sin2A}\Big[ \cos2 A\bar{\partial}_{+}\beta- \bar{\partial}_{-}\alpha+\bar{\partial}_{+}\beta-\cos2 A \bar{\partial}_{-}\alpha\Big]\\~=~&-\frac{(1+\kappa)\omega\cos\sigma\sin A\tan A}{\kappa\sin2A}\big[ -\cos2 A\bar{\partial}_{+}c- \bar{\partial}_{-}c-\bar{\partial}_{+}c-\cos2 A \bar{\partial}_{-}c\big]\\~=~& \frac{(1+\kappa)\omega\cos\sigma\sin A}{\kappa}(\bar{\partial}_{-}c+\bar{\partial}_{+}c). \end{aligned} \end{equation} {\it Part 3.} Using (\ref{4801}), we have \begin{equation}\label{32404} \begin{aligned} &-\frac{c}{\kappa}\cos \sigma\sin A\big(\bar{\partial}_{-}\omega+ \bar{\partial}_{+}\omega\big)\\~=~& -\frac{2c}{\kappa}\cos \sigma\sin A\cos A\bar{\partial}_0\omega\\~=~& -\frac{2c}{\kappa}\cos \sigma\sin A\cos A\left[\frac{\kappa}{2c\cos A}(\bar{\partial}_{+}c+\bar{\partial}_{-}c)\omega-\frac{j\bar{\partial}_{+}s}{c\cos A}(\bar{\partial}_{+}c+\bar{\partial}_{-}c)\right] \\~=~&-\cos \sigma\sin A(\bar{\partial}_{+}c+\bar{\partial}_{-}c)\omega+\frac{2}{\kappa}\cos \sigma\sin A (\bar{\partial}_{+}c+\bar{\partial}_{-}c)j\bar{\partial}_{+}s. \end{aligned} \end{equation} By a direct computation, we have \begin{equation}\label{4902} \begin{aligned} &(1+\kappa)\tan A+\Big(\frac{1+\kappa}{2}\Big)\Omega\sin 2A\\=&(1+\kappa)\tan A +\Big(\frac{1+\kappa}{2}\Big)\Big(\frac{\kappa-1}{\kappa+1}-\tan^2 A\Big)\sin 2A=2\kappa \sin A \cos A \end{aligned} \end{equation} and $$ (1+\kappa)\tan A-\Big(\frac{1+\kappa}{2}\Big)\Omega\sin 2A=2(1+\kappa)\tan A-2\kappa \sin A \cos A. 
$$ Therefore, by (\ref{192306})--(\ref{192309}) we have \begin{equation} \begin{aligned} &-\frac{c\omega}{\kappa}\big[\bar{\partial}_{-}(\cos \sigma\sin A)+ \bar{\partial}_{+}(\cos \sigma\sin A)\big]\\~=~& \frac{c\omega\sin\sigma\sin A}{\kappa}\big[\bar{\partial}_{-}\sigma+ \bar{\partial}_{+}\sigma\big]-\frac{c\omega\cos\sigma\cos A}{\kappa}\big[\bar{\partial}_{-}A+ \bar{\partial}_{+}A\big] \\~=~& \frac{c\omega\sin\sigma\sin A}{2\kappa}\big[\bar{\partial}_{-}(\alpha+\beta)+ \bar{\partial}_{+}(\alpha+\beta)\big]-\frac{c\omega\cos\sigma\cos A}{2\kappa}\big[\bar{\partial}_{-}(\alpha-\beta)+ \bar{\partial}_{+}(\alpha-\beta)\big] \\~=~& \frac{\omega\sin\sigma\sin A}{2\kappa}\Big[(1+\kappa)\tan A+\Big(\frac{1+\kappa}{2}\Big)\Omega\sin 2A\Big](\bar{\partial}_{-}c-\bar{\partial}_{+}c)\\&+\frac{\omega\sin\sigma\sin A(1+\cos2A)\tan A}{\kappa}j\bar{\partial}_{+}s\\&- \frac{\omega\cos\sigma\cos A}{2\kappa}\Big[(1+\kappa)\tan A-\Big(\frac{1+\kappa}{2}\Big)\Omega\sin 2A\Big](\bar{\partial}_{-}c+\bar{\partial}_{+}c) \\~=~& \omega\sin\sigma\sin^2 A\cos A(\bar{\partial}_{-}c-\bar{\partial}_{+}c)+\frac{2\omega\sin\sigma\sin A\cos^2A\tan A}{\kappa}j\bar{\partial}_{+}s\\&+\omega\Big[- \frac{(1+\kappa)\cos\sigma\sin A}{\kappa}+\cos\sigma\cos^2 A \sin A\Big](\bar{\partial}_{-}c+\bar{\partial}_{+}c) \end{aligned} \end{equation} {\it Part 4.} By (\ref{comm}), (\ref{32402}), and (\ref{42001}) we have \begin{equation}\label{42101} \begin{aligned} &\frac{c}{\kappa}\big[\bar{\partial}_{-}\big(j\sin \beta \bar{\partial}_{+}s\big) +\bar{\partial}_{+}\big(j\sin \alpha \bar{\partial}_{-}s\big)\big]\\~=~& \frac{c}{\kappa}\Big[\sin \beta\bar{\partial}_{-}j \bar{\partial}_{+}s+\sin \alpha\bar{\partial}_{+}j\bar{\partial}_{-}s +j\cos\beta\bar{\partial}_{-}\beta\bar{\partial}_{+}s+j\cos\alpha\bar{\partial}_{+}\alpha\bar{\partial}_{-}s\\& \quad +j\sin\beta\bar{\partial}_{-}\bar{\partial}_{+}s+j\sin\alpha\bar{\partial}_{+}\bar{\partial}_{-}s+\sin \alpha\bar{\partial}_{-}j\bar{\partial}_{+}s+\sin \alpha\bar{\partial}_{-}j\bar{\partial}_{-}s\Big] \\~=~&-\frac{j}{\kappa}\sin\alpha(\bar{\partial}_{+}c+\bar{\partial}_{-}c)\bar{\partial}_{+}s+\frac{c}{\kappa} (\sin\alpha+\sin\beta)\bar{\partial}_{-}j\bar{\partial}_{+}s\\&~+ \frac{c}{\kappa}\Big[j\cos\beta\bar{\partial}_{-}\beta\bar{\partial}_{+}s+j\cos\alpha\bar{\partial}_{+} \alpha\bar{\partial}_{-}s\Big]+\frac{c}{\kappa}(\sin\alpha+\sin\beta)j\bar{\partial}_{-}\bar{\partial}_{+}s\\& -\frac{cj\sin\alpha}{\kappa\sin2 A}\Big[(\cos2 A\bar{\partial}_{+}\beta- \bar{\partial}_{-}\alpha) \bar{\partial}_{-}s-(\bar{\partial}_{+}\beta-\cos2 A \bar{\partial}_{-}\alpha) \bar{\partial}_{+}s\Big]\\~=~& -\frac{j}{\kappa}(\sin\sigma\cos A+\cos\sigma\sin A)(\bar{\partial}_{+}c+\bar{\partial}_{-}c)\bar{\partial}_{+}s+ \frac{c}{\kappa}(\sin\alpha+\sin\beta)\bar{\partial}_{-}\big(j\bar{\partial}_{+}s\big)\\&+ \frac{c}{\kappa}\Big[j\cos\beta\bar{\partial}_{-}\beta\bar{\partial}_{+}s+j\cos\alpha\bar{\partial}_{+} \alpha\bar{\partial}_{-}s\Big]\\&-\frac{cj\sin\alpha}{\kappa\sin2 A}\Big[(\cos2 A\bar{\partial}_{+}\beta- \bar{\partial}_{-}\alpha) \bar{\partial}_{-}s-(\bar{\partial}_{+}\beta-\cos2 A \bar{\partial}_{-}\alpha) \bar{\partial}_{+}s\Big]. 
\end{aligned} \end{equation} Noticing the last parts of (\ref{192305}) and (\ref{42101}) and using (\ref{192306}) and (\ref{192308}), we have \begin{equation} \begin{aligned} &\frac{cj}{\kappa\sin2 A}(\bar{\partial}_{+}\beta-\cos2 A \bar{\partial}_{-}\alpha) \bar{\partial}_{+}s\\=&\frac{j\bar{\partial}_{+}s}{\kappa\cos ^2 A} \Bigg\{-\frac{1+\kappa}{2}(\bar{\partial}_{+}c+\cos 2A \bar{\partial}_{-}c)+(1-\cos 2A)\frac{\omega\sin^2 A}{2}+(1-\cos 2A)\frac{j\bar{\partial}_{+}s}{2} \Bigg\}. \end{aligned} \end{equation} Using (\ref{192307}) and (\ref{192309}), we have \begin{equation}\label{4101} \begin{aligned} &\frac{cj}{\kappa}\Big[\cos\beta\bar{\partial}_{-}\beta\bar{\partial}_{+}s+\cos\alpha\bar{\partial}_{+} \alpha\bar{\partial}_{-}s\Big]\\~=~&\frac{cj\bar{\partial}_{+}s}{\kappa}\Big[ \cos\beta\bar{\partial}_{-}\beta-\cos\alpha\bar{\partial}_{+} \alpha\Big] \\~=~&\frac{cj\bar{\partial}_{+}s}{\kappa}\cos\sigma\cos A\big[ \bar{\partial}_{-}\beta-\bar{\partial}_{+}\alpha\big]+\frac{cj\bar{\partial}_{+}s}{\kappa}\sin\sigma\sin A\big[ \bar{\partial}_{-}\beta+\bar{\partial}_{+} \alpha\big]\\~=~& \frac{j\bar{\partial}_{+}s}{\kappa}\cos\sigma\cos A\Big(\frac{1+\kappa}{2}\Big)\Big(\frac{\kappa-1}{\kappa+1}-\tan^2 A\Big)\sin 2A\big[ \bar{\partial}_{-}c+\bar{\partial}_{+}c\big]\\&+ \frac{j\bar{\partial}_{+}s}{\kappa}\sin\sigma\sin A\Big(\frac{1+\kappa}{2}\Big)\Big(\frac{\kappa-1}{\kappa+1}-\tan^2 A\Big)\sin 2A\big[ \bar{\partial}_{-}c-\bar{\partial}_{+}c\big]\\&+\frac{j\bar{\partial}_{+}s}{\kappa}\sin\sigma\sin A\Big[-2\omega\sin^2A\tan A+2j\tan A \cos 2A \bar{\partial}_{+}s\Big] \\~=~& \left[\frac{(\kappa-1)j\bar{\partial}_{+}s}{\kappa}\cos\sigma\sin A\cos^2 A-\frac{(\kappa+1)j\bar{\partial}_{+}s}{\kappa}\cos\sigma\sin^3 A\right]\big( \bar{\partial}_{-}c+\bar{\partial}_{+}c\big)\\&+ \frac{j\bar{\partial}_{+}s}{\kappa}\sin\sigma\sin A\Big(\frac{1+\kappa}{2}\Big)\Big(\frac{\kappa-1}{\kappa+1}-\tan^2 A\Big)\sin 2A\big[ \bar{\partial}_{-}c-\bar{\partial}_{+}c\big]\\&+\frac{j\bar{\partial}_{+}s}{\kappa}\sin\sigma\sin A\Big[-2\omega\sin^2A\tan A+2j\tan A \cos 2A \bar{\partial}_{+}s\Big]. \end{aligned} \end{equation} Therefore, combining with (\ref{192305})--(\ref{4101}) and using $\sin\alpha+\sin\beta=2\sin\sigma\cos A$, we can get the first equation of (\ref{cd}). (Remark: all the parts that contain $\cos\sigma$ can be eliminated.) The proof for the other is similar; we omit the details. We then have Proposition \ref{10604}. 
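Before closing the proof, we note that the purely trigonometric identities (\ref{4902}) used above are easy to verify independently. A minimal numerical spot-check (Python/NumPy; the test values of $\kappa$ and $A$ below are arbitrary and are not data from the paper) is:
\begin{verbatim}
# Spot-check of the two identities in (4902), with Omega = (k-1)/(k+1) - tan(A)^2.
import numpy as np

rng = np.random.default_rng(0)
for _ in range(5):
    k = rng.uniform(1.1, 5.0)        # arbitrary kappa > 1
    A = rng.uniform(0.05, 1.4)       # arbitrary 0 < A < pi/2
    Omega = (k - 1)/(k + 1) - np.tan(A)**2
    lhs1 = (1 + k)*np.tan(A) + 0.5*(1 + k)*Omega*np.sin(2*A)
    lhs2 = (1 + k)*np.tan(A) - 0.5*(1 + k)*Omega*np.sin(2*A)
    assert abs(lhs1 - 2*k*np.sin(A)*np.cos(A)) < 1e-10
    assert abs(lhs2 - (2*(1 + k)*np.tan(A) - 2*k*np.sin(A)*np.cos(A))) < 1e-10
print("identities (4902) verified at random sample points")
\end{verbatim}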
\end{proof} In terms of the variables $R_{\pm}$ defined in (\ref{1973101}), system (\ref{cd}) can be written as \begin{equation}\label{cd6} \left\{ \begin{array}{ll} \begin{array}{rcl} \displaystyle c\bar{\partial}_{-}R_{+}&=&\displaystyle \frac{(\gamma+1)R_{+}^2}{2(\gamma-1)\cos^2A}+\frac{(\gamma+1)-2\sin^2 2A}{2(\gamma-1)\cos^2 A} R_{+}R_{-}\\[14pt]&&\displaystyle +(\widetilde{H}_{11}R_{-}+\widetilde{H}_{12}R_{+})\omega\sin^2A+(\widetilde{H}_{13}R_{-}+\widetilde{H}_{14}R_{+}) j\bar{\partial}_{+}s\\ [8pt]&&\displaystyle +\widetilde{H}_{15}(j\bar{\partial}_{+}s)^2+ \widetilde{H}_{16}\omega\sin^2Aj\bar{\partial}_{+}s, \end{array} \\[38pt] \begin{array}{rcl} \displaystyle c\bar{\partial}_{+}R_{-}&=&\displaystyle \frac{(\gamma+1)R_{-}^2}{2(\gamma-1)\cos^2A}+\frac{(\gamma+1)-2\sin^2 2A}{2(\gamma-1)\cos^2 A} R_{-}R_{+}\\[14pt]&&\displaystyle +(\widetilde{H}_{21}R_{-}+\widetilde{H}_{22}R_{+})\omega\sin^2A+(\widetilde{H}_{23}R_{-}+\widetilde{H}_{24}R_{+}) j\bar{\partial}_{+}s\\ [8pt]&&\displaystyle +\widetilde{H}_{25}(j\bar{\partial}_{+}s)^2+ \widetilde{H}_{26}\omega\sin^2Aj\bar{\partial}_{+}s, \end{array} \end{array} \right. \end{equation} where $\widetilde{H}_{ij}=\frac{\widetilde{f_{ij}}}{\cos ^2 A}$, and $\widetilde{f_{ij}}$ are polynomial forms in terms of $\sin A$. Define \begin{equation} \delta_1:=\frac{\omega}{\rho}\quad \mbox{and}\quad \delta_2:=\frac{\bar{\partial}_{+}s}{c^{\frac{\gamma+1}{\gamma-1}}}. \end{equation} Then we have $$ \omega\sin^2A=\frac{ \delta_1 c^{\frac{2\gamma}{\gamma-1}}}{q^2(\gamma s)^{\frac{1}{\gamma-1}}}\quad\mbox{and}\quad j\bar{\partial}_{+}s=\frac{\delta_2 c^{\frac{2\gamma}{\gamma-1}}}{\gamma(\gamma-1)s}. $$ Thus, (\ref{cd6}) can be written as \begin{equation}\label{cd8} \left\{ \begin{array}{ll} \begin{array}{rcl} \displaystyle c\bar{\partial}_{-}R_{+}&=&\displaystyle \frac{(\gamma+1)R_{+}^2}{2(\gamma-1)\cos^2A}+\frac{(\gamma+1)-2\sin^2 2A}{2(\gamma-1)\cos^2 A} R_{+}R_{-}\\[12pt]&&\displaystyle +(\widetilde{H}_{11}R_{-}+\widetilde{H}_{12}R_{+}) \frac{ \delta_1 c^{\frac{2\gamma}{\gamma-1}}}{q^2(\gamma s)^{\frac{1}{\gamma-1}}}+(\widetilde{H}_{13}R_{-}+\widetilde{H}_{14}R_{+}) \frac{\delta_2 c^{\frac{2\gamma}{\gamma-1}}}{\gamma(\gamma-1)s}\\ [8pt]&&\displaystyle +\widetilde{H}_{15} \frac{\delta_2^2 c^{\frac{4\gamma}{\gamma-1}}}{\gamma^2(\gamma-1)^2s^2}+ \widetilde{H}_{16}\frac{\delta_1 \delta_2 c^{\frac{4\gamma}{\gamma-1}}}{\gamma(\gamma-1)sq^2(\gamma s)^{\frac{1}{\gamma-1}}}, \end{array} \\[56pt] \begin{array}{rcl} \displaystyle c\bar{\partial}_{+}R_{-}&=&\displaystyle \frac{(\gamma+1)R_{-}^2}{2(\gamma-1)\cos^2A}+\frac{(\gamma+1)-2\sin^2 2A}{2(\gamma-1)\cos^2 A} R_{-}R_{+}\\[12pt]&&\displaystyle +(\widetilde{H}_{21}R_{-}+\widetilde{H}_{22}R_{+})\frac{\delta_1 c^{\frac{2\gamma}{\gamma-1}}}{q^2(\gamma s)^{\frac{1}{\gamma-1}}}+(\widetilde{H}_{23}R_{-}+\widetilde{H}_{24}R_{+}) \frac{\delta_2 c^{\frac{2\gamma}{\gamma-1}}}{\gamma(\gamma-1)s}\\ [8pt]&&\displaystyle +\widetilde{H}_{25} \frac{\delta_2^2 c^{\frac{4\gamma}{\gamma-1}}}{\gamma^2(\gamma-1)^2s^2}+ \widetilde{H}_{26}\frac{\delta_1 \delta_2 c^{\frac{4\gamma}{\gamma-1}}}{\gamma(\gamma-1)sq^2(\gamma s)^{\frac{1}{\gamma-1}}}. \end{array} \end{array} \right. 
\end{equation} \begin{prop} For any constant $\nu>0$, we have the decompositions: \begin{equation}\label{cd10} \left\{ \begin{array}{ll} \begin{array}{rcl} \displaystyle c\bar{\partial}_{-}\Big(\frac{R_{+}}{c^{\nu}}\Big)&=&\displaystyle \frac{1}{c^{\nu}}\Bigg\{ \frac{(\gamma+1)R_{+}^2}{2(\gamma-1)\cos^2A}+\frac{(\gamma+1)-2\sin^2 2A}{2(\gamma-1)\cos^2 A} R_{+}R_{-}-\nu R_{+}R_{-}\\[14pt]&&\displaystyle +(\widehat{H}_{11}R_{-}+\widehat{H}_{12}R_{+})\frac{\delta_1 c^{\frac{2\gamma}{\gamma-1}}}{q^2(\gamma s)^{\frac{1}{\gamma-1}}}+(\widehat{H}_{13}R_{-}+\widehat{H}_{14}R_{+}) \frac{\delta_2 c^{\frac{2\gamma}{\gamma-1}}}{\gamma(\gamma-1)s}\\ [8pt]&&\displaystyle+\widehat{H}_{15} \frac{\delta_2^2 c^{\frac{4\gamma}{\gamma-1}}}{\gamma^2(\gamma-1)^2s^2}+ \widehat{H}_{16}\frac{\delta_1 \delta_2 c^{\frac{4\gamma}{\gamma-1}}}{\gamma(\gamma-1)sq^2(\gamma s)^{\frac{1}{\gamma-1}}}\Bigg\}, \end{array} \\[56pt] \begin{array}{rcl} \displaystyle c\bar{\partial}_{+}\Big(\frac{R_{-}}{c^{\nu}}\Big)&=&\displaystyle \frac{1}{c^{\nu}}\Bigg\{\frac{(\gamma+1)R_{-}^2}{2(\gamma-1)\cos^2A}+\frac{(\gamma+1)-2\sin^2 2A}{2(\gamma-1)\cos^2 A} R_{-}R_{+}-\nu R_{+}R_{-}\\[14pt]&&\displaystyle +(\widehat{H}_{21}R_{-}+\widehat{H}_{22}R_{+})\frac{\delta_1 c^{\frac{2\gamma}{\gamma-1}}}{q^2(\gamma s)^{\frac{1}{\gamma-1}}}+(\widehat{H}_{23}R_{-}+\widehat{H}_{24}R_{+}) \frac{\delta_2 c^{\frac{2\gamma}{\gamma-1}}}{\gamma(\gamma-1)s}\\ [8pt]&&\displaystyle+\widehat{H}_{25} \frac{\delta_2^2 c^{\frac{4\gamma}{\gamma-1}}}{\gamma^2(\gamma-1)^2s^2}+ \widehat{H}_{26}\frac{\delta_1 \delta_2 c^{\frac{4\gamma}{\gamma-1}}}{\gamma(\gamma-1)sq^2(\gamma s)^{\frac{1}{\gamma-1}}}\Bigg\}, \end{array} \end{array} \right. \end{equation} where $\widehat{H}_{ij}=\frac{\widehat{f_{ij}}}{\cos ^2 A}$, and $\widehat{f_{ij}}$ are polynomial forms in terms of $\sin A$. \end{prop} \begin{proof} The characteristic decompositions (\ref{cd6}) can be obtained directly by (\ref{cd8}). We omit the details. \end{proof} \section{\bf Global supersonic flows in the semi-infinite divergent duct} \subsection{Simple waves adjacent to the constant state $(u_0, 0, \rho_0, s_0)$} If the incoming supersonic flow is a uniform flow $(u_0, 0, \rho_0, s_0)$, then by the result of Courant and Friedrichs \cite{CF} (see Chapter IV.B) we know that there is a simple wave $\it S_{+}$ ($\it S_{-}$, resp.) with straight $C_{+}$ ($C_{-}$, resp.) characteristics issuing from lower wall $W_{-}$ (upper wall $W_{+}$, resp.); see Figure \ref{Fig25} (right). Let $$ A_0=\arcsin\frac{c_0}{u_0}, \quad c_*^2=\mu^2u_0^2+(1-\mu^2)c_0^2 ,\quad \theta_*=A_0-\frac{\pi}{2}+\frac{1}{\mu}\arctan\big(\mu\sqrt{(u_0/c_0)^2-1}\big),$$ and $$ \left\{ \begin{array}{ll} \displaystyle\frac{\bar{u}(\theta)}{c_{*}}=\cos\mu(\theta-\theta_*)\cos \theta+\mu^{-1}\sin\mu (\theta-\theta_*)\sin \theta,\\[10pt] \displaystyle\frac{\bar{v}(\theta)}{c_*}=\cos\mu (\theta-\theta_*)\sin \theta-\mu^{-1}\sin\mu (\theta-\theta_*)\cos \theta. \end{array} \right. $$ Then $u=\bar{u}(\theta)$, $v=\bar{v}(\theta)$ ($\theta_*-\frac{\pi}{2\mu}<\theta<\theta_*$) represents an epicycloidal characteristic on the $(u, v)$-plane; see Figure \ref{Fig25} (left). 
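The parametrization above can also be checked numerically. The following minimal sketch (Python/NumPy) verifies that the Bernoulli quantity $\frac{q^2}{2}+\frac{c^2}{\gamma-1}$ is constant along the epicycloid, where the sound speed along the curve is taken to be $c_*\cos\mu(\theta-\theta_*)$ (this closed form is derived in the display that follows). The sketch assumes the standard value $\mu^2=\frac{\gamma-1}{\gamma+1}$, since the definition of $\mu$ is given earlier in the paper and is not restated here, and the numbers $\gamma$, $u_0$, $c_0$ are illustrative only.
\begin{verbatim}
# Numerical check of the epicycloidal characteristic (illustrative data only).
import numpy as np

gamma = 1.4
u0, c0 = 2.0, 1.0                                 # supersonic incoming data, u0 > c0
mu = np.sqrt((gamma - 1.0)/(gamma + 1.0))         # assumed standard definition of mu

A0 = np.arcsin(c0/u0)
c_star = np.sqrt(mu**2*u0**2 + (1.0 - mu**2)*c0**2)
theta_star = A0 - np.pi/2 + np.arctan(mu*np.sqrt((u0/c0)**2 - 1.0))/mu

theta = np.linspace(theta_star - np.pi/(2*mu) + 1e-6, theta_star - 1e-6, 400)
phi = mu*(theta - theta_star)
u = c_star*(np.cos(phi)*np.cos(theta) + np.sin(phi)*np.sin(theta)/mu)
v = c_star*(np.cos(phi)*np.sin(theta) - np.sin(phi)*np.cos(theta)/mu)
q = np.hypot(u, v)
c = c_star*np.cos(phi)                            # sound speed along the curve

# q^2/2 + c^2/(gamma-1) should equal u0^2/2 + c0^2/(gamma-1) identically.
bern = 0.5*q**2 + c**2/(gamma - 1.0)
print(np.max(np.abs(bern - (0.5*u0**2 + c0**2/(gamma - 1.0)))))   # of order 1e-15
\end{verbatim}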
Moreover, by a direct computation we have that along the epicycloidal characteristic, $$ \begin{aligned} &c=\bar{c}(\theta):=c_*\cos\mu(\theta-\theta_*),\quad q=\bar{q}(\theta):=c_*\sqrt{\cos^2\mu(\theta-\theta_*)+\mu^{-2}\sin^2\mu(\theta-\theta_*)}, \\&\alpha=\bar{\alpha}(\theta):=\theta+\frac{\pi}{2},\quad A=\bar{A}(\theta):=\arcsin\left(\frac{\bar{c}(\theta)}{\bar{q}(\theta)}\right),\quad \sigma=\bar{\sigma}(\theta):=\bar{\alpha}(\theta)-\bar{A}(\theta),\\ & \beta=\bar{\beta}(\theta):=\bar{\sigma}(\theta)-\bar{A}(\theta). \end{aligned} $$ Let $\theta=\bar{\theta}(\sigma)$ be the inverse function of $\sigma=\bar{\sigma}(\theta)$ and let $\tilde{\theta}(\xi)= \bar{\theta}\big(-\arctan f'(\xi)\big)$ ($\xi\geq 0$). Then, $$ \begin{aligned} &\qquad (u, v, c)(x, y)=(\bar{u}, \bar{v}, \bar{c})\big(\tilde{\theta}(\xi)\big), \quad s(x,y)=s_0,\\[4pt] &x=\xi+r\cos \bar{\alpha}\big(\tilde{\theta}(\xi)\big), \quad y=-f(\xi)+r\sin\bar{\alpha}\big(\tilde{\theta}(\xi)\big), \quad \xi\geq 0, \quad r\geq 0, \end{aligned} $$ is a simple wave with straight $C_{+}$ characteristic lines issuing from $W_{-}$. By symmetry, we can construct the simple wave with straight $C_{-}$ characteristic lines issuing from $W_{+}$. These results are classical, so we omit the details. The two simple waves start to interact from a point $\bar{P}=(f(0)\cot A_0, 0)$. Through $\bar{P}$ draw a forward $C_{-}$ ($C_{+}$, resp.) cross characteristic curve $C_{-}^{\bar{P}}$ ($C_{+}^{\bar{P}}$, resp.) in $\Delta_{+}$ ($\Delta_{-}$, resp.). In the present paper, $(u_0, 0, \rho_0, s_0)$ and the duct $\Sigma$ are required to be such that the characteristic curve $C_{-}^{\bar{P}}$ ($C_{+}^{\bar{P}}$, resp.) meets the lower wall $W_{-}$ (upper wall $W_{+}$, resp.) at a point $\bar{B}_0$ ($\bar{D}_0$, resp.) with $c(\bar{B}_0)>0$, as shown in Figure \ref{Fig25}(right). \begin{rem} It is obvious that this situation occurs if $f(x)\equiv 0$. So, by studying an initial value problem for an ordinary differential equation, one can find an $\eta>0$ depending only on $(u_0, 0, \rho_0, s_0)$ and $f(0)$, such that if $0<f''(x)<\eta$ for all $x\geq 0$ then the above situation occurs. \end{rem} From (\ref{192306})--(\ref{192309}) and (\ref{4902}), we have \begin{equation}\label{42201} \begin{aligned} \bar{\partial}_{0}\sigma&~=~\Big(\frac{\bar{\partial}_{+}+\bar{\partial}_{-}}{2\cos A}\Big)\Big(\frac{\alpha+\beta}{2}\Big)~=~\frac{1}{4c\cos A}(c\bar{\partial}_{+}+c\bar{\partial}_{-})(\alpha+\beta)\\[10pt]&~= \frac{\bar{\partial}_{-}c-\bar{\partial}_{+}c}{(\gamma-1)q}+\frac{j}{q}\bar{\partial}_{+}s ~=~\frac{R_{-}-R_{+}}{(\gamma-1)q} \qquad \mbox{on}\quad W_{-}. \end{aligned} \end{equation} Since simple waves are isentropic and irrotational, the simple wave $\it S_{+}$ satisfies $$ \begin{aligned} \bar{\partial}_{-}c&=(\gamma-1)q \bar{\partial}_{0}\sigma =-\frac{(\gamma-1)q f''(x)}{\big(1+[f'(x)]^2\big)^{\frac{3}{2}}}<0\quad \mbox{on}\quad W_{-} \end{aligned} $$ and $$ c\bar{\partial}_{+}\bar{\partial}_{-}c= \frac{(\gamma+1)(\bar{\partial}_{-}c)^2 }{2(\gamma-1)\cos^2A}. $$ Thus, the simple wave solution $\it S_{+}$ satisfies \begin{equation} \begin{aligned} &\quad\sup\limits_{\widehat{\bar{P}\bar{B}_0}}A=A_0,~\quad\sup\limits_{\widehat{\bar{P}\bar{B}_0}}c=c_0, ~\quad~ \inf\limits_{\widehat{\bar{P}\bar{B}_0}}c=c(\bar{B}_0)>0,\\& \inf\limits_{\widehat{\bar{P}\bar{B}_0}}q=u_0,~\quad \sup\limits_{\widehat{\bar{P}\bar{B}_0}}\bar{\partial}_{-}c <0,~\quad\mbox{and}\quad \inf\limits_{\widehat{\bar{P}\bar{B}_0}}\bar{\partial}_{-}c>-\infty.
\end{aligned} \end{equation} By symmetry, the simple wave solution $\it S_{-}$ satisfies \begin{equation} \begin{aligned} &\quad\sup\limits_{\widehat{\bar{P}\bar{D}_0}}A=A_0,~\quad~\sup\limits_{\widehat{\bar{P}\bar{D}_0}}c=c_0, ~\quad~ \inf\limits_{\widehat{\bar{P}\bar{D}_0}}c=c(\bar{D}_0)>0,\\& \inf\limits_{\widehat{\bar{P}\bar{D}_0}}q=u_0,~\quad \sup\limits_{\widehat{\bar{P}\bar{D}_0}}\bar{\partial}_{+}c <0,~\quad\mbox{and}\quad \inf\limits_{\widehat{\bar{P}\bar{D}_0}}\bar{\partial}_{+}c>-\infty. \end{aligned} \end{equation} Therefore, we can define the following constants: \begin{equation} \overline{c}_m:=c(\bar{B}_0), \quad \overline{m}_1:=-\sup\limits_{\widehat{\bar{P}\bar{B}_0}}\bar{\partial}_{-}c, \quad \mbox{and}\quad \overline{M}_1: =-\inf\limits_{\widehat{\bar{P}\bar{B}_0}}\bar{\partial}_{-}c. \end{equation} It is obvious that these constants depend only on the state $(u_0, 0, \rho_0, s_0)$ and the duct $\Sigma$. \subsection{Some useful constants} We shall define some constants that depend only on the state $(u_0, 0, \rho_0, s_0)$ and the duct $\Sigma$. These constants will be extensively used in establishing the a priori $C^1$ norm estimates of the solution. Let $c_m$ be a constant in $(0, \frac{\overline{c}_m}{2}]$ such that when $c<c_m$ there holds \begin{equation}\label{72602} \frac{\gamma+1}{(\gamma-1)}\left(1-\frac{c^2}{2\big(\frac{\widehat{E}_0}{2}-\frac{c^2}{\gamma-1}\big)}\right)^{-1} -\frac{2\gamma}{\gamma-1}<-\frac{1}{2}, \end{equation} where the constant $$\widehat{E}_0:=\frac{u_0^2}{2}+\frac{c_0^2}{\gamma-1}.$$ We then define the following constants: \begin{equation}\label{1972605} \begin{aligned} &M_1:=2\overline{M}_1,\quad m_1:=\frac{\overline{m}_1}{2},\quad A_M=\frac{A_0}{2}+\frac{\pi}{4},\quad c_M:=2c_0,\quad s_m:=\frac{s_0}{2},\quad s_M:=2s_0,\\[4pt] & q_m:=\frac{u_0}{2},\quad \nu_1:=\frac{\gamma+1}{(\gamma-1)\cos^2 A_M}+1,\quad m_2:=\frac{m_1}{c_{M}^{\nu_1}},\quad m:=\frac{1}{2}m_2 c_m^{\nu_1-\frac{2\gamma}{\gamma-1}},\\[4pt]& \mathcal{B}:=1+\max\Bigg\{\frac{\gamma^{\frac{1}{\gamma-1}}s_m^{\frac{2-\gamma}{\gamma-1}}c_M^2 }{\gamma(\gamma-1)},~ \frac{\gamma^{\frac{1}{\gamma-1}}s_M^{\frac{2-\gamma}{\gamma-1}}c_M^2 }{\gamma(\gamma-1)}\Bigg\}, \\[6pt]& \widehat{\mathcal{H}}:=\max_{\substack{A\in[0, A_{M}]\\ i=1,2; j=1,\cdot\cdot\cdot, 6}}\Big\{|\widehat{H_{ij}}|\Big\},~ \widetilde{\mathcal{H}}:=\max_{\substack{A\in[0, A_M]\\ i=1,2; j=1,\cdot\cdot\cdot, 6}}\Big\{|\widetilde{H_{ij}}|\Big\}, \\[4pt]& \mathcal{N}:=\max\left\{ \frac{1}{q_m^2(\gamma s_m)^{\frac{1}{\gamma-1}}},~ \frac{1}{\gamma(\gamma-1)s_m},~ \frac{1}{\gamma^2(\gamma-1)^2s_m^2},~ \frac{1}{\gamma(\gamma-1)s_mq_m^2(\gamma s_m)^{\frac{1}{\gamma-1}}}\right\}, \\[8pt]& \widehat{\mathcal{M}}:=4\mathcal{B}\mathcal{N}(\widehat{\mathcal{H}}+1),\quad \widetilde{\mathcal{M}}:=4\mathcal{B}\mathcal{N}(\widetilde{\mathcal{H}}+1), \\[4pt]& \varepsilon_0:=\min\left\{m\gamma s_m,~\frac{m}{2(\widehat{\mathcal{M}}+1)},~ \frac{(\kappa-1)m}{2\widehat{\mathcal{M}}}, ~\frac{m}{3(\widetilde{\mathcal{M}}+1)}\right\}. \end{aligned} \end{equation} \subsection{Flow in domains $\Sigma_{0}^{+}$ and $\Sigma_{0}^{-}$} We are going to construct a global continuous and piecewise smooth supersonic solution to the problem (\ref{PSEU}), (\ref{bd1}). 
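As an aside, the smallness threshold in (\ref{72602}) above can be located numerically for given incoming data; a minimal sketch (Python/NumPy, with illustrative values of $\gamma$, $u_0$, $c_0$ that are not taken from the paper) is:
\begin{verbatim}
# Locate the largest c below which the strict inequality (72602) holds,
# for illustrative incoming data; c_m is then any constant in
# (0, min(threshold, c_m_bar/2)], as in the text.
import numpy as np

gamma = 1.4
u0, c0 = 2.0, 1.0
E0 = 0.5*u0**2 + c0**2/(gamma - 1.0)          # \widehat{E}_0

def lhs(c):
    # left-hand side of (72602)
    inner = 0.5*E0 - c**2/(gamma - 1.0)
    return (gamma + 1.0)/(gamma - 1.0)/(1.0 - c**2/(2.0*inner)) - 2.0*gamma/(gamma - 1.0)

cs = np.linspace(1e-4, 0.99*np.sqrt(0.5*(gamma - 1.0)*E0), 20000)
fails = lhs(cs) >= -0.5
threshold = cs[np.argmax(fails)] if fails.any() else cs[-1]
print("inequality (72602) holds for c below about", threshold)    # about 0.5 here
\end{verbatim}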
Through $B$ draw a forward $C_{+}$ characteristic curve $y=y_{+}^{B}(x)$ which can be determined by $$\frac{{\rm d}y_{+}^{B}}{{\rm d}x}=\lambda_{+}\big(u_{in}(y_{+}^{B}), 0, c_{in}(y_{+}^{B})\big), \quad y_{+}^{B}(0)=-f(0).$$ Through $D$ draw a forward $C_{-}$ characteristic curve $y=y_{-}^{D}(x)$ which can be determined by $$\frac{{\rm d}y_{-}^{D}}{{\rm d}x}=\lambda_{-}\big(u_{in}(y_{-}^{D}), 0, c_{in}(y_{-}^{D})\big), \quad y_{-}^{D}(0)=f(0).$$ When $\epsilon$ is small the two curves intersect at some point $P=(x_{P}, 0)$ on the $x-$axis. It is easy to see that the flow in the region bounded by $\widehat{BP}$, $\widehat{DP}$, and $x=0$ is $(u, v, \rho, s)(x, y) =(u_{in}, v_{in}, \rho_{in}, s_{in})(y)$. Moreover, we have \begin{equation}\label{72604} |\delta_1(x, y)|=\Big|\frac{u_{in}'(y)}{\rho_{in}(y)}\Big|\leq \varepsilon\quad \mbox{and} \quad |\delta_2(x, y)|=\Bigg|\frac{\cos\alpha_{in}(y) s_{in}'(y)}{c_{in}^{\small\frac{\gamma+1}{\gamma-1}}(y)}\Bigg|\leq\varepsilon\quad \mbox{on}\quad \widehat{BP}\cup\widehat{DP}, \end{equation} where $\alpha_{in}(y)=\arctan \Big(\lambda_{+}\big(u_{in}(y), 0, c_{in}(y))\Big)$ and the constant \begin{equation}\label{198101} \varepsilon~:=~\max\left\{\sup\limits_{-f(0)\leq y\leq f(0)}\Big|\frac{u_{in}'(y)}{\rho_{in}(y)}\Big|,\quad \sup\limits_{-f(0)\leq y\leq f(0)}\Bigg|\frac{s_{in}'(y)}{c_{in}^{\frac{\gamma+1}{\gamma-1}}(y)}\Bigg| \right\}. \end{equation} We next consider system (\ref{PSEU}) with the boundary data \begin{equation}\label{72601} \left\{ \begin{array}{ll} (u, v, \rho, s)=(u_{in}, v_{in}, \rho_{in}, s_{in})(y) & \hbox{on~~$\widehat{BP}$;} \\[4pt] (u, v)\cdot {\bf n_w}=0 & \hbox{on~~$W_{-}$.} \end{array} \right. \end{equation} By the classical results about the stability of classical solutions for quasilinear hyperbolic system, we can get the following lemma. \begin{lem} If $\epsilon$ is sufficiently small then the slip boundary problem (\ref{PSEU}), (\ref{72601}) admits a $C^1$ solution in a region $\Sigma_{0}^{-}$ bounded by $\widehat{BP}$, $W_{-}$, and $\widehat{PB_0}$, where $\widehat{PB_0}$ is a $C_{-}$ characteristic curve which passes through $P$ and ends up at a point $B_0$ on $W_{-}$; see Figure \ref{Fig2}. Moreover, the solution satisfies \begin{equation}\label{72605} \begin{aligned} &|Ds(P)|=0, \quad |D\hat{E}(P)|=0, \quad \inf\limits_{\widehat{PB_0}}\bar{\partial}_{-}{c}\geq -M_1,\quad \sup\limits_{\widehat{PB_0}}\bar{\partial}_{-}{c}\leq -m_1,\\&\quad \sup\limits_{\widehat{PB_0}}A\leq A_{M}, \quad \inf\limits_{\widehat{PB_0}}c>\frac{\bar{c}_m}{2},\quad \inf\limits_{\widehat{PB_0}}\widehat{E}\geq \frac{\widehat{E}_0}{2},\quad \inf\limits_{\widehat{PB_0}}q\geq q_m,\\[4pt]& \quad \sup\limits_{\widehat{PB_0}}c<c_{M}, \quad \sup\limits_{\widehat{PB_0}}|\delta_1|<\mathcal{B}\varepsilon, \quad \mbox{and}\quad \sup\limits_{\widehat{PB_0}}|\delta_2|\leq \varepsilon. \end{aligned} \end{equation} \end{lem} \begin{proof} By computation, we have \begin{equation}\label{72606} |\bar{\partial}_{+}c|=|\sin\alpha_{in}(y) c_{in}'(y)|\leq \epsilon \quad \mbox{on}\quad \widehat{BP}. \end{equation} From (\ref{42201}) we have \begin{equation}\label{4203} \begin{aligned} R_{-}-R_{+}&=(\gamma-1)q \bar{\partial}_{0}\sigma =-\frac{(\gamma-1)q f''(x)}{\big(1+[f'(x)]^2\big)^{\frac{3}{2}}}\quad \mbox{on}\quad W_{-}. \end{aligned} \end{equation} If $\epsilon$ is small then the solution to the problem (\ref{PSEU}), (\ref{72601}) in $\Sigma_{0}^{-}$ is actually a small perturbation of the simple wave $\it S_{+}$ constructed in Section 3.1. 
The desired estimates (\ref{72605}) can be obtained by using the boundary conditions (\ref{72604}), (\ref{72606}), (\ref{4203}) and by integrating $\bar{\partial}_{0}s=0$, (\ref{4401}), (\ref{4105}), (\ref{72803}), and (\ref{cd}) along characteristic curves. We omit the details. \end{proof} By symmetry, we can get a flow in a region $\Sigma_{0}^{+}$ bounded by $\widehat{DP}$, $W_{+}$, and $\widehat{PD_0}$, where $\widehat{PD_0}$ is a $C_{+}$ characteristic curve which passes through $P$ and ends up at some point $D_0$ on $W_{+}$. Moreover, we have \begin{equation}\label{1972602} \begin{aligned} &|Ds(P)|=0, \quad |D\hat{E}(P)|=0, \quad \inf\limits_{\widehat{PD_0}}\bar{\partial}_{+}{c}\geq -M_1,\quad \sup\limits_{\widehat{PD_0}}\bar{\partial}_{+}{c}\leq -m_1,\\&\quad \sup\limits_{\widehat{PD_0}}A\leq A_{M},\quad \inf\limits_{\widehat{PD_0}}c>\frac{1}{2}\bar{c}_m,\quad\inf\limits_{\widehat{PD_0}}\widehat{E}\geq \frac{1}{2}\widehat{E}_0,\quad \inf\limits_{\widehat{PD_0}}q\geq q_m,\\[4pt]&\quad \sup\limits_{\widehat{PD_0}}c<c_{M}, \quad \sup\limits_{\widehat{PD_0}}|\delta_1|<\mathcal{B}\varepsilon, \quad \mbox{and}\quad \sup\limits_{\widehat{PD_0}}|\delta_2|\leq \varepsilon. \end{aligned} \end{equation} \begin{rem}\label{2032001} See Figure \ref{Fig2}. Although $(\bar{\partial}_{+}u, \bar{\partial}_{+}v, \bar{\partial}_{+}c)$ is discontinuous across the $C_{-}$ characteristic curve $\widehat{DP}$, $(\bar{\partial}_{-}u, \bar{\partial}_{-}v, \bar{\partial}_{-}c)$ is continuous across $\widehat{DP}$. So, by the second equation of (\ref{form}) we know that $\omega$ is continuous across $\widehat{DP}$, since (\ref{form}) holds on both sides of $\widehat{DP}$. Since $\bar{\partial}_{-}s$ is continuous across $\widehat{DP}$, by $\bar{\partial}_{-}s+\bar{\partial}_{+}s=0$ we know that $\bar{\partial}_{+}s$ is also continuous across $\widehat{DP}$. Similarly, we know that $\omega$ and $\bar{\partial}_{\pm}s$ are continuous across $\widehat{BP}$. In view of this fact, one can see that $\omega$ and $\bar{\partial}_{\pm}s$ are actually continuous. \end{rem} \subsection{Flow in domain $\Sigma_{1}$} The purpose of this part is to construct the solution in domain $\Sigma_{1}$, as shown in Figure \ref{Fig2}. For this purpose, we consider system (\ref{PSEU}) with the boundary data \begin{equation}\label{1972601} (u, v, \rho, s)=\left\{ \begin{array}{ll} \big(u_{_{\widehat{PB_0}}}, v_{_{\widehat{PB_0}}}, \rho_{_{\widehat{PB_0}}}, s_{_{\widehat{PB_0}}}\big)(x, y) & \hbox{on $\widehat{PB_0}$;} \\[4pt] \big(u_{_{\widehat{PD_0}}}, v_{_{\widehat{PD_0}}}, \rho_{_{\widehat{PD_0}}}, s_{_{\widehat{PD_0}}}\big)(x, y) & \hbox{on $\widehat{PD_0}$,} \end{array} \right. \end{equation} where $(u_{_{\widehat{PB_0}}}, v_{_{\widehat{PB_0}}}, \rho_{_{\widehat{PB_0}}}, s_{_{\widehat{PB_0}}})(x, y)$ and $(u_{_{\widehat{PD_0}}}, v_{_{\widehat{PD_0}}}, \rho_{_{\widehat{PD_0}}}, s_{_{\widehat{PD_0}}})(x, y)$ represent the state on the characteristic curves $\widehat{PB_0}$ and $\widehat{PD_0}$, respectively. The characteristic curves $\widehat{PB_0}$ and $\widehat{PD_0}$ and the data on them are obtained in the last subsection. Problem (\ref{PSEU}), (\ref{1972601}) is a Goursat problem. By (\ref{72605}) and (\ref{1972602}), we have $ \bar{\partial}_{+}\widehat{E}_{{\widehat{PD_0}}}(P)=\bar{\partial}_{-}\widehat{E}_{{\widehat{PB_0}}}(P)=0$ and $\bar{\partial}_{+}s_{_{\widehat{PD_0}}}(P)=\bar{\partial}_{-}s_{_{\widehat{PB_0}}}(P)=0. $ Hence, we can check that compatibility conditions are satisfied at $P$. 
So, existence of a local $C^1$ solution is known by the method of characteristics; see for example \cite{Li-Yu}. In order to extend the local solution to a global solution, we need to establish the a priori $C^1$ norm estimate of the solution. In what follows, we first assume that the Goursat problem (\ref{PSEU}), (\ref{1972601}) admits a $C^1$ local solution in some region $\Sigma_{1, loc}$, and then establish the $C^1$ norm estimate of the solution on $\Sigma_{1, loc}$. Since the local solution is constructed by the method of characteristics, through any point $F$ in $\Sigma_{1, loc}$ we can draw a backward $C_{-}$ ($C_{+}$, resp.) characteristic curve up to some point $F_{-}$ ($F_{+}$, resp.) on $\widehat{PD_0}$ ($\widehat{PB_0}$, resp.), and the two backward characteristic curves do not intersect each other as they go back toward $\widehat{PD_0}$ and $\widehat{PB_0}$. Let $\Lambda_{1, F}$ be the closed domain bounded by the characteristic curves $\widehat{F_{-}F}$, $\widehat{F_{+}F}$, $\widehat{PF_{+}}$, and $\widehat{PF_{-}}$. Then $\Lambda_{1, F}$ belongs to $\Sigma_{1, loc}$, as indicated in Figure \ref{Fig4}. \begin{lem}\label{lem33} If $\varepsilon<\varepsilon_0$ then the classical solution of the Goursat problem (\ref{PSEU}), (\ref{1972601}) satisfies \begin{equation}\label{4103} \begin{aligned} c>0, \quad\frac{R_{+}}{c^{\frac{2\gamma}{\gamma-1}}}~<~-m,\quad \mbox{and}\quad\frac{R_{-}}{c^{\frac{2\gamma}{\gamma-1}}}~<~-m. \end{aligned} \end{equation} \end{lem} \begin{proof} The proof of this lemma proceeds in four steps. {\it Step 1.} In this step, we shall prove $\frac{R_{\pm}}{c^{\frac{2\gamma}{\gamma-1}}}~<~-m$ on $\widehat{PB_0}\cup\widehat{PD_0}$. By a direct computation, we have that if $c\geq c_m$ and $\frac{R_{\pm}}{c^{\nu_1}}<-\frac{m_2}{2}$ then $$ \frac{R_{\pm}}{c^{\frac{2\gamma}{\gamma-1}}}~=~c^{\nu_1-\frac{2\gamma}{\gamma-1}}\frac{R_{\pm}}{c^{\nu_1}} ~< ~c_m^{\nu_1-\frac{2\gamma}{\gamma-1}}\frac{R_{\pm}}{c^{\nu_1}}~<~-\frac{1}{2}m_2 c_m^{\nu_1-\frac{2\gamma}{\gamma-1}}~=~- m.$$ Thus, in view of $\inf\limits_{\widehat{PD_0}\cup \widehat{PB_0}}c>\frac{\bar{c}_m}{2}\geq c_m$, it suffices to prove $\frac{R_{\pm}}{c^{\nu_1}}<-\frac{m_2}{2}$ on $\widehat{PB_0}\cup\widehat{PD_0}$. In view of (\ref{1972605}), (\ref{1972602}), and $\frac{2\gamma}{\gamma-1}<\nu_1$, we have that if $\varepsilon<\varepsilon_0$ then \begin{equation}\label{41801} \begin{aligned} \frac{R_{+}}{c^{\nu_1}}=\frac{\bar{\partial}_{+}c}{c^{\nu_1}}-\frac{\delta_2\frac{c^{\frac{2\gamma}{\gamma-1}}}{\gamma(\gamma-1)s}}{\kappa c^{\nu_1}}&<-m_2+ \frac{\varepsilon c^{\frac{2\gamma}{\gamma-1}-\nu_1}}{2\gamma s}<-m_2+ \frac{\varepsilon c_m^{\frac{2\gamma}{\gamma-1}-\nu_1}}{2\gamma s_m}<-\frac{m_2}{2}\quad \mbox{on}\quad \widehat{PD_0}. \end{aligned} \end{equation} By symmetry, we have \begin{equation}\label{1972604} \frac{R_{-}}{c^{\nu_1}}~<~-\frac{m_2}{2}\quad \mbox{on} \quad\widehat{PB_0}. \end{equation} From (\ref{1972604}) we have $\big(\frac{R_{-}}{c^{\nu_1}}\big)(P)<-\frac{m_2}{2}$. Suppose that there exists a ``first'' point on $\widehat{PD_0}$ such that $\big(\frac{R_{-}}{c^{\nu_1}}\big)=-\frac{m_2}{2}$ at this point.
Then, by the second equation of (\ref{cd10}), (\ref{1972605}), (\ref{1972602}), and (\ref{41801}) we have \begin{equation}\label{1972606} \begin{aligned} &c\bar{\partial}_{+}\Big(\frac{R_{-}}{c^{\nu_1}}\Big)\\ ~<~& \frac{1}{c^{\nu_1}}\Bigg\{ \frac{(\gamma+1)R_{+}R_{-}}{2(\gamma-1)\cos^2A}+\frac{(\gamma+1) R_{+}R_{-}}{2(\gamma-1)\cos^2 A} -\nu_1 R_{+}R_{-}\\& \qquad\qquad+(\widehat{H}_{21}R_{-}+\widehat{H}_{22}R_{+})\frac{\delta_1 c^{\frac{2\gamma}{\gamma-1}}}{q^2(\gamma s)^{\frac{1}{\gamma-1}}}+(\widehat{H}_{23}R_{-}+\widehat{H}_{24}R_{+}) \frac{\delta_2 c^{\frac{2\gamma}{\gamma-1}}}{\gamma(\gamma-1)s}\\ &\qquad\qquad +\widehat{H}_{25} \frac{\delta_2^2 c^{\frac{4\gamma}{\gamma-1}}}{\gamma^2(\gamma-1)^2s^2}+ \widehat{H}_{26}\frac{\delta_1 \delta_2 c^{\frac{4\gamma}{\gamma-1}}}{\gamma(\gamma-1)sq^2(\gamma s)^{\frac{1}{\gamma-1}}}\Bigg\} \\~<~&\frac{1}{c^{\nu_1}}\Bigg\{-R_{+}R_{-}+(\widehat{H}_{21}R_{-}+\widehat{H}_{22}R_{+}) \frac{\delta_1 c^{\frac{2\gamma}{\gamma-1}}}{q^2(\gamma s)^{\frac{1}{\gamma-1}}} +(\widehat{H}_{23}R_{-}+\widehat{H}_{24}R_{+}) \frac{\delta_2 c^{\frac{2\gamma}{\gamma-1}}}{\gamma(\gamma-1)s}\\ &\qquad\qquad +\widehat{H}_{25} \frac{\delta_2^2 c^{\frac{4\gamma}{\gamma-1}}}{\gamma^2(\gamma-1)^2s^2}+ \widehat{H}_{26}\frac{\delta_1 \delta_2 c^{\frac{4\gamma}{\gamma-1}}}{\gamma(\gamma-1)sq^2(\gamma s)^{\frac{1}{\gamma-1}}}\Bigg\}\\ ~<~&\frac{1}{c^{\nu_1}}\Bigg\{-R_{+}R_{-}-\big(|\widehat{H}_{21}+|\widehat{H}_{22}|\big)R_{+} \frac{\delta_1 c^{\frac{2\gamma}{\gamma-1}}}{q^2(\gamma s)^{\frac{1}{\gamma-1}}} -\big(|\widehat{H}_{23}|+|\widehat{H}_{24}|\big)R_{+} \frac{\delta_2 c^{\frac{2\gamma}{\gamma-1}}}{\gamma(\gamma-1)s}\\ &\qquad\qquad +|\widehat{H}_{25}| \frac{\delta_2^2 c^{\frac{4\gamma}{\gamma-1}}}{\gamma^2(\gamma-1)^2s^2}+ |\widehat{H}_{26}|\frac{\delta_1 \delta_2 c^{\frac{4\gamma}{\gamma-1}}}{\gamma(\gamma-1)sq^2(\gamma s)^{\frac{1}{\gamma-1}}}\Bigg\} \\~<~& c^{\nu_1}\left\{-\frac{R_{+}R_{-}}{c^{2\nu_1}}-\widehat{\mathcal{M}} \varepsilon c^{\frac{2\gamma}{\gamma-1}-\nu_1}\frac{R_{+}}{c^{\nu_1}}+\widehat{\mathcal{M}}\varepsilon^2c^ {2(\frac{2\gamma}{\gamma-1}-\nu_1)}\right\} \\~<~& c^{\nu_1}\Bigg\{\underbrace{\frac{R_{+}}{c^{\nu_1}}\Big(\frac{m_2}{4}-\widehat{\mathcal{M}} \varepsilon c_m^{\frac{2\gamma}{\gamma-1}-\nu_1}\Big)}_{<0}+\underbrace{\widehat{\mathcal{M}}\varepsilon^2c_m^{2(\frac{2\gamma} {\gamma-1}-\nu_1)}-\frac{m_2^2}{8}}_{<0}\Bigg\}~<~0 \end{aligned} \end{equation} at this point, which leads to a contradiction. Thus, by an argument of continuity we have $ \frac{R_{-}}{c^{\nu_1}}~<~-\frac{m_2}{2}$ on $\widehat{PD_0}$. Similarly, we have $\frac{R_{+}}{c^{\nu_1}}~<~-\frac{m_2}{2}$ on $\widehat{PB_0}$. \begin{figure} \caption{ \footnotesize Domains $\Sigma_{1, loc} \label{Fig4} \end{figure} {\it Step 2.} In this step, we shall prove that for any $F\in\Sigma_{1, loc}$, if $\frac{R_{\pm}}{c^{\nu_1}}~<~-\frac{m_2}{2}$ and $c\geq c_m$ in $\Lambda_{1, F}\setminus\{F\}$, then $(\frac{R_{\pm}}{c^{\nu_1}})(F)~<~-\frac{m_2}{2}$. In view of Remark \ref{2032001}, we can see that $\delta_1$ and $\delta_2$ are continuous across $\widehat{PD_0}$ and $\widehat{PB_0}$. Then by (\ref{4105}), (\ref{72605}), and (\ref{1972602}) we have \begin{equation}\label{42002} |\delta_2|\leq \varepsilon \quad\mbox{on}\quad \Lambda_{1, F}. 
\end{equation} If $\frac{R_{\pm}}{c^{\nu_1}}~<~-\frac{m_2}{2}$ and $c\geq c_m$ in $\Lambda_{1, F}\setminus\{F\}$, then \begin{equation}\label{41201} \frac{\bar{\partial}_{\pm}c}{c^{\nu_1}}=\frac{R_{\pm}}{c^{\nu_1}}+ \frac{j\bar{\partial}_{\pm}s}{\kappa c^{\nu_1}}<-\frac{m_2}{2}+ \frac{\varepsilon c^{\frac{2\gamma}{\gamma-1}-\nu_1}}{2\gamma s}<-\frac{m_2}{2}+ \frac{\varepsilon c_m^{\frac{2\gamma}{\gamma-1}-\nu_1}}{2\gamma s_m}<-\frac{m_2}{4} \quad\mbox{on}\quad \Lambda_{1, F}\setminus\{F\}. \end{equation} Hence, by (\ref{4401}) and (\ref{32403}) we have \begin{equation}\label{41901} \bar{\partial}_{0}c<0\quad\mbox{and}\quad \bar{\partial}_{0}A=\frac{\sin ^3 A}{c^2 \cos A}\Big(\frac{c\bar{\partial}_{0}c}{\sin ^2 A}+\kappa c\bar{\partial}_{0}c \Big)<0\quad\mbox{on}\quad \Lambda_{1, F}\setminus\{F\}. \end{equation} Consequently, by the boundary data estimates (\ref{72605}) and (\ref{1972602}) we have \begin{equation}\label{41202} c<c_{M},\quad q\geq q_{m},\quad \mbox{and}\quad A\leq A_M\quad\mbox{on}\quad \Lambda_{1, F}. \end{equation} Through $F$ one can draw a backward $C_{0}$ characteristic curve, and this curve can intersect with $\widehat{DP}\cup\widehat{BP}$ at some point $F_{0}$. Integrating (\ref{72803}) along this $C_0$ characteristic curve from $F_0$ to $F$ and using (\ref{72604}), (\ref{42002}), (\ref{41202}), and $\bar{\partial}_{0}c<0$, we get $$ \Big|\frac{\omega}{\rho}(F)-\frac{\omega}{\rho}(F_0)\Big|<\varepsilon\max\Bigg\{\frac{\gamma^{\frac{1}{\gamma-1}}s_m^{\frac{2-\gamma}{\gamma-1}}c_M^2 }{\gamma(\gamma-1)}, \quad \frac{\gamma^{\frac{1}{\gamma-1}}s_M^{\frac{2-\gamma}{\gamma-1}}c_M^2 }{\gamma(\gamma-1)}\Bigg\}, $$ since $\delta_1$ is continuous across $\widehat{PD_0}$ and $\widehat{PB_0}$. Hence, \begin{equation}\label{41203} \big|\delta_1(F)\big|<\mathcal{B}\varepsilon. \end{equation} If $(\frac{R_{-}}{c^{\nu_1}})(F)~=~-\frac{m_2}{2}$ and $(\frac{R_{+}}{c^{\nu_1}})(F)~\leq~-\frac{m_2}{2}$, then by the second equation of (\ref{cd10}), (\ref{42002}), (\ref{41202}), and (\ref{41203}) we have $$ \begin{aligned} c\bar{\partial}_{+}\Big(\frac{R_{-}}{c^{\nu_1}}\Big)~<~0 \quad\mbox{at}\quad F, \end{aligned} $$ as shown in (\ref{1972606}). This leads to a contradiction, since $\frac{R_{+}}{c^{\nu_1}}~<~-\frac{m_2}{2}$ along $\widehat{F_{-}F}$. Similarly, if $(\frac{R_{+}}{c^{\nu_1}})(F)~=~-\frac{m_2}{2}$ and $(\frac{R_{-}}{c^{\nu_1}})(F)~\leq~-\frac{m_2}{2}$, then by the fourth equation of (\ref{cd10}), (\ref{41202}), and (\ref{41203}) we have $ \bar{\partial}_{-}\big(\frac{R_{+}}{c^{\nu_1}}\big)<0 $ at $F$, which leads to a contradiction. Thus, we have $\big(\frac{R_{\pm}}{c^{\nu_1}}\big)(F)~<~-\frac{m_2}{2}$. \vskip 4pt {\it Step 3.} In this step, we shall prove that for any $F\in\Sigma_{1, loc}$, if $\frac{R_{\pm}}{c^{\frac{2\gamma}{\gamma-1}}}~<~-m$ and $c>0$ in $\Lambda_{1, F}\setminus\{F\}$ and $0<c(F)< c_m$, then $\big(\frac{R_{\pm}}{c^{\frac{2\gamma}{\gamma-1}}}\big)(F)~<~-m$. When $\frac{R_{\pm}}{c^{\frac{2\gamma}{\gamma-1}}}~<~-m$ and $c>0$ in $\Lambda_{1, F}\setminus\{F\}$ we have $$ \frac{\bar{\partial}_{\pm}c}{c^{\frac{2\gamma}{\gamma-1}}}=\frac{R_{\pm}}{c^{\frac{2\gamma}{\gamma-1}}}+ \frac{j\bar{\partial}_{\pm}s}{\kappa c^{\frac{2\gamma}{\gamma-1}}}<-m+ \frac{\varepsilon }{2\gamma s_m}<-\frac{m}{2} \quad\mbox{on}\quad \Lambda_{1, F}\setminus\{F\}. $$ Using this we can get $q(F)\geq q_m$, $A(F)\leq A_m$, $\big|\delta_2(F)\big|\leq \varepsilon$, and $\big|\delta_1(F)\big|<\mathcal{B}\varepsilon$, as shown in the previous step. 
By (\ref{72602}), (\ref{72605}), and (\ref{1972602}) we have \begin{equation}\label{1972607} \begin{aligned} \frac{\gamma+1}{(\gamma-1)\cos^2 A}-\frac{2\gamma}{\gamma-1}~=~& \frac{\gamma+1}{(\gamma-1)}\Bigg(1-\frac{c^2}{2(\hat{E}-\frac{c^2}{\gamma-1})}\Bigg)^{-1}-\frac{2\gamma}{\gamma-1}\\~<~& \frac{\gamma+1}{(\gamma-1)}\Bigg(1-\frac{c^2}{2(\frac{\widehat{E}_0}{2}-\frac{c^2}{\gamma-1})}\Bigg)^{-1}-\frac{2\gamma}{\gamma-1}~<~-\frac{1}{2}\quad \mbox{at}\quad F. \end{aligned} \end{equation} Therefore, if $\big(\frac{R_{+}}{c^{\frac{2\gamma}{\gamma-1}}}\big)(F)~=~-m$ and $\big(\frac{R_{-}}{c^{\frac{2\gamma}{\gamma-1}}}\big)(F)~\leq ~-m$, then by (\ref{1972607}) and the first equation of (\ref{cd10}) we have \begin{equation}\label{42007} \begin{aligned} &c\bar{\partial}_{-}\Big(\frac{R_{+}}{c^{\frac{2\gamma}{\gamma-1}}}\Big)\\<& \frac{1}{c^{\frac{2\gamma}{\gamma-1}}}\Bigg\{ \frac{(\gamma+1)R_{+}R_{-}}{2(\gamma-1)\cos^2A}+\frac{(\gamma+1)}{2(\gamma-1)\cos^2 A} R_{+}R_{-}-\frac{2\gamma}{\gamma-1} R_{+}R_{-}\\& \qquad\qquad+(\widehat{H}_{11}R_{-}+\widehat{H}_{12}R_{+}) \frac{\delta_1 c^{\frac{2\gamma}{\gamma-1}}}{q^2(\gamma s)^{\frac{1}{\gamma-1}}}+(\widehat{H}_{13}R_{-}+\widehat{H}_{14}R_{+}) \frac{\delta_2 c^{\frac{2\gamma}{\gamma-1}}}{\gamma(\gamma-1)s}\\ &\qquad\qquad +\widehat{H}_{15}\frac{\delta_2 ^2c^{\frac{4\gamma}{\gamma-1}}}{\gamma^2(\gamma-1)^2s^2}+ \widehat{H}_{16}\frac{\delta_1\delta_2 c^{\frac{2\gamma}{\gamma-1}}}{q^2\gamma(\gamma-1)s(\gamma s)^{\frac{1}{\gamma-1}}}\Bigg\} \\<&\frac{1}{c^{\frac{2\gamma}{\gamma-1}}}\Bigg\{-\frac{ R_{+}R_{-}}{2}+(\widehat{H}_{11}R_{-}+\widehat{H}_{12}R_{+}) \frac{\delta_1 c^{\frac{2\gamma}{\gamma-1}}}{q^2(\gamma s)^{\frac{1}{\gamma-1}}}+(\widehat{H}_{13}R_{-}+\widehat{H}_{14}R_{+}) \frac{\delta_2 c^{\frac{2\gamma}{\gamma-1}}}{\gamma(\gamma-1)s}\\ &\qquad\qquad +\widehat{H}_{15}\frac{\delta_2 ^2c^{\frac{4\gamma}{\gamma-1}}}{\gamma^2(\gamma-1)^2s^2}+ \widehat{H}_{16}\frac{\delta_1\delta_2 c^{\frac{2\gamma}{\gamma-1}}}{q^2\gamma(\gamma-1)s(\gamma s)^{\frac{1}{\gamma-1}}}\Bigg\}\\ <&\frac{1}{c^{\frac{2\gamma}{\gamma-1}}}\Bigg\{-\frac{ R_{+}R_{-}}{2}-\big(|\widehat{H}_{11}|+|\widehat{H}_{12}|\big)R_{-} \frac{\delta_1 c^{\frac{2\gamma}{\gamma-1}}}{q^2(\gamma s)^{\frac{1}{\gamma-1}}}-\big(|\widehat{H}_{13}|+|\widehat{H}_{14}|\big)R_{-} \frac{\delta_2 c^{\frac{2\gamma}{\gamma-1}}}{\gamma(\gamma-1)s}\\ &\qquad\qquad +|\widehat{H}_{15}|\frac{\delta_2 ^2c^{\frac{4\gamma}{\gamma-1}}}{\gamma^2(\gamma-1)^2s^2}+ |\widehat{H}_{16}|\frac{\delta_1\delta_2 c^{\frac{2\gamma}{\gamma-1}}}{q^2\gamma(\gamma-1)s(\gamma s)^{\frac{1}{\gamma-1}}}\Bigg\} \\<& c^{\frac{2\gamma}{\gamma-1}}\left\{-\frac{R_{+}R_{-}}{2c^{\frac{4\gamma}{\gamma-1}}}-\widehat{M} \varepsilon \frac{R_{-}}{c^{\frac{2\gamma}{\gamma-1}}}+\widehat{M}\varepsilon^2\right\} \\<& c^{\frac{2\gamma}{\gamma-1}}\left\{\frac{R_{-}}{c^{\frac{2\gamma}{\gamma-1}}}\Big(\frac{m}{2}-\widehat{M} \varepsilon \Big)+\widehat{M}\varepsilon^2-\frac{m^2}{4}\right\}~<~0 \qquad\mbox{at}\quad F. \end{aligned} \end{equation} This leads to a contradiction, since $\frac{R_{+}}{c^{\frac{2\gamma}{\gamma-1}}}~<~-m$ along $\widehat{F_{-}F}$. Similarly, If $\big(\frac{R_{-}}{c^{\frac{2\gamma}{\gamma-1}}}\big)(F)~=~-m$ and $\big(\frac{R_{+}}{c^{\frac{2\gamma}{\gamma-1}}}\big)(F)~\leq ~-m$, then by the second equation of (\ref{cd6}) we have $ \bar{\partial}_{-}\big(\frac{R_{-}}{c^{\frac{2\gamma}{\gamma-1}}}\big)<0 $ at $F$, which leads to a contradiction. Thus, we have $\big(\frac{R_{\pm}}{c^{\frac{2\gamma}{\gamma-1}}}\big)(F)~<~-m$. 
{\it Step 4.} In this step, we shall prove that for any $F\in\Sigma_{1, loc}$, if $c>0$ in $\Lambda_{1, F}\setminus\{F\}$ then $c(F)>0$. By the results of the previous steps and an argument of continuity, we can get that if $c>0$ in $\Lambda_{1, F}\setminus\{F\}$, then $\big|\delta_1\big|\leq \mathcal{B}\varepsilon$, $\big|\delta_2\big|\leq \varepsilon$, $\frac{R_{\pm}}{c^{\frac{2\gamma}{\gamma-1}}}~<~-m$ and $\frac{\bar{\partial}_{\pm}c}{c^{\frac{2\gamma}{\gamma-1}}}<-\frac{m}{2}$ in $\Lambda_{1, F}\setminus\{F\}$. Since $\varepsilon<\varepsilon_0\leq \frac{(\kappa-1)m}{2\widehat{\mathcal{M}}}$, there exists a $A_{s}>0$ such that if $A<A_s$ then $$(1+\kappa)\Big(\frac{\kappa-1}{\kappa+1}\cos^2 A-\sin^2A\Big)\frac{m}{2} -\frac{\widehat{\mathcal{M}}\varepsilon}{2}>0.$$ If $c(F)=0$ then by $A=\arcsin \frac{c}{q}$ and $q\geq q_m$ we have $A(F)=0$. Then there exist a $\bar{F}_{+}$ on $\widehat{F_{+}F}$ and a $\bar{F}_{-}$ on $\widehat{F_{-}F}$ such that $A<A_s$ on $\widehat{\bar{F}_{+}F}$ and $\widehat{\bar{F}_{-}F}$. Therefore, by (\ref{192307}) we have \begin{equation}\label{41301} \begin{aligned} \bar{\partial}_{+}\alpha~=~&c^{\frac{\gamma+1}{\gamma-1}}\tan A\left\{-(1+\kappa)\Big(\frac{\kappa-1}{\kappa+1}\cos^2 A-\sin^2A\Big)\frac{\bar{\partial}_{+}c }{c^{\frac{2\gamma}{\gamma-1}}} -\frac{\delta_1}{q^2(\gamma s)^{\frac{1}{\gamma-1}}} +\frac{\delta_2\cos 2A}{\gamma(\gamma-1)s }\right\}\\~>~&c^{\frac{\gamma+1}{\gamma-1}}\tan A\left\{(1+\kappa)\Big(\frac{\kappa-1}{\kappa+1}\cos^2 A-\sin^2A\Big)\frac{m}{2} -\frac{\mathcal{B}\varepsilon}{q_m^2(\gamma s_m)^{\frac{1}{\gamma-1}}} -\frac{\varepsilon}{\gamma(\gamma-1)s_m }\right\}\\~>~&c^{\frac{\gamma+1}{\gamma-1}}\tan A\left\{(1+\kappa)\Big(\frac{\kappa-1}{\kappa+1}\cos^2 A-\sin^2A\Big)\frac{m}{2} -\frac{\widehat{\mathcal{M}}\varepsilon}{2}\right\}>0\quad \mbox{along}\quad \widehat{\bar{F}_{+}F}. \end{aligned} \end{equation} Similarly, by (\ref{192309}) we have $c\bar{\partial}_{-}\beta<0$ along $\widehat{\bar{F}_{-}F}$. Therefore, by $\alpha(F)-\beta(F)=2A(F)=0$ we know that the forward $C_0$ characteristic curve issuing from any point on $\widehat{\bar{F}_{+}F}$ and the forward $C_0$ characteristic curve issuing from any point on $\widehat{\bar{F}_{-}F}$ intersect at $F$ as illustrated in Figure \ref{Fig82}, which leads to a contradiction in view of the conservation of mass (see \cite{CQ1}, p. 2953). \begin{figure} \caption{ \footnotesize An impossible vacuum point.} \label{Fig82} \end{figure} Combining with the results of the above four steps and using the method of continuity, we can get this lemma. \end{proof} \begin{rem}\label{rem8102} From the proof of Lemma \ref{lem33}, we can see that the classical solution of the Goursat problem (\ref{PSEU}) and (\ref{1972601}) satisfies \begin{equation}\label{41903} \begin{aligned} &q\geq q_m,\quad \hat{E}>\frac{\widehat{E}_0}{2}, \quad 0<A\leq A_{M}, \quad \frac{\bar{\partial}_{\pm}c}{c^{\frac{2\gamma}{\gamma-1}}}<-\frac{m}{2}, \\&\quad \frac{R_{\pm}}{c^{\frac{2\gamma}{\gamma-1}}}<-m, \quad \big|\delta_1\big|\leq \mathcal{B}\varepsilon, \quad \mbox{and}\quad \big|\delta_2\big|\leq \varepsilon. \end{aligned} \end{equation} \end{rem} \begin{rem}\label{rem8101} From steps 1 and 2 in the proof of Lemma \ref{lem33}, we can see that the classical solution of the Goursat problem (\ref{PSEU}) and (\ref{1972601}) also satisfy \begin{equation}\label{198102} \frac{R_{\pm}}{c^{\nu_1}}<-\frac{m_2}{2} \quad \mbox{as}\quad c\geq c_m. 
\end{equation} \end{rem} \begin{lem}\label{34} If $\varepsilon<\varepsilon_0$ then the classical solution of the Goursat problem (\ref{PSEU}), (\ref{1972601}) satisfies \begin{equation}\label{4104} R_{\pm}>-M_1-\frac{\varepsilon c_M^{\frac{2\gamma}{\gamma-1}}}{2\gamma s_m}. \end{equation} \end{lem} \begin{proof} It is easy to check by (\ref{72605}) and (\ref{1972602}) that \begin{equation}\label{41101} R_{+}\mid_{\widehat{PD_0}}~>~-M_1-\frac{\varepsilon c_M^{\frac{2\gamma}{\gamma-1}}}{2\gamma s_m}\quad \mbox{and}\quad R_{-}\mid_{\widehat{PB_0}}~>~-M_1-\frac{\varepsilon c_M^{\frac{2\gamma}{\gamma-1}}}{2\gamma s_m}. \end{equation} From the first equation of (\ref{cd8}), we have $$ \begin{aligned} c\bar{\partial}_{-}R_{+}~=~& \frac{(\gamma+1)R_{+}^2}{2(\gamma-1)\cos^2A}+\frac{(\gamma+1)-2\sin^2 2A}{2(\gamma-1)\cos^2 A} R_{+}R_{-}\\& +(\widetilde{H}_{11}R_{-}+\widetilde{H}_{12}R_{+}) \frac{\delta_1 c^{\frac{2\gamma}{\gamma-1}}}{q^2(\gamma s)^{\frac{1}{\gamma-1}}}+(\widetilde{H}_{13}R_{-}+\widetilde{H}_{14}R_{+}) \frac{\delta_2 c^{\frac{2\gamma}{\gamma-1}}}{\gamma(\gamma-1)s}\\ & +\widetilde{H}_{15}\frac{\delta_2 ^2c^{\frac{4\gamma}{\gamma-1}}}{\gamma^2(\gamma-1)^2s^2}+ \widetilde{H}_{16}\frac{\delta_1\delta_2 c^{\frac{4\gamma}{\gamma-1}}}{\gamma(\gamma-1)s q^2(\gamma s)^{\frac{1}{\gamma-1}}}\\~>~& \frac{1}{2\cos^2 A} R_{+}R_{-} +(\widetilde{H}_{11}R_{-}+\widetilde{H}_{12}R_{+}) \frac{\delta_1 c^{\frac{2\gamma}{\gamma-1}}}{q^2(\gamma s)^{\frac{1}{\gamma-1}}}+(\widetilde{H}_{13}R_{-}+\widetilde{H}_{14}R_{+}) \frac{\delta_2 c^{\frac{2\gamma}{\gamma-1}}}{\gamma(\gamma-1)s}\\ & +\widetilde{H}_{15}\frac{\delta_2 ^2c^{\frac{4\gamma}{\gamma-1}}}{\gamma^2(\gamma-1)^2s^2}+ \widetilde{H}_{16}\frac{\delta_1\delta_2 c^{\frac{4\gamma}{\gamma-1}}}{\gamma(\gamma-1)s q^2(\gamma s)^{\frac{1}{\gamma-1}}}\\~>~& \frac{ c^{\frac{4\gamma}{\gamma-1}}}{2\cos^2 A}\Bigg\{\frac{R_{+}R_{-}}{c^{\frac{4\gamma}{\gamma-1}}}+\widetilde{\mathcal{M}} \varepsilon \frac{R_{-}}{c^{\frac{2\gamma}{\gamma-1}}}+\widetilde{\mathcal{M}} \varepsilon \frac{R_{+}}{c^{\frac{2\gamma}{\gamma-1}}}-\widetilde{\mathcal{M}}\varepsilon^2\Bigg\} \\~>~&\frac{ c^{\frac{4\gamma}{\gamma-1}}}{2\cos^2 A}\Bigg\{\Big( \frac{R_{+}}{c^{\frac{2\gamma}{\gamma-1}}}+\frac{R_{-}}{c^{\frac{2\gamma}{\gamma-1}}}\Big)\Big(-\frac{m}{3} +\widetilde{\mathcal{M}}\varepsilon\Big) +\frac{m^2}{3}-\widetilde{\mathcal{M}}\varepsilon^2\Bigg\}>0, \end{aligned} $$ since $\varepsilon<\varepsilon_0\leq \frac{m}{3(\widetilde{\mathcal{M}}+1)}$. Combining with this and (\ref{41101}), we have $R_{+}>-M_1-\frac{\varepsilon c_M^{\frac{2\gamma}{\gamma-1}}}{2\gamma s_m}$. Similarly, by the second equation of (\ref{cd8}) we have $ c\bar{\partial}_{+}R_{-}>0. $ Combining this with (\ref{41101}) we get $R_{-}>-M_1-\frac{\varepsilon c_M^{\frac{2\gamma}{\gamma-1}}}{2\gamma s_m}$. We then complete the proof of this lemma. \end{proof} Using (\ref{4401}), (\ref{192303})--(\ref{82505}), and Lemmas \ref{lem33} and \ref{34}, we can establish uniform a priori $C^1$ norm estimate of the solution. Therefore, by the local existence result and the standard continuity extension method (cf. \cite{LiT}), one can extend the local solution to a whole determinate region of the Goursat problem. We then have the following lemma. 
\begin{lem} The Goursat problem (\ref{PSEU}), (\ref{1972601}) admits a global classical solution in a region $\Sigma_1$ bounded by $\widehat{PD_0}$, $\widehat{PB_0}$, a forward $C_{-}$ characteristic curve $C_{-}^{D_0}$ issuing from $D_0$, and a forward $C_{+}$ characteristic curve $C_{+}^{B_0}$ issuing from $B_0$. Moreover, the solution satisfies (\ref{41903}) and (\ref{198102}). \end{lem} \vskip 4pt \subsection{Flow in $\Sigma_{1}^{+}$ and $\Sigma_{1}^{-}$} The purpose of this subsection is to construct the solution in domains $\Sigma_{1}^{+}$ and $\Sigma_{1}^{-}$. For this purpose, we consider system (\ref{PSEU}) with the boundary conditions: \begin{equation}\label{bd4} \left\{ \begin{array}{ll} (u, v, \rho, s)=\big(u_{{C_{+}^{B_0}}}, v_{{C_{+}^{B_0}}}, \rho_{{C_{+}^{B_0}}}, s_{{C_{+}^{B_0}}}\big)(x, y)& \hbox{on $ C_{+}^{B_0}$;} \\[8pt] (u, v)\cdot {\bf n_w}=0& \hbox{on $W_{-}$,} \end{array} \right. \end{equation} where $\big(u_{{C_{+}^{B_0}}}, v_{{C_{+}^{B_0}}}, \rho_{{C_{+}^{B_0}}}, s_{{C_{+}^{B_0}}}\big)(x, y)$ is the solution of the Goursat problem (\ref{PSEU}), (\ref{1972601}) on $C_{+}^{B_0}$. \begin{figure} \caption{ \footnotesize Domains $\Sigma_{1, loc}^{-}$ and $\Lambda_{1, F}^{-}$.} \label{Fig5} \end{figure} Existence of a local $C^1$ solution is known by the method of characteristics; see \cite{CQ2, Li-Yu}. In order to extend the local solution to a global solution, we need to establish the a priori $C^1$ norm estimate of the solution. In what follows, we assume that the boundary value problem (\ref{PSEU}), (\ref{bd4}) has a classical solution in some region $\Sigma_{1, loc}^{-}$, and then establish the $C^1$ norm estimate of the solution on $\Sigma_{1, loc}^{-}$. Since the local classical solution is constructed by the method of characteristics, through any point $F\in \Sigma_{1, loc}^{-}$ ($F$ can also be on $W_{-}$) one can draw a backward $C_{-}$ ($C_{+}$, resp.) characteristic curve up to a point $F_{-}$ ($F_{+}$, resp.) on $C_{+}^{B_0}$ ($W_{-}$, resp.), and the closed domain ${\Lambda_{1, F}^{-}}$ bounded by $\widehat{F_{-}F}$, $\widehat{F_{+}F}$, $\widehat{B_0F_{-}}$, and $\widehat{B_0F_{+}}$ belongs to $\Sigma_{1, loc}^{-}$, as indicated in Figure \ref{Fig5}. \begin{lem}\label{lem36} If $\varepsilon<\varepsilon_0$ then the classical solution of the boundary value problem (\ref{PSEU}), (\ref{bd4}) satisfies $\frac{R_{\pm}}{c^{\frac{2\gamma}{\gamma-1}}}~<~-m$ in ${\Sigma_{1, loc}^{-}}$ and $c>0$ in ${\Sigma_{1, loc}^{-}}\setminus W_{-}$. \end{lem} \begin{proof} The proof of this lemma proceeds in four steps. {\it Step 1.} From Remarks \ref{rem8101} and \ref{rem8102}, we have that along $ C_{+}^{B_0}$, \begin{equation} \frac{R_{+}}{c^{\nu_1}}<-\frac{m_2}{2} \quad \mbox{as}\quad c\geq c_m\quad\mbox{and}\quad \frac{R_{+}}{c^{\frac{2\gamma}{\gamma-1}}}<-m \quad \mbox{as}\quad c>0. \end{equation} By (\ref{4203}) we have $$ R_{-}(B_0)=R_{+}(B_0)-\frac{(\gamma-1)q f''(x)}{\big(1+[f'(x)]^2\big)^{\frac{3}{2}}}<R_{+}(B_0). $$ Thus, we have \begin{equation}\label{198103} \Big(\frac{R_{-}}{c^{\nu_1}}\Big)(B_0)<-\frac{m_2}{2} \quad \mbox{as}\quad c(B_0)\geq c_m,\quad\mbox{and}\quad \Big(\frac{R_{-}}{c^{\frac{2\gamma}{\gamma-1}}}\Big)(B_0)<-m \quad \mbox{as}\quad c(B_0)>0. \end{equation} Suppose there is a point on $ C_{+}^{B_0}$ such that $c\geq c_m$ and $\frac{R_{-}}{c^{\nu_1}}=-\frac{m_2}{2}$ at this point. Then by the second equation of (\ref{cd10}) we have $ c\bar{\partial}_{-}\big(\frac{R_{+}}{c^{\nu_1}}\big)~<~0 $, as shown in (\ref{1972606}).
Suppose there is a point on $ C_{+}^{B_0}$ such that $c<c_m$ and $\frac{R_{-}}{c^{\frac{2\gamma}{\gamma-1}}}=-m$ at this point. Then by the second equation of (\ref{cd10}) we have $ c\bar{\partial}_{-}\big(\frac{R_{+}}{c^{\frac{2\gamma}{\gamma-1}}}\big)~<~0 $, as shown in (\ref{42007}). Therefore, by (\ref{198103}), $\bar{\partial}_{\pm}c<0$, and an argument of continuity we have that along $ C_{+}^{B_0}$, \begin{equation} \frac{R_{-}}{c^{\nu_1}}<-\frac{m_2}{2} \quad \mbox{as}\quad c\geq c_m\quad\mbox{and}\quad \frac{R_{-}}{c^{\frac{2\gamma}{\gamma-1}}}<-m \quad \mbox{as}\quad c>0. \end{equation} \vskip 4pt {\it Step 2.} In this step, we shall prove that for any $F\in\Sigma_{1, loc}^{-}$, if $\frac{R_{\pm}}{c^{\nu_1}}~<~-\frac{m_2}{2}$ and $c\geq c_m$ in $\Lambda_{1, F}^{-}\setminus\{F\}$, then $\frac{R_{\pm}}{c^{\nu_1}}~<~-\frac{m_2}{2}$ at $F$. Through any point in $\Sigma_{1, loc}^{-}$ one can draw a backward $C_{0}$ characteristic curve, and this curve can intersect with $\widehat{BP}$ at some point. By Remark \ref{2032001}, $\delta_1$ and $\delta_2$ are also continuous across $C_{+}^{B_0}$. Therefore, as shown in (\ref{42002})--(\ref{41203}), we have $$ |\delta_2|\leq \varepsilon \quad \mbox{and}\quad \big|\delta_1\big|\leq \mathcal{B}\varepsilon\quad \mbox{in}\quad\Sigma_{1, loc}^{-}. $$ If $\big(\frac{R_{+}}{c^{\nu_1}}\big)(F)~=~-\frac{m_2}{2}$ and $\big(\frac{R_{-}}{c^{\nu_1}}\big)(F)~\leq~-\frac{m_2}{2}$, then we get $ c\bar{\partial}_{-}\big(\frac{R_{+}}{c^{\nu_1}}\big)~<~0 $ at $F$. (The proof for this is the same as (\ref{1972606}), so we omit it.) This leads to a contradiction, since $\frac{R_{+}}{c^{\nu_1}}~<~-\frac{m_2}{2}$ in $\Lambda_{1, F}^{-}\setminus\{F\}$. If $\big(\frac{R_{-}}{c^{\nu_1}}\big)(F)~=~-\frac{m_2}{2}$ and $\big(\frac{R_{+}}{c^{\nu_1}}\big)(F)~<~-\frac{m_2}{2}$, then by (\ref{4203}) and $f''>0$ we know that $F$ does not lie on $W_{-}$, and hence $\widehat{F_{+}F}$ exists. Thus, by the second equation of (\ref{cd6}) we have $ \bar{\partial}_{+}\big(\frac{R_{-}}{c^{\nu_1}}\big)<0 $ at $F$. (The proof for this is also the same as (\ref{1972606}), so we omit it.) This leads to a contradiction, since $\frac{R_{+}}{c^{\nu_1}}~<~-\frac{m_2}{2}$ along $\widehat{F_{+}F}$. Thus, we have $\big(\frac{R_{\pm}}{c^{\nu_1}}\big)(F)~<~-\frac{m_2}{2}$. \vskip 4pt {\it Step 3.} Using the method in the third step of the proof of Lemma \ref{lem33} and $f''>0$, one can get that if $\frac{R_{\pm}}{c^{\frac{2\gamma}{\gamma-1}}}~<~-m$ and $c>0$ in $\Lambda_{1, F}^{-}\setminus\{F\}$ and $0<c(F)< c_m$, then $\frac{R_{\pm}}{c^{\frac{2\gamma}{\gamma-1}}}(F)~<~-m$. \vskip 4pt {\it Step 4.} As shown in the fourth step of the proof of Lemma \ref{lem33}, we can prove that for any point $F\in {\Sigma_{1, loc}^{-}}\setminus W_{-}$, if $c>0$ in $\Lambda_{1, F}^{-}\setminus\{F\}$ then $c(F)>0$. Therefore, by the method of continuity we obtain the lemma. \end{proof} \begin{rem} Actually, we also have that the classical solution of the boundary value problem (\ref{PSEU}), (\ref{bd4}) satisfies (\ref{41903}) and (\ref{198102}). \end{rem} \vskip 4pt \begin{lem}\label{lem38} Let $$M_2=\max\limits_{x\in [0, +\infty)}\left\{\frac{ f''(x)}{(1+[f'(x)]^2)^{\frac{3}{2}}}\right\}\quad \mbox{and} \quad q_{_M}=\left(u_{in}^2(f(0))+\frac{2c_{in}^2(f(0))}{\gamma-1}\right)^{\frac{1}{2}}.
$$ Then when $\varepsilon<\varepsilon_0$ the classical solution of the boundary value problem (\ref{PSEU}), (\ref{bd4}) satisfies $$ R_{\pm}>-M_1-(\gamma-1)q_{_M}M_2-\frac{\varepsilon c_M^{\frac{2\gamma}{\gamma-1}}}{2\gamma s_m}. $$ \end{lem} \begin{proof} From (\ref{41903}) we can get \begin{equation}\label{4205} \bar{\partial}_{-}R_{+}>0\quad\mbox{and}\quad\bar{\partial}_{+}R_{-}>0, \end{equation} as shown in the proof of Lemma \ref{34}. From Lemma \ref{34} we have $$ R_{+} ~>~-M_1-\frac{\varepsilon c_M^{\frac{2\gamma}{\gamma-1}}}{2\gamma s_m}\quad\mbox{on}\quad C_{+}^{B_0}. $$ Thus, we have \begin{equation}\label{4204} R_{+}>-M_1-\frac{\varepsilon c_M^{\frac{2\gamma}{\gamma-1}}}{2\gamma s_m}. \end{equation} From (\ref{4203}) and (\ref{4204}), we have $$ \begin{aligned} R_{-}~=~R_{+}-\frac{(\gamma-1)q f''(x)}{\big(1+[f'(x)]^2\big)^{\frac{3}{2}}}~>~ -M_1-(\gamma-1)q_{_M}M_2-\frac{\varepsilon c_M^{\frac{2\gamma}{\gamma-1}}}{2\gamma s_m}\quad \mbox{along}\quad W_{-}, \end{aligned} $$ since $q\leq q_{_M}$ along $W_{-}$ by (\ref{4401}) and $\bar{\partial}_{0}c<0$. Thus, by (\ref{4205}) we have $$ R_{-}>-M_1-(\gamma-1)q_{_M}M_2-\frac{\varepsilon c_M^{\frac{2\gamma}{\gamma-1}}}{2\gamma s_m}. $$ We then complete the proof of this lemma. \end{proof} Using (\ref{4401}), (\ref{192303})--(\ref{82505}), and Lemmas \ref{lem36} and \ref{lem38}, we can establish a uniform a priori $C^1$ norm estimate of the solution. Therefore, by the local existence result and the standard continuity extension method, we can extend the local solution to a whole determinate region of the slip boundary problem; cf. \cite{CQ2,LiT}. We then have the following lemma. \begin{lem} The boundary value problem (\ref{PSEU}), (\ref{bd4}) admits a global $C^1$ solution in a region $\Sigma_1^{-}$ as shown in Figure \ref{Domain}. Moreover, the solution satisfies (\ref{41903}) and (\ref{198102}). \end{lem} By symmetry we can construct a solution in region $\Sigma_1^{+}$, as shown in Figure \ref{Domain}. \subsection{Global solution of the boundary value problem (\ref{PSEU}), (\ref{bd1})} By repeatedly solving Goursat problems and slip boundary problems, we can get the solution in regions $\Sigma_{0}$, $\Sigma_{0}^{+}$, $\Sigma_{0}^{-}$, $\Sigma_{1}$, $\Sigma_{1}^{+}$, $\Sigma_{1}^{-}$, $\Sigma_{2}$, $\Sigma_{2}^{+}$, $\Sigma_{2}^{-}$, $\cdots$, as illustrated in Figure \ref{Fig2}. Moreover, the solution satisfies (\ref{41903}). A natural question is whether the region $\Sigma$ can be covered by the determinate regions of these Goursat problems and slip boundary problems. We are going to show that the flow in $\Sigma$ can be obtained after solving a finite number of Goursat problems and slip boundary problems. By a direct computation, we have $$ \sin^2 A=\frac{c^2}{2\Big(\hat{E}-\frac{c^2}{\gamma-1}\Big)} <\frac{c^2}{2\big(\frac{\widehat{E}_0}{2}-\frac{c^2}{\gamma-1}\big)}, $$ since $\hat{E}>\frac{\widehat{E}_0}{2}$. So, there exists a constant $c_g$ with $$0<c_g< \frac{q_m\arctan f'(x_{_{B_0}})}{9(\kappa-1)},$$ such that if $c<c_{g}$ then \begin{equation}\label{41302} (1+\kappa)\Big(\frac{\kappa-1}{\kappa+1}\cos^2 A-\sin^2A\Big)\frac{m}{2} -\frac{\widehat{\mathcal{M}}\varepsilon}{2}>0 \end{equation} and \begin{equation}\label{198301} A<\min\left\{\frac{\arctan f'(x_{_{B_0}})}{3},~ \frac{\pi}{3}\right\}. \end{equation} Here, $x_{_{B_0}}$ is the abscissa of the point $B_0$. \begin{figure} \caption{ \footnotesize $C_{\pm}$ characteristic curves.} \label{Fig6} \end{figure} Suppose that there are infinitely many Goursat regions $\Sigma_i$ ($i=1, 2, 3, \cdots$).
Then, for each $i\geq 1$, $\Sigma_i$ is bounded by characteristic curves $\widehat{P_{i-1}B_{i-1}}$, $\widehat{P_{i-1}D_{i-1}}$, $\widehat{B_{i-1}P_{i}}$, and $\widehat{D_{i-1}P_{i}}$ as indicated in Figure \ref{Fig6} (left), where $P_0=P$. Meanwhile, for any point $I\in \Sigma_{i+1}$, there is a $I_0\in \widehat{PD_0}\cup\widehat{PB_0}$, such that the forward $C_{+}$ or $C_{-}$ characteristic curve issuing from $I_0$ can reach $I$ after $i$ reflections on the walls. Thus, by $\frac{\bar{\partial}_{\pm}c}{c^{\frac{2\gamma}{\gamma-1}}}<-\frac{m}{2}$ we know that there exists a sufficiently large $i\geq 0$, such that $ c<c_{g}$ in $\Sigma_{i+1}$. Consequently, we have that when $\varepsilon<\varepsilon_0$ there holds \begin{equation}\label{41401} \bar{\partial}_{+}\alpha>0\quad \mbox{in} \quad \Sigma_{i+1}, \end{equation} as shown in (\ref{41301}). Meanwhile, from (\ref{192307}), (\ref{41302}), and (\ref{198301}) we also have that when $\varepsilon<\varepsilon_0$ there holds \begin{equation}\label{41402} \begin{aligned} \bar{\partial}_{+}\alpha&~<~c^{\frac{2\gamma}{\gamma-1}-1}\tan A\left\{-(\kappa-1)\cos^2 A\frac{\bar{\partial}_{+}c }{c^{\frac{2\gamma}{\gamma-1}}} -\frac{\delta_1}{q^2(\gamma s)^{\frac{1}{\gamma-1}}} +\frac{\delta_2\cos 2A}{\gamma(\gamma-1)s }\right\}\\&~<~\frac{c^{\frac{2\gamma}{\gamma-1}}}{q\cos A}\left\{-(\kappa-1)\cos^2 A\frac{\bar{\partial}_{+}c }{c^{\frac{2\gamma}{\gamma-1}}} +\frac{\widehat{\mathcal{M}}\varepsilon}{2}\right\}\\&~<~\frac{c^{\frac{2\gamma}{\gamma-1}}}{q\cos A}\left\{-(\kappa-1)\cos^2 A\frac{\bar{\partial}_{+}c }{c^{\frac{2\gamma}{\gamma-1}}} +\frac{(\kappa-1)m}{4}\right\} \\&~<~\frac{c^{\frac{2\gamma}{\gamma-1}}}{q\cos A}\left\{-(\kappa-1)\cos^2 A\frac{\bar{\partial}_{+}c }{c^{\frac{2\gamma}{\gamma-1}}} -\frac{(\kappa-1)}{2}\frac{\bar{\partial}_{+}c }{c^{\frac{2\gamma}{\gamma-1}}}\right\}\\& ~<~-\frac{3(\kappa-1)}{2q\cos A}\bar{\partial}_{+}c ~<~-\frac{3(\kappa-1)}{q_m}\bar{\partial}_{+}c \qquad \mbox{in} \quad \Sigma_{i+1}. \end{aligned} \end{equation} Hence, along the forward $C_{+}$ characteristic curve passing through $B_{i}$ we have $$ \begin{aligned} \alpha&~<~\alpha(B_{i})+\frac{3(\kappa-1)}{q_m}\big(c(B_{i})-c\big)~<~\alpha(B_{i})+\frac{\arctan f'(x_{_{B_0}})}{3}\\&\qquad\quad~=~ -\arctan f'(x_{_{B_{i}}})+A(B_{i})+\frac{\arctan f'(x_{_{B_0}})}{3}~<~-\frac{\arctan f'(x_{_{B_0}})}{3}~<~0. \end{aligned} $$ Therefore, by symmetry we know that the forward $C_{+}$ characteristic curve issuing from $B_{i}$ and the forward $C_{-}$ characteristic curve issuing from $D_{i}$ do not intersect with each other; see Figure \ref{Fig6}(mid). This implies that $\Sigma_{i+2}$ does not exist. This leads to a contradiction. Therefore, there are only the following two cases: \begin{itemize} \item There exists an $i\geq 0$, such that the forward $C_{+}$ characteristic curve issuing from $B_{i}$ does not intersect with the forward $C_{-}$ characteristic curve issuing from $D_{i}$, as indicated Figure \ref{Fig6}(mid). In this case, $$\Sigma=\Big(\bigcup\limits_{j=0}^{i+1}\Sigma_{j}\Big)\cup\Big(\bigcup\limits_{j=0}^{i+1}\Sigma_{j}^{+}\Big) \cup\Big(\bigcup\limits_{j=0}^{i+1}\Sigma_{j}^{-}\Big).$$ \item There exists an $i\geq 0$, such that the forward $C_{+}$ characteristic curve through $P_{i+1}$ does not intersect with $W_{+}$, and the forward $C_{-}$ characteristic curve through $P_{i+1}$ does not intersect with $W_{-}$ as indicated in Figure \ref{Fig6}(right). 
In this case, $$\Sigma=\Big(\bigcup\limits_{j=0}^{i+2}\Sigma_{j}\Big)\cup\Big(\bigcup\limits_{j=0}^{i+1}\Sigma_{j}^{+}\Big) \cup\Big(\bigcup\limits_{j=0}^{i+1}\Sigma_{j}^{-}\Big).$$ \end{itemize} Therefore, we obtain a global piecewise smooth solution in the duct by solving a finite number of Goursat problems and slip boundary problems. \subsection{Vacuum regions adjacent to the walls} The purpose of this subsection is to discuss the appearance of vacuum. From (\ref{42201}), we have \begin{equation} \begin{aligned} \bar{\partial}_{0}c~=~& \frac{1}{2\cos A}\big(\bar{\partial}_{-}c+\bar{\partial}_{+}c\big) \\~=~&\frac{1}{2\cos A}\big(\bar{\partial}_{-}c-\bar{\partial}_{+}c+2\bar{\partial}_{+}c\big)\\~=~& \frac{1}{2\cos A}\Big(\frac{2q \bar{\partial}_{0}\sigma}{\kappa}-\frac{2j\bar{\partial}_{+}s}{\kappa}+2\bar{\partial}_{+}c\Big) \\~=~& \frac{1}{2\cos A}\Big(\frac{2q \bar{\partial}_{0}\sigma}{\kappa}+2R_{+}\Big) ~<~\frac{q}{\kappa\cos A}\bar{\partial}_{0}\sigma~\quad \mbox{along}\quad W_{-}. \end{aligned} \end{equation} Since $\bar{\partial}_{0}\widehat{E}=0$ and $\bar{\partial}_{0}\sigma\leq 0$ along $W_{-}$, we have \begin{equation} \bar{\partial}_{0}c~<~\frac{u_{in}\big(f(0)\big)}{\kappa}\bar{\partial}_{0}\sigma~\quad \mbox{along}\quad W_{-}. \end{equation} So, by integration we know that if $$\arctan f_{\infty}'>\kappa \frac{ c_{in}\big(f(0)\big)}{u_{in}\big(f(0)\big)}$$ then a vacuum will appear on the wall $W_{-}$. That is, there is an $x_{_V}>0$ such that $c(x_{_V}, -f(x_{_V}))=0$. \begin{figure} \caption{ \footnotesize Vacuum regions adjacent to the walls.} \label{Fig7} \end{figure} In what follows, we are going to show that if there is a vacuum then the vacuum is always adjacent to one of the walls, and the interface between gas and vacuum must be straight. For $\tilde{x}<x_{_V}$, we denote by $y=y(x; \tilde{x})$, $x>\tilde{x}$, the $C_{+}$ characteristic curve issuing from the point $(\tilde{x}, -f(\tilde{x}))$. As shown in (\ref{41401}) and (\ref{41402}), we can prove that when $\tilde{x}$ is sufficiently close to $x_{_V}$, $$ c<c_{g}\quad\mbox{and}\quad 0<\bar{\partial}_{+}\alpha<-\frac{3(\kappa-1)}{q_m}\bar{\partial}_{+}c \quad \mbox{along}\quad y=y(x; \tilde{x}),~ x>\tilde{x}. $$ Thus, we have $$ \arctan y'(x; \tilde{x})-\arctan y'(\tilde{x}; \tilde{x})~<~ \frac{3(\kappa-1)}{q_m}c(\tilde{x}, -f(\tilde{x}))\quad \mbox{as}\quad x>\tilde{x}. $$ In addition, since $c(\tilde{x}, -f(\tilde{x}))\rightarrow 0$ and $$\arctan y'(\tilde{x}; \tilde{x})=-\arctan f'(\tilde{x})+A(\tilde{x}, -f(\tilde{x}))\rightarrow -\arctan f'(x_{_V})$$ as $\tilde{x}\rightarrow x_{_V}$, we have that for any $X>x_{_V}$, $$\lim\limits_{\tilde{x}\rightarrow x_{_V}} \Big\|y(x; \tilde{x})+f(x_{_V})+f'(x_{_V})(x-x_{_V})\Big\|_{C[x_{_V}, X]}=0.$$ Therefore, no gas flows into the region $\big\{(x,y)\mid -f(x)<y<-f(x_{_V})-f'(x_{_V})(x-x_{_V}), ~x>x_{_V}\big\}.$ By symmetry, no gas flows into the region $\big\{(x,y)\mid f(x_{_V})+f'(x_{_V})(x-x_{_V})<y<f(x), ~x>x_{_V}\big\}$ either. See Figure \ref{Fig7}. From Lemmas \ref{lem33} and \ref{lem36} we can see that $c>0$ in $ \big\{(x, y)\mid -g(x)~<~y~<~g(x),~ x>0\big\}, $ where $$ g(x)=\left\{ \begin{array}{ll} f(x), & \hbox{$0<x<x_{_V}$;} \\[4pt] f(x_{_V})+f'(x_{_V})(x-x_{_V}), & \hbox{$x\geq x_{_V}$.} \end{array} \right. $$ Since $\varepsilon\rightarrow 0$ as $\epsilon\rightarrow 0$, we complete the proof of Theorem \ref{main}. \vskip 32pt \small \end{document}
\begin{document}

\title[Approximate cloning and broadcasting]{Information-theoretic limitations on approximate quantum cloning and broadcasting}
\date{\today}
\author{Marius Lemm}
\affiliation{Department of Mathematics, California Institute of Technology, Pasadena, CA 91125}
\author{Mark M.\ Wilde}
\affiliation{Hearne Institute for Theoretical Physics, Department of Physics and Astronomy, Center for Computation and Technology, Louisiana State University, Baton Rouge, Louisiana 70803, USA}

\begin{abstract}
We prove new quantitative limitations on any approximate simultaneous cloning or broadcasting of mixed states. The results are based on information-theoretic (entropic) considerations and generalize the well known no-cloning and no-broadcasting theorems. We also observe and exploit the fact that the universal cloning machine on the symmetric subspace of $n$ qudits and symmetrized partial trace channels are dual to each other. This duality manifests itself both in the algebraic sense of adjointness of quantum channels and in the operational sense that a universal cloning machine can be used as an approximate recovery channel for a symmetrized partial trace channel and vice versa. The duality extends to give control on the performance of generalized UQCMs on subspaces more general than the symmetric subspace. This gives a way to quantify the usefulness of a-priori information in the context of cloning. For example, we can control the performance of an antisymmetric analogue of the UQCM in recovering from the loss of $n-k$ fermionic particles.
\end{abstract}

\maketitle

A direct consequence of the fundamental principles of quantum theory is that there does not exist a ``machine'' (unitary map) that can clone an arbitrary input state \cite{Dieks,Wootters}. This no-cloning theorem and its generalization to mixed states, the ``no-broadcasting theorem'' \cite{Barnumetal96}, exclude the possibility of making perfect ``quantum backups'' of a quantum state and are essential for our understanding of quantum information processing. For instance, since decoherence is such a formidable obstacle to building a quantum computer and, at the same time, we cannot use quantum backups to protect quantum information against this decoherence, considerable effort has been devoted to protecting the stored information by way of \emph{quantum error correction} \cite{Knill,Manda,Shor}.

Given these no-go results, it is natural to ask how well one can do when settling for \emph{approximate} cloning or broadcasting. Numerous theoretical and experimental works have investigated such ``approximate cloning machines'' (see \cite{Bu,Gisin,PhysRevA.58.1827,Karen,KW99,Lamas,Scarani,Review,Zhu,Chatterjeeetal} and references therein). These cloning machines can be of great help for \emph{state estimation}. They can also be of great help to an adversary who is eavesdropping on an encrypted communication, and so knowing the limitations of approximate cloning machines is relevant for \emph{quantum key distribution}.

In this paper, we derive \emph{new quantitative limitations} posed on any approximate cloning/broadcast (defined below) by \emph{quantum information theory}. Our results generalize the standard no-cloning and no-broadcasting results for mixed states, which are recalled below (Theorems 1 and 2).
We draw on an approach of Kalev and Hen \cite{KH08}, who introduced the idea of studying no-broadcasting via the fundamental principle of the monotonicity of the quantum relative entropy \cite{Lindblad1975,U77}. When one state is approximately cloned while the other is approximately broadcast, we derive an inequality which implies rather strong limitations (Theorem~\ref{thm:mainNC}). The result can be understood as a quantitative version of the standard no-cloning theorem. The proof uses only fundamental properties of the relative entropy. By invoking recent developments linking the monotonicity of relative entropy to recoverability \cite{FR,BLW,SFR,Wilde,Jungeetal,Sutteretal}, we can derive a stronger inequality (Theorem~\ref{thm:mainNCnew}). Under certain circumstances, this stronger inequality provides an \emph{explicit channel} which can be used to \emph{improve the quality} of the original cloning/broadcast (roughly speaking, how close the output is to the input) a posteriori. This cloning/broadcasting-improving channel is nothing but the parallel application of the rotation-averaged Petz recovery map \cite{Jungeetal}, highlighting its naturality in this context.

Related results of ours (Theorems~\ref{thm:PT-cloner-recovery} and \ref{thm:cloner-PT-recovery}) compare a given state of $n$ qudits to the maximally mixed state on the (permutation-)symmetric subspace of $n$ qudits. We establish a duality between universal quantum cloning machines (UQCMs) \cite{Bu,Gisin,PhysRevA.58.1827} and symmetrized partial trace channels, in the operational sense that a UQCM can be used as an approximate recovery channel for a symmetrized partial trace channel and vice versa. It is also immediate to observe that these channels are adjoints of each other, up to a constant. A context different from ours, in which a duality between partial trace and universal cloning has been observed, is quantum data compression \cite{Giulio}. As a special case of Theorem~\ref{thm:PT-cloner-recovery}, we recover one of the main results of Werner \cite{PhysRevA.58.1827}, regarding the optimal fidelity for $k\to n$ cloning of tensor-power pure states~$\phi^{\otimes k}$. We also draw an analogy between these results and earlier results from \cite{PhysRevA.93.062314} regarding photon loss and amplification, the analogy being that cloning is like particle amplification and partial trace is like particle loss.

The methods generalize to subspaces beyond the symmetric subspace: Theorem~\ref{thm:new} controls the performance of an analogue of the UQCM in recovering from a loss of $n-k$ particles when we are given \emph{a priori information} about the states (in the sense that we know on which subspaces they are supported, e.g., because we are working in an irreducible representation of some symmetry group). As an application of this, we obtain an estimate of the performance of an \emph{antisymmetric} analogue of the UQCM for $k\to n$ cloning of fermionic particles. The methods also yield information-theoretic restrictions for general approximate broadcasts of two mixed states.

\textit{Background}---The well known no-cloning theorem for pure states establishes that two pure states can be simultaneously cloned iff they are identical or orthogonal. It is generalized by the following two theorems, a no-cloning theorem for mixed states and a no-broadcasting theorem \cite{Barnumetal96,KH08}. Let $\sigma$ be a mixed state on a system $A$.
By definition, a (two-fold) \emph{broadcast} of the input state $\sigma$ is a quantum channel $\Lambda_{A\to AB}$ such that the output state
$$ \rho^{\mathrm{out}}_{AB}:=\Lambda_{A\to AB}(\sigma_A) $$
has identical marginals $\rho^{\mathrm{out}}_{A}=\rho^{\mathrm{out}}_{B}=\sigma$. A particular broadcast corresponds to the case $\rho^{\mathrm{out}}_{AB}=\sigma_A\otimes \sigma_B$, which is called a \emph{cloning} of the state $\sigma$. We call two mixed states $\sigma_1$ and $\sigma_2$ orthogonal if $\sigma_1\sigma_2=0$.

\begin{thm}[No cloning for mixed states, \cite{Barnumetal96,KH08}]
\label{thm:NC}
Two mixed states $\sigma_1,\sigma_2$ can be simultaneously cloned iff they are orthogonal or identical.
\end{thm}

\begin{thm}[No broadcasting, \cite{Barnumetal96}]
\label{thm:UBC}
Two mixed states $\sigma_1,\sigma_2$ can be simultaneously broadcast iff they commute.
\end{thm}

By a ``simultaneous cloning/broadcast,'' we mean that the same choice of $\Lambda_{A\to AB}$ is made for broadcasts of $\sigma_1$ and $\sigma_2$. These results were essentially first proved in \cite{Barnumetal96}, albeit under an additional minor invertibility assumption. Alternative proofs were given in \cite{Lindblad99, Leifer, Barnumetal07, KH08}. Sometimes Theorem~\ref{thm:UBC} is called the ``universal no-broadcasting theorem'' to distinguish it from local no-broadcasting results for multipartite systems \cite{Pianietal}. Quantitative versions of the local no-broadcasting results for multipartite systems were reviewed very recently by Piani \cite{Piani16} (see also \cite{Chatterjeeetal}). No-cloning and no-broadcasting are also closely related to the monogamy property of entanglement via the Choi-Jamiolkowski isomorphism \cite{Leifer}.

In this paper, we study limitations on \emph{approximate} cloning/broadcasting, which we define as follows:

\begin{defn}[Approximate cloning/broadcast]
Let $\sigma,\tilde\sigma$ be mixed states. An $n$-fold \emph{approximate broadcast} of $\sigma$ is a quantum channel $\Lambda_{A\to A_1\cdots A_n}$ such that the output state has identical marginals $\tilde\sigma$. That is, we consider the situation
\begin{equation}
\label{eq:approxdefn}
\rho^{\mathrm{out}}_{A_1}=\cdots=\rho^{\mathrm{out}}_{A_n}=\tilde\sigma,
\end{equation}
where $\rho^{\mathrm{out}}_{A_1\cdots A_n}:=\Lambda(\sigma_A)$. An \emph{approximate cloning} is an approximate broadcast for which $\rho^{\mathrm{out}}_{A_1\cdots A_n}=\tilde\sigma_{A_1}\otimes\cdots\otimes \tilde\sigma_{A_n}$. The main case of interest is $n=2$.
\end{defn}

Our main results give bounds on (appropriate notions of) distance between $\tilde\sigma_i$ and $\sigma_i$ for $i=1,2$, given any pair of input states $\sigma_1$ and~$\sigma_2$.

\textit{Conventions}---The notions of approximate cloning/broadcast stated above are direct generalizations of the notions of cloning/broadcasting in the literature related to Theorems~\ref{thm:NC} and~\ref{thm:UBC}. Regarding the input states, these notions are more general than the one used in the cloning machine literature \cite{Scarani}; we allow for the input states to be arbitrary, whereas they are usually pure tensor-power states $\psi^{\otimes n}$ for cloning machines. Our notion of approximate cloning requires the output states to be tensor-product states.
Hence, some quantum cloning machines (in particular the universal cloning machine when acting on general input states) are approximate \emph{broadcasts} by the definition given above.

Let us fix some notation. Given two mixed states~$\rho$ and~$\sigma$, we denote the \emph{relative entropy} of $\rho$ with respect to $\sigma$ by $D(\rho\| \sigma):=\operatorname{tr}[\rho(\log \rho-\log\sigma)]$, where $\log$ is the natural logarithm \cite{U62}. We define the fidelity by $F(\rho,\sigma):=\|\sqrt{\rho}\sqrt{\sigma}\|_1^2 \in [0,1]$ \cite{U73}, where $\|\cdot\|_1$ is the trace norm. Since all of our bounds involve the relative entropy $D(\sigma_1\|\sigma_2)$ of the input states $\sigma_1$ and $\sigma_2$, they are only informative when $D(\sigma_1\|\sigma_2)<\infty$. This is equivalent to $\ker\sigma_2\subseteq \ker\sigma_1$, and we \emph{assume} this in the following for simplicity. We note that if this assumption fails, our results can still be applied by approximating $\sigma_2$ (in trace distance) with $\sigma_2^\varepsilon:=\varepsilon\sigma_1+(1-\varepsilon)\sigma_2$ for $\varepsilon \in (0,1)$, which satisfies $\ker\sigma_2^\varepsilon\subseteq \ker\sigma_1$.

\textit{Main results}---We will now present our main results. All proofs are rather short and deferred to \cite{LW16supmat}.

\textit{Restrictions on approximate cloning/broadcasting}---Our first main result concerns limitations if $\sigma_1$ is approximately broadcast $n$-fold while $\sigma_2$ is approximately cloned $n$-fold.

\begin{thm}[Limitations on approximate cloning/broadcasting]
\label{thm:mainNC}
Fix two mixed states $\sigma_{1}$ and $\sigma_{2}$. Let $\Lambda_{A\rightarrow A_{1}\cdots A_{n}}$ be a quantum channel such that $n\geq2$ and the two output states $\rho_{i,A_{1}\cdots A_{n}}^{\operatorname{out}}:=\Lambda(\sigma_{i,A})$ for $i=1,2$ satisfy
\begin{equation}
\label{eq:approxCB}
\begin{aligned}
\rho_{1,A_{1}}^{\operatorname{out}} & =\cdots=\rho_{1,A_{n}}^{\operatorname{out}}=\tilde{\sigma}_{1},\\
\rho_{2,A_{1}\cdots A_{n}}^{\operatorname{out}} & =\tilde{\sigma}_{2,A_{1}}\otimes\cdots\otimes\tilde{\sigma}_{2,A_{n}}.
\end{aligned}
\end{equation}
Thus, $\Lambda_{A\rightarrow A_{1}\cdots A_{n}}$ approximately broadcasts $\sigma_{1,A}$ and approximately clones $\sigma_{2,A}$. Then
\begin{equation}
\label{eq:mainNC}
\begin{aligned}
D(\sigma_1\|\sigma_2)-D(\tilde\sigma_1\|\tilde\sigma_2)&\geq (n-1)D(\tilde\sigma_1\|\tilde\sigma_2)\\
&\geq \frac{n-1}{2}\|\tilde\sigma_1-\tilde\sigma_2\|_1^2.
\end{aligned}
\end{equation}
\end{thm}

The second inequality in \eqref{eq:mainNC} follows from the quantum Pinsker inequality \cite[Thm.~1.15]{OP93}. To see that \eqref{eq:mainNC} is indeed restrictive for approximate cloning/broadcasting, let $n=2$ and suppose without loss of generality that $\sigma_1\neq \sigma_2$, so that $\delta:=\frac{1}{6}\|\sigma_1-\sigma_2\|_1^2>0$. We can use the triangle inequality for $\|\cdot\|_1$ and the elementary inequality $2ab\leq a^2+b^2$ on the right-hand side in \eqref{eq:mainNC} to get
\begin{equation*}
D(\sigma_1\|\sigma_2)-D(\tilde\sigma_1\|\tilde\sigma_2) +\frac{\|\sigma_1-\tilde\sigma_1\|_1^2}{2} +\frac{\|\sigma_2-\tilde\sigma_2\|_1^2}{2}\geq \delta.
\end{equation*}
Since $\sigma_1$ and $\sigma_2$ are fixed, the same is true for $\delta>0$.
Hence, \emph{for any approximate cloning/broadcasting operation \eqref{eq:approxCB}, at least one of the following three statements must hold}:
\begin{enumerate}
\item $\sigma_1$ is far from $\tilde\sigma_1$ (i.e., the channel acts poorly on the first state),
\item $\sigma_2$ is far from $\tilde\sigma_2$ (i.e., the channel acts poorly on the second state), or
\item there is a large decrease in the distinguishability of the states under the action of the channel, in the sense that $D(\sigma_1\|\sigma_2)-D(\tilde\sigma_1\|\tilde\sigma_2)$ is bounded from below by a constant.
\end{enumerate}
Thus, we have a quantitative version of Theorem \ref{thm:NC} (note that for $\sigma_i=\tilde\sigma_i$ ($i=1,2$), Theorem \ref{thm:mainNCnew} implies $\sigma_1=\sigma_2$).

As anticipated in the introduction, we can prove a stronger version of Theorem~\ref{thm:mainNC} by invoking recent developments linking monotonicity of the relative entropy to recoverability \cite{FR,BLW,SFR,Wilde,Jungeetal,Sutteretal}. The stronger version involves an additional non-negative term on the right-hand side in \eqref{eq:mainNC} and it contains an additional integer parameter $m\in\{1,\ldots,n\}$ (the case $m=n$ corresponds to Theorem~\ref{thm:mainNC}; the case $m=1$ is also useful as we explain after the theorem).

\begin{thm}[Stronger version of Theorem~\ref{thm:mainNC}]
\label{thm:mainNCnew}
Under the same assumptions as in Theorem~\ref{thm:mainNC}, for all $m\in\{1,\ldots,n\}$, there exists a recovery channel $\mathcal{R}_{A_{1}\cdots A_{m}\rightarrow A}^{(m)}$ such that
\begin{multline}
\label{eq:mainNCnew}
D(\sigma_{1}\Vert\sigma_{2})-mD(\tilde{\sigma}_{1}\Vert\tilde{\sigma}_{2})\geq\\
-\log F(\sigma_{1},(\mathcal{R}_{A_{1}\cdots A_{m}\rightarrow A}^{(m)}\circ\operatorname{tr}_{A_{m+1}\cdots A_{n}}\circ\Lambda)(\sigma_{1})).
\end{multline}
The recovery channel $\mathcal{R}^{(m)}\equiv \mathcal{R}_{A_{1}\cdots A_{m}\rightarrow A}^{(m)}$ satisfies the identity $\sigma_{2}=\mathcal{R}^{(m)}(\tilde{\sigma}_{2}^{\otimes m})$. There exists an explicit choice for such an $\mathcal{R}^{(m)}$ with a formula depending only on $\sigma_2$ and $\Lambda$ \cite{Jungeetal, LW16supmat}.
\end{thm}

One can generalize Theorem~\ref{thm:mainNCnew} to the case of ``$k\to n$ cloning'' \cite{Scarani} where one starts from $k$-fold tensor copies $\sigma_1^{\otimes k}$ and $\sigma_2^{\otimes k}$ and broadcasts the former and clones the latter to states on an $n$-fold tensor product; this is Theorem 11 in \cite{LW16supmat}.

To see how the additional remainder term in \eqref{eq:mainNCnew} can be useful, we apply Theorem~\ref{thm:mainNCnew} with $m=1$. It implies that there exists a recovery channel $\mathcal{R}^{(1)}$ such that
\begin{equation}
\label{eq:local-recovery}
\begin{aligned}
D(\sigma_{1}\Vert\sigma_{2})-D(\tilde{\sigma}_{1}\Vert\tilde{\sigma}_{2}) & \geq-\log F(\sigma_{1},\mathcal{R}^{(1)}(\tilde{\sigma}_{1})),\\
\sigma_{2} & =\mathcal{R}^{(1)}(\tilde{\sigma}_{2}).
\end{aligned}
\end{equation}
Now suppose that we are in a situation where the left-hand side in \eqref{eq:local-recovery} is less than some $\varepsilon>0$. Then, \eqref{eq:local-recovery} implies that $\sigma_{1}\approx\mathcal{R}^{(1)}(\tilde{\sigma}_{1})$ and $\sigma_{2}=\mathcal{R}^{(1)}(\tilde{\sigma}_{2})$, where $\approx$ stands for $-\log F(\sigma_{1},\mathcal{R}^{(1)}(\tilde{\sigma}_1))<\varepsilon$.
In other words, we can (approximately) recover the input states $\sigma_i$ from the output marginals $\tilde\sigma_i$. Therefore, in a next step, we can improve the quality of the cloning/broadcasting channel $\Lambda$ by post-composing it with $n$ parallel uses of the local recovery channel $\mathcal{R}^{(1)}$. Indeed, the \emph{improved cloning channel} $\Lambda_{\mathrm{impr}}:=(\mathcal{R}^{(1)})^{\otimes n}\circ \Lambda$ has the new output states $\rho_{i,A_{1}\ldots A_n}^{\operatorname{impr}}:=\Lambda_{\mathrm{impr}}(\sigma_i)$, $(i=1,2)$, which satisfy
$$
\begin{aligned}
\rho_{1,A_{1}}^{\operatorname{impr}} & =\cdots=\rho_{1,A_{n}}^{\operatorname{impr}}=\mathcal{R}^{(1)}(\tilde{\sigma}_{1})\approx \sigma_1 , \\
\rho_{2,A_{1}\cdots A_{n}}^{\operatorname{impr}} & =\sigma_{2,A_{1}}\otimes\cdots\otimes\sigma_{2,A_{n}} .
\end{aligned}
$$
Here, $\approx$ again stands for $-\log F(\sigma_{1},\mathcal{R}^{(1)}(\tilde{\sigma}_1))<\varepsilon$. That is, we have found a strategy to improve the output of the cloning channel $\Lambda$, namely to the output of $\Lambda_{\mathrm{impr}}$.

\textit{Universal cloning machines and symmetrized partial trace channels}---In our next results, we consider a particular example of an approximate broadcasting channel well known in quantum information theory \cite{PhysRevA.58.1827,KW99,Scarani}, a universal quantum cloning machine (UQCM). We connect the UQCM to relative entropy and recoverability. We recall that the UQCM is the optimal cloner for tensor-power pure states, in the sense that the marginal states of its output have the optimal fidelity with the input state \cite{PhysRevA.58.1827,KW99}.

Let $k$ and $n$ be integers such that $1\leq k\leq n$. In general, one considers a $k\rightarrow n$ UQCM as acting on $k$ copies $\psi^{\otimes k}$ of an input pure state $\psi$ of dimension $d$ (a qudit), which produces an output density operator $\rho^{(n)}$, a state of $n$ qudits. From Werner's work \cite{PhysRevA.58.1827}, the UQCM is known to be
\begin{equation}
\mathcal{C}_{k\rightarrow n}(\omega^{(k)})\equiv\frac{d[k]}{d[n]}\Pi_{\operatorname{sym}}^{d,n}\left[ \Pi_{\operatorname{sym}}^{d,k}\omega^{(k)}\Pi_{\operatorname{sym}}^{d,k}\otimes I^{n-k}\right] \Pi_{\operatorname{sym}}^{d,n}.
\label{eq:cloning-channel}
\end{equation}
Here $\Pi_{\operatorname{sym}}^{d,n}$ is the projection onto the (permutation-)symmetric subspace of $(\mathbb{C}^d)^{\otimes n}$, which has dimension $d[n]:=\binom{d+n-1}{n}$. We note that $\mathcal{C}_{k\rightarrow n}$ is trace-preserving when acting on the symmetric subspace.
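As a quick numerical illustration of \eqref{eq:cloning-channel} (a minimal sketch, not part of the original analysis; it assumes Python with NumPy, and the parameters $d=2$, $k=1$, $n=2$ and the random seed are purely illustrative), one can construct the $1\to 2$ UQCM for qubits, check that it is trace-preserving on the symmetric subspace, and confirm that the overlap of its output with two perfect clones equals $d[1]/d[2]=2/3$, consistent with the bound \eqref{eq:Werner} derived below.
\begin{verbatim}
# Minimal numerical check of the 1 -> 2 UQCM for qubits (d = 2, k = 1, n = 2).
import numpy as np
from math import comb

d, k, n = 2, 1, 2
d_k = comb(d + k - 1, k)   # d[k]: dim of symmetric subspace of k qudits
d_n = comb(d + n - 1, n)   # d[n]: dim of symmetric subspace of n qudits

# Projector onto the symmetric subspace of two qubits: (I + SWAP)/2.
SWAP = np.zeros((4, 4))
SWAP[0, 0] = SWAP[3, 3] = SWAP[1, 2] = SWAP[2, 1] = 1
Pi_sym = (np.eye(4) + SWAP) / 2

# Random input pure qubit state |phi>.
rng = np.random.default_rng(0)
v = rng.normal(size=2) + 1j * rng.normal(size=2)
phi = v / np.linalg.norm(v)
omega = np.outer(phi, phi.conj())

# 1 -> 2 UQCM: C(omega) = (d[k]/d[n]) * Pi_sym (omega x I) Pi_sym.
rho_out = (d_k / d_n) * Pi_sym @ np.kron(omega, np.eye(2)) @ Pi_sym

phi2 = np.kron(phi, phi)                        # target |phi>|phi>
overlap = np.real(phi2.conj() @ rho_out @ phi2)
print(np.isclose(np.trace(rho_out).real, 1.0))  # True: trace preserved here
print(np.isclose(overlap, d_k / d_n))           # True: fidelity = 2/3
\end{verbatim}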
The main results here are Theorems \ref{thm:PT-cloner-recovery} and \ref{thm:cloner-PT-recovery}, which highlight the duality between the UQCM \eqref{eq:cloning-channel} and the following symmetrized partial trace channel:
\begin{equation}
\mathcal{P}_{n\rightarrow k}(\cdot)\equiv\Pi_{\operatorname{sym}}^{d,k}\operatorname{tr}_{n-k}\!\left[ \Pi_{\operatorname{sym}}^{d,n}(\cdot)\Pi_{\operatorname{sym}}^{d,n}\right] \Pi_{\operatorname{sym}}^{d,k}.
\label{eq:partial-trace-channel}
\end{equation}
In addition to the operational sense of duality between the partial trace channel $\mathcal{P}_{n\rightarrow k}$ and the UQCM $\mathcal{C}_{k\rightarrow n}$ which is established by Theorems \ref{thm:PT-cloner-recovery} and \ref{thm:cloner-PT-recovery}, the two are dual in the sense of quantum channels (up to a constant). That is, $\mathcal{P}_{n\rightarrow k}^{\dagger}=\left( d[n]/d[k]\right) \mathcal{C}_{k\rightarrow n}$.

Our results will quantify the quality of the UQCM for certain tasks in terms of the relative entropy $D(\omega^{(n)}\Vert\pi_{\operatorname{sym}}^{d,n})$, which is between a general $n$-qudit state $\omega^{(n)}$ and the maximally mixed state $\pi_{\operatorname{sym}}^{d,n}$ of the symmetric subspace. We consider the maximally mixed state $\pi_{\operatorname{sym}}^{d,n}$ as a natural ``origin'' from which to measure the ``distance'' $D(\omega^{(n)}\Vert\pi_{\operatorname{sym}}^{d,n})$ since it is a (Haar-)random mixture of tensor-power pure states.

We recall what one obtains from the standard monotonicity of the relative entropy, namely
\begin{equation}
\label{eq:mono2}
D(\omega^{(n)}\Vert\pi_{\operatorname{sym}}^{d,n})\geq D(\mathcal{P}_{n\rightarrow k}(\omega^{(n)})\Vert\mathcal{P}_{n\rightarrow k}(\pi_{\operatorname{sym}}^{d,n})).
\end{equation}
Our next main result is the following strengthening of the entropy inequality in \eqref{eq:mono2}:

\begin{thm}\label{thm:PT-cloner-recovery}
Let $\omega^{(n)}$ be a state with support in the symmetric subspace of $(\mathbb{C}^d)^{\otimes n}$, let $\pi_{\operatorname{sym}}^{d,n}$ denote the maximally mixed state on this symmetric subspace, let $\mathcal{C}_{k\rightarrow n}$ denote the UQCM from \eqref{eq:cloning-channel}, and $\mathcal{P}_{n\rightarrow k}$ the symmetrized partial trace channel from \eqref{eq:partial-trace-channel}. Then
\begin{multline}
\label{eq:refined-partial-trace-cloner}
D(\omega^{(n)}\Vert\pi_{\operatorname{sym}}^{d,n})\geq D(\mathcal{P}_{n\rightarrow k}(\omega^{(n)})\Vert\mathcal{P}_{n\rightarrow k}(\pi_{\operatorname{sym}}^{d,n}))\\
+D(\omega^{(n)}\Vert(\mathcal{C}_{k\rightarrow n}\circ\mathcal{P}_{n\rightarrow k})(\omega^{(n)})).
\end{multline}
\end{thm}

The entropy inequality in \eqref{eq:refined-partial-trace-cloner} can be interpreted as follows: the ability of a $k\rightarrow n$ UQCM to recover an $n$-qudit state $\omega^{(n)}$ from the loss of $n-k$ particles is limited by the decrease of distinguishability between $\omega^{(n)}$ and $\pi_{\operatorname{sym}}^{d,n}$ under the action of the partial trace $\mathcal{P}_{n\rightarrow k}$.
Thus, a small decrease in relative entropy (i.e., $D(\omega^{(n)}\Vert\pi_{\operatorname{sym}}^{d,n})-D(\mathcal{P}(\omega^{(n)})\Vert\mathcal{P}(\pi_{\operatorname{sym}}^{d,n}))\approx \varepsilon$) implies that a $k\rightarrow n$ UQCM $\mathcal{C}_{k\rightarrow n}$ will perform well at recovering $\omega^{(n)}$ from $\mathcal{P}_{n\rightarrow k}(\omega^{(n)})$. We can also observe that $\mathcal{C}_{k\rightarrow n}$ is the Petz recovery map corresponding to the state $\sigma=\pi_{\operatorname{sym}}^{d,n}$ and channel $\mathcal{N}=\operatorname{tr}_{n-k}$ (as defined in \cite{LW16supmat}).

As an application of Theorem~\ref{thm:PT-cloner-recovery}, we consider the special case that is most common in the context of quantum cloning \cite{PhysRevA.58.1827,KW99,Scarani}. We set $\omega^{(n)}=\phi^{\otimes n}$ for a pure state $\phi$. In this case,
\begin{equation}
\begin{aligned}
& \!\!\!\!\!\!\!\! D(\phi^{\otimes n}\Vert\pi_{\operatorname{sym}}^{d,n})-D(\mathcal{P}_{n\rightarrow k}(\phi^{\otimes n})\Vert\mathcal{P}_{n\rightarrow k}(\pi_{\operatorname{sym}}^{d,n})) \\
& =-\log(d[k]/d[n])\geq D(\phi^{\otimes n}\Vert\mathcal{C}_{k\rightarrow n}(\phi^{\otimes k})).
\end{aligned}
\end{equation}
By estimating $D\geq-\log F$, we recover one of the main results of \cite{PhysRevA.58.1827}, which is that the $k\rightarrow n$ UQCM has the following performance when attempting to recover $n$ copies of $\phi$ from $k$ copies:
\begin{equation}
\label{eq:Werner}
F(\phi^{\otimes n},\mathcal{C}_{k\rightarrow n}(\phi^{\otimes k}))\geq d[k]/d[n].
\end{equation}

Given the above duality between the symmetrized partial trace channel and the UQCM, we can also consider the reverse scenario.

\begin{thm}\label{thm:cloner-PT-recovery}
With the same notation as in Theorem~\ref{thm:PT-cloner-recovery}, the following inequality holds:
\begin{multline}
\label{eq:refined-partial-trace-cloner-reversed}
D(\omega^{(k)}\Vert\pi_{\operatorname{sym}}^{d,k})\geq D(\mathcal{C}_{k\rightarrow n}(\omega^{(k)})\Vert\mathcal{C}_{k\rightarrow n}(\pi_{\operatorname{sym}}^{d,k}))\\
+D(\omega^{(k)}\Vert(\mathcal{P}_{n\rightarrow k}\circ\mathcal{C}_{k\rightarrow n})(\omega^{(k)})).
\end{multline}
\end{thm}

This entropy inequality can be seen as dual to that in \eqref{eq:refined-partial-trace-cloner}, having the following interpretation: if the decrease in distinguishability of $\omega^{(k)}$ and $\pi_{\operatorname{sym}}^{d,k}$ is small under the action of a UQCM $\mathcal{C}_{k\rightarrow n}$, then the partial trace channel $\mathcal{P}_{n\rightarrow k}$ can perform well at recovering the original state $\omega^{(k)}$ back from the cloned version $\mathcal{C}_{k\rightarrow n}(\omega^{(k)})$.

There is a striking similarity between the inequalities in \eqref{eq:refined-partial-trace-cloner} and \eqref{eq:refined-partial-trace-cloner-reversed} and those from \cite[Sect.~III-A]{PhysRevA.93.062314}, which apply to photonic channels (cf.~\cite{5961814}). This observation is based on the analogy that cloning is like particle amplification and partial trace is like particle loss, and we discuss this further in \cite{LW16supmat}.

\textit{Restrictions on cloning in general subspaces}---We can generalize the discussion in the previous section to arbitrary subspaces.
For $1\leq k\leq n$, let $X_n$ be a $d_{X_n}$-dimensional subspace of $(\mathbb C^d)^{\otimes n}$ and let $Y_k$ be a $d_{Y_k}$-dimensional subspace of $(\mathbb C^d)^{\otimes k}$. We write $\Pi_{X_n}$, $\Pi_{Y_k}$ for the projections onto these subspaces and $\pi_{X_n}$ and $\pi_{Y_k}$ for the corresponding maximally mixed states. We generalize the definitions in \eqref{eq:cloning-channel} and \eqref{eq:partial-trace-channel} to
\begin{align}
\label{eq:Cdefn}
\mathcal{C}_{k\rightarrow n}(\cdot) &\equiv\frac{d_{Y_k}}{d_{X_n}}\Pi_{X_n}\left[ \Pi_{Y_k} (\cdot)\Pi_{Y_k}\otimes I^{n-k}\right] \Pi_{X_n},\\
\mathcal{P}_{n\rightarrow k}(\cdot)&\equiv\Pi_{Y_k}\operatorname{tr}_{n-k}\!\left[ \Pi_{X_n}(\cdot)\Pi_{X_n}\right] \Pi_{Y_k}.
\end{align}
The cloning map $\mathcal{C}_{k\rightarrow n}$ is a direct analogue of the UQCM for the specialized task of recovering a state in the subspace $X_n$ from one in the subspace $Y_k$ (previously, $X_n$ and $Y_k$ were both taken to be the symmetric subspace). By inspection, it is completely positive, and if $\operatorname{tr}_{n-k}[ \pi_{X_n} ] = \pi_{Y_k}$, then it is trace-preserving when acting on any operator with support in $X_n$. The same argument that proves Theorem~\ref{thm:PT-cloner-recovery} then gives the following:

\begin{thm}\label{thm:new}
Let $\omega^{(n)}$ be a state with support in $X_n$, and suppose that $\operatorname{tr}_{n\to k}[\omega^{(n)}]$ is supported in $Y_k$. Then
\begin{multline}
\label{eq:DP-form-general}
D(\omega^{(n)}\Vert\pi_{X_n})\geq D(\mathcal{P}_{n\rightarrow k}(\omega^{(n)})\Vert \pi_{Y_k})\\
+ D(\omega^{(n)}\Vert(\mathcal{C}_{k\rightarrow n}\circ\mathcal{P}_{n\rightarrow k})(\omega^{(n)})).
\end{multline}
\end{thm}

The assumption that $\operatorname{tr}_{n\to k}[\omega^{(n)}]$ is supported in $Y_k$ is made for convenience. Without it, the quantity $\operatorname{tr}[\mathcal{P}_{n\to k}(\omega^{(n)})]<1$ would enter into the statement, cf.\ \cite{LW16supmat}. We can obtain a stronger statement under the additional assumption $\operatorname{tr}_{n-k}[ \pi_{X_n} ] = \pi_{Y_k}$: it implies $\mathcal{P}_{n\rightarrow k}(\pi_{X_n})=\pi_{Y_k}$ and that $(\mathcal{C}_{k\rightarrow n}\circ\mathcal{P}_{n\rightarrow k})(\omega^{(n)})$ has trace one.

Theorem \ref{thm:new} controls the performance of the cloning machine $\mathcal{C}_{k\rightarrow n}$ in \eqref{eq:Cdefn} in recovering from a loss of $n-k$ particles when \emph{a priori information} about the states is given (in the sense that we know on which subspaces they are supported). To see this, consider, e.g., the case of perfect a priori information when $\dim X_n=1$. Then $D(\omega^{(n)}\Vert\pi_{X_n})=0$ and so \eqref{eq:DP-form-general} implies that the cloning is perfect, $\omega^{(n)}=(\mathcal{C}_{k\rightarrow n}\circ\mathcal{P}_{n\rightarrow k})(\omega^{(n)})$.

For non-trivial applications of Theorem \ref{thm:new}, a natural class of subspaces to consider are those associated to irreducible group representations, e.g.\ of the permutation group acting on $(\mathbb C^d)^{\otimes n}$. To avoid introducing the representation-theoretic background, we focus here on the case when both $X_n$ and $Y_k$ are taken to be the familiar \emph{antisymmetric} subspace. Physically, the antisymmetric subspace describes fermions and therefore our results have bearing on electronic analogues of the photonic scenarios mentioned above. For this part, we let $d\geq n$.
An example system for which $d$ can be larger than $n$ is a tight-binding model on $d$ lattice sites, where each site can host a single electron. The antisymmetric subspace $X_n$ has dimension $d_{X_n}= \binom{d}{n}$. The analogue of a tensor-power pure state in the antisymmetric subspace is a \emph{Slater determinant} $\vert \Phi_n \rangle \equiv \vert \phi_1 \rangle \wedge \cdots\wedge \vert \phi_n \rangle$, where the states $\{ \vert \phi_i \rangle \}_i$ are orthonormal. The relevant background is reviewed in \cite{LW16supmat}, including the fact that the marginal $\operatorname{tr}_{n\to k}[\Phi_n]$ is again antisymmetric and has quantum entropy $\log\binom{n}{k}$. Thus, \eqref{eq:DP-form-general} of Theorem~\ref{thm:new} applies to establish the first inequality of the following:
\begin{equation}
\begin{aligned}
\log \binom{d-k}{d-n} & = -\log\!\left(\binom{d}{k}\cdot \left[\binom{n}{k}\binom{d}{n}\right]^{-1}\right) \\
&\geq D(\Phi_n\Vert(\mathcal{C}_{k\rightarrow n}\circ\mathcal{P}_{n\rightarrow k})(\Phi_n)).
\end{aligned}
\end{equation}
Using $D\geq -\log F$ again, we conclude that the performance of the antisymmetric cloning machine $\mathcal{C}_{k\rightarrow n}$ in recovering from a loss of $n-k$ fermionic particles is controlled by
\begin{equation}
F(\Phi_n,\, (\mathcal{C}_{k\rightarrow n}\circ\mathcal{P}_{n\rightarrow k})(\Phi_n))\geq \left[\binom{d-k}{d-n}\right]^{-1}.
\end{equation}
We mention that $(\mathcal{C}_{k\rightarrow n}\circ\mathcal{P}_{n\rightarrow k})(\Phi_n)$ has trace one; this follows from the identity $\operatorname{tr}_{n-k}[ \pi_{X_n} ] = \pi_{Y_k}$ for the antisymmetric subspace (cf.~Lemma 12 in \cite{LW16supmat}). We also mention that the standard symmetric UQCM would produce the zero state in this case and thus yields a (minimal) fidelity of zero.

\textit{General restrictions on approximate broadcasts}---As mentioned in the introduction, our methods imply new information-theoretic restrictions on any approximate two-fold broadcast. These are relegated to \cite{LW16supmat}.

\textit{Conclusion}---In this paper, we have proven several entropic inequalities that pose limitations on the kinds of approximate clonings/broadcasts that are allowed in quantum information processing. Some of the results generalize the well known no-cloning and no-broadcasting results, restated in Theorems~\ref{thm:NC} and \ref{thm:UBC}. Other results demonstrate how universal cloning machines and partial trace channels are dual to each other, in the sense that one can be used as an approximate recovery channel for the other, with a performance controlled by entropy inequalities. We can also control the performance of an analogue of the UQCM for cloning between any two subspaces. In particular, we obtain bounds on its performance in recovering from a loss of $n-k$ fermionic particles.

\acknowledgments{
We acknowledge discussions with Sourav Chatterjee and Kaushik Seshadreesan and helpful comments by an anonymous referee. After completing the results of this paper, we learned of the related and concurrent work of Marvian and Lloyd \cite{ML16}. We are grateful to them for passing their manuscript along to us. M.M.W.\ acknowledges support from the NSF under Award No. 1350397.
}

\begin{thebibliography}{47}
\bibitem{Dieks} D.~Dieks, Physics Letters A \textbf{92}, 271 (1982).
\bibitem{Wootters} W.~Wootters and W.~Zurek, Nature \textbf{299}, 802 (1982).
\bibitem{Barnumetal96} H.~Barnum, C.~M.~Caves, C.~A.~Fuchs, R.~Jozsa, and B.~Schumacher, Phys. Rev. Lett. \textbf{76}, 2818 (1996).
\bibitem{Knill} E.~Knill and R.~Laflamme, Phys. Rev. A \textbf{55}, 900 (1997).
\bibitem{Manda} P.~Mandayam and H.~K.~Ng, Phys. Rev. A \textbf{86}, 012335 (2012).
\bibitem{Shor} P.~W.~Shor, in \emph{Proceedings of the 37th Annual Symposium on Foundations of Computer Science}, FOCS '96 (IEEE Computer Society, Washington, DC, USA, 1996), pp.~56--.
\bibitem{Bu} V.~Bu\v{z}ek and M.~Hillery, Phys. Rev. A \textbf{54}, 1844 (1996).
\bibitem{Gisin} N.~Gisin and S.~Massar, Phys. Rev. Lett. \textbf{79}, 2153 (1997).
\bibitem{PhysRevA.58.1827} R.~F.~Werner, Phys. Rev. A \textbf{58}, 1827 (1998).
\bibitem{Karen} A.~E.~Allahverdyan and K.~V.~Hovhannisyan, Phys. Rev. A \textbf{81}, 012312 (2010).
\bibitem{KW99} M.~Keyl and R.~F.~Werner, Journal of Mathematical Physics \textbf{40}, 3283 (1999).
\bibitem{Lamas} A.~Lamas-Linares, C.~Simon, J.~C.~Howell, and D.~Bouwmeester, Science (2002).
\bibitem{Scarani} V.~Scarani, S.~Iblisdir, N.~Gisin, and A.~Ac\'{\i}n, Rev. Mod. Phys. \textbf{77}, 1225 (2005).
\bibitem{Review} H.~Fan, Y.-N.~Wang, L.~Jing, J.-D.~Yue, H.-D.~Shi, Y.-L.~Zhang, and L.-Z.~Mu, Physics Reports \textbf{544}, 241 (2014).
\bibitem{Zhu} M.-Z.~Zhu and L.~Ye, Phys. Rev. A \textbf{91}, 042319 (2015).
\bibitem{Chatterjeeetal} S.~Chatterjee, S.~Sazim, and I.~Chakrabarty, Phys. Rev. A \textbf{93}, 042309 (2016).
\bibitem{KH08} A.~Kalev and I.~Hen, Phys. Rev. Lett. \textbf{100}, 210502 (2008).
\bibitem{Lindblad1975} G.~Lindblad, Comm. Math. Phys. \textbf{40}, 147 (1975).
\bibitem{U77} A.~Uhlmann, Comm. Math. Phys. \textbf{54}, 21 (1977).
\bibitem{FR} O.~Fawzi and R.~Renner, Comm. Math. Phys. \textbf{340}, 575 (2015).
\bibitem{BLW} M.~Berta, M.~Lemm, and M.~M.~Wilde, Quantum Info. Comput. \textbf{15}, 1333 (2015).
\bibitem{SFR} D.~Sutter, O.~Fawzi, and R.~Renner, Proc. R. Soc. A \textbf{472}, 20150623 (2016).
\bibitem{Wilde} M.~M.~Wilde, Proc. R. Soc. A \textbf{471}, 20150338 (2015).
\bibitem{Jungeetal} M.~Junge, R.~Renner, D.~Sutter, M.~M.~Wilde, and A.~Winter, arXiv:1509.07127.
\bibitem{Sutteretal} D.~Sutter, M.~Berta, and M.~Tomamichel, arXiv:1604.03023.
\bibitem{Giulio} Y.~Yang, G.~Chiribella, and M.~Hayashi, Phys. Rev. Lett. \textbf{117}, 090502 (2016).
\bibitem{PhysRevA.93.062314} F.~Buscemi, S.~Das, and M.~M.~Wilde, Phys. Rev. A \textbf{93}, 062314 (2016).
\bibitem{Lindblad99} G.~Lindblad, Lett. Math. Phys. \textbf{47}, 189 (1999).
\bibitem{Leifer} M.~S.~Leifer, Phys. Rev. A \textbf{74}, 042310 (2006).
\bibitem{Barnumetal07} H.~Barnum, J.~Barrett, M.~Leifer, and A.~Wilce, Phys. Rev. Lett. \textbf{99}, 240501 (2007).
\bibitem{Pianietal} M.~Piani, P.~Horodecki, and R.~Horodecki, Phys. Rev. Lett. \textbf{100}, 090502 (2008).
\bibitem{Piani16} M.~Piani, arXiv:1608.02650.
\bibitem{U62} H.~Umegaki, Kodai Math. Seminar Reports \textbf{14}, 59 (1962).
\bibitem{U73} A.~Uhlmann, Reports Math. Phys. \textbf{9}, 273 (1976).
\bibitem{LW16supmat} M.~Lemm and M.~M.~Wilde, Supplementary Material (2017).
\bibitem{OP93} M.~Ohya and D.~Petz, \emph{Quantum Entropy and Its Use} (Springer, 1993).
\bibitem{H13} A.~W.~Harrow, arXiv:1308.6595 (2013).
\bibitem{5961814} K.~Bradler, IEEE Transactions on Information Theory \textbf{57}, 5497 (2011).
\bibitem{ML16} I.~Marvian and S.~Lloyd (2016).
\bibitem{SB1} G.~M.~D'Ariano, C.~Macchiavello, and P.~Perinotti, Phys. Rev. Lett. \textbf{95}, 060503 (2005).
\bibitem{SB2} F.~Buscemi, G.~M.~D'Ariano, C.~Macchiavello, and P.~Perinotti, Phys. Rev. A \textbf{74}, 042309 (2006).
\bibitem{Ruskai} M.~B.~Ruskai, J. Math. Phys. \textbf{43}, 4358 (2002).
\bibitem{Haydenetal} P.~Hayden, R.~Jozsa, D.~Petz, and A.~Winter, Comm. Math. Phys. \textbf{246}, 359 (2004).
\bibitem{Petz03} D.~Petz, Rev. Math. Phys. \textbf{15}, 79 (2003).
\bibitem{Petz86} D.~Petz, Comm. Math. Phys. \textbf{105}, 123 (1986).
\bibitem{CarlenLieb14} E.~A.~Carlen and E.~H.~Lieb, J. Math. Phys. \textbf{55}, 042201 (2014).
\bibitem{FG} C.~A.~Fuchs and J.~van de Graaf, IEEE Transactions on Information Theory \textbf{45}, 1216 (1998).
\end{thebibliography}

\onecolumngrid
\pagebreak
\appendix

\section{Monotonicity of the relative entropy and recoverability}

We recall the lower bound from \cite{Jungeetal} on the decrease of the relative entropy for a channel $\mathcal{N}$ and states $\rho$ and $\sigma$:

\begin{thm}[\cite{Jungeetal}]
\label{thm:monotonicity-general}
Let $\beta(t):=\frac{\pi}{2}(1+\cosh(\pi t))^{-1}$. For any two quantum states $\rho,\sigma$ and a channel $\mathcal{N}$, the following bound holds:
\begin{equation}
D(\rho\| \sigma)\geq D(\mathcal{N}(\rho)\| \mathcal{N}(\sigma)) -\int_{\mathbb R} \log F\!\left(\rho, \mathcal{R}^t_{\mathcal{N},\sigma}(\mathcal{N}(\rho)) \right) \,\mathrm{d}\beta(t), \nonumber
\end{equation}
where the rotated Petz recovery map $\mathcal{R}^t_{\mathcal{N},\sigma}$ is defined as
$$
\mathcal{R}^t_{\mathcal{N},\sigma}(\cdot) := \sigma^{(1+it)/2} \mathcal{N}^\dagger \left[ (\mathcal{N}(\sigma))^{-(1+it)/2} (\cdot) (\mathcal{N}(\sigma))^{-(1-it)/2}\right] \sigma^{(1-it)/2},
$$
where $\mathcal{N}^\dagger$ is the completely positive, unital adjoint of the channel $\mathcal{N}$.
Every rotated Petz recovery map perfectly recovers $\sigma$ from $\mathcal{N}(\sigma)$:
$$
\curly{R}^t_{\mathcal{N},\sigma}(\mathcal{N}(\sigma)) =\sigma.
$$
\end{thm}

In the special case when the applied quantum channel is the partial trace, the inequality reads as follows:
\begin{thm}[\cite{Jungeetal}]
\label{thm:monotonicity}
Let $\beta(t):=\frac{\pi}{2}(1+\cosh(\pi t))^{-1}$. For any two quantum states $\rho_{AB},\sigma_{AB}$, we have
\begin{equation}
D(\rho_{AB}\| \sigma_{AB})\geq D(\rho_B\| \sigma_B)
-\int_{\mathbb{R}} \log F\!\left(\rho_{AB}, \curly{R}^t_{A,\sigma}(\rho_B) \right) \,\mathrm{d}\beta(t), \nonumber
\end{equation}
where the rotated Petz recovery map $\curly{R}^t_{A,X}$ is defined in \eqref{eq:rPetz}.
\end{thm}

\section{A generalization of Theorem~\ref{thm:mainNCnew} to $k$ to $n$ cloning}

\begin{thm}
\label{thm:mainNCnewgeneral}
Consider the more general situation in which we begin with $k\leq n$ tensor-product copies of the state $\sigma_i$ for $i\in \{1,2\}$, and suppose that the channel $\Lambda_{A_1 \cdots A_k \to A_1 \cdots A_n}$ approximately broadcasts $\sigma_1$, in the sense that
$$
\operatorname{tr}_{A_1 \cdots A_n \backslash A_j}[\Lambda_{A_1 \cdots A_k \to A_1 \cdots A_n}(\sigma_1^{\otimes k})] = \tilde\sigma_1 \quad \text{for all } j\in\{1,\ldots,n\},
$$
and approximately clones $\sigma_2$, in the sense that
$$
\Lambda_{A_1 \cdots A_k \to A_1 \cdots A_n}(\sigma_2^{\otimes k}) = \tilde\sigma_2^{\otimes n}.
$$
Then, for every $m\in\{1,\ldots,n\}$, there exists a recovery channel $\mathcal{R}_{A_{1}\cdots A_{m}\rightarrow A_1 \cdots A_k}^{(m,k)}$ such that
\beq
kD(\sigma_{1}\Vert\sigma_{2})-mD(\tilde{\sigma}_{1}\Vert\tilde{\sigma}_{2})\geq
-\log F(\sigma_{1}^{\otimes k},(\mathcal{R}_{A_{1}\cdots A_{m}\rightarrow A_1 \cdots A_k}^{(m,k)} \circ\operatorname{tr}_{A_{m+1}\cdots A_{n}}\circ\Lambda)(\sigma_{1}^{\otimes k})),\nonumber
\eeq
and the recovery channel $\mathcal{R}_{A_{1}\cdots A_{m}\rightarrow A_1 \cdots A_k}^{(m,k)}$ satisfies
$$\sigma_{2}^{\otimes k}=\mathcal{R}_{A_{1}\cdots A_{m}\rightarrow A_1 \cdots A_k}^{(m,k)}(\tilde{\sigma}_{2}^{\otimes m}).$$
\end{thm}
This can be proved by the same method as for Theorem~\ref{thm:mainNCnew} (see below).
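The perfect-recovery property in Theorem~\ref{thm:monotonicity-general} and the underlying data-processing inequality are straightforward to test numerically in small dimensions. The following Python sketch is included purely as an illustration; it assumes NumPy/SciPy are available and checks only the unrotated $t=0$ member of the family \eqref{eq:rPetz}, for the partial-trace channel and randomly generated two-qubit states.
\begin{verbatim}
import numpy as np
from scipy.linalg import sqrtm, logm, fractional_matrix_power

rng = np.random.default_rng(0)
dA = dB = 2  # two qubits

def rand_state(d):
    # random full-rank density matrix
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = G @ G.conj().T
    return rho / np.trace(rho)

def rel_ent(rho, sigma):
    # quantum relative entropy D(rho || sigma), in nats
    return np.trace(rho @ (logm(rho) - logm(sigma))).real

def ptrace_A(rho_AB):
    # partial trace over the first (A) subsystem
    return np.einsum('iaib->ab', rho_AB.reshape(dA, dB, dA, dB))

rho_AB, sig_AB = rand_state(dA * dB), rand_state(dA * dB)
rho_B, sig_B = ptrace_A(rho_AB), ptrace_A(sig_AB)

# data-processing inequality for the partial-trace channel
assert rel_ent(rho_AB, sig_AB) >= rel_ent(rho_B, sig_B) - 1e-9

def petz(X_B):
    # t = 0 Petz recovery map for the partial trace, reference state sig_AB
    s_inv = fractional_matrix_power(sig_B, -0.5)
    S = sqrtm(sig_AB)
    return S @ np.kron(np.eye(dA), s_inv @ X_B @ s_inv) @ S

# sigma_AB is recovered perfectly from its marginal sigma_B
assert np.allclose(petz(sig_B), sig_AB, atol=1e-7)
print("data processing and perfect recovery verified")
\end{verbatim}
The same check goes through for any $t$, since every rotated map recovers $\sigma_{AB}$ exactly.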
\section{On photon amplification and loss}

Here we discuss the analogy between \eqref{eq:refined-partial-trace-cloner} and \eqref{eq:refined-partial-trace-cloner-reversed} and the inequalities from Section III-A of \cite{PhysRevA.93.062314}. The partial trace channel is like particle loss, which for photons is represented by a pure-loss channel $\mathcal{L}_{\eta}$ with transmissivity $\eta \in\left[ 0,1\right]$. Furthermore, a UQCM is like particle amplification, which for bosons is represented by an amplifier channel $\mathcal{A}_{G}$\ of gain $G\geq1$. Let $\theta_{E}$ denote a thermal state of mean photon number $E\geq0$, and let $\rho$ denote a state of the same energy $E$.
A slight rewriting of the inequalities from Section III-A of \cite{PhysRevA.93.062314}, given below, results in the following:
\begin{align}
D(\rho\Vert\theta_{E})& \gtrsim D(\mathcal{L}_{\eta}(\rho)\Vert\mathcal{L}_{\eta}(\theta_{E}))\nonumber\\
& \qquad\qquad+D(\rho\Vert(\mathcal{A}_{1/\eta}\circ\mathcal{L}_{\eta})(\rho)),\label{eq:loss-ch}\\
D(\rho\Vert\theta_{E})&\geq D(\mathcal{A}_{G}(\rho)\Vert\mathcal{A}_{G}(\theta_{E}))\nonumber\\
& \qquad\qquad+D(\rho\Vert(\mathcal{L}_{1/G}\circ\mathcal{A}_{G})(\rho)),\label{eq:amp-ch}
\end{align}
where the symbol $\gtrsim$ indicates that the entropy inequality holds up to a term with magnitude no larger than $\log(1/\eta)$ and which approaches zero as $E\rightarrow\infty$. So we see that \eqref{eq:loss-ch} is analogous to \eqref{eq:refined-partial-trace-cloner}:\ under a particle loss $\mathcal{L}_{\eta}$, we can apply a particle amplification procedure $\mathcal{A}_{1/\eta}$ to try and recover the lost particles, with a performance controlled by \eqref{eq:loss-ch}. Similarly, \eqref{eq:amp-ch} is analogous to \eqref{eq:refined-partial-trace-cloner-reversed}: under a particle amplification $\mathcal{A}_{G}$, we can apply a particle loss channel $\mathcal{L}_{1/G}$ to try and recover the original state, with a performance controlled by \eqref{eq:amp-ch}. Observe that the parameters specifying the recovery channels are directly related to the parameters of the original channels, just as is the case in \eqref{eq:refined-partial-trace-cloner} and \eqref{eq:refined-partial-trace-cloner-reversed}. Note that an explicit connection between cloning and amplifier channels was established in \cite{5961814}, and our result serves to complement that connection.
\begin{proof}[Proof of \eqref{eq:loss-ch} and \eqref{eq:amp-ch}]
A proof of \eqref{eq:loss-ch} is as follows. The Hamiltonian here is $a^{\dag}a$, which is the photon number operator. Let $\rho$ be a state of energy $E$, and let $\theta_{E}$ be a thermal state of energy $E$ (i.e., $\left\langle a^{\dag}a\right\rangle _{\rho}=\left\langle a^{\dag}a\right\rangle _{\theta_{E}}=E$). Under the action of a pure-loss channel $\mathcal{L}_{\eta}$, the energies of $\mathcal{L}_{\eta}(\rho)$ and $\mathcal{L}_{\eta}(\theta_{E})$ are equal to $\eta E$, and we also find that $\mathcal{L}_{\eta}(\theta_{E})=\theta_{\eta E}$. Furthermore, a standard calculation gives that $-\operatorname{tr}[\rho\log\theta_{E}]=H(\theta_{E})=g(E):=\left( E+1\right) \log\left( E+1\right) -E\log E$. Putting this together, we find that
\begin{align}
D(\rho\Vert\theta_{E})-D(\mathcal{L}_{\eta}(\rho)\Vert\mathcal{L}_{\eta}(\theta_{E}))
&=H(\mathcal{L}_{\eta}(\rho))-H(\rho)+g(E)-g(\eta E)\\
& \geq D(\rho\Vert(\mathcal{A}_{1/\eta}\circ\mathcal{L}_{\eta})(\rho))-\log(1/\eta)+g(E)-g(\eta E).
\end{align}
The first equality is a rewriting using what we mentioned above and the inequality follows from Section III-A of \cite{PhysRevA.93.062314}. When $E=0$, $g(E)-g(\eta E)=0$ also. As $E$ gets larger, $g(E)-g(\eta E)$ is monotone increasing and reaches its maximum of $\log(1/\eta)$ as $E\rightarrow\infty$; indeed, $g(E)=\log(E+1)+E\log(1+1/E)=\log E+1+O(1/E)$ as $E\rightarrow\infty$, so that $g(E)-g(\eta E)=\log(1/\eta)+O(1/E)$. The other inequality in \eqref{eq:amp-ch} for an amplifier channel follows similarly.
Under the action of an amplifier channel $\mathcal{A}_{G}$, the energies of $\mathcal{A}_{G}(\rho)$ and $\mathcal{A}_{G}(\theta_{E})$ are $GE$. We also find that $\mathcal{A}_{G}(\theta_{E})=\theta_{GE}$. Proceeding as above, we find that
\begin{align}
D(\rho\Vert\theta_{E})-D(\mathcal{A}_{G}(\rho)\Vert\mathcal{A}_{G}(\theta_{E}))
& =H(\mathcal{A}_{G}(\rho))-H(\rho)+g(E)-g(GE)\\
& \geq D(\rho\Vert(\mathcal{L}_{1/G}\circ\mathcal{A}_{G})(\rho))+\log G-\left[ g(GE)-g(E)\right] \\
& \geq D(\rho\Vert(\mathcal{L}_{1/G}\circ\mathcal{A}_{G})(\rho)).
\end{align}
The first equality is a rewriting and the inequality follows from Section III-A of \cite{PhysRevA.93.062314}. The last inequality follows because $g(GE)-g(E)=0$ at $E=0$, and it is monotone increasing as a function of $E$, reaching its maximum value of $\log G$ as $E\rightarrow\infty$.
\end{proof}

\section{Proofs of the main results}

\begin{proof}[Proof of Theorems \ref{thm:mainNC} and \ref{thm:mainNCnew}]
Theorem~\ref{thm:mainNC} follows from the $m=n$ case of Theorem~\ref{thm:mainNCnew}. Hence, it suffices to prove Theorem~\ref{thm:mainNCnew}. We start by noting the following general inequality holding for states $\omega$ and $\tau$, a channel $\mathcal{N}$, and a recovery channel $\mathcal{R}$:
\begin{align}
D(\omega\Vert\tau)-D(\mathcal{N}(\omega)\Vert\mathcal{N}(\tau)) & \geq-\log F(\omega,(\mathcal{R}\circ\mathcal{N})(\omega)),\label{eq:junge-et-al-1}\\
\tau & =(\mathcal{R}\circ\mathcal{N})(\tau),\label{eq:junge-et-al-2}
\end{align}
which is a consequence of the convexity of $-\log$ and the concavity of the fidelity, applied to Theorem~\ref{thm:monotonicity-general}, taking
\beq
\label{eq:Rdefn}
\mathcal{R} := \int_{\mathbb{R}} \curly{R}^t_{\mathcal{N},\tau} \,\mathrm{d}\beta(t)
\eeq
with $\curly{R}^t_{\mathcal{N},\tau}$ as in Theorem~\ref{thm:monotonicity-general}. To get the inequality, we take $\omega=\sigma_{1}$, $\tau=\sigma_{2}$, and $\mathcal{N}=\operatorname{tr}_{A_{m+1}\cdots A_{n}}\circ\Lambda$. This then gives the inequality
\begin{equation}
D(\sigma_{1}\Vert\sigma_{2})-D((\operatorname{tr}_{A_{m+1}\cdots A_{n}}\circ\Lambda)(\sigma_{1})\Vert(\operatorname{tr}_{A_{m+1}\cdots A_{n}}\circ\Lambda)(\sigma_{2}))\geq-\log F(\sigma_{1},(\mathcal{R}_{A_{1}\cdots A_{n}\rightarrow A}^{(m)}\circ\operatorname{tr}_{A_{m+1}\cdots A_{n}}\circ\Lambda)(\sigma_{1})),
\end{equation}
where the recovery channel $\mathcal{R}_{A_{1}\cdots A_{n}\rightarrow A}^{(m)}$ satisfies
\begin{align}
\sigma_{2} & =(\mathcal{R}_{A_{1}\cdots A_{n}\rightarrow A}^{(m)}\circ\operatorname{tr}_{A_{m+1}\cdots A_{n}}\circ\Lambda)(\sigma_{2}) =\mathcal{R}_{A_{1}\cdots A_{n}\rightarrow A}^{(m)}(\tilde{\sigma}_{2}^{\otimes m}).
\end{align}
So then we prove that $-D((\operatorname{tr}_{A_{m+1}\cdots A_{n}}\circ\Lambda)(\sigma_{1})\Vert(\operatorname{tr}_{A_{m+1}\cdots A_{n}}\circ\Lambda)(\sigma_{2}))\leq-mD(\tilde{\sigma}_{1}\Vert\tilde{\sigma}_{2})$. We apply $\log(X\otimes Y)=\log X\otimes I+I\otimes \log Y$ and set $H(X):=-\tr{X\log X}$ to get
\begin{align}
& \!\!\!\!\!\!\!\!\!\!
-D((\operatorname{tr}_{A_{m+1}\cdots A_{n}}\circ\Lambda)(\sigma_{1})\Vert(\operatorname{tr}_{A_{m+1}\cdots A_{n}}\circ\Lambda)(\sigma_{2})) \nonumber\\
& =-D(\rho_{1,A_{1}\cdots A_{m}}^{\operatorname{out}}\Vert\tilde{\sigma}_{2,A_{1}}\otimes\cdots\otimes\tilde{\sigma}_{2,A_{m}})\\
& =H(\rho_{1,A_{1}\cdots A_{m}}^{\operatorname{out}})+\operatorname{tr}[\rho_{1,A_{1}\cdots A_{m}}^{\operatorname{out}}\log(\tilde{\sigma}_{2,A_{1}}\otimes\cdots\otimes\tilde{\sigma}_{2,A_{m}})]\\
& =H(\rho_{1,A_{1}\cdots A_{m}}^{\operatorname{out}})+\sum_{k=1}^{m}\operatorname{tr}[\rho_{1,A_{1}\cdots A_{m}}^{\operatorname{out}}(I_{A_{1}\cdots A_{m}\backslash A_{k}}\otimes\log(\tilde{\sigma}_{2,A_{k}}))].
\end{align}
Recall our assumption from \eqref{eq:approxCB} that the channel broadcasts $\sigma_1$ to $\tilde\sigma_1$. It gives
\begin{align}
&\!\!\!\!\!\!\!\!H(\rho_{1,A_{1}\cdots A_{m}}^{\operatorname{out}})+\sum_{k=1}^{m}\operatorname{tr}[\rho_{1,A_{1}\cdots A_{m}}^{\operatorname{out}}(I_{A_{1}\cdots A_{m}\backslash A_{k}}\otimes\log(\tilde{\sigma}_{2,A_{k}}))] \nonumber \\
& =H(\rho_{1,A_{1}\cdots A_{m}}^{\operatorname{out}})+\sum_{k=1}^{m}\operatorname{tr}[\tilde{\sigma}_{1}\log\tilde{\sigma}_{2}]\\
& \leq\sum_{k=1}^{m}\left[ H(\rho_{1,A_{k}}^{\operatorname{out}})+\operatorname{tr}[\tilde{\sigma}_{1}\log\tilde{\sigma}_{2}]\right] \\
& =-mD(\tilde{\sigma}_{1}\Vert\tilde{\sigma}_{2}).
\end{align}
In the second-to-last step, we used the subadditivity of the entropy $H$ and again \eqref{eq:approxCB}.
\end{proof}

\begin{proof}[Proof of Theorem~\ref{thm:PT-cloner-recovery}.]
We observe that $\pi_{\operatorname{sym}}^{d,k}=\operatorname{tr}_{n-k}[\pi_{\operatorname{sym}}^{d,n}]$, which follows easily from the representation $\pi_{\operatorname{sym}}^{d,n}=\int d\psi\ \psi^{\otimes n}$ \cite{H13}, the integral being with respect to the Haar probability measure over pure states $\psi$. A proof of \eqref{eq:refined-partial-trace-cloner}\ then follows from a few key steps:
\begin{align}
& D(\omega^{(n)}\Vert\pi_{\operatorname{sym}}^{d,n})-D(\mathcal{P}_{n\rightarrow k}(\omega^{(n)})\Vert\mathcal{P}_{n\rightarrow k}(\pi_{\operatorname{sym}}^{d,n}))\nonumber \\
=&-H(\omega^{(n)})-\operatorname{tr}[\omega^{(n)}\log\pi_{\operatorname{sym}}^{d,n}]+H(\mathcal{P}_{n\rightarrow k}(\omega^{(n)}))+\operatorname{tr}[\mathcal{P}_{n\rightarrow k}(\omega^{(n)})\log\pi_{\operatorname{sym}}^{d,k}]\nonumber\\
=& H(\mathcal{P}_{n\rightarrow k}(\omega^{(n)}))-H(\omega^{(n)})-\log(d[k]/d[n])\nonumber\\
\geq& D(\omega^{(n)}\Vert(\mathcal{P}_{n\rightarrow k}^{\dag}\circ\mathcal{P}_{n\rightarrow k})(\omega^{(n)}))-\log(d[k]/d[n])\nonumber\\
=& D(\omega^{(n)}\Vert(\mathcal{C}_{k\rightarrow n}\circ\mathcal{P}_{n\rightarrow k})(\omega^{(n)})).\label{eq:proof-cloning-partial-trace-recovery}
\end{align}
The first equality holds by definition of quantum relative entropy, and in the second equality we used the fact that $\mathrm{tr}[\mathcal{P}_{n\rightarrow k}(\omega^{(n)})]=\mathrm{tr}[\mathrm{tr}_{n\to k}(\omega^{(n)})]=\mathrm{tr}[\omega^{(n)}]=1$, wherein the first step holds because $\mathrm{tr}_{n\to k}[\omega^{(n)}]$ is supported in the symmetric subspace.
The inequality above is a consequence of \cite[Thm.~1]{PhysRevA.93.062314}, which states that
\begin{equation}
H(\mathcal{N}(\rho))-H(\rho)\geq D(\rho\Vert(\mathcal{N}^{\dag}\circ\mathcal{N})(\rho))
\end{equation}
for any state $\rho$ and positive, trace-preserving map $\mathcal{N}$. (We remark that $\curly{P}_{n\to k}$ is indeed trace-preserving when considered as a map on states supported on the symmetric subspace.) The last equality in \eqref{eq:proof-cloning-partial-trace-recovery} follows from the property of relative entropy that $D(\xi\Vert \tau)-\log c=D(\xi\Vert c\tau)$ for states $\xi,\tau$ and $c>0$.
\end{proof}
Essentially the same argument, with minor modifications, also proves Theorems~\ref{thm:cloner-PT-recovery} and \ref{thm:new}. For the former, we use the facts that $\mathcal{C}_{k\rightarrow n}(\pi_{\operatorname{sym}}^{d,k})=\pi_{\operatorname{sym}}^{d,n}$ and that $\mathcal{C}_{k\rightarrow n}$ is trace-preserving when acting on states supported in the symmetric subspace. For Theorem \ref{thm:new}, we use the assumption that $\mathrm{tr}_{n\to k}[\omega^{(n)}]$ is supported in $Y_k$ to get $\mathrm{tr}[\mathcal{P}_{n\rightarrow k}(\omega^{(n)})]=1$. The details are left to the reader.

We close this proof section with a remark on a so-far implicit assumption.
\begin{rmk}[Non-identical marginals case]
Some of our results, Theorems~\ref{thm:mainNC}, \ref{thm:mainNCnew} and \ref{thm:mainBC} (see below), apply to approximate clonings/broadcasts in the sense of Definition 3. That is, we always assume that the marginals of the output state are identical, i.e.
\beq
\label{eq:non-identical}
\rho^{\mathrm{out}}_{i,A_1}=\ldots=\rho^{\mathrm{out}}_{i,A_n}=\tilde\sigma_i, \qquad (i=1,2).
\eeq
We make this assumption for two reasons: (a) it simplifies the bounds in our main results and (b) we believe that it is a natural assumption for approximate cloning/broadcasting. However, the methods apply more generally and they also yield limitations on approximate clonings/broadcasts when \eqref{eq:non-identical} is not satisfied.
\end{rmk}

\section{The maximally mixed state on the antisymmetric subspace}

The following lemma allows us to conclude that the stronger form of Theorem~\ref{thm:new} applies when considering cloning maps for the antisymmetric subspace.
\begin{lm}\label{lm:pi}
Let $\curly{H}_n$ denote the antisymmetric subspace of $n$ qudits and let $\pi_n$ denote the maximally mixed state on $\curly{H}_n$. Then
$$
\pi_k=\mathrm{tr}_{n\to k}[\pi_n].
$$
\end{lm}
\begin{proof}[Proof of Lemma \ref{lm:pi}]
The operator $\mathrm{tr}_{n\to k}[\pi_n]$ is supported on $\curly{H}_k$. It also commutes with all unitaries $U_k$ on $\curly{H}_k$. Indeed, by properties of the partial trace and the fact that $\pi_n$ commutes with all unitaries on $\curly{H}_n$,
$$
U_k\mathrm{tr}_{n\to k}[\pi_n]=\mathrm{tr}_{n\to k}[(U_k\otimes I_{\curly{H}_{n-k}})\pi_n]=\mathrm{tr}_{n\to k}[\pi_n (U_k\otimes I_{\curly{H}_{n-k}})]=\mathrm{tr}_{n\to k}[\pi_n]U_k.
$$
Since it commutes with all unitaries, $\mathrm{tr}_{n\to k}[\pi_n]$ is proportional to $I_{\curly{H}_k}$. Since
$$
\mathrm{tr}_{\curly{H}_k}[\mathrm{tr}_{n\to k}[\pi_n]]=\mathrm{tr}_{\curly{H}_n}[\pi_n]=1,
$$
the proportionality constant must be $1/\dim\curly{H}_k=1/\binom{d}{k}$. This proves the lemma.
\end{proof}
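Lemma~\ref{lm:pi} is also easy to test numerically in small dimensions. The following Python sketch is purely illustrative (it assumes NumPy and builds the Slater-determinant basis explicitly); it confirms the case $d=3$, $n=2$, $k=1$, for which $\pi_1=I/3$.
\begin{verbatim}
import numpy as np
from itertools import combinations, permutations

d = 3  # local dimension (qutrits)

def antisym_basis(d, n):
    # orthonormal Slater-determinant basis of the antisymmetric n-qudit subspace
    basis = []
    for subset in combinations(range(d), n):
        vec = np.zeros(d ** n)
        for perm in permutations(range(n)):
            sign = np.linalg.det(np.eye(n)[list(perm)])  # signum of the permutation
            idx = 0
            for pos in range(n):
                idx = idx * d + subset[perm[pos]]
            vec[idx] += sign
        basis.append(vec / np.linalg.norm(vec))
    return np.array(basis)  # shape (binom(d, n), d**n)

B = antisym_basis(d, 2)
pi_2 = (B.T @ B) / B.shape[0]  # maximally mixed state on the antisymmetric subspace

# partial trace over the second qutrit
reduced = np.einsum('iaja->ij', pi_2.reshape(d, d, d, d))

# Lemma: the reduction equals the maximally mixed state I/3 on a single qutrit
assert np.allclose(reduced, np.eye(d) / d)
print("tr_{2->1}[pi_2] = I/3, as predicted by the lemma")
\end{verbatim}
Cases with larger $d$, $n$ and $k$ can be checked in the same way at negligible cost.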
\section{Reductions of Slater determinants and their quantum entropy}
\label{sec:reductions-Slater-dets}

Here we prove the fact that the quantum entropy of the marginal $\mathrm{tr}_{n\rightarrow k}[\Phi_{n}]$ is $\log\binom{n}{k}$ when $\Phi_{n}$ is a Slater determinant. We can conclude this directly from the expression \eqref{eq:marginal} for the marginal derived below.

Before beginning, let us suppose that $\{|\phi_{j}\rangle\}_{j=1}^{d}$ is an orthonormal basis for a $d$-dimensional Hilbert space $\mathcal{H}$. For $d\geq n$, the Slater determinant state $\Phi_{n}$\ corresponding to this basis and the subset $\{1,\ldots,n\}$ is defined as follows:
\begin{align}
|\Phi_{n}\rangle & :=|\phi_{1}\rangle\wedge\cdots\wedge|\phi_{n}\rangle\\
& :=\frac{1}{\sqrt{n!}}\sum_{\pi\in S_{n}}\mathrm{sgn}(\pi)|\phi_{\pi(1)}\rangle\otimes\cdots\otimes|\phi_{\pi(n)}\rangle,
\end{align}
where $S_{n}$ is the set of all permutations of $\{1,\ldots,n\}$ and $\mathrm{sgn}(\pi)$ denotes its signum. Note that we chose the subset $\left\{ 1,\ldots,n\right\}$ of $\{1,\ldots,d\}$, but without loss of generality we could have chosen an arbitrary one. The formula \eqref{eq:marginal} below is presumably well known. We include an elementary, but slightly tedious, proof for completeness.
\begin{lm}[Marginal of a Slater determinant]
Let $d \geq n$ and $|\Phi_{n}\rangle=|\phi_{1}\rangle\wedge\cdots\wedge|\phi_{n}\rangle$, with $\{|\phi_{j}\rangle\}_{j=1}^{d}$ an orthonormal basis. A $k$-set $A_{k}$ is a subset of $\{1,\ldots,n\}$ consisting of exactly $k$ elements. For any $k$-set $A_{k}=\{i_{1},\ldots,i_{k}\}$, we define
\begin{equation}
|\Phi_{A_{k}}\rangle\langle\Phi_{A_k}|:=(|\phi_{i_{1}}\rangle\wedge\cdots\wedge|\phi_{i_{k}}\rangle)(\langle\phi_{i_{1}}|\wedge\cdots\wedge\langle\phi_{i_{k}}|).
\end{equation}
Then
\begin{equation}
\mathrm{tr}_{n\rightarrow k}[|\Phi_{n}\rangle\langle\Phi_{n}|]=\frac{1}{\binom{n}{k}}\sum_{A_{k}\ k\mathrm{-set}}|\Phi_{A_{k}}\rangle\langle\Phi_{A_{k}}|. \label{eq:marginal}
\end{equation}
The orthonormality of the states $\{|\Phi_{A_{k}}\rangle\}$ for fixed $k$ then implies that $H(\mathrm{tr}_{n\rightarrow k}|\Phi_{n}\rangle\langle\Phi_{n}|)=\log\binom{n}{k}$, where $H(\rho)=-\mathrm{tr}[\rho\log\rho]$ is the quantum entropy.
\end{lm}
\begin{proof}
By definition of the wedge product, we can write $|\Phi_{n}\rangle\langle\Phi_{n}|$ as
\begin{equation}
|\Phi_{n}\rangle\langle\Phi_{n}|=\frac{1}{n!}\sum_{\pi,\sigma\in S_{n}}\mathrm{sgn}(\pi)\mathrm{sgn}(\sigma)|\phi_{\pi(1)}\rangle\langle\phi_{\sigma(1)}|\otimes\cdots\otimes|\phi_{\pi(n)}\rangle\langle\phi_{\sigma(n)}|.
\end{equation}
Taking the partial trace over the last $n-k$ systems yields the following:
\begin{align}
& \mathrm{tr}_{n\rightarrow k}[|\Phi_{n}\rangle\langle\Phi_{n}|]\nonumber\\
& =\frac{1}{n!}\sum_{\pi,\sigma\in S_{n}}\mathrm{sgn}(\pi)\mathrm{sgn}(\sigma)|\phi_{\pi(1)}\rangle\langle\phi_{\sigma(1)}|\otimes\cdots\otimes|\phi_{\pi(k)}\rangle\langle\phi_{\sigma(k)}|\ \langle\phi_{\pi(k+1)}|\phi_{\sigma(k+1)}\rangle\cdots\langle\phi_{\pi(n)}|\phi_{\sigma(n)}\rangle\label{eq:permutation}\\
& =\frac{1}{n!}\sum_{\pi,\sigma\in S_{n}}\mathrm{sgn}(\pi)\mathrm{sgn}(\sigma)|\phi_{\pi(1)}\rangle\langle\phi_{\sigma(1)}|\otimes\cdots\otimes|\phi_{\pi(k)}\rangle\langle\phi_{\sigma(k)}|\ \delta_{\pi(k+1),\sigma(k+1)}\cdots\delta_{\pi(n),\sigma(n)}.
\end{align}
In the second equality, we used orthonormality. The product of delta functions implies that we only need to consider permutations $\pi$ and $\sigma$ which agree on $\{k+1,\ldots,n\}$. To exploit this, we partition the permutations according to which $k$-set $A_{k}$ features as the image of $\{1,\ldots,k\}$. More precisely, given a $k$-set $A_{k}$, we define
\begin{equation}
S_{n}(A_{k}):=\left\{ \pi\in S_{n}\;:\;\pi(\{1,\ldots,k\})=A_{k}\right\} .
\end{equation}
There is a more useful, kind of affine representation of the elements of $S_{n}(A_{k})$ as tuples in $S_{k}\times S_{n-k}$ composed with a fixed bijection $f_{A_{k}}\in S_{n}(A_{k})$. For definiteness, we define $f_{A_{k}}$ to be the unique bijection in $S_{n}(A_{k})$ which preserves ordering. Then
\begin{equation}
\pi\in S_{n}(A_{k})\Longleftrightarrow\pi=f_{A_{k}}\circ(\pi^{k},\pi^{n-k}),\quad\text{for some }\pi^{k}\in S_{k},\,\pi^{n-k}\in S_{n-k}. \label{eq:represent}
\end{equation}
Here we wrote $(\pi^{k},\pi^{n-k})$ for the permutation that is obtained by applying $\pi^{k}$ to the first $k$ variables and $\pi^{n-k}$ to the last $n-k$ variables. This way of bookkeeping permutations is convenient in \eqref{eq:permutation} above. Using this representation and the identity \eqref{eq:identity} below, we find that
\begin{align}
& \mathrm{tr}_{n\rightarrow k}[|\Phi_{n}\rangle\langle\Phi_{n}|]\nonumber\\
& =\frac{1}{n!}\sum_{A_{k}\ k\mathrm{-set}}\sum_{\substack{\pi,\sigma\in S_{n}(A_{k}); \\\pi^{n-k}=\sigma^{n-k}}}\mathrm{sgn}(\pi)\mathrm{sgn}(\sigma)|\phi_{\pi(1)}\rangle\langle\phi_{\sigma(1)}|\otimes\cdots\otimes|\phi_{\pi(k)}\rangle\langle\phi_{\sigma(k)}|\\
& =\frac{1}{n!}\sum_{A_{k}\ k\mathrm{-set}}\sum_{\substack{\pi,\sigma\in S_{n}(A_{k}); \\\pi^{n-k}=\sigma^{n-k}}}\mathrm{sgn}(\pi^{k})\mathrm{sgn}(\sigma^{k})|\phi_{\pi(1)}\rangle\langle\phi_{\sigma(1)}|\otimes\cdots\otimes|\phi_{\pi(k)}\rangle\langle\phi_{\sigma(k)}|\\
& =\frac{(n-k)!}{n!}\sum_{A_{k}\ k\mathrm{-set}}\sum_{\pi^{k},\sigma^{k}\in S_{k}}\mathrm{sgn}(\pi^{k})\mathrm{sgn}(\sigma^{k}) |\phi_{(f_{A_{k}}\circ\pi^{k})(1)}\rangle\langle\phi_{(f_{A_{k}}\circ\sigma^{k})(1)}|\otimes\cdots\otimes|\phi_{(f_{A_{k}}\circ\pi^{k})(k)}\rangle\langle\phi_{(f_{A_{k}}\circ\sigma^{k})(k)}|.\label{eq:aboveperm}
\end{align}
We used the following identity:
\begin{equation}\label{eq:identity}
\mathrm{sgn}(\pi)\mathrm{sgn}(\sigma)=\mathrm{sgn}(\pi^{k})\mathrm{sgn}(\sigma^{k}).
\end{equation}
This is a consequence of the fact that $\mathrm{sgn}$ is a group homomorphism, i.e., that $\mathrm{sgn}(\sigma_{1}\circ\sigma_{2})=\mathrm{sgn}(\sigma_{1})\mathrm{sgn}(\sigma_{2})$ holds for any two permutations $\sigma_{1}$ and $\sigma_{2}$. Indeed, we have
$$
\begin{aligned}
\mathrm{sgn}(\pi)\mathrm{sgn}(\sigma) & =(\mathrm{sgn}(f_{A_{k}}))^{2}\mathrm{sgn}((\pi^{k},\pi^{n-k}))\mathrm{sgn}((\sigma^{k},\sigma^{n-k}))\\
& =\mathrm{sgn}((\pi^{k},\pi^{n-k}))\mathrm{sgn}((\sigma^{k},\pi^{n-k}))\\
& =\mathrm{sgn}((\pi^{k},I_{n-k})\circ(I_{k},\pi^{n-k}))\mathrm{sgn}((\sigma^{k},I_{n-k})\circ(I_{k},\pi^{n-k}))\\
& =\mathrm{sgn}(\pi^{k})\mathrm{sgn}(\sigma^{k}).
\end{aligned}
$$
This proves \eqref{eq:identity}. We now return to \eqref{eq:aboveperm} to conclude the proof of \eqref{eq:marginal}. We observe that
$$
\mathrm{Perm}(A_{k})=\left\{ f_{A_{k}}\circ\pi^{k}\circ f_{A_{k}}^{-1}\;:\;\pi^{k}\in S_{k}\right\}.
$$
To exploit this, we order each $k$-set $A_k=\{i_1,\ldots,i_k\}$ with $i_1<\cdots<i_k$. Then, by definition, $f_{A_k}(j)=i_j$ for all $1\leq j\leq k$. From this, we find that
$$
f_{A_k}\circ\pi^k(j)=f_{A_k}\circ\pi^k\circ f_{A_k}^{-1}(i_j)=:\tilde\pi^k(i_j)
$$
produces a permutation $\tilde\pi^k\in \mathrm{Perm}(A_{k})$. We use this observation to relabel the sum in \eqref{eq:aboveperm}; and we also use the identity $\mathrm{sgn}(\pi^{k})\mathrm{sgn}(\sigma^{k})=\mathrm{sgn}(\tilde{\pi}^{k})\mathrm{sgn}(\tilde{\sigma}^{k})$, which follows by a similar argument as \eqref{eq:identity} above. We get
\begin{align}
&\frac{(n-k)!}{n!}\sum_{A_{k}\ k\mathrm{-set}}\sum_{\pi^{k},\sigma^{k}\in S_{k}}\mathrm{sgn}(\pi^{k})\mathrm{sgn}(\sigma^{k})\nonumber |\phi_{(f_{A_{k}}\circ\pi^{k})(1)}\rangle\langle\phi_{(f_{A_{k}}\circ\sigma^{k})(1)}|\otimes\cdots\otimes|\phi_{(f_{A_{k}}\circ\pi^{k})(k)}\rangle\langle\phi_{(f_{A_{k}}\circ\sigma^{k})(k)}|\\
& =\frac{1}{\binom{n}{k}}\sum_{A_{k}\ k\mathrm{-set}}\frac{1}{k!}\sum_{\tilde{\pi}^{k},\tilde{\sigma}^{k}\in\text{Perm}(A_{k})}\mathrm{sgn}(\tilde{\pi}^{k})\mathrm{sgn}(\tilde{\sigma}^{k})|\phi_{\tilde{\pi}^{k}(i_1)}\rangle\langle\phi_{\tilde{\sigma}^{k}(i_1)}|\otimes\cdots\otimes|\phi_{\tilde{\pi}^{k}(i_k)}\rangle\langle\phi_{\tilde{\sigma}^{k}(i_k)}|\\
& =\frac{1}{\binom{n}{k}}\sum_{A_{k}\ k\mathrm{-set}}|\Phi_{A_{k}}\rangle\langle\Phi_{A_{k}}|.
\end{align}
This concludes the proof.
\end{proof}

\section{Limitations on approximate two-fold broadcasts}

As mentioned in the main text, our method also gives limitations on approximate two-fold broadcasting. Throughout, we restrict to broadcasts which receive as their input state only a single copy of $\sigma$. In particular, we are not in a situation where ``superbroadcasting'' \cite{SB1,SB2} is possible.
\begin{thm}
\label{thm:mainBC}
Fix two mixed states $\sigma_1$ and $\sigma_2$. Suppose that the quantum channel $\Lambda_{A\to AB}$ is a simultaneous approximate broadcast of $\sigma_1$ and $\sigma_2$, i.e., that
\beq
\rho^{\mathrm{out}}_{i,A}=\rho^{\mathrm{out}}_{i,B}=\tilde\sigma_{i}, \qquad \rho^{\mathrm{out}}_{i,AB}:=\Lambda(\sigma_{i,A})
\eeq
for $i=1,2$. Then
\beq
\label{eq:mainBC}
D(\sigma_1\|\sigma_2)-D(\tilde\sigma_1\|\tilde\sigma_2)\geq \Delta_{\curly{R}}(\tilde\sigma_1,\tilde\sigma_2),
\eeq
where we have introduced the (channel-dependent) ``recovery difference''
\beq
\label{eq:commondefn}
\Delta_{\curly{R}}(\tilde\sigma_1,\tilde\sigma_2):= \frac{1}{8} \int_{\mathbb{R}} \|\curly{R}^t_{B,\rho^{\mathrm{out}}_{2,AB}}(\tilde\sigma_{1,A})-\curly{R}^t_{A,\rho^{\mathrm{out}}_{2,AB}}(\tilde\sigma_{1,B})\|_1^2 \ \mathrm{d}\beta(t),
\eeq
which features the probability distribution $\beta(t):=\frac{\pi}{2}(1+\cosh(\pi t))^{-1}$ and the rotated Petz recovery map defined by
\beq
\label{eq:rPetz}
\curly{R}^t_{A,X}(\cdot):= X_{AB}^{(1+it)/2} \left(I_A\otimes X_B^{-(1+it)/2} (\cdot)X_B^{-(1-it)/2}\right)X_{AB}^{(1-it)/2}.
\eeq
\end{thm}
The proof is given at the end of this appendix. We emphasize that the definition \eqref{eq:commondefn} of the recovery difference $\Delta_{\curly{R}}(\tilde\sigma_1,\tilde\sigma_2)$ is independent of $\rho^{\mathrm{out}}_{1,AB}$. The rotated Petz recovery map \eqref{eq:rPetz} appears in the strengthening of the monotonicity of relative entropy \cite{Jungeetal}, recalled here as Theorem~\ref{thm:monotonicity} in the appendix. The rotated Petz recovery map is chosen such that the second state is perfectly recovered, i.e.
$$
\curly{R}^t_{B,\rho^{\mathrm{out}}_{2,AB}}(\tilde\sigma_{2,A})=\curly{R}^t_{A,\rho^{\mathrm{out}}_{2,AB}}(\tilde\sigma_{2,B})=\rho^{\mathrm{out}}_{2,AB}.
$$
One may wonder if the vanishing of the recovery difference implies that $\tilde\sigma_1$ and $\tilde\sigma_2$ commute, i.e., if Theorem~\ref{thm:UBC} is recovered from Theorem~\ref{thm:mainBC}. Assume that $\Delta_{\curly{R}}(\tilde\sigma_1,\tilde\sigma_2)=0$. One would like to show that this implies that $\tilde\sigma_1$ and $\tilde\sigma_2$ commute. A natural idea is to follow the proof of Theorem~\ref{thm:UBC} in \cite{KH08}. There, the authors appeal to a condition for equality in the monotonicity of the relative entropy by Ruskai \cite{Ruskai} (see also \cite{Haydenetal,Petz03,Petz86}). It yields (see (11) in \cite{KH08})
\beq
\label{eq:SigmaKH}
(\Sigma_A\otimes I_B) P_{AB}=(I_A\otimes \Sigma_B)P_{AB},\qquad \Sigma:=\log\sigma_1-\log\sigma_2,
\eeq
where $P_{AB}$ projects onto the support of $\rho_{2,AB}^{\mathrm{out}}$. We have
\begin{lm}
\label{lm:KH}
If \eqref{eq:SigmaKH} holds, then $\tilde\sigma_1$ and $\tilde\sigma_2$ commute.
\end{lm}
This was observed without proof in \cite{KH08}; for completeness we include the
\begin{proof}[Proof of Lemma \ref{lm:KH}]
First, recall our standing assumption that $\ker\tilde\sigma_2\subset \ker\tilde\sigma_1$. It yields that $\tilde\sigma_1\tilde\sigma_2=0=\tilde\sigma_2\tilde\sigma_1$ on $\ker\tilde\sigma_2$, and so it suffices to consider the subspace $X:=(\ker\tilde\sigma_2)^\perp$ in the following. Fix a vector $\ket{k}\in X$. Then, by the definition of the partial trace, there exists another vector $\ket{l}$ such that
$$
\ket{k}_A\otimes\ket{l}_B\in (\ker\rho^{\mathrm{out}}_2)^\perp=\mathrm{supp}\,\rho^{\mathrm{out}}_2.
$$
Hence we have \eqref{eq:Sigma} when acting on $\ket{k}\otimes\ket{l}$, which implies $\Sigma \ket{k}=\ket{k}$. Since $\ket{k}\in X$ was arbitrary, we see that $\Sigma$ acts as the identity on $X$. Moreover, $X=\mathrm{ran}\,\tilde\sigma_2$ is an invariant subspace for $\tilde\sigma_2$, and so we can find a unitary $U:X\to X$ such that $U^*\tilde\sigma_2 U=:\Lambda$ is diagonal.
By definition \eqref{eq:Sigma} of $\Sigma$, it follows that, on $X$,
$$
I_X=\Lambda^{-1/2-it/2} U^*\tilde\sigma_1 U \Lambda^{-1/2+it/2}.
$$
Hence, $U^*\tilde\sigma_1 U$ is diagonal as well, implying that $\tilde\sigma_1$ and $\tilde\sigma_2$ commute.
\end{proof}
Contrary to \cite{KH08}, the assumption $\Delta_{\curly{R}}(\tilde\sigma_1,\tilde\sigma_2)=0$, by \eqref{eq:commondefn}, yields only the slightly weaker identity
\beq
\label{eq:Sigma}
P_{AB}(\Sigma_A\otimes I_B) P_{AB}=P_{AB}(I_A\otimes \Sigma_B)P_{AB},\qquad \Sigma:=\tilde\sigma_{2}^{-1/2-it/2} \tilde\sigma_{1}\tilde\sigma_{2}^{-1/2+it/2}.
\eeq
Note the additional projection $P_{AB}$ in \eqref{eq:Sigma} as compared to \eqref{eq:SigmaKH}. It is due to the symmetrical appearance of $\rho_2^{\mathrm{out}}$ in the Petz recovery map \eqref{eq:rPetz}. In the special case that $P_{AB}$ projects onto a subset of the ``diagonal'' $\ket{k}_A\otimes\ket{k}_B$, \eqref{eq:Sigma} holds trivially. In particular, \eqref{eq:Sigma} does \emph{not} imply that $\tilde\sigma_1$ and $\tilde\sigma_2$ commute.\\
Now, if one is intent on recovering the no-broadcasting Theorem \ref{thm:UBC}, one can in fact replace $\Delta_{\curly{R}}$ on the right-hand side in \eqref{eq:mainBC} by an alternative expression whose vanishing does imply that $\tilde\sigma_1$ and $\tilde\sigma_2$ commute. This alternative expression is derived from a strengthened monotonicity inequality of Carlen and Lieb \cite{CarlenLieb14} and reads
\beq
\label{eq:Decl}
\begin{aligned}
\Delta_{\operatorname{CL}}(\tilde\sigma_1,\tilde\sigma_2):=&\frac{1}{2}\left\|\sqrt{\rho_{2,AB}^{\mathrm{out}}}-\exp\left(\frac{1}{2}(\log\rho_{2,AB}^{\mathrm{out}}-\log\tilde\sigma_{2,A}+\log\tilde\sigma_{1,A})P_{AB}\right)\right\|_2^2\\
&+\frac{1}{2}\left\|\sqrt{\rho_{2,AB}^{\mathrm{out}}}-\exp\left(\frac{1}{2}(\log\rho_{2,AB}^{\mathrm{out}}-\log\tilde\sigma_{2,B}+\log\tilde\sigma_{1,B})P_{AB}\right)\right\|_2^2.
\end{aligned}
\eeq
Using the result of \cite{CarlenLieb14} in the proof of Theorem \ref{thm:mainBC} gives
$$
D(\sigma_1\|\sigma_2)-D(\tilde\sigma_1\|\tilde\sigma_2)\geq \Delta_{\operatorname{CL}}(\tilde\sigma_1,\tilde\sigma_2).
$$
The vanishing $\Delta_{\operatorname{CL}}(\tilde\sigma_1,\tilde\sigma_2)=0$ implies Ruskai's condition \eqref{eq:SigmaKH} and consequently that $\tilde\sigma_1$ and $\tilde\sigma_2$ commute, i.e.
\beq
\label{eq:commute}
\Delta_{\operatorname{CL}}(\tilde\sigma_1,\tilde\sigma_2)=0\quad\Rightarrow\quad [\tilde\sigma_1,\tilde\sigma_2]=0.
\eeq
However, $\Delta_{\operatorname{CL}}$ does not appear to have information-theoretic content, while $\Delta_{\curly{R}}$ features the Petz recovery map.

We close this appendix with the
\begin{proof}[Proof of Theorem~\ref{thm:mainBC}.]
The proof is based on the following key estimate. It is a variant of Theorem~\ref{thm:monotonicity}, which was proved in \cite{Jungeetal}.
\begin{lm}[Key estimate]
\label{lm:main}
Fix two quantum states $\sigma_1$ and $\sigma_2$. For any choice of quantum channel $\Lambda_{A\to AB}$, we define
\beq
\rho^{\mathrm{out}}_i:=\Lambda(\sigma_{i,A}),\qquad (i=1,2).
\eeq
Let $\beta(t)=\frac{\pi}{2}(1+\cosh(\pi t))^{-1}$.
\begin{enumerate}[label=(\roman*)]
\item We have
\beq
\label{eq:maini'}
D(\sigma_1\|\sigma_2)-D(\rho^{\mathrm{out}}_{1,B}\|\rho^{\mathrm{out}}_{2,B}) \geq -\int_{\mathbb{R}} \log F\left(\rho^{\mathrm{out}}_{1,AB}, \curly{R}^t_{A,\rho^{\mathrm{out}}_{2,AB}}(\rho^{\mathrm{out}}_{1,B}) \right) \,\mathrm{d}\beta(t)
\eeq
and
\beq
\label{eq:maini}
D(\sigma_1\|\sigma_2)-D(\rho^{\mathrm{out}}_{1,A}\|\rho^{\mathrm{out}}_{2,A}) \geq -\int_{\mathbb{R}} \log F\left(\rho^{\mathrm{out}}_{1,AB}, \curly{R}^t_{B,\rho^{\mathrm{out}}_{2,AB}}(\rho^{\mathrm{out}}_{1,A}) \right) \,\mathrm{d}\beta(t),
\eeq
where the rotated Petz recovery map $\curly{R}^t_{A,X}$ was defined in \eqref{eq:rPetz}.
\item Suppose that the output state $\rho^{\mathrm{out}}_{i,AB}$ has identical marginals, i.e.\
$$
\rho^{\mathrm{out}}_{i,A}=\rho^{\mathrm{out}}_{i,B}=:\tilde\sigma_i,\qquad (i=1,2).
$$
Then we have
\beq
\label{eq:mainii}
D(\sigma_1\|\sigma_2)-D(\tilde\sigma_1\|\tilde\sigma_2) \geq
\begin{cases}
-\int_{\mathbb{R}} \log F\left(\rho^{\mathrm{out}}_{1,AB}, \curly{R}^t_{A,\rho^{\mathrm{out}}_{2,AB}}(\tilde\sigma_{1,B}) \right) \,\mathrm{d}\beta(t)\\
-\int_{\mathbb{R}} \log F\left(\rho^{\mathrm{out}}_{1,AB}, \curly{R}^t_{B,\rho^{\mathrm{out}}_{2,AB}}(\tilde\sigma_{1,A}) \right) \,\mathrm{d}\beta(t).
\end{cases}
\eeq
\end{enumerate}
\end{lm}
\begin{proof}[Proof of Lemma \ref{lm:main}]
The standard monotonicity of quantum relative entropy under quantum channels (without a remainder term) gives
$$
D(\sigma_1\|\sigma_2)\geq D(\Lambda(\sigma_1)\|\Lambda(\sigma_2))=D(\rho^{\mathrm{out}}_1\| \rho^{\mathrm{out}}_2).
$$
Consider the last expression. When we apply the partial trace over the $A$ subsystem to both states and use Theorem~\ref{thm:monotonicity}, we obtain
\begin{equation}
D(\rho^{\mathrm{out}}_1\| \rho^{\mathrm{out}}_2)\geq D(\rho^{\mathrm{out}}_{1,B}\|\rho^{\mathrm{out}}_{2,B}) -\int_{\mathbb{R}} \log F\left(\rho^{\mathrm{out}}_{1,AB}, \curly{R}^t_{A,\rho^{\mathrm{out}}_{2,AB}}(\rho^{\mathrm{out}}_{1,B}) \right) \, \mathrm{d}\beta(t). \nonumber
\end{equation}
This proves \eqref{eq:maini'}, and \eqref{eq:maini} follows by the same argument, only that the $B$ subsystem is traced out now. Statement (ii) is immediate.
\end{proof}
With Lemma \ref{lm:main} at our disposal, we can now prove Theorem~\ref{thm:mainBC}. We begin by applying Lemma \ref{lm:main}~(ii), averaging the two lines in \eqref{eq:mainii}. We get
\begin{equation}
D(\sigma_1\|\sigma_2)-D(\tilde\sigma_1\|\tilde\sigma_2) \geq -\frac{1}{2}\int_{\mathbb{R}} \Bigg(\log F\left(\rho^{\mathrm{out}}_{1,AB}, \curly{R}^t_{B,\rho^{\mathrm{out}}_{2,AB}}(\tilde\sigma_{1,A})\right) +\log F\left(\rho^{\mathrm{out}}_{1,AB}, \curly{R}^t_{A,\rho^{\mathrm{out}}_{2,AB}}(\tilde\sigma_{1,B})\right) \Bigg) \, \mathrm{d}\beta(t).\nonumber
\end{equation}
By an elementary estimate and the Fuchs-van de Graaf inequality \cite{FG}, we have for density operators $\omega$ and $\tau$ that
$$
-\log F(\omega,\tau)\geq 1-F(\omega,\tau)\geq \frac{1}{4}\|\omega-\tau\|_1^2.
$$
We apply this to the integrand above, followed by the estimate
$$
\|X-Y\|_1^2+\|X-Z\|_1^2\geq \frac{1}{2}\|Y-Z\|_1^2,
$$
which is a consequence of the triangle inequality and the elementary bound $2ab\leq a^2+b^2$.
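For completeness, we spell out this last elementary estimate:
$$
\|Y-Z\|_1^{2}\leq\left(\|X-Y\|_1+\|X-Z\|_1\right)^{2}\leq 2\left(\|X-Y\|_1^{2}+\|X-Z\|_1^{2}\right),
$$
where the first step is the triangle inequality and the second uses $2ab\leq a^{2}+b^{2}$.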
We conclude
\begin{equation}
D(\sigma_1\|\sigma_2)-D(\tilde\sigma_1\|\tilde\sigma_2) \geq \frac{1}{8}\int_{\mathbb{R}} \|\curly{R}^t_{B,\rho^{\mathrm{out}}_{2,AB}}(\tilde\sigma_{1,A})-\curly{R}^t_{A,\rho^{\mathrm{out}}_{2,AB}}(\tilde\sigma_{1,B})\|_1^2 \, \mathrm{d}\beta(t). \nonumber
\end{equation}
This proves Theorem~\ref{thm:mainBC}.
\end{proof}

\end{document}
\begin{document}
\title{Blowup and Scattering problems for the Nonlinear Schr\"odinger equations}
\begin{abstract}
We consider $L^{2}$-supercritical and $H^{1}$-subcritical focusing nonlinear Schr\"odinger equations. We introduce a subset $PW$ of $H^{1}(\mathbb{R}^{d})$ for $d\ge 1$, and investigate the behavior of solutions with initial data in this set. To this end, we divide $PW$ into two disjoint components $PW_{+}$ and $PW_{-}$. It turns out that any solution starting from a datum in $PW_{+}$ behaves asymptotically like a free solution, and any solution starting from a datum in $PW_{-}$ blows up or grows up, from which we find that the ground state has two unstable directions. We also investigate some properties of generic global and blowup solutions.
\end{abstract}
\section{Introduction}
In this paper, we consider the Cauchy problem for the nonlinear Schr\"odinger equation
\begin{equation}\label{08/05/13/8:50}
2i\displaystyle{\frac{\partial \psi}{\partial t}}(x,t) +\Delta \psi(x,t)+|\psi(x,t)|^{p-1}\psi(x,t)=0, \quad (x,t) \in \mathbb{R}^{d}\times \mathbb{R},
\end{equation}
where $i:=\sqrt{-1}$, $\psi$ is a complex-valued function on $\mathbb{R}^{d}\times \mathbb{R}$, $\Delta$ is the Laplace operator on $\mathbb{R}^{d}$ and $p$ satisfies the so-called $L^{2}$-supercritical and $H^{1}$-subcritical condition
\begin{equation}\label{09/05/13/15:03}
2+\frac{4}{d} <p+1 <2^{*}:=\left\{
\begin{array}{ccl}
\infty &\mbox{if}& d=1,2, \\[6pt]
\displaystyle{\frac{2d}{d-2}} &\mbox{if}& d\ge 3.
\end{array}
\right.
\end{equation}
We supplement this equation with an initial datum from the usual Sobolev space $H^{1}(\mathbb{R}^{d})$:
\begin{equation}\label{09/12/16/15:39}
\psi(\cdot, 0)=\psi_{0} \in H^{1}(\mathbb{R}^{d}).
\end{equation}
We summarize the basic properties of the Cauchy problem (\ref{08/05/13/8:50}) and (\ref{09/12/16/15:39}) (see, e.g., \cite{Cazenave, Ginibre-Velo1, Kato1, Kato2, Kato1995, Sulem-Sulem}). The unique local existence of solutions is well known: for any $\psi_{0} \in H^{1}(\mathbb{R}^{d})$, there exists a unique solution $\psi$ in $C(I_{\max};H^{1}(\mathbb{R}^{d}))$ for some interval $I_{\max}=(-T_{\max}^{-}, T_{\max}^{+}) \subset \mathbb{R}$, the maximal existence interval containing $0$; here $T_{\max}^{+}$ (resp.\ $-T_{\max}^{-}$) is the maximal existence time in the future (resp.\ the past). If $I_{\max} \subsetneqq \mathbb{R}$, then we have
\begin{equation}\label{10/01/27/11:26}
\lim_{t \to *T_{\max}^{*}}\left\| \nabla \psi(t) \right\|_{L^{2}} =\infty \quad (\mbox{blowup}),
\end{equation}
provided that $T_{\max}^{*}<\infty$, where $*$ stands for $+$ or $-$. Moreover, the solution $\psi$ satisfies the following conservation laws for the mass $\mathcal{M}$, the Hamiltonian $\mathcal{H}$ and the momentum $\mathcal{P}$, in this order: for all $t \in I_{\max}$,
\begin{align}
\label{08/05/13/8:59}
\mathcal{M}(\psi(t)) &:=\|\psi(t)\|_{L^{2}}^{2}=\mathcal{M}(\psi_{0}), \\[6pt]
\label{08/05/13/9:03}
\mathcal{H}(\psi(t)) &:=\|\nabla \psi(t) \|_{L^{2}}^{2}- \frac{2}{p+1}\|\psi(t)\|_{L^{p+1}}^{p+1}=\mathcal{H}(\psi_{0}), \\[6pt]
\label{08/10/20/4:37}
\mathcal{P}(\psi(t)) &:=\Im \int_{\mathbb{R}^{d}} \nabla \psi(x,t)\overline{\psi(x,t)}\,dx =\mathcal{P}(\psi_{0}).
\end{align}
If, in addition, $\psi_{0}\in L^{2}(\mathbb{R}^{d},|x|^{2}dx)$, then the corresponding solution $\psi$ also belongs to $C(I_{\max};L^{2}(\mathbb{R}^{d},|x|^{2}dx))$ and satisfies the so-called virial identity (see \cite{Ginibre-Velo1}):
\begin{equation}\label{08/12/03/16:55}
\begin{split}
\int_{\mathbb{R}^{d}}|x|^{2}\left| \psi(x,t) \right|^{2}\,dx
&= \int_{\mathbb{R}^{d}}|x|^{2}\left| \psi_{0}(x) \right|^{2}\,dx
+2t\Im{\int_{\mathbb{R}^{d}}x\cdot \nabla \psi_{0}(x)\overline{\psi_{0}(x)}\,dx} \\
&\qquad +2\int_{0}^{t}\int_{0}^{t'}\mathcal{K}(\psi(t''))\,dt''dt' \qquad \mbox{for all $t \in I_{\max}$},
\end{split}
\end{equation}
where $\mathcal{K}$ is a functional defined by
\begin{equation}\label{10/02/13/11:38}
\mathcal{K}(f) :=\|\nabla f\|_{L^{2}}^{2}-\frac{d(p-1)}{2(p+1)}\|f\|_{L^{p+1}}^{p+1}, \quad f \in H^{1}(\mathbb{R}^{d}).
\end{equation}
It is worthwhile noting that
\begin{equation}\label{08/05/13/9:16}
\mathcal{K}(f)=\mathcal{H}(f)-\frac{d}{2(p+1)}\left\{ p-\left(1+\frac{4}{d}\right) \right\}\|f\|_{L^{p+1}}^{p+1}, \quad f\in H^{1}(\mathbb{R}^{d}),
\end{equation}
so that, for any $f \in H^{1}(\mathbb{R}^{d})\setminus \{0\}$, we have
\begin{eqnarray}
\label{08/05/13/9:11}
\mathcal{K}(f) > \mathcal{H}(f) & \mbox{if} & p<1+\frac{4}{d} , \\[6pt]
\label{08/05/13/11:12}
\mathcal{K}(f) = \mathcal{H}(f) & \mbox{if} & p=1+\frac{4}{d}, \\[6pt]
\label{08/05/13/11:13}
\mathcal{K}(f) < \mathcal{H}(f) & \mbox{if} & p>1+\frac{4}{d}.
\end{eqnarray}
The virial identity (\ref{08/12/03/16:55}) describes the behavior of the ``variance'' of the solution, from which we expect to obtain some kind of propagation or concentration estimate. However, we cannot use (\ref{08/12/03/16:55}) as it stands, since we do not assume the weight condition $\psi_{0}\in L^{2}(\mathbb{R}^{d},|x|^{2}dx)$. We will work in the pure energy space $H^{1}(\mathbb{R}^{d})$, introducing a generalized version of the virial identity (see (\ref{08/03/29/19:05}) in the appended Section \ref{08/10/19/14:57}) to discuss propagation or concentration of a ``solution'' in Section \ref{09/05/06/9:13} and Section \ref{08/07/03/0:40}.
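In particular, for such weighted data, differentiating (\ref{08/12/03/16:55}) twice in $t$ gives the differential form of the virial identity,
\[
\frac{d^{2}}{dt^{2}}\int_{\mathbb{R}^{d}}|x|^{2}\left|\psi(x,t)\right|^{2}\,dx = 2\,\mathcal{K}(\psi(t)),
\]
so that the sign of $\mathcal{K}$ along the flow controls the convexity of the ``variance''; this is the classical route to blowup results of Glassey type (cf.\ \cite{Glassey}) and motivates the splitting of $PW$ according to the sign of $\mathcal{K}$ below.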
\par
In the physics literature, nonlinear Schr\"odinger equations (abbreviated below as NLSs; also called Gross-Pitaevskii equations) often arise with linear potentials:
\begin{equation}\label{09/05/17/15:43}
2i\displaystyle{\frac{\partial \psi}{\partial t}}(x,t) +\Delta \psi(x,t) +|\psi(x,t)|^{p-1}\psi(x,t)=V(x)\psi(x,t), \quad (x,t) \in \mathbb{R}^{d}\times \mathbb{R},
\end{equation}
where $V$ is a real potential on $\mathbb{R}^{d}$. The case $p=d=3$ corresponds to the conventional $\phi^{4}$-model for Bose gases with negative scattering length (see, e.g., \cite{Leggett}). Typical examples of the potential $V$ are $V(x)=c_{h}|x|^{2}$ (harmonic potentials with $c_{h}>0$) and $V(x)=g\cdot x$ (the Stark potentials, or uniform gravitational fields along $g \in \mathbb{R}^{d}\setminus \{0\}$). However, it is known that these potentials can be ``removed'' by appropriate space-time transformations: for $V=g\cdot x$, the Avron-Herbst formula works well (see \cite{Carles-Yoshihisa, Cycon-Froese-Kirsch}); for $V=c_{h}|x|^{2}$, the space-time transformations in \cite{Carles, Niederer} do. Therefore, the study of NLSs without potentials may reveal the essential structure of the solutions to NLSs with potentials (\ref{09/05/17/15:43}).
\par
Our equation (\ref{08/05/13/8:50}) has several kinds of solutions: standing waves, blowup solutions (see (\ref{10/01/27/11:26}) above), and global-in-time solutions which asymptotically behave like free solutions in the distant future and past. Here, the standing waves are nontrivial solutions to our equation (\ref{08/05/13/8:50}) of the form
\begin{equation}\label{10/03/24/16:08}
\psi(x,t)=e^{\frac{i}{2}\omega t}Q(x), \qquad \omega>0,\quad Q \in H^{1}(\mathbb{R}^{d})\setminus \{0\}.
\end{equation}
Thus, $Q$ solves the following semilinear elliptic equation (nonlinear scalar field equation):
\begin{equation}\label{08/05/13/11:22}
\Delta Q- \omega Q+|Q|^{p-1}Q=0, \qquad \omega>0, \quad Q \in H^{1}(\mathbb{R}^{d})\setminus \{0\}.
\end{equation}
Here, we remark that every solution $Q$ to the equation (\ref{08/05/13/11:22}) satisfies $\mathcal{K}(Q)=0$. Indeed, since any solution $Q$ to (\ref{08/05/13/11:22}) belongs to the space $H^{1}(\mathbb{R}^{d})\cap L^{2}(\mathbb{R}^{d},|x|^{2}dx)$, the standing wave $\psi=e^{\frac{i}{2}\omega t}Q$ enjoys the virial identity (\ref{08/12/03/16:55}), which immediately yields $\mathcal{K}(Q)=0$.
\par
The standing waves are among the most interesting objects in the study of NLSs, for both mathematics and physics: they are considered to describe the states of Bose-Einstein condensates. In this paper, we are interested in the precise instability mechanism of what we call the ground state (the least action solution to (\ref{08/05/13/11:22}); see also (\ref{09/12/16/20:17}) below). To this end, we employ the classical ``potential well'' theory going back to Sattinger \cite{Stattinger}. To define our potential well $PW$, we need to know some variational properties of the ground state. We shall give the precise definition of $PW$ in (\ref{09/05/18/22:03}) below. In any case, our $PW$ is divided into $PW_{+}$ and $PW_{-}$ according to the sign of the functional $\mathcal{K}$, i.e., $PW_{+}=PW\cap [\mathcal{K}>0]$, $PW_{-}=PW\cap [\mathcal{K}<0]$ (see (\ref{10/02/13/14:44}), (\ref{10/02/13/14:45})), and the ground state belongs to $\overline{PW_{+}}\cap \overline{PW_{-}}$. We shall show that any solution starting from $PW_{+}$ exists globally in time and asymptotically behaves like a free solution in the distant future and past (see Theorem \ref{08/05/26/11:53}); in contrast, $PW_{-}$ gives rise to ``singular'' solutions (see Theorem \ref{08/06/12/9:48}). Thus, the ground state exhibits at least two types of instability, since it belongs to $\overline{PW_{+}}\cap \overline{PW_{-}}$.
\par
In order to define our potential well $PW$, we investigate some properties of the ground states here. There is an extensive literature concerning the elliptic equation (\ref{08/05/13/11:22}) (see, e.g., \cite{Berestycki-Lions, Gidas-Ni-Nirenberg, Kwong, Strauss}). We know that if $d\ge 2$, there are infinitely many solutions (bound states) $Q_{\omega}^{n}$ ($n=1,2,\ldots$) such that
\begin{equation}\label{09/12/16/20:16}
\mathcal{S}_{\omega}(Q_{\omega}^{n}):=\frac{1}{2}\left\| \nabla Q_{\omega}^{n} \right\|_{L^{2}}^{2}+ \frac{\omega}{2} \left\| Q_{\omega}^{n} \right\|_{L^{2}}^{2} -\frac{1}{p+1}\left\| Q_{\omega}^{n} \right\|_{L^{p+1}}^{p+1}\to \infty \quad (n\to \infty),
\end{equation}
where the functional $\mathcal{S}_{\omega}$ is called the action for (\ref{08/05/13/11:22}) (see, e.g., \cite{Berestycki-Lions, Strauss}).
The ground state $Q_{\omega}$ is the least action solution to (\ref{08/05/13/11:22}); more strongly, it satisfies
\begin{equation}\label{09/12/16/20:17}
\mathcal{S}_{\omega}(Q_{\omega})=\inf \left\{ \mathcal{S}_{\omega}(Q) \ | \ Q \in H^{1}(\mathbb{R}^{d})\setminus \{0\}, \ \mathcal{K}(Q)= 0 \right\}.
\end{equation}
In the $L^{2}$-supercritical case ($p>1+\frac{4}{d}$), it turns out that the ground state solves the following variational problems (see Proposition \ref{08/05/13/15:13} below):
\begin{align}
\label{08/07/02/23:23}
N_{1}&:=\inf{ \left\{ \|f\|_{\widetilde{H}^{1}}^{2} \bigm| f \in H^{1}(\mathbb{R}^{d})\setminus \{0\},\ \mathcal{K}(f)\le 0 \right\}},\\[6pt]
\label{08/07/02/23:24}
N_{2}&:=\inf{\left\{ \mathcal{N}_{2}(f) \bigm| f \in H^{1}(\mathbb{R}^{d})\setminus \{0\}, \ \mathcal{K}(f)\le 0 \right\} }, \\[6pt]
\label{08/07/02/23:25}
N_{3}&:=\inf{\Big\{ \mathcal{I}(f) \bigm| f \in H^{1}(\mathbb{R}^{d})\setminus \{0\} \Big\}},
\end{align}
where
\begin{align}
\label{10/02/19/15:27}
\|f\|_{\widetilde{H}^{1}}^{2} &:= \frac{d(p-1)-4}{d(p-1)}\|\nabla f \|_{L^{2}}^{2}+\|f\|_{L^{2}}^{2}, \\[6pt]
\label{10/02/19/15:28}
\mathcal{N}_{2}(f) &:= \left\| f \right\|_{L^{2}}^{p+1-\frac{d}{2}(p-1)} \left\| \nabla f \right\|_{L^{2}}^{\frac{d}{2}(p-1)-2}, \\[6pt]
\label{10/02/19/15:29}
\mathcal{I}(f) &:= \frac{\|f\|_{L^{2}}^{p+1-\frac{d}{2}(p-1)}\|\nabla f\|_{L^{2}}^{\frac{d}{2}(p-1)}}{\|f\|_{L^{p+1}}^{p+1}} .
\end{align}
We remark that $N_{3}$ is positive and gives the best constant of the Gagliardo-Nirenberg inequality, i.e.,
\begin{equation}\label{08/05/13/15:45}
\|f\|_{L^{p+1}}^{p+1}\le \frac{1}{N_{3}}\|f\|_{L^{2}}^{p+1-\frac{d}{2}(p-1)}\|\nabla f \|_{L^{2}}^{\frac{d}{2}(p-1)} \quad \mbox{for all $f \in H^{1}(\mathbb{R}^{d})$}
\end{equation}
(for the details, see \cite{Weinstein} and the exposition by Tao \cite{Tao-book1}). A salient point of our approach to these variational problems (\ref{08/07/02/23:23})--(\ref{08/07/02/23:25}) is that we consider them simultaneously, which enables us to obtain some identities automatically (see (\ref{08/05/13/15:06}) and (\ref{08/09/24/15:13}) below). The first variational problem (\ref{08/07/02/23:23}) is introduced to make our variational problems easy to solve. Indeed, all minimizing sequences for (\ref{08/07/02/23:23}) are bounded in $H^{1}(\mathbb{R}^{d})$; in contrast, minimizing sequences for the other problems (\ref{08/07/02/23:24}) and (\ref{08/07/02/23:25}) are not necessarily bounded in $H^{1}(\mathbb{R}^{d})$. This fact comes from the scaling
\begin{equation}\label{09/12/21/10:33}
Q\to \lambda^{\frac{2}{p-1}}Q(\lambda \, \cdot), \quad \lambda>0,
\end{equation}
which leaves $\mathcal{N}_{2}$ and $\mathcal{I}$ invariant and preserves the sign of $\mathcal{K}$, as the computation recorded after Proposition \ref{08/06/16/15:24} shows.
The variational values $N_{1}$, $N_{2}$ and $N_{3}$ are closely related to each other, as stated in the following:
\begin{proposition}\label{08/06/16/15:24}
Assume that $d\ge 1$ and $2+\frac{4}{d}<p+1<2^{*}$. Then, we have the following relations:
\begin{equation}\label{08/05/13/11:33}
N_{1}^{\frac{p-1}{2}}=\left( \frac{2}{d}\right)^{\frac{p-1}{2}} \left\{ \frac{d(p-1)}{(d+2)-(d-2)p}\right\}^{\frac{1}{4}\left\{ (d+2)-(d-2)p \right\}}N_{2}
\end{equation}
and
\begin{equation}\label{08/05/13/11:25}
N_{3}=\frac{d(p-1)}{2(p+1)} N_{2}.
\end{equation}
\end{proposition}
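For later reference, we record the elementary effect of the scaling (\ref{09/12/21/10:33}) on the quantities above. Writing $f_{\lambda}(x):=\lambda^{\frac{2}{p-1}}f(\lambda x)$ for $\lambda>0$, a change of variables gives
\[
\|f_{\lambda}\|_{L^{2}}^{2}=\lambda^{\frac{4}{p-1}-d}\|f\|_{L^{2}}^{2},
\qquad
\|\nabla f_{\lambda}\|_{L^{2}}^{2}=\lambda^{\frac{4}{p-1}+2-d}\|\nabla f\|_{L^{2}}^{2},
\qquad
\|f_{\lambda}\|_{L^{p+1}}^{p+1}=\lambda^{\frac{4}{p-1}+2-d}\|f\|_{L^{p+1}}^{p+1},
\]
so that $\mathcal{K}(f_{\lambda})=\lambda^{\frac{4}{p-1}+2-d}\mathcal{K}(f)$, while $\mathcal{N}_{2}(f_{\lambda})=\mathcal{N}_{2}(f)$ and $\mathcal{I}(f_{\lambda})=\mathcal{I}(f)$. Since $\frac{4}{p-1}+2-d>0$ under (\ref{09/05/13/15:03}), the sign of $\mathcal{K}$, and in particular the constraint set $\{\mathcal{K}\le 0\}$, is preserved, whereas $\|\nabla f_{\lambda}\|_{L^{2}}\to\infty$ as $\lambda\to\infty$; this is why minimizing sequences for (\ref{08/07/02/23:24}) and (\ref{08/07/02/23:25}) need not be bounded in $H^{1}(\mathbb{R}^{d})$.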
The next proposition tells us that the variational problems (\ref{08/07/02/23:23}), (\ref{08/07/02/23:24}) and (\ref{08/07/02/23:25}) have the same minimizer.
\begin{proposition} \label{08/05/13/15:13}
Assume that $d\ge 1$ and $2+\frac{4}{d}<p+1<2^{*}$. Then, there exists a positive function $Q$ in the Schwartz space $\mathcal{S}(\mathbb{R}^{d})$, which is unique modulo translation and phase shift, such that
\begin{align}
\label{08/05/13/11:23}
N_{1}&=\|Q\|_{\widetilde{H}^{1}}^{2}, \\[6pt]
\label{08/05/26/11:41}
N_{2}&=\mathcal{N}_{2}(Q), \\[6pt]
\label{08/05/13/11:27}
N_{3}&=\mathcal{I}(Q), \\[6pt]
\label{08/05/13/15:06}
\mathcal{K}(Q)&=0, \\[6pt]
\label{08/09/24/15:13}
\left\| Q \right\|_{L^{2}}^{2} &= \frac{d+2-(d-2)p}{d(p-1)} \left\| \nabla Q\right\|_{L^{2}}^{2} = \frac{d+2-(d-2)p}{2(p+1)} \left\| Q \right\|_{L^{p+1}}^{p+1},
\end{align}
and $Q$ solves the equation (\ref{08/05/13/11:22}) with $\omega =1$.
\end{proposition}
\begin{remark}\label{08/09/27/22:26}
It is known that the function $Q$ found in Proposition \ref{08/05/13/15:13} has the following properties: \\
{\rm (i)} There exists $y \in \mathbb{R}^{d}$ such that $Q(x-y)$ is radially symmetric (see \cite{Gidas-Ni-Nirenberg}). \\
{\rm (ii)} $Q$ is the ground state of the equation (\ref{08/05/13/11:22}) with $\omega=1$. Moreover, for all $\omega>0$, the rescaled function $\omega^{\frac{1}{p-1}}Q(\omega^{\frac{1}{2}}\cdot )$ becomes the ground state of (\ref{08/05/13/11:22}), and denoting this function by $Q_{\omega}$, we easily verify that
\begin{equation}\label{10/03/27/10:23}
N_{2}=\mathcal{N}_{2}(Q_{\omega}), \quad N_{3}=\mathcal{I}(Q_{\omega}), \quad \mathcal{K}(Q_{\omega})=0 \qquad \mbox{for all $\omega >0$.}
\end{equation}
\end{remark}
Now, using the variational value $N_{2}$, we define a ``potential well'' $PW$ by
\begin{equation}\label{09/05/18/22:03}
PW=\left\{ f \in H^{1}\setminus \{0\} \bigm| \mathcal{H}(f) <\mathcal{B}(f) \right\},
\end{equation}
where
\begin{equation}\label{09/05/18/22:02}
\mathcal{B}(f)=\frac{d(p-1)-4}{d(p-1)}\left( \frac{N_{2}}{\|f\|_{L^{2}}^{p+1-\frac{d}{2}(p-1)}}\right)^{\frac{4}{d(p-1)-4}} .
\end{equation}
We divide $PW$ into two components according to the sign of $\mathcal{K}$:
\begin{equation}\label{10/02/13/14:44}
PW_{+}=\left\{ f \in PW \bigm| \mathcal{K}(f) >0 \right\},
\end{equation}
\begin{equation}\label{10/02/13/14:45}
PW_{-}=\left\{ f \in PW \bigm| \mathcal{K}(f) <0 \right\}.
\end{equation}
It is worthwhile noting the following facts:
\begin{enumerate}
\item[1.] $PW_{+}$ and $PW_{-}$ are unbounded open sets in $H^{1}(\mathbb{R}^{d})$: indeed, one can easily verify this fact by considering the scaled functions $f_{\lambda}(x):=\lambda^{\frac{2}{p-1}}f(\lambda x)$ for $f \in H^{1}(\mathbb{R}^{d})$ and $\lambda>0$.
\item[2.] $PW=PW_{+}\cup PW_{-}$ and $PW_{+}\cap PW_{-}=\emptyset$ (see Lemma \ref{09/06/21/15:50}).
\item[3.] $PW_{+}$ and $PW_{-}$ are invariant under the flow defined by the equation (\ref{08/05/13/8:50}) (see Proposition \ref{09/06/21/19:28} and Proposition \ref{08/05/26/10:57}).
\item[4.] The ground state $Q_{\omega}$ belongs to $\overline{PW_{+}}\cap \overline{PW_{-}}$ and $Q_{\omega} \not \in PW_{+}\cup PW_{-}$ for all $\omega>0$, where $\overline{PW_{+}}$ and $\overline{PW_{-}}$ are the closures of $PW_{+}$ and $PW_{-}$ in the $H^{1}$-topology, respectively (see Theorem \ref{08/10/20/5:21}). Moreover, the orbit under the action $((0,\infty) \ltimes \mathbb{R}^{d})\times S^{1}$
\begin{equation}\label{10/03/31/12:23}
\left\{ \lambda^{\frac{2}{p-1}} e^{i\theta}Q_{\omega}(\lambda (\cdot-a)) \ \left| \ \lambda >0, \ a \in \mathbb{R}^{d}, \ \theta \in S^{1} \right.
\right\}
\end{equation}
is contained in $\overline{PW_{+}} \cap \overline{PW_{-}}$.
\end{enumerate}
Here, the last fact above is the key to showing the instability of the ground state. We will prove these facts in Section \ref{08/07/02/23:51}, among other properties of the sets $PW$, $PW_{+}$ and $PW_{-}$.
\par
For later convenience, we prepare another expression of $PW_{+}$. Besides the functional $\mathcal{N}_{2}$ and the variational value $N_{2}$, we define a functional $\widetilde{\mathcal{N}}_{2}$ and a number $\widetilde{N}_{2}$ by
\begin{equation}\label{09/04/29/14:30}
\widetilde{\mathcal{N}}_{2}(f) := \left\| f \right\|_{L^{2}}^{p+1-\frac{d}{2}(p-1)}\sqrt{\mathcal{H}(f)}^{\frac{d}{2}(p-1)-2}, \quad f \in H^{1}(\mathbb{R}^{d}) \ \mbox{with}\ \mathcal{H}(f)\ge 0,
\end{equation}
and
\begin{equation}\label{09/04/29/14:31}
\widetilde{N}_{2}:=\sqrt{\frac{d(p-1)-4}{d(p-1)}}^{\frac{d}{2}(p-1)-2}N_{2}.
\end{equation}
Then, it follows from (\ref{08/05/26/11:41}) and (\ref{08/09/24/15:13}) in Proposition \ref{08/05/13/15:13} that
\begin{equation}\label{09/07/14/8:24}
\widetilde{N}_{2}=\widetilde{\mathcal{N}}_{2}(Q).
\end{equation}
One can easily verify that if $f\in H^{1}(\mathbb{R}^{d})$ with $\mathcal{H}(f)\ge 0$, then
\begin{equation}\label{08/06/15/14:38}
f \in PW \quad \mbox{if and only if} \quad \widetilde{\mathcal{N}}_{2}(f) < \widetilde{N}_{2}.
\end{equation}
Therefore, we have another expression of $PW_{+}$:
\begin{equation}\label{09/12/16/17:07}
PW_{+}=\left\{ f \in H^{1}(\mathbb{R}^{d}) \ |\ \mathcal{K}(f)>0, \ \ \widetilde{\mathcal{N}}_{2}(f) < \widetilde{N}_{2} \right\}.
\end{equation}
Here, in order to consider the wave operators, we introduce a set $\Omega$ which is a subset of $PW_{+}$ (see Remark \ref{08/11/16/22:46}, (ii) below):
\begin{equation}\label{10/06/13/11:42}
\Omega:=\left\{f \in H^{1}(\mathbb{R}^{d})\setminus \{0\} \ | \ \mathcal{N}_{2}(f)<\widetilde{N}_{2} \right\}.
\end{equation}
\begin{remark}\label{08/11/16/22:46}
{\rm (i)} When $p=1+\frac{4}{d}$, by (\ref{09/04/29/14:30}) and (\ref{09/04/29/14:31}), the condition $\widetilde{\mathcal{N}}_{2}(f)< \widetilde{N}_{2}=\widetilde{\mathcal{N}}_{2}(Q)$ can be reduced to
\[
\left\| f \right\|_{L^{2}}< \left\| Q\right\|_{L^{2}},
\]
since
\[
\lim_{p\downarrow 1+\frac{4}{d}}\sqrt{\frac{d(p-1)-4}{d(p-1)}}^{\frac{d}{2}(p-1)-2} = 1.
\]
Hence, in this $L^{2}$-critical case, $PW_{+}$ formally becomes
\begin{equation}
\begin{split}
\label{10/02/16/17:03}
PW_{+} &= \left\{ f \in H^{1}(\mathbb{R}^{d}) \bigm| \mathcal{H}(f)> 0, \ \left\| f \right\|_{L^{2}}< \left\| Q \right\|_{L^{2}} \right\} \\[6pt]
&= \left\{ f \in H^{1}(\mathbb{R}^{d})\setminus \{0\} \bigm| \, \left\| f \right\|_{L^{2}}< \left\| Q \right\|_{L^{2}} \right\},
\end{split}
\end{equation}
where we have used the facts that $\mathcal{K}=\mathcal{H}$ (see (\ref{08/05/13/11:12})) and that $\left\| f \right\|_{L^{2}}< \left\| Q \right\|_{L^{2}}$ implies $\mathcal{H}(f)>0$ (see \cite{Weinstein, Nawa5}). On the other hand, $PW_{-}$ formally becomes
\begin{equation}\label{10/02/16/17:04}
PW_{-}=\left\{ f \in H^{1}(\mathbb{R}^{d}) \ | \ \mathcal{H}(f)< 0 \right\},
\end{equation}
since $\mathcal{K}=\mathcal{H}$ (\ref{08/05/13/11:12}) again. It is well known that the solutions of (\ref{08/05/13/8:50}) with $p=1+\frac{4}{d}$ and initial data from the set (\ref{10/02/16/17:03}) exist globally in time (see \cite{Weinstein}), and the ones with data from (\ref{10/02/16/17:04}) blow up or grow up (see \cite{Nawa8}).
Thus, we may say that our potential wells $PW_{+}$ and $PW_{-}$ in (\ref{10/02/13/14:44}) and (\ref{10/02/13/14:45}) are natural extensions of those in (\ref{10/02/16/17:03}) and (\ref{10/02/16/17:04}) to the case of $2+\frac{4}{d}<p+1<2^{*}$. \\ {\rm (ii)} Since $\widetilde{N}_{2}<N_{2}$, we find by the definition of $N_{2}$ (see (\ref{08/07/02/23:24})) that $\Omega \subset PW_{+}$: indeed, if $f \in \Omega$, then $\mathcal{N}_{2}(f)<\widetilde{N}_{2}<N_{2}$, so that $\mathcal{K}(f)>0$; moreover, $\mathcal{H}(f)\ge \mathcal{K}(f)>0$ and $\widetilde{\mathcal{N}}_{2}(f)\le \mathcal{N}_{2}(f)<\widetilde{N}_{2}$ (note that $\mathcal{H}(f)\le \|\nabla f\|_{L^{2}}^{2}$), so that $f \in PW_{+}$ by (\ref{09/12/16/17:07}). \end{remark} Now, we are in a position to state our main results. When symbols with $\pm$ appear in the following theorems and propositions, we always take both upper signs or both lower signs in the double signs. \par The first theorem below is concerned with the behavior of the solutions with initial data from $PW_{+}$. \begin{theorem}[Global existence and scattering] \label{08/05/26/11:53} Assume that $d\ge 1$, $2+\frac{4}{d}<p+1 <2^{*}$ and $\psi_{0} \in PW_{+}$. Then, the corresponding solution $\psi$ to the equation (\ref{08/05/13/8:50}) exists globally in time and has the following properties: \\[6pt] {\rm (i)} $\psi$ stays in $PW_{+}$ for all time, and satisfies that \begin{equation}\label{08/12/16/10:09} \inf_{t\in \mathbb{R}}\mathcal{K}(\psi(t))\ge \left( 1-\frac{\widetilde{\mathcal{N}}_{2}(\psi_{0})}{\widetilde{N}_{2}}\right) \mathcal{H}( \psi_{0})>0. \end{equation} {\rm (ii)} $\psi$ belongs to $L^{\infty}(\mathbb{R};H^{1}(\mathbb{R}^{d}))$. In particular, \begin{equation}\label{08/09/03/17:03} \sup_{t\in \mathbb{R}}\|\nabla \psi(t)\|_{L^{2}}^{2} \le \frac{d(p-1)}{d(p-1)-4} \mathcal{H}(\psi_{0}). \end{equation} Furthermore, \\ {\rm (iii)} There exist unique $\phi_{+} \in \Omega$ and $\phi_{-}\in \Omega$ such that \begin{equation}\label{10/01/26/20:32} \lim_{t\to \pm \infty}\left\|\psi(t)-e^{\frac{i}{2}t\Delta}\phi_{\pm} \right\|_{H^{1}}= \lim_{t\to \pm \infty}\left\|e^{-\frac{i}{2}t\Delta}\psi(t)-\phi_{\pm} \right\|_{H^{1}} =0. \end{equation} This formula defines the operators $W_{\pm}^{*} \colon \psi_{0} \ \mapsto \ \phi_{\pm}=\lim_{t\to \pm \infty}e^{-\frac{i}{2}t\Delta}\psi(t)$. These operators become homeomorphisms from $PW_{+}$ to $\Omega$, so that we can define the scattering operator $S:=W_{+}^{*}W_{-}$ from $\Omega$ into itself, where $W_{-}:=(W_{-}^{*})^{-1}$: \begin{equation}\label{10/03/31/12:00} \begin{array}{ccll} S=W_{+}^{*}W_{-} : & \Omega & \to & \Omega \\ {}& \text{\rotatebox{90}{$\in$}} & {} & \text{\rotatebox{90}{$\in$}} \\ {}& \phi_{-} &\mapsto & \phi_{+} \qquad . \end{array} \end{equation} \end{theorem} \begin{remark}\label{10/01/27/18:15} {\rm (i)} Theorem \ref{08/05/26/11:53} is an extension of the result by Duyckaerts, Holmer and Roudenko \cite{D-H-R}. See {\bf Notes and Comments} below for the details. \\[6pt] {\rm (ii)} $\phi_{+}$ and $\phi_{-}$ found in Theorem \ref{08/05/26/11:53} are called the asymptotic states at $+\infty$ and $-\infty$, respectively. \\[6pt] {\rm (iii)} To prove the surjectivity of $W_{\pm}^{*}$, we need to construct so-called wave operators $W_{\pm}$ (see Section \ref{09/05/30/21:25}). In fact, the operator $W_{+}^{*}$ ($W_{-}^{*}$) is the inverse of the wave operator $W_{+}$ ($W_{-}$). According to the terminology of spectral scattering theory for the linear Schr\"odinger equation, we might say that $\Omega$ is a set of scattering states. \end{remark} \indent In contrast to the case of $PW_{+}$, the solutions with initial data from $PW_{-}$ become singular: \begin{theorem}[Blowup or growup] \label{08/06/12/9:48} Assume that $d\ge 1$, $2+\frac{4}{d}< p+1< 2^{*}$ and $\psi_{0} \in PW_{-}$.
Then, the corresponding solution $\psi$ to the equation (\ref{08/05/13/8:50}) satisfies the following: \\[6pt] {\rm (i)} $\psi$ stays in $PW_{-}$ as long as it exists and satisfies that \begin{equation}\label{09/12/23/22:48} \mathcal{K}(\psi(t))<- \left( \mathcal{B}(\psi_{0})-\mathcal{H}(\psi_{0})\right)<0 \quad \mbox{for all $t \in I_{\max}$}. \end{equation} {\rm (ii)} $\psi$ blows up in a finite time or grows up, that is, \begin{equation}\label{09/05/14/13:32} \sup_{t\in [0, T_{\max}^{+})}\left\| \nabla \psi(t) \right\|_{L^{2}}= \sup_{t\in (-T_{\max}^{-}, 0]}\left\| \nabla \psi(t) \right\|_{L^{2}} =\infty . \end{equation} In particular, if $T_{\max}^{\pm}= \infty$, then we have \begin{equation}\label{09/05/13/15:58} \limsup_{t \to \pm \infty} \int_{|x|>R} |\nabla \psi(x,t)|^{2}\,dx =\infty\quad \mbox{for all $R>0$}. \end{equation} \end{theorem} \begin{remark} {\rm (i)} We do not know whether a solution growing up at infinity exists. \\[6pt] {\rm (ii)} We know (see \cite{Glassey}) that if $\psi_{0} \in H^{1}(\mathbb{R}^{d})\cap L^{2}(\mathbb{R}^{d},|x|^{2}dx)$, then $T_{\max}^{\pm}<\infty$ and the corresponding solution $\psi$ satisfies that \[ \lim_{t\to \pm T_{\max}^{\pm}}\left\| \nabla \psi(t)\right\|_{L^{2}}=\infty. \] For the case $\psi_{0}\not\in L^{2}(\mathbb{R}^{d},|x|^{2}dx)$, see Theorem \ref{08/04/21/9:28} below (see also \cite{Ogawa-Tsutsumi}). \end{remark} Combining Theorems \ref{08/05/26/11:53} and \ref{08/06/12/9:48}, we can show the instability of the ground states. Precisely: \begin{theorem}[Instability of ground state] \label{08/10/20/5:21} Let $Q_{\omega}$ be the ground state of the equation (\ref{08/05/13/11:22}) for $\omega>0$. Then, $Q_{\omega}$ has two unstable directions in the sense that $Q_{\omega} \in \overline{PW_{+}}\cap \overline{PW_{-}}$. In particular, for any $\varepsilon>0$, there exist $f_{+} \in PW_{+}$ and $f_{-}\in PW_{-}$ such that \[ \left\| Q_{\omega}-f_{\pm}\right\|_{H^{1}}\le \varepsilon. \] \end{theorem} \begin{remark} {\rm (i)} An example of $f_{\pm}$ is $\left(1\mp \frac{\varepsilon}{\left\| Q_{\omega}\right\|_{H^{1}}}\right)Q_{\omega}$, where both upper or both lower signs should be chosen in the double signs. \\ {\rm (ii)} The ground state $Q_{\omega}$ also has an (orbitally) stable ``direction''. Indeed, if we start with $e^{iv\cdot x}Q_{\omega}$ for ``small'' $v \in \mathbb{R}^{d}$, then $e^{i(v\cdot x-\frac{1}{2}|v|^{2}t)}e^{\frac{i}{2}\omega t}Q_{\omega}(x-vt)$ solves the equation (\ref{08/05/13/8:50}) and stays in a neighborhood of the orbit of $Q_{\omega}$ under the action of $\mathbb{R}^{d} \times S^{1}$ \[ \left\{ e^{i\theta}Q_{\omega}(\cdot-a) \ \left| \ a \in \mathbb{R}^{d},\ \theta \in S^{1} \right. \right\}. \] \end{remark} We state further properties of solutions. \begin{theorem}\label{09/05/18/10:43} Assume that $d\ge 1$ and $2+\frac{4}{d}<p+1<2^{*}$. Let $\psi$ be a global solution to the equation (\ref{08/05/13/8:50}) with an initial datum in $H^{1}(\mathbb{R}^{d})$. Then, the following five conditions are equivalent: \\[6pt] {\rm (i)} Decay of $L^{p+1}$-norm: \begin{equation}\label{10/04/22/12:19} \lim_{t\to \infty}\left\| \psi(t) \right\|_{L^{p+1}}=0 . \end{equation} {\rm (ii)} Decay of $L^{q}$-norm: \begin{equation}\label{10/05/31/8:54} \lim_{t\to \infty}\left\| \psi(t) \right\|_{L^{q}}=0 \quad \mbox{for all $q \in (2,2^{*})$}.
\end{equation} \noindent {\rm (iii)} Boundedness of the Strichartz norms: \begin{equation}\label{10/04/22/12:24} \left\|(1-\Delta)^{\frac{1}{2}} \psi \right\|_{L^{r}([0,\infty);L^{q})}< \infty \quad \mbox{for all admissible pairs $(q,r)$}. \end{equation} \noindent {\rm (iv)} Boundedness of $X$-norm (see (\ref{08/09/01/10:02}) for the definition of the space $X$): \begin{equation}\label{10/04/22/12:20} \left\| \psi \right\|_{X([0,\infty))}<\infty, \quad \left\| \psi \right\|_{L^{\infty}([0,\infty);H^{1})}<\infty. \end{equation} {\rm (v)} Existence of an asymptotic state at $+\infty$: There exists $\phi_{+} \in H^{1}(\mathbb{R}^{d})$ such that \[ \lim_{t\to \infty}\left\|\psi(t)-e^{\frac{i}{2}t\Delta}\phi_{+} \right\|_{H^{1}}=0. \] A similar result holds for the negative time case. \end{theorem} As a corollary of Theorem \ref{08/05/26/11:53}, with the help of Theorem \ref{09/05/18/10:43}, we obtain the following result: \begin{corollary}\label{09/12/23/22:22} $PW_{+}\cup \{0\}$ is connected in the $L^{q}(\mathbb{R}^{d})$-topology for all $q\in (2, 2^{*})$. \end{corollary} If, for example, $PW_{+}\cup\{0\}$ contains a neighborhood of $0$ in the $L^{q}$-topology, then we can say that $PW_{+}$ is connected in the $L^{q}$-topology, without adding $\{0\}$. However, for any $\varepsilon>0$, there is a function $f_{\varepsilon} \in H^{1}(\mathbb{R}^{d})$ such that $f_{\varepsilon} \notin PW_{+}$ and $\|f_{\varepsilon}\|_{L^{p+1}}=\varepsilon$, so that we need to add $\{0\}$ in Corollary \ref{09/12/23/22:22} above. On the other hand, we can verify that $PW_{+}\cup \{0\}$ contains a sufficiently small ball in $H^{1}(\mathbb{R}^{d})$ (clearly, $PW_{+}$ does not contain $0$). However, we do not know whether $PW_{+}$ is connected in the $H^{1}$-topology. \\ \par Next, we consider singular solutions. The following theorem tells us that solutions with radially symmetric data from $PW_{-}$ blow up in a finite time: \begin{theorem}[Existence of blowup solution] \label{08/04/21/9:28} Assume that $d\ge 2$, $2+\frac{4}{d}< p+1 <2^{*}$, and $p\le 5$ if $d=2$. Let $\psi_{0}$ be a radially symmetric function in $PW_{-}$ and let $\psi$ be the corresponding solution to the equation (\ref{08/05/13/8:50}). Then, we have \begin{equation}\label{10/02/22/20:33} T_{\max}^{\pm}<\infty \quad \mbox{and} \quad \lim_{t \to \pm T_{\max}^{\pm}}\|\nabla \psi(t)\|_{L^{2}}=\infty. \end{equation} Furthermore, we have the following: \\ {\rm (i)} For all $m >0$, there exists a constant $R_{m}>0$ such that \begin{equation}\label{08/06/12/8:54} \int_{|x|>R}|\psi(x,t)|^{2}\,dx < m \quad \mbox{for all $R\ge R_{m}$ and $t \in I_{\max}$} . \end{equation} {\rm (ii)} For all sufficiently large $R>0$, we have \begin{equation}\label{08/06/10/10:21} \int_{0}^{T_{\max}^{+}}(T_{\max}^{+}-t)\left( \int_{|x|>R} |\nabla \psi(x,t)|^{2}dx \right)dt <\infty , \end{equation} \begin{equation}\label{08/06/10/10:22} \int_{0}^{T_{\max}^{+}}(T_{\max}^{+}-t) \left( \int_{|x|>R} |\psi(x,t)|^{p+1}dx \right)dt <\infty.
\end{equation} {\rm (iii)} For all sufficiently large $R>0$, we have \begin{equation}\label{08/06/12/6:56} \int_{0}^{T_{\max}^{+}}(T_{\max}^{+}-t)\|\psi(t)\|_{L^{\infty}(|x|>R)}^{4}\,dt < \infty , \end{equation} \begin{equation}\label{08/06/12/6:57} \int_{0}^{T_{\max}^{+}}(T_{\max}^{+}-t)\|\psi(t)\|_{L^{p+1}(|x|>R)}^{\frac{4(p+1)}{p-1}}\,dt < \infty , \end{equation} \begin{equation}\label{08/06/12/6:58} \liminf_{t \to T_{\max}^{+}} (T_{\max}^{+}-t)\|\psi(t)\|_{L^{\infty}(|x|>R)}^{2}=0 , \end{equation} \begin{equation}\label{08/06/12/6:59} \liminf_{t \to T_{\max}^{+}} (T_{\max}^{+}-t)^{\frac{p-1}{2}}\|\psi(t)\|_{L^{p+1}(|x|>R)}^{p+1} =0 . \end{equation} For the negative time case, the results corresponding to {\rm (ii)} and {\rm (iii)} remain valid. \end{theorem} In Theorem \ref{08/04/21/9:28}, we require the condition $p\le 5$ if $d=2$, as do Ogawa and Tsutsumi \cite{Ogawa-Tsutsumi}. A difficulty in the case $p>5$ with $d=2$ might come from a kind of quantum force: the stronger the nonlinear effect becomes, the stronger the dispersive effect becomes. \\ \par We do not know much about the asymptotic behavior of such singular solutions as found in Theorem \ref{08/06/12/9:48}. What we can say is the following (for simplicity, we state the forward time case only): \begin{proposition}[Asymptotic profiles of singular solutions] \label{08/11/02/22:52} Assume that $d\ge 1$ and $2+\frac{4}{d}<p+1<2^{*}$. Let $\psi$ be a solution to the equation (\ref{08/05/13/8:50}) such that \begin{equation}\label{10/03/31/16:55} \limsup_{t\to T_{\max}^{+}}\|\nabla \psi(t)\|_{L^{2}}= \limsup_{t\to T_{\max}^{+}}\|\psi(t)\|_{L^{p+1}} =\infty, \end{equation} and let $\{t_{n}\}_{n\in \mathbb{N}}$ be a sequence in $[0,T_{\max}^{+})$ such that \begin{equation}\label{10/01/25/19:06} \lim_{n\to \infty}t_{n}= T_{\max}^{+}, \qquad \left\| \psi(t_{n})\right\|_{L^{p+1}} =\sup_{t\in [0,t_{n})} \left\| \psi(t) \right\|_{L^{p+1}}. \end{equation} For this sequence $\{t_{n}\}$, we put \begin{equation}\label{10/03/31/16:59} \lambda_{n}= \left\| \psi(t_{n})\right\|_{L^{p+1}}^{-\frac{(p-1)(p+1)}{d+2-(d-2)p}}, \end{equation} and consider the scaled functions \begin{equation}\label{10/03/31/17:00} \psi_{n}(x,t):=\lambda_{n}^{\frac{2}{p-1}}\overline{\psi(\lambda_{n}x,t_{n}-\lambda_{n}^{2}t)}, \quad t \in \biggm( -\frac{T_{\max}^{+}-t_{n}}{\lambda_{n}^{2}}, \frac{t_{n}}{\lambda_{n}^{2}} \biggm], \quad n \in \mathbb{N}. \end{equation} Suppose that \begin{equation}\label{10/01/26/15:36} \left\| \psi \right\|_{L^{\infty}([0,T_{\max}^{+});L^{\frac{d}{2}(p-1)})}<\infty . \end{equation} Then, there exists a subsequence of $\{\psi_{n}\}$ (still denoted by the same symbol) with the following properties: There exist \\ {\rm (i)} a nontrivial function $\psi_{\infty} \in L^{\infty}([0,\infty);\dot{H}^{1}(\mathbb{R}^{d})\cap L^{p+1}(\mathbb{R}^{d}))$ solving the equation (\ref{08/05/13/8:50}) in the $\mathscr{D}'([0,\infty);\dot{H}^{-1}(\mathbb{R}^{d})+L^{\frac{p+1}{p}}(\mathbb{R}^{d}))$-sense and \\ {\rm (ii)} a sequence $\{\gamma_{n}\}$ in $\mathbb{R}^{d}$ \\ such that, putting $\widetilde{\psi}_{n}(x,t):=\psi_{n}(x+\gamma_{n},t)$, we have, for any $T>0$, \begin{align*} &\lim_{n\to \infty}\widetilde{\psi}_{n}= \psi_{\infty} \quad \mbox{strongly in $L^{\infty}([0,T];L_{loc}^{q}(\mathbb{R}^{d}))$} \quad \mbox{for all $q \in [1, 2^{*})$}, \\[6pt] &\lim_{n\to \infty}\nabla \widetilde{\psi}_{n} = \nabla \psi_{\infty} \quad \mbox{weakly* in $L^{\infty}([0,T];L_{loc}^{2}(\mathbb{R}^{d}))$}.
\end{align*} Furthermore, for any $\varepsilon>0$, there exists $R>0$ such that, for $y_{n}=\lambda_{n}\gamma_{n}$, \begin{equation}\label{10/03/10/18:47} \lim_{n\to \infty}\int_{|x-y_{n}|\le \lambda_{n}R}|\psi(x,t_{n})|^{\frac{d}{2}(p-1)} dx \ge (1-\varepsilon)\| \psi_{\infty} (0) \|_{L^{\frac{d}{2}(p-1)}}^{\frac{d}{2}(p-1)}. \end{equation} \end{proposition} \begin{remark}\label{10/04/21/21:40} The $L^{\frac{d(p-1)}{2}}(\mathbb{R}^{d})$-norm is invariant under the scaling leaving the equation (\ref{08/05/13/8:50}) invariant: Precisely, when $\psi$ is a solution to (\ref{08/05/13/8:50}), putting \begin{equation}\label{10/04/22/11:13} \psi_{\lambda}(x,t):= \lambda^{\frac{2}{p-1}}\psi(\lambda x, \lambda^{2}t) ,\quad \lambda>0, \end{equation} we see that $\psi_{\lambda}$ solves (\ref{08/05/13/8:50}) and satisfies that \begin{equation}\label{10/04/21/12:42} \left\| \psi_{\lambda}(t) \right\|_{L^{\frac{d}{2}(p-1)}} = \left\| \psi(\lambda^{2} t) \right\|_{L^{\frac{d}{2}(p-1)}} \quad \mbox{for all $t \in (-\frac{T_{\max}^{-}}{\lambda^{2}}, \ \frac{T_{\max}^{+}}{\lambda^{2}})$} . \end{equation} \end{remark} Proposition \ref{08/11/02/22:52} tells us that the $L^{\frac{d(p-1)}{2}}(\mathbb{R}^{d})$-norm of a singular solution concentrates at some point under the assumption (\ref{10/01/26/15:36}). In general, it is difficult to check whether the assumption (\ref{10/01/26/15:36}) holds (cf. Merle and Rapha\"el \cite{Merle-Raphael}). \par Without the assumption (\ref{10/01/26/15:36}), we have the following: \begin{proposition}\label{10/01/26/14:49} Under the same assumptions, definitions and notation as in Proposition \ref{08/11/02/22:52}, except for the assumption (\ref{10/01/26/15:36}), we define the ``renormalized'' functions $\Phi_{n}^{RN}$ by \begin{equation}\label{10/03/31/17:07} \Phi_{n}^{RN}(x,t)=\psi_{n}(x,t)-e^{\frac{i}{2}t\Delta}\psi_{n}(x,0), \quad n\in \mathbb{N}. \end{equation} Then, for any $T>0$, $\{\Phi_{n}^{RN}\}_{n\in \mathbb{N}}$ is a uniformly bounded sequence in $C([0,T];H^{1}(\mathbb{R}^{d}))$, and satisfies the following alternatives {\rm (i)} and {\rm (ii)}: \\ {\rm (i)} If \begin{equation}\label{10/01/25/18:38} \lim_{n\to \infty}\sup_{t\in [0,T]}\left\| \Phi_{n}^{RN}(t)\right\|_{L^{\frac{d}{2}(p-1)}}=0, \end{equation} then \begin{equation}\label{08/11/20/17:56} \lim_{n\to \infty}\sup_{t \in \left[t_{n}-\lambda_{n}^{2}T, \ t_{n} \right]} \left\|\psi(t) -e^{\frac{i}{2}(t-t_{n})\Delta}\psi(t_{n}) \right\|_{L^{\frac{d}{2}(p-1)}}=0. \end{equation} {\rm (ii)} If \begin{equation}\label{10/01/25/18:39} \limsup_{n\to \infty}\sup_{t\in [0,T]}\left\| \Phi_{n}^{RN}(t)\right\|_{L^{\frac{d}{2}(p-1)}}>0, \end{equation} then there exists a subsequence of $\{\Phi_{n}^{RN}\}$ (still denoted by the same symbol) with the following properties: There exist a nontrivial function $\Phi \in L^{\infty}([0,\infty);H^{1}(\mathbb{R}^{d}))$ and a sequence $\{y_{n}\}$ in $\mathbb{R}^{d}$ such that, putting $\widetilde{\Phi}^{RN}_{n}(x,t)=\Phi^{RN}_{n}(x+y_{n},t)$, we have \begin{equation}\label{10/03/15/19:51} \lim_{n\to \infty} \widetilde{\Phi}^{RN}_{n}= \Phi \quad \mbox{weakly* in $L^{\infty}([0,T];H^{1}(\mathbb{R}^{d}))$} .
\end{equation} Here, $\Phi$ solves the following equation \begin{equation}\label{10/03/15/19:52} \displaystyle{2i\frac{\partial \Phi}{\partial t}} +\Delta \Phi =-F, \end{equation} where $F$ is the nontrivial function in $L^{\infty}([0,\infty); L^{\frac{p+1}{p}}(\mathbb{R}^{d}))$ given by \begin{equation}\label{10/03/15/19:53} \lim_{n\to \infty}|\psi_{n}|^{p-1}\psi_{n}= F \quad \mbox{weakly* in $\ L^{\infty}([0,T];L^{\frac{p+1}{p}}(\mathbb{R}^{d}))$}. \end{equation} Furthermore, for any $\varepsilon>0$, there exists $R>0$ such that \begin{equation}\label{08/11/20/17:57} \lim_{n\to \infty} \int_{|x-y_{n}|\le \lambda_{n}R} \left| \psi(x,t_{n}-\lambda_{n}^{2}T) - e^{-\frac{i}{2}\lambda_{n}^{2}T \Delta}\psi(x, t_{n})\right|^{\frac{d}{2}(p-1)}\!\! dx \ge (1-\varepsilon)\left\| \Phi(T) \right\|_{L^{\frac{d}{2}(p-1)}}^{\frac{d}{2}(p-1)}. \end{equation} \end{proposition} If the case (i) of Proposition \ref{10/01/26/14:49} occurs, then we may say that the dynamics of the solution is composed of the free evolution and the dilation (\ref{10/04/22/11:13}). On the other hand, in (ii), concentration of $L^{\frac{d(p-1)}{2}}$-mass occurs further. We remark that the left-hand side of (\ref{08/11/20/17:57}) is finite. \\ \par Here, we discuss some relations between the previous works and our results: \\ {\bf Notes and Comments}. \begin{enumerate} \item Our analysis in $PW_{+}$ is inspired by the previous work by Duyckaerts, Holmer and Roudenko \cite{D-H-R, Holmer-Roudenko} (also Kenig and Merle \cite{Kenig-Merle}). They considered a typical nonlinear Schr\"odinger equation, the equation (\ref{08/05/13/8:50}) with $d=p=3$, and proved, in \cite{D-H-R}, that if $\psi_{0} \in H^{1}(\mathbb{R}^{3})$ satisfies that \begin{equation}\label{10/01/31/13:08} \mathcal{M}(\psi_{0})\mathcal{H}(\psi_{0}) < \mathcal{M}(Q)\mathcal{H}(Q) , \quad \left\| \psi_{0} \right\|_{L^{2}} \left\| \nabla \psi_{0} \right\|_{L^{2}} < \left\| Q \right\|_{L^{2}} \left\| \nabla Q \right\|_{L^{2}}, \end{equation} then the corresponding solution exists globally in time and has asymptotic states at $\pm \infty$, where $Q$ denotes the ground state of the equation (\ref{08/05/13/11:22}) with $\omega=1$. In our terminology, we see that the condition (\ref{10/01/31/13:08}) is equivalent to $\psi_{0} \in PW_{+}\cup\{0\}$ via the variational problem for $N_{2}$ (see (\ref{08/07/02/23:24})). In this paper, we intensively study the scattering problem on $PW_{+}$, so that we have Theorem \ref{08/05/26/11:53}, which is an extension of the result by Duyckaerts et al.\ \cite{D-H-R, Holmer-Roudenko} to all spatial dimensions $d\ge 1$ and $L^{2}$-supercritical and $H^{1}$-subcritical powers $2+\frac{4}{d}<p+1< 2^{*}$. Furthermore, we establish the so-called asymptotic completeness: the wave operators $W_{\pm}$ exist on $\Omega$ and they are homeomorphisms from $\Omega$ to $PW_{+}$. \\ For the nonlinear Klein-Gordon equation, the result corresponding to Theorem \ref{08/05/26/11:53} was obtained by Ibrahim, Masmoudi and Nakanishi \cite{IMN}. \item In order to prove that $PW_{+}$ is a set of scattering states, we basically employ the argument of Kenig and Merle \cite{Kenig-Merle}. In their argument, the Bahouri-Gerard type compactness \cite{Bahouri-Gerard} plays an important role: Duyckaerts et al.\ \cite{D-H-R} also used such a compactness (the profile decomposition due to Keraani \cite{Keraani}).
However, we employ the classical compactness device due to Brezis and Lieb \cite{Brezis-Lieb} instead of that due to Bahouri and Gerard; the Brezis-Lieb type compactness device is also used to prove the existence of the ground states (see Section \ref{08/05/13/15:57}), and to investigate the blowup solutions (see the proof of Proposition \ref{10/01/26/14:49} in Section \ref{08/07/03/0:40}). As long as we consider the $H^{1}$-solutions, this classical compactness device seems to be sufficient for our analysis. \item The decomposition scheme in Section \ref{09/05/05/10:03} (also the profile decomposition due to Bahouri-Gerard \cite{Bahouri-Gerard}, Keraani \cite{Keraani}) seems to be a kind of perturbation method employed in quantum physics, like a ``Born type approximation scheme''. \item In the course of the proof of Theorem \ref{08/05/26/11:53}, we encounter a ``fake soliton'' (a critical element in the terminology of Kenig-Merle \cite{Kenig-Merle}). Then, we take an approach slightly different from that of Duyckaerts et al.\ \cite{D-H-R} to trace its motion (for details, see Section \ref{09/05/06/9:13}). Moreover, our choice of function spaces is different from theirs \cite{D-H-R, Holmer-Roudenko} (see Section \ref{08/10/07/9:01}). We choose our function spaces so that the generalized inhomogeneous Strichartz estimates due to Foschi \cite{Foschi} work well there. \item In \cite{Holmer-Roudenko} (also in \cite{Holmer-Roudenko2}), Holmer and Roudenko also considered the equation (\ref{08/05/13/8:50}) with $d=p=3$ and proved that if $\psi_{0} \in H^{1}(\mathbb{R}^{3})$ satisfies that \begin{align} \label{10/04/05/15:37} &\mbox{$\psi_{0}$ is radially symmetric}, \\[6pt] \label{10/01/31/13:09} &\mathcal{M}(\psi_{0})\mathcal{H}(\psi_{0}) < \mathcal{M}(Q)\mathcal{H}(Q) , \quad \left\|\psi_{0} \right\|_{L^{2}} \left\| \nabla \psi_{0} \right\|_{L^{2}} > \left\| Q \right\|_{L^{2}} \left\| \nabla Q \right\|_{L^{2}} , \end{align} then the corresponding solution blows up in a finite time. In our terminology, we see that the condition (\ref{10/01/31/13:09}) is equivalent to $\psi_{0} \in PW_{-}$ via the variational problem for $N_{2}$. Hence, Theorem \ref{08/06/12/9:48}, together with Theorem \ref{08/04/21/9:28}, is an extension of their result, in particular, to all spatial dimensions $d\ge 1$ and powers $2+\frac{4}{d}<p+1 < 2^{*}$. \item Our $PW_{+}$ and $PW_{-}$ are naturally derived from the potential well $PW$ by appealing to the variational structure of the ground states. We note again that the functional $\mathcal{K}$ divides $PW$ into $PW_{+}$ and $PW_{-}$. Our potential well $PW$ seems new. One may find a similarity between $PW$ and the set of initial data given in Theorem 4.1 in Begout \cite{Begout}. However, the relevance is not clear. \item Stubbe \cite{Stubbe} already introduced the condition (\ref{10/01/31/13:08}) and proved the global existence of the solutions with initial data satisfying it. He also conjectured that the condition is sharp in the sense that there exists an initial datum which does not satisfy the condition (\ref{10/01/31/13:08}) and leads to a solution blowing up in a finite time. Our result concerning $PW_{-}$ gives an affirmative answer to his conjecture. \end{enumerate} \par This paper is organized as follows. In Section \ref{08/07/02/23:51}, we discuss properties of the potential well $PW$. In Section \ref{08/08/05/14:29}, we introduce function spaces in which Strichartz type estimates work well.
We also give a small data theory and a long time perturbation theory. Theorem \ref{09/05/18/10:43} is proved here (see Section \ref{09/05/30/15:49}). In Section \ref{08/10/03/15:11}, we give the proofs of Theorem \ref{08/05/26/11:53} and Corollary \ref{09/12/23/22:22}. Section \ref{08/07/03/0:40} is devoted to the proofs of Theorem \ref{08/06/12/9:48}, Theorem \ref{08/04/21/9:28}, and Propositions \ref{08/11/02/22:52} and \ref{10/01/26/14:49}. Appendices \ref{08/10/19/14:57}, \ref{08/10/03/15:12}, \ref{08/10/07/9:00}, \ref{09/03/06/16:43} and \ref{08/10/07/9:02} are devoted to preliminaries and auxiliary results. Finally, in Section \ref{08/05/13/15:57}, we give the proofs of Propositions \ref{08/06/16/15:24} and \ref{08/05/13/15:13}. \\ \\ {\bf Notation}. We summarize the notation used in this paper. \par We keep the letters $d$ and $p$ to denote the spatial dimension and the power of the nonlinearity of the equation (\ref{08/05/13/8:50}), respectively. \par $\mathbb{N}$ denotes the set of natural numbers, i.e., $\mathbb{N}=\{1,2,3,\ldots\}$. \par $I_{\max}$ denotes the maximal existence interval of the solution under consideration, which has the form \[ I_{\max}=(-T_{\max}^{-}, T_{\max}^{+}), \] where $T_{\max}^{+}>0$ is the maximal existence time for the future, and $T_{\max}^{-}>0$ is the one for the past. \par Functionals concerned with conservation laws for the equation (\ref{08/05/13/8:50}) are: the mass \[ \mathcal{M}(f):=\left\| f \right\|_{L^{2}}^{2} \qquad \mbox{(see (\ref{08/05/13/8:59}))}, \] the Hamiltonian \[ \mathcal{H}(f):=\left\| \nabla f \right\|_{L^{2}}^{2}-\frac{2}{p+1}\left\| f \right\|_{L^{p+1}}^{p+1} \qquad \mbox{(see (\ref{08/05/13/9:03}))}, \] and the momentum \[ \mathcal{P}(f):=\Im \int_{\mathbb{R}^{d}}\nabla f(x) \overline{f(x)}\,dx \qquad \mbox{(see (\ref{08/10/20/4:37}))}. \] We also use the functional \[ \mathcal{K}(f):=\left\| \nabla f \right\|_{L^{2}}^{2}-\frac{d(p-1)}{2(p+1)}\left\| f \right\|_{L^{p+1}}^{p+1} \qquad \mbox{(see (\ref{10/02/13/11:38}))}. \] The symbol $\mathcal{K}$ might stand for ``Kamiltonian'' (?). \par Functionals concerned with variational problems are the following: \[ \|f\|_{\widetilde{H}^{1}}^{2}:=\frac{d(p-1)-4}{d(p-1)}\|\nabla f \|_{L^{2}}^{2}+\|f\|_{L^{2}}^{2} \qquad \mbox{(see (\ref{10/02/19/15:27}))}, \] \[ \mathcal{N}_{2}(f):=\left\| f \right\|_{L^{2}}^{p+1-\frac{d}{2}(p-1)} \left\| \nabla f \right\|_{L^{2}}^{\frac{d}{2}(p-1)-2} \qquad \mbox{(see (\ref{10/02/19/15:28}))}, \] \[ \mathcal{I}(f):=\frac{\|f\|_{L^{2}}^{p+1-\frac{d}{2}(p-1)}\|\nabla f\|_{L^{2}}^{\frac{d}{2}(p-1)}}{\|f\|_{L^{p+1}}^{p+1}} \qquad \mbox{(see (\ref{10/02/19/15:29}))}. \] Variational values concerned with these functionals are \[ N_{1}:=\inf{ \left\{ \|f\|_{\widetilde{H}^{1}}^{2} \bigm| f \in H^{1}(\mathbb{R}^{d})\setminus \{0\},\ \mathcal{K}(f)\le 0 \right\} } \qquad \mbox{(see (\ref{08/07/02/23:23}))}, \] \[ N_{2}:=\inf{\left\{ \mathcal{N}_{2}(f) \bigm| f \in H^{1}(\mathbb{R}^{d})\setminus \{0\}, \ \mathcal{K}(f)\le 0 \right\}} \qquad \mbox{(see (\ref{08/07/02/23:24}))}, \] \[ N_{3}:=\inf{\Big\{ \mathcal{I}(f) \bigm| f \in H^{1}(\mathbb{R}^{d})\setminus \{0\} \Big\}} \qquad \mbox{(see (\ref{08/07/02/23:25}))}.
\] \indent We define our ``potential well'' by \[ PW:=\left\{ f \in H^{1}\setminus \{0\} \bigm| \mathcal{H}(f) <\mathcal{B}(f) \right\} \qquad \mbox{(see (\ref{09/05/18/22:03}))}, \] where \[ \mathcal{B}(f):=\frac{d(p-1)-4}{d(p-1)}\left( \frac{N_{2}}{\|f\|_{L^{2}}^{p+1-\frac{d}{2}(p-1)}}\right)^{\frac{4}{d(p-1)-4}} \qquad \mbox{(see (\ref{09/05/18/22:02}))}. \] We put \[ \varepsilon_{0}:=\mathcal{B}(\psi_{0})-\mathcal{H}(\psi_{0}). \] We divide the set $PW$ into two disjoint components: \[ PW_{+}:=\left\{ f \in PW \bigm| \mathcal{K}(f) >0 \right\} \qquad \mbox{(see (\ref{10/02/13/14:44}))}, \] \[ PW_{-}:=\left\{ f \in PW \bigm| \mathcal{K}(f) <0 \right\} \qquad \mbox{(see (\ref{10/02/13/14:45}))}. \] We can rewrite the set $PW_{+}$ in the form \[ PW_{+}=\left\{ f \in H^{1}(\mathbb{R}^{d})\, \biggm| \, \mathcal{K}(f)>0, \ \widetilde{\mathcal{N}}_{2}(f)<\widetilde{N}_{2}\right\}, \] where \[ \widetilde{\mathcal{N}}_{2}(f):=\left\| f \right\|_{L^{2}}^{p+1-\frac{d}{2}(p-1)} \sqrt{\mathcal{H}(f)}^{\frac{d}{2}(p-1)-2} \qquad \mbox{(see (\ref{09/04/29/14:30}))} \] and \[ \widetilde{N}_{2}:=\sqrt{\frac{d(p-1)-4}{d(p-1)}}^{\frac{d}{2}(p-1)-2}N_{2} \qquad \mbox{(see (\ref{09/04/29/14:31}))}. \] We need a subset $\Omega$ of $PW_{+}$ below to consider the wave operators: \[ \Omega:=\left\{ f \in H^{1}(\mathbb{R}^{d})\setminus \{0\} \, \biggm| \, \mathcal{N}_{2}(f)<\widetilde{N}_{2}\right\} \qquad \mbox{(see (\ref{10/06/13/11:42}))} . \] \indent $s_{p}$ stands for the critical regularity of our equation (\ref{08/05/13/8:50}), i.e., \[ s_{p}:=\frac{d}{2}-\frac{2}{p-1} \qquad \mbox{(see (\ref{09/05/13/15:00}))}. \] \indent We will fix a number $q_{1} \in (p+1,2^{*})$ in Sections \ref{08/08/05/14:29} and \ref{08/10/03/15:11}. The number $r_{0}$ is chosen for the pair $(q_{1},r_{0})$ being admissible, i.e., \[ \frac{1}{r_{0}} := \frac{d}{2}\left( \frac{1}{2}-\frac{1}{q_{1}}\right). \] Furthermore, $r_{1}$ and $\widetilde{r}_{1}$ are defined respectively by \[ \frac{1}{r_{1}} := \frac{d}{2}\left( \frac{1}{2}-\frac{1}{q_{1}}-\frac{s_{p}}{d}\right), \] \[ \frac{1}{\widetilde{r}_{1}} := \frac{d}{2}\left( \frac{1}{2}-\frac{1}{q_{1}}+\frac{s_{p}}{d} \right). \] A pair $(q_{2},r_{2})$ is defined by \[ \frac{1}{q_{2}}:= \frac{1}{p-1}\left( 1-\frac{2}{q_{1}}\right), \quad \frac{1}{r_{2}} := \frac{d}{2}\left( \frac{1}{2}-\frac{1}{q_{2}}-\frac{s_{p}}{d}\right). \] \indent For an interval $I$, we define the Strichartz type space $X(I)$ by \[ X(I):=L^{r_{1}}(I;L^{q_{1}}(\mathbb{R}^{d}))\cap L^{r_{2}}(I; L^{q_{2}}(\mathbb{R}^{d})) \qquad \mbox{(see (\ref{08/09/01/10:02}))}. \] Besides, we define the usual Strichartz space $S(I)$ by \[ S(I):=L^{\infty}(I; L^{2}(\mathbb{R}^{d})) \cap L^{r_{0}}(I; L^{q_{1}}(\mathbb{R}^{d})) \qquad \mbox{(see (\ref{10/02/19/22:40}))}. \] \indent In order to show that the solutions starting from $PW_{+}$ have asymptotic states, we will introduce a set \[ PW_{+}(\delta):= \left\{ f \in PW_{+} \, \big| \, \widetilde{\mathcal{N}}_{2}(f) < \delta \right\}, \quad \delta>0 \qquad \mbox{(see (\ref{10/02/19/23:05}))}. \] We will also consider a variational value \[ \begin{split} \widetilde{N}_{c}&:= \sup{\left\{ \delta >0 \bigm| \mbox{$\forall \psi_{0} \in PW_{+}(\delta)$, $\|\psi\|_{X(\mathbb{R})}< \infty$} \right\}} \\ &=\inf{\left\{ \delta >0 \bigm| \mbox{ $\exists \psi_{0} \in PW_{+}(\delta)$, $\|\psi\|_{X(\mathbb{R})}= \infty$} \right\}} \qquad \mbox{(see (\ref{08/09/02/18:06}))}, \end{split} \] where $\psi$ denotes the solution to (\ref{08/05/13/8:50}) with $\psi(0)=\psi_{0}$. 
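\par For orientation only, let us illustrate the exponents introduced above in the representative case $d=p=3$, where $s_{p}=\frac{1}{2}$; the concrete value $q_{1}=5$ used below is merely one example of an admissible choice of $q_{1} \in (p+1,2^{*})=(4,6)$ and plays no role elsewhere in the paper. With this choice, \[ \frac{1}{r_{0}}=\frac{3}{2}\left(\frac{1}{2}-\frac{1}{5}\right)=\frac{9}{20}, \qquad \frac{1}{r_{1}}=\frac{3}{2}\left(\frac{1}{2}-\frac{1}{5}-\frac{1}{6}\right)=\frac{1}{5}, \qquad \frac{1}{\widetilde{r}_{1}}=\frac{3}{2}\left(\frac{1}{2}-\frac{1}{5}+\frac{1}{6}\right)=\frac{7}{10}, \] and $\frac{2}{q_{2}}=1-\frac{2}{5}$ gives $q_{2}=\frac{10}{3}$, so that $\frac{1}{r_{2}}=\frac{3}{2}\left(\frac{1}{2}-\frac{3}{10}-\frac{1}{6}\right)=\frac{1}{20}$. In particular, $(q_{1},r_{0})=(5,\frac{20}{9})$ is admissible, since $\frac{2}{r_{0}}+\frac{d}{q_{1}}=\frac{9}{10}+\frac{3}{5}=\frac{d}{2}$.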
\par If $(A_{0},\|\cdot \|_{A_{0}})$ and $(A_{1},\|\cdot \|_{A_{1}})$ are ``compatible'' normed vector spaces (see \cite{Bergh}, p.~24), then $\|\cdot \|_{A_{0}\cap A_{1}}$ denotes the norm of their intersection $A_{0}\cap A_{1}$, i.e., \[ \left\| f \right\|_{A_{0}\cap A_{1}}:=\max\{ \left\| f \right\|_{A_{0}}, \left\| f \right\|_{A_{1}}\}\quad \mbox{for all $f \in A_{0}\cap A_{1}$}. \] The symbol $(\cdot, \cdot)$ denotes the inner product of $L^{2}(\mathbb{R}^{d})=L^{2}(\mathbb{R}^{d};\mathbb{C})$, i.e., \[ (f,g):=\int_{\mathbb{R}^{d}}f(x)\overline{g(x)}\,dx, \quad f,g \in L^{2}(\mathbb{R}^{d}). \] \indent $C_{c}^{\infty}(\mathbb{R}^{d})$ denotes the set of infinitely differentiable functions from $\mathbb{R}^{d}$ to $\mathbb{C}$ with compact support. \par Using the Fourier transformation $\mathcal{F}$, we define differential operators $|\nabla|^{s}$, $(-\Delta)^{\frac{s}{2}}$ and $(1-\Delta)^{\frac{s}{2}}$, for $s \in \mathbb{R}$, by \[ |\nabla|^{s}f=(-\Delta)^{\frac{s}{2}}f := \mathcal{F}^{-1}[|\xi|^{s}\mathcal{F}[f]], \qquad (1-\Delta)^{\frac{s}{2}}f:=\mathcal{F}^{-1}[(1+|\xi|^{2})^{\frac{s}{2}}\mathcal{F}[f]]. \] \section{Potential well $\boldsymbol{PW}$}\label{08/07/02/23:51} In this section, we shall discuss fundamental properties of the sets $PW$, $PW_{-}$ and $PW_{+}$. In particular, we will prove that these sets are invariant under the flow defined by the equation (\ref{08/05/13/8:50}) (see Propositions \ref{09/06/21/18:53}, \ref{09/06/21/19:28} and \ref{08/05/26/10:57}). Moreover, we prove Theorem \ref{08/10/20/5:21} here. \par We begin with the following fact: \begin{lemma} \label{09/06/21/15:50} The set $PW$ does not contain any function $f$ with $\mathcal{K}(f)=0$, i.e., \begin{equation}\label{10/02/16/20:47} \left\{ f \in H^{1}(\mathbb{R}^{d}) \bigm| \mathcal{K}(f)=0 \right\} \cap PW =\emptyset , \end{equation} so that \begin{equation}\label{10/02/16/20:48} PW=PW_{+}\cup PW_{-}, \qquad PW_{+}\cap PW_{-}=\emptyset. \end{equation} \end{lemma} \begin{proof}[Proof of Lemma \ref{09/06/21/15:50}] Let $f$ be a function in $H^{1}(\mathbb{R}^{d})$ with $\mathcal{K}(f)=0$. Then, the condition $\mathcal{K}(f)=0$ gives \begin{equation}\label{09/12/15/8:28} \mathcal{H}(f)=\frac{d(p-1)-4}{d(p-1)}\left\| \nabla f \right\|_{L^{2}}^{2}. \end{equation} Using (\ref{09/12/15/8:28}) and the definitions of $N_{2}$ (see (\ref{08/07/02/23:24})) and $\widetilde{N}_{2}$ (see (\ref{09/04/29/14:31})), we obtain that \begin{equation}\label{10/01/06/18:01} \begin{split} \widetilde{\mathcal{N}}_{2}(f)&= \sqrt{ \frac{d(p-1)-4}{d(p-1)} }^{\frac{d}{2}(p-1)-2} \mathcal{N}_{2}(f) \\ & \ge \sqrt{ \frac{d(p-1)-4}{d(p-1)} }^{\frac{d}{2}(p-1)-2}N_{2}=\widetilde{N}_{2}. \end{split} \end{equation} Hence, since $\mathcal{H}(f)\ge 0$ by (\ref{09/12/15/8:28}), it follows from (\ref{08/06/15/14:38}) that $f \not \in PW$. \par We immediately obtain (\ref{10/02/16/20:48}) from (\ref{10/02/16/20:47}) and the definitions of $PW_{+}$ and $PW_{-}$ (see (\ref{10/02/13/14:44}) and (\ref{10/02/13/14:45})). \end{proof} In the next lemma, we consider a path constructed from the ground state: \begin{lemma}\label{08/06/14/23:37} Let $Q_{\omega}$ be the ground state of the equation (\ref{08/05/13/11:22}) for $\omega>0$. We consider a path $\Gamma_{\omega} \colon [0,\infty) \to H^{1}(\mathbb{R}^{d})$ given by $\Gamma_{\omega}(s)=sQ_{\omega}$ for $s\ge 0$.
Then, $\Gamma_{\omega}$ is continuous and satisfies that \begin{align}\label{10/02/16/20:56} & \Gamma_{\omega}(s)\in PW_{+} \quad \mbox{for all $s \in (0,1)$}, \\[6pt] \label{10/02/16/21:03} &\Gamma_{\omega}(1)=Q_{\omega} \notin PW=PW_{+}\cup PW_{-}, \\[6pt] \label{10/02/16/20:58} &\Gamma_{\omega}(s) \in \left\{f \in H^{1}(\mathbb{R}^{d}) \bigm| \mathcal{H}(f) > 0\right\} \cap PW_{-} \quad \mbox{for all $s \in \left(1, \ \left\{ \frac{d(p-1)}{4}\right\}^{\frac{1}{p-1}}\right)$}, \\[6pt] \label{10/02/16/20:57} &\Gamma_{\omega}(s) \in \left\{f \in H^{1}(\mathbb{R}^{d})\setminus \{0\} \bigm| \mathcal{H}(f) \le 0\right\} \subset PW_{-} \quad \mbox{for all $s \in \left[ \left\{ \frac{d(p-1)}{4}\right\}^{\frac{1}{p-1}}, \ \infty\right)$}. \end{align} In particular, $PW_{-}\neq \emptyset$ and $PW_{+} \neq \emptyset$. \end{lemma} \begin{proof}[Proof of Lemma \ref{08/06/14/23:37}] The continuity of $\Gamma_{\omega}\colon (0,\infty)\to H^{1}(\mathbb{R}^{d})$ is obvious from its definition. We shall prove the properties (\ref{10/02/16/20:56})--(\ref{10/02/16/20:57}). As stated in (\ref{10/03/27/10:23}) (see also (\ref{09/12/16/20:17})), the ground state $Q_{\omega}$ satisfies that $\mathcal{K}(Q_{\omega})=0$, which immediately yields that \begin{equation}\label{10/03/27/10:02} \left\| \nabla Q_{\omega} \right\|_{L^{2}}^{2} = \frac{d(p-1)}{2(p+1)}\left\| Q_{\omega} \right\|_{L^{p+1}}^{p+1}. \end{equation} Using (\ref{10/03/27/10:02}), we obtain that \begin{equation}\label{09/09/14/0:33} \mathcal{H}(\Gamma_{\omega}(s))=s^{2}\left( 1-\frac{4}{d(p-1)}s^{p-1} \right)\|\nabla Q_{\omega} \|_{L^{2}}^{2}. \end{equation} This formula gives us that \begin{align} \label{10/03/15/21:47} &\mathcal{H}(\Gamma_{\omega}(s))>0 \quad \mbox{for all $s \in \left(0, \ \left\{ \frac{d(p-1)}{4}\right\}^{\frac{1}{p-1}}\right)$}, \\[6pt] \label{10/03/15/21:35} &\mathcal{H}(\Gamma_{\omega}(s)) \le 0 \quad \mbox{for all $s \in \left[ \left\{ \frac{d(p-1)}{4}\right\}^{\frac{1}{p-1}}, \ \infty\right)$}. \end{align} Similarly, we can verify that \begin{align}\label{10/03/15/21:20} &\mathcal{K}(\Gamma_{\omega}(s))>0 \quad \mbox{for all $s \in (0,1)$}, \\[6pt] \label{10/03/15/21:26} &\mathcal{K}(\Gamma_{\omega}(1))=\mathcal{K}(Q_{\omega})=0, \\[6pt] \label{10/03/15/21:27} &\mathcal{K}(\Gamma_{\omega}(s))<0 \quad \mbox{for all $s\in (1,\infty)$}. \end{align} Now, it follows from (\ref{09/09/14/0:33}) and (\ref{10/03/27/10:23}) that \begin{equation}\label{10/03/15/21:11} \widetilde{\mathcal{N}}_{2}(\Gamma_{\omega}(s))=s^{p-1}\left( 1-\frac{4}{d(p-1)}s^{p-1} \right)^{\frac{d}{4}(p-1)-1} \hspace{-7pt} N_{2} \qquad \mbox{for all $s \in \left(0,\ \left\{ \frac{d(p-1)}{4}\right\}^{\frac{1}{p-1}}\right)$}. \end{equation} This relation (\ref{10/03/15/21:11}) shows that \begin{equation}\label{10/03/15/21:22} \widetilde{\mathcal{N}}_{2}(\Gamma_{\omega}(s))< \widetilde{\mathcal{N}}_{2}(\Gamma_{\omega}(1))=\widetilde{N}_{2} \qquad \mbox{for all $s \in \left(0,\ \left\{ \frac{d(p-1)}{4}\right\}^{\frac{1}{p-1}}\right)\setminus\{1\}$}. \end{equation} Hence, (\ref{10/03/15/21:47}) and (\ref{10/03/15/21:22}), with the help of (\ref{08/06/15/14:38}), lead us to the conclusion that \begin{equation}\label{10/03/15/22:13} \Gamma_{\omega}(s) \in PW \qquad \mbox{for all $s \in \left(0,\ \left\{ \frac{d(p-1)}{4}\right\}^{\frac{1}{p-1}}\right) \setminus \{1\}$} . \end{equation} Then, (\ref{10/02/16/20:56}) follows from (\ref{10/03/15/21:20}). Moreover, (\ref{10/02/16/20:58}) follows from (\ref{10/03/15/21:47}) and (\ref{10/03/15/21:27}). 
The second claim (\ref{10/02/16/21:03}) is a direct consequence of (\ref{10/03/15/21:26}) and Lemma \ref{09/06/21/15:50}. \par It remains to prove (\ref{10/02/16/20:57}). Since $\mathcal{B}(f)\ge 0$ and $\mathcal{K}(f)<\mathcal{H}(f)$ for all $f \in H^{1}(\mathbb{R}^{d})\setminus \{0\}$, we have a relation \begin{equation}\label{10/03/15/21:34} \left\{f \in H^{1}(\mathbb{R}^{d})\setminus \{0\} \bigm| \mathcal{H}(f) \le 0\right\} \subset PW_{-}. \end{equation} Then, (\ref{10/02/16/20:57}) immediately follows from (\ref{10/03/15/21:35}) and this relation (\ref{10/03/15/21:34}). \end{proof} Now, we are in a position to prove Theorem \ref{08/10/20/5:21}. \begin{proof}[Proof of Theorem \ref{08/10/20/5:21}] We consider the path $\Gamma_{\omega}$ given in Lemma \ref{08/06/14/23:37}. Then, (\ref{10/02/16/20:56}) and the continuity of $\Gamma_{\omega}$ yield that\begin{equation}\label{10/03/28/9:20} \Gamma_{\omega}(s) \in PW_{+} \quad \mbox{for all $s \in (0,1)$}, \qquad \lim_{s\uparrow 1}\Gamma_{\omega}(s) =\Gamma_{\omega}(1)=Q_{\omega} \quad \mbox{strongly in $H^{1}(\mathbb{R}^{d})$}. \end{equation} Hence, we have $Q_{\omega} \in \overline{PW_{+}}$. Similarly, (\ref{10/02/16/20:58}) and the continuity of $\Gamma_{\omega}$ show that $Q_{\omega} \in \overline{PW_{-}}$. \end{proof} We find from the following lemma that $PW_{+}$ has a ``foliate structure''. \begin{lemma}\label{08/09/01/23:45} For all $\eta \in (0,\widetilde{N}_{2})$ and $\alpha>0$, there exists $f \in PW_{+}$ such that $\widetilde{\mathcal{N}}_{2}(f)=\eta$ and $\left\| f \right\|_{L^{2}}=\alpha$. \end{lemma} \begin{proof}[Proof of Lemma \ref{08/09/01/23:45}] We construct a desired function from the continuous path $\Gamma_{\omega}\colon [0,\infty) \to H^{1}(\mathbb{R}^{d})$ ($\omega>0$) given in Lemma \ref{08/06/14/23:37}. Let us remind you that \begin{align} \label{10/03/27/15:09} &\Gamma_{\omega}(s) \in PW_{+} \quad \mbox{for all $s \in (0,1)$}, \\[6pt] & \label{10/03/27/15:19} \widetilde{\mathcal{N}}_{2}(\Gamma_{\omega}(s))=s^{p-1}\left( 1-\frac{4}{d(p-1)}s^{p-1} \right)^{\frac{d}{4}(p-1)-1} N_{2} \quad \mbox{for all $s \in (0, 1)$}, \\[6pt] & \label{10/03/27/15:37} \widetilde{\mathcal{N}}_{2}(\Gamma_{\omega}(0))=0, \qquad \widetilde{\mathcal{N}}_{2}(\Gamma_{\omega}(1))=\widetilde{N}_{2} \end{align} (see (\ref{10/03/15/21:11}) for (\ref{10/03/27/15:19})). Using (\ref{10/03/27/15:19}), we find that $\widetilde{\mathcal{N}}_{2}(\Gamma_{\omega}(s))$ is continuous and monotone increasing with respect to $s$ on $[0,1]$. Hence, the intermediate value theorem, together with (\ref{10/03/27/15:37}), shows that for any $\eta \in (0, \widetilde{N}_{2})$, there exists $s_{\eta} \in (0,1)$ such that $\widetilde{\mathcal{N}}_{2}(\Gamma_{\omega}(s_{\eta}))=\eta$. Moreover, (\ref{10/03/27/15:09}) gives us that $\Gamma_{\omega}(s_{\eta}) \in PW_{+}$. We put $f=\Gamma_{\omega}(s_{\eta})$ and consider the scaled function $f_{\lambda}:=\lambda^{\frac{2}{p-1}}f(\lambda \cdot)$ for $\lambda>0$. Then, it is easy to see that \begin{equation} \label{10/03/27/16:10} f_{\lambda} \in PW_{+}, \quad \widetilde{\mathcal{N}}_{2}(f_{\lambda})=\widetilde{\mathcal{N}}_{2}(f)= \eta, \quad \left\| f_{\lambda} \right\|_{L^{2}}=\lambda^{-\frac{d(p-1)-4}{2(p-1)}}\left\| f\right\|_{L^{2}} \quad \mbox{for all $\lambda>0$}. \end{equation} Hence, for any $\alpha>0$, $f_{\lambda}$ with $\lambda =\left( \frac{ \left\| f \right\|_{L^{2}}}{\alpha}\right)^{\frac{2(p-1)}{d(p-1)-4}}$ is what we want. 
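For the reader's convenience, we note that the properties in (\ref{10/03/27/16:10}) follow from the elementary scaling identities \[ \left\| f_{\lambda} \right\|_{L^{2}}^{2}=\lambda^{\frac{4}{p-1}-d}\left\| f \right\|_{L^{2}}^{2}, \qquad \left\| \nabla f_{\lambda} \right\|_{L^{2}}^{2}=\lambda^{\frac{4}{p-1}+2-d}\left\| \nabla f \right\|_{L^{2}}^{2}, \qquad \left\| f_{\lambda} \right\|_{L^{p+1}}^{p+1}=\lambda^{\frac{4}{p-1}+2-d}\left\| f \right\|_{L^{p+1}}^{p+1}: \] these show that $\mathcal{H}(f_{\lambda})=\lambda^{\frac{4}{p-1}+2-d}\mathcal{H}(f)$ and $\mathcal{K}(f_{\lambda})=\lambda^{\frac{4}{p-1}+2-d}\mathcal{K}(f)$, so that the sign of $\mathcal{K}$ is preserved, and a direct computation of the exponents yields $\widetilde{\mathcal{N}}_{2}(f_{\lambda})=\widetilde{\mathcal{N}}_{2}(f)$, whence $f_{\lambda} \in PW_{+}$ by (\ref{09/12/16/17:07}).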
\end{proof} Finally, we give the invariance results of the sets $PW$, $PW_{+}$ and $PW_{-}$ under the flow defined by the equation (\ref{08/05/13/8:50}): \begin{lemma}[Invariance of $PW$]\label{09/06/21/18:53} Let $\psi_{0} \in PW$ and $\psi$ be the corresponding solution to the equation (\ref{08/05/13/8:50}). Then, we have that \[ \psi(t) \in PW \quad \mbox{for all $t \in I_{\max}$}. \] \end{lemma} \begin{proof}[Proof of Lemma \ref{09/06/21/18:53}] This lemma immediately follows from the mass and energy conservation laws (\ref{08/05/13/8:59}) and (\ref{08/05/13/9:03}). \end{proof} \begin{proposition}[Invariance of $PW_{+}$] \label{09/06/21/19:28} Let $\psi_{0} \in PW_{+}$ and $\psi$ be the corresponding solution to the equation (\ref{08/05/13/8:50}). Then, $\psi$ exists globally in time and satisfies the following: \begin{equation}\label{10/01/06/21:29} \psi(t) \in PW_{+} \quad \mbox{for all $t \in \mathbb{R}$}, \end{equation} \begin{equation}\label{09/06/21/21:58} \left\| \nabla \psi (t) \right\|_{L^{2}}^{2} < \frac{d(p-1)}{d(p-1)-4}\mathcal{H}(\psi_{0}) \quad \mbox{for all $t \in \mathbb{R}$}, \end{equation} \begin{equation}\label{09/06/21/19:30} \mathcal{K}(\psi(t)) > \left( 1-\frac{\widetilde{\mathcal{N}}_{2}(\psi_{0})}{\widetilde{N}_{2}}\right) \mathcal{H}(\psi_{0}) \quad \mbox{for all $t \in \mathbb{R}$}. \end{equation} \end{proposition} \begin{proof}[Proof of Proposition \ref{09/06/21/19:28}] We first prove the invariance of $PW_{+}$ under the flow defined by (\ref{08/05/13/8:50}). With the help of Lemma \ref{09/06/21/18:53}, it suffices to show that \begin{equation}\label{09/08/15/18:45} \mathcal{K}(\psi(t))>0 \quad \mbox{for all $t \in I_{\max}$}. \end{equation} Supposing the contrary that (\ref{09/08/15/18:45}) fails, we see from $\mathcal{K}(\psi_{0})>0$ and the continuity of $t \mapsto \mathcal{K}(\psi(t))$ that we can take $t_{0}\in I_{\max}$ such that \begin{equation}\label{10/03/27/16:44} \mathcal{K}(\psi(t_{0}))= 0. \end{equation} Then, (\ref{10/03/27/16:44}) and the energy conservation law (\ref{08/05/13/9:03}) yield that \begin{equation}\label{10/03/27/17:02} \begin{split} 0&=\mathcal{K}(\psi(t_{0})) \\[6pt] &=\mathcal{H}(\psi(t_{0})) - \frac{2}{p+1}\left\{ \frac{d}{4}(p-1)-1 \right\}\|\psi(t_{0})\|_{L^{p+1}}^{p+1} \\[6pt] &= \mathcal{H}(\psi_{0}) - \frac{2}{p+1}\left\{ \frac{d}{4}(p-1)-1 \right\} \frac{2(p+1)}{d(p-1)} \|\nabla \psi(t_{0})\|_{L^{2}}^{2}. \end{split} \end{equation} Since $\psi_{0}\in PW_{+}\subset PW$, we have $\mathcal{H}(\psi_{0})<\mathcal{B}(\psi_{0})$. This inequality and (\ref{10/03/27/17:02}) lead us to that \begin{equation}\label{10/03/27/17:07} 0 < \mathcal{B}(\psi_{0})-\frac{d(p-1)-4}{d(p-1)} \|\nabla \psi(t_{0})\|_{L^{2}}^{2}, \end{equation} which is equivalent to \begin{equation}\label{10/03/27/17:18} \|\nabla \psi(t_{0})\|_{L^{2}}^{2} < \frac{d(p-1)}{d(p-1)-4}\mathcal{B}(\psi_{0}) = \left( \frac{ N_{2}}{\|\psi(t_{0})\|_{L^{2}}^{p+1-\frac{d}{2}(p-1)}}\right)^{\frac{4}{d(p-1)-4}} . \end{equation} Dividing both sides of (\ref{10/03/27/17:18}) by $\|\nabla \psi(t_{0})\|_{L^{2}}^{2}$, we obtain that \begin{equation}\label{10/03/27/17:24} 1< \left( \frac{N_{2}}{\mathcal{N}_{2}(\psi(t_{0}))}\right)^{\frac{4}{d(p-1)-4}}. \end{equation} On the other hand, (\ref{10/03/27/16:44}), together with the definition of $N_{2}$, leads us to $\mathcal{N}_{2}(\psi(t_{0}))\ge N_{2}$, so that \begin{equation}\label{10/03/27/17:26} \left( \frac{N_{2}}{\mathcal{N}_{2}(\psi(t_{0}))}\right)^{\frac{4}{d(p-1)-4}} \le 1. \end{equation} This inequality (\ref{10/03/27/17:26}) contradicts (\ref{10/03/27/17:24}). Thus, (\ref{09/08/15/18:45}) must hold.
\par Once (\ref{09/08/15/18:45}) is established, we can easily obtain the following uniform bound: \begin{equation}\label{10/01/06/23:07} \left\| \nabla \psi (t) \right\|_{L^{2}}^{2} < \frac{d(p-1)}{d(p-1)-4}\mathcal{H}(\psi_{0}) \quad \mbox{for all $t \in I_{\max}$}. \end{equation} Indeed, it follows from the energy conservation law (\ref{08/05/13/9:03}) and (\ref{09/08/15/18:45}) that \begin{equation}\label{10/03/28/10:15} \begin{split} \mathcal{H}(\psi_{0})&=\mathcal{H}(\psi(t))= \left\| \nabla \psi(t)\right\|_{L^{2}}^{2}- \frac{2}{p+1}\left\| \psi(t) \right\|_{L^{p+1}}^{p+1} \\[6pt] &>\frac{d(p-1)-4}{d(p-1)}\left\| \nabla \psi(t)\right\|_{L^{2}}^{2} \quad \mbox{for all $t \in I_{\max}$}. \end{split} \end{equation} The estimate (\ref{10/01/06/23:07}), together with the sufficient condition for the blowup (\ref{10/01/27/11:26}), leads us to $I_{\max}=\mathbb{R}$. Hence, (\ref{10/01/06/21:29}) and (\ref{09/06/21/21:58}) follow from (\ref{09/08/15/18:45}) and (\ref{10/01/06/23:07}), respectively. \par It remains to prove (\ref{09/06/21/19:30}). The Gagliardo-Nirenberg inequality (\ref{08/05/13/15:45}) gives us that \begin{equation}\label{10/03/28/10:21} \begin{split} \mathcal{K}(\psi(t)) & =\left\| \nabla \psi(t) \right\|_{L^{2}}^{2} -\frac{d(p-1)}{2(p+1)}\left\| \psi(t) \right\|_{L^{p+1}}^{p+1} \\[6pt] &\ge \left\| \nabla \psi(t) \right\|_{L^{2}}^{2} -\frac{d(p-1)}{2(p+1)}\frac{1}{N_{3}}\mathcal{N}_{2}(\psi(t)) \left\| \nabla \psi(t)\right\|_{L^{2}}^{2}. \end{split} \end{equation} Moreover, this inequality (\ref{10/03/28/10:21}) and the relation $N_{3}=\frac{d(p-1)}{2(p+1)}N_{2}$ (see (\ref{08/05/13/11:25})) yield that \begin{equation}\label{08/12/16/18:09} \begin{split} \mathcal{K}(\psi(t)) & \ge \left\| \nabla \psi(t) \right\|_{L^{2}}^{2} -\frac{\mathcal{N}_{2}(\psi(t))}{N_{2}} \left\| \nabla \psi(t) \right\|_{L^{2}}^{2} . \end{split} \end{equation} Here, it follows from (\ref{09/06/21/21:58}) that \begin{equation} \label{09/08/15/19:08} \begin{split} \mathcal{N}_{2}(\psi(t))&< \sqrt{\frac{d(p-1)}{d(p-1)-4}}^{\frac{d}{2}(p-1)-2}\hspace{-1cm} \left\|\psi_{0} \right\|_{L^{2}}^{p+1-\frac{d}{2}(p-1)} \sqrt{\mathcal{H}(\psi_{0})}^{\frac{d}{2}(p-1)-2} \\ &= \sqrt{\frac{d(p-1)}{d(p-1)-4}}^{\frac{d}{2}(p-1)-2}\widetilde{\mathcal{N}}_{2}(\psi_{0}). \end{split} \end{equation} Hence, combining (\ref{08/12/16/18:09}), (\ref{09/08/15/19:08}) and $\mathcal{H}(\psi(t))>\mathcal{K}(\psi(t))>0$, we obtain that \begin{equation}\label{10/03/10:27} \begin{split} \mathcal{K}(\psi(t))&> \left\| \nabla \psi(t) \right\|_{L^{2}}^{2} - \sqrt{\frac{d(p-1)}{d(p-1)-4}}^{\frac{d}{2}(p-1)-2} \frac{\widetilde{\mathcal{N}}_{2}(\psi_{0})}{N_{2}} \left\| \nabla \psi(t) \right\|_{L^{2}}^{2} \\[6pt] & = \left( 1-\frac{\widetilde{\mathcal{N}}_{2}(\psi_{0})}{\widetilde{N}_{2}}\right) \left\| \nabla \psi(t)\right\|_{L^{2}}^{2} \\[6pt] &\ge \left( 1-\frac{\widetilde{\mathcal{N}}_{2}(\psi_{0})}{\widetilde{N}_{2}}\right) \mathcal{H}(\psi(t)) = \left( 1-\frac{\widetilde{\mathcal{N}}_{2}(\psi_{0})}{\widetilde{N}_{2}}\right) \mathcal{H}(\psi_{0}) . \end{split} \end{equation} \end{proof} \begin{proposition}[Invariance of $PW_{-}$]\label{08/05/26/10:57} Let $\psi_{0} \in PW_{-}$ and $\psi$ be the corresponding solution to the equation (\ref{08/05/13/8:50}). Then, we have \begin{equation}\label{09/06/21/18:52} \psi(t) \in PW_{-}, \quad \mathcal{K}(\psi(t)) < -\varepsilon_{0} \qquad \mbox{for all $t \in I_{\max}$}, \end{equation} where $\varepsilon_{0}=\mathcal{B}(\psi_{0})-\mathcal{H}(\psi_{0})>0$.
\end{proposition} \begin{proof}[Proof of Proposition \ref{08/05/26/10:57}] By the virtue of Lemma \ref{09/06/21/18:53}, it suffices to show that \begin{equation}\label{10/03/28/10:48} \mathcal{K}(\psi(t)) < -\varepsilon_{0} \qquad \mbox{for all $t \in I_{\max}$}. \end{equation} We first prove that \begin{equation}\label{10/03/28/11:09} \mathcal{K}(\psi_{0})<-\varepsilon_{0}. \end{equation} Supposing the contrary that $-\varepsilon_{0}\le \mathcal{K}(\psi_{0})$, we have \begin{equation}\label{10/03/28/10:54} \begin{split} 0&\le \mathcal{K}(\psi_{0})+\varepsilon_{0} \\[6pt] &=\mathcal{H}(\psi_{0})+\varepsilon_{0} - \frac{2}{p+1}\left\{ \frac{d}{4}(p-1)-1 \right\}\|\psi_{0}\|_{L^{p+1}}^{p+1} \\[6pt] &= \mathcal{B}(\psi_{0}) - \frac{2}{p+1}\left\{ \frac{d}{4}(p-1)-1 \right\}\|\psi_{0}\|_{L^{p+1}}^{p+1} . \end{split} \end{equation} Moreover, it follows from (\ref{10/03/28/10:54}) and $\mathcal{K}(\psi_{0})<0$ that \begin{equation}\label{10/03/28/10:55} \begin{split} 0& < \mathcal{B}(\psi_{0}) - \frac{2}{p+1}\left\{ \frac{d}{4}(p-1)-1 \right\} \frac{2(p+1)}{d(p-1)} \|\nabla \psi_{0}\|_{L^{2}}^{2} \\[6pt] &= \mathcal{B}(\psi_{0})-\frac{d(p-1)-4}{d(p-1)} \|\nabla \psi_{0}\|_{L^{2}}^{2} , \end{split} \end{equation} so that \begin{equation}\label{10/03/28/11:02} \|\nabla \psi_{0}\|_{L^{2}}^{2} < \frac{d(p-1)}{d(p-1)-4}\mathcal{B}(\psi_{0}) = \left( \frac{N_{2}}{\|\psi_{0}\|_{L^{2}}^{p+1-\frac{d}{2}(p-1)}}\right)^{\frac{4}{d(p-1)-4}} . \end{equation} Dividing the both sides of (\ref{10/03/28/11:02}) by $\|\nabla \psi_{0}\|_{L^{2}}^{2}$, we obtain that \begin{equation}\label{10/03/28/11:05} 1< \left( \frac{ N_{2}}{\mathcal{N}_{2}(\psi_{0})}\right)^{\frac{4}{d(p-1)-4}} . \end{equation} On the other hand, since $\mathcal{K}(\psi_{0})<0$, the definition of $N_{2}$ (see (\ref{08/07/02/23:24})) implies that \begin{equation}\label{10/03/28/11:06} \left( \frac{ N_{2}}{\mathcal{N}_{2}(\psi_{0})}\right)^{\frac{4}{d(p-1)-4}} \le 1 , \end{equation} which contradicts (\ref{10/03/28/11:05}). Hence, we have proved (\ref{10/03/28/11:09}). \par Next, we prove (\ref{10/03/28/10:48}). Suppose the contrary that (\ref{10/03/28/10:48}) fails. Then, it follows from (\ref{10/03/28/11:09}) and $\psi \in C(I_{\max};H^{1}(\mathbb{R}^{d}))$ that there exists $t_{1}\in I_{\max} \setminus \{0\}$ such that $-\varepsilon_{0}= \mathcal{K}(\psi(t_{1}))$. This relation and the energy conservation law (\ref{08/05/13/9:03}) lead us to that \begin{equation}\label{10/03/28/11:17} \begin{split} 0&= \mathcal{K}(\psi(t_{1}))+\varepsilon_{0} \\[6pt] &=\mathcal{H}(\psi_{0})+\varepsilon_{0} - \frac{2}{p+1}\left\{ \frac{d}{4}(p-1)-1 \right\}\|\psi(t_{1})\|_{L^{p+1}}^{p+1} \\[6pt] &= \mathcal{B}(\psi_{0}) - \frac{2}{p+1}\left\{ \frac{d}{4}(p-1)-1 \right\} \frac{2(p+1)}{d(p-1)} \left( \|\nabla \psi(t_{1})\|_{L^{2}}^{2}+\varepsilon_{0}\right) \\[6pt] &< \mathcal{B}(\psi_{0})-\frac{d(p-1)-4}{d(p-1)} \|\nabla \psi(t_{1})\|_{L^{2}}^{2} . \end{split} \end{equation} Hence, we have that \begin{equation}\label{10/03/28/11:20} \begin{split} \|\nabla \psi(t_{1})\|_{L^{2}}^{2} & < \frac{d(p-1)}{d(p-1)-4}\mathcal{B}(\psi_{0}) =\left( \frac{N_{2}}{\|\psi_{0}\|_{L^{2}}^{p+1-\frac{d}{2}(p-1)}}\right)^{\frac{4}{d(p-1)-4}} \\[6pt] &= \left( \frac{N_{2}}{\|\psi(t_{1})\|_{L^{2}}^{p+1-\frac{d}{2}(p-1)}}\right)^{\frac{4}{d(p-1)-4}} , \end{split} \end{equation} where we have used the mass conservation law (\ref{08/05/13/8:59}) to derive the last equality. Then, an argument similar to the above yields a contradiction: Thus, we completed the proof. 
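For clarity, we spell out this last step: dividing both sides of (\ref{10/03/28/11:20}) by $\|\nabla \psi(t_{1})\|_{L^{2}}^{2}$, we obtain \[ 1< \left( \frac{N_{2}}{\mathcal{N}_{2}(\psi(t_{1}))}\right)^{\frac{4}{d(p-1)-4}}, \] whereas $\mathcal{K}(\psi(t_{1}))=-\varepsilon_{0}<0$ and the definition of $N_{2}$ (see (\ref{08/07/02/23:24})) imply that $\mathcal{N}_{2}(\psi(t_{1}))\ge N_{2}$, so that the right-hand side above is at most $1$.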
\end{proof} \section{Strichartz type estimate and scattering} \label{08/08/05/14:29} In this section, we introduce a certain space-time function space in addition to the usual Strichartz spaces, which enables us to control long-time behavior of solutions. Using this function space, we prepare two important propositions: Proposition \ref{08/08/22/20:59} in Section \ref{09/05/30/15:49} (small data theory) and Proposition \ref{08/08/05/14:30} in Section \ref{09/05/30/21:16} (long time perturbation theory). The former is used to avoid the vanishing and the latter to avoid the dichotomy in our concentration-compactness-type argument in Section \ref{09/05/05/10:03}. At the end of this section, we show the existence of the wave operators on $PW_{+}$. \subsection{Auxiliary function space $X$}\label{08/10/07/9:01} In order to prove the scattering result ((\ref{10/01/26/20:32}) in Theorem \ref{08/05/26/11:53}), we need to handle the inhomogeneous term of the integral equation associated with (\ref{08/05/13/8:50}) in a suitable function space. Therefore, we will prepare a function space $X(I)$, $I\subset \mathbb{R}$, in which Strichartz type estimates work well. \par Our equation (\ref{08/05/13/8:50}) is invariant under the scaling \begin{equation}\label{10/03/29/10:21} \psi(x,t) \mapsto \psi_{\lambda}(x,t):= \lambda^{\frac{2}{p-1}} \psi(\lambda x, \lambda^{2}t), \end{equation} which determines a critical regularity \begin{equation}\label{09/05/13/15:00} s_{p}:=\frac{d}{2}-\frac{2}{p-1}. \end{equation} The condition (\ref{09/05/13/15:03}) implies that $0<s_{p}<1$. \par Throughout this paper, we fix a number $q_{1}$ with $p+1<q_{1}<2^{*}$. Then, we define indices $r_{0}$, $r_{1}$ and $\widetilde{r}_{1}$ by \begin{align} \label{09/09/23/18:00} &\frac{1}{r_{0}}:=\frac{d}{2}\left( \frac{1}{2}-\frac{1}{q_{1}} \right), \\[6pt] \label{08/09/04/10:16} &\frac{1}{r_{1}}:=\frac{d}{2}\left( \frac{1}{2}-\frac{1}{q_{1}}-\frac{s_{p}}{d}\right), \\[6pt] \label{09/09/23/18:02} &\frac{1}{\widetilde{r}_{1}}:=\frac{d}{2}\left( \frac{1}{2}-\frac{1}{q_{1}}+\frac{s_{p}}{d}\right). \end{align} Here, the pair $(q_{1},r_{0})$ is admissible. Besides these indices, we define a pair $(q_{2},r_{2})$ by \begin{equation}\label{09/12/05/16:15} \frac{p-1}{q_{2}}=1-\frac{2}{q_{1}}, \qquad \frac{1}{r_{2}}:=\frac{d}{2}\left( \frac{1}{2}-\frac{1}{q_{2}}-\frac{s_{p}}{d} \right). \end{equation} It is worth noting that the Sobolev embedding and the Strichartz estimate lead us to the following estimate: For any pair $(q,r)$ satisfying \begin{equation}\label{10/03/29/10:52} \frac{d}{2}(p-1) \le q < 2^{*}, \qquad \frac{1}{r}=\frac{d}{2}\left( \frac{1}{2}-\frac{1}{q}-\frac{s_{p}}{d}\right), \end{equation} we have \begin{equation}\label{08/10/25/23:16} \left\|e^{\frac{i}{2}t\Delta}f \right\|_{L^{r}(I;L^{q})} \lesssim \left\|(-\Delta)^{\frac{s_{p}}{2}}f\right\|_{L^{2}} \quad \mbox{for all $f \in \dot{H}^{s_{p}}(\mathbb{R}^{d})$ and interval $I$}, \end{equation} where the implicit constant depends only on $d$, $p$ and $q$. The pairs $(q_{1},r_{1})$ and $(q_{2},r_{2})$ satisfy the condition (\ref{10/03/29/10:52}), so that the estimate (\ref{08/10/25/23:16}) is valid for these pairs. \par Now, for any interval $I$, we put \begin{align} \label{08/09/01/10:02} X(I)&= L^{r_{1}}(I;L^{q_{1}})\cap L^{r_{2}}(I; L^{q_{2}}), \\[6pt] \label{10/02/19/22:40} S(I)&=L^{\infty}(I; L^{2}) \cap L^{r_{0}}(I; L^{q_{1}}).
\end{align} We find that Strichartz type estimates work well in the space $X(I)$: \begin{lemma}\label{08/07/31/12:02} Assume that $d\ge 1$ and $2+\frac{4}{d}< p+1 <2^{*}$. Let $t_{0} \in \mathbb{R}$ and $I$ be an interval whose closure contains $t_{0}$. Then, we have \begin{align} \label{10/03/29/11:06} \left\| \int_{t_{0}}^{t}e^{\frac{i}{2}(t-t')\Delta} v(t')\,dt' \right\|_{X(I)}&\lesssim \left\|v \right\|_{L^{\widetilde{r}_{1}'}(I;L^{q_{1}'})}, \\[6pt] \label{10/03/29/11:07} \left\| \int_{t_{0}}^{t}e^{\frac{i}{2}(t-t')\Delta} \left( v_{1}v_{2} \right)(t')\,dt' \right\|_{X(I)} &\lesssim \left\| v_{1} \right\|_{L^{r_{1}}(I;L^{q_{1}})} \left\| v_{2} \right\|_{L^{\frac{r_{2}}{p-1}}(I;L^{\frac{q_{2}}{p-1}})} , \end{align} where the implicit constants depend only on $d$, $p$ and $q_{1}$. \end{lemma} The estimate (\ref{10/03/29/11:06}) in Lemma \ref{08/07/31/12:02} is due to Foschi (see \cite{Foschi}, Theorem 1.4). The estimate (\ref{10/03/29/11:07}) is an immediate consequence of (\ref{10/03/29/11:06}) and the H\"older inequality. \\ \par The following lemma is frequently used in the next section (Section \ref{08/10/03/15:11}); it is fundamental and easily obtained from the H\"older inequality and the chain rule: \begin{lemma} \label{08/08/18/15:41} Assume that $d \ge 1$ and $2+\frac{4}{d}< p+1 < 2^{*}$. Let $t_{0} \in \mathbb{R}$ and $I$ be an interval whose closure contains $t_{0}$. Then, we have \begin{align} \label{08/08/18/15:43} &\left\||v|^{p-1}v \right\|_{L^{r_{0}'}(I; L^{q_{1}'})} \le \left\|v \right\|_{L^{r_{0}}(I;L^{q_{1}})} \left\| v \right\|_{L^{r_{2}}(I;L^{q_{2}})}^{p-1}, \\[6pt] \label{10/03/29/11:14} &\left\|\nabla \left( |v|^{p-1}v \right) \right\|_{L^{r_{0}'}(I; L^{q_{1}'})} \lesssim \left\|\nabla v \right\|_{L^{r_{0}}(I;L^{q_{1}})} \left\| v \right\|_{L^{r_{2}}(I;L^{q_{2}})}^{p-1} , \end{align} where the implicit constant depends only on $d$, $p$ and $q_{1}$. \end{lemma} We also need the following interpolation estimate in the next section (Section \ref{09/05/05/10:03}): \begin{lemma}\label{09/04/29/15:57} For $j \in \{1,2\}$, there exists a constant $\theta_{j} \in (0,1)$ such that \[ \left\|e^{\frac{i}{2}t\Delta}f \right\|_{L^{r_{j}}(I;L^{q_{j}})} \lesssim \left\| e^{\frac{i}{2}t\Delta}f \right\|_{L^{\infty}(I;L^{\frac{d}{2}(p-1)})}^{1-\theta_{j}} \left\| (-\Delta)^{\frac{s_{p}}{2}} f \right\|_{L^{2}}^{\theta_{j}} \quad \mbox{for all $f \in \dot{H}^{s_{p}}(\mathbb{R}^{d})$}, \] where the implicit constant depends only on $d$, $p$ and $q_{1}$. \end{lemma} \begin{proof}[Proof of Lemma \ref{09/04/29/15:57}] Fix a pair $(q,r)$ satisfying (\ref{10/03/29/10:52}) and $q_{1}<q <2^{*}$. Applying the H\"older inequality first and (\ref{08/10/25/23:16}) afterwards, we obtain that \begin{equation}\label{10/03/29/11:26} \begin{split} \left\|e^{\frac{i}{2}t\Delta}f \right\|_{L^{r_{j}}(I;L^{q_{j}})} &\le \left\| e^{\frac{i}{2}t\Delta}f \right\|_{L^{\infty}(I;L^{\frac{d}{2}(p-1)})}^{1-\theta_{j}} \left\| e^{\frac{i}{2}t\Delta}f \right\|_{L^{r}(I;L^{q})}^{\theta_{j}} \\[6pt] &\lesssim \left\| e^{\frac{i}{2}t\Delta}f \right\|_{L^{\infty}(I;L^{\frac{d}{2}(p-1)})}^{1-\theta_{j}} \left\| (-\Delta)^{\frac{s_{p}}{2}} f \right\|_{L^{2}}^{\theta_{j}}, \quad j=1,2 \end{split} \end{equation} where \[ \theta_{j}:=\frac{q}{q_{j}}\frac{2q_{j}-d(p-1)}{2q-d(p-1)}, \] and the implicit constant depends only on $d$, $p$ and $q_{1}$ (we may ignore the dependence on $q$). \end{proof} At the end of this subsection, we record a decay estimate for the free solution: \begin{lemma}\label{10/04/08/9:20} Assume that $d\ge 1$.
Then, we have \begin{equation}\label{10/04/08/9:22} \lim_{t\to \pm \infty} \left\| e^{\frac{i}{2}t\Delta}f \right\|_{L^{q}}=0 \quad \mbox{for all $q \in (2,2^{*})$ and $f \in H^{1}(\mathbb{R}^{d})$}. \end{equation} \end{lemma} \begin{proof}[Proof of Lemma \ref{10/04/08/9:20}] This lemma easily follows from the $L^{p}$-$L^{q}$-estimate for the free solution and the density of the Schwartz space $\mathcal{S}(\mathbb{R}^{d})$ in $H^{1}(\mathbb{R}^{d})$. \end{proof} \subsection{Sufficient conditions for scattering} \label{09/05/30/15:49} We shall give two sufficient conditions for solutions to have asymptotic states in the energy space $H^{1}(\mathbb{R}^{d})$. One of them is the small data theory (see Proposition \ref{08/08/22/20:59} and Remark \ref{10/03/29/12:24}). We also give the proof of Proposition \ref{09/05/18/10:43} here. \par We begin with the following proposition: \begin{proposition}[Scattering in the energy space] \label{08/08/18/16:51} Assume that $d\ge 1$ and $2+\frac{4}{d}<p+1<2^{*}$. Let $\psi$ be a solution to the equation (\ref{08/05/13/8:50}). \\ {\rm (i)} Suppose that $\psi$ exists on $[0,\infty)$ and satisfies that \begin{equation}\label{10/03/29/12:27} \left\| \psi \right\|_{X([0,\infty))}< \infty, \qquad \left\|\psi \right\|_{L^{\infty}([0,\infty);H^{1})}< \infty. \end{equation} Then, there exists a unique asymptotic state $\phi_{+} \in H^{1}(\mathbb{R}^{d})$ such that \begin{equation}\label{10/03/29/12:28} \lim_{t \to \infty}\left\| \psi(t)-e^{\frac{i}{2}t\Delta}\phi_{+}\right\|_{H^{1}} =0. \end{equation} {\rm (ii)} Suppose that $\psi$ exists on $(-\infty, 0]$ and satisfies that \begin{equation}\label{10/03/29/12:29} \left\| \psi \right\|_{X((-\infty,0])}< \infty, \qquad \left\|\psi \right\|_{L^{\infty}((-\infty,0];H^{1})}< \infty. \end{equation} Then, there exists a unique asymptotic state $\phi_{-} \in H^{1}(\mathbb{R}^{d})$ such that \begin{equation}\label{10/03/29/12:30} \lim_{t \to -\infty}\left\| \psi(t)-e^{\frac{i}{2}t\Delta}\phi_{-}\right\|_{H^{1}} =0. \end{equation} \end{proposition} For the proof of Proposition \ref{08/08/18/16:51}, we need the following lemma. \begin{lemma} \label{08/08/18/15:53} Assume that $d\ge 1$ and $2+\frac{4}{d}<p+1<2^{*}$. Let $I$ be an interval and $\psi$ be a solution to the equation (\ref{08/05/13/8:50}) on $I$. Suppose that \begin{equation}\label{10/03/29/18:06} \|\psi\|_{X(I)}<\infty, \quad \|\psi\|_{L^{\infty}(I;H^{1})}<\infty. \end{equation} Then, we have that \[ \left\|(1-\Delta)^{\frac{1}{2}}\psi \right\|_{S(I)}<\infty . \] \end{lemma} \begin{proof}[Proof of Lemma \ref{08/08/18/15:53}] Since the $L^{\infty}(I;L^{2})$ part of the $S(I)$-norm of $(1-\Delta)^{\frac{1}{2}}\psi$ is finite by (\ref{10/03/29/18:06}), it suffices to show that \begin{equation}\label{10/03/29/18:09} \left\|(1-\Delta)^{\frac{1}{2}} \psi \right\|_{L^{r}(I;L^{q})}< \infty \end{equation} for every pair $(q,r)$ with $q_{1}<q<2^{*}$ and $\frac{1}{r}=\frac{d}{2}\left( \frac{1}{2}-\frac{1}{q}\right)$ ($(q,r)$ is an admissible pair). Indeed, the H\"older inequality, together with (\ref{10/03/29/18:06}) and (\ref{10/03/29/18:09}), gives us that \begin{equation}\label{10/03/29/18:45} \left\|(1-\Delta)^{\frac{1}{2}} \psi \right\|_{L^{r_{0}}(I;L^{q_{1}})} \le \left\| \psi \right\|_{L^{\infty}(I;H^{1})}^{1-\frac{q(q_{1}-2)}{q_{1}(q-2)}} \left\|(1-\Delta)^{\frac{1}{2}} \psi \right\|_{L^{r}(I;L^{q})}^{\frac{q(q_{1}-2)}{q_{1}(q-2)}} <\infty. \end{equation} We shall prove (\ref{10/03/29/18:09}). Let $J$ be a subinterval of $I$ with the property that \begin{equation} \left\| (1-\Delta)^{\frac{1}{2}}\psi \right\|_{L^{r}(J;L^{q})}<\infty.
\end{equation} Then, applying the Strichartz estimate to the integral equation associated with $(\ref{08/05/13/8:50})$, we obtain that \begin{equation}\label{09/09/24/18:21} \left\| (1-\Delta)^{\frac{1}{2}}\psi \right\|_{L^{r}(J;L^{q})} \lesssim \left\| \psi(t_{0}) \right\|_{H^{1}} + \left\|(1-\Delta)^{\frac{1}{2}} \left( |\psi|^{p-1}\psi \right) \right\|_{L^{r_{0}'}(J;L^{q_{1}'})}, \end{equation} where the implicit constant depends only on $d$, $p$, $q_{1}$ and $q$. Moreover, it follows from Lemma \ref{08/08/18/15:41} and the H\"older inequality that \begin{equation}\label{10/03/29/17:55} \begin{split} \left\| (1-\Delta)^{\frac{1}{2}}\psi \right\|_{L^{r}(J;L^{q})} &\lesssim \left\| \psi(t_{0}) \right\|_{H^{1}} + \left\| (1-\Delta)^{\frac{1}{2}} \psi \right\|_{L^{r_{0}}(J;L^{q_{1}})}\left\| \psi \right\|_{L^{r_{2}}(J;L^{q_{2}})}^{p-1} \\[6pt] &\le \left\|\psi \right\|_{L^{\infty}(I;H^{1})} + \left\|\psi \right\|_{L^{\infty}(J;H^{1})}^{1-\theta} \left\| (1-\Delta)^{\frac{1}{2}} \psi \right\|_{L^{r}(J;L^{q})}^{\theta} \left\| \psi \right\|_{X(J)}^{p-1}, \end{split} \end{equation} where \[ \theta :=\frac{q(q_{1}-2)}{q_{1}(q-2)} \in (0,1) \] and the implicit constant depends only on $d$, $p$, $q_{1}$ and $q$. This estimate (\ref{10/03/29/17:55}) and the Young inequality show that \begin{equation}\label{10/03/29/18:24} \left\| (1-\Delta)^{\frac{1}{2}}\psi \right\|_{L^{r}(J;L^{q})} \lesssim \left\|\psi \right\|_{L^{\infty}(I;H^{1})} + \left\|\psi \right\|_{L^{\infty}(I;H^{1})} \left\| \psi \right\|_{X(I)}^{\frac{p-1}{1-\theta}}, \end{equation} where the implicit constant depends only on $d$, $p$, $q_{1}$ and $q$. Since the right-hand side of (\ref{10/03/29/18:24}) is independent of $J$, the condition (\ref{10/03/29/18:06}) leads us to (\ref{10/03/29/18:09}). \end{proof} Now, we give the proof of Proposition \ref{08/08/18/16:51}. \begin{proof}[Proof of Proposition \ref{08/08/18/16:51}] Since the proofs of (i) and (ii) are very similar, we consider (i) only. A starting point is the following formula derived from the integral equation associated to (\ref{08/05/13/8:50}):\begin{equation}\label{10/03/29/21:57} e^{-\frac{i}{2}t\Delta }\psi(t)-e^{-\frac{i}{2}s \Delta}\psi(s) = \frac{i}{2}\int_{s}^{t}e^{-\frac{i}{2}t'\Delta}\left(|\psi|^{p-1}\psi \right)(t')dt' \quad \mbox{for all $s,t \in [0,\infty)$}. \end{equation} Applying the Strichartz estimate and Lemma \ref{08/08/18/15:41} to this formula, we obtain that \begin{equation}\label{09/12/14/14:32} \begin{split} \left\| e^{-\frac{i}{2}t\Delta }\psi(t)-e^{-\frac{i}{2}s\Delta}\psi(s) \right\|_{H^{1}} &\le \sup_{s'\in [s,t]} \left\| \int_{s}^{s'}e^{-\frac{i}{2}t'\Delta}\left(|\psi|^{p-1}\psi \right)(t')dt' \right\|_{H^{1}} \\[6pt] & \lesssim \left\| (1-\Delta)^{\frac{1}{2}}\left( |\psi|^{p-1}\psi \right) \right\|_{L^{r_{0}'}([s,t];L^{q_{1}'})} \\[6pt] &\lesssim \left\|(1-\Delta)^{\frac{1}{2}} \psi \right\|_{S([0,\infty))} \left\|\psi \right\|_{X([s,t])}^{p-1}, \end{split} \end{equation} where the implicit constant depends only on $d$, $p$ and $q_{1}$. 
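Here, let us record the elementary observation behind the next step: since the exponents $r_{1}$ and $r_{2}$ are finite, the condition $\left\| \psi \right\|_{X([0,\infty))}<\infty$ in (\ref{10/03/29/12:27}) forces the tails of the $X$-norm to vanish, namely \begin{equation*} \lim_{T\to \infty} \left\| \psi \right\|_{L^{r_{j}}([T,\infty);L^{q_{j}})}^{r_{j}} = \lim_{T\to \infty} \int_{T}^{\infty}\left\| \psi(t) \right\|_{L^{q_{j}}}^{r_{j}}\,dt =0 \quad \mbox{for $j=1,2$}, \end{equation*} so that $\displaystyle{\lim_{s\to \infty}\left\| \psi \right\|_{X([s,\infty))}=0}$.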
Then, it follows from Lemma \ref{08/08/18/15:53} and the condition (\ref{10/03/29/12:27}) that the right-hand side of (\ref{09/12/14/14:32}) vanishes as $s,t \to \infty$, so that, by the completeness of $H^{1}(\mathbb{R}^{d})$, there exists $\phi_{+} \in H^{1}(\mathbb{R}^{d})$ such that \begin{equation}\label{09/12/14/14:43} \lim_{t\to \infty}\left\| \psi(t)- e^{\frac{i}{2}t\Delta }\phi_{+} \right\|_{H^{1}} = \lim_{t\to \infty} \left\| e^{-\frac{i}{2}t\Delta }\psi(t)-\phi_{+} \right\|_{H^{1}} = 0. \end{equation} To complete the proof, we shall show the uniqueness of a function $\phi_{+}$ satisfying (\ref{09/12/14/14:43}). Let $\phi_{+}$ and $\phi_{+}'$ be functions in $H^{1}(\mathbb{R}^{d})$ satisfying (\ref{09/12/14/14:43}). Then, we easily verify that \begin{equation}\label{10/03/29/22:44} \left\| \phi_{+} -\phi_{+}' \right\|_{H^{1}} \le \left\| e^{-\frac{i}{2}t\Delta }\psi(t)- \phi_{+} \right\|_{H^{1}} + \left\| e^{-\frac{i}{2}t\Delta }\psi(t)- \phi_{+}' \right\|_{H^{1}} \quad \mbox{for all $t>0$}. \end{equation} Hence, letting $t\to \infty$, we have by (\ref{09/12/14/14:43}) that $\phi_{+}=\phi_{+}'$ in $H^{1}(\mathbb{R}^{d})$. \end{proof} The following proposition gives a sufficient condition for the finiteness of the $X(I)$- and $S(I)$-norms. \begin{proposition}[Small data theory] \label{08/08/22/20:59} Assume that $d\ge 1$ and $2+\frac{4}{d}<p+1<2^{*}$. Let $t_{0} \in \mathbb{R}$ and $I$ be an interval whose closure contains $t_{0}$. Then, there exists a positive constant $\delta$, depending only on $d$, $p$ and $q_{1}$, with the following property: for any $\psi_{0} \in H^{1}(\mathbb{R}^{d})$ satisfying that \begin{equation} \left\| e^{\frac{i}{2}(t-t_{0})\Delta}\psi_{0}\right\|_{X(I)}\le \delta, \end{equation} there exists a unique solution $\psi \in C(I;H^{1}(\mathbb{R}^{d}))$ to the equation (\ref{08/05/13/8:50}) with $\psi(t_{0})=\psi_{0}$ such that \begin{equation}\label{10/03/30/9:44} \left\| \psi \right\|_{X(I)}\le 2 \left\| e^{\frac{i}{2}(t-t_{0})\Delta} \psi_{0} \right\|_{X(I)}, \quad \left\| (1-\Delta)^{\frac{1}{2}} \psi \right\|_{S(I)}\lesssim \left\| \psi_{0}\right\|_{H^{1}}, \end{equation} where the implicit constant depends only on $d$, $p$ and $q_{1}$. \end{proposition} \begin{remark}\label{10/03/29/12:24} Proposition \ref{08/08/22/20:59}, with the help of (\ref{08/10/25/23:16}) and Proposition \ref{08/08/18/16:51}, yields small data scattering, i.e., there exists $\varepsilon>0$ such that for any $\psi_{0} \in H^{1}(\mathbb{R}^{d})$ with $\left\| \psi_{0} \right\|_{H^{1}}<\varepsilon$, the corresponding solution has an asymptotic state in $H^{1}(\mathbb{R}^{d})$. Indeed, it follows from (\ref{08/10/25/23:16}) that there exists $\varepsilon>0$ such that if $\| \psi_{0} \|_{H^{1}}<\varepsilon$, then $\left\| e^{\frac{i}{2}(t-t_{0})\Delta}\psi_{0}\right\|_{X(\mathbb{R})}<\delta$ for the constant $\delta$ found in Proposition \ref{08/08/22/20:59}. This fact, together with Proposition \ref{08/08/18/16:51}, yields the desired result. \end{remark} \begin{proof}[Proof of Proposition \ref{08/08/22/20:59}] We prove this proposition by the standard contraction mapping principle. \par It follows from the Strichartz estimate that there exists a constant $C_{0}>0$, depending only on $d$, $p$ and $q_{1}$, such that \begin{equation}\label{09/10/08/17:18} \left\| e^{\frac{i}{2}t\Delta} f \right\|_{S(I)} \le C_{0}\left\| f \right\|_{L^{2}} \quad \mbox{for all $f \in L^{2}(\mathbb{R}^{d})$}.
\end{equation} Using this constant, we define a set $Y(I)$ and a metric $\rho$ there by \begin{equation}\label{10/03/30/10:09} Y(I):=\left\{ u \in C(I;H^{1}(\mathbb{R}^{d})) \left| \begin{array}{c} \left\| u \right\|_{X(I)} \le 2\left\| e^{\frac{i}{2}(t-t_{0})\Delta}\psi_{0} \right\|_{X(I)}, \\[6pt] \left\| (1-\Delta)^{\frac{1}{2}} u \right\|_{S(I)}\le 2C_{0}\left\| \psi_{0}\right\|_{H^{1}} \end{array} \right. \right\}, \end{equation} and \begin{equation} \rho(u,v)=\left\| u-v \right\|_{X(I)\cap S(I)}, \quad u,v \in Y(I). \end{equation} We can verify that $(Y(I),\rho)$ is a complete metric space. Moreover, we define a map $\mathcal{T}$ on this space by \begin{equation} \mathcal{T}(u):=e^{\frac{i}{2}(t-t_{0})\Delta}\psi_{0}+\frac{i}{2}\int_{t_{0}}^{t}e^{\frac{i}{2}(t-t')\Delta}|u(t')|^{p-1}u(t')\,dt', \quad u \in Y(I). \end{equation} Now, let $\delta>0$ be a constant to be specified later, and suppose that \begin{equation}\label{09/10/08/17:17} \left\|e^{\frac{i}{2}(t-t_{0})\Delta}\psi_{0} \right\|_{X(I)} < \delta, \end{equation} so that \begin{equation}\label{09/12/15/11:23} \left\|u \right\|_{X(I)} \le 2\delta \quad \mbox{for all $u \in Y(I)$}. \end{equation} We shall show that $\mathcal{T}$ maps $Y(I)$ into itself for sufficiently small $\delta>0$. Take any $u \in Y(I)$. Then, we have $\mathcal{T}(u) \in C(I;H^{1}(\mathbb{R}^{d}))$ (see \cite{Kato2}). Moreover, Lemma \ref{08/07/31/12:02} and (\ref{09/12/15/11:23}) yield that \begin{equation}\label{08/09/01/11:29} \begin{split} \left\|\mathcal{T}(u) \right\|_{X(I)} &\le \left\|e^{\frac{i}{2}(t-t_{0})\Delta} \psi_{0} \right\|_{X(I)} + C_{1} \left\|u \right\|_{X(I)}^{p} \\[6pt] &\le \left\|e^{\frac{i}{2}(t-t_{0})\Delta} \psi_{0} \right\|_{X(I)} +2^{p}C_{1}\delta^{p-1} \left\|e^{\frac{i}{2}(t-t_{0})\Delta} \psi_{0} \right\|_{X(I)} \end{split} \end{equation} for some constant $C_{1}>0$ depending only on $d$, $p$ and $q_{1}$. Hence, if we take $\delta$ so small that \begin{equation}\label{10/03/30/10:49} 2^{p}C_{1}\delta^{p-1}\le 1, \end{equation} then \begin{equation}\label{10/03/30/11:05} \left\|\mathcal{T}(u) \right\|_{X(I)}\le 2\left\|e^{\frac{i}{2}(t-t_{0})\Delta} \psi_{0} \right\|_{X(I)}. \end{equation} On the other hand, it follows from the Strichartz estimate and Lemma \ref{08/08/18/15:41} that \begin{equation}\label{10/03/30/10:54} \begin{split} \left\|(1-\Delta)^{\frac{1}{2}}\mathcal{T}(u) \right\|_{S(I)} &\le C_{0}\left\| \psi_{0} \right\|_{H^{1}} + C_{2} \left\| (1-\Delta)^{\frac{1}{2}} u \right\|_{S(I)} \left\|u \right\|_{X(I)}^{p-1} \\[6pt] &\le C_{0}\left\| \psi_{0} \right\|_{H^{1}} + 2^{p}C_{2} C_{0} \delta^{p-1} \left\| \psi_{0} \right\|_{H^{1}} \end{split} \end{equation} for some constant $C_{2}>0$ depending only on $d$, $p$ and $q_{1}$. If $\delta$ satisfies that \begin{equation}\label{10/03/30/10:58} 2^{p}C_{2} \delta^{p-1} \le 1, \end{equation} then we have from (\ref{10/03/30/10:54}) that \begin{equation}\label{10/03/30/10:59} \left\| (1-\Delta)^{\frac{1}{2}} \mathcal{T}(u) \right\|_{S(I)} \le 2C_{0}\left\|\psi_{0} \right\|_{H^{1}}. \end{equation} Thus, if we take $\delta$ satisfying (\ref{10/03/30/10:49}) and (\ref{10/03/30/10:58}), then $\mathcal{T}$ becomes a self-map on $Y(I)$. \par Next, we shall show that $\mathcal{T}$ is a contraction map on $(Y(I),\rho)$ for sufficiently small $\delta$. 
Lemma \ref{08/07/31/12:02}, together with (\ref{09/09/27/21:10}) and (\ref{09/12/15/11:23}), gives us that \begin{equation}\label{10/03/30/11:28} \begin{split} \left\|\mathcal{T}(u) -\mathcal{T}(v)\right\|_{X(I)} & \le \left\| \int_{t_{0}}^{t} e^{\frac{i}{2}(t-t')\Delta} \left( |u|^{p-1}u -|v|^{p-1}v \right)(t')dt' \right\|_{X(I)} \\[6pt] &\le C_{3} \left( \left\|u \right\|_{X(I)}^{p-1}+ \left\| v \right\|_{X(I)}^{p-1}\right)\left\| u-v \right\|_{X(I)} \\[6pt] &\le 2^{p}C_{3} \delta^{p-1}\left\| u-v \right\|_{X(I)} \quad \mbox{for all $u,v \in Y(I)$}, \end{split} \end{equation} where $C_{3}$ is some positive constant depending only on $d$, $p$ and $q_{1}$. Similarly, we have by Lemma \ref{08/08/18/15:41} that \begin{equation}\label{10/03/30/11:39} \left\|\mathcal{T}(u) -\mathcal{T}(v)\right\|_{S(I)} \le 2^{p}C_{4} \delta ^{p-1} \left\| u-v \right\|_{S(I)} \end{equation} for some constant $C_{4}>0$ depending only on $d$, $p$ and $q_{1}$. Hence, we find that $\mathcal{T}$ becomes a contraction map in $(Y(I),\rho)$ for sufficiently small $\delta$. Then, the contraction mapping principle shows that there exists a solution $\psi \in Y(I)$ to the equation (\ref{08/05/13/8:50}) with $\psi(t_{0})=\psi_{0}$, which, together with the uniqueness of solutions in $C(I;H^{1}(\mathbb{R}^{d}))$ (see \cite{Kato1995}), completes the proof. \end{proof} At the end of this subsection, we give the proof of Proposition \ref{09/05/18/10:43}. \begin{proof}[Proof of Proposition \ref{09/05/18/10:43}] Let $\psi$ be a global solution to (\ref{08/05/13/8:50}) with an initial datum $\psi_{0} \in H^{1}(\mathbb{R}^{d})$ at $t=0$. \\ {\it We shall prove that (i) implies (ii)}: Suppose that (i) holds. We first show the uniform boundedness: \begin{equation}\label{10/04/29/15:46} \sup_{t\in [0,\infty)}\left\| \psi(t)\right\|_{H^{1}}<\infty. \end{equation} The condition (i) yields that \begin{equation}\label{10/04/29/15:37} \sup_{t\ge T} \left\| \psi(t)\right\|_{L^{p+1}}\le 1 \quad \mbox{for some $T>0$}. \end{equation} Combining this with the energy conservation law (\ref{08/05/13/9:03}), we obtain that \begin{equation}\label{10/04/29/15:32} \left\| \nabla \psi(t) \right\|_{L^{2}}^{2} = \mathcal{H}(\psi(t))+\frac{2}{p+1}\left\| \psi(t)\right\|_{L^{p+1}}^{p+1} \le \mathcal{H}(\psi_{0})+1 \quad \mbox{for all $t\ge T$}. \end{equation} Hence, this estimate and the mass conservation law (\ref{08/05/13/8:59}) give (\ref{10/04/29/15:46}). \par Now, we shall prove (ii). We employ the H\"older inequality and the Sobolev embedding to obtain that \begin{equation}\label{10/05/31/8:56} \left\| \psi(t) \right\|_{L^{q}} \lesssim \left\| \psi(t)\right\|_{L^{p+1}}^{1-\theta} \left\| \psi(t) \right\|_{H^{1}}^{\theta} \quad \mbox{for all $t \in [0,\infty)$ and $q \in (p+1,2^{*})$}, \end{equation} where $\theta$ is some constant in $(0,1)$ depending on $q$, and the implicit constant is independent of $t$. Hence, the condition (i), together with (\ref{10/04/29/15:46}), shows that \begin{equation}\label{10/04/29/15:48} \lim_{t\to \infty}\left\| \psi(t) \right\|_{L^{q}}=0 \quad \mbox{for all $q \in (p+1,2^{*})$}. \end{equation} On the other hand, we have by the H\"older inequality and the mass conservation law (\ref{08/05/13/8:59}) that \begin{equation}\label{10/04/29/15:57} \lim_{t\to \infty} \left\| \psi(t)\right\|_{L^{q}} \le \left\| \psi_{0} \right\|_{L^{2}}^{1-\frac{(q-2)(p+1)}{q(p-1)}} \lim_{t\to \infty} \left\| \psi(t) \right\|_{L^{p+1}}^{\frac{(q-2)(p+1)}{q(p-1)}} =0 \quad \mbox{for all $q \in (2,p+1)$}.
\end{equation} \\ \noindent {\it We shall prove that (ii) implies (iii)}: We introduce a number $r_{2,0}$ such that $(q_{2}, r_{2,0})$ is admissible, i.e., \begin{equation}\label{10/02/24/16:35} \frac{1}{r_{2,0}}=\frac{d}{2}\left( \frac{1}{2}-\frac{1}{q_{2}}\right), \end{equation} where $q_{2}$ is the number defined in (\ref{09/12/05/16:15}). Then, we have that \begin{equation}\label{10/02/24/16:37} 2< r_{2,0} < r_{2}<\infty \end{equation} for $r_{2}$ defined in (\ref{09/12/05/16:15}). Moreover, we define a number $q_{e}$ by $q_{e}=2^{*}$ if $d\ge 3$ and $q_{e}= 2q_{1}(>q_{2})$ if $d=1,2$, and a space $\bar{S}(I)$ by \begin{equation}\label{10/02/24/17:02} \bar{S}(I) = \bigcap_{{(q,r):admissible }\atop {2\le q\le q_{e}}}L^{r}(I;L^{q}) \quad \mbox{ for an interval $I$}. \end{equation} Then, it follows from the Strichartz estimate and Lemma \ref{08/08/18/15:41} that \begin{equation}\label{10/02/24/16:42} \begin{split} &\left\|(1-\Delta)^{\frac{1}{2}}\psi \right\|_{\bar{S}([t_{0},t_{1}))} \\[6pt] &\lesssim \left\| \psi(t_{0}) \right\|_{H^{1}} + \left\| (1-\Delta)^{\frac{1}{2}}|\psi|^{p-1}\psi \right\|_{L^{r_{0}'}([t_{0},t_{1});L^{q_{1}'})} \\[6pt] &\lesssim \left\|\psi \right\|_{L^{\infty}([0,\infty);H^{1})} + \left\|(1-\Delta)^{\frac{1}{2}}\psi \right\|_{L^{r_{0}}([t_{0},t_{1});L^{q_{1}})} \left\|\psi \right\|_{L^{r_{2}}([t_{0},t_{1});L^{q_{2}})}^{p-1} \\[6pt] &\le \left\|\psi \right\|_{L^{\infty}([0,\infty);H^{1})} + \sup_{t\ge t_{0}}\left\| \psi(t)\right\|_{L^{q_{2}}}^{\frac{p-1}{r_{2}}(r_{2}-r_{2,0})} \left\|(1-\Delta)^{\frac{1}{2}}\psi \right\|_{L^{r_{0}}([t_{0},t_{1});L^{q_{1}})} \left\|\psi \right\|_{L^{r_{2,0}}([t_{0},t_{1});L^{q_{2}})}^{\frac{p-1}{r_{2}}r_{2,0}} \\[6pt] &\le \left\|\psi \right\|_{L^{\infty}([0,\infty);H^{1})} + \sup_{t\ge t_{0}}\left\| \psi(t)\right\|_{L^{q_{2}}}^{\frac{p-1}{r_{2}}(r_{2}-r_{2,0})} \left\|(1-\Delta)^{\frac{1}{2}} \psi \right\|_{\bar{S}([t_{0},t_{1}))}^{1+\frac{p-1}{r_{2}}r_{2,0}}, \end{split} \end{equation} where the implicit constants are independent of $t_{0}$ and $t_{1}$. Note here that since the condition (ii) includes (i), we have that $\|\psi\|_{L^{\infty}([0,\infty);H^{1})}<\infty$ (see (\ref{10/04/29/15:46}) above). Hence, the estimate (\ref{10/02/24/16:42}), together with the condition (ii), shows that \begin{equation}\label{10/02/24/17:15} \left\| (1-\Delta)^{\frac{1}{2}}\psi \right\|_{\bar{S}([t_{0},+\infty))}< \infty\quad \mbox{for all sufficiently large $t_{0}>0$}, \end{equation} so that (iii) holds. \\ \\ {\it We shall prove that (iii) implies (iv)}: The estimate (\ref{08/10/25/23:16}) immediately gives us the desired result. \\ \\ {\it We shall prove that (iv) implies (v)}: This follows from Proposition \ref{08/08/18/16:51}. \\ \\ {\it We shall prove that (v) implies (i)}: We define $r$ by \begin{equation}\label{10/03/30/11:57} \frac{1}{r}=\frac{d}{2} \left( \frac{1}{2}-\frac{1}{p+1}\right), \end{equation} so that $(p+1, r)$ is an admissible pair. Then, the Strichartz estimate yields that \begin{equation}\label{10/03/30/11:59} \left\|e^{\frac{i}{2}t\Delta}\phi_{+} \right\|_{L^{r}(\mathbb{R};L^{p+1})} \lesssim \left\| \phi_{+} \right\|_{L^{2}}, \end{equation} where the implicit constant depends only on $d$ and $p$. Hence, we have that \begin{equation}\label{08/11/16/15:23} \liminf_{t\to \infty} \left\| e^{\frac{i}{2}t\Delta}\phi_{+} \right\|_{L^{p+1}} =0.
\end{equation} Moreover, it follows from the existence of an asymptotic state $\phi_{+}$ and (\ref{08/11/16/15:23}) that \begin{equation}\label{10/03/30/12:13} \liminf_{t\to \infty}\left\| \psi (t) \right\|_{L^{p+1}} \le \lim_{t\to \infty} \left\| \psi(t) -e^{\frac{i}{2}t\Delta}\phi_{+} \right\|_{L^{p+1}} + \liminf_{t\to \infty}\left\| e^{\frac{i}{2}t\Delta}\phi_{+}\right\|_{L^{p+1}} =0. \end{equation} Using the existence of an asymptotic state again, we also obtain that \begin{equation}\label{09/04/30/11:54} \lim_{t\to \infty}\left\| \nabla \psi(t) \right\|_{L^{2}} = \lim_{t\to \infty}\left\| \nabla e^{-\frac{i}{2}t\Delta} \psi(t) \right\|_{L^{2}} =\left\| \nabla \phi_{+}\right\|_{L^{2}}. \end{equation} This formula (\ref{09/04/30/11:54}) and the energy conservation law (\ref{08/05/13/9:03}) yield that \begin{equation}\label{10/03/30/12:09} \lim_{t\to \infty}\left\| \psi(t) \right\|_{L^{p+1}}^{p+1} = -\frac{p+1}{2} \mathcal{H}(\psi(0)) +\frac{p+1}{2} \left\|\nabla \phi_{+} \right\|_{L^{2}}^{2}. \end{equation} In particular, $\displaystyle{\lim_{t\to \infty}\left\| \psi(t) \right\|_{L^{p+1}}^{p+1}}$ exists. Hence, the desired result (i) follows from (\ref{10/03/30/12:13}). \end{proof} \subsection{Long time perturbation theory} \label{09/05/30/21:16} We will employ the so-called ``Born type approximation'' to prove that the solutions starting from $PW_{+}$ have asymptotic states (see Section \ref{08/10/03/15:11}). The following proposition plays a crucial role there. \begin{proposition}[Long time perturbation theory] \label{08/08/05/14:30} Assume that $d\ge 1$ and $2+\frac{4}{d}<p+1<2^{*}$. Then, for any $A>1$, there exists $\varepsilon>0$, depending only on $A$, $d$, $p$ and $q_{1}$, such that the following holds: Let $I$ be an interval, and $u$ be a function in $C(I; H^{1}(\mathbb{R}^{d}))$ such that \begin{align} \label{10/04/02/18:14} &\left\| u \right\|_{X(I)}\le A, \\[6pt] \label{10/04/02/18:21} &\left\| 2i\partial_{t}u+\Delta u+|u|^{p-1}u \right\|_{L^{\widetilde{r}_{1}'}(I;L^{q_{1}'})} \le \varepsilon. \end{align} If $\psi\in C(\mathbb{R};H^{1})$ is a global solution to the equation (\ref{08/05/13/8:50}) and satisfies that \begin{equation} \label{09/09/15/18:03} \left\| e^{\frac{i}{2}(t-t_{1})\Delta} (\psi(t_{1})-u(t_{1}))\right\|_{X(I)} \le\varepsilon \quad \mbox{for some $t_{1} \in I$}, \end{equation} then we have \begin{equation}\label{10/04/06/14:41} \left\| \psi \right\|_{X(I)} \lesssim 1, \end{equation} where the implicit constant depends only on $d$, $p$, $q_{1}$ and $A$. \end{proposition} \begin{remark} We can see from the proof below that Proposition \ref{08/08/05/14:30} remains valid if we replace $\psi$ with a function $\widetilde{\psi} \in C(\mathbb{R};H^{1}(\mathbb{R}^{d}))$ satisfying that \begin{equation}\label{10/04/02/18:26} \left\| 2i\partial_{t}\widetilde{\psi}+\Delta \widetilde{\psi} +\big|\widetilde{\psi}\big|^{p-1}\widetilde{\psi} \right\|_{L^{\widetilde{r}_{1}'}(I;L^{q_{1}'})}\le \varepsilon. \end{equation} \end{remark} \begin{proof}[Proof of Proposition \ref{08/08/05/14:30}] Let $u$ be a function in $C(I;H^{1}(\mathbb{R}^{d}))$ satisfying (\ref{10/04/02/18:14}) and (\ref{10/04/02/18:21}) for $\varepsilon>0$ to be chosen later. Moreover, let $\psi$ be a global solution to the equation (\ref{08/05/13/8:50}) satisfying (\ref{09/09/15/18:03}). \par For simplicity, we suppose that $I=[t_{1},\infty)$ in what follows. The other cases are treated in a way similar to this case.
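\par Before starting, let us sketch the elementary observation that underlies the decomposition of $I$ used below. Since the exponents $r_{1}$ and $r_{2}$ are finite, the auxiliary function \begin{equation*} G(t):= \left\| u \right\|_{L^{r_{1}}([t_{1},t];L^{q_{1}})}^{r_{1}} +\left\| u \right\|_{L^{r_{2}}([t_{1},t];L^{q_{2}})}^{r_{2}} \end{equation*} (introduced only for this explanation) is continuous and nondecreasing on $I$, vanishes at $t=t_{1}$ and, by (\ref{10/04/02/18:14}), is bounded by $A^{r_{1}}+A^{r_{2}}$. Hence, for any $\delta>0$, one can choose finitely many consecutive subintervals covering $I$ on each of which the increment of $G$ is at most $\min\{\delta^{r_{1}},\delta^{r_{2}}\}$, so that both space-time norms appearing in the definition of $X$ are at most $\delta$ on each subinterval, and the number of these subintervals is controlled in terms of $\delta$, $A$, $r_{1}$ and $r_{2}$ only. This is the decomposition stated in (\ref{10/04/02/13:33}) and (\ref{10/04/03/13:13}) below.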
\par We have from the condition (\ref{10/04/02/18:14}) that: for any $\delta >0$, there exist disjoint intervals $I_{1},\ldots, I_{N}$, with a form $I_{j}=[t_{j},t_{j+1})$ ($t_{N+1}=\infty$), such that \begin{equation}\label{10/04/02/13:33} I=\bigcup_{j=1}^{N}I_{j}, \end{equation} and \begin{equation}\label{10/04/03/13:13} \left\| u \right\|_{X(I_{j})}\le \delta \quad \mbox{for all $j=1,\ldots, N$}, \end{equation} where $N$ is some number depending only on $\delta$, $A$, $d$, $p$ and $q_{1}$. \par We put \begin{equation}\label{10/04/06/18:15} w:=\psi-u, \qquad e:=2i\partial_{t}u+\Delta u+|u|^{p-1}u, \end{equation} Then, $w$ satisfies that \begin{align} \label{08/08/05/15:12} &2i\partial_{t}w + \Delta w + |w+u|^{p-1}(w+u)-|u|^{p-1}u+e=0, \\[6pt] \label{08/08/06/2:37} &\left\|e^{\frac{i}{2}(t-t_{1})\Delta} w(t_{1}) \right\|_{X(I)} \le \varepsilon. \end{align} We consider the integral equations associated with (\ref{08/08/05/15:12}): \begin{equation}\label{08/08/05/15:52} w(t)=e^{\frac{i}{2}(t-t_{j})\Delta}w(t_{j}) +\frac{i}{2}\int_{t_{j}}^{t}e^{\frac{i}{2}(t-t')\Delta}W(t')\,dt' , \quad j=1,\ldots, N, \end{equation} where \begin{equation}\label{10/04/03/14:00} W:= |w+u|^{p-1}(w+u)-|u|^{p-1}u-e. \end{equation} It follows from the inhomogeneous Strichartz estimate (\ref{10/03/29/11:06}), the elementary inequality (\ref{09/09/27/21:10}) and the H\"older inequality that \begin{equation}\label{08/08/16:30} \begin{split} &\left\|\int_{t_{j}}^{t}e^{\frac{i}{2}(t-t')\Delta}W(t')\,dt' \right\|_{X([t_{j}, t_{j}'))} \\[6pt] &\le \mathscr{C} \left\{ \left\| w \right\|_{L^{r_{1}}([t_{j}, t_{j}');L^{q_{1}})} \left\|u \right\|_{L^{r_{2}}([t_{j}, t_{j}');L^{q_{2}})}^{p-1} + \left\| w \right\|_{L^{r_{1}}([t_{j}, t_{j}');L_{x}^{q_{1}})} \left\|w \right\|_{L^{r_{2}}([t_{j}, t_{j}');L^{q_{2}})}^{p-1} \right. \\[6pt] &\qquad \qquad \qquad + \left. \left\| e\right\|_{L^{\widetilde{r}_{1}'}([t_{j}, t_{j}');L^{q_{1}'})} \right\} \qquad \mbox{for all $j=1,\ldots, N$ and $t_{j}' \in I_{j}$} , \end{split} \end{equation} where $\mathscr{C}$ is some constant depending only on $d$, $p$ and $q_{1}$. Hence, combining this estimate with (\ref{10/04/02/18:21}) and (\ref{10/04/03/13:13}), we obtain that \begin{equation} \label{08/08/05/17:32} \begin{split} \left\| w \right\|_{X([t_{j}, t_{j}'))} \le \left\|e^{\frac{i}{2}(t-t_{j})\Delta} w(t_{j}) \right\|_{X(I_{j})} + \mathscr{C} \left\{ \delta^{p-1} \left\| w \right\|_{X([t_{j}, t_{j}'))} + \left\|w \right\|_{X([t_{j}, t_{j}'))}^{p} +\varepsilon \right\} & \\[6pt] \mbox{for all $j=1,\ldots, N$ and $t_{j}' \in I_{j}$}. & \end{split} \end{equation} Now, we fix a constant $\delta$ such that \begin{equation} \label{08/08/05/17:39} \delta < \left( \frac{1}{4(1+2\mathscr{C})}\right)^{\frac{1}{p-1}}, \end{equation} so that the number $N$ is also fixed. Then, we shall show that \begin{align} \label{08/08/06/16:52} \left\| e^{\frac{i}{2}(t-t_{j})\Delta}w(t_{j}) \right\|_{X(I)} &\le (1 + 2^{j}\mathscr{C}) \varepsilon \qquad \mbox{for all $j=1,\ldots, N$}, \\[6pt] \label{09/02/13/20:16} \left\| w \right\|_{X(I_{j})} &\le \left( 1 +2^{j+1} \mathscr{C} \right)\varepsilon \qquad \mbox{for all $j=1,\ldots, N$}, \end{align} provided that \begin{equation}\label{08/08/06/16:48} \varepsilon < \left( \frac{1}{4(1+2^{N+1} \mathscr{C})^{p}} \right)^{\frac{1}{p-1}}. \end{equation} We first consider the case $j=1$. The estimate (\ref{08/08/06/16:52}) for $j=1$ obviously follows from (\ref{08/08/06/2:37}). 
We put \begin{equation}\label{10/04/03/19:15} \bar{t}_{1} = \sup \left\{ t_{1}\le t \le t_{2} \bigm| \left\| w \right\|_{X([t_{1},t])}\le (1+4\mathscr{C}) \varepsilon \right\}. \end{equation} Then, the estimate (\ref{08/08/05/17:32}), together with (\ref{08/08/06/2:37}), shows that $\bar{t}_{1}>t_{1}$. Supposing the contrary that $\bar{t}_{1}<t_{2}$, we have from the continuity of $w$ that \begin{equation}\label{10/04/03/16:39} \left\| w \right\|_{X([t_{1},\bar{t}_{1}])} =(1+4\mathscr{C})\varepsilon. \end{equation} However, (\ref{08/08/05/17:32}), together with (\ref{08/08/06/16:52}) with $j=1$ and (\ref{10/04/03/16:39}), leads us to that \begin{equation}\label{10/04/03/16:41} \begin{split} \left\| w \right\|_{X([t_{1},\bar{t}_{1}])} &\le \left(1+2\mathscr{C}\right) \varepsilon + \mathscr{C} \left\{ \delta^{p-1}(1+4\mathscr{C})\varepsilon + (1+4\mathscr{C})^{p}\varepsilon^{p} + \varepsilon \right\} \\[6pt] & <\varepsilon+4\mathscr{C}\varepsilon. \end{split} \end{equation} This contradicts (\ref{10/04/03/16:39}). Thus, it must hold that $\bar{t}_{1}=t_{2}$ and therefore (\ref{09/02/13/20:16}) holds for $j=1$. \par We shall prove the general case $j=n$ ($2\le n\le N$), provided that both (\ref{08/08/06/16:52}) and (\ref{09/02/13/20:16}) hold for all $j=1, \ldots, n-1$. \par Multiplying the integral equation (\ref{08/08/05/15:52}) with $j=n$ and $t=t_{n}$ by $e^{\frac{i}{2}(t-t_{n})\Delta}$, we obtain that \begin{equation}\label{08/08/05/18:09} e^{\frac{i}{2}(t-t_{n})\Delta}w(t_{n}) = e^{\frac{i}{2}(t-t_{n-1})\Delta}w(t_{n-1}) +\frac{i}{2}\int_{t_{n-1}}^{t_{n}}e^{\frac{i}{2}(t-t')\Delta}\chi_{I_{n-1}}(t') W (t')\,dt', \end{equation} where $\chi_{I_{n-1}}$ is the characteristic function on $I_{n-1}$. Then, the same estimate as (\ref{08/08/05/17:32}) yields that \begin{equation}\label{08/08/05/18:14} \begin{split} \left\| e^{\frac{i}{2}(t-t_{n})\Delta}w(t_{n}) \right\|_{X(I)} &\le \left\|e^{\frac{i}{2}(t-t_{n-1})\Delta}w(t_{n-1}) \right\|_{X(I)} \\[6pt] &\qquad + \mathscr{C} \left\{ \delta^{p-1} \left\| w \right\|_{X(I_{n-1})} +\left\| w \right\|_{X(I_{n-1})}^{p} +\varepsilon \right\} \end{split} \end{equation} for the same constant $\mathscr{C}$ found in (\ref{08/08/05/17:32}). This estimate, together with the inductive hypothesis, gives us that \begin{equation}\label{10/04/03/20:15} \begin{split} \left\| e^{\frac{i}{2}(t-t_{n})\Delta}w(t_{n}) \right\|_{X(I)} &< (1 +2^{n-1}\mathscr{C}) \varepsilon + \mathscr{C} \left\{ 2^{n-3}\varepsilon + \frac{1}{4}\varepsilon + \varepsilon \right\} \\[6pt] &\le (1 + 2^{n}\mathscr{C})\varepsilon, \end{split} \end{equation} which shows that (\ref{08/08/06/16:52}) holds for $j =n$. \par Next, we consider (\ref{09/02/13/20:16}) for $j=n$. As well as the case $j=1$, we put \begin{equation}\label{10/04/04/16:42} \bar{t}_{n} = \sup \left\{ t_{n}\le t \le t_{n+1} \bigm| \left\| w \right\|_{X([t_{n},t])}\le (1+2^{n+1}\mathscr{C}) \varepsilon \right\}. \end{equation} The estimate (\ref{08/08/05/17:32}), together with (\ref{10/04/03/20:15}), shows that $t_{n}<\bar{t}_{n}$. We suppose the contrary that $\bar{t}_{n}<t_{n+1}$, so that \begin{equation}\label{10/04/03/20:34} \left\| w \right\|_{X([t_{n},\bar{t}_{n}])} = \left( 1 +2^{n+1} \mathscr{C} \right) \varepsilon . 
\end{equation} Then, combining (\ref{08/08/05/17:32}) with (\ref{10/04/03/20:15}) and (\ref{10/04/03/20:34}), we obtain that \begin{equation}\label{10/04/03/20:37} \left\| w \right\|_{X([t_{n},\bar{t}_{n}])} < \left( 1+ 2^{n} \mathscr{C} \right) \varepsilon +\mathscr{C} \left\{ 2^{n-2} \varepsilon + \frac{1}{4}\varepsilon + \varepsilon \right\} \le \left( 1+ 2^{n+1}\mathscr{C} \right) \varepsilon, \end{equation} which contradicts (\ref{10/04/03/20:34}). Hence, we have proved (\ref{08/08/06/16:52}) and (\ref{09/02/13/20:16}). \par Now, it follows from (\ref{09/02/13/20:16}) and (\ref{08/08/06/16:48}) that \begin{equation}\label{10/04/03/21:43} \left\| w \right\|_{L^{r_{j}}(I;L^{q_{j}})}^{r_{j}} = \sum_{n=1}^{N}\left\| w \right\|_{L^{r_{j}}(I_{n};L^{q_{j}})}^{r_{j}} \le \sum_{n=1}^{N}\left( 1 +2^{N+1} \mathscr{C}\right)^{r_{j}} \varepsilon^{r_{j}} \le \frac{N}{4^{r_{j}/(p-1)}} \le N^{r_{j}} \quad \mbox{for $j=1,2$}. \end{equation} Hence, we have from (\ref{10/04/02/18:14}) and (\ref{10/04/03/21:43}) that \begin{equation}\label{10/04/03/21:50} \left\|\psi \right\|_{X(I)} \le \left\|u \right\|_{X(I)} + \left\|w \right\|_{X(I)} \le A+N, \end{equation} which completes the proof. \end{proof} \subsection{Wave operators} \label{09/05/30/21:25} The following proposition tells us that the wave operators are well-defined on $\Omega$. \begin{proposition}[Existence of wave operators] \label{09/01/12/16:36} Assume that $d \ge 1$ and $2+\frac{4}{d}< p+1 <2^{*}$. \\ {\rm (i)} For any $\phi_{+} \in \Omega$, there exists a unique $\psi_{0} \in PW_{+}$ such that the corresponding solution $\psi$ to the equation (\ref{08/05/13/8:50}) with $\psi(0)=\psi_{0}$ exists globally in time and satisfies the following: \begin{align} \label{09/10/08/18:42} &\psi \in X([0,+\infty)), \\[6pt] \label{09/10/08/18:43} &\lim_{t\to +\infty}\left\|\psi(t)-e^{\frac{i}{2}t\Delta}\phi_{+} \right\|_{H^{1}}=0, \\[6pt] \label{09/10/08/18:44} &\mathcal{H}(\psi(t))=\left\|\nabla \phi_{+} \right\|_{L^{2}}^{2} \quad \mbox{for all $t \in \mathbb{R}$}. \end{align} Furthermore, if $\left\| \phi_{+} \right\|_{H^{1}}$ is sufficiently small, then we have \begin{equation}\label{09/10/08/18:45} \left\| \psi \right\|_{X(\mathbb{R})}\lesssim \left\| \phi_{+} \right\|_{H^{1}},\end{equation} where the implicit constant depends only on $d$, $p$ and $q_{1}$. \\ The map defined by $\phi_{+} \mapsto \psi_{0}$ is continuous from $\Omega$ into $PW_{+}$ in the $H^{1}(\mathbb{R}^{d})$-topology. \\[6pt] {\rm (ii)} For any $\phi_{-} \in \Omega$, there exists a unique $\psi_{0} \in PW_{+}$ such that the corresponding solution $\psi$ to the equation (\ref{08/05/13/8:50}) with $\psi(0)=\psi_{0}$ exists globally in time and satisfies the following: \begin{align} \label{09/10/08/18:46} &\psi \in X((-\infty,0]), \\[6pt] \label{09/10/08/18:47} &\lim_{t\to -\infty}\left\|\psi(t)-e^{\frac{i}{2}t\Delta}\phi_{-} \right\|_{H^{1}}=0, \\[6pt] \label{09/10/08/18:48} &\mathcal{H}(\psi(t))=\left\|\nabla \phi_{-} \right\|_{L^{2}}^{2} \quad \mbox{for all $t \in \mathbb{R}$}. \end{align} Furthermore, if $\left\| \phi_{-} \right\|_{H^{1}}$ is sufficiently small, then we have \begin{equation}\label{09/10/08/18:49} \left\| \psi \right\|_{X(\mathbb{R})}\lesssim \left\| \phi_{-} \right\|_{H^{1}},\end{equation} where the implicit constant depends only on $d$, $p$ and $q_{1}$. \\ The map defined by $\phi_{-} \mapsto \psi_{0}$ is continuous from $\Omega$ into $PW_{+}$ in the $H^{1}(\mathbb{R}^{d})$-topology.
\end{proposition} \begin{proof}[Proof of Proposition \ref{09/01/12/16:36}] Since the proofs of (i) and (ii) are very similar, we prove (i) only. For a given $\phi_{+} \in \Omega$, we look for a solution to the following integral equation in $X([0,\infty))$: \begin{equation}\label{09/01/12/16:43} \psi(t)=e^{\frac{i}{2}t\Delta}\phi_{+}-\frac{i}{2}\int_{t}^{+\infty}e^{\frac{i}{2}(t-t')\Delta}\left\{ |\psi(t')|^{p-1}\psi(t')\right\}dt'. \end{equation} We first note that the estimate (\ref{08/10/25/23:16}) shows that, for any $\delta>0$, there exists $T_{\delta}>0$ such that \begin{equation}\label{09/10/08/18:32} \left\| e^{\frac{i}{2}t\Delta} \phi_{+} \right\|_{X([T_{\delta},+\infty))} < \delta. \end{equation} Moreover, it follows from the Strichartz estimate that there exists $C_{0}>0$, depending only on $d$, $p$ and $q_{1}$, such that \begin{equation}\label{09/10/08/18:33} \left\|(1-\Delta)^{\frac{1}{2}} e^{\frac{i}{2}t\Delta}\phi_{+} \right\|_{S(\mathbb{R})} \le C_{0}\left\| \phi_{+} \right\|_{H^{1}}. \end{equation} Using these constants $T_{\delta}$ and $C_{0}$, we define a set $Y_{\delta}[\phi_{+}]$ by \begin{equation}\label{10/04/06/23:34} Y_{\delta}[\phi_{+}]:= \left\{ u \in C(I_{\delta};H^{1}(\mathbb{R}^{d})) \left| \begin{array}{l} \left\| u \right\|_{X(I_{\delta})}\le 2\left\| e^{\frac{i}{2}t\Delta}\phi_{+} \right\|_{X(I_{\delta})}, \\ \left\|(1-\Delta)^{\frac{1}{2}}u \right\|_{S(I_{\delta})}\le 2C_{0}\left\|\phi_{+} \right\|_{H^{1}} \end{array} \right. \right\}, \end{equation} where $I_{\delta}=[T_{\delta},\infty)$. We can verify that $Y_{\delta}[\phi_{+}]$ becomes a complete metric space with a metric $\rho$ defined by \begin{equation}\label{10/04/06/23:37} \rho(u,v):=\max \left\{ \left\| u-v \right\|_{X(I_{\delta})}, \ \left\| u-v \right\|_{S(I_{\delta})} \right\} \quad \mbox{for all $u,v \in Y_{\delta}[\phi_{+}]$}. \end{equation} Then, an argument similar to the proof of Proposition \ref{08/08/22/20:59} (small data theory) shows that if $\delta>0$ is sufficiently small, then there exists a unique solution $\psi \in Y_{\delta}[\phi_{+}]$ to the integral equation (\ref{09/01/12/16:43}). In particular, we find that such a solution exists globally in time and satisfies (\ref{09/10/08/18:45}), provided that $\|\phi_{+}\|_{H^{1}}$ is sufficiently small. \par We shall show that the function $\psi$ obtained above becomes a solution to the equation (\ref{08/05/13/8:50}). Evaluating (\ref{09/01/12/16:43}) at $t=T$ and applying the free propagator $e^{\frac{i}{2}(t-T)\Delta}$ to both sides, we have \begin{equation}\label{09/10/17/16:33} e^{\frac{i}{2}(t-T)\Delta}\psi(T)=e^{\frac{i}{2}t\Delta}\phi_{+} -\frac{i}{2}\int_{T}^{+\infty}e^{\frac{i}{2}(t-t')\Delta}\left\{ |\psi(t')|^{p-1}\psi(t') \right\}dt' \quad \mbox{for all $t,T \in I_{\delta}$}. \end{equation} Subtracting (\ref{09/10/17/16:33}) from (\ref{09/01/12/16:43}) yields further that \begin{equation}\label{09/10/17/16:35} \psi(t)=e^{\frac{i}{2}(t-T)\Delta}\psi(T)+\frac{i}{2}\int_{T}^{t}e^{\frac{i}{2}(t-t')\Delta}\left\{ |\psi(t')|^{p-1}\psi(t') \right\}dt' \quad \mbox{for all $t, T \in I_{\delta}$}. \end{equation} Hence, we find by this formula (\ref{09/10/17/16:35}) that $\psi$ is a solution to the equation (\ref{08/05/13/8:50}) on $I_{\delta}$. \par Next, we shall extend the solution $\psi$ to the whole interval $\mathbb{R}$. In view of Proposition \ref{09/06/21/19:28}, it suffices to show that $\psi(t) \in PW_{+}$ for some $t \in I_{\delta}$.
Applying the usual inhomogeneous Strichartz estimate to (\ref{09/01/12/16:43}), and then applying Lemma \ref{08/08/18/15:41}, we obtain that \begin{equation}\label{09/01/13/9:54} \lim_{t\to +\infty}\left\| \psi(t)-e^{\frac{i}{2}t\Delta} \phi_{+} \right\|_{H^{1}} \lesssim \lim_{t\to +\infty}\left\|(1-\Delta)^{\frac{1}{2}} \psi \right\|_{S([t,\infty))}\left\| \psi \right\|_{X([t,\infty))}^{p-1} =0. \end{equation} Using this estimate (\ref{09/01/13/9:54}) and Lemma \ref{10/04/08/9:20}, we conclude that \begin{equation}\label{10/04/07/22:54} \begin{split} \lim_{t\to +\infty}\mathcal{K}(\psi(t)) &= \lim_{t\to +\infty} \mathcal{K}(e^{\frac{i}{2}t\Delta}\phi_{+}) \\[6pt] &=\lim_{t\to +\infty} \left\{ \mathcal{K}(\phi_{+}) - \frac{d(p-1)}{2(p+1)}\left\| e^{\frac{i}{2}t\Delta}\phi_{+}\right\|_{L^{p+1}}^{p+1} + \frac{d(p-1)}{2(p+1)}\left\| \phi_{+} \right\|_{L^{p+1}}^{p+1} \right\} \\[6pt] &=\mathcal{K}(\phi_{+}) + \frac{d(p-1)}{2(p+1)}\left\| \phi_{+} \right\|_{L^{p+1}}^{p+1} >0 . \end{split} \end{equation} Hence, it holds that \begin{equation}\label{09/10/17/17:06} 0< \mathcal{K}(\psi(t)) <\mathcal{H}(\psi(t)) \quad \mbox{for all sufficiently large $t >T_{\delta}$}. \end{equation} Moreover, using (\ref{09/01/13/9:54}) and Lemma \ref{10/04/08/9:20} again, and employing the condition $\phi_{+}\in \Omega$, we obtain that \begin{equation}\label{10/04/08/10:59} \begin{split} &\lim_{t\to +\infty}\widetilde{\mathcal{N}}_{2}(\psi(t)) = \lim_{t\to +\infty} \widetilde{\mathcal{N}}_{2}(e^{\frac{i}{2}t\Delta}\phi_{+}) \\[6pt] &= \lim_{t\to +\infty} \left\|\phi_{+} \right\|_{L^{2}}^{p+1-\frac{d}{2}(p-1)} \sqrt{ \left\| \nabla \phi_{+} \right\|_{L^{2}}^{2}-\frac{2}{p+1}\left\| e^{\frac{i}{2}t\Delta}\phi_{+}\right\|_{L^{p+1}}^{p+1} }^{\frac{d}{2}(p-1)-2} \\[6pt] &= \left\|\phi_{+} \right\|_{L^{2}}^{p+1-\frac{d}{2}(p-1)} \left\| \nabla \phi_{+} \right\|_{L^{2}}^{\frac{d}{2}(p-1)-2} =\mathcal{N}_{2}(\phi_{+})<\widetilde{N}_{2}. \end{split} \end{equation} This estimate, together with (\ref{08/06/15/14:38}), shows that \begin{equation}\label{09/10/17/17:07} \psi(t) \in PW_{+} \quad \mbox{for all sufficiently large $t >T_{\delta}$.} \end{equation} Thus, we have proved the global existence of $\psi$. Since $\psi \in X([T,+\infty))$ for sufficiently large $T>0$, we also have $\psi \in X([0,+\infty))$. \par Now, we put $\psi_{0}=\psi(0)$. Then, $\psi_{0}$ is what we want. Indeed, the desired properties (\ref{09/10/08/18:42}) and (\ref{09/10/08/18:43}) have been obtained. Moreover, (\ref{09/10/08/18:43}) and Lemma \ref{10/04/08/9:20} immediately give (\ref{09/10/08/18:44}). \par We shall prove the uniqueness here. Let $\psi_{0}$ and $\widetilde{\psi}_{0}$ be functions in $PW_{+}$ such that the solutions $\psi$ and $\widetilde{\psi}$ to the equation (\ref{08/05/13/8:50}) with $\psi(0)=\psi_{0}$ and $\widetilde{\psi}(0)=\widetilde{\psi}_{0}$ satisfy that \begin{equation}\label{10/04/10/22:19} \lim_{t\to \infty}\left\| \psi(t)-e^{\frac{i}{2}t\Delta}\phi_{+}\right\|_{H^{1}}= \lim_{t\to \infty}\left\|\widetilde{\psi}(t)-e^{\frac{i}{2}t\Delta}\phi_{+}\right\|_{H^{1}} =0. \end{equation} Using (\ref{10/04/10/22:19}), we find that \begin{equation}\label{10/04/10/22:33} \lim_{t\to \infty} \left\| \psi(t)-\widetilde{\psi}(t)\right\|_{H^{1}} = 0. 
\end{equation} Then, supposing the contrary that $\psi_{0}\neq \widetilde{\psi}_{0}$, we have by the standard uniqueness result (see \cite{Kato1995}) that $\psi(t)\neq \widetilde{\psi}(t)$ for all $t \in \mathbb{R}$, which contradicts (\ref{10/04/10/22:33}). Thus, $\psi_{0}= \widetilde{\psi}_{0}$. \par Finally, we prove the continuity of the map defined by $\phi_{+}\in \Omega \mapsto \psi_{0}\in PW_{+}$. We use $W_{+}$ to denote this map. Let $\phi_{+} \in \Omega$, and let $\{ \phi_{+,n}\}_{n\in \mathbb{N}}$ be a sequence in $\Omega$ satisfying that \begin{equation}\label{10/04/08/13:56} \lim_{n\to\infty} \left\| \phi_{+,n}- \phi_{+} \right\|_{H^{1}} =0. \end{equation} The estimate (\ref{08/10/25/23:16}), together with (\ref{10/04/08/13:56}), shows that \begin{equation}\label{10/04/08/22:24} \lim_{n\to \infty}\left\| e^{\frac{i}{2}t\Delta} \phi_{+,n} - e^{\frac{i}{2}t\Delta} \phi_{+} \right\|_{X(\mathbb{R})} =0. \end{equation} Let $\psi$ and $\psi_{n}$ be the solutions to (\ref{08/05/13/8:50}) with $\psi(0)=W_{+}\phi_{+}$ and $\psi_{n}(0)=W_{+}\phi_{+,n}$, respectively. Then, $\psi$ and $\psi_{n}$ exist globally in time (see Proposition \ref{09/06/21/19:28}) and have the following properties as proved above: \begin{align} \label{10/04/09/12:34} &\psi(t), \psi_{n}(t) \in PW_{+} \quad \mbox{for all $t \in \mathbb{R}$}, \\[6pt] \label{10/04/14/17:38} &\psi, \psi_{n} \in X([0,\infty)) , \\[6pt] & \label{10/04/14/16:58} \lim_{t\to +\infty} \left\| \psi(t)-e^{\frac{i}{2}t\Delta}\phi_{+}\right\|_{H^{1}} = \lim_{t\to +\infty} \left\| \psi_{n}(t)-e^{\frac{i}{2}t\Delta}\phi_{+,n}\right\|_{H^{1}} =0, \\[6pt] & \label{10/04/14/18:04} \mathcal{H}(\psi(t))=\left\| \nabla \phi_{+} \right\|_{L^{2}}^{2}, \quad \mathcal{H}(\psi_{n}(t))=\left\| \nabla \phi_{+,n} \right\|_{L^{2}}^{2} \quad \mbox{for all $t \in \mathbb{R}$}. \end{align} Moreover, we find by (\ref{10/04/14/16:58}) that $\psi$ and $\psi_{n}$ satisfy that \begin{align} \label{10/04/14/17:28} &\psi(t) = e^{\frac{i}{2}t\Delta}\phi_{+} -\frac{i}{2}\int_{t}^{+\infty} \!\! e^{\frac{i}{2}(t-t')\Delta}\left\{ |\psi(t')|^{p-1}\psi(t')\right\}dt' , \\[6pt] \label{10/04/09/11:20} &\psi_{n}(t) = e^{\frac{i}{2}t\Delta}\phi_{+,n} -\frac{i}{2}\int_{t}^{+\infty} \!\! e^{\frac{i}{2}(t-t')\Delta}\left\{ |\psi_{n}(t')|^{p-1}\psi_{n}(t')\right\}dt' . \end{align} Note here that it follows from (\ref{10/04/08/13:56}) and (\ref{10/04/08/22:24}) that there exists a number $n_{0} \in \mathbb{N}$ such that \begin{align} \label{09/09/13/22:06} &\left\|e^{\frac{i}{2}t\Delta} \phi_{+,n} \right\|_{X(I)} \le 2\left\| e^{\frac{i}{2}t\Delta}\phi_{+}\right\|_{X(I)} \quad \mbox{for every interval $I$ and $n\ge n_{0}$}, \\[6pt] \label{10/04/08/22:41} &\left\|\phi_{+,n} \right\|_{H^{1}} \le 2 \left\| \phi_{+} \right\|_{H^{1}} \quad \mbox{for all $n\ge n_{0}$}. \end{align} We find by (\ref{09/09/13/22:06}) that, for any $\delta>0$, there exists $T_{\delta}>0$, independent of $n$, such that \begin{equation} \label{09/09/13/22:14} \left\| e^{\frac{i}{2}t\Delta}\phi_{+} \right\|_{X([T_{\delta},\infty))} <\delta, \qquad \left\|e^{\frac{i}{2}t\Delta} \phi_{+,n} \right\|_{X([T_{\delta},\infty))} \le \delta \quad \mbox{for all $n\ge n_{0}$}.
\end{equation} Using (\ref{09/09/13/22:14}) and the formulas (\ref{10/04/14/17:28}) and (\ref{10/04/09/11:20}), we also find that: There exists a constant $\delta_{0}>0$ with the following property: for any $\delta \in (0,\delta_{0}]$, there exists $T_{\delta}>0$ such that \begin{equation}\label{10/04/14/17:53} \left\| \psi \right\|_{X([T_{\delta},\infty))} <2\delta, \qquad \left\|\psi_{n} \right\|_{X([T_{\delta},\infty))} \le 2\delta \quad \mbox{for all $n\ge n_{0}$}. \end{equation} Now, we consider an estimate for the difference $\psi_{n}-\psi$. The formulas (\ref{10/04/14/17:28}) and (\ref{10/04/09/11:20}) shows that \begin{equation}\label{10/05/19/12:05} \psi_{n}(t)-\psi(t) = e^{\frac{i}{2}t\Delta}\left( \phi_{+,n}-\phi_{+} \right) -\frac{i}{2}\int_{t}^{+\infty}e^{\frac{i}{2}(t-t')\Delta} \left\{ |\psi_{n}|^{p-1}\psi_{n}-|\psi|^{p-1}\psi \right\} (t')\,dt'. \end{equation} Applying the Strichartz estimate to (\ref{10/05/19/12:05}), and using (\ref{09/09/27/21:10}) and (\ref{10/04/14/17:53}), we obtain that \begin{equation}\label{10/04/09/11:18} \begin{split} &\left\| \psi_{n}-\psi \right\|_{S([T_{\delta},\infty))} \\[6pt] &\lesssim \left\| \phi_{+,n}-\phi_{+} \right\|_{L^{2}} + \left( \left\| \psi_{n} \right\|_{X([T_{\delta},\infty))}^{p-1}+ \left\| \psi \right\|_{X([T_{\delta},\infty))}^{p-1} \right) \left\| \psi_{n}-\psi \right\|_{S([T_{\delta},\infty))} \\[6pt] & \le \left\| \phi_{+,n}-\phi_{+} \right\|_{L^{2}} + 2^{p}\delta^{p-1} \left\| \psi_{n}-\psi \right\|_{S([T_{\delta},\infty))} \quad \mbox{for all $n\ge n_{0}$}, \end{split} \end{equation} where the implicit constant depends only on $d$, $p$ and $q_{1}$. This estimate, together with (\ref{10/04/08/13:56}), shows that \begin{equation}\label{10/05/20/6:11} \lim_{n\to \infty} \left\| \psi_{n}-\psi \right\|_{S([T_{\delta},\infty))} =0 \quad \mbox{for all sufficiently small $\delta>0$}. \end{equation} Similarly, we can obtain that \begin{equation}\label{10/07/27/14:14} \lim_{n\to \infty} \left\| \psi_{n}-\psi \right\|_{X([T_{\delta},\infty))} =0 \quad \mbox{for all sufficiently small $\delta>0$}. 
\end{equation} Moreover, considering the integral equations of $\partial_{j}\psi$ and $\partial_{j}\psi_{n}$ for $1\le j\le d$, we obtain by the Strichartz estimate that \begin{equation}\label{10/05/20/5:55} \begin{split} \left\| \partial_{j}\psi_{n}-\partial_{j} \psi \right\|_{S([T_{\delta}, \infty))} &\lesssim \left\| \partial_{j}\phi_{+,n}-\partial_{j}\phi_{+} \right\|_{L^{2}} + \left\| |\psi_{n}|^{p-1}(\partial_{j}\psi_{n}-\partial_{j}\psi) \right\|_{L^{r_{0}'}([T_{\delta},\infty);L^{q_{1}'})} \\[6pt] &\quad + \left\| \left( |\psi_{n}|^{p-1}-|\psi|^{p-1} \right) \partial_{j}\psi \right\|_{L^{r_{0}'}([T_{\delta},\infty);L^{q_{1}'})} \\[6pt] &\quad + \left\| |\psi_{n}|^{p-3}\psi_{n}^{2}\left(\partial_{j}\overline{\psi_{n}} -\partial_{j}\overline{\psi}\right) \right\|_{L^{r_{0}'}([T_{\delta},\infty);L^{q_{1}'})} \\[6pt] &\quad + \left\| \left( |\psi_{n}|^{p-3}\psi_{n}^{2} -|\psi|^{p-3}\psi^{2} \right) \partial_{j}\overline{\psi} \right\|_{L^{r_{0}'}([T_{\delta},\infty);L^{q_{1}'})} \\[6pt] &\lesssim \left\| \partial_{j}\phi_{+,n}-\partial_{j}\phi_{+} \right\|_{L^{2}} + \left\| \psi_{n} \right\|_{X([T_{\delta},\infty)}^{p-1} \left\| \partial_{j}\psi_{n}-\partial_{j}\psi \right\| _{S([T_{\delta},\infty))} \\[6pt] &\quad + \left\| |\psi_{n}|^{p-1}-|\psi|^{p-1} \right\|_{L^{\frac{r_{2}}{p-1}}([T_{\delta},\infty);L^{\frac{q_{2}}{p-1}})} \left\| \partial_{j}\psi \right\|_{S([T_{\delta},\infty))} \\[6pt] &\quad + \left\| \psi_{n} \right\|_{X([T_{\delta},\infty))}^{p-1} \left\| \partial_{j}\psi_{n} -\partial_{j}\psi\right\|_{S([T_{\delta},\infty))} \\[6pt] &\quad + \left\| |\psi_{n}|^{p-3}\psi_{n}^{2} -|\psi|^{p-3}\psi^{2} \right\|_{L^{\frac{r_{2}}{p-1}}([T_{\delta},\infty);L^{\frac{q_{2}}{p-1}})} \left\| \partial_{j}\psi \right\|_{S([T_{\delta},\infty))}, \end{split} \end{equation} where the implicit constant depends only on $d$, $p$ and $q_{1}$. Note here that, Lemma \ref{08/08/18/15:53}, together with (\ref{10/04/14/17:38}), gives us that $\|(1-\Delta)^{\frac{1}{2}}\psi\|_{S([0,\infty))}<\infty$. We also note that (\ref{10/07/27/14:14}) implies that \begin{equation}\label{10/05/20/7:01} \begin{split} &\lim_{n\to \infty} \left\| |\psi_{n}|^{p-1}-|\psi|^{p-1} \right\|_{L^{\frac{r_{2}}{p-1}}([T_{\delta},\infty);L^{\frac{q_{2}}{p-1}})} \\[6pt] &= \lim_{n\to \infty} \left\| |\psi_{n}|^{p-3}\psi_{n}^{2} -|\psi|^{p-3}\psi^{2} \right\|_{L^{\frac{r_{2}}{p-1}}([T_{\delta},\infty);L^{\frac{q_{2}}{p-1}})} =0 \quad \mbox{for all sufficiently small $\delta>0$}. \end{split} \end{equation} Hence, we have by (\ref{10/05/20/5:55}) that \begin{equation}\label{10/05/20/6:25} \lim_{n\to \infty}\left\|\partial_{j} \psi_{n}-\partial_{j} \psi \right\|_{S([T_{\delta},\infty))} =0 \quad \mbox{for all sufficiently small $\delta>0$}. \end{equation} Thus, we obtain from (\ref{10/05/20/6:11}) and (\ref{10/05/20/6:25}) that \begin{equation}\label{10/04/09/12:36} \lim_{n\to \infty} \left\| \psi_{n}-\psi \right\|_{L^{\infty}([T_{\delta},\infty);H^{1})} =0 \quad \mbox{for all sufficiently small $\delta>0$}, \end{equation} so that it follows from the continuous dependence of solutions on initial data that \begin{equation}\label{10/04/10/16:04} \lim_{n\to \infty}\left\| W_{+}\phi_{+,n}-W_{+}\phi_{+} \right\|_{H^{1}} = \lim_{n\to \infty} \left\| \psi_{n}(0)-\psi(0) \right\|_{H^{1}} =0, \end{equation} which completes the proof. \end{proof} \section{Analysis on $\boldsymbol{PW_{+}}$} \label{08/10/03/15:11} Our aim here is to prove Theorem \ref{08/05/26/11:53}. 
Obviously, Proposition \ref{09/06/21/19:28} provides (\ref{08/12/16/10:09}) and (\ref{08/09/03/17:03}). Therefore, it remains to prove the asymptotic completeness in $PW_{+}$. \par In order to prove the existence of asymptotic states for a solution $\psi$, it suffices to show that $\|\psi\|_{X(\mathbb{R})}<\infty$ by virtue of Proposition \ref{08/08/18/16:51} and Proposition \ref{09/06/21/19:28}. To this end, we introduce a subset of $PW_{+}$: \begin{equation}\label{10/02/19/23:05} PW_{+}(\delta):= \left\{ f \in PW_{+} \Biggm| \widetilde{\mathcal{N}}_{2}(f):=\left\| f \right\|_{L^{2}}^{p+1-\frac{d}{2}(p-1)}\sqrt{\mathcal{H}(f)}^{\frac{d}{2}(p-1)-2} < \delta \right\}, \quad \delta>0. \end{equation} Moreover, we define a number $\widetilde{N}_{c}$ by \begin{equation}\label{08/09/02/18:06} \begin{split} \widetilde{N}_{c} &:= \sup{ \left\{ \delta >0 \bigm| \|\psi\|_{X(\mathbb{R})}< \infty \quad \mbox{for all $\psi_{0} \in PW_{+}(\delta)$} \right\} } \\[6pt] &= \inf{\left\{ \delta >0 \bigm| \|\psi\|_{X(\mathbb{R})}= \infty \quad \mbox{for some $\psi_{0} \in PW_{+}(\delta)$} \right\}}, \end{split} \end{equation} where $\psi$ denotes the solution to (\ref{08/05/13/8:50}) with $\psi(0)=\psi_{0}$. \par Since we have \begin{equation}\label{09/05/05/8:52} PW_{+}=PW_{+}(\widetilde{N}_{2}), \end{equation} our task is to prove $\widetilde{N}_{c}=\widetilde{N}_{2}$. \par It is worth noting here that $\widetilde{N}_{c}>0$. Indeed, it follows from (\ref{08/10/25/23:16}), the interpolation estimate and Proposition \ref{09/06/21/19:28} that \begin{equation}\label{08/07/22/17:36} \begin{split} \left\|e^{\frac{i}{2}t\Delta}\psi_{0} \right\|_{X(\mathbb{R})}^{p-1} &\lesssim \|(-\Delta)^{\frac{s_{p}}{2}} \psi_{0}\|_{L^{2}}^{p-1} \\[6pt] &\le \|\psi_{0}\|_{L^{2}}^{p+1-\frac{d}{2}(p-1)} \|\nabla \psi_{0}\|_{L^{2}}^{\frac{d}{2}(p-1)-2} \\ & < \|\psi_{0} \|_{L^{2}}^{p+1-\frac{d}{2}(p-1)} \sqrt{\frac{d(p-1)}{d(p-1)-4} \mathcal{H}(\psi_{0})}^{\frac{d}{2}(p-1)-2}\\ &=\sqrt{\frac{d(p-1)}{d(p-1)-4}}^{\frac{d}{2}(p-1)-2}\widetilde{\mathcal{N}}_{2}(\psi_{0}) \\ &< \sqrt{\frac{d(p-1)}{d(p-1)-4}}^{\frac{d}{2}(p-1)-2} \hspace{-2pt} \delta \qquad \mbox{for all $\delta>0$ and $\psi_{0} \in PW_{+}(\delta)$}, \end{split} \end{equation} where the implicit constant depends only on $d$, $p$ and $q_{1}$. This estimate shows that there exists $\delta_{0}>0$, depending only on $d$, $p$ and $q_{1}$, such that \begin{equation}\label{10/04/04/23:01} \left\| e^{\frac{i}{2}t\Delta} \psi_{0} \right\|_{X(\mathbb{R})} < \delta_{Pr.\ref{08/08/22/20:59}} \quad \mbox{for all $\psi_{0} \in PW_{+}(\delta_{0})$}, \end{equation} where $\delta_{Pr.\ref{08/08/22/20:59}}$ is the constant found in Proposition \ref{08/08/22/20:59}. Hence, we have by the small data theory (Proposition \ref{08/08/22/20:59}) that $\widetilde{N}_{c}\ge \delta_{0}>0$. \subsection{One soliton vs. Virial identity}\label{09/05/06/9:13} In this subsection, we present our strategy to prove $\widetilde{N}_{c}=\widetilde{N}_{2}$. We suppose, to the contrary, that $\widetilde{N}_{c}<\widetilde{N}_{2}$. In this undesired situation, we can find a one-soliton-like solution to our equation (\ref{08/05/13/8:50}) in $PW_{+}$ (see Proposition \ref{08/10/16/21:12} below). Then, the soliton-like behavior contradicts the one described by the generalized virial identity (Lemma \ref{08/10/23/18:45}), so that we conclude that $\widetilde{N}_{c}=\widetilde{N}_{2}$. At the end of this subsection, we actually show this, provided that such a soliton-like solution exists.
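\par Before entering the construction, let us record an elementary scaling remark which may help to explain why smallness conditions of this type are natural at the critical regularity $s_{p}$. For $f \in H^{1}(\mathbb{R}^{d})$ and $\lambda>0$, put $f_{\lambda}(x):=\lambda^{\frac{2}{p-1}}f(\lambda x)$, i.e., the scaling (\ref{10/03/29/10:21}) at a fixed time (the function $f_{\lambda}$ is used only in this remark). A direct computation gives \begin{equation*} \left\| f_{\lambda} \right\|_{L^{2}}=\lambda^{-s_{p}}\left\| f \right\|_{L^{2}}, \qquad \left\| \nabla f_{\lambda} \right\|_{L^{2}}=\lambda^{1-s_{p}}\left\| \nabla f \right\|_{L^{2}}, \end{equation*} so that the total power of $\lambda$ arising in the quantity $\left\| f_{\lambda} \right\|_{L^{2}}^{p+1-\frac{d}{2}(p-1)}\left\| \nabla f_{\lambda} \right\|_{L^{2}}^{\frac{d}{2}(p-1)-2}$ is \begin{equation*} -s_{p}\left(p+1-\tfrac{d}{2}(p-1)\right)+(1-s_{p})\left(\tfrac{d}{2}(p-1)-2\right) =\left(\tfrac{d}{2}(p-1)-2\right)-s_{p}(p-1)=0, \end{equation*} since $s_{p}(p-1)=\frac{d}{2}(p-1)-2$ by (\ref{09/05/13/15:00}). In other words, the quantity $\left\| f \right\|_{L^{2}}^{p+1-\frac{d}{2}(p-1)}\left\| \nabla f \right\|_{L^{2}}^{\frac{d}{2}(p-1)-2}$ appearing in (\ref{08/07/22/17:36}) is invariant under the scaling (\ref{10/03/29/10:21}).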
\par The construction of a soliton-like solution is rather long. We divide it into two parts; In Section \ref{09/05/05/10:03}, one finds a candidate for the soliton, and in Section \ref{08/09/25/17:01}, one sees that the candidate actually behaves like a one-soliton-like solution. \par We here briefly explain how to find a candidate for the one-soliton-like solution. If $\widetilde{N}_{c}<\widetilde{N}_{2}$, then we can take a sequence $\{\psi_{n}\}$ of solutions to (\ref{08/05/13/8:50}) with the property that \begin{equation}\label{10/04/05/17:37} \psi_{n}(t) \in PW_{+} \ \mbox{for all $t\in \mathbb{R}$}, \quad \left\| \psi_{n} \right\|_{X(\mathbb{R})}=\infty, \quad \lim_{n\to \infty}\widetilde{\mathcal{N}}_{2}(\psi_{n}(0))=\widetilde{N}_{c} . \end{equation} We consider the integral equation for $\psi_{n}$: \begin{equation}\label{10/04/05/17:35} \psi_{n}(t)=e^{\frac{i}{2}t\Delta}f_{n}+\frac{i}{2}\int_{0}^{t} e^{\frac{i}{2}(t-t')\Delta}\left\{ |\psi_{n}(t')|^{p-1}\psi_{n}(t')\right\}dt', \end{equation} where we put $f_{n}=\psi_{n}(0)$. We first observe that the linear part of this integral equation possibly behaves like as follows\footnote{$e^{-\eta_{n}^{l}\cdot \nabla}$ denotes the space-translation by $-\eta_{n}$. We may expect the number of summands $f^{l}$ is finite.}: \begin{equation}\label{09/07/15/20:35} e^{\frac{i}{2}t\Delta}f_{n}(x) \sim \sum_{l\ge 1}e^{\frac{i}{2}(t-\tau_{n}^{l})\Delta}e^{-\eta_{n}^{l}\cdot \nabla}f^{l}(x) \end{equation} for some nontrivial functions $f^{l}\in PW_{+}$, $\tau_{n}^{l} \in \mathbb{R}$ and $\eta_{n}^{l} \in \mathbb{R}^{d}$. Of course, this is not a good approximation to $\psi_{n}$. So, putting $\displaystyle{\tau_{\infty}^{l}=\lim_{n\to \infty}\tau_{n}^{l}}$ (possibly $\tau_{\infty}^{l}=\pm \infty$), we solve our equation (\ref{08/05/13/8:50}) with the initial datum $e^{-\frac{i}{2}\tau_{\infty}^{l}\Delta}f^{l}$ at $t=-\tau_{\infty}^{l}$: \begin{equation}\label{09/09/14/15:32} \psi^{l}(t)= e^{\frac{i}{2}(t+\tau_{\infty}^{l})\Delta}e^{-\frac{i}{2}\tau_{\infty}^{l}\Delta} f^{l} +\frac{i}{2}\int_{-\tau_{\infty}^{l}}^{t}e^{\frac{i}{2}(t-t')\Delta}\left\{ |\psi^{l}(t')|^{p-1}\psi^{l}(t')\right\}dt'. \end{equation} Here, in case of $\tau_{\infty}^{l}= \pm \infty$, we are regarding this as the final value problem: \begin{equation}\label{09/09/14/15:33} e^{-\frac{i}{2}t\Delta}\psi^{l}(t)= f^{l} +\frac{i}{2}\int_{\mp \infty}^{t}e^{-\frac{i}{2}t'\Delta}\left\{ |\psi^{l}(t')|^{p-1}\psi^{l}(t')\right\}dt'. \end{equation} Then, instead of (\ref{09/07/15/20:35}), we consider the superposition of these solutions with the space-time translations: \begin{equation}\label{09/09/14/17:16} \psi_{n}^{app}(x,t) := \sum_{l\ge 1} (e^{-\tau_{n}^{l} \frac{\partial }{\partial t} -\eta_{n}^{l}\cdot \nabla} \psi^{l})(x,t) = \sum_{l\ge 1} \psi^{l}(x-\eta_{n}^{l},t-\tau_{n}^{l}) . \end{equation} By Lemma \ref{08/08/23/16:32} below, we will see that this formal object $\psi_{n}^{app}$ is an ``almost'' solution to our equation (\ref{08/05/13/8:50}) with the initial datum $\displaystyle{\sum_{l\ge 1}e^{-\frac{i}{2}\tau_{n}^{l}\Delta}e^{-\eta_{n}^{l}\cdot \nabla}f^{l}}$, and supposed to be a good approximation to $\psi_{n}$. In other words, a kind of superposition principle holds valid in an asymptotic sense as $n\to \infty$. By virtue of the long time perturbation theory (Proposition \ref{08/08/05/14:30}), the sum in $\psi_{n}^{app}$ consists of a finite number of solutions. 
Actually, as a consequence of the minimizing property of the sequence $\{\psi_{n}\}$ (\ref{10/04/05/17:37}), the summand is just one: put $\Psi:=\psi^{1}$. Then, it turns out that $\Psi$ is a one-soliton-like solution which we are looking for. In fact, we can prove: \begin{proposition}[One-soliton-like solution in $PW_{+}$] \label{08/10/16/21:12} Suppose that $\widetilde{N}_{c}< \widetilde{N}_{2}$. Then, there exists a global solution $\Psi \in C(\mathbb{R}; H^{1}(\mathbb{R}^{d}))$ to the equation (\ref{08/05/13/8:50}) with the following properties: $\{\Psi(t)\}_{t\in \mathbb{R}}$ is; \\ {\rm (i)} a minimizer such that \begin{equation}\label{08/10/20/2:27} \left\| \Psi \right\|_{X(\mathbb{R})}=\infty, \qquad \widetilde{\mathcal{N}}_{2}(\Psi(t))=\widetilde{N}_{c}, \quad \Psi(t) \in PW_{+} \quad \mbox{for all $t \in \mathbb{R}$}, \end{equation} {\rm (ii)} uniformly bounded in $H^{1}(\mathbb{R}^{d})$ with \begin{equation}\label{08/10/18/22:17} \left\| \Psi(t)\right\|_{L^{2}}=\left\| \Psi(0) \right\|_{L^{2}}=1 \quad \mbox{for all $t \in \mathbb{R}$}, \end{equation} and \begin{equation}\label{10/04/05/18:07} \sup_{t \in \mathbb{R}} \left\|\nabla \Psi (t)\right\|_{L^{2}}\le N_{c}^{\frac{1}{\frac{d}{2}(p-1)-2}}, \end{equation} {\rm (iii)} a family of functions with zero momentum, i.e., \begin{equation}\label{08/08/18/17:42} \Im \int_{\mathbb{R}^{d}}\overline{\Psi}(x,t)\nabla \Psi(x,t)\,dx =0 \quad \mbox{for all $t \in \mathbb{R}$}, \end{equation} {\rm (iv)} tight in $H^{1}(\mathbb{R}^{d})$ in the following sense: For any $\varepsilon>0$, there exists $R_{\varepsilon}>0$ and a continuous path $\gamma_{\varepsilon} \in C([0,\infty);\mathbb{R}^{d})$ with $\gamma_{\varepsilon}(0)=0$ such that \begin{equation}\label{08/10/18/22:46} \int_{|x-\gamma_{\varepsilon}(t)|<R_{\varepsilon}}\left| \Psi(x,t)\right|^{2} \,dx > 1-\varepsilon \quad \mbox{for all $t \in [0,\infty)$}, \end{equation} and \begin{equation}\label{08/11/01/15:41} \int_{|x-\gamma_{\varepsilon}(t)|<R_{\varepsilon}}\left| \nabla \Psi(x,t)\right|^{2} \,dx > \left\| \nabla \Psi(t) \right\|_{L^{2}}^{2}-\varepsilon \quad \mbox{for all $t \in [0,\infty)$}. \end{equation} \end{proposition} We will give the proof of Proposition \ref{08/10/16/21:12} in Sections \ref{09/05/05/10:03} and \ref{08/09/25/17:01} later, as mentioned before. The properties (i), (ii) and (iii) are easy matters. The most important part is the tightness (iv). To prove the tightness, we introduce a concentrate function for $\Psi$ (see (\ref{09/07/28/19:58}) below) and consider a sequence $\{\Psi(\cdot+t_{n})\}_{n\in \mathbb{N}}$ for an appropriate sequence $\{t_{n}\}$ with $t_{n} \to +\infty$. Analogous arguments with finding $\Psi$ work on $\{\Psi(\cdot +t_{n})\}_{n\in \mathbb{N}}$ as well, so that we can show the tightness. Once we get the tightness, Lemma \ref{08/10/03/9:46} immediately gives us the continuous path $\gamma_{\varepsilon}$ described in (iv) of Proposition \ref{08/10/16/21:12}. In order to prove $\widetilde{N}_{c}=\widetilde{N}_{2}$, we need to know more subtle behavior of the path however. For a sufficiently long time, we can take $\gamma_{\varepsilon}$ as the almost center of mass, say $\gamma_{\varepsilon}^{ac}$: \begin{lemma}[Almost center of mass] \label{08/09/04/12:16} Let $\Psi$ be a global solution to the equation (\ref{08/05/13/8:50}) satisfying the properties (\ref{08/10/20/2:27}), (\ref{08/10/18/22:17}), (\ref{10/04/05/18:07}), (\ref{08/08/18/17:42}), and (\ref{08/10/18/22:46}), (\ref{08/11/01/15:41}). 
Let $R_{\varepsilon}$ be the radius found in {\rm (iv)} of Proposition \ref{08/10/16/21:12} for $\varepsilon>0$. We define an ``almost center of mass'' by \begin{equation}\label{08/10/19/16:06} \gamma_{\varepsilon,R}^{ac}(t) :=(\vec{w}_{20R}, |\Psi(t)|^{2}) \quad \mbox{for all $\varepsilon \in (0,\frac{1}{100})$ and $R>R_{\varepsilon}$}, \end{equation} where $\vec{w}_{R}$ is the function defined by (\ref{10/02/27/22:18}). Then, we have: \begin{equation}\label{09/09/26/17:21} \gamma_{\varepsilon,R}^{ac} \in C^{1}([0,\infty) ;\mathbb{R}^{d}), \end{equation} and there exists a constant $\alpha>0$, depending only on $d$ and $p$, such that \begin{align} \label{08/09/04/16:51} &\left|\gamma_{\varepsilon,R}^{ac}(t)\right| \le 20R \quad \mbox{for all $t \in \left[0,\alpha \frac{R}{ \sqrt{\varepsilon}} \right]$ }, \\[6pt] \label{08/10/19/16:25} &\int_{|x-\gamma_{\varepsilon,R}^{ac}(t)|\le 4R}\left| \Psi(x,t)\right|^{2}+\left| \nabla \Psi(x,t)\right|^{2}\,dx \ge \left\| \Psi(t) \right\|_{H^{1}}^{2}-\varepsilon \quad \mbox{for all $t \in \left[ 0, \alpha \frac{R}{ \sqrt{\varepsilon}} \right]$ }. \end{align} \end{lemma} \begin{remark}\label{10/04/11/22:59} In the proof below, we find that the following estimate holds (see (\ref{08/10/19/18:10})): \begin{equation*}\label{08/10/19/16:24} \left| \frac{d\gamma_{\varepsilon,R}^{ac}}{dt}(t)\right| \lesssim \sqrt{\varepsilon } \quad \mbox{for all $t \in \left[0, \alpha \frac{ R}{\sqrt{ \varepsilon } } \right]$} , \end{equation*} where the implicit constant depends only on $d$ and $p$. \end{remark} \begin{proof}[Proof of Lemma \ref{08/09/04/12:16}] We have from Proposition B.1 in \cite{Nawa8} that \begin{equation}\label{08/10/19/16:54} \gamma_{\varepsilon,R}^{ac}(t) =(\vec{w}_{20R},|\Psi(0)|^{2}) + \left( 2\Im \int_{0}^{t} \!\! \int_{\mathbb{R}^{d}} \nabla \vec{w}_{20R}^{j}(x)\!\cdot \!\nabla \Psi(x,s) \overline{\Psi(x,s)}\,dx\,ds \right)_{j=1,\ldots, d}. \end{equation} This formula, with the help of (\ref{08/10/18/22:17}), (\ref{10/04/05/18:07}) and (\ref{08/10/23/18:35}), immediately shows (\ref{09/09/26/17:21}): $\gamma_{\varepsilon,R}^{ac}\in C^{1}([0,\infty);\mathbb{R}^{d})$. \par Next, we shall prove the properties (\ref{08/09/04/16:51}) and (\ref{08/10/19/16:25}). Let $\gamma_{\varepsilon}$ be the path found in Proposition \ref{08/10/16/21:12}, and let $t_{\varepsilon}$ be the first time at which the size of $\gamma_{\varepsilon}$ reaches $10R$, i.e., \begin{equation}\label{10/04/05/18:42} t_{\varepsilon} := \inf\left\{ t\ge 0 \bigm| \left| \gamma_{\varepsilon}(t) \right|=10R \right\}. \end{equation} Since $\gamma_{\varepsilon}\in C([0,\infty);\mathbb{R}^{d})$ with $\gamma_{\varepsilon}(0)=0$, we have that $t_{\varepsilon}>0$ and \begin{equation}\label{08/10/19/22:02} \left| \gamma_{\varepsilon}(t) \right| \le 10R \quad \mbox{for all $t \in [0,t_{\varepsilon}]$}. \end{equation} We claim that \begin{equation}\label{08/10/19/23:22} |\gamma_{\varepsilon,R}^{ac}(t) -\gamma_{\varepsilon}(t)|< 2R \quad \mbox{for all $t \in [0,t_{\varepsilon}]$}.
\end{equation} It follows from the property (\ref{08/10/18/22:17}) that \begin{equation}\label{08/10/19/22:46} \begin{split} \left| \gamma_{\varepsilon,R}^{ac}(t) -\gamma_{\varepsilon}(t) \right| &= \left| (\vec{w}_{20R}, |\Psi(t)|^{2} )- \gamma_{\varepsilon}(t)\|\Psi(t)\|_{L^{2}}^{2} \right| \\[6pt] &= \left| \int_{\mathbb{R}^{d}} \left\{ \vec{w}_{20R}(x) - \gamma_{\varepsilon}(t) \right\}\left| \Psi(x,t)\right|^{2}\,dx \right| \\[6pt] &\le \int_{|x-\gamma_{\varepsilon}(t)|\le R} \left| x - \gamma_{\varepsilon}(t) \right| \left| \Psi(x,t)\right|^{2}\,dx \\[6pt] & \qquad \qquad + \int_{|x-\gamma_{\varepsilon}(t)|\ge R} \left| \vec{w}_{20R} - \gamma_{\varepsilon}(t) \right| \left| \Psi(x,t) \right|^{2}\,dx. \end{split} \end{equation} Moreover, applying (\ref{08/10/19/22:02}) and (\ref{08/10/23/18:42}) to the second term on the right-hand side above, we obtain that \begin{equation}\label{10/04/11/14:39} \left| \gamma_{\varepsilon, R}^{ac}(t) -\gamma_{\varepsilon}(t) \right| \le R \|\Psi(t)\|_{L^{2}}^{2} + 50R \int_{ |x-\gamma_{\varepsilon}(t)|\ge R} \left| \Psi(x,t) \right|^{2} \,dx \quad \mbox{for all $t \in [0,t_{\varepsilon}]$}. \end{equation} Hence, this inequality (\ref{10/04/11/14:39}), together with (\ref{08/10/18/22:17}) and the tightness (\ref{08/10/18/22:46}), yields that \begin{equation}\label{10/04/11/14:43} \left| \gamma_{\varepsilon,R}^{ac}(t) -\gamma_{\varepsilon}(t) \right| \le R+50R \varepsilon < 2R \quad \mbox{for all $\varepsilon < \frac{1}{100}$ and $t \in [0,t_{\varepsilon}]$}. \end{equation} Now, we have by (\ref{08/10/19/22:02}) and (\ref{08/10/19/23:22}) that \begin{equation}\label{09/09/26/18:10} \left| \gamma_{\varepsilon,R}^{ac}(t)\right| \le 12R \quad \mbox{for all $t \in [0,t_{\varepsilon}]$}. \end{equation} Moreover, (\ref{08/10/19/23:22}) also gives us that \begin{equation}\label{10/04/11/15:16} B_{R}(\gamma_{\varepsilon}(t)) \subset B_{4R}(\gamma_{\varepsilon}^{ac}(t)) \quad \mbox{for all $t \in [0,t_{\varepsilon}]$}, \end{equation} so that the tightness of $\{\Psi(t)\}$ in $H^{1}(\mathbb{R}^{d})$ (see (\ref{08/10/18/22:46}) and (\ref{08/11/01/15:41})) gives us that \begin{equation}\label{08/10/19/22:50} \int_{|x-\gamma_{\varepsilon,R}^{ac}(t)|\le 4R} \left| \Psi(x,t) \right|^{2} + \left| \nabla \Psi(t) \right|^{2} \,dx \ge \left\| \Psi(t)\right\|_{H^{1}}^{2}-\varepsilon \quad \mbox{for all $t \in [0,t_{\varepsilon}]$}. \end{equation} Therefore, for the desired results (\ref{08/09/04/16:51}) and (\ref{08/10/19/16:25}), it suffices to show that there exists a constant $\alpha>0$, depending only on $d$ and $p$, such that \begin{equation}\label{09/09/26/18:21} \alpha \frac{R}{\sqrt{\varepsilon}}\le t_{\varepsilon}. \end{equation} To this end, we prove that \begin{equation}\label{08/10/19/18:10} \left| \frac{d \gamma_{\varepsilon,R}^{ac}}{dt}(t)\right| \lesssim \sqrt{\varepsilon } \quad \mbox{for all $t \in [0, t_{\varepsilon}]$}, \end{equation} where the implicit constant depends only on $d$ and $p$. \par Before proving (\ref{08/10/19/18:10}), we describe how it yields (\ref{09/09/26/18:21}). We easily verify by (\ref{08/10/19/18:10}) that \begin{equation}\label{10/04/11/15:36} |\gamma_{\varepsilon,R}^{ac}(t_{\varepsilon})|-|\gamma_{\varepsilon,R}^{ac}(0)| \le \int_{0}^{t_{\varepsilon}}\left| \frac{d\gamma_{\varepsilon,R}^{ac}}{dt}(t)\right|\,dt \lesssim \sqrt{\varepsilon }t_{\varepsilon}, \end{equation} where the implicit constant depends only on $d$ and $p$. 
Then, it follows from (\ref{08/10/19/23:22}) and $\gamma_{\varepsilon}(0)=0$ that \begin{equation}\label{10/04/11/15:38} \begin{split} \sqrt{\varepsilon} t_{\varepsilon} &\gtrsim |\gamma_{\varepsilon,R}^{ac}(t_{\varepsilon})|-2R \\[6pt] & \ge |\gamma_{\varepsilon}(t_{\varepsilon})|-|\gamma_{\varepsilon,R}^{ac}(t_{\varepsilon})-\gamma_{\varepsilon}(t_{\varepsilon})|-2R \\[6pt] & \ge 10R -2R -2R=6R, \end{split} \end{equation} which gives (\ref{09/09/26/18:21}). \par Finally, we prove (\ref{08/10/19/18:10}). Using the formula (\ref{08/10/19/16:54}) and the property (\ref{08/08/18/17:42}), we obtain that \begin{equation}\label{10/04/11/15:41} \begin{split} \left| \frac{d \gamma_{\varepsilon,R}^{ac}}{dt}(t)\right|^{2} &=\sum_{j=1}^{d} \left| 2\Im (\nabla \vec{w}_{20R}^{j}\cdot\nabla \Psi(t),\Psi(t)) \right|^{2} \\[6pt] &\le 4\sum_{j=1}^{d} \left\|\nabla \vec{w}_{20R}^{j} \right\|_{L^{\infty}}^{2} \left\|\Psi(t)\right\|_{L^{2}}^{2} \int_{|x|\ge 20R} \left|\nabla \Psi(x,t)\right|^{2}\,dx \quad \mbox{for all $t\ge 0$}. \end{split} \end{equation} Applying (\ref{08/10/18/22:17}) and (\ref{08/10/23/18:35}) to the right-hand side above, we further obtain that \begin{equation}\label{09/09/26/18:55} \left| \frac{d \gamma_{\varepsilon,R}^{ac}}{dt}(t)\right|^{2} \lesssim \int_{|x|\ge 20R} \left| \nabla \Psi(x,t)\right|^{2}\,dx \quad \mbox{for all $t\ge 0$}, \end{equation} where the implicit constant depends only on $d$ and $p$. Since the estimate (\ref{08/10/19/22:02}) shows that \begin{equation}\label{10/04/11/15:48} B_{R}(\gamma_{\varepsilon}(t)) \subset B_{20R}(0) \quad \mbox{for all $t \in [0,t_{\varepsilon}]$}, \end{equation} the estimate (\ref{09/09/26/18:55}), together with the tightness (\ref{08/11/01/15:41}), leads to (\ref{08/10/19/18:10}). \end{proof} Lemma \ref{08/09/04/12:16} implies that the solution $\Psi$ found in Proposition \ref{08/10/16/21:12} is in a bound motion; in fact, it behaves like a standing wave. On the other hand, the generalized virial identity ((\ref{08/03/29/19:05}) in Lemma \ref{08/10/23/18:45}) suggests that $\Psi$ is in a scattering motion. As we already mentioned at the beginning of this Section \ref{09/05/06/9:13}, these two facts contradict each other; thus, we see that $\widetilde{N}_{2}=\widetilde{N}_{c}$. Let us now make this point precise: \par The generalized virial identity (\ref{08/03/29/19:05}), together with (\ref{08/12/29/14:16}) and (\ref{08/04/20/12:50}), yields that \begin{equation}\label{10/04/11/16:13} \begin{split} &(W_{R},|\Psi(t)|^{2}) \\[6pt] &\ge (W_{R},|\Psi(0)|^{2})+2t\Im{(\vec{w}_{R}\cdot \nabla \Psi(0),\Psi(0))} +2\int_{0}^{t}\int_{0}^{t'}\mathcal{K}(\Psi(t''))\,dt''dt' \\[6pt] &\qquad -2 \int_{0}^{t}\int_{0}^{t'} \int_{|x|\ge R} \rho_{1}(x)|\nabla \Psi(x,t'')|^{2} + \rho_{2}(x) \left|\frac{x}{|x|}\cdot \nabla \Psi(x,t'')\right|^{2}dx dt''dt' \\[6pt] &\qquad -\frac{1}{2}\int_{0}^{t}\int_{0}^{t'} \left\| \Delta ({\rm div}\, \vec{w}_{R}) \right\|_{L^{\infty}} \left\|\Psi(t'')\right\|_{L^{2}}^{2} dt''dt' \qquad \mbox{for all $R>0$}.
\end{split} \end{equation} Applying the estimates (\ref{08/10/18/22:17}), (\ref{08/04/20/16:54}) and (\ref{08/12/29/12:53}) to the right-hand side above, we obtain that \begin{equation}\label{10/04/11/16:31} \begin{split} &(W_{R},|\Psi(t)|^{2}) \\[6pt] &\ge (W_{R},|\Psi(0)|^{2})+2t\Im{(\vec{w}_{R}\cdot \nabla \Psi(0),\Psi(0))} +2\int_{0}^{t}\int_{0}^{t'}\mathcal{K}(\Psi(t''))\,dt''dt' \\[6pt] & \quad -2 \left( K_{1}+K_{2} \right) \int_{0}^{t}\int_{0}^{t'} \int_{|x|\ge R} |\nabla \Psi(x,t'')|^{2} \,dx dt''dt' -\frac{10d^{2}K}{R^{2}} t^{2} \quad \mbox{for all $R>0$}, \end{split} \end{equation} where $K$ is the constant defined in (\ref{10/02/27/22:14}), and $K_{1}$ and $K_{2}$ are the constants found in Lemma \ref{08/04/20/0:27}. Moreover, it follows from the estimate (\ref{09/06/21/19:30}) in Proposition \ref{09/06/21/19:28} that \begin{equation}\label{08/09/04/12:09} \begin{split} &(W_{R},|\Psi(t)|^{2}) \\[6pt] &\ge (W_{R},|\Psi(0)|^{2})+2t\Im{(\vec{w}_{R}\cdot \nabla \Psi(0),\Psi(0))} +t^{2}\omega_{0}\mathcal{H}(\Psi(0)) \\[6pt] & \quad -2(K_{1}+K_{2}) \int_{0}^{t}\int_{0}^{t'} \int_{|x|\ge R} |\nabla \Psi(x,t'')|^{2} \, dx dt''dt' -\frac{10d^{2}K}{R^{2}}t^{2} \quad \mbox{for all $R>0$}, \end{split} \end{equation} where we put \begin{equation}\label{10/04/08/23:11} \omega_{0} :=1-\frac{\widetilde{\mathcal{N}}_{2}(\Psi(0))}{\widetilde{N}_{2}}. \end{equation} Here, we have by Lemma \ref{08/09/04/12:16} that: For any $\varepsilon \in (0,\frac{1}{100})$, there exists $R_{\varepsilon}>0$ with the following property: for any $R\ge R_{\varepsilon}$, there exists $\gamma_{\varepsilon,R}^{ac} \in C^{1}([0,\infty);\mathbb{R}^{d})$ such that \begin{align}\label{09/03/05/18:10} &\left| \gamma_{\varepsilon,R}^{ac}(t) \right| \le 20R \quad \mbox{for all $t \in \bigm[0,\, \alpha\frac{R}{\sqrt{\varepsilon }} \bigm]$}, \\[6pt] \label{08/10/20/2:59} &\int_{|x-\gamma_{\varepsilon,R}^{ac}(t)|\ge 4R} |\nabla \Psi(x,t)|^{2}dx < \varepsilon \quad \mbox{for all $t \in \bigm[0,\, \alpha\frac{R}{\sqrt{\varepsilon }} \bigm]$}, \end{align} where $\alpha$ is some constant depending only on $d$ and $p$. \par We employ (\ref{09/03/05/18:10}) to obtain that \begin{equation}\label{10/04/11/18:04} |x-\gamma_{\varepsilon,R}^{ac}(t)|\ge 4R \quad \mbox{for all $R\ge R_{\varepsilon}$, $t \in \bigm[0,\, \alpha\frac{R}{\sqrt{\varepsilon }}\bigm]$ and $x \in \mathbb{R}^{d}$ with $|x|\ge 24R$}. \end{equation} Hence, (\ref{08/09/04/12:09}), together with the tightness (\ref{08/10/20/2:59}), leads to that \begin{equation}\label{10/04/11/21:21} \begin{split} (W_{50R},|\Psi(t)|^{2}) &\ge (W_{50R},|\Psi(0)|^{2})+2t\Im{(\vec{w}_{50R}\cdot \nabla \Psi(0),\Psi(0))} +t^{2}\omega_{0}\mathcal{H}(\Psi(0)) \\[6pt] & \quad -(K_{1}+K_{2})t^{2}\varepsilon -\frac{10d^{2}K}{(50R)^{2}}t^{2} \quad \mbox{for all $R\ge R_{\varepsilon}$ and $t \in \bigm[0,\, \alpha\frac{R}{\sqrt{\varepsilon }} \bigm]$ }. \end{split} \end{equation} We choose $\varepsilon$ so small that \begin{equation} \label{10/04/11/21:29} 0< \varepsilon < \min \left\{ \frac{1}{100}, \ \frac{\omega_{0}}{4(K_{1}+K_{2})}\mathcal{H}(\Psi(0)) \right\}, \end{equation} and $R$ so large that \begin{equation}\label{08/12/30/14:42} R \ge \max\left\{ R_{\varepsilon}, \ \frac{d\sqrt{K}}{\sqrt{\omega_{0}\mathcal{H}(\Psi(0))}} \right\}. 
\end{equation} Then, it follows from (\ref{10/04/11/21:21}) that \begin{equation}\label{08/09/04/13:45} \begin{split} (W_{50R},|\Psi(t)|^{2}) \ge (W_{50R},|\Psi(0)|^{2})+2t\Im{(\vec{w}_{50R}\cdot \nabla \Psi(0),\Psi(0))} + \frac{t^{2}}{2}\omega_{0}\mathcal{H}(\Psi(0))& \\[6pt] \mbox{for all $t \in \bigm[0,\, \alpha\frac{R}{\sqrt{\varepsilon }} \bigm]$}&. \end{split} \end{equation} Dividing both sides of (\ref{08/09/04/13:45}) by $t^{2}$ and applying the estimates (\ref{08/10/18/22:17}) and (\ref{08/12/29/12:48}), we obtain that \begin{equation}\label{08/12/30/14:28} \begin{split} \frac{8(50R)^{2}}{t^{2}} \ge \frac{1}{t^{2}}(W_{50R},|\Psi(0)|^{2}) + \frac{2}{t}\Im{(\vec{w}_{50R}\cdot \nabla \Psi(0),\Psi(0))} + \frac{\omega_{0}}{2}\mathcal{H}(\Psi(0))& \\[6pt] \mbox{for all $t \in \bigm[0,\, \alpha\frac{R}{\sqrt{\varepsilon }} \bigm]$}&. \end{split} \end{equation} In particular, when $t= \alpha \frac{R}{\sqrt{\varepsilon}}$, we have by (\ref{08/10/18/22:17}), (\ref{08/10/23/18:42}) and (\ref{08/12/29/12:48}) that \begin{equation}\label{08/09/04/17:06} \begin{split} \frac{8(50)^{2}\varepsilon}{\alpha^{2}} &\ge \frac{\varepsilon}{\alpha^{2}R^{2}} (W_{50R},|\Psi(0)|^{2}) + \frac{2\sqrt{\varepsilon}}{\alpha R} \Im{ (\vec{w}_{50R}\cdot \nabla \Psi(0),\Psi(0))} + \frac{\omega_{0}}{2}\mathcal{H}(\Psi(0)) \\[6pt] &\ge -\frac{8(50)^{2}\varepsilon}{\alpha^{2}} -\frac{200 \sqrt{\varepsilon}}{\alpha } \left\| \nabla \Psi(0) \right\|_{L^{2}} +\frac{\omega_{0}}{2} \mathcal{H}(\Psi(0)), \end{split} \end{equation} so that \begin{equation}\label{10/04/11/23:32} \frac{8(50)^{2}\varepsilon}{\alpha^{2}} + \frac{8(50)^{2}\varepsilon}{\alpha^{2}} +\frac{200 \sqrt{\varepsilon}}{\alpha } \left\| \nabla \Psi(0) \right\|_{L^{2}} \ge \frac{\omega_{0}}{2} \mathcal{H}(\Psi(0)). \end{equation} However, taking $\varepsilon \to 0$ in (\ref{10/04/11/23:32}), we obtain a contradiction. This absurd conclusion comes from the existence of the one-soliton-like solution $\Psi$ (see Proposition \ref{08/10/16/21:12}). Thus, it must hold that $\widetilde{N}_{c}=\widetilde{N}_{2}$, provided that Proposition \ref{08/10/16/21:12} is valid. \subsection{Solving the variational problem for $\widetilde{N}_{c}$} \label{09/05/05/10:03} In this section, we construct a candidate for the one-soliton-like solution, considering the variational problem for $\widetilde{N}_{c}$. \par Supposing that $\widetilde{N}_{c}<\widetilde{N}_{2}$, we can take a sequence $\{\delta_{n}\}_{n \in \mathbb{N}}$ of numbers such that \begin{equation}\label{10/04/15/19:55} \widetilde{N}_{c}<\delta_{n}<\widetilde{N}_{2} \quad \mbox{for all $n \in \mathbb{N}$}, \qquad \lim_{n\to \infty}\delta_{n}=\widetilde{N}_{c}. \end{equation} Moreover, Lemma \ref{08/09/01/23:45} enables us to take a sequence $\{\psi_{0,n}\}_{n\in \mathbb{N}}$ in $PW_{+}$ such that \begin{align}\label{09/05/05/10:26} &\left\|\psi_{0,n}\right\|_{L^{2}}=1 \quad \mbox{for all $n \in \mathbb{N}$}, \\[6pt] \label{09/05/05/10:25} &\widetilde{N}_{c} < \widetilde{\mathcal{N}}_{2}(\psi_{0,n}) <\delta_{n} \quad \mbox{for all $n\in \mathbb{N}$}. \end{align} Note that (\ref{09/05/05/10:25}), together with (\ref{10/04/15/19:55}), leads to \begin{equation}\label{10/04/15/20:14} \lim_{n\to \infty}\widetilde{\mathcal{N}}_{2}(\psi_{0,n})=\widetilde{N}_{c}. \end{equation} Let $\psi_{n}$ be the solution to (\ref{08/05/13/8:50}) with $\psi_{n}(0)=\psi_{0,n}$.
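Note that the $L^{2}$-norm and $\mathcal{H}$ are conserved along the flow of (\ref{08/05/13/8:50}), and that $\widetilde{\mathcal{N}}_{2}$ is a function of these two conserved quantities (cf. (\ref{10/04/25/16:27}) below); hence
\begin{equation*}
\widetilde{\mathcal{N}}_{2}(\psi_{n}(t))=\widetilde{\mathcal{N}}_{2}(\psi_{0,n})
\quad \mbox{for all $t \in \mathbb{R}$ and $n \in \mathbb{N}$},
\end{equation*}
so that, by (\ref{10/04/15/20:14}), we also have $\displaystyle{\lim_{n\to \infty}\widetilde{\mathcal{N}}_{2}(\psi_{n}(t))=\widetilde{N}_{c}}$ for every $t \in \mathbb{R}$; this is the hypothesis (\ref{08/08/26/15:56}) of Lemma \ref{08/08/19/23:07} below.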
Then, (\ref{09/05/05/10:25}), together with the definition of $\widetilde{N}_{c}$ (see (\ref{08/09/02/18:06})), implies that \begin{equation}\label{09/05/05/10:36} \left\|\psi_{n} \right\|_{X(\mathbb{R})}=\infty \quad \mbox{for all $n \in \mathbb{N}$}. \end{equation} We also find that \begin{equation}\label{10/04/15/20:26} \limsup_{n\to \infty}\left\| e^{\frac{i}{2}t\Delta}\psi_{0,n} \right\|_{L^{\infty}(\mathbb{R};L^{\frac{d}{2}(p-1)})}>0. \end{equation} Indeed, if (\ref{10/04/15/20:26}) fails, then the small data theory (Proposition \ref{08/08/22/20:59}) concludes that \[ \left\| \psi_{n} \right\|_{X(\mathbb{R})}< \infty \quad \mbox{for sufficiently large $n\in \mathbb{N}$}, \] which contradicts (\ref{09/05/05/10:36}). \par The following lemma gives us a candidate for the one-soliton-like solution in Proposition \ref{08/10/16/21:12}. \begin{lemma} \label{08/08/19/23:07} Assume that $d\ge 1$ and $2+\frac{4}{d}< p+1 <2^{*}$. Suppose that $\widetilde{N}_{c}<\widetilde{N}_{2}$. Let $\{ \psi_{n} \}$ be a sequence of global solutions to the equation (\ref{08/05/13/8:50}) in $C(\mathbb{R};H^{1}(\mathbb{R}^{d}))$ such that \begin{align} \label{09/05/03/12:29} &\psi_{n}(t) \in PW_{+} \quad \mbox{for all $t \in \mathbb{R}$ and $n\in \mathbb{N}$}, \\[6pt] \label{10/05/13/10:06} &\left\|\psi_{n}(t) \right\|_{L^{2}}=1 \quad \mbox{for all $n \in \mathbb{N}$ and $t \in \mathbb{R}$}, \\[6pt] \label{09/02/02/23:40} &\sup_{n \in \mathbb{N}}\left\| \psi_{n}(0) \right\|_{H^{1}}<\infty , \\[6pt] \label{08/08/26/15:56} &\lim_{n\to \infty}\widetilde{\mathcal{N}}_{2}(\psi_{n}(t))= \widetilde{N}_{c} \quad \mbox{for all $t \in \mathbb{R}$}, \\[6pt] \label{09/02/17/14:53} &\left\|\psi_{n} \right\|_{X(\mathbb{R})}=\infty \quad \mbox{for all $n\in \mathbb{N}$}. \end{align} Furthermore, we suppose that \begin{equation}\label{08/08/21/17:39} \limsup_{n\to \infty}\left\| e^{\frac{i}{2}t\Delta}\psi_{n}(0) \right\|_{L^{\infty}(\mathbb{R};L^{\frac{d}{2}(p-1)})}>0. \end{equation} Then, there exists a subsequence of $\{\psi_{n}\}$ (still denoted by the same symbol), which satisfies the following property: There exist \\ {\rm (i)} a nontrivial global solution $\Psi \in C(\mathbb{R};H^{1}(\mathbb{R}^{d}))$ to the equation (\ref{08/05/13/8:50}) with \begin{align} \label{09/05/02/10:11} &\left\|\Psi \right\|_{X(\mathbb{R})}=\infty , \\[6pt] \label{09/05/02/10:12} &\Psi(t) \in PW_{+} \quad \mbox{for all $t \in \mathbb{R}$}, \\[6pt] \label{10/05/13/10:09} &\left\| \Psi(t) \right\|_{L^{2}}=1 \quad \mbox{for all $t \in \mathbb{R}$}, \\[6pt] \label{09/05/02/10:13} &\widetilde{\mathcal{N}}_{2}(\Psi(t))=\widetilde{N}_{c} \quad \mbox{for all $t \in \mathbb{R}$}, \end{align} and \\ {\rm (ii)} a nontrivial function $f \in PW_{+}$, a sequence $\{\tau_{n}\}$ in $\mathbb{R}$ with $\displaystyle{\lim_{n\to \infty}\tau_{n}= \tau_{\infty}}$ for some $\tau_{\infty} \in \mathbb{R}\cup \{\pm \infty\}$, and a sequence $\{\eta_{n}\}$ in $\mathbb{R}^{d}$ such that \begin{align} \label{09/05/02/10:10} & \lim_{n\to \infty}e^{\frac{i}{2}\tau_{n}\Delta}e^{\eta_{n}\cdot \nabla }\psi_{n}(0) = f \quad \mbox{ weakly in $H^{1}(\mathbb{R}^{d})$, and a.e. 
in $\mathbb{R}^{d}$}, \\[6pt] \label{09/07/27/1:49} &\lim_{n\to \infty}\left\| e^{\frac{i}{2}\tau_{n}\Delta} e^{\eta_{n}\cdot \nabla }\psi_{n}(0) - f \right\|_{L^{2}(\mathbb{R}^{d})}=0, \\[6pt] \label{09/05/02/10:08} &\lim_{n\to \infty}\left\| e^{\frac{i}{2}(t+\tau_{n})\Delta} e^{\eta_{n}\cdot \nabla } \psi_{n}(0)-e^{\frac{i}{2}t\Delta} f \right\|_{L^{\infty}(\mathbb{R};L^{\frac{d}{2}(p-1)})\cap X(\mathbb{R})}=0, \\[6pt] \label{09/05/02/10:09} &\lim_{n\to \infty}\left\|\Psi(-\tau_{n})-e^{-\frac{i}{2}\tau_{n}\Delta}f \right\|_{H^{1}}=0 , \\[6pt] \label{09/05/02/10:07} & \lim_{n\to \infty} \left\|e^{\frac{i}{2}t\Delta}\psi_{n}(0) -e^{\frac{i}{2}t\Delta}\left(e^{-\tau_{n}\frac{\partial }{\partial t}-\eta_{n}\cdot \nabla}\Psi\right) (0) \right\|_{X(\mathbb{R})}=0. \end{align} Especially, we have \begin{equation}\label{10/04/15/20:43} \left\| f \right\|_{L^{2}}=\left\| \Psi(t) \right\|_{L^{2}} \quad \mbox{for all $t \in \mathbb{R}$}, \qquad \left\|\nabla f \right\|_{L^{2}}=\lim_{n\to \infty}\left\| \nabla \Psi(-\tau_{n})\right\|_{L^{2}}. \end{equation} \end{lemma} \begin{remark} In this Lemma \ref{08/08/19/23:07}, $\Psi$ is actually a solution to the integral equation (\ref{09/09/14/15:32}). \end{remark} For the proof of Lemma \ref{08/08/19/23:07}, we prepare the following Lemmata \ref{08/08/21/14:12} and \ref{08/08/23/16:32}. The former is a compactness lemma and the latter is a key ingredient to prove the superposition principle (\ref{09/09/14/17:16}) as mentioned in Section \ref{09/05/06/9:13}. \begin{lemma}\label{08/08/21/14:12} Assume that $d\ge 1$ and $2+\frac{4}{d}<p+1<2^{*}$. Let $\{ f_{n} \}$ be a uniformly bounded sequence in $H^{1}(\mathbb{R}^{d})$. Suppose that \begin{equation}\label{09/06/28/16:04} \limsup_{n\to \infty}\left\| e^{\frac{i}{2}t\Delta}f_{n} \right\|_{L^{\infty}(\mathbb{R};L^{\frac{d}{2}(p-1)})}>0. \end{equation} Then, there exists a subsequence of $\{f_{n}\}$ (still denoted by the same symbol), which satisfies the following property: There exist a nontrivial function $f \in H^{1}(\mathbb{R}^{d})$, a sequence $\{t_{n}\}$ in $\mathbb{R}$ with $\displaystyle{\lim_{n\to \infty}t_{n}= t_{\infty}}$ for some $t_{\infty}\in \mathbb{R} \cup \{\pm \infty\}$, and a sequence $\{y_{n}\}$ in $\mathbb{R}^{d}$ such that, putting $ f_{n}^{*}(x):= e^{\frac{i}{2}t_{n}\Delta}f_{n}(x+y_{n}) $, we have that \begin{align} \label{08/08/21/14:23} &\lim_{n\to \infty}f_{n}^{*}= f \quad \mbox{weakly in $H^{1}(\mathbb{R}^{d})$, and strongly in $L_{loc}^{q}(\mathbb{R}^{d})$ for all $q\in [2,2^{*})$}, \\[6pt] \label{08/08/21/17:25} &\lim_{n\to \infty} \left\{ \left\| |\nabla|^{s} f_{n} \right\|_{L^{2}}^{2}- \left\| |\nabla |^{s} (f_{n}^{*}-f) \right\|_{L^{2}}^{2}\right\} =\left\| |\nabla |^{s}f\right\|_{L^{2}}^{2} \quad \mbox{for all $s \in [0,1]$}, \\[6pt] \label{08/08/21/14:35} &\lim_{n\to \infty} \left\{ \left\| f_{n} \right\|_{L^{q}}^{q} -\left\|e^{-\frac{i}{2}t_{n}\Delta}(f_{n}^{*}-f) \right\|_{L^{q}}^{q}- \left\| e^{-\frac{i}{2}t_{n}\Delta}f \right\|_{L^{q}}^{q}\right\}=0 \quad \mbox{for all $q\in [2,2^{*})$}, \\[6pt] \label{08/08/21/14:36} &\lim_{n\to \infty}\left\{ \mathcal{H}\left(f_{n} \right)-\mathcal{H}\left(e^{-\frac{i}{2}t_{n}\Delta}(f_{n}^{*}-f) \right)-\mathcal{H}\left(e^{-\frac{i}{2}t_{n}\Delta}f \right) \right\}=0, \\[6pt] \label{10/04/22/13:28} &\sup_{n\in \mathbb{N}}\left\|f_{n}^{*}-f \right\|_{H^{1}} \le \sup_{n\in \mathbb{N}}\left\|f_{n} \right\|_{H^{1}}<\infty. 
\end{align} \end{lemma} \begin{proof}[Proof of Lemma \ref{08/08/21/14:12}] Put \begin{equation}\label{10/04/29/10:58} A=\frac{1}{2}\limsup_{n\to \infty}\left\| e^{\frac{i}{2}t\Delta} f_{n} \right\|_{L^{\infty}(\mathbb{R};L^{\frac{d}{2}(p-1)})}^{\frac{d}{2}(p-1)}. \end{equation} Note that $A>0$ by (\ref{09/06/28/16:04}). \par We take a subsequence of $\{f_{n}\}$ (still denoted by the same symbol) and a sequence $\{t_{n}\}$ in $\mathbb{R}$ such that \begin{equation}\label{08/08/21/14:55} \inf_{n\in \mathbb{N}}\left\| e^{\frac{i}{2}t_{n}\Delta} f_{n} \right\|_{L^{\frac{d}{2}(p-1)}}^{\frac{d(p-1)}{2}}\ge A. \end{equation} Here, extracting some subsequence of $\{t_{n}\}$, we may assume that \begin{equation}\label{10/04/24/16:59} \lim_{n\to \infty} t_{n}=t_{\infty} \quad \mbox{for some $t_{\infty}\in \mathbb{R}$ or $t_{\infty}\in \{\pm \infty\}$}. \end{equation} In addition to (\ref{08/08/21/14:55}), we have by the Sobolev embedding that \begin{equation} \label{09/06/28/16:10} \sup_{n\in \mathbb{N}}\left\| e^{\frac{i}{2}t_{n}\Delta}f_{n} \right\|_{L^{p+1}}^{p+1} \le C_{p}B^{p+1} \end{equation} for some constant $C_{p}$ depending only on $d$ and $p$, where we put \begin{equation}\label{10/04/29/10:59} B=\sup_{n\in \mathbb{N}}\left\| f_{n} \right\|_{H^{1}}<\infty. \end{equation} Then, Lemma \ref{08/3/28/20:21} (with $\alpha=2$, $\beta=\frac{d(p-1)}{2}$ and $\gamma=p+1$) shows that \begin{equation}\label{10/04/28/21:53} \begin{split} &\mathcal{L}^{d}\left( \left[ \left| e^{\frac{i}{2}t_{n}\Delta}f_{n} \right| > \eta \right] \right)> \frac{A}{2}\eta^{\frac{d}{2}(p-1)} \\[6pt] & \qquad \mbox{for all $n \in \mathbb{N}$ and $0<\eta<\min \left\{1, \left( \frac{A}{4B^{2}}\right)^{\frac{1}{\frac{d(p-1)}{2}-2}}, \left( \frac{A}{4C_{p}B^{p+1}} \right)^{\frac{1}{p+1-\frac{d(p-1)}{2}}} \right\}$ }. \end{split} \end{equation} Moreover, Lemma \ref{08/03/28/21:00} implies that: There exists a sequence $\{y_{n}\}$ in $\mathbb{R}^{d}$ such that \begin{equation}\label{09/12/07/9:31} \mathcal{L}^{d}\left( \left[ \left| e^{\frac{i}{2}t_{n}\Delta}f_{n}(\cdot +y_{n}) \right| > \frac{\eta}{2} \right]\cap B_{1}(0) \right) \gtrsim \left( \frac{1+A\eta^{\frac{d}{2}(p-1)+2} }{1+B}\right)^{\frac{2}{2^{\dagger}-2}} \ \mbox{for all $n \in \mathbb{N}$}, \end{equation} where $2^{\dagger}=4$ if $d=1,2$ and $2^{\dagger}=2^{*}$ if $d\ge 3$, and the implicit constant depends only on $d$ and $p$. For the sequence $\{y_{n}\}$ found above, we put \begin{equation}\label{10/04/29/12:30} f_{n}^{*}(x)=e^{\frac{i}{2}t_{n}\Delta}f_{n}(x+y_{n}). \end{equation} Then, $\{f_{n}^{*}\}$ is uniformly bounded in $H^{1}(\mathbb{R}^{d})$, as is $\{f_{n}\}$. Hence, there exist a subsequence of $\{f_{n}^{*}\}$ (still denoted by the same symbol) and a function $f \in H^{1}(\mathbb{R}^{d})$ such that \begin{equation}\label{09/12/07/10:03} \lim_{n\to \infty}f_{n}^{*} = f \quad \mbox{weakly in $H^{1}(\mathbb{R}^{d})$}. \end{equation} Here, (\ref{09/12/07/10:03}), with the help of the compactness of the Sobolev embedding, also yields that \begin{equation}\label{10/04/15/21:37} \lim_{n\to \infty}f_{n}^{*} = f \quad \mbox{strongly in $L_{loc}^{q}(\mathbb{R}^{d})$ \quad for all $q\in [2, 2^{*})$}.
\end{equation} Therefore, we have by (\ref{09/12/07/9:31}) that \begin{equation}\label{10/04/29/12:38} \begin{split} \left\| f \right\|_{L^{q}}^{q} &\ge \left\| f \right\|_{L^{q}(B_{1}(0))}^{q} =\lim_{n\to \infty}\left\| f_{n}^{*} \right\|_{L^{q}(B_{1}(0))}^{q} \gtrsim \eta^{q} \left( \frac{1+A\eta^{\frac{d}{2}(p-1)+2}}{1+B}\right)^{\frac{2}{2^{\dagger}-2}}\\[6pt] &\quad \mbox{for all $q\in [2,2^{*})$ and $0<\eta<\min \left\{1, \left( \frac{A}{4B^{2}}\right)^{\frac{1}{\frac{d(p-1)}{2}-2}}, \left( \frac{A}{4C_{p}B^{p+1}} \right)^{\frac{1}{p+1-\frac{d(p-1)}{2}}} \right\}$}, \end{split} \end{equation} where the implicit constant depends only on $d$ and $q$. Thus, $f$ is nontrivial. \par Now, we shall show that the sequence $\{f_{n}^{*}\}$ satisfies the property (\ref{08/08/21/17:25}). Indeed, it follows from the weak convergence (\ref{09/12/07/10:03}) that \begin{equation}\label{10/04/15/21:50} \begin{split} &\left\| |\nabla |^{s} \left(f_{n}^{*} - f \right)\right\|_{L^{2}}^{2} \\[6pt] &= \left\| |\nabla |^{s} f_{n}^{*} \right\|_{L^{2}}^{2} - \left\| |\nabla |^{s} f \right\|_{L^{2}}^{2} -2\Re{ \int_{\mathbb{R}^{d}}|\nabla|^{s}\left( f_{n}^{*}(x)-f(x)\right) |\nabla |^{s}f(x)\,dx } \\[6pt] &=\left\| |\nabla |^{s} f_{n} \right\|_{L^{2}}^{2} - \left\| |\nabla |^{s} f \right\|_{L^{2}}^{2}+o_{n}(1) \qquad \mbox{for all $s\in [0, 1]$}, \end{split} \end{equation} so that (\ref{08/08/21/17:25}) holds. \par Next, we shall show (\ref{08/08/21/14:35}). We first consider the case $\displaystyle{t_{\infty}=\lim_{n\to \infty}t_{n} \in \mathbb{R}}$. Then, we can easily verify that \begin{align} \label{09/12/07/11:14} & \lim_{n\to \infty} e^{-\frac{i}{2}t_{n}\Delta}f_{n}^{*} = e^{-\frac{i}{2}t_{\infty}\Delta}f \quad \mbox{weakly in $H^{1}(\mathbb{R}^{d})$, and a.e. in $\mathbb{R}^{d}$}, \\[6pt] \label{09/02/11/18:06} & \lim_{n\to \infty} e^{-\frac{i}{2}t_{n}\Delta}f = e^{-\frac{i}{2}t_{\infty}\Delta}f \quad \mbox{strongly in $H^{1}(\mathbb{R}^{d})$}. \end{align} Moreover, the triangle inequality gives us that \begin{equation}\label{09/02/11/18:08} \begin{split} &\left| \left\| f_{n} \right\|_{L^{q}}^{q} - \left\| e^{-\frac{i}{2}t_{n}\Delta} \left( f_{n}^{*}- f\right) \right\|_{L^{q}}^{q} -\left\| e^{-\frac{i}{2}t_{n}\Delta}f \right\|_{L^{q}}^{q} \right| \\[6pt] &\le \left| \left\| e^{-\frac{i}{2}t_{n}\Delta}f_{n}^{*} \right\|_{L^{q}}^{q} - \left\| e^{-\frac{i}{2}t_{n}\Delta}f_{n}^{*}-e^{-\frac{i}{2}t_{\infty}\Delta}f \right\|_{L^{q}}^{q} -\left\| e^{-\frac{i}{2}t_{\infty}\Delta}f \right\|_{L^{q}}^{q} \right| \\[6pt] &\qquad +\left| \left\| e^{-\frac{i}{2}t_{n}\Delta} \left( f_{n}^{*}- f \right) \right\|_{L^{q}}^{q} - \left\|e^{-\frac{i}{2}t_{n}\Delta} f_{n}^{*}- e^{-\frac{i}{2}t_{\infty}\Delta} f \right\|_{L^{q}}^{q} \right| \\[6pt] &\qquad +\left| \left\| e^{-\frac{i}{2}t_{n}\Delta}f \right\|_{L^{q}}^{q} - \left\| e^{-\frac{i}{2}t_{\infty}\Delta}f \right\|_{L^{q}}^{q} \right| \qquad \mbox{for all $q\in [2, 2^{*})$}. \end{split} \end{equation} Here, we have by the Sobolev embedding that \begin{equation}\label{10/04/25/10:34} \sup_{n\in \mathbb{N}} \left\| e^{-\frac{i}{2}t_{n}\Delta} f_{n}^{*}\right\|_{L^{q}} \lesssim \sup_{n\in \mathbb{N}} \left\|e^{-\frac{i}{2}t_{n}\Delta} f_{n}^{*} \right\|_{H^{1}} = \sup_{n\in \mathbb{N}} \left\| f_{n} \right\|_{H^{1}}<\infty \quad \mbox{for all $q\in [2, 2^{*})$}.
\end{equation} Therefore, Lemma \ref{08/03/28/21:21}, together with (\ref{09/12/07/11:14}) and (\ref{10/04/25/10:34}), implies that the first term on the right-hand side of (\ref{09/02/11/18:08}) vanishes as $n\to \infty$. The second term also vanishes as $n\to \infty$. Indeed, it follows from (\ref{09/09/27/21:09}) and (\ref{09/02/11/18:06}) that \begin{equation}\label{09/02/11/18:07} \begin{split} &\lim_{n\to \infty} \left| \left\| e^{-\frac{i}{2}t_{n}\Delta} \left( f_{n}^{*}- f \right) \right\|_{L^{q}}^{q} -\left\|e^{-\frac{i}{2}t_{n}\Delta} f_{n}^{*}- e^{-\frac{i}{2}t_{\infty}\Delta} f \right\|_{L^{q}}^{q} \right| \\[6pt] &\lesssim \lim_{n\to \infty}\left( \left\| f_{n} \right\|_{H^{1}}^{q-1} + \left\|f \right\|_{H^{1}}^{q-1}\right) \left\| e^{-\frac{i}{2}t_{n}\Delta}f -e^{-\frac{i}{2}t_{\infty}\Delta}f \right\|_{L^{q}} = 0. \end{split} \end{equation} Moreover, (\ref{09/02/11/18:06}) immediately yields that the third term vanishes as $n\to \infty$. Thus, we have proved the property (\ref{08/08/21/14:35}) when $t_{\infty} \in \mathbb{R}$. \par We next suppose that $t_{\infty}\in \{\pm \infty\}$. In this case, (\ref{09/09/27/21:09}) and Lemma \ref{10/04/08/9:20} yield that \begin{equation}\label{10/04/08/10:19} \begin{split} &\lim_{n\to \infty}\left| \left\| f_{n} \right\|_{L^{q}}^{q} - \left\| e^{-\frac{i}{2}t_{n}\Delta} \left( f_{n}^{*}-f \right) \right\|_{L^{q}}^{q} -\left\| e^{-\frac{i}{2}t_{n}\Delta}f \right\|_{L^{q}}^{q} \right| \\[10pt] &= \lim_{n\to \infty}\left| \left\| e^{-\frac{i}{2}t_{n}\Delta} f_{n}^{*} \right\|_{L^{q}}^{q} - \left\| e^{-\frac{i}{2}t_{n}\Delta} \left( f_{n}^{*}-f \right) \right\|_{L^{q}}^{q} -\left\| e^{-\frac{i}{2}t_{n}\Delta}f \right\|_{L^{q}}^{q} \right| \\[10pt] &\le \lim_{n\to \infty} \left| \left\| e^{-\frac{i}{2}t_{n}\Delta} f_{n}^{*} \right\|_{L^{q}}^{q} - \left\| e^{-\frac{i}{2}t_{n}\Delta} \left( f_{n}^{*}-f \right) \right\|_{L^{q}}^{q} \right|+ \lim_{n\to \infty} \left\| e^{-\frac{i}{2}t_{n}\Delta}f \right\|_{L^{q}}^{q} \\[10pt] &\lesssim \lim_{n\to \infty} \left( \left\| f_{n} \right\|_{H^{1}}^{q-1} +\left\| f \right\|_{H^{1}}^{q-1} \right)\left\| e^{-\frac{i}{2}t_{n}\Delta}f \right\|_{L^{q}} + \lim_{n\to \infty} \left\| e^{-\frac{i}{2}t_{n}\Delta}f \right\|_{L^{q}}^{q} \\[6pt] &= 0 \qquad \mbox{for all $q\in (2,2^{*})$}. \end{split} \end{equation} Since (\ref{08/08/21/14:35}) with $q=2$ follows from (\ref{08/08/21/17:25}), we have proved (\ref{08/08/21/14:35}). \par The property (\ref{08/08/21/14:36}) immediately follows form (\ref{08/08/21/17:25}) and (\ref{08/08/21/14:35}). \par Finally, we shall prove (\ref{10/04/22/13:28}). It follows from (\ref{08/08/21/17:25}) that \begin{equation}\label{10/04/24/17:18} \left\|f_{n}^{*}-f \right\|_{H^{1}} \le \left\| f_{n} \right\|_{H^{1}} \quad \mbox{for sufficiently large $n \in \mathbb{N}$}. \end{equation} Hence, extracting further some subsequence of $\{f_{n}^{*}\}$, we obtain (\ref{10/04/22/13:28}). \end{proof} \begin{lemma}[Dichotomy] \label{08/08/23/16:32} Let $u$ be a function in $X(\mathbb{R})$ and let $\{(\eta_{n}^{1},\tau_{n}^{1})\}$, \ldots, $\{ (\eta_{n}^{L},\tau_{n}^{L})\}$ be sequences in $\mathbb{R}^{d}\times \mathbb{R}$ with \begin{equation}\label{10/04/18/18:07} \lim_{n\to \infty}\left( | \tau_{n}^{k}-\tau_{n}^{l}| + |\eta_{n}^{k}-\eta_{n}^{l}|\right)=\infty \quad \mbox{for all $1\le k < l \le L$}. 
\end{equation} Then, putting $ u_{n}^{l}(x,t):=u(x-\eta_{n}^{l},t-\tau_{n}^{l}) $, we have \begin{equation}\label{08/08/23/16:35} \lim_{n\to \infty}\left\| \biggm| \sum_{k=1}^{L}u_{n}^{k} \biggm|^{p-1}\sum_{l=1}^{L}u_{n}^{l} -\sum_{l=1}^{L}|u_{n}^{l}|^{p-1}u_{n}^{l} \right\|_{L^{\widetilde{r}_{1}'}(\mathbb{R};L^{q_{1}'})}=0 \end{equation} and \begin{equation}\label{09/12/07/11:54} \lim_{n\to \infty}\left\| \left| u_{n}^{k} \right|^{q_{j}-1} u_{n}^{l} \right\|_{L^{\frac{r_{j}}{q_{j}}}(\mathbb{R};L^{1})}=0 \quad \mbox{for all $j=1,2$ and $k\neq l$}, \end{equation} where $q_{j}$ and $r_{j}$ ($j=1,2$) are the exponents introduced in Section \ref{08/10/07/9:01}. \end{lemma} \begin{proof}[Proof of Lemma \ref{08/08/23/16:32}] Put \begin{equation}\label{10/05/11/15:20} r_{n}=\frac{1}{4}\inf_{1\le k<l\le L}\left( \left| \tau_{n}^{k}-\tau_{n}^{l} \right|+ \left| \eta_{n}^{k}-\eta_{n}^{l} \right|\right) \end{equation} and define functions $\chi_{n}$ and $\chi_{n}^{l}$ on $\mathbb{R}^{d} \times \mathbb{R}$ by \begin{equation}\label{10/05/11/15:21} \chi_{n}(x,t):=\left\{ \begin{array}{ccl} 1 &\mbox{if}& |(x,t)|< r_{n}, \\[6pt] 0 &\mbox{if}& |(x,t)|\ge r_{n}, \end{array} \right. \qquad \chi_{n}^{l}(x,t):=\chi_{n}(x-\eta_{n}^{l},t-\tau_{n}^{l}). \end{equation} Now, we shall prove (\ref{08/08/23/16:35}). The triangle inequality and Lemma \ref{09/03/06/16:46} show that \begin{equation}\label{10/04/11/1:43} \begin{split} &\left\| \biggm| \sum_{k=1}^{L}u_{n}^{k} \biggm|^{p-1}\sum_{l=1}^{L}u_{n}^{l} -\sum_{l=1}^{L}|u_{n}^{l}|^{p-1}u_{n}^{l} \right\|_{L^{\widetilde{r}_{1}'}(\mathbb{R};L^{q_{1}'})} \\[6pt] & \le \sum_{l=1}^{L} \left\| \biggm| \sum_{k=1}^{L}u_{n}^{k} \biggm|^{p-1} u_{n}^{l} -|u_{n}^{l}|^{p-1}u_{n}^{l} \right\|_{L^{\widetilde{r}_{1}'}(\mathbb{R};L^{q_{1}'})} \\[6pt] &\lesssim \sum_{l=1}^{L} \sum_{{k=1}\atop {k\neq l}}^{L} \left\| \left| u_{n}^{k} \right|^{p-1} |u_{n}^{l}| \right\|_{L^{\widetilde{r}_{1}'}(\mathbb{R};L^{q_{1}'})}, \end{split} \end{equation} where the implicit constant depends only on $p$ and $L$. Hence, for (\ref{08/08/23/16:35}), it suffices to show the following estimate: \begin{equation}\label{10/04/12/1:51} \lim_{n\to \infty} \left\| \left| u_{n}^{k} \right|^{p-1} |u_{n}^{l}| \right\|_{L^{\widetilde{r}_{1}'}(\mathbb{R};L^{q_{1}'})} =0 \quad \mbox{for all $k\neq l$}. \end{equation} We begin with the following estimate: \begin{equation}\label{09/01/11/23:40} \begin{split} \left\| \left| u_{n}^{k} \right|^{p-1} |u_{n}^{l}| \right\|_{L^{\widetilde{r}_{1}'}(\mathbb{R};L^{q_{1}'})} &\le \left\| \left| u_{n}^{k} \right|^{p-1} |u_{n}^{l}| - \left| \chi_{n}^{k}u_{n}^{k} \right|^{p-1} |\chi_{n}^{l}u_{n}^{l}| \right\|_{L^{\widetilde{r}_{1}'}(\mathbb{R};L^{q_{1}'})} \\[6pt] &\quad + \left\| \left| \chi_{n}^{k}u_{n}^{k} \right|^{p-1} |\chi_{n}^{l}u_{n}^{l}| \right\|_{L^{\widetilde{r}_{1}'}(\mathbb{R};L^{q_{1}'})} .
\end{split} \end{equation} The H\"older inequality gives an estimate for the first term on the right-hand side of (\ref{09/01/11/23:40}): \begin{equation}\label{10/04/12/0:33} \begin{split} &\left\| \left| u_{n}^{k} \right|^{p-1} |u_{n}^{l}| - \left| \chi_{n}^{k}u_{n}^{k} \right|^{p-1} |\chi_{n}^{l}u_{n}^{l}| \right\|_{L^{\widetilde{r}_{1}'}(\mathbb{R};L^{q_{1}'})} \\[6pt] &\le \left\| \left| u_{n}^{k} \right|^{p-1} |u_{n}^{l}| - \left| \chi_{n}^{k}u_{n}^{k} \right|^{p-1} |u_{n}^{l}| \right\|_{L^{\widetilde{r}_{1}'}(\mathbb{R};L^{q_{1}'})} \\[6pt] &\qquad \qquad + \left\| \left| \chi_{n}^{k}u_{n}^{k} \right|^{p-1} |u_{n}^{l}| - \left| \chi_{n}^{k}u_{n}^{k} \right|^{p-1} |\chi_{n}^{l}u_{n}^{l}| \right\|_{L^{\widetilde{r}_{1}'}(\mathbb{R};L^{q_{1}'})} \\[6pt] &\lesssim \left\|(1-\chi_{n}^{k}) u_{n}^{k} \right\|_{X(\mathbb{R})}^{p-1} \left\| u_{n}^{l} \right\|_{X(\mathbb{R})} + \left\| u_{n}^{k} \right\|_{X(\mathbb{R})}^{p-1} \left\|(1-\chi_{n}^{l})u_{n}^{l}\right\|_{X(\mathbb{R})} \\[6pt] &= \left\|(1-\chi_{n}) u \right\|_{X(\mathbb{R})}^{p-1} \left\| u \right\|_{X(\mathbb{R})} + \left\| u \right\|_{X(\mathbb{R})}^{p-1} \left\|(1-\chi_{n})u\right\|_{X(\mathbb{R})} , \end{split} \end{equation} where the implicit constant depends only on $d$, $p$ and $q_{1}$. Note here that \begin{align}\label{10/04/12/0:39} &\lim_{n\to \infty}(1-\chi_{n}(x,t)) u(x,t) = 0 \quad \mbox{a.a. $(x,t) \in \mathbb{R}^{d}\times \mathbb{R}$ and all $l=1,\ldots, L$}, \\[6pt] \label{10/04/12/0:40} &\left| (1-\chi_{n}(x,t)) u(x,t) \right| \le |u(x,t)| \quad \mbox{a.a. $(x,t)\in \mathbb{R}^{d}\times \mathbb{R}$ and all $l=1,\ldots, L$}, \end{align} so that the Lebesgue dominated convergence theorem gives us that \begin{equation}\label{09/12/07/18:34} \lim_{n\to \infty}\left\| (1-\chi_{n})u\right\|_{X(\mathbb{R})} =0 . \end{equation} Hence, the first term on the right-hand side of (\ref{09/01/11/23:40}) vanishes as $n\to \infty$. \par On the other hand, it follows from \begin{equation}\label{10/04/12/0:50} {\rm supp}\, \chi_{n}^{k}\cap {\rm supp}\,\chi_{n}^{l}=\emptyset \quad \mbox{for $k\neq l$} \end{equation} that the second term on the right-hand side of (\ref{09/01/11/23:40}) is zero. Thus, we have proved (\ref{10/04/12/1:51}). \par In a way similar to the proof of (\ref{10/04/12/1:51}), we can obtain (\ref{09/12/07/11:54}). \end{proof} Now, we are in a position to prove Lemma \ref{08/08/19/23:07}. \begin{proof}[Proof of Lemma \ref{08/08/19/23:07}] Put $f_{n}=\psi_{n}(0)$. Since $\psi_{n}(0) \in PW_{+}$, Proposition \ref{09/06/21/19:28} and (\ref{08/08/26/15:56}) give us that \begin{equation}\label{09/02/23/17:18} \begin{split} \limsup_{n\to \infty}\mathcal{N}_{2}(f_{n}) &\le \limsup_{n\to \infty} \sqrt{\frac{d(p-1)}{d(p-1)-4}}^{\frac{d}{2}(p-1)-2}\widetilde{\mathcal{N}}_{2}(f_{n}) \\ &= \sqrt{\frac{d(p-1)}{d(p-1)-4}}^{\frac{d}{2}(p-1)-2} \widetilde{N}_{c} \\ &<\sqrt{\frac{d(p-1)}{d(p-1)-4}}^{\frac{d}{2}(p-1)-2} \widetilde{N}_{2} =N_{2}. 
\end{split} \end{equation} Now, we apply Lemma \ref{08/08/21/14:12} to the sequence $\{f_{n}\}$ and obtain that: There exist a subsequence of $\{f_{n}\}$ (still denoted by the same symbol), a nontrivial function $f^{1} \in H^{1}(\mathbb{R}^{d})$, a sequence $\{t_{n}^{1}\}$ in $\mathbb{R}$ with $t_{n}^{1}\to t_{\infty}^{1} \in \mathbb{R}\cup \{\pm \infty\}$, and a sequence $\{y_{n}^{1}\}$ in $\mathbb{R}^{d}$ such that, putting \begin{equation}\label{10/05/11/16:15} f_{n}^{1}(x):=\left( e^{\frac{i}{2}t_{n}^{1}\Delta}e^{y_{n}^{1}\cdot \nabla }f_{n}\right)(x)=e^{\frac{i}{2}t_{n}^{1}\Delta}f_{n}(x+y_{n}^{1}), \end{equation} we have: \begin{align}\label{08/08/21/18:10} &\lim_{n\to \infty}f_{n}^{1}= f^{1} \quad \mbox{weakly in $H^{1}(\mathbb{R}^{d})$} \quad \mbox{for all $s \in [0,1]$}, \\[6pt] \label{08/08/26/16:39} &\lim_{n\to \infty}\left\{ \left\| |\nabla|^{s} f_{n} \right\|_{L^{2}}^{2}-\left\| |\nabla|^{s} (f_{n}^{1} -f^{1}) \right\|_{L^{2}}^{2} \right\} =\left\| |\nabla|^{s} f^{1}\right\|_{L^{2}}^{2} \quad \mbox{for all $s\in [0,1]$}, \\[6pt] \label{08/08/26/16:40} &\lim_{n\to \infty}\left\{ \left\| f_{n} \right\|_{L^{q}}^{q} \!-\left\| e^{-\frac{i}{2}t_{n}^{1}\Delta}(f_{n}^{1} -f^{1})\right\|_{L^{q}}^{q}\!\! -\left\| e^{-\frac{i}{2}t_{n}^{1}\Delta}f^{1} \right\|_{L^{q}}^{q}\right\}=0 \quad \mbox{for all $q \in [2,2^{*})$}, \\[6pt] \label{08/08/21/17:49} &\lim_{n\to \infty}\left\{ \mathcal{H}(f_{n})-\mathcal{H}(e^{-\frac{i}{2}t_{n}^{1}\Delta} (f_{n}^{1} -f^{1}))-\mathcal{H}(e^{-\frac{i}{2}t_{n}^{1}\Delta}f^{1}) \right\}=0, \\[6pt] \label{09/02/23/17:55} & \sup_{n\in \mathbb{N}} \left\| f_{n}^{1} -f^{1} \right\|_{H^{1}} <\infty . \end{align} Besides, it follows from (\ref{09/02/23/17:18}) and (\ref{08/08/26/16:39}) that \begin{align} \label{09/12/05/16:00} &\mathcal{N}_{2}(f^{1}) \le \limsup_{n\to \infty}\mathcal{N}_{2}(f_{n})<N_{2}, \\[6pt] \label{09/08/03/0:14} &\limsup_{n\to \infty}\mathcal{N}_{2}(e^{-\frac{i}{2}t_{n}^{1}\Delta}(f_{n}^{1} -f^{1})) =\limsup_{n\to \infty}\mathcal{N}_{2}(f_{n}^{1} -f^{1}) \le \limsup_{n\to \infty}\mathcal{N}_{2}(f_{n}) <N_{2}, \end{align} so that we have by the definition of $N_{2}$ (see (\ref{08/07/02/23:24})) that \begin{align} \label{09/02/23/17:19} &0< \mathcal{K}(f^{1}) < \mathcal{H}(f^{1}), \\[6pt] \label{09/02/22/21:22} &0< \mathcal{K}(e^{-\frac{i}{2}t_{n}^{1}\Delta}(f_{n}^{1} -f^{1})) < \mathcal{H}(e^{-\frac{i}{2}t_{n}^{1}\Delta}(f_{n}^{1} -f^{1})) \quad \mbox{for sufficiently large $n \in \mathbb{N}$}. \end{align} We shall show that \begin{equation}\label{09/05/02/11:03} \left\{ \begin{array}{ccl} e^{-\frac{i}{2}t_{\infty}^{1}\Delta}f^{1} \in PW_{+} &\mbox{if}& t_{\infty}^{1} \in \mathbb{R}, \\[10pt] f^{1} \in \Omega &\mbox{if}& t_{\infty}^{1}=\pm \infty. \end{array} \right. \end{equation} Suppose first that $t_{\infty}^{1} \in \mathbb{R}$. Then, the estimate (\ref{09/02/23/17:18}) gives us that \begin{equation}\label{10/04/20/18:16} \mathcal{N}_{2}(e^{-\frac{i}{2}t_{\infty}^{1}\Delta}f^{1})=\mathcal{N}_{2}(f^{1})<N_{2}, \end{equation} which, together with the definition of $N_{2}$, yields that \begin{equation}\label{09/05/02/11:39} 0< \mathcal{K}(e^{-\frac{i}{2}t_{\infty}^{1}\Delta}f^{1}) <\mathcal{H}(e^{-\frac{i}{2}t_{\infty}^{1}\Delta}f^{1}). 
\end{equation} Moreover, (\ref{08/08/26/16:39}) and (\ref{08/08/21/17:49}), with the help of (\ref{09/02/22/21:22}), show that \begin{equation}\label{10/04/20/18:23} \widetilde{\mathcal{N}}_{2}(e^{-\frac{i}{2}t_{\infty}^{1}\Delta}f^{1}) = \lim_{n\to \infty}\widetilde{\mathcal{N}}_{2}(e^{-\frac{i}{2}t_{n}^{1}\Delta}f^{1}) \le \lim_{n\to \infty}\widetilde{\mathcal{N}}_{2}(f_{n}). \end{equation} Combining (\ref{10/04/20/18:23}) with (\ref{08/08/26/15:56}), we obtain that \begin{equation}\label{09/05/02/11:40} \widetilde{\mathcal{N}}_{2}(e^{-\frac{i}{2}t_{\infty}^{1}\Delta}f^{1}) \le \widetilde{N}_{c}<\widetilde{N}_{2}. \end{equation} Hence, (\ref{09/05/02/11:39}) and (\ref{09/05/02/11:40}), with the help of the relation (\ref{08/06/15/14:38}), show that $e^{-\frac{i}{2}t_{\infty}^{1}\Delta}f^{1} \in PW_{+}$. \par We next suppose that $t_{\infty}^{1}\in \{\pm \infty\}$. In this case, the formula (\ref{08/08/21/17:49}), together with (\ref{09/02/22/21:22}), yields that \begin{equation}\label{10/05/14/11:19} \begin{split} \left\| \nabla f^{1} \right\|_{L^{2}}^{2}&= \mathcal{H}(e^{-\frac{i}{2}t_{n}^{1}\Delta}f^{1}) +\frac{2}{p+1}\left\| e^{-\frac{i}{2}t_{n}^{1}\Delta}f^{1} \right\|_{L^{p+1}}^{p+1} \\[6pt] &\le \mathcal{H}(f_{n}) +o_{n}(1) +\frac{2}{p+1}\left\| e^{-\frac{i}{2}t_{n}^{1}\Delta}f^{1} \right\|_{L^{p+1}}^{p+1}, \end{split} \end{equation} so that we have by Lemma \ref{10/04/08/9:20} that \begin{equation}\label{09/02/18/18:02} \left\| \nabla f^{1} \right\|_{L^{2}}^{2}\le \lim_{n\to \infty}\mathcal{H}(f_{n}). \end{equation} This estimate and (\ref{08/08/26/16:39}) with $s=0$, together with (\ref{08/08/26/15:56}), show that \begin{equation}\label{10/05/11/16:39} \mathcal{N}_{2}(f^{1}) \le \lim_{n\to \infty} \left\|f_{n} \right\|_{L^{2}}^{p+1-\frac{d}{2}(p-1)} \sqrt{\mathcal{H}(f_{n})}^{\frac{d}{2}(p-1)-2} = \widetilde{N}_{c} <\widetilde{N}_{2}. \end{equation} Hence, we have that $f^{1} \in \Omega$. \par Now, we suppose that \begin{equation}\label{10/05/11/16:49} \limsup_{n\to \infty} \left\| e^{\frac{i}{2} t \Delta } \left( f_{n}^{1} -f^{1} \right) \right\|_{L^{\infty}(\mathbb{R};L^{\frac{d}{2}(p-1)}) }>0. \end{equation} Then, we can apply Lemma \ref{08/08/21/14:12} to the sequence $\{f_{n}^{1}-f^{1}\}$, so that we find that: There exist a subsequence of $\{f_{n}^{1}-f^{1}\}$ (still denoted by the same symbol), a nontrivial function $f^{2} \in H^{1}(\mathbb{R}^{d})$, a sequence $\{t_{n}^{2}\}$ in $\mathbb{R}$ with $\displaystyle{\lim_{n\to \infty}t_{n}^{2}= t_{\infty}^{2}\in \mathbb{R}\cup \{\pm \infty\}}$ and a sequence $\{y_{n}^{2}\}$ in $\mathbb{R}^{d}$ such that, putting \begin{equation}\label{09/09/14/17:44} f_{n}^{2}:=e^{\frac{i}{2}t_{n}^{2}\Delta} e^{ y_{n}^{2}\cdot \nabla } \left( f_{n}^{1}-f^{1} \right), \end{equation} we have: \begin{align} \label{09/03/04/16:39} &\lim_{n\to \infty}f_{n}^{2}= f^{2} \quad \mbox{weakly in $H^{1}(\mathbb{R}^{d})$}, \\[6pt] \label{09/07/31/18:13} &\lim_{n\to \infty}\left\{ \left\| |\nabla|^{s} \left( f_{n}^{1}-f^{1}\right) \right\|_{L^{2}}^{2} -\left\| |\nabla |^{s}\left( f_{n}^{2}-f^{2} \right) \right\|_{L^{2}}^{2} \right\}=\left\||\nabla |^{s} f^{2} \right\|_{L^{2}}^{2} \quad \mbox{for all $s\in [0,1]$}, \\[6pt] \label{09/07/31/18:14} & \lim_{n\to \infty}\left\{ \left\| f_{n}^{1}-f^{1} \right\|_{L^{q}}^{q} \! -\left\| e^{-\frac{i}{2}t_{n}^{2}\Delta}(f_{n}^{2} -f^{2})\right\|_{L^{q}}^{q} \!\!
-\left\| e^{-\frac{i}{2}t_{n}^{2}\Delta}f^{2} \right\|_{L^{q}}^{q}\right\}=0 \quad \mbox{for all $q\in [2,2^{*})$}, \\[6pt] \label{09/07/31/18:15} & \lim_{n\to \infty}\left\{ \mathcal{H}( f_{n}^{1}-f^{1})-\mathcal{H}(e^{-\frac{i}{2}t_{n}^{2}\Delta} (f_{n}^{2} -f^{2}))-\mathcal{H}(e^{-\frac{i}{2}t_{n}^{2}\Delta}f^{2}) \right\}=0, \\[6pt] \label{09/07/31/18:34} &\sup_{n\in \mathbb{N}} \left\| |\nabla|^{s} (f_{n}^{2} -f^{2}) \right\|_{L^{2}} < \infty. \end{align} Note here that $f_{n}$ is represented in the form \begin{equation}\label{10/04/25/16:09} f_{n} = e^{-\frac{i}{2}\tau_{n}^{1}\Delta}e^{-\eta_{n}^{1} \cdot \nabla }f^{1} +e^{-\frac{i}{2}\tau_{n}^{2}\Delta}e^{-\eta_{n}^{2} \cdot \nabla }f^{2} + e^{-\frac{i}{2}\tau_{n}^{2}\Delta}e^{-\eta_{n}^{2} \cdot \nabla }\left( f_{n}^{2}-f^{2} \right) \quad \mbox{for all $n\in \mathbb{N}$}, \end{equation} where \begin{equation}\label{09/09/14/17:41} \tau_{n}^{1}:=t_{n}^{1}, \quad \eta_{n}^{1}:=y_{n}^{1}, \qquad \tau_{n}^{2}:=\tau_{n}^{1}+t_{n}^{2}, \quad \eta_{n}^{2}:=\eta_{n}^{1}+y_{n}^{2}. \end{equation} We also note that the following formulas hold: \begin{align} \label{09/08/01/21:47} &\lim_{n\to \infty}\left\{ \left\| |\nabla|^{s} f_{n} \right\|_{L^{2}}^{2} -\left\| |\nabla |^{s}\left( f_{n}^{2}-f^{2} \right) \right\|_{L^{2}}^{2} \right\}=\sum_{k=1}^{2} \left\||\nabla |^{s} f^{k} \right\|_{L^{2}}^{2} \quad \mbox{for all $s \in [0,1]$}, \\[6pt] \label{09/08/01/21:48} &\lim_{n\to \infty}\left\{ \left\| f_{n} \right\|_{L^{q}}^{q} \!- \left\| e^{-\frac{i}{2}\tau_{n}^{2}\Delta}(f_{n}^{2} -f^{2})\right\|_{L^{q}}^{q}\!\! -\sum_{k=1}^{2}\left\| e^{-\frac{i}{2}\tau_{n}^{k}\Delta}f^{k} \right\|_{L^{q}}^{q}\right\}=0 \quad \mbox{for all $q \in [2,2^{*})$}, \\[6pt] \label{09/08/01/21:49} &\lim_{n\to \infty}\left\{ \mathcal{H}( f_{n})-\mathcal{H}(e^{-\frac{i}{2}\tau_{n}^{2}\Delta} (f_{n}^{2} -f^{2}))-\sum_{k=1}^{2} \mathcal{H}(e^{-\frac{i}{2}\tau_{n}^{k}\Delta}f^{k}) \right\}=0. \end{align} Moreover, in a similar way to the proofs of (\ref{09/02/22/21:22}) and (\ref{09/05/02/11:03}), we obtain \begin{equation}\label{10/05/11/17:03} 0< \mathcal{K}\left(e^{-\frac{i}{2}\tau_{n}^{2}\Delta}\left( f_{n}^{2}-f^{2} \right) \right) < \mathcal{H}\left(e^{-\frac{i}{2}\tau_{n}^{2}\Delta}\left( f_{n}^{2}-f^{2} \right) \right) \end{equation} and \begin{equation}\label{10/05/11/17:02} \left\{ \begin{array}{ccl} e^{-\frac{i}{2}\tau_{\infty}^{2}\Delta}f^{2} \in PW_{+} &\mbox{if}& \tau_{\infty}^{2} \in \mathbb{R}, \\[10pt] f^{2} \in \Omega &\mbox{if}& \tau_{\infty}^{2}=\pm \infty. \end{array} \right. \end{equation} We shall show that \begin{equation}\label{09/02/23/16:16} \lim_{n\to \infty}|\tau_{n}^{2}-\tau_{n}^{1}|+|\eta_{n}^{2}-\eta_{n}^{1}|= \infty. \end{equation} Supposing that (\ref{09/02/23/16:16}) fails, we can take convergent subsequences of $\{\tau_{n}^{2}-\tau_{n}^{1} \}$ and $\{ \eta_{n}^{2}-\eta_{n}^{1} \}$. Then, (\ref{08/08/21/18:10}), together with the unitarity of the operators $e^{-\frac{i}{2}(\tau_{n}^{2}-\tau_{n}^{1})\Delta}$ and $e^{-(\eta_{n}^{2}-\eta_{n}^{1})\cdot \nabla}$ on $H^{1}(\mathbb{R}^{d})$, shows that \begin{equation}\label{10/05/11/17:06} \lim_{n\to \infty}f_{n}^{2} = 0 \quad \mbox{weakly in $H^{1}(\mathbb{R}^{d})$}, \end{equation} which contradicts the fact that $f^{2}$ is nontrivial. Thus, (\ref{09/02/23/16:16}) holds. \par Now, we suppose that \begin{equation}\label{10/05/11/17:53} \limsup_{n\to \infty} \left\| e^{\frac{i}{2} t \Delta } \left( f_{n}^{2} -f^{2} \right) \right\|_{L^{\infty}(\mathbb{R};L^{\frac{d}{2}(p-1)}) } >0.
\end{equation} Then, we can repeat the above procedure; Iterative use of Lemma \ref{08/08/21/14:12} implies the following Lemma: \begin{lemma}\label{09/09/15/15:12} For some subsequence of $\{f_{n}\}$ (still denoted by the same symbol), there exists \\ (i) a family of nontrivial functions in $H^{1}(\mathbb{R}^{d})$, $\{ f^{1}, f^{2}, f^{3}, \ldots \}$ , and \\ (ii) a family of sequences in $\mathbb{R}^{d}\times \mathbb{R}$, $\left\{ \{(\eta_{n}^{1}, \tau_{n}^{1}) \}, \{(\eta_{n}^{2}, \tau_{n}^{2}) \}, \{(\eta_{n}^{3}, \tau_{n}^{3}) \}, \ldots \right\}$ with \begin{equation}\label{10/05/11/18:05} \displaystyle{\lim_{n\to \infty}\tau_{n}^{l}= \tau_{\infty}^{l} \in \mathbb{R} \cup \{\pm \infty \}} \quad \mbox{for all $l\ge 1$}, \end{equation} \begin{equation}\label{09/02/03/0:22} \left\{ \begin{array}{ccl} e^{-\frac{i}{2}\tau_{\infty}^{l}\Delta}f^{l} \in PW_{+} &\mbox{if} & \tau_{\infty}^{l}\in \mathbb{R}, \\[6pt] f^{l} \in \Omega &\mbox{if} & \tau_{\infty}^{l}=\pm \infty \end{array} \right. \mbox{for all $l\ge 1$}, \end{equation} and \begin{equation}\label{09/02/03/0:50} \lim_{n\to \infty}|\tau_{n}^{l}-\tau_{n}^{k}|+|\eta_{n}^{l}-\eta_{n}^{k}|= \infty \quad \mbox{for all $1\le k <l $}, \end{equation} such that, putting \[ \begin{split} f_{n}^{0}&:=f_{n}, \quad f^{0}:=0, \quad \tau_{n}^{0}:=0, \quad \eta_{n}^{0}:=0,\\[6pt] f_{n}^{l}&:=e^{\frac{i}{2}(\tau_{n}^{l}-\tau_{n}^{l-1})\Delta} e^{(\eta_{n}^{l}-\eta_{n}^{l-1})\cdot \nabla} (f_{n}^{l-1}-f^{l-1}) \quad \mbox{for $l\ge 1$}, \end{split} \] we have, for all $l\ge 1$: \begin{align} \label{08/08/21/18:17} &\lim_{n\to \infty}f_{n}^{l}= f^{l} \quad \mbox{weakly in $H^{1}(\mathbb{R}^{d})$, and strongly in $L^{q}_{loc}(\mathbb{R}^{d})$ for all $q \in [2,2^{*})$}, \\[6pt] \label{08/08/24/1:47} &\lim_{n\to \infty}\left\{ \left\| |\nabla|^{s} f_{n}\right\|_{L^{2}}^{2} -\left\| |\nabla |^{s}\left( f_{n}^{l}-f^{l}\right) \right\|_{L^{2}}^{2} \right\}=\sum_{k=1}^{l}\left\||\nabla |^{s} f^{k} \right\|_{L^{2}}^{2} \quad \mbox{for all $s \in [0,1]$}, \\[6pt] \label{08/08/22/15:03} &\lim_{n\to \infty} \left\{ \left\| f_{n} \right\|_{L^{q}}^{q} -\left\| e^{-\frac{i}{2}\tau_{n}^{l}\Delta} \left( f_{n}^{l}-f^{l}\right) \right\|_{L^{q}}^{q} -\sum_{k=1}^{l}\left\|e^{-\frac{i}{2}\tau_{n}^{k}\Delta}f^{k} \right\|_{L^{q}}^{q}\right\}=0 \quad \mbox{for all $q \in [2,2^{*})$}, \\[6pt] \label{08/08/21/18:22} &\lim_{n\to \infty} \left\{ \mathcal{H}(f_{n})-\mathcal{H}(e^{-\frac{i}{2}\tau_{n}^{l}\Delta} \left( f_{n}^{l}-f^{l}\right) ) -\sum_{k=1}^{l}\mathcal{H}(e^{-\frac{i}{2}\tau_{n}^{k}\Delta}f^{k}) \right\}=0, \\[6pt] \label{09/03/05/18:36} &\mathcal{K}(e^{-\frac{i}{2}\tau_{n}^{l}\Delta}\left( f_{n}^{l}-f^{l}\right) )>0. \end{align} \noindent Furthermore, putting $N:=\#\{ f^{1}, f^{2},f^{3},\ldots \}$, we have the alternatives: if $N$ is finite, then \begin{equation} \label{09/09/15/16:19} \lim_{n\to \infty}\left\|e^{\frac{i}{2}t\Delta}\left( f_{n}^{N}-f^{N} \right) \right\|_{L^{\infty}(\mathbb{R};L^{\frac{d}{2}(p-1)})\cap X(\mathbb{R})}=0; \end{equation} if $N=\infty$, then \begin{equation} \label{09/09/15/16:20} \lim_{l \to \infty}\lim_{n\to \infty}\left\|e^{\frac{i}{2}t\Delta}\left( f_{n}^{l}-f^{l} \right) \right\|_{L^{\infty}(\mathbb{R};L^{\frac{d}{2}(p-1)})\cap X(\mathbb{R})}=0. \end{equation} \end{lemma} \begin{proof}[Proof of Lemma \ref{09/09/15/15:12}] The last assertion (\ref{09/09/15/16:20}) is nontrivial. We prove this, provided that the other properties are proved. 
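Before entering the details, we record a consequence of (\ref{08/08/24/1:47}) with $s=0$ and the uniform bound (\ref{09/02/02/23:40}), which will be used repeatedly below:
\begin{equation*}
\sum_{k=1}^{l}\left\| f^{k} \right\|_{L^{2}}^{2}
=\lim_{n\to \infty}\left\{ \left\| f_{n} \right\|_{L^{2}}^{2}-\left\| f_{n}^{l}-f^{l} \right\|_{L^{2}}^{2} \right\}
\le \sup_{n\in \mathbb{N}}\left\| f_{n} \right\|_{L^{2}}^{2}<\infty
\quad \mbox{for all $l \ge 1$},
\end{equation*}
that is, the profiles $f^{k}$ carry almost orthogonal portions of the $L^{2}$-mass of $f_{n}$; in particular, $\displaystyle{\lim_{l \to \infty}\left\| f^{l} \right\|_{L^{2}}=0}$ when $N=\infty$.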
We first show that \begin{equation} \label{09/09/15/16:40} \lim_{l \to \infty}\lim_{n\to \infty}\left\|e^{\frac{i}{2}t\Delta}\left( f_{n}^{l}-f^{l} \right) \right\|_{L^{\infty}(\mathbb{R};L^{\frac{d}{2}(p-1)})}=0. \end{equation} Suppose to the contrary that there exists a constant $\varepsilon_{0}>0$ such that \begin{equation}\label{10/01/02/11:25} \limsup_{l \to \infty}\lim_{n\to \infty}\left\|e^{\frac{i}{2}t\Delta}\left( f_{n}^{l}-f^{l} \right) \right\|_{L^{\infty}(\mathbb{R};L^{\frac{d}{2}(p-1)})}\ge \varepsilon_{0}. \end{equation} Then, we can take an increasing sequence of indices $\{j(L)\}_{L\in \mathbb{N}}$ such that \begin{equation}\label{10/01/02/11:29} \lim_{n\to \infty} \left\|e^{\frac{i}{2}t\Delta}\left( f_{n}^{j(L)}-f^{j(L)} \right) \right\|_{L^{\infty}(\mathbb{R};L^{\frac{d}{2}(p-1)})}\ge \frac{\varepsilon_{0}}{2}. \end{equation} Here, it follows from the construction of the family $\{f^{1},f^{2},f^{3}, \ldots \}$ (see also (\ref{10/04/29/12:38}) in the proof of Lemma \ref{08/08/21/14:12}) that there exists a constant $C(\varepsilon_{0})>0$ depending only on $d$, $p$, $\varepsilon_{0}$ and $\displaystyle{\sup_{n\in \mathbb{N}}\left\| f_{n} \right\|_{L^{2}}}$ such that \begin{equation}\label{09/01/08/0:51} \left\|f^{j(L)+1} \right\|_{L^{2}}\ge C(\varepsilon_{0}) \quad \mbox{for all $L \in \mathbb{N}$}. \end{equation} Then, the uniform bound (\ref{09/02/02/23:40}) (recall that $f_{n}=\psi_{n}(0)$), together with (\ref{08/08/24/1:47}) and (\ref{09/01/08/0:51}), yields that \begin{equation} \label{08/08/22/15:28} \begin{split} \sup_{n\in \mathbb{N}}\left\| f_{n} \right\|_{L^{2}}^{2} &\ge \lim_{n\to \infty}\left\{ \left\| f_{n} \right\|_{L^{2}}^{2} -\left\| f_{n}^{{j(M)+1}}-f^{{j(M)+1}}\right\|_{L^{2}}^{2} \right\} =\sum_{k=1}^{j(M)+1}\left\|f^{k} \right\|_{L^{2}}^{2} \\[6pt] &\ge \sum_{L=1}^{M}\left\| f^{j(L)+1}\right\|_{L^{2}}^{2} \ge MC(\varepsilon_{0})^{2} \quad \mbox{for all $M\in \mathbb{N}$}. \end{split} \end{equation} Since $\displaystyle{\sup_{n\in \mathbb{N}}\left\| f_{n} \right\|_{L^{2}}<\infty}$, taking $M \to \infty$ in (\ref{08/08/22/15:28}), we have a contradiction. Thus, (\ref{09/09/15/16:40}) holds. \par It remains to prove that \begin{equation} \label{10/05/11/18:21} \lim_{l \to \infty}\lim_{n\to \infty}\left\|e^{\frac{i}{2}t\Delta}\left( f_{n}^{l}-f^{l} \right) \right\|_{X(\mathbb{R})}=0. \end{equation} This estimate follows from Lemma \ref{09/04/29/15:57} and (\ref{09/09/15/16:40}). \end{proof} \noindent {\it Proof of Lemma \ref{08/08/19/23:07} (continued)} We are back to the proof of Lemma \ref{08/08/19/23:07}. \par We begin by proving that $N=1$ in Lemma \ref{09/09/15/15:12}. To this end, we define an approximate solution $\psi_{n}^{app}$ to $\psi_{n}$: putting $L=N$ if $N<\infty$, and taking $L$ to be a sufficiently large number (to be specified later) if $N=\infty$, we define \begin{equation}\label{09/07/26/23:03} \psi_{n}^{app}(x,t) := \sum_{l=1}^{L} \psi^{l}(x-\eta_{n}^{l},t-\tau_{n}^{l} ). \end{equation} Here, each $\psi^{l}$ is the solution to (\ref{09/09/14/15:32}) (or (\ref{09/09/14/15:33})) with $f^{l}$ just found in Lemma \ref{09/09/15/15:12}, and each $(\eta_{n}^{l},\tau_{n}^{l})$ is the sequence found in Lemma \ref{09/09/15/15:12}, so that we find that \begin{align} \label{09/02/04/8:41} &\lim_{n\to \infty}\left\| \psi^{l}(-\tau_{n}^{l})-e^{-\frac{i}{2}\tau_{n}^{l}\Delta}f^{l} \right\|_{H^{1}}=0 \quad \mbox{for all $1\le l \le L$}, \\[6pt] \label{10/05/16/12:36} & \psi^{l}(t) \in PW_{+} \quad \mbox{for all $1\le l \le L$ and $t \in \mathbb{R}$}.
\end{align} We note again that if $\tau_{\infty}^{l}=\pm \infty$, then $\psi^{l}$ is the solution to the final value problem (\ref{09/09/14/15:33}); since $f^{l} \in \Omega$ if $\tau_{\infty}^{l}=\pm\infty$ (see (\ref{09/02/03/0:22})), we actually obtain the desired solution $\psi^{l}$ by Proposition \ref{09/01/12/16:36}. \par Now, we shall show that \begin{equation}\label{09/02/04/11:57} \| \psi^{l} \|_{X(\mathbb{R})}<\infty \quad \mbox{for all $1\le l\le L$}, \quad \mbox{if $N\ge 2$}. \end{equation} For (\ref{09/02/04/11:57}), it suffices to show that $\widetilde{\mathcal{N}}_{2}(\psi^{l}(0)) < \widetilde{N}_{c}$ by the definition of $\widetilde{N}_{c}$ (see (\ref{08/09/02/18:06})). Suppose $N\ge 2$, so that $L\ge 2$. Then, we employ (\ref{08/08/24/1:47}), (\ref{08/08/21/18:22}), (\ref{09/03/05/18:36}) and obtain that \begin{equation}\label{10/05/12/1:11} \begin{split} \widetilde{\mathcal{N}}_{2}(f_{n}) &= \sqrt{ \left\| f_{n}^{L}-f^{L} \right\|_{L^{2}}^{2}+ \sum_{k=1}^{L} \left\| f^{k} \right\|_{L^{2}}^{2} + o_{n}(1) }^{p+1-\frac{d}{2}(p-1)} \\[6pt] & \qquad \times \sqrt{ \mathcal{H} \left( e^{-\frac{i}{2}\tau_{n}^{L} \Delta} \left( f_{n}^{L}-f^{L} \right) \right) + \sum_{k=1}^{L} \mathcal{H} \left( e^{-\frac{i}{2}\tau_{n}^{k}\Delta}f^{k} \right) +o_{n}(1) }^{\frac{d}{2}(p-1)-2} \\[6pt] &\ge \sqrt{ \sum_{k=1}^{L} \left\| f^{k} \right\|_{L^{2}}^{2} }^{p+1-\frac{d}{2}(p-1)} \sqrt{ \sum_{k=1}^{L} \mathcal{H} \left( e^{-\frac{i}{2}\tau_{n}^{k}\Delta}f^{k} \right) }^{\frac{d}{2}(p-1)-2} +o_{n}(1). \end{split} \end{equation} Since (\ref{09/02/03/0:22}) and Lemma \ref{10/04/08/9:20} imply that \begin{equation}\label{10/05/12/0:43} \mathcal{H}\left(e^{-\frac{i}{2}\tau_{n}^{k}\Delta}f^{k}\right) >0 \quad \mbox{for all $k\ge 1$ and sufficiently large $n$}, \end{equation} and since $\psi^{l}$ is the solution to (\ref{09/09/14/15:32}) or (\ref{09/09/14/15:33}), we have from (\ref{08/08/26/15:56}) and (\ref{10/05/12/1:11}) that \begin{equation}\label{10/04/25/16:27} \begin{split} \widetilde{N}_{c}&= \lim_{n\to \infty}\widetilde{\mathcal{N}}_{2}(f_{n}) \\[6pt] &> \lim_{n\to \infty}\left\| f^{l} \right\|_{L^{2}}^{p+1-\frac{d}{2}(p-1)} \sqrt{ \mathcal{H} \left( e^{-\frac{i}{2}\tau_{n}^{l}\Delta}f^{l} \right) }^{\frac{d}{2}(p-1)-2} \\[6pt] &= \lim_{n\to \infty}\left\| \psi^{l}(-\tau_{n}^{l}) \right\|_{L^{2}}^{p+1-\frac{d}{2}(p-1)} \sqrt{ \mathcal{H} \left( \psi^{l}(-\tau_{n}^{l}) \right) }^{\frac{d}{2}(p-1)-2} \\[6pt] &= \lim_{n\to \infty}\left\| \psi^{l}(0) \right\|_{L^{2}}^{p+1-\frac{d}{2}(p-1)} \sqrt{ \mathcal{H} \left( \psi^{l}(0) \right) }^{\frac{d}{2}(p-1)-2} \\[6pt] &=\widetilde{\mathcal{N}}_{2}(\psi^{l}(0)) \qquad \mbox{for all $1\le l\le L$ and sufficiently large $n$}. \end{split} \end{equation} Thus, (\ref{09/02/04/11:57}) holds. \par We know by (\ref{09/02/04/11:57}) that \begin{equation}\label{10/05/12/1:51} \sup_{n\in \mathbb{N}}\left\| \psi_{n}^{app} \right\|_{X(\mathbb{R})}<\infty. \end{equation} Furthermore, we will see that, when $N=\infty$, there exists $A>0$ with the property that for any $L\in \mathbb{N}$ (the number of components of $\psi_{n}^{app}$, see (\ref{09/07/26/23:03})), there exists $n_{L} \in \mathbb{N}$ such that \begin{equation}\label{09/09/18/14:46} \sup_{n\ge n_{L}}\left\| \psi_{n}^{app} \right\|_{X(\mathbb{R})}\le A . \end{equation} We shall prove this fact. Recall that $X(\mathbb{R})=L^{r_{1}}(\mathbb{R};L^{q_{1}})\cap L^{r_{2}}(\mathbb{R};L^{q_{2}})$.
Lemma \ref{09/03/06/16:46} yields that \begin{equation}\label{09/09/21/18:19} \begin{split} &\left\| \psi_{n}^{app}(t) \right\|_{L^{q_{j}}}^{q_{j}} = \left\| \sum_{l=1}^{L}\psi^{l}(\cdot -\eta_{n}^{l}, t-\tau_{n}^{l}) \right\|_{L^{q_{j}}}^{q_{j}} \\[6pt] &\le \sum_{l=1}^{L}\left\|\psi^{l}(t-\tau_{n}^{l}) \right\|_{L^{q_{j}}}^{q_{j}} \\[6pt] &\quad + C_{L}\sum_{l=1}^{L}\sum_{{k=1}\atop {k\neq l}}^{L} \int_{\mathbb{R}^{d}} \left| \psi^{k}(x-\eta_{n}^{k},t-\tau_{n}^{k}) \right| \left| \psi^{l}(x-\eta_{n}^{l},t-\tau_{n}^{l})\right|^{q_{j}-1} \hspace{-0.2cm}dx \quad \mbox{for $j=1,2$} \end{split} \end{equation} for some constant $C_{L}>0$ depending only on $d$, $p$, $q_{1}$ and $L$. Therefore, we have by the triangle inequality that \begin{equation}\label{09/09/21/18:17} \begin{split} &\left\| \psi_{n}^{app} \right\|_{ L^{r_{j}} (\mathbb{R};L^{q_{j}}) }^{q_{j}} = \left\| \left\| \psi_{n}^{app} \right\|_{L^{q_{j}}}^{q_{j}} \right\|_{L^{\frac{r_{j}}{q_{j}}}(\mathbb{R})} \\[6pt] &\le \sum_{l=1}^{L} \left\| \psi^{l} \right\|_{L^{r_{j}}(\mathbb{R};L^{q_{j}})}^{q_{j}} \\[6pt] &\quad + C_{L} \sum_{l=1}^{L}\sum_{{k=1}\atop {k\neq l}}^{L} \left\| \left| \psi^{k}(\cdot-\eta_{n}^{k},\cdot-\tau_{n}^{k}) \right| \hspace{-2pt} \left| \psi^{l}(\cdot-\eta_{n}^{l},\cdot-\tau_{n}^{l})\right|^{q_{j}-1} \right\|_{L^{\frac{r_{j}}{q_{j}}}(\mathbb{R};L^{1})} \\[6pt] &=:I_{j}+II_{j} \qquad \mbox{for $j=1,2$}. \end{split} \end{equation} We first consider the term $I_{j}$. The formula (\ref{08/08/24/1:47}) and the uniform bound (\ref{09/02/02/23:40}) show that \begin{equation}\label{09/09/18/14:55} \sum_{l=1}^{\infty}\left\| f^{l} \right\|_{H^{1}}^{2} < \infty, \end{equation} so that \begin{equation}\label{09/09/18/14:56} \lim_{l\to \infty}\left\| f^{l} \right\|_{H^{1}}= 0. \end{equation} Hence, it follows from Proposition \ref{08/08/22/20:59} (when $\tau_{\infty}^{l}\in \mathbb{R}$) and Proposition \ref{09/01/12/16:36} (when $\tau_{\infty}^{l}=\pm \infty$) that: There exists $l_{0} \in \mathbb{N}$, independent of $L$ (the number of components of $\psi_{n}^{app}$), such that \begin{equation}\label{09/09/23/13:43} \| \psi^{l} \|_{X(\mathbb{R})} \lesssim \| f^{l} \|_{H^{1}}\le 1 \quad \mbox{for all $l \ge l_{0}$}, \end{equation} where the implicit constant depends only on $d$, $p$ and $q_{1}$. Then, (\ref{09/02/04/11:57}), (\ref{09/09/18/14:55}) and (\ref{09/09/23/13:43}), together with the fact that $q_{j}> 2$ for $j=1,2$, yield \begin{equation}\label{09/09/23/14:07} \begin{split} I_{j}&\le \sum_{l=1}^{l_{0}} \left\| \psi^{l} \right\|_{L^{r_{j}}(\mathbb{R};L^{q_{j}})}^{q_{j}} + \sum_{l=l_{0}+1}^{\infty}\left\| \psi^{l} \right\|_{L^{r_{j}} (\mathbb{R};L^{q_{j}})}^{q_{j}} \\[6pt] &\lesssim \sum_{l=1}^{l_{0}}\left\| \psi^{l} \right\|_{X(\mathbb{R})}^{q_{j}} + \sum_{l=l_{0}+1}^{\infty} \left\| f^{l} \right\|_{H^{1}}^{2} < \infty, \end{split} \end{equation} where the implicit constant depends only on $d$, $p$ and $q_{1}$. \par Next, we consider the term $II_{j}$. Lemma \ref{08/08/23/16:32}, together with the condition (\ref{09/02/03/0:50}), implies that: There exists $n_{L} \in \mathbb{N}$ such that \begin{equation}\label{10/05/12/2:36} \left\|\hspace{-2pt} \left| \psi^{k}(\cdot-\eta_{n}^{k},\cdot-\tau_{n}^{k}) \right| \hspace{-3pt} \left| \psi^{l}(\cdot-\eta_{n}^{l},\cdot-\tau_{n}^{l})\right|^{q_{j}-1} \right\|_{L^{\frac{r_{j}}{q_{j}}}(\mathbb{R};L^{1})} \hspace{-6pt} \le \frac{1}{C_{L}L^{2}} \quad \mbox{for all $n \ge n_{L}$ and $k\neq l$}. 
\end{equation} Hence, we see that \begin{equation}\label{09/09/23/14:09} II_{j} \le 1 \quad \mbox{for all $n \ge n_{L}$}. \end{equation} Thus, we have proved (\ref{09/09/18/14:46}). \par We shall show that the case $N\ge 2$ can not occur. Note that $\psi_{n}^{app}$ solves the following equation: \begin{equation}\label{09/09/18/15:20} 2i\frac{\partial }{\partial t} \psi_{n}^{app}+\Delta \psi_{n}^{app} +|\psi_{n}^{app}|^{p-1}\psi_{n}^{app} =e_{n}, \end{equation} where \begin{equation}\label{10/05/12/15:40} e_{n}(x,t):= |\psi_{n}^{app}(x,t)|^{p-1}\psi_{n}^{app}(x,t) -\sum_{l=1}^{L}\left|\psi^{l}(x-\eta_{n}^{l},t-\tau_{n}^{l})\right|^{p-1} \!\! \psi^{l}(x-\eta_{n}^{l},t-\tau_{n}^{l}). \end{equation} Proposition \ref{08/08/05/14:30} (long time perturbation theory), with the help of (\ref{09/09/18/14:46}), tells us that there exists $\varepsilon_{1}>0$, independent of $L$ when $N=\infty$, with the following property: If there exists $n\ge n_{L}$ ($n_{L}$ is the number found in (\ref{09/09/18/14:46}) if $N=\infty$, $n_{L}=1$ if $N<\infty$) such that \begin{equation}\label{09/09/16/2:10} \left\| e^{\frac{i}{2}t\Delta}\left( \psi_{n}(0)-\psi_{n}^{app}(0) \right)\right\|_{X(\mathbb{R})} \le \varepsilon_{1} \end{equation} and \begin{equation}\label{09/09/16/2:11} \left\| e_{n} \right\|_{L^{\widetilde{r}'}(\mathbb{R};L^{q_{1}'})}\le \varepsilon_{1}, \end{equation} then \begin{equation}\label{10/05/12/15:48} \left\| \psi_{n} \right\|_{X(\mathbb{R})}<\infty. \end{equation} In the sequel, we show that if $N\ge 2$, then (\ref{09/09/16/2:10}) and (\ref{09/09/16/2:11}) hold valid for some $L$, which shows $N=1$ since (\ref{10/05/12/15:48}) contradicts (\ref{09/02/17/14:53}). It is worth while noting here that \begin{equation}\label{08/08/23/22:06} f_{n} =\sum_{l=1}^{L}e^{-\frac{i}{2}\tau_{n}^{l}\Delta}e^{-\eta_{n}^{l}\cdot \nabla }f^{l}+e^{-\frac{i}{2}\tau_{n}^{L}\Delta}e^{-\eta_{n}^{L}\cdot \nabla }\left( f_{n}^{L}-f^{L}\right), \end{equation} in other words, \begin{equation}\label{09/09/15/17:42} e^{\frac{i}{2}t\Delta}f_{n} =\sum_{l=1}^{L}e^{\frac{i}{2}(t-\tau_{n}^{l})\Delta}e^{-\eta_{n}^{l}\cdot \nabla }f^{l}+e^{\frac{i}{2}(t-\tau_{n}^{L})\Delta}e^{-\eta_{n}^{L}\cdot \nabla }\left( f_{n}^{L}-f^{L}\right). \end{equation} Invoking Lemma \ref{08/08/23/16:32}, we find that (\ref{09/09/16/2:11}) holds for all sufficiently large $n$. We next consider (\ref{09/09/16/2:10}). The formula (\ref{09/09/15/17:42}), with the help of (\ref{08/10/25/23:16}), shows that \begin{equation}\label{09/09/16/12:26} \begin{split} &\left\| e^{\frac{i}{2}t\Delta} \left( \psi_{n}(0) -\psi_{n}^{app}(0) \right)\right\|_{X(\mathbb{R})} = \left\| e^{\frac{i}{2}t\Delta} \left( f_{n}-\sum_{l=1}^{L}e^{-\eta_{n}^{l}\cdot \nabla}\psi^{l}(-\tau_{n}^{l})\right) \right\|_{X(\mathbb{R})} \\[6pt] &\le \left\| e^{\frac{i}{2}(t-\tau_{n}^{L})\Delta}e^{-\eta_{n}^{L}\cdot \nabla }\left( f_{n}^{L}-f^{L}\right) \right\|_{X(\mathbb{R})} \\[6pt] &\qquad + \sum_{l=1}^{L}\left\| e^{\frac{i}{2}t\Delta} \left( e^{-\frac{i}{2}\tau_{n}^{l}\Delta}e^{-\eta_{n}^{l}\cdot \nabla }f^{l} -e^{-\eta_{n}^{l}\cdot \nabla}\psi^{l}(-\tau_{n}^{l}) \right)\right\|_{X(\mathbb{R})} \\[6pt] &\le \left\| e^{\frac{i}{2}t \Delta}\left( f_{n}^{L}-f^{L}\right) \right\|_{X(\mathbb{R})} + C \sum_{l=1}^{L}\left\| e^{-\frac{i}{2}\tau_{n}^{l}\Delta}f^{l} -\psi^{l}(-\tau_{n}^{l}) \right\|_{H^{1}}, \end{split} \end{equation} where $C$ is some constant depending only on $d$, $p$ and $q_{1}$. 
Here, we have by (\ref{09/09/15/16:19}) and (\ref{09/09/15/16:20}) that \begin{equation}\label{09/09/27/22:08} \begin{split} &\lim_{n\to \infty}\left\|e^{\frac{i}{2}t\Delta}\left( f_{n}^{L}-f^{L}\right) \right\|_{X(\mathbb{R})} \le \frac{\varepsilon_{1}}{4} \\[6pt] &\hspace{3cm} \mbox{for $L=N$ if $N<\infty$, and sufficiently large $L$ if $N=\infty$}. \end{split} \end{equation} Hence, for all $L \in \mathbb{N}$ satisfying (\ref{09/09/27/22:08}), there exists $n_{L,1} \in \mathbb{N}$ such that \begin{equation}\label{09/09/27/22:25} \left\| e^{\frac{i}{2}t \Delta}\left( f_{n}^{L}-f^{L}\right) \right\|_{X(\mathbb{R})}\le \frac{\varepsilon_{1}}{2} \quad \mbox{for all $n\ge n_{L,1}$}. \end{equation} Moreover, (\ref{09/02/04/8:41}) shows that for all $L \le N$, there exists $n_{L,2}\in \mathbb{N}$ such that \begin{equation}\label{09/09/16/2:37} \left\| e^{-\frac{i}{2}\tau_{n}^{l}\Delta}f^{l} -\psi^{l}(-\tau_{n}^{l}) \right\|_{H^{1}}\le \frac{\varepsilon_{1}}{2CL} \quad \mbox{for all $n\ge n_{L,2}$ and $1\le l \le L$}, \end{equation} where $C$ is the constant found in (\ref{09/09/16/12:26}). Combining (\ref{09/09/16/12:26}) with (\ref{09/09/27/22:25}) and (\ref{09/09/16/2:37}), we see that for all $L \in \mathbb{N}$ satisfying (\ref{09/09/27/22:08}), there exists $n_{L,3} \in \mathbb{N}$ such that \begin{equation}\label{09/02/04/9:03} \left\| e^{\frac{i}{2}t\Delta} \left( \psi_{n}(0) -\psi_{n}^{app}(0) \right)\right\|_{X(\mathbb{R})} \le \varepsilon_{1} \quad \mbox{for all $n\ge n_{L,3}$}, \end{equation} which gives (\ref{09/09/16/2:10}). \par We have just proved $N=1$, and therefore $L$ must be one; \[ \psi_{n}^{app}(x,t)=\psi^{1}(x-\eta_{n}^{1},t-\tau_{n}^{1})= \left( e^{-\tau_{n}^{1}\frac{\partial}{\partial t}-\eta_{n}^{1} \cdot \nabla } \psi^{1} \right)(x,t). \] Put $\Psi =\psi^{1}$, $f=f^{1}$, $(\gamma_{n},\tau_{n})=(\eta_{n}^{1},\tau_{n}^{1})$ and $\tau_{\infty}=\tau_{\infty}^{1}$. These are the desired objects. Indeed, we have already shown that these satisfy the properties (\ref{09/05/02/10:12}), (\ref{09/05/02/10:10}), (\ref{09/05/02/10:08}), (\ref{09/05/02/10:09}); see (\ref{10/05/16/12:36}) for (\ref{09/05/02/10:12}), (\ref{08/08/21/18:17}) for (\ref{09/05/02/10:10}), (\ref{09/09/15/16:19}) for (\ref{09/05/02/10:08}), and (\ref{09/02/04/8:41}) for (\ref{09/05/02/10:09}). Moreover, the property (\ref{10/04/15/20:43}) immediately follows from (\ref{09/05/02/10:09}). \par It remains to prove (\ref{09/05/02/10:11}), (\ref{10/05/13/10:09}), (\ref{09/05/02/10:13}), (\ref{09/07/27/1:49}) and (\ref{09/05/02/10:07}). \par We first prove that $\Psi$ satisfies the property (\ref{09/05/02/10:11}): $\left\|\Psi \right\|_{X(\mathbb{R})}=\infty$. Suppose to the contrary that $\left\|\Psi \right\|_{X(\mathbb{R})}<\infty$. Then, an argument quite similar to the one above applies, and we obtain the absurd conclusion \begin{equation}\label{10/05/12/16:34} \left\| \psi_{n} \right\|_{X(\mathbb{R})}<\infty \quad \mbox{for sufficiently large $n \in \mathbb{N}$}, \end{equation} which contradicts (\ref{09/02/17/14:53}). Hence, we have proved (\ref{09/05/02/10:11}). \par In order to prove (\ref{09/05/02/10:13}) and (\ref{09/07/27/1:49}), we shall show that there exists a subsequence of $\{f_{n}\}$ (still denoted by the same symbol) such that \begin{align} &\label{10/05/12/20:54} \left\| f \right\|_{L^{2}} = \lim_{n\to \infty}\left\| f_{n} \right\|_{L^{2}}, \\[6pt] &\label{10/05/12/20:53} \lim_{n\to \infty} \mathcal{H}\left(e^{-\frac{i}{2}\tau_{n}\Delta}f\right) = \lim_{n\to \infty} \mathcal{H}\left(f_{n}\right). 
\end{align} Since $f_{n}^{1} = e^{\frac{i}{2}\tau_{n}^{1}\Delta} e^{\eta_{n}^{1}\cdot \nabla}f_{n}(0)$ and $f=f^{1}$, the weak convergence result (\ref{08/08/21/18:17}), with the help of extraction of some subsequence, leads to that \begin{equation}\label{09/10/08/14:09} \left\| f \right\|_{L^{2}} \le \lim_{n\to \infty}\left\| f_{n} \right\|_{L^{2}}. \end{equation} Extracting some subsequence further, we also have by (\ref{08/08/21/18:22}) and (\ref{09/03/05/18:36}) that \begin{equation}\label{09/10/08/14:10} \lim_{n\to \infty} \mathcal{H}\left(e^{-\frac{i}{2}\tau_{n}\Delta}f\right) \le \lim_{n\to \infty} \mathcal{H}\left(f_{n}\right). \end{equation} Here, we employ the mass conservation law (\ref{08/05/13/8:59}) and (\ref{09/05/02/10:09}) (or the formula (\ref{09/02/04/8:41}) with $l=1$) to obtain that \begin{equation}\label{09/02/18/18:15} \left\| \Psi(0) \right\|_{L^{2}}=\lim_{n\to \infty} \left\| \Psi(-\tau_{n})\right\|_{L^{2}}=\left\| f \right\|_{L^{2}}. \end{equation} Moreover, the energy conservation law (\ref{08/05/13/9:03}) and (\ref{09/02/04/8:41}) with $l=1$ give us that \begin{equation}\label{09/02/18/18:18} \mathcal{H}(\Psi(0))=\lim_{n\to \infty}\mathcal{H}(\Psi(-\tau_{n})) =\lim_{n\to \infty}\mathcal{H}(e^{-\frac{i}{2}\tau_{n}\Delta}f). \end{equation} Hence, supposing the contrary that (\ref{10/05/12/20:54}) or (\ref{10/05/12/20:53}) fails, we have by the minimizing property (\ref{08/08/26/15:56}) that \begin{equation}\label{10/05/12/18:23} \begin{split} \widetilde{\mathcal{N}}_{2}(\Psi(0)) &= \left\| f \right\|_{L^{2}}^{p+1-\frac{d}{2}(p-1)} \lim_{n\to \infty} \sqrt{\mathcal{H}\left(e^{-\frac{i}{2}\tau_{n}\Delta}f\right)}^{\frac{d}{2}(p-1)-2} \\[6pt] &< \lim_{n\to \infty}\left\| f_{n} \right\|_{L^{2}}^{p+1-\frac{d}{2}(p-1)} \lim_{n\to \infty} \sqrt{\mathcal{H}\left(f_{n}\right)}^{\frac{d}{2}(p-1)-2} \\[6pt] &= \lim_{n\to \infty}\widetilde{\mathcal{N}}_{2}(f_{n}) = \widetilde{N}_{c}. \end{split} \end{equation} This estimate, together with the definition of $\widetilde{N}_{c}$ (see (\ref{08/09/02/18:06})), leads to that $\left\|\Psi \right\|_{X(\mathbb{R})}<\infty$, which contradicts $\left\|\Psi \right\|_{X(\mathbb{R})}=\infty$. Thus, (\ref{10/05/12/20:54}) and (\ref{10/05/12/20:53}) holds. Then, an estimate similar to (\ref{10/05/12/18:23}) gives (\ref{09/05/02/10:13}). Moreover, (\ref{09/07/27/1:49}) follows from (\ref{08/08/24/1:47}) with $l=1$, $s=0$ and (\ref{10/05/12/20:54}). \par The properties (\ref{09/07/27/1:49}) and (\ref{09/05/02/10:09}) (or (\ref{10/04/15/20:43})), together with (\ref{10/05/13/10:06}) and the mass conservation law (\ref{08/05/13/8:59}), yield (\ref{10/05/13/10:09}). Moreover, the formula (\ref{09/05/02/10:07}) follows from the estimate (\ref{09/09/16/12:26}) with $N=1$, (\ref{09/09/15/16:19}) and (\ref{09/02/04/8:41}). \end{proof} \subsection{Proof of Proposition \ref{08/10/16/21:12}}\label{08/09/25/17:01} We shall prove Proposition \ref{08/10/16/21:12}, showing that the candidate $\Psi$ found in Lemma \ref{08/08/19/23:07} is actually one-soliton-like solution as $t\to +\infty$. \begin{proof}[Proof of Proposition \ref{08/10/16/21:12}] The properties (\ref{08/10/20/2:27}) and (\ref{08/10/18/22:17}) have been obtained in Lemma \ref{08/08/19/23:07}. Moreover, (\ref{09/06/21/21:58}) in Proposition \ref{09/06/21/19:28}, together with (\ref{08/10/20/2:27}) and (\ref{08/10/18/22:17}), yields (\ref{10/04/05/18:07}). \par We shall prove (\ref{08/08/18/17:42}): the momentum of $\Psi$ is zero. 
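In doing so, we use the Galilei invariance of the equation (\ref{08/05/13/8:50}); since this fact is used without further comment below, we record it here for the reader's convenience (it follows by a direct computation, in the convention of (\ref{09/09/18/15:20})): if $\psi$ is a solution and $\xi \in \mathbb{R}^{d}$, then so is $(x,t)\mapsto e^{i\theta(x,t)}\psi(x-\xi t,t)$ with $\theta(x,t):=x\cdot \xi-\frac{1}{2}t|\xi|^{2}$, because
\[
2i\frac{\partial}{\partial t}\Bigl( e^{i\theta}\psi(x-\xi t,t)\Bigr)
+\Delta \Bigl( e^{i\theta}\psi(x-\xi t,t)\Bigr)
=e^{i\theta}\Bigl( 2i\frac{\partial \psi}{\partial t}+\Delta \psi \Bigr)(x-\xi t,t),
\]
while the nonlinear term $|\cdot|^{p-1}\cdot$ is unaffected by multiplication by the unimodular factor $e^{i\theta}$.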
Using the Galilei transformation, we set \begin{equation}\label{10/05/16/21:33} \Psi_{\xi}(x,t):=e^{i(x\cdot \xi-\frac{1}{2}t|\xi|^{2})}\Psi(x-\xi t, t), \quad \xi \in \mathbb{R}^{d}. \end{equation} It is easy to verify that \begin{align} \label{08/08/31/15:33} &\left\| \Psi_{\xi}(t)\right\|_{L^{q}}=\left\| \Psi(t)\right\|_{L^{q}} \quad \mbox{for all $\xi \in \mathbb{R}^{d}$, $q\in [2,2^{*})$ and $t\in \mathbb{R}$},\\[6pt] \label{08/08/31/15:03} &\left\| \Psi_{\xi}\right\|_{X(\mathbb{R})}=\left\| \Psi \right\|_{X(\mathbb{R})}=\infty \quad \mbox{for all $\xi \in \mathbb{R}^{d}$}. \end{align} Moreover, a simple calculation, together with the mass and momentum conservation laws (\ref{08/05/13/8:59}) and (\ref{08/10/20/4:37}), shows that \begin{equation}\label{10/05/16/21:55} \begin{split} &\left\|\nabla \Psi_{\xi}(t) \right\|_{L^{2}}^{2} = \left\| i\xi \Psi_{\xi}(t) +e^{i(x\cdot \xi-\frac{1}{2}t|\xi|^{2})}(\nabla \Psi)(x-\xi t, t)\right\|_{L_{x}^{2}}^{2} \\[6pt] &= \int_{\mathbb{R}^{d}} |\xi|^{2}|\Psi_{\xi}(x,t)|^{2}\,dx + \int_{\mathbb{R}^{d}}|\nabla \Psi|^{2}(x,t)\,dx \\[6pt] & \qquad \quad + 2\Re \int_{\mathbb{R}^{d}} \overline{i\xi \Psi_{\xi} }(x,t) e^{i(x\cdot \xi-\frac{1}{2}t|\xi|^{2})}(\nabla \Psi)(x-\xi t, t)\,dx \\[6pt] &=|\xi|^{2}\left\| \Psi(t)\right\|_{L^{2}}^{2}+\left\| \nabla \Psi(t) \right\|_{L^{2}}^{2} + 2\xi\cdot \Im \int_{\mathbb{R}^{d}} \overline{\Psi}(x,t) \nabla \Psi(x,t)\,dx \\[6pt] &= |\xi|^{2}\left\| \Psi(0)\right\|_{L^{2}}^{2}+\left\| \nabla \Psi(t) \right\|_{L^{2}}^{2} + 2\xi\cdot \Im \int_{\mathbb{R}^{d}} \overline{\Psi}(x,0) \nabla \Psi(x,0)\,dx \quad \mbox{for all $\xi \in \mathbb{R}^{d}$}. \end{split} \end{equation} This equality (\ref{10/05/16/21:55}), together with the energy conservation law (\ref{08/05/13/9:03}), yields that \begin{equation}\label{10/05/16/22:22} \begin{split} &\mathcal{H}(\Psi_{\xi}(t)) = \left\| \nabla \Psi_{\xi}(t)\right\|_{L^{2}}^{2} -\frac{2}{p+1}\left\| \Psi_{\xi}(t) \right\|_{L^{p+1}}^{p+1} \\[6pt] &=\left\| \nabla \Psi_{\xi}(t)\right\|_{L^{2}}^{2} -\frac{2}{p+1}\left\| \Psi(t) \right\|_{L^{p+1}}^{p+1} \\[6pt] &= \mathcal{H}(\Psi(0)) +|\xi|^{2}\left\|\Psi(0) \right\|_{L^{2}}^{2} + 2\xi\cdot \Im \int_{\mathbb{R}^{d}} \overline{\Psi}(x,0)\nabla \Psi(x,0)\,dx \quad \mbox{for all $\xi \in \mathbb{R}^{d}$}. \end{split} \end{equation} Put \begin{equation}\label{08/08/31/15:28} \xi_{0}=-\frac{\Im \int_{\mathbb{R}^{d}} \overline{\Psi}(x,0) \nabla \Psi(x,0)\,dx}{\left\| \Psi(0) \right\|_{L^{2}}}. \end{equation} Then, noting that $\left\| \Psi(0) \right\|_{L^{2}}=1$, we have by (\ref{10/05/16/21:55}) and (\ref{10/05/16/22:22}) that \begin{align} \label{08/08/31/15:43} &\left\| \nabla \Psi_{\xi_{0}} (t)\right\|_{L^{2}}^{2} = \left\|\nabla \Psi(t) \right\|_{L^{2}}^{2}- \left| \Im \int_{\mathbb{R}^{d}} \overline{\Psi}(x,0) \nabla \Psi(x,0)\,dx \right|^{2} \ \mbox{for all $t \in \mathbb{R}$}, \\[6pt] \label{08/08/31/15:31} &\mathcal{H}(\Psi_{\xi_{0}}(t)) = \mathcal{H}(\Psi(0))- \left| \Im \int_{\mathbb{R}^{d}} \overline{\Psi}(x,0) \nabla \Psi(x,0)\,dx \right|^{2} \ \mbox{for all $t \in \mathbb{R}$}. \end{align} Now, we suppose that \begin{equation}\label{10/05/16/22:43} \Im \int_{\mathbb{R}^{d}} \overline{\Psi}(x,t) \nabla \Psi(x,t)\,dx = \Im \int_{\mathbb{R}^{d}} \overline{\Psi}(x,0) \nabla \Psi(x,0)\,dx \neq 0. 
\end{equation} Then, by (\ref{08/08/31/15:33}) with $q=2$, (\ref{08/08/31/15:43}), (\ref{09/06/21/21:58}) in Proposition \ref{09/06/21/19:28} and (\ref{08/10/20/2:27}), we have that \begin{equation}\label{10/08/21/18:04} \begin{split} \mathcal{N}_{2}(\Psi_{\xi_{0}}(t)) &< \mathcal{N}_{2}(\Psi(t)) \\[6pt] &\le \sqrt{\frac{d(p-1)}{d(p-1)-4}}^{\frac{d}{2}(p-1)-2} \widetilde{\mathcal{N}}_{2}(\Psi(t)) = \sqrt{\frac{d(p-1)}{d(p-1)-4}}^{\frac{d}{2}(p-1)-2}\widetilde{N}_{c} \\[6pt] &< \sqrt{\frac{d(p-1)}{d(p-1)-4}}^{\frac{d}{2}(p-1)-2}\widetilde{N}_{2} =N_{2} \qquad \mbox{for all $t \in \mathbb{R}$}. \end{split} \end{equation} This inequality, together with the definition of $N_{2}$ (see \ref{08/07/02/23:24}), implies that \begin{equation}\label{10/05/16/22:47} \mathcal{K}(\Psi_{\xi_{0}}(t)) >0 \quad \mbox{for all $t \in \mathbb{R}$}. \end{equation} Moreover, using (\ref{08/08/31/15:33}) with $q=2$, (\ref{08/08/31/15:31}), and (\ref{08/10/20/2:27}) again, we obtain that \begin{equation}\label{10/05/16/22:46} \widetilde{\mathcal{N}}_{2}(\Psi_{\xi_{0}}(t)) < \widetilde{N}_{c}. \end{equation} Hence, it follows from the definition of $\widetilde{N}_{c}$ that $\left\| \Psi_{\xi_{0}}\right\|_{X(\mathbb{R})}< \infty$, which contradicts (\ref{08/08/31/15:03}). Thus, the momentum of $\Psi$ must be zero.\footnote{The proof of zero momentum is quite similar to the one of Proposition 4.1 in \cite{Holmer-Roudenko}. We can find an analogous trick in Appendix D of \cite{Nawa8}.} \par Next, we shall prove the tightness of either family $\{\Psi(t)\}_{t\in [0,\infty)}$ or $\{\Psi(t)\}_{t\in (-\infty,0]}$ in $H^{1}(\mathbb{R}^{d})$. Since $\left\|\Psi \right\|_{X(\mathbb{R}))}=\infty$, it holds that $\left\|\Psi \right\|_{X([0,\infty))}=\infty$ or $\left\|\Psi \right\|_{X((-\infty,0]))}=\infty$. The time reversibility of our equation (\ref{08/05/13/8:50}) allows us to assume that $\left\|\Psi \right\|_{X([0,\infty))}=\infty$; if not, we consider $\overline{\Psi(x,-t)}$ instead of $\Psi(x,t)$. \par We introduce the following quantity to employ Proposition \ref{08/10/03/9:46}: \begin{equation} \label{09/07/28/19:58} A:=\sup_{R>0}\liminf_{t\to \infty}\sup_{y\in \mathbb{R}^{d}} \int_{|x-y|\le R} \left| \Psi(x,t) \right|^{2} \,dx. \end{equation} By the definition of $A$, we can take a sequence $\{t_{n}\}$ in $[0,\infty)$ such that \begin{align} \label{10/05/17/1:11} &\lim_{n\to \infty}t_{n}=+\infty, \\[6pt] \label{08/10/18/15:36} &\sup_{y\in \mathbb{R}^{d}}\int_{|x-y|\le R} \left| \Psi(x,t_{n}) \right|^{2} \,dx \le \left( 1+\frac{1}{n} \right)A \quad \mbox{for all $R>0$ and $n \in \mathbb{N}$}. \end{align} Put \begin{equation}\label{09/09/16/19:19} \Psi_{n}(x,t):=\Psi(x,t+t_{n}). \end{equation} \noindent We investigate the behavior of this $\Psi_{n}$ in order to show $A=1$. Note here that since $\|\Psi(t)\|_{L^{2}}=1$ for all $t \in \mathbb{R}$, we have $A\le 1$, so that $A\ge 1$ implies $A=1$. 
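We also note (an elementary observation, recorded here because it justifies several of the properties listed below) that the equation (\ref{08/05/13/8:50}) is autonomous, so that each $\Psi_{n}$ is again a solution, and the time translation leaves the relevant quantities unchanged:
\[
\left\| \Psi_{n} \right\|_{X(\mathbb{R})}=\left\| \Psi \right\|_{X(\mathbb{R})},
\qquad
\widetilde{\mathcal{N}}_{2}(\Psi_{n}(t))
=\widetilde{\mathcal{N}}_{2}(\Psi(t+t_{n}))
\quad \mbox{for all $n\in \mathbb{N}$ and $t \in \mathbb{R}$}.
\]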
\par The sequence $\{\Psi_{n}(0)\}$ satisfies the conditions of Lemma \ref{08/08/19/23:07}: \begin{align} \label{09/02/17/16:33} &\Psi_{n}(t) \in PW_{+} \quad \mbox{for all $t \in \mathbb{R}$}, \\[6pt] \label{10/05/17/22:50} &\left\| \Psi_{n}(t)\right\|_{L^{2}}=1 \quad \mbox{for all $n\in \mathbb{N}$ and $t \in \mathbb{R}$}, \\[6pt] \label{09/02/17/16:34} &\sup_{n\in \mathbb{N}}\left\| \Psi_{n}(0) \right\|_{H^{1}}<\infty, \\[6pt] \label{09/02/17/16:35} &\widetilde{\mathcal{N}}_{2}(\Psi_{n}(t))=\widetilde{N}_{c} \quad \mbox{for all $n \in \mathbb{N}$ and $t \in \mathbb{R}$}, \\[6pt] \label{09/02/17/16:36} &\left\| \Psi_{n} \right\|_{X(\mathbb{R})}=\infty \quad \mbox{for all $n \in \mathbb{N}$}, \\[6pt] \label{09/02/17/16:21} &\limsup_{n\to \infty}\left\| e^{\frac{i}{2}t\Delta} \Psi_{n}(0)\right\|_{ L^{\infty}(\mathbb{R};L^{\frac{d}{2}(p-1)})}>0. \end{align} The assertion of (\ref{09/02/17/16:21}) may need to be verified. If (\ref{09/02/17/16:21}) fails, then we have \begin{equation}\label{09/09/16/19:24} \lim_{n\to \infty}\left\| e^{\frac{i}{2}t\Delta} \Psi(t_{n})\right\|_{L^{\infty}(\mathbb{R};L^{\frac{d}{2}(p-1)})}=0, \end{equation} which leads us to the contradictory conclusion $\left\|\Psi \right\|_{X(\mathbb{R})}<\infty$ by the small data theory (Proposition \ref{08/08/22/20:59}). \par We apply Lemma \ref{08/08/19/23:07} to $\{\Psi_{n}\}$, so that there exists a subsequence of $\{\Psi_{n}\}$ (still denoted by the same symbol) with the following properties: There exists \\ (i) a nontrivial global solution $\Psi_{\infty} \in C(\mathbb{R};H^{1}(\mathbb{R}^{d}))$ to the equation (\ref{08/05/13/8:50}) with \begin{align} \label{09/09/17/14:55} &\left\| \Psi_{\infty} \right\|_{X(\mathbb{R})}=\infty, \\[6pt] \label{10/05/31/9:00} &\Psi_{\infty}(t) \in PW_{+} \quad \mbox{for all $t \in \mathbb{R}$}, \\[6pt] \label{10/05/17/1:29} &\widetilde{\mathcal{N}}_{2}(\Psi_{\infty}(t))=\widetilde{N}_{c} \quad \mbox{for all $t \in \mathbb{R}$}, \end{align} and \\ (ii) a nontrivial function $\Phi \in PW_{+}$, a sequence $\{\tau_{n}\}$ in $\mathbb{R}$ with $\displaystyle{\lim_{n\to \infty}\tau_{n}= \tau_{\infty} \in \mathbb{R}\cup \{\pm \infty\}}$, and a sequence $\{\gamma_{n}\}$ in $\mathbb{R}^{d}$, such that, putting \begin{equation}\label{10/05/17/1:31} \widetilde{\Psi}_{n}:=e^{\frac{i}{2}\tau_{n}\Delta}e^{\gamma_{n}\cdot \nabla} \Psi_{n}, \quad \widetilde{\epsilon}_{n}:= \Psi_{n}-e^{-\tau_{n}\frac{\partial}{\partial t}-\gamma_{n}\cdot \nabla} \Psi_{\infty}, \end{equation} we have: \begin{align} \label{09/02/16/15:35} & \lim_{n\to \infty}\widetilde{\Psi}_{n}(0)=\Phi \quad \mbox{weakly in $H^{1}(\mathbb{R}^{d})$, and a.e. in $\mathbb{R}^{d}$}, \\[6pt] \label{10/05/17/1:47} &\lim_{n\to \infty} \left\| \widetilde{\Psi}_{n}(0) - \Phi \right\|_{L^{2}}=0, \\[6pt] \label{09/02/17/16:44} &\lim_{n\to \infty}\left\| e^{\frac{i}{2}t\Delta}\left( \widetilde{\Psi}_{n}(0)-\Phi \right)\right\|_{L^{\infty}(\mathbb{R};L^{\frac{d}{2}(p-1)})\cap X(\mathbb{R})}=0, \\[6pt] \label{09/02/24/14:28} &\lim_{n\to \infty}\left\|\Psi_{\infty}(-\tau_{n})-e^{-\frac{i}{2} \tau_{n}\Delta}\Phi \right\|_{H^{1}}=0, \\[6pt] \label{08/08/27/13:16} &\lim_{n\to \infty} \left\|e^{\frac{i}{2}t\Delta}\widetilde{\epsilon}_{n}(0) \right\|_{ X(\mathbb{R})}=0. \end{align} Note here that the estimate (\ref{08/10/25/23:16}) gives us that \begin{equation}\label{09/03/05/17:39} \lim_{T\to \infty}\left\|e^{\frac{i}{2}t\Delta} \Phi \right\|_{X((-\infty,-T])} = \lim_{T\to \infty} \left\| e^{\frac{i}{2}t\Delta} \Phi \right\|_{X([T,+\infty))} =0. 
\end{equation} We claim that $\tau_{\infty} \in \mathbb{R}$. We suppose to the contrary that $\tau_{\infty}=+ \infty$ or $\tau_{\infty}=-\infty$. If $\tau_{\infty}=+ \infty$, then (\ref{09/02/17/16:44}) and (\ref{09/03/05/17:39}) show that \begin{equation}\label{10/05/17/2:00} \begin{split} \left\|e^{\frac{i}{2}t\Delta}\Psi_{n}(0) \right\|_{X((-\infty,0])} &= \left\| e^{\frac{i}{2}(t+\tau_{n})\Delta}\Psi_{n}(0) \right\|_{X((-\infty,-\tau_{n}])} \\[6pt] &\le \left\|e^{\frac{i}{2}t\Delta} \Phi \right\|_{X((-\infty,-\tau_{n}])} + \left\| e^{\frac{i}{2}t\Delta}\left( \widetilde{\Psi}_{n}(0)-\Phi \right) \right\|_{X((-\infty,-\tau_{n}])} \\[6pt] &\to 0 \quad \mbox{as $n\to \infty$}. \end{split} \end{equation} Then, the small data theory (Proposition \ref{08/08/22/20:59}) yields \begin{equation}\label{10/05/17/2:04} \left\| \Psi \right\|_{X((-\infty,t_{n}])} = \left\| \Psi_{n} \right\|_{X((-\infty,0])} \lesssim 1 \quad \mbox{for all sufficiently large $n\in \mathbb{N}$}, \end{equation} where the implicit constant is independent of $n$. Hence, taking $n\to \infty$ in (\ref{10/05/17/2:04}), we obtain that \begin{equation}\label{10/05/17/2:07} \left\| \Psi \right\|_{X(\mathbb{R})}<\infty, \end{equation} which contradicts $\left\| \Psi \right\|_{X(\mathbb{R})}=\infty$. Thus, the case $\tau_{\infty}= +\infty$ never occurs. On the other hand, if $\tau_{\infty}=-\infty$, then we have by (\ref{09/02/17/16:44}) and (\ref{09/03/05/17:39}) that \begin{equation}\label{10/05/17/2:11} \begin{split} \left\|e^{\frac{i}{2}t\Delta}\Psi_{n}(0) \right\|_{X([0,+\infty))} &= \left\| e^{\frac{i}{2}(t+\tau_{n})\Delta}\Psi_{n}(0) \right\|_{X([-\tau_{n},+\infty))} \\[6pt] &\le \left\|e^{\frac{i}{2}t\Delta} \Phi \right\|_{X([-\tau_{n}, +\infty))} + \left\| e^{\frac{i}{2}t\Delta}\left( \widetilde{\Psi}_{n}(0)-\Phi \right) \right\|_{X([-\tau_{n},+\infty))} \\[6pt] &\to 0 \quad \mbox{as $n\to \infty$}. \end{split} \end{equation} Hence, the small data theory (Proposition \ref{08/08/22/20:59}) shows that \begin{equation}\label{10/05/17/2:15} \left\| \Psi \right\|_{X([t_{n},+\infty))} = \left\| \Psi_{n} \right\|_{X([0,+\infty))}<\infty \quad \mbox{for all sufficiently large $n$}, \end{equation} so that \begin{equation}\label{10/05/17/2:16} \left\| \Psi \right\|_{X([0,+\infty))}<\infty. \end{equation} However, (\ref{10/05/17/2:16}) contradicts our working hypothesis $\left\| \Psi \right\|_{X([0,+\infty))}=\infty$. Thus, we always have $\tau_{\infty} \in \mathbb{R}$. \par We establish the strong convergence in $L^{p+1}(\mathbb{R}^{d})$: \begin{equation}\label{09/02/24/13:58} \lim_{n\to \infty}e^{\gamma_{n}\cdot \nabla }\Psi(t_{n}) = \lim_{n\to \infty} e^{-\frac{i}{2}\tau_{n}\Delta}\widetilde{\Psi}_{n}(0) = e^{-\frac{i}{2}\tau_{\infty}\Delta}\Phi \quad \mbox{strongly in $L^{p+1}(\mathbb{R}^{d})$}. 
\end{equation} The Gagliardo-Nirenberg inequality shows that \begin{equation}\label{10/05/17/22:16} \begin{split} &\left\| e^{-\frac{i}{2}\tau_{n}\Delta}\widetilde{\Psi}_{n}(0) - e^{-\frac{i}{2}\tau_{\infty}\Delta}\Phi \right\|_{L^{p+1}}^{p+1} \\[6pt] &\lesssim \left\| e^{-\frac{i}{2}\tau_{n}\Delta}\widetilde{\Psi}_{n}(0) - e^{-\frac{i}{2}\tau_{\infty}\Delta}\Phi \right\|_{L^{\frac{d}{2}(p-1)}}^{p-1} \left\| \nabla \left( e^{-\frac{i}{2}\tau_{n}\Delta}\widetilde{\Psi}_{n}(0) - e^{-\frac{i}{2}\tau_{\infty}\Delta}\Phi \right) \right\|_{L^{2}}^{2} \\[6pt] &\le \left\| e^{-\frac{i}{2}\tau_{n}\Delta}\widetilde{\Psi}_{n}(0) - e^{-\frac{i}{2}\tau_{\infty}\Delta}\Phi \right\|_{L^{\frac{d}{2}(p-1)}}^{p-1} \left( \left\| \nabla \Psi_{n}(0) \right\|_{L^{2}}^{2} + \left\| \nabla \Phi \right\|_{L^{2}}^{2} \right), \end{split} \end{equation} where the implicit constant depends only on $d$ and $p$. This estimate, together with (\ref{09/02/17/16:34}) and (\ref{09/02/17/16:44}), immediately yields (\ref{09/02/24/13:58}). \par Next, we shall show that \begin{equation}\label{09/02/24/14:22} \lim_{n\to \infty}e^{\gamma_{n}\cdot \nabla}\Psi(t_{n}) = \lim_{n\to \infty}e^{-\frac{i}{2}\tau_{n}\Delta}\widetilde{\Psi}_{n}(0) = e^{-\frac{i}{2}\tau_{\infty}\Delta}\Phi \quad \mbox{strongly in $H^{1}(\mathbb{R}^{d})$}. \end{equation} Since (\ref{10/05/17/1:47}) implies the strong convergence in $L^{2}(\mathbb{R}^{d})$, if (\ref{09/02/24/14:22}) fails, then we have by the weak convergence (\ref{09/02/16/15:35}) that \begin{equation}\label{09/02/24/14:24} \liminf_{n\to \infty}\left\| \nabla \Psi_{n}(0) \right\|_{L^{2}} = \liminf_{n\to \infty}\left\| \nabla \widetilde{\Psi}_{n}(0) \right\|_{L^{2}} > \left\| \nabla \Phi \right\|_{L^{2}}, \end{equation} so that we have by (\ref{09/02/24/13:58}) that \begin{equation}\label{10/05/17/20:08} \lim_{n\to \infty}\mathcal{H}(\Psi_{n}(0)) = \lim_{n\to \infty}\mathcal{H}(e^{-\frac{i}{2}\tau_{n}\Delta}\widetilde{\Psi}_{n}(0)) > \mathcal{H}(e^{-\frac{i}{2}\tau_{\infty}\Delta}\Phi). \end{equation} Moreover, it follows from (\ref{10/05/17/1:47}), (\ref{10/05/17/20:08}) and (\ref{09/02/17/16:35}) that \begin{equation}\label{09/02/24/14:30} \widetilde{\mathcal{N}}_{2}(e^{-\frac{i}{2}\tau_{\infty}\Delta}\Phi)< \lim_{n\to \infty} \widetilde{\mathcal{N}}_{2}(\Psi_{n}(0)) =\widetilde{N}_{c}. \end{equation} Since $\Psi_{\infty}$ is the solution to (\ref{08/05/13/8:50}) with $\Psi_{\infty}(-\tau_{\infty})=e^{-\frac{i}{2}\tau_{\infty}\Delta}\Phi$ (see (\ref{09/02/24/14:28})), the estimate (\ref{09/02/24/14:30}), together with the definition of $\widetilde{N}_{c}$ (see (\ref{08/09/02/18:06})), leads to that \begin{equation}\label{10/05/17/22:41} \left\|\Psi_{\infty} \right\|_{X(\mathbb{R})}<\infty , \end{equation} which contradicts (\ref{09/09/17/14:55}). Thus, (\ref{09/02/24/14:22}) must hold. \par Now, we are in a position to prove $A=1$. We take an arbitrarily small number $\varepsilon>0$, and fix it in the following argument; we assume that $\varepsilon<\frac{1}{4}$ at least. \par Let $R_{\varepsilon}>0$ be a number such that \begin{equation}\label{10/05/18/0:14} \int_{|x|\le R_{\varepsilon}}|\Phi(x)|^{2}\,dx \ge \left\| \Phi \right\|_{L^{2}}^{2} -\frac{\varepsilon}{2} = 1-\frac{\varepsilon}{2}, \end{equation} where we have used the fact that $\|\Phi \|_{L^{2}}=1$ (see (\ref{10/05/17/22:50}) and (\ref{10/05/17/1:47})). 
Then, it follows from (\ref{09/02/24/14:22}) and (\ref{10/05/18/0:14}) that \begin{equation}\label{08/11/01/15:20} \int_{|x|\le R} | \Psi(x+\gamma_{n},t_{n})|^{2}\,dx > 1-\varepsilon \quad \mbox{for all sufficiently large $n\in \mathbb{N}$ and $R\ge R_{\varepsilon}$}. \end{equation} Combining (\ref{08/10/18/15:36}) and (\ref{08/11/01/15:20}), we obtain that \begin{equation}\label{10/05/17/23:00} 1-\varepsilon \le \left( 1+\frac{1}{n} \right)A \quad \mbox{for all sufficiently large $n \in \mathbb{N}$ and $R\ge R_{\varepsilon}$}.\end{equation} Taking $\varepsilon \to 0$ and $n\to \infty$ in (\ref{10/05/17/23:00}), we see that $1\le A$. Hence, $A=1$ as stated above. \par We apply Proposition \ref{08/10/03/9:46} to $|\Psi|^{2}$ and find that: There exist $R_{\varepsilon}>0$, $T_{\varepsilon}>0$ and a continuous path $y_{\varepsilon} \in C([T_{\varepsilon},\infty); \mathbb{R}^{d})$ such that \begin{equation}\label{09/10/08/16:24} \int_{|x-y_{\varepsilon}(t)|<R}\left| \Psi(x,t)\right|^{2} \,dx > 1-\varepsilon \quad \mbox{for all $t \in [T_{\varepsilon}, \infty)$ and $R>R_{\varepsilon}$}. \end{equation} We claim that, if necessary, taking $R_{\varepsilon}$ much larger , the following inequality holds for the same path $y_{\varepsilon}$ just found above: \begin{equation}\label{09/08/10/16:50} \int_{|x-y_{\varepsilon}(t)|<R}\left| \nabla \Psi(x,t)\right|^{2} \,dx > \left\| \nabla \Psi(t) \right\|_{L^{2}}^{2}-\varepsilon \quad \mbox{for all $t \in [T_{\varepsilon}, \infty)$ and $R\ge R_{\varepsilon}$}. \end{equation} We prove this by contradiction: Suppose the contrary that for any $k \in \mathbb{N}$, there exists $t_{k}^{0} \in [T_{\varepsilon},\infty)$ such that \begin{equation}\label{08/11/04/8:36} \int_{|x|< k} |\nabla \Psi(x+y_{\varepsilon}(t_{k}^{0}),t_{k}^{0})|^{2} \le \left\| \nabla \Psi(t_{k}^{0}) \right\|_{L^{2}}^{2}-\varepsilon . \end{equation} Put \begin{equation}\label{10/05/18/0:41} \Psi_{k}^{0}(x,t):= \Psi(x+y_{\varepsilon}(t_{k}^{0}),t+t_{k}^{0}). \end{equation} Then, in a similar argument for $\Psi_{n}$, we obtain a subsequence of $\{ \Psi_{k}^{0} \}$ (still denoted by the same symbol), a sequence $\{\tau_{k}^{0}\}$ in $\mathbb{R}$ with $\displaystyle{\lim_{k\to \infty} \tau_{k}^{0}=\tau_{\infty}^{0} \in \mathbb{R}}$, a sequence $\{y_{k}^{0}\}$ in $\mathbb{R}^{d}$, and a nontrivial function $\Phi^{0} \in H^{1}(\mathbb{R}^{d})$ such that \begin{equation}\label{08/11/02/1:25} \lim_{k\to \infty} \Psi_{k}^{0}(\cdot+y_{k}^{0},0)= e^{-{\frac{i}{2}\tau_{\infty}^{0}\Delta}}\Phi^{0} \quad \mbox{strongly in $H^{1}(\mathbb{R}^{d})$}. \end{equation} Furthermore, (\ref{08/11/02/1:25}) implies that there exists $\widetilde{R}_{\varepsilon}>0$ such that \begin{align} \label{08/11/04/8:18} &\int_{|x-y_{k}^{0}|>\widetilde{R}_{\varepsilon}}\left| \Psi(x+y_{\varepsilon}(t_{k}^{0}), t_{k}^{0})\right|^{2}\,dx \le \varepsilon \quad \mbox{for all sufficiently large $k \in \mathbb{N}$}, \\[6pt] \label{09/09/27/10:52} &\int_{|x-y_{k}^{0}|\le \widetilde{R}_{\varepsilon}}\left| \nabla \Psi(x+y_{\varepsilon}(t_{k}^{0}), t_{k}^{0})\right|^{2}\,dx > \left\| \nabla \Psi(t_{k}^{0})\right\|_{L^{2}}^{2} -\varepsilon \quad \mbox{for all sufficiently large $k \in \mathbb{N}$}. \end{align} On the other hand, we have by (\ref{09/10/08/16:24}) that \begin{equation}\label{08/11/04/8:19} \int_{|x|\le R_{\varepsilon}}\left|\Psi (x+y_{\varepsilon}(t_{k}^{0}),t_{k}^{0})\right|^{2}\,dx > 1-\varepsilon. 
\end{equation} If $\displaystyle{\sup_{k\in \mathbb{N}}|y_{k}^{0}|= \infty}$, then we can take a subsequence of $\{y_{k}^{0}\}$ (still denoted by the same symbol) such that $\displaystyle{\lim_{k\to \infty}|y_{k}^{0}|= \infty}$, so that \begin{equation}\label{08/11/04/8:17} \left\{x \in \mathbb{R}^{d} \left| |x-y_{k}^{0}|>\widetilde{R}_{\varepsilon} \right. \right\} \supset \left\{ x \in \mathbb{R}^{d} \left| |x|\le R_{\varepsilon} \right. \right\} \quad \mbox{for all sufficiently large $k \in \mathbb{N}$}. \end{equation} However, the result derived from (\ref{08/11/04/8:18}) and (\ref{08/11/04/8:17}) contradicts (\ref{08/11/04/8:19}). Hence, it must hold that $\displaystyle{\sup_{k\in \mathbb{N}}|y_{k}^{0}|<\infty}$. Put $\displaystyle{R_{0}:= \sup_{k \in \mathbb{N}}|y_{k}^{0}|}$. Then, it follows from (\ref{09/09/27/10:52}) that \begin{equation}\label{10/05/18/10:30} \begin{split} \int_{|x|\le \widetilde{R}_{\varepsilon}+R_{0}} \left| \nabla \Psi(x+y_{\varepsilon}(t_{k}^{0}), t_{k}^{0})\right|^{2}\,dx &\ge \int_{|x-y_{k}^{0}|\le \widetilde{R}_{\varepsilon}} \left| \nabla \Psi(x+y_{\varepsilon}(t_{k}^{0}), t_{k}^{0})\right|^{2}\,dx \\[6pt] &> \left\| \nabla \Psi(t_{k}^{0})\right\|_{L^{2}}^{2}-\varepsilon, \end{split} \end{equation} which contradicts (\ref{08/11/04/8:36}) for $k> \widetilde{R}_{\varepsilon}+R_{0}$. Thus, (\ref{09/08/10/16:50}) holds. \par Finally, we obtain (\ref{08/10/18/22:46}) and (\ref{08/11/01/15:41}) from (\ref{09/10/08/16:24}) and (\ref{09/08/10/16:50}), respectively, by a space-time translation: regarding $\Psi(x+y_{\varepsilon}(T_{\varepsilon}), t+T_{\varepsilon})$ as our $\Psi(x,t)$ with $\gamma_{\varepsilon}(t)=y_{\varepsilon}(t+T_{\varepsilon})-y_{\varepsilon}(T_{\varepsilon})$. \end{proof} \subsection{Proofs of Theorem \ref{08/05/26/11:53} and Corollary \ref{09/12/23/22:22}} \label{09/03/04/19:08} We begin with the proof of Theorem \ref{08/05/26/11:53}. \begin{proof}[Proof of Theorem \ref{08/05/26/11:53}] The claims (i) and (ii) are direct consequences of Proposition \ref{09/06/21/19:28}. \par We shall prove (iii). Proposition \ref{09/01/12/16:36} shows that the wave operators $W_{\pm}$ exist on $\Omega$ and continuous. It remains to prove the bijectivity of $W_{\pm}$ and the continuity of $W_{\pm}^{-1}$. \par We first show that $W_{\pm}$ is surjective from $\Omega$ to $PW_{+}$. Let $\psi_{0} \in PW_{+}$ and let $\psi$ be the solution to (\ref{08/05/13/8:50}) with $\psi(0)=\psi_{0}$. Since we have shown that $\widetilde{N}_{c}=\widetilde{N}_{2}$ in Section \ref{09/05/06/9:13}, $\|\psi\|_{X(\mathbb{R})}<\infty$. Therefore, it follows from Proposition \ref{08/08/18/16:51} that $\psi$ has asymptotic states $\phi_{\pm}$ in $H^{1}(\mathbb{R}^{d})$. It remains to prove that $\phi_{\pm} \in \Omega$. \par The energy conservation law (\ref{08/05/13/9:03}) and Theorem \ref{09/05/18/10:43} show that \begin{equation}\label{10/05/09/12:15} 0<\mathcal{H}(\psi_{0})=\lim_{t\to \pm \infty} \left( \left\| \nabla \psi(t) \right\|_{L^{2}}^{2} - \frac{2}{p+1} \left\| \psi(t) \right\|_{L^{p+1}}^{p+1} \right) = \left\| \nabla \phi_{\pm } \right\|_{L^{2}}^{2}. \end{equation} Moreover, the mass conservation law (\ref{08/05/13/8:59}) gives us that \begin{equation}\label{10/05/08/14:29} \left\| \phi_{\pm} \right\|_{L^{2}}^{2} = \lim_{t\to \pm \infty}\left\|\psi(t) \right\|_{L^{2}}^{2} =\left\| \psi_{0} \right\|_{L^{2}}^{2}. 
\end{equation} Since $\psi_{0}\in PW_{+}$, we obtain from (\ref{10/05/09/12:15}), (\ref{10/05/08/14:29}) and (\ref{09/12/16/17:07}) that \begin{equation}\label{10/06/13/13:30} \mathcal{N}_{2}(\phi_{\pm})= \widetilde{\mathcal{N}}_{2}(\psi_{0})<\widetilde{N}_{2}, \end{equation} so that $\phi_{\pm}\in \Omega$. \par Next, we shall show the injectivity of $W_{\pm}$. Take any $\phi_{+,1}, \phi_{+,2} \in \Omega$ and suppose that \begin{equation}\label{10/05/08/14:34} W_{+}\phi_{+,1}=W_{+}\phi_{+,2}. \end{equation} Put $\psi_{0}=W_{+}\phi_{+,1}=W_{+}\phi_{+,2}$ and let $\psi$ be the solution to the equation (\ref{08/05/13/8:50}) with $\psi(0)=\psi_{0}$. Then, we have \begin{equation}\label{10/05/08/14:46} \begin{split} \left\|\phi_{+,1}-\phi_{+,2} \right\|_{H^{1}} &\le \lim_{t\to +\infty} \left\| \psi(t)-e^{\frac{i}{2}t\Delta}\phi_{+,1} \right\|_{H^{1}} + \lim_{t\to +\infty} \left\| \psi(t)-e^{\frac{i}{2}t\Delta}\phi_{+,2} \right\|_{H^{1}} \\[6pt] &=0, \end{split} \end{equation} so that $\phi_{+,1}=\phi_{+,2}$. Similarly, we see that $W_{-}$ is injective from $\Omega$ to $PW_{+}$. \par Finally, we shall prove that $W_{+}^{-1}$ is continuous. Take a function $\psi_{0}\in PW_{+}$ and a sequence $\{\psi_{0,n}\}$ in $PW_{+}$ such that \begin{equation}\label{10/05/07/18:17} \lim_{n\to \infty}\psi_{0,n} = \psi_{0} \quad \mbox{strongly in $H^{1}(\mathbb{R}^{d})$}. \end{equation} Let $\psi$ and $\psi_{n}$ be the solutions to (\ref{08/05/13/8:50}) with $\psi(0)=\psi_{0}$ and $\psi_{n}(0)=\psi_{0,n}$. Since $\psi_{0}, \psi_{0,n} \in PW_{+}$ and $\widetilde{N}_{c}=\widetilde{N}_{2}$, we have by (\ref{08/09/02/18:06}) that \begin{equation}\label{10/05/08/14:56} \left\| \psi \right\|_{X(\mathbb{R})}<\infty,\quad \left\| \psi_{n} \right\|_{X(\mathbb{R})}< \infty. \end{equation} Note that since $\left\| \psi \right\|_{X(\mathbb{R})}<\infty$, for any $\delta>0$, there exists $T_{\delta}>0$ such that \begin{equation}\label{10/05/07/19:02} \left\| \psi \right\|_{X([T_{\delta},\infty))}<\delta. \end{equation} Now, we put \begin{equation}\label{10/05/07/18:27} w_{n}=\psi_{n}-\psi. \end{equation} Then, it follows from the inhomogeneous Strichartz estimate (\ref{10/03/29/11:06}), an elementary inequality (\ref{09/09/27/21:10}) and the H\"older inequality that \begin{equation} \label{10/05/07/18:29} \begin{split} \left\| w_{n} \right\|_{X([t_{0},t_{1}))} &\le \left\|e^{\frac{i}{2}(t-t_{0})\Delta} w_{n}(t_{0}) \right\|_{X([t_{0},t_{1}))} \\[6pt] & \quad + C \left( \left\| \psi \right\|_{X([t_{0},t_{1}))}^{p-1} + \left\|w_{n} \right\|_{X([t_{0}, t_{1}))}^{p-1} \right) \left\|w_{n} \right\|_{X([t_{0}, t_{1}))} \\[6pt] &\hspace{5cm} \mbox{for all $0\le t_{0}<t_{1}\le \infty$}, \end{split} \end{equation} where $C$ is some constant depending only on $d$, $p$ and $q_{1}$. We claim that \begin{equation} \label{10/05/07/19:04} \lim_{n\to \infty} \left\| w_{n} \right\|_{X([T_{\delta},\infty))} =0 \quad \mbox{for all $\delta \in \Big(0,\ \left(\frac{1}{1+4C} \right)^{\frac{1}{p-1}}\Big)$}, \end{equation} where $C$ is the constant found in (\ref{10/05/07/18:29}) and $T_{\delta}$ is the time satisfying (\ref{10/05/07/19:02}). Let $\varepsilon$ be a number such that \begin{equation}\label{10/05/07/19:12} 0< \varepsilon < \left( \frac{1}{(1+4C)^{p}} \right)^{\frac{1}{p-1}}. 
\end{equation} Then, the estimate (\ref{08/10/25/23:16}) and the continuous dependence of solutions on initial data show that there exists $N_{\varepsilon}\in \mathbb{N}$ such that \begin{equation}\label{10/05/07/19:08} \left\| e^{\frac{i}{2}(t-T_{\delta})\Delta}w_{n}(T_{\delta}) \right\|_{X(\mathbb{R})}<\varepsilon \quad \mbox{for all $n\ge N_{\varepsilon}$}. \end{equation} We define a time $T_{\delta,n}$ by \begin{equation}\label{10/05/07/22:37} T_{\delta, n} = \sup \left\{ t\ge T_{\delta} \bigm| \left\|w_{n} \right\|_{X([T_{\delta},t))}\le (1+4C) \varepsilon \right\} . \end{equation} For (\ref{10/05/07/19:04}), it suffices to show that $T_{\delta,n}=\infty$ for all $n\ge N_{\varepsilon}$. We shall prove this. Note that (\ref{10/05/07/18:29}) with $t_{0}=T_{\delta}$, together with (\ref{10/05/07/19:08}), implies that $T_{\delta,n}>T_{\delta}$ for all $n\ge N_{\varepsilon}$. Supposing to the contrary that $T_{\delta,n}<\infty$ for some $n\ge N_{\varepsilon}$, we have from the continuity of $w_{n}$ that \begin{equation}\label{10/05/07/22:57} \left\| w_{n} \right\|_{X([T_{\delta},T_{\delta,n}])} =(1+4C)\varepsilon . \end{equation} However, (\ref{10/05/07/18:29}), together with (\ref{10/05/07/19:02}), (\ref{10/05/07/19:12}), (\ref{10/05/07/19:08}) and (\ref{10/05/07/22:57}), shows that \begin{equation}\label{10/05/07/22:59} \begin{split} \left\| w_{n} \right\|_{X([T_{\delta},T_{\delta,n}])} &\le \varepsilon + C \left( \delta^{p-1} + (1+4C)^{p-1}\varepsilon^{p-1} \right) (1+4C)\varepsilon \\[6pt] & \le \varepsilon+2C \varepsilon \qquad \mbox{for all $\delta \in \Big(0,\ \left(\frac{1}{1+4C} \right)^{\frac{1}{p-1}}\Big)$ and $n \ge N_{\varepsilon}$}. \end{split} \end{equation} This is a contradiction. Thus, we see that $T_{\delta,n}=\infty$ for all $n\ge N_{\varepsilon}$. \par In addition to (\ref{10/05/07/19:04}), we see that there exists $\delta_{0}>0$ depending only on $d$, $p$ and $q_{1}$ such that \begin{equation}\label{10/05/10/9:57} \lim_{n\to \infty} \left\|w_{n} \right\|_{S([T_{\delta},\infty))} =0 \quad \mbox{for all $\delta \in (0,\delta_{0})$}. \end{equation} Indeed, the Strichartz estimate, together with (\ref{09/09/27/21:10}), yields that \begin{equation}\label{10/05/07/23:10} \left\| w_{n} \right\|_{S([T_{\delta},\infty))} \lesssim \left\| w_{n}(T_{\delta})\right\|_{L^{2}} + \left( \left\| \psi \right\|_{X([T_{\delta},\infty))}^{p-1} + \left\| w_{n} \right\|_{X([T_{\delta},\infty))}^{p-1}\right) \left\| w_{n} \right\|_{S([T_{\delta},\infty))}, \end{equation} where the implicit constant depends only on $d$, $p$ and $q_{1}$. This estimate, together with (\ref{10/05/07/19:04}), gives (\ref{10/05/10/9:57}). \par We claim that \begin{equation}\label{10/05/10/10:28} \lim_{n\to \infty} \left\| \psi_{n}-\psi \right\|_{L^{\infty}([T,\infty);H^{1})} = \lim_{n\to \infty} \left\| w_{n} \right\|_{L^{\infty}([T,\infty);H^{1})}=0 \quad \mbox{for sufficiently large $T>0$}. 
\end{equation} Considering the integral equations of $\partial_{j}\psi$ and $\partial_{j}\psi_{n}$ for $1\le j\le d$, we obtain that \begin{equation}\label{10/05/10/10:09} \begin{split} \left\| \partial_{j}\psi_{n}-\partial_{j} \psi \right\|_{S([T_{\delta}, \infty))} &\lesssim \left\| \psi_{n}(T_{\delta})-\psi(T_{\delta})\right\|_{H^{1}} + \left\| |\psi_{n}|^{p-1}(\partial_{j}\psi_{n}-\partial_{j}\psi) \right\|_{L^{r_{0}'}([T_{\delta},\infty);L^{q_{1}'})} \\[6pt] &\quad + \left\| \left( |\psi_{n}|^{p-1}-|\psi|^{p-1} \right) \partial_{j}\psi \right\|_{L^{r_{0}'}([T_{\delta},\infty);L^{q_{1}'})} \\[6pt] &\quad + \left\| |\psi_{n}|^{p-3}\psi_{n}^{2}\left(\partial_{j}\overline{\psi_{n}} -\partial_{j}\overline{\psi}\right) \right\|_{L^{r_{0}'}([T_{\delta},\infty);L^{q_{1}'})} \\[6pt] &\quad + \left\| \left( |\psi_{n}|^{p-3}\psi_{n}^{2} -|\psi|^{p-3}\psi^{2} \right) \partial_{j}\overline{\psi} \right\|_{L^{r_{0}'}([T_{\delta},\infty);L^{q_{1}'})} \\[6pt] &\lesssim \left\| \psi_{n}(T_{\delta})-\psi(T_{\delta})\right\|_{H^{1}} + \left\| \psi_{n} \right\|_{X([T_{\delta},\infty))}^{p-1} \left\| \partial_{j}\psi_{n}-\partial_{j}\psi \right\| _{S([T_{\delta},\infty))} \\[6pt] &\quad + \left\| |\psi_{n}|^{p-1}-|\psi|^{p-1} \right\|_{L^{\frac{r_{2}}{p-1}}([T_{\delta},\infty);L^{\frac{q_{2}}{p-1}})} \left\| \partial_{j}\psi \right\|_{S([T_{\delta},\infty))} \\[6pt] &\quad + \left\| \psi_{n} \right\|_{X([T_{\delta},\infty))}^{p-1} \left\| \partial_{j}\psi_{n} -\partial_{j}\psi\right\|_{S([T_{\delta},\infty))} \\[6pt] &\quad + \left\| |\psi_{n}|^{p-3}\psi_{n}^{2} -|\psi|^{p-3}\psi^{2} \right\|_{L^{\frac{r_{2}}{p-1}}([T_{\delta},\infty);L^{\frac{q_{2}}{p-1}})} \left\| \partial_{j}\psi \right\|_{S([T_{\delta},\infty))}, \end{split} \end{equation} where the implicit constant depends only on $d$, $p$ and $q_{1}$. This estimate, with the help of (\ref{10/05/07/19:04}) and (\ref{10/05/10/9:57}), leads to (\ref{10/05/10/10:28}). \par Now, put $\phi_{+,n}=W_{+}^{-1}\psi_{0,n}$. Then, we have by (\ref{10/05/10/10:28}) that \begin{equation}\label{10/05/07/23:15} \begin{split} \left\|\phi_{+,n}-\phi_{+} \right\|_{H^{1}} &\le \left\|\psi_{n}(t)-e^{\frac{i}{2}t\Delta}\phi_{+,n} \right\|_{H^{1}} + \left\|\psi(t)-e^{\frac{i}{2}t\Delta}\phi_{+} \right\|_{H^{1}} \\[6pt] &+ \left\| \psi_{n}(t) -\psi(t) \right\|_{H^{1}} \\[6pt] &\to 0 \qquad \mbox{as $t \to \infty$ and $n\to \infty$}. \end{split} \end{equation} Thus, we have proved that $W_{+}$ is a homeomorphism from $\Omega$ to $PW_{+}$. Similarly, we can prove that $W_{-}$ is a homeomorphism from $\Omega$ to $PW_{+}$. \end{proof} We shall give the proof of Corollary \ref{09/12/23/22:22}. \begin{proof}[Proof of Corollary \ref{09/12/23/22:22}] Let $f_{1}, f_{2}\in PW_{+}\cup \{0\}$. Then, Theorem \ref{08/05/26/11:53} shows that the corresponding solutions $\psi_{1}$ and $\psi_{2}$ have asymptotic states at $+\infty$. Hence, it follows from Theorem \ref{09/05/18/10:43} that \begin{equation}\label{10/05/10/0:41} \lim_{t\to \infty}\left\| \psi_{1}(t)\right\|_{L^{p+1}} = \lim_{t\to \infty}\left\| \psi_{2}(t)\right\|_{L^{p+1}} =0, \end{equation} so that we obtain \begin{equation}\label{08/11/20/18:12} \lim_{t\to \infty}\left\| \psi_{j}(t) \right\|_{L^{q}} =0 \quad \mbox{for all $q \in (2,2^{*})$ and $j=1,2$}. 
\end{equation} Since the solutions are continuous in $H^{1}(\mathbb{R}^{d})$, we find by (\ref{08/11/20/18:12}) that $f_{1}$ and $f_{2}$ are connected by the path $\{ \psi_{1}(t) \bigm| t\ge 0\} \cup \{0\} \cup \{ \psi_{2}(t) \bigm| t\ge 0\}$ in $L^{q}(\mathbb{R}^{d})$ with $q \in (2,2^{*})$. \end{proof} \section{Analysis on $\boldsymbol{PW_{-}}$} \label{08/07/03/0:40} We shall give the proofs of Theorem \ref{08/06/12/9:48}, Theorem \ref{08/04/21/9:28}, Proposition \ref{08/11/02/22:52} and Proposition \ref{10/01/26/14:49}. \par We begin with the proof of Theorem \ref{08/06/12/9:48}. We employ the idea of Nawa \cite{Nawa8}. Here, the generalized virial identity below (see (\ref{08/03/29/19:05})) plays an important role: \[ \begin{split} &(W_{R}, |\psi(t)|^{2}) \\ &=(W_{R},|\psi_{0}|^{2}) +2t \Im{(\vec{w}_{R} \cdot \nabla \psi_{0},\psi_{0})}+2\int_{0}^{t}\int_{0}^{t'}\mathcal{K}(\psi(t''))\,dt''dt' \\ & \qquad -2\int_{0}^{t}\int_{0}^{t'}\mathcal{K}^{R}(\psi(t''))\,dt''dt' -\frac{1}{2}\int_{0}^{t}\int_{0}^{t'}(\Delta ({\rm div}\,{\vec{w}_{R}}), |\psi(t'')|^{2})\,dt''dt' . \end{split} \] \begin{proof}[Proof of Theorem \ref{08/06/12/9:48}] Since we have already proved (\ref{09/12/23/22:48}) (see Proposition \ref{08/05/26/10:57}), proofs of (\ref{09/05/14/13:32}) and (\ref{09/05/13/15:58}) remain. For simplicity, we consider the forward time only. The problem for the backward time can be proved in a similar way. \par Take any $\psi_{0} \in PW_{-}$ and let $\psi$ be the corresponding solution to our equation (\ref{08/05/13/8:50}) with $\psi(0)=\psi_{0}$. When the maximal existence time $T_{\max}^{+}<\infty$, we have (\ref{09/05/14/13:32}) as mentioned in (\ref{10/01/27/11:26}). Therefore, it suffices to prove (\ref{09/05/13/15:58}).\par We suppose the contrary that (\ref{09/05/13/15:58}) fails when $T_{\max}^{+}=\infty$, so that there exists $R_{0}>0$ such that \begin{equation}\label{08/06/12/9:57} M_{0}:=\sup_{t \in [0,\infty)} \int_{|x|\ge R_{0}}|\nabla \psi(x,t)|^{2}\,dx < \infty. \end{equation} Then, we shall derive a contradiction in three steps. \\ {\it Step1}. We claim that: there exists a constant $m_{0}>0$ such that \begin{equation}\label{08/06/13/04:35} m_{0}< \inf\left\{ \int_{|x|\ge R}|v(x)|^{2}\,dx \left| \begin{array}{l} v \in H^{1}(\mathbb{R}^{d}), \quad \mathcal{K}^{R}(v) \le -\frac{1}{4} \varepsilon_{0}, \\[6pt] \|\nabla v \|_{L^{2}(|x|\ge R)}^{2} \le M_{0}, \quad \|v\|_{L^{2}}\le \|\psi_{0}\|_{L^{2}} \end{array} \right. \right\} \ \mbox{for all $R>0$}, \end{equation} where $\varepsilon_{0}=\mathcal{B}(\psi_{0})-\mathcal{H}(\psi_{0})>0$. Let us prove this claim. Take any $v \in H^{1}(\mathbb{R}^{d})$ with the following properties: \begin{equation}\label{10/03/01/15:45} \mathcal{K}^{R}(v) \le -\frac{1}{4} \varepsilon_{0}, \quad \|\nabla v \|_{L^{2}(|x|\ge R)}^{2} \le M_{0}, \quad \|v\|_{L^{2}}\le \|\psi_{0}\|_{L^{2}}. \end{equation} Note here that the second and third properties in (\ref{10/03/01/15:45}) shows that \begin{equation}\label{10/03/01/16:22} \|v \|_{H^{1}(|x|\ge R)}^{2} \le \|\psi_{0}\|_{L^{2}}^{2}+M_{0}. \end{equation} We also have by the first property in (\ref{10/03/01/15:45}) and (\ref{08/04/20/12:50}) that \begin{equation}\label{08/06/14/22:02} \frac{1}{4}\varepsilon_{0} \le -\mathcal{K}^{R}(v) \le \int_{|x|\ge R}\rho_{3}(x)|v(x)|^{p+1}\, dx. \end{equation} Now, we define $p_{*}$ by \begin{equation}\label{10/05/20/17:06} p_{*}=\left\{ \begin{array}{ccc} 2p-1 &\mbox{if}& d=1,2, \\ 2^{*}-1 &\mbox{if }& d\ge 3. \end{array} \right. 
\end{equation} Then, the H\"older inequality and the Sobolev embedding show that \begin{equation}\label{08/06/14/22:18} \begin{split} \int_{|x|\ge R}\rho_{3}(x)|v(x)|^{p+1}\, dx &\le \|\rho_{3}\|_{L^{\infty}} \|v \|_{L^{2}(|x|\ge R)}^{p+1-\frac{(p-1)(p_{*}+1)}{p_{*}-1}} \|v \|_{L^{p_{*}+1}(|x|\ge R)}^{\frac{(p-1)(p_{*}+1)}{p_{*}-1}} \\ &\lesssim \|\rho_{3}\|_{L^{\infty}} \|v \|_{L^{2}(|x|\ge R)}^{p+1-\frac{(p-1)(p_{*}+1)}{p_{*}-1}} \|v\|_{H^{1}(|x|\ge R)}^{\frac{(p-1)(p_{*}+1)}{p_{*}-1}}, \end{split} \end{equation} where the implicit constant depends only on $d$ and $p$. This estimate, together with (\ref{10/03/01/16:22}) and (\ref{08/06/14/22:02}), yields that \begin{equation}\label{10/03/01/16:35} \frac{\varepsilon_{0}} {\|\rho_{3}\|_{L^{\infty}} \sqrt{ \left\|\psi_{0}\right\|_{L^{2}}^{2}+M_{0} }^{\frac{(p-1)(p_{*}+1)}{p_{*}-1}} } \lesssim \|v\|_{L^{2}(|x|\ge R)}^{p+1-\frac{(p-1)(p_{*}+1)}{p_{*}-1}}, \end{equation} where the implicit constant depends only on $d$ and $p$. Since $\|\rho_{3}\|_{L^{\infty}}\lesssim 1$ (see (\ref{08/12/29/12:53})), the estimate (\ref{10/03/01/16:35}) gives us the desired result (\ref{08/06/13/04:35}). \\[12pt] {\it Step2}. Let $m_{0}$ be a constant found in (\ref{08/06/13/04:35}). Then, we prove that \begin{equation}\label{10/03/01/17:26} \sup_{0\le t <\infty}\int_{|x|\ge R}|\psi(x,t)|^{2}\,dx \le m_{0} \end{equation} for all $R$ satisfying the following properties: \begin{align} \label{10/03/01/17:21} &R>R_{0}, \\[6pt] \label{08/03/29/20:04} &\frac{10d^{2}K}{R^{2}}\|\psi_{0}\|_{L^{2}}^{2}< \varepsilon_{0}, \\[6pt] \label{08/03/29/20:05} &\int_{|x|\ge R}|\psi_{0}(x)|^{2}\,dx < m_{0}, \\[6pt] \label{08/03/29/20:06} &\frac{1}{R^{2}}\left( 1+ \frac{2}{\varepsilon_{0}} \left\|\nabla \psi_{0}\right\|_{L^{2}}^{2} \right)(W_{R},|\psi_{0}|^{2})<m_{0}, \end{align} where $K$ is the constant given in (\ref{09/09/26/11:16}). We remark that Lemma \ref{10/03/02/17:47} shows that we can take $R$ satisfying (\ref{08/03/29/20:06}). \par Now, for $R>0$ satisfying (\ref{10/03/01/17:21})--(\ref{08/03/29/20:06}), we put\begin{equation} \label{10/03/03/16:49} T_{R}=\sup \left\{ T>0 \biggm| \sup_{0\le t <T }\int_{|x|\ge R}|\psi(x,t)|^{2}\,dx \le m_{0} \right\}. \end{equation} Note here that since $\psi \in C(\mathbb{R};L^{2}(\mathbb{R}^{d}))$ and $\psi(0)=\psi_{0}$, we have by (\ref{08/03/29/20:05}) that $T_{R}>0$. It is clear that $T_{R}=\infty$ shows (\ref{10/03/01/17:26}). \par We suppose the contrary that $T_{R}<\infty$. Then, it follows from $\psi \in C(\mathbb{R};L^{2}(\mathbb{R}^{d}))$ that \begin{equation}\label{10/03/03/17:19} \int_{|x|\ge R}|\psi(x,T_{R})|^{2}\,dx =m_{0}. \end{equation} Hence, the definition of $m_{0}$ (see (\ref{08/06/13/04:35})), together with (\ref{08/06/12/9:57}), (\ref{10/03/01/17:21}) and the mass conservation law (\ref{08/05/13/8:59}), leads us to that \begin{equation}\label{10/03/01/19:19} -\frac{1}{4}\varepsilon_{0}\le \mathcal{K}^{R}(\psi(T_{R})). \end{equation} Moreover, applying this inequality (\ref{10/03/01/19:19}) and (\ref{09/12/23/22:48}) (or (\ref{09/06/21/18:52})) to the generalized virial identity (\ref{08/03/29/19:05}), we obtain that \begin{equation}\label{10/03/02/15:40} \begin{split} (W_{R}, |\psi(T_{R})|^{2}) &< (W_{R},|\psi_{0}|^{2}) +2T_{R} \Im{(\vec{w}_{R} \cdot \nabla \psi_{0},\psi_{0})} -\varepsilon_{0}^{2}T_{R}^{2} \\[6pt] &\qquad +\frac{1}{4}\varepsilon_{0}T_{R}^{2} -\frac{1}{2}\int_{0}^{T_{R}}\int_{0}^{t'}(\Delta ({\rm div}\,{\vec{w}_{R}}), |\psi(t'')|^{2})\,dt''dt' . 
\end{split} \end{equation} Here, the estimate (\ref{08/04/20/16:54}), the mass conservation law (\ref{08/05/13/8:59}) and (\ref{08/03/29/20:04}) show that the last term on the right-hand side above is estimated as follows: \begin{equation}\label{10/03/02/16:08} \begin{split} -\frac{1}{2}\int_{0}^{T_{R}}\int_{0}^{t'}(\Delta ({\rm div}\,{\vec{w}_{R}}), |\psi(t'')|^{2})\,dt''dt' &\le \frac{1}{2}\int_{0}^{T_{R}}\int_{0}^{t'}\frac{10d^{2}K}{R^{2}} \left\|\psi(t'') \right\|_{L^{2}}^{2}\,dt''dt' \\[6pt] &\le \frac{1}{4}\varepsilon_{0}T_{R}^{2}. \end{split} \end{equation} Therefore, we have \begin{equation}\label{09/09/24/13:05} \begin{split} (W_{R}, |\psi(T_{R})|^{2}) &< (W_{R},|\psi_{0}|^{2}) +2T_{R} \Im{(\vec{w}_{R} \cdot \nabla \psi_{0},\psi_{0})}-\frac{1}{2}T_{R}^{2}\varepsilon_{0}. \\[6pt] &=(W_{R},|\psi_{0}|^{2}) -\frac{1}{2}\varepsilon_{0} \left\{ T_{R} -\frac{2}{\varepsilon_{0}}\Im{(\vec{w}_{R} \cdot \nabla \psi_{0},\psi_{0})} \right\}^{2} + \frac{2}{\varepsilon_{0}}\left| (\vec{w}_{R} \cdot \nabla \psi_{0},\psi_{0}) \right|^{2} \\[6pt] &\le (W_{R},|\psi_{0}|^{2}) + \frac{2}{\varepsilon_{0}}\left| (\vec{w}_{R} \cdot \nabla \psi_{0},\psi_{0}) \right|^{2} \\[6pt] &\le (W_{R},|\psi_{0}|^{2}) + \frac{2}{\varepsilon_{0}} \left\| \nabla \psi_{0} \right\|_{L^{2}}^{2} \int_{\mathbb{R}^{d}} W_{R}(x)|\psi_{0}(x)|^{2}\,dx, \end{split} \end{equation} where we have used the Schwarz inequality and (\ref{09/09/25/13:24}) to derive the last inequality. Combining this inequality (\ref{09/09/24/13:05}) with (\ref{08/03/29/20:06}), we see that \begin{equation}\label{10/03/02/16:28} (W_{R}, |\psi(T_{R})|^{2}) \le \left( 1+ \frac{2}{\varepsilon_{0}} \left\|\nabla \psi_{0}\right\|_{L^{2}}^{2} \right) (W_{R},|\psi_{0}|^{2})<R^{2}m_{0}. \end{equation} On the other hand, since $W_{R}(x) \ge R^{2}$ for $|x|\ge R$ (see (\ref{10/02/27/22:19})), we have \begin{equation}\label{09/09/24/16:30} \int_{|x|\ge R } |\psi(x,T_{R})|^{2} \,dx = \frac{1}{R^{2}}\int_{|x|\ge R} R^{2} |\psi(x,T_{R})|^{2} \,dx \le \frac{1}{R^{2}} (W_{R}, |\psi(T_{R})|^{2}). \end{equation} Thus, it follows form (\ref{10/03/02/16:28}) and (\ref{09/09/24/16:30}) that \begin{equation}\label{10/03/06/23:56} \int_{|x|\ge R } |\psi(x,T_{R})|^{2} \,dx <m_{0}, \end{equation} which contradicts (\ref{10/03/03/17:19}), so that $T_{R}=\infty$ and (\ref{10/03/01/17:26}) hold. \\ {\it Step3}. We complete the proof of Theorem \ref{08/06/12/9:48}. The definition of $m_{0}$, together with the mass conservation law (\ref{08/05/13/8:59}), (\ref{08/06/12/9:57}) and (\ref{10/03/01/17:26}), shows that \begin{equation}\label{10/03/03/17:48} -\frac{1}{4}\varepsilon_{0}\le \mathcal{K}^{R}(\psi(t)) \qquad \mbox{for all $R>0$ satisfying (\ref{10/03/01/17:21})--(\ref{08/03/29/20:06}), and all $t\ge 0$}. \end{equation} Applying this estimate (\ref{10/03/03/17:48}) to the generalized virial identity (\ref{08/03/29/19:05}), we obtain the following estimate as well as {\it Step2} (see (\ref{10/03/02/15:40}) and (\ref{10/03/02/16:08})): \begin{equation}\label{10/03/03/17:56} (W_{R}, |\psi(t)|^{2}) \le (W_{R},|\psi_{0}|^{2}) +2t \Im{(\vec{w}_{R} \cdot \nabla \psi_{0},\psi_{0})}-\frac{1}{2}t^{2}\varepsilon_{0} \quad \mbox{for all $t\ge 0$}. \end{equation} This inequality means that $(W_{R}, |\psi(t)|^{2})$ becomes negative in a finite time, so that $T_{\max}^{+}$ must be finite. However, this contradicts $T_{\max}^{+}=\infty$. Hence, (\ref{08/06/12/9:57}) derives an absurd conclusion: Thus, (\ref{09/05/13/15:58}) holds. 
\end{proof} Next, we give the proof of Theorem \ref{08/04/21/9:28}. \begin{proof}[Proof of Theorem \ref{08/04/21/9:28}] Let $\psi_{0}$ be a radially symmetric function in $PW_{-}$, and let $\psi$ be the solution to (\ref{08/05/13/8:50}) with $\psi(0)=\psi_{0}$. \par To handle the term containing $\mathcal{K}^{R}$ in the generalized virial identity (\ref{08/03/29/19:05}) for $\psi$, we consider the following variational problem, as well as Lemma 3.3 in \cite{Nawa8}: \begin{lemma} \label{08/04/20/18:30} Assume that $d\ge 3$, $2+\frac{4}{d}< p+1 < 2^{*}$, and $p\le 5$ if $d=2$. Then, there exists $R_{*}>0$ and $m_{*}>0$ such that \[ m_{*} < \inf\left\{ \int_{|x|\ge R}|v(x)|^{2}\,dx \left| \begin{array}{l} v \in H^{1}_{rad}(\mathbb{R}^{d}), \ \mathcal{K}^{R}(v) \le -\frac{1}{4}\varepsilon_{0}, \\[6pt] \|v\|_{L^{2}}\le \|\psi_{0}\|_{L^{2}} \end{array} \right. \right\} \quad \mbox{for all $R\ge R_{*}$}, \] where $H^{1}_{rad}(\mathbb{R}^{d})$ is the set of radially symmetric functions in $H^{1}(\mathbb{R}^{d})$, and $\varepsilon_{0}=\mathcal{H}(\psi_{0})-\mathcal{B}(\psi_{0})>0$. \end{lemma} \begin{proof}[Proof of Lemma \ref{08/04/20/18:30}] We take a function $v \in H_{rad}^{1}(\mathbb{R}^{d})$ with the following properties: \begin{align} \label{10/03/05/13:28} &\mathcal{K}^{R}(v) \le -\frac{1}{4}\varepsilon_{0}, \\[6pt] \label{10/03/05/13:38} &\|v\|_{L^{2}}\le \|\psi_{0}\|_{L^{2}}, \end{align} where $R$ is a sufficiently large constant to be chosen later (see (\ref{10/03/06/18:09})). Since $v \in H_{rad}^{1}(\mathbb{R}^{d})$, $\mathcal{K}^{R}(v)$ is written as follows (see Remark \ref{09/09/25/16:47}): \begin{equation}\label{10/03/05/13:56} \mathcal{K}^{R}(v) = \int_{\mathbb{R}^{d}}\rho_{0}(x)|\nabla v(x)|^{2}\,dx -\int_{\mathbb{R}^{d}}\rho_{3}(x)|v(x)|^{p+1}\,dx. \end{equation} Hence, it follows from (\ref{10/03/05/13:28}) that \begin{equation}\label{08/04/20/19:49} \begin{split} \frac{1}{4}\varepsilon_{0} +\int_{\mathbb{R}^{d}}\rho_{0}(x)|\nabla v(x)|^{2}\,dx &\le -\mathcal{K}^{R}(v)+ \int_{\mathbb{R}^{d}}\rho_{0}(x)|\nabla v(x)|^{2}\,dx \\[6pt] &=\int_{\mathbb{R}^{d}}\rho_{3}(x)|v(x)|^{p+1}\,dx. \end{split} \end{equation} To estimate the right-hand side of (\ref{08/04/20/19:49}), we employ the following inequality (see Lemma 6.5.11 in \cite{Cazenave}): Assume that $d\ge 1$. Let $\kappa$ be a non-negative and radially symmetric function in $C^{1}(\mathbb{R}^{d})$ with $|x|^{-(d-1)}\max\{-\frac{x\nabla \kappa }{|x|},\, 0\} \in L^{\infty}(\mathbb{R}^{d})$. Then, we have that \begin{equation}\label{08/06/12/7:28} \begin{split} & \|\kappa^{\frac{1}{2}} f\|_{L^{\infty}} \\[6pt] &\le K_{5} \|f\|_{L^{2}}^{\frac{1}{2}} \left\{ \left\| |x|^{-(d-1)}\kappa \, \frac{x \nabla f}{|x|} \right\|_{L^{2}}^{\frac{1}{2}} + \left\| |x|^{-(d-1)} \max \left\{-\frac{x \nabla \kappa}{|x|}, \ 0 \right\} \right\|_{L^{\infty}}^{\frac{1}{2}} \left\| f \right\|_{L^{2}}^{\frac{1}{2}} \right\} \\[6pt] &\hspace{9cm} \mbox{for all $f \in H_{rad}^{1}(\mathbb{R}^{d})$}, \end{split} \end{equation} where $K_{5}>0$ is some constant depending only on $d$. 
\\ This inequality (\ref{08/06/12/7:28}), together with (\ref{08/12/29/12:53}), (\ref{08/04/20/13:06}) and ${\rm supp}\, \rho_{3}=\{|x|\ge R\}$ (see (\ref{08/12/29/14:16})), yields the following estimate for the right-hand side of (\ref{08/04/20/19:49}): \begin{equation}\label{08/07/02/14:50} \begin{split} &\int_{\mathbb{R}^{d}}\rho_{3}(x)|v(x)|^{p+1}\,dx \\[6pt] &\le \|\rho_{3}^{\frac{1}{4}}v\|_{L^{\infty}}^{p-1} \int_{\mathbb{R}^{d}}\rho_{3}^{\frac{5-p}{4}}(x)|v(x)|^{2}\,dx \\[6pt] &\le \frac{K_{5}^{p-1}}{R^{\frac{p-1}{2}(d-1)}} \left\| v \right\|_{L^{2}(|x|\ge R)}^{\frac{p-1}{2}} \left\{ \left\|\sqrt{\rho_{3}} \frac{x\nabla v}{|x|}\right\|_{L^{2}}^{\frac{1}{2}} \!\! + \left\|\max\left\{ -\frac{x \nabla \sqrt{\rho_{3}}}{|x|},\ 0 \right\} \right\|_{L^{\infty}}^{\frac{1}{2}} \!\! \left\| v \right\|_{L^{2}}^{\frac{1}{2}} \right\}^{p-1}\hspace{-12pt}K_{3}^{\frac{5-p}{4}}\|v\|_{L^{2}}^{2} \\[6pt] &\le \frac{K_{5}^{p-1}K_{3}^{\frac{5-p}{4}}}{R^{\frac{p-1}{2}(d-1)}} \left\| v \right\|_{L^{2}(|x|\ge R)}^{\frac{p-1}{2}} \left\{ \left\|\sqrt{\rho_{3}}\nabla v \right\|_{L^{2}}^{\frac{1}{2}} + \frac{(K_{3}')^{\frac{1}{2}}}{R^{\frac{1}{2}}} \left\| v \right\|_{L^{2}}^{\frac{1}{2}} \right\}^{p-1}\hspace{-12pt}\|v\|_{L^{2}}^{2} \\[6pt] &\le \frac{K_{5}^{p-1}K_{3}^{\frac{5-p}{4}}}{R^{\frac{p-1}{2}(d-1)}} \left\| v \right\|_{L^{2}(|x|\ge R)}^{\frac{p-1}{2}} C_{p}\left\{ \left\|\sqrt{\rho_{3}}\nabla v \right\|_{L^{2}}^{\frac{p-1}{2}} + \frac{(K_{3}')^{\frac{p-1}{2}}}{R^{\frac{p-1}{2}}} \left\| v \right\|_{L^{2}}^{\frac{p-1}{2}} \right\} \|v\|_{L^{2}}^{2} \end{split} \end{equation} for some constant $C_{p}>0$ depending only on $p$. Moreover, using $p\le 5$ (so that $\frac{p-1}{2}\le 2$), $\|v\|_{L^{2}}\le \|\psi_{0}\|_{L^{2}}$, and the Young inequality ($ab \le \frac{a^{r}}{r}+\frac{b^{r'}}{r'}$ with $\frac{1}{r}+\frac{1}{r'}=1$), we obtain that \begin{equation}\label{10/03/06/17:54} \begin{split} \mbox{R.H.S. of (\ref{08/07/02/14:50})} &\le \frac{p-1}{4}\left\| v \right\|_{L^{2}(|x|\ge R)}^{2} \left\|\sqrt{\rho_{3}}\nabla v \right\|_{L^{2}}^{2} +\frac{5-p}{4} \left( \frac{C_{p}\left\|\psi_{0}\right\|_{L^{2}}^{2}K_{5}^{p-1}K_{3}^{\frac{5-p}{4}}}{R^{\frac{p-1}{2}(d-1)}}\right)^{\frac{4}{5-p}} \\[6pt] &\qquad +C_{p}\frac{\left\|\psi_{0}\right\|_{L^{2}}^{p+1}K_{5}^{p-1}K_{3}^{\frac{5-p}{4}}(K_{3}')^{\frac{p-1}{2}}}{R^{\frac{d(p-1)}{2}}}. \end{split} \end{equation} If $R$ is so large that \begin{equation}\label{10/03/06/18:09} \frac{5-p}{4} \left( \frac{C_{p}\left\|\psi_{0}\right\|_{L^{2}}^{2}K_{5}^{p-1}K_{3}^{\frac{5-p}{4}}}{R^{\frac{p-1}{2}(d-1)}}\right)^{\frac{4}{5-p}} + \frac{C_{p}\left\|\psi_{0}\right\|_{L^{2}}^{p+1}K_{5}^{p-1}K_{3}^{\frac{5-p}{4}}(K_{3}')^{\frac{p-1}{2}}}{R^{\frac{d(p-1)}{2}}} \le \frac{1}{8}\varepsilon_{0}, \end{equation} then (\ref{08/04/20/19:49}), together with (\ref{08/07/02/14:50}) and (\ref{10/03/06/17:54}), shows that \begin{equation}\label{10/03/06/18:12} \int_{\mathbb{R}^{d}}\left\{ \rho_{0}(x)- \frac{p-1}{4}\|v\|_{L^{2}(|x|\ge R)}^{2}\rho_{3}(x)\right\}|\nabla v(x)|^{2} \le -\frac{\varepsilon_{0}}{8}. \end{equation} Hence, it must hold that \begin{equation}\label{10/03/06/21:04} \inf_{|x|\ge R}{ \frac{\rho_{0}(x)}{\rho_{3}(x)}}< \frac{p-1}{4}\|v\|_{L^{2}(|x| \ge R)}^{2}, \end{equation} which, together with (\ref{08/04/20/16:07}), gives us the desired result. \end{proof} The following lemma tells us that the solution $\psi$ is well localized: \begin{lemma}[cf. 
Lemma 3.4 in \cite{Nawa8}] \label{08/03/30/15:12} Assume that $d\ge 1$, $2+\frac{4}{d}<p+1<2^{*}$, and $p\le 5$ if $d=2$. Let $R_{*}$ and $m_{*}$ be the constants found in Lemma \ref{08/04/20/18:30}. Then, we have
\[ T_{\max}^{+}=\sup{\left\{ T>0 \biggm| \sup_{0\le t <T} \int_{|x|\ge R}|\psi(x,t)|^{2}\,dx < m_{*} \right\}} \]
for all $R$ with the following properties:
\begin{align} \label{10/03/07/0:10} &R>R_{*}, \\[6pt] \label{10/03/07/0:11} &\frac{10d^{2}K}{R^{2}}\|\psi_{0}\|_{L^{2}}^{2}< \varepsilon_{0}, \\[6pt] \label{10/03/07/0:12} &\int_{|x|\ge R}|\psi_{0}(x)|^{2}\,dx < m_{*}, \\[6pt] \label{10/03/07/0:13} &\frac{1}{R^{2}}\left( 1+ \frac{2}{\varepsilon_{0}} \left\|\nabla \psi_{0}\right\|_{L^{2}}^{2} \right)(W_{R},|\psi_{0}|^{2})<m_{*}. \end{align}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{08/03/30/15:12}] For $R>0$ satisfying (\ref{10/03/07/0:10})--(\ref{10/03/07/0:13}), we put
\[ T_{R}:=\sup{\left\{ T>0 \biggm| \sup_{0\le t \le T}\int_{|x|\ge R}|\psi(x,t)|^{2}\,dx < m_{*} \right\}}. \]
Suppose, to the contrary, that $T_{R}<T_{\max}^{+}$. Then it follows from $\psi \in C([0,T_{\max}^{+});L^{2}(\mathbb{R}^{d}))$ that
\begin{equation}\label{08/03/30/16:09} \int_{|x|\ge R}|\psi(x,T_{R})|^{2}\,dx = m_{*}. \end{equation}
Hence, we obtain by Lemma \ref{08/04/20/18:30} that
\begin{equation}\label{10/03/06/23:50} -\frac{1}{4}\varepsilon_{0}< \mathcal{K}^{R}(\psi(T_{R})). \end{equation}
Then, the same computation as (\ref{10/03/02/15:40})--(\ref{10/03/06/23:56}) yields that
\begin{equation}\label{10/03/06/23:58} \int_{|x|\ge R } |\psi(x,T_{R})|^{2} \,dx <m_{*}, \end{equation}
which contradicts (\ref{08/03/30/16:09}). Thus, $T_{R}=T_{\max}^{+}$.
\end{proof}
Now, we are in a position to complete the proof of Theorem \ref{08/04/21/9:28}. \\
{\it Proof of Theorem \ref{08/04/21/9:28} (continued).} We take $R>0$ satisfying (\ref{10/03/07/0:10})--(\ref{10/03/07/0:13}). Then, Lemma \ref{08/03/30/15:12} gives us that
\begin{equation}\label{10/03/07/0:19} \int_{|x|\ge R}|\psi(x,t)|^{2}\,dx < m_{*} \quad \mbox{for all $t \in [0,T_{\max}^{+})$}. \end{equation}
Moreover, Lemma \ref{08/04/20/18:30}, together with (\ref{10/03/07/0:19}), shows that
\begin{equation}\label{10/03/07/0:22} -\frac{\varepsilon_{0}}{4}< \mathcal{K}^{R}(\psi(t)) \quad \mbox{for all $t \in [0,T_{\max}^{+})$}. \end{equation}
Applying (\ref{10/03/07/0:22}) and (\ref{09/12/23/22:48}) to the generalized virial identity (\ref{08/03/29/19:05}), we obtain that
\begin{equation}\label{10/03/07/0:29} \begin{split} (W_{R}, |\psi(t)|^{2}) &< (W_{R},|\psi_{0}|^{2}) +2t \Im{(\vec{w}_{R} \cdot \nabla \psi_{0},\psi_{0})} -\varepsilon_{0}t^{2} \\[6pt] &\qquad +\frac{1}{4}\varepsilon_{0}t^{2} -\frac{1}{2}\int_{0}^{t}\int_{0}^{t'}(\Delta ({\rm div}\,{\vec{w}_{R}}), |\psi(t'')|^{2})\,dt''dt' . \end{split} \end{equation}
Here, it follows from (\ref{08/04/20/16:54}), the mass conservation law (\ref{08/05/13/8:59}) and (\ref{10/03/07/0:11}) that the last term on the right-hand side above is estimated as follows:
\begin{equation}\label{10/03/07/0:31} \begin{split} -\frac{1}{2}\int_{0}^{t}\int_{0}^{t'}(\Delta ({\rm div}\,{\vec{w}_{R}}), |\psi(t'')|^{2})\,dt''dt' &\le \frac{1}{2}\int_{0}^{t}\int_{0}^{t'}\frac{10d^{2}K}{R^{2}} \left\|\psi(t'') \right\|_{L^{2}}^{2}\,dt''dt' \\[6pt] &\le \frac{1}{4}\varepsilon_{0}t^{2}.
\end{split} \end{equation} Combining (\ref{10/03/07/0:29}) with (\ref{10/03/07/0:31}), we obtain that \begin{equation}\label{10/03/07/0:30} (W_{R}, |\psi(t)|^{2}) < (W_{R},|\psi_{0}|^{2}) +2t \Im{(\vec{w}_{R} \cdot \nabla \psi_{0},\psi_{0})}-\frac{1}{2}t^{2}\varepsilon_{0}. \end{equation} This inequality means that $(W_{R}, |\psi(t)|^{2})$ becomes negative in a finite time, so that $T_{\max}^{+}<\infty$: This fact, together with (\ref{10/01/27/11:26}), shows (\ref{10/02/22/20:33}). \par Now, we shall show the claims (i)--(iii) in Theorem \ref{08/04/21/9:28}. \par We can find that Lemma \ref{08/03/30/15:12} is still valid if we replace $m_{*}$ with any $m \in (0,m_{*})$. Thus, we see that (i) holds. \par We next consider (ii). Since we have (\ref{08/04/20/12:51}), in order to prove (\ref{08/06/10/10:21}), it suffices to show that \begin{equation}\label{10/03/07/16:52} \int_{0}^{T_{\max}^{+}}(T_{\max}^{+}-t) \left( \int_{\mathbb{R}^{d}}\rho_{0}(x)|\nabla \psi(x,t)|^{2}\,dx \right) dt < \infty \quad \mbox{for all sufficiently large $R>0$}. \end{equation} Let us prove this. Integrating by parts, we find that \begin{equation}\label{10/03/07/17:54} \begin{split} &\int_{0}^{t} (t-t')\left( \int_{\mathbb {R}^{d}}\rho_{0}(x)|\nabla \psi(x,t')|^{2}\,dx \right)dt' \\[6pt] &= \int_{0}^{t}\int_{0}^{t'} \left( \int_{\mathbb{R}^{d}}\rho_{0}(x)|\nabla \psi(x,t'')|^{2}\,dx \right) dt''\,dt' \quad \mbox{for all $t\ge 0$}. \end{split} \end{equation} This formula (\ref{10/03/07/17:54}) and the generalized virial identity (\ref{08/03/29/19:05}) lead us to the following estimate: \begin{equation}\label{10/03/07/17:48} \begin{split} &2 \int_{0}^{t} (t-t')\left( \int_{\mathbb {R}^{d}}\rho_{0}(x)|\nabla \psi(x,t')|^{2}\,dx \right)dt' \\[6pt] &\le (W_{R},|\psi_{0}|^{2}) +2t \Im{(\vec{w}_{R} \cdot \nabla \psi_{0},\psi_{0})}+2\int_{0}^{t}\int_{0}^{t'}\mathcal{K}(\psi(t''))\,dt''dt' \\[6pt] & \quad +2\int_{0}^{t}(t-t') \left( \int_{\mathbb{R}^{d}} \rho_{3}(x)|\psi(x,t')|^{p+1}\,dx \right)dt' -\frac{1}{2}\int_{0}^{t}\int_{0}^{t'}(\Delta ({\rm div}\,{\vec{w}_{R}}), |\psi(t'')|^{2})\,dt''dt' . \end{split} \end{equation} Here, applying the estimate (\ref{10/03/07/0:31}) to the right-hand side of (\ref{10/03/07/17:48}) (abbreviated to R.H.S. of (\ref{10/03/07/17:48})), we have that \begin{equation}\label{10/03/07/18:21} \begin{split} \mbox{R.H.S. of (\ref{10/03/07/17:48})} &\le (W_{R},|\psi_{0}|^{2}) +2t \Im{(\vec{w}_{R} \cdot \nabla \psi_{0},\psi_{0})}-\varepsilon_{0}t^{2} \\[6pt] &\quad +2\int_{0}^{t}(t-t') \left( \int_{\mathbb{R}^{d}} \rho_{3}(x)|\psi(x,t')|^{p+1}\,dx \right)dt' +\frac{\varepsilon_{0}}{4}t^{2}. \end{split} \end{equation} Moreover, taking $R$ so large that (\ref{10/03/06/18:09}) holds, we find by the same computation as (\ref{08/07/02/14:50})--(\ref{10/03/06/17:54}) that \begin{equation}\label{10/03/07/19:03} \int_{\mathbb{R}^{d}}\rho_{3}(x)|\psi(x,t)|^{p+1}\,dx \le \frac{p-1}{4}\left\|\psi(t) \right\|_{L^{2}(|x|\ge R)}^{2} \int_{\mathbb{R}^{d}}\rho_{3}(x)|\nabla \psi(x,t)|^{2}\,dx + \frac{\varepsilon_{0}}{8}. 
\end{equation} Combining (\ref{10/03/07/17:48}), (\ref{10/03/07/18:21}) and (\ref{10/03/07/19:03}), we obtain that
\begin{equation}\label{10/03/07/19:11} \begin{split} &2\int_{0}^{t} (t-t') \left( \int_{\mathbb{R}^{d}} \left\{ \rho_{0}(x)- \frac{p-1}{4}\left\|\psi(t') \right\|_{L^{2}(|x|\ge R)}^{2} \rho_{3}(x) \right\} |\nabla \psi(x,t')|^{2}\,dx \right)\,dt' \\[6pt] &\le (W_{R},|\psi_{0}|^{2}) +2t \Im{(\vec{w}_{R} \cdot \nabla \psi_{0},\psi_{0})} -\frac{\varepsilon_{0}}{2}t^{2} \\[6pt] &\hspace{5cm} \mbox{for all $R>0$ satisfying (\ref{10/03/06/18:09}) and (\ref{10/03/07/0:10})--(\ref{10/03/07/0:13})}. \end{split} \end{equation}
Here, using the result of (i) (the formula (\ref{08/06/12/8:54})), we can take a constant $R_{1}>0$ such that
\begin{equation}\label{10/03/07/16:47} \frac{p-1}{4}\left\| \psi(t) \right\|_{L^{2}(|x|\ge R_{1})}^{2}\frac{1}{K_{4}} \le \frac{1}{2} \quad \mbox{for all $t \in I_{\max}$}, \end{equation}
where $K_{4}$ is some positive constant found in (\ref{08/04/20/16:07}). Therefore, (\ref{10/03/07/19:11}), together with (\ref{10/03/07/16:47}) and (\ref{08/04/20/16:07}), shows that
\begin{equation}\label{10/03/07/22:23} \begin{split} & \int_{0}^{t}(t-t')\left( \int_{\mathbb{R}^{d}}\rho_{0}(x)|\nabla \psi(x,t')|^{2}\,dx \right)dt' \\[6pt] &\le (W_{R},|\psi_{0}|^{2}) +2t \Im{(\vec{w}_{R} \cdot \nabla \psi_{0},\psi_{0})} -\frac{\varepsilon_{0}}{2}t^{2} \quad \mbox{for all sufficiently large $R$ and $t \in [0,T_{\max}^{+})$}. \end{split} \end{equation}
This, together with (\ref{08/04/20/12:50}) and (\ref{08/04/20/12:51}), gives (\ref{08/06/10/10:21}).
\par Once (\ref{08/06/10/10:21}) is proved, we can easily obtain (\ref{08/06/10/10:22}). Indeed, (\ref{10/03/07/19:03}), together with (\ref{08/06/10/10:21}) and (\ref{10/03/01/15:06}), immediately yields (\ref{08/06/10/10:22}). Thus, we have proved (ii).
\par Finally, we prove (iii). We take $R>0$ so large that (\ref{08/06/10/10:21}) holds.
\par To prove (\ref{08/06/12/6:56}), we employ the radial interpolation inequality of Strauss \cite{Strauss}:
\begin{equation}\label{10/05/30/14:01} \left\|f \right\|_{L^{\infty}(|x|\ge R)}^{2} \lesssim \frac{1}{R^{d-1}} \left\|f \right\|_{L^{2}} \left\| \nabla f \right\|_{L^{2}} \quad \mbox{for all $f \in H_{rad}^{1}(\mathbb{R}^{d})$ and all $R>0$}, \end{equation}
where the implicit constant depends only on $d$. \\
This inequality (\ref{10/05/30/14:01}), together with (\ref{08/06/10/10:21}) and the mass conservation law (\ref{08/05/13/8:59}), gives us (\ref{08/06/12/6:56}).
\par The estimate (\ref{08/06/12/6:57}) follows from (\ref{08/06/12/6:56}) and the inequality
\[ \|\psi(t)\|_{L^{p+1}(|x|>R)}^{p+1}\le \|\psi_{0}\|_{L^{2}}^{2}\|\psi(t)\|_{L^{\infty}(|x|>R)}^{p-1}. \]
We obtain (\ref{08/06/12/6:58}) and (\ref{08/06/12/6:59}) from (\ref{08/06/12/6:56}) and (\ref{08/06/12/6:57}), respectively.
\end{proof}
We also have variants of (\ref{08/06/12/6:56})--(\ref{08/06/12/6:59}). For example, replacing $R$ by $R(T_{\max}^{+}-t)^{-\frac{1}{2(d-1)}}$ in their proofs, we obtain
\[ \int_{0}^{T_{\max}^{+}}\|\psi(t)\|_{L^{\infty}(|x|>R(T_{\max}^{+}-t)^{-\frac{1}{2(d-1)}})}^{4}\,dt < \infty \]
and
\[ \int_{0}^{T_{\max}^{+}}\|\psi(t)\|_{L^{p+1}(|x|>R(T_{\max}^{+}-t)^{-\frac{1}{2(d-1)}})}^{\frac{4(p+1)}{p-1}}\,dt < \infty .
\] Moreover, replacing $R$ by $R(T_{\max}^{+}-t)^{-\frac{1}{d-1}}$ in (\ref{08/06/12/6:56}) and (\ref{08/06/12/6:57}), we obtain \[ \liminf_{t \to T_{\max}^{+}} \|\psi(t)\|_{L^{\infty}\left(|x|>R(T_{\max}^{+}-t)^{-\frac{1}{d-1}}\right)}^{2}=0 \] and \[ \liminf_{t \to T_{\max}^{+}} \|\psi(t)\|_{L^{p+1}\left(|x|>R(T_{\max}^{+}-t)^{-\frac{1}{d-1}}\right)}^{p+1} =0 , \] respectively. \\ \par Now, we shall give the proof of Propositions \ref{08/11/02/22:52}. \begin{proof}[Proof of Proposition \ref{08/11/02/22:52}] Suppose that $\psi$ is a solution to the equation (\ref{08/05/13/8:50}) satisfying that \begin{equation}\label{10/01/26/12:50} \limsup_{t\to T_{\max}^{+}}\left\|\nabla \psi(t) \right\|_{L^{2}}= \limsup_{t\to T_{\max}^{+}}\left\| \psi(t) \right\|_{L^{p+1}}=\infty. \end{equation} Then, there exists a sequence $\{t_{n}\}_{n\in \mathbb{N}}$ in $[0,T_{\max}^{+})$ such that \begin{align} \label{10/03/08/17:57} &\lim_{n\to \infty}t_{n} = T_{\max}^{+}, \\[6pt] \label{10/01/26/12:53} &\left\| \psi(t_{n})\right\|_{L^{p+1}}=\sup_{t\in [0,t_{n})} \left\| \psi(t)\right\|_{L^{p+1}}. \end{align} Using such a sequence $\{t_{n}\}_{n\in \mathbb{N}}$, we define a number $\lambda_{n}$ by \begin{equation}\label{10/01/26/12:58} \lambda_{n}=\left\| \psi(t_{n})\right\|_{L^{p+1}}^{-\frac{(p-1)(p+1)}{d+2-(d-2)p}}, \quad n \in \mathbb{N}. \end{equation} It is easy to see that \begin{equation}\label{10/03/08/17:59} \lim_{n\to \infty}\lambda_{n} = 0. \end{equation} We consider the scaled functions $\psi_{n}$ defined by \begin{equation}\label{10/01/26/13:14} \psi_{n}(x,t):=\lambda_{n}^{\frac{2}{p-1}}\overline{\psi(\lambda_{n}x,t_{n}-\lambda_{n}^{2}t)}, \quad (x,t)\in \mathbb{R}^{d}\times \left(-\frac{T_{\max}^{+}-t_{n}}{\lambda_{n}^{2}}, \frac{t_{n}}{\lambda_{n}^{2}}\right], \quad n \in \mathbb{N}. \end{equation} We can easily verify that \begin{align} \label{10/03/08/15:29} &2i\displaystyle{\frac{\partial \psi_{n}}{\partial t}} +\Delta \psi_{n}+|\psi_{n}|^{p-1}\psi_{n}=0 \quad \mbox{in} \ \mathbb{R}^{d}\times \left(-\frac{T_{\max}^{+}-t_{n}}{\lambda_{n}^{2}}, \frac{t_{n}}{\lambda_{n}^{2}}\right], \\[6pt] \label{10/03/08/15:33} &\left\| \psi_{n}(t)\right\|_{L^{2}} = \lambda_{n}^{-\frac{d(p-1)-4}{2(p-1)}}\left\| \psi_{0} \right\|_{L^{2}} \quad \mbox{for all $t\in \left(-\frac{T_{\max}^{+}-t_{n}}{\lambda_{n}^{2}}, \frac{t_{n}}{\lambda_{n}^{2}}\right]$}, \\[6pt] \label{10/03/08/15:34} &\mathcal{H}(\psi_{n}(t))=\lambda_{n}^{\frac{2(p+1)}{p-1}-d}\mathcal{H}(\psi_{0}) \quad \mbox{for all $t\in \left(-\frac{T_{\max}^{+}-t_{n}}{\lambda_{n}^{2}}, \frac{t_{n}}{\lambda_{n}^{2}}\right]$}, \\[6pt] \label{08/11/16/16:22} &\sup_{t\in \big[0,\ \frac{t_{n}}{\lambda_{n}^{2}}\big]}\left\| \psi_{n}(t)\right\|_{L^{p+1}}=1, \end{align} where we put $\psi_{0}=\psi(0)$. Besides, we have that \begin{equation}\label{08/11/16/16:23} \sup_{t\in \big[0,\frac{t_{n}}{\lambda_{n}^{2}}\big]}\left\| \nabla \psi_{n}(t)\right\|_{L^{2}}^{2} \le 1 \quad \mbox{for all sufficiently large $n \in \mathbb{N}$}. \end{equation} Indeed, (\ref{10/03/08/15:34}) and (\ref{08/11/16/16:22}) lead us to that \begin{equation}\label{10/03/08/15:52} \begin{split} \left\| \nabla \psi_{n}(t)\right\|_{L^{2}}^{2} &= \mathcal{H}(\psi_{n}(t))+\ \frac{2}{p+1}\left\| \psi_{n}(t) \right\|_{L^{p+1}}^{p+1} \\ &\le \lambda_{n}^{\frac{2(p+1)}{p-1}-d}\mathcal{H}(\psi_{0})+\frac{2}{p+1} \quad \mbox{for all $t\in \big[0, \frac{t_{n}}{\lambda_{n}^{2}}\big]$}, \end{split} \end{equation} which, together with (\ref{10/03/08/17:59}), immediately yields (\ref{08/11/16/16:23}). 
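\par For completeness, we note that the normalization (\ref{08/11/16/16:22}) is an immediate consequence of the choice (\ref{10/01/26/12:58}) of $\lambda_{n}$: since
\[
\left\| \psi_{n}(t)\right\|_{L^{p+1}}
=\lambda_{n}^{\frac{2}{p-1}-\frac{d}{p+1}}\left\| \psi(t_{n}-\lambda_{n}^{2}t)\right\|_{L^{p+1}}
=\frac{\left\| \psi(t_{n}-\lambda_{n}^{2}t)\right\|_{L^{p+1}}}{\left\| \psi(t_{n})\right\|_{L^{p+1}}}
\qquad \left( \frac{2}{p-1}-\frac{d}{p+1}=\frac{d+2-(d-2)p}{(p-1)(p+1)} \right),
\]
taking the supremum over $t\in \big[0,\frac{t_{n}}{\lambda_{n}^{2}}\big]$, that is, over $t_{n}-\lambda_{n}^{2}t \in [0,t_{n}]$, and using (\ref{10/01/26/12:53}) gives the value $1$.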
\par Now, we suppose that \begin{equation}\label{10/01/27/12:38} \left\| \psi \right\|_{L^{\infty}([0,T_{\max}^{+});L^{\frac{d}{2}(p-1)})}<\infty, \end{equation} so that $\{\psi_{n}\}$ satisfies that \begin{equation}\label{10/03/10/18:08} \sup_{n\in \mathbb{N}}\left\| \psi_{n} \right\|_{L^{\infty}([0,\frac{t_{n}}{\lambda_{n}^{2}}];L^{\frac{d}{2}(p-1)})} \le \left\| \psi \right\|_{L^{\infty}([0,T_{\max}^{+});L^{\frac{d}{2}(p-1)})}<\infty . \end{equation} Take any $T>0$. Then, extraction of some subsequence of $\{\psi_{n}\}$ allows us to assume that: \begin{equation}\label{10/04/12/18:23} \sup_{t \in [0,T]} \left\| \psi_{n}(t) \right\|_{L^{\frac{d}{2}(p-1)}}\lesssim 1, \ \sup_{t \in [0,T]} \left\| \psi_{n}(t) \right\|_{L^{p+1}}=1, \ \sup_{t \in [0,T]} \left\| \nabla \psi_{n}(t) \right\|_{L^{2}}\le 1 \quad \mbox{for all $n \in \mathbb{N}$}, \end{equation} where the implicit constant is independent of $n$. The condition (\ref{10/04/12/18:23}) enable us to apply Lemmata \ref{08/10/25/23:47} and \ref{08/10/25/23:48}, so that: There exist a constant $\delta>0$ and a sequence $\{y_{n}\}_{n\in \mathbb{N}}$ in $\mathbb{R}^{d}$ such that, putting $\widetilde{\psi}_{n}(x,t)=\psi_{n}(x+y_{n},t)$, we have \begin{equation}\label{10/03/10/18:02} \sup_{t \in [0,T]}\mathcal{L}^{d}\left( \left[ \left| \widetilde{\psi}_{n}(\cdot +y_{n}, t)\right| \ge \delta \right]\cap B_{1}(0) \right) \gtrsim 1 \quad \mbox{for all $n \in \mathbb{N}$}, \end{equation} where $B_{1}(0)=\{x \in \mathbb{R}^{d} | |x|< 1\}$ and the implicit constant is independent of $n$. Besides, we have by (\ref{10/03/08/15:29}) and (\ref{10/04/12/18:23}) that \begin{align} \label{10/03/10/18:31} &2i\displaystyle{\frac{\partial \widetilde{\psi}_{n}}{\partial t}} +\Delta \widetilde{\psi}_{n}+|\widetilde{\psi}_{n}|^{p-1}\widetilde{\psi}_{n}=0 \quad \mbox{in} \ \mathbb{R}^{d}\times [0,T], \\[6pt] \label{10/03/14/17:57} &\widetilde{\psi}_{n} \in C([0,T];\dot{H}^{1}(\mathbb {R}^{d})\cap L^{p+1}(\mathbb{R}^{d})) \quad \mbox{for all $n\in \mathbb{N}$}, \\[6pt] \label{10/03/10/18:33} & \sup_{t \in [0,T]}\left\| \widetilde{\psi}_{n}(t) \right\|_{L^{p+1}}= 1 , \quad \sup_{t \in [0,T]}\left\| \nabla \widetilde{\psi}_{n}(t) \right\|_{L^{2}}\le 1 \quad \mbox{for all $n \in \mathbb{N}$}. \end{align} We shall prove that $\{\widetilde{\psi}_{n} \}$ is an equicontinuous sequence in $C([0,T];L^{2}(\Omega))$ for any compact set $\Omega \subset \mathbb{R}^{d}$. Let $\chi$ be a function in $C_{c}^{\infty}(\mathbb{R}^{d})$ such that $\chi\equiv 1$ on $\Omega$. Then, we can verify that \begin{equation}\label{10/03/09/14:13} \begin{split} &\left\| \chi \widetilde{\psi}_{n}(t)- \chi \widetilde{\psi}_{n}(s) \right\|_{L^{2}}^{2} \\[6pt] &=\int_{s}^{t}\frac{d}{dt'} \left\|\chi\widetilde{\psi}_{n}(t') -\chi\widetilde{\psi}_{n}(s) \right\|_{L^{2}}^{2}dt' \\[6pt] &=2\Re{ \int_{s}^{t} \!\! \int_{\mathbb{R}^{d}} \! \overline{\chi(x)(\partial_{t}\widetilde{\psi}_{n})(x,t')} \left\{ \chi(x)\widetilde{\psi}_{n}(x,t')-\chi(x)\widetilde{\psi}_{n}(x,s) \right\}dxdt'} \quad \mbox{for all $s,t \in [0,T]$}. 
\end{split} \end{equation} This identity (\ref{10/03/09/14:13}), together with (\ref{10/03/10/18:31}), yields that \begin{equation}\label{08/12/19/15:09} \begin{split} &\left\| \widetilde{\psi}_{n}(t)-\widetilde{\psi}_{n}(s)\right\|_{L^{2}(\Omega)}^{2} \\[6pt] &\le \left\| \chi \widetilde{\psi}_{n}(t)- \chi \widetilde{\psi}_{n}(s) \right\|_{L^{2}}^{2} \\[6pt] &\le 2 |t-s| \left\| \chi \partial_{t} \widetilde{\psi}_{n} \right\|_{L^{\infty}([0,T];H^{-1})} \left\| \chi \widetilde{\psi}_{n} \right\|_{L^{\infty}([0,T];H^{1})} \\[6pt] &\le 2 |t-s| \left\{ \left\| \chi \Delta \widetilde{\psi}_{n} \right\|_{L^{\infty}([0,T];H^{-1})}+ \left\| \chi (|\widetilde{\psi}_{n}|^{p-1}\widetilde{\psi}_{n}) \right\|_{L^{\infty}([0,T];H^{-1})} \right\} \\[6pt] &\qquad \times \left\{ \left\| \chi \widetilde{\psi}_{n}\right\|_{L^{\infty}([0,T];L^{2})} + \left\| (\nabla \chi) \widetilde{\psi}_{n }\right\|_{L^{\infty}([0,T];L^{2})} + \left\| \chi \nabla \widetilde{\psi}_{n}\right\|_{L^{\infty}([0,T];L^{2})} \right\} \\[6pt] &\hspace{264pt}\mbox{for all $s,t \in [0,T]$}. \end{split} \end{equation} We consider the terms in the first parentheses on the right-hand side of (\ref{08/12/19/15:09}). They are estimated by the Sobolev embedding and the H\"older inequality as follows: \begin{equation}\label{10/03/09/14:30} \begin{split} &\left\| \chi \Delta \widetilde{\psi}_{n} \right\|_{L^{\infty}([0,T];H^{-1})} + \left\| \chi (|\widetilde{\psi}_{n}|^{p-1}\widetilde{\psi}_{n}) \right\|_{L^{\infty}([0,T];H^{-1})} \\[6pt] &= \left\| \Delta (\chi \widetilde{\psi}_{n})-(\Delta \chi)\widetilde{\psi}_{n} -2\nabla \chi \nabla \widetilde{\psi}_{n} \right\|_{L^{\infty}([0,T];H^{-1})} + \left\| \chi (|\widetilde{\psi}_{n}|^{p-1}\widetilde{\psi}_{n}) \right\|_{L^{\infty}([0,T];H^{-1})} \\[6pt] &\lesssim \left\| \nabla (\chi \widetilde{\psi}_{n}) \right\|_{L^{\infty}([0,T];L^{2})} + \left\| (\Delta \chi)\widetilde{\psi}_{n} \right\|_{L^{\infty}([0,T];L^{\frac{p+1}{p}})} + \left\| \nabla \chi \nabla \widetilde{\psi}_{n} \right\|_{L^{\infty}([0,T];L^{2})} \\[6pt] &\qquad \qquad +\left\| \chi (|\psi_{n}|^{p-1}\widetilde{\psi}_{n}) \right\|_{L^{\infty}([0,T];L^{\frac{p+1}{p}})} \\[6pt] &\le \left\| \nabla \chi \right\|_{L^{\frac{2(p+1)}{p-1}}} \left\| \widetilde{\psi}_{n} \right\|_{L^{\infty}([0,T];L^{p+1})} + \left\| \chi \right\|_{L^{\infty}} \left\| \nabla \widetilde{\psi}_{n} \right\|_{L^{\infty}([0,T];L^{2})} \\[6pt] &\qquad + \left\| \Delta \chi \right\|_{L^{\frac{p+1}{p-1}}} \left\| \widetilde{\psi}_{n} \right\|_{L^{\infty}([0,T];L^{p+1})} + \left\| \nabla \chi \right\|_{L^{\infty}} \left\| \nabla \widetilde{\psi}_{n} \right\|_{L^{\infty}([0,T];L^{2})} \\[6pt] & \qquad \qquad + \left\| \chi \right\|_{L^{\infty}}\left\|\widetilde{\psi}_{n} \right\|_{L^{\infty}([0,T];L^{p+1})}^{p+1}, \end{split} \end{equation} where the implicit constant depends only on $d$ and $p$. On the other hand, the terms in the second parentheses on the right-hand side of (\ref{08/12/19/15:09}) are estimated by the H\"older inequality as follows: \begin{equation}\label{10/03/09/16:08} \begin{split} & \left\| \chi \widetilde{\psi}_{n}\right\|_{L^{\infty}([0,T];L^{2})} + \left\| (\nabla \chi) \widetilde{\psi}_{n }\right\|_{L^{\infty}([0,T];L^{2})} + \left\| \chi \nabla \widetilde{\psi}_{n}\right\|_{L^{\infty}([0,T];L^{2})} \\[6pt] &\le 2 \left\| \chi \right\|_{W^{1,\frac{2(p+1)}{p-1}}} \left\| \widetilde{\psi}_{n} \right\|_{L^{\infty}([0,T];L^{p+1})} + \left\|\chi \right\|_{L^{\infty}} \left\| \nabla \widetilde{\psi}_{n} \right\|_{L^{\infty}([0,T];L^{2})}. 
\end{split} \end{equation} Hence, combining these estimates (\ref{08/12/19/15:09})--(\ref{10/03/09/16:08}) with (\ref{10/03/10/18:33}), we obtain that
\begin{equation}\label{10/01/26/18:23} \left\| \widetilde{\psi}_{n}(t)-\widetilde{\psi}_{n}(s)\right\|_{L^{2}(\Omega)}^{2} \lesssim |t-s|, \end{equation}
where the implicit constant depends only on $d$, $p$ and $\Omega$ ($\chi$ is determined by $\Omega$). Thus, we see that $\{\widetilde{\psi}_{n}\}$ is equicontinuous in $C([0,T];L^{2}(\Omega))$ for every compact set $\Omega \subset \mathbb{R}^{d}$.
\par The estimate (\ref{10/01/26/18:23}), with the help of the Gagliardo-Nirenberg inequality and (\ref{10/03/10/18:33}), also shows that $\{\widetilde{\psi}_{n} \}$ is an equicontinuous sequence in $C([0,T];L^{q}(\Omega))$ for every compact set $\Omega \subset \mathbb{R}^{d}$ and every $q \in [2,2^{*})$. Hence, (\ref{10/03/10/18:33}) and the Ascoli-Arzel\`a theorem, together with (\ref{10/03/10/18:02}) and (\ref{10/03/10/18:31}), yield that: There exist a subsequence of $\{\widetilde{\psi}_{n}\}$ (still denoted by the same symbol) and a nontrivial function $\psi_{\infty} \in L^{\infty}([0,\infty);\dot{H}^{1}(\mathbb{R}^{d})\cap L^{p+1}(\mathbb{R}^{d}))$ such that
\begin{align} \label{10/01/27/12:25} &\lim_{n\to \infty}\widetilde{\psi}_{n}= \psi_{\infty} \quad \mbox{strongly in $L^{\infty}([0,T];L_{loc}^{q}(\mathbb{R}^{d}))$} \quad \mbox{for all $q \in [2, 2^{*})$}, \\[6pt] \label{10/03/09/16:53} &\lim_{n\to \infty}\nabla \widetilde{\psi}_{n} = \nabla \psi_{\infty} \quad \mbox{weakly* in $L^{\infty}([0,T];L^{2}(\mathbb{R}^{d}))$}, \\[6pt] \label{10/03/09/18:11} &2i\frac{\partial \psi_{\infty}}{\partial t}+\Delta \psi_{\infty}+|\psi_{\infty}|^{p-1}\psi_{\infty}=0 \quad \mbox{in $\mathcal{D}'([0,\infty);\dot{H}^{-1}(\mathbb{R}^{d})+ L^{\frac{p+1}{p}}(\mathbb{R}^{d}))$}. \end{align}
Finally, we shall prove the formula (\ref{10/03/10/18:47}). The Lebesgue dominated convergence theorem shows that for any $\varepsilon>0$ there exists $R>0$ such that
\begin{equation}\label{10/04/16/18:52} \int_{|x|\le R}|\psi_{\infty}(0)|^{\frac{d(p-1)}{2}}\,dx \ge (1-\varepsilon) \int_{\mathbb{R}^{d}}|\psi_{\infty}(0)|^{\frac{d(p-1)}{2}}\,dx. \end{equation}
Combining this estimate with the strong convergence (\ref{10/01/27/12:25}), we have that
\begin{equation}\label{10/04/16/19:00} \lim_{n\to \infty} \int_{|x|\le R} |\widetilde{\psi}_{n}(x,0)|^{\frac{d(p-1)}{2}}\,dx \ge (1-\varepsilon) \int_{\mathbb{R}^{d}}|\psi_{\infty}(0)|^{\frac{d(p-1)}{2}}\,dx. \end{equation}
Hence, it follows from (\ref{10/04/16/19:00}) and the relation
\begin{equation}\label{10/04/16/16:33} \int_{|x|\le R}|\widetilde{\psi}_{n}(x,0)|^{\frac{d(p-1)}{2}}\,dx = \int_{|x-\lambda_{n}y_{n}|\le \lambda_{n}R}|\psi(x,t_{n})|^{\frac{d(p-1)}{2}}\,dx \end{equation}
that the desired result holds.
\end{proof}
Finally, we shall give the proof of Proposition \ref{10/01/26/14:49}.
\begin{proof}[Proof of Proposition \ref{10/01/26/14:49}] We use the same assumptions, definitions and notation as in the proof of Proposition \ref{08/11/02/22:52}, except for the condition (\ref{10/01/26/15:36}).
\par We find from the proof of Proposition \ref{08/11/02/22:52} that: For all $T>0$, there exists a subsequence of $\{\psi_{n}\}$ in $C([0,T];H^{1}(\mathbb{R}^{d}))$ (still denoted by the same symbol) with the following properties: \begin{align} \label{10/03/11/15:27} &\sup_{t\in [0,T]}\left\|\psi_{n}(t) \right\|_{L^{p+1}}=1 \quad \mbox{for all $n \in \mathbb{N}$}, \\[6pt] &\label{10/03/11/15:39} \sup_{t \in [0,T]}\left\| \nabla \psi_{n}(t) \right\|_{L^{2}}\le 1 \quad \mbox{for all $n \in \mathbb{N}$}, \\[6pt] &\label{10/03/11/15:36} 2i\frac{\partial \psi_{n}}{\partial t}+\Delta \psi_{n}+|\psi_{n}|^{p-1}\psi_{n}=0 \quad \mbox{in $\mathbb{R}^{d}\times [0,T]$}. \end{align} For such a subsequence $\{\psi_{n}\}$, we define renormalized functions $\Phi_{n}^{RN}$ by \begin{equation}\label{10/01/26/13:15} \Phi_{n}^{RN}(x,t)=\psi_{n}(x,t)-e^{\frac{i}{2}t\Delta}\psi_{n}(x,0), \quad n \in \mathbb{N}. \end{equation} Here, it is worth while noting that \begin{align} \label{10/03/15/18:18} &\Phi^{RN}_{n} \in C([0,T];H^{1}(\mathbb{R}^{d})) \quad \mbox{for all $n \in \mathbb{N}$}, \\[6pt] \label{10/03/11/16:00} &\Phi_{n}^{RN}(t) =\frac{i}{2}\int_{0}^{t}e^{\frac{i}{2}(t-t')\Delta}|\psi_{n}(t')|^{p-1}\psi_{n}(t')\,dt' \quad \mbox{for all $n \in \mathbb{N}$}. \end{align} We shall show that \begin{equation} \label{08/11/21/17:55} \sup_{n\in \mathbb{N}}\left\|\Phi^{RN}_{n}\right\|_{L^{\infty}([0,T];H^{1})} \le C_{T} \end{equation} for some constant $C_{T}>0$ depending only on $d$, $p$ and $T$. Applying the Strichartz estimate to the formula (\ref{10/03/11/16:00}), and using (\ref{10/03/11/15:27}), we obtain the following two estimates: \begin{equation}\label{10/03/11/16:22} \begin{split} \left\|\Phi^{RN}_{n}\right\|_{L^{\infty}([0,T];L^{2})} &\lesssim \left\||\psi_{n}|^{p-1}\psi_{n} \right\|_{L^{\frac{4(p+1)}{4(p+1)-d(p-1)}}([0,T];L^{\frac{p+1}{p}})} \\[6pt] & \le T^{1-\frac{d(p-1)}{4(p+1)}}\left\| \psi_{n} \right\|_{L^{\infty}([0,T];L^{p+1})}^{p} \\[6pt] &\le T^{1-\frac{d(p-1)}{4(p+1)}}, \end{split} \end{equation} \begin{equation}\label{10/03/11/17:29} \begin{split} \left\| \nabla \Phi^{RN}_{n} \right\|_{L^{\infty}([0,T];L^{2})} &\lesssim \left\| \nabla \left( |\psi_{n}|^{p-1}\psi_{n}\right) \right\|_{L^{\frac{4(p+1)}{4(p+1)-d(p-1)}}([0,T];L^{\frac{p+1}{p}})} \\ & \le T^{1-\frac{d(p-1)}{2(p+1)}}\left\| \psi_{n}\right\|_{L^{\infty}([0,T];L^{p+1})}^{p-1}\left\| \nabla \psi_{n}\right\|_{L^{\frac{4(p+1)}{d(p-1)}}([0,T];L^{p+1})} \\ &\le T^{1-\frac{d(p-1)}{2(p+1)}}\left\| \nabla \psi_{n}\right\|_{L^{\frac{4(p+1)}{d(p-1)}}([0,T];L^{p+1})}, \end{split} \end{equation} where the implicit constants depend only on $d$ and $p$. Therefore, for the desired estimate (\ref{08/11/21/17:55}), it suffices to show that \begin{equation}\label{08/11/19/23:00} \sup_{n\in \mathbb{N}} \left\| \nabla \psi_{n} \right\|_{L^{\frac{4(p+1)}{d(p-1)}}([0,T];L^{p+1})} \le D_{T} \end{equation} for some constant $D_{T}>0$ depending only on $d$, $p$ and $T$. Here, note that the pair $(p+1,\ \frac{4(p+1)}{d(p-1)})$ is admissible. In order to prove (\ref{08/11/19/23:00}), we introduce an admissible pair $(q,r)$ with $q=p+2$ if $d=1,2$ and $q=\frac{1}{2}(p+1+2^{*})$ if $d\ge 3$, so that $p+1<q<2^{*}$. 
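Here $r$ is the exponent determined by admissibility, that is, $\frac{2}{r}=d\big(\frac{1}{2}-\frac{1}{q}\big)$, with the same convention as for the pair $\big(p+1,\frac{4(p+1)}{d(p-1)}\big)$ above. Let us also record why the requirement $p+1<q$ matters: the interpolation exponent $\frac{q(p-1)}{(q-2)(p+1)}$ appearing in (\ref{10/04/16/17:45}) below lies in $(0,1)$, since
\[
(q-2)(p+1)-q(p-1)=2\{q-(p+1)\}>0 ,
\]
and this is exactly what allows the Young inequality to absorb the last term in (\ref{10/03/11/17:56}) below.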
Then, it follows from the integral equation for $\psi_{n}$ and the Strichartz estimate that \begin{equation}\label{10/04/16/17:45} \begin{split} &\left\|\nabla \psi_{n} \right\|_{L^{r}\left([0,T];L^{q}\right)} \\[6pt] &\lesssim \left\| \nabla \psi_{n}(0)\right\|_{L^{2}} + \left\| \nabla \left( |\psi_{n}|^{p-1} \psi_{n}\right) \right\|_{L^{\frac{4(p+1)}{4(p+1)-d(p-1)}}([0,T]; L^{\frac{p+1}{p}})} \\[6pt] &\lesssim \left\| \nabla \psi_{n}(0)\right\|_{L^{2}} + \left\| \psi_{n} \right\|_{ L^{\frac{2(p+1)(p-1)}{d+2-(d-2)p}}([0,T]; L^{p+1})}^{p-1} \left\| \nabla \psi_{n} \right\|_{L^{\frac{4(p+1)}{d(p-1)}}([0,T];L^{p+1})} \\[6pt] &\le \left\| \nabla \psi_{n}(0)\right\|_{L^{2}} + T^{\frac{d+2-(d-2)p}{2(p+1)}}\left\| \psi_{n} \right\|_{L^{\infty}([0,T];L^{p+1})}^{p-1} \left\|\nabla \psi_{n} \right\|_{L^{\infty}([0,T];L^{2})}^{1-\frac{q(p-1)}{(q-2)(p+1)}} \left\| \nabla \psi_{n} \right\|_{L^{r}([0,T];L^{q})}^{\frac{q(p-1)}{(q-2)(p+1)}}, \end{split} \end{equation} where the implicit constants depend only on $d$ and $p$. Combining (\ref{10/04/16/17:45}) with (\ref{10/03/11/15:27}) and (\ref{10/03/11/15:39}), we obtain that \begin{equation}\label{10/03/11/17:56} \left\|\nabla \psi_{n} \right\|_{L^{r}\left([0,T];L^{q}\right)} \lesssim 1 + T^{\frac{d+2-(d-2)p}{2(p+1)}}\left\| \nabla \psi_{n} \right\|_{L^{r}([0,T];L^{q})}^{\frac{q(p-1)}{(q-2)(p+1)}}, \end{equation} where the implicit constant depends only on $d$ and $p$. Since $0<\frac{q(p-1)}{(q-2)(p+1)}<1$, this estimate, together with the Young inequality ($ab \le \frac{1}{\gamma}a^{\gamma}+\frac{1}{\gamma'}b^{\gamma'}$ for $\gamma,\gamma'>1$ with $\frac{1}{\gamma}+\frac{1}{\gamma'}=1$), yields that \begin{equation}\label{08/11/19/22:29} \left\| \nabla \psi_{n} \right\|_{L^{r}([0,T];L^{q})} \lesssim 1+T^{\frac{(q-2)\{d+2-(d-2)p\}}{4\{ q-(p+1)\}}} , \end{equation} where the implicit constant depends only on $d$ and $p$. Hence, interpolating (\ref{10/03/11/15:39}) and (\ref{08/11/19/22:29}), we obtain (\ref{08/11/19/23:00}), so that (\ref{08/11/21/17:55}) holds. \par Next, we shall show that $\{\Phi^{RN}_{n}\}$ is an equicontinuous sequence in $C([0,T];L^{q}(\mathbb{R}^{d}))$ for all $q \in [2,2^{*})$. Differentiating the both sides of (\ref{10/03/11/16:00}), we obtain that \begin{equation}\label{10/03/15/16:24} \begin{split} \partial_{t}\Phi^{RN}_{n}(t) &= \frac{i}{2}|\psi_{n}(t)|^{p-1}\psi_{n}(t) -\frac{1}{4}\Delta \int_{0}^{t} e^{\frac{i}{2}(t-t')\Delta}|\psi_{n}(t')|^{p-1}\psi_{n}(t')\,dt' \\[6pt] &=\frac{i}{2}|\psi_{n}(t)|^{p-1}\psi_{n}(t)+\frac{i}{2}\Delta \Phi^{RN}_{n}(t). \end{split} \end{equation} This formula (\ref{10/03/15/16:24}) and the H\"older inequality show that \begin{equation}\label{10/03/15/16:36} \begin{split} &\left\|\Phi^{RN}_{n}(t)-\Phi^{RN}_{n}(s)\right\|_{L^{2}}^{2} \\[6pt] & =\int_{s}^{t}\frac{d}{dt'} \left\|\Phi^{RN}_{n}(t')-\Phi^{RN}_{n}(s) \right\|_{L^{2}}^{2}dt' \\[6pt] &= 2\Re \int_{s}^{t}\int_{\mathbb{R}^{d}} \overline{\partial_{t}\Phi^{RN}_{n}(x,t')}\left\{ \Phi^{RN}_{n}(x,t')-\Phi^{RN}_{n}(x,s)\right\}dxdt' \\[6pt] &\lesssim |t-s|\left\|\psi_{n} \right\|_{L^{\infty}([0,T];L^{p+1})}^{p} \left\| \Phi^{RN}_{n} \right\|_{L^{\infty}([0,T];L^{p+1})} + |t-s|\left\| \nabla \Phi^{RN}_{n}\right\|_{L^{\infty}([0,T];L^{2})}^{2} , \end{split} \end{equation} where the implicit constant depends only on $d$ and $q$. 
Combining this estimate with (\ref{10/03/11/15:27}) and (\ref{08/11/21/17:55}), we obtain \begin{equation}\label{10/03/15/16:54} \left\|\Phi^{RN}_{n}(t)-\Phi^{RN}_{n}(s)\right\|_{L^{2}}^{2} \lesssim |t-s| \quad \mbox{for all $s,t \in [0,T]$}, \end{equation} where the implicit constant depends only on $d$, $q$ and $T$. Moreover, the Gagliardo-Nirenberg inequality, together with (\ref{08/11/21/17:55}) and (\ref{10/03/15/16:54}), shows that $\{\Phi^{RN}_{n}\}$ is an equicontinuous sequence in $C([0,T];L^{q}(\mathbb{R}^{d}))$ for all $q \in [2,2^{*})$. \par Now, we are in a position to prove the main property of $\{\Phi^{RN}_{n}\}$, the alternatives (i) and (ii). We first suppose that \begin{equation}\label{10/03/15/17:02} \lim_{n\to \infty}\sup_{t\in [0,T]}\left\| \Phi_{n}^{RN}(t)\right\|_{L^{\frac{d}{2}(p-1)}}=0. \end{equation} Then, a direct calculation immediately yields (\ref{08/11/20/17:56}). \par On the other hand, when \begin{equation}\label{10/03/15/17:04} A:=\limsup_{n\to \infty}\sup_{t\in [0,T]}\left\| \Phi_{n}^{RN}(t) \right\|_{L^{p+1}}>0, \end{equation} we can take a subsequence $\{\Phi^{RN}_{n}\}$ (still denoted by the same symbol) such that \begin{equation}\label{08/11/21/18:29} \sup_{t \in [0,T]}\left\|\Phi^{RN}_{n}(t) \right\|_{L^{p+1}} \ge \frac{A}{2} \qquad \mbox{for all $n \in \mathbb{N}$}. \end{equation} This inequality (\ref{08/11/21/18:29}) and the uniform bound (\ref{08/11/21/17:55}) enable us to apply Lemmata \ref{08/10/25/23:47} and \ref{08/10/25/23:48}. Thus, we find that there exist a constant $\delta>0$ and a sequence $\{y_{n}\}_{n\in \mathbb{N}}$ in $\mathbb{R}^{d}$ such that, putting $\widetilde{\Phi}^{RN}_{n}(x,t)=\Phi^{RN}_{n}(x+y_{n},t)$, we have \begin{equation}\label{10/03/15/17:57} \sup_{t \in [0,T]}\mathcal{L}^{d}\left( \left[ \left|\widetilde{\Phi}^{RN}_{n}(t)\right| > \frac{\delta}{2} \right] \cap B_{1}(0) \right)>C \end{equation} for some constant $C>0$ being independent of $n$. Besides, we find by (\ref{08/11/21/17:55}) and (\ref{10/03/15/16:54}) that: $\{\widetilde{\Phi}^{RN}_{n}\}$ is \begin{align} \label{10/04/16/18:13} &\mbox{a uniformly bounded sequence in $C([0,T];H^{1}(\mathbb{R}^{d}))$, and} \\[6pt] &\mbox{an equicontinuous sequence in $C([0,T];L^{q}(\mathbb{R}^{d}))$ for all $q\in [2,2^{*})$}. \end{align} Hence, the Ascoli-Arzel\'a theorem, together with (\ref{10/03/15/17:57}), gives us that: There exist a subsequence of $\{\widetilde{\Phi}^{RN}_{n}\}$ (still denoted by the same symbol) and a nontrivial function $\Phi \in L^{\infty}([0,\infty);H^{1}(\mathbb{R}^{d}))$ such that \begin{align} \label{08/12/18/16:24} &\lim_{n\to \infty}\widetilde{\Phi}^{RN}_{n} = \Phi \quad \mbox{weakly* in $L^{\infty}([0,T];H^{1}(\mathbb{R}^{d}))$}, \\[6pt] \label{08/12/17/18:26} & \lim_{n\to \infty}\widetilde{\Phi}^{RN}_{n}= \Phi \quad \mbox{strongly in $C([0,T];L_{loc}^{q}(\mathbb{R}^{d}))$ \quad for all $q\in[2,2^{*})$}. \end{align} It remains to prove (\ref{10/03/15/19:52})--(\ref{08/11/20/17:57}). We find by (\ref{10/03/11/15:27}) that: There exists a function $F \in L^{\infty}([0,\infty);L^{\frac{p+1}{p}}(\mathbb{R}^{d}))$ such that \begin{equation}\label{08/12/18/16:23} \lim_{n\to \infty}|\psi_{n}|^{p-1}\psi_{n} = F \quad \mbox{weakly* in $L^{\infty}([0,T];L^{\frac{p+1}{p}}(\mathbb{R}^{d}))$}. \end{equation} Then, it follows from the equation (\ref{10/03/15/16:24}) and (\ref{08/12/18/16:24}) that \begin{equation}\label{08/12/18/16:25} 2i\frac{\partial \Phi}{\partial t} +\Delta \Phi + F=0. 
\end{equation} Here, if $F$ were trivial, then $\Phi$ would also be trivial, since $\displaystyle{\Phi(0)=\lim_{n\to \infty}\Phi^{RN}_{n}(0)=0}$; this contradicts the nontriviality of $\Phi$. Therefore, $F$ is nontrivial.
\par We can prove the formula (\ref{08/11/20/17:57}) in a way similar to the proof of (\ref{10/03/10/18:47}) (cf. the estimates (\ref{10/04/16/18:52})--(\ref{10/04/16/16:33})).
\end{proof}
\appendix
\section{Generalized virial identity}\label{08/10/19/14:57}
The proofs of Theorems \ref{08/05/26/11:53}, \ref{08/06/12/9:48} and \ref{08/04/21/9:28} are based on a generalization of the virial identity. To state it, we first introduce a non-negative function $w$ in $W^{3,\infty}([0,\infty))$, which is a variant of the function in \cite{Nawa8, Ogawa-Tsutsumi, Ogawa-Tsutsumi2}:
\begin{equation}\label{10/02/27/22:14} w(r)= \left\{ \begin{array}{lcl} r &\mbox{if}& 0\le r <1, \\[6pt] r-(r-1)^{\frac{d}{2}(p-1)+1} &\mbox{if}& 1\le r < 1+\left( \frac{2}{d(p-1)+2} \right)^{\frac{2}{d(p-1)}}, \\[6pt] \mbox{smooth and $w'\le 0$} &\mbox{if}& 1+\left( \frac{2}{d(p-1)+2} \right)^{\frac{2}{d(p-1)}} \le r <2, \\[6pt] 0 &\mbox{if}& 2\le r . \end{array} \right. \end{equation}
Since $w$ is determined by $d$ and $p$ only, we may assume that
\begin{equation}\label{09/09/26/11:16} K:=\left\| w \right\|_{W^{3,\infty}} \lesssim 1, \end{equation}
where the implicit constant depends only on $d$ and $p$. Using this $w$, we define
\begin{equation}\label{10/02/27/22:18} \vec{w}_{R}(x)=(\vec{w}_{R}^{1}(x),\ldots , \vec{w}_{R}^{d}(x)) :=\frac{x}{|x|}Rw\left(\frac{|x|}{R}\right), \quad R>0, \ x \in \mathbb{R}^{d} \end{equation}
and
\begin{equation}\label{10/02/27/22:19} W_{R}(x):=2R \int_{0}^{|x|}w\left( \frac{r}{R}\right)\,dr, \quad R>0, \ x \in \mathbb{R}^{d}. \end{equation}
These functions have the following properties:
\begin{lemma}\label{08/10/23/18:31} Assume that $d\ge 1$ and $2+\frac{4}{d}\le p+1 <2^{*}$. Then, we have that
\begin{align} \label{09/09/25/13:24} &\left| \vec{w}_{R}(x) \right|^{2} \le W_{R}(x) \quad \mbox{for all $R>0$ and $x \in \mathbb{R}^{d}$}, \\[6pt] \label{08/10/23/18:42} &\left\| \vec{w}_{R} \right\|_{L^{\infty}} \le 2R \quad \mbox{for all $R>0$}, \\[6pt] \label{08/12/29/12:48} &\left\| W_{R}\right\|_{L^{\infty}} \le 8R^{2} \quad \mbox{for all $R>0$}, \\[6pt] \label{08/10/23/18:35} &\left\|\nabla \vec{w}_{R}^{j} \right\|_{L^{\infty}} \le 2dK \quad \mbox{for all $R>0$ and $j=1, \ldots, d$}, \\[6pt] \label{08/04/20/16:54} &\|\Delta ({\rm div}\, \vec{w}_{R}) \|_{L^{\infty}} \le \frac{10d^{2}K}{R^{2}} \quad \mbox{for all $R>0$}, \end{align}
where $K$ is the constant given in (\ref{09/09/26/11:16}).
\end{lemma}
\begin{proof}[Proof of Lemma \ref{08/10/23/18:31}] Since $w(0)=0$, $w\ge 0$ and $w'\le 1$, we have that
\[ \left| \vec{w}_{R}(x) \right|^{2} =R^{2}w^{2} \left( \frac{|x|}{R}\right) = 2R \int_{0}^{|x|} w\left(\frac{r}{R}\right)w'\left(\frac{r}{R}\right)\,dr \le W_{R}(x), \]
which shows (\ref{09/09/25/13:24}).
\par Now, we can easily verify that
\begin{equation}\label{09/09/25/16:01} \left\| w \right\|_{L^{\infty}}\le 2. \end{equation}
This estimate (\ref{09/09/25/16:01}) immediately yields (\ref{08/10/23/18:42}):
\begin{equation}\label{10/05/16/10:34} \left| \vec{w}_{R}(x)\right|=R\left| w\left( \frac{|x|}{R}\right)\right|\le 2R. \end{equation}
Moreover, (\ref{09/09/25/16:01}), together with the fact ${\rm supp}\,w \subset [0,2]$, gives (\ref{08/12/29/12:48}). Indeed, we have
\begin{equation}\label{10/05/16/10:38} \left|W_{R}(x)\right| \le 2R \int_{0}^{2R} w\left(\frac{r}{R}\right)\,dr \le 4R^{2} \left\| w \right\|_{L^{\infty}} \le 8R^{2}.
\end{equation} We shall prove (\ref{08/10/23/18:35}). A simple calculation shows that \begin{equation}\label{09/09/26/10:14} \begin{split} \left| \partial_{k} \vec{w}_{R}^{j} (x) \right| &= \left| \frac{\delta_{jk}|x|^{2}-x_{j}x_{k}}{|x|^{3}}Rw\left( \frac{|x|}{R}\right) +\frac{x_{j}x_{k}}{|x|^{2}}w'\left( \frac{|x|}{R}\right) \right| \\[6pt] &\le \frac{R}{|x|} \left| w\left( \frac{|x|}{R}\right)\right| +\left| w'\left( \frac{|x|}{R}\right) \right|. \end{split} \end{equation} Since we have \begin{equation}\label{09/09/26/10:15} \frac{R}{|x|} \left| w\left( \frac{|x|}{R}\right)\right| \le \left\| w \right\|_{L^{\infty}} \quad \mbox{for all $x \in \mathbb{R}^{d}$}, \end{equation} the estimate (\ref{09/09/26/10:14}), together with (\ref{09/09/26/11:16}), leads to (\ref{08/10/23/18:35}). \par The estimate (\ref{08/04/20/16:54}) follows from (\ref{09/09/26/11:16}) and the identity \begin{equation}\label{10/05/16/10:46} \begin{split} \Delta ({\rm div}\, \vec{w}_{R})(x)&= \frac{1}{R^{2}}w'''\left(\frac{|x|}{R}\right) +\frac{2(d-1)}{R|x|}w''\left(\frac{|x|}{R}\right) \\[6pt] & \quad +\frac{(d-1)(d-3)}{|x|^{2}}w'\left(\frac{|x|}{R}\right)-\frac{(d-1)(d-3)}{|x|^{3}}Rw\left(\frac{|x|}{R}\right). \end{split} \end{equation} \end{proof} \begin{lemma}\label{10/03/02/17:47} Assume that $d\ge 1$ and $2+\frac{4}{d}\le p+1 <2^{*}$. Then, for any $m>0$, $C>0$ and $f \in L^{2}(\mathbb{R}^{d})$, there exists $R_{0}>0$ such that \begin{equation}\label{10/05/16/10:47} \frac{C}{R^{2}} (W_{R},|f|^{2})<m \quad \mbox{for all $R\ge R_{0}$}. \end{equation} \end{lemma} \begin{proof}[Proof of Lemma \ref{10/03/02/17:47}] For any $m>0$, $C>0$ and $f \in L^{2}(\mathbb{R}^{d})$, we can take $R_{0}'>0$ such that \begin{equation}\label{10/05/16/10:48} \int_{|x|\ge R_{0}'}|f(x)|^{2}\,dx <\frac{m}{16C}. \end{equation} Hence, it follows from (\ref{08/12/29/12:48}) that \begin{equation}\label{10/03/02/18:32} \frac{C}{R^{2}}\int_{|x|\ge R_{0}'}W_{R}(x)|f(x)|^{2}\,dx \le 8C\int_{|x|\ge R_{0}'}|f(x)|^{2}\,dx <\frac{m}{2} \quad \mbox{for all $R>0$}. \end{equation} Moreover, we have by the definition of $W_{R}$ (see (\ref{10/02/27/22:19})) that \begin{equation}\label{10/03/02/18:16} \frac{C}{R^{2}}\int_{|x|< R_{0}'}W_{R}(x)|f(x)|^{2}\,dx \le C\frac{2R_{0}'K}{R}\left\|f\right\|_{L^{2}}^{2} <\frac{m}{2} \quad \mbox{for all $R> \frac{4CKR_{0}'\|f\|_{L^{2}}^{2}}{m}$}. \end{equation} Combining (\ref{10/03/02/18:32}) and (\ref{10/03/02/18:16}), we obtain the desired result. \end{proof} Now, we introduce our generalized virial identity: \begin{lemma}[Generalized virial identity] \label{08/10/23/18:45} Assume that $d\ge 1$ and $2+\frac{4}{d}\le p+1 < 2^{*}$. Then, we have \begin{equation}\label{08/03/29/19:05} \begin{split} (W_{R}, |\psi(t)|^{2}) &=(W_{R},|\psi_{0}|^{2}) +2t \Im{(\vec{w}_{R} \cdot \nabla \psi_{0},\psi_{0})}+2\int_{0}^{t}\int_{0}^{t'}\mathcal{K}(\psi(t''))\,dt''dt' \\[6pt] &\qquad -2\int_{0}^{t}\int_{0}^{t'}\mathcal{K}^{R}(\psi(t''))\,dt''dt' \\[6pt] &\qquad -\frac{1}{2}\int_{0}^{t}\int_{0}^{t'}(\Delta ({\rm div}\,{\vec{w}_{R}}), |\psi(t'')|^{2})\,dt''dt' \qquad \mbox{for all $R>0$}. 
\end{split} \end{equation} Here, $\mathcal{K}^{R}$ is defined by \begin{equation}\label{09/05/13/16:38} \mathcal{K}^{R}(f) = \int_{\mathbb{R}^{d}} \rho_{1}(x)|\nabla f(x)|^{2} + \rho_{2}(x) \left| \frac{x}{|x|} \cdot \nabla f(x)\right|^{2} - \rho_{3}(x)|f(x)|^{p+1} dx, \quad f \in H^{1}(\mathbb{R}^{d}), \end{equation} where \begin{align} \label{10/04/11/16:09} \rho_{1}(x)&:=1 -\frac{R}{|x|}w \left( \frac{|x|}{R}\right), \\[6pt] \label{10/04/11/16:10} \rho_{2}(x)&:=\frac{R}{|x|}w \left(\frac{|x|}{R}\right)-w'\left( \frac{|x|}{R}\right), \\[6pt] \label{10/04/11/16:11} \rho_{3}(x)&:=\frac{p-1}{2(p+1)}\left\{ d-w'\left( \frac{|x|}{R}\right)-\frac{d-1}{|x|}Rw\left( \frac{|x|}{R}\right) \right\} . \end{align} \end{lemma} \begin{remark}\label{09/09/25/16:47} If $d=1$ or $\psi$ is radially symmetric, then we have \begin{equation}\label{10/05/16/11:02} \mathcal{K}^{R}(\psi)=\int_{\mathbb{R}^{d}}\rho_{0}(x)|\nabla \psi(x)|^{2}-\rho_{3}(x)|\psi(x)|^{p+1}\, dx, \end{equation} where \begin{equation}\label{10/05/16/11:03} \rho_{0}(x):=1-w'\left( \frac{|x|}{R}\right) =\rho_{1}(x)+\rho_{2}(x). \end{equation} \end{remark} \begin{proof}[Proof of Lemma \ref{08/10/23/18:45}] By the formula (B.17) in \cite{Nawa8}, we immediately obtain (\ref{08/03/29/19:05}). \end{proof} In the next lemma, we give several properties of the weight functions $\rho_{1}$, $\rho_{2}$, $\rho_{3}$ and $\rho_{0}$: \begin{lemma}\label{08/04/20/0:27} Assume that $d\ge 1$ and $2+\frac{4}{d}\le p+1 < 2^{*}$. Then, for all $R>0$, we have the followings: \begin{align} \label{08/12/29/14:16} & {\rm supp}\,{\rho_{j}}=\{x\in \mathbb{R}^{d} \bigm| |x|\ge R \} \quad \mbox{for all $R>0$ and $j=0,1,2,3$}, \\[6pt] \label{08/04/20/12:50} &\inf_{x\in \mathbb{R}^{d}}\rho_{j}(x) \ge 0 \quad \mbox{for all $R>0$ and $j=0,1,2,3$}, \\[6pt] \label{08/04/20/12:51} &\rho_{0}(x)=1 \quad \mbox{if $|x| \ge 2R$}, \\[6pt] \label{10/03/01/15:06} &\rho_{3}(x)=\frac{d(p-1)}{2(p+1)} \quad \mbox{if $|x| \ge 2R$}, \\[6pt] \label{08/12/29/12:53} &\left\| \rho_{j} \right\|_{L^{\infty}}\le K_{j} \quad \mbox{for all $j=0,1,2,3$}, \\[6pt] \label{08/04/20/13:06} &\sup_{x\in \mathbb{R}^{d}} \max\left\{ -\frac{x}{|x|}\nabla \sqrt{\rho_{3}(x)},\ 0 \right\} \le \frac{K_{3}'}{R}, \\[6pt] \label{08/04/20/16:07} &\inf_{|x|\ge R}\frac{\rho_{0}(x)}{\rho_{3}(x)} \ge K_{4}, \end{align} where $K_{j}$ ($j=1,2,3$), $K'_{3}$ and $K_{4}$ are some constants independent of $R$. \end{lemma} \begin{proof}[Proof of Lemma \ref{08/04/20/0:27}] We give proofs of (\ref{08/04/20/13:06}) and (\ref{08/04/20/16:07}) only. \par \par We first prove (\ref{08/04/20/13:06}). A direct calculation shows that \begin{equation}\label{09/09/26/14:14} \frac{x}{|x|}\nabla \sqrt{\rho_{3}(x)} = \frac{ -\frac{1}{R}w''\left( \frac{|x|}{R}\right) -\frac{d-1}{|x|}w'\left( \frac{|x|}{R}\right) +\frac{(d-1)R}{|x|^{2}}w\left( \frac{|x|}{R}\right) } {\sqrt{\frac{2(p+1)}{p-1}}\sqrt{d-w'\left( \frac{|x|}{R}\right)-\frac{(d-1)R}{|x|}w\left( \frac{|x|}{R}\right)}} . \end{equation} Since ${\rm supp}\,\rho_{3}= \{x \in \mathbb{R}^{d} \bigm| |x|\ge R\}$ (see (\ref{08/12/29/14:16})), it suffices to consider the case $|x|\ge R$. 
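For the computations below it is convenient to record the explicit expressions which follow directly from the definition (\ref{10/02/27/22:14}) of $w$ on the region $1\le s< 1+\big( \frac{2}{d(p-1)+2} \big)^{\frac{2}{d(p-1)}}$, where $s:=|x|/R$:
\[
w(s)=s-(s-1)^{\frac{d}{2}(p-1)+1},\qquad
\rho_{0}(x)=\Big( \tfrac{d}{2}(p-1)+1 \Big)(s-1)^{\frac{d}{2}(p-1)},\qquad
1-\frac{R}{|x|}w\Big(\frac{|x|}{R}\Big)=\frac{(s-1)^{\frac{d}{2}(p-1)+1}}{s},
\]
\[
d-w'(s)-\frac{d-1}{s}\,w(s)
=(s-1)^{\frac{d}{2}(p-1)}\left( \frac{d}{2}(p-1)+d-\frac{d-1}{s} \right).
\]
These identities underlie (\ref{09/09/26/15:04}) and (\ref{10/03/06/16:18}) below.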
When $R\le |x|<R\left\{ 1+\left( \frac{2}{d(p-1)+2}\right)^{\frac{2}{d(p-1)}}\right\}$, we have from (\ref{09/09/26/14:14}) that \begin{equation}\label{09/09/26/15:04} \begin{split} & \max\left\{ -\frac{x}{|x|}\nabla \sqrt{\rho_{3}(x)},\ 0 \right\} \\[6pt] &=\frac{ \max\left\{\left( \frac{|x|}{R}-1\right)^{\frac{d}{2}(p-1)-1} \left( \frac{d^{2}(p-1)^{2}+4d(p-1)+4(d-1)}{4R} +\frac{(d-1)R}{|x|^{2}} -\frac{d(d-1)(p-1)+2}{2|x|} \right) ,\ 0\right\} } { \sqrt{\frac{2(p+1)}{p-1}}\left(\frac{|x|}{R}-1 \right)^{\frac{d}{4}(p-1)} \sqrt{\frac{d}{2}(p-1)+d-\frac{(d-1)R}{|x|}} } \\[6pt] &\le \left( \frac{|x|}{R}-1\right)^{\frac{d}{4}(p-1)-1}\frac{ \left\{ \frac{d^{2}(p-1)^{2}+4d(p-1)+4(d-1)}{4R} +\frac{(d-1)R}{|x|^{2}} +\frac{d(d-1)(p-1)+2}{2|x|} \right\} } { \sqrt{\frac{2(p+1)}{p-1}} \sqrt{\frac{d}{2}(p-1)+1} } \\[6pt] &\lesssim \frac{1}{R} , \end{split} \end{equation} where the implicit constant depends only on $d$ and $p$. \\ On the other hand, when $R\left\{ 1+\left( \frac{2}{d(p-1)+2}\right)^{\frac{2}{d(p-1)}}\right\}\le |x|$, we have \begin{equation} w'\left(\frac{|x|}{R}\right)\le 0, \qquad \frac{R}{|x|}w\left(\frac{|x|}{R}\right)\le 1 \end{equation} and therefore we obtain from (\ref{09/09/26/14:14}) that \begin{equation}\label{10/05/28/15:47} \begin{split} \max\left\{ -\frac{x}{|x|}\nabla \sqrt{\rho_{3}(x)},\ 0 \right\} &\le \frac{ \left| \frac{1}{R}w''\left( \frac{|x|}{R}\right) \right| +\left| \frac{d-1}{|x|}w'\left( \frac{|x|}{R}\right)\right| + \left| \frac{(d-1)R}{|x|^{2}}w\left( \frac{|x|}{R}\right)\right| } {\sqrt{\frac{2(p+1)}{p-1}}\sqrt{d-\frac{(d-1)R}{|x|}w\left( \frac{|x|}{R}\right)}} \\[6pt] &\lesssim \frac{1}{R}\left\| w \right\|_{W^{2,\infty}} , \end{split} \end{equation} where the implicit constant depends only on $d$ and $p$. Thus, we see that (\ref{08/04/20/13:06}) holds. \par Next, we prove (\ref{08/04/20/16:07}). The starting point of the proof is the identity: \begin{equation} \label{08/04/20/16:16} \frac{p-1}{2(p+1)}\frac{\rho_{0}(x)}{\rho_{3}(x)}=\frac{\rho_{0}(x)}{\rho_{0}(x)+(d-1)\left\{ 1-\frac{R}{|x|}w\left( \frac{|x|}{R}\right)\right\}} \quad \mbox{for all $x \in \mathbb{R}^{d}$ with $|x|\ge R$}. \end{equation} When $R \le |x| < R\left\{ 1+\left(\frac{2}{d(p-1)+2}\right)^{\frac{2}{d(p-1)}}\right\}$, we have from (\ref{08/04/20/16:16}) that \begin{equation}\label{10/03/06/16:18}\begin{split} \frac{p-1}{2(p+1)} \frac{\rho_{0}(x)}{\rho_{3}(x)}&= \frac{\rho_{0}(x)}{\rho_{0}(x)+(d-1)\frac{R}{|x|}\left(\frac{|x|}{R}-1 \right)^{\frac{d}{2}(p-1)+1}} \\[6pt] &= \frac{ \left(\frac{|x|}{R}-1 \right)^{\frac{d}{2}(p-1)}}{\left(\frac{|x|}{R}-1 \right)^{\frac{d}{2}(p-1)}+\frac{2(d-1)}{d(p-1)+2}\frac{R}{|x|} \left(\frac{|x|}{R}-1 \right)^{\frac{d}{2}(p-1)+1}} \\[6pt] &= \frac{1}{1+\frac{2(d-1)}{d(p-1)+2}\left(1-\frac{R}{|x|} \right)} \\[6pt] &\ge \frac{1+\left(\frac{2}{d(p-1)+2} \right)^{\frac{2}{d(p-1)}}} {1+\frac{d(p+1)}{d(p-1)+2} \left( \frac{2}{d(p-1)+2} \right)^{\frac{2}{d(p-1)}}}. \end{split} \end{equation} On the other hand, when $R\left\{ 1+\left( \frac{2}{d(p-1)+2} \right)^{\frac{2}{d(p-1)}}\right\} \le |x|$, we have \begin{equation}\label{10/05/28/16:47} 1\le \rho_{0}(x)\le 1+K, \qquad 0\le 1-\frac{R}{|x|}w\left( \frac{|x|}{R}\right)\le 1, \end{equation} and therefore we obtain from (\ref{08/04/20/16:16}) that \begin{equation}\label{10/03/06/16:56} \frac{p-1}{2(p+1)}\frac{\rho_{0}(x)}{\rho_{3}(x)} \ge \frac{1}{K+d} \quad \mbox{for all $x \in \mathbb{R}^{d}$ with $R\left\{ 1+\left( \frac{2}{d(p-1)+2} \right)^{\frac{2}{d(p-1)}}\right\}\le |x|$}. 
\end{equation} Thus, we see that (\ref{08/04/20/16:07}) holds.
\end{proof}
\section{Compactness device I} \label{08/10/03/15:12}
We recall the following sequence of lemmata.
\begin{lemma}[Fr\"ohlich, Lieb and Loss \cite{Frohlich-Leib-Loss}, Nawa \cite{Nawa1}] \label{08/3/28/20:21} Let $1< \alpha < \beta < \gamma < \infty$ and let $f$ be a measurable function on $\mathbb{R}^{d}$ with
\[ \|f \|_{L^{\alpha}}^{\alpha} \le C_{\alpha}, \quad C_{\beta}\le \|f\|_{L^{\beta}}^{\beta}, \quad \|f\|_{L^{\gamma}}^{\gamma}\le C_{\gamma} \]
for some positive constants $C_{\alpha}, C_{\beta}, C_{\gamma}$. Then, we have
\[ \mathcal{L}^{d}\left( \Big[ |f| > \eta \Big]\right) > \frac{C_{\beta}}{2}\eta^{\beta} \quad \mbox{for all $0< \eta < \min \left\{1, \ \left( \frac{C_{\beta}}{4C_{\alpha}}\right)^{\frac{1}{\beta-\alpha}}, \ \left( \frac{C_{\beta}}{4C_{\gamma}}\right)^{\frac{1}{\gamma-\beta}} \right\}$}. \]
\end{lemma}
\begin{proof}[Proof of Lemma \ref{08/3/28/20:21}] Let $\eta$ be a constant satisfying
\begin{equation}\label{10/04/26/17:15} 0<\eta < \min \left\{1, \ \left( \frac{C_{\beta}}{4C_{\alpha}}\right)^{\frac{1}{\beta-\alpha}}, \ \left( \frac{C_{\beta}}{4C_{\gamma}}\right)^{\frac{1}{\gamma-\beta}} \right\}. \end{equation}
Then, we can easily verify that
\begin{equation}\label{10/04/26/17:02} \begin{split} C_{\beta} &= \int_{\left[|f|\le \eta \right]}|f(x)|^{\beta}\,dx + \int_{\left[\eta< |f|\le \frac{1}{\eta}\right]}|f(x)|^{\beta}\,dx + \int_{\left[\frac{1}{\eta}<|f|\right]}|f(x)|^{\beta}\,dx \\[6pt] &\le \eta^{\beta-\alpha} \int_{\left[|f|\le \eta \right]}|f(x)|^{\alpha}\,dx + \left(\frac{1}{\eta} \right)^{\beta} \mathcal{L}^{d}\left( \Big[ |f|> \eta \Big] \right) + \eta^{\gamma-\beta} \int_{\left[\frac{1}{\eta}< |f| \right]}|f(x)|^{\gamma}\,dx \\[6pt] &\le \frac{C_{\beta}}{2}+ \left(\frac{1}{\eta} \right)^{\beta} \mathcal{L}^{d}\left( \Big[ |f|> \eta \Big] \right). \end{split} \end{equation}
This estimate (\ref{10/04/26/17:02}) immediately gives us the desired result.
\end{proof}
\begin{lemma}[Lieb \cite{Lieb}, Nawa \cite{Nawa1}] \label{08/03/28/21:00} Let $1< q < \infty$, and let $f$ be a function in $W^{1,q}(\mathbb{R}^{d})$ with
\begin{equation}\label{10/04/26/17:34} \|\nabla f\|_{L^{q}}^{q}\le D_{1}, \quad \mathcal{L}^{d}\left( \Big[ |f| > \eta \Big] \right) \ge D_{2} \end{equation}
for some positive constants $D_{1}$, $D_{2}$ and $\eta$. We put
\begin{equation}\label{10/04/28/10:44} q^{\dagger}=\left\{ \begin{array}{ccl} \displaystyle{\frac{qd}{d-q}} &\mbox{if} &q<d, \\[6pt] 2q &\mbox{if}& q \ge d . \end{array} \right. \end{equation}
Then, there exists $y \in \mathbb{R}^{d}$ such that
\begin{equation}\label{10/04/28/10:45} \mathcal{L}^{d}\left( \left[ \left|f(\cdot + y)\right| \ge \frac{\eta}{2} \right] \cap B_{1}(0) \right) \gtrsim \left( \frac{1+\eta^{q}D_{2}}{1+D_{1}} \right)^{\frac{q}{q^{\dagger}-q}} , \end{equation}
where the implicit constant depends only on $d$ and $q$, and $B_{1}(0)$ is the ball in $\mathbb{R}^{d}$ with center $0$ and radius $1$.
\end{lemma}
\begin{proof}[Proof of Lemma \ref{08/03/28/21:00}] Let $f$ be a function in $W^{1,q}(\mathbb{R}^{d})$ satisfying (\ref{10/04/26/17:34}). Put
\begin{equation}\label{10/04/28/10:50} g(x)=\max\left\{ |f(x)|-\frac{\eta}{2}, 0 \right\}. \end{equation}
Then, we easily verify that
\begin{align} \label{10/04/28/10:55} &g \in W^{1,q}(\mathbb{R}^{d}), \qquad \left\| \nabla g \right\|_{L^{q}(\mathbb{R}^{d})}^{q} \le D_{1}, \\[6pt] \label{10/04/28/10:56} &{\rm supp}\,{g}\subset \left[ |f|\ge \frac{\eta}{2}\right].
\end{align} We first claim that \begin{equation}\label{10/04/26/17:35} \int_{Q_{y}}|\nabla g(x)|^{q}\,dx < \left( 1+ \frac{D_{1}}{\left\|g\right\|_{L^{q}}^{q}} \right) \int_{Q_{y}}|g(x)|^{q}\,dx \quad \mbox{for some $y \in \mathbb{R}^{d}$}, \end{equation} where $Q_{y}$ denotes the cube in $\mathbb{R}^{d}$ with the center $y$ and side length $\frac{2}{\sqrt{d}}$. We note that $Q_{y}$ is inscribed in $B_{1}(y)$ for all $y \in \mathbb{R}^{d}$. \par Let $\{y_{n}\}_{n\in \mathbb{N}}$ be a sequence in $\mathbb{R}^{d}$ such that \begin{equation}\label{10/04/26/17:43} \bigcup_{n \in \mathbb{N}} Q_{y_{n}}=\mathbb{R}^{d}, \qquad \overset{\circ}{Q_{y_{m}}}\cap \overset{\circ}{Q_{y_{n}}}=\emptyset \quad \mbox{for $m\neq n$} \end{equation} Supposing the contrary that (\ref{10/04/26/17:35}) fails, we have \begin{equation}\label{10/04/28/15:48} \int_{Q_{y_{n}}}|\nabla g(x)|^{q}\,dx \ge \left( 1+\frac{D_{1}}{\left\|g \right\|_{L^{q}}^{q}} \right) \int_{Q_{y_{n}}}|g(x)|^{q}\,dx \quad \mbox{for all $n \in \mathbb{N}$}. \end{equation} Then, summing (\ref{10/04/28/15:48}) over all $n\in \mathbb{N}$ yields that \begin{equation}\label{10/04/28/16:00} D_{1}\ge \int_{\mathbb{R}^{d}}|\nabla g(x)|^{q}\,dx \ge \left( 1+\frac{D_{1}}{\left\|g \right\|_{L^{q}}^{q}} \right) \int_{\mathbb{R}^{d}}|g(x)|^{q}\,dx >D_{1}, \end{equation} which is a contradiction. Hence, (\ref{10/04/26/17:35}) holds. \par Now, it follows from (\ref{10/04/26/17:35}) that: There exists $y_{0} \in \mathbb{N}$ such that \begin{equation}\label{10/04/28/16:34} \int_{Q_{y_{0}}}|\nabla g (x)|^{q}\,dx +|g(x)|^{q}\,dx < \left( 2 +\frac{D_{1}}{1+\left\|g \right\|_{L^{q}}^{q}} \right) \int_{Q_{y_{0}}}|g (x)|^{q}\,dx. \end{equation} On the other hand, the Sobolev embedding leads us to that \begin{equation}\label{10/04/28/16:41} \left( \int_{Q_{y_{0}}}|g (x)|^{q^{\dagger}}\,dx,\right)^{\frac{q}{q^{\dagger}}} \lesssim \int_{Q_{y_{0}}}|\nabla g (x)|^{q}\,dx +|g(x)|^{q}\,dx, \end{equation} where $q^{\dagger}$ is the exponent defined in (\ref{10/04/28/10:44}), and the implicit constant depends only on $d$ and $q$. Combining these estimates (\ref{10/04/28/16:34}) and (\ref{10/04/28/16:41}), we obtain that \begin{equation}\label{10/04/28/16:47} \begin{split} \left( \int_{Q_{y_{0}}}|g (x)|^{q^{\dagger}}\,dx \right)^{\frac{q}{q^{\dagger}}} &\lesssim \left( 2 + \frac{D_{1}}{1+\left\|g\right\|_{L^{q}}^{q}} \right) \int_{Q_{y_{0}}}|g (x)|^{q}\,dx \\[6pt] &\lesssim \left( 2 + \frac{D_{1}}{1+\left\|g\right\|_{L^{q}}^{q}} \right) \mathcal{L}\left( Q_{y_{0}}\cap {\rm supp}\,g \right)^{1-\frac{q}{q^{\dagger}}} \left( \int_{Q_{y_{0}}}|g (x)|^{q^{\dagger}}\,dx \right)^{\frac{q}{q^{\dagger}}}, \end{split} \end{equation} where the implicit constant depends only on $d$ and $q$. Hence, we see that \begin{equation}\label{10/04/28/20:51} \left( \frac{1+\left\|g\right\|_{L^{q}}^{q}}{ D_{1}+2+2\left\|g\right\|_{L^{q}}^{q}} \right)^{\frac{q^{\dagger}}{q^{\dagger}-q}} \lesssim \mathcal{L}\left( Q_{y_{0}}\cap {\rm supp}\,g \right), \end{equation} where the implicit constant depends only on $d$ and $q$. Here, it follows from the definition of $g$ (see (\ref{10/04/28/10:50})) and the assumption (\ref{10/04/26/17:34}) that \begin{equation}\label{10/04/28/16:59} \left\| g \right\|_{L^{q}}^{q} \ge \int_{[|f|\ge \eta]}\left( |f(x)|-\frac{\eta}{2}\right)^{q} \\[6pt] \ge \left( \frac{\eta}{2} \right)^{q} D_{2} . 
\end{equation} Since ${\rm supp}\,g \subset \left[|f|\ge \frac{\eta}{2}\right]$ (see (\ref{10/04/28/10:56})), the estimate (\ref{10/04/28/20:51}), together with (\ref{10/04/28/16:59}), gives us the desired result.
\end{proof}
\begin{lemma}[Lieb \cite{Lieb}, Nawa \cite{Nawa1}] \label{08/09/27/22:43} Let $1<q<\infty$, and let $\{f_{n}\}_{n \in \mathbb{N}}$ be a uniformly bounded sequence in $W^{1,q}(\mathbb{R}^{d})$ with
\[ \inf_{n\in \mathbb{N}}\mathcal{L}^{d}\left( \Big[ |f_{n}| > \delta \Big]\right) \ge C \]
for some constants $\delta>0$ and $C>0$. Then, there exist a sequence $\{y_{n}\}_{n\in \mathbb{N}}$ in $\mathbb{R}^{d}$ and a nontrivial function $f \in W^{1,q}(\mathbb{R}^{d})$ such that
\begin{equation} \label{10/04/28/21:17} f_{n}(\cdot +y_{n}) \to f \quad \mbox{weakly in $W^{1,q}(\mathbb{R}^{d})$}. \end{equation}
\end{lemma}
\begin{lemma}[Brezis and Lieb \cite{Brezis-Lieb}, Nawa \cite{Nawa1}] \label{08/03/28/21:21} Let $0<q < \infty$ and let $\{f_{n}\}_{n \in \mathbb{N}}$ be a uniformly bounded sequence in $L^{q}(\mathbb{R}^{d})$ with $f_{n}\to f$ a.e. in $\mathbb{R}^{d}$ for some $f \in L^{q}(\mathbb{R}^{d})$. Then, we have
\begin{equation}\label{08/09/28/12:26} \lim_{n\to \infty} \int_{\mathbb{R}^{d}}\Big||f_{n}(x)|^{q}-|f_{n}(x)-f(x)|^{q}-|f(x)|^{q} \Big|\,dx = 0 . \end{equation}
\end{lemma}
\section{Compactness device II} \label{08/10/07/9:00}
We recall the following sequence of lemmas; all their proofs can be found in Appendix C of \cite{Nawa8}.
\begin{lemma}[Nawa \cite{Nawa8}] \label{08/10/25/23:47} Let $1< \alpha < \beta < \gamma < \infty$, $I\subset \mathbb{R}$ and let $u \in C(I;L^{\alpha}(\mathbb{R}^{d}) \cap L^{\gamma}(\mathbb{R}^{d}))$ with
\[ \sup_{t\in I}\|u(t) \|_{L^{\alpha}}^{\alpha} \le C_{\alpha}, \quad C_{\beta}\le \sup_{t\in I}\|u(t)\|_{L^{\beta}}^{\beta}, \quad \sup_{t\in I}\|u(t)\|_{L^{\gamma}}^{\gamma}\le C_{\gamma} \]
for some positive constants $C_{\alpha}$, $C_{\beta}$ and $C_{\gamma}$. Then, we have
\[ \sup_{t\in I} \mathcal{L}^{d}\left( \Big[ |u(t)| > \delta \Big]\right) >C \]
for some constants $C>0$ and $\delta >0$ depending only on $\alpha$, $\beta$, $\gamma$, $C_{\alpha}$, $C_{\beta}$ and $C_{\gamma}$.
\end{lemma}
\begin{lemma}[Nawa \cite{Nawa8}] \label{08/10/25/23:48} Let $1\le q < \infty$, $I \Subset \mathbb{R}$ and let $u$ be a function in $C(I;W^{1,q}(\mathbb{R}^{d}))$ such that
\begin{align*} &\sup_{t\in I}\|\nabla u(t)\|_{L^{q}}\le C_{q}, \\[6pt] &\sup_{t\in I}\mathcal{L}^{d}\left( \Big[ |u(t)| > \delta \Big] \right) \ge C \end{align*}
for some positive constants $C_{q}$, $\delta$ and $C$. Then, there exists $y \in \mathbb{R}^{d}$ such that
\[ \sup_{t\in I}\mathcal{L}^{d}\left( \left[ \left|u(\cdot + y,t)\right| > \frac{\delta}{2} \right] \cap B_{1}(0) \right)\ge C' \]
for some constant $C'>0$ depending only on $C_{q}$, $C$, $\delta$ and $d$, where $B_{1}(0)$ is the ball in $\mathbb{R}^{d}$ with center $0$ and radius $1$.
\end{lemma}
\begin{lemma}[Nawa \cite{Nawa8}] \label{08/10/28/08:14} Let $0<q < \infty$, $I \Subset \mathbb{R}$ and let $\{u_{n}\}_{n \in \mathbb{N}}$ be an equicontinuous and uniformly bounded sequence in $C(I; L^{q}(\mathbb{R}^{d}))$ with $\displaystyle{\lim_{n\to \infty}u_{n}=u}$ a.e. in $\mathbb{R}^{d}\times I$ for some $u \in C(I; L^{q}(\mathbb{R}^{d}))$.
Then, we have that: \begin{align} \label{09/12/05/16:05} &\lim_{n\to \infty} \sup_{t \in I}\int_{\mathbb{R}^{d}}\Big||u_{n}(x,t)|^{q}-|u_{n}(x,t)-u(x,t)|^{q}-|u(x,t)|^{q} \Big|\,dx = 0, \\[6pt] \label{10/05/19/18:01} &\lim_{n\to \infty} \left\{ |u_{n}|^{q-1}u_{n}-|u_{n}-u|^{q-1}(u_{n}-u)-|u|^{q-1}u \right\}= 0 \quad \mbox{strongly in $L^{\infty}(I;L^{q'}(\mathbb{R}^{d}))$}. \end{align} \end{lemma} \section{Elementary inequalities}\label{09/03/06/16:43} In this section, we summarize elementary inequalities often used in this paper. We begin with the following obvious inequalities. \begin{lemma}\label{09/05/03/10:56} Let $1<q<\infty$. Then, we have that \begin{align} \label{09/09/27/21:09} &\left| \left| a \right|^{q}-\left| b \right|^{q} \right| \lesssim \left( |a|^{q-1}+|b|^{q-1}\right) |a-b| \qquad \mbox{for all $a,b \in \mathbb {C}$}, \\[6pt] \label{09/09/27/21:10} &\left| \left| a \right|^{q-1}a -\left| b \right|^{q-1}b \right| \lesssim \left( |a|^{q-1}+|b|^{q-1}\right) |a-b| \qquad \mbox{for all $a,b \in \mathbb {C}$}, \end{align} where the implicit constants depend only on $q$. \end{lemma} Besides the above, we also have the following inequality. \begin{lemma}\label{09/03/06/16:46} Let $0< q < \infty$, $L\in \mathbb{N}$ and $a_{1},\ldots, a_{L} \in \mathbb{C}$. Then, we have \[ \left| \left| \sum_{k=1}^{L}a_{k} \right|^{q}\sum_{l=1}^{L}a_{l} -\sum_{l=1}^{L}\left| a_{l} \right|^{q}a_{l} \right| \le C \sum_{l=1}^{L}\sum_{{1\le k \le L} \atop { k\neq l}} \left| a_{l} \right| \left| a_{k} \right|^{q} \] for some constant $C>0$ depending only on $q$ and $L$. \end{lemma} \begin{proof}[Proof of Lemma \ref{09/03/06/16:46}] Since \[ \left| \left| \sum_{k=1}^{L}a_{k} \right|^{q}\sum_{l=1}^{L}a_{l} -\sum_{l=1}^{L}\left| a_{l} \right|^{q}a_{l} \right| \le \sum_{l=1}^{L}\left| a_{l} \right| \left| \left| \sum_{k=1}^{L}a_{k} \right|^{q} -\left| a_{l} \right|^{q} \right|, \] it suffices to prove that \begin{equation}\label{09/03/06/17:15} \sum_{l=1}^{L}\left| a_{l} \right| \left| \left| \sum_{k=1}^{L}a_{k} \right|^{q} -\left| a_{l} \right|^{q} \right| \le C \sum_{l=1}^{L}\sum_{{1\le k \le L} \atop { k\neq l}} \left| a_{l} \right| \left| a_{k} \right|^{q} \end{equation} for some constant $C>0$ depending only on $q$ and $L$. Rearranging the sequence, we may assume that \[ |a_{1}|\le |a_{2}| \le \cdots \le |a_{L-1}|\le |a_{L}|. \] Then, we have \[ \begin{split} \sum_{l=1}^{L-1}\left| a_{l} \right| \left| \left| \sum_{k=1}^{L}a_{k} \right|^{q} -\left| a_{l} \right|^{q} \right| &\le \sum_{l=1}^{L-1}\left| a_{l} \right| \left\{ \left( \sum_{k=1}^{L} \left| a_{k} \right| \right)^{q} + \left| a_{l} \right|^{q} \right\} \\ &\le \sum_{l=1}^{L-1}\left| a_{l} \right| \left\{ \left(L|a_{L}|\right)^{q}+\left|a_{L}\right|^{q} \right\} \\ &=(L^{q}+1)\sum_{l=1}^{L-1}\left| a_{l} \right| |a_{L}|^{q} \\ &\le (L^{q}+1)\sum_{l=1}^{L}\sum_{{1\le k \le L}\atop {k\neq l}}|a_{l}|\left| a_{k} \right|^{q}. \end{split} \] It remains an estimate for the term \[ \left| a_{L} \right| \left| \left| \sum_{k=1}^{L}a_{k} \right|^{q} -\left| a_{L} \right|^{q} \right| . 
\] When $q>1$, we have by (\ref{09/09/27/21:09}) that \begin{equation*}\label{09/03/06/17:40} \begin{split} \left| a_{L} \right| \left| \left| \sum_{k=1}^{L}a_{k} \right|^{q} -\left| a_{L} \right|^{q} \right| &\lesssim \left| a_{L} \right| \left( \biggm| \sum_{k=1}^{L} a_{k} \biggm|^{q-1} + \left| a_{L} \right|^{q-1} \right) \left|\sum_{l=1}^{L-1} a_{l} \right| \\ &\le \left(L^{q-1}+1 \right)\left| a_{L} \right|^{q} \sum_{l=1}^{L-1}\left| a_{l} \right| \\ &\le \left(L^{q-1}+1 \right) \sum_{l=1}^{L}\sum_{{1\le k \le L} \atop { k\neq l}} \left| a_{l} \right| \left| a_{k} \right|^{q}, \end{split} \end{equation*} where the implicit constant depends only on $q$. Hence, in the case $q>1$, we have obtained the result. We consider the case $0<q\le 1$. When $|a_{L-1}|\ge |a_{L}|/L$, we have \[ \begin{split} \left| a_{L} \right| \left| \left| \sum_{l=1}^{L}a_{l} \right|^{q} -\left| a_{L} \right|^{q} \right| &\le \left| a_{L} \right| \left( L^{q}+1 \right) \left| a_{L}\right|^{q} \\ &\le L\left| a_{L-1} \right| \left( L^{q}+1 \right) \left| a_{L}\right|^{q} \\ &\le L\left(L^{q}+1 \right) \sum_{l=1}^{L}\sum_{{1\le k \le L} \atop { k\neq l}} \left| a_{l} \right| \left| a_{k} \right|^{q}. \end{split} \] On the other hand, when $\left| a_{L-1}\right|\le \left| a_{L} \right|/L$, one can see that \[ \frac{\left| a_{L} \right|}{L} \le \left| a_{L} \right| - \left| \sum_{l=1}^{L-1}a_{l} \right| \le \left| \sum_{l=1}^{L}a_{l}\right| \le \left| a_{L} \right| + \left| \sum_{l=1}^{L-1}a_{l} \right|. \] Moreover, we have by the concavity of the function $f(t)=t^{q}$ ($0<q\le 1$) that \[ \left(\left| a_{L} \right| +\left| \sum_{l=1}^{L-1}a_{l}\right|\right)^{q} -\left| a_{L} \right|^{q} \le \left| a_{L} \right|^{q} -\left( \left| a_{L} \right| -\left| \sum_{l=1}^{L-1}a_{l}\right| \right)^{q}. \] Hence, it follows from these estimates that \[ \begin{split} \left| \left| \sum_{l=1}^{L}a_{l}\right|^{q} -\left| a_{L} \right|^{q} \right| &\le \left| a_{L}\right|^{q} - \left( \left| a_{L}\right| - \left| \sum_{l=1}^{L-1}a_{l} \right|\right)^{q} \\ &\le q \left| \sum_{l=1}^{L-1}a_{l}\right| \left(\left| a_{L}\right| -\left| \sum_{l=1}^{L-1}a_{l} \right| \right)^{q-1} \\ & \le q L^{1-q} \left| \sum_{l=1}^{L-1}a_{l}\right| |a_{L}|^{q-1}, \end{split} \] where the last inequality uses $\left| a_{L}\right| -\left| \sum_{l=1}^{L-1}a_{l} \right| \ge \left| a_{L}\right|/L$ together with $q-1\le 0$; therefore \[ \left| a_{L} \right|\left| \left| \sum_{l=1}^{L}a_{l}\right|^{q} -\left| a_{L} \right|^{q} \right| \le q L^{1-q} \left| \sum_{l=1}^{L-1}a_{l}\right| |a_{L}|^{q} \le q L^{1-q} \sum_{l=1}^{L}\sum_{{1\le k \le L} \atop { k\neq l}} \left| a_{l} \right| \left| a_{k} \right|^{q}. \] Thus, we have completed the proof. \end{proof} \section{Concentration function} \label{08/10/07/9:02} In this section, $B_{R}(a)$ denotes an open ball in $\mathbb{R}^{d}$ with center $a \in \mathbb{R}^{d}$ and radius $R$: $$ B_{R}(a) := \bigl\{ x\in \mathbb{R}^d \bigm | |x-a|<R \bigr\}. $$ In order to trace the ``fake'' soliton, we prepare Proposition \ref{08/10/03/9:46} below. We begin with: \begin{lemma}[Nawa \cite{Nawa1-2}]\label{09/12/29/9:43} Let $0< T_{m} \le \infty$, $1<q<\infty$ and let $\rho$ be a non-negative function in $C([0,T_{m});L^{1}(\mathbb{R}^{d})\cap L^{q}(\mathbb{R}^{d}))$. Suppose that \begin{equation}\label{10/05/19/19:45} \int_{|x-y_{0}|<R_{0}}\rho(x,t_{0})\,dx > C_{0} \end{equation} for some $C_{0}>0$, $R_{0}>0$ and $(y_{0},t_{0})\in \mathbb{R}^{d}\times [0,T_{m})$.
Then, there exist $\theta >0$ and $\Gamma>0$, both depending only on $d$, $q$, $\rho$, $C_{0}$, $R_{0}$ and $t_{0}$, such that \[ \int_{|x-y|<R_{0}}\rho(x,t)\,dx > C_{0}, \quad \forall (y,t) \in B_{\Gamma}(y_0) \times (t_0-\theta, t_0 +\theta). \] \end{lemma} \begin{remark} When $\rho \colon [0,T_{m}) \to L^{1}(\mathbb{R}^{d})$ is uniformly continuous, we can take $\theta$ independent of $t_{0}$. Unfortunately, we are not in a case to assume the uniform continuity. \end{remark} \begin{proof}[Proof of Lemma \ref{09/12/29/9:43}] Put \[ \varepsilon_{0}=\int_{|x-y_{0}|<R_{0}}\rho(x,t_{0})\,dx -C_{0}. \] Then, we have by (\ref{10/05/19/19:45}) that $\varepsilon_{0}>0$. Since we have by the H\"older inequality that \begin{equation}\label{10/05/19/19:49} \int_{B_{R_{0}}(y_{0}) \setminus B_{R_{0}}(y)} \rho(x,t_{0}) \,dx \le \mathcal{L}^{d} \left( B_{R_{0}}(y_{0})\setminus B_{R_{0}}(y) \right)^{1-\frac{1}{q}} \|\rho (t_{0})\|_{L^{q}(\mathbb{R}^{d})}, \quad \forall y \in \mathbb{R}^{d}, \end{equation} we can take $\Gamma>0$, depending only on $d$, $q$, $\rho$, $C_{0}$, $R_{0}$ and $t_{0}$, such that \begin{equation}\label{09/12/29/10:59} \int_{B_{R_{0}}(y_{0}) \setminus B_{R_{0}}(y)} \rho(x,t_{0}) \,dx < \frac{\varepsilon_{0}}{3}, \quad \forall y \in B_{\Gamma}(y_{0}). \end{equation} Moreover, it follows from (\ref{09/12/29/10:59}) that \begin{equation}\label{10/05/19/19:53} \begin{split} & \int_{B_{R_{0}}(y_{0})}\rho(x,t_{0})\,dx \\[6pt] & = \int_{B_{R_{0}}(y)}\rho(x,t_{0})\,dx + \int_{B_{R_{0}}(y_{0}))\setminus B_{R_{0}}(y)} \hspace{-12pt}\rho(x,t_{0}) \,dx - \int_{B_{R_{0}}(y)\setminus B_{R_{0}}(y_{0})} \hspace{-12pt} \rho(x,t_{0}) \,dx \\[6pt] & < \int_{B_{R_{0}}(y)}\rho(x,t_{0})\,dx + \frac{\varepsilon_{0}}{3}, \qquad \forall y \in B_{\Gamma}(y_{0}), \end{split} \end{equation} so that we have \begin{equation}\label{08/10/05/15:10} \int_{B_{R_{0}}(y)}\rho(x,t_{0})\,dx > \int_{B_{R_{0}}(y_{0})}\rho(x,t_{0})\,dx - \frac{\varepsilon_{0}}{3}, \quad \forall y \in B_{\Gamma}(y_{0}). \end{equation} Now, since $\rho$ is non-negative, we have that, for any measurable set $\Omega \subset \mathbb{R}^{d}$, \begin{equation}\label{09/12/29/11:12} \int_{\Omega} \rho(x,t_{0})\,dx - \int_{\Omega}\rho(x,t)\,dx \le \left\|\rho(t) - \rho(t_{0})\right\|_{L^{1}(\mathbb{R}^{d})}, \quad \forall t \in [0,T_{m}). 
\end{equation} This estimate (\ref{09/12/29/11:12}), together with the continuity of $\rho \colon [0,T_{m})\to L^{1}(\mathbb{R}^{d})$, gives us that there exists $\theta >0$, depending only on $d$, $\rho$, $C_{0}$, $R_{0}$ and $t_{0}$, such that \begin{equation}\label{10/05/19/20:05} \begin{split} \int_{B_{R_{0}}(y)\cap B_{R_{0}}(y_{0})}\rho(x,t_{0})\,dx & - \int_{B_{R_{0}}(y)\cap B_{R_{0}}(y_{0})}\rho(x,t)\,dx < \frac{\varepsilon_{0}}{3},\\[8pt] & \qquad\qquad \forall (y,t)\in \mathbb{R}^d \times (t_0-\theta, t_0 + \theta), \end{split} \end{equation} and \begin{equation}\label{10/05/19/20:06} \begin{split} \int_{B_{R_{0}}(y)\setminus B_{R_{0}}(y_{0})} \rho(x,t_{0})\,dx & - \int_{B_{R_{0}}(y)\setminus B_{R_{0}}(y_{0})} \rho(x,t)\,dx < \frac{\varepsilon_{0}}{3},\\[8pt] & \qquad\qquad \forall (y,t)\in \mathbb{R}^d \times (t_0-\theta, t_0 + \theta), \end{split} \end{equation} Combining these estimates, we see that there exists $\theta >0$, depending only on $d$, $\rho$, $C_{0}$, $R_{0}$ and $t_{0}$, such that \begin{equation}\label{08/10/05/15:31} \int_{B_{R_{0}}(y)}\rho(x,t_{0})\,dx - \int_{B_{R_{0}}(y)} \rho(x,t)\,dx < \frac{2\varepsilon_{0}}{3},\\[8pt] \qquad \forall (y,t)\in \mathbb{R}^d \times (t_0-\theta, t_0 + \theta). \end{equation} Moreover, it follows from the estimates (\ref{08/10/05/15:10}) and (\ref{08/10/05/15:31}) that there exist $\theta >0$ and $\Gamma>0$, both depending only on $d$, $q$, $\rho$, $C_{0}$, $R_{0}$ and $t_{0}$, such that \[ \begin{split} \int_{|x-y|<R_{0}}\rho(x,t)\,dx & = \int_{B_{R_{0}}(y)}\rho(x,t_{0})\,dx - \left( \int_{B_{R_{0}}(y)}\rho(x,t_{0})\,dx - \int_{B_{R_{0}}(y)}\rho(x,t)\,dx \right)\\[6pt] & > \int_{B_{R_{0}}(y)}\rho(x,t_{0})\,dx - \frac{2\varepsilon_{0}}{3}\\[6pt] & \ge \int_{B_{R_{0}}(y_{0})}\rho(x,t_{0})\,dx - \varepsilon_{0} = C_{0} \end{split} \] {for all $y \in \mathbb{R}^{d}$ with $|y-y_{0}|<\Gamma$, and all $t \in [0,T_{m})$ with $|t-t_{0}|<\theta$}. Thus, we have proved the lemma. \end{proof} \begin{proposition}[Nawa \cite{Nawa1-2, Nawa1-3}]\label{08/10/03/9:46} Assume that $d\ge 1$. Let $0< T_{m} \le \infty$, $1<q<\infty$, and let $\rho$ be a non-negative function in $C([0,T_{m});L^{1}(\mathbb{R}^{d})\cap L^{q}(\mathbb{R}^{d}))$ with \begin{equation}\label{09/12/27/11:28} \|\rho(t)\|_{L^{1}(\mathbb{R}^{d})}=1 \quad \mbox{ for all $t \in [0,T_{m})$}. \end{equation} We put \[ A_{\rho} = \sup_{R>0}\liminf_{t \to T_{m}}\sup_{y\in \mathbb{R}^{d}} \int_{|x-y|\le R}\rho(x,t)\,dx. \] If $A_{\rho}>\frac{1}{2}$, then for all $\varepsilon \in (0,1)$, there exist $R_{\varepsilon}>0$, $T_{\varepsilon}>0$ and a continuous path $\gamma_{\varepsilon} \in C([T_{\varepsilon},T_{m}); \mathbb{R}^{d})$ such that \[ \int_{|x-\gamma_{\varepsilon}(t)|\le R} \rho(x,t) \,dx \ge (1-\varepsilon)A_{\rho} \quad \mbox{ for all $R\ge 3R_{\varepsilon}$ and $t \in [T_{\varepsilon}, T_{m})$}. \] \end{proposition} \begin{proof}[Proof of Proposition \ref{08/10/03/9:46}] We put $\eta:=2A_{\rho}-1$, so that $A_{\rho}=\frac{1}{2}+\frac{\eta}{2}$. Then, the assumptions (\ref{09/12/27/11:28}) and $A_{\rho}>\frac{1}{2}$ imply that $0< \eta \le 1$. We choose arbitrarily $\varepsilon \in (0,\frac{\eta}{1+\eta})$ and fix it. 
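As a simple numerical illustration of this choice of parameters (not needed for the argument): if, say, $A_{\rho}=\frac{3}{4}$, then \[ \eta=2A_{\rho}-1=\frac{1}{2}, \qquad \frac{\eta}{1+\eta}=\frac{1}{3}, \] so the construction below applies to every $\varepsilon \in (0,\frac{1}{3})$; values of $A_{\rho}$ closer to $\frac{1}{2}$ force correspondingly smaller admissible $\varepsilon$.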
It follows from the definition of $A_{\rho}$ that for our $\varepsilon>0$, there exist $R_{\varepsilon}>0$ and $T_{\varepsilon}>0$ with the following property: for all $t \in [T_{\varepsilon},T_{m})$, there exists a point $y_{\varepsilon}(t) \in \mathbb{R}^{d}$ such that \begin{equation}\label{08/10/05/22:09} \int_{|x-y_{\varepsilon}(t)| \le R_{\varepsilon}} \rho(x,t) \,dx > \left( 1-\varepsilon \right) A_{\rho}. \end{equation} For these $R_{\varepsilon}$ and $T_{\varepsilon}$, we shall construct a continuous path $\gamma_{\varepsilon} \colon [T_{\varepsilon}, T_{m}) \to \mathbb{R}^{d}$ satisfying \begin{equation}\label{10/01/31/16:23} \int_{|x-\gamma_{\varepsilon}(t)| \le 3R_{\varepsilon}} \rho(x,t) \,dx \ge \left( 1-\varepsilon \right) A_{\rho} \quad \mbox{for all $t \in [T_{\varepsilon},T_{m})$}, \end{equation} which will lead us to the desired conclusion. To this end, we define \begin{equation}\label{09/12/27/16:58} T_{\varepsilon}^{*} = \sup \left\{ t \in [T_{\varepsilon}, T_{m}) \biggm| \int_{|x-y_{\varepsilon}(T_{\varepsilon})| \le R_{\varepsilon}} \rho(x,t) \,dx > \left( 1- \varepsilon \right) A_{\rho} \right\}, \end{equation} where $y_{\varepsilon}(T_{\varepsilon})$ is a point found in (\ref{08/10/05/22:09}). By Lemma \ref{09/12/29/9:43}, it follows from (\ref{08/10/05/22:09}) that $T_{\varepsilon}^{*}> T_{\varepsilon}$. If $T_{\varepsilon}^{*}=T_{m}$, then there is nothing to prove: $\gamma_{\varepsilon}(t) \equiv y_{\varepsilon}(T_{\varepsilon})$ $(t\in [T_{\varepsilon}, T_{m}) )$ is the desired continuous path. Hence, we consider the case of $T_{\varepsilon}^{*}<T_{m}$. In this case, by Lemma \ref{09/12/29/9:43} again, we find that there exists a constant $\theta_{\varepsilon}(T_{\varepsilon}^{*})>0$ such that \begin{equation}\label{08/10/16/0:41} \int_{|x-y_{\varepsilon}(T_{\varepsilon}^{*})| \le R_{\varepsilon}} \rho(x,t) \,dx > \left( 1-\varepsilon \right) A_{\rho} \quad \mbox{ for all $ t \in (T_{\varepsilon}^{*}-\theta_{\varepsilon}(T_{\varepsilon}^{*}), T_{\varepsilon}^{*}+\theta_{\varepsilon}(T_{\varepsilon}^{*})) $ }. \end{equation} Here, we claim that \begin{equation}\label{09/12/27/19:35} \left( B_{R_{\varepsilon}}(y_{\varepsilon}(T_{\varepsilon}^{*})) \times \{t\} \right) \cap \left( B_{R_{\varepsilon}}(y_{\varepsilon}(T_{\varepsilon})) \times \{t\} \right) \neq \emptyset \quad\text{for}\quad t \in (T_{\varepsilon}^{*}-\theta_{\varepsilon}(T_{\varepsilon}^{*}) ,T_{\varepsilon}^{*}]. \end{equation} Indeed, suppose to the contrary that there exists $t_{1} \in (T_{\varepsilon}^{*}-\theta_{\varepsilon}(T_{\varepsilon}^{*}), T_{\varepsilon}^{*}]$ such that \begin{equation}\label{09/12/28/11:50} \left( B_{R_{\varepsilon}}(y_{\varepsilon}(T_{\varepsilon}^{*})) \times \{t_{1}\} \right) \cap \left( B_{R_{\varepsilon}}(y_{\varepsilon}(T_{\varepsilon})) \times \{t_{1}\} \right) = \emptyset . \end{equation} Then, it follows from the assumption (\ref{09/12/27/11:28}) and (\ref{09/12/28/11:50}) that \begin{equation}\label{09/12/27/20:02} \begin{split} 1 & \ge \int_{ B_{R_{\varepsilon}}(y_{\varepsilon}(T_{\varepsilon}^{*})) \cup B_{R_{\varepsilon}}(y_{\varepsilon}(T_{\varepsilon})) } \rho(x,t_{1}) \,dx\\[6pt] & = \int_{ B_{R_{\varepsilon}}(y_{\varepsilon}(T_{\varepsilon}^{*})) } \rho(x,t_{1}) \,dx + \int_{ B_{R_{\varepsilon}}(y_{\varepsilon}(T_{\varepsilon})) } \rho(x,t_{1})\,dx. \end{split} \end{equation} Moreover, (\ref{08/10/16/0:41}) and the definition of $T_{\varepsilon}^{*}$ yield that the right-hand side of (\ref{09/12/27/20:02}) is greater than $2(1-\varepsilon)A_{\rho}$.
Hence, we obtain \begin{equation*}\label{09/12/27/20:06} 1 > 2(1-\varepsilon)A_{\rho} = 2(1-\varepsilon) \left( \frac{1}{2}+\frac{\eta} {2} \right) = 1 + \eta - \varepsilon (1+\eta), \end{equation*} so that we have \begin{equation}\label{10/01/31/22:37} \varepsilon > \frac{\eta}{1+\eta}, \end{equation} which (\ref{10/01/31/22:37}) contradicts our choice of $\varepsilon < \dfrac{\eta}{1+\eta}$. Therefore (\ref{09/12/27/19:35}) holds valid. Now, for our $\varepsilon \in (0,\frac{\eta}{1+\eta})$, we define a path $\gamma_{\varepsilon}^{*} \colon [T_{\varepsilon},T_{\varepsilon}^{*}] \to \mathbb{R}^{d}$ by \begin{equation}\label{09/12/29/18:05} \gamma_{\varepsilon}^{*}(t) = \begin{cases} \quad y_{\varepsilon}(T_{\varepsilon}) & \mbox{if} \qquad T_{\varepsilon} \le t < T_{\varepsilon}^{*} - \theta_{\varepsilon}(T_{\varepsilon}^{*}), \\[8pt] \quad y_{\varepsilon}(T_{\varepsilon}^{*}) + \displaystyle\frac{T_{\varepsilon}^{*} - t}{\theta_{\varepsilon} (T_{\varepsilon}^{*})} \left( y_{\varepsilon}(T_{\varepsilon}) - y_{\varepsilon}(T_{\varepsilon}^{*}) \right) &\mbox{if} \qquad T_{\varepsilon}^{*} - \theta_{\varepsilon}(T_{\varepsilon}^{*}) \le t \le T_{\varepsilon}^{*},\\[8pt] \quad y_{\varepsilon}(T_{\varepsilon}^{*}) & \mbox{if} \qquad T_{\varepsilon}^{*} \le t < T_{\varepsilon}^{*}+\theta_{\varepsilon}(T_{\varepsilon}^{*}). \end{cases} \end{equation} Clearly we have $ \gamma_{\varepsilon}^{*} \in C([T_{\varepsilon},T_{\varepsilon}^{*}+\theta_{\varepsilon}(T_{\varepsilon}^{*})]; \mathbb{R}^{d}) $. Moreover, it follows from (\ref{09/12/27/19:35}) that \begin{equation}\label{10/05/19/21:10} B_{3R_{\varepsilon}}(\gamma_{\varepsilon}^{*}(t)) \supset B_{R_{\varepsilon}}(y_{\varepsilon}(T_{\varepsilon}^{*})) \cup B_{R_{\varepsilon}}(y_{\varepsilon}(T_{\varepsilon})) \quad \mbox{for all $t \in [T_{\varepsilon}, T_{\varepsilon}^{*}+\theta_{\varepsilon}(T_{\varepsilon}^{*})]$} \end{equation} which, together with the definition of $T_{\varepsilon}^{*}$, yields that \begin{equation}\label{08/10/16/1:12} \begin{split} \int_{|x-\gamma_{\varepsilon}^{*}(t)|\le 3R_{\varepsilon}} \rho(x,t) \,dx & \ge \int_{ B_{R_{\varepsilon}}(y_{\varepsilon}(T_{\varepsilon}^{*})) \cup B_{R_{\varepsilon}}(y_{\varepsilon}(T_{\varepsilon})) } \rho(x,t)\,dx\\[8pt] & \ge \left( 1-\varepsilon \right) A_{\rho} \qquad \mbox{ for all $t \in [T_{\varepsilon},T_{\varepsilon}^{*}+\theta_{\varepsilon}(T_{\varepsilon}^{*})]$}. \end{split} \end{equation} Here, we have enlarged the radius of the ball centered at the $\gamma_{\varepsilon}^{*}(t)$ for $t \in [T_{\varepsilon},T_{\varepsilon}^{*}+\theta_{\varepsilon}(T_{\varepsilon}^{*})]$. We shall extend the path $\gamma_{\varepsilon}^{*}$ to the whole interval $[T_{\varepsilon}, T_{m})$. We recall (\ref{08/10/05/22:09}), so that we have from Lemma \ref{09/12/29/9:43} that, for all $\tau \in (T_{\varepsilon}^*,T_{m})$, there exists $y_{\varepsilon}(\tau)$, and exists a constant $\theta_{\varepsilon}(\tau)>0$ depending only on $d$, $q$, $\rho$, $\varepsilon$ and $\tau$ such that \begin{equation}\label{09/12/30/14:56} \int_{|x-y_{\varepsilon}(\tau)|\le R_{\varepsilon}} \hspace{-6pt} \rho(x,t) \,dx > \left( 1-\varepsilon \right) A_{\rho} \quad \mbox{for all $t \in (\tau-\theta_{\varepsilon}(\tau), \tau+ \theta_{\varepsilon}(\tau)) \subset (T_{\varepsilon}^* , T_{m})$} \end{equation} Now we put $ I_{\tau} : = (\tau-\theta_{\varepsilon}(\tau), \tau+\theta_{\varepsilon}(\tau)) $. 
Then, we have \[ \bigcup_{\tau \in (T_{\varepsilon}^*,T_{m}) } I_{\tau} = ( T_{\varepsilon}^* , T_{m}), \] so that $\{I_{\tau} \}_{\tau \in (T_{\varepsilon}^*, T_{m})}$ is an open covering of $(T_{\varepsilon}^*, T_{m})$. Since $(T_{\varepsilon}^*, T_{m})$ is a Lindel\"of space, we can take a countable subcovering $\{ I_{\tau_{k}}\}_{k \in \mathbb{N}}$, where $\{\tau_{k}\}_{k \in \mathbb{N}}$ is some increasing sequence in $(T_{\varepsilon}^*,T_{m})$ such that $\tau_k < \tau_{k+1}$, arranged so that $I_{\tau_k}\cap I_{\tau_{k+2}}=\emptyset$ for $k=1,2, \cdots$. We note that one can take $y_{\varepsilon}(\tau_1) = y_{\varepsilon}(T_{\varepsilon}^*)$. We define $\gamma_{\varepsilon}:=\gamma_{\varepsilon}^*$ on $[T_{\varepsilon}, T_{\varepsilon}^*]$. If necessary, we carry out a procedure analogous to the construction of $\gamma_{\varepsilon}^*$ in \eqref{09/12/29/18:05} to define $\gamma_{\varepsilon}$ on $[T_{\varepsilon}^*, T_m)$: by writing $I_{\tau_k}\cap I_{\tau_{k+1}} = (a_k, b_k)$ and $y_k:= y_{\varepsilon}(\tau_k)$, \begin{equation*} \gamma_{\varepsilon}(t) = \begin{cases} \quad y_k & \mbox{if} \qquad t\in I_{\tau_k}\setminus (a_k, b_k), \\[6pt] \quad y_{k+1} + \displaystyle\frac{b_k - t}{\, b_k - a_k} \left( y_{k} - y_{k+1} \right) &\mbox{if} \qquad t\in (a_k, b_k), \\[6pt] \quad y_{k+1} & \mbox{if} \qquad t\in I_{\tau_{k+1}}\setminus (a_k, b_k). \end{cases} \end{equation*} Then, $\gamma_{\varepsilon} \colon [T_{\varepsilon},T_m)\to \mathbb{R}^{d}$ is continuous and satisfies \[ \int_{|x-\gamma_{\varepsilon}(t)|\le 3R_{\varepsilon}} \rho(x,t) \,dx \ge (1-\varepsilon) A_{\rho} \quad \mbox{for all $t \in [T_{\varepsilon}, T_{m})$}, \] since we have \begin{equation*} \left( B_{R_{\varepsilon}}(y_{k}) \times \{t\} \right) \cap \left( B_{R_{\varepsilon}}(y_{k+1}) \times \{t\} \right) \neq \emptyset \quad \mbox{for all $t \in (a_k, b_k)$}. \end{equation*} \end{proof} \begin{remark} Here, we remark that if $A_{\rho}=1$, then the proof becomes easier. In that case the following estimate holds: for all $\varepsilon>0$, \begin{equation}\label{09/12/27/20:12} \int_{ B_{R_{\varepsilon}}(y_{\varepsilon}(T_{\varepsilon}^{*})) \cap B_{R_{\varepsilon}}(y_{\varepsilon}(T_{\varepsilon})) } \rho(x,t) \,dx > 1-2\varepsilon \quad \mbox{ for all $ t \in (T_{\varepsilon}^{*}-\theta_{\varepsilon}(T_{\varepsilon}^{*}), T_{\varepsilon}^{*}] $ }. \end{equation} Indeed, we have from (\ref{08/10/16/0:41}) and the definition of $T_{\varepsilon}^{*}$ that \[ \begin{split} 1 & \ge \int_{ B_{R_{\varepsilon}}(y_{\varepsilon}(T_{\varepsilon}^{*})) \cup B_{R_{\varepsilon}}(y_{\varepsilon}(T_{\varepsilon})) } \rho(x,t) \,dx\\ & = \int_{ B_{R_{\varepsilon}}(y_{\varepsilon}(T_{\varepsilon}^{*})) } \rho(x,t) \,dx + \int_{ B_{R_{\varepsilon}}(y_{\varepsilon}(T_{\varepsilon})) } \rho(x,t) \,dx\\[6pt] & \hspace{60pt} - \int_{ B_{R_{\varepsilon}}(y_{\varepsilon}(T_{\varepsilon}^{*})) \cap B_{R_{\varepsilon}}(y_{\varepsilon}(T_{\varepsilon})) } \rho(x,t)\,dx\\[6pt] & > 2\left( 1-\varepsilon \right) - \int_{ B_{R_{\varepsilon}}(y_{\varepsilon}(T_{\varepsilon}^{*})) \cap B_{R_{\varepsilon}}(y_{\varepsilon}(T_{\varepsilon})) } \rho(x,t) \,dx \end{split} \] {for all $t \in (T_{\varepsilon}^{*}-\theta_{\varepsilon}(T_{\varepsilon}^{*}), T_{\varepsilon}^{*}]$}, which gives (\ref{09/12/27/20:12}). Thus we do not need to enlarge the radius of the ball centered at $\gamma_{\varepsilon}(t)$ for $t\in [T_{\varepsilon}, T_m)$.
\end{remark} \section{Variational problems}\label{08/05/13/15:57} We give the proofs of the variational problems, Proposition \ref{08/06/16/15:24} and Proposition \ref{08/05/13/15:13}. \par We begin with the proof of Proposition \ref{08/06/16/15:24}. \begin{proof}[Proof of Proposition \ref{08/06/16/15:24}] We first prove the relation (\ref{08/05/13/11:33}): \[ N_{1}^{\frac{p-1}{2}}=\left( \frac{2}{d}\right)^{\frac{p-1}{2}} \left\{ \frac{d(p-1)}{(d+2)-(d-2)p}\right\}^{\frac{1}{4}\left\{ (d+2)-(d-2)p \right\}}N_{2}. \] Take any $f \in H^{1}(\mathbb{R}^{d})\setminus \{0\}$ with $\mathcal{K}(f)\le 0$ and put $f_{\lambda}(x)=\lambda^{\frac{2}{p-1}}f(\lambda x)$ for $\lambda>0$. Then, one can easily verify that: \begin{align} \|f_{\lambda}\|_{\widetilde{H}^{1}}^{2} &=\lambda^{\frac{4}{p-1}+2-d} \left\{ \frac{p-\left(1+\frac{4}{d}\right)}{p-1}\right\} \|\nabla f\|_{L^{2}}^{2} +\lambda^{\frac{4}{p-1}-d}\|f\|_{L^{2}}^{2}, \\[6pt] \mathcal{K}(f_{\lambda}) &=\lambda^{\frac{4}{p-1}+2-d}\mathcal{K}(f)\le 0, \\[6pt] \mathcal{N}_{2}(f_{\lambda}) &= \mathcal{N}_{2}(f). \end{align} Moreover, an elementary calculus shows that $\|f_{\lambda}\|_{\widetilde{H}^{1}}^{2}$ takes the minimum at \begin{equation}\label{10/01/05/22:26} \lambda =\left( \frac{d(p-1)}{d+2-(d-2)p}\right)^{\frac{1}{2}} \frac{\|f\|_{L^{2}}}{\|\nabla f\|_{L^{2}}}, \end{equation} so that \begin{equation}\label{08/06/17/13:36} \min_{\lambda>0}\|f_{\lambda}\|_{ \widetilde{H}^{1} }^{p-1} = \left( \frac{d}{2}\right)^{\frac{p-1}{2}} \left( \frac{d(p-1)}{d+2-(d-2)p} \right)^{\frac{1}{4}\left\{ d+2-(d-2)p+4 \right\} }\mathcal{N}_{2}(f). \end{equation} Now, we consider a minimizing sequence $\{f_{n} \}_{n\in \mathbb{N}}$ of the variational problem for $N_{2}$ (see (\ref{08/07/02/23:24})). Then, it follows from (\ref{08/06/17/13:36}) and the definition of the variational value $N_{1}$ (see (\ref{08/07/02/23:23})) that \begin{equation}\label{10/01/03/11:06} N_{1}^{\frac{p-1}{2}}\le \left( \frac{d}{2}\right)^{\frac{p-1}{2}} \left( \frac{d(p-1)}{d+2-(d-2)p} \right)^{\frac{1}{4}\left\{ d+2-(d-2)p+4 \right\} }N_{2}. \end{equation} On the other hand, considering a minimizing sequence $\{f_{n}\}_{n\in \mathbb{N}}$ of the variational problem for $N_{1}$, we obtain \begin{equation}\label{10/01/03/11:11} N_{1}^{\frac{p-1}{2}}\ge \left( \frac{d}{2}\right)^{\frac{p-1}{2}} \left( \frac{d(p-1)}{d+2-(d-2)p} \right)^{\frac{1}{4}\left\{ d+2-(d-2)p+4 \right\} }N_{2}. \end{equation} This inequality (\ref{10/01/03/11:11}) together with (\ref{10/01/03/11:06}) gives (\ref{08/05/13/11:33}). \par Next, we prove (\ref{08/05/13/11:25}): \[ N_{3}=\frac{d(p-1)}{2(p+1)} N_{2}. \] Let $\{f_{n}\}_{n\in \mathbb{N}}$ be a minimizing sequence of the variational problem for $N_{3}$ (see (\ref{08/07/02/23:25})). For each $f_{n}$, there is a constant $s_{n}>0$ such that $\mathcal{K}(s_{n}f_{n})=0$. We put $g_{n}=s_{n}f_{n}$. Then, one can easily verify that \[ \mathcal{I}(f_{n})=\mathcal{I}(g_{n}), \] so that \begin{equation}\label{10/01/03/11:39} \lim_{n\to \infty}\mathcal{I}(g_{n})= N_{3}. \end{equation} Moreover, using $\mathcal{K}(g_{n})=0$, we find that \begin{equation}\label{10/01/03/11:43} \mathcal{I}(g_{n})=\frac{d(p-1)}{2(p+1)}\mathcal{N}_{2}(g_{n}) \ge \frac{d(p-1)}{2(p+1)}N_{2} \quad \mbox{for all $n \in \mathbb{N}$}. \end{equation} Hence, (\ref{10/01/03/11:39}) and (\ref{10/01/03/11:43}) give us that \begin{equation}\label{10/01/03/11:45} \frac{d(p-1)}{2(p+1)}N_{2} \le N_{3}. 
\end{equation} Now, we consider a minimizing sequence $\{f_{n}\}_{n\in \mathbb{N}}$ of the variational problem for $N_{2}$ (see (\ref{08/07/02/23:24})). Then, it follows from $\mathcal{K}(f_{n})\le 0$ and the definition of $N_{3}$ that \begin{equation}\label{10/01/03/11:58} N_{3}\frac{2(p+1)}{d(p-1)}\|\nabla f_{n}\|_{L^{2}}^{2} \le N_{3} \|f_{n}\|_{L^{p+1}}^{p+1} \le \|f_{n}\|_{L^{2}}^{p+1-\frac{d}{2}(p-1)}\|\nabla f_{n} \|_{L^{2}}^{\frac{d}{2}(p-1)}. \end{equation} Dividing the both sides of (\ref{10/01/03/11:58}) by $\frac{2(p+1)}{d(p-1)}\|\nabla f_{n}\|_{L^{2}}^{2}$, we have \begin{equation}\label{10/01/03/11:59} N_{3} \le \frac{d(p-1)}{2(p+1)}\|f_{n}\|_{L^{2}}^{p+1-\frac{d}{2}(p-1)}\|\nabla f_{n} \|_{L^{2}}^{\frac{d}{2}(p-1)-2} = \frac{d(p-1)}{2(p+1)} \mathcal{N}_{2}(f_{n}) . \end{equation} Since $\{f_{n}\}_{n\in \mathbb{N}}$ is a minimizing sequence of the variational problem for $N_{2}$, taking $n\to \infty$ in (\ref{10/01/03/11:59}), we obtain \begin{equation*} N_{3}\le \frac{d(p-1)}{2(p+1)}N_{2}, \end{equation*} which together with (\ref{10/01/03/11:45}) gives (\ref{08/05/13/11:25}). \end{proof} Next, we give the proof of Proposition \ref{08/05/13/15:13}. \begin{proof}[Proof of Proposition \ref{08/05/13/15:13}] We first solve the variational problem for \[ N_{1}=\inf\left\{ \left\| f \right\|_{\widetilde{H}^{1}}^{2} \, | \, f \in H^{1}(\mathbb{R}^{d})\setminus \{0\}, \ \mathcal{K}(f)\le 0 \right\}. \] Considering a concrete function, we verify that \begin{equation}\label{10/01/05/10:19} N_{1} \lesssim 1, \end{equation} where the implicit constant depends only on $d$ and $p$. Moreover, the functional $\|\cdot \|_{\widetilde{H}^{1}}$ satisfies that \begin{equation}\label{09/12/19/15:38} \|f\|_{H^{1}}^{2} \le \frac{d(p-1)}{d(p-1)-4}\|f\|_{\widetilde{H}^{1}}^{2} \quad \mbox{for all $f \in H^{1}(\mathbb{R}^{d})$}. \end{equation} We take a minimizing sequence $\{ f_{n}\}_{n \in \mathbb{N}}$ for this problem, so that \begin{align}\label{09/12/19/15:33} \lim_{n\to \infty}\|f_{n}\|_{\widetilde{H}^{1}}^{2} &=N_{1}, \\[6pt] \label{09/12/19/15:32} \mathcal{K}(f_{n})&\le 0 \quad \mbox{for all $n \in \mathbb{N}$}. \end{align} Here, the property (\ref{09/12/19/15:32}) is equivalent to the inequality \begin{equation}\label{09/12/19/15:54} \frac{2(p+1)}{d(p-1)}\|\nabla f_{n} \|_{L^{2}}^{2} \le \|f_{n}\|_{L^{p+1}}^{p+1} \quad \mbox{for all $n \in \mathbb{N}$}. \end{equation} By (\ref{10/01/05/10:19}), (\ref{09/12/19/15:38}) and (\ref{09/12/19/15:33}), we obtain the uniform bound of $\{f_{n}\}_{n \in \mathbb{N}}$ in $H^{1}(\mathbb{R}^{d})$: there exists $C_{1}>0$ depending only on $d$ and $p$ such that \begin{equation}\label{08/05/13/16:01} \sup_{n \in \mathbb{N}}\|f_{n}\|_{H^{1}}\le C_{1}. \end{equation} Moreover, by the Gagliardo-Nirenberg inequality (we can obtain this inequality without solving the variational problem for $N_{3}$) and (\ref{08/05/13/16:01}), we have \begin{equation}\label{09/12/19/16:04} \|f_{n}\|_{L^{p+1}}^{p+1} \lesssim C_{1}^{p+1-\frac{d}{2}(p-1)}\|\nabla f_{n}\|_{L^{2}}^{\frac{d}{2}(p-1)}, \end{equation} where the implicit constant depends only on $d$ and $p$. This inequality (\ref{09/12/19/16:04}), together with (\ref{09/12/19/15:54}), yields that \begin{equation}\label{08/06/28/22:39} \frac{2(p+1)}{d(p-1)} \lesssim C_{1}^{p+1-\frac{d}{2}(p-1)} \|\nabla f_{n}\|_{L^{2}}^{\frac{d}{2}(p-1)-2}. 
\end{equation} Hence, using (\ref{09/12/19/15:54}) again, we have that: there exists $C_{2}>0$ depending only on $d$ and $p$ such that \begin{equation}\label{08/05/13/15:39} \inf_{n\in \mathbb{N}}\|f_{n}\|_{L^{p+1}}\ge C_{2}. \end{equation} The properties (\ref{08/05/13/16:01}) and (\ref{08/05/13/15:39}) enable us to apply Lemma \ref{08/3/28/20:21}, so that we obtain \[ \mathcal{L}^{d}\left[ |f_{n}| \ge \delta\right]>C \] for some constants $C$ and $\delta >0$ independent of $n$. Moreover, applying Lemma \ref{08/03/28/21:00}, we find that: there exists $y_{n} \in \mathbb{R}^{d}$ such that, putting $\widetilde{f}_{n}(x)=f_{n}(x+y_{n})$, we have \begin{equation}\label{08/05/19/14:04} \mathcal{L}^{d}\left( \left[ \left|\widetilde{f}_{n}\right|\ge \frac{\delta}{2}\right] \cap B_{1}(0) \right)>C' \end{equation} for some constant $C'>0$ independent of $n$. Here, we can easily verify that this sequence $\{\widetilde{f}_{n}\}_{n\in \mathbb{N}}$ has same properties as the original one $\{f_{n}\}_{n\in \mathbb{N}}$: \begin{align} \label{10/01/05/11:16} \lim_{n\to \infty}\|\widetilde{f}_{n}\|_{\widetilde{H}^{1}}^{2} &=N_{1}, \\[6pt] \label{10/01/05/11:18} \mathcal{K}(\widetilde{f}_{n})&\le 0 \quad \mbox{for all $n \in \mathbb{N}$}, \\[6pt] \label{10/01/05/11:22} \sup_{n \in \mathbb{N}}\|\widetilde{f}_{n}\|_{H^{1}}&\le C_{1}. \end{align} We apply Lemma \ref{08/09/27/22:43} to $\{\widetilde{f}_{n}\}_{n\in \mathbb{N}}$ and obtain a subsequence $\{\widetilde{f}_{n}\}_{n \in \mathbb{N}}$ (still denoted by the same symbol) and a nontrivial function $Q \in H^{1}(\mathbb{R}^{d})$ such that \begin{equation}\label{10/01/03/15:38} \lim_{n\to \infty} \widetilde{f}_{n} = Q \quad \mbox{weakly in $H^{1}(\mathbb{R}^{d})$}. \end{equation} The property (\ref{10/01/03/15:38}) also gives us that \begin{equation} \begin{split} \label{08/05/14/12:07} \|\nabla \widetilde{f}_{n}\|_{L^{2}}^{2} -\|\nabla \widetilde{f}_{n}-\nabla Q\|_{L^{2}}^{2} -\|\nabla Q\|_{L^{2}}^{2} &=2\Re \int_{\mathbb{R}^{d}} (\nabla \widetilde{f}_{n}-\nabla Q) \overline{\nabla Q}\,dx \\[6pt] &\to 0 \qquad \mbox{as $n \to \infty$}, \end{split} \end{equation} and \begin{equation} \label{10/01/05/15:06} \|Q \|_{\widetilde{H}^{1}}^{2}\le N_{1}. \end{equation} Here, (\ref{10/01/05/15:06}) follows from the lower continuity in the weak topology and (\ref{10/01/05/11:16}). Moreover, Lemma \ref{08/03/28/21:21}, together with (\ref{10/01/03/15:38}), gives us that: for all $2\le q < 2^{*}$, \begin{equation}\label{08/05/14/12:08} \|\widetilde{f}_{n}\|_{L^{q}}^{q}-\|\widetilde{f}_{n}-Q\|_{L^{q}}^{q}-\|Q\|_{L^{q}}^{q} \to 0 \quad \mbox{as $n \to \infty$}. \end{equation} It follows from (\ref{08/05/14/12:07}) and (\ref{08/05/14/12:08}) that \begin{equation}\label{08/05/14/12:17} \mathcal{K}(\widetilde{f}_{n})-\mathcal{K}(\widetilde{f}_{n}-Q)-\mathcal{K}(Q) \to 0 \quad \mbox{as $n \to \infty$} \end{equation} and \begin{equation}\label{08/05/14/12:26} \|\widetilde{f}_{n} \|_{\widetilde{H}^{1}}^{2} -\| \widetilde{f}_{n}-Q\|_{\widetilde{H}^{1}}^{2} -\| Q \|_{\widetilde{H}^{1}}^{2} \to 0 \quad \mbox{as $n \to \infty$}. \end{equation} We show that the function $Q$ is a minimizer of the variational problem for $N_{1}$. To this end, it suffices to prove that \begin{equation}\label{10/01/04/22:59} \mathcal{K}(Q)\le 0. \end{equation} Indeed, by the definition of $N_{1}$, (\ref{10/01/04/22:59}) implies that $N_{1}\le \|Q\|_{\widetilde{H}^{1}}$, which, together with (\ref{10/01/05/15:06}), yields that \begin{equation}\label{10/01/05/15:01} \|Q\|_{\widetilde{H}^{1}}^{2}=N_{1}. 
\end{equation} We prove (\ref{10/01/04/22:59}) by contradiction: suppose that $\mathcal{K}(Q)>0$. Then, (\ref{10/01/05/11:18}) and (\ref{08/05/14/12:17}) imply that \[ \mathcal{K}(\widetilde{f}_{n}-Q) \le 0 \qquad \mbox{for all sufficiently large $n \in \mathbb{N}$}. \] Therefore, by the definition of $N_{1}$, we have \begin{equation}\label{08/05/14/12:27} \|\widetilde{f}_{n}-Q\|_{\widetilde{H}^{1}}^{2}\ge N_{1} \qquad \mbox{for all sufficiently large $n \in \mathbb{N}$}. \end{equation} Combining (\ref{08/05/14/12:26}), (\ref{10/01/05/11:16}) and (\ref{08/05/14/12:27}), we obtain that $\|Q\|_{\widetilde{H}^{1}}=0$, which is a contradiction. Hence, we have proved (\ref{10/01/04/22:59}). \par Now, it follows from (\ref{10/01/05/11:16}), (\ref{10/01/03/15:38}) and (\ref{10/01/05/15:01}) that \[ \begin{split} \|\widetilde{f}_{n}-Q\|_{\widetilde{H}_{1}}^{2} &=\|\widetilde{f}_{n}\|_{\widetilde{H}^{1}}^{2} -\frac{2\left\{ p-\left( 1+\frac{4}{d}\right)\right\} }{p-1}\Re{\int_{\mathbb{R}^{d}} \nabla \widetilde{f}_{n}(x) \overline{\nabla Q(x)}\,dx } \\ & \qquad \qquad -2\Re{\int_{\mathbb{R}^{d}} \widetilde{f}_{n}(x) \overline{Q(x)}\,dx }+ \|Q\|_{\widetilde{H}^{1}}^{2} \\ & \to 0 \qquad \mbox{as $n \to \infty$}. \end{split} \] This, together with (\ref{09/12/19/15:38}), immediately yields the strong convergence: \begin{equation}\label{10/01/04/23:10} \lim_{n\to \infty}\widetilde{f}_{n} = Q \ \mbox{ strongly in $H^{1}$}. \end{equation} We shall prove \begin{equation}\label{10/01/05/21:19} \mathcal{K}(Q)=0. \end{equation} Putting $Q_{s}=sQ$ for $s \in \mathbb{R}$, we have that \[ \mathcal{K}(Q_{s})>0 \quad \mbox{for all $s \in \Big( 0,\ \left\{ \frac{2(p+1)}{d(p-1)}\frac{\|\nabla Q\|_{L^{2}}^{2}}{\|Q\|_{L^{p+1}}^{p+1}}\right\}^{\frac{1}{p-1}} \Big)$}. \] Here, (\ref{10/01/04/22:59}) implies that \[ \left\{ \frac{2(p+1)}{d(p-1)}\frac{\|\nabla Q\|_{L^{2}}^{2}}{\|Q\|_{L^{p+1}}^{p+1}}\right\}^{\frac{1}{p-1}}\le 1. \] Supposing the undesired situation $\mathcal{K}(Q)<0$ ($\mathcal{K}(Q)\le 0$ has been proved already), we have by the intermediate value theorem that there exists $s_{0} \in (0,1)$ such that $\mathcal{K}(Q_{s_{0}})=0$, so that $\|Q_{s_{0}}\|_{\widetilde{H}^{1}}^{2}\ge N_{1}$ by the definition of $N_{1}$. However, it follows from (\ref{10/01/05/15:01}) that \[ \|Q_{s_{0}}\|_{\widetilde{H}^{1}}^{2}=s_{0}^{2}\|Q\|_{\widetilde{H}^{1}}^{2} < \|Q\|_{\widetilde{H}^{1}}^{2}=N_{1}, \] which is a contradiction. Hence, (\ref{10/01/05/21:19}) holds valid. \par Next, we consider the variational problem for $N_{2}$. We show that the function $Q$ obtained above is a minimizer of this problem. Put $Q_{\lambda}(x)=\lambda^{\frac{2}{p-1}}Q(\lambda x)$ for $\lambda>0$. We easily verify that \[ \mathcal{K}(Q_{\lambda})=0 \quad \mbox{for all $\lambda >0$}, \] which implies that \[ \|Q_{\lambda}\|_{\widetilde{H}^{1}}^{2}\ge N_{1} \quad \mbox{for all $\lambda>0$}. \] Then, the same argument as (\ref{10/01/05/22:26}), together with (\ref{10/01/05/15:01}), shows that $\|Q_{\lambda}\|_{\widetilde{H}^{1}}\colon (0,\infty)\to [0,\infty)$ takes the minimum at \begin{equation}\label{10/01/05/22:28} \lambda=\left( \frac{d(p-1)}{d+2-(d-2)p}\right)^{\frac{1}{2}}\frac{\left\| Q\right\|_{L^{2}}}{\left\| \nabla Q\right\|_{L^{2}}}=1. 
\end{equation} Therefore, as well as (\ref{08/06/17/13:36}), we have that \begin{equation*}\label{09/12/19/17:25} \begin{split} N_{1}^{\frac{p-1}{2}}&=\|Q\|_{\widetilde{H}^{1}}^{p-1} =\|Q_{\lambda}\|_{\widetilde{H}^{1}}^{p-1}|_{\lambda=1} \\ &= \left( \frac{d}{2}\right)^{\frac{p-1}{2}} \left( \frac{d(p-1)}{d+2-(d-2)p} \right)^{\frac{1}{4}\left\{ d+2-(d-2)p+4 \right\} }\mathcal{N}_{2}(Q). \end{split} \end{equation*} This, together with (\ref{08/05/13/11:33}) in Proposition \ref{08/06/16/15:24}, leads us to the conclusion that \begin{equation}\label{10/01/05/21:17} \mathcal{N}_{2}(Q)=N_{2}. \end{equation} Here, we remark that (\ref{10/01/05/21:19}) and (\ref{10/01/05/22:28}) yield the relation (\ref{08/09/24/15:13}): \[ \left\| Q \right\|_{L^{2}}^{2} = \frac{d+2-(d-2)p}{d(p-1)} \left\| \nabla Q\right\|_{L^{2}}^{2} = \frac{d+2-(d-2)p}{2(p+1)} \left\| Q \right\|_{L^{p+1}}^{p+1}. \] We shall show that $Q$ is also a minimizer of the variational problem for $N_{3}$. Indeed, (\ref{08/05/13/11:25}) in Proposition \ref{08/06/16/15:24}, together with (\ref{10/01/05/21:19}) and (\ref{10/01/05/21:17}), yields that \begin{equation}\label{10/01/05/21:48} \begin{split} \mathcal{I}(Q)&= \frac{ \|Q\|_{L^{2}}^{p+1-\frac{d}{2}(p-1)} \|\nabla Q\|_{L^{2}}^{ \frac{d}{2}(p-1) }} {\|Q\|_{L^{p+1}}^{p+1}} \\[6pt] &=\frac{d(p-1)}{2(p+1)}\mathcal{N}_{2}(Q) =\frac{d(p-1)}{2(p+1)}N_{2}=\mathcal{N}_{3}. \end{split} \end{equation} \indent Finally, we prove that $Q$ satisfies the equation (\ref{08/05/13/11:22}) with $\omega =1$: \begin{equation}\label{08/06/14/13:28} \Delta Q -Q +|Q|^{p-1}Q=0. \end{equation} Since $\mathcal{I}(Q)$ is the critical value of $\mathcal{I}$ (see (\ref{10/01/05/21:48})), we have \begin{equation}\label{10/01/06/12:25} \begin{split} &0=\frac{d}{d\varepsilon} \mathcal{I}(Q+\varepsilon \phi) \biggm|_{\varepsilon=0} \\[6pt] &=\frac{ (p+1-\frac{d}{2}(p-1)) \|Q\|_{L^{2}}^{p-1-\frac{d}{2}(p-1)} \|\nabla Q\|_{L^{2}}^{\frac{d}{2}(p-1)} \Re \int Q \overline{\phi} dx }{\|Q\|_{L^{p+1}}^{p+1}} \\[6pt] &\qquad + \frac{ \frac{d}{2}(p-1) \|Q\|_{L^{2}}^{p+1-\frac{d}{2}(p-1)} \|\nabla Q\|_{L^{2}}^{\frac{d}{2}(p-1)-2} \Re \int \nabla Q \overline{\nabla \phi} dx }{\|Q\|_{L^{p+1}}^{p+1}} \\[6pt] &\qquad - \frac{ (p+1) \|Q\|_{L^{2}}^{p+1-\frac{d}{2}(p-1)} \|\nabla Q\|_{L^{2}}^{\frac{d}{2}(p-1)} \Re \int |Q|^{p-1}Q \overline{\phi} dx }{\|Q\|_{L^{p+1}}^{2(p+1)}} \quad \mbox{for all $\phi \in C_{c}^{\infty}(\mathbb{R}^{d})$}. \end{split} \end{equation} Combining (\ref{10/01/06/12:25}) with (\ref{10/01/05/21:19}) ($\| Q \|_{L^{p+1}}^{p+1}=\frac{2(p+1)}{d(p-1)}\|\nabla Q \|_{L^{2}}^{2}$), we obtain that \begin{equation}\label{10/01/06/12:37} \begin{split} 0&=\frac{d^{2}(p-1)^{2}}{4(p+1)}\left(\frac{2(p+1)}{d(p-1)} -1 \right) \|Q\|_{L^{2}}^{p-1-\frac{d}{2}(p-1)} \|\nabla Q\|_{L^{2}}^{\frac{d}{2}(p-1)-2} \Re \int_{\mathbb{R}^{d}} Q(x)\overline{\phi(x)}\,dx \\[6pt] & \qquad + \frac{d^{2}(p-1)^{2}}{4(p+1)} \frac{\|Q\|_{L^{2}}^{p+1-\frac{d}{2}(p-1)} \|\nabla Q\|_{L^{2}}^{\frac{d}{2}(p-1)-2}} {\|\nabla Q\|_{L^{2}}^{2}} \Re \int_{\mathbb{R}^{d}} \nabla Q(x)\overline{\nabla \phi(x)}\,dx \\[6pt] &\qquad - \frac{d^{2}(p-1)^{2}}{4(p+1)}\frac{\|Q\|_{L^{2}}^{p+1-\frac{d}{2}(p-1)} \|\nabla Q\|_{L^{2}}^{\frac{d}{2}(p-1)-2}}{\|\nabla Q\|_{L^{2}}^{2}} \Re \int_{\mathbb{R}^{d}}|Q(x)|^{p-1}Q(x) \overline{\phi(x)}\,dx. \end{split} \end{equation} This, together with (\ref{10/01/05/22:28}), shows that $Q$ satisfies the equation (\ref{08/06/14/13:28}) in a weak sense. 
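As a concrete illustration (a standard fact recorded only for orientation, and not used in the proof), in dimension $d=1$ the equation (\ref{08/06/14/13:28}) admits the explicit solution \[ Q(x)=\left( \frac{p+1}{2} \right)^{\frac{1}{p-1}} \operatorname{sech}^{\frac{2}{p-1}}\!\left( \frac{p-1}{2}\,x \right), \] which is positive, even and exponentially decaying, in accordance with the properties of $Q$ recalled below.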
Moreover, it turns out that $Q$ has the following properties: $Q$ is positive (see \cite{Lieb-Loss}), radially symmetric (see \cite{Gidas-Ni-Nirenberg}), unique up to translations and phase shifts (see \cite{Kwong}), and satisfies the decay estimate (see \cite{Berestycki-Lions, Lieb-Loss}): \begin{equation}\label{10/01/21/11:02} |\partial^{\alpha} Q(x)|\le C e^{-\delta |x|} \quad \mbox{for all multi-indices $\alpha$ with $|\alpha|\le 2$}, \end{equation} where $C$ and $\delta$ are some positive constants. \par Finally, we shall show that $Q$ belongs to the Schwartz space $\mathcal{S}(\mathbb{R}^{d})$. Since $Q$ is a smooth and radially symmetric solution to (\ref{08/06/14/13:28}), we have that \begin{equation}\label{10/01/21/10:56} \frac{d^{2}}{dr^{2}}\partial^{\alpha} Q= \partial^{\alpha}Q-\partial^{\alpha}\left( Q^{p}\right)-\partial^{\alpha}\left( \frac{d-1}{r}\frac{d}{dr} Q \right) \quad \mbox{for every multi-index $\alpha$, where $r=|x|$}. \end{equation} Then, an induction argument, together with (\ref{10/01/21/11:02}), leads to $Q \in \mathcal{S}(\mathbb{R}^{d})$. \end{proof} \section*{Acknowledgments} The authors would like to express their deep gratitude to Professor Kenji Nakanishi for pointing out a mistake in the earliest version of this paper. The authors also thank Dr. Takeshi Yamada for his valuable comments. \par H. Nawa is partially supported by the Grant-in-Aid for Scientific Research (Challenging Exploratory Research \# 19654026) of JSPS. \noindent { Takafumi Akahori, \\ Graduate School of Science and Engineering \\ Ehime University, \\ 2-5 Bunkyo-cho, Matsuyama, 790-8577, Japan \\ E-mail: [email protected] } \\ \\ { Hayato Nawa, \\ Division of Mathematical Science, Department of Systems Innovation \\ Graduate School of Engineering Science \\ Osaka University, \\ Toyonaka 560-8531, Japan \\ E-mail: [email protected] } \end{document}
\begin{document} \title{\TheTitle} \begin{abstract} In this paper, we propose a primal-dual algorithm with a {novel momentum term using the partial gradients of the coupling function} that can be viewed as a generalization of the method proposed by Chambolle and Pock in 2016 to solve saddle point problems defined by a convex-concave function $\cL(x,y)=f(x)+\Phi(x,y)-h(y)$ with a general coupling term $\Phi(x,y)$ that is \emph{not} assumed to be bilinear. {Assuming $\grad_x\Phi(\cdot,y)$ is Lipschitz for any fixed $y$, and {$\grad_y\Phi(\cdot,\cdot)$ is Lipschitz}, we show that the iterate sequence converges to a saddle point; and for any $(x,y)$, we derive error bounds in terms of {$\cL(\bar{x}_k,y)-\cL(x,\bar{y}_k)$} for the ergodic sequence $\{\bar{x}_k,\bar{y}_k\}$.} In particular, we show an $\cO(1/k)$ rate when the problem is merely convex in $x$. Furthermore, assuming $\Phi(x,\cdot)$ is linear for each fixed $x$ and $f$ is strongly convex, we obtain the ergodic convergence rate of $\cO(1/k^2)$ {-- we are not aware of another single-loop method in the related literature achieving the same rate when $\Phi$ is not bilinear.} Finally, we propose a backtracking technique which does not require the knowledge of Lipschitz constants while ensuring the same convergence results. \sa{We also consider} convex optimization problems with nonlinear functional constraints and we show that using the backtracking scheme, \sa{the} optimal convergence rate can be achieved even when the dual domain is unbounded. We tested our method against other state-of-the-art first-order algorithms and interior point methods for solving quadratically constrained quadratic problems with synthetic data, kernel matrix learning, and regression with fairness constraints arising in machine learning. \end{abstract} \section{Introduction} \label{sec:intro} {Let $(\cX,\norm{\cdot}_{\cX})$ and $(\cY,\norm{\cdot}_{\cY})$ be finite dimensional, normed vector spaces.} In this paper, we study the following saddle point~(SP) problem:\vspace*{-1mm} \begin{equation}\label{eq:original-problem} (P):\quad \min_{x\in\cX}\max_{y\in\cY} \cL(x,y)\triangleq f(x)+\Phi(x,y)-h(y),\vspace*{-1mm} \end{equation} where $f:\cX\rightarrow \reals\cup\{+\infty\}$ and $h:\cY\rightarrow \reals\cup\{+\infty\}$ are convex functions (possibly nonsmooth) and $\Phi:\cX\times\cY\rightarrow \reals$ is a \sa{continuous function with certain differentiability properties}, convex in $x$ and concave in $y$. Our objective is to design an efficient first-order method to compute a saddle point of the structured convex-concave function $\cL$ in~\eqref{eq:original-problem}. The problem $(P)$ covers a broad class of optimization problems, e.g., convex optimization with nonlinear conic constraints, which itself includes LP, QP, QCQP, SOCP, and SDP as its subclasses. Indeed, consider \vspace*{-2mm} \begin{equation} \label{eq:conic_problem} \min_{x\in\reals^n}~ \rho(x)\triangleq f(x)+g(x) \quad \hbox{s.t.} \quad G(x)\in -\cK,\vspace*{-2mm} \end{equation} {where $\cK\subseteq \cY^*$ is a closed convex cone in the dual space $\cY^*$}, $f$ is convex (possibly nonsmooth), $g$ is convex with a Lipschitz continuous gradient, $G:\cX\rightarrow \cY^*$ is a smooth $\cK$-convex, Lipschitz function having \sa{a Lipschitz continuous Jacobian}.
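To fix ideas, a simple illustrative instance of \eqref{eq:conic_problem} (stated here only as an example, with generic data $Q_0,Q_1,c_0,c_1,b_1$) is a QCQP with a single quadratic constraint, \begin{equation*} \min_{x\in\reals^n}\ \tfrac{1}{2}x^\top Q_0 x + c_0^\top x \quad \hbox{s.t.} \quad \tfrac{1}{2}x^\top Q_1 x + c_1^\top x - b_1 \le 0, \end{equation*} with $Q_0, Q_1\succeq 0$: here one may take $f\equiv 0$ (or let $f$ carry a simple nonsmooth regularizer), $g(x)=\tfrac{1}{2}x^\top Q_0 x + c_0^\top x$, $G(x)=\tfrac{1}{2}x^\top Q_1 x + c_1^\top x - b_1$ and $\cK=\reals_+$; the required Lipschitz properties hold on any bounded set containing the region of interest.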
Various optimization problems that frequently arise in many important applications are special cases of the conic problem in \eqref{eq:conic_problem}, e.g., primal or dual formulations of $\ell_1$ or $\ell_2$-norm soft margin SVM, ellipsoidal kernel machines~\cite{shivaswamy2007ellipsoidal}, kernel matrix learning~\cite{gonen2011multiple,lanckriet2004learning} etc. Using Lagrangian duality, one can equivalently write \eqref{eq:conic_problem} as \vspace*{-1mm} \begin{equation}\label{eq:conic_problem_equivalent} \min_{x\in\reals^n}\max_{y\in\cK^*} f(x)+g(x)+\fprod{G(x),y},\vspace*{-1mm} \end{equation} which is a special case of \eqref{eq:original-problem}, i.e., $\Phi(x,y)=g(x)+\fprod{G(x),y}$ and $h(y)=\ind{\cK^*}(y)$ is the indicator function of $\cK^*$, where {$\cK^*\subseteq\cY$} denotes the dual cone of $\cK$. {\bf Related Work.} Constrained convex optimization can be viewed as a special case of the SP \sa{problem}~\eqref{eq:original-problem}, and recently some first-order methods and their randomized-coordinate variants have been proposed to solve $\min\big\{ f(x)+g(x):G(x)\in-\reals^m_+\big\}$. In \cite{lin2017levelset}, a level-set method with iteration complexity guarantees is proposed for nonsmooth/smooth and strongly/merely convex settings. In \cite{xu2017first}, a primal-dual method based on the linearized augmented Lagrangian method (LALM) is proposed with \sa{an $\cO(1/k)$} sublinear convergence rate in terms of suboptimality and infeasibility (see also \cite{yu2017primal} for another primal-dual algorithm with an $\cO(1/k)$ rate). However, none of these methods can solve the more general SP problem we consider in this paper. {SP problems \sa{have} become popular in recent years due to their generality and ability to directly solve constrained optimization problems with certain special structures. There have been several \sa{works on} first-order primal-dual algorithms for \eqref{eq:original-problem} when $\Phi(x,y)$ is bilinear, such as~\cite{chambolle2011first,dang2014randomized,chambolle2016ergodic,he2016accelerated,wang2017exploiting,du2018linear}, and a few others have considered \sa{a} more general setting similar to this paper~\cite{palaniappan2016stochastic,nemirovski2004prox,juditsky2011first,he2015mirror,kolossoski2017accelerated} -- see Tseng's forward-backward-forward algorithm~\cite{tseng2000modified} for monotone inclusion problems and a projected reflected gradient method for monotone VIs~\cite{malitsky2015projected} which can also be used to solve~\eqref{eq:original-problem}.} Here, we briefly review some recent work that is closely related to ours. In the rest, we assume that \eqref{eq:original-problem} has a saddle point $(x^*,y^*)$. In~\cite{chambolle2011first}, a special case of \eqref{eq:original-problem} with \sa{a} bilinear coupling term is studied: \begin{align}\label{eq:CP-problem} \min_{x\in\cX}\max_{y\in\cY} \hat{f}(x)+\fprod{Kx,~y}-h(y), \end{align} for some linear operator $K:\cX\rightarrow\cY^*$, where $\hat{f}$ and $h$ are \sa{closed} convex functions with easily computable prox {(Moreau) maps~\cite{hiriart2012fundamentals}}. The authors proposed a primal-dual algorithm which guarantees that \sa{$(x_k,y_k)$ converges to a saddle point $(x^*,y^*)$,} and that $\cL(\bar{x}_K,y^*)-\cL(x^*,\bar{y}_K)$ converges to $0$ with an $\cO(1/K)$ rate when $\hat{f}$ is merely convex and with an $\cO(1/K^2)$ rate when $\hat{f}$ is strongly convex, where $\{(\bar{x}_k,\bar{y}_k)\}_k$ is a weighted ergodic average sequence.
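For later reference, we also recall (in the notation of \eqref{eq:CP-problem}, and only in its basic form; see \cite{chambolle2011first} for the precise parameter choices and accelerated variants) the primal-dual iteration studied there: \begin{align*} y_{k+1} &= \operatorname{prox}_{\sigma h}\big(y_k + \sigma K\bar{x}_k\big), \\ x_{k+1} &= \operatorname{prox}_{\tau \hat{f}}\big(x_k - \tau K^* y_{k+1}\big), \\ \bar{x}_{k+1} &= x_{k+1} + \theta\,(x_{k+1}-x_k), \end{align*} where $\tau,\sigma>0$ are step-sizes tied to the operator norm of $K$ and $\theta$ is an extrapolation (momentum) parameter.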
Later, both Condat~\cite{condat2013primal} and Chambolle \& Pock~\cite{chambolle2016ergodic} studied some related primal-dual algorithms for an SP problem of the form in \eqref{eq:CP-problem} such that $\hat{f}$ has a composite convex structure, i.e., $\hat{f}(x)=f(x)+g(x)$ such that $f$ has an easy prox map and $g$ has a Lipschitz continuous gradient -- also see~\cite{vu2013splitting} for a related method. In~\cite{condat2013primal}, convergence of the proposed algorithm is shown without providing any rate statements. In~\cite{chambolle2016ergodic}, it is shown that their previous work in~\cite{chambolle2011first} can be extended to handle non-linear proximity operators based on Bregman distance functions while guaranteeing the same rate results -- see also~\cite{chen2014optimal} for an optimal method with $\cO(1/K)$ rate to solve \emph{bilinear} SP problems. \sa{Later, Malitsky \& Pock \cite{malitsky2018first} proposed a primal-dual method with linesearch to solve \eqref{eq:CP-problem} with the same rate results as in \cite{chambolle2016ergodic}.} In a recent work, He and Monteiro~\cite{he2016accelerated} considered a \emph{bilinear} SP problem from a monotone inclusion perspective. They proposed an accelerated algorithm based on the hybrid proximal extragradient~(HPE) method, and showed that an $\epsilon$-saddle point $(x_\epsilon,y_\epsilon)$ can be computed within $\cO(1/\epsilon)$ iterations. More recently, Kolossoski and Monteiro~\cite{kolossoski2017accelerated} proposed another HPE-type method to solve a more general SP problem as in~\eqref{eq:original-problem} over \emph{bounded} sets {-- it is worth emphasizing that for nonlinearly constrained convex optimization, the dual optimal solution set may be unbounded and/or it may not be trivial to get an upper bound on a dual solution. Indeed, the method in~\cite{kolossoski2017accelerated} is an inexact proximal point method; each prox subinclusion (outer step) is solved using an accelerated gradient method (inner steps).} This work generalizes the method in~\cite{he2016accelerated} as the new method can deal with SP problems that are not bilinear, and it can use general Bregman distances instead of the Euclidean one. Nemirovski~\cite{nemirovski2004prox} and {Juditsky \& Nemirovski~\cite{juditsky2011first}} also studied a convex-concave SP problem with a general coupling, $\min_{x\in X}\max_{y\in Y} \Phi(x,y)$. Writing it as a variational inequality~\sa{(VI)} problem, they proposed a prox-type method, Mirror-prox. Assuming that $X$ and $Y$ are convex \emph{compact} sets, $\Phi$ is differentiable, and $F(x,y)=[\grad_x\Phi(x,y)^\top, -\grad_y\Phi(x,y)^\top]^\top$ is Lipschitz with constant $L$, an $\cO(L/K)$ ergodic convergence rate is shown for Mirror-prox, where in each iteration $F$ is computed \emph{twice} and a projection onto $X\times Y$ is computed with respect to a general (Bregman) distance. Moreover, \sa{in \cite{juditsky2011first}, for the case where $\grad_y\Phi(x,\cdot)$ is linear for all $x$, i.e., $L_{yy}=0$, assuming $Y$ is compact and $\Phi(\cdot,y)$ is strongly convex for any fixed $y$, a convergence rate of $\cO(1/K^2)$ is shown for a \emph{multi-stage} method which repeatedly calls Mirror-Prox in each stage}.
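To make the comparison with our method transparent, we note that in the Euclidean setting (a sketch only; the general scheme employs Bregman prox-mappings) one iteration of Mirror-Prox applied to $z_k=(x_k,y_k)\in X\times Y$ takes the extragradient form \begin{align*} w_k &= P_{X\times Y}\big(z_k-\gamma F(z_k)\big), \\ z_{k+1} &= P_{X\times Y}\big(z_k-\gamma F(w_k)\big), \end{align*} where $P_{X\times Y}$ denotes the Euclidean projection onto $X\times Y$ and the step-size $\gamma$ is of order $1/L$; in particular, the operator $F$ is evaluated twice per iteration.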
More recently, He et al.~\cite{he2015mirror} extended \rev{the Mirror-Prox method in~\cite{nemirovski2004prox}} to \sa{handle} $\min_{x\in\cX}\max_{y\in\cY} f(x)+\Phi(x,y)-h(y)$ \sa{with the same convergence guarantees as in~\cite{nemirovski2004prox}}, where $f$ and $h$ are \sa{closed} convex with simple prox maps with respect to a general (Bregman) distance. In these papers, \sa{both the primal and dual step-sizes can be at most $1/L$.} \rev{Later, Malitsky~\cite{malitsky2018proximal} also considered} a monotone \sa{VI} problem of computing $z^*\in \sa{\cZ}$ such that $\fprod{F(z^*),z-z^*}+g(z)-g(z^*)\geq 0$ for all $z\in \sa{\cZ}$, where $F:\cZ\to\cZ$ is a monotone operator, $g$ is a \sa{proper closed} convex function \sa{and $\cZ$ is a finite-dimensional vector space with inner product}. The author proposed a proximal extrapolated gradient method (PEGM) with an ergodic convergence rate of $\cO(1/K)$. The proposed method enjoys a backtracking scheme to estimate the local Lipschitz constant of the monotone map $F$ -- see also~\cite{malitsky2018forward} for a related line-search method in a more general setting of monotone inclusion problems. \rev{Finally, while our paper was under review, we became aware of another primal-dual method proposed in~\cite{boob2019stochastic} for solving optimization problems with functional constraints, considering convex/nonconvex problems and stochastic/deterministic oracle settings in a unified manner with convergence guarantees -- we compare our results with those in~\cite{boob2019stochastic} for convex and strongly convex minimization problems at the end of the contribution paragraph below.} {\bf Application.} From the application perspective, there are many real-life problems arising in machine learning, signal processing, image processing, finance, etc.\ that can be formulated as special cases of \eqref{eq:original-problem}. In particular, the following problems arising in machine learning can be efficiently solved using the methodology proposed in this paper: \textbf{i)} robust classification under Gaussian uncertainty in feature observations leads to SOCP problems~\cite{bhattacharyya2005second}; \textbf{ii)} the distance metric learning formulation proposed in~\cite{xing2003distance} is a convex optimization problem over positive semidefinite matrices subject to nonlinear convex constraints; \textbf{iii)} training ellipsoidal kernel machines~\cite{shivaswamy2007ellipsoidal} requires solving nonlinear SDPs; \textbf{iv)} learning a kernel matrix for the transduction problem can be cast as an SDP or a QCQP~\cite{lanckriet2004learning,gonen2011multiple}. In this paper, following~\cite{lanckriet2004learning}, we implemented our method for learning a kernel matrix to predict the labels of partially labeled data sets. To summarize the problem, {suppose we are given a set of labeled data points consisting of feature vectors $\{\ba_i\}_{i\in\cS}\subset\reals^m$, corresponding labels $\{b_i\}_{i\in\cS}\subset\{-1,+1\}$, and a set of unlabeled test data {$\{\ba_i\}_{i\in\cT}\subset\reals^m$}. Let $n_{tr}\triangleq |\cS|$ and $n_t\triangleq |\cT|$ denote the cardinality of the training and test sets, respectively, and define $n\triangleq n_{tr}+n_t$.} Consider $M$ different embeddings of the data corresponding to kernel functions $k_\ell:\reals^m\times\reals^m\rightarrow\reals$ for $\ell=1,...,M$.
Let $K_\ell\in\mathbb{S}^n_{+}$ be the kernel matrix such that $[K_\ell]_{ij}=k_\ell(\ba_i,\ba_j)$ for $i,j\in\cS\cup\cT$ and consider the partition of {\small $K_\ell=\left( \begin{array}{cc} K_\ell^{tr} & K_\ell^{tr,t} \\ K_\ell^{t,tr} & K_\ell^t \\ \end{array} \right) $}, where $K_\ell^{tr}=[k_\ell(\ba_i,\ba_j)]_{i,j\in\cS}\in\mathbb{S}^{n_{tr}}$, $K_\ell^{t}=[k_\ell(\ba_i,\ba_j)]_{i,j\in\cT}\in\mathbb{S}^{n_{t}}$ and ${K_\ell^{t,tr}}^\top=K_\ell^{tr,t}=[k_\ell(\ba_i,\ba_j)]_{i\in\cS,j\in\cT}\in\reals^{n_{tr}\times n_t}$. The objective is to learn a kernel matrix $K$ belonging to a class of kernel matrices which is a convex set generated by $\{K_\ell\}_{\ell=1}^M$, such that it minimizes the training error of a kernel SVM as a function of $K$. Skipping the details in~\cite{lanckriet2004learning}, one can study both $\ell_1$- and $\ell_2$-norm soft margin SVMs by considering the following generic formulation:\vspace*{-1mm} { \begin{align} \label{eq:kernel_learn} \min_{\substack{K\in\cK, \\ \text{trace}(K)=c}}\ \max_{\substack{\rev{\alpha:\ \mathbf{0}\leq \alpha\leq C\be}, \\ \fprod{\bb, \alpha}=0}} 2\be^\top \alpha - \alpha^\top (G(K^{tr})+\lambda\mathbf{I})\alpha,\vspace*{-8mm} \end{align}} where $c,C>0$ and $\lambda\geq 0$ are model parameters, \rev{$\be\in\reals^{n_{tr}}$ denotes the vector of ones}, $\bb=[b_i]_{i=1}^{n_{tr}}$ and $G(K^{tr})\triangleq\diag(\bb)K^{tr}\diag(\bb)$. Suppose we want to learn a kernel matrix belonging to \rev{the class} $\cK=\{\sum_{\ell=1}^M\eta_\ell K_\ell:\ \eta_\ell\geq 0,\ \ell=1,\ldots, M\}$; clearly, $K\in\cK$ implies $K\succeq 0$. For kernel class $\cK$, \eqref{eq:kernel_learn} takes the following form: \vspace*{-2mm} { \begin{align} \label{eq:kernel_learn_simple} \min_{\substack{\eta:\ \fprod{\br,\eta}=c,\\ \eta\geq 0}}\ \max_{\substack{\rev{\alpha:\ \mathbf{0}\leq \alpha\leq C\be}, \\ \ \ \fprod{\bb,\alpha}=0}} 2\be^\top \alpha - \sum_{\ell=1}^M {\eta_\ell}\alpha^\top G(K^{tr}_\ell)\alpha-\lambda\norm{\alpha}_2^2,\vspace*{-8mm} \end{align}} where $\eta=[\eta_\ell]_{\ell=1}^M$ and $\br=[r_\ell]_{\ell=1}^M$ for $r_\ell=\text{trace}(K_\ell)$. Clearly, \eqref{eq:kernel_learn_simple} is a special case of \eqref{eq:original-problem}. In~\cite{lanckriet2004learning}, \eqref{eq:kernel_learn_simple} is equivalently represented as a QCQP and then solved using MOSEK~\cite{mosek}, a commercial interior-point method~(IPM). \rev{The per-iteration computational complexity} of a generic IPM is $\cO(Mn_{tr}^3)$ for solving the resulting QCQP~\cite{nesterov1994interior}, \rev{while it is $\cO(Mn_{tr}^2)$ for the first-order primal-dual method we propose in this paper.} Therefore, when $n_{tr}$ is very large, IPMs are not suitable for solving large-scale problems unless the data matrix has a certain sparsity structure; and in practice, as $n_{tr}$ grows, first-order methods with much lower per-iteration complexity will have the advantage over IPMs for computing low-to-medium accuracy solutions.
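To make the per-iteration cost comparison concrete, note that under one natural identification with \eqref{eq:original-problem} (others are possible, e.g., the $\lambda$-term may instead be absorbed into $h$), the coupling function in \eqref{eq:kernel_learn_simple} is $\Phi(\eta,\alpha)=2\be^\top\alpha-\sum_{\ell=1}^{M}\eta_\ell\,\alpha^\top G(K^{tr}_\ell)\alpha-\lambda\norm{\alpha}_2^2$, with partial gradients \begin{equation*} \grad_\eta\Phi(\eta,\alpha) = -\Big[ \alpha^\top G(K^{tr}_\ell)\alpha \Big]_{\ell=1}^{M}, \qquad \grad_\alpha\Phi(\eta,\alpha) = 2\be - 2\sum_{\ell=1}^{M}\eta_\ell\, G(K^{tr}_\ell)\alpha - 2\lambda\alpha. \end{equation*} Both can be formed from the matrix-vector products $\{G(K^{tr}_\ell)\alpha\}_{\ell=1}^{M}$, which cost $\cO(Mn_{tr}^2)$ operations in total; this is the per-iteration complexity quoted above for the proposed first-order method.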
Assuming {$\grad_y\Phi(\cdot,\cdot)$ is Lipschitz} and $\grad_x\Phi(\cdot,y)$ is Lipschitz for any fixed $y$, we show that $(x_k,y_k)$ converges to a saddle point $(x^*,y^*)$ and \rev{for any $(x,y)\in\cX\times\cY$ we derive error bounds in terms of $\cL(\bar{x}_K,y)-\cL(x,\bar{y}_K)$} for the ergodic sequence {-- without requiring the primal-dual domains to be bounded}; in particular, we show an $\cO(1/K)$ rate when the problem is merely convex in $x$, using a constant step-size rule, \sa{where $K$ denotes the number of gradient computations}. Furthermore, assuming $\Phi(x,\cdot)$ is linear for each fixed $x$ and $f$ is strongly convex, we obtain an ergodic convergence rate of $\cO(1/K^2)$ -- {we are not aware of any other single-loop method with an $\cO(1/K^2)$ rate when $\Phi$ is not bilinear.} Moreover, we develop a backtracking scheme, \rev{APDB,} which ensures that the above rate results continue to hold in terms of the total number of gradient computations even though the Lipschitz constants, $L_{xx}$, $L_{yx}$ and $L_{yy}$, are \emph{not} known -- see Assumption~\ref{assum}. To the best of our knowledge, for strongly convex-concave SP problems, this is the first time a line-search method ensuring an $\cO(1/K^2)$ rate has been proposed when the coupling function $\Phi$ is \emph{not} bilinear. \rev{In the context of constrained optimization problems, the backtracking scheme helps us demonstrate convergence results even when a dual bound is not available or easily computable. Our results continue to hold when the dual optimal solution set is \emph{unbounded}.} {The prior art for solving SP problems in this general setting includes the Mirror-Prox algorithm in~\cite{nemirovski2004prox,he2015mirror}, the HPE-type method in~\cite{kolossoski2017accelerated} \sa{and the PEGM by Malitsky~\cite{malitsky2018proximal}}. All these methods, including ours, have $\cO(1/\epsilon)$ complexity under mere convexity; however, \rev{our APD and APDB methods both have} an improved $\cO(1/\sqrt{\epsilon})$ rate when $f$ is strongly convex.} Indeed, all the rates derived here are the optimal rates for \sa{the settings considered in this paper -- see~\cite{ouyang2018lower} for the lower complexity bounds of $\cO(1/K)$ and $\cO(1/K^2)$ associated with first-order primal-dual methods for convex-concave and strongly convex-concave bilinear SP problems, respectively}. {When compared to~\cite{kolossoski2017accelerated}, ours is a simpler one-loop algorithm while HPE~\cite{kolossoski2017accelerated} is a two-loop method, \sa{having outer and inner iterations}, requiring a stronger oracle for its subproblems -- see Remark~\ref{rem:prox} -- and also requiring a bounded domain. Moreover, while convergence to a \emph{unique} limit point is shown for APD, only a weaker limit point result is shown in~\cite{kolossoski2017accelerated}, i.e., that any limit point is a saddle point -- see the end of p.~1254 in~\cite{kolossoski2017accelerated}.} Another competitor algorithm, Mirror-Prox, requires computing both the primal and dual gradients \emph{twice} during each iteration, while the proposed APD method only needs to compute them once, thus saving half of the computation cost while achieving the same iteration complexity \rev{-- see Remark~\ref{rem:MP-APD}}.
Moreover, under the assumption that $\grad \Phi(\cdot,\cdot)$ is Lipschitz with constant $L$, the method in~\cite{he2015mirror} uses a primal-dual step-size less than $1/L$; compared to \cite{he2015mirror}, our assumption on $\Phi$ is weaker and our primal and dual step-sizes can be larger than $1/L$ {-- see Remark~\ref{rem:differentiability} for further weakening the assumptions on $\Phi$.} {Finally, the numerical results \rev{also} clearly demonstrate that APD has roughly the same iteration complexity as proximal Mirror-Prox but requires half the computational effort (reflected in the savings in computation time).} \sa{In addition, setting $z=[x^\top, y^\top]^\top$, $F(x,y)=[\grad_x\Phi(x,y)^\top, -\grad_y\Phi(x,y)^\top]^\top$ and $g(x,y)=f(x)+h(y)$ within the VI problem of~\cite{malitsky2018proximal} mentioned above, PEGM can deal with \eqref{eq:original-problem}. It is worth emphasizing that PEGM utilizes a single step-size to update the next iterate and uses $\norm{F(z)-F(\bar{z})}$ to estimate the Lipschitz constant $L=L_{xx}+2L_{yx}+L_{yy}$ locally within the backtracking procedure, while our method uses two different step-sizes (one for the primal and one for the dual update), locally approximating $L_{xx}+L_{yx}$ and $L_{yx}+2L_{yy}$ to choose the primal and dual step-sizes, respectively -- see Assumption~\ref{assum}. Empirically, we have observed that exploiting the special structure of SP problems, compared to more general VI problems, and allowing the primal and dual step-sizes to be chosen separately lead to larger step-sizes, speeding up the convergence in practice.} \rev{For the composite convex problem in \eqref{eq:conic_problem} with $\cK=\reals^m_+$, the method in~\cite{boob2019stochastic} can deal with an \emph{unbounded} dual domain by employing an extra linearization of the constraint function. Provided that a bound $B\geq\norm{y^*}+1$ is \emph{known}, where $y^*$ denotes an arbitrary optimal dual solution to \eqref{eq:conic_problem}, the deterministic method in~\cite{boob2019stochastic} can compute an $\epsilon$-optimal and $\epsilon$-feasible solution to \eqref{eq:conic_problem} with $\cO(1/\epsilon)$ and $\cO(1/\sqrt{\epsilon})$ complexity when $g$ is convex and strongly convex, respectively. On the other hand, if $B<\norm{y^*}+1$, then the complexity deteriorates to $\cO(1/\epsilon^2)$ and $\cO(1/\epsilon)$ for the convex and strongly convex settings, respectively -- see the discussion after \cite[Corollary 2.2 and Theorem 2.3]{boob2019stochastic}. Furthermore, the step-sizes in~\cite{boob2019stochastic} require the knowledge of global Lipschitz constants, and the established convergence rate for the convex setting is non-asymptotic, as the step-size is chosen depending on the tolerance (see the $\eta$ choice in~\cite[Theorem 2.3]{boob2019stochastic}); therefore, the sequence will not converge to a primal-dual pair in the limit.
In contrast, our backtracking scheme APDB generates an asymptotically optimal sequence, and even if a bound on $\norm{y^*}$ is \emph{not known}, it achieves the optimal complexity guarantees of $\cO(1/\epsilon)$ and $\cO(1/\sqrt{\epsilon})$ oracle calls (in total, including those for line search) for convex and strongly convex settings, respectively -- see Section~\ref{sec:without_dualbound}, where we show that even if $L_{xx}$ does not exist (possibly due to an unbounded dual domain), \rev{APDB, i.e., APD with backtracking,} generates a convergent primal-dual sequence without knowing any bound or Lipschitz constants of the problem.} {\bf Organization of the Paper.} In the next section, we precisely state our assumptions on $\cL$ in \eqref{eq:original-problem}, describe \sa{the proposed algorithms, APD and APD with backtracking~(APDB), and present convergence guarantees for APD and APDB iterate sequences, which are the main results of this paper.} Subsequently, in Section~\ref{sec:methodology}, we provide an easy-to-read convergence analysis proving the main results. \sa{Next, in Section~\ref{sec:constrained}, we discuss how APD and APDB can be implemented for solving constrained convex optimization problems.} Later, in Section~\ref{sec:numeric}, we apply our APD and APDB methods to solve the kernel matrix learning and \sa{QCQP problems to numerically compare APD and APDB with the {Mirror-prox} method~\cite{he2015mirror}, PEGM~\cite{malitsky2018proximal}} and off-the-shelf interior point methods. Finally, Section~\ref{sec:conclude} concludes the paper. \section{\rev{The} Accelerated Primal-Dual~\rev{(APD)} Algorithm} \begin{defn} \label{def:bregman} {Let $\varphi_{\cX}:\cX\rightarrow\reals$ and $\varphi_{\cY}:\cY\rightarrow\reals$ be differentiable functions on open sets containing $\dom f$ and $\dom h$, respectively. Suppose $\varphi_{\cX}$ and $\varphi_{\cY}$ have closed domains and are 1-strongly convex with respect to $\norm{\cdot}_{\cX}$ and $\norm{\cdot}_{\cY}$, respectively. Let $\bD_{\cX}:\cX\times\cX\rightarrow\reals_+$ and $\bD_{\cY}:\cY\times\cY\rightarrow\reals_+$ be Bregman distance functions corresponding to $\varphi_{\cX}$ and $\varphi_{\cY}$, i.e., $\bD_{\cX}(x,\bar{x})\triangleq \varphi_{\cX}(x)-\varphi_{\cX}(\bar{x})-\fprod{\grad \varphi_\cX(\bar{x}),x-\bar{x}}$, and $\bD_{\cY}$ \sa{is defined similarly using the same form.} } \end{defn} Clearly, $\bD_\cX(x,\bar{x})\geq \tfrac{1}{2}\norm{x-\bar{x}}_\cX^2$ for $x\in\cX$, $\bar{x}\in\dom f$, and $\bD_{\cY}(y,\bar{y})\geq\tfrac{1}{2}\norm{y-\bar{y}}_\cY^2$ for $y\in\cY$ and $\bar{y}\in\dom h$. The dual spaces are denoted by $\cX^*$ and $\cY^*$. For $x'\in\cX^*$, we define the dual norm $\norm{x'}_{\cX^*}\triangleq\max\{\fprod{x',x}:\ \norm{x}_\cX\leq 1\}$, and $\norm{\cdot}_{\cY^*}$ is defined similarly. {We next state {our main assumption and} explain the APD algorithm for \eqref{eq:original-problem}, and discuss its convergence properties as the main results of this paper.} \begin{assumption}\label{assum} Let $\bD_{\cX}$ and $\bD_\cY$ be some Bregman distance functions as in Definition~\ref{def:bregman}. In case {$f$ is strongly convex, i.e., $\mu>0$,} we fix $\norm{x}_\cX=\sqrt{\fprod{x,x}}$, and set $\bD_{\cX}(x,\bar{x})=\frac{1}{2}\norm{x-\bar{x}}_\cX^2$.
Suppose $f$ and $h$ are closed convex, and $\Phi$ is \sa{continuous} such that\\ {\bf (i)} for any \sa{$y\in\dom h\subset \cY$}, \sa{$\Phi(\cdot,y)$} is convex and differentiable, and {for some $L_{xx}\geq 0$,} \begin{equation} \label{eq:Lxx} \norm{\grad_x \Phi(x,y)-\grad_x \Phi(\bar{x},y)}_{\cX^*}\leq L_{xx}\norm{x-\bar{x}}_\cX,\quad \forall~x,\bar{x}\in\dom f\subset \cX, \end{equation} {\bf (ii)} for any $x\in\dom f$, \sa{$\Phi(x,\cdot)$} is concave and differentiable; \sa{there exist $L_{yx}>0$ and $L_{yy}\geq 0$ such that for all $x,\bar{x}\in\dom f$ and $y,\bar{y}\in\dom h$, one has} \begin{align} \label{eq:Lipschitz_y} \norm{\grad_y \Phi(x,y)-\grad_y \Phi(\bar{x},\bar{y})}_{\cY^*}\leq L_{yy}\norm{y-\bar{y}}_\cY+{L_{yx}}\norm{x-\bar{x}}_\cX. \end{align} \begin{comment} {\bf (iii)} {For any $\bar{x}\in\dom f$, $s\in\cX^*$ and $t>0$, {$\argmin_{x \in \cX} \left\{ tf(x)+\fprod{s,x}+\bD_{\cX}(x,\bar{x}) \right\}$} can be computed efficiently. Similarly, $\argmin_{y \in \cY} \left\{ th(y)+\fprod{s,y}+\bD_{\cY}(y,\bar{y}) \right\}$ is easy to compute for any $\bar{y}\in\dom h$, $s\in\cY^*$ and $t>0$.} \end{comment} \end{assumption} \sa{ We first analyze the convergence properties of {APD}, displayed in Algorithm~\ref{alg:APD}, which repeatedly calls for the subroutine \textbf{MainStep} stated in Algorithm~\ref{alg:mainstep}.} \begin{remark} \label{rem:prox} \sa{$x$- and $y$-subproblems of APD are generalizations of the Moreau map~\cite{hiriart2012fundamentals}. Compared to ours, i.e., $\argmin_{y \in \cY} \left\{ h(y)-\fprod{s,y}+\tfrac{1}{\sigma}\bD_{\cY}(y,\bar{y}) \right\}$,} HPE-type method in~\cite{kolossoski2017accelerated} requires solving $\argmax_{y\in Y} \Phi(\bar{x},y)-\bD_{\cY}(y,\bar{y})/\sigma$ as the $y$-subproblem for some given $\bar{x}$ and $\bar{y}$ where $Y\subset\cY$ is a bounded convex set. This may not be a trivial operation in general. \vspace*{-2mm} \end{remark} \begin{algorithm}[h!] \caption{\textbf{MainStep}$(\bar{x},\bar{y},x_p,y_p,\tau,\sigma,\theta)$} \label{alg:mainstep} \begin{algorithmic}[1] \STATE{\bfseries input: $\tau,\sigma,\theta>0$, $(\bar{x},\bar{y})\in\cX\times\cY$, $(x_p,y_p)\in\cX\times\cY$} \STATE{$s\gets (1+\theta)\grad_y \Phi(\bar{x},\bar{y})-\theta\grad_y\Phi(x_p,y_p)$} \label{algeq:s} \STATE{$\hat{y}\gets \argmin_{y\in\cY} h(y)-\fprod{s,~ y}+{\frac{1}{\sigma}}\bD_{\cY}(y,\bar{y})$} \label{algeq:y-main} \STATE{$\hat{x}\gets \argmin_{x\in\cX} f({x})+\fprod{\grad_x\Phi(\bar{x},\hat{y}), ~x}+{\frac{1}{\tau}}\bD_{\cX}(x,\bar{x})$} \label{algeq:x-main} \RETURN $(\hat{x},\hat{y})$ \end{algorithmic} \end{algorithm} \begin{algorithm}[h!] 
\caption{Accelerated Primal-Dual algorithm~(APD)} \label{alg:APD} \begin{algorithmic}[1] \STATE{\bfseries Input: $\mu\geq 0$, $\sa{\tau_0,\sigma_0>0}$, $(x_0,y_0)\in\cX\times\cY$} \STATE{$(x_{-1},y_{-1})\gets(x_0,y_0)$, \sa{$\sigma_{-1}\gets\sigma_0$, $\gamma_0\gets \sigma_0/\tau_0$}} \FOR{$k\geq 0$} \STATE{\sa{$\sigma_{k}\gets \gamma_k\tau_{k}$,\quad $\theta_{k}\gets\frac{\sigma_{k-1}}{\sigma_k}$}} \STATE $(x_{k+1},y_{k+1})\gets\hbox{\textbf{MainStep}}(x_k,y_k,x_{k-1},y_{k-1},\tau_k,\sigma_k,\theta_k)$ \STATE{\sa{$\gamma_{k+1}\gets \gamma_k(1+\mu\tau_k)$,\quad $\tau_{k+1}\gets{\tau_k}\sqrt{\frac{\gamma_{k}}{\gamma_{k+1}}}$,\quad $k\gets k+1$}} \label{algeq:gamma_update_APD} \ENDFOR \end{algorithmic} \end{algorithm} Recall that if {$f$ is convex with modulus $\mu\geq 0$}, then { \begin{align}\label{assum-sc} f(x)\geq f(\bar{x})+\fprod{g,~x-\bar{x}}+\frac{\mu}{2}\norm{x-\bar{x}}_\cX^2,\quad \forall~g\in \partial f(\bar{x}),\quad \forall~x,\bar{x}\in\dom f. \end{align}} Note also that \eqref{eq:Lxx} and convexity imply that for any $y\in\dom h$ and $x,\bar{x}\in\dom f$, { \begin{align} 0 &\leq \Phi(x,y) - \Phi(\bar{x},y)-\fprod{\grad_x\Phi(\bar{x},y),~x-\bar{x}} \leq\frac{L_{xx}}{2}\norm{x-\bar{x}}_\cX^2. \label{eq:Lxx_bound} \end{align}} \begin{theorem}\label{thm:main} \sa{\bf (Main Result I)} Let $\bD_{\cX}$ and $\bD_\cY$ be some Bregman distance functions as in Definition~\ref{def:bregman}. Step-size update rule in Algorithm~\ref{alg:APD} implies that \vspace*{-3mm} \begin{align}\label{eq:step-size-rule} \theta_{k+1}=\frac{1}{\sqrt{1+\mu\tau_k}},\quad \tau_{k+1}=\theta_{k+1}\tau_k,\quad \sigma_{k+1}=\sigma_k/\theta_{k+1},\quad\forall~k\geq 0. \end{align} \vspace*{-3mm} Suppose Assumption~\ref{assum} holds, and $\{x_k,y_k\}_{k\geq 0}$ is generated by APD, stated in Algorithm~\ref{alg:APD}, starting from $\tau_0,\sigma_0>0$ such that {\small \begin{align} \label{eq:initial_step_condition} \Big(\frac{1-\delta}{\tau_0}-L_{xx}\Big)\frac{1}{\sigma_0}\geq \frac{{L^2_{yx}}}{c_\alpha},\quad\ 1-\big(\delta+c_\alpha+c_\beta\big)\geq \frac{L_{yy}^2}{c_\beta}\sigma^2_0, \end{align}} for some $\delta, c_\alpha,\ c_\beta\in\reals_+$ such that $c_\alpha+c_\beta+\delta\leq 1$ satisfying $c_\alpha,~c_\beta> 0$ when $L_{yy}>0$, and $c_\alpha>0$, $c_\beta=0$ when $L_{yy}=0$. Then for any $(x,y)\in\cX\times\cY$,\vspace*{-1mm} \begin{align} {\cL(\bar{x}_K,y)-\cL(x,\bar{y}_K) \leq} \frac{1}{T_K}{\Delta(x,y),\quad \Delta(x,y)\triangleq \frac{1}{\tau_0}\bD_{\cX}(x,x_0)+\frac{1}{\sigma_0}\bD_{\cY}(y,y_0),}\label{eq:delta} \end{align} holds for all $K\geq 1$, where $\bar{x}_K\triangleq\frac{1}{T_K}\sum_{k=0}^{K-1} t_k x_{k+1}$, $\bar{y}_K\triangleq\frac{1}{T_K}\sum_{k=0}^{K-1}t_ky_{k+1}$ and $T_K\triangleq\sum_{k=0}^{K-1}t_k$ for some \rev{$\{t_k\}_{k\geq 0}\subset\reals_{++}$ as stated in Parts I and II below:} \textbf{(Part I.)} {Suppose $\mu=0$, step-size rule in Algorithm~\ref{alg:APD} implies $\tau_k=\tau_0$, $\sigma_k=\sigma_0$ and $\theta_k=1$ for $k\geq 0$. Then \eqref{eq:delta} holds for $\{t_k\}$ such that $t_k=1$ for $k\geq 0$; hence, $T_K=K$.} If a saddle point for \eqref{eq:original-problem} exists and {$\delta>0$}, then $\{(x_k,y_k)\}_{k\geq 0}$ converges to a saddle point $(x^*,y^*)$ such that $0\leq \cL(\bar{x}_K,y^*)-\cL(x^*,\bar{y}_K) \leq \cO(1/K)$ and {\small \begin{align} \label{eq:xy-bound-I} {\gamma_0\bD_{\cX}(x^*,x_{K}) +[1-(c_\alpha+c_\beta)]\bD_{\cY}(y^*,y_{K}) \leq \sigma_0\Delta(x^*,y^*)}. 
\end{align}} \textbf{(Part II.)} Suppose $\mu>0$ and $L_{yy}=0$, in this setting let $\norm{x}_\cX=\sqrt{\fprod{x,x}}$, and $\bD_{\cX}(x,\bar{x})=\frac{1}{2}\norm{x-\bar{x}}_\cX^2$. {Then \eqref{eq:delta} holds for $\{t_k\}$ such that $t_k=\sigma_k/\sigma_0$ for $k\geq 0$, and $T_K =\mathsf{T}heta(K^2)$.}\footnote{$f(k)=\mathsf{T}heta(k)$ means $f(k)=\cO(k)$ and $f(k)=\Omega(k)$.} \begin{comment} for any $(x,y)\in\cX\times\cY$, \begin{align*} \cL(\bar{x}_K,y)-\cL(x,\bar{y}_K)\leq\sa{\frac{1 }{T_K}} \Delta(x,y) \end{align*} holds for all $K\geq 1$, where $\bar{x}_K=\frac{1}{T_K}\sum_{k=0}^{K-1}\sa{\frac{\sigma_k}{\sigma_0}} x_{k+1}$, $\bar{y}_K=\frac{1}{T_K}\sum_{k=0}^{K-1}\sa{\frac{\sigma_k}{\sigma_0}} y_{k+1}$, \end{comment} {If a saddle point for \eqref{eq:original-problem} exists,\footnote{\rev{Since $\mu>0$, all saddle points share the same unique $x$-coordinate, say $x^*\in\cX$.}} then $\{x_k\}_{k\geq 0}$ converges to $x^*$ and $\{y_k\}$ has a limit point. Moreover, if $\delta>0$, then any limit point $(x^*,y^*)$ is a saddle point and it holds that $0\leq \cL(\bar{x}_K,y^*)-\cL(x^*,\bar{y}_K) \leq \cO(1/K^2)$ and} {\small \begin{align} \label{eq:xy-bound-II} \gamma_K\bD_{\cX}(x^*,x_{K}) +(1-c_\alpha)\bD_{\cY}(y^*,y_{K}) \leq \sigma_0 \Delta(x^*,y^*) \end{align}} holds with $\gamma_K=\Omega(K^2)$, which implies that $\bD_{\cX}(x^*,x_{K})=\cO(1/K^2)$. \end{theorem} \begin{comment} \rev{$\bD_{\cX}(x^*,x_K)\leq \frac{\tau_K}{\sigma_K}\sigma_0\Delta(x^*,y^*)=\cO(1/K^2)$, and for $c_\sigma\in(0,1)$ one can also obtain that $\bD_{\cY}(y^*,y_K)\leq \eyh{\frac{\sigma_0}{1-c_\sigma}}{\Delta(x^*,y^*)}$.}\todo{Why do we state convergence only for $\{x_k\}$?} \end{comment} \begin{proof} See Section~\ref{sec:main_thm_proof} for the proof of \rev{the main result I}. \end{proof} \begin{remark} \sa{The particular choice of initial step-sizes, $\tau_0= c_\tau(L_{xx}+L_{yx}^2/\alpha)^{-1}$ and $\sigma_0= c_\sigma(\alpha+2L_{yy})^{-1}$ for any $\alpha>0$ and $c_\tau,\ c_\sigma\in(0,1]$, satisfies \eqref{eq:initial_step_condition}.} \end{remark} \begin{remark} \label{rem:CP-step-size-cond} \sa{The requirement in \eqref{eq:initial_step_condition} generalizes the step-size condition in~\cite{chambolle2016ergodic} for \eqref{eq:CP-problem} with $\hat{f}(x)=f(x)+g(x)$ such that $f$ is closed convex and $g$ is convex having Lipschitz continuous gradient with constant $L_g$. It is required in~\cite{chambolle2016ergodic} that $\left(\frac{1}{\tau_0}-L_g\right)\frac{1}{\sigma_0}\geq \norm{K}^2$. For \eqref{eq:CP-problem}, $\Phi(x,y)=g(x)+\fprod{Kx,y}$; hence, $L_{xx}=L_g$, $L_{yx}=\norm{K}$ and $L_{yy}=0$. Note when $L_{yy}=0$, the second condition in~\eqref{eq:initial_step_condition} holds for all $\sigma_0>0$; thus, setting $c_\alpha=1$, $c_\beta=0$ and $\delta=0$, \eqref{eq:initial_step_condition} reduces to the condition in~\cite{chambolle2016ergodic}.} \sa{Moreover, when $f$ is strongly convex, it is shown in \cite{chambolle2016ergodic} that $\tau_0=\frac{1}{2L_g}$ and $\sigma_0=\frac{L_g}{\norm{K}^2}$ can be used to achieve an accelerated rate of $\cO(1/K^2)$ -- see~\cite[Section~5.2]{chambolle2016ergodic}. 
Note since $L_{yy}=0$, setting $c_\alpha=1$, $c_\beta=0$ and $\delta=0$, we see that $\tau_0=\frac{1}{2L_{xx}}$ and $\sigma_0=\frac{L_{xx}}{L_{yx}^2}$ satisfies \eqref{eq:initial_step_condition}, and these initial step-sizes are the same as those in~\cite[Section~5.2]{chambolle2016ergodic}.} \end{remark} \begin{remark} \label{rem:differentiability} As in~\cite{kolossoski2017accelerated}, assuming a stronger oracle we can remove \sa{the assumption of $\grad_x\Phi(\cdot,y)$ being Lipschitz for each $y$, i.e., if we replace Line~\ref{algeq:x-main} of \textbf{MainStep} with $\argmin_x f(x)+\Phi(x,\hat{y})+D_{\cX}(x,\bar{x})/\tau$,} then we can remove assumption in \eqref{eq:Lxx}; hence, even if $\Phi$ is nonsmooth in $x$, all our rate results will continue to hold. Let $\Phi(x,y)=x^2y$ on $x\in[-1,1]$ and $y\geq 0$, \sa{i.e., $f(x)=\ind{[-1,1]}(x)$ and $h(y)=\ind{\reals_+}(y)$}; the Lipschitz constant $L$ for $\grad \Phi$ would not exist in this case, and it is not clear how one can modify the analysis of~\cite{he2015mirror} to deal with problems when $\Phi$ is not jointly differentiable.\vspace*{-4mm} \end{remark} \rev{\begin{remark} \label{rem:MP-APD} {At each iteration $k\geq 0$,} our proposed algorithm, APD, {requires computing} only one pair of primal-dual gradients, i.e., $\grad_x\Phi(x_{k+1},y_k)$ and $\grad_y\Phi(x_k,y_k)$ -- note that $\grad_y\Phi(x_{k-1},y_{k-1})$ required for iteration $k$ can be retrieved from iteration $k-1$. However, Mirror-prox~{(MP)} in~\cite{he2015mirror} uses \emph{two} pairs of primal-dual gradients at each iteration $k\geq 0$. {Thus, for a total of $K\geq 1$ iterations, while APD uses $K$ pairs of primal-dual gradients, Mirror-prox uses $2K$ pairs.} {Next we compare the iteration complexity of the two methods}. Consider the problem \eqref{eq:original-problem} and suppose $f(\cdot)=\ind{X}(\cdot)$ and $h(\cdot)=\ind{Y}(\cdot)$ for some compact convex sets $X\subset\cX$ and $Y\subset\cY$. Let $Z\triangleq X\times Y$, and $L$ be the global Lipschitz constant of {$\grad\Phi(\cdot,\cdot)$ over $Z$}. Moreover, let $z=[x^\top;y^\top]^\top$ and define $\bD_Z(z,\bar{z})\triangleq \bD_X(x,\bar{x})+\bD_Y(y,\bar{y})$, for any $z,\bar{z}\in Z$. After $K\geq 1$ iterations, MP~\cite{he2015mirror} can bound $\cG(\bar{z}_K)\triangleq\sup_{(x,y)\in X\times Y} (\Phi(\bar{x}_K,y)-\Phi(x,\bar{y}_K))$ as $\cG(\bar{z}_K)\leq R_{\rm MP}(K)\triangleq\frac{L}{K}\sup_{z\in Z} {\bD_Z(z,z_0)}$; in comparison, APD guarantees that {\small \begin{align*} &\cG(\bar{z}_K)\leq R_{\rm APD}(K)\triangleq\frac{1}{K}\sup_{(x,y)\in X\times Y}\{(L_{xx}+L_{yx}){\bD_X(x,x_0)}+(2L_{yy}+L_{yx}){\bD_Y(y,y_0)}\}. \end{align*}} Note that from \eqref{eq:Lxx} and \eqref{eq:Lipschitz_y} one can conclude that {$\grad\Phi$ is Lipschitz with $L=L_{xx}+2L_{yx}+L_{yy}$. Thus, when $L_{yy}=0$, which is the case for constrained convex optimization problems (as the Lagrangian is an affine function of the dual variable), {MP} bound is larger than the APD bound, i.e., $R_{\rm APD}(K)\leq R_{\rm MP}(K)$ for $K\geq 1$}. Therefore, in this setting, for any $\epsilon>0$, the number of primal-dual gradient calls required by MP to ensure $R_{\rm MP}(K)\leq \epsilon$ is at least \emph{twice} the number of APD primal-dual gradient calls to ensure the same accuracy $R_{\rm APD}(K)\leq \epsilon$. \end{remark}} Our method generalizes the primal-dual method proposed by~\cite{chambolle2016ergodic} to solve SP problems with coupling term $\Phi$ that is \emph{not} bilinear. 
According to Remark~\ref{rem:CP-step-size-cond}, \rev{to solve \eqref{eq:CP-problem},} any $\tau_0,\sigma_0>0$ such that $\left(\frac{1}{\tau_0}-L_g\right)\frac{1}{\sigma_0}\geq \norm{K}^2$ work for both Algorithm~1 in~\cite{chambolle2016ergodic} and APD, and both methods generate the same iterate sequence with same error bounds. \begin{comment} We can equivalently represent it as \eqref{eq:original-problem} by setting $\Phi(x,y)=g(x)+\fprod{Kx,y}$. Note that $L_{xx}=L$, $L_{yy}=0$, and $L_{yx}=\norm{K}$, where $\norm{\cdot}$ denotes the spectral norm; hence, according to Part I of Theorem~\ref{thm:main} primal-dual steps are constant {satisfying $\tau_0\leq \frac{1}{L+\norm{K}^2/\alpha}$ and $\sigma_0\sa{\leq} \frac{1}{\alpha}$.} This choice of step-sizes for our APD algorithm {is equivalent to} the condition $(1/\tau_0-L)\geq \sigma_0 \norm{K}^2$ for Algorithm~1 in~\cite{chambolle2016ergodic}; and both algorithms generate the same iterate sequence with same error bounds. \end{comment} Similarly, from Remark~\ref{rem:CP-step-size-cond}, in case $f$ is strongly convex, when $\{(\tau_k,\sigma_k,\theta_k)\}$ is chosen as \sa{in~\eqref{eq:step-size-rule} starting from $\tau_0=\frac{1}{2L_g}$ and $\sigma_0=\frac{L_g}{\norm{K}^2}$}, our APD algorithm and Algorithm~4 in~\cite{chambolle2016ergodic} again output the same iterate sequence with the same error bounds. Therefore, APD algorithm inherits the already established connections of the primal-dual framework in~\cite{chambolle2016ergodic} to other well-known methods, e.g., {(linearized) ADMM~\cite{shefi2014rate,aybat2018distributed} and Arrow-Hurwicz method \cite{arrow1958studies}.} \begin{algorithm}[h!] \caption{Accelerated Primal-Dual algorithm with Backtracking~(APDB)} \label{alg:APDB} \begin{algorithmic}[1] \STATE{\bfseries Input: $(x_0,y_0)\in\cX\times\cY$, $\mu\geq 0$, \sa{$c_\alpha,c_\beta,\delta\geq 0$}, $\eta\in (0,1)$, $\sa{\bar{\tau},\gamma_0>0}$} \STATE{$(x_{-1},y_{-1})\gets(x_0,y_0)$, \sa{$\tau_0\gets\bar{\tau}$, $\sigma_{-1}\gets\gamma_0\tau_0$}} \FOR{$k\geq 0$} \LOOP \STATE{\sa{$\sigma_{k}\gets \gamma_k\tau_{k}$,\quad $\theta_{k}\gets\frac{\sigma_{k-1}}{\sigma_k}$},\quad \sa{$\alpha_{k+1}\gets c_\alpha/\sigma_k$,\quad $\beta_{k+1}\gets c_\beta/\sigma_k$}} \STATE{$(x_{k+1},y_{k+1})\gets\hbox{\textbf{MainStep}}(x_k,y_k,x_{k-1},y_{k-1},\tau_k,\sigma_k,\theta_k)$}\label{algeq:APDB-mainstep} \IF{\sa{$E_k(x_{k+1},y_{k+1})\leq -\frac{\delta}{\tau_k}\bD_\cX(x_{k+1},x_k)-\frac{\delta}{\sigma_k}\bD_\cY(y_{k+1},y_k)$}}\label{algeq:test_function} \STATE{\textbf{go to} Line 13} \ELSE \STATE{\sa{$\tau_{k}\gets \eta \tau_k$}} \ENDIF \ENDLOOP \STATE{\sa{$\gamma_{k+1}\gets \gamma_k(1+\mu\tau_k)$},\quad \sa{$\tau_{k+1}\gets{\tau_k}\sqrt{\frac{\gamma_{k}}{\gamma_{k+1}}}$},\quad \sa{$k\gets k+1$}} \label{algeq:gamma_update_APDB} \ENDFOR \end{algorithmic} \end{algorithm} \sa{For some problems, it may either be hard to guess\rev{/know} the Lipschitz constants, $L_{xx}$, $L_{yx}$ and $L_{yy}$, or using these constants may well lead to too conservative step-sizes. 
Next, inspired by the work~\cite{malitsky2018first}, we propose a backtracking scheme to approximate the Lipschitz constants \emph{locally} and incorporate it within the APD framework as shown in Algorithm~\ref{alg:APDB}, which we call {APDB}.} \sa{To check whether the step-sizes chosen at each iteration $k\geq 0$ \rev{are} in accordance with \emph{local} Lipschitz constants, we define a test function $E_k(\cdot,\cdot)$ that employs a linearization of $\Phi$ with respect to both $x$ and $y$.} \sa{For some free parameter sequence $\{\alpha_k,\beta_k\}_{k\geq 0}\subseteq \mathbb{R}_{+}$, we define}\vspace*{-3mm} \sa{\small \begin{align}\label{eq:Ek} E_k(x,y)\triangleq &\Phi(x,y)-\Phi(x_k,y)-\fprod{\grad_x\Phi(x_k,y),x-x_k}-\frac{1}{\tau_k}\bD_\cX(x,x_k)\nonumber\\ &+\tfrac{1}{2\alpha_{k+1}}\norm{\grad_y\Phi(x,y)-\grad_y\Phi(x_k,y)}^2+\tfrac{1}{2\beta_{k+1}}\norm{\grad_y\Phi(x_k,y)-\grad_y\Phi(x_k,y_k)}^2 \nonumber\\ &-\Big(\frac{1}{\sigma_k}-\theta_k(\alpha_k+\beta_k)\Big)\bD_\cY(y,y_k), \end{align}} where we set $0^2/0=0$, which may arise when $L_{yy}=0$ and $\beta_k=0$. \sa{For \rev{the} \emph{particular} $\alpha_k,\beta_k\geq 0$ and $\theta_k$ specified as in Algorithm~\ref{alg:APDB}, we get $E_k(x,y)\leq (L_{xx}+\frac{L_{yx}^2}{c_\alpha}\sigma_k-\frac{1}{\tau_k})\bD_\cX(x,x_k)+(\frac{L_{yy}^2}{c_\beta}\sigma_k+\frac{c_\alpha+c_\beta-1}{\sigma_k})\bD_\cY(y,y_k)$; hence, $E_k$ can be bounded by using the global Lipschitz constants, which prescribes how $\{\tau_k,\sigma_k\}$ should be chosen for APDB so that the test condition in Line~\ref{algeq:test_function} of Algorithm~\ref{alg:APDB} is satisfied.} The rate statement and convergence result of the APDB method, displayed in Algorithm~\ref{alg:APDB}, are given in the next theorem. \begin{theorem}\label{thm:backtrack} \sa{\bf (Main Result II)} Suppose Assumption~\ref{assum} holds. Let $\delta\in[0,1)$, $c_\alpha>0$ and $c_\beta\geq 0$ be chosen as stated below, and define {\small \begin{align} \label{eq:tauhat_bound} \Psi_1\triangleq\frac{c_\alpha L_{xx}}{2\gamma_0L_{yx}^2}\zeta,\ \Psi_2\triangleq \frac{\sqrt{c_\beta(1-(c_\alpha+c_\beta+\delta))}}{\gamma_0L_{yy}},\ \zeta\triangleq -1+\sqrt{1+\frac{4(1-\delta)\gamma_0}{c_\alpha }\frac{L_{yx}^2}{L_{xx}^2}}. \end{align}} For any given $(x_0,y_0)\in\dom f\times \dom h$ and $\bar{\tau},\gamma_0>0$, APDB, stated in Algorithm \ref{alg:APDB}, is well-defined, i.e., the number of inner iterations is finite and bounded by {$1+\log_{1/\eta}(\frac{\bar{\tau}}{\Psi})$} uniformly for $k\geq 0$ for some $\Psi>0$. Let $\{x_k,y_k\}_{k\geq 0}$ denote the iterate sequence generated by APDB, using the test function $E_k$ in \eqref{eq:Ek}. Then {for any $(x,y)\in\cX\times\cY$}, \eqref{eq:delta} holds for $\{t_k\}$ such that {$t_k=\sigma_k/\sigma_0$} for $k\geq 0$, where $(\bar{x}_K,\bar{y}_K)$ and $T_K$ are defined for $K\geq 1$ as in Theorem~\ref{thm:main}.\\ \textbf{(Part I.)} Suppose $\mu=0$. \rev{Let} $c_\alpha+c_\beta+\delta\in(0,1)$ if $L_{yy}>0$; and $c_\beta=0$, $c_\alpha+\delta\in(0,1]$, otherwise. For this setting, {$\Psi=\Psi_1$ if $L_{yy}=0$ and $\Psi=\min\{\Psi_1,\Psi_2\}$ if $L_{yy}>0$}; moreover, $T_K =\Omega(K)$, implying an $\cO(1/K)$ sublinear rate for \eqref{eq:delta}.
\begin{comment} then for any $(x,y)\in\cX\times\cY$, \begin{align}\label{eq:rate-backtrack} {\cL(\bar{x}_K,y)-\cL(x,\bar{y}_K) \leq} \frac{\sigma_0}{T_K}{\Delta(x,y) ,\quad \Delta(x,y)\triangleq \frac{1}{\tau_0}\bD_{\cX}(x,x_0)+\frac{1}{\sigma_0}\bD_{\cY}(y,y_0),} \end{align} holds for all $K\geq 1$, where $\bar{x}_K=\frac{1}{T_K}\sum_{k=0}^{K-1}{\sigma_k {x_{k+1}}}$, $\bar{y}_K=\frac{1}{T_K}\sum_{k=0}^{K-1}{\sigma_k {y_{k+1}}}$, and $T_K=\sum_{k=0}^{K-1}\sigma_k=\Omega(K)$. \end{comment} Moreover, if a saddle point for \eqref{eq:original-problem} exists and {$\delta>0$ is chosen}, then $\{(x_k,y_k)\}_{k\geq 0}$ converges to a saddle point $(x^*,y^*)$ such that \eqref{eq:xy-bound-I} holds.\\ \textbf{(Part II.)} Suppose $\mu>0$ and $L_{yy}=0$. Let $\norm{x}_\cX=\sqrt{\fprod{x,x}}$, and $\bD_{\cX}(x,\bar{x})=\frac{1}{2}\norm{x-\bar{x}}_\cX^2$. Assume $c_\alpha+\delta\in(0,1]$ and $c_\beta=0$. For this setting, {$\Psi=\Psi_1$} and $T_K=\Omega(K^2)$, implying $\cO(1/K^2)$ sublinear rate for \eqref{eq:delta}. If a saddle point for \eqref{eq:original-problem} exists,\footnote{\rev{Since $\mu>0$, all saddle points share the same unique $x$-coordinate, say $x^*\in\cX$.}} then $\{x_k\}$ converges to $x^*$ and $\{y_k\}$ has a limit point. Moreover, if $\delta>0$, then any limit point $(x^*,y^*)$ is a saddle point satisfying $0\leq \cL(\bar{x}_K,y^*)-\cL(x^*,\bar{y}_K) \leq \cO(1/K^2)$ for $K\geq 1$, and \eqref{eq:xy-bound-II} holds with $\gamma_K=\Omega(K^2)$. \end{theorem} \section{Methodology} \label{sec:methodology} \vspace*{-2mm} \sa{The most generic form of our method, {GAPD}, is presented in Algorithm~\ref{alg:GAPD} which takes step-size sequences as input and repeatedly calls for the subroutine \textbf{MainStep} stated in Algorithm~\ref{alg:mainstep}.} In this section, we provide a general result \sa{in Theorem~\ref{thm:general-bound} for GAPD} unifying \sa{the analyses of merely and strongly {convex} cases described in Theorems~\ref{thm:main} and~\ref{thm:backtrack}}. An easy-to-read convergence analysis is given at the end of this section. In our analysis, \sa{to show the general result in Theorem~\ref{thm:general-bound}}, we assume some conditions on $\{(\tau_k,\sigma_k,\theta_k)\}_{k\geq 0}$, stated in Assumption~\ref{assum:step}; \rev{furthermore, we also discuss in this section that when the Lipschitz constants are known, one can replace Assumption~\ref{assum:step} with Assumption~\ref{assum:step-2}, as it implies Assumption~\ref{assum:step}.} \sa{Later we show that step-size sequence $\{(\tau_k,\sigma_k,\theta_k)\}_k$ generated by APD and APDB, i.e., Algorithms~\ref{alg:APD} and~\ref{alg:APDB}, satisfies these conditions in \rev{Assumptions~\ref{assum:step-2} and~\ref{assum:step}}, respectively.} We define $0^2/0=0$ which may arise when $L_{yy}=0$. \sa{To make the notation tractable, we define some quantities now:} for $k\geq 0$, let $q_k\triangleq \grad_y\Phi(x_k,y_k)-\grad_y\Phi(x_{k-1},y_{k-1})$ and $s_k \sa{\triangleq} \grad_y\Phi(x_k,y_k)+\theta_kq_k$ -- \sa{see Line~\ref{algeq:s} of MainStep in Algorithm~\ref{alg:mainstep} and Line~\ref{algeq:mainstep-GAPD} of GAPD in Algorithm~\ref{alg:GAPD}}. 
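Before stating the step-size conditions, we include a small, purely illustrative sketch of how \textbf{MainStep} and the APD step-size recursion fit together in the Euclidean setting, i.e., assuming $\bD_{\cX}(x,\bar{x})=\frac{1}{2}\norm{x-\bar{x}}_2^2$ and $\bD_{\cY}(y,\bar{y})=\frac{1}{2}\norm{y-\bar{y}}_2^2$, in which case the $x$- and $y$-subproblems reduce to standard Moreau proximal maps. The oracle names below (\texttt{prox\_f}, \texttt{prox\_h}, \texttt{grad\_x\_Phi}, \texttt{grad\_y\_Phi}) are ours and are assumed to be supplied by the user; the sketch only mirrors Algorithms~\ref{alg:mainstep} and~\ref{alg:APD} and is not part of the formal development.
\begin{verbatim}
import numpy as np

def main_step(x_bar, y_bar, x_prev, y_prev, tau, sigma, theta,
              grad_x_Phi, grad_y_Phi, prox_f, prox_h):
    """MainStep subroutine with Euclidean Bregman distances."""
    # extrapolated dual gradient:
    #   s = (1+theta)*grad_y Phi(x_bar, y_bar) - theta*grad_y Phi(x_prev, y_prev)
    s = (1.0 + theta) * grad_y_Phi(x_bar, y_bar) \
        - theta * grad_y_Phi(x_prev, y_prev)
    # y-subproblem: argmin_y h(y) - <s, y> + ||y - y_bar||^2 / (2*sigma)
    #             = prox_{sigma*h}(y_bar + sigma*s)
    y_hat = prox_h(y_bar + sigma * s, sigma)
    # x-subproblem: argmin_x f(x) + <grad_x Phi(x_bar, y_hat), x>
    #                             + ||x - x_bar||^2 / (2*tau)
    #             = prox_{tau*f}(x_bar - tau*grad_x Phi(x_bar, y_hat))
    x_hat = prox_f(x_bar - tau * grad_x_Phi(x_bar, y_hat), tau)
    return x_hat, y_hat

def apd(x0, y0, tau0, sigma0, mu, K,
        grad_x_Phi, grad_y_Phi, prox_f, prox_h):
    """APD driver loop: K iterations, returning the last iterate and the
    weighted ergodic averages (weights t_k = sigma_k/sigma_0)."""
    x, y = x0.copy(), y0.copy()
    x_prev, y_prev = x0.copy(), y0.copy()
    tau, sigma_prev = tau0, sigma0          # sigma_{-1} = sigma_0
    gamma = sigma0 / tau0                   # gamma_0 = sigma_0/tau_0
    x_avg = np.zeros_like(x0, dtype=float)
    y_avg = np.zeros_like(y0, dtype=float)
    T = 0.0
    for _ in range(K):
        sigma = gamma * tau                 # sigma_k = gamma_k*tau_k
        theta = sigma_prev / sigma          # theta_k = sigma_{k-1}/sigma_k
        x_new, y_new = main_step(x, y, x_prev, y_prev, tau, sigma, theta,
                                 grad_x_Phi, grad_y_Phi, prox_f, prox_h)
        t = sigma / sigma0                  # t_k = sigma_k/sigma_0 (=1 if mu=0)
        x_avg += t * x_new; y_avg += t * y_new; T += t
        gamma_next = gamma * (1.0 + mu * tau)      # gamma_{k+1}
        tau *= np.sqrt(gamma / gamma_next)         # tau_{k+1}
        gamma, sigma_prev = gamma_next, sigma
        x_prev, y_prev, x, y = x, y, x_new, y_new
    return x, y, x_avg / T, y_avg / T
\end{verbatim}
Here \texttt{prox\_f(v, t)} is assumed to return $\argmin_{x} tf(x)+\frac{1}{2}\norm{x-v}_2^2$ (and similarly for \texttt{prox\_h}); when $\mu=0$ the recursion keeps $\tau_k=\tau_0$, $\sigma_k=\sigma_0$ and $\theta_k=1$, recovering the constant step-size regime of Theorem~\ref{thm:main}.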
\sa{ \begin{assumption} \label{assum:step} ({\bf Step-size Condition I}) There exists $\{\tau_k,\sigma_k,\theta_k\}_{k\geq 0}$ such that $\{(x_k,y_k)\}_{k\geq 0}$ generated by GAPD, displayed in Algorithm~\ref{alg:GAPD}, and the step-size sequence together satisfy the following conditions for $k\geq 0$: \begin{subequations}\label{eq:step-size-condition} {\small \begin{align} & E_k(x_{k+1},y_{k+1})\leq \sa{-\delta \Big[\bD_\cX(x_{k+1},x_k)/\tau_k+\bD_\cY(y_{k+1},y_k)/\sigma_k\Big]}, \label{eq:step-size-condition-Ek}\\ & {t_k\big(\frac{1}{\tau_k}+\mu\big)\geq \frac{t_{k+1}}{\tau_{k+1}}},\quad {\frac{t_k}{\sigma_k}\geq \frac{t_{k+1}}{\sigma_{k+1}}},\quad \frac{t_k}{t_{k+1}}=\theta_{k+1}, \label{eq:step-size-condition-theta} \end{align}} \end{subequations} {for some positive $\{t_k,~\alpha_k\}_{k\geq 0}$ such that $t_0=1$, nonnegative $\{\beta_k\}_{k\geq 0}$} and $\delta\in[0,1)$, where $E_k(\cdot,\cdot)$ is defined in~\eqref{eq:Ek} using $\{\alpha_k,\beta_k,\theta_k\}$ as above. \end{assumption}} \sa{ \begin{assumption} \label{assum:step-2} ({\bf Step-size Condition II}) For any $k\geq 0$, the step-sizes $\tau_k,\sigma_k$ and momentum parameter $\theta_k$ satisfy $\theta_0=1$, \eqref{eq:step-size-condition-theta} and {\small \begin{align} &\frac{1-\delta}{\tau_k}\geq L_{xx}+\frac{{L^2_{yx}}}{\alpha_{k+1}},\quad\ \frac{1-\delta}{\sigma_k} \geq \theta_k(\alpha_k+\beta_k)+\frac{L^2_{yy}}{\beta_{k+1}}, \label{eq:step-size-condition-pd} \end{align}} {for some positive $\{t_k,~\alpha_k\}_{k\geq 0}$ such that $t_0=1$, nonnegative $\{\beta_k\}_{k\geq 0}$, and $\delta\in[0,1)$}. \end{assumption}} \begin{algorithm}[h!] \caption{\sa{Generic Accelerated Primal-Dual algorithm~(GAPD)}} \label{alg:GAPD} \begin{algorithmic}[1] \STATE{\bfseries Input: $\{\tau_k,\sigma_k,\theta_k\}_{k\geq 0}$, $(x_0,y_0)\in\cX\times\cY$} \STATE{$(x_{-1},y_{-1})\gets(x_0,y_0)$} \FOR{$k\geq 0$} \STATE $(x_{k+1},y_{k+1})\gets\hbox{\textbf{MainStep}}(x_k,y_k,x_{k-1},y_{k-1},\tau_k,\sigma_k,\theta_k)$ \label{algeq:mainstep-GAPD} \ENDFOR \end{algorithmic} \end{algorithm} \subsection{Auxiliary Results} \sa{In this section, we investigate some sufficient conditions on step-size sequence $\{\tau_k,\sigma_k,\theta_k\}$ and the parameter sequence $\{\alpha_k,\beta_k,t_k\}$ that can guarantee some desirable convergence properties for GAPD. Both APD and APDB, {with the iterate and the step-size sequences generated as in Algorithms~\ref{alg:APD} and~\ref{alg:APDB}, respectively,} are particular cases of the GAPD algorithm; hence, we later establish our main results in Sections~\ref{sec:main_thm_proof} and~\ref{sec:main_thm_proof_backtrack} using the results of this section.} \begin{theorem}\label{thm:general-bound} Suppose Assumption~\ref{assum} holds, and $\{x_k,y_k\}_{k\geq 0}$ is generated by GAPD stated in Algorithm~\ref{alg:GAPD} using a parameter sequence $\{\tau_k,\sigma_k,\theta_k\}_{k\geq 0}$ that satisfies Assumption~\ref{assum:step}. Then for any $(x,y)\in\cX\times\cY$ and $K\geq 1$, \begin{eqnarray}\label{eq:rate} \lefteqn{\cL(\bar{x}_{K},y)-\cL(x,\bar{y}_{K})\leq} \\ & & \frac{1}{T_K}{\Delta(x,y)}-\frac{t_K}{T_K}\Big[\frac{1}{\tau_{K}}\bD_{\cX}(x,x_{K}) +\Big(\frac{1}{\sigma_K}-\theta_K(\alpha_K+\beta_K)\Big)\bD_{\cY}(y,y_{K}) \Big], \nonumber \end{eqnarray} where $\Delta(x,y)$ is defined in \eqref{eq:delta}, {$T_K$ and $(\bar{x}_K,\bar{y}_K)$ are defined in Theorem~\ref{thm:main}}. 
\end{theorem} \begin{proof} For $k\geq0$, using Lemma \ref{lem_app:prox} in the appendix for the $y$- and $x$-subproblems in Algorithm~\ref{alg:GAPD} we get two inequalities that hold for any $y\in\cY$ and $x\in\cX$: \begin{align} &h(y_{k+1})-\fprod{s_k,~ y_{k+1}-y}\label{eq:sc-h}\\ &\qquad\leq h(y) +\frac{1}{\sigma_k}\Big[\bD_{\cY}(y,y_k)-{\bD_{\cY}(y,y_{k+1})}-\bD_{\cY}(y_{k+1},y_k)\Big], \nonumber \end{align} \begin{align} &f(x_{k+1})+\fprod{\grad_{{x}}\Phi(x_k,y_{k+1}),x_{k+1}-x}+\frac{\mu}{2}\norm{x-x_{k+1}}_\cX^2 \label{eq:sc-f}\\ &\qquad\leq f(x) +\frac{1}{\tau_k}\Big[\bD_{\cX}(x,x_k)-{\bD_{\cX}(x,x_{k+1})}-\bD_{\cX}(x_{k+1},x_k)\Big]. \nonumber \end{align} For all $k\geq 0$, let $A_{k+1}\triangleq \frac{1}{\sigma_k}\bD_{\cY}(y,y_k)-\frac{1}{\sigma_k}{\bD_{\cY}(y,y_{k+1})} -\frac{1}{\sigma_k}\bD_{\cY}(y_{k+1},y_k)$ and $B_{k+1}\triangleq \frac{1}{\tau_k}\bD_{\cX}(x,x_k)-\frac{1}{\tau_k}{\bD_{\cX}(x,x_{k+1})}-\frac{1}{\tau_k}\bD_{\cX}(x_{k+1},x_k)-{\frac{\mu}{2}\norm{x-x_{k+1}}_\cX^2}$. The inner product in \eqref{eq:sc-f} can be lower bounded using convexity of $\Phi(x,y_{k+1})$ in $x$ as follows: \begin{align*} \fprod{\grad_{{x}}\Phi(x_k,y_{k+1}),x_{k+1}-x}=&\fprod{\grad_{{x}}\Phi(x_k,y_{k+1}),x_k-x}+\fprod{\grad_{{x}}\Phi(x_k,y_{k+1}),x_{k+1}-x_k}\\ \geq& \Phi(x_k,y_{k+1})-\Phi(x,y_{k+1})+\fprod{\grad_{{x}}\Phi(x_k,y_{k+1}),x_{k+1}-x_k}. \end{align*} Using this inequality after adding $\Phi(x_{k+1},y_{k+1})$ to both sides of \eqref{eq:sc-f}, we get \begin{align}\label{eq:convex-Lip-f} f(x_{k+1})+\Phi(x_{k+1},y_{k+1})\leq& f(x)+\Phi(x,y_{k+1})+B_{k+1}+\sa{\Lambda_k}, \end{align} where $\Lambda_k \triangleq \Phi(x_{k+1},y_{k+1})-\Phi(x_k,y_{k+1})-\fprod{\grad_{{x}}\Phi(x_k,y_{k+1}),x_{k+1}-x_k}$ for $k\geq 0$. Now, \sa{for $k\geq 0$,} summing \eqref{eq:sc-h} and \eqref{eq:convex-Lip-f} and rearranging the terms lead to \begin{align}\label{eq:concave} &\cL(x_{k+1},y)-\cL(x,y_{k+1}) \\ & = f(x_{k+1})+\Phi(x_{k+1},y)-h(y)-f(x)-\Phi(x,y_{k+1})+h(y_{k+1})\nonumber\\ & \leq \Phi(x_{k+1},y)-\Phi(x_{k+1},y_{k+1}) +\fprod{s_k,y_{k+1}-y}+{\Lambda_k}+A_{k+1}+B_{k+1} \nonumber \\ & \leq-\fprod{q_{k+1},y_{k+1}-y}+\theta_k\fprod{q_k,y_{k+1}-y} +{\Lambda_k}+A_{k+1}+B_{k+1}, \nonumber \end{align} where in the last inequality we use the concavity of \sa{$\Phi(x_{k+1},\cdot)$}. \sa{To obtain a telescoping sum later,} we can rewrite the bound in \eqref{eq:concave} as \vspace*{-2mm} \begin{align}\label{eq:rearrange} &{\cL(x_{k+1},y)-\cL(x,y_{k+1})\leq} \\ & \Big[\frac{1}{\tau_k}\bD_{\cX}(x,x_k)+\frac{1}{\sigma_k}\bD_{\cY}(y,y_k)+\theta_k\fprod{q_k,y_k-y}\Big] \nonumber \\ & -\Big[\frac{1}{\tau_{k}}\bD_{\cX}(x,x_{k+1})+\frac{\mu}{2}\norm{x-x_{k+1}}_\cX^2+\frac{1}{\sigma_{k}}\bD_{\cY}(y,y_{k+1})+\fprod{q_{k+1},y_{k+1}-y}\Big] \nonumber \\ & +\Lambda_k-\frac{1}{\tau_{k}}\bD_{\cX}(x_{k+1},x_k) -\frac{1}{\sigma_k}\bD_{\cY}(y_{k+1},y_k)+\theta_k \fprod{q_k,y_{k+1}-y_k}.\nonumber \end{align} \vspace*{-1mm} \noindent Next, we bound the term \sa{$\fprod{q_k,y_{k+1}-y_k}$} in \eqref{eq:rearrange}. Indeed, one can bound $\fprod{q_k,y-y_k}$ for any given $y\in\cY$ as follows. Let $p_k^x\triangleq \grad_y\Phi(x_{k},y_k)-\grad_y\Phi(x_{k-1},y_k)$ and $p_k^y\triangleq \grad_y\Phi(x_{k-1},y_k)-\grad_y\Phi(x_{k-1},y_{k-1})$ which immediately implies that $q_k=p_k^x+p_k^y$. Moreover, for any $y\in\cY$, $y'\in\cY^*$, and \sa{$a>0$, we have $|\fprod{y',y}|\leq \frac{a}{2}\norm{y}_\cY^2+\frac{1}{2a}\norm{y'}_{\cY^*}^2$}. 
Hence, using this inequality twice, once for $\fprod{p^x_k,y-y_k}$ and once for $\fprod{p^y_k, y-y_k}$, and the fact that $\bD_{\cY}(y,\bar{y})\geq\frac{1}{2}\norm{y-\bar{y}}_\cY^2$, we obtain for all $k\geq 0$ that \begin{align} \label{eq:q_k} |\fprod{q_k,y-y_k}|\leq &\alpha_k\bD_\cY(y,y_k)+\frac{1}{2\alpha_k}\norm{p_k^x}_{\cY^*}^2 +\beta_k\bD_\cY(y,y_k)+\frac{1}{2\beta_k}\norm{p_k^y}_{\cY^*}^2, \end{align} which holds for any $\alpha_k,\beta_k>0$. Moreover, if $L_{yy}=0$, then $\norm{p_k^y}_{\cY^*}=0$; hence, $|\fprod{q_k,y-y_k}|\leq \alpha_k\bD_{\cY}(y,y_k)+\frac{1}{2\alpha_k}\norm{p_k^x}_{\cY^*}^2$ for any $\alpha_k>0$. Since we define $0^2/0=0$, \eqref{eq:q_k} holds for any $\alpha_k>0$ and $\beta_k=0$ when $L_{yy}=0$. Therefore, using \eqref{eq:q_k} within \eqref{eq:rearrange} \sa{with $\{\alpha_k,\beta_k\}$ satisfying Assumption~\ref{assum:step},} we get for $k\geq 0$, \begin{subequations}\label{eq:bound-lagrange} \begin{align} &{\cL(x_{k+1},y)-\cL(x,y_{k+1})\leq} Q_k(z) - R_{k+1}(z) + E_k, \label{eq:bound-lagrange-main}\\ & Q_k(z) \triangleq \frac{1}{\tau_k}\bD_{\cX}(x,x_k)+\frac{1}{\sigma_k}\bD_{\cY}(y,y_k)+\theta_k\fprod{q_k,y_k-y} \label{eq:bound-lagrange-Q}\\ &\qquad \qquad \quad +\frac{\theta_k}{2\alpha_k}\norm{p_k^x}_{\cY^*}^2+\frac{\theta_k}{2\beta_k}\norm{p_k^y}_{\cY^*}^2, \nonumber\\ & R_{k+1}(z)\triangleq \frac{1}{\tau_{k}}\bD_{\cX}(x,x_{k+1})+\frac{\mu}{2}\norm{x-x_{k+1}}_\cX^2+\frac{1}{\sigma_{k}}\bD_{\cY}(y,y_{k+1}) \label{eq:bound-lagrange-R}\\ &\qquad \qquad \quad+\fprod{q_{k+1},y_{k+1}-y} +\frac{1}{2\alpha_{k+1}}\norm{p_{k+1}^x}_{\cY^*}^2+\frac{1}{2\beta_{k+1}}\norm{p_{k+1}^y}_{\cY^*}^2,\nonumber \\ & \sa{E_k\triangleq} \Lambda_k +\frac{1}{2\alpha_{k+1}}\norm{p_{k+1}^x}_{\cY^*}^2 -\frac{1}{\tau_{k}}\bD_{\cX}(x_{k+1},x_k)\nonumber\\ &\qquad \qquad \quad +\frac{1}{2\beta_{k+1}}\norm{p_{k+1}^y}_{\cY^*}^2-\Big(\frac{1}{\sigma_k}-\theta_k(\alpha_k+\beta_k)\Big)\bD_{\cY}(y_{k+1},y_k). \nonumber \end{align} \end{subequations} \sa{Note that $E_k=E_k( x_{k+1}, y_{k+1})$ where $E_k(\cdot,\cdot)$ is defined in \eqref{eq:Ek} for $k\geq 0$.} All the derivations until here, including \eqref{eq:bound-lagrange}, hold for any Bregman distance function $\bD_\cX$. Recall that if $\mu>0$, then we set $\bD_{\cX}(x,\bar{x})=\frac{1}{2}\norm{x-\bar{x}}_\cX^2$ and $\norm{x}_\cX=\sqrt{\fprod{x,x}}$. Now, multiplying both sides by $t_k>0$, summing over $k=0$ to $K-1$, and then using Jensens's inequality\footnote{For any $x\in\cX$ and $y\in\cY$, $\cL(\cdot,y)-\cL(x,\cdot)$ is convex.}, we obtain \vspace*{-4mm} {\small \begin{align}\label{eq:sum-bound} T_K(\cL(\bar{x}_{K},y)-\cL(x,\bar{y}_{K}))&\leq\sum_{k=0}^{K-1}t_k\big(Q_k(z)-R_{k+1}(z)+E_k\big)\\ &\leq t_0Q_0(z)-t_{K-1}R_{K}(z)+\sum_{k=0}^{K-1}t_kE_k,\nonumber \end{align}} where $T_K=\sum_{k=0}^{K-1}t_k$ and the last inequality follows from the step-size conditions in~\eqref{eq:step-size-condition-theta}, which imply that $t_{k+1}Q_{k+1}(z)-t_k R_{k+1}(z)\leq 0$ for $k=0$ to $K-2$. 
\sa{Since, according to Assumption~\ref{assum:step}, $\tau_k,\sigma_k$ and $\theta_k$ are chosen such that ${E}_k\leq 0$ for $k=0,\hdots, K-1$,} \eqref{eq:sum-bound} implies that {\small \begin{eqnarray}\label{eq:bound-backtracking} \lefteqn{T_K(\cL(\bar{x}_{K},y)-\cL(x,\bar{y}_{K}))\leq t_0Q_0(z)-t_{K-1}R_{K}(z)} \\ &&\leq \frac{t_0}{\tau_0}\bD_{\cX}(x,x_0)+\frac{t_0}{\sigma_0}\bD_{\cY}(y,y_0)+{t_K\theta_K}\fprod{q_{K},y-y_{K}} \nonumber\\ && \mbox{ }\ -t_K\Big[\frac{1}{\tau_{K}}\bD_{\cX}(x,x_{K})+\frac{1}{\sigma_{K}}\bD_{\cY}(y,y_{K})+\frac{\theta_K}{2\alpha_K}\norm{p_K^x}_{\cY^*}^2 + \frac{\theta_K}{2\beta_K}\norm{p_K^y}_{\cY^*}^2\Big],\nonumber \end{eqnarray}} where \sa{in the last inequality we used $t_KQ_K(z)\leq t_{K-1}R_{K}(z)$ and $q_0=p_0^x=p_0^y=\mathbf{0}$ (which holds due to the initialization $x_0=x_{-1}$ and $y_0=y_{-1}$).} One can upper bound \sa{$\fprod{q_{K},y-y_{K}}$} in~\eqref{eq:bound-backtracking} using \eqref{eq:q_k} for $k=K$. After plugging this bound in~\eqref{eq:bound-backtracking} and dividing both sides by $T_K$, we obtain the desired result in~\eqref{eq:rate}. \end{proof} Lemma~\ref{lem:supermartingale} is used to establish the convergence of the primal-dual iterate sequence. \begin{lemma}~\cite{Robbins71}\label{lem:supermartingale} {Let $\{a_k\}$, $\{b_k\}$, and $\{c_k\}$ be non-negative real sequences such that $a_{k+1}\leq a_k - b_k + c_k$ for all $k\geq 0$, and $\sum_{k=0}^\infty c_k<\infty$. Then $a=\lim_{k\rightarrow \infty}a_k$ exists, and $\sum_{k=0}^\infty b_k<\infty$.} \end{lemma}\vspace*{-4mm} \sa{\begin{theorem}\label{thm:convergence} Suppose a saddle point for \eqref{eq:original-problem} exists, and Assumption~\ref{assum:step} holds for some $\delta>0$ and $\{\alpha_k,\beta_k,t_k\}$ such that $\inf_{k\geq 0}t_k\min\{\frac{1}{\tau_k},~\frac{1}{\sigma_k}-\theta_k(\alpha_k+\beta_k)\}\geq\delta'$ for some $\delta'>0$. \begin{enumerate}[(i)] \item \textbf{Case 1: $\lim_{k\to \infty}\min\{\tau_k,~\sigma_k\}>0$}. Then any limit point of $\{(x_k,y_k)\}_{k\geq 0}$ is a saddle point. In addition, suppose \rev{$\liminf_{k\to \infty}\min\{\alpha_k,\beta_k\}>0$} when $L_{yy}>0$, and \rev{$\liminf_{k\to \infty}\alpha_k>0$} when $L_{yy}=0$; if $\inf_{k\geq 0}t_k>0$ and $\sup_{k\geq 0}t_k<\infty$ hold, then $\{(x_k,y_k)\}$ has a unique limit point. \item \textbf{Case 2: $\tau_k\to 0$ and $\lim_{k\to\infty}\sigma_k>0$.}\footnote{Similar conditions can also be given for the case $\max\{\tau_k,\sigma_k\}\to 0$ or for the case $\sigma_k\to 0$ and $\inf_k \tau_k>0$; however, the two cases considered in Theorem~\ref{thm:convergence} are sufficient to analyze APD and APDB, displayed in Algorithm~\ref{alg:APD} and Algorithm~\ref{alg:APDB}, respectively.} If $\varphi_{\cX}$ defining $\bD_{\cX}$ is Lipschitz differentiable {and $t_k=\Omega(\frac{1}{\tau_k})$,} then any limit point of $\{(x_k,y_k)\}_{k\geq 0}$ is a saddle point. \end{enumerate} \end{theorem}} \begin{proof} Suppose $z^\#=(x^\#,y^\#)$ is a saddle point of $\cL$ in~\eqref{eq:original-problem}.
Let $x=x^\#$ and $y=y^\#$ in \eqref{eq:bound-lagrange}; \sa{then \eqref{eq:bound-lagrange-Q}, \eqref{eq:bound-lagrange-R} and the assumption on $\delta'>0$ imply that}\vspace*{-2mm} {\small \begin{subequations} \label{eq:Qk} \begin{align} \sa{t_k}Q_k(z^\#) &\geq \frac{\sa{t_k}}{\tau_k}\bD_\cX(x^\#,x_k)+\sa{t_k}(\frac{1}{\sigma_k}-\theta_k(\alpha_k+\beta_k))\bD_\cY(y^\#,y_k)\nonumber \\ &\geq \delta'\bD_\cX(x^\#,x_k)+\delta'\bD_\cY(y^\#,y_k)> 0, \label{Qk-b}\\ \sa{t_k} R_{k+1}(z^\#) &\geq \sa{t_{k+1}Q_{k+1}(z^\#).} \label{Qk-a} \end{align} \end{subequations}} Multiplying the inequality \eqref{eq:bound-lagrange-main} by $t_k$, then using $\cL({x}_{k+1},y^\#)-\cL(x^\#,{y}_{k+1})\geq 0$, and \eqref{Qk-a}, we obtain the following: { \begin{align}\label{eq:bound-iterates} 0\leq &t_kQ_k(z^\#)-t_{k+1}Q_{k+1}(z^\#)+t_kE_k({x}_{k+1},{y}_{k+1})\\ \leq & t_kQ_k(z^\#)-t_{k+1}Q_{k+1}(z^\#)-\delta t_k[\bD_\cX(x_{k+1},x_k)/\sa{\tau_k}+\bD_\cY(y_{k+1},y_k)/\sigma_k]\nonumber. \end{align}} Let $a_k=t_kQ_k(z^\#)$, \sa{$b_k=\delta t_k[\bD_\cX(x_{k+1},x_k)/\sa{\tau_k}+\bD_\cY(y_{k+1},y_k)/\sigma_k]$}, and $c_k=0$ for $k\geq 0$; then Lemma~\ref{lem:supermartingale} implies that $a\triangleq \lim_{k\rightarrow \infty} a_k$ exists. Therefore, \eqref{Qk-b} implies that $\{z_k\}$ is a bounded sequence, where $z_k\triangleq(x_k,y_k)$; hence, it has a convergent subsequence $z_{k_n}\rightarrow z^*$ as $n\rightarrow \infty$ for some $z^*\in\cX\times\cY$ where $z^*=(x^*,y^*)$. \sa{Note that since $t_k,\theta_k,\alpha_k,\beta_k\geq 0$ for $k\geq 0$, we have $\inf_{k\geq 0}\min\{t_k/\tau_k,~t_k/\sigma_k\}\geq \delta'>0$. Moreover, Lemma~\ref{lem:supermartingale} also implies that $\sum_{k=0}^\infty b_k <\infty$; hence, we also have $\sum_{k\geq 0}\norm{z_{k+1}-z_k}^2<\infty$. Thus,} for any $\epsilon>0$ there exists $N_1$ such that for any $n\geq N_1$, $\max\{\norm{z_{k_n}-z_{k_n-1}},~\norm{z_{k_n}-z_{k_n+1}}\}< \frac{\epsilon}{2}$. Convergence of the sequence $\{z_{k_n}\}$ also implies that there exists $N_2$ such that for any $n\geq N_2$, $\norm{z_{k_n}-z^*}< \frac{\epsilon}{2}$. Therefore, letting $N\triangleq \max\{N_1,N_2\}$, we conclude $\norm{z_{k_n\pm 1}-z^*}< \epsilon$, i.e., $z_{k_n\pm 1} \rightarrow z^*$ as $n\rightarrow \infty$. Now we show that $z^*$ is indeed a saddle point of \eqref{eq:original-problem} by considering the optimality conditions for the $y$- and $x$-subproblems of Algorithm~\ref{alg:GAPD}, i.e., Lines~\ref{algeq:y-main} and~\ref{algeq:x-main} of \textbf{MainStep}. For all $n\in\mathbb{Z}_+$, let $u_n\triangleq \left(\grad\sa{\varphi_{\cX}}(x_{k_n})-\grad\sa{\varphi_{\cX}}(x_{k_n+1})\right)/\tau_{k_n}-\grad_x\Phi(x_{k_n},y_{k_n+1})$ and $v_n\triangleq s_{k_n}+\left(\grad\sa{\varphi_{\cY}}(y_{k_n})-\grad\sa{\varphi_{\cY}}(y_{k_n+1})\right)/\sigma_{k_n}$; one has $u_n\in \partial f(x_{k_n+1})$ and $v_n\in \partial h(y_{k_{n}+1})$ for $n\geq 0$. Since \sa{$\grad\varphi_{\cX}$ and $\grad\varphi_{\cY}$ are continuous on $\dom f$ and $\dom h$, respectively, whenever $\lim_{k\to \infty}\min\{\tau_k,~\sigma_k\}>0$,} it follows from Theorem 24.4 in \cite{rockafellar2015convex} that $\partial f(x^*) \ni \lim_{n\rightarrow \infty}u_n=-\grad_x\Phi(x^*,y^*)$, $\partial h(y^*) \ni \lim_{n\rightarrow \infty}v_n={\grad_y\Phi(x^*,y^*)}$, which implies that $z^*$ is a saddle point of \eqref{eq:original-problem}. In addition, if $\sup_{k\geq 0}t_k<\infty$, we show that $z^*$ is the unique limit point.
Since \eqref{eq:Qk} and \eqref{eq:bound-iterates} are true for any saddle point $z^\#$, letting $z^\#=z^*$ and invoking Lemma~\ref{lem:supermartingale} again, one can conclude that $w^*=\lim_{k\rightarrow \infty} w_k \geq 0$ exists, where $w_k\triangleq t_kQ_k(z^*)$. \sa{Because $\{t_k\}\subset\reals_+$ is a bounded sequence, $\{\theta_k\}\subset\reals_+$ is also a bounded sequence. Therefore, using $\lim_{k\to \infty}\min\{\tau_k,\sigma_k\}>0$ {together with $\liminf_{k\to \infty}\min\{\alpha_k,\beta_k\}>0$} when $L_{yy}>0$, it follows from $z_{k_n}\rightarrow z^*$ and $z_{k_n-1}\rightarrow z^*$ that we have} $\lim_{n\rightarrow \infty} w_{k_n}=0$; hence, $w^*=\lim_{k\rightarrow \infty} w_k=\lim_{n\rightarrow \infty} w_{k_n}=0$, and \eqref{Qk-b} evaluated at $z^*$ implies that $z_k\rightarrow z^*$. \sa{For $L_{yy}=0$, the conditions $\lim_{k\to \infty}\min\{\tau_k,\sigma_k\}>0$ and {$\liminf_{k\to \infty}\alpha_k>0$} suffice since $\norm{p_k^y}=0$ for $k\geq 0$.} \sa{On the other hand, for the case $\tau_k\to 0$ and $\lim_k \sigma_k>0$, we require that $\grad\varphi_{\cX}$ is Lipschitz on $\dom f$ with constant $L_{\varphi_{\cX}}$ and that there exist $C>0$ and $K\geq 1$ such that $t_k\geq C\frac{1}{\tau_k}$ for $k\geq K$. Clearly, $\norm{\grad\sa{\varphi_{\cX}}(x)-\grad\sa{\varphi_{\cX}}(x')}_{\cX^*}\leq \sqrt{2L_{\varphi_{\cX}}^2\bD_{\cX}(x',x)}$ for $x,x'\in\dom f$; hence, to argue that $\lim_{n\rightarrow \infty}u_n=-\grad_x\Phi(x^*,y^*)$, one needs {$\lim_n \bD_{\cX}(x_{k_n+1},x_{k_n})/{\tau_{k_n}^2}=0$}. Indeed, since $\sum_k b_k<\infty$, $t_k\bD_\cX(x_{k+1},x_k)/\sa{\tau_k}\to 0$ holds; therefore, $\bD_\cX(x_{k+1},x_k)/\tau_k^2\to 0$ as $t_k\geq C\frac{1}{\tau_k}$ for $k\geq K$, which implies $\lim_{n\rightarrow \infty}u_n=-\grad_x\Phi(x^*,y^*)$. Finally, $\lim_{n\rightarrow \infty}v_n={\grad_y\Phi(x^*,y^*)}$ as discussed above for the previous case since $\lim_k \sigma_k>0$ and $\grad\varphi_{\cY}$ is continuous. Again invoking \cite[Theorem 24.4]{rockafellar2015convex}, one can establish that $z^*$ is a saddle point of \eqref{eq:original-problem}.} \end{proof} \sa{Next, we provide some useful results on $\{\tau_k,\sigma_k,\theta_k\}_k$ of the algorithms APD and APDB, which will help us derive the rate results in Theorems~\ref{thm:main} and~\ref{thm:backtrack}. } \begin{lemma} \label{lem:stronger-step-condition} \sa{Suppose the sequence $\{\tau_k,\sigma_k,\theta_k\}_{k\geq 0}$ satisfies \eqref{eq:step-size-condition-pd} for some positive $\{t_k,~\alpha_k\}_{k\geq 0}$, nonnegative $\{\beta_k\}_{k\geq 0}$, and $\delta\in[0,1)$. Let $\{x_k,y_k\}$ be the GAPD iterate sequence corresponding to $\{\tau_k,\sigma_k,\theta_k\}$. Then $\{x_k,y_k\}$ and $\{\tau_k,\sigma_k,\theta_k\}$ satisfy \eqref{eq:step-size-condition-Ek} with the same $\{t_k,\alpha_k,\beta_k\}$ and $\delta$.} \end{lemma} \begin{proof} From Assumption \ref{assum}, for any $k\geq 0$ and $(x,y)\in\cX\times\cY$, we have \begin{align*} &\Phi(x,y)-\Phi(x_k,y)-\fprod{\grad_x\Phi(x_k,y),x-x_k}\leq L_{xx}\norm{x-x_k}_{\cX}^2/2\leq L_{xx}\bD_{\cX}(x,x_k),\\ &\tfrac{1}{2}\norm{\grad_y\Phi(x,y)-\grad_y\Phi(x_k,y)}_{\cY^*}^2\leq L_{yx}^2\norm{x-x_k}_{\cX}^2/2\leq L_{yx}^2\bD_{\cX}(x,x_k) , \\ &\tfrac{1}{2}\norm{\grad_y\Phi(x_k,y)-\grad_y\Phi(x_k,y_k)}_{\cY^*}^2\leq L_{yy}^2\norm{y-y_k}_{\cY}^2/2\leq L_{yy}^2\bD_{\cY}(y,y_k).
\end{align*} Above inequalities evaluated at $(x,y)=(x_{k+1},y_{k+1})$ imply that \begin{align}\label{eq:upper-bound-E} E_k\triangleq E_k(x_{k+1},y_{k+1})\leq&~ (L_{yx}^2/\alpha_{k+1}+L_{xx}-1/\tau_k)\bD_\cX(x_{k+1},x_k) \nonumber\\ &+(L_{yy}^2/\beta_{k+1}+\theta_k(\alpha_k+\beta_k)-1/\sigma_k)\bD_\cY(y_{k+1},y_k) \nonumber \\ &\leq -\delta[\tfrac{1}{\tau_k}\bD_\cX(x_{k+1},x_k)+\tfrac{1}{\sigma_k}\bD_\cY(y_{k+1},y_k)], \end{align} where in the last inequality we used \eqref{eq:step-size-condition-pd} and non-negativity of Bregman functions. \end{proof} \begin{lemma} \label{lem:step-size-seq} \sa{Given {$\{\tau_k\}_{k\geq 0}\subset\reals_{++}$ and $\bar{\tau},\gamma_0>0$, let $\sigma_{-1}=\gamma_0\bar{\tau}$} and $\sigma_k=\gamma_k \tau_k$, $\theta_k=\sigma_{k-1}/\sigma_k$ and $\gamma_{k+1}=\gamma_k (1+\mu \tau_k)$ for $k\geq 0$. $\{\tau_k,\sigma_k,\theta_k\}$ satisfies \eqref{eq:step-size-condition-theta} for $\{t_k\}$ such that $t_k=\sigma_k/\sigma_0$ for $k\geq 0$.} \end{lemma} \begin{proof} Since $t_k=\sigma_k/\sigma_0$ and $\tau_k>0$ for $k\geq 0$, \eqref{eq:step-size-condition-theta} can be written as $(1+\mu\tau_k)\geq \frac{\sigma_{k+1}\tau_k}{\sigma_k\tau_{k+1}}$ and $\frac{\sigma_k}{\sigma_{k+1}}=\theta_{k+1}$. The latter condition clearly holds due to our choice of $\theta_k$. Moreover, from $\sigma_k= \gamma_k\tau_k$ and $\gamma_{k+1}=\gamma_k(1+\mu\tau_k)$, we conclude that the former condition holds with equality by observing that $(1+\mu\tau_k)=\frac{\gamma_{k+1}}{\gamma_k}=\frac{\sigma_{k+1}\tau_k}{\sigma_k\tau_{k+1}}$. \end{proof} \begin{lemma}\label{lem:parameter} \sa{Under Assumption~\ref{assum}, consider APDB displayed in Algorithm~\ref{alg:APDB} for any given $\delta\in[0,1)$ and $c_\alpha,c_\beta\geq 0$ such that $c_\alpha+c_\beta+\delta\leq 1$. When $L_{yy}>0$, set $c_\alpha,~c_\beta> 0$; otherwise, when $L_{yy}=0$, set $c_\alpha>0$ and $c_\beta=0$. The APDB iterate and step-size sequences, i.e., $\{x_k,y_k\}_{k\geq 0}$ and $\{\tau_k,\sigma_k,\theta_k\}_{k\geq 0}$, are well-defined; more precisely, for any $k\geq 0$, the backtracking condition, i.e., $E_k(x_{k+1},y_{k+1})\leq -\delta[\tfrac{1}{\tau_k}\bD_\cX(x_{k+1},x_k)+\tfrac{1}{\sigma_k}\bD_\cY(y_{k+1},y_k)]$, holds after finite number of inner iterations.} \sa{For $k\geq 0$, $\tau_k\geq\eta\hat{\tau}_k$ for some positive $\{\hat{\tau}_k\}$: when $L_{yy}=0$ and $\mu\geq 0$, $\hat{\tau}_k\geq \Psi_1\sqrt{\gamma_0/\gamma_k}$ for $k\geq 0$; on the other hand, when $L_{yy}>0$ and $\mu=0$, $\hat{\tau}_k\geq \min\{\Psi_1,\Psi_2\}\sqrt{\gamma_0/\gamma_k}$ for $k\geq 0$, where $\Psi_1$ and $\Psi_2$ are defined in \eqref{eq:tauhat_bound}.\footnote{$\mu=0$ implies $\gamma_k=\gamma_0$ for $k\geq 0$, while $\mu>0$ implies $\gamma_{k+1}>\gamma_k$ for $k\geq 0$.}} \end{lemma} \begin{proof} {Fix arbitrary $k\geq 0$. Note that since APDB is a special GAPD corresponding to a particular $\{\tau_k,\sigma_k,\theta_k\}$, Lemma~\ref{lem:stronger-step-condition} implies that whenever \eqref{eq:step-size-condition-pd} holds, \eqref{eq:step-size-condition-Ek} holds as well. Next, we will show that there exists $\hat{\tau}_k>0$ such that \eqref{eq:step-size-condition-pd} is true for all $\tau_k\in(0,\hat{\tau}_k]$. Since $\sigma_k=\gamma_k\tau_k$ and $\theta_k=\sigma_{k-1}/\sigma_k$, \eqref{eq:step-size-condition-pd} is equivalent to \begin{align} \label{eq:pd-equiv} 0\geq -(1-\delta)+L_{xx}\tau_k+\frac{L_{yx}^2}{c_\alpha}\gamma_k\tau_k^2,\qquad 1-(\delta+c_\alpha+c_\beta)\geq\frac{L_{yy}^2}{c_\beta}\gamma_k^2\tau_k^2. 
\end{align}} Suppose $L_{yy}>0$. Then \eqref{eq:pd-equiv} holds for all $\tau_k\in(0,\hat{\tau}_k]$, where {\small \begin{equation} \label{eq:tauhat} \hat{\tau}_k\triangleq\min\left\{\frac{-L_{xx}+\sqrt{L^2_{xx}+4(1-\delta)L^2_{yx}\gamma_k/c_\alpha}}{2L^2_{yx}\gamma_k/c_\alpha},\quad \frac{\sqrt{c_\beta(1-(c_\alpha+c_\beta+\delta))}}{\gamma_k L_{yy}}\right\}. \end{equation}} On the other hand, when $L_{yy}=0$, the second inequality in~\eqref{eq:pd-equiv} always holds; hence, $\hat{\tau}_k$ is defined by the first term in \eqref{eq:tauhat}. Since in each step of the backtracking $\tau_k$ is decreased by a factor of $\eta\in(0,1)$, when the backtracking terminates we have $\tau_k\geq \eta\hat{\tau}_k$. {Consider the case $\mu>0$ and $L_{yy}=0$. Since $L_{yy}=0$, the second term in~\eqref{eq:tauhat} is not binding and we have $\hat{\tau}_k\triangleq\frac{-L_{xx}+\sqrt{L^2_{xx}+4(1-\delta)L^2_{yx}\gamma_k/c_\alpha}}{2L^2_{yx}\gamma_k/c_\alpha}$. Moreover, $\mu>0$ and Line~\ref{algeq:gamma_update_APDB} in APDB imply that $\gamma_{k+1}\geq\gamma_k\geq\gamma_0$ for $k\geq 0$.} Next, to analyze this case, we show a useful inequality: for any $a\geq 0$ and $b,c>0$, there exists $d\in (0,1]$ such that $\sqrt{a^2+c b^2}\geq a+\sqrt{c}b d$. In fact, the inequality can be written equivalently as $d^2+\frac{2a}{b\sqrt{c}}d-1\leq 0$, which holds if $d^2+\frac{2a}{b\sqrt{\bar{c}}}d-1\leq 0$ has a solution for any $0<\bar{c}\leq c$. Given such $\bar{c}$, $d= -\frac{a}{b\sqrt{\bar{c}}}+\sqrt{\frac{a^2}{b^2\bar{c}}+1}>0$ solves this tighter quadratic inequality. Employing this result within the definition of $\hat{\tau}_k$ for the case $L_{yy}=0$, i.e., setting $a=L_{xx}$, $b=2\sqrt{\tfrac{1-\delta}{c_\alpha}}L_{yx}$, $c=\gamma_k$, and $\bar{c}=\gamma_0$, implies $\hat{\tau}_k\geq \Psi_1\sqrt{\gamma_0/\gamma_k}$ for $k\geq 0$. {Now, suppose $\mu=0$. Line~\ref{algeq:gamma_update_APDB} in APDB implies that $\gamma_k=\gamma_0$ for $k\geq 0$. Hence, from \eqref{eq:tauhat}, we have $\hat{\tau}_k=\hat{\tau}_0$ for $k\geq 0$; thus, {when $L_{yy}=0$, we get $\hat{\tau}_0\geq \Psi_1$, and when $L_{yy}>0$, we get $\hat{\tau}_0\geq \min\{\Psi_1,\Psi_2\}$}.} \end{proof} \begin{lemma} \label{lem:k2-rate} \sa{Suppose $\mu>0$ and $L_{yy}=0$. \rev{The} step-size sequences generated by both APD and APDB, displayed in Algorithms~\ref{alg:APD} and~\ref{alg:APDB}, respectively, satisfy $\sigma_k=\Omega(k)$, $\tau_k=\Omega(1/\sigma_k)$, and $\tau_k/\sigma_k=\cO(1/k^2)$ for $k\geq 0$. Indeed, $\sigma_k \geq \frac{\Gamma^2}{3\mu} k$, $\tau_k\sigma_k\geq \Gamma^2/\mu^2$ and $\gamma_k^{-1}=\tau_k/\sigma_k\leq 9/(\Gamma^2 k^2)$ for $k\geq 0$, where $\Gamma=\mu\tau_0\sqrt{\gamma_0}$ for APD and {$\Gamma=\mu\eta\Psi_1\sqrt{\gamma_0}$} for APDB with $\Psi_1$ as defined in~\eqref{eq:tauhat_bound}. Furthermore, for all $\epsilon>0$, $\sigma_k\geq \frac{\Gamma^2}{(2+\epsilon)\mu}k$ and $\tau_k/\sigma_k\leq (2+\epsilon)/(\Gamma^2 k^2)$ for $k\geq \lceil\frac{1}{\epsilon}\rceil$.} \end{lemma} \begin{proof} First, consider $\{\tau_k,\sigma_k,\theta_k\}_k$ generated by APD as shown in Algorithm~\ref{alg:APD}. Note that $\tau_{k+1}=\tau_k\sqrt{\frac{\gamma_k}{\gamma_{k+1}}}$ implies that $\tau_k=\tau_0\sqrt{\frac{\gamma_0}{\gamma_k}}$ for all $k\geq 0$; therefore, using the update rule for $\gamma_{k+1}$ in Line~\ref{algeq:gamma_update_APD} of Algorithm~\ref{alg:APD}, we conclude that for $k\geq 0$, \begin{align}\label{gamma-k-apd} \gamma_{k+1}=\gamma_k(1+\mu\tau_k)\geq \gamma_k+\mu\tau_0\sqrt{\gamma_0\gamma_k}.
\end{align} Second, consider $\{\tau_k,\sigma_k,\theta_k\}_k$ generated by APDB as shown in Algorithm~\ref{alg:APDB}. {Since $\tau_k\geq \eta\hat{\tau}_k$ for all $k\geq 0$, the update rule for $\gamma_{k+1}$ and Lemma~\ref{lem:parameter} imply that \begin{align}\label{gamma-k} \gamma_{k+1}=\gamma_k(1+\mu\tau_k)\geq \gamma_k(1+\mu\eta\hat{\tau}_k)\geq \gamma_k+{\mu\eta\Psi_1\sqrt{\gamma_0\gamma_k}},\quad\forall k\geq 0. \end{align}} {Next, we will give a unified analysis for APD and APDB step-size sequences as the bounds in \eqref{gamma-k-apd} and \eqref{gamma-k} have the same form: $\gamma_{k+1}\geq \gamma_k+\Gamma\sqrt{\gamma_k}$, where $\Gamma$, defined in the statement, depends on the algorithm implemented.} Using induction one can show that {$\gamma_k\geq \frac{\Gamma^2}{(2+\epsilon)^2} k^2$ for $k\geq \lceil\frac{1}{\epsilon}\rceil$; hence, setting $\epsilon=1$, we get $\gamma_k\geq \frac{\Gamma^2}{9} k^2$ for $k\geq 0$. Note $\sigma_k=\gamma_k\tau_k$ and $\gamma_{k+1}=\gamma_k(1+\mu\tau_k)$ imply that $\sigma_{k}=\frac{\gamma_{k+1}- \gamma_k}{\mu}$.} Therefore, since $\gamma_{k+1}- \gamma_k\geq \Gamma\sqrt{\gamma_k}\geq \frac{\Gamma^2}{3} k$, we have $\sigma_{k} \geq \frac{\Gamma^2}{3\mu} k$, and $\tau_k\sigma_k=\frac{(\gamma_{k+1}-\gamma_k)^2}{\mu\gamma_k}\geq \Gamma^2/\mu$ for $k\geq 0$. Moreover, $\tau_k/\sigma_k=1/\gamma_k=\cO(1/k^2)$. \end{proof} \subsection{\sa{Proof of Theorem~\ref{thm:main}}} \label{sec:main_thm_proof} Below we establish the results of Theorem~\ref{thm:main}. \subsubsection{Rate Analysis} We first show that $\{\tau_k,\sigma_k,\theta_k\}_{k\geq 0}$ generated by APD in Algorithm~\ref{alg:APD} satisfies \eqref{eq:step-size-rule}. The step-size update rule of APD implies that for $k\geq 0$, {\small \begin{align} \label{eq:step-size-rule-proof} \theta_{k+1}=\frac{\sigma_k}{\sigma_{k+1}}=\frac{\tau_k\gamma_k}{\tau_{k+1}\gamma_{k+1}}=\sqrt{\frac{\gamma_k}{\gamma_{k+1}}}=\frac{1}{\sqrt{1+\mu\tau_k}},\quad \tau_{k+1}=\tau_k\sqrt{\frac{\gamma_k}{\gamma_{k+1}}}=\theta_{k+1}\tau_k. \end{align}} \vspace*{-3mm} Given $\tau_0,\sigma_0>0$ satisfying \eqref{eq:initial_step_condition} for some particular $\delta, c_\alpha, c_\beta\in\reals_+$ as stated in Theorem~\ref{thm:main}, we will consider $\{\alpha_k,\beta_k,t_k\}_{k\geq 0}$ chosen as \begin{align} \label{eq:abt_seq} t_k=\sigma_k/\sigma_0,\quad \alpha_k=c_\alpha/\sigma_{k-1},\quad \beta_k=c_\beta/\sigma_{k-1},\quad \forall k\geq 0. \end{align} We next show that $\{\tau_k,\sigma_k,\theta_k\}$ satisfies Assumption~\ref{assum:step-2}. Indeed, since $\theta_k=\sigma_{k-1}/\sigma_k$, for the choice of $\{\alpha_k,\beta_k\}$ in~\eqref{eq:abt_seq}, \eqref{eq:step-size-condition-pd} can be written as \begin{align} \label{eq:step_cond_induction} \frac{1-\delta}{\tau_k}\geq L_{xx}+\frac{L_{yx}^2}{c_\alpha}\sigma_k,\quad 1-(\delta+c_\alpha+c_\beta)\geq \frac{L_{yy}^2}{c_\beta}\sigma_k^2. \end{align} \rev{Clearly,} \eqref{eq:initial_step_condition} implies that \eqref{eq:step_cond_induction} holds for $k=0$. When $\mu=0$, i.e., Part I, we have $\gamma_k=\gamma_0$ and $\theta_k=1$ for $k\geq 0$; hence, $\tau_k=\tau_0$ and $\sigma_k=\sigma_0$ for $k\geq 0$. Thus, \eqref{eq:step-size-condition-pd} holds for all $k\geq 0$ for $\{\tau_k, \sigma_k,\theta_k\}$ produced by APD. For the case $\mu>0$, i.e., Part II, we will use induction to show that \eqref{eq:step-size-condition-pd} holds. 
Recall that for this case, we assume $L_{yy}=0$; hence, the second condition in \eqref{eq:step_cond_induction} holds for any $\sigma_k$ as long as $1\geq \delta+c_\alpha+c_\beta$. Now suppose the first condition in \eqref{eq:step_cond_induction} holds for some $k\geq 0$, using $\sigma_{k+1}=\sigma_k\sqrt{\gamma_{k+1}/\gamma_k}$ and $\gamma_{k+1}/\gamma_k\geq 1$, we get \begin{align} \frac{1-\delta}{\tau_{k+1}}=\frac{1-\delta}{\tau_k}\sqrt{\frac{\gamma_{k+1}}{\gamma_k}}\geq L_{xx}+\frac{L_{yx}^2}{c_\alpha}\sigma_{k+1}. \end{align} This completes the induction. Moreover, Lemma~\ref{lem:step-size-seq} implies that $\{\tau_k,\sigma_k,\theta_k\}$ generated by APD satisfies \eqref{eq:step-size-condition-theta} for $\{t_k\}$ such that $t_k=\sigma_k/\sigma_0$ for $k\geq 0$. Thus, Assumption~\ref{assum:step-2} holds for $\{\alpha_k,\beta_k,t_k\}_{k\geq 0}$ shown in \eqref{eq:abt_seq}. Lemma~\ref{lem:stronger-step-condition} implies that $\{x_k,y_k\}$ and $\{\tau_k,\sigma_k,\theta_k\}$ generated by APD satisfies Assumption~\ref{assum:step}. Therefore, \eqref{eq:delta} of \rev{the main result I} directly follows from Theorem~\ref{thm:general-bound}. Consider the setting in Part I of Theorem~\ref{thm:main}, i.e., $\mu=0$. Since $\mu=0$, clearly APD step-size sequence satisfies $\tau_k=\tau_0$, $\sigma_k=\sigma_0$ for $k\geq 0$; hence, $\theta_k=1$ and $t_k=1$ for $k\geq 0$, which implies $T_K=\sum_{k=0}^{K-1}t_k=K$ for $K\geq 1$. Moreover, for any $k\geq 0$, we have $\frac{1}{\sigma_k}-\theta_k(\alpha_k+\beta_k)=\frac{1-(c_\alpha+c_\beta)}{\sigma_0}\geq 0$; thus, \eqref{eq:rate} implies {\small \begin{align}\label{eq:rate_I} \frac{1}{\tau_0}\bD_{\cX}(x,x_{K}) +\frac{1-(c_\alpha+c_\beta)}{\sigma_0}\bD_{\cY}(y,y_{K}) +K[\cL(\bar{x}_{K},y)-\cL(x,\bar{y}_{K})]\leq \Delta(x,y). \end{align}} The rate result in \eqref{eq:delta} for Part I follows from dropping the non-negative terms on the left hand-side of \eqref{eq:rate_I}. \begin{comment} Indeed, we show that there exists $\{\alpha_k,\beta_k\}_{k\geq 0}\subset\reals_{++}\times\reals_+$ such that the particular step-size instances stated in Theorem~\ref{thm:main} (constant in Part I and non-constant in Part II) both satisfy the condition \eqref{eq:step-size-condition-theta} and \eqref{eq:step-size-condition-pd} which according to Lemma \ref{lem:parameter} also satisfy \eqref{eq:step-size-condition-Ek}; hence, the main result will then follow from Theorem~\ref{thm:general-bound}. Consider Part I of Theorem~\ref{thm:main}. For $k\geq 0$, {let $\alpha_k=\alpha>0$ and $\beta_k=L_{yy}\geq 0$.} Setting $\theta_k=1$, $t_k=1$, {$\tau_k=\tau_0$, and $\sigma_k=\sigma_0$} clearly satisfies \eqref{eq:step-size-condition-theta} and \eqref{eq:step-size-condition-pd} for $\delta= 1-\max\{c_\tau,c_\sigma\}\in [0,1)$, and we have $T_K=\sum_{k=0}^{K=1}t_k=K$ and $\frac{1}{\sigma_K}-\theta_K(\alpha_K+\beta_K)=\frac{1}{\sigma_0}-(\alpha+L_{yy})\geq \frac{1-c_\sigma}{\sigma_0}\geq 0$ which give us the rate result in Part I by dropping the non-positive terms in the right hand-side of \eqref{eq:rate}. Moreover, \end{comment} Finally, in case a saddle point $(x^*,y^*)$ exists, letting $x=x^*$ and $y=y^*$ in \eqref{eq:rate_I} and using the fact that $\cL(\bar{x}_K,y^*)-\cL(x^*,\bar{y}_K)\geq 0$, we obtain the bound on iterates given in the theorem. To show Part II of Theorem~\ref{thm:main}, now suppose $\mu>0$ and $L_{yy}=0$. \begin{comment} To show Part II of Theorem~\ref{thm:main}, consider $\{\tau_k,\sigma_k,\theta_k\}_{k\geq 0}$ given in Part II. 
Let $\alpha_k=\frac{c_\sigma}{\sigma_{k-1}}$ and $\beta_k=0$, for $k\geq 0$, then using induction and the fact that $\theta_k\leq 1$, for all $k\geq 0$, one can verify that the condition \eqref{eq:step-size-condition-pd} holds for $\delta= 1-\max\{c_\tau,c_\sigma\}\in[0,1)$; hence, part 1 and 2 in Lemma \ref{lem:parameter} imply that \end{comment} Since $\theta_k=\sigma_{k-1}/\sigma_k$, $c_\beta=0$, we have $\frac{1}{\sigma_k}-\theta_k(\alpha_k+\beta_k)=\frac{1-c_\alpha}{\sigma_k}\geq0$ for all $k\geq 0$. Hence, using $t_k=\sigma_k/\sigma_0$ for $k\geq 0$, \eqref{eq:rate} implies that {\small \begin{align}\label{eq:rate_II} \frac{\sigma_K}{\tau_{K}}\frac{1}{\sigma_0}\bD_{\cX}(x,x_{K}) +\frac{1-c_\alpha}{\sigma_0}\bD_{\cY}(y,y_{K}) + T_K [\cL(\bar{x}_{K},y)-\cL(x,\bar{y}_{K})]\leq \Delta(x,y). \end{align}} Lemma~\ref{lem:k2-rate} shows that $\sigma_k=\Omega(k)$; hence, $T_K=\sum_{k=0}^{K-1}t_k=\sum_{k=0}^{K-1}\sigma_k/\sigma_0=\Omega(K^2)$. Thus, the rate result in \eqref{eq:delta} for Part II follows from dropping the non-negative terms on the left hand-side of \eqref{eq:rate_II}. \begin{comment} Note that $\frac{1}{\sigma_K}-\theta_K(\alpha_K+\beta_K)=\frac{1-c_\sigma}{\sigma_K}\geq 0$ since $c_\sigma\in(0,1]$. Therefore, dropping the non-positive terms in the right hand-side of \eqref{eq:rate} and using part 4 of Lemma \ref{lem:parameter} implies \eqref{eq:delta} and the rate result. \end{comment} Finally, if a saddle point $(x^*,y^*)$ exists, then letting $x=x^*$ and $y=y^*$ in \eqref{eq:rate_II} and using the fact that $\cL(\bar{x}_K,y^*)-\cL(x^*,\bar{y}_K)\geq 0$ implies the bound on iterates. Moreover, Lemma~\ref{lem:k2-rate} shows that $\tau_k/\sigma_k=\cO(1/k^2)$; hence, we get $\bD_{\cX}(x^*,x_{k})=\cO(1/k^2)$. \begin{comment} that $\bD_\cX(x^*,x_K)\leq \sigma_0 \frac{\tau_K}{\sigma_K}\Delta(x^*,y^*)=\cO(1/K^2)$ and $\bD_{\cY}(y^*,y_K)\leq \frac{\sigma_0}{1-c_\sigma}\Delta(x^*,y^*)$, for $c_\sigma\in(0,1)$. \end{comment} Next, we discuss the convergence properties of the APD iterate sequence. Indeed, the result follows directly from Theorem~\ref{thm:convergence}; hence, we only need to verify the assumptions of the theorem. \subsubsection{Non-ergodic Convergence Analysis} \label{sec:nonergodic-APD} Suppose a saddle point exists. We set $\delta, c_\alpha>0$ and $c_\beta\geq 0$ such that $\delta+c_\alpha+c_\beta\leq 1$. Due to our choice of $\{\alpha_k,\beta_k,t_k\}$ in~\eqref{eq:abt_seq}, using $\gamma_k\geq \gamma_0$ for $k\geq 0$, the general assumption of Theorem~\ref{thm:convergence} holds, i.e., $\inf_{k\geq 0}t_k\min\{\frac{1}{\tau_k},~\frac{1}{\sigma_k}-\theta_k(\alpha_k+\beta_k)\}\geq\delta'$ for $\delta'=\min\{\frac{1}{\tau_0},~\frac{1-(c_\alpha+c_\beta)}{\sigma_0}\}>0$. For Part I, i.e., $\mu=0$, since $\tau_k=\tau_0$ and $\sigma_k=\sigma_0$ for $k\geq 0$, $\lim_{k\to \infty}\min\{\tau_k,~\sigma_k\}>0$; hence, from Theorem~\ref{thm:convergence}, any limit point of $\{(x_k,y_k)\}_{k\geq 0}$ is a saddle point. In addition, since $t_k=1$ for $k\geq 0$, $\lim_{k\to \infty}\alpha_k=\frac{c_\alpha}{\sigma_0}>0$, and since for the case $L_{yy}>0$ we have $\lim_{k\to \infty}\beta_k=\frac{c_\beta}{\sigma_0}>0$, we conclude that $\{(x_k,y_k)\}$ has a unique limit point. For Part II, i.e., $\mu>0$ and $L_{yy}=0$, it holds that $\tau_k\to 0$ and $\lim_{k\to\infty}\sigma_k>0$. Note for this setting, $\varphi_{\cX}(\cdot)=\norm{\cdot}_{\cX}^2$ defining $\bD_{\cX}$ is Lipschitz differentiable. 
Moreover, since $t_k=\sigma_k/\sigma_0$ and Lemma~\ref{lem:k2-rate} shows that $\tau_k=\Omega(1/\sigma_k)$, we have $t_k=\Omega(\frac{1}{\tau_k})$; thus any limit point of $\{(x_k,y_k)\}_{k\geq 0}$ is a saddle point. \begin{comment} \eyz{Suppose $c_\sigma,c_\tau\in(0,1)$, then \eqref{eq:step-size-condition-pd} holds for $\delta= 1-\max\{c_\sigma,c_\tau\}\in(0,1)$ which using Lemma \ref{lem:parameter} we have that Assumption \ref{assum:step} holds with $\delta>0$. Consider Part I, since the step-sizes are fixed and $t_k=1$, for $k\geq 0$, one can set $\delta'=\min\{\frac{1}{\tau_0},\frac{1-c_\sigma}{c_\sigma}(\alpha+2L_{yy})\}>0$. Moreover, the conditions of \textbf{Case 1} in Theorem \ref{thm:convergence} clearly holds which implies convergence of the iterates to a unique limit point. Now consider Part II where we set $t_k=\sigma_k$ and $\alpha_k=c_\sigma/\sigma_{k-1}$, for $k\geq 0$. From Lemma \ref{lem:parameter} part 4 we have that $t_k=\sigma_k=\Omega(\frac{1}{\tau_k})$, i.e., there exists $C>0$ such that $\frac{\sigma_k}{\tau_k}\geq \min\{C,\frac{\sigma_0}{\tau_0}\}$, for $k\geq 0$, and $\sigma_k=\Omega(k)$; hence, $\sigma_k\geq \min\{\Gamma,\sigma_0\}>0$, for some $\Gamma>0$ and any $k\geq 0$. Therefore, one can set $\delta'=\min\{C,\frac{\sigma_0}{\tau_0}, 1-c_\sigma\}>0$. Finally, since $\bD_\cX$ is chosen to be Euclidean distance the conditions in \textbf{CASE 2} of Theorem \ref{thm:convergence} holds which implies that any limit point of the iterates is a saddle point.} \end{comment} \subsection{\sa{Proof of Theorem~\ref{thm:backtrack}}} \label{sec:main_thm_proof_backtrack} Below we provide rate and non-ergodic convergence analyses for APDB. \subsubsection{Rate Analysis} \label{sec:rate_APDB} We first show that $\{x_k,y_k\}$ and $\{\tau_k,\sigma_k,\theta_k\}$ generated by APDB, displayed in Algorithm~\ref{alg:APDB}, satisfies Assumption~\ref{assum:step}. Indeed, Lemma~\ref{lem:step-size-seq} implies that $\{\tau_k,\sigma_k,\theta_k\}$ generated by APDB satisfies \eqref{eq:step-size-condition-theta} for $\{t_k\}$ such that $t_k=\sigma_k/\sigma_0$ for $k\geq 0$. Moreover, APDB is a special GAPD corresponding to a particular $\{\tau_k,\sigma_k,\theta_k\}$, and Lemma~\ref{lem:parameter} shows that for any $k\geq 0$, the backtracking condition in Line~\ref{algeq:test_function} of Algorithm~\ref{alg:APDB} holds after finite number of inner iterations. Thus, Assumption~\ref{assum:step} clearly holds, and \eqref{eq:delta} directly follows from Theorem~\ref{thm:general-bound} and observing that $\alpha_k$ and $\beta_k$ choice in APDB implies that $\frac{1}{\sigma_k}-\theta_k(\alpha_k+\beta_k)\geq \frac{1-(c_\alpha+c_\beta)}{\sigma_k}\geq 0$. Next, we show the number of inner iterations for each outer iteration $k\geq 0$ can be uniformly bounded by {$1+\log_{1/\eta}(\frac{\bar{\tau}}{\Psi})$}, where $\Psi=\Psi_1$ when $L_{yy}=0$, and $\Psi=\min\{\Psi_1,\Psi_2\}$ when $L_{yy}>0$. For Part I, since $\mu=0$, it is clear that $\gamma_k=\gamma_0>0$ for all $k\geq 0$. From Lemma~\ref{lem:parameter}, $\tau_k\geq\eta\hat{\tau}_k$ for some {$\hat{\tau}_k\geq \Psi\sqrt{\gamma_0/\gamma_k}=\Psi$} for all $k\geq 0$. Since $\{\tau_k\}_k$ is a diminishing sequence, we also have {$\tau_k\leq \tau_0\leq \bar{\tau}$} which implies that the number of backtracking steps is at most $1+\log_{1/\eta}(\frac{\bar{\tau}}{\Psi})$. Furthermore, since $\sigma_k=\gamma_0\tau_k$ for $k\geq 0$, we conclude that \eqref{eq:delta} holds with $T_K=\sum_{k=0}^{K-1}\sigma_k/\sigma_0 \geq {\frac{\eta\Psi}{\tau_0} K}$. 
{Finally, if a saddle point exists, then using $\cL(\bar{x}_K,y^*)-\cL(x^*,\bar{y}_K)\geq 0$, \eqref{eq:rate} implies \eqref{eq:xy-bound-I} for any saddle point $(x^*,y^*)$.} Consider Part II, i.e., $\mu>0$ and $L_{yy}=0$. Since {$\tau_{k+1}\leq\tau_k\sqrt{\gamma_k/\gamma_{k+1}}$}, we get $\tau_{k}\leq {\bar{\tau}}\sqrt{\gamma_0/\gamma_k}$ for all $k\geq 0$; moreover, according to Lemma \ref{lem:parameter}, we have that {$\tau_k\geq \eta\hat{\tau}_k\geq\eta\Psi\sqrt{\gamma_0/\gamma_k}$} for $k\geq 0$. Therefore, we conclude that the number of backtracking steps is at most {$1+\log_{1/\eta}(\frac{\bar{\tau}}{\Psi})$}. Moreover, Lemma~\ref{lem:k2-rate} shows that $\sigma_k=\Omega(k)$; hence, \eqref{eq:delta} for Part II holds with $T_K=\sum_{k=0}^{K-1}t_k=\sum_{k=0}^{K-1}\sigma_k/\sigma_0=\Omega(K^2)$. {Finally, if a saddle point exists, then \eqref{eq:rate} implies \eqref{eq:xy-bound-II} for any saddle point $(x^*,y^*)$.} \subsubsection{Non-ergodic Convergence Analysis} Using the same arguments in Section~\ref{sec:nonergodic-APD}, the general assumption of Theorem~\ref{thm:convergence} holds for $\delta'$ given in Section~\ref{sec:nonergodic-APD}. For Part I, since $\mu=0$, for $k\geq 0$, $\gamma_k=\gamma_0$; hence, $\sigma_k=\gamma_0\tau_k$. Thus, $t_k=\sigma_k/\sigma_0=\tau_k/\tau_0$. As discussed in Section~\ref{sec:rate_APDB}, for Part I we have ${\eta\Psi}\leq \tau_k\leq\tau_0$ for $k\geq 0$, which implies that {$\inf_{k\geq 0}t_k\geq {\eta\Psi/\tau_0}$ and $\sup_{k\geq 0} t_k\leq 1$}. Furthermore, since $\sigma_k\leq\gamma_0\tau_0=\sigma_0$, we also get $\rev{\liminf_{k\to \infty}}\min\{\alpha_k,\beta_k\}>0$ when $L_{yy}>0$, and $\rev{\liminf_{k\to \infty}}\alpha_k>0$ when $L_{yy}=0$. Therefore, the assumptions for Case 1 of Theorem~\ref{thm:convergence} are satisfied, and we have convergence to a unique saddle point. Part II follows from the same arguments given in Section~\ref{sec:nonergodic-APD}. \begin{comment} $\alpha_k=\frac{c_\alpha}{\sigma_{k-1}}$, $\beta_k=\frac{c_\beta}{\sigma_{k-1}}$, such that $c_\alpha,c_\beta\in(0,1)$, one can set $\delta'=\min\{\gamma_0,1-(c_\alpha+c_\beta)\}>0$. Moreover, from Lemma \ref{lem:parameter} part 3, we have that $\tau_k\geq C/\sqrt{\gamma_0}$, for some $C>0$, and together with the fact that $\sigma_k\leq \gamma_0\tau_0$ (side note: $\sigma_k\leq \gamma_0\tau_\max$), the assumptions of \textbf{CASE 1} are satisfied which implies the convergence to a unique saddle point. Now, consider Part II where we set $\alpha_k=\frac{c_\alpha}{\sigma_{k-1}}$ such that $c_\alpha\in(0,1)$ and $c_\beta=0$, and there exists $C>0$ such that $\frac{\sigma_k}{\tau_k}\geq \min\{C,\frac{\sigma_0}{\tau_0}\}$, for $k\geq 0$. Hence, one can set $\delta'=\min\{C,\frac{\sigma_0}{\tau_0},1-c_\alpha\}>0$. Finally, similar to previous section since $\sigma_k\geq \min\{\Gamma,\sigma_0\}>0$, for some $\Gamma>0$, the result follows. \end{comment} \begin{remark}\label{rem:larger-step} \sa{Selecting larger step-sizes may improve overall practical behavior of the algorithm. To this aim, one can adopt non-monotonic $\{\tau_k\}$ within APDB to possibly select larger steps -- this might increase the number of backtracking steps as \sa{the outer iteration counter $k\geq 0$ increases}. 
For example, given any $\tau_{\max}>0$, setting $\tau_{k+1}=\min\{\tau_k\sqrt{\frac{\gamma_k}{\gamma_{k+1}}(1+\frac{\tau_k}{\tau_{k-1}})},\tau_\max\}$ in Line~\ref{algeq:gamma_update_APDB} of APDB implies that the number of backtracking steps at iteration $k$ is bounded by $N_k\triangleq 1+\log_{1/\eta}(\frac{\tau_\max}{\Psi}\sqrt{\gamma_k/\gamma_0})$. When $\mu=0$, $N_k=1+\log_{1/\eta}(\frac{\tau_\max}{\Psi})$, while $N_k=\cO(\log(k))$ when $\mu>0$. Thus, given any $(x,y)\in\cX\times\cY$, to guarantee $\cL(\bar{x}_K,y)-\cL(x,\bar{y}_K)\leq \epsilon$, one needs $\cO(1/\epsilon)$ inner iterations in total when $\mu=0$ compared to $\cO(\frac{1}{\sqrt{\epsilon}}\log(1/\epsilon))$ inner iterations when $\mu>0$. It is worth reemphasizing that $\cO(1/\epsilon)$ and $\cO(1/\sqrt{\epsilon})$ are the lower complexity bounds associated with first-order primal-dual methods for convex-concave and strongly convex-concave bilinear SP problems, respectively~\cite{ouyang2018lower}.} \end{remark} \section{Application to {the} Constrained Convex Optimization} \label{sec:constrained} An important special case of \eqref{eq:original-problem} is the convex optimization problem with a nonlinear conic constraint, formulated as in~\eqref{eq:conic_problem}. Indeed, \eqref{eq:conic_problem} can be reformulated as a saddle point problem as shown in \eqref{eq:conic_problem_equivalent}, which is in the form of \eqref{eq:original-problem}. Clearly, $L_{yy}=0$, and $L_{yx}>0$ exists if $G$ is Lipschitz. Moreover, for any fixed $y\in\cY$, a bound on $L_{xx}$, \sa{i.e.,} the Lipschitz constant of $\grad_x\Phi(x,y)$ as a function of $x$, can be computed as \vspace*{-2mm} {\small \begin{align}\label{eq:L_xx-co} \norm{\grad_x \Phi(x,y)-\grad_x \Phi(\bar{x},y)}_{\cX^*}&\leq \norm{\grad g(x)-\grad g(\bar{x})}_{\cX^*}+\norm{\grad G(x)^\top y-\grad G(\bar{x})^\top y}_{\cX^*} \nonumber \\ &\leq (L_g+L_G\norm{y}_{\cY})\norm{x-\bar{x}}_\cX ,\quad \forall~x,\bar{x}\in \rev{\dom f}.\vspace*{-3mm} \end{align}} Now we customize our algorithm and state its convergence result for \eqref{eq:conic_problem}. \begin{assumption} \label{assum:optimization} Suppose $(\cX,~\norm{\cdot}_{\cX})=(\reals^n,~\norm{\cdot})$ and $(\cY,~\norm{\cdot}_{\cY})=(\reals^m,~\norm{\cdot})$ are Euclidean spaces. We assume that a dual optimal solution $y^*\in\cY$ exists. Consider the objective $\rho(x)\triangleq f(x)+g(x)$ in \eqref{eq:conic_problem}, suppose $f:\reals^n\rightarrow\reals\cup\{\infty\}$ is convex (possibly nonsmooth), $g:\reals^n\rightarrow\reals$ is convex with a Lipschitz continuous gradient with constant $L_g$, and $\cK\subset\reals^m$ is a closed convex cone. Moreover, {$G:\reals^n\rightarrow \reals^m$} is $\cK$-convex~\cite{boyd2004convex}, Lipschitz continuous with constant $C_G>0$ and it has a Lipschitz continuous {Jacobian}, denoted by $\grad G:\reals^n\rightarrow\reals^{m\times n}$, with constant $L_G\geq 0$. 
\end{assumption} \rev{Assumption \ref{assum:optimization} ensures that $\Phi(\cdot,\cdot)$ is convex-concave, and adopting the Euclidean metric will help us to convert our SP rate results into suboptimality and infeasibility rate results for the problem \eqref{eq:conic_problem}.} {In the rest, let $\cP_{\cK}(w) \triangleq \argmin_{y\in\cK}\norm{y-w}$, $d_{\cK}(w)\triangleq \norm{\cP_{\cK}(w)-w}=\norm{\cP_{\cK^{\circ}}(w)}$ where $\cK^\circ=-\cK^*$ denotes the polar cone of $\cK$.} We next consider two scenarios: \textbf{i)} a dual bound is {available}, \textbf{ii)} a dual bound is not {available} -- \rev{we allow the dual solution set to be possibly \emph{unbounded}.} \subsection{A dual bound is available} For any \emph{given} $\kappa, B>0$ such that $\norm{y^*}\leq B$ for some dual optimal solution $y^*$, let {\small \begin{align} \label{eq:B} \cB\triangleq \{y\in\reals^m \ :\ \norm{y}\leq B+\kappa\}. \end{align}} Thus, the Lipschitz constant $L_{xx}$ for \eqref{eq:L_xx-co} can be chosen as $L_{xx}=L_g+(B+\kappa)L_G$ and we set $h(y)=\mathbb{I}_{\cK^*\cap \cB}$. Such a bound $B$ can be computed if a slater point for \eqref{eq:conic_problem} is available. Using the following lemma one can compute a dual bound efficiently. \begin{lemma}\cite{aybat2016distributed}\label{dual-bound} Let $\bar{x}$ be a Slater point for \eqref{eq:conic_problem}, i.e., $\bar{x}\in\relint(\dom \rho)$ such that $G(\bar{x})\in\intr(-\cK)$, and $q:\reals^m\rightarrow\reals\cup\{-\infty\}$ denote the dual function, i.e., {\small $$q(y)\triangleq \begin{cases} \inf_{x}\rho(x)+ \fprod{G(x),~y}, & \hbox{if $y\in\cK^*$;} \\ -\infty, & \hbox{o.w.} \end{cases} $$} For any $\bar{y}\in \dom q$, let $Q_{\bar{y}}\triangleq\{y \in \dom q:\ q(y)\geq q(\bar{y})\}\subset\cK^*$ denote the corresponding superlevel set. Then for all $\bar{y}\in \dom q$, $Q_{\bar{y}}$ can be bounded as follows: \vspace*{-4mm} {\small \begin{equation} \label{eq:dual-bound-radius} \norm{y}\leq \frac{\rho(\bar{x})-q(\bar{y})}{r^*},\quad \forall y \in Q_{\bar{y}}, \end{equation}} where $0<r^*\triangleq \min_w\{-\fprod{G(\bar{x}),~w}:\ \|w\|= 1,\ w\in \cK^*\}$. Although this is not a convex problem due to the nonlinear equality constraint, one can upper bound \eqref{eq:dual-bound-radius} using $0<\tilde{r}\leq r^*$, which can be efficiently computed by solving a convex problem $\tilde{r}\triangleq\min_w\{-\fprod{G(\bar{x}),~w}:\ \|w\|_1= 1,\ w\in \cK^*\}$. \end{lemma} \begin{corollary}\label{cor:bound_known} Consider the convex optimization problem in~\eqref{eq:conic_problem}. \rev{Suppose Assumption \ref{assum:optimization} holds and} let $\{(x_k,y_k)\}_{k\geq 0}$ be the iterate sequence when APD is applied to the following SP problem with $h(y)=\mathbb{I}_{\cK^*\cap \cB}(y)$, \rev{where $\cB$ is defined in~\eqref{eq:B},} \begin{align} \label{eq:constrained-problem-thm} \min_{x\in\reals^n}\max_{y\in\reals^m} f(x)+g(x)+\fprod{G(x),y}-h(y). \end{align} Let $\tau_0=c_\tau\big(L_g+(B+\kappa)L_G+\tfrac{1}{\alpha}C_G^2\big)^{-1}$ and $\sigma_0=c_\sigma\alpha^{-1}$ for some $\alpha>0$ and $c_\tau,c_\sigma\in(0,1]$. 
Then, for all $K\geq 1$, \begin{align} \label{eq:optimization-bounds} \max\Big\{|\rho(\bar{x}_K)-\rho(x^*)|,~\kappa~d_{-\cK}\big(G(\bar{x}_K)\big)\Big\}\leq \frac{1}{T_K}\Delta(x^*,y_K^\dag)=\cO(1/T_K), \end{align} where $y_K^\dag=(\norm{y^*}+\kappa)\cP_{\cK^*}\big(G(\bar{x}_K)\big) \norm{\cP_{\cK^*}\big(G(\bar{x}_K)\big)}^{-1}$, $\bar{x}_K$ and $T_K$ are defined in Theorem~\ref{thm:main}, and \rev{$\Delta(\cdot,\cdot)$} is defined in \eqref{eq:delta}. Note that $\sup_{K\geq 1}\|y_K^\dag\|=\|y^*\|+\kappa$. \textbf{(Part I.)} Suppose the objective in \eqref{eq:conic_problem} is merely convex. Then \eqref{eq:optimization-bounds} holds with $T_K=K$ for all $K\geq 1$ when $\theta_k=1$, $\tau_k=\tau_0$, $\sigma_k=\sigma_0$ and $t_k=1$ for all $k\geq 0$. \textbf{(Part II.)} Suppose the objective in \eqref{eq:conic_problem} is strongly convex with $\mu>0$. Then \eqref{eq:optimization-bounds} holds with $T_K=\Theta(K^2)$ for all $K\geq 1$ when the $\{\tau_k,\sigma_k,\theta_k\}_{k\geq 0}$ sequence is chosen as in \eqref{eq:step-size-rule}, and $t_k=\sigma_k/\sigma_0$ for $k\geq 0$. Moreover, for all $K\geq 1$, \begin{align*} \bD_\cX(x^*,x_K)\leq \frac{\tau_K}{\sigma_K}\sigma_0\Delta(x^*,y^*)=\cO(1/K^2). \end{align*} \end{corollary} \begin{proof} It is easy to verify that $\fprod{G(\bar{x}_K),\sa{y_K^\dag}}= (\norm{y^*}+\kappa) d_{-\cK}(G(\bar{x}_K))$ since for any $w\in\reals^m$ we have $w=\cP_{-\cK}(w)+\cP_{\cK^*}(w)$ and $\fprod{\cP_{-\cK}(w),~\cP_{\cK^*}(w)}=0$. Hence, $\cL(\bar{x}_K,\sa{y_K^\dag})=\rho(\bar{x}_K)+(\norm{y^*}+\kappa) d_{-\cK}(G(\bar{x}_K))$ since $y_K^\dag\in\cK^*$. Note that $\rho(x^*)=\cL(x^*,y^*)\geq \cL(x^*,\bar{y}_K)$. Therefore, \eqref{eq:delta} implies that {\small \begin{align}\label{eq:L_upper_bound} \rho(\bar{x}_K)-\rho(x^*)+(\norm{y^*}+\kappa) d_{-\cK}(G(\bar{x}_K))\leq \cL(\bar{x}_K,y_K^\dag)-\cL(x^*,\bar{y}_K)\leq \sa{\frac{1}{T_K}}\Delta(x^*,y_K^\dag). \end{align}} On the other hand, we also have \begin{align}\label{eq:L_lower_bound} 0\leq \cL(\bar{x}_K,y_K^\dag)-\cL(x^*,y^*) &= \rho(\bar{x}_K)-\rho(x^*)+\fprod{G(\bar{x}_K),y^*} \nonumber \\ &\leq \rho(\bar{x}_K)-\rho(x^*)+\norm{y^*} d_{-\cK}(G(\bar{x}_K)), \end{align} where we used the fact that for any $y\in\reals^m$, $\fprod{y^*,y}\leq \fprod{y^*,\cP_{\cK^*}(y)}\leq \norm{y^*} d_{-\cK}(y)$. Combining \eqref{eq:L_upper_bound} and \eqref{eq:L_lower_bound} gives the desired result. \end{proof} \begin{remark} \label{rem:delta} \sa{ Since $\|y_K^\dag\|=\norm{y^*}+\kappa$ for all $K\geq 1$, one has $\sup_{K\geq 1}\Delta(x^*,y^\dag_K)\leq \frac{1}{2\tau_0}\norm{x^*-x_0}^2+\frac{1}{\sigma_0}((\norm{y^*}+\kappa)^2+\norm{y_0}^2)$. In practice, $\kappa=B$ can be used.} \end{remark} \subsection{\sa{A dual bound is not available}} \label{sec:without_dualbound} Here we consider the situation where the dual bound for \eqref{eq:conic_problem} is not known and is hard to compute. We consider two subcases.\\ \indent {\bf CASE 1: {$L_{xx}$ exists.}} Suppose $L_{xx}$ \emph{exists} but is \emph{not} known; then one can {\emph{immediately}} implement APDB, which locally estimates the Lipschitz constants. \begin{corollary}\label{cor:bound_unknown} \sa{Consider \eqref{eq:conic_problem} \rev{under Assumption \ref{assum:optimization}.} Let $\{(x_k,y_k)\}_{k\geq 0}$ be the APDB iterate sequence when APDB is applied to \eqref{eq:constrained-problem-thm} with $h(y)=\mathbb{I}_{\cK^*}(y)$.
The bounds in Part I and Part II of Corollary~\ref{cor:bound_known} continue to hold {for any $\kappa>0$} when the step-sizes are adaptively updated as described in Algorithm \ref{alg:APDB}.} \end{corollary} \indent {\bf CASE 2: {$L_{xx}$ does not {exist}}.} {Suppose $L_{xx}$ does not exist -- possibly when $\dom h$ is unbounded, e.g., see the example in Remark~\ref{rem:differentiability}.} In this case, one can still implement the algorithm APDB with a slight modification {and still guarantee convergence under Assumption~\ref{assum:optimization}}. In fact, \rev{since the global constant $L_{xx}$ does not exist for all $y\in\dom h$, the challenge in this scenario is to guarantee that there exists a \emph{bounded} set $\bar{\cB}\subset\reals^m$ such that APDB dual iterate sequence lies in it, i.e., $\{y_k\}\subset\bar{\cB}$, which would imply the existence of a constant $L_{xx}$ such that \eqref{eq:Lxx} holds for all $y\in\bar{\cB}$. More precisely, we use induction to show the boundedness of $\{y_k\}$, and this result implies} that $\Phi$ satisfies \eqref{eq:L_xx-co} for $y=y_k$ with $L_{xx}=L_g+L_G\sup_k\norm{y_k}_{\cY}$ for all $k\geq 0$ -- this is what we need for the proof of Theorem~\ref{thm:backtrack} to hold if we relax \textbf{(i)} in Assumption~\ref{assum}. \rev{However, naively using induction to construct a uniform bound on $\{y_k\}_k$ fails as} one needs the Lipschitz constant of $\grad_x\Phi(\cdot,y_{k+1})$ to bound $\norm{y_{k+1}}$ which depends on $y_{k+1}$ at iteration $k$. A remedy to this circular {argument} is to perform the $x$-update first, followed by the $y$-update; this way, at iteration $k\geq 0$, one needs to bound the Lipschitz constant of $\grad_x\Phi(\cdot,y_{k})$, instead of $\grad_x\Phi(\cdot,y_{k+1})$, which is now possible as a bound on $\norm{y_k}$ is available through the induction hypothesis. \sa{To solve \eqref{eq:conic_problem}, we will implement a modified version of APDB on \eqref{eq:constrained-problem-thm}, which is a special case of \eqref{eq:original-problem} with $\Phi(x,y)=g(x)+\fprod{G(x),y}$. In the rest, we consider running a variant of Algorithm~\ref{alg:APDB} with $(x_{k+1},y_{k+1})\gets\hbox{\textbf{MainStep}}(x_k,y_k,x_{k-1},y_{k-1},\tau_k,\sigma_k,\theta_k)$ step modified as follows:} \sa{ \begin{subequations}\label{eq:switchAPD} \begin{align} &{s_k} \gets (1+\theta_k)\grad_x\Phi(x_k,y_k)-\theta_k\grad_x\Phi(x_{k-1},y_{k-1}), \\ &x_{k+1}\gets \argmin_{x\in\cX} f({x})+\fprod{{s_k}, ~x}+{\frac{1}{\tau_k}}\bD_{\cX}(x,x_k),\\ & y_{k+1}\gets \argmin_{y\in\cY} h(y)-\fprod{\grad_y\Phi(x_{k+1},y_k),~ y}+{\frac{1}{\sigma_k}}\bD_{\cY}(y,y_k). \end{align} \end{subequations}} \sa{This switch of $x$- and $y$-updates within Algorithm~\ref{alg:APDB} also requires modifying the test function $E_k$ in \eqref{eq:Ek}. Now we redefine $E_k$ used in Line~\ref{algeq:test_function} as follows:} \sa{\small \begin{align} \label{eq:modified_Ek} E_k(x,y)\triangleq &\frac{1}{2\alpha_{k+1}}\norm{\grad_x\Phi(x,y)-\grad_x\Phi(x,y_k)}^2-\frac{1}{\sigma_k}\bD_\cY(y,y_k)\nonumber\\ &+\frac{1}{2\beta_{k+1}}\norm{\grad_x\Phi(x,y_k)-\grad_x\Phi(x_k,y_k)}^2-\Big(\frac{1}{\tau_k}-\theta_k(\alpha_k+\beta_k)\Big)\bD_\cX(x,x_k). 
\end{align}} \sa{Next, we state the convergence result of the proposed method.} \begin{corollary}\label{cor:bound_not_exist} \sa{Consider a variant of APDB where Line~\ref{algeq:APDB-mainstep} of Algorithm~\ref{alg:APDB} is replaced by the update-rule in \eqref{eq:switchAPD} and the test function in Line~\ref{algeq:test_function} is set as in~\eqref{eq:modified_Ek} with $\{\alpha_k,\beta_k\}$ chosen as $\alpha_{k+1}=c_\alpha/\tau_k$ and $\beta_{k+1}=\gamma_0 c_\beta/\sigma_k$ for $k\geq 0$.} \rev{Suppose Assumption~\ref{assum:optimization} holds and }\sa{let $(x^*,y^*)$ be a primal-dual optimal solution to \eqref{eq:conic_problem}. Consider $\{(x_k,y_k)\}_{k\geq 0}$ generated by the modified APDB when applied to \eqref{eq:constrained-problem-thm} with $h(y)=\mathbb{I}_{\cK^*}(y)$. Assuming either $\cK=\reals^m_+$ or $\grad G$ is bounded on $\dom f$, one has \begin{align} \label{eq:y_bound} \norm{y_k}\leq \bar{B}\triangleq \norm{y^*}+\sqrt{\gamma_0\norm{x^*-x_0}^2+\norm{y^*-y_0}^2},\quad \forall k\geq 0. \end{align} Moreover, for any $\kappa>0$, \eqref{eq:optimization-bounds} continue to hold for $\{t_k\}$ such that $t_k=\sigma_k/\sigma_0$ for $k\geq 0$. Finally, for $\mu=0$, $T_K=\Omega(K)$ and for $\mu>0$, $T_K=\Omega(K^2)$.} \end{corollary} \begin{proof} {Suppose not only a dual bound for \eqref{eq:conic_problem} is \emph{not} known and hard to compute; but also $L_{xx}$ for \eqref{eq:constrained-problem-thm} does not exist -- possibly when $\dom h$ is unbounded.} Here the main idea is to show that when APDB with update-rules of \eqref{eq:switchAPD}, applied to \eqref{eq:constrained-problem-thm} with $h(y)=\mathbb{I}_{\cK^*}(y)$, APDB generates $\{y_k\}$ that is bounded. Then the convergence results can be proved similar to the previous results using the dual iterate bound. {To this end, we use induction, i.e., we assume that for some $K\geq 1$, $\norm{y_k}\leq \bar{B}$ for $k= 0,\hdots,K-1$, and we prove that $\norm{y_K}\leq \bar{B}$.} Note that the basis of induction clearly holds for $K=1$ as we have $\bar{B}\rev{\geq \norm{y^*}+\norm{y^*-y_0}\geq} \norm{y_0}$. Given $\gamma_0,\bar{\tau}>0$, let $\sigma_{-1}=\gamma_0\bar{\tau}$ and consider $\{t_k,\alpha_k,\beta_k\}$ chosen as \begin{equation}\label{eq:parameter-t-alpha-beta} t_k=\sigma_k/\sigma_0,\quad \alpha_{k+1}=c_\alpha/\tau_k,\quad \beta_{k+1}=\gamma_0c_\beta/\sigma_k. \end{equation} We first verify that $\{x_k,y_k\}_{k=0}^{K-1}$ and $\{\tau_k,\sigma_k,\theta_k\}_{k=0}^{K-1}$ together with $\{\alpha_k,\beta_k,t_k\}$ as stated in~\eqref{eq:parameter-t-alpha-beta} satisfy \eqref{eq:step-size-condition} in Assumption~\ref{assum:step} for $k=0,\hdots,K-1$. To show that \eqref{eq:step-size-condition-Ek} and \eqref{eq:step-size-condition-theta} hold for $k=0,\hdots,K-1$, we revisit the proofs of Lemmas~\ref{lem:step-size-seq} and \ref{lem:parameter}. Note that from the step-size update rules we have that $\sigma_k=\gamma_k\tau_k$, {$\theta_k=\sigma_{k-1}/\sigma_k$} and $\gamma_{k+1}=\gamma_k(1+\mu\tau_k)$, for $k=0,\hdots,K-1$. Therefore, Lemma \ref{lem:step-size-seq} implies that \eqref{eq:step-size-condition-theta} holds for $k=0,\hdots,K-1$. {Note that for any $x\in\dom f$ and $y,y'\in\cY$, we have \begin{align} \label{eq:Lxy} \norm{\grad_x\Phi(x,y)-\grad_x\Phi(x,y')}=\norm{\grad G(x)^\top(y-y')}\leq\norm{\grad G(x)}\norm{y-y'}. 
\end{align} Moreover, using Lipschitz continuity of $G$ \rev{--see Assumption \ref{assum:optimization}}, one can easily show that if $\cK=\reals^m_+$, then $\grad G$ is bounded.\footnote{\rev{This is an immediate extension of the real-valued case, i.e., if a differentiable, convex function $g:\reals^n\to\reals$ is $L$-Lipschitz with respect to $\norm{\cdot}$, then $\norm{\grad g(\cdot)}_{*}\leq L$.}} Thus, it follows from \eqref{eq:Lxy} that whenever $\grad G$ is bounded on $\dom f$, the Lipschitz constant $L_{xy}$ exists. Next, define $\bar{L}_{xx}\triangleq {L_g+L_G \bar{B}}$ and note that $\norm{\grad_x\Phi(x_{k+1},y_k)-\grad_x\Phi(x_k,y_k)}\leq \bar{L}_{xx}\norm{x_{k+1}-x_k}$, for $k=0,\hdots,K-1$,} where we used \eqref{eq:L_xx-co} and the induction hypothesis, i.e., $\norm{y_k}\leq \bar{B}$ for $k= 0,\hdots,K-1$. Therefore, for $E_k(\cdot,\cdot)$ defined in~\eqref{eq:modified_Ek}, the following upper bound on $E_k\triangleq E_k(x_{k+1},y_{k+1})$ holds for $k=0,\hdots,K-1$: \begin{align*} E_k\leq & \Big(\frac{L_{xy}^2}{\alpha_{k+1}}-\frac{1}{\sigma_k}\Big)\bD_\cY(y_{k+1},y_k) +\Big(\frac{{\bar{L}_{xx}^2}}{\beta_{k+1}}+\theta_k(\alpha_k+\beta_k)-\frac{1}{\tau_k}\Big)\bD_\cX(x_{k+1},x_k)\\ \leq &\Big(\frac{1}{c_\alpha}{L_{xy}^2}\tau_k-\frac{1}{\gamma_k\tau_k}\Big)\bD_\cY(y_{k+1},y_k) +\Big({\bar{L}_{xx}^2}\tau_k\frac{\gamma_k}{c_\beta\gamma_0}-\frac{1-(c_\alpha+c_\beta)}{\tau_k}\Big)\bD_\cX(x_{k+1},x_k), \end{align*} where in the last inequality, we used $\sigma_k=\gamma_k\tau_k$ and {the fact that $\{\gamma_k\}_{k\geq 0}$ is a non-decreasing sequence such that $\gamma_k\geq \gamma_0$, for $k\geq 0$. Therefore, given $\delta\in [0,1)$, one can conclude that for all $k=0,\hdots,K-1$, $E_k\leq -\delta[\bD_\cX(x_{k+1},x_k)/\tau_k+\bD_\cY(y_{k+1},y_k)/\sigma_k]$ holds for any $\tau_k\in(0,\Psi_3\sqrt{\gamma_0/\gamma_k}]$ \begin{comment} if the following conditions hold \begin{align}\label{eq:step-size-cond-lip} \frac{1-\delta}{\sigma_k}\geq \frac{L_{xy}^2}{\alpha_{k+1}} ,\quad \frac{1-\delta}{\tau_k}\geq \frac{\rev{\bar{L}_{xx}^2}}{\beta_{k+1}}+\theta_k(\alpha_k+\beta_k) . \end{align} \end{comment} where $\Psi_3\triangleq \min\{\frac{\sqrt{c_\alpha(1-\delta)}}{L_{xy}\sqrt{\gamma_0}},\frac{\sqrt{c_\beta(1-(c_\alpha+c_\beta+\delta))}}{\bar{L}_{xx}}\}$;} hence, after a finite number of steps the backtracking terminates and we have $\tau_k\geq \eta\Psi_3\sqrt{\gamma_0/\gamma_k}$. At this point, we have verified that \eqref{eq:step-size-condition} holds for $k=0,\hdots,K-1$.
Following the same proof lines as in Theorem \ref{thm:general-bound} with $x$ and $y$ being switched one can easily derive the following result: {\small \begin{align}\label{eq:bound-lagrange-switch} {\cL(x_{k+1},y)-\cL(x,y_{k+1})\leq} Q_k(z) - R_{k+1}(z) + E_k, \end{align} which holds for $k=0,\ldots,K-1$, where $Q_k(z)$, $R_k(z)$ and $E_k$ are defined similarly: \begin{align*} & Q_k(z) \triangleq \frac{1}{\tau_k}\bD_{\cX}(x,x_k)+\frac{1}{\sigma_k}\bD_{\cY}(y,y_k)+\frac{\theta_k}{2\alpha_k}\norm{\grad_x\Phi(x_k,y_{k})-\grad_x\Phi(x_{k},y_{k-1})}^2 \nonumber\\ &+\theta_k\fprod{{q_k},x-x_k}+\frac{\theta_k}{2\beta_k}\norm{\grad_x\Phi(x_k,y_{k-1})-\grad_x\Phi(x_{k-1},y_{k-1})}^2, \nonumber\\ & R_{k+1}(z)\triangleq \frac{1}{\tau_{k}}\bD_{\cX}(x,x_{k+1})+\frac{\mu}{2}\norm{x-x_{k+1}}^2_{\cX}+\frac{1}{2\beta_{k+1}}\norm{{\grad_x\Phi(x_{k+1},y_{k})}-\grad_x\Phi(x_k,y_k)}^2\nonumber\\ &+\frac{1}{\sigma_{k}}\bD_{\cY}(y,y_{k+1}) +\fprod{{q_{k+1}},x-x_{k+1}}+\frac{1}{2\alpha_{k+1}}\norm{\grad_x\Phi(x_{k+1},y_{k+1})-{\grad_x\Phi(x_{k+1},y_{k})}}^2,\nonumber \\ & E_k = E_k(x_{k+1},y_{k+1}), \nonumber \end{align*}} and {$q_k\triangleq\grad_x\Phi(x_k,y_k)-\grad_x\Phi(x_{k-1},y_{k-1})$ for $k=0,\hdots,K-1$}. It follows from \eqref{eq:step-size-condition-theta} that multiplying \eqref{eq:bound-lagrange-switch} with $t_k$ and summing it over $k=0,\hdots,K-1$, we get \rev{ \begin{eqnarray}\label{eq:final-bound-backtracking-switch} \lefteqn{T_K(\cL(\bar{x}_{K},y)-\cL(x,\bar{y}_{K}))\leq t_0Q_0(z)-t_{K-1}R_K(z)\leq} \\ & & {\Delta(x,y)}-t_{K-1}\Big[\Big(\frac{1}{\tau_{K-1}}-(\alpha_K+\beta_K)\Big)\bD_{\cX}(x,x_{K}) +\frac{1}{\sigma_{K-1}}\bD_{\cY}(y,y_{K}) \Big], \nonumber \end{eqnarray}} \rev{where $\Delta(x,y)$ is defined in \eqref{eq:delta}.} Using \eqref{eq:parameter-t-alpha-beta} at $k=K-1$ we get \rev{ \begin{align} \frac{1}{\tau_{K-1}}-(\alpha_K+\beta_K)=\frac{\gamma_{K-1}}{\sigma_{K-1}}-\Big(\frac{\gamma_{K-1}c_\alpha+\gamma_0c_\beta}{\sigma_{K-1}}\Big)\geq\frac{\gamma_{K-1}}{\sigma_{K-1}}(1-c_\alpha-c_\beta) \geq 0, \end{align}} where \rev{the equality follows from $\sigma_{K-1}=\gamma_{K-1}\tau_{K-1}$, and} in the first inequality we used the fact that {$\gamma_{k+1}\geq\gamma_k>0$ for $k\geq 0$} and $c_\alpha+c_\beta+\delta\in (0,1]$. Therefore, \eqref{eq:final-bound-backtracking-switch} implies that {\small \begin{equation}\label{eq:final-bound-rate} \frac{\rev{\gamma_{K-1}}(1-(c_\alpha+c_\beta))}{\sigma_0}\bD_{\cX}(x,x_{K})+\frac{1}{\sigma_0}\bD_\cY(y,y_K)+T_K(\cL(\bar{x}_{K},y)-\cL(x,\bar{y}_{K}))\leq \Delta(x,y). \end{equation}} Evaluating \eqref{eq:final-bound-rate} at $(x,y)=(x^*,y^*)$ and using $\cL(\bar{x}_{K},y^*)-\cL(x^*,\bar{y}_{K})\geq 0$, we obtain that $\frac{1}{\sigma_0}\bD_\cY(y^*,y_K)\leq \Delta(x^*,y^*)$. Moreover, using $\bD_\cY(y^*,y_K)\geq \frac{1}{2}\norm{y^*-y_K}^2$ one can easily verify that $\norm{y_K}\leq \bar{B}$; hence, the induction is complete. Consider Part I, since $\mu=0$ it is clear that $\gamma_k=\gamma_0$, for $k\geq 0$; hence, we get $T_K= \sum_{k=0}^{K-1}\sigma_k/\sigma_0\geq \eta \frac{\Psi_3}{{\tau_0}}K$. Consider Part II, i.e., $\mu>0$ and $L_{yy}=0$, following the same proof lines of Lemma \ref{lem:k2-rate}, $\gamma_{k+1}=\gamma_k(1+\mu\tau_k)$ implies that $\gamma_{k+1}\geq \gamma_k+\mu\eta\Psi_3\sqrt{\gamma_0\gamma_k}$ which implies that $\gamma_k\geq (\frac{\Gamma}{3})^2k^2$ and $\sigma_k\geq \frac{\Gamma^2k}{3\mu}$ for $k\geq 0$, where $\Gamma=\mu\eta\Psi_3\sqrt{\gamma_0}$. Moreover, $\tau_k\sigma_k\geq \Gamma^2/\mu$, for $k\geq 1$. 
Therefore, $T_K=\sum_{k=0}^{K-1}\sigma_k/\sigma_0=\Omega({K^2})$, $\tau_K/\sigma_K=1/\gamma_K=\cO(1/K^2)$, and $\sigma_K=\Omega(1/\tau_K)$ for $K\geq 1$. Similar to the proof in section \ref{sec:rate_APDB}, one can observe that for both Part I and II, $\tau_k\geq \eta\Psi_3\sqrt{\gamma_0/\gamma_k}$ and $\tau_k\leq \bar{\tau}\sqrt{\gamma_0/\gamma_k}$ for all $k\geq 0$, which implies a uniform bound $1+\log_{1/\eta}(\frac{\bar{\tau}}{\Psi_3})$ on the number of inner iterations. Moreover, the rate results follow from \eqref{eq:final-bound-rate} similar to the proof of Theorem~\ref{thm:main}. \end{proof} \section{Numerical Experiments}\label{sec:numeric} In this section, we implement both APD and APDB for solving \rev{quadratically constrained quadratic problems with synthetic data, and two problems arising in machine learning, namely, the kernel matrix learning and regression with fairness constraints.} We compare them with other state-of-the-art methods. \sa{All experiments are performed on a machine running 64-bit Windows 10 with Intel i7-8650U @2.11GHz and 16GB RAM.} \begin{remark} Using convexity of $\Phi(\cdot,y)$, one can define another test function $\widetilde{E}_k(x,y)$ that upper bounds $E_k(x,y)$. Indeed, we define $\widetilde{E}_k(x,y)$ by replacing the term \sa{$\Phi(x,y)-\Phi(x_k,y)-\fprod{\grad_x\Phi(x_k,y),x-x_k}$ in the first line of \eqref{eq:Ek} with the inner product $\fprod{\grad_x\Phi(x,y)-\grad_x\Phi(x_k,y),x-x_k}$.} \sa{For backtracking, using $\widetilde{E}_k(x_{k+1},y_{k+1})$ in Line~\ref{algeq:test_function} of APDB instead of $E_k(x_{k+1},y_{k+1})$ leads to a stronger condition; but, in practice, we have found this condition numerically more stable.} \end{remark} \subsection{Kernel Matrix Learning} We test the implementation of our method for solving the kernel matrix learning problem discussed in Section~\ref{sec:intro} for classification. In particular, given a set of kernel matrices $\{K_\ell\}_{\ell=1}^M\subset\mathbb{S}^n_{+}$, consider the problem in~\eqref{eq:kernel_learn_simple}. When $\lambda>0$ and $C=\infty$, the objective is to find a kernel matrix $K^*\in\cK\triangleq\{\sum_{\ell=1}^M\eta_\ell K_\ell:\ \eta\geq \mathbf{0}\}$ that achieves the best training error for $\ell_2$-norm soft margin SVM, and when $\lambda=0$ and $C>0$, the objective is to find a kernel matrix $K^*\in\cK$ that gives the best performance for $\ell_1$-norm soft margin SVM. Once $(\alpha^*,\eta^*)$, a saddle point for \eqref{eq:kernel_learn_simple}, is computed using the training set $\cS$, one can construct $K^*=\sum_{\ell=1}^M{\eta_\ell^*}K_\ell$ and predict unlabeled data in the test set $\cT$ using the model $\cM:\reals^m\rightarrow\{-1,+1\}$ such that the predicted label of $\ba_i$ is $ \cM(\ba_i)={\rm sign}\Big(\sum_{j\in\cS}b_j\alpha^*_j K^*_{ji}+\gamma^*\Big), $ for all $i\in\cT$ where for $\ell_1$ soft margin SVM, $\gamma^*=b_{i^*}-\sum_{j\in\cS}b_j\alpha^*_jK^*_{ji^*}$ for some $i^*\in\cS$ such that $\alpha^*_{i^*}\in (0,C)$, and for $\ell_2$ soft margin SVM, $\gamma^*=b_{i^*}(1-\lambda\alpha^*_{i^*})-\sum_{j\in\cS}b_j\alpha^*_jK^*_{ji^*}$ for some $i^*\in\cS$ such that $\alpha^*_{i^*}>0$. 
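For illustration, the prediction rule described above can be assembled directly from a saddle point $(\alpha^*,\eta^*)$ as in the following sketch (illustrative Python/NumPy code with variable names of our choosing; it is not part of the implementation used in the experiments, and it covers the $\ell_1$ soft margin bias formula only):
\begin{verbatim}
import numpy as np

def predict_labels(alpha_star, eta_star, kernels, b_train, train_idx, test_idx, C):
    """Sketch of the prediction rule M(a_i) = sign(sum_j b_j alpha*_j K*_{j,i} + gamma*)."""
    # K* = sum_ell eta*_ell K_ell, where kernels[ell] is a full kernel matrix
    K_star = sum(eta * K for eta, K in zip(eta_star, kernels))

    # decision values f_i = sum_{j in S} b_j alpha*_j K*_{j,i} for the test points i
    coef = b_train * alpha_star                      # elementwise b_j * alpha*_j
    f = coef @ K_star[np.ix_(train_idx, test_idx)]   # one value per test point

    # bias gamma* = b_{i*} - sum_{j in S} b_j alpha*_j K*_{j,i*}
    # for some i* in S with 0 < alpha*_{i*} < C  (l1 soft margin case)
    i_star = np.flatnonzero((alpha_star > 1e-8) & (alpha_star < C - 1e-8))[0]
    gamma_star = b_train[i_star] - coef @ K_star[np.ix_(train_idx, [train_idx[i_star]])].ravel()

    return np.sign(f + gamma_star)                   # predicted labels in {-1,+1}
\end{verbatim}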
Note that \eqref{eq:kernel_learn_simple} is a special case of \eqref{eq:original-problem} for $f$, $\Phi$ and $h$ chosen as follows: let $y_\ell\triangleq\frac{\eta_\ell r_\ell}{c}$ for each $\ell$ and define $h(y)=\ind{\Delta}(y)$ where $y=[y_\ell]_{\ell=1}^M\in\reals^M$ and $\Delta$ is the $M$-dimensional unit simplex; $\Phi(x,y)= -2\be^\top x + \sum_{\ell=1}^M \frac{c}{r_\ell}y_\ell x^\top G(K^{tr}_\ell)x+\lambda\norm{x}_2^2$, and $f(x)=\ind{X}(x)$ where $X=\{x\in\reals^{n_{tr}}:~0\leq x\leq C,~ \fprod{\bb, x}=0\}$. \subsubsection{{APD} vs \textbf{Mirror-prox} for soft-margin SVMs} In this experiment, we compared our method against Mirror-prox, the primal-dual algorithm proposed by He et al.~\cite{he2015mirror}. We used four different data sets available in the UCI repository: \texttt{Ionosphere} (351 observations, 33 {features}), \texttt{Sonar} (208 observations, 60 {features}), \texttt{Heart} (270 observations, 13 {features}) and \texttt{Breast-Cancer} (608 observations, 9 {features}) with three given kernel functions ($M=3$): the polynomial kernel function $k_1(\ba,\bar{\ba})=(1+\ba^\top\bar{\ba})^2$, the Gaussian kernel function $k_2(\ba,\bar{\ba})=\exp(-0.5(\ba-\bar{\ba})^\top(\ba-\bar{\ba})/0.1)$, and the linear kernel function $k_3(\ba,\bar{\ba})=\ba^\top\bar{\ba}$, used to compute $K_1, K_2, K_3$, respectively. All the data sets are normalized such that each feature column is mean-centered and divided by its standard deviation. For the $\ell_2$-norm soft margin SVM we set $\lambda=1$, for the $\ell_1$-norm soft margin SVM we set $C=1$, and for both SVMs $c=\sum_{\ell=1}^3 r_\ell$, where $r_\ell={\rm trace}(K_\ell)$ for $\ell=1,2,3$. The kernel matrices are normalized as in \cite{lanckriet2004learning}; {thus, $\diag(K_\ell)=\mathbf{1}$, $r_\ell=n_{tr}+n_t$ and $c/r_\ell=3$ for $\ell=1,2,3$.} We tested three different implementations of the APD algorithm: we will refer to the constant step version of APD, stated in Part I of the main result in Theorem~\ref{thm:main}, as {\bf APD1}; and we refer to the adaptive step version of APD, stated in Part II of the main result, as {\bf APD2}. Finally, we also implemented a variant of {\bf APD2} with periodic restarts, which we call {\bf APD2-restart}. The {\bf APD2-restart} method is implemented simply by restarting the algorithm periodically after every 500 iterations and using the most current iterate as the initial solution for the next call of {\bf APD2}. {All the algorithms are initialized from $x_0=\mathbf{0}$ and $y_0=\frac{1}{M}\mathbf{1}$.} The results reported are the average values over 10 random replications. In each replication, 80\% of the data set is selected uniformly at random and used for training; the rest of the data set (20\%) is reserved as test data to calculate the test set accuracy (TSA), which is defined as the fraction of correctly labeled data in the test set. The algorithms are compared in terms of relative error for the function value ($|\cL(x_k,y_k)-\cL^*|/|\cL^*|$) and for the solution ($\norm{x_k-x^*}_2/\norm{x^*}_2$), where $(x^*,y^*)$ denotes a saddle point for the problem of interest, i.e., \eqref{eq:kernel-l1} or \eqref{eq:kernel-l2}, and $\cL^*\triangleq \cL(x^*,y^*)$. To compute $(x^*,y^*)$, we called MOSEK through CVX \cite{grant2008cvx}.
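For completeness, the error measures and the TSA reported below can be computed as in the following sketch (our own illustrative Python/NumPy code; it assumes the reference saddle point $(x^*,y^*)$ returned by MOSEK/CVX and the predicted test labels are available as arrays):
\begin{verbatim}
import numpy as np

def relative_errors(L, x_k, y_k, x_star, L_star):
    """Relative error for the function value and for the solution, as defined in the text.
    L is a callable evaluating the saddle function L(x, y); L_star = L(x_star, y_star)."""
    rel_fun = abs(L(x_k, y_k) - L_star) / abs(L_star)
    rel_sol = np.linalg.norm(x_k - x_star) / np.linalg.norm(x_star)
    return rel_fun, rel_sol

def test_set_accuracy(predicted, true_labels):
    """TSA: fraction of correctly labeled points in the test set."""
    return np.mean(np.asarray(predicted) == np.asarray(true_labels))
\end{verbatim}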
{\bf $\ell_1$-norm Soft Margin SVM:} Consider the following equivalent reformulation of the $\ell_1$-norm soft margin problem:\vspace*{-4mm} \begin{align} &\min_{\substack{\rev{x:\ 0\leq x\leq C\be} \\ \fprod{\bb,x}=0}}\max_{y\in\Delta} -2x^\top \be + \sum_{\ell=1}^M \frac{c}{r_\ell}~y_\ell~ x^\top G(K_\ell^{tr})x. \label{eq:kernel-l1} \end{align} Note that \eqref{eq:kernel-l1} is merely convex in $x$ and linear in $y$; therefore, we only used {\bf APD1}. Let $\norm{\cdot}$ denote the spectral norm; the Lipschitz constants defined in \eqref{eq:Lxx} and \eqref{eq:Lipschitz_y} can be set as $L_{xx}\triangleq 6\max_{\ell=1,2,3}\{\norm{G(K_\ell)}\}$, $L_{y y}=0$, $L_{y x}\triangleq 6\sqrt{3}C\max_{\ell=1,2,3}\{\norm{G(K_\ell)}\}$. Recall that the step-size of {\bf Mirror-prox} is determined by the Lipschitz constant $L$ of $\grad \Phi$, which can be set as $L=\sqrt{L_{xx}^2+L_{x y}^2+L_{yx}^2+L_{yy}^2}$, where $L_{xy}$ is defined similarly to $L_{yx}$ in~\eqref{eq:Lipschitz_y}, and for \eqref{eq:kernel-l1} one can take $L_{xy}=L_{yx}$. In these experiments on $\ell_1$ soft margin problems, {\bf APD1} outperformed {\bf Mirror-prox} on all four data sets. In particular, {\bf APD1} and {\bf Mirror-prox} are compared in terms of relative errors for the function value and for the solution in Figures~\ref{fig:l1-obj} and \ref{fig:l1-sol}, respectively. In these figures, relative errors are plotted against the number of iterations. In this experiment we observed that for a fixed number of iterations \rev{$k$}, the run time for {\bf Mirror-prox} is at least twice the run time of {\bf APD1} -- while { APD} requires one primal-dual prox operation, {\bf Mirror-prox} needs \emph{two} primal-dual prox operations at each iteration. It is worth mentioning that for the \texttt{Ionosphere} and \texttt{Sonar} data sets, 1500 iterations of {\bf APD1} took roughly the same time as MOSEK required to solve the problem, and within 1500 iterations {\bf APD1} was able to generate a decent approximate solution with relative error less than $10^{-4}$ and with a high TSA value; on the other hand, {\bf Mirror-prox} was not able to produce such good quality solutions in a similar amount of time. The average TSA values of the optimal solution $(x^*,y^*)$, computed by MOSEK, are 93.81, 84.76, 84.07, and 96.79 percent for the \texttt{Ionosphere}, \texttt{Sonar}, \texttt{Heart} and \texttt{Breast-Cancer} data sets, respectively. Note that TSA is not necessarily increasing in the number of iterations \sa{$k$}, e.g., {\bf APD1} iterates at \sa{$k=1000$ and $k=2000$} have 85.95\% and 84.76\% TSA values, respectively, for the \texttt{Sonar} data set -- note the optimal solution $(x^*,y^*)$ has 84.76\% {TSA -- see Table~\ref{table:l1} for details}. This is a well-known phenomenon and is related to overfitting; in particular, the model’s ability to generalize can weaken as it begins to overfit the training data. \begin{table*}[h] \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{|c|c|c c c|c c c|c c c|c c c|} \hline \multicolumn{2}{| c |}{Iteration~\#} & \multicolumn{3}{| c |}{k=1000} & \multicolumn{3}{| c |}{k=1500} & \multicolumn{3}{| c |}{k=2000} & \multicolumn{3}{| c |}{k=2500} \\ \hline \mbox{} & \mbox{} & & Rel. & & & Rel.& & & Rel.& & & Rel. & \\ Method & Data Set & Time & error
& TSA & Time & error & TSA & Time & error & TSA & Time & error & TSA \\ \hline \multirow{2}{*}{\bf APD1} &Ionosphere & 0.34& 5.6e-05 & 93.80 & 0.50& 9.3e-06 & 93.80 & 0.67& 1.6e-06 & 93.80 & 0.84& 3.6e-07 & 93.80 \\ & Sonar & 0.16& 4.6e-04 & 85.95 & 0.24& 4.1e-05 & 84.52 & 0.32& 2.1e-06 & 84.76 & 0.41& 9.7e-08 & 84.76 \\ & Heart & 0.31& 1.1e-06 & 83.89 & 0.46& 3.6e-07 & 83.89 & 0.60& 1.1e-07 & 83.89 & 0.75& 3.6e-08 & 83.88 \\ & Breast-cancer & 0.98& 5.5e-03 & 96.86 & 1.59& 1.0e-03 & 96.79 & 2.19& 2.2e-04 & 96.72 & 2.79& 6.3e-05 & 96.71\\ \hline \multirow{ 2}{*}{\bf Mirror-} &Ionosphere & 0.84& 1.3e-04 & 93.80 & 1.30& 2.6e-05 & 93.80 & 1.80& 6.3e-06 & 93.80 & 2.28& 1.5e-06 & 93.80 \\ & Sonar & 0.47 & 4.3e-03 & 85.24 & 0.71 & 3.4e-04 & 85.48 & 0.96 & 2.9e-05 & 84.52 & 1.24 & 2.9e-06 & 84.76 \\ {\bf prox}& Heart & 0.75 & 1.9e-06 & 83.89 & 1.16 & 7.5e-07 & 83.89 & 1.60 & 2.9e-07 & 83.89 & 2.04 & 1.2e-07 & 83.88 \\ & Breast-cancer & 2.56 & 1.1e-02 & 96.86 & 4.24 & 2.6e-03 & 96.86 & 5.91 & 6.8e-04 & 96.72 & 7.54 & 2.0e-04 & 96.71 \\ \hline \end{tabular} } \caption{$\ell_1$-norm soft margin: runtime (sec), relative error for $\cL(x_k,y_k)$ and TSA (\%) for {\bf APD1} and {\bf Mirror-prox} at iteration $k\in\{1000, 1500, 2000, 2500\}$.} \vspace*{-5mm} \label{table:l1} \end{table*} \begin{figure} \caption{$\ell_1$-norm soft margin: {\bf APD1} \label{fig:l1-obj} \end{figure} \begin{figure} \caption{$\ell_1$-norm soft margin: {\bf APD1} \label{fig:l1-sol} \end{figure} {\bf $\ell_2$-norm Soft Margin SVM:} Consider the following equivalent reformulation of $\ell_2$-norm soft margin problem: \begin{align} &\min_{\substack{x\geq 0\\ \fprod{\bb,x}=0}}\max_{y\in\Delta} -2x^\top \be + \sum_{\ell=1}^M \frac{c}{r_\ell}~y_\ell~x^\top G(K_\ell^{tr})x+\lambda\norm{x}_2^2. \label{eq:kernel-l2} \vspace*{-2mm} \end{align} Since \eqref{eq:kernel-l2} is strongly convex in $x$ and linear in $y$, we implement both {\bf APD2} and {\bf APD2-restart} methods in addition to {\bf APD1}. Due to strong convexity, $\norm{x^*}_2$ can be bounded depending on $\lambda>0$ and the Lipschitz constants can be computed similarly as in $\ell_1$-norm soft margin problem. In these experiments on $\ell_2$ soft margin problems, {\bf APD2-restart} outperformed all other methods on all four data sets. In particular, {\bf APD1}, {\bf APD2}, {\bf APD2-restart} and {\bf Mirror-prox} are compared in terms of relative errors for function value and for solution in Figures~\ref{fig:l2-obj} and \ref{fig:l2-sol} respectively. Given an accuracy level, both {\bf APD1} and {\bf APD2-restart} can compute a solution with a given accuracy requiring much fewer iterations than {\bf Mirror-prox} needs. In addition, we observed that for fixed number of iterations the run time for {\bf Mirror-prox} is almost twice the run time for any {APD} implementation -- see Table~\ref{table:l2}, {and for runtime comparison on larger size problems, see Section~\ref{sec:experiment-ipm}}. Interpreting the results in Figures~\ref{fig:l2-obj} and \ref{fig:l2-sol}, and computational time, we conclude that {APD} implementations can compute a solution with a given accuracy in a significantly lower time than {\bf Mirror-prox} requires. 
For instance, consider the results for \texttt{Sonar} data set in Figure~\ref{fig:l2-obj}, to compute a solution with $|\cL(x_k,y_k)-\cL^*|/|\cL^*|<10^{-6}$, {\bf APD2-restart} requires 1000 {iterations}, on the other hand, {\bf Mirror-prox} needs around 2000 iterations; hence, {\bf APD2-restart} can compute it in 1/4 of the run time for {\bf Mirror-prox}. This effect is more apparent when these methods are compared on larger scale problems, e.g., see Figure~\ref{fig:largedata-sido}. \begin{table*}[h] \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{|c|c|c c c|c c c|c c c|c c c|} \hline \multicolumn{2}{| c |}{Iteration~\#} & \multicolumn{3}{| c |}{k=1000} & \multicolumn{3}{| c |}{k=1500} & \multicolumn{3}{| c |}{k=2000} & \multicolumn{3}{| c |}{k=2500} \\ \hline \mbox{} & \mbox{} & & Rel. & & & Rel. & & & Rel. & & & Rel. & \\ Method & Data Set & Time & error & TSA & Time & error & TSA & Time & error & TSA & Time & error & TSA \\ \hline \multirow{ 2}{*}{\bf APD1} & Ionosphere & 0.29 & 6.2e-07 & 93.9 & 0.45 & 1.6e-06 & 93.9 & 0.60 & 1.6e-06 & 93.9 & 0.75 & 1.6e-06 & 93.9 \\ & Sonar & 0.15 & 8.3e-05 & 82.6 & 0.23 & 1.3e-06 & 82.4 & 0.31 & 2.3e-08 & 82.1 & 0.40 & 3.6e-10 & 82.1 \\ & Heart & 0.26 & 3.0e-11 & 83.0 & 0.38 & 3.0e-11 & 83.0 & 0.50 & 3.0e-11 & 83.0 & 0.62 & 3.0e-11 & 83.0 \\ & Breast-Cancer & 0.76 & 7.5e-05 & 96.9 & 1.22 & 4.4e-06 & 96.9 & 1.65 & 4.4e-07 & 96.9 & 2.08 & 5.5e-08 & 96.9 \\ \hline \multirow{2}{*}{\bf APD2} & Ionosphere & 0.32 & 1.6e-06 & 93.9 & 0.48 & 1.6e-06 & 93.9 & 0.65 & 1.6e-06 & 93.9 & 0.82 & 1.6e-06 & 93.9 \\ & Sonar & 0.15 & 4.1e-06 & 82.4 & 0.23 & 2.0e-07 & 82.1 & 0.31 & 9.5e-09 & 82.1 & 0.38 & 9.4e-10 & 82.1 \\ & Heart & 0.23 & 4.5e-11 & 83.0 & 0.34 & 3.3e-11 & 83.0 & 0.45 & 3.1e-11 & 83.0 & 0.57 & 3.1e-11 & 83.0 \\ & Breast-Cancer & 0.82 & 4.9e-06 & 96.9 & 1.23 & 7.9e-07 & 96.9 & 1.66 & 2.4e-07 & 96.9 & 2.08 & 9.3e-08 & 96.9 \\ \hline \multirow{ 2}{*}{\bf APD2-} & Ionosphere & 0.33 & 1.6e-06 & 93.9 & 0.49 & 1.6e-06 & 93.9 & 0.64 & 1.6e-06 & 93.9 & 0.80 & 1.6e-06 & 93.9\\ & Sonar & 0.15 & 1.0e-06 & 82.1 & 0.23 & 2.1e-08 & 82.1 & 0.32 & 6.5e-11 & 82.1 & 0.40 & 9.9e-12 & 82.1\\ {\bf restart}& Heart & 0.23 & 3.0e-11 & 83.0 & 0.35 & 3.0e-11 & 83.0 & 0.47 & 3.0e-11 & 83.0 & 0.58 & 3.0e-11 & 83.0\\ & Breast-Cancer & 0.80 & 6.9e-07 & 96.9 & 1.23 & 1.7e-08 & 96.9 & 1.67 & 5.7e-10 & 96.9 & 2.10 & 7.2e-11 & 96.9\\ \hline \multirow{ 2}{*}{\bf Mirror-} & Ionosphere & 0.60 & 1.6e-06 & 93.9 & 0.89 & 1.6e-06 & 93.9 & 1.20 & 1.6e-06 & 93.9 & 1.51 & 1.6e-06 & 93.9\\ & Sonar & 0.31 & 5.5e-04 & 82.4 & 0.45 & 1.4e-05 & 82.4 & 0.61 & 6.9e-07 & 82.1 & 0.76 & 2.1e-08 & 82.1\\ {\bf prox}& Heart & 0.45 & 3.0e-11 & 83.0 & 0.68 & 3.0e-11 & 83.0 & 0.91 & 3.0e-11 & 83.0 & 1.14 & 3.0e-11 & 83.0\\ & Breast-Cancer & 1.48 & 2.0e-04 & 96.9 & 2.35 & 1.4e-05 & 96.9 & 3.23 & 1.6e-06 & 96.9 & 4.07 & 2.5e-07 & 96.9\\ \hline \end{tabular} } \caption{$\ell_2$-norm soft margin: runtime (sec), relative error for $\cL(x_k,y_k)$ and TSA (\%) for {\bf APD1}, {\bf APD2}, {\bf APD2-restart} and {\bf Mirror-prox} at iteration $k\in\{1000, 1500, 2000, 2500\}$.} \vspace*{-8mm} \label{table:l2} \end{table*} \begin{figure} \caption{$\ell_2$-norm soft margin: Comparison of {\bf APD1} \label{fig:l2-obj} \end{figure} \begin{figure} \caption{$\ell_2$-norm soft margin: Comparison of {\bf APD1} \label{fig:l2-sol} \end{figure} \subsubsection{ {APD} vs off-the-shelf interior point methods} \label{sec:experiment-ipm} In this section we compare time complexity of our methods against widely-used, open-source 
interior point method~(IPM) solvers {\bf Sedumi} v1.3 and {\bf SDPT3} v4.0. Here we used two data sets: \texttt{Breast-Cancer}, available in the UCI repository, and {\texttt{SIDO0}}~\cite{guyon2008sido} (6339 observations, 4932 features -- we used half of the observations in the data set). The goal is to investigate how the quality of iterates, measured in terms of relative solution error $\norm{x_k-x^*}_2/\norm{x^*}_2$, changes as run time progresses. Let $x^*$ be the solution of problem \eqref{eq:kernel-l2}, which we computed using MOSEK in CVX with the best accuracy option. To compare {\bf APD1}, {\bf APD2}, {\bf APD2-restart} and {\bf Mirror-prox} against the interior point methods {\bf Sedumi} and {\bf SDPT3}, the problem in~\eqref{eq:kernel-l2} is first solved by {\bf Sedumi} and {\bf SDPT3} using their default settings. Let $t_1$ and $t_2$ denote the run times of {\bf Sedumi} and {\bf SDPT3} in seconds, respectively. Next, the primal-dual methods {\bf APD1}, {\bf APD2}, {\bf APD2-restart} and {\bf Mirror-prox} were run with the same settings as in the $\ell_2$-norm soft margin experiment for $\max\{t_1,t_2\}$ seconds. The mean relative solution error of the iterates over 10 replications is plotted against time (seconds) for each of these methods in Figure~\ref{fig:largedata}. {\bf Sedumi} and {\bf SDPT3} are second-order methods and have much better theoretical convergence rates compared to the first-order primal-dual methods proposed in this paper. However, for large-scale machine learning problems with dense data, the work per iteration of an IPM is significantly more than that of a first-order method; hence, if the objective is to attain low-to-medium accuracy solutions, then first-order methods are better suited for this task, e.g., in Figure~\ref{fig:largedata-sido}, {\bf APD2-restart} iterates are more accurate than those of {\bf Sedumi} and {\bf SDPT3} for the first 2000 and 1000 seconds, respectively.\vspace*{-2mm} \begin{figure} \caption{\texttt{Breast-Cancer} data set.} \label{fig:largedata-bc} \caption{\texttt{SIDO0} data set.} \label{fig:largedata-sido} \caption{Run time comparison of {\bf APD1}, {\bf APD2}, {\bf APD2-restart}, {\bf Mirror-prox}, {\bf Sedumi} and {\bf SDPT3}.} \label{fig:largedata} \end{figure} \subsection{\sa{Quadratically Constrained Quadratic Programming (QCQP)}}\label{sec:qcqp} In this subsection, we compare APDB against the linearized augmented Lagrangian method (LALM)~\cite{xu2017first} and the proximal extrapolated gradient method~(PEGM) \cite{malitsky2018proximal} on randomly generated QCQP problems. We consider a QCQP problem of the following form:\vspace*{-3mm} {\small \begin{subequations} \begin{align} \rho^*\triangleq\min_{x\in X} \quad &\rho(x)\triangleq {\frac{1}{2}x^\top A_0 x +b_0^\top x} \label{qcqp:obj}\\ \hbox{s.t.}\quad &G_j(x)\triangleq \frac{1}{2}x^\top A_j x +b_j^\top x-c_j\leq 0, \quad j\in\{1,\hdots,m\}, \label{qcqp:const} \end{align} \end{subequations}} where $X=[-10,10]^n$, $\{A_j\}_{j=0}^m\subseteq \reals^{n\times n}$ are positive semidefinite matrices generated randomly; $\{b_j\}_{j=0}^m\subseteq \reals^n$ are generated randomly with elements drawn from the standard Gaussian distribution; and $\{c_j\}_{j=1}^m\subseteq \reals$ are generated randomly with elements drawn from the uniform distribution over $[0,1]$. We consider two different scenarios: QCQPs with merely convex $\rho(\cdot)$ and QCQPs with strongly convex $\rho(\cdot)$.
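Before turning to the two data-generation setups, we note that the partial gradients of $\Phi(x,y)=\rho(x)+\sum_{j=1}^m y_j G_j(x)$, which the saddle-point reformulation of this QCQP uses (with $f=\ind{X}$ and $g=\rho$, as in Section~\ref{sec:constrained}), have a simple closed form; the following sketch is our own illustrative Python/NumPy code, not the implementation used in the experiments:
\begin{verbatim}
import numpy as np

def make_qcqp_oracles(A, b, c):
    """First-order oracles for Phi(x, y) = rho(x) + sum_j y_j G_j(x) (illustrative sketch).

    A = [A_0, ..., A_m] (PSD matrices), b = [b_0, ..., b_m], c = [c_1, ..., c_m];
    index 0 corresponds to the objective rho, indices 1..m to the constraints."""
    m = len(c)

    def rho(x):
        return 0.5 * x @ A[0] @ x + b[0] @ x

    def G(x):
        # constraint values G_j(x) = 0.5 x^T A_j x + b_j^T x - c_j, j = 1..m
        return np.array([0.5 * x @ A[j] @ x + b[j] @ x - c[j - 1] for j in range(1, m + 1)])

    def grad_x_Phi(x, y):
        # grad_x Phi(x, y) = A_0 x + b_0 + sum_j y_j (A_j x + b_j)
        g = A[0] @ x + b[0]
        for j in range(1, m + 1):
            g = g + y[j - 1] * (A[j] @ x + b[j])
        return g

    def grad_y_Phi(x, y):
        # grad_y Phi(x, y) = G(x)
        return G(x)

    return rho, G, grad_x_Phi, grad_y_Phi
\end{verbatim}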
For the merely convex setup, we set $A_j=\Lambda_j^\top S_j \Lambda_j$ for any $j\in\{0,1,\hdots,m\}$, where $\Lambda_j\in\reals^{n\times n}$ is a random orthonormal matrix and $S_j\in\reals_+^{n\times n}$ is a diagonal matrix whose diagonal elements are generated uniformly at random from $[0,100]$, with $0$ included as the minimum diagonal element of $S_j$. To generate $\Lambda_j$, we first generate a random matrix $\tilde{\Lambda}_j\in\reals^{n\times n}$ with each entry sampled from the standard Gaussian distribution, and then we call the MATLAB function $\rm{orth}(\tilde{\Lambda}_j)$, which returns an orthonormal basis for the range of $\tilde{\Lambda}_j$. We generate the data for the strongly convex problem instances similarly, except for $j=0$; to generate the objective function for the strongly convex case, the diagonal elements of $S_0$ are generated from $[1,101]$. To test, we generate 10 random i.i.d. problem instances for each scenario, with $n=10^3$ and $m=10$. For both merely convex and strongly convex setups, we ran all the algorithms on these 10 randomly generated QCQP instances until the termination condition $\max\{|\rho(x_k)-\rho^*|/|\rho^*|,~\frac{1}{m}\sum_{j=1}^m G_j(x_k)\}\leq \epsilon$ holds for $\epsilon=10^{-8}$. The backtracking parameter is set to $\eta=0.7$ for all the methods we tested, i.e., APDB, LALM and PEGM. We use the technique suggested in Remark~\ref{rem:larger-step} to possibly select larger step-sizes for APDB. For PEGM, we implemented Algorithm~2 in \cite{malitsky2018proximal}, where the step-size $\lambda_k$ is increased at the beginning of each iteration, and for LALM we use Algorithm 1 in~\cite{xu2017first} with backtracking. Note that in \cite{xu2017first} an increase in step-sizes has not been considered; hence, the primal step-size at the beginning of each iteration is set to be the last primal step-size. Step-size initializations for PEGM and LALM are done in accordance with APDB, for which we set $\gamma_0=1$ and {$\bar{\tau}=10^{-3}$}. We will refer to APDB with $\mu>0$ and $\mu=0$ as APDB1 and APDB2, respectively. To have a fair comparison among the methods, we plot the statistics against the total number of gradient ($\grad_x\Phi$ and $\grad_y\Phi$) computations. Figure~\ref{fig:QCQP} illustrates the performance of the algorithms based on relative suboptimality and infeasibility error. The solid lines indicate the average statistics over 10 randomly generated replications and {the shaded region around each line shows the range statistic over all random instances}. We observe that APDB outperforms the other two competing methods; moreover, for the strongly convex instances, APDB1, by exploiting the strong convexity ($\mu>0$) of the problem, achieves $\epsilon$-accuracy faster than APDB2.
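For concreteness, the instance generation described above can be sketched as follows; this is an illustrative Python/NumPy analogue (the experiments reported here were carried out in MATLAB), where \texttt{numpy.linalg.qr} plays the role of the MATLAB function $\rm{orth}$, and the function name \texttt{random\_qcqp} is hypothetical.
\begin{verbatim}
# Illustrative Python analogue of the random QCQP instance generation above.
import numpy as np

def random_qcqp(n=1000, m=10, strongly_convex=False, seed=0):
    rng = np.random.default_rng(seed)
    def psd(lo):
        # A = Lambda^T S Lambda with Lambda a random orthonormal matrix
        Lam, _ = np.linalg.qr(rng.standard_normal((n, n)))
        diag = rng.uniform(lo, lo + 100.0, size=n)
        if lo == 0.0:
            diag[rng.integers(n)] = 0.0   # include 0 as the minimum eigenvalue
        return Lam.T @ np.diag(diag) @ Lam
    A = [psd(1.0 if (strongly_convex and j == 0) else 0.0) for j in range(m + 1)]
    b = [rng.standard_normal(n) for _ in range(m + 1)]
    c = rng.uniform(0.0, 1.0, size=m)
    return A, b, c   # objective: (A[0], b[0]); constraint j: (A[j], b[j], c[j-1])
\end{verbatim}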
\begin{figure} \caption{Comparison of methods in terms of suboptimality (left) and infeasibility (right) for {merely convex (top row) and strongly convex (bottom row)} settings.} \label{fig:QCQP} \end{figure}
\subsection{Regression with Fairness Constraints} \label{sec:RFC} \rev{In this experiment, we consider a regression problem \rev{with fairness constraints~\cite{komiyama2018nonconvex} to compute a regressor such that its correlation with the sensitive attributes (e.g., race, gender, age) goes to $0$ as the number of samples $N\to\infty$. For each $i\in\{1,\ldots,N\}$, the sample point $(z^{(i)},w^{(i)},s^{(i)})$ consists of the target attribute $z^{(i)}\in\reals$ we want to predict, non-sensitive attributes $w^{(i)}\in\reals^{d_w}$ and sensitive attributes $s^{(i)}\in\reals^{d_s}$. Let $W\triangleq [w^{(1)},\hdots,w^{(N)}]^\top\in \reals^{N\times d_w}$, $S\triangleq [s^{(1)},\hdots,s^{(N)}]^\top\in \reals^{ N\times d_s}$, and let $z\triangleq[z^{(i)}]_{i=1}^N\in\reals^N$}. Suppose $s^{(i)}$ is highly correlated with $w^{(i)}$, i.e., there exists $B\in\reals^{d_s\times d_w}$ such that $w^{(i)}=B^\top s^{(i)}+\varepsilon^{(i)}$ and $\mathbb{E}[\varepsilon^{(i)}|s^{(i)}]=\mathbf{0}$ for all $i\geq 1$. Under this assumption, given i.i.d. $\{(w^{(i)},s^{(i)})\}_{i=1}^N$, one can estimate $B$ with $\hat{B}=(S^\top S)^{-1} S^\top W$, which converges in probability to $B$ as $N\to \infty$. Correlation between $\{w^{(i)}\}_{i=1}^N$ and $\{s^{(i)}\}_{i=1}^N$ can be removed by performing regression using $\{(u^{(i)},s^{(i)})\}_{i=1}^N$ where $[u^{(1)},\hdots,u^{(N)}]^\top =U\triangleq W-S\hat{B}$. Suppose the data is centered, i.e., $\mathbf{1}^\top S=\mathbf{0}^\top$ and $\mathbf{1}^\top W=\mathbf{0}^\top$, which also implies that $\mathbf{1}^\top U=\mathbf{0}^\top$.} \rev{Consider $\hat{z}=S\alpha+U\beta$ as the estimator of $z$ for some $\alpha\in\reals^{d_s}$ and $\beta\in\reals^{d_w}$. Since $\{u^{(i)}\}_{i=1}^N$ and $\{s^{(i)}\}_{i=1}^N$ are uncorrelated, the best estimator of $z$ (minimizing MSE) without using the sensitive attributes $\{s^{(i)}\}_{i=1}^N$ is given by $\bar{z}=U\beta$. Given a fairness tolerance $\zeta\in[0,1]$, the goal is to compute $(\alpha,\beta)$ that minimizes the error $\norm{z-\hat{z}}^2$ subject to $\norm{\bar{z}-\hat{z}}^2/\norm{\hat{z}}^2\leq\zeta$, i.e., the ratio of the variance in the estimator $\hat{z}$ explained by the sensitive attributes; thus, as $\zeta$ gets closer to $0$, the regressor becomes more fair, while as $\zeta$ gets closer to $1$ the predictive power of the regressor increases.}
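As an illustration of the decorrelation step described above (a minimal Python sketch assuming centered data matrices \texttt{S} and \texttt{W}; the function name \texttt{decorrelate} is hypothetical and the sketch is not part of the formulation that follows):
\begin{verbatim}
# Minimal sketch of the decorrelation step: U = W - S @ B_hat, with
# B_hat = (S^T S)^{-1} S^T W.  Assumes S (N x d_s) and W (N x d_w) are centered.
import numpy as np

def decorrelate(S, W):
    B_hat = np.linalg.solve(S.T @ S, S.T @ W)
    U = W - S @ B_hat
    return B_hat, U   # S^T U = 0 up to round-off, so U is uncorrelated with S

# The sample covariances used below are then V_s = S.T @ S / N, V_u = U.T @ U / N.
\end{verbatim}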
\rev{This problem is formulated as the following non-convex QCQP \cite{komiyama2018nonconvex}: \begin{align}\label{fair-nonconvex} \min_{x=[\alpha^\top \beta^\top]^\top}&\rho_1(x)\triangleq \alpha^\top V_s \alpha + \beta^\top V_u\beta\rev{-2 q_s^\top\alpha-2 q_u^\top\beta}\\ \hbox{s.t.}~ & G(x)\triangleq (1-\zeta)\alpha^\top V_s \alpha -\zeta \beta^\top V_u\beta\leq 0, \nonumber \end{align} where $V_s=\frac{1}{N}S^\top S$ and $V_u=\frac{1}{N}U^\top U$ are sample covariance matrices, \rev{$q_s\triangleq\frac{1}{N}S^\top z$, $q_u\triangleq\frac{1}{N}U^\top z$}, and $\zeta\in (0,1)$ is a user-defined fairness tolerance.} \rev{Let $Q_1\triangleq \begin{bmatrix} V_s & 0 \\ 0 & V_u\end{bmatrix}$, $Q_2\triangleq \begin{bmatrix} V_s & 0 \\ 0 & 0\end{bmatrix}$ and \rev{$q=[q_s^\top q_u^\top]^\top$.} It has been shown in \cite[Theorem 4]{komiyama2018nonconvex} that the solution to problem \eqref{fair-nonconvex} can be equivalently found by solving a convex QCQP, which can be \rev{reformulated as:} \begin{align}\label{fair-saddle} \min_{x}\rho_2(x),\quad \hbox{where}\quad\rho_2(x)\triangleq \max\{x^\top Q_1 x,~\frac{1}{\zeta}x^\top Q_2 x \}-2 \rev{q^\top x}. \end{align} Problem \eqref{fair-saddle} is a special case of \eqref{eq:original-problem}; indeed, it can be written as $\min_x\max_{w\in\Delta_2} w_1 x^\top Q_1 x + \frac{1}{\zeta}w_2 x^\top Q_2 x -\rev{2q^\top x}$, where $\Delta_2\subset\reals^2$ denotes \rev{the unit simplex. We solve this SP problem using the proposed algorithm APDB and compare it with PEGM on both real and synthetic datasets.}}
\paragraph{\bf Real Dataset} \rev{We consider the Communities and Crime (\texttt{C\&C}) dataset, available in the UCI repository, with 1994 observations and 101 \rev{attributes}, and the National Longitudinal Survey of Youth (\texttt{NLSY79}) dataset\footnote{https://www.bls.gov/nls/} with 6213 observations and 21 \rev{attributes}. For the \texttt{C\&C} dataset, the target \rev{attribute $z$} is the normalized violent crime rate of each community and \rev{the sensitive attributes $s_1$, $s_2$} are the ratios of African American people and foreign-born people, respectively. For the \texttt{NLSY79} dataset, the target $z$ is the income of each person in 1990 and $s_1$, $s_2$ are the gender and age, respectively. \rev{After each dataset has been normalized by subtracting the mean and dividing by the standard deviation, we tested APDB and PEGM on 10 random replications such that each replication used 80\% of the data points randomly selected from the whole dataset}.}
\paragraph{\bf Synthetic Dataset} \rev{Following a similar setup as in \cite[Appendix B]{komiyama2018nonconvex}, we consider a scenario with $d_s=100$ and $d_w=900$, i.e., 10\% of the attributes are sensitive, and $N=10^4$. \rev{The entries of both $W$ and $S$} are drawn from the standard normal distribution \rev{$\cN(0,1)$}, and $z=S\mathbf{1}_{d_s}+\frac{1}{100}W\mathbf{1}_{d_w}+\eta$, where $\mathbf{1}_{m}$ denotes the $m$-dimensional vector of ones, and $\eta\in\reals^N$ is a random vector with elements drawn from $\cN(0,1)$. We ran APDB and PEGM on 10 random replications.} \rev{The parameters for the APDB and PEGM algorithms are set as in Section~\ref{sec:qcqp}, except that $\bar{\tau}=1$. In Figure \ref{fig:fair-regression}, the plots for three different statistics are displayed against the number of gradient calls, i.e., suboptimality of $x_k$ for \eqref{fair-saddle}, suboptimality of $x_k$ for \eqref{fair-nonconvex}, and infeasibility of $x_k$ for \eqref{fair-nonconvex}.
The solid lines indicate the average statistic over 10 randomly generated replications and the shaded region around each \rev{solid} line shows the range statistic over all random instances.}
\begin{figure} \caption{\rev{Comparison of methods in terms of suboptimality of $x_k$ for \eqref{fair-saddle} (left), suboptimality of $x_k$ for \eqref{fair-nonconvex} (middle), and infeasibility of $x_k$ for \eqref{fair-nonconvex} (right).}} \label{fig:fair-regression} \end{figure}
\section{\sa{Concluding remarks and future work}}\label{sec:conclude}
We proposed a primal-dual algorithm with a novel momentum term based on gradient extrapolation to solve SP problems defined by a convex-concave function $\cL(x,y)=f(x)+\Phi(x,y)-h(y)$ with a general coupling term $\Phi(x,y)$ that is \emph{not} assumed to be bilinear. Assuming $\grad_y\Phi$ is Lipschitz and $\grad_x\Phi(\cdot,y)$ is Lipschitz for any fixed $y$, {we show that $\{x_k,y_k\}$ converges to a saddle point $(x^*,y^*)$} and derive error bounds in terms of {$\cL(\bar{x}_K,y^*)-\cL(x^*,\bar{y}_K)$} for the ergodic sequence -- without requiring primal-dual domains to be bounded; in particular, we show an $\cO(1/K)$ rate when the problem is merely convex in $x$ using a constant step-size rule, {where $K$ denotes the number of gradient computations.} Assuming $\Phi$ is affine in $y$ and $f$ is strongly convex, we also obtain an ergodic convergence rate of $\cO(1/K^2)$. {Moreover, we introduced a backtracking scheme that guarantees the same results without requiring the Lipschitz constants $L_{xx}$, $L_{yx}$, and $L_{yy}$ to be known. To the best of our knowledge, this is the first time an $\cO(1/K^2)$ rate is shown for a single-loop method and the first time that a line-search method ensuring an $\cO(1/K^2)$ rate is proposed for solving convex-concave SP problems when $\Phi$ is not bilinear.} Our new method captures a wide range of optimization problems, especially those that can be cast as an SDP or a QCQP, or more generally conic convex problems with nonlinear constraints.
{In the context of constrained optimization, using the proposed backtracking scheme, we demonstrated convergence results even when a dual bound is not available or easily computable.} It has been illustrated in the numerical experiments that the APD method can compete against second-order methods when we aim at a low-to-medium-level accuracy. {Moreover, the APD and APDB methods perform very well in practice compared to the other competing methods.} As future work, we plan to investigate the stochastic variant of problem \eqref{eq:original-problem} where $\Phi(x,y)=\bE_\xi[\varphi(x,y;\xi)]$ and $\xi$ is a random variable. Based on our preliminary work on this setup, we have realized that one needs to require a more complicated set of assumptions on the step-size selection rule, and we will consider extending our APD framework to this stochastic setting. Indeed, we have already attempted to analyze randomized block-coordinate variants of APD in~\cite{hamedani2018iter,hamedani2019doubly}. In~\cite{hamedani2018iter}, we showed that if $\grad_x\Phi(\cdot,y)$ is coordinatewise Lipschitz for any fixed $y$ and Assumption~\ref{assum} holds, then the ergodic sequence generated by a variant of APD updating a randomly selected primal coordinate at each iteration together with a full dual variable update achieves an $\cO(m/k)$ convergence rate in a suitable error metric, where $m$ denotes the number of coordinates for the primal variable, and $\{x_k,y_k\}$ converges to a random saddle point in an almost sure sense. Assuming that $\Phi(\cdot,y)$ is strongly convex for any $y$ with a uniform convexity modulus, and that $\Phi$ is linear in $y$, a convergence rate of $\cO(m/k^2)$ is also shown. Moreover, in~\cite{hamedani2019doubly}, we considered $\min_x\max_y\Phi(x,y)$ where $\Phi(x,y)=\sum_{\ell=1}^p\Phi_{\ell}(x,y)$ for some $p\gg 1$, and both $x$ and $y$ are partitioned into block coordinates. We employ a block-coordinate primal-dual scheme in which randomly selected primal and dual blocks of variables are updated at every iteration. An ergodic convergence rate of $\cO(1/k)$ is obtained using variance reduction, which, to the best of our knowledge, is the first rate result for the finite-sum non-bilinear saddle point problem, matching the best known rate for primal-dual schemes using full gradients. Using an acceleration for the setting where a single component gradient is utilized, a non-asymptotic rate of $\cO(1/\sqrt{k})$ is obtained. For both single and increasing sample size scenarios, almost sure convergence of the iterate sequence to a saddle point is shown. As future work, we would like to incorporate a stochastic line-search technique into our APD-based methods that use stochastic gradients or partial gradients for solving SP problems with $\Phi(x,y)=\bE_\xi[\varphi(x,y;\xi)]$. \vspace*{-3mm}
\section{Appendix}\label{append}
\begin{lemma} \label{lem_app:prox} Let $\cX$ be a finite dimensional normed vector space with norm $\norm{\cdot}_\cX$, $f:\cX\rightarrow\reals\cup\{+\infty\}$ be a closed convex function with convexity modulus $\mu\geq 0$ with respect to $\norm{\cdot}_\cX$, and $\bD:\cX\times\cX\rightarrow\reals_+$ be a Bregman distance function corresponding to a strictly convex function $\phi:\cX\rightarrow\reals$ that is differentiable on an open set containing $\dom f$. Given $\bar{x}\in\dom f$ and $t>0$, let \begin{align} \label{eq_app:prox} x^+=\argmin_{x\in\cX} f(x)+t \bD(x,\bar{x}).
\end{align} Then for all $x\in\cX$, the following inequality holds: \begin{eqnarray} f(x)+t\bD(x,\bar{x})\geq f(x^+) + t\bD(x^+,\bar{x})+t \bD(x,x^+)+\frac{\mu}{2}\norm{x-x^+}_\cX^2. \label{eq_app:bregman} \end{eqnarray} \end{lemma} \begin{proof} This result is a trivial extension of Property 1 in~\cite{Tseng08_1J}. The first-order optimality condition for \eqref{eq_app:prox} implies that $0\in\partial f(x^+)+ t\grad_x \bD(x^+,\bar{x})$ -- here $\grad_x \bD$ denotes the partial gradient with respect to the first argument. Note that for any $x\in \dom f$, we have $\grad_x \bD(x,\bar{x})=\grad \phi(x)-\grad \phi(\bar{x})$. Hence, $t(\grad \phi(\bar{x})-\grad \phi({x}^+))\in\partial f(x^+)$. Using the convexity inequality for $f$, we get \begin{eqnarray*} f(x)\geq f(x^+)+t\fprod{\grad \phi(\bar{x})-\grad \phi({x}^+),~x-x^+}+\frac{\mu}{2}\norm{x-x^+}_\cX^2. \end{eqnarray*} The result in \eqref{eq_app:bregman} immediately follows from this inequality together with the three-point identity $\fprod{\grad \phi(\bar{x})-\grad \phi({x}^+),~x-x^+}=\bD(x,x^+)+\bD(x^+,\bar{x})-\bD(x,\bar{x})$. \end{proof} \end{document}
\begin{document}
\begin{abstract} Supercyclides are surfaces with a characteristic conjugate parametrization consisting of two families of conics. Patches of supercyclides can be adapted to a Q-net (a discrete quadrilateral net with planar faces) such that neighboring surface patches share tangent planes along common boundary curves. We call the resulting patchworks ``supercyclidic nets'' and show that every Q-net in $\mathbb{R}P^3$ can be extended to a supercyclidic net. The construction is governed by a multidimensionally consistent 3D system. One essential aspect of the theory is the extension of a given Q-net in $\mathbb{R}P^N$ to a system of circumscribed discrete torsal line systems. We present a description of the latter in terms of projective reflections that generalizes the systems of orthogonal reflections which govern the extension of circular nets to cyclidic nets by means of Dupin cyclide patches. \end{abstract}
\title{Supercyclidic nets}
\section{Introduction}
Discrete differential geometry aims at the development of discrete equivalents of notions and methods of classical differential geometry. One prominent example is the discretization of parametrized surfaces and the related theory, where it is natural to discretize parametrized surfaces by quadrilateral nets (also called quadrilateral meshes). In contrast to other discretizations of surfaces as, e.g., triangulated surfaces, a quadrilateral mesh reflects the combinatorial structure of parameter lines. While unspecified quadrilateral nets discretize arbitrary parametrizations, the discretization of distinguished types of parametrizations yields quadrilateral nets with special properties. The present work is on the piecewise smooth discretization of classical conjugate nets by \emph{supercyclidic nets}. They arise as an extension of the well established, integrable discretization of conjugate nets by quadrilateral nets with planar faces, the latter often called \emph{Q-nets} in discrete differential geometry. Two-dimensional Q-nets as discrete versions of conjugate surface parametrizations were proposed by Sauer in the 1930s \cite{Sauer:1937:ProjLinienGeometrie}. Multidimensional Q-nets are a subject of modern research \cite{DoliwaSantini:1997:QnetsAreIntegrable,BobenkoSuris:2008:DDGBook}. The surface patches that we use for the extension are pieces of \emph{supercyclides}, a class of surfaces in projective 3-space that we discuss in detail in Section~\ref{sec:supercyclides}. Supercyclides possess conjugate parametrizations with all parameter lines being conics and such that there exists a quadratic tangency cone along each such conic. As a consequence, isoparametrically bounded surface patches coming from those characteristic parametrizations (referred to as \emph{SC-patches}) always have coplanar vertices, which makes them suitable for the extension of Q-nets to piecewise smooth objects. The 2-dimensional case of such extensions has been proposed previously in the context of Computer Aided Geometric Design (CAGD) \cite{Pratt:1996:DupinCyclidesAndSupercyclides, Pratt:2002:QuarticSupercyclidesDesign, Degen:1994:GeneralizedCyclidesCAGD}, but to the best of our knowledge has not been worked out so far.
Based on established notions of discrete differential geometry, we describe in this article the multidimensionally consistent, piecewise smooth extension of $m$-dimensional Q-nets in $\mathbb{R}P^3$ by adapted surface patches, such that edge-adjacent patches in one and the same coordinate plane have coinciding tangent planes along a common boundary curve (see Fig.~\ref{fig:sc_net_intro} for a 2-dimensional example). Relevant aspects of the existing theory are presented in Sections~\ref{sec:qnets_line_systems}~to~\ref{sec:supercyclides} and succeeded by new results which are organized as follows:
\begin{itemize}
\item We describe the extension of 2D Q-nets to supercyclidic nets in Section~\ref{sec:2d_supercyclidic_nets}, emphasizing the underlying 2D system that governs the extension of Q-nets to fundamental line systems.
\item The 3D system that governs multidimensional supercyclidic nets is uncovered and analyzed in Section~\ref{sec:3d_supercyclidic_nets}. We also present piecewise smooth conjugate coordinate systems that are locally induced by 3D supercyclidic nets and give rise to arbitrary Q-refinements of their support structures.
\item We define $m$D supercyclidic nets and develop a transformation theory thereof that appears as a combination of the existing smooth and discrete theories in Section~\ref{sec:md_supercyclidic_nets}.
\item Finally, we introduce frames of supercyclidic nets and describe a related integrable system on projective reflections in Section~\ref{sec:frames}.
\end{itemize}
\begin{figure}[htb] \begin{center} \parbox{.42\textwidth}{\includegraphics[width=.42\textwidth]{scnet_talk_qedges.png}} \hspace{.07\textwidth} \parbox{.42\textwidth}{\includegraphics[width=.42\textwidth]{scnet_talk_all.png}} \end{center} \caption{A Q-net and its extension to a supercyclidic net. Each elementary quadrilateral is replaced by an adapted SC-patch, that is, a supercyclidic patch whose vertices match the supporting quadrilateral.} \label{fig:sc_net_intro} \end{figure}
The present work is a continuation of our previous works on the discretizations of smooth orthogonal and asymptotic nets by \emph{cyclidic} and \emph{hyperbolic nets}, respectively \cite{BobenkoHuhnen-Venedey:2011:cyclidicNets, Huhnen-VenedeyRoerig:2013:hyperbolicNets, Huhnen-VenedeySchief:2014:wTrafos}. We will now review the corresponding theory for cyclidic nets and illustrate the core concepts that may serve as a structural guideline for this article.
\paragraph{Cyclidic nets} Three-dimensional orthogonal nets in $\mathbb{R}^3$ are triply orthogonal coordinate systems, while 2-di\-men\-sion\-al orthogonal nets are curvature line parametrized surfaces. Based on the well-known discretization of orthogonal nets by circular nets \cite{Bobenko:1999:DiscreteConformalMaps,CieslinskiDoliwaSantini:1997:CircularNets,KonopelchenkoSchief:1998:3DIntegrableLattices,BobenkoMatthesSuris:2003:OrthogonalSystems}, cyclidic nets are defined as circular nets with an additional structure, that is, as circular nets with orthonormal frames at vertices that determine a unique adapted \emph{DC-patch} for each elementary quadrilateral with the prescribed circular points as vertices. The term ``DC-patches'' refers to surface patches of \emph{Dupin cyclides}\footnote{Dupin cyclides are surfaces in 3-space that are characterized by the property that all curvature lines are circles. They are special instances of supercyclides.} that are bounded by curvature lines, i.e., circular arcs.
For a 2-dimensional cyclidic net its frames are such that the adapted DC-patches constitute a piecewise smooth $C^1$-surface, cf. Fig.~\ref{fig:cyclidic_from_circular_intro}.
\begin{figure}[htb] \begin{center} \parbox{.26\textwidth}{\includegraphics[width=.26\textwidth]{circular_to_cyclidic_co.png}} \hspace{.07\textwidth} $\leftrightarrow$ \hspace{.08\textwidth} \parbox{.26\textwidth}{\includegraphics[width=.26\textwidth]{circular_to_cyclidic_cos_red.png}} \end{center} \caption{A 2-dimensional cyclidic net may be understood as a piecewise smooth $C^1$-surface composed of DC-patches.} \label{fig:cyclidic_from_circular_intro} \end{figure}
For any DC-patch the tangent planes at the vertices are tangent to a common cone of revolution. Therefore, the tangent planes to a 2D cyclidic net at the vertices constitute a \emph{conical net}~\cite{LiuPottmannWallnerYangWang:2006:ConicalNets} and combination with the (concircular) vertices yields a \emph{principal contact element net}, so that cyclidic nets comprise the recognized discretizations of curvature line parametrizations by conical and principal contact element nets \cite{BobenkoSuris:2008:DDGBook,PottmannWallner:2007:FocalGeometry}. Going beyond the discretization of curvature line parametrized surfaces, higher dimensional cyclidic nets provide a discretization of orthogonal coordinate systems that is motivated by the classical Dupin theorem. The latter states that the coordinate surfaces of a triply orthogonal coordinate system are curvature line parametrized surfaces. Accordingly, a 3-dimensional cyclidic net is a 3D circular net with frames at vertices that describe 2D cyclidic nets in each coordinate plane and such that for each edge of $\mathbb{Z}^3$ one has one unique circular arc that is a shared boundary curve of all adjacent DC-patches, cf. Fig.~\ref{fig:3d_cyclidic}.
\begin{figure}[htb] \begin{center} \includegraphics[width=.3\textwidth]{octrihedron_red.png} \end{center} \caption{Three coordinate surfaces of a 3-dimensional cyclidic net.} \label{fig:3d_cyclidic} \end{figure}
Cyclidic nets are piecewise smooth extensions of discrete integrable support structures by surface patches of a particular class that is in some sense as easy as possible but at the same time as flexible as necessary: We use DC-patches for the discretization of curvature line parametrized surfaces and those patches are actually characterized by the geometric property that all curvature lines are circular arcs, that is, the simplest non-straight curves. At the same time, DC-patches are flexible enough to be fitted together to adapted $C^1$-surfaces. Moreover, the class of DC-patches is preserved by M\"obius transformations, which is in line with the transformation group principle for the discretization of integrable geometries~\cite{BobenkoSuris:2008:DDGBook}. But there is also a deeper reason to choose Dupin cyclides for the extension of circular nets, that is, they actually \emph{embody the geometric characterizations} of the latter in the following sense: It is not difficult to see that Dupin cyclides are characterized by the fact that arbitrary selections of curvature lines constitute a principal contact element net (a circular net with normals at vertices that are related by reflections in symmetry planes).
As a consequence, cyclidic nets induce arbitrary refinements of their support structures within the class of circular nets, by simply adding parameter lines of the adapted patches to the support structures. After all, Dupin cyclides provide a perfect link between the theories of smooth and discrete orthogonal nets. This is also reflected by the theory of transformations of cyclidic nets, which arises by combination of the corresponding smooth and discrete theories. \emph{It turns out that supercyclidic patches play an analogous role for the piecewise smooth extension of Q-nets.} Analogous to the theory of cyclidic nets is the theory of hyperbolic nets as piecewise smooth discretizations of surfaces in 3-space that are parametrized along asymptotic lines. On a purely discrete level, such nets are properly discretized by quadrilateral nets with planar vertex stars \cite{Sauer:1937:ProjLinienGeometrie, BobenkoSuris:2008:DDGBook}. Based on that discretization, hyperbolic nets arise as an extension of A-nets by means of hyperbolic surface patches, which is analogous to the extension of 2D circular nets to 2D cyclidic nets via DC-patches. Many results concerning cyclidic nets on the one hand and hyperbolic nets on the other hand correspond to each other under Sophus Lie's line-sphere-correspondence.
\paragraph{Supercyclides in CAGD and applications of supercyclidic nets} Supercyclides have been introduced as \emph{double Blutel surfaces} by Degen in the 1980s \cite{Degen:1982:ConjugateConics,Degen:1986:ZweifachBlutel}. Soon they found attention in Computer Aided Geometric Design for several reasons. For example, their pleasant geometric properties, nicely reviewed in \cite{Degen:1994:GeneralizedCyclidesCAGD}, allow one to use parts of supercyclides for blends between quadrics \cite{AllenDutta:1997:SupercyclidesBlending,FoufouGarnier:2005:ImplicitEuqationsOfSupercyclides}. Supercyclidic patchworks with differentiable joins have been proposed in the past \cite{Pratt:1996:DupinCyclidesAndSupercyclides,Degen:1994:GeneralizedCyclidesCAGD} as a continuation of the discussion of composite $C^1$-surfaces built from DC-patches within the CAGD community (see, e.g., \cite{Martin:1983:PrincipalPatches,McLean:1985:CyclideSurfaces,MartinDePontSharrock:1986:CyclideSurfaces,DuttaMartinPratt:1993:CyclideSurfaceModeling,SrinivasKumarDutta:1996:SurfaceDesignUsingCyclidePatches} for the latter). Another field of potential application for supercyclidic nets is architectural geometry (see the comprehensive book \cite{PottmannAsperlHoferKilian:2007:ArchitecturalGeometry} for an introduction to that synthesis of architecture and geometry), because they are aesthetically appealing freeform surfaces that can be approximated at arbitrary precision by flat panels. On the other hand, supercyclidic nets extend their supporting Q-net and may be seen as a kind of 3D texture for that support structure. Purely Dupin cyclidic nets in 3-space, which are special instances of supercyclidic nets, have already been discussed in the context of \emph{circular arc structures} in \cite{BoEtAl:2011:CAS}, while \cite{KaeferboeckPottmann:2012:DiscreteAffineMinimal,ShiWangPottmann:2013:RationalBilinearPatches} provides an analysis of hyperbolic nets that aims at the application thereof in architecture.
\section{Q-nets and discrete torsal line systems} \label{sec:qnets_line_systems}
Q-nets are discrete versions of classical conjugate nets and closely related to discrete torsal line systems.
Before starting our discussion of those discrete objects, we recall a characterization of classical conjugate nets and their transformations that can be found in \cite{Eisenhart:1923:Transformations} or \cite[Chap.~1]{BobenkoSuris:2008:DDGBook}.
\begin{definition}[Conjugate net] \label{def:conjugate_net} A map $x : \mathbb{R}^m \to \mathbb{R}^N,\, N \ge 3$, is called an \emph{$m$-dimensional conjugate net in $\mathbb{R}^N$} if at every point of the domain and for all pairs $1 \le i \ne j \le m$ one has $\partial_{ij}x \in \vspan{\partial_i x, \partial_j x} \iff \partial_{ij}x = c_{ji} \partial_i x + c_{ij} \partial_j x$.\footnote{Loosely speaking, infinitesimal quadrilaterals formed by parameter lines of a conjugate net are planar.} \end{definition}
\begin{definition}[F-transformation of conjugate nets] \label{def:f-trafo_smooth} Two $m$-dimensional conjugate nets $x,x^+$ are said to be related by a \emph{fundamental transformation} (\emph{F-transformation}) if at every point of the domain and for each $1 \le i \le m$ the three vectors $\partial_i x, \partial_i x^+,$ and $\delta x = x^+ - x$ are coplanar. The net $x^+$ is called an \emph{F-transform} of the net $x$. \end{definition}
The above F-transformations of conjugate nets exhibit the following Bianchi-type permutability properties. The existence of associated transformations with permutability properties of that kind is classically regarded as a key feature of integrable systems.
\begin{theorem}[Permutability properties of F-transformations of conjugate nets] \label{thm:permutability_smooth} \begin{enumerate}[(i)] \item Let $x$ be an $m$-di\-men\-sional conjugate net, and let $x^{(1)}$ and $x^{(2)}$ be two of its F-transforms. Then there exists a 2-parameter family of conjugate nets $x^{(12)}$ that are F-transforms of both $x^{(1)}$ and $x^{(2)}$. The corresponding points of the four conjugate nets $x$, $x^{(1)}$, $x^{(2)}$ and $x^{(12)}$ are coplanar. \item Let $x$ be an $m$-dimensional conjugate net. Let $x^{(1)}$, $x^{(2)}$ and $x^{(3)}$ be three of its F-transforms, and let three further conjugate nets $x^{(12)}$, $x^{(23)}$ and $x^{(13)}$ be given such that $x^{(ij)}$ is a simultaneous F-transform of $x^{(i)}$ and $x^{(j)}$. Then generically there exists a unique conjugate net $x^{(123)}$ that is an F-transform of $x^{(12)}$, $x^{(23)}$ and $x^{(13)}$. The net $x^{(123)}$ is uniquely defined by the condition that for every permutation $(ijk)$ of $(123)$ the corresponding points of $x^{(i)}$, $x^{(ij)}$, $x^{(ik)}$ and $x^{(123)}$ are coplanar. \end{enumerate} \end{theorem}
Although we gave an affine description, conjugate nets and their transformations are objects of projective differential geometry. Accordingly, we consider the theory of discrete conjugate nets in $\mathbb{R}P^N$ and not in $\mathbb{R}^N$. Before coming to that, we explain some notation that will be used throughout this article.
\paragraph{Notation for discrete maps} Discrete maps are fundamental in discrete differential geometry. We are mostly concerned with discrete maps defined on cells of dimension $0,1$, or $2$ of $\mathbb{Z}^m$, that is, maps defined on vertices, edges, or elementary quadrilaterals. Let $e_1,\ldots,e_m$ be the canonical basis of the $m$-dimensional lattice~$\mathbb{Z}^m$.
For $k$ pairwise distinct indices $i_1,\ldots,i_k \in \left\{ 1,\dots,m \right\}$ we denote by \begin{equation*} \coordsurf{i_1 \dots i_k} = \operatorname{span}_\mathbb{Z}(e_{i_1},\dots,e_{i_k}) \end{equation*} the $k$-dimensional coordinate plane of $\mathbb{Z}^m$ and by \begin{equation*} \cell{i_1 \dots i_k}(z) = \left\{ z + \varepsilon_{i_1} e_{i_1} + \dots + \varepsilon_{i_k} e_{i_k} \mid \varepsilon_{i} = 0,1 \right\} \end{equation*} the $k$-cell at~$z$ spanned by $e_{i_1},\ldots,e_{i_k}$, respectively. We use upper indices $i_1,\dots,i_k$ to describe maps on $k$-cells as maps on $\mathbb{Z}^m$ by identifying the $k$-cell $\cell{i_1 \dots i_k}(z)$ with its base point $z$ \begin{equation*} f^{i_1 \dots i_k}(z) = f(\cell{i_1 \dots i_k}(z)). \end{equation*} For a map $f$ defined on $\mathbb{Z}^m$ we use lower indices to indicate shifts in coordinate directions \begin{equation*} f_i(z) = f(z+e_i), \quad f_{ij}(z) = f(z + e_i + e_j), \quad \dots \end{equation*} Often we omit the argument of $f$, writing \begin{equation*} f = f(z), \quad f_i = f(z + e_i), \quad f_{-i} = f(z-e_i), \quad \dots \end{equation*} and analogously for maps defined on cells of higher dimension.
\paragraph{Q-nets and discrete torsal line systems} Smooth conjugate nets are discretized within discrete differential geometry by quadrilateral meshes with planar faces. The planarity condition is a straightforward discretization of the smooth characteristic property $\partial_{ij} f \in \vspan{\partial_i f, \partial_j f}$. In the 2-dimensional case, this discretization has been proposed by Sauer~\cite{Sauer:1937:ProjLinienGeometrie} and was later generalized to the multidimensional case \cite{DoliwaSantini:1997:QnetsAreIntegrable,BobenkoSuris:2008:DDGBook}.
\begin{definition}[Q-net] \label{def:qnet} A map $x:\mathbb{Z}^m \to \mathbb{R}P^N$, $N\ge3$, is called an \emph{m-dimensional Q-net} or \emph{discrete conjugate net} in $\mathbb{R}P^N$ if for all pairs $1 \le i < j \le m$ the elementary quadrilaterals $(x,x_i,x_{ij},x_j)$ are planar. \end{definition}
Closely related to Q-nets in $\mathbb{R}P^N$ are configurations of lines in $\mathbb{R}P^N$ with, e.g., $\mathbb{Z}^m$ combinatorics, such that neighbouring lines intersect. We denote the manifold of lines in $\mathbb{R}P^N$ by \begin{equation*} \mathcal{L}^N := \left\{ \text{Lines in } \mathbb{R}P^N \right\} \cong \operatorname{Gr}(2,\mathbb{R}^{N+1}), \end{equation*} where $\operatorname{Gr}(2,\mathbb{R}^{N+1})$ is the Grassmannian of 2-dimensional linear subspaces of $\mathbb{R}^{N+1}$.
\begin{definition}[Discrete torsal line system] \label{def:line_congruence} A map $l : \mathbb{Z}^m \to \mathcal{L}^N$, $N \ge 3$, is called an \emph{m-dimensional discrete torsal line system} in $\mathbb{R}P^N$ if at each $z \in \mathbb{Z}^m$ and for all $1 \le i \le m$ the neighbouring lines $l$ and $l_i$ intersect. We say that $l$ is \emph{generic} if \begin{enumerate}[(i)] \item For each elementary quadrilateral of $\mathbb{Z}^m$ the lines associated with opposite vertices are skew (and therefore span a unique 3-space that contains all four lines of the quadrilateral). \item If $m \ge 3$ and $N \ge 4$, the space spanned by any quadruple $(l,l_i,l_j,l_k)$ of lines, $1 \le i < j < k \le m$, is 4-dimensional. \end{enumerate} A 2-dimensional line system is called a \emph{line congruence} and a 3-dimensional line system is called a \emph{line complex}.
\end{definition}
\enlargethispage{\baselineskip}
Recently, line systems on triangle meshes with non-intersecting neighboring lines were studied \cite{WangJiangBompasWallnerPottmann-2013-DiscreteLineConSGP}. To distinguish the two different types of line systems we call the systems with intersecting neighboring lines \emph{torsal}. Discrete torsal line systems and Q-nets are closely related \cite{DoliwaSantiniManas:2000:TransformationsOfQnets}.
\begin{definition}[Focal net] \label{def:focal_net} For a discrete torsal line system $l : \mathbb{Z}^m \to \mathcal{L}^N$ and a direction $i \in \left\{ 1,\ldots,m \right\}$, the \emph{i-th focal net} $f^i : \mathbb{Z}^m \to \mathbb{R}P^N$ is defined by \begin{equation*} f^i(z) := l(z) \cap l(z + e_i). \end{equation*} The planes spanned by adjacent lines of the system are called \emph{focal planes}. Focal points and focal planes of a discrete torsal line system are naturally associated with edges of $\mathbb{Z}^m$, cf.~Fig.~\ref{fig:elem_quad_line_congruence}. \end{definition}
\begin{figure}[ht] \includefig{line_complex_cube} \caption{An elementary cube of a discrete torsal line system: combinatorial and geometric.} \label{fig:elem_quad_line_congruence} \end{figure}
\begin{definition}[Edge systems of Q-nets and Laplace transforms] \label{def:laplace} Given a Q-net $x : \mathbb{Z}^m \to \mathbb{R}P^N$ and a direction $i \in \left\{ 1,\ldots,m \right\}$, we say that the extended edges of direction~$i$ constitute the \emph{$i$-th edge system} $e^i : \mathbb{Z}^m \to \mathcal{L}^N$. By definition, the $i$-th edge system is a discrete torsal line system, whose different focal nets are called \emph{Laplace transforms} of $x$. Accordingly, the vertices of the different Laplace transforms are called \emph{Laplace points} of certain directions with respect to the elementary quadrilaterals of $x$. \end{definition}
Edge systems are called \emph{tangent systems} in~\cite{DoliwaSantiniManas:2000:TransformationsOfQnets}. We prefer to call them edge systems to distinguish them from the tangent systems of supercyclidic nets (formally defined only in Section~\ref{sec:frames}), which consist of tangents to surface patches at vertices of a supporting Q-net. From simple dimension arguments one obtains the following
\begin{theorem}[Focal nets are Q-nets] \label{thm:focal_nets_are_qnets} For a generic discrete torsal line system $l : \mathbb{Z}^m \to \mathcal{L}^N$, $N \ge 4$, each focal net is a Q-net in $\mathbb{R}P^N$. Moreover, focal quadrilaterals of the type $(f^i,f^i_i,f^i_{ij},f^i_j)$ are planar for arbitrary $N$.\footnote{For $m \ge 3$ there are focal quadrilaterals of the type $(f^i,f^i_j,f^i_{jk},f^i_k)$, $i \ne j \ne k \ne i$, which are generically not planar in the case $N = 3$.} \end{theorem}
\paragraph{Underlying 3D systems that govern Q-nets and discrete torsal line systems}\\ Q-nets in $\mathbb{R}P^N, N \ge 3$, and also discrete torsal line systems in $\mathbb{R}P^N, N \ge 4$, are each governed by an associated discrete 3D system: generic data at seven vertices of a cube determine the remaining eighth value uniquely, cf. Fig.~\ref{fig:3d_system}. In both cases, this is evident if one considers intersections of subspaces and counts the generic dimensions with respect to the ambient space. For example, consider the seven vertices $x,x_i,x_{ij}$, $i,j = 1,2,3$, $i \ne j$, of an elementary cube of a Q-net in $\mathbb{R}P^N$.
Those points necessarily lie in a 3-dimensional subspace and determine the value $x_{123}$ uniquely as the intersection point of the three planes $\pi^{12}_3 = x_3 \vee x_{13} \vee x_{23}$,\ $\pi^{23}_1 = x_1 \vee x_{12} \vee x_{13}$, and $\pi^{13}_2 = x_2 \vee x_{12} \vee x_{23}$.
\begin{figure}[htb] \includefig{3d_cube} \caption{The seven values $f,f_i,f_{ij}$ (vertices of a Q-net or lines of a discrete torsal line system) determine the remaining value $f_{123}$ uniquely.} \label{fig:3d_system} \end{figure}
3D systems of the above type allow one to propagate Cauchy data \begin{equation*} f|_{\coordsurf{ij}}, \quad 1 \le i < j \le 3 \end{equation*} on the coordinate planes $\coordsurf{12},\coordsurf{23},\coordsurf{13}$ to the whole of $\mathbb{Z}^3$ uniquely. In higher dimensions, the fact that the a~priori overdetermined propagation of Cauchy data \begin{equation*} f|_{\coordsurf{ij}}, \quad 1 \le i < j \le m \end{equation*} to the whole of $\mathbb{Z}^m$ is well-defined is referred to as \emph{multidimensional consistency} of the underlying system and understood as its \emph{discrete integrability}. Simple combinatorial considerations show that for discrete $m$D systems that allow one to determine the value at one vertex of an $m$D cube from the values at the remaining vertices, the $(m+1)$D consistency implies $(m+k)$D consistency for arbitrary $k \ge 1$. Indeed, this is the case for the systems governing Q-nets and discrete torsal line systems, cf. \cite{BobenkoSuris:2008:DDGBook}. For future reference, we capture the above in
\begin{theorem} \label{thm:4d_consistency} Discrete torsal line systems in $\mathbb{R}P^N$, $N \ge 4$, as well as Q-nets in $\mathbb{R}P^M$, $M \ge 3$, are each governed by $m$D consistent 3D systems. \end{theorem}
Multidimensional consistency assures the existence of associated transformations that exhibit Bianchi-type permutability properties in analogy to the corresponding classical integrable systems. In fact, it is a deep result of discrete differential geometry that on the discrete level discrete integrable nets and their transformations become indistinguishable. A prominent example for this scheme is given by F-transformations of Q-nets. We simply discretize the defining property captured in Definition~\ref{def:f-trafo_smooth} by replacing the partial derivatives $\partial_i x, \partial_i x^+$ by difference vectors.
\begin{definition}[F-transformation of Q-nets] \label{def:qnets_ftransform} Two $m$-dimensional Q-nets \begin{equation*} x,x^+ : \mathbb{Z}^m \to \mathbb{R}P^N \end{equation*} are called \emph{F-transforms} (\emph{fundamental transforms}) of one another if at each $z \in \mathbb{Z}^m$ and for all $1 \le i \le m$ the quadrilaterals $(x, x_i, x_i^+, x^+)$ are planar. \end{definition}
Obviously, the condition in Definition~\ref{def:qnets_ftransform} may be rephrased as follows: The Q-nets $x$ and $x^+$ are F-transforms of one another if the net $X : \mathbb{Z}^m \times \left\{ 0,1 \right\} \to \mathbb{R}P^N$ defined by \begin{equation*} X(z,0) = x(z) \text{ and } X(z,1) = x^+(z) \end{equation*} is a two-layer $(m+1)$-dimensional Q-net. Therefore, the existence of F-transforms of Q-nets with permutability properties analogous to the classical situation is a simple consequence of the multidimensional consistency of Q-nets.
One obtains
\begin{theorem}[Permutability properties of F-transformations of Q-nets] \label{thm:permutability_qnets} \begin{enumerate}[(i)] \item Let $x$ be an $m$-dimensional Q-net, and let $x^{(1)}$ and $x^{(2)}$ be two of its discrete F-transforms. Then there exists a 2-parameter family of Q-nets $x^{(12)}$ that are discrete F-transforms of both $x^{(1)}$ and $x^{(2)}$. The corresponding points of the four Q-nets $x$, $x^{(1)}$, $x^{(2)}$ and $x^{(12)}$ are coplanar. The net $x^{(12)}$ is uniquely determined by one of its points. \item Let $x$ be an $m$-dimensional Q-net. Let $x^{(1)}$, $x^{(2)}$ and $x^{(3)}$ be three of its discrete F-transforms, and let three further Q-nets $x^{(12)}$, $x^{(23)}$ and $x^{(13)}$ be given such that $x^{(ij)}$ is a simultaneous discrete F-transform of $x^{(i)}$ and $x^{(j)}$. Then generically there exists a unique Q-net $x^{(123)}$ that is a discrete F-transform of $x^{(12)}$, $x^{(23)}$ and $x^{(13)}$. The net $x^{(123)}$ is uniquely determined by the condition that for every permutation $(ijk)$ of $(123)$ the corresponding points of $x^{(i)}$, $x^{(ij)}$, $x^{(ik)}$ and $x^{(123)}$ are coplanar. \end{enumerate} \end{theorem}
Theorem~\ref{thm:permutability_qnets} yields the classical results captured by Theorem~\ref{thm:permutability_smooth} after performing a refinement limit in the ``net directions'' while keeping the ``transformation directions'' discrete.
\section{Fundamental line systems} \label{sec:fundamental_line_systems}
While generic discrete torsal line systems in $\mathbb{R}P^N, N\ge 4$, are governed by a 3D system, this is not the case in $\mathbb{R}P^3$. The reason is that for three generic lines $l_{12}, l_{23}, l_{13}$ in 4-space there is a unique line that intersects all of them, while in $\mathbb{R}P^3$ there is a whole 1-parameter family of such lines. However, one obtains a 3D system for line systems in $\mathbb{R}P^3$ by demanding that the focal quadrilaterals be planar, which turns out to be an admissible reduction of the considered configurations (Corollary~\ref{cor:2d_quad_line_systems}).
\begin{definition}[Fundamental line system] \label{def:fundamental_line_system} A discrete torsal line system is called \emph{fundamental} if its focal nets are Q-nets and if it is generic in the sense that lines associated with opposite vertices of elementary quadrilaterals are skew. In particular, any generic discrete torsal line system in $\mathbb{R}P^N$, $N \ge 4$, is fundamental according to Theorem~\ref{thm:focal_nets_are_qnets}. \end{definition}
\begin{remark*} The terminology was introduced in~\cite{BobenkoSchief:2014:FundamentalSystems}, where fundamental line systems are discussed in the context of integrable systems. It is shown that elementary cubes of such line systems in $\mathbb{R}P^3$ are characterized by the existence of a unique involution $\tau$ on $\mathcal{L}^3$, such that lines associated with opposite vertices of the cube are interchanged. The map $\tau$ is determined by any six lines that remain if one removes a pair of opposite lines. For example, for fixed lines $l_1,l_2,l_3,l_{12},l_{23},l_{13}$ one may express $l_{123}$ as dependent on $l$ according to $l_{123} = \tau(l)$. Moreover, it turns out that $\tau$ comes from a \emph{correlation} of $\mathbb{R}P^3$ (a projective map from $\mathbb{R}P^3$ to its dual, that is, points and planes are interchanged but lines are mapped to lines).
\end{remark*}
In the following we prove essential properties of fundamental line systems that will play a key role for our later considerations of supercyclidic nets. Many of the presented statements appear in~\cite[Sect.~2]{DoliwaSantiniManas:2000:TransformationsOfQnets}.
\begin{lemma} \label{lem:fundamental_cubes} Elementary cubes of a fundamental line system are characterized by any of the following properties: \begin{enumerate}[(i)] \item \label{item:lem_fun_cubes_planar_quad_one} One focal quadrilateral is planar. \item \label{item:lem_fun_cubes_planar_quad_any} All focal quadrilaterals are planar. \item \label{item:lem_fun_cubes_concurrent_planes_one} The four focal planes of one direction are concurrent, that is, intersect in one point. \item \label{item:lem_fun_cubes_concurrent_planes_any} For each direction the four associated focal planes are concurrent. \end{enumerate} \end{lemma}
\begin{proof} If the $i$-th focal quadrilateral $Q^i = (f^i, f^i_j, f^i_k, f^i_{jk})$ is planar \iref{item:lem_fun_cubes_planar_quad_one} then the $j$-th focal planes all pass through the Laplace point of $Q^i$ in direction $j$ and analogously for the $k$-th focal planes \iref{item:lem_fun_cubes_concurrent_planes_one}. Hence also the $j$-th and $k$-th focal quadrilaterals are planar, i.e., all focal quadrilaterals are planar \iref{item:lem_fun_cubes_planar_quad_any}. This in turn implies that also the $i$-th focal planes are concurrent and thus each family of focal planes is concurrent \iref{item:lem_fun_cubes_concurrent_planes_any}. Finally, this implies \iref{item:lem_fun_cubes_planar_quad_one}. \end{proof}
\begin{proposition} \label{prop:fundamental_line_systems} If one focal net of a discrete torsal line system is a Q-net, the line system is fundamental. \end{proposition}
\begin{proof} If the line system is at least 3-dimensional, there are two types of focal quadrilaterals of the $i$-th focal net: $Q^{ij} = (f^i,f^i_i,f^i_{ij},f^i_j)$ and $Q^{jk} = (f^i,f^i_j,f^i_{jk},f^i_k)$, $i \ne j \ne k \ne i$. The vertices of $Q^{ij}$ are contained in the plane $l_i \vee l_{ij}$ and hence $Q^{ij}$ is planar. The planarity of $Q^{jk}$ follows from Lemma~\ref{lem:fundamental_cubes} under the assumption of the proposition. \end{proof}
Laplace transforms of generic Q-nets are Q-nets themselves. This follows from Proposition~\ref{prop:fundamental_line_systems} since the $i$-th focal net of the $i$-th edge system is the original Q-net. Hence the following
\begin{corollary} Edge systems of generic Q-nets are fundamental. \end{corollary}
\begin{definition} \label{def:extension_inscribed} Let $x:\mathbb{Z}^m \to \mathbb{R}P^N$ be a quadrilateral net and $l:\mathbb{Z}^m \to \mathcal{L}^N$ be a discrete line system. If $x(z) \in l(z)$ for all $z \in \mathbb{Z}^m$ we say that $l$ is an \emph{extension} of $x$ and, conversely, that $x$ is \emph{inscribed} in $l$. \end{definition}
Throughout this article, the extension of Q-nets to discrete torsal line systems as well as the construction of Q-nets inscribed in discrete torsal line systems plays a prominent role (line systems and Q-nets related as in Definition~\ref{def:extension_inscribed} are called \emph{conjugate} in~\cite{DoliwaSantiniManas:2000:TransformationsOfQnets}). In fact, it turns out that a discrete torsal line system is fundamental if and only if it is the extension of a Q-net.
For a concise treatment of this relation and for later reference, we formulate the following evident \pagebreak
\begin{lemma}[Construction schemes] \label{lem:2d_systems} \begin{enumerate}[\qquad] \item[(Q)] Construction of a planar quadrilateral inscribed in a torsal line congruence quadrilateral $(l,l_1,l_{12},l_2)$: Three given points $x \in l$, $x_1 \in l_1$, and $x_2 \in l_2$ determine the fourth point $x_{12} = l_{12} \cap (x \vee x_1 \vee x_2)$ uniquely. \item[(L)] Extension of a planar quadrilateral $(x,x_1,x_{12},x_2)$ to a torsal line congruence quadrilateral: Three given lines $l \ni x$, $l_1 \ni x_1$, and $l_2 \ni x_2$ determine the fourth line $l_{12} = (x_{12} \vee l_1) \cap (x_{12} \vee l_2)$ uniquely. \end{enumerate} Each of the above constructions (Q) and (L) describes a 2D system in the sense that 1-dimensional Cauchy data along two intersecting coordinate axes of $\mathbb{Z}^2$ propagates uniquely onto the entire lattice. \end{lemma}
Let $l$ be a generic discrete torsal line system in~$\mathbb{R}P^N$ for $N \ge 4$, which is automatically fundamental due to Theorem~\ref{thm:focal_nets_are_qnets}. Obviously the central projection of $l$ from a point to a 3-dimensional subspace preserves the planarity of focal quadrilaterals and hence yields a fundamental line system in $\mathbb{R}P^3$. It turns out that the converse is also true.
\begin{theorem}[Characterization of fundamental line systems in $\mathbb{R}P^3$] \label{thm:fundamental_line_complex} For a discrete torsal line system $l : \mathbb{Z}^m \to \mathcal{L}^3$, the following properties are equivalent: \begin{enumerate}[(i)] \item \label{item:thm_fundamental_line_complex_fundamental} $l$ is fundamental. \item \label{item:thm_fundamental_line_complex_projection} $l$ is the projection of a generic discrete torsal line system in $\mathbb{R}P^4$. \item \label{item:thm_fundamental_line_complex_one_inscribed_qnet} $l$ is the extension of a generic Q-net. \item \label{item:thm_fundamental_line_complex_consistency} The construction of Q-nets inscribed into $l$ is consistent. \end{enumerate} \end{theorem}
\begin{proof} \iref{item:thm_fundamental_line_complex_fundamental}~$\Longrightarrow$~\iref{item:thm_fundamental_line_complex_projection}. We start with the observation that for each elementary 3-cube of a fundamental line system $l$ in $\mathbb{R}P^3$, any seven lines determine the eighth line uniquely via the planarity condition for focal quadrilaterals. For the following, we embed $l$ into $\mathbb{R}P^4$ by identification of $\mathbb{R}P^3$ with any hyperplane $\pi \subset \mathbb{R}P^4$ and in a first step show that each individual cube may be derived via projection. More precisely, for its eight lines $l,\ldots,l_{123}$ in $\pi$ we show how to construct lines $\hat l,\ldots,\hat l_{123}$ in $\mathbb{R}P^4$ that constitute a fundamental line complex cube, such that $l = \tau(\hat l),\ldots,l_{123}=\tau(\hat l_{123})$ with $\tau$ being the projection of $\mathbb{R}P^4$ from a point $p \notin \pi$ to the hyperplane $\pi$. To begin with, we may choose $p$ to be any point not in $\pi$. Further, we define $\hat{l}=l$, $\hat{l}_1=l_1$, $\hat{l}_{2}=l_2$, and $\hat{l}_{12}=l_{12}$ in~$\pi$. The lift~$\hat{l}_3$ may then be chosen to be any line in the plane $l_3 \vee p$ that passes through $l_3 \cap l$ and lies outside~$\pi$.
This determines the lifts~$\hat{l}_{13}$ and~$\hat{l}_{23}$ as each of them is spanned by one focal point in $\pi$ and one lifted focal point on $\hat l_3$, e.g., $\hat l_{13} = (l_{13} \cap l_1) \vee (\hat{l}_3 \cap ((l_{13} \cap l_3) \vee p))$. The seven lifted lines constitute generic Cauchy data for a fundamental line complex cube in~$\mathbb{R}P^4$ and hence there exists a unique eighth line $\hat l_{123}$ that may be constructed as \[ \hat{l}_{123} = (\hat{l}_{12} \vee \hat{l}_{13}) \cap (\hat{l}_{12} \vee \hat{l}_{23}) \cap (\hat{l}_{13} \vee \hat{l}_{23}). \] The central projection $\tau : \mathbb{R}P^4 \setminus \left\{ p \right\} \to \pi$ then yields a fundamental line complex cube in~$\pi$. As seven lines coincide with the original cube by construction, the eighth line has to coincide by uniqueness. It remains to observe that the construction works also globally, since the lift is generic and thus the suggested construction may be extended to Cauchy data for the whole fundamental line system $l$.
\iref{item:thm_fundamental_line_complex_projection}~$\Longrightarrow$~\iref{item:thm_fundamental_line_complex_consistency}. We have to show the 3D consistency of the 2D system that is described by Lemma~\ref{lem:2d_systems}~(Q) under the given assumptions. So consider an elementary cube of $l$ and let $x, x_1, x_2, x_3$ be four generic points on the lines $l, l_1, l_2, l_3$, respectively. As before, consider a lift~$\hat{l}$ of~$l$ to~$\mathbb{R}P^4$ and accordingly lifted points $\hat{x}, \hat{x}_1, \hat{x}_2, \hat{x}_3$, the latter spanning a 3-dimensional subspace~$\hat{x} \vee \hat{x}_1 \vee \hat{x}_2 \vee \hat{x}_3$. This subspace defines a unique Q-cube inscribed into $\hat l$ via intersection with the lines of~$\hat{l}$. The projection of the Q-cube in $\mathbb{R}P^4$ yields a Q-cube in~$\mathbb{R}P^3$, that is, the 2D system is 3D consistent, and hence $m$D consistent.
\iref{item:thm_fundamental_line_complex_consistency} $\Longrightarrow$ \iref{item:thm_fundamental_line_complex_one_inscribed_qnet}. If the construction of inscribed Q-nets is consistent, then we can construct an inscribed Q-net.
\iref{item:thm_fundamental_line_complex_one_inscribed_qnet} $\Longrightarrow$ \iref{item:thm_fundamental_line_complex_fundamental}. It is sufficient to show that each elementary 3-cube of $l$ may be obtained as the projection of a fundamental line complex cube in $\mathbb{R}P^4$, since this implies that all focal nets are Q-nets. So assume we are given a torsal line complex cube $(l,\ldots,l_{123})$ with an inscribed Q-cube~$X=(x,\ldots,x_{123})$ in $\mathbb{R}P^3$. The first thing to note is that the vertices $X$ together with the lines $l,l_1,l_2$, and $l_3$ determine the remaining lines uniquely according to Lemma~\ref{lem:2d_systems}~(L). Now identify $\mathbb{R}P^3$ with any hyperplane $\pi$ in $\mathbb{R}P^4$ and choose a point $p$ outside $\pi$, which defines the central projection $\tau : \mathbb{R}P^4 \setminus \left\{ p \right\} \to \pi$. The next step is to fix the vertices $X$ in $\pi$ and construct generic lifts $\hat{l}$, $\hat{l}_1$, $\hat{l}_2$, $\hat{l}_3 \not\subset \pi$ of the lines~$l$, $l_1$, $l_2$, and $l_3$ through the corresponding vertices that satisfy \begin{equation} \label{eq:cauchy_projection} \hat l \cap \hat l_i \ne \emptyset, \quad \tau(\hat l)=l, \quad \tau(\hat l_i)=l_i, \quad i = 1,2,3.
\end{equation}
We will now show that Lemma~\ref{lem:2d_systems}~(L) consistently determines lines $\hat l_{12}, \hat l_{23}, \hat l_{13}$, and $\hat l_{123}$ through the corresponding vertices of $X$ for the given Cauchy data $\hat l$, $\hat l_1$, $\hat l_2$, $\hat l_3$. This implies the assertion, since the solution $(\hat l,\ldots,\hat l_{123})$ is mapped under $\tau$ to the solution $(l,\ldots,l_{123})$ of the original Cauchy problem because of \eqref{eq:cauchy_projection} and $\tau(X) = X$. To see the consistency in $\mathbb{R}P^4$, first construct the lines $\hat{l}_{ij}$ according to Lemma~\ref{lem:2d_systems}~(L). As we considered a generic lift, the intersection
\[
\hat{l}_{123} = (\hat{l}_{12} \vee \hat{l}_{13}) \cap (\hat{l}_{12} \vee \hat{l}_{23}) \cap (\hat{l}_{13} \vee \hat{l}_{23})
\]
of three generic hyperplanes in $\mathbb{R}P^4$ is 1-dimensional and completes a fundamental line complex cube in $\mathbb{R}P^4$. By construction, we also find $x_{123} \in \hat l_{123}$.
\end{proof}
The above proof of Theorem~\ref{thm:fundamental_line_complex} reveals the following
\begin{corollary}[$m$D consistent systems related to fundamental line systems] \label{cor:2d_quad_line_systems}
\begin{enumerate}[(i)]
\item Fundamental line systems are governed by an $m$D consistent 3D system.
\item The construction of Q-nets inscribed in a discrete torsal line system according to Lemma~\ref{lem:2d_systems}~(Q) is $m$D consistent if and only if the line system is fundamental.
\item The extension of Q-nets to discrete torsal line systems according to Lemma~\ref{lem:2d_systems}~(L) is $m$D consistent and always yields fundamental line systems.
\end{enumerate}
\end{corollary}
\section{Supercyclides}
\label{sec:supercyclides}
Let $\mathcal{M}$ be a surface in $\mathbb{R}P^3$ that is generated by a 1-parameter family of conics. If along each conic the tangent planes to $\mathcal{M}$ envelop a quadratic cone, $\mathcal{M}$ is called a \emph{surface of Blutel}, cf. Fig.~\ref{fig:supercyclides}, left. If the conjugate curves to the conics of a surface of Blutel are also conics, the Blutel property is automatically satisfied for the conjugate family \cite{Degen:1982:ConjugateConics}. Accordingly, such surfaces had been introduced by Degen as \emph{double Blutel surfaces} originally, but eventually became known to a wider audience under the name of \emph{supercyclides}.\footnote{Most double Blutel surfaces are (complex) projective images of Dupin cyclides \cite{Degen:1986:ZweifachBlutel}, so Degen later referred to double Blutel surfaces as ``generalized cyclides'' \cite{Degen:1994:GeneralizedCyclidesCAGD} -- although this term was already used by Casey and Darboux for quartic surfaces that have the imaginary circle at infinity as a singular curve and hence generalize Dupin cyclides in a different way. Therefore, Pratt proposed the name ``supercyclides'' for a major subclass of double Blutel surfaces, which is characterized by a certain quartic equation and contains the projective images of quartic Dupin cyclides \cite{Pratt:1996:DupinCyclidesAndSupercyclides,Pratt:1997:QuarticSupercyclidesBasic}.
Eventually, the term supercyclides was used for the whole class of double Blutel surfaces by Pratt and Degen \cite{Pratt:1997:SupercyclidesClassification,Degen:1998:SupercyclidesOrigin,Pratt:2002:QuarticSupercyclidesDesign}.}
\begin{figure}[htb]
\begin{center}
\includegraphics[width=.21\textwidth]{tangent_cone.png}
\hspace{.15\textwidth}
\includegraphics[width=.24\textwidth]{SC-patch_with_cyclide.png}
\end{center}
\caption{Left: The tangent planes along a characteristic conic on a supercyclide envelop a quadratic cone. The generators of the cone are the tangents of the conjugate curves. Right: Restriction of a supercyclide to an SC-patch, that is, a surface patch bounded by parameter lines of a characteristic parametrization.}
\label{fig:supercyclides}
\end{figure}
\begin{definition}[Supercyclides and SC-patches] \label{def:supercyclide}
A surface in $\mathbb{R}P^3$ that is generated by two conjugate families of conics, such that the tangent planes along each conic envelop a quadratic cone, is called a \emph{supercyclide}. We refer to those conics as \emph{characteristic conics} and accordingly call a conjugate parametrization $f : U \to \mathbb{R}P^3$ of a supercyclide characteristic if its parameter lines are characteristic conics. A (parametrized) \emph{supercyclidic patch} (\emph{SC-patch} for short) is the restriction of a characteristic parametrization to a closed rectangle $I_1 \times I_2 \subset U$, cf. Fig.~\ref{fig:supercyclides}, right.
\end{definition}
In~\cite{Degen:1982:ConjugateConics} it is shown that for each of the two families of characteristic conics on a supercyclide the supporting planes form a pencil. The axis of such a pencil consists of the vertices of the cones that are tangent to the respective conjugate family of characteristic conics.
\begin{definition}[Characteristic lines] \label{def:characteristic_lines}
The axes of the pencils supporting the families of characteristic conics of a supercyclide are the \emph{characteristic lines}.
\end{definition}
\paragraph{Examples of supercyclides}
Any non-degenerate quadric $Q$ in $\mathbb{R}P^3$ is a supercyclide in manifold ways: Simply take two lines $a^1$ and $a^2$ that are polar with respect to $Q$ and consider the plane pencils through those lines. The intersections of those planes with $Q$ constitute a conjugate network of conics that satisfy the tangent cone property. Another prominent subclass of supercyclides is given by Dupin cyclides, the latter being characterized by the fact that all curvature lines are circles. They are supercyclides, since a net of curvature lines is a special conjugate net and, moreover, the tangent cone property is satisfied. As special instances of Dupin cyclides it is worth mentioning the three types of rotationally symmetric tori (ring, horn, and spindle -- having 0, 1, and 2 singular points, respectively) and recalling that all other Dupin cyclides may be obtained from such tori by inversion in a sphere. Finally, since the defining properties of supercyclides are projectively invariant, it is clear that any projective image of a Dupin cyclide is a supercyclide. A complete classification of supercyclides is tedious, see the early contributions \mbox{\cite{Degen:1986:ZweifachBlutel,Barner:1987:DifferentialgeometrischeKennzeichnungSuperzykliden}} and also the later \cite{Pratt:1997:QuarticSupercyclidesBasic,Pratt:1997:SupercyclidesClassification}.
One fundamental result is that supercyclides are algebraic surfaces of, at most, degree four. While \cite{Degen:1986:ZweifachBlutel,Barner:1987:DifferentialgeometrischeKennzeichnungSuperzykliden} follow a classical differential geometric approach that relies essentially on convenient parametrizations derived from the double Blutel property, the treatment of supercyclides in \cite{Pratt:1997:QuarticSupercyclidesBasic,Pratt:1997:SupercyclidesClassification} is more of an algebraic nature. On the other hand, there is also a unified approach to the construction and classification of supercyclides that starts with a certain ruled 3-manifold in projective 5-space from which all supercyclides may be obtained by projection and intersection with a hyperplane \cite{Degen:1998:SupercyclidesOrigin}. The basis for this unified treatment can be found already in \cite{Degen:1986:ZweifachBlutel}. \paragraph{Genericity assumption} In this paper we only consider generic supercyclides, that is, supercyclides of degree four with skew characteristic lines. \mathbf enlargethispage{\baselineskip} \begin{proposition}[Properties of SC-patches] \label{prop:properties_supercyclidic_patches} \begin{enumerate}[(i)] \item \label{item:prop_patches_coplanar} The vertices of an SC-patch are coplanar. \item \label{item:prop_patches_projection} Isoparametric points on opposite boundary curves are perspective from the non-cor\-re\-spond\-ing Laplace point of the vertex quadrilateral, cf. Fig.~\ref{fig:scpatches_projection}, left. \item \label{item:prop_patches_concurrent_tangent_congruence_quad} For each coordinate direction, the tangents to the corresponding boundary curves at the four vertices of an SC-patch intersect cyclically. In particular, the eight tangents constitute an elementary hexahedron of a fundamental line complex, cf. Fig.~\ref{fig:scpatches_projection}, middle and right. \item \label{item:prop_patches_concurrent_tangent_planes} The tangent planes at the four vertices intersect in one point. \mathbf end{enumerate} \mathbf end{proposition} \begin{figure}[htp] \begin{center} \parbox{.35\textwidth}{\includegraphics[width=.35\textwidth]{patch_projection.png}} \qquad \qquad \parbox{.5\textwidth}{\includefig{patch_boundary_tangents}} \mathbf end{center} \caption{Opposite boundary curves of an SC-patch are perspective from the non-corresponding Laplace point of the vertex quadrilateral (left). The tangents to the boundary of an SC-patch at vertices constitute an elementary hexahedron of a fundamental line complex (middle and right: geometric and combinatorial, respectively).} \label{fig:scpatches_projection} \mathbf end{figure} \begin{proof} We start with a fact concerning Blutel surfaces that may be found in \cite{Degen:1986:ZweifachBlutel}, that is, for any surface of Blutel the conjugate curves induce projective transformations between the generating conics. From that it may be deduced easily that for a double Blutel surface those projective maps are perspectivities. Applied to opposite boundary curves of an SC-patch, we conclude that the patch vertices are coplanar and that the centers of perspectivity have to be the Laplace points of the vertex quadrilateral. To verify the perspectivity statement, consider two characteristic conics $c_1$ and $c_2$ of one family on a supercyclide for which we assume that they are contained in distinct planes $\pi_1$ and $\pi_2$. 
According to the above, the conjugate family of conics induces a projective map $\tau : \pi_1 \to \pi_2$ that identifies isoparametric points on $c_1$ and $c_2$. In order to see that $\tau$ is a perspectivity, take three points $x_1,y_1,z_1$ on $c_1$ and denote the tangents in those points by $g_1,h_1,l_1$, respectively. Further let $(x_2,y_2,z_2,g_2,h_2,l_2) = \tau (x_1,y_1,z_1,g_1,h_1,l_1)$ be the corresponding points and tangents with respect to $c_2$. As, e.g., $x_1$ and $x_2$ are isoparametric, the tangents $g_1$ and $g_2$ intersect at the tip $x$ of the cone that is tangent to the supercyclide along the conjugate curve through $x_1$ and $x_2$. One obtains three tangent cone vertices $x,y,z$ of that kind and also the further intersection points $q_i = g_i \cap h_i$, $r_i = g_i \cap l_i$, $s_i = h_i \cap l_i$, $i=1,2$, see Fig.~\ref{fig:perspectivity_proof}. \begin{figure}[htb] \begin{center} \includefig{perspectivity_proof} \mathbf end{center} \caption{Triangles formed by the tangents to $c_1$ and $c_2$ in corresponding points $x_1,y_1,z_1$ and $x_2,y_2,z_2$, are perspective.} \label{fig:perspectivity_proof} \mathbf end{figure} From the observation $\tau(x,y,z,q_1,r_1,s_1) = (x,y,z,q_2,r_2,s_2)$ it follows that the restrictions of $\tau$ to the lines $g_1$, $h_1$, and $l_1$, all being projective maps, are determined by $(x,q_1,r_1) \mapsto (x,q_2,r_2)$, $(x,q_1,s_1) \mapsto (x,q_2,s_2)$, and $(x,s_1,r_1) \mapsto (x,s_2,r_2)$, respectively. On the other hand, the three planes $g_1 \vee g_2, h_1 \vee h_2$, and $l_1 \vee l_2$ intersect in one point $O$, therefore the pairs $(q_1,q_2), (r_1,r_2)$, and $(s_1,s_2)$ of corresponding tangent intersections are each perspective from $O$. From this we may conclude that $\tau : \pi_1 \to \pi_2$ is the perspectivity from $O$. It remains to verify the assertions \iref{item:prop_patches_concurrent_tangent_congruence_quad} and \iref{item:prop_patches_concurrent_tangent_planes}. Now the intersection statement of \iref{item:prop_patches_concurrent_tangent_congruence_quad} follows immediately from the tangent cone property and planarity of the characteristic conics and simply means that the eight tangents to the boundary curves at vertices constitute a line complex cube. The vertex quadrilateral of the patch is a focal quadrilateral for that complex cube and planarity of vertices implies that the complex cube is fundamental due to Lemma~\ref{lem:fundamental_cubes}~\iref{item:lem_fun_cubes_planar_quad_one}. Therefore, the equivalent statement \iref{item:lem_fun_cubes_concurrent_planes_any} of Lemma~\ref{lem:fundamental_cubes} also holds and yields \iref{item:prop_patches_concurrent_tangent_planes} of this proposition. \mathbf end{proof} \begin{remark} \label{rem:singularities} In the complexified setting, a generic quartic supercyclide possesses four isolated singularities -- two at each characteristic line -- and one singular conic \cite{Pratt:1997:QuarticSupercyclidesBasic}. In fact, it can be easily seen that real intersections of a supercyclide with its characteristic lines must be singularities: Suppose that the characteristic line of the first family (the axis of the pencil of planes supporting the first family of conics) intersects the cyclide in a point~$x$. Generically, $x$ lies on a non-degenerate conic of the first familiy and Proposition~\ref{prop:properties_supercyclidic_patches}~\iref{item:prop_patches_projection} implies that all conics of the first family pass through $x$, that is, $x$ is a singular point. 
On the other hand, if a generic pair of conics of one family intersects (automatically at the corresponding characteristic line), the intersection points are singular points of the cyclide by the same argument. \mathbf end{remark} \paragraph{Notation and terminology for SC-patches} We denote the vertices of an SC-patch $f:[a_0,a_1] \times [b_0,b_1] \to \mathbb{R}P^3$ by \begin{equation*} x = f(a_0,b_0),\ x_1 = f(a_1,b_0),\ x_{12} = f(a_1,b_1),\ x_2 = f(a_0,b_1), \mathbf end{equation*} which gives rise to the (oriented) \mathbf emph{vertex quadrilateral} $(x,x_1,x_{12},x_2)$. Conversely, we say that an SC-patch is \mathbf emph{adapted} to a quadrilateral if it is the vertex quadrilateral of the patch. We define \begin{equation*} \mathcal{A}^3 = \left\{ \text{conic arcs in } \mathbb{R}P^3 \right\} \mathbf end{equation*} and write the boundary curve of an SC-patch that connects the vertices $x$ and $x_i$ as $b^i \in \mathcal{A}^3$. The supporting plane of $b^i$ in $\mathbb{R}P^3$ is usually denoted by $\pi^i$ and we refer to those planes as \mathbf emph{boundary planes} of the patch. Further, we denote by $t^i$ and $t^i_i$ the tangents to $b^i$ at $x$ and at $x_i$, respectively, as depicted in Fig.~\ref{fig:patch_notation}. \begin{figure}[htb] \begin{center} \includefig{patch_terminology} \mathbf end{center} \caption{The boundary of an SC-patch, labeled according to our usual terminology.} \label{fig:patch_notation} \mathbf end{figure} Usually we refer to the characteristic lines of an SC-patch as $a^1$ and $a^2$, where $a^i$ is the line through the Laplace point \begin{equation*} y^i = (x \vee x_i) \cap (x_j \vee x_{12}),\quad i \ne j. \mathbf end{equation*} Obviously, the characteristic lines may be expressed in terms of tangents as \begin{equation*} a^i = (t^i \vee t^i_i) \cap (t^i_j \vee t^i_{12}),\quad i \ne j. \mathbf end{equation*} \paragraph{Extension of planar quadrilaterals to SC-patches} A supercyclidic patch is uniquely determined by its boundary, since points $\tilde x_i \in b^i$, $i = 1,2$, on the boundary of an SC-patch parametrize points $\tilde x_{12}$ in the interior as follows: Given the characteristic lines $a^1$ and $a^2$, the corresponding point $\tilde x_{12}$ may be obtained as intersection of the three planes $(x \vee \tilde x_1 \vee \tilde x_2)$, $(\tilde x_1 \vee a^2)$, and $(\tilde x_2 \vee a^1)$. According to Proposition~\ref{prop:properties_supercyclidic_patches}, valid boundaries of SC-patches that are adapted to a given vertex quadrilateral may be constructed as follows: Two adjacent conical boundary curves may be chosen arbitrarily as well as the two opposite boundary planes. The remaining boundary curves are then determined as perspective images of the initial boundary curves from the corresponding Laplace points of the vertex quadrilateral. It turns out that it is very convenient to emphasize the tangents to boundary curves at vertices in the suggested construction, which gives rise to the following \paragraph{Construction of adapted SC-patches} Denote the given (planar) vertex quadrilateral by $(x,x_1,x_{12},x_2)$. For each $i = 1,2$, first choose $t^i \ni x$ freely and then choose $t^i_1 \ni x_1$ and $t^i_2 \ni x_2$ such that $t^i \cap t^i_j \ne \mathbf emptyset$. According to Proposition~\ref{prop:properties_supercyclidic_patches}~\iref{item:prop_patches_concurrent_tangent_congruence_quad}, the tangent $t^i_{12}$ is then determined subject to $t^i_{12} = (x_{12} \vee t^i_1) \cap (x_{12} \vee t^i_2)$. 
Further, for each coordinate direction choose a conic segment $b^i$ in the plane $\pi^i = t^i \vee t^i_i$ that is tangent to $t^i$ at $x$ and to $t^i_i$ at $x_i$, respectively. The opposite segment $b^i_j$ is then obtained as the perspective image $\tau^j(b^i)$, where \begin{equation*} \tau^j : \pi^i \to \pi^i_j, \quad p \mapsto (p \vee y^j) \cap \pi^i_j \mathbf end{equation*} is the perspectivity from the Laplace point $y^j = e^j \cap e^j_i$. \begin{remark} \label{rem:patch_construction_dof} In the above construction, we have exploited 10 independent degrees of freedom: For each direction $i$, 4 DOF are used in the construction of the corresponding tangents and 1 DOF is used for the adapted boundary curve $b^i$. The latter corresponds to the free choice of one additional point in the prescribed boundary plane, modulo points on the same segment.\footnote{Recall that a conic is uniquely determined by 5 points, where two infinitesimally close points correspond to a point and a tangent in that point (a 1-dimensional contact element).} \mathbf end{remark} \section{2D supercyclidic nets} \label{sec:2d_supercyclidic_nets} We will now analyze the extension of a given Q-net $x:\mathbb{Z}^2 \to \mathbb{R}P^3$ by one adapted supercyclidic patch for each elementary quadrilateral. At the common edge of two neighboring quadrilaterals we require that the tangent cones coincide. \begin{definition}[Tangent cone continuity] \label{def:tcc_property} Let $Q$ and $\tilde Q$ be two planar quadrilaterals that share an edge $e$. Two adapted SC-patches $f$ and $\tilde f$ with common boundary curve $b$ that joins the vertices of $e$ are said to form a \mathbf emph{tangent cone continuous join}, or \mathbf emph{TCC-join} for short, if the tangency cones to $f$ and $\tilde f$ along $b$ coincide. We also say that $f$ and $\tilde f$ have the \mathbf emph{TCC-property} and, accordingly, satisfy the \mathbf emph{TCC-condition}. \mathbf end{definition} \begin{figure}[hbt] \begin{center} \parbox{.31\textwidth}{\includegraphics[width=.25\textwidth]{adjacent_patches.png}} \parbox{.31\textwidth}{\includegraphics[width=.25\textwidth]{double_cusp_2.png}} \parbox{.28\textwidth}{\includegraphics[width=.28\textwidth]{half_cusp_4.png}} \mathbf end{center} \caption{Pairs of SC-patches that form TCC joins, that is, tangency cones coincide along common boundary curves. As depicted, this does not exclude cusps.} \label{fig:partial_cusps} \mathbf end{figure} The TCC-condition yields a consistent theory for the extension of discrete conjugate nets by surface patches parametrized along conjugate directions in~$\mathbb{R}P^3$. As a quadratic cone is determined by one non-degenerate conic section and two generators, one immediately obtains the following characterization of TCC-joins between SC-patchs. \begin{lemma} \label{lem:tcc_characterization} Two SC-patches $f,\tilde f$ with common boundary curve $b$ form a TCC-join if and only if their tangents in the common vertices coincide. \mathbf end{lemma} Consider coordinates so that $\tilde Q = Q_i$ is the shift of $Q$ in direction $i$ and accordingly rename $(\tilde f,b) \leadsto (f_i,b^j_i)$. If the relevant tangents coincide, they may be expressed as intersections of boundary planes \begin{equation*} t^i_i = \pi^i \cap \pi^i_i, \quad t^i_{ij} = \pi^i_j \cap \pi^i_{ij}, \mathbf end{equation*} and we see that the TCC-property implies that the four involved boundary planes of direction $i$ intersect in one point. 
Obviously, this means that the two characteristic lines
\begin{equation*}
a^i = \pi^i \cap \pi^i_j, \quad a^i_i = \pi^i_i \cap \pi^i_{ij}
\end{equation*}
associated with the direction $i$ intersect -- the point of intersection being the tip of the coinciding tangency cone.
\begin{definition}[2D supercyclidic net] \label{def:sc_net_2d_patches}
A 2-dimensional Q-net together with SC-patches that are adapted to its elementary quadrilaterals is called a \emph{2D supercyclidic net} (2D \emph{SC-net}) if each pair of patches adapted to edge-adjacent quadrilaterals has the TCC-property.
\end{definition}
\paragraph{Affine supercyclidic nets and $C^1$-joins}
An interesting subclass of supercyclidic nets are nets that form piecewise smooth $C^1$-surfaces, i.e., the joins of adjacent patches are not allowed to form cusps and the individual patches must not contain singularities (see Fig.~\ref{fig:partial_cusps}). Concise statements concerning the $C^1$-subclass of tangent cone continuous supercyclidic patchworks are tedious to formulate and to prove. As a starting point for future investigation of the $C^1$-subclass we only capture some observations in the following. A Q-net $x : \mathbb{Z}^2 \to \mathbb{R}P^3$ does not contain information about edges connecting adjacent vertices. In the affine setting in turn one has unique finite edges, which allow one to distinguish between the different types of non-degenerate vertex quadrilaterals: convex, non-convex, and non-embedded. We note the following implications for adapted SC-patches:
\begin{enumerate}[(i)]
\item Finite SC-patches adapted to \emph{non-embedded} vertex quadrilaterals always contain singular points: The intersection point of one pair of edges is the Laplace point and it lies on the characteristic line of the pencil containing the conic arcs. But if these arcs are finite they have to intersect the axis and hence contain a singular point of the surface, see Remark~\ref{rem:singularities} and Fig.~\ref{fig:affine_quads}, left.
\item If the vertex quadrilateral is \emph{non-convex} an adapted SC-patch cannot be finite: For any SC-patch, opposite boundary curves are perspective and hence lie on a unique cone whose vertex is one of the Laplace points of the vertex quadrilateral. For non-convex vertex quadrilaterals, the Laplace points are always contained in edges of the quadrilateral. Thus, the previously mentioned cones always give rise to a situation as depicted in Fig.~\ref{fig:affine_quads}, right. We observe that one of the edges $(x,x_2)$ or $(x_1, x_{12})$ has to lie outside of the cone and the other one inside. Clearly, the conic boundary arc that connects the vertices of the outside segment has to be unbounded.
\item If the vertex quadrilateral is \emph{convex} (and embedded) then there exist finite patches without singularities, but adapted patches have singularities if opposite boundary arcs intersect.
\end{enumerate}
\begin{figure}[htb]
\parbox{0.47\textwidth}{ \includefig{non_embedded} }
\hspace{2em}
\parbox{0.4\textwidth}{ \includefig{non_convex} }
\caption{SC-patches adapted to different types of affine quadrilaterals. Left: For finite patches adapted to non-embedded quadrilaterals, two opposite arcs have to intersect.
Right: Different shades show the possible pairs of perspective conic arcs for a non-convex vertex quadrilateral related by the perspectivity with respect to the Laplace point~$y^1$.} \label{fig:affine_quads} \mathbf end{figure} Accordingly, only affine Q-nets composed of convex quadrilaterals may be extended to bounded $C^1$-surfaces (projective images thereof also being $C^1$, but possibly not bounded). In that context, note that the cyclidic nets of \cite{BobenkoHuhnen-Venedey:2011:cyclidicNets} are defined as extensions of circular nets with embedded quadrilaterals (automatically convex) in order to avoid singularities. \paragraph{Discrete data for a 2D supercyclidic net} We have seen that supercyclidic patches are completely encoded in their boundary curves and that the TCC-property may be reduced to coinciding tangents in common vertices. Accordingly, a 2-dimensional supercyclidic net is completely encoded in the following data: \begin{enumerate}[(i)] \item The supporting Q-net $x: \mathbb{Z}^2 \to \mathbb{R}P^3$. \item The congruences of tangents $t^1,t^2 : \mathbb{Z}^2 \to \mathcal{L}^3$. \item The boundary curves $b^1,b^2 : \mathbb{Z}^2 \to \mathcal{A}^3$ with $b^i$ tangent to $t^i$ at $x$ and to $t^i_i$ at $x_i$ and such that for each quadrilateral opposite boundary curves are perspective from the corresponding Laplace point. \mathbf end{enumerate} \begin{remark} \label{rem:discrete_data_2d} In order to obtain discrete data that consists of points and lines only, for given Q-net $x$ and tangents $t^i$ the conic segment $b^i$ may be represented by one additional point in its supporting plane $\pi^i$ (see also Remark~\ref{rem:patch_construction_dof}). Accordingly, the boundary curves are then described by maps $b^i : \mathbb{Z}^2 \to \mathbb{R}P^3$ with $b^i \in t^i \vee t^i_i$. This representation does not only fix the conic arc but also a notion of parametrized supercyclidic nets. \mathbf end{remark} The tangents at vertices of a supercyclidic net are obviously determined by the conic splines composed of the patch boundaries. For the formulation of a related Cauchy problem it is more convenient to capture the tangents and boundary curves separately. \paragraph{Cauchy data for a 2D supercyclidic net} \label{par:cauchy_2d} According to Lemma~\ref{lem:2d_systems}~(L), the tangents of a 2D SC-net are uniquely determined by the tangents along coordinate axes if the supporting Q-net is given. On the other hand, if all the tangents are known, we know all boundary planes and may propagate suitable initial boundary splines according to the perspectivity property of SC-patches that is captured by Proposition~\ref{prop:properties_supercyclidic_patches}~\iref{item:prop_patches_projection}. Hence Cauchy data of a 2D supercyclidic net consists of, e.g., \begin{enumerate}[(i)] \item The supporting Q-net $x : \mathbb{Z}^2 \to \mathbb{R}P^3$. \item The tangents along two intersecting coordinate lines for each coordinate direction \begin{equation*} t^i|_{\coordsurf{j}},\quad i,j=1,2. \mathbf end{equation*} \item Two conic splines \begin{equation*} b^i|_{\coordsurf{i}},\quad i=1,2, \mathbf end{equation*} such that $b^i$ is tangent to $t^i$ at $x$ and to $t^i_i$ at $x_i$, cf. Fig.~\ref{fig:2d_cyclidic_cauchy}. 
\end{enumerate}
\begin{figure}[htb]
\begin{center}
\includefig{2d_cyclidic_cauchy_data}
\end{center}
\caption{Cauchy data for a 2D supercyclidic net.}
\label{fig:2d_cyclidic_cauchy}
\end{figure}
\paragraph{Rigidity of supercyclidic nets}
An SC-patch is determined by its boundary curves uniquely. Further, one observes that for fixed Q-net and fixed tangents the variation of a boundary curve propagates along a whole quadrilateral strip due to the perspectivity property. Moreover, a local variation of tangents is not possible for a prescribed Q-net, as each tangent is determined uniquely by the contained vertex of the Q-net and two adjacent tangents. This shows that it is not possible to vary a supercyclidic net locally. However, we know that local deformation of 2D Q-nets is possible\footnote{Local deformation of higher dimensional Q-nets is not possible as they are governed by a 3D system.} \cite{Hoffmann:2010:PQDeformation}, which gives rise to the question whether effects of the (global) variation of a given 2D SC-net might be minimized by incorporation of suitable local deformations of the supporting Q-net.
\pagebreak
\paragraph{Q-refinement induced by 2D supercyclidic nets}
Obviously, any selection of characteristic conics on an SC-patch induces a Q-refinement of its vertex quadrilateral since also the vertices of the induced ``sub-patches'' are coplanar. Accordingly, any continuous supercyclidic patchwork induces an arbitrary Q-refinement of the supporting 2D Q-net. This is interesting in the context of the convergence of discrete conjugate nets to smooth conjugate nets, but also a noteworthy fact from the perspective of architectural geometry: 2D supercyclidic nets may be realized at arbitrary precision by flat panels.
\paragraph{Implementation on a computer}
We have implemented two ways to experiment with supercyclidic nets. The first is in Java and solves the Cauchy problem described on page~\pageref{par:cauchy_2d}, using the framework of jReality and VaryLab.\footnote{\texttt{http://www.jreality.de} and \texttt{http://www.varylab.com}} We start with a Q-net with $\mathbb{Z}^2$ combinatorics. To create the two tangent congruences we prescribe initial tangents at the origin and propagate them along each coordinate axis via reflections in the symmetry planes of the edges (which does not exploit all possible degrees of freedom). The prescribed tangent congruences on the coordinate axes can then be completed to tangent congruences on the entire Q-net according to Lemma~\ref{lem:2d_systems}~(L). The conic splines on the coordinate axes are described as rational quadratic Bezier curves with prescribed weight. The initial weights are then propagated to all edges by evaluating suitable determinant expressions, which take into account the Q-net, the focal nets, the weights, and the intersection points of the four tangent planes at each quadrilateral. This yields a complete description of the resulting supercyclidic net in terms of rational quadratic Bezier surface patches. See Fig.~\ref{fig:sc_net_implementation} for two different SC-nets adapted to one and the same supporting Q-net that are obtained that way. The second approach to the computation of supercyclidic nets is based on a global variational method and uses the framework of Tang et al.~\cite{tang-2014-ff}. It also makes use of quadratic Bezier surface patches, where quadratic constraints involving auxiliary variables guarantee that those patches constitute an SC-net.
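For illustration, the completion step of Lemma~\ref{lem:2d_systems}~(L) on which the first implementation relies amounts to two joins and one meet in homogeneous coordinates. The following Python/NumPy sketch is not taken from the Java code described above; it is a minimal stand-alone illustration (all function names are ours) of how the fourth tangent $l_{12} = (x_{12} \vee l_1) \cap (x_{12} \vee l_2)$ may be computed from a vertex $x_{12}$ and the two neighbouring tangents, with planes represented by covectors and lines by pairs of spanning points.
\begin{verbatim}
import numpy as np

def plane_through(points):
    # Covector of the plane spanned by three points of RP^3 (homogeneous
    # 4-vectors): the 1-dimensional null space of the 3x4 coordinate matrix.
    _, _, vt = np.linalg.svd(np.vstack(points))
    return vt[-1]

def meet_of_planes(p1, p2):
    # Two points spanning the intersection line of two distinct planes:
    # the 2-dimensional null space of the 2x4 matrix of plane covectors.
    _, _, vt = np.linalg.svd(np.vstack([p1, p2]))
    return vt[2], vt[3]

def complete_tangent(x12, l1, l2):
    # Lemma (L): l_12 = (x_12 join l_1) meet (x_12 join l_2), where x12 is
    # a point and l1, l2 are lines, each given by a pair of points.
    pi1 = plane_through([x12, *l1])
    pi2 = plane_through([x12, *l2])
    return meet_of_planes(pi1, pi2)

# Generic example (last coordinate is the homogenizing one): the two joined
# planes are x=1 and y=1, so the result spans the line x=1, y=1.
x12 = np.array([1.0, 1.0, 1.0, 1.0])
l1 = (np.array([1.0, 0.0, 0.0, 1.0]), np.array([1.0, 2.0, 0.0, 1.0]))
l2 = (np.array([0.0, 1.0, 0.0, 1.0]), np.array([0.0, 1.0, 2.0, 1.0]))
print(complete_tangent(x12, l1, l2))
\end{verbatim}
Working with null spaces of stacked coordinate matrices keeps the joins and meets independent of the scaling of the homogeneous coordinates; propagating the tangent congruences over the whole lattice then consists of applying this step once per elementary quadrilateral, in the order prescribed by the Cauchy data.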
\begin{figure}[htb] \begin{center} \includegraphics[width=.88\textwidth]{sc_extension.png} \mathbf end{center} \caption{Two different supercyclidic nets with same supporting Q-net. The Q-net with tangents at vertices is drawn on top of the corresponding supercyclidic net.} \label{fig:sc_net_implementation} \mathbf end{figure} \section{3D supercyclidic nets} \label{sec:3d_supercyclidic_nets} Motivated by a common structure behind integrable geometries that is reflected by the ``multidimensional consistency principle'' of discrete differential geometry, see \cite{BobenkoSuris:2008:DDGBook}, we start out with \begin{definition}[3D supercyclidic net] \label{def:sc_net_3d_from_2d} A Q-net $x:\mathbb{Z}^3 \to \mathbb{R}P^3$ together with SC-patches that are adapted to its elementary quadrilaterals -- such that patches adapted to edge-adjacent quadrilaterals meet in a common boundary curve associated with the corresponding edge -- is called a \mathbf emph{3D supercyclidic net} if the restriction to any coordinate surface is a 2D supercyclidic net. \mathbf end{definition} While Definition~\ref{def:sc_net_3d_from_2d} appears natural, it is not obvious that one can consistently (that is, with coinciding boundary curves) adapt SC-patches even to a single elementary hexahedron of a 3D Q-net. Before proving that this is indeed possible (Theorem~\ref{thm:cyclidic_3d_system}), we introduce some convenient terminology. \begin{definition}[Common boundary condition and supercyclidic cubes] Consider several quadrilaterals that share an edge. We say that SC-patches adapted to those quadrilaterals satisfy the \mathbf emph{common boundary condition} if their boundary curves associated with the common edge all coincide. Moreover, we call a 3-dimensional Q-cube with SC-patches adapted to its faces, such that the common boundary condition is satisfied for each edge, a \mathbf emph{supercyclidic cube} or \mathbf emph{SC-cube} for short, cf. Fig.~\ref{fig:sc_cube}. \mathbf end{definition} \begin{figure}[htb] \begin{center} \includegraphics[width=.3\textwidth]{sc_cube_1.png} \hspace{.13\textwidth} \includegraphics[width=.3\textwidth]{sc_cube_2.png} \mathbf end{center} \caption{A supercyclidic cube: Six SC-patches with shared boundary curves whose vertices form a cube with planar faces.} \label{fig:sc_cube} \mathbf end{figure} \begin{theorem}[Supercyclidic 3D system] \label{thm:cyclidic_3d_system} Three faces of an SC-cube that share one vertex, that is, three generic SC-patches with one common vertex and cyclically sharing a common boundary curve each, determine the three opposite faces (``SC 3D system''). Accordingly, the extension of a Q-cube to an SC-cube is uniquely determined by the free choice of three SC-patches that are adapted to three faces that meet at one vertex and satisfy the common boundary condition. \mathbf end{theorem} \begin{proof} First note that the three given SC-patches determine a unique supporting Q-cube $C$, since Q-nets are governed by a 3D system and seven vertices of the cube are already given. Further, denote the vertices of $C$ in the standard way by $x,\ldots,x_{123}$ so that the vertex quadrilaterals of the three patches are $Q^{ij} = (x,x_i,x_{ij},x_j)$, $1 \le i < j \le 3$, and the common vertex is $x$. Assume that the three given patches may be completed to an SC-cube. Then through each edge of $C$ in direction $i$ there is a unique boundary plane that contains the associated boundary curve. 
We denote those planes by \begin{equation*} \pi^i = t^i \vee t^i_i \supset e^i = x \vee x_i \mathbf end{equation*} and the respective shifts. Now consider the perspectivities between those planes that identify the opposite boundary curves of the adapted patches. They are central projections through the Laplace points (cf. Fig.~\ref{fig:laplace_projection_cube}, left) \begin{equation*} y^{i,j} = (x \vee x_j) \cap (x_i \vee x_{ij}), \mathbf end{equation*} which we denote by \begin{equation} \label{eq:laplace_projection} \tau^{i,j} : \pi^i \to \pi^i_j, \quad p \mapsto (p \vee y^{i,j}) \cap \pi^i_j, \mathbf end{equation} so that \begin{equation} \label{eq:boundary_projection_tau} b^i_j = \tau^{i,j}(b^i). \mathbf end{equation} Here we make the genericity assumption that the tangents are such, that the boundary planes do not contain the Laplace points. Now fix one direction $i$ and consider the corresponding ``maps around the cube'' as depicted in Fig.~\ref{fig:laplace_projection_cube}, right. \begin{figure}[htb] \begin{center} \qquad \includefig{laplace_projections} \mathbf end{center} \caption{Laplace projections around a cube.} \label{fig:laplace_projection_cube} \mathbf end{figure} On use of the fact that the four involved Laplace points $y^{i,j},y^{i,j}_k,y^{i,k},y^{i,k}_j$ are contained in the line \begin{equation*} l^i = Q^{jk} \cap Q^{jk}_i, \mathbf end{equation*} one verifies that the relations \mathbf eqref{eq:boundary_projection_tau} imply \begin{equation} \label{eq:commuting_projections} \tau^{i,j}_k \circ \tau^{i,k} = \tau^{i,k}_j \circ \tau^{i,j}. \mathbf end{equation} So, given the three patches adapted to $Q^{12},Q^{23}$, and $Q^{13}$, for any direction $i$ we know already the three planes $\pi^i,\pi^i_j,\pi^i_k$, $\left\{ i,j,k \right\} = \left\{ 1,2,3 \right\}$, and the claim of Theorem~\ref{thm:cyclidic_3d_system} follows if we show that given those planes there exists a unique plane $\pi^i_{jk}$ through $e^i_{jk}$ such that \mathbf eqref{eq:commuting_projections} holds. The fact that $\pi^i_{jk}$ is unique, if it exists at all, follows again from the collinearity of $y^{i,j},y^{i,j}_k,y^{i,k}$, and $y^{i,k}_j$ as this guarantees that the construction \begin{equation} \label{eq:laplace_projection_constructive} p_{jk} = (\tau^{i,j}(p) \vee y^{i,k}_j) \cap (\tau^{i,k}(p) \vee y^{i,j}_k) \subset l^i \vee p \mathbf end{equation} yields a well-defined map \begin{equation*} \tau^i : \mathbb{R}P^3 \setminus \left\{ l^i \right\} \to \mathbb{R}P^3, \quad p \mapsto p_{jk}. \mathbf end{equation*} To see the existence of $\pi^i_{jk}$, note that the above construction may be reversed and that $\tau^i$ maps lines to lines according to \begin{equation*} l \mapsto l_{jk} = (\tau^{i,j}(l) \vee y^{i,k}_j) \cap (\tau^{i,k}(l) \vee y^{i,j}_k). \mathbf end{equation*} Thus $\tau^i$ is a bijection between open subsets of $\mathbb{R}P^3$ that maps lines to lines and therefore the restriction of a projective transformation $\tilde \tau^i$ due to the fundamental theorem of real projective geometry. Accordingly, $\pi^i_{jk} = \tilde \tau^i(\pi^i)$ is the well-defined plane we are looking for. \mathbf end{proof} \begin{remark} \label{rem:compatible_planes_via_concurrency} Observe that in the above proof the point $q^i = \pi^i \cap \pi^i_j \cap \pi^i_k$ is a fixed point of $\tau^i$ and hence $q^i \in \pi^i_{jk}$. 
Accordingly, the plane $\pi^i_{jk}$ may be constructed as $\pi^i_{jk} = q^i \vee e^i_{jk}$ and $q^i$ becomes the common intersection of the four characteristic lines of the direction $i$, that is, $q^i = a^i \cap a^i_j \cap a^i_{jk} \cap a^i_k$. This shows that for an SC-cube in fact \mathbf emph{all} planes supporting the characteristic conics of its supercyclidic faces associated with one and the same direction $i$ contain the point $q^i$. \mathbf end{remark} \begin{corollary} \label{cor:compatible_planes} Let $\pi^i,\pi^i_j,\pi^i_k$, and $\pi^i_{jk}$ be four planes through the respective $i$-edges of a Q-cube $(x,x_i,x_{ij},x_{123})$ in $\mathbb{R}P^3$, such that each plane contains exactly one edge. Then the central projections \mathbf eqref{eq:laplace_projection} through the (collinear) Laplace points commute in the sense of \mathbf eqref{eq:commuting_projections} if and only if the four planes are concurrent. \mathbf end{corollary} \begin{theorem}[Properties of SC-cubes] \label{thm:sc_cube_properties} Characteristic conics around an SC-cube close up and bound (unique) SC-patches as indicated in Fig.~\ref{fig:SC_cube_families}. Patches obtained this way consitute three families $\mathcal{F}^i$, $i=1,2,3$, where the notation is such that patches of the family $\mathcal{F}^i$ interpolate smoothly between opposite cyclidic faces of the cube with respect to the direction $i$. The induced SC-patches have the following properties: \begin{enumerate}[(i)] \item For each direction $i$ there exists a unique point $q^i$, which is the common intersection of the characteristic lines $a^i$ of all patches $\mathcal{F}^j \cup \mathcal{F}^k$, $\left\{ i,j,k \right\} = \left\{ 1,2,3 \right\}$. \label{item:sc_cube_properties_common_point} \item Patches $f^i \in \mathcal{F}^i$ and $f^j \in \mathcal{F}^j$, $i \ne j$, intersect along a common characteristic conic that is associated with their shared net direction. \label{item:sc_cube_properties_common_conic} \item Patches $f^i, \tilde f^i \in \mathcal{F}^i$ are classical fundamental transforms of each other. In particular, this holds for opposite cyclidic faces of the initial SC-cube. \label{item:sc_cube_properties_ftrafo} \mathbf end{enumerate} \mathbf end{theorem} \begin{figure}[htb] \begin{center} \quad\quad\quad\includefig{smooth_families} \mathbf end{center} \caption{SC-cubes induce smooth families of SC-patches.} \label{fig:SC_cube_families} \mathbf end{figure} \begin{proof} The proof of Theorem~\ref{thm:cyclidic_3d_system} shows that going along characteristic conics of the supercyclidic faces of an SC-cube closes up as indicated in Fig.~\ref{fig:SC_cube_families}. Generically, one obtains four conic arcs that intersect in four coplanar vertices\footnote{If opposite boundary curves of a supercyclidic face intersect, those intersections are singular points of the patch (Remark~\ref{rem:singularities}). Accordingly, one obtains only three or less vertices when going around the cube if one passes through such points. However, as the singularities are isolated, the presented argumentation may be kept valid by incorporation of a continuity argument.} as the four centers of perspectivity that relate isoparametric points on the patch boundaries are collinear (cf. Fig.~\ref{fig:laplace_projection_cube}). In order to verify that those arcs constitute the boundary of an SC-patch, it remains to show that opposite arcs are related by a ``Laplace perspectivity'' with respect to the constructed vertex quadrilateral. 
So consider Fig.~\ref{fig:SC_cube_families} and note that the vertices $x,x_1,x_{13},x_3,\tilde x_2,\tilde x_{12},\tilde x_{123},\tilde x_{23}$ constitute a Q-cube. Moreover, there are unique planes $\pi^1,\tilde \pi^1_2, \tilde \pi^1_{23},\pi^1_3$ through its edges of direction 1 that support the characteristic conics of the original cyclidic faces and those planes intersect in a point $q^1$ according to Remark~\ref{rem:compatible_planes_via_concurrency}. We know already that some of the characteristic conics in those planes are related by Laplace perspectivities, namely \begin{equation*} \tau^{1,3} : \pi^1 \to \pi^1_3, \quad \tilde\tau^{1,2} : \pi^1 \to \tilde\pi^1_2, \quad \tilde\tau^{1,2}_3 : \pi^1_3 \to \tilde\pi^1_{23}. \mathbf end{equation*} Therefore, also the curves in the planes $\tilde\pi^1_2$ and $\tilde\pi^1_{23}$ are related by the Laplace perspectivity $\tilde\tau^{1,3}_2 : \tilde\pi^1_2 \to \tilde\pi^1_{23}$ according to Corollary~\ref{cor:compatible_planes}. We conclude that characteristic conics of the supercyclidic faces of an SC-cube induce three families of SC-patches, each family interpolating between opposite faces. Denote by $\mathcal{F}^i$ the family that interpolates in the direction $i$. The construction implies that the patches of the family $\mathcal{F}^i$ may be parametrized smoothly by points $\tilde x^i$ of the boundary arc $b^i$ that connects $x$ and $x_i$. Note that, as a consequence, any intermediate patch $f^i \in \mathcal{F}^i$ splits the original SC-cube into two smaller SC-cubes, for which $f^i$ is a common face. \mathbf emph{Proof of \iref{item:sc_cube_properties_common_point}:} This is an immediate consequence of Remark~\ref{rem:compatible_planes_via_concurrency}. \mathbf emph{Proof of \iref{item:sc_cube_properties_common_conic}:} As indicated in Fig.~\ref{fig:SC_cube_fundamental_trafo}, the patches $f^i$ and $f^j$ determine a pair of corresponding points $p$ and $p_k$ on the opposite faces of the SC-cube with respect to the common net direction $k$ of $f^i$ and $f^j$. Together with the vertices of $f^i$ and $f^j$, the points $p$ and $p_k$ induce a refinement of the original supporting Q-cube into four smaller Q-cubes. This follows from the aforementioned property that any patch $f^i \in \mathcal{F}^i$ different from the opposite faces of the SC-cube splits the cube into two smaller SC-cubes, together with the fact that the characteristic conics around an SC-cube close up. We conclude that for two of the induced SC-cubes the vertices $p$ and $p_k$ are connected by a characteristic conic of $f^i$, while for the other two cubes they are connected by a characteristic conic of $f^j$. The fact that those conics coincide is again an implication of Remark~\ref{rem:compatible_planes_via_concurrency} together with Corollary~\ref{cor:compatible_planes}. \mathbf emph{Proof of \iref{item:sc_cube_properties_ftrafo}:} It is sufficient to prove the claim for opposite faces of the SC-cube, since any pair $f^i, \tilde f^i \in \mathcal{F}^i$ may be understood as a pair of opposite faces after a possible 2-step reduction of the initial SC-cube to a smaller SC-cube that is induced by $f^i$ and~$\tilde f^i$. Now, corresponding points $p$ and $p_k$ on opposite faces, top and bottom, say, may be identified by using "vertical" patches as indicated in Fig.~\ref{fig:SC_cube_fundamental_trafo}. Thus property \iref{item:sc_cube_properties_common_conic} implies that corresponding tangents in those points intersect due to the tangent cone property for the vertical patches. 
\end{proof}
\begin{figure}[htb]
\begin{center}
\includefig{f_trafos_1}
\end{center}
\caption{Opposite faces of an SC-cube form a fundamental pair.}
\label{fig:SC_cube_fundamental_trafo}
\end{figure}
\paragraph{Supercyclidic coordinate systems}
We say that an SC-cube is \emph{regular} if its six supercyclidic faces do not contain singularities and opposite faces are disjoint. Note that this implies that all patches of all three families are regular, which may be derived from the fact that isolated singularities of SC-patches appear as intersections of opposite boundary curves, cf. Remark~\ref{rem:singularities}. This in turn shows that for a regular SC-cube, any two patches $f^i, \tilde f^i \in \mathcal{F}^i$ are disjoint. Otherwise there would exist a patch $f$ of another family through $p \in f^i \cap \tilde f^i$ for which the same reasoning as above together with a ``cutting down argument'' as in the proof of Theorem~\ref{thm:sc_cube_properties} would imply that $p$ is a singular point of $f$. Having observed this, we see that for a regular SC-cube the property \iref{item:sc_cube_properties_common_conic} of Theorem~\ref{thm:sc_cube_properties} gives rise to classical conjugate coordinates on the region that is covered by the SC-patches of the families $\mathcal{F}^1$, $\mathcal{F}^2$, and $\mathcal{F}^3$, cf. Fig.~\ref{fig:SC_cube_coordinates}. Associated with those coordinates, planarity of vertex quadrilaterals of sub-patches of the coordinate surfaces induces arbitrary Q-refinements of the original Q-cube.
\begin{figure}[htb]
\begin{center}
\includefig{3d_conjugate_coordinates}
\end{center}
\caption{Regular SC-cubes induce smooth conjugate coordinates and give rise to arbitrary Q-refinements of the supporting Q-cube.}
\label{fig:SC_cube_coordinates}
\end{figure}
\paragraph{TCC-reduction of the supercyclidic 3D system}
Theorem~\ref{thm:cyclidic_3d_system} shows that it is possible to equip elementary quadrilaterals of a 3D Q-net consistently with adapted SC-patches that satisfy the common boundary condition, such an extension being determined by its restriction to three coordinate planes (one of each family). We will now demonstrate that the imposition of the TCC-condition in coordinate planes, i.e., the requirement that each 2D layer be a 2D supercyclidic net, is an admissible reduction of the underlying 3D system, which in turn shows the existence of 3D supercyclidic nets. In fact, it turns out that the TCC-reduction is multidimensionally consistent, which will later give rise to multidimensional SC-nets as well as fundamental transformations thereof that exhibit the same Bianchi-permutability properties as fundamental transformations of classical conjugate nets and their discrete counterparts.
\begin{theorem} \label{thm:tcc_reduction}
Imposition of the TCC-property on 2D coordinate planes is an admissible reduction of the SC 3D system. This means that propagation of admissible TCC-Cauchy data for coordinate planes $\coordsurf{ij}$ according to Theorem~\ref{thm:cyclidic_3d_system} yields a Q-net with SC-patches adapted to elementary quadrilaterals, such that the common boundary condition is satisfied and the TCC-property holds in all 2D coordinate planes. In particular, the reduction is multidimensionally consistent.
\mathbf end{theorem} \begin{proof} Denote the supporting Q-net by $x$ and the adapted patches by $f$ and assume TCC-Cauchy data $(x,f)|_{\coordsurf{jk}}$, $1 \le j < k \le m$, to be given (admissible in the sense that the common boundary condition is satisfied along coordinate axes). To begin with, note that~$x$ is uniquely determined by the Cauchy data $x|_{\coordsurf{ij}}$ as Q-nets are governed by an $m$D consistent 3D system. Accordingly, we may assume that $x$ is given and focus on the Cauchy data $f|_{\coordsurf{jk}}$ for adapted SC-patches. The propagation of adapted patches may be understood as propagation of their defining boundary curves and it has been shown (Remark~\ref{rem:compatible_planes_via_concurrency} and Corollary~\ref{cor:compatible_planes}) that, on a 3D cube, the propagation of boundary curves by Laplace perspectivities is consistent if and only if the boundary planes satisfy \begin{equation} \label{eq:boundary_plane_propagation_3d} \pi^i_{jk} = (\pi^i \cap \pi^i_j \cap \pi^i_k) \vee x_{jk} \vee x_{ijk} = q^i \vee e^i_{jk}, \quad i \ne j \ne k \ne i. \mathbf end{equation} Accordingly, it has to be shown that the propagation of boundary planes subject to \mathbf eqref{eq:boundary_plane_propagation_3d} is multidimensionally consistent. Moreover, we have to assure that the induced lines \begin{equation*} t^i = \pi^i_{-i} \cap \pi^i \mathbf end{equation*} at vertices $x$ constitute a discrete torsal line system. This is the case if and only if they are compatible with adapted SC-patches, while automatically implying the TCC-property in coordinate planes (see Proposition~\ref{prop:properties_supercyclidic_patches}~\iref{item:prop_patches_concurrent_tangent_congruence_quad} and Lemma~\ref{lem:tcc_characterization}). Both suggested properties may be verified by a reverse argument: Instead of considering the Cauchy problem for boundary planes, consider the evolution of tangents subject to Lemma~\ref{lem:2d_systems}~(L) and for the admissible Cauchy data $t^i|_{\coordsurf{ij}}$, $j = 1,\ldots,m$. Due to Corollary~\ref{cor:2d_quad_line_systems} the propagation is multidimensionally consistent and one obtains a unique fundamental line system $t^i$. It remains to note that the solution $t^i$ encodes the unique solution \begin{equation*} \pi^i = t^i \vee t^i_i \mathbf end{equation*} of the original Cauchy problem, for the reason that \mathbf eqref{eq:boundary_plane_propagation_3d} is satisfied by virtue of Lemma~\ref{lem:fundamental_cubes}~\iref{item:lem_fun_cubes_concurrent_planes_any}. \mathbf end{proof} \section{$m$D supercyclidic nets and their fundamental transformations} \label{sec:md_supercyclidic_nets} The previous considerations motivate the following definition of $m$-dimensional supercyclidic nets, whose existence is verified by Theorem~\ref{thm:tcc_reduction}. \begin{definition}[$m$D supercyclidic net] \label{def:sc_net_md_from_2d} Let $x:\mathbb{Z}^m \to \mathbb{R}P^3$, $m \ge 2$, be a Q-net and assume that $f: \left\{ \text{2-cells of } \mathbb{Z}^m \right\} \to \left\{ \text{supercyclidic patches in } \mathbb{R}P^3 \right\}$ describes SC-patches, which are adapted to the elementary quadrilaterals of $x$ and satisfy the common boundary condition. The pair $(x,f)$ is called an \mathbf emph{$m$D supercyclidic net} if the restriction to any coordinate surface is a 2D supercyclidic net, that is, if the TCC-property is satisfied in each 2D coordinate plane. 
\mathbf end{definition} \paragraph{Discrete data for an $m$D supercyclidic net} Analogous to the 2-dimensional case, an $m$-dimensional supercyclidic net is completely encoded in the following data: \begin{enumerate}[(i)] \item The supporting Q-net $x: \mathbb{Z}^m \to \mathbb{R}P^3$. \item The discrete torsal tangent systems $t^i : \mathbb{Z}^m \to \mathcal{L}^3$, $i = 1,\ldots,m$. \item The boundary curves $b^i : \mathbb{Z}^m \to \mathcal{A}^3$, $i = 1,\ldots,m$, with $b^i$ tangent to $t^i$ at $x$ and to $t^i_i$ at $x_i$ and such that for each quadrilateral opposite boundary curves are perspective from the corresponding Laplace point. \mathbf end{enumerate} See also Remark~\ref{rem:discrete_data_2d} on the fully discrete description of boundary curves in terms of points and lines. \paragraph{Cauchy data for an $m$D supercyclidic net} According to Theorem~\ref{thm:tcc_reduction}, Cauchy data for an $m$D supercyclidic net is given by a collection $(x^{ij},f^{ij})$ of 2D supercyclidic nets for the 2D coordinate planes $\coordsurf{ij}$, $1 \le i < j \le m$, which has to be admissible in the sense that the common boundary condition is satisfied along coordinate lines. Independent Cauchy data in turn is given by Cauchy data for such compatible 2D supercyclidic nets and we conclude that the following constitutes Cauchy data for an $m$D supercyclidic net. \begin{enumerate}[(i)] \item \label{item:cauchy_data_sc_qnet} Cauchy data for the supporting Q-net $x : \mathbb{Z}^m \to \mathbb{R}P^3$, e.g., \begin{equation*} x|_{\coordsurf{ij}},\quad 1 \le i < j \le m. \mathbf end{equation*} \item \label{item:cauchy_data_sc_tangents} Cauchy data for the discrete torsal tangent systems $t^i$, $i = 1,\ldots,m$, e.g., \begin{equation*} t^i|_{\coordsurf{j}},\quad i,j=1,\ldots,m. \mathbf end{equation*} \item \label{item:cauchy_data_sc_splines} One adapted spline of conic segments for each direction, e.g., \begin{equation*} b^i|_{\coordsurf{i}},\quad i=1,\ldots,m, \mathbf end{equation*} such that $b^i$ is tangent to $t^i$ at $x$ and to $t^i_i$ at $x_i$. \mathbf end{enumerate} \paragraph{Fundamental transformations of supercyclidic nets} Motivated by the theories of cyclidic and hyperbolic nets, we define fundamental transformations of supercyclidic nets by combination of the according smooth and discrete fundamental transformations of the involved SC-patches and supporting Q-nets, respectively. \begin{definition}[F-transformation of supercyclidic nets] \label{def:sc_net_f_trafo} Two $m$D supercyclidic nets $(x,f)$ and $(x^+,f^+)$ are called \mathbf emph{fundamental transforms} of each other if the supporting Q-nets $x$ and $x^+$ form a discrete fundamental pair and each pair of corresponding SC patches $f,f^+$ forms a classical fundamental pair. \mathbf end{definition} It turns out that two $m$D supercyclidic nets are fundamental transforms if and only if they may be embedded as two consecutive layers of an $(m+1)$-dimensional supercyclidic net. While the possibility of such an embedding implies the fundamental relation according to Theorem~\ref{thm:sc_cube_properties}~\iref{item:sc_cube_properties_ftrafo}, the converse has to be shown. It is not difficult to see (think of Cauchy data for $m$D supercyclidic nets) that this is an implication of the following Proposition~\ref{prop:SC_cube_and_ftrafos}. \begin{proposition} \label{prop:SC_cube_and_ftrafos} Consider a 3D cube $Q$ with planar faces and SC-patches $f,f^+$ adapted to one pair of opposite faces of $Q$. 
Then the following statements are equivalent: \begin{enumerate}[(i)] \item \label{item:sc_cube_trafos_transforms} The patches $f$ and $f^+$ are classical fundamental transforms of each other. \item \label{item:sc_cube_trafos_perspective} Corresponding boundary curves $b$ and $b^+$ of $f$ and $f^+$ are perspective from the associated Laplace point $(x \vee x^+) \cap (y \vee y^+)$ of the face of $Q$ that consists of the vertices $x,y$ of $b$ and $x^+,y^+$ of $b^+$. \item \label{item:sc_cube_trafos_extension} The configuration can be extended to an SC-cube. The extension is uniquely determined by the choice of the four missing boundary planes that have to be concurrent, and a conic segment that connects corresponding vertices in one of those planes. \mathbf end{enumerate} \mathbf end{proposition} For the proof of Proposition~\ref{prop:SC_cube_and_ftrafos} we will use the following \begin{lemma} \label{lem:fundamental_curves} Let $f,f^+ : U \subset \mathbb{R}^2 \to \mathbb{R}P^3$ be smooth conjugate nets that are fundamental transforms of each other and let $b$ and $b^+$ be corresponding parameter lines, that is, tangents to $b$ and $b^+$ in corresponding points intersect. Further denote by $c$ the curve that consists of intersection points of the conjugate tangents along $b$ and $b^+$. If $b$ and $b^+$ both are planar and $c$ is regular, then $c$ is also a planar curve. Moreover, the supporting planes of $b,b^+$, and $c$ intersect in a line. \mathbf end{lemma} \begin{proof}[Proof of Lemma~\ref{lem:fundamental_curves}] Denote by $t$ and $t^+$ the tangents to $b$ and $b^+$ and by $g$ and $g^+$ the conjugate tangents to $f$ and $f^+$. As $b$ and $b^+$ are planar, corresponding tangents $t$ and $t^+$ intersect along the line $l$ that is the intersection of the supporting planes of $b$ and $b^+$. In order to see that the curve $c$ composed of the intersection points $g \cap g^+$ is planar, note the following: The tangent planes $\pi$ to $f$ along $b$ are of the form $t \vee g$ and analogous for the tangent planes $\pi^+$ of $f^+$ one has $\pi^+ = t^+ \vee g^+$. So $\pi \cap \pi^+ = h = (t \cap t^+) \vee (g \cap g^+)$ and we conclude $h \cap l = t \cap t^+$. Moreover, the lines $h$ are the tangents of $c$, since the planes $\pi$ along $b$ envelop a torsal ruled surface $\sigma$ whose rulings are precisely the conjugate tangents $g$.\footnote{This is a classical, purely geometric description of conjugate tangents to a smooth surface (cf. \cite{PottmannWallner:2001:ComputationalLineGeometry}).} Accordingly, $c$ is a curve on $\sigma$ and thus in each point of $c$ its tangent is contained in the corresponding tangent plane $\pi$ of $\sigma$. Of course the same argumentation holds if we consider the planes $\pi^+$ and hence the tangents to $c$ must be of the form $\pi \cap \pi^+$. Finally, for regular $c$ the planarity follows from the fact that continuity of a regular curve $\gamma$ implies that $\gamma$ is planar if and only if there exists a line that intersects all of its tangents. Clearly, the supporting plane of $c$ also passes through $l$. \mathbf end{proof} \begin{proof}[Proof of Proposition~\ref{prop:SC_cube_and_ftrafos}] A Q-cube with one conic segment per edge that connects the incident vertices encodes an SC-cube if and only if for each quadrilateral the conic segments that connect the vertices of opposite edges are ``Laplace perspective'', cf.~Fig.~\ref{fig:laplace_projection_cube}. Thus, \iref{item:sc_cube_trafos_perspective} is necessary for \iref{item:sc_cube_trafos_extension}. 
It is also sufficient, since the construction suggested in \iref{item:sc_cube_trafos_extension} always yields an SC-cube according to Corollary~\ref{cor:compatible_planes} and actually covers all possible extensions as a consequence of the proof of Theorem~\ref{thm:cyclidic_3d_system}. On the other hand, \iref{item:sc_cube_trafos_extension} $\Longrightarrow$ \iref{item:sc_cube_trafos_transforms} is covered by Theorem~\ref{thm:sc_cube_properties}~\iref{item:sc_cube_properties_ftrafo} and it remains to show \iref{item:sc_cube_trafos_transforms} $\Longrightarrow$ \iref{item:sc_cube_trafos_perspective}. So consider two corresponding boundary curves $b$ and $b^+$ of the SC-patches $f$ and $f^+$ that connect four coplanar vertices $x,y,x^+,y^+$. As the patches $f$ and~$f^+$ are fundamental transforms, the tangents to $b$ and $b^+$ in isoparametric points $p$ and $p^+$ intersect along the line $l$ that is the intersection of the two supporting planes $\pi$ and $\pi^+$ of $b$ and $b^+$, respectively. On the other hand, also the conjugate tangents $g$ and $g^+$ in $p$ and $p^+$ intersect in $q = g \cap g^+$ because of the fundamental relation in the other direction, and those points $q$ trace a curve $c$. Moreover, all conjugate tangents along $b$ pass through the apex $a$ of the tangency cone $\mathcal{C}$ to $f$ along $b$ and analogously all conjugate tangents along $b^+$ pass through the apex $a^+$ of the tangency cone $\mathcal{C}^+$ to $f^+$ along $b^+$, cf. Fig.~\ref{fig:ftrafo_proof}. The curve $c$ is contained in the intersection $\mathcal{C} \cap \mathcal{C}^+$ and thus is regular. As $b$ and $b^+$ are planar, the conditions of Lemma~\ref{lem:fundamental_curves} are met and we may conclude that $c$ is planar (actually, a conic arc) and contained in a plane $\pi_c$ that passes through the line $l$. \begin{figure}[htb] \includefig{ftrafo_proof} \caption{Two boundary curves of a fundamental pair of SC-patches with coplanar vertices $x,y,x^+,y^+$.} \label{fig:ftrafo_proof} \end{figure} It is obvious that isoparametric points on $b$ and $c$ are perspective from $a$ (along the generators of $\mathcal{C}$) and analogously $b^+$ and $c$ are perspective from $a^+$. Accordingly, those perspective relations between curves are restrictions of the perspective projective transformations \begin{equation*} \begin{array}{lll} \tau_a : \pi \to \pi_c, & \quad p \mapsto q \quad & (\text{central projection through } a)\\ \tau_{a^+} : \pi_c \to \pi^+, & \quad q \mapsto p^+ \quad & (\text{central projection through } a^+). \end{array} \end{equation*} Further denote by $\tau = \tau_{a^+} \circ \tau_a$ the composition of those perspectivities, so that \begin{equation*} \tau : \pi \to \pi^+, \quad p \mapsto p^+ \end{equation*} identifies corresponding points on $b$ and $b^+$. The fact that $\tau$ is also a perspectivity may be seen as follows. First note that $\tau$ is determined by four points in general position in $\pi$ and their images in $\pi^+$. We know already $\tau(x) = x^+$ as well as $\tau(y) = y^+$ and, moreover, that those corresponding points are perspective from $o = (x \vee x^+) \cap (y \vee y^+)$. It remains to observe that $l = \pi \cap \pi^+$ consists of fixed points of $\tau$ because of $l \subset \pi_c$, and to consider two further points $r,s \in l$ such that $x,y,r,s$ are in general position. \end{proof} \begin{remark*} The center of perspectivity $o$ for $\tau$ lies on the line $a \vee a^+$.
This follows from the fact that each triple $p,p^+,q$ of corresponding points is contained in a plane that is spanned by intersecting generators $g$ and $g^+$ of $\mathcal{C}$ and $\mathcal{C}^+$, respectively. All planes of that kind are contained in the pencil of planes through the line that is spanned by the apices of $\mathcal{C}$ and $\mathcal{C}^+$, that is, $a \vee a^+$. \end{remark*} \paragraph{Construction of fundamental transforms} Let $s=(x,f)$ be an $m$D supercyclidic net. According to our previous considerations, the construction of a fundamental transform $s^+=(x^+,f^+)$ of $s$ corresponds to the extension of $s$ to a 2-layer $(m+1)$-dimensional supercyclidic net. Any such extension may be obtained as follows. First, construct an F-transform $x^+$ of $x$, that is, extend $x$ to a 2-layer $(m+1)$-dimensional Q-net $X:\mathbb{Z}^m \times \left\{ 0,1 \right\} \to \mathbb{R}P^3$ with $X(\cdot,0) = x$ so that $x^+ = X(\cdot,1)$. Cauchy data for such an extension is given by the values of $x^+$ along the coordinate axes $\coordsurf{i}$, $i=1,\ldots,m$, which has to satisfy the condition that the quadrilaterals $(x,x_i,x^+_i,x^+)$ are planar. Further, the section on Cauchy data for an $m$D supercyclidic net shows that the remaining relevant data are the tangents of $s^+$ at one point $x^+_0$, which have to intersect the corresponding tangents of $s$ at the corresponding point $x_0$. The boundary curves of $s^+$ are then obtained as perspective images of the boundary curves of $s$. \begin{remark*} In the above construction there are some degrees of freedom left for the construction of compatible ``vertical'' patches in the resulting 2-layer $(m+1)$D SC-net, but this does not affect the net $s^+$. \end{remark*} \enlargethispage{\baselineskip} \paragraph{Permutability properties} Our previous considerations of $m$D supercyclidic nets immediately yield the following permutability properties for F-transforms of supercyclidic nets. Corollary~\ref{cor:permutability_scnets} is a literal translation of the analogous Theorem~\ref{thm:permutability_qnets} for Q-nets and follows from the fact that the missing tangents of the involved supercyclidic nets are uniquely determined by their supporting Q-nets (subject to the $m$D consistent 2D system that describes the extension of Q-nets to torsal line systems, cf. Corollary~\ref{cor:2d_quad_line_systems}). \begin{corollary}[Permutability properties of F-transformations of supercyclidic nets] \label{cor:permutability_scnets} \begin{enumerate}[(i)] \item Let $s=(x,f)$ be an $m$-dimensional supercyclidic net, and let $s^{(1)}$ and $s^{(2)}$ be two of its F-transforms. Then there exists a 2-parameter family of SC-nets $s^{(12)}$ that are F-transforms of both $s^{(1)}$ and $s^{(2)}$. The corresponding vertices of the four SC-nets $s$, $s^{(1)}$, $s^{(2)}$ and $s^{(12)}$ are coplanar. The net $s^{(12)}$ is uniquely determined by one of its vertices. \item Let $s=(x,f)$ be an $m$-dimensional supercyclidic net. Let $s^{(1)}$, $s^{(2)}$ and $s^{(3)}$ be three of its F-transforms, and let three further SC-nets $s^{(12)}$, $s^{(23)}$ and $s^{(13)}$ be given such that $s^{(ij)}$ is a simultaneous F-transform of $s^{(i)}$ and $s^{(j)}$. Then generically there exists a unique SC-net $s^{(123)}$ that is an F-transform of $s^{(12)}$, $s^{(23)}$ and $s^{(13)}$.
The net $s^{(123)}$ is uniquely determined by the condition that for every permutation $(ijk)$ of $(123)$ the corresponding vertices of $s^{(i)}$, $s^{(ij)}$, $s^{(ik)}$ and $s^{(123)}$ are coplanar. \end{enumerate} \end{corollary} \section{Frames of supercyclidic nets via projective reflections} \label{sec:frames} \begin{definition}[Tangent systems and frames of supercyclidic nets] Let $x:\mathbb{Z}^m \to \mathbb{R}P^3$ be the supporting Q-net of a supercyclidic net and $t^i : \mathbb{Z}^m \to \mathcal{L}^3$, $i=1,\ldots,m$, be the tangents to the supercyclidic patches at vertices of $x$. We call \[(t^1,\ldots,t^m) : \mathbb{Z}^m \to \underbrace{\mathcal{L}^3 \times \ldots \times \mathcal{L}^3}_{m \text{ times}}\] the \emph{tangent system} of the supercyclidic net and $(t^1,\ldots,t^m)(z)$ the \emph{tangents at $x(z)$}. For $m \in \left\{ 2,3 \right\}$ we refer to the tangents as \emph{frames} of the supercyclidic net. \end{definition} It turns out that the integrable structure behind the tangent systems of supercyclidic nets (in the sense of Corollary~\ref{cor:2d_quad_line_systems}) has a nice expression in terms of \emph{projective reflections} that are associated with the edges of the supporting Q-net. In fact, the uncovered integrable system on projective reflections governs the simultaneous extension of a single Q-net to multiple fundamental line systems also in the multidimensional case and in that sense is more general than the theory of supercyclidic nets, which is rooted in 3-space. Before going into details, let us recall that any pair of disjoint subspaces $U,V \subset \mathbb{R}P^N$ with $U \vee V = \mathbb{R}P^N$ induces a unique projective reflection $f: \mathbb{R}P^N \to \mathbb{R}P^N$ as follows. Points of the union $U \cup V$ are defined to be fixed points of $f$. For all other points $x$ there exists a unique line $l$ through $x$ that intersects $U$ and $V$, which yields points $p = U \cap l$ and $q = V \cap l$. The image $f(x)$ is then defined by the cross-ratio condition \begin{equation*} \operatorname{cr}(x,p,f(x),q) = -1. \end{equation*} Note that any projective reflection is an involution, $f^2 = \mathrm{id}$, since $\operatorname{cr}(x,p,f(x),q) = -1$ if and only if $\operatorname{cr}(f(x),p,x,q) = -1$. Relevant for our purpose is the particular subclass of projective reflections that we denote by \begin{equation*} \mathcal{F}_{o,\pi}(N) = \left\{ \text{Projective reflections in } \mathbb{R}P^N \text{ induced by a point } o \text{ and a hyperplane } \pi \not\ni o \right\}. \end{equation*} Now the starting point for the following considerations is the observation that for a generic 3D supercyclidic net any two frames $T=(t^1,t^2,t^3)$ and $T_i=(t^1_i,t^2_i,t^3_i)$ at adjacent vertices $x$ and $x_i$ are related by a unique projective reflection $f^i \in \mathcal{F}_{o,\pi}(3)$, that is \begin{equation} \label{eq:pr_frames} f^i(T) = T_i \iff f^i(T_i) = T. \end{equation} To verify this claim, first note that \eqref{eq:pr_frames} implies $f^i(x) = x_i$. Further let $f^i \sim (o^i,\pi^i) \in \mathcal{F}_{o,\pi}(3)$ be any projective reflection with $f^i(x) = x_i$ and denote $p^j = t^j \cap \pi^i$. As $x \ne x_i$ we have $x \not\in \pi^i$ and thus \begin{equation*} f^i (t^j) = f^i(x \vee p^j) = x_i \vee p^j.
\end{equation*} Therefore, \eqref{eq:pr_frames} implies that the $p^j$ have to be the three focal points $p^j = t^j \cap t^j_i$, $j = 1,2,3$, and $\pi^i = p^1 \vee p^2 \vee p^3$ is uniquely determined. On the other hand, the condition $f^i(x) = x_i$ determines $o^i \in x \vee x_i$ uniquely according to $\operatorname{cr}(x,q^i,x_i,o^i) = -1$, where $q^i = \pi^i \cap (x \vee x_i)$ is distinct from $x$ and $x_i$. The next step is to consider frames $T,T_i,T_j,T_{ij}$ at the four vertices $x,x_i,x_j,x_{ij}$ of a quadrilateral. One obtains four induced projective reflections $f^i,f^i_j,f^j,f^j_i \in \mathcal{F}_{o,\pi}(3)$ for which \begin{equation*} (f^j_i \circ f^i) (T) = T_{ij} = (f^i_j \circ f^j) (T) \end{equation*} and in particular \begin{equation} \label{eq:commuting_reflections_vertices} f^i(x) = x_i, \quad f^j(x) = x_j, \quad f^i_j(x_j) = f^j_i(x_i) = x_{ij}. \end{equation} Clearly, $f^i,f^j$, and $T$ determine the frames $T_i,T_j$, and $T_{ij}$ subject to $T_i = f^i(T), T_j = f^j(T)$, and Lemma~\ref{lem:2d_systems}~(L). Accordingly, the projective reflections $f^i_j$ and $f^j_i$ may be constructed from $f^i$ and $f^j$ but the construction depends on $T$. However, it turns out that this constructive propagation \begin{equation} \label{eq:reflection_propagation} (f^i,f^j) \mapsto (f^i_j,f^j_i) \end{equation} is independent of $T$ modulo \eqref{eq:commuting_reflections_vertices}. Before we show this, note that the previous considerations and described constructions also apply for the simultaneous extension of a Q-net $x : \mathbb{Z}^m \to \mathbb{R}P^N$ to fundamental line systems $t^1,\ldots,t^N$ and yield unique projective reflections that relate the lines at adjacent vertices. Accordingly, we will now go beyond the setting of supercyclidic nets in $\mathbb{R}P^3$ and consider the multidimensional case in $\mathbb{R}P^N$. Moreover, it is evident that the frame dependent construction of reflections is $m$D consistent because of the multidimensional consistency of the extension of Q-nets to fundamental line systems, cf. Corollary~\ref{cor:2d_quad_line_systems}. In order to show that the propagation \eqref{eq:reflection_propagation} depends only on the supporting Q-net, still inducing an $m$D consistent 2D system with variables in $\mathcal{F}_{o,\pi}(N)$ on edges, we will need the following \begin{lemma} \label{lem:reflection_planes_intersection} Let $Q=(x,x_1,x_{12},x_2)$ be a planar quadrilateral in $\mathbb{R}P^N$ and $(t^i,t^i_1,t^i_{12},t^i_2)$, $i=1,\ldots,N$, be $N$ simultaneous extensions of $Q$ to torsal line congruence quadrilaterals subject to Lemma~\ref{lem:2d_systems}~(L). Further denote by $\pi^j = \bigvee_i (t^i \cap t^i_j)$ the hyperplane spanned by focal points that are associated with the edge $x \vee x_j$ (cf. Fig.~\ref{fig:reflection_planes}) and let $X = \pi^1 \cap \pi^2 \cap \pi^1_2 \cap \pi^2_1$. Then $\operatorname{codim}(X) = 2$. \end{lemma} \begin{figure}[htb] \begin{center} \includefig{reflection_planes} \end{center} \caption{Hyperplanes spanned by focal points of line congruence quadrilaterals that extend the same planar quadrilateral.} \label{fig:reflection_planes} \end{figure} \begin{proof} For $1 \le i < j \le N$ the eight corresponding lines $t^i,t^j,\ldots,t^i_{12},t^j_{12}$ form a line complex cube for which $Q$ is a focal quadrilateral.
Now consider the four lines $l^{1,ij} = (t^i \cap t^i_1) \vee (t^j \cap t^j_1) \subset \pi^1$, $l^{2,ij} \subset \pi^2$, $l^{1,ij}_2 \subset \pi^1_2$, and $l^{2,ij}_1 \subset \pi^2_1$ that are spanned by the remaining focal points of the complex cube. Lemma~\ref{lem:fundamental_cubes} implies that those four lines intersect in one point $p^{ij} = l^{1,ij} \cap l^{2,ij} \cap l^{1,ij}_2 \cap l^{2,ij}_1 \in X$ due to the planarity of $Q$. As always, we assume that the data is generic so that $\tilde X = \bigvee_{i \ne j} p^{ij}$ is a subspace of codimension 1 in $\pi^1 = \bigvee_{i \ne j} l^{1,ij}$ and thus has codimension 2 in $\mathbb{R}P^N$. Now $\tilde X \subset X \subset \pi^1 \cap \pi^2$ together with $\operatorname{codim}(\pi^1 \cap \pi^2) = 2$ shows $X = \tilde X$ and hence proves the lemma. \end{proof} \begin{theorem}[Extension of Q-nets to fundamental line systems via projective reflections] \label{thm:projective_reflections} Let $(x,x_1,x_{12},x_2)$ be a planar quadrilateral in $\mathbb{R}P^N$, $N \ge 3$, and let $f^1,f^2 \in \mathcal{F}_{o,\pi}(N)$ be projective reflections with $f^i(x) = x_i$. Then there exist unique projective reflections $f^1_2,f^2_1 \in \mathcal{F}_{o,\pi}(N)$, which are determined by the conditions \begin{equation} \label{eq:pr_propagation} f^i_j(x_j) = x_{12} \text{ and } (f^2_1 \circ f^1) (l) = (f^1_2 \circ f^2) (l) \text{ for all } l \in \mathcal{L}^N \text{ with } x \in l. \end{equation} Further, let $x : \mathbb{Z}^m \to \mathbb{R}P^N$, $N \ge 3$, be a generic Q-net. Then the induced propagation \begin{equation*} \tau : \mathbb{Z}^m \times \mathcal{F}_{o,\pi}(N) \times \mathcal{F}_{o,\pi}(N) \to \mathcal{F}_{o,\pi}(N) \times \mathcal{F}_{o,\pi}(N), \quad (f^i,f^j) \mapsto (f^i_j,f^j_i) \end{equation*} of projective reflections is $m$D consistent, that is, admissible Cauchy data $f^i|_{\coordsurf{i}}$, $i = 1,\ldots,m$ (mapping adjacent vertices onto each other), determines all remaining projective reflections uniquely. \end{theorem} \begin{proof} First observe that the maps $f^i|_{\coordsurf{i}}$, $i = 1,\ldots,m$, together with one line $t(0) \ni x(0)$ are Cauchy data for the ($m$D consistent) extension of $x$ to a fundamental line system. Accordingly, a frame $T(0)=(t^1,\ldots,t^N)(0)$ at $x(0)$ yields well defined frames at every vertex of $x$. Those frames in turn yield one well-defined projective reflection for each edge as described before: The hyperplane $\pi^i$ of the reflection $f^i \sim (o^i,\pi^i)$ associated with the edge $(x,x_i)$ is spanned by the corresponding focal points, $\pi^i = \bigvee_{j=1,\ldots,N} (t^j \cap t^j_i)$, which in turn allows one to construct the center $o^i$ from the condition $f^i(x) = x_i$. It remains to show that the construction is independent of $T(0)$, that is, the propagation $(f^i,f^j) \mapsto (f^i_j,f^j_i)$ according to \eqref{eq:pr_propagation} is well-defined. So consider a planar quadrilateral $Q=(x,x_1,x_{12},x_2)$ in $\mathbb{R}P^N$ with one generic frame $T=(t^1,\ldots,t^N)$ at $x$ and let $f^1,f^2 \in \mathcal{F}_{o,\pi}(N)$ with $f^i(x) = x_i$ be given. One obtains frames $T_1 = f^1(T)$ at $x_1$ and $T_2 = f^2(T)$ at $x_2$ and Lemma~\ref{lem:2d_systems}~(L) yields a unique frame $T_{12}$ at $x_{12}$.
As described above this induces projective reflections $f^1_2,f^2_1$ such that \begin{equation} \label{eq:pr_commuting_frames} (f^2_1 \circ f^1) (T) = T_{12} = (f^1_2 \circ f^2) (T) \end{equation} and we have to show that the maps so obtained satisfy \eqref{eq:pr_propagation}. By construction we have $f^i_j(x_j) = x_{12}$. On the other hand, the second condition $(f^2_1 \circ f^1) (l) = (f^1_2 \circ f^2) (l)$ for all $l \in \mathcal{L}^N(x) = \left\{ l \in \mathcal{L}^N \mid x \in l \right\}$ may be rewritten as \begin{equation} \label{eq:pr_identity} (f^2 \circ f^1_2 \circ f^2_1 \circ f^1) (l) = l, \end{equation} since projective reflections are involutions.\footnote{Note that the identity \eqref{eq:pr_identity} on sets does not imply $f^2 \circ f^1_2 \circ f^2_1 \circ f^1 = \mathrm{id}$. The composition could also be a homothety from an affine point of view.} In order to verify \eqref{eq:pr_identity} we identify lines through vertices of $Q$ with their intersections with the hyperplanes of the projective reflections according to \begin{equation*} \begin{array}{lclclcl} x & \in & l & \quad \leftrightarrow \quad & p & = & l \cap \pi^1\\ x_1 & \in & l_1 & \quad \leftrightarrow \quad & p_1 & = & l_1 \cap \pi^2_1\\ x_{12} & \in & l_{12} & \quad \leftrightarrow \quad & p_{12} & = & l_{12} \cap \pi^1_2\\ x_2 & \in & l_2 & \quad \leftrightarrow \quad & p_2 & = & l_2 \cap \pi^2\\ \end{array} \end{equation*} After this identification, the images of lines through the vertices of $Q$ under the considered projective reflections correspond to the images of their representing points under certain central projections: \begin{align*} f^1 : \mathcal{L}^N(x) \to \mathcal{L}^N(x_1) & \quad \leftrightarrow \quad \tau^1 : \pi^1 \to \pi^2_1 \quad (\text{central projection through } x_1)\\ f^2_1 : \mathcal{L}^N(x_1) \to \mathcal{L}^N(x_{12}) & \quad \leftrightarrow \quad \tau^2_1 : \pi^2_1 \to \pi^1_2 \quad (\text{central projection through } x_{12})\\ f^1_2 : \mathcal{L}^N(x_{12}) \to \mathcal{L}^N(x_2) & \quad \leftrightarrow \quad \tau^1_2 : \pi^1_2 \to \pi^2 \quad (\text{central projection through } x_2)\\ f^2 : \mathcal{L}^N(x_2) \to \mathcal{L}^N(x) & \quad \leftrightarrow \quad \tau^2 : \pi^2 \to \pi^1 \quad (\text{central projection through } x) \end{align*} The identity \eqref{eq:pr_identity} may then be rewritten as \begin{equation} \label{eq:tau_identity} \tau = \tau^2 \circ \tau^1_2 \circ \tau^2_1 \circ \tau^1 = \mathrm{id}. \end{equation} As $\tau : \pi^1 \to \pi^1$ is a projective transformation and $\dim(\pi^1)=N-1$, it is sufficient to show that there are $N+1$ fixed points of $\tau$ in general position. In fact, we already have $N$ fixed points because of \eqref{eq:pr_commuting_frames}, that is \begin{equation*} \tau(p^i) = p^i, \quad i = 1,\ldots,N, \end{equation*} for $p^i = t^i \cap \pi^1$. The existence of one further fixed point is guaranteed by Lemma~\ref{lem:reflection_planes_intersection}. \end{proof} The analytic description of Q-nets with frames at vertices via reflections might prove useful, as it did in the special case of circular nets. For circular nets, orthonormal frames at vertices were introduced in \cite{BobenkoMatthesSuris:2003:OrthogonalSystems} and played an essential role in the proof of the $C^\infty$-convergence of circular nets to smooth orthogonal nets.
The Dupin cyclidic structure behind it was discovered later and gave rise to the introduction of cyclidic nets in \cite{BobenkoHuhnen-Venedey:2011:cyclidicNets}. In that case, the orthonormal frames at adjacent vertices of the supporting circular net are related by a reflection in the Euclidean symmetry plane, which is a special instance of a projective reflection (clearly, cyclidic nets in $\mathbb{R}^3$ are special instances of supercyclidic nets). Note that the Euclidean reflections in the cyclidic case are uniquely determined by the supporting circular net; thus a Dupin cyclidic net is uniquely determined by its supporting circular net and the frame at one vertex.

\begin{thebibliography}{10}

\bibitem{AllenDutta:1997:SupercyclidesBlending} S.~{Allen} and D.~{Dutta}. \newblock {Supercyclides and blending}. \newblock {\em {Comput. Aided Geom. Des.}}, 14(7):637--651, 1997.

\bibitem{Barner:1987:DifferentialgeometrischeKennzeichnungSuperzykliden} M.~Barner. \newblock {Eine differentialgeometrische Kennzeichnung der allgemeinen Dupinschen Zykliden}. \newblock {\em Aequationes Math.}, 34:277--286, 1987.

\bibitem{BoEtAl:2011:CAS} P.~Bo, H.~Pottmann, M.~Kilian, W.~Wang, and J.~Wallner. \newblock Circular arc structures. \newblock {\em ACM Trans. Graphics}, 30:\#101,1--11, 2011. \newblock Proc. SIGGRAPH.

\bibitem{Bobenko:1999:DiscreteConformalMaps} A.~I. Bobenko. \newblock Discrete conformal maps and surfaces. \newblock In P.~A. Clarkson and F.~W. Nijhoff, editors, {\em Symmetries and integrability of difference equations (Canterbury 1996)}, volume 255 of {\em London Math. Soc. Lecture Notes}, pages 97--108. Cambridge University Press, 1999.

\bibitem{BobenkoHuhnen-Venedey:2011:cyclidicNets} A.~I. Bobenko and E.~Huhnen-Venedey. \newblock Curvature line parametrized surfaces and orthogonal coordinate systems: discretization with Dupin cyclides. \newblock {\em Geometriae Dedicata}, 159(1):207--237, 2012.

\bibitem{BobenkoMatthesSuris:2003:OrthogonalSystems} A.~I. Bobenko, D.~Matthes, and Y.~B. Suris. \newblock {Discrete and smooth orthogonal systems: $C^\infty$-approximation}. \newblock {\em Int. Math. Res. Not.}, 45:2415--2459, 2003.

\bibitem{BobenkoSchief:2014:FundamentalSystems} A.~I. Bobenko and W.~K. Schief. \newblock {Discrete line complexes and integrable evolution of minors}. \newblock Preprint, {\tt http://arxiv.org/abs/1410.5794}, 2014.

\bibitem{BobenkoSuris:2008:DDGBook} A.~I. Bobenko and Y.~B. Suris. \newblock {\em {Discrete Differential Geometry. Integrable structure}}, volume~98 of {\em Graduate Studies in Mathematics}. \newblock AMS, 2008.

\bibitem{CieslinskiDoliwaSantini:1997:CircularNets} J.~Cieslinski, A.~Doliwa, and P.~M. Santini. \newblock The integrable discrete analogues of orthogonal coordinate systems are multi-dimensional circular lattices. \newblock {\em Phys. Lett. A}, 235:480--488, 1997.

\bibitem{Degen:1982:ConjugateConics} W.~Degen. \newblock {Surfaces with a conjugate net of conics in projective space}. \newblock {\em Tensor, New Ser.}, 39:167--172, 1982.

\bibitem{Degen:1986:ZweifachBlutel} W.~Degen. \newblock {Die zweifachen Blutelschen Kegelschnittfl\"achen}. \newblock {\em Manuscr. Math.}, 55:9--38, 1986.

\bibitem{Degen:1994:GeneralizedCyclidesCAGD} W.~{Degen}. \newblock {Generalized cyclides for use in CAGD}. \newblock In {\em {The mathematics of surfaces IV. Proceedings of the 4th IMA conference held in Bath, GB, September 1990}}, pages 349--363. Oxford Clarendon Press, 1994.
\bibitem{Degen:1998:SupercyclidesOrigin} W.~{Degen}. \newblock {On the origin of supercyclides}. \newblock In {\em {The mathematics of surfaces VIII. Proceedings of the 8th IMA conference held in Birmingham, GB, August and September 1998}}, pages 297--312. Winchester: Information Geometers, 1998.

\bibitem{DoliwaSantini:1997:QnetsAreIntegrable} A.~Doliwa and P.~M. Santini. \newblock {Multidimensional quadrilateral lattices are integrable}. \newblock {\em Phys. Lett. A}, 233:265--372, 1997.

\bibitem{DoliwaSantiniManas:2000:TransformationsOfQnets} A.~Doliwa, P.~M. Santini, and M.~Ma{\~n}as. \newblock {Transformations of quadrilateral lattices}. \newblock {\em J. Math. Phys.}, 41(2):944--990, 2000.

\bibitem{DuttaMartinPratt:1993:CyclideSurfaceModeling} D.~Dutta, R.~Martin, and M.~Pratt. \newblock Cyclides in surface and solid modeling. \newblock {\em IEEE Computer Graphics and Applications}, 13:53--59, 1993.

\bibitem{Eisenhart:1923:Transformations} L.~{Eisenhart}. \newblock {\em {Transformations of surfaces}}. \newblock {Princeton, N. J. Princeton University Press}, 1923.

\bibitem{FoufouGarnier:2005:ImplicitEuqationsOfSupercyclides} S.~Foufou and L.~Garnier. \newblock {Obtainment of Implicit Equations of Supercyclides and Definition of Elliptic Supercyclides}. \newblock {\em Machine Graphics and Vision}, 14(2):123--144, 2005.

\bibitem{Hoffmann:2010:PQDeformation} T.~Hoffmann. \newblock On local deformations of planar quad-meshes. \newblock In K.~Fukuda, J.~Hoeven, M.~Joswig, and N.~Takayama, editors, {\em Mathematical Software -- ICMS 2010}, volume 6327 of {\em Lecture Notes in Computer Science}, pages 167--169. Springer Berlin Heidelberg, 2010.

\bibitem{Huhnen-VenedeyRoerig:2013:hyperbolicNets} E.~Huhnen-Venedey and T.~Rörig. \newblock Discretization of asymptotic line parametrizations using hyperboloid surface patches. \newblock {\em Geometriae Dedicata}, 168(1):265--289, 2013.

\bibitem{Huhnen-VenedeySchief:2014:wTrafos} E.~Huhnen-Venedey and W.~K. Schief. \newblock {On Weingarten transformations of hyperbolic nets}. \newblock {\em Int. Math. Res. Not.}, pages 1--61, 2014.

\bibitem{KaeferboeckPottmann:2012:DiscreteAffineMinimal} F.~K{\"a}ferb{\"o}ck and H.~Pottmann. \newblock Smooth surfaces from bilinear patches: discrete affine minimal surfaces. \newblock {\em Comput. Aided Geom. Design}, 30(5):476--489, 2013.

\bibitem{KonopelchenkoSchief:1998:3DIntegrableLattices} B.~G. Konopelchenko and W.~K. Schief. \newblock {Three-dimensional integrable lattices in Euclidean spaces: conjugacy and orthogonality}. \newblock {\em R. Soc. Lond. Proc. Ser. A}, 454:3075--3104, 1998.

\bibitem{LiuPottmannWallnerYangWang:2006:ConicalNets} Y.~Liu, H.~Pottmann, J.~Wallner, Y.-L. Yang, and W.~Wang. \newblock Geometric modeling with conical meshes and developable surfaces. \newblock {\em ACM Trans. Graphics}, 25(3):681--689, 2006. \newblock Proc. SIGGRAPH.

\bibitem{Martin:1983:PrincipalPatches} R.~Martin. \newblock {Principal patches -- a new class of surface patch based on differential geometry}. \newblock {\em Eurographics Proceedings}, pages 47--55, 1983.

\bibitem{MartinDePontSharrock:1986:CyclideSurfaces} R.~Martin, J.~de~Pont, and T.~Sharrock. \newblock {Cyclide surfaces in computer aided design}. \newblock In J.~Gregory, editor, {\em {The mathematics of surfaces}}, pages 253--267. {Clarendon Press}, 1986.

\bibitem{McLean:1985:CyclideSurfaces} D.~McLean.
\newblock {A method of Generating Surfaces as a Composite of Cyclide Patches}. \newblock {\em The Computer Journal}, 28(4):433--438, 1985.

\bibitem{PottmannAsperlHoferKilian:2007:ArchitecturalGeometry} H.~Pottmann, A.~Asperl, M.~Hofer, and A.~Kilian. \newblock {\em {Architectural Geometry}}. \newblock Bentley Institute Press, 2007.

\bibitem{PottmannWallner:2001:ComputationalLineGeometry} H.~Pottmann and J.~Wallner. \newblock {\em Computational line geometry}. \newblock Mathematics and Visualization. Springer-Verlag, Berlin, 2001.

\bibitem{PottmannWallner:2007:FocalGeometry} H.~Pottmann and J.~Wallner. \newblock {The focal geometry of circular and conical meshes}. \newblock {\em Adv. Comput. Math.}, 29(3):249--268, 2008.

\bibitem{Pratt:1996:DupinCyclidesAndSupercyclides} M.~J. {Pratt}. \newblock {Dupin cyclides and supercyclides}. \newblock In {\em {The mathematics of surfaces VI. Based on the 6th IMA mathematics of surfaces international conference, Brunel Univ., Uxbridge, Middlesex, GB, September 1994}}, pages 43--66. Oxford: Oxford Univ. Press, 1996.

\bibitem{Pratt:1997:SupercyclidesClassification} M.~J. {Pratt}. \newblock {Classification and characterization of supercyclides}. \newblock In {\em {The mathematics of surfaces VII. Proceedings of the 7th conference, Dundee, Great Britain, September 1996}}, pages 25--41. Winchester: Information Geometers, Limited, 1997.

\bibitem{Pratt:1997:QuarticSupercyclidesBasic} M.~J. {Pratt}. \newblock {Quartic supercyclides I: Basic theory}. \newblock {\em {Comput. Aided Geom. Des.}}, 14(7):671--692, 1997.

\bibitem{Pratt:2002:QuarticSupercyclidesDesign} M.~J. {Pratt}. \newblock {Quartic supercyclides for geometric design}. \newblock In {\em {From geometric modeling to shape modeling. IFIP TC5 WG5.2 7th workshop on geometric modeling: Fundamentals and applications, Parma, Italy, October 2--4, 2000}}, pages 191--208. Boston: Kluwer Academic Publishers, 2002.

\bibitem{Sauer:1937:ProjLinienGeometrie} R.~Sauer. \newblock {\em {Projektive Liniengeometrie}}. \newblock {Berlin, W. de Gruyter \& Co. (G\"oschens Lehrb\"ucherei I. Gruppe, Bd. 23)}, 1937.

\bibitem{ShiWangPottmann:2013:RationalBilinearPatches} L.~Shi, J.~Wang, and H.~Pottmann. \newblock Smooth surfaces from rational bilinear patches. \newblock {\em Comput. Aided Geom. Design}, 31(1):1--12, 2014.

\bibitem{SrinivasKumarDutta:1996:SurfaceDesignUsingCyclidePatches} Y.~Srinivas, V.~Kumar, and D.~Dutta. \newblock Surface design using cyclide patches. \newblock {\em Computer-Aided Design}, 28(4):263--276, 1996.

\bibitem{tang-2014-ff} C.~Tang, X.~Sun, A.~Gomes, J.~Wallner, and H.~Pottmann. \newblock Form-finding with polyhedral meshes made simple. \newblock {\em ACM Trans. Graphics}, 33(4), 2014. \newblock {P}roc. SIGGRAPH.

\bibitem{WangJiangBompasWallnerPottmann-2013-DiscreteLineConSGP} J.~Wang, C.~Jiang, P.~Bompas, J.~Wallner, and H.~Pottmann. \newblock Discrete line congruences for shading and lighting. \newblock {\em Computer Graphics Forum}, 32/5, 2013.

\end{thebibliography}

\end{document}
\begin{document} \title[Torsors over Laurent polynomial rings]{A classification of torsors over Laurent polynomial rings} \author{V. Chernousov} \address{Department of Mathematics, University of Alberta, Edmonton, Alberta T6G 2G1, Canada} \thanks{V. Chernousov was partially supported by the Canada Research Chairs Program and an NSERC research grant} \email{[email protected]} \author{P. Gille} \address{UMR 5208 Institut Camille Jordan - Universit\'e Claude Bernard Lyon 1, 43 boulevard du 11 novembre 1918, 69622 Villeurbanne cedex, France \newline Institute of Mathematics Simion Stoilow of the Romanian Academy, Calea Grivitei 21, RO-010702 Bucharest, Romania.} \thanks{P. Gille was supported by the Romanian IDEI project PCE$_{-}$2012-4-364 of the Ministry of National Education CNCS-UEFISCIDI} \email{[email protected]} \author{A. Pianzola} \address{Department of Mathematics, University of Alberta, Edmonton, Alberta T6G 2G1, Canada. \newline \indent Centro de Altos Estudios en Ciencias Exactas, Avenida de Mayo 866, (1084) Buenos Aires, Argentina.} \thanks{A. Pianzola wishes to thank NSERC and CONICET for their continuous support} \email{[email protected]}

\begin{abstract} Let $R_n$ be the ring of Laurent polynomials in $n$ variables over a field $k$ of characteristic zero and let $K_n$ be its fraction field. Given a linear algebraic $k$--group $G$, we show that a $K_n$-torsor under $G$ which is unramified with respect to $X={\rm Spec}(R_n)$ extends to a unique toral $R_n$--torsor under $G$. This result, in turn, allows us to classify all $G$-torsors over $R_n$. \noindent {\em Keywords:} Reductive group scheme, torsor, multiloop algebra. \\ \noindent {\em MSC 2000:} 14F20, 20G15, 17B67, 11E72. \end{abstract} \maketitle {\small \tableofcontents }

\section{Introduction} Torsors are the algebraic analogues of the principal homogeneous spaces that one encounters in the theory of Lie groups. While the latter are (under natural assumptions) locally trivial (in the usual topology), this is not the case for the former: a torsor $E \to X$ under a group scheme $\mathfrak G$ over $X$ is not necessarily trivial (i.e., isomorphic to $\mathfrak G$ with $\mathfrak G$ acting on itself by right multiplication) when restricted to any non-empty Zariski open subset $U$ of $X.$ The reason for this is that the Zariski topology is too coarse. The path out of this serious obstacle was initiated by J.-P. Serre (with what is now called the finite \'etale topology) and then implemented in full generality (together with an accompanying descent theory) by A. Grothendieck. The idea is to have certain morphisms $U \to X$ (e.g. \'etale or flat and of finite presentation) replace open immersions as trivializing local data. Torsors have played an important role in number theory (Brauer groups, Tate-Shafarevich group, Manin obstructions) and in the Langlands program (Ngo's proof of the Fundamental Lemma). Somewhat surprisingly, torsors have been used over the last decade to solve difficult problems in infinite dimensional Lie theory (see \cite{GP3} for an extensive list of references; see also \cite{CGP2} and \cite{KLP}). The way that torsors arise in this context is the following. The infinite dimensional object $\mathcal{L}$ under consideration (for example, the centreless core of an extended affine Lie algebra [EALA] or a Lie superconformal algebra) has an invariant called the centroid (essentially the linear endomorphisms of the object that commute with the multiplication).
These centroids are Laurent polynomial rings $k[t_1^{\pm 1},\dots, t_n^{\pm 1}]$ in finitely many variables over a base field $k$ of characteristic $0.$ We will denote this ring by $R_n$, or simply by $R$ if no confusion is possible. The object $\mathcal{L}$ is naturally an $R$-module, and its algebraic structure is compatible with this $R$-module structure (for example $\mathcal{L}$ is a Lie algebra over $R$). It is when $\mathcal{L}$ is viewed as an object over $R$ that torsors enter into the picture. One can for example classify up to $R$-isomorphism the objects under consideration using non-abelian \'etale cohomology. Of course at the end of the day one wants to understand the problem under consideration (say a classification) over $k$ and not $R$. Thankfully there is a beautiful theory, known as the ``centroid trick'', that allows this passage. The above discussion motivates why one is interested in the classification of torsors over $R$ under a smooth reductive $R$-group scheme. We believe that the understanding of such torsors is of interest in its own right. This is the purpose of the present work. There is an important class of torsors under $\mathfrak G$ called {\it loop torsors} that appear naturally in infinite dimensional Lie theory. (Loop torsors are defined over an arbitrary base in \cite{GP3}. They have already caught the attention of researchers in other areas. See for example \cite{PZ}.) It is shown in \cite{GP3} that if $E$ is a torsor over $R$ under $\mathfrak G$, then $E$ being a loop torsor is equivalent to $E$ being {\it toral}, i.e. that the twisted $R$-group scheme $^{E}\mathfrak G$ admits a maximal torus. This is a remarkable property of the ring $R$: loop and toral torsors coincide. Toral torsors under reductive group schemes were completely classified in our paper \cite{CGP2} with the use of Bruhat--Tits theory of buildings (see the Acyclicity Theorem~\ref{acyclicity}). We will use toral torsors in what follows (but the reader is asked to keep in mind that these are precisely the torsors that arise in infinite dimensional Lie theory). Given a smooth reductive group scheme $\mathfrak G$ over $R_n$ we want to classify/describe all the isomorphism classes of $R_n$-torsors under $\mathfrak G$. Since $\mathfrak G$ is smooth reductive they are in natural one-to-one correspondence with elements of the pointed set $H^1_{\acute et}(R_n,\mathfrak G).$ We have of course by definition a natural inclusion $$H^1_{\acute et, toral}(R_n, \mathfrak G) \, \subseteq \, H^1_{\acute et} (R_n, \mathfrak G)$$ where $H^1_{\acute et, toral}(R_n, \mathfrak G)$ is the subset consisting of (isomorphism) classes of toral $\mathfrak G$-torsors. From now on by default our topology will be \'etale; in particular we will denote $H^1_{\acute et}$ by $H^1$ and $H^1_{\acute et,toral}$ by $H^1_{toral}$. One of our main results is the following. \begin{theorem}\label{main1} Under the above notation there is a natural bijection $$ H^1(R_n, \mathfrak G) \longleftrightarrow \bigsqcup_{[E] \in H^1_{toral}(R_n, \mathfrak G) } \, H^1_{Zar}(R_n, {^{E}\mathfrak G}).$$ \end{theorem} The content of the above bijection can be put in words as follows. Given a torsor $E'$ over $R_n$ under $\mathfrak G$ there exists a {\it unique} toral torsor $E$ such that $E'$ is locally isomorphic (in the Zariski topology) to $E.$ The proof of this result is achieved by a careful analysis (of independent interest) of the ramification of the torsors under consideration.
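As an elementary sanity check of Theorem~\ref{main1} (an illustration added here for orientation only, not used in what follows, and written with the ad hoc notation $\mathbb{G}_m$ for the multiplicative group scheme), consider $\mathfrak G = \mathbb{G}_{m,R_n}$. Every torsor under $\mathbb{G}_m$ is toral, since the group is its own maximal torus, and twisting does not change a commutative group scheme, so the right hand side of Theorem~\ref{main1} reads
$$
\bigsqcup_{[E] \in H^1_{toral}(R_n, \mathbb{G}_m)} H^1_{Zar}(R_n, \mathbb{G}_m), \qquad H^1_{Zar}(R_n, \mathbb{G}_m) = \operatorname{Pic}(R_n) = 0,
$$
the Picard group vanishing because $R_n$, a localization of a polynomial ring, is a unique factorization domain. Since also $H^1(R_n, \mathbb{G}_m) = \operatorname{Pic}(R_n) = 0$, both sides collapse to the class of the trivial torsor. The decomposition only acquires content for non-commutative $\mathfrak G$, for instance for the orthogonal and projective linear groups treated in Section~\ref{section_applications}.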
We denote by $K_n=k(t_1,\dots,t_n)$ the fraction field of $R_n$ and set $F_n=k((t_1))\dots ((t_n))$. The precise statement of our other main result is the following. \begin{theorem}\label{main} Let $\mathfrak G$ be a smooth affine $R_n$--group scheme. Assume that either {\rm (i)} $\mathfrak G$ is reductive and admits a maximal $R_n$-torus (equivalently $\mathfrak G$ is ``loop reductive'' \cite[cor. 6.3]{GP3}); \noindent or {\rm (ii)} there exists a linear (smooth, not necessarily connected) algebraic group $\text{\rm \bf G}$ over $k$ and a loop torsor $E$ under $\text{\rm \bf G} \times_k R_n$ such that $\mathfrak G= {^{E}(\text{\rm \bf G} \times_{k} R_n)}$. \noindent Then we have natural bijections $$ H^1_{ toral }(R_n , \mathfrak G) \buildrel \sim \over \longrightarrow H^1(K_n , \mathfrak G)_{R_n-unr} \buildrel \sim \over \longrightarrow H^1(F_n, \mathfrak G) $$ where $H^1(K_n , \mathfrak G)_{R_n-unr}$ stands for the subset of the Galois cohomology set $ H^1(K_n , \mathfrak G)$ consisting of (isomorphism) classes of $\mathfrak G$-torsors over $K_n$ which extend everywhere in codimension one (see \S \ref{sect_def_unr}). \end{theorem} We need to explain briefly the assumptions. In both cases (i) and (ii) we consider loop (=toral) group schemes because they play a central role in the classification of $R_n$-torsors (see Theorem~\ref{main1}). Also, even though reductive group schemes are the main interest in this paper, in case (ii) we include group schemes over $R_n$ which are not necessarily ``connected''. Far from being a trivial generalization, this case is absolutely essential for applications to infinite dimensional Lie theory. Indeed one is forced to understand twisted forms of $R_n$-Lie algebras $\mathfrak{g} \otimes_k R_n$, where $\mathfrak{g}$ is a split finite dimensional simple Lie algebra over $k$. This leads to torsors under the group $\mathbf G \times_k R_n$ where $\mathbf G$ is the linear algebraic $k$-group ${\rm Aut}(\mathfrak{g}).$ Many interesting infinite dimensional Lie objects over $k$, including extended affine Lie algebras (a particular case of which are the celebrated affine Kac--Moody Lie algebras) and Lie superconformal algebras, fall under the above considerations. Note that the special case $\mathbf G={\operatorname{PGL}}_d$ (i.e. $R_n$--Azumaya algebras) was already well understood by Brauer group techniques when the base field is algebraically closed \cite[\S 4.4]{GP2}. Note also that Theorem~\ref{main} refines our acyclicity theorem, i.e. the bijection $H^1_{toral}(R_n, \mathfrak G) \buildrel \sim \over \longrightarrow H^1(F_n, \mathfrak G)$ (\cite[th. 14.1]{CGP2} in case (i), resp. \cite[th. 8.1]{GP3} in case (ii)). The structure of the paper is as follows. In Section \ref{section_sorites}, we establish useful generalities about unramified functors. Section \ref{section_cohomology} discusses unramified non-abelian cohomology. In Section \ref{section_proof}, we prove Theorem~\ref{main}. Section \ref{section_applications} is devoted to applications including a disjoint union decomposition for the set $H^1(R_n , \mathfrak G)$ (Theorem~\ref{main1}). Two important particular cases are considered in detail: the cases of orthogonal groups and projective linear groups. This illustrates that our main result, which may look rather abstract and remote in appearance, can lead to new, very concrete classifications/descriptions of familiar objects. \section{Unramified functors}\label{section_sorites} We follow essentially the setting of \cite{CT}.
Let $S$ be a scheme. If $X$ is an integral $S$-scheme we denote by $\kappa(X)$ the fraction field of $X$. Let $\mathcal F$ be an $S$-functor, that is, a contravariant functor $X \mapsto \mathcal F(X)$ from the category of $S$--schemes into the category of sets. If $X$ is integral and normal, one defines the following two subsets of $\mathcal F(\kappa(X))$: $$ \mathcal F(\kappa(X))_{X-loc}:= \bigcap\limits_{ x \in X} \, \mathrm{Im}\Bigl( \mathcal F(O_{X,x}) \to \mathcal F(\kappa(X)) \Bigr) $$ and $$ \mathcal F(\kappa(X))_{X-unr}:= \bigcap\limits_{ x \in X^{(1)}} \, \mathrm{Im}\Bigl( \mathcal F(O_{X,x}) \to \mathcal F(\kappa(X)) \Bigr). $$ The first subset $\mathcal F(\kappa(X))_{X-loc}$ is called the subset of {\it local classes} with respect to $X$ and the second one $\mathcal F(\kappa(X))_{X-unr}$ is called the subset of {\it unramified classes} with respect to $X$. Obviously, we have the inclusions $$ \mathcal F(\kappa(X))_{X-loc} \, \subseteq \, \mathcal F(\kappa(X))_{X-unr} \, \subseteq \, \mathcal F(\kappa(X)). $$ \begin{lemma}\label{lem_funct} Let $Y$ be an integral normal scheme over $S$ and let $f:Y \to X$ be a dominant morphism of $S$-schemes. Consider the map $\mathcal F(f^*): \mathcal F(\kappa(X)) \to \mathcal F(\kappa(Y))$ induced by the comorphism $f^*=f^*_{\kappa(X)}:\kappa(X)\to\kappa(Y)$. Then: \noindent (1) $\mathcal F(f^*)(\mathcal F(\kappa(X))_{X-loc}) \, \subseteq \, \mathcal F(\kappa(Y))_{Y-loc}$. \noindent (2) If $f$ is flat then $\mathcal F(f^*)(\mathcal F(\kappa(X))_{X-unr}) \, \subseteq \, \mathcal F(\kappa(Y))_{Y-unr}$. \end{lemma} \begin{proof} (1) The comorphism $f^*$ allows us to view $\kappa(X)$ as a subfield $\kappa(X) \hookrightarrow \kappa(Y)$ of the field $\kappa(Y)$. Let $\gamma \in \mathcal F(\kappa(X))_{X-loc}$. We want to show that its image $\gamma_{\kappa(Y)}:=\mathcal F(f^*)(\gamma) \in \mathcal F(\kappa(Y))$ under the base change is local with respect to $Y$. Let $y \in Y$ and put $x=f(y)$. The commutative square $$ \begin{CD} \kappa(X) & \enskip \subset \enskip & \kappa(Y) \\ \cup && \cup \\ O_{X,x}& \enskip \subset \enskip & O_{Y,y} \end{CD} $$ induces a commutative diagram $$ \xymatrix{ \mathcal F( \kappa(X)) \ar[r] & \mathcal F( \kappa(Y)) \\ \mathcal F( O_{X,x}) \ar[r] \ar[u]& \mathcal F( O_{Y,y} ).\ar[u] } $$ Since $\gamma \in \mathrm{Im}\bigl( \mathcal F( O_{X,x}) \to \mathcal F( \kappa(X)) \bigr)$, it follows that $\gamma_{\kappa(Y)}$ is contained in $\mathrm{Im}\bigl( \mathcal F( O_{Y,y}) \to \mathcal F( \kappa(Y))\bigr)$. Thus $\gamma_{\kappa(Y)} \in \mathcal F( \kappa(Y))_{Y-loc}$. \noindent (2) Assume now that $\gamma \in \mathcal F( \kappa(X))_{X-unr}$ and let $y \in Y^{(1)}$. As above, we set $x=f(y)$. Without loss of generality we may assume that $$X=\operatorname{Spec}(A)= \operatorname{Spec}(O_{X,x})$$ and that $$Y=\operatorname{Spec}(B)=\operatorname{Spec}(O_{Y,y})$$ where $B$ is a DVR. Let $v$ be the discrete valuation on $\kappa(Y)$ corresponding to the valuation ring $B$. For brevity we denote by $K$ (resp. $L$) the fraction field of $A$ (resp. $B$). If $v(K^\times)=0$, then $K \subset B$ and therefore $$\gamma_{\kappa(Y)} \in \mathrm{Im}\bigl( \mathcal F( O_{Y,y}) \to \mathcal F( \kappa(Y)) \bigr).$$ Assume now that $v(K^\times) \not = 0$. Then ${\mathfrak m}_A B \not = B$ so that $A \to B$ is a local morphism. By \cite[0.6.6.2]{EGAI}, since $B$ is flat over $A$ the ring $B$ is a faithfully flat $A$--module.
It follows that $A=B \cap K$ (apply \cite[\S\,I.3, \S 5, prop. 10]{BAC} with $F=K$ and $F'=A$. An alternative proof can be given by appealing to \cite[2.1.13]{EGAIV}). Let $A_v= \{ x \in K \, \mid \, v(x) \geq 0\}$ be the valuation ring of $v_{\mid K}$. Then $A_v= K \cap B=A$, so that $A$ is a DVR. This implies that $\gamma \in \mathrm{Im}\bigl( \mathcal F( O_{X,x}) \to \mathcal F( \kappa(X)) \bigr)$, and the commutative diagram above yields that $\gamma_{\kappa(Y)} \in \mathrm{Im}\bigl( \mathcal F( O_{Y,y}) \to \mathcal F( \kappa(Y)) \bigr)$. Thus $\gamma_{\kappa(Y)} \in \mathcal F( \kappa(Y))_{Y-unr}$. \end{proof} We shall discuss next the case of non-abelian cohomology functors, but we remark that these techniques and considerations can be applied to various other interesting functors such as Brauer groups, Witt groups, and unramified Galois cohomology. \section{Non-abelian cohomology}\label{section_cohomology} \subsection{Some terminology}\label{subsec_terminology} Let $X$ be a scheme and let $\mathfrak G$ be an $X$--group scheme. The pointed set of non-abelian \v Cech cohomology on the flat (resp. \'etale, Zariski) site of $X$ with coefficients in $\mathfrak G$ is denoted by $H_{fppf}^1(X, \mathfrak G)$ (resp. $H_{\acute et}^1(X, \mathfrak G)$, $H_{Zar}^1(X, \mathfrak G)$). These pointed sets measure the isomorphism classes of sheaf torsors over $X$ under $\mathfrak G$ with respect to the chosen topology (see \cite[Ch.\,IV, \S1]{M} and \cite{DG} for basic definitions and references). If $X = \operatorname{Spec}(R),$ following customary usage and depending on the context, we also use the notation $H_{fppf}^1(R, \mathfrak G)$ instead of $H_{fppf}^1(X, \mathfrak G)$. Similarly for the \'etale and Zariski sites. If $\mathfrak G$ is in addition affine, by faithfully flat descent all of our sheaf torsors are representable. They are thus {\it torsors} in the usual sense. For a $\mathfrak G$-torsor $E$ we denote by ${^E\mathfrak G}$ the twisted form of $\mathfrak G$ by inner automorphisms; it is an affine group scheme over $X$. Furthermore, if $\mathfrak G$ is smooth all torsors are locally trivial for the \'etale topology. In particular, $H_{\acute et}^1(X, \mathfrak G) =H_{fppf}^1(X, \mathfrak G).$ These assumptions on $\mathfrak G$ hold in most of the situations that arise in our work. Also, as we mentioned in the introduction, by default our topology will be \'etale so that instead of $H^1_{\acute et}(X,\mathfrak G)$ we will write $H^1(X,\mathfrak G)$. Given an $X$--group $\mathfrak G$ and a morphism $Y \rightarrow X$ of schemes, we let $\mathfrak G_Y$ denote the $Y$--group $\mathfrak G \times_X Y$ obtained by base change. For convenience, we will denote $H^1(Y, \mathfrak G_Y)$ by $H^1(Y, \mathfrak G)$. Assuming $\mathfrak G$ affine of finite type, a \emph{maximal torus} $\mathfrak T$ of $\mathfrak G$ is a subgroup $X$-scheme $\mathfrak T$ of $\mathfrak G$ such that $\mathfrak T \times_X \overline{\kappa(x)}$ is a maximal $\overline{\kappa(x)}$--torus of $\mathfrak G \times_X \overline{ \kappa(x)}$ for each point $x \in X$ \cite[XII.1]{SGA3}. Here $\overline{\kappa(x)}$ denotes an algebraic closure of $\kappa(x)$. A $\mathfrak G$--torsor $E$ is \emph{toral} if the twisted group scheme $^E \mathfrak G$ admits a maximal $X$-torus. We denote by $H_{fppf, toral}^1(X, \mathfrak G)$ the subset of $H_{fppf}^1(X, \mathfrak G)$ consisting of (isomorphism) classes of toral $X$--torsors under $\mathfrak G$.
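Before proceeding, we record one concrete example of these notions in the smallest case $n=1$; this illustration is ours (the notation $R_1'$ and $s$ is ad hoc) and is not used in the arguments below. Since $\mathrm{char}(k)=0$, the Kummer map
$$
R_1 = k[t^{\pm 1}] \;\hookrightarrow\; R_1' = k[s^{\pm 1}], \qquad t \mapsto s^2,
$$
is finite \'etale of degree $2$, and $\mathbb{Z}/2\mathbb{Z}$ acts on $R_1'$ by $s \mapsto -s$ with ring of invariants $R_1$. Hence $\operatorname{Spec}(R_1') \to \operatorname{Spec}(R_1)$ is a torsor under the constant group scheme $\mathbb{Z}/2\mathbb{Z}$, with a class in $H^1(R_1, \mathbb{Z}/2\mathbb{Z})$ that is trivialized by the covering itself. Coverings of this kind are typical of those that split loop torsors: a twisted form of $\mathfrak{g} \otimes_k R_1$ that becomes trivial over $R_1'$ is governed, by Galois descent, by a cocycle of $\mathbb{Z}/2\mathbb{Z}$ with values in ${\rm Aut}(\mathfrak{g})(R_1')$.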
\subsection{Torsion bijection}\label{torsion} If $E$ is an $X$--torsor under $\mathfrak G$ (not necessarily toral), according to \cite[III.2.6.3.1]{Gi} there exists a natural bijection $$ \tau_E: H_{fppf}^1(X, {^E\mathfrak G}) \to H_{fppf}^1(X, \mathfrak G),$$ called the torsion bijection, which takes the class of the trivial torsor under $^E\mathfrak G$ to the class of $E$. It is easy to see that its restriction to classes of toral torsors induces a bijection $$H_{fppf, toral }^1(X, {^E\mathfrak G}) \to H_{fppf,toral}^1(X, \mathfrak G).$$ \subsection{Acyclicity Theorem} The following theorem is the main tool for proving our main results. \begin{theorem}\label{acyclicity} Let $\mathfrak G$ be a smooth affine $R_n$--group scheme. Assume that either {\rm (i)} $\mathfrak G$ is reductive and admits a maximal $R_n$-torus (equivalently $\mathfrak G$ is ``loop reductive'' \cite[cor. 6.3]{GP3}); \noindent or {\rm (ii)} there exists a linear (smooth, not necessarily connected) algebraic group $\text{\rm \bf G}$ over $k$ and a loop torsor $E$ under $\text{\rm \bf G} \times_k R_n$ such that $\mathfrak G= {^{E}(\text{\rm \bf G} \times_{k} R_n)}$. \noindent Then the natural map $$ H^1_{ toral }(R_n , \mathfrak G) \longrightarrow H^1(F_n, \mathfrak G) $$ is bijective. \end{theorem} \begin{proof} See \cite[th. 14.1]{CGP2} in case (i) and \cite[th. 8.1]{GP3} in case (ii). \end{proof} \subsection{Grothendieck--Serre's conjecture}\label{sect_gr} The following conjecture is due to Grothendieck and Serre (\cite[Remarque 3]{Se1}, \cite[Remarque 1.1.a]{Gr}). \begin{conjecture}\label{conjlocal} Let $R$ be a regular local ring with fraction field $K$. If ${\mathfrak G}$ is a reductive group scheme over $R$, then the natural map $H^1(R, {\mathfrak G}) \to H^1(K, {\mathfrak G})$ has trivial kernel. \end{conjecture} If $R$ contains an infinite field $k$ (of any characteristic), the conjecture has been proven by Fedorov and Panin \cite{FP,PSV}. The case ${\mathfrak G} = \mathbf G \times_k R$ for some reductive $k$-group $\mathbf G$, the so-called ``constant'' case, was established earlier by Colliot-Th\'el\`ene and Ojanguren \cite{CTO}. For our considerations we need a similar result for group schemes which are not necessarily ``connected''. \begin{lemma}\label{devissage} Let $R$ be a regular local ring with fraction field $K$. Let ${\mathfrak G}$ be an affine smooth group scheme over $R$ which is an extension of a finite twisted constant group scheme $\text{\rm \bf F}$ over $R$ by a reductive group scheme ${\mathfrak G}^0$ over $R$. Assume that Grothendieck--Serre's conjecture holds for ${\mathfrak G}^0$. Then the map $H^1(R, {\mathfrak G}) \to H^1(K, {\mathfrak G})$ has trivial kernel. \end{lemma} \begin{proof} We consider the commutative exact diagram of pointed sets $$ \begin{CD} \text{\rm \bf F}(R) @>>> H^1(R, {\mathfrak G}^0) @>>> H^1(R, {\mathfrak G}) @>>> H^1(R,\text{\rm \bf F}) \\ @V{\wr}VV @V{\eta}VV @V{\nu}VV @V{\lambda}VV \\ \text{\rm \bf F}(K) @>>> H^1(K,{\mathfrak G}^0) @>>> H^1(K,{\mathfrak G}) @>>> H^1(K,\text{\rm \bf F}). \\ \end{CD} $$ Since $\lambda$ is injective, an easy diagram chase shows that ${\rm Ker}\,\eta$ surjects onto ${\rm Ker}\,\nu$. But by hypothesis ${\rm Ker}\,\eta$ vanishes, so the assertion follows. \end{proof} As a corollary of the Fedorov--Panin theorem we get the following facts. \begin{corollary}\label{cor_FP} Let $R$ be a regular local ring containing an infinite field, with fraction field $K$.
Let ${\mathfrak G}$ be an affine smooth group scheme over $R$ which is an extension of a finite twisted constant group scheme over $R$ by a reductive group scheme over $R$. Then the natural map $H^1(R, {\mathfrak G}) \to H^1(K, {\mathfrak G})$ has trivial kernel. \end{corollary} \begin{corollary}\label{zar} Let $X$ be an integral smooth affine variety over an infinite field with function field $K$. Let ${\mathfrak G}$ be an affine smooth group scheme over $X$ which is an extension of a finite twisted constant group scheme over $X$ by a reductive group scheme over $X$. Then the sequence of pointed sets $$ 1\longrightarrow H^1_{Zar}(X,\mathfrak G)\longrightarrow H^1(X,\mathfrak G)\longrightarrow H^1(K,\mathfrak G) $$ is exact. \end{corollary} \subsection{Rational torsors everywhere locally defined}\label{sect_def_unr} For a smooth affine group scheme $\mathfrak G$ over an integral normal scheme $X$, Colliot-Th\'el\`ene and Sansuc \cite[\S 6]{CTS} introduced the following sets: $$ D^{\mathfrak G}(X):={\rm Im}\Bigl( H^1(X,{\mathfrak G}) \to H^1( \kappa(X), {\mathfrak G}) \Bigr), $$ $$ H^1(\kappa(X),{\mathfrak G})_{X-loc} := \bigcap\limits_{ x \in X} \, D^{\mathfrak G}( {\mathcal O}_{X,x}) \enskip \subseteq \enskip H^1( \kappa(X), {\mathfrak G} ), $$ and $$ H^1(\kappa(X),{\mathfrak G})_{X-unr} := \bigcap\limits_{ x \in X^{(1)}} \, D^{\mathfrak G}( {\mathcal O}_{X,x}) \enskip \subseteq \enskip H^1( \kappa(X), {\mathfrak G} ). $$ Clearly, we have the inclusions $$D^{\mathfrak G}(X) \subseteq H^1(\kappa(X),{\mathfrak G})_{X-loc}\subseteq H^1(\kappa(X),{\mathfrak G})_{X-unr}.$$ In the terminology introduced in \S\,\ref{section_sorites}, the last two sets are nothing but the local and unramified subsets with respect to $X$ attached to the functor $\mathcal F$ given by $\mathcal F(Y)= H^1(Y, {\mathfrak G})$ for each $X$-scheme $Y$. Unramified classes have the following geometrical characterization. \begin{lemma}\label{lem_unr} Let $\gamma \in H^1(\kappa(X),{\mathfrak G})$. Then $\gamma \in H^1(\kappa(X),{\mathfrak G})_{X-unr}$ if and only if there exists an open subset $U$ of $X$ and a class $\widetilde \gamma \in H^1_{\acute et}(U, {\mathfrak G})$ such that {\rm (i)} $\gamma=(\widetilde \gamma)_{\kappa(X)}$; {\rm (ii)} $X^{(1)} \subset U$. \end{lemma} \begin{proof} In one direction the statement is obvious. The other one was treated in \cite[cor. A.8]{GP2}. \end{proof} A special case of Lemma \ref{lem_funct} is then the following. \begin{lemma}\label{lem_funct2} Let $f:Y \to X$ be a dominant morphism of integral and normal $S$-schemes. Let $$\mathcal F(f^*): H^1( \kappa(X), {\mathfrak G} ) \to H^1( \kappa(Y), {\mathfrak G}_Y )$$ be the map induced by the comorphism $f^*:\kappa(X)\to\kappa(Y)$. Then: \noindent {\rm (1)} $\mathcal F(f^*)(H^1(\kappa(X),\mathfrak G)_{X-loc}) \, \subseteq \, H^1(\kappa(Y),\mathfrak G)_{Y-loc}.$ \noindent {\rm (2)} If $f$ is flat then $$\mathcal F(f^*)(H^1(\kappa(X),\mathfrak G)_{X-unr}) \, \subseteq \, H^1(\kappa(Y),\mathfrak G)_{Y-unr}.$$ \end{lemma} \begin{remark} {\rm If $X$ is regular and the ``purity conjecture'' holds for $\mathfrak G$ and the local rings of $X$, i.e. $D^{\mathfrak G}(O_{X,x})= H^1(\kappa(X), {\mathfrak G})_{O_{X,x}-unr}$ for all points $x\in X$, then assertion (1) implies that (2) holds without the flatness assumption on $f$.
} \end{remark} We now combine earlier work of Colliot-Th\'el\`ene and Sansuc with Nisnevich's theorem~\cite{N} on the Grothendieck--Serre conjecture for reductive groups over a DVR. \begin{proposition}\label{prop_special} Assume that $X$ is a regular integral scheme and that ${\mathfrak G}$ is an extension of a finite twisted constant group scheme by a reductive group scheme ${\mathfrak G}^0$. Then \noindent {\rm (1)} $D^{\mathfrak G}$ defines a contravariant functor for the category of regular integral $X$-schemes. \noindent {\rm (2)} If $X=\operatorname{Spec}(k)$, then $H^1(\, \, ,{\mathfrak G})_{loc}$ defines a contravariant functor for the category of smooth integral $k$--varieties. \end{proposition} \begin{proof} Nisnevich's theorem states that if $A$ is a DVR and ${\bf G}$ is a reductive group over $A$ then the natural map $H^1(A,{\bf G})\to H^1(K,{\bf G})$, where $K$ is the fraction field of $A$, is injective. By Lemma \ref{devissage}, it holds more generally for a group $\text{\rm \bf G}$ which is an extension of a finite twisted constant group by a reductive group. In particular, if $\gamma_1,\gamma_2 \in H^1(A,{\bf G})$ have the same image in $H^1(K,{\bf G})$, they have the same specialization modulo the maximal ideal of $A$. By \cite[6.6.1]{CTS}, this specialization property holds more generally over an arbitrary valuation ring. Then the assertions follow from \cite[6.6.3]{CTS} and \cite[proposition 2.1.10]{CT}. \end{proof} \section{Proof of Theorem~\ref{main}}\label{section_proof} Let $\mathfrak G$ be a smooth affine group scheme over $R_n$ satisfying condition (i) or (ii) in Theorem \ref{main}. Clearly, $$ {\rm Im}\,[H^1( R_n,\mathfrak G) \to H^1(K_n,\mathfrak G)] \, \subset \, H^1(K_n,\mathfrak G)_{R_n-unr}, $$ so that we have the factorization $$ \leqno{\bf (*)} \qquad \quad H^1_{ toral}( R_n,\mathfrak G) \stackrel{\phi}{\longrightarrow} H^1(K_n,\mathfrak G)_{R_n-unr} \stackrel{\psi}{\longrightarrow} H^1(F_n,\mathfrak G). $$ The Acyclicity Theorem~\ref{acyclicity} states that the composite map $\psi \circ \phi$ is bijective; in particular, $\phi$ is injective and $\psi$ is surjective. \begin{lemma}\label{equivalence} The following are equivalent. \noindent {\rm (i)} $\phi$ is bijective. \noindent {\rm (ii)} The map $$ ^E\psi:H^1(K_n, {^E\mathfrak G})_{R_n-unr} \longrightarrow H^1(F_n,{^E\mathfrak G}) $$ has trivial kernel for all toral $R_n$--torsors $E$ under $\mathfrak G$. \end{lemma} \begin{proof} $(i) \Longrightarrow (ii)$: Assume that $\phi$ is bijective. The above factorization $(*)$ and the bijectivity of $\psi\circ\phi$ yield that we have bijections $$H^1_{ toral}( R_n,\mathfrak G) \buildrel \sim \over \longrightarrow H^1(K_n,\mathfrak G)_{R_n-unr} \buildrel \sim \over \longrightarrow H^1(F_n,\mathfrak G).$$ Let now $E$ be a toral $R_n$--torsor under $\mathfrak G$. Recall that the torsion bijection map $$ \tau_E: H^1( R_n,{^E\mathfrak G}) \buildrel \sim \over \longrightarrow H^1( R_n,\mathfrak G)$$ induces a bijection $$ H^1_{ toral}( R_n,{^E\mathfrak G}) \buildrel \sim \over \longrightarrow H^1_{ toral}( R_n,\mathfrak G)$$ (see \S\,\ref{torsion}).
The commutative diagram of torsion bijections $$ \xymatrix{ H^1_{ toral}( R_n,\mathfrak G) \ar[r]^{\sim} & H^1(K_n,\mathfrak G)_{R_n-unr} \ar[r]^{\sim} & H^1(F_n,\mathfrak G) \\ H^1_{ toral}( R_n,{^E\mathfrak G}) \ar[r] \ar[u]^{\wr} & H^1(K_n,{^E\mathfrak G})_{R_n-unr} \ar[r]^{^E\psi} \ar[u]^{\wr} & H^1(F_n,{^E\mathfrak G}) \ar[u]^{\wr} \\ } $$ shows that $^E\psi$ is bijective, so a fortiori has trivial kernel. \noindent $(ii) \Longrightarrow (i)$: We have noticed above that $\phi$ is injective, hence it remains to prove its surjectivity only. Let $[E']\in H^1(K_n,\mathfrak G)_{R_n-unr}$. Since $\psi\circ\phi$ is bijective there exists a class $[E]\in H^1(R_n,\mathfrak G)$ such that $\psi([E'])=\psi(\phi([E]))$. It follows from the above commutative diagram that under the torsion bijection $H^1(K_n,{^E\mathfrak G})_{R_n-unr}\mathbf to H^1(K,\mathfrak G)_{R_n-unr}$ the class $[E']$ corresponds to an element in $H^1(K_n,{^E\mathfrak G})_{R_n-unr}$ lying in the kernel of $^E\psi$. Since by our hypothesis ${\rm Ker}({^E\psi})=1$, this implies that the class $[E']$ corresponds to the trivial one in $H^1(K_n,{^E\mathfrak G})_{R_n-unr}$ or equivalently $\phi([E])=[E']$. \end{proof} Note that hypothesis (i) and (ii) in Theorem~\ref{main} are stable with respect to twisting by a toral $R_n$-torsor under $\mathfrak G$. Therefore the above lemma reduces the proof of Theorem~\ref{main} to showing that for all group schemes $\mathfrak H$ over $R_n$ satisfying conditions (i) or (ii) in Theorem~\ref{main} a natural map $$\psi: H^1(K_n, \mathfrak H)_{R_n-unr} \longrightarrow H^1(F_n,\mathfrak H)$$ has trivial kernel. To prove this fact we proceed by induction on $n\mathfrak geq 1$ by allowing the base field $k$ to vary. \noindent{$n=1$:} Since we are in dimension one, by Lemma \ref{lem_unr} the map $$H^1( R_1, \mathfrak H) \longrightarrow H^1( K_1, \mathfrak H)_{R_1-unr}$$ is onto. Therefore, Lemma~\ref{equivalence} applied to the group scheme $\mathfrak G=\mathfrak H$ and the trivial torsor $E=1$ shows that $\psi$ has trivial kernel. \noindent{$n\mathfrak geq 2$:} Consider the following field tower: $$K_n\subset F_{n-1}(t_n)\subset F_n.$$ Let $\mathfrak gamma\in {\rm Ker}(\psi)$ and let $\mathfrak gamma'$ be its image in $H^1( F_{n-1}(t_n), \mathfrak H)$. Since the morphism of affine schemes $\operatorname{Spec}( F_{n-1}[t_n^{\pm 1}]) \mathbf to \operatorname{Spec}(R_n)$ is flat and dominant, by Lemma \ref{lem_funct2}\,(2) we have $$ \mathfrak gamma' \in H^1\bigl( F_{n-1}(t_n), \mathfrak H\bigr)_{F_{n-1}[t_n^{\pm 1}]-unr} . $$ Since $F_n=F_{n-1}((t_n))$ and $\mathfrak gamma_{F_n}=1$, we then conclude that $$ \mathfrak gamma' \in {\rm Ker}\Bigl( H^1\bigl( F_{n-1}(t_n), \mathfrak H \bigr)_{F_{n-1}[t_n^{\pm 1}]-unr} \mathbf to H^1\bigl(F_{n-1}((t_n)),\mathfrak H \bigr) \Bigr). $$ But according to the case $n=1$ (applied to the base field $F_{n-1}$) the last kernel is trivial. Thus $\mathfrak gamma'=1$, i.e. $$ \leqno{(**)} \qquad \qquad \mathfrak gamma \in {\rm Ker}\Bigl( H^1\bigl( K_n, \mathfrak H)_{R_n-unr} \mathbf to H^1( F_{n-1}(t_n), \mathfrak H \bigr) \Bigr) . 
$$ Now, we observe that the field $F_{n-1}(t_n)= k((t_1))\dots ((t_{n-1}))(t_n)$ embeds into $k(t_n)((t_1))\dots((t_{n-1}))=k'((t_1))\dots((t_{n-1}))$ with $k'=k(t_n)$, so that we have a commutative diagram $$ \begin{CD} H^1\bigl( K_n, \mathfrak H \bigr)_{R_n-unr} @>>> H^1\bigl( F_{n-1}(t_n), \mathfrak H \bigr) \\ \cap && @VVV \\ H^1\bigl( k'(t_1,...,t_{n-1}), \mathfrak H \bigr)_{R_{n-1} \otimes_k k'-unr} @>>> H^1\bigl( k'((t_1))\dots ((t_{n-1})) , \mathfrak H \bigr) , \end{CD} $$ where the left vertical inclusion again is due to Lemma \ref{lem_funct2}\,(2). By the induction hypothesis applied to the base field $k'$, the bottom horizontal map has trivial kernel. Therefore the top horizontal map has trivial kernel as well. By $(**)$, this implies $\gamma=1$. \section{Applications}\label{section_applications} \subsection{A disjoint union decomposition for $R_n$--torsors} We shall now use Theorem~\ref{main} and the Fedorov--Panin theorem to prove Theorem~\ref{main1}. In fact we will prove a slightly more general result by allowing $\mathfrak G$ to be any group scheme satisfying the conditions of Theorem~\ref{main}. \begin{theorem}\label{cor_main} Let $\mathfrak G$ be as in Theorem \ref{main}. Then there is a natural bijection $$ \bigsqcup_{[E] \in H^1_{ toral}(R_n, \mathfrak G) } H^1_{Zar} (R_n, {^{E}\mathfrak G}) \enskip \buildrel{\Theta} \over \buildrel \sim \over \longrightarrow \enskip H^1(R_n, \mathfrak G). $$ \end{theorem} \begin{proof} Recall first that the torsion bijection $\tau_E$ (see \S\,\ref{torsion}) allows us to embed $$ H^1_{Zar} (R_n, {^{E}\mathfrak G}) \hookrightarrow H^1(R_n, {^{E}\mathfrak G}) \buildrel \tau_{E} \over \buildrel \sim \over \longrightarrow H^1(R_n, \mathfrak G), $$ and this, in turn, induces a natural map $$ \Theta: \bigsqcup_{[E] \in H^1_{toral}(R_n, \mathfrak G) } H^1_{Zar} (R_n, {^{E}\mathfrak G}) \enskip \to \enskip H^1_{\acute et}(R_n, \mathfrak G). $$ \noindent{\it Surjectivity of $\Theta$.} Let $[\gamma] \in H^1(R_n, \mathfrak G)$. Since the generic class $\gamma_{K_n} \in H^1(K_n, \mathfrak G)$ is $R_n$--unramified, by Theorem~\ref{main} there is a unique toral class $[E] \in H^1_{ toral}(R_n , \mathfrak G)$ such that $[E]_{K_n}=\gamma_{K_n}$. Consider the following commutative diagram $$ \begin{CD} & &H^1(R_n, \mathfrak G) @>>> H^1(K_n, \mathfrak G) \\ & &@A{\tau_{E}}A{\wr}A @A{\tau_{E} }A{\wr}A \\ H^1_{Zar}(R_n,{^E\mathfrak G}) & \,\,\stackrel{\iota}{\hookrightarrow}\,\, & H^1(R_n, {^{E}\mathfrak G}) @>>> H^1(K_n, {^{E}\mathfrak G}) \end{CD} $$ with exact bottom row (see Corollary~\ref{zar}). It follows from the diagram that $\tau_{E}^{-1}(\gamma) \in {\rm Im}\,\iota$. Hence there exists a unique class $\eta \in H^1_{Zar}(R_n, {^{E}\mathfrak G}) $ such that $\eta=\tau_{E}^{-1}(\gamma)$. By construction, $\Theta(\eta)=\gamma$. \noindent{\it Injectivity of $\Theta$.} Let $E,E'$ be two toral torsors under $\mathfrak G$ and let $$ \eta \in H^1_{Zar}(R_n, {^E\mathfrak G}),\ \ \eta' \in H^1_{Zar}(R_n, {^{E'}\mathfrak G})$$ be such that $\Theta(\eta)=\Theta(\eta')$, i.e. $\tau_{E}( \eta)= \tau_{E'}( \eta') \in H^1(R_n, \mathfrak G)$.
Since $\eta$ is locally trivial in the Zariski topology we conclude that $$\tau_{E}( \eta)_{K_n}= \tau_{E}( 1)_{K_n}=[E]_{K_n}$$ and similarly for $E'$. It follows that $$ [E]_{K_n}=[E']_{K_n} \in H^1(K_n, \mathfrak G)_{R_n-unr}$$ and therefore, by Theorem~\ref{main}, we have $[E]=[E']$. But $\tau_E=\tau_{E'}$ is bijective. Therefore, $\eta=\eta'$. \end{proof} \begin{example}{\rm Let $\text{\rm \bf G}$ be a reductive $k$--group with the property that all of its semisimple quotients are isotropic (for example, $k$-split). By a result of Raghunathan \cite[th. B]{R} one has $$ H^1_{Zar}( {\bf A}_k^n, \text{\rm \bf G})=1$$ for all $n \geq 0$. According to \cite[prop. 2.2]{GP2}, any Zariski locally trivial $\text{\rm \bf G}$-torsor over $R_n$ can be extended to a $\text{\rm \bf G}$-torsor over ${\bf A}_k^n$. Therefore $H^1_{Zar}(R_n, \text{\rm \bf G})=1$. This implies that the map $H^1(R_n,\text{\rm \bf G}) \to H^1(K_n,\text{\rm \bf G})$ has trivial kernel. In other words, rationally trivial $R_n$--torsors under $\text{\rm \bf G}$ are trivial. } \end{example} We expect that a similar result holds in a more general setting. \begin{conjecture} Let $\mathfrak G$ be a loop reductive group scheme over $R_n$. If all semisimple quotients of $\mathfrak G$ are isotropic, then $H^1_{Zar}(R_n,\mathfrak G)=1$. \end{conjecture} \begin{remark} Note that Artamonov's freeness result \cite{A} and Parimala's result \cite{P} for quadratic forms over $R_2$ are special cases of this conjecture. \end{remark} Theorem~\ref{cor_main} reduces the classification of all $\mathfrak G$-torsors to two steps: first describing all toral torsors, and then studying, for each toral twisted form, the torsors which are locally trivial in the Zariski topology. In the following two subsections we show how our theorem works in the particular cases of orthogonal groups and projective linear groups. \subsection{The case of orthogonal groups} Let $q_{split}$ be a split quadratic form over $k$ of dimension $d \geq 1$ and let ${\bf O}(q_{split})$ be the corresponding split orthogonal group. It is well-known that $H^1(R_n, {\bf O}(q_{split}))$ classifies non-singular quadratic $R_n$--forms of dimension $d$ \cite[III.5.2]{DG}. This allows us to identify classes of torsors under $\mathfrak G={\bf O}(q_{split,R_n})$ with classes of $d$-dimensional quadratic forms over the ring $R_n$. \begin{proposition} For each subset $I \subseteq \{1,...,n\}$, we put $t_I= \prod_{i \in I} t_i\in R_n^{\times}$ with the convention $t_\emptyset = 1$. \noindent {\rm (1)} Each class in $H^1_{toral}(R_n, {\bf O}(q_{split}))$ contains a unique $R_n$--quadratic form of the shape $$ \bigoplus\limits_{I \subseteq \{1,...,n\}} \, \langle t_I \rangle \otimes q_{I, R_n} $$ where all $q_I$'s with $I\not=\emptyset$ are non-singular anisotropic quadratic forms over $k$ such that $$d =\sum\limits_{I \subseteq \{1,...,n\}} \, \dim(q_I).$$ \noindent {\rm (2)} Let $q$ be a non--singular quadratic $R_n$--form of dimension $d$. Then there exists a unique quadratic $R_n$-form $q_{loop}$ as in $(1)$ such that $q$ is a Zariski $R_n$--form of $q_{loop}$. Furthermore, $q$ is isometric to $q_{loop}$ if and only if $q$ is diagonalizable. \end{proposition} \begin{proof} (1) By the Acyclicity Theorem~\ref{acyclicity} it suffices to compute $H^1(F_n,{\bf O}(q_{split}))$ or, equivalently, the isometry classes of $d$-dimensional quadratic forms over $F_n$. Let $q$ be such a quadratic form.
We want to show that it is as in (1). By the Witt theorem we may assume without loss of generality that $q$ is anisotropic. We proceed by induction on $n\mathfrak geq 0$. The case $n=0$ is obvious. Assume that $n\mathfrak geq 1$. Note that $F_n=F_{n-1}((t_n))$. Springer's decomposition \cite[\S 19]{EKM}, then shows that $q \cong q' \oplus \langle t_n \rangle \, q''$ where $q'$ and $q''$ are (unique) anisotropic quadratic forms over $F_{n-1}$. By induction on $n$, $q'$ and $q''$ are of the required form, hence we the assertion for $q$ follows. The unicity is clear. \noindent (2) The first assertion follows from Theorem~\ref{cor_main}. If $q$ is isometric to $q_{loop}$, then $q$ is diagonalizable since so is $q_{loop}$. Conversely, assume that $q$ is diagonalizable: $q=\langle b_1,\ldots,b_d\,\rangle$. Since $$ R_n^\mathbf times/ (R_n^\mathbf times)^2 \buildrel \sim \over \longrightarrow k^\mathbf times/ (k^\mathbf times)^2 \mathbf times \langle t_1^{\epsilon_1}\ldots t_n^{\epsilon_n}\ |\ \epsilon_1,\ldots,\epsilon_n=0,1\,\rangle. $$ all coefficients of $q$ are of the shape $b_i=a_i \, t_{I_i} $ with $I_i\subseteq \{1,\ldots,d\}$ and $a_i\in k^{\mathbf times}$. Then we can renumber $b_1,\ldots,b_d$ in such a way that $q$ is as in (1) and we are done. \end{proof} \begin{corollary} Toral classes in $H^1(R_n,{\bf O}(q_{split}))$ correspond to diagonalizable $R_n$-quadratic forms. \end{corollary} \subsection{The projective linear case} Let $\mathfrak G= {\operatorname{PGL}}_d$. The set $H^1(R_n, {\operatorname{PGL}}_d)$ classifies Azumaya $R_n$-algebras of degree $d$ \cite[prop. 2.5.3.13]{CF}. Recall also that if $\mathcalA$ is an Azumaya algebra over $R_n$ of degree $d,$ then there is a one-to-one correspondence between commutative \'etale $R_n$--subalgebras of $\mathcalA$ of dimension $d$ and maximal $R_n$--tori of the group scheme ${\operatorname{PGL}}_1(\mathcalA).$ (This is a general well-known fact \cite[XIV.3.21.(b)]{SGA3}. Indeed, if $S$ is a commutative \'etale $R_n$--subalgebra of $\mathcalA$ of degree $d$ then $\mathfrak T=R_{S/R_n}({\bf G}_m)/{\bf G}_m$ is a maximal $R_n$--torus of ${\operatorname{PGL}}_1(\mathcalA)$. Conversely, to a maximal $R_n$-subtorus $\mathfrak T$ of ${\operatorname{PGL}}_1(\mathcalA)$ one associates the $R_n$--subalgebra $\mathcalA^\mathfrak T$ of fixed points under the natural action of $\mathfrak T$. This subalgebra $\mathcalA^{\mathfrak T}$ has the required properties locally and hence globally). From the above discussion it follows that $H^1_{toral}(R_n,{\operatorname{PGL}}_d)$ consists of isomorphism classes of Azumaya $R_n$--algebras $\mathcalA$ of degree $d$ having \'etale commutative $R_n$--subalgebras of dimension $d$. We pass to description of locally trivial torsors under ${\operatorname{PGL}}_1(\mathcalA)$. \begin{lemma} Let $\mathcalA$ be an Azumaya algebra over $R_n$. Then the natural map $H^1_{Zar}(R_n, {\operatorname{GL}}_1(\mathcalA)) \mathbf to H^1_{Zar} (R_n, {\operatorname{PGL}}_1(\mathcalA))$ is bijective. 
\end{lemma} \begin{proof} The exact sequence $$ 1 \mathbf to {\rm G}_m \mathbf to {\operatorname{GL}}_1(\mathcalA) \mathbf to {\operatorname{PGL}}_1(\mathcalA) \mathbf to 1$$ gives rise to a commutative diagram with exact rows (of pointed sets) $$ \begin{CD} 1@>>> H^1(R_n, {\operatorname{GL}}_1(\mathcalA)) @>{\phi}>> H^1(R_n, {\operatorname{PGL}}_1(\mathcalA))@>> > \operatorname{Br}(R_n) \\ && @VVV @V{\psi_1}VV @V{\psi_2}VV\\ & & H^1(K_n,{\operatorname{GL}}_1(A))=1 @>>> H^1(K_n, {\operatorname{PGL}}_1(\mathcalA))@>>> \operatorname{Br}(K_n) . \end{CD} $$ Every class in $H^1(R_n,{\operatorname{GL}}_1(\mathcalA)$ is rationally trivial, hence by Fedorov--Panin's result \cite{FP,PSV} it is locally trivial in the Zariski topology. In other words $$H^1(R_n,{\operatorname{GL}}_1(\mathcalA))=H^1_{Zar}(R_n,{\operatorname{GL}}_1(\mathcalA)).$$ Clearly $$ \phi(H^1_{Zar}(R_n,{\operatorname{GL}}_1(\mathcalA))) \, \subseteq \, H^1_{Zar}(R_n,{\operatorname{PGL}}_1(\mathcalA)). $$ Conversely, let $\mathfrak gamma\in H^1_{Zar}(R_n,{\operatorname{PGL}}_1(\mathcalA))$. Since $\psi_1(\mathfrak gamma)=1$ and since $\psi_2$ is injective, we obtain $\mathfrak gamma\in {\rm Im}(\phi)$. Finally, it remains to note that the above diagram shows that $\phi$ has trivial kernel and this is true for all Azumaya algebras over $R_n$. Then the standard twisting argument enables us to conclude that $\phi$ is injective. \end{proof} Thus, the disjoint union decomposition of the set of isomorphism classes of torsors under ${\operatorname{PGL}}_d$ becomes $$ \bigsqcup_{[\mathcalA] \in H^1_{ toral}(R_n, {\operatorname{PGL}}_d) } H^1_{Zar} (R_n, {\operatorname{GL}}_1(\mathcalA)) \enskip \buildrel{\mathbf Theta} \over \buildrel \sim \over \longrightarrow \enskip H^1(R_n, {\operatorname{PGL}}_d). $$ In general we can't say much about the subset $H^1_{Zar} (R_n, {\operatorname{GL}}_1(\mathcalA))$. Recall only that it classifies right invertible $\mathcalA$-modules (so that the above decomposition is coherent with \cite[prop. 4.8]{GP2}). More precisely, if $\mathcalA$ is a toral Azumaya $R_n$--algebra and $\mathcalL$ is a right invertible $\mathcalA$-module, then the class $[\mathcalL]$ corresponds to the class of the Azumaya algebra $\mathrm{End}_{\mathcalA}(\mathcalL)$ under the map $\mathbf Theta$. However, when the base field $k$ is algebraically closed, toral Azumaya algebras over $R_n$ are easy to classify explicitly, and also much more information about Zariski trivial torsors is available due to Artamonov's freeness statements \cite{A}. More precisely, let $k$ be an algebraically closed field. Choose a coherent system of primitive roots of unity $(\zeta_n)_{n \mathfrak geq 1}$ in $k$. Given integers $r,s$ satisfying $1 \leq r \leq s$, we let $A(x,y)_r^s$ denote the Azumaya algebra of degree $s$ over the Laurent polynomial ring $k[x^{\pm 1}, y^{\pm 1}]$ defined by a presentation $$ X^s=x, \, Y^s= y, \, YX= \zeta_s^r \, XY. $$ \begin{lemma}\label{division} Let $s_1,\ldots,s_m,r_1,\ldots r_m$ be positive integers such that $(s_i,r_i)=1$ for all $i=1,\ldots,m$. Then the Azumaya algebra $$\mathcalA=A(t_1,t_2)^{s_1}_{r_1} \otimes A(t_3,t_4)^{s_2}_{r_2} \otimes \operatorname{cd}ots A(t_{2m-1},t_{2m})^{s_m}_{r_m}$$ is a division algebra. \end{lemma} \begin{proof} Indeed, using the residue method it is easy to see that it is a division algebra even over the field $F_{2m}=k((t_1))\ldots((t_{2m}))$. 
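For instance, here is one way to carry out the residue computation in the simplest case $m=1$, $s_1=2$, $r_1=1$ (a sketch only). The class of $A(t_1,t_2)^2_1\otimes F_2$ in $H^2(F_2,\mu_2)\cong {}_2\operatorname{Br}(F_2)$ is the symbol $(t_1)\cup(t_2)$, and the residue map attached to the $t_2$-adic valuation $$ \partial_{t_2}\colon H^2\bigl(k((t_1))((t_2)),\mu_2\bigr)\longrightarrow H^1\bigl(k((t_1)),\mathbb Z/2\mathbb Z\bigr)\cong k((t_1))^{\times}/(k((t_1))^{\times})^{2} $$ sends it to the class of $t_1$, which is nontrivial since $t_1$ has odd valuation in $k((t_1))$. Hence $A(t_1,t_2)^2_1\otimes F_2$ is not split and, being of degree $2$, it is division.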
\end{proof} We next recall that the group ${\operatorname{GL}}_d(\mathbb Z)$ acts in a natural way on the ring $R_n$, hence it acts on $H^1(R_n, {\operatorname{PGL}}_d)$ (for details see \cite[8.4]{GP3}). \begin{theorem} Assume that $k$ is algebraically closed. \noindent {\rm (1)} $H^1_{toral}(R_n, {\operatorname{PGL}}_d)$ consists of ${\operatorname{GL}}_d(\mathbb Z)$--orbits of classes of $R_n$--algebras of the following shape: $$ \mathcalA= M_{s_0}(R_n) \otimes_{R_n} A(t_1,t_2)^{s_1}_{r_1} \otimes A(t_3,t_4)^{s_2}_{r_2} \otimes \operatorname{cd}ots A(t_{2m-1},t_{2m})^{s_m}_{r_m} $$ where the integers $m$, $s_0, s_1,r_1,\dots,s_m, r_m$ satisfy the following conditions: {\rm (i)} $0 \leq 2m \leq n$; $1 \leq r_i \leq s_i$ and $(r_i,s_i)=1$ for all $i=1,...,m$; {\rm (ii)} $s_0 \mathfrak geq 1$ and $s_0 s_1 \dots s_m=d$. \noindent {\rm (2)} $\mathcalA \otimes_{R_n} F_n$ is division if and only if $s_0=1$. \noindent {\rm (3)} Let $\mathcalA$ be an Azumaya algebra as in (1). If $s_0\mathfrak geq 2$ we have $$H^1_{Zar} (R_n, {\operatorname{GL}}_1(\mathcalA))=1.$$ \end{theorem} \begin{proof}(1) If $\mathcalA \otimes_{R_n} F_n$ is division, this is \cite[th. 4.7]{GP3}. Assume now that $\mathcalA \otimes_{R_n} F_n$ is not division. By Wedderburn's theorem there exists an integer $s \mathfrak geq 1$ and a central division $F_n$--algebra $A'$ such that $\mathcalA \otimes_{R_n} F_n\cong M_s(A')$. The acyclicity theorem provides a toral $R_n$--Azumaya algebra $\mathcalA'$ such that $\mathcalA' \otimes_{R_n} F_n\cong A'$. Since $M_s(\mathcalA')$ and $\mathcalA$ are isomorphic over $F_n$, the acyclicity theorem again shows that $\mathcalA \cong M_s(\mathcalA')$. It remains to note that $\mathcalA'$ being a division algebra over $R_n$ is of the required form by the first case. \noindent (2) The assertion follows from Lemma~\ref{division}. \noindent (3) Write $\mathcalA= M_s( \mathcalA')$ with $\mathcalA'$ is division and $s \mathfrak geq 2$. Morita equivalence provides a one-to-one correspondence between invertible $\mathcalA$--modules and finitely generated projective $\mathcalA'$--modules of relative rank $s$. Artamonov's result~\cite{A} states that those $\mathcalA'$--modules are free \cite[cor. 3]{A} since $s\mathfrak geq 2$, so that invertible $\mathcalA$--modules are free as well. This implies $H^1_{Zar} (R_n, {\operatorname{GL}}_1(\mathcalA))=1$. \end{proof} \begin{remarks}{\rm \noindent (a) The third statement shows that for each invertible $\mathcalA$--module $P$ the module $P \oplus \mathcalA$ is free. \noindent (b) The third statement refines Steinmetz's results in the $2$-dimension case ($n=2$) \cite[th. 4.8]{St} where the case $s\mathfrak geq 3$ was considered only. Note that \cite{St} provides some other cases for classical groups when all Zariski locally trivial torsors are trivial. } \end{remarks} \subsection{Applications to $R_n$--Lie algebras} We next consider the special case $\text{\rm \bf G}=\operatorname{Aut}(\mathfrak gg)$ where $\mathfrak gg$ is a split simple Lie algebra over $k$ of finite dimension. For such group the set $H^1_{\acute et}(R_n, \text{\rm \bf G})$ classifies $R_n$-forms of the Lie algebra $\mathfrak gg \otimes_k R_n$ and $H^1_{ toral }(R_n, \text{\rm \bf G})$ classifies loop objects, i.e. those which arise from loop cocycles. 
More precisely, by \cite[\S 6]{GP3} we have $$ \mathrm{Im}\Bigl( H^1\bigl( \pi_1(R_n,1), \text{\rm \bf G}(k_s)\bigr) \to H^1(R_n, \text{\rm \bf G}) \Bigr) = H^1_{ toral }(R_n, \text{\rm \bf G}).$$ Theorems~\ref{main} and \ref{cor_main} have the following consequences. \begin{corollary} {\rm (1)} Let $\widetilde{\mathcal{L}}$ be a $K_n$--form of the Lie algebra $\mathfrak gg \otimes_k K_n$. If $\widetilde{\mathcal{L}}$ is unramified, i.e. extends everywhere in codimension one, then $\widetilde{\mathcal{L}}$ is isomorphic to the generic fiber of a unique multiloop Lie algebra $\mathcal{L}$. (Of course $\mathcal{L}$, being a multiloop algebra, is a twisted form of the $R_n$--Lie algebra $\mathfrak gg \otimes_k R_n$.) \noindent {\rm (2)} Let $\mathcal{L}$ be any $R_n$--form of $\mathfrak gg \otimes_k R_n$. Then there exists a (unique up to $R_n$--isomorphism) multiloop Lie algebra $\mathcal{L}_{loop}$ over $R_n$ such that $\mathcal{L}$ is a Zariski $R_n$--form of $\mathcal{L}_{loop}$. \end{corollary} \end{document}
\begin{document} \title[Lefschetz property]{The Lefschetz property for families of curves} \author{J\'anos Koll\'ar} \maketitle By the Lefschetz hyperplane theorem, if $X\subset \p^N$ is a smooth, projective variety and $C:=X\cap L$ is a positive dimensional intersection of $X$ with a linear subspace, then the natural map $$ \pi_1(C)\to \pi_1(X)\qtq{is surjective.} $$ The same conclusion holds if $X$ is quasi projective, but here $C$ has to be an intersection of $X$ with a linear subspace in {\em general position.} The aim of this note is to study families of curves $\{C_m:m\in M\}$ that satisfy this Lefschetz--type property. Such results were proved in the papers \cite{MR1786496, MR2011744}. Arithmetic applications are given in \cite{MR1786496, MR1745009, MR2019976} and \cite{k-hom} studies this question for homogeneous spaces. Related results are in \cite{MR2566998, MR2426347, MR2894633}. \begin{defn}\label{leff.prop.defn} A {\it family of schemes} over a normal variety $X$ over $\c$ is a diagram $$ M\stackrel{p}{\leftarrow} C_M\stackrel{u}{\to} X. \eqno{(\ref{leff.prop.defn}.1)} $$ Our main interest is in families where $M$ is irreducible and $p$ is flat with irreducible fibers. The family (\ref{leff.prop.defn}.1) satisfies the {\it Lefschetz property} if the following holds. \begin{enumerate}\setcounter{enumi}{1} \item[] For every Zariski open subset $\emptyset\neq X^0\subset X$ there is a Zariski open subset $\emptyset\neq M^0\subset M$ such that, for every $m\in M^0$, the induced map $$ u(X^0,m)_*:\pi_1\bigl(C_m\cap u^{-1}(X^0)\bigr)\to \pi_1(X^0)\qtq{is surjective.} $$ \end{enumerate} We say that (\ref{leff.prop.defn}.1) satisfies the {\it weak Lefschetz property} if there is a constant $N$ (independent of $X^0$) such that, for suitable choice of $M^0$, the image of $u(X^0,m)_* $ has index at most $N$ in $\pi_1(X^0) $. {\it Notes.} We ignore the base point since the surjectivity of the maps between the fundamental groups of connected schemes does not depend on the choice of a base point. The Lefschetz properties over arbitrary base fields are considered in (\ref{leff.prop.defn.gen}). \end{defn} Our main Theorem \ref{main.thm} is somewhat technical, though I believe it to be essentially optimal. The original arguments of \cite{MR1786496, MR2011744} need high degree {\em very free} rational curves. By contrast, the current proof frequently works for the lowest degree {\em free} curves. As an illustration, a simple yet nontrivial example is given by lines on hypersurfaces. \begin{cor} \label{lines.on.hyp.cor} Let $X\subset \p^n$ be a smooth hypersurface of degree $d$ over $\c$. The following are equivalent. \begin{enumerate} \item The family of lines has the Lefschetz property. \item The family of lines has the weak Lefschetz property. \item $d\leq n-2$ or $X$ is a line in $\p^2$. \end{enumerate} \end{cor} Let us start with some situations when the Lefschetz property fails. \begin{exmp} \label{3.key.exmps} Let $M\leftarrow C_M\stackrel{u}{\to} X$ be a flat, irreducible family of irreducible varieties. (\ref{3.key.exmps}.1) Assume that $u$ is not dominant. Then any $X^0\subset X\setminus \overline{u(C_M)}$ with infinite fundamental group shows that the weak Lefschetz property does not hold. (\ref{3.key.exmps}.2) Assume that there is an open subset $X^*$ and a dominant morphism to a positive dimensional variety $q:X^*\to Z^*$ such that every $X^*\cap C_m$ is contained in a fiber of $q$ for general $m\in M$. 
(We say that $X$ is {\it generically $C_M$-connected} if there is no such map $q:X^*\to Z^*$; see (\ref{chains.say.8}) for a better definition.) Let $Z^0\subset Z^*$ be an open subset and $X^0:=q^{-1}(Z^0)$. Then $$ \im\bigl[\pi_1\bigl(C_m\cap u^{-1}(X^0)\bigr)\to \pi_1(X^0)\bigr] \subset \ker\bigl[ \pi_1(X^0)\to \pi_1(Z^0)\bigr]. $$ If $Z^0$ has infinite fundamental group then the weak Lefschetz property fails. (\ref{3.key.exmps}.3) Assume that $u:C_M\to X$ does not have geometrically irreducible generic fiber. Then there is a nontrivial Stein factorization $u:C_m \stackrel{w}{\map} Y \stackrel{v}{\to} X$ where $v$ is finite, generically \'etale of degree $>1$ and $w$ has geometrically irreducible generic fiber (see \cite[Lem.9]{MR2011744} for the non-proper variant used here). Let $X^0\subset X$ be an open set such that $v$ is \'etale over $X^0$ and $Y^0:=v^{-1}(X^0)$. For general $m\in M$, the induced map $C_m\to X$ factors through $Y$, hence $$ \im\bigl[\pi_1\bigl(C_m\cap u^{-1}(X^0)\bigr)\to \pi_1(X^0)\bigr] \subset \im\bigl[ \pi_1(Y^0)\to \pi_1(X^0)\bigr] \subsetneq\pi_1(X^0). $$ In this case the Lefschetz property fails but the weak variant could hold with $N=$ the number of geometric irreducible components of the generic fiber of $u$. More generally, we see that the weak Lefschetz property for $M\leftarrow C_M\stackrel{u}{\to} X$ is equivalent to the weak Lefschetz property for $M\leftarrow C_M\stackrel{w}{\to} Y$. The advantage is that $w:C_M\to Y$ has geometrically irreducible generic fiber. (\ref{3.key.exmps}.4) An extreme case of the above is when $u:C_M\to X$ is generically finite. Then $w:C_M\to Y$ is birational and (\ref{3.key.exmps}.2) applies to $p:C_M\to M$. Thus the weak Lefschetz property does not hold for $M\leftarrow C_M\stackrel{u}{\to} X$. (A trivial exception is when $M$ is a single point, giving the case $X=\p^1$ in (\ref{lines.on.hyp.cor}.3).) (\ref{3.key.exmps}.5) A difficulty in using the reduction method of (\ref{3.key.exmps}.3) is that being generically $C_M$-connected changes as we pass from $X$ to $Y$. A typical example is the following. For some $n\geq 2$ set $X=S^2\p^n\setminus (\mbox{diagonal})$. Its universal cover is $\tilde X=\p^n\times \p^n\setminus (\mbox{diagonal})$. Let $\tilde u:\tilde C_L\to \tilde X$ be the family of lines that are contained in some $\p^n\times \{\mbox{point}\}$ and $u:C_L\to X$ the corresponding family of lines in $X$. Note that $X$ is generically $C_L$-connected but $\tilde X$ is not generically $\tilde C_L$-connected. Since each point in $X$ has 2 preimages in $\tilde X$, each fiber of $u$ has 2 irreducible components. For an open set $W\subset \p^n$ let $X_W\subset X$ denote the image of $W\times W$. Then there is an extension $$ 1\to \pi_1(W)+\pi_1(W)\to \pi_1\bigl(X_W\bigr)\to \{\pm 1\}\to 1 $$ and for any line $C_m$, the image of $\pi_1\bigl(C_m\cap X_W\bigr)$ lies in the first summand $\pi_1(W) $. Thus if $\pi_1(W) $ is infinite then even the weak Lefschetz property fails. (\ref{3.key.exmps}.6) Continuing with the previous example, let $\tilde w:\tilde C_Q\to \tilde X$ be the family of conics (that is rational curves of bidegree $(1,1)$) and $w:C_Q\to X$ the corresponding family of conics in $X$. Here both $\tilde w$ and $w$ have connected fibers. It follows from our results that $\tilde w:\tilde C_Q\to \tilde X$ satisfies the Lefschetz property but $w:C_Q\to X$ only satisfies the weak Lefschetz property (with $N=2$). (\ref{3.key.exmps}.7) The Lefschetz properties are really about small open subsets of $X$ and of $C_M$. 
To see this, let $u:Y\to X$ be a morphism between normal varieties, $X^0\subset X$ an open subvariety and $Y^0:=u^{-1}(X^0)$. Then $\pi_1(X^0)\to \pi_1(X)$ is surjective (cf.\ \cite[2.10]{shaf-book}), thus the index in Definition \ref{leff.prop.defn} increases as we pass from $X$ to $X^0$. That is, $$ \Bigl| \pi_1(X^0) : \im\bigl[\pi_1(Y^0)\to \pi_1(X^0)\bigr]\Bigr| \geq \Bigl|\pi_1(X) : \im\bigl[\pi_1(Y)\to \pi_1(X)\bigr]\Bigr|. $$ Next let $C^0_M\subset C_M$ be a dense open subset. Then $C^0_m$ is a dense open subset of $C_m$ for general $m\in M$. Thus, if $C_m$ is normal, then $\pi_1(C^0_m)\to \pi_1(C_m)$ is surjective by \cite[2.10]{shaf-book}, so $$ \im\bigl[\pi_1(C^0_m)\to \pi_1(X^0)\bigr]= \im\bigl[\pi_1(C_m)\to \pi_1(X^0)\bigr]. $$ \end{exmp} Our first result says that these examples almost explain every failure of the Lefschetz property. \begin{prop} \label{leff-ewkleff.prop} Let $X$ be a normal variety over $\c$ and $M\leftarrow C_M \to X$ a flat, irreducible family of irreducible varieties. Then each of the following statements implies the next. \begin{enumerate} \item $M\leftarrow C_M\to X$ satisfies the Lefschetz property. \item $C_M\to X$ is dominant, has geometrically irreducible generic fiber and $X$ is generically $C_M$-connected. \item $M\leftarrow C_M\to X$ satisfies the weak Lefschetz property. \end{enumerate} \end{prop} In any concrete situation is usually easy to check that $C_M\to X$ is dominant and has geometrically irreducible generic fiber. Being generically $C_M$-connected is not always clear but it holds if $X$ is smooth, proper, has Picard number 1 and $M\leftarrow C_M\stackrel{u}{\to} X$ is a locally complete family of free curves; see \cite[IV.4.14]{rc-book}. Sometimes the difference between the Lefschetz property and the weak Lefschetz property is minor, but in the arithmetic applications \cite{MR1786496, MR1745009, MR2019976} having surjectivity is essential. The following main technical result says that if we avoid the bad situations (\ref{3.key.exmps}.1--3) and we have surjectivity for $\pi_1(X)$ itself then the Lefschetz property holds. More generally, the extent of any failure of the Lefschetz property is determined by $X$ itself. \begin{thm} \label{main.thm} Let $M\stackrel{p}{\leftarrow} C_M\stackrel{u}{\to} X$ be a family of varieties over a smooth (not necessarily proper) variety $X$, defined over $\c$. Assume that \begin{enumerate} \item $p$ and $u$ are both smooth with irreducible fibers, \item $u$ is surjective and \item $X$ is generically $C_M$-connected. \end{enumerate} Let $j:X^0\DOTSB\lhook\joinrel\to X$ be an open subset and $j_*:\pi_1(X^0)\to \pi_1(X)$ the induced map on the fundamental groups. Then there is an open subset $\emptyset\neq M^0\subset M$ such that $$ \im\bigl[\pi_1\bigl(C_m\cap u^{-1}(X^0)\bigr)\to \pi_1(X^0)\bigr] = j_*^{-1}\Bigl(\im\bigl[\pi_1\bigl(C_m\bigr)\to \pi_1(X)\bigr]\Bigr) \eqno{(\ref{main.thm}.4)} $$ for every $m\in M^0$. \end{thm} We already know from Proposition \ref{leff-ewkleff.prop} that both images in (\ref{main.thm}.4) are finite index subgroups. Thus (\ref{main.thm}.4) is equivalent to the equality $$ \Bigl| \pi_1(X^0) : \im\bigl[\pi_1\bigl(C_m\cap u^{-1}(X^0)\bigr)\to \pi_1(X^0)\bigr]\Bigr| = \Bigl|\pi_1(X) : \im\bigl[\pi_1(C_m)\to \pi_1(X)\bigr]\Bigr|. $$ If $X$ is simply connected then the right hand side of (\ref{main.thm}.4) equals $\pi_1(X^0)$. Thus, in this case, we assert that $\pi_1\bigl(C_m\cap u^{-1}(X^0)\bigr)\to \pi_1(X^0)$ is onto for every $m\in M^0$. 
The latter is exactly the Lefschetz property. When applying Theorem \ref{main.thm} to any family $M\stackrel{p}{\leftarrow} C_M\stackrel{u}{\to} X$, we first replace $M$ by $M\setminus \sing M$, then replace $C_M$ by the largest open subset $C^0_M$ where $p$ and $u$ are both smooth and finally replace $X$ by $u(C^0_M)$. The first step is entirely harmless. The key question is to understand how large $X\setminus u(C^0_M)$ is; only the divisors contained in it matter. As a significant example, let $X$ be smooth, proper and $M\subset \mor(\p^1, X)$ a nonempty, irreducible, open subset with universal morphism $u:M\times \p^1\to X$. For $x\in X$ let $M_x\subset M$ be the set of maps $[f]\in M$ such that $f(0{:}1)=x$. \begin{cor} \label{RC.leff.cor} Let $X$ be a normal, proper variety and $M\subset \mor(\p^1, X)$ a nonempty, irreducible, open subset parametrizing free maps with universal morphism $u:M\times \p^1\to X$. Assume that \begin{enumerate} \item $X\setminus \sing X$ is simply connected, \item $X\setminus u\bigl(M\times \p^1\bigr)$ has codimension $\geq 2$ and \item $X$ is generically $M\times \p^1$-connected. \end{enumerate} Then $u:M\times \p^1\to X$ satisfies the Lefschetz property iff $M_x$ is irreducible for general $x\in X$. \end{cor} Proof. The projection $M\times \p^1\to M$ is obviously smooth and $u$ is smooth by \cite[I.3.5.4]{rc-book} since we parametrize free morphism. We apply Theorem \ref{main.thm} to $X^*:=u\bigl(M\times \p^1\bigr)$ replacing $X$. By assumption, $X^*$ is obtained from the simply connected smooth variety $X\setminus \sing X$ by removing a closed subscheme of codimension $\geq 2$. Thus $X^*$ is also simply connected and hence the right hand side of (\ref{main.thm}.3) equals $\pi_1(X^0)$. \qed \begin{rem} \label{bouquet.rem} If $M_x$ is reducible for general $x\in X$ then instead of $u:M\times \p^1\to X$ one can work with the family of rational curves obtained by smoothing a bouquet of rational curves through $x$, one from each irreducible component of $M_x$. \end{rem} Note that the assumptions (\ref{RC.leff.cor}.1--3) hold if $X$ is smooth and has Picard number $\rho(X)=1$. Thus we get the following. \begin{cor} \label{RC.leff.cor.cor} Let $X$ be a smooth proper variety with $\rho(X)=1$. Let $M\subset \mor(\p^1, X)$ be a nonempty, irreducible, open subset parametrizing free maps. Then the universal morphism $u:M\times \p^1\to X$ satisfies the Lefschetz property iff $M_x$ is irreducible for general $x\in X$. \qed \end{cor} \begin{say}[Proof of Corollary \ref{lines.on.hyp.cor}] Let $M\leftarrow C_M\stackrel{u}{\to} X$ be the universal family of lines. Let $x\in X$ be a point. After a coordinate change, we may assume that $x=(1{:}0{:}\cdots{:}0)$. Write the equation of $X_d$ as $$ g_1(x_1,\dots, x_n)x_0^{d-1}+\cdots + g_d(x_1,\dots, x_n). $$ The family of lines in $X$ through $x$ is then given by the equations $$ M_x:=\bigl(g_1=\cdots=g_d\bigr)\subset \p^{n-1}. $$ $M_x$ is smooth of dimension $n-1-d$ for general $x\in X$ by \cite[II.3.11]{rc-book}. Thus $M_x$ is a smooth complete intersection, hence irreducible if $n-1-d\geq 1$. Thus $M$ has a unique irreducible component $M^0\subset M$ such that the corresponding family $u^0: C^0_M{\to} X$ is dominant and has geometrically irreducible generic fiber. Thus, by Corollary \ref{RC.leff.cor.cor}, $M^0\leftarrow C^0_M\stackrel{u^0}{\to} X$ satisfies the Lefschetz property. Conversely, assume that $d\geq n-1$. If $d\geq n$ then there is no line through a general point; this is like example (\ref{3.key.exmps}.1). 
The $d=n-1$ case is discussed in (\ref{3.key.exmps}.4). \qed \end{say} \begin{rem} The proof applies to any smooth, Fano complete intersection of Fano index $\geq 3$. If the Fano index is $2$, applying Remark \ref{bouquet.rem} yields very high degree curves, but most likely conics work if the Fano index is $2$ and cubics if the Fano index is $1$. As far as I know, Corollary \ref{lines.on.hyp.cor} should hold in any characteristic. It holds for general hypersurfaces where the family of lines is smooth and has the expected dimension (cf.\ \cite[V.4.3]{rc-book}). \end{rem} The proof of Theorem \ref{main.thm} follows the outlines of \cite[Sec.5]{k-hom}. First we recall properties of open chains, then we pass to a subfamily that is topologically trivial. After studying which chains lift to \'etale covers, the proof is completed in Paragraphs \ref{all.lift.say}--\ref{key.lift.prop.2}. At the end we consider how to modify the statements and the proofs to work over arbitrary fields. \subsection*{Open chains}{\ } \begin{say}[Chains of varieties over $X$] \label{chains.say} Let $M\leftarrow C_M\stackrel{u}{\to} X$ be a family of schemes over $X$. A {\it $C_M$-link} is a morphism of a triple $u_m:( C_m, a, b)\to X$ where $m\in M$ and $a,b\in C_m$. A {\it $C_M$-chain} of length $r$ over $X$ consists of \begin{enumerate} \item $C_M$-links $u_i:(C_i, a_i, b_i)\to X$ for $i=1,\dots, r$ such that \item $u_i(b_i)=u_{i+1}(a_{i+1})$ for $i=1,\dots, r-1$. \end{enumerate} We say that the chain {\it starts} at $u_1(a_{1})\in X$ and {\it ends} at $u_r(b_r)\in X$ or that it {\it connects} $u_1(a_{1})$ and $u_r(b_r)$. A $C_M$-chain determines a reducible variety $\vee_{i=1}^rC_i$ obtained from the disjoint union of $C_1,\dots, C_r$ by identifying $b_i\in C_i$ with $a_{i+1}\in C_{i+1}$ for $i=1,\dots, r-1$. The morphisms $u_i$ then define a morphism $\vee_i u_i: \vee_iC_i\to X$. If the $C_i$ are connected then the image of $\vee_i u_i $ is a connected subscheme of $X$ which contains the starting and end points of the chain. Starting with $M\leftarrow C_M\stackrel{u}{\to} X$ the set of all pairs $( C_m,a)$ (resp.\ triples $( C_m,a, b)$) is naturally given by $$ C_M\leftarrow C_M\times_MC_M\stackrel{u\circ \pi_2}{\longrightarrow} X \qtq{and} C_M\times_MC_M\leftarrow C_M\times_MC_M\times_MC_M\stackrel{u\circ \pi_3} {\longrightarrow} X $$ where the marked points are given by the diagonal maps $$ \delta:C_M\to C_M\times_MC_M\qtq{and} \delta_1, \delta_2:C_M\times_MC_M\to C_M\times_MC_M\times_MC_M. $$ Here $\pi_i$ denotes the $i$th coordinate projection and $\delta_i$ maps the first $C_M$ identically to the $i$th factor and the second $C_M$ diagonally to the other 2 factors. Thus all $C_M$-chains of length 1 are parametrized by $$ C_M\times_MC_M\leftarrow C_M\times_MC_M\times_MC_M\stackrel{u\circ \pi_3} {\longrightarrow} X \eqno{(\ref{chains.say}.3)} $$ which we denote from now on by $$ \chain(C_M, 1)\stackrel{p^{(1)}}{\longleftarrow} C_M^{(1)}\stackrel{u^{(1)}}{\longrightarrow} X. \eqno{(\ref{chains.say}.4)} $$ Out of this we get that all $C_M$-chains of length 2 are parametrized by $$ \chain(C_M, 2) :=\chain(C_M, 1)\times_X \chain(C_M, 1) $$ where the 2 maps $\chain(C_M, 1)\to X$ are given by $u^{(1)}\circ \delta_2$ on the first copy and $u^{(1)}\circ \delta_1$ on the second copy. 
Over this there is a universal family $$ \chain(C_M, 2)\ \stackrel{p^{(r,1)}\vee p^{(r,2)}}{\longleftarrow}\ C_M^{(2,1)}\vee C_M^{(2,2)}\ \stackrel{u^{(r,1)}\vee u^{(r,2)}}{\longrightarrow}\ X $$ where $ C_M^{(r,i)} $ denotes the universal family of the $i$th links of the $r$-chains. By iterating this we get $\chain(C_M, r)$ parametrizing length $r$ chains $$ \chain(C_M, r)\ \stackrel{\vee_i p^{(r,i)}}{\longleftarrow}\ \vee_{i=1}^rC_M^{(r,i)}\ \stackrel{\vee_i u^{(r)}}{\longrightarrow}\ X. \eqno{(\ref{chains.say}.5)} $$ If $C_M\to M$ is flat with irreducible fibers then $C_M^{(1)}\to \chain(C_M, 1)$ is also flat with irreducible fibers. For a point $x\in X$ we have $$ \bigl(u^{(1)}\bigr)^{-1}(x)\cong u^{-1}(x)\times_M C_M. $$ Thus we conclude the following. {\it Claim \ref{chains.say}.6.} Assume that $M$ is irreducible and both maps $M\leftarrow C_M\to X$ are flat with irreducible fibers. Then: \begin{enumerate} \item[a)] Each $\chain(C_M, r)$ is irreducible. \item[b)] The maps $\chain(C_M, r)\stackrel{p^{(r,i)}}{\longleftarrow} C_M^{(r,i)}\stackrel{u^{(r,i)}}{\longrightarrow} X$ are flat with irreducible fibers. \item[c)] If $C^0_M\subset C_M$ is a dense open subset then $\chain(C^0_M, r)$ is a dense open subset of $\chain(C_M, r)$. \qed \end{enumerate} \end{say} \begin{defn} \label{chains.say.8} With the above notation, the starting and end points give morphisms $$ \alpha, \beta: \chain(C_M,r)\to X. $$ We say that $X$ is {\it generically $C_M$-connected} if $$ \alpha\times\beta: \chain(C_M,r)\to X\times X $$ is dominant for some $r$, that is, if two general points of $X$ can be connected by a $C_M$-chain of length $r$. (The equivalence of this definition with the one given in (\ref{3.key.exmps}.2) is proved in \cite[IV.4.13]{rc-book}.) If $u$ is open and $X$ is generically $C_M$-connected then $\alpha\times\beta $ is dominant for every $r\geq\dim X$ by \cite[IV.4.13]{rc-book}. Note that if $X$ is generically $C_M$-connected, $M$ is irreducible and both maps $M\leftarrow C_M\to X$ are flat with irreducible fibers then, by (\ref{chains.say}.6.c), $X$ is also generically $ C_M^0$-connected for every dense open subset $ C_M^0\subset C_M$. \end{defn} Now we choose an especially well behaved subset $ C_M^0\subset C_M$. \begin{prop} \label{gen.pos.trans.prop} Let $X$ be a normal variety and $M\stackrel{p}{\leftarrow} C_M\stackrel{u}{\to} X$ a family of varieties over $X$ where $M$ is irreducible and both maps are flat with irreducible fibers. Let $\emptyset\neq X^0\subset X$ be an open subset. Then there is an open subset $\emptyset\neq C_M^0\subset C_M$ with induced maps $p^0:C_M^0\to M$ and $u^0:C_M^0\to X$ such that \begin{enumerate} \item $p^0$ is smooth with irreducible fibers, \item $p^0$ is a topologically locally trivial fiber bundle (over its image), \item the image of $u^0$ is contained in $X^0$ and \item $u^0$ has irreducible (hence connected) fibers. \end{enumerate} \end{prop} Proof. We first replace $C_M$ by the open subset $C_M^1=u^{-1}(X^0)$ and then by the open subset $C_M^2\subset C_M^1$ where $p$ is smooth. By \cite[p.43]{gm-book}, every map between algebraic varieties is a locally topologically trivial fiber bundle over a Zariski open subset. Thus by passing to an open subset $C_M^0\subset C_M^2$ we may assume that properties (1--3) hold. Since each fiber of $u$ is irreducible, the same holds for $u^0$. \qed The pointed fibers $(C^0_m, a)$ also form a topologically locally trivial fiber bundle $C^0_M\leftarrow C_M^0\times_MC_M^0$. 
Given a point $x\in X^0$, the set of all $(C^0_m, a)$ such that $u(a)=x$ form a topologically locally trivial fiber bundle over the connected base $(u^0)^{-1}(x)$. As we noted in (\ref{3.key.exmps}.7), $$ \im\bigl[\pi_1(C^0_m,a)\to \pi_1(X^0,x)\bigr]= \im\bigl[\pi_1(C^1_m,a)\to \pi_1(X^0,x)\bigr]. $$ Thus we obtain the following. \begin{cor}\label{pi1.triv.subfamily} Notation and assumptions as in (\ref{gen.pos.trans.prop}). Then, for every $m\in M$, $a\in C^0_m$ and $x:=u(a)$, the image of the induced map $$ \Gamma(X^0,C,x) :=\im\bigl[\pi_1\bigl(C^0_m, a\bigr) \stackrel{u^0_*}{\longrightarrow} \pi_1\bigl( X^0, x\bigr)\bigr] \subset \pi_1\bigl( X^0, x\bigr) $$ depends only on $\bigl( X^0, x\bigr)$ and $C_M$ but not on $m\in M$ and $a\in C^0_m$.\qed \end{cor} \subsection*{Topologically locally trivial families} \begin{say} \label{gen.pos.trans.say} We work with families $M\stackrel{p}{\leftarrow} C_M\stackrel{u}{\to} X$ such that $p$ has irreducible fibers and the following holds: (\ref{gen.pos.trans.say}.1) For every $x\in X, m\in M$ and $a\in C_m$ satisfying $u_m(a)=x$, the image of the induced map $$ \Gamma(X,C,x) :=\im\bigl[\pi_1\bigl(C_m, a\bigr) \stackrel{u_*}{\longrightarrow} \pi_1\bigl( X, x\bigr)\bigr] \subset \pi_1\bigl( X, x\bigr) $$ does not depend on $m\in M$ and $a\in C_m$. An equivalent formulation is the following. (\ref{gen.pos.trans.say}.2) Let $\bigl(\tilde X, \tilde x\bigr)\to (X,x)$ be any covering space such that $$ u_m: \bigl(C_m,a\bigr)\to \bigl(X,x\bigr) \qtq{lifts to} \tilde u_m: \bigl(C_m,a\bigr)\to \bigl(\tilde X,\tilde x\bigr) $$ for some $m\in M$ and $a\in C_m$. Then the lift exists for every $n\in M, b\in C_n$ for which $u_n(b)=x$. Now fix a point $x\in X$. Corresponding to $\Gamma(X,C,x)$ there is an \'etale cover $$ q_X: \bigl(\tilde X, \tilde x\bigr)\to \bigl(X,x\bigr). \eqno{(\ref{gen.pos.trans.say}.3)} $$ We do not yet know that $\Gamma(X,C,x)$ has finite index, so $\tilde X\to X $ could have infinite degree. Thus $\tilde X$ is an analytic space for now. \end{say} \begin{prop}\label{key.lift.prop.1} Notation and assumptions as in (\ref{gen.pos.trans.say}). Then every $C_M$-chain on $X$ starting at $x$ lifts to a $C_M$-chain on $\tilde X $ starting at $\tilde x$. \end{prop} Proof. A $C_M$-chain is given by the data $u_i:(C_i, a_i, b_i)\to X$. Set $x_1:=x$. By the choice of $\Gamma(X,C,x_1)$, $$ u_1: (C_1, a_1)\to (X, x_1) \qtq{lifts to} \tilde u_1: (C_1, a_1)\to (\tilde X, \tilde x_1). $$ If we let $\tilde x_2$ denote the image of $b_1$ then we can view the latter map as $$ \tilde u_1: (C_1, b_1)\to (\tilde X, \tilde x_2). $$ We next apply (\ref{gen.pos.trans.say}) to $$ u_1: ( C_1, b_1)\to (X, x_2)\qtq{and} u_2: ( C_2, a_2)\to (X, x_2) $$ to see that if one of them lifts to $(\tilde X, \tilde x_2)$ then so does the other. This gives us $$ \tilde u_2: (C_2, a_2)\to (\tilde X, \tilde x_2). $$ We can iterate the argument to lift the whole chain. \qed \begin{cor}\label{key.lift.cor} Notation and assumptions as in (\ref{gen.pos.trans.say}). Assume in addition that $X$ is $C_M$-connected. Then $\Gamma(X,C,x)\subset \pi_1\bigl(X,x\bigr)$, as in (\ref{gen.pos.trans.say}.1), has finite index, thus $q_X: \bigl(\tilde X, \tilde x\bigr)\to \bigl(X,x\bigr)$ is an algebraic \'etale cover. More precisely, the degree of $q_X$ is bounded by $N:=$ the number of irreducible components of the geometric generic fiber of $\alpha\times\beta:\chain(X,\dim X)\to X\times X$. \end{cor} Proof. 
Let $\chain(C_M,r,x)\subset \chain(C_M,r)$ denote the subscheme parametrizing chains that start at $x$. Thus $\chain(C_M,r,x)$ is a fiber of $\alpha: \chain(C_M,r)\to X$ and, for general $x\in X$, the number of irreducible components of the geometric generic fiber of $\beta:\chain(X,r,x)\to X$ equals $N$. Let $p^{(r)}:\vee_i C^{(r,i)}_M\to \chain(X,r,x)$ be the universal family of $C_M$-chains of length $r$ with starting and end point sections $\alpha, \beta: \chain(X,r,x)\to \vee_i C^{(r,i)}_M$. Note that $u^{(r,1)}\circ \alpha$ maps $\chain(X,r,x)$ to $\{x\}$ and, by (\ref{key.lift.prop.1}), each fiber of $p^{(r)}$ lifts to a $\tilde C_m$-chain on $\tilde X$ starting at $\tilde x$. Thus $$ \vee_i u^{(r,i)}: \vee_i C^{(r,i)}_M\to X \qtq{lifts to} \vee_i \tilde u^{(r,i)}: \vee_i \tilde C^{(r,i)}_M\to \tilde X. $$ In particular, the end point map $$ u^{(r,r)}\circ \beta: \chain(C_M,r,x)\to X \qtq{lifts to} \tilde u^{(r,r)}\circ \tilde \beta: \chain(C_M,r,x)\to \tilde X. $$ Therefore $$ \im\bigl[ \beta_*:\pi_1\bigl(\chain(C_M,r,x)\bigr)\to \pi_1(X)\bigr] \subset \Gamma(X,C,x). $$ By assumption (and \cite[4.13]{rc-book}) $\beta$ is dominant for $r\geq \dim X$. Therefore, by \cite[2.10]{shaf-book}, the index is bounded as $$ \Bigl|\pi_1(X) : \im\bigl[ \pi_1\bigl(\chain(C_M,r,x)\bigr)\to \pi_1(X) \bigr]\Bigr|\leq N. \qed $$ \begin{say}[Proof of \ref{leff-ewkleff.prop}] The implication (\ref{leff-ewkleff.prop}.1) $\Rightarrow$ (\ref{leff-ewkleff.prop}.2) was already noted in (\ref{3.key.exmps}.1--3). It remains to show that (\ref{leff-ewkleff.prop}.2) $\Rightarrow$ (\ref{leff-ewkleff.prop}.3). As we noted in (\ref{chains.say.8}), replacing $C_M$ with an open subset $\emptyset\neq C^0_M\subset C_M$ does not change the assumptions. Thus we may assume that the assumptions of (\ref{key.lift.cor}) hold. This gives the bound $N:=$ the number of irreducible components of the geometric generic fiber of $\alpha\times\beta:\chain(X,\dim X)\to X\times X$. \qed \end{say} \subsection*{Proof of Theorem \ref{main.thm}}{\ } Fix an open subset $X^0\subset X$ and use (\ref{gen.pos.trans.prop}) to obtain $C^0_M\subset C_M$. Then pick a general point $x\in X^0$ and, as in Paragraph \ref{gen.pos.trans.say}, construct $$ q^0_X: \bigl(\tilde X^0, \tilde x\bigr)\to (X^0,x). $$ By Proposition \ref{leff-ewkleff.prop}, $q^0_X $ has finite degree, thus it extends (uniquely) to a normal, possibly ramified, finite cover $$ q_X: \bigl(\tilde X, \tilde x\bigr)\to (X,x). $$ If $q_X$ is also \'etale then $\tilde X^0\to X^0 $ is the pull-back of the finite \'etale cover $\tilde X\to X$; this is what (\ref{main.thm}.4) asserts. All that remains is to derive a contradiction if $q_X$ is ramified. Since $X$ is smooth, in this case there is a nonempty branch divisor $B\subset X$. First we show that most $C_M$-chains starting at $x$ lift to $\tilde X$. Then we use the branch divisor to show that most chains do not lift, thereby arriving at a contradiction. \begin{say}[Lifting $C_M$-chains]\label{all.lift.say} A $C^0_M$-chain is given by the data $u^0_i:(C^0_i, a_i, b_i)\to X$. Here each $C^0_i $ is an open subset of the corresponding $C_i$ thus the $C^0_M$-chain naturally corresponds to a $C_M$-chain given by the data $u_i:(C_i, a_i, b_i)\to X$. Since the $C_i$ are normal (even smooth) and $q_X:\tilde X\to X$ is finite, every lifting $\tilde u^0_i: C^0_i\to \tilde X$ of $u^0_i$ uniquely extends to $\tilde u_i: C_i\to \tilde X$. Thus if a $C^0_M$-chain lifts to $\tilde X^0 $ then the corresponding $C_M$-chain also lifts to $\tilde X$. 
\end{say} \begin{say}[Non-liftable chains] \label{key.lift.prop.2} Assume that the branch divisor $B_X\subset X$ of $q_X:\tilde X\to X$ is nonempty. Let $B^*_X\subset B_X$ be the open subset of smooth points. Since $u:C_M\to X$ is surjective and smooth, the preimage $B_C:=u^{-1}(B_X)$ is also nonempty and $u^{-1}(B^*_X)$ is smooth and nonempty. Let $B^*_C\subset u^{-1}(B^*_X)$ be the set of points where the restriction of $p$ to $B_C$ is smooth. Finally let $M^*\subset M$ be the open subset consisting of those points $m\in M$ such that $C_m$ meets $B^*_C$ in at least 1 point. Thus for $m\in M^*$ there is a map of the unit disc $\tau_m:\Delta\to C_m$ such that $u\circ \tau_m: \Delta\to X$ is transversal to $B$. Since $q_X$ branches along $B$, the sheets of $\tilde X\to X$ have nontrivial monodromy around $B$ and the pull-back to $\Delta$ still has nontrivial monodromy. Set $d:=\deg \tilde X/ X$. If $m\in M^*$ then the pull-back $$ q_m: C_m\times_X\tilde X\to C_m $$ is a degree $d$ cover that is \'etale outside $C_m\cap B^*_C$ and whose monodromy around $C_m\cap B^*_C$ is nontrivial. The cover need not be connected or normal, but, due to the monodromy, it can not be a union of $d$ trivial covers $C_m\cong C_m$. That is, if $a\in C_m$ is a general point and $\tilde a_1,\dots, \tilde a_d$ its preimages in $C_m\times_X\tilde X $ then, for at least one $\tilde a_i$, the identity map $(C_m, a)\to (C_m, a)$ can not be lifted to $( C_m, a)\to\bigl( C_m\times_X\tilde X, \tilde a_i\bigr)$. Thus if $y\in X$ is the image of $a$ and $\tilde y_1,\dots, \tilde y_d\in \tilde X$ its preimages, then for at least one $\tilde y_i$, the map $ u_m: (C_m, a)\to (X,y)$ can not be lifted to $$ \tilde u_{(m,i)}:(C_m , a)\not\to \bigl( \tilde X, \tilde y_i\bigr). $$ Consider now the dominant map $\tilde \beta_r: \chain(C^0_M,r,x)\to \tilde X^0$ and let $X^*\subset X$ be a Zariski open subset such that $q_X^{-1}(X^*)\subset \im \tilde \beta_r$. By choosing the above $u_m:C_m\to X$ generically, we may assume that there is a point $a_{r+1}\in C_m$ such that $y:=u_m(a_{r+1})\in X^*$. By the choice of $X^*$, for every $\tilde y^*_i\in q_X^{-1}(y)$ there is a $C^0_M$-chain of length $r$ whose lift to $\tilde X$ connects $\tilde x$ and $\tilde y^*_i$. We can add $ u^0_m: \bigl(C^0_m, a_{r+1}, b_{r+1}\bigr)\to X$ as the last link of any of these chains. Thus we get $d$ different $C^0_M$-chains of length $r+1$ and, for at least one of them, its extension to a $C_M$-chain can not be lifted to $\tilde X$. This contradicts (\ref{all.lift.say}) and completes the proof of Theorem \ref{main.thm}. \qed \end{say} \subsection*{Other fields}{\ } Our results apply to varieties over an arbitrary field, with two modifications. First, we have to use the algebraic fundamental group; which we still denote by $\pi_1$. Note that if $k$ is any field with algebraic closure $\bar k$ and $p:Y\to X$ is a morphism of geometrically irreducible $k$-varieties then the induced map $\pi_1(Y)\to \pi_1(X)$ is surjective iff $\pi_1(Y\times_k\bar k)\to \pi_1(X\times_k\bar k)$ is surjective. Thus our questions are geometric in nature and the key point is to understand what happens over algebraically closed fields in positive characteristic. The main difference is that even the classical Lefschetz theorem fails in the non-projective case. For instance, $\pi_1(\a^1)\to \pi_1(\a^2)$ is not surjective in positive characteristic. (An example is given by the cover $(z^p+z+x=0)\subset \a^3$ of the $xy$-plane which splits over any line $x=c$.) 
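To spell the example out (a quick check, with $k$ algebraically closed of characteristic $p$ as above): over a line $x=c$ the defining equation becomes $$ z^p+z+c=0, $$ whose derivative with respect to $z$ is $1$; it is therefore separable and has $p$ distinct roots $z_1,\dots,z_p\in k$. The preimage of the line is thus the disjoint union of the $p$ lines $\{(c,y,z_i):y\in \a^1\}$, so the corresponding degree $p$ quotient of $\pi_1(\a^2)$ is killed on the image of $\pi_1$ of the line.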
This is remedied with the following variant of Definition \ref{leff.prop.defn}. \begin{defn}\label{leff.prop.defn.gen} Let $k$ be an algebraically closed field of positive characteristic and $$ M\stackrel{p}{\leftarrow} C_M\stackrel{u}{\to} X \eqno{(\ref{leff.prop.defn.gen}.1)} $$ a family of schemes where $M$ is geometrically irreducible and $p$ is flat with irreducible fibers. We say that the family (\ref{leff.prop.defn.gen}.1) satisfies the {\it Lefschetz property} if the following holds. \begin{enumerate} \item[] For every Zariski open dense subset $X^0\subset X$ and every finite quotient $\pi_1(X^0)\onto G$ there is a Zariski open dense subset $M^0_G\subset M$ such that, for every $m\in M^0_G$, the induced map $$ u(X^0,G,m)_*:\pi_1\bigl(C_m\cap u^{-1}(X^0)\bigr)\to \pi_1(X^0)\to G\qtq{is surjective.} $$ \end{enumerate} We say that (\ref{leff.prop.defn.gen}.1) satisfies the {\it weak Lefschetz property} if there is a constant $N$ (independent of $X^0$ and of $G$) such that, for a suitable choice of $ M^0_G$, the image of $u(X^0,G,m)_* $ has index at most $N$ in $G$. \end{defn} With this notion, the only question is what should replace the topologically trivial family used in (\ref{gen.pos.trans.prop}). Topological triviality is used only through its consequence (\ref{gen.pos.trans.say}.1). In our case we need that $$ \Gamma(X,C,x) :=\im\bigl[\pi_1\bigl(C_m, a\bigr)\to \pi_1\bigl( X, x\bigr)\to G\bigr] \subset G $$ be independent of $m\in M$ and $a\in C_m$. This is an easy consequence of the semicontinuity property of the fundamental groups in fibers; see \cite[Prop.16]{MR2011744} for a precise statement and proof. The rest of the arguments go through with minor changes. \vskip1cm \noindent Princeton University, Princeton NJ 08544-1000 {\begin{verbatim}[email protected]\end{verbatim}} \end{document}
\begin{document} \title{A Numerical Study of Bravyi-Bacon-Shor and Subsystem Hypergraph Product Codes} \author{Muyuan Li} \email{[email protected]} \affiliation{School of Computational Science and Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332, USA} \author{Theodore J. Yoder} \email{[email protected]} \affiliation{IBM T. J. Watson Research Center, Yorktown Heights, NY, 10598, United States} \date{November 2019} \begin{abstract} We provide a numerical investigation of two families of subsystem quantum codes that are related to hypergraph product codes by gauge-fixing. The first family consists of the Bravyi-Bacon-Shor (BBS) codes, which have optimal code parameters for subsystem quantum codes local in two dimensions. The second family consists of the constant-rate ``generalized Shor'' codes of Bacon and Casaccino \cite{bacon2006quantum}, which we re-brand as subsystem hypergraph product (SHP) codes. We show that any hypergraph product code can be obtained by entangling the gauge qubits of two SHP codes. To evaluate the performance of these codes, we simulate both small and large examples. For circuit noise, a $\llbracket 21,4,3\rrbracket$ BBS code and a $\llbracket 49,16,3\rrbracket$ SHP code have pseudothresholds of $2\times10^{-3}$ and $8\times10^{-4}$, respectively. Simulations for phenomenological noise show that large BBS and SHP codes start to outperform surface codes with similar encoding rate at physical error rates of $1\times 10^{-6}$ and $4\times10^{-4}$, respectively. \end{abstract} \maketitle \section{Introduction} Two-dimensional topological error-correcting codes are extremely attractive models of quantum error-correction. Structurally, low-weight stabilizers -- just weight four for the surface code and weight six for the most popular color code -- that are also local in the plane make for simple fault-tolerant syndrome measurement circuits. In turn, this simplicity leads to surprisingly high thresholds \cite{wang2011surface} compared to, say, concatenated codes \cite{aliferis2005quantum}. On the other hand, error-correction in two dimensions is inherently limited by the Bravyi-Poulin-Terhal bound \cite{bravyi2010tradeoffs}, which states that a two-dimensional code using $N$ qubits to encode $K$ qubits with code distance $D$ must satisfy $cKD^2\le N$ for some universal constant $c$. In particular, two-dimensional codes with constant rate $K\propto N$ must have constant distance, which precludes error-correction with constant space overhead \cite{gottesman2014fault} in two dimensions. These constraints on two-dimensional codes explain the recent surge of interest in quantum hypergraph product codes \cite{tillich2013quantum,leverrier2015quantum}, which break the plane (i.e.~are \emph{not} local in two dimensions) but in doing so achieve $K\propto N$ and $D\propto\sqrt{N}$. Given the small-set flip decoder \cite{leverrier2015quantum}, which is single-shot with an asymptotic threshold, hypergraph product codes promise quantum error-correction with constant overhead \cite{fawzi2018constant}. However, hypergraph product codes also have a couple of undesirable properties from a practical standpoint. First, the small-set flip decoder, although theoretically satisfactory, is likely not practical due to low thresholds even when measurements are perfect \cite{grospellier2018numerical}.
This is somewhat to be expected by analogy with classical expander codes, where the classical flip decoder \cite{sipser1996expander} is greatly outperformed by heuristic decoders, such as belief propagation \cite{richardson2001capacity}. It is also unclear whether the small-set flip decoder works well at all on small examples suitable for near-term implementation. Second, the stabilizer weights of hypergraph product codes are relatively large, e.g.~the best performing codes in \cite{grospellier2018numerical} have stabilizers with weight 11, which necessitates a corresponding increase in fault-tolerant circuit complexity and a decrease in thresholds with respect to circuit-level noise. Here we take an empirical look at two families of subsystem codes that, while related to hypergraph product codes, may have some advantages for near-term implementation. Because these are subsystem codes, the operators measured for error-correction are quite small -- in the cases we explore here they never exceed weight six. We also demonstrate how the powerful technique of belief propagation can be applied to decode these codes. The first family consists of the Bravyi-Bacon-Shor (BBS) codes \cite{bravyi2012subsystem}. BBS codes achieve $K,D\propto\sqrt{N}$ with just two-body measurements and are easily modified so that these measurements are local in two dimensions. Furthermore, they can be gauge-fixed to hypergraph product codes \cite{yoder2019optimal}. The second family consists of the ``generalized Shor" codes of Bacon and Casaccino \cite{bacon2006quantum}. We rename these codes subsystem hypergraph product (SHP) codes, because we prove that any hypergraph product code is two SHP codes with their gauge qubits entangled. SHP codes can achieve $K\propto N$ and $D\propto\sqrt{N}$ just like hypergraph product codes. Compared to BBS codes, they have higher weight gauge operators, weight six in our instances. We perform numerical experiments with these code families in two regimes of operation. In the small-code regime, we construct small, distance-3 codes in each class, develop fault-tolerant circuits for measuring their stabilizers, and calculate pseudothresholds for circuit noise. We find pseudothresholds of $2\times10^{-3}$ for a $\llbracket21,4,3\rrbracket$ BBS code and $8\times10^{-4}$ for a $\llbracket49,16,3\rrbracket$ SHP code. These results suggest that the BBS code in particular is quite a good candidate for protecting four logical qubits with a small quantum computer. In the large-code regime, we create BBS and SHP codes from regular classical expander codes. We modify belief propagation to include measurement errors and apply it to decode these codes under an error model including data and measurement noise (but without circuit-level noise). Despite having no asymptotic thresholds, compared to a single logical qubit of surface code with similar encoding rate, BBS and SHP codes do achieve better logical error rates per logical qubit provided sufficiently low physical error rates: $p<10^{-6}$ for BBS codes and $p<4\times10^{-4}$ for SHP codes. The paper is organized as follows. In Section \ref{sec:BBS} we review the Bravyi-Bacon-Shor codes and present a circuit-level simulation of the $\llbracket21,4,3\rrbracket$ code. In Section \ref{sec:bigSHP} we look at the construction of the subsystem hypergraph product codes, find their code parameters, and present a circuit-level simulation of the $\llbracket49,16,3\rrbracket$ code. 
In Section \ref{sec:decode} we show how to add measurement noise to a classical belief propagation decoder so that it can then be used to decode the BBS and SHP codes. In Section \ref{sec:results} we present numerical results on large BBS and SHP codes and compare them to surface codes. \section{Review of Bravyi-Bacon-Shor Codes} \label{sec:BBS} In this section, we review the Bravyi-Bacon-Shor (BBS) codes that were introduced by Bravyi \cite{bravyi2010tradeoffs} and explicitly constructed in \cite{yoder2019optimal}. Let $\mathcal{F}_2$ denote the finite field with two elements 0,1. A Bravyi-Bacon-Shor code is defined by a binary matrix $A \in \mathcal{F}_2^{n_1 \times n_2}$, where qubits live on sites $(i,j)$ of the matrix $A$ for which $A_{i,j} = 1$. As shown in \cite{bravyi2010tradeoffs,yoder2019optimal}, given $A$ we can define two classical codes corresponding to its column-space and row-space: \begin{align} \mathcal{C}_1 &= \col{A},\\ \mathcal{C}_2 &= \row{A}, \end{align} where $\mathcal{C}_1$ and $\mathcal{C}_2$ has code parameters $[n_1, k, d_1]$, $[n_2, k, d_2]$, generating matrices $G_1$ and $G_2$, and parity check matrices $H_1$ and $H_2$. The notation for Pauli operators on the qubit lattice is defined as follows. A Pauli $X$- or $Z$-type operator acting on the qubit at site $(i,j)$ in the lattice is written as $X_{i,j}$ or $Z_{i,j}$. A Pauli operator acting on multiple qubits is specified by its support $S$: \begin{equation} X(S) = \prod_{ij}(X_{i,j})^{S_{ij}}, \,\, S \in \mathcal{F}_2^{n_1 \times n_2}, \end{equation} where $S_{ij}=1$ implies that $A_{ij}=1$, since qubits only exist where $A_{ij}=1$. Similar notations will be used throughout the rest of this paper. We let $|A|=\sum_{ij}A_{ij}$ and $|v|=\sum_iv_i$ denote the Hamming weights of matrices and vectors. \begin{defn}\cite{bravyi2010tradeoffs} The Bravyi-Bacon-Shor code constructed from $A \in \mathcal{F}_2^{n_1 \times n_2}$, denoted $\text{BBS}(A)$, is an $\llbracket N,K,D\rrbracket$ quantum subsystem code with gauge group generated by 2-qubit operators and \begin{align*} N&=|A|,\\ K&=\rank{A},\\ D&=\min\{ |\vec{y}|>0:\vec{y}\in \row{A} \cup \col{A}\},\\ \end{align*} \end{defn} As CSS quantum subsystem codes, the gauge group of BBS codes is generated by $XX$ interactions between any two qubits sharing a column in $A$ and $ZZ$ between any two qubits sharing a row in $A$. The gauge group can be more formally written as \begin{align} \label{eq:BBS_gx} \mathcal{G}^{(\text{bbs})}_X&=\{X(S):G_R S=0,S \subseteq A\},\\ \label{eq:BBS_gz} \mathcal{G}^{(\text{bbs})}_Z&=\{Z(S):S G_R^T =0,S \subseteq A\}, \end{align} where $G_R = (1,1, \ldots, 1)$ is the generating matrix of the classical repetition code, and the subset notation $S\subseteq A$ means that $S$ is a matrix such that, for all $i,j$, $S_{ij}=1$ implies $A_{ij}=1$. For bare logical operators of the BBS code to commute with all of its gauge operators, each bare logical X-type operator must be supported on entire rows of the matrix and each bare logical Z-type operator must be supported on entire columns of the matrix. To express this similarly to the gauge operators above, define the parity check matrix of the classical repetition code $H_R$. Then we have the sets of $X$- and $Z$-type logical operators: \begin{align} \label{eq:BBS_Lx} \mathcal{L}^{(\text{bbs})}_X&=\{X(S \cap A):S H^T_R=0\},\\ \label{eq:BBS_Lz} \mathcal{L}^{(\text{bbs})}_Z&=\{Z(S \cap A):H_R S =0\}. 
\end{align} Consequently, the group of stabilizers for the BBS code is the intersection of the group of bare logical operators with the gauge group: \begin{align} \label{eq:BBS_Sx} \mathcal{S}_X^{(\text{bbs})} &= \mathcal{L}_X^{(\text{bbs})} \cap \mathcal{G}_X^{(\text{bbs})}\\ &= \{ X(S \cap A):S H^T_R=0, G_1 S=0 \},\\ \label{eq:BBS_Sz} \mathcal{S}_Z^{(\text{bbs})} &= \mathcal{L}_Z^{(\text{bbs})} \cap \mathcal{G}_Z^{(\text{bbs})}\\ &= \{ Z(S \cap A):H_R S=0, S G_2^T=0 \}. \end{align} \subsection{Constructing BBS codes with classical linear codes} In \cite{yoder2019optimal}, the following method of constructing a BBS code from classical codes was given. \begin{thm} Given two classical linear codes $\mathcal{C}_1$ and $\mathcal{C}_2$ with parameters $[ n_1,k,d_1 ]$ and $[ n_2,k,d_2]$, and generating matrices $G_1 \in \mathcal{F}_2^{k \times n_1}$ and $G_2 \in \mathcal{F}_2^{k \times n_2}$, we can construct the code $\text{BBS}(A)$ by \begin{equation} A = G^T_1 Q G_2 \in \mathcal{F}_2^{n_1 \times n_2}, \end{equation} where $Q \in \mathcal{F}_2^{k \times k}$ can be any full rank $k \times k$ matrix. Then $\text{BBS}(A)$ is an $\llbracket N,K,D \rrbracket$ quantum subsystem code with \begin{align} \min(n_1 d_2, d_1 n_2) &\leq N \leq n_1 n_2,\\ K &= k,\\ D &= \min(d_1, d_2). \end{align} \end{thm} The matrix $Q \in \mathcal{F}_2^{k \times k}$ represents the non-uniqueness of the generating matrices, and adjusting $Q$ would only affect the number of physical qubits in $\text{BBS}(A)$. It is easy to see that $\col{A} = \row{G_1} = \mathcal{C}_1$, and $\row{A} = \row{G_2} = \mathcal{C}_2$, and the conclusions in the theorem about the code parameters follow. \subsection{Example: A $[[21,4,3]]$ Bravyi-Bacon-Shor Code} \label{sec:BBS_Hamming} \begin{table*} \begin{center} \begin{tabular}{ c|c|c||c} \hline Qubits & $X_L$ & $Z_L$ & Stabilizers\\ \hline \hline 1 & $X_0 X_1 X_2$ & $Z_0 Z_{12} Z_{17}$ & $X_0 X_1 X_2 X_3 X_4 X_5 X_9 X_{10} X_{11} X_{12} X_{13} X_{14}$\\ 2 & $X_3 X_4 X_5$ & $Z_3 Z_9 Z_{16}$ & $X_0 X_1 X_2 X_6 X_7 X_8 X_9 X_{10} X_{11} X_{15} X_{16} X_{17}$\\ 3 & $X_6 X_7 X_8$ & $Z_6 Z_{15} Z_{18}$ & $X_3 X_4 X_5 X_6 X_7 X_8 X_9 X_{10} X_{11} X_{18} X_{19} X_{20}$\\ 4 & $X_3 X_4 X_5 X_6 X_7 X_8 X_9 X_{10} X_{11}$ & $Z_3 Z_4 Z_9 Z_{13} Z_{16} Z_{19}$ & $Z_6 Z_{15} Z_{18} Z_3 Z_9 Z_{16} Z_4 Z_{13} Z_{19} Z_7 Z_{10} Z_{14}$\\ & & & $Z_6 Z_{15} Z_{18} Z_0 Z_{12} Z_{17} Z_{4} Z_{13} Z_{19} Z_{1} Z_5 Z_8$\\ & & & $Z_3 Z_{9} Z_{16} Z_0 Z_{12} Z_{17} Z_{4} Z_{13} Z_{19} Z_{2} Z_{11} Z_{20}$\\ \hline \hline \end{tabular} \caption{Stabilizers and a set of canonical logical operators for the $[[21, 4, 3]]$ Bravyi-Bacon-Shor code constructed using the $[7,4,3]$ Hamming code.} \label{table:hamming} \end{center} \end{table*} The $[7,4,3]$ Hamming code is generated by \[G = \begin{bmatrix} 1 & 0 & 0 & 0 & 1 & 1 & 0 \\ 0 & 1 & 0 & 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 & 1 & 1 & 1 \end{bmatrix} , \text{\space}H = \begin{bmatrix} 1 & 1 & 0 & 1 & 1 & 0 & 0 \\ 1 & 0 & 1 & 1 & 0 & 1 & 0 \\ 0 & 1 & 1 & 1 & 0 & 0 & 1 \end{bmatrix}. \] Using $Q = \begin{psmallmatrix} 0 & 0 & 1 & 0\\ 0 & 1 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0\end{psmallmatrix}$ we can construct a $[[21, 4, 3]]$ Bravyi-Bacon-Shor code $A = G^T Q G$: \[A = \begin{bmatrix} 0 & 0 & 1 & 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 & 1 & 1 & 0 \\ 0 & 1 & 0 & 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 1 & 1 & 0 & 0 \\ 1 & 1 & 1 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 1 & 0 & 0 & 1 \\ \end{bmatrix}, \] which minimizes the number of qubits used. 
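For readers who want to reproduce this example, the code parameters can be recomputed directly from the definition. The short Python sketch below is our own illustration (it is not part of the original construction and assumes only \texttt{numpy}); it builds $A=G^TQG$ over $\mathcal{F}_2$ and evaluates $N=|A|$, $K=\rank{A}$ and $D$.
\begin{verbatim}
import numpy as np
from itertools import product

# [7,4,3] Hamming generator matrix G and the full-rank matrix Q quoted above.
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]], dtype=int)
Q = np.array([[0,0,1,0],
              [0,1,0,1],
              [1,0,0,0],
              [0,1,0,0]], dtype=int)

A = (G.T @ Q @ G) % 2          # A = G^T Q G over F_2

def gf2_rank(M):
    """Rank over F_2 by Gaussian elimination."""
    M = M.copy() % 2
    r = 0
    for c in range(M.shape[1]):
        pivot = next((i for i in range(r, M.shape[0]) if M[i, c]), None)
        if pivot is None:
            continue
        M[[r, pivot]] = M[[pivot, r]]
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] ^= M[r]
        r += 1
    return r

N = int(A.sum())               # number of qubits = |A|
K = gf2_rank(A)                # number of logical qubits = rank(A)

# D = minimum weight of a nonzero vector in row(A) or col(A)
D = A.shape[1]
for bits in product([0, 1], repeat=7):
    x = np.array(bits)
    for y in ((x @ A) % 2, (A @ x) % 2):
        if y.any():
            D = min(D, int(y.sum()))

print(N, K, D)                 # expected output: 21 4 3
\end{verbatim}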
We construct a canonical set of bare logical operators for the four logical qubits encoded in the $\llbracket 21,4,3 \rrbracket$ code along with a set of stabilizer generators, as shown in TABLE \ref{table:hamming}. Note that while qubit $4$ has high weight bare logical operators due to the construction that we have chosen, it can still suffer from weight three logical operators, such as $Z_4Z_{13}Z_{19}$, and so its error rate has the same slope as the others. We estimated the performance of this code by simulating it under circuit level standard depolarizing noise, where Pauli channels with Kraus operators \begin{align} \begin{split} E_{1q} &= \{\sqrt{1-p}I, \sqrt{\frac{p}{3}}X, \sqrt{\frac{p}{3}}Y, \sqrt{\frac{p}{3}}Z\},\\ E_{2q} &= \{\sqrt{1-p}II, \sqrt{\frac{p}{15}}IX, \ldots, \sqrt{\frac{p}{15}}ZZ\}, \end{split} \label{eq:pauli} \end{align} are applied after each 1- and 2-qubit gate in the circuit, respectively. We call $p\in[0,1]$ the physical error rate. Assuming the code is fault-tolerantly prepared into its logical $\ket{0000}$ state, we simulated the circuit of error correction and destructive measurement of data qubits with single qubit memory errors added before error correction. The same error rate is used across the circuit for memory errors, gate errors, and measurement errors. Note that idle errors are not considered in the circuit-level simulations presented in this paper. When we consider a trapped ion architecture where long-range interactions required by these subsystem codes of interest can be easily implemented, idle errors have minimal effect on the logical system when compared to gate errors \cite{debroy2019logical}. The results are shown in FIG.~\ref{fig:BBS_hamming}. Note that since the BBS codes can be considered a compass code in 2-dimensions \cite{li20192d}, to create a fault-tolerant circuit for syndrome extraction it suffices to use a single ancillary qubit for each of the weight-12 stabilizers as listed in TABLE \ref{table:hamming}. Hence the total number of qubits required to perform fault-tolerant syndrome extraction for this code is $21+6=27$. We perform the syndrome extraction once and if the syndrome is trivial, we stop and no correction is needed. If the syndrome is not trivial, we measure the syndrome again and decode with the outcome. From FIG.~\ref{fig:BBS_hamming} we can see that qubit $4$ performs slightly worse than qubits 1-3, due to the fact that its higher weight logical operators have more chance of anti-commuting with dressed logical operators than qubits 1-3. \begin{figure} \caption{Simulated performance of the $[[21, 4, 3]]$ BBS code under circuit level depolarizing error, with one ancillary qubit per stabilizer for fault-tolerant syndrome extraction. The block pseudothreshold for the code block with 4 encoded logical qubits is $2.3 \times 10^{-3}$.} \label{fig:BBS_hamming} \end{figure} \section{Another family of subsystem hypergraph product codes} \label{sec:bigSHP} In this section we take a look at the ``generalized Shor" codes in Bacon and Casaccino \cite{bacon2006quantum} from a new perspective. In particular, we find that these codes are in a sense the most natural subsystem hypergraph product codes because two of them, without ancillas, can be gauge-fixed to a hypergraph product code and, conversely, any hypergraph product code can be gauge-fixed into two generalized Shor codes. We therefore refer to generalized Shor codes as subsystem hypergraph product (SHP) codes. 
Contrast SHP codes with BBS codes, which can also be gauge-fixed to hypergraph product codes \cite{yoder2019optimal}. Gauge-fixing BBS codes requires ancillas and the result is only a certain subset of all hypergraph product codes with less than constant rate. \subsection{Hypergraph product codes} \label{sec:HGP} To facilitate our proofs, we review the hypergraph product code construction briefly in this section. \begin{defn}\cite{tillich2013quantum} Let $H_1\in\{0,1\}^{n_1^T\times n_1}$ and $H_2\in\{0,1\}^{n_2^T\times n_2}$. The hypergraph product (HGP) of $H_1$ and $H_2$ is a quantum code $\text{HGP}(H_1,H_2)$ with stabilizers \begin{align}\label{eq:hgp_sx} S^{(\text{hgp})}_X&=\left(H_1\otimes I_{n_2},I_{n_1^T}\otimes H_2^T\right),\\\label{eq:hgp_sz} S^{(\text{hgp})}_Z&=\left(I_{n_1}\otimes H_2,H_1^T\otimes I_{n_2^T}\right). \end{align} \end{defn} By Eq.~\eqref{eq:hgp_sx} we mean that each vector $v\in\mathcal{F}_2^N$ in the rowspace of the matrix on the righthand side indicates an $X$-type Pauli operator $X^{v}:=\prod_{i=1}^NX_i^{v_i}$ in the stabilizer group. Likewise with $Z$-type operators in Eq.~\eqref{eq:hgp_sz}. Similar notation will be used throughout this section. Treating $H_1$ and $H_2$ as parity check matrices, we have two classical codes $\mathcal{C}_1=\ker(H_1)$ and $\mathcal{C}_2=\ker(H_2)$ with some parameters $[n_1,k_1,d_1]$ and $[n_2,k_2,d_2]$, respectively. Likewise, treat $H_1^T$ and $H_2^T$ as parity check matrices of the ``transpose" codes $\mathcal{C}_1^T=\ker(H_1^T)$ and $\mathcal{C}_2^T=\ker(H_2^T)$ with respective parameters $[n_1^T,k_1^T,d_1^T]$ and $[n_2^T,k_2^T,d_2^T]$. Because of the rank-nullity theorem \begin{equation}\label{eq:parameter_relation} n_i-k_i=n_i^T-k_i^T \end{equation} for $i=1,2$. The hypergraph product code $\text{HGP}(H_1,H_2)$ then has parameters \cite{tillich2013quantum} \begin{equation} \llbracket n_1n_2+n_1^Tn_2^T,k_1k_2+k_1^Tk_2^T,D\rrbracket, \end{equation} where \begin{equation} D=\bigg\{\begin{array}{lr}\min(d_1,d_2),&k_1^T=0\text{ or }k_2^T=0\\\min(d_1,d_2,d_1^T,d_2^T),&\text{otherwise}\end{array}. \end{equation} Moving on, we notice that there are $n_1n_2+n_1^Tn_2^T$ qubits in $\text{HGP}(H_1,H_2)$ that we lay out on two square lattices, an $n_1\times n_2$ lattice referred to as the ``large'' lattice, denoted $L$, and an $n_1^T\times n_2^T$ lattice referred to as the ``small'' lattice, denoted $l$. See Fig.~\ref{fig:hypergraph_prod.png}. Despite the names, the small lattice need not contain fewer qubits than the large lattice, although typically (e.g.~in random constructions of classical LDPC codes \cite{Gallager1962}) $n_i^T\approx n_i-k_i<n_i$ and this is the case. \begin{figure} \caption{The large and small lattices, $L$ and $l$.} \label{fig:hypergraph_prod.png} \end{figure} We label qubits in these lattices in row major fashion. Thus, a (row) vector $r^T\otimes c^T$ for $r\in\{0,1\}^{n_1}$ and $c\in\{0,1\}^{n_2}$ indicates exactly the qubits that are both in the rows indicated by $r$ and in the columns indicated by $c$ of the large lattice. Qubits in the large lattice are labeled first, i.e.~$1,2,\dots,n_1n_2$, followed by qubits in the small lattice, i.e.~$n_1n_2+1,\dots,n_1n_2+n_1^Tn_2^T$. For later purposes, we point out some subgroups of the stabilizer group. For instance, certain stabilizers of $\text{HGP}(H_1,H_2)$ are supported entirely on the large lattice. 
Because the rowspace of $S^{(\text{hgp})}_X$ represents all $X$-type stabilizers, if $x\in\{0,1\}^{n_1^T}$, $c\in\ker(H_2)=\mathcal{C}_2\subseteq\{0,1\}^{n_2}$, then \begin{equation} (x^T\otimes c^T)S^{(\text{hgp})}_X=\left(x^TH_1\otimes c^T,0\right) \end{equation} is a stabilizer supported entirely on the first $n_1n_2$ qubits, i.e.~entirely on the large lattice. Let $G_1\in\{0,1\}^{k_1\times n_1}$ and $G_2\in\{0,1\}^{k_2\times n_2}$ be generator matrices for codes $\mathcal{C}_1$ and $\mathcal{C}_2$. Then, we can provide a generating set of stabilizers on the large lattice like \begin{align}\label{eq:large_lattice_Sx} S^{(\text{hgp},L)}_X&=\left(H_1\otimes G_2\right),\\\label{eq:large_lattice_Sz} S^{(\text{hgp},L)}_Z&=\left(G_1\otimes H_2\right). \end{align} Similarly, some stabilizers of $\text{HGP}(H_1,H_2)$ are supported entirely on the small lattice. Let $F_1\in\{0,1\}^{k_1^T\times n_1^T}$ and $F_2\in\{0,1\}^{k_2^T\times n_2^T}$ be generating matrices for codes $\mathcal{C}_1^T$ and $\mathcal{C}_2^T$. The stabilizers on the small lattice have generating sets \begin{align}\label{eq:small_lattice_Sx} S^{(\text{hgp},l)}_x&=\left(F_1\otimes H_2^T\right),\\\label{eq:small_lattice_Sz} S^{(\text{hgp},l)}_z&=\left(H_1^T\otimes F_2\right). \end{align} Logical operators of $\text{HGP}(H_1,H_2)$ are those that commute with all stabilizers (we include the stabilizers themselves in this count). For instance, rows of the matrix $\left(I_{n_1}\otimes G_2,0\right)$ indicate $X$-type logical operators, since \begin{equation} S^{(\text{hgp})}_Z\left(I_{n_1}\otimes G_2,0\right)^T=0. \end{equation} The complete generating sets of $X$-type and $Z$-type logical operators are \begin{align} L^{(\text{hgp})}_X&=\left(\begin{array}{cc} H_1\otimes I_{n_2}&I_{n_1^T}\otimes H_2^T\\ I_{n_1}\otimes G_2&0\\ 0&F_1\otimes I_{n_2^T} \end{array}\right),\\ L^{(\text{hgp})}_Z&=\left(\begin{array}{cc} I_{n_1}\otimes H_2&H_1^T\otimes I_{n_2^T}\\ G_1\otimes I_{n_2}&0\\ 0&I_{n_1^T}\otimes F_2 \end{array}\right). \end{align} Nontrivial logical operators are logical operators that are not stabilizers. An alternative representation of stabilizers and logical operators is to specify them by their supports. For instance $X^{(L)}(S)$ is an $X$-type Pauli supported on the qubits specified by $S\in\{0,1\}^{n_1\times n_2}$ in the large lattice. Likewise for $X^{(l)}(T)$ with $T\in\{0,1\}^{n_1^T\times n_2^T}$ on the small lattice. Of course, $Z$-type Paulis $Z^{(L)}(S)$, $Z^{(l)}(T)$ are denoted analogously. Using this support-matrix notation, we get alternative descriptions of the stabilizer groups \begin{alignat}{3}\label{eq:ShgpX} \mathcal{S}^{(\text{hgp})}_X&=\big\{X^{(L)}(S)X^{(l)}(T):&SH_2^T&=H_1^TT,\\\nonumber &&G_1S&=0,TF_2^T=0\big\},\\\label{eq:ShgpZ} \mathcal{S}^{(\text{hgp})}_Z&=\big\{Z^{(L)}(S)Z^{(l)}(T):&H_1S&=TH_2,\\\nonumber &&SG_2^T&=0,F_1T=0\big\}. \end{alignat} and the logical operators \begin{align}\label{eq:LhgpX} \mathcal{L}_X^{(\text{hgp})}&=\{X^{(L)}(S)X^{(l)}(T):SH_2^T=H_1^TT\},\\\label{eq:LhgpZ} \mathcal{L}_Z^{(\text{hgp})}&=\{Z^{(L)}(S)Z^{(l)}(T):H_1S=TH_2\}, \end{align} which are useful for discussing gauge-fixing later. \subsection{Subsystem hypergraph product codes} \label{sec:SHP} In this section, we define the generalized Shor codes from \cite{bacon2006quantum} with notation similar to our description of HGP codes. This makes the two code families easier to relate later. \begin{defn}\label{def:SHP_codes} Let $H_1\in\{0,1\}^{n_1^T\times n_1}$ and $H_2\in\{0,1\}^{n_2^T\times n_2}$. 
The subsystem hypergraph product (SHP) code of $H_1$ and $H_2$ is the quantum subsystem code $\text{SHP}(H_1,H_2)$ with gauge operators \begin{align} G^{(\text{shp})}_X&=\left(H_1\otimes I_{n_2}\right),\\ G^{(\text{shp})}_Z&=\left(I_{n_1}\otimes H_2\right). \end{align} \end{defn} It is worth noting that while the definition of $\text{HGP}(H_1,H_2)$ depends on the parity check matrices $H_1$ and $H_2$, the definition of $\text{SHP}(H_1,H_2)$ depends only on the codes $\mathcal{C}_1=\ker{H_1}$ and $\mathcal{C}_2=\ker{H_2}$. This is because the gauge groups $G^{(\text{shp})}_X$ and $G^{(\text{shp})}_Z$ are the same for $\text{SHP}(H_1,H_2)$ and $\text{SHP}(H_1',H_2')$ whenever $\row{H_1}=\row{H_1'}$ and $\row{H_2}=\row{H_2'}$. Let us calculate the parameters $\llbracket N,K,D\rrbracket$ of the SHP code. There are clearly $N=n_1n_2$ qubits in the code, which we place on a lattice like in Fig.~\ref{fig:SHP_code}. \begin{figure} \caption{A subsystem hypergraph product code. For each column, $X$-type gauge operators are supported on qubits indicated by the parity checks $H_1$. For each row, $Z$-type gauge operators are supported on qubits indicated by the parity checks $H_2$.} \label{fig:SHP_code} \end{figure} To calculate $K$, begin by noticing that certain $X$-type operators, the bare $X$-type logical operators, commute with the entire group of gauge operators. These are generated by \begin{equation}\label{eq:LX} L^{(\text{shp})}_X=\left(I_{n_1}\otimes G_2\right), \end{equation} because $L^{(\text{shp})}_X\left(G^{\text{(shp)}}_Z\right)^T=0$. Likewise, the bare $Z$-type logical operators are \begin{equation}\label{eq:LZ} L^{(\text{shp})}_Z=\left(G_1\otimes I_{n_2}\right). \end{equation} The stabilizers of a subsystem code are those gauge operators that also commute with all elements of the gauge group, i.e.~the center of the gauge group. These are generated by \begin{align}\label{eq:shp_SX} S^{(\text{shp})}_X&=\left(H_1\otimes G_2\right),\\\label{eq:shp_SZ} S^{(\text{shp})}_Z&=\left(G_1\otimes H_2\right), \end{align} matching those stabilizers of $\text{HGP}(H_1,H_2)$ that are supported entirely on the large lattice (see Eqs.~\eqref{eq:large_lattice_Sx}, \eqref{eq:large_lattice_Sz}). Next, the number of encoded qubits can be calculated by comparing the ranks of $L^{(\text{shp})}_X$ and $S^{(\text{shp})}_X$ (or, equivalently of $L^{(\text{shp})}_Z$ and $S^{(\text{shp})}_Z$). \begin{align} K&=\rank{L^{(\text{shp})}_X}-\rank{S^{(\text{shp})}_X}\\ &=n_1k_2-(n_1-k_1)k_2\\ &=k_1k_2. \end{align} What does the description of $\text{SHP}(H_1,H_2)$ look like in support-matrix notation? Writing down the relevant groups, we have \begin{align} \mathcal{G}^{(\text{shp})}_X&=\{X(S):G_1S=0\},\\ \mathcal{G}^{(\text{shp})}_Z&=\{Z(S):SG_2^T=0\},\\ \mathcal{L}^{(\text{shp})}_X&=\{X(S):SH_2^T=0\},\\ \mathcal{L}^{(\text{shp})}_Z&=\{Z(S):H_1S=0\},\\ \mathcal{S}^{(\text{shp})}_X&=\{X(S):G_1S=0,SH_2^T=0\},\\ \mathcal{S}^{(\text{shp})}_Z&=\{Z(S):SG_2^T=0,H_1S=0\}. \end{align} Dressed logical operators are denoted $\hat{\mathcal{L}}^{(\text{shp})}_X=\mathcal{L}^{(\text{shp})}_X\mathcal{G}^{(\text{shp})}_X$ and $\hat{\mathcal{L}}^{(\text{shp})}_Z=\mathcal{L}^{(\text{shp})}_Z\mathcal{G}^{(\text{shp})}_Z$. To compute the distance $D$ of the subsystem hypergraph product code, we need to find the minimum weight of an element of $\hat{\mathcal{L}}^{(\text{shp})}_X-\mathcal{G}^{(\text{shp})}_X$ or of $\hat{\mathcal{L}}^{(\text{shp})}_Z-\mathcal{G}^{(\text{shp})}_Z$. 
Let us suppose $M\in\hat{\mathcal{L}}^{(\text{shp})}_X-\mathcal{G}^{(\text{shp})}_X$. Then, $M$ can be written as $M=X(S)X(T)$ where $X(S)\in\mathcal{L}^{(\text{shp})}_X$ and $X(T)\in\mathcal{G}^{(\text{shp})}_X$, so $SH_2^T=0$ and $G_1T=0$. Also, since $M$ is not in $\mathcal{G}^{(\text{shp})}_X$, there is some $M'$ corresponding to a row of $L_Z^{(\text{shp})}$ that anticommutes with $M$. Glancing at Eq.~\eqref{eq:LZ}, this means $M'=Z(S')$ where $S'$ is the outer product $S'=\vec c\hspace{2pt}\hat e_j^T$ for some $\vec c\in\mathcal{C}_1$ and some $j$ such that \begin{equation} \tr{((S+T)^TS')}=\hat e_j^T(S+T)^T\vec c=1. \end{equation} This trace being 1 (modulo two) expresses the anticommutation of $M$ and $M'$. Clearly, it implies $(S+T)^T\vec c\neq\vec 0$. Because $\vec c\in\mathcal{C}_1$, there is a vector $\vec x$ such that $\vec c=G_1^T\vec x$ and accordingly, \begin{equation} (S+T)^T\vec c=S^T\vec c+T^TG_1^T\vec x=S^T\vec c \end{equation} using $G_1T=0$. Moreover, $H_2S^T\vec c=0$ using $SH_2^T=0$ and so $(S+T)^T\vec c$ is a nonzero vector in $\ker(H_2)=\mathcal{C}_2$. Thus, by definition of the classical code distance $|M|=|S+T|\ge|(S+T)^T\vec c\hspace{1pt}|\ge d_2$. Likewise, if we suppose $M\in\hat{\mathcal{L}}^{(\text{shp})}_Z-\mathcal{G}^{(\text{shp})}_Z$ we find $|M|\ge d_1$. Thus, we have shown $D\ge\min(d_1,d_2)$ and it is not hard given the form of $L^{(\text{shp})}_X$ and $L^{(\text{shp})}_Z$ to see that this in fact holds with equality $D=\min(d_1,d_2)$. Therefore, the subsystem hypergraph product code is a $\llbracket n_1n_2,k_1k_2,\min(d_1,d_2)\rrbracket$ code. Quantum subsystem codes generalize quantum subspace codes because their stabilizers and logical qubits do not fix all the available degrees of freedom. The remaining degrees of freedom are counted as gauge qubits. These can be thought of as extra logical qubits that are not protected and thus not used to hold any meaningful information. If we calculate the number of gauge qubits in a subsystem hypergraph product code, we find it is \begin{equation}\label{eq:count_gauge_qubits} N-\rank{S^{(\text{shp})}_X}-\rank{S^{(\text{shp})}_Z}-K=(n_1-k_1)(n_2-k_2). \end{equation} \subsection{Example: A $\llbracket 49, 16, 3 \rrbracket$ subsystem hypergraph product code} \begin{figure} \caption{Simulated performance of the $\llbracket 49, 16, 3 \rrbracket$ SHP code under circuit level depolarizing noise, with one ancillary qubit per stabilizer for fault-tolerant syndrome extraction. The block pseudothreshold for the single code block with 16 encoded logical qubits is $8 \times 10^{-4}$.} \label{fig:SHP_hamming} \end{figure} Using the classical $[7,4,3]$ Hamming code for both the $X$ and $Z$ part, we can construct a $\llbracket 49, 16, 3 \rrbracket$ subsystem hypergraph product code by following Definition~\ref{def:SHP_codes}. We can construct a canonical set of logical operators of weights $3$ and $4$ for the $16$ logical qubits encoded in the same code block, along with a set of 24 stabilizer generators. Note that similar to the BBS codes, for each of the stabilizers it suffices to use a single ancillary qubit to fault-tolerantly extract its syndrome, by performing CNOT gates in the order of the gauge operators and hence directing propagated errors away from the direction of logical errors. Similar to the $\llbracket 21, 4, 3 \rrbracket$ BBS code, we study the $\llbracket 49, 16, 3 \rrbracket$ SHP code under circuit level depolarizing noise as shown in Eq.~\eqref{eq:pauli}. The results are shown in FIG. \ref{fig:SHP_hamming}. 
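For completeness, the parameters of this example follow directly from Definition~\ref{def:SHP_codes}. The sketch below is again our own illustration (assuming \texttt{numpy} and using the $[7,4,3]$ Hamming parity check matrix for both $H_1$ and $H_2$); it recomputes $N=n_1n_2$, $K=k_1k_2$ and $D=\min(d_1,d_2)$ by brute force.
\begin{verbatim}
import numpy as np
from itertools import product

# Parity check matrix of the [7,4,3] Hamming code, used for both H_1 and H_2.
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]], dtype=int)

n = H.shape[1]
# Classical codewords: all v with H v = 0 (mod 2).
codewords = [np.array(v) for v in product([0, 1], repeat=n)
             if not ((H @ np.array(v)) % 2).any()]
k = int(np.log2(len(codewords)))                       # k = 4
d = min(int(v.sum()) for v in codewords if v.any())    # d = 3

# SHP(H, H) parameters: N = n^2, K = k^2, D = min(d, d).
N, K, D = n * n, k * k, d
print(N, K, D)   # expected output: 49 16 3
\end{verbatim}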
Since the $4$ encoded logical bits in the $[7,4,3]$ Hamming code have different performances and the constructed logical operators for the SHP code have different weights, the performance of the $16$ encoded logical qubits varies and their pseudothreshold ranges between $10^{-4}$ and $2 \times 10^{-4}$. \subsection{SHP codes gauge-fix to HGP codes} To begin, we define gauge-fixing in general. See also \cite{yoder2019optimal}. We use the notation that for a gauge group $\mathcal{G}$, its stabilizer group (its center) is $\mathcal{S}(\mathcal{G})$ and it encodes $K(\mathcal{G})$ qubits. \begin{defn}\label{defn:gauge_fixing} We say that the gauge group $\mathcal{G}'$ is a gauge-fixing of the gauge group $\mathcal{G}$ if \begin{enumerate} \item $\mathcal{S}(\mathcal{G})\le\mathcal{S}(\mathcal{G}')\le\mathcal{G}'\le\mathcal{G}$, and \item $K(\mathcal{G})=K(\mathcal{G}')$. \end{enumerate} We also say that a code is a gauge-fixing of another code if their gauge groups are related in this way. \end{defn} We noted below Eq.~\eqref{eq:shp_SZ} that the stabilizers of $\text{SHP}(H_1,H_2)$ are exactly those stabilizers of $\text{HGP}(H_1,H_2)$ that are supported entirely on the large lattice. Similarly, one can check that the stabilizers of $\text{SHP}(H_2^T,H_1^T)$ are those of $\text{HGP}(H_1,H_2)$ supported entirely on the small lattice (i.e.~Eqs.~(\ref{eq:small_lattice_Sx},\ref{eq:small_lattice_Sz})). Also, Eq.~\eqref{eq:count_gauge_qubits} says that $\text{SHP}(H_2^T,H_1^T)$ has $(n_2^T-k_2^T)(n_1^T-k_1^T)$ gauge qubits, which is the same number as $\text{SHP}(H_1,H_2)$ by Eq.~\eqref{eq:parameter_relation}. These two facts suggest the following theorem. \begin{thm} $\mathcal{Q}'=\text{HGP}(H_1,H_2)$ is a gauge-fixing of $\mathcal{Q}=\text{SHP}(H_1,H_2)\text{SHP}(H_2^T,H_1^T)$. \end{thm} \begin{proof} We employ Definition~\ref{defn:gauge_fixing}. It should be clear that \begin{align} K(\mathcal{Q})&=K(\text{SHP}(H_1,H_2))+K(\text{SHP}(H_2^T,H_1^T))\\ &=k_1k_2+k_1^Tk_2^T=K(\mathcal{Q}'), \end{align} therefore satisfying part (2) of the definition. For part (1), it is important to associate (via a 1-1 map) the physical qubits of $\mathcal{Q}$ and $\mathcal{Q}'$. Recall, qubits of $\mathcal{Q}'$ are placed on the two lattices $L$ and $l$. A qubit at site $(i,j)$ in $\text{SHP}(H_1,H_2)$ is associated with the qubit at $(i,j)$ in $L$. On the other hand, qubit $(i,j)$ of $\text{SHP}(H_2^T,H_1^T)$ is associated instead with the qubit at $(j,i)$ on the small lattice $l$. Now, taken as a whole, this code $\mathcal{Q}$ has gauge operators and stabilizers that can be written as \begin{widetext} \begin{align} \mathcal{G}^{(\mathcal{Q})}_X&=\left\{X^{(L)}(S)X^{(l)}(T):G_1S=0,TF_2^T=0\right\},\\ \mathcal{G}^{(\mathcal{Q})}_Z&=\left\{Z^{(L)}(S)Z^{(l)}(T):SG_2^T=0,F_1T=0\right\},\\ \mathcal{S}^{(\mathcal{Q})}_X&=\{X^{(L)}(S)X^{(l)}(T):SH_2^T=0,H_1^TT=0,G_1S=0,TF_2^T=0\},\\ \mathcal{S}^{(\mathcal{Q})}_Z&=\{Z^{(L)}(S)Z^{(l)}(T):H_1S=0,TH_2=0,SG_2^T=0,F_1T=0\}. \end{align} \end{widetext} It should now be clear that we have \begin{align} \mathcal{S}_X^{(\mathcal{Q})}&\le\mathcal{S}_X^{(\text{hgp})}\le\mathcal{G}_X^{(\mathcal{Q})},\\ \mathcal{S}_Z^{(\mathcal{Q})}&\le\mathcal{S}_Z^{(\text{hgp})}\le\mathcal{G}_Z^{(\mathcal{Q})}, \end{align} thereby satisfying part (1) of Def.~\ref{defn:gauge_fixing}. 
\end{proof} In essence, the two SHP codes live on the large and small lattices in Fig.~\ref{fig:hypergraph_prod.png}, respectively, and gauge-fix to the HGP code by placing their gauge qubits in $(n_1-k_1)(n_2-k_2)$ maximally entangled two-qubit states. \section{Decoding BBS and SHP codes} \label{sec:decode} Both the BBS codes and SHP codes can be decoded by directly running a classical decoder on the corresponding classical code used to construct the quantum code. In this section we review the decoding of BBS codes, as discussed in \cite{yoder2019optimal}, and show that similar arguments can be applied to SHP codes. We review the classical belief propagation decoder for expander codes, and show how it can be used to tolerate measurement errors and therefore decode BBS and SHP codes. \subsection{Decoding the BBS codes} To decode the BBS codes, we have to establish associations between the stabilizers of the quantum code and the parity checks of the classical code. For convenience, we assume that $A$ is an $n \times n$ symmetric matrix constructed as $A = G^T Q G$, where $G$ is the generating matrix of a $[n,k,d]$ classical code $\mathcal{C}$, so there is only one classical code $\mathcal{C}=\row{A}=\col{A}$ under consideration. We let $H$ be the parity check matrix of $\mathcal{C}$. Given Eq. \ref{eq:BBS_Sx}, let $S$ be the support of a X-type stabilizer of BBS(A), $X(S \cap A) \in \mathcal{S}_X^{(\text{bbs})}$. Since $SH^T_R=0$, rows of $S$ are codewords of $\mathcal{C}_R$, either all 1s or all 0s. Because $GS=0$, columns of $S$ are parity checks of $\mathcal{C}$. Therefore, $S \cap A = \diag{\vec{r}}A$ for some $\vec{r} \in \row{H}$. Hence we have \begin{equation} \mathcal{S}_X^{(\text{bbs})} = \{ X(\diag{\vec{r}}A):\vec{r}\in \row{H} \}. \end{equation} Similarly, \begin{equation} \mathcal{S}_Z^{(\text{bbs})} = \{ Z(A\diag{\vec{c}}):\vec{c}\in \row{H} \}. \end{equation} Thus, the parity checks of the classical code indicate which sets of rows or columns constitute a stabilizer, and give us a one-to-one correspondence between the quantum stabilizers and the classical parity checks. Since single qubit Pauli $X$ errors within a column are equivalent up to gauge operators, each column is only sensitive to an odd number of Pauli $X$ error. The even or oddness of a column corresponds to the 0 or 1 state of an effective classical bit in the code $\mathcal{C}$. Similarly, the symmetry of A indicates that the same correspondence holds for Pauli $Z$ errors in rows and the even or oddness of rows in $A$. \begin{algo}[\textbf{The Induced Decoder for $\text{BBS}(A)$}] Given a symmetric binary matrix $A=G^T Q G$ where $\mathcal{C}=\row{G}=\row{A}=\col{A}$ is a classical $[n,k,d]$ code, we can decode the Bravyi-Bacon-Shor code $\text{BBS}(A)$ by:\\ \begin{itemize} \item Collect the $X$- or $Z$-type syndrome $\vec{\sigma}$ for the quantum code $\text{BBS}(A)$. \item Run the classical decoder to obtain a set of corrections for the classical code $\vec{c}=\mathcal{D}(\vec{\sigma})$. \item For each bit in the correction $\vec{c}$, apply a Pauli $Z$- or $X$-type correction to a single qubit in each row or column corresponding to the classical bit. \end{itemize} \end{algo} The time complexity of the induced decoder consists of the time to construct the stabilizer values and the time to run the classical decoder $\mathcal{D}$. 
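Before estimating this cost, here is a minimal sketch of the induced decoder just described (our own illustration; the classical decoder $\mathcal{D}$ is passed in as a black box, and all function and variable names are ours).
\begin{verbatim}
import numpy as np

def induced_bbs_decoder(A, H, x_syndrome, z_syndrome, classical_decoder):
    """Illustrative sketch of the induced decoder for BBS(A).

    A: binary matrix defining BBS(A); qubits sit where A[i, j] == 1.
    H: parity check matrix of the classical code C = row(A) = col(A).
    x_syndrome: outcomes of the X-type stabilizers (products of the XX gauge
        measurements within columns), one bit per row of H.
    z_syndrome: outcomes of the Z-type stabilizers (products of the ZZ gauge
        measurements within rows), one bit per row of H.
    classical_decoder(syndrome, H): any decoder for C, e.g. belief propagation;
        returns an n-bit estimate of which effective classical bits are flipped.
    Returns the supports of the Z- and X-type corrections.
    """
    z_corr = np.zeros_like(A)   # corrects Z errors (rows are the effective bits)
    x_corr = np.zeros_like(A)   # corrects X errors (columns are the effective bits)

    for i in np.nonzero(classical_decoder(x_syndrome, H))[0]:
        j = np.nonzero(A[i, :])[0][0]   # any one qubit in row i
        z_corr[i, j] = 1                # a single Z flips row i's effective bit
    for j in np.nonzero(classical_decoder(z_syndrome, H))[0]:
        i = np.nonzero(A[:, j])[0][0]   # any one qubit in column j
        x_corr[i, j] = 1                # a single X flips column j's effective bit
    return z_corr, x_corr
\end{verbatim}
The only quantum-specific step is the last one: because single-qubit Paulis within a row or column are equivalent up to gauge operators, one Pauli per flagged row or column suffices.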
Given an $[n,k,d]$ classical code $\mathcal{C}$, for a weight-$w$ parity check of the classical code the corresponding stabilizer of the $\text{BBS}(A)$ code is the sum of $O(wn)$ two-qubit gauge measurement. There are $m$ such stabilizers and suppose the classical decoder runs in time at most $t$, then the induced decoder of $\text{BBS}(A)$ takes time $O(mwn+t)$. When classical expander codes are used to construct $\text{BBS}(A)$ and the belief propagation decoder is used as classical decoder $\mathcal{D}$, $m=O(n)$, $w=O(1)$, $t=O(n)$, so the induced decoder runs in time $O(mwn+t)=O(n^2+n)=O(N)$, which is linear in the size of the quantum code. \subsection{Decoding the SHP codes} Similar to what we have done for the BBS codes, to decode the SHP codes we have to associate the stabilizers of the quantum code to the parity checks of the classical codes. To illustrate the idea most easily, we assume that the X and Z part of the SHP code are generated by the same $[n,k,d]$ classical code $\mathcal{C}$ with generating matrix $G \in \mathcal{F}_2^{k \times n}$ and parity check matrix $H \in \mathcal{F}_2^{m \times n}$, SHP($H_1, H_2$) = SHP($H$). From Eq.~\eqref{eq:shp_SX} and Eq.~\eqref{eq:shp_SZ} we have \begin{align} S_Z^{(\text{shp})} &= Z(G \otimes H)\\ &= \{ Z(g^T \otimes h):g\in \row{G}, h\in \row{H} \}, \end{align} \begin{align} S_X^{(\text{shp})} &= X(H \otimes G)\\ &= \{ X(h^T \otimes g):h\in \row{H}, g\in \row{G}\}. \end{align} Since $\rank{G}=k$, the eigenvalues of the quantum stabilizers correspond to exactly $k$ sets of syndromes for the classical code $\mathcal{C}$. In the case of $Z$-type stabilizers, let $g_i$ be the $i$-th row of $G$ for each $i$ such that $1 \leq i \leq k$. A set of syndromes for the classical code $\mathcal{C}$ is generated by measuring the following set of stabilizers \begin{equation} \{ Z(g_i^T \otimes h_j): h_j \in \row{H}, 1 \leq j \leq m \}. \end{equation} These $k$ sets of syndromes are passed to the classical decoder and results in $k$ sets of $n$-bit corrections on $\mathcal{C}$. However, in order to apply these $k$ sets of classical corrections canonically onto independent sets of qubits in the SHP code without affecting each other, we have to make sure that the generating matrix $G$ is in the reduced row echelon form, so that the $i$-th set of corrections can be applied on the $i$-th row of qubits lattice. \begin{algo}[\textbf{The Induced Decoder of $\text{SHP}(H)$}] Given a $[n,k,d]$ classical code $\mathcal{C}$ with generating matrix $G \in \mathcal{F}_2^{k,n}$ and parity check matrix $H \in \mathcal{F}_2^{m,n}$, the hypergraph subsystem code SHP(H) can be decoded by \begin{itemize} \item Reshape $G$ into its reduced row echelon form \\ $G = [I_k \,\,B]$. \item Collect the $X$- or $Z$-type syndrome $\vec{\sigma}$ of the quantum code $\text{SHP}(H)$. \item For each $i \in \{1,2,\ldots,k\}$, the syndrome corresponding to the set of stabilizers $\{ Z(g_i^T \otimes h_j): h_j \in \row{H}, 1 \leq j \leq m \}$ is passed to the classical decoder $\mathcal{D}$, and a $n$-bit correction $\vec{c_i}$ is obtained. \item For each set of corrections $\vec{c_i}$, Pauli $Z$- or $X$-type corrections are applied to the qubits $\vec{e_i} \otimes [1,1, \ldots, 1]$, where $\vec{e_i}$ is the $i$-th unit vector. \end{itemize} \end{algo} Hence when using the induced decoder on the quantum code $\text{SHP}(H)$ that is constructed by a $[n,k,d]$ code, we have to run the classical decoder $\mathcal{D}$ a total of $k$ times. 
The correction consists of $k$ sets of $n$-qubit Paulis have to be applied on the first $k$ rows or columns of the qubit lattice. The time complexity of the induced decoder for SHP codes again consists of the time to construct the stabilizer values and the time to run the classical decoder $\mathcal{D}$. For a weight-$w$ parity check of the classical code, the corresponding stabilizer of the SHP code is the sum of $O(n)$ number $w$-qubit gauge measurements. There are $O(k \times m)$ stabilizers, and the classical decoder $\mathcal{D}$ needs to be run $k$ times where each run takes time at most $t$, then the induced decoder takes time $O(kmnw+kt)$. When classical expander codes are used to construct the SHP code and the belief propagation decoder is used as the classical decoder $\mathcal{D}$, $m = O(n)$, $k = O(n)$, $w = O(1)$, $t = O(n)$, so the induced decoder runs in time $O(kmnw + kt)=O(n^3+nt)=O(N^{3/2})$. \subsection{Classical belief propagation decoder} In the previous two sections we have shown that decoding both the BBS codes and the SHP codes amount to directly decoding the underlying classical code $\mathcal{C}$ that was used to construct the quantum code, and apply the resulting corrections to the appropriate set of qubits in the quantum code. Therefore, in order to maximize the performance of the induced decoding algorithm the best classical decoder should be employed with modifications to tolerate measurement noise. Sipser and Spielman have analyzed the flip decoder \cite{sipser1996expander, spielman1996linear} for classical expander codes and in the scenario that the parity checks are noisy in addition to the bits. A quantum version of the classical flip decoder has been shown to decode the quantum expander codes efficiently \cite{leverrier2015quantum, fawzi2018efficient, fawzi2018constant, leverrier2015quantum}. However, when classical LDPC codes and expander codes are considered, various iterative message-passing decoding algorithms have been shown to result in codes with rate approaching the Shannon capacity together with efficient decoding algorithm (see e.g.~\cite{richardson2000design}). Message passing algorithms get the name as information is transmitted back and forth between variable and check nodes along the edges of the graph that is used to define the classical code. The transmitted message along an edge is a function of all received messages at the node except for a particular edge. This property ensures that the incoming messages are independent for a tree like graph. Among these well-known decoding algorithms, the belief propagation (BP) decoder, sometimes referred to as Gallager's soft decoding algorithm \cite{Gallager1962}, have been shown to out perform other message-passing algorithms for classical LDPC codes when the binary symmetric channel (BSC) is considered. In this section we briefly describe the BP decoder for classical LDPC codes. For a comprehensive discussion of this area, we point the reader to the book by Richardson and Urbanke \cite{richardson2008modern} and the notes by Guruswami \cite{guruswami2006iterative}, which are excellent resources on this topic. In particular, here we present the modified BP decoder that uses parity check values as input instead of bit values, in order to simulate the quantum case where data qubit values are not known to decoders. 
In order to run the BP decoder using parity check values, we add another set of $m$ ``syndrome nodes" $s_j, 1 \leq j \leq m$, that have one-to-one correspondence to the check nodes: syndrome node $s_j$ and check node $j$ are connected by edge $(s_j, j)$. These syndrome nodes $s_i$ are used to store the measured parity check values. Without loss of generality, we assume that the all $0$s message is the correct message to be received. \begin{algo} [\textbf{Belief Propagation Decoding Algorithm}] Assuming the probability $p$ for each bit of the incoming message to be flipped is the same, then the log-likelihood ratio $m_i$ of the $i$-th bit is \begin{equation} m_i = \log{\frac{1-p}{p}}. \end{equation} For the syndrome nodes $s_i$, we let $m_{s_i} = + \infty$ if the $i$-th syndrome is $0$ and $m_{s_i} = - \infty$ if the $i$-th syndrome is $1$. Do the following two steps alternatively: \begin{enumerate} \item \textbf{Rightbound messages}: For all edges $e = (i,j)$, $i \in \{1, 2, \ldots, n\} \cup \{s_1, s_2, \ldots, s_m \}$, do the following: \begin{itemize} \item if this is the zeroth round, $g_{i,j} = m_i$. \item Otherwise \begin{equation} g_{i,j} = m_i + \sum_{k \in \mathcal{N}(i) \backslash j}h_{i,k} \end{equation} where $\mathcal{N}(i)$ denotes the set of neighbors of node $i$. \end{itemize} The variable node $i$ sends the message $g_{i,j}$ to check node $j$. \item \textbf{Leftbound messages}: For edges $e = (i,j)$, $i \in \{1, 2, \ldots, n \}$ do the following: \begin{equation} h_{i,j} = f \left( \prod_{k \in \mathcal{N}(j) \backslash i} \frac{e^{g_{k,j}}-1}{e^{g_{k,j}}+1} \right), \,\,\,\, f(u) = \log{\frac{1+u}{1-u}}. \end{equation} The check node $j$ sends the message $h_{i,j}$ to node $i$. \end{enumerate} At each step we can determine the current variable node values $v_i$ given their updated log-likelihood ratios: $v_i = 0$ if $m_i > 0$ and $v_i = 1$ if $m_i < 0$. The above iterative step terminates when all check nodes are satisfied based on the current $v_i$, or the predetermined number of iterations is reached. The variable node value $v_i$ at the final step is used as correction for the noisy channel output $b_i$. \label{algo:BP} \end{algo} If the graph considered has large enough girth when compared to the number of iterations of the algorithm, the messages at each iteration would approach the true log-likelihood ratio of the bits given the observed values. By applying expander graph arguments to message passing algorithms it has been shown that the BP decoding algorithm can correct errors efficiently, with time linear in the block size \cite{burshtein2001expander}. Therefore the belief propagation decoder is a good candidate for decoding the BBS and SHP codes constructed using classical LDPC codes. \subsection{Handling measurement errors with BP decoder} As we mentioned previously, decoding algorithms for classical codes usually do not consider the problem of measurement noise. In previous studies of the BP decoder, no explicit proposals have been made regarding handling measurement noise when decoding classical expander codes. In order to use the BP decoder to decode the BBS and SHP codes as part of the induced decoder, modifications have to be made in order to tolerate measurement errors on parity check measurements. 
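For reference, Algorithm~\ref{algo:BP} can be written compactly as follows. This is an illustrative sketch of our own, not the implementation used for the simulations; the $\pm\infty$ messages from the syndrome nodes are folded into a sign on each check node, which is equivalent.
\begin{verbatim}
import numpy as np

def bp_decode(H, syndrome, prior_llr, max_iter=50):
    """Sketch of the syndrome-based BP decoder of Algorithm (BP).

    H: m x n parity check matrix; syndrome: length-m vector of measured checks;
    prior_llr: length-n vector of prior log-likelihood ratios log((1-p)/p).
    Returns a length-n correction estimate.
    """
    m, n = H.shape
    sgn = 1 - 2 * syndrome.astype(float)   # syndrome nodes enter as fixed +/-1 signs
    msg_vc = H * prior_llr                 # variable-to-check messages g_{i,j}
    for _ in range(max_iter):
        # check-to-variable: h_{i,j} = 2 artanh( s_j * prod_{k != i} tanh(g_{k,j}/2) )
        t = np.where(H == 1, np.tanh(np.clip(msg_vc, -30, 30) / 2.0), 1.0)
        prod = sgn[:, None] * np.prod(t, axis=1, keepdims=True)
        msg_cv = np.where(H == 1,
                          2 * np.arctanh(np.clip(prod / t, -0.999999, 0.999999)),
                          0.0)
        # posterior LLRs and tentative hard decision
        llr = prior_llr + msg_cv.sum(axis=0)
        v = (llr < 0).astype(int)
        if np.array_equal((H @ v) % 2, syndrome % 2):
            return v                       # all checks reproduce the syndrome
        # variable-to-check messages for the next round: g = llr - h
        msg_vc = np.where(H == 1, llr[None, :] - msg_cv, 0.0)
    return v
\end{verbatim}
In this form, the measurement-error modification described next simply amounts to calling the same routine on the augmented check matrix $[H\,|\,I_m]$, with prior $\log\frac{1-q}{q}$ on the $m$ extra bits, and keeping only the first $n$ bits of the output.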
When given a classical code $\mathcal{C}$ with $n$ variable nodes and $m$ check nodes, in addition to what we have done in Algorithm \ref{algo:BP} we add $m$ variable nodes to the graph so that each of them has a one-to-one correspondence with the $m$ check nodes: variable node $n+j$ is connected to check node $j$ via edge $(n+j, j)$. These additional variable nodes are used to represent measurement errors on the parity checks. An example of the modified graph for decoding a classical linear code of block length $6$ is shown in FIG. \ref{fig:BP_decoding}. In the binary symmetric channel, let $p$ be the probability that a bit is flipped and let $q$ be the probability that a measurement is flipped. We define the log-likelihood ratio $m_i$ for the $n+m$ variable nodes as: \begin{itemize} \item $m_i = \log{\frac{1-p}{p}}, 1 \leq i \leq n$, \item $m_i = \log{\frac{1-q}{q}}, n+1 \leq i \leq n+m$. \end{itemize} For the syndrome nodes $s_i$, we let $m_{s_i} = + \infty$ if the $i$-th syndrome is $0$ and $m_{s_i} = - \infty$ if the $i$-th syndrome is $1$. When executing the belief propagation decoding algorithm, the normal message passing process is executed as described in Algorithm \ref{algo:BP}. For the added syndrome node $s_i$, they send their log-likelihood ratios $m_i$ to the associated check node $j$ with message $g_{i,j} = m_i$ during the rightbound messages phase in each iteration, but there will be no incoming messages from the check nodes $c_i$ to change their values. The algorithm terminates when all check nodes are satisfied or a predetermined number of iterations is reached, and the $n$-bit variable node values at the final step are used as corrections for the noisy data qubits. \begin{figure} \caption{The graph for decoding a classical code of length $6$ using the modified BP decoder that tolerates measurement errors. The syndrome nodes $s_1, s_2, s_3$ are assigned log-likelihood values $\pm \infty$ given the input parity check measurement values $0$ or $1$.} \label{fig:BP_decoding} \end{figure} By employing the above described modifications to the BP algorithm, we can efficiently decode the classical expander codes while tolerating measurement errors. \section{Numerical Simulations and Results} \label{sec:results} \begin{figure} \caption{Simulating the performance of Bravyi-Bacon-Shor codes constructed using (a) $(3,6)$- and (b) $(5,6)$-biregular bipartite graphs. BBS codes by $(5,6)$ graphs outperforms BBS codes by $(3,6)$ graphs due to superior performance of classical $(5,6)$ codes.} \label{fig:BBS} \end{figure} In this section we present numerical results of decoding the BBS and SHP codes using the induced decoders instantiated with the modified BP decoder that handles measurement errors. All simulations are done under the phenomenological error model, where given probability $p$, random single-qubit bit or phase flip errors of the form $E_{1q} = \{\sqrt{1-p}I, \sqrt{p}X\}$ or $E_{1q} = \{\sqrt{1-p}I, \sqrt{p}Z\}$ are applied independently on qubits and measurements output the wrong (opposite) value with probability $p$. There is no circuit-level error propagation in the simulation. In order to maximize the parameters and performance of the quantum codes when decoded by the induced decoders, we construct the BBS and SHP codes with classical regular LDPC codes defined by biregular bipartite graphs. To obtain symmetric performance for $X$- and $Z$-type errors, both $X$ and $Z$ part of each quantum code are constructed with the same classical LDPC code. 
Since both the BBS and SHP codes are defined as CSS codes, $X$- and $Z$-type errors can be decoded separately using the induced decoder. Hence in the rest of the paper we assume that each qubit independently suffers from Pauli $X$- and $Z$-type errors as described in the previous paragraph, and study the performance of these codes by plotting the average logical error rate per logical qubit of the $K$-qubit block versus the physical error rate of each qubit. Using this metric allows us to directly compare the average performance of quantum codes with different encoding rates on an equal footing, instead of comparing large blocks with vastly different numbers of encoded qubits. By doing so we are taking into account both the performance and encoding rate when comparing different codes, but to some extent ignoring the potential correlation between logical errors. The classical regular LDPC codes that are used to construct the BBS and SHP codes were randomly generated biregular bipartite graphs using the configuration model \cite{richardson2008modern}. It can be shown that asymptotically these graphs will have a good expansion coefficient, making them classical expander codes with good performance. For each of the selected block sizes, we randomly generated $1000$ biregular bipartite graphs with specified node degrees and simulated their performance under the binary symmetric channel. The best-performing classical code is chosen to construct the quantum code. Since the induced decoder for the quantum code directly decodes on the underlying classical code, a relatively good classical code implies a relatively good quantum code. \begin{figure} \caption{Comparing the average error rate per logical qubit of the BBS codes constructed with (5,6)-biregular bipartite graphs of block sizes $240, 300, 360$ to surface codes of sizes $26 \times 26, 30 \times 30, 32 \times 32$. } \label{fig:BBS_VS_SF} \end{figure} We studied two classes of graphs for generating classical LDPC codes: the $(3,6)$- and $(5,6)$-biregular bipartite graphs, which we will refer to as the $(3,6)$ and $(5,6)$ codes. By simulating the performance of these two classical codes with the BP decoder, we observed that the $(5,6)$ codes significantly outperform the $(3,6)$ codes, which agrees with previous studies in classical coding theory \cite{richardson2001capacity, mackay1999good}. Given a $(b,c)$ code of size $n$, the number of encoded bits is $k=\frac{c-b}{c}n$ and the encoding rate for the classical code is $\frac{c-b}{c}$. Hence the BBS codes constructed with a $(b,c)$ classical code have parameters $\llbracket N_{BBS}, K_{BBS} \rrbracket = \llbracket O(n^2), \frac{c-b}{c} n \rrbracket$, and the SHP codes have parameters $\llbracket N_{SHP}, K_{SHP} \rrbracket = \llbracket n^2, (\frac{c-b}{c})^2 n^2 \rrbracket$. In all plots, $n$ is the number of bits/variable nodes for the classical LDPC code, $N$ is the number of physical qubits in the quantum code, $K$ is the number of encoded logical qubits in the quantum code, $D$ is the average distance of the quantum code found through fitting the simulated data to $P_L=A p^D$, and $r$ is the encoding rate of the quantum code. The numerical performance of the BBS codes presented in Figures \ref{fig:BBS}, \ref{fig:BBS_VS_SF} and \ref{fig:BBS_VS_SHP} is obtained using importance sampling to error rates as low as $10^{-4}$, and best-fit lines are plotted in order to extrapolate the codes' behavior to low error regimes. Details of importance sampling can be found in \cite{LiBareAnc2017}. 
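The biregular graphs themselves are straightforward to sample. A minimal configuration-model sketch is shown below; this is our own illustration with hypothetical names, and it omits the multi-edge clean-up that a production generator would include.
\begin{verbatim}
import numpy as np

def sample_biregular_H(n, var_deg, chk_deg, rng=np.random.default_rng()):
    """Sample an m x n parity check matrix from a (var_deg, chk_deg)-biregular
    bipartite graph via the configuration model (illustrative sketch only).
    Requires n * var_deg to be divisible by chk_deg."""
    assert (n * var_deg) % chk_deg == 0
    m = n * var_deg // chk_deg
    var_stubs = np.repeat(np.arange(n), var_deg)   # one stub per variable-node socket
    chk_stubs = np.repeat(np.arange(m), chk_deg)   # one stub per check-node socket
    rng.shuffle(chk_stubs)                         # random matching of the stubs
    H = np.zeros((m, n), dtype=int)
    for v, c in zip(var_stubs, chk_stubs):
        H[c, v] ^= 1                               # parallel edges cancel mod 2
    return H

# e.g. a (5,6) graph with n = 240 variable nodes and m = 200 check nodes:
H = sample_biregular_H(240, 5, 6)
\end{verbatim}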
The numerical performance of the SHP codes presented in Figures \ref{fig:SHP} and \ref{fig:BBS_VS_SHP} is obtained through Monte Carlo simulations at various physical error rates. From FIG. \ref{fig:BBS} we can see that the BBS codes constructed with $(5,6)$ codes have significantly better performance than those constructed with $(3,6)$ codes, as expected given the results on the classical codes. It is clear from FIG. \ref{fig:BBS} that the BBS codes do not have a fault-tolerant threshold, due to the fact that the weight-$2$ gauge operators in the quantum code result in a superexponential scaling of the number of weight-$D$ dressed logical operators. A similar behavior is observed for the SHP codes constructed with $(5,6)$ codes, as shown in FIG.~\ref{fig:SHP}, where the SHP codes also do not exhibit a fault-tolerant threshold. \begin{figure} \caption{Simulating the performance of the SHP codes constructed using $(5,6)$-biregular bipartite graphs. Their average performance per logical qubit is compared to the size $6 \times 6$ surface code. All codes in this plot have encoding rate $1/36$.} \label{fig:SHP} \end{figure} \begin{figure} \caption{Comparing the average performance per logical qubit of the BBS codes to the SHP codes. The BBS and SHP codes with the same $n$ are constructed using the same $(5,6)$-biregular bipartite graph.} \label{fig:BBS_VS_SHP} \end{figure} To benchmark the performance of the BBS codes and SHP codes, we compare them to the surface codes as well as to each other. As previously mentioned, in order to obtain a reasonable comparison we compare the BBS and SHP codes against surface codes of similar encoding rate $r$ by comparing the average error rate of single logical qubits within the same code block to the logical error rate of the surface code. In FIG. \ref{fig:BBS_VS_SF} we are comparing the average logical error rate per logical qubit of the BBS codes constructed with $(5,6)$ codes of sizes $n=240, 300, 360$ to surface codes of sizes $26\times 26, 30 \times 30, 32 \times 32$. Note that the surface code results are simulated using the Union Find decoder \cite{delfosse2017almost}, so that we are comparing a linear time decoding algorithm of the BBS codes to a linear time decoding algorithm of the surface code. The BBS codes have better distances than surface codes of similar encoding rates, but they only outperform the surface codes for physical error rates below $10^{-6}$. Similar results for SHP codes are shown in FIG.~\ref{fig:SHP}. The SHP codes have a constant encoding rate, which is $r=1/36$ when the $(5,6)$ codes are used, so we compare the average error rate for single logical qubits in the SHP codes to a single $6 \times 6$ block of surface code. The SHP codes can have significantly better distance than the surface code of the same encoding rate, but they do not outperform the surface code until physical error rates $p \leq 4 \times 10^{-4}$. Finally, we compare the average performance per logical qubit of the BBS and SHP codes, as shown in FIG.~\ref{fig:BBS_VS_SHP}. The comparison is made between BBS and SHP codes constructed using the exact same $(5,6)$-biregular bipartite graph. While it seems that the SHP codes' average logical qubit performance is slightly worse than that of the BBS codes, bear in mind that the SHP codes have a much higher encoding rate. 
\section{Conclusion} We studied two different constructions of quantum subsystem error-correcting codes using classical linear codes: the Bravyi-Bacon-Shor (BBS) codes and the subsystem hypergraph product (SHP) codes. We reviewed the BBS codes that were introduced in a previous paper \cite{yoder2019optimal}, and presented a construction of the SHP codes that closely parallels that of the hypergraph product codes \cite{tillich2013quantum}. We proposed efficient algorithms to decode the BBS and SHP codes while handling measurement errors by using a modified belief propagation decoder for classical expander codes. We studied the numerical performance of the BBS and SHP codes, and showed that while these codes do not have a fault-tolerant threshold, they have very good distance scaling and encoding rates. When constructed using classical expander codes, the BBS codes have encoding rates $O(1/\sqrt{N})$ and the SHP codes have constant encoding rates dependent on the expander code parameters. If the same classical expander code is used, the resulting SHP codes have even higher encoding rates than the hypergraph product codes. Hence for large block sizes these codes could offer significant savings in terms of resource overhead when trying to achieve a specific logical error rate. It is worth noting that while we are already observing very good logical performance by simulating codes constructed with small biregular bipartite graphs, classical LDPC codes asymptotically become better expander codes and the belief propagation decoder will give much better performance for expander codes of larger block sizes. Therefore, the BBS and SHP codes are worth studying for the purpose of large-scale quantum error correction. Future studies on these codes could include investigating the potential of using large irregular LDPC codes to construct the BBS and SHP codes in order to achieve better logical performance, tailoring the quantum code for biased noise models by using two different classical codes to construct asymmetric BBS and SHP codes, and methods to apply fault-tolerant logical operations within the same code block. \end{document}
\begin{document} \baselineskip=16.3pt \parskip=14pt \begin{center} \section*{Number of Rational Points of the Generalized Hermitian Curves over $\mathbb F_{p^n}$} {\large Emrah Sercan Y{\i}lmaz \footnote {Research supported by Science Foundation Ireland Grant 13/IA/1914} \\ School of Mathematics and Statistics\\ University College Dublin\\ Ireland} \end{center} \subsection*{Abstract} In this paper we consider the curves $H_{k,t}^{(p)} : y^{p^k}+y=x^{p^{kt}+1}$ over $\mathbb F_p$ and find an exact formula for the number of $\F_{p^n}$-rational points on $H_{k,t}^{(p)}$ for all integers $n\ge 1$. We also give the condition under which the $L$-polynomial of a Hermitian curve divides the $L$-polynomial of another Hermitian curve over $\F_p$. \textbf{Keywords:} Maximal curves, Hermitian curves, Point counting, $L$-polynomials \section{Introduction} Let $q$ be a prime power and $\mathbb{F}_q$ be the finite field with $q$ elements. Let $X$ be a projective smooth absolutely irreducible curve of genus $g$ defined over $\mathbb{F}_q$. Consider the $L$-polynomial of the curve $X$ over $\mathbb F_{q}$, defined by $$L_{X/\mathbb{F}_q}(T)=L_X(T)=\exp\left( \sum_{n=1}^\infty ( \#X(\mathbb F_{q^n}) - q^n - 1 )\frac{T^n}{n} \right),$$ where $\#X(\mathbb F_{q^n})$ denotes the number of $\mathbb F_{q^n}$-rational points of $X$. It is well known that $L_X(T)$ is a polynomial of degree $2g$ with integer coefficients, so we write it as \begin{equation} \label{L-poly} L_X(T)= \sum_{i=0}^{2g} c_i T^i, \ c_i \in \mathbb Z. \end{equation} It is also well known that $c_0=1$ and $c_{2g}=q^g$. Let $k$ be a positive integer and $t$ be a nonnegative integer. In this paper we will study the curves $$H_{k,t}^{(p)} :y^{p^k}+y=x^{p^{kt}+1}$$ which are defined over $\mathbb F_p$. The curve $H_{k,t}^{(p)}$ has genus $p^{kt}(p^{k}-1)/2$ except for $(p,t)=(2,0)$; if $(p,t)=(2,0)$, the genus of $H_{k,t}^{(p)}$ is $0$. In this paper we will prove the following theorem, which gives the number of rational points over all extensions of $\F_p$. \begin{thm}\label{thm-Hkt-point} Let $k=2^vw$ where $w$ is an odd integer. Let $n$ be a positive integer with $d=(n,4kt)$ and $c=(n,4k)$. Then $$-p^{-n/2}[\#H_{k,t}(\mathbb F_{p^n})-(p^n+1)]= \begin{cases} 0 &\text{ if }\ 2^{v+1} \nmid d \text{ and } d \mid kt,\\ \epsilon\cdot(p^{c/2}-1) &\text{ if }\ 2^{v+1} \mid\mid d \text{ and } d \mid kt,\\ \epsilon\cdot(p^{c/4}-1) &\text{ if }\ 2^{v+2} \mid d \text{ and } d \mid kt,\\ p^{c}-1 &\text{ if }\ d\nmid kt, \frac d2 \mid kt \text{ and $t$ is even },\\ -(p^{c}-1)(p^{d/2}-1) &\text{ if }\ d\nmid kt, \frac d2 \mid kt \text{ and $t$ is odd },\\ (p^{c}-1)(p^{d/4}-1) &\text{ if }\ d\nmid 2kt, d \mid 4kt \end{cases}$$ where $\epsilon=0$ if $p=2$ and $\epsilon=1$ if $p$ is odd. \end{thm} When $t=1$ this curve is very well known to be a maximal curve over $\F_{p^{2k}}$. We have calculated the number of rational points over all extensions of $\F_p$. We will write $H_{k}^{(p)}$ for $H_{k,1}^{(p)}$. \begin{cor}\label{cor-Hk-point} Let $k=2^vw$ where $w$ is an odd integer. Let $n$ be a positive integer with $d=(n,4k)$. Then $$-p^{-n/2}[\#H_k^{(p)}(\mathbb F_{p^n})-(p^n+1)]= \begin{cases} 0 &\text{ if }\ 2^{v+1} \nmid d,\\ -p^{d/2}(p^{d/2}-1) &\text{ if }\ 2^{v+1} \mid\mid d,\\ p^{d/4}(p^{d/4}-1) &\text{ if }\ 2^{v+2} \mid d. \end{cases}$$ \end{cor} We will usually drop the superscript in $H_{k,t}^{(p)}$ and write $H_{k,t}$. Throughout the paper $\left(\frac{\cdot }{p}\right)$ denotes the Legendre symbol. In Section 2, we will give some background which we will use in our proofs.
In Section 3, we will give the proofs of two general tools which are useful for proving our results. In Section 4, we will find the number of $\F_{p^n}$-rational points of $H_{k,0}$ where $p$ is an odd prime. In Section 5, we will find the number of $\F_{p^n}$-rational points of $H_{k,t}$ where $p$ is an odd prime. In Section 6, we will find the number of $\F_{p^n}$-rational points of $H_{k,t}$ where $p=2$. In Sections 7 and 8, we will give some remarks on maximal curves which are related to our previous results. In Section 9, we will determine under which condition the $L$-polynomial of a Hermitian curve divides the $L$-polynomial of another Hermitian curve over $\mathbb F_p$. \section{Background} In this section we will give some basic facts that we will use. Some of these facts require $p$ to be odd; some do not. \subsection{More on Curves}\label{morec} Let $X$ be a projective smooth absolutely irreducible curve of genus $g$ defined over $\mathbb{F}_q$. Let $\eta_1,\cdots,\eta_{2g}$ be the roots of the reciprocal of the $L$-polynomial of $X$ over $\mathbb F_{q}$ (sometimes called the Weil numbers of $X$, or Frobenius eigenvalues). Then, for any $n\geq 1$, the number of rational points of $X$ over $\mathbb F_{q^{n}}$ is given by \begin{equation}\label{eqn-sum of roots} \#X(\mathbb F_{q^{n}})=(q^{n}+1)- \sum\limits_{i=1}^{2g}\eta_i^n. \end{equation} The Riemann Hypothesis for curves over finite fields states that $|\eta_i|=\sqrt{q}$ for all $i=1,\ldots,2g$. It follows immediately from this property and \eqref{eqn-sum of roots} that \begin{equation} |\#X(\mathbb F_{q^n})-(q^n+1)|\leq 2g\sqrt{q^n}, \end{equation} which is the Hasse--Weil bound. We call $X(\mathbb F_{q})$ \emph{maximal} if $\eta_i=-\sqrt{q}$ for all $i=1,\cdots,2g$, so the Hasse--Weil upper bound is met. Equivalently, $X(\mathbb F_{q})$ is maximal if and only if $L_X(T)=(1+\sqrt{q} T)^{2g}$. We call $X(\mathbb F_{q})$ \emph{minimal} if $\eta_i=\sqrt{q}$ for all $i=1,\cdots,2g$, so the Hasse--Weil lower bound is met. Equivalently, $X(\mathbb F_{q})$ is minimal if and only if $L_X(T)=(1-\sqrt{q} T)^{2g}$. Note that if $X(\mathbb F_{q})$ is minimal or maximal then $q$ must be a square (i.e.\ writing $q=p^r$, the exponent $r$ must be even). The following properties follow immediately. \begin{prop} \label{minimal-prop} \begin{enumerate} \item If $X(\mathbb F_{q})$ is maximal then $X(\mathbb F_{q^{n}})$ is minimal for even $n$ and maximal for odd $n$. \item If $X(\mathbb F_{q})$ is minimal then $X(\mathbb F_{q^{n}})$ is minimal for all $n$. \end{enumerate} \end{prop} \begin{prop}\label{pureimag} \cite{MY} If $X$ is a curve defined over $\F_q$ and $X(\mathbb F_{q^{2n}})$ is maximal, then $\#X(\mathbb F_{q^n})=q^n+1$. \end{prop} The best known example of a maximal curve is $H_k$ over $\mathbb F_{p^{2k}}$. \subsection{Supersingular Curves}\label{supsing} A curve $X$ of genus $g$ defined over $\mathbb F_q$ ($q=p^r$) is \emph{supersingular} if any of the following equivalent properties hold. \begin{enumerate} \item All Weil numbers of $X$ have the form $\eta_i = \sqrt{q}\cdot \zeta_i$ where $\zeta_i$ is a root of unity. \item The Newton polygon of $X$ is a straight line of slope $1/2$. \item The Jacobian of $X$ is geometrically isogenous to $E^g$ where $E$ is a supersingular elliptic curve.
\item If $X$ has $L$-polynomial $L_X(T)=1+\sum\limits_{i=1}^{2g} c_iT^i$ then $$ord_p(c_i)\geq \frac{ir}{2}, \ \mbox{for all $i=1,\ldots ,2g$.}$$ \end{enumerate} By the first property, a supersingular curve defined over $\mathbb F_q$ becomes minimal over some finite extension of $\mathbb F_q$. Conversely, any minimal or maximal curve is supersingular. \subsection{Quadratic forms}\label{QF} We now recall the basic theory of quadratic forms over $\mathbb{F}_{q}$, where $q$ is odd. Let $K=\mathbb{F}_{q^n}$, and let $Q:K\longrightarrow \mathbb{F}_{q}$ be a quadratic form. The polarization of $Q$ is the symplectic bilinear form $B$ defined by $B(x,y)=Q(x+y)-Q(x)-Q(y)$. By definition the radical of $B$ (denoted $W$) is $ W =\{ x\in K : B(x,y)=0 \text{ for all $y\in K$}\}$. The rank of $B$ is defined to be $n-\dim(W)$. The rank of $Q$ is defined to be the rank of $B$. The following result is well known, see Chapter $6$ of \cite{lidl} for example. \begin{prop}\label{counts} Continue the above notation. Let $N=|\{x\in K : Q(x)=0\}|$, and let $w=\dim(W)$. If $Q$ has odd rank then $N=q^{n-1}$; if $Q$ has even rank then $N=q^{n-1}\pm (q-1)q^{(n-2+w)/2}$. \end{prop} In this paper we will be concerned with quadratic forms of the type $Q(x)=\Tr(f(x))$ where $f(x)$ has the form $\sum a_{ij} x^{q^i+q^j}$ for any prime power $q$ and $\Tr$ maps to $\mathbb F_q$. If $N$ is the number of $x\in \mathbb{F}_{q^n}$ with $\Tr(f(x))=0$, then because elements of trace 0 have the form $y^q-y$, finding $N$ is equivalent to finding the exact number of $\mathbb{F}_{q^n}$-rational points on the curve $C: y^q-y=f(x)$. Indeed, \begin{equation}\label{quadpts} \#C(\mathbb F_{q^n})=qN+1. \end{equation} \subsection{Relations on the Number of Rational Points} In this section we state a theorem which allows us to find the number of $\F_{p^n}$-rational points of a supersingular curve by finding the the number of $\F_{p^m}$-rational points only for the divisors $m$ of $s$, where the Weil numbers are $\sqrt{p}$ times an $s$-th root of unity. Note that $s$ is even because equality holds in the Hasse--Weil bound over $\F_{p^s}$. \begin{thm}[\cite{MY2}]\label{reduction-thm} Let $X$ be a supersingular curve of genus $g$ defined over $\mathbb F_q$ with period $s$. Let $n$ be a positive integer, let $\gcd (n,s)=m$ and write $n=m\cdot t$. If $q$ is odd, then we have $$ \#X(\F_{q^n})-(q^n+1)=\begin{cases} q^{(n-m)/2}[\#X(\F_{q^m})-(q^m+1)] &\text{if } m\cdot r \text{ is even},\\ q^{(n-m)/2}[\#X(\F_{q^m})-(q^m+1)]&\text{if } m \cdot r \text{ is odd and } p\mid t, \\ q^{(n-m)/2}[\#X(\F_{q^m})-(q^m+1)]\left(\frac{(-1)^{(t-1)/2}t}{p}\right)&\text{if } m \cdot r \text{ is odd and } p\nmid t. \end{cases}$$ If $q$ is even, then we have $$ \#X(\F_{q^n})-(q^n+1)=\begin{cases} q^{(n-m)/2}[\#X(\F_{q^m})-(q^m+1)] &\text{if } m\cdot r \text{ is even},\\ q^{(n-m)/2}[\#X(\F_{q^m})-(q^m+1)](-1)^{(t^2-1)/8}&\text{if } m \cdot r \text{ is odd}.\\ \end{cases}$$ \end{thm} \subsection{Discrete Fourier Transform} In this section we recall the statement of the Discrete Fourier Transform and its inverse. \begin{prop}[Inverse Discrete Fourier Transform] Let $N$ be a positive integer and let $w_N$ be a primitive $N$-th root of unity over any field where $N$ is invertible. If $$F_n=\sum_{j=0}^{N-1}f_jw_N^{-jn}$$ for $n=0,1\cdots, N-1$ then we have $$f_n=\frac1{N}\sum_{j=0}^{N-1}F_jw_N^{jn}$$ for $n=0,1\cdots, N-1$. \end{prop} \subsection{Divisibilty Theorems} The following theorem is well-known. 
\begin{thm}\label{chapman:KleimanSerre} (Kleiman--Serre) If there is a surjective morphism of curves $C \longrightarrow D$ that is defined over $\mathbb{F}_q$ then $\mathrm{L}_{D}(T)$ divides $\mathrm{L}_C(T)$. \end{thm} If there is a surjective morphism of curves $C \longrightarrow D$ defined over $\mathbb{F}_q$, we will say that $D$ is covered by $C$. However, there are cases where there is no map of curves and yet there is divisibility of $L$-polynomials. We suspect that $H_k^{(p)}$ and $H_{2k}^{(p)}$ provide such a case; see Corollary \ref{divisibility-2k}. \begin{prop}[\cite{MY}]\label{divisiblity-period} Let $C$ and $D$ be supersingular curves over $\mathbb F_q$. If $L(C)$ divides $L(D)$, then $s_C$ divides $s_D$, where $s_C$ and $s_D$ denote the periods of $C$ and $D$. \end{prop} \section{The Image of the Maps $y\to y^{p^k}\pm y$ over $\F_{p^n}$} In this section, we will give some useful tools. We will use these tools in Sections \ref{sectHk} and \ref{sectHk-even}. \begin{lemma}\label{prop-y^p-y} Let $n$ and $k$ be integers with $(n,k)=d$. For an integer $m\ge 1$ define $$S_m=\{y^{p^m}-y \ : \ y \in \mathbb F_{p^n}\}.$$ Then the set equality $S_k=S_d$ holds. \end{lemma} \begin{proof} Since both $y\to y^{p^k}-y$ and $y\to y^{p^d}-y$ are additive group homomorphisms from $\mathbb F_{p^n}$ to $\mathbb F_{p^n}$ with kernel $\mathbb F_{p^d}$, the cardinalities of $S_k$ and $S_d$ are equal. Since $$y^{p^k}-y=\left(\sum_{i=0}^{k/d-1}y^{p^{di}}\right)^{p^d}-\sum_{i=0}^{k/d-1}y^{p^{di}},$$ we have $S_k\subseteq S_d$ and therefore $S_k=S_d$. \end{proof} Note that if $p=2$, we will prefer to use Lemma \ref{prop-y^p-y} instead of the following lemma. \begin{lemma}\label{prop-y^p+y} Let $n$ and $k$ be integers with $(n,k)=d$. Suppose that $k/d$ is odd and $n/d$ is even. Then $$\{y^{p^d}+y \ : \ y \in \mathbb F_{p^n}\}=\{y^{p^k}+y \ : \ y \in \mathbb F_{p^n}\}.$$ \end{lemma} \begin{proof} Since $n/d$ is even, there exists $\mu \in \mathbb F_{p^n}^\times$ such that $\mu^{p^d}=-\mu$. Since $k/d$ is odd, we also have $\mu^{p^k}=-\mu$. We have $$\{y^{p^d}-y \ : \ y \in \mathbb F_{p^n}\}=\{y^{p^k}-y \ : \ y \in \mathbb F_{p^n}\}$$ by Lemma \ref{prop-y^p-y}. If we multiply both sets by $-\mu^{-1}$, then $$\{-\mu^{-1}(y^{p^d}-y) \ : \ y \in \mathbb F_{p^n}\}=\{-\mu^{-1}(y^{p^k}-y) \ : \ y \in \mathbb F_{p^n}\}$$ holds. Since $\{\mu y \ : \ y \in \mathbb F_{p^n}\}=\{y \ : \ y \in \mathbb F_{p^n}\}$, $$\{-\mu^{-1}((\mu y)^{p^d}-(\mu y)) \ : \ y \in \mathbb F_{p^n}\}=\{-\mu^{-1}((\mu y)^{p^k}-(\mu y)) \ : \ y \in \mathbb F_{p^n}\}$$ holds. Simply put, $$\{y^{p^d}+y \ : \ y \in \mathbb F_{p^n}\}=\{y^{p^k}+y \ : \ y \in \mathbb F_{p^n}\}$$ holds. \end{proof} \section{The Number of $\mathbb F_{p^n}$-Rational Points of $y^{p^k}+y=x^2$ where $p$ is odd}\label{sectHk0} Let $p$ be an odd prime. In this section we will find the number of rational points of $H_{k,0}$ over $\F_{p^n}$ for all $n\ge 1$, where $k$ is a positive integer, and prove Theorem \ref{thm-Hk0-point}. \begin{lemma}\label{lemma-Hk-max-min-x^2} The curve $H_{k,0}$ is maximal over $\F_{p^{2k}}$ and minimal over $\F_{p^{4k}}$. \end{lemma} \begin{proof} Let $\mu$ be a nonzero element of $\mathbb F_{p^{2k}}$ such that $\mu^{p^k}=-\mu$. Since $(x,y) \mapsto(-\mu x,-\mu y)$ is a one-to-one map from $\mathbb F_{p^{2k}}^2$ to $\mathbb F_{p^{2k}}^2$, the number $\#H_{k,0}(\F_{p^{2k}})$ equals the number of $\F_{p^{2k}}$-rational points of $$G_{k,0,\mu}: y^{p^k}-y=\mu x^{2}.$$ We have $$ \Tr_{\F_{p^{2k}}/\F_{p^k}}(\mu x^{2})=\mu x^{2}+\mu^{p^{k}}x^{2p^k}=\mu (x^2-x^{2p^k}) $$ for all $x\in \F_{p^{2k}}$.
Since $\deg (x^{2(p^k-1)}-1,x^{p^{2k}-1}-1)=2(p^k-1)$, we have that the number of $\mathbb F_{p^{2kt}}$-rational points of $G_{k,d,\mu}$ is $$1+p^k(1+2(p^k-1))=p^{2k}+1+(p^k-1)p^k.$$ by (\ref{quadpts}). Therefore, we have $$H_{k,t}(\F_{p^{2kt}})=G_{k,t,\mu}(\F_{p^{2kt}})=(p^{2k}+1)-(p^k-1)p^{k}.$$ Since the genus of the curve $H_{k,0}$ is $(p^k-1)/2$, $H_{k,0}$ is maximal over $\F_{p^{2k}}$ and therefore it is minimal over $\F_{p^{4k}}$ by Proposition \ref{minimal-prop}. \end{proof} \begin{lemma}\label{lem-divides-k-0} Let $d\mid k$. The number of $\mathbb F_{p^d}$-rational points of $H_{k,0}$ is $p^d+1$. \end{lemma} \begin{proof} Since $u^{p^{k}}=u$ for each $u\in \mathbb F_{p^d}$, we have $$0=y^{p^k}+y-x^{p^{k}+1}=2y-x^2.$$ Since for each $x\in \mathbb F_{p^d}$ there exists a unique $y\in \mathbb F_{p^d}$ satisfying the above equality, we have the result. \end{proof} \begin{lemma}\label{lem-divides-2k-0} Let $d\mid k$ with $2d \nmid k$. The number of $\mathbb F_{p^{2d}}$-rational points of $H_{k,0}$ is $p^{2d}+1+(p^d-1)p^d$. \end{lemma} \begin{proof} Write $k=2^ut$ where $u$ is a non-negative integer and $t$ is odd. There exists an odd $e$ such that $k=d\cdot e$ and $e \mid t$. Since $e$ is odd, there exists an integer $f$ such that $e=2f+1$. Let $u \in \mathbb F_{p^{2d}}$. Then $$u^{p^k}=u^{p^{d\cdot e}}=u^{p^{d\cdot(2f+1)}}=u^{p^{2d\cdot f+ d}}=u^{p^d}.$$ Hence we have $$y^{p^k}+y-x^{2}=y^{p^d}+y-x^{2}$$ for all $x,y \in \mathbb F_{p^{2d}}$. Therefore, we have $$\#H_{k,0}(\mathbb F_{p^{2d}})=\#H_{d,0}(\F_{p^{2d}}).$$ Since $H_{d,0}$ is maximal over $\mathbb F_{p^{2d}}$ by Theorem \ref{lemma-Hk-max-min-x^2}, we have the result. \end{proof} \begin{lemma}\label{lem-divides-4k-x^2} Let $d\mid 4k$ with $d \nmid 2k$. The number of $\mathbb F_{p^{d}}$-rational points of $H_{k,0}$ is $p^{d}+1+(p^{d/4}-1)p^{d/2}$. \end{lemma} \begin{proof} Write $d=4e$ where $e$ is an integer and $k=2^ut$ where $u$ is a non-negative integer and $t$ is odd. There exists an odd $f$ such that $k=e\cdot f$ and $f \mid t$. Since $f$ is odd, there exists an integer $g$ such that $e=4g+1$ or $e=4g-1$. \textbf{Case A. $\mathbf{e=4g+1.}$} We have $$u^{p^k}=u^{p^{e\cdot f}}=u^{p^{e\cdot(4g+1)}}=u^{p^{4e\cdot g+ e}}=u^{p^{e}}$$ for all $u \in \mathbb F_{p^{4e}}$. Hence we have $$y^{p^k}+y-x^{p^k+1}=y^{p^e}+y-x^{2}$$ for all $x,y \in \mathbb F_{p^{4e}}$. Therefore, we have $$\#H_{k,0}(\mathbb F_{p^{4e}})=\#H_{e,0}(\F_{p^{4e}}).$$ Since $H_{e,0}$ is minimal over $\mathbb F_{p^{4e}}$ by Lemma \ref{lemma-Hk-max-min-x^2} , we have the result. \textbf{Case B. $\mathbf{e=4g-1.}$} We have $$u^{p^k}=u^{p^{e\cdot f}}=u^{p^{e\cdot(4g-1)}}=u^{p^{4e\cdot (g-1)+3e}}=u^{p^{3e}}$$ for all $u \in \mathbb F_{p^{4e}}$. Hence we have $$y^{p^k}+y-x^{p^k+1}=y^{p^{3e}}+y-x^{p^{3e}+1}=(y^{p^e}+y-x^{p^e+1})^{p^{3e}}$$ for all $x,y \in \mathbb F_{p^{4e}}$. Therefore, we have $$\#H_{k,0}(\mathbb F_{p^{4e}})=\#H_{e,0}(\F_{p^{4e}}).$$ Since $H_{e,0}$ is minimal over $\mathbb F_{p^{d}}$ by Lemma \ref{lemma-Hk-max-min-x^2} , we have the result. \end{proof} \begin{thm}\label{thm-Hk0-point} Let $k=2^vt$ where $t$ is an odd integer. Let $n$ be a positive integer with $d=(n,4k)$. Then $$-p^{-n/2}[\#H_{k,0}(\mathbb F_{p^n})-(p^n+1)]= \begin{cases} 0 &\text{ if }\ 2^{v+1} \nmid d,\\ -(p^{d/2}-1)&\text{ if }\ 2^{v+1} \mid\mid d,\\ p^{d/4}-1 &\text{ if }\ 2^{v+2} \mid d. \end{cases}$$ \end{thm} \begin{proof} It follows by Lemma \ref{lem-divides-k-0}, \ref{lem-divides-2k-0} and \ref{lem-divides-4k-x^2}. 
\end{proof} \section{The Number of $\mathbb F_{p^n}$-Rational Points of $H_{k,t}$ where $p$ is odd}\label{sectHk} Let $p$ be an odd prime. In this section we will find the number of rational points of $H_{k,t}$ over $\F_{p^n}$ for all $n\ge 1$ where $k$ and $t$ are positive integers and mainly prove the Theorem \ref{thm-Hkt-point} for odd primes. \begin{lemma}\label{lemma-Hk-max-min-odd} Let $t$ be an odd integer. Then the curve $H_{k,t}$ is maximal over $\F_{p^{2kt}}$ and minimal over $\F_{p^{4kt}}$. \end{lemma} \begin{proof} Let $\mu$ be a nonzero element in $\mathbb F_{p^{2k}}$ such that $\mu^{p^k}=-\mu.$ Since $(x,y)\mapsto(\mu x,\mu y)$ is an one-to-one map from $\mathbb F_{p^{2kt}}$ to $\mathbb F_{p^{2kt}}$, we have $\#H_k(\F_{p^{2kt}})$ equals to number of $\F_{p^{2kt}}$-rational points of $$G_{k,t,\mu}: y^{p^k}-y=\mu x^{p^{kt}+1}.$$ Since \begin{align*} \Tr_{\F_{p^{2kt}}/\F_{p^k}}(\mu x^{p^{kt}+1}) &=\Tr_{\F_{p^{kt}}/\F_{p^k}}[\Tr_{\F_{p^{2kt}}/\F_{p^{kt}}}(\mu x^{p^{kt}+1}+\mu^{p^{kt}}x^{p^{2kt}+p^{kt}})]\\ &=\Tr_{\F_{p^{kt}}/\F_{p^k}}(x^{p^{kt}+1}(\mu+\mu^{p^{kt}}))\\ &=\Tr_{\F_{p^{kt}}/\F_{p^k}}(0)\\ &=0 \end{align*} for all $x\in \F_{p^{2k}}$, by (\ref{quadpts}) we have $$H_{k,t}(\F_{p^{2kt}})=G_{k,t,\mu}(\F_{p^{2kt}})=1+p^{2kt}\cdot p^{k}=(p^{2kt}+1)+p^{kt}(p^k-1)\sqrt{p^{2kt}}.$$ Since the genus of the curve $H_{k,t}$ is $p^{kt}(p^k-1)/2$, $H_{k,t}$ is maximal over $\F_{p^{2kt}}$ and therefore it is minimal over $\F_{p^{4kt}}$ by Proposition \ref{minimal-prop}. \end{proof} \begin{lemma}\label{lemma-Hk-max-min-even} Let $t$ be an even integer. Then the number of rational points of $H_{k,t}$ over $\F_{p^{2kt}}$ is $p^{2kt}+1-(p^k-1)p^{kt}$ and $H_{k,t}$ is minimal over $\F_{p^{4kt}}$. \end{lemma} \begin{proof} Let $\mu$ be a nonzero element in $\mathbb F_{p^{2k}}$ such that $\mu^{p^k}=-\mu$ as in Lemma \ref{lemma-Hk-max-min-odd}. Since $(-\mu x,-\mu y)$ is an one-to-one map from $\mathbb F_{p^{2kt}}$ to $\mathbb F_{p^{2kt}}$, we have $\#H_k(\F_{p^{2kt}})$ equals to number of $\F_{p^{2kt}}$-rational points of $$G_{k,t,\mu}: y^{p^k}-y=\mu x^{p^{kt}+1}.$$ We have \begin{align*} \Tr_{\F_{p^{2kt}}/\F_{p^k}}(\mu x^{p^{kt}+1}) &=\Tr_{\F_{p^{kt}}/\F_{p^k}}[\Tr_{\F_{p^{2kt}}/\F_{p^{kt}}}(\mu x^{p^{kt}+1}+\mu^{p^{kt}}x^{p^{2kt}+p^{kt}})]\\ &=\Tr_{\F_{p^{kt}}/\F_{p^k}}(x^{p^{kt}+1}(\mu+\mu^{p^{kt}}))\\ &=\Tr_{\F_{p^{kt}}/\F_{p^k}}(2\mu x^{p^{kt}+1}) \end{align*} for all $x\in \F_{p^{2k}}$. Since $x\to x^{p^{kt}+1}$ is $p^{kd}+1$-to-$1$ map from $\mathbb F_{p^{2kt}}^\times$ to $\mathbb F_{p^{kt}}^\times$ and since $\Tr_{\F_{p^{kt}}/\F_{p^k}}(x)$ is a linear map from $\mathbb F_{p^{2kt}}$ to $\mathbb F_{p^k}$, we have that the number of $\mathbb F_{p^{2kt}}$-rational points of $G_{k,d,\mu}$ is $$1+p^k(1+(p^{kt}+1)(p^{k-1}-1))=(p^{2kt}+1)-(p^k-1)p^{kt}.$$ by (\ref{quadpts}). Therefore, we have $$H_{k,t}(\F_{p^{2kt}})=G_{k,t,\mu}(\F_{p^{2kt}})=(p^{2kt}+1)-(p^k-1)p^{kt}.$$ Let $Q(x)=\Tr_{\F_{p^{4kt}}/\F_{p^{k}}}(\mu x^{p^{kt}+1})$. Then \begin{align*} B(x,y) &=Q(x+y)-Q(x)-Q(y)\\ &=\Tr_{\F_{p^{4kt}}/\F_{p^{k}}}(\mu(xy^{p^{kt}}+yx^{p^{kt}}))\\ &=\Tr_{\F_{p^{4kt}}/\F_{p^{k}}}(\mu y^{p^{kt}}(x+x^{p^{2kt}})) \end{align*} is a bilinear form over $\mathbb F_{p^k}$. Then the radical of $B$ is $$W=\{x\in \F_{p^{4kt}} \ | \ x^{p^{2kt}}+x=0 \}.$$ Since $x^{p^{2kt}}+x$ divides $x^{p^{4kt}}-x$, we have $\dim_{\F_{p^k}} W=2t$. 
Since $4t-\dim_{\F_{p^k}} W=2t$ is even, by Proposition \ref{counts} we have $$N=p^{k(4t-1)}\pm (p^k-1)p^{k(4t-2+2t)/2}=p^{4kt-k}\pm (p^k-1)p^{k(3t-1)}.$$ Hence by (\ref{quadpts}) we have $$\#H_{k,t}(\F_{p^{4kt}})=p^{4kt}+1\pm (p^k-1)p^{3kt}=p^{4kt}+1\pm (p^k-1)p^{kt}\sqrt{p^{4kt}}.$$ Therefore, $H_{k,t}$ is either maximal or minimal over $\F_{p^{4kt}}$. It cannot be maximal by Proposition \ref{pureimag}, since $\#H_{k,t}(\F_{p^{2kt}})\neq p^{2kt}+1$. Therefore, it is minimal over $\F_{p^{4kt}}$. \end{proof} \begin{lemma}\label{lem-divides-k} Let $d\mid kt$. The number of $\mathbb F_{p^d}$-rational points of $H_{k,t}$ equals the number of $\mathbb F_{p^d}$-rational points of $H_{k,0}$. \end{lemma} \begin{proof} Since $u^{p^{kt}}=u$ for each $u\in \mathbb F_{p^d}$, we have $$0=y^{p^k}+y-x^{p^{kt}+1}=y^{p^k}+y-x^2.$$ Therefore, we have the result. \end{proof} \begin{lemma}\label{lem-divides-2k} Let $d\mid kt$ with $2d \nmid kt$ and let $(k,d)=c$. The number of $\mathbb F_{p^{2d}}$-rational points of $H_{k,t}$ equals the number of $\mathbb F_{p^{2d}}$-rational points of $H_{c,d/c}$. \end{lemma} \begin{proof} Since $d\mid kt$ and $2d\nmid kt$, the integer $e=kt/d$ is odd, so there exists an integer $f$ such that $e=2f+1$. Let $u \in \mathbb F_{p^{2d}}$. Then $$u^{p^{kt}}=u^{p^{d\cdot e}}=u^{p^{d\cdot(2f+1)}}=u^{p^{2d\cdot f+ d}}=u^{p^d}.$$ Hence we have $$y^{p^k}+y-x^{p^{kt}+1}=y^{p^k}+y-x^{p^d+1}$$ for all $x,y \in \mathbb F_{p^{2d}}$. Since $(2d,k)=(d,k)=c$, and since $2d/c$ is even and $k/c$ is odd, by Lemma \ref{prop-y^p+y}, for all $x,y \in \mathbb F_{p^{2d}}$ there exists $z\in \mathbb F_{p^{2d}}$ such that $$y^{p^k}+y-x^{p^{d}+1}=z^{p^c}+z-x^{p^d+1}.$$ Therefore, we have $\#H_{k,t}(\mathbb F_{p^{2d}})=\#H_{c,d/c}(\F_{p^{2d}})$. \end{proof} \begin{lemma}\label{lem-divides-4k} Let $d\mid 4kt$ with $d \nmid 2kt$ and let $(k,d)=c$. The number of $\mathbb F_{p^{d}}$-rational points of $H_{k,t}$ equals the number of $\mathbb F_{p^{d}}$-rational points of $H_{c,d/4c}$. \end{lemma} \begin{proof} Write $d=4e$ where $e$ is an integer. Since $d\mid 4kt$ and $d\nmid 2kt$, there exists an odd integer $f$ such that $kt=e\cdot f$. Since $f$ is odd, there exists an integer $g$ such that $f=4g+1$ or $f=4g-1$. \textbf{Case A. $\mathbf{f=4g+1.}$} We have $$u^{p^{kt}}=u^{p^{e\cdot f}}=u^{p^{e\cdot(4g+1)}}=u^{p^{4e\cdot g+ e}}=u^{p^{e}}$$ for all $u \in \mathbb F_{p^{4e}}$. Hence we have $$y^{p^k}+y-x^{p^{kt}+1}=y^{p^k}+y-x^{p^e+1}$$ for all $x,y \in \mathbb F_{p^{4e}}$. Since $d/c$ is even and $k/c$ is odd, by Lemma \ref{prop-y^p+y} we have $$y^{p^k}+y-x^{p^{kt}+1}=y^{p^c}+y-x^{p^e+1}$$ for all $x,y \in \mathbb F_{p^{4e}}$. Therefore, we have $\#H_{k,t}(\mathbb F_{p^{d}})=\#H_{c,d/4c}(\F_{p^{d}})$. \textbf{Case B. $\mathbf{f=4g-1.}$} We have $$u^{p^{kt}}=u^{p^{e\cdot f}}=u^{p^{e\cdot(4g-1)}}=u^{p^{4e\cdot (g-1)+3e}}=u^{p^{3e}}$$ for all $u \in \mathbb F_{p^{4e}}$. Hence we have $$y^{p^k}+y-x^{p^{kt}+1}=y^{p^k}+y-x^{p^{3e}+1}$$ for all $x,y \in \mathbb F_{p^{4e}}$. Since $d/c$ is even and $k/c$ is odd, by Lemma \ref{prop-y^p+y} we have $$y^{p^k}+y-x^{p^{kt}+1}=y^{p^c}+y-x^{p^{3e}+1}$$ for all $x,y \in \mathbb F_{p^{4e}}$. Since $(x^{p^{3e}+1})^{p^e}=x^{p^{e}+1}$ for all $x\in \F_{p^{4e}}$ and the map $(x,y) \to (x^{p^e},y)$ from $\F_{p^{4e}}^2$ to $\F_{p^{4e}}^2$ is one-to-one and onto, we have $\#H_{k,t}(\mathbb F_{p^{d}})=\#H_{c,d/4c}(\F_{p^{d}})$. \end{proof} Now we can prove Theorem \ref{thm-Hkt-point} for odd primes.
\begin{proof}[Proof of Theorem \ref{thm-Hkt-point} where $p$ is odd] It follows from Lemmas \ref{lem-divides-k}, \ref{lem-divides-2k} and \ref{lem-divides-4k} and Theorem \ref{reduction-thm}. \end{proof} \section{The Number of $\mathbb F_{p^n}$-Rational Points of $H_{k,t}$ where $p=2$}\label{sectHk-even} This section is similar to the previous section. We will clarify what the differences are and prove Theorem \ref{thm-Hkt-point} for $p=2$. \begin{lemma}\label{lem-p=2-x^2} Let $k$ and $n$ be integers. The cardinality of the set $\{(x,y) \in \F_{2^n}^2 \: : \: y^{2^k}+y=x^2\}$ is $2^n$. \end{lemma} \begin{proof} Since the map $x\mapsto x^2$ is an automorphism of $\mathbb F_q$ when $q$ is even, we have $$|\{(x,y) \in \F_{2^n}^2 \: : \: y^{2^k}+y=x^2\}|=|\{(x,y) \in \F_{2^n}^2 \: : \: y^{2^k}+y=x\}|.$$ Moreover, by Lemma \ref{prop-y^p-y}, we have $$|\{(x,y) \in \F_{2^n}^2 \: : \: y^{2^k}+y=x\}|=|\{(x,y) \in \F_{2^n}^2 \: : \: y^{2^d}+y=x\}|$$ where $d=(n,k)$. This cardinality equals $$2^d \cdot |\{x \in \F_{2^n} \: : \: \Tr_{\F_{2^{n}}/\F_{2^d}}(x)=0\}|=2^d\cdot 2^{n-d}=2^n.$$ This finishes the proof. \end{proof} Note that the result of Lemma \ref{lem-p=2-x^2} also follows from the fact that the genus of $H_{k,0}$ is $0$. \begin{lemma} Let $k$ and $t$ be integers. Then $H_{k,t}$ is maximal over $\F_{2^{2kt}}$ and minimal over $\F_{2^{4kt}}$. \end{lemma} \begin{proof} Note that every root of $x^{2^k}+x$ lies in $\F_{2^k}$ and that $2x=0$ for all such $x$ since we are in characteristic $2$. The proof is now similar to the proof of Lemma \ref{lemma-Hk-max-min-odd}. \end{proof} \begin{lemma}\label{lem-all-p=2} The following statements hold.\begin{itemize} \item[1.] Let $d\mid kt$. The number of $\mathbb F_{2^d}$-rational points of $H_{k,t}$ equals the number of $\mathbb F_{2^d}$-rational points of $H_{k,0}$. \item[2.] Let $d\mid kt$ with $2d \nmid kt$ and let $(k,d)=c$. The number of $\mathbb F_{2^{2d}}$-rational points of $H_{k,t}$ equals the number of $\mathbb F_{2^{2d}}$-rational points of $H_{c,d/c}$. \item[3.] Let $d\mid 4kt$ with $d \nmid 2kt$ and let $(k,d)=c$. The number of $\mathbb F_{2^{d}}$-rational points of $H_{k,t}$ equals the number of $\mathbb F_{2^{d}}$-rational points of $H_{c,d/4c}$. \end{itemize} \end{lemma} \begin{proof} The proofs are respectively the same as the proofs of Lemmas \ref{lem-divides-k}, \ref{lem-divides-2k} and \ref{lem-divides-4k}. \end{proof} Now we can prove Theorem \ref{thm-Hkt-point} for $p=2$. \begin{proof}[Proof of Theorem \ref{thm-Hkt-point} where $p=2$] It follows from Lemma \ref{lem-all-p=2} and Theorem \ref{reduction-thm}. \end{proof} \section{Remarks on Maps between the Curves $H_{k,t}$} The Hermitian curve $H_k$ is well known to be a maximal curve over $\mathbb F_{p^{2k}}$; we also proved this fact in Lemma \ref{lemma-Hk-max-min-odd}. It is important to find the covering relationships between our curves $H_{k,t}$ and the well-known Hermitian curves $H_k$. In the following propositions we will give coverings involving $H_{k,t}$ and show that each $H_{k,t}$ covers and is covered by some Hermitian curve. \begin{prop} Let $k$ be an integer and $t$ be an odd integer. Then we have the following map: $$\begin{matrix} H_{k,t} &\to &H_{k}\\ (x,y) &\mapsto &(x^{(p^{kt}+1)/(p^k+1)},y). \end{matrix}$$ \end{prop} \begin{prop} Let $k$ be an integer and $t$ be an odd integer. Then we have the following map: $$\begin{matrix} H_{kt} &\to &H_{k,t}\\ (x,y) &\mapsto &(x,\sum\limits_{i=0}^{t-1}(-1)^iy^{p^{ki}}). \end{matrix}$$ \end{prop} \begin{prop} Let $p=2$ and let $k$ and $t$ be integers.
Then we have the following map $$\begin{matrix} H_{kt} &\to &H_{k,t}\\ (x,y) &\mapsto &(x,\sum\limits_{i=0}^{t-1}y^{p^{ki}}). \end{matrix}$$ \end{prop} The proofs are simple verification. \section{Remarks on Maximal Curves} Assume $t$ is odd if $p$ is odd. Since $H_{k,t}$ is a maximal curve over $\mathbb F_{q^{2kt}}$, any curve covered by $H_{k,t}$ is a maximal curve over $\mathbb F_{q^{2kt}}$. This follows easily from Theorem \ref{chapman:KleimanSerre}. This fact allows us to state the following theorems. \begin{prop} Let $k$ and $t$ be positive integers and assume $t$ is odd if $p$ is odd. Let $m\ge 2$ be a positive integer dividing $p^{kt}+1$. Then the curve $$C: y^{p^k}+y=x^m$$ is a maximal curve over $\mathbb F_{q^{2kt}}$. \end{prop} If we apply the method in the proof of Lemma \ref{lemma-Hk-max-min-odd}, we have the following result. \begin{prop} Let $k$ and $t$ be positive integers and assume $t$ is odd if $p$ is odd. Let $m\ge 2$ be a positive integer dividing $p^{kt}+1$. Let $\mu$ be a nonzero root of $x^q+x=0$. Then the curve $$C: y^{p^k}-y=\mu x^m$$ is a maximal curve over $\mathbb F_{q^{2kt}}$. \end{prop} \section{Divisibility Property of the Curves $H_k$} In this section, we will interest in $L$-polynomials of Hermitian curves and prove the Theorems \ref{Hkdiv} and \ref{Hknotdiv}. Similar proofs can be given for the curves $H_{k,t}$. \begin{lemma}\label{divisibility-lemma-un} Let $k$ be a positive integer. For $n\geq 1$ define $$U_n=-p^{-n/2}[\#H_{2k}(\mathbb F_{p^n})-\#H_{k}(\mathbb F_{p^n})]$$and write $U_n$ as a linear combination of the $8k$-th roots of unity as $$U_n=\sum_{j=0}^{8k-1}u_jw_{s}^{-jn}.$$ Then we have $$u_n \ge 0$$ for all $n \in \{ 0,1,\cdots, 8k-1\}$. \end{lemma} \begin{proof} Let $U_0=U_s$. Write $k=2^vz$ where $v$ is a positive integer and $z$ is an odd integer. By Corollary \ref{cor-Hk-point} we have $$U_n=\begin{cases} 0 &\text{if } 2^{v+1} \nmid n, \\ p^{(n,k)}(p^{(n,k)}-1) &\text{if } 2^{v+1} \mid \mid n,\\ 0 &\text{if } 2^{v+2} \mid \mid n,\\ p^{(n,2k)}(p^{(n,2k)}-1)- p^{(n,k)}(p^{(n,k)}-1) &\text{if } 2^{v+3} \mid n. \end{cases}$$ Therefore, by Inverse Fourier Transform we have that \begin{align*} u_{n}&=\frac{1}{8k}\sum_{j=0}^{8k-1}U_jw_{8k}^{jn}\\ &=\frac{1}{8k}\left[\sum_{j=0}^{2z-1}U_{2^{v+1}(2j+1)}w_{4z}^{jn} +\sum_{j=0}^{z-1}U_{2^{v+3}j}w_{2z}^{jn} \right] \quad \textrm{because $U_n=0$ if $2^{v+1} \nmid n$ or $2^{v+2} \mid \mid n$}\\ &\ge \frac{1}{8k}\left(U_0- \sum_{j=1}^{x-1}|U_{2^{v+3}j}| -\sum_{j=1}^{2z-1}|U_{2^{v+1}(2j+1)}| \right) \quad \textrm{by the triangle inequality}\\ &\ge \frac{1}{8k}\left[p^{2k}(p^{2k}-1)-3zp^k(p^k-1)\right] \quad \textrm{because $U_s=U_0=p^{2k}(p^{2k}-1)-p^k(p^k-1)$}\\ & \qquad \qquad \qquad \qquad \qquad \qquad\qquad \qquad \qquad \qquad \textrm{ and all others are $\le p^k(p^k-1)$} \\ &= \frac{1}{8k}p^k(p^k-1)(p^k(p^k+1)-3z)\\&\ge 0. \end{align*} This finishes the proof. \end{proof} We write $L(H_k)$ for $L_{H_k}$. \begin{cor}\label{divisibility-2k} Let $k$ be a positive integer. Then $$L(H_k) \mid L(H_{2k}).$$ \end{cor} \begin{proof} Lemma \ref{divisibility-lemma-un} shows that the multiplicity of each root of $L(H_k)$ is smaller than or equal to its multiplicity as a root of $L(H_{2k})$. \end{proof} We do not know if there is a map from $H_{2k}$ to $H_k$. \begin{lemma}\label{divisiblity-odd} Let $k$ be an integer and $t$ be an odd integer. 
Then $$L(H_k)\mid L(H_{kt}).$$ \end{lemma} \begin{proof} Since $t$ is odd, $p^k+1$ divides $p^{kt}+1$ and therefore there is a map of curves $H_{kt} \longrightarrow H_k$ given by $$(x,y)\to \left(x^{(p^{kt}+1)/(p^k+1)},\sum_{i=0}^{t-1}(-1)^iy^{p^{ik}}\right).$$ Hence $L(H_k) \mid L(H_{kt})$ by Theorem \ref{chapman:KleimanSerre}. \end{proof} \begin{thm}\label{Hkdiv} Let $k$ and $m$ be positive integers. Then $$L(H_k) \mid L(H_{km}).$$ \end{thm} \begin{proof} If $m=1$, then the result is trivial. Assume $m\ge 2$ and write $m=2^st$ where $t$ is odd. Since $t$ is odd, by Lemma \ref{divisiblity-odd} we have $$L(H_k)\mid L(H_{kt}),$$ and by Corollary \ref{divisibility-2k} we have $$L(H_{2^{i-1}kt})\mid L(H_{2^{i}kt})$$ for all $i\in\{1,\cdots,s\}$. Hence $$L(H_k) \mid L(H_{km}).$$ \end{proof} \begin{thm}\label{Hknotdiv} Let $k$ and $\ell$ be positive integers such that $k$ does not divide $\ell$. Then $L(H_k)$ does not divide $L(H_{\ell})$. \end{thm} \begin{proof} The period of $H_k$ is $4k$ and the period of $H_\ell$ is $4\ell$. Since $4k$ does not divide $4\ell$, $L(H_k)$ does not divide $L(H_\ell)$ by Proposition \ref{divisiblity-period}. \end{proof} \end{document}
\begin{document} \title{Relabelling in Bayesian mixture models by pivotal units} \author{Leonardo Egidi\thanks{Dipartimento di Scienze Statistiche, Universit\`a degli Studi di Padova, Italy, e-mail:[email protected]} \and Roberta Pappad\`{a} \thanks{Dipartimento di Scienze Economiche, Aziendali, Matematiche e Statistiche `Bruno de Finetti', Universit\`{a} degli Studi di Trieste, Via Tigor 22, 34124 Trieste, Italy, e-mail: [email protected], [email protected], [email protected]} \and Francesco Pauli\footnotemark[2] \and Nicola Torelli\footnotemark[2]} \date{} \maketitle \begin{abstract} In this paper a simple procedure to deal with label switching when exploring complex posterior distributions by MCMC algorithms is proposed. Although it cannot be generalized to any situation, it may be handy in many applications because of its simplicity and very low computational burden. A possible area where it proves to be useful is when deriving a sample for the posterior distribution arising from finite mixture models, when no simple or rational ordering between the components is available. \end{abstract} \section{Introduction} Label switching is a well-known and fundamental problem in Bayesian estimation of finite mixture models \citep{McPeel2000}. The label switching problem arises when exploring complex posterior distributions by Markov Chain Monte Carlo (MCMC) algorithms because the likelihood of the model is invariant to the relabelling of mixture components. Since there are as many maxima as there are permutations of the indices ($G!$), the likelihood has then multiple global maxima. This is a minor problem (if a problem at all) when we perform classical inference, since any maximum leads to a valid solution and inferential conclusions are the same regardless of which one is chosen. On the contrary, invariance with respect to labels is a major problem when Bayesian inference is used. If the prior distribution is invariant with respect to the labelling as well as the likelihood, then the posterior distribution is multimodal. To make inference on a parameter specific of a component of the mixture, a sample from the posterior that represent different modes would be inappropriate. An actual MCMC sample may or may not switch labels depending on the efficiency of the sampler. If the raw MCMC sampler randomly switches labels, then it is unsuitable for exploring the posterior distributions for component-related parameters. A range of solutions has been proposed to perform inference in presence of label switching \citep{fruhwirth2001markov, stephens2000dealing}, but for complex models most of the existing procedures are complex and computationally expensive. A full solution entails obtaining valid samples for each parameter, and the methods in Section~\ref{sec:relabeling} are designed to relabel the raw Markov chains for this purpose. Simpler solutions are available if we do not need posterior samples for all the parameters. In this paper a simple procedure based on the post MCMC relabelling of the chains to deal with label switching when exploring complex posterior distributions by MCMC algorithms is proposed. As pointed out in Section~\ref{sec:mcmc}, we can totally ignore the relabelling if the quantities of interest are label invariant. 
Besides the extreme case of label invariant quantities, we illustrate in Section~\ref{sec:clustering} and \ref{sec:probgroups} how to obtain a clustering and even a matrix of probabilities of units belonging to groups, using the raw MCMC sample without the need to fully relabel it. In Section~\ref{sec:pivotal} we propose a method which performs a relabelling starting from a suitable clustering of the samples, with the aim of using an MCMC sample to infer on the characteristics of the components in terms of both probabilities of each unit being in each group and the group parameters. The performance of the algorithm is explored via the simulation study discussed in Section~\ref{sim:study} and a case study on real data is presented in Section~\ref{case:study}. Section~\ref{conc} concludes. \newcommand{{\bm \mu}}{{\bm \mu}} \newcommand{{\bm \pi}}{{\bm \pi}} \newcommand{\ca}[1]{\ensuremath{[#1]_h}} \newcommand{\uno}[1]{\ensuremath{\left|#1\right|}} \newcommand{\ensuremath{\mathcal D}}{\ensuremath{\mathcal D}} \section{The relabelling problem} \label{sec:problem} Prototypical models in which the labelling issue arises are mixture models, where, for a sample ${\bm y}=(y_1,\ldots,y_n)$ we assume \begin{equation*} (Y_i|Z_i=g) \thicksim f(y;\mu_g,\phi), \label{eq:ydist} \end{equation*} where the $Z_i$, $i=1,\ldots,n$, are i.i.d.\ random variables and \begin{equation*} Z_i\in\{1,\ldots,G\},\;\; P(Z_i=g)=\pi_g. \label{eq:gdist} \end{equation*} The likelihood of the model is then \begin{equation} L({\bm y};{\bm \mu},{\bm \pi},\phi) = \prod_{i=1}^n \sum_{g=1}^G \pi_g f(y_i;\mu_g,\phi), \label{eq:lik} \end{equation} with ${\bm \mu}=(\mu_{1},\dots,\mu_{G})$ component-specific parameters and ${\bm \pi}=(\pi_{1},\dots,\pi_{G})$ mixture weights. Equation~\eqref{eq:lik} is invariant under a permutation of the indices of the groups, that is, if $(j_1,\ldots,j_G)$ is a permutation of $(1,\ldots,G)$ and ${\bm \pi}'=(\pi_{j_1},\ldots,\pi_{j_G}) $, ${\bm \mu}'=(\mu_{j_1},\ldots,\mu_{j_G}) $ are the corresponding permutations of ${\bm \pi}$ and ${\bm \mu}$, then \begin{equation} L({\bm y};{\bm \mu},{\bm \pi},\phi) = L({\bm y};{\bm \mu}',{\bm \pi}',\phi). \label{eq:invlik} \end{equation} As a consequence, the model is unidentified with respect to an arbitrary permutation of the labels. When Bayesian inference for the model is performed, if the prior distribution $p_0({\bm \mu},{\bm \pi},\phi)$ is invariant under a permutation of the indices, that is $p_0({\bm \mu},{\bm \pi},\phi) = p_0({\bm \mu}',{\bm \pi}',\phi)$, then so is the posterior \begin{equation} p({\bm \mu},{\bm \pi},\phi|{\bm y}) \propto p_0({\bm \mu},{\bm \pi},\phi)L({\bm y};{\bm \mu},{\bm \pi},\phi), \end{equation} which is then multimodal with (at least) $G!$ modes. This implies that all simulated parameters should be switched to one among the $G!$ symmetric areas of the posterior distribution, by applying suitable permutations of the labels to each MCMC draw. \subsection{Relabelling and label switching in MCMC sampling} \label{sec:mcmc} In the following we assume that we obtained an MCMC sample from the posterior distribution for model (\ref{eq:lik}) with a prior which is labelling invariant. We denote as $\{\ca{\theta}:h=1,\ldots,H\}$ the sample for the parameter $\theta=({\bm \mu},{\bm \pi},\phi)$. We assume that also the $Z$ variable is MCMC sampled and denote as $\{\ca{Z}:h=1,\ldots,H\}$ the corresponding sample. 
In principle, a perfectly mixing chain should visit the points $({\bm \mu},{\bm \pi},\phi)$ and $({\bm \mu}',{\bm \pi}',\phi)$ with the same frequency. A chain with these characteristics for a model with $G=2$ and where $f(\cdot;\mu_g,\phi)$ is the Gaussian distribution with parameters $\mu_g$ and $\phi$, $\mathcal N(\mu_g,\phi)$, is depicted in Figure~\ref{fig:mcmcexample}{(a)}, together with the posterior distribution for ${\bm \mu}$. A chain with a less than perfect mixing may either concentrate on one mode of the posterior distribution (Figure~\ref{fig:mcmcexample}{(b)}) or exhibit random switches (Figure~\ref{fig:mcmcexample}{(c)}). A naive, but effective, solution to the relabelling issue is to use a sampler which is inefficient with respect to the labelling -- that is, it is unlikely to switch labels -- but otherwise efficient \citep{puolamaki2009bayesian}. This can be an {\it ex post} solution, that is, we can ignore the relabelling issue if we verify that we obtained a chain where no switch occurred, but it is impractical in general terms since it is difficult to tune a sampler so that it is inefficient enough to avoid label switches but not too inefficient. We note that the presence of label switches (or the whole issue of relabelling) is totally not relevant if the quantities we are interested in are invariant with respect to the labels, as is the case for a prediction of $(y_1,y_2)$ (depicted in Figure~\ref{fig:mcmcexamplePost}, top row), or the inference for the parameter $\phi$. A particularly relevant example of invariant quantity is the probability of two units being in the same group, $c_{ij}=P(Z_i=Z_j|\ensuremath{\mathcal D})$, $i,j=1,\ldots,n$, whose estimate based on the sample is \begin{equation} \hat{c}_{ij} = \frac{1}{H} \sum_{h=1}^H \uno{\ca{Z_i}=\ca{Z_j}}. \label{eq:Cmatrix} \end{equation} The $n\times n$ matrix $C$ with elements $\hat{c}_{ij}$ can be seen as an estimated similarity matrix between units, and the complement to one $\hat{s}_{ij}=1-\hat{c}_{ij}$ as a dissimilarity matrix (note that it is not a distance metric as $s_{ij}=0$ does not imply that the units $i$ and $j$ are the same). Relabelling becomes relevant when we are interested, directly or indirectly, in the features of the $G$ groups, for example the posterior (and predictive) distributions of component-related quantities such as the difference $\mu_2-\mu_1$ or the probability of each unit belonging to each group, $q_{ig}=P(Z_i=g|\ensuremath{\mathcal D})$, whose MCMC estimate is \begin{equation} \hat{q}_{ig} = \frac{1}{H} \sum_{h=1}^H \uno{\ca{Z_i}=g}, \label{eq:probappartstimata} \end{equation} for $i=1,\ldots,n$ and $g=1,\ldots,G$. In Figure~\ref{fig:mcmcexamplePost}, bottom row, we depict the posterior distribution of $\mu_2-\mu_1$ based on the samples $\{\ca{\mu_2}-\ca{\mu_1}:h=1,\ldots,H\}$ obtained using the three chains. The first version is formally correct given that the model is not identified, but it is not able to tell us what is the average difference between the groups. The second version does answer to our question on the difference between the groups but is based on a very partial exploration of the posterior. The third version leads to an incorrect answer. It is then clear that the raw MCMC sample can not be used to study the posterior distributions of component-related quantities such as $\mu_g$ or $P(Z_i=g|\ensuremath{\mathcal D})$. 
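As a concrete illustration of the two estimators above, the following minimal Python sketch (not the code used in this paper) computes the matrix of Equation~\eqref{eq:Cmatrix} and the matrix of Equation~\eqref{eq:probappartstimata} from an $H\times n$ array \texttt{Z} whose $h$-th row collects $\ca{Z_1},\ldots,\ca{Z_n}$, with groups labelled $0,\ldots,G-1$.
\begin{verbatim}
import numpy as np

def similarity_matrix(Z):
    """c_hat[i, j]: fraction of iterations in which Z_i = Z_j (label invariant)."""
    H, n = Z.shape
    C = np.zeros((n, n))
    for h in range(H):
        C += (Z[h][:, None] == Z[h][None, :])
    return C / H

def allocation_probabilities(Z, G):
    """q_hat[i, g]: fraction of iterations in which Z_i = g.  Unlike C, this is
    meaningless unless the chain has been relabelled."""
    return np.stack([(Z == g).mean(axis=0) for g in range(G)], axis=1)

# Toy chain with n = 4 units and G = 2 in which the labels switch halfway through.
Z = np.array([[0, 0, 1, 1]] * 50 + [[1, 1, 0, 0]] * 50)
print(similarity_matrix(Z))            # the two blocks of units survive the switch
print(allocation_probabilities(Z, 2))  # every entry equals 0.5: group features are lost
\end{verbatim}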
In order to study the posterior distributions of component-related quantities such as $\mu_g$, we need to define a suitable method to permute the labels at each iteration of the Markov chain, so that the new labels are such that different labels do refer to different components of the mixture. \section{Partitioning observations} \label{sec:clustering} A partition of the observations, meaning a point estimate of the group for each unit, can be easily obtained. Doing this, however, leaves open the issue of obtaining an estimate of the group features (the posteriors of $\mu_g$) or of the probability of each unit belonging to each group ($\hat{q}_{ig}$ in Equation~\eqref{eq:probappartstimata}). In fact, the usual difficulties related to clustering techniques apply (for instance, the groups depend on the choice of the distance). A partition can also be obtained by maximizing the posterior distribution, notwithstanding the fact that the maximum is not unique (there are $G!$ modes): since the maxima are equivalent, any of them would be suitable. Alternatively, the probabilities in Equation~\eqref{eq:Cmatrix} can be used to derive a partition of the observations by employing some clustering technique based on a suitable similarity matrix. A more sophisticated option (see \citet{fritsch2009improved}) involves defining a distance between partitions, for example \begin{equation} d({\bm z}^*,{\bm z})= \sum_{i<k} d_1\uno{z^*_i= z^*_k}\uno{z_i \neq z_k} + d_2\uno{z^*_i \neq z^*_k}\uno{z_i = z_k}, \label{eq:distclusterA} \end{equation} and then searching for the partition which minimizes the expected distance from the true groups $\bar{{\bm z}}$, which means, if $d_1=d_2=1$, finding ${\bm z}^*$ which minimizes \begin{equation} E(d({\bm z}^*,\bar{{\bm z}})|\ensuremath{\mathcal D}) = \sum_{i<k}\big|\uno{z^*_i=z^*_k}-c_{ik}\big|, \label{eq:mindist} \end{equation} where $c_{ik}$ can be replaced by $\hat{c}_{ik}$. Alternative distances between partitions may be used, for instance the Rand index $d_R({\bm z}^*,{\bm z})=1-d({\bm z}^*,{\bm z})\left(\binom{n}{2}\right)^{-1}$ or the adjusted Rand index \citep{hubert1985comparing}. Note that if the distance function is a linear operator then the following holds: \begin{equation} E(d({\bm z}^*,{\bm z})|\ensuremath{\mathcal D}) = d\left({\bm z}^*,E({\bm z}|\ensuremath{\mathcal D})\right). \label{eq:lineardistance} \end{equation} The expectations in \eqref{eq:mindist} or \eqref{eq:lineardistance} can be obtained using the MCMC sample as \begin{equation} E(d({\bm z}^*,{\bm z})|\ensuremath{\mathcal D}) = \frac{1}{H}\sum_{h=1}^H d({\bm z}^*,\ca{{\bm z}}). \label{eq:expdistmcmc} \end{equation} The optimization should be done in the space of all possible partitions; since this space can be very large, the authors suggest performing the optimization on a suitable subset, reasonable alternatives being the set $\{\ca{{\bm z}}\}$ or the set of clusterings resulting from different classical algorithms applied to the similarity matrix \eqref{eq:Cmatrix} (or the union of the two). \begin{figure} \caption{Raw MCMC output for the two-component Gaussian example of Section~\ref{sec:mcmc}: (a) a perfectly mixing chain visiting both modes of the posterior of ${\bm \mu}$; (b) a chain concentrating on one mode; (c) a chain exhibiting random label switches.} \label{fig:mcmcexample} \end{figure} \begin{figure} \caption{Top row: posterior predictive distribution of $(y_1,y_2)$ (a label-invariant quantity) obtained from the three chains of Figure~\ref{fig:mcmcexample}. Bottom row: posterior distribution of $\mu_2-\mu_1$ based on the samples from the same three chains.} \label{fig:mcmcexamplePost} \end{figure} \section{Obtaining probabilities of belonging to a group} \label{sec:probgroups} \citet{puolamaki2009bayesian} deal with the relabelling issue by considering as an objective the $n\times G$ matrix with elements $q_{ig}=P(Z_i=g)$ ($\tilde{\beta}$ in their notation). This is obtained by maximizing the Bernoulli likelihood.
The latter can be specified according to two alternative formulations. The first is one in \citet{puolamaki2009bayesian}, where the $HG\times n$ matrix $Z'$ is such that \[ Z'_{ri} = 1 \mbox{ iff } \ca{Z_i} = g \mbox{ where } r=G(h-1)+g, \] and get \begin{equation} L = \prod_{r=1}^R\sum_{g=1}^G\prod_{i=1}^n q_{ig}^{Z'_{ri}}(1-q_{ig})^{1-Z'_{ri}}. \end{equation} The above likelihood can also be written as \begin{equation} L= \sum_{h=1}^H \sum_{g=1}^G\prod_{i=1}^n q_{ig}^{\uno{\ca{Z_i}=g}}(1-q_{ig})^{1-\uno{\ca{Z_i}=g}}. \end{equation} The intuitive idea behind this strategy is that if two units $i_1$ and $i_2$ often belong to the same group, that is, $Z'_{r,i_1}=Z'_{r,i_2}$ for many $r$, then they should be assigned to the same group, thus leading to a high value of $q_{i_1\bar{g}}$ and $q_{i_2\bar{g}}$ for some value of $\bar{g}$. Note that the likelihood above is itself labelling invariant, thus it has $G!$ maxima. An EM algorithm is proposed to perform the optimization: \begin{description} \item[E step:] for each row $r$ (which represent a group in an iteration) and for each group obtain \[ \gamma_{rg} = \frac{\prod_{i=1}^n q_{ig}^{Z'_{ri}}(1-q_{ig})^{1-Z'_{ri}}} {\sum_{g=1}^G \prod_{i=1}^n q_{ig}^{Z'_{ri}}(1-q_{ig})^{1-Z'_{ri}}} = \frac{p((z_{1g},\ldots,z_{ng})|\theta)}{\sum_{g=1}Gp((z_{1g},\ldots,z_{ng})|\theta)}; \] \item[M step:] compute the mean of the $Z'_{ri}$ with weights $\gamma_{rg}$ \[ q_{ig} = \frac{\sum_{r=1}^R\gamma_{rg}Z'_{ri}}{\sum_{r=1}^R\gamma_{rg}} = \frac{\sum_{r:Z'_{ri}=1}\gamma_{rg}}{\sum_{r=1}^R\gamma_{rg}}. \] \end{description} Equivalently, the matrix $Q$ can be found minimizing the cost function \[ \prod_{h=1}^H \sum_{\nu\in\mathcal V}\frac{1}{G!} q_{i\nu(\ca{Z_i})}. \] \section{Relabeling methods} \label{sec:relabeling} Relabelling means permuting the labels at each iteration of the Markov chain in such a way that the relabelled chain can be used to draw inference on component specific parameters. Loosely speaking we may say that the relabelled chain can be seen as a chain where no label switching has occurred or, in other words, the new labels are such that different labels do refer to different components of the mixture. One method to perform the relabelling involves imposing identifiability constraints such as $\pi_1<\pi_2<\ldots<\pi_G$ or $\mu_1<\mu_2<\ldots<\mu_G$. Equivalently, this may be seen as a conditioning of the full (multimodal) posterior where the conditioning event is the identifiability constraint. Such a solution, although theoretically sound, may not be applicable when an obvious constraint does not exist and it may not work well if the components are not well separated \citep{stephens2000dealing,jasra2005markov}. It is worth noting that relabelling strategies may act during the MCMC sampling, and/or they may be used to post-process the chains. In general, those solutions which post-process the chains are particularly convenient (since the issue can be ignored in performing the MCMC and then dealt with later). Generally, existing relabelling procedures select the permutation of the labels that minimizes a well defined distance between some components, such as pivots and classification probabilities, at each MCMC iteration. \citet{papastamoulis2016label} provides the \textbf{label.switching} \texttt{R} package with a range of deterministic and probabilistic methods for performing relabelling: in Section~\ref{sim:study} and \ref{case:study} a comparison between some of these alternatives and our methodology will be provided. 
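Before turning to specific relabelling strategies, we give for concreteness a minimal Python sketch of the EM scheme of Section~\ref{sec:probgroups}. It is only an illustration of the iteration described above, written in our notation and assuming, as in the likelihood above, equal weight for the $G$ components within each row; it is not the implementation of \citet{puolamaki2009bayesian}. The input \texttt{Zp} is the $HG\times n$ binary matrix $Z'$.
\begin{verbatim}
import numpy as np

def em_allocation_probabilities(Zp, G, n_iter=200, eps=1e-9, seed=0):
    """Return an n x G matrix q at a (local) maximum of the Bernoulli likelihood."""
    rng = np.random.default_rng(seed)
    R, n = Zp.shape
    q = rng.uniform(0.25, 0.75, size=(n, G))           # random initial probabilities
    for _ in range(n_iter):
        # E step: log gamma_{rg} = sum_i [Z'_ri log q_ig + (1 - Z'_ri) log(1 - q_ig)]
        loggamma = Zp @ np.log(q + eps) + (1 - Zp) @ np.log(1 - q + eps)
        loggamma -= loggamma.max(axis=1, keepdims=True)
        gamma = np.exp(loggamma)
        gamma /= gamma.sum(axis=1, keepdims=True)       # responsibilities, R x G
        # M step: q_ig = sum_r gamma_rg Z'_ri / sum_r gamma_rg
        q = (gamma.T @ Zp).T / gamma.sum(axis=0)
    return q
\end{verbatim}
Since the likelihood being maximized is itself labelling invariant, the $G!$ permutations of the columns of the returned matrix are equally good solutions.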
\subsection{Decision theoretic approach} A rather general decision theoretic framework for the relabelling problem is proposed by \citet{stephens2000dealing}. Such approach translates the problem to that of choosing an action $a$ from a set of actions $\mathcal A$ where a loss function $\mathcal L:\mathcal A\times\Theta\rightarrow\mathbb R$ represents the loss we incur if we choose the action $a$ and the true value of the parameter is $\theta$. The loss function makes sense if it is permutation invariant (remember that if we permute the parameter the model remains the same), we can obtain a permutation invariant loss function $\mathcal L$ from a non invariant one $\mathcal L_0$ by defining \[ \mathcal L(a;\theta) = \underset{\nu}{\min} \mathcal L_0(a;\nu(\theta)). \] The action $a$ is then chosen by minimizing the posterior expected loss \[ \mathcal R(a) = E(\mathcal L(a;\theta)|\ensuremath{\mathcal D}), \] which can be approximated using the MCMC sample by \begin{eqnarray} \hat{\mathcal R}(a) &=& \frac{1}{H}\sum_{h=1}^H \mathcal L(a;\ca{\theta}) \\ &=& \frac{1}{H}\sum_{h=1}^H \underset{\nu_h}{\min} \mathcal L_0(a;\nu_h(\ca{\theta})) \\ &=&\underset{\nu_1,\ldots,\nu_H}{\min} \left( \frac{1}{H}\sum_{h=1}^H \mathcal L_0(a;\nu_h(\ca{\theta})) \right). \end{eqnarray} The action $a$ can be the estimation of the parameter (or part of it) and the loss function may be a distribution to be fitted or an estimation error, the choice should be driven by the objective of inference. If the objective is the clustering of $n$ units into $G$ groups a reasonable action is reporting the $n\times G$ matrix $Q = [q_{ig}]$ where $q_{ig}$ is the probability that the $i$-th unit belongs to the group $g$. A corresponding loss is then the distance, somehow measured (\citet{stephens2000dealing} employs the Kullback-Leibler distance), between $Q$ and its true value $P(\theta)=[p_{ig}(\theta)]$ where (for the toy example) \[ p_{ig}(\theta)= P(Z_i=g|{\bm y},\theta) = \frac{\pi_g f(y_i;\mu_g,\theta)}{\sum_j \pi_j f(y_i;\mu_j,\theta)} \] The general algorithm for performing \citet{stephens2000dealing} method is as follows \begin{description} \item[Start:] from arbitrary permutations $\nu_1,\ldots,\nu_H$. \item[Step 1:] obtain $a=\underset{a}{\mbox{argmin}} \sum_{h=1}^H \mathcal L_0(a;\nu_h(\ca{\theta}))$. \item[Step 2:] obtain $\nu_h=\underset{\nu_h}{\mbox{argmin}} L_0(a;\nu_h(\ca{\theta}))$. \end{description} Note that step 2 entails $n$ minimizations with respect to all the permutations ($G!$), \citet{stephens2000dealing} points out the existence of efficient numerical algorithm if the loss function $\mathcal L_0$ can be written as $\mathcal L_0=\sum_{g=1}^G \mathcal L_0^{(g)}(a;\pi_g,\mu_g,\phi)$.\ A problem with this method might be the choice of the appropriate loss function/the dependence of the results on the loss function. Our method presented in Sect.~\ref{sec:pivotal} does not require a minimization step, and for this reason might be computationally appealing in many situations. \section{Pivotal method} \label{sec:pivotal} Suppose that a partition of the observations in $\hat{G}$ groups, $\mathcal G_1,\ldots,\mathcal G_{\hat{G}}$ has been obtained as discussed in Section~\ref{sec:clustering}. As already pointed out, this may be enough for some purposes, but we may be interested in the probabilities $P(Z_i=g)$ and in the posteriors for groups parameters, $\mu_g$. 
Suppose that we can find $\hat{G}$ units, $i_1,\ldots,i_{\hat{G}}$, one for each group, which are (pairwise) separated with (posterior) probability one (that is, the posterior probability of any two of them being in the same group is zero). In terms of the matrix $C$, the $\hat{G}\times \hat{G}$ sub-matrix with only the rows and columns corresponding to $i_1,\ldots,i_{\hat{G}}$ will be the identity matrix. We then use the $\hat{G}$ units, called pivots in what follows, to identify the groups and to relabel the chains: for each $h=1,\ldots, H$ and $g=1,\ldots,\hat{G}$ \begin{equation} \ca{\mu_g}=\ca{\mu_{\ca{Z_{i_g}}}}; \label{eq:relabelmu} \end{equation} \begin{equation} \ca{Z_i}=g \mbox{ for } i:\ca{Z_i}=\ca{Z_{i_g}}. \label{eq:relabelZ} \end{equation} The availability of $\hat{G}$ perfectly separated units is crucial to the procedure, and it cannot always be guaranteed. We now discuss three different circumstances under which the relabelling procedure is unsuitable: \begin{description} \item[(i)] the number of actual groups in the MCMC sample is higher than $\hat{G}$; \item[(ii)] the number of actual groups in the MCMC sample is lower than $\hat{G}$; \item[(iii)] the number of actual groups in the MCMC sample is equal to $\hat{G}$ but the pivots are not perfectly separated. \end{description} Let us first clarify what is meant by the number of actual groups. The model has $G$ components, but some mixture components may be empty in the Markov chain, that is, it may happen that $\#\{g: \ca{Z_i}=g\mbox{ for some }i\}<G\, \forall h$. By the actual number of groups we mean the number of non-empty groups, $G_0$ in what follows. It is then clear that the Markov chain does not carry information on more than $G_0$ groups. We also note that the number of non-empty groups may vary across iterations; let \[ \ca{G} = \#\{g: \ca{Z_i}=g\mbox{ for some }i\}. \] Consider now the set $\mathcal H_1\subset\{1,\ldots,H\}$ of iterations where $\ca{G}>\hat{G}$; some units and groups will then have no available pivot. These units will not be attributed to any group by performing (\ref{eq:relabelZ}). Thus, for these units, \[ \sum_{g=1}^{\hat{G}} \hat{P}(Z_i=g) =\sum_{g=1}^{\hat{G}}\hat{q}_{ig} = \sum_{g=1}^{\hat{G}}\frac{1}{H} \sum_{h=1}^H \uno{\ca{Z_i}=g} < 1. \] We suggest cancelling those iterations of the chains where this occurs, that is, the final -- partial -- chain is a sample from the posterior conditional on having at most $\hat{G}$ non-empty groups. Consider now the set $\mathcal H_2\subset\{1,\ldots,H\}$ of iterations where $\ca{G}<\hat{G}$; if $h\in\mathcal H_2$, then $\ca{Z_{i_k}}=\ca{Z_{i_s}}$ for some pivots $i_k, i_s$. As a consequence, $\hat{c}_{i_k i_s}>0$: the pivots are not perfectly separated. The procedure in \eqref{eq:relabelmu} and \eqref{eq:relabelZ} cannot be performed (it is not well defined), so also in this case we will have to cancel the corresponding part of the chain. Finally, consider the set \[ \mathcal H_3=\{ h : \exists k,s \mbox{ s.t. } \ca{Z_{i_k}}=\ca{Z_{i_s}} \}, \] that is, the set of iterations where (at least) two pivots are put in the same group. Note that $\mathcal H_2\subset \mathcal H_3$ but $\mathcal H_3$ may be larger. The same provision as above applies: we need to discard this part of the chain.
In the end, we will relabel the chain with iterations in \begin{equation} \mathcal H_0 = \{1,\ldots, H\} \backslash (\mathcal H_1\cup\mathcal H_2\cup\mathcal H_3), \label{eq:itvalid} \end{equation} which can be considered a sample from the posterior distribution conditional on (i) there being exactly $\hat{G}$ non-empty groups, and (ii) the pivots falling into different groups. \subsection{Pivots identification} \label{piv:id} A relevant issue is how to identify the pivots, noting that perfectly separated units may not exist and that, even if they exist, we may not be able to find them since the set of all possible choices is too big to be fully searched. The general method we put forward is to select a unit for each group according to some criterion; for instance, for group $g$ containing units $\mathcal G_g$ we may choose the $\bar{i}\in\mathcal G_g$ that maximizes one of the quantities \begin{equation} \label{eq:maxmeth} \underset{j\in\mathcal G_g}{\max}\, c_{\bar{i}j}, \;\;\;\; \sum_{j\in\mathcal G_g} c_{\bar{i}j}, \;\;\;\; \sum_{j\in\mathcal G_g} c_{\bar{i}j} - \sum_{j\not\in\mathcal G_g} c_{\bar{i}j}; \end{equation} or minimizes one of the quantities \begin{equation} \label{eq:minmeth} \underset{j\in\mathcal G_g}{\min}\, c_{\bar{i}j}, \;\;\;\; \underset{j\not\in\mathcal G_g}{\max}\, c_{\bar{i}j}, \;\;\;\; \sum_{j\not\in\mathcal G_g} c_{\bar{i}j}. \end{equation} We introduce a further method, which we call \textit{Maxima Units Search} (hereafter MUS), that turns out to be suitable in the case of a low number of mixture components, e.g., $G=3, 4$. This procedure differs from the others in the strategy for detecting pivots, since it does not rely upon a maximization/minimization step but identifies suitable units through a dedicated search within the estimated similarity matrix $C$ (see the Appendix for more details on the MUS procedure). The quality of the choice of the pivotal units by the proposed methods is measured by the probability of the conditioning event, \begin{equation} \frac{1}{H} \#\mathcal H_0, \label{eq:condevent} \end{equation} estimated from the (original, raw) MCMC sample. It is worth noting that the idea of solving the relabelling issue by fixing the group for some units dates back to \citet{chung2004difficulties}, who, however, gave no indication of how to choose the units. Also, since they suggest imposing such a restriction in the MCMC, there is no measure of the extent to which it influences the result (or of the extent to which it is informative if we interpret it as prior information). We note, however, that the approach of \citet{chung2004difficulties} may be very interesting when a set of units which are to be attributed to different groups can be defined exogenously. Another related idea is put forward by \citet{yao2014online}, who propose finding a reference labelling, that is, a clustering for the sample (for example, the posterior mode), and then relabelling each iteration by minimizing some distance from the reference labelling. The general idea is similar to the one we suggest, but it is more computationally demanding because of the required minimizations; on the other hand, it avoids the need to condition on the pivots being separated. We can argue, however, that the latter is not a big drawback of our proposal, since its effects can be measured and are likely to be small in many practical instances. \section{Simulation study} \label{sim:study} The aim of this section is to evaluate the performance of the pivotal method introduced before.
In particular, our goal is to investigate the behaviour of the proposed solution for dealing with label switching in different simulated scenarios. For this purpose, we focus on data simulated from a mixture of unequally weighted mixtures of bivariate Gaussian distributions with unequal covariance matrices, so that the generated components may result in overlapping clusters. Specifically, the simulation scheme consists of the following steps. \begin{enumerate} \item[(i)] Simulate $n$ values $Y_{1}, \dots, Y_{n}$ from a mixture of mixtures of bivariate Gaussian distributions, where \begin{equation} \label{mixt:of:mixt} (Y_i|Z_i=g) \thicksim \sum_{s=1}^{2} p_{gs}\, \mathcal{N}_{2}({\bm \mu}_{gs}, \Sigma_{s}). \end{equation} That is, conditionally on being in group $g \in \{1,\dots,G\}$, $\mathbf{y}_{i}$ is drawn from one of two possible Gaussian distributions with weights, means and covariances $p_{gs}$, ${\bm \mu}_{gs}$ and $\Sigma_{s}$, $s=1,2$, respectively. The likelihood of the model is then \begin{equation*} L({\bm y};{\bm \mu},{\bm \pi},\Sigma) = \prod_{i=1}^n \sum_{g=1}^G \pi_g \left(\sum_{s=1}^{2} p_{gs} \, \mathcal{N}_{2}({\bm \mu}_{gs}, \Sigma_s)\right). \label{eq:lik:mixtofmixt} \end{equation*} \item[(ii)] Obtain an MCMC sample which effectively explores all modes of the posterior distribution. \item[(iii)] Estimate the $n\times n$ similarity matrix $C$ with elements $c_{ij}=P(Z_i=Z_j|\ensuremath{\mathcal D})$, $i,j=1,\ldots,n$, by Equation~\eqref{eq:Cmatrix}. \item[(iv)] Apply a suitable clustering technique based on the estimated dissimilarity matrix with elements $\hat{s}_{ij}=1-\hat{c}_{ij}$ and obtain a partition of the observations into $\hat{G}$ groups with units $\mathcal{G}_{g}, g=1,\dots,\hat{G}$. \item[(v)] Detect the pivots, one for each group, according to one of the criteria discussed before. \item[(vi)] If necessary, discard those iterations of the chains belonging to $\mathcal H_1\cup\mathcal H_2\cup\mathcal H_3$ (see Section~\ref{sec:pivotal}) and relabel the resulting chain with iterations in $\mathcal{H}_{0}$ (see Equation~\eqref{eq:itvalid}) via \eqref{eq:relabelmu} and \eqref{eq:relabelZ}. \end{enumerate} In the following, a sample size of $n=1000$ and $G=4$ components are considered. For $g=1,\dots,4$, we set $\pi_{g}=1/4$, $p_{g1}=0.2$, $p_{g2}=0.8$, and $\Sigma_{1}=\mathbf{I}_{2}$, $\Sigma_{2}=200\,\mathbf{I}_{2}$, where $\mathbf{I}_{2}$ is the $2\times2$ identity matrix. We generate our simulated data from model~\eqref{mixt:of:mixt} (see Figure~\ref{sim:data}) using the input means reported in Table~\ref{tab:scenarios} and obtain an MCMC sample of $H=3000$ iterations. We proceed following points (i)--(vi) described above. As a remark, two different clustering strategies are applied to the dissimilarities $\hat{s}_{ij}$ in order to obtain $\hat{G}$ clusters of observations, namely agglomerative and divisive hierarchical clustering. Both methods only require a distance or a dissimilarity matrix as input and return a set of nested clusters organized as a tree.
The former starts with the points as individual clusters and, at each step, merges the closest pair of clusters according to some criterion for computing cluster proximity; the latter starts with a single, all-inclusive cluster and, at each step, splits a cluster until only singleton clusters of individual points remain. We observe that the two algorithms provide very similar results in terms of the resulting clusters, and do not affect the performance of the relabelling procedure. Therefore, for the sake of illustration, we restrict attention to agglomerative hierarchical clustering, where the so-called \emph{complete linkage} is adopted as the criterion for computing the dissimilarity between two clusters, since it is less susceptible to noise and outliers than other linkages. \begin{table} \centering \begin{tabular}{rccc} \hline & Scenario A & Scenario B & Scenario C \\ \hline $\mu_{1s}$ & (25,0) & (-10,-10) & (-10,-10) \\ $\mu_{2s}$ & (60,0) & (20,-10) & (20,-10) \\ $\mu_{3s}$ & (0,20) & (-10,20) & (5,5) \\ $\mu_{4s}$ & (50,20) & (20,20) & (5,25) \\ \hline \end{tabular} \caption{Two-dimensional mean vectors used as input for the three illustrated scenarios A, B and C, with number of mixture components $G=4$.} \label{tab:scenarios} \end{table} \begin{figure} \caption{Illustration of a simulated sample of size $n=1000$ from model~(\ref{mixt:of:mixt}).} \label{sim:data} \end{figure} Figures~\ref{pivot:A}--\ref{pivot:C} display the results of the agglomerative hierarchical clustering on simulated data from scenarios A, B and C, respectively. In each panel of Figures~\ref{pivot:A}--\ref{pivot:C} a different method for identifying the pivots is adopted ((a)--(g)), and the selected units are marked with red points on the plots. Recall that, by definition, the pivots are perfectly (pairwise) separated units. Therefore, an identification method performs better the closer the posterior probability of any two selected units being in the same group is to zero. As can be noticed, panels (b), (e), (f) and (g) seem to provide an accurate choice of the pivots in all situations, since the selected units are clearly well separated and suitable as representative units for each group. This is not a negligible issue for the relabelling performance, which is strongly affected by the choice of these $\hat G$ units, by virtue of Equation~\eqref{eq:relabelmu}. The estimated proportions of relabelled iterations based on 100 simulated samples are reported in Table~\ref{tab:prop}. As expected, a better pivot selection is likely to translate into a higher proportion of relabelled iterations in almost all situations. Consistently with the considerations drawn from Figures~\ref{pivot:A}--\ref{pivot:C}, methods (b), (e) and (f) register the highest chain proportions (less than $1\%$ of the iterations are discarded) for both scenarios A and B. Method (c) seems to have the worst performance regardless of the considered scenario; in particular, for scenario C the chain keeps only about $8\%$ of the original iterations. The fact that the third simulated scenario shows globally less satisfactory results is not surprising: the input means are so close to each other that the clustering algorithms may fail to recognize the true data partition, which in turn affects the quality of the choice of the pivotal units. However, criteria (e) and (f) and the MUS algorithm are preferable to the others.
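For concreteness, a minimal base-\texttt{R} sketch of steps (iv)--(v) above is reported below: agglomerative hierarchical clustering with complete linkage on the dissimilarities $\hat{s}_{ij}=1-\hat{c}_{ij}$, followed by the selection of one pivot per group with the third criterion in \eqref{eq:maxmeth}; the function and object names (\texttt{pick\_pivots}, \texttt{C}, \texttt{Ghat}) are ours and purely illustrative, not taken from any package.
\begin{verbatim}
## Hypothetical sketch: C is the estimated n x n co-clustering matrix,
## Ghat the chosen number of groups.
pick_pivots <- function(C, Ghat) {
  S  <- 1 - C                                    # dissimilarities s_ij = 1 - c_ij
  hc <- hclust(as.dist(S), method = "complete")  # agglomerative clustering, complete linkage
  grp <- cutree(hc, k = Ghat)                    # partition of the n units into Ghat groups
  pivots <- integer(Ghat)
  for (g in 1:Ghat) {
    in_g  <- which(grp == g)
    out_g <- which(grp != g)
    ## within-group similarity minus between-group similarity, for each unit in group g
    score <- rowSums(C[in_g, in_g, drop = FALSE]) - rowSums(C[in_g, out_g, drop = FALSE])
    pivots[g] <- in_g[which.max(score)]
  }
  list(groups = grp, pivots = pivots)
}
\end{verbatim}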
In order to compare the proposed methodology with other relabelling algorithms in the task of estimating the means of the mixture components, we consider the Puolam\"{a}ki and Kaski procedure \citep{puolamaki2009bayesian} and three other methods implemented in the \textbf{label.switching} package \citep{papastamoulis2016label}. In Figure~\ref{6metodi} the median estimates of the relabelled group means are plotted for a simulated example from scenario B and four alternative methods. As can be seen, our relabelling procedure seems to provide very accurate estimates of the group means. Similar results are achieved by ECR-iterative-1, ECR and DATA-BASED, while the Puolam\"{a}ki and Kaski algorithm does not appear to yield reliable estimates of the group means. Table~\ref{tab:MSE} reports the mean square errors of the relabelled estimates, obtained as the mean, over $B=100$ macro-replications, of the Euclidean distances between the input means and the corresponding estimates, for scenarios A, B and C. In all scenarios the highest mean square errors are obtained for method (c), for each component of the mixture. Criteria (b), (e) and (f) give very similar results, and the MUS algorithm outperforms all other pivotal methods in three cases in Scenario A and two in Scenario B. The performance of ECR-iterative-1 in terms of mean square errors is comparable with that of our algorithm in most cases, although it gives lower errors in the third scenario, while the Puolam\"{a}ki and Kaski algorithm seems to provide poor estimates in all situations (see also Figure~\ref{6metodi}). \begin{table}[H] \centering \begin{tabular}{rrrrrrrr} \hline & (a) & (b) & (c) & (d) & (e) & (f) & (g) \\ \hline Scenario A & 0.475 & 0.993 & 0.124 & 0.506 & 0.993 & 0.993 & 0.313 \\ Scenario B & 0.519 & 0.998 & 0.101 & 0.707 & 0.998 & 0.998 & 0.995 \\ Scenario C & 0.139 & 0.300 & 0.079 & 0.267 & 0.368 & 0.507 & 0.374 \\ \hline \end{tabular} \caption{Estimated proportion of relabelled iterations (see Equation~\eqref{eq:condevent}), over 100 macro-replications, based on the original MCMC sample, for Scenarios A, B and C. The observations are clustered with the agglomerative hierarchical algorithm.} \label{tab:prop} \end{table} \begin{figure} \caption{Simulated sample of size $n=1000$ from Scenario A (see Table~\ref{tab:scenarios}).} \label{pivot:A} \end{figure} \begin{figure} \caption{Simulated sample of size $n=1000$ from Scenario B (see Table~\ref{tab:scenarios}).} \label{pivot:B} \end{figure} \begin{figure} \caption{Simulated sample of size $n=1000$ from Scenario C (see Table~\ref{tab:scenarios}).} \label{pivot:C} \end{figure} \begin{figure} \caption{Scenario B. Crosses are input means, red points are the median values of relabelled estimates.
\emph{(Top left)} \label{6metodi} \end{figure} \begin{table}[H] \centering \begin{tabular}{rrrrrr} \hline & Scenario A & $\bm{\mu}_{1}$ & $\bm{\mu}_{2}$ & $\bm{\mu}_{3}$ & $\bm{\mu}_{4}$ \\ \hline & (a) & 13.8078 & 1.6724 & 2.0158 & 10.8232 \\ &(b) & 13.7064 & 1.6104 & 1.9814 & 9.1846 \\ &(c) & 22.8786 & 7.5307 & 6.7326 & 14.5896 \\ &(d) & 14.0215 & 1.6619 & 1.9951 & 11.2910 \\ &(e) & 13.7301 & 1.6264 & 1.8889 & 9.2900 \\ &(f) & 13.7794 & 1.6723 & 1.8979 & 9.2897 \\ & MUS & \textbf{12.5787} & \textbf{1.5531} & \textbf{1.7919} & 9.6220 \\ \hline &ECR-1 & 13.6403 & 1.6605 & 1.9015 & \textbf{8.8085} \\ &P \& K & 25.5940 & 15.5229 & 15.1522 & 27.2411 \\\hline \hline & Scenario B & $\bm{\mu}_{1}$ & $\bm{\mu}_{2}$ & $\bm{\mu}_{3}$ & $\bm{\mu}_{4}$ \\ \hline & (a) & 1.4066 & 1.6251 & 1.5984 & 1.5884 \\ &(b) & 1.4123 & 1.6005 & 1.5737 & 1.5419 \\ &(c) & 4.8496 & 4.3588 & 4.8097 & 5.2142 \\ &(d) & 1.4096 & 1.5961 & 1.5729 & 1.5403 \\ &(e) & 1.4127 & 1.6003 & 1.5736 & 1.5417 \\ &(f) & 1.4121 & 1.5982 & 1.6192 & 1.5420 \\ & MUS & \textbf{1.4070} & \textbf{1.5877} & 1.5728 & 1.5437 \\ \hline &ECR-1 & 1.4129 & 1.5984 & \textbf{1.5717} & \textbf{1.5429} \\ &P \& K & 18.4657 & 18.6185 & 18.6796 & 19.0404\\\hline \hline & Scenario C & $\bm{\mu}_{1}$ & $\bm{\mu}_{2}$ & $\bm{\mu}_{3}$ & $\bm{\mu}_{4}$ \\ \hline & (a) & 9.1013 & 9.9974 & 8.4766 & 19.3288 \\ & (b) & 6.9196 & 7.8994 & 8.7700 & 14.1766 \\ & (c) & 11.6894 & 10.8810 & 8.8252 & 22.6435 \\ & (d) & 7.7730 & 9.1701 & 9.1987 & 16.4153 \\ & (e) & 7.6160 & 7.1054 & 10.2073 & 13.2589 \\ & (f) & 7.1992 & 7.1643 & 9.4728 & 15.2713 \\ & MUS & 6.7458 & 7.5579 & 9.7924 & 14.8356 \\ \hline &ECR-1 & \textbf{6.4891} & \textbf{6.7234} & 8.4472 & \textbf{9.3649}\\ &P \& K & 17.5726 & 16.8717 & \textbf{3.4988} & 20.2620\\ \hline \end{tabular} \caption{Mean squared error $\displaystyle\dfrac{1}{B}{\sum_{j=1}^{B}}||{\bm \mu}_{gs}^{(j)}-\hat{{\bm \mu}}_{gs}^{(j)}||$ computed for $B=100$ macro-replications, of the estimates of the mean vector components ${\bm \mu}_{gs}$, $g=1,...,4, s=1,2$, according to criteria (a)--(f) and MUS of pivotal method, Puolam\"{a}ki and Kaski (P\&K) and ECR-1 algorithms.} \label{tab:MSE} \end{table} \section{Case study} \label{case:study} Fishery dataset, originally taken from \citet{titterington1985statistical} and used by \citet{papastamoulis2016label} for comparing different relabelling procedures, consists of $n = 256$ snapper length measurements. In Figure~\ref{fish} the histogram of the lengths is shown. In this section, we apply our method to these data and test its efficiency, comparing the results with some methods from the \textbf{label.switching} package. We use a Gaussian mixture with $G=5$ components as suggested by \citet{papastamoulis2016label}, that is: \begin{equation} \label{eq:fish} y_{i} \sim \sum_{g=1}^{G} p_{g} \mathcal{N}(\mu_{g}, \sigma^{2}_{g}), \ \ i=1,...,n \end{equation} \begin{figure} \caption{Histogram of fishery data. On $x$-axis the snapper length measurements} \label{fish} \end{figure} We set up a Gibbs sampling through the \textbf{bayesmix} \texttt{R} package \citep{grun2011bayesmix}, with $H=11000$ iterations and a burn-in period of 1000. \begin{figure} \caption{Fishery data. (\emph{From top left to bottom right} \label{8metodi} \end{figure} In Figure~\ref{8metodi} the raw MCMC sample (Top left) and the reordered MCMC samples for $\mu_{g}, g=1,...,5$, for different methods, are shown. 
Despite an ordering constraint for the mean components (the priors are chosen according to the {\ttfamily independence} option, which favours a natural ordering of the means), label switching occurs, and the raw sampler is unable to yield useful mean estimates for the single components. The \texttt{label.switching} function of the \textbf{label.switching} package is used to reorder the obtained chains according to the resulting permutations. The methods from the \textbf{label.switching} package seem to perform similarly. In particular, for the largest mean (light blue trace) there is a general tendency to switch. We note that for the DATA-BASED method the same happens also for the second mean (blue trace). Our pivotal method seems to work better in isolating the five high-posterior density regions. We recall that the reordering for our method is given by \eqref{eq:relabelmu}. Concerning the computational times reported in Table~\ref{tab:cpu}, AIC is the fastest method, since it only applies an ordering constraint and consequently permutes the simulated MCMC output, while STEPHENS ---a probabilistic relabelling--- is the slowest. Our method is quite fast, especially if compared with ECR-iterative-1, ECR-iterative-2 and DATA-BASED. \begin{table} \centering \begin{tabular}{r|c} Method & CPU time (sec.)\\ \hline STEPHENS & 344.50 \\ PRA & 3.96 \\ ECR & 8.66 \\ ECR-iterative-1 & 60.83 \\ ECR-iterative-2 & 28.39 \\ AIC & 0.08 \\ DATA-BASED & 22.68 \\ \textbf{Pivotal} & \textbf{9.57} \end{tabular} \caption{Fishery data: CPU times in seconds for the different methods, with $H=11000$, burn-in $=1000$, $n=256$ and $G=5$.} \label{tab:cpu} \end{table} \section{Conclusions} \label{conc} We propose a simple procedure for dealing with label switching in Bayesian mixture models, based on the identification of as many pivotal units as mixture components, which are then used for relabelling the resulting MCMC chains. The main novelty of our contribution consists of providing some useful indications on how to choose the pivots since, as mentioned in Section~\ref{piv:id}, the idea of solving the relabelling issue by fixing the group of some units is not new \citep{chung2004difficulties}. We suggest adopting one of six alternative methods based on the maximization or minimization of quantities derived from a similarity matrix obtained from the MCMC sample, or a further, more demanding algorithm suitable when the number of groups $G$ is relatively small (e.g. $G=4$). A fundamental issue is the pairwise (perfect) separation of the pivots, which is crucial for the proposed procedure and usually non-trivial to achieve. From a computational point of view, the method appears to be fast and simpler than other relabelling methods, since it does not require a maximization/minimization step at each iteration, and only requires a permutation of the labels induced by the pivot memberships. A simulation study is conducted in order to test the proposed solution in different possible scenarios, showing overall good performance. A case study on a real dataset is presented, and the results seem to confirm the advantage of using the proposed methodology. Moreover, when also considering the computational time of our algorithm compared to some procedures available in the \textbf{label.switching} \texttt{R} package, we conclude that the proposed methodology may represent a valid approach to the label switching problem and, in some cases, may be preferable to other existing solutions.
\section*{Appendix} \label{A:MUS} \subsection*{MUS algorithm} The \textit{Maxima Units Search} algorithm is an alternative method for detecting pivots which does not rely upon a maximization/minimization step as the other six procedures in \eqref{eq:maxmeth} and \eqref{eq:minmeth}, but searches for $\hat{G}$ pivots through a dedicated exploration of the estimated similarity matrix $C$. The underlying idea is to choose as pivots those units for which the $\hat{G} \times \hat{G}$ sub-matrix of $C$ containing only the rows and columns corresponding to $i_1,\ldots,i_{\hat{G}}$ is most often (close to) the identity matrix. Let us denote by $\mathcal{T}_{(\hat{G}\times \hat{G})}$ this sub-matrix of $C=(c_{ij})$ containing only the rows and columns corresponding to the pivots $i_1,\ldots,i_{\hat{G}}$. It is worth stressing that, for a small number of groups (e.g., $G=4$) and a sample size $n$ ranging between $100$ and $1000$, this search can be computationally demanding. Furthermore, a positive number of identity matrices is not always guaranteed. However, the MUS algorithm has proved to be efficient in terms of mean square errors for the estimation of the group means, as shown in Table~\ref{tab:MSE}. The main steps of the algorithm are summarized below. \begin{description} \item[(i)] For every group $g, \ g = 1,...,\hat G$, find the \emph{maxima} units $j^{1}_{g},...,j^{M}_{g}$ within the matrix $C$, i.e. the units in group $g$ with the greatest number of zero entries in the columns corresponding to units of the other $\hat{G}-1$ groups, where $M$ is a precision parameter fixed in advance (in our simulation study $M=5$). \item[(ii)] For these $M \times \hat G$ units, count the number of distinct identity sub-matrices $\mathcal{T}_{(\hat{G}\times \hat{G})}$ of rank $\hat G$ which contain them. \item[(iii)] For each group $g, \ g = 1,...,\hat G$, select the unit which yields the maximum number of identity matrices of rank $\hat G$. This unit is the pivot to be used for relabelling the chains as explained in Section~\ref{sec:pivotal}. \end{description} \end{document}
\begin{document} \begin{frontmatter}[classification=text] \title{Spectral sets in $\mathbb{Z}_{p^2qr}$} \author[pgom]{G\'abor Somlai \thanks{Research was supported by the J\'anos Bolyai Research Fellowship of the Hungarian Academy of Sciences and the New National Excellence Program under the grant number UNKP-20-5-ELTE-231, and received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 741420)}} \begin{abstract} We prove that every spectral set in $\mathbb{Z}_{p^2qr}$ tiles, where $p$, $q$ and $r$ are primes. Combining this with a recent result of Malikiosis \cite{Romanos2020} we obtain that Fuglede's conjecture holds for $\mathbb{Z}_{p^2qr}$. \end{abstract} \end{frontmatter} \section{Introduction} A Lebesgue measurable set $\Omega \subset \mathbb{R}^n$ is called a {\it tile} if there is a set $T$ in $\mathbb{R}^n$, called the {\it tiling complement} of $\Omega$, such that almost every element of $\mathbb{R}^n$ can be uniquely written as $\omega+t$, where $\omega \in \Omega$ and $t \in T$. We say that $\Omega$ is {\it spectral} if $L^2(\Omega)$ admits an orthogonal basis consisting of exponential functions of the form $\{ e^{i \langle \lambda, x\rangle} \mid \lambda \in \Lambda, x \in \Omega \}$, where $\Lambda \subset \mathbb{R}^n$. In this case $\Lambda$ is called a {\it spectrum} for $\Omega$. Fuglede conjectured \cite{F1974} that a bounded domain $S \subset \mathbb{R}^d$ tiles the $d$-dimensional Euclidean space if and only if the set of $L^2(S)$ functions admits an orthogonal basis of exponential functions. The conjecture might have been motivated by Fuglede's result \cite{F1974} that it is true when the tiling complement or the spectrum is a lattice in $\mathbb{R}^n$. The conjecture was disproved in \cite{T2004}. Tao considered a discrete version of the original conjecture and constructed spectral sets in $\mathbb{Z}_2^{11}$ and $\mathbb{Z}_3^5$, which are not tiles. The latter example was lifted to $\mathbb{R}^5$ to refute the spectral-tile direction of Fuglede's conjecture. Matolcsi \cite{M2004} proved that the spectral-tile direction of the conjecture fails in $\mathbb{R}^4$. Kolountzakis and Matolcsi \cite{KMkishalmaz,KM2006a} and Farkas, Matolcsi and M\'ora \cite{FMM2006} provided counterexamples in $\mathbb{R}^3$ for both directions of the conjecture. It is important to note that every (finite) tile of the integers is periodic \cite{cm}. Moreover, the tile-spectral direction of the conjecture is true for $\mathbb{R}$ if and only if it holds for $\mathbb{Z}$, which is in turn equivalent to the conjecture holding for every finite cyclic group, see \cite{DL2014}. If the spectral-tile direction of Fuglede's conjecture holds for $\mathbb{R}$, then it holds for $\mathbb{Z}$, which again implies that it holds for every (finite) cyclic group. On the other hand, the converse of these statements is not necessarily true. The recent investigations of Fuglede's conjecture for finite cyclic groups started with a result of \L aba \cite{laba} proving that both directions of Fuglede's conjecture hold for $\mathbb{Z}_{p^k}$ for every positive integer $k$. It was also proved in \cite{laba} that the tile-spectral direction holds for cyclic groups whose order is divisible by (at most) two different primes. Kolountzakis and Malikiosis \cite{Kol} proved that the conjecture holds for $\mathbb{Z}_{p^k q}$, where $p^k$ is an arbitrary power of the prime $p$ and $q$ is a prime.
As a strengthening, it was proved in \cite{negyen} that Fuglede's conjecture holds for $\mathbb{Z}_{p^nq^2}$, where $p$ and $q$ are primes. A recent result of Malikiosis shows that, for a cyclic group of order $p^mq^n$, Fuglede's conjecture holds if $ \min \{m,n \} \le 6$ or $p^{m-2}<q^4$. Another important result was proved by Shi \cite{shi}, who showed that Fuglede's conjecture is true for $\mathbb{Z}_k$ if $k$ is the product of $3$ (distinct) primes. The so-called Coven-Meyerowitz conjecture states that if $n$ is square-free, then every tile of $\mathbb{Z}_n$ is a complete set of residues $\pmod{k}$, where $k$ is a divisor of $n$. This was originally settled (in two short comments) by \L aba and Meyerowitz on Tao's blog, and a self-contained proof of this fact was provided by Shi \cite{shi}. Coven and Meyerowitz \cite{cm} proved that a subset $A$ of $\mathbb{Z}_n$ tiles if two properties called (T1) and (T2), defined in the next section, are satisfied. The converse also holds if $n$ has at most two different prime divisors or if $n=p^m q_1 \ldots q_k$ where $p, q_1, \ldots ,q_k$ are different primes \cite{Romanos2020}. Furthermore, it was also proved in \cite{cm} that for every $n \in \mathbb{N}$ the tiles of $\mathbb{Z}_n$ satisfy (T1). The main result of the paper is the following. \begin{Thme} Every spectral set in $\mathbb{Z}_{p^2qr}$ is a tile. \end{Thme} Combining this theorem with the result of Malikiosis mentioned above, we obtain that Fuglede's conjecture holds for $\mathbb{Z}_{p^2qr}$. \section{Fuglede's conjecture for cyclic groups} Fuglede's conjecture is still open in dimensions $1$ and $2$. We will focus on the one-dimensional case, which is closely connected to the discrete version of the conjecture for cyclic groups. Let $S$ be a subset of $\mathbb{Z}_n$. We say that $S$ is a tile if and only if there is a $T \subset \mathbb{Z}_n$ such that $S+T=\mathbb{Z}_n$ and $|S||T|=n$. We say that $S$ is spectral if and only if the vector space of complex functions on $S$ is spanned by pairwise orthogonal functions, which are the restrictions of some irreducible representations of $\mathbb{Z}_n$. Note that cyclic groups are abelian, thus the irreducible representations considered here are all one-dimensional, so they are just characters. The irreducible representations of $\mathbb{Z}_n$ are of the following form: \[ \chi_k(x)=e^{\frac{2 \pi i k}{n}x},\] so they are parametrized by the elements of $\mathbb{Z}_n$. It is easy to verify that $\chi_k$ and $\chi_l$ are orthogonal if and only if $\chi_{k-l}$ is orthogonal to the trivial representation, which can also be written as \[ \sum_{s\in S} \chi_{k-l}(s)=0. \] One can assign a polynomial $m_S$ to $S$ by $\sum_{s \in S}x^s$, which is called the {\it mask polynomial} of $S$. It is easy to see that $\sum_{s\in S} \chi_{k}(s)=0$ if and only if $\xi_{\frac{n}{\gcd(k,n)}}$ is a root of $m_S$, where $\xi_d$ is a primitive $d$'th root of unity. This can also be written as $\Phi_d \mid m_S$, where $\Phi_d$ is the $d$'th cyclotomic polynomial. Note that mask polynomials can also be defined for any element of the group ring $\mathbb{Z}[\mathbb{Z}_n]$. Now, using the character table of $\mathbb{Z}_n$, we have that $(S,\Lambda)$ is a spectral pair if and only if the submatrix of the character table whose rows are indexed by the elements of $\Lambda$ and columns by those of $S$ is a complex Hadamard matrix.
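For illustration, this orthogonality condition can be checked numerically; the following small \texttt{R} sketch (the function names are ours, chosen only for this example) tests whether $\sum_{s\in S}\chi_k(s)=0$ for a given $k$, i.e.\ whether $\xi_{n/\gcd(k,n)}$ is a root of the mask polynomial $m_S$, and uses this to verify, up to rounding error, that a candidate pair $(S,\Lambda)$ gives a complex Hadamard submatrix of the character table.
\begin{verbatim}
## Does sum_{s in S} chi_k(s) vanish, i.e. is xi_{n/gcd(k,n)} a root of m_S?
char_sum_zero <- function(S, k, n, tol = 1e-9) {
  abs(sum(exp(2i * pi * k * S / n))) < tol
}
## (S, Lambda) is a spectral pair in Z_n iff |S| = |Lambda| and every nonzero
## difference of elements of Lambda gives a vanishing character sum over S.
is_spectral_pair <- function(S, Lambda, n) {
  if (length(S) != length(Lambda)) return(FALSE)
  diffs <- setdiff(unique(as.vector(outer(Lambda, Lambda, "-")) %% n), 0)
  all(vapply(diffs, function(k) char_sum_zero(S, k, n), logical(1)))
}
## Example: S = {0,1,2,3} and Lambda = {0,2,4,6} form a spectral pair in Z_8.
is_spectral_pair(c(0, 1, 2, 3), c(0, 2, 4, 6), n = 8)   # TRUE
\end{verbatim}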
In fact, the adjoint of a complex Hadamard matrix is also a complex Hadamard matrix so if $(S,\Lambda)$ is a spectral pair, then $(\Lambda,S)$ is a spectral pair too. Now we introduce the properties that are needed to formulate the Coven-Meyerowitz conjecture (first appeared in a paper of \L aba and Konyagin \cite{labakonyagin}) for any cyclic group. Let $H_S$ be the set of prime powers $p^k$ dividing $N$ such that $\Phi_{p^k}(x) \mid m_S(x)$. \begin{enumerate}[{\bf(T1)}] \item\lambdabel{t1} $m_S(1)=\mbox {\bf Proof.\ }od_{d \in H_S} \Phi_d(1)$, \item\lambdabel{t2} for pairwise relatively prime elements $q_i$ of $H_S$, we have $\Phi_{\mbox {\bf Proof.\ }od q_i} \mid m_S(x)$. {\sf e}nd{enumerate} We remind that if (T1) and (T2) hold for some $S \subset \mathbb{Z}_n$, then $S$ is a tile and if $S$ is a tile, then (T1) holds, see \cite{cm}. Further we mention that \L aba \cite{laba} proved that a set having (T1) and (T2) properties also is a spectral set. \section{Preliminary lemmas} For the sake of simplicity let $n=p^2qr$. First, we collect the results obtained in \cite{negyen} that apply in our case. Note that in our case ($n=p^2qr$) for every proper subgroup or quotient group of $\mathbb{Z}_n$ we have that spectral sets coincide with tiles. The results of Section 4 in \cite{negyen} were summarised in a statement called Reduction 1 but it is important to note that $n$ has more than $2$ different prime divisors so we only have the following. \begin{prop}\lambdabel{propreduction} Let us assume that $(S,\Lambda)$ is a spectral pair for an abelian group whose subgroups and factor groups satisfy the spectral-tile direction of Fuglede's conjecture. Without loss of generality we may assume $0 \in S$, $0 \in \Lambda$. Further $S$ is a tile if one of the following holds. \begin{enumerate} \item\lambdabel{itemreductiona} $S$ or $\Lambda$ does not generate $\mathbb{Z}_n$, \item\lambdabel{itemreductionb} $S$ can be written as the union of $\mathbb{Z}_u$-cosets, where $u$ is a prime dividing $n$. {\sf e}nd{enumerate} {\sf e}nd{prop} \begin{lem}\lambdabel{lempqnemoszt} Let $0 \in T \subset \mathbb{Z}_N$ a generating set and assume $x$ and $y$ are different prime divisors of $N$. Then there are elements $t_1 \ne t_2$ of $T$ such that $x \nmid t_1-t_2$ and $y \nmid t_1-t_2$ for any pair of prime divisors of $N$. {\sf e}nd{lem} \begin{proof} $T$ is not contained in any proper coset of $\mathbb{Z}_N$ so it contains an element $t_1$ not divisible by $x$ and $t_2$ not divisible by $y$. If $y \nmid t_1$, then $t_1-0 \in (T -T)$ is not divisible by either $x$ or $y$, when we are done so we may assume $y \mid t_1$. Similar argument shows that we may assume $x \mid t_2$. Then $x \nmid t_1-t_2$ and $y \nmid t_1-t_2$, as required. {\sf e}nd{proof} Another important tool is the following lemma. This is the same as Proposition 3.4 in \cite{negyen} formulated in a different language. Let $m$ be a square-free integer, where $m$ is the product of $d$ primes. Then $\mathbb{Z}_m \cong \bigoplus_{i=1}^d\mathbb{Z}_{p_i}$ a direct sum of $d$ cyclic groups of different prime order so the elements of $\mathbb{Z}_m$ are encoded by $d$-tuples. This allows us to introduce Hamming distance on $\mathbb{Z}_m$. Further we say that $P$ is a cuboid in $\mathbb{Z}_m$ if it can be written as $\mbox {\bf Proof.\ }od H_i$, where $H_i \subset \mathbb{Z}_{p_i}$ with $|H_i|=2$. 
\begin{prop}\lambdabel{prophypercube} Let $w$ be an element of the group ring $\mathbb{Z}[\mathbb{Z}_m]$ with nonnegative coefficients, where $m=\mbox {\bf Proof.\ }od_{i=1}^d p_i$, a product of $d$ different primes. Assume $\Phi_m \mid m_w$. Let $P$ be a $d$-dimensional cuboid and $p$ a vertex of $P$. Then \begin{equation}\lambdabel{eqparallelograme} \sum_{c \in P} (-1)^{d_H(p,c)}w(c)=0. {\sf e}nd{equation} {\sf e}nd{prop} We will refer to the previous proposition or more precisely to equation {\sf e}qref{eqparallelograme} as the $d$-dimensional cube-rule. Note that Proposition \ref{prophypercube} is a corollary of Corollary 3.4 in \cite{negyen}, that is formulated below. \begin{lem}\lambdabel{lem0603} Let $w$ be an element of the group ring $\mathbb{Z}[\mathbb{Z}_m]$ with nonnegative coefficients. Assume $\Phi_m \mid m_{\omega}$. \begin{enumerate} \item\lambdabel{item:lem0603a} Then $w$ can be written as the weighted sum of $\mathbb{Z}_{p_i}$-cosets with rational coefficients. \item If $m=pq$, where $p$ and $q$ are different primes, then $w$ can be written as the $\mathbb{Z}_p$-cosets and $\mathbb{Z}_q$-cosets with nonnegative integer coefficients. {\sf e}nd{enumerate} {\sf e}nd{lem} In fact, the coefficients in Lemma \ref{lem0603} \ref{item:lem0603a} can be chosen to be integers. It is also important to know what happens if $\omega$ is a multiset on $\mathbb{Z}_m$, with $\Phi_m \mid m_{\omega}$, where $m$ is not square free, which is also described in \cite{negyen}. Let $m'$ be the radical of $m$. Then $\omega$ is the weighted sum of $\mathbb{Z}_{p_i}$-cosets with integer coefficients again, implying that the cube-rule holds for the restriction of $\omega$ to each $\mathbb{Z}_{m'}$-coset. \begin{lem}\lambdabel{lemcuberulecorr} Let $T \subset \mathbb{Z}_{pqr}$. Assume $T$ satisfies the $3$-dimensional cube-rule and $T \cap ((t+\mathbb{Z}_q) \cup (t+\mathbb{Z}_r))=\{t\}$ for all $t\in T$, where $\mathbb{Z}_q$ and $\mathbb{Z}_r$ denotes the unique subgroups of $\mathbb{Z}_{pqr}$ of order $q$ and $r$, respectively. Then $T$ is a union of $\mathbb{Z}_p$-cosets. {\sf e}nd{lem} Note that the statement holds verbatim for any permutation of the primes $p,q,r$. \begin{proof} Assume $\Phi_{pqr} \mid m_T$, then $T$ satisfies the $3$ dimensional cube-rule. Suppose $t \in T$. By our assumption $T \cap (t+\mathbb{Z}_q)=\{t\}$ and $T \cap (t+\mathbb{Z}_r)=\{t\}$. By the way of contradiction, assume $T$ is not a union of $\mathbb{Z}_{p}$-cosets so there is $t \in T$ with $t_p \in (t+\mathbb{Z}_p)\setminus T$. Then for every cuboid containing $t$ and $t_p$ as its vertices, the neighbours (consider the cube as the natural graph on $8$ vertices) of $t$ are not in $T$. Now using the cube-rule we obtain that the vertex of the cuboid, which is of Hamming distance $3$ from $t$ is contained in $T$. Thus for every $x \in \mathbb{Z}_{pqr}$ with $p \mid x-t_p$ and $d_H(x,t)=3$ we have $x \in T$. Then there are elements of $T$ whose difference is divisible by $pr$ if $q>2$ and the same holds with $pq$ if $r>2$. This contradicts the assumption that $T \cap ((x+\mathbb{Z}_q) \cup (x+\mathbb{Z}_r))=\{x\}$. {\sf e}nd{proof} Now we prove a Lemma that will be used in the proof of our main result. Since $\mathbb{Z}_n$ is a cyclic group, for every $m \mid n$ there is a unique subgroup of $\mathbb{Z}_n$ of order $m$. Thus if $f$ is a function on $\mathbb{Z}_n$, then there is a well defined way to define its projection to $\mathbb{Z}_{\EuScriptrac{n}{m}}$ by summing the values of $f$ on each $\mathbb{Z}_m$-coset. 
This can also be applied to sets by identifying them with their characteristic function. \begin{lem}\lambdabel{lemprojekcio} Let $T \subset \mathbb{Z}_N$, where $N$ is a positive integer and let $m $ and $r$ be divisors of $N$, where $r$ is a prime with $(m,r)=1$. Assume further that for every $1 < d \mid m$ we have $\Phi_d \mid m_T$ or $\Phi_{dr} \mid m_T$. Let $T_m$ denote the projection (points counted with multiplicity) of $T$ to $\mathbb{Z}_m$. Then $T_m=c\mathbb{Z}_m+rD$, where $c \ge 0$ an integer and $D$ is a multiset on $\mathbb{Z}_m$. {\sf e}nd{lem} \begin{proof} Since $(d,r)=1$ we obtain $\Phi_{dr}(x)=\EuScriptrac{\Phi_d(x^r)}{\Phi_d(x)}$. This equation can be written as $\Phi_{dr}(x)=\Phi_d(x)^{r-1}$ in the polynomial ring $\mathbb{Z}_r[x]$. Now $\Phi_{dr}(x) \mid m_T(x)$ implies $\Phi_d(x) \mid m_T(x)$ in $\mathbb{Z}_r[x]$. These cyclotomic polynomials are pairwise relatively prime in $\mathbb{Z}_r[x]$ as well so we obtain $\mbox {\bf Proof.\ }od_{1 < d \mid m} \Phi_d \mid m_T$ in $\mathbb{Z}_r[x]$. This implies first that $\mbox {\bf Proof.\ }od_{1 <d \mid m} \Phi_d \mid m_T$ in $\mathbb{Z}_r[x]$, so that $$ m_T(x) {\sf e}quiv c \left( 1+x+ {\rm c.d.}ots +x^{m-1} \right) \pmod{x^m-1}$$ in $\mathbb{Z}_r[x]$, for some $0 \le c <r$. If we also have $\Phi_1(x) \mid m_T$ in $\mathbb{Z}_r[x]$, then we have $c=0$. {\sf e}nd{proof} \section{Proof of the main result} Let $(S, \Lambda)$ be a spectral pair. We will distinguish certain different cases by the cardinality of $S$, which is equal to that of $\Lambda$. For a subset $A \subseteq \mathbb{Z}_n$ we write $k \mid \mid |A|$ if $gcd(|A|,n)=k$. Before we start proving our main result we introduce a notation. For every $k \mid n$ there is a unique subgroup of order $k$ of $\mathbb{Z}_n$, which we may also denote by $\mathbb{Z}_k$. For a subset $A$ of $\mathbb{Z}_n$ one can define a function from the cosets of $\mathbb{Z}_k$ to $\mathbb{N}$. The image of a coset is the number of elements of $A$ contained in the coset. This function can also be considered as a multiset i.e. elements of $\mathbb{Z}[\mathbb{Z}_{\EuScriptrac{n}{k}}]$ with nonnegative coefficients, and it will be denoted by $A_k$. Sometimes we say that $A_k$ is the natural projection of $A$ to $\mathbb{Z}_{\EuScriptrac{n}{k}}$. \subsection{$|S|$ has 3 prime divisors} In this case we assume that $|S|$ is divisible by three of the primes $p,q,r$ counted with multiplicity. Thus the cases handled here are $p^2q \mid \mid |S|$, $p^2r \mid \mid |S|$ and $pqr \mid \mid |S|$. Assume $p^2q\mid\mid |S|$. Project $S$ to $\mathbb{Z}_{p^2q}$. If two elements of $S$ project to the same element of $\mathbb{Z}_{p^2q}$, then we have a pair of elements of $s_1, s_2 \in S$ with $p^2q \mid \mid s_1-s_2$. Note that this is always the case when $|S|>p^2q$. Thus we have $\Phi_r \mid m_{\Lambda}$, which implies $r \mid |\Lambda|=|S|$, a contradiction. Thus $|S|=p^2q$ and $S$ is a complete set of residues $\pmod{p^2q}$ so is a tile. The same argument works if $p^2r\mid\mid |S|$. Let us assume now that $pqr \mid \mid |S|$. If a $\mathbb{Z}_{p^2}$-coset contains at least $p+1$ elements of $S$, then $S$ contains a pair of elements contained in this $\mathbb{Z}_p$-coset and a pair of elements contained in the same $\mathbb{Z}_{p^2}$-coset but in different $\mathbb{Z}_p$-cosets. These imply $\Phi_p \mid m_{\Lambda} $ and $\Phi_{p^2} \mid m_{\Lambda} $ so $p^2 \mid |\Lambda|=|S|$, a contradiction. 
It remains to investigate the case when each $\mathbb{Z}_{p^2}$-coset contains exactly $p$ elements of $S$, which gives $|S|=pqr$. Moreover by excluding $\Phi_p \Phi_{p^2} \mid m_{\Lambda}$ we obtain that the intersection of $S$ with each $\mathbb{Z}_{p^2}$-coset is either a $\mathbb{Z}_p$-coset or it is a complete coset representative of $\mathbb{Z}_p$-cosets. In both of these cases $S$ is a tile. Note that similar argument will also be used later in this paper if $|S|>p^2 \min \{q,r\}$ or $|S|>pqr$. \subsection{$|S|$ has two prime divisors} Now we handle the cases when $p^2 \mid \mid |S|$, $pq \mid\mid |S|$, $pr \mid\mid |S|$ and $qr \mid\mid |S|$. {\bf Case 1.} We first handle the case $p^2 \mid \mid |S|$. \\ Assume first that $\Phi_n \mid m_S$. Then we may apply the cube-rule on every $\mathbb{Z}_{pqr}$-coset. By Lemma \ref{lemcuberulecorr} we obtain that $S$ is the union of $\mathbb{Z}_p$-cosets, which case is handled in Proposition \ref{propreduction} \ref{itemreductionb}. Thus we may assume $\Phi_n \nmid m_S$. If every $\mathbb{Z}_{qr}$-coset contains exactly one element of $S$, then $S$ is a tile. Thus we may assume there are $s_1 \ne s_2 \in S$ with $p^2 \mid s_1-s_2$. If $p^2q \mid\mid s_1-s_2$ or $p^2r \mid\mid s_1-s_2$, then $r \mid |S|$ or $q \mid |S|$, respectively. Both of these cases contradict $p^2 \mid \mid |S|$. Thus we have $p^2 \mid \mid s_1-s_2$ so $\Phi_{qr} \mid m_{\Lambdambda}$. It follows from $\Phi_{qr} \mid m_{\Lambda}$ that $\Lambda_{qr}$ is the weighted sum of $\mathbb{Z}_q$-cosets and $\mathbb{Z}_r$-cosets with nonnegative weights by Lemma \ref{lem0603}. Both type of cosets appear since otherwise we would have $q \mid |S|$ or $r \mid |S|$, a contradiction. Let $\Gamma$ be a graph whose vertices are the elements of $\Lambda$ and two vertices are adjacent if and only if their difference is not divisible by either $q$ or $r$. Note that we have already assumed $\Phi_n \nmid m_S$ which implies $p \mid \lambdambda-\lambdambda '$ or $q \mid \lambdambda-\lambdambda '$ or $r \mid \lambdambda-\lambdambda '$ for every $\lambdambda, \lambdambda ' \in \Lambdambda$. Thus if $\Gamma$ is connected, then we have $\Lambda$ is contained in a $\mathbb{Z}_{pqr}$-coset, which is excluded by Proposition \ref{propreduction}. Without loss of generality we may assume $r>q$ so $r\ge 3$. Then there are $\lambda, \lambda' \in \Lambda$ with $q \mid \lambda-\lambda'$ but $r \nmid \lambda-\lambda'$ and there is $\lambda'' \in \Lambda$, whose $q$ and $r$-coordinates differ from those of $\lambda$ and $\lambda'$. By this we mean $q \nmid \lambda-\lambda''$, $q \nmid \lambda'-\lambda''$, $r \nmid \lambda-\lambda''$ and $r \nmid \lambda'-\lambda''$. Since $\Phi_n \nmid m_S$ we have $p \mid \lambda-\lambda''$ and $p \mid \lambda'-\lambda''$ so we have $pq \mid \lambda-\lambda'$ and $r \nmid \lambda-\lambda'$. Thus we either have $\Phi_r \mid m_S$, which is excluded since $r \nmid |S|$, or we have $\Phi_{pr} \mid m_S$. Assume $\Gamma$ is disconnected. Let $\tilde{H}de{\Lambdambda}_{qr}$ denote the underlying set of the multiset $\Lambdambda_{qr}$. We have that $\Lambda_{qr}$ is the sum of $\mathbb{Z}_q$ and $\mathbb{Z}_r$-cosets and we have seen that both types appear. Thus $\tilde{H}de{\Lambda}_{qr}$ is the union of a $\mathbb{Z}_q$-coset $Q$ and a $\mathbb{Z}_r$-coset $R$, otherwise $\Gamma$ is connected. In this case, $\Gamma$ has a large connected component consisting of those elements of $\Lambda$ which do not project to the intersection of $Q$ and $R$ denoted by $x \in \mathbb{Z}_{qr}$. 
Thus these points are contained in a single $\mathbb{Z}_{pqr}$-coset. Note that the number of elements of $\Lambda$ projecting to $x$ is less than $\EuScriptrac{|\Lambda|}{2}$. We claim that $\Phi_n \nmid m_{\Lambda}$. By the way of contradiction let us assume $\Phi_n \mid m_{\Lambda}$. Using the same argument when we excluded $\Phi_n \mid m_S$ we have that $\Lambda$ is the union of $\mathbb{Z}_p$-cosets. But then $(\Lambda,S)$ is a spectral pair and by Proposition \ref{propreduction} we have that $\Lambda$ is a tile whence $|\Lambda|=p^2=|S|$. Theorem B1 in \cite{cm} shows that (T1) holds for $\Lambda$. Thus it follows that $\Phi_p \Phi_{p^2} \mid m_{\Lambda}$. Since $\Phi_p \Phi_{p^2}=1+x+ \ldots +x^{p^2-1}$ and $|\Lambda|=p^2$ we have $\Lambda$ is a complete set of coset representatives of $\mathbb{Z}_{qr}$ in $\mathbb{Z}_n$. This contradicts the fact that all but the elements of $\Lambda$ projecting to $x \in \mathbb{Z}_{qr}$ give the same remainder$\pmod{p}$. We have that there is $\mu$ in $\Lambda$ projecting to $x \in \mathbb{Z}_{qr}$ with $p \nmid \mu-\lambda$ and $p \nmid \mu-\lambda''$, where $\lambda$ projects to $R \setminus\{x\}$ and $\lambda'$ to $Q \setminus\{x\}$, otherwise $\Lambda$ is contained in a $\mathbb{Z}_{pqr}$-coset. Thus we obtain $\Phi_{p^2q} \mid m_S$ and $\Phi_{p^2r} \mid m_S$ since $\Phi_n \nmid m_S$. Now we exclude the case $p=2$. If $p=2$, then by $r>q$ we have $r \ge 5$. Thus the description of $\tilde{H}de{\Lambda}_{qr}$ shows that there are at least $4$ elements of $\Lambda$ projecting to $R$ in the same connected component of $\Gamma$. The difference of any two of these is divisible by $p$ so there is a pair whose difference is divisible by $p^2$ as well. Since their projections to $\mathbb{Z}_{qr}$ lie in $R$ we obtain $\Phi_r \mid m_S$, a contradiction. We claim that $p>r$. Otherwise, since $p=2$ is excluded we have $r-p \ge 2$. Hence there are more than $p$ elements of $\Lambda$, projecting to mutually different points of $R \setminus \{x\}$. Their difference is divisible by $p$ since they are in the same connected component of $\Gamma$ but then there would be a pair of elements of $\Lambdambda$ whose difference is divisible by $p^2$ as well, implying $\Phi_r \mid m_S$ and $r \mid |S| $, a contradiction. Now $p>q$ follows from our assumption that $r > q$. A simple calculation shows that the number of elements $m$ of $\Lambda$ projecting to $x$ exceeds $p$. This follows from $p^2 \le kq+lr$ and $m=k+l$ since $p>q,r$, where $k$ and $l$ are defined by $\Lambda_{qr}=kQ+lR$. Thus there are elements $\lambda_3,\lambda_4,\lambda_5$ of $\Lambda$ with $qr \mid \mid \lambda_3-\lambda_4$ and $pqr \mid \mid \lambda_3-\lambda_5$ thus implying $\Phi_p \Phi_{p^2} \mid m_S$. We remind that $\Phi_{pr} \mid m_S$. The fact that $\Phi_p \Phi_{p^2} \mid m_S$ implies that every $\mathbb{Z}_{qr}$-coset contains the same amount of elements of $S$. Denote this number by $a$. If $a=1$, then $S$ is a tile so we assume $a \ge 2$. If $a=q$, then $|S|=p^2q$ which case has been handled before. If $a>q$, then $|S|>p^2q$ and by a simple pigeonhole argument we obtain $r \mid |S|$. Now project $S$ to $\mathbb{Z}_{p^2r}$. $S_{p^2r}$ is a set, otherwise $q \mid |S|$. Then each $\mathbb{Z}_r$-coset in $\mathbb{Z}_{p^2r}$ contains $2 \le a < q$ elements of $S_{p^2r}$. Using $\Phi_{p^2r} \mid m_S$ we have that the intersection of $S_{p^2r}$ with each $\mathbb{Z}_{pr}$-coset is the union of $\mathbb{Z}_p$-cosets or $\mathbb{Z}_r$-cosets. 
Since $a<q<r$ it is the union of $\mathbb{Z}_p$-cosets only. Then we build up a graph $\Gamma'$ whose vertices are the elements of $\Lambda$ and two vertices are adjacent if and only if their difference is not divisible by either $p$ or $r$. It follows from the previous observations using $p \ge 3$ that $\Gamma'$ is connected. Since $\Phi_n \nmid m_\Lambda$ we have that the $q$-coordinates of these elements of $\Lambda$ are the same so $\Lambda$ is contained in a proper coset of $\mathbb{Z}_n$. Thus by Proposition \ref{propreduction} we have that $S$ is a tile. {\bf Case 2.} Assume $pq \mid\mid |S|$. We may exclude the case, when $S$ is complete set of residues$\pmod{pq}$, which is the same as $S$ contains exactly one element from each $\mathbb{Z}_{pr}$-coset, since $S$ is a tile in this case. Then there are elements of $S$ projecting to the same element of $\mathbb{Z}_{pq}$. We either have $\Phi_{pr} \mid m_{\Lambda}$ or $\Phi_p \mid m_{\Lambda}$ or $\Phi_{r} \mid m_{\Lambda}$. The latter case is impossible since $r \nmid |S|$. Now we project $S$ to $\mathbb{Z}_{p^2q}$. The projection $S_{p^2q}$ is a set, otherwise we have $\Phi_r \mid m_{\Lambda}$ and thus $r \mid |S|$, a contradiction. If there is a $\mathbb{Z}_{p^2}$-coset of $\mathbb{Z}_{p^2q}$ containing more than $p$ elements of $S_{p^2q}$, then there are two among them, whose difference is not divisible by $p$, implying $\Phi_{p^2} \mid m_{\Lambda}$ or $\Phi_{p^2r} \mid m_{\Lambda}$. If every $\mathbb{Z}_{p^2}$-coset of $\mathbb{Z}_{p^2q}$ contains exactly $p$ elements of $S_{p^2q}$ and for each $\mathbb{Z}_{p^2}$-coset all of these elements are contained in the same $\mathbb{Z}_p$-coset, then we have that $S$ is a tile. Thus we may assume $\Phi_{p^2} \mid m_{\Lambda}$ or $\Phi_{p^2r} \mid m_{\Lambda}$. Applying Lemma \ref{lemprojekcio} using the conditions that $\Phi_p \mid m_{\Lambda}$ or $\Phi_{pr} \mid m_{\Lambda}$, and $\Phi_{p^2} \mid m_{\Lambda}$ or $\Phi_{p^2r} \mid m_{\Lambda}$ we obtain that the projection of $\Lambda$ to $\mathbb{Z}_{p^2}$ is of the following form: \begin{equation}\lambdabel{eqp2} \Lambda_{p^2}=c \mathbb{Z}_{p^2}+rD, {\sf e}nd{equation} where $c$ is a nonnegative integer and $D$ is a multiset on $\mathbb{Z}_{p^2}$. If $c = 0$, then $r \mid |\Lambda|$ and if $D={\sf e}mptyset$, then $p^2 \mid |\Lambda|$. Both cases contradict our assumption that $pq \mid \mid |S|=|\Lambda|$. Thus there are at least $r+1$ elements of $\Lambda$ projecting to the same element of $\mathbb{Z}_{p^2}$ so we obtain $\Phi_q \mid m_S$. If $q <r$, then there are two among them, whose difference is divisible by $q$ as well, implying $\Phi_r \mid m_S$, a contradiction. Thus we may assume $q>r$. Assume $\Phi_{p^2r} \mid m_{\Lambda}$. Then $\Lambda_{p^2r}$ is a multiset, which is the sum of $\mathbb{Z}_p$-cosets and $\mathbb{Z}_r$-cosets by Lemma \ref{lem0603}. Since $c>0$ and $D\ne0$ in equation {\sf e}qref{eqp2}, there is a $\mathbb{Z}_{pr}$-coset of $\mathbb{Z}_{p^2r}$, whose intersection with the multiset $\Lambda_{p^2r}$ is the sum of $k$ $\mathbb{Z}_p$-cosets and $l$ $\mathbb{Z}_r$-cosets with $k+l \ge 2$. Now we argue that $\Lambda_{p^2r}$ contains a $\mathbb{Z}_r$-coset. Assume this is not the case, thus we can write $\Lambda_{p^2r}$ as the sum of $\mathbb{Z}_p$-cosets. Then the number of elements of $\Lambda_{p^2r}$ contained in each $\mathbb{Z}_{pr}$-coset is divisible by $p$. If $\Phi_p \mid m_{\Lambda}$, then these numbers are the same so we would have $p^2 \mid |\Lambda|$, a contradiction. 
If $\Phi_{pr} \mid m_{\Lambda}$, then $\Lambda_{pr}$ is the sum of $\mathbb{Z}_p$ and $\mathbb{Z}_r$-cosets. If $\Lambda_{pr}$ contains a $\mathbb{Z}_r$-coset, then $\Lambda_{p^2r}$-contains a $\mathbb{Z}_{pr}$-coset since it is the sum of $\mathbb{Z}_p$-cosets so $\Lambda_{p^2r}$ contains a $\mathbb{Z}_r$-coset as required. If $\Lambda_{pr}$ is the sum of $\mathbb{Z}_p$-cosets only, then we again have $p^2 \mid |\Lambda|$, a contradiction. Since $c>0$ in equation {\sf e}qref{eqp2} and since we have a $\mathbb{Z}_{p^2r}$-coset containing a $\mathbb{Z}_r$-coset, which is projected on a $\mathbb{Z}_r$-coset contained in $\Lambda_{p^2r}$, we have two elements $\lambda_1,\lambda_2 \in \Lambda_{p^2r}$ with $pr=gcd(p^2r,\lambda_1-\lambda_2) $, which we may also write as $pr \mid\mid \lambda_1-\lambda_2$. It is not hard to see from the description of $\Lambda $ in this case that for every $d \mid p^2r$ we have $\lambda,\lambda' \in \Lambda_{p^2r}$ with $d \mid \mid \lambda-\lambda'$. Then we have $\Phi_p \Phi_{r}\Phi_{pr}\Phi_{p^2}\Phi_{p^2r} \mid m_{S}$ in $\mathbb{Z}_q[x]$ so by projecting $S$ to $\mathbb{Z}_{p^2r}$ we obtain a multiset of the form $c'\mathbb{Z}_{p^2r}+qD'$ ($c',D' \ge0$). If $c'=0$, then $S$ is a spectral set, which is the union of $\mathbb{Z}_q$-cosets, hence a tile. If $c'>0$, then $D'=0$ since a $\mathbb{Z}_q$-coset cannot contain more than $q$ points of $\Lambda$. Then $p^2r \mid |S|$, a contradiction. Thus we may assume $\Phi_{p^2r} \nmid m_{\Lambda}$ so we have $\Phi_{p^2} \mid m_{\Lambda}$. Then since $p^2 \nmid |S|$ we must have $\Phi_p \nmid m_{\Lambda}$ so we have $\Phi_{pr} \mid m_{\Lambda}$. We remind that we have already seen that $\Phi_q \mid m_S$ so $S$ is equidistributed on the $\mathbb{Z}_{p^2r}$-cosets. Now we apply this to obtain information about the structure of $S$. We investigate the intersection of $S$ with each $\mathbb{Z}_{p ^2r}$-cosets. Assume $s_1,s_2 \in S$ are contained in a $\mathbb{Z}_{p ^2r}$-coset but are not contained in a $\mathbb{Z}_{pr}$-coset. If their difference is not divisible by $r$, then we would have $\Phi_{p^2r} \mid m_{\Lambda}$, which we have excluded above. Similarly, if $s_3 \ne s_4 \in S$ are contained in a $\mathbb{Z}_{pr}$-coset, then they need to have different $r$-coordinates, otherwise we would have $\Phi_p \mid m_{\Lambda}$. Their difference is not divisible by $p^2$ since we would have $\Phi_r \mid m_{\Lambda}$, which is impossible since $r \nmid |\Lambda|=|S|$. Each $\mathbb{Z}_{p^2r}$-coset contains the same amount of elements of $S$ by $\Phi_q \mid m_S$, which is then at least $p$. The previous argument shows that each $\mathbb{Z}_{p^2r}$-coset contains exactly $p$ elements of $S$ and either they lie in different $\mathbb{Z}_{pr}$-cosets or they are all contained in one $\mathbb{Z}_{pr}$-coset with $p^2$ not dividing their differences. If for each $\mathbb{Z}_{p^2r}$-coset only one of the two types appears, then $S$ is a tile. Now we argue that $\Phi_{p^2q} \mid m_S$ or $\Phi_{n} \mid m_S$. By Proposition \ref{propreduction} we may assume $0 \in \Lambda$ and that $\Lambda$ is not contained in a proper subgroup of $\mathbb{Z}_{p^2qr}$. Then our claim follows from Lemma \ref{lempqnemoszt}. Assume $\Phi_{p^2q} \mid m_S$. Since $r \nmid |S|=|\Lambda|$ we have $S_{p^2q}$ is a set. Then $S_{p^2q}$ is the union of $\mathbb{Z}_p$ and $\mathbb{Z}_q$-cosets. 
Since there is at least one $\mathbb{Z}_{p^2r}$-coset such that each of its $\mathbb{Z}_{pr}$-cosets contains one element of $S$ we have that $S_{p^2q}$ is the union of $\mathbb{Z}_q$-cosets all contained in different $\mathbb{Z}_{pq}$-cosets. This contradicts the existence of $\mathbb{Z}_{p^2r}$-cosets of $\mathbb{Z}_{n}$, which contains a $\mathbb{Z}_{pr}$-coset containing exactly $p$ elements of $S$ (any pair of these elements have different $r$-coordinate). It follows that we may assume $\Phi_{n} \mid m_S$. It is clear from our previous discussion that there are $x,y$ with $p \mid x-y$ and $q \nmid x-y$ such that $\mathbb{Z}_{pr}+x$ and $\mathbb{Z}_{pr}+y$ contains $1$ and $p$ elements of $S$, respectively. We remind that if it contains $p$, then in that $\mathbb{Z}_{pr}$-coset, the difference of the elements of $S$ lying in this $\mathbb{Z}_{pr}$-coset is not divisible by either $p^2$ or with $r$. Then one can build up a $3$-dimensional cube in the $\mathbb{Z}_{pqr}$-coset containing $x$ and $y$, which contains exactly one element of $S$ or exactly two elements of $S$ of Hamming distance $2$, which contradicts the fact that $S$ satisfies the $3$-dimensional cube-rule in each $\mathbb{Z}_{pqr}$-cosets. A similar argument works if $pr \mid \mid |S|$ since the role of $q$ and $r$ is symmetric. {\bf Case 3.} Let us assume that $qr \mid\mid |S|$. \\ Then either $S$ is a complete set of residues $\pmod{qr}$, whence $S$ is a tile or there are two different elements of $S$, whose difference is divisible by $qr$. This would imply $\Phi_p \mid m_{\Lambda}$ or $\Phi_{p^2} \mid m_{\Lambda}$. In both of these cases we have $p \mid |\Lambda|=|S|$, a contradiction. \subsection{$|S|$ has at most one prime divisors among $p,q,r$} Let us assume $1 \mid\mid |S|$ or $p \mid\mid |S|$ or $q \mid\mid |S|$ or $r \mid\mid |S|$. If $\Phi_n \mid m_S$, then the intersection of $S$ with each $\mathbb{Z}_{pqr}$-coset satisfies the $3$-dimensional cube-rule. Then by Lemma \ref{lemcuberulecorr}, we cannot have $1 \mid\mid |S|$. If $S$ is a union of $\mathbb{Z}_p$-cosets, $\mathbb{Z}_q$-cosets or $\mathbb{Z}_r$-cosets, respectively, then by applying Proposition \ref{propreduction} we conclude that $S$ is a tile. A similar argument works for $\Lambda$. If $\Phi_n \mid m_{\Lambda}$, then $\Lambda$ is the union of $\mathbb{Z}_p$-coset or $\mathbb{Z}_q$-cosets or $\mathbb{Z}_r$-cosets. Then since $(\Lambda,S)$ is also a spectral pair we have by Proposition \ref{propreduction} that $\Lambda$ is a tile. Then $|\Lambda| = |S|\mid n$ so we have that $|S|=|\Lambda|=p$ or $|S|=|\Lambda|=q$ or $|S|=|\Lambda|=r$. By Lemma \ref{lemcuberulecorr} $\Lambda$ is just a $\mathbb{Z}_{|\Lambda|}$-coset, which implies that $\Phi_{|\Lambda|} \mid m_S$. Then it is easy to see that (T1) and (T2) are satisfied for $S$ so it is a tile. Thus we may assume $\Phi_n \nmid m_S$ and $\Phi_n \nmid m_{\Lambda}$. By Lemma \ref{lempqnemoszt} we have $\Phi_{p^2q} \Phi_{p^2r}\mid m_{S}$. Without loss of generality we may assume $r \nmid |S|$ since the role of $q$ and $r$ are symmetric. Then $S_{p^2q}$ is a set so it is the disjoint union of $\mathbb{Z}_p$-cosets and $\mathbb{Z}_q$-cosets. By Proposition \ref{propreduction} we have that $\lambdangle S \rangle= \mathbb{Z}_n$ so $\lambdangle S_{p^2q} \rangle= \mathbb{Z}_{p^2q}$. It follows using $0 \in S_{p^2q}$ that that at least two $\mathbb{Z}_{pq}$-cosets of $\mathbb{Z}_{p^2q}$ contain elements of $S_{p^2q}$. Note that the same argument works for $\Lambda_{p^2q}$ as well. 
We remind that $S_{p^2q}$ is the union of $\mathbb{Z}_p$-cosets and $\mathbb{Z}_q$-cosets. Assume there is a $\mathbb{Z}_{pq}$-coset in $\mathbb{Z}_{p^2q}$, whose intersection with $S_{p^2q}$ contains a $\mathbb{Z}_q$-coset and another $\mathbb{Z}_{pq}$-coset containing a $\mathbb{Z}_p$-coset also contained in $S_{p^2q}$. Then for every $d \mid p^2q$ with $d \ne p^2q$, $d \ne p$ there are $s_{d,1},s_{d,2} \in S$ such that $d \mid \mid \pi_{p^2q}(s_{d,1})-\pi_{p^2q}(s_{d,2})$, where $\pi_{p^2q}$ is the natural projection of $\mathbb{Z}_n$ to $\mathbb{Z}_{p^2q}$. Hence $\Phi_{d} \mid m_{\Lambda}$ or $\Phi_{dr} \mid m_{\Lambda}$ for every $ d \mid p^2q$ and $d \ne 1$, $d \ne pq$. Note that the exception for $d=p$ only applies if the intersection of each $\mathbb{Z}_{pq}$-coset in $\mathbb{Z}_{p^2q}$ with $S_{p^2q}$ is either empty or a $\mathbb{Z}_p$-coset or a $\mathbb{Z}_q$-coset It follows from Lemma \ref{lempqnemoszt} that $\Phi_{qr} \mid m_{\Lambdambda}$ or $\Phi_{pqr} \mid m_{\Lambdambda}$ since $\Phi_{n} \mid m_{\Lambdambda}$ has already been excluded. Let us assume $\Phi_{pqr} \mid m_{\Lambdambda}$. Then $\Phi_{d} \mid m_{\Lambda}$ or $\Phi_{dr} \mid m_{\Lambda}$ for every $ d \mid p^2q$ and $d \ne 1$. Thus by Lemma \ref{lemprojekcio} we have that $\Lambda_{p^2q}=c\mathbb{Z}_{p^2q}+rD$. If $c>0$, then $|S|\ge p^2q$. In this case $S$ is a tile or $r \mid |S|$ by the argument used in Case 1. If $c=0$, then we obtain $r \mid |S|$, a contradiction. It follows we may assume $\Phi_{qr} \mid m_{\Lambdambda}$ and $\Phi_{pqr} \nmid m_{\Lambdambda}$. Then $\Lambdambda_{qr}$ is the sum of $\mathbb{Z}_q$-cosets and $\mathbb{Z}_r$-cosets. If both types appear, then $\Phi_p \mid m_S$ or $\Phi_{p^2} \mid m_S$. The fact that $\Phi_p \mid m_S$ implies that the intersection of each $\mathbb{Z}_{pq}$-coset in $\mathbb{Z}_{p^2q}$ with $S_{p^2q}$ is of the same cardinality, which does not hold in our case since $p \ne q$. On the other hand, $\Phi_{p^2} \mid m_S$ would imply that $\Lambdambda_{p^2}$ is equidistributed in each $\mathbb{Z}_p$-coset, which does not hold since there is a $\mathbb{Z}_{pq}$-coset in $\mathbb{Z}_{p^2q}$ that intersects $S_{p^2q}$ in a single $\mathbb{Z}_q$-coset. Now we have that $\Lambdambda_{qr}$ is the disjoint union of $k\ge 2$ $\mathbb{Z}_q$-cosets by Proposition \ref{propreduction} \ref{itemreductiona}. If $\lambdambda$ and $\lambdambda'$ are elements of $\Lambdambda_{qr}$ of Hamming distance $2$, then for the elements $\overline{\lambdambda}$ and $\overline{\lambdambda'}$ of $\Lambdambda$ projecting to these elements we have $qr \mid \mid \overline{\lambdambda} - \overline{\lambdambda'}$ since $\Phi_{pqr} \nmid m_S$ and $\Phi_{n} \nmid m_S$. We obtain that $\overline{\lambdambda}$ and $\overline{\lambdambda'}$ are contained in the same $\mathbb{Z}_{pqr}$-coset. This holds for every element of $\Lambdambda$ if $q >2$ or if $k >2$, when Proposition \ref{propreduction} \ref{itemreductiona} gives that $S$ is a tile. The remaining case is $|S|=4 $, which is handled by Theorem 2.1 in \cite{KMkishalmaz}. Assume now that the intersection of $S_{p^2q}$ with each $\mathbb{Z}_{pq}$-coset is the union of (possible 0) $\mathbb{Z}_q$-cosets and this intersection is nonempty for at least two cosets of $\mathbb{Z}_{pq}$. Then it follows from $\Phi_n \nmid m_{\Lambda}$ that if $x,y \in S$, whose natural projections to $\mathbb{Z}_{p^2q}$ are not contained in a proper coset of $\mathbb{Z}_{pq}$, then their difference is divisible by $r$. 
If $q>2$, then it follows from $\lambdangle S_{p^2q}\rangle=\mathbb{Z}_{p^2q}$ that the difference of any two elements of $S$ is divisible by $r$ so it is contained in a proper coset of $\mathbb{Z}_n$. Proposition \ref{propreduction} gives that $S$ is a tile in this case. If $q=2$ but there are more than two $\mathbb{Z}_{pq}$-cosets containing elements of $S_{p^2q}$, then we build up a graph $\Gamma$ having vertex set $S$ and two vertices are adjacent if and only if their difference is not divisible by either $p$ or $q$. Again, the difference of two adjacent vertices is divisible by $r$. It is not hard to verify that $\Gamma $ is connected and then $S$ is contained in a $\mathbb{Z}_{p^2q}$-coset of $\mathbb{Z}_n$ so it is a tile. If there is a $\mathbb{Z}_{pq}$-coset in $\mathbb{Z}_{p^2q}$, whose intersection with $S_{p^2q}$ contains at least two $\mathbb{Z}_{q}$-cosets and another one which contains at least one $\mathbb{Z}_{q}$-coset, then again we have that for every $1 \ne d \mid p^2q$ there are elements $s_1,s_2$ of $S_{p^2q}$ such that $d \mid \mid s_1-s_2$, which case has already been handled above. The $|S|=4$ case follows from a Theorem 2.1 of Kolountzakis and Matolcsi \cite{KMkishalmaz}, which says that spectral sets of cardinality at most $5$ in finite abelian groups are tiles. Thus it remains that $S_{p^2q} $ is the union of $\mathbb{Z}_p$-cosets only. We also have that these $\mathbb{Z}_p$-cosets are not contained in a $\mathbb{Z}_{pq}$-coset or a $\mathbb{Z}_{p^2}$-coset of $\mathbb{Z}_{p^2q}$. For every $s \in S_{p^2q}$ the unique element of $S$ projecting to $s$ and it is denoted by $\bar{s}$. Assume that for every $x \in S_{p^2q}$ there is $y \in S_{p^2q}$ such that $p \nmid x-y$ and $q \nmid x-y$. Then for every $x' \in x+\mathbb{Z}_p \subset S_{p^2q}$, we have $p \nmid x'-y$ and $q \nmid x'-y$. Since $\Phi_n \nmid m_{\Lambda}$ we have that $r \mid \bar{x}-\bar{y}$ and $\bar{x'}-\bar{y}$ so $r \mid \bar{x}-\bar{x'}$. The same holds for every element of $x+\mathbb{Z}_p$ so $\{ \overline{x+u} \colon~ u \in \mathbb{Z}_p \}$ is a $\mathbb{Z}_p$-coset. Therefore $S$ is the union of $\mathbb{Z}_p$-cosets, which is handled by Proposition \ref{propreduction}. If there is a $x \in S_{p^2q}$ such that $p \mid x-y$ or $q \mid x-y$ for every $y \in S_{p^2q}$, then $S_{p^2q}$ is contained in $(x+\mathbb{Z}_{pq}) \cup (x+\mathbb{Z}_{p^2})$. Since $S_{p^2q}$ is not contained in any of this two sets appearing in the union we again have that for every $d \mid p^2q$ with $d \ne p^2q$ there are $x,y \in S_{p^2q}$ with $d \mid \mid x-y$, which case has already been settled. \begin{thebibliography}{99} \bibitem{cm} E. M. Coven, A. Meyerowitz. \newblock Tiling the integers with translates of one finite set. \newblock {{\sf e}m Journal of Algebra} {\bfserieseries 212}(1): 161--174, 1999. \bibitem{DL2014} D. E. Dutkay, C. K. Lai. \newblock ``Some reductions of the spectral set conjecture to integers''. \newblock {{\sf e}m Mathematical Proceedings of the Cambridge Philosophical Society}, \textbf{156}(1), 123--135: 2014. \bibitem{FMM2006} B. Farkas, M. Matolcsi, P. M\'ora. \newblock On Fuglede's conjecture and the existence of universal spectra. \newblock {{\sf e}m Journal of Fourier Analysis and Applications}, \textbf{12}(5): 483--494, 2006. \bibitem{F1974} B. Fuglede. \newblock Commuting self-adjoint partial differential operators and a group theoretic problem. \newblock {{\sf e}m Journal of Functional Analysis}, \textbf{16}(1): 101--121, 1974. \bibitem{KMkishalmaz} M.N. Kolountzakis, M. 
\bibitem{KMkishalmaz} M. N. Kolountzakis, M. Matolcsi.
\newblock Complex Hadamard matrices and the spectral set conjecture.
\newblock {\em Collect. Math.}, \textbf{57}: 281--291, 2006.

\bibitem{laba} I. \L aba.
\newblock The spectral set conjecture and multiplicative properties of roots of polynomials.
\newblock {\em Journal of the London Mathematical Society}, \textbf{65}(3): 661--671, 2002.

\bibitem{labakonyagin} S. Konyagin, I. \L aba.
\newblock Spectra of certain types of polynomials and tiling of integers with translates of finite sets.
\newblock {\em Journal of Number Theory}, \textbf{103}(2): 267--280, 2003.

\bibitem{M2004} M. Matolcsi.
\newblock Fuglede's conjecture fails in dimension 4.
\newblock {\em Proceedings of the American Mathematical Society}, \textbf{133}(10): 3021--3026, 2005.

\bibitem{negyen} G. Kiss, R. D. Malikiosis, G. Somlai, M. Vizer.
\newblock On the discrete Fuglede and Pompeiu problems.
\newblock {\em Analysis \& PDE}, \textbf{13}(3): 765--788, 2020.

\bibitem{KM2006a} M. N. Kolountzakis, M. Matolcsi.
\newblock Tiles with no spectra.
\newblock {\em Forum Mathematicum}, \textbf{18}(3): 519--528, 2006.

\bibitem{Romanos2020} R. D. Malikiosis.
\newblock On the structure of spectral and tiling subsets of cyclic groups.
\newblock {\em Forum of Mathematics, Sigma}, vol.~10, Cambridge University Press, 2022.

\bibitem{Kol} R. D. Malikiosis, M. N. Kolountzakis.
\newblock Fuglede's conjecture on cyclic groups of order $p^nq$.
\newblock {\em Discrete Analysis}, 2017:12, 16 pp.

\bibitem{shi} R. Shi.
\newblock Fuglede's conjecture holds on cyclic groups $\mathbb{Z}_{pqr}$.
\newblock {\em Discrete Analysis}, 2019:14, 14 pp.

\bibitem{T2004} T. Tao.
\newblock Fuglede's conjecture is false in 5 and higher dimensions.
\newblock {\em Mathematical Research Letters}, \textbf{11}(2): 251--258, 2004.

\end{thebibliography}

\begin{dajauthors}
\begin{authorinfo}[pgom]
G\'abor Somlai\\
E\"otv\"os Lor\'and University\\
Budapest, Hungary\\
gabor.somlai\imageat{}ttk\imagedot{}elte\imagedot{}hu\\
\url{http://zsomlei.web.elte.hu}
\end{authorinfo}
\end{dajauthors}

\end{document}
\begin{document}

\title{Integrality of Volumes of Representations}

\author[M.~Bucher]{Michelle Bucher}
\address{Section de Math\'ematiques, Universit\'e de Gen\`eve, 2-4 rue du Li\`evre, Case postale 64, 1211 Gen\`eve 4, Suisse}
\email{[email protected]}

\author[M.~Burger]{Marc Burger}
\address{Department Mathematik, ETH Z\"urich, R\"amistrasse 101, CH-8092 Z\"urich, Switzerland}
\email{[email protected]}

\author[A.~Iozzi]{Alessandra Iozzi}
\address{Department Mathematik, ETH Z\"urich, R\"amistrasse 101, CH-8092 Z\"urich, Switzerland}
\email{[email protected]}

\thanks{Michelle Bucher was partially supported by Swiss National Science Foundation projects PP00P2-128309/1, 200020-178828/1 and 200021-169685, Alessandra Iozzi was partially supported by the Swiss National Science Foundation projects 2000021-127016/2 and 200020-144373, and Marc Burger was partially supported by the Swiss National Science Foundation project 200020-144373. The authors thank the Institut Mittag-Leffler in Djursholm, Sweden, and the Institute for Advanced Study in Princeton, NJ, for their warm hospitality during the preparation of this paper.}

\date{\today}

\begin{abstract}
Let $M$ be an oriented complete hyperbolic $n$-manifold of finite volume. Using the definition of the volume of a representation previously given by the authors in \cite{Bucher_Burger_Iozzi_Mostow}, we show that the volume of a representation $\rho\colon \pi_1(M)\rightarrow\mathrm{Isom}^+({\mathbb H}^n)$, properly normalized, takes integer values if $n$ is even and $\geq4$. If $M$ is not compact and $3$-dimensional, it is known that the volume is not locally constant. In this case we give explicit examples of representations with volume as arbitrary as the volume of hyperbolic manifolds obtained from $M$ via Dehn fillings.
\end{abstract}

\maketitle

\section{Introduction}\label{sec:intro}

Let $M$ be a connected oriented complete hyperbolic manifold of finite volume, which we represent as the quotient $M=\Gamma\backslash{\mathbb H}^n$ of real hyperbolic $n$-space ${\mathbb H}^n$ by a torsion-free lattice $\Gamma<\mathrm{Isom}^+({\mathbb H}^n)$ in the group of orientation preserving isometries of ${\mathbb H}^n$. Given a representation $\rho\colon \Gamma\rightarrow\mathrm{Isom}^+({\mathbb H}^n)$, our central object of study is the volume ${\operatorname{Vol}}(\rho)$ of $\rho$ as defined in \cite{Burger_Iozzi_Wienhard_toledo} for $n=2$ and in general in \cite{Bucher_Burger_Iozzi_Mostow}. This notion extends the classical one introduced in \cite{Goldman82} for $M$ compact and, as was shown in \cite{Kim_Kim_OnDeformation}, if $M$ is of finite volume it coincides with the definitions introduced by other authors \cite{Dunfield, Francaviglia, Kim_Kim_Volume}. We refer to \S~\ref{sec:congruence} for the definition of ${\operatorname{Vol}}(\rho)$ and content ourselves with listing some of its main properties.
\begin{enumerate}
\item The volume function is uniformly bounded on the representation variety $\textup{Hom}(\Gamma,\mathrm{Isom}^+({\mathbb H}^n))$, that is
\begin{equation*}
|{\operatorname{Vol}}(\rho)|\leq{\operatorname{Vol}}(M)
\end{equation*}
and
\begin{equation*}
{\operatorname{Vol}}({\operatorname{Id}}_\Gamma)={\operatorname{Vol}}(M)\,,
\end{equation*}
where ${\operatorname{Id}}_\Gamma\colon\Gamma\hookrightarrow\mathrm{Isom}^+({\mathbb H}^n)$ is the canonical injection.
\item (Rigidity) There is equality
\begin{equation*}
{\operatorname{Vol}}(\rho)={\operatorname{Vol}}(M)
\end{equation*}
if and only if either:
\begin{enumerate}
\item\label{eq:1} $n=2$ and $\rho$ is the holonomy representation of a (possibly infinite volume) complete hyperbolization of the smooth surface underlying $M$, \cite{Goldman_thesis, Burger_Iozzi_difff, Koziarz_Maubon}, or
\item\label{eq:2} $n=3$ and $\rho$ is conjugate to ${\operatorname{Id}}_\Gamma$, \cite{Dunfield, Francaviglia_Klaff, Besson_Courtois_Gallot_comm, Bucher_Burger_Iozzi_Mostow}.
\end{enumerate}
\item\label{eq:3} The volume function is continuous on $\textup{Hom}(\Gamma,\mathrm{Isom}^+({\mathbb H}^n))$ (see Proposition~\ref{thm:cont} in Appendix~\ref{sec:continuity}).
\item\label{eq:4} If either $M$ is compact and $n\geq2$ \cite{Reznikov} (see also \cite{DLSW19}), or $M$ is of finite volume and $n\geq4$ \cite{Kim_Kim_OnDeformation}, the volume is constant on connected components of the representation variety. As a consequence it takes only finitely many values.
\item\label{eq:5} If $M$ is a non-compact surface, the range of ${\operatorname{Vol}}$ coincides with the interval $[-\chi(M),\chi(M)]$, where $\chi(M)$ is the Euler characteristic of $M$.
\item\label{eq:6} If $M$ is compact and $n$ is even, then
\begin{equation*}
\frac{2{\operatorname{Vol}}(\rho)}{{\operatorname{Vol}}(S^{n})}\in{\mathbb Z}\,,
\end{equation*}
where here and in the sequel ${\operatorname{Vol}}(S^{n})$ is the volume of the $n$-sphere $S^{n}$ of constant curvature $1$.
\end{enumerate}

Our main result is a generalization of the integrality property \eqref{eq:6} to the case in which $M$ is not compact, and $n$ is even and $\geq4$. We remark that this is in sharp contrast with \eqref{eq:5}.

\begin{thm}\label{thm: main thm}
Let $\Gamma < \mathrm{Isom}^+(\mathbb{H}^{2m})$ be a torsion-free lattice and let $\rho\colon \Gamma \rightarrow \mathrm{Isom}^+(\mathbb{H}^{2m})$ be a representation. Assume that $2m\geq4$.
\begin{enumerate}
\item If the manifold $M=\Gamma\setminus \mathbb{H}^{2m}$ has only toric cusps, then
\begin{equation*}
\frac{2 {\operatorname{Vol}}(\rho)}{{\operatorname{Vol}}(S^{2m})}\in {\mathbb Z}\,.
\end{equation*}
\item In general,
\begin{equation*}
\frac{2 {\operatorname{Vol}}(\rho)}{{\operatorname{Vol}}(S^{2m})}\in \frac{1}{B_{2m-1}}\cdot {\mathbb Z}\,,
\end{equation*}
where $B_{2m-1}$ is the Bieberbach number in dimension $2m-1$.
\end{enumerate}
\end{thm}

We recall that the Bieberbach number $B_d$ is the smallest integer such that any compact flat $d$-manifold has a covering of degree $B_d$ that is a torus. Such $d$-manifolds occur as connected components of the boundary of a compact core. Recall in fact that, in the context of hyperbolic geometry, a compact core $N$ of $M$ is a compact submanifold that is obtained as the quotient by $\Gamma$ of the complement in ${\mathbb H}^{d+1}$ of a $\Gamma$-invariant family of pairwise disjoint open horoballs centered at the cusps.

The strategy of the proof of Theorem~\ref{thm: main thm} builds on results in \cite{Burger_Iozzi_Wienhard_toledo}, where the authors studied the case in which $\dim M=2$ and established congruence relations for ${\operatorname{Vol}}(\rho)$. In order to implement this strategy, we show that the continuous bounded class
\begin{equation*}
\omega_{2m}^{\mathrm{b}}\in{\rm H}_{\rm cb}^{2m}(\operatorname{SO}(2m,1)^{\circ},{\mathbb R})\,,
\end{equation*}
defined by the volume form on ${\mathbb H}^{2m}$, has a canonical representative
\begin{equation*}
\varepsilon_{2m}^{\mathrm{B,b}}\in{\rm H}_{\rm b}^{2m}(\operatorname{SO}(2m,1)^{\circ},{\mathbb Z})
\end{equation*}
in the bounded Borel cohomology of $\operatorname{SO}(2m,1)^{\circ}$ that, under the change of coefficients ${\mathbb Z}\rightarrow{\mathbb R}$, corresponds to $(-1)^m\frac{2}{{\operatorname{Vol}}(S^{2m})}\omega_{2m}^{\mathrm{b}}$. For $2m=2$, $\varepsilon_{2}^{\mathrm{b}}$ coincides with the classical bounded Euler class as defined in \cite{Ghys}. We then establish a congruence relation modulo ${\mathbb Z}$ for $(-1)^m\frac{2}{{\operatorname{Vol}}(S^{2m})}{\operatorname{Vol}}(\rho)$ in terms of invariants attached to the boundary components of $N$, which are now assumed to be $(2m-1)$-tori. If $T_i$ is a component of $\partial N$ and $\rho_i\colon{\mathbb Z}^{2m-1}\rightarrow\operatorname{SO}(2m,1)^{\circ}$ is the restriction of $\rho$ to $\pi_1(T_i)\simeq{\mathbb Z}^{2m-1}$, then the invariant attached to $T_i$ is
\begin{equation*}
\rho_i^\ast(\varepsilon_{2m}^{\mathrm{b}})\in{\rm H}_{\rm b}^{2m}({\mathbb Z}^{2m-1},{\mathbb Z})\simeq{\mathbb R}/{\mathbb Z}\,.
\end{equation*}
In the case in which $m=1$, $\rho_i^\ast(\varepsilon_2^{\mathrm{b}})$ coincides with the negative of the rotation number of $\rho_i(1)\in\operatorname{SO}(2,1)^{\circ}$, and we show in \S~\ref{sec:vanishing} that, if $m\geq2$, $\rho_i^\ast(\varepsilon_{2m}^{\mathrm{b}})$ always vanishes.

\begin{rem*}
In general $\Gamma$ always has a subgroup of finite index all of whose cusps are toric. However, little is known about which collections of compact flat $(2m-1)$-manifolds arise as the components of the boundary of a compact core $N$ as above.
If $\dim M=4$, it is known that there are compact flat $3$-manifolds that are not diffeomorphic to $\partial N$, \cite[Corollary~1.4]{Long_Reid_00}, while on the positive side there are hyperbolic $4$-manifolds for which $\partial N$ is a $3$-torus, \cite{Kolpakov_Martelli}. This leads to the following:
\end{rem*}

\begin{question}
If $\Lambda$ is the fundamental group of a compact flat $(2m-1)$-manifold and $\rho\colon\Lambda\rightarrow\operatorname{SO}(2m,1)^{\circ}$ is any representation, does $\rho^\ast(\varepsilon_{2m}^{\mathrm{b}})$ vanish for $2m\geq4$?
\end{question}

Thus it is really in all odd dimensions that the nature of the values of ${\operatorname{Vol}}$ remains mysterious, though, according to the results in \cite{Kim_Kim_OnDeformation}, for $n\geq4$ there are only finitely many possibilities.

We end this introduction by giving a result in dimension $3$. In this case, the character variety of $\Gamma<\mathrm{Isom}^+({\mathbb H}^3)$ is smooth near ${\operatorname{Id}}_\Gamma$, and its complex dimension near ${\operatorname{Id}}_\Gamma$ equals the number $h$ of cusps of $M$ \cite{Thurston_notes}. As a consequence of the volume rigidity theorem and the continuity of ${\operatorname{Vol}}$, the image of ${\operatorname{Vol}}$ contains at least an interval $[{\operatorname{Vol}}(M)-\epsilon,{\operatorname{Vol}}(M)]$ for some $\epsilon>0$. Special points in the character variety of $\Gamma$ come from Dehn fillings of $M$. Let $M_\tau$ denote the compact manifold obtained from $M$ by Dehn surgery along a choice of $h$ simple closed loops $\tau=\{\tau_1,\dots,\tau_h\}$. If the length of each geodesic loop $\tau_j$ is larger than $2\pi$, then $M_\tau$ admits a hyperbolic structure \cite{Thurston_notes}, and an analytic formula for ${\operatorname{Vol}}(M_\tau)$ depending on the lengths of the $\tau_j$ has been given in \cite{Neumann_Zagier}.

\begin{prop}\label{prop:intro}
Let $M_\tau$ be the compact $3$-manifold obtained by Dehn filling from the hyperbolic $3$-manifold $M$. If $\rho_\tau\colon \pi_1(M)\rightarrow\mathrm{Isom}^+({\mathbb H}^3)$ is the representation obtained from the composition of the quotient homomorphism $\pi_1(M)\rightarrow\pi_1(M_\tau)$ with the holonomy representation of the hyperbolic structure on $M_\tau$, then
\begin{equation*}
{\operatorname{Vol}}(\rho_\tau)={\operatorname{Vol}}(M_\tau)\,.
\end{equation*}
\end{prop}

Thus, with our cohomological definition, ${\operatorname{Vol}}(\rho)$ gives a continuous interpolation between the special values ${\operatorname{Vol}}(M_\tau)$. A natural question here is whether ${\operatorname{Vol}}$ is real analytic.

\bigskip

The structure of the paper is as follows. In \S~\ref{sec:0} we summarize the main facts about the various group cohomology theories used in this paper.
In \S~\ref{sec:euler} we define a Borel cohomology class $\varepsilon_{2m}\in{\rm H}_\mathrm{B}^{2m}(\operatorname{SO}(2m,1),{\mathbb Z}_\epsilon)$ with twisted coefficients (see \S~\ref{sec:0}) and relate it to an explicit multiple of the class $\omega_{2m}\in{\rm H}_\mathrm{B}^{2m}(\operatorname{SO}(2m,1),{\mathbb R}_\epsilon)$ defined by the volume form on ${\mathbb H}^{2m}$ (see \eqref{eq:vanEst+}). In \S~\ref{sec:bddEuler} we first show that $\varepsilon_{2m}$ has a unique bounded representative $\varepsilon_{2m}^\mathrm{b}\in{\rm H}_\mathrm{B,b}^{2m}(\operatorname{SO}(2m,1),{\mathbb Z}_\epsilon)$ (Proposition~\ref{prop:bdd-euler}); in \S~\ref{sec:congruence} we proceed to define the volume ${\operatorname{Vol}}(\rho)$ of a representation $\rho\colon\pi_1(M)\rightarrow\mathrm{Isom}^+({\mathbb H}^{2m})=\operatorname{SO}(2m,1)^{\circ}$ and use the bounded integral class $\varepsilon_{2m}^\mathrm{b}$ to establish a congruence relation for ${\operatorname{Vol}}(\rho)$ (Theorem~\ref{thm: vol modulo boundary}). In \S~\ref{sec:vanishing}, which is the core of the paper, we show that the contributions from the various boundary components of a compact core of $M$ to the congruence relation all vanish for $2m\geq4$. In \S~\ref{sec:examples} we relate the volume of the representations of $\pi_1(M)$ obtained by Dehn surgery to the volumes of the corresponding manifolds. In the Appendix we prove the continuity of the map $\rho\mapsto{\operatorname{Vol}}(\rho)$.

\section{Various cohomology theories}\label{sec:0}

We collect in this section cohomological results that will be used throughout the paper. Given a locally compact second countable group $G$, we consider ${\mathbb Z}$, ${\mathbb R}$ and ${\mathbb R}/{\mathbb Z}$ as trivial modules. If $\epsilon\colon G\rightarrow\{-1,+1\}$ is a continuous homomorphism, we denote by ${\mathbb Z}_\epsilon$ and ${\mathbb R}_\epsilon$ the corresponding coefficient $G$-modules, where $g_\ast t=\epsilon(g) t$ for $t\in{\mathbb Z}, {\mathbb R}$, and by ${\mathbb R}_\epsilon/{\mathbb Z}_\epsilon$ the corresponding quotient module. If $A$ is any of the above $G$-modules, ${\rm H}_\mathrm{B}^\bullet (G,A)$ denotes the cohomology of the complex of Borel measurable $A$-valued cochains on $G$. If $A={\mathbb R},{\mathbb R}_\epsilon$, we will also need the continuous cohomology ${\rm H}_{\rm c}^\bullet(G,A)$ with coefficients in $A$, and we will use that for $A={\mathbb R},{\mathbb R}_\epsilon$ the comparison map
\begin{equation}\label{eq:0.1}
\xymatrix{
{\rm H}_{\rm c}^\bullet(G,A)\ar[r]^-\simeq &{\rm H}_\mathrm{B}^\bullet(G,A)
}
\end{equation}
is an isomorphism \cite[Theorem~A]{Austin_Moore}.
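We note in passing that for a countable discrete group $G$ (such as the fundamental groups $\pi_1(M)$ and ${\mathbb Z}^{2m-1}$ considered below) every cochain is automatically Borel, so that ${\rm H}_\mathrm{B}^\bullet(G,A)$ is simply the ordinary group cohomology of $G$ with coefficients in $A$; the same remark applies to the bounded variants introduced below.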
To compute the continuous cohomology of $G$ we use repeatedly that if $G\times V\rightarrow V$ is a proper smooth action of a Lie group $G$ on a smooth manifold $V$, there is an isomorphism
\begin{equation}\label{eq:0.2}
{\rm H}_{\rm c}^\bullet(G,{\mathbb R})\simeq{\rm H}^\bullet(\Omega^\bullet(V)^G)
\end{equation}
with the cohomology of the complex $\Omega^\bullet(V)^G$ of $G$-invariant differential forms on $V$, \cite[Theorem~6.1]{Hochschild_Mostow}. If $V$ is a symmetric space $G/K$, then
\begin{equation}\label{eq:0.25}
{\rm H}_{\rm c}^\bullet(G,{\mathbb R})\simeq\Omega^\bullet(V)^G\,,
\end{equation}
since every $G$-invariant differential form on $G/K$ is closed.

Another result we will use is Wigner's isomorphism \cite{Wigner}, or rather a special case thereof \cite[Theorem~E]{Austin_Moore}: namely, if $A={\mathbb Z}$ or $A={\mathbb Z}_\epsilon$, there is a natural isomorphism
\begin{equation}\label{eq:0.3}
{\rm H}_\mathrm{B}^\bullet(G,A)\simeq{\rm H}_\mathrm{sing}^\bullet(BG,A)\,,
\end{equation}
where $BG$ is the classifying space of $G$ and ${\rm H}_\mathrm{sing}^\bullet$ refers to singular cohomology. A vanishing theorem that is often used states that if $L$ is compact, then
\begin{equation}\label{eq:0.4}
{\rm H}_{\rm c}^k(L,{\mathbb R})=0\quad\text{ and }\quad{\rm H}_{\rm c}^k(L,{\mathbb R}_\epsilon)=0
\end{equation}
for all $k\geq1$.

Turning to bounded cohomology, ${\rm H}_\mathrm{B,b}^\bullet(G,A)$ denotes the cohomology of bounded $A$-valued Borel cochains on $G$. An important point is that if $A={\mathbb R}$ or $A={\mathbb R}_\epsilon$, the comparison map from continuous bounded to Borel bounded cohomology induces an isomorphism
\begin{equation}\label{eq:0.5}
\xymatrix{
{\rm H}_{\rm cb}^\bullet(G,A)\ar[r]^-\simeq &{\rm H}_\mathrm{B,b}^\bullet (G,A)
}
\end{equation}
as can be readily verified using the regularization operators defined in \cite[\S~4]{Blanc}. Analogously to the vanishing of the continuous cohomology for compact groups, if $P$ is amenable, then
\begin{equation}\label{eq:06}
{\rm H}_{\rm cb}^k(P,{\mathbb R})\cong {\rm H}^k_\mathrm{B,b}(P,{\mathbb R})=0
\end{equation}
for all $k\geq1$.

Consider now the two short exact sequences of coefficients
\begin{equation}\label{eq:0.8-}
\xymatrix@1{
0\ar[r] &{\mathbb Z}\,\,\ar@{^{(}->}[r] &{\mathbb R}\ar@{->>}[r] &{\mathbb R}/{\mathbb Z}\ar[r] &0\, ,
}
\end{equation}
\begin{equation*}
\xymatrix@1{
0\ar[r] &{\mathbb Z}_\epsilon\ar@{^{(}->}[r] &{\mathbb R}_\epsilon\ar@{->>}[r] &{\mathbb R}_\epsilon/{\mathbb Z}_\epsilon\ar[r] &0\, .
}
\end{equation*}
Using that ${\mathbb R}\rightarrow {\mathbb R}/{\mathbb Z}$ admits a bounded Borel section, one readily obtains, both for the trivial and the nontrivial modules, long exact sequences in Borel and bounded Borel cohomology, with commutative squares coming from the comparison maps $c_{{\mathbb Z}}$ and $c_{{\mathbb R}}$ between these two cohomology theories:
{\small
\begin{equation}\label{eqn:0.8}
\xymatrix{
\dots\ar[r] &{\rm H}_\mathrm{B}^{2m-1}(G,{\mathbb R}/{\mathbb Z})\ar[r]^-{\delta^{\mathrm{b}}}\ar@{=}[d] &{\rm H}_\mathrm{B,b}^{2m}(G,{\mathbb Z})\ar[r]\ar[d]^{c_{{\mathbb Z}}} &{\rm H}_\mathrm{B,b}^{2m}(G,{\mathbb R})\ar[r]\ar[d]^{c_{{\mathbb R}}} &{\rm H}_\mathrm{B}^{2m}(G,{\mathbb R}/{\mathbb Z})\ar[r]\ar@{=}[d] &\dots\\
\dots\ar[r] &{\rm H}_\mathrm{B}^{2m-1}(G,{\mathbb R}/{\mathbb Z})\ar[r]_-{\delta} &{\rm H}_\mathrm{B}^{2m}(G,{\mathbb Z})\ar[r] &{\rm H}_\mathrm{B}^{2m}(G,{\mathbb R})\ar[r] &{\rm H}_\mathrm{B}^{2m}(G,{\mathbb R}/{\mathbb Z})\ar[r] &\dots\,,
}
\end{equation}}
and
{\small
\begin{equation}\label{eq:0.7}
\xymatrix{
\dots\ar[r] &{\rm H}_\mathrm{B}^{2m-1}(G,{\mathbb R}_\epsilon/{\mathbb Z}_\epsilon)\ar[r]^-{\delta^{\mathrm{b}}}\ar@{=}[d] &{\rm H}_\mathrm{B,b}^{2m}(G,{\mathbb Z}_\epsilon)\ar[r]\ar[d]^{c_{{\mathbb Z}}} &{\rm H}_\mathrm{B,b}^{2m}(G,{\mathbb R}_\epsilon)\ar[r]\ar[d]^{c_{{\mathbb R}}} &{\rm H}_\mathrm{B}^{2m}(G,{\mathbb R}_\epsilon/{\mathbb Z}_\epsilon)\ar[r]\ar@{=}[d] &\dots\\
\dots\ar[r] &{\rm H}_\mathrm{B}^{2m-1}(G,{\mathbb R}_\epsilon/{\mathbb Z}_\epsilon)\ar[r]_-{\delta} &{\rm H}_\mathrm{B}^{2m}(G,{\mathbb Z}_\epsilon)\ar[r] &{\rm H}_\mathrm{B}^{2m}(G,{\mathbb R}_\epsilon)\ar[r] &{\rm H}_\mathrm{B}^{2m}(G,{\mathbb R}_\epsilon/{\mathbb Z}_\epsilon)\ar[r] &\dots\,,
}
\end{equation}}
where $\delta$ and $\delta^\mathrm{b}$ are the connecting homomorphisms.

\section{Proportionality between volume and Euler class}\label{sec:euler}\label{sec:newsectionProp}

Let $\mathrm{Isom}({\mathbb H}^n)$ be the full group of isometries of real hyperbolic space ${\mathbb H}^n$, let $\epsilon\colon \mathrm{Isom}({\mathbb H}^n)\rightarrow\{-1,1\}$ denote the homomorphism with kernel $\mathrm{Isom}^+({\mathbb H}^n)$, and let ${\mathbb Z}_\epsilon\subset{\mathbb R}_\epsilon$ be the corresponding modules.
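To illustrate the role of the twisted coefficients with a standard observation: an isometry $g$ of ${\mathbb H}^n$ pulls back the hyperbolic volume form $\omega_{{\mathbb H}^n}$ according to
\begin{equation*}
g^\ast\omega_{{\mathbb H}^n}=\epsilon(g)\,\omega_{{\mathbb H}^n}\,,
\end{equation*}
since orientation reversing isometries reverse the sign of the volume form; this equivariance is exactly what the modules ${\mathbb R}_\epsilon$ and ${\mathbb Z}_\epsilon$ are designed to record.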
Using \eqref{eq:0.1}, \cite[Proposition~2.1]{Bucher_Burger_Iozzi_Mostow} and \eqref{eq:0.25}, we have isomorphisms
\begin{equation}\label{eq:vanEst+}
{\rm H}_\mathrm{B}^n(\mathrm{Isom}({\mathbb H}^n),{\mathbb R}_\epsilon) \simeq {\rm H}_{\rm c}^n(\mathrm{Isom}({\mathbb H}^n),{\mathbb R}_\epsilon) \simeq {\rm H}_{\rm c}^n(\mathrm{Isom}({\mathbb H}^n)^{\circ},{\mathbb R}) \simeq\Omega({\mathbb H}^n)^{\mathrm{Isom}({\mathbb H}^n)^{\circ}}
\end{equation}
and denote by $\omega_n\in{\rm H}_\mathrm{B}^n(\mathrm{Isom}({\mathbb H}^n),{\mathbb R}_\epsilon)$ the generator corresponding to the volume form on ${\mathbb H}^n$.

If $n=2m$ is even, we can identify $\mathrm{Isom}({\mathbb H}^{2m})$ with $\operatorname{SO}(2m,1)$. The diagram of injections
\begin{equation*}\label{equ:inclusions}
\xymatrix{
&\mathrm{O}(2m)\ar@{^{(}->}[dr]\ar@{_{(}->}[dl]& \\
\mathrm{GL}(2m,{\mathbb R}) & & \mathrm{SO}(2m,1)}
\end{equation*}
realizes $\mathrm{O}(2m)$ as a maximal compact subgroup of both $\operatorname{SO}(2m,1)$ and $\operatorname{GL}(2m,{\mathbb R})$ and induces homotopy equivalences
\begin{equation*}
B\operatorname{GL}(2m,{\mathbb R})\simeq B\mathrm{O}(2m)\simeq B\operatorname{SO}(2m,1)\,.
\end{equation*}
The homomorphism
\begin{equation*}
\operatorname{GL}(2m,{\mathbb R})\rightarrow\{-1,1\}
\end{equation*}
associating to a matrix the sign of its determinant coincides on $\mathrm{O}(2m)$ with the restriction of $\epsilon$ and will also be denoted by $\epsilon$. Thus we obtain isomorphisms
\begin{equation}\label{eq:2.1}
{\rm H}_\mathrm{sing}^{2m}(B\operatorname{GL}(2m,{\mathbb R}),{\mathbb Z}_\epsilon) \cong {\rm H}_\mathrm{sing}^{2m}(B\mathrm{O}(2m),{\mathbb Z}_\epsilon) \cong {\rm H}_\mathrm{sing}^{2m}(B\operatorname{SO}(2m,1),{\mathbb Z}_\epsilon)\,.
\end{equation}
The (universal) Euler class $\varepsilon_{2m}^\mathrm{univ}\in {\rm H}^{2m}_\mathrm{sing}(B\mathrm{GL}^+(2m,{\mathbb R}),{\mathbb Z})$ is a singular class in the integral cohomology of the classifying space $B\mathrm{GL}^+(2m,{\mathbb R})$ of oriented ${\mathbb R}^{2m}$-vector bundles (see \cite[\S~9]{Milnor_Stasheff}). It is the obstruction to the existence of a nowhere vanishing section. As it changes sign when the orientation is reversed, it extends to a class $\varepsilon_{2m}^\mathrm{univ}\in {\rm H}^{2m}_\mathrm{sing}(B\mathrm{GL}(2m,{\mathbb R}),{\mathbb Z}_\epsilon)$.
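For instance, if $f_{S^{2m}}\colon S^{2m}\rightarrow B\mathrm{GL}^+(2m,{\mathbb R})$ denotes a classifying map for the tangent bundle of the even dimensional sphere, then
\begin{equation*}
\langle f_{S^{2m}}^\ast(\varepsilon_{2m}^\mathrm{univ}),[S^{2m}]\rangle=\chi(S^{2m})=2\neq0\,,
\end{equation*}
so that $TS^{2m}$ admits no nowhere vanishing section; this is the classical hairy ball theorem and illustrates the obstruction-theoretic description above.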
Furthermore, if $M$ is a closed oriented $2m$-dimensional manifold, its tangent bundle is classified by a (unique up to homotopy) classifying map
\begin{equation*}
f\colon M\rightarrow B \mathrm{GL}^+(2m,{\mathbb R}) \hookrightarrow B\mathrm{GL}(2m,{\mathbb R})\simeq B\mathrm{O}(2m)
\end{equation*}
inducing a map
\begin{equation*}
f^*\colon {\rm H}^{2m}_\mathrm{sing}(B\mathrm{GL}(2m,{\mathbb R}),{\mathbb Z}_\epsilon)\cong{\rm H}^{2m}_\mathrm{sing}(B\mathrm{O}(2m),{\mathbb Z}_\epsilon) \rightarrow {\rm H}^{2m}_\mathrm{sing}(M,{\mathbb Z})
\end{equation*}
and thus
\begin{equation}\label{equ:EulerChar}
\chi(M)=\langle f^*(\varepsilon_{2m}^\mathrm{univ}),[M]\rangle\,.
\end{equation}
Now we use Wigner's isomorphism \eqref{eq:0.3}
\begin{equation}\label{eq:2.3}
{\rm H}_\mathrm{sing}^{2m}(B\operatorname{SO}(2m,1),{\mathbb Z}_\epsilon)\cong {\rm H}_\mathrm{B}^{2m}(\operatorname{SO}(2m,1),{\mathbb Z}_\epsilon)
\end{equation}
and call Euler class the group cohomology class
\begin{equation*}
\varepsilon_{2m}\in{\rm H}_\mathrm{B}^{2m}(\operatorname{SO}(2m,1),{\mathbb Z}_\epsilon)
\end{equation*}
corresponding to $\varepsilon_{2m}^\mathrm{univ}$ under the composition of the isomorphisms in \eqref{eq:2.1} and \eqref{eq:2.3}.

Since ${\rm H}^{2m}_\mathrm{B}(\mathrm{SO}(2m,1),{\mathbb R}_\epsilon)$ is one dimensional by \eqref{eq:2.1}, the image of $\varepsilon_{2m}$ under the change of coefficients ${\mathbb Z}_\epsilon\hookrightarrow {\mathbb R}_\epsilon$ is a multiple of the volume class $\omega_{2m}$. We show now that this multiple is given by
\begin{equation}\label{eqn:Hirzebruch}
\begin{aligned}
{\rm H}_\mathrm{B}^{2m}(\mathrm{SO}(2m,1),{\mathbb Z}_\epsilon)&\longrightarrow{\rm H}_\mathrm{B}^{2m}(\mathrm{SO}(2m,1),{\mathbb R}_\epsilon)\\
\varepsilon_{2m}\quad\qquad&\longmapsto(-1)^m\frac{2}{{\operatorname{Vol}}(S^{2m})}\omega_{2m}.
\end{aligned}
\end{equation}
Indeed, let $M$ be a closed oriented hyperbolic manifold. The lattice embedding $i\colon \pi_1(M)\hookrightarrow \mathrm{SO}(2m,1)$ induces classifying maps
\begin{equation}\label{eq:ugly arrow}
\xymatrix{
M=K(\pi_1(M),1) \ar[rr]^-{Bi} \ar@/_2.5pc/[rr]^{\overline{Bi}} &&B\mathrm{SO}(2m,1)\simeq B\mathrm{O}(2m)\simeq B\mathrm{GL}(2m,{\mathbb R}) .
}
\end{equation}
Set $\Gamma=i(\pi_1(M))$. The orthonormal frame bundle over $M$ is naturally identified with
\begin{equation*}
\Gamma\setminus \mathrm{SO}(2m,1)\,.
\end{equation*}
Extending the principal group structure from $\mathrm{O}(2m)$ to $\mathrm{SO}(2m,1)$ gives the principal $\mathrm{SO}(2m,1)$-bundle
\begin{equation*}
(\Gamma \setminus \mathrm{SO}(2m,1))\times_{\mathrm{O}(2m)} \mathrm{SO}(2m,1)\,.
\end{equation*}
The latter is isomorphic to
\begin{equation*}
(\mathrm{SO}(2m,1)/ \mathrm{O}(2m))\times_\Gamma \mathrm{SO}(2m,1)\,,
\end{equation*}
which is the flat principal $\mathrm{SO}(2m,1)$-bundle associated to the lattice embedding $i\colon \pi_1(M)\hookrightarrow \mathrm{SO}(2m,1)$. It follows that the composition from \eqref{eq:ugly arrow}
\begin{equation*}
\overline{Bi}\colon M\longrightarrow B\mathrm{O}(2m)\simeq B\mathrm{GL}(2m,{\mathbb R})
\end{equation*}
classifies the tangent bundle $TM$. As a consequence,
\begin{equation*}
\chi(M)=\langle \overline{Bi}^*(\varepsilon_{2m}^\mathrm{univ}),[M]\rangle\,.
\end{equation*}
Thus, by the naturality of Wigner's isomorphism, the following diagram commutes:
\begin{equation*}
\xymatrix{
& {\rm H}^{2m}_\mathrm{sing}(B\mathrm{GL}(2m,{\mathbb R}),{\mathbb Z}_\epsilon) \ar[ld]_-{\overline{Bi}^*}\ar@{=}[d]\\
{\rm H}^{2m}_\mathrm{sing}(M,{\mathbb Z}) & {\rm H}^{2m}_\mathrm{sing}(B\mathrm{SO}(2m,1),{\mathbb Z}_\epsilon) \ar[l]^-{Bi^*}\\
{\rm H}^{2m}(\pi_1(M),{\mathbb Z}) \ar[u]^\cong & {\rm H}^{2m}_\mathrm{B}(\mathrm{SO}(2m,1),{\mathbb Z}_\epsilon). \ar@{=}[u]\ar[l]^{i^*}
}
\end{equation*}
We deduce that
\begin{equation*}
\chi(M)=\langle i^*\varepsilon_{2m},[M]\rangle\,.
\end{equation*}
Moreover, the hyperbolic volume of $M$ is obviously given by
\begin{equation*}
\mathrm{vol}(M)=\langle i^*\omega_{2m},[M]\rangle\,.
\end{equation*}
The Chern--Gauss--Bonnet Theorem \cite[Chapter~13, Theorem~26]{Spivak} implies that if $M$ is a hyperbolic $2m$-dimensional manifold, then
\begin{equation*}
\chi(M)=(-1)^m\frac{2}{{\operatorname{Vol}}(S^{2m})}\mathrm{vol}(M)\,,
\end{equation*}
where the proportionality constant is, up to the sign $(-1)^m$, the quotient between the Euler characteristic and the volume of the compact dual of hyperbolic space, namely the $2m$-sphere of constant curvature $+1$. Finally, since the lattice embedding induces an injection
\begin{equation*}
i^*\colon {\rm H}_\mathrm{B}^{2m}(\mathrm{SO}(2m,1),{\mathbb R}_\epsilon)\longrightarrow {\rm H}^{2m}(\pi_1(M),{\mathbb R})\,,
\end{equation*}
we obtain the value of the proportionality constant in \eqref{eqn:Hirzebruch}.

\begin{rem}\label{rem:3.1}
Recall that the orientation cocycle
\begin{equation}
\mathrm{Or}\colon (S^1)^3\longrightarrow \{+1,0,-1\}
\end{equation}
assigns the value $\pm 1$ to distinct triples of points according to their orientation, and $0$ otherwise. Identifying $S^1$ with $\partial {\mathbb H}^2$ we obtain by evaluation a Borel cocycle and a cohomology class
\begin{equation*}
[\mathrm{Or}]\in {\rm H}_\mathrm{B}^2(\mathrm{Isom}({\mathbb H}^2),{\mathbb R}_\epsilon)\,.
\end{equation*}
Since the area of ideal hyperbolic triangles in ${\mathbb H}^2$ is $\pm \pi$ depending on the orientation, this cocycle represents $(1/\pi)\omega_2=-2\varepsilon_2$, where the last equality comes from \eqref{eqn:Hirzebruch}. Thus the Euler class $\varepsilon_2$ is represented by $-\frac12\mathrm{Or}$, \cite[Lemma~2.1]{Iozzi_ern} and \cite[Proposition~8.4]{Bucher_Monod}.
\end{rem}

\section{The bounded Euler class}\label{sec:bddEuler}

In this section we recall that the volume class of hyperbolic $n$-space admits a unique bounded representative and establish for $n$ even the analogous result for the Euler class, which leads to the definition of the bounded Euler class in Proposition~\ref{prop:bdd-euler}. In \S~\ref{sec:congruence} we define the volume of a representation and use the existence of the bounded Euler class in even dimensions to prove in Theorem~\ref{thm: vol modulo boundary} a congruence relation for the volume of a representation.

\subsection{Bounded volume and Euler classes}

\begin{lemma}\label{lem:comp}
Let $G:=\mathrm{Isom}({\mathbb H}^n)$. In the commuting diagram
\begin{equation*}
\xymatrix{
{\rm H}_\mathrm{B,b}^{n}(G,{\mathbb Z}_\epsilon)\ar[r]\ar[d]_{c_{\mathbb Z}} &{\rm H}_\mathrm{B,b}^{n}(G,{\mathbb R}_\epsilon)\ar[d]^{c_{\mathbb R}} \\
{\rm H}_\mathrm{B}^{n}(G,{\mathbb Z}_\epsilon)\ar[r] &{\rm H}_\mathrm{B}^{n}(G,{\mathbb R}_\epsilon)
}
\end{equation*}
the vertical maps are isomorphisms.
\end{lemma}

\begin{proof}
The fact that $c_{\mathbb R}$ is an isomorphism follows from \cite[Proposition~2.1]{Bucher_Burger_Iozzi_Mostow} and the identifications between the (bounded) Borel and the (bounded) continuous cohomology. To show that $c_{\mathbb Z}$ is an isomorphism, we will do diagram chases in \eqref{eq:0.7}.

\noindent {\em Surjectivity of $c_{\mathbb Z}$:} Let $\alpha\in {\rm H}_\mathrm{B}^{n}(G,{\mathbb Z}_\epsilon)$. Denote by $\alpha_{\mathbb R} \in {\rm H}_\mathrm{B}^{n}(G,{\mathbb R}_\epsilon)$ its image under the change of coefficients and set $\alpha_{\mathbb R}^{\mathrm b}=(c_{\mathbb R})^{-1}(\alpha_{\mathbb R})\in {\rm H}_\mathrm{B,b}^{n}(G,{\mathbb R}_\epsilon)$. By exactness of the rows in \eqref{eq:0.7}, the image of $\alpha_{\mathbb R}$ in ${\rm H}_\mathrm{B}^{n}(G,{\mathbb R}_\epsilon/{\mathbb Z}_\epsilon)$ vanishes, and thus the same holds for the image of $\alpha_{\mathbb R}^{\mathrm b}$. Thus there is $\beta\in {\rm H}_\mathrm{B,b}^{n}(G,{\mathbb Z}_\epsilon)$ with image $\alpha_{\mathbb R}^{\mathrm b}$. But $c_{\mathbb Z}(\beta)-\alpha$ goes to zero in ${\rm H}_\mathrm{B}^{n}(G,{\mathbb R}_\epsilon)$, hence $c_{\mathbb Z}(\beta)-\alpha=\delta(\eta)$ for some $\eta\in{\rm H}_\mathrm{B}^{n-1}(G,{\mathbb R}_\epsilon/{\mathbb Z}_\epsilon)$. Thus $c_{\mathbb Z}(\beta+\delta^{\mathrm{b}}(\eta))=\alpha$.

\noindent {\em Injectivity of $c_{\mathbb Z}$:} Observe that ${\rm H}_\mathrm{B}^{n-1}(G,{\mathbb R}_\epsilon)={\rm H}_{\rm c}^{n-1}(G,{\mathbb R}_\epsilon)=0$.
Indeed this group injects into ${\rm H}_{\rm c}^{n-1}(G^{\circ},{\mathbb R})$ by restriction, and the latter vanishes, taking into account \eqref{eq:0.2}, since there are no $\operatorname{SO}(n)$-invariant $(n-1)$-forms on ${\mathbb R}^n$ and hence no $G^{\circ}$-invariant differential $(n-1)$-forms on ${\mathbb H}^{n}$. Let now $\alpha\in{\rm H}_\mathrm{B,b}^{n}(G,{\mathbb Z}_\epsilon)$ with $c_{\mathbb Z}(\alpha)=0$. Since $c_{\mathbb R}$ is an isomorphism, we have that the image $\alpha_{\mathbb R}\in{\rm H}_\mathrm{B,b}^{n}(G,{\mathbb R}_\epsilon)$ vanishes. Hence there is $\beta\in{\rm H}_\mathrm{B}^{n-1}(G,{\mathbb R}_\epsilon/{\mathbb Z}_\epsilon)$ with $\delta^{\mathrm{b}}(\beta)=\alpha$. But then $\delta(\beta)=c_{\mathbb Z}(\alpha)=0$. By exactness, this implies that $\beta$ is in the image of ${\rm H}_\mathrm{B}^{n-1}(G,{\mathbb R}_\epsilon)$. Since ${\rm H}_\mathrm{B}^{n-1}(G,{\mathbb R}_\epsilon)=0$, we conclude that $\beta=0$, which finally implies that $\alpha=0$.
\end{proof}

As a consequence of the fact that $c_{\mathbb R}$ is an isomorphism and that the volume of geodesic simplices in hyperbolic $n$-space is bounded, the volume cocycle also defines a bounded Borel class
\begin{equation*}
\omega_n^{\mathrm{b}}\in{\rm H}_\mathrm{B,b}^n(G,{\mathbb R}_\epsilon)
\end{equation*}
corresponding to the volume class $\omega_n$. As an immediate consequence of Lemma~\ref{lem:comp} and the correspondence \eqref{eqn:Hirzebruch} we obtain:

\begin{prop}\label{prop:bdd-euler}
Let $G=\mathrm{Isom}({\mathbb H}^{2m})$. The Euler class $\varepsilon_{2m}\in{\rm H}_\mathrm{B}^{2m}(G,{\mathbb Z}_\epsilon)$ has a bounded representative $\varepsilon_{2m}^{\mathrm{b}}\in{\rm H}_\mathrm{B,b}^{2m}(G,{\mathbb Z}_\epsilon)$ that has the following properties:
\begin{enumerate}
\item it is unique, and
\item under the change of coefficients ${\mathbb Z}_\epsilon\rightarrow{\mathbb R}_\epsilon$ it corresponds to
\begin{equation*}
(-1)^m\frac{2}{{\operatorname{Vol}}(S^{2m})}\omega_{2m}^{\mathrm{b}}\in{\rm H}_\mathrm{B,b}^{2m}(G,{\mathbb R}_\epsilon)\,.
\end{equation*}
\end{enumerate}
\end{prop}

\begin{rem}
\begin{enumerate}
\item With a slight abuse of notation we denote by $\varepsilon_{2m}^{\mathrm{b}}\in{\rm H}_\mathrm{B,b}^{2m}(\mathrm{O}(2m),{\mathbb Z}_\epsilon)$ and by $\varepsilon_{2m}^{\mathrm{b}}\in{\rm H}_\mathrm{B,b}^{2m}(\operatorname{SO}(2m),{\mathbb Z})$ the restrictions of $\varepsilon_{2m}^{\mathrm{b}}$ to $\mathrm{O}(2m)$ and to $\operatorname{SO}(2m)$, respectively.
\item If $m=1$, and with the usual slight abuse of notation, the restriction $\varepsilon_2^\mathrm{b}\in{\rm H}_\mathrm{B,b}^2(\mathrm{Isom}({\mathbb H}^{2})^{\circ},{\mathbb Z})$ is the usual bounded Euler class, where $\mathrm{Isom}({\mathbb H}^{2})^{\circ}$ is considered as a group of orientation preserving homeomorphisms of the circle.
\end{enumerate}
\end{rem}

\subsection{Definition of volume and congruence relations}\label{sec:congruence}

Let $M$ be a complete finite volume hyperbolic $n$-dimensional manifold and let $\rho\colon \pi_1(M)\rightarrow \mathrm{Isom}^+({\mathbb H}^n)$ be a homomorphism. Given a compact core $N$ of $M$ we consider $\rho$ as a representation of $\pi_1(N)$ and use the pullback via $\rho$ in bounded cohomology, together with the isomorphism
\begin{equation*}
{\rm H}_{\rm b}^n(\pi_1(N),{\mathbb R})\cong{\rm H}_{\rm b}^n(N,{\mathbb R})\,,
\end{equation*}
to obtain a bounded singular class in ${\rm H}_{\rm b}^n(N,{\mathbb R})$, denoted $\rho^*(\omega_n^\mathrm{b})$ by abuse of notation. Using the isometric isomorphism
\begin{equation}\label{eq:j}
j\colon {\rm H}_{\rm b}^{2m}(N,\partial N,{\mathbb R})\longrightarrow{\rm H}_{\rm b}^{2m}(N,{\mathbb R})
\end{equation}
\cite[Theorem~1.2]{BBFIPP}, the volume of $\rho$ is defined by
\begin{equation*}
{\operatorname{Vol}}(\rho):=\langle j^{-1}(\rho^*(\omega_{2m}^{\mathrm{b}})),[N,\partial N]\rangle\,.
\end{equation*}
If $n=2m$, by the same abuse of notation, and by considering again the pullback in bounded cohomology via $\rho$, this time together with the isomorphism
\begin{equation*}
{\rm H}_{\rm b}^{2m}(\pi_1(N),{\mathbb Z})\cong {\rm H}_{\rm b}^{2m}(N,{\mathbb Z})\,,
\end{equation*}
we obtain a class $\rho^*(\varepsilon_{2m}^{\mathrm{b}})\in{\rm H}_{\rm b}^{2m}(N,{\mathbb Z})$, which is a bounded singular integral class. Thus, denoting by $\delta^\mathrm{b}$ the connecting homomorphism in the long exact sequence in bounded singular cohomology
\begin{equation}\label{eq:delta}
\delta^\mathrm{b}\colon {\rm H}^{2m-1}(\partial N,{\mathbb R}/{\mathbb Z})\longrightarrow{\rm H}_{\rm b}^{2m}(\partial N,{\mathbb Z})
\end{equation}
(which is in fact an isomorphism), we have:

\begin{thm}\label{thm: vol modulo boundary}
Let $M$ be a complete hyperbolic manifold of finite volume and even dimension $n=2m$, and let $N$ be a compact core of $M$. If $\rho\colon \pi_1(M)\rightarrow \mathrm{Isom}^+({\mathbb H}^{2m})$ is a representation, then
\begin{equation*}
(-1)^m\frac{2}{{\operatorname{Vol}}(S^{2m})}\cdot {\operatorname{Vol}}(\rho)\equiv-\langle (\delta^\mathrm{b})^{-1} \rho^*(\varepsilon_{2m}^{\mathrm{b}})|_{\partial N} , [\partial N] \rangle\,\mathrm{mod}\, {\mathbb Z}\,.
\end{equation*}
\end{thm}

\begin{proof}
In fact, \eqref{eq:j} and \eqref{eq:delta} are part of the following diagram
\begin{equation*}
\xymatrix{
{\rm H}^{2m-1}(\partial N,{\mathbb R}/{\mathbb Z})\ar[r]^-{\delta^\mathrm{b}} &{\rm H}_{\rm b}^{2m}(\partial N,{\mathbb Z})\ar[r] &{\rm H}_{\rm b}^{2m}(\partial N,{\mathbb R})=0\\
&{\rm H}_{\rm b}^{2m}( N,{\mathbb Z})\ar[r]\ar[u] &{\rm H}_{\rm b}^{2m}(N,{\mathbb R})\ar[u]\\
&&{\rm H}_{\rm b}^{2m}(N,\partial N,{\mathbb R})\ar[u]_j\,,
}
\end{equation*}
where the rows are obtained from the long exact sequence
\begin{equation*}
\xymatrix@1{
\dots\ar[r] &{\rm H}_{\rm b}^{n-1}(X,{\mathbb R}/{\mathbb Z})\ar[r] &{\rm H}_{\rm b}^n(X,{\mathbb Z})\ar[r] &{\rm H}_{\rm b}^n(X,{\mathbb R})\ar[r] &{\rm H}_{\rm b}^n(X,{\mathbb R}/{\mathbb Z})\ar[r] &\dots\,,
}
\end{equation*}
with $X=\partial N$ and $X=N$, induced by the change of coefficients in \eqref{eq:0.8-}, and from the fact that ${\rm H}_{\rm b}^\bullet(\partial N,{\mathbb R})=0$ since $\pi_1(\partial N)$ is amenable; the columns on the other hand follow from the long exact sequence in relative bounded cohomology associated to the inclusion of pairs $(N,\varnothing)\hookrightarrow(N,\partial N)$ (see \cite[\S~2.2]{Burger_Iozzi_Wienhard_toledo}).

Let $z$ be a ${\mathbb Z}$-valued singular bounded cocycle representing $\rho^*(\varepsilon_{2m}^{\mathrm{b}})\in{\rm H}_{\rm b}^{2m}(N,{\mathbb Z})$. Restricting $z$ to the boundary $\partial N$ we obtain a ${\mathbb Z}$-valued singular bounded cocycle $z|_{\partial N}$, which we know is a coboundary when considered as an ${\mathbb R}$-valued cocycle, since ${\rm H}_{\rm b}^{2m}(\partial N,{\mathbb R})=0$. Thus there must exist a bounded ${\mathbb R}$-valued singular $(2m-1)$-cochain $b$ on $\partial N$ such that
\begin{equation*}
(z|_{\partial N})_{\mathbb R}=d b\,,
\end{equation*}
where $d$ is the differential operator on (bounded ${\mathbb R}$-valued) singular cochains. On the one hand, we note that since $d b$ is ${\mathbb Z}$-valued, the cochain $b\,\mathrm{mod}\, {\mathbb Z}$ is an ${\mathbb R}/{\mathbb Z}$-valued $(2m-1)$-cocycle on $\partial N$ whose cohomology class is mapped to
\begin{equation*}
[\rho^*(\varepsilon_{2m}^{\mathrm{b}})|_{\partial N}]=[z|_{\partial N}]=\delta^{\mathrm{b}}([b\,\mathrm{mod}\, {\mathbb Z}])
\end{equation*}
by the connecting homomorphism $\delta^{\mathrm{b}}$ in \eqref{eq:delta}. On the other hand, define a bounded ${\mathbb R}$-valued singular $(2m-1)$-cochain $\overline{b}$ on $N$ by extending $b$ by zero to $N$,
\begin{equation*}
\overline{b}(\sigma):= \left\{
\begin{array}{ll}
b(\sigma) & \mathrm{if \ } \sigma\subset \partial N, \\
0 & \mathrm{otherwise.}
\end{array}
\right.
\end{equation*}
Then $[z_{\mathbb R}-d \overline{b}]=[z_{\mathbb R}]\in{\rm H}_{\rm b}^{2m}(N,{\mathbb R})$, and since $z_{\mathbb R}-d\overline{b}$ vanishes on $\partial N$, we have actually constructed a cocycle representing the relative bounded class $j^{-1}((\rho^*(\varepsilon_{2m}^{\mathrm{b}}))_{\mathbb R})\in {\rm H}_{\rm b}^{2m}(N,\partial N,{\mathbb R})$. It remains to evaluate $j^{-1}((\rho^*(\varepsilon_{2m}^{\mathrm{b}}))_{\mathbb R})$ on $[N,\partial N]$ using this specific cocycle. Let $t$ be a singular chain representing the relative fundamental class $[N,\partial N]$ over ${\mathbb Z}$. In particular, $\partial t$ is a cycle representing the fundamental class $[\partial N]$. Then we obtain
\begin{eqnarray*}
\langle j^{-1}((\rho^*(\varepsilon_{2m}^{\mathrm{b}}))_{\mathbb R}), [N,\partial N]\rangle \,\mathrm{mod}\, {\mathbb Z}&=&\langle z_{\mathbb R}-d \overline{b}, t\rangle \,\mathrm{mod}\, {\mathbb Z}\\
&=& \underbrace{\langle z_{\mathbb R}, t\rangle}_{\in {\mathbb Z}} -\langle \overline{b}, \partial t \rangle \,\mathrm{mod}\, {\mathbb Z}\\
&=& -\langle b\,\mathrm{mod}\, {\mathbb Z}, [\partial N]\rangle \\
&=&-\langle (\delta^{\mathrm{b}})^{-1}(\rho^*(\varepsilon_{2m}^{\mathrm{b}})|_{\partial N}), [\partial N]\rangle.
\end{eqnarray*}
\end{proof}

\section{Vanishing of the bounded Euler class on tori and the proof of Theorem~\ref{thm: main thm}}\label{sec:vanishing}

The goal of this section is to prove Theorem~\ref{thm: main thm}. From Theorem~\ref{thm: vol modulo boundary}, we know that $(-1)^m2/{\operatorname{Vol}}(S^{2m})$ times the volume of a representation is determined, $\mathrm{mod}\, {\mathbb Z}$, by the restriction to the cusps of the pullback of the bounded Euler class. The main result of this section is to prove, in Theorem~\ref{thm:vanishing}, that the pullback of the bounded Euler class by any representation $\rho\colon {\mathbb Z}^{2m-1}\rightarrow\operatorname{SO}(2m,1)^{\circ}$ is identically zero for $m\geq 2$. We conclude the section by showing how Theorems~\ref{thm: vol modulo boundary} and \ref{thm:vanishing} imply Theorem~\ref{thm: main thm}.

The bounded class $\varepsilon_{2m}^{\mathrm{b}}\in{\rm H}_\mathrm{B,b}^{2m}(\operatorname{SO}(2m,1),{\mathbb Z}_\epsilon)$ defined in the previous section restricts to a class on $\operatorname{SO}(2m,1)^{\circ}$ with trivial ${\mathbb Z}$-coefficients. When $m=1$ and
\begin{equation*}
\rho\colon {\mathbb Z}\rightarrow\operatorname{SO}(2,1)^{\circ}
\end{equation*}
is a homomorphism, then $\rho^*(\varepsilon_2^{\mathrm{b}})\in{\rm H}_{\rm b}^2({\mathbb Z},{\mathbb Z})\cong{\mathbb R}/{\mathbb Z}$ is the negative of the rotation number of $\rho(1)$, \cite{Ghys}.
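For instance, if $\rho(1)$ is elliptic and conjugate in $\operatorname{SO}(2,1)^{\circ}$ to the rotation by angle $2\pi\theta$ around a point of ${\mathbb H}^2$, then its action on $S^1\cong\partial{\mathbb H}^2$ has rotation number $\theta\,\mathrm{mod}\,{\mathbb Z}$, so that $\rho^*(\varepsilon_2^{\mathrm{b}})=-\theta\,\mathrm{mod}\,{\mathbb Z}$; in particular this pullback need not vanish when $m=1$.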
In contrast to this, in higher dimensions we have the following:

\begin{thm}\label{thm:vanishing}
Let $m\geq2$ and let $\rho\colon {\mathbb Z}^{2m-1}\rightarrow\operatorname{SO}(2m,1)^{\circ}$ be a homomorphism. Then $\rho^*(\varepsilon_{2m}^{\mathrm{b}})$ vanishes in ${\rm H}_{\rm b}^{2m}({\mathbb Z}^{2m-1},{\mathbb Z})$.
\end{thm}

Before proving the theorem we need some information about Abelian subgroups of $\operatorname{SO}(2m,1)^{\circ}$. To fix the notation, recall that
\begin{equation*}
\operatorname{SO}(2m,1):=\bigg\{A\in\operatorname{GL}(2m+1,{\mathbb R}):\,\det A=1,\,A\text{ preserves }q(x):=\sum_{i=1}^{2m}x_i^2-x_{2m+1}^2\bigg\}\,.
\end{equation*}
Then the maximal compact subgroup $K<\operatorname{SO}(2m,1)$ is the image of $\mathrm{O}(2m)$ under the homomorphism
\begin{equation*}
\begin{aligned}
\mathrm{O}(2m)&\longrightarrow\,\,\operatorname{SO}(2m,1)\\
A\quad&\longmapsto\begin{pmatrix}A&0\\0&\det A\end{pmatrix}\,,
\end{aligned}
\end{equation*}
and the image $K^{\circ}$ of $\operatorname{SO}(2m)$ is the maximal compact subgroup of $\operatorname{SO}(2m,1)^{\circ}$. If $T$ is the image of
\begin{equation*}
\begin{aligned}
\mathrm{O}(2)^m\quad&\longrightarrow\qquad\qquad\operatorname{SO}(2m,1)\\
(A_1,\dots,A_m)&\longmapsto
\begin{pmatrix}
A_1& & &\\
&\ddots& &\\
& &A_m&\\
& & &\prod_{i=1}^m\det A_i
\end{pmatrix}\,,
\end{aligned}
\end{equation*}
we define
\begin{equation*}
T_0:=T\cap K^{\circ}\,.
\end{equation*}
Since $\operatorname{SO}(2m,1)^{\circ}$ preserves each connected component of the two-sheeted hyperboloid
\begin{equation*}
x_1^2+\dots+x_{2m}^2-x_{2m+1}^2=-1\,,
\end{equation*}
the parabolic subgroup $P=\operatorname{Stab}_{\operatorname{SO}(2m,1)^{\circ}}({\mathbb R}(e_1-e_{2m+1}))$ admits the decomposition $P=MAN$, where
\begin{equation*}
\begin{aligned}
M:&=\left\{m(U):=\begin{pmatrix}1 & & \\ & U & \\ & & 1\end{pmatrix}:\,U\in\operatorname{SO}(2m-1)\right\}\,,\\
A:&=\left\{ a(t):=
\begin{pmatrix}
\cosh t&0&\sinh t\\
0&{\operatorname{Id}}&0\\
\sinh t&0&\cosh t
\end{pmatrix}:\, t\in{\mathbb R}
\right\}
\end{aligned}
\end{equation*}
and
\begin{equation*}
N:=
\left\{
n(x):=
\begin{pmatrix}
1-\frac{\|x\|^2}{2}&-x&-\frac{\|x\|^2}{2}\\
{}^tx&I&{}^tx\\
\frac{\|x\|^2}{2}&x&1+\frac{\|x\|^2}{2}
\end{pmatrix}:\,x\in{\mathbb R}^{2m-1}
\right\}\,.
\end{equation*}
We can now outline the proof of Theorem~\ref{thm:vanishing}.
According to Lemma~\ref{lem:tricot} below, there are up to conjugation two cases to consider for $\rho\colon{\mathbb Z}^{2m-1}\rightarrow\operatorname{SO}(2m,1)^{\circ}$:
\begin{enumerate}
\item $\rho$ takes values in $P$: then, building on Lemma~\ref{lem:cohom of parabolic}, Lemma~\ref{lem:vanish} shows that $\varepsilon_{2m}^\mathrm{b}|_P\in{\rm H}_{\mathrm{B,b}}^{2m}(P,{\mathbb Z})$ vanishes, and this implies Theorem~\ref{thm:vanishing} in this case.
\item $\rho$ takes values in $T_0$: this case splits into two subcases, namely $\rho({\mathbb Z}^{2m-1})\not\subset T^{\circ}$, which is dealt with in Lemma~\ref{lem:vanish2}, and $\rho({\mathbb Z}^{2m-1})\subset T^{\circ}$. In the latter case we represent $\varepsilon_{2m}^\mathrm{b}|_{\operatorname{SO}(2)^{2m}}$ as the image under the connecting homomorphism of an explicit class in ${\rm H}_\mathrm{B}^{2m-1}(\operatorname{SO}(2)^{2m},{\mathbb R}/{\mathbb Z})$, whose pullback via $\rho$ we evaluate in Lemma~\ref{lem:rot=0} on an explicit cycle representing the fundamental class $[{\mathbb Z}^{2m-1}]$ of ${\rm H}_{2m-1}({\mathbb Z}^{2m-1},{\mathbb Z})$ constructed in Lemma~\ref{lem: representation Zn}.
\end{enumerate}

\begin{lemma}\label{lem:tricot}
Let $B<\operatorname{SO}(2m,1)^{\circ}$ be an Abelian group. Then up to conjugation one of the following holds:
\begin{enumerate}
\item $B<P$;
\item $B<T_0$.
\end{enumerate}
\end{lemma}

\begin{proof}
Since $B$ is Abelian, it is elementary \cite[Lemma~1, \S~5.5]{Ratcliffe}, that is, it is either of elliptic type, of parabolic type or of hyperbolic type.
\begin{enumerate}
\item[(i)] If $B$ is of parabolic type, it fixes a point in $\partial{\mathbb H}^{2m}$ \cite[Theorem~5.5.1]{Ratcliffe} and thus it can be conjugated into $P$.
\item[(ii)] If $B$ is of elliptic type, it fixes a point in ${\mathbb H}^{2m}$ \cite[Theorem~5.5.3]{Ratcliffe} and hence it can be conjugated into $K^{\circ}\cong\operatorname{SO}(2m)$. Since it is Abelian, it can be simultaneously reduced to a diagonal $2\times 2$ block form and hence can be conjugated into $T_0$.
\item[(iii)] If $B$ is of hyperbolic type, every union of finite orbits in $\overline{{\mathbb H}^{2m}}$ consists of two points, \cite[Theorem~5.5.6]{Ratcliffe}. Thus there is a geodesic $g\subset{\mathbb H}^{2m}$ that is left setwise invariant by $B$. If $\{g_-,g_+\}$ are its endpoints, then either $Bg_+=g_+$ and we are in case (i) above, or there is $b\in B$ with $bg_+=g_-$. But then $B$ fixes the unique $b$-fixed point $g_0\in g$, hence we are in case (ii).
\end{enumerate}
\end{proof}

\begin{rem}
Since Theorems~5.5.1, 5.5.3 and 5.5.6 in \cite{Ratcliffe} are characterizations respectively of elliptic, parabolic and hyperbolic groups of isometries, it follows from Lemma~\ref{lem:tricot} that an Abelian group $B<\operatorname{SO}(2m,1)^{\circ}$ cannot be hyperbolic.
\begin{lemma}\label{lem:cohom of parabolic}
Let $P<\operatorname{SO}(2m,1)^\circ$ be the minimal parabolic subgroup as above. Then
\begin{equation*}
{\rm H}_\mathrm{B}^k(P,{\mathbb R})=0
\end{equation*}
for $k=2m-1$ and $k=2m$.
\end{lemma}
\begin{proof}
Combining \eqref{eq:0.1} and \eqref{eq:0.2}, we prove the vanishing assertion by showing the vanishing of the cohomology of $P$-invariant differential forms on $M\backslash P$ in degrees $2m$ and $2m-1$. This follows if we establish the following:
\begin{claim}
$\Omega^{2m-1}(M\backslash P)^P={\mathbb R}\alpha$, where $\alpha$ is an explicit $(2m-1)$-form with $d\alpha\neq0$.
\end{claim}
Let us first see how the claim implies the vanishing. It follows from the claim that
\begin{equation*}
d\colon\Omega^{2m-1}(M\backslash P)^P\longrightarrow \Omega^{2m}(M\backslash P)^P
\end{equation*}
is injective, which implies the vanishing in degree $2m-1$. Since $P$ acts transitively on $M\backslash P$ and $\dim(M\backslash P)=2m$, we have $\dim \Omega^{2m}(M\backslash P)^P\leq1$. But then $d\alpha\neq0$ implies that $\dim\Omega^{2m}(M\backslash P)^P=1$ and that the image of $d$ is all of $\Omega^{2m}(M\backslash P)^P$, which implies the vanishing in degree $2m$.

We now prove the claim. Observe that
\begin{equation*}
\Omega^{2m-1}(M\backslash P)^P\simeq\Lambda^{*(2m-1)}T_{x_0}(M\backslash P)^M\,,
\end{equation*}
where $\Lambda^{*(2m-1)}T_{x_0}(M\backslash P)^M$ denotes the space of $M$-invariant alternating $(2m-1)$-multilinear forms on the tangent space to $M\backslash P$ at the basepoint $x_0=[M]\in M\backslash P$. Under the identification
\begin{equation}\label{eq:id}
\begin{aligned}
M\backslash P\quad&\longrightarrow{\mathbb R}\times{\mathbb R}^{2m-1}\\
Ma(t)n(x)&\longmapsto\,\,(t,x)
\end{aligned}
\end{equation}
the point $x_0$ corresponds to $(0,0)$, so that
\begin{equation*}
T_{x_0}(M\backslash P)\simeq T_{(0,0)}({\mathbb R}\times{\mathbb R}^{2m-1})\simeq{\mathbb R}\times{\mathbb R}^{2m-1}
\end{equation*}
and the $M$-action on $T_{x_0}(M\backslash P)$ corresponds to the action on ${\mathbb R}\times{\mathbb R}^{2m-1}$ via $\operatorname{Id}\times\operatorname{SO}(2m-1)$,
\begin{equation*}
\xymatrix{
(t,x)\ar[rr]^{\operatorname{Id}\times m(U)} &&(t,xU)\,.
}
\end{equation*}
Let $\{e_0,e_1,\dots,e_{2m-1}\}$ be the canonical basis of ${\mathbb R}\times{\mathbb R}^{2m-1}$ (with $e_0$ spanning ${\mathbb R}$) and let $\{e_0^\ast,e_1^\ast,\dots,e_{2m-1}^\ast\}$ be the dual basis. Every $(2m-1)$-multilinear form $\alpha$ has a canonical decomposition
\begin{equation*}
\alpha=e_0^\ast\wedge\beta+\omega\,,
\end{equation*}
where $\beta$ is a $(2m-2)$-multilinear form on ${\mathbb R}^{2m-1}$ and $\omega$ is the pullback to ${\mathbb R}\times{\mathbb R}^{2m-1}$ of a $(2m-1)$-multilinear form on ${\mathbb R}^{2m-1}$. By uniqueness of the decomposition, $\alpha$ is $\operatorname{Id}\times\operatorname{SO}(2m-1)$-invariant if and only if $\beta$ and $\omega$ are $\operatorname{SO}(2m-1)$-invariant. Thus $\omega$ is a multiple of the determinant,
\begin{equation*}
\omega=\lambda\, e_1^\ast\wedge\dots\wedge e_{2m-1}^\ast\,,
\end{equation*}
and if we show that there are no $\operatorname{SO}(2m-1)$-invariant $(2m-2)$-multilinear forms on ${\mathbb R}^{2m-1}$ we will have shown that $\Omega^{2m-1}(M\backslash P)^P$ is one-dimensional. The fact that $\Lambda^{*(2m-1)}({\mathbb R}^{2m-1})$ is one-dimensional and the pairing
\begin{equation*}
\begin{aligned}
\Lambda^{*(2m-2)}({\mathbb R}^{2m-1})\times({\mathbb R}^{2m-1})^\ast&\longrightarrow\Lambda^{*(2m-1)}({\mathbb R}^{2m-1})\\
(\beta,\lambda)\qquad\qquad&\longmapsto\quad\beta\wedge\lambda
\end{aligned}
\end{equation*}
show that there is an isomorphism
\begin{equation*}
\begin{aligned}
\Lambda^{*(2m-2)}({\mathbb R}^{2m-1})\qquad&\longrightarrow ({\mathbb R}^{2m-1})^\ast\\
e_1^\ast\wedge\dots\wedge \widehat{e_j^\ast}\wedge\dots \wedge e_{2m-1}^\ast&\longmapsto \qquad e_j^\ast
\end{aligned}
\end{equation*}
that is $\operatorname{SO}(2m-1)$-equivariant. Since the $\operatorname{SO}(2m-1)$-action on $({\mathbb R}^{2m-1})^\ast$ is irreducible, there are no $\operatorname{SO}(2m-1)$-invariant $(2m-2)$-multilinear forms on ${\mathbb R}^{2m-1}$, thus showing that $\Omega^{2m-1}(M\backslash P)^P$ is one-dimensional.

We show now that the exterior derivative on $\Omega^{2m-1}(M\backslash P)^P$ does not vanish. With the identification \eqref{eq:id}, the right translation $R_{(t,x)}$ by an element $m(U)a(t)n(x)$ is given by
\begin{equation*}
R_{(t,x)}(s,y)=(s+t,e^tyU+x)\,.
\end{equation*}
In fact
\begin{equation*}
\begin{aligned}
R_{(t,x)}(s,y) &=(Ma(s)n(y))\,m(U)a(t)n(x)\\
&=\underbrace{Mm(U)}_{=M}\underbrace{m(U)^{-1}a(s)}_{=a(s)m(U)^{-1}}n(y)m(U)a(t)n(x)\\
&=Ma(s)\underbrace{m(U)^{-1}n(y)m(U)}_{=n(yU)}a(t)n(x)\\
&=Ma(s)n(yU)a(t)n(x)\\
&=Ma(s)a(t)\underbrace{a(t)^{-1}n(yU)a(t)}_{=n(e^tyU)}n(x)\\
&=Ma(s+t)n(e^tyU+x)\,.
\end{aligned}
\end{equation*}
In particular
\begin{equation*}
R_{(t,x)}(0,0)=Ma(t)n(x)\simeq(t,x)\,,
\end{equation*}
so that $\omega\in\Omega^{2m-1}({\mathbb R}\times{\mathbb R}^{2m-1})$ can be extended to a $P$-invariant differential $(2m-1)$-form by requiring that
\begin{equation*}
\omega_{(0,0)}((s_1,y_1),\dots,(s_{2m-1},y_{2m-1}))=((R_{(t,x)})^\ast\omega)_{(0,0)}((s_1,y_1),\dots,(s_{2m-1},y_{2m-1}))
\end{equation*}
for $(s_j,y_j)\in{\mathbb R}\times{\mathbb R}^{2m-1}\simeq T_{(0,0)}({\mathbb R}\times{\mathbb R}^{2m-1})$, $j=1,\dots,2m-1$. Since
\begin{equation*}
\begin{aligned}
{}&\omega_{(0,0)}((s_1,y_1),\dots,(s_{2m-1},y_{2m-1}))\\
&=\omega_{(t,x)}(D_{(0,0)}R_{(t,x)}((s_1,y_1)),\dots,D_{(0,0)}R_{(t,x)}((s_{2m-1},y_{2m-1})))\\
&=\omega_{(t,x)}((s_1,e^ty_1),\dots,(s_{2m-1},e^ty_{2m-1}))\\
&=e^{(2m-1)t}\omega_{(t,x)}((s_1,y_1),\dots,(s_{2m-1},y_{2m-1}))\,,
\end{aligned}
\end{equation*}
we obtain
\begin{equation*}
\omega_{(t,x)}=e^{-(2m-1)t}\omega_{(0,0)}\,,
\end{equation*}
so that
\begin{equation*}
d\omega=-(2m-1)e^{-(2m-1)t}\,dt\wedge \omega_{(0,0)}
\end{equation*}
does not vanish. This shows the claim and completes the proof of the lemma.
\end{proof}

\begin{lemma}\label{lem:vanish}
If $m\geq2$ then the restriction
\begin{equation*}
\varepsilon_{2m}^{\mathrm{b}}|_P\in{\rm H}_{\rm b}^{2m}(P,{\mathbb Z})
\end{equation*}
vanishes.
\end{lemma}
\begin{proof}
Considering the long exact sequences in bounded and ordinary cohomology associated to \eqref{eq:0.8-}, and taking into account Lemma~\ref{lem:cohom of parabolic} and the vanishing of the bounded real cohomology of $P$ (\ref{eq:06}), we obtain the diagram
\begin{equation*}
\xymatrix{
0\ar[r] &{\rm H}_\mathrm{B}^{2m-1}(P,{\mathbb R}/{\mathbb Z})\ar[r]^-{\delta^{\mathrm{b}}}\ar@{=}[d] &{\rm H}_{\rm cb}^{2m}(P,{\mathbb Z})\ar[r]\ar[d]^{c_{\mathbb Z}} &0\\
0\ar[r] &{\rm H}_\mathrm{B}^{2m-1}(P,{\mathbb R}/{\mathbb Z})\ar[r]_-{\delta} &{\rm H}_{\rm c}^{2m}(P,{\mathbb Z})\ar[r] &0\,,
}
\end{equation*}
where we used Lemma~\ref{lem:cohom of parabolic} in the bottom row and the vanishing of bounded cohomology with real coefficients in the top row. Thus ${\rm H}_{\rm cb}^{2m}(P,{\mathbb Z})\cong{\rm H}_{\rm c}^{2m}(P,{\mathbb Z})$. If we show that $\varepsilon_{2m}|_P=0$, this will imply that $\varepsilon_{2m}^{\mathrm{b}}|_P=0$.

To see that $\varepsilon_{2m}|_P=0$, observe that by naturality of Wigner's isomorphism \eqref{eq:0.3} and the fact that $M$ is maximal compact in $P$, the restriction to $M$ induces an isomorphism ${\rm H}_{\rm c}^{2m}(P,{\mathbb Z})\cong{\rm H}_{\rm c}^{2m}(M,{\mathbb Z})$.
Since, however, the Euler class $\varepsilon_{2m}$ of $\operatorname{SO}(2m)$ restricted to $\left\{\begin{pmatrix}1&0\\0&U\end{pmatrix}:\,U\in\operatorname{SO}(2m-1)\right\}$ vanishes (the corresponding $2m$-dimensional bundle splits off a trivial line bundle and hence admits a nowhere vanishing section), we conclude that $\varepsilon_{2m}|_P=0$.
\end{proof}

We deduce then using Lemma~\ref{lem:vanish} that Theorem~\ref{thm:vanishing} holds if the image of $\rho$ is contained in $P$. We must therefore turn to the case in which $\rho({\mathbb Z}^{2m-1})$ lies in $T_0$. Let then
\begin{equation*}
\pi_i\colon T\longrightarrow\textup{O}(2)
\end{equation*}
be the projection on the $i$-th factor of $T$ and let
\begin{equation*}
\varepsilon_{(i)}:=\pi_i^*(\varepsilon_2)\quad\text{ and }\quad\varepsilon_{(i)}^{\mathrm{b}}:=\pi_i^*(\varepsilon_2^{\mathrm{b}})\,,
\end{equation*}
where $\varepsilon_2$ and $\varepsilon_2^{\mathrm{b}}$ are respectively the Euler class and the bounded Euler class of $\textup{O}(2)$. Observe that if $L$ is compact, the long exact sequence \eqref{eq:0.7} gives
\begin{equation}\label{eq:lulodiagram}
\xymatrix{
0\ar[r] & {\rm H}_\mathrm{B}^\bullet(L,{\mathbb R}_\epsilon/{\mathbb Z}_\epsilon)\ar[r]^-{\cong}\ar@{=}[d] &{\rm H}_{\rm cb}^\bullet(L,{\mathbb Z}_\epsilon)\ar[d]\ar[r] &0\\
0\ar[r] &{\rm H}_\mathrm{B}^\bullet(L,{\mathbb R}_\epsilon/{\mathbb Z}_\epsilon)\ar[r]^-{\cong} &{\rm H}_{\rm c}^\bullet(L,{\mathbb Z}_\epsilon)\ar[r] &0\,.
}
\end{equation}
Since the ordinary Euler class is a characteristic class and $T$ is a product, we have
\begin{equation}\label{eq:eucup}
\varepsilon_{2m}|_T=\varepsilon_{(1)}\cup\dots\cup\varepsilon_{(m)}\,,
\end{equation}
and hence it follows from \eqref{eq:lulodiagram} that
\begin{equation}\label{eq:eumcup}
\varepsilon_{2m}^{\mathrm{b}}|_T=\varepsilon_{(1)}^{\mathrm{b}}\cup\dots\cup\varepsilon_{(m)}^{\mathrm{b}}\,.
\end{equation}
Observe that the image of $\operatorname{SO}(2)^m$ in $T$ is its connected component $T^\circ$.

\begin{lemma}\label{lem:vanish2}
If $\rho\colon {\mathbb Z}^{2m-1}\rightarrow T_0$ does not take values in $T^\circ$, then $\rho^*(\varepsilon_{2m}^{\mathrm{b}})=0$.
\end{lemma}
\begin{proof}
An Abelian subgroup of $\textup{O}(2)$ not contained in $\operatorname{SO}(2)$ is of the form $\{1,\sigma\}$ with $\sigma^2=e$. Thus if $\rho({\mathbb Z}^{2m-1})\not\subset T^\circ$, there is $\pi_i$ such that $\pi_i\rho({\mathbb Z}^{2m-1})\subset\{1,\sigma\}$. Since ${\rm H}_{\rm b}^2(\{1,\sigma\},{\mathbb Z}_\epsilon)=0$, the class $\varepsilon_2^{\mathrm{b}}$ vanishes on $\{1,\sigma\}$, and hence $\rho^*(\varepsilon_{2m}^{\mathrm{b}})=0$ by \eqref{eq:eumcup}.
\end{proof}

Thus we are reduced to analyzing homomorphisms
\begin{equation*}
\rho\colon {\mathbb Z}^{2m-1}\longrightarrow T^\circ\cong\operatorname{SO}(2)^m\,.
\end{equation*}
As before, since $\operatorname{SO}(2)^m$ is compact and hence amenable, the connecting homomorphism
\begin{equation*}
\delta^{\mathrm{b}}\colon {\rm H}_\mathrm{B}^{2m-1}(\operatorname{SO}(2)^m,{\mathbb R}/{\mathbb Z})\longrightarrow {\rm H}_{\rm cb}^{2m}(\operatorname{SO}(2)^m,{\mathbb Z})
\end{equation*}
is an isomorphism. We fix the orientation preserving identification
\begin{equation*}
\operatorname{SO}(2)\longrightarrow{\mathbb R}/{\mathbb Z}
\end{equation*}
and, for $m=1$, we define the homogeneous $1$-cocycle
\begin{equation}\label{eq:rot}
\begin{aligned}
\mathrm{Rot}\colon \operatorname{SO}(2)\times\operatorname{SO}(2)&\longrightarrow\operatorname{SO}(2)\cong{\mathbb R}/{\mathbb Z}\\
(g,h)\qquad&\longmapsto \qquad g^{-1}h
\end{aligned}
\end{equation}
and denote by
\begin{equation*}
[\mathrm{Rot}]\in{\rm H}_\mathrm{B}^1(\operatorname{SO}(2),{\mathbb R}/{\mathbb Z})
\end{equation*}
the corresponding class. Then $\delta^{\mathrm{b}}([\mathrm{Rot}])=\varepsilon_2^{\mathrm{b}}$ and from this we deduce that if
\begin{equation*}
\vartheta:=\pi_1^*([\mathrm{Rot}])\in{\rm H}_\mathrm{B}^1(\operatorname{SO}(2)^m,{\mathbb R}/{\mathbb Z})\,,
\end{equation*}
then $\delta^{\mathrm{b}}$ maps the class $\vartheta\cup\varepsilon_{(2)}\cup\dots\cup\varepsilon_{(m)}\in{\rm H}_\mathrm{B}^{2m-1}(\operatorname{SO}(2)^m,{\mathbb R}/{\mathbb Z})$ to the class $\varepsilon_{(1)}^{\mathrm{b}}\cup\dots\cup\varepsilon_{(m)}^{\mathrm{b}}\in{\rm H}_{\rm cb}^{2m}(\operatorname{SO}(2)^m,{\mathbb Z})$. As a result, it follows from the commutativity of the square
\begin{equation*}
\xymatrix{
{\rm H}_\mathrm{B}^{2m-1}(\operatorname{SO}(2)^m,{\mathbb R}/{\mathbb Z})\ar[r]^-{\delta^{\mathrm{b}}}\ar[d]_{\rho^\ast} &{\rm H}_{\rm cb}^{2m}(\operatorname{SO}(2)^m,{\mathbb Z})\ar[d]^{\rho^*}\\
{\rm H}^{2m-1}({\mathbb Z}^{2m-1},{\mathbb R}/{\mathbb Z})\ar[r]^-{\delta^{\mathrm{b}}} &{\rm H}_{\rm b}^{2m}({\mathbb Z}^{2m-1},{\mathbb Z})
}
\end{equation*}
that
\begin{equation}\label{eq:2.4}
\rho^*(\varepsilon_{2m}^{\mathrm{b}})=\delta^{\mathrm{b}}(\rho^*(\vartheta\cup\varepsilon_{(2)}\cup\dots\cup\varepsilon_{(m)}))\,.
\end{equation}
In order to prove the vanishing of the left hand side, we are going to show that $\rho^*(\vartheta\cup\varepsilon_{(2)}\cup\dots\cup\varepsilon_{(m)})=0$.
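As a routine check, note that \eqref{eq:rot} is indeed an $\operatorname{SO}(2)$-invariant homogeneous cocycle: invariance holds since $(kg)^{-1}(kh)=g^{-1}h$ for all $k$, and, writing the group law of $\operatorname{SO}(2)\cong{\mathbb R}/{\mathbb Z}$ additively, its homogeneous coboundary vanishes:
\begin{equation*}
d\mathrm{Rot}(g_0,g_1,g_2)=\mathrm{Rot}(g_1,g_2)-\mathrm{Rot}(g_0,g_2)+\mathrm{Rot}(g_0,g_1)=(g_2-g_1)-(g_2-g_0)+(g_1-g_0)=0\,.
\end{equation*}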
To prove this vanishing we use that the pairing
\begin{equation}\label{eq:pairing}
\begin{aligned}
{\rm H}^{2m-1}({\mathbb Z}^{2m-1},{\mathbb R}/{\mathbb Z})&\longrightarrow\qquad\,\,{\mathbb R}/{\mathbb Z}\\
\beta\qquad\qquad&\longmapsto\,\,\langle\beta,[{\mathbb Z}^{2m-1}]\rangle
\end{aligned}
\end{equation}
is an isomorphism, where $[{\mathbb Z}^{2m-1}]\in{\rm H}_{2m-1}({\mathbb Z}^{2m-1},{\mathbb Z})\cong{\mathbb Z}$ denotes the fundamental class. We will need the following:

\begin{lemma}\label{lem: representation Zn}
Let $n\geq2$ and let $e_1,\dots,e_n$ be the canonical basis of ${\mathbb Z}^n$. Then the group chain
\begin{equation*}
z= \sum_{\sigma\in \mathrm{Sym}(n)} \mathrm{sign}(\sigma)\,[0,e_{\sigma(1)},e_{\sigma(1)}+e_{\sigma(2)},\dots,e_{\sigma(1)}+\dots+e_{\sigma(n)}]
\end{equation*}
is a representative of the fundamental class $[{\mathbb Z}^n]\in {\rm H}_n({\mathbb Z}^n,{\mathbb Z})\cong {\mathbb Z}$.
\end{lemma}
\begin{proof}
We first check that $\partial z=0$, so that $z$ is indeed a cycle. Let
\begin{equation*}
s_\sigma:=[0,e_{\sigma(1)},e_{\sigma(1)}+e_{\sigma(2)},\dots,e_{\sigma(1)}+\dots+e_{\sigma(n)}]\,.
\end{equation*}
Observe that, for $1\leq i\leq n-1$, the signed sum of the $i$-th faces of the simplices appearing in $z$ over all permutations $\sigma\in \mathrm{Sym}(n)$ vanishes, since
\begin{equation*}
\partial_is_\sigma=[0,e_{\sigma(1)},e_{\sigma(1)}+e_{\sigma(2)},\dots,\wideparen{e_{\sigma(1)}+\dots+e_{\sigma(i)}},\dots,e_{\sigma(1)}+\dots+e_{\sigma(n)}]
\end{equation*}
is invariant under precomposition of $\sigma$ with the odd transposition $\tau_i:=(i,i+1)$. Indeed, for $1\leq i\leq n-1$,
\begin{equation*}
\begin{aligned}
\sum_{\sigma\in \mathrm{Sym}(n)} \mathrm{sign}(\sigma)\partial_is_\sigma
&=\sum_{\sigma\in \mathrm{Sym}(n)}\mathrm{sign}(\sigma)\partial_is_{\sigma\tau_i}\\
&=\sum_{\sigma\in \mathrm{Sym}(n)}\mathrm{sign}(\sigma\tau_i^{-1})\partial_is_{\sigma}\\
&=-\sum_{\sigma\in \mathrm{Sym}(n)}\mathrm{sign}(\sigma)\partial_is_\sigma\,,
\end{aligned}
\end{equation*}
which implies that $\sum_{\sigma\in \mathrm{Sym}(n)}\mathrm{sign}(\sigma)\partial_is_\sigma=0$. Thus
\begin{equation}\label{eq:5.75}
\partial z=\sum_{\sigma\in \mathrm{Sym}(n)}\mathrm{sign}(\sigma)\partial_0s_\sigma+(-1)^n\sum_{\sigma\in \mathrm{Sym}(n)}\mathrm{sign}(\sigma)\partial_ns_\sigma
\end{equation}
and it remains to see that the $0$-th and $n$-th faces cancel each other.
By ${\mathbb Z}^n$-invariance,
\begin{equation}\label{eqn:0andnFaces}
\begin{aligned}
\partial_0s_\sigma &=[e_{\sigma(1)},e_{\sigma(1)}+e_{\sigma(2)},\dots,e_{\sigma(1)}+\dots+e_{\sigma(n)}]\\
&=[0,e_{\sigma(2)},\dots,e_{\sigma(2)}+\dots+e_{\sigma(n)}]\\
&=[0,e_{\sigma\tau(1)},\dots,e_{\sigma\tau(1)}+\dots+e_{\sigma\tau(n-1)}]\\
&=\partial_n s_{\sigma\tau}\,,
\end{aligned}
\end{equation}
where $\tau=(1,2,\dots,n)$ is the cyclic permutation, of signature $\mathrm{sign}(\tau)=(-1)^{n-1}$. Reindexing the first sum by $\sigma\mapsto\sigma\tau^{-1}$, it follows from \eqref{eq:5.75} and \eqref{eqn:0andnFaces} that
\begin{equation*}
\begin{aligned}
\partial z &=\sum_{\sigma\in \mathrm{Sym}(n)}\mathrm{sign}(\sigma)\partial_ns_{\sigma\tau}+(-1)^n\sum_{\sigma\in \mathrm{Sym}(n)}\mathrm{sign}(\sigma)\partial_ns_{\sigma}\\
&=\bigl((-1)^{n-1}+(-1)^n\bigr)\sum_{\sigma\in \mathrm{Sym}(n)}\mathrm{sign}(\sigma)\partial_ns_{\sigma}=0\,.
\end{aligned}
\end{equation*}

Let $\omega_{{\mathbb R}^n}\in{\rm H}^n({\mathbb Z}^n,{\mathbb R})$ be the Euclidean volume class. Note that the volume class evaluates to $1$ on the fundamental class, as the $n$-torus generated by the canonical basis has volume $1$. A cocycle representing $\omega_{{\mathbb R}^n}$ is given by the map $V_n\colon ({\mathbb Z}^n)^{n+1}\rightarrow {\mathbb R}$ sending $(v_0,v_1,\dots,v_n)$ to the signed volume of the affine simplex with vertices $v_0,\dots,v_n$ in the lattice ${\mathbb Z}^n\subset {\mathbb R}^n$, that is,
\begin{equation*}
V_n(v_0,v_1,\dots,v_n)=\frac{1}{n!}\det(v_1-v_0,\dots,v_n-v_0)\,.
\end{equation*}
In order to show that
\begin{equation*}
\langle \omega_{{\mathbb R}^n}, [{\mathbb Z}^n]\rangle=V_n(z)=1
\end{equation*}
we need to evaluate $V_n$ on each summand of $z$. By definition,
\begin{equation*}
\begin{aligned}
&V_n(0,e_{\sigma(1)},e_{\sigma(1)}+e_{\sigma(2)},\dots,e_{\sigma(1)}+\dots+e_{\sigma(n)})\\
=&\frac{1}{n!}\det(e_{\sigma(1)},e_{\sigma(1)}+e_{\sigma(2)},\dots,e_{\sigma(1)}+\dots+e_{\sigma(n)})\\
=&\mathrm{sign}(\sigma)\frac{1}{n!}\det(e_1,e_1+e_2,\dots,e_1+\dots+e_n)\\
=&\mathrm{sign}(\sigma)\frac{1}{n!}\,,
\end{aligned}
\end{equation*}
where the second equality follows since the determinant is alternating under row permutations. Summing up, we obtain
\begin{equation*}
\langle \omega_{{\mathbb R}^n} , [{\mathbb Z}^n] \rangle = \sum_{\sigma\in \mathrm{Sym}(n)} \frac{1}{n!}=1\,,
\end{equation*}
thus proving the lemma.
\end{proof}
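As an illustration (not needed in the sequel), for $n=2$ the cycle of Lemma~\ref{lem: representation Zn} is
\begin{equation*}
z=[0,e_1,e_1+e_2]-[0,e_2,e_1+e_2]\,,
\end{equation*}
the familiar subdivision of the unit square into two triangles; the two summands have signed volume $V_2=\tfrac12$ and $V_2=-\tfrac12$ respectively, so that $V_2(z)=\tfrac12-\bigl(-\tfrac12\bigr)=1$.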
\begin{lemma}\label{lem:rot=0}
Let $m\geq2$. Then
\begin{equation*}
\langle\rho^*(\vartheta\cup\varepsilon_{(2)}\cup\dots\cup\varepsilon_{(m)}),[{\mathbb Z}^{2m-1}]\rangle=0\,.
\end{equation*}
\end{lemma}
\begin{proof}
We use as a representative of $\varepsilon_2\in{\rm H}^2(\operatorname{SO}(2),{\mathbb Z})$ the multiple $-\frac12$ of the orientation cocycle (see Remark~\ref{rem:3.1}). This cocycle takes values in $\frac{1}{2}{\mathbb Z}$ but represents an integral class and in particular evaluates to an integer on a fundamental class. Hence a representative for $\kappa:=\vartheta\cup \varepsilon_{(2)}\cup \dots \cup \varepsilon_{(m)}\in{\rm H}_\mathrm{B}^{2m-1}(\operatorname{SO}(2)^m,{\mathbb R}/{\mathbb Z})$ is given by the cocycle mapping $g_0,\dots,g_{2m-1}\in\operatorname{SO}(2)^m$ to the product
\begin{equation*}
\frac{(-1)^{m-1}}{2^{m-1}}\cdot \mathrm{Rot}_1(g_0,g_1)\cdot \mathrm{Or}_2(g_1,g_2,g_3)\cdot\dots\cdot \mathrm{Or}_m(g_{2m-3},g_{2m-2},g_{2m-1})\,,
\end{equation*}
where $\mathrm{Rot}_1$ denotes the pullback to $\operatorname{SO}(2)^m$ via the first projection $\pi_1$ of the homogeneous $1$-cocycle $\mathrm{Rot}$ defined in \eqref{eq:rot}, while $\mathrm{Or}_j$ denotes the pullback to $\operatorname{SO}(2)^m$ via $\pi_j$ of the orientation cocycle
\begin{equation*}
\mathrm{Or}\colon \operatorname{SO}(2)^3\longrightarrow\{-1,0,1\}\,.
\end{equation*}
We now evaluate the pullback $\rho^*(\kappa)$ on the cycle $z$ of Lemma~\ref{lem: representation Zn}, with $n=2m-1$, and, writing $f_i=\rho(e_i)\in \operatorname{SO}(2)^m$, obtain
\begin{equation}\label{equ: rho on torus}
\begin{aligned}
&\langle\rho^*(\kappa),[{\mathbb Z}^{2m-1}]\rangle\\
=&\frac{(-1)^{m-1}}{2^{m-1}} \sum_{\sigma\in \mathrm{Sym}(n)} \mathrm{sign}(\sigma)\, \mathrm{Rot}_1(0,f_{\sigma(1)}) \cdot\mathrm{Or}_2(f_{\sigma(1)}, f_{\sigma(1)}f_{\sigma(2)},f_{\sigma(1)} f_{\sigma(2)}f_{\sigma(3)})\\
&\hphantom{XXXXxXXXX}\cdot (\mathrm{Or}_3\cup \dots \cup \mathrm{Or}_m)(f_{\sigma(1)} f_{\sigma(2)}f_{\sigma(3)},\dots,f_{\sigma(1)}\cdot\dots\cdot f_{\sigma(n)})\,.
\end{aligned}
\end{equation}
To prove that this expression vanishes, first observe that
\begin{equation*}
\mathrm{Or}(f_1,f_2,f_3)=-\mathrm{Or}(f_1^{-1},f_2^{-1},f_3^{-1})
\end{equation*}
holds for any $f_1,f_2,f_3$ in $\operatorname{SO}(2)$ (but not in $\operatorname{SL}(2,{\mathbb R})$). Specializing to $f_1=g^{-1}$, $f_2=\mathrm{Id}$, $f_3=h$ and using that $\mathrm{Or}$ is alternating gives
\begin{equation*}
\mathrm{Or}(g^{-1},\mathrm{Id},h)=\mathrm{Or}(h^{-1},\mathrm{Id},g)
\end{equation*}
for any $g,h$ in $\operatorname{SO}(2)$.
Finally, using the invariance of $\mathrm{Or}$, we can multiply the variables on the left hand side of the latter equality by $fg$, and the variables on the right hand side by $fh$, to obtain
\begin{equation}\label{equ:Or(f,fg,fgh)}
\mathrm{Or}(f,fg,fgh)=\mathrm{Or}(f,fh,fhg)
\end{equation}
for any $f,g,h$ in $\operatorname{SO}(2)$. Thus from \eqref{equ: rho on torus} and \eqref{equ:Or(f,fg,fgh)}, and with $\tau_2=(2\,3)$, we obtain the asserted vanishing as
\begin{equation*}
\begin{aligned}
&\langle\rho^*(\kappa),[{\mathbb Z}^{2m-1}]\rangle\\
=&\frac{(-1)^{m-1}}{2^{m-1}} \sum_{\sigma\in \mathrm{Sym}(n)} \mathrm{sign}(\sigma)\, \mathrm{Rot}_1(0,f_{\sigma(1)}) \cdot\mathrm{Or}_2(f_{\sigma(1)}, f_{\sigma(1)}f_{\sigma(3)},f_{\sigma(1)} f_{\sigma(3)}f_{\sigma(2)})\\
&\hphantom{XXXXxXXXX}\cdot (\mathrm{Or}_3\cup \dots \cup \mathrm{Or}_m)(f_{\sigma(1)} f_{\sigma(2)}f_{\sigma(3)},\dots,f_{\sigma(1)}\cdot\dots\cdot f_{\sigma(n)})\\
=&\mathrm{sign}(\tau_2)\frac{(-1)^{m-1}}{2^{m-1}} \sum_{\sigma\in \mathrm{Sym}(n)} \mathrm{sign}(\sigma)\, \mathrm{Rot}_1(0,f_{\sigma(1)}) \cdot\mathrm{Or}_2(f_{\sigma(1)}, f_{\sigma(1)}f_{\sigma(2)},f_{\sigma(1)} f_{\sigma(2)}f_{\sigma(3)})\\
&\hphantom{XXXXxXXXX}\cdot (\mathrm{Or}_3\cup \dots \cup \mathrm{Or}_m)(f_{\sigma(1)} f_{\sigma(2)}f_{\sigma(3)},\dots,f_{\sigma(1)}\cdot\dots\cdot f_{\sigma(n)})\\
=&-\langle\rho^*(\kappa),[{\mathbb Z}^{2m-1}]\rangle\,.
\end{aligned}
\end{equation*}
\end{proof}

\begin{proof}[Proof of Theorem~\ref{thm:vanishing}]
Let $\rho\colon {\mathbb Z}^{2m-1}\rightarrow\operatorname{SO}(2m,1)^\circ$ be a homomorphism. By Lemma~\ref{lem:tricot}, up to conjugation either $\rho({\mathbb Z}^{2m-1})<P$, in which case $\rho^*(\varepsilon_{2m}^{\mathrm{b}})=0$ by Lemma~\ref{lem:vanish}, or $\rho({\mathbb Z}^{2m-1})<T_0$. In the latter case, either $\rho({\mathbb Z}^{2m-1})\not\subset T^\circ$ and the vanishing follows from Lemma~\ref{lem:vanish2}, or $\rho({\mathbb Z}^{2m-1})<T^\circ$, in which case Lemma~\ref{lem:rot=0} and \eqref{eq:pairing} imply that $\rho^*(\vartheta\cup\varepsilon_{(2)}\cup\dots\cup\varepsilon_{(m)})=0$, and hence $\rho^*(\varepsilon_{2m}^{\mathrm{b}})=0$ by \eqref{eq:2.4}.
\end{proof}

\begin{proof}[Proof of Theorem \ref{thm: main thm}]
Let $N$ be a compact core of $M$ and let $C_1,\dots,C_h$ be the connected components of the boundary $\partial N$.
It follows from Theorem~\ref{thm: vol modulo boundary} that
\begin{equation*}
(-1)^m\frac{2}{\operatorname{Vol}(S^{2m})}\operatorname{Vol}(\rho)\equiv-\sum_{i=1}^h \langle (\delta^{\mathrm{b}})^{-1}(\rho^*(\varepsilon_{2m}^{\mathrm{b}})|_{C_i}),[C_i]\rangle\mod {\mathbb Z}\,,
\end{equation*}
with the usual abuse of notation that $\rho^*(\varepsilon_{2m}^{\mathrm{b}})|_{C_i}$ refers to the element in ${\rm H}_{\rm b}^{2m}(C_i,{\mathbb Z})$ corresponding to $(\rho|_{\pi_1(C_i)})^*(\varepsilon_{2m}^{\mathrm{b}})\in{\rm H}_{\rm b}^{2m}(\pi_1(C_i),{\mathbb Z})$. If now all the $C_i$ are tori, the above congruence relation and Theorem~\ref{thm:vanishing} imply that $(-1)^m\frac{2}{\operatorname{Vol}(S^{2m})}\operatorname{Vol}(\rho)\in{\mathbb Z}$.

In the general case, let $p_i\colon C'_i\rightarrow C_i$ be a covering of degree $B_{2m-1}$ that is a torus. Then
\begin{equation*}
\begin{aligned}
B_{2m-1}\langle(\delta^{\mathrm{b}})^{-1}(\rho^*(\varepsilon_{2m}^{\mathrm{b}})|_{C_i}),[C_i]\rangle =&\langle(\delta^{\mathrm{b}})^{-1}(\rho^*(\varepsilon_{2m}^{\mathrm{b}})|_{C_i}),p_{i*}([C'_i])\rangle\\
=&\langle(\delta^{\mathrm{b}})^{-1}p_i^*(\rho^*(\varepsilon_{2m}^{\mathrm{b}})|_{C_i}),[C'_i]\rangle\,.
\end{aligned}
\end{equation*}
Now observe that $p_i^*(\rho^*(\varepsilon_{2m}^{\mathrm{b}})|_{C_i})\in{\rm H}_{\rm b}^{2m}(C'_i,{\mathbb Z})$ corresponds to the class
\begin{equation*}
(\rho\circ p_{i*})^*(\varepsilon_{2m}^{\mathrm{b}})=(\rho|_{\pi_1(C'_i)})^*(\varepsilon_{2m}^{\mathrm{b}})\in{\rm H}_{\rm b}^{2m}(\pi_1(C'_i),{\mathbb Z})\,,
\end{equation*}
which vanishes by Theorem~\ref{thm:vanishing}.
\end{proof}

\section{Examples of nontrivial and non-maximal representations}\label{sec:examples}

In this section we give examples of volumes of representations. More precisely:
\begin{enumerate}
\item[--] In \S~\ref{subsec:5.1} we set ourselves in dimension $3$. Here we show in particular that the volume of a Dehn filling of a finite volume hyperbolic manifold coincides with the volume of the filling representation. In fact Proposition~\ref{prop: vol(rho)=vol(rho0)} deals with a more general case.
\item[--] In \S~\ref{subsec:5.2}, by glueing appropriately copies of a hyperbolic manifold of arbitrary dimension with totally geodesic boundary, we construct manifolds $M_k$ and representations of $\pi_1(M_k)$ whose volume is a rational multiple of $\operatorname{Vol}(M_k)$.
\end{enumerate}

\subsection{Dimension $3$: representations given by Dehn filling}\label{subsec:5.1}

Let $M$ be a complete finite volume hyperbolic $3$-manifold which, for simplicity, we assume has only one cusp. If $N$ is a compact core of $M$, its boundary $\partial N$ is Euclidean with the induced metric and hence there is an isometry $\varphi\colon \partial N\rightarrow{\mathbb T}^2$ to a two-dimensional torus for an appropriate flat metric on ${\mathbb T}^2$.
We obtain then a decomposition of $M$ as a connected sum
\begin{equation*}
M=N\#({\mathbb T}^2\times{\mathbb R}_{\geq0})\,,
\end{equation*}
where the identification is via $\varphi$. We are now going to fill in a solid two-torus to obtain a compact manifold. To this end, let $\tau\subset\partial N$ be a simple closed geodesic and let us choose a diffeomorphism $\varphi_\tau\colon \partial N\rightarrow S^1\times S^1$ in such a way that $\varphi_\tau(\tau)=S^1\times\{*\}$. Then $M_\tau$ is the connected sum
\begin{equation*}
M_\tau:=N\#({\mathbb D}^2\times S^1)\,,
\end{equation*}
identified via $\varphi_\tau$. Denote by $j_\tau\colon N\hookrightarrow M_\tau$ the canonical inclusion and by $p\colon M\rightarrow N$ the canonical projection given by the cusp retraction ${\mathbb T}^2\times {\mathbb R}_{\geq0} \rightarrow {\mathbb T}^2$. The composition
\begin{equation*}
f_\tau= j_\tau \circ p\colon M \longrightarrow M_\tau
\end{equation*}
induces a map
\begin{equation*}
(f_\tau)_*\colon \Gamma \longrightarrow \Gamma_\tau
\end{equation*}
between the fundamental groups $\Gamma=\pi_1(M)$ and $\Gamma_\tau=\pi_1(M_\tau)$.

\begin{prop}\label{prop: vol(rho)=vol(rho0)}
Let $M_\tau$ be the compact $3$-manifold obtained by Dehn filling from the hyperbolic $3$-manifold $M$ with one cusp. Let $\rho\colon \Gamma_\tau\rightarrow \operatorname{SO}(3,1)$ be any representation of $\Gamma_\tau$ and let $\rho_\tau:=\rho\circ (f_\tau)_*\colon \Gamma\rightarrow\operatorname{SO}(3,1)$. Then
\begin{equation*}
\operatorname{Vol}(\rho_\tau)=\operatorname{Vol}(\rho)\,.
\end{equation*}
\end{prop}

By the Gromov--Thurston $(2\pi)$-Theorem \cite{GromovThurston}, for all simple closed geodesics $\tau$ whose length in the induced Euclidean metric on $\partial N$ is greater than $2\pi$, the compact manifold $M_\tau$ admits a hyperbolic structure. Proposition~\ref{prop:intro} is then an immediate consequence of Proposition~\ref{prop: vol(rho)=vol(rho0)}.

To prove the proposition, recall that by definition the volume of the representation $\rho_\tau$ is equal to
\begin{equation*}
\operatorname{Vol}(\rho_\tau)=\langle c\circ \Psi^{-1} \circ f_\tau^* \circ \rho^*(\omega_3^{\mathrm{b}}), [N,\partial N]\rangle\,,
\end{equation*}
where all the maps involved can be read off the diagram below. We will start by defining a map $F\colon {\rm H}^3(M_\tau)\rightarrow {\rm H}^3(N,\partial N)$ that will turn the diagram below into a commutative diagram (Lemma~\ref{lem: F commute}) and which will induce a canonical isomorphism (Lemma~\ref{lem: F isom}).
\begin{equation}\label{equ: big diag degree 3}
\xymatrix{
{\rm H}_{\rm cb}^3(\operatorname{SO}(3,1)) \ar[d]^{c} \ar@/^2pc/[rr]^{\rho_\tau^*} \ar[r]_-{\rho^*} &{\color{red}{{\rm H}_{\rm b}^3(\Gamma_\tau)}}\ar@[red][d]^{\color{red}{c}} \ar@[red][r]_{\color{red}{f_\tau^*}} &{\color{red}{{\rm H}_{\rm b}^3(\Gamma)}} &{\color{red}{{\rm H}_{\rm b}^3(N,\partial N)}}\ar@[red][d]^{\color{red}{c}} \ar@[red][l]^-{\color{red}{\Psi}}_-{\color{red}{\cong}} \\
{\rm H}_{\rm c}^3(\operatorname{SO}(3,1)) \ar[r]_-{\rho^*} &{\color{red}{{\rm H}^3(\Gamma_\tau)}}\ar@[red][r]^{\color{red}{\cong}}_{\color{red}{g}} &{\color{red}{{\rm H}^3(M_\tau)}}\ar@{.>}@[red][r]_-{\color{red}{F\,\,}} &{\color{red}{{\rm H}^3(N,\partial N)}}.}
\end{equation}
The inclusions
\begin{equation*}
\xymatrix{
M_\tau \ar@{^{(}->}[dr]_i& & (N,\partial N)\ar@{_{(}->}[dl]^{(j_\tau,\varphi_\tau)} \\
&(M_\tau,{\mathbb D}^2\times S^1)&
}
\end{equation*}
induce the following maps in homology and cohomology,
\begin{equation}\label{equ: ho can iso}
\xymatrix{
{\rm H}_\bullet(M_\tau,{\mathbb Z})\ar[dr]_-{i_*} & &{\rm H}_\bullet((N,\partial N),{\mathbb Z})\ar[dl]^{(j_\tau,\varphi_\tau)_*} \\
&{\rm H}_\bullet((M_\tau,{\mathbb D}^2\times S^1),{\mathbb Z}) &
}
\end{equation}
and
\begin{equation}\label{equ: coho can iso}
\xymatrix{
{\rm H}^\bullet(M_\tau,{\mathbb Z}) & &{\rm H}^\bullet((N,\partial N),{\mathbb Z}) \\
&{\rm H}^\bullet((M_\tau,{\mathbb D}^2\times S^1),{\mathbb Z})\ar[lu]^{i^*}\ar[ur]_{(j_\tau,\varphi_\tau)^*} &
}
\end{equation}

\begin{lemma}\label{lem: F isom}
In degree $3$ the maps in \eqref{equ: ho can iso} and \eqref{equ: coho can iso} are canonical isomorphisms and the composition
\begin{equation*}
F=(j_\tau,\varphi_\tau)^*\circ (i^*)^{-1}\colon {\rm H}^3(M_\tau,{\mathbb Z}) \longrightarrow {\rm H}^3((N,\partial N),{\mathbb Z})
\end{equation*}
maps the dual $\beta_{M_\tau}$ of the fundamental class of $M_\tau$ to the dual $\beta_{[N,\partial N]}$ of the fundamental class of $(N,\partial N)$.
\end{lemma}
\begin{proof}
It is enough to prove the statement in homology, where we show that the fundamental classes are mapped to each other by exhibiting compatible triangulations of the three manifolds. Start with a triangulation of the boundary torus $S^1\times S^1=\partial N$, and extend it on the one hand to the solid torus ${\mathbb D}^2\times S^1$ and on the other hand to $N$ \cite[Theorem 10.6]{Munkres}.
This produces compatible triangulations representing $[M_\tau]$, $[M_\tau,{\mathbb D}^2\times S^1]$ and $[N,\partial N]$.
\end{proof}

\begin{lemma}\label{lem: F commute}
The diagram \eqref{equ: big diag degree 3} commutes.
\end{lemma}
\begin{proof}
We only need to show that the right rectangle commutes. To this end, we decompose the diagram into subdiagrams as follows:
\begin{equation*}
\xymatrix{
{\color{red}{{\rm H}_{\rm b}^3(\Gamma)}}\ar[r]^{\cong} &{\rm H}_{\rm b}^3(M) &{\rm H}_{\rm b}^3(N)\ar[l]_{\cong}^{p^*} & \\
{\color{red}{{\rm H}_{\rm b}^3(\Gamma_\tau)}}\ar@[red][u]^{\color{red}{f_\tau^*}}\ar[r]^{\cong} \ar@[red][dd]_{\color{red}{c}} &{\rm H}_{\rm b}^3(M_\tau) \ar[u]^{f_\tau^*}\ar[ru]_-{j_\tau^*} \ar[dd]_{c} & &{\color{red}{{\rm H}_{\rm b}^3(N,\partial N)}} \ar@[red]@/_5.3pc/[lllu]^{\color{red}{\Psi}} \ar[lu]_-{i_{|N}^*} \ar@[red][dd]^{\color{red}{c}}\\
& &{\rm H}_{\rm b}^3(M_\tau,{\mathbb D}^2\times S^1) \ar[dd]^<<<<<<{c}\ar[lu]_-{i^*}\ar[ur]^-{(j_\tau,\varphi_\tau)^*} & \\
{\color{red}{{\rm H}^3(\Gamma_\tau)}}\ar@[red][r]^{\color{red}{\cong}}_{\color{red}{g}} &{\color{red}{{\rm H}^3(M_\tau)}}\ar@[red]@{->}'[r][rr]^<<<<<<<<<<<<<{\color{red}{F}} & & {\color{red}{{\rm H}^3(N,\partial N)}}\\
& &{\rm H}^3(M_\tau,{\mathbb D}^2\times S^1) \ar[lu]_-{i^*}\ar[ru]^-{(j_\tau,\varphi_\tau)^*}\,. &
}
\end{equation*}
Since, by naturality, all subdiagrams commute, the lemma follows.
\end{proof}

\begin{proof}[Proof of Proposition \ref{prop: vol(rho)=vol(rho0)}]
Using the commutativity of the diagram \eqref{equ: big diag degree 3} and the facts that $c(\omega_3^{\mathrm{b}})=\omega_3$ and $g\circ\rho^*(\omega_3)=\operatorname{Vol}(\rho)\cdot \beta_{M_\tau}$, we compute
\begin{eqnarray*}
c\circ \Psi^{-1}\circ f_\tau^*\circ \rho^*(\omega_3^{\mathrm{b}})
&=& F\circ g\circ \rho^*\circ c(\omega_3^{\mathrm{b}})\\
&=& F\circ g\circ \rho^*(\omega_3)\\
&=& F(\operatorname{Vol}(\rho)\cdot \beta_{M_\tau})\\
&=& \operatorname{Vol}(\rho)\cdot \beta_{[N,\partial N]}\,.
\end{eqnarray*}
It is immediate that
\begin{equation*}
\operatorname{Vol}(\rho_\tau)=\langle c\circ \Psi^{-1} \circ f_\tau^* \circ \rho^*(\omega_3^{\mathrm{b}}), [N,\partial N] \rangle = \langle \operatorname{Vol}(\rho) \cdot \beta_{[N,\partial N]}, [N,\partial N]\rangle =\operatorname{Vol}(\rho)\,,
\end{equation*}
which finishes the proof of the proposition.
\end{proof}

\subsection{Representations giving rational multiples of the maximal representation}\label{subsec:5.2}

Let $M$ be an $n$-dimensional hyperbolic manifold with nonempty totally geodesic boundary (possibly with cusps), see for example \cite{Millson}. Suppose that the boundary of $M$ has at least two connected components, which can be achieved by taking appropriate coverings of a manifold with connected boundary. Decompose $\partial M=C_0 \sqcup C_1$. Let $M'$ be the double of $M$ along $C_1$. Observe that $M'$ has as boundary two copies of $C_0$ with opposite orientations. Glueing these two copies, we obtain a complete hyperbolic manifold $M_1$. We can repeat the procedure as follows: take $k$ copies of $M'$ and glue the copies of $C_0$ two by two so as to obtain a connected closed hyperbolic manifold $M_k$. Observe that $\operatorname{Vol}(M_k)=2k\,\operatorname{Vol}(M)$.

\medskip
\begin{center}
[Figure: schematic of the glueing construction. Left: the manifold $M$ with boundary components $C_0$ and $C_1$; middle: its double $M'$ along $C_1$, whose boundary consists of two copies of $C_0$; right: the closed manifold $M_k$ obtained by glueing $k$ copies of $M'$ along the copies of $C_0$.]
\end{center}
\medskip

For any $\ell <k$, there are degree one maps $f\colon M_k \rightarrow M_{k-\ell}$ obtained by folding $\ell$ copies of $M'$ in $M_k$ along its boundary.
These maps send the last $\ell+1$ copies of $M'$ inside $M_k$ to the last copy of $M'$ in $M_{k-\ell}$, as illustrated in the following picture for $\ell=2$:

\medskip
\begin{center}
[Figure: the folding map for $\ell=2$. Top: the chain of copies of $M'$ in $M_k$, with the glueing loci labelled alternately $C_0$ and $C_1$; middle: the last three copies of $M'$ being folded onto one another; bottom: the resulting chain for $M_{k-2}$.]
\end{center}
\medskip

The induced representation of $\pi_1(M_k)$, obtained by composing the induced map on fundamental groups with the lattice embedding of $\pi_1(M_{k-\ell})$ in $\mathrm{Isom}({\mathbb H}^n)$, has volume equal to the volume of $M_{k-\ell}$, that is, $(k-\ell)/k$ times the volume of the maximal representation.

\appendix
\section{On the continuity of the volume of a representation}\label{sec:continuity}

The goal of this section is to prove the following:

\begin{prop}\label{thm:cont}
Let $\Gamma<\mathrm{Isom}({\mathbb H}^n)$ be any torsion-free lattice. The function
\begin{equation*}
\begin{aligned}
\textup{Hom}(\Gamma, \mathrm{Isom}({\mathbb H}^n))&\longrightarrow \quad{\mathbb R}\\
\rho\qquad\quad&\longmapsto \operatorname{Vol}(\rho)
\end{aligned}
\end{equation*}
is continuous.
\end{prop}

We begin with some preliminaries.
Let $G$ be a locally compact group, $\Gamma<G$ a lattice and $L$ a locally compact group. We denote by $\mathrm{C}_\mathrm{b}(X)$ the continuous bounded real valued functions on a topological space $X$. We define a map
\begin{equation*}
\begin{array}{rccl}
&\mathrm{C}_\mathrm{b}(L^{n+1})\times \mathrm{Rep}(\Gamma,L)&\longrightarrow &\mathrm{C}_\mathrm{b}(\Gamma^{n+1})\\
&(c,\pi)&\longmapsto &\quad\pi^*( c ),
\end{array}
\end{equation*}
where
\begin{equation*}
\pi^*( c )(\gamma_0,\dots,\gamma_n)=c(\pi(\gamma_0),\dots,\pi(\gamma_n))\,.
\end{equation*}
We endow $\mathrm{C}_\mathrm{b}(L^{n+1})$ with the topology of uniform convergence and $\mathrm{C}_\mathrm{b}(\Gamma^{n+1})$ with the topology of pointwise convergence with control of norms: in other words $\alpha_n\rightarrow\alpha$ in $\mathrm{C}_\mathrm{b}(\Gamma^{n+1})$ if it converges pointwise and $\sup_n\|\alpha_n\|_\infty<\infty$. With these topologies and with the pointwise convergence topology on $\mathrm{Rep}(\Gamma,L)$ the above map is continuous.

We proceed to implement the transfer from $\Gamma$ to $G$. For this, let $s\colon \Gamma\setminus G\rightarrow G$ be a Borel section and $r\colon G\rightarrow \Gamma$ be defined by
\begin{equation*}
g=r(g)\cdot s(p(g))\,,
\end{equation*}
where $p\colon G\rightarrow \Gamma\setminus G$ denotes the canonical projection. Given a $\Gamma$-invariant cochain $\alpha\in \mathrm{C}_\mathrm{b}(\Gamma^{n+1})^\Gamma$, define
\begin{equation*}
T\alpha(g_0,\dots,g_n)=\int_{\Gamma\setminus G} \alpha(r(gg_0),\dots,r(gg_n))d\mu(g)\,,
\end{equation*}
where $\mu$ is the Haar measure on $G$ normalized so that $\mu(\Gamma\setminus G)=1$.
\begin{prop}
Suppose that the Borel section has the property that images of compact subsets are precompact. Then
\begin{enumerate}
\item $T\alpha$ is continuous, hence $T\alpha\in \mathrm{C}_\mathrm{b}(G^{n+1})^G$,
\item $T\colon \mathrm{C}_\mathrm{b}(\Gamma^{n+1})^\Gamma \rightarrow \mathrm{C}_\mathrm{b}(G^{n+1})^G$ is continuous for pointwise convergence on $\Gamma$ with control of norms and uniform convergence on compact sets on $G^{n+1}$.
\end{enumerate}
\end{prop}
\begin{proof}
Let $D=s(\Gamma\setminus G)$. Choose $\epsilon>0$ and $C\subset D$ compact with $\mu(D\setminus C)<\epsilon$. Then
\begin{equation*}
\left| T\alpha(g_0,\dots,g_n)-\int_C \alpha(r(gg_0),\dots,r(gg_n))d\mu(g) \right| < \epsilon \| \alpha \|_\infty\,.
\end{equation*}
Now we write
\begin{equation*}
\int_C \alpha(r(gg_0),\dots,r(gg_n))d\mu(g) = \sum_{\gamma_0,\dots,\gamma_n}\alpha(\gamma_0,\dots,\gamma_n)\mu(C\cap \gamma_0Dg_0^{-1}\cap \dots \cap \gamma_n Dg_n^{-1})\,.
\end{equation*}
Before we continue with the proof of the proposition, we need to show that every compact subset $K\subset G$ meets only finitely many $\Gamma$-translates of the fundamental domain $D$:
\begin{lemma}
For any compact subset $K\subset G$, the set
\begin{equation*}
F_K:= \{ \gamma \in \Gamma \mid K \cap \gamma D\neq \emptyset\}
\end{equation*}
is finite.
\end{lemma}
Note that the lemma is wrong for arbitrary fundamental domains, even for cocompact $\Gamma$. Indeed, start by writing the standard fundamental domain $(0,1]$ of ${\mathbb Z}$ in ${\mathbb R}$ as
\begin{equation*}
D_0=\sqcup_{n=1}^{+\infty} (1/2^n,1/2^{n-1}]\,,
\end{equation*}
and perturb it by translating each of the disjoint intervals of $D_0$ by a different translation, for example obtaining the new fundamental domain
\begin{equation*}
D=\sqcup_{n=1}^{+\infty} n+(1/2^n,1/2^{n-1}]\,.
\end{equation*}
Take as compact set the closed interval $C=[0,1]$. Then for every $n\geq 1$, the intersection $C\cap (-n+D)$ is nonempty.
\begin{proof}
Set $F:=\cup_{\eta\in \Gamma} \eta K$ and observe that $F\cap D=s(p(F))$ is relatively compact by our choice of Borel section. Since $\gamma K \cap D= \gamma K \cap (F\cap D)$ and $K$ and $F\cap D$ are relatively compact, the lemma follows by the discreteness of $\Gamma$.
\end{proof}
Going back to the proof of the proposition, fix compact subsets $C_0,\dots,C_n$ of $G$ such that $g_i\in C_i$. Observe that $F_{Cg_i}\subset F_{CC_i}$ and if $\gamma_i\in F_{CC_i}\setminus F_{Cg_i}$ then the measure of $C\cap \gamma_0Dg_0^{-1}\cap \dots \cap \gamma_n Dg_n^{-1}$ is zero. We can thus rewrite the above sum as
\begin{equation*}
\int_C \alpha(r(gg_0),\dots,r(gg_n))d\mu(g) = \sum_{\gamma_i\in F_{CC_i}}\alpha(\gamma_0,\dots,\gamma_n)\mu(C\cap \gamma_0Dg_0^{-1}\cap \dots \cap \gamma_n Dg_n^{-1})\,,
\end{equation*}
for any $(g_0,\dots,g_n)\in C_0\times \dots \times C_n$. Point (2) of the Proposition follows since if $\alpha_n\rightarrow \alpha$ with pointwise convergence and $\sup_n \| \alpha_n\|_\infty < +\infty$ then $T\alpha_n\rightarrow T\alpha$ uniformly on compact sets. Finally, we show (1) by showing that the function
\begin{equation*}
(g_0,\dots,g_n)\mapsto \mu( C\cap \gamma_0Dg_0^{-1}\cap \dots \cap \gamma_n Dg_n^{-1})
\end{equation*}
is continuous.
To estimate the difference
\begin{equation*}
\mu(C\cap \bigcap_{i=0}^n \gamma_i D g_i^{-1} )-\mu(C\cap \bigcap_{i=0}^n \gamma_i D h_i^{-1})
\end{equation*}
we introduce the notation
\begin{equation*}
A(x_0,\dots ,x_n):=C \cap \bigcap_{i=0}^n \gamma_i D x_i^{-1} \,,
\end{equation*}
for any $x_0,\dots, x_n\in G$. The above difference thus becomes
\begin{equation*}
\mu(A(g_0,\dots,g_n))-\mu(A(h_0,\dots,h_n))
\end{equation*}
which we rewrite as a telescopic sum
\begin{equation*}
\sum_{i=0}^n\left( \mu(A(h_0,\dots,h_{i-1},g_{i},g_{i+1},\dots g_n))-\mu(A(h_0,\dots,h_{i-1},h_i,g_{i+1},\dots g_n))\right)\,.
\end{equation*}
Setting
\begin{equation*}
B_j:=C \cap \bigcap_{\ell=0}^{j-1} \gamma_\ell D h_\ell^{-1} \cap \bigcap_{\ell=j+1}^n \gamma_\ell D g_\ell^{-1}\,,
\end{equation*}
the telescopic sum becomes
\begin{equation*}
\sum_{j=0}^n (\mu(B_j\cap \gamma_jDg_j^{-1})-\mu(B_j\cap \gamma_j Dh_j^{-1}))\,.
\end{equation*}
Using the simple set theoretical inequality valid for any sets $B,E,E'$
\begin{equation*}
|\mu(B\cap E)-\mu(B\cap E')| \leq \mu((B\cap E)\Delta (B\cap E'))\leq \mu(E\Delta E')\,,
\end{equation*}
we obtain for each summand the estimate
\begin{equation*}
|\mu(B_j\cap \gamma_jDg_j^{-1})-\mu(B_j\cap \gamma_j Dh_j^{-1})|\leq \mu( \gamma_jDg_j^{-1}\Delta \gamma_jDh_j^{-1})=\| \chi_{Dg_j^{-1}h_j}-\chi_D\|_1\,.
\end{equation*}
Thus
\begin{equation*}
\begin{aligned}
&| \mu(C\cap \bigcap_{i=0}^n \gamma_i D g_i^{-1} )-\mu(C\cap \bigcap_{i=0}^n \gamma_i D h_i^{-1})|\\
\leq &\sum_{j=0}^n |\mu(B_j\cap \gamma_jDg_j^{-1})-\mu(B_j\cap \gamma_j Dh_j^{-1})|\\
\leq &\sum_{j=0}^n \| \chi_{Dg_j^{-1}h_j}-\chi_D\|_1\,.
\end{aligned}
\end{equation*}
The continuity of the right regular action of $G$ on $L^1(G)$ concludes the proof of the proposition.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{thm:cont}]
Consider $\Gamma$ as a lattice in the full isometry group $\mathrm{Isom}(\mathbb{H}^n)$ and denote by $\varepsilon\colon \mathrm{Isom}(\mathbb{H}^n)\rightarrow \{-1,+1\}$ the homomorphism sending an isometry to $+1$ if it preserves orientation and $-1$ otherwise.
By what precedes, the cohomology class $\mathrm{transf}(\rho^*(\omega_{\mathbb{H}^n}))$ can be represented by the continuous cocycle sending $(g_0,\dots,g_n)\in \mathrm{Isom}(\mathbb{H}^n)^{n+1}$ to
\begin{equation*}
\int_{\Gamma\setminus \mathrm{Isom}(\mathbb{H}^n)} \varepsilon(g) \omega_n(\rho(r(gg_0)),\dots,\rho(r(gg_n)))d\mu(g)\,.
\end{equation*}
Note that the cocycle stays continuous after transferring from $\mathrm{Isom}^+(\mathbb{H}^n)$ to $\mathrm{Isom}(\mathbb{H}^n)$. Integrating over a maximal compact subgroup $K$ in $\mathrm{Isom}(\mathbb{H}^n)$ we obtain a continuous cocycle $(\mathbb{H}^n)^{n+1}\rightarrow {\mathbb R}$ that sends an $(n+1)$-tuple of points $g_0K,\dots,g_nK\in \mathrm{Isom}(\mathbb{H}^n)/K\cong \mathbb{H}^n$ to
\begin{equation}\label{eq: integral for cocycle}
\int_{K^{n+1}}\prod_{i=0}^n dk_i \int_{\Gamma\setminus \mathrm{Isom}(\mathbb{H}^n)} \varepsilon(g) \omega_n(\rho(r(gg_0k_0)),\dots,\rho(r(gg_nk_n)))d\mu(g)\,.
\end{equation}
We showed in \cite[Proposition 3.3]{Bucher_Burger_Iozzi_Mostow} that
\begin{equation*}
\mathrm{transf}(\rho^*(\omega_n))=\frac{\operatorname{Vol}(\rho)}{\operatorname{Vol}(M)}\cdot \omega_{\mathbb{H}^n}\in \mathrm{H}_{\mathrm{cb}}^n(\mathrm{Isom}(\mathbb{H}^n),{\mathbb R}_\varepsilon)\,.
\end{equation*}
Since there are no coboundaries in degree $n$ for $\mathrm{Isom}(\mathbb{H}^n)$-equivariant continuous bounded cochains on $\mathbb{H}^n$, this implies that we have a strict equality between (\ref{eq: integral for cocycle}) and
\begin{equation*}
\frac{\operatorname{Vol}(\rho)}{\operatorname{Vol}(M)} \cdot \omega_n(g_0K,\dots,g_nK)\,.
\end{equation*}
Since (\ref{eq: integral for cocycle}) varies continuously in $\rho$, so does $\operatorname{Vol}(\rho)$.
\end{proof}

\vskip1cm
\bibliographystyle{alpha}
\vskip1cm

\newcommand{\etalchar}[1]{$^{#1}$}
\begin{thebibliography}{DLSW19}

\bibitem[AM13]{Austin_Moore}
T.~Austin and C.~C. Moore.
\newblock Continuity properties of measurable group cohomology.
\newblock {\em Math. Ann.}, 356(3):885--937, 2013.

\bibitem[BBF{\etalchar{+}}14]{BBFIPP}
M.~Bucher, M.~Burger, R.~Frigerio, A.~Iozzi, C.~Pagliantini, and M.~B. Pozzetti.
\newblock Isometric embeddings in bounded cohomology.
\newblock {\em J. Topol. Anal.}, 6(1):1--25, 2014.

\bibitem[BBI13]{Bucher_Burger_Iozzi_Mostow}
M.~Bucher, M.~Burger, and A.~Iozzi.
\newblock A dual interpretation of the {G}romov-{T}hurston proof of rigidity and volume rigidity for representations of hyperbolic lattices.
\newblock In {\em Trends in harmonic analysis}, volume~3 of {\em Springer INdAM Ser.}, pages 47--76. Springer, Milan, 2013.

\bibitem[BCG07]{Besson_Courtois_Gallot_comm}
G.~Besson, G.~Courtois, and S.~Gallot.
\newblock In\'egalit\'es de {M}ilnor-{W}ood g\'eom\'etriques.
\newblock {\em Comment. Math. Helv.}, 82(4):753--803, 2007.

\bibitem[BI07]{Burger_Iozzi_difff}
M.~Burger and A.~Iozzi.
\newblock Bounded differential forms, generalized {M}ilnor-{W}ood and an application to deformation rigidity.
\newblock {\em Geom. Dedicata}, 125:1--23, 2007.

\bibitem[BIW10]{Burger_Iozzi_Wienhard_toledo}
M.~Burger, A.~Iozzi, and A.~Wienhard.
\newblock Surface group representations with maximal {T}oledo invariant.
\newblock {\em Ann. of Math. (2)}, 172(1):517--566, 2010.

\bibitem[Bla79]{Blanc}
Ph. Blanc.
\newblock Sur la cohomologie continue des groupes localement compacts.
\newblock {\em Ann. Sci. \'{E}cole Norm. Sup. (4)}, 12(2):137--168, 1979.

\bibitem[BM12]{Bucher_Monod}
M.~Bucher and N.~Monod.
\newblock The norm of the {E}uler class.
\newblock {\em Math. Ann.}, 353(2):523--544, 2012.

\bibitem[DLSW19]{DLSW19}
P.~Derbez, Y.~Liu, H.~Sun, and S.~Wang.
\newblock Volume of representations and mapping degree.
\newblock {\em Adv. Math.}, 351:570--613, 2019.

\bibitem[Dun99]{Dunfield}
N.~M. Dunfield.
\newblock Cyclic surgery, degrees of maps of character curves, and volume rigidity for hyperbolic manifolds.
\newblock {\em Invent. Math.}, 136(3):623--657, 1999.

\bibitem[FK06]{Francaviglia_Klaff}
S.~Francaviglia and B.~Klaff.
\newblock Maximal volume representations are {F}uchsian.
\newblock {\em Geom. Dedicata}, 117:111--124, 2006.

\bibitem[Fra04]{Francaviglia}
S.~Francaviglia.
\newblock Hyperbolic volume of representations of fundamental groups of cusped 3-manifolds.
\newblock {\em Int. Math. Res. Not.}, (9):425--459, 2004.

\bibitem[Ghy87]{Ghys}
{\'E}.~Ghys.
\newblock Groupes d'hom\'eomorphismes du cercle et cohomologie born\'ee.
\newblock In {\em The {L}efschetz centennial conference, {P}art {III} ({M}exico {C}ity, 1984)}, volume~58 of {\em Contemp. Math.}, pages 81--106. Amer. Math. Soc., Providence, RI, 1987.

\bibitem[Gol80]{Goldman_thesis}
W.~M. Goldman.
\newblock {\em Discontinuous groups and the {E}uler class}.
\newblock ProQuest LLC, Ann Arbor, MI, 1980.
\newblock Thesis (Ph.D.)--University of California, Berkeley.

\bibitem[Gol82]{Goldman82}
W.~M. Goldman.
\newblock Characteristic classes and representations of discrete subgroups of {L}ie groups.
\newblock {\em Bull. Amer. Math. Soc. (N.S.)}, 6(1):91--94, 1982.

\bibitem[GT87]{GromovThurston}
M.~Gromov and W.~Thurston.
\newblock Pinching constants for hyperbolic manifolds.
\newblock {\em Invent. Math.}, 89(1):1--12, 1987.

\bibitem[HD62]{Hochschild_Mostow}
G.~Hochschild and G.~D. Mostow.
\newblock Cohomology of {L}ie groups.
\newblock {\em Illinois J. Math.}, 6:367--401, 1962.

\bibitem[Ioz02]{Iozzi_ern}
Al. Iozzi.
\newblock Bounded cohomology, boundary maps, and rigidity of representations into {${\rm Homeo}_+(\bold S^1)$} and {${\rm SU}(1,n)$}.
\newblock In {\em Rigidity in dynamics and geometry ({C}ambridge, 2000)}, pages 237--260. Springer, Berlin, 2002.

\bibitem[KK12]{Kim_Kim_Volume}
I.~Kim and S.~Kim.
\newblock Volume invariant and maximal representations of discrete subgroups of {L}ie groups, 2012.

\bibitem[KK13]{Kim_Kim_OnDeformation}
I.~Kim and S.~Kim.
\newblock On deformation spaces of nonuniform hyperbolic lattices, 2013.

\bibitem[KM08]{Koziarz_Maubon}
V.~Koziarz and J.~Maubon.
\newblock Harmonic maps and representations of non-uniform lattices of {${\rm PU}(m,1)$}.
\newblock {\em Ann. Inst. Fourier (Grenoble)}, 58(2):507--558, 2008.
\bibitem[KM13]{Kolpakov_Martelli}
A.~Kolpakov and B.~Martelli.
\newblock Hyperbolic four-manifolds with one cusp.
\newblock {\em Geom. Funct. Anal.}, 23(6):1903--1933, 2013.

\bibitem[LR00]{Long_Reid_00}
D.~D. Long and A.~W. Reid.
\newblock On the geometric boundaries of hyperbolic {$4$}-manifolds.
\newblock {\em Geom. Topol.}, 4:171--178, 2000.

\bibitem[Mil76]{Millson}
J.~J. Millson.
\newblock On the first {B}etti number of a constant negatively curved manifold.
\newblock {\em Ann. of Math. (2)}, 104(2):235--247, 1976.

\bibitem[MS74]{Milnor_Stasheff}
J.~W. Milnor and J.~D. Stasheff.
\newblock {\em Characteristic classes}.
\newblock Princeton University Press, Princeton, N. J.; University of Tokyo Press, Tokyo, 1974.
\newblock Annals of Mathematics Studies, No. 76.

\bibitem[Mun66]{Munkres}
J.~R. Munkres.
\newblock {\em Elementary differential topology}, volume 1961 of {\em Lectures given at Massachusetts Institute of Technology, Fall}.
\newblock Princeton University Press, Princeton, N.J., 1966.

\bibitem[NZ85]{Neumann_Zagier}
W.~D. Neumann and D.~Zagier.
\newblock Volumes of hyperbolic three-manifolds.
\newblock {\em Topology}, 24(3):307--332, 1985.

\bibitem[Rat94]{Ratcliffe}
J.~G. Ratcliffe.
\newblock {\em Foundations of hyperbolic manifolds}, volume 149 of {\em Graduate Texts in Mathematics}.
\newblock Springer-Verlag, New York, 1994.

\bibitem[Rez96]{Reznikov}
A.~Reznikov.
\newblock Rationality of secondary classes.
\newblock {\em J. Differential Geom.}, 43(3):674--692, 1996.

\bibitem[Spi79]{Spivak}
M.~Spivak.
\newblock {\em A comprehensive introduction to differential geometry. {V}ol. {V}}.
\newblock Publish or Perish, Inc., Wilmington, Del., second edition, 1979.

\bibitem[Thu78]{Thurston_notes}
W.~Thurston.
\newblock {\em Geometry and topology of 3-manifolds}.
\newblock Notes from Princeton University, Princeton, NJ, 1978.

\bibitem[Wig73]{Wigner}
D.~Wigner.
\newblock Algebraic cohomology of topological groups.
\newblock {\em Trans. Amer. Math. Soc.}, 178:83--93, 1973.

\end{thebibliography}

\end{document}
\begin{document}

\title{Class Group Relations in a Function Field Analogue of ${\mathbb Q}(\zeta_p, \sqrt[p]{n})$}
\maketitle

\section{Introduction}
Let $p$ be an odd prime and $k={\mathbb F}_p(T)$. Let $\lambda$ be a non-zero root of $x^p+Tx=0$, and $\gamma$ be a root of $x^p+Tx=P(T)$, where $P(T) \in k$ but $P(T) \neq Q(T)^p + TQ(T)$ for any $Q(T) \in k$. We have the following lattice of fields\footnote{The edge labels indicate the Galois groups of the corresponding extensions, which will be described in Section \ref{proof}.}:
{\centering
\begin{tikzpicture}
\matrix (m) [matrix of math nodes,row sep=3em,column sep=0em,minimum width=2em,ampersand replacement=\&]
{
 \& L = k(\lambda, \gamma) \& \\
 F = k(\gamma) \& \& K = k(\lambda) \\
 \& k = {\mathbb F}_p(T) \& \\
};
\path[-]
(m-2-1) edge node [pos=0.4, above] {$\Delta$ \, \,} (m-1-2)
(m-2-3) edge node [pos=0.4, above] {\, \, $G$} (m-1-2)
(m-3-2) edge node {} (m-2-1)
(m-3-2) edge node [right] {$\Omega$} (m-1-2)
(m-3-2) edge node [pos=0.6, below] {\, \, \, $\cong \Delta$} (m-2-3)
;
\end{tikzpicture}
}
Our main objective is to prove the following theorems about $Cl_L^0$, the group of degree-0 divisor classes\footnote{We take the \emph{divisor group} of a function field to mean the free abelian group indexed by its primes. Its quotient by the principal divisors (those which represent an element of the field) is the \emph{divisor class group}. The subgroup of elements with total exponent 0 is then the \emph{degree-0 divisor class group}.} of $L$:
\begin{theorem}\label{main}
$Cl_L^0$ is isomorphic to a $(p-1)$-st power of a finite abelian group.
\end{theorem}
\begin{theorem}\label{main2}
The class numbers of $L$ and $F$ are related via $h_L=h_F^{p-1}$ (and in fact this holds when $p$ is a prime power).
\end{theorem}
When the $\ell$-rank of $Cl_F^0$ is 1 for every prime $\ell$ dividing $h_F$ (e.g. when $h_F$ is square-free), these combine to say that $Cl_L^0=(Cl_F^0)^{p-1}$. However, this is not known in general.

In Section \ref{pre}, we motivate these theorems and provide some background on the fields being considered, and the remaining sections are devoted to the proofs.

\section{Preliminaries}\label{pre}
\subsection{Number field antecedents}
Theorem \ref{main} is an analogue of a recent result of Schoof in the number field setting. In \cite{schoof2020ideal}, he proves the following:
\begin{theorem}\label{schoof}
Let $p>2$ be a regular prime and $n \in {\mathbb Z}$ not a $p$-th power. Suppose that all prime divisors $l \neq p$ of $n$ are primitive roots mod $p$. Then the ideal class group $Cl_L$ of $L = {\mathbb Q}(\zeta_p, \sqrt[p]{n})$ and the kernel of the norm map $N_{L/{\mathbb Q}(\zeta_p)}$ fit into the exact sequences
\[ 0 \to V \to \ker(N_{L/{\mathbb Q}(\zeta_p)}) \to A^{p-1} \to 0 \text{ and} \]
\[ 0 \to \ker(N_{L/{\mathbb Q}(\zeta_p)}) \to Cl_L \to Cl_{{\mathbb Q}(\zeta_p)} \to 0, \]
where $A$ is a finite abelian group and $V$ is an ${\mathbb F}_p$-vector space of dimension at most $\left(\frac{p-3}{2}\right)^2$. In particular, if $\# Cl_{{\mathbb Q}(\zeta_p)} = 1$, then $Cl_L / V$ is a $(p-1)$-st power of a finite abelian group.
\end{theorem}
Theorem \ref{main2} is inspired by one proved by Honda \cite{honda1971pure}:
\begin{theorem}\label{honda}
Let $F={\mathbb Q}(\sqrt[3]{n})$, and $L={\mathbb Q}(\sqrt[3]{n}, \zeta_3)$ be its normal closure. Then $h_L = h_F^2$ or $h_L = \frac{1}{3} h_F^2$.
\end{theorem}

\subsection{Function field analogue of ${\mathbb Q}(\zeta_p)$}
We return now to considering extensions of $k={\mathbb F}_p(T)$.
Let $\Lambda$ denote the roots (in $\bar{k}$, an algebraic closure) of $x^p+Tx=0$. Fixing a nonzero root $\lambda$, we see that $\Lambda=\{m\lambda \, | \, m \in {\mathbb F}_p \} \cong {\mathbb F}_p^+$, and that $k(\Lambda)=k(\lambda)$ is a degree $p-1$ cyclic extension of $k$. These definitions should be reminiscent of those for the $p$-th roots of unity $\mu_p = \{ \zeta_p^i \, | \, i \in {\mathbb F}_p \}$, and in fact they are part of a rich theory of `cyclotomic' function fields first developed by Hayes \cite{hayes1974explicit} based on work by Carlitz \cite{carlitz1938class}. For a more thorough discussion of the connection, we refer to \cite{goss1983arithmetic}. We note two properties which will be important to the proofs of Theorems \ref{main} and \ref{main2}, and which contribute to their comparative simplicity vis-\`a-vis Theorems \ref{schoof} and \ref{honda}. The first is that $k(\Lambda) = k(\lambda)$ has genus 0 (since $T = -\lambda^{p-1}$), and therefore its degree-0 divisor class group is trivial. The second is that there are exactly two primes which ramify in $k(\lambda) / k$, both of which are totally ramified: the prime $(T) = (\lambda)^{p-1}$, and the distinguished infinite prime $(1/T) = (1/\lambda)^{p-1}$. This is fairly easy to see in our case from the discriminant, but also applies to a more general class of such extensions \cite{hayes1974explicit}. \subsection{Function field analogue of ${\mathbb Q}(\sqrt[p]{n})$} Now let $\gamma$ denote a root of $x^p+Tx=P(T)$, where $P(T) \in k$ but $P(T) \neq Q(T)^p + TQ(T)$ for any $Q(T) \in k$. Then $\gamma + \lambda$, $\gamma + 2\lambda$, $\dots$, $\gamma + (p-1)\lambda$ are the other roots, and the fields $k(\gamma + i\lambda)$ are conjugate degree-$p$ extensions of $k$ with Galois closure $k(\lambda, \gamma)$. Continuing the analogy of the previous section, this construction parallels that of ${\mathbb Q}(\zeta_p^i\sqrt[p]{n})$ and its Galois closure, ${\mathbb Q}(\zeta_p^i, \sqrt[p]{n})$. By discriminant considerations, the only primes that possibly ramify in \linebreak $k(\lambda, \gamma) / k(\lambda)$ are those lying above $(T)$ and $(1/T)$. Therefore every prime of $k$ which ramifies in $k(\lambda, \gamma)/k$ is totally ramified in $k(\lambda)/k$. \section{Proof of Theorem \ref{main}}\label{proof} As in the introduction, we set $k={\mathbb F}_p(T)$, $K=k(\lambda)$, $F=k(\gamma)$, and $L=k(\lambda, \gamma)$. Define $\Omega = \text{Gal}(L/k)$, $G = \text{Gal}(L/K)$, and $\Delta = \text{Gal}(L/F) \cong \text{Gal}(K/k)$. These groups have the presentations \begin{itemize} \item $G = \langle \tau \, | \, \tau^p=1 \rangle \cong {\mathbb F}_p^+$, \item $\Delta = \langle \sigma \, | \, \sigma^{p-1}=1 \rangle \cong {\mathbb F}_p^\times$, \item $\Omega = \langle \sigma, \tau \, | \, \sigma^{p-1}=1, \tau^p=1, \sigma \tau \sigma^{-1} = \tau^{\omega(\sigma)} \rangle$, \end{itemize} where $\omega: \Delta \to {\mathbb F}_p^\times$ is the cyclotomic character defined by $\sigma(\lambda) = \omega(\sigma) \lambda$. Refer to the field diagram in the introduction for a depiction of these relationships. Naturally, there is a Galois action of $\Omega$ on $Cl_L^0$. The norm element $N_G = \sum_G \tau$ of $G$ gives a map $Cl_L^0 \to Cl_L^0$ that factors through $Cl_K^0$, which is trivial. 
Thus $Cl_L^0$ is a module over the group ring ${\mathbb Z}[\Omega]/(N_G)$, or alternately a module over ${\mathbb Z}[G]/(N_G)$ with a twisted action of $\Delta$ (by which we mean that $\Delta$ acts on $Cl_L^0$ in a way that is consistent with the action of $\Delta$ on ${\mathbb Z}[G]/(N_G)$). Now, ${\mathbb Z}[G]/(N_G) \cong {\mathbb Z}[\zeta_p]$ as a $\Delta$-module (because $N_G$ is the $p$-th cyclotomic polynomial evaluated at $\tau$), so we may freely apply standard facts about $\zeta_p$ to $\tau$.\footnote{We opt to keep the notation in terms of $\tau$ rather than $\zeta_p$, to maintain coherence with the function field setting.} We are now ready to develop the proof of Theorem \ref{main}. We proceed by separately considering the $p$ part and the non-$p$ part of $Cl_L^0$. \subsection{The non-$p$ part of $Cl_L^0$} In this section, let $M$ denote the non-$p$ part of the degree-0 divisor class group of $L$. The following proposition describes the $\Delta$-module structure of $M$. \begin{proposition}\label{nonp} The map \[ \varphi: M^\Delta \otimes_{\mathbb Z} {\mathbb Z}[G]/(N_G) \to M \] given by $\sum_i m_i \otimes [\tau^i] \mapsto \sum_i \tau^i m_i$ is an isomorphism of $\Delta$-modules. \end{proposition} \begin{proof} Suppose first that $\sum_i \tau^i m_i = 0$ (and note that since $N_G$ acts trivially, this sum can be assumed to be over $1 \leq i \leq p-1$). Then $\sum_i \tau^{i\omega(\sigma)} m_i = 0$ for all $\sigma$, and thus for $1 \leq j \leq p-1$, \begin{align*} 0 &= \sum_{\sigma \in \Delta} \tau^{-j\omega(\sigma)} (1-\tau^{j\omega(\sigma)}) \sum_i \tau^{i\omega(\sigma)} m_i \\ &= \sum_i \sum_{\sigma \in \Delta} (1-\tau^{j\omega(\sigma)}) \tau^{(i-j)\omega(\sigma)} m_i. \end{align*} $\sum_\sigma (1-\tau^{j\omega(\sigma)}) \tau^{(i-j)\omega(\sigma)}$ acts as $p-1 + (1 - N_G) = p$ for $i=j$, and as 0 for $i \neq j$, so this says $pm_j = 0$, and thus $m_j=0$, for all $j$ (since $M$ has order prime to $p$). Therefore $\varphi$ is injective. Now suppose $m \in M$. Then for any $i$, $N_\Delta (\tau^i m) = \sum_\sigma \tau^{i\omega(\sigma)} \sigma(m) \in M^\Delta$, and accordingly, \begin{align*} &\sum_{i=1}^{p-1} \tau^{-i\omega(\sigma')}(1-\tau^{i\omega(\sigma')}) \sum_\sigma \tau^{i\omega(\sigma)} \sigma(m) \\ = &\sum_\sigma \sum_{i=1}^{p-1} (1-\tau^{i\omega(\sigma')}) \tau^{i(\omega(\sigma)-\omega(\sigma'))} \sigma(m) \in im(\varphi) \text{ for all } \sigma' \in \Delta. \end{align*} $\sum_i (1-\tau^{i\omega(\sigma')}) \tau^{i(\omega(\sigma)-\omega(\sigma'))}$ acts as $p$ for $\sigma' = \sigma$ and as 0 for $\sigma' \neq \sigma$, so this says that each $p\sigma'(m)$, and in particular $pm$, is in $im(\varphi)$. Therefore $\varphi$ is surjective (again using that $M$ has order prime to $p$). \end{proof} Ignoring the module structure gives $M \cong (M^\Delta)^{p-1}$ as abelian groups, which settles the non-$p$ part of Theorem \ref{main}. \subsection{The $p$ part of $Cl_L^0$} From this section forward, $M$ will denote the $p$ part of the degree-0 divisor class group of $L$. Having only $p$-power torsion allows us to strengthen the ${\mathbb Z}[G]/(N_G)$-module structure previously described to a ${\mathbb Z}_p[G]/(N_G)$-module structure, still with the twisted $\Delta$-action as before. $A = {\mathbb Z}_p[G]/(N_G)$ is a discrete valuation ring, and its maximal ideal is generated by $\tau-1$ (just as $\zeta_p-1$ generates the maximal ideal of ${\mathbb Z}_p[\zeta_p]$). Furthermore, $(\tau-1)^{p-1} = (p)$ as ideals of $A$, which will be key to this section's results. 
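The identity $(\tau-1)^{p-1}=(p)$ mirrors the classical computation in ${\mathbb Z}[\zeta_p]$: one has $\prod_{i=1}^{p-1}(1-\zeta_p^i)=\Phi_p(1)=p$, while each $(1-\zeta_p^i)/(1-\zeta_p)=1+\zeta_p+\cdots+\zeta_p^{i-1}$ is a unit, so $(1-\zeta_p)^{p-1}$ and $p$ generate the same ideal. The following short sketch is purely illustrative and not part of the argument; it assumes Python~3 with the sympy library and checks the product identity for $p=5$ by reducing modulo the cyclotomic polynomial, i.e.\ by evaluating at $x=\zeta_p$.
\begin{verbatim}
# Illustrative sanity check (not part of the proof): verify that
# prod_{i=1}^{p-1} (1 - zeta_p^i) = Phi_p(1) = p for p = 5, by reducing
# the product modulo the p-th cyclotomic polynomial Phi_p(x).
from sympy import symbols, expand, rem, cyclotomic_poly

p = 5
x = symbols('x')
Phi = cyclotomic_poly(p, x)      # minimal polynomial of zeta_p over Q

product = 1
for i in range(1, p):            # i = 1, ..., p-1
    product *= 1 - x**i

# The remainder has degree < p-1 and equals the product at every
# primitive p-th root of unity, hence it is the constant p.
print(rem(expand(product), Phi, x))   # prints 5
\end{verbatim}
Combined with the unit factors noted above, this is exactly the ideal identity $(\tau-1)^{p-1}=(p)$ in $A \cong {\mathbb Z}_p[\zeta_p]$.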
As before, $\Delta$ acts on $\tau$, and thus on $\tau-1$, by the cyclotomic character $\omega: \Delta \to {\mathbb F}_p^\times$. We have a filtration of ideals
\[ A \supset (\tau-1)A \supset (\tau-1)^2A \supset \dots \]
with successive quotients ${\mathbb F}_p$, ${\mathbb F}_p(\omega)$, ${\mathbb F}_p(\omega^2)$, $\dots$, where $X(\omega^i)$ denotes that the default action of $\Delta$ on the module $X$ is twisted by the character $\omega^i$. We now prove two results which characterize the structure of $A$-modules with particularly `nice' $\Delta$-action:
\begin{lemma} \emph{\cite[Prop.~3.1]{schoof2020ideal}}
Let $M'$ be a finite $A$-module with twisted $\Delta$-action. Then $\Delta$ acts trivially on $M'/(\tau-1)M'$ if and only if there exist $n_1, n_2, \dots, n_t \geq 1$ such that
\[ M' \cong \bigoplus_{i=1}^t A/(\tau-1)^{n_i}A. \]
\end{lemma}
\begin{proof}
Suppose first that the isomorphism holds. Then $M'/(\tau-1)M'$ is a direct sum of $A/(\tau-1)A \cong {\mathbb F}_p$ terms with trivial $\Delta$ action. Conversely, suppose that $\Delta$ acts trivially on $M'/(\tau-1)M'$. Then the map $M'^\Delta \to (M'/(\tau-1)M')^\Delta = M'/(\tau-1)M'$ is surjective (its cokernel is the first $\Delta$-cohomology group of $(\tau-1)M'$, which is trivial because $\Delta$ and $M'$ have coprime orders). This says that there are $\Delta$-invariant elements $v_1, \dots, v_t$ which generate $M'$ over $A$, i.e. that there is a surjective map $A^t \to M'$ taking 1 in the $i$-th coordinate to $v_i$. By the finiteness of $M'$, this descends to a surjective map
\[ \varphi: \bigoplus_{i=1}^t A/(\tau-1)^{n_i}A \to M'. \]
If $\varphi$ is not injective, there is a nonzero element $x$ in the kernel, and we may assume $x = (x_1, \dots, x_t) \in \bigoplus_i (\tau-1)^{n_i-1}A/(\tau-1)^{n_i}A$. Now, $\Delta$ acts on $x$ by $\omega^m$ for some $m$, but by $\omega^{n_i-1}$ on each of these summands, so $x_i$ is zero unless $n_i-1=m+k_i(p-1)$ for some $k_i$. Reordering if necessary, let the first $s$ coordinates of $x$ be exactly those which are nonzero, and choose $n_1$ to be minimal among $n_1, \dots, n_s$. For $i=1, \dots, s$, define $\mu_i$ such that $x_i = (\tau-1)^m p^{k_i} \mu_i$, and $m_i \in {\mathbb Z}$ such that $m_i \equiv \mu_i \mod (\tau-1)^N$, where $N$ is the maximum of the $n_i$. Notice that $\mu_i$, and thus $m_i$, is a unit in $A$. We are now able to construct a new map
\[ \varphi': A/(\tau-1)^{n_1-1}A \oplus \bigoplus_{i=2}^t A/(\tau-1)^{n_i}A \to M', \]
which takes the basis vector $e_1 = (1, 0, \dots, 0)$ to $\sum_{i=1}^s m_i p^{k_i-k_1} v_i$, and $e_i$ to $v_i$ for $i \neq 1$. This is well-defined because
\begin{align*}
\varphi'((\tau-1)^m p^{k_1} e_1) &= \sum_{i=1}^s (\tau-1)^m p^{k_i} m_i v_i \\
&= \sum_{i=1}^s (\tau-1)^m p^{k_i} \mu_i v_i = \sum_{i=1}^s x_i v_i = \varphi(x) = 0.
\end{align*}
Furthermore, $\varphi'$ is surjective because, by the surjectivity of $\varphi$, $m_1 v_1 \in im(\varphi')$, and $m_1$ is invertible. If $\varphi'$ is not injective, we repeat this procedure until we reach a map that is, at which point we will have found a direct sum of finite quotients of $A$ which is isomorphic to $M'$.
\end{proof}
\begin{proposition}\label{pstruct}
Let $M'$ be a finite $A$-module with twisted $\Delta$-action, such that $\Delta$ acts trivially on $M'/(\tau-1)M'$ and by $\omega^{-1}$ on $M'[\tau-1]$. Then there exists a finite abelian $p$-group $H$ such that
\[ M' \cong H \otimes_{{\mathbb Z}_p} A.
\] \end{proposition} \begin{proof} Suppose $M' \cong A/(\tau-1)^n A$ for some positive integer $n$. Then $M'[\tau-1]=(\tau-1)^{n-1}A/(\tau-1)^n A \cong {\mathbb F}_p(\omega^{n-1})$. By assumption, this requires $n=(p-1)m$ for some positive integer $m$, whereby $A/(\tau-1)^n A \cong A/p^m A \cong {\mathbb Z}/p^m{\mathbb Z} \otimes_{{\mathbb Z}_p} A$. By the previous lemma, a general $M'$ satisfying the condition on $M'/(\tau-1)M'$ is a direct sum of such ${\mathbb Z}/p^m{\mathbb Z} \otimes_{{\mathbb Z}_p} A$, proving the proposition. \end{proof} It remains to show that this proposition can be applied to (a twist of) $M$. We will achieve this by exploring the Galois cohomology of $Cl_L^0$ and related objects. \subsection{Galois cohomology of $Cl_L^0$} In this section, $\hat{H}^i(X)$ is the $i$-th Tate $G$-cohomology group of $X$. We fix the notations \begin{itemize} \item $P_L$, the principal $L$-divisors \item $D_L^0$, the $L$-divisors of degree 0 \item ${\mathbb I}_L^0$, the ideles of total valuation 0 \item $C_L^0 = {\mathbb I}_L^0 / L^\times$, the idele classes of total valuation 0 \item $U_L$, the product of the local unit groups $U_{\mathfrak L}$ of $L$. \end{itemize} We have the following commutative diagram of $\Omega$-modules in which the rows and columns are exact: \[\def1.5{1.5} \begin{array}{ccccccccc} & & 1 & & 1 & & 0 & & \\ & & \downarrow & & \downarrow & & \downarrow & & \\ 1 & \to & {\mathbb F}_p^\times & \to & L^\times & \to & P_L & \to & 0 \\ & & \downarrow & & \downarrow & & \downarrow & & \\ 1 & \to & U_L & \to & {\mathbb I}_L^0 & \to & D_L^0 & \to & 0 \\ & & \downarrow & & \downarrow & & \downarrow & & \\ 1 & \to & U_L / {\mathbb F}_p^\times & \to & C_L^0 & \to & Cl_L^0 & \to & 0 \\ & & \downarrow & & \downarrow & & \downarrow & & \\ & & 1 & & 1 & & 0 & & \end{array} \] We will make use of various long exact sequences induced in cohomology by the above diagram in order to study $\hat{H}^i(Cl_L^0)$ for $i=-1, 0$. We remark that a $G$-cohomology group of an $\Omega$-module is an ${\mathbb F}_p[\Delta]$-module (because it is killed by $p$ and is $G$-invariant), and any map induced in cohomology by any of the maps in the diagram is $\Delta$-equivariant. Furthermore, since $G$ is a cyclic group, we have $\Delta$-isomorphisms $\hat{H}^i(X) \to \hat{H}^{i-2}(X)(\omega^{-1})$ for every $i \in {\mathbb Z}$ and $\Omega$-module $X$, given by cupping with a generator of $\hat{H}^{-2}({\mathbb Z}) \cong H_1({\mathbb Z}) \cong G \cong {\mathbb Z}/p{\mathbb Z}(\omega)$. \begin{lemma} $C_L^0$ and ${\mathbb F}_p^\times$ have trivial Tate $G$-cohomology. \end{lemma} \begin{proof} The degree map on the idele class group gives rise to the sequence \[ 0 \to C_L^0 \to C_L \to {\mathbb Z} \to 0. \] This gives rise to a long exact sequence which includes \[ H^0(C_L) \to H^0({\mathbb Z}) \to \hat{H}^1(C_L^0) \to \hat{H}^1(C_L), \] where the first two terms are standard (i.e. non-Tate) cohomology. We have that $H^0(C_L) = C_L^G = C_K$ \cite[p.~2]{artin1968class}. The leftmost map, then, is the degree map on $C_K$ \emph{as a subgroup of} $C_L$. An idele class of $C_K$ which has valuation 1 at an inert prime and valuation 0 elsewhere maintains this property when extended to $C_L$. Since $G$ is cyclic, there are (infinitely many) primes inert in $L/K$ by the Chebotarev density theorem, and thus the map $H^0(C_L) \to H^0({\mathbb Z})$ is surjective. We also have $\hat{H}^1(C_L) = 0$ \cite[p.~19]{artin1968class}, and so $\hat{H}^1(C_L^0) = 0$. 
Passing the first exact sequence to Tate cohomology and recognizing that \linebreak $\hat{H}^0(C_L) \cong {\mathbb Z}/p{\mathbb Z}$ \cite[p.~19]{artin1968class}, $\hat{H}^0({\mathbb Z}) \cong {\mathbb Z}/p{\mathbb Z}$, and $\hat{H}^{-1}({\mathbb Z}) \cong \hat{H}^1({\mathbb Z}) = 0$, we have that $\hat{H}^0(C_L^0) = 0$ as well. As for ${\mathbb F}_p^\times$, it is a finite module of order prime to the order of $G$, and so has trivial Tate $G$-cohomology. \end{proof} Applying this lemma to the long exact sequences induced by the leftmost column and bottom row of the diagram immediately gives: \begin{corollary}\label{iso} $\hat{H}^{-1}(Cl_L^0) \cong \hat{H}^0(U_L)$ and $\hat{H}^{0}(Cl_L^0) \cong \hat{H}^1(U_L)$. \end{corollary} \begin{lemma}\label{action} $\Delta$ acts trivially on $\hat{H}^1(U_L)$ and $\hat{H}^2(U_L)$. \end{lemma} \begin{proof} We write $\ell$ for a prime of $k$, ${\mathfrak l}$ for a prime of $K$ above $\ell$, and ${\mathfrak L}$ for a prime of $L$ above ${\mathfrak l}$. We have decomposition groups $\Omega_{\mathfrak L} = D({\mathfrak L}/\ell)$, $G_{\mathfrak L} = D({\mathfrak L}/{\mathfrak l})$, and $\Delta_{\mathfrak l} = D({\mathfrak l}/\ell)$. For each $i$, $\hat{H}^i(U_L)$ can be expressed as a product of local cohomology groups: \begin{align*} \hat{H}^i(U_L) &= \hat{H}^i(\prod_{{\mathfrak L} \text{ of } L} U_{\mathfrak L}) \\ &= \hat{H}^i(\prod_{{\mathfrak l} \text{ of } K} \bigoplus_{{\mathfrak L}|{\mathfrak l}} U_{\mathfrak L}) \\ &= \bigoplus_{\ell \text{ ram in } L} \bigoplus_{{\mathfrak l}|\ell} \hat{H}^i(\bigoplus_{{\mathfrak L}|{\mathfrak l}} U_{\mathfrak L}) \\ &= \bigoplus_{\ell \text{ ram in } L} \bigoplus_{{\mathfrak l}|\ell} \hat{H}^i(G_{\mathfrak L}, U_{\mathfrak L}), \end{align*} with the last equality by applying Shapiro's Lemma to $\bigoplus_{{\mathfrak L}|{\mathfrak l}} U_{\mathfrak L} = \text{Ind}_{G_{\mathfrak L}}^G U_{\mathfrak L}$. Furthermore, we saw in Section \ref{pre} that any prime that ramifies in $L$ ramifies totally in $K$, which means that $\Delta_{\mathfrak l} = \Delta$ and any action it has on $\hat{H}^1(U_L)$ is on a single summand $\hat{H}^i(G_{\mathfrak L}, U_{\mathfrak L})$. Thus for $i=1,2$, it is sufficient to show that $\Delta$ acts trivially on $\hat{H}^i(G_{\mathfrak L}, U_{\mathfrak L})$. We look first at $i=1$. Since $G$ and $\Delta$ have coprime orders, the inflation-restriction sequence \[ 0 \to \hat{H}^1(\Delta_{\mathfrak l}, U_{\mathfrak l}) \to \hat{H}^1(\Omega_{\mathfrak L}, U_{\mathfrak L}) \to \hat{H}^1(G_{\mathfrak L}, U_{\mathfrak L})^{\Delta_{\mathfrak l}} \to 0 \] is exact. Now, from local class field theory, $\hat{H}^1(G_{\mathfrak L}, U_{\mathfrak L})$ is cyclic of order equal to the ramification index $e_{{\mathfrak L}/{\mathfrak l}}$ \cite[p.~9]{artin1968class}, and likewise $\hat{H}^1(\Omega_{\mathfrak L}, U_{\mathfrak L}) \cong {\mathbb Z}/e_{{\mathfrak L}/\ell}{\mathbb Z}$ and $\hat{H}^1(\Delta_{\mathfrak l}, U_{\mathfrak l}) \cong {\mathbb Z}/e_{{\mathfrak l}/\ell}{\mathbb Z}$. This forces $\hat{H}^1(G_{\mathfrak L}, U_{\mathfrak L})^{\Delta_{\mathfrak l}} = \hat{H}^1(G_{\mathfrak L}, U_{\mathfrak L})$, and so the action of $\Delta=\Delta_{\mathfrak l}$ on $\hat{H}^1(U_L)$ is trivial. Next we take $i=2$. 
Again by the coprime orders of $G$ and $\Delta$, and also using that $\hat{H}^1(G_{\mathfrak L}, L_{\mathfrak L}) = 0$ by Hilbert's Theorem 90, the sequence
\[ 0 \to \hat{H}^2(\Delta_{\mathfrak l}, K_{\mathfrak l}) \to \hat{H}^2(\Omega_{\mathfrak L}, L_{\mathfrak L}) \to \hat{H}^2(G_{\mathfrak L}, L_{\mathfrak L})^{\Delta_{\mathfrak l}} \to 0 \]
is exact. But local class field theory gives us that these cohomology groups are dual to the decomposition groups that define them \cite[p.~9]{artin1968class}, and so by order considerations, we must have $\hat{H}^2(G_{\mathfrak L}, L_{\mathfrak L})^{\Delta_{\mathfrak l}} = \hat{H}^2(G_{\mathfrak L}, L_{\mathfrak L})$. Since the inclusion-induced map $\hat{H}^2(G_{\mathfrak L}, U_{\mathfrak L}) \to \hat{H}^2(G_{\mathfrak L}, L_{\mathfrak L})$ is injective (its kernel is $\hat{H}^1(G_{\mathfrak L}, {\mathbb Z})$, which is trivial), we conclude that $\hat{H}^2(U_L)$ is $\Delta$-invariant as well.
\end{proof}
We are now ready to connect the cohomological theory back to $M$, the $p$-part of $Cl_L^0$.
\begin{corollary}
$\Delta$ acts trivially on $M[\tau-1]$ and via $\omega$ on $M/(\tau-1)M$.
\end{corollary}
\begin{proof}
By Corollary \ref{iso} and the fact that $N_G$ kills $M$, we have
\[ M[\tau-1] \cong \hat{H}^{0}(Cl_L^0) \cong \hat{H}^1(U_L) \text{ and} \]
\[ M/(\tau-1)M \cong \hat{H}^{-1}(Cl_L^0) \cong \hat{H}^0(U_L) \cong \hat{H}^2(U_L)(\omega), \]
which have the claimed actions by Lemma \ref{action}.
\end{proof}
Finally, we can prove the $p$ part of our result:
\begin{proposition}
$M$ is a $(p-1)$-st power of some finite abelian $p$-group.
\end{proposition}
\begin{proof}
We consider the twist $M' = M(\omega^{-1})$. The previous result says that $\Delta$ acts trivially on $M'/(\tau-1)M'$ and by $\omega^{-1}$ on $M'[\tau-1]$. Thus Proposition \ref{pstruct} may be applied to $M'$. As abelian groups, $M \cong M'$, so we are done.
\end{proof}
Combining this with Proposition \ref{nonp}, we have that the $p$ part and non-$p$ part of $Cl_L^0$ are each the $(p-1)$-st power of an abelian group, and so the proof of Theorem \ref{main} is complete.

\section{Proof of Theorem \ref{main2}}\label{proof2}
\subsection{Character and zeta function relations}
Henceforth let $q$ be a power of an odd prime. The Galois groups $G$, $\Delta$, and $\Omega$ may still be defined as in the beginning of Section \ref{proof}, with the caveat that $G$ is no longer a cyclic group when $q$ is not prime. Instead, we have an isomorphism $\nu: G \to {\mathbb F}_q$, where $\nu(\tau)$ is the element of ${\mathbb F}_q$ such that $\tau(\gamma)=\gamma + \nu(\tau) \lambda$. Now, $\Omega$ can be conveniently realized as the matrix group
$$ \left\{ \left. \begin{pmatrix} a & x \\ 0 & 1 \end{pmatrix} \hspace{.2cm} \right| \hspace{.2cm} a,x \in {\mathbb F}_q, a \neq 0 \right\}, $$
with $\sigma \leftrightarrow \begin{pmatrix} \omega(\sigma) & 0 \\ 0 & 1 \end{pmatrix}$ for $\sigma \in \Delta$ and $\tau \leftrightarrow \begin{pmatrix} 1 & \nu(\tau) \\ 0 & 1 \end{pmatrix}$ for $\tau \in G$. Thus the elements with $a=1$ are identified with the elements of $G$, and those with $x=0$ with the elements of $\Delta$. We are interested in four characters of $\Omega$ which we will show fit an arithmetic relation.
These are: \begin{itemize} \item $\chi_L$, for the regular representation (permutation representation on $\Omega$) \item $\chi_K$, for the permutation representation on $\Omega / G$ \item $\chi_F$, for the permutation representation on $\Omega / \Delta$ \item $\chi_k$, for the trivial representation (permutation representation on $\Omega / \Omega)$. \end{itemize} \begin{proposition} $$ \chi_L - \chi_k = \chi_K - \chi_k + (q-1)(\chi_F - \chi_k). $$ \end{proposition} \begin{proof} We know of course that $\chi_k$ takes the value 1 on every element of $\Omega$, and that $\chi_L$ takes $|\Omega|=q(q-1)$ on the identity and 0 elsewhere. Now, $\begin{pmatrix} a & x \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & x \\ 0 & 1 \end{pmatrix} \begin{pmatrix} a & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} a & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & a^{-1}x \\ 0 & 1 \end{pmatrix}$. This says that each coset of $\Omega / \Delta$ can be represented by a unique element of $G$, and each coset of $\Omega / G$ by a unique element of $\Delta$. We have $\begin{pmatrix} a & x \\ 0 & 1 \end{pmatrix} \begin{pmatrix} b & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} ab & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & (ab)^{-1}x \\ 0 & 1 \end{pmatrix}$, so an element of $\Omega$ fixes a coset of $\Omega / G$ if and only if $a=1$. This says that $\chi_K \begin{pmatrix} a & x \\ 0 & 1 \end{pmatrix} = |\Omega / G| = q-1$ for $a=1$, and 0 for $a \neq 1$. On the other hand, $\begin{pmatrix} a & x \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & y \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & ax+y \\ 0 & 1 \end{pmatrix} \begin{pmatrix} a & 0 \\ 0 & 1 \end{pmatrix}$, so an element of $\Omega$ fixes a coset of $\Omega / \Delta$ if and only if $x = y(1-a)$. This means that $\chi_F \begin{pmatrix} a & x \\ 0 & 1 \end{pmatrix} = |\Omega / \Delta| = q$ for $a=1, x=0$, 0 for $a=1, x \neq 0$, and 1 for $a \neq 1$. Using the values ascertained above, the relation holds for each element $\begin{pmatrix} a & x \\ 0 & 1 \end{pmatrix}$ of $\Omega$ as follows: \begin{itemize} \item For $a=1, x=0$: $q(q-1) - 1 = (q-1) - 1 + (q-1)(q-1)$. \item For $a=1, x \neq 0$: $0 - 1 = (q-1) - 1 + (q-1)(0 - 1)$. \item For $a \neq 1$: $0 - 1 = 0 - 1 + (q-1)(1 - 1)$. \end{itemize} \end{proof} This arithmetic relation between characters gives rise to a corresponding multiplicative relation between L-functions, and thus zeta functions \cite{serre1965zeta}: \begin{corollary}\label{zeta-rel} Let $\zeta_*$ denote the zeta function for the field $*$. Then $$ \frac{\zeta_L}{\zeta_k} = \frac{\zeta_K}{\zeta_k} \cdot \left( \frac{\zeta_F}{\zeta_k} \right)^{q-1}. $$ \end{corollary} \subsection{Residues of zeta functions}\label{residues} Schmidt \cite{schmidt1931analytische} gives the residue formula $$ \lim_{s \to 1} (s-1) \zeta(s) = \frac{q^{1-g}h}{(q-1)\log q} $$ and the functional equation $$ \zeta(1-s) = q^{(g-1)(2s-1)}\zeta(s) $$ for the zeta function of a function field in positive characteristic, where $g$ and $h$ denote the genus and class number, respectively\footnote{See Roquette \cite{roquette2001class} for an English reference summarizing these results, but note a typo in the numerator of the residue formula (reversing the sign of the exponent).}. These combine to give a formula for the residue of $\zeta(1-s)$ at 1: $$ \lim_{s \to 1} (s-1)\zeta(1-s) = \lim_{s \to 1} (s-1) q^{(g-1)(2s-1)} \zeta(s) = \frac{h}{(q-1)\log q}. 
$$
As $K$ and $k$ have trivial class group, applying this equation to the relation in Corollary \ref{zeta-rel} gives $h_L = h_F^{q-1}$: each ratio $\zeta_*/\zeta_k$ in the relation, evaluated at $1-s$, tends to $h_*/h_k$ as $s \to 1$ (multiply numerator and denominator by $s-1$ and use the residue formula above), so the relation becomes
$$ \frac{h_L}{h_k} = \frac{h_K}{h_k} \cdot \left( \frac{h_F}{h_k} \right)^{q-1}, $$
and $h_K = h_k = 1$. This completes the proof of Theorem \ref{main2}.

\iffalse
\subsection{Other consequences}
The residue formulas in Section \ref{residues} can also be arranged to derive an expression for the genus:
$$ \lim_{s \to 1} \frac{\zeta(1-s)}{\zeta(s)} = q^{g-1}, $$
which (keeping in mind that $K$ and $k$ have genus 0) may be combined with the equation from Corollary \ref{zeta-rel} to give
$$ g_L = (q-1)g_F. $$
The only primes that ramify in $L/k$ are $T$ and $1/T$, and these ramify completely in $K/k$, so $e_P = q-1$ for any prime $P$ of $F$ above $T$ or $1/T$. Now, applying the Riemann-Hurwitz formula to $L$ and $F$, we have
$$ 2-2g_L = (q-1)(2-2g_F) - \sum (e_P - 1), $$
where $e_P$ denotes the ramification index of the prime $P$ of $F$. Using the genus relation just derived, this simplifies to $\sum (e_P - 1) = 2q-4$. Therefore $T$ and $1/T$ ramify completely in $L/F$ (and thus in every extension in the original field diagram).
\fi

\end{document}
\begin{document}

\title{Okounkov bodies associated to pseudoeffective divisors II}

\author{Sung Rak Choi}
\address{Department of Mathematics, Yonsei University, Seoul, Korea}
\email{[email protected]}

\author{Jinhyung Park}
\address{School of Mathematics, Korea Institute for Advanced Study, Seoul, Korea}
\email{[email protected]}

\author{Joonyeong Won}
\address{Center for Geometry and Physics, Institute for Basic Science, Pohang, Korea}
\email{[email protected]}

\date{\today}
\keywords{Okounkov body, pseudoeffective divisor, asymptotic invariant, Zariski decomposition}
\thanks{S. Choi and J. Park were partially supported by NRF-2016R1C1B2011446. J. Won was partially supported by IBS-R003-D1, Institute for Basic Science in Korea.}

\begin{abstract}
We first prove some basic properties of Okounkov bodies, and give a characterization of Nakayama and positive volume subvarieties of a pseudoeffective divisor in terms of Okounkov bodies. Next, we show that each of the valuative and limiting Okounkov bodies of a pseudoeffective divisor which admits the birational good Zariski decomposition is a rational polytope with respect to some admissible flag. This is an extension of the result of Anderson-K\"{u}ronya-Lozovanu about the rational polyhedrality of Okounkov bodies of big divisors with finitely generated section rings.
\end{abstract}

\maketitle

\section{Introduction}

This paper is a continuation of our investigation of Okounkov bodies associated to pseudoeffective divisors (\cite{CHPW1}, \cite{CHPW2}, \cite{CPW}). Let $X$ be a smooth projective variety of dimension $n$, and $D$ be a divisor on $X$. Fix an admissible flag $Y_\bullet$ on $X$, that is, a sequence of irreducible subvarieties
$$ Y_\bullet: X=Y_0\supseteq Y_1\supseteq \cdots\supseteq Y_{n-1}\supseteq Y_n=\{ x\} $$
where each $Y_i$ is of codimension $i$ in $X$ and is smooth at $x$. The Okounkov body $\Delta_{Y_\bullet}(D)$ of a big divisor $D$ with respect to $Y_\bullet$ is a convex body in the Euclidean space $\R^n$ which carries rich information about $D$. Okounkov first defined the Okounkov body associated to an ample divisor in \cite{O1}, \cite{O2}. After this pioneering work, Lazarsfeld-Musta\c{t}\u{a} \cite{lm-nobody} and Kaveh-Khovanskii \cite{KK} independently generalized Okounkov's work to big divisors (see \cite{B2} for a survey). We then further extended the study of Okounkov bodies to pseudoeffective divisors in \cite{CHPW1}. More precisely, we introduced and studied two convex bodies, called the \emph{valuative Okounkov body} $\Delta^{\val}_{Y_\bullet}(D)$ and the \emph{limiting Okounkov body} $\Delta^{\lim}_{Y_\bullet}(D)$, associated to a pseudoeffective divisor $D$. See Sections \ref{okbdsubsec} and \ref{nakpvssec} for definitions and basics on Okounkov bodies.

In this paper, we first prove supplementary results to \cite{CHPW1}. The main theorems of \cite{CHPW1} and the subsequent results in this paper depend on the following property of the Okounkov body. This theorem is a generalization of \cite[Theorem 4.26]{lm-nobody} and \cite[Theorem 3.4]{Jow}.

\begin{theoremalpha}[=Theorem \ref{newtheorem}]\label{newthrm}
Let $X$ be a smooth projective variety of dimension $n$, and $D$ be a big divisor on $X$. Fix an admissible flag $Y_\bullet$ such that $Y_{n-k} \not\subseteq \mathbf B_+(D)$. Then we have
$$ \Delta_{Y_{n- k\bullet}}(D) = \Delta_{Y_\bullet}(D) \cap (\{ 0\}^{n-k} \times \R_{\geq 0}^{k}).
$$
\end{theoremalpha}

In \cite{CHPW1}, we proved that the Okounkov bodies $\Delta^{\val}_{Y_\bullet}(D)$ and $\Delta^{\lim}_{Y_\bullet}(D)$ encode nice properties of the divisor $D$ if the given admissible flag $Y_\bullet$ contains a Nakayama subvariety of $D$ or a positive volume subvariety of $D$ (see Theorem \ref{chpwmain}). We show the following characterization of those special subvarieties in terms of Okounkov bodies.

\begin{theoremalpha}[=Theorem \ref{geomcrit}]\label{critintro}
Let $X$ be a smooth projective variety of dimension $n$, and $D$ be an $\R$-divisor on $X$. Fix an admissible flag $Y_\bullet$ such that $Y_n$ is a general point in $X$. Then we have the following:
\begin{enumerate}[leftmargin=0cm,itemindent=.6cm]
\item[$(1)$] If $D$ is effective, then $Y_\bullet$ contains a Nakayama subvariety of $D$ if and only if $\Delta^{\val}_{Y_\bullet}(D) \subseteq \{0 \}^{n-\kappa(D)} \times \R^{\kappa(D)}$.
\item[$(2)$] If $D$ is pseudoeffective, then $Y_\bullet$ contains a positive volume subvariety of $D$ if and only if $\Delta^{\lim}_{Y_\bullet}(D) \subseteq \{0 \}^{n-\kappa_\nu(D)} \times \R^{\kappa_\nu(D)}$ and $\dim \Delta^{\lim}_{Y_\bullet}(D)=\kappa_\nu(D)$.
\end{enumerate}
\end{theoremalpha}

One of the most important properties one can hope for a convex set in $\R^n$ to satisfy is rational polyhedrality. However, the geometric structure of an Okounkov body is rather wild. It can be non-polyhedral even if the variety $X$ is a Mori dream space and the divisor $D$ is ample (see \cite[Subsection 6.3]{lm-nobody}, \cite[Section 3]{KLM}). However, Anderson-K\"{u}ronya-Lozovanu proved that if a big divisor $D$ has a finitely generated section ring $R(X, D):=\bigoplus_{m \geq 0} H^0(X, mD)$, then there exists an admissible flag $Y_\bullet$ such that the Okounkov body $\Delta_{Y_\bullet}(D)$ is a rational polytope (\cite[Theorem 1]{AKL}). We also refer to \cite[Theorems 1.1 and 4.17]{CPW} and \cite[Corollary 4.5]{S} for more related results.

Our next aim is to generalize \cite[Theorem 1]{AKL} to the valuative and limiting Okounkov bodies. We recall that when a divisor $D$ is big, it has a finitely generated section ring if and only if it admits the birational good Zariski decomposition (see \cite[III.1.17.Remark]{nakayama}). However, for a pseudoeffective divisor $D$, such an equivalence no longer holds in general; $D$ admits the birational good Zariski decomposition if and only if $D$ has a finitely generated section ring and is abundant (see Proposition \ref{zdabfg}). For the rational polyhedrality of the Okounkov bodies of pseudoeffective divisors, we assume the existence of a good Zariski decomposition on some birational model instead of the finite generation condition. See Subsection \ref{zdsubsec} for our definition of (good) Zariski decomposition.

\begin{theoremalpha}[=Corollary \ref{ratpolval} and Theorem \ref{ratsimlim}]\label{main1}
Let $X$ be a smooth projective variety, and $D$ be a pseudoeffective $\Q$-divisor on $X$ which admits the good birational Zariski decomposition. Then each of the Okounkov bodies $\Delta^{\val}_{Y_\bullet}(D)$ and $\Delta^{\lim}_{Y_\bullet}(D)$ is a rational polytope with respect to some admissible flag $Y_\bullet$.
\end{theoremalpha}

We expect that the rational polyhedrality of Okounkov bodies holds in more general situations. There are examples of divisors which do not admit birational good Zariski decompositions, but whose associated Okounkov bodies are rational polyhedral (see Remark \ref{ratrem}).
To prove Theorem \ref{main1} for the case of valuative Okounkov bodies, we use the same idea as \cite[Proposition 4]{AKL}. Using only the finite generation of the section ring, we show the rational polyhedrality of the valuative Okounkov body with respect to an admissible flag obtained by intersecting general members of the linear series (see Theorem \ref{ratsimval}). For the case of limiting Okounkov bodies, under the given assumption, we prove the statement by reducing to the rationality problem of the limiting Okounkov body on a sufficiently high model $f \colon Y\to X$ where the good Zariski decomposition of $f^*D$ exists (see Theorem \ref{ratsimlim}).

The organization of the paper is as follows. In Section \ref{prelimsec}, we collect basic facts on various notions that are used in the proofs. Next, in Section \ref{okbdsubsec}, we recall basic properties of Okounkov bodies, and prove Theorem \ref{newthrm}. Then we study some properties of Nakayama subvarieties and positive volume subvarieties to show Theorem \ref{critintro} in Section \ref{nakpvssec}. Section \ref{ratsec} is devoted to showing Theorem \ref{main1}.

\subsection*{Acknowledgment}
We would like to thank the referee for providing numerous helpful suggestions and comments and for pointing out several gaps in earlier versions of this manuscript.

\section{Preliminaries}\label{prelimsec}

In this section, we collect relevant facts which will be used later. Throughout the paper, $X$ is a smooth projective variety of dimension $n$, and we always work over an algebraically closed field of characteristic zero.

\subsection{Asymptotic invariants}
We review basic asymptotic invariants of divisors, namely, the asymptotic base loci and volume functions. The \emph{stable base locus} of an $\R$-divisor $D$ is defined as $\SB(D):= \bigcap_{D \sim_{\R} D' \geq 0} \Supp(D')$. The \emph{augmented base locus} of an $\R$-divisor $D$ is defined as $\mathbf B_+(D):=\bigcap_A\text{SB}(D-A)$ where the intersection is taken over all ample divisors $A$. The \emph{restricted base locus} of an $\R$-divisor $D$ is defined as $\mathbf B_-(D):=\bigcup_{A}\SB(D+A)$ where the union is taken over all ample divisors $A$. Note that $\mathbf B_+(D)$ and $\mathbf B_-(D)$ depend only on the numerical class of $D$. For details, we refer to \cite{elmnp-asymptotic inv of base} and \cite{lehmann-red}.

Now, let $V$ be an irreducible subvariety of $X$ of dimension $v$. The \emph{restricted volume} of a $\Z$-divisor $D$ along $V$ is defined as $ \vol_{X|V}(D):=\limsup_{m \to \infty} \frac{h^0(X|V,mD)}{m^v/v!} $ where $h^0(X|V,mD)$ is the dimension of the image of the natural restriction map $\varphi \colon H^0(X,\mathcal O_X(mD))\to H^0(V,\mathcal O_V(mD))$. The restricted volume $\vol_{X|V}(D)$ depends only on the numerical class of $D$, and one can uniquely extend it to a continuous function
$$ \vol_{X|V} \colon \text{Big}^V (X) \to \R $$
where $\text{Big}^V(X)$ is the set of all $\R$-divisor classes $\xi$ such that $V$ is not properly contained in any irreducible component of $\mathbf B_+(\xi)$. When $V=X$, we simply let $\vol_X(D):=\vol_{X|X}(D)$, and we call it the \emph{volume} of an $\R$-divisor $D$. For more details on volumes and restricted volumes, see \cite{pos} and \cite{elmnp-restricted vol and base loci}. Now assume that $V\not\subseteq\mathbf B_-(D)$ for an $\R$-divisor $D$. The \emph{augmented restricted volume} of $D$ along $V$ is defined as $\vol_{X|V}^+(D):=\lim_{\eps\to 0+} \vol_{X|V}(D+\eps A)$ where $A$ is an ample divisor on $X$.
The definition is independent of the choice of $A$. Note that $\vol_{X|V}^+(D)=\vol_{X|V}(D)$ for $D \in \text{Big}^V (X)$. This also extends uniquely to a continuous function $$ \vol_{X|V}^+ \colon \overline{\text{Eff}}^V(X) \to \R $$ where $\overline{\text{Eff}}^V(X) := \text{Big}^V(X) \cup \{ \xi \in \overline{\text{Eff}}(X) \setminus \text{Big}(X) \mid V \not\subseteq \mathbf B_-(\xi) \}$. For $D\in \overline{\text{Eff}}^V(X)$, we have $\vol_{X|V}(D) \leq \vol_{X|V}^+(D) \leq \vol_{V}(D|_V)$, and both inequalities can be strict in general. See \cite{CHPW1} for more details on augmented restricted volumes. \subsection{Iitaka dimension} Let $D$ be an $\R$-divisor on $X$. Let $\mathbb N(D)=\{m\in\Z_{>0}|\; |\lfloor mD\rfloor|\neq\emptyset\}$. For $m\in\mathbb N(D)$, we consider the rational map $\phi_{mD} \colon X \dashrightarrow Z_m \subseteq \mathbb P^{\dim|\lfloor mD\rfloor|}$ defined by the linear system $|\lfloor mD\rfloor |$. The \emph{Iitaka dimension} of $D$ is defined as $$ \kappa(D):=\left\{ \begin{array}{ll} \max\{\dim\text{Im}(\phi_{mD}) \mid \;m\in\mathbb N(D)\}& \text{if }\mathbb N(D)\neq\emptyset\\ -\infty&\text{if }\mathbb N(D)=\emptyset. \end{array} \right. $$ We remark that the Iitaka dimension $\kappa(D)$ is not really an invariant of the $\R$-linear equivalence class of $D$. Nonetheless, it satisfies the property that $\kappa(D)=\kappa(D')$ for effective divisors $D,D'$ such that $D\sim_{\R}D'$. For another important invariant, we fix a sufficiently ample $\Z$-divisor $A$ on $X$. The \emph{numerical Iitaka dimension} of $D$ is defined as $$ \kappa_\nu(D):= \max\left\{k \in \Z_{\geq0} \left|\; \limsup_{m \to \infty} \frac{h^0(X, \lfloor mD \rfloor + A)}{m^k}>0 \right.\right\} $$ if $h^0(X, \lfloor mD \rfloor + A)\neq 0$ for infinitely many $m>0$, and we let $\kappa_\nu(D):=-\infty$ otherwise. The numerical Iitaka dimension $\kappa_\nu(D)$ depends only on the numerical class $[D]\in\N^1(X)_{\R}$. \begin{definition} An $\R$-divisor $D$ is said to be \emph{abundant} if $\kappa(D)=\kappa_\nu(D)$. \end{definition} By definition, $\kappa(D) \leq \kappa_\nu(D)$ holds and the inequality can be strict in general. However, $\kappa_\nu(D)=\dim X$ if and only if $\kappa(D)=\dim X$. We refer to \cite{E}, \cite{lehmann-nu}, \cite{nakayama} for more detailed properties of $\kappa$ and $\kappa_\nu$. Recall that the \emph{section ring of an $\R$-divisor $D$} is defined as $R(X, D):=\bigoplus_{m \geq 0} H^0(X, \lfloor mD \rfloor)$. \begin{proposition}[{\cite[Corollary 1]{MR}}]\label{semiampleabundant} A $\Q$-divisor $D$ on $X$ is semiample if and only if it is nef, abundant, and its section ring is finitely generated. \end{proposition} \subsection{Zariski decomposition}\label{zdsubsec} We now briefly recall several notions related to Zariski decompositions in higher dimension. For more details, we refer to \cite{B1}, \cite{nakayama}, \cite{P}. To define the divisorial Zariski decomposition, we first consider a divisorial valuation $\sigma$ on $X$ with center $V:=\operatorname{Cent}_X \sigma$. If $D$ is a big $\R$-divisor on $X$, we define \emph{the asymptotic valuation} of $\sigma$ at $D$ as $\ord_V(||D||):=\inf\{\sigma(D')\mid D\equiv D'\geq 0\}$. If $D$ is only a pseudoeffective $\R$-divisor on $X$, we define $\ord_V(||D||):=\lim_{\epsilon\to 0+}\ord_V(||D+\eps A||)$ for some ample divisor $A$ on $X$. This definition is independent of the choice of $A$.
The \emph{divisorial Zariski decomposition} of a pseudoeffective $\R$-divisor $D$ is the decomposition $$ D=P_{\sigma}+N_{\sigma} $$ into the \emph{negative part} $N_{\sigma}:=\sum_{\codim E=1} \ord_E(||D||)E$ where the summation is over the codimension one irreducible subvarieties $E$ of $X$ such that $ \ord_E(||D||)>0$ and the \emph{positive part} $P_{\sigma}:=D-N_{\sigma}$. Let $D$ be an $\R$-divisor on $X$ which is effective up to $\sim_{\R}$. The \emph{$s$-decomposition} of $D$ is the decomposition $$ D=P_s+N_s $$ into the \emph{negative part} $N_s:=\inf\{L \mid L \sim_{\R} D, L \geq 0\}$ and the \emph{positive part} $P_s:=D-N_s$. The positive part $P_s$ is also characterized as the smallest divisor such that $P_s \leq D$ and $R(X, P_s) \simeq R(X, D)$ (\cite[Proposition 4.8]{P}). Note that $P_s \leq P_\sigma$ and $P_s, P_\sigma$ do not coincide in general. \begin{lemma}\label{abundantdiv=s} Let $D$ be an abundant $\R$-divisor on $X$ with the divisorial Zariski decomposition $D=P_\sigma + N_\sigma$ and the $s$-decomposition $D=P_s+N_s$. Then $P_\sigma=P_s$. \end{lemma} \begin{proof} Let $\sigma$ be a divisorial valuation on $X$ with $V=\operatorname{Cent}_X \sigma$. \cite[Proposition 6.4]{lehmann-red} implies that $\inf_{m \in \Z_{>0}, D' \in |\lfloor mD\rfloor|} \frac{1}{m}\sigma(D') = \ord_V(||D||)$ holds. Since $\inf_{m \in \Z_{>0}, D' \in |\lfloor mD\rfloor|} \frac{1}{m}\sigma(D')=\sigma(N_s)$, we see that $D=P_s + N_s$ is the divisorial Zariski decomposition. \end{proof} The \emph{Fujita-Zariski decomposition} of a pseudoeffective $\R$-divisor $D$ is the decomposition $$ D=P_f+N_f $$ into the effective \emph{negative part} $N_f$ and the nef \emph{positive part} $P_f$ such that if $f \colon Y \to X$ is a birational morphism from a smooth projective variety and $f^*D=P'+N'$ with $P'$ nef and $N' \geq 0$, then $P' \leq f^*P_f$. By definition, the divisorial Zariski decomposition and the $s$-decomposition always exist and are unique, and the Fujita-Zariski decomposition is also unique if it exists. Recall that the Fujita-Zariski decomposition does not exist in general even if we take the pullback on a sufficiently high model $f \colon \widetilde{X} \to X$ (see \cite[Chapter IV]{nakayama}). It is unclear in general whether the Fujita-Zariski decomposition is the divisorial Zariski decomposition (cf. \cite[III.1.17.Remark (2)]{nakayama}). However, this holds when the divisor is abundant and the positive part is semiample. \begin{proposition}\label{goodzd} Let $D$ be an abundant $\Q$-divisor on $X$ having a decomposition $D=P+N$ into a nef divisor $P$ and an effective divisor $N$. Then the following are equivalent: \begin{enumerate}[leftmargin=0cm,itemindent=.6cm] \item[$(1)$] It is the divisorial Zariski decomposition with $P=P_\sigma$ semiample. \item[$(2)$] It is the Fujita-Zariski decomposition with $P=P_f$ semiample. \item[$(3)$] It is the $s$-decomposition with $P=P_s$ semiample. \end{enumerate} \end{proposition} \begin{proof} $(1) \Rightarrow (2)$: It is easy to check that the divisorial Zariski decomposition with the nef positive part is the Fujita-Zariski decomposition (see \cite[III.1.17.Remark]{nakayama}).\\ $(2) \Rightarrow (3)$: Let $D=P_s+N_s$ be the $s$-decomposition. Then $P_f \geq P_s$ by definition. Since $P_f$ is semiample, we also have $P_f \leq P_s$. Therefore $P_f=P_s$. \\ $(3) \Rightarrow (1)$: It follows from Lemma \ref{abundantdiv=s}.
\end{proof} \begin{definition}\label{defbirgoodzd} If one of the conditions in Proposition \ref{goodzd} holds for an abundant $\Q$-divisor $D$, then we say that $D$ \emph{admits the good Zariski decomposition}, and denote it by $D=P+N$. We say that $D$ \emph{admits the birational good Zariski decomposition} if there exists a birational morphism $f \colon \widetilde{X} \to X$ from a smooth projective variety such that $f^*D$ admits the good Zariski decomposition. \end{definition} \begin{proposition}\label{qgoodzd} Let $D$ be a pseudoeffective $\Q$-divisor with the good Zariski decomposition $D=P+N$. Then $P, N$ are also $\Q$-divisors. \end{proposition} \begin{proof} Since $P$ is semiample, there exists a morphism $f \colon X \to Y$ such that $P \sim_{\R} f^* A$ where $A$ is an ample divisor on $Y$. The ample divisor $A$ can be written as a finite sum of ample Cartier divisors on $Y$ with positive real coefficients. Thus we can write $P \sim_{\R} \sum_{i=1}^k a_i P_i$ for some semiample Cartier divisors $P_i$ and some positive real numbers $a_i$. Now we write $N = \sum_{j=1}^m b_j N_j$ for prime divisors $N_1, \ldots, N_m$ and positive real numbers $b_j$. Then $N_1, \ldots, N_m$ are linearly independent in $\N^1(X)_{\R}$ by \cite[III.1.10.Proposition]{nakayama}. Let $V_P$ and $V_N$ be the subspaces of $\N^1(X)_\R$ spanned by $\{P_i\}_{i=1}^k$ and $\{N_j\}_{j=1}^m$, respectively. We now claim that $V_P\cap V_N=\{0\}$. Suppose that the claim does not hold. Then there exists a nonzero class $\eta\in V_P\cap V_N$ such that $\eta\equiv P'\equiv N'$ where $P'\in \bigoplus_{i=1}^k \R\cdot P_i$ and $N'\in \bigoplus_{j=1}^m \R\cdot N_j$. Note that there exists a positive number $\epsilon>0$ such that for any real number $r$ satisfying $|r|<\epsilon$, the divisor $P-r P'$ is nef and $N+r N'$ is effective. Thus \cite[Proposition III.1.14 (2)]{nakayama} implies that in the following decompositions $$ \begin{array}{rl} D&=P+N\\ &\equiv (P-r P')+(N+r N'), \end{array} $$ we have $N\leq N+r N'$, hence $0\leq r N'$ for any $r$ such that $|r|<\epsilon$. However, since $N'$ is a nonzero divisor, this is a contradiction. The claim implies that if $D$ is a $\Q$-divisor, then so is $N$ in the decomposition $D = P+N$. Therefore $P, N$ are both $\Q$-divisors. \end{proof} Now, we characterize when a divisor admits the birational good Zariski decomposition. \begin{proposition}\label{zdabfg} Let $D$ be a pseudoeffective $\Q$-divisor on $X$. Then $D$ admits the birational good Zariski decomposition if and only if $D$ is abundant and $R(X, D)$ is finitely generated. \end{proposition} \begin{proof} Suppose that there exists a birational morphism $f \colon \widetilde{X} \to X$ from a smooth projective variety such that $f^*D = P+N$ is the good Zariski decomposition. By definition, $D$ is abundant. Note that $R(X, D) \simeq R(\widetilde{X}, f^*D) \simeq R(\widetilde{X}, P)$. Since $P$ is a semiample $\Q$-divisor by Proposition \ref{qgoodzd}, it follows from Proposition \ref{semiampleabundant} that $R(X, D)$ is finitely generated. Conversely, suppose that $D$ is abundant and $R(X, D)$ is finitely generated. For a sufficiently large and divisible integer $m>0$, we take a resolution $f \colon \widetilde{X} \to X$ of the base locus of $|mD|$ and consider the decomposition $f^*(mD)=M+F$ into the base point free $M$ and the fixed part $F$ of $|f^*mD|$ By the finite generation of $R(X, D)$, we see that $f^*D=\frac{1}{m}M+\frac{1}{m}F$ is the $s$-decomposition with semiample positive part. 
By Proposition \ref{goodzd}, $f^*D$ admits the good Zariski decomposition. \end{proof} \section{Okounkov bodies}\label{okbdsubsec} In this section, we recall the construction of Okounkov bodies associated to pseudoeffective divisors in \cite{lm-nobody}, \cite{KK}, and \cite{CHPW1} and basic results. In the end, we prove Theorem \ref{newthrm} (=Theorem \ref{newtheorem}). First, fix an admissible flag on $X$ $$ Y_\bullet: X=Y_0\supseteq Y_1\supseteq\cdots \supseteq Y_{n-1}\supseteq Y_n=\{x\} $$ where each $Y_i$ is an irreducible subvariety of codimension $i$ in $X$ and is smooth at $x$. Let $D$ be an $\R$-divisor on $X$ with $|D|_{\R}:=\{ D' \mid D \sim_{\R} D' \geq 0 \}\neq\emptyset$. We define a valuation-like function $$ \nu_{Y_\bullet}:|D|_{\R}\to \R_{\geq0}^n $$ as follows. For $D'\in |D|_\R$, let $$\nu_1=\nu_1(D'):=\ord_{Y_1}(D').$$ Since $D'-\nu_1(D')Y_1$ is effective, we can define $$\nu_2=\nu_2(D'):=\ord_{Y_2}((D'-\nu_1Y_1)|_{Y_1}).$$ If $\nu_i=\nu_i(D')$ is defined, then we define $\nu_{i+1}=\nu_{i+1}(D')$ inductively as $$\nu_{i+1}(D'):=\ord_{Y_{i+1}}((\cdots((D'-\nu_1Y_1)|_{Y_1}-\nu_2Y_2)|_{Y_2}-\cdots-\nu_iY_i)|_{Y_{i}}).$$ The values $\nu_i(D')$ for $1 \leq i$ obtained as above define $\nu_{Y_\bullet}(D')=(\nu_1(D'),\nu_2(D'),\cdots,\nu_n(D'))$. \begin{definition} The \emph{Okounkov body} $\Delta_{Y_\bullet}(D)$ of a big $\R$-divisor $D$ with respect to an admissible flag $Y_\bullet$ is defined as the closure of the convex hull of $\nu_{Y_\bullet}(|D|_{\R})$ in $\R^n_{\geq 0}$. \end{definition} More generally, a similar construction can be applied to a graded linear series $W_\bullet$ on $X$ to construct the Okounkov body $\Delta_{Y_\bullet}(W_\bullet)$ of $W_\bullet$. For more details, we refer to \cite{lm-nobody}. When $D$ is not big, we have the following extension introduced in \cite{CHPW1}. \begin{definition}[{\cite[Definitions 1.1 and 1.2]{CHPW1}}] Let $D$ be an $\R$-divisor on $X$. \begin{enumerate}[leftmargin=0cm,itemindent=.6cm] \item[(1)] When $D$ is effective up to $\sim_\R$, i.e., $|D|_{\R}\neq \emptyset$, the \emph{valuative Okounkov body} $\Delta^{\val}_{Y_\bullet}(D)$ of $D$ with respect to an admissible flag $Y_\bullet$ is defined as the closure of the convex hull of $\nu_{Y_\bullet}(|D|_{\R})$ in $\R^n_{\geq 0}$. If $|D|_\R=\emptyset$, then we set $\Delta^{\val}_{Y_\bullet}(D):=\emptyset$. \item[(2)] When $D$ is pseudoeffective, the \emph{limiting Okounkov body} $\Delta^{\lim}_{Y_\bullet}(D)$ of $D$ with respect to an admissible flag $Y_\bullet$ is defined as $$\Delta^{\lim}_{Y_\bullet}(D):=\lim_{\epsilon \to 0+}\Delta_{Y_\bullet}(D+\epsilon A) = \bigcap_{\epsilon >0} \Delta_{Y_\bullet}(D+\epsilon A),$$ where $A$ is an ample divisor on $X$. (Note that $\Delta^{\lim}_{Y_\bullet}(D)$ is independent of the choice of $A$.) If $D$ is not pseudoeffective, we set $\Delta^{\lim}_{Y_\bullet}(D) :=\emptyset$. \end{enumerate}\end{definition} \begin{remark} Boucksom's numerical Okounkov body $\Delta^{\text{num}}_{Y_\bullet}(D)$ in \cite{B2} is the same as our limiting Okounkov body $\Delta^{\lim}_{Y_\bullet}(D)$. \end{remark} Suppose that $D$ is effective. By definition, $\Delta^{\val}_{Y_\bullet}(D) \subseteq \Delta^{\lim}_{Y_\bullet}(D)$, and the inclusion can be strict in general (see \cite[Examples 4.2 and 4.3]{CHPW1}). Moreover, by \cite[Proposition 3.3 and Lemma 4.8]{B2}, we have $$\dim \Delta^{\val}_{Y_\bullet}(D) = \kappa (D) \leq \dim \Delta^{\lim}_{Y_\bullet}(D) \leq \kappa_\nu(D).$$ The following lemmas will be useful for computing Okounkov bodies. 
\begin{lemma}\label{okbdbir} Let $D$ be an $\R$-divisor on $X$. Consider a birational morphism $f : \widetilde{X} \to X$ with $\widetilde{X}$ smooth and an admissible flag $$ \widetilde{Y}_\bullet : \widetilde{X}=\widetilde{Y}_0 \supseteq \widetilde{Y}_1 \supseteq \cdots \supseteq \widetilde{Y}_{n-1} \supseteq \widetilde{Y}_n=\{ x' \} $$ on $\widetilde{X}$. Suppose that $Y_n$ is a general point in $X$ and $$ Y_\bullet:=f(\widetilde{Y}_\bullet) : X=Y_0 \supseteq Y_1=f(\widetilde{Y}_1) \supseteq \cdots \supseteq Y_{n-1}=f(\widetilde{Y}_{n-1}) \supseteq Y_n=f(\widetilde{Y}_n)=\{ f(x') \} $$ is an admissible flag on $X$. Then we have $\Delta^{\val}_{\widetilde{Y}_\bullet}(f^*D)=\Delta^{\val}_{Y_\bullet}(D)$ and $\Delta^{\lim}_{\widetilde{Y}_\bullet}(f^*D) = \Delta^{\lim}_{Y_\bullet}(D)$. \end{lemma} \begin{proof} The limiting Okounkov body case is shown in \cite[Lemma 3.3]{CHPW2}. The proof for the valuative Okounkov body case is almost identical, and we leave the details to the reader as an exercise. \end{proof} \begin{lemma}\label{okbdzd} Let $D$ be an $\R$-divisor on $X$ with the $s$-decomposition $D=P_s+N_s$ and the divisorial Zariski decomposition $D=P_\sigma+N_\sigma$. Fix an admissible flag $Y_\bullet$ on $X$ such that $Y_n$ is a general point in $X$. Then we have $\Delta^{\val}_{Y_\bullet}(D)=\Delta^{\val}_{Y_\bullet}(P_s)$ and $\Delta^{\lim}_{Y_\bullet}(D)=\Delta^{\lim}_{Y_\bullet}(P_\sigma)$, respectively. \end{lemma} \begin{proof} The first assertion follows from the fact that $R(X, D) \simeq R(X, P_s)$ and the construction of the valuative Okounkov body. The second assertion is nothing but \cite[Lemma 3.5]{CHPW2}. \end{proof} Finally, we give a proof of the main result of this section. The following key result is implicitly used in \cite{CHPW1} (especially in the proof of \cite[Theorem B]{CHPW1}) and in this paper as well. We include the complete proof here. \begin{theorem}\label{newtheorem} Let $X$ be a smooth projective variety of dimension $n$, and $D$ be a big divisor on $X$. Fix an admissible flag $Y_\bullet$ such that $Y_{n-k} \not\subseteq \mathbf B_+(D)$. Then we have $$ \Delta_{Y_{n-k\bullet}}(D) = \Delta_{Y_\bullet}(D) \cap (\{ 0\}^{n-k} \times \R_{\geq 0}^{k}). $$ \end{theorem} \begin{proof} We may assume that each $Y_i$ is a smooth variety. Let $\{ A_i \}$ be a sequence of ample divisors on $X$ such that each $D+A_i$ is a $\Q$-divisor and $\lim\limits_{i \to \infty} A_i = 0$. Then we have $$ \Delta_{Y_\bullet}(D) = \bigcap_{i=1}^{\infty} \Delta_{Y_\bullet}(D+A_i) \text{ and } \Delta_{Y_{n-k\bullet}}(D) = \bigcap_{i=1}^{\infty} \Delta_{Y_{n-k\bullet}}(D+A_i). $$ Furthermore, $Y_{n-k} \not\subseteq \mathbf B_+(D+A_i)$ for all $i$. Note that it is enough to prove the statement for the $\Q$-divisors $D+A_i$ for all sufficiently large $i$. Thus we assume below that $D$ is a $\Q$-divisor. It is easy to check that $\Delta_{Y_{n-k\bullet}}(D) \subseteq \Delta_{Y_\bullet}(D)$. This implies that $\Delta_{Y_{n-k\bullet}}(D) \subseteq \Delta_{Y_\bullet}(D) \cap (\{ 0\}^{n-k} \times \R_{\geq 0}^{k})$ by definition. Suppose that the inclusion is strict: $$ \Delta_{Y_{n-k\bullet}}(D) \subsetneq \Delta_{Y_\bullet}(D) \cap (\{ 0\}^{n-k} \times \R_{\geq 0}^{k}). $$ Then there exists a point $(0^{n-k}, x_1, \ldots, x_k) \in \Delta_{Y_\bullet}(D) \cap (\{ 0\}^{n-k} \times \R_{\geq 0}^{k})$, but $(0^{n-k}, x_1, \ldots, x_k) \not\in \Delta_{Y_{n-k\bullet}}(D)$. Let $A$ be an ample $\Q$-divisor on $X$.
Note that $\Delta_{Y_{n-k\bullet}}(D) \subseteq \Delta_{Y_{n-k\bullet}}(D+\epsilon A)$ for any $\epsilon \geq 0$. Since $Y_{n-k} \not\subseteq \mathbf B_+(D+\epsilon A)$, we have $\vol_{\R^k} \Delta_{Y_{n-k\bullet}}(D+\epsilon A)= \frac{1}{(n-k)!}\vol_{X|Y_{n-k}}(D+\epsilon A)$. Recall that by \cite[Theorem A]{elmnp-restricted vol and base loci}, the function $\vol_{X|Y_{n-k}} \colon \text{Big}^{Y_{n-k}}(X) \to \R$ is continuous, where $\text{Big}^{Y_{n-k}}(X)$ denotes the cone in $\N^1(X)_\R$ consisting of the real divisor classes $\eta$ such that $Y_{n-k}$ is not properly contained in any of the irreducible components of $\mathbf B_+(\eta)$. Thus we can find a rational number $\epsilon >0$ such that $(x_1, \ldots, x_k)\not\in\Delta_{Y_{n-k\bullet}}(D+\epsilon A)$ and $$ \vol_{\R^k} \Delta_{Y_{n-k\bullet}}(D+\epsilon A) < \vol_{\R^k} \Delta $$ where $\Delta\subseteq \R^k$ is the convex hull of the set $\Delta_{Y_{n-k\bullet}}(D)$ and the point $(x_1, \ldots, x_k)$. Note that we can fix a small neighborhood $U$ of $(x_1, \ldots, x_k)$ in $\R^k$ which is disjoint from $\Delta_{Y_{n-k\bullet}}(D+\epsilon A)$. There exists a sufficiently small $\delta >0$ such that the divisors $$ \begin{array}{rcl} A_1=A_1(\delta_1)& \sim_{\Q} &\frac{1}{2}\epsilon A+\delta_1 Y_1,\\ A_2=A_2(\delta_1,\delta_2)& \sim_{\Q} &A_1|_{Y_1}+\delta_2 Y_2,\\ &\vdots&\\ A_{n-k}=A_{n-k}(\delta_1,\delta_2,\ldots,\delta_{n-k})& \sim_{\Q} &A_{n-k-1}|_{Y_{n-k-1}}+\delta_{n-k} Y_{n-k} \end{array}$$ are successively ample for any $\delta_j$ satisfying $\delta \geq \delta_1, \delta_2, \ldots, \delta_{n-k} >0$. Since $(0^{n-k}, x_1, \ldots, x_k) \in \Delta_{Y_\bullet}(D)$, there exists a sequence of valuative points $$\mathbf x_i=(\delta_1^i, \ldots, \delta_{n-k}^i, x_1^i, \ldots, x_k^i)\in\Delta_{Y_\bullet}(D)$$ such that $$ \lim_{i \to \infty} \delta_j^i = 0 \text{ for $1 \leq j \leq n-k$\;\; and\; }\lim_{i \to \infty} x_l^i = x_l \text{ for $1 \leq l \leq k$}. $$ Since it is known that the set of rational valuative points $\{\nu_{Y_\bullet}(D')| D\sim_\Q D'\geq 0\}$ is dense in $\Delta_{Y_\bullet}(D)$, we may assume that $\mathbf x_i\in \{\nu_{Y_\bullet}(D')| D\sim_\Q D'\geq 0\}$ so that $\mathbf x_i\in\Q^n$ for all $i$. We now fix a sufficiently large $i$ such that $0\leq \delta_j^i < \delta$ for all $1 \leq j \leq n-k$ and $(x_1^i, \ldots, x_k^i)$ lies in the small neighborhood $U$ in $\R^k$ of $(x_1, \ldots, x_k)$. Since $\mathbf x_i$ is a rational valuative point of $\Delta_{Y_\bullet}(D)$, there exist an effective divisor $D'\sim_\Q D$ such that $\nu_{Y_\bullet}(D')=\mathbf x_i$. Namely, we have $$ \begin{array}{rcl} D'&=&D_1 + \delta_1^i Y_1,\\ D_1|_{Y_1}&=&D_2 + \delta_2^i Y_2,\\ &\vdots& \\ D_{n-k-1}|_{Y_{n-k-1}}&=&D_{n-k}+\delta_{n-k}^i Y_{n-k} \end{array} $$ where $D_j$ on $Y_{j-1}$ ($j=1,\ldots, n-k$) are effective divisors. Now note that we have $$ D'+\frac{1}{2}\epsilon A = D_1 + \left( \frac{1}{2}\epsilon A + \delta_1^i Y_1 \right) \sim_{\Q} D_1 + A'_1 $$ where we may assume that $A'_1$ is an effective ample divisor such that $\mult_{Y_1} A'_1=0$. We also have $$ (D_1+A'_1)|_{Y_1} = D_2 + (A'_1|_{Y_1} + \delta_2^i Y_2) \sim_{\Q} D_2 + A'_2 $$ where we may assume that $A'_2$ is an effective ample divisor such that $\mult_{Y_2} A'_2=0$. 
By continuing this process, we finally obtain $$ (D_{n-k-1} + A'_{n-k-1})|_{Y_{n-k-1}} = D_{n-k} + (A'_{n-k-1}|_{Y_{n-k-1}} + \delta_{n-k}^i Y_{n-k}) \sim_{\Q} D_{n-k} + A'_{n-k} $$ where we may assume that $A'_{n-k}$ is an effective ample divisor such that $\mult_{Y_{n-k}} A'_{n-k}=0$. We now claim that there exists an effective divisor $D'' \sim_{\Q} D+\epsilon A$ such that $D''|_{Y_{n-k-1}} = D_{n-k}+E$ for some effective divisor $E$ with $\mult_{Y_{n-k}}E=0$ and $\nu_{Y_{n-k\bullet}}(E|_{Y_{n-k}}) = (x_1', \ldots, x_k')$ where we may assume that $x_j'\geq 0$ are arbitrarily small. Note that such $D''$ defines a rational valuative point $\nu_{Y_{\bullet}}(D'') = (0^{n-k}, x_1^i+x_1', \ldots, x_k^i + x_k') \in \Delta_{Y_\bullet}(D + \epsilon A)$. Thus $(x_1^i+x_1', \ldots, x_k^i + x_k') \in \Delta_{Y_{n-k \bullet}}(D+\epsilon A)$. If our claim holds, then we can conclude that $(x_1^i+x_1', \ldots, x_k^i + x_k')$ belongs to the small neighborhood $U$ of $(x_1, \ldots, x_k)$ in $\R^k$, which is a contradiction since $U$ is disjoint from $\Delta_{Y_{n-k\bullet}}(D+\epsilon A)$. Therefore we finally obtain $\Delta_{Y_{n-k\bullet}}(D) = \Delta_{Y_\bullet}(D) \cap (\{ 0\}^{n-k} \times \R_{\geq 0}^{k})$. It now remains to show the claim. For a sufficiently divisible and large integer $m>0$, we take a log resolution $f_m \colon \widetilde{X}_m \to X$ of the base ideal of $|m(D+\frac{1}{2}\epsilon A)|$ so that we obtain a decomposition $f_m^*(m(D+\frac{1}{2}\epsilon A)) = M_m' + F_m'$ into a base point free divisor $M_m'$ and the fixed part $F_m'$ of $|f_m^*(m(D+\frac{1}{2}\epsilon A))|$. Let $M_m:=\frac{1}{m}M_m'$. We may assume that $f_m$ is isomorphic outside $\mathbf B_+(D+\frac{1}{2}\epsilon A)$. We can take smooth strict transforms $\widetilde{Y}_i^m$ on $\widetilde{X}_m$ of $Y_i$ for $1 \leq i \leq n-k$. For a general point $y$ in $\widetilde{Y}_{n-k}^m$, we have the positive moving Seshadri constant $\epsilon(||D+\frac{1}{2}\epsilon A||; f_m(y)) > 0$. Thus we also have the positive Seshadri constant $\epsilon(M_m; y) >0$ for $m \gg 0$ so that $\widetilde{Y}_{n-k}^m \not\subseteq \mathbf B_+(M_m)$. Let $g_m \colon \widetilde{X}_m \to Z_m$ be the birational morphism defined by $|M_m'|$. Possibly by taking a further blow-up of $\widetilde{X}_m$, we may assume that every irreducible component of the exceptional locus of $g_m$ is a divisor. We can still assume that $f_m$ is isomorphic over a general point in $Y_{n-k}$. The divisor $H_m:=M_m - E_m$ is ample for any sufficiently small effective divisor $E_m$ whose support is the $g_m$-exceptional locus. Note that $\mult_{\widetilde{Y}_{n-k}^m}(E_m)=0$. Let $f_m^*(D+\frac{1}{2}\epsilon A)=P_m+N_m$ be the divisorial Zariski decomposition. As in \cite[Proof of Proposition 3.7]{lehmann-nu}, by applying \cite[Proposition 2.5]{elmnp-asymptotic inv of base}, we see that $P_m - M_m$ is arbitrarily small if we take a sufficiently large $m>0$. Since we may take an arbitrarily small $E_m$, so is $P_m - H_m$ for a sufficiently large $m>0$. For simplicity, we fix a sufficiently large integer $m>0$ and we denote $f=f_m$, $\widetilde{X}=\widetilde{X}_m$ and $\widetilde{Y}_i=\widetilde{Y}_i^m$. Let $f^*(D+\frac{1}{2}\epsilon A) = P+N$ be the divisorial Zariski decomposition. Then as we have seen above, we can assume that $P$ can be arbitrarily approximated by an ample divisor $H$ on $\widetilde{X}$ such that $F=f^*(D+\frac{1}{2}\epsilon A)-H$ is an effective divisor satisfying $\mult_{\widetilde{Y}_{n-k}}(F)=0$. 
Note that $F-N$ is an arbitrarily small effective divisor such that $\mult_{\widetilde{Y}_{n-k}}(F-N)=0$. Thus we can find an effective divisor $A_0 \sim_{\Q} A$ such that $\mult_{Y_{n-k-1}}A_0=0$, $E_0:=\frac{1}{2}\epsilon f^*A_0|_{\widetilde{Y}_{n-k-1}} - (F-N)|_{\widetilde{Y}_{n-k-1}}$ is effective, and $\mult_{\widetilde{Y}_{n-k}}E_0=0$. Let $f^*D=P'+N'$ be the divisorial Zariski decomposition. Since $P'+f^*(\frac{1}{2}\epsilon A)$ is movable, we get $P \geq P'+f^*(\frac{1}{2}\epsilon A)$ and so $N' \geq N$. Since $Y_{n-k} \not\subseteq \mathbf B_+(D)$, no irreducible component of $N'$ contains $\widetilde{Y}_{n-k-1}$. Clearly, $f^*D_{n-k} - N'|_{\widetilde{Y}_{n-k-1}}$ is effective, and so is $f^*D_{n-k} - N|_{\widetilde{Y}_{n-k-1}}$. Thus $$ E_1:=f^*(D_{n-k} + A_{n-k}') - N|_{\widetilde{Y}_{n-k-1}} + E_0 = f^*(D_{n-k}+A_{n-k}') - F|_{\widetilde{Y}_{n-k-1}} + \frac{1}{2}\epsilon f^*A_0|_{\widetilde{Y}_{n-k-1}} $$ is an effective divisor on $\widetilde{Y}_{n-k-1}$. Note that $E_1 \sim_{\Q} (H+\frac{1}{2}\epsilon f^*A)|_{\widetilde{Y}_{n-k-1}}$. Since $$ H^0\left(\widetilde{X}, m\left(H+\frac{1}{2}\epsilon f^*A\right)\right) \to H^0\left(\widetilde{Y}_{n-k-1}, m\left(H+\frac{1}{2}\epsilon f^*A\right)\Bigm|_{\widetilde{Y}_{n-k-1}}\right) $$ is surjective for all sufficiently divisible integers $m>0$, it follows that there exists $H' \sim_{\Q} H+\frac{1}{2}\epsilon f^*A$ such that $H'|_{\widetilde{Y}_{n-k-1}} = E_1$. Then we have $$ (H' + F)|_{\widetilde{Y}_{n-k-1}} = E_1 + F|_{\widetilde{Y}_{n-k-1}} = f^*D_{n-k} +E' $$ where $$ E':=f^*A_{n-k}' + (F-N)|_{\widetilde{Y}_{n-k-1}} + E_0 = f^*A_{n-k}' + \frac{1}{2}\epsilon f^*A_0|_{\widetilde{Y}_{n-k-1}} $$ is an effective divisor. Note that $\mult_{\widetilde{Y}_{n-k}}E'=0$. We may also assume that each $x_j'\geq 0$ is arbitrarily small in $\nu_{\widetilde{Y}_{n-k\bullet}}(E'|_{\widetilde{Y}_{n-k}}) = (x_1', \ldots, x_k')$. By letting $D'':=f_*(H'+F) \sim_{\Q} D + \epsilon A$ and $E:=f_*E'$, we obtain the divisors satisfying the required properties. This shows the claim, and hence, the proof is complete. \end{proof} \section{Nakayama subvarieties and positive volume subvarieties}\label{nakpvssec} In \cite{CHPW1}, we introduced Nakayama subvarieties and positive volume subvarieties of divisors. We now further study those subvarieties, and prove Theorem \ref{critintro} (=Theorem \ref{geomcrit}) in this section. We first recall the definitions of those subvarieties. \begin{definition}[{\cite[Definitions 2.7 and 2.13]{CHPW1}}] Let $D$ be an $\R$-divisor on $X$. \begin{enumerate}[leftmargin=0cm,itemindent=.6cm] \item[(1)] When $D$ is effective, a \emph{Nakayama subvariety of $D$} is an irreducible subvariety $U \subseteq X$ such that $\dim U=\kappa(D)$ and for every integer $m \geq 0$ the natural map $$ H^0(X, \lfloor mD \rfloor) \to H^0(U, \lfloor mD|_U \rfloor) $$ is injective (or equivalently, $H^0(X, \mathcal I_U \otimes \mathcal O_X(\lfloor mD \rfloor))=0$ where $\mathcal I_U$ is the ideal sheaf of $U$ in $X$). \item[(2)] When $D$ is pseudoeffective, a \emph{positive volume subvariety of $D$} is an irreducible subvariety $V \subseteq X$ such that $\dim V = \kappa_\nu(D)$ and $\vol_{X|V}^+(D)>0$. \end{enumerate} \end{definition} \begin{remark} In \cite{CHPW1}, we required an additional condition $V \not \subseteq \mathbf B_-(D)$ for the definition of positive volume subvariety. However, we can drop this condition by Lemma \ref{notinbm}.
Note that $V \not \subseteq \mathbf B_-(D)$ does not imply $\vol_{X|V}^+(D)>0$ (see \cite[Example 2.14]{CHPW1}). \end{remark} \begin{lemma}\label{notinbm} Let $D$ be a pseudoeffective $\R$-divisor on $X$. If $V$ is a positive volume subvariety of $D$, then $V \not\subseteq \mathbf B_-(D)$. \end{lemma} \begin{proof} If $V \subseteq \mathbf B_-(D)$, then there is a sequence $\{ A_i \}$ of ample divisors on $X$ such that $\lim_{i \to \infty} A_i = 0$ and $V \subseteq \SB(D+A_i)$. Then $\vol_{X|V}(D+A_i)=0$, so $\vol_{X|V}^+(D)=0$. Thus $V$ is not a positive volume subvariety of $D$. \end{proof} \begin{remark} Even if $V$ is a positive volume subvariety of $D$, it is possible that $V \subseteq \SB(D)$. For instance, consider a ruled surface $S$ carrying a nef divisor $D$ such that $D\cdot C>0$ for every irreducible curve $C \subseteq S$, but $D$ is not ample (see e.g., \cite[Example 1.5.2]{pos}). Since $\kappa(D)=-\infty$, we have $\SB(D)=S$. Thus every positive volume subvariety of $D$ is contained in $\SB(D)$. \end{remark} \begin{remark}\label{gensub} When $\kappa(D)=0$ (resp. $\kappa_\nu(D)=0$), every point not in $\Supp(D)$ (resp. $\mathbf B_-(D)$) is a Nakayama (resp. positive volume) subvariety of $D$. When $\kappa(D)>0$, any $\kappa(D)$-dimensional general subvariety (e.g., intersection of general ample divisors) is a Nakayama subvariety of $D$ (\cite[Proposition 2.9]{CHPW1}). Similarly, when $\kappa_\nu(D)>0$, any $\kappa_\nu(D)$-dimensional intersection of sufficiently ample divisors is a positive volume subvariety of $D$ (\cite[Proposition 2.17]{CHPW1}). In particular, we can always construct an admissible flag $Y_\bullet$ on $X$ containing a Nakayama subvariety of $D$ or a positive volume subvariety of $D$ such that $Y_n$ is a general point in $X$. \end{remark} The importance of such special subvarieties associated to divisors is that one can read off interesting asymptotic properties of divisors from Okounkov bodies with respect to admissible flags containing those subvarieties. The following theorem is the main result of \cite{CHPW1}, which can be regarded as a generalization of \cite[Theorem A]{lm-nobody}. \begin{theorem}[{\cite[Theorems A and B]{CHPW1}}]\label{chpwmain} We have the following: \begin{enumerate}[leftmargin=0cm,itemindent=.6cm] \item[$(1)$] Let $D$ be an effective $\R$-divisor on $X$. Fix an admissible flag $Y_\bullet$ containing a Nakayama subvariety $U$ of $D$ such that $Y_n$ is a general point in $X$. Then $\Delta^{\val}_{Y_\bullet}(D) \subseteq \{0 \}^{n-\kappa(D)} \times \R^{\kappa(D)}$ so that one can regard $\Delta^{\val}_{Y_\bullet}(D) \subseteq \R^{\kappa(D)}$. Furthermore, we have $$\dim \Delta^{\val}_{Y_\bullet}(D)=\kappa(D) \text{ and } \vol_{\R^{\kappa(D)}}(\Delta^{\val}_{Y_\bullet}(D))=\frac{1}{\kappa(D)!} \vol_{X|U}(D).$$ \item[$(2)$] Let $D$ be a pseudoeffective $\R$-divisor on $X$, and fix an admissible flag $Y_\bullet$ containing a positive volume subvariety $V$ of $D$. Then $\Delta^{\lim}_{Y_\bullet}(D) \subseteq \{0 \}^{n-\kappa_\nu(D)} \times \R^{\kappa_\nu(D)}$ so that one can regard $\Delta^{\lim}_{Y_\bullet}(D) \subseteq \R^{\kappa_\nu(D)}$. Furthermore, we have $$\dim \Delta^{\lim}_{Y_\bullet}(D)=\kappa_\nu(D) \text{ and } \vol_{\R^{\kappa_\nu(D)}}(\Delta^{\lim}_{Y_\bullet}(D))=\frac{1}{\kappa_\nu(D)!} \vol_{X|V}^+(D).$$ \end{enumerate} \end{theorem} \begin{remark} To extract asymptotic properties of divisors from $\Delta^{\val}_{Y_\bullet}(D)$ as in Theorem \ref{chpwmain} (1), we need to assume that $Y_n$ is a general point in $X$. 
When considering $\Delta^{\val}_{Y_\bullet}(D)$ (resp. $\Delta^{\lim}_{Y_\bullet}(D)$, we say that $Y_n$ is \emph{general} if $Y_n$ is not contained in $\SB(D)$ (resp. $\mathbf B_-(D)$) (see \cite[Lemma 2.6]{lm-nobody} and \cite[Subsection 3.2]{CHPW1}). \end{remark} As an application of Theorem \ref{chpwmain}, we now prove the following Theorem \ref{critintro}. \begin{theorem}\label{geomcrit} Let $D$ be an $\R$-divisor on $X$. Fix an admissible flag $Y_\bullet$ such that $Y_n$ is a general point in $X$. We have the following: \begin{enumerate}[leftmargin=0cm,itemindent=.6cm] \item[$(1)$] If $D$ is effective, then $Y_\bullet$ contains a Nakayama subvariety of $D$ if and only if $\Delta^{\val}_{Y_\bullet}(D) \subseteq \{0 \}^{n-\kappa(D)} \times \R^{\kappa(D)}$. \item[$(2)$] If $D$ is pseudoeffective, then $Y_\bullet$ contains a positive volume subvariety of $D$ if and only if $\Delta^{\lim}_{Y_\bullet}(D) \subseteq \{0 \}^{n-\kappa_\nu(D)} \times \R^{\kappa_\nu(D)}$ and $\dim \Delta^{\lim}_{Y_\bullet}(D)=\kappa_\nu(D)$. \end{enumerate} \end{theorem} \begin{proof} The $(\Rightarrow)$ direction of both $(1)$ and $(2)$ at once follows from Theorem \ref{chpwmain}. For the $(\Leftarrow)$ direction of $(1)$, note that $\ord_{Y_{n-\kappa(D)}}(D')=0$ for every effective divisor $D' \sim_{\R} D$ under the assumption that $\Delta^{\val}_{Y_\bullet}(D) \subseteq \{0 \}^{n-\kappa(D)} \times \R^{\kappa(D)}$. This means that $H^0(X, \mathcal{I}_{Y_{n-\kappa(D)}} \otimes \mathcal{O}_X(\lfloor mD \rfloor)) = 0$ for every integer $m \geq 0$. Thus $Y_{n-\kappa(D)}$ is a Nakayama subvariety of $D$. For the $(\Leftarrow)$ direction of $(2)$, take an arbitrary ample divisor $A$ on $X$. Since $\Delta_{Y_\bullet}(D+A) \supseteq \Delta^{\lim}_{Y_\bullet}(D)$, it follows that $$ \Delta_{Y_\bullet}(D+A)\cap(\{0\}^{n-\kappa_\nu(D)}\times\R_{\geq0}^{\kappa_\nu(D)})\supseteq\Delta^{\lim}_{Y_{\bullet}}(D). $$ Since $Y_n$ is general, we have $Y_{n-\kappa_\nu(D)} \not\subseteq \mathbf B_-(D)$. Thus $Y_{n-\kappa_\nu(D)} \not\subseteq \mathbf B_+(D+A)$ and using Theorem \ref{newtheorem}, we obtain $\Delta_{Y_{n-\kappa_\nu(D)\bullet}}(D+A)\supseteq\Delta^{\lim}_{Y_{\bullet}}(D)$. Therefore, by \cite[(2.7)]{lm-nobody} we have $$ \begin{array}{rl} \vol_{X|Y_{n-\kappa_\nu(D)}}(D+A)&= \kappa_\nu(D)!\cdot \vol_{\R^{\kappa_\nu(D)}}\Delta_{Y_{n-\kappa_\nu(D)\bullet}}(D+A)\\ &\geq \kappa_\nu(D)!\cdot \vol_{\R^{\kappa_\nu(D)}}\Delta^{\lim}_{Y_{\bullet}}(D). \end{array} $$ The given condition implies that $\vol_{\R^{\kappa_\nu(D)}}\Delta^{\lim}_{Y_{\bullet}}(D)>0$. Hence, $\vol_{X|Y_{n-\kappa_\nu(D)}}^+(D)>0$, and by definition $Y_{n-\kappa_\nu(D)}$ is a positive volume subvariety of $D$. \end{proof} Regarding Theorem \ref{geomcrit} (1), we recall that $\dim \Delta^{\val}_{Y_\bullet}(D)=\kappa(D)$ always holds whenever $D$ is effective by \cite[Proposition 3.3]{B2}. \section{Rational polyhedrality of Okounkov bodies}\label{ratsec} This section is devoted to showing the rational polyhedrality of Okounkov bodies of pseudoeffective divisors. We then finally prove Theorem \ref{main1} (=Corollary \ref{ratpolval} and Theorem \ref{ratsimlim}). First, we study the Okounkov bodies under surjective morphisms. \begin{lemma}[{cf. 
\cite[Lemma 3.3]{CHPW2}}]\label{morokbd} Let $f \colon X \to \overline{X}$ be a surjective morphism of projective varieties of the same dimension $n$, and fix an admissible flag $$ Y_\bullet: X=Y_0\supseteq Y_1\supseteq\cdots \supseteq Y_{n-1}\supseteq Y_n=\{x\} $$ on $X$ such that $$ \overline{Y}_\bullet : \overline{X}=f(Y_0)\supseteq f(Y_1)\supseteq \cdots \supseteq f(Y_{n-1}) \supseteq f(Y_n)=\{f(x) \} $$ is an admissible flag on $\overline{X}$. For a big $\Z$-divisor $D$ on $\overline{X}$, consider a graded linear series $W_\bullet$ associated to $f^*D$ on $X$ with $W_k:=H^0(\overline{X}, kD) \subseteq H^0(X, kf^*D)$ for any integer $k \geq 0$. Then $\Delta_{Y_\bullet}(W_\bullet)=\Delta_{\overline{Y}_\bullet}(D)$. \end{lemma} \begin{proof} It follows from the construction of the Okounkov body associated to a graded linear series. \end{proof} The following lemma plays a crucial role in proving Theorem \ref{main1}. \begin{lemma}[{cf. \cite[Proposition 4]{AKL}}]\label{simplex} Let $W_\bullet$ be a graded linear series on a smooth projective variety $X$ generated by a base point free linear series $W_1$. Suppose also that $W_1$ defines a surjective morphism $f \colon X \to \overline{X}$ of projective varieties of the same dimension $n$. Let $Y_\bullet$ be an admissible flag on $X$ defined by successive intersections of sufficiently general members $E_1, \ldots, E_n$ of $W_1$; $Y_i := E_1 \cap \cdots \cap E_i$ for $1 \leq i \leq n-1$ and $Y_n = \{x \}$ is a general point in $X$. Then $\Delta_{Y_\bullet}(W_\bullet)$ is an $n$-dimensional simplex in $\R_{\geq 0}^n$ whose vertices are $0, e_1, \ldots, e_{n-1}, \vol_X(W_\bullet)e_n$. \end{lemma} \begin{proof} There exists a very ample $\Z$-divisor $D$ on $\overline{X}$ so that we may assume $W_k=H^0(\overline{X}, kD) \subseteq H^0(X, kf^*D)$ for any integer $k \geq 0$. By the genericity assumption on $E_j$ for defining $Y_i$, we may assume that $$ \overline{Y}_\bullet : \overline{X}=f(Y_0)\supseteq f(Y_1)\supseteq \cdots \supseteq f(Y_{n-1}) \supseteq f(Y_n) $$ is an admissible flag on $\overline{X}$. By Lemma \ref{morokbd}, $\Delta_{Y_\bullet}(W_\bullet)=\Delta_{\overline{Y}_\bullet}(D)$. Note that $D^n=\vol_{\overline{X}}(D)=\vol_X(W_\bullet)$. By applying \cite[Proposition 4]{AKL} to $\Delta_{\overline{Y}_\bullet}(D)$, we obtain the assertion. \end{proof} We now show the rational polyhedrality of $\Delta^{\val}_{Y_\bullet}(D)$. \begin{theorem}\label{ratsimval} Let $D$ be an effective $\Q$-divisor on $X$ with finitely generated section ring $R(X,D)$. Then there exists an admissible flag $Y_\bullet$ on $X$ containing a Nakayama subvariety of $D$ such that $\Delta^{\val}_{Y_\bullet}(D)$ is a rational simplex in $\{0\}^{n-\kappa(D)}\times\R^{\kappa(D)}$ of dimension $\kappa(D)$. \end{theorem} \begin{proof} Let $m>0$ be a sufficiently divisible and large integer such that $mD$ is a $\Z$-divisor and the section ring $R(X, mD)$ is generated by $H^0(X, mD)$. We take a log resolution $f \colon \widetilde{X} \to X$ of the base ideal $\frak{b}(|mD|)$ so that we obtain a decomposition $f^*(mD)=M+F$ into a base point free divisor $M$ and the fixed part $F$ of $|f^*(mD)|$. Note that the morphism $\phi \colon \widetilde{X} \to Z$ given by $|M|$ is the Iitaka fibration of $f^*D$. Let $A_1, \ldots, A_{n-\kappa(D)}$ be sufficiently general ample divisors on $\widetilde{X}$ such that each $Y_i':=A_1 \cap \cdots \cap A_i$ for $1 \leq i \leq n-\kappa(D)$ is a smooth irreducible subvariety of dimension $n-i$.
By Remark \ref{gensub}, $U:=Y_{n-\kappa(D)}'$ is a Nakayama subvariety of $f^*D$. Let $W_k$ be the image of the natural injective map $H^0(\widetilde{X}, kf^*(mD)) \to H^0(U, kf^*(mD)|_U)$ for any integer $k \geq 0$. Then $W_\bullet$ is a graded linear series on $U$ generated by $W_1$. Note that $\phi|_U \colon U \to Z$ is a surjective morphism of projective varieties of the same dimension $\kappa(D)$ defined by $W_1$. Now take sufficiently general members $E_1, \ldots, E_{\kappa(D)}$ of $W_1$ such that $Y_{n-\kappa(D)+i}':=E_1 \cap \cdots \cap E_i$ for $1 \leq i \leq \kappa(D)-1$ is a smooth irreducible subvariety of $\widetilde{X}$ (and of $U$) of dimension $\kappa(D)-i$, and $Y_n'=\{x\}$ where $x$ is a general point in $U$. In particular, $Y_\bullet': Y'_0\supseteq \cdots \supseteq Y'_n$ is an admissible flag on $\widetilde{X}$ and the partial flag $Y'_{n-\kappa(D)\bullet}$ is an admissible flag on $U$. Then by Lemma \ref{simplex}, $\Delta_{Y'_{n-\kappa(D)\bullet}}(W_\bullet)$ is a $\kappa(D)$-dimensional simplex. Recall from \cite[Remark 3.11]{CHPW1} that $\Delta^{\val}_{Y'_\bullet}(f^*D)=\Delta_{Y'_{n-\kappa(D)\bullet}}(W_\bullet)$. Furthermore, by the genericity assumption on $Y_\bullet'$, we can assume that $Y_\bullet : f(Y'_0)\supseteq \cdots\supseteq f(Y'_n)$ is an admissible flag on $X$ and $f(Y_{n-\kappa(D)}')$ is a Nakayama subvariety of $D$. By Lemma \ref{okbdbir}, $\Delta^{\val}_{Y_\bullet}(D)=\Delta^{\val}_{Y'_\bullet}(f^*D)$, and hence, $\Delta^{\val}_{Y_\bullet}(D)$ is a rational simplex. Finally, by Theorem \ref{chpwmain} (1), $\Delta^{\val}_{Y_\bullet}(D)$ is contained in $\{0\}^{n-\kappa(D)}\times\R^{\kappa(D)}$ and is of dimension $\kappa(D)$. \end{proof} \begin{corollary}\label{ratpolval} Let $D$ be an effective $\Q$-divisor on $X$ which admits the birational good Zariski decomposition. Then there exists an admissible flag $Y_\bullet$ on $X$ containing a Nakayama subvariety of $D$ such that $\Delta^{\val}_{Y_\bullet}(D)$ is a rational simplex in $\{0\}^{n-\kappa(D)}\times\R^{\kappa(D)}$ of dimension $\kappa(D)$. \end{corollary} \begin{proof} By Proposition \ref{zdabfg}, $D$ has a finitely generated section ring. The assertion now follows from Theorem \ref{ratsimval}. \end{proof} We now turn to the limiting Okounkov body case. \begin{lemma}\label{oklimnef} Let $P$ be a nef divisor on $X$, and consider an admissible flag $Y_\bullet$ on $X$ containing a smooth positive volume subvariety $V=Y_{n-\kappa_\nu(P)}$ of $P$. Then $\Delta^{\lim}_{Y_\bullet}(P)=\Delta_{Y_{n-\kappa_\nu(P)\bullet}}(P|_V)$. \end{lemma} \begin{proof} By definition, it is clear that $\Delta^{\lim}_{Y_\bullet}(P) \supseteq \Delta_{Y_{n-\kappa_\nu(P)\bullet}}(P|_V)$. Thus it is sufficient to show that their Euclidean volumes in $\R^{\kappa_\nu(P)}$ are equal, i.e., $\vol_{\R^{\kappa_\nu(P)}}(\Delta^{\lim}_{Y_\bullet}(P))=\vol_{\R^{\kappa_\nu(P)}}(\Delta_{Y_{n-\kappa_\nu(P)\bullet}}(P|_V))$, or equivalently, $\vol_{X|V}^+(P)=\vol_{V}(P|_V)$ by Theorem \ref{chpwmain}. Fix an ample divisor $A$ on $X$. Since $P+\eps A$ is ample for any $\eps >0$, it follows that $\vol_{X|V}(P+\eps A)=\vol_V((P+\eps A)|_V)$. By the continuity of the volume function, we obtain $$ \vol_{X|V}^+(P)=\lim_{\eps \to 0+}\vol_{X|V}(P+\eps A)=\lim_{\eps \to 0+} \vol_V((P+\eps A)|_V)=\vol_V(P|_V), $$ which completes the proof. \end{proof} We next obtain an analogous result on the rational polyhedrality of $\Delta^{\lim}_{Y_\bullet}(D)$.
\begin{theorem}\label{ratsimlim} Let $D$ be a pseudoeffective $\Q$-divisor on $X$ which admits the birational good Zariski decomposition. Then there exists an admissible flag $Y_\bullet$ on $X$ containing a positive volume subvariety of $D$ such that $\Delta^{\lim}_{Y_\bullet}(D)$ is a rational simplex in $\{0\}^{n-\kappa_\nu(D)}\times\R^{\kappa_\nu(D)}$ of dimension $\kappa_\nu(D)$. \end{theorem} \begin{proof} Let $f \colon \widetilde{X} \to X$ be a birational morphism of smooth projective varieties of dimension $n$ such that $f^*D=P+N$ is the good Zariski decomposition. Let $A_1, \ldots, A_{n-\kappa_\nu(D)}$ be sufficiently general ample divisors on $\tilde{X}$ such that each $Y_i':=A_1 \cap \cdots \cap A_i$ for $1 \leq i \leq n-\kappa_\nu(D)$ is a smooth irreducible subvariety of dimension $n-i$. By Remark \ref{gensub}, $V:=Y_{n-\kappa_\nu(D)}'$ is a positive volume subvariety of $f^*D$. By \cite[Theorem 2.18]{CHPW1}, $P|_V$ is big, and $mP|_V$ on $V$ is base point free for a sufficiently divisible and large integer $m>0$. Let $E_1, \ldots, E_{\kappa_\nu(D)-1} \in |mP|_V|$ be general members such that each $Y_{n-\kappa_\nu(D)+i}':=E_1 \cap \cdots \cap E_i$ for $1 \leq i \leq \kappa_\nu(D)-1$ is a smooth irreducible subvariety of $X$ of dimension $\kappa_\nu(D)-i$, and $Y'_n:=\{ x\}$ where $x$ is a general point in $V$. Then $Y'_\bullet : \widetilde{X}=Y'_0\supseteq \cdots\supseteq Y'_n$ is an admissible flag on $\widetilde{X}$. By \cite[Theorem 7]{AKL}, $\Delta_{Y'_{n-\kappa_\nu(D)\bullet}}(P|_V)$ is a $\kappa_\nu(D)$-dimensional simplex. By Lemma \ref{oklimnef}, $\Delta^{\lim}_{Y'_\bullet}(P)=\Delta_{Y'_{n-\kappa_\nu(D)\bullet}}(P|_V)$, and by Lemma \ref{okbdzd}, $\Delta^{\lim}_{Y'_\bullet}(f^*D)=\Delta^{\lim}_{Y'_\bullet}(P)$. By the genericity assumption on $Y'_\bullet$, we can assume that $Y_\bullet : f(Y'_0) \supseteq \cdots\supseteq f(Y'_n)$ is an admissible flag on $X$ and $f(Y_{n-\kappa_\nu(D)}')$ is a positive volume subvariety of $D$. By Lemma \ref{okbdbir}, we obtain $\Delta^{\lim}_{Y_\bullet}(D)=\Delta^{\lim}_{Y'_\bullet}(f^*D)$, and hence, $\Delta^{\lim}_{Y_\bullet}(D)$ is a rational simplex. Finally, by Theorem \ref{chpwmain}, $\Delta^{\lim}_{Y_\bullet}(D)$ is in $\{0\}^{n-\kappa_\nu(D)}\times\R^{\kappa_\nu(D)}$ and of dimension $\kappa_\nu(D)$. \end{proof} \begin{remark}\label{ratrem} The problem of the rational polyhedrality of Okounkov body is not yet fully understood. It was shown in \cite[Corollary 13]{AKL} and \cite[Theorems 1.1 and 4.17]{CPW} that on a smooth projective surface, there always exists an admissible flag with respect to which the Okounkov body of any $\Q$-divisor is a rational polytope. Thus, in particular, even if a pseudoeffective $\Q$-divisor is not abundant or does not have finitely generated section ring, the associated Okounkov body can still be a rational polytope with respect to some admissible flag. On the other hand, even when the given variety is a Mori dream space, the Okounkov body can be non-polyhedral for some admissible flag (see \cite[Section 3]{KLM}). \end{remark} \end{document}
\begin{document} \title{\bf The number of chains of subgroups\\ of a finite elementary abelian $p$-group} \begin{abstract} In this short note we give a formula for the number of chains of subgroups of a finite elementary abelian $p$-group. This completes our previous work \cite{5}. \end{abstract} \noindent{\bf MSC (2010):} Primary 20N25, 03E72; Secondary 20K01, 20D30. \noindent{\bf Key words:} chains of subgroups, fuzzy subgroups, finite elementary abelian $p$-groups, recurrence relations. \section{Introduction} Let $G$ be a group. A \textit{chain of subgroups} of $G$ is a set of subgroups of $G$ totally ordered by set inclusion. A chain of subgroups of $G$ is called \textit{rooted} (more exactly $G$-\textit{rooted}) if it contains $G$. Otherwise, it is called \textit{unrooted}. Notice that there is a bijection between the set of $G$-\textit{rooted} chains of subgroups of $G$ and the set of distinct fuzzy subgroups of $G$ (see e.g. \cite{5}), which is used to solve many computational problems in fuzzy group theory. \bigskip The starting point for our discussion is given by the paper \cite{5}, where a formula for the number of rooted chains of subgroups of a finite cyclic group is obtained. This leads in \cite{3} to a precise expression of the well-known central Delannoy numbers in an arbitrary dimension, and it has been simplified in \cite{2}. Some steps towards determining the number of rooted chains of subgroups of a finite elementary abelian $p$-group are also made in \cite{5}. Moreover, this counting problem has been naturally extended to non-abelian groups in other works, such as \cite{1,4}. The purpose of the current note is to improve the results of \cite{5} by giving an explicit formula for the number of rooted chains of subgroups of a finite elementary abelian $p$-group. \bigskip Given a finite group $G$, we will denote by ${\cal C}(G)$, ${\cal D}(G)$ and ${\cal F}(G)$ the collection of all chains of subgroups of $G$, of unrooted chains of subgroups of $G$ and of $G$-rooted chains of subgroups of $G$, respectively. Put $C(G)=|{\cal C}(G)|$, $D(G)=|{\cal D}(G)|$ and $F(G)=|{\cal F}(G)|$. The connections between these numbers have been established in \cite{2}, namely: \bigskip\noindent {\bf Theorem 1.} {\it Let $G$ be a finite group. Then $$F(G)=D(G)+1 \mbox{ and }\, C(G)=F(G)+D(G)=2F(G)-1\,.$$} In the following let $p$ be a prime, $n$ be a positive integer and $\mathbb{Z}_p^n$ be an elementary abelian $p$-group of rank $n$ (that is, a direct product of $n$ copies of $\mathbb{Z}_p$). First of all, we recall a well-known group theoretical result that gives the number $a_{n,p}(k)$ of subgroups of order $p^k$ in $\mathbb{Z}_p^n$, $k=0,1,...,n$. \bigskip\noindent {\bf Theorem 2.} {\it For every $k=0,1,...,n$, we have $$a_{n,p}(k)=\frac{(p^n-1)\cdots (p-1)}{(p^k-1)\cdots (p-1)(p^{n-k}-1)\cdots (p-1)}\,.$$} Our main result is the following. \bigskip\noindent {\bf Theorem 3.} {\it The number of rooted chains of subgroups of the elementary abelian $p$-group $\mathbb{Z}_p^n$ is $$F(\mathbb{Z}_p^n)=2{+}2f(n)\sum_{k=1}^{n-1}\sum_{1\leq i_1<i_2<...<i_k\leq n-1}\frac{1}{f(n{-}i_k)f(i_k{-}i_{k-1})\cdots f(i_2{-}i_1)f(i_1)}\,,$$where $f:\mathbb{N}\longrightarrow\mathbb{N}$ is the function defined by $f(0)=1$ and $f(r)=\displaystyle\prod_{s=1}^r (p^s-1)$ for all $r\in\mathbb{N}^*$.} \bigskip Obviously, explicit formulas for $C(\mathbb{Z}_p^n)$ and $D(\mathbb{Z}_p^n)$ also follow from Theorems 1 and 3.
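\bigskip The formula of Theorem 3 is straightforward to evaluate with a computer algebra system. The following short script (a sketch in Python with SymPy, included here purely as an illustration and not as part of the original computation) implements the function $f$ and the double sum of Theorem 3 literally, for $n\geq 1$, so that the values listed below can be reproduced.
\begin{verbatim}
# Sketch: symbolic evaluation of F(Z_p^n) via Theorem 3 (illustration only).
from itertools import combinations
import sympy as sp

p = sp.symbols('p')

def f(r):
    # f(0) = 1 and f(r) = (p - 1)(p^2 - 1)...(p^r - 1) for r >= 1
    return sp.Mul(*[p**s - 1 for s in range(1, r + 1)])

def F(n):
    # F(Z_p^n) = 2 + 2 f(n) * sum over 1 <= i_1 < ... < i_k <= n-1 of
    #            1 / ( f(n-i_k) f(i_k-i_{k-1}) ... f(i_2-i_1) f(i_1) )
    total = sp.Integer(0)
    for k in range(1, n):
        for chain in combinations(range(1, n), k):
            idx = (0,) + chain + (n,)
            gaps = [idx[j + 1] - idx[j] for j in range(len(idx) - 1)]
            total += sp.Integer(1) / sp.Mul(*[f(g) for g in gaps])
    return sp.expand(sp.cancel(2 + 2 * f(n) * total))

print(F(3))  # expected: 2*p**3 + 8*p**2 + 8*p + 8
\end{verbatim}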
By using a computer algebra program, we are now able to calculate the first terms of the sequence $f_n=F(\mathbb{Z}_p^n)$, $n\in\mathbb{N}$, namely: \begin{description} \item[\hspace{10mm}-] $f_0=1$; \item[\hspace{10mm}-] $f_1=2$; \item[\hspace{10mm}-] $f_2=2p+4$; \item[\hspace{10mm}-] $f_3=2p^3+8p^2+8p+8$; \item[\hspace{10mm}-] $f_4=2p^6+12p^5+24p^4+36p^3+36p^2+24p+16$. \end{description} \bigskip Finally, we remark that the above $f_3$ is in fact the number $a_{3,p}$ obtained by a direct computation in Corollary 10 of \cite{5}. \section{Proof of Theorem 3} We observe first that every rooted chain of subgroups of $\mathbb{Z}_p^n$ is of one of the following types: $$G_1 \subset G_2 \subset ...\subset G_m=\mathbb{Z}_p^n\, \mbox{ with }\, G_1\neq 1\leqno(1)$$and $$1 \subset G_2 \subset ...\subset G_m=\mathbb{Z}_p^n\,.\leqno(2)$$It is clear that the numbers of chains of types (1) and (2) are equal. So $$f_n=2x_n\,,\leqno(3)$$where $x_n$ denotes the number of chains of type (2). On the other hand, such a chain is obtained by adding $\mathbb{Z}_p^n$ to the chain $$1 \subset G_2 \subset ...\subset G_{m-1},$$where $G_{m-1}$ runs over all proper subgroups of $\mathbb{Z}_p^n$. Moreover, $G_{m-1}$ is also an elementary abelian $p$-group, say $G_{m-1}\cong\mathbb{Z}_p^k$ with $0\leq k\leq n-1$. These show that the sequence $x_n$, $n\in\mathbb{N}$, satisfies the following recurrence relation $$x_n=\sum_{k=0}^{n-1} a_{n,p}(k)x_k\,,\leqno(4)$$which is simpler than the recurrence relation found by applying the Inclusion-Exclusion Principle in Theorem 9 of \cite{5}. \bigskip Next we prove that the solution of (4) is given by $$x_n=1{+}\sum_{k=1}^{n-1}\sum_{1\leq i_1<i_2<...<i_k\leq n-1} a_{n,p}(i_k)a_{i_k,p}(i_{k-1})\cdots a_{i_2,p}(i_1)\,.\leqno(5)$$We will proceed by induction on $n$. Clearly, (5) is trivial for $n=1$. Assume that it holds for all $k<n$. One obtains $$\hspace{-20mm}x_n=\sum_{k=0}^{n-1} a_{n,p}(k)x_k=1+\sum_{k=1}^{n-1} a_{n,p}(k)x_k=$$ $$\hspace{-3mm}=1{+}\sum_{k=1}^{n-1} a_{n,p}(k)\left(1+\sum_{r=1}^{k-1}\sum_{1\leq i_1<i_2<...<i_r\leq k-1} a_{k,p}(i_r)a_{i_r,p}(i_{r-1})\cdots a_{i_2,p}(i_1)\right)=$$ $$=1{+}\sum_{k=1}^{n-1} a_{n,p}(k)+\sum_{k=1}^{n-1}a_{n,p}(k)\sum_{r=1}^{k-1}\sum_{1\leq i_1<i_2<...<i_r\leq k-1} \hspace{-1mm}a_{k,p}(i_r)a_{i_r,p}(i_{r-1})\cdots a_{i_2,p}(i_1)=$$ $$=1{+}\sum_{k=1}^{n-1} a_{n,p}(k)+\sum_{k=1}^{n-1}a_{n,p}(k)\sum_{r=1}^{n-2}\sum_{1\leq i_1<i_2<...<i_r\leq k-1} \hspace{-1mm}a_{k,p}(i_r)a_{i_r,p}(i_{r-1})\cdots a_{i_2,p}(i_1)=$$ $$=1{+}\sum_{k=1}^{n-1} a_{n,p}(k)+\sum_{k=1}^{n-1}a_{n,p}(k)\sum_{r=2}^{n-1}\sum_{1\leq i_1<i_2<...<i_{r-1}\leq k-1} \hspace{-1mm}a_{k,p}(i_{r-1})a_{i_{r-1},p}(i_{r-2})\cdots a_{i_2,p}(i_1)=$$ $$=1{+}\hspace{-3mm}\sum_{1\leq i_1\leq n-1} a_{n,p}(i_1)+\sum_{r=2}^{n-1}\sum_{k=1}^{n-1}a_{n,p}(k)\hspace{-5mm}\sum_{1\leq i_1<i_2<...<i_{r-1}\leq k-1} \hspace{-2mm}a_{k,p}(i_{r-1})a_{i_{r-1},p}(i_{r-2})\cdots a_{i_2,p}(i_1)=$$ $$\hspace{-7mm}=1{+}\hspace{-3mm}\sum_{1\leq i_1\leq n-1} a_{n,p}(i_1)+\sum_{r=2}^{n-1}\sum_{1\leq i_1<i_2<...<i_r\leq n-1} a_{n,p}(i_r)a_{i_r,p}(i_{r-1})\cdots a_{i_2,p}(i_1)=$$ $$\hspace{14mm}=1+\sum_{r=1}^{n-1}\sum_{1\leq i_1<i_2<...<i_r\leq n-1} a_{n,p}(i_r)a_{i_r,p}(i_{r-1})\cdots a_{i_2,p}(i_1)\,,$$as desired.
Since by Theorem 2 $$a_{n,p}(k)=\frac{(p^n-1)\cdots (p-1)}{(p^k-1)\cdots (p-1)(p^{n-k}-1)\cdots (p-1)}=\frac{f(n)}{f(k)f(n-k)}\,,\forall\hspace{1mm} 0\leq k\leq n\,,$$the equalities (3) and (5) imply that $$f_n=2+2f(n)\sum_{k=1}^{n-1}\sum_{1\leq i_1<i_2<...<i_k\leq n-1}\frac{1}{f(n{-}i_k)f(i_k{-}i_{k-1})\cdots f(i_2{-}i_1)f(i_1)}\,,$$completing the proof. $\scriptstyle\Box$ \begin{thebibliography}{10} \bibitem{1} Davvaz, B., Ardekani, R.K., {\it Counting fuzzy subgroups of a special class of non-abelian groups of order $p^3$}, Ars Combin. {\bf 103} (2012), 175-179. \bibitem{2} Oh, J.M., {\it The number of chains of subgroups of a finite cyclic group}, European J. Combin. {\bf 33} (2012), 259-266. \bibitem{3} T\u arn\u auceanu, M., {\it The number of fuzzy subgroups of finite cyclic groups and Delannoy numbers}, European J. Combin. {\bf 30} (2009), 283-287, doi: 10.1016/j.ejc.2007.12.005. \bibitem{4} T\u arn\u auceanu, M., {\it Classifying fuzzy subgroups of finite nonabelian groups}, Iran. J. Fuzzy Syst. {\bf 9} (2012), 33-43. \bibitem{5} T\u arn\u auceanu, M., Bentea, L., {\it On the number of fuzzy subgroups of finite abelian groups}, Fuzzy Sets and Systems {\bf 159} (2008), 1084-1096, doi: 10.1016/j.fss.2007.11.014. \end{thebibliography} \vspace*{1.5mm}\\ \hspace*{5ex}\small \begin{minipage}[t]{4cm} Marius T\u arn\u auceanu \\ Faculty of Mathematics \\ ``Al.I. Cuza'' University \\ Ia\c si, Romania \\ e-mail: {\tt [email protected]} \end{minipage} \end{document}
\begin{document} \begin{abstract} We study the uniform computational content of Ramsey's theorem in the Weihrauch lattice. Our central results provide information on how Ramsey's theorem behaves under product, parallelization and jumps. From these results we can derive a number of important properties of Ramsey's theorem. For one, the parallelization of Ramsey's theorem for cardinality $n\geq1$ and an arbitrary finite number of colors $k\geq2$ is equivalent to the $n$--th jump of weak K\H{o}nig's lemma. In particular, Ramsey's theorem for cardinality $n\geq1$ is $\SO{n+2}$--measurable in the effective Borel hierarchy, but not $\SO{n+1}$--measurable. Secondly, we obtain interesting lower bounds, for instance the $n$--th jump of weak K\H{o}nig's lemma is Weihrauch reducible to (the stable version of) Ramsey's theorem of cardinality $n+2$ for $n\geq2$. We prove that with strictly increasing numbers of colors Ramsey's theorem forms a strictly increasing chain in the Weihrauch lattice. Our study of jumps also shows that certain uniform variants of Ramsey's theorem that are indistinguishable from a non-uniform perspective play an important role. For instance, the colored version of Ramsey's theorem explicitly includes the color of the homogeneous set as output information, and the jump of this problem (but not the uncolored variant) is equivalent to the stable version of Ramsey's theorem of the next greater cardinality. Finally, we briefly discuss the particular case of Ramsey's theorem for pairs, and we provide some new separation techniques for problems that involve jumps in this context. In particular, we study uniform results regarding the relation of boundedness and induction problems to Ramsey's theorem, and we show that there are some significant differences with the non-uniform situation in reverse mathematics. \ \\ {\bf Keywords:} computable analysis, Weihrauch lattice, Ramsey's theorem. \end{abstract} \maketitle \setcounter{tocdepth}{1} \tableofcontents \pagebreak \section{Introduction} \label{sec:introduction} In this paper we study uniform computational properties of Ramsey's theorem for cardinality $n$ and $k$ colors. We briefly recall some basic definitions. By $[M]^n:=\{A\subseteq M:|A|=n\}$ we denote the set of subsets of $M$ with exactly $n$ elements. We identify $k$ with the set $\{0,...,k-1\}$ for every $k\in{\mathbb{N}}$. We also allow the case $k={\mathbb{N}}$. Any map $c:[{\mathbb{N}}]^n\to k$ with finite range is called a {\em coloring} ({\em of} $[{\mathbb{N}}]^n$). A subset $M\subseteq{\mathbb{N}}$ is called {\em homogeneous} ({\em for} $c$) if there is some $i\in k$ such that $c(A)=i$ for every $A\in[M]^n$. In this situation we write $c(M)=i$, which is understood to imply that $M$ is homogeneous. Frank P.\ Ramsey proved the following theorem \cite{Ram30}. \begin{theorem}[Ramsey's theorem 1930~\cite{Ram30}] \label{thm:Ramsey} For every coloring $c:[{\mathbb{N}}]^n\to k$ with $n,k\geq1$ there exists an infinite homogeneous set $M\subseteq{\mathbb{N}}$. \end{theorem} We will abbreviate Ramsey's theorem for cardinality $n$ and $k$ colors by $\text{\rm\sffamily RT}_{n,k}$.\footnote{We do not use the more common abbreviation $\text{\rm\sffamily RT}_k^n$ since we will use upper indices to indicate the number of jumps or products.} The computability theoretic study of Ramsey's theorem started when Specker proved that there exists a computable counterexample for Ramsey's theorem for pairs \cite{Spe71}, which shows that Ramsey's theorem cannot be proved constructively. 
\begin{theorem}[Specker 1969~\cite{Spe71}] \label{thm:Specker} There exists a computable coloring $c:[{\mathbb{N}}]^2\to k$ without a computable infinite homogeneous set $M\subseteq{\mathbb{N}}$. \end{theorem} Jockusch provided a very simple proof of Specker's theorem, and he improved Specker's result by showing the following \cite{Joc72}. \begin{theorem}[Jockusch 1972~\cite{Joc72}] \label{thm:Jockusch} For every computable coloring $c:[{\mathbb{N}}]^n\to 2$ with $n\geq1$ there exists an infinite homogeneous set $M\subseteq{\mathbb{N}}$ such that $M'\mathop{\leq_{\mathrm{T}}}\emptyset^{(n)}$. However, there exists a computable coloring $c:[{\mathbb{N}}]^n\to2$ for each $n\geq2$ without an infinite homogeneous set $M\subseteq{\mathbb{N}}$ that is computable in $\emptyset^{(n-1)}$. \end{theorem} Another cornerstone in the study of Ramsey's theorem was the cone avoidance theorem (Theorem~\cite[Theorem~2.1]{SS95}) that was originally proved by Seetapun. \begin{theorem}[Seetapun 1995~\cite{SS95}] \label{thm:Seetapun} Let $c:[{\mathbb{N}}]^2\to2$ be a coloring that is computable in $B\subseteq{\mathbb{N}}$, and let $(C_i)_i$ be a sequence of sets $C_i\subseteq{\mathbb{N}}$ such that $C_i\mathop{\not\leq_{\mathrm{T}}} B$ for all $i\in{\mathbb{N}}$. Then there exists an infinite homogeneous set $M$ for $c$ such that $C_i\mathop{\not\leq_{\mathrm{T}}} M$ for all $i\in{\mathbb{N}}$. \end{theorem} This theorem was generalized by Cholak, Jockusch and Slaman who proved in particular the following version \cite[Theorem~12.2]{CJS01}. \begin{theorem}[Cholak, Jockusch and Slaman 2001~\cite{CJS01}] \label{thm:CJS-lower} For every computable coloring $c:[{\mathbb{N}}]^n\to k$ there exists an infinite homogeneous set $M\subseteq{\mathbb{N}}$ such that $\emptyset^{(n)}\mathop{\not\leq_{\mathrm{T}}} M'$. \end{theorem} Cholak, Jockusch and Slaman also improved Jockusch's theorem (Theorem~\ref{thm:Jockusch}) for the case of Ramsey's theorem for pairs \cite{CJS01,CJS09}. \begin{theorem}[Cholak, Jockusch and Slaman 2001~\cite{CJS01}] \label{thm:CJS-upper} For every computable coloring $c:[{\mathbb{N}}]^2\to2$ there exists an infinite homogeneous set $M\subseteq{\mathbb{N}}$, which is low$_2$, i.e., such that $M''\mathop{\leq_{\mathrm{T}}}\emptyset''$. \end{theorem} \begin{figure} \caption{Ramsey's theorem for different cardinalities and colors in the Weihrauch lattice: all solid arrows indicate strong Weihrauch reductions against the direction of the arrow, all dashed arrows indicate ordinary Weihrauch reductions.} \label{fig:diagram-RTnk} \end{figure} We will make use of these and other earlier results in our uniform study of Ramsey's theorem. A substantial number of results in this article are based on the second author's PhD thesis \cite{Rak15}. The first uniform results on Ramsey's theorem were published by Dorais, Dzhafarov, Hirst, Mileti and Shafer~\cite{DDH+16} and by Dzhafarov~\cite{Dzh15,Dzh16}. Among other things they proved the following squashing theorem~\cite[Theorem~2.5]{DDH+16} that establishes a relation between products and parallelization for problems such as Ramsey's theorem. 
Here a {\em problem} $f:\subseteq{\mathbb{N}}^{\mathbb{N}}\rightrightarrows{\mathbb{N}}^{\mathbb{N}}$, i.e., a partial multi-valued function, is called {\em finitely tolerant} if there is a computable partial function $T:\subseteq{\mathbb{N}}^{\mathbb{N}}\to{\mathbb{N}}^{\mathbb{N}}$ such that for all $p,q\in{\rm dom}(f)$ and $k\in{\mathbb{N}}$ with $(\forall n\geq k)(p(n)=q(n))$ it follows that $r\in f(q)$ implies $T\langle r,k\rangle\in f(p)$. By $\langle ,\rangle$ we denote a standard pairing function on Baire space ${\mathbb{N}}^{\mathbb{N}}$. Intuitively, finite tolerance means that, given two almost identical inputs and a solution for one of them, we can compute a solution for the other input. The squashing theorem relates products $g\times f$ to parallelizations $\widehat{g}$ of problems (these notions will be defined in the next section). \begin{theorem}[Squashing theorem \cite{DDH+16}] \label{thm:squashing} For $f,g:\subseteq{\mathbb{N}}^{\mathbb{N}}\rightrightarrows{\mathbb{N}}^{\mathbb{N}}$ we obtain: \begin{enumerate} \item If ${\rm dom}(f)={\mathbb{N}}^{\mathbb{N}}$ and $f$ is finitely tolerant, then $g\times f\mathop{\leq_{\mathrm{W}}} f\Longrightarrow \widehat{g}\mathop{\leq_{\mathrm{W}}} f$. \item If ${\rm dom}(f)=2^{\mathbb{N}}$ and $f$ is finitely tolerant, then $g\times f\mathop{\leq_{\mathrm{sW}}} f\Longrightarrow \widehat{g}\mathop{\leq_{\mathrm{sW}}} f$. \end{enumerate} \end{theorem} This theorem allowed the authors of \cite{DDH+16} to prove that Ramsey's theorem for strictly increasing numbers of colors forms a strictly increasing chain with respect to {\em strong} Weihrauch reducibility~\cite[Theorem~3.1]{DDH+16}: \[\text{\rm\sffamily RT}_{n,2}\mathop{<_{\mathrm{sW}}}\text{\rm\sffamily RT}_{n,3}\mathop{<_{\mathrm{sW}}}\text{\rm\sffamily RT}_{n,4}\mathop{<_{\mathrm{sW}}}...\] However, they left it as an open question~\cite[Question~7.1]{DDH+16} whether an analogous statement also holds for ordinary Weihrauch reducibility. We will be able to use our Theorem~\ref{thm:products} on products to answer this question in the affirmative (see Theorem~\ref{thm:increasing-colors}). Independently, similar results were obtained by Hirschfeldt and Jockusch~\cite{HJ16} and Patey~\cite{Pat16a}. Altogether, the diagram in Figure~\ref{fig:diagram-RTnk} displays how Ramsey's theorem for different cardinalities and colors is situated in the Weihrauch lattice. We briefly describe the further organization of this article. In the following Section~\ref{sec:preliminaries} we briefly present the basic concepts and definitions related to the Weihrauch lattice, and we collect a number of facts that are useful for our study. In Section~\ref{sec:uniform} we introduce the uniform versions of Ramsey's theorem that we are going to consider in this study, and we establish a core lower bound in Theorem~\ref{thm:lower-bound} that will be used to derive almost all other lower bound results. We also prove key results on products in Theorem~\ref{thm:products} and on parallelization in Theorem~\ref{thm:delayed-parallelization} that also lead to our main lower bound results, which are formulated in Corollary~\ref{cor:discrete-lower-bounds} and Corollary~\ref{cor:delayed-parallelization}. In Section~\ref{sec:upper} we discuss jumps of Ramsey's theorem together with upper bound results.
In particular, we prove Theorem~\ref{thm:CRT-SRT}, which is our main result on jumps and shows that the stable version of Ramsey's theorem can be seen as the jump of the colored version of the next smaller cardinality. These results lead to Corollary~\ref{cor:induction}, which shows that composition with one jump of weak K\H{o}nig's lemma is sufficient to bring Ramsey's theorem from one cardinality to the next greater cardinality. From this result we can conclude our main upper bound result in Corollary~\ref{cor:upper-bound}, which shows that Ramsey's theorem of cardinality $n$ is reducible to the $n$--th jump of weak K\H{o}nig's lemma. Together with our lower bound results this finally leads to the classification of the parallelization of Ramsey's theorem of cardinality $n$ as the $n$--th jump of weak K\H{o}nig's lemma in Corollary~\ref{cor:parallelization} and to the classification of the exact Borel degree of Ramsey's theorem in Corollary~\ref{cor:Borel-measurability}. We also use these results to obtain the above mentioned result on increasing numbers of colors. In Section~\ref{sec:pairs} we briefly discuss the special case of Ramsey's theorem for pairs, we summarize some known results, provide some new insights and formulate some open questions. In Section~\ref{sec:separation} we develop separation techniques for jumps, and we apply them to separate some uniform versions of Ramsey's theorem. Finally, in Section~\ref{sec:boundedness} we discuss the relation of closed and compact choice problems to Ramsey's theorem, which corresponds to the discussion of boundedness and induction principles in reverse mathematics. \section{Preliminaries} \label{sec:preliminaries} We use the theory of the Weihrauch lattice (as it has been developed in \cite{BBP12,BG11a,BG11,BGH15a,BGM12,BHK15}) as a framework for the uniform classification of the computational content of mathematical problems. We present a very brief introduction of the main concepts and refer the reader to \cite{BGH15a} and \cite{BHK15} for a more detailed introduction. Formally, the Weihrauch lattice is formed by equivalence classes of partial multi-valued functions $f:\subseteq X\rightrightarrows Y$ on represented spaces $X,Y$, as defined below. We will simply call such functions {\em problems} here, and they are, in fact, computational challenges in the sense that for every $x\in{\rm dom}(f)$ the goal is to find {\em some} $y\in f(x)$. In this case ${\rm dom}(f)$ contains the admissible instances $x$ of the problem, and for each instance $x$ the set $f(x)$ contains the corresponding solutions. For problems $f:\subseteq X\rightrightarrows Y$, $g:\subseteq Y\rightrightarrows Z$ we define the {\em composition} $g\circ f: \subseteq X\rightrightarrows Z$ by $g\circ f(x):=\{z\in Z:(\exists y\in f(x))\;z\in g(y)\}$, where it is crucial to use the domain ${\rm dom}(g\circ f):=\{x\in X:f(x)\subseteq{\rm dom}(g)\}$. We also denote the composition briefly by $gf$. A {\em represented space} $(X,\delta)$ is a set $X$ together with a representation, i.e., a surjective partial map ${\delta:\subseteq{\mathbb{N}}^{\mathbb{N}}\to X}$ that assigns {\em names} $p\in{\mathbb{N}}^{\mathbb{N}}$ to points $\delta(p)=x\in X$. Given a problem $f:\subseteq X\rightrightarrows Y$ on represented spaces $(X,\delta_X)$ and $(Y,\delta_Y)$ we say that a partial function $F:\subseteq {\mathbb{N}}^{\mathbb{N}}\to{\mathbb{N}}^{\mathbb{N}}$ {\em realizes} $f$, in symbols $F\vdash f$, if $\delta_YF(p)\in f\delta_X(p)$ for all $p\in{\rm dom}(f\delta_X)$. 
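As a simple illustration of realizers (the particular representation and problem here are chosen ad hoc and are not needed in what follows), represent ${\mathbb{N}}$ by $\delta_{\mathbb{N}}:{\mathbb{N}}^{\mathbb{N}}\to{\mathbb{N}},p\mapsto p(0)$, and consider the problem $f:{\mathbb{N}}\rightrightarrows{\mathbb{N}}$ with $f(n):=\{m\in{\mathbb{N}}:m\geq n\}$. Then the function $F:{\mathbb{N}}^{\mathbb{N}}\to{\mathbb{N}}^{\mathbb{N}}$ with \[F(p)(i):=p(0)+1\mbox{ for all }i\in{\mathbb{N}}\] is a realizer of $f$, since $\delta_{\mathbb{N}}F(p)=p(0)+1\in f(p(0))=f\delta_{\mathbb{N}}(p)$ holds for all $p\in{\mathbb{N}}^{\mathbb{N}}$.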
Notions such as computability and continuity can now be transferred from Baire space to represented spaces via realizers. For instance, $f$ is called {\em computable}, if it admits a computable realizer $F$. We refer the reader to \cite{BHW08,Wei00} for further details. The intuition behind Weihrauch reducibility is that $f\mathop{\leq_{\mathrm{W}}} g$ holds if there is a computational procedure for solving~$f$ during which a single application of the computational resource~$g$ is allowed. There are actually two slightly different formal versions of this reduction, which are both needed. \begin{definition}[Weihrauch reducibility] Let $f: \subseteq X\rightrightarrows Y$ and $g: \subseteq W\rightrightarrows Z$ be problems. \begin{enumerate} \item $f$ is called {\em Weihrauch reducible} to $g$, in symbols $f\mathop{\leq_{\mathrm{W}}} g$, if there are computable ${K: \subseteq X \rightrightarrows W}$, $H: \subseteq X\times Z\rightrightarrows Y$ such that $\emptyset\not=H(x,gK(x))\subseteq f(x)$ for all $x\in{\rm dom}(f)$. \item $f$ is called {\em strongly Weihrauch reducible} to $g$, in symbols $f\mathop{\leq_{\mathrm{sW}}} g$, if there are computable $K: \subseteq X\rightrightarrows W$, $H: \subseteq Z\rightrightarrows Y$ such that $\emptyset\not=HgK(x)\subseteq f(x)$ for all $x\in{\rm dom}(f)$. \end{enumerate} \end{definition} In terms of realizers and functions on Baire space (strong) Weihrauch reducibility can also be rephrased as follows \cite[Lemma~4.5]{GM09}: \begin{itemize} \item $f\mathop{\leq_{\mathrm{W}}} g\iff(\exists$ computable $H,K:\subseteq{\mathbb{N}}^{\mathbb{N}}\to{\mathbb{N}}^{\mathbb{N}})(\forall G\vdash g)\; H\langle {\rm id},GK\rangle\vdash f$. \item $f\mathop{\leq_{\mathrm{sW}}} g\iff(\exists$ computable $H,K:\subseteq{\mathbb{N}}^{\mathbb{N}}\to{\mathbb{N}}^{\mathbb{N}})(\forall G\vdash g)\; HGK\vdash f$. \end{itemize} Here ${\rm id}:{\mathbb{N}}^{\mathbb{N}}\to{\mathbb{N}}^{\mathbb{N}}$ denotes the identity of Baire space. A relation between Weihrauch reducibility and strong Weihrauch reducibility can be established with the notion of a cylinder. A problem $f$ is called a {\em cylinder} if ${\rm id}\times f\mathop{\leq_{\mathrm{sW}}} f$, which is equivalent to the property that $g\mathop{\leq_{\mathrm{W}}} f\iff g\mathop{\leq_{\mathrm{sW}}} f$ holds for all problems $g$ \cite[Corollary~3.6]{BG11}. Weihrauch reducibility induces a lattice with a rich and very natural algebraic structure. We briefly summarize some of these algebraic operations for problems ${f: \subseteq X\rightrightarrows Y}$ and $g: \subseteq W\rightrightarrows Z$: \begin{itemize} \item $f\times g:\subseteq X\times W\rightrightarrows Y\times Z$ is the {\em product} of $f$ and $g$ and represents the parallel evaluation of problem $f$ on some input $x$ and $g$ on some input $w$. \item $f\sqcup g:\subseteq X\sqcup W\rightrightarrows Y\sqcup Z$ is the {\em coproduct} of $f$ and $g$ and represents the alternative evaluation of $f$ on some input $x$ or $g$ on some input $w$ (where $X\sqcup W$ and $Y\sqcup Z$ denote the disjoint unions). \item $f*g:=\sup\{f_0\circ g_0: f_0\mathop{\leq_{\mathrm{W}}} f\mbox{ and }g_0\mathop{\leq_{\mathrm{W}}} g\}$ is the {\em compositional product} and represents the consecutive usage of the problem $f$ after the problem $g$. \item $f^*:=\bigsqcup_{n=0}^\infty f^n$ is the {\em finite parallelization} and allows an evaluation of the $n$--fold product~$f^n$ for some arbitrary given $n\in{\mathbb{N}}$. 
\item $\widehat{f}:\subseteq X^{\mathbb{N}}\rightrightarrows Y^{\mathbb{N}}$ is the {\em parallelization} of $f$ and allows a parallel evaluation of countably many instances of $f$. \item $f':\subseteq X\rightrightarrows Y$ denotes the {\em jump} of $f$, which is formally the same problem as $f$, but the input representation $\delta_X$ of $X$ is replaced by its jump $\delta_X':=\delta_X\circ\lim$. \end{itemize} Here $\lim: \subseteq{\mathbb{N}}^{\mathbb{N}}\to{\mathbb{N}}^{\mathbb{N}},\langle p_0,p_1,p_2,...\rangle\mapsto\lim_{i\to\infty}p_i$ is the limit map on Baire space for $p_i\in{\mathbb{N}}^{\mathbb{N}}$, where $\langle\ \rangle$ denotes a standard infinite tupling function on Baire space. We recall that $\langle n,k\rangle:=\frac{1}{2}(n+k+1)(n+k)+k$, and inductively this can be extended to finite tuples by $\langle i_{n+1},i_n,...,i_0\rangle:=\langle i_{n+1},\langle i_n,...,i_0\rangle\rangle$ for all $n\geq1$ and $i_0,...,i_{n+1}\in{\mathbb{N}}$. Likewise, we define $\langle p_0,p_1,p_2,...\rangle\in{\mathbb{N}}^{\mathbb{N}}$ for $p_i\in{\mathbb{N}}^{\mathbb{N}}$ by $\langle p_0,p_1,p_2,...\rangle\langle n,k\rangle:=p_n(k)$. Similarly to $\lim$, we define the limit $\lim_{2^{\mathbb{N}}}:\subseteq2^{\mathbb{N}}\to2^{\mathbb{N}}$ on Cantor space $2^{\mathbb{N}}$. For finite sets $X\subseteq{\mathbb{N}}$ we denote by $\lim_X:\subseteq X^{\mathbb{N}}\to X$ the ordinary limit operation with respect to the discrete topology. By $f^{(n)}$ we denote the {\em $n$--fold jump} of $f$, and we denote the {\em $n$--fold compositional product} of $f$ with itself by $f^{[n]}$, i.e., $f^{[0]}={\rm id}$, $f^{[n+1]}:=f^{[n]}*f$. By $f^n:\subseteq X^n\rightrightarrows Y^n$ we denote the {\em $n$--fold product} $f\times ...\times f$ of $f$. One can also define a sum operation $\sqcap$ that plays the role of the infimum for ordinary Weihrauch reducibility $\mathop{\leq_{\mathrm{W}}}$. But we are not going to use this operation here. Altogether, $\mathop{\leq_{\mathrm{W}}}$ induces a lattice structure, and the resulting lattice is called {\em Weihrauch lattice}. This lattice is not complete as infinite suprema do not need to exist, but the supremum in the definition of $f*g$ is even a maximum that always exists \cite[Proposition~31]{BP16}, and the compositional product $*$ is associative \cite[Corollary~17]{BP16}. Finite parallelization and parallelization are closure operators in the Weihrauch lattice. Further information on the algebraic structure can be found in \cite{BP16}. For metric spaces $X$ we denote by $\SO{n}$ the corresponding class of Borel subsets of $X$ of level $n\geq1$ \cite{Kec95}. We recall that a partial function $f:\subseteq X\to Y$ on metric spaces $X$ and $Y$ is called {\em $\SO{n}$--measurable}, if for every open set $U\subseteq Y$ there exists a set $V\in\SO{n}$ such that $f^{-1}(U)=V\cap{\rm dom}(f)$, i.e., such that $f^{-1}(U)$ is a $\SO{n}$--set relative to the domain of $f$. In the case of $n=1$ we obtain exactly continuity. Likewise we define $f$ to be {\em effectively $\SO{n}$--measurable} if a corresponding $V$ can be uniformly computed from a given $U$ \cite[Definition~3.5]{Bra05}. In this case we obtain exactly computability for $n=1$.
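We illustrate these measurability notions with a small example that is only meant as an illustration and is not needed in the following: consider the limit operation $\lim_k:\subseteq k^{\mathbb{N}}\to k$ for a finite $k\geq2$ as introduced above. For every $i\in k$ we obtain \[\lim\nolimits_k^{-1}(\{i\})=\{p\in k^{\mathbb{N}}:(\exists N)(\forall n\geq N)\;p(n)=i\}=\bigcup_{N\in{\mathbb{N}}}\{p\in k^{\mathbb{N}}:(\forall n\geq N)\;p(n)=i\},\] which is a countable union of closed sets, i.e., a $\SO{2}$--set, and such a description can be computed from the given open set; hence $\lim_k$ is effectively $\SO{2}$--measurable. On the other hand, $\lim_k$ is not continuous, since this preimage is not open relative to ${\rm dom}(\lim_k)$: every basic open neighborhood of a sequence that is eventually constant with value $i$ also contains convergent sequences with a limit different from $i$.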
The notion of (effective) $\SO{n}$--measurability can be transferred to problems $f:\subseteq X\rightrightarrows Y$ via realizers, i.e., $f$ is called {\em (effectively) $\SO{n}$--measurable}, if it has a realizer $F:\subseteq{\mathbb{N}}^{\mathbb{N}}\to{\mathbb{N}}^{\mathbb{N}}$ that is (effectively) $\SO{n}$--measurable.\footnote{This definition yields a conservative extension of the notion of (effective) measurability to multi-valued functions on represented spaces. It coincides for single-valued functions on computable Polish spaces with the usual notion of (effective) measurability as it is known from descriptive set theory~\cite{Bra05}.} It is easy to see that Weihrauch reducibility preserves (effective) $\SO{n}$--measurability downwards. \begin{fact} \label{fact:Borel} Let $f,g$ be problems and $n,k\geq1$. Then \begin{enumerate} \item $f$ (effectively) $\SO{n}$--measurable and $g$ (effectively) $\SO{k}$--measurable $\Longrightarrow f*g$ (effectively) $\SO{n+k-1}$--measurable. \item $f,g$ (effectively) $\SO{n}$--measurable $\Longrightarrow f\times g$ (effectively) $\SO{n}$--measurable. \item $f\mathop{\leq_{\mathrm{W}}} g$ and $g$ (effectively) $\SO{n}$--measurable $\Longrightarrow f$ (effectively) $\SO{n}$--measurable. \item $f\mathop{\leq_{\mathrm{W}}}\lim^{[n]}\iff f$ is effectively $\SO{n+1}$--measurable. \end{enumerate} \end{fact} \begin{proof} (1) and (2) follow from \cite[Proposition~3.8]{Bra05}. (3) follows from (1) and (2) since $H\langle {\rm id},GK\rangle$ is (effectively) $\SO{n}$--measurable, if $G:\subseteq{\mathbb{N}}^{\mathbb{N}}\to{\mathbb{N}}^{\mathbb{N}}$ is so and the functions $H,{K:\subseteq{\mathbb{N}}^{\mathbb{N}}\to{\mathbb{N}}^{\mathbb{N}}}$ are computable (see also \cite[Proposition~5.2]{Bra05}). (4) Here ``$\Longrightarrow$'' follows from (3) since $\lim$ is effectively $\SO{2}$--measurable \cite[Proposition~9.1]{Bra05}, and hence $\lim^{[n]}$ is effectively $\SO{n+1}$--measurable by (1). The inverse implication ``$\Longleftarrow$'' follows with the help of the effective Banach-Hausdorff-Lebesgue theorem \cite[Corollary~9.6]{Bra05} applied to the corresponding realizers on Baire space. \end{proof} The last mentioned item can also be expressed by saying that $\lim^{[n]}$ is {\em effectively $\SO{n+1}$--complete} in the Weihrauch lattice. An important problem in the Weihrauch lattice is {\em closed choice} ${\mbox{\rm\sffamily C}_X: \subseteq{\mathcal A}_-(X)\rightrightarrows X}$, ${A\mapsto A}$, which maps every closed set $A\subseteq X$ to its points. By ${\mathcal A}_-(X)$ we denote the set of closed subsets of a (computable) metric space $X$ with respect to negative information. This means that a closed set is essentially described by enumerating basic open balls that exhaust its complement. Intuitively, closed choice $\mbox{\rm\sffamily C}_X$ is the following problem: given a closed set $A$ by a description that lists everything that does not belong to $A$, find a point $x\in A$ (see \cite{BBP12} for further information and precise definitions). By $\text{\rm\sffamily BWT}_X:\subseteq X^{\mathbb{N}}\rightrightarrows X$ we denote the {\em Bolzano-Weierstra\ss{} theorem} for the space $X$, which is the problem: given a sequence $(x_n)_n$ with a compact closure, find a cluster point of this sequence. This problem has been introduced and studied in detail in \cite{BGM12}. If $X$ itself is compact, then this problem is total.
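For a concrete example, which is only included as an illustration, consider the two-point space $2=\{0,1\}$, for which $\text{\rm\sffamily BWT}_2$ is total: the alternating sequence $p=0,1,0,1,...$ is not in the domain of $\lim_2$, but it is an admissible input for $\text{\rm\sffamily BWT}_2$ with \[\text{\rm\sffamily BWT}_2(p)=\{0,1\},\] i.e., either value is an acceptable solution. On the domain of $\lim_2$ the two problems coincide, since a convergent sequence has its limit as its only cluster point.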
We are particularly interested in finite versions of the Bolzano-Weierstra\ss{} theorem here. We recall that we identify $k\in{\mathbb{N}}$ with the set $X=\{0,...,k-1\}$ in the following. By $\text{\rm\sffamily WKL}$ we denote {\em weak K\H{o}nig's lemma}, which is the problem $\text{\rm\sffamily WKL}:\subseteq {\rm Tr}\rightrightarrows2^{\mathbb{N}},T\mapsto[T]$ that maps every infinite binary tree $T$ to the set of its infinite paths. We summarize a number of facts that are of particular importance for us. Here and in the following we will occasionally use that jumps are monotone with respect to strong Weihrauch reducibility, i.e., $f\mathop{\leq_{\mathrm{sW}}} g\Longrightarrow f'\mathop{\leq_{\mathrm{sW}}} g'$ \cite[Proposition~5.6(2)]{BGM12}. \begin{fact} \label{fact:WKL-BWT} We obtain \begin{enumerate} \item $\text{\rm\sffamily BWT}_k^{(n-1)}\mathop{\equiv_{\mathrm{sW}}}\mbox{\rm\sffamily C}_k^{(n)}$ for all $k\in{\mathbb{N}}$, $n\geq1$, \item $\text{\rm\sffamily WKL}^{(n)}\mathop{\equiv_{\mathrm{sW}}}\widehat{\mbox{\rm\sffamily C}_k^{(n)}}$ for all $k\geq2$ and $n\in{\mathbb{N}}$, \item $\lim^{(n)}\mathop{\equiv_{\mathrm{sW}}}\widehat{\lim_k^{(n)}}$ for all $k\geq 2$, $k={\mathbb{N}}$ and $n\in{\mathbb{N}}$, \item $\lim_k^{(n)}\mathop{<_{\mathrm{sW}}}\text{\rm\sffamily BWT}_k^{(n)}$ and $\lim_k^{(n)}\mathop{<_{\mathrm{W}}}\text{\rm\sffamily BWT}_k^{(n)}$ for all $k\geq2$, $k={\mathbb{N}}$ and $n\in{\mathbb{N}}$, \item $\text{\rm\sffamily WKL}^{(n)}\mathop{<_{\mathrm{sW}}}\lim^{(n)}\mathop{<_{\mathrm{sW}}}\text{\rm\sffamily WKL}^{(n+1)}$ and $\text{\rm\sffamily WKL}^{(n)}\mathop{<_{\mathrm{W}}}\lim^{(n)}\mathop{<_{\mathrm{W}}}\text{\rm\sffamily WKL}^{(n+1)}$ for all $n\in{\mathbb{N}}$, \item $(\text{\rm\sffamily WKL}')^{[n]}\mathop{\equiv_{\mathrm{W}}}\text{\rm\sffamily WKL}^{(n)}$ and $\lim^{[n]}\mathop{\equiv_{\mathrm{W}}}\lim^{(n-1)}$ for all $n\geq1$. \end{enumerate} \end{fact} \begin{proof} (1) follows from \cite[Theorem~9.4]{BGM12} (noting that the cluster point problem used there is identical to the Bolzano-Weierstra\ss{} theorem in the finite case). (2) follows since $\text{\rm\sffamily WKL}\mathop{\equiv_{\mathrm{sW}}}\widehat{\mbox{\rm\sffamily C}_k}$, which has essentially been proved in \cite[Theorem~8.2]{BG11}, where $\text{\rm\sffamily LLPO}$ is just a reformulation of $\mbox{\rm\sffamily C}_2$, and from the fact that jumps and parallelizations commute by \cite[Proposition~5.7(3)]{BGM12}. For (3) see \cite[Fact~3.5]{BGM12}. (4) The reductions follow since the limit can be seen as a special case of the Bolzano-Weierstra\ss{} theorem \cite[Proposition~11.21]{BGM12} and they are strict since $\lim_k^{(n)}$ is $\SO{n+2}$--measurable, but $\text{\rm\sffamily BWT}_k^{(n)}$ for $k\geq2$ is not, see the proof of \cite[Proposition~9.1]{BGH15a}. The positive parts of the reductions in (5) are easy to see, and the strictness follows from (relativized versions of) the low basis theorem and (relativized versions of) the Kleene tree construction, respectively (see also \cite{BBP12,BGM12}). (6) The statement for $\text{\rm\sffamily WKL}$ follows by induction from $\text{\rm\sffamily WKL}^{(n+1)}\mathop{\equiv_{\mathrm{W}}}\text{\rm\sffamily WKL}^{(n)}*\text{\rm\sffamily WKL}'$, where ``$\mathop{\leq_{\mathrm{W}}}$'' holds since $\lim\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily WKL}'$ and the converse reduction follows from \cite[Theorem~8.13]{BGM12}. The statement for $\lim$ follows for instance from \cite[Corollary~5.16]{BGM12} since $\lim^{(n)}$ is a cylinder.
\end{proof} \section{The Uniform Scenario, Lower Bounds and Products} \label{sec:uniform} In this section we introduce the uniform versions of Ramsey's theorem that we are going to study, and we prove some basic facts about them. While in non-uniform settings such as reverse mathematics \cite{Sim09} certain information on infinite homogeneous sets can just be assumed to be available, we need to make this more explicit. For instance, it turned out to be useful to consider an enriched version $\mbox{\rm\sffamily C}RT_{n,k}$ of Ramsey's theorem that provides additional information on the color of the produced infinite homogeneous set. In \cite{CJS01} the stable version of Ramsey's theorem $\text{\rm\sffamily SRT}_{n,k}$ was introduced, which is a restriction of Ramsey's theorem that we consider too. We need to introduce some notation first. For every $n\geq1$ we assume that we use some total standard numbering $\vartheta_n:{\mathbb{N}}\to[{\mathbb{N}}]^n$ that can be defined, for instance, by $\vartheta_n\langle i_0,i_1,...,i_{n-1}\rangle:= \left\{k+\sum_{j=0}^{k}i_j:k=0,...,n-1\right\}$ for all $i_0,...,i_{n-1}\in{\mathbb{N}}$, i.e., the set $\vartheta_n\langle i_0,i_1,...,i_{n-1}\rangle$ contains the numbers $i_0<i_0+i_1+1<i_0+i_1+i_2+2<...<i_0+i_1+...+i_{n-1}+n-1$. However, we will not make any technical use of this specific definition of $\vartheta_n$. Occasionally, we use the notation $\{i_0<i_1<...<i_{n-1}\}$ for a set $\{i_0,i_1,...,i_{n-1}\}\in[{\mathbb{N}}]^n$ with the additional property that $i_0<i_1<...<i_{n-1}$. By ${\mathcal C}_{n,k}$ we denote the set of colorings $c:[{\mathbb{N}}]^n\to k$, which is represented such that $p\in{\mathbb{N}}^{\mathbb{N}}$ is a name for $c$, if $p(i)=c(\vartheta_n(i))$ (this is equivalent to using the natural function space representation $[\vartheta_n\to{\rm id}_k]$ as known in computable analysis \cite{BHW08,Wei00}). By ${\mathcal C}_{n,{\mathbb{N}}}$ we denote the set of all colorings $c:[{\mathbb{N}}]^n\to{\mathbb{N}}$ with a finite range, represented analogously. By ${\mathcal H}_c$ we denote the set of infinite homogeneous sets $M\subseteq{\mathbb{N}}$ for the coloring $c$. A coloring $c:[{\mathbb{N}}]^n\to k$ is called {\em stable}, if $\lim_{i\to\infty}c(A\cup\{i\})$ exists for all $A\in[{\mathbb{N}}]^{n-1}$. The expression $k\geq a,{\mathbb{N}}$ with $a\in{\mathbb{N}}$ means $k\in\{x\in{\mathbb{N}}:x\geq a\}\cup\{{\mathbb{N}}\}$. \begin{definition}[Uniform variants of Ramsey's theorem] For all $n\geq 1$ and $k\geq 1,{\mathbb{N}}$ we define \begin{enumerate} \item $\text{\rm\sffamily RT}_{n,k}:{\mathcal C}_{n,k}\rightrightarrows2^{\mathbb{N}},\text{\rm\sffamily RT}_{n,k}(c):={\mathcal H}_c$, \item $\mbox{\rm\sffamily C}RT_{n,k}:{\mathcal C}_{n,k}\rightrightarrows k\times2^{\mathbb{N}},\mbox{\rm\sffamily C}RT_{n,k}(c):=\{(c(M),M):M\in{\mathcal H}_c\}$, \item $\text{\rm\sffamily SRT}_{n,k}:\subseteq{\mathcal C}_{n,k}\rightrightarrows2^{\mathbb{N}},\text{\rm\sffamily SRT}_{n,k}(c):=\text{\rm\sffamily RT}_{n,k}(c)$,\\ where ${\rm dom}(\text{\rm\sffamily SRT}_{n,k}):=\{c\in{\mathcal C}_{n,k}:c$ stable$\}$, \item $\mbox{\rm\sffamily C}SRT_{n,k}:\subseteq{\mathcal C}_{n,k}\rightrightarrows k\times 2^{\mathbb{N}},\mbox{\rm\sffamily C}SRT_{n,k}(c):=\{(c(M),M):M\in{\mathcal H}_c\}$,\\ where ${\rm dom}(\mbox{\rm\sffamily C}SRT_{n,k}):=\{c\in{\mathcal C}_{n,k}:c$ stable$\}$, \item $\text{\rm\sffamily RT}_{n,+}:=\bigsqcup_{k\geq1}\text{\rm\sffamily RT}_{n,k}$, $\text{\rm\sffamily RT}:=\bigsqcup_{n\geq1}\text{\rm\sffamily RT}_{n,+}$.
\end{enumerate} \end{definition} All formalized versions of Ramsey's theorem mentioned here are well-defined by Ramsey's theorem~\ref{thm:Ramsey}. We call $n$ the {\em cardinality} of the respective version of Ramsey's theorem. Here $\mbox{\rm\sffamily C}RT_{n,k}$ enriches $\text{\rm\sffamily RT}_{n,k}$ by the information on the color of the resulting infinite homogeneous set, and $\text{\rm\sffamily SRT}_{n,k}$ is a restriction of $\text{\rm\sffamily RT}_{n,k}$ defined only for stable colorings. The coproduct $\text{\rm\sffamily RT}_{n,+}$ as well as $\text{\rm\sffamily RT}_{n,{\mathbb{N}}}$ can both be seen as different uniform versions of the principle $(\forall k)\;\text{\rm\sffamily RT}_{n,k}$ that is usually denoted by $\text{\rm\sffamily RT}^n_{<\infty}$ in reverse mathematics (see, e.g., \cite{Hir15}). In the case of $\text{\rm\sffamily RT}_{n,{\mathbb{N}}}$ the finite number of colors is left unspecified, whereas in the case of $\text{\rm\sffamily RT}_{n,+}$, the number of colors is an input parameter. We emphasize that we use Cantor space $2^{\mathbb{N}}$, i.e., the infinite homogeneous sets $M\in{\mathcal H}_c$ are represented via their characteristic functions $\chi_M:{\mathbb{N}}\to\{0,1\}$. However, by definition any infinite subset $A\subseteq M$ of an infinite homogeneous set $M\in{\mathcal H}_c$ is in ${\mathcal H}_c$ too, and given an enumeration of an infinite set $M$, we can find a characteristic function $\chi_A$ of an infinite subset $A\subseteq M$. This means that we can equivalently think about sets in $2^{\mathbb{N}}$ as being represented via enumerations.\footnote{More formally, we could equivalently consider the output space of $\text{\rm\sffamily RT}_{n,k}$ and its variants as ${\mathcal A}_+({\mathbb{N}})$ or ${\mathcal A}({\mathbb{N}})$, i.e., as space of subsets of ${\mathbb{N}}$ equipped with positive or full information, respectively, which corresponds topologically to the lower Fell topology and the Fell topology, respectively.} We obtain the following obvious strong reductions between the different versions of Ramsey's theorem. \begin{lemma}[Basic reductions] \label{lem:basic-reductions} $\text{\rm\sffamily SRT}_{n,k}\mathop{\leq_{\mathrm{sW}}}\text{\rm\sffamily RT}_{n,k}\mathop{\leq_{\mathrm{sW}}}\mbox{\rm\sffamily C}RT_{n,k}$ and \linebreak $\text{\rm\sffamily SRT}_{n,k}\mathop{\leq_{\mathrm{sW}}}\mbox{\rm\sffamily C}SRT_{n,k}\mathop{\leq_{\mathrm{sW}}}\mbox{\rm\sffamily C}RT_{n,k}$ for all $n\geq1$ and $k\geq1,{\mathbb{N}}$. \end{lemma} We note that the colored versions of Ramsey's theorem are Weihrauch equivalent to the corresponding uncolored versions. This is because given a coloring $c$ and an infinite homogeneous set $M\in{\mathcal H}_c$, we can easily compute $c(M)$ by choosing some points $i_0<i_1<...<i_{n-1}$ in $M$ and by computing $c\{i_0,i_1,...,i_{n-1}\}$. Hence, we obtain the following corollary. \begin{corollary} \label{cor:basic-equivalences} $\text{\rm\sffamily RT}_{n,k}\mathop{\equiv_{\mathrm{W}}}\mbox{\rm\sffamily C}RT_{n,k}$ and $\text{\rm\sffamily SRT}_{n,k}\mathop{\equiv_{\mathrm{W}}}\mbox{\rm\sffamily C}SRT_{n,k}$ for all $n\geq1$, $k\geq1,{\mathbb{N}}$. \end{corollary} \begin{figure} \caption{The degree of Ramsey's theorem for fixed $n\geq2$ and $k\geq2,{\mathbb{N}}$.} \label{fig:diagram-RT} \end{figure} The diagram in Figure~\ref{fig:diagram-RT} illustrates the situation, and it displays further information on lower and upper bounds that is justified by proofs that we will only provide step by step.
The diagram illustrates the non-trivial case $n\geq2$, $k\geq2,{\mathbb{N}}$, whereas the bottom cases where $n=1$ or $k=1$ are described in the following result. \begin{proposition}[The bottom cases] \label{prop:bottom} Let $n\geq1$ and $k\geq 2,{\mathbb{N}}$. Then we obtain \begin{enumerate} \item $\mbox{\rm\sffamily C}SRT_{n,1}\mathop{\equiv_{\mathrm{sW}}}\text{\rm\sffamily SRT}_{n,1}\mathop{\equiv_{\mathrm{sW}}}\text{\rm\sffamily RT}_{n,1}\mathop{\equiv_{\mathrm{sW}}}\mbox{\rm\sffamily C}RT_{n,1}$ are computable and not cylinders, \item $\lim_k\mathop{\equiv_{\mathrm{W}}}\text{\rm\sffamily SRT}_{1,k}\mathop{<_{\mathrm{W}}}\text{\rm\sffamily BWT}_k\mathop{\equiv_{\mathrm{W}}}\text{\rm\sffamily RT}_{1,k}$, \item $\lim_k\mathop{\leq_{\mathrm{sW}}}\mbox{\rm\sffamily C}SRT_{1,k}$ and $\text{\rm\sffamily BWT}_k\mathop{\leq_{\mathrm{sW}}}\mbox{\rm\sffamily C}RT_{1,k}$. \end{enumerate} \end{proposition} \begin{proof} We note that $\mbox{\rm\sffamily C}SRT_{n,1},\text{\rm\sffamily SRT}_{n,1},\text{\rm\sffamily RT}_{n,1}$ and $\mbox{\rm\sffamily C}RT_{n,1}$ can yield any infinite subset of ${\mathbb{N}}$ (together with the color $0$ in the case of $\mbox{\rm\sffamily C}SRT_{n,1},\mbox{\rm\sffamily C}RT_{n,1}$), and hence they are all constant as multi-valued functions and thus computable and strongly equivalent to each other. Since the identity cannot be strongly reduced to constant multi-valued problems, it follows that none of the aforementioned problems is a cylinder. It is easy to see that $\lim_k\mathop{\equiv_{\mathrm{W}}}\text{\rm\sffamily SRT}_{1,k}$ and $\text{\rm\sffamily BWT}_k\mathop{\equiv_{\mathrm{W}}}\text{\rm\sffamily RT}_{1,k}$, whereas $\lim_k\mathop{<_{\mathrm{W}}}\text{\rm\sffamily BWT}_k$ is given by Fact~\ref{fact:WKL-BWT}(4). The strong reductions $\lim_k\mathop{\leq_{\mathrm{sW}}}\mbox{\rm\sffamily C}SRT_{1,k}$ and $\text{\rm\sffamily BWT}_k\mathop{\leq_{\mathrm{sW}}}\mbox{\rm\sffamily C}RT_{1,k}$ are also clear. \end{proof} This result demonstrates that Ramsey's theorem for cardinality $1$ and $k$ colors $\text{\rm\sffamily RT}_{1,k}$ is equivalent to the Bolzano-Weierstra\ss{} theorem $\text{\rm\sffamily BWT}_k$ for the space $k$, and hence the general Ramsey theorem can be seen as a generalization of the Bolzano-Weierstra\ss{} theorem for finite spaces. Next we want to prove the following interesting lower bound on the complexity of Ramsey's theorem. \begin{theorem}[Lower bound] \label{thm:lower-bound} $\text{\rm\sffamily BWT}_2^{(n-1)}\mathop{\equiv_{\mathrm{sW}}}\mbox{\rm\sffamily C}_2^{(n)}\mathop{\leq_{\mathrm{sW}}}\mbox{\rm\sffamily C}SRT_{n,2}$ for all $n\geq2$. \end{theorem} \begin{proof} We note that by Fact~\ref{fact:WKL-BWT} $\mbox{\rm\sffamily C}_2^{(n)}\mathop{\equiv_{\mathrm{sW}}}\text{\rm\sffamily BWT}_2^{(n-1)}\mathop{\equiv_{\mathrm{sW}}}\text{\rm\sffamily BWT}_2\circ\lim_{2^{\mathbb{N}}}^{[n-1]}$. Let $p\in{\rm dom}(\text{\rm\sffamily BWT}_2\circ\lim_{2^{\mathbb{N}}}^{[n-1]})$ and $q:=\lim_{2^{\mathbb{N}}}^{[n-1]}(p)$. Then \[q(i_0)=\lim_{i_1\to\infty}\lim_{i_2\to\infty}...\lim_{i_{n-1}\to\infty}p\langle i_{n-1},...,i_0\rangle\] for all $i_0\in{\mathbb{N}}$. We compute the coloring $c:[{\mathbb{N}}]^n\to 2$ with \[c\{i_0<i_1<...<i_{n-1}\}:=p\langle i_{n-1},i_{n-2},...,i_1,i_0\rangle.\] It is clear that $c$ is a stable coloring, and with the help of $\mbox{\rm\sffamily C}SRT_{n,2}$ we can compute $(c(M),M)\in\mbox{\rm\sffamily C}SRT_{n,2}(c)$. We claim that $c(M)\in\text{\rm\sffamily BWT}_2(q)$.
This proves $\text{\rm\sffamily BWT}_2^{(n-1)}\mathop{\leq_{\mathrm{sW}}}\mbox{\rm\sffamily C}SRT_{n,2}$. We still need to prove the claim.\\ \underline{1.\ Case:} $q$ is a convergent binary sequence, i.e., $x:=\lim_2(q)\in\{0,1\}$ exists.\\ In this case $q(i_0)=x$ for all sufficiently large $i_0$ and $\text{\rm\sffamily BWT}_2(q)=\{x\}$. Since $M$ is infinite, there will be such a sufficiently large $i_0\in M$. Since $q=\lim_{2^{\mathbb{N}}}^{[n-1]}(p)$ it follows that there will be sufficiently large $i_{n-1}>...>i_1>i_0$ in $M$ such that $p\langle i_{n-1},i_{n-2},...,i_1,i_0\rangle=x$, and hence $c(M)=x\in\{x\}=\text{\rm\sffamily BWT}_2(q)$.\\ \underline{2.\ Case:} $q$ is not a convergent binary sequence.\\ In this case $\text{\rm\sffamily BWT}_{2}(q)=\{0,1\}$ and $c(M)\in\text{\rm\sffamily BWT}_{2}(q)$ hold automatically. \end{proof} It is clear that Ramsey's theorem for any number of colors $k$ can be reduced to Ramsey's theorem for any greater number of colors. This is because any coloring $c:[{\mathbb{N}}]^n\to k$ can be seen as a coloring for any number $m\geq k$ of colors, and this idea applies to all versions of Ramsey's theorem that we have considered. \begin{lemma}[Increasing colors] \label{lem:increasing-color} For all $n,k\geq1$ we obtain: \begin{enumerate} \item $\text{\rm\sffamily SRT}_{n,k}\mathop{\leq_{\mathrm{sW}}}\text{\rm\sffamily SRT}_{n,k+1}\mathop{\leq_{\mathrm{sW}}}\text{\rm\sffamily SRT}_{n,{\mathbb{N}}}$, \item $\mbox{\rm\sffamily C}SRT_{n,k}\mathop{\leq_{\mathrm{sW}}}\mbox{\rm\sffamily C}SRT_{n,k+1}\mathop{\leq_{\mathrm{sW}}}\mbox{\rm\sffamily C}SRT_{n,{\mathbb{N}}}$, \item $\text{\rm\sffamily RT}_{n,k}\mathop{\leq_{\mathrm{sW}}}\text{\rm\sffamily RT}_{n,k+1}\mathop{\leq_{\mathrm{sW}}}\text{\rm\sffamily RT}_{n,+}\mathop{\leq_{\mathrm{sW}}}\text{\rm\sffamily RT}_{n,{\mathbb{N}}}$, \item $\mbox{\rm\sffamily C}RT_{n,k}\mathop{\leq_{\mathrm{sW}}}\mbox{\rm\sffamily C}RT_{n,k+1}\mathop{\leq_{\mathrm{sW}}}\mbox{\rm\sffamily C}RT_{n,{\mathbb{N}}}$. \end{enumerate} \end{lemma} We will later come back to the question whether these reductions are strict. In particular, with Theorem~\ref{thm:lower-bound} and Lemma~\ref{lem:increasing-color} we have now established the lower bound that is indicated in the diagram in Figure~\ref{fig:diagram-RT}. \begin{corollary}[Lower bound] \label{cor:lower-bound} $\text{\rm\sffamily BWT}_2^{(n-1)}\mathop{\leq_{\mathrm{sW}}}\mbox{\rm\sffamily C}SRT_{n,k}$ for all $n\geq2$, $k\geq2,{\mathbb{N}}$ and $\lim\nolimits_2^{(n-1)}\mathop{\leq_{\mathrm{sW}}}\mbox{\rm\sffamily C}SRT_{n,k}$ for all $n\geq1$ and $k\geq2,{\mathbb{N}}$. \end{corollary} The second statement follows from the first one by Fact~\ref{fact:WKL-BWT} in the case of $n\geq2$ and follows from Proposition~\ref{prop:bottom} in the case of $n=1$. We note that by Corollary~\ref{cor:LLPO-RT} below $\mbox{\rm\sffamily C}SRT_{n,k}$ cannot be replaced by $\text{\rm\sffamily SRT}_{n,k}$ in this result. It follows from Theorem~\ref{thm:CJS-lower} that the binary limit $\lim\nolimits_2$ cannot be replaced by the limit on Baire space in the previous result. We recall that $\lim^{(n-1)}\mathop{\equiv_{\mathrm{sW}}}\lim^{[n]}$. \begin{corollary}[Limit avoidance] \label{cor:limit-avoidance1} $\lim^{(n-1)}\mathop{\not\leq_{\mathrm{W}}}\lim*\text{\rm\sffamily RT}_{n,{\mathbb{N}}}$ for all $n\geq2$. \end{corollary} \begin{proof} Let us assume that $\lim^{(n-1)}\mathop{\leq_{\mathrm{W}}}\lim*\text{\rm\sffamily RT}_{n,{\mathbb{N}}}$ holds for some $n\geq2$. 
Then $\lim^{(n-1)}$ maps some computable $p\in {\mathbb{N}}^{\mathbb{N}}$ to $\emptyset^{(n)}$, and the reduction maps $p$ to a computable coloring $c:[{\mathbb{N}}]^{n}\to{\mathbb{N}}$ with finite ${\rm range}(c)$. By Theorem~\ref{thm:CJS-lower} there exists an infinite homogeneous set $M\subseteq{\mathbb{N}}$ for $c$ such that $\emptyset^{(n)}\mathop{\not\leq_{\mathrm{T}}} M'$. Now the assumption is that there is a limit computation performed on $p$ and $M$ that produces $\emptyset^{(n)}$. But any result produced by such a limit computation can also be computed from $M'$ since $p$ is computable (and for all computable functions $F,G:\subseteq{\mathbb{N}}^{\mathbb{N}}\to{\mathbb{N}}^{\mathbb{N}}$ there is a computable function $H:\subseteq{\mathbb{N}}^{\mathbb{N}}\to{\mathbb{N}}^{\mathbb{N}}$ such that $F\circ\lim\circ\;G=H\circ\text{\rm\sffamily J}$). Hence, $\emptyset^{(n)}\mathop{\leq_{\mathrm{T}}} M'$, which is a contradiction. \end{proof} We obtain the following corollary since $\lim*\lim^{(n-2)}\mathop{\equiv_{\mathrm{W}}}\lim^{(n-1)}$, which in turn follows from \cite[Corollary~5.17]{BGM12} since $*$ is associative. \begin{corollary}[Limit avoidance] \label{cor:limit-avoidance} $\lim^{(n-2)}\mathop{\not\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{n,{\mathbb{N}}}$ for all $n\geq2$. \end{corollary} We recall that a problem $f$ is called {\em parallelizable}, if $f\mathop{\equiv_{\mathrm{W}}}\widehat{f}$. We can conclude that Ramsey's theorem is not parallelizable. \begin{corollary}[Parallelizability] \label{cor:parallelizability} $\text{\rm\sffamily RT}_{n,k}$ and $\text{\rm\sffamily SRT}_{n,k}$ are not parallelizable for all $n\geq1$ and $k\geq2,{\mathbb{N}}$. \end{corollary} \begin{proof} By Corollaries~\ref{cor:lower-bound} and \ref{cor:basic-equivalences} we have $\lim_2^{(n-1)}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily SRT}_{n,k}$ for $n\geq 1$, $k\geq2,{\mathbb{N}}$. Let us assume that $\text{\rm\sffamily SRT}_{n,k}$ is parallelizable. Since parallelization is a closure operator, this implies by Lemma~\ref{lem:increasing-color} \[\lim\nolimits^{(n-2)}\mathop{\leq_{\mathrm{W}}}\lim\nolimits^{(n-1)}\mathop{\equiv_{\mathrm{W}}}\widehat{\lim\nolimits_2^{(n-1)}}\mathop{\leq_{\mathrm{W}}}\widehat{\text{\rm\sffamily SRT}_{n,k}}\mathop{\equiv_{\mathrm{W}}}\text{\rm\sffamily SRT}_{n,k}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{n,{\mathbb{N}}},\] where the first equivalence holds by Fact~\ref{fact:WKL-BWT}~(3). This is a contradiction to Corollary~\ref{cor:limit-avoidance}. The proof for $\text{\rm\sffamily RT}_{n,k}$ follows analogously. \end{proof} We recall that a problem $f$ is called {\em idempotent}, if $f\mathop{\equiv_{\mathrm{W}}} f\times f$. The squashing theorem (Theorem~\ref{thm:squashing}) implies that every total, finitely tolerant and idempotent problem is parallelizable. Hence we can draw the following conclusion from Corollary~\ref{cor:parallelizability} (this has also been stated in \cite[Lemma~3.3]{DDH+16}). \begin{corollary}[Idempotency] \label{cor:no-idempotency} $\text{\rm\sffamily RT}_{n,k}$ is not idempotent for all $n\geq1$ and $k\geq2$. \end{corollary} Another noticeable consequence of Corollary~\ref{cor:lower-bound} is the following. \begin{corollary}[Cylinders] \label{cor:cylinder-CSRT} $\widehat{\mbox{\rm\sffamily C}SRT_{n,k}^{(m)}}$ and $\widehat{\mbox{\rm\sffamily C}RT_{n,k}^{(m)}}$ are cylinders for all $n\geq1$, $k\geq2,{\mathbb{N}}$ and $m\geq0$.
\end{corollary} The claim follows since ${\rm id}\mathop{\leq_{\mathrm{sW}}}\lim\mathop{\equiv_{\mathrm{sW}}}\widehat{\lim\nolimits_2}\mathop{\leq_{\mathrm{sW}}}\widehat{\mbox{\rm\sffamily C}SRT_{n,k}}$ holds by Corollary~\ref{cor:lower-bound} and Fact~\ref{fact:WKL-BWT}. The next lemma now formulates a simple finiteness condition that Ramsey's theorem satisfies and which implies that Ramsey's theorem has very little uniform computational power. \begin{lemma}[Finite Intersection Lemma] \label{lem:finite-intersection} Let $c_i:[{\mathbb{N}}]^n\to k$ be colorings for $i=1,...,m$ with $m,n\geq1$, $k\geq1,{\mathbb{N}}$. Then we obtain $\bigcap_{i=1}^m\text{\rm\sffamily RT}_{n,k}(c_i)\not=\emptyset$. \end{lemma} \begin{proof} We first consider the case $k\geq1$. We use a bijection $\alpha:\{0,1,...,k-1\}^m\to\{0,1,...,k^m-1\}$ in order to construct a map $f:({\mathcal C}_{n,k})^m\to{\mathcal C}_{n,k^m}$ by \[f(c_1,...,c_m)(A):=\alpha(c_1(A),...,c_m(A))\] for all colorings $c_1,...,c_m\in{\mathcal C}_{n,k}$ and $A\in[{\mathbb{N}}]^n$. Given $c_1,...,c_m\in{\mathcal C}_{n,k}$ we consider $c:=f(c_1,...,c_m)$, and we claim that $\text{\rm\sffamily RT}_{n,k^m}(c)\subseteq\bigcap_{i=1}^m\text{\rm\sffamily RT}_{n,k}(c_i)$. To this end, let $M\in\text{\rm\sffamily RT}_{n,k^m}(c)$ and $x:=c(M)$. If $(x_1,...,x_m):=\alpha^{-1}(x)$, then we obtain $c_i(A)=x_i$ for all $i=1,...,m$ and $A\in[M]^n$, and hence $M$ is homogeneous for all $c_1,...,c_m$. This proves the claim. It follows by Ramsey's theorem~\ref{thm:Ramsey} that $\text{\rm\sffamily RT}_{n,k^m}(c)\not=\emptyset$. We now consider the case $k={\mathbb{N}}$. In this case we use Cantor's tupling function $\alpha:{\mathbb{N}}^m\to{\mathbb{N}}$ in order to construct a map $f:({\mathcal C}_{n,{\mathbb{N}}})^m\to{\mathcal C}_{n,{\mathbb{N}}}$ analogously to the above. We obtain $\text{\rm\sffamily RT}_{n,{\mathbb{N}}}(c)\subseteq\bigcap_{i=1}^m\text{\rm\sffamily RT}_{n,{\mathbb{N}}}(c_i)$. It follows by Ramsey's theorem~\ref{thm:Ramsey} that $\text{\rm\sffamily RT}_{n,{\mathbb{N}}}(c)\not=\emptyset$. In any case this proves the claim of the lemma. \end{proof} This result could be generalized and proved in different, possibly simpler ways. The specific construction used in our proof will be reused for the proof of Corollary~\ref{cor:products}. We recall that for a multi-valued function $f:\subseteq X\rightrightarrows Y$ we have defined $\# f$, the {\em cardinality} of $f$, in \cite{BGH15a} as the supremum of all cardinalities of sets $M\subseteq{\rm dom}(f)$ such that $\{f(x):x\in M\}$ is pairwise disjoint. The cardinality yields an invariant for strong Weihrauch reducibility, i.e., $f\mathop{\leq_{\mathrm{sW}}} g$ implies $\# f\leq \#g$ \cite[Proposition~3.6]{BGH15a}. Lemma~\ref{lem:finite-intersection} now implies the following perhaps surprising fact. \begin{corollary}[Cardinality] \label{cor:cardinality-RT} $\#\text{\rm\sffamily RT}_{n,k}^{(m)}=\#\widehat{\text{\rm\sffamily RT}_{n,k}^{(m)}}=1$ for all $n\geq1$, $k\geq1,{\mathbb{N}}$ and $m\geq0$. \end{corollary} In the case of the parallelization we only need to apply Lemma~\ref{lem:finite-intersection} pairwise to any component of the sequence. We recall that a multi-valued function $f$ was called {\em discriminative} in \cite{BHK15} if $\mbox{\rm\sffamily C}_2\mathop{\leq_{\mathrm{W}}} f$ and {\em indiscriminative} otherwise. Likewise, we could call $f$ {\em strongly discriminative} if $\mbox{\rm\sffamily C}_2\mathop{\leq_{\mathrm{sW}}} f$.
We obtain $\#\mbox{\rm\sffamily C}_2=2$ since $\{0\}$ and $\{1\}$ are in the domain of $\mbox{\rm\sffamily C}_2$. Hence it follows that Corollary~\ref{cor:cardinality-RT} implies in particular that Ramsey's theorem is not strongly discriminative, not even in its parallelized form. \begin{corollary}[Strong discrimination] \label{cor:LLPO-RT} $\mbox{\rm\sffamily C}_2\mathop{\not\leq_{\mathrm{sW}}}\widehat{\text{\rm\sffamily RT}_{n,k}^{(m)}}$ for all $n\geq1$, $k\geq1,{\mathbb{N}}$ and $m\geq 0$. \end{corollary} Since $\mbox{\rm\sffamily C}_2'\mathop{\equiv_{\mathrm{sW}}}\text{\rm\sffamily BWT}_2$ by Fact~\ref{fact:WKL-BWT}, we can conclude that $\mbox{\rm\sffamily C}SRT$ cannot be replaced by $\text{\rm\sffamily SRT}$ in Theorem~\ref{thm:lower-bound} and Corollary~\ref{cor:lower-bound} (without simultaneously replacing the strong reduction by an ordinary one).\footnote{It follows from Corollary~\ref{cor:DNC} below that in Corollary~\ref{cor:LLPO-RT} $\mbox{\rm\sffamily C}_2$ cannot be replaced by $\text{\rm\sffamily A}CC_{\mathbb{N}}$, as defined in \cite{BHK15}.} We can also conclude from Corollary~\ref{cor:cardinality-RT} that the parallelized uncolored versions of Ramsey's theorem are not cylinders since $\#{\rm id}=|{\mathbb{N}}^{\mathbb{N}}|$. \begin{corollary}[Cylinders] \label{cor:cylinder-SRT} $\widehat{\text{\rm\sffamily RT}_{n,k}^{(m)}}$ and $\widehat{\text{\rm\sffamily SRT}_{n,k}^{(m)}}$ are not cylinders for all $n\geq1$, $k\geq1,{\mathbb{N}}$ and $m\geq 0$. \end{corollary} Since $\#\mbox{\rm\sffamily C}SRT_{n,k}\geq k$ holds (because there are monochromatic colorings for each color), we can also conclude from Corollary~\ref{cor:cardinality-RT} that the colored versions of Ramsey's theorem are not strongly equivalent to the uncolored ones (in the case of at least two colors). \begin{corollary} \label{cor:CSRT-RT} $\mbox{\rm\sffamily C}SRT_{n,k}\mathop{\not\leq_{\mathrm{sW}}}\widehat{\text{\rm\sffamily RT}_{n,k}^{(m)}}$ for all $n\geq1$, $k\geq2,{\mathbb{N}}$ and $m\geq0$. \end{corollary} The following result is a consequence of Lemma~\ref{lem:finite-intersection} and its proof. For a finite sequence $(f_i)_{i\leq m}$ of multi-valued functions $f_i:\subseteq X\rightrightarrows Y$ we denote the {\em intersection} by $\bigcap_{i=1}^mf_i:\subseteq X\rightrightarrows Y,x\mapsto\bigcap_{i=1}^mf_i(x)$, where ${\rm dom}(\bigcap_{i=1}^mf_i)$ contains all points $x\in X$ such that $\bigcap_{i=1}^mf_i(x)\not=\emptyset$. We recall that $f^m$ denotes the $m$--fold product of $f$ with respect to $\times$. \begin{corollary}[Products] \label{cor:products} For all $n,m,k\geq1$ we obtain \begin{enumerate} \item $\text{\rm\sffamily RT}_{n,k}^m\mathop{\leq_{\mathrm{sW}}}\bigcap_{i=1}^m\text{\rm\sffamily RT}_{n,k}\mathop{\leq_{\mathrm{sW}}}\text{\rm\sffamily RT}_{n,k^m}$, \item $\text{\rm\sffamily SRT}_{n,k}^m\mathop{\leq_{\mathrm{sW}}}\bigcap_{i=1}^m\text{\rm\sffamily SRT}_{n,k}\mathop{\leq_{\mathrm{sW}}}\text{\rm\sffamily SRT}_{n,k^m}$, \item $\text{\rm\sffamily RT}_{n,{\mathbb{N}}}^m\mathop{\leq_{\mathrm{sW}}}\bigcap_{i=1}^m\text{\rm\sffamily RT}_{n,{\mathbb{N}}}\mathop{\leq_{\mathrm{sW}}}\text{\rm\sffamily RT}_{n,{\mathbb{N}}}$, \item $\text{\rm\sffamily SRT}_{n,{\mathbb{N}}}^m\mathop{\leq_{\mathrm{sW}}}\bigcap_{i=1}^m\text{\rm\sffamily SRT}_{n,{\mathbb{N}}}\mathop{\leq_{\mathrm{sW}}}\text{\rm\sffamily SRT}_{n,{\mathbb{N}}}$. 
\end{enumerate} \end{corollary} \begin{proof} The functions $f:({\mathcal C}_{n,k})^m\to{\mathcal C}_{n,k^m}$ and $f:({\mathcal C}_{n,{\mathbb{N}}})^m\to{\mathcal C}_{n,{\mathbb{N}}}$ constructed in the proof of Lemma~\ref{lem:finite-intersection} are computable, and hence they yield the reductions $\bigcap_{i=1}^m\text{\rm\sffamily RT}_{n,k}\mathop{\leq_{\mathrm{sW}}}\text{\rm\sffamily RT}_{n,k^m}$ and $\bigcap_{i=1}^m\text{\rm\sffamily RT}_{n,{\mathbb{N}}}\mathop{\leq_{\mathrm{sW}}}\text{\rm\sffamily RT}_{n,{\mathbb{N}}}$, respectively. Additionally, both maps $f$ have the property that they map stable colorings $c_1,...,c_m$ to stable colorings $c:=f(c_1,...,c_m)$, hence they also yield the corresponding reductions in the stable case. The other reductions follow from Lemma~\ref{lem:finite-intersection}. \end{proof} We note that by \cite[Proposition~2.1]{DDH+16} we also have $\text{\rm\sffamily RT}_{n,k}\times\text{\rm\sffamily RT}_{n,l}\mathop{\leq_{\mathrm{sW}}}\text{\rm\sffamily RT}_{n,kl}$ for all $n,k,l\geq1$. We also note that the following result is implicitly included in \cite[Section~1]{DDH+16} (the proof given there is for the case $l=1$ and $k=2$ and can be generalized straightforwardly). \begin{proposition}[Compositional products] \label{prop:compositional-products} $\text{\rm\sffamily RT}_{n,k+l}\mathop{\leq_{\mathrm{W}}} \text{\rm\sffamily RT}_{n,k}*\text{\rm\sffamily RT}_{n,l+1}$ for all $n,k,l\geq1$. \end{proposition} From Corollary~\ref{cor:products} we can directly conclude that Ramsey's theorem for an unspecified finite number of colors is idempotent. \begin{corollary}[Idempotency] \label{cor:idempotency} $\text{\rm\sffamily RT}_{n,{\mathbb{N}}}$ and $\text{\rm\sffamily SRT}_{n,{\mathbb{N}}}$ are idempotent for all $n\geq1$. \end{corollary} Since the sequence $(f_m)_m$ of maps $f_m:({\mathcal C}_{n,{\mathbb{N}}})^m\to{\mathcal C}_{n,{\mathbb{N}}}$ from the proof of Corollary~\ref{cor:products} is uniformly computable, we obtain the following corollary. \begin{corollary}[Finite parallelization] \label{cor:finite-parallelization} $\text{\rm\sffamily RT}_{n,k}^*\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{n,+}$ and $\text{\rm\sffamily SRT}_{n,k}^*\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily SRT}_{n,+}$ for all $n\geq1$ and $k\geq1,{\mathbb{N}}$. \end{corollary} We note that $\text{\rm\sffamily RT}_{n,k}^*=\bigsqcup_{m\geq0}\text{\rm\sffamily RT}_{n,k}^m$, where $\text{\rm\sffamily RT}_{n,k}^0={\rm id}$, and hence we obtain only an ordinary Weihrauch reduction in the previous result. Corollary~\ref{cor:finite-parallelization} leads to the obvious question, whether additional factors can make up for color increases. \begin{question}[Colors and factors] \label{quest:colors-factors} Does $\text{\rm\sffamily RT}_{n,k}^*\mathop{\equiv_{\mathrm{W}}}\text{\rm\sffamily RT}_{n,+}$ hold for $n,k\geq 2$? \end{question} We note that the equivalence is known to hold in the case of $n=1$. \begin{proposition}[Colors and factors] \label{prop:colors-factors} $\text{\rm\sffamily RT}_{1,n+1}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{1,2}^n$ for all $n\geq1$ and, in particular, $\text{\rm\sffamily RT}_{1,2}^*\mathop{\equiv_{\mathrm{W}}}\text{\rm\sffamily RT}_{1,+}$. \end{proposition} \begin{proof} In \cite[Theorem~32]{Pau10} it was proved that $\mbox{\rm\sffamily C}_{n+1}\mathop{\leq_{\mathrm{sW}}}\mbox{\rm\sffamily C}_2^n$ holds for all $n\geq1$ (only ``$\mathop{\leq_{\mathrm{W}}}$'' was claimed but the proof shows ``$\mathop{\leq_{\mathrm{sW}}}$''). 
By Fact~\ref{fact:WKL-BWT}(1) this implies $\text{\rm\sffamily BWT}_{n+1}\mathop{\leq_{\mathrm{sW}}}\text{\rm\sffamily BWT}_2^n$, since jumps are monotone with respect to $\mathop{\leq_{\mathrm{sW}}}$. Hence, with Proposition~\ref{prop:bottom} we obtain $\text{\rm\sffamily RT}_{1,n+1}\mathop{\equiv_{\mathrm{W}}}\text{\rm\sffamily BWT}_{n+1}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily BWT}_2^n\mathop{\equiv_{\mathrm{W}}}\text{\rm\sffamily RT}_{1,2}^n$. Since this reduction is uniform in $n$, we can conclude that \[\text{\rm\sffamily RT}_{1,+}=\bigsqcup_{k\geq1}\text{\rm\sffamily RT}_{1,k}\mathop{\leq_{\mathrm{W}}}\bigsqcup_{k\geq1}\text{\rm\sffamily RT}_{1,2}^{k-1}\mathop{\equiv_{\mathrm{W}}}\text{\rm\sffamily RT}_{1,2}^*.\] The other direction follows since the reductions in Corollary~\ref{cor:products} are uniform and hence \[\text{\rm\sffamily RT}_{1,2}^*=\bigsqcup_{m\in{\mathbb{N}}}\text{\rm\sffamily RT}_{1,2}^m\mathop{\leq_{\mathrm{W}}}\bigsqcup_{m\in{\mathbb{N}}}\text{\rm\sffamily RT}_{1,2^m}\mathop{\leq_{\mathrm{W}}}\bigsqcup_{k\geq1}\text{\rm\sffamily RT}_{1,k}=\text{\rm\sffamily RT}_{1,+}.\qedhere\] \end{proof} As a consequence of the next result we obtain that any unspecified finite number of colors can be reduced to two colors for the price of an increase of the cardinality. Simultaneously, the following theorem also gives us a handle to show that the complexity of Ramsey's theorem increases with increasing numbers of colors. \begin{theorem}[Products] \label{thm:products} $\text{\rm\sffamily RT}_{n,{\mathbb{N}}}\times\text{\rm\sffamily RT}_{n+1,k}\mathop{\leq_{\mathrm{sW}}}\text{\rm\sffamily RT}_{n+1,k+1}$ and\linebreak $\text{\rm\sffamily SRT}_{n,{\mathbb{N}}}\times\text{\rm\sffamily SRT}_{n+1,k}\mathop{\leq_{\mathrm{sW}}}\text{\rm\sffamily SRT}_{n+1,k+1}$ for all $n\geq1$ and $k\geq1,{\mathbb{N}}$. \end{theorem} \begin{proof} We start with the first reduction. Given a coloring $c_1:[{\mathbb{N}}]^n\to{\mathbb{N}}$ with finite range and a coloring $c_2:[{\mathbb{N}}]^{n+1}\to k$ we construct a coloring $c^+:[{\mathbb{N}}]^{n+1}\to k+1$ as follows: \[c^+(A):=\left\{\begin{array}{ll} c_2(A) & \mbox{if $A$ is homogeneous for $c_1$}\\ k & \mbox{otherwise} \end{array}\right.\] for all $A\in[{\mathbb{N}}]^{n+1}$. Let $M\in\text{\rm\sffamily RT}_{n+1,k+1}(c^+)$ and let $p:{\mathbb{N}}\to{\mathbb{N}}$ be the principal function of $M$. Then we define a coloring $c_p:[{\mathbb{N}}]^n\to{\mathbb{N}}$ by $c_p(A):=c_1(p(A))$ for all $A\in[{\mathbb{N}}]^n$. By construction, $c_p$ has finite range too. Let $B\in\text{\rm\sffamily RT}_{n,{\mathbb{N}}}(c_p)$. Then $p(B)$ is homogeneous for $c_1$ and $p(B)\subseteq M$. Hence any $A\in[p(B)]^{n+1}$ is also homogeneous for $c_1$, which implies $c^+(A)=c_2(A)$ and hence $c^+(M)=c_2(A)<k$. This implies $M\in\text{\rm\sffamily RT}_{n+1,k}(c_2)$ and all $A\in[M]^{n+1}$ are homogeneous for $c_1$. We claim that this implies $M\in\text{\rm\sffamily RT}_{n,{\mathbb{N}}}(c_1)$. The claim yields $\text{\rm\sffamily RT}_{n+1,k+1}(c^+)\subseteq\text{\rm\sffamily RT}_{n,{\mathbb{N}}}(c_1)\cap\text{\rm\sffamily RT}_{n+1,k}(c_2)$, and hence the desired reduction follows. We still need to prove the claim. To this end we show that all $A\in [M]^{n+1}$ are homogeneous for $c_1$ with respect to one fixed color. Firstly, we note that for every two $A,B\in[M]^{n+1}$ there is a finite sequence $A_1,...,A_l\in[M]^{n+1}$ such that $A_1=A$, $A_l=B$ and $|A_i\cap A_{i+1}|\geq n$ for all $i=1,...,l-1$. 
This is because each element of $A$ can be replaced step by step by one element of $B$. Now we fix some $i\in\{1,...,l-1\}$. Since $A_i$ and $A_{i+1}$ are homogeneous for $c_1$ by assumption and they share an $n$--element subset, it is clear that $c_1(A_i)=c_1(A_{i+1})$. Since this holds for all $i\in\{1,...,l-1\}$, we obtain $c_1(A)=c_1(B)$. This means that all $A\in[M]^{n+1}$ are homogeneous for $c_1$ with respect to the same fixed color, and hence, in particular, all $A\in[M]^n$ share the same color with respect to $c_1$, i.e., $M\in\text{\rm\sffamily RT}_{n,{\mathbb{N}}}(c_1)$. Now we still show that the same construction also proves the second reduction regarding stable colorings. For this it suffices to show that $c^+:[{\mathbb{N}}]^{n+1}\to k+1$ is stable for all stable colorings $c_1:[{\mathbb{N}}]^n\to{\mathbb{N}}$ and $c_2:[{\mathbb{N}}]^{n+1}\to k$. For this purpose let $c_1,c_2$ be stable, and let $A\in[{\mathbb{N}}]^n$. Then $[A]^{n-1}=\{B_0,B_1,...,B_{n-1}\}$ and for each $i=0,...,n-1$ there is some $l_i\geq\max(B_i)$ and some $x_i\in{\mathbb{N}}$ such that $c_1(B_i\cup\{j\})=x_i$ for $j>l_i$, since $c_1$ is stable. There is also some $l_n\geq\max(A)$ and some $x_n\in k$ such that $c_2(A\cup\{j\})=x_n$ for $j>l_n$, since $c_2$ is stable. Let $l:=\max\{l_0,...,l_{n-1},l_n\}$. Then \[c^+(A\cup\{j\})=\left\{\begin{array}{ll} c_2(A\cup\{j\}) & \mbox{if $c_1(A)=x_0=x_1=...=x_{n-1}$}\\ k & \mbox{otherwise} \end{array}\right.\] for all $j>l$. Hence $c^+$ is stable. \end{proof} We note that in the case of $k=1$ we get the following corollary. \begin{corollary}[Color reduction] \label{cor:color-reduction} $\text{\rm\sffamily RT}_{n,{\mathbb{N}}}\mathop{\leq_{\mathrm{sW}}}\text{\rm\sffamily RT}_{n+1,2}$ and $\text{\rm\sffamily SRT}_{n,{\mathbb{N}}}\mathop{\leq_{\mathrm{sW}}}\text{\rm\sffamily SRT}_{n+1,2}$ for all $n\geq1$. \end{corollary} We note that the increase of the cardinality from $n$ to $n+1$ in this corollary is necessary in the case of $\text{\rm\sffamily RT}$ by Corollaries~\ref{cor:no-idempotency} and \ref{cor:finite-parallelization}. We mention that the proof of Theorem~\ref{thm:products} also shows that the coloring $c^+$ constructed therein has only infinite homogeneous sets of colors other than $k$. In the case of $k=1$ this means that only infinite homogeneous sets of color $0$ occur. We denote by $0\mbox{-}\text{\rm\sffamily SRT}_{n+1,2}$ the stable version $\text{\rm\sffamily SRT}_{n+1,2}$ of Ramsey's theorem restricted to colorings that only admit infinite homogeneous sets of color $0$. We obtain the following corollary, which we consider only as a technical step towards the proof of Corollary~\ref{cor:color-reduction-jumps} in the next section (and hence we do not phrase it for $\text{\rm\sffamily RT}$ in place of $\text{\rm\sffamily SRT}$). \begin{corollary}[Color reduction] \label{cor:color-reduction-zero} $\text{\rm\sffamily SRT}_{n,{\mathbb{N}}}\mathop{\leq_{\mathrm{sW}}}0\mbox{-}\text{\rm\sffamily SRT}_{n+1,2}$ for all $n\geq1$. \end{corollary} With the help of Proposition~\ref{prop:bottom}(2) in the case $k={\mathbb{N}}$ we obtain the following corollary of Corollary~\ref{cor:color-reduction}. \begin{corollary}[Discrete lower bounds] \label{cor:discrete-lower-bounds} $\lim_{\mathbb{N}}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily SRT}_{2,2}$ and $\text{\rm\sffamily BWT}_{\mathbb{N}}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{2,2}$. 
\end{corollary} Since Ramsey's theorem is not parallelizable by Corollary~\ref{cor:parallelizability}, it is clear that some increase in the cardinality is necessary in order to accommodate the parallelization. We will show that it is sufficient and necessary to increase the cardinality by $2$. The next result shows that this is sufficient, and it uses the ideas that have already been applied in the proofs of Corollary~\ref{cor:products} and Theorem~\ref{thm:products}. \begin{theorem}[Delayed Parallelization] \label{thm:delayed-parallelization} $\widehat{\text{\rm\sffamily RT}_{n,k}}\mathop{\leq_{\mathrm{sW}}}\text{\rm\sffamily RT}_{n+2,2}$ and\linebreak $\widehat{\text{\rm\sffamily SRT}_{n,k}}\mathop{\leq_{\mathrm{sW}}}\text{\rm\sffamily SRT}_{n+2,2}$ for all $n\geq1$ and $k\geq1,{\mathbb{N}}$. \end{theorem} \begin{proof} We start with the reduction $\widehat{\text{\rm\sffamily RT}_{n,k}}\mathop{\leq_{\mathrm{sW}}}\text{\rm\sffamily RT}_{n+2,2}$ for $k\geq1$. Given a sequence $(c_i)_i$ of colorings $c_i:[{\mathbb{N}}]^n\to k$, we want to determine infinite homogeneous sets $M_i$ for all of them in parallel, using $\text{\rm\sffamily RT}_{n+2,2}$. The sequence $(f_m)_m$ of functions $f_m:({\mathcal C}_{n,k})^m\to{\mathcal C}_{n,k^m}$, defined as in the proof of Lemma~\ref{lem:finite-intersection}, is computable, and we use it to compute a sequence $(d_m)_m$ of colorings $d_m\in{\mathcal C}_{n,k^m}$ by $d_m:=f_m(c_0,...,c_{m-1})$. Given the sequence $(d_m)_m$, we can compute a sequence $(d_m^+)_m$ of colorings $d_m^+:[{\mathbb{N}}]^{n+1}\to2$ by \[d_m^+(A):=\left\{\begin{array}{ll} 0 & \mbox{if $A$ is homogeneous for $d_m$}\\ 1 & \mbox{otherwise} \end{array}\right.\] for all $A\in[{\mathbb{N}}]^{n+1}$. Now, in a final step we compute a coloring $c:[{\mathbb{N}}]^{n+2}\to2$ with \[c(\{m\}\cup A):=d_m^+(A)\] for all $A\in[{\mathbb{N}}]^{n+1}$ and $m<\min(A)$. Given an infinite homogeneous set $M\in\text{\rm\sffamily RT}_{n+2,2}(c)$ we determine a sequence $(M_i)_i$ as follows: for each fixed $i\in{\mathbb{N}}$ we first search for a number $m>i$ in $M$, and then we let $M_i:=\{x\in M:x>m\}$. It follows from the definition of $c$ that $M_i$ is homogeneous for $d_m^+$, and following the reasoning in the proof of Theorem~\ref{thm:products}, we obtain that $M_i$ is also homogeneous for $d_m$. Following the reasoning in the proof of Lemma~\ref{lem:finite-intersection}, we finally conclude that $M_i\in\bigcap_{j=0}^{m-1}\text{\rm\sffamily RT}_{n,k}(c_j)$, hence, in particular, $M_i\in\text{\rm\sffamily RT}_{n,k}(c_i)$, which was to be proved. We note that the entire construction preserves stability. As shown in the proof of Corollary~\ref{cor:products}, the functions $f_m$ preserve stability. Hence, given a sequence $(c_i)_i$ of stable colorings, also the sequence $(d_m)_m$ consists of stable colorings. Likewise, it was shown in the proof of Theorem~\ref{thm:products} that in this case also the sequence $(d_m^+)_m$ consists of stable colorings. It follows immediately from the construction of $c$ that also $c$ is stable, since \[\lim_{j\to\infty}c(\{m\}\cup A\cup\{j\})=\lim_{j\to\infty}d_m^+(A\cup\{j\})\] for all $A\in[{\mathbb{N}}]^{n}$ and $m<\min(A)$. Altogether, this proves $\widehat{\text{\rm\sffamily SRT}_{n,k}}\mathop{\leq_{\mathrm{sW}}}\text{\rm\sffamily SRT}_{n+2,2}$. The case $k={\mathbb{N}}$ can be handled analogously.
\end{proof} Again the observation made after Corollary~\ref{cor:color-reduction} applies: the colorings $d_m^+$ can only have infinite homogeneous sets of color $0$, and hence also $c$ can only have infinite homogeneous sets of color $0$. This yields the following corollary, which we consider as a technical step towards the proof of Corollary~\ref{cor:lower bounds with jumps} in the next section (hence we do not phrase it for $\text{\rm\sffamily RT}$ in place of $\text{\rm\sffamily SRT}$). \begin{corollary}[Delayed Parallelization] \label{cor:delayed-parallelization-color} $\widehat{\text{\rm\sffamily SRT}_{n,k}}\mathop{\leq_{\mathrm{sW}}}0\mbox{-}\text{\rm\sffamily SRT}_{n+2,2}$ for all $n\geq1$ and $k\geq1,{\mathbb{N}}$. \end{corollary} Theorem~\ref{thm:delayed-parallelization} in combination with some other results also yields the following lower bounds of versions of Ramsey's theorem. \begin{corollary}[Lower bounds] \label{cor:delayed-parallelization} $\lim\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily SRT}_{3,2}$, $\text{\rm\sffamily WKL}'\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{3,2}$ and\linebreak $\text{\rm\sffamily WKL}^{(n)}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily SRT}_{n+2,2}$ for all $n\geq2$. \end{corollary} \begin{proof} With the help of Fact~\ref{fact:WKL-BWT}, Corollary~\ref{cor:basic-equivalences} and Theorem~\ref{thm:delayed-parallelization} we obtain: \begin{itemize} \item $\lim\mathop{\equiv_{\mathrm{W}}}\widehat{\lim_2}\mathop{\equiv_{\mathrm{W}}}\widehat{\text{\rm\sffamily SRT}_{1,2}}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily SRT}_{3,2}$, \item $\text{\rm\sffamily WKL}'\mathop{\equiv_{\mathrm{W}}}\widehat{\mbox{\rm\sffamily C}_2'}\mathop{\leq_{\mathrm{W}}}\widehat{\text{\rm\sffamily RT}_{1,2}}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{3,2}$, \item $\text{\rm\sffamily WKL}^{(n)}\mathop{\equiv_{\mathrm{W}}}\widehat{\mbox{\rm\sffamily C}_2^{(n)}}\mathop{\leq_{\mathrm{W}}}\widehat{\text{\rm\sffamily SRT}_{n,2}}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily SRT}_{n+2,2}$ for $n\geq2$. \end{itemize} For the first statement we have additionally used Proposition~\ref{prop:bottom} and in the latter two cases Theorem~\ref{thm:lower-bound}. \end{proof} Corollary~\ref{cor:delayed-parallelization} generalizes \cite[Corollary~2.3]{HJ16} (see Corollary~\ref{cor:KL}). Corollary~\ref{cor:limit-avoidance} together with Corollary~\ref{cor:delayed-parallelization} show that both corollaries are optimal in the sense that $\lim^{(n-2)}$ cannot be replaced by $\lim^{(n-3)}$ in the statement of the Corollary~\ref{cor:limit-avoidance}, and $\text{\rm\sffamily WKL}^{(n)}$ cannot be replaced by $\lim^{(n)}$ in the statement of Corollary~\ref{cor:delayed-parallelization}. In particular, $n+2$ in Theorem~\ref{thm:delayed-parallelization} is also optimal and cannot be replaced by $n+1$. However, the following question remains open. \begin{question} \label{quest:WKL-SRT32} Does $\text{\rm\sffamily WKL}'\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily SRT}_{3,2}$ hold? \end{question} As a combination of Corollaries~\ref{cor:discrete-lower-bounds} and \ref{cor:delayed-parallelization} we obtain the following result on versions of Ramsey's theorem above the limit and the Bolzano-Weierstra\ss{} theorem for $\{0,1\}, {\mathbb{N}}$ and ${\mathbb{N}}^{\mathbb{N}}$, respectively. 
\begin{corollary}[Cones] \label{cor:cones} $\lim_2\mathop{\leq_{\mathrm{sW}}}\text{\rm\sffamily BWT}_2\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{1,2}$, $\lim_{\mathbb{N}}\mathop{\leq_{\mathrm{sW}}}\text{\rm\sffamily BWT}_{\mathbb{N}}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{2,2}$ and $\lim\mathop{\leq_{\mathrm{sW}}}\text{\rm\sffamily BWT}_{{\mathbb{N}}^{\mathbb{N}}}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{3,2}$. \end{corollary} \begin{proof} We note that $\lim_X\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily BWT}_X$ holds for arbitrary computable metric spaces $X$ by \cite[Proposition~11.21]{BGM12}. Hence the first reduction chain follows with Proposition~\ref{prop:bottom}, the second one follows from Corollary~\ref{cor:discrete-lower-bounds} and the third one from Corollary~\ref{cor:delayed-parallelization} with the extra observation that $\text{\rm\sffamily WKL}'\mathop{\equiv_{\mathrm{W}}}\text{\rm\sffamily BWT}_{{\mathbb{N}}^{\mathbb{N}}}$ by \cite[Corollaries 11.6 and 11.7]{BGM12}. \end{proof} \section{Jumps, Increasing Cardinality and Color and Upper Bounds} \label{sec:upper} The purpose of this section is to provide a useful upper bound on Ramsey's theorem. Simultaneously, we will demonstrate how the complexity of Ramsey's theorem increases with increasing cardinality. The proof is subdivided into several steps. The first and crucial step made by Theorem~\ref{thm:CRT-SRT} is interesting by itself and connects the jump of the colored version of Ramsey's theorem with the stable version of the next greater cardinality. This is one result where the usage of the colored version of Ramsey's theorem is essential. We start with a result that prepares the first direction of this theorem. \begin{proposition}[Jumps] \label{prop:jumps} $\mbox{\rm\sffamily C}RT_{n,k}'\mathop{\leq_{\mathrm{sW}}}\mbox{\rm\sffamily C}SRT_{n+1,k}$ for all $n\geq1$ and $k\geq1,{\mathbb{N}}$. \end{proposition} \begin{proof} Let $(c_i)_i$ be a sequence that converges to a coloring $c_\infty:[{\mathbb{N}}]^n\to k$. Without loss of generality we can assume that the $c_i$ are colorings $c_i:[{\mathbb{N}}]^n\to k$ themselves (this can easily be achieved by replacing every value $\geq k$ in the range of $c_i$ by $0$). We compute the coloring $c:[{\mathbb{N}}]^{n+1}\to k$ with \[c(A\cup\{i\}):=c_i(A)\] for all $A\in[{\mathbb{N}}]^n$ and $i>\max(A)$. Then $c$ is stable, and we claim that $\text{\rm\sffamily RT}_{n+1,k}(c)\subseteq\text{\rm\sffamily RT}_{n,k}(c_\infty)$. To this end, let $M\in\text{\rm\sffamily RT}_{n+1,k}(c)$, and let $A\in[M]^n$. Since $M$ is infinite and $\lim_{i\to\infty}c(A\cup\{i\})=c_\infty(A)$, we obtain $c(M)=c_\infty(A)$. Since this holds for all $A\in[M]^n$, we obtain that $M$ is homogeneous for $c_\infty$, i.e., $M\in\text{\rm\sffamily RT}_{n,k}(c_\infty)$. We note that we also obtain $c_\infty(M)=c(M)$, and hence this proves $\mbox{\rm\sffamily C}RT_{n,k}'\mathop{\leq_{\mathrm{sW}}}\mbox{\rm\sffamily C}SRT_{n+1,k}$. \end{proof} We obtain the following corollary, since the jump operator is monotone with respect to strong reductions by \cite[Proposition~5.6]{BGM12}. \begin{corollary}[Jumps] \label{cor:jumps} $\text{\rm\sffamily RT}_{n,k}^{(m)}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily SRT}_{{n+m},k}$ for all $m,n\geq1$, $k\geq1,{\mathbb{N}}$. \end{corollary} It is not immediately clear whether this result also holds with a strong reduction. Now we are prepared to formulate our main result on jumps of Ramsey's theorem. 
\begin{theorem}[Jumps] \label{thm:CRT-SRT} $\mbox{\rm\sffamily C}RT_{n,k}'\mathop{\equiv_{\mathrm{W}}}\text{\rm\sffamily SRT}_{n+1,k}$ for all $n\geq1$, $k\geq1,{\mathbb{N}}$. \end{theorem}

\begin{proof} By Proposition~\ref{prop:jumps} and Corollary~\ref{cor:basic-equivalences} we obtain $\mbox{\rm\sffamily C}RT_{n,k}'\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily SRT}_{n+1,k}$, and it remains to prove $\text{\rm\sffamily SRT}_{n+1,k}\mathop{\leq_{\mathrm{W}}}\mbox{\rm\sffamily C}RT_{n,k}'$. Let $c:[{\mathbb{N}}]^{n+1}\to k$ be a stable coloring. Then we can define a sequence $(c_i)_i$ of colorings $c_i:[{\mathbb{N}}]^n\to k$ by \[c_i(A):=\left\{\begin{array}{ll} c(A\cup\{i\}) & \mbox{if $i>\max(A)$}\\ 0 & \mbox{otherwise} \end{array}\right.\] for all $A\in[{\mathbb{N}}]^n$. Since $c$ is stable, $(c_i)_i$ is a convergent sequence of colorings, and we denote its limit by $c_\infty:[{\mathbb{N}}]^n\to k$. With the help of $\mbox{\rm\sffamily C}RT_{n,k}'$ we can compute $(c_\infty(M_\infty),M_\infty)\in\mbox{\rm\sffamily C}RT_{n,k}(c_\infty)$. We will now describe how we can use this set $M_\infty$ together with $c_\infty$ and $c$ in order to computably enumerate an infinite homogeneous set $M\in\text{\rm\sffamily SRT}_{n+1,k}(c)$. The set $M=\bigcup_{i=0}^\infty M_i$ will be defined inductively using sets $M_i\in[M_\infty]^{n+i}$. We start with choosing some $M_0\in[M_\infty]^n$. Then we continue in steps $i=0,1,2,...$ as follows: Let us assume that we have $M_i\in [M_\infty]^{n+i}$. For all $A\in[M_i]^{n}$ we obtain \[\lim_{j\to\infty} c(A\cup\{j\})=c_\infty(A)=c_\infty(M_\infty).\] Hence, we can effectively find an $m\in M_\infty$ with $m>\max(M_i)$ such that $M_{i+1}:=M_i\cup\{m\}$ satisfies $c(M_{i+1})=c_\infty(M_\infty)$. Since all the sets $M_i$ with $i\geq1$ are homogeneous sets for $c$ with the same color $c_\infty(M_\infty)$, the set $M:=\bigcup_{i=0}^\infty M_i$ is an infinite homogeneous set for $c$ with $c(M)=c_\infty(M_\infty)$. This proves the desired reduction $\text{\rm\sffamily SRT}_{n+1,k}\mathop{\leq_{\mathrm{W}}}\mbox{\rm\sffamily C}RT_{n,k}'$. \end{proof}

It follows from Corollaries~\ref{cor:cylinder-CSRT} and \ref{cor:cylinder-SRT} that the equivalence in Theorem~\ref{thm:CRT-SRT} cannot be replaced by a strong equivalence. We note that the color provided by $\mbox{\rm\sffamily C}RT_{n,k}'$ is not needed if the color of the infinite homogeneous set that is to be constructed is known in advance. Hence, the proof of Theorem~\ref{thm:CRT-SRT} also yields the following result.

\begin{corollary}[Jumps in the case of known color] \label{cor:CRT-SRT} $0\mbox{-}\text{\rm\sffamily SRT}_{n+1,2}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{n,2}'$ for all $n\geq1$. \end{corollary}

We note that Corollaries~\ref{cor:CRT-SRT} and \ref{cor:color-reduction-zero} allow us to improve the bound given in Corollary~\ref{cor:color-reduction} somewhat.

\begin{corollary}[Color reduction with jumps] \label{cor:color-reduction-jumps} $\text{\rm\sffamily SRT}_{n,{\mathbb{N}}}\mathop{\leq_{\mathrm{sW}}}\text{\rm\sffamily RT}_{n,2}'$ for all $n\geq1$. \end{corollary}

We collect a number of lower bound results in the following corollary; they strengthen some of the earlier results mentioned in Corollaries~\ref{cor:discrete-lower-bounds} and \ref{cor:delayed-parallelization}.
\begin{corollary}[Lower bounds with jumps] \label{cor:lower bounds with jumps} $\lim_{\mathbb{N}}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{1,2}'$, $\lim\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{2,2}'$ and\linebreak $\text{\rm\sffamily WKL}^{(n)}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{n+1,2}'$ for all $n\geq2$. \end{corollary} \begin{proof} We obtain $\lim_{\mathbb{N}}\mathop{\equiv_{\mathrm{W}}}\text{\rm\sffamily SRT}_{1,{\mathbb{N}}}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{1,2}'$ by Proposition~\ref{prop:bottom} and Corollary~\ref{cor:color-reduction-jumps}. With the help of Fact~\ref{fact:WKL-BWT}, Proposition~\ref{prop:bottom}, Theorem~\ref{thm:lower-bound} and Corollaries~\ref{cor:delayed-parallelization-color} and \ref{cor:CRT-SRT} we obtain \begin{itemize} \item $\lim\mathop{\equiv_{\mathrm{W}}}\widehat{\lim_2}\mathop{\equiv_{\mathrm{W}}}\widehat{\text{\rm\sffamily SRT}_{1,2}}\mathop{\leq_{\mathrm{sW}}} 0\mbox{-}\text{\rm\sffamily SRT}_{3,2}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{2,2}'$, \item $\text{\rm\sffamily WKL}^{(n)}\mathop{\equiv_{\mathrm{W}}}\widehat{\mbox{\rm\sffamily C}_2^{(n)}}\mathop{\leq_{\mathrm{W}}}\widehat{\text{\rm\sffamily SRT}_{n,2}}\mathop{\leq_{\mathrm{W}}}0\mbox{-}\text{\rm\sffamily SRT}_{n+2,2}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{n+1,2}'$ for $n\geq2$.\qedhere \end{itemize} \end{proof} Analogously to Question~\ref{quest:WKL-SRT32} the following question remains open. \begin{question} \label{quest:WKL-RT22} Does $\text{\rm\sffamily WKL}'\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{2,2}'$ hold? \end{question} Roughly speaking, Theorem~\ref{thm:CRT-SRT} indicates that any increase in the cardinality of Ramsey's theorem corresponds to a jump. We can also conclude from Corollary~\ref{cor:jumps} that Ramsey's theorem is increasing with respect to increasing cardinality. \begin{lemma}[Increasing cardinality] \label{lem:increasing-size} $\text{\rm\sffamily SRT}_{n,k}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{n,k}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily SRT}_{n+1,k}\mathop{\leq_{\mathrm{W}}} \text{\rm\sffamily RT}_{n+1,k}$ for all $n\geq1$, $k\geq1,{\mathbb{N}}$. \end{lemma} \begin{proof} It follows from Lemma~\ref{lem:basic-reductions} and Corollaries~\ref{cor:jumps} and \ref{cor:basic-equivalences} that\linebreak $\text{\rm\sffamily SRT}_{n,k}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{n,k}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{n,k}'\mathop{\leq_{\mathrm{sW}}}\mbox{\rm\sffamily C}RT_{n,k}'\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily SRT}_{n+1,k}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{n+1,k}$. \end{proof} We will soon see in Corollary~\ref{cor:increasing-size} and \ref{cor:RT22-SRT2N} that the reductions in this lemma are all strict in certain cases. We note that $\mbox{\rm\sffamily C}RT_{n,k}'\mathop{\leq_{\mathrm{W}}}\mbox{\rm\sffamily C}RT_{n,k}*\lim\mathop{\equiv_{\mathrm{W}}}\text{\rm\sffamily RT}_{n,k}*\lim$. The latter problem is very stable and has several useful descriptions. \begin{lemma}[Jump of the cylindrification] \label{lem:jump-cylindrification} For all $n\geq1$, $k\geq1,{\mathbb{N}}$ we obtain $\text{\rm\sffamily RT}_{n,k}*\lim\mathop{\equiv_{\mathrm{W}}}(\text{\rm\sffamily RT}_{n,k}\times{\rm id})'\mathop{\equiv_{\mathrm{W}}}\text{\rm\sffamily RT}_{n,k}'\times\lim$. \end{lemma} \begin{proof} The second equivalence holds since jumps and products commute by \cite[Proposition~5.7~(2)]{BGM12}. 
Since $\text{\rm\sffamily RT}_{n,k}\times{\rm id}$ is a cylinder we have by \cite[Corollary~5.17]{BGM12} \[(\text{\rm\sffamily RT}_{n,k}\times{\rm id})'\mathop{\equiv_{\mathrm{W}}}(\text{\rm\sffamily RT}_{n,k}\times{\rm id})*\lim\mathop{\equiv_{\mathrm{W}}}(\text{\rm\sffamily RT}_{n,k}*\lim)\times\lim\mathop{\equiv_{\mathrm{W}}}\text{\rm\sffamily RT}_{n,k}*\lim.\] The latter two equivalences hold since $\lim$ is idempotent. \end{proof} Since $\text{\rm\sffamily RT}_{n,k}\times{\rm id}$ is the cylindrification of Ramsey's theorem, this lemma characterizes the jump of the cylindrification of Ramsey's theorem up to Weihrauch equivalence. In particular, we obtain the following consequence of Theorem~\ref{thm:CRT-SRT}. \begin{corollary} \label{cor:SRT-RT-lim} $\text{\rm\sffamily SRT}_{n+1,k}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{n,k}*\lim$ for all $n\geq1$, $k\geq1,{\mathbb{N}}$. \end{corollary} This result is the crucial step towards our upper bound result. The next step involves a usage of the cohesiveness problem. We recall that a set $A\subseteq{\mathbb{N}}$ is called {\em cohesive} for a sequence $(R_i)_i$ of sets $R_i\subseteq{\mathbb{N}}$ if $A\cap R_i$ or $A\cap({\mathbb{N}}\setminus R_i)$ is finite for each $i\in{\mathbb{N}}$. In other words, up to finitely many exceptions, $A$ is fully included in any $R_i$ or its complement. By $\mbox{\rm\sffamily C}OH:(2^{\mathbb{N}})^{\mathbb{N}}\rightrightarrows2^{\mathbb{N}}$ we denote the cohesiveness problem, where $\mbox{\rm\sffamily C}OH(R_i)$ contains all sets $A\subseteq{\mathbb{N}}$ that are cohesive for $(R_i)_i$. The uniform computational content of $\mbox{\rm\sffamily C}OH$ has already been studied in \cite{DDH+16,BHK15,Dzh15}. The relevance of the cohesiveness problem for Ramsey's theorem has originally been noticed by Cholak et al.\ who proved over $\text{\rm\sffamily RCA}_0$ that $\text{\rm\sffamily RT}_{2,2}\iff\text{\rm\sffamily SRT}_{2,2}\wedge\mbox{\rm\sffamily C}OH$ \cite[Lemma~7.11]{CJS01} (see \cite{CJS09} for a correction). We use the same idea to prove the following, which was in the case of $n=k=2$ also observed by Dorais et al.\ \cite{DDH+16}. \begin{proposition} \label{prop:RT-SRT-COH} $\text{\rm\sffamily RT}_{n,k}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily SRT}_{n,k}*\mbox{\rm\sffamily C}OH$ for all $n\geq1$, $k\geq1,{\mathbb{N}}$. \end{proposition} \begin{proof} We fix some $n,k\geq1$. Given a coloring $c:[{\mathbb{N}}]^{n}\to k$ we compute a sequence $(R_i)_i$ of sets $R_i\subseteq {\mathbb{N}}$ as follows: \[R_{\langle i,j\rangle}:=\{r\geq\max\vartheta_{n-1}(i):c(\vartheta_{n-1}(i)\cup\{r\})=j\}\] for all $i,j\in{\mathbb{N}}$. Here $\vartheta_{n-1}$ denotes the numbering of $[{\mathbb{N}}]^{n-1}$ introduced in Section~\ref{sec:uniform}. With the help of $\mbox{\rm\sffamily C}OH$ we can compute an infinite cohesive set $Y\in\mbox{\rm\sffamily C}OH(R_i)_i$ for the sequence $(R_i)_i$. Let $\sigma:{\mathbb{N}}\to{\mathbb{N}}$ be the principal function of $Y$. Since $Y$ is cohesive for $(R_i)_i$ and ${\rm range}(c)$ is finite, it follows that for all $B\in[{\mathbb{N}}]^{n-1}$ there is some $j\in{\mathbb{N}}$ such that $c(B\cup\{r\})=j$ holds for almost all $r\in Y$. Hence, the coloring $c_\sigma:[{\mathbb{N}}]^n\to k$, defined by \[c_\sigma(A):=c(\sigma(A))\] for all $A\in[{\mathbb{N}}]^n$ is stable. With the help of $\text{\rm\sffamily SRT}_{n,k}$ we can compute an infinite homogeneous set $M_\sigma\in\text{\rm\sffamily SRT}_{n,k}(c_\sigma)$. 
It is clear that $M:=\sigma(M_\sigma)$ is an infinite homogeneous set for $c$. \end{proof}

Now we can combine Corollary~\ref{cor:SRT-RT-lim} with Proposition~\ref{prop:RT-SRT-COH} in both possible orders in order to obtain the following result. We recall that the compositional product $*$ is associative \cite[Proposition~31]{BP16}.

\begin{corollary} \label{cor:RT-SRT-lim-COH} For all $n\geq1$, $k\geq1,{\mathbb{N}}$ we obtain \begin{enumerate} \item $\text{\rm\sffamily RT}_{n+1,k}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{n,k}*\lim*\mbox{\rm\sffamily C}OH$, \item $\text{\rm\sffamily SRT}_{n+1,k}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily SRT}_{n,k}*\mbox{\rm\sffamily C}OH*\lim$. \end{enumerate} \end{corollary}

The diagram in Figure~\ref{fig:diagram-RT-COH-lim} illustrates the situation.

\begin{figure} \caption{Upper bounds for Ramsey's theorem for $n\geq2$, $k\geq2,{\mathbb{N}}$.} \label{fig:diagram-RT-COH-lim} \end{figure}

The first bound given by Corollary~\ref{cor:RT-SRT-lim-COH} is particularly useful, since the following result was proved in \cite[Corollary~14.14]{BHK15}.

\begin{fact} \label{fact:WKL-lim-COH} $\text{\rm\sffamily WKL}'\mathop{\equiv_{\mathrm{W}}}\lim*\mbox{\rm\sffamily C}OH$. \end{fact}

Fact~\ref{fact:WKL-lim-COH} together with Corollary~\ref{cor:RT-SRT-lim-COH} yields the following.

\begin{corollary}[Induction] \label{cor:induction} $\text{\rm\sffamily RT}_{n+1,k}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{n,k}*\text{\rm\sffamily WKL}'$ for all $n\geq1$, $k\geq1,{\mathbb{N}}$. \end{corollary}

This corollary can be interpreted as saying that $\text{\rm\sffamily WKL}'$ is sufficient to transfer $\text{\rm\sffamily RT}_{n,k}$ into $\text{\rm\sffamily RT}_{n+1,k}$. An obvious question is whether $\text{\rm\sffamily WKL}'$ is minimal with this property. It follows from Proposition~\ref{prop:RT2*RT1xlim} below and Lemma~\ref{lem:jump-cylindrification} that $\text{\rm\sffamily WKL}'$ cannot be replaced by $\lim$ in Corollary~\ref{cor:induction}. Corollary~\ref{cor:induction} could also be proved directly using so-called Erd\H{o}s-Rado trees.\footnote{This approach has been used in \cite[Proposition~6.6.1]{Rak15}; for a proof theoretic analysis of Ramsey's theorem this method has been applied in \cite{KK09a,KK12,BS14}, and in reverse mathematics it has been used to prove that Ramsey's theorem is provable in $\text{\rm\sffamily ACA}_0$; see \cite{EHMR84a} for the Erd\H{o}s-Rado method in general.} It reflects the fact that Ramsey's theorem can be proved inductively, where the induction base follows from the Bolzano-Weierstra\ss{} theorem for the $k$--point space by $\text{\rm\sffamily RT}_{1,k}\mathop{\equiv_{\mathrm{W}}}\text{\rm\sffamily BWT}_k\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily WKL}'$, which holds by Proposition~\ref{prop:bottom} and Fact~\ref{fact:WKL-BWT}. Altogether, this means that we can derive the following result from Corollary~\ref{cor:induction} and Fact~\ref{fact:WKL-BWT}(6).

\begin{corollary}[Upper bound] \label{cor:upper-bound} $\text{\rm\sffamily RT}_{n,k}\mathop{\leq_{\mathrm{sW}}}\text{\rm\sffamily WKL}^{(n)}$ for all $n\geq1$, $k\geq1,{\mathbb{N}}$. \end{corollary}

We note that we get a strong reduction here, since $\text{\rm\sffamily WKL}^{(n)}$ is a cylinder. The upper bound is tight in terms of the number of jumps and also up to parallelization.
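To see the combinatorial core of this upper bound in executable form, the following Python sketch (an illustration of ours, not part of the formal development) spells out the two computable steps from the proof of Proposition~\ref{prop:RT-SRT-COH} in the case $n=2$: the sets $R_{\langle i,j\rangle}$, truncated at a finite bound, and the transported coloring $c_\sigma$. The principal function $\sigma$ of the cohesive set is a hypothetical stand-in for an output of $\mbox{\rm\sffamily C}OH$, and all infinite objects are replaced by finite truncations.
\begin{verbatim}
# Illustrative sketch only: the computable steps in the proof of
# Proposition RT-SRT-COH for n = 2, with finite truncations.

def R(c, B, j, bound):
    # finite truncation of R_<i,j> = {r >= max(B) : c(B u {r}) = j},
    # where B plays the role of the (n-1)-element set coded by i
    return {r for r in range(max(B), bound) if c(frozenset(B) | {r}) == j}

def c_sigma(c, sigma):
    # transport the coloring c along the principal function sigma of a
    # cohesive set; if sigma enumerates a set that is cohesive for the
    # sets R_<i,j>, the resulting coloring is stable
    return lambda A: c(frozenset(sigma(a) for a in A))

# toy data: c colours a pair {i < j} by the parity of j - i, and the
# hypothetical cohesive set consists of the even numbers
c = lambda A: (max(A) - min(A)) % 2
sigma = lambda n: 2 * n

print(sorted(R(c, {3}, 0, 10)))   # [3, 5, 7, 9]
print(c_sigma(c, sigma)({1, 4}))  # colour of {2, 8}, namely 0
\end{verbatim}
The stable coloring obtained in this way is then passed to $\text{\rm\sffamily SRT}_{n,k}$, exactly as in the proof of Proposition~\ref{prop:RT-SRT-COH}.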
Using the upper bound from Corollary~\ref{cor:upper-bound} together with the lower bound from Corollary~\ref{cor:delayed-parallelization} yields the following characterization of Ramsey's theorem $\text{\rm\sffamily RT}$. \begin{corollary} \label{cor:RT} $\text{\rm\sffamily RT}\mathop{\equiv_{\mathrm{W}}}\bigsqcup_{n=0}^\infty\lim^{(n)}\mathop{\equiv_{\mathrm{W}}}\bigsqcup_{n=0}^\infty\text{\rm\sffamily WKL}^{(n)}$. \end{corollary} This degree corresponds to the class $\text{\rm\sffamily ACA}_0'$ in reverse mathematics (see \cite[Theorem~6.27]{Hir15}). We call a problem $f$ {\em countably irreducible} if $f\mathop{\leq_{\mathrm{W}}}\bigsqcup_{n=0}^\infty g_n$ implies $f\mathop{\leq_{\mathrm{W}}} g_n$ for some $n\in{\mathbb{N}}$.\footnote{This property has been called ``join-irreducible'' in previous publications, but formally it is a strengthening of join-irreducibility in the lattice theoretic sense. It is also not identical to ``countable join-irreducibility'' since the countable coproduct is not a supremum in general.} Since it is clear that $\mbox{\rm\sffamily C}_{{\mathbb{N}}^{\mathbb{N}}}$ is countably irreducible \cite[Corollary~5.6]{BBP12}, and it is easy to see that $\bigsqcup_{n=0}^\infty\lim^{(n)}\mathop{\leq_{\mathrm{W}}}\mbox{\rm\sffamily C}_{{\mathbb{N}}^{\mathbb{N}}}$, we obtain the following corollary. \begin{corollary} \label{cor:RT-CNN} $\text{\rm\sffamily RT}\mathop{<_{\mathrm{W}}}\mbox{\rm\sffamily C}_{{\mathbb{N}}^{\mathbb{N}}}$. \end{corollary} Here $\mbox{\rm\sffamily C}_{{\mathbb{N}}^{\mathbb{N}}}$ can be seen as a possible counterpart of $\text{\rm\sffamily ATR}_0$ in reverse mathematics, in the sense that some statements that are equivalent to $\text{\rm\sffamily ATR}_0$ over $\text{\rm\sffamily RCA}_0$ in reverse mathematics turn out to be equivalent to $\mbox{\rm\sffamily C}_{{\mathbb{N}}^{\mathbb{N}}}$ in the Weihrauch lattice if interpreted as problems.\footnote{Results in this direction are not yet published, but this emerged during a discussion at a recent Dagstuhl seminar on the subject.} We obtain the following characterization of the parallelization of Ramsey's theorem. \begin{corollary}[Parallelization] \label{cor:parallelization} $\widehat{\text{\rm\sffamily RT}_{n,k}}\mathop{\equiv_{\mathrm{W}}}\text{\rm\sffamily WKL}^{(n)}\mathop{\equiv_{\mathrm{sW}}}\widehat{\mbox{\rm\sffamily C}RT_{n,k}}$ for all $n\geq1$, $k\geq2,{\mathbb{N}}$ and $\widehat{\text{\rm\sffamily SRT}_{n,k}}\mathop{\equiv_{\mathrm{W}}}\text{\rm\sffamily WKL}^{(n)}\mathop{\equiv_{\mathrm{sW}}}\widehat{\mbox{\rm\sffamily C}SRT_{n,k}}$ for all $n\geq2$, $k\geq2,{\mathbb{N}}$. 
\end{corollary} \begin{proof} Using Fact~\ref{fact:WKL-BWT}, Corollary~\ref{cor:basic-equivalences}, the lower bound from Corollary~\ref{cor:lower-bound}, the upper bound from Corollary~\ref{cor:upper-bound} and the fact that parallelization is a closure operator we obtain the following reduction chain for $n\geq2$, $k\geq2,{\mathbb{N}}$: \[\text{\rm\sffamily WKL}^{(n)}\mathop{\equiv_{\mathrm{sW}}}\widehat{\mbox{\rm\sffamily C}_2^{(n)}}\mathop{\equiv_{\mathrm{sW}}}\widehat{\text{\rm\sffamily BWT}_2^{(n-1)}}\mathop{\leq_{\mathrm{sW}}}\widehat{\mbox{\rm\sffamily C}SRT_{n,k}}\mathop{\leq_{\mathrm{W}}}\widehat{\text{\rm\sffamily SRT}_{n,k}} \mathop{\leq_{\mathrm{sW}}}\widehat{\text{\rm\sffamily RT}_{n,k}}\mathop{\leq_{\mathrm{sW}}}\text{\rm\sffamily WKL}^{(n)},\] which implies $\widehat{\text{\rm\sffamily SRT}_{n,k}}\mathop{\equiv_{\mathrm{W}}}\widehat{\text{\rm\sffamily RT}_{n,k}}\mathop{\equiv_{\mathrm{W}}}\text{\rm\sffamily WKL}^{(n)}$, and the corresponding strong equivalences $\widehat{\mbox{\rm\sffamily C}SRT_{n,k}}\mathop{\equiv_{\mathrm{sW}}}\widehat{\mbox{\rm\sffamily C}RT_{n,k}}\mathop{\equiv_{\mathrm{sW}}}\text{\rm\sffamily WKL}^{(n)}$ follow, since the colored versions of Ramsey's theorem are cylinders by Corollary~\ref{cor:cylinder-CSRT}, and $\text{\rm\sffamily WKL}^{(n)}$ is a cylinder too. The proof for $n=1$ follows similarly using Proposition~\ref{prop:bottom}. \end{proof} We mention that $\widehat{\text{\rm\sffamily SRT}_{1,k}}\mathop{\equiv_{\mathrm{W}}}\lim\mathop{\equiv_{\mathrm{sW}}}\widehat{\mbox{\rm\sffamily C}SRT_{1,k}}$ holds for $k\geq2,{\mathbb{N}}$ by Corollary~\ref{cor:lower-bound}. By Corollary~\ref{cor:cylinder-SRT} the parallelized uncolored versions of Ramsey's theorem are not cylinders, and hence the Weihrauch equivalences $\mathop{\equiv_{\mathrm{W}}}$ in the previous corollary cannot be replaced by strong ones $\mathop{\equiv_{\mathrm{sW}}}$. Since $\text{\rm\sffamily WKL}^{(n)}\mathop{\leq_{\mathrm{W}}}\lim^{(n)}$ and $\text{\rm\sffamily WKL}^{(n)}\mathop{\not\leq_{\mathrm{W}}}\lim^{(n-1)}$ for all $n\geq1$ by Fact~\ref{fact:WKL-BWT} and $\lim^{(n)}$ is effectively $\SO{n+2}$--complete by Facts~\ref{fact:WKL-BWT} and \ref{fact:Borel}, we obtain the following corollary that characterizes the Borel complexity of Ramsey's theorem. \begin{corollary}[Borel complexity] \label{cor:Borel-measurability} $\text{\rm\sffamily SRT}_{n,k}$ and $\text{\rm\sffamily RT}_{n,k}$ are both effectively $\SO{n+2}$--measurable, but not effectively $\SO{n+1}$--measurable for all $n\geq2$, $k\geq2,{\mathbb{N}}$. \end{corollary} Both positive statements and the negative statement on $\text{\rm\sffamily RT}_{n,k}$ also hold for $n=1$. By an application of Facts~\ref{fact:Borel} and \ref{fact:WKL-BWT} we can also rephrase the negative statements as follows. \begin{corollary} \label{cor:SRT-limits} $\text{\rm\sffamily SRT}_{n,k}\mathop{\not\leq_{\mathrm{W}}}\lim^{(n-1)}$ for $n\geq2$ and $\text{\rm\sffamily RT}_{n,k}\mathop{\not\leq_{\mathrm{W}}}\lim^{(n-1)}$ for $n\geq1$ and both for $k\geq2,{\mathbb{N}}$. \end{corollary} Since effective $\SO{n}$--measurability is preserved downwards by Weihrauch reducibility by Fact~\ref{fact:Borel}, this in turn implies that Ramsey's theorem actually forms a strictly increasing chain with increasing cardinality. 
\begin{corollary}[Increasing cardinality] \label{cor:increasing-size} $\text{\rm\sffamily RT}_{n,k}\mathop{<_{\mathrm{W}}}\text{\rm\sffamily RT}_{n+1,2}$, $\text{\rm\sffamily RT}_{n,k}\mathop{<_{\mathrm{sW}}}\text{\rm\sffamily RT}_{n+1,2}$,\linebreak $\text{\rm\sffamily SRT}_{n,k}\mathop{<_{\mathrm{W}}}\text{\rm\sffamily SRT}_{n+1,2}$ and $\text{\rm\sffamily SRT}_{n,k}\mathop{<_{\mathrm{sW}}}\text{\rm\sffamily SRT}_{n+1,2}$ for all $n\geq1$ and $k\geq2,{\mathbb{N}}$. \end{corollary}

Here the positive parts of these reductions hold by Corollary~\ref{cor:color-reduction}. The separations (at least in the case of $\text{\rm\sffamily RT}$) also follow from classical non-uniform separation results of Jockusch \cite{Joc72}, who showed that every computable instance of $\text{\rm\sffamily RT}_{n,k}$ has a $\emptyset^{(n)}$--computable solution, whereas there are computable instances of $\text{\rm\sffamily RT}_{n+1,k}$ that have no $\emptyset^{(n)}$--computable solution.

We can also draw some conclusions on increasing numbers of colors. In \cite[Theorem~3.1]{DDH+16} Dorais et al.\ have proved that $\text{\rm\sffamily RT}_{n,k}\mathop{<_{\mathrm{sW}}}\text{\rm\sffamily RT}_{n,k+1}$ holds for all $n,k\geq1$. Their main tool was a version of the squashing theorem (Theorem~\ref{thm:squashing}) for strong Weihrauch reducibility. With the help of Theorem~\ref{thm:products} we can strengthen this result to ordinary Weihrauch reducibility, which answers \cite[Question~7.1]{DDH+16}. This result was independently obtained by Hirschfeldt and Jockusch~\cite[Theorem~3.3]{HJ16} and Patey~\cite[Corollary~3.15]{Pat16a}.

\begin{theorem}[Increasing numbers of colors] \label{thm:increasing-colors} $\text{\rm\sffamily RT}_{n,k}\mathop{<_{\mathrm{W}}}\text{\rm\sffamily RT}_{n,k+1}$ for all $n,k\geq1$. \end{theorem}

\begin{proof} Let us assume that $\text{\rm\sffamily RT}_{n,2}\times\text{\rm\sffamily RT}_{n+1,k}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{n+1,k}$ holds for some $n,k\geq1$. Then by the squashing theorem (Theorem~\ref{thm:squashing}) we obtain $\widehat{\text{\rm\sffamily RT}_{n,2}}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{n+1,k}$, and hence by Corollary~\ref{cor:parallelization} \[\lim\nolimits^{(n-1)}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily WKL}^{(n)}\mathop{\equiv_{\mathrm{W}}}\widehat{\text{\rm\sffamily RT}_{n,2}}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{n+1,k}\] in contradiction to Corollary~\ref{cor:limit-avoidance}. Hence $\text{\rm\sffamily RT}_{n,2}\times\text{\rm\sffamily RT}_{n+1,k}\mathop{\not\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{n+1,k}$ for all $n,k\geq1$. On the other hand, we have $\text{\rm\sffamily RT}_{n,2}\times\text{\rm\sffamily RT}_{n+1,k}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{n+1,k+1}$ by Theorem~\ref{thm:products}. This implies $\text{\rm\sffamily RT}_{n+1,k}\mathop{<_{\mathrm{W}}}\text{\rm\sffamily RT}_{n+1,k+1}$ for all $n,k\geq1$. The claim $\text{\rm\sffamily RT}_{1,k}\mathop{<_{\mathrm{W}}}\text{\rm\sffamily RT}_{1,k+1}$ for all $k\geq1$ was already known \cite[Theorem~13.4]{BGM12} via Proposition~\ref{prop:bottom}. \end{proof}

From this result we can also conclude that the two uniform versions of $\text{\rm\sffamily RT}^n_{<\infty}$ are not equivalent.
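The combinatorial device behind the bound $\text{\rm\sffamily RT}_{n,2}\times\text{\rm\sffamily RT}_{n+1,k}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{n+1,k+1}$ used above is the coloring $c^+$ from the proof of Theorem~\ref{thm:products}: the extra color absorbs all sets that fail to be homogeneous for the first coloring. The following Python sketch (an illustration of ours with hypothetical toy colorings, not part of the formal development) spells out this coloring in the case $n=1$.
\begin{verbatim}
# Illustrative sketch only: the coloring c+ from the proof of Theorem
# products, which merges c1 : [N]^n -> N and c2 : [N]^(n+1) -> k into
# a single coloring [N]^(n+1) -> k + 1 (here with n = 1).
from itertools import combinations

def homogeneous(c1, A):
    # A is homogeneous for c1 if all n-element subsets of A share a colour
    return len({c1(frozenset(B))
                for B in combinations(sorted(A), len(A) - 1)}) == 1

def c_plus(c1, c2, k):
    return lambda A: c2(frozenset(A)) if homogeneous(c1, A) else k

# hypothetical toy colorings: c1 colours singletons by parity and
# c2 colours pairs according to divisibility of their sum by 3
c1 = lambda A: min(A) % 2
c2 = lambda A: int(sum(A) % 3 == 0)
c = c_plus(c1, c2, k=2)
print(c({2, 4}), c({2, 3}))  # 1 2: a c1-homogeneous pair vs. an inhomogeneous one
\end{verbatim}
Exactly as in the proof of Theorem~\ref{thm:products}, an infinite set that is homogeneous for $c$ with a color different from $k$ is simultaneously homogeneous for $c_1$ and $c_2$.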
We recall that a problem $f:\subseteq X\rightrightarrows Y$ is called a {\em fractal}, if there is an $F:\subseteq{\mathbb{N}}^{\mathbb{N}}\rightrightarrows{\mathbb{N}}^{\mathbb{N}}$ with $F\mathop{\equiv_{\mathrm{W}}} f$ and $F|_A\mathop{\equiv_{\mathrm{W}}} f$ for all clopen $A\subseteq{\mathbb{N}}^{\mathbb{N}}$ such that $A\cap{\rm dom}(F)\not=\emptyset$. Analogously, {\em strong fractals} are defined with $\mathop{\equiv_{\mathrm{sW}}}$ in place of $\mathop{\equiv_{\mathrm{W}}}$. It is easy to see that $\text{\rm\sffamily RT}_{n,{\mathbb{N}}}$ and $\text{\rm\sffamily SRT}_{n,{\mathbb{N}}}$ are fractals and hence countably irreducible by \cite[Proposition~2.6]{BGM12}, and thus the strictness of the reductions given in the following corollary follows from Theorem~\ref{thm:increasing-colors}. \begin{corollary}[Arbitrary numbers of colors] \label{cor:arbitrary-colors} $\text{\rm\sffamily RT}_{n,+}\mathop{<_{\mathrm{W}}}\text{\rm\sffamily RT}_{n,{\mathbb{N}}}$ and \linebreak $\text{\rm\sffamily SRT}_{n,+}\mathop{<_{\mathrm{W}}}\text{\rm\sffamily SRT}_{n,{\mathbb{N}}}$ for all $n\geq1$. \end{corollary} Another consequence of the squashing theorem (Theorem~\ref{thm:squashing}) is the following result that shows that the complexity of Ramsey's theorem also grows with an increasing number of factors. This generalizes Corollary~\ref{cor:no-idempotency}. \begin{proposition}[Increasing number of factors] \label{prop:factors} $\text{\rm\sffamily RT}_{n,k}^m\mathop{<_{\mathrm{W}}}\text{\rm\sffamily RT}_{n,k}^{m+1}$ for all $n,m\geq1$ and $k\geq2$. \end{proposition} \begin{proof} Let us assume that $\text{\rm\sffamily RT}_{n,k}^{m+1}\mathop{\equiv_{\mathrm{W}}}\text{\rm\sffamily RT}_{n,k}\times\text{\rm\sffamily RT}_{n,k}^m\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{n,k}^{m}$ holds for some $n,m\geq1$ and $k\geq2$. Then by the squashing theorem (Theorem~\ref{thm:squashing}) we obtain $\widehat{\text{\rm\sffamily RT}_{n,k}}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{n,k}^m$, and hence by Corollaries~\ref{cor:finite-parallelization}, \ref{cor:color-reduction} and \ref{cor:parallelization} we can conclude \[\lim\nolimits^{(n-1)}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily WKL}^{(n)}\mathop{\equiv_{\mathrm{W}}}\widehat{\text{\rm\sffamily RT}_{n,k}}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{n,k}^m\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{n,k}^*\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{n+1,2},\] in contradiction to Corollary~\ref{cor:limit-avoidance}. Hence, $\text{\rm\sffamily RT}_{n,k}^{m+1}\mathop{\not\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{n,k}^{m}$. \end{proof} We note that Question~\ref{quest:colors-factors} remains, whether additional factors can make up for color increases. The characterization given in Corollary~\ref{cor:parallelization} also implies the following result on the arithmetic complexity of homogeneous sets with the help of the uniform low basis theorem~\cite[Theorem~8.3]{BBP12}. \begin{corollary}[Arithmetic complexity] \label{cor:arithmetic-complexity} Every computable sequence $(c_i)_i$ of colorings $c_i:[{\mathbb{N}}]^n\to k$ for $n\geq1$ and $k\geq2$ admits a sequence $(M_i)_i$ such that $\langle M_0,M_1,...\rangle'\mathop{\leq_{\mathrm{T}}}\emptyset^{(n+1)}$ and such that $M_i$ is an infinite homogeneous set for $c_i$ for each $i\in{\mathbb{N}}$. 
\end{corollary}

\begin{proof} We use the map $\text{\rm\sffamily L}:=\text{\rm\sffamily J}^{-1}\circ\lim$, where $\text{\rm\sffamily J}:{\mathbb{N}}^{\mathbb{N}}\to{\mathbb{N}}^{\mathbb{N}},p\mapsto p'$ is the Turing jump operation. The uniform low basis theorem~\cite[Theorem~8.3]{BBP12} states that $\text{\rm\sffamily WKL}\mathop{\leq_{\mathrm{sW}}}\text{\rm\sffamily L}$ holds, which implies $\text{\rm\sffamily WKL}^{(n)}\mathop{\leq_{\mathrm{sW}}}\text{\rm\sffamily L}^{(n)}$, since jumps are monotone with respect to strong Weihrauch reducibility. By Corollary~\ref{cor:parallelization} we have $\widehat{\text{\rm\sffamily RT}_{n,k}}\mathop{\equiv_{\mathrm{W}}}\text{\rm\sffamily WKL}^{(n)}$, and hence any computable instance $(c_i)_i$ of $\widehat{\text{\rm\sffamily RT}_{n,k}}$ has a solution $M=\langle M_0,M_1,...\rangle$ whose jump $M'$ is computable in $\emptyset^{(n+1)}$. \end{proof}

Here $n+1$ cannot be replaced by $n$ in $\emptyset^{(n+1)}$ by Corollary~\ref{cor:parallelization}, and in this sense this result is optimal. Hence, we have a striking difference in the arithmetic complexity between the non-uniform Ramsey theorem, as witnessed by Theorems~\ref{thm:Jockusch} and \ref{thm:CJS-upper}, and the uniform sequential version of Ramsey's theorem, as witnessed by Corollary~\ref{cor:arithmetic-complexity}. This yields another proof of Corollary~\ref{cor:parallelizability}. Since jumps commute with parallelization by \cite[Proposition~5.7(3)]{BGM12}, Corollary~\ref{cor:parallelization} also yields the following corollary, which states that under parallelization a jump of the colored versions of Ramsey's theorem corresponds exactly to an increase in cardinality by one.

\begin{corollary}[Parallelized jumps] \label{cor:jumps-parallelization-CSRT} For all $n\geq2$, $k\geq2,{\mathbb{N}}$ we obtain\linebreak $\widehat{\mbox{\rm\sffamily C}SRT_{n,k}'}\mathop{\equiv_{\mathrm{sW}}}\widehat{\mbox{\rm\sffamily C}SRT_{n+1,k}}\mathop{\equiv_{\mathrm{sW}}}\widehat{\mbox{\rm\sffamily C}RT_{n,k}'}\mathop{\equiv_{\mathrm{sW}}}\widehat{\mbox{\rm\sffamily C}RT_{n+1,k}}$. \end{corollary}

A similar property also holds for the uncolored version of Ramsey's theorem $\text{\rm\sffamily RT}_{n,k}$, but in this case it cannot be so easily concluded since the parallelized uncolored theorem is not a cylinder by Corollary~\ref{cor:cylinder-SRT}. Hence, we need the following reformulation of Corollary~\ref{cor:SRT-RT-lim} that is justified by Lemma~\ref{lem:jump-cylindrification}.

\begin{corollary} \label{cor:SRT-RT-lim2} $\text{\rm\sffamily SRT}_{n+1,k}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{n,k}'\times\lim$ for all $n\geq1$, $k\geq1,{\mathbb{N}}$. \end{corollary}

Now we are prepared to prove the following result on parallelized jumps of Ramsey's theorem.

\begin{corollary}[Parallelized jumps] \label{cor:jumps-parallelization-SRT} $\widehat{\text{\rm\sffamily RT}_{n,k}'}\mathop{\equiv_{\mathrm{W}}}\widehat{\text{\rm\sffamily RT}_{n+1,k}}$ for all $n\geq1$ and $k\geq2,{\mathbb{N}}$. \end{corollary}

\begin{proof} It follows from Corollary~\ref{cor:jumps} that $\text{\rm\sffamily RT}_{n,k}'\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{n+1,k}$, and hence one direction of the claims follows since parallelization is a closure operator.
By Corollaries~\ref{cor:parallelization}, \ref{cor:SRT-RT-lim2}, \ref{cor:lower bounds with jumps}, Fact~\ref{fact:WKL-BWT}~(3) and since parallelization commutes with products it follows that \[\widehat{\text{\rm\sffamily RT}_{n+1,k}}\mathop{\equiv_{\mathrm{W}}}\widehat{\text{\rm\sffamily SRT}_{n+1,k}}\mathop{\leq_{\mathrm{W}}}\widehat{\text{\rm\sffamily RT}_{n,k}'}\times\lim\mathop{\equiv_{\mathrm{W}}}\widehat{\text{\rm\sffamily RT}_{n,k}'}\times\widehat{\lim\nolimits_{\mathbb{N}}}\mathop{\leq_{\mathrm{W}}}\widehat{\text{\rm\sffamily RT}_{n,k}'},\] which completes the proof. \end{proof} We leave it open whether a corresponding fact can be established for the stable version of Ramsey's theorem (which is not the case for $n=1$). We have now completely provided all the positive information that is displayed in the diagram in Figure~\ref{fig:diagram-RT}. What still remains to be done are some of the separations. \section{Ramsey's Theorem for Pairs} \label{sec:pairs} In this section we want to discuss $\text{\rm\sffamily RT}_{2,2}$, which is of particular interest. For one, we derive some conclusions from the general results that we proved before, we mention some results that have been proved by other authors, and we raise some open questions. The neighborhood of Ramsey's theorem in the Weihrauch lattice is illustrated in the diagram in Figure~\ref{fig:diagram-RT22}. \begin{figure} \caption{Ramsey's theorem for pairs and two colors in the Weihrauch lattice: all solid arrows indicate strong Weihrauch reductions against the direction of the arrow, all dashed arrows indicate ordinary Weihrauch reductions, and the boxes indicate the given levels of the effective Borel hierarchy.} \label{fig:diagram-RT22} \end{figure} We start with two straightforward separation results that show that the jump of the cylindrification of $\text{\rm\sffamily RT}_{1,k}$ (see Lemma~\ref{lem:jump-cylindrification}) is incomparable with $\text{\rm\sffamily RT}_{2,2}$. \begin{proposition} \label{prop:RT2*RT1xlim} $\text{\rm\sffamily RT}_{2,2}\mathop{\not\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{1,{\mathbb{N}}}'\times\lim$ and $\text{\rm\sffamily RT}_{1,2}'\times\lim\mathop{\not\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{2,{\mathbb{N}}}$. \end{proposition} \begin{proof} For every computable input $\text{\rm\sffamily RT}_{1,{\mathbb{N}}}'\times\lim$ clearly has a limit computable output, since for a limit computable coloring $c:[{\mathbb{N}}]^1\to{\mathbb{N}}$ there is always some $k\in{\mathbb{N}}$ such that $A:=\{i\in{\mathbb{N}}:c(i)=k\}$ is infinite. Such an $A$ is clearly limit computable and $A\in\text{\rm\sffamily RT}_{1,{\mathbb{N}}}(c)$. However, $\text{\rm\sffamily RT}_{2,2}$ has computable inputs $c:[{\mathbb{N}}]^2\to2$, which have no limit computable outputs $A\in\text{\rm\sffamily RT}_{2,2}(c)$ by Theorem~\ref{thm:Jockusch}. Hence, $\text{\rm\sffamily RT}_{2,2}\mathop{\not\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{1,{\mathbb{N}}}'\times\lim$. While $\lim\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{1,2}'\times\lim$ holds obviously, we obtain $\lim\mathop{\not\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{2,{\mathbb{N}}}$ by Corollary~\ref{cor:limit-avoidance1}. Hence, $\text{\rm\sffamily RT}_{1,2}'\times\lim\mathop{\not\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{2,{\mathbb{N}}}$. \end{proof} This result yields the following corollary. \begin{corollary} \label{cor:RT12xlim-RT22} $\text{\rm\sffamily RT}_{1,2}'\times\lim\mathop{|_{\mathrm{W}}}\text{\rm\sffamily RT}_{2,2}$. 
\end{corollary}

Since we have $\text{\rm\sffamily SRT}_{2,{\mathbb{N}}}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{1,{\mathbb{N}}}*\lim\mathop{\equiv_{\mathrm{W}}}\text{\rm\sffamily RT}_{1,{\mathbb{N}}}'\times\lim$ by Corollary~\ref{cor:SRT-RT-lim} and Lemma~\ref{lem:jump-cylindrification}, we also obtain the following corollary of Proposition~\ref{prop:RT2*RT1xlim}.

\begin{corollary} \label{cor:RT22-SRT2N} $\text{\rm\sffamily RT}_{2,2}\mathop{\not\leq_{\mathrm{W}}}\text{\rm\sffamily SRT}_{2,{\mathbb{N}}}$ and hence, in particular, $\text{\rm\sffamily RT}_{2,2}\mathop{\not\leq_{\mathrm{W}}}\text{\rm\sffamily SRT}_{2,2}$. \end{corollary}

The corresponding non-uniform result in reverse mathematics was much harder to obtain and solved a longstanding open question \cite{CSY14}. The proof uses a non-standard model, whereas our result would follow from a separation with a standard model. Among other things, we are going to discuss the relation of $\text{\rm\sffamily RT}_{2,2}$ to the cohesiveness problem. The following proof is essentially the proof of Cholak, Jockusch and Slaman \cite[Theorem~12.5]{CJS01}.

\begin{proposition}[Cohesiveness] \label{prop:COH-RT22} $\mbox{\rm\sffamily C}OH\mathop{\leq_{\mathrm{sW}}}\text{\rm\sffamily RT}_{2,2}$. \end{proposition}

\begin{proof} Let $(R_i)_i$ be a sequence of sets $R_i\subseteq{\mathbb{N}}$. We compute a sequence $(S_i)_i$ by $S_{2i}:=R_i$ and $S_{2i+1}:=\{i\}$ for all $i\in{\mathbb{N}}$. We let $d(i,j):=\min\{k:\chi_{S_k}(i)\not=\chi_{S_k}(j)\}$. The definition of $(S_i)_i$ ensures that $d$ is well-defined for all $i<j$, and it can be computed from $(S_i)_i$. We compute a coloring $c:[{\mathbb{N}}]^2\to2$ with \[c\{i<j\}:=\left\{\begin{array}{ll} 0 & \mbox{if $i\in S_{d(i,j)}$}\\ 1 & \mbox{otherwise} \end{array}\right.,\] and we consider an infinite homogeneous set $M\in\text{\rm\sffamily RT}_{2,2}(c)$. We claim that $M$ is cohesive for $(S_i)_i$ and hence for $(R_i)_i$. Let us assume for a contradiction that $M$ is not cohesive for $(S_i)_i$, and let $k$ be the smallest number such that $M\cap S_k$ and $M\cap({\mathbb{N}}\setminus S_k)$ are both infinite. Since $k$ is minimal, there is a number $m\in{\mathbb{N}}$ such that $d(i,j)\geq k$ for all $i<j$ in $M$ with $i,j\geq m$. There are also sufficiently large $i_0,i_1,i_2\geq m$ in $M$ such that $i_0<i_1<i_2$, $\chi_{S_k}(i_0)\not=\chi_{S_k}(i_1)$ and $\chi_{S_k}(i_1)\not=\chi_{S_k}(i_2)$. This implies $d(i_0,i_1)=k=d(i_1,i_2)$, which in turn yields $c\{i_0,i_1\}\not=c\{i_1,i_2\}$, in contradiction to the homogeneity of $M$. Hence, $M$ is cohesive for $(R_i)_i$, and we obtain $\mbox{\rm\sffamily C}OH\mathop{\leq_{\mathrm{sW}}}\text{\rm\sffamily RT}_{2,2}$. \end{proof}

We obtain $\text{\rm\sffamily RT}_{2,2}\mathop{\not\leq_{\mathrm{W}}}\mbox{\rm\sffamily C}OH$ since $\mbox{\rm\sffamily C}OH\mathop{\leq_{\mathrm{sW}}}\lim$ by \cite[Proposition~12.10]{BHK15}, but on the other hand $\text{\rm\sffamily RT}_{2,2}\mathop{\not\leq_{\mathrm{W}}}\lim'$ by Corollary~\ref{cor:SRT-limits}. We obtain the following corollary.

\begin{corollary} \label{cor:COH-RT22-SRT22} $\text{\rm\sffamily SRT}_{2,2}\sqcup\mbox{\rm\sffamily C}OH\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{2,2}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily SRT}_{2,2}*\mbox{\rm\sffamily C}OH\mathop{\leq_{\mathrm{sW}}}\text{\rm\sffamily WKL}''$.
\end{corollary}

\begin{proof} By Proposition~\ref{prop:COH-RT22} and Lemma~\ref{lem:basic-reductions}, and since $\sqcup$ is the supremum with respect to $\mathop{\leq_{\mathrm{W}}}$, we have $\text{\rm\sffamily SRT}_{2,2}\sqcup\mbox{\rm\sffamily C}OH\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{2,2}$. The reduction $\text{\rm\sffamily RT}_{2,2}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily SRT}_{2,2}*\mbox{\rm\sffamily C}OH$ follows from Proposition~\ref{prop:RT-SRT-COH}. The last mentioned reduction in the corollary follows from Corollary~\ref{cor:upper-bound} and Facts~\ref{fact:WKL-lim-COH} and \ref{fact:WKL-BWT} since \[\text{\rm\sffamily SRT}_{2,2}*\mbox{\rm\sffamily C}OH\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily WKL}''*\mbox{\rm\sffamily C}OH\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily WKL}'*\lim*\mbox{\rm\sffamily C}OH\mathop{\equiv_{\mathrm{W}}} \text{\rm\sffamily WKL}'*\text{\rm\sffamily WKL}'\mathop{\equiv_{\mathrm{W}}} \text{\rm\sffamily WKL}''.\qedhere\] \end{proof}

Given the fact that $\text{\rm\sffamily SRT}_{2,2}\sqcup\mbox{\rm\sffamily C}OH\mathop{\leq_{\mathrm{sW}}}\text{\rm\sffamily SRT}_{2,2}\times\mbox{\rm\sffamily C}OH\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily SRT}_{2,2}*\mbox{\rm\sffamily C}OH$, it would be desirable to clarify the relation of $\text{\rm\sffamily RT}_{2,2}$ to all three of these problems.

\begin{question} \label{quest:SRT22-COH-RT22} How exactly does $\text{\rm\sffamily RT}_{2,2}$ relate to $\text{\rm\sffamily SRT}_{2,2}\sqcup\mbox{\rm\sffamily C}OH$, $\text{\rm\sffamily SRT}_{2,2}\times\mbox{\rm\sffamily C}OH$ and $\text{\rm\sffamily SRT}_{2,2}*\mbox{\rm\sffamily C}OH$? \end{question}

We can at least say something. We have $\mbox{\rm\sffamily C}OH\mathop{\leq_{\mathrm{W}}}\lim$ by \cite[Proposition~12.10]{BHK15} and by Corollary~\ref{cor:SRT-RT-lim} and Lemma~\ref{lem:jump-cylindrification} we know that $\text{\rm\sffamily SRT}_{2,2}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{1,2}'\times\lim$. Since $\lim$ is idempotent we obtain the following corollary.

\begin{corollary} \label{cor:SRT22-COH-RT12-lim} $\text{\rm\sffamily SRT}_{2,2}\times\mbox{\rm\sffamily C}OH\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{1,2}'\times\lim$. \end{corollary}

With Proposition~\ref{prop:RT2*RT1xlim} we arrive at the following conclusion.

\begin{corollary} \label{cor:RT22-SRT22-COH} $\text{\rm\sffamily RT}_{2,2}\mathop{\not\leq_{\mathrm{W}}}\text{\rm\sffamily SRT}_{2,2}\times\mbox{\rm\sffamily C}OH$. \end{corollary}

We also mention that Corollary~\ref{cor:products} and Proposition~\ref{prop:COH-RT22} imply the following.

\begin{corollary} $\text{\rm\sffamily SRT}_{2,2}\times\mbox{\rm\sffamily C}OH\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{2,4}$. \end{corollary}

We recall that the problem $\text{\rm\sffamily RT}_{1,2}'$ is also studied under the name $\text{\rm\sffamily D}_2^2$ (see for instance \cite{CJS01,CLY10}). We introduce the following notation, where we use lower indices again.

\begin{definition} We define $\text{\rm\sffamily D}_{n,k}:=\text{\rm\sffamily RT}_{1,k}^{(n-1)}$ for all $n,k\geq1$. \end{definition}

Dzhafarov proved that $\mbox{\rm\sffamily C}OH\mathop{\not\leq_{\mathrm{sW}}}\text{\rm\sffamily D}_{2,k}$ holds for all $k\in{\mathbb{N}}$ \cite[Corollary~1.10]{Dzh15}. In fact, his key theorem \cite[Theorem~1.5]{Dzh15} even yields the following stronger result.

\begin{corollary} \label{cor:COH-D} $\mbox{\rm\sffamily C}OH\mathop{\not\leq_{\mathrm{sW}}}\mbox{\rm\sffamily C}RT_{1,k}^{(m)}$ for all $k,m\in{\mathbb{N}}$.
\end{corollary}

In a subsequent paper \cite{Dzh16} Dzhafarov proved the following result.

\begin{theorem} \label{thm:COH-SRT22} $\mbox{\rm\sffamily C}OH\mathop{\not\leq_{\mathrm{W}}}\text{\rm\sffamily SRT}_{2,+}$ and hence, in particular, $\mbox{\rm\sffamily C}OH\mathop{\not\leq_{\mathrm{W}}}\text{\rm\sffamily SRT}_{2,2}$. \end{theorem}

Since K\H{o}nig's lemma $\text{\rm\sffamily K}L$ is often discussed in the context of $\text{\rm\sffamily RT}_{2,2}$, we mention it briefly in passing. By $\text{\rm\sffamily K}L:\subseteq{\rm Tr}_{\mathbb{N}}\rightrightarrows{\mathbb{N}}^{\mathbb{N}}$ we denote the multi-valued function that is defined on all infinite finitely branching trees $T\subseteq{\mathbb{N}}^*$ and such that $\text{\rm\sffamily K}L(T)=[T]$ is the set of infinite paths of $T$. The set ${\rm Tr}_{\mathbb{N}}$ of trees $T\subseteq{\mathbb{N}}^*$ is represented via characteristic functions of such trees as usual. We can also consider a variant $\text{\rm\sffamily K}L_+$ of $\text{\rm\sffamily K}L$, where the set of trees is represented by positive information only, i.e., by an enumeration of the corresponding tree $T$. The following theorem is essentially based on results from \cite{BGM12}. If $n\in{\mathbb{N}}$, then we write $\widehat{n}\in{\mathbb{N}}^{\mathbb{N}}$ for the constant sequence with value $n$.

\begin{theorem} \label{thm:KL} $\text{\rm\sffamily K}L\mathop{\equiv_{\mathrm{sW}}}\text{\rm\sffamily K}L_+\mathop{\equiv_{\mathrm{sW}}}\text{\rm\sffamily WKL}'$. \end{theorem}

\begin{proof} It is clear that $\text{\rm\sffamily K}L\mathop{\leq_{\mathrm{sW}}}\text{\rm\sffamily K}L_+$. By \cite[Corollaries~11.6 and 11.7]{BGM12} we have that $\text{\rm\sffamily WKL}'\mathop{\equiv_{\mathrm{sW}}}\text{\rm\sffamily BWT}_{{\mathbb{N}}^{\mathbb{N}}}$. It is easy to see that $\text{\rm\sffamily K}L_+\mathop{\leq_{\mathrm{sW}}}\text{\rm\sffamily BWT}_{{\mathbb{N}}^{\mathbb{N}}}$: Given a finitely branching tree $T\subseteq{\mathbb{N}}^*$ by an enumeration $T=\{w_n:n\in{\mathbb{N}}\}$, we just compute a sequence $(x_n)_n$ in ${\mathbb{N}}^{\mathbb{N}}$ with $x_n:=w_n\widehat{0}$. Since $T$ is finitely branching, the set $\overline{\{x_n:n\in{\mathbb{N}}\}}$ is compact, and hence $(x_n)_n$ has cluster points. It is clear that all cluster points of $(x_n)_n$ are infinite paths of $T$. This proves $\text{\rm\sffamily K}L_+\mathop{\leq_{\mathrm{sW}}}\text{\rm\sffamily BWT}_{{\mathbb{N}}^{\mathbb{N}}}$. We now prove $\text{\rm\sffamily BWT}_{{\mathbb{N}}^{\mathbb{N}}}\mathop{\leq_{\mathrm{sW}}}\text{\rm\sffamily K}L_+$. Given a sequence $(x_n)_n$ that lies in a compact subset of ${\mathbb{N}}^{\mathbb{N}}$, we enumerate a tree $T$ as follows. We start with the empty tree $T$ and in step $n=0,1,2,...$ we inspect $x_n$. If $w\sqsubseteq x_n$ is the longest prefix of $x_n$ that was already enumerated into $T$, then we enumerate $x_n|_k$ into $T$ in step $n$, where $k=|w|+1$. In this way a tree $T$ is enumerated such that $[T]$ is the set of cluster points of $(x_n)_n$. The tree $T$ is a finitely branching tree since $(x_n)_n$ lies within a compact set. Finally, we prove $\text{\rm\sffamily K}L_+\mathop{\leq_{\mathrm{sW}}}\text{\rm\sffamily K}L$.
Given a finitely branching tree $T\subseteq{\mathbb{N}}^*$ by an enumeration $T=\{w_n:n\in{\mathbb{N}}\}$, we compute a finitely branching tree $S\subseteq{\mathbb{N}}^*$ (by determining its characteristic function) such that ${\rm pr}_1([S])=[T]$, where \[{\rm pr}_1(\langle n_0,k_0\rangle,\langle n_1,k_1\rangle,\langle n_2,k_2\rangle...)=(n_0,n_1,n_2,...)\in{\mathbb{N}}^{\mathbb{N}}\] for all $n_i,k_i\in{\mathbb{N}}$. Since ${\rm pr}_1$ is computable, we obtain $\text{\rm\sffamily K}L_+\mathop{\leq_{\mathrm{sW}}}\text{\rm\sffamily K}L$ in this way. We create the tree $S$ in stages $n=0,1,2,...$ and in stage $n$ we inspect the $n$--th word $w_n$ that is enumerated into $T$. In this stage we decide that a word of the form $u_{w,v}:=\langle w_n(0),v(0)\rangle\langle w_n(1),v(1)\rangle...\langle w_n(k-1),v(k-1)\rangle$ with $k=|w_n|$ and all its prefixes belong to $S$, where $v$ is the lexicographically smallest word $v\in{\mathbb{N}}^k$ such that $u_{w,v}$ and all its prefixes have not yet been decided to belong to ${\mathbb{N}}^*\setminus S$. Additionally, we decide $w\in{\mathbb{N}}^*\setminus S$ for all non-empty words $w$ that have not yet been decided to belong to $S$ and such that $|w|\leq n$ and $w(i)\leq n$ for all $i\leq n-1$. On the one hand, this algorithm ensures that the tree $S$ is decided for larger and larger blocks of size $n$, and on the other hand, the construction guarantees that the resulting tree $S$ is finitely branching: on each level $i$ of $T$ only finitely many different values can occur and at some stage $n$ of the construction all words $w$ of length $i$ in $T$ have been considered and corresponding words $u_{w,v}$ have been added to $S$. The fact that we always choose the lexicographically smallest $v$ ensures that no new strings of length $i$ are added to $S$ after stage $n$. Finally, the construction also guarantees ${\rm pr}_1([S])=[T]$ as promised. \end{proof} Now Theorem~\ref{thm:KL} and Corollary~\ref{cor:delayed-parallelization} yield the following result, which was independently proved in \cite[Corollary~2.3]{HJ16} by a very different method. \begin{corollary} \label{cor:KL} $\text{\rm\sffamily K}L\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{3,2}$. \end{corollary} We note that by Corollary~\ref{cor:LLPO-RT} the reduction cannot be replaced by a strong one since $\mbox{\rm\sffamily C}_2\mathop{\leq_{\mathrm{sW}}}\text{\rm\sffamily WKL}\mathop{\leq_{\mathrm{sW}}}\text{\rm\sffamily K}L$. Likewise, one obtains ${\mathcal W}KL\mathop{\not\leq_{\mathrm{sW}}}\text{\rm\sffamily RT}_{n,k}$ for all $n,k\geq1$ since $\mbox{\rm\sffamily C}_2\mathop{\leq_{\mathrm{sW}}}{\mathcal W}KL$. We note that by Corollary~\ref{cor:limit-avoidance} we obtain that Corollary~\ref{cor:KL} is optimal in the following sense. \begin{corollary} \label{cor:KL-RT22} $\text{\rm\sffamily K}L\mathop{\not\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{2,{\mathbb{N}}}$. \end{corollary} However, we note the related Questions~\ref{quest:WKL-SRT32} and \ref{quest:WKL-RT22}. Liu proved the following theorem and lemma \cite[Theorem~1.5 and proof of Corollary~1.6]{Liu12}. \begin{theorem}[Liu 2012~\cite{Liu12}] \label{thm:Liu} For any set $C$ not of $\text{\rm\sffamily P}A$--degree and any $A\subseteq{\mathbb{N}}$ there exists an infinite subset $G$ of $A$ or ${\mathbb{N}}\setminus A$ such that $G\oplus C$ is also not of $\text{\rm\sffamily P}A$--degree. 
\end{theorem} \begin{lemma}[Liu 2012~\cite{Liu12}] \label{lem:Liu} For any set $C$ not of $\text{\rm\sffamily P}A$--degree and any uniformly $C$--computable sequence $(C_i)_i$ there exists a set $G$ which is cohesive for $(C_i)_i$ and such that $G\oplus C$ is also not of $\text{\rm\sffamily P}A$--degree. \end{lemma} We consider Peano arithmetic as the problem $\text{\rm\sffamily P}A:{\mathcal D}\rightrightarrows{\mathcal D},{\bf b}\mapsto\{{\bf a}:{\bf a}\gg{\bf b}\}$ in the Weihrauch lattice, where ${\mathcal D}$ denotes the set of Turing degrees represented by their members and ${\bf a}\gg{\bf b}$ expresses the property that ${\bf a}$ is a $\text{\rm\sffamily P}A$--degree relative to ${\bf b}$. From Liu's Theorem~\ref{thm:Liu} we can directly derive $\text{\rm\sffamily P}A\mathop{\not\leq_{\mathrm{W}}}\text{\rm\sffamily D}_{n,2}$ for all $n\geq1$, and we also obtain the following corollary. \begin{corollary} \label{cor:PA} $\text{\rm\sffamily P}A\mathop{\not\leq_{\mathrm{W}}}\mbox{\rm\sffamily C}RT_{1,2}^{(n-1)}$ for all $n\geq1$. \end{corollary} In particular, this implies $\text{\rm\sffamily P}A\mathop{\not\leq_{\mathrm{W}}}\text{\rm\sffamily SRT}_{2,2}$, since $\text{\rm\sffamily SRT}_{2,2}\mathop{\equiv_{\mathrm{W}}}\mbox{\rm\sffamily C}RT_{1,2}'$ by Theorem~\ref{thm:CRT-SRT}. Using Liu's Lemma~\ref{lem:Liu} this can be strengthened as follows. \begin{corollary} \label{cor:PA-RT22} $\text{\rm\sffamily P}A\mathop{\not\leq_{\mathrm{W}}}\text{\rm\sffamily SRT}_{2,2}*\mbox{\rm\sffamily C}OH$. \end{corollary} Since $\text{\rm\sffamily RT}_{2,2}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily SRT}_{2,2}*\mbox{\rm\sffamily C}OH$ by Proposition~\ref{prop:RT-SRT-COH}, this implies $\text{\rm\sffamily P}A\mathop{\not\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{2,2}$. Since $\text{\rm\sffamily P}A\mathop{<_{\mathrm{W}}}\text{\rm\sffamily WKL}$ \cite[Theorem~5.5, Corollary~6.4]{BHK15}, we also get the following conclusion. \begin{corollary} \label{cor:WKL} $\text{\rm\sffamily WKL}\mathop{\not\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{2,2}$. \end{corollary} Likewise, one should be able to use the methods provided by Liu \cite{Liu15} to show that $\text{\rm\sffamily MLR}\mathop{\not\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{2,2}$ (where $\text{\rm\sffamily MLR}$ denotes the problem to find a point that is Martin-L\"of random relative to the input) and hence ${\mathcal W}KL\mathop{\not\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{2,2}$. It would also be interesting to find out whether a variant of the above ideas can be used to answer Question~\ref{quest:WKL-RT22}. By $\mbox{\rm\sffamily C}onC_X$ we denote the restriction of $\mbox{\rm\sffamily C}_X$ to connected subsets. We now want to discuss the intermediate value theorem $\text{\rm\sffamily IVT}$, which we do not define here, but we identify it with $\mbox{\rm\sffamily C}onC_{[0,1]}$ (see \cite[Corollary~4.10]{BLRMP16a} and \cite[Theorem~6.2]{BG11a}, where $\mbox{\rm\sffamily C}onC_{[0,1]}$ appears under the names $\mbox{\rm\sffamily C}onC_1$ and $\mbox{\rm\sffamily C}_I$, respectively). $\mbox{\rm\sffamily C}onC_{[0,1]}$ can be seen as a one-dimensional version of the Brouwer fixed point theorem and thus of the equivalence class of $\text{\rm\sffamily WKL}$ (see \cite{BLRMP16a}). Hence, from the uniform perspective it is an obvious question to ask how $\text{\rm\sffamily IVT}$ is related to Ramsey's theorem. In order to approach this question we provide a new upper bound on $\text{\rm\sffamily IVT}$ which is of independent interest. 
By ${\mbox{\rm\sffamily C}L_X:\subseteq X^{\mathbb{N}}\rightrightarrows X}$ we denote the {\em cluster point problem} of $X$, which is the problem of finding a cluster point of the given input sequence (this is an extension of $\text{\rm\sffamily BWT}_X$ since we do not demand that the input sequence lies within a compact set, but only that it admits a cluster point). By \cite[Theorem~9.4]{BGM12} we have $\mbox{\rm\sffamily C}L_X\mathop{\equiv_{\mathrm{sW}}}\mbox{\rm\sffamily C}_X'$ for every computable metric space $X$.

\begin{proposition} \label{prop:IVT} $\text{\rm\sffamily IVT}\mathop{\leq_{\mathrm{W}}}\mbox{\rm\sffamily C}_{\mathbb{N}}'$. \end{proposition}

\begin{proof} We have $\text{\rm\sffamily IVT}\mathop{\equiv_{\mathrm{W}}}\mbox{\rm\sffamily C}onC_{[0,1]}\mathop{\equiv_{\mathrm{W}}}\text{\rm\sffamily B}_I$ \cite[Proposition~3.6 and Theorem~6.2]{BG11a}, where $\text{\rm\sffamily B}_I:\subseteq{\mathbb{R}}_<\times{\mathbb{R}}_>\rightrightarrows{\mathbb{R}},(x,y)\mapsto\{z:x\leq z\leq y\}$ with ${\rm dom}(\text{\rm\sffamily B}_I):=\{(x,y):x\leq y\}$ denotes the boundedness problem introduced in \cite{BG11a} and $\mbox{\rm\sffamily C}_{\mathbb{N}}'\mathop{\equiv_{\mathrm{W}}}\mbox{\rm\sffamily C}L_{\mathbb{N}}$~\cite[Theorem~9.4]{BGM12}. Here ${\mathbb{R}}_<$ and ${\mathbb{R}}_>$ denote the real numbers represented as supremum and infimum of sequences of rational numbers, respectively. The aim is to prove $\text{\rm\sffamily B}_I\mathop{\leq_{\mathrm{W}}}\mbox{\rm\sffamily C}L_{\mathbb{N}}$. Hence, we can assume that we have two monotone sequences $(a_n)_n$ and $(b_n)_n$ of rational numbers in $[0,1]$ with $a_n\leq a_{n+1}<b_{n+1}\leq b_n$ for all $n\in{\mathbb{N}}$, and the goal is to find some $x\in[0,1]$ with $a:=\sup_{n\in{\mathbb{N}}}a_n\leq x\leq\inf_{n\in{\mathbb{N}}}b_n=:b$. Now we produce a sequence $(c_n)_n$ of natural numbers in stages $n=0,1,2,...$ as follows, where we use a variable $k\in{\mathbb{N}}$ that is initially $k=0$. If at stage $n$ we find that $|b_n-a_n|<2^{-k-1}$, then we let $k:=k+1$, and we choose $c_n:=0$. Otherwise, we choose some value $c_n\not=0$ such that $a_n<\overline{c_n}<b_n$, where $\overline{c_n}$ is the rational number with code $c_n$. If possible, we choose $c_n$ such that it is identical to the previous such number chosen, and if that is impossible, then we choose $c_n$ with $\overline{c_n}:=a_n+\frac{b_n-a_n}{2}$. Now there are two possible cases. If $a=b$, then the sequence $(c_n)_n$ contains infinitely many zeros; at most one other number $c$ occurs infinitely often in $(c_n)_n$, and for such a $c$ we necessarily have $a=b=\overline{c}$. Otherwise, if $a\not=b$, then the sequence $(c_n)_n$ contains only finitely many zeros, and exactly one number $c$ different from zero occurs infinitely often. This number satisfies $a\leq \overline{c}\leq b$. The result of $\mbox{\rm\sffamily C}L_{\mathbb{N}}(c_n)_n$ is one number $c$ that occurs infinitely often in $(c_n)_n$. If $c=0$, then we can compute $x:=a=b$ with the help of the input $(a_n)_n$, $(b_n)_n$. If $c\not=0$, then $x:=\overline{c}$ is a suitable result. \end{proof}

Propositions~\ref{prop:IVT}, \ref{prop:KN-CN} and Theorem~\ref{thm:jumps-compact-choice}, which we are going to prove in Section~\ref{sec:boundedness}, yield the reduction $\text{\rm\sffamily IVT}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily SRT}_{2,{\mathbb{N}}}$. Ludovic Patey (personal communication) improved this result by showing that $\text{\rm\sffamily IVT}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{2,2}$.
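The search procedure in the proof of Proposition~\ref{prop:IVT} is simple enough to be written out explicitly. The following Python sketch (an illustration of ours, not part of the formal development) simulates finitely many stages of the construction of $(c_n)_n$; rational numbers are used directly in place of their codes, and the two input sequences are hypothetical toy data converging to $a=b=\frac{1}{3}$.
\begin{verbatim}
# Illustrative sketch only: finitely many stages of the sequence (c_n)
# from the proof of Proposition IVT, with rationals in place of codes.
from fractions import Fraction

def stages(a, b, N):
    k, last, out = 0, None, []
    for n in range(N):
        an, bn = a(n), b(n)
        if bn - an < Fraction(1, 2 ** (k + 1)):
            k += 1
            out.append(Fraction(0))        # code 0: the interval has shrunk
        else:
            if last is None or not (an < last < bn):
                last = an + (bn - an) / 2  # fall back to the midpoint
            out.append(last)               # a rational strictly between a_n and b_n
    return out

# hypothetical toy input: both sequences converge to a = b = 1/3
a = lambda n: max(Fraction(0), Fraction(1, 3) - Fraction(1, 2 ** n))
b = lambda n: min(Fraction(1), Fraction(1, 3) + Fraction(1, 2 ** n))
print(stages(a, b, 8))   # three stages with value 1/2, then only 0
\end{verbatim}
In this run the value $\frac{1}{2}$ occurs only finitely often and all later stages output $0$, in accordance with the case $a=b$ of the proof, where $0$ occurs infinitely often and a cluster point of $(c_n)_n$ allows us to recover a suitable $x$.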
Patey's result answered an open question posed in an earlier version of this article. $\text{\rm\sffamily RT}_{2,2}$ and $\text{\rm\sffamily SRT}_{2,2}$ cannot be Weihrauch reducible to any of $\text{\rm\sffamily P}A,\text{\rm\sffamily MLR}$ or $\mbox{\rm\sffamily C}OH$ since the latter are reducible to $\lim$ and the former are not. However, we can even say more. We recall that a problem $f:\subseteq X\rightrightarrows Y$ is called {\em densely realized}, if the set $\{F(p):F\vdash f\}$ is dense in the domain of the representation of $Y$ for all $p$ that are names for an input in the domain of $f$, which intuitively means that a name of a result can start with any arbitrary prefix (that is a prefix of a valid name). Since it is easy to see that $\text{\rm\sffamily P}A,\text{\rm\sffamily MLR}$ and $\mbox{\rm\sffamily C}OH$ are all densely realized, it follows by \cite[Proposition~4.3]{BHK15} that $\text{\rm\sffamily P}A^{(m)},\text{\rm\sffamily MLR}^{(m)}$ and $\mbox{\rm\sffamily C}OH^{(m)}$ are all indiscriminative. In particular we obtain the following corollary. \begin{corollary} \label{cor:RT22-PA-MLR-COH} $\text{\rm\sffamily SRT}_{1,2}\mathop{\not\leq_{\mathrm{W}}}\text{\rm\sffamily P}A^{(m)}$, $\text{\rm\sffamily SRT}_{1,2}\mathop{\not\leq_{\mathrm{W}}}\text{\rm\sffamily MLR}^{(m)}$ and $\text{\rm\sffamily SRT}_{1,2}\mathop{\not\leq_{\mathrm{W}}}\mbox{\rm\sffamily C}OH^{(m)}$ for all $m\in{\mathbb{N}}$. \end{corollary} We mention that it was noted by Hirschfeldt and Jockusch \cite[Figure~6]{HJ16} that the following corollary follows from \cite[Theorem~2.3]{HJK+08}. \begin{corollary}[Diagonally non-computable functions] \label{cor:DNC} $\text{\rm\sffamily DNC}_{\mathbb{N}}\mathop{\leq_{\mathrm{sW}}}\text{\rm\sffamily RT}_{1,2}'=\text{\rm\sffamily D}_{2,2}$. \end{corollary} More information on the uniform content and relations between the problems $\text{\rm\sffamily DNC},\text{\rm\sffamily P}A,\text{\rm\sffamily WKL},\text{\rm\sffamily WWKL},\text{\rm\sffamily MLR},\mbox{\rm\sffamily C}OH$ and other problems can be found in \cite{BHK15}. \section{Separation Techniques for Jumps} \label{sec:separation} In this section we plan to prove some further separation results related to $\text{\rm\sffamily RT}_{2,2}$ (that justify some of the missing arrows in the diagram in Figure~\ref{fig:diagram-RT22}). For this purpose we collect some classical results that yield separation techniques for jumps. Often continuity arguments can be used for separation results. In the case of reductions to jumps of the form $f\mathop{\leq_{\mathrm{W}}} g'$, such continuity arguments are not always applicable, since jumps $g'$ implicitly involve limits. However, continuity arguments can occasionally be replaced by arguments based on the existence of continuity points, which are still available for limit computable functions. The classical techniques that we recall here concern $\SO{2}$--measurable functions, which form the topological counterpart of limit computable functions. Baire proved that the set of points of continuity of every $\SO{2}$--measurable function is a comeager $G_\delta$--set under certain conditions. We recall that a set is called {\em comeager} if it contains a countable intersection of dense open sets. The following result is \cite[Theorem~24.14]{Kec95}. \begin{proposition}[Baire~\cite{Kec95}] \label{prop:Baire} Let $X,Y$ be metric spaces, and let $Y$ additionally be separable. If $f:X\to Y$ is $\SO{2}$--measurable, then the set of points of continuity of $f$ is a comeager $G_\delta$--set in $X$.
\end{proposition} We recall that a topological space $X$ is called a {\em Baire space}\footnote{We distinguish between {\em a} Baire space $X$ in general and {\em the} Baire space ${\mathbb{N}}^{\mathbb{N}}$, in particular, which is also an instance of a Baire space.}, if every comeager set $C\subseteq X$ is dense in the space $X$. Hence we get the following corollary. \begin{corollary} \label{cor:Baire} Let $X$ and $Y$ be metric spaces, where $X$ is additionally a Baire space and $Y$ is separable. If $f:X\to Y$ is $\SO{2}$--measurable, then $f|_U$ has a point of continuity for every non-empty open set $U\subseteq X$. \end{corollary} By the Baire category theorem \cite[Theorem~8.4]{Kec95}, every complete separable metric space is a Baire space. Moreover, closed subspaces of complete separable metric spaces are complete again. Hence Corollary~\ref{cor:Baire} easily implies the direction ``(1) $\Longrightarrow$ (2)'' of the following characterization of $\SO{2}$--measurable functions \cite[Theorem~24.15]{Kec95}. \begin{theorem}[Baire characterization theorem~\cite{Kec95}] \label{thm:Baire} Let $X,Y$ be separable metric spaces, and let $X$ additionally be complete. For every function $f:X\to Y$ the following conditions are equivalent to each other: \begin{enumerate} \item $f$ is $\SO{2}$--measurable, \item $f|_A$ has a point of continuity for every non-empty closed set $A\subseteq X$. \end{enumerate} \end{theorem} We note that while Theorem~\ref{thm:Baire} yields a stronger conclusion than Corollary~\ref{cor:Baire}, it also requires stronger conditions on the domain $X$. In certain situations, we will be dealing with Baire spaces $X^{\mathbb{N}}$ which are not Polish spaces, and hence we will have to resort to Corollary~\ref{cor:Baire}. We recall \cite[Proposition~9.1]{Bra05} that the limit map $\lim$ is a prototype of a $\SO{2}$--measurable function (relative to its domain of convergent sequences), and hence Theorem~\ref{thm:Baire} can be used as a separation tool for certain reductions to jumps, since limit computable functions are closed under composition with continuous functions. \begin{fact} \label{fact:lim} $\lim:\subseteq{\mathbb{N}}^{\mathbb{N}}\to{\mathbb{N}}^{\mathbb{N}}$ is $\SO{2}$--measurable relative to its domain. \end{fact} For the following result we use the Baire characterization theorem~\ref{thm:Baire} as a separation technique. For partial functions $f:\subseteq X\to Y$ we use the notation $f(A):={\{y\in Y:(\exists x\in A)\; f(x)=y\}}$ for the image of $A$ under $f$. \begin{theorem} \label{thm:BWT-RTxlim} $\text{\rm\sffamily BWT}_{k+1}\mathop{\not\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{1,k}'\times\lim$ for all $k\geq1$. \end{theorem} \begin{proof} Since $\text{\rm\sffamily RT}_{1,k}'\times\lim$ is a cylinder, it suffices to prove $\text{\rm\sffamily BWT}_{k+1}\mathop{\not\leq_{\mathrm{sW}}}\text{\rm\sffamily RT}_{1,k}'\times\lim$. Let us assume for a contradiction that $\text{\rm\sffamily BWT}_{k+1}\mathop{\leq_{\mathrm{sW}}}\text{\rm\sffamily RT}_{1,k}'\times\lim$, i.e., there are computable functions $H,K$ such that $HGK$ is a realizer of $\text{\rm\sffamily BWT}_{k+1}$ whenever $G$ is a realizer of $\text{\rm\sffamily RT}_{1,k}'\times\lim=(\text{\rm\sffamily RT}_{1,k}\times{\rm id})\circ(\lim\times\lim)$. Since $\text{\rm\sffamily BWT}_{k+1}$ is total, it follows that $\langle K_1,K_2\rangle:=\langle\lim\times\lim\rangle\circ K$ is total too, and this function is $\SO{2}$--measurable by Fact~\ref{fact:Borel}.
We construct a number of points $p_0,...,p_k\in{\mathbb{N}}^{\mathbb{N}}$ inductively. By the Theorem~\ref{thm:Baire} there is a point $p_0\in{\mathbb{N}}^{\mathbb{N}}$ of continuity of $\langle K_1,K_2\rangle$. Let $\langle s_0,r_0\rangle:=\langle K_1(p_0),K_2(p_0)\rangle$, let $q_0\in\text{\rm\sffamily RT}_{1,k}(s_0)$ be a maximal homogeneous set, and let $c_0\leq k-1$ be the corresponding color. Let $k_0\leq k$ be such that $H\langle q_0,r_0\rangle=k_0$. By continuity of $H$ there exists a prefix $w_0\sqsubseteq\langle q_0,r_0\rangle$ such that $H(w_0{\mathbb{N}}^{\mathbb{N}})=\{k_0\}$. We can assume that $w_0=\langle x_0,y_0\rangle$ is of even length $2n_0$. Since $\langle K_1,K_2\rangle$ is continuous at $p_0$, there is a $v_0\sqsubseteq p_0$ such that $K_1(v_0{\mathbb{N}}^{\mathbb{N}})\subseteq s_0|_{n_0}{\mathbb{N}}^{\mathbb{N}}$ and $K_2(v_0{\mathbb{N}}^{\mathbb{N}})\subseteq y_0{\mathbb{N}}^{\mathbb{N}}$. Now we consider $N_0:=\{0,...,k\}\setminus\{k_0\}$ and the closed set $A_0:=v_0N_0^{\mathbb{N}}$. Again by Theorem~\ref{thm:Baire} there exists a point $p_1\in A_0$ of continuity of $\langle K_1,K_2\rangle|_{A_0}$. Let $\langle s_1,r_1\rangle:=\langle K_1(p_1),K_2(p_1)\rangle$, let $q_1\in\text{\rm\sffamily RT}_{1,k}(s_1)$ be a maximal homogeneous set, and let $c_1\leq k-1$ be the corresponding color. Let $k_1\leq k$ be such that $H\langle q_1,r_1\rangle = k_1$. Since $p_1\in A_0$ contains $k_0$ at most finitely many times, it follows that $k_1\not=k_0$. By continuity of $H$ there exists a prefix $w_1\sqsubseteq \langle q_1,r_1\rangle$ such that $H(w_1{\mathbb{N}}^{\mathbb{N}})=\{k_1\}$. Again we can assume that $w_1=\langle x_1,y_1\rangle$ is of even length $2n_1$, and additionally we can assume $n_1>n_0$. Since $v_0\sqsubseteq p_1$ we have $y_0\sqsubseteq K_2(p_1)=r_1$, and we obtain \[s_1|_{n_0}=K_1(p_1)|_{n_0}=K_1(p_0)|_{n_0}=s_0|_{n_0}.\] If $c_1=c_0$, i.e., if the maximal homogeneous sets $q_0$ and $q_1$ of $s_0$ and $s_1$, respectively, have the same color, then $x_0=q_0|_{n_0}=q_1|_{n_0}$ follows due to the maximality of $q_0$ and $q_1$ and hence $w_0\sqsubseteq\langle q_1,r_1\rangle$, which implies $H\langle q_1,r_1\rangle=k_0$ and hence $k_0=k_1$. Since $k_0\not=k_1$, we can conclude that $c_0\not=c_1$. Now we continue as before: since $\langle K_1,K_2\rangle|_{A_0}$ is continuous at $p_1$, there is a $v_1\sqsubseteq p_1$ such that $K_1(v_1{\mathbb{N}}^{\mathbb{N}})\subseteq s_1|_{n_1}{\mathbb{N}}^{\mathbb{N}}$ and $K_2(v_1{\mathbb{N}}^{\mathbb{N}})\subseteq y_1{\mathbb{N}}^{\mathbb{N}}$. Now we consider $N_1:=\{0,...,k\}\setminus\{k_0,k_1\}$ and the closed set $A_1:=v_1N_1^{\mathbb{N}}$. Again by Theorem~\ref{thm:Baire} there exists a point $p_2\in A_1$ of continuity of $\langle K_1,K_2\rangle_{A_1}$. In this case we eventually obtain colors $k_2\leq k,c_2\leq k-1$ such that $k_2\not\in\{k_0,k_1\}$ and $c_2\not\in\{c_0,c_1\}$. This construction can be repeated $k$ times since there are $k+1$ colors in $\{0,...,k\}$ until $p_0,...,p_k$ and $c_0,...,c_k$ are determined. However, the construction yields that the $c_0,...,c_k$ have to be pairwise different, which is a contradiction since there are only $k$ colors $c\in\{0,...,k-1\}$ available. \end{proof} We get the following immediate corollary with the help of Lemma~\ref{lem:jump-cylindrification} and Corollary~\ref{cor:SRT-RT-lim}. \begin{corollary} \label{cor:BWTk-SRT2k} $\text{\rm\sffamily BWT}_{k+1}\mathop{\not\leq_{\mathrm{W}}}\text{\rm\sffamily SRT}_{2,k}$ for all $k\geq1$. 
\end{corollary} Since $\text{\rm\sffamily BWT}_{k+1}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{1,k+1}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily SRT}_{2,k+1}$ by Proposition~\ref{prop:bottom} and Lemma~\ref{lem:increasing-size}, we obtain also the following separation result as a consequence of Corollary~\ref{cor:BWTk-SRT2k}. \begin{corollary} \label{cor:SRT-k-k+1} $\text{\rm\sffamily SRT}_{2,k+1}\mathop{\not\leq_{\mathrm{W}}}\text{\rm\sffamily SRT}_{2,k}$ for all $k\geq1$. \end{corollary} As next separation result we want to prove $\text{\rm\sffamily BWT}_2'\mathop{\not\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_ {1,2}'$. To this end we are going to use Corollary~\ref{cor:Baire} as a separation tool. It is not too difficult to see that for every topological space $X$ that is endowed with the discrete topology, the product space $X^{\mathbb{N}}$ is a Baire space. We will apply this observation to the particular case of the set \[X_2:=\{p\in2^{\mathbb{N}}:(\exists k)(\forall n\geq k)\;p(n)=p(k)\}\] of convergent sequences of zeros and ones. On the other hand, $X_2$ is also endowed with the subspace topology that it inherits from the ordinary product topology on ${\mathbb{N}}^{\mathbb{N}}$. In both cases, we assume that $X_2^{\mathbb{N}}$ is equipped with the respective product topology. In order to distinguish both topologies, we speak about ``discrete continuity'' of a map $f:X_2^{\mathbb{N}}\to Y$, if $X_2$ is endowed with the discrete topology and about continuity if the usual subspace topology of ${\mathbb{N}}^{\mathbb{N}}$ is used for $X_2$. In the latter case, the map \[h_2:X_2^{\mathbb{N}}\to{\mathbb{N}}^{\mathbb{N}},((x_{i,j})_i)_j\mapsto\langle(x_{0,j})_j,(x_{1,j})_j,(x_{2,j})_j,...\rangle,\] is a computable embedding, i.e., $h_2$ as well as the partial inverses $h_2^{-1}$ are both computable and continuous. Intuitively, $h_2$ swaps rows and columns and applies the Cantor encoding $\langle ...\rangle$, both being purely technical changes. Hence we will identify $X_2^{\mathbb{N}}$ with (a subspace of) ${\mathbb{N}}^{\mathbb{N}}$ via $h_2$ in the following proof. We note that $\text{\rm\sffamily BWT}_2'$ formally has the same domain as $\text{\rm\sffamily BWT}_2$, namely $2^{\mathbb{N}}$. But a realizer of $\text{\rm\sffamily BWT}_2'$ has to use $\lim_{2^{\mathbb{N}}}$ as input representation and hence it will have domain $h_2(X_2^{\mathbb{N}})$, which we identify with $X_2^{\mathbb{N}}$. For $w\in{\mathbb{N}}^n$ and $p\in{\mathbb{N}}^{\mathbb{N}}$ we use the notation $w^\rightarrow p:=wp(n)p(n+1)p(n+2)...$ for the sequence that is obtained from $p$ by replacing the first $n$ positions of $p$ by $w$. \begin{theorem} \label{thm:BWT2j-RT12j} $\text{\rm\sffamily BWT}_2'\mathop{\not\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_ {1,2}'$. \end{theorem} \begin{proof} Let us assume for a contradiction that $\text{\rm\sffamily BWT}_2'\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{1,2}'$. Then there are computable functions $H,K$ such that $H\langle {\rm id},GK\rangle$ is a realizer of $\text{\rm\sffamily BWT}_2'$ whenever $G$ is a realizer of $\text{\rm\sffamily RT}_{1,2}'$. We tacitly identify $X_2^{\mathbb{N}}$ with a subspace of ${\mathbb{N}}^{\mathbb{N}}$ via $h_2$. Since $\text{\rm\sffamily BWT}_2':2^{\mathbb{N}}\rightrightarrows\{0,1\}$ is total, it follows that $\lim K:X_2^{\mathbb{N}}\to{\mathbb{N}}^{\mathbb{N}}$ is also total and $\SO{2}$--measurable. 
If we endow $X_2$ with the discrete topology, then $X_2^{\mathbb{N}}$ is a Baire space, and by Corollary~\ref{cor:Baire} there exists a point of discrete continuity $x\in X_2^{\mathbb{N}}$ of $\lim K$. Let $c:=\lim K(x)$ be the corresponding coloring. Then there is some maximal infinite homogeneous set $q\in\text{\rm\sffamily RT}_{1,2}'(c)$, and there is a realizer $G$ of $\text{\rm\sffamily RT}_{1,2}'$ that maps $K(x)$ to $q$. By continuity of $H$ there is some $m\in{\mathbb{N}}$ such that $H\langle x|_mX_2^{\mathbb{N}},q|_m{\mathbb{N}}^{\mathbb{N}}\rangle=\{e\}$ for some $e\in\{0,1\}$. Without loss of generality, we assume $e=0$. Hence, by discrete continuity of $\lim K$ at $x$, there is some $k>m$ such that $\lim K(x|_kX_2^{\mathbb{N}})\subseteq c|_m2^{\mathbb{N}}$. We construct some $y=(x|_k,z_0,z_1,z_2...)\in X_2^{\mathbb{N}}$ such that all $z_i\in X_2$ converge to $1$, and the coloring $c_y:=\lim K(y)$ has maximal homogeneous sets $q_0,q_1\in\text{\rm\sffamily RT}_{1,2}(c_y)$ of different colors. This yields a contradiction: since $x|_k\sqsubseteq y$, we obtain $c|_m\sqsubseteq c_y$, and hence $q|_m$ must be a prefix of either $q_0$ or $q_1$, say $q_0$, and this implies $H\langle y,q_0\rangle=e=0$, whereas the correct result would have to be $1$, since all the $z_i$ converge to $1$. We describe the inductive construction of $y$ in even and odd stages $i\in{\mathbb{N}}$. Stage $0$: Let $y_0:=x|_k\widehat{0}\,\widehat{0}...$, $c_0:=\lim K(y_0)$, and let $r_0\in\text{\rm\sffamily RT}_{1,2}(c_0)$. Then by continuity of $H$ there is some $m_0>k$ such that $H\langle x|_{k}w_{m_0}X_2^{\mathbb{N}},r_0|_{m_0}{\mathbb{N}}^{\mathbb{N}}\rangle=\{0\}$ with $w_m:=(0^m\widehat{1})^m=(0^m\widehat{1},0^m\widehat{1},...,0^m\widehat{1})\in X_2^m$ for all $m\in{\mathbb{N}}$. Stage $1$: Let $y_1:=x|_kw_{m_0}\widehat{1}\,\widehat{1}...$, $c_1:=\lim K(y_1)$, and let $r_1\in\text{\rm\sffamily RT}_{1,2}(c_1)$. Then also ${0^{m_0}}^\rightarrow r_1\in\text{\rm\sffamily RT}_{1,2}(c_1)$, and by continuity of $H$ there is some $m_1>k+m_0$ such that $H\langle y_1|_{m_1}X_2^{\mathbb{N}},{0^{m_0}}^\rightarrow r_1|_{m_1}{\mathbb{N}}^{\mathbb{N}}\rangle=\{1\}$. We now assume that $i\geq1$ and that $y_0,...,y_{2i-1}\in X_2^{\mathbb{N}}$ and $m_0,...,m_{2i-1}\in{\mathbb{N}}$ have already been constructed. Stage $2i$: Let $y_{2i}:=y_{2i-1}|_{m_{2i-1}}\widehat{0}\,\widehat{0}...$, $c_{2i}:=\lim K(y_{2i})$, and let $r_{2i}\in\text{\rm\sffamily RT}_{1,2}(c_{2i})$. Then by continuity of $H$ there is some $m_{2i}>m_{2i-1}$ such that \[H\langle y_{2i-1}|_{m_{2i-1}}w_{m_{2i}}X_2^{\mathbb{N}},{0^{m_{2i-1}}}^\rightarrow r_{2i}|_{m_{2i}}{\mathbb{N}}^{\mathbb{N}}\rangle=\{0\}.\] Stage $2i+1$: Let $y_{2i+1}:=y_{2i-1}|_{m_{2i-1}}w_{m_{2i}}\widehat{1}\,\widehat{1}...$, $c_{2i+1}:=\lim K(y_{2i+1})$, and let $r_{2i+1}\in\text{\rm\sffamily RT}_{1,2}(c_{2i+1})$. Then also ${0^{m_{2i}}}^\rightarrow r_{2i+1}\in\text{\rm\sffamily RT}_{1,2}(c_{2i+1})$, and by continuity of $H$ there is some $m_{2i+1}>m_{2i-1}+m_{2i}$ such that \[H\langle y_{2i+1}|_{m_{2i+1}}X_2^{\mathbb{N}},{0^{m_{2i}}}^\rightarrow r_{2i+1}|_{m_{2i+1}}{\mathbb{N}}^{\mathbb{N}}\rangle=\{1\}.\] This construction yields a sequence $(y_i)_i$ in $X_2^{\mathbb{N}}$ and a strictly increasing sequence $(m_i)_i$ in ${\mathbb{N}}$ such that $y_{2i-1}|_{m_{2i-1}}\sqsubseteq y_{2i+1}$ for all $i$.
Since all the blocks $w_m$ that are added to the initial $x|_k$ in the odd stages are of the form $w_m=(z_1,...,z_m)$, with $z_i\in X_2$ that all converge to $1$, it follows that $(y_{2i+1})_i$ converges to a $y=(x|_k,z_0,z_1,z_2...)\in X_2^{\mathbb{N}}$, with $z_i$ that all converge to $1$. Let $c_y:=\lim K(y)$ be the corresponding coloring. We now prove that there are homogeneous sets in $\text{\rm\sffamily RT}_{1,2}(c_y)$ of different color. We fix some maximal homogeneous set $r\in\text{\rm\sffamily RT}_{1,2}(c_y)$. It suffices to prove that there are infinitely many $i$ with $r(i)=0$. The inductive proof follows the stages of the construction above and uses the corresponding objects constructed therein. Stage $0,1$: We obtain $r_0|_{m_0}\not\subseteq r$, since otherwise there is some $s\in\text{\rm\sffamily RT}_{1,2}(c_y)$ such that $r_0|_{m_0}\sqsubseteq s$ and $H\langle y,s\rangle=1$. Since $x|_kw_{m_0}\sqsubseteq y$ this contradicts the choice of $m_0$ in Stage 0. In particular, we obtain that there is some $i$ with $0\leq i<m_0$ and $r(i)=0$. Stages $2i,2i+1$: Likewise, ${0^{m_{2i-1}}}^\rightarrow r_{2i}|_{m_{2i}}\not\subseteq r$, since otherwise there is some $s\in\text{\rm\sffamily RT}_{1,2}(c_y)$ such that ${0^{m_{2i-1}}}^\rightarrow r_{2i}|_{m_{2i}}\sqsubseteq s$ and $H\langle y,s\rangle=1$. Since $y_{2i-1}|_{m_{2i-1}}w_{m_{2i}}\sqsubseteq y$ this contradicts the choice of $m_{2i}$ in Stage $2i$. In particular, there is some $i$ with $m_{2i-1}\leq i<m_{2i}$ and $r(i)=0$. Altogether, this proves that there are infinitely many $i$ with $r(i)=0$, since $(m_i)_i$ is strictly increasing. \end{proof} By Corollary~\ref{cor:lower bounds with jumps} we have $\lim_{\mathbb{N}}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{1,2}'$. On the other hand, we obtain the following corollary. \begin{fact} \label{fact:CN-BWT} $\lim_{\mathbb{N}}\mathop{\not\leq_{\mathrm{W}}}\text{\rm\sffamily BWT}_k^{(m)}$ for all $k,m\geq1$. \end{fact} \begin{proof} Let us denote by $\text{\rm\sffamily U}\mbox{\rm\sffamily C}_{\mathbb{N}}$ the restriction of $\mbox{\rm\sffamily C}_{\mathbb{N}}$ to singletons. By \cite[Fact~3.2(1)]{BGM12} $\mbox{\rm\sffamily UC}_{\mathbb{N}}$ is a strong fractal, and $\mbox{\rm\sffamily C}_{\mathbb{N}}$ is obviously slim, which means that ${\rm range}(\text{\rm\sffamily U}\mbox{\rm\sffamily C}_{\mathbb{N}})={\rm range}(\mbox{\rm\sffamily C}_{\mathbb{N}})$. Under these conditions \cite[Theorem~13.3]{BGM12} yields $\mbox{\rm\sffamily C}_{\mathbb{N}}\mathop{\not\leq_{\mathrm{W}}}\text{\rm\sffamily BWT}_{k}^{(m)}$ since $|{\rm range}(\mbox{\rm\sffamily C}_{\mathbb{N}})|=|{\mathbb{N}}|>k=|{\rm range}(\text{\rm\sffamily BWT}_k^{(m)})|$. By \cite[Proposition~3.8]{BGM12} we have $\lim_{\mathbb{N}}\mathop{\equiv_{\mathrm{W}}}\mbox{\rm\sffamily C}_{\mathbb{N}}$, which implies the claim. \end{proof} We could also replace the application of \cite[Theorem~13.3]{BGM12} in this proof by an application of Proposition~\ref{prop:choice-cardinality} that we prove below. Now it follows with Fact~\ref{fact:CN-BWT} that $\text{\rm\sffamily RT}_{1,2}'\mathop{\not\leq_{\mathrm{W}}}\text{\rm\sffamily BWT}_2'$. Hence, together with Theorem~\ref{thm:BWT2j-RT12j} we obtain the following corollary. \begin{corollary} \label{cor:BWT2j-RT12j} $\text{\rm\sffamily BWT}_2'\mathop{|_{\mathrm{W}}}\text{\rm\sffamily RT}_{1,2}'$. 
\end{corollary} By Proposition~\ref{prop:bottom} we have $\text{\rm\sffamily BWT}_2\mathop{\equiv_{\mathrm{W}}}\text{\rm\sffamily RT}_{1,2}$, but Corollary~\ref{cor:BWT2j-RT12j} shows that the situation is very different for strong Weihrauch reducibility. With the help of \cite[Proposition~5.6(2)]{BGM12} we obtain the following. \begin{corollary} \label{cor:BWT2-RT12} $\text{\rm\sffamily BWT}_2\mathop{|_{\mathrm{sW}}}\text{\rm\sffamily RT}_{1,2}$. \end{corollary} It is clear that $\text{\rm\sffamily D}_{2,2}=\text{\rm\sffamily RT}_{1,2}'\mathop{\leq_{\mathrm{sW}}}\mbox{\rm\sffamily C}RT_{1,2}'\mathop{\equiv_{\mathrm{W}}}\text{\rm\sffamily SRT}_{2,2}$ by Theorem~\ref{thm:CRT-SRT}, and by Theorem~\ref{thm:lower-bound} we also obtain $\text{\rm\sffamily BWT}_2'\mathop{\leq_{\mathrm{sW}}}\mbox{\rm\sffamily C}RT_{1,2}'$. Now Corollary~\ref{cor:BWT2j-RT12j} also leads to the following separation result which was independently proved by Dzhafarov \cite[Corollary~3.3]{Dzh16}. \begin{corollary} \label{cor:SRT22-D22} $\text{\rm\sffamily SRT}_{2,2}\mathop{\not\leq_{\mathrm{W}}}\text{\rm\sffamily D}_{2,2}$. \end{corollary} Altogether, we have now separated several of the degrees illustrated in the diagram of Figure~\ref{fig:diagram-RT22}. \section{Boundedness and Induction} \label{sec:boundedness} The purpose of this section is to collect some results on how Ramsey's theorem is related to boundedness and induction. In reverse mathematics there is a well-known strict hierarchy of induction principles $\text{\rm\sffamily I}\sO{n}$ and boundedness principles $\text{\rm\sffamily B}\sO{n}$ (see \cite[Theorem~4.32]{Hir15}): \[\text{\rm\sffamily B}\sO{1}\leftarrow\text{\rm\sffamily I}\sO{1}\leftarrow\text{\rm\sffamily B}\sO{2}\leftarrow\text{\rm\sffamily I}\sO{2}\leftarrow...\] While these principles have no immediate interpretation as problems in the Weihrauch lattice, there are equivalent formulations of these principles that do have such interpretations. For instance, it is well-known that $\text{\rm\sffamily I}\sO{n}$ is equivalent to the least number principle $\text{\rm\sffamily L}\pO{n}$ over a very weak system \cite[Theorem~2.4]{HP93}. The least number principle $\text{\rm\sffamily L}\pO{1}$ can be interpreted as the the following problem: \[\min:\subseteq{\mathbb{N}}^{\mathbb{N}}\to{\mathbb{N}},p\mapsto\min\{n\in{\mathbb{N}}:(\forall k\in{\mathbb{N}})\;p(k)\not=n\}.\] Equivalently, one could also consider the map $\min:\subseteq{\mathcal A}_-({\mathbb{N}})\to{\mathbb{N}},A\mapsto\min(A)$ that maps every non-empty closed set $A\subseteq{\mathbb{N}}$ (given by an enumeration of its complement) to its minimum. The $n$--th jump $\min^{(n)}$ now clearly corresponds to $\text{\rm\sffamily L}\pO{n+1}$ for all $n\in{\mathbb{N}}$. The following result is easy to obtain. \begin{proposition}[Least number principle] \label{prop:least} $\mbox{\rm\sffamily C}_{\mathbb{N}}\mathop{\equiv_{\mathrm{sW}}}\min$. \end{proposition} \begin{proof} $\mbox{\rm\sffamily C}_{\mathbb{N}}\mathop{\leq_{\mathrm{sW}}}\min$ is obvious, since given an enumeration $p$ of a the complement of a non-empty set $A\subseteq{\mathbb{N}}$, we have $\min(p)\in A$. We also have $\min\mathop{\leq_{\mathrm{sW}}}\mbox{\rm\sffamily C}_{\mathbb{N}}$: Given a sequence $p\in{\mathbb{N}}^{\mathbb{N}}$ we start enumerating the complement of a set $A\subseteq{\mathbb{N}}$ while we inspect the sequence $p$. The algorithm proceeds in stages $i=0,1,2,...$. 
In stage $i$ we let $k_i:=\min\{n\in{\mathbb{N}}:(\forall j\leq i)\;p(j)\not=n\}$ and we remove all $\langle n,m\rangle$ from $A$ for $n,m\leq i$ and $m\not=k_i$. Hence, the final set $A$ satisfies \[A\subseteq\{\langle n,k\rangle\in{\mathbb{N}}:n\in{\mathbb{N}},k=\min({\mathbb{N}}\setminus{\rm range}(p))\}\] and given a number $\langle n,k\rangle\in A$ we can easily extract $k=\min({\mathbb{N}}\setminus{\rm range}(p))$. Hence, $\min\mathop{\leq_{\mathrm{sW}}}\mbox{\rm\sffamily C}_{\mathbb{N}}$. \end{proof} Since jumps preserve strong Weihrauch equivalences we obtain $\mbox{\rm\sffamily C}_{\mathbb{N}}^{(n)}\mathop{\equiv_{\mathrm{sW}}}\min^{(n)}$ for all $n\in{\mathbb{N}}$, and hence $\mbox{\rm\sffamily C}_{\mathbb{N}}^{(n)}$ can be seen as a counterpart of $\text{\rm\sffamily I}\sO{n+1}$ for all $n\in{\mathbb{N}}$. Likewise, the boundedness principle $\text{\rm\sffamily B}\sO{n}$ has been characterized with the help of the pigeonhole principle and the regularity principle $\mbox{\rm\sffamily R}\sO{n}$. More precisely, it is known that $\text{\rm\sffamily B}\sO{n+2}$ is equivalent to $\mbox{\rm\sffamily R}\sO{n+1}$ for all $n\in{\mathbb{N}}$ over a very weak system~\cite[Theorem~2.23]{HP93}. Now $\text{\rm\sffamily BWT}_{\mathbb{N}}$ can be seen as a counterpart of $\mbox{\rm\sffamily R}\sO{1}$ since the computational goal of $\text{\rm\sffamily BWT}_{\mathbb{N}}$ is to find a number that appears infinitely often in a bounded sequence. More generally, $\mbox{\rm\sffamily R}\sO{n+1}$ corresponds to $\text{\rm\sffamily BWT}_{\mathbb{N}}^{(n)}$ for all $n\in{\mathbb{N}}$. We recall that $\text{\rm\sffamily K}_{\mathbb{N}}$ denotes {\em compact choice} on ${\mathbb{N}}$, which can be defined as closed choice $\mbox{\rm\sffamily C}_{\mathbb{N}}$, except that additionally an upper bound for the input set is provided (see \cite{BGM12} for a precise definition). Another characterization of $\text{\rm\sffamily K}_{\mathbb{N}}$ is $\text{\rm\sffamily K}_{\mathbb{N}}\mathop{\equiv_{\mathrm{sW}}}\mbox{\rm\sffamily C}_2^*$ (which holds by \cite[Proposition~10.9]{BGM12} where $\text{\rm\sffamily LPO}\mathop{\equiv_{\mathrm{sW}}}\mbox{\rm\sffamily C}_2$) and we obtain $\text{\rm\sffamily K}_{\mathbb{N}}'\mathop{\equiv_{\mathrm{sW}}}\text{\rm\sffamily BWT}_{\mathbb{N}}$ by \cite[Corollary~11.10]{BGM12}. Hence, it is justified to say that $\text{\rm\sffamily K}_{\mathbb{N}}^{(n+1)}$ corresponds to $\text{\rm\sffamily B}\sO{n+2}$ for all $n\in{\mathbb{N}}$. Analogously to the above implication chain for $\text{\rm\sffamily I}\sO{n}$ and $\text{\rm\sffamily B}\sO{n}$ we obtain \[\text{\rm\sffamily K}_{\mathbb{N}}\mathop{<_{\mathrm{W}}}\mbox{\rm\sffamily C}_{\mathbb{N}}\mathop{<_{\mathrm{W}}}\text{\rm\sffamily K}_{\mathbb{N}}'\mathop{<_{\mathrm{W}}}\mbox{\rm\sffamily C}_{\mathbb{N}}'\mathop{<_{\mathrm{W}}}\text{\rm\sffamily K}_{\mathbb{N}}''\mathop{<_{\mathrm{W}}}\mbox{\rm\sffamily C}_{\mathbb{N}}''\mathop{<_{\mathrm{W}}}....\] This reduction chain is based on the following observation. \begin{proposition} \label{prop:KN-CN} $\text{\rm\sffamily K}_{\mathbb{N}}^{(n)}\mathop{<_{\mathrm{sW}}}\mbox{\rm\sffamily C}_{\mathbb{N}}^{(n)}\mathop{<_{\mathrm{sW}}}\text{\rm\sffamily K}_{\mathbb{N}}^{(n+1)}$ and $\text{\rm\sffamily K}_{\mathbb{N}}^{(n)}\mathop{<_{\mathrm{W}}}\mbox{\rm\sffamily C}_{\mathbb{N}}^{(n)}\mathop{<_{\mathrm{W}}}\text{\rm\sffamily K}_{\mathbb{N}}^{(n+1)}$ for $n\in{\mathbb{N}}$.
\end{proposition} \begin{proof} We obtain $\text{\rm\sffamily K}_{\mathbb{N}}\mathop{\leq_{\mathrm{sW}}}\mbox{\rm\sffamily C}_{\mathbb{N}}\mathop{\equiv_{\mathrm{sW}}}\lim_{\mathbb{N}}\mathop{\leq_{\mathrm{sW}}}\text{\rm\sffamily BWT}_{\mathbb{N}}\mathop{\equiv_{\mathrm{sW}}}\text{\rm\sffamily K}_{\mathbb{N}}'$. Here $\mbox{\rm\sffamily C}_{\mathbb{N}}\mathop{\equiv_{\mathrm{sW}}}\lim_{\mathbb{N}}$ holds by \cite[Proposition~3.8]{BGM12}. The claimed reductions follow since jumps are monotone with respect to $\mathop{\leq_{\mathrm{sW}}}$. The separation results follow from Fact~\ref{fact:WKL-BWT}~(5) since we have $\widehat{\mbox{\rm\sffamily C}_{\mathbb{N}}}\mathop{\equiv_{\mathrm{sW}}}\widehat{\lim_{\mathbb{N}}}\mathop{\equiv_{\mathrm{sW}}}\lim$ (see also \cite[Example~3.10]{BBP12}), $\widehat{\text{\rm\sffamily K}_{\mathbb{N}}}\mathop{\equiv_{\mathrm{sW}}}\text{\rm\sffamily WKL}$ and since parallelization commutes with jumps. Here $\widehat{\text{\rm\sffamily K}_{\mathbb{N}}}\mathop{\equiv_{\mathrm{sW}}}\text{\rm\sffamily WKL}$ holds since we obtain that $\mbox{\rm\sffamily C}_2\mathop{\leq_{\mathrm{sW}}}\mbox{\rm\sffamily C}_2^*\mathop{\equiv_{\mathrm{sW}}}\text{\rm\sffamily K}_{\mathbb{N}}\mathop{\leq_{\mathrm{sW}}}\widehat{\mbox{\rm\sffamily C}_2}\mathop{\equiv_{\mathrm{sW}}}\text{\rm\sffamily WKL}$ and parallelization is a closure operator. \end{proof} In reverse mathematics a considerable amount of work has been spent in order to calibrate Ramsey's theorem for pairs according to the above boundedness and induction principles. Hence, it is natural to ask how it compares uniformly to choice principles. We start with some easy observations, and as a preparation we prove the following result. The proof is a simple variation of the proof of \cite[Theorem~13.3]{BGM12}. \begin{proposition}[Choice and cardinality] \label{prop:choice-cardinality} Let $f:\subseteq X\rightrightarrows{\mathbb{N}}$ and $X\subseteq{\mathbb{N}}$ be such that $\mbox{\rm\sffamily C}_X\mathop{\leq_{\mathrm{W}}} f$. Then $|X|\leq|{\rm range}(f)|$. \end{proposition} \begin{proof} Let us assume that $\mbox{\rm\sffamily C}_X\mathop{\leq_{\mathrm{W}}} f$. We use the representation $\psi_-$ of closed subsets of $X$. Then there are computable $H,K$ such that $H\langle{\rm id},GK\rangle\vdash\mbox{\rm\sffamily C}_X$ for all $G\vdash f$. We fix some realizer $G\vdash f$. For simplicity and without loss of generality we assume that $G$ and $H$ have target space ${\mathbb{N}}$. We consider the following claim: for each $i\in{\mathbb{N}}$ with $|X|>i$ there exist \begin{enumerate} \item $k_i\in X\setminus\{k_0,...,k_{i-1}\}$, \item $n_i\in{\rm range}(f)\setminus\{n_0,...,n_{i-1}\}$, \item $p_i$, a name of a closed subset $A_i\subseteq X$, \item $w_i\sqsubseteq p_i$, \end{enumerate} such that $w_{i-1}\sqsubseteq w_i$, $GK(p_i)=n_i$ and $H\langle w_i{\mathbb{N}}^{\mathbb{N}},n_i\rangle=k_i$. Let $w_{-1}$ be the empty word. We prove this claim by induction on $i$. Let us assume that $|X|>0$, and let $p_0$ be a name of $A_0:=X$. Then $k_0:=H\langle p_0,GK(p_0)\rangle\in A_0$ and $n_0:=GK(p_0)$. By continuity of $H$ there is some $w_0\sqsubseteq p_0$ such that such that $H\langle w_0{\mathbb{N}}^{\mathbb{N}},n_0\rangle=k_0$. Let us now assume that $|X|>1$, and let $p_1$ be a name of $A_1:=X\setminus\{k_0\}$ with $w_0\sqsubseteq p_1$. Then $k_1:=H\langle p_1,GK(p_1)\rangle\in A_1$ and $n_1:=GK(p_1)$. Since $k_1\not=k_0$, we obtain $n_0\not=n_1$ since $H\langle w_0{\mathbb{N}}^{\mathbb{N}},n_0\rangle=k_0$. 
By continuity of $H$ there is some $w_1\sqsubseteq p_1$ with $w_0\sqsubseteq w_1$ such that $H\langle w_1{\mathbb{N}}^{\mathbb{N}},n_1\rangle=k_1$. The proof can now continue inductively as above, which proves the claim. The claim implies $|X|\leq|{\rm range}(f)|$. \end{proof} From this result we can conclude the following observation. \begin{proposition}[Finite Choice] \label{prop:finite-choice} $\mbox{\rm\sffamily C}_k\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily SRT}_{1,k}$ and $\mbox{\rm\sffamily C}_{k+1}\mathop{\not\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{1,k}$ for all $k\geq1$. \end{proposition} \begin{proof} By Proposition~\ref{prop:bottom} we have $\lim_k\mathop{\equiv_{\mathrm{W}}}\text{\rm\sffamily SRT}_{1,k}$, hence the first reduction follows from \cite[Corollary~13.8]{BGM12} and can easily be proved directly. By Proposition~\ref{prop:bottom} we have $\text{\rm\sffamily BWT}_k\mathop{\equiv_{\mathrm{W}}}\text{\rm\sffamily RT}_{1,k}$, and hence the second claim follows from Proposition~\ref{prop:choice-cardinality}. \end{proof} Since $\text{\rm\sffamily K}_{\mathbb{N}}\mathop{\equiv_{\mathrm{W}}}\mbox{\rm\sffamily C}_2^*$ by \cite[Proposition~10.9]{BGM12} we get the following conclusion. \begin{proposition}[Compact choice] \label{prop:compact-choice} $\text{\rm\sffamily K}_{\mathbb{N}}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily SRT}_{1,+}$ and $\text{\rm\sffamily K}_{\mathbb{N}}\mathop{\not\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{1,k}$ for all $k\geq1$. \end{proposition} \begin{proof} Since $\mbox{\rm\sffamily C}_2\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily SRT}_{1,2}$, we obtain $\text{\rm\sffamily K}_{\mathbb{N}}\mathop{\equiv_{\mathrm{W}}}\mbox{\rm\sffamily C}_2^*\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily SRT}_{1,2}^*\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily SRT}_{1,+}$ by Corollary~\ref{cor:finite-parallelization}. On the other hand, $\mbox{\rm\sffamily C}_{k+1}\mathop{\not\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{1,k}$ and $\mbox{\rm\sffamily C}_{k+1}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily K}_{\mathbb{N}}$ imply $\text{\rm\sffamily K}_{\mathbb{N}}\mathop{\not\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{1,k}$ for all $k\geq1$. \end{proof} We obtain the following corollary. \begin{corollary}[Jump of compact choice] \label{cor:jump-compact-choice} $\text{\rm\sffamily K}_{\mathbb{N}}'\mathop{\equiv_{\mathrm{W}}}\text{\rm\sffamily RT}_{1,{\mathbb{N}}}$ and $\text{\rm\sffamily K}_{\mathbb{N}}'\mathop{\not\leq_{\mathrm{W}}}\text{\rm\sffamily SRT}_{2,+}$. \end{corollary} \begin{proof} $\text{\rm\sffamily K}_{\mathbb{N}}'\mathop{\equiv_{\mathrm{W}}}\text{\rm\sffamily RT}_{1,{\mathbb{N}}}$ follows from Proposition~\ref{prop:bottom} since $\text{\rm\sffamily K}_{\mathbb{N}}'\mathop{\equiv_{\mathrm{sW}}}\text{\rm\sffamily BWT}_{\mathbb{N}}$. Corollary~\ref{cor:BWTk-SRT2k} implies $\text{\rm\sffamily K}_{\mathbb{N}}'\mathop{\not\leq_{\mathrm{W}}}\text{\rm\sffamily SRT}_{2,k}$ for all $k\geq1$, and since $\text{\rm\sffamily K}_{\mathbb{N}}'$ is countably irreducible (as any jump is by \cite[Proposition~5.8]{BGM12}) we obtain $\text{\rm\sffamily K}_{\mathbb{N}}'\mathop{\not\leq_{\mathrm{W}}}\text{\rm\sffamily SRT}_{2,+}$.
\end{proof} The equivalence $\text{\rm\sffamily K}_{\mathbb{N}}'\mathop{\equiv_{\mathrm{W}}}\text{\rm\sffamily RT}_{1,{\mathbb{N}}}$ corresponds to a well-known theorem of Hirst~\cite{Hir87}, which says that $\text{\rm\sffamily RT}^1_{<\infty}$ is equivalent to $\text{\rm\sffamily B}\sO{2}$ over $\text{\rm\sffamily RCA}_0$ (see also \cite[Theorem~6.81]{Hir15}), whereas the non-reduction $\text{\rm\sffamily K}_{\mathbb{N}}'\mathop{\not\leq_{\mathrm{W}}}\text{\rm\sffamily SRT}_{2,+}$ shows that the reverse mathematics result \cite{CJS01} (see also \cite[Theorem~6.82]{Hir15}) that $\text{\rm\sffamily SRT}_{2,2}$ proves $\text{\rm\sffamily RT}_{<\infty}^1$ over $\text{\rm\sffamily RCA}_0$ cannot be proved uniformly (for instance the proof presented in \cite[Theorem~6.82]{Hir15} contains a non-constructive case distinction). Corollary~\ref{cor:jump-compact-choice} also yields one direction of the following corollary, and the other direction follows since $\text{\rm\sffamily RT}_{1,{\mathbb{N}}}\mathop{\leq_{\mathrm{W}}}\lim'$, but $\text{\rm\sffamily SRT}_{2,2}\mathop{\not\leq_{\mathrm{W}}}\lim'$ and $\text{\rm\sffamily SRT}_{2,{\mathbb{N}}}\mathop{\not\leq_{\mathrm{W}}}\lim'$ by Corollary~\ref{cor:SRT-limits}. \begin{corollary} \label{cor:SRT22-RT1N} $\text{\rm\sffamily RT}_{1,{\mathbb{N}}}\mathop{|_{\mathrm{W}}}\text{\rm\sffamily SRT}_{2,2}$ and $\text{\rm\sffamily RT}_{1,{\mathbb{N}}}\mathop{|_{\mathrm{W}}}\text{\rm\sffamily SRT}_{2,+}$. \end{corollary} We can conclude the following results on $\mbox{\rm\sffamily C}_{\mathbb{N}}$ from earlier ones. \begin{proposition}[Closed choice] \label{prop:closed-choice} $\mbox{\rm\sffamily C}_{\mathbb{N}}\mathop{\equiv_{\mathrm{W}}}\text{\rm\sffamily SRT}_{1,{\mathbb{N}}}$ and $\mbox{\rm\sffamily C}_{\mathbb{N}}\mathop{\not\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{1,+}$. \end{proposition} \begin{proof} The first statement follows from Proposition~\ref{prop:bottom} since $\mbox{\rm\sffamily C}_{\mathbb{N}}\mathop{\equiv_{\mathrm{W}}}\lim_{\mathbb{N}}$ by \cite[Proposition~3.8]{BGM12}, and the second statement follows from the fact that $\mbox{\rm\sffamily C}_{\mathbb{N}}$ is countably irreducible by \cite[Fact~3.2]{BGM12} (since every strong fractal is a fractal and hence countably irreducible), but $\mbox{\rm\sffamily C}_{\mathbb{N}}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{1,k}$ for some $k\geq1$ is impossible by Proposition~\ref{prop:compact-choice} since $\text{\rm\sffamily K}_{\mathbb{N}}\mathop{\leq_{\mathrm{W}}}\mbox{\rm\sffamily C}_{\mathbb{N}}$. \end{proof} Altogether, we have justified the way choice problems are displayed in Figure~\ref{fig:diagram-RTnk}. Finally, we provide the following reduction that can be derived from earlier results. \begin{theorem}[Jumps of compact choice] \label{thm:jumps-compact-choice} $\text{\rm\sffamily K}_{\mathbb{N}}^{(n)}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily SRT}_{n,{\mathbb{N}}}$ for all $n\geq2$. \end{theorem} \begin{proof} By Propositions~\ref{prop:bottom} and \ref{prop:jumps} and since jumps are monotone with respect to $\mathop{\leq_{\mathrm{sW}}}$ we obtain $\text{\rm\sffamily K}_{\mathbb{N}}^{(n)}\mathop{\equiv_{\mathrm{sW}}}\text{\rm\sffamily BWT}_{\mathbb{N}}^{(n-1)}\mathop{\leq_{\mathrm{sW}}}\mbox{\rm\sffamily C}RT_{1,{\mathbb{N}}}^{(n-1)}\mathop{\leq_{\mathrm{sW}}}\mbox{\rm\sffamily C}SRT_{n,{\mathbb{N}}}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily SRT}_{n,{\mathbb{N}}}$.
\end{proof} The special case for $n=2$ can be seen as the uniform version of a theorem of Cholak, Jockusch and Slaman \cite{CJS01}, see also \cite[Theorem~6.89]{Hir15}, which states that $\text{\rm\sffamily SRT}_{<\infty}^2$ proves $\text{\rm\sffamily B}\Sigma^0_3$ over $\text{\rm\sffamily RCA}_0$. In light of Corollary~\ref{cor:jump-compact-choice} and Theorem~\ref{thm:jumps-compact-choice} there is quite some gap in between $\text{\rm\sffamily SRT}_{2,+}$ and $\text{\rm\sffamily SRT}_{2,{\mathbb{N}}}$. The diagram summarizes our calibration of choice problems by Ramsey's theorem. \begin{figure} \caption{Closed and compact choice on natural numbers calibrated with Ramsey's theorem in the Weihrauch lattice.} \label{fig:diagram-KC} \end{figure} Further questions could be studied along these lines. We mention the following question. The first part of this question is related to \cite[Theorem~6.85]{Hir15} and the second part to \cite[Open Question~6.92]{Hir15}. \begin{question}[Jump of closed choice] \label{quest:jump-closed-choice} Does $\mbox{\rm\sffamily C}_{\mathbb{N}}'\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{2,2}$ hold? Does $\mbox{\rm\sffamily C}_{\mathbb{N}}''\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{2,{\mathbb{N}}}$ hold? \end{question} In light of \cite[Theorem~6.85]{Hir15} a positive answer to the first part of this question seems unlikely. Another natural question is how cluster point problems are related to Ramsey's theorem. We can say at least something. \begin{corollary}[Cluster point problem] \label{cor:CR-RT33} $\mbox{\rm\sffamily C}L_{\mathbb{R}}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{3,3}$. \end{corollary} \begin{proof} We recall that $\mbox{\rm\sffamily C}L_{\mathbb{N}}\mathop{\equiv_{\mathrm{W}}}\mbox{\rm\sffamily C}_{\mathbb{N}}'$ and $\mbox{\rm\sffamily C}L_{2^{\mathbb{N}}}\mathop{\equiv_{\mathrm{W}}}\mbox{\rm\sffamily C}_{2^{\mathbb{N}}}'\mathop{\equiv_{\mathrm{W}}}\text{\rm\sffamily WKL}'$ by \cite[Theorem~9.4]{BGM12}. By \cite[Proposition~9.15]{BGM12} we have $\mbox{\rm\sffamily C}L_{\mathbb{R}}\mathop{\equiv_{\mathrm{W}}}\mbox{\rm\sffamily C}_{\mathbb{N}}'\times\mbox{\rm\sffamily C}L_{2^{\mathbb{N}}}\mathop{\equiv_{\mathrm{W}}}\mbox{\rm\sffamily C}_{\mathbb{N}}'\times\text{\rm\sffamily WKL}'$. Hence, we obtain with Proposition~\ref{prop:KN-CN}, Theorem~\ref{thm:jumps-compact-choice} and Corollary~\ref{cor:delayed-parallelization} \[\mbox{\rm\sffamily C}L_{\mathbb{R}}\mathop{\equiv_{\mathrm{W}}}\mbox{\rm\sffamily C}_{\mathbb{N}}'\times\text{\rm\sffamily WKL}'\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily K}_{\mathbb{N}}''\times\text{\rm\sffamily WKL}'\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{2,{\mathbb{N}}}\times\text{\rm\sffamily RT}_{3,2}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{3,3}.\] The last mentioned reduction holds by Theorem~\ref{thm:products}. \end{proof} However, it is not immediately clear whether the following holds. \begin{question} \label{quest:CR-RT32} $\mbox{\rm\sffamily C}L_{\mathbb{R}}\mathop{\leq_{\mathrm{W}}}\text{\rm\sffamily RT}_{3,2}$? \end{question} \section{Conclusion} We have studied the uniform computational content of Ramsey's theorem in the Weihrauch lattice, and we have clarified many aspects of Ramsey's theorem in this context. 
Key results are the lower bound provided in Theorem~\ref{thm:lower-bound}, the theorems on products (Theorem~\ref{thm:products}) and parallelization (Theorem~\ref{thm:delayed-parallelization}), as well as the theorem on jumps (Theorem~\ref{thm:CRT-SRT}) and the upper bounds in Corollary~\ref{cor:upper-bound} derived from it. From this tool box of key results (together with the squashing theorem, Theorem~\ref{thm:squashing}) we were able to derive a number of interesting consequences, such as the characterization of the parallelization of Ramsey's theorem in Corollary~\ref{cor:parallelization} and the effect of increasing numbers of colors in Theorem~\ref{thm:increasing-colors}. The separation tools provided in Section~\ref{sec:separation} have led to some further clarity. A number of important questions regarding the uniform behavior of Ramsey's theorem were left open. Hopefully, some future study will shed further light on these questions. \section{Acknowledgments} We would like to thank Steve Simpson and Ludovic Patey for helpful comments on an earlier version of this article and the anonymous referee for his or her very careful proofreading that helped us to improve the presentation of the article. \end{document}
\begin{document} \title[An equality on Monge-Amp\`ere measures]{An equality on Monge-Amp\`ere measures} \author[M. El Kadiri]{Mohamed El Kadiri} \address{University of Mohammed V \\Department of Mathematics, \\Faculty of Science \\P.B. 1014, Rabat \\Morocco} \email{[email protected]} \subjclass[2010]{31C10, 32U05, 32U15.} \keywords{Plurisubharmonic function, Plurifine topology, Plurifinely open set, Monge-Amp\`ere operator, Monge-Amp\`ere measure.} \begin{abstract} Let $u$ and $v$ be two plurisubharmonic functions in the domain of definition of the Monge-Amp\`ere operator on a domain $\Omega\subset {\mathbb C}^n$. We prove that if $u=v$ on a plurifinely open set $U\subset \Omega$ that is Borel measurable, then $(dd^cu)^n|_U=(dd^cv)^n|_U$. This result was proved by Bedford and Taylor in \cite{BT} in the case where $u$ and $v$ are locally bounded, by El Kadiri and Wiegerinck \cite{EKW} when $u$ and $v$ are finite, and by Hai and Hiep in \cite{HH} when $U$ is of the form $U=\bigcup_{j=1}^m\{\varphi_j>\psi_j\}$, where $\varphi_j$, $\psi_j$, $j=1,...,m$, are plurisubharmonic functions on $\Omega$. \end{abstract} \maketitle \section{Introduction} Let $u$ and $v$ be two locally bounded plurisubharmonic (psh in abbreviated form) functions on a domain $\Omega\subset {\mathbb C}^n$ and $U\subset \Omega$ a plurifinely open set. In \cite{BT} Bedford and Taylor proved that if $u=v$ on $U$ then the restrictions of the Monge-Amp\`ere measures $(dd^cu)^n$ and $(dd^cv)^n$ to $U$ are equal (see \cite[Corollary 4.3]{BT}). El Kadiri and Wiegerinck showed in \cite{EKW} that this result is true when $u$ and $v$ are two psh functions in the general domain of definition of the Monge-Amp\`ere operator, finite on $U$, and they extended it to the setting of $\cal F$-plurisubharmonic functions on $U$. For the latter notion see \cite{EK} and \cite{EKFW}. Hai and Hiep extended this result to the large Cegrell class of plurisubharmonic functions on a hyperconvex domain $\Omega$ and a plurifine open subset $U$ of $\Omega$ of the form $U=\bigcup_{j=1}^m\{\varphi_j>\psi_j\}$, where $\varphi_j$ and $\psi_j$ ($1\leq j\leq m$) are plurisubharmonic functions in the Cegrell class $\cal E(\Omega)$ (see \cite[Theorem 1.1]{HH}). Recall here that the plurifine topology on an open set $\Omega$ in ${\mathbb C}^n$ is the coarsest topology on $\Omega$ that makes all plurisubharmonic functions on $\Omega$ continuous. The plurifine topology has been investigated by many authors, notably Bedford and Taylor, El Marzguioui, Fuglede and Wiegerinck, see \cite{BT}, \cite{EK}, \cite{EKFW} and \cite{EMW}. Our main purpose in this paper is to extend the above results to any plurifinely open set that is Borel measurable. Indeed, we shall prove that if two plurisubharmonic functions $u$ and $v$ in the Cegrell class $\cal E(\Omega)$, where $\Omega$ is a bounded hyperconvex domain of ${\mathbb C}^n$, are equal on a plurifinely open set $U\subset \Omega$ that is Borel measurable, then $(dd^cu)^n|_U=(dd^cv)^n|_U$. This implies that this result is true when $\Omega$ is a general open subset of ${\mathbb C}^n$ and $\cal E(\Omega)$ is replaced by the general domain of definition $\cal D$ of the Monge-Amp\`ere operator on $\Omega$, in the sense of Blocki \cite{Bl}. \section{The Cegrell classes} Let $\Omega$ be a bounded hyperconvex domain in ${\mathbb C}^n$.
From \cite{Ce1} and \cite{Ce2} we recall the following subclasses of $\PSH_-(\Omega)$, the cone of nonpositive plurisubharmonic functions on $\Omega$: $${\cal E}_0={\cal E}_0(\Omega)=\{\varphi\in \PSH_-(\Omega): \lim_{z\to \partial \Omega}\varphi(z)=0, \ \int_\Omega (dd^c\varphi)^n<\infty\},$$ $${\cal F}={\cal F}(\Omega)=\{\varphi\in \PSH_-(\Omega): \exists \ \cal E_0\ni \varphi_j\searrow \varphi, \ \sup_j\int_\Omega (dd^c\varphi_j)^n<\infty\},$$ and \begin{equation*} \begin{split} \cal E=\cal E(\Omega)=\{\varphi \in \PSH_-(\Omega): \forall z_0\in \Omega, \exists \text{ a neighborhood } \omega \ni z_0,&\\ {\cal E}_0\ni \varphi_j \searrow \varphi \text{ on } \omega, \ \sup_j \int_\Omega(dd^c\varphi_j)^n<\infty \}. \end{split} \end{equation*} As in \cite{Ce2}, we note that if $u\in \PSH_-(\Omega)$ then $u\in \cal E(\Omega)$ if and only if for every $\omega\Subset \Omega$, there is $v\in \cal F(\Omega)$ such that $v\ge u$ and $v=u$ on $\omega$. On the other hand we have $\PSH_-(\Omega)\cap L^\infty_{loc}(\Omega)\subset \cal E(\Omega)$. The classical Monge-Amp\`ere operator on $\PSH_-(\Omega)\cap L^\infty_{loc}(\Omega)$ can be extended uniquely to the class $\cal E(\Omega)$; the extended operator is still denoted by $(dd^c\cdot)^n$. According to Theorem 4.5 from \cite{Ce1}, the class $\cal E$ is the biggest class $\cal K\subset \PSH_-(\Omega)$ satisfying the following conditions: (1) If $u\in \cal K$, $v\in \PSH_-(\Omega)$ then $\max(u,v)\in \cal K$. (2) If $u\in \cal K$, $\varphi_j\in \PSH_-(\Omega)\cap L^\infty_{loc}(\Omega)$, $\varphi_j\searrow u$, $j\to +\infty$, then $((dd^c\varphi_j)^n)$ is weak*-convergent. We also recall, following Blocki (cf. \cite{Bl}), that the general domain of definition $\cal D$ of the Monge-Amp\`ere operator on a domain $\Omega$ of ${\mathbb C}^n$ consists of plurisubharmonic functions $u$ on $\Omega$ for which there is a nonnegative (Radon) measure $\mu$ on $\Omega$ such that for any decreasing sequence $(u_j)$ of locally bounded plurisubharmonic functions on $\Omega$ converging to $u$, the sequence of measures $(dd^cu_j)^n$ is weak*-convergent to $\mu$. The measure $\mu$ is denoted by $(dd^cu)^n$ and called the Monge-Amp\`ere measure of (or associated with) $u$. When $\Omega$ is bounded and hyperconvex then $\cal D\cap\PSH_-(\Omega)$ coincides with the class $\cal F=\cal F(\Omega)$, cf. \cite{Bl}. \section{Equality between the Monge-Amp\`ere measures on plurifine open sets} The following theorem was proved by Hai and Hiep in \cite{HH}: \begin{theorem}[{\cite[Theorem 1.1]{HH}}]\label{thm1.1} Let $\Omega$ be a bounded hyperconvex domain in ${\mathbb C}^n$ and let $\varphi_1,...,\varphi_m$, $\psi_1,...,\psi_m$ be plurisubharmonic functions on $\Omega$. Let $U=\{\varphi_1>\psi_1\}\cap ...\cap \{\varphi_m > \psi_m\}$. Assume that $u,v \in \cal E(\Omega)$. If $u=v$ on $U$ then $(dd^cu)^n|_U=(dd^cv)^n|_U$. \end{theorem} To prove our general result we only need the following weaker particular case of Theorem \ref{thm1.1} where $m=1$: \begin{theorem}\label{thm1.2} Let $\Omega$ be a bounded hyperconvex domain in ${\mathbb C}^n$ and $\varphi$ a plurisubharmonic function on $\Omega$. Let $U=\{\varphi> c\}$, where $c$ is a real constant. Assume that $u,v \in \cal E(\Omega)$. If $u=v$ on $U$ then $(dd^cu)^n|_{U}=(dd^cv)^n|_{U}$. \end{theorem} \begin{proof} We adapt the proof of Theorem \ref{thm1.1} given in \cite{HH} to our case. For each $k\in {\mathbb N}^*$, set $u_k=\max\{u,-k\}$ and $v_k=\max\{v,-k\}$.
Then $u_k,v_k\in \PSH(\Omega)\cap L_{loc}^\infty(\Omega)$, $u_k\searrow u$, $v_k\searrow v$ as $k\to \infty$. On the other hand, from the equality $u=v$ on $U$, it follows that $u_k=v_k$ on $U$ and hence $$(dd^cu_k)^n|_U=(dd^cv_k)^n|_U$$ according to Corollary 4.3 in \cite{BT}. Thus $$(\max(\varphi,c)-c)(dd^cu_k)^n=(\max(\varphi, c)-c)(dd^cv_k)^n$$ for all $k\geq 1$. Letting $k\to +\infty$, we deduce by Corollary 3.2 in \cite{H} that $$(\max(\varphi,c)-c)(dd^cu)^n=(\max(\varphi,c)-c)(dd^cv)^n.$$ But $$\max(\varphi,c)-c=\varphi-c> 0$$ on $\{\varphi >c\}$, so that $(dd^cu)^n=(dd^cv)^n$ on $\{\varphi > c\}$, and the desired conclusion follows. \end{proof} \begin{remark} Because every domain $\Omega\subset {\mathbb C}^n$ can be written as $\Omega=\bigcup_{j\in {\mathbb N}}\Omega_j$ where $\Omega_j$, $j=1,2,...$, are open balls, and hence hyperconvex, it follows that Theorem \ref{thm1.1} is true for any domain $\Omega\subset {\mathbb C}^n$ and $u,v\in \cal D$, the domain of definition of the Monge-Amp\`ere operator. \end{remark} For a set $E\subset \Omega$, we define $$u_E(z)=\sup \{v(z): v\in \PSH_{-}(\Omega), \ v\le -1 \text{ on } E\}$$ and $u_E^*$ the upper semicontinuous regularization of $u_E$, that is the function defined on $\Omega$ by $$u_E^*(z)=\limsup_{\zeta\to z}u_E(\zeta)$$ for every $z\in \Omega$ (the relative extremal function of $E$, see \cite[p. 158]{Kl}). \begin{prop}\label{prop1.1} Let $\Omega$ be a domain of ${\mathbb C}^n$. If a subset $A$ of $\Omega$ is plurithin at $z\in \Omega$, then there is an open set $\sigma_z$ containing $z$ such that $u_{(A\setminus \{z\})\cap \sigma_z}^*(z)>-1$. \end{prop} \begin{proof} The result is obvious if $z\notin \overline A$. Suppose that $z\in \overline A$ and that $A$ is plurithin at $z$. According to \cite[Proposition 2.2]{BT}, there is a psh function $u\in \PSH_-(\Omega)$ such that $$\limsup_{\zeta\to z, \zeta\in A, \zeta\ne z}u(\zeta)< u(z).$$ Hence, there is an open set $\sigma_z\subset \Omega$ containing $z$, and a real $\alpha< 0$ such that $$u(\zeta)\le \alpha <u(z)$$ for every $\zeta\in (A\setminus \{z\})\cap \sigma_z$. The function $v=\frac{-1}{\alpha}u$ is psh $\le 0$ on $\Omega$ and satisfies $v(\zeta)\leq -1$ for every $\zeta \in (A\setminus \{z\})\cap \sigma_z$, so that $$-1<v(z)\le u_{(A\setminus \{z\})\cap \sigma_z}^*(z).$$ \end{proof} Now we can state the main result of the present article: \begin{theorem}\label{thm3.3} Let $\Omega$ be a hyperconvex open subset of ${\mathbb C}^n$ and $U$ an $\cal F$-open subset of $\Omega$ that is Borel measurable, and let $u$, $v$ be two psh functions in the Cegrell class $\cal E(\Omega)$. If $u=v$ on $U$, then $(dd^cu)^n|_U=(dd^cv)^n|_U$. \end{theorem} \begin{proof} Let $(\omega_j)$ be a base of the Euclidean topology on ${\mathbb C}^n$ formed by open balls relative to the usual Euclidean norm on ${\mathbb C}^n$ and let $z\in U$. Since $U$ is plurifinely open and $z\in U$, the set $\complement U=\Omega\setminus U$ is plurithin at $z$; hence, according to Proposition \ref{prop1.1}, there is an integer $j_z$ such that $u^*_{(\complement U) \cap \omega_{j_z}}(z)>-1$. Denoting by $\bar u^*_{{\complement U}\cap \omega_{j_z}}$ the function defined in the same manner as $u^*_{({\complement U})\cap\omega_{j_z}}$ with $\Omega$ replaced by $\omega_{j_z}$, we obviously have $$z\in V_z:=\{\bar u^*_{({\complement U})\cap \omega_{j_z}}>-1\}\subset \omega_{j_z}$$ because $$-1<u^*_{\complement U\cap \omega_{j_z}}(z)\leq \bar u^*_{{\complement U}\cap \omega_{j_z}}(z).$$ It is clear that $V_z$ is a plurifinely open set.
On the set $\complement U\cap \omega_{j_z}$ we have $\bar u^*_{{\complement U}\cap \omega_{j_z}}=-1$ q.e., and hence $V_z=(U\cap V_z)\cup F_z$ for some pluripolar set $F_z\subset \omega_{j_z}$. On the other hand, we have $$\bigcup_{z\in U} V_z=\bigcup_{z\in U}\{\bar u^*_{({\complement U})\cap \omega_{j_z}}>-1\} =\bigcup_{j\in J}\{\bar u^*_{({\complement U})\cap \omega_j}>-1\},$$ where $J=\{j_z: z\in U\} (\subset {\mathbb N})$. For each $j\in J$, there is a point $z_j\in U$ such that $j=j_{z_j}$, so that $$U\subset \bigcup_{z\in U}V_z=\bigcup_{j\in J}\{\bar u^*_{({\complement U})\cap \omega_{j_{z_j}}}>-1\} =\bigcup_{j\in J}V_{z_j}.$$ The restrictions of the psh functions $u$ and $v$ to $\omega_{j_{z_j}}$ are equal on $V_{z_j}\setminus F_{z_j}\subset U$ since they are equal on $U$, and therefore they are equal on $V_{z_j}$ by plurifine continuity. It then follows that $(dd^cu)^n|_{V_{z_j}}=(dd^cv)^n|_{V_{z_j}}$ according to Theorem \ref{thm1.2} applied with $\Omega=\omega_{j_{z_j}}$. From this we infer that $(dd^cu)^n|_{\bigcup_j V_{z_j}}=(dd^cv)^n|_{\bigcup_j V_{z_j}}$ and hence $(dd^cu)^n|_U=(dd^cv)^n|_U$ because $U\subset \bigcup_jV_{z_j}$, and the proof is complete. \end{proof} \begin{remark} In the proof of Theorem \ref{thm3.3}, we also proved that any plurifinely open subset of a domain $\Omega\subset {\mathbb C}^n$ is of the form $(\bigcup_j U_j)\setminus P$, where each $U_j$ is of the form $U_j=\{\zeta \in B_j: \varphi_j>-1\}$ with $\varphi_j$ a plurisubharmonic function on an open ball $B_j$, $j=1,2,...$, and $P$ is a pluripolar set. \end{remark} \begin{cor}\label{cor3.4} Let $\Omega$ be an open subset of ${\mathbb C}^n$, $U$ a plurifinely open subset of $\Omega$ that is Borel measurable, and $u,v\in \cal D$, the domain of definition of the Monge-Amp\`ere operator on $\Omega$. If $u=v$ on $U$, then $(dd^cu)^n|_U=(dd^cv)^n|_U$. \end{cor} \begin{proof} We can find open balls $B_j\subset {\mathbb C}^n$, $j=1, 2, ...,$ such that $\Omega=\bigcup B_j$. According to Theorem \ref{thm3.3} we have $(dd^cu)^n|_{(U\cap B_j)}=(dd^cv)^n|_{(U\cap B_j)}$ for every $j\ge 1$ and therefore $(dd^cu)^n|_U=(dd^cv)^n|_U$. \end{proof} \section{A generalization of Theorem 3.3} Let us first recall the following result of Hai and Hiep: \begin{prop}[{\cite[Proposition 4.1]{HH}}]\label{prop4.1} Let $\Omega$ be an open subset of ${\mathbb C}^n$, $u$ a psh function on $\Omega$ that is bounded near the boundary of $\Omega$, and $T$ a closed positive current on $\Omega$ of bidegree $(p,p)$ ($p<n$). Then the current $dd^c(u T)$ has locally finite mass in $\Omega$. \end{prop} Proposition \ref{prop4.1} allows us to define the current $dd^cu\wedge T$ on $\Omega$ by putting $$dd^cu\wedge T=dd^c(uT).$$ The following proposition is a generalization of Proposition 4.4 from \cite{HH}: \begin{prop}\label{prop4.3} Let $\Omega\subset {\mathbb C}^n$ be an open set, $T$ a closed positive current of bidegree $(p,p)$ on $\Omega$ $(p<n)$ and $\omega$ an open subset of $\Omega$. Assume that $u_j$, $j=1,2,...$, and $u$ are plurisubharmonic functions bounded near the boundary of $\Omega$. If $u_j\searrow u$ then the currents $h(\varphi_1, ..., \varphi_m)(dd^cu_j\wedge T|_{\omega}) \to h(\varphi_1, ..., \varphi_m)(dd^cu\wedge T|_{\omega})$ weakly (on $\omega$) for all $\varphi_1,...,\varphi_m\in \PSH(\omega) \cap L_{loc}^\infty(\omega)$ and $h\in \cal C({\mathbb R}^m)$.
\end{prop} \begin{proof} Let $B=B(z,r)$ and $B'=B(z,r')$ be open balls of ${\mathbb C}^n$ such that $\overline {B'}\subset B\subset \overline B\subset \omega$ and let $\psi$ be the psh function on $\Omega$ defined by $\psi(\zeta)=|\zeta-z|^2-r_0^2$, where $r_0$ is chosen such that $r'<r_0<r$. Since the functions $\varphi_1,...,\varphi_m$ are bounded on $\overline{B'}$ and $\psi(\zeta)>0$ outside of $B(z,r_0)$ we can find a constant $A>0$ such that, for every $j=1,\ldots,m$, $\max\{\varphi_j,A\psi\}=\varphi_j$ on $\overline{B'}$ and $\max\{\varphi_j,A\psi\}=A\psi$ outside a compact neighborhood of $\overline B$ in $\omega$. By the sheaf property of plurisubharmonic functions, the function defined by $\psi_j=\max\{\varphi_j,A\psi\}$ on $B$ and $\psi_j=A\psi$ on $\Omega\setminus B$ is a locally bounded psh function on $\Omega$. It follows from Proposition 4.4 from \cite{HH} that $h(\psi_1,...,\psi_m)(dd^cu_j\wedge T)|_{B'}\to h(\psi_1,...,\psi_m)(dd^cu\wedge T)|_{B'}$ weakly (as currents on $B'$), and hence $h(\varphi_1,...,\varphi_m)(dd^cu_j\wedge T|_\omega)|_{B'}\to h(\varphi_1,...,\varphi_m)(dd^cu\wedge T|_\omega)|_{B'}$ weakly on $B'$, because we have $h(\psi_1,...,\psi_m)(dd^cu_j\wedge T)|_{B'}=h(\varphi_1,...,\varphi_m)(dd^cu_j\wedge T|_\omega)|_{B'}$ for every $j$, and $h(\psi_1,...,\psi_m)(dd^cu\wedge T)|_{B'}=h(\varphi_1,...,\varphi_m)(dd^cu\wedge T|_\omega)|_{B'}$. It follows that $h(\varphi_1, ..., \varphi_m)(dd^cu_j\wedge T|_{\omega}) \to h(\varphi_1, ..., \varphi_m)(dd^cu\wedge T|_{\omega})$ weakly on $B'$. Since the property to be proved is local, the proof is complete. \end{proof} \begin{prop}[{\cite[Proposition 4.6]{NP}}] \label{prop4.5} Let $\Omega\subset {\mathbb C}^n$ be an open set, $T$ a closed positive current of bidegree $(n-1,n-1)$ on $\Omega$ and $\omega$ an open subset of $\Omega$. Let $\varphi, \psi\in \PSH(\omega)$ and $u,v\in \PSH(\Omega)\cap L_{loc}^\infty(\Omega)$ such that $u=v$ on $\cal O=\{\varphi >\psi\}$. Then $$ dd^cu\wedge T|_{\cal O}=dd^cv\wedge T|_{\cal O}.$$ \end{prop} As a consequence of Proposition \ref{prop4.5} and the proof of Theorem \ref{thm3.3} we have the following proposition. \begin{prop} Let $\Omega\subset {\mathbb C}^n$ be an open set and $T$ a closed positive current of bidegree $(n-1,n-1)$. Assume that $u,v\in \PSH(\Omega)\cap L_{loc}^\infty(\Omega)$ are such that $u=v$ on a Borel measurable $\cal F$-open set $\cal O\subset \Omega$. Then $$dd^cu\wedge T|_{\cal O}=dd^cv\wedge T|_{\cal O}.$$ \end{prop} \begin{proof} In the proof of Theorem \ref{thm3.3} we have seen that there is a sequence $(\omega_j)$ of open subsets of ${\mathbb C}^n$ such that $\cal O$ is a subset of the union $\bigcup V_j$ of $\cal F$-open sets $V_j$, $j=1,2,\ldots$, of $\Omega$, all of the form $V_j=\{\psi_j>-1\}$, where $\psi_j\in \PSH(\omega_j)$, and such that $u=v$ on each of them. According to Proposition \ref{prop4.5} we have $ dd^cu\wedge T|_{V_j}=dd^cv\wedge T|_{V_j}$ for every $j=1,2,\ldots$, and thus $ dd^cu\wedge T|_{\cal O}=dd^cv\wedge T|_{\cal O}.$ \end{proof} Now we can state the following generalization of Theorem 1.2 from \cite{HH}: \begin{theorem}\label{thm4.6} Let $\Omega$ be an open set in ${\mathbb C}^n$, $T$ a closed positive current of bidegree $(n-1,n-1)$ on $\Omega$ and $u, v\in \PSH(\Omega)$ bounded near the boundary of $\Omega$.
If $u=v$ on a Borel measurable plurifinely open set $U\subset \Omega$, then $$dd^cu\wedge T|_U=dd^cv\wedge T|_U.$$ \end{theorem} \begin{proof} Let $u, v\in \PSH(\Omega)$ be bounded near the boundary of $\Omega$ and equal on a Borel measurable plurifinely open set $U\subset \Omega$. We have seen in the proof of Theorem \ref{thm3.3} that there is a sequence of open sets $\omega_j\subset \Omega$ and a sequence of plurifinely open sets $V_j \subset \omega_j$ of the form $V_j=\{\psi_j>-1\}(\subset \omega_j)$ for some sequence of functions $\psi_j\in \PSH(\omega_j)$ such that for every $j$, $-1\leq \psi_j\leq 0$, $u=v$ on $V_j$, and $U\subset \bigcup_j V_j$. For every integer $k$ let $u_k=\max(u,-k)\in L_{loc}^\infty(\Omega)$ and $v_k=\max(v,-k)\in L_{loc}^\infty(\Omega)$; then $u_k\searrow u$ and $v_k\searrow v$. Since $u_k=v_k$ on $V_j$, we have, according to Proposition \ref{prop4.5}, $dd^cu_k\wedge T|_{V_j}=dd^cv_k\wedge T|_{V_j}$ and hence $$(\psi_j+1)(dd^cu_k\wedge T)|_{\omega_j}=(\psi_j+1)(dd^cv_k\wedge T)|_{\omega_j}$$ for any $j$ and $k$. By letting $k\to +\infty$ we therefore have for every $j$ $$(\psi_j+1)(dd^cu\wedge T)|_{\omega_j}=(\psi_j+1)(dd^cv\wedge T)|_{\omega_j}$$ according to Proposition \ref{prop4.3}, so that $$(dd^cu\wedge T)|_{V_j}=(dd^cv\wedge T)|_{V_j}$$ because $0<\psi_j+1\leq 1$ on $V_j$ and $V_j$ is a Borel subset of $\omega_j$. We conclude that $$(dd^cu\wedge T)|_{\bigcup_jV_j}=(dd^cv\wedge T)|_{\bigcup_jV_j},$$ whence $$(dd^cu\wedge T)|_U=(dd^cv\wedge T)|_U.$$ \end{proof} \end{document}
\begin{document} \title{Reply to Marinatto's comment on ``Bell's theorem without inequalities and without probabilities for two observers''} \author{Ad\'{a}n Cabello} \email{[email protected]} \affiliation{Departamento de F\'{\i}sica Aplicada II, Universidad de Sevilla, 41012 Sevilla, Spain} \date{\today} \begin{abstract} It is shown that Marinatto's claim [Phys. Rev. Lett. {\bf 90}, 258901 (2003)] that the proof of ``Bell's theorem without inequalities and without probabilities for two observers'' [A. Cabello, Phys.~Rev.~Lett. {\bf 86}, 1911 (2001)] requires four spacelike separated observers rather than two is unjustified. \end{abstract} \pacs{03.65.Ud, 03.65.Ta} \maketitle In his Comment~\cite{Marinatto03}, Marinatto claims that the proof of Bell's theorem without inequalities in~\cite{Cabello01a} requires four spacelike separated observers rather than two, as asserted in~\cite{Cabello01a}. Marinatto's claim is based on the fact that to test some of the properties used in the proof, for instance the property \begin{equation} P_\psi (B_2=B_4 | A_1 A_3=+1) = 1, \label{Alice1} \end{equation} one of the observers (Bob) must measure the spin along one direction of his particle~$2$, $B_2$, and also the spin along one direction of his particle~$4$, $B_4$. Marinatto argues that, since both measurements are not spacelike separated, then measuring~$B_2$ could disturb~$v(B_4)$ (the element of reality corresponding to~$B_4$), and measuring~$B_4$ could disturb $v(B_2)$ (the element of reality corresponding to~$B_2$). Therefore, he maintains that, to avoid such possible disturbances, both measurements should be spacelike separated. However, such a prevention is not needed, because it can be demonstrated that measuring~$B_2$ does not disturb~$v(B_4)$, and measuring~$B_4$ does not disturb~$v(B_2)$. Let us recall Einstein, Podolsky, and Rosen's (EPR) criterion for elements of reality: {\em ``If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of physical reality corresponding to this physical quantity''}~\cite{EPR35}. In the scenario described in~\cite{Cabello01a}, Alice, by means of a spacelike separated measurement of the spin along one direction of her particle~$1$, $A_1$, can use the property \begin{equation} P_\psi (A_1=B_2) = 0 \end{equation} to predict with certainty the element of reality~$v(B_2)$. Analogously, she, by means of a spacelike separated measurement of $A_3$ on her particle~$3$, can use the property \begin{equation} P_\psi (A_3=B_4) = 0 \end{equation} to predict with certainty the element of reality~$v(B_4)$. Note that, according to EPR's criterion, what allows Alice to conclude that there is an element of reality, for instance~$v(B_2)$, is the fact that the result of Bob's measurement of~$B_2$ {\em can be predicted with certainty}. The fact that close to particle~$2$ there could exist (or not) a second particle (in this case particle~$4$) on which Bob could perform one measurement or other does not invalidate Alice's prediction, since, according to the predictions of quantum mechanics (presumably corroborated by any conceivable experiment), the presence (or absence) of particle~$4$ close to particle~$2$, or the fact that Bob performs one experiment or other on particle 4, do not change the result for $B_2$ predicted by Alice (otherwise this effect could be used to transmit information between spacelike separated regions). 
Therefore, I must conclude that Marinatto's argument does not justify the need for more spacelike separated observers. Indeed, the proof of Bell's theorem without inequalities using four qubits~\cite{Cabello01a} can be transformed into a proof using only two particles~\cite{Cabello03,CPZBZ03}. \end{document}
\begin{document} \markboth{S. Chen, Y. Chen, and Q. Yang} {Randomized Testing of $q$-Monomials} \title{Towards Randomized Testing of $q$-Monomials in Multivariate Polynomials} \author{Shenshi Chen and Yaqing Chen} \address{Department of Computer Science,\\ University of Texas-Pan American,\\ Edinburg, TX 78539, USA\\ \email{[email protected]}} \author{Quanhai Yang} \address{College of Information Engineering,\\ Northwest A\&F University,\\ Yangling, Shaanxi 712100, China\\} \maketitle \begin{abstract} Given any fixed integer $q\ge 2$, a $q$-monomial is of the format $\displaystyle x^{s_1}_{i_1}x^{s_2}_{i_2}\cdots x_{i_t}^{s_t}$ such that $1\le s_j \le q-1$, $1\le j \le t$. $q$-monomials are natural generalizations of multilinear monomials. Recent research on testing multilinear monomials and $q$-monomials for prime $q$ in multivariate polynomials relies on the property that $Z_q$ is a field when $q\ge 2 $ is prime. When $q>2$ is not prime, it remains open whether the problem of testing $q$-monomials can be solved in some compatible complexity. In this paper, we present a randomized $O^*(7.15^k)$ algorithm for testing $q$-monomials of degree $k$ that are found in a multivariate polynomial that is represented by a tree-like circuit with a polynomial size, thus giving a positive, affirming answer to the above question. Our algorithm works regardless of the primality of $q$ and improves upon the time complexity of the previously known algorithm for testing $q$-monomials for prime $q>7$. \end{abstract} \keywords{Algebra; complexity; multivariate polynomials; monomials; monomial testing; randomized algorithms.} \section{Introduction} \subsection{Background} Recently, significant efforts have been made towards studying the problem of testing monomials in multivariate polynomials \cite{koutis08,williams09,Bjorklund2010,chen12a,chen12b,chen11,chen11a,chen11b}, with the central question consisting of whether a multivariate polynomial represented by a circuit (or even simpler structure) has a multilinear (or some specific) monomial in its sum-product expansion. This question can be answered straightforwardly when the input polynomial has been expanded into a sum-product representation, but the dilemma, though, is that obtaining such a representation generally requires exponential time. The motivation and necessity of studying the monomial testing problem can be clearly understood from its connections to various critical problems in computational complexity as well as the possibilities of applying algebraic properties of polynomials to move forward the research on those critical problems (see, e.g., \cite{chen11}). Historically, polynomials and the studies thereof have, time and again, contributed to many advancements in theoretical computer science research. Most notably, many major breakthroughs in complexity theory would not have been possible without the invaluable roles played by low degree polynomial testing/representing and polynomial identity testing. For example, low degree polynomial testing was involved in the proof of the PCP Theorem, the cornerstone of the theory of computational hardness of approximation and the culmination of a long line of research on IP and PCP (see, Arora {\em et al.} \cite{arora98} and Feige {\em et al.} \cite{feige96}). 
Polynomial identity testing has been extensively studied due to its role in various aspects of theoretical computer science (see, for example, Kabanets and Impagliazzo \cite{kabanets03}) and its applications in various fundamental results such as Shamir's IP=PSPACE \cite{shamir92} and the AKS Primality Testing \cite{aks04}. Low degree polynomial representing \cite{minsky-papert68} has been sought after in order to prove important results in circuit complexity, complexity class separation and subexponential time learning of Boolean functions (see, for examples, Beigel \cite{beigel93}, Fu\cite{fu92}, and Klivans and Servedio \cite{klivans01}). Other breakthroughs in the field of algorithmic design have also been achieved by combinations of randomization and algebrization. Randomized algebraic techniques have led to the randomized algorithms of time $O^*(2^k)$ for the {\sc $k$-path} problem and other problems \cite{koutis08,williams09}. Another recent seminal example is the improved randomized $O(1.657^n)$ time algorithm for the Hamiltonian path problem by Bj{\"o}rklund~\cite{Bjorklund2010}. This algorithm provided a positive answer to the question of whether the Hamiltonian path problem can be solved in time $O(c^n)$ for some constant $0<c < 2$, a challenging problem that had been open for half of a century. Bj{\"o}rklund {\em et al.} further extended the above randomized algorithm to the $k$-path testing problem with $O^*(1.657^k)$ time complexity \cite{Bjorklund2010b}. Very recently, those two algorithms were simplified by Abasi and Bshouty \cite{bshouty13}. These are just a few examples and a survey of related literature is beyond the scope of this paper. \subsection{The Related Work} The problem of testing multilinear monomials in multivariate polynomials was initially exploited by Koutis \cite{koutis08} and then by Williams \cite{williams09} to design randomized parameterized algorithms for the $k$-path problem. Koutis \cite{koutis08} initially developed an innovative group algebra approach to testing multilinear monomials with odd coefficients in the sum-product expansion of any given multivariate polynomial. Williams \cite{williams09} then further connected the polynomial identity testing problem to multilinear monomial testing and devised an algorithm that can test multilinear monomials with odd or even coefficients. The work by Chen {\em et al.} \cite{chen12a,chen12b,chen11,chen11a,chen11b} aimed at developing a theory of testing monomials in multivariate polynomials in the context of a computational complexity study. The goal was to investigate the various complexity aspects of the monomial testing problem and its variants. Initially, Chen and Fu \cite{chen11} proved a series of foundational results, beginning with the proof that the multilinear monomial testing problem for $\Pi\Sigma\Pi$ polynomials is NP-hard, even when each factor of the given polynomial has at most three product terms and each product term has a degree of at most $2$. These results have built a base upon which further study of testing monomials can continue. Subsequently, Chen {\em et al.} \cite{chen11b} (see, also,\cite{chen12b}) studied the generalized $q$-monomial testing problem. 
They proved that when $q\ge 2$ is prime, there is a randomized $O^*(q^k)$ time algorithm for testing $q$-monomials of degree $k$ with coefficients $\not=0\imod q$ in an arithmetic circuit representation of a multivariate polynomial, which can then be derandomized into a deterministic $O^*((6.4q)^k)$ time algorithm when the underlying graph of the circuit is a tree. In the third paper, Chen and Fu \cite{chen12a} (and \cite{chen11a}) turned to finding the coefficients of monomials in multivariate polynomials. Naturally, testing for the existence of any given monomial in a polynomial can be carried out by computing the coefficient of that monomial in the sum-product expansion of the polynomial. A zero coefficient means that the monomial is not present in the polynomial, whereas a nonzero coefficient implies that it is present. Moreover, they showed that coefficients of monomials in a polynomial have their own implications and are closely related to core problems in computational complexity. \subsection{Contribution and Organization} Recent research on testing multilinear monomials and $q$-monomials for prime $q$ in multivariate polynomials relies on the property that $Z_2$ is a field and that $Z_q$ is a field when $q>2$ is prime. When $q>2$ is not prime, $Z_q$ is no longer a field, hence the group algebra based approaches in \cite{koutis08,williams09,chen11b,chen12b} are not applicable to cases of non-prime $q$. It remains open whether the problem of testing $q$-monomials can be solved in some compatible complexity for non-prime $q$. Our contribution in this paper is a randomized $O^*(7.15^k s^2(n))$ algorithm for testing $q$-monomials of degree $k$ in a multivariate polynomial represented by a tree-like circuit of size $s(n)$, thus giving an affirmative answer to the above question. Our algorithm works for both prime and non-prime $q$. Additionally, for prime $q>7$, our algorithm provides a substantial improvement in the time complexity over the previously known algorithm \cite{chen11b,chen12b} for testing $q$-monomials. The rest of the paper is organized as follows. In Section 2, we introduce the necessary notations and definitions. In Section 3, we examine three examples to understand the difficulty of transforming $q$-monomial testing to multilinear monomial testing. In Section 4, we propose a new method for reconstructing a given circuit and a technique to replace each occurrence of a variable with a randomized linear sum of $q-1$ new variables. We show that, with the desired probability, the reconstruction and randomized replacements help transform the testing of $q$-monomials in any polynomial represented by a tree-like circuit to the testing of multilinear monomials in a new polynomial. We design a randomized $q$-monomial testing algorithm in Section 5 and conclude the paper in Section 6. \section{Notations and Definitions} For $1\le i_1 < \cdots <i_t \le n$, $\pi =x_{i_1}^{s_1}\cdots x_{i_t}^{s_t}$ is called a monomial. The degree of $\pi$, denoted by $\mbox{deg}(\pi)$, is $\sum\limits^t_{j=1}s_j$. $\pi$ is multilinear if $s_1 = \cdots = s_t = 1$, i.e., $\pi$ is linear in all its variables $x_{i_1}, \dots, x_{i_t}$. For any given integer $q\ge 2$, $\pi$ is called a $q$-monomial if $1\le s_1, \dots, s_t \le q-1$. In particular, a multilinear monomial is the same as a $2$-monomial.
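For concreteness, the definition can be checked mechanically. The following short Python sketch is ours and purely illustrative; the helper names \texttt{is\_q\_monomial} and \texttt{degree} do not appear in the paper. It tests whether a monomial, given by the exponent vector of the variables that occur in it, is a $q$-monomial and reports its degree.

\begin{verbatim}
# Illustrative check of the definitions above (not part of the paper's
# algorithms).  A monomial is given by the exponent vector (s_1,...,s_t)
# of the variables that actually occur in it.

def is_q_monomial(exponents, q):
    # q-monomial: every exponent lies in {1, ..., q-1}
    return all(1 <= s <= q - 1 for s in exponents)

def degree(exponents):
    return sum(exponents)

if __name__ == "__main__":
    # x_1^2 * x_3 has exponent vector (2, 1): a 3-monomial of degree 3.
    print(is_q_monomial((2, 1), q=3), degree((2, 1)))   # True 3
    # x^3 is not a 3-monomial, since 3 > q - 1 = 2.
    print(is_q_monomial((3,), q=3))                     # False
    # A multilinear monomial (all exponents 1) is exactly a 2-monomial.
    print(is_q_monomial((1, 1, 1), q=2))                # True
\end{verbatim}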
An arithmetic circuit, or circuit for short, is a directed acyclic graph consisting of $+$ gates with unbounded fan-ins, $\times$ gates with two fan-ins, and terminal nodes that correspond to variables. The size, denoted by $s(n)$, of a circuit with $n$ variables is the number of gates in that circuit. A circuit is considered a tree-like circuit if the fan-out of every gate is at most one, i.e., the underlying directed acyclic graph that excludes all the terminal nodes is a tree. In other words, in a tree-like circuit, only the terminal nodes can have more than one fan-out (or out-going edge). Throughout this paper, the $O^*(\cdot)$ notation is used to suppress $\mbox{poly}(n,k)$ factors in time complexity bounds. By definition, any polynomial $F(x_1,\dots,x_n)$ can be expressed as a sum of a list of monomials, called the sum-product expansion. The degree of the polynomial is the largest degree of its monomials in the expansion. With this expanded expression, it is trivial to see whether $F(x_1,\dots,x_n)$ has a multilinear monomial, or a monomial with any given pattern. Unfortunately, obtaining such an expanded expression is in general infeasible, because a polynomial may have exponentially many monomials in its sum-product expansion. In general, a polynomial $F(x_1,\dots,x_n)$ can be represented by a circuit. This type of representation is simple and compact: its size may be polynomial in $n$ and thus substantially smaller than the number of monomials in the sum-product expansion. Thus, the challenge is to test whether $F(x_1,\dots,x_n)$ has a multilinear (or some other desired) monomial efficiently, without expanding it into its sum-product representation. For any given $n\times n$ matrix ${\cal A}$, let $\mbox{perm}({\cal A})$ denote the permanent of ${\cal A}$ and $\mbox{det}({\cal A})$ the determinant of ${\cal A}$. For any integer $k \ge 1$, we consider the group $Z^k_2$ with the multiplication $\cdot$ defined as follows. For $k$-dimensional column vectors $\vec{x}, \vec{y} \in Z^k_2$ with $\vec{x} = (x_1, \ldots, x_k)^T$ and $\vec{y} = (y_1, \ldots, y_k)^T$, $\vec{x} \cdot \vec{y} = (x_1+y_1, \ldots, x_k+y_k)^T.$ $\vec{v}_0=(0, \ldots, 0)^T$ is the zero element in the group. For any field ${\cal F}$, the group algebra ${\cal F}[Z^k_2]$ is defined as follows. Every element $u \in {\cal F}[Z^k_2]$ is a linear combination of the form \begin{eqnarray}\label{exp-2} u &=& \sum_{\vec{x}_i\in Z^k_2,~ a_{i}\in {\cal F}} a_{i} \vec{x}_i. \end{eqnarray} For any element $v = \sum\limits_{\vec{x}_i\in Z^k_2,~ b_{i}\in {\cal F}} b_{i} \vec{x}_i$, we define \begin{eqnarray} u + v &=& \sum_{a_{i},~ b_{i}\in {\cal F},~ \vec{x}_i\in Z^k_2} (a_i+b_i) \vec{x}_i, \ \mbox{and} \nonumber\\ u \cdot v &=& \sum_{a_i,~ b_j\in {\cal F},~ \mbox{ and }~\vec{x}_i,~ \vec{y}_j\in Z^k_2} (a_i b_j) (\vec{x}_i\cdot \vec{y}_j). \nonumber \end{eqnarray} For any scalar $c \in {\cal F}$, \begin{eqnarray} c u &=& c \left(\sum_{\vec{x}_i\in Z^k_2, \ a_i\in {\cal F}} a_{i} \vec{x}_i\right) = \sum_{\vec{x}_i\in Z^k_2,\ a_{i}\in {\cal F}} (c a_{i})\vec{x}_i. \nonumber \end{eqnarray} The zero element in the group algebra $\displaystyle {\cal F}[Z^k_2]$ is $\displaystyle {\bf 0} = \sum_{\vec{v}} 0\vec{v}$, where $0$ is the zero element in ${\cal F}$ and $\vec{v}$ ranges over the vectors in $\displaystyle Z_2^k$. For example, ${\bf 0} = 0\vec{v}_0 = 0\vec{v}_1 + 0\vec{v}_2 + 0\vec{v}_3$, for any $\displaystyle \vec{v}_i \in Z^k_2$, $1\le i\le 3$.
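The group algebra operations just defined are easy to experiment with. The following Python sketch is ours and for illustration only: it specializes the coefficients to $Z_2$, encodes a group element of $Z^k_2$ as a $k$-bit integer so that the group multiplication $\cdot$ becomes bitwise XOR, and stores an algebra element with $0/1$ coefficients as the set of group elements whose coefficient is $1$.

\begin{verbatim}
# Illustrative sketch of arithmetic in Z_2[Z_2^k] (coefficients in Z_2).
# A vector in Z_2^k is a k-bit integer; the group product x . y is the
# componentwise sum mod 2, i.e. bitwise XOR.  An algebra element with
# 0/1 coefficients is the set of vectors whose coefficient equals 1.

def algebra_add(u, v):
    # coefficients add mod 2 -> symmetric difference of the two sets
    return u ^ v

def algebra_mul(u, v):
    # convolve the two sums and reduce each coefficient mod 2
    out = set()
    for x in u:
        for y in v:
            out ^= {x ^ y}
    return out

if __name__ == "__main__":
    v0 = 0                        # the zero vector of the group Z_2^3
    v = 0b101                     # an arbitrary nonzero vector
    elem = {v, v0}                # the algebra element  v + v0
    print(algebra_mul(elem, elem))   # set(): (v + v0)^2 = 0 over Z_2
\end{verbatim}

The last line illustrates that the square of $\vec{v}+\vec{v}_0$ vanishes when the coefficients are taken modulo $2$, a fact that the analysis in Section 5 relies on.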
The identity element in the group algebra $\displaystyle {\cal F}[Z^k_2]$ is $ {\bf 1} = 1 \vec{v}_0 = \vec{v}_0$, where $1$ is the identity element in ${\cal F}$. For any vector $\vec{v} =(v_1, \ldots, v_k)^T \in Z_2^k$ and any $i\ge 0$, let $\displaystyle (\vec{v})^i = (i v_1, \ldots, i v_k)^T.$ When the field ${\cal F}$ is $Z_2$ with respect to the $\imod 2$ operation, for any $x,y\in Z_2$, $x y $ and $x+y$ stand for $x y \imod 2$ and $x+y \imod 2$, respectively. In particular, in the group algebra $Z_2[Z_2^k]$, for any $\vec{v}\in Z_2^k$ we have $(\vec{v})^0 = (\vec{v})^2 = \vec{v}_0.$ \section{$q$-Monomials, Multilinear Monomials and Plus Gates} As we pointed out before, group algebra based algorithms \cite{koutis08,williams09,chen11b,chen12b} cannot be called upon to test $q$-monomials when $q$ is not prime, because $Z_q$ is not a field. Hence, in such a case the algebraic foundation for applying those algorithms is no longer available. It seems plausible that there might be a way to transform the problem of testing $q$-monomials into the problem of testing multilinear monomials and thus utilize the existing techniques for the latter problem to solve the former problem. One plausible strategy to accomplish such a transformation is to replace each variable $x$ in a given multivariate polynomial by a sum $y_1+y_2+\cdots+y_{q-1}$ of $q-1$ new variables. Ideally, such replacements should result in a multilinear monomial in the new polynomial that corresponds to the given $q$-monomial in the original polynomial and vice versa, thereby allowing the multilinear monomial testing algorithms based on group algebras over a field of characteristic $2$~\cite{koutis08,williams09} to be adapted to the testing of multilinear monomials in the new polynomial. Unfortunately, some careful analysis will reveal that this approach has, as exhibited in Example \ref{ex-1}, a profound technical barrier that prevents us from applying those multilinear monomial testing algorithms. \begin{example}\label{ex-1} Consider a simple $4$-monomial $\pi = x^3$ of degree $3$. Replacing $x$ with $y_1+y_2+y_3$ in $\pi$ results in \begin{eqnarray} r(\pi) &=& (y_1+y_2+y_3)^3 \nonumber \\ &= & y_1^3+y_2^3+y_3^3 + 3y_1^2y_2 + 3y_1y^2_2 +3y_2^2y_3 + 3y_2y^2_3 + 3y_1^2y_3 + 3y_1y^2_3 \nonumber \\ & & +~ 6 y_1y_2y_3. \nonumber \end{eqnarray} \end{example} $r(\pi)$ has one and only one degree $3$ multilinear monomial $\pi'=y_1y_2y_3$. It is unfortunate that the coefficient $c(\pi')$ of $\pi'$ is $6$, an even number. When applying the group algebra based multilinear monomial testing algorithms to $r(\pi)$ over the field $Z_2$ with respect to the $(\bmod~ 2)$ operation, the even coefficient $c(\pi')$ eliminates $\pi'$ from $r(\pi)$. Hence, we are unable to detect the existence of any multilinear monomials in the sum-product expansion of $r(\pi)$. Since the above example can be generalized to arbitrary $q$-monomials for $q>2$, we have to design an innovative replacement technique so that certain multilinear monomials in the new polynomial will survive the elimination by the $(\bmod~ 2)$ operation over $Z_2$, or by the characteristic 2 property over any field of characteristic 2. Specifically, we have to ensure, either with certainty or with the desired probability, that a given $q$-monomial $\pi$ with coefficient $c(\pi)$ in the original polynomial will correspond to one or a list of {\em "distinguishable"} multilinear monomials with odd coefficients in the derived polynomial, regardless of the parity of $c(\pi)$.
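The parity obstruction described above is easy to verify by direct computation. The following SymPy snippet is ours and for illustration only; it expands $(y_1+y_2+y_3)^3$ from Example~\ref{ex-1} and confirms that the only degree-$3$ multilinear monomial $y_1y_2y_3$ has the even coefficient $6$, and hence that no multilinear monomial survives modulo $2$.

\begin{verbatim}
# Verify the coefficient from Example 1 (illustration only; needs SymPy).
import sympy as sp

y1, y2, y3 = sp.symbols('y1 y2 y3')
r_pi = sp.expand((y1 + y2 + y3) ** 3)      # the replacement of x^3
poly = sp.Poly(r_pi, y1, y2, y3)

c = poly.coeff_monomial(y1 * y2 * y3)      # coefficient of y1*y2*y3
print(c, c % 2)                            # prints: 6 0

# No multilinear monomial of r(pi) has an odd coefficient:
odd_multilinear = [m for m, a in poly.terms()
                   if all(e <= 1 for e in m) and a % 2 == 1]
print(odd_multilinear)                     # prints: []
\end{verbatim}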
\begin{figure}\caption{The circuit for $\pi = x^3$ in Example~\ref{ex-1}.}\label{fig1}\end{figure} \begin{figure}\caption{The expanded circuit for Example~\ref{ex-1} after replacing $x$ by $y_1+y_2+y_3$ and adding the new variables $z_1$ and $z_2$ for the two $+$ gates.}\label{fig2}\end{figure} When group algebraic elements are selected to replace variables in the input polynomial, the polynomial might become zero due to mutual annihilation of the results from a list of multilinear monomials with odd coefficients. Koutis \cite{koutis08} proved that when those group algebraic elements are uniformly random, with a probability at least $\frac{1}{4}$, the input polynomial that has multilinear monomials with odd coefficients will not become zero, even if mutual annihilation of the results from a list of multilinear monomials with odd coefficients may happen. Williams \cite{williams09} introduced a new variable for each $\times$ gate in the circuit representing the input polynomial, which helps avoid the aforementioned mutual annihilation. In essence, the new variables added for the $\times$ gates can help generate one or a list of "distinguishable" multilinear monomials with odd coefficients in the derived polynomial, no matter whether the coefficient of the original multilinear monomial is even or odd. However, this approach cannot help resolve the $q$-monomial testing problem, due to possible implications of $+$ gates. In order to understand the above situation, let us examine Example 1 again. Following Williams's algorithm, we first reconstruct the circuit in Figure~\ref{fig1}. The expanded circuit, after the replacement of $x$ by $y_1+y_2+y_3$ along with the addition of new variables $z_1$ and $z_2$ for the two respective $+$ gates, is shown in Figure~\ref{fig2}. The coefficient for the only multilinear monomial $y_1y_2y_3$ produced by the new circuit is $6z_1z_2$, which is even and thus helps annihilate $y_1y_2y_3$ with respect to the $(\bmod~ 2)$ operation or, in general, the characteristic 2 property of the underlying field. The following two examples provide us with more evidence that there are technical difficulties in dealing with possible implications of $+$ gates. \begin{figure}\caption{The circuit for $F(x_1,x_2) = 2x_1^4x_2 + 2x_2^2$ in Example~\ref{ex-2}.}\label{fig3}\end{figure} \begin{figure}\caption{The circuit obtained from Figure~\ref{fig3} by adding a new variable for each $\times$ gate.}\label{fig4}\end{figure} \begin{example}\label{ex-2} Let $F(x_1,x_2) = 2x_1^4x_2 + 2x_2^2$ be represented by the circuit in Figure~\ref{fig3}. $F$ has one $5$-monomial $\pi_1 = x_1^4x_2$ and one $3$-monomial $\pi_2 = x_2^2$, each of which has a coefficient $2$. \end{example} When one follows the approach by Williams \cite{williams09} and adds, for each $\times$ gate in Figure~\ref{fig3}, a new $\times$ gate that multiplies the output of this gate with a new variable, one obtains a new circuit in Figure~\ref{fig4} that computes $$F'(z_1,z_2,\ldots,z_7,x_1,x_2) = z_1z_3z_5z_7x_1^4x_2 + z_3z_4z_6z_7x^4_1x_2 + 2z_7x_2^2.$$ Although $2x_1^4x_2$ in $F$ is split into two distinguishable occurrences that have respective unique coefficients $z_1z_3z_5z_7$ and $z_3z_4z_6z_7$, $2x_2^2$ in $F$ corresponds to $2z_7x_2^2$, which has an even coefficient $2z_7$. In particular, the implications of $+$ gates on testing multilinear monomials can be seen from the following example. \begin{example}\label{ex-3} Let $G(x_1,x_2,x_3) = 2x_1^4x_3 + 2x_2x_3$. Changing the terminal node $x_2$ to $x_3$ for the top $\times$ gate in Figure~\ref{fig3} (respectively, for the top second $\times$ gate in Figure~\ref{fig4}) gives a circuit to compute $G$ (respectively, $G'$). \end{example} Like in Example 2, $G'(z_1,z_2,\ldots,z_7,x_1,x_2, x_3) = z_1z_3z_5z_7x_1^4x_3 + z_3z_4z_6z_7x^4_1x_3 + 2z_7x_2x_3$. Here, $2x_1^4x_3$ is split into two distinguishable occurrences that have unique coefficients $z_1z_3z_5z_7$ and $z_3z_4z_6z_7$, respectively.
However, the only multilinear monomial $2x_2x_3$ in $G$ corresponds to $2z_7x_2x_3$, which has an even coefficient $2z_7$. Therefore, this multilinear monomial cannot be detected by Williams' algorithm. Example 3 shows that there is a flaw in the circuit reconstruction by Williams \cite{williams09}: Introducing a new variable to multiply the output of every $\times$ gate is not sufficient to overcome the difficulty that may be caused by $+$ gates. \section{Circuit Reconstruction and A Transformation} In this section, we shall design a new method to reconstruct a given circuit and a randomized variable replacement technique so that we can transform, with some desired success probability, the testing of $q$-monomials to the testing of multilinear monomials. To simplify presentation, we assume throughout the rest of the paper that if any given polynomial has $q$-monomials in its sum-product expansion, then the degrees of those $q$-monomials are at least $k$ and one of them has degree exactly $k$. This assumption is without loss of generality, because when a polynomial has $q$-monomials of degree $< k$, say of least degree $\ell$ with $1\le \ell < k$, we can multiply the polynomial by a list of $k-\ell$ new variables so that the resulting polynomial will have $q$-monomials with degrees satisfying the aforementioned assumption. \subsection{Circuit Reconstruction}\label{CR} For any given polynomial $F(x_1,x_2,\ldots,x_n)$ represented by a tree-like circuit ${\cal C}$ of size $s(n)$, we first reconstruct the circuit ${\cal C}$ in three steps as follows. {\bf Eliminating redundant $+$ gates.} Starting with the root gate, check to see whether a $+$ gate receives input from another $+$ gate. If a $+$ gate $g$ receives input from a $+$ gate $f$, which receives inputs from gates $f_1,f_2,\ldots,f_{s}$ and/or terminal nodes $u_1,u_2,\ldots,u_t$, then delete $f$ and let the gate $g$ receive inputs directly from $f_1,f_2,\ldots,f_{s}$ and/or $u_1,u_2,\ldots,u_t$. Repeat this process until there are no more $+$ gates receiving input from another $+$ gate. Note that we consider tree-like circuits only. Since each gate of such a circuit has at most one output, the above eliminating process will not increase the size of the circuit. {\bf Duplicating terminal nodes.} For each variable $x_i$, if $x_i$ is the input to a list of gates $g_1, g_2, \ldots, g_{\ell}$, then create $\ell$ terminal nodes $u_1, u_2, \ldots, u_{\ell}$ such that each of them represents a copy of the variable $x_i$ and $g_j$ receives input from $u_j$, $1\le j\le \ell$. Let ${\cal C}^*$ denote the reconstructed circuit after the above two reconstruction steps. Since the original circuit ${\cal C}$ is tree-like, the underlying graph of ${\cal C}^*$, including all the terminal nodes, is a tree. Such a tree structure implies the following simple facts: \begin{itemize} \item There is no duplicated occurrence of any input variable along any path from the root to a terminal node. \item Every occurrence of each variable $x_i$ in the sum-product expansion of $F$ is represented by a terminal node for $x_i$. \item The size of the new circuit is at most $n s(n)$. \item Any $+$ gate will receive input from $\times$ gates and/or terminal nodes.
\end{itemize} \begin{figure} \caption{The New Circuit for $F(x_1,x_2) = 2x_1^4x_2 + 2x_2^2$} \label{fig5} \end{figure} {\bf Adding new variables for $\times$ gates and for those terminal nodes that directly connect to $+$ gates.} Having completed the reconstruction for ${\cal C}^*$, we then expand it to a new circuit ${\cal C'}$ as follows. For each $\times$ gate $g_i$ in ${\cal C}^*$, we attach a new $\times$ gate $g'_i$ that multiplies the output of $g_i$ with a new variable $z_i$, and feed the output of $g'_i$ to the gate that reads the output of $g_i$. Here, the way of introducing new variables for $\times$ gates follows what is done by Williams in \cite{williams09}. However, in addition to these new $z$-variables, we may need to introduce additional variables for $+$ gates. Specifically, for each $+$ gate $f$ that receives inputs from terminal nodes $u_1,u_2,\ldots,u_t$, we add a $\times$ gate $f_j$ and have it to receive inputs from $u_j$ and a new variable $z_j$ and then feed its output to $f$, $1\le j\le t$. Note that $f$ may receive input from $\times$ gates but no new gates are needed for those gates with respect to $f$. Assume that a list of $h$ new $z$-variables $z_1, z_2, \ldots, z_h$ have been introduced into the circuit ${\cal C'}$. Let $F'(z_1, z_2, \ldots, z_h, x_1, x_2, \ldots, x_n)$ be the new polynomial represented by ${\cal C'}$. In Figure~\ref{fig5}, we show the reconstructed circuit for the one in Figure~\ref{fig3} that represents $F(x_1,x_2) = 2x_1^4x_2 + 2x_2^2$. By this new circuit, $$F'(z_1,z_2,\ldots,z_9,x_1,x_2) = z_1z_2z_5z_7x_1^4x_2 + z_3z_4z_6z_7x^4_1x_2 + z_7z_8x_2^2 + z_7z_9x_2^2.$$ As expected, not only is $2x_1^4x_2$ in $F$ split into two distinguishable occurrences that have unique coefficients $z_1z_2z_5z_7$ and $z_3z_4z_6z_7$, but also $2x_2^2$ in $F$ is split into two distinguishable occurrences that have unique coefficients $z_7z_8$ and $z_7z_9$. Notably, those four coefficients are multilinear monomials of $z$-variables and each has an odd scalar coefficient 1. \begin{lemma}\label{rtm-lem1} $F(x_1,x_2,\ldots,x_n)$ has a monomial $\pi$ of degree $k$ in its sum-product expansion if and only if there is a monomial $\alpha \pi$ in the sum-product expansion of $F'(z_1, z_2, \ldots, z_h, x_1, x_2,\ldots, x_n)$ such that $\alpha$ is a multilinear monomial of $z$-variables with degree $\le 2k-1$. Furthermore, if $F'$ has two products $\alpha_1\pi$ and $\alpha_2\pi$ in its sum-product expansion, then we have $\alpha_1 \not= \alpha_2$, where $\alpha_1$ and $\alpha_2$ are products of $z$-variables; and any two different monomials of $x$-variables in $F'$ will have different coefficients that are products of $z$-variables. \end{lemma} \begin{proof} By the reconstruction processes, ${\cal C^*}$ computes exactly the same polynomial $F$. If $F$ has a monomial $\pi$ of degree $k$, then let ${\cal T}$ be the subtree of ${\cal C^*}$ that generates the monomial $\pi$, and ${\cal T'}$ be the corresponding subtree of ${\cal T}$ in ${\cal C'}$. By the way the new $z$-variables are introduced, the monomial generated by ${\cal T'}$ is $\alpha\pi$ with $\alpha$ as the product of all the $z$-variables added to ${\cal T}$ to yield ${\cal T'}$. Since $\pi$ has degree $k$, ${\cal T}$ has $k-1$ many $\times$ gates. So, ${\cal T'}$ has $k-1$ new $\times$ gates along with $k-1$ many new $z$-variables that are added with respect to those $\times$ gates in ${\cal T}$. In addition, ${\cal T'}$ has $k$ terminal nodes representing $k$ individual copies of $x$-variables in $\pi$. 
When such a terminal node is connected to a $+$ gate, then a new $\times$ gate is added along with a new $z$-variable. Thus, the terminal nodes in ${\cal T}'$ can contribute at most $k$ additional $z$-variables. Therefore, the degree of $\alpha$ is at most $2k-1$. Since all those $z$-variables are distinct, $\alpha$ is multilinear. If $F'$ has a monomial $\alpha\pi$ such that $\alpha$ is a product of $z$-variables and $\pi$ is a product of $x$-variables, then let ${\cal M}'$ be the subtree of ${\cal C'}$ that generates $\alpha\pi$. According to the construction of ${\cal C^*}$ and ${\cal C'}$, removing all the $z$-variables along with the newly added $\times$ gates from ${\cal M'}$ will result in a subtree ${\cal M}$ of ${\cal C}^*$ that generates $\pi$. Thereby, $\pi$ is a monomial in $F$. Assume that $F'$ has $\alpha_1 \pi$ and $\alpha_2\pi$ in its sum-product expansion, where $\alpha_1$ and $\alpha_2$ are products of $z$-variables. Let ${\cal T}'_1$ and ${\cal T}'_2$ be the two subtrees in ${\cal C'}$ that generate $\alpha_1 \pi$ and $\alpha_2\pi$, respectively. Since each of such subtrees in ${\cal C'}$ can be used once to generate one product in the sum-product expansion of $F'$, we have ${\cal T}'_1 \not= {\cal T}'_2.$ Let ${\cal T}_1$ and ${\cal T}_2$ be the two respective subtrees of ${\cal T}'_1$ and ${\cal T}'_2$ in ${\cal C^*}$. By the ways of circuit reconstruction and introduction of new $z$-variables, ${\cal T}'_1 \not= {\cal T}'_2$ implies ${\cal T}_1 \not= {\cal T}_2$. Note that ${\cal T}_1$ and ${\cal T}_2$ generates the same $\pi$. There are two cases for ${\cal T}_1$ and ${\cal T}_2$ to differ: either ${\cal T}_1$ and ${\cal T}_2$ differ at a $\times$ gate $g$, or they have the same $\times$ gates but differ at a terminal node $u$. In the former case, the $z$-variables added with respect to $g$ will make $\alpha_1$ and $\alpha_2$ different. In the latter case, we assume without loss of generality that ${\cal T}_1$ has a terminal node $u$ but ${\cal T}_2$ does not. In this case, the parent node $u'$ of $u$ has to be a $+$ gate. Hence, a new $z$-variable is added for the new $\times$ gate between $u'$ and $u$. Therefore, this new $z$-variable makes $\alpha_1$ and $\alpha_2$ different. Now, consider that $F'$ has two monomials $\alpha\pi$ and $\beta\phi$ such that, $\pi$ and $\phi$ are products of $x$-variables and $\alpha$ and $\beta$ are products of $z$-variables. Let ${\cal H}'_1$ and ${\cal H'}_2$ be the subtrees in ${\cal C'}$ that generate $\alpha\pi$ and $\beta\phi$, respectively. Again, according to the construction of ${\cal C^*}$ and ${\cal C'}$, removing all the $z$-variables along with the newly added $\times$ gates from ${\cal H'}_1$ and ${\cal H'}_2$ will result in two subtrees ${\cal H}_1$ and ${\cal H}_2$ of ${\cal C}^*$ that generate $\pi$ and $\phi$, respectively. When $\pi \not= \phi$, ${\cal H}_1$ and ${\cal H}_2$ are different subtrees. Following a similar analysis in the above paragraph for ${\cal T}_1$ and ${\cal T}_2$ to be different, we have $\alpha \not= \beta.$ Also, since the $z$-variables in $\alpha$ corresponds to $\times$ gates in ${\cal H'}_1$ that do not repeat themselves because ${\cal H'}_1$ is a tree, $\alpha$ is multilinear. Similarly, $\beta$ is also multilinear. Combining the above analysis completes the proof for the lemma. 
\end{proof} \subsection{A Transformation} In order to present the technique to transform the testing of $q$-monomials to the testing of multilinear monomials, we introduce one more definition related to variable replacements. \begin{definition}\label{def-2} Let $q\ge 2$ be a fixed integer. Let $\pi = x^s$ for $1\le s\le q-1$. Consider \begin{eqnarray}\label{eq-0} r(\pi) & = & \prod^{s}_{i=1}(c_{i1}y_1+c_{i2}y_2+\cdots + c_{i(q-1)}y_{q-1}), \nonumber \end{eqnarray} where $c_{ij}$ are constants and $y_j$ are new variables, $1\le i\le s$ and $1\le j\le q-1$. For $1\le s \le q-1$, let $\pi' = y_1y_2\cdots y_s$. Define the coefficient matrix of $\pi'$ with respect to $r(\pi)$ as $$ {\cal C}[\pi', r(\pi)]=\left(\begin{array}{cccc} c_{11} & c_{12} & \cdots & c_{1s}\\ c_{21} & c_{22} & \cdots & c_{2s}\\ & & \cdots & \\ c_{s1} & c_{s2} & \cdots & c_{ss} \end{array} \right). $$ \end{definition} {\bf Transformation:}\label{Trans} For any given $n$-variate polynomial $F(x_1,x_2,\ldots,x_n)$ represented by a circuit ${\cal C}$, we first carry out the circuit reconstruction as addressed in Subsection \ref{CR} to obtain a new circuit ${\cal C'}$ and let $F'(z_1, z_2, \ldots, z_h, x_1, x_2,\ldots, x_n)$ be the new polynomial represented by ${\cal C'}$. The transformation through replacing $x$-variables works as follows: For each variable $x_i$ and for each terminal node $u_j$ representing $x_i$ in circuit ${\cal C'}$, select uniform random values $c_{ij\ell}$ from $Z_2$ and replace $x_i$ at the node $u_j$ with \begin{eqnarray}\label{trans-exp1} r(x_i) &=& (c_{ij1}y_{i1} + c_{ij2}y_{i2} +\cdots+c_{ij(q-1)}y_{i(q-1)}). \end{eqnarray} Let $$ G(z_1,\ldots,z_h,y_{11},\ldots,y_{1(q-1)},\ldots,y_{n1},\ldots,y_{n(q-1)}) $$ be the polynomial resulted from the above replacements for circuit ${\cal C'}$. We need Lemmas \ref{lem3} and \ref{lem-0} in the following to help estimate the success probability of the transformation. Consider the vector space $Z_2^n$. For any vector $\vec{v}_1,\vec{v}_2,\ldots,\vec{v}_{k}\in Z_2^n$, $1\le k\le n$, let $\mbox{span}(\vec{v}_1,\vec{v}_2,\ldots,\vec{v}_{k})$ denote the linear space generated by those $k$ vectors. The following lemma follows directly from Lemma 6.3.1 of Blum and Kannan in \cite{blum95}. \begin{lemma}\label{lem3} \cite{blum95} Assume that $\vec{v}_1,\vec{v}_2,\ldots,\vec{v}_{k}$ are random vectors uniformly chosen from $Z_2^n$, $1\le k\le n$ and $n\ge 1$. Let $\mbox{Pr}[\vec{v}_1,\vec{v}_2,\ldots,\vec{v}_{k}]$ denote the probability that $\vec{v}_1,\vec{v}_2,\ldots,\vec{v}_{k}$ are linearly independent. We have $$ \mbox{Pr}[\vec{v}_1,\vec{v}_2,\ldots,\vec{v}_{k}] >0.28. $$ \end{lemma} Koutis had a proof for $\mbox{Pr}[\vec{v}_1,\vec{v}_2,\ldots,\vec{v}_{k}] > \frac{1}{4}$, which is contained in the proof for his Theorem 2.4 \cite{koutis08}. But some careful examination will show that there is a flaw in the analysis for $k=3$. Nevertheless, we present a proof in the following. \begin{proof} From the basis of linear algebra, we know that $\mbox{span}(\vec{v}_1,\vec{v}_2,\ldots,\vec{v}_{k})$ has $2^k$ vectors and any vector in $Z^n_2 - \mbox{span}(\vec{v}_1,\vec{v}_2,\ldots,\vec{v}_{k})$ is linearly independent of $\vec{v}_1,\vec{v}_2,\ldots,\vec{v}_{k}.$ Note that $|Z^n_2| = 2^n$. 
Therefore, \begin{eqnarray}\label{eq-vec} \mbox{Pr}[\vec{v}_1,\vec{v}_2,\ldots,\vec{v}_{k}] &=& \mbox{Pr}[\vec{v}_k \not\in \mbox{span}(\vec{v}_1,\vec{v}_2,\ldots,\vec{v}_{k-1})] \cdot \mbox{Pr}[\vec{v}_1,\vec{v}_2,\ldots,\vec{v}_{k-1}] \nonumber \\ &=&(1-\frac{1}{2^{n-k +1}})\cdot \mbox{Pr}[\vec{v}_1,\vec{v}_2,\ldots,\vec{v}_{k-1}] \nonumber \\ &=& \prod^k_{i=1}(1-\frac{1}{2^{n-i+1}}) \nonumber \\ &\ge & \prod^{k}_{i=1}(1-\frac{1}{2^i}). \end{eqnarray} The last inequality holds because of $1\le k \le n$. For any $1\le k \le 40$, by simply carrying out the computation for the right product of expression (\ref{eq-vec}), we obtain \begin{eqnarray} \label{eq-vec-2} \prod^{k}_{i=1}(1-\frac{1}{2^i}) &\ge & 0.288788 > 0.28,~~ 1\le k \le 40, \mbox{and} \\ \left(~\prod^{40}_{i=1}(1-\frac{1}{2^i})~\right) \cdot \frac{40}{41} &\ge & 0.281444 > 0.28. \end{eqnarray} It is obvious that $2^i > i^2$ for $i\ge 41$. Combining this with expressions (\ref{eq-vec}) and (5) yields, for any $k>40$, \begin{eqnarray}\label{eq-vec-3} \prod^{k}_{i=1}(1-\frac{1}{2^i}) &=& \left(~\prod^{40}_{i=1}(1-\frac{1}{2^i})~\right) \cdot \left(~\prod^{k}_{i=41}(1-\frac{1}{2^i})~\right) \nonumber \\ &\ge & \left(~\prod^{40}_{i=1}(1-\frac{1}{2^i})~\right) \cdot \left(~\prod^{k}_{i=41}(1-\frac{1}{i^2})~\right) \nonumber \\ &=& \left(~\prod^{40}_{i=1}(1-\frac{1}{2^i})~\right) \cdot \left(~\prod^{k}_{i=41}(\frac{(i-1)(i+1)}{i^2})~\right) \nonumber \\ &=& \left(~\prod^{40}_{i=1}(1-\frac{1}{2^i})~\right) \cdot \frac{40}{41} \cdot \frac{k+1}{k}\nonumber \\ &\ge & 0.28 \cdot \frac{k+1}{k}\nonumber \\ &\ge & 0.28 \end{eqnarray} The complete proof is then derived from expressions (\ref{eq-vec-2}) and (\ref{eq-vec-3}). \end{proof} \begin{lemma}\label{lem-0} For any integer matrix ${\cal A} = (a_{ij})_{n\times n}$, we have \begin{eqnarray} \mbox{perm}({\cal A}) \imod 2 & = & \mbox{det}({\cal A}) \imod 2. \end{eqnarray} \end{lemma} \begin{proof} Let $\lambda$ be any permutation of $\{1,2,\ldots,n\}$, and $\mbox{sign}(\lambda)$ be the sign of the permutation $\lambda$. Since for any integer $b$, $b \equiv -b \imod 2$, we have \begin{eqnarray} \mbox{det}({\cal A}) \imod 2 & = & \left(~\sum_{\lambda} (-1)^{\mbox{sign}(\lambda)}a_{1\lambda(1)}a_{2\lambda(2)}\cdots a_{n\lambda(n)}~\right) \imod 2 \nonumber \\ & = & \left(~\sum_{\lambda} a_{1\lambda(1)}a_{2\lambda(2)}\cdots a_{n\lambda(n)}~\right) \imod 2 \nonumber \\ & = & \mbox{perm}({\cal A}) \imod 2. \nonumber \end{eqnarray} \end{proof} It is obvious that the above lemma can be easily extended to any field of characteristic 2. We are now ready to estimate the success probability of the transformation. \begin{lemma}\label{rtm-lem2} Assume that the variable replacements are carried out over a field ${\cal F}$ of characteristic $2$ (e.g., $Z_2$). If a given $n$-variate polynomial $F(x_1,x_2,\ldots,x_n)$ that is represented by a tree-like circuit ${\cal C}$ has a $q$-monomial of $x$-variables with degree $k$, then, with a probability at least $0.28^k$, $G$ has a unique multilinear monomial $\alpha \pi$ such that $\pi$ is a degree $k$ multilinear monomial of $y$-variables and $\alpha$ is a multilinear monomial of $z$-variables with degree $\le 2k-1$. If $F$ has no $q$-monomials, then $G$ has no multilinear monomials of $y$-variables, i.e., $G$ has no monomials of the format $\beta \phi$ such that $\beta$ is a multilinear monomial of $z$-variables and $\phi$ is a multilinear monomial of $y$-variables. 
\end{lemma} \begin{proof} We first show the second part of the lemma, i.e., if $F$ has no $q$-monomials, then $G$ has no multilinear monomials of $y$-variables. Suppose otherwise that $G$ has a multilinear monomial $\beta \phi$. Let $\phi = \phi_1\phi_2\cdots \phi_s$ such that $\phi_j$ is the product of all the $y$-variables in $\phi$ that are used to replace the variable $x_{i_j}$, and let $\mbox{deg}(\phi_j) = d_j$, $1\le j\le s$. Consider the subtree ${\cal T}'$ of ${\cal C}'$ that generates $\beta\phi$ when the $x$-variables are replaced by a linear sum of $y$-variables according to expression (\ref{trans-exp1}). Then, the subtree ${\cal T}$ in ${\cal C}^*$ that corresponds to ${\cal T}'$ in ${\cal C}'$ computes a monomial $\pi = x_{i_1}^{d_i}x_{i_2}^{d_2}\cdots x^{d_s}_{i_s}$ and $\phi$ is a multilinear monomial in the expansion of the replacement $r(\pi)$, which is obtained by replacing each occurrence of $x$-variable with a linear sum of $(q-1)$ many $y$-variables by expression $(\ref{trans-exp1})$. If there is one $d_j$ such that $d_j\ge q$, then let us look at the replacements for $x_{i_j}^{d_j}$, denoted as $$ r(x_{i_j}^{d_j}) = \prod^{d_j}_{t = 1} (c_{t1}y_{1}+c_{t2}y_{2}+\cdots + c_{t(q-1)}y_{(q-1)}). $$ Since $d_j\ge q$, by the pigeon hole principle, the expansion of the above $r(x_{i_j}^{d_j})$ has no multilinear monomials. Thereby, we must have $1\le d_j\le q-1$, $1\le j\le s$. Hence, $\pi$ is a $q$-monomial in $F$, a contradiction to our assumption at the beginning. Therefore, when $F$ has no $q$-monomials, then $G$ must not have any multilinear monomials of $y$-variables. We now prove the first part of the lemma. Suppose $F$ has a $q$-monomial $\pi = x_{i_1}^{s_1}x_{i_2}^{s_2}\cdots x_{i_t}^{s_t}$ with $1\le s_j \le q-1$, $1\le j\le t$, and $k = \mbox{deg}(\pi)$. By Lemma \ref{rtm-lem1}, $F'$ has at least one monomial corresponding to $\pi$. Moreover, each of such monomials has a format $\alpha \pi$ such that $\alpha$ is a unique multilinear monomials of $z$-variables with $\mbox{deg}(\alpha) \le 2k-1$. Let $\beta = \alpha \pi$ be one of such monomials. Consider the subtree ${\cal T'}$ of ${\cal C}'$ that generates $\beta$. Based on the construction of ${\cal C}'$, ${\cal T'}$ has $s_j$ terminal nodes representing $s_j$ occurrences of $x_{i_j}$ in $\pi$, $1\le j\le t$. By variable replacements in expression (\ref{trans-exp1}), $\beta$ becomes $r(\beta)$ as follows: \begin{eqnarray}\label{rtm-exp3} r(\beta) &=& \alpha r(\pi) \nonumber \\ &=& \alpha~\prod^t_{\ell=1} \left[~ \prod^{s_\ell}_{j=1}(c_{\ell j1}y_{\ell1}+c_{\ell j2}y_{\ell 2}+\cdots + c_{\ell j(q-1)}y_{\ell (q-1)})~\right], \end{eqnarray} where each occurrence $j$ of $x_{i_\ell}$ is replaced by $(c_{\ell j1}y_{\ell1}+c_{\ell j2}y_{\ell 2}+\cdots + c_{\ell j(q-1)}y_{\ell (q-1)}).$ For $1\le \ell \le t$, let $\pi_\ell = x^{s_\ell}_{i_\ell}$, and \begin{eqnarray}\label{rtm-exp4} r(\pi_\ell) &=& \prod^{s_\ell}_{j=1}(c_{\ell j1}y_{\ell1}+c_{\ell j2}y_{\ell 2}+\cdots + c_{\ell j(q-1)}y_{\ell (q-1)}). 
\end{eqnarray} Since $1\le s_{\ell} \le q-1$, by expression (\ref{rtm-exp4}), $r(\pi_\ell)$ has a multilinear monomial $\pi'_\ell$ with coefficient $c_\ell$ such that \begin{eqnarray} \label{rtm-exp5} \pi'_\ell & = & y_{\ell1} y_{\ell 2} \cdots y_{\ell s_{\ell}}, \mbox{ and } \end{eqnarray} \begin{eqnarray} \label{rtm-exp6} c_\ell &=& \mbox{perm}(C[\pi'_\ell, r(\pi_\ell)]), \end{eqnarray} where the coefficient matrix, as defined in Definition \ref{def-2}, is $$ {\cal C}[\pi'_\ell, r(\pi_\ell)]=\left(\begin{array}{cccc} c_{\ell 11} & c_{\ell 12} & \cdots & c_{\ell 1s_\ell}\\ c_{\ell 21} & c_{\ell 22} & \cdots & c_{\ell 2s_\ell}\\ & & \cdots & \\ c_{\ell s_\ell 1} & c_{\ell s_\ell 2} & \cdots & c_{\ell s_\ell s_\ell} \end{array} \right). $$ Since the field ${\cal F}$ has characteristic $2$ and all the entries in the coefficient are $0/1$ values, we have by Lemma \ref{lem-0} $$ \mbox{perm}(C[\pi'_\ell, r(\pi_\ell)]) = \mbox{det}(C[\pi'_\ell, r(\pi_\ell)]). $$ Because each row of $C[\pi'_\ell, r(\pi_\ell)]$ is a uniform random vector in $Z^{s_\ell}_2$, by Lemma \ref{lem3}, with a probability of at least $0.28$, those row vectors are linearly independent, implying $\mbox{det}(C[\pi'_\ell, r(\pi_\ell)]) =1$. Hence, by expressions (\ref{rtm-exp4}), (\ref{rtm-exp5}) and (\ref{rtm-exp6}), with a probability at least $0.28$, $r(\pi_\ell)$ has a multilinear monomial $\pi'_\ell$. By expression (\ref{rtm-exp3}), with a probability at least $0.28^t\ge 0.28^k$, $\alpha r(\pi)$ has a desired multilinear monomial $\alpha \pi'_1\pi'_2\cdots \pi'_t$. \end{proof} \section{Randomized Testing of $q$-monomials} Let $d = \log_2 (2k-1) + 1$ and ${\cal F} = \mbox{GF}(2^d)$ be a finite field of $2^d$ many elements. We consider the group algebra ${\cal F}[Z^k_2]$. Please note that the field ${\cal F} = \mbox{GF}(2^d)$ has characteristic $2$. This implies that, for any given element $w \in {\cal F}$, adding $w$ for any even number of times yields $0$. For example, $w + w = 2w = w+w+w+w = 4w = 0.$ The algorithm RandQMT for testing whether any given $n$-variate polynomial $F(x_1,x_2,\ldots,x_n)$ that is presented by a tree-like circuit ${\cal C}$ has a $q$-monomial of degree $k$ is given in the following. \begin{quote} \noindent{\bf Algorithm RandQMT} (\underline{Rand}omized $\underline{q}$-\underline{M}onomials \underline{T}esting): \begin{description} \item[1.] As described in Subsection \ref{CR}, reconstruct the circuit ${\cal C}$ to obtain ${\cal C}^*$ that computes the same polynomial $F(x_1,x_2,\ldots,x_n)$ and then introduce new $z$-variables to ${\cal C}^*$ to obtain the new circuit ${\cal C'}$ that computes $F'(z_1,z_2,\ldots, z_h,x_1,x_2,\ldots, x_n)$. \item[2.] Repeat the following loop for at most $(\frac{1}{0.28})^k$ times. \begin{description} \item[2.1.] For each variable $x_i$ and for each terminal node $u_j$ representing $x_i$ in circuit ${\cal C'}$, select uniform random values $c_{ij\ell}$ from $Z_2$ and replace $x_i$ at the node $u_j$ with \begin{eqnarray}\label{rtm-exp1} \hspace{-0.3in} r(x_i) &=& (c_{ij1}y_{i1} + c_{ij2}y_{i2} +\cdots+c_{ij(q-1)}y_{i(q-1)}). \end{eqnarray} Let $$G(z_1,\ldots,z_h,y_{11},\ldots,y_{1(q-1)},\ldots,y_{n1},\ldots,y_{n(q-1)}) $$ be the polynomial resulted from the above replacements for circuit ${\cal C'}$. \item[2.2.] Select uniform random vectors $\vec{v}_{ij} \in Z^k_2-\{\vec{v}_0\}$, and replace the variable $y_{ij}$ with $(\vec{v}_{ij} + \vec{v}_0)$, $1\le i \le n$ and $1\le j\le q-1$. \item[2.3.] 
Use ${\cal C'}$ to calculate \begin{eqnarray}\label{rtm-exp2} G' &=& G(z_1,\ldots,z_h,(\vec{v}_{11}+\vec{v}_0),\ldots,(\vec{v}_{1(q-1)}+\vec{v}_0), \ldots,\nonumber \\ & & ~~~~(\vec{v}_{n1}+\vec{v}_0),\ldots,(\vec{v}_{n(q-1)}+\vec{v}_0)) \nonumber \\ & = & \sum_{j=1}^{2^k} f_j(z_1,\ldots,z_h) \cdot \vec{v}_j, \end{eqnarray} where each $f_j$ is a polynomial of degree $\le 2k-1$ over the finite field ${\cal F}=\mbox{GF}(2^d)$, and $\vec{v}_j$ with $1\le j\le 2^k$ are the $2^k$ distinct vectors in $Z^k_2$. \item[2.4.] Perform polynomial identity testing with the Schwartz-Zippel algorithm \cite{motwani95} for every $f_{j}$ over ${\cal F}$. Return {\em "yes"} if one of those polynomials is not identical to zero. \end{description} \item[3.] Return {\em "no"} if no {\em "yes"} has been returned in the loop. \end{description} \end{quote} It should be pointed out that the actual implementation of Step 2.3 would be running the Schwartz-Zippel algorithm concurrently for all $f_j$, $1\le j\le 2^k$, utilizing the circuit ${\cal C'}$. If one of those polynomials is not identical to zero, then the output of $G'$ as computed by circuit ${\cal C'}$ is not zero. The group algebra technique established by Koutis \cite{koutis08} assures the following two properties: \begin{lemma}\label{rtm-lem3} (\cite{koutis08})~ Replacing all the variables $y_{ij}$ in $G$ with group algebraic elements $\vec{v}_{ij}+\vec{v}_0$ will make all monomials $\alpha \pi$ in $G$ become zero, if $\pi$ is non-multilinear with respect to $y$-variables. Here, $\alpha$ is a product of $z$-variables. \end{lemma} \begin{proof} Recall that ${\cal F}$ has characteristic $2$. For any $\vec{v} \in Z^k_2$, in the group algebra ${\cal F}[Z^k_2]$, \begin{eqnarray}\label{rt-1} (\vec{v}+\vec{v}_0)^2 &=& \vec{v}\cdot\vec{v} + 2\cdot\vec{v}\cdot\vec{v}_0 + \vec{v}_0\cdot\vec{v}_0 \nonumber \\ &=&\vec{v}_0+ 2\cdot\vec{v} + \vec{v}_0 \nonumber \\ &=& 2\cdot \vec{v}_0 + 2\cdot\vec{v} = {\bf 0}. \end{eqnarray} Thus, the lemma follows directly from expression (\ref{rt-1}). \end{proof} \begin{lemma}\label{rtm-lem4} (\cite{koutis08})~ Replacing all the variables $y_{ij}$ in $G$ with group algebraic elements $\vec{v}_{ij}+\vec{v}_0$ will make any monomial $\alpha \pi$ to become zero, if and only if the vectors $\vec{v}_{ij}$ are linearly dependent in the vector space $Z^k_2$. Here, $\pi$ is a multilinear monomial of $y$-variables and $\alpha$ is a product of $z$-variables. Moreover, when $\pi$ becomes non-zero after the replacements, it will become the sum of all the vectors in the linear space spanned by those vectors. \end{lemma} \begin{proof} The analysis below gives a proof for this lemma. Suppose $V$ is a set of linearly dependent vectors in $Z^k_2$. Then, there exists a nonempty subset $T \subseteq V$ such that $\prod_{\vec{v}\in T} \vec{v}= \vec{v}_0$. For any $S\subseteq T$, since $\prod_{\vec{v}\in T} \vec{v} = (\prod_{\vec{v}\in S} \vec{v}~) \cdot (\prod_{\vec{v}\in T-S} \vec{v}~)$, we have $\prod_{\vec{v}\in S} \vec{v} = \prod_{\vec{v}\in T-S} \vec{v}$. Thereby, we have \begin{eqnarray} \prod_{\vec{v}\in T }(\vec{v} + \vec{v}_0) &=& \sum_{S\subseteq T} \prod_{\vec{v}\in S}\vec{v} = {\bf 0}, \nonumber \end{eqnarray} since every $\prod_{\vec{v}\in S}\vec{v}$ is paired by the same $\prod_{\vec{v}\in T-S}\vec{v}$ in the sum above and the addition of the pair is annihilated because ${\cal F} $ has characteristic $2$. 
Therefore, \begin{eqnarray} \prod_{\vec{v}\in V}(\vec{v} + \vec{v}_0) &=& \left(~\prod_{\vec{v}\in T} (\vec{v}+\vec{v}_0)\right) \cdot \left(~\prod_{\vec{v}\in V-T}(\vec{v}+\vec{v}_0)\right) \nonumber \\ & =& 0 \cdot \left(~\prod_{\vec{v}\in V-T}(\vec{v}+\vec{v}_0)\right) = {\bf 0}. \nonumber \end{eqnarray} Now consider that vectors in $V$ are linearly independent. For any two distinct subsets $S, T\subseteq V$, we must have $\prod_{\vec{v}\in T} \vec{v} \not=\prod_{\vec{v}\in S} \vec{v}$, because otherwise vectors in $S \cup T - (S \cap T)$ are linearly dependent, implying that vectors in $V$ are linearly dependent. Therefore, \begin{eqnarray} \prod_{\vec{v}\in V}(\vec{v} + \vec{v}_0) &=& \sum_{T\subseteq V} \prod_{\vec{v}\in T} \vec{v} \nonumber \end{eqnarray} is the sum of all the $2^{|V|}$ distinct vectors spanned by $V$. \end{proof} \begin{theorem}\label{thm-rtm} Let $q>2$ be any fixed integer and $F(x_1,x_2,\ldots,x_n)$ be an $n$-variate polynomial represented by a tree-like circuit ${\cal C}$ of size $s(n)$. Then the randomized algorithm RandQMT can decide whether $F$ has a $q$-monomial of degree $k$ in its sum-product expansion in time $O^*(7.15^k s^2(n))$. \end{theorem} For applications, we often require that the size of a given circuit is a polynomial in $n$. in such cases, the upper bound in the theorem becomes $O^*(7.15^k)$. \begin{proof} From the introduction of the new $z$-variables to the circuit ${\cal C'}$, it is easy to see that every monomial in $F'$ has the format $\alpha \pi$, where $\pi$ is a product of $x$-variables and $\alpha$ is a product of $z$-variables. Since only $x$-variables are replaced by respective linear sums of new $y$-variables as specified in expression (\ref{rtm-exp1}) (or expression (\ref{trans-exp1})), monomials in $G$ have the format $\beta \phi$, where $\phi$ is a product of $y$-variables and $\beta$ is a product of $z$-variables. Suppose that $F$ has no $q$-monomials. By Lemma \ref{rtm-lem2}, $G$ has no monomials $\beta\phi$ such that $\phi$ is a multilinear monomial of $y$-variables and $\beta$ is a product of $z$-variables. In other words, for every monomial $\beta\phi$ in $G$, the $y$-variable product $\phi$ must not be multilinear. Moreover, by Lemma \ref{rtm-lem3}, replacing $y$-variables will make $\phi$ in every monomial $\beta\phi$ in $G$ to become zero. Hence, the replacements will make $G$ to become zero and so the algorithm RandQMT will return {\em "no"}. Assume that $F$ has a $q$-monomial of degree $k$. By Lemma \ref{rtm-lem2}, with a probability at least $0.28^k$, $G$ has a monomial $\beta \phi$ such that $\phi$ is a $y$-variable multilinear monomial of degree $k$ and $\beta$ is a $z$-variable multilinear monomial of degree $\le 2k -1$. It follows from Lemma \ref{lem3}, a list of uniform vectors from $Z^{k}_2$ will be linearly independent with a probability at least $0.28$. By Lemma \ref{rtm-lem4}, with a probability at least $0.28$, the multilinear monomial $\phi$ will not be annihilated by the group algebra replacements at Steps 2.2 and 2.3. Precisely, with a probability at least $0.28$, $\beta\phi$ will become \begin{eqnarray}\label{rtm-exp7} \lambda(\beta\phi) & = & \sum^{2^k}_{i=1} \beta \vec{v}_i, \end{eqnarray} where $\vec{v}_i$ are distinct vectors in $Z^k_2$. Let ${\cal S}$ be the set of all those multilinear monomials $\beta\phi$ that survive the group algebra replacements for $y$-variables in $G$. 
Then, \begin{eqnarray}\label{rtm-exp8} G' &= &G(z_1,\ldots,z_h,(\vec{v}_{11}+\vec{v}_0),\ldots,(\vec{v}_{1(q-1)}+\vec{v}_0),\ldots,\nonumber \\ &&~~~(\vec{v}_{n1}+\vec{v}_0),\ldots,(\vec{v}_{n(q-1)}+\vec{v}_0)) \nonumber \\ &=& \sum_{\beta\phi \in {\cal S}} \lambda(\beta\phi) \nonumber \\ &=& \sum_{\beta\phi\in {\cal S}} \left(\sum^{2^k}_{i=1}\beta \vec{v}_i\right) \nonumber \\ &= & \sum_{j=1}^{2^k} \left(\sum_{\beta\phi\in {\cal S}} \beta \right) \vec{v}_j. \end{eqnarray} Let \begin{eqnarray} f_j(z_1,\ldots,z_h) & = & \sum_{\beta\phi\in {\cal S}} \beta. \nonumber \end{eqnarray} By Lemmas \ref{rtm-lem2} and \ref{rtm-lem3}, the degree of $\beta$ is at most $2k-1$. Hence, the coefficient polynomial $f_j$ with respect to $\vec{v}_j$ in $G'$ after the algebra replacements has degree $\le 2k-1$. Also, by Lemma \ref{rtm-lem2}, $\beta$ is unique with respect to every $\phi$ for each monomial $\beta\phi$ in $G$. Thus, the possibility of a {\rm "zero-sum"} of coefficients from different surviving monomials is completely avoided during the construction of $f_j$. Therefore, conditioned on ${\cal S}$ being nonempty, $G'$ is not identically zero, i.e., there exists at least one $f_j$ that is not identically zero. At Step 2.4, we use the randomized Schwartz-Zippel algorithm \cite{motwani95} to test whether $f_j$ is identically zero. It is known that this testing succeeds with a probability at least $1-\frac{2k-1}{|{\cal F}|} = \frac{1}{2}$ in time polynomial in $s(n)$ and $\log_2 |{\cal F}| = 1+ \log_2 (2k-1)$. Since ${\cal S}$ is not empty with a probability at least $0.28$, the success probability of testing whether $G$ has a degree $k$ multilinear monomial is at least $0.28 \times \frac{1}{2} > \frac{1}{8}$, under the condition that $G$ has at least one degree $k$ multilinear monomial. Summarizing the above analysis, when $F$ has a $q$-monomial of degree $k$, then with a probability at least $0.28^k$, $G$ has a degree $k$ multilinear monomial $\phi$ of $y$-variables in the format $\beta\phi$ with coefficient $\beta$ that is a multilinear monomial of $z$-variables with degree $\le 2k-1$. Thus, the probability that $G$ does not have any degree $k$ multilinear monomials of $y$-variables in the aforementioned format $\beta\phi$ in its sum-product expansion during any of the $\left(\frac{1}{0.28}\right)^k$ loop iterations is at most $$ \left(1- (0.28)^k\right)^{(\frac{1}{0.28})^k} \le \frac{1}{e}. $$ This implies that the probability that $G$ has at least one degree $k$ multilinear monomial during at least one of the $\left(\frac{1}{0.28}\right)^k$ loop iterations is at least $$ 1- \frac{1}{e}. $$ When $G$ has at least one degree $k$ multilinear monomial $\phi$ of $y$-variables in the format $\beta\phi$ as described above, the group algebra replacement technique and the Schwartz-Zippel polynomial identity testing algorithm as analyzed above will detect this with a probability at least $\frac{1}{8}$. Therefore, when $F$ has one $q$-monomial in its sum-product expansion, with a probability at least $$ \frac{1}{8} \times \left(1-\frac{1}{e}\right), $$ algorithm RandQMT will detect this. Finally, we address how to calculate $G'$ and the time needed to do so. Naturally, every element in the group algebra ${\cal F}[Z^k_2]$ can be represented by a vector in $Z^{2^k}_2$. Adding two elements in ${\cal F}[Z^k_2]$ is equivalent to adding the two corresponding vectors in $Z_2^{2^k}$, and the latter can be done in $O(2^k)$ time via component-wise sum.
In addition, multiplying two elements of ${\cal F}[Z^k_2]$ is equivalent to multiplying the two corresponding coefficient vectors, and the latter can be done in $O(k2^k\log_2|{\cal F}|)=O(k^2 2^k)$ time with the help of a Fast Fourier Transform style algorithm similar to the one used by Williams \cite{williams09}. Calculating $G'$ consists of $n \cdot s^2(n)$ arithmetic operations, each either adding or multiplying two elements of ${\cal F}[Z^k_2]$, following the structure of the circuit ${\cal C}'$. Hence, the total time needed is $O(n \cdot s^2(n) k^2 2^k)=O^*(2^k s^2(n))$. At Step 2.4, we run the Schwartz-Zippel algorithm on $G'$ to simultaneously test whether there is at least one $f_j$ that is not identically zero. The total time for the entire algorithm is $O^*\left(2^k s^2(n) \cdot \left(\frac{1}{0.28}\right)^k\right)$. Since $$ 2\times \frac{1}{0.28} =2\times \frac{100}{28} <7.15, $$ the time complexity of algorithm RandQMT is bounded by $O^*(7.15^k s^2(n)).$ \end{proof}

\section{Concluding Remarks}

The group algebra approaches to testing multilinear monomials \cite{koutis08,williams09} and $q$-monomials for prime $q$ \cite{chen11b,chen12b} rely on the property that $Z_2$, and more generally $Z_q$ for prime $q > 2$, is a field. These approaches are not applicable to the general case of testing $q$-monomials, since $Z_q$ is no longer a field when $q$ is not prime. In this paper, we have developed a variable replacement technique and a new way to reconstruct a given circuit. Combined, these allow us to transform the $q$-monomial testing problem into the multilinear monomial testing problem in a randomized setting. We have also proved that the transformation succeeds with a probability high enough to warrant its application in the design of our new algorithm. It should be pointed out that the randomized $q$-monomial testing algorithm obtained in \cite{chen11} runs in time $O^*(q^k)$ for prime $q\ge 2$, when the size of the circuit is a polynomial in $n$. Algorithm $\mbox{RandQMT}$ runs in time $O^*(7.15^k)$, hence it significantly improves on the algorithm in \cite{chen11} for primes $q > 7$.

\section*{Acknowledgments} Shenshi is supported by Dr. Bin Fu's NSF CAREER Award (April 1, 2009 to March 31, 2014). Yaqing is supported by a UTPA Graduate Assistantship. Part of Quanhai's work was done while he was visiting the Department of Computer Science at the University of Texas-Pan American. \end{document}
\begin{document} \author{Jessica Fintzen} \title{Types for tame $p$-adic groups } \date{ } \maketitle
\begin{abstract} Let $k$ be a non-archimedean local field with residual characteristic $p$. Let $G$ be a connected reductive group over $k$ that splits over a tamely ramified field extension of $k$. Suppose $p$ does not divide the order of the Weyl group of $G$. Then we show that every smooth irreducible complex representation of $G(k)$ contains an $\mathfrak{s}$-type of the form constructed by Kim--Yu and that every irreducible supercuspidal representation arises from Yu's construction. This improves an earlier result of Kim, which held only in characteristic zero and with a very large and ineffective bound on $p$. By contrast, our bound on $p$ is explicit and tight, and our result holds in positive characteristic as well. Moreover, our approach is more explicit in extracting an input for Yu's construction from a given representation. \\[-1.2cm] \end{abstract}
{ \renewcommand{\thefootnote}{} \footnotetext{MSC2010: 22E50} \footnotetext{Keywords: representations of reductive groups over non-archimedean local fields, types, supercuspidal representations, $p$-adic groups} \footnotetext{The author was partially supported by a postdoctoral fellowship of the German Academic Exchange Service (DAAD), an AMS--Simons travel grant and NSF Grants DMS-1638352 and DMS-1802234.} } \tableofcontents

\section{Introduction}
The aim of the theory of types is to classify, up to some natural equivalence, the smooth irreducible complex representations of a $p$-adic group in terms of representations of compact open subgroups. For $\GL_n$ it is known that every irreducible representation contains an $\mathfrak{s}$-type. This theorem lies at the heart of many results in the representation theory of $\GL_n$ and plays a key role in the construction of an explicit local Langlands correspondence for $\GL_n$ as well as in the study of its fine structure. One of the main results of this paper is the existence of $\mathfrak{s}$-types for general $p$-adic groups and the related exhaustion of supercuspidal representations under minimal tameness assumptions. These tameness assumptions arise from the nature of the available constructions of supercuspidal representations for general $p$-adic groups.
To explain our results in more detail, let $k$ denote a non-archimedean local field with residual characteristic $p$ and let $G$ be a connected reductive group over $k$. Before introducing the notion of a type, let us first discuss the case of supercuspidal representations, the building blocks of all other representations. Since the constructions below of supercuspidal representations for general reductive groups $G$ assume that $G$ splits over a tamely ramified extension of $k$, we will impose this condition from now on. Under this assumption, Yu (\cite{Yu}) gave a construction of supercuspidal representations as representations induced from compact mod center, open subgroups of $G(k)$ generalizing an earlier construction of Adler (\cite{Adler}). Yu's construction is the most general construction of supercuspidal representations for general reductive groups known at present and it has been widely used to study representations of $p$-adic groups, e.g. to obtain results about distinction, to calculate character formulas, to suggest an explicit local Langlands correspondence and to investigate the theta correspondence.
However, all these results only apply to representations obtained from Yu's construction. In this article, we prove that \textit{all} supercuspidal representations of $G(k)$ are obtained from Yu's construction if $p$ does not divide the order of the Weyl group $W$ of $G$. This result was previously shown by Kim (\cite{Kim}) under the assumption that $k$ has characteristic zero and that $p$ is ``very large''. Note that Kim's hypotheses on $p$ depend on the field $k$ and are much stronger than our requirement that $p \nmid \abs{W}$, see \cite[\S~3.4]{Kim}. The few primes that divide the order of the Weyl group of $G$ are listed in Table \ref{table-weyl-group}, and we expect that this assumption is optimal in general when also considering types as below for the following reason. Yu's construction is limited to tori that split over a tamely ramified field extension of $k$. If $p$ does not divide the order of the Weyl group of $G$ and $G$ splits over a tamely ramified extension (our assumptions), then all tori split over a tame extension. However, if one of these assumptions is violated, then, in general, the group $G$ contains tori that do not split over a tame extension (for some non-split inner forms of split groups of type $A_n, n \geq 2, D_l, l \geq 4$ prime, or $E_6$ the condition on the prime number is slightly weaker, see \cite[Theorem~2.4 and Corollary~2.6]{Fi-tame-tori} for the details). We expect that we can use these tori to produce supercuspidal representations (of Levi subgroups) that were not constructed by Yu. Examples of such representations are provided by the construction of Reeder and Yu (\cite{ReederYu}), whose ingredients exist also when $p \mid \abs{W}$ (whenever they exist for some large prime $p$), see \cite{FR} and \cite{Fi}.
In order to study arbitrary smooth irreducible representations, we recall the theory of types introduced by Bushnell and Kutzko (\cite{BK-types}): By Bernstein (\cite{Bernstein}) the category $\mathcal{R}(G)$ of smooth complex representations of $G(k)$ decomposes into a product of subcategories $\mathcal{R}^{\mathfrak{s}}(G)$ indexed by the set of inertial equivalence classes $\mathfrak{I}$ of pairs $(L, \sigma)$ consisting of a Levi subgroup $L$ of (a parabolic subgroup of) $G$ together with a smooth irreducible supercuspidal representation $\sigma$ of $L(k)$:
$$\mathcal{R}(G)=\prod_{\mathfrak{s} \in \mathfrak{I}}\mathcal{R}^{\mathfrak{s}}(G). $$
Let $\mathfrak{s} \in \mathfrak{I}$. Following Bushnell--Kutzko (\cite{BK-types}), we call a pair $(K, \rho)$ consisting of a compact open subgroup $K$ of $G(k)$ and an irreducible smooth representation $\rho$ of $K$ an \textit{$\mathfrak{s}$-type} if for every irreducible smooth representation $\pi$ of $G(k)$ the following holds:
\begin{center} $\pi$ lies in $\mathcal{R}^{\mathfrak{s}}(G)$ if and only if $\pi|_K$ contains $\rho$. \end{center}
In this case the category $\mathcal{R}^\mathfrak{s}(G)$ is isomorphic to the category of (unital left) modules of the Hecke algebra of compactly supported $\rho$-spherical functions on $G(k)$. Thus, if we know that there exists an $\mathfrak{s}$-type for a given $\mathfrak{s} \in \mathfrak{I}$, then we can study the corresponding representations $\mathcal{R}^{\mathfrak{s}}(G)$ using the corresponding Hecke algebra.
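For orientation we recall the prototypical example, which is classical and not part of the results of this paper: for $G=\GL_n$ and $\mathfrak{s}=[T,\mathbf{1}]$ the inertial equivalence class of the diagonal torus $T$ together with its trivial character, the pair $(I,\mathbf{1}_I)$, where $I$ denotes an Iwahori subgroup of $\GL_n(k)$, is an $\mathfrak{s}$-type by the work of Borel and Casselman, and the corresponding Hecke algebra is the Iwahori--Hecke algebra. The general constructions discussed below may be viewed as far-reaching analogues of this picture for other inertial classes.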
We say that a smooth irreducible representation $(\pi, V_\pi)$ of $G(k)$ \textit{contains a type} if there exists an $\mathfrak{s}$-type $(K, \rho)$ for the class $\mathfrak{s} \in \mathfrak{I}$ that satisfies $(\pi, V_\pi) \in \mathcal{R}^{\mathfrak{s}}(G)$, i.e. $\pi|_{K}$ contains $\rho$. Using the theory of $G$-covers introduced by Bushnell and Kutzko in \cite{BK-types}, Kim and Yu (\cite{KimYu}) showed that Yu's construction of supercuspidal representations can also be used to obtain types by omitting some of the conditions that Yu imposed on his input data. In this paper we prove that every smooth irreducible representation of $G(k)$ contains such a type if $k$ is a non-archimedean local field of \textit{arbitrary} characteristic whose residual characteristic $p$ does not divide the order of the Weyl group of $G$. This excludes only a few residual characteristics, and we expect the restriction to be optimal in general as explained above. If $k$ has characteristic zero and $p$ is ``very large'', then Kim and Yu deduced this result already from Kim's work (\cite{Kim}).
Our approach is very different from Kim's approach. While Kim proves statements about a measure one subset of all smooth irreducible representations of $G(k)$ by matching summands of the Plancherel formula for the group and the Lie algebra, we use a more explicit approach involving the action of one parameter subgroups on the Bruhat--Tits building. This means that even though we have formulated some statements and proofs as existence results, the interested reader can use our approach to extract the input for the construction of a type from a given representation.
To indicate the rough idea of our approach, we assume from now on that $p$ does not divide the order of the Weyl group of $G$, and we denote by $(\pi, V_\pi)$ an irreducible smooth representation of $G(k)$. Recall that Moy and Prasad (\cites{MP1, MP2}) defined for every point $x$ in the Bruhat--Tits building $\mathscr{B}(G,k)$ of $G$ and every non-negative real number $r$ a compact open subgroup $G_{x,r} \subset G(k)$ and a lattice $\mathfrak{g}_{x,r} \subset \mathfrak{g}$ in the Lie algebra $\mathfrak{g}=\Lie(G)(k)$ of $G$ such that $G_{x,r} \trianglelefteq G_{x,s}$ and $\mathfrak{g}_{x,r} \subseteq \mathfrak{g}_{x,s}$ for $r>s$. Moy and Prasad defined the \textit{depth} of $(\pi, V_\pi)$ to be the smallest non-negative real number $r_1$ such that there exists a point $x \in \mathscr{B}(G,k)$ so that the space of fixed vectors $V_\pi^{G_{x,r_1+}}$ under the action of the subgroup $G_{x,r_1+}:=\bigcup_{s>r_1}G_{x,s}$ is non-zero. In \cite{MP2} they showed that every irreducible depth-zero representation contains a type. A different proof using Hecke algebras was given by Morris (\cite{Morris-depth-zero}, announcement in \cite{Morris}). More generally, Moy and Prasad showed that $(\pi, V_\pi)$ contains an unrefined minimal K-type, and all unrefined minimal K-types are associates of each other. For $r_1=0$, an \textit{unrefined minimal K-type} is a pair $(G_{x,0}, \chi)$, where $\chi$ is a cuspidal representation of the finite (reductive) group $G_{x,0}/G_{x,0+}$. If $r_1 >0$, then an unrefined minimal K-type is a pair $(G_{x,r_1}, \chi)$, where $\chi$ is a nondegenerate character of the abelian quotient $G_{x,r_1}/G_{x,r_1+}$. While the work of Moy and Prasad revolutionized the study of representations of $p$-adic groups, the unrefined minimal K-type itself determines the representation only in some special cases.
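To make the Moy--Prasad filtration concrete, we recall the standard example of $G=\GL_n$ (well known and included here only for illustration): if $x$ is the hyperspecial point of the building whose associated parahoric subgroup is $\GL_n(\mathcal{O})$, with $\mathcal{O}$ the ring of integers of $k$ and $\varpi$ a uniformizer, then
$$G_{x,0}=\GL_n(\mathcal{O}), \qquad G_{x,r}=1+\varpi^{\lceil r\rceil}M_n(\mathcal{O}) \ \ (r>0), \qquad \mathfrak{g}_{x,r}=\varpi^{\lceil r\rceil}M_n(\mathcal{O}) \ \ (r\in\mathbb{R}),$$
so the filtration jumps exactly at the integers, and a representation has depth zero at $x$ precisely when it has non-zero vectors fixed by $1+\varpi M_n(\mathcal{O})$.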
Our first main result in this paper (Theorem \ref{Thm-existence-of-datum}) shows that every smooth irreducible representation of $G(k)$ contains a much more refined invariant, which we call a \textit{datum}. A datum is a tuple $$(x,(X_i)_{1 \leq i \leq n}, (\rho_0, V_{\rho_0}))$$ for some integer $n$, where $x \in \mathscr{B}(G,k)$, the elements $X_i \in \mathfrak{g}^*$ for $1 \leq i \leq n$ satisfy certain conditions, and $(\rho_0, V_{\rho_0})$ is an irreducible representation of a finite group (which is the reductive quotient of the special fiber of the connected parahoric group scheme attached to the derived group of a twisted Levi subgroup of $G$), see Definition \ref{Def-extendeddatum} and Definition \ref{Def-datum} for the details. Our datum can be viewed as a refinement of the unrefined minimal K-type of Moy and Prasad as follows. To a datum $(x,(X_i)_{1 \leq i \leq n}, (\rho_0, V_{\rho_0}))$ we associate a sequence of subgroups $G \supset H_1 \supset H_2 \supset \hdots \supset H_n \supset H_{n+1}$, which are (apart from allowing $H_1=G_1$) the derived groups of twisted Levi subgroups $G = G_1 \supset G_2 \supset \hdots \supset G_n \supset G_{n+1}$, and real numbers $r_1>r_2> \hdots >r_n >0$ such that for $1 \leq i \leq n$ the element $X_i$ yields a character $\chi_i$ of $$(H_i)_{x_i,r_i}/(H_i)_{x_i,r_i+} \simeq \Lie(H_i)(k)_{x_i,r_i}/\Lie(H_i)(k)_{x_i,r_i+} \subset \mathfrak{g}_{x,r_i}/\mathfrak{g}_{x,r_i+} $$ for a suitable point $x_i \in \mathscr{B}(H_i,k)$. If, for simplicity, $G$ is semisimple, then $(G_{x_1,r_1}, \chi_1)$ is an unrefined minimal K-type of depth $r_1$ contained in $(\pi,V_\pi)$, and the pair $((H_i)_{x_i,r_i}, \chi_i)$ is an unrefined minimal K-type of depth $r_i$ for $H_i$.
The existence of a maximal datum for any irreducible representation of $G(k)$ is a key ingredient for producing the input that is needed for the construction of types as in Kim--Yu (\cite{KimYu}). In order to exhibit a datum in a given representation, we require the elements $(X_i)_{1 \leq i \leq n}$ in the datum $(x,(X_i)_{1 \leq i \leq n}, (\rho_0, V_{\rho_0}))$ to satisfy a slightly stronger condition than the non-degeneracy necessary for an unrefined minimal K-type. We call our conditions \textit{generic}, see Definition \ref{Def-generic}. This condition ensures that the deduced input for Yu's construction is generic in the sense of Yu (\cite[\S~15]{Yu}) and at the same time it is crucial for the proof of the existence of the datum. The existence of a datum in $(\pi, V_\pi)$ is proved recursively, i.e. by first showing the existence of a suitable element $X_1$, then finding a compatible element $X_2$, and then $X_3$, etc., until we obtain a tuple $(X_i)_{1 \leq i \leq n}$ and finally exhibit the representation $(\rho_0, V_{\rho_0})$. The existence of $X_1$ can be considered as a refinement of the existence of an unrefined minimal K-type by Moy and Prasad and relies on the existence result of generic elements proved in Proposition \ref{Lemma-almoststable-generic-representative}. When constructing the remaining part of the datum we need to ensure its compatibility with $X_1$ and rely on several preparatory results proved in Section \ref{Section-killing-lemmas}. At this step the imposed conditions on the elements $X_i$ become essential.
The final crucial part of this paper is concerned with deducing from the existence of a datum, in Theorem \ref{Thm-exhaustion-of-types}, that every smooth irreducible representation of $G(k)$ contains one of the types constructed by Kim--Yu and, similarly, that every irreducible supercuspidal representation of $G(k)$ arises from Yu's construction, see Theorem \ref{Thm-exhaustion-Yu}. This requires using the elements $X_i$ from the datum to provide appropriate characters of the twisted Levi subgroups $G_i$ ($2 \leq i \leq n+1$) and using $(\rho_0, V_{\rho_0})$ to produce a depth-zero supercuspidal representation $\pi_0$ of $G_{n+1}(k)$. We warn the reader that the depth-zero representation of $G_{n+1}(k)$ is in general not simply obtained by extending and inducing $(\rho_0, V_{\rho_0})$. The relationship between $\rho_0$ and $\pi_0$ requires the study of Weil representations and can be found in Section \ref{Section-existence-of-type}, in particular in Lemma \ref{Lemma-repzeroK}. The main difficulty lies in showing that a potential candidate for the depth-zero representation $\pi_0$ of $G_{n+1}(k)$ is supercuspidal, which is the content of Lemma \ref{Lemma-cuspidal}. We conclude the paper by mentioning in Corollary \ref{Cor-supercuspidal-criterion} how to read off from a maximal datum for $(\pi,V_\pi)$ whether the representation $(\pi, V_\pi)$ is supercuspidal or not.
We would like to point out that the exhaustion of supercuspidal representations and the existence of types for arbitrary smooth irreducible representations have already been extensively studied for special classes of reductive groups for which other case-specific tools are available, e.g. a lattice theoretic description of the Bruhat--Tits building and a better understanding of the involved Hecke algebras. In 1979, Carayol (\cite{Carayol}) gave a construction of all supercuspidal representations of $\GL_n(k)$ for $n$ a prime number. In 1986, Moy (\cite{Moy-exhaustion}) proved that Howe's construction (\cite{Howe}) exhausts all supercuspidal representations of $\GL_n(k)$ if $n$ is coprime to $p$. Bushnell and Kutzko extended the construction to $\GL_n(k)$ for arbitrary $n$ and proved that every irreducible representation of $\GL_n(k)$ contains a type (\cites{BK, BK-types, BK-types-exhaustion}). As mentioned above, these results play a crucial role in the representation theory of $\GL_n(k)$. Based on the work for $\GL_n(k)$, Bushnell and Kutzko (\cite{BK-SLn}) together with Goldberg and Roche (\cite{Goldberg-Roche}) provide types for all Bernstein components for $\SL_n(k)$. For classical groups, Stevens (\cite{Stevens}) has recently provided a construction of supercuspidal representations for $p \neq 2$ and proved that all supercuspidal representations arise in this way. A few years later, Miyauchi and Stevens (\cite{Miyauchi-Stevens}) provided types for all Bernstein components in that setting. The case of inner forms of $\GL_n(k)$ was completed by Sécherre and Stevens (\cites{Secherre-Stevens, Secherre-Stevens-types}) around the same time, subsequent to earlier results of others for special cases (e.g. Zink (\cite{Zink}) treated division algebras over non-archimedean local fields of characteristic zero and Broussous (\cite{Broussous}) treated division algebras without restriction on the characteristic). The existence of types for inner forms of $\GL_n(k)$ plays a key role in the explicit description of the local Jacquet--Langlands correspondence.
\textbf{Structure of the paper.} In Section \ref{Section-prime}, we collect some consequences of the assumption that the residual field characteristic $p$ does not divide the order of the Weyl group of $G$. Section \ref{Section-almost-stable} concerns the definition and properties of generic elements and includes an existence result for generic elements. In Section \ref{Section-datum}, we introduce the notion of a datum and define what it means for a representation to contain a datum and for a datum to be a maximal datum for a representation. The proof that every smooth irreducible representation of $G(k)$ contains a datum is the subject of Section \ref{Section-existence-of-datum}. Several results that are repeatedly used in this proof are shown in the preceding section, Section \ref{Section-killing-lemmas}. In Section \ref{Section-existence-of-type}, we use the result about the existence of a datum to derive that every smooth irreducible representation of $G(k)$ contains one of the types constructed by Kim and Yu, and, in Section \ref{Section-exhaustion-Yu}, we prove analogously that every smooth irreducible supercuspidal representation of $G(k)$ arises from Yu's construction.

\textbf{Conventions and notation.} Throughout the paper, we require reductive groups to be connected and all representations are smooth complex representations unless mentioned otherwise. We do not distinguish between a representation and its isomorphism class. As explained in the introduction, by \textit{type} we mean an $\mathfrak{s}$-type for some inertial equivalence class $\mathfrak{s}$.
We will use the following notation throughout the paper: $k$ is a non-archimedean local field (of arbitrary characteristic) and $G$ is a reductive group over $k$ that will be assumed to split over a tamely ramified field extension of $k$. We write $\mathfrak f$ for the residue field of $k$ and denote its characteristic by $p$. We fix an algebraic closure $\overline k$ of $k$ and all field extensions of $k$ are meant to be algebraic and assumed to be contained in $\overline k$. For a field extension $F$ of $k$, we denote by $F^{{ur}}$ its maximal unramified field extension (in $\overline k$). We write $\mathcal{O}$ for the ring of integers of $k$, $\mathcal{P}$ for its maximal ideal, $\pi$ for a uniformizer, and $\val:k \rightarrow \mathbb{Z} \cup \{\infty\}$ for a valuation on $k$ with image $\mathbb{Z} \cup \{\infty\}$. If $F$ is an (algebraic) field extension of $k$, then we also use $\val$ to denote the valuation on $F$ that extends the valuation on $k$. We write $\mathcal{O}_F$ for the ring of integers in $F$ and $\mathcal{P}_F$ for the maximal ideal of $\mathcal{O}_F$. Throughout the paper we fix an additive character $\varphi: k \rightarrow \mathbb{C}^*$ of $k$ of conductor $\mathcal{P}$.
If $E$ is a field extension of a field $F$ (e.g. of $k$ or $\mathfrak f$) and $H$ is a scheme defined over the field $F$, then we denote by $H_E$ or $H \times_F E$ the base change $H \times_{\Spec F} \Spec E$. If $A$ is an $F$-module, then we write $A_E$ for $A \otimes_F E$ and $A^*$ for the $F$-linear dual of $A$. For $X \in A^*$ and $Y \in A_E$, we write $X(Y)$ for $(X \otimes 1)(Y)$, $(X \otimes 1) \in A^* \otimes_F E \simeq (A_E)^*$. If a group acts on $A$, then we let it also act on $A^*$ via the contragredient action.
In general, we use upper case roman letters, e.g.
$G, H, G_i, T, \hdots$, to denote linear algebraic groups defined over a field $F$, and we denote the $F$-points of their Lie algebras by the corresponding lower case fraktur letters, e.g. $\mathfrak{g}, \mathfrak{h}, \mathfrak{g}_i, \mathfrak{t}$. The action of the group on its Lie algebra is the adjoint action, denoted by $\Ad$, unless specified otherwise. If $H$ is a reductive group over $F$, then we denote by $H^{\text{der}}$ its derived group. We write $\mathbb{G}_a$ and $\mathbb{G}_m$ for the additive and multiplicative group schemes over $\mathbb{Z}$ or over the ring or field that becomes apparent from the context.
If $S$ is a split torus contained in $H$ (defined over $F$), then we write $X^*(S)=\Hom_F(S,\mathbb{G}_m)$ for the characters of $S$ defined over $F$, $X_*(S)=\Hom_F(\mathbb{G}_m,S)$ for the cocharacters of $S$ (defined over $F$), $\Phi(H,S) \subset X^*(S)$ for the roots of $H$ with respect to $S$, and if $S$ is a maximal torus, then $\check \Phi(H,S) \subset X_*(S)$ denotes the coroots. We might abbreviate $\Phi(H_{\overline k},T)$ by $\Phi(H)$ for a maximal torus $T$ of $H_{\overline k}$ if the choice of torus $T$ does not matter. We use the notation $\left\langle \cdot, \cdot\right\rangle: X_*(S) \times X^*(S) \rightarrow \mathbb{Z}$ for the standard pairing, and if $S$ is a maximal torus, then we denote by $\check\alpha \in \check \Phi(H,S)$ the dual root of $\alpha \in \Phi(H,S)$. For a subset $\Phi$ of $X^*(S) \otimes_\mathbb{Z} \mathbb{R}$ (or $X_*(S)\otimes_\mathbb{Z} \mathbb{R}$) and $R$ a subring of $\mathbb{R}$, we denote by $R\Phi$ the smallest $R$-submodule of $X^*(S) \otimes_\mathbb{Z} \mathbb{R}$ (or $X_*(S)\otimes_\mathbb{Z} \mathbb{R}$, respectively) that contains $\Phi$. For $\chi \in X^*(S)$ and $\lambda \in X_*(S)$, we denote by $d\chi \in \Hom_F(\Lie(S),\Lie(\mathbb{G}_m))$ and $d\lambda \in \Hom_F(\Lie(\mathbb{G}_m), \Lie(S))$ the induced morphisms of Lie algebras.
If $(\pi, V)$ is a representation of a group $Q$, then we denote by $V^Q$ the elements of $V$ that are fixed by $Q$. If $Q'$ is a group containing $Q$ as a subgroup and $q' \in Q'$, then we define the representation $({^{q'}} \pi, V)$ of $q'Q{q'}^{-1}$ by ${^{q'}} \pi(q)=\pi({q'}^{-1}qq')$ for all $q \in q'Q{q'}^{-1}$. Finally, we let $\widetilde{\mathbb{R}}=\mathbb{R} \cup \{ r+ \, | \, r \in \mathbb{R}\}$ with its usual order, i.e. for $r$ and $s \in \mathbb{R}$ with $r<s$, we have $r<r+<s<s+$.
\textbf{Acknowledgment.} The author thanks Stephen DeBacker, Wee Teck Gan, Tasho Kaletha, Ju-Lee Kim and Loren Spice for discussions related to this paper, as well as Jeffrey Adler, Anne-Marie Aubert, Stephen DeBacker, Tasho Kaletha, Ju-Lee Kim, Gopal Prasad, Vincent Sécherre and Maarten Solleveld for feedback on some parts of an earlier version of this paper. The author is also very grateful to the referee for a careful reading of the paper and helpful comments and suggestions. The author thanks the University of Michigan, the Max-Planck Institut für Mathematik and the Institute for Advanced Study for their hospitality and wonderful research environment.

\section{Assumption on the residue field characteristic} \label{Section-prime}
Recall that $k$ denotes a non-archimedean local field with residual characteristic $p$ and $G$ is a connected reductive group over $k$. We assume that $G$ is not a torus. We already know all the smooth, irreducible, supercuspidal representations of a torus.
They are simply the smooth characters of the torus. Moreover, Yu (\cite{Yu}) works in his construction of supercuspidal representations only with tori of $G$ that split over a tame extension. Hence we make the following assumption throughout the paper.
\begin{Assumption} \label{Assumption-p-W} We assume that $G$ splits over a tamely ramified extension of $k$ and $p \nmid \abs{W}$, where $W$ denotes the Weyl group of $G(\overline k)$. \end{Assumption}
By \cite{Fi-tame-tori}, Assumption \ref{Assumption-p-W} implies that all tori of $G$ are tame. For absolutely simple groups other than some non-split inner forms of split groups of type $A_n, n \geq 2$, $D_l$, $l \geq 4$ prime, or $E_6$, this assumption is also necessary (and in the excluded cases only minor modifications of the assumption on $p$ are necessary), see \cite[Theorem~2.4 and Corollary~2.6]{Fi-tame-tori} for details. We collect a few consequences of our assumption for later use.
\begin{Lemma} \label{Lemma-restriction-on-p} The assumption that $p \nmid \abs{W}$ implies the following:
\begin{enumerate}[label=(\alph*),ref=\alph*]
\item \label{item-p-Levi} The prime $p$ does not divide the order of the Weyl group of any Levi subgroup of (a parabolic subgroup of) $G_{\overline k}$.
\item \label{item-bond} The prime $p$ is larger than the order of any bond of the Dynkin diagram $\Dyn(G)$ of $G_{\overline k}$, i.e. larger than the square of the ratio of two root lengths of roots in $\Phi(G)$.
\item \label{item-bad-prime} The prime $p$ is not a bad prime (in the sense of \cite[4.1]{Springer-Steinberg}) for $\check\Phi:=\check\Phi(G)$, i.e. $\mathbb{Z}\check\Phi/\mathbb{Z}\check\Phi_0$ has no $p$-torsion for all closed subsystems $\check\Phi_0$ in $\check \Phi$.
\item \label{item-torsion-prime} The prime $p$ is not a torsion prime (in the sense of \cite[1.3~Definition]{Steinberg-torsion}) for $\Phi:=\Phi(G)$ (and hence also not for $\check\Phi(G)$), i.e. $\mathbb{Z}\check\Phi/\mathbb{Z}\check\Phi_0$ has no $p$-torsion for all closed subsystems $\Phi_0$ in $\Phi$.
\item \label{item-index-of-connection} The prime $p$ does not divide the index of connection (i.e. the index of the root lattice in the weight lattice) of any root(sub)system generated by a subset of a basis of $\Phi(G)$.
\end{enumerate}
\end{Lemma}
\textbf{Proof.\\} Part \eqref{item-p-Levi} is obvious; Parts \eqref{item-bond}, \eqref{item-bad-prime} and \eqref{item-torsion-prime} can be read off from Table \ref{table-weyl-group}.
Part \eqref{item-index-of-connection} follows from the fact that the index of connection of $\Phi(G)$ divides $\abs{W}$ (\cite[VI.2~Proposition~7]{Bourbaki-4-6}).\qed

\begin{table}[h]\footnotesize
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline
type & $A_n \, (n \geq 1)$ & $B_n \, (n \geq 3)$ & $C_n \, (n \geq 2)$ & $D_n \, (n \geq 3)$ & $E_6$ & $E_7$ & $E_8$ & $F_4 $ & $G_2$ \\ \hline
$\abs{W}$ & $(n+1)!$ & $2^n \cdot n!$ &$2^n \cdot n!$ & $2^{n-1} \cdot n!$ & $2^7\cdot3^4\cdot5$ & $2^{10}\cdot3^4\cdot5\cdot7$ & $2^{14}\cdot3^5\cdot5^2 \cdot 7$ & $2^7 \cdot 3^2$ & $2^2 \cdot 3$ \\ \hline
bad & - & 2 & 2 & 2 & 2, 3 & 2, 3 & 2, 3, 5 & 2, 3 & 2, 3 \\ \hline
torsion & - & 2 & - & 2 & 2, 3 & 2, 3 & 2, 3, 5 & 2, 3 & 2 \\ \hline
\end{tabular}
\caption{Order of Weyl groups (\cite[VI.4.5-VI.4.13]{Bourbaki-4-6}); bad primes (\cite[4.3]{Springer-Steinberg}) and torsion primes (\cite[1.13~Corollary]{Steinberg-torsion}) for irreducible root systems} \label{table-weyl-group}
\end{table}

\section{{Almost strongly stable} and {generic} elements} \label{Section-almost-stable}
Let $E$ be a field extension of $k$. We denote by $\mathscr{B}(G,E)$ the (enlarged) Bruhat--Tits building of $G_E$ over $E$, and we sometimes write $\mathscr{B}$ for $\mathscr{B}(G,k)$. For $x \in \mathscr{B}(G,E)$ and $r \in \mathbb{R}_{\geq 0}$, we write $G(E)_{x,r}$ for the Moy--Prasad filtration subgroup of $G(E)$ of depth $r$, which we abbreviate to $G_{x,r}$ for $G(k)_{x,r}$, and we set $G(E)_r=\bigcup_{x \in \mathscr{B}(G,E)} G(E)_{x,r}$. For $r \in \mathbb{R}$, we denote by $(\mathfrak{g}_E)_{x,r}$ and $(\mathfrak{g}_E)^*_{x,r}$ the Moy--Prasad filtration of $\mathfrak{g}_E=\Lie(G_E)(E)$ and its dual $\mathfrak{g}_E^*$, respectively. We set $(\mathfrak{g}_E)_r=\bigcup_{x \in \mathscr{B}(G,E)}(\mathfrak{g}_E)_{x,r}$ and $(\mathfrak{g}_E)^*_r=\bigcup_{x \in \mathscr{B}(G,E)}(\mathfrak{g}_E)^*_{x,r}$. Recall that if $X \in (\mathfrak{g}_E)^*_{x,r}$, then $X((\mathfrak{g}_E)_{x,(-r)+}) \subset \mathcal{P}_E$ and $X((\mathfrak{g}_E)_{x,-r}) \subset \mathcal{O}_E$. For convenience, we define our Moy--Prasad filtration subgroups and subalgebras with respect to the valuation $\val$ of $E$ that extends the normalized valuation $\val$ of $k$, i.e. in such a way that by \cite[Proposition~1.4.1]{Adler} (which applies because $G$ splits over a tamely ramified extension) we have
\begin{equation} \label{eqn-MP-filtration} (\mathfrak{g}_E)_{x,r} \cap \mathfrak{g} = \mathfrak{g}_{x,r} \end{equation}
for all $r \in \mathbb{R}$.
We denote by $\RP_x$ the reductive quotient of the special fiber of the connected parahoric group scheme attached to $G$ at $x$, i.e. $\RP_x$ is a reductive group defined over $\mathfrak f$ satisfying $\RP_x(\mathfrak f_F)=G(F)_{x,0}/G(F)_{x,0+}$ for every unramified extension $F$ of $k$ with residue field $\mathfrak f_F$. We also refer to $\RP_x$ as ``the reductive quotient of $G$ at $x$''. For any $r \in \mathbb{R}$, the adjoint action of $G(k^{ur})_{x,0}$ on $(\mathfrak{g}_{k^{ur}})_{x,r}$ induces a linear action of the algebraic group $\RP_x$ on $V_{x,r}:=\mathfrak{g}_{x,r}/\mathfrak{g}_{x,r+}$ and therefore also on its dual $V_{x,r}^*$.
For $X \in \mathfrak{g}_E^*-\{0\}$ and $x \in \mathscr{B}(G,E)$, we denote by $d_E(x,X) \in \mathbb{R}$ the largest real number $d$ such that $X \in (\mathfrak{g}_E^*)_{x,d}$, and set $d_E(x,0)=\infty$.
We call $d_E(x,X)$ the \textit{depth of $X$ at $x$}. We define the \textit{depth of $X$} over $E$ to be $d_E(X):=\sup_{x \in \mathscr{B}(G,E)}d_E(x,X) \in \mathbb{R} \cup \{\infty\}$. If $E=k$, then we often write $d(x,X)$ for $d_k(x,X)$ and $d(X)$ for $d_k(X)$. Note that if $X \in \mathfrak{g}^*$, then $d_k(x,X)=d_E(x,X)$ and $d_k(X)=d_E(X)$ by our choice of normalization.
Recall that if $V$ is a finite dimensional linear algebraic representation of a reductive group $H$ defined over some field $F$, then $X \in V(F)$ is called \textit{semistable} under the action of $H$ if the Zariski-closure of the orbit $H(\overline F).X\subset V(\overline F)$ does not contain zero, and is called \textit{unstable} otherwise. We introduce two slightly stronger notions for our setting.
\begin{Def} \label{Def-almost-stable} Let $X \in \mathfrak{g}^*$. We denote by $\overline X$ the map $V_{x,-d(x,X)}:=\mathfrak{g}_{x,-d(x,X)}/\mathfrak{g}_{x,(-d(x,X))+} \rightarrow \mathfrak f$ induced from $X:\mathfrak{g}_{x,-d(x,X)} \rightarrow \mathcal{O}$.
\begin{itemize}
\item We say that $X$ is \textit{almost stable} if the $G$-orbit of $X$ is closed.
\item We say that $X$ is \textit{almost strongly stable at $x$} if $X$ is almost stable and $\overline X \in (V_{x,-d(x,X)})^*$ is semistable under the action of $\RP_x$.
\end{itemize}
\end{Def}
\begin{Lemma} \label{Lemma-almost-stable-depth} Let $X \in \mathfrak{g}^*-\{0\}$ be almost strongly stable at $x$. Then $d(x,X)=d(X)$. \end{Lemma}
\textbf{Proof.\\} Suppose $d(x,X)<d(X)$, and write $r=d(x,X)$. Then by \cite[Corollary~3.2.6]{Adler-DeBacker} (together with their remark at the beginning of Section~3) the coset $X+\mathfrak{g}^*_{x,r+}$ is degenerate, i.e. contains an unstable element. Hence $\overline X$ is unstable by \cite[4.3.~Proposition]{MP1} (while Moy and Prasad assume simply connectedness throughout their paper \cite{MP1}, it is not necessary for this claim). This contradicts that $X$ is almost strongly stable and finishes the proof. \qed
\begin{Def} Let $H$ be a reductive group over some field $F$. A smooth, closed subgroup $H'$ of $H$ is called a \textit{twisted Levi subgroup} if there exists a finite field extension $E$ over $F$ such that $H'\times_{F} E$ is a Levi subgroup of a parabolic subgroup of $H \times_{F} E$. \end{Def}
\begin{Lemma} \label{Lemma-stabilizer-Levi} Let $X \in \mathfrak{g}^*$ be almost stable (under the contragredient of the adjoint action of $G$). Then the centralizer $\Cent_{G}(X)$ of $X$ in $G$ is a twisted Levi subgroup of $G$. \end{Lemma}
\textbf{Proof.\\} It suffices to show that $\Cent_{G\times_k{\overline k}}(X)$ is a Levi subgroup of $G_{\overline k}$, because $\Cent_{G}(X)\times_{k}\overline k=\Cent_{G\times_k{\overline k}}(X)$. Since $p$ does not divide the order of the Weyl group of $G$, we can $G_{\overline k}$-equivariantly identify $\mathfrak{g}^*_{\overline k}$ with $\mathfrak{g}_{\overline k}$ (see \cite[Proposition~4.1]{Adler-Roche}, Lemma \ref{Lemma-restriction-on-p}\eqref{item-index-of-connection} and Lemma \ref{Lemma-restriction-on-p}\eqref{item-bond}). Using this identification to view $X$ in $\mathfrak{g}$, by \cite[14.25~Proposition]{Borel}, every $X$ is contained in the Lie algebra of a Borel subgroup $B=TU$ for $T$ a maximal torus and $U$ the unipotent radical of $B$ (defined over $\overline k$).
Hence we can write $X=X_s+X_n$, where $X_s \in \Lie(T)(\overline k)$ and $X_n \in \Lie(U)(\overline k)$, and there exists a one parameter subgroup $\lambda: \mathbb{G}_m \rightarrow T \subset \Cent_{G_{\overline k}}(X_s)$ such that $\lim_{t \rightarrow 0} \lambda(t).X_n=0$, and therefore $\lim_{t \rightarrow 0} \lambda(t).X=X_s$. Since $X$ is almost stable, this implies that $X_s$ is contained in the $G(\overline k)$-orbit of $X$, and therefore $X$ is semisimple, hence $X=X_s$. In other words, $X$ is in the zero eigenspace in $\mathfrak{g}^*_{\overline k}$ of $T$. Since $p$ is not a torsion prime for $\check \Phi(G)$ (Lemma \ref{Lemma-restriction-on-p}\eqref{item-torsion-prime}) and $p$ does not divide the index of connection of $\Phi(G)$ (Lemma \ref{Lemma-restriction-on-p}\eqref{item-index-of-connection}), we obtain by \cite[Proposition~7.1.~and~7.2]{Yu} (which is based on \cite{Steinberg-torsion}) that the centralizer $\Cent_{G_{\overline k}}(X)$ of $X$ in $G_{\overline k}$ is a connected reductive group whose root datum is given by $(X^*(T), \Phi_X, X_*(T), \check\Phi_X)$ with $\Phi_X=\{\alpha \in \Phi(G_{\overline k},T) \, | \, X(d\check\alpha(1))=0\}$ and $\check \Phi_X=\{\check \alpha \, | \, \alpha \in \Phi_X\}$. Note that $\check\Phi_X$ is a closed subsystem of $\check\Phi$ (i.e. $\mathbb{Z} \check\Phi_X \cap \check\Phi = \check\Phi_X$). Since $\mathbb{Z}\check\Phi/\mathbb{Z}\check\Phi_X$ is $p$-torsion free by Lemma \ref{Lemma-restriction-on-p}\eqref{item-bad-prime}, we have $\check\Phi_X = \mathbb{Q}\check\Phi_X \cap \check\Phi$ and hence $\Phi_X = \mathbb{Q}\Phi_X \cap \Phi$. By \cite[VI.1,~Proposition~24]{Bourbaki-4-6} there exists a basis $\Delta$ for $\Phi$ containing a basis $\Delta_X$ for $\Phi_X$. Thus $\Cent_{G_{\overline k}}(X)$ is a Levi subgroup of $G_{\overline k}$. \qed
\begin{Def} \label{Def-generic} We say that an element $X \in \mathfrak{g}^*$ is \textit{generic of depth $r$ at $x$} $\in \mathscr{B}(G,k)$ if $X$ is almost stable and if there exists a tamely ramified extension $E$ over $k$ and a split maximal torus $T \subset \Cent_G(X)\times_k E$ such that
\begin{itemize}
\item $x \in \mathscr{A}(T,E) \cap \mathscr{B}(G,k)$, where $\mathscr{A}(T,E)$ denotes the apartment of $T$ in $\mathscr{B}(G,E)$,
\item $X \in \mathfrak{g}^*_{x,r}$ (i.e. $X(\mathfrak{g}_{x,(-r)+})\subset \mathcal{P}$),
\item for every $\alpha \in \Phi(G,T)$ we have $X(H_\alpha)=0$ or $\val(X(H_\alpha))=r$, where $H_\alpha=d\check\alpha(1)$, and
\item if $X(H_\alpha)=0$ for all $\alpha \in \Phi(G,T)$, then $d(x,X)=r$.
\end{itemize}
\end{Def}
Note that $H_\alpha=d\check\alpha(1) \neq 0$, because $p$ does not divide the index of connection of $\Phi(G)$ by Lemma \ref{Lemma-restriction-on-p}\eqref{item-index-of-connection}. We will see in Corollary \ref{Cor-generic-depth} below that if $X$ is generic of depth $r$ at $x$, then $d(x,X)=r$.
\begin{Lemma} \label{Lemma-structure-of-X} Let $X \in \mathfrak{g}^*$ be generic of depth $r$ at $x$. Then for every (split) maximal torus $T \subset \Cent_G(X) \times_k \overline k$ we have
\begin{itemize}
\item $X(H_\alpha)=0$ for all $\alpha \in \Phi(\Cent_G(X),T)$, and
\item $\val(X(H_\alpha))=r$ for all $\alpha \in \Phi(G,T)-\Phi(\Cent_G(X),T)$.
\end{itemize}
Moreover, for all $\alpha \in \Phi(G,T)$ we have $X((\mathfrak{g}_{\overline k})_\alpha)=0$, where $(\mathfrak{g}_{\overline k})_\alpha$ denotes the $\alpha$-root subspace of $\mathfrak{g}_{\overline k}$. \end{Lemma}
\textbf{Proof.\\} Choose a Chevalley system $\{x_\alpha:\mathbb{G}_a \rightarrow G_{\overline k} \, | \, \alpha \in \Phi(G,T) \}$ with corresponding Lie algebra elements $\{X_{\alpha}=dx_\alpha(1) \, | \, \alpha \in \Phi(G,T)\}$. Since $T \subset \Cent_G(X)\times_k \overline k$, we have $X(X_\alpha)=X(\Ad(t)(X_\alpha))=\alpha(t)X(X_\alpha)$ for all $t \in T(\overline k)$, and hence $X(X_\alpha)=0$ for all $\alpha \in \Phi(G,T)$. Thus $X((\mathfrak{g}_{\overline k})_\alpha)=0$. Since the split maximal tori of $\Cent_G(X)\times_k \overline k$ are conjugate in $\Cent_G(X)\times_k \overline k$, we have $X(H_\alpha)=0$ or $\val(X(H_\alpha))=r$ for $\alpha \in \Phi(G,T)$. By \cite[Proposition~7.1]{Yu}, we have $\alpha \in \Phi(\Cent_G(X),T)$ if and only if $X(H_\alpha)=0$, see also the proof of Lemma \ref{Lemma-stabilizer-Levi}. \qed
\begin{Cor} \label{Cor-generic-depth} Let $X \in \mathfrak{g}^*-\{0\}$ be generic of depth $r$ at $x$. Then $d(x,X)=d(X)=r$. \end{Cor}
\textbf{Proof.\\} Let $E$ be a tame extension of $k$ and $T$ a split maximal torus of $\Cent_G(X) \times_k E$ such that $x\in \mathscr{A}(T,E)$. By Lemma \ref{Lemma-restriction-on-p}\eqref{item-index-of-connection} the element $H_\alpha$ is of depth zero for all $\alpha \in \Phi(G,T)$. Hence $d(x,X)=d_E(x,X)=r$ by Lemma \ref{Lemma-structure-of-X} (or by definition if $X(H_\alpha)=0$ for all $\alpha \in \Phi(G,T)$). If $y \in \mathscr{B}(G_E,E), s \in \mathbb{R}$ and $X \in (\mathfrak{g}_E^*)_{y,s}$, then \cite[Lemma~8.2]{Yu} implies that $X$ restricted to $\Lie(T)(E)$ lies in $\Lie(T)^*(E)_{s}$. Since $X$ has depth $d(x,X)=r$ when restricted to $\Lie(T)(E)$, we deduce that $d_E(y,X)\leq r$. Hence $d(X)=d(x,X)=r$. \qed
\begin{Cor}\label{Cor-almoststable-and-generic-implies-almoststronglystable} Let $X \in \mathfrak{g}^*-\{0\}$ be generic of depth $r$ at $x$. Then $X$ is almost strongly stable at $x$. \end{Cor}
\textbf{Proof.\\} Suppose $X$ is not almost strongly stable at $x$. Then $\overline X \in \mathfrak{g}^*_{x, r}/\mathfrak{g}^*_{x, r+}$ is unstable. Since $\mathfrak f$ is perfect, by \cite[Corollary~4.3]{Kempf} there exists a non-trivial one parameter subgroup $\overline \lambda: \mathbb{G}_m \rightarrow \RP_x$ in the reductive quotient $\RP_x$ of $G$ at $x$ (defined over $\mathfrak f$) such that $\lim_{t\rightarrow0}\overline \lambda(t).\overline{X}=0$. Let $\mathbb{S}$ be a maximal split torus of $\RP_x$ containing $\overline \lambda(\mathbb{G}_m)$. Then there exists a split torus $\mathcal{S}$ (defined over $\mathcal{O}_k$) in the parahoric group scheme $\mathbb{P}_x$ of $G$ at $x$ whose special fiber is $\mathbb{S}$ and whose generic fiber $S$ is a split torus in $G$. This allows us to lift $\overline \lambda$ to a one parameter subgroup $\lambda: \mathbb{G}_m \rightarrow S \subset G$. Let $\mathscr{A}(S, k)$ be the apartment of $S$ (i.e. the apartment of a maximal torus in $G$ that contains $S$). Then $\mathscr{A}(S,k)$ contains $x$ and is the affine space underlying the real vector space $X_*(S) \otimes_{\mathbb{Z}} \mathbb{R}$.
If $\epsilon>0$ is sufficiently small, we obtain $X \in \mathfrak{g}^*_{x+\epsilon\lambda, r+}$. Hence $d(X)>r$, which contradicts Corollary \ref{Cor-generic-depth}. \qed
\begin{Rem} \label{Rem-BT} Recall that if $G'$ is a Levi subgroup of (a parabolic subgroup of) $G$, then we have an embedding of the corresponding Bruhat--Tits buildings $\mathscr{B}(G',k) \hookrightarrow \mathscr{B}(G,k)$. Even though this embedding is only unique up to some translation, its image is unique. Since we assume that all tori of $G$ split over tamely ramified extensions of $k$, every twisted Levi subgroup of $G$ becomes a Levi subgroup over a finite tamely ramified extension of $k$. Hence using (tame) Galois descent, we obtain a well defined image of $\mathscr{B}(G',k)$ in $\mathscr{B}(G,k)=\mathscr{B}$ for every twisted Levi subgroup $G'$ of $G$. In the sequel, we might identify $\mathscr{B}(G',k)$ with its image in $\mathscr{B}$. \end{Rem}
\begin{Rem} \label{Rem-B} Since $p$ does not divide the index of connection of $\Phi(G)$ (Lemma \ref{Lemma-restriction-on-p}\eqref{item-index-of-connection}), Adler and Roche (\cite[Proposition~4.1]{Adler-Roche}) provide a non-degenerate, $G$-equivariant, symmetric bilinear form $B: \mathfrak{g} \times \mathfrak{g} \rightarrow k$ such that the induced identification of $\mathfrak{g}$ with $\mathfrak{g}^*$ identifies $\mathfrak{g}_{x,r}$ with $\mathfrak{g}^*_{x,r}$ for all $x \in \mathscr{B}(G,k), r \in \mathbb{R}$. Moreover, $B$ stays non-degenerate when restricted to the Lie algebra of any twisted Levi subgroup of $G$. \end{Rem}
Using the bilinear form from Remark \ref{Rem-B} to view $(\mathfrak{g}')^*=(\Lie(G')(k))^*$ as a subset of $\mathfrak{g}^*$ for $G'$ a twisted Levi subgroup of $G$, we have the following lemma, which is a translation of a result by Kim--Murnaghan (\cite[Lemma~2.3.3]{Kim-Murnaghan}) into the dual setting.
\begin{Lemma}[Kim--Murnaghan] \label{Lemma-Kim-Murnaghan} Let $r\in \mathbb{R}$, $x\in \mathscr{B}$, and let $X\in \mathfrak{g}^*_{x,r}\subset \mathfrak{g}^*$ be generic of depth $r$ at $x$. Denote $\Cent_G(X)$ by $G'$ and $\Lie(\Cent_G(X))(k)$ by $\mathfrak{g}'$. If $X' \in (\mathfrak{g}')^*_{r+} \subset \mathfrak{g}^*$ and $y \in \mathscr{B}(G,k) - \mathscr{B}(G',k)$, then $d(y,X+X')<d(X)$. \end{Lemma}
\textbf{Proof.\\} Suppose $X \neq 0$ as the statement is trivial otherwise. Let $E$ be a tame extension of $k$ and $T$ a split maximal torus of $\Cent_G(X) \times_k E$ such that $x\in \mathscr{A}(T,E)$. By the definition of the bilinear form $B$ in the proof of \cite[Proposition~4.1]{Adler-Roche} (a sum of scalings of Killing forms together with a bilinear form on the center) together with Lemma \ref{Lemma-structure-of-X}, the generic element $X$ corresponds to an element $\check X$ of $\mathfrak{t}=\Lie(T)(k) \subset \mathfrak{g}$, hence of $\mathfrak{t}_r=\mathfrak{t} \cap \mathfrak{g}_{x,r}$. Moreover, it follows from the definition of the bilinear form that $d\alpha(\check X)=X(H_\alpha)$. Hence Lemma \ref{Lemma-structure-of-X} implies that $\check X$ is a good semisimple element of depth $r$ (see \cite[Definition~2.2.4]{Adler} for the definition of ``good semisimple element'').
Since $\Cent_G(X)=\Cent_G(\check X)$ and $X'$ corresponds to an element in $\mathfrak{g}'_{r+}$ under the identification of $\mathfrak{g}^*$ with $\mathfrak{g}$, the lemma follows from \cite[Lemma~2.3.3]{Kim-Murnaghan}, because $B$ preserves depth. (Note that Kim and Murnaghan impose in \cite{Kim-Murnaghan} much stronger conditions on $G$ and $k$, in particular that $k$ has characteristic zero. However the required Lemma~2.3.3 holds also in our setting by the same proof and observing that Corollary~2.2.3 and Lemma~2.2.4 in \cite{Kim-Murnaghan} (which are used in the proof of \cite[Lemma~2.3.3]{Kim-Murnaghan}) follow from results of Adler and Roche (\cite{Adler-Roche}) that are valid in our situation by our Lemma \ref{Lemma-restriction-on-p}.) \qed
To state the following main result in this section more conveniently, we fix a $G$-equivariant distance function $\mathrm{d}:\mathscr{B}(G,k) \times \mathscr{B}(G,k) \rightarrow \mathbb{R}_{\geq 0}$ on the building $\mathscr{B}(G,k)$, which is the restriction of a distance function $\mathrm{d}_E:\mathscr{B}(G,E) \times \mathscr{B}(G,E) \rightarrow \mathbb{R}_{\geq 0}$ for some tame extension $E$ of $k$ over which $G$ splits that satisfies $\abs{\alpha(x-y)}\leq \mathrm{d}_E(x,y)$ for all maximal split tori $T_E$ of $G_E$, all $x, y \in \mathscr{A}(T_E,E)$ and all $\alpha \in \Phi(G_E,T_E)$. (This normalization will only become relevant in the proof of Theorem \ref{Thm-existence-of-datum} below.)
\begin{Prop} \label{Lemma-almoststable-generic-representative} Let $r \in \mathbb{R}$ and $x \in \mathscr{B}$. If $X \in \mathfrak{g}^*$ is almost strongly stable at $x$ with $d(x,X)=r$, then for every $\epsilon > 0$ there exists $x' \in \mathscr{B}$ with $\mathrm{d}(x,x')<\epsilon$ such that $X \in \mathfrak{g}^*_{x',r}$, the coset $X+\mathfrak{g}^*_{x',r+}$ contains an element $\widetilde X$ that is generic of depth $r$ at $x'$, and the points $x$ and $x'$ are contained in $\mathscr{B}(\Cent_G(\widetilde X),k) \subset \mathscr{B}$. \end{Prop}
\textbf{Proof.\\} Let $T$ be a maximal torus of $\Cent_G(X)$ and $E$ a tame extension of $k$ over which $T$ splits. Choose a point $y$ in $\mathscr{A}(T,E) \cap \mathscr{B}(G,k)$. If $\alpha \in \Phi(G,T_E)$ and $X_\alpha \in (\mathfrak{g}_E)_\alpha$, then $X(X_\alpha)=X(\Ad(t)X_\alpha)=\alpha(t)X(X_\alpha)$ for all $t \in T(E)$, hence $X(X_\alpha)=0$. Thus the depth of $X$ at $y$ is equal to the depth of $X$ restricted to $\mathfrak{t}=\Lie(T)(k)$. On the other hand, by \cite[Lemma~8.2]{Yu}, the assumption that $X \in \mathfrak{g}^*_{x,r}$ implies that $X$ restricted to $\mathfrak{t}$ lies in $\mathfrak{t}^*_r$. Hence $d(y,X) \geq r$. Since $r=d(X)$ by Lemma \ref{Lemma-almost-stable-depth}, we deduce that $d(y,X)=r$.
\textbf{Claim.} $X+\mathfrak{g}^*_{y,r+}$ contains a generic element of depth $r$ at $y$.
\textbf{Proof of claim.} Let $\check\Phi_0 \subset \check\Phi:=\check\Phi(G,T_{E})$ be the collection of coroots $\check \alpha$ for which $\val(X(H_\alpha))>r$. Note that $\check\Phi_0$ is a closed subsystem of $\check\Phi$ (i.e. $\mathbb{Z} \check\Phi_0 \cap \check\Phi = \check\Phi_0$). Since $\mathbb{Z}\check\Phi/\mathbb{Z}\check\Phi_0$ is $p$-torsion free by Lemma \ref{Lemma-restriction-on-p}\eqref{item-bad-prime}, we also have $\check\Phi_0 = \mathbb{Q}\check\Phi_0 \cap \check\Phi$. Moreover, since $X$ and $T$ are defined over $k$, the set $\check\Phi_0$ is stable under the action of the Galois group $\Gal(E/k)$.
Let $Y \subset \mathfrak{g}_{E}=\Lie(G)(E)$ be the $E$-subspace spanned by $\{H_\alpha \, | \, \check\alpha \in \check\Phi_0\}$. By the above observations about $\check \Phi_0$, the subspace $Y$ is $\Gal(E/k)$-stable, and if $H_\alpha \in Y$, then $\check\alpha \in \check\Phi_0$. Define
$$Y_T^\perp:=\left\{ Z \in \Lie(T)({E}) \, | \, d\alpha(Z)=0 \, \forall \check\alpha \in \check\Phi_0 \right\}.$$
Then $Y_T^\perp$ is a $\Gal(E /k)$-stable complement to $Y$ in $\Lie(T)({E})$, and we set
$$Y^\perp := Y_T^\perp \oplus \bigoplus_{\alpha \in \Phi(G,T)} (\mathfrak{g}_{E})_\alpha .$$
Then $Y^\perp$ is a $\Gal(E /k)$-stable complement to $Y$ in $\mathfrak{g}_{E}$, and we define $X' \in \mathfrak{g}_{E}^*$ by
$$ X'(Z+Z^\perp)=X(Z) \, \text{ for all } Z \in Y, Z^\perp \in Y^\perp . $$
Since $Y$ and $Y^\perp$ are $\Gal(E/k)$-stable and $X$ is defined over $k$, the linear functional $X'$ is $\Gal(E/k)$-invariant and hence defined over $k$, i.e. we can view $X'$ as an element of $\mathfrak{g}^*$.
Let $\check \Delta_0$ be a basis for $\check \Phi_0$, and $\check \Delta$ a basis for $\check \Phi$ containing $\check \Delta_0$ (such a $\check \Delta$ exists by \cite[VI.1,~Proposition~24]{Bourbaki-4-6}). For $\check\alpha \in \check\Delta$, we denote by $\check\omega_\alpha \in \mathbb{Q}\check\Phi$ the fundamental coweight corresponding to $\alpha$, i.e. $\left\langle\check\omega_\alpha,\alpha\right\rangle=1$ and $\left\langle\check\omega_\alpha,\beta\right\rangle=0$ for $\check \beta \in \check\Delta-\{\check\alpha\}$. Similarly, for $\check\alpha \in \check\Delta_0$, let $\check\omega^0_\alpha \in \mathbb{Q}\check\Phi_0$ be the fundamental coweight with respect to the (co-)root system $\check \Phi_0$. By Lemma \ref{Lemma-restriction-on-p}\eqref{item-index-of-connection}, we have $\check \omega_\alpha \in \mathbb{Z}\left[\frac{1}{\abs{W}}\right]\check\Phi$ and $\check \omega^0_\alpha \in \mathbb{Z}\left[\frac{1}{\abs{W}}\right]\check\Phi_0$. Denote by $H_{\check\omega_\alpha}$ ($\check\alpha \in \check\Delta$) and $H_{\check\omega^0_{\alpha'}}$ ($\check{\alpha'} \in \check\Delta_0$) the image of $\check\omega_\alpha$ and $\check\omega^0_{\alpha'}$ under the linear map $\mathbb{Z}\left[\frac{1}{\abs{W}}\right]\check\Phi \rightarrow \Lie(T)(E) $ obtained by sending $\check{\alpha''}$ to $H_{\alpha''}$ (${\alpha''} \in \Phi$). Then we have
$$H_{\check\omega_\alpha}\equiv \left\{ \begin{array}{rl} 0 \mod Y^\perp & \text{ for } \check\alpha \in \check\Delta-\check\Delta_0 \\ H_{\check\omega^0_\alpha} \mod Y^\perp & \text{ for } \check \alpha \in \check \Delta_0 . \end{array} \right. $$
For $\beta \in \Phi$, we have $\check \beta=\sum_{\check \alpha \in \check \Delta}\left\langle\check\beta, \alpha \right\rangle {\check\omega_\alpha}$, and hence we obtain
\begin{equation}\label{equation-X-beta} H_\beta= \sum_{\check \alpha \in \check \Delta}\left\langle\check\beta, \alpha \right\rangle H_{\check\omega_\alpha} \equiv \sum_{\check \alpha \in \check \Delta_0}\left\langle\check\beta, \alpha \right\rangle H_{\check\omega^0_\alpha} \mod Y^\perp .
\end{equation}
Recall that $\left\langle\check\beta, \alpha \right\rangle$ are integers for $\check\alpha \in \check \Delta$ and that the index of the coroot lattice $\mathbb{Z}\check\Phi_0$ in the coweight lattice $\mathbb{Z}[\check\omega_{\alpha}\, | \, \check\alpha \in \check\Phi_0]$ is coprime to $p$ by Lemma \ref{Lemma-restriction-on-p}\eqref{item-index-of-connection}. Hence $\sum_{\check \alpha \in \check \Delta_0}\left\langle\check\beta, \alpha \right\rangle H_{\check\omega^0_\alpha}$ is contained in the $\mathcal{O}_{\overline k}$-span of $\{ H_\alpha \, | \, \check\alpha \in \check \Phi_0\}$. Thus, by the definition of $\check\Phi_0$, we obtain $\val(X'(H_\beta))>r$ for all $\check\beta \in \check\Phi$. In addition, $X'$ vanishes on the center of $\mathfrak{g}_E$ and on $\bigoplus_{\alpha \in \Phi(G,T)} (\mathfrak{g}_{E})_\alpha$, because these subspaces are contained in $Y^\perp$. Hence, by Lemma \ref{Lemma-restriction-on-p}\eqref{item-index-of-connection}, we have $\val(X'((\mathfrak{g}_{E})_{y,0})) \subset \mathbb{R}_{>r}$. Using that the Moy--Prasad filtration behaves well with respect to base change (Equation \eqref{eqn-MP-filtration}), we obtain
$$\val(X'(\mathfrak{g}_{y,{-r}})) \subset \val(X'((\mathfrak{g}_{E})_{y,-r})) \subset \mathbb{R}_{\geq -r} + \val(X'((\mathfrak{g}_{E})_{y,0})) \subset \mathbb{R}_{>0} .$$
Thus $X'\in \mathfrak{g}^*_{y,r+}$, and $\widetilde X=X-X' \in X+\mathfrak{g}^*_{y,r+}$ with $\val(\widetilde X(H_\alpha))=r$ for $\check \alpha \notin \check\Phi_0$ and $\widetilde X(H_\alpha)=0 $ for $\check\alpha \in \check\Phi_0$. In order to prove the claim, it remains to show that the orbit of $\widetilde X$ is closed. Since $p \nmid \abs{W}$ we can $G$-equivariantly identify $\mathfrak{g}^*$ with $\mathfrak{g}$ as in Remark \ref{Rem-B}. Since $T$ is in $\Cent_G(X)$ and acts trivially on $X'$, the torus $T$ also centralizes $\widetilde X=X-X'$, and hence $\widetilde X \in \Lie(T)(E)$ under the identification of $\mathfrak{g}^*$ with $\mathfrak{g}$. Thus $\widetilde X$ is semisimple, and therefore its $G$-orbit is closed (\cite[9.2]{Borel}). Hence $\widetilde X \in X+\mathfrak{g}^*_{y,r+}$ is generic of depth $r$ at $y$.
To finish the proof of the proposition, recall that $d(x,\widetilde X + X')=d(x,X)=r=d(y,\widetilde X)=d(\widetilde X)$ (by Corollary \ref{Cor-generic-depth}). We write $G'=\Cent_G(\widetilde X)$ and $\mathfrak{g}'=\Lie(G')(k)$. Since $X'$ has depth greater than $r$ at $y$ and vanishes on $\bigoplus_{\alpha \in \Phi(G,T)} (\mathfrak{g}_{E})_\alpha$, it lies in $(\mathfrak{g}')^*_{r+} \subset \mathfrak{g}^*$. Hence we deduce from Lemma \ref{Lemma-Kim-Murnaghan} that $x \in \mathscr{B}(G',k)$. Thus there exists a maximal torus $\widetilde T$ in $G'\subset G$ with $x \in \mathscr{A}(\widetilde T)$, and, by Lemma \ref{Lemma-structure-of-X}, the element $\widetilde X$ is generic of depth $r$ at $x$. If $\widetilde X \in X+\mathfrak{g}^*_{x,r+}$, then we are done by choosing $x'=x$ and observing that $\overline X =\overline{\widetilde X}$. Hence it remains to consider the case that $\widetilde X \notin X+\mathfrak{g}^*_{x,r+}$. Then $d(x,X')=d(x,X-\widetilde X)=r<d(y,X')\leq d(X')$.
Viewing these as depths for $\mathscr{B}(G',k)$, we deduce from \cite[Corollary~3.2.6]{Adler-DeBacker} (together with their remark at the beginning of Section~3) that the coset $X'+(\mathfrak{g}')^*_{x,r+}$ is degenerate, i.e. contains an unstable element. Hence $\overline{X'} \in ({\mathfrak{g}'}_{x,-r}/{\mathfrak{g}'}_{x,(-r)+})^*$ is unstable by \cite[4.3.~Proposition]{MP1}. Since $\mathfrak f$ is perfect, by \cite[Corollary~4.3]{Kempf} there exists a non-trivial one parameter subgroup $\overline \lambda: \mathbb{G}_m \rightarrow \RP'_x$ in the reductive quotient $\RP'_x$ of $G'$ at $x$ (defined over $\mathfrak f$) such that $\lim_{t\rightarrow0}\overline \lambda(t).\overline{X'}=0$. As in the proof of Corollary \ref{Cor-almoststable-and-generic-implies-almoststronglystable}, we let $\mathbb{S}$ be a maximal split torus of $\RP'_x$ containing $\overline \lambda(\mathbb{G}_m)$, and $\mathcal{S}$ a split torus (defined over $\mathcal{O}_k$) in the parahoric group scheme $\mathbb{P}'_x$ of $G'$ at $x$ whose special fiber is $\mathbb{S}$ and whose generic fiber $S$ is a split torus in $G'$. This allows us to consider $\overline \lambda$ as an element $\lambda$ of $X_*(S)$. Let $\mathscr{A}(S, k)$ be the apartment of $S$ (i.e. the apartment of a maximal (maximally split) torus $T_S \subset G'$ containing $S$). Then $\mathscr{A}(S,k)$ contains $x$ and is the affine space underlying the real vector space $X_*(S) \otimes_{\mathbb{Z}} \mathbb{R}$. If $\epsilon>0$ is sufficiently small, then $X' \in \mathfrak{g}^*_{x+\epsilon\lambda, r+}$ and $X \equiv \widetilde X \mod \mathfrak{g}^*_{x+\epsilon\lambda,r+}$. Let $E'$ be a tamely ramified extension of $k$ over which $T_S$ splits. Then $x':=x+\epsilon\lambda \in \mathscr{A}(T_S,E') \cap \mathscr{B}(G,k)$, and since $T_S \subset G'=\Cent_G(\widetilde X)$, the element $\widetilde X$ is generic at $x'$ of depth $r$ (by Lemma \ref{Lemma-structure-of-X}). \qed
\begin{Aside} The claim proved within the proof of Proposition \ref{Lemma-almoststable-generic-representative} is the dual statement of \cite[Theorem~3.3]{Fi-tame-tori} and could be deduced from the latter as well. We decided to give an independent (but analogous) proof so that the reader has the option to see what assumptions on $p$ enter the claim at which point and observe that in many cases slightly weaker assumptions on $p$ suffice. \end{Aside}

\section{The datum} \label{Section-datum}
In this section we define the notion of a datum of $G$ and what it means for a datum to be contained in a smooth irreducible representation of $G(k)$. In Section \ref{Section-existence-of-datum} (Theorem \ref{Thm-existence-of-datum}) we will show that every irreducible representation contains such a datum. From this result we will deduce in Section \ref{Section-existence-of-type} (Theorem \ref{Thm-exhaustion-of-types}) and Section \ref{Section-exhaustion-Yu} (Theorem \ref{Thm-exhaustion-Yu}) that every irreducible representation contains a type of the form constructed by Kim--Yu (\cite{KimYu}) based on Yu's construction of supercuspidal representations (\cite{Yu}) and that Yu's construction yields all supercuspidal representations.
\begin{Def} \label{Def-extendeddatum} Let $n \in \mathbb{Z}_{\geq 0}$.
An \textit{extended datum} of $G$ of length $n$ is a tuple $$(x, (r_i)_{1 \leq i \leq n}, (X_i)_{1 \leq i \leq n}, (G_i)_{1 \leq i \leq n+1},(\rho_0, V_{\rho_0}))$$ where
\begin{enumerate}[label=(\alph*),ref=\alph*]
\item \label{Con-a} $r_1 > r_2 > \hdots > r_n >0$ are real numbers
\item $X_i \in \mathfrak{g}^*_{x,-r_i} \setminus \mathfrak{g}^*_{x,(-r_i)+}$ for $1 \leq i \leq n$
\item $G=G_1 \supseteq G_2 \supsetneq G_3 \supsetneq \hdots \supsetneq G_{n+1}$ are twisted Levi subgroups of $G$
\item \label{Con-d} $x \in \mathscr{B}(G_{n+1},k)\subset \mathscr{B}(G,k)$
\item $(\rho_0, V_{\rho_0})$ is an irreducible representation of $(G_{n+1}^{\text{der}})_{x,0}/(G_{n+1}^{\text{der}})_{x,0+}$
\end{enumerate}
satisfying the following conditions for all $1 \leq i \leq n$:
\begin{enumerate}[label=(\roman*),ref=\roman*]
\item \label{Cond-1} $X_i \in \mathfrak{g}_i^*:=\Lie(G_i)(k)^* \subset \mathfrak{g}^*$
\item $X_i$ is generic of depth $-r_i$ at $x \in \mathscr{B}(G_i,k)$ as an element of $\mathfrak{g}_i^*$ (under the action of $G_i$)
\item \label{Cond--1} $G_{i+1}=\mathrm{Cent}_{G_i}(X_i)$
\end{enumerate}
A \textit{truncated extended datum} of $G$ of length $n$ is a tuple $$(x, (r_i)_{1 \leq i \leq n}, (X_i)_{1 \leq i \leq n}, (G_i)_{1 \leq i \leq n+1})$$ of data as above satisfying \eqref{Con-a} through \eqref{Con-d} and (\ref{Cond-1}) through (\ref{Cond--1}).
\end{Def}
Given a truncated datum $(x, (X_i)_{1 \leq i \leq n})$ or a datum $(x, (X_i)_{1 \leq i \leq n}, (\rho_0, V_{\rho_0}))$ of $G$ of length $n$, we denote by $(x, (r_i)_{1 \leq i \leq n}, (X_i)_{1 \leq i \leq n}, (G_i)_{1 \leq i \leq n+1})$ the unique truncated extended datum containing it or by $(x, (r_i)_{1 \leq i \leq n}, (X_i)_{1 \leq i \leq n}, (G_i)_{1 \leq i \leq n+1},(\rho_0, V_{\rho_0}))$ the unique extended datum containing it, respectively, as in Definition \ref{Def-datum}.
\begin{Rem} There are two main differences between a datum and the input for Yu's construction in \cite{Yu}. The first difference is that we only work with elements $X_i \in \mathfrak{g}^*$ and not with characters of $G_{i+1}(k)$. The second difference is that $(\rho_0, V_{\rho_0})$ is an irreducible representation of $(G_{n+1}^{\text{der}})_{x,0}/(G_{n+1}^{\text{der}})_{x,0+}$ that might not be cuspidal. At this point the representation $(\rho_0, V_{\rho_0})$ is more of a placeholder: it appears naturally in Section \ref{Section-existence-of-datum}, and from it we have to extract a cuspidal representation that forms the input for Yu's construction in Section \ref{Section-existence-of-type} (Lemma \ref{Lemma-repzeroK} and Lemma \ref{Lemma-cuspidal}). Thus our datum can be viewed as a skeleton of the input for Yu's construction. \end{Rem}
For later convenience, we note the following lemma.
\begin{Lemma} \label{Lemma-truncateddatum-with-y} If $(x, (X_i)_{1 \leq i \leq n})$ is a truncated datum of $G$, and $y$ is a point of $\mathscr{B}(G_{n+1},k) \subset \mathscr{B}(G,k)$, then $(y, (X_i)_{1 \leq i \leq n})$ is also a truncated datum of $G$. \end{Lemma}
\textbf{Proof.\\} This follows from Lemma \ref{Lemma-structure-of-X}. \qed
In order to relate a truncated datum $(x, (X_i)_{1 \leq i \leq n})$ or a datum $(x, (X_i)_{1 \leq i \leq n}, (\rho_0, V_{\rho_0}))$ to representations of $G(k)$, we introduce the following associated groups for $1 \leq i \leq n+1$:
\begin{itemize}
\item $H_1:=G_1$ if $G_1=G_2$ and $H_1:=G_1^{\text{der}}$ if $G_1 \neq G_2$
\item $H_i:=G_i^{\text{der}}$ for $i>1$
\item $(H_{i})_{x,\widetilde r}:=G_{x,\widetilde r}\cap H_i(k)=(H_i)_{x_i,\widetilde r}$ for $\widetilde r \in \widetilde{\mathbb{R}}_{\geq 0}:=\mathbb{R}_{\geq 0} \cup \{ r+ \, | \, r \in \mathbb{R}_{\geq 0} \}$,
\end{itemize}
where $x_i$ denotes the image of $x \in \mathscr{B}(G_i,k)$ in $\mathscr{B}(H_i,k)$. In order to define another subgroup $(H_{i})_{x,\widetilde r,\widetilde r'}$ of $G(k)$ for $\widetilde r \geq \widetilde r' \geq \frac{\widetilde r}{2}>0$ ($\widetilde r, \widetilde r' \in \widetilde{\mathbb{R}}$) and $1 \leq i \leq n$, we choose a maximal torus $T$ of $G_{i+1}$ such that $x \in \mathscr{A}(T,E)$, where $E$ denotes a finite tamely ramified extension of $k$ over which $T$ splits.
Then we define $$ (G_{i})_{x,\widetilde r,\widetilde r'} := G(k) \cap \left\langle T(E)_{\widetilde r}, U_\alpha(E)_{x,\widetilde r}, U_\beta(E)_{x,\widetilde r'} \, | \, \alpha \in \Phi(G_i, T)\subset\Phi(G,T), \ \beta \in \Phi(G_i, T)-\Phi(G_{i+1}, T) \, \right\rangle,$$ where $U_\alpha(E)_{x,r}$ denotes the Moy--Prasad filtration subgroup of depth $r$ (at $x$) of the root group $U_\alpha(E) \subset G(E)$ corresponding to the root $\alpha$, and $$ (H_{i})_{x,\widetilde r, \widetilde r'} := H_i(k) \cap (G_{i})_{x,\widetilde r, \widetilde r'} . $$ Note that $(G_{i})_{x,\widetilde r, \widetilde r'}$ is denoted $(G_{i+1},G_i)(k)_{x_i,\widetilde r, \widetilde r'}$ in \cite{Yu}. Yu (\cite[p.~585 and p.~586]{Yu}) shows that this definition is independent of the choice of $T$ and $E$. We define the subalgebras $\mathfrak{h}_{i}$, $(\mathfrak{h}_{i})_{x,\widetilde r}$ and $(\mathfrak{h}_{i})_{x,\widetilde r, \widetilde r'}$ of $\mathfrak{g}$ analogously. For convenience, we also set $r_{n+1}=0$.
\begin{Def} \label{Def-datum-contained} Let $(\pi, V_\pi)$ be a smooth irreducible representation of $G(k)$. A datum $(x, (X_i)_{1 \leq i \leq n}, (\rho_0, V_{\rho_0}))$ of $G$ is said to be \textit{contained in} $(\pi, V_\pi)$ if $V_\pi^{\cup_{1 \leq i \leq n+1}((H_{i})_{x,r_i+})}$ contains a subspace $V'$ such that
\begin{itemize}
\item $(\pi|_{(H_{n+1})_{x,0}}, V')$ is isomorphic to $(\rho_0, V_{\rho_0})$ as a representation of $(H_{n+1})_{x,0}/(H_{n+1})_{x,0+}$, and
\item $(H_{i})_{x,r_i,\frac{r_i}{2}+} /(H_{i})_{x,r_i+} \simeq (\mathfrak{h}_{i})_{x,r_i,\frac{r_i}{2}+} /(\mathfrak{h}_{i})_{x,r_i+}$ acts on $V'$ via the character $\varphi \circ X_i$ for $1\leq i \leq n$,
\end{itemize}
where we recall that $\varphi: k \rightarrow \mathbb{C}^*$ is an additive character of $k$ of conductor $\mathcal{P}$ that is fixed throughout the paper. Similarly, $(\pi,V_\pi)$ is said to contain a truncated datum $(x, (X_i)_{1 \leq i \leq n})$ if $V_\pi^{\cup_{1 \leq i \leq n}((H_{i})_{x,r_i+})}$ contains a one-dimensional subspace on which $(H_{i})_{x,r_i,\frac{r_i}{2}+} /(H_{i})_{x,r_i+} \simeq (\mathfrak{h}_{i})_{x,r_i,\frac{r_i}{2}+} /(\mathfrak{h}_{i})_{x,r_i+}$ acts via $\varphi \circ X_i$ for $1 \leq i \leq n$. \end{Def}
The data that we are going to use to extract a type from a given representation are the following.
\begin{Def} \label{Def-datum-for} Let $(\pi, V_\pi)$ be a smooth, irreducible representation of $G(k)$. We say that a tuple $(x, (X_i)_{1 \leq i \leq n}, (\rho_0, V_{\rho_0}))$ \textit{is a maximal datum for} $(\pi, V_\pi)$ if $(x, (X_i)_{1 \leq i \leq n}, (\rho_0, V_{\rho_0}))$ is a datum of $G$ that is contained in $(\pi,V_\pi)$ such that if $(x', (X_i)_{1 \leq i \leq n'}, (\rho_0', V_{\rho_0}'))$ is another datum of $G$ contained in $(\pi,V_\pi)$, then the dimension of the facet of $\mathscr{B}(G_{n+1},k)$ that contains $x$ is at least the dimension of the facet of $\mathscr{B}(G_{n+1},k)$ that contains $x'$. \end{Def}
\section{Some results used to exhibit data} \label{Section-killing-lemmas}
In order to prove that every irreducible representation of $G(k)$ contains a datum, we first prove a lemma and derive some corollaries that we are going to repeatedly use in the proof of the existence of a datum in Section \ref{Section-existence-of-datum}.
(The reader might skip this section at first reading and come back to it when the results are used in the proof of Theorem \ref{Thm-existence-of-datum}.)
\begin{Lemma} \label{Lemma-kill-C} Let $x \in \mathscr{B}(G,k)$, $r \in \mathbb{R}_{>0}$, and let $X \in \mathfrak{g}^*-\{0\}$ be generic of depth $-r$ at $x$. Write $G'=\mathrm{Cent}_G(X)$, $\mathfrak{g}'= \Lie(G')(k)$, and let $T$ be a maximal torus of $G'$ that splits over a tamely ramified extension $E$ of $k$ and such that $x \in \mathscr{A}(T,E) \cap \mathscr{B}(G,k)$. We set $\mathfrak{t}=\Lie(T)(k)$, $\mathfrak{r}'=\mathfrak{g} \cap \left(\bigoplus_{\alpha \in \Phi(G',T_E)} (\mathfrak{g}_E)_\alpha \right) \subset \mathfrak{g}'$ and $\mathfrak{r}''=\mathfrak{g} \cap \left(\bigoplus_{\alpha \in \Phi(G,T_E)-\Phi(G',T_E)} (\mathfrak{g}_E)_\alpha \right)$, and we denote by $\mathfrak{j}^*$ the subspace of elements in $\mathfrak{g}^*$ that vanish on $\mathfrak{t} \oplus \mathfrak{r}' =\mathfrak{g}'$. Then
\begin{enumerate}[label=(\alph*),ref=\alph*]
\item \label{item-fX} The map $f_X: \mathfrak{r}'' \rightarrow \mathfrak{g}^*$ defined by $Y \mapsto (Z \mapsto X([Y,Z]))$ is a vector space isomorphism of $\mathfrak{r}''$ onto $\mathfrak{j}^*$, and $f_X(\mathfrak{r}'' \cap \mathfrak{g}_{x,r'}) = \mathfrak{j}^*\cap\mathfrak{g}^*_{x,r'-r}$ for $r' \in \mathbb{R}$.
\item \label{item-kill-C} Let $d$ be a real number such that $\frac{r}{2}\leq d < r$. For every $0<\epsilon<\frac{r-d}{2}$, if $C \in \mathfrak{j}^* \cap \mathfrak{g}^*_{x,-(d+\epsilon)}$, then there exists $g \in G_{x,r-d-\epsilon} \subset G_{x,0+}$ such that
\begin{enumerate}[label=(\roman*),ref=\roman*]
\item \label{item-kill-C-i} $\Ad(g)(X+C)|_{\mathfrak{g}_{x,r}} = X|_{\mathfrak{g}_{x,r}}$,
\item $\Ad(g)(X+C)|_{\mathfrak{r}'' \cap \mathfrak{g}_{x,d+}} = 0 = X|_{\mathfrak{r}'' \cap \mathfrak{g}_{x,d+}}$,
\item \label{item-kill-C-iii} if $(\pi, V_\pi)$ is a representation of $G(k)$ and $V'$ is a subspace of $V_\pi$ on which the group $$ G(k) \cap \left\langle T(E)_{r+}, U_\alpha(E)_{x,(d+\epsilon)+}, U_\beta(E)_{x,r+} \, | \, \alpha \in \Phi(G,T_E)-\Phi(G',T_E), \ \beta \in \Phi(G',T_E) \right\rangle $$ acts trivially and that is stable under the action of a subgroup $H$ of $G'^{\text{der}}(k) \cap G_{x,(2d-r+2\epsilon)+}$, then $g^{-1}Hg$ preserves $V'$ and $(^{g}\pi|_{H},V') = (\pi|_{H},V'),$
\end{enumerate}
where $\Ad$ denotes the contragredient of the adjoint action.
\end{enumerate}
\end{Lemma}
\textbf{Proof.\\} \eqref{item-fX} Let $Y \in \mathfrak{r}''$. Recall that $[(\mathfrak{g}_E)_\alpha, (\mathfrak{g}_E)_\beta] \subset (\mathfrak{g}_E)_{\alpha+\beta}$ for $\alpha, \beta \in \Phi(G, T_E)$, $\alpha \neq -\beta$ (where $(\mathfrak{g}_E)_{\alpha+\beta}=\{0\}$ if $\alpha+\beta \notin \Phi(G,T_E)$). Hence, if $Z \in \mathfrak{r}'$, then $[Y,Z] \in \mathfrak{r}''$, and $X([Y,Z])=0$ by Lemma \ref{Lemma-structure-of-X}. Similarly, if $Z \in \mathfrak{t}$, then $[Y,Z] \in \mathfrak{r}''$, and $X([Y,Z])=0$. Thus the image of the linear map $f_X$ is contained in $\mathfrak{j}^*\subset \mathfrak{g}^*$. Choose a Chevalley system $\{x_\alpha:\mathbb{G}_a \rightarrow G_{E} \, | \, \alpha \in \Phi(G,T_E) \}$ for $G_E$ with corresponding Lie algebra elements $\{X_{\alpha}=dx_\alpha(1) \, | \, \alpha \in \Phi(G,T_E)\}$.
Then for $\alpha \in \Phi(G, T_E)-\Phi(G',T_E)$, we have $[X_\alpha,X_{-\alpha}]=H_\alpha=d\check\alpha(1)$. Hence, extending $f_X$ linearly to $\mathfrak{r}'' \otimes_k E$, the element $f_X(X_\alpha)$ in $\mathfrak{j}^* \otimes_k E$ is a map that sends $X_\beta$ to $c\delta_{-\alpha,\beta}$ for $\beta \in \Phi(G,T_E)-\Phi(G',T_E)$, for some constant $c \in E$ with $\val(c)=\val(X(H_\alpha))=-r$, by Lemma \ref{Lemma-structure-of-X}. From this description, we see that $d(x,f_X(X_\alpha))=-r-d(x,X_{-\alpha})=-r-(-\alpha(x))=\alpha(x)-r$ while $d(x,X_\alpha)=\alpha(x)$. Thus $f_X(\mathfrak{r}'' \otimes_k E \cap (\mathfrak{g}_E)_{x,r'})=\mathfrak{j}^* \otimes_k E \cap (\mathfrak{g}^*_E)_{x,r'-r}$, and hence $f_X(\mathfrak{r}'' \cap \mathfrak{g}_{x,r'})=\mathfrak{j}^* \cap (\mathfrak{g}^*)_{x,r'-r}$ because $E$ is tamely ramified over $k$. In particular, $f_X: \mathfrak{r}'' \rightarrow \mathfrak{j}^*$ is a vector space isomorphism.
\eqref{item-kill-C} By \eqref{item-fX}, there exists $Y \in \mathfrak{r}''$ of depth $\geq r-d-\epsilon> 0$ such that $C=X([Y,\_])$. Let $\exp$ denote a mock exponential function from $\mathfrak{g}_{x,r-d-\epsilon}$ to $G_{x,r-d-\epsilon}$ as defined in \cite[Section~1.5]{Adler}, i.e. if $Y=\sum_{\alpha \in \Phi(G,T_E)-\Phi(G',T_E)} a_\alpha X_{\alpha}$ for some $a_\alpha \in E$, then $$\exp(Y) \equiv \prod_{\alpha \in \Phi(G,T_E)-\Phi(G',T_E)} x_\alpha(a_\alpha) \mod G(E)_{x,2(r-d-\epsilon)}$$ (viewing $\exp(Y) \in G(k)$ inside $G(E)$) for some fixed (arbitrarily chosen) order of the roots $\Phi(G,T_E)-\Phi(G',T_E)$. We set $g=(\exp(-Y))^{-1}$. Note that $Y \in \mathfrak{g}_{x,r-d-\epsilon}$ implies that $x_\alpha(-a_\alpha) \in G(E)_{x,r-d-\epsilon}$. Let $Z \in \mathfrak{g}_{x,r'}$ for some $r'\in \mathbb{R}$. Then by \cite[Proposition~1.6.3]{Adler}, we have $$ \Ad(g^{-1})(Z) \equiv Z + [-Y,Z] \mod \mathfrak{g}_{x,r'+2(r-d-\epsilon)} .$$ Hence, using that $g \in G_{x,r-d-\epsilon}$, $X \in \mathfrak{g}^*_{x,-r}$, $C \in \mathfrak{g}^*_{x,-(d+\epsilon)}$, $Y \in \mathfrak{g}_{x,r-d-\epsilon}$ and $\epsilon < \frac{r-d}{2}$, we obtain $$\Ad(g)(X+C)(Z)=X(\Ad(g^{-1})(Z))+C(\Ad(g^{-1})(Z))=X(Z) \quad \text{for $Z \in \mathfrak{g}_{x,r}$, and}$$ $$\Ad(g)(X+C)(Z)=X(\Ad(g^{-1})(Z))+C(\Ad(g^{-1})(Z))=X(Z+[-Y,Z])+C(Z)=X(Z)$$ for $Z \in \mathfrak{r}'' \cap \mathfrak{g}_{x,d+}$ (using $C=X([Y,\_])$). To prove the remaining claim, observe that for $h \in H \subset G'(k) \cap G_{x,(2d-r+2\epsilon)+}$, $\alpha \in \Phi(G,T_E)-\Phi(G',T_E)$ and $x_\alpha(a_\alpha) \in G(E)_{x,r-d-\epsilon}$, we have $x_\alpha(a_\alpha) h x_\alpha(a_\alpha)^{-1} = h \cdot u $ for some $u$ in $\left\langle U_\alpha(E)_{x,(d+\epsilon)+}\, | \, \alpha \in \Phi(G,T_E)-\Phi(G',T_E) \right\rangle$, and hence $$g h g^{-1} \equiv h \cdot u' \mod G(E)_{x,r+} $$ for some $u'$ in $\left\langle U_\alpha(E)_{x,(d+\epsilon)+}\, | \, \alpha \in \Phi(G,T_E)-\Phi(G',T_E) \right\rangle$. Thus $\pi(g h g^{-1})$ and $\pi(h)$ agree on $V'$. \qed
\begin{Cor} \label{Cor-kill-C-new} Let $n$ be a positive integer and $(x,(X_i)_{1 \leq i \leq n})$ a truncated datum of length $n$ (with corresponding truncated extended datum $(x,(r_i)_{1 \leq i \leq n}, (X_i)_{1 \leq i \leq n}, (G_i)_{1 \leq i \leq n+1})$).
Let $1 \leq j \leq n$, and choose a maximal torus $T_j$ of $\mathrm{Cent}_{H_j}(X_j) \subset H_j$ that splits over a tame extension $E$ of $k$ and such that $x \in \mathscr{A}(T_j,E) \cap \mathscr{B}(H_j,k)$. Write $\mathfrak{t}_j=\Lie(T_j)(k)$. Let $(\pi, V_\pi)$ be a representation of $G(k)$. Let $d, \epsilon \in \mathbb{R}$ such that $\frac{r_j}{2} \leq d < r_j$ and $\frac{r_j-d}{2}>\epsilon>0$. Suppose that $V'$ is a nontrivial subspace of $V_\pi^{\bigcup_{1 \leq i \leq j}(H_i)_{x,r_i+}}$ on which $(H_i)_{x,r_i,\frac{r_i}{2}+}/(H_i)_{x,r_i+}\simeq (\mathfrak{h}_i)_{x,r_i,\frac{r_i}{2}+}/(\mathfrak{h}_i)_{x,r_i+}$ acts via $\varphi \circ X_i$ for $1 \leq i \leq j-1$ and $(H_j)_{x,r_j-\epsilon,d+}/(H_j)_{x,r_j+}\simeq (\mathfrak{h}_j)_{x,r_j-\epsilon,d+}/(\mathfrak{h}_j)_{x,r_j+}$ acts via $\varphi \circ (X_j+C)$ for some $C \in (\mathfrak{h}_j^*)_{x,-(d+\epsilon)}$ that is trivial on $\mathfrak{t}_j + \mathfrak{h}_{j+1}$. Then there exists $g \in (H_j)_{x,r_j-d-\epsilon}$ such that
\begin{enumerate}[label=(\roman*),ref=\roman*]
\item $V'':=\pi(g)V' \subset V_\pi^{\bigcup_{1 \leq i \leq j}(H_i)_{x,r_i+}}$
\item \label{item-blubb2} $(H_i)_{x,r_i,\frac{r_i}{2}+}/(H_i)_{x,r_i+}\simeq (\mathfrak{h}_i)_{x,r_i,\frac{r_i}{2}+}/(\mathfrak{h}_i)_{x,r_i+}$ acts on $V''$ via $\varphi \circ X_i$ for $1 \leq i \leq j-1$
\item $(H_j)_{x,r_j,d+}/(H_j)_{x,r_j+}\simeq (\mathfrak{h}_j)_{x,r_j,d+}/(\mathfrak{h}_j)_{x,r_j+}$ acts on $V''$ via $\varphi \circ X_j$
\item any subgroup $H$ of $(H_{j+1})_{x,(2d-r_j+2\epsilon)+}$ that stabilizes $V'$ also stabilizes $V''$ and $(\pi|_{H},V'') \simeq (\pi|_{H},V')$.
\end{enumerate}
\end{Cor}
\textbf{Proof.\\} Let $g \in (H_j)_{x,r_j-d-\epsilon}$ be as constructed in the proof of Lemma \ref{Lemma-kill-C}\eqref{item-kill-C} applied to the group $H_j$ with generic element $X_j$ of depth $r_j$. Hence $(\pi|_{H},V'') \simeq (\pi|_{H},V')$ by Lemma \ref{Lemma-kill-C}\eqref{item-kill-C}\eqref{item-kill-C-iii}. Note that $g^{-1}((H_i)_{x,r_i+})g=(H_i)_{x,r_i+}$ for $1 \leq i \leq j$. Thus $V'' \subset V_\pi^{\bigcup_{1 \leq i \leq j}(H_i)_{x,r_i+}}$. To show \eqref{item-blubb2}, recall that $g^{-1}=\prod_{\alpha \in \Phi(H_j,(T_j)_E)-\Phi(\mathrm{Cent}_{H_j}(X_j),(T_j)_E)} x_\alpha(-a_\alpha)\,g'$ with $g'\in H_j(E)_{x,2r_j-2d-2\epsilon}$ and $x_\alpha(-a_\alpha) \in H_j(E)_{x,r_j-d-\epsilon}$. Hence for $h \in (H_i)_{x,r_i,\frac{r_i}{2}+}$ for $1 \leq i <j$ we have $g^{-1}hg \equiv h \mod (H_i)_{x,r_i+,(\frac{r_i}{2}+r_j-d-\epsilon)+}$ with $r_j-d-\epsilon>0$, and therefore $(H_i)_{x,r_i,\frac{r_i}{2}+}/(H_i)_{x,r_i+}\simeq (\mathfrak{h}_i)_{x,r_i,\frac{r_i}{2}+}/(\mathfrak{h}_i)_{x,r_i+}$ acts on $V''$ via $\varphi \circ X_i$. In addition, we have $g^{-1}(H_j)_{x,r_j,d+}g\subset(H_j)_{x,r_j-\epsilon,d+}$. Since $(H_j)_{x,r_j-\epsilon,d+}/(H_j)_{x,r_j+}\simeq (\mathfrak{h}_j)_{x,r_j-\epsilon,d+}/(\mathfrak{h}_j)_{x,r_j+}$ acts via $\varphi \circ (X_j+C)$ on $V'$, we obtain from Lemma \ref{Lemma-kill-C}\eqref{item-kill-C} that $(H_j)_{x,r_j,d+}/(H_j)_{x,r_j+}\simeq (\mathfrak{h}_j)_{x,r_j,d+}/(\mathfrak{h}_j)_{x,r_j+}$ acts on $V''$ via $\varphi \circ X_j$.
\qed
\begin{Cor} \label{Cor-kill-C-type1} Let $n$ be a positive integer and $(x,(X_i)_{1 \leq i \leq n})$ a truncated datum of length $n$ (with corresponding truncated extended datum $(x,(r_i)_{1 \leq i \leq n}, (X_i)_{1 \leq i \leq n}, (G_i)_{1 \leq i \leq n+1})$). Let $T$ be a maximal torus of $G_{n+1}$ such that $x\in \mathscr{A}(T,E)$, set $\mathfrak{t}=\Lie(T)(k)$, and let $(\pi, V_\pi)$ be a representation of $G(k)$. Let $0<\epsilon < \frac{r_n}{4}$ such that $(H_{i})_{x,r_i-\epsilon,\frac{r_i}{2}+}=(H_{i})_{x,r_i,\frac{r_i}{2}+}$ for all $1 \leq i \leq n$. Suppose that $V'$ is a nontrivial subspace of $V_\pi^{\bigcup_{1 \leq i \leq n}(H_i)_{x,r_i+}}$ on which the action of $(H_{i})_{x,r_i,\frac{r_i}{2}+} /(H_{i})_{x,r_i+} \simeq (\mathfrak{h}_{i})_{x,r_i,\frac{r_i}{2}+} /(\mathfrak{h}_{i})_{x,r_i+}$ via $\pi$ is given by $\varphi \circ \left(C_i+ X_i\right)$ for some $C_i \in (\mathfrak{h}_i)^*_{x,-(\frac{r_i}{2}+\epsilon)}$ that is trivial on $(\mathfrak{t} \cap \mathfrak{h}_i) + \mathfrak{h}_{i+1}$ for all $1 \leq i \leq n$. Then there exists $g \in G_{x,\frac{r_n}{2}-\epsilon}\subset G_{x,0+}$ such that
\begin{enumerate}[label=(\roman*),ref=\roman*]
\item $V'':=\pi(g)V' \subset V_\pi^{\bigcup_{1 \leq i \leq n}(H_i)_{x,r_i+}}$
\item $(H_i)_{x,r_i,\frac{r_i}{2}+}/(H_i)_{x,r_i+}\simeq (\mathfrak{h}_i)_{x,r_i,\frac{r_i}{2}+}/(\mathfrak{h}_i)_{x,r_i+}$ acts on $V''$ via $\varphi \circ X_i$ for $1 \leq i \leq n$
\item any subgroup $H$ of $(H_{n+1})_{x,2\epsilon+}$ that stabilizes $V'$ also stabilizes $V''$ and $(\pi|_{H},V'') \simeq (\pi|_{H},V')$.
\end{enumerate}
\end{Cor}
\textbf{Proof.\\} Since $(H_{i})_{x,r_i-\epsilon,\frac{r_i}{2}+}=(H_{i})_{x,r_i,\frac{r_i}{2}+}$ for all $1 \leq i \leq n$, we can apply Corollary \ref{Cor-kill-C-new} successively for $j=1, 2, \hdots, n$ with $d=\frac{r_1}{2}, \frac{r_2}{2}, \hdots , \frac{r_n}{2}$, respectively. We obtain $g=g_{n} \cdot \hdots \cdot g_1\in (H_{n})_{x,\frac{r_{n}}{2}-\epsilon} \cdot \hdots \cdot (H_1)_{x,\frac{r_1}{2}-\epsilon}\subset G_{x,\frac{r_n}{2}-\epsilon}$ such that $\pi(g)V' \subset V_\pi^{\cup_{1 \leq i \leq n}((H_i)_{x,r_i+})}$, the action of $(H_{i})_{x,r_i,\frac{r_i}{2}+} /(H_{i})_{x,r_i+} \simeq (\mathfrak{h}_{i})_{x,r_i,\frac{r_i}{2}+} /(\mathfrak{h}_{i})_{x,r_i+}$ on $\pi(g)V'$ via $\pi$ is given by $\varphi \circ X_i$ for $1\leq i \leq n$, and any subgroup $H$ of $(H_{n+1})_{x,2\epsilon+}$ that stabilizes $V'$ also stabilizes $V'':=\pi(g)V'$, with $(\pi|_{H},V'') \simeq (\pi|_{H},V')$. \qed
\begin{Cor} \label{Cor-kill-C-type2} Let $n$ be a positive integer and $(x,(X_i)_{1 \leq i \leq n})$ a truncated datum of length $n$ (with corresponding truncated extended datum $(x,(r_i)_{1 \leq i \leq n}, (X_i)_{1 \leq i \leq n}, (G_i)_{1 \leq i \leq n+1})$). Let $(\pi, V_\pi)$ be a representation of $G(k)$. Let $0<\epsilon < \frac{r_n}{4}$ such that $(H_{n})_{x,r_n-2\epsilon}=(H_{n})_{x,r_n}$.
Suppose that $V'$ is a nontrivial subspace of $V_\pi^{\bigcup_{1 \leq i \leq n}(H_i)_{x,r_i+}}$ on which the action of $(H_{i})_{x,r_i,\frac{r_i}{2}+} /(H_{i})_{x,r_i+} \simeq (\mathfrak{h}_{i})_{x,r_i,\frac{r_i}{2}+} /(\mathfrak{h}_{i})_{x,r_i+}$ via $\pi$ is given by $\varphi \circ X_i$ for all $1 \leq i \leq n-1$, and the action of $(H_{n})_{x,r_n} /(H_{n})_{x,r_n+} \simeq (\mathfrak{h}_{n})_{x,r_n} /(\mathfrak{h}_{n})_{x,r_n+}$ via $\pi$ is given by $\varphi \circ X_n$. Then there exists a subspace $V'' \subset V_\pi^{\bigcup_{1 \leq i \leq n}(H_i)_{x,r_i+}}$ such that $(H_i)_{x,r_i,\frac{r_i}{2}+}/(H_i)_{x,r_i+}\simeq (\mathfrak{h}_i)_{x,r_i,\frac{r_i}{2}+}/(\mathfrak{h}_i)_{x,r_i+}$ acts on $V''$ via $\varphi \circ X_i$ for $1 \leq i \leq n$.
\end{Cor}
\textbf{Proof.\\} Let $T$ be a maximal torus of $G_{n+1}$ with Lie algebra $\mathfrak{t}=\Lie(T)(k)$. Let $d=\max(\frac{r_n}{2},r_n-3\epsilon)$. Note that the commutator $[\prod_{1 \leq i \leq n-1} (H_i)_{x,r_i,\frac{r_i}{2}+}(H_{n})_{x,r_n},(H_{n})_{x,r_n,d+}]$ acts trivially on $V'$ and hence we can replace $V'$ without loss of generality by $\pi((H_{n})_{x,r_n,d+})V'$. Since $(H_{n})_{x,r_n-2\epsilon}=(H_{n})_{x,r_n}$, the action of $(H_{n})_{x,r_n-\epsilon,d+}$ on $V'$ factors through $(H_{n})_{x,r_n-\epsilon,d+}/(H_{n})_{x,r_n+}$ and, after replacing $V'$ by a subspace if necessary, is given by $\varphi \circ \left(X_n + C_3 \right)$ for some $C_3 \in (\mathfrak{h}_n)^*_{x,-(r_n-2\epsilon)}$ that is trivial on $\mathfrak{t} \cap \mathfrak{h}_n + \mathfrak{h}_{n+1}$. Applying Corollary \ref{Cor-kill-C-new} for $j=n$ and $d=\max(\frac{r_n}{2},r_n-3\epsilon)$, we obtain $g_3 \in (H_n)_{x,\epsilon}$ such that $\pi(g_3)V'\subset V_\pi^{\bigcup_{1 \leq i \leq n}(H_i)_{x,r_i+}}$, the group $(H_{i})_{x,r_i,\frac{r_i}{2}+} /(H_{i})_{x,r_i+} \simeq (\mathfrak{h}_{i})_{x,r_i,\frac{r_i}{2}+} /(\mathfrak{h}_{i})_{x,r_i+}$ acts on $\pi(g_3)V'$ via $\varphi \circ X_i$ for $1 \leq i \leq n-1$ and $(H_{n})_{x,r_n,d+} /(H_{n})_{x,r_n+} \simeq (\mathfrak{h}_{n})_{x,r_n, d+} /(\mathfrak{h}_{n})_{x,r_n+}$ acts on $\pi(g_3)V'$ via $\varphi \circ X_n$. Replacing $V'$ by $\pi(g_3)V'$ and using the same reasoning, we can apply Corollary \ref{Cor-kill-C-new} for $j=n$ repeatedly with $d= r_n - 4 \epsilon , r_n - 5 \epsilon, r_n-6\epsilon , \hdots, r_n - (N-1) \cdot \epsilon, r_n - N \cdot \epsilon,\frac{r_n}{2}$ (and replacing $V'$ at each step if necessary), where $N$ is the largest integer for which $N \cdot \epsilon < \frac{r_n}{2}$. After the final step we obtain a subspace $V'' \subset V_\pi^{\bigcup_{1 \leq i \leq n}(H_i)_{x,r_i+}}$ on which $(H_i)_{x,r_i,\frac{r_i}{2}+}/(H_i)_{x,r_i+}\simeq (\mathfrak{h}_i)_{x,r_i,\frac{r_i}{2}+}/(\mathfrak{h}_i)_{x,r_i+}$ acts via $\varphi \circ X_i$ for $1 \leq i \leq n$. \qed
\section{Every irreducible representation contains a datum} \label{Section-existence-of-datum}
\begin{Thm} \label{Thm-existence-of-datum} Let $(\pi, V_\pi)$ be a smooth irreducible representation of $G(k)$. Then $(\pi, V_\pi)$ contains a datum. \end{Thm}
\textbf{Proof.\\} The strategy of this proof consists of recursively extending the length of a truncated datum contained in $(\pi , V_\pi)$ until the minimum of a certain function $f$ defined below is zero. We then show how to turn this truncated datum into a datum that is contained in $(\pi, V_\pi)$.
More precisely, for the recursion step, we let $j$ be a positive integer such that $(\pi,V_\pi)$ contains a truncated datum $(x_{j-1}, (X_i)_{1 \leq i \leq j-1})$ of $G$ of length $j-1$. We will then show either that $(\pi,V_\pi)$ contains a truncated datum $(x_{j}, (X_i)_{1 \leq i \leq j})$ of $G$ of length $j$, in which case we repeat the recursion, or that $(\pi,V_\pi)$ contains a datum $(x_{j-1}, (X_i)_{1 \leq i \leq j-1}, (\rho_0, V_{\rho_0}))$, in which case the proof is finished. Since $G_i \subsetneq G_{i-1}$ for $i>2$, the recursion has to terminate after finitely many steps. The base case of the recursion is given by $j=1$, in which case we let $x_0=x_{j-1}$ be an arbitrary point of $\mathscr{B}=\mathscr{B}(G,k)$ and denote by $r_0=r_{j-1}$ the depth of $(\pi,V_\pi)$ at $x_0$.
To perform the recursion step, let $j$ be a positive integer such that $(\pi,V_\pi)$ contains a truncated datum $(x_{j-1}, (X_i)_{1 \leq i \leq j-1})$ of $G$ of length $j-1$, and write $\mathscr{B}_j:=\mathscr{B}(G_j,k) \subset \mathscr{B}$, where the inclusion of Bruhat--Tits buildings is as explained in Remark \ref{Rem-BT}. We define a function $f:\mathscr{B}_j \rightarrow \mathbb{R}_{\geq 0} \cup \{\infty\}$ as follows: For $y \in \mathscr{B}_j$, we set $f(y)$ to be the smallest non-negative real number $r_j$ such that
\begin{itemize}
\item the truncated datum $(y, (X_i)_{1 \leq i \leq j-1})$ is contained in $(\pi,V_\pi)$,
\item there exists $X_j \in (\mathfrak{g}_j)^*_{y,-r_j}$ almost stable, where $\mathfrak{g}_j=\Lie(G_j)(k)$, and
\item there exists $V_{j-1} \subset V_\pi^{\cup_{1 \leq i \leq j-1}({(H_{i})_{y,r_i+}})}$
\end{itemize}
satisfying the following two properties:
\begin{enumerate}[label=(\alph*),ref=\alph*]
\item \label{item-in-pfa} for $1\leq i \leq j-1$ the group $$(H_{i})_{y,r_i,\frac{r_i}{2}+} /(H_{i})_{y,r_i+} \simeq (\mathfrak{h}_{i})_{y,r_i,\frac{r_i}{2}+} /(\mathfrak{h}_{i})_{y,r_i+}$$ associated to $(y, (X_i)_{1 \leq i \leq j-1})$ acts on $V_{j-1}$ via $\varphi \circ X_i$ (this condition is automatically satisfied for $j=1$), and
\item \label{item-in-pfb} $V_{j-1}^{(H_j)_{y,r_j+}}$ contains a nontrivial subspace $V_j'$ such that if $r_j>0$, then $(H_j)_{y,r_j}/(H_j)_{y,r_j+} \simeq (\mathfrak{h}_j)_{y,r_j}/(\mathfrak{h}_j)_{y,r_j+}$ acts on $V_j'$ via $\varphi \circ X_j$.
\end{enumerate}
If such a real number $r_j$ does not exist, then we set $f(y)=\infty$. Note that $(y, (X_i)_{1 \leq i \leq j-1})$ is a truncated datum of $G$ by Lemma \ref{Lemma-truncateddatum-with-y}. Moreover, $f$ is well defined, because the Moy--Prasad filtration is semi-continuous and for every $r \in \mathbb{R}$ every $(\mathfrak{g}_j)^*_{y,r+}$-coset contains an almost stable element (e.g. take an element dual to a semisimple element under the non-degenerate bilinear form $B$ provided by \cite{Adler-Roche}, see Remark \ref{Rem-B}). In addition, by our assumption, $f(x_{j-1}) \leq r_{j-1}$ (because if $j>1$, we could take $X_j=0$ for $r_j=r_{j-1}$). In the case $j=1$, the real number $f(y)$ is simply the depth of $(\pi, V_\pi)$ at $y$.
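(To illustrate the base case, though this observation is not needed for the argument: for $j=1$ the function $f$ records the depth of $(\pi,V_\pi)$ at each point of $\mathscr{B}$. In particular, if $(\pi,V_\pi)$ has depth zero, then the minimum of $f$ is zero, the recursion enters Case 2 below at the first step, and the resulting datum has length zero; it consists of a point $x$ together with a depth-zero irreducible representation $(\rho_0,V_{\rho_0})$ as in Definition \ref{Def-extendeddatum}, in the spirit of the unrefined minimal $K$-types of Moy and Prasad.)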
\begin{Lemmasub} \label{Lemma-Aj}
\begin{enumerate}[label=(\roman*),ref=\roman*]
\item \label{Lemma-Aj-ii} $f(g.x)=f(x)$ for all $x \in \mathscr{B}_j$ and $g \in G_j(k)$
\item \label{Lemma-Aj-iii} The subset $f^{-1}(\mathbb{R}_{\geq 0})$ of $\mathscr{B}_j$ is open in $\mathscr{B}_j$ and the function $f:\mathscr{B}_j \rightarrow \mathbb{R}_{\geq 0} \cup \{\infty\}$ is continuous on $f^{-1}(\mathbb{R}_{\geq 0})$.
\item \label{Lemma-Aj-iv} The subset $f^{-1}(\mathbb{R}_{\geq 0})$ of $\mathscr{B}_j$ is closed in $\mathscr{B}_j$, hence equal to $\mathscr{B}_j$.
\end{enumerate}
\end{Lemmasub}
\textbf{Proof of Lemma \ref{Lemma-Aj}.}\\
\textit{Proof of part (\ref{Lemma-Aj-ii}).} Observe that $X_i$ ($1\leq i <j$) and $G_i$ ($1 \leq i \leq j$) are stabilized by $G_j(k)$, hence the $G_j(k)$-invariance of $f$ follows.
\textit{Proof of part (\ref{Lemma-Aj-iii}).} If $j=1$, then $f(x)$ is the depth of $\pi$ at $x$, and the claim is true. Hence we assume $j>1$. Let $(x, (X_i)_{1 \leq i \leq j-1})$ be a truncated datum contained in $(\pi,V_\pi)$, $X_j \in (\mathfrak{g}_j)^*_{x,-f(x)}$ almost stable and $V_{j}' \subset V_{j-1} \subset V_\pi^{\cup_{1 \leq i \leq j-1}((H_i)_{x,r_i+})}$ satisfying the conditions \eqref{item-in-pfa} and \eqref{item-in-pfb} above. If $f(x)>0$, then set $r_j=f(x)$, otherwise let $0<r_j \leq r_{j-1}$ be arbitrary. For $1 \leq i \leq j-1$, let $d_{i}<r_i$ be a positive real number such that $(G_i)_{x,r_i-d_i}=(G_i)_{x,r_i}$. Note that $d_i>0$ exists for $1 \leq i \leq j-1$ by the semi-continuity of the Moy--Prasad filtration. Let $\min\{\frac{r_j}{4},\frac{d_i}{2} \, | \, 1 \leq i \leq j-1\}>\epsilon>0$ and let $y \in \mathscr{B}_j$ with $\mathrm{d}(x,y)<\epsilon$. Let $T$ be a maximal torus of $G_j$ that splits over a tamely ramified extension $E$ of $k$ such that $x$ and $y$ are contained in $\mathscr{A}(T_E,E)$. Then $(y,(X_i)_{1 \leq i \leq j-1})$ is a truncated datum by Lemma \ref{Lemma-truncateddatum-with-y}. By the normalization of the distance $\mathrm{d}$ on the building $\mathscr{B}$, we have $\abs{\alpha(x-y)}\leq \mathrm{d}(x,y)<\epsilon$ for all $\alpha \in \Phi(G,T_E)$. Hence, since $X_i$ vanishes on $\mathfrak{g}_i \cap \bigoplus_{\alpha \in \Phi(G_i,T_E)} \mathfrak{g}(E)_{\alpha}$ for $1 \leq i \leq j-1$, we have $V_j'\subset V_\pi^{\cup_{1 \leq i \leq j-1}((H_i)_{y,r_i+})}$ and the commutator $\left[\prod_{1 \leq i \leq j-1} (H_{i})_{y,r_i-\frac{d_i}{2},\frac{r_i}{2}+}(H_j)_{y,r_j+\epsilon},\ \prod_{1 \leq i \leq j-1} (H_{i})_{y,r_i-\frac{d_i}{2},\frac{r_i}{2}+}(H_j)_{y,r_j+\epsilon}\right]$ is contained in $\prod_{1 \leq i \leq j-1} \ker (\varphi \circ X_i)|_{(H_{i})_{x,r_i,\frac{r_i}{2}+}}(H_j)_{x,r_j+}$. Hence, adjusting $V_j'$ if necessary (to a subspace of $\pi(\prod_{1 \leq i \leq j-1} (H_{i})_{y,r_i-\frac{d_i}{2},\frac{r_i}{2}+})V_j'$), the action of $$(H_{i})_{y,r_i-\frac{d_i}{2},\frac{r_i}{2}+} /(H_{i})_{y,r_i+} \simeq (\mathfrak{h}_{i})_{y,r_i-\frac{d_i}{2},\frac{r_i}{2}+} /(\mathfrak{h}_{i})_{y,r_i+}$$ via $\pi$ on $V_j'$ is given by $$\varphi \circ \left(C_i+ X_i\right)$$ for some $C_i \in (\mathfrak{h}_i)^*_{y,-(\frac{r_i}{2}+\epsilon)}$ that is trivial on $(\mathfrak{t} \cap \mathfrak{h}_i) + \mathfrak{h}_{i+1}$ for all $1 \leq i \leq j-1$.
Moreover, $X_j \in (\mathfrak{g}_j)^*_{x,-r_j} \subset (\mathfrak{g}_j)^*_{y,-r_j-\epsilon}$, and the action of $(H_j)_{y,r_j+\epsilon}$ on $V_{j}'$ factors through $(H_j)_{y,r_j+\epsilon}/(H_j)_{y,(r_j+\epsilon)+}\simeq (\mathfrak{h}_j)_{y,r_j+\epsilon}/(\mathfrak{h}_j)_{y,(r_j+\epsilon)+}$, on which it is given by $\varphi \circ X_j$ (which, as an aside, yields the trivial action). By Corollary \ref{Cor-kill-C-type1}, there exists $g\in G_{y,\frac{r_{j-1}}{2}-\epsilon}\subset G_{y,0+}$ such that $\pi(g)V'_j \subset V_\pi^{\cup_{1 \leq i \leq j-1}((H_i)_{y,r_i+})}$, the action of $(H_{i})_{y,r_i,\frac{r_i}{2}+} /(H_{i})_{y,r_i+} \simeq (\mathfrak{h}_{i})_{y,r_i,\frac{r_i}{2}+} /(\mathfrak{h}_{i})_{y,r_i+}$ on $\pi(g)V_j'$ via $\pi$ is given by $\varphi \circ X_i$ for $1 \leq i \leq j-1$, and the action of $(H_j)_{y,r_j+\epsilon}\subset (H_j)_{y,2\epsilon+}$ on $\pi(g)V_j'$ factors through $(H_j)_{y,r_j+\epsilon}/(H_j)_{y,(r_j+\epsilon)+}\simeq (\mathfrak{h}_j)_{y,r_j+\epsilon}/(\mathfrak{h}_j)_{y,(r_j+\epsilon)+}$ and is given by $\varphi \circ X_j$. Thus $f(y)\leq r_j+\epsilon$. Hence the set $f^{-1}(\mathbb{R}_{\geq 0})$ is open in $\mathscr{B}_j$. Moreover, if $f(x)=0$, then this implies that $f$ is continuous on $f^{-1}(\mathbb{R}_{\geq 0})$, because $f(y) \geq 0$ and $r_j>0$ can be chosen arbitrarily small in this case. It remains to prove continuity around $x$ in the case $f(x)=r_j >0$. Suppose $f(y)< r_j-\epsilon$, and let $X'_j \in (\mathfrak{g}_j)^*_{y,-(r_j-\epsilon)+}$ be almost stable satisfying condition \eqref{item-in-pfb} above. Note that $(G_i)_{y,r_i-\frac{d_i}{2}}=(G_i)_{y,r_i}$ for $1 \leq i \leq j-1$. Hence, by the same reasoning as above (switching $x$ and $y$), we deduce that $f(x) < r_j$, a contradiction. Thus $f(y) \geq r_j-\epsilon$ and $f$ is continuous on $f^{-1}(\mathbb{R}_{\geq 0})$.
\textit{Proof of part (\ref{Lemma-Aj-iv}).} Suppose $y \in \mathscr{B}_j$ is in the closure of $f^{-1}(\mathbb{R}_{\geq 0})$, and let $d>0$ be sufficiently small such that for all $r \in \mathbb{R}_{\geq d}$ with $G_{y,r} \neq G_{y,r+}$ we have $G_{y,{r-d}}=G_{y,r}$. Let $\frac{d}{8} > \epsilon >0$ and $x \in f^{-1}(\mathbb{R}_{\geq 0})$ with $\mathrm{d}(x,y)<\epsilon$. Then $G_{x,r} \neq G_{x,r+}$ implies $G_{x,{r-d+2\epsilon}}=G_{x,r}$ (if $r \in \mathbb{R}_{\geq d-2\epsilon}$), hence $G_{x,{r-\frac{d}{2}}}=G_{x,r}$, and $G_{x,0+}=G_{x,\frac{d}{2}}$. Thus we can apply the proof of part \eqref{Lemma-Aj-iii} to deduce that $f(y)$ is finite. \qed \textsubscript{Lemma \ref{Lemma-Aj}}
Since $f$ is $G_j(k)$-invariant, continuous, bounded below by zero, and the fundamental domain for the action of $G_j(k)$ on $\mathscr{B}_j$ is bounded, there exists a point $x_j \in \mathscr{B}_j$ such that $f(x_j) \leq f(x)$ for all $x \in \mathscr{B}_j$. Define $r_j=f(x_j)$ and note that $r_j \leq f(x_{j-1}) \leq r_{j-1}$. We distinguish two cases.
\textbf{Case 1: $r_j>0$.} Let $(x_j, (X_i)_{1 \leq i \leq j-1})$ be a truncated datum contained in $(\pi,V_\pi)$, $X_j \in (\mathfrak{g}_j)^*_{x_j,-r_j}$ almost stable and $V_{j}' \subset V_{j-1} \subset V_\pi^{\cup_{1 \leq i \leq j-1}((H_i)_{x_j,r_i+})}$ satisfying the conditions \eqref{item-in-pfa} and \eqref{item-in-pfb} above.
\begin{Lemmasub} \label{Lemma-Pf1} The element $X_j$ of $\mathfrak{g}_j^*$ is almost strongly stable at $x_j \in \mathscr{B}_j$.
\end{Lemmasub}
\textbf{Proof of Lemma \ref{Lemma-Pf1}.} Suppose $X_j$ is not almost strongly stable. Since $X_j$ is almost stable, this implies that $\overline{X_j} \in ((\mathfrak{g}_j)_{x_j,r_j}/(\mathfrak{g}_j)_{x_j,r_j+})^*$ is unstable. Thus, by \cite[Corollary~4.3]{Kempf} there exists a non-trivial one parameter subgroup $\overline \lambda: \mathbb{G}_m \rightarrow {(\mathbb{R}P_j)}_{x_j}$ in the reductive quotient ${(\mathbb{R}P_j)}_{x_j}$ of $G_j$ at $x_j$ such that $\lim_{t \rightarrow 0}\overline\lambda(t).\overline{X_j}=0$. This means $\overline{X_j}$ is trivial on the root spaces corresponding to roots $\alpha$ with $\left\langle\alpha,\overline\lambda\right\rangle<0$. Let $\mathscr{S}$ be a split torus of the parahoric group scheme ${(\mathbb{P}_j)}_{x_j}$ of $G_j$ such that $\mathscr{S}_{\mathfrak f}$ is a maximal split torus of $({\mathbb{R}P_j})_{x_j}$ containing $\overline \lambda(\mathbb{G}_m)$ and such that $\mathscr{S}_k$ is contained in a maximal torus $T_j \subset G_j$ which splits over a tame extension $E$ of $k$ and whose apartment $\mathscr{A}(T_E,E) \cap \mathscr{B}_j$ contains $x_j$. Let $\lambda: \mathbb{G}_m \rightarrow \mathscr{S}_{k}$ be the one parameter subgroup corresponding to $\overline \lambda$. Then for $\epsilon>0$ small enough, we have $(H_j)_{x_j+\epsilon \lambda,r_j}\subset (H_j)_{x_j,r_j}$ and $X_j \in (\mathfrak{g}_j)_{x_j+\epsilon \lambda,-r_j}^*$, where $x_j+\epsilon \lambda \in \mathscr{A}(T_E,E) \cap \mathscr{B}_j$. Moreover, the image of $X_j$ in $(\mathfrak{g}_j)_{x_j+\epsilon \lambda,-r_j}^*/(\mathfrak{g}_j)_{x_j+\epsilon \lambda,(-r_j)+}^*$ is trivial. Let $r_j > \delta >0$ such that the subgroup $(H_j)_{x_j+\epsilon \lambda,r_j-\delta}$ equals $(H_j)_{x_j+\epsilon \lambda,r_j}$ and therefore acts trivially on $V_j'$. Analogously to the first part of the proof of Lemma \ref{Lemma-Aj}\eqref{Lemma-Aj-iii}, for $\epsilon$ sufficiently small, there exist $d_i>0$ for $1 \leq i \leq j-1$ such that we have $V_j'\subset V_\pi^{\cup_{1 \leq i \leq j-1}((H_i)_{x_j+\epsilon \lambda,r_i+})}$ and (after potentially adjusting $V_j'\subset V_\pi^{\cup_{1 \leq i \leq j-1}((H_i)_{x_j+\epsilon \lambda,r_i+})}$) the action of $(H_{i})_{x_j+\epsilon \lambda,r_i-\frac{d_i}{2},\frac{r_i}{2}+} /(H_{i})_{x_j+\epsilon \lambda,r_i+} \simeq (\mathfrak{h}_{i})_{x_j+\epsilon \lambda,r_i-\frac{d_i}{2},\frac{r_i}{2}+} /(\mathfrak{h}_{i})_{x_j+\epsilon \lambda,r_i+}$ via $\pi$ on $V_j'$ is given by $\varphi \circ \left(C_i + X_i\right)$ for some $C_i \in (\mathfrak{h}_i)^*_{x_j+\epsilon \lambda,-(\frac{r_i}{2}+\epsilon)}$ that is trivial on $(\mathfrak{t} \cap \mathfrak{h}_i) + \mathfrak{h}_{i+1}$ for all $1 \leq i < j$, the group $(H_j)_{x_j+\epsilon\lambda,r_j-\delta}$ acts trivially on $V_j'$, and $(x_j+\epsilon\lambda, (X_i)_{1\leq i \leq j-1})$ is a truncated datum.
Assuming $\epsilon$ is sufficiently small and applying Corollary \ref{Cor-kill-C-type1} (or if $j=1$, set $g=1$), we obtain $g \in G_{x_j+\epsilon\lambda,0+}$ such that $\pi(g)V'_j \subset V_\pi^{\cup_{1 \leq i \leq j-1}((H_i)_{x_j+\epsilon \lambda,r_i+})}$, the action of $(H_{i})_{x_j+\epsilon \lambda,r_i,\frac{r_i}{2}+} /(H_{i})_{x_j+\epsilon \lambda,r_i+} \simeq (\mathfrak{h}_{i})_{x_j+\epsilon \lambda,r_i,\frac{r_i}{2}+} /(\mathfrak{h}_{i})_{x_j+\epsilon \lambda,r_i+}$ on $\pi(g)V_j'$ via $\pi$ is given by $\varphi \circ X_i$ for $1 \leq i \leq j-1$, and the action of $(H_j)_{x_j+\epsilon \lambda,r_j-\delta}$ on $\pi(g)V_j'$ is trivial. Hence $f(x_j+\epsilon\lambda)\leq r_j-\delta < r_j=f(x_j)$, which contradicts the choice of $x_j$. Thus $X_j$ is almost strongly stable. \qed\textsubscript{Lemma \ref{Lemma-Pf1}}
Now we can show that, after changing $x_j$ and $X_j$ if necessary, we obtain a truncated datum of $G$ of length $j$ that is contained in $(\pi, V_\pi)$.
\begin{Lemmasub} \label{Lemma-Pf2} There exists a choice of $x_j$ and $X_j$ as above such that $(x_j, (X_i)_{1 \leq i \leq j})$ is a truncated datum contained in $(\pi, V_\pi)$. \end{Lemmasub}
\textbf{Proof of Lemma \ref{Lemma-Pf2}.} Let $x_j$ and $X_j$ be as in Lemma \ref{Lemma-Pf1}. Let $\epsilon>0$ be sufficiently small (as specified later). By Proposition \ref{Lemma-almoststable-generic-representative} (applied to $G_j$) there exists $y \in \mathscr{B}_j \subset \mathscr{B}$ and $\widetilde X \in X_j + (\mathfrak{g}_j)_{y,(-r_j)+}^*$ such that $\mathrm{d}(x_j,y)< \epsilon$, the element $\widetilde X$ is generic of depth $-r_j$ at $y$, and $x_j$ and $y$ are contained in $\mathscr{B}(\mathrm{Cent}_{G_j}(\widetilde X),k)\subset \mathscr{B}_j$. Note that for $\epsilon$ sufficiently small, we have $(H_j)_{y,r_j} \subset (H_j)_{x_j,r_j}$ and the action of $(H_j)_{y,r_j}$ on $V_j'$ factors through $(H_j)_{y,r_j}/(H_j)_{x_j,r_j+}$ on which it is given by $X_j$. Since $X_j-\widetilde X \in (\mathfrak{h}_j)_{y,(-r_j)+}^*$, this difference is trivial on $(\mathfrak{h}_j)_{y,r_j}$. Therefore the action of $(H_j)_{y,r_j}/(H_j)_{x_j,r_j+}$ on $V_j'$ is also given by $\widetilde X$, and, in particular, it factors through $(H_j)_{y,r_j}/(H_j)_{y,r_j+}$. Moreover, the tuple $(y,(X_i)_{1 \leq i \leq j-1})$ is a truncated datum by Lemma \ref{Lemma-truncateddatum-with-y}. Substituting $X_j$ by $\widetilde X$ and applying Corollary \ref{Cor-kill-C-type1} (if $j>1$) as in the proofs of Lemma \ref{Lemma-Aj} and Lemma \ref{Lemma-Pf1} and possibly substituting $V_j'$ by $\pi(g)V_j'$ for some $g \in G_{y,0+}$, we can achieve that \eqref{item-in-pfa} and \eqref{item-in-pfb} above are satisfied at the point $y$. This implies that $f(y)=r_j$. Note that $(y, (X_i)_{1 \leq i \leq j})$ is a truncated datum. (If $j>2$, then $G_j \neq \mathrm{Cent}_{G_j}(X_j)$, because otherwise $X_j(\mathfrak{h}_j)=0$ and hence $f(x_j)$ would not be minimal.) By Corollary \ref{Cor-kill-C-type2} this truncated datum is contained in $(\pi, V_\pi)$. \qed\textsubscript{Lemma \ref{Lemma-Pf2}}
This finishes the recursion step. Since $G_j \subsetneq G_{j-1}$ for $j>2$, after repeating this construction finitely many times we obtain an integer $n$ and a truncated datum $(x_n, (X_i)_{1 \leq i \leq n})$ contained in $(\pi,V_\pi)$ with $r_{n+1}=0$, i.e. we move to the second case.
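(As an aside, not needed in what follows: since each proper twisted Levi subgroup in the chain $G_2 \supsetneq G_3 \supsetneq \hdots \supsetneq G_{n+1}$ has strictly smaller semisimple rank than its predecessor, the length $n$ obtained in this way is bounded by the semisimple rank of $G$ plus one.)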
\textbf{Case 2: $r_j=0$.} Let $V_j'$ be the maximal subspace of $V_\pi^{\cup_{1 \leq i \leq j}((H_i)_{x_j,r_i+})}$ satisfying \eqref{item-in-pfa} and \eqref{item-in-pfb} above. Note that $(H_j)_{x_j,0}$ stabilizes $V_j'$, because $(H_j)_{x_j,0}$ centralizes $X_i$ and stabilizes $(H_i)_{x_j,r_i+}$ and $(H_i)_{x_j,r_i,\frac{r_i}{2}+}$ for $1 \leq i <j$. Let $(\rho_0, V_{\rho_0})$ be an irreducible subrepresentation of $V_j'$ viewed as a representation of $(H_j)_{x_j,0}/(H_j)_{x_j,0+}$. Then $(x_j, (X_i)_{1 \leq i \leq j-1}, (\rho_0,V_{\rho_0}))$ is a datum contained in $(\pi, V_\pi)$. \qed
\begin{Rem} \label{Rmk-uniqueness} In the next section we will use the existence of a maximal datum for a given representation $(\pi, V_\pi)$ to deduce the existence of a type for $(\pi, V_\pi)$. Note however that a datum itself might not determine the Bernstein component, i.e. a given datum might be a maximal datum for representations in different Bernstein components. If one is interested in determining the Bernstein component uniquely, one has to enhance the datum slightly (to a representation of $(M_{n+1})_x$, where $M_{n+1}$ is a Levi subgroup of $G_{n+1}$ that we are going to attach to $x$ and $G_{n+1}$ in Section \ref{Section-existence-of-type}, page \pageref{page-Levi}). Such an enhancement determines the Bernstein component uniquely by \cite[10.3~Theorem]{KimYu}, which is based on the work of Hakim--Murnaghan \cite{Hakim-Murnaghan} for supercuspidal representations. The assumption required in Hakim--Murnaghan was removed by Kaletha in \cite[Corollary~3.5.5]{Kaletha}. \end{Rem}
\section{From a datum to types} \label{Section-existence-of-type}
Let $(\pi,V_\pi)$ be a smooth irreducible representation of $G(k)$, and let $(x, (X_i)_{1 \leq i \leq n}, (\rho_0, V_{\rho_0}))$ be a maximal datum for $(\pi, V_\pi)$, which exists by Theorem \ref{Thm-existence-of-datum}. In this section we show how to use this datum in order to exhibit a type contained in $(\pi, V_\pi)$. In order to do so we will define characters $\phi_i:G_{i+1}(k) \rightarrow \mathbb{C}^*$ of depth $r_i$ for $1 \leq i \leq n$ and a depth-zero representation of a compact open subgroup $K_{G_{n+1}}$ of $G_{n+1}(k)$ that contains $(G_{n+1})_{x,0}$. We will prove that these objects satisfy all necessary conditions imposed by Kim and Yu (\cite{KimYu}) so that Yu's construction (\cite{Yu}) yields a type. In Theorem \ref{Thm-exhaustion-of-types} we will conclude that the resulting type is contained in $(\pi, V_\pi)$. Recall that Moy and Prasad (\cite[6.3 and 6.4]{MP2}) attach to $x$ and $G_{n+1}$ a Levi subgroup $M_{n+1}$ \label{page-Levi} of $G_{n+1}$ such that $x \in \mathscr{B}(M_{n+1},k) \subset \mathscr{B}(G_{n+1},k)$ and $(M_{n+1})_{x,0}$ is a maximal parahoric subgroup of $M_{n+1}(k)$ with $(M_{n+1})_{x,0}/(M_{n+1})_{x,0+} \simeq (G_{n+1})_{x,0}/(G_{n+1})_{x,0+}$. We denote by $(M_{n+1})_{x}$ the stabilizer of $x \in \mathscr{B}(M_{n+1},k)$ in $M_{n+1}(k)$. Then, following Kim and Yu (\cite[7.1 and 7.3]{KimYu}), we define the group $K_{G_{n+1}}$ to be the group generated by $(M_{n+1})_x$ and $(G_{n+1})_{x,0}$. Let $V'$ be a subspace of $V_\pi$ as provided by Definition \ref{Def-datum-contained} for the datum $(x, (X_i)_{1 \leq i \leq n}, (\rho_0, V_{\rho_0}))$ contained in $(\pi, V_\pi)$, and let $\widetilde V$ \label{page-V-tilda} be the irreducible $K_{G_{n+1}}$-subrepresentation of $V_\pi$ containing $V'$.
Note that any $g \in K_{G_{n+1}} \subset (G_{n+1})_x$ centralizes $X_i$ for $1 \leq i \leq n$ and hence stabilizes $(H_i)_{x,r_i+}$ ($1 \leq i \leq n+1$). Thus $\widetilde V$ is contained in $V_\pi^{\cup_{1 \leq i \leq n+1}((H_i)_{x,r_i+})}$. Moreover, let $T$ be a maximal torus of $M_{n+1} \subset G_{n+1}$ whose apartment contains $x$. Then, for $t \in T(k)_{0+}$ and $g \in K_{G_{n+1}}$, we have $tgt^{-1}g^{-1} \in (H_{n+1})_{x,0+}$. Hence, if $v \in \widetilde V$ is an element such that $T(k)_{0+}$ preserves $\mathbb{C} \cdot v$, then $T(k)_{0+}$ also preserves $\mathbb{C} \cdot gv$ and acts on both spaces via the same character. Since $\widetilde V$ is an irreducible $K_{G_{n+1}}$-representation, we deduce that $T(k)_{0+}$ acts on $\widetilde V$ via some character $\phi_T$ (times the identity). Before using $\phi_T$ to define the characters $\phi_i$, we recall Lemma~3.1.3 of \cite{Kaletha}.
\begin{Lemma}[\cite{Kaletha}] \label{Lemma-Kaletha} If $r \in \mathbb{R}_{>0}$ and $1 \rightarrow A \rightarrow B \rightarrow C \rightarrow 1 $ is an exact sequence of tori that are defined over $k$ and split over a tamely ramified extension of $k$, then $$ 1 \rightarrow A(k)_r \rightarrow B(k)_r \rightarrow C(k)_r \rightarrow 1$$ is an exact sequence. \end{Lemma}
\begin{Cor} \label{Cor-generated} Let $r \in \mathbb{R}_{>0}$ and $1 \leq j \leq n+1$. Then $(G_{j})_{x,r}$ is generated by $T(k)_{r}$ and $(H_{j})_{x,r}$. \end{Cor}
\textbf{Proof.\\} Note that $T\cap H_{j}$ is a maximal torus of $H_{j}$ (\cite[Example~2.2.6]{ConradSGA3}). Then by Lemma \ref{Lemma-Kaletha} the map $T(k)_r \rightarrow (T/(T\cap H_{j}))(k)_r = (G_{j}/H_{j})(k)_r$ is surjective, and hence $T(k)_r$ also surjects onto $G_{j}(k)_{x,r}/H_{j}(k)_{x,r} \subset (G_{j}/H_{j})(k)_r$. (That $G_{j}(k)_{x,r}$ maps to $(G_{j}/H_{j})(k)_r$ can be seen by considering a tame extension over which $T$ splits.) \qed
Now we define $\phi_i$ recursively, first for $i=n$, then $i=n-1, n-2, \hdots, 1$. Suppose we have already defined $\phi_{n}, \hdots, \phi_{j+1}$ of depth $r_n, \hdots, r_{j+1}$ for some $1 \leq j \leq n$ ($j=n$ meaning no character has been defined yet) such that $$\pi|_{T(k) \cap (H_{j+1})_{x,0+}} = \phi_n|_{T(k)\cap (H_{j+1})_{x,0+}} \cdot \hdots \cdot \phi_{j+1}|_{T(k)\cap (H_{j+1})_{x,0+}} \cdot \mathrm{id}_{\widetilde V} \, \text{ on } \widetilde V .$$ Then we let $\phi_j'=\phi_T \cdot \phi_n|_{T(k)_{0+}}^{-1} \cdot \hdots \cdot \phi_{j+1}|_{T(k)_{0+}}^{-1}$, which is trivial on $T(k) \cap (H_{j+1})_{x,0+}=(T\cap H_{j+1})(k)_{0+}$ and on $T(k) \cap (H_{j})_{x,r_j+}=(T\cap H_{j})(k)_{r_j+}$. By Lemma \ref{Lemma-Kaletha} we have $(G_{j+1}/H_{j+1})(k)_{0+}\simeq (T/(T\cap H_{j+1}))(k)_{0+} \simeq T(k)_{0+}/(T\cap H_{j+1})(k)_{0+}$. Hence $\phi_j'$ defines a character of $(G_{j+1}/H_{j+1})(k)_{0+}$ that extends via Pontryagin duality to some character $\widetilde \phi_j'$ of $(G_{j+1}/H_{j+1})(k)$. Restricting $\widetilde \phi_j'$ to the image of $G_{j+1}(k)$, we obtain a character of $G_{j+1}(k)$ that we also denote by $\widetilde \phi_j'$. Note that $\widetilde \phi_j'$ is trivial on $H_{j+1}(k)$ and $\widetilde \phi_j'|_{T(k)_{0+}}$ coincides with $\phi_j'$. Similarly, the character $\phi_j'$ gives rise to a character $\hat \phi_j'$ of $(G_{j}/H_{j})(k)_{r_j+}$ that can be extended and composed to yield a character (also denoted by $\hat \phi_j'$) of $G_j(k)$ that is trivial on $H_j(k)$ and coincides with $\widetilde \phi_j'$ on $T(k)_{r_j+}$.
If $j=1$, we may and do choose $\hat \phi_j'$ to be the trivial character. We define $\phi_j=\widetilde \phi_j' \cdot (\hat \phi_j')^{-1}|_{G_{j+1}(k)}$. Then $\phi_j$ has depth $r_j$ (because by considering a tame extension that splits the torus $T$, we see that $G_{j+1}(k)_{x,r_j+}$ maps to $(G_{j+1}/H_{j+1})(k)_{r_j+}\simeq T(k)_{r_j+}/(T\cap H_{j+1})(k)_{r_j+}$; or use Corollary \ref{Cor-generated}) and $$\pi|_{T(k) \cap (H_{j})_{x,0+}} = \phi_n|_{T(k)\cap (H_{j})_{x,0+}} \cdot \hdots \cdot \phi_{j}|_{T(k)\cap (H_{j})_{x,0+}} \cdot \mathrm{id}_{\widetilde V} \, \text{ on } \widetilde V .$$
\begin{Lemma} \label{Lemma-phi-properties} For $1 \leq j \leq n$ the character $\phi_j:G_{j+1}(k) \rightarrow \mathbb{C}^*$ satisfies the following properties:
\begin{enumerate}[label=(\roman*),ref=\roman*]
\item \label{item-character-extension-0} $\phi_j$ is trivial on ${(G_{j+1})_{x,r_j+}}$ and on $H_{j+1}(k)$,
\item \label{item-character-extension-1} $\phi_j|_{(H_j)_{x,r_j}\cap G_{j+1}(k)}$ factors through $$((H_j)_{x,r_j}\cap G_{j+1}(k))/((H_j)_{x,r_j+}\cap G_{j+1}(k)) \simeq ((\mathfrak{h}_j)_{x,r_j}\cap \mathfrak{g}_{j+1})/((\mathfrak{h}_j)_{x,r_j+}\cap \mathfrak{g}_{j+1})$$ and is given by $\varphi \circ X_j|_{(\mathfrak{h}_j)_{x,r_j}\cap \mathfrak{g}_{j+1}}$,
\item \label{item-character-extension-generic} $\phi_j$ is $G_j$-generic of depth $r_j$ (in the sense of \cite[\S~9]{Yu}) relative to $x$, and
\item \label{item-character-extension-2} the group $(G_{n+1})_{x,0+}$ acts on $\widetilde V$ via $\prod_{1 \leq i \leq n} \phi_i|_{(G_{n+1})_{x,0+}}$.
\end{enumerate}
\end{Lemma}
\textbf{Proof.\\} Part \eqref{item-character-extension-0} follows immediately from the above construction. For Part \eqref{item-character-extension-1}, note that using Corollary \ref{Cor-generated} we see that $(H_j)_{x,r_j} \cap G_{j+1}(k)=H_j(k) \cap (G_{j+1})_{x,r_j}$ is generated by $T(k)_{r_j} \cap H_j(k)$ and $(H_{j+1})_{x,r_j}$. Since $\phi_j$ is trivial on $H_{j+1}(k)$ and coincides with $\phi_T$ on $T(k)_{r_j} \cap H_j(k)$, the claim follows from the properties of $V'$ in Definition \ref{Def-datum-contained}. For Part \eqref{item-character-extension-generic}, note that by Part \eqref{item-character-extension-1} there exists $Y \in (\mathfrak{g}_{j+1})^*_{x,-r_j}$ such that $Y$ is trivial on $\mathfrak{h}_{j}$ and $\phi_j|_{(G_{j+1})_{x,r_j}}$ is given by the character of $(G_{j+1})_{x,r_j}/(G_{j+1})_{x,r_j+} \simeq (\mathfrak{g}_{j+1})_{x,r_j}/(\mathfrak{g}_{j+1})_{x,r_j+}$ arising from $\varphi \circ (Y+X_j)$. Since $Y$ is trivial on $\mathfrak{h}_j$, the element $Y$ is fixed under the dual of the adjoint action of $G_{j+1}$ on $\mathfrak{g}_{j+1}$. Hence by the definition of $G_{j+1}$, the group $G_{j+1}$ centralizes $Y+X_j$. Moreover, if $T$ is a maximal torus of $G_{j+1}$, and $\alpha \in \Phi(G_{j},T_{\overline k})-\Phi(G_{j+1},T_{\overline k})$, then $H_\alpha \in (\mathfrak{h}_j)_{\overline k}$ and hence $$ \val((Y+X_j)(H_\alpha))= \val(X_j(H_\alpha)) = -r_j,$$ where the last equality follows from Lemma \ref{Lemma-structure-of-X}. Since $p$ is not a torsion prime for the dual root datum of $G_j$ by Lemma \ref{Lemma-restriction-on-p}\eqref{item-p-Levi}, \eqref{item-torsion-prime} and \eqref{item-index-of-connection} (applied to the dual root datum of $G_j$), the character $\phi_j$ is $G_j$-generic of depth $r_j$ by \cite[Lemma~8.1]{Yu}.
Part \eqref{item-character-extension-2} follows from the observation above that $\phi_1|_{T(k)_{0+}}=\phi_T\cdot \phi_n|_{T(k)_{0+}}^{-1} \cdot \hdots \cdot \phi_{2}|_{T(k)_{0+}}^{-1}$ and that $(H_{n+1})_{x,0+}$ acts trivially on $\widetilde V$ together with Corollary \ref{Cor-generated}. \qed
\begin{Cor} \label{Cor-repzero} The irreducible representation $( \prod_{1 \leq i \leq n} \phi_i^{-1}|_{K_{G_{n+1}}} \cdot \pi|_{K_{G_{n+1}}}, \widetilde V )$ of $K_{G_{n+1}}$ is trivial on $(G_{n+1})_{x,0+}$ and its restriction to $(H_{n+1})_{x,0}$ contains $(\rho_0,V_{\rho_0})$ as an irreducible subrepresentation. \end{Cor}
\textbf{Proof.\\} This is an immediate consequence of Lemma \ref{Lemma-phi-properties} \eqref{item-character-extension-0} and \eqref{item-character-extension-2} and the definition of $\widetilde V$. \qed
Recall our convention that by ``type'' we mean an $\mathfrak{s}$-type for some inertial equivalence class $\mathfrak{s} \in \mathfrak{I}$. In order to obtain a type for our representation $(\pi,V_\pi)$ of $G(k)$ using the construction of Kim and Yu in \cite{KimYu} we denote by $r_\pi$ the depth of the representation $(\pi, V_\pi)$, i.e. $r_\pi=r_1$ if $n \geq 1$ and $r_\pi=0$ if $n=0$, and we make the following definitions: \label{page-Yu-datum}
\begin{eqnarray*}
\vec G &=& \left\{ \begin{array}{ll} (G_{n+1}, G_n, \hdots, G_2, G_1=G) & \text{ if } G_2 \neq G_1 \text{ or } n=0 \\ (G_{n+1}, G_n, \hdots, G_3, G_2=G) & \text{ if } G_2=G_1 \end{array}\right. \\
\vec r & = & \left\{ \begin{array}{ll} (r_{n}, r_{n-1}, \hdots, r_2, r_1, r_\pi) & \text{ if } G_2 \neq G_1 \text{ or } n=0 \\ (r_{n}, r_{n-1}, \hdots, r_2, r_1) & \text{ if } G_2=G_1 \end{array}\right. \\
\vec \phi & = & \left\{ \begin{array}{ll} (\phi_{n}, \phi_{n-1}, \hdots, \phi_2, \phi_1, 1) & \text{ if } G_2 \neq G_1 \text{ or } n=0 \\ (\phi_{n}, \phi_{n-1}, \hdots, \phi_2, \phi_1) & \text{ if } G_2=G_1 \end{array}\right. \\
K &=& K_{G_{n+1}}(G_{n})_{x,\frac{r_n}{2}}\hdots(G_1)_{x,\frac{r_1}{2}} \\
K_{0+} &=& (G_{n+1})_{x,0+}(G_{n})_{x,\frac{r_n}{2}}\hdots(G_1)_{x,\frac{r_1}{2}} \\
K_+ &=& (G_{n+1})_{x,0+}(G_{n})_{x,\frac{r_n}{2}+}\hdots(G_1)_{x,\frac{r_1}{2}+} \\
K^H_{0+} &=& (H_{n+1})_{x,0+}(H_{n})_{x,\frac{r_n}{2}}\hdots(H_1)_{x,\frac{r_1}{2}} \\
K^H_+ &=& (H_{n+1})_{x,0+}(H_{n})_{x,\frac{r_n}{2}+}\hdots(H_1)_{x,\frac{r_1}{2}+}
\end{eqnarray*}
\begin{Lemma} \label{Lemma-Ks} We have the following identities:
\begin{equation*}
\begin{aligned}
K_+ & = (G_{n+1})_{x,0+}(H_{n})_{x,\frac{r_n}{2}+}\hdots(H_1)_{x,\frac{r_1}{2}+}=(G_{n+1})_{x,0+}K^H_+ \\
K_{0+} & = (G_{n+1})_{x,0+}(H_{n})_{x,\frac{r_n}{2}}\hdots(H_1)_{x,\frac{r_1}{2}}= (G_{n+1})_{x,0+} K^H_{0+} \\
K^H_{0+} &= (H_{n+1})_{x,0+}(H_{n})_{x,r_n,\frac{r_n}{2}}\hdots(H_1)_{x,r_1,\frac{r_1}{2}} \\
K^H_+ &= (H_{n+1})_{x,0+}(H_{n})_{x,r_n,\frac{r_n}{2}+}\hdots(H_1)_{x,r_1,\frac{r_1}{2}+} \, .
\end{aligned}
\end{equation*}
\end{Lemma}
\textbf{Proof.\\} The first two lines follow from Corollary \ref{Cor-generated}.
It is clear that $$ K^H_{0+} \mathfrak{su}pset (H_{n+1})_{x,0+}(H_{n})_{x,r_n,\mathfrak{r}ac{r_n}{2}}\hdots(H_1)_{x,r_1,\mathfrak{r}ac{r_1}{2}} .$$ In order to prove the fourth identity, it remains to show that $$(H_{i})_{x, \mathfrak{r}ac{r_i}{2}+} \mathfrak{su}bset (H_{n+1})_{x,0+}(H_{n})_{x,r_n,\mathfrak{r}ac{r_n}{2}+}\hdots(H_1)_{x,r_1,\mathfrak{r}ac{r_1}{2}+}$$ for all $n+1 \geq i \geq 1$, where we recall that $r_{n+1}=0$. We show this by induction. For $i=n+1$ the statement is obvious, so assume $n \geq i \geq 1$ and that the statement holds for $i+1$. Then $(H_{i+1})_{x, \mathfrak{r}ac{r_i}{2}+} \mathfrak{su}bset (H_{i+1})_{x, \mathfrak{r}ac{r_{i+1}}{2}+} \mathfrak{su}bset (H_{n+1})_{x,0+}(H_{n})_{x,r_n,\mathfrak{r}ac{r_n}{2}+}\hdots(H_1)_{x,r_1,\mathfrak{r}ac{r_1}{2}+}$, and it suffices to prove that $(H_{i})_{x, \mathfrak{r}ac{r_i}{2}+} = (H_{i})_{x, r_i, \mathfrak{r}ac{r_i}{2}+} (H_{i+1})_{x, \mathfrak{r}ac{r_i}{2}+}$, or, equivalently, $$(\mathfrak{h}_{i})_{x, \mathfrak{r}ac{r_i}{2}+} / (\mathfrak{h}_{i})_{x, r_i} = ((\mathfrak{h}_{i})_{x, r_i, \mathfrak{r}ac{r_i}{2}+} + (\mathfrak{h}_{i+1})_{x, \mathfrak{r}ac{r_i}{2}+})/ (\mathfrak{h}_{i})_{x, r_i} . $$ This follows by taking $\Gal(E/k)$-invariants of the following equality (of abelian groups with $\Gal(E/k)$-action) $$ (\mathfrak{h}_{i}(E))_{x, \mathfrak{r}ac{r_i}{2}+} / (\mathfrak{h}_{i}(E))_{x, r_i} = (\mathfrak{h}_{i}(E))_{x, r_i, \mathfrak{r}ac{r_i}{2}+}/ (\mathfrak{h}_{i}(E))_{x, r_i} \oplus (\mathfrak{h}_{i+1}(E))_{x, \mathfrak{r}ac{r_i}{2}+}/(\mathfrak{h}_{i+1}(E))_{x, r_i}, $$ where $E$ is a tamely ramified extension of $k$ over which $G_{i+1}$ and $G_i$ split. The third identity is proved analogously. \qed Let $\rho_{Y}u$ be an irreducible representation of $K_{G_{n+1}}$ such that $\rho_{Y}u|_{(G_{n+1})_{x,0}}$ factors through $(G_{n+1})_{x,0}/(G_{n+1})_{x,0+}$ and contains a cuspidal representation of $(G_{n+1})_{x,0}/(G_{n+1})_{x,0+}$ . By Lemma \ref{Lemma-phi-properties}\eqref{item-character-extension-generic} and \cite[7.3~Remark]{KimYu}\mathfrak{o}otnote{Remark 7.3. in \cite{KimYu} explains how to get the 5-tuple $\Sigma$ (using the notation from \cite{KimYu}) from our 5-tuple. The authors mention in this remark that as a last step one ``can then extend/modify $^{-1}ota$ to a family $\{^{-1}ota\}$ which is $\vec s$-generic''. However, by doing so one might have to change our point $x$ (which is denoted by $y$ in \cite{KimYu}) to a nearby point in the building. In order to keep working with $x$ we will not perform this last modification. As a consequence the requirement of $\{^{-1}ota\}$ being $\vec s$-generic in Condition D2 of \cite[7.2]{KimYu} might not be satisfied. However, we can still carry out Yu's construction with our tuple.} the tuple $(\vec G, x, \vec r, \rho_{Y}u, \vec \phi)$ satisfies Conditions D1, D3, D4 and D5\mathfrak{o}otnote{Ju-Lee Kim confirmed that ''relative to $x$ for all $x ^{-1}n \mathscr{B}(G')$'' in Condition D5 in \cite[7.2]{KimYu} should be ``relative to $y$'' (using the notation of \cite{KimYu}). } in \cite[7.2]{KimYu}. Using this tuple we can carry out Yu's construction (\cite[\S4]{Yu}) as explained in \cite[7.4]{KimYu} to obtain a representation of $K$ that we denote by $(\pi_K,V_{\pi_K})$. (Note that $p \neq 2$ by our assumption that $p \nmid \abs{W}$ and that $G$ is not a torus.) 
By construction, the representation $\pi_K$ is of the form $\rho_{Y}u \otimes \kappa_{\vec \phi}$, where $\rho_{Y}u$ also denotes the extension of $\rho_{Y}u$ from $K_{G_{n+1}}$ to $K$ that is trivial on $(G_n)_{x,\mathfrak{r}ac{r_n}{2}}\hdots (G_1)_{x,\mathfrak{r}ac{r_1}{2}}$, and $(\kappa_{\vec \phi}, V_\kappa)$ \leftarrowbel{page-kappa} is a representation of $K$ that depends only on $(\vec G, \vec r, \vec \phi)$, i.e. not on the choice of $\rho_{Y}u$ (\cite{Yu}[\S4] or \cite[12.4]{Kim}). In particular, $(\pi_K|_{K_+},V_{\pi_K})$ does not depend on $\rho_{Y}u$. We denote by $\hat \phi_i$ ($1 \leq i \leq n$) the character of $K_{G_{n+1}}(G_{i+1})_{x,0}G_{x,\mathfrak{r}ac{r_i}{2}+}$ defined in \cite[\S~4]{Yu}, i.e. the unique character of $K_{G_{n+1}}(G_{i+1})_{x,0}G_{x,\mathfrak{r}ac{r_i}{2}+}$ satisfying \textbfegin{itemize} ^{-1}tem $\hat \phi_i|_{K_{G_{n+1}}(G_{i+1})_{x,0}}=\phi_i|_{K_{G_{n+1}}(G_{i+1})_{x,0}}$, and ^{-1}tem $\hat \phi_i|_{G_{x,\mathfrak{r}ac{r_i}{2}+}} $ factors through \textbfegin{eqnarray*} G_{x,\mathfrak{r}ac{r_i}{2}+}/G_{x,r_i+} & \simeq& \mathfrak{g}_{x,\mathfrak{r}ac{r_i}{2}+}/\mathfrak{g}_{x,r_i+} = (\mathfrak{g}_{i+1} \oplus \mathfrak{r}'')_{x,\mathfrak{r}ac{r_i}{2}+}/(\mathfrak{g}_{i+1} \oplus \mathfrak{r}'')_{x,r_i+} \\ & \rightarrow & (\mathfrak{g}_{i+1})_{x,\mathfrak{r}ac{r_i}{2}+}/(\mathfrak{g}_{i+1})_{x,r_i+} \simeq (G_{i+1})_{x,\mathfrak{r}ac{r_i}{2}+}/(G_{i+1})_{x,r_i+}, \end{eqnarray*} on which it is induced by $\phi_i$. Here $\mathfrak{r}''$ is as defined in Lemma \ref{Lemma-kill-C}, i.e. $\mathfrak{r}''=\mathfrak{g} \cap \textbfigoplus_{\alpha ^{-1}n \Phi(G,T_E)-\Phi(G_{i+1},T_E)} (\mathfrak{g}_E)_\alpha$ for some maximal torus $T$ of $G_{i+1}$ that splits over a tame extension $E$ of $k$, and the map $\mathfrak{g}_{i+1} \oplus \mathfrak{r}'' \rightarrow \mathfrak{g}_{i+1}$ sends $\mathfrak{r}''$ to zero. \end{itemize} Then Yu proves in \cite[Proposition~11.4]{Yu} that $(G_i)_{x,r_i,\mathfrak{r}ac{r_i}{2}}/\left((G_i)_{x,r_i,\mathfrak{r}ac{r_i}{2}+} \cap \ker(\hat \phi_i)\right)$ is a Heisenberg $p$-group with center $(G_i)_{x,r_i,\mathfrak{r}ac{r_i}{2}+} / \left((G_i)_{x,r_i,\mathfrak{r}ac{r_i}{2}+} \cap \ker(\hat \phi_i)\right)$. Let $(\omega_i, V_{\omega_i})$ denote the Heisenberg representation of this Heisenberg $p$-group with central character $\hat \phi_i|_{(G_i)_{x,r_i,\mathfrak{r}ac{r_i}{2}+} }$. Then we observe from the construction of $(\kappa_{\vec \phi}, V_\kappa)$ and \cite[Theorem~11.5]{Yu} that $(\kappa_{\vec \phi}|_{K_{0+}}, V_\kappa)$ is irreducible and that the underlying vector space $V_\kappa$ is $\textbfigotimes_{i=1}^n V_{\omega_i}$. If $n=0$, then the empty tensor product is meant to be a one dimensional vector space. In that case $(\kappa_{\vec \phi}|_{K_{0+}}, V_\kappa)$ is the trivial one dimensional representation. The restriction of $(\kappa_{\vec \phi}, V_\kappa)$ to $(G_i)_{x,r_i,\mathfrak{r}ac{r_i}{2}}$ for $1 \leq i \leq n$ is given by letting $(G_i)_{x,r_i,\mathfrak{r}ac{r_i}{2}}$ act via the Heisenberg representation $\omega_i$ of $(G_i)_{x,r_i,\mathfrak{r}ac{r_i}{2}}/\left((G_i)_{x,r_i,\mathfrak{r}ac{r_i}{2}+} \cap \ker(\hat \phi_i)\right)$ with central character $\hat \phi_i|_{(G_i)_{x,r_i,\mathfrak{r}ac{r_i}{2}+} }$ on $V_{\omega_i}$ and via $\hat \phi_j|_{(G_i)_{x,r_i,\mathfrak{r}ac{r_i}{2}}}$ on $V_{\omega_j}$ for $j \neq i$. 
\textbfegin{Lemma} \leftarrowbel{Lemma-K+} There exists an irreducible $K_+$-subrepresentation of $(\pi|_{K_+}, \widetilde V)$ that is isomorphic to any one-dimensional $K_+$-subrepresentation of $(\pi_K|_{K_+},V_{\pi_K})$. \end{Lemma} \textbf{Proof.\\} By \cite[Proposition~4.4]{Yu}, the representation $(\pi_K|_{K_+},V_{\pi_K})$ is $\theta:=^{\prime}od_{1 \leq i \leq n} \hat \phi_i|_{K_+}$-isotypic. Let $(\pi|_{K_+},V'')$ be an irreducible $K_+$-subrepresentation of $(\pi|_{K_+}, V') \mathfrak{su}bset (\pi|_{K_+}, \widetilde V)$. By Lemma \ref{Lemma-phi-properties}\eqref{item-character-extension-2}, the group $(G_{n+1})_{x,0+}$ acts on $V''$ via $\theta$. Moreover, by Lemma \ref{Lemma-phi-properties}\eqref{item-character-extension-0} and \eqref{item-character-extension-1} the restriction $\theta|_{(H_{i})_{x,r_i,\mathfrak{r}ac{r_i}{2}+}}$ for $1 \leq i \leq n$ factors through $(H_{i})_{x,r_i,\mathfrak{r}ac{r_i}{2}+} /(H_{i})_{x,r_i+} \simeq (\mathfrak{h}_{i})_{x,r_i,\mathfrak{r}ac{r_i}{2}+} /(\mathfrak{h}_{i})_{x,r_i+}$, where it is given by $\varphi \circ X_i$ (by the last line of Lemma \ref{Lemma-structure-of-X}). Hence the group $(H_{i})_{x,r_i,\mathfrak{r}ac{r_i}{2}+}$ acts on $V''$ via $\theta$ for $1 \leq i \leq n$. Since $(G_{n+1})_{x,0+}$ together with $(H_{i})_{x,r_i,\mathfrak{r}ac{r_i}{2}+}$, $1 \leq i \leq n$, generate $K_+$ by Lemma \ref{Lemma-Ks}, we are done. \qed We denote by $N^H$ the kernel in $K^H_+$ of $\theta|_{K^H_+}=^{\prime}od_{1 \leq i \leq n} \hat \phi_i|_{K^H_+}$. \textbfegin{Lemma} \leftarrowbel{Lemma-Heisenberg} If $n>0$, then $K^H_{0+}/N^H$ is a Heisenberg $p$-group with center $K_+^H/N^H$.\\ If $n=0$, then $K^H_{0+}/N^H=K^H_{+}/N^H$. \end{Lemma} \textbf{Proof.\\} Note that $[K^H_{0+}, K^H_{0+}] \mathfrak{su}bset K^H_+$ and $[K^H_{0+}, K^H_{+}] \mathfrak{su}bset (H_{n+1})_{x,0+}(H_{n})_{x,r_n+,\mathfrak{r}ac{r_n}{2}+}\hdots(H_1)_{x,r_1+,\mathfrak{r}ac{r_1}{2}+} \mathfrak{su}bset N^H$. Thus the center of $K^H_{0+}/N^H$ contains $K_+^H/N^H$ and we have a pairing $(a,b)= \theta(aba^{-1}b^{-1})$ on $K^H_{0+}/K^H_{+} \times K^H_{0+}/K^H_{+}$. Note that $$K^H_{0+}/K^H_{+} \simeq (H_1)_{x,r_1,\mathfrak{r}ac{r_1}{2}}/(H_1)_{x,r_1,\mathfrak{r}ac{r_1}{2}+} \oplus \hdots \oplus (H_n)_{x,r_n,\mathfrak{r}ac{r_n}{2}}/(H_n)_{x,r_n,\mathfrak{r}ac{r_n}{2}+} $$ and it is easy to check (as done in the proof of \cite[Proposition~18.1]{Kim}) that $( \cdot, \cdot)$ is the sum of the pairings $(\cdot, \cdot)_i$ on $(H_i)_{x,r_i,\mathfrak{r}ac{r_i}{2}}/(H_i)_{x,r_i,\mathfrak{r}ac{r_i}{2}+} $ defined by $(a,b)_i=\hat \phi_i(aba^{-1}b^{-1})$. By \cite[Lemma~11.1]{Yu}, the pairing $(\cdot,\cdot)_i$ is non-degenerate $1 \leq i \leq n$, and hence the pairing $(\cdot,\cdot)$ is non-degenerate. Thus the center of $K^H_{0+}/N^H$ is contained in $K_+^H/N^H$, and therefore equals $K_+^H/N^H$. Moreover, the image of $\theta|_{K^H_+}$ is $\{ c ^{-1}n \mathbb{C} \, | \, c^p=1\}$, which implies that $K_+^H/N^H$ has order $p$. The remainder of the proof works completely analogous to Yu's proof (\cite[Proposition~11.4]{Yu}) that the group $(H_i)_{x,r_i,\mathfrak{r}ac{r_i}{2}}/\left((H_i)_{x,r_i,\mathfrak{r}ac{r_i}{2}+} \cap \ker(\hat \phi_i)\right) $ is a Heisenberg $p$-group with center $(H_i)_{x,r_i,\mathfrak{r}ac{r_i}{2}+}/\left((H_i)_{x,r_i,\mathfrak{r}ac{r_i}{2}+} \cap \ker(\hat \phi_i)\right)$ for $1 \leq i \leq n$. 
We outline the proof as a convenience for the reader and refer to \cite[Proposition~11.4]{Yu} for details: We first prove the statement over a tame extension $E$ over which $G_{n+1}$ is split, and denote by $K^H_{0+}(E), K^H_0(E)$ and $N(E)$ the corresponding groups constructed over $E$. By \cite[Lemma~10.1]{Yu} and the above observations (over $E$), it suffices to exhibit subgroups $W_1$ and $W_2$ of $K^H_{0+}(E)/N^H(E)$ that have trivial intersection with the center and whose image in $K^H_{0+}(E)/K^H_0(E)$ form a complete polarization. This can be achieved by using positive and negative root groups, respectively. To conclude that $K^H_{0+}/N^H$ is a Heisenberg $p$-group, we then embed $K^H_{0+}/N^H$ into $K^H_{0+}(E)/N^H(E)$, observe that by above its image $K^H_{0+}/K^H_0$ in $K^H_{0+}(E)/K^H_0(E)$ is a non-degenerate subspace, and apply \cite[Lemma~10.3]{Yu}. The second half of the lemma follows immediately from the definition of $K^H_{0+}$ and $K^H_+$. \qed Let $(\pi|_{K}, \widehat V)$ \leftarrowbel{page-V-hat} be the irreducible $K$-subrepresentation of $(\pi, V_\pi)$ that contains $\widetilde V$. \textbfegin{Lemma} \leftarrowbel{Lemma-repzeroK} There exists an irreducible representation $(\rho_0K,V_{\rho_0}K)$ of $K$ that is trivial on $K_{0+}$ such that $( \rho_0K \otimes \kappa_{\vec \phi} , V_{\rho_0}K \otimes V_\kappa) \simeq (\pi|_{K}, \widehat V)$. \end{Lemma} \textbf{Proof.\\} Since $K_{0+}=G_{x,0+}K^H_{0+}$ (Lemma \ref{Lemma-Ks}) and $G_{x,0+}\mathfrak{su}bset K_+$ acts on $V_{\pi_K}$ via $\theta|_{G_{x,0+}}$ (times identity) by \cite[Proposition~4.4]{Yu}, we deduce from the irreducibility of $(\kappa_{\vec \phi}|_{K_{0+}}, V_\kappa)$ mentioned above that also its restriction $(\kappa_{\vec \phi}|_{K^H_{0+}}, V_\kappa)$ to $K^H_{0+}$ is irreducible. Recall that $(\kappa_{\vec \phi}|_{K^H_{0+}}, V_\kappa)$ factors through $K^H_{0+}/N^H$ and $K^H_+$ acts via the character $\theta|_{K^H_+}$ (times identity). By Lemma \ref{Lemma-Heisenberg} and the theory of Heisenberg representations there exists a unique irreducible representation of $K^H_{0+}$ factoring through $K^H_{0+}/N^H$ and having $K^H_+/N^H$ act via the character $\theta|_{K^H_+}$ (times identity). On the other hand, Lemma \ref{Lemma-K+} and the observation that $[K^H_{0+},K_{+}]\mathfrak{su}bset N^H$ imply that $(\pi|_{K^H_{0+}}, \widehat V)$ contains an irreducible $K^H_{0+}$-subrepresentation on which $K_{+}$ acts via the character $\theta|_{K_+}$ (times identity), and which therefore is isomorphic to $(\kappa_{\vec \phi}|_{K^H_{0+}}, V_\kappa)$ as a $K^H_{0+}$-representation. Moreover, since $K_{0+}=K_+K^H_{0+}$, we deduce from the $K_+$-action that $(\pi|_{K_{0+}}, \widehat V)$ contains an irreducible $K_{0+}$-subrepresentation isomorphic to $(\kappa_{\vec \phi}|_{K_{0+}}, V_\kappa)$. Hence, by \cite[Proposition~18.5]{Kim} (or rather the analogous statement in our setting that is proved in the same way), the irreducible representation $(\pi|_K,\widehat V)$ of $K$ that extends $(\kappa_{\vec \phi}|_{K_{0+}}, V_\kappa)$ is of the form $(\rho_0K \otimes \kappa_{\vec \phi}, V_{\rho_0}K \otimes V_\kappa)$ for some irreducible representation $(\rho_0K,V_{\rho_0}K)$ of $K$ that is trivial on $K_{0+}$. 
\qed \textbfegin{Cor} \leftarrowbel{Cor-action-of-H} The subspace $\widehat V$ is contained in $V_\pi^{\cup_{1 \leq i \leq n+1}((H_i)_{x,r_i+})}$ and the action of the group $(H_i)_{x,r_i,\mathfrak{r}ac{r_i}{2}+}/(H_i)_{x,r_i+}\simeq (\mathfrak{h}_i)_{x,r_i,\mathfrak{r}ac{r_i}{2}+}/(\mathfrak{h}_i)_{x,r_i+}$ on $\widehat V$ via $\pi$ is given by the character $\varphi \circ X_i$ for $1 \leq i \leq n$. \end{Cor} \textbf{Proof.\\} Let $1 \leq i \leq n$. We have $(H_i)_{x,r_i,\mathfrak{r}ac{r_i}{2}+} \mathfrak{su}bset K_+$ and $(H_{n+1})_{x,0+} \mathfrak{su}bset K_+$ and by Lemma \ref{Lemma-repzeroK} the representation $(\pi|_{K_+}, \widehat V)$ is $\theta$-isotypic. As we saw in the proof of Lemma \ref{Lemma-K+}, the character $\theta|_{(H_i)_{x,r_i,\mathfrak{r}ac{r_i}{2}+}}$ factors through $(H_i)_{x,r_i,\mathfrak{r}ac{r_i}{2}+}/(H_i)_{x,r_i+}\simeq (\mathfrak{h}_i)_{x,r_i,\mathfrak{r}ac{r_i}{2}+}/(\mathfrak{h}_i)_{x,r_i+}$, on which it is given by $\varphi \circ X_i$, and $\theta|_{(H_{n+1})_{x,0+}}$ is trivial by Lemma \ref{Lemma-phi-properties}\eqref{item-character-extension-0}. \qed \textbfegin{Lemma} \leftarrowbel{Lemma-cuspidal} The irreducible components of the representation $(\rho_0K|_{(G_{n+1})_{x,0}}, V_{\rho_0}K)$ provided by Lemma \ref{Lemma-repzeroK} are cuspidal representations of $(G_{n+1})_{x,0}/(G_{n+1})_{x,0+}$. \end{Lemma} \textbfegin{Rem} Readers familiar with Kim's work may expect that we could mainly cite \cite{Kim} for the proof of Lemma \ref{Lemma-cuspidal}. However, contrary to the claim in \cite[Proposition~17.2.(2)]{Kim}, the representation $\rho|_{(G_{n+1})_{x,0}} \otimes \kappa_{\vec \phi}|_{(G_{n+1})_{x,0}} \otimes ^{\prime}od_{1 \leq i \leq n} \phi_i^{-1}|_{(G_{n+1})_{x,0}}$ might not necessarily be cuspidal when viewed as a representation of $(G_{n+1})_{x,0}/(G_{n+1})_{x,0+}$. Since the proof of the above mentioned proposition in \cite{Kim} is not correct, we provide a different and independent proof of Lemma \ref{Lemma-cuspidal}. \end{Rem} \textbf{Proof of Lemma \ref{Lemma-cuspidal}} Suppose $(\rho', V_{\rho'})$ is an irreducible subrepresentation of $(\rho_0K|_{(G_{n+1})_{x,0}}, V_{\rho_0}K)$ that is not cuspidal (viewed as a representation of $(G_{n+1})_{x,0}/(G_{n+1})_{x,0+}$). Then there exists (the $\mathfrak f$-points of) a unipotent radical $U_{\ff}$ of a (proper) parabolic subgroup of the reductive group (with $\mathfrak f$-points) $(G_{n+1})_{x,0}/(G_{n+1})_{x,0+}=(\mathbb{R}P_{n+1})_x(\mathfrak f)$ such that $\rho'|_{U_{\ff}}$ contains the trivial representation of $U_{\ff}$. Denote by $V_{\rho'}''$ a subspace of $V_{\rho'}$ on which $U_{\ff}$ acts trivially. By \cite[Corollary~2.2.5 and Proposition~2.2.9]{Pseudoreductive2} there exists a one parameter subgroup $\overline \leftarrowmbda: \mathbb{G}_m \rightarrow (\mathbb{R}P_{n+1})_x$ such that $U_{\ff}=\{g ^{-1}n (\mathbb{R}P_{n+1})_x(\mathfrak f) \, | \lim_{t \rightarrow 0}\overline\leftarrowmbda(t).g=1 \,\}$. Let $\leftarrowmbda: \mathbb{G}_m \rightarrow G_{n+1}$ denote a lift of $\overline \leftarrowmbda$ that factors through a maximally split maximal torus $T$ of $G_{n+1}$ whose apartment $\mathscr{A}(T)$ contains $x$ (see the proof of Lemma \ref{Lemma-Pf1} for more details about such a lift). 
Let $\kappa'={\kappa_{\vec \phi}}|_{(G_{n+1})_{x,0}} \otimes ^{\prime}od_{1 \leq i \leq n} \phi_i^{-1}|_{(G_{n+1})_{x,0}}$, which is trivial on $(G_{n+1})_{x,0+}$ (by either combining Corollary \ref{Cor-repzero} with Lemma \ref{Lemma-K+} or by using the proof of Lemma \ref{Lemma-K+}) and therefore can also be regarded as a representation of $(G_{n+1})_{x,0}/(G_{n+1})_{x,0+}$ and hence of $U_{\ff}$. Recall that $(\omega_i, V_{\omega_i})$ denotes the Heisenberg representation of $(G_i)_{x,r_i,\mathfrak{r}ac{r_i}{2}}/\left((G_i)_{x,r_i,\mathfrak{r}ac{r_i}{2}+} \cap \ker(\hat \phi_i)\right)$ with central character (the restriction of) $\hat \phi_i$, and that the vector space $V_\kappa$ underlying the representation of $\kappa'$ is $\textbfigotimes_{i=1}^n V_{\omega_i}$. By the construction of Yu (\cite{Yu}[\S4, p.~592 and Theorem~11.5]), the representation $\kappa'$ is defined by letting $(G_{n+1})_{x,0}/(G_{n+1})_{x,0+}$ act on each of the tensor product factors $V_{\omega_i}$ in $\textbfigotimes_{i=1}^nV_{\omega_i}$ by mapping $(G_{n+1})_{x,0}/(G_{n+1})_{x,0+}$ to the symplectic group $\Sp(V_i)$ of the corresponding symplectic space $V_i:=(G_i)_{x,r_i,\mathfrak{r}ac{r_i}{2}}/(G_i)_{x,r_i,\mathfrak{r}ac{r_i}{2}+}$ with pairing defined by $(a,b)_i=\hat \phi_i(aba^{-1}b^{-1})$ and composing with a Weil representation. The map from $(G_{n+1})_{x,0}/(G_{n+1})_{x,0+}$ to $\Sp(V_i)$ is induced by the conjugation action of $(G_{n+1})_{x,0}$ on $(G_i)_{x,r_i,\mathfrak{r}ac{r_i}{2}}$. Let $E$ be a tamely ramified extension of $k$ over which $T$ splits, and define for $1 \leq i \leq n$ the space $V_i^+$ to be the image of $ G(k) \cap \left\leftarrowngleU_\alpha(E)_{x,\mathfrak{r}ac{r_i}{2}} \, | \, \alpha ^{-1}n \Phi(G_i, T)-\Phi(G_{i+1}, T), \leftarrowmbda(\alpha)>0 \right\rangle $ in $V_i$, the space $V_i^0$ to be the image of $ G(k) \cap \left\leftarrowngleU_\alpha(E)_{x,\mathfrak{r}ac{r_i}{2}} \, | \, \alpha ^{-1}n \Phi(G_i, T)-\Phi(G_{i+1}, T), \leftarrowmbda(\alpha)=0 \right\rangle $ in $V_i$, and $V_i^-$ to be the image of $ G(k) \cap \left\leftarrowngleU_\alpha(E)_{x,\mathfrak{r}ac{r_i}{2}} \, | \, \alpha ^{-1}n \Phi(G_i, T)-\Phi(G_{i+1}, T), \leftarrowmbda(\alpha)<0 \right\rangle $ in $V_i$. Then $V_i=V_i^+ \oplus V_i^{0} \oplus V_i^-$, the subspaces $V_i^+$ and $V_i^-$ are both totally isotropic, the orthogonal complement of $V_i^+$ is $V_i^+ \oplus V_i^0$, and $V_i^0$ is a non-degenerate subspace of $V_i$. Let $P_i \mathfrak{su}bset \Sp(V_i)$ be the (maximal) parabolic subgroup of $\Sp(V_i)$ that preserves the subspace $V_i^+$. Note that the image of $U_{\ff}$ in $\Sp(V_i)$ is contained in $P_i$. Let $U_{i,\ff}$ be the image of $$U_i:= G(k) \cap \left\leftarrowngleU_\alpha(E)_{x,\mathfrak{r}ac{r_i}{2}} \, | \, \alpha ^{-1}n \Phi(G_i, T)-\Phi(G_{i+1}, T), \leftarrowmbda(\alpha) >0 \right\rangle $$ in the Heisenberg group $(G_i)_{x,r_i,\mathfrak{r}ac{r_i}{2}}/\left((G_i)_{x,r_i,\mathfrak{r}ac{r_i}{2}+} \cap \ker(\hat \phi_i)\right)$. Then by Yu's construction of the special isomorphism $$j_i:(G_i)_{x,r_i,\mathfrak{r}ac{r_i}{2}}/\left((G_i)_{x,r_i,\mathfrak{r}ac{r_i}{2}+} \cap \ker(\hat \phi_i)\right) \rightarrow V_i^\sharp$$ in \cite[Proposition~11.4]{Yu}, where $V_i^\sharp$ is the group $V_i \ltimes \mathbb{F}_p$ with group law $(v,a).(v',a')=(v+v', a+a'+\mathfrak{r}ac{1}{2}(v,v')_i)$, and since $\leftarrowmbda(\mathbb{G}_m) \mathfrak{su}bset T$, we have $j_i(U_{i,\mathfrak f})=V_i^+ \ltimes 0$. 
By \cite[Theorem~2.4.(b)]{Gerardin}\mathfrak{o}otnote{As Loren Spice pointed out, the statement of \cite[Theorem~2.4.(b)]{Gerardin} contains a typo. From the proof provided by \cite{Gerardin} one can deduce that the stated representation of $P(E_+,j)H(E_+^\perp,j)$ (i.e. the pull-back to $P(E_+,j)H(E_+^\perp,j)$ of a representation of $SH(E_0,j_0)$ as in part (a')) should be tensored with $\chi^{E_+} \ltimes 1$ before inducing it to $P(E_+,j)H(E,j)$ in order to define $\pi_+$ (using the notation of \cite{Gerardin}).} the restriction of the Weil--Heisenberg representation $V_{\omega_i}$ (via $j_i^{-1}$) to $P_i \ltimes U_{i,\ff}$ contains a subrepresentation $V_{\omega_i}'$ on which $U_{i,\ff}$ acts trivially and on which the action of $P_i$ is as follows: By \cite[Lemma~2.3.(c)]{Gerardin} there exist surjections $p_i^1: P_i \twoheadrightarrow \GL(V_i^+)$ and $p_i^2: P_i \twoheadrightarrow \Sp(V_i^0)$. Then the action of $P_i$ on $V_{\omega_i}'$ is the tensor product of $p_i^1$ composed with a (quadratic) character $\chi$ of $\GL(V_i^+)$ and $p_i^2$ composed with a Weil representation of $\Sp(V_i^0)$. Note that the image of $U_{\ff}$ in $\GL(V_i^+)$ (by composing $U_{\ff} \rightarrow P_i$ with $p_i^1: P_i \twoheadrightarrow \GL(V_i^+)$) is unipotent and hence contained in the commutator subgroup of $\GL(V_i^+)$. Thus $\chi \circ p_i^1$ is trivial on the image of $U_{\ff}$. Moreover, the image of $U_{\ff}$ in $\Sp(V_i^0)$ (by composing $U_{\ff} \rightarrow P_i$ with the surjection $p_i^2: P_i \twoheadrightarrow \Sp(V_i^0)$) is contained in a minimal parabolic subgroup of $\Sp(V_i^0)$ (\cite[3.7.~Corollaire]{Borel-Tits-unipotent}) and hence also in a parabolic subgroup $P_i^0$ of $\Sp(V_i^0)$ that fixes a maximal totally isotropic subspace of $V_i^0$. By \cite[Theorem~2.4.(b)]{Gerardin} the Weil representation $V_{\omega_i}'$ restricted to $P_i^0$ contains a one dimensional subrepresentation $V_{\omega_i}''$ on which the action of $P_i^0$ factors through a character of $P_i^0/U(P_i^0)$ where $U(P_i^0)$ denotes the unipotent radical of $P_i^0$. Since the image of $U_{\ff}$ is unipotent and hence its image in $P_i^0/U(P_i^0)$ (which is isomorphic to a general linear group) is contained in the commutator subgroup of $P_i^0/U(P_i^0)$, the group $U_{\ff}$ acts trivially on $V_{\omega_i}''$. Let $V_{\kappa}''$ denote the subspace $\otimes_{1 \leq i \leq n} V_{\omega_i}''$ of $\otimes_{1 \leq i \leq n} V_{\omega_i}=V_\kappa$. Let $U_{n+1}^H$ be the preimage of $U_{\ff}$ in $(H_{n+1})_{x,0}$ under the surjection $(H_{n+1})_{x,0} \twoheadrightarrow (H_{n+1})_{x,0}/(H_{n+1})_{x,0+}$. Since $\phi_i$ is trivial on $H_{n+1}(k)$ for all $1 \leq i \leq n$ (Lemma \ref{Lemma-phi-properties}\eqref{item-character-extension-0}), the action of the group $U_{n+1}^H$ via $\rho \otimes \kappa_{\vec \phi}$ on the subspace $V_{\rho'}'' \otimes V_{\kappa}''$ of $V_\rho \otimes V_\kappa$ is the trivial action. Moreover, recall that the restriction of $(\kappa_{\vec \phi}, V_\kappa)$ to $(G_i)_{x,r_i,\mathfrak{r}ac{r_i}{2}}$ for $1 \leq i \leq n$ is given by letting $(G_i)_{x,r_i,\mathfrak{r}ac{r_i}{2}}$ act via the Heisenberg representation $\omega_i$ on $V_{\omega_i}$ and via $\hat \phi_j|_{(G_i)_{x,r_i,\mathfrak{r}ac{r_i}{2}}}$ on $V_{\omega_j}$ for $j \neq i$. By Lemma \ref{Lemma-phi-properties}\eqref{item-character-extension-0} and the definition of $\hat \phi_j$, the character $\hat \phi_j$ is trivial on ${(H_i)_{x,r_i,\mathfrak{r}ac{r_i}{2}}}$ for $j \neq i$. 
Hence $U_i$ (which is contained in $(H_i)_{x,r_i,\mathfrak{r}ac{r_i}{2}}$) acts trivially via $\rho \otimes \kappa_{\vec \phi}$ on $V_{\rho'}'' \otimes V_{\kappa}''$. If $\epsilon>0$ is sufficiently small, then we have $$(H_{n+1})_{x+\epsilon \leftarrowmbda,0+}\mathfrak{su}bset \left\leftarrowngle(H_{n+1})_{x,0+}, U_{n+1}^H\right\rangle \quad \text{ and } \quad (H_i)_{x+\epsilon \leftarrowmbda,r_i,\mathfrak{r}ac{r_i}{2}+}\mathfrak{su}bset\left\leftarrowngle(H_i)_{x,r_i,\mathfrak{r}ac{r_i}{2}+},U_i\right\rangle$$ for $1 \leq i \leq n$, where $x + \epsilon \leftarrowmbda$ arises from the action of $\epsilon \leftarrowmbda$ on $x ^{-1}n \mathscr{A}(T)$. Since $\rho \otimes \kappa_{\vec \phi}$ is by the definition of $\rho$ isomorphic to a $K$-subrepresentation of $(\pi|_K, V_\pi)$, we obtain a non-trivial subspace $V''$ of $V_\pi$ on which $(H_i)_{x+\epsilon \leftarrowmbda, r_i, \mathfrak{r}ac{r_i}{2}+}/(H_i)_{x+\epsilon \leftarrowmbda,r_i+} \simeq (\mathfrak{h}_i)_{x+\epsilon \leftarrowmbda, r_i, \mathfrak{r}ac{r_i}{2}+}/(\mathfrak{h}_i)_{x+\epsilon \leftarrowmbda,r_i+}$ acts via $\varphi \circ X_i$ for $1 \leq i \leq n$ and that is fixed by $(H_{n+1})_{x+\epsilon \leftarrowmbda,0+}$. Since $x+\epsilon\leftarrowmbda ^{-1}n \mathscr{A}(T)$, the tuple $(x+\epsilon\leftarrowmbda,(X_i)_{1 \leq i \leq n})$ is a truncated datum by Lemma \ref{Lemma-truncateddatum-with-y}, and by the same arguments as in Case 2 of the proof of Theorem \ref{Thm-existence-of-datum}, we can extend it to a datum $(x+\epsilon\leftarrowmbda,(X_i)_{1 \leq i \leq n}, (\rho_0',V_{\rho_0}'))$ contained in $(\pi, V_\pi)$. However, since $U_{\ff}$ was non-trivial (and $\epsilon>0$ sufficiently small), the dimension of the facet of $\mathscr{B}(G_{n+1},k)$ that contains $x+\epsilon\leftarrowmbda$ is larger than the dimension of the facet of $\mathscr{B}(G_{n+1},k)$ that contains $x$. This is a contradiction to the choice of $(x, (X_i)_{1 \leq i \leq n}, (\rho_0,V_{\rho_0}))$, i.e. to the assumption that $(x, (X_i)_{1 \leq i \leq n}, (\rho_0, V_{\rho_0}))$ is a maximal datum for $(\pi, V_\pi)$ (as in Definition \ref{Def-datum-for}). \qed In order to prove that $(\pi,V_\pi)$ contains a type as constructed by Kim and Yu in \cite{KimYu}, we introduce some additional notation following \cite[2.4]{KimYu}. We denote by $Z_s(M_{n+1})$ the maximal split torus in the center of $M_{n+1}$ and by $M_i$ the centralizer of $Z_s(M_{n+1})$ in $G_i$ for $1 \leq i \leq n$. We say (compare \cite[3.5.~Definition]{KimYu}) that the resulting commutative diagram of embeddings (where the embeddings are chosen as explained in Remark \ref{Rem-BT}) \textbfegin{eqnarray} \leftarrowbel{diagram-generic} \xymatrix{ \mathscr{B}(M_{n+1},k) \ar@{^{(}->}[r] \ar@{^{(}->}[d] & \mathscr{B}(M_{n},k) \ar@{^{(}->}[r] \ar@{^{(}->}[d] & \hdots \ar@{^{(}->}[r] \ar@{^{(}->}[d] & \mathscr{B}(M_1,k) \ar@{^{(}->}[d] \\ \mathscr{B}(G_{n+1},k) \ar@{^{(}->}[r] & \mathscr{B}(G_{n},k) \ar@{^{(}->}[r] & \hdots \ar@{^{(}->}[r] & \mathscr{B}(G_1,k) \\ } \end{eqnarray} is \textit{$(\mathfrak{r}ac{r_{n+1}}{2}, \mathfrak{r}ac{r_n}{2}, \hdots, \mathfrak{r}ac{r_1}{2})$-generic relative to $x$} if $$ \mathfrak{su}m_{i=1}^{n} \left(\dim((G_i)_{x,\mathfrak{r}ac{r_i}{2}}/(G_i)_{x,\mathfrak{r}ac{r_i}{2}+}) - \dim((M_i)_{x,\mathfrak{r}ac{r_i}{2}}/(M_i)_{x,\mathfrak{r}ac{r_i}{2}+}) \right)=0 , $$ where we recall that $r_{n+1}=0$. 
Note that this property is independent of the choice of embeddings in Diagram \eqref{diagram-generic}\mathfrak{o}otnote{While our point $x$ is a point of $\mathscr{B}(G,k)$ that is viewed as a point of $\mathscr{B}(M_{i},k)$ and $\mathscr{B}(G_i,k)$ via the above embeddings, Kim and Yu (\cite{KimYu}) fix a point in $\mathscr{B}(M_{n+1},k)$ and consider its image in $\mathscr{B}(M_{i},k)$ and $\mathscr{B}(G_i,k)$. Hence the genericity property in \cite[3.5.~Definition]{KimYu} does depend on the embeddings.}. \textbfegin{Thm} \leftarrowbel{Thm-exhaustion-of-types} Let $(\pi, V_\pi)$ be a smooth irreducible representation of $G(k)$. Then $(\pi,V_\pi)$ contains one of the types constructed by Kim--Yu in \cite{KimYu}. \end{Thm} \textbf{Proof.\\} By Theorem \ref{Thm-existence-of-datum}, the representation $(\pi, V_\pi)$ contains a datum. Let $(x, (X_i)_{1 \leq i \leq n}, (\rho_0, V_{\rho_0}))$ be a maximal datum for $(\pi, V_\pi)$ such that the non-negative number $\mathfrak{su}m_{i=1}^{n} \left(\dim((G_i)_{x,\mathfrak{r}ac{r_i}{2}}/(G_i)_{x,\mathfrak{r}ac{r_i}{2}+})\right.$ $\left.- \dim((M_i)_{x,\mathfrak{r}ac{r_i}{2}}/(M_i)_{x,\mathfrak{r}ac{r_i}{2}+}) \right)$ is minimal among all possible choices of maximal data for $(\pi, V_\pi)$. Performing the constructions above (page \pageref{page-Yu-datum} and Lemma \ref{Lemma-repzeroK}) we obtain a tuple $(\vec G, x, \vec r, \rho_0K|_{K_{G_{n+1}}}, \vec \phi)$ and an associated representation $(\pi_K,V_{\pi_K})=(\rho_0K \otimes \kappa_{\vec \phi}, V_{\rho_0}K \otimes V_\kappa)$ as constructed by Kim and Yu that is contained in $(\pi,V_\pi)$. It remains to show that $(K, \pi_K)$ is a type, i.e. that all the requirements that Kim and Yu impose on the tuple $(\vec G, x, \vec r, \rho_0K|_{K_{G_{n+1}}}, \vec \phi)$ for the construction of types are satisfied. By Lemma \ref{Lemma-phi-properties}\eqref{item-character-extension-generic} and Lemma \ref{Lemma-cuspidal} it therefore remains to show that Diagram \eqref{diagram-generic} is $(\mathfrak{r}ac{r_{n+1}}{2}, \mathfrak{r}ac{r_n}{2}, \hdots, \mathfrak{r}ac{r_1}{2})$-generic relative to $x$. Suppose that this is not the case. Then, by \cite[3.6~Lemma~(b)]{KimYu} and the definition of the Moy--Prasad filtration, there exists $\leftarrowmbda ^{-1}n X_*(Z_s(M_{n+1}))$ such that if $\epsilon>0$ is sufficiently small, then Diagram \eqref{diagram-generic} is $(\mathfrak{r}ac{r_{n+1}}{2}, \mathfrak{r}ac{r_n}{2}, \hdots, \mathfrak{r}ac{r_1}{2})$-generic relative to $x+\epsilon \leftarrowmbda$ and $(G_i)_{x+\epsilon\leftarrowmbda,\mathfrak{r}ac{r_i}{2}} \mathfrak{su}bseteq (G_i)_{x,\mathfrak{r}ac{r_i}{2}}$ and $(G_i)_{x+\epsilon\leftarrowmbda,r_i} \mathfrak{su}bseteq (G_i)_{x,r_i}$ for $1 \leq i \leq n+1$. Note that this implies that $(G_{n+1})_{x+\epsilon\leftarrowmbda,0} = (G_{n+1})_{x,0}$ and $(G_{n+1})_{x+\epsilon\leftarrowmbda,0+} = (G_{n+1})_{x,0+}$ because $\leftarrowmbda ^{-1}n X_*(Z_s(M_{n+1}))$ and $(M_{n+1})_{x,0}/(M_{n+1})_{x,0+} \simeq (G_{n+1})_{x,0}/(G_{n+1})_{x,0+}$ by definition of $M_{n+1}$. 
Using the notation of the proof of Lemma \ref{Lemma-cuspidal} the image of $(G_i)_{x+\epsilon\leftarrowmbda,r_i,\mathfrak{r}ac{r_i}{2}+}$ in the Heisenberg group $(G_i)_{x,r_i,\mathfrak{r}ac{r_i}{2}}/((G_i)_{x,r_i,\mathfrak{r}ac{r_i}{2}+}\cap \ker(\hat\phi_i))=j_i^{-1}(V_i \ltimes \mathbb{F}_p)$ is $j_i^{-1}(V_i^+ \ltimes \mathbb{F}_p)$, where $V_i^+$ is the totally isotropic subspace $((G_i)_{x+\epsilon\leftarrowmbda,r_i,\mathfrak{r}ac{r_i}{2}+}(G_i)_{x,r_i,\mathfrak{r}ac{r_i}{2}+})/(G_i)_{x,r_i,\mathfrak{r}ac{r_i}{2}+}$ of $V_i=(G_i)_{x,r_i,\mathfrak{r}ac{r_i}{2}}/(G_i)_{x,r_i,\mathfrak{r}ac{r_i}{2}+}$. For $1 \leq i \leq n$, let $V_{\omega_i}'$ be a subspace of the Heisenberg representation $V_{\omega_i}$ on which $V_i^+$ acts trivially, and denote by $V_\kappa'$ the subspace $\otimes_{1 \leq j \leq n} V_{\omega_j}'$ of $\otimes_{1 \leq j \leq n} V_{\omega_j}=V_\kappa$. Then the action of $(H_i)_{x+\epsilon\leftarrowmbda,r_i,\mathfrak{r}ac{r_i}{2}+}$ on $V_{\rho_0}K \otimes V_\kappa'$ factors through $(H_i)_{x+\epsilon\leftarrowmbda,r_i,\mathfrak{r}ac{r_i}{2}+}/(H_i)_{x+\epsilon\leftarrowmbda,r_i+} \simeq (\mathfrak{h}_i)_{x+\epsilon\leftarrowmbda,r_i,\mathfrak{r}ac{r_i}{2}+}/(\mathfrak{h}_i)_{x+\epsilon\leftarrowmbda,r_i+} $ on which it is given by the character $\varphi \circ X_i$ for $1 \leq i \leq n$. Moreover, $(H_{n+1})_{x+\epsilon\leftarrowmbda,0+} = (H_{n+1})_{x,0+}$ acts trivially on $V_{\rho_0}K \otimes V_\kappa'$. Hence (by Lemma \ref{Lemma-truncateddatum-with-y} and the same arguments as those in Case 2 of the proof of Theorem \ref{Thm-existence-of-datum}) we obtain a maximal datum $(x+\epsilon\leftarrowmbda, (X_i)_{1 \leq i \leq n}, (\rho_0', V_{\rho_0}'))$ for $(\pi, V_\pi)$ with $$\mathfrak{su}m_{i=1}^{n} \left(\dim((G_i)_{x+\epsilon\leftarrowmbda,\mathfrak{r}ac{r_i}{2}}/(G_i)_{x+\epsilon\leftarrowmbda,\mathfrak{r}ac{r_i}{2}+}) - \dim((M_i)_{x+\epsilon\leftarrowmbda,\mathfrak{r}ac{r_i}{2}}/(M_i)_{x+\epsilon\leftarrowmbda,\mathfrak{r}ac{r_i}{2}+}) \right)=0. $$ This contradicts that $0 < \mathfrak{su}m_{i=1}^{n} \left(\dim((G_i)_{x,\mathfrak{r}ac{r_i}{2}}/(G_i)_{x,\mathfrak{r}ac{r_i}{2}+}) - \dim((M_i)_{x,\mathfrak{r}ac{r_i}{2}}/(M_i)_{x,\mathfrak{r}ac{r_i}{2}+}) \right)$ was minimal among all possible choices of maximal data for $(\pi, V_\pi)$. \qed \textbfegin{Rem} Theorem \ref{Thm-exhaustion-of-types} has been derived by Kim and Yu (\cite[9.1~Theorem]{KimYu}) from the result about exhaustion of Yu's supercuspidal representations by Kim (\cite{Kim}) under much more restrictive assumptions than our Assumption \ref{Assumption-p-W}. First of all they require the local field $k$ to have characteristic zero, and secondly their assumption on the residual characteristic $p$ is much stronger than ours, i.e. far from optimal, see \cite[\S~3.4]{Kim}. \end{Rem} \section{Exhaustion of supercuspidal representations} \leftarrowbel{Section-exhaustion-Yu} Recall that we assume throughout the paper that $G$ splits over a tame extension and $p \nmid \abs{W}$. Under these assumptions, we obtain the following corollary of Section \ref{Section-existence-of-type}. \textbfegin{Thm} \leftarrowbel{Thm-exhaustion-Yu} Every smooth irreducible supercuspidal representation of $G(k)$ arises from the construction of Yu (\cite{Yu}). \end{Thm} \textbf{Proof.\\} Let $(\pi, V_\pi)$ be a smooth irreducible supercuspidal representation of $G(k)$. 
By Section \ref{Section-existence-of-type}, in particular Theorem \ref{Thm-exhaustion-of-types}, we can associate to $(\pi, V_\pi)$ a tuple $(\vec G, x, \vec r, \rho_0K|_{K_{G_{n+1}}}, \vec \phi)$ such that $(\pi,V_\pi)$ contains the type $(K, \pi_K)$ associated to it by Kim--Yu following Yu's construction. Let $M_{n+1}$ be the Levi subgroup of $G_{n+1}$ attached to $x$ and $G_{n+1}$ as in Section \ref{Section-existence-of-type}, page \pageref{page-Levi}. We recall that $Z_S(M_{n+1})$ denotes the maximal split torus of the center $Z(M_{n+1})$ of $M_{n+1}$, and that $M_1$ is the Levi subgroup of $G$ that is the centralizer $\mathbb{C}ent_{G}(Z_S(M_{n+1}))$ of $Z_S(M_{n+1})$ in $G$. Kim and Yu (\cite[7.5~Theorem]{KimYu}) show that the type $(K, \pi_K)$ is a cover of a type for the group $M_1$. Hence, since $(\pi, V_\pi)$ is supercuspidal, we have $M_1=G$. This implies that $Z_S(M_{n+1})$ is contained in the center of $G$. Hence $Z(G_{n+1})/Z(G)$ is anisotropic, where $Z(G_{n+1})$ and $Z(G)$ denote the centers of $G_{n+1}$ and $G$, respectively, and $M_{n+1}=G_{n+1}$. Instead of working with $K_{G_{n+1}}=(G_{n+1})_x$ in Section \ref{Section-existence-of-type}, we could have equally well performed all constructions for the stabilizer $(G_{n+1})_{[x]}$ of the image $[x]$ of $x$ in the reduced Bruhat--Tits building of $G_{n+1}$ (by replacing $(M_{n+1})_x$ by $(M_{n+1})_{[x]}$ everywhere) to obtain a representation $(\rho_0KYu, V_{\rho_0}KYu)$ of $\wt K=(G_{n+1})_{[x]}(G_{n})_{x,\mathfrak{r}ac{r_n}{2}}\hdots(G_1)_{x,\mathfrak{r}ac{r_1}{2}} $ such that the representation $(\pi_KYu,V_{\pi_K}Yu)$ of $\wt K$ associated to $(\vec G, x, \vec r, \rho_0KYu|_{(G_{n+1})_{[x]}}, \vec \phi)$ by Yu is contained in $(\pi|_{\wt K}, V_\pi)$. Since $M_{n+1}=\mathbb{C}ent_{G_{n+1}}(Z_S(M_{n+1}))=G_{n+1}$, the compactly induced representation $^{-1}nd_{(G_{n+1})_{[x]}}^{G(k)}\rho_0KYu|_{(G_{n+1})_{[x]}}$ is irreducible supercuspidal (by \cite[Proposition~6.6]{MP2}). Hence $(\vec G, x, \vec r, \rho_0KYu|_{(G_{n+1})_{[x]}}, \vec \phi)$ satisfies all the conditions that Yu requires for his construction of supercuspidal representations (\cite[\S~3]{Yu}), and $^{-1}nd_{\wt K}^{G(k)}\pi_KYu$ is the corresponding irreducible supercuspidal representations (\cite[Proposition~4.6]{Yu}). By Frobenius reciprocity, we obtain a non-trivial morphism from $(^{-1}nd_{\wt K}^{G(k)}\pi_KYu, ^{-1}nd_{\wt K}^{G(k)}V_{\pi_K}Yu)$ to $(\pi, V_\pi)$, and hence these two irreducible representations are isomorphic. \qed \textbfegin{Rem} The exhaustion of supercuspidal representations by Yu's construction has been known under the assumption that $k$ has characteristic zero and $p$ is a sufficiently large prime number thanks to Kim (\cite{Kim}). We refer the reader to \cite[\S~3.4]{Kim} for the precise conditions for $p$ being ``sufficiently large''. These assumptions are much stronger than $p \nmid \abs{W}$. \end{Rem} The proof of Theorem \ref{Thm-exhaustion-Yu} also shows how to recognize if a representation is supercuspidal by only considering a maximal datum for this representation. \textbfegin{Cor} \leftarrowbel{Cor-supercuspidal-criterion} Let $(\pi, V_\pi)$ be a smooth irreducible representation of $G(k)$, and let $(x, (X_i)_{1 \leq i \leq n},$ $(\rho_0, V_{\rho_0}))$ be a maximal datum for $(\pi,V_\pi)$. Then $(\pi,V_\pi)$ is supercuspidal if and only if $x$ is a facet of minimal dimension in $\mathscr{B}(G_{n+1},k)$ and $Z(G_{n+1})/Z(G)$ is anisotropic, where $G_{n+1}=\mathbb{C}ent_{G}(\mathfrak{su}m_{i=1}^n X_i)$. 
\end{Cor} \textbf{Proof.\\} The point $x$ is a facet of minimal dimension in $\mathscr{B}(G_{n+1},k)$ if and only if $M_{n+1}=G_{n+1}$. Hence we have seen in the proof of Theorem \ref{Thm-exhaustion-Yu} that $(\pi, V_\pi)$ being supercuspidal implies the other two conditions in the corollary. The proof of Theorem \ref{Thm-exhaustion-Yu} also shows that the other two conditions are sufficient to prove that $(\pi, V_\pi)$ is supercuspidal. \qed \bibliography{Jessicasbib} \end{document}
\begin{document} \title{Scattering quantum random-walk search with errors} \author{A.~G\'abris} \affiliation{ Research Institute for Solid State Physics and Optics, H-1525 Budapest, P. O. Box 49, Hungary } \author{T.~Kiss} \affiliation{ Research Institute for Solid State Physics and Optics, H-1525 Budapest, P. O. Box 49, Hungary } \author{I.~Jex} \affiliation{ Department of Physics, FJFI {\v C}VUT, {B\v r}ehov\'a 7, 115 19 Praha 1 - Star{\'e} M\v{e}sto, Czech Republic } \date{\today} \begin{abstract} We analyze the realization of a quantum-walk search algorithm in a passive, linear optical network. The specific model enables us to consider the effect of realistic sources of noise and losses on the search efficiency. Photon loss uniform in all directions is shown to lead to the rescaling of search time. Deviation from directional uniformity leads to the enhancement of the search efficiency compared to uniform loss with the same average. In certain cases even increasing loss in some of the directions can improve search efficiency. We show that while we approach the classical limit of the general search algorithm by introducing random phase fluctuations, its utility for searching is lost. Using numerical methods, we found that for static phase errors the averaged search efficiency displays a damped oscillatory behaviour that asymptotically tends to a non-zero value. \end{abstract} \maketitle \section{Introduction} The generalization of random walks for quantum systems \cite{AHARONOV1993QRW} proved to be a fruitful concept \cite{kempe-2003-44} attracting much recent interest. Algorithmic application for quantum information processing is an especially promising area of utilization of quantum random walks (QRW) \cite{Ambainis2004Quantum-walks-a}. In his pioneering paper \cite{Grover} Grover presented a quantum algorithm that can be used to search an unsorted database quadratically faster than the existing classical algorithms. Shenvi, Kempe and Whaley (SKW) \cite{shenvi:052307} proposed a search algorithm based on quantum random walk on a hypercube, which has similar scaling properties as the Grover search. In the SKW algorithm the oracle is used to modify the quantum coin at the marked vertex. In contrast to the Grover search, this algorithm generally has to be repeated several times to produce a result, but this merely adds a fixed overhead independent of the size of the search space. There are various suggestions and some experiments how to realize quantum walks in a laboratory. The schemes proposed specifically for the implementation of QRWs include ion traps \cite{travaglione:032310}, nuclear magnetic resonance \cite{du-2003-67} (also experimentally verified \cite{Ryan2005Experimental-im}), cavity quantum electrodynamics \cite{di:032304,agarwal:033815}, optical lattices \cite{dur-2002-66}, optical traps \cite{eckert:012327}, optical cavity \cite{Roldan2005Optical-impleme}, and classical optics \cite{knight-2003-227}. Moreover, the application of standard general logic networks to the task is always at hand \cite{Hines-pra75, Fujiwara2005Scalable-networ}. The idea of the scattering quantum random walk (SQRW) \cite{hillery:032314} was proposed as an answer to the question that can be posed as: how to realize a coined walk by a quantum optical network built from passive, linear optical elements such as beam splitters and phase shifters? 
It turned out that such a realization is possible and, in fact, it leads to a natural generalization of the coined walk, the scattering quantum random walk \cite{Kosik2005Scattering-mode}. The SQRW on the hypercube allows for a quantum optical implementation of the SKW search algorithm \cite{shenvi:052307}. Having a proposal for a physical realization at hand we are in the position to analyze in some detail the effects hindering its successful operation. Noise and decoherence strongly influence quantum walks. For a recent review on this topic see \cite{Kendon2006Decoherence-in-}. The first investigations in this direction indicated that a small amount of decoherence can actually enhance the mixing property \cite{Kendon2003Decoherence-can}. For a continuous QRW on a hypercube there is a threshold for decoherence, beyond which the walk behaves classically \cite{Alagic2005Decoherence-in-}. Ko\v sik {\it et al} analyzed SQRW with randomized phase noise on a $d$ dimensional lattice \cite{Kosik2006Quantum-walks-w}. The quantum walk on the line has been studied by several authors in the linear optical context, with the emphasis on the effect of various initial states, as well as on the impact of decoherence \cite{jeong:pra68.012310, pathak:pra75.032351}. The quantum random walk search with imperfect gates was discussed in some detail by Li {\it et al} \cite{Li2006Gate-imperfecti}, who have considered the case when the Grover operator applied in the search is systematically modified. Such an imperfection decreases the search probability and also shifts its first maximum in time. In this paper we analyze the impact of noise on the SKW algorithm typical for the experimental situations of the SQRW. In particular, first we focus on photon losses and show that, somewhat contradicting the na\"{\i}ve expectation, non-trivial effects such as the enhancement of the search efficiency can be observed. As a second type of errors we study randomly distributed phase errors in two complementary regimes. The first regime is characterized by rapid fluctuation of the optical path lengths, that leads to the randomization of phases for each run of the algorithm. We show that the classical limit of the SKW algorithm, reached by increasing the variance of the phase fluctuations, does not correspond to a search algorithm. In the other regime, the stability of the optical path lengths is maintained over the duration of one run, thus the errors are caused by static random phases. This latter case has not yet been considered in the context of QRWs. We found that static phase errors bring a significantly different behaviour compared to the case of phase fluctuations. Under static phase errors the algorithm retains its utility, with the average success probability displaying a damped oscillatory behaviour that asymptotically tends to a non-zero constant value. The paper is organized as follows. In the next section we introduce the scattering quantum walk search algorithm. In section \ref{sec:uniform}.\ we derive analytic results for the success probability of search for the case when a single coefficient describes photon losses independent of the direction. In section \ref{sec:non-uniform}.\ we turn to direction dependent losses, and present estimations of the success probability based on analytical calculations and numerical evidence. In section \ref{sec:phase}.\ phase noise is considered and consequences for the success probability are worked out. Finally, we conclude in Sec.~\ref{sec:conclusions}. 
\section{The scattering quantum walk search algorithm} \label{sec:sqrw} The quantum walk search algorithm is based on the generalized notion of coined quantum random walk (CQRW), allowing the coin operator to be non-uniform across the vertices. In the early literature the coin was considered to be position (vertex) independent. The CQRW is defined on the product Hilbert space $ {\ensuremath{\mathcal{H}}} = {\ensuremath{\mathcal{H}}} C \otimes {\ensuremath{\mathcal{H}}} G$, where $ {\ensuremath{\mathcal{H}}} C$ refers to the quantum coin, and $ {\ensuremath{\mathcal{H}}} G$ represents the graph on which the walker moves. The discrete time-evolution of the system is governed by the unitary operator \begin{equation} U=SC\, , \end{equation} where $C$ is the coin operator which corresponds to flipping the quantum coin, and $S$ is the step or translation operator that moves the walker one step along some outgoing edge, depending on the coin state. Adopting a binary string representation of the vertices $V$ of the underlying graph $G=(V,E)$, the step operator $S$ (a permutation operator of the entire Hilbert space $ {\ensuremath{\mathcal{H}}} $) can be expressed as \begin{equation} S = \sum_{d=0}^{n-1} \sum_{x\in V} \ket{d, x\oplus e_{dx}} \bra{d,x}\, . \label{eq:S_gendef} \end{equation} In (\ref{eq:S_gendef}) $x$ denotes the vertex index. Here, and in the rest of this paper, we identify the vertices with their indices and understand $V$ as the set of vertex indices. The most remarkable fact about $S$ is that it contains all information about the topology of the graph. In particular, the actual binary string values of $e_{dx}$ are determined by the set of edges $E$. This is accomplished by the introduction of direction indices $d$, which run from $0$ to $n-1$ for the $n$-regular graphs used in the search algorithm. To implement the scattering quantum random walk on an $n$-regular graph of $N$ nodes, identical $n$-multiports \cite{Jex-optcomm117, Zukovski_pra55} are arranged in columns, each containing $N$ multiports. The columns are enumerated from left to right, and each row is assigned a number sequentially. The initial state enters at the input ports of the multiports in the leftmost column. The output and input ports of multiports of neighbouring columns $j$ and $j+1$ are then indexed suitably and connected according to the graph $G$. For the formal description of quantum walks on arrays of multiports, we propose to label every mode by the row index and input port index of its \textit{destination} multiport. We note that an equally good labelling can be defined using the row index and output port index of the \textit{source} multiport. To describe single excitation states, we use the notation $\ket{d,x}$ where the input port index of the destination multiport is $d=0,1,\ldots,n-1$, and the row index is $x=0,1,\ldots,N-1$. Thus the total Hilbert space can effectively be separated into the product space $ {\ensuremath{\mathcal{H}}} C \otimes {\ensuremath{\mathcal{H}}} G$. To be precise, an additional label $j$ would be necessary to identify in which column the multiport is; however, we think of the column index as a discrete time index, and drop it as an explicit label of modes. Thus a time-evolution $U=SC$ can be generated by the propagation through columns of multiports. A quantum walk can be realized in terms of the basis defined using the destination indices, and we shall term it the ``standard basis'' throughout this section.
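The construction described so far is also easy to prototype numerically. The following minimal Python/NumPy sketch is our own illustration and not part of the proposed optical implementation; the basis ordering $dN+x$ and the function names are our choices, and we anticipate the hypercube topology used below, for which the edge labels are $e_{dx}=2^d$. It builds the step operator of Eq.~(\ref{eq:S_gendef}) together with the Grover coin and checks that one walk step $U=S(C_0\otimes\openone)$ is unitary:
\begin{verbatim}
import numpy as np

def hypercube_step_operator(n):
    """Step operator S of Eq. (S_gendef) for the n-dimensional hypercube.

    Basis states |d,x> are enumerated as index d*N + x with N = 2**n,
    and the hypercube edge labels are e_d = 2**d.
    """
    N = 2 ** n
    dim = n * N
    S = np.zeros((dim, dim))
    for d in range(n):
        for x in range(N):
            S[d * N + (x ^ (1 << d)), d * N + x] = 1.0
    return S

def grover_coin(n):
    """Grover diffusion coin G = -1 + 2|s><s| on the direction (coin) space."""
    s = np.full((n, 1), 1.0 / np.sqrt(n))
    return -np.eye(n) + 2.0 * (s @ s.T)

if __name__ == "__main__":
    n = 3
    S = hypercube_step_operator(n)
    U = S @ np.kron(grover_coin(n), np.eye(2 ** n))   # U = S (C_0 x 1)
    assert np.allclose(U @ U.T, np.eye(n * 2 ** n))   # one ideal step is unitary
\end{verbatim}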
First, we recall that an $n$-multiport can be fully characterized by an SU($n$) transformation matrix $\mathbf{C}$. The effect of such a multiport on single-excitation states $\ket{\psi}\in {\ensuremath{\mathcal{H}}} C$ is given by the formula \begin{equation} \ket{\psi}=\sum_{d=0}^{n-1} a_d \ket{d} \to \sum_{d,k=0}^{n-1} C_{dk} a_k \ket{d}, \label{eq:C} \end{equation} where $\ket{d}$ denotes the single photon state with the photon being in the $d$ mode, i.e.\ $\ket{d} = \ket0_0 \ldots \ket1_d \ldots \ket0_{n-1}$. We note that a multiport with any particular transformation matrix $\mathbf{C}$ can be realized in a laboratory \cite{Reck_prl73}. To simplify calculations it may be beneficial to choose an indexing of input and output ports such that the connections required to realize the graph $G$ can be made in such a way that each input port has the same index as the corresponding source output port. Therefore the label $d$ can stay unchanged during ``propagation.'' We emphasize that this is not a necessary assumption for a proper definition of SQRW, but an important property that also makes it easier to see that SQRWs are a superset of generalized CQRWs. This indexing of input and output ports for walks on a hypercube is depicted in Fig.~\ref{fig:port-example}a, with some of the actual connections illustrated for a (three-dimensional) cube in Fig.~\ref{fig:port-example}b. Considering an array of identical multiports, an arbitrary input state undergoes the transformation by the same matrix $\mathbf{C}$ for every $x$. Let the output port $d$ of multiport $x$ be connected to multiport $x\oplus e_{dx}$ in the next row. Thus the mode labelled by the source indices $d$ and $x$ is labelled by $d$ and $x\oplus e_{dx}$ in terms of the destination indices. Therefore, the effect of propagation in terms of our standard basis is written as \begin{equation} \sum_{d,x} a_{dx} \ket{d,x} \to \sum_{dkx} C_{dk} a_{kx} \ket{d,x \oplus e_{dx}}. \label{eq:SC} \end{equation} Comparing this formula with Eqs.~(\ref{eq:S_gendef}) and (\ref{eq:C}) we see that this formula corresponds to a $U=SC=S(C_0\otimes\openone)$ transformation where $C_0$ is generated by the matrix $\mathbf{C}$. Due to the local nature of the realization of the coin operation, it is straightforward to realize position dependent coin operations, such as the one required for the quantum walk search algorithm. \begin{figure} \caption{a) Illustration of the labelling of input and output ports of multiports used for the realization of the walk on the $n$-dimensional hypercube. b) Schematic depiction of the setup of the SQRW implementation of the SKW algorithm for $n=3$, with the marked node being $\ensuremath{x_{\mathrm{t}}}=001$.} \label{fig:port-example} \end{figure} In particular, the SKW algorithm \cite{shenvi:052307} is based on the application of two distinct coin operators, e.g. \begin{subequations} \label{eq:std-coin-pair} \begin{eqnarray} C_0 &=& G, \\ C_1 &=& -\openone, \end{eqnarray} \end{subequations} where $G$ is the Grover inversion or diffusion operator $G:= -\openone + 2 |s^C\rangle\langle s^C|$, with $|s^C\rangle=1/\sqrt{n} \sum_{d=0}^{n-1} \ket{d}$ \cite{moore02quantum}. In the algorithm, the application of the two coin operators is conditioned on the result of the oracle operator $ {\ensuremath{\mathcal{O}}} $. The oracle marks one node $\ensuremath{x_{\mathrm{t}}}$ as the target, hence the coin operator becomes conditioned on the node: \begin{equation} C' = C_0 \otimes \openone + (C_1 - C_0)\otimes \ket{\ensuremath{x_{\mathrm{t}}}}\bra{\ensuremath{x_{\mathrm{t}}}}.
\label{eq:pert_coin} \end{equation} When $n$ is large, the operator $U':=SC'$ can be regarded as a perturbed variation of $U=S(C_0\otimes\openone)$. The conditional transformation (\ref{eq:pert_coin}) is straight-forward to implement in the multiport network. For the two coins (\ref{eq:std-coin-pair}) one has to use a simple phase shifter at position $\ensuremath{x_{\mathrm{t}}}$ in every column of the array, and a multiport realizing the Grover matrix $G$ at every other position. The connection topology required to implement a walk on the hypercube is such that in the binary representation we have $e_d=0\ldots1\ldots0$ with 1 being at the $d$'th position, i.e.\ $e_d=2^d$. See Fig.~\ref{fig:port-example}b for a schematic example, when $\ensuremath{x_{\mathrm{t}}}=001$. The above described scheme to realize quantum walks in an array of multiports using as many columns as the number of iterations of $U$ can be reduced to only a single column. To do this, one simply needs to connect the output ports back to the appropriate input ports of the destination multiport in the same column. This feed-back setup is similar to the one introduced in Ref.~\cite{Kosik2005Scattering-mode}. \section{Uniform decay} \label{sec:uniform} We begin our analysis of the effect of errors on the quantum walk search algorithm by concentrating on photon losses. In an optical network, photon losses are usually present due to imperfect optical elements. An efficient model for linear loss is to introduce fictitious beam-splitters with transmittances corresponding to the effective transmission rate (see Fig.~\ref{fig:loss-scheme}). \begin{figure} \caption{Schematic illustration of the photon loss model being used. The losses suffered by each output mode are represented by fictitious beam-splitters with transmittivities $\eta_d$. The beam-splitters incorporate the combined effect of imperfections of the multiport devices, and effects influencing the state during propagation between the multiports (e.g.\ scattering and absorption).} \label{fig:loss-scheme} \end{figure} The simplest case is when all arms of the multiports are characterized by the same linear loss rate $\eta$. The operator describing the effect of decay on a single excitation density operator can then be expressed as \begin{equation} {\mathcal D} (\varrho) = \eta^2 \varrho + (1-\eta^2) \ket{0}\!\bra{0} . \label{eq:homloss_op} \end{equation} The total evolution of the system after one iteration may be written as $\varrho \to {\mathcal D} (U\varrho U^{\dag})$. It is important to note that with the introduction of this error, the original Hilbert space $ {\ensuremath{\mathcal{H}}} G$ of one-photon excitations must be extended by the addition of the vacuum state $\ket0$. The action of the SQRW evolution operator $U$ on the extended Hilbert space follows from the property $U\ket0=\ket0$. Due to the nature of Eq.~(\ref{eq:homloss_op}) and the extension of $U$, one can see that the order of applying the unitary time step and the error operator $ {\mathcal D} $ can be interchanged. Therefore, over $t$ steps the state of the system undergoes the transformation \begin{equation} \varrho \to \eta^{2t} U^t \varrho U^{\dag t}+ (1-\eta^{2t}) \ket{0}\!\bra{0} = {\mathcal D} ^t(U^t \varrho U^{\dag t}). \end{equation} To simplify calculations, we introduce a linear (but non-unitary) operator to denote the effect of the noise operator $ {\mathcal D} $ on the search Hilbert space: \begin{equation} D \ket{\psi} = \eta \ket{\psi}. 
\end{equation} This operator is simply multiplication by a number. It is obviously linear but, for $\eta<1$, not unitary. The operator $D$ does not describe any coherence damping within the one-photon subspace, since it only uniformly decreases the amplitude of the computational states and introduces the vacuum. Since all final statistics are gathered from the search Hilbert space $ {\ensuremath{\mathcal{H}}} G$, it is possible to drop the vacuum from all calculations, and incorporate all information related to it into the norm of the remaining state. In other words, we can think of $DU$ as the time step operator, and relax the requirement of normalization. Using this notation, the effect of $t$ steps is very straightforward to express: \begin{equation} \ket{\psi} \to \eta^t U^{t} \ket{\psi}. \label{eq:homo_nstep} \end{equation} This formula indicates that the inclusion of the effect of uniform loss may be postponed until just before the final measurement. The losses, therefore, may simply be included in the detector efficiency (using an exponential function of the number of iterations). Applying the above model of decay to the quantum walk search algorithm, we define the new step operator $U''=DU'$, and write the final state of the system after $t$ steps as \begin{multline} (U'')^t \ket{\psi_0} = \\ \eta^t \cos(\omega'_0t) \ket{\psi_0}- \eta^t \sin(\omega'_0t)\ket{\psi_1} + \eta^t O\left(\frac{n^{3/4}}{\sqrt{2^n}}\right) \ket{\tilde{r}}. \end{multline} Adopting the notation of Ref.~\cite{shenvi:052307}, the probability of measuring the target state $\ket{x=0}$ at the output after $t$ steps can be expressed as \begin{multline} p_n(\eta,t) = \sum_{d=0}^{n-1} \left|\left< d,0 \left| (U'')^t \psi_0 \right.\right> \right|^2 \\ = \eta^{2t} \sin^2 (\omega'_0 t) \left|\left<\left. R,0 \right| \psi_1\right> \right|^2 + 2^{-n} \eta^{2t} \cos^2 (\omega'_0t) \\ + O(1/2^n). \label{eq:prob_uniform} \end{multline} We know from Ref.~\cite{shenvi:052307} that $\left|\left<\left. R,0\right| \psi_1 \right>\right|^2 = 1/2 - O(1/n)$. Since an overall exponential drop of the success probability is expected due to the $\eta^{2t}$ factor, we search for the maximum at a time $t_f$ before the ideal time-point $|\omega'_0|t=\pi/2$. This guarantees that $\sin^2 (\omega'_0 t_f)$ remains finite; therefore, due to the $2^{-n}$ factor, for large $n$ the second term can be omitted, and it is sufficient to maximize the function \begin{equation} p_n(\eta,t) = \eta^{2t} \sin^2 (\omega'_0t) \left( 1/2 - O(1/n) \right), \label{eq:prob_uniform_approx} \end{equation} with respect to $t$. After substituting the result $|\omega'_0| = 1/\sqrt{2^{n-1}}[ 1 - O(1/n) ] \pm O(n^{3/2}/2^n)$ from Ref.~\cite{shenvi:052307}, these considerations yield the global maximum at $t_f = \sqrt{2^{n-1}} \left[\mathop{\mathrm{acot}}\nolimits (-\ln\eta\sqrt{2^{n-1}}) + O(1/n) \right]$. During operation we set \begin{equation} t_m:= \sqrt{2^{n-1}} \mathop{\mathrm{acot}}\nolimits(-\ln\eta\sqrt{2^{n-1}}), \end{equation} or the closest integer, as the time yielding the maximum probability of success. To simplify the upcoming formulae, we introduce the variables \begin{eqnarray} x &=& -\ln\eta\ \sqrt{2^{n-1}}, \\ \varepsilon &=& -\log_2(1 - \eta). \end{eqnarray} The variable $\varepsilon$ can be regarded as a logarithmic transmission parameter (the ideal case corresponds to $\varepsilon=\infty$, and complete loss to $\varepsilon=0$).
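As a quick numerical sanity check of the stopping-time formula (an illustrative snippet of ours, not part of the original derivation), $t_m$ can be evaluated directly; in the lossless limit $\eta\to 1$ it reduces to the ideal SKW stopping time $(\pi/2)\sqrt{2^{n-1}}$:
\begin{verbatim}
import numpy as np

def acot(y):
    # arccot on [0, infinity): acot(0) = pi/2, acot(y) -> 0 for large y
    return np.pi / 2.0 - np.arctan(y)

def t_m(n, eta):
    """Optimal iteration number t_m = sqrt(2^(n-1)) * acot(-ln(eta)*sqrt(2^(n-1)))."""
    root = np.sqrt(2.0 ** (n - 1))
    return int(round(root * acot(-np.log(eta) * root)))

if __name__ == "__main__":
    for eta in (1.0, 0.999, 0.99):
        print(eta, t_m(10, eta))
    # eta = 1 recovers the ideal stopping time (pi/2)*sqrt(2^9) ~ 36 for n = 10,
    # while increasing loss shifts the optimal measurement to earlier steps.
\end{verbatim}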
When $\varepsilon$ is sufficiently large, the expression $-\ln\eta$ can be approximated to first order in $2^{-\varepsilon}$ and we obtain \begin{equation} x \approx 2^{-\varepsilon + n/2 - 1/2}. \label{eq:x-approx} \end{equation} Upon substituting $t_m$ into (\ref{eq:prob_uniform_approx}) we can use the new variable $x$ to express the sine term as \begin{multline} \sin^2(\omega'_0t_m) =\sin^2(|\omega'_0|t_m) = \sin^2\left[\mathop{\mathrm{acot}}\nolimits x(1+O(1/n))\right] =\\ = \frac1{1+x^2} + \frac{2x\mathop{\mathrm{acot}}\nolimits x}{1+x^2}O(1/n) + \frac{\mathop{\mathrm{acot}}\nolimits^2x}{1+x^2} O(1/n^2). \end{multline} Thus for the maximum success probability $p_n^{\mathrm{max}}(\eta) = p_n(\eta, t_m)$ we obtain \begin{equation} p_n^{\mathrm{max}}(\eta) = \frac{e^{-2x\mathop{\mathrm{acot}}\nolimits x}}{ 1 + x^2} \left[ \frac12 - O(1/n) + x\mathop{\mathrm{acot}}\nolimits x\, O(1/n) \right]. \label{eq:maxprob-homo} \end{equation} This formula is our main result for the case of uniform photon losses. In the large $n$ limit it gives the approximate performance of the SKW search algorithm as a function of the transmission rate and the size of the search space. Since $x\mathop{\mathrm{acot}}\nolimits x$ is bounded in $x$, the accuracy of the term in brackets is bounded by $O(1/n)$. The most notable consequence of the second $O(1/n)$ contribution is that while in the ideal case the probability $1/2$ is an upper bound, in the lossy case deviations from the leading term, \begin{equation} p^{\mathrm{max}}(x) = \frac12 \exp(-2x\mathop{\mathrm{acot}}\nolimits x) \frac1{1+x^2}, \label{eq:maxprob-approx} \end{equation} can be expected in either direction. The functional form of Eq.~(\ref{eq:maxprob-approx}), plotted on Fig.~\ref{fig:plot_prob_dimless}, allows for a universal interpretation of the dependence of success probability on the transmission rate and the size of the search space through the combined variable $x$. For small losses we can use the approximation (\ref{eq:x-approx}) and conclude that the search efficiency depends only on the difference $n/2-\varepsilon$. The approximation is compared with the results of numerical calculations on Fig.~\ref{fig:plot_max_prob_compare}. We can observe the $O(1/n)$ accuracy of the theoretical curves as expected, hence producing poorer fits at smaller ranks. The positive deviations from the theoretical curves observable at low transmission rates are due to the second $O(1/n)$ term of Eq.~(\ref{eq:maxprob-homo}). \begin{figure} \caption{Probability of measuring the target state after the optimal number of iterations according to the approximation in Eq.~(\ref{eq:maxprob-approx} \label{fig:plot_prob_dimless} \end{figure} \begin{figure} \caption{Maximum success probabilities for different ranks of hypercube ($n$) calculated using the theoretical approximation, and numerical simulations. The theoretical curves are drawn with continuous lines of different patterns, and the numerical results are represented by points interconnected with the same line pattern and colour as the theoretical approximates corresponding to the same logarithmic transmission parameter $\varepsilon$.} \label{fig:plot_max_prob_compare} \end{figure} \section{Direction dependent loss} \label{sec:non-uniform} In the present section we no longer assume equal loss rates, and consider the schematically depicted loss model on Fig.~\ref{fig:loss-scheme} with arbitrary $\eta_d$ parameters. 
Because of the high symmetry of the hypercube graph, and the use of mainly identical multiports, we can neglect the position dependence of the transmission coefficients. The operator $ {\mathcal D} $ describing the decoherence mechanism thus acts on a general term of the density operator as \begin{equation} {\mathcal D} (\ket{d,x}\bra{d',x'}) = \eta_d \eta_{d'} \ket{d,x}\bra{d',x'} + \delta_{xx'} \delta_{dd'} \eta_d^2 \ket{0}\!\bra{0} . \end{equation} To describe the overall effect of this operator on a pure state, we re-introduce the linear decoherence operator in a more general form, \begin{equation} D = \sum_{d} \eta_d \ket{d}\bra{d} \otimes \openone, \label{eq:D-inhomo} \end{equation} and use the notation $\{\eta\}$ to denote the set of coefficients $\eta_d$. Due to the symmetry of the system, the sequential order of coefficients is irrelevant. With the re-defined operator the effect of decoherence reads \begin{equation} {\mathcal D} (\varrho) = \varrho' + (1 - \mathop{\mathrm{Tr}}\nolimits \varrho') \ket{0}\!\bra{0} , \label{eq:inhomo-dop} \end{equation} where $\varrho=\ket{\psi}\bra{\psi}$ is the initial state, and the non-vacuum part of the output state is $\varrho'=\ket{\psi'}\bra{\psi'}$, with $\ket{\psi'}= D\ket{\psi}$. Therefore, we can again reduce our problem to calculating the evolution of unnormalized pure states, just as in the uniform case, and use the non-unitary step operator $U''=DU'$ with the more general noise operator. Telling how well the algorithm performs under these conditions is a complex task. First we give a lower bound on the probability of measuring the target node, based on generic assumptions. To begin, we separate the noise operator into two parts \begin{equation} D = \eta + D', \label{eq:D-sep} \end{equation} where, for the moment, we leave $0\le\eta\le1$ undefined. As a consequence of Eq.~(\ref{eq:D-inhomo}) the diagonal elements of $D'$ are $[D']_{dd}=\delta_d=\eta_d-\eta$, and the off-diagonal elements are zero. From Eq.~(\ref{eq:inhomo-dop}) it follows that starting from a pure state $\ket{\psi_0}$, after $t$ non-ideal steps the state of the system can be characterized by the unnormalized vector $\ket{\psi'(t)}$, which is related to the state obtained from the same initial state by $t$ ideal steps as \begin{equation} \ket{\psi'(t)} = \eta^t\ket{\psi(t)} + \ket{r}. \end{equation} The expression of the residual vector $\ket{r}$ reads \begin{equation} \ket{r} = \sum_{k=1}^t (DU')^{t-k} D' \eta^{k-1} \ket{\psi(k)}. \end{equation} To obtain the probability of measuring the target state $\ket{x=0}$ we have to evaluate the formula \begin{equation} p_n(\{\eta\},t) = \sum_{d=0}^{n-1} \left| \eta^t \braket{d,0}{\psi(t)} + \braket{d,0}{r} \right|^2. \label{eq:inhomo-prob-def} \end{equation} Due to the symmetry of the graph and the coins, we use e.g.\ Eq.~(\ref{eq:prob_uniform_approx}) and obtain $\braket{d,0}{\psi(t)} \approx -\sin(\omega'_0t)/\sqrt{2n}$. To obtain a lower bound on $p_n(\{\eta\},t)$ we note that the sum is minimal if $\braket{d,0}{r}=\textrm{const}=K$ for every $d$ (we consider a worst case scenario when all $\braket{d,0}{r}$ are negative). Now we assume that the second term is a correction with an absolute value smaller than that of the first term. For the upper bound on $K$, we use the inequality \begin{equation} \sum_{d=0}^{n-1} \left| \braket{d,0}{r} \right|^2 \le \braket{r}{r}. \end{equation} The norm of $\ket{r}$ can be bound using the eigenvalues of $U$, $D$, and $D'$. 
Let $\eta_{\mathrm{max}} = \max \Set{\eta_d | d=0,\ldots,n-1}$ and $\delta_{\mathrm{max}} = \max \Set{\left|\delta_d\right| | d=0,\ldots,n-1}$. Then we have \begin{equation} \braket{r}{r} \le \sum_{k=1}^t \eta_{\mathrm{max}}^{t-k} \delta_{\mathrm{max}} \eta^{k-1} = \frac{\eta_{\mathrm{max}}}{\eta} \frac{\delta_{\mathrm{max}}}{\eta_{\mathrm{max}}-\eta} (\eta_{\mathrm{max}}^t - \eta^t). \end{equation} Since $U$ is unitary, its contribution to the above formula is trivial. Our upper bound on $|K|$ hence becomes $|K| \le 1/{\sqrt n} (\eta_{\mathrm{max}}\delta_{\mathrm{max}}/\eta) (\eta_{\mathrm{max}}^t - \eta^t) / (\eta_{\mathrm{max}} - \eta)$. Combining the results, we obtain a lower bound on the probability for measuring the target node, \begin{multline} p_n(\{\eta\},t) \ge \eta^{2t} \left\{ \sqrt{p_n^{(i)}(t)} \right. \\ \left. - \frac{\eta_{\mathrm{max}}}{\eta} \frac{\delta_{\mathrm{max}}}{\eta_{\mathrm{max}}-\eta} \left[ \left(\frac{\eta_{\mathrm{max}}}{\eta}\right)^t -1 \right] \right\}^2, \label{eq:p-lower-bound} \end{multline} where $p_n^{(i)}(t)$ stands for the corresponding probability of the ideal (lossless) case. We maximize the lower bound with respect to the arbitrary parameter $\eta$. The procedure can be carried out noting that $\delta_{\mathrm{max}} = \max\{ \eta_{\mathrm{max}}-\eta, \eta-\eta_{\mathrm{min}} \}$, thereby we find the maximum at $\eta=\bar{\eta} \equiv (\eta_{\mathrm{max}} + \eta_{\mathrm{min}})/2$, yielding the formula \begin{equation} p_n(\{\eta\},t) \ge \bar{\eta}^{2t}\left\{ \sqrt{p_n^{(i)}(t)} - ({\eta_{\mathrm{max}}}/{\bar{\eta}}) \left[ \left({\eta_{\mathrm{max}}}/{\bar{\eta}}\right)^t -1 \right] \right\}^2. \label{eq:opt-lower-bound} \end{equation} To interpret the formula (\ref{eq:opt-lower-bound}), we consider the two terms in the curly braces separately. The first term returns the success probability for uniform losses with transmission coefficient $\bar\eta$. The second term may be considered as a correction term that depends not only on some average value of the loss distribution, but also on its degree of non-uniformity in a way that is reminiscent of a mean square deviation. We observe that Eq.~(\ref{eq:p-lower-bound}) provides a useful lower bound only for $\{\eta\}$ distributions violating uniformity to only a small degree. When the expression inside the curly braces becomes negative, the assumption made on the magnitude of the second term of Eq.~(\ref{eq:inhomo-prob-def}) becomes invalid, and therefore the formula does not give a correct lower bound. The estimated lower bound (\ref{eq:p-lower-bound}) decreases with increasing degree of non-uniformity, in accordance with a naive expectation. However, as we shall show later, numerical simulations taking into account the full complexity of the problem provide evidence to the contrary: departure from uniformity can result in improved efficiency. \begin{figure} \caption{The coefficients for the $\left<\eta\right>$ dependent second order term in the Taylor series expansion of $p_n^{\mathrm{\max} \label{fig:fit-taylor} \end{figure} Inspired by the appearance of the average loss rate in the lower bound (\ref{eq:opt-lower-bound}), we introduce the mean and the variance of the direction dependent losses, \begin{equation} \left<\eta\right> = \frac1n \sum_{d=0}^{n-1} \eta_d, \quad \mbox{and} \quad Q = \frac1n \sum_{d=0}^{n-1} \delta_d^2. 
\end{equation} By using the Taylor expansion of the success probability function $p_n^{\mathrm{max}}$ around the point $\eta_d=\left<\eta\right>$, the deviations from the uniform loss case can be well estimated at small degrees of non uniformity. Using the permutation symmetry of $p_n^{\mathrm{max}}$ we can express the Taylor series as \begin{equation} p_n^{\mathrm{max}}(\{\eta\}) = p_n^{\mathrm{max}}(\left<\eta\right>) + B Q^2 + C W^3 + O(\delta_d^4), \label{eq:pmax-Taylor} \end{equation} where $W^3 = 1/n \sum_k \delta_k^3$. We notice that $Q$ may be regarded as the mean deviation of $\{\eta\}$ as a distribution, and hence it is a well-defined statistical property of the random noise. In other words, as long as a second order Taylor expansion gives an acceptable approximation, the probability of success depends only on the statistical average and variance ($\left<\eta\right>$, $Q$) of the noise and not on the specific values of $\{\eta\}$. Using numerical simulations, we have determined the values of $B$ up to rank $n=10$, and studied the impact of higher order terms. The second order Taylor coefficients were determined by fitting over the numerically obtained success probabilities at data points where the higher order moments of the loss distributions were small. An example plot of $B$ is provided on Fig.~\ref{fig:fit-taylor}, for a system $n=8$. The higher order effects were suppressed by selecting the lowest values of $W$ from several repeatedly generated random distributions $\{\eta\}$. A general feature exhibited by all studied cases is that the second order coefficients satisfy the inequality \begin{equation} B\ge 2^{-n}. \label{eq:lowerb-second} \end{equation} It is remarkable that this tight lower bound depends only on the size of the system. The dependence of $B$ on $\left<\eta\right>$ is monotonous with discontinuities. We found the number of discontinuities to be proportional to the rank $n$. Our numerical studies have shown that the value of $B$ before the first discontinuity is always a constant, and equal to the empirical lower bound (\ref{eq:lowerb-second}). \begin{figure} \caption{The relative improvement of maximum success probability comparing direction dependent loss to uniform loss with identical average loss rates. The difference is measured as $[p_n^{\mathrm{max} \label{fig:max-prob-Q35} \end{figure} To plot the success probabilities corresponding to arbitrary random coefficients we used the pair of variables $\left<\eta\right>$ and $Q$. On these plots, the higher order terms cause a ``spread'' of the appearing curves. A sample plot is displayed on Fig.~\ref{fig:max-prob-Q35} where the relative improvement is compared to the uniform case, in percentages. We observe a general increase of efficiency as compared to the uniform case with the same average loss rate. A general tendency is that for smaller values of $\left<\eta\right>$ the improvement is larger, interrupted, however, by discontinuities. These discontinuities closely follow those of the second order coefficient $B$. 
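If one accepts the empirical bound (\ref{eq:lowerb-second}), the first two terms of the expansion (\ref{eq:pmax-Taylor}) already give a quick numerical estimate for mildly non-uniform losses. The sketch below is an illustration only: it uses the leading-order uniform-loss formula (\ref{eq:maxprob-approx}) for the first term, takes $B=2^{-n}$, and assumes that $Q^2$ denotes the variance $\frac1n\sum_d \delta_d^2$ of the transmission coefficients.
\begin{verbatim}
import numpy as np

def p_max_uniform(n, eta):
    """Leading-order maximum success probability for uniform loss eta;
    O(1/n) corrections are dropped."""
    x = -np.log(eta) * np.sqrt(2.0 ** (n - 1))
    acot_x = np.pi / 2 - np.arctan(x)
    return 0.5 * np.exp(-2 * x * acot_x) / (1 + x ** 2)

def p_max_second_order(n, etas):
    """Second-order estimate for direction-dependent losses, using the
    empirical bound B >= 2^-n for the quadratic coefficient.
    Assumption: Q^2 is read as the variance of the eta_d."""
    etas = np.asarray(etas, dtype=float)
    mean_eta = etas.mean()                  # <eta>
    Q_sq = np.mean((etas - mean_eta) ** 2)  # variance of the loss distribution
    return p_max_uniform(n, mean_eta) + 2.0 ** (-n) * Q_sq

# Example: rank n = 8 with a mildly non-uniform loss distribution
rng = np.random.default_rng(0)
etas = np.clip(0.98 + 0.01 * rng.standard_normal(8), 0.0, 1.0)
print(p_max_second_order(8, etas))
\end{verbatim}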
\begin{figure} \caption{The difference of the maximum success probabilities, in the presence of direction dependent loss with coefficients $\{\eta\} \label{fig:max-prob-max} \end{figure} The numerical studies, involving the generation of 1000 sets of uniformly randomly generated transmission coefficients for each of the systems of up to sizes $n=10$, indicate that with the help of Eq.~(\ref{eq:lowerb-second}) the first two terms of the expansion Eq.~(\ref{eq:pmax-Taylor}) can be used to obtain a general lower bound: \begin{equation} p_n^{\textrm{max}}(\{\eta\}) \ge p_n^{\mathrm{max}} (\left<\eta\right>) + 2^{-n}Q^2. \end{equation} The inequality implies that the overall contribution from higher order terms is positive, or always balanced by the increase of $B$. The appeal of this lower bound is that it depends only on the size of the system $N=2^n$, and the elementary statistical properties of the noise ($\left<\eta\right>$, $Q$). Therefore, together with the formula (\ref{eq:maxprob-approx}) for uniform loss, a straight-forward estimation of success probability is possible before carrying out an experiment. Up to now, we concentrated on comparing the performance of the search algorithm suffering non-uniform losses with those suffering uniform loss with coefficient equal to the average of the non-uniform distribution. Another physically interesting question is how attenuation alone affects search efficiency. We can formulate this question using the notations above as follows. Consider a randomly generated distribution $\{\eta\}$ and compare the corresponding success probability with the one generated by a uniform distribution with transmission coefficient $\eta_{\mathrm{max}} = \max \{\eta\}$. We chose $Q$ as a measure of how much an $\eta_{\mathrm{max}}$ uniform distribution needs to be altered to obtain $\{\eta\}$, and made the comparisons using the same set of samples. A typical plot is presented on Fig.~\ref{fig:max-prob-max}. It appears that as we start deviating from the original uniform distribution, an initial drop of efficiency is followed by a region where improvement shows some systematic increase. However, it is still an open question, whether it is really a general feature that for some values of $Q$ the efficiency is always increased. On the other hand these plots provide clear evidence that for a significant number of cases the difference $p_n^{\mathrm{max}}(\{\eta\}) - p_n^{\mathrm{max}}(\eta_{\mathrm{max}})$ is positive. In other words, rather counter-intuitively, we can observe examples where increased losses result in the improvement of search efficiency. Since the time evolution with losses is non-unitary, the improvement cannot be trivially attributed to the fact that the Grover operator is not the optimal choice for the marked coin. \section{Phase errors} \label{sec:phase} In the present section we discuss another type of errors typically arising in optical multiport networks. These errors are due to stochastic changes of the optical path lengths relative to what is designated, and manifest as undesired random phase shifts. Depending on how rapidly the phases change, we may work in two complementary regimes. In the ``phase fluctuation'' regime the phases at each iteration are different. These errors can typically be caused by thermal noise. In the ``static phase errors'' regime, the undesired phases have slow drift such that on the time scale of an entire run of the quantum algorithm their change is insignificant. 
The origin of such errors can be optical element imperfections, optical misalignments, or a slow stochastic drift in one of the experimental parameters. Phase errors in the fluctuation regime have been studied in Ref.~\cite{Kosik2006Quantum-walks-w} for walks on $N$ dimensional lattices employing the generalized Grover or Fourier coin. The impact of a different type of static error on the SKW algorithm has been analyzed in Ref.~\cite{Li2006Gate-imperfecti}. To begin the formal treatment, let $F$ denote the operator introducing the phase shifts, and write it as \begin{equation} F(\{\varphi\}) = \sum_{d,x} e^{i\varphi_{dx}} \ket{d,x}\!\bra{d,x}. \end{equation} This operator is unitary, hence the step operator \begin{equation} U(\{\varphi\}) = S F(\{\varphi\}) C', \label{eq:phase-noise-U} \end{equation} that depends on the phases $\{ \varphi_{dx} \vert d=0..n-1, x=0..2^n-1\}$ is unitary as well. In case of phase fluctuations, at each iteration $t$ we have the parameters $\varphi_{dx}^{(t)}$ such that all $\varphi_{dx}^{(t)}$ are independent random variables for every $d$, $x$ and $t$, according to some probability distribution. In case of static phase errors, $\varphi_{dx}^{(t)}$ and $\varphi_{dx}^{(t')}$ are considered to be the same random variables for every pair of $d$ and $x$. The formalism of Ref.~\cite{Kosik2006Quantum-walks-w} can be applied to the walk on the hypercube, and extended to the case of non-uniform coins and position dependent phases. Namely, using the shorthand notations $D=\left\{0,1,2,\ldots,n-1\right\}$ and \begin{equation} E(k,l) = \bigoplus_{j=l}^k e_{a_j}, \end{equation} the state after $t$ iterations can be expressed as \begin{multline} \ket{\psi(\{\varphi\},t)} = \frac1{\sqrt{n2^n}} \sum_{x_0\in V} (-1)^{\delta_{\ensuremath{x_{\mathrm{t}}} x_0}} \sum_{\underline{a}\in D^t} e^{i\varphi(\underline{a}, x_0)} \\ \times \tilde \Xi_{\ensuremath{x_{\mathrm{t}}}} (\underline{a}, x_0) \ket{a_1, x_0 \oplus E(t,1)}, \end{multline} where \begin{multline} \tilde \Xi_{\ensuremath{x_{\mathrm{t}}}} (\underline{a}, x_0) = \\ \prod_{j=1}^{t-1} \left( C^{(0)}_{a_ja_{j+1}} + [C^{(1)} - C^{(0)}]_{a_ja_{j+1}} \delta_{\ensuremath{x_{\mathrm{t}}} \oplus x_0, E(t,j+1)} \right), \end{multline} and $\varphi(\underline{a},x_0)= \sum_{j=1}^t \varphi^{(t+1-j)}_{a_j,x_0 \oplus E(t,j-1)}$. For the standard SKW algorithm, the coin matrices are $C^{(0)}_{aa'} = 2/n-\delta_{aa'}$ and $C^{(1)}_{aa'}=-\delta_{aa'}$, however, the SKW algorithm is reported to work with more general choices of operators $C_{0/1}$ \cite{shenvi:052307}. For the following study, we express the probability of finding the walker at position $x$ after $t$ iterations as the sum $p_n(x,\{\varphi\},t)=p_n^I(x,\{\varphi\},t) + p_n^C(x,\{\varphi\},t)$, such that the incoherent and coherent contributions are \begin{eqnarray} p_n^I(x,\{\varphi\},t) &=& \frac1{n2^n} \sum_{\underline{a}\in D^t} \left| \tilde\Xi_{\ensuremath{{\tilde x}_{\mathrm{t}}}}(\underline{a}) \right|^2, \label{eq:px-incoh} \\ p_n^C(x,\{\varphi\},t) &=& \frac1{n2^n} \sum_{\underline{a}\neq \underline{a}' } \Phi_{\underline{a}'}^* \Phi_{\underline{a}} \tilde\Xi_{\ensuremath{{\tilde x}_{\mathrm{t}}}}(\underline{a}')^* \tilde\Xi_{\ensuremath{{\tilde x}_{\mathrm{t}}}}(\underline{a}) \delta_{a_1'a_1}, \label{eq:px-coh} \end{eqnarray} where $\ensuremath{{\tilde x}_{\mathrm{t}}}=\ensuremath{x_{\mathrm{t}}} \oplus x$. 
The appearing phase factors are \begin{equation} \Phi_{\underline{a}} = (-1)^{\delta_{\ensuremath{{\tilde x}_{\mathrm{t}}}, E(t,1)}} e^{i\varphi(\underline{a}, x \oplus E(t,1))}, \end{equation} and $\tilde \Xi_{\ensuremath{{\tilde x}_{\mathrm{t}}}}(\underline{a})=\tilde \Xi_{\ensuremath{{\tilde x}_{\mathrm{t}}}}(\underline{a}, E(t,1))$, i.e. \begin{equation} \tilde \Xi_{\ensuremath{{\tilde x}_{\mathrm{t}}}} (\underline{a}) = \prod_{j=1}^{t-1} \left( C^{(0)}_{a_ja_{j+1}} + [C^{(1)} - C^{(0)}]_{a_ja_{j+1}} \delta_{\ensuremath{{\tilde x}_{\mathrm{t}}}, E(j,1)} \right). \label{eq:Xi_tg} \end{equation} Note, that when the probability of finding the walker at the target node $\ensuremath{x_{\mathrm{t}}}$ is to be calculated we must set $x=\ensuremath{x_{\mathrm{t}}}$, therefore, we have $\ensuremath{{\tilde x}_{\mathrm{t}}}=0$. In the following we shall show that the incoherent contribution is constant, \begin{equation} p_n^I(x, \{\varphi\}, t) = \frac1{2^n}, \label{eq:px-coh-const} \end{equation} for any two unitary coins $C_{0/1}$. Consequently, $p_n^I$ is constant also for balanced coins such as those in Eq.~(\ref{eq:std-coin-pair}). The summations in Eq.~(\ref{eq:px-incoh}) can be rearranged in increasing order of indices of $a_j$, yielding \begin{multline} p_n^I(x, \{\varphi\}, t) = \\ \frac1{n2^n} \sum_{a_1,a_2=0}^{n-1} \left| C^{(0)}_{a_1a_2} + [C^{(1)}- C^{(0)}]_{a_1a_2} \delta_{\ensuremath{{\tilde x}_{\mathrm{t}}}, e_{a_1}} \right|^2 \times \cdots \\ \times \sum_{a_{t-1}=0}^{n-1} \left| C^{(0)}_{a_{t-2}a_{t-1}} + [C^{(1)} - C^{(0)}]_{a_{t-2}a_{t-1}} \delta_{\ensuremath{{\tilde x}_{\mathrm{t}}}, E(t-2,1)} \right|^2 \\ \times \sum_{a_t=0}^{n-1} \left| C^{(0)}_{a_{t-1}a_t} + [C^{(1)} - C^{(0)}]_{a_{t-1}a_t} \delta_{\ensuremath{{\tilde x}_{\mathrm{t}}}, E(t-1,1)} \right|^2. \end{multline} Since $E(t-1,1)$ depends on $a_j$ only when $j\leq t-1$, and due to the unitarity of the coins $\langle a_{t-1}| C_0^{\dag} C_0 | a_{t-1} \rangle = \langle a_{t-1}|C_1^{\dag} C_1 | a_{t-1} \rangle = 1$, the summation over $a_t$ can be evaluated and we obtain $1$. Hence, we see that $p_n^I(x, \{\varphi\}, t) = p_n^I(x, \{\varphi\}, t-1)$, and this implies Eq.~(\ref{eq:px-coh-const}) by induction. \begin{figure} \caption{The averaged (1000 samples) probability of measuring the target node, when $n=6$ and $\Delta\varphi = 3^o, 6^o, 9^o, 12^o$. The tendency of the success probability to a constant, non-zero value can be observed on this numerically obtained plot. It is also observable that a larger variance results in a smaller asymptotic value.} \label{fig:phase-noise-evolution} \end{figure} The average probability $\bar{p}_n(x,t)$ of finding the walker at node $x$ is obtained by averaging the random phases according to their appropriate probability distribution. Using Eq.~(\ref{eq:px-coh-const}) this probability can be expressed as \begin{equation} \bar{p}_n(x,t) = \left<p_n(x, \{\varphi\}, t) \right> = \frac1{2^n} + \left< p_n^C(x, \{\varphi\}, t) \right>, \end{equation} where $\left<\ldots\right>$ denotes taking the average for each random variable $\varphi^{(t)}_{dx}$ in case of phase fluctuations, and for each $\varphi_{dx}$ in case of static phase errors. It is reasonable to assume that each random variable has the same probability distribution. To analyze the impact of phase errors on the search efficiency, we study the behaviour of the coherent term $\left< p_n^C(x, \{\varphi\}, t) \right>$ for different random distributions. 
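The two regimes differ only in how the random phases are drawn. Purely as an illustration (the walk evolution itself is not simulated here), the two kinds of phase configurations entering the diagonal operator $F(\{\varphi\})$ could be generated as in the following sketch, with the rank, the number of iterations and the width of the distribution chosen arbitrarily.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, T = 6, 100                 # rank of the hypercube, number of iterations
N = 2 ** n                    # number of nodes
dphi = np.deg2rad(6.0)        # width of the Gaussian phase distribution

# Phase fluctuations: phi_{dx}^{(t)} redrawn independently at every step
fluctuating = rng.normal(0.0, dphi, size=(T, n, N))

# Static phase errors: one configuration phi_{dx}, frozen for the whole run
static = np.broadcast_to(rng.normal(0.0, dphi, size=(n, N)), (T, n, N))

# At step t, F({phi}) multiplies the amplitude of |d, x> by exp(1j * phi[t, d, x])
phase_factors_step0 = np.exp(1j * static[0])
\end{verbatim}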
In case of phase fluctuations characterized by a uniform distribution, the coherent term immediately vanishes and we obtain $\bar{p}_n(x,t) = 1/2^n$. This case can be considered as the classical limit of the quantum walk. Therefore, we conclude that the classical limit of the SKW algorithm is not a search algorithm, independently of the two unitary coins used. Assuming a Gaussian distribution of random phases is motivated by the relation of each phase variable $\varphi$ to the optical path length. The changes in the optical path lengths which introduce phase shifts are not restricted to a $2\pi$ interval. In what follows, we assume that the random phases have a zero centered Gaussian distribution with a variance $\Delta\varphi$. We arrive at the classical limit even when the phase fluctuations have a finite width Gaussian distribution, simply by repeatedly applying the time evolution operator $U\{\varphi\}$. For such Gaussian distribution, the coherent term exhibits exponential decrease with time, a behaviour also confirmed by our numerical calculations. In the static phase error regime the mechanism of cancellation of phases is different than in the fluctuation regime, and more difficult to study analytically. For uniform random distribution we expect a sub-exponential decay of the coherent term to zero. For a zero centered Gaussian distribution with variance $\Delta \varphi$ we performed numerical simulations using the standard two coins of Eq.~(\ref{eq:std-coin-pair}). \begin{figure} \caption{Time dependence of success probabilities for two different phase configurations, numerically calculated for a system of rank $n=6$. The difference in frequencies of the major oscillations is clearly observable for larger times.} \label{fig:phase-noise-diff} \end{figure} The numerical results for the success probability $\bar{p}_n(\ensuremath{x_{\mathrm{t}}}, t)$ for several values of $\Delta\varphi$ are plotted on Fig.~\ref{fig:phase-noise-evolution}. The data points were obtained by calculating success probabilities for $1000$ randomly generated phase configurations and taking their averages at each time step $t$. By studying the repetition of the random phase configuration we come to several remarkable conclusions. First, the time evolution of the success probability tends (on a long time scale, $t\gg t_{f}$) to a finite, non-zero constant value. Consequently, being subject to static phase errors, the SKW algorithm retains its utility as search algorithm. Second, the early steps of the time evolution are characterized by damped oscillations reminding of a collapse. Third, the smaller the phase noise the larger is the long time stationary value to which the system evolves. We have plotted the stationary values obtained by numerical calculations, against the rank of the hypercube on Fig.~\ref{fig:phase-noise}. Better insight into the above features can be gained by examining the shape of the individual runs of the algorithm with the given random phase configurations. As it can be seen on Fig.~\ref{fig:phase-noise-diff}, the success probabilities for different runs display the typical oscillations around a non-zero value. They differ slightly in their frequencies depending on the random phases chosen, hence when these oscillations are summed up we get the typical collapse behaviour. Also, since these frequencies continuously fill up a band specified by the width of the Gaussian, we expect no revivals to happen later. 
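The collapse of the averaged oscillations can be mimicked by a toy calculation that retains only the spread of oscillation frequencies; the following sketch is such an illustration, with all numbers chosen arbitrarily, and does not simulate the walk itself.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
omega0, rel_spread, samples = 1.0, 0.03, 1000
omegas = omega0 * (1.0 + rel_spread * rng.standard_normal(samples))
t = np.arange(200)

# Each "run" oscillates as sin^2(omega t / 2); averaging over the narrow band
# of frequencies produces damped oscillations that settle to a constant value,
# with no revivals for a continuous frequency distribution.
average = np.mean(np.sin(np.outer(omegas, t) / 2.0) ** 2, axis=0)
\end{verbatim}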
For higher order hypercubes the success probability drops almost to zero already for very moderate phase errors, resembling the behaviour seen in Fig.~\ref{fig:plot_prob_dimless}. \begin{figure} \caption{The long time stationary values of the success probability (obtained by averaging over 1000 samples) against the size of the search space, for $\Delta\varphi=0^\circ,3^\circ,6^\circ,15^\circ$.} \label{fig:phase-noise} \end{figure} \section{Conclusions} \label{sec:conclusions} We studied the SQRW implementation of the SKW search algorithm and analyzed the influence of the two most common types of disturbances, namely photon losses and phase errors, on its performance. Our main result for the photon-loss-affected SQRW search algorithm is that the introduction of a non-uniform distribution of the loss can significantly improve the search efficiency compared to uniform loss with the same average. In many cases, even the sole increase of losses in certain directions may improve the search efficiency. Based mostly on numerical evidence, we have established a lower bound for the search probability as a function of the average and variance of the randomly distributed direction dependent loss. We concentrated our analysis on two complementary regimes of phase errors. When the system is subject to rapid phase fluctuations, the classical limit of the quantum walk is approached. We have shown that in this limit the SKW algorithm loses its applicability to the search problem for any pair of unitary coins. On the other hand, we showed that when the phases are kept constant during each run of the search, the success rate does not drop to zero, but approaches a finite value. In its mechanism, this effect is reminiscent of the exponential localization found in optical networks \cite{TJS}. Therefore, in the long-time limit, static phase errors are less destructive than rapidly fluctuating phase errors. \acknowledgments{ Support by the Czech and Hungarian Ministries of Education (CZ-2/2005), by MSMT LC 06002 and MSM 6840770039 and by the Hungarian Scientific Research Fund (T049234 and T068736) is acknowledged. } \end{document}
\begin{document} \title{Braid Index Bounds Ropelength From Below} \author{Yuanan Diao} \address{Department of Mathematics and Statistics\\ University of North Carolina Charlotte\\ Charlotte, NC 28223} \email{[email protected]} \subjclass[2010]{Primary: 57M25; Secondary: 57M27} \keywords{knots, links, braid index, ropelength.} \begin{abstract} For an un-oriented link $\mathcal{K}$, let $L(\mathcal{K})$ be the ropelength of $\mathcal{K}$. It is known that when $\mathcal{K}$ has more than one component, different orientations of the components of $\mathcal{K}$ may result in different braid indices. We define the largest braid index among all braid indices corresponding to all possible orientation assignments of $\mathcal{K}$ as the {\em absolute braid index} of $\mathcal{K}$ and denote it by $\textbf{B}(\mathcal{K})$. In this paper, we show that there exists a constant $a>0$ such that $L(\mathcal{K})\ge a \textbf{B}(\mathcal{K})$ for any $\mathcal{K}$, {\em i.e.}, the ropelength of any link is bounded below by its absolute braid index (up to a constant factor). \end{abstract} \maketitle \section{Introduction}\label{s1} An important geometric property of a link is its ropelength, defined (intuitively) as the minimum length of a unit thickness rope that can be used to tie the link. Let $\mathcal{K}$ be an un-oriented link, $Cr(\mathcal{K})$ be the minimum crossing number of $\mathcal{K}$ and $L(\mathcal{K})$ be the ropelength of $\mathcal{K}$. One way to understand the ropelength of a link is to associate it with the topological complexity of the link as measured by some link invariant. For example, one can attempt to express the ropelength of a link, or an estimate of it, as a function of the minimum crossing number of the link. This has turned out to be a very difficult problem in general and results are limited. For example, while it has been shown that $L(\mathcal{K})\ge 31.32$ for any nontrivial knot $\mathcal{K}$ \cite{Denne2006}, the precise ropelength of any given nontrivial knot is not known. It has been shown in \cite{Buck, Buck2} that in general $L(\mathcal{K})\ge 1.105 (Cr(\mathcal{K}))^{3/4}$ and that this $3/4$ power can be attained by a family of infinitely many links \cite{Cantarella1998, Diao1998}. On the other hand, not all links attain this $3/4$ power law, since there exist families of infinitely many links such that the ropelength of a link from any of these families grows linearly with the crossing number of the link \cite{Diao2003}. This result is based on the fact that the ropelength of a link is bounded below by the bridge number of the link (multiplied by some positive constant) and the fact that there are families of (infinitely many) links whose bridge numbers are proportional to their crossing numbers. To the knowledge of the author, the bridge number is the only known link invariant that has been used to bound the ropelength of a link from below in this way. Of course, if a link has a small bridge number, then we would not be able to establish a good lower bound of the ropelength of the link using its bridge number. In this paper we show that the braid index of a link can also be used to bound the ropelength of the link from below (again up to a positive constant multiple). For an un-oriented link $\mathcal{K}$ with more than one component, different orientations of the components of $\mathcal{K}$ may result in different braid indices (the braid index being an invariant of oriented links).
We will call the largest braid index among all braid indices corresponding to different orientation assignments of $\mathcal{K}$ the {\em absolute braid index} of $\mathcal{K}$ and denote it by $\textbf{B}(\mathcal{K})$. In this paper, we show that there exists a constant $a>0$ such that $L(\mathcal{K})\ge a \textbf{B}(\mathcal{K})$ for any $\mathcal{K}$, {\em i.e.}, the ropelength of any link is bounded below by its absolute braid index (up to a constant factor). Since the bridge number of a link is smaller than or equal to its absolute braid index, and many links with bounded bridge numbers can have absolute braid indices proportional to their crossing numbers, this result allows us to establish better ropelength lower bounds for many more links. \section{Special cord diagrams and their Seifert diagrams}\label{s2} \begin{definition}\label{cord} {\em Let $\mathcal{K}$ be an oriented link and $\mathcal{D}$ be a projection diagram of $\mathcal{K}$. Without loss of generality we will assume that the projection plane is $z=0$. Let $\alpha_1$, $\alpha_2$, ..., $\alpha_n$ be simple arcs of $\mathcal{D}$. We say that $\alpha_1$, $\alpha_2$, ..., $\alpha_n$ form a {\em special cord diagram} $R$ (associated with $\mathcal{D}$) if the following conditions hold: (i) the end points of $\alpha_1$, $\alpha_2$, ..., $\alpha_n$ do not cross each other and are distributed on a topological circle $C$ (in the projection plane $z=0$); (ii) the interiors of $\alpha_1$, $\alpha_2$, ..., $\alpha_n$ are completely within the disk $\overline{C}$ bounded by $C$; (iii) $\mathcal{D}\setminus \cup_{1\le j\le n}\alpha_j$ does not intersect $\overline{C}$; (iv) the arc $\gamma_j$ on $\mathcal{K}$ corresponding to $\alpha_j$ resides in a slab $Z_j$ defined by $z_j^\prime\le z\le z_j^{\prime\prime}$ for some $z_j^\prime\le z_j^{\prime\prime}$; (v) $Z_k\cap Z_j=\emptyset$ if $j\not=k$.} \end{definition} Notice that by conditions (iv) and (v), a new special cord diagram $R^\prime$ can be obtained from a cord diagram $R$ by replacing each $\alpha_j$ with a simple curve $\alpha_j^\prime$: the choice of $\alpha_j^\prime$ is arbitrary so long as it is bounded within $C$ and is the projection of a curve $\gamma_j^\prime$ that resides within the slab $Z_j$ and shares the same end points with $\gamma_j$. The result is still a special cord diagram associated with $\mathcal{D}^\prime$, where $\mathcal{D}^\prime$ is the resulting new projection diagram, which is still a projection diagram of $\mathcal{K}$. We say that $R^\prime$ is {\em equivalent} to $R$. In other words, the cords have fixed end points but otherwise can move freely within $C$. \begin{definition}\label{Seifert_diagram}{\em A Seifert diagram of a special cord diagram is the diagram obtained from the special cord diagram by smoothing all crossings in the diagram.} \end{definition} Note: since we are only interested in the Seifert diagrams of special cord diagrams, the over/under strands of the crossings in the diagrams are not important to us and will not be shown in our figures. Also, in a Seifert diagram of a special cord diagram, there are only two types of curves: topological circles (Seifert circles) and simple curves with their end points on $C$ (we will call these {\em partial Seifert circles}). See Figure \ref{cord_diagram} for an illustration of a special cord diagram and its Seifert diagram. \begin{figure}[htb!]
\includegraphics[scale=1.0]{cord} \caption{Left: A special cord diagram; Right: Its Seifert diagram. \label{cord_diagram}} \end{figure} Let us assign $C$ an (arbitrary) orientation. Consider an oriented simple curve $\beta$ with its end points on $C$ and its interior bounded within $C$. We call the arc $\beta^\prime$ of $C$ that shares end points with $\beta$ and is parallel to $\beta$ (in terms of their orientations) the {\em companion} of $\beta$, and the region bounded by $\beta$ and $\beta^\prime$ the {\em domain} of $\beta$. \begin{definition}\label{coherent_def}{\em A special cord diagram $R$ is said to be {\em coherent} if we can choose an orientation of $C$ such that the Seifert diagram of $R$ satisfies the following conditions: (i) its Seifert circles (if there are any) are concentric to each other and all share the same orientation as $C$; (ii) the domain of any partial Seifert circle cannot contain any Seifert circles; (iii) if the domain of a partial Seifert circle contains another partial Seifert circle, it must contain the entire domain of that partial Seifert circle.} \end{definition} The special cord diagram shown in Figure \ref{cord_diagram} is not coherent: no matter how we choose the orientation of $C$, there is always a partial Seifert circle whose domain contains some Seifert circles. Figure \ref{coherent} shows a coherent special cord diagram that is equivalent to it. The following lemma assures us that this is always possible. \begin{figure}[htb!] \includegraphics[scale=1.0]{cord2} \caption{Left: A special cord diagram equivalent to the special cord diagram shown in the left of Figure \ref{cord_diagram}; Right: The corresponding Seifert diagram is coherent with the orientation of $C$ as shown in the figure. \label{coherent}} \end{figure} \begin{lemma}\label{Lemma1} Let $R$ be a special cord diagram with $n$ cords. Then there exists a coherent special cord diagram $R^\prime$ that is equivalent to $R$. Furthermore, the Seifert diagram of $R^\prime$ contains exactly $n$ partial Seifert circles and at most $n-1$ Seifert circles. \end{lemma} \begin{proof} Let us assign $C$ the clockwise orientation. We will prove the lemma by induction. The case of $n=1$ is trivial. Assume that the statement of the lemma holds for $n=n_0\ge 1$ and let us consider the case of $n=n_0+1$. Consider first the special cord diagram $R_{n_0}$ containing the first $n_0$ cords. By the induction assumption, there exists a coherent special cord diagram $R^\prime_{n_0}$ that is equivalent to $R_{n_0}$ such that its Seifert diagram $S_{n_0}$ contains $n_0$ partial Seifert circles and at most $n_0-1$ (concentric) Seifert circles, all of which have the clockwise orientation. We will construct $R^\prime_{n_0+1}$ by choosing an appropriate $\alpha_{n_0+1}^\prime$ (namely the last cord, appropriately modified) starting from $R^\prime_{n_0}$. There are two cases to consider. In the first case, the intersection of the companion of $\alpha_{n_0+1}$ (the last cord of $R_{n_0+1}$) with the companion of any other partial Seifert circle is either empty or a simply connected arc on $C$. Figure \ref{intersect2} illustrates how $\alpha_{n_0+1}^\prime$ may be chosen and the resulting Seifert diagram after all crossings have been smoothed. Notice that in the illustration we only show the Seifert circles and partial Seifert circles of $R^\prime_{n_0+1}$.
Although $\alpha_{n_0+1}^\prime$ may have additional crossings with cords in the original diagram $R^\prime_{n_0}$, once these crossings are smoothed, due to the orientation of the curves involved, it is easy to verify that the resulting Seifert circles and partial Seifert circles are as illustrated in Figure \ref{intersect2}. It is clear that in this case we obtain a new coherent Seifert diagram with one additional partial Seifert circle and no additional Seifert circles. Thus the statement of the lemma holds for this case. In the second case, the intersection of the companion of $\alpha_{n_0+1}$ with the companion of at least one other partial Seifert circle consists of two disconnected simple arcs on $C$, as shown in the left of Figure \ref{new_Seifert}. The middle of Figure \ref{new_Seifert} shows how $\alpha_{n_0+1}^\prime$ is constructed and the right side shows the resulting Seifert diagram: it has one additional partial Seifert circle and one additional Seifert circle. Again the statement of the lemma holds, and this proves the lemma. \end{proof} \begin{figure}[htb!] \includegraphics[scale=1.0]{intersect2} \caption{Left: The companion of $\alpha_{n_0+1}$ is shown in thick line; its intersection with the companion of any other partial Seifert circle is either empty or a simply connected arc on $C$. The orientations of the Seifert circles and partial Seifert circles are parallel to that of $C$ (not shown in the figure); Middle: The choice of $\alpha^\prime_{n_0+1}$; Right: The resulting Seifert diagram. \label{intersect2}} \end{figure} \begin{figure}[htb!] \includegraphics[scale=1.0]{new_Seifert} \caption{Left: The companion of $\alpha_{n_0+1}$ is shown in thick line. Notice that its intersection with the companion of one partial Seifert circle consists of two disjoint arcs on $C$. The orientations of the Seifert circles and partial Seifert circles are parallel to that of $C$ (not shown in the figure); Middle: The choice of $\alpha^\prime_{n_0+1}$; Right: The resulting Seifert diagram. \label{new_Seifert}} \end{figure} \section{Absolute braid index bounds the ropelength from below}\label{s3} Let us first consider links realized on the cubic lattice. Let $\mathcal{K}$ be an un-oriented link and $K_c$ a realization of $\mathcal{K}$ on the cubic lattice. The length of $K_c$ is denoted by $L(K_c)$, and the minimum of $L(K_c)$ over all lattice realizations $K_c$ of $\mathcal{K}$ is called the {\em minimum step number} of $\mathcal{K}$ and is denoted by $L_c(\mathcal{K})$. One nice property of $L_c(\mathcal{K})$ is that in theory it can be determined through exhaustive search. For example, it has been shown that $L_c(\mathcal{K})=24$, $30$ and $34$ for the trefoil \cite{Diao1993}, the figure-8 knot and the $5_1$ knot \cite{Scharein2009}, respectively. However, in reality the precise value of $L_c(\mathcal{K})$ is also very difficult to determine, and the above three examples are in fact the only known results for nontrivial knots. A line segment on $K_c$ between two neighboring lattice points is called a {\em step}. A step that is parallel to the $x$-axis is called an $x$-step; $y$-steps and $z$-steps are similarly defined. Let $x(K_c)$, $y(K_c)$ and $z(K_c)$ be the total numbers of $x$-steps, $y$-steps and $z$-steps respectively, then $x(K_c)+y(K_c)+z(K_c)=L(K_c)$. Without loss of generality, let us assume that $z(K_c)\ge \max\{x(K_c),y(K_c)\}$, hence $z(K_c)\ge (1/3)L(K_c)$ and $x(K_c)+y(K_c)=L(K_c)-z(K_c)\le (2/3)L(K_c)$.
We now consider the projection of $K_c$ to the $xy$-plane. The resulting diagram is not a regular one. However, if we tilt $K_c$ slightly, then we will obtain a regular projection of $K_c$ and all crossings will occur near a lattice point on the $xy$-plane. At a lattice point where we see crossings of the projection, consider the arcs of the projection bounded by a unit square centered at the lattice point, as shown in Figure \ref{square}. It is rather obvious that these arcs define a special cord diagram, with each arc residing in a slab that is disjoint from the slabs containing the other arcs, since each cord consists of two half steps that are parts of some $x$- and/or $y$-steps, and possibly some consecutive $z$-steps; hence two different cords are separated by a slab of thickness near one (without the tilt it would be precisely one). \begin{figure}[htb!] \includegraphics[scale=.8]{square} \caption{Left: the top view of a unit length square centered at a lattice point where the projected strands of $K_c$ intersect; Right: A slightly tilted projection leads to a special cord diagram. \label{square}} \end{figure} Let $m$ be the number of lattice points where the projection of $K_c$ has intersections, and let $n_j$ be the number of arcs involved at the $j$-th such lattice point. Now assign $K_c$ an orientation so that it yields $\textbf{B}(\mathcal{K})$. By Lemma \ref{Lemma1}, we can modify the special cord diagrams to make them coherent. The result is a regular projection $K^\prime$ which is ambient isotopic to $K_c$. After we smooth all crossings in $K^\prime$, at the $j$-th cord diagram we obtain $n_j$ partial Seifert circles and at most $n_j-1$ Seifert circles. Each partial Seifert circle and each arc of $K^\prime$ that is not contained in these special cord diagrams must be connected to at least one other partial Seifert circle in order to form a complete Seifert circle, thus the total number of Seifert circles in $K^\prime$ formed by the partial Seifert circles and the arcs not in the cord diagrams is at most $\frac{1}{2}\sum_{1\le j\le m}n_j$. It follows that the total number of Seifert circles in $K^\prime$ (denoted by $s(K^\prime)$) is bounded above by $\frac{1}{2}\sum_{1\le j\le m}n_j+\sum_{1\le j\le m}(n_j-1)<\frac{3}{2}\sum_{1\le j\le m}n_j$. On the other hand, each cord in the special cord diagrams has total length one in its $x$- and $y$-step portion, hence the total length of the $x$- and $y$-steps in the projection of $K_c$ is at least $\sum_{1\le j\le m}n_j$. Thus we have $\sum_{1\le j\le m}n_j\le x(K_c)+y(K_c)\le (2/3)L(K_c)$ and it follows that $$ s(K^\prime)< \frac{3}{2}\sum_{1\le j\le m}n_j\le \frac{3}{2}\cdot \frac{2}{3}L(K_c)=L(K_c). $$ It is well known that for any oriented link diagram $\mathcal{D}$, we have $\textbf{b}(\mathcal{D})\le s(\mathcal{D})$, where $s(\mathcal{D})$ is the number of Seifert circles in $\mathcal{D}$ \cite{Ya}. Since $K_c$ has the orientation that yields $\textbf{b}(K_c)=\textbf{B}(\mathcal{K})$, we have $\textbf{B}(\mathcal{K})=\textbf{b}(K_c)=\textbf{b}(K^\prime)\le s(K^\prime)<L(K_c)$. Since $K_c$ is arbitrary, replacing it by a step length minimizer of $\mathcal{K}$ yields $\textbf{B}(\mathcal{K})<L_c(\mathcal{K})$.
Finally, it has been shown that $L_c(\mathcal{K})<14L(\mathcal{K})$ \cite{Diao2002}, thus we have proven the following theorem: \begin{theorem}\label{T1} Let $\mathcal{K}$ be an un-oriented link, then $\textbf{B}(\mathcal{K})<L_c(\mathcal{K})<14L(\mathcal{K})$, that is, $L(\mathcal{K})>(1/14)\textbf{B}(\mathcal{K})$. \end{theorem} In a recent paper, the author and his colleagues derived explicit formulas for the braid indices of many alternating links, including all alternating Montesinos links \cite{Diao2019}. Using these formulas one can easily identify many families of alternating links with small bridge numbers but with braid indices proportional to their crossing numbers; these provide new examples of link families whose ropelengths grow at least linearly with their crossing numbers (since the previously known method based on the bridge numbers would not yield these results). The following are just a few such examples. \begin{example}\label{E1}{\em Let $\mathcal{K}$ be the $(2,2n)$ torus link, a two-component link with $2n$ crossings. There are two different choices for the orientations of the two components. One of them yields a braid index of $2$ while the other yields a braid index of $n+1$. Thus we have $\textbf{B}(\mathcal{K})=n+1=Cr(\mathcal{K})/2+1$, hence $L_c(\mathcal{K})>n+1$ and $L(\mathcal{K})>(n+1)/14>Cr(\mathcal{K})/28$.} \end{example} \begin{example}\label{E1b}{\em Let $\mathcal{K}$ be a twist knot with $n\ge 4$ crossings. We have $\textbf{B}(\mathcal{K})=\textbf{b}(\mathcal{K})=k+1=(Cr(\mathcal{K})+1)/2$ if $n=2k+1$ is odd, and $\textbf{B}(\mathcal{K})=\textbf{b}(\mathcal{K})=k+2=Cr(\mathcal{K})/2+1$ if $n=2k+2$ is even. It follows that $L(\mathcal{K})>(Cr(\mathcal{K})+1)/28$ for any twist knot $\mathcal{K}$.} \end{example} \begin{figure}[htb!] \includegraphics[scale=.8]{Pretzel} \caption{An alternating pretzel knot with three columns containing $2k+1$, $2m+1$ and $2n+1$ crossings respectively ($k$, $m$ and $n$ are non-negative integers and the case of $k=m=n=0$ gives the trefoil knot). \label{Pretzel}} \end{figure} \begin{example}\label{E2}{\em Consider the pretzel knot $\mathcal{K}$, a projection of which is given in Figure \ref{Pretzel}. $Cr(\mathcal{K})=2(k+m+n)+3$ since it is alternating. It can be calculated from the formulas given in \cite{Diao2019} that $\textbf{B}(\mathcal{K})=\textbf{b}(\mathcal{K})=2+k+m+n>(1/2)Cr(\mathcal{K})$. It follows that $L(\mathcal{K})>Cr(\mathcal{K})/28$ as well.} \end{example} Notice that in the above examples, the bridge numbers are either 2 or 3. Furthermore, since the link diagrams given in the above examples are all algebraic link diagrams, it is known that the ropelengths of these links grow at most linearly with their crossing numbers \cite{Diao2006}. Thus the ropelengths of these links in fact grow linearly with their crossing numbers. \section{Further discussions}\label{s4} For an oriented link $\mathcal{K}$ with a projection diagram $\mathcal{D}$, consider the HOMFLY-PT polynomial $H(\mathcal{D},z,a)$ defined using the skein relation $aH(\mathcal{D}_+,z,a)-a^{-1}H(\mathcal{D}_-,z,a)=zH(\mathcal{D}_0,z,a)$ (and the initial condition $H(\mathcal{D},z,a)=1$ if $\mathcal{D}$ is the trivial knot). Let $E(\mathcal{D})$ and $e(\mathcal{D})$ be the highest and lowest powers of $a$ in $H(\mathcal{D},z,a)$ and define $\textbf{b}_0(\mathcal{K})=(E(\mathcal{D})-e(\mathcal{D}))/2+1$.
It is a well-known result that $\textbf{b}_0(\mathcal{K})\le \textbf{b}(\mathcal{K})$, where $\textbf{b}(\mathcal{K})$ is the braid index of $\mathcal{K}$ \cite{Morton1986}. In the case that $\mathcal{K}$ is un-oriented, similarly to the definition of $\textbf{B}(\mathcal{K})$, we define $ \textbf{B}_0(\mathcal{K})=\max\{\textbf{b}_0(\mathcal{K}^\prime): \ \mathcal{K}^\prime\in O(\mathcal{K})\}$, where $O(\mathcal{K})$ is the set of oriented links obtained by assigning all possible orientations to the components of $\mathcal{K}$. Clearly we have $\textbf{B}_0(\mathcal{K})\le \textbf{B}(\mathcal{K})$, hence we have the following theorem, which is useful when we do not have a precise formula for the braid index of the link. \begin{theorem}\label{T2} Let $\mathcal{K}$ be an un-oriented link, then $\textbf{B}_0(\mathcal{K})<L_c(\mathcal{K})<14L(\mathcal{K})$ and $L(\mathcal{K})>(1/14)\textbf{B}_0(\mathcal{K})$. \end{theorem} It has been conjectured that the ropelength of an alternating link $\mathcal{K}$ is bounded below by a constant multiple of its crossing number. Our result shows that this conjecture holds for many alternating links. A remaining challenge concerns the alternating links whose absolute braid index is small, for example the $(2, 2n+1)$ torus knot, whose braid index is 2. While its minimum projection closely resembles the minimum projection of the $(2, 2n)$ torus link, and it is quite plausible that its ropelength should grow linearly with its crossing number, we do not have a way to prove it! We end this paper with this problem as a challenge to our reader. \begin{thebibliography}{99} \bibitem{Buck} G.~Buck and J.~Simon, {\em Thickness and Crossing Number of Knots}, Topology Appl. \textbf{91}(3) (1999), 245--257. \bibitem{Buck2} G.~Buck, {\em Four-thirds Power Law for Knots and Links}, Nature \textbf{392} (1998), 238--239. \bibitem{Cantarella1998} J.~Cantarella, R.~Kusner and J.~Sullivan, {\em Tight knot values deviate from linear relations}, Nature \textbf{392} (1998), 237--238. \bibitem{Denne2006} E.~Denne, Y.~Diao and J.~Sullivan, {\em Quadrisecants Give New Lower Bounds for the Ropelength of a Knot}, Geometry and Topology \textbf{10} (2006), 1--26. \bibitem{Diao1993} Y.~Diao, {\em Minimal Knotted Polygons on the Cubic Lattice}, J. Knot Theory Ramifications \textbf{2}(4) (1993), 413--425. \bibitem{Diao1998} Y.~Diao and C.~Ernst, {\em The Complexity of Lattice Knots}, Topology and its Applications \textbf{90} (1998), 1--9. \bibitem{Diao2006} Y.~Diao and C.~Ernst, {\em Hamiltonian Cycles and Ropelengths of Conway Algebraic Knots}, J. Knot Theory Ramifications \textbf{15}(1) (2006), 121--142. \bibitem{Diao2019} Y.~Diao, C.~Ernst, G.~Hetyei and P.~Liu, {\em A Diagrammatic Approach for Determining the Braid Index of Alternating Links}, 2019, preprint. Available at \url{http://arxiv.org/abs/1901.09778}. \bibitem{Diao2002} Y.~Diao, C.~Ernst and E.~J.~Janse van Rensburg, {\em Upper Bounds on Linking Number of Thick Links}, J. Knot Theory Ramifications \textbf{11}(2) (2002), 199--210. \bibitem{Diao2003} Y.~Diao, C.~Ernst and M.~Thistlethwaite, {\em The Linear Growth in the Length of a Family of Thick Knots}, J. Knot Theory Ramifications \textbf{12}(5) (2003), 709--715. \bibitem{Morton1986} H.~Morton, {\em Seifert Circles and Knot Polynomials}, Math. Proc. Cambridge Philos. Soc.
\textbf{99} (1986), 107--109. \bibitem{Scharein2009} R.~Scharein, K.~Ishihara, J.~Arsuaga, K.~Shimokawa, Y.~Diao and M.~Vazquez, {\em Bounds for minimal step number of knots in the simple cubic lattice}, J. Phys. A: Math. Theor. \textbf{42}(47) (2009), 475006. \bibitem{Ya} S.~Yamada, {\em The Minimal Number of Seifert Circles Equals The Braid Index of A Link}, Invent. Math. \textbf{89} (1987), 347--356. \end{thebibliography} \end{document}
\begin{document} \title{A Ring Topology-based Communication-Efficient Scheme for D2D Wireless Federated Learning} \author{\IEEEauthorblockN{ Zimu Xu, Wei Tian, Yingxin Liu, Wanjun Ning, and Jingjin Wu, \emph{Member, IEEE}} \thanks{The authors are with the Department of Statistics and Data Science, BNU-HKBU United International College, Zhuhai, Guangdong, 519087, P. R. China. J. Wu is also with the Guangdong Provincial Key Laboratory of Interdisciplinary Research and Application for Data Science, Guangdong, 519087, P. R. China. (E-mail: \href{mailto:[email protected]}{[email protected]}; \href{mailto:[email protected]}{[email protected]}; \href{mailto:[email protected]}{[email protected]}; \href{mailto:[email protected]}{[email protected]}; \href{mailto:[email protected]}{[email protected]}). Corresponding author: J. Wu. This work is supported by Zhuhai Basic and Applied Basic Research Foundation Grant ZH22017003200018PWC, the Guangdong Provincial Key Laboratory of Interdisciplinary Research and Application for Data Science, Project code 2022B1212010006, and Guangdong Higher Education Upgrading Plan (2021-2025) UIC R0400001-22. }} \maketitle \begin{abstract} Federated learning (FL) is an emerging technique aiming at improving communication efficiency in distributed networks, where many clients often request to transmit their calculated parameters to an FL server simultaneously. However, in wireless networks, the above mechanism may lead to prolonged transmission time due to unreliable wireless transmission and limited bandwidth. This paper proposes a communication scheme to minimize the uplink transmission time for FL in wireless networks. The proposed approach consists of two major elements, namely a modified Ring All-reduce (MRAR) architecture that integrates D2D wireless communications to facilitate the communication process in FL, and a modified Ant Colony Optimization algorithm to identify the optimal composition of the MRAR architecture. Numerical results show that our proposed approach is robust and can significantly reduce the transmission time compared to the conventional star topology. Notably, the reduction in uplink transmission time compared to baseline policies can be substantial in scenarios applicable to large-scale FL, where client devices are densely distributed. \end{abstract} \begin{IEEEkeywords} Federated learning, D2D wireless networks, Ring All-reduce, Communication efficiency, Ant Colony Optimization Algorithm. \end{IEEEkeywords} \section{Introduction} Thanks to the rapid growth in the number and popularity of user devices (UDs) capable of collecting and processing data, artificial intelligence (AI) techniques are more commonly applied in many aspects of daily life. Federated learning (FL)~\cite{li2020federated} is a class of distributed machine learning techniques that keeps all actual training data on local UDs while only exchanging specific model parameters with each other. By decoupling the training process to local UDs, FL enables collaborative learning of AI models without storing data in the server. As a result, the privacy of UDs is better preserved than traditional centralized mechanisms. This advantage in turns motivates more users to participate in FL and leads to a more accurate global model. A base station (BS) can function as an FL server in wireless networks, while the UDs are considered clients. 
One obstacle to the wider application of FL in wireless networks is that it requires multiple rounds of model parameter exchange between the FL server and clients until the FL process converges to an accurate global model~\cite{li2020federated}. In every training round, the BS updates the global model by aggregating the local parameters that the UDs generate from their respective local models and send to the BS. Then, the BS broadcasts the updated global model to the UDs~\cite{8917724}. Most UDs participating in the training round will transmit within a short timeframe for synchronization purposes. This leads to a substantial strain on the uplink, especially in urban scenarios where a single BS serves a high volume of UDs for the FL process. As a result, the uplink transmission latency would become unacceptable, and the overall FL performance would be negatively impacted. There are two straightforward directions to tackle the above-mentioned problem in existing studies. One direction, referred to as gradient compression, aims to reduce the data size of transmitted parameters to improve communication efficiency~\cite{9177084}. Another direction is to reduce the frequency of communication by restricting the number of training rounds such that fewer communications between the BS and UDs would be required~\cite{liu2020client}. However, both approaches would sacrifice the accuracy of the final model to some extent. More recent studies have investigated device-to-device (D2D) communications as an attractive alternative to improve the communication efficiency of FL in wireless networks, since D2D links can effectively avoid the communication bottleneck of the traditional server-client (star) topology. Most approaches aiming at maximizing the efficiency of wireless D2D communications involve optimal resource allocation or power control~\cite{7051236}. One study that focused on the training algorithm is~\cite{xing2021federated}, which proposed to implement decentralized stochastic gradient descent (DSGD) in an FL D2D wireless network. However, DSGD generally requires more communication rounds to achieve the same level of training accuracy as conventional algorithms such as FedSGD, FedAvg and FedProx, due to its distributed implementation. We take a different perspective in this paper, by proposing a topology-based approach that incorporates a modified version of the recently proposed Ring All-reduce (RAR) architecture~\cite{9796785} into FL in orthogonal frequency division multiple access (OFDMA) wireless networks. Unlike the approach in~\cite{xing2021federated}, our approach does not alter FL algorithms and thus does not incur extra communication rounds to converge. Instead, we focus on the parameter aggregation process in each training round to minimize the uplink transmission time, by identifying the most efficient ring topology for aggregating the parameters. Another benefit of our modified RAR (MRAR)-based approach is enhanced resilience against link failures. In case of link failures, only a small amount of additional data is required for re-transmission to maintain the aggregation accuracy under the MRAR implementation. This preserves the advantage in communication efficiency of MRAR over the star topology. Our main contributions are summarized as follows. \begin{itemize} \item We incorporate the MRAR architecture to address a potential bottleneck problem caused by simultaneous transmissions that may hinder the large-scale application of FL in OFDMA wireless networks.
Note that we are not proposing a new FL algorithm, but a robust and efficient communication scheme to aggregate parameters under conventional FL algorithms. Therefore, our proposed approach can potentially improve the overall efficiency and performance of various existing FL algorithms. \item We analytically prove that the expected uplink transmission time of our proposed MRAR approach increases sublinearly with the density of UDs. This is a significant improvement over the conventional star topology, whose expected transmission time increases linearly. \item We propose an Ant Colony Optimization (ACO)-based algorithm to overcome the computational complexity of identifying the optimal composition of the MRAR, which in turn gives the shortest transmission time amongst all possible ring topologies. \item We demonstrate by simulation results that our proposed method significantly outperforms existing communication schemes in terms of transmission efficiency and robustness. Particularly, in scenarios where the UDs are densely distributed, the reduction in transmission time can be substantial. Also, within the MRAR framework, the ACO-based algorithm achieves a further reduction in transmission time over the greedy algorithm. \end{itemize} \section{Preliminaries and System Model} \subsection{Parameter aggregation in federated learning} We consider a set $\mathcal{V} = \{1,\ldots,K\}$ of $K$ UDs participating in an FL training round. In mainstream FL implementations such as FedSGD, FedAvg, FedProx and federated learning with quantization (FedPAQ)~\cite{reisizadeh2020fedpaq}, UDs do not share their own data, but only exchange and aggregate trained parameters with each other. At the end of each training round, UDs take the aggregated parameters from the global model as the initial parameters in the next round. The following equation represents a common form of final aggregation and update of global parameters, \begin{small} \begin{equation} w_{\text{global}} = \sum_{k \in \mathcal{V}} \frac{|D_k|}{|D|} w_k, \label{global} \end{equation} \end{small}\ignorespacesafterend where $|D_k|$ is the size of locally trained data at UD $k$, $|D|$ is the total size of trained data from all UDs, $w_{\text{global}}$ is the aggregated parameters or soft labels, and $w_k$ is the locally trained parameters or soft labels at UD $k$. We consider the uplink of an OFDMA wireless network. When FL is implemented in such networks with a traditional star topology, UDs synchronously send their locally trained parameters to the BS at the end of each training round. The BS then updates the global model as in~\eqref{global} before broadcasting the updated model to UDs. The UDs initialize their local model with the global model at the beginning of the next round. \begin{figure*} \caption{\textit{An illustration of MRAR implementation with 3 UDs with a single-link failure}} \label{interrupt} \end{figure*} \subsection{The Modified Ring All-reduce scheme} We now introduce the MRAR scheme and show how it implements the aggregation task in FL in a different way. In MRAR, UDs form a logical ring where each UD has two neighbors. A UD will only send data to the ``next'' neighbor in the clockwise direction and receive data from the ``last'' neighbor in the counter-clockwise direction. Two key steps to implement the MRAR are \textit{Scatter-reduce} and \textit{All-gather}. Assume that $K$ UDs participate in the current training round; each UD will equally divide its weighted local model parameters (i.e.
$w_k$ in \eqref{global}) into $K$ chunks, that is, \begin{small} \begin{equation} \frac{|D_k|}{|D|}w_k = \left(c_k^{(1)}\oplus c_k^{(2)} \oplus \cdots \oplus c_k^{(K)} \right), \end{equation} \end{small}\ignorespacesafterend where $\oplus$ is the concatenation operation and $c_k^{(j)} = \frac{|D_k|}{|D|}w_k^{(j)}$ represents the $j$-th chunk for UD $k$. If we label an arbitrary UD as UD $1$, and number the other UDs in ascending order according to the transmission direction, then the two steps in the $n$-th round can be described as follows, \begin{itemize} \item \textit{Scatter-reduce}: UD $k$ sends the $\left[(k-n+1)\%K\right]$-th chunk of its accumulated weighted local model to its next neighbor, and receives its last neighbor's $\left[(k-n)\%K\right]$-th chunk. Here, $\%$ represents the modulo operator. UD $k$ then accumulates the received chunk to obtain \begin{small} \begin{equation} \underbrace{\sum_{j=k-n}^k c_{j\%K}^{((k-n)\%K)}}_{\text{To be sent in the next round}} = \underbrace{\sum_{j=k-n}^{k-1} c_{j\% K}^{((k-n)\%K)}}_{\text{The received chunk}} + c_k^{((k-n)\%K)} \end{equation} \end{small} \item \textit{All-gather}: UD $k$ sends the $\left[(k-1)\%K\right]$-th chunk to the BS, which then aggregates the global model by splicing all data chunks together. That is, \begin{small} \begin{equation} w_{\text{global}} = \left(\sum_{k=1}^K c_k^{(1)} \oplus \sum_{k=1}^K c_k^{(2)} \oplus \cdots \oplus \sum_{k=1}^K c_k^{(K)}\right). \end{equation} \end{small} If the link from UD $k$ to UD $k+1$ is down at the $n$-th round, UD $k$ also sends the $\left[(k-n)\%K\right]$-th chunk to the BS, for recovering the information lost due to previous link failure(s). \end{itemize} The entire implementation consists of $K-1$ steps of scatter-reduce followed by one all-gather. By the end of the process, the BS broadcasts the global model to the UDs for subsequent training rounds. Fig.~\ref{interrupt} illustrates an example of the MRAR implementation with 3 participating UDs and a single link failure. In particular, Fig.~\ref{interrupt} also demonstrates that at most one additional chunk needs to be transmitted for a single link failure to obtain the same aggregation result at the BS. In this case, the additional chunk is $c_1^{(1)}$.
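To make the chunked aggregation described above concrete, the following minimal Python sketch (our own illustration rather than the implementation used in the experiments; the \texttt{numpy}-based data layout and the function name are assumptions) simulates the scatter-reduce and all-gather steps for $K$ UDs and checks that the BS recovers exactly the weighted average of~\eqref{global}.
\begin{verbatim}
import numpy as np

def mrar_aggregate(local_params, data_sizes):
    """Toy simulation of MRAR scatter-reduce + all-gather on a logical ring.

    local_params : list of K equal-length 1-D arrays (the locally trained w_k)
    data_sizes   : list of K local data set sizes |D_k|
    Returns the aggregated global parameters (the weighted average over UDs).
    """
    K = len(local_params)
    total = float(sum(data_sizes))
    # UD k holds |D_k|/|D| * w_k, split into K chunks c_k^(0), ..., c_k^(K-1).
    chunks = [np.array_split(np.asarray(w, dtype=float) * (n / total), K)
              for w, n in zip(local_params, data_sizes)]

    # Scatter-reduce: K-1 rounds.  In round n, UD k adds the chunk of index
    # (k - n) mod K received from its "last" neighbour (k - 1) mod K; that is
    # exactly the chunk the neighbour finished accumulating in round n - 1.
    for n in range(1, K):
        for k in range(K):
            j = (k - n) % K
            chunks[k][j] = chunks[k][j] + chunks[(k - 1) % K][j]

    # All-gather: UD k uploads its fully reduced chunk, of index (k+1) mod K,
    # and the BS splices the K chunks back together in order.
    return np.concatenate([chunks[(j - 1) % K][j] for j in range(K)])

# Sanity check against the direct weighted average.
rng = np.random.default_rng(0)
w = [rng.standard_normal(12) for _ in range(4)]
d = [100, 50, 30, 20]
assert np.allclose(mrar_aggregate(w, d),
                   sum((n / sum(d)) * wk for wk, n in zip(w, d)))
\end{verbatim}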
\subsection{Communication model} For uplink OFDMA systems, it is reasonable to consider the Signal-to-Noise Ratio (SNR) to measure the quality of transmissions. Consider a transmission from UD $i$ to UD $j$ (we consider the BS as a special UD with index $0$ hereafter); the uplink SNR is $\displaystyle \text{SNR}_{i,j}=(d_{i,j}^{-\alpha}p_i)/N_0,$ where $\alpha$ is the path loss exponent, $d_{i,j}$ is the distance between transmitter $i$ and receiver $j$, $p_i$ is the transmission power of UD $i$, and $N_0$ represents the noise power. Then, the data rate $R_{i, j}$ from transmitter $i$ to receiver $j$ can be represented by $\displaystyle R_{i, j}=B_i\log(1+\text{SNR}_{i, j}),$ where $B_i$ is the allocated bandwidth for UD $i$. In terms of OFDMA, if we consider a worst-case scenario where all UDs request to transmit simultaneously, the allocated bandwidth should be constrained by $\sum_{i \in \mathcal{V}} B_i \leqslant B $, where $B$ is the total bandwidth allocated for FL. \subsection{Analysis of transmission time} Recall that in the traditional federated learning scheme, the aggregation (i.e., \eqref{global}) can only be performed by the BS after all UDs have finished their transmissions. The uplink transmission time of each round is then \begin{small} \begin{equation} T_{\text{star}} = \frac{M}{\min_{i \in \mathcal{V}}{R_{i, 0}}}, \end{equation} \end{small}\ignorespacesafterend where $M$ is the model size. In the MRAR scheme, each UD will transmit $K-1$ chunks in scatter-reduce and $1$ chunk in all-gather. Since each chunk contains $1/K$ of the model parameters, the transmission time of each round is \begin{small} \begin{equation} T_{\text{MRAR}} = \underbrace{\frac{(K-1)M}{K\min_{i \in \mathcal{V}}{R_{i, r(i)}}}}_{T_{\text{SR}}} + \underbrace{\max_{i \in \mathcal{V}} \frac{(\mathcal{I}_i+1)M}{KR_{i,0}}}_{T_{\text{AG}}}, \label{TMRAR} \end{equation} \end{small}\ignorespacesafterend where $r(i)$ represents the index of the next neighbour of UD $i$ in the ring. $\mathcal{I}_i$ is the number of additional chunks transmitted to guarantee the aggregation accuracy, which is equal to the number of link failures between UD $i$ and $r(i)$ that occurred in scatter-reduce in the same training round. The two terms in~\eqref{TMRAR}, $T_{\text{SR}}$ and $T_{\text{AG}}$, refer to the transmission time in the scatter-reduce and all-gather steps, respectively. Next, we show that the expected transmission time under the traditional star topology is approximately linearly correlated with the density of UDs participating in FL in the area, given that the distribution of UDs follows a spatial Poisson Point Process (PPP). In FL, the transmission time of each round is determined by the slowest transmission. Therefore, we may optimally allocate the bandwidth to balance the transmission rates of all connections and thus minimize the transmission time. If the UDs are distributed in a circle with radius $R$, with the BS at the centre, and the minimum distance between UD and BS is set to $1$, the expectation of transmission time is \begin{small} \begin{equation} \begin{aligned} \mathbb{E}\left[T_{\text{star}}^*\right] &= \frac{ M}{B}\mathbb{E}\left[\sum_{k}\frac{1}{\log(1 + \text{SNR}_{k, 0})} \right] \\ &= \frac{M}{B} \sum_{n = 1}^{\infty} \frac{n[\lambda \pi R^2]^n e^{-\lambda \pi R^2}}{n!} \mathbb{E}\left[ \frac{1}{\log(1 + \text{SNR}_{k, 0})}\right] \\ &= \frac{\lambda \pi R^2 M}{B} \frac{2}{R^2} \int_{x=0}^{R} \frac{x dx}{\log (1+\frac{p x^{-\alpha}}{N_0})}, \end{aligned} \label{ET} \end{equation} \end{small}\ignorespacesafterend where $\lambda$ is the intensity of the PPP and $p$ is the transmission power of each UD. It can be inferred from~\eqref{ET} that $\mathbb{E}\left[T^*_{\text{star}} \right]$ increases linearly with $\lambda$. We now show that the expected transmission time in MRAR increases sublinearly with $\lambda$. First, the transmission time in scatter-reduce through the optimal ring is bounded above by that through a greedy ring, which is constructed by the following procedure (a short illustrative sketch follows the list): \begin{enumerate} \item Add an arbitrarily selected UD to an empty chain. \item Among UDs not yet in the chain, select the one with the shortest distance (and thus the highest SNR) to the last added UD. Add the selected UD to the chain. Repeat until all UDs are added. \item Connect the last UD to the first UD to form a ring. \end{enumerate}
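Procedurally, the greedy construction can be sketched as follows (an illustrative Python fragment of our own; the distance-matrix input is an assumed representation, not part of our system implementation):
\begin{verbatim}
import numpy as np

def greedy_ring(dist, start=0):
    """Nearest-neighbour construction of the greedy ring.

    dist  : (K, K) symmetric matrix of pairwise UD distances
    start : index of the arbitrarily selected first UD
    Returns the visiting order; the last UD connects back to the
    first one to close the ring.
    """
    K = dist.shape[0]
    order = [start]
    remaining = set(range(K)) - {start}
    while remaining:
        last = order[-1]
        # shortest distance to the last added UD = highest-SNR D2D link
        nxt = min(remaining, key=lambda j: dist[last, j])
        order.append(nxt)
        remaining.remove(nxt)
    return order

# Example: 6 UDs dropped uniformly in a 400 m x 400 m square area.
rng = np.random.default_rng(1)
pts = rng.uniform(0, 400, size=(6, 2))
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
print(greedy_ring(D))
\end{verbatim}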
We derive the expected transmission time for the greedy ring, which is an upper bound for scatter-reduce, as \begin{small} \begin{equation} \begin{aligned} \mathbb{E}\left[T_{\text{SR}}^*\right] & < \frac{M}{B}\mathbb{E}\left[\sum_k\frac{1}{\log(1+\text{SNR}_{k,r(k)})} + \frac{1}{\log(1+\frac{p(2R)^{-\alpha}}{N_0})}\right] \\ &= \frac{M}{B}\underbrace{\mathbb{E}\left[\sum_k\frac{1}{\log(1+\text{SNR}_{k,r(k)})} \right]}_{C} + \underbrace{\frac{M}{B\log(1+\frac{p(2R)^{-\alpha}}{N_0})}}_{b}, \end{aligned} \label{UB} \end{equation} \end{small}\ignorespacesafterend where $\log(1+p(2R)^{-\alpha}/N_0)^{-1}$ represents (up to the factor $M/B$) the maximum transmission time of the link formed in step 3) of the greedy construction, as $2R$ is the maximum distance between two arbitrary UDs. For \eqref{UB}, we further analyze the upper bound of $C$, as $b$ is a constant. Since the nodes generated by the PPP are independent, $C$ can be transformed to \begin{small} \begin{equation} \begin{aligned} C = \mathbb{E} \left[\sum_k\int_{0}^{R}P'_k(x)\frac{dx}{\log(1 + \frac{px^{-\alpha}}{N_0})}\right], \end{aligned} \label{chain} \end{equation} \end{small}\ignorespacesafterend where $P_k(x)$ is the probability that both UD $k$ and $r(k)$ are within a circle of radius $R$, and the distance between them is less than $x$. By the nature of the PPP, we obtain \begin{small} \begin{equation} \begin{aligned} C &< \sum_{n=1}^{\infty} \frac{[\lambda \pi R^2]^n e^{-\lambda \pi R^2}}{n!} \sum_{k=1}^{n}\int_{0}^{R}\frac{e^{-\lambda \pi x^2}\,dx}{\log(1+\frac{px^{-\alpha}}{N_0})}\\ &= \sum_{n=1}^{\infty} \frac{n[\lambda \pi R^2]^n e^{-\lambda \pi R^2}}{n!} \int_{0}^{R} \frac{e^{-\lambda \pi x^2}dx}{\log(1 + \frac{px^{-\alpha}}{N_0})} \\&= \lambda \pi R^2 \int_{0}^{R} \frac{e^{-\lambda \pi x^2}dx}{\log(1 + \frac{px^{-\alpha}}{N_0})}. \end{aligned} \label{EC} \end{equation} \end{small} Therefore, the expected transmission time for scatter-reduce increases sublinearly with $\lambda$. For all-gather, the analysis follows similarly from that for the star topology, that is, \begin{small} \begin{equation} \mathbb{E}\left[T_{\text{AG}}^*\right] = \frac{ (\mathbb{E}\left[\sum_k\mathcal{I}_k\right]+1)M}{B} \int_{x=0}^{R} \frac{x dx}{\log (1+\frac{p x^{-\alpha}}{N_0})}. \label{EAG} \end{equation} \end{small} In D2D-enabled wireless FL, link failures are rather infrequent as the data size per transmission is relatively small (only parameters), and the SNR is generally high due to shorter transmission distances. Therefore, $\mathbb{E}\left[\sum_k\mathcal{I}_k\right]$ is close to $0$, and $\mathbb{E}\left[T_{\text{AG}}^*\right]$ will remain approximately constant as $\lambda$ changes. Combining~\eqref{EC} and~\eqref{EAG}, we conclude that the expected total transmission time of MRAR increases sublinearly with $\lambda$. \section{Optimal Formation of the MRAR Architecture} In this section, we present the algorithm to identify the best ring to minimize the total transmission time, i.e., to find the next neighbor $r(k)$ for each UD $k$ such that \begin{small} \begin{equation} \begin{aligned} \min &\quad T_{\text{SR}}^* \\ \text{s.t.} \quad\quad & r(i) \neq r(j), \quad i, j\in \mathcal{V}, i \neq j \\ & r^{(K)}(k) = k, \quad k \in \mathcal{V}. \end{aligned} \label{Problem} \end{equation} \end{small} Here $r^{(K)}$ denotes the $K$-fold composition of $r$, so the second constraint requires the UDs to form a single ring. Since \eqref{Problem} has the same constraints as the traveling salesman problem (TSP) and is also NP-hard, we apply ACO, an efficient and widely used method for solving the TSP~\cite{dorigo2006ant}, to solve this problem.
The complete ACO algorithm is presented as Algorithm~\ref{algorithm1}, while the key steps of the algorithm are summarized as follows, \begin{enumerate} \item Initialize $a$ ants on each UD. \item Each ant starts travelling from the current UD and constructs a ring by repeatedly applying the state transition rule, \begin{small} \begin{equation} \begin{aligned} P_{i, j} = \frac{h_{i,j}^\beta R_{i,j}^\gamma}{\sum_{k \in \mathcal{V}} h_{i,k}^\beta R_{i,k}^\gamma}, \end{aligned} \label{prob} \end{equation} \end{small}\ignorespacesafterend where, with a slight abuse of notation, $\mathcal{V}$ here denotes the set of UDs that have not yet been visited. \item Each ant calculates the transmission time $T_{\text{SR}}^*$ corresponding to its ring, and updates the pheromone by \begin{small} \begin{equation} \begin{aligned} h_{i, j} = \rho(h_{i, j} + \triangle h_{i, j}) + (1-\rho) \frac{1}{T_{\text{SR}}^*(\text{ring}^*)}, \end{aligned} \label{update} \end{equation} \end{small}\ignorespacesafterend where $\text{ring}^*$ is the so-far best ring that achieves the minimum transmission time, and $\triangle h_{i, j}$ is the sum of $\frac{1}{T_{\text{SR}}^*}$ over all ants whose rings traverse the link from UD $i$ to UD $j$. \item Stop the algorithm if it reaches the maximum number of iterations $t$; otherwise go to step 1). \end{enumerate} \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \begin{algorithm}[h] \caption{Ant colony optimization} \begin{algorithmic}[1] \Require $K$: the number of UDs; \Statex \quad \ \ $d$: distance matrix; \Ensure $r^*:\mathcal{V} \rightarrow \mathcal{V}$, \Statex \quad \quad \ i.e., the optimal connection scheme of the ring; \State initialize $\textbf{h} = \textbf{1}$ \Repeat \State initialize $a$ ants on each UD; \For {each ant} \State construct a ring by repeatedly applying \eqref{prob}; \State calculate the transmission time $T_{\text{SR}}^*$ \Statex \quad \quad \, $\text{ of the ring;}$ \If {$T_{\text{SR}}^* < T_{\text{SR}}^*(\text{ring}^*)$} \State update the best ring, i.e., $r^*(\cdot) = r(\cdot)$; \EndIf \State $\triangle h_{i, j} += \frac{1}{T_{\text{SR}}^*}$ for each link $(i, j)$ in the ring; \EndFor \State update the pheromone by \eqref{update}; \Until {the maximum number of iterations $t$ is reached} \end{algorithmic} \label{algorithm1} \end{algorithm} In scenarios where the physical movement of UDs is not negligible, we may have different optimal rings in different communication rounds. At the beginning of each round, the BS collects the locations of the UDs. Then, the BS invokes the ACO algorithm to determine the optimal logical ring and sends it back to the UDs. Such an exchange of location information can be completed almost instantly as the data involved are very small, and thus would not have a significant impact on the implementation of our proposed policy. Meanwhile, it is reasonable to consider that the UDs are static within a single communication round. The complexity of the ACO algorithm is polynomial, or more specifically, $O(aK^2t)$, which makes the algorithm scalable for relatively large networks. Our numerical results also show that the average period of a single communication round is short. Therefore, the locations of the UDs and the channel conditions of all transmission pairs can be regarded as constant within a round.
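As a concrete (and deliberately simplified) illustration of the above procedure, the following Python sketch grows rings with the state transition rule \eqref{prob} and applies the pheromone update \eqref{update}; it is our own illustrative fragment, not the code used for the experiments, and both the rate matrix and the function computing $T_{\text{SR}}^*$ are assumed inputs.
\begin{verbatim}
import numpy as np

def aco_ring(rate, T_SR, a=10, t_max=30, beta=2.0, gamma=2.0, rho=0.8, seed=0):
    """Simplified ACO search for the ring minimizing the scatter-reduce time.

    rate : (K, K) matrix of achievable data rates R_{i,j} between UDs
    T_SR : function mapping a visiting order (list of UD indices) to the
           scatter-reduce transmission time of the corresponding ring
    """
    rng = np.random.default_rng(seed)
    K = rate.shape[0]
    h = np.ones((K, K))                      # pheromone matrix
    best_order, best_time = None, np.inf

    for _ in range(t_max):
        dh = np.zeros((K, K))                # pheromone deposited this iteration
        for start in range(K):               # a ants are initialized on each UD
            for _ in range(a):
                order = [start]
                remaining = [j for j in range(K) if j != start]
                while remaining:             # state transition rule
                    i = order[-1]
                    w = np.array([h[i, j] ** beta * rate[i, j] ** gamma
                                  for j in remaining])
                    j = remaining[rng.choice(len(remaining), p=w / w.sum())]
                    order.append(j)
                    remaining.remove(j)
                t = T_SR(order)
                if t < best_time:            # keep the so-far best ring
                    best_order, best_time = list(order), t
                for u, v in zip(order, order[1:] + order[:1]):
                    dh[u, v] += 1.0 / t      # each ant deposits 1/T_SR on its links
        h = rho * (h + dh) + (1.0 - rho) / best_time   # pheromone update
    return best_order, best_time
\end{verbatim}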
\section{Experimental Results} \begin{figure*} \caption{\textit{Performance comparison of different communication schemes.}} \label{line} \label{inter} \label{DSGD} \end{figure*} \subsection{Simulation setup} We consider a $400\,\mathrm{m} \times 400\,\mathrm{m}$ square area with a single BS at the center of the area. The positions of UDs are uniformly and independently distributed in the area. We consider that all UDs have the same transmission power $p = 0.1$\,W. Other relevant network parameters are set as $M = 10$\,Mb, $B = 100$\,MHz, $\alpha = 4$ and $N_0 = -90$\,dBm. We consider two baseline policies referred to as \emph{star} and \emph{greedy}, respectively. In star, we consider the previously described traditional communication approach where all UDs only communicate with the BS to exchange the model parameters. In greedy, we consider a ring topology where the connection scheme is constructed by repeatedly connecting to the closest UD that has not been visited. We here reiterate that our approach focuses on the communication scheme of FL in wireless networks and does not alter the training algorithm (e.g., FedSGD, FedAvg, FedProx, FedPAQ) itself. Therefore, we can compare the performances of our proposed scheme and the baselines only by the transmission time of each round. Besides, we apply the optimal bandwidth allocation policy to every scenario to guarantee a fairer comparison. In the previous section, we explained intuitively that, under all schemes, the resulting transmission time is no longer than that of any other bandwidth allocation method (e.g., allocating equal bandwidth to all UDs). For the ACO, we set $\beta = 2$, $\gamma = 2$ and $\rho = 0.8$. We initialize $a = 10$ ants for each UD in each iteration and perform $t = 30$ iterations in total. \subsection{Numerical results} We use $T_{\text{uStar}}^*$ as an optimistic estimation for $T_{\text{star}}^*$ in the following results. We simulate 50 cases for each scenario with the same number of UDs, and comparisons of the average transmission time per round are shown in Fig.~\ref{line}. We can conclude from the results that the MRAR architecture constructed by ACO (MRAR-ACO) outperforms the other two schemes in almost all scenarios. Meanwhile, the gaps between the transmission times of the star and MRAR schemes (including greedy and ACO) become larger as the number of UDs increases. This is consistent with our previous analysis that, under MRAR, the total transmission time increases sublinearly with the density of UDs, which is superior to the linear increase in star. To demonstrate that our proposed MRAR is robust to link failures, we fix $K = 50$ and adjust the link failure probability. We simulate $50$ cases for each data point in Fig.~\ref{inter}. While the average transmission time per round by MRAR increases linearly with the failure probability, it is still significantly less than that of star in all cases. This shows that our approach is resilient to D2D link failures and is still applicable to situations where D2D communication is relatively unstable. In addition, we compare the performance between MRAR and DSGD, an existing approach we mentioned earlier that aggregates locally trained parameters in a different decentralized manner~\cite{xing2021federated}. The simulation results shown in Fig.~\ref{DSGD} are based on a CNN model trained on MNIST with 1,181,792 parameters. In DSGD, each UD broadcasts its local model to other UDs within a preset distance $D$, whose value affects the communication efficiency. For more intuitive comparisons, we present the training accuracy versus total communication time of DSGD with different values of $D$, together with FedAvg implemented by our proposed MRAR formed by ACO (FA-MRAR-ACO), and FedAvg under the traditional star topology (FA-Star). As shown in Fig.~\ref{DSGD}, a larger value of $D$ in DSGD generally leads to better communication efficiency, as reflected by higher levels of final training accuracy and a faster convergence rate.
The reason is that, while a shorter preset broadcast distance can shorten the communication time per round in DSGD, more rounds are required until convergence, as a smaller $D$ naturally results in a larger number of smaller local clusters. On the other hand, while DSGD can improve communication efficiency compared to FA-Star under certain circumstances, our proposed FA-MRAR-ACO outperforms DSGD with all values of $D$. That is, FA-MRAR-ACO can achieve a very high level of training accuracy in a relatively short period, by both effectively reducing the communication time per round and controlling the number of rounds required to achieve convergence. \section{Conclusion} This paper investigated the communication efficiency of FL in OFDMA wireless networks. We showed that the traditional star communication scheme for FL is inefficient, especially when UDs are densely distributed. To overcome the problem, we proposed the MRAR architecture for FL parameter aggregation, and applied ACO to optimize the formation of the ring. Analysis and simulation results showed that our proposed approach achieves significant improvements in both efficiency and robustness compared with baseline policies. \end{document}
\begin{document} \title{On the exceptional zeros of Rankin-Selberg $L$-functions} \author{Dinakar Ramakrishnan\footnote{Partially supported by the NSF grant DMS-9801328} and Song Wang} \address {253-37 Caltech, Pasadena, CA 91125, USA} \email{[email protected] \quad [email protected]} \maketitle \pagestyle{myheadings} \markboth{D.Ramakrishnan and S.Wang}{Exceptional zeros of Rankin-Selberg $L$-functions} \section* {\bf Introduction} In this paper we study the possibility of real zeros near $s=1$ for the Rankin-Selberg $L$-functions $L(s, f \times g)$ and $L(s, {\rm sym}^2(g) \times {\rm sym}^2(g))$, where $f,g$ are newforms, holomorphic or otherwise, on the upper half plane $\mathcal H$, and sym$^2(g)$ denotes the automorphic form on GL$(3)/\mathbb Q$ associated to $g$ by Gelbart and Jacquet ([GJ79]). We prove that the set of such zeros of these $L$-functions is the union of the corresponding sets for $L(s, \chi)$ with $\chi$ a quadratic Dirichlet character, which divide them. Such a divisibility does not occur in general, for example when $f, g$ are of level $1$. When $f$ is a Maass form for SL$(2, \mathbb Z)$ of Laplacian eigenvalue $\lambda$, this leads to a sharp lower bound, in terms of $\lambda$, for the norm of sym$^2(f)$ on GL$(3)/\mathbb Q$, analogous to the well known and oft-used result for the Petersson norm of $f$ proved in [HL94] and [GHLL94]. As a consequence of our result on $L(s, {\rm sym}^2(g) \times {\rm sym}^2(g))$ one gets a good upper bound for the {\it spectrally normalized} first coefficient $a(1,1)$ of sym$^2(g)$. (In the arithmetic normalization, $a(1,1)$ would be $1$.) In a different direction, we are able to show that the symmetric sixth and eighth power $L$-functions of modular forms $f$ with trivial character (Haupttypus) are holomorphic in $(1 - \frac{c}{\log M}, 1)$, where $M$ is the {\it thickened conductor} (see section 1) and $c$ a universal, positive, effective constant; by a recent theorem of Kim and Shahidi ([KSh2001]), one knows that these $L$-functions are invertible in ${\rm Re}(s) \geq 1$ except possibly for a pole at $s=1$. If $f$ runs over holomorphic newforms of a {\it fixed} weight (resp. level), for example, the thickened conductor $M$ is essentially the level (resp. weight). We will in general work over arbitrary number fields and use the adelic language. First some preliminaries. Suppose $D(s)$ is any Dirichlet series given as an Euler product in $\{{\rm Re}(s) > 1\}$, which admits a meromorphic continuation to the whole $s$-plane with no pole outside $s=1$, together with a functional equation relating $s$ to $1-s$ after adding suitable archimedean factors. By an {\it exceptional zero}, or a {\it Siegel zero}, or perhaps more appropriately (cf. [IwS2000]) a {\it Landau-Siegel zero}, of $D(s)$, one means a real zero $s = \beta$ of $D(s)$ which is close to $s=1$. More precisely, such a zero will lie in $(1-\frac{C}{\log M}, 1)$, where $C$ is an effective, universal constant $> 0$ (see section 1). The {\it Grand Riemann Hypothesis} (GRH) would imply that there should be no such exceptional zero, but it is of course quite hard to verify. It was shown in [HRa95] that for any number field $F$, the $L$-function $L(s, \pi)$ of a cusp form $\pi$ in GL$(2)/F$ admits no Landau-Siegel zero.
In the special case when $\pi$ is {\it dihedral}, i.e., associated to a character $\chi$ of a quadratic extension $K$ of $F$, $L(s,\pi)$ is simply the abelian $L$-function $L(s,\chi)$ considered by Hecke, and if $\theta$ is the non-trivial automorphism of $K/F$, the cuspidality of $\pi$ forces $\chi$ to be distinct from $\chi \circ \theta$. We will say that $\pi$ is of type $(K/F,\chi)$ in this case. It was also shown in [HRa95] that for any $n > 1$, the standard $L$-series $L(s, \pi)$ of cusp forms $\pi$ on GL$(n)/F$ admit no Landau-Siegel zero {\it if} one assumes Langlands's {\it principle of functoriality}, in particular the existence of the {\it automorphic tensor product}. An analogous, but slightly more complicated, statement can be made for general Rankin-Selberg $L$-series on GL$(n) \times $GL$(m)$, but assuming the full force of functoriality is but a distant dream at the moment, though it is highly instructive to be aware of what it entails. So it becomes an interesting problem to know how much one can {\it unconditionally} prove by making use of available instances of functoriality; the method has to deviate some from that given in [HRa95]. This is what we carry out here for $n = m \leq 3$. Roughly speaking, the main point is to find a suitable positive Dirichlet series $D(s)$ which is divisible by the $L(s)$ of interest to a degree $k$, say, which is (strictly) {\it larger} than the order of pole of $D(s)$ at $s=1$. If there is anything creative here, at all, it is in the proper choice of $D(s)$ and then in the verification of the holomorphy of $D(s)/L(s)^k$, at least in a real interval $(t,1)$ for a fixed $t < 1$. It should be noted, however, that this approach fails to give anything significant for $L$-functions of quadratic characters; for two very interesting, and completely different, approaches for this crucial case see [IwS2000] and [GS2000]. Now fix a number field $F$ and consider the Rankin-Selberg $L$-function $L(s, \pi \times \pi')$ associated to a pair $(\pi, \pi')$ of cusp forms on GL$(2)/F$. Denote by $\omega$, resp. $\omega'$, the central character of $\pi$, resp. $\pi'$. Our first main result is the following \noindent{\bf Theorem A} \, \it Let $\pi, \pi'$ be cuspidal automorphic representations of GL$(2, \mathbb A_F)$. Then $L(s, \pi \times \pi')$ admits no Landau-Siegel zero except possibly in the following cases: \begin{enumerate} \item[{(i)}] $\pi$ is non-dihedral and $\pi' \simeq \pi \otimes \mu$ with $\omega\mu$ of order $\leq 2$; \item[{(ii)}] $\pi$, resp. $\pi'$, is dihedral of type $(K,\chi)$, resp. $(K',\chi')$, with $K' = K$ and $\chi'\chi$ or $\chi'(\chi \circ \theta)$ of order $\leq 2$. \end{enumerate} In case (i), resp. (ii), the exceptional zeros of $L(s, \pi \times \pi')$ are the same as those of $L(s,\omega\mu)$, resp. $L(s,\chi'\chi)L(s,\chi'(\chi \circ \theta))$. In case (ii), if $\chi'\chi$ or $\chi'(\chi \circ \theta)$ is trivial, then the exceptional zeros are the same as those of $\zeta_K(s)$. In either case, there is at most one exceptional zero. \rm For the vast majority of cases not satisfying (i) or (ii), $L(s, \pi \times \pi')$ has no exceptional zero. 
In particular, if $\pi_0, \pi'_0$ are fixed, non-dihedral cusp forms on GL$(2)/F$ which are not twist equivalent to each other, there exists an effective constant $c > 0$ such that the family $L(s, \pi_0 \times (\pi'_0 \otimes \chi))$, with $\chi$ running over quadratic characters of conductor $q$ prime to the levels of $\pi_0, \pi'_0$, admits {\it no} real zero $\beta$ with $\beta \in \left(1 - \frac{c}{\log q},\, 1\right)$. In case (i) we have $$ L(s, \pi \times \pi') \, = \, L(s,\omega\mu)L(s, {\rm sym}^2(\pi) \otimes \mu). $$ The non-existence of Landau-Siegel zeros for $L(s, {\rm sym}^2(\pi))$ (for $\pi$ non-dihedral) has been known for a while by the important work of Goldfeld, Hoffstein, Lieman and Lockhart ([GHLL94]). For general $\mu$, the non-existence for $L(s, {\rm sym}^2(\pi) \otimes \mu)$ is known by [Ba97], following an earlier reduction step given in section 6 of [HRa95]. So our Theorem is not new in this case. For any cusp form $\pi$ on $GL(2)/F$, let $L(s, \pi; {\rm sym}^n)$ denote, for every $n \geq 1$, the symmetric $n$-th power $L$-function of $\pi$ (see section 1 for a definition). It is expected that there is an automorphic form sym$^n(\pi)$ on GL$(n+1)/F$ whose standard $L$-function coincides with $L(s, \pi; {\rm sym}^n)$. This is classical ([GJ79]) for $n=2$ and a major breakthrough has been made recently for $n=3$ ([KSh2000]) and for $n=4$ ([K2000]). The proof of Theorem A uses the result for $n=3$ as well as the construction of the first author ([Ra2000]) of the Rankin-Selberg product of pairs $(\pi, \pi')$ of forms on GL$(2)/F$ as an automorphic form $\pi \boxtimes \pi'$ on GL$(4)/F$. Recall that $\pi$ is {\it dihedral} iff it admits a self-twist by a non-trivial, necessarily quadratic, character. One says that it is {\it tetrahedral}, resp. {\it octahedral}, iff sym$^2(\pi)$, resp. sym$^3(\pi)$, is cuspidal and admits a non-trivial self-twist by a cubic, resp. quadratic character. It is well known that sym$^2(\pi)$ is cuspidal ([GJ79]) iff $\pi$ is not dihedral. It has been shown in [KSh2000], resp. [KSh2001], that sym$^3(\pi)$, resp. sym$^4(\pi)$, is cuspidal iff $\pi$ is not dihedral or tetrahedral, resp. not dihedral, tetrahedral or octahedral. We will henceforth say that a cusp form $\pi$ on $GL(2)/F$ is of {\it solvable polyhedral type} if it is either dihedral or tetrahedral or octahedral. Our second main result is the following: \noindent{\bf Theorem B} \, \it Let $\pi$ be a self-dual cuspidal automorphic representation of GL$(2, \mathbb A_F)$. Then the set of Landau-Siegel zeros of $L(s, {\rm sym}^2(\pi) \times {\rm sym}^2(\pi))$ is the union of the sets of Landau-Siegel zeros of abelian $L$-functions of the form $L(s, \chi)$, $\chi^2 = 1$, which divide $L(s, {\rm sym}^2(\pi) \times {\rm sym}^2(\pi))$, if any. Moreover, if $\pi$ is not of solvable polyhedral type and if $\zeta_F(s)$ has no exceptional zero (for example when $F = \mathbb Q$), then there is no exceptional zero. \rm This theorem holds also for any cuspidal $\pi$ on GL$(2)/F$ which is a unitary character twist of a selfdual representation. The following corollary of Theorem B gives precise bounds for the Petersson norm on $GL(3)/\mathbf{Q}$ of the symmetric square of a Maass wave form $g$ of level $1$. In Section 6 (see Definition 6.3), we will define a suitably normalized function ${\rm sym}^{2} (g)$ spanning the space of the Gelbart-Jacquet square lift to $GL(3)/\mathbf{Q}$ of the cuspidal automorphic representation $\pi$ generated by $g$.
\noindent{\bf Corollary C} \,\it Let $g$ be a Maass form on the upper half plane $\mathcal H$, relative to SL$(2, \mathbb Z)$, of weight zero and Laplacian eigenvalue $\lambda$, which is also an eigenfunction of Hecke operators. Then for each $\varepsilon > 0$, \[ \frac{1}{\log (\lambda + 1)} << \langle\,{\rm sym}^{2} (g), {\rm sym}^{2} (g)\,\rangle <<_{\varepsilon} (\lambda + 1)^{\varepsilon}, \] where ${\rm sym}^2(g)$ is spectrally normalized as in Theorem B. Moreover, if $\{a(m,n)\}$ denotes the collection of Fourier coefficients of the spectrally normalized function ${\rm sym}^2(g)/\vert\vert{\rm sym}^2(g)\vert\vert$, we have $$ \vert a(1,1)\vert \, << \, \log (\lambda + 1). $$ \rm Our proof of Theorem B will establish on the way that the symmetric $4$-th power $L$-function of any self-dual cusp form $\pi$ on GL$(2)$, not of solvable polyhedral type, admits no Landau-Siegel zero. A slew of interesting results for the $L(s, \pi; {\rm sym}^n)$ for {\it $n$ up to $9$} have been established recently by H.~Kim and F.~Shahidi in [KSh2001], proving in particular the meromorphic continuation, functional equation and holomorphy in ${\rm Re}(s) \geq 1$, except for a possible pole at $s=1$. It may be of some interest to know, for $n > 4$, how far to the left of $s=1$ the holomorphy assertion can be extended. A consequence of our work is the following tiny, but apparently non-trivial, extension to the left of $s=1$ for $n=6, 8$. \noindent{\bf Theorem D} \, \it Let $F$ be a number field and $\pi$ a self-dual cusp form on GL$(2)/F$ of thickened conductor $M$. Then there exists a universal, effective constant $c > 0$ such that $L(s, \pi; {\rm sym}^6)$ and $L(s, \pi; {\rm sym}^8)$ have no pole in the real interval $(1 - \frac{c}{\log M}, 1)$. \rm Now we will say a few words about the proofs. Regarding Theorem A, suppose we are in the main case, i.e., neither $\pi$ nor $\pi'$ is dihedral and also $\pi'$ is not isomorphic to $\pi \otimes \mu$ for any character $\mu$. Under these hypotheses, $\pi \boxtimes \pi'$ is cuspidal on GL$(4)/F$ by [Ra2000]. When it is not self-dual, there is a simple argument (see section 3 of [HRa95]) to deduce the non-existence of a Siegel zero. So we may assume that $\pi \boxtimes \pi'$ is self-dual, which implies that the central characters $\omega, \omega'$ of $\pi, \pi'$ are inverses of each other. For simplicity assume for the moment that $\omega, \omega'$ are trivial. (For a full treatment of the general case, see section 4.) Then the key point is to appeal to the following identity of $L$-functions \begin{align} L(s, \Pi \times \Pi) = &\zeta_F(s) L(s, {\rm sym}^2(\pi))^2 L(s, \pi \times \pi')^4 L(s, {\rm sym}^3(\pi) \times \pi')^2 \notag \\ &L(s, {\rm sym}^2(\pi) \times {\rm sym}^2(\pi)) L(s, (\pi \boxtimes \pi') \times (\pi \boxtimes \pi')). \notag \end{align} where $\Pi$ is an isobaric automorphic form on GL$(8)/F$ defined by $$ \Pi \, = \, 1 \boxplus \pi \boxtimes \pi' \boxplus {\rm sym}^{2} ({\pi}), $$ where $1$ denotes the trivial automorphic form on GL$(1)/F$ and $\boxplus$ the Langlands sum operation on automorphic forms defined by his theory of Eisenstein series ([La79]), proved to be well defined by the work of Jacquet and Shalika ([JS81]). The degree $64$ Rankin-Selberg $L$-function $L(s, \Pi \times \Pi)$ has the standard analytic properties and defines, in ${\rm Re}(s) > 1$, a Dirichlet series with non-negative coefficients.
Moreover, it has a pole of order $3$ at $s=1$, and since $L(s, \pi \times \pi')$ occurs in its factorization to a power larger than $3$, a standard lemma (see Lemma 1.7) precludes the latter from having any Landau-Siegel zero. We also need to show that the ratio $L(s, \Pi \times \Pi)/L(s, \pi \times \pi')^4$ is holomorphic, for which we appeal to the automorphy of sym$^3(\pi)$ ([KSh2000]). The proof of Theorem B involves a further wrinkle, and uses in addition the automorphy of sym$^4(\pi)$ ([K2000]), as well as the works of Bump-Friedberg ([BuG92]) on the symmetric square $L$-functions of GL$(n)$. The well known identity $$ L(s, {\rm sym}^2(\pi) \times {\rm sym}^2(\pi)) \, = \, \zeta_F(s) L(s, {\rm sym}^2(\pi)) L(s, {\rm sym}^4(\pi)) $$ reduces the problem to studying the Landau-Siegel zeros of $L(s, {\rm sym}^4(\pi))$. We show in section 5 (see Theorem B$^\prime$) that for $\pi$ {\it not} of solvable polyhedral type, $L(s, \pi; {\rm sym}^4)$ has no exceptional zero. To do this we set $$ \Pi : \, = \, 1 \boxplus {\rm sym}^2(\pi) \boxplus {\rm sym}^4(\pi), $$ and consider \begin{align} L(s, \Pi \times \Pi) = &\zeta_F(s) L(s, {\rm sym}^2(\pi))^2 L(s, {\rm sym}^4(\pi))^4 L(s, \pi; {\rm sym}^6)^2 \notag \\ &L(s, {\rm sym}^2(\pi) \times {\rm sym}^2(\pi)) L(s, {\rm sym}^4(\pi) \times {\rm sym}^4(\pi)). \notag \end{align} Since $L(s, \Pi_f \times \Pi_f)$ defines a Dirichlet series in ${\rm Re}(s) > 1$ with non-negative coefficients and has a pole of order $3$ at $s=1$, things appear to be in good shape, {\it till} we realize that one does not yet know how to prove that $L(s, \pi; {\rm sym}^6)$ is holomorphic in any real interval $(1-t, 1)$ for a fixed $t < 1$. But luckily we also have the factorization $$ L(s, {\rm sym}^3(\pi); {\rm sym}^2) \, = \, L(s, \pi; {\rm sym}^6) L(s, {\rm sym}^2(\pi)), $$ which allows us, after exercising some care about the bad factors, to prove the holomorphy of the ratio $L(s, \Pi \times \Pi)/L(s, {\rm sym}^4(\pi))^4$ in $(1/2, 1)$. For details we refer to section 5. In order to prove Corollary C, we begin by deducing a precise relationship between the Petersson norm of a suitably normalized ${\rm sym}^2(g)$ (see section 6) and $L(s, {\rm sym}^{2} (g) \times {\rm sym}^{2} (g))$ by using the results on its integral representation due to Jacquet, Piatetski-Shapiro and Shalika ([JPSS83], [JS90]), and an exact formula of E. Stade for pairs of spherical representations of GL$(3, \mathbb R)$ ([St93], [St2001]). More precisely, $(\,{\rm sym}^{2}(g), {\rm sym}^{2}(g)\,)$ differs from ${\rm Res}_{s = 1}\, L(s, {\rm sym}^{2}(g) \times {\rm sym}^{2}(g))$ by a constant factor coming from the residue of an Eisenstein series. (There should also be a similar result when $f$ is a holomorphic newform for SL$(2, \mathbb Z)$, but at this point one does not appear to know enough about the archimedean zeta integral on GL$(3) \times $GL$(3)$ to achieve this.) Finally, the nonexistence of Landau-Siegel zeros for $L(s, {\rm sym}^{2}(g) \times {\rm sym}^{2}(g))$ allows us to bound from below its residue at $s=1$, and this in turn gives us, when sym$^2(g)$ is replaced by its spectral normalization, the desired bound on the {\it first Fourier coefficient} $a(1,1)$. For details, see Section 6. To prove Theorem D we appeal to the factorization of $L(s, {\rm sym}^4(\pi); \Lambda^2)$ above as well as to the identity $$ L(s, {\rm sym}^4(\pi); {\rm sym}^2) \, = \, L(s, \pi; {\rm sym}^8) L(s, {\rm sym}^4(\pi)) \zeta_F(s).
$$ For any cuspidal automorphic representation $\Pi$ of GL$(n, \mathbb A_F)$, if $S$ denotes the union of the archimedean places of $F$ with the set of finite places where $\Pi$ is ramified, one knows by Bump-Ginzburg ([BuG92]) that the incomplete $L$-function $L^S(s, \Pi; {\rm sym}^2)$, defined in a right half plane by the Euler product over places outside $S$, is holomorphic in $(1/2, 1)$. Applying this with $n=4, \Pi = {\rm sym}^4(\pi)$, and carefully handling the factors at $S$, we deduce the holomorphy of the symmetric square $L$-function of sym$^4(\pi)$. Now the knowledge gained from the proof of Theorem B on Landau-Siegel zeros of $L(s, {\rm sym}^4(\pi))$ allows us (see section 7) to prove Theorem D. We would like to thank D.~Bump, W.~Duke, H.~Jacquet, H.~Kim, W.~Luo, S.~Miller, F.~Shahidi and E.~Stade for useful conversations and/or correspondence. Clearly this paper depends on the ideas and results of the articles [HL94], [GHLL94], [HRa95], [Ra2000], [KSh2000,1] and [K2000]. The first author would also like to thank the NSF for support through the grants DMS-9801328 and DMS-0100372. The subject matter of this paper formed a portion of the first author's {\it Schur Lecture} at the University of Tel Aviv in March 2001, and he would like to thank J.~Bernstein and S.~Gelbart for their interest. \vskip 0.2in \section{\bf Preliminaries on Landau-Siegel zeros} For every $m \geq 1$, let ${\mathcal D}_m$ denote the class of {\it Dirichlet series} $L(s)= \sum\limits_{n \geq 1} \frac{a_n} {n^s}$, absolutely convergent in ${\rm Re}(s) > 1$ with an Euler product $\prod_p P_p(p^{-s})^{-1}$ of degree $m$ there, extending to the whole $s$-plane as a meromorphic function of bounded order, in fact with no poles anywhere except at $s=1$, and satisfying (relative to another Dirichlet series $L^{\vee}(s)$ in ${\mathcal D}_m$) a functional equation of the form $$ L_\infty (s) \, L(s) = WN^{1/2 - s} L_\infty ^{\vee} (1 - s) L^{\vee} (1-s), \leqno(1.1) $$ where $W \in \mathbb C^\ast$, $N \in \mathbb N$, and the {\it archimedean factor} $L_\infty (s)$ is $$ \pi^{-sm/2} \prod\limits_{j=1}^m \Gamma\left(\frac{s + b_j}{2}\right), $$ where $(b_j) \, \in \, \mathbb C^m$. Put ${\mathcal D} = \bigcup\limits_{m \geq 1} {\mathcal D}_m$. It might be useful to compare this with the definition of the Selberg class ([Mu94]). In the latter one requires in addition a Ramanujan type bound on the coefficients, but allows more complicated Gamma factors. One says that $L(s)$ is {\it self-dual} if $L(s)=L^{\vee}(s)$, in which case $W \in \{\pm 1 \}$. Important examples are $\zeta(s)$, Dirichlet and Hecke $L$-functions, the $L$-functions of {\it holomorphic newforms} $g$ of weight $k \geq 1$ and level $N$, normalized to be $$ L(s) = L (s + \frac{k-1}{2}, g), \,\, L_\infty(s) = \pi^{-s} \Gamma\left(\frac{s+(k-1)/2}{2}\right) \Gamma\left(\frac{s+(k+1)/2}{2}\right), \leqno(1.2) $$ $L$-functions of {\it cuspidal Maass forms} $\phi$ of level $N$ which are eigenfunctions of Hecke operators: $$ L(s) = L(s, \phi), \,\, L_\infty(s)=\pi^{-s} \Gamma\left(\frac{s + \delta +w}{2}\right) \Gamma\left(\frac{s+\delta - w}{2}\right), \,\, \delta \in \{0,1 \}, \leqno(1.3) $$ and the {\it Rankin-Selberg $L$-functions} $L(s, f \times g)$, where $f, g$ are cusp forms of holomorphic or Maass type. See the next section for their definition and generalizations. Call an $L(s) \in \mathcal D$ {\it primitive} if it cannot be factored as $L_1(s)L_2(s)$ with $L_1(s), L_2(s)$ non-scalar in $\mathcal D$.
The Dirichlet and Hecke $L$-functions, as well as those of cusp forms on the upper half plane, are primitive. \noindent{\bf Conjecture I} \, \it Every $L(s) \, \epsilon \, {\mathcal D}_m$ is {\it quasi-automorphic}, i.e., there exists an automorphic form $\pi$ on GL$(m)/\mathbb Q$ such that $L_p(s)\, = \, L(s, \pi_p)$ for almost all $p$. Moreover, $L(s)$ is primitive iff $\pi$ is cuspidal. \rm This is compatible with the Langlands philosophy ([La70]) and with the conjecture of Cogdell and Piatetski-Shapiro ([CoPS94]); Cogdell has remarked to us recently that Piatetski-Shapiro has also formulated (unpublished) a similar conjecture involving analogs of $\mathcal D$. Note such a thing cannot be formulated over number fields of degree $> 1$ as one can permute the Euler factors lying over any given rational prime. Also, there exists an example of Patterson over function fields $F$ over a finite field $\mathbb F_q$ satisfying analogous conditions, but with zeros on the lines ${\rm Re}(s) = 1/4$ and ${\rm Re}(s) = 3/4$. One problem in characteristic $p$ is that there is no minimal global field $F$ such as $\mathbb Q$. It is still an interesting open problem to know if one can define a good notion of ``primitivity'' over function fields. \noindent{\bf Conjecture II} \, \it For any $L(s) \, \in \, {\mathcal D}$, if it has a pole of order $r$ at $s=1$, then $\zeta(s)^r | L(s)$, i.e., $L(s)=\zeta(s)^r L_1(s)$, with $L_1(s) \, \epsilon \, {\mathcal D}$. \rm This is compatible with the conjectures of Selberg, Tate and Langlands. \noindent{\bf Definition 1.4} \, \it Let $L(s) \, \epsilon \, {\mathcal D}_m$ with $L_{\infty}(s) = \pi^{-ms} \mathop\Pi_{j=1}^m \Gamma\left(\frac{s+ b_j}{2}\right)$. Define its {\it thickened conductor} to be $$ M = N (2+ \Lambda) $$ where $$ \Lambda \, = \, \mathop\sum_{j=1}^m |b_j|. $$ \rm \noindent{\bf Definition 1.5} \, \it Let $c>0$. Then we say that $L(s)$ has a Landau-Siegel zero relative to $c$ if $L(\beta)=0$ for some $\beta \, \in \, (1- \frac{c}{\log M}, 1)$. \rm \noindent{\bf Definition 1.6} \it Let ${\mathcal F}$ be a family, by which we will mean a class of $L$-functions in ${\mathcal D}$ with $M \rightarrow \infty$ in ${\mathcal F}$. We say that ${\mathcal F}$ admits no {\it Landau-Siegel zero} if there exists an effective constant $c>0$ such that no $L(s)$ in ${\mathcal F}$ has a zero in $(1- \frac{c}{\log M}, 1)$. \rm The general expectation is framed by the following \noindent{\bf Conjecture III} \, \it Let $\mathcal F$ be a family in $\mathcal D$. Then $\mathcal F$ admits no Landau-Siegel zero. \rm One reason for interest in this is that the lack of such a zero implies a good lower bound for the value of $L(s)$ at $s=1$. Note that in view of Conjecture I, the {\it Grand Riemann Hypothesis}, shortened as GRH, implies that all the non-trivial zeros of any $L(s)$ in $\mathcal D$ lie on the critical line, hence it implies Conjecture III. Of course the GRH is but a distant goal at the moment, and it is hopefully of interest to verify Conjecture III for various families. The $L$-functions of pure motives over $\mathbb Q$, in particular those associated to the cohomology of smooth projective varieties $X/\mathbb Q$, are expected to be automorphic and hence should belong to $\mathcal D$. For these $L$-functions, when they are of {\it even} Frobenius weight, the values at $s=1$ have arithmetic significance by the general Bloch-Kato conjectures, and so the question of non-existence of Landau-Siegel zeros is of interest from a purely arithmetical point of view as well.
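As a quick numerical illustration of Definitions 1.4 and 1.5 (this example is our own addition and is not used in the sequel): for a holomorphic newform $g$ of weight $k$ and level $N$, normalized as in (1.2), the archimedean parameters are $b_1 = \frac{k-1}{2}$ and $b_2 = \frac{k+1}{2}$, so that
$$
\Lambda \, = \, \frac{k-1}{2} + \frac{k+1}{2} \, = \, k, \qquad M \, = \, N(2+k).
$$
Thus $\log M \asymp \log N$ in a family of fixed weight and $\log M \asymp \log k$ in a family of fixed level, in accordance with the remark in the Introduction that the thickened conductor is essentially the level, resp. the weight.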
We will need the following useful (and well known) fact: \noindent{\bf Lemma 1.7} \, \it Let $L(s) \, \epsilon \, {\mathcal D}_m$ be a positive Dirichlet series having a pole of order $r \geq 1$ at $s=1$, with $L'(s)/L(s) <0$ for real $s$ in $(1, \infty)$. Then there exists an effective constant $C>0$, depending only on $m$ and $r$, such that $L(s)$ has at most $r$ real zeros in $(1 - \frac{C}{\log M}, 1)$. \rm For a more relaxed discussion of these matters, see the expository article [Ra99] on the Landau-Siegel zeros, as well as the articles [GHLL94] and [HRa95]. \vskip 0.2in \section{\bf Preliminaries on automorphic $L$-functions} Fix a number field $F$ with ring of integers $\mathfrak{O}_F$, discriminant $D_F$, and adele ring $\mathbb A_F = F_\infty \times \mathbb A_{F,f}$, where $F_\infty$ is the product of the archimedean completions of $F$ and the ring $\mathbb A_{F,f}$ of finite adeles is a restricted direct product of the completions $F_v$ over non-archimedean places $v$. For each finite $v$, let $\mathfrak{O}_v$ denote the ring of integers of $F_v$. When $F = \mathbb Q$, $F_\infty \simeq \mathbb R$ and $\mathbb A_{F,f} \simeq \hat{\mathbb Z} \otimes \mathbb Q$, where $\hat{\mathbb Z}$ is the inverse limit of $\{\mathbb Z/m \vert m \geq 1\}$ and is non-canonically isomorphic to $\prod_p \mathbb Z_p$. Recall that a cuspidal automorphic representation $\pi$ of GL$(n, \mathbb A_F)$ is among other things admissible, i.e., a restricted tensor product $\otimes_v \pi_v \simeq \pi_\infty \otimes \pi_f$, where $v$ runs over all the places of $F$ and $\pi_v$ is, for almost all finite $v$, unramified, i.e., its space admits a vector invariant under GL$(n, \mathfrak{O}_v)$. Given a partition of $n$ as $\sum_{j =1}^r n_j$ with each $n_j \geq 1$, and cuspidal automorphic representations $\pi_1, \ldots, \pi_r$ of GL$(n_1, \mathbb A_F), \ldots $GL$(n_r,\mathbb A_F)$, Langlands's theory of Eisenstein series constructs a so-called {\it isobaric} automorphic representation ([La79]) $\pi$ of GL$(n, \mathbb A_F)$, which is unique by the work of Jacquet-Shalika ([JS81]), written as $$ \pi: = \, \boxplus_{i=1}^r \pi_i, \leqno(2.1) $$ with the property that its standard degree $n$ $L$-function $L(s, \pi)$ is the product $\prod_{i=1}^r L(s, \pi_i)$. Write $$ L(s, \pi_{\infty}) \, = \, \pi^{-dns/2}\prod_{j=1}^{dn} \Gamma(\frac{s+b_j(\pi)}{2}), \leqno(2.2) $$ where $d = [F:\mathbb Q]$ and the $b_j(\pi)$ are complex numbers depending only on $\pi_\infty$. Now consider a pair of isobaric automorphic representations $\pi, \pi'$ of ${\rm GL}(n, \mathbb A_F)$, GL$(m, \mathbb A_F)$, respectively. The associated Rankin-Selberg $L$-function is given as an Euler product of degree $nm$: $$ L(s, \pi \times \pi') \, = \, \prod_v L(s, \pi_v \times \pi'_v), \leqno(2.3) $$ convergent in a right half plane, with its {\it finite part}, namely $L(s, \pi_f \times \pi'_f)$, defining a Dirichlet series. When $m=1$ and $\pi'$ is the trivial representation $1$, this $L$-function agrees with the standard $L$-function. There are two distinct methods for defining these $L$-functions, the first using the {\it gcd}s of integral representations, due to Jacquet, Piatetski-Shapiro and Shalika ([JPSS83]), and the second via the constant terms of Eisenstein series on larger groups, due to Langlands and Shahidi ([Sh88, 90]); see also [MW89]. The fact that they give the same $L$-functions is non-trivial but true.
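For orientation (this display is our own addition, and the notation for the Satake parameters is not used elsewhere in the paper), at a finite place $v$ where both $\pi_v$ and $\pi'_v$ are unramified, the local factor in (2.3) is the degree $nm$ Euler factor
$$
L(s, \pi_v \times \pi'_v) \, = \, \prod_{i=1}^{n}\prod_{j=1}^{m} \left(1 - \alpha_i(\pi_v)\,\beta_j(\pi'_v)\, q_v^{-s}\right)^{-1},
$$
where $q_v$ denotes the cardinality of the residue field at $v$ and $\{\alpha_i(\pi_v)\}$, $\{\beta_j(\pi'_v)\}$ are the Satake parameters of $\pi_v$ and $\pi'_v$.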
These $L$-functions also admit a meromorphic continuation to the whole $s$-plane with no poles except possibly at $1-s_0$ and $s_0$ for a unique $s_0 \in i\mathbb R$; such a pole occurs iff $\pi$ and $\pi' \otimes \vert.\vert^{s_0}$ are contragredients of each other. One also has the functional equation $$ L(s, \pi \times \pi') \, = \, \varepsilon(s, \pi \times \pi')W(\pi \times \pi') L(1-s, \overline \pi \times \overline \pi') \leqno(2.4) $$ where $$ \varepsilon(s, \pi \times \pi') \, = \, (d_F^{nm}N(\pi \times \pi'))^{\frac{1}{2}-s}, $$ which is an invertible holomorphic function. Here $N(\pi \times \pi')$ is the {\it conductor}, and $W(\pi \times \pi') \in \mathbb C^\ast$ the {\it root number}, of the pair $(\pi, \pi')$. The following was proved in [HRa95] (Lemma $a$ of section 2): \noindent{\bf Lemma 2.5} \, \it For any unitary, isobaric automorphic representation $\pi$ of GL$(n, \mathbb A_F)$, the Dirichlet series defined by $L(s, \pi_f \times \overline \pi_f)$ has non-negative coefficients. Moreover, the logarithmic derivative $L'(s, \pi_f \times \overline \pi_f)/L(s, \pi_f \times \overline \pi_f)$ is negative for real $s$ in $(1, \infty)$. \rm The local Langlands correspondence for GL$(n)$, proved by Harris-Taylor ([HaT2000]) and Henniart ([He2000]) in the non-archimedean case (and proved long ago by Langlands in the archimedean case), gives a bijection at any place $v$, preserving the $L$- and $\varepsilon$-factors of pairs, between irreducible admissible representations $\pi_v$ of GL$(n, F_v)$ and $n$-dimensional representations $\sigma_v = \sigma(\pi_v)$ of the extended (resp. usual) Weil group $W_{F_v}': = W_{F_v} \times $SL$(2, \mathbb C)$ (resp. $W_{F_v}$) in the $p$-adic (resp. archimedean) case. This gives in particular the identity at any finite $v$: $$ N(\pi_v \times \pi'_v) \, = \, N(\sigma(\pi_v) \otimes \sigma(\pi'_v)), \leqno(2.7) $$ where for any representation $\tau$ of $W_{F_v}'$, $N(\tau)$ denotes the usual Artin conductor. A consequence of this is the sharp bound: $$ {M(\pi)}^{- n'} {M(\pi')}^{- n} \le M(\pi \times \pi') \le {M(\pi)}^{n'} {M(\pi')}^{n}. \leqno(2.8) $$ In fact we do not need the full force of this, and the weaker bound proved in Lemma $b$, section 2 of [HRa95], where the exponents were polynomially dependent on $n, n'$, is actually sufficient for our purposes. Combining all this information with Lemma 1.7 we get \noindent{\bf Proposition 2.9} \, \it Let $\pi$ be an isobaric automorphic representation of $GL(n, \mathbb A_{F})$ with $L(s, \pi \times \bar{\pi})$ having a pole of order $r \ge 1$ at $s = 1$. Then there is an effective constant $c > 0$ depending on $n$ and $r$, such that $L(s, \pi \times \overline \pi)$ has at most $r$ real zeros in the interval $$ J : \, = \, \{s \in \mathbb C \vert 1 - c / \log M (\pi \times \overline \pi) < {\rm Re}(s) < 1\}. $$ Furthermore, if $L(s, \pi \times \overline \pi) = {L_{1} (s)}^{k} L_{2} (s)$ for some nice $L$--series $L_{1} (s)$ and $L_{2} (s)$ with $k > r$ and $L_{2} (s)$ holomorphic in $(t,1)$ for some fixed $t \in (0,1)$, then $L_{1} (s)$ has no zeros in $J$. \rm This provides a very useful criterion to prove the nonexistence of Landau-Siegel zeros in some cases.
By the definition of the conductor of the $L$--series, if we know that the logarithm of the conductor of $L_{2} (s)$ does not exceed some multiple of the logarithm of the conductor of $L_{1} (s)$, with the constant depending only on the degrees of those $L$--series and $k$, then we can conclude that the logarithm of the conductor of $L_{1} (s)$ is bounded above and below by some multiples of the logarithm of the conductor of $L(s, \pi \times \pi')$, which will then imply that $L_{1} (s)$ has no Landau-Siegel zero. Given any isobaric automorphic representation $\pi$ of GL$(n, \mathbb A_F)$, a finite dimensional $\mathbb C$-representation $r$ of (the connected dual group) GL$(n, \mathbb C)$, and a character $\mu$ of $W_F$, we can define the associated automorphic $L$-function by $$ L(s, \pi; r \otimes \mu) \, = \, \prod\limits_v \, L(s, r(\sigma(\pi_v)) \otimes \mu_v), \leqno(2.11) $$ and $$ \varepsilon(s, \pi; r \otimes \mu) \, = \, \prod\limits_v \, \varepsilon(s, r(\sigma(\pi_v)) \otimes \mu_v), $$ where $v$ runs over all the places of $F$, $\pi_v \to \sigma(\pi_v)$ the arrow giving the local Langlands correspondence for GL$(n)/F_v$, and local factors are those attached to representations of the (extended) Weil group ([De73]). (To be precise, in the treatment of the non-archimedean case in [De73], Deligne uses the Weil-Deligne group $WD_{F_v}$, but it is not difficult to see how its {\it representations} are in bijection with those of $W'_{F_v}$. Also, the local $\varepsilon$-factors depend on the choice of a non-trivial additive character and the Haar measure, but we suppress this in our notation.) Originally, Langlands gave a purely automorphic definition of the local factors at almost all places, but now, thanks to [HaT2000] and [He2000], we can do better. We can also define {\it higher analogs of the Rankin-Selberg $L$-functions} and set, for any pair $(\pi, \pi')$ of isobaric automorphic forms on $({\rm GL}(n), {\rm GL}(m))/F$, a pair $(r, r')$ of finite dimensional $\mathbb C$-representations of $({\rm GL}(n, \mathbb C), {\rm GL}(m, \mathbb C))$, and a character $\mu$ of $W_F$, $$ L(s, \pi \times \pi'; r \otimes r' \otimes \mu) \, = \, \prod\limits_v \, L(s, r(\sigma(\pi_v)) \otimes r'(\sigma(\pi'_v)) \otimes \mu_v), \leqno(2.11) $$ and $$ \varepsilon(s, \pi \times \pi'; r \otimes r' \otimes \mu) \, = \, \prod\limits_v \, \varepsilon(s, r(\sigma(\pi_v)) \otimes r'(\sigma(\pi'_v)) \otimes \mu_v). $$ When $m=1$, $\pi' \simeq 1$ and $r' \simeq 1$, $L(s, \pi \times \pi'; r \otimes r' \otimes \mu)$ coincides with $L(s, \pi; r \otimes \mu)$. For each $j \geq 1$, let sym$^j$ denote the symmetric $j$-th power of the standard representation of GL$(n, \mathbb C)$. The definition (2.11) above gives in particular the families of automorphic $L$-functions $L(s, \pi; {\rm sym}^j \otimes \mu)$ and $L(s, \pi \times \pi'; {\rm sym}^j \otimes {\rm sym}^k \otimes \mu)$ for isobaric automorphic representations $\pi, \pi'$ of GL$(2, \mathbb A_F)$ and idele class character $\mu$, which we may, and we will, identify (via class field theory) with a character, again denoted by $\mu$, of $W_F$. One calls $L(s, \pi; {\rm sym}^j)$ {\it the symmetric $j$-th power $L$-function}. \vskip 0.2in \section{Some useful instances of functoriality} Here we summarize certain known instances, which we will need, of functorial transfer of automorphic forms from one group to another.
Let $\pi, \pi'$ be cuspidal automorphic representations of GL$(n, \mathbb A_F)$, GL$(m, \mathbb A_F)$, and let $r, r'$ be $\mathbb C$-representations of GL$(n, \mathbb C)$, GL$(m, \mathbb C)$ of dimension $d, d'$ respectively. The Langlands philosophy then {\it predicts} that there exists an isobaric automorphic representation $r(\pi) \boxtimes r'(\pi')$ of GL$(dd', \mathbb A_F)$ such that $$ L(s, r(\pi) \boxtimes r'(\pi')) \, = \, L(s, \pi \times \pi'; r \otimes r'). \leqno(3.1) $$ When it is known to exist, the map $\pi \to r(\pi)$ will be called a {\it functorial transfer} attached to $r$; some also call it a lifting.This is far from being known in this generality, but nevertheless, there have been some notable instances of progress wwhich we will make use of. Sometimes we do not know $r(\pi)$ exists, but still one has some good properties of thee relevant $L$-functions. A cuspidal automorphic representation $\pi$ is said to be {\bf dihedral} iff it admits a self-twist by a (necessarily) quadratic character $\delta$, i.e., $\pi \simeq \pi \otimes \delta$. Equivalently, there is a quadratic extension $K/F$ and a character $\chi$ of $K$, such that $\pi$ is isomorphic to $I_K^F(\chi)$, the representation {\it automorphically induced} by $\chi$ from $K$ (to $F$). The passage from the second to the first definition is by taking $\delta$ to be the quadratic character of $F$ associated to $K/F$. We will need to use the following results: \noindent{\bf Theorem 3.2} ([Ra2000]) \, \it Let $\pi, \pi'$ be cuspidal automorphic representations of GL$(2, \mathbb A_F)$. Then there exists an isobaric automorphic representation $\pi \boxtimes \pi'$ of GL$(4, \mathbb A_F)$ such that $$ L(s, \pi \boxtimes \pi') \, = \, L(s, \pi \times \pi'). $$ Moreover, $\pi \boxtimes \pi'$ is cuspidal iff one of the following happens: \begin{itemize} \item[(i)] $\pi, \pi'$ are both non-dihedral {\it and} there is no character $\mu$ such that $\pi' \simeq \pi \otimes \mu$; \item[(ii)] One of them, say $\pi'$, is dihedral, with $\pi' = I_K^F(\chi)$ for a character $\chi$ of a quadratic extension $K$, and the base change $\pi_K$ is cuspidal and not isomorphic to $\pi_K \otimes (\mu \circ \theta)\mu^{-1},$ where $\theta$ denotes the non-trivial automorphism of $K/F$. \end{itemize} \rm Note that in case (ii), $\pi$ may or may not be dihedral, and in the latter situation, $\pi \boxtimes \pi'$ is cuspidal. If $L(s) = \prod_v L_v(s)$ is an Euler product, and if $T$ is a finite set of places, we will write $L^T(s)$ for the incomplete Euler product $\prod_{v \notin T} L_v(s)$. \noindent{\bf Theorem 3.3} ([GJ79] for $n=2$, [PPS89] for $n=3$ and [BuG92] for general $n$) \, \it Let $\pi$ be a cuspidal automorphic representation of GL$(n, \mathbb A_F)$. Let $S$ be the union of the archimedean places of $F$ with the set of finite places where $\pi$ is ramified. Then $L^S(s, \pi; {\rm sym}^2)$ admits a meromorphic continuation and is holomorphic in in the real interval $(1/2,1)$. \rm When $n=2$, there is even an isobaric automorphic representation sym$^2(\pi)$ of GL$(3, \mathbb A_F)$ such that $$ L(s, {\rm sym}^2(\pi)) \, = \, L(s, \pi; {\rm sym}^2) \quad \quad (n=2), $$ and sym$^2(\pi)$ is cuspidal iff $\pi$ is non-dihedral. \rm We are stating here only the facts which we need. The reader is urged to read the articles quoted to get the full statements. The functional equation and the meromorphic continuuation of the symmetric square $L$-functions of GL$(n)/F$ can also be deduced from the Langalnds-Shahidi method. 
\noindent{\bf Theorem 3.5} ([KSh2000], [K2000], [KSh2001]) \, Let $\pi$ be a cuspidal automorphic representation of GL$(2, \mathbb A_F)$. Then for $j = 3,4$, there is an isobaric automorphic representation sym$^j(\pi)$ such that $$ L(s, {\rm sym}^j(\pi)) \, = \, L(s, \pi; {\rm sym}^j) \quad \quad (j=3,4). $$ Moreover, sym$^3(\pi)$ is cuspidal iff sym$^2(\pi)$ is cuspidal and does not admit a self-twist by a cubic character, while sym$^4(\pi)$ is cuspidal iff sym$^3(\pi)$ is cuspidal and does not admit a self-twist by a quadratic character. \rm A cuspidal automorphic representation $\pi$ of GL$(2, \mathbb A_F)$ is said to be {\bf tetrahedral}, resp. {\bf octahedral}, iff sym$^2(\pi)$, resp. sym$^3(\pi)$, is cuspidal and admits a non-trivial self-twist by a cubic, resp. quadratic character. We will say that $\pi$ is of {\bf solvable polydedral type} iff it is either dihedral or tetrahedral or octahedral. \vskip 0.2in \section{\bf Proof of Theorem A} In this section we will say that a pair $(\pi, \pi')$ of cuspidal automorphic representations of GL$(2, \mathbb A_F)$ is of {\bf general type} iff we have: \noindent{$(4.1)$} \begin{itemize} \item[(a)] Neither $\pi$ nor $\pi$ is dihedral; \, and \item[(b)] $\pi'$ is not a twist of $\pi$ by a character. \end{itemize} First we will deal with the special cases when (a) or (b) does not hold. Suppose (a) is satisfied, but not (b), i.e., there is a chharacter $\mu$ of (the idele class group of) $F$ such that $$ \pi' \, \simeq \, \pi \otimes \mu. $$ Then we have the decomposition $$ \pi \boxtimes \pi' \, \simeq \, ({\rm sym}^2(\pi) \otimes \mu) \boxplus \omega\mu, \leqno(4.2) $$ where $\pi \boxtimes \pi'$ denotes the isobaric automorphic representation of GL$(4, \mathbb A_F)$ associated to $(\pi, \pi')$ in [Ra2000], and $\omega$ is the central character of $\pi$. In terms of $L$-functions, we have $$ L(s, \pi \times \pi') \, = \, L(s, {\rm sym}^2(\pi) \otimes \mu)L(s, \omega\mu). \leqno(4.3) $$ One knows that , since $\pi$ is non-dihdral, $L(s, $sym$^2(\pi) \otimes \mu)$ admits no Landau-Siegel zero. This was proved in the ground-breaking article [GHLL94] for $\mu =1$ and $\pi$ self-dual; the general case was taken care of by a combination of the arguments of [HRa95] and then [Ba97]. Besides, when $\omega\mu$ is not self-dual, i.e., not of order $\leq 2$, $L(s, \omega\mu)$ admits no Siegel zero (see for example [HRa95]). Finally, it is a well known classical fact that for any character $\chi$ of order $\leq 2$, $L(s, \chi)$ can have at most one Siegel zero. So, putting all this together, we see that \noindent{$(4.4)$} \begin{itemize} \item[($\alpha$)] The Landau-Siegel zeros of $L(s, \pi \times (\pi \otimes \mu))$ coincide with those of $L(s, \omega\mu)$, \, and \item[($\beta$)] This set is empty unless $\omega\mu$ is of order $\leq 2$, in which case there is at most one Landau-Siegel zero. \end{itemize} If $F$ is a Galois number field (over $\mathbb Q$) not containing any quadratic field, one knows by [Stk74] that the Dedekind zeta function of $F$ has no Landau-Siegel zero. So we may replace {\it order $\leq 2$} in $(\beta)$ by {\it order $2$} for such $F$. This gives the desired assertion in this case, and it also brings up case (i) of Theorem A. Next consider the case when $\pi$ is non-dihedral, but $\pi'$ is dihedral, associated to a chaaracter $\chi$ of a quadratic extension $K$ of $F$. We will write $\pi' = I_K^F(\chi)$ and say that it is automorphically induced from $K$ to $F$ by $\chi$. 
Then by the basic properties of base change ([AC89], [Ra2000]) we have $$ \pi \boxtimes \pi' \, \simeq \, I_K^F(\pi_K \otimes \chi), \leqno(4.5) $$ where $\pi_K$ denotes the base change of $\pi$ to GL$(2)/K$, which is cuspidal because $\pi$ is non-dihedral. Thus by the inductive nature of $L$-functions, we get the following identity: $$ L(s, \pi \times \pi') \, = \, L(s, \pi_K \otimes \chi), \leqno(4.6) $$ By [HRa95] we know that $L(s, \pi_K \otimes \chi)$ does not admit any Landau-Siegel zero, and this gives Theorem A in this case. Now suppose both $\pi$, $\pi'$ are both dihedral. Then $\pi$, resp. $\pi'$, is naturally attached to a dihedral representation $\sigma$, resp. $\sigma'$, of the global Weil group $W_F$. Say, $\sigma = $Ind$_K^F(\chi)$, for a character $\chi$ of the Weil group of a quadratic extenssion. (By abuse of notation, we are writing Ind$_K^F$ instead of Ind$_{W_K}^{W_F}$. Since $\boxtimes$ corresponds to the usual tensor product on the Weil group side (see [Ra2000]), we have $$ L(s, \pi \times \pi') \, = \, L(s, {\rm Ind}_K^F(\chi) \otimes \sigma'). \leqno(4.7) $$ By Mackey, $$ {\rm Ind}_K^F(\chi) \otimes \sigma' \, \simeq \, {\rm Ind}_{K}^{F}(\chi \otimes {\rm Res}_K^F(\sigma')), \leqno(4.8) $$ where Res$_K^F$ denotes the restriction functor taking representations of $W_F$ to ones of $W_F$. Suppose $\sigma'$ is also {\it not} induced by a character of $W_K$. Then ${\rm Res}_K^F(\sigma')$ is irreducible and the base change $\pi'_K$ is cuspidal, and since $L(s, \pi \times \pi')$ equals $L(s, \pi'_K \otimes \chi)$, it has no Landau-Siegel zero, thanks to [HRa95]. So we may assume that $\sigma'$ is also induced by a character $\chi'$ of $W_K$. Then $$ {\rm Res}_K^F(\sigma') \, \simeq \, \chi' \oplus (\chi' \circ \theta), $$ where $\theta$ denotes the non-trivial automrophism of $K/F$. Plugging this into (4.8) and making use of the inductive nature f $L$-functions, we get $$ L(s, \pi \times \pi') \, = \, L(s, \chi\chi')L(s, \chi(\chi' \circ \theta)). \leqno(4.9) $$ So there is no Landau-Siegel zero unless $\chi\chi'$ or $\chi(\chi' \circ \theta)$ is of order $\leq 2$, which we will asssume to be the case from now on. We have yet to show that there is at most one Landau-Siegel zero , which is true (see the remarks above) if only one of them has order $\leq 1$. Suppose they are both of order $\leq 2$. If one of them, say $\chi\chi'$ is trivial, then $$ L(s, \pi \times \pi') \, = \, \zeta_F(s)L(s, \nu), \leqno(4.10) $$ where $\nu = \frac{\chi' \circ \theta}{\chi'}$. Note that since $\sigma'$ is irreduucible, $\chi'$ is not equal to $\chi' \circ \theta$.Then $\nu$ must be a quadratic character, and the right hand side of (4.10) evidently defines a non-negative Dirichlet series with a pole of order $1$ at $s=1$. So by Lemma 1.7, $L(s, \pi \times \pi')$ can have at most one Landau-Siegel zero. It is left to consider when $\mu: = \chi\chi'$ and $\nu'$ are both quadratic characters. The argument here is well known, and we give it nly for thre sake of completeness. Notes that the Dirichlet series defined by $$ L(s): \, = \, \zeta_F(s)L(s, \mu)L(s, \nu)L(s, \mu\nu), \leqno(4.11) $$ has non-negative coefficieents, meromorphic continuation and a functional equation, with no pole except at $s=1$, where the pole is simple; $L(s)$ is the Dedekind zeta function of the biquadratic extension of $F$ obtained as the compositum of the quadratic extensions cut out by $\mu$ and $\nu$. 
Thus by applying Lemma 1.7 again, we see that $L(s)$, and hence its divisor $L(s, \pi \times \pi')$ (see (4.9), has at most one Landau-Siegel zero. This finishes the proof of Theorem A when $\pi, \pi'$ are both dihedral, bringing up case (ii) when they are both defined by chracters of the same quadratic extension $K$. So we may, and we will, asssume from here on that bothe (a) and (b) of (4.1) are satisfied. Now Theorem A will be proved if we establish the following theorem, which gives a stronger statement. \noindent{\bf Theorem 4.12} \, \it Let $\pi$ and $\pi'$ are unitary cuspidal automorphic representations of GL$(2, \mathbb A_F)$,and assume that the pair $(\pi, \pi')$ is of general type. Then, \textnormal{(a)}: There is an effective absolute constant $c \gneqq 0$ such that $L(s, \pi \times \pi')$ has no zero in the interval $(\, 1 - c / {\log M}, 1\,)$. \textnormal{(b)}: Additionally, if $\pi$ and $\bar{\pi}'$ is not twist equivalent by a product of a quadratic character and ${|\,|}^{\mathfrak{i} t}$, then there exists an absolute effective constant $c_{2} \gneqq 0$ such that $L(s, \pi \times \pi')$ has no zero in the region \[ \Set{s = \sigma + \mathfrak{i} t \mid \sigma \le 1 - {(c_{2} \mathfrak{L}_{t})}^{-1}} \] where \[ \mathfrak{L}_{t} = \log{[N(\pi \times \pi') \, D_{F}^{4} \, {(2 + |t| + \Lambda)}^{4 N}]}, \] with $N = [F : \mathbf{Q}]$ and $\Lambda$ denoting the maximum of the infinite types of $\pi$ and $\pi'$. \rm See 1.4 for the definition of the infinite parameter $\Lambda$. Such a result was a working hypothesis in the work of Moreno ([Mo]) on a n effective vesion of the strong multiplicity one theorem for GL$(2)$. \noindent \emph{Proof. of Theorem 4.12} (a) \, Put $$ \mathfrak{L} \, = \, \mathfrak{L}_0. $$ Then by definition of $\Lambda$, $$ \log M \, = \, \mathfrak{L}. \leqno(4.13) $$ Let $\omega$ and $\omega'$ be the central characters of $\pi$ and $\pi'$ respectively. Since $\pi, \pi'$ is nondihedral, sym$^2(\pi)$ and ${\rm sym}^{2} (\pi')$ are cuspidal. Also, $(\pi, \pi')$ being of general type implies (cf. [Ra2000]) that their Rankin-Selberg product $\pi \boxtimes \pi'$ of GL$(4, \mathbb A_F)$ is cuspidal. Consider the following isobaric automorphic representation $$ \Pi = 1 \boxplus (\pi \boxtimes \pi') \boxplus ({\rm sym}^{2} (\bar{\pi}) \otimes \omega) \leqno(4.14) $$ Write, as usual $$ \Pi \, = \, \Pi_\infty \otimes \Pi_f. $$ Note that $\Pi$ is unitary and so its contragredient $\Pi^\vee$ identifies with its complex conjugate $\bar \Pi$. By the bi-additivity of the Rankin-Selberg process, we have the factorization \noindent{$(4.15)$} \begin{align} L (s, \Pi_f \times \bar{\Pi}_f) = &\zeta_{F} (s) L (s, \pi_f \times \pi'_f) L (s, \bar{\pi}_f \times \bar{\pi}'_f) \notag \\ & L (s, {\rm sym}^{2} (\pi_f) \otimes {\omega}^{-1}) \notag \\ & L (s, {\rm sym}^{2} (\bar{\pi_f}) \otimes \omega) L (s, (\pi_f \boxtimes \pi'_f) \times (\bar{\pi}_f \boxtimes \bar{\pi}'_f)) \notag \\ & L (s, {\rm sym}^{2} (\pi_f) \times {\rm sym}^{2} (\bar{\pi}_f)) L (s, (\pi_f \boxtimes \pi'_f) \times {\rm sym}^{2} (\pi_f) \otimes {\omega}^{-1}) \notag \\ & L (s, (\bar{\pi}_f \boxtimes \bar{\pi}'_f) \times {\rm sym}^{2} (\bar{\pi}_f) \otimes \omega). \notag \end{align} By abuse of notation, we are writing $\omega$ instead of $\omega_f$, which should not cause any confusion. 
It is well known that $\zeta_F(s)$ has a simple pole at $s=1$, and since sym$^2(\pi)$ and $\pi \boxtimes \pi'$ are cuspidal, $L (s, (\pi_f \boxtimes \pi'_f) \times (\bar{\pi_f} \boxtimes \bar{\pi}'_f))$ and $L(s, {\rm sym}^{2} (\pi_f) \times {\rm sym}^{2} (\bar{\pi}_f))$ have simple poles at $s=1$ as well. Moreover, the remaining factors are entire with no zero at $s=1$ (see the discussion following (2.3)). Thus $$ {\rm ord}_{s=1} L(s, \Pi_f \times \bar \Pi_f) \, = \, 3. \leqno(4.16) $$ By Lemma 2.5, the Dirichlet series defined by $L (s, \Pi_f \times \bar{\Pi_f})$ has nonnegative coefficients. Put $$ L_{1} (s) = L (s, \pi_f \times \pi'_f) L (s, \bar{\pi}_f \times \bar{\pi}'_f) \leqno(4.17) $$ Since the real zeros of $L (s, \pi_f \times \pi'_f)$ and $ L (s, \bar{\pi}_f \times \bar{\pi}'_f)$ are the same, we get for any $\beta \in (0,1)$, $$ {\rm ord}_{s=\beta} L_1(s) \, = \, 2 \, {\rm ord}_{s=\beta} L(s, \pi_f \times \pi'_f). \leqno(4.18) $$ Next observe that at any place $v$, if $\sigma_v$ (resp. $\sigma'_v$) denotes the $2$-dimensional representation of $W'_{F_v}$ (resp. $W_{F_v}$) attached to $\pi_v$ for $v$ finiite (resp. $v$ archimedean) by the local Langlands correspondence, we have $$ \sigma_v \otimes {\rm sym}^3(\sigma_v) \, \simeq \, (\sigma_v \otimes \omega_v) \oplus {\rm sym}^3(\sigma_v), $$ which implies the decomposition \noindent{$(4.19)$} \begin{align} (\sigma_v \otimes \sigma'_v) \otimes {\rm sym}^2(\sigma_v) \otimes \omega_v^{-1} &\simeq (\sigma_v \otimes {\rm sym}^2(\sigma_v) \otimes \omega_v^{-1}) \otimes \sigma_v' \notag \\ &\simeq (\sigma_v \otimes \sigma'_v) \oplus ({\rm sym}^3(\sigma_v) \otimes \omega_v^{-1} \otimes \sigma'_v). \end{align} This gives, by the definition of automorphic $L$-functions in section 1, the following identity of $L$-functions: $$ L (s, (\pi_f \boxtimes \pi'_f) \times {\rm sym}^{2} (\pi_f) \otimes {\omega}^{-1}) \, = \, L (s, \pi_f \times \pi'_f) L (s, A^3(\pi_f) \times \pi'_f) \leqno(4.20) $$ where, following [KSh2001], we have set $$ A^3(\pi) : \, = \, {\rm sym}^3(\pi) \otimes {\omega}^{-1}. $$ We need \noindent{\bf Lemma 4.21} \, \it Since $(\pi, \pi')$ is of general type, $L(s, A^3(\pi_f) \times \pi_f')$ and $L(s, A^3(\bar \pi_f) \times \bar \pi'_f)$ are entire. \rm {\it Proof of Lemma} \, Existence of a pole for one of them, say at $s=s_0$, will imply a pole for the other at $s = \overline s_0$; hence it suffices to prove that $L(s, A^3(\pi_f) \times \pi'_f)$ is entire. Since the local factors at the archimedean places do not vanish, it is enough to show that the full $L$-function $L(s, A^3(\pi) \times \pi')$ is entire. Since $(\pi, \pi')$ is of general type, $\pi, \pi'$ are non-dihedral and not twists of each ther. If $\pi$ is not tetrahedral (see section 3 for definition), then by [KSh2000], sym$^3(\pi)$ is cuspidal. The asserton of Lemma is clear in thhis case by the standard results on the Rankin-Selberg $L$-functions (see section 2). So we may, and we will, assume that $\pi$ is tetrahedral. Then sym$^2(\pi)$ is isomorphic to sym$^2(\pi) \otimes \nu$ for some cubic character $\nu$, and by Theorem 2.2 of [KSh2001], $A^3(\pi)$ is isomorphic to $(\pi \otimes \nu) \boxplus (\pi \otimes \nu^2)$. Then $L(s, A^3(\pi) \times \pi')$ factors as $L(s, (\pi \otimes \nu) \times \pi')L(s, (\pi \otimes \nu^2) \times \pi')$, which is entire by the Rankin-Selberg theory, because $\pi'$ is not a twist of $\bar \pi \simeq \pi \otimes \omega^{-1}$. 
\qed Put \noindent{$(4.22)$} \begin{align} L_{2} (s)= &\zeta_{F} (s) L (s, {\rm sym}^{2} (\pi_f) \otimes {\omega}^{-1}) L (s, {\rm sym}^{2} (\bar{\pi_f}) \otimes \omega) \notag \\ &L (s, (\pi_f \boxtimes \pi'_f) \times (\bar{\pi}_f \boxtimes \bar{\pi}'_f)) L (s, {\rm sym}^{2} (\pi_f) \times {\rm sym}^{2} (\bar{\pi}_f)) \notag \\ &L (s, A_3(\pi_f) \times \pi'_f) L (s, \bar A_3(\pi)_f \times \bar{\pi}_f') \notag \end{align} Then $$ L (s, \Pi \times \bar{\Pi}) \, = \, L_{1}^{2} (s) L_{2} (s). \leqno(4.23) $$ Applying Lemma 4.21, and using the cuspidality of sym$^2(\pi)$ and $\pi \boxtimes \pi'$, we get the following \noindent{\bf Lemma 4.24} \, \it $L_2(s)$ is entire. \rm Combining this lemma with (4.16), (4.23) and (2.x), and using Lemma 1.7, we get the existence of a positive, effective constant $c$ such that $$ 2 {\rm ord}_{s=\beta} L_{1}(s) \, \leq \, 3 \quad if \quad \beta \in (1-c/\log M, 1). \leqno(4.25) $$ In view of (4.18), if $L (s, \pi \times \pi')$ has a Landau-Siegel zero $\beta$ (relative to $c$), then $\beta$ will be a zero of $L_{1}^{2} (s)$ of multiplicity $4$, leading to a contradiction. We have now proved part (a) of Theorem 4.12, and hence Theorem A. (b) \, First note that under the condition of (2), $L(s, \pi \times \pi' \otimes | |^{\mathfrak{i} t})$ has no Landau-Siegel zero. Moreover, the maximum $\Lambda_{t}$ of infinite types $\Lambda(\pi)$ and $\Lambda(\pi' \otimes ||^{\mathfrak{i} t})$ are no more than $|t| + \Lambda$. Thus $L(s, \pi \otimes ||^{\mathfrak{i} t})$ has no zero in the interval $$ 1 - \frac{1}{(c_{2} L_{t})} < \sigma < 1 \leqno(4.26) $$ Since we have $$ L(\sigma + \mathfrak{i} t, \pi \times \pi') \, = \, L(\sigma, \pi \times \pi' \otimes ||^{\mathfrak{i} t}), $$ the assertion of (b) now follows. \qedsymbol \vskip 0.2in \section{\bf Proof of Theorem B} Let $\pi$ be a cuspidal automorphic representation of GL$(2, \mathbb A_F)$ of central character $\omega$. First we will dispose of the {\it solvable polyhedral} cases, where we will not need to assume that $\pi$ is self-dual. Suppose $\pi$ is {\it dihedral}, i.e., of the form $I_K^F(\chi)$ for a character $\chi$ (of the idele classes) of a quadratic extension $K$ of $F$, with $\theta$ denoting non-trivial automorphism of $K/F$. Let $\chi_o$ denote the restriction of $\chi$ to $F$. Note that $$ \chi\chi^\theta \, = \, \chi_0 \circ N_{K/F}, \leqno(5.1) $$ where $N_{K/F}$ denotes the norm from $K$ to $F$. ($\chi_0 \circ N_{K/F}$ is the base change $(\chi_0)_K$ of $\chi_0$ to $K$.) In particular, $$ I_K^F(\chi\chi^\theta) \, \simeq \, \chi_0 \boxplus \chi_0\delta, \leqno(5.2) $$ where $\delta$ denotes the quadratic character of $F$ attached to $K/F$ by class field theory. For any pair $(\lambda, \xi)$ of characters of $K$, one has (cf. [Ra2000]) $$ I_K^F(\lambda) \boxtimes I_K^F(\xi) \, \simeq \, I_K^F(\lambda\xi) \boxplus I_K^F(\lambda\xi^\theta). \leqno(5.3) $$ Putting $\lambda = \xi = \chi$ in (5.3), and using (5.1), (5.2), we get $$ \pi \boxtimes \pi \, \simeq \, I_K^F(\chi^2) \boxplus \chi_0 \boxplus \chi_0\delta. 
$$ Since $\pi \boxtimes \pi$ is the isobaric sum ($\boxplus$) of sym$^2(\pi)$ with $\omega$, which is $\chi_0\delta$ (as it corresponds to the determinant of the representation Ind$_K^F(\chi)$ of $W_K$), we get $$ {\rm sym}^2(\pi) \, \simeq \, I_K^F(\chi^2) \boxplus \chi_0, \leqno(5.4) $$ Putting $\lambda = \xi = \chi^2$ in (5.3), using (5.1), (5.2), (5.4), and the inductive nature of $L$-functions, we get the following identity of $L$-functions: $$ L(s, {\rm sym}^2(\pi) \times {\rm sym}^2(\pi)) \, = \, L(s, \chi^4)L(s, \chi_0^2)^2 L(s, \chi_0^2\delta) L(s, \chi^3\chi^\theta)^2. \leqno(5.5) $$ It is an abelian $L$-function, and the problem of Landau-Siegel zeros here is the classical one, and there is no such zero unless one (or more) of the characters appearing on the right of (5.5) is of order $\leq 2$. When $\omega = 1$, $\chi_0$ is $\delta$, and since $\delta^2 =1 = \delta \circ N_{K/F}$, we obtain $$ L(s, {\rm sym}^2(\pi) \times {\rm sym}^2(\pi)) \, = \, L(s, \chi^4)\zeta_F(s)^2L(s, \delta)L(s, \chi)^2. \leqno(5.6) $$ Next let $\pi$ be {\it tetrahedral}, in which case sym$^2(\pi)$ is cuspidal and admits a self-twist by a non-trivial cubic character $\mu$. In other words, there is a cyclic extension $M/F$ of degree $3$ cut out by $\mu$, with non-trivial automorphism $\alpha$, and a character $\lambda$ of $M$, not fixed by $\alpha$, such that $$ {\rm sym}^2(\pi) \, \simeq \, I_M^F(\lambda). \leqno(5.7) $$ Since by Mackey, $$ {\rm Ind}_M^F(\lambda)^{\otimes 2} \, \simeq \, {\rm Ind}_M^F(\lambda^2) \oplus {\rm Ind}_M^F(\lambda\lambda^\alpha) \oplus {\rm Ind}_M^F(\lambda\lambda^{\alpha^2}) $$ we get $$ L(s, {\rm sym}^2(\pi) \times {\rm sym}^2(\pi)) \, = \, L(s, \lambda^2) L(s, \lambda\lambda^\alpha) L(s, \lambda\lambda^{\alpha^2}). \leqno(5.8) $$ Again it is an abelian $L$-function, and there is nothing more to prove. Now let $\pi$ be {\it octahedral}. Then by definition, sym$^j(\pi)$ is cuspidal for $j \leq 3$ and moreover, $$ {\rm sym}^3(\pi) \, \simeq \, {\rm sym}^3(\pi) \otimes \eta, \leqno(5.9) $$ for a quadratic character $\eta$. Equivalently, there is a quadratic extension $E/F$ (attached to $\eta$) such that the base change $\pi_E$ is tetrahedral, ie., there exists a cubic character $\nu$ of $E$ such that $$ {\rm sym}^2(\pi_E) \, \simeq \, {\rm sym}^2(\pi_E) \otimes \nu. \leqno(5.10) $$ Now we appeal to the evident identity $$ L(s, {\rm sym}^2(\pi) \times {\rm sym}^2(\pi)) \, = \, L(s, {\rm sym}^4(\pi))L(s, {\rm sym}^2(\pi)\otimes \omega) \zeta_F(s). \leqno(5.11) $$ Then it suffices to prove that the set of Landau-Siegel zeros of $L(s, {\rm sym}^4(\pi))$ is the same as that of the maximal abelian $L$-function dividing it. To this end we note that by Theorem 3.3.7, part (3), of [KSh2001], $$ {\rm sym}^4(\pi) \, \simeq \, I_E^F(\nu^2) \otimes \omega^2 \boxplus {\rm sym}^2(\pi) \otimes \omega\eta, \leqno(5.12) $$ so that $$ L(s, {\rm sym}^4(\pi)) \, = \, L(s, \nu^2(\omega \circ N_{E/F})^2)L(s, {\rm sym}^2(\pi) \otimes \omega\eta). \leqno(5.13) $$ Recall from section 1 that since $\pi$ is non-diehdral, $L(s, {\rm sym}^2(\pi) \otimes \beta)$ has no Landau-Siegel zeero for any character $\beta$. So we are done in this case as well. So from now on we may, and we will, assume that $\pi$ is {\it not of solvable polyhedral type}. In view of the identity (5.11), the derivation of Theorem B will be complete once we prove the following result on the symmetric $4$-th power $L$-function of $\pi$, which may be of independent interest. 
\noindent{\bf Theorem B$^\prime$} \, \it Let $\pi$ be a cusspidal automorphic represenation of GL$(2, \mathbb A_F)$ with trivial central character, which is not of solvable polyhedral type. Then $L(s, {\rm sym}^{4} (\pi))$ admits no Landau-Siegel zero, More explicitly, there exists a postive, effective constant $C$ such that it has no zero in the real interval $(\,1 - C {\mathfrak{L}}^{-1} \,)$ for some constant $C$, where \[ \mathfrak{L} = \log [N(\pi) D_{F}^{2} {(2 + \Lambda)}^{2 N}] \] where $N = [F : \mathbf{Q}]$, and $\Lambda$ the infinite type of $\pi$. \rm \noindent{\bf Corollary 5.14} \, \it Under the hypotheses of Theorem B$^\prime$, the Landau-Siegel zeros of $L(s, {\rm sym}^{2} (\pi) \times {\rm sym}^{2} (\pi))$, if any, comes from one of $\zeta_F(s)$. If $F$ is a Galois extension of $\mathbb Q$ not containing any quadratic field, there is no Landau-Siegel zero at all. \rm \noindent \emph{Proof of Theorem B$^\prime$} First we note that ${\rm sym}^{4} (\pi)$ is cuspidal as $\pi$ is not of solvable polyhedral type ([KSh2001]). Also, ${\rm sym}^4(\pi)$ is self-dual as $\pi$ is. Put $$ \Pi = 1 \boxplus {\rm sym}^{2} (\pi) \boxplus {\rm sym}^{4} (\pi), \leqno(5.15) $$ which is a self-dual isobaric automorphic representation of GL$(9, \mathbb A_F)$. Since it is unitary, it is also self-conjugate. A formal caculation gives the identities $$ L(s, {\rm sym}^{4}(\pi) \times {\rm sym}^{2} (\pi)) \, = \, L(s, {\rm sym}^{2} (\pi))L(s, \pi; {\rm sym}^6), \leqno(5.16) $$ and \noindent{$(5.17)$} \begin{align} L(s, \Pi \times \Pi) = &\zeta_{F} (s) L(s, {\rm sym}^{2} (\pi) \times {\rm sym}^{2} (\pi)) L(s, {\rm sym}^{4} (\pi) \times {\rm sym}^{4} (\pi)) \notag \\ &\phantom{=} L(s, {\rm sym}^{2} (\pi))^2 L(s, {\rm sym}^{4} (\pi))^{2} L(s, {\rm sym}^{4}(\pi) \times {\rm sym}^{2} (\pi))^2 \notag \\ &= \zeta_{F} (s) L(s, {\rm sym}^{2} (\pi) \times {\rm sym}^{2} (\pi)) L(s, {\rm sym}^{4} (\pi) \times {\rm sym}^{4} (\pi)) \notag \\ &\phantom{=} L(s, {\rm sym}^{2} (\pi))^4 L(s, {\rm sym}^{4} (\pi))^{4} L(s, \pi; {\rm sym}^6)^2. \notag \end{align} By Lemma 2.5, the Dirichlet series defined by $L(s, \Pi_f \times \Pi_f)$ has non-negativ coefficieents and moreover, the cusspidality of sym$^j(\pi)$ for $j = 2,4$ implies that $$ -{\rm ord}_{s=1} L(s, \Pi_f \times \Pi_f) \, = \, 3. \leqno(5.18) $$ Put $$ L_{1} (s) \, = \, L(s, {\rm sym}^{4} (\pi_f))^4 \leqno(5.19) $$ and define $L_{2} (s)$ by the equation $$ L(s, \Pi_f \times \Pi_f) \, = \, L_1(s)L_2(s). \leqno(5.20) $$ \noindent{\bf Proposition 5.21} \, \it $L_{2} (s)$ is holomorphic in the interval $(\,1/2, 1\,)$. \rm {\it Proof of Proposition} \, Since we have \noindent{$(5.22)$} \begin{align} L_{2} (s) &= \zeta_{F} (s) L(s, {\rm sym}^{2} (\pi_f) \times {\rm sym}^{2} (\pi_f)) L(s, {\rm sym}^{4} (\pi_f) \times {\rm sym}^{4} (\pi_f)) \notag \\ &\phantom{=} L(s, {\rm sym}^{2} (\pi_f))^4 L(s, \pi_f; {\rm sym}^{6})^2, \notag \end{align} and since all the factors other than the square of the symmetric $6$-th power $L$-function are, owing to the cuspidality of sym$^j(\pi)$ for $j = 2,4$, holomorphic in $(0,1)$, it suffices to show the same for $L(s, \pi_f; {\rm sym}^6)$. But this we cannot do, given the current state of what one knows. But we are thankfully rescued by the following identity $$ L(s, {\rm sym}^{3} (\pi_f); {\rm sym}^{2}) = L(s, {\rm sym}^{2} (\pi_f) ) L(s, \pi_f; {\rm sym}^{6}). 
\leqno(5.23) $$ Consequently, we have \noindent{$(5.24)$} \begin{align} L_{2} (s) &= \zeta_{F} (s) L(s, {\rm sym}^{2} (\pi_f) \times {\rm sym}^{2} (\pi_f)) L(s, {\rm sym}^{4} (\pi_f) \times {\rm sym}^{4} (\pi_f)) \notag \\ &\phantom{=} L(s, {\rm sym}^{2} (\pi_f))^2 L(s, {\rm sym}^3(\pi_f); {\rm sym}^2)^2, \notag \end{align} and Proposition 5.21 will follow from \noindent{\bf Lemma 5.25} \, \it $L(s, {\rm sym}^3(\pi_f); {\rm sym}^2)$ is holomorphic in $(1/2, 1)$. \rm {\it Proof of Lemma 5.25} \, Let $S$ be the union of the archimedean places of $F$ with the (finite) set of fiinite places $v$ where ${\rm sym}^4(\pi)$ is unramified. It will be proved in section 7 (see Lemmas 7.9 and 7.4) that at any $v$ in $S$, $L(s, \pi_v; {\rm sym}^{2j})$ is holomorphic in $(1/2, 1)$ for $j \leq 4$. Thus it suffices to prove that the incomplete $L$-function $L^S(s, {\rm sym}^3(\pi); {\rm sym}^2)$, defined in a right half plane by $\prod_{v \notin S} L(s, {\rm sym}^3(\pi_v); {\rm sym}^2)$, is holomorphic in $(1/2, 1)$. But since sym$^3(\pi)$ is a cuspidal automorphic representation of GL$(4, \mathbb A_F)$, this is a consequence (see Theorem 3.3) of the main result of [BuG92]. \qed {\it Proof of Theorem B$^\prime$} (contd.) \, Apply Lemma 1.7 and Proposition 2.9 to the positive Dirichlet series $L(s, \Pi_f \times \Pi_f)$, which has a pole at $s=1$ of order $3$. Since $L_2(s)$ is holomorphic in $(1/2,1)$, there is an effective constant $c > 0$ such that the number of real zeros of $L_1(s)$ in $(1 - \frac{c}{\log M(\Pi \times \Pi)}, 1)$ is bounded above by $3$. But $L_1(s)$ is the fourth power of $L(s, \pi_f; {\rm sym}^4)$, and so $L(s, \pi_f; {\rm sym}^4)$ can have no zero in this interval. Also, by (2.8), $$ M(\pi; {\rm sym}^4) \, \asymp \, M(\Pi \times \Pi), \leqno(5.26) $$ where the implied constants are effective. Now we have proved Theorem B$^\prime$, and hence Theorem B. \qedsymbol \noindent{\bf Remark 5.27} \, In Theorem B, we asssumed that $\pi$ is self-dual. To treat the general case with these arguments one needs the following hypothesis for $r=4$. \noindent{\bf Hypothesis 5.28} \, \it Let $\pi$ be a unitary cuspidal representaion of $GL(r, \mathbb A_F)$, and $\chi$ a non-trivial quadratic character of $F$, then $L(s, \pi; {\rm sym}^{2} \otimes \chi)$ is holomorphic in $(\,t, 1\,)$ for a fixed real number $t < 1$. \rm For $r=2$, of course, there is nothing to do as sym$^2(\pi)$ is automorphic ([GJ79]). For $r=3$, this was established W.~Banks in [Ba96], thus proving a hypothesis of [HRa95] enabling the completion of the proof of the lack of Landau-Siegel zeros for cusp forms on GL$(3)/F$. \vskip 0.2in \section{\bf Proof of Corollary C} Here $g$ is a Maass form on the upper half plane relative to SL$(2, \mathbb Z)$, with Laplacian eigenvalue $\lambda$ and Hecke eigenvalues $a_p$. If $\pi$ is the cuspidal automorphic representation of GL$(2, \mathbb A)$, $\mathbb A = \mathbb A_\mathbb Q$, generated by $g$ (see [Ge75]), we may consider the Gelbart-Jacquet lift sym$^2(\pi)$, which is an isobaric automorphic representation of GL$(3, \mathbb A)$. Since $g$ has level $1$, it is not dihedral, and so sym$^2(\pi)$ is cuspidal. Moreover, since sym$^2(\pi_p)$ is, for any prime $p$, {\it unramified} because $\pi_p$ is, which means that sym$^2(\pi_p)$ is {\it spherical} at $p$, i.e., it admits a non-zero vector fixed by the maximal compact subgroup $K_p : = $GL$(3, \mathbb Z_p)$. It is also spherical at infinity, i.e., has a non-zero fixed vector under the orthogonal group $K_\infty: = {\rm O}(3)$. 
Moreover, the center $Z(\mathbb A)$ acts trivially, and the archimedean component sym$^2(\pi_\infty)$ consists of eigenfunctions for the center $\mathfrak z$ of the enveloping algebra of Lie$({\rm GL}(3, \mathbb R))$. In sum, sym$^2(\pi)$ is a subrepresentation of $$ V: \, = \, L^2(Z(\mathbb A){\rm GL}(3, \mathbb Q) \backslash {\rm GL}(3, \mathbb A)), \leqno(6.1) $$ admitting a spherical vector, i.e., a (non-zero) smooth function $\phi$ invariant under $K: = \prod_v K_v$, where $v$ runs over the places $\{\infty, 2,3,5,7, \ldots, p, \ldots\}$. GL$(3, \mathbb A)$ acts on $V$ by right translation and leaves invariant the natural sclar product $\langle \, .\, ,\, . \, \rangle$ given, for all $\xi_1, \xi_2 \in V$, by $$ \langle \, \xi_1 \, , \, \xi_2 \, \rangle \, = \, \int_{Z(\mathbb A){\rm GL}(3, \mathbb Q) \backslash {\rm GL}(3, \mathbb A)} \xi_1(x){\overline \xi_2}(x) dx, $$ where $dx$ is the quotient measure defined by the Haar measures on ${\rm GL}(3, \mathbb A)$, $Z(\mathbb A)$ and ${\rm GL}(3, \mathbb Q)$, chosen as follows. On the additive group $\mathbb A$, take the measure to be the product measure $\prod_v dy_v$, where $dy_\infty$ is the Lebesgue measure on $\mathbb Q_\infty = \mathbb R$, and for each prime $p$, $dy_p$ is normalized to give measure $1$ to $\mathbb Z_p$. Take the measure $d^\ast y = dy/|y|$ on $\mathbb A^\ast$, where $|y| = \prod_v |y_v|$ the natural absolute value, namely the one given by $|y_\infty| = {\rm sgn}(y_\infty)y_\infty$ and $|y_p| = p^{-v_p(y_p)}$. Since the center $Z$ is isomorphic to the multiplicative group, this defines a Haar measure $dz = \prod_v dz_v$ on $Z(\mathbb A)$. On GL$(3, \mathbb A)$ take the product measure $\prod_v dx_v$, where each $dx_v$ is given, by using the Iwasawa decomposition GL$(3, \mathbb Q_v) = Z_vT_vN_vK_v$, as $dz_vdt_vdn_vdk_v$. Here $T_v$ denotes the subgroup of diagonal matrices of determinant $1$, with $dt_v$ being the transfer of the measure $d^\ast t_v$ via the isomorphism $T_v \simeq F_v^\ast$, $N_v$ the unipotent upper triangular group with measure $dn_v$ being the transfer of $dt_v$ via the isomorphism of $N_v$ with the additive group $F_v$, and $dk_v$ the Haar measure on $K_v$ noralized to give total volume $1$. The representation sym$^2(\pi)$ is a unitary summand. Since sym$^2(\pi)$ is irreducible, such a $\phi$ will generate the whole space by taking linear combinations of its translates and closure. Note that $\phi$ is the pull back to GL$(3, \mathbb A)$ of a function $\phi_0$, which is real analytic by virtue of being a $\mathfrak z$-eigenfunction, on the $5$-dimensional (real) orbifold $$ M : \, = \, Z(\mathbb A){\rm GL}(3, \mathbb Q) \backslash {\rm GL}(3, \mathbb A)/K \, = \, SL(3, \mathbb Z)\backslash {\rm SO}(3). \leqno(6.2) $$ Since $\phi$ and $\phi_0$ determine each other, we will by abuse of notation use the same symbol $\phi$ to denote both of them. The spherical function $\phi$, sometimes called a {\it new vector}, is unique only up to multilication by a scalar. It is important for us to normalize it. There are two natural ways to do it. The first way, called the {\it arithmetic normalization}, is to make the Fourier coefficient $a(1,1)$ (see below) equal $1$ (as for newforms on the upper half plane). The second way, which is what we will pursue here, is called the {\it spectral normalization}, and normalizes the scalar product $\langle, \rangle$ of $\phi$ with itself to be essentially $1$.When so normalized, we will use the symbol sym$^2(f)$ for $\phi$. 
We will appeal to the Fourier expansions in terms of the Whittaker functions to do it. We begin with the general setup. \noindent \emph{Definition 6.3} Let $\phi$ be automorphic form on GL$(n)/\mathbb Q$ generating a unitary, spherical, cuspidal automorphic representation $\Pi$. Say that $\phi$ is \emph{normalized} if we have: $$ \phi (g) \, = \, \sum_{\gamma \in U(n - 1, \mathbb Q) \backslash {\rm GL}(n - 1, \mathbb Q)} W_{\Pi} \left( \begin{pmatrix} \gamma & \phantom{0} \\ \phantom{0} & 1 \end{pmatrix} g \right), \leqno(6.4) $$ where $U(n - 1, \mathbb Q)$ denotes the subgroup of $GL(n - 1, \mathbb Q)$ consisting of upper triangular, unipotent matrices, $W_{\Pi} = \prod_{v} W_{\Pi, v}$ the global Whittaker function whose local components are defined below. (Again, $\Pi$ spherical means that $\Pi_v$ admits, at every place $v$, a non-zero vector invariant under the maximal compact (mod center) subgroup $K_v$, which is GL$(n, \mathfrak \mathbb Z_p)$ when $v$ is $v_p$ for a prime $p$.) \noindent{$(6.5)$} \begin{itemize} \item{$W_{\Pi, p}$ is, for any prime $p$, the unique $K_{p}$-invariant function corresponding to $\Pi_{p}$ normalized so that $W_{\Pi, p} (e) = 1$.} \item{At the archimedean place, $$ W_{\Pi, \infty} = C(\Pi)^{-\frac{1}{2}} W_{n, a}, $$ where $W_{n, a}$ be the normalized spherical function of infinite type $a$ on $GL(n, \mathbb R)$ in the sense of Stade [St2001], and $$ C(\Pi) = L(1, \Pi_{\infty} \times \Pi_{\infty}). $$} \end{itemize} Denote the function so normalized in the space of $\Pi$ by the symbol $\phi(\Pi)$. Now let us get back to our Maass form $g$ for SL$(2, \mathbb Z)$, with associated spherical cuspidal representation $\pi$, resp. ${\rm sym}^{2} (\pi)$, of GL$(2, \mathbb A)$, resp. GL$(3, \mathbb A)$. We set $$ {\rm sym}^2(g) \, = \, \phi({\rm sym}^2(\pi)). \leqno(6.6) $$ Since $g$ has level $1$, one knows that $\lambda > 1/4$ (in fact $> 50$, though we do not need it), so that if we write $$ \lambda \, = \, \frac{1-t^2}{4}, $$ then $t$ is a non-zero real number; so we may choose $t$ to be positive. We have $$ L(s, \pi_\infty) \, = \, \mathbb Gamma_\mathbb R(s+it)\mathbb Gamma_\mathbb R(s-it). $$ Consequently, $$ L(s, {\rm sym}^2(\pi_\infty)) \, = \, \mathbb Gamma_\mathbb R(s+2it)\mathbb Gamma_\mathbb R(s)\mathbb Gamma_\mathbb R(s-2it), $$ and $$ L(s, {\rm sym}^2(\pi_\infty) \times {\rm sym}^2(\pi_\infty)) \, = \, \mathbb Gamma_\mathbb R(s+4it)\mathbb Gamma_\mathbb R(s+2it)^2\mathbb Gamma_\mathbb R(s)^3\mathbb Gamma_\mathbb R(s-2it)^2\mathbb Gamma_\mathbb R(s-4it). $$ Then, since $\mathbb Gamma(1-ait)$ is the complex conjugate of $\mathbb Gamma(1+ait)$ for any real $a$ and since $\mathbb Gamma(1) = 1$, we obtain \noindent{$(6.7)$} \begin{align} C ({\rm sym}^{2} (\pi)) &= L(1, {\rm sym}^{2} (\pi_{\infty}) \times {\rm sym}^{2} (\pi_{\infty})) \notag \\ &= \pi^{- 3} {|\mathbb Gamma(\frac{1}{2} + 2 i t)|}^{2} {|\mathbb Gamma (\frac{1}{2} + i t)|}^{4} \notag \\ &= 1 / (\cosh (2 \pi t) {\cosh (\pi t)}^{2}) \notag \end{align} Recall that Theorem B proves that if $\pi$ is not of {\it solvable polyhedral type}, then $L(s, {\rm sym}^2(\pi) \times {\rm sym}^2(\pi))$ admits no Landau-Siegel zero. To put this to use we need the following \noindent{\bf Proposition 6.8} \, \it If $\pi$ is a spherical cusipdal representation on $GL(2)/\mathbb Q$, then $\pi$ is not of solvable polyhedral type. \rm {\it Proof of Proposition} \, At each prime $p$ (resp. $\infty$) let $\sigma_p$ (resp. $\sigma_\infty$) denote the $2$-dimensional representation of $W'_{\mathbb Q_p}$ (resp. 
$W_\mathbb R$) associated to $\pi_p$ (resp. $\pi_\infty$) by the local Langlands correspondence. By the naturality of this correspondence, we know that the conductors of $\pi_p$ and $\sigma_p$ agree at every $p$. On the other hand, as $\pi$ is spherical, the conductor of $\pi$, which is the product of the conductors of all the $\pi_p$, is trivial. This implies that for every $p$, the conductor of $\sigma_p$, and hence also that of sym$^j(\sigma_p)$ is trivial for any $j \geq 1$. Appealing to the local correspondence again, we see that \noindent{$(6.9)$} \it For any $j \leq 4$, the automorphic representation ${\rm sym}^{j} (\pi)$ is spherical. \rm For the definition of conductors, for any local field $k$, of representations of GL$(n, k)$ admitting a Whittaker model, see [JPSS79]. Assume that $\pi$ is of solvable polyhedral type, i.e., it is either dihedral or tetrahedral or octahedral. Recall that if $\pi$ is {\it dihedral}, then $\pi$ is automorphically induced. i.e. there exists an idele class character $\chi$ of a quadratic field $K$ s.t. $$ \pi \, \simeq \, I^{\mathbb Q}_{K} (\chi). \leqno(6.10) $$ If $\pi$ is {\it tetrahedral}, then by [KSh2000], sym$^2(\pi)$ is cuspidal and moreover, $$ {\rm sym}^{2} (\pi) \, \simeq \, I_{K}^{\mathbb Q} (\chi), \leqno(6.11) $$ for some idele class character $\chi$ of some cyclic extension $K$ of degree $3$ over $\mathbb Q$. If $\pi$ is {\it octahedral}, then by [KSh2001], sym$^3(\pi)$ is cuspidal and $$ {\rm sym}^3(\pi) \, \simeq \, {\rm sym}^3(\pi) \otimes \mu, $$ for some quadratic Dirichlet character $\mu$. This implies, by [AC89], that $$ {\rm sym}^{3} (\pi) \, \simeq \, I_{K}^{\mathbb Q} (\eta), \leqno(6.12) $$ for some cuspidal automorphic representaton $\eta$ of GL$(2, \mathbb A_K)$, with $K$ being the quadratic field associated to $\mu$. In view of (6.9), it suffices to show that some sym$^j(\pi)$ must be ramified, thus giving a contradiction. Thanks to (6.10), (6.11) and (6.12), one is reduced to proving the following \noindent{\bf Lemma 6.13} \, \it Let $K/\mathbb Q$ be a cyclic extension of degree $\ell$, a prime, and let $\eta$ be a cuspidal automorphic representation of GL$(m, \mathbb A_K)$, $m \geq 1$. Then $I_K^\mathbb Q(\eta)$ is ramified at some $p$. \rm One can be much more precise than this, but this crude statement is sufficient for our purposes. However it should be noted that there are polyhedral rpresentations, for example of holomorphic type of weight $1$ for $F = \mathbb Q$, with prime conductor. {\it Proof of Lemma 6.13} \, Put $\Pi = I_K^\mathbb Q(\eta)$. Since $\mathbb Q$ has class number $1$, $K/\mathbb Q$ is ramified. So there exists some prime $p$, and a place $u$ of $K$ above $p$, such that $K_u/\mathbb Q_p$ is ramified of degree $\ell$. The local component $\Pi_p$ is simply $I_{K_u}^{\mathbb Q_p}(\eta_u)$, and it is enough to check that $\Pi_p$ must be ramified. If $\sigma_u$ is the $m$-dimensional representation of $W'_{K_u}$, then the conductor of $\Pi_p$ is the same as that of Ind$_{K_u}^{\mathbb Q_p}(\sigma_u)$. Moreover, $\sigma_u$ is semisimple and its conductor is divisible by that of Ind$_{K_u}^{\mathbb Q_p}(\sigma'_u)$ for any irreducible subrespresentation $\sigma'_u$ of $W'_{K_u}$. So it suffices to prove the following \noindent{\bf Sublemma 6.14} \, \it Let $E/F$ be a cyclic ramified extension of non-archimedean local fields, and let $\sigma$ be an irreducible $m$-dimensional representation of $W'_E$. Then Ind$_E^F(\tau)$ is ramified. 
\rm {\it Proof of Sublemma} \, Since $W'_{E}$ is $W_E \times {\rm SL}(2, \mathbb C)$, the irreducibility hypothesis implies that $$ \sigma \, \simeq \, \tau \otimes {\rm sym}^j(st), $$ for some irreducible $\tau$ of $W_E$ and $j \geq 0$, where $st$ denotes the natural $2$-dimensional represenation of SL$(2, \mathbb C)$. Then $$ {\rm Ind}_E^F(\sigma) \, \simeq \, {\rm Ind}_{W_E}^{W_F}(\tau) \otimes {\rm sym}^j(st). \leqno(6.14) $$ It suffices to prove that ${\rm Ind}_{W_E}^{W_F}(\tau)$ is ramified. Recall that there is a short exact sequence $$ 1 \, \rightarrow \, I_F \, \rightarrow W_F \, \rightarrow \, \mathbb Z \, \rightarrow \, 1, \leqno(6.15) $$ where $I_F$ denotes the inertia subgroup of Gal$(\overline F/F)$. If $\mathbb F_q$ is the residue field of $F$ and $\varphi$ the Frobenius $x \to x^q$, then $W_F$ is just the inverse image of the group of integral powers of the $\varphi$ under the natural map $$ {\rm Gal}(\overline F/F) \, \rightarrow \, {\rm Gal}(\overline \mathbb F_q/\mathbb F_q). $$ Suppose $\rho: = $Ind$_{W_E}^{W_F}(\tau)$ is unramified. Then by definition $I_F$ must act trivially, and since the quotient $W_F/I_F$ is abelian, $\rho$ is forced to be a sum of one dimensional, unramified representations. For this one must have \begin{itemize} \item[(i)] dim$(\tau) \, = \, 1$; \, and \item[(ii)] $\tau^\theta \, \simeq \, \tau$, with $\theta$ denoting the non-trivial automorphism of $E/F$. \end{itemize} Consequently we have $$ \rho \, \simeq \, \oplus_{i=0}^{[E:F]-1} \nu\delta^i, \leqno(6.16) $$ where $\nu$ is a character of $W_F$ extending $\tau$ and $\delta$ the character of $W_F$ associaated to $E/F$.But whatever $nu$ is, $\nu\delta^i$ will necessarily be ramified for some $i$ between $0$ and $[E:F]-1$. Thus $\rho = $Ind$_{W_E}^{W_F}$ is ramified, contradicting the supposition that it is unramified. Done. \qed We have now proved Proposition 6.8. Next we need the following two lemmas. \noindent{\bf Lemma 6.17} \, \it Let $L(s) = \Sigma^{\infty}_{n = 1} \frac{b(n)}{n^{s}}$ be an $L$-series with nonnegative coefficients, with $b(1) = 1$. Assume that $L(s)$ converges for $\mathbb Re s > 1$ with an analytic continuation to $\mathbb Re s > 0$. Let $M > 1$. Suppose that $L(s)$ satisfies the growth condition below on the line $\mathbb Re s = \frac{1}{2}$ \[ |L(\frac{1}{2} + i \gamma)| \le M {(|\gamma| + 1)}^{B} \] for some postive constant $B$. If $L(s)$ has no real zeros in the range \[ 1 - \frac{1}{log M} < s < 1 \] then there exists an effective constant $c = c(B)$ such that \[ \text{\rm Res}_{s = 1} L(s) \ge \frac{c}{\log M} \] \rm For a proof, see [GHLL94]. \noindent{\bf Lemma 6.18} \, \it Let $L(s) = L(s, {\rm sym}^{2}(\pi) \times {\rm sym}^{2}(\pi))$. Then there exist absolute constants $A$ and $B$ such that \[ L(\frac{1}{2} + i \gamma) \le {(\lambda + 1)}^{A} {(|\gamma| + 1)}^{B} \] \rm {\it Proof}. Note that, for any prime $p$, as $\pi_p$ is unramified, the $p$-part of $L(s)$ is the reciprocal of a polynomial in $p^{-s}$ of degree $9$. Let $\alpha_{p}$, $\beta_{p}$ be the coefficients of the Satake representation of $\pi_{p}$. Note that we assume that $\pi$ is self-dual, thus $$ {L_{p} (s)}^{-1} = (1 - \alpha_{p}^{4} p^{-s}) (1 - \beta_{p}^{4} p^{-s}) {(1 - \alpha_{p}^{2} p^{-s})}^{2} {(1 -\beta_{p}^{2} p^{-s})}^{2} {(1 - p^{-s})}^{3} \leqno(6.19) $$ Now apply classical bound $\vert\alpha_p\vert < p^{1/4}$, $\vert\alpha_p\vert < p^{1/4}$ on the coefficients; we know a much stronger bound now (cf. [K], Appendix 2), but the $1/4$ bound suffices for us. 
Then $L(s)$ is bounded by an absolute constant on the line $\mathbb Re (s) = 2$. Also, $L(s)$ satisfies a functional equation relating $s$ and $1 - s$. Thus, we get a bound for $L(s)$ on the line $\mathbb Re (s) = -1$. We claim that the ratio of gamma factors arising from the functional equation is bounded by a certain fixed power of $\lambda$ and the imaginary part of $s$. In fact, the constants giving the infinite type of $L(s)$ are all imaginary as $\lambda > 50$ for Maass forms of kevel $1$. (The constants are real or purely imaginary, and the latter happens iff $\lambda \geq 1/4$, which is a difficult open problem for Maass forms of higher level.). Moreover, the self-duality of $\pi$ implies that the constant set is symmetric about the real axis. So, the norm of the ratio of the gamma factors is a product of a constant and some terms of $|\frac{\mathbb Gamma(1 + i t)}{\mathbb Gamma(- \frac{1}{2} - i t)}|$ where $t$ involves the imaginary part of $s$ and the constants (of the infinity type). Note that $$ \left| \frac{\mathbb Gamma (1 + i t)}{\mathbb Gamma (- \frac{1}{2} - i t)} \right| = \left| \frac{\mathbb Gamma (1 + i t)}{\mathbb Gamma (-\frac{1}{2} + i t)} \right| = {|t|}^{\frac{3}{2}} (1 + O (t^{-1})), \leqno(6.20) $$ since for $a \le \sigma \le b$ we have the estimation. $$ |\mathbb Gamma (s)| = \sqrt{2 \pi} e^{- \frac{\pi}{2}} {|t|}^{\sigma - 1/2} (1 + O (t^{-1})) \leqno(6.21) $$ where the implied constant depends only on $a$ and $b$. Hence the claim. As $\pi$ is spherical, so is ${\rm sym}^{4} (\pi)$. Hence we get $$ L(- 1 + i \gamma) << {(\lambda + 1)}^{A} {(|\gamma| + 1)}^{B} \leqno(6.22) $$ for certain constants $A$ and $B$. Applying the Phragmen-Lindel\"{o}f principle in the strip $- 1 \le \mathbb Re (s) \le 2$, we see that the same bound applies also on the line $\mathbb Re (s) = \frac{1}{2}$. \qedsymbol The following proposition sets up the relationship between the Peterson norm of the normalized automorphic function ${\rm sym}^{2}(g)$ and the residue of a certain $L$-series at $s = 1$. Denote $Z_{n} (\mathbb A)$ the center of $GL(n, \mathbb A)$. Denote $E^{*} (g, h_{s})$ the Eisenstein series, where $h_{s} = \prod_{v} h_{s, v}$ and $h_{s, v}$ is in the space of the induced representation $$ {\rm Ind}_{P(n-1, 1, F_{v})}^{GL(n, F_{v})} (\delta_P^{s}) \leqno(6.23) $$ where $\delta_P$ is the modular quasicharacter of the standard parabolic subgroup $P(\mathbb Q_{v})$ of type $(n-1,1)$, whose whose Levi facor is GL$(n - 1) \times {\rm GL}(1)$. \noindent{\bf Proposition 6.24} \, \it Let $\Pi = \Pi_{\infty} \otimes \Pi_{f}$ be an unramified cusp form on $GL(n, \mathbb A)$, with $\Pi_{\infty}$ a spherical principal series representation with trivial central character. Then $$ \int_{Z_n(\mathbb A) GL(n, \mathbb Q) \backslash GL(n, \mathbb A)} \phi (g) {\overline \phi}) (g) E^{*} (g, h_{s}) dg = \frac{L (s, \Pi_{\infty} \times \Pi_{\infty}) L(s, \Pi_f \times \Pi_f)} {L (1, \Pi_{\infty} \times \Pi_{\infty})}, $$ where $\phi$ is the normalized function in the space of $\Pi$. Furthermore, $$ \langle\,\phi, \phi\,\rangle {\rm Res}_{s = 1} E^{*} (g, h_{s}) \, = \, {\rm Res}_{s = 1} L(s, \Pi \times \Pi) $$ \rm {\it Proof of Propsition 6.24} Let us study the integral $$ I = \int_{Z_{n}(\mathbb A) GL(n, \mathbb Q) \backslash GL(n, \mathbb A)} \phi (g) \phi (g) E^{*} (g, h_{s}) dg. 
\leqno(6.25) $$ By the Rankin-Selberg unfolding method, we have $$ I = \prod_{v} \Psi(v, W_{\phi, v}, W_{\phi, v}, h_{s, v}) \leqno(6.26) $$ where $$ I_{v} = \Psi(v, W, W, h_{s, v}) = \int_{Z_{n} (\mathbb Q_{v}) X_{n} (\mathbb Q_{v}) \backslash GL_{n} (\mathbb Q_{v})} W (g) W (g) h_{s} (g) dg \leqno(6.27) $$ Here $X_{n}$ denotes the subgroup of the upper triangular, unipotent matrices, and $W_{\phi, v}$ is a Whittaker function for $\Pi_{v}$. By Jacquet-Shalika [JS81], when we choose $\phi$ to be a new vector, this local integral $I_{v}$ equals to $L(s, \Pi_{v} \times \Pi_{v})$ when $v$ is nonarchimedean. When $v$ is archimedean, we appeal to the work of Stade ([St93], [St2001]) and obtain $$ I_{v} \, = \, {C (\Pi)}^{-1} L(s, \Pi_{v} \times \Pi_{v}) ,\leqno(6.28) $$ where $C (\Pi) = L(1, \Pi_{\infty} \times \Pi_{\infty})$. It appears that such a result has also been obtained by Jacquet and Shalika in the spherical case. In the non-spherical case, they can prove only that the $L$-factor is a fiite linear combination of such integrals. Thus $I$ is in fact the same as the quotient of the complete $L$-series for $\Pi \times \Pi$ by $L(1, \Pi_{\infty} \times \Pi_{\infty})$. Note that the Whitaker function at infinity we take here differs from the standard one used by Stade in [St2001] by the factor $C (\Pi)^{-1/2}$. Now take the residue at $s=1$ on both sides, and note that ${\rm Res}_{s = 1} E^{*}(g, h_{s})$ is a positive constant independent of $g$. Hence the Proposition. \qedsymbol {\it Proof of Corollary C} (contd). Since sym$^2(\pi)$ is spherical in our case, we may apply Proposition 6.24 with $\Pi = {\rm sym}^{2}(\pi)$ and get $$ (\,{\rm sym}^{2} (f), {\rm sym}^{2} (f)\,) = {C}^{-1} {\rm Res}_{s = 1} L(s, {\rm sym}^{2} (\pi) \times {\rm sym}^{2} (\pi)) \leqno(6.29) $$where $C = {\rm Res}_{s = 1} E^{*}(g, h_{s})$. The right side of the corollary is easy since $$ {\rm Res}_{s = 1} L(s, {\rm sym}^{2} (\pi) \times {\rm sym}^{2} (\pi)) = L(1, {\rm sym}^{2} (\pi)) L(1, {\rm sym}^{4} (\pi)) \leqno(6.30) $$ which is bounded by any arbituary power of $1 + \lambda$. (See [HRa95]) To prove the left side, it suffices to show that $$ {\rm Res}_{s = 1} L(s, {\rm sym}^{2} (\pi) \times {\rm sym}^{2} (\pi)) >> {(\log (1 + \lambda))}^{-1} \leqno(6.31) $$ For this apply Lemmas 6.17 and 6.18, with $L(s) = L(s, {\rm sym}^{2} (\pi) \times {\rm sym}^{2} (\pi))$ and $M = {(\lambda + 1)}^{B'}$ for suitably large constant $B'$. It remains to prove the asserted bound on the first Fourier coefficient of the spectral normalization of sym$^2(g)$. Put $$ \mathbb Gamma \, = \, {\rm GL}(3, \mathbb Z), \leqno(6.32) $$ $$ \mathbb Gamma_0 \, = \, \{ \gamma = (\gamma_{ij}) \in \mathbb Gamma \, \vert \, \gamma_{31} = \gamma_{32} = 0, \, \gamma_{33} = 1\}, $$ and $$ \mathbb Gamma_\infty \, = \, \{ \gamma = (\gamma_{ij}) \in \mathbb Gamma \, \vert \, \gamma_{ij} = 0 \, {\rm if} \, i > j\}. $$ Recall that the cusp form sym$^2(g)$ on GL$(3)/\mathbb Q$ being of spherical type defines, and is determined by, a function, again denoted by sym$^2(g)$, on the double coset space $$ \mathbb Gamma\backslash {\rm GL}(3, \mathbb R)/Z_\mathbb R{\rm O}(3), \leqno(6.33) $$ where $Z_\mathbb R$ is the center of GL$(3, \mathbb R)$. Define the {\it spectrallly normalized} function in the space of sym$^2(\pi)$ to be $$ {\rm sym}^2(g)^{\rm spec} \, = \, {\rm sym}^2(g)/||{\rm sym}^2(g)||, \leqno(6.34) $$ with $|| \, . \, ||$ denoting (as usual) the $L^2$-norm given by $\langle \, . \, , \, . \, \rangle^{1/2}$. 
The adelic Fourier expansion (6.4) gives rise to the following explicit expansion (see [Bu89], page 71, formula (2.1.6)) as a function of GL$(3, \mathbb R)$: $$ {\rm sym}^2(g)(x) \, = \, \sum\limits_{(m,n) \ne (0,0)} \sum\limits_{\gamma \in \mathbb Gamma_\infty\backslash \mathbb Gamma_0} \, \frac{a(m,n)({\rm sym}^2(g))}{mn} W_\infty\left( \begin{pmatrix} mn & 0 & 0\\ 0 & n & 0 \\ 0 & 0 & 1 \end{pmatrix} \gamma x\right). \leqno(6.35) $$ The coefficients $a(m,n)({\rm sym}^2(g))$ are bimultiplicative, implying in particular that the first coefficient $a(1,1)({\rm sym}^2(g))$ is equal to $1$. Consequently, $$ a(1,1) : \, = \, a(1,1)({\rm sym}^2(g)^{\rm spec}) \, = \, \frac{1}{||{\rm sym}^2(g)||}, \leqno(6.36) $$ and $$ |a(1,1)|^2\langle {\rm sym}^2(g)\, , \, {\rm sym}^2(g)\rangle \, = \, 1. $$ Hence the bound on $|a(1,1)|$ follows from the bound proved above for $\langle {\rm sym}^2(g)\, , \, {\rm sym}^2(g)\rangle$. Done. \qedsymbol \vskip 0.2in \section{\bf Proof of Theorem D} Let $\pi$ be a cuspidal automorphic representation of GL$(2, \mathbb A_F)$ of trivial central character. Denote by $S$ the union of the set $S_\infty$ of archimedean places of $F$ with the set of finite places where $\pi$ is ramified. Given any Euler product $L(s) = \prod_v L_v(s)$ over $F$, we will write $L^S(s)$ to mean the (incomplete Euler) product of $L_v(s)$ over all $v$ outside $S$. Next recall (see section 3) that for every $j \leq 4$, there is an isobaric auutomorphic representation sym$^j(\pi)$ of GL$(j+1, \mathbb A_F)$, established long ago for $j=2$ by S. Gelbart and H. Jacquet [GJ77], and very recently for $j =3$, resp. $j=4$, by H. Kim and F. Shahidi ([KSh2000]), resp. H. Kim ([K2000]), such that $$ L(s, {\rm sym}^j(\pi)) \, = \, L(s, \pi, {\rm sym}^j). $$ \noindent{\bf Lemma 7.1} \, \it Let $T$ be any finite set of places. Then we have the following factorizations of incomplete $L$-functions: $$ L^T(s, {\rm sym}^3(\pi), {\rm sym}^2) \, = \, L^T(s, \pi, {\rm sym}^6)L^T(s, {\rm sym}^2(\pi)) \leqno(i) $$ and $$ L^T(s, {\rm sym}^4(\pi), {\rm sym}^2) \, = \, L^T(s, \pi, {\rm sym}^8)L^T(s, {\rm sym}^4(\pi))\zeta_F^T(s). \leqno(ii) $$ \rm {\it Proof} \, It suffices to prove these locally at every place otside $T$. But at any $v$, we have by definition, $$ L(s, {\rm sym}^3(\pi_v), {\rm sym}^2) \, = \, L(s, \Lambda^2({\rm sym}^4(\sigma_v))), \leqno(7.2) $$ and $$ L(s, {\rm sym}^4(\pi_v), {\rm sym}^2) \, = \, L(s, {\rm sym}^2({\rm sym}^4(\sigma_v))), $$ where $\sigma_v$ is the $2$-dimensional representation of $W_{F_v}$, resp. $W'_{F_v}$, associated to $\pi_v$ by the local correspondence for $v$ archimedean, resp. non-archimedean. By the Clebsch-Gordon identities, we have $$ {\rm sym}^2({\rm sym}^3(\sigma_v)) \, \simeq \, {\rm sym}^6(\sigma_v) \oplus {\rm sym}^2(\sigma_v), \leqno(7.3) $$ and $$ {\rm sym}^2({\rm sym}^4(\sigma_v)) \, \simeq \, {\rm sym}^8(\sigma_v) \oplus {\rm sym}^4(\sigma_v) \oplus 1. $$ The assertion of the Lemma now follows. \qed \noindent{\bf Lemma 7.4} \, \it Let $\pi$ be a cuspidal automorphic representation of GL$(2, \mathbb A_F)$ with trivial central character, and let $v$ be a place where $\pi_v$ is a ramified, non-tempered principal series representatin. Then ${\rm sym}^4(\pi)$ is unramified at $v$. \rm {\it Proof}. \, As $\pi_v$ is a ramified principal series representation of trivial central character, we must have $$ \pi_v \, \simeq \, \mu_v \boxplus \mu_v^{-1}, \leqno(7.5) $$ for a ramified (quasi-)character $\mu_v$ of $F_v^\ast$. 
Since $\pi_v$ is non-tempered, we may write, after possibly interchnging $\mu_v$ and $\mu_v^{-1}$, $$ \mu_v \, = \, \nu_v\vert . \vert_v^t, \leqno(7.6) $$ for a unitary character $\nu_v$ of $F_v^\ast$ and a real number $t > 0$. ($\vert . \vert_v$ denotes as usual the normalizeed absolute value on $F_v$.) On the other hand, the unitarity of $\pi_v$ says that its complex conjugate representation $\overline \pi_v$ is isomorphic to the contragredient $\pi_v^\vee$. This forces the identity $$ \nu_v \, = \, \overline \nu_v. $$ Since $\nu_v$ is unitary, we get $$ \nu_v^2 \, = \, 1 \quad {\rm and} \quad \pi_v \, \simeq \, \nu_v \otimes \pi^0_v, \leqno(7.7) $$ where $$ \pi_v^0 \, \simeq \, \vert . \vert_v^t \boxplus \vert . \vert_v^{-1}. $$ Then the associated $2$-dimensional Weil group representation $\sigma_v$ is of the form $\nu_v \otimes \sigma_v^0$, with $\sigma_v^0$ corresponding to $\pi_v^0$. Moreover, since $\nu_v$ is quadratic, we have for any $j \geq 1$, $$ {\rm sym}^{2j}(\sigma_v) \, \simeq \, {\rm sym}^{2j}(\sigma_v^0), \leqno(7.8) $$ which is unramified. Since by [K2000], sym$^4(\pi)_v$ corresponds to sym$^4(\sigma_v)$ (at every place $v$), we see that it must be unramified as claimed. \qed \noindent{\bf Lemma 7.9} \, \it Let $\pi$ be a cuspidal automorphic representation of GL$(2, \mathbb A_F)$, and $v$ a place of $F$ where $\pi_v$ is tempered. Then for any $j \geq 1$, the local factor $L(s, \pi_v, {\rm sym}^j)$ is holomorphic in $\mathbb Re(s) > 1/2$ except for a possible pole at $s=1$.. \rm {\it Proof}. \, If $v$ is archimedean, or if $v$ is finite but $\pi_v$ is not special, $\pi_v$ corresponds to a $2$-dimensional representation $\sigma_v$ of the local Weil group $W_{F_v}$. The temperedness of $\pi_v$ implies that $\sigma_v$ has bounded image in GL$(2, \mathbb C)$. Then for any finite-dimensional $\mathbb C$-representation $r$ of dimension $N$, in particular for sym$^j$, of GL$(2, \mathbb C)$, the image of $r(\sigma_v)$ will be bounded, and this implies that the admissible, irreducible representation $\Pi_v$ of GL$(N, F_v)$, associated to $r(\sigma_v)$ by the local Langlands correspondence, is tempered. Then $L(s, \Pi_v)$ is holomorphic in $\mathbb Re(s) > 1/2$ except for a possible pole at $s=1$ (see [BaR94]). (One can also prove directly, using the extension in [De73] of Brauer's theorem to the representations of $W_{F_v}$, that $L(s, r(\sigma_v)$ has the requisite property.) We are now done in this case because $L(s, \pi_v; {\rm sym}^j)$ is defined to be $L(s, {\rm sym}^j(\sigma_v))$. So we may take $v$ to be finite and assume that $\pi_v$ is a special representaton $sp(\mu_v)$ (see [HRa95], setion 2 for notation), associated to the partition $2 = 1 + 1$ and a (unitary) character $\mu_v$ of $F_v^\ast$. Then the associated $\sigma_v$ is of the form $(w,g) \to \mu_v(w) \otimes g$, for all $w$ in $W_{F_v}$ and $g$ in SL$(2, \mathbb C)$. So we have $$ {\rm sym}^j(\sigma_v) \, \simeq \, \mu_v^j \otimes {\rm sym}^j, $$ which corresponds to the special representation $sp(\mu_v^j)$ of GL$(j+1, F_v)$ associated to the partition $j+1 = 1 + \ldots + 1$ and the character $\mu_v^j$. Now we may appeal to the fact (see [BaR94]) that for any unitary character $\nu_v$, the function $L(s, sp(\nu_v))$ is holomorphic in $\mathbb Re(s) > 0$. \qed Having established these preliminary lemmas, we are ready to begin the {\it proof of Theorem D}. Let $S$ denote the union of the archimedean places of $F$ with the set of finite places $v$ where $\pi_v$ is ramified {\it and} tempered. 
Having established these preliminary lemmas, we are ready to begin the {\it proof of Theorem D}. Let $S$ denote the union of the archimedean places of $F$ with the set of finite places $v$ where $\pi_v$ is ramified {\it and} tempered. In view of Lemma 7.9, it suffices to show the following

\noindent{\bf Proposition 7.10} \, \it The incomplete $L$-function $L^S(s, \pi; {\rm sym}^6)$ is holomorphic in the real interval $(1-\frac{c}{\log M}, 1)$ for a positive, effective constant $c$ independent of $\pi$, with $M$ denoting the thickened conductor of $\pi$. The same result holds for the symmetric $8$th power $L$-function if $F$ is a Galois extension of $\mathbb Q$ not containing any quadratic extension of $\mathbb Q$. \rm

{\it Proof}. \, When $\pi$ is of solvable polyhedral type, the results of Kim and Shahidi in [KSh2001] imply that $L^S(s, \pi; {\rm sym}^j)$ is holomorphic in $(1/2,1)$ for any $j \leq 9$. So we may assume that we are not in this case, so that sym$^4(\pi)$ is a cuspidal automorphic representation of GL$(5, \mathbb A_F)$. By the definition of $S$, given any place $v$ outside $S$, $\pi_v$ is either unramified or a ramified, non-tempered principal series representation. Thanks to Lemma 7.4, sym$^4(\pi_v)$ is unramified in either case. Appealing to the work of Bump-Ginzburg ([BuG92]) on symmetric square $L$-functions, we get the holomorphy in $(1/2, 1)$ of the incomplete $L$-functions $L^S(s, {\rm sym}^4(\pi); \Lambda^2)$ and $L^S(s, {\rm sym}^4(\pi); {\rm sym}^2)$. Next we appeal to the identities of Lemma 7.1 with $T = S$.

The assertion of the Proposition is then clear for the symmetric $6$-th power $L$-function, since $L^S(s, {\rm sym}^2(\pi))$ admits no Landau-Siegel zero by [GHLL94]: if $L^S(s, \pi, {\rm sym}^6)$ had a pole at some $\beta$ in the asserted interval, then, as $L^S(\beta, {\rm sym}^2(\pi)) \ne 0$, the left hand side of identity (i) would also have a pole at $\beta$, contradicting its holomorphy in $(1/2,1)$.

So let us turn our attention to the (incomplete) symmetric $8$-th power $L$-function of $\pi$. By the identity (ii) of Lemma 7.1, it suffices to show that $L^S(s, {\rm sym}^4(\pi))\zeta^S_F(s)$ admits no Landau-Siegel zero. Since $F$ is by hypothesis a Galois extension of $\mathbb Q$ not containing any quadratic field, one knows by Stark ([Stk74]) that $\zeta_F^S(s)$ admits no Landau-Siegel zero. So we are finally done by our proof of Theorem B, where we showed that $L^S(s, {\rm sym}^4(\pi))$ admits no Landau-Siegel zero. Strictly speaking, we showed it for the full $L$-function; but the local factors at $S$, being tempered, do not have any pole in $(1/2,1)$.
\qed

\vskip 0.2in

\section*{\bf Bibliography}

\begin{description}
\item[{[AC89]}] J.~Arthur and L.~Clozel, {\it Simple Algebras, Base Change and the Advanced Theory of the Trace Formula}, Ann. Math. Studies {\bf 120} (1989), Princeton, NJ.
\item[{[Ba97]}] W.~Banks, \emph{Twisted symmetric-square $L$-functions and the nonexistence of Siegel zeros on ${\rm GL}(3)$}, Duke Math. J. {\bf 87} (1997), no. 2, 343--353.
\item[{[BaR94]}] L.~Barthel and D.~Ramakrishnan, \emph{A non-vanishing result for twists of $L$-functions of GL$(n)$}, Duke Math. Journal {\bf 74}, no. 3 (1994), 681--700.
\item[{[Bu89]}] D.~Bump, \emph{The Rankin-Selberg method: A survey}, in {\it Number theory, trace formulas and discrete groups} (Oslo, 1987), 49--109, Academic Press, Boston, MA (1989).
\item[{[BuG92]}] D.~Bump and D.~Ginzburg, \emph{Symmetric square $L$-functions on ${\rm GL}(r)$}, Ann. of Math. (2) {\bf 136}, no. 1, 137--205 (1992).
\item[{[CoPS94]}] J.~Cogdell and I.~Piatetski-Shapiro, \emph{Converse Theorems for $GL_n$}, Publications Math. IHES {\bf 79} (1994), 157--214.
\item[{[De73]}] P.~Deligne, \emph{Les constantes des \'equations fonctionnelles des fonctions $L$}, in {\it Modular functions of one variable} II, Springer Lecture Notes {\bf 349} (1973), 501--597.
\item[{[Ge75]}] S.~Gelbart, {\it Automorphic forms on adele groups}, Annals of Math.
Studies {\bf 83} (1975), Princeton.
\item[{[GJ79]}] S.~Gelbart and H.~Jacquet, \emph{A relation between automorphic representations of GL$(2)$ and GL$(3)$}, Ann. Scient. \'Ec. Norm. Sup. (4) {\bf 11} (1979), 471--542.
\item[{[GHLL94]}] D.~Goldfeld, J.~Hoffstein and D.~Lieman, \emph{An effective zero-free region}, Ann. of Math. {\bf 140} (1994), appendix to [HL94].
\item[{[GS2000]}] A.~Granville and H.~Stark, \emph{$abc$ implies no ``Siegel zeros'' for $L$-functions of characters with negative discriminant}, Inventiones Math. {\bf 139} (2000), no. 3, 509--523.
\item[{[HaT2000]}] M.~Harris and R.~Taylor, \emph{On the geometry and cohomology of some simple Shimura varieties}, preprint (2000), to appear in the Annals of Math. Studies, Princeton.
\item[{[He2000]}] G.~Henniart, \emph{Une preuve simple des conjectures de Langlands pour ${\rm GL}(n)$ sur un corps $p$-adique}, Invent. Math. {\bf 139}, no. 2, 439--455 (2000).
\item[{[HL94]}] J.~Hoffstein and P.~Lockhart, \emph{Coefficients of Maass forms and the Siegel zero}, Ann. of Math. (2) {\bf 140} (1994), 161--181.
\item[{[HRa95]}] J.~Hoffstein and D.~Ramakrishnan, \emph{Siegel Zeros and Cusp Forms}, IMRN (1995), no. 6, 279--308.
\item[{[IwS2000]}] H.~Iwaniec and P.~Sarnak, \emph{The non-vanishing of central values of automorphic $L$-functions and Landau-Siegel zeros}, Israel Journal of Math. {\bf 120} (2000), part A, 155--177.
\item[{[JPSS79]}] H.~Jacquet, I.~Piatetski-Shapiro and J.A.~Shalika, \emph{Automorphic forms on ${\rm GL}(3)$, II}, Ann. of Math. (2) {\bf 109}, no. 2, 213--258 (1979).
\item[{[JPSS83]}] H.~Jacquet, I.~Piatetski-Shapiro and J.A.~Shalika, \emph{Rankin-Selberg convolutions}, Amer. J. of Math. {\bf 105} (1983), 367--464.
\item[{[JS81]}] H.~Jacquet and J.A.~Shalika, \emph{Euler products and the classification of automorphic forms} I \& II, Amer. J. of Math. {\bf 103} (1981), 499--558 \& 777--815.
\item[{[JS90]}] H.~Jacquet and J.A.~Shalika, \emph{Rankin-Selberg convolutions: archimedean theory}, in {\it Piatetski-Shapiro Festschrift}, Israel Math. Conf. Proc., Part II, 125--207, The Weizmann Science Press of Israel (1990).
\item[{[K2000]}] H.~Kim, \emph{Functoriality of the exterior square of GL$_4$ and the symmetric fourth of GL$_2$}, preprint (2000).
\item[{[KSh2000]}] H.~Kim and F.~Shahidi, \emph{Functorial products for GL$(2) \times $GL$(3)$ and the symmetric cube for GL$(2)$}, preprint (2000), to appear in Annals of Math.
\item[{[KSh2001]}] H.~Kim and F.~Shahidi, \emph{Cuspidality of symmetric powers with applications}, preprint (2001), to appear in the Duke Journal of Math.
\item[{[La70]}] R.P.~Langlands, \emph{Problems in the theory of automorphic forms}, in {\it Lectures in modern analysis and applications III}, Lecture Notes in Math. {\bf 170} (1970), Springer-Verlag, Berlin, 18--61.
\item[{[La79]}] R.P.~Langlands, \emph{On the notion of an automorphic representation. A supplement}, in {\it Automorphic forms, Representations and $L$-functions}, ed. by A. Borel and W. Casselman, Proc. Symp. Pure Math. {\bf 33}, part 1, 203--207, AMS, Providence (1979).
\item[{[La80]}] R.P.~Langlands, {\it Base change for GL$(2)$}, Annals of Math. Studies {\bf 96}, Princeton (1980).
\item[{[MW89]}] C.~Moeglin and J.-L.~Waldspurger, \emph{Poles des fonctions $L$ de paires pour GL$(N)$}, Appendice, Ann. Sci. \'Ecole Norm. Sup. (4) {\bf 22}, 667--674 (1989).
\item[{[Mo85]}] C.J.~Moreno, \emph{Analytic proof of the strong multiplicity one theorem}, Amer. J. Math. {\bf 107} (1985), no. 1, 163--206.
\item[{[Mu94]}] M.~Ram Murty, \emph{Selberg's conjectures and Artin $L$-functions}, Bull. Amer. Math. Soc. (N.S.) {\bf 31} (1994), no. 1, 1--14.
\item[{[PPS89]}] S.~J.~Patterson and I.~Piatetski-Shapiro, \emph{The symmetric-square $L$-function attached to a cuspidal automorphic representation of ${\rm GL}_3$}, Math. Ann. {\bf 283} (1989), no. 4, 551--572.
\item[{[Ra99]}] D.~Ramakrishnan, \emph{Landau--Siegel Zeros and Cusp Forms}, IAS Lecture (1999), preprint on www.math.caltech.edu/people/dinakar.html.
\item[{[Ra2000]}] D.~Ramakrishnan, \emph{Modularity of the Rankin-Selberg $L$-series, and Multiplicity one for SL$(2)$}, Annals of Mathematics {\bf 152} (2000), 45--111.
\item[{[Sh88]}] F.~Shahidi, \emph{On the Ramanujan conjecture and the finiteness of poles for certain $L$-functions}, Ann. of Math. (2) {\bf 127} (1988), 547--584.
\item[{[Sh90]}] F.~Shahidi, \emph{A proof of the Langlands conjecture on Plancherel measures; Complementary series for $p$-adic groups}, Ann. of Math. {\bf 132} (1990), 273--330.
\item[{[St93]}] E.~Stade, \emph{Hypergeometric series and Euler factors at infinity for $L$-functions on GL$(3, \mathbb R) \times $GL$(3, \mathbb R)$}, American Journal of Math. {\bf 115}, no. 2 (1993), 371--387.
\item[{[St2001]}] E.~Stade, \emph{Archimedean $L$-Factors on $GL(n) \times GL(n)$ and Generalized Barnes Integrals}, to appear.
\item[{[Stk74]}] H.~Stark, \emph{Some effective cases of the Brauer-Siegel theorem}, Inventiones Math. {\bf 23} (1974), 135--152.
\end{description}

\vskip 0.3in

Dinakar Ramakrishnan \qquad \qquad Song Wang

\vskip 0.2in

\end{document}